* [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver
@ 2015-01-30  5:07 Chen Jing D(Mark)
  2015-01-30  5:07 ` [dpdk-dev] [PATCH 01/18] fm10k: add base driver Chen Jing D(Mark)
                   ` (18 more replies)
  0 siblings, 19 replies; 147+ messages in thread
From: Chen Jing D(Mark) @ 2015-01-30  5:07 UTC (permalink / raw)
  To: dev

From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>

This patch set adds a poll mode driver for the host interface of the Intel
Red Rock Canyon silicon, which integrates NIC and switch functionalities.

The patch set includes the following features:
1. Basic RX/TX functions for PF/VF.
2. Interrupt handling mechanism for PF/VF.
3. Per-queue start/stop functions for PF/VF.
4. Mailbox handling between PF/VF and PF/Switch Manager.
5. Receive Side Scaling (RSS) for PF/VF.
6. Scatter receive function for PF/VF.
7. RETA update/query for PF/VF.
8. VLAN filter set for PF.
9. Link status query for PF/VF.

Jeff Shaw (18):
  fm10k: add base driver
  Change config/ files to add macros for fm10k
  fm10k: Add empty fm10k files
  fm10k: add fm10k device id
  fm10k: Add code to register fm10k pmd PF driver
  fm10k: add reta update/requery functions
  fm10k: add rx_queue_setup/release function
  fm10k: add tx_queue_setup/release function
  fm10k: add RX/TX single queue start/stop function
  fm10k: add dev start/stop functions
  fm10k: add receive and transmit function
  fm10k: add PF RSS support
  fm10k: Add scatter receive function
  fm10k: add function to set vlan
  fm10k: Add SRIOV-VF support
  fm10k: add PF and VF interrupt handling function
  Change lib/Makefile to add fm10k driver into compile list.
  Change mk/rte.app.mk to add fm10k lib into link

 config/common_bsdapp                            |    9 +
 config/common_linuxapp                          |    9 +
 lib/Makefile                                    |    1 +
 lib/librte_eal/common/include/rte_pci_dev_ids.h |   22 +
 lib/librte_pmd_fm10k/Makefile                   |   96 +
 lib/librte_pmd_fm10k/SHARED/fm10k_api.c         |  327 ++++
 lib/librte_pmd_fm10k/SHARED/fm10k_api.h         |   60 +
 lib/librte_pmd_fm10k/SHARED/fm10k_common.c      |  573 ++++++
 lib/librte_pmd_fm10k/SHARED/fm10k_common.h      |   52 +
 lib/librte_pmd_fm10k/SHARED/fm10k_mbx.c         | 2186 +++++++++++++++++++++
 lib/librte_pmd_fm10k/SHARED/fm10k_mbx.h         |  329 ++++
 lib/librte_pmd_fm10k/SHARED/fm10k_osdep.h       |  116 ++
 lib/librte_pmd_fm10k/SHARED/fm10k_pf.c          | 1877 +++++++++++++++
 lib/librte_pmd_fm10k/SHARED/fm10k_pf.h          |  152 ++
 lib/librte_pmd_fm10k/SHARED/fm10k_tlv.c         |  914 ++++++++++
 lib/librte_pmd_fm10k/SHARED/fm10k_tlv.h         |  199 ++
 lib/librte_pmd_fm10k/SHARED/fm10k_type.h        |  925 ++++++++++
 lib/librte_pmd_fm10k/SHARED/fm10k_vf.c          |  586 ++++++
 lib/librte_pmd_fm10k/SHARED/fm10k_vf.h          |   91 +
 lib/librte_pmd_fm10k/fm10k.h                    |  293 +++
 lib/librte_pmd_fm10k/fm10k_ethdev.c             | 1846 +++++++++++++++
 lib/librte_pmd_fm10k/fm10k_logs.h               |   66 +
 lib/librte_pmd_fm10k/fm10k_rxtx.c               |  427 +++++
 mk/rte.app.mk                                   |    4 +
 24 files changed, 11160 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_pmd_fm10k/Makefile
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_api.c
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_api.h
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_common.c
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_common.h
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_mbx.c
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_mbx.h
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_osdep.h
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_pf.c
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_pf.h
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_tlv.c
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_tlv.h
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_type.h
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_vf.c
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_vf.h
 create mode 100644 lib/librte_pmd_fm10k/fm10k.h
 create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c
 create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h
 create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c

--
1.7.7.6

^ permalink raw reply	[flat|nested] 147+ messages in thread
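[Editor's sketch, not part of the patch set.] The cover letter lists what the PMD provides but not how it is reached; the minimal sketch below shows an application exercising the new driver through DPDK's generic ethdev API, which is where the rx/tx and dev_ops callbacks added by this series plug in. The port number, queue and pool sizes, and the RSS-on-IP configuration are illustrative assumptions, and exact struct fields vary with the DPDK version.

#include <stdint.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_ethdev.h>

#define NB_MBUF  8192
#define BURST_SZ   32

/* RSS on IP traffic, matching feature 5 of the cover letter; everything
 * else is left at driver defaults. */
static const struct rte_eth_conf port_conf = {
	.rxmode = { .mq_mode = ETH_MQ_RX_RSS },
	.rx_adv_conf = { .rss_conf = { .rss_hf = ETH_RSS_IP } },
};

int
main(int argc, char **argv)
{
	struct rte_mempool *mp;
	struct rte_mbuf *bufs[BURST_SZ];
	uint8_t port = 0;	/* assumed to be the fm10k PF or VF */
	uint16_t nb_rx, nb_tx;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	mp = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 256, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
	if (mp == NULL)
		return -1;

	/* one RX and one TX queue; the PMD's configure, queue setup and
	 * start callbacks added by this series run behind these calls */
	if (rte_eth_dev_configure(port, 1, 1, &port_conf) < 0 ||
	    rte_eth_rx_queue_setup(port, 0, 512, rte_socket_id(), NULL, mp) < 0 ||
	    rte_eth_tx_queue_setup(port, 0, 512, rte_socket_id(), NULL) < 0 ||
	    rte_eth_dev_start(port) < 0)
		return -1;

	for (;;) {
		/* trivial echo loop using the new receive/transmit paths */
		nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SZ);
		if (nb_rx == 0)
			continue;
		nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);
		while (nb_tx < nb_rx)
			rte_pktmbuf_free(bufs[nb_tx++]);
	}
	return 0;
}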
* [dpdk-dev] [PATCH 01/18] fm10k: add base driver
  2015-01-30  5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark)
@ 2015-01-30  5:07 ` Chen Jing D(Mark)
  2015-02-04 10:40   ` [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark)
  2015-01-30  5:07 ` [dpdk-dev] [PATCH 02/18] Change config/ files to add macros for fm10k Chen Jing D(Mark)
                   ` (17 subsequent siblings)
  18 siblings, 1 reply; 147+ messages in thread
From: Chen Jing D(Mark) @ 2015-01-30  5:07 UTC (permalink / raw)
  To: dev

From: Jeff Shaw <jeffrey.b.shaw@intel.com>

The base driver is developed and maintained by the Intel ND team and
provides the basic functional services for the Intel Red Rock Canyon
silicon. Suggestions for bug fixes and improvements within this
directory are welcome, but any changes must be made and released
through that team.

Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
---
 lib/librte_pmd_fm10k/SHARED/fm10k_api.c    |  327 +++++
 lib/librte_pmd_fm10k/SHARED/fm10k_api.h    |   60 +
 lib/librte_pmd_fm10k/SHARED/fm10k_common.c |  573 ++++++++
 lib/librte_pmd_fm10k/SHARED/fm10k_common.h |   52 +
 lib/librte_pmd_fm10k/SHARED/fm10k_mbx.c    | 2186 ++++++++++++++++++++++
 lib/librte_pmd_fm10k/SHARED/fm10k_mbx.h    |  329 +++++
 lib/librte_pmd_fm10k/SHARED/fm10k_osdep.h  |  116 ++
 lib/librte_pmd_fm10k/SHARED/fm10k_pf.c     | 1877 ++++++++++++++++++++
 lib/librte_pmd_fm10k/SHARED/fm10k_pf.h     |  152 ++
 lib/librte_pmd_fm10k/SHARED/fm10k_tlv.c    |  914 ++++++++++++
 lib/librte_pmd_fm10k/SHARED/fm10k_tlv.h    |  199 +++
 lib/librte_pmd_fm10k/SHARED/fm10k_type.h   |  925 ++++++++++++
 lib/librte_pmd_fm10k/SHARED/fm10k_vf.c     |  586 ++++++++
 lib/librte_pmd_fm10k/SHARED/fm10k_vf.h     |   91 ++
 14 files changed, 8387 insertions(+), 0 deletions(-)
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_api.c
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_api.h
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_common.c
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_common.h
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_mbx.c
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_mbx.h
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_osdep.h
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_pf.c
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_pf.h
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_tlv.c
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_tlv.h
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_type.h
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_vf.c
 create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_vf.h
diff --git a/lib/librte_pmd_fm10k/SHARED/fm10k_api.c b/lib/librte_pmd_fm10k/SHARED/fm10k_api.c new file mode 100644 index 0000000..56f2914 --- /dev/null +++ b/lib/librte_pmd_fm10k/SHARED/fm10k_api.c @@ -0,0 +1,327 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3.
Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_api.h" +#include "fm10k_common.h" + +/** + * fm10k_set_mac_type - Sets MAC type + * @hw: pointer to the HW structure + * + * This function sets the mac type of the adapter based on the + * vendor ID and device ID stored in the hw structure. + **/ +s32 fm10k_set_mac_type(struct fm10k_hw *hw) +{ + s32 ret_val = FM10K_SUCCESS; + + DEBUGFUNC("fm10k_set_mac_type"); + + if (hw->vendor_id != FM10K_INTEL_VENDOR_ID) { + ERROR_REPORT2(FM10K_ERROR_UNSUPPORTED, + "Unsupported vendor id: %x\n", hw->vendor_id); + return FM10K_ERR_DEVICE_NOT_SUPPORTED; + } + + switch (hw->device_id) { + case FM10K_DEV_ID_PF: + hw->mac.type = fm10k_mac_pf; + break; + case FM10K_DEV_ID_VF: + hw->mac.type = fm10k_mac_vf; + break; + default: + ret_val = FM10K_ERR_DEVICE_NOT_SUPPORTED; + ERROR_REPORT2(FM10K_ERROR_UNSUPPORTED, + "Unsupported device id: %x\n", + hw->device_id); + break; + } + + DEBUGOUT2("fm10k_set_mac_type found mac: %d, returns: %d\n", + hw->mac.type, ret_val); + + return ret_val; +} + +/** + * fm10k_init_shared_code - Initialize the shared code + * @hw: pointer to hardware structure + * + * This will assign function pointers and assign the MAC type and PHY code. + * Does not touch the hardware. This function must be called prior to any + * other function in the shared code. The fm10k_hw structure should be + * memset to 0 prior to calling this function. The following fields in + * hw structure should be filled in prior to calling this function: + * hw_addr, back, device_id, vendor_id, subsystem_device_id, + * subsystem_vendor_id, and revision_id + **/ +s32 fm10k_init_shared_code(struct fm10k_hw *hw) +{ + s32 status; + + DEBUGFUNC("fm10k_init_shared_code"); + + /* Set the mac type */ + fm10k_set_mac_type(hw); + + switch (hw->mac.type) { + case fm10k_mac_pf: + status = fm10k_init_ops_pf(hw); + break; + case fm10k_mac_vf: + status = fm10k_init_ops_vf(hw); + break; + default: + status = FM10K_ERR_DEVICE_NOT_SUPPORTED; + break; + } + + return status; +} + +#define fm10k_call_func(hw, func, params, error) \ + ((func) ? (func params) : (error)) + +/** + * fm10k_reset_hw - Reset the hardware to known good state + * @hw: pointer to hardware structure + * + * This function should return the hardware to a state similar to the + * one it is in after being powered on. 
+ **/ +s32 fm10k_reset_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.reset_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_init_hw - Initialize the hardware + * @hw: pointer to hardware structure + * + * Initialize the hardware by resetting and then starting the hardware + **/ +s32 fm10k_init_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.init_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_stop_hw - Prepares hardware to shutdown Rx/Tx + * @hw: pointer to hardware structure + * + * Disables Rx/Tx queues and disables the DMA engine. + **/ +s32 fm10k_stop_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.stop_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_start_hw - Prepares hardware for Rx/Tx + * @hw: pointer to hardware structure + * + * This function sets the flags indicating that the hardware is ready to + * begin operation. + **/ +s32 fm10k_start_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.start_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_get_bus_info - Set PCI bus info + * @hw: pointer to hardware structure + * + * Sets the PCI bus info (speed, width, type) within the fm10k_hw structure + **/ +s32 fm10k_get_bus_info(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.get_bus_info, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_is_slot_appropriate - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. + **/ +bool fm10k_is_slot_appropriate(struct fm10k_hw *hw) +{ + if (hw->mac.ops.is_slot_appropriate) + return hw->mac.ops.is_slot_appropriate(hw); + return true; +} + +/** + * fm10k_update_vlan - Clear VLAN ID to VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @idx: Index indicating VF ID or PF ID in table + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for the corresponding function. + **/ +s32 fm10k_update_vlan(struct fm10k_hw *hw, u32 vid, u8 idx, bool set) +{ + return fm10k_call_func(hw, hw->mac.ops.update_vlan, (hw, vid, idx, set), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_read_mac_addr - Reads MAC address + * @hw: pointer to hardware structure + * + * Reads the MAC address out of the interface and stores it in the HW + * structures. + **/ +s32 fm10k_read_mac_addr(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.read_mac_addr, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_hw_stats - Update hw statistics + * @hw: pointer to hardware structure + * + * This function updates statistics that are related to hardware. + * */ +void fm10k_update_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats) +{ + if (hw->mac.ops.update_hw_stats) + hw->mac.ops.update_hw_stats(hw, stats); +} + +/** + * fm10k_rebind_hw_stats - Reset base for hw statistics + * @hw: pointer to hardware structure + * + * This function resets the base for statistics that are related to hardware. 
+ * */ +void fm10k_rebind_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats) +{ + if (hw->mac.ops.rebind_hw_stats) + hw->mac.ops.rebind_hw_stats(hw, stats); +} + +/** + * fm10k_configure_dglort_map - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned. + **/ +s32 fm10k_configure_dglort_map(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + return fm10k_call_func(hw, hw->mac.ops.configure_dglort_map, + (hw, dglort), FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_set_dma_mask - Configures PhyAddrSpace to limit DMA to system + * @hw: pointer to hardware structure + * @dma_mask: 64 bit DMA mask required for platform + * + * This function configures the endpoint to limit the access to memory + * beyond what is physically in the system. + **/ +void fm10k_set_dma_mask(struct fm10k_hw *hw, u64 dma_mask) +{ + if (hw->mac.ops.set_dma_mask) + hw->mac.ops.set_dma_mask(hw, dma_mask); +} + +/** + * fm10k_get_fault - Record a fault in one of the interface units + * @hw: pointer to hardware structure + * @type: pointer to fault type register offset + * @fault: pointer to memory location to record the fault + * + * Record the fault register contents to the fault data structure and + * clear the entry from the register. + * + * Returns ERR_PARAM if invalid register is specified or no error is present. + **/ +s32 fm10k_get_fault(struct fm10k_hw *hw, int type, struct fm10k_fault *fault) +{ + return fm10k_call_func(hw, hw->mac.ops.get_fault, (hw, type, fault), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_uc_addr - Update device unicast address + * @hw: pointer to the HW structure + * @lport: logical port ID to update - unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure - unused + * + * This function is used to add or remove unicast MAC addresses + **/ +s32 fm10k_update_uc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + return fm10k_call_func(hw, hw->mac.ops.update_uc_addr, + (hw, lport, mac, vid, add, flags), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_mc_addr - Update device multicast address + * @hw: pointer to the HW structure + * @lport: logical port ID to update - unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses + **/ +s32 fm10k_update_mc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add) +{ + return fm10k_call_func(hw, hw->mac.ops.update_mc_addr, + (hw, lport, mac, vid, add), + FM10K_NOT_IMPLEMENTED); +} diff --git a/lib/librte_pmd_fm10k/SHARED/fm10k_api.h b/lib/librte_pmd_fm10k/SHARED/fm10k_api.h new file mode 100644 index 0000000..4c90352 --- /dev/null +++ b/lib/librte_pmd_fm10k/SHARED/fm10k_api.h @@ -0,0 +1,60 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. 
Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_API_H_ +#define _FM10K_API_H_ + +#include "fm10k_pf.h" +#include "fm10k_vf.h" + +s32 fm10k_set_mac_type(struct fm10k_hw *hw); +s32 fm10k_reset_hw(struct fm10k_hw *hw); +s32 fm10k_init_hw(struct fm10k_hw *hw); +s32 fm10k_stop_hw(struct fm10k_hw *hw); +s32 fm10k_start_hw(struct fm10k_hw *hw); +s32 fm10k_init_shared_code(struct fm10k_hw *hw); +s32 fm10k_get_bus_info(struct fm10k_hw *hw); +bool fm10k_is_slot_appropriate(struct fm10k_hw *hw); +s32 fm10k_update_vlan(struct fm10k_hw *hw, u32 vid, u8 idx, bool set); +s32 fm10k_read_mac_addr(struct fm10k_hw *hw); +void fm10k_update_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats); +void fm10k_rebind_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats); +s32 fm10k_configure_dglort_map(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort); +void fm10k_set_dma_mask(struct fm10k_hw *hw, u64 dma_mask); +s32 fm10k_get_fault(struct fm10k_hw *hw, int type, struct fm10k_fault *fault); +s32 fm10k_update_uc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add, u8 flags); +s32 fm10k_update_mc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add); +#endif /* _FM10K_API_H_ */ diff --git a/lib/librte_pmd_fm10k/SHARED/fm10k_common.c b/lib/librte_pmd_fm10k/SHARED/fm10k_common.c new file mode 100644 index 0000000..fee904f --- /dev/null +++ b/lib/librte_pmd_fm10k/SHARED/fm10k_common.c @@ -0,0 +1,573 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. 
Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_common.h" + +/** + * fm10k_get_bus_info_generic - Generic set PCI bus info + * @hw: pointer to hardware structure + * + * Gets the PCI bus info (speed, width, type) then calls helper function to + * store this data within the fm10k_hw structure. + **/ +STATIC s32 fm10k_get_bus_info_generic(struct fm10k_hw *hw) +{ + u16 link_cap, link_status, device_cap, device_control; + + DEBUGFUNC("fm10k_get_bus_info_generic"); + + /* Get the maximum link width and speed from PCIe config space */ + link_cap = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_LINK_CAP); + + switch (link_cap & FM10K_PCIE_LINK_WIDTH) { + case FM10K_PCIE_LINK_WIDTH_1: + hw->bus_caps.width = fm10k_bus_width_pcie_x1; + break; + case FM10K_PCIE_LINK_WIDTH_2: + hw->bus_caps.width = fm10k_bus_width_pcie_x2; + break; + case FM10K_PCIE_LINK_WIDTH_4: + hw->bus_caps.width = fm10k_bus_width_pcie_x4; + break; + case FM10K_PCIE_LINK_WIDTH_8: + hw->bus_caps.width = fm10k_bus_width_pcie_x8; + break; + default: + hw->bus_caps.width = fm10k_bus_width_unknown; + break; + } + + switch (link_cap & FM10K_PCIE_LINK_SPEED) { + case FM10K_PCIE_LINK_SPEED_2500: + hw->bus_caps.speed = fm10k_bus_speed_2500; + break; + case FM10K_PCIE_LINK_SPEED_5000: + hw->bus_caps.speed = fm10k_bus_speed_5000; + break; + case FM10K_PCIE_LINK_SPEED_8000: + hw->bus_caps.speed = fm10k_bus_speed_8000; + break; + default: + hw->bus_caps.speed = fm10k_bus_speed_unknown; + break; + } + + /* Get the PCIe maximum payload size for the PCIe function */ + device_cap = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_DEV_CAP); + + switch (device_cap & FM10K_PCIE_DEV_CAP_PAYLOAD) { + case FM10K_PCIE_DEV_CAP_PAYLOAD_128: + hw->bus_caps.payload = fm10k_bus_payload_128; + break; + case FM10K_PCIE_DEV_CAP_PAYLOAD_256: + hw->bus_caps.payload = fm10k_bus_payload_256; + break; + case FM10K_PCIE_DEV_CAP_PAYLOAD_512: + hw->bus_caps.payload = fm10k_bus_payload_512; + break; + default: + hw->bus_caps.payload = fm10k_bus_payload_unknown; + break; + } + + /* Get the negotiated link width and speed from PCIe config space */ + link_status = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_LINK_STATUS); + + switch (link_status & FM10K_PCIE_LINK_WIDTH) { + case FM10K_PCIE_LINK_WIDTH_1: + hw->bus.width = fm10k_bus_width_pcie_x1; + break; + case FM10K_PCIE_LINK_WIDTH_2: + hw->bus.width = fm10k_bus_width_pcie_x2; + break; + case FM10K_PCIE_LINK_WIDTH_4: + hw->bus.width = fm10k_bus_width_pcie_x4; + break; + case FM10K_PCIE_LINK_WIDTH_8: + hw->bus.width = fm10k_bus_width_pcie_x8; + break; + default: + 
hw->bus.width = fm10k_bus_width_unknown; + break; + } + + switch (link_status & FM10K_PCIE_LINK_SPEED) { + case FM10K_PCIE_LINK_SPEED_2500: + hw->bus.speed = fm10k_bus_speed_2500; + break; + case FM10K_PCIE_LINK_SPEED_5000: + hw->bus.speed = fm10k_bus_speed_5000; + break; + case FM10K_PCIE_LINK_SPEED_8000: + hw->bus.speed = fm10k_bus_speed_8000; + break; + default: + hw->bus.speed = fm10k_bus_speed_unknown; + break; + } + + /* Get the negotiated PCIe maximum payload size for the PCIe function */ + device_control = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_DEV_CTRL); + + switch (device_control & FM10K_PCIE_DEV_CTRL_PAYLOAD) { + case FM10K_PCIE_DEV_CTRL_PAYLOAD_128: + hw->bus.payload = fm10k_bus_payload_128; + break; + case FM10K_PCIE_DEV_CTRL_PAYLOAD_256: + hw->bus.payload = fm10k_bus_payload_256; + break; + case FM10K_PCIE_DEV_CTRL_PAYLOAD_512: + hw->bus.payload = fm10k_bus_payload_512; + break; + default: + hw->bus.payload = fm10k_bus_payload_unknown; + break; + } + + return FM10K_SUCCESS; +} + +u16 fm10k_get_pcie_msix_count_generic(struct fm10k_hw *hw) +{ + u16 msix_count; + + DEBUGFUNC("fm10k_get_pcie_msix_count_generic"); + + /* read in value from MSI-X capability register */ + msix_count = FM10K_READ_PCI_WORD(hw, FM10K_PCI_MSIX_MSG_CTRL); + msix_count &= FM10K_PCI_MSIX_MSG_CTRL_TBL_SZ_MASK; + + /* MSI-X count is zero-based in HW */ + msix_count++; + + if (msix_count > FM10K_MAX_MSIX_VECTORS) + msix_count = FM10K_MAX_MSIX_VECTORS; + + return msix_count; +} + +/** + * fm10k_init_ops_generic - Inits function ptrs + * @hw: pointer to the hardware structure + * + * Initialize the function pointers. + **/ +s32 fm10k_init_ops_generic(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + + DEBUGFUNC("fm10k_init_ops_generic"); + + /* MAC */ + mac->ops.get_bus_info = &fm10k_get_bus_info_generic; + + /* initialize GLORT state to avoid any false hits */ + mac->dglort_map = FM10K_DGLORTMAP_NONE; + + return FM10K_SUCCESS; +} + +/** + * fm10k_start_hw_generic - Prepare hardware for Tx/Rx + * @hw: pointer to hardware structure + * + * This function sets the Tx ready flag to indicate that the Tx path has + * been initialized. 
+ **/ +s32 fm10k_start_hw_generic(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_start_hw_generic"); + + /* set flag indicating we are beginning Tx */ + hw->mac.tx_ready = true; + + return FM10K_SUCCESS; +} + +/** + * fm10k_disable_queues_generic - Stop Tx/Rx queues + * @hw: pointer to hardware structure + * @q_cnt: number of queues to be disabled + * + **/ +s32 fm10k_disable_queues_generic(struct fm10k_hw *hw, u16 q_cnt) +{ + u32 reg; + u16 i, time; + + DEBUGFUNC("fm10k_disable_queues_generic"); + + /* clear tx_ready to prevent any false hits for reset */ + hw->mac.tx_ready = false; + + /* clear the enable bit for all rings */ + for (i = 0; i < q_cnt; i++) { + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), + reg & ~FM10K_TXDCTL_ENABLE); + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), + reg & ~FM10K_RXQCTL_ENABLE); + } + + FM10K_WRITE_FLUSH(hw); + usec_delay(1); + + /* loop through all queues to verify that they are all disabled */ + for (i = 0, time = FM10K_QUEUE_DISABLE_TIMEOUT; time;) { + /* if we are at end of rings all rings are disabled */ + if (i == q_cnt) + return FM10K_SUCCESS; + + /* if queue enables cleared, then move to next ring pair */ + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + if (!~reg || !(reg & FM10K_TXDCTL_ENABLE)) { + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + if (!~reg || !(reg & FM10K_RXQCTL_ENABLE)) { + i++; + continue; + } + } + + /* decrement time and wait 1 usec */ + time--; + if (time) + usec_delay(1); + } + + return FM10K_ERR_REQUESTS_PENDING; +} + +/** + * fm10k_stop_hw_generic - Stop Tx/Rx units + * @hw: pointer to hardware structure + * + **/ +s32 fm10k_stop_hw_generic(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_stop_hw_generic"); + + return fm10k_disable_queues_generic(hw, hw->mac.max_queues); +} + +/** + * fm10k_read_hw_stats_32b - Reads value of 32-bit registers + * @hw: pointer to the hardware structure + * @addr: address of register containing a 32-bit value + * + * Function reads the content of the register and returns the delta + * between the base and the current value. + * **/ +u32 fm10k_read_hw_stats_32b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat) +{ + u32 delta = FM10K_READ_REG(hw, addr) - stat->base_l; + + DEBUGFUNC("fm10k_read_hw_stats_32b"); + + if (FM10K_REMOVED(hw->hw_addr)) + stat->base_h = 0; + + return delta; +} + +/** + * fm10k_read_hw_stats_48b - Reads value of 48-bit registers + * @hw: pointer to the hardware structure + * @addr: address of register containing the lower 32-bit value + * + * Function reads the content of 2 registers, combined to represent a 48-bit + * statistical value. Extra processing is required to handle overflowing. + * Finally, a delta value is returned representing the difference between the + * values stored in registers and values stored in the statistic counters. 
+ * **/ +STATIC u64 fm10k_read_hw_stats_48b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat) +{ + u32 count_l; + u32 count_h; + u32 count_tmp; + u64 delta; + + DEBUGFUNC("fm10k_read_hw_stats_48b"); + + count_h = FM10K_READ_REG(hw, addr + 1); + + /* Check for overflow */ + do { + count_tmp = count_h; + count_l = FM10K_READ_REG(hw, addr); + count_h = FM10K_READ_REG(hw, addr + 1); + } while (count_h != count_tmp); + + delta = ((u64)(count_h - stat->base_h) << 32) + count_l; + delta -= stat->base_l; + + return delta & FM10K_48_BIT_MASK; +} + +/** + * fm10k_update_hw_base_48b - Updates 48-bit statistic base value + * @stat: pointer to the hardware statistic structure + * @delta: value to be updated into the hardware statistic structure + * + * Function receives a value and determines if an update is required based on + * a delta calculation. Only the base value will be updated. + **/ +STATIC void fm10k_update_hw_base_48b(struct fm10k_hw_stat *stat, u64 delta) +{ + DEBUGFUNC("fm10k_update_hw_base_48b"); + + if (!delta) + return; + + /* update lower 32 bits */ + delta += stat->base_l; + stat->base_l = (u32)delta; + + /* update upper 32 bits */ + stat->base_h += (u32)(delta >> 32); +} + +/** + * fm10k_update_hw_stats_tx_q - Updates TX queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * + * Function updates the TX queue statistics counters that are related to the + * hardware. + **/ +STATIC void fm10k_update_hw_stats_tx_q(struct fm10k_hw *hw, + struct fm10k_hw_stats_q *q, + u32 idx) +{ + u32 id_tx, id_tx_prev, tx_packets; + u64 tx_bytes = 0; + + DEBUGFUNC("fm10k_update_hw_stats_tx_q"); + + /* Retrieve TX Owner Data */ + id_tx = FM10K_READ_REG(hw, FM10K_TXQCTL(idx)); + + /* Process TX Ring */ + do { + tx_packets = fm10k_read_hw_stats_32b(hw, FM10K_QPTC(idx), + &q->tx_packets); + + if (tx_packets) + tx_bytes = fm10k_read_hw_stats_48b(hw, + FM10K_QBTC_L(idx), + &q->tx_bytes); + + /* Re-Check Owner Data */ + id_tx_prev = id_tx; + id_tx = FM10K_READ_REG(hw, FM10K_TXQCTL(idx)); + } while ((id_tx ^ id_tx_prev) & FM10K_TXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id_tx &= FM10K_TXQCTL_ID_MASK; + id_tx |= FM10K_STAT_VALID; + + /* update packet counts */ + if (q->tx_stats_idx == id_tx) { + q->tx_packets.count += tx_packets; + q->tx_bytes.count += tx_bytes; + } + + /* update bases and record ID */ + fm10k_update_hw_base_32b(&q->tx_packets, tx_packets); + fm10k_update_hw_base_48b(&q->tx_bytes, tx_bytes); + + q->tx_stats_idx = id_tx; +} + +/** + * fm10k_update_hw_stats_rx_q - Updates RX queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * + * Function updates the RX queue statistics counters that are related to the + * hardware. 
+ **/ +STATIC void fm10k_update_hw_stats_rx_q(struct fm10k_hw *hw, + struct fm10k_hw_stats_q *q, + u32 idx) +{ + u32 id_rx, id_rx_prev, rx_packets, rx_drops; + u64 rx_bytes = 0; + + DEBUGFUNC("fm10k_update_hw_stats_rx_q"); + + /* Retrieve RX Owner Data */ + id_rx = FM10K_READ_REG(hw, FM10K_RXQCTL(idx)); + + /* Process RX Ring*/ + do { + rx_drops = fm10k_read_hw_stats_32b(hw, FM10K_QPRDC(idx), + &q->rx_drops); + + rx_packets = fm10k_read_hw_stats_32b(hw, FM10K_QPRC(idx), + &q->rx_packets); + + if (rx_packets) + rx_bytes = fm10k_read_hw_stats_48b(hw, + FM10K_QBRC_L(idx), + &q->rx_bytes); + + /* Re-Check Owner Data */ + id_rx_prev = id_rx; + id_rx = FM10K_READ_REG(hw, FM10K_RXQCTL(idx)); + } while ((id_rx ^ id_rx_prev) & FM10K_RXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id_rx &= FM10K_RXQCTL_ID_MASK; + id_rx |= FM10K_STAT_VALID; + + /* update packet counts */ + if (q->rx_stats_idx == id_rx) { + q->rx_drops.count += rx_drops; + q->rx_packets.count += rx_packets; + q->rx_bytes.count += rx_bytes; + } + + /* update bases and record ID */ + fm10k_update_hw_base_32b(&q->rx_drops, rx_drops); + fm10k_update_hw_base_32b(&q->rx_packets, rx_packets); + fm10k_update_hw_base_48b(&q->rx_bytes, rx_bytes); + + q->rx_stats_idx = id_rx; +} + +/** + * fm10k_update_hw_stats_q - Updates queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * @count: number of queues to iterate over + * + * Function updates the queue statistics counters that are related to the + * hardware. + **/ +void fm10k_update_hw_stats_q(struct fm10k_hw *hw, struct fm10k_hw_stats_q *q, + u32 idx, u32 count) +{ + u32 i; + + DEBUGFUNC("fm10k_update_hw_stats_q"); + + for (i = 0; i < count; i++, idx++, q++) { + fm10k_update_hw_stats_tx_q(hw, q, idx); + fm10k_update_hw_stats_rx_q(hw, q, idx); + } +} + +/** + * fm10k_unbind_hw_stats_q - Unbind the queue counters from their queues + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * @count: number of queues to iterate over + * + * Function invalidates the index values for the queues so any updates that + * may have happened are ignored and the base for the queue stats is reset. + **/ + +void fm10k_unbind_hw_stats_q(struct fm10k_hw_stats_q *q, u32 idx, u32 count) +{ + u32 i; + + for (i = 0; i < count; i++, idx++, q++) { + q->rx_stats_idx = 0; + q->tx_stats_idx = 0; + } +} + +/** + * fm10k_get_host_state_generic - Returns the state of the host + * @hw: pointer to hardware structure + * @host_ready: pointer to boolean value that will record host state + * + * This function will check the health of the mailbox and Tx queue 0 + * in order to determine if we should report that the link is up or not. 
+ **/ +s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + struct fm10k_mac_info *mac = &hw->mac; + s32 ret_val = FM10K_SUCCESS; + u32 txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(0)); + + DEBUGFUNC("fm10k_get_host_state_generic"); + + /* process upstream mailbox in case interrupts were disabled */ + mbx->ops.process(hw, mbx); + + /* If Tx is no longer enabled link should come down */ + if (!(~txdctl) || !(txdctl & FM10K_TXDCTL_ENABLE)) + mac->get_host_state = true; + + /* exit if not checking for link, or link cannot be changed */ + if (!mac->get_host_state || !(~txdctl)) + goto out; + + /* if we somehow dropped the Tx enable we should reset */ + if (hw->mac.tx_ready && !(txdctl & FM10K_TXDCTL_ENABLE)) { + ret_val = FM10K_ERR_RESET_REQUESTED; + goto out; + } + + /* if Mailbox timed out we should request reset */ + if (!mbx->timeout) { + ret_val = FM10K_ERR_RESET_REQUESTED; + goto out; + } + + /* verify Mailbox is still valid */ + if (!mbx->ops.tx_ready(mbx, FM10K_VFMBX_MSG_MTU)) + goto out; + + /* interface cannot receive traffic without logical ports */ + if (mac->dglort_map == FM10K_DGLORTMAP_NONE) + goto out; + + /* if we passed all the tests above then the switch is ready and we no + * longer need to check for link + */ + mac->get_host_state = false; + +out: + *host_ready = !mac->get_host_state; + return ret_val; +} diff --git a/lib/librte_pmd_fm10k/SHARED/fm10k_common.h b/lib/librte_pmd_fm10k/SHARED/fm10k_common.h new file mode 100644 index 0000000..6e89051 --- /dev/null +++ b/lib/librte_pmd_fm10k/SHARED/fm10k_common.h @@ -0,0 +1,52 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _FM10K_COMMON_H_ +#define _FM10K_COMMON_H_ + +#include "fm10k_type.h" + +u16 fm10k_get_pcie_msix_count_generic(struct fm10k_hw *hw); +s32 fm10k_init_ops_generic(struct fm10k_hw *hw); +s32 fm10k_disable_queues_generic(struct fm10k_hw *hw, u16 q_cnt); +s32 fm10k_start_hw_generic(struct fm10k_hw *hw); +s32 fm10k_stop_hw_generic(struct fm10k_hw *hw); +u32 fm10k_read_hw_stats_32b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat); +#define fm10k_update_hw_base_32b(stat, delta) ((stat)->base_l += (delta)) +void fm10k_update_hw_stats_q(struct fm10k_hw *hw, struct fm10k_hw_stats_q *q, + u32 idx, u32 count); +#define fm10k_unbind_hw_stats_32b(s) ((s)->base_h = 0) +void fm10k_unbind_hw_stats_q(struct fm10k_hw_stats_q *q, u32 idx, u32 count); +s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready); +#endif /* _FM10K_COMMON_H_ */ diff --git a/lib/librte_pmd_fm10k/SHARED/fm10k_mbx.c b/lib/librte_pmd_fm10k/SHARED/fm10k_mbx.c new file mode 100644 index 0000000..dc579a4 --- /dev/null +++ b/lib/librte_pmd_fm10k/SHARED/fm10k_mbx.c @@ -0,0 +1,2186 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#include "fm10k_common.h" + +/** + * fm10k_fifo_init - Initialize a message FIFO + * @fifo: pointer to FIFO + * @buffer: pointer to memory to be used to store FIFO + * @size: maximum message size to store in FIFO, must be 2^n - 1 + **/ +STATIC void fm10k_fifo_init(struct fm10k_mbx_fifo *fifo, u32 *buffer, u16 size) +{ + fifo->buffer = buffer; + fifo->size = size; + fifo->head = 0; + fifo->tail = 0; +} + +/** + * fm10k_fifo_used - Retrieve used space in FIFO + * @fifo: pointer to FIFO + * + * This function returns the number of DWORDs used in the FIFO + **/ +STATIC u16 fm10k_fifo_used(struct fm10k_mbx_fifo *fifo) +{ + return fifo->tail - fifo->head; +} + +/** + * fm10k_fifo_unused - Retrieve unused space in FIFO + * @fifo: pointer to FIFO + * + * This function returns the number of unused DWORDs in the FIFO + **/ +STATIC u16 fm10k_fifo_unused(struct fm10k_mbx_fifo *fifo) +{ + return fifo->size + fifo->head - fifo->tail; +} + +/** + * fm10k_fifo_empty - Test to verify if fifo is empty + * @fifo: pointer to FIFO + * + * This function returns true if the FIFO is empty, else false + **/ +STATIC bool fm10k_fifo_empty(struct fm10k_mbx_fifo *fifo) +{ + return fifo->head == fifo->tail; +} + +/** + * fm10k_fifo_head_offset - returns indices of head with given offset + * @fifo: pointer to FIFO + * @offset: offset to add to head + * + * This function returns the indicies into the fifo based on head + offset + **/ +STATIC u16 fm10k_fifo_head_offset(struct fm10k_mbx_fifo *fifo, u16 offset) +{ + return (fifo->head + offset) & (fifo->size - 1); +} + +/** + * fm10k_fifo_tail_offset - returns indices of tail with given offset + * @fifo: pointer to FIFO + * @offset: offset to add to tail + * + * This function returns the indicies into the fifo based on tail + offset + **/ +STATIC u16 fm10k_fifo_tail_offset(struct fm10k_mbx_fifo *fifo, u16 offset) +{ + return (fifo->tail + offset) & (fifo->size - 1); +} + +/** + * fm10k_fifo_head_len - Retrieve length of first message in FIFO + * @fifo: pointer to FIFO + * + * This function returns the size of the first message in the FIFO + **/ +STATIC u16 fm10k_fifo_head_len(struct fm10k_mbx_fifo *fifo) +{ + u32 *head = fifo->buffer + fm10k_fifo_head_offset(fifo, 0); + + /* verify there is at least 1 DWORD in the fifo so *head is valid */ + if (fm10k_fifo_empty(fifo)) + return 0; + + /* retieve the message length */ + return FM10K_TLV_DWORD_LEN(*head); +} + +/** + * fm10k_fifo_head_drop - Drop the first message in FIFO + * @fifo: pointer to FIFO + * + * This function returns the size of the message dropped from the FIFO + **/ +STATIC u16 fm10k_fifo_head_drop(struct fm10k_mbx_fifo *fifo) +{ + u16 len = fm10k_fifo_head_len(fifo); + + /* update head so it is at the start of next frame */ + fifo->head += len; + + return len; +} + +/** + * fm10k_mbx_index_len - Convert a head/tail index into a length value + * @mbx: pointer to mailbox + * @head: head index + * @tail: head index + * + * This function takes the head and tail index and determines the length + * of the data indicated by this pair. 
+ **/ +STATIC u16 fm10k_mbx_index_len(struct fm10k_mbx_info *mbx, u16 head, u16 tail) +{ + u16 len = tail - head; + + /* we wrapped so subtract 2, one for index 0, one for all 1s index */ + if (len > tail) + len -= 2; + + return len & ((mbx->mbmem_len << 1) - 1); +} + +/** + * fm10k_mbx_tail_add - Determine new tail value with added offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local tail index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_tail_add(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 tail = (mbx->tail + offset + 1) & ((mbx->mbmem_len << 1) - 1); + + /* add/sub 1 because we cannot have offset 0 or all 1s */ + return (tail > mbx->tail) ? --tail : ++tail; +} + +/** + * fm10k_mbx_tail_sub - Determine new tail value with subtracted offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local tail index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_tail_sub(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 tail = (mbx->tail - offset - 1) & ((mbx->mbmem_len << 1) - 1); + + /* sub/add 1 because we cannot have offset 0 or all 1s */ + return (tail < mbx->tail) ? ++tail : --tail; +} + +/** + * fm10k_mbx_head_add - Determine new head value with added offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local head index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_head_add(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 head = (mbx->head + offset + 1) & ((mbx->mbmem_len << 1) - 1); + + /* add/sub 1 because we cannot have offset 0 or all 1s */ + return (head > mbx->head) ? --head : ++head; +} + +/** + * fm10k_mbx_head_sub - Determine new head value with subtracted offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local head index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_head_sub(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 head = (mbx->head - offset - 1) & ((mbx->mbmem_len << 1) - 1); + + /* sub/add 1 because we cannot have offset 0 or all 1s */ + return (head < mbx->head) ? ++head : --head; +} + +/** + * fm10k_mbx_pushed_tail_len - Retrieve the length of message being pushed + * @mbx: pointer to mailbox + * + * This function will return the length of the message currently being + * pushed onto the tail of the Rx queue. + **/ +STATIC u16 fm10k_mbx_pushed_tail_len(struct fm10k_mbx_info *mbx) +{ + u32 *tail = mbx->rx.buffer + fm10k_fifo_tail_offset(&mbx->rx, 0); + + /* pushed tail is only valid if pushed is set */ + if (!mbx->pushed) + return 0; + + return FM10K_TLV_DWORD_LEN(*tail); +} + +/** + * fm10k_fifo_write_copy - pulls data off of msg and places it in fifo + * @fifo: pointer to FIFO + * @msg: message array to populate + * @tail_offset: additional offset to add to tail pointer + * @len: length of FIFO to copy into message header + * + * This function will take a message and copy it into a section of the + * FIFO. In order to get something into a location other than just + * the tail you can use tail_offset to adjust the pointer. 
+ **/ +STATIC void fm10k_fifo_write_copy(struct fm10k_mbx_fifo *fifo, + const u32 *msg, u16 tail_offset, u16 len) +{ + u16 end = fm10k_fifo_tail_offset(fifo, tail_offset); + u32 *tail = fifo->buffer + end; + + /* track when we should cross the end of the FIFO */ + end = fifo->size - end; + + /* copy end of message before start of message */ + if (end < len) + memcpy(fifo->buffer, msg + end, (len - end) << 2); + else + end = len; + + /* Copy remaining message into Tx FIFO */ + memcpy(tail, msg, end << 2); +} + +/** + * fm10k_fifo_enqueue - Enqueues the message to the tail of the FIFO + * @fifo: pointer to FIFO + * @msg: message array to read + * + * This function enqueues a message up to the size specified by the length + * contained in the first DWORD of the message and will place at the tail + * of the FIFO. It will return 0 on success, or a negative value on error. + **/ +STATIC s32 fm10k_fifo_enqueue(struct fm10k_mbx_fifo *fifo, const u32 *msg) +{ + u16 len = FM10K_TLV_DWORD_LEN(*msg); + + DEBUGFUNC("fm10k_fifo_enqueue"); + + /* verify parameters */ + if (len > fifo->size) + return FM10K_MBX_ERR_SIZE; + + /* verify there is room for the message */ + if (len > fm10k_fifo_unused(fifo)) + return FM10K_MBX_ERR_NO_SPACE; + + /* Copy message into FIFO */ + fm10k_fifo_write_copy(fifo, msg, 0, len); + + /* memory barrier to guarantee FIFO is written before tail update */ + FM10K_WMB(); + + /* Update Tx FIFO tail */ + fifo->tail += len; + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_validate_msg_size - Validate incoming message based on size + * @mbx: pointer to mailbox + * @len: length of data pushed onto buffer + * + * This function analyzes the frame and will return a non-zero value when + * the start of a message larger than the mailbox is detected. + **/ +STATIC u16 fm10k_mbx_validate_msg_size(struct fm10k_mbx_info *mbx, u16 len) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 total_len = 0, msg_len; + u32 *msg; + + DEBUGFUNC("fm10k_mbx_validate_msg"); + + /* length should include previous amounts pushed */ + len += mbx->pushed; + + /* offset in message is based off of current message size */ + do { + msg = fifo->buffer + fm10k_fifo_tail_offset(fifo, total_len); + msg_len = FM10K_TLV_DWORD_LEN(*msg); + total_len += msg_len; + } while (total_len < len); + + /* message extends out of pushed section, but fits in FIFO */ + if ((len < total_len) && (msg_len <= mbx->rx.size)) + return 0; + + /* return length of invalid section */ + return (len < total_len) ? len : (len - total_len); +} + +/** + * fm10k_mbx_write_copy - pulls data off of Tx FIFO and places it in mbmem + * @mbx: pointer to mailbox + * + * This function will take a seciton of the Rx FIFO and copy it into the + mbx->tail--; + * mailbox memory. The offset in mbmem is based on the lower bits of the + * tail and len determines the length to copy. 
+ **/ +STATIC void fm10k_mbx_write_copy(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->tx; + u32 mbmem = mbx->mbmem_reg; + u32 *head = fifo->buffer; + u16 end, len, tail, mask; + + DEBUGFUNC("fm10k_mbx_write_copy"); + + if (!mbx->tail_len) + return; + + /* determine data length and mbmem tail index */ + mask = mbx->mbmem_len - 1; + len = mbx->tail_len; + tail = fm10k_mbx_tail_sub(mbx, len); + if (tail > mask) + tail++; + + /* determine offset in the ring */ + end = fm10k_fifo_head_offset(fifo, mbx->pulled); + head += end; + + /* memory barrier to guarantee data is ready to be read */ + FM10K_RMB(); + + /* Copy message from Tx FIFO */ + for (end = fifo->size - end; len; head = fifo->buffer) { + do { + /* adjust tail to match offset for FIFO */ + tail &= mask; + if (!tail) + tail++; + + /* write message to hardware FIFO */ + FM10K_WRITE_MBX(hw, mbmem + tail++, *(head++)); + } while (--len && --end); + } +} + +/** + * fm10k_mbx_pull_head - Pulls data off of head of Tx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @head: acknowledgement number last received + * + * This function will push the tail index forward based on the remote + * head index. It will then pull up to mbmem_len DWORDs off of the + * head of the FIFO and will place it in the MBMEM registers + * associated with the mailbox. + **/ +STATIC void fm10k_mbx_pull_head(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + u16 mbmem_len, len, ack = fm10k_mbx_index_len(mbx, head, mbx->tail); + struct fm10k_mbx_fifo *fifo = &mbx->tx; + + /* update number of bytes pulled and update bytes in transit */ + mbx->pulled += mbx->tail_len - ack; + + /* determine length of data to pull, reserve space for mbmem header */ + mbmem_len = mbx->mbmem_len - 1; + len = fm10k_fifo_used(fifo) - mbx->pulled; + if (len > mbmem_len) + len = mbmem_len; + + /* update tail and record number of bytes in transit */ + mbx->tail = fm10k_mbx_tail_add(mbx, len - ack); + mbx->tail_len = len; + + /* drop pulled messages from the FIFO */ + for (len = fm10k_fifo_head_len(fifo); + len && (mbx->pulled >= len); + len = fm10k_fifo_head_len(fifo)) { + mbx->pulled -= fm10k_fifo_head_drop(fifo); + mbx->tx_messages++; + mbx->tx_dwords += len; + } + + /* Copy message out from the Tx FIFO */ + fm10k_mbx_write_copy(hw, mbx); +} + +/** + * fm10k_mbx_read_copy - pulls data off of mbmem and places it in Rx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will take a seciton of the mailbox memory and copy it + * into the Rx FIFO. The offset is based on the lower bits of the + * head and len determines the length to copy. 
+ **/ +STATIC void fm10k_mbx_read_copy(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u32 mbmem = mbx->mbmem_reg ^ mbx->mbmem_len; + u32 *tail = fifo->buffer; + u16 end, len, head; + + DEBUGFUNC("fm10k_mbx_read_copy"); + + /* determine data length and mbmem head index */ + len = mbx->head_len; + head = fm10k_mbx_head_sub(mbx, len); + if (head >= mbx->mbmem_len) + head++; + + /* determine offset in the ring */ + end = fm10k_fifo_tail_offset(fifo, mbx->pushed); + tail += end; + + /* Copy message into Rx FIFO */ + for (end = fifo->size - end; len; tail = fifo->buffer) { + do { + /* adjust head to match offset for FIFO */ + head &= mbx->mbmem_len - 1; + if (!head) + head++; + + /* read message from hardware FIFO */ + *(tail++) = FM10K_READ_MBX(hw, mbmem + head++); + } while (--len && --end); + } + + /* memory barrier to guarantee FIFO is written before tail update */ + FM10K_WMB(); +} + +/** + * fm10k_mbx_push_tail - Pushes up to 15 DWORDs on to tail of FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @tail: tail index of message + * + * This function will first validate the tail index and size for the + * incoming message. It then updates the acknowlegment number and + * copies the data into the FIFO. It will return the number of messages + * dequeued on success and a negative value on error. + **/ +STATIC s32 fm10k_mbx_push_tail(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, + u16 tail) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 len, seq = fm10k_mbx_index_len(mbx, mbx->head, tail); + + DEBUGFUNC("fm10k_mbx_push_tail"); + + /* determine length of data to push */ + len = fm10k_fifo_unused(fifo) - mbx->pushed; + if (len > seq) + len = seq; + + /* update head and record bytes received */ + mbx->head = fm10k_mbx_head_add(mbx, len); + mbx->head_len = len; + + /* nothing to do if there is no data */ + if (!len) + return FM10K_SUCCESS; + + /* Copy msg into Rx FIFO */ + fm10k_mbx_read_copy(hw, mbx); + + /* determine if there are any invalid lengths in message */ + if (fm10k_mbx_validate_msg_size(mbx, len)) + return FM10K_MBX_ERR_SIZE; + + /* Update pushed */ + mbx->pushed += len; + + /* flush any completed messages */ + for (len = fm10k_mbx_pushed_tail_len(mbx); + len && (mbx->pushed >= len); + len = fm10k_mbx_pushed_tail_len(mbx)) { + fifo->tail += len; + mbx->pushed -= len; + mbx->rx_messages++; + mbx->rx_dwords += len; + } + + return FM10K_SUCCESS; +} + +/* pre-generated data for generating the CRC based on the poly 0xAC9A. 
*/ +static const u16 fm10k_crc_16b_table[256] = { + 0x0000, 0x7956, 0xF2AC, 0x8BFA, 0xBC6D, 0xC53B, 0x4EC1, 0x3797, + 0x21EF, 0x58B9, 0xD343, 0xAA15, 0x9D82, 0xE4D4, 0x6F2E, 0x1678, + 0x43DE, 0x3A88, 0xB172, 0xC824, 0xFFB3, 0x86E5, 0x0D1F, 0x7449, + 0x6231, 0x1B67, 0x909D, 0xE9CB, 0xDE5C, 0xA70A, 0x2CF0, 0x55A6, + 0x87BC, 0xFEEA, 0x7510, 0x0C46, 0x3BD1, 0x4287, 0xC97D, 0xB02B, + 0xA653, 0xDF05, 0x54FF, 0x2DA9, 0x1A3E, 0x6368, 0xE892, 0x91C4, + 0xC462, 0xBD34, 0x36CE, 0x4F98, 0x780F, 0x0159, 0x8AA3, 0xF3F5, + 0xE58D, 0x9CDB, 0x1721, 0x6E77, 0x59E0, 0x20B6, 0xAB4C, 0xD21A, + 0x564D, 0x2F1B, 0xA4E1, 0xDDB7, 0xEA20, 0x9376, 0x188C, 0x61DA, + 0x77A2, 0x0EF4, 0x850E, 0xFC58, 0xCBCF, 0xB299, 0x3963, 0x4035, + 0x1593, 0x6CC5, 0xE73F, 0x9E69, 0xA9FE, 0xD0A8, 0x5B52, 0x2204, + 0x347C, 0x4D2A, 0xC6D0, 0xBF86, 0x8811, 0xF147, 0x7ABD, 0x03EB, + 0xD1F1, 0xA8A7, 0x235D, 0x5A0B, 0x6D9C, 0x14CA, 0x9F30, 0xE666, + 0xF01E, 0x8948, 0x02B2, 0x7BE4, 0x4C73, 0x3525, 0xBEDF, 0xC789, + 0x922F, 0xEB79, 0x6083, 0x19D5, 0x2E42, 0x5714, 0xDCEE, 0xA5B8, + 0xB3C0, 0xCA96, 0x416C, 0x383A, 0x0FAD, 0x76FB, 0xFD01, 0x8457, + 0xAC9A, 0xD5CC, 0x5E36, 0x2760, 0x10F7, 0x69A1, 0xE25B, 0x9B0D, + 0x8D75, 0xF423, 0x7FD9, 0x068F, 0x3118, 0x484E, 0xC3B4, 0xBAE2, + 0xEF44, 0x9612, 0x1DE8, 0x64BE, 0x5329, 0x2A7F, 0xA185, 0xD8D3, + 0xCEAB, 0xB7FD, 0x3C07, 0x4551, 0x72C6, 0x0B90, 0x806A, 0xF93C, + 0x2B26, 0x5270, 0xD98A, 0xA0DC, 0x974B, 0xEE1D, 0x65E7, 0x1CB1, + 0x0AC9, 0x739F, 0xF865, 0x8133, 0xB6A4, 0xCFF2, 0x4408, 0x3D5E, + 0x68F8, 0x11AE, 0x9A54, 0xE302, 0xD495, 0xADC3, 0x2639, 0x5F6F, + 0x4917, 0x3041, 0xBBBB, 0xC2ED, 0xF57A, 0x8C2C, 0x07D6, 0x7E80, + 0xFAD7, 0x8381, 0x087B, 0x712D, 0x46BA, 0x3FEC, 0xB416, 0xCD40, + 0xDB38, 0xA26E, 0x2994, 0x50C2, 0x6755, 0x1E03, 0x95F9, 0xECAF, + 0xB909, 0xC05F, 0x4BA5, 0x32F3, 0x0564, 0x7C32, 0xF7C8, 0x8E9E, + 0x98E6, 0xE1B0, 0x6A4A, 0x131C, 0x248B, 0x5DDD, 0xD627, 0xAF71, + 0x7D6B, 0x043D, 0x8FC7, 0xF691, 0xC106, 0xB850, 0x33AA, 0x4AFC, + 0x5C84, 0x25D2, 0xAE28, 0xD77E, 0xE0E9, 0x99BF, 0x1245, 0x6B13, + 0x3EB5, 0x47E3, 0xCC19, 0xB54F, 0x82D8, 0xFB8E, 0x7074, 0x0922, + 0x1F5A, 0x660C, 0xEDF6, 0x94A0, 0xA337, 0xDA61, 0x519B, 0x28CD }; + +/** + * fm10k_crc_16b - Generate a 16 bit CRC for a region of 16 bit data + * @data: pointer to data to process + * @seed: seed value for CRC + * @len: length measured in 16 bits words + * + * This function will generate a CRC based on the polynomial 0xAC9A and + * whatever value is stored in the seed variable. Note that this + * value inverts the local seed and the result in order to capture all + * leading and trailing zeros. 
+ */ +STATIC u16 fm10k_crc_16b(const u32 *data, u16 seed, u16 len) +{ + u32 result = seed; + + while (len--) { + result ^= *(data++); + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + + if (!(len--)) + break; + + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + } + + return (u16)result; +} + +/** + * fm10k_fifo_crc - generate a CRC based off of FIFO data + * @fifo: pointer to FIFO + * @offset: offset point for start of FIFO + * @len: number of DWORDS words to process + * @seed: seed value for CRC + * + * This function generates a CRC for some region of the FIFO + **/ +STATIC u16 fm10k_fifo_crc(struct fm10k_mbx_fifo *fifo, u16 offset, + u16 len, u16 seed) +{ + u32 *data = fifo->buffer + offset; + + /* track when we should cross the end of the FIFO */ + offset = fifo->size - offset; + + /* if we are in 2 blocks process the end of the FIFO first */ + if (offset < len) { + seed = fm10k_crc_16b(data, seed, offset * 2); + data = fifo->buffer; + len -= offset; + } + + /* process any remaining bits */ + return fm10k_crc_16b(data, seed, len * 2); +} + +/** + * fm10k_mbx_update_local_crc - Update the local CRC for outgoing data + * @mbx: pointer to mailbox + * @head: head index provided by remote mailbox + * + * This function will generate the CRC for all data from the end of the + * last head update to the current one. It uses the result of the + * previous CRC as the seed for this update. The result is stored in + * mbx->local. + **/ +STATIC void fm10k_mbx_update_local_crc(struct fm10k_mbx_info *mbx, u16 head) +{ + u16 len = mbx->tail_len - fm10k_mbx_index_len(mbx, head, mbx->tail); + + /* determine the offset for the start of the region to be pulled */ + head = fm10k_fifo_head_offset(&mbx->tx, mbx->pulled); + + /* update local CRC to include all of the pulled data */ + mbx->local = fm10k_fifo_crc(&mbx->tx, head, len, mbx->local); +} + +/** + * fm10k_mbx_verify_remote_crc - Verify the CRC is correct for current data + * @mbx: pointer to mailbox + * + * This function will take all data that has been provided from the remote + * end and generate a CRC for it. This is stored in mbx->remote. The + * CRC for the header is then computed and if the result is non-zero this + * is an error and we signal an error dropping all data and resetting the + * connection. + */ +STATIC s32 fm10k_mbx_verify_remote_crc(struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 len = mbx->head_len; + u16 offset = fm10k_fifo_tail_offset(fifo, mbx->pushed) - len; + u16 crc; + + /* update the remote CRC if new data has been received */ + if (len) + mbx->remote = fm10k_fifo_crc(fifo, offset, len, mbx->remote); + + /* process the full header as we have to validate the CRC */ + crc = fm10k_crc_16b(&mbx->mbx_hdr, mbx->remote, 1); + + /* notify other end if we have a problem */ + return crc ? FM10K_MBX_ERR_CRC : FM10K_SUCCESS; +} + +/** + * fm10k_mbx_rx_ready - Indicates that a message is ready in the Rx FIFO + * @mbx: pointer to mailbox + * + * This function returns true if there is a message in the Rx FIFO to dequeue. 
+ **/ +STATIC bool fm10k_mbx_rx_ready(struct fm10k_mbx_info *mbx) +{ + u16 msg_size = fm10k_fifo_head_len(&mbx->rx); + + return msg_size && (fm10k_fifo_used(&mbx->rx) >= msg_size); +} + +/** + * fm10k_mbx_tx_ready - Indicates that the mailbox is in state ready for Tx + * @mbx: pointer to mailbox + * @len: verify free space is >= this value + * + * This function returns true if the mailbox is in a state ready to transmit. + **/ +STATIC bool fm10k_mbx_tx_ready(struct fm10k_mbx_info *mbx, u16 len) +{ + u16 fifo_unused = fm10k_fifo_unused(&mbx->tx); + + return (mbx->state == FM10K_STATE_OPEN) && (fifo_unused >= len); +} + +/** + * fm10k_mbx_tx_complete - Indicates that the Tx FIFO has been emptied + * @mbx: pointer to mailbox + * + * This function returns true if the Tx FIFO is empty. + **/ +STATIC bool fm10k_mbx_tx_complete(struct fm10k_mbx_info *mbx) +{ + return fm10k_fifo_empty(&mbx->tx); +} + +/** + * fm10k_mbx_dequeue_rx - Dequeues the message from the head in the Rx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function dequeues messages and hands them off to the tlv parser. + * It will return the number of messages processed when called. + **/ +STATIC u16 fm10k_mbx_dequeue_rx(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + s32 err; + u16 cnt; + + /* parse Rx messages out of the Rx FIFO to empty it */ + for (cnt = 0; !fm10k_fifo_empty(fifo); cnt++) { + err = fm10k_tlv_msg_parse(hw, fifo->buffer + fifo->head, + mbx, mbx->msg_data); + if (err < 0) + mbx->rx_parse_err++; + + fm10k_fifo_head_drop(fifo); + } + + /* shift remaining bytes back to start of FIFO */ + memmove(fifo->buffer, fifo->buffer + fifo->tail, mbx->pushed << 2); + + /* shift head and tail based on the memory we moved */ + fifo->tail -= fifo->head; + fifo->head = 0; + + return cnt; +} + +/** + * fm10k_mbx_enqueue_tx - Enqueues the message to the tail of the Tx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg: message array to read + * + * This function enqueues a message up to the size specified by the length + * contained in the first DWORD of the message and will place it at the tail + * of the FIFO. It will return 0 on success, or a negative value on error.
+ **/ +STATIC s32 fm10k_mbx_enqueue_tx(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, const u32 *msg) +{ + u32 countdown = mbx->timeout; + s32 err; + + switch (mbx->state) { + case FM10K_STATE_CLOSED: + case FM10K_STATE_DISCONNECT: + return FM10K_MBX_ERR_NO_MBX; + default: + break; + } + + /* enqueue the message on the Tx FIFO */ + err = fm10k_fifo_enqueue(&mbx->tx, msg); + + /* if it failed give the FIFO a chance to drain */ + while (err && countdown) { + countdown--; + usec_delay(mbx->usec_delay); + mbx->ops.process(hw, mbx); + err = fm10k_fifo_enqueue(&mbx->tx, msg); + } + + /* if we failed, flag the error */ + if (err) { + mbx->timeout = 0; + mbx->tx_busy++; + } + + /* begin processing message, ignore errors as this is just meant + * to start the mailbox flow so we are not concerned if there + * is a bad error, or the mailbox is already busy with a request + */ + if (!mbx->tail_len) + mbx->ops.process(hw, mbx); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_read - Copies the mbmem to local message buffer + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function copies the message from the mbmem to the message array + **/ +STATIC s32 fm10k_mbx_read(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_read"); + + /* only allow one reader in here at a time */ + if (mbx->mbx_hdr) + return FM10K_MBX_ERR_BUSY; + + /* read to capture initial interrupt bits */ + if (FM10K_READ_MBX(hw, mbx->mbx_reg) & FM10K_MBX_REQ_INTERRUPT) + mbx->mbx_lock = FM10K_MBX_ACK; + + /* write back interrupt bits to clear */ + FM10K_WRITE_MBX(hw, mbx->mbx_reg, + FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT); + + /* read remote header */ + mbx->mbx_hdr = FM10K_READ_MBX(hw, mbx->mbmem_reg ^ mbx->mbmem_len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_write - Copies the local message buffer to mbmem + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function copies the message from the message array to mbmem + **/ +STATIC void fm10k_mbx_write(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + u32 mbmem = mbx->mbmem_reg; + + DEBUGFUNC("fm10k_mbx_write"); + + /* write new msg header to notify recipient of change */ + FM10K_WRITE_MBX(hw, mbmem, mbx->mbx_hdr); + + /* write mailbox to send interrupt */ + if (mbx->mbx_lock) + FM10K_WRITE_MBX(hw, mbx->mbx_reg, mbx->mbx_lock); + + /* we no longer are using the header so free it */ + mbx->mbx_hdr = 0; + mbx->mbx_lock = 0; +} + +/** + * fm10k_mbx_create_connect_hdr - Generate a connect mailbox header + * @mbx: pointer to mailbox + * + * This function returns a connection mailbox header + **/ +STATIC void fm10k_mbx_create_connect_hdr(struct fm10k_mbx_info *mbx) +{ + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_CONNECT, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD) | + FM10K_MSG_HDR_FIELD_SET(mbx->rx.size - 1, CONNECT_SIZE); +} + +/** + * fm10k_mbx_create_data_hdr - Generate a data mailbox header + * @mbx: pointer to mailbox + * + * This function returns a data mailbox header + **/ +STATIC void fm10k_mbx_create_data_hdr(struct fm10k_mbx_info *mbx) +{ + u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DATA, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->tail, TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); + struct fm10k_mbx_fifo *fifo = &mbx->tx; + u16 crc; + + if (mbx->tail_len) + mbx->mbx_lock |= FM10K_MBX_REQ; + + /* generate CRC for data in flight and header */ + crc = fm10k_fifo_crc(fifo, fm10k_fifo_head_offset(fifo, mbx->pulled), +
mbx->tail_len, mbx->local); + crc = fm10k_crc_16b(&hdr, crc, 1); + + /* load header to memory to be written */ + mbx->mbx_hdr = hdr | FM10K_MSG_HDR_FIELD_SET(crc, CRC); +} + +/** + * fm10k_mbx_create_disconnect_hdr - Generate a disconnect mailbox header + * @mbx: pointer to mailbox + * + * This function returns a disconnect mailbox header + **/ +STATIC void fm10k_mbx_create_disconnect_hdr(struct fm10k_mbx_info *mbx) +{ + u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DISCONNECT, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->tail, TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); + u16 crc = fm10k_crc_16b(&hdr, mbx->local, 1); + + mbx->mbx_lock |= FM10K_MBX_ACK; + + /* load header to memory to be written */ + mbx->mbx_hdr = hdr | FM10K_MSG_HDR_FIELD_SET(crc, CRC); +} + +/** + * fm10k_mbx_create_error_msg - Generate an error message + * @mbx: pointer to mailbox + * @err: local error encountered + * + * This function will interpret the error provided by err, and based on + * that it may shift the message by 1 DWORD and then place an error header + * at the start of the message. + **/ +STATIC void fm10k_mbx_create_error_msg(struct fm10k_mbx_info *mbx, s32 err) +{ + /* only generate an error message for these types */ + switch (err) { + case FM10K_MBX_ERR_TAIL: + case FM10K_MBX_ERR_HEAD: + case FM10K_MBX_ERR_TYPE: + case FM10K_MBX_ERR_SIZE: + case FM10K_MBX_ERR_RSVD0: + case FM10K_MBX_ERR_CRC: + break; + default: + return; + } + + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_ERROR, TYPE) | + FM10K_MSG_HDR_FIELD_SET(err, ERR_NO) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); +} + +/** + * fm10k_mbx_validate_msg_hdr - Validate common fields in the message header + * @mbx: pointer to mailbox + * @msg: message array to read + * + * This function will parse up the fields in the mailbox header and return + * an error if the header contains any of a number of invalid configurations + * including unrecognized type, invalid route, or a malformed message.
+ **/ +STATIC s32 fm10k_mbx_validate_msg_hdr(struct fm10k_mbx_info *mbx) +{ + u16 type, rsvd0, head, tail, size; + const u32 *hdr = &mbx->mbx_hdr; + + DEBUGFUNC("fm10k_mbx_validate_msg_hdr"); + + type = FM10K_MSG_HDR_FIELD_GET(*hdr, TYPE); + rsvd0 = FM10K_MSG_HDR_FIELD_GET(*hdr, RSVD0); + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + size = FM10K_MSG_HDR_FIELD_GET(*hdr, CONNECT_SIZE); + + if (rsvd0) + return FM10K_MBX_ERR_RSVD0; + + switch (type) { + case FM10K_MSG_DISCONNECT: + /* validate that all data has been received */ + if (tail != mbx->head) + return FM10K_MBX_ERR_TAIL; + + /* fall through */ + case FM10K_MSG_DATA: + /* validate that head is moving correctly */ + if (!head || (head == FM10K_MSG_HDR_MASK(HEAD))) + return FM10K_MBX_ERR_HEAD; + if (fm10k_mbx_index_len(mbx, head, mbx->tail) > mbx->tail_len) + return FM10K_MBX_ERR_HEAD; + + /* validate that tail is moving correctly */ + if (!tail || (tail == FM10K_MSG_HDR_MASK(TAIL))) + return FM10K_MBX_ERR_TAIL; + if (fm10k_mbx_index_len(mbx, mbx->head, tail) < mbx->mbmem_len) + break; + + return FM10K_MBX_ERR_TAIL; + case FM10K_MSG_CONNECT: + /* validate size is in range and is power of 2 mask */ + if ((size < FM10K_VFMBX_MSG_MTU) || (size & (size + 1))) + return FM10K_MBX_ERR_SIZE; + + /* fall through */ + case FM10K_MSG_ERROR: + if (!head || (head == FM10K_MSG_HDR_MASK(HEAD))) + return FM10K_MBX_ERR_HEAD; + /* neither create nor error include a tail offset */ + if (tail) + return FM10K_MBX_ERR_TAIL; + + break; + default: + return FM10K_MBX_ERR_TYPE; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_create_reply - Generate reply based on state and remote head + * @mbx: pointer to mailbox + * @head: acknowledgement number + * + * This function will generate an outgoing message based on the current + * mailbox state and the remote fifo head. It will return the length + * of the outgoing message excluding header on success, and a negative value + * on error. + **/ +STATIC s32 fm10k_mbx_create_reply(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* update our checksum for the outgoing data */ + fm10k_mbx_update_local_crc(mbx, head); + + /* as long as other end recognizes us keep sending data */ + fm10k_mbx_pull_head(hw, mbx, head); + + /* generate new header based on data */ + if (mbx->tail_len || (mbx->state == FM10K_STATE_OPEN)) + fm10k_mbx_create_data_hdr(mbx); + else + fm10k_mbx_create_disconnect_hdr(mbx); + break; + case FM10K_STATE_CONNECT: + /* send disconnect even if we aren't connected */ + fm10k_mbx_create_connect_hdr(mbx); + break; + case FM10K_STATE_CLOSED: + /* generate new header based on data */ + fm10k_mbx_create_disconnect_hdr(mbx); + default: + break; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_reset_work- Reset internal pointers for any pending work + * @mbx: pointer to mailbox + * + * This function will reset all internal pointers so any work in progress + * is dropped. This call should occur every time we transition from the + * open state to the connect state. 
+ **/ +STATIC void fm10k_mbx_reset_work(struct fm10k_mbx_info *mbx) +{ + /* reset our outgoing max size back to Rx limits */ + mbx->max_size = mbx->rx.size - 1; + + /* just do a quick resync to start of message */ + mbx->pushed = 0; + mbx->pulled = 0; + mbx->tail_len = 0; + mbx->head_len = 0; + mbx->rx.tail = 0; + mbx->rx.head = 0; +} + +/** + * fm10k_mbx_update_max_size - Update the max_size and drop any large messages + * @mbx: pointer to mailbox + * @size: new value for max_size + * + * This function will update the max_size value and drop any outgoing messages + * from the head of the Tx FIFO that are larger than max_size. + **/ +STATIC void fm10k_mbx_update_max_size(struct fm10k_mbx_info *mbx, u16 size) +{ + u16 len; + + DEBUGFUNC("fm10k_mbx_update_max_size_hdr"); + + mbx->max_size = size; + + /* flush any oversized messages from the queue */ + for (len = fm10k_fifo_head_len(&mbx->tx); + len > size; + len = fm10k_fifo_head_len(&mbx->tx)) { + fm10k_fifo_head_drop(&mbx->tx); + mbx->tx_dropped++; + } +} + +/** + * fm10k_mbx_connect_reset - Reset following request for reset + * @mbx: pointer to mailbox + * + * This function resets the mailbox to either a disconnected state + * or a connect state depending on the current mailbox state + **/ +STATIC void fm10k_mbx_connect_reset(struct fm10k_mbx_info *mbx) +{ + /* just do a quick resync to start of frame */ + fm10k_mbx_reset_work(mbx); + + /* reset CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* we cannot exit connect until the size is good */ + if (mbx->state == FM10K_STATE_OPEN) + mbx->state = FM10K_STATE_CONNECT; + else + mbx->state = FM10K_STATE_CLOSED; +} + +/** + * fm10k_mbx_process_connect - Process connect header + * @mbx: pointer to mailbox + * @msg: message array to process + * + * This function will read an incoming connect header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. + **/ +STATIC s32 fm10k_mbx_process_connect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + const u32 *hdr = &mbx->mbx_hdr; + u16 size, head; + + /* we will need to pull all of the fields for verification */ + size = FM10K_MSG_HDR_FIELD_GET(*hdr, CONNECT_SIZE); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + + switch (state) { + case FM10K_STATE_DISCONNECT: + case FM10K_STATE_OPEN: + /* reset any in-progress work */ + fm10k_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* we cannot exit connect until the size is good */ + if (size > mbx->rx.size) { + mbx->max_size = mbx->rx.size - 1; + } else { + /* record the remote system requesting connection */ + mbx->state = FM10K_STATE_OPEN; + + fm10k_mbx_update_max_size(mbx, size); + } + break; + default: + break; + } + + /* align our tail index to remote head index */ + mbx->tail = head; + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_data - Process data header + * @mbx: pointer to mailbox + * + * This function will read an incoming data header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure.
+ **/ +STATIC s32 fm10k_mbx_process_data(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 head, tail; + s32 err; + + DEBUGFUNC("fm10k_mbx_process_data"); + + /* we will need to pull all of the fields for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL); + + /* if we are in connect just update our data and go */ + if (mbx->state == FM10K_STATE_CONNECT) { + mbx->tail = head; + mbx->state = FM10K_STATE_OPEN; + } + + /* abort on message size errors */ + err = fm10k_mbx_push_tail(hw, mbx, tail); + if (err < 0) + return err; + + /* verify the checksum on the incoming data */ + err = fm10k_mbx_verify_remote_crc(mbx); + if (err) + return err; + + /* process messages if we have received any */ + fm10k_mbx_dequeue_rx(hw, mbx); + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_disconnect - Process disconnect header + * @mbx: pointer to mailbox + * + * This function will read an incoming disconnect header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. + **/ +STATIC s32 fm10k_mbx_process_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + const u32 *hdr = &mbx->mbx_hdr; + u16 head, tail; + s32 err; + + /* we will need to pull all of the fields for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL); + + /* We should not be receiving disconnect if Rx is incomplete */ + if (mbx->pushed) + return FM10K_MBX_ERR_TAIL; + + /* we have already verified mbx->head == tail so we know this is 0 */ + mbx->head_len = 0; + + /* verify the checksum on the incoming header is correct */ + err = fm10k_mbx_verify_remote_crc(mbx); + if (err) + return err; + + switch (state) { + case FM10K_STATE_DISCONNECT: + case FM10K_STATE_OPEN: + /* state doesn't change if we still have work to do */ + if (!fm10k_mbx_tx_complete(mbx)) + break; + + /* verify the head indicates we completed all transmits */ + if (head != mbx->tail) + return FM10K_MBX_ERR_HEAD; + + /* reset any in-progress work */ + fm10k_mbx_connect_reset(mbx); + break; + default: + break; + } + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_error - Process error header + * @mbx: pointer to mailbox + * + * This function will read an incoming error header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. 
+ **/ +STATIC s32 fm10k_mbx_process_error(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + s32 err_no; + u16 head; + + /* we will need to pull all of the fields for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + + /* we only have lower 10 bits of error number so add upper bits */ + err_no = FM10K_MSG_HDR_FIELD_GET(*hdr, ERR_NO); + err_no |= ~FM10K_MSG_HDR_MASK(ERR_NO); + + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* flush any uncompleted work */ + fm10k_mbx_reset_work(mbx); + + /* reset CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* reset tail index and size to prepare for reconnect */ + mbx->tail = head; + + /* if open then reset max_size and go back to connect */ + if (mbx->state == FM10K_STATE_OPEN) { + mbx->state = FM10K_STATE_CONNECT; + break; + } + + /* send a connect message to get data flowing again */ + fm10k_mbx_create_connect_hdr(mbx); + return FM10K_SUCCESS; + default: + break; + } + + return fm10k_mbx_create_reply(hw, mbx, mbx->tail); +} + +/** + * fm10k_mbx_process - Process mailbox interrupt + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will process incoming mailbox events and generate mailbox + * replies. It will return a value indicating the number of DWORDs + * transmitted excluding header on success or a negative value on error. + **/ +STATIC s32 fm10k_mbx_process(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + s32 err; + + DEBUGFUNC("fm10k_mbx_process"); + + /* we do not read mailbox if closed */ + if (mbx->state == FM10K_STATE_CLOSED) + return FM10K_SUCCESS; + + /* copy data from mailbox */ + err = fm10k_mbx_read(hw, mbx); + if (err) + return err; + + /* validate type, source, and destination */ + err = fm10k_mbx_validate_msg_hdr(mbx); + if (err < 0) + goto msg_err; + + switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, TYPE)) { + case FM10K_MSG_CONNECT: + err = fm10k_mbx_process_connect(hw, mbx); + break; + case FM10K_MSG_DATA: + err = fm10k_mbx_process_data(hw, mbx); + break; + case FM10K_MSG_DISCONNECT: + err = fm10k_mbx_process_disconnect(hw, mbx); + break; + case FM10K_MSG_ERROR: + err = fm10k_mbx_process_error(hw, mbx); + break; + default: + err = FM10K_MBX_ERR_TYPE; + break; + } + +msg_err: + /* notify partner of errors on our end */ + if (err < 0) + fm10k_mbx_create_error_msg(mbx, err); + + /* copy data to mailbox */ + fm10k_mbx_write(hw, mbx); + + return err; +} + +/** + * fm10k_mbx_disconnect - Shutdown mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will shut down the mailbox. It places the mailbox first + * in the disconnect state, it then allows up to a predefined timeout for + * the mailbox to transition to close on its own. If this does not occur + * then the mailbox will be forced into the closed state. + * + * Any mailbox transactions not completed before calling this function + * are not guaranteed to complete and may be dropped. + **/ +STATIC void fm10k_mbx_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + int timeout = mbx->timeout ?
FM10K_MBX_DISCONNECT_TIMEOUT : 0; + + DEBUGFUNC("fm10k_mbx_disconnect"); + + /* Place mbx in ready to disconnect state */ + mbx->state = FM10K_STATE_DISCONNECT; + + /* trigger interrupt to start shutdown process */ + FM10K_WRITE_MBX(hw, mbx->mbx_reg, FM10K_MBX_REQ | + FM10K_MBX_INTERRUPT_DISABLE); + do { + usec_delay(FM10K_MBX_POLL_DELAY); + mbx->ops.process(hw, mbx); + timeout -= FM10K_MBX_POLL_DELAY; + } while ((timeout > 0) && (mbx->state != FM10K_STATE_CLOSED)); + + /* in case we didn't close just force the mailbox into shutdown */ + fm10k_mbx_connect_reset(mbx); + fm10k_mbx_update_max_size(mbx, 0); + + FM10K_WRITE_MBX(hw, mbx->mbmem_reg, 0); +} + +/** + * fm10k_mbx_connect - Start mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will initiate a mailbox connection. It will populate the + * mailbox with a broadcast connect message and then initialize the lock. + * This is safe since the connect message is a single DWORD so the mailbox + * transaction is guaranteed to be atomic. + * + * This function will return an error if the mailbox has not been initiated + * or is currently in use. + **/ +STATIC s32 fm10k_mbx_connect(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_connect"); + + /* we cannot connect an uninitialized mailbox */ + if (!mbx->rx.buffer) + return FM10K_MBX_ERR_NO_SPACE; + + /* we cannot connect an already connected mailbox */ + if (mbx->state != FM10K_STATE_CLOSED) + return FM10K_MBX_ERR_BUSY; + + /* mailbox timeout can now become active */ + mbx->timeout = FM10K_MBX_INIT_TIMEOUT; + + /* Place mbx in ready to connect state */ + mbx->state = FM10K_STATE_CONNECT; + + /* initialize header of remote mailbox */ + fm10k_mbx_create_disconnect_hdr(mbx); + FM10K_WRITE_MBX(hw, mbx->mbmem_reg ^ mbx->mbmem_len, mbx->mbx_hdr); + + /* enable interrupt and notify other party of new message */ + mbx->mbx_lock = FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT | + FM10K_MBX_INTERRUPT_ENABLE; + + /* generate and load connect header into mailbox */ + fm10k_mbx_create_connect_hdr(mbx); + fm10k_mbx_write(hw, mbx); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_validate_handlers - Validate layout of message parsing data + * @msg_data: handlers for mailbox events + * + * This function validates the layout of the message parsing data. This + * should be mostly static, but it is important to catch any errors that + * are made when constructing the parsers. 
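+ *
+ * As an illustrative sketch (identifiers are placeholders, assuming the
+ * struct fm10k_msg_data layout declared in fm10k_tlv.h), a handler table
+ * that passes these checks looks like:
+ *   static const struct fm10k_msg_data example_handlers[] = {
+ *           { EXAMPLE_MSG_ID, example_attr, example_handler },
+ *           { FM10K_TLV_ERROR, NULL, example_error_handler },
+ *   };
+ * i.e. message IDs strictly increasing, every entry carrying a handler
+ * function, and the list terminated by an FM10K_TLV_ERROR entry that
+ * still provides a handler.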
+ **/ +STATIC s32 fm10k_mbx_validate_handlers(const struct fm10k_msg_data *msg_data) +{ + const struct fm10k_tlv_attr *attr; + unsigned int id; + + DEBUGFUNC("fm10k_mbx_validate_handlers"); + + /* Allow NULL mailboxes that transmit but don't receive */ + if (!msg_data) + return FM10K_SUCCESS; + + while (msg_data->id != FM10K_TLV_ERROR) { + /* all messages should have a function handler */ + if (!msg_data->func) + return FM10K_ERR_PARAM; + + /* parser is optional */ + attr = msg_data->attr; + if (attr) { + while (attr->id != FM10K_TLV_ERROR) { + id = attr->id; + attr++; + /* ID should always be increasing */ + if (id >= attr->id) + return FM10K_ERR_PARAM; + /* ID should fit in results array */ + if (id >= FM10K_TLV_RESULTS_MAX) + return FM10K_ERR_PARAM; + } + + /* verify terminator is in the list */ + if (attr->id != FM10K_TLV_ERROR) + return FM10K_ERR_PARAM; + } + + id = msg_data->id; + msg_data++; + /* ID should always be increasing */ + if (id >= msg_data->id) + return FM10K_ERR_PARAM; + } + + /* verify terminator is in the list */ + if ((msg_data->id != FM10K_TLV_ERROR) || !msg_data->func) + return FM10K_ERR_PARAM; + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_register_handlers - Register a set of handler ops for mailbox + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * + * This function associates a set of message handling ops with a mailbox. + **/ +STATIC s32 fm10k_mbx_register_handlers(struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data) +{ + DEBUGFUNC("fm10k_mbx_register_handlers"); + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + return FM10K_SUCCESS; +} + +/** + * fm10k_pfvf_mbx_init - Initialize mailbox memory for PF/VF mailbox + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * @id: ID reference for PF as it supports up to 64 PF/VF mailboxes + * + * This function initializes the mailbox for use. It will split the + * buffer provided and use that to populate both the Tx and Rx FIFOs by + * evenly splitting it. In order to allow for easy masking of head/tail + * the value reported in size must be a power of 2 and is reported in + * DWORDs, not bytes. Any invalid values will cause the mailbox to return + * an error.
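+ *
+ * An illustrative VF-side call (the handler table name is a placeholder
+ * and not part of this patch) would be:
+ *   err = fm10k_pfvf_mbx_init(hw, mbx, vf_handlers, 0);
+ * after which the mbx->ops.connect()/process() callbacks installed below
+ * drive the state machine described in fm10k_mbx.h.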
+ **/ +s32 fm10k_pfvf_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data, u8 id) +{ + DEBUGFUNC("fm10k_pfvf_mbx_init"); + + /* initialize registers */ + switch (hw->mac.type) { + case fm10k_mac_vf: + mbx->mbx_reg = FM10K_VFMBX; + mbx->mbmem_reg = FM10K_VFMBMEM(FM10K_VFMBMEM_VF_XOR); + break; + case fm10k_mac_pf: + /* there are only 64 VF <-> PF mailboxes */ + if (id < 64) { + mbx->mbx_reg = FM10K_MBX(id); + mbx->mbmem_reg = FM10K_MBMEM_VF(id, 0); + break; + } + /* fall through */ + default: + return FM10K_MBX_ERR_NO_MBX; + } + + /* start out in closed state */ + mbx->state = FM10K_STATE_CLOSED; + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + /* start mailbox as timed out and let the reset_hw call + * set the timeout value to begin communications + */ + mbx->timeout = 0; + mbx->usec_delay = FM10K_MBX_INIT_DELAY; + + /* initialize tail and head */ + mbx->tail = 1; + mbx->head = 1; + + /* initialize CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* Split buffer for use by Tx/Rx FIFOs */ + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + mbx->mbmem_len = FM10K_VFMBMEM_VF_XOR; + + /* initialize the FIFOs, sizes are in 4 byte increments */ + fm10k_fifo_init(&mbx->tx, mbx->buffer, FM10K_MBX_TX_BUFFER_SIZE); + fm10k_fifo_init(&mbx->rx, &mbx->buffer[FM10K_MBX_TX_BUFFER_SIZE], + FM10K_MBX_RX_BUFFER_SIZE); + + /* initialize function pointers */ + mbx->ops.connect = fm10k_mbx_connect; + mbx->ops.disconnect = fm10k_mbx_disconnect; + mbx->ops.rx_ready = fm10k_mbx_rx_ready; + mbx->ops.tx_ready = fm10k_mbx_tx_ready; + mbx->ops.tx_complete = fm10k_mbx_tx_complete; + mbx->ops.enqueue_tx = fm10k_mbx_enqueue_tx; + mbx->ops.process = fm10k_mbx_process; + mbx->ops.register_handlers = fm10k_mbx_register_handlers; + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_create_data_hdr - Generate a mailbox header for local FIFO + * @mbx: pointer to mailbox + * + * This function returns a data mailbox header + **/ +STATIC void fm10k_sm_mbx_create_data_hdr(struct fm10k_mbx_info *mbx) +{ + if (mbx->tail_len) + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(mbx->tail, SM_TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->remote, SM_VER) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, SM_HEAD); +} + +/** + * fm10k_sm_mbx_create_connect_hdr - Generate a mailbox header for local FIFO + * @mbx: pointer to mailbox + * @err: error flags to report if any + * + * This function returns a connection mailbox header + **/ +STATIC void fm10k_sm_mbx_create_connect_hdr(struct fm10k_mbx_info *mbx, u8 err) +{ + if (mbx->local) + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(mbx->tail, SM_TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->remote, SM_VER) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, SM_HEAD) | + FM10K_MSG_HDR_FIELD_SET(err, SM_ERR); +} + +/** + * fm10k_sm_mbx_connect_reset - Reset following request for reset + * @mbx: pointer to mailbox + * + * This function resets the mailbox to a just connected state + **/ +STATIC void fm10k_sm_mbx_connect_reset(struct fm10k_mbx_info *mbx) +{ + /* flush any uncompleted work */ + fm10k_mbx_reset_work(mbx); + + /* set local version to max and remote version to 0 */ + mbx->local = FM10K_SM_MBX_VERSION; + mbx->remote = 0; + + /* initialize tail and head */ + mbx->tail = 1; + mbx->head = 1; + + /* reset state back to connect */ +
mbx->state = FM10K_STATE_CONNECT; +} + +/** + * fm10k_sm_mbx_connect - Start switch manager mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will initiate a mailbox connection with the switch + * manager. To do this it will first disconnect the mailbox, and then + * reconnect it in order to complete a reset of the mailbox. + * + * This function will return an error if the mailbox has not been initiated + * or is currently in use. + **/ +STATIC s32 fm10k_sm_mbx_connect(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_connect"); + + /* we cannot connect an uninitialized mailbox */ + if (!mbx->rx.buffer) + return FM10K_MBX_ERR_NO_SPACE; + + /* we cannot connect an already connected mailbox */ + if (mbx->state != FM10K_STATE_CLOSED) + return FM10K_MBX_ERR_BUSY; + + /* mailbox timeout can now become active */ + mbx->timeout = FM10K_MBX_INIT_TIMEOUT; + + /* Place mbx in ready to connect state */ + mbx->state = FM10K_STATE_CONNECT; + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + + /* reset interface back to connect */ + fm10k_sm_mbx_connect_reset(mbx); + + /* enable interrupt and notify other party of new message */ + mbx->mbx_lock = FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT | + FM10K_MBX_INTERRUPT_ENABLE; + + /* generate and load connect header into mailbox */ + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + fm10k_mbx_write(hw, mbx); + + /* enable interrupt and notify other party of new message */ + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_disconnect - Shutdown mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will shut down the mailbox. It places the mailbox first + * in the disconnect state, it then allows up to a predefined timeout for + * the mailbox to transition to close on its own. If this does not occur + * then the mailbox will be forced into the closed state. + * + * Any mailbox transactions not completed before calling this function + * are not guaranteed to complete and may be dropped. + **/ +STATIC void fm10k_sm_mbx_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + int timeout = mbx->timeout ? FM10K_MBX_DISCONNECT_TIMEOUT : 0; + + DEBUGFUNC("fm10k_sm_mbx_disconnect"); + + /* Place mbx in ready to disconnect state */ + mbx->state = FM10K_STATE_DISCONNECT; + + /* trigger interrupt to start shutdown process */ + FM10K_WRITE_REG(hw, mbx->mbx_reg, FM10K_MBX_REQ | + FM10K_MBX_INTERRUPT_DISABLE); + do { + usec_delay(FM10K_MBX_POLL_DELAY); + mbx->ops.process(hw, mbx); + timeout -= FM10K_MBX_POLL_DELAY; + } while ((timeout > 0) && (mbx->state != FM10K_STATE_CLOSED)); + + /* in case we didn't close just force the mailbox into shutdown */ + mbx->state = FM10K_STATE_CLOSED; + mbx->remote = 0; + fm10k_mbx_reset_work(mbx); + fm10k_mbx_update_max_size(mbx, 0); + + FM10K_WRITE_REG(hw, mbx->mbmem_reg, 0); +} + +/** + * fm10k_mbx_validate_fifo_hdr - Validate fields in the remote FIFO header + * @mbx: pointer to mailbox + * + * This function will parse up the fields in the mailbox header and return + * an error if the header contains any of a number of invalid configurations + * including unrecognized offsets or version numbers. 
+ **/ +STATIC s32 fm10k_sm_mbx_validate_fifo_hdr(struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 tail, head, ver; + + DEBUGFUNC("fm10k_mbx_validate_msg_hdr"); + + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_TAIL); + ver = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_VER); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_HEAD); + + switch (ver) { + case 0: + break; + case FM10K_SM_MBX_VERSION: + if (!head || head > FM10K_SM_MBX_FIFO_LEN) + return FM10K_MBX_ERR_HEAD; + if (!tail || tail > FM10K_SM_MBX_FIFO_LEN) + return FM10K_MBX_ERR_TAIL; + if (mbx->tail < head) + head += mbx->mbmem_len - 1; + if (tail < mbx->head) + tail += mbx->mbmem_len - 1; + if (fm10k_mbx_index_len(mbx, head, mbx->tail) > mbx->tail_len) + return FM10K_MBX_ERR_HEAD; + if (fm10k_mbx_index_len(mbx, mbx->head, tail) < mbx->mbmem_len) + break; + return FM10K_MBX_ERR_TAIL; + default: + return FM10K_MBX_ERR_SRC; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_process_error - Process header with error flag set + * @mbx: pointer to mailbox + * + * This function is meant to respond to a request where the error flag + * is set. As a result we will terminate a connection if one is present + * and fall back into the reset state with a connection header of version + * 0 (RESET). + **/ +STATIC void fm10k_sm_mbx_process_error(struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + + switch (state) { + case FM10K_STATE_DISCONNECT: + /* if there is an error just disconnect */ + mbx->remote = 0; + break; + case FM10K_STATE_OPEN: + /* flush any uncompleted work */ + fm10k_sm_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* try connecting at a lower version */ + if (mbx->remote) { + while (mbx->local > 1) + mbx->local--; + mbx->remote = 0; + } + break; + default: + break; + } + + fm10k_sm_mbx_create_connect_hdr(mbx, 0); +} + +/** + * fm10k_sm_mbx_create_error_msg - Process an error in FIFO hdr + * @mbx: pointer to mailbox + * @err: local error encountered + * + * This function will interpret the error provided by err, and based on + * that it may set the error bit in the local message header + **/ +STATIC void fm10k_sm_mbx_create_error_msg(struct fm10k_mbx_info *mbx, s32 err) +{ + /* only generate an error message for these types */ + switch (err) { + case FM10K_MBX_ERR_TAIL: + case FM10K_MBX_ERR_HEAD: + case FM10K_MBX_ERR_SRC: + case FM10K_MBX_ERR_SIZE: + case FM10K_MBX_ERR_RSVD0: + break; + default: + return; + } + + /* process it as though we received an error, and send error reply */ + fm10k_sm_mbx_process_error(mbx); + fm10k_sm_mbx_create_connect_hdr(mbx, 1); +} + +/** + * fm10k_sm_mbx_receive - Take message from Rx mailbox FIFO and put it in Rx + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will dequeue one message from the Rx switch manager mailbox + * FIFO and place it in the Rx mailbox FIFO for processing by software.
+ **/ +STATIC s32 fm10k_sm_mbx_receive(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, + u16 tail) +{ + /* reduce length by 1 to convert to a mask */ + u16 mbmem_len = mbx->mbmem_len - 1; + s32 err; + + DEBUGFUNC("fm10k_sm_mbx_receive"); + + /* push tail in front of head */ + if (tail < mbx->head) + tail += mbmem_len; + + /* copy data to the Rx FIFO */ + err = fm10k_mbx_push_tail(hw, mbx, tail); + if (err < 0) + return err; + + /* process messages if we have received any */ + fm10k_mbx_dequeue_rx(hw, mbx); + + /* guarantee head aligns with the end of the last message */ + mbx->head = fm10k_mbx_head_sub(mbx, mbx->pushed); + mbx->pushed = 0; + + /* clear any extra bits left over since index adds 1 extra bit */ + if (mbx->head > mbmem_len) + mbx->head -= mbmem_len; + + return err; +} + +/** + * fm10k_sm_mbx_transmit - Take message from Tx and put it in Tx mailbox FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will dequeue one message from the Tx mailbox FIFO and place + * it in the Tx switch manager mailbox FIFO for processing by hardware. + **/ +STATIC void fm10k_sm_mbx_transmit(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + struct fm10k_mbx_fifo *fifo = &mbx->tx; + /* reduce length by 1 to convert to a mask */ + u16 mbmem_len = mbx->mbmem_len - 1; + u16 tail_len, len = 0; + u32 *msg; + + DEBUGFUNC("fm10k_sm_mbx_transmit"); + + /* push head behind tail */ + if (mbx->tail < head) + head += mbmem_len; + + fm10k_mbx_pull_head(hw, mbx, head); + + /* determine msg aligned offset for end of buffer */ + do { + msg = fifo->buffer + fm10k_fifo_head_offset(fifo, len); + tail_len = len; + len += FM10K_TLV_DWORD_LEN(*msg); + } while ((len <= mbx->tail_len) && (len < mbmem_len)); + + /* guarantee we stop on a message boundary */ + if (mbx->tail_len > tail_len) { + mbx->tail = fm10k_mbx_tail_sub(mbx, mbx->tail_len - tail_len); + mbx->tail_len = tail_len; + } + + /* clear any extra bits left over since index adds 1 extra bit */ + if (mbx->tail > mbmem_len) + mbx->tail -= mbmem_len; +} + +/** + * fm10k_sm_mbx_create_reply - Generate reply based on state and remote head + * @mbx: pointer to mailbox + * @head: acknowledgement number + * + * This function will generate an outgoing message based on the current + * mailbox state and the remote fifo head. It will return the length + * of the outgoing message excluding header on success, and a negative value + * on error. + **/ +STATIC void fm10k_sm_mbx_create_reply(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* flush out Tx data */ + fm10k_sm_mbx_transmit(hw, mbx, head); + + /* generate new header based on data */ + if (mbx->tail_len || (mbx->state == FM10K_STATE_OPEN)) { + fm10k_sm_mbx_create_data_hdr(mbx); + } else { + mbx->remote = 0; + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + } + break; + case FM10K_STATE_CONNECT: + case FM10K_STATE_CLOSED: + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + break; + default: + break; + } +} + +/** + * fm10k_sm_mbx_process_reset - Process header with version == 0 (RESET) + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function is meant to respond to a request where the version data + * is set to 0. As such we will either terminate the connection or go + * into the connect state in order to re-establish the connection. 
This + * function can also be used to respond to an error as the connection + * resetting would also be a means of dealing with errors. + **/ +STATIC void fm10k_sm_mbx_process_reset(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + + switch (state) { + case FM10K_STATE_DISCONNECT: + /* drop remote connections and disconnect */ + mbx->state = FM10K_STATE_CLOSED; + mbx->remote = 0; + mbx->local = 0; + break; + case FM10K_STATE_OPEN: + /* flush any incomplete work */ + fm10k_sm_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* Update remote value to match local value */ + mbx->remote = mbx->local; + default: + break; + } + + fm10k_sm_mbx_create_reply(hw, mbx, mbx->tail); +} + +/** + * fm10k_sm_mbx_process_version_1 - Process header with version == 1 + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function is meant to process messages received when the remote + * mailbox is active. + **/ +STATIC s32 fm10k_sm_mbx_process_version_1(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 head, tail; + s32 len; + + /* pull all fields needed for verification */ + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_TAIL); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_HEAD); + + /* if we are in connect and wanting version 1 then start up and go */ + if (mbx->state == FM10K_STATE_CONNECT) { + if (!mbx->remote) + goto send_reply; + if (mbx->remote != 1) + return FM10K_MBX_ERR_SRC; + + mbx->state = FM10K_STATE_OPEN; + } + + do { + /* abort on message size errors */ + len = fm10k_sm_mbx_receive(hw, mbx, tail); + if (len < 0) + return len; + + /* continue until we have flushed the Rx FIFO */ + } while (len); + +send_reply: + fm10k_sm_mbx_create_reply(hw, mbx, head); + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_process - Process mailbox switch mailbox interrupt + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will process incoming mailbox events and generate mailbox + * replies. It will return a value indicating the number of DWORDs + * transmitted excluding header on success or a negative value on error. 
+ **/ +STATIC s32 fm10k_sm_mbx_process(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + s32 err; + + DEBUGFUNC("fm10k_sm_mbx_process"); + + /* we do not read mailbox if closed */ + if (mbx->state == FM10K_STATE_CLOSED) + return FM10K_SUCCESS; + + /* retrieve data from switch manager */ + err = fm10k_mbx_read(hw, mbx); + if (err) + return err; + + err = fm10k_sm_mbx_validate_fifo_hdr(mbx); + if (err < 0) + goto fifo_err; + + if (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_ERR)) { + fm10k_sm_mbx_process_error(mbx); + goto fifo_err; + } + + switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_VER)) { + case 0: + fm10k_sm_mbx_process_reset(hw, mbx); + break; + case FM10K_SM_MBX_VERSION: + err = fm10k_sm_mbx_process_version_1(hw, mbx); + break; + } + +fifo_err: + if (err < 0) + fm10k_sm_mbx_create_error_msg(mbx, err); + + /* report data to switch manager */ + fm10k_mbx_write(hw, mbx); + + return err; +} + +/** + * fm10k_sm_mbx_init - Initialize mailbox memory for PF/SM mailbox + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * + * This function for now is used to stub out the PF/SM mailbox + **/ +s32 fm10k_sm_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data) +{ + DEBUGFUNC("fm10k_sm_mbx_init"); + UNREFERENCED_1PARAMETER(hw); + + mbx->mbx_reg = FM10K_GMBX; + mbx->mbmem_reg = FM10K_MBMEM_PF(0); + + /* start out in closed state */ + mbx->state = FM10K_STATE_CLOSED; + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + /* start mailbox as timed out and let the reset_hw call + * set the timeout value to begin communications + */ + mbx->timeout = 0; + mbx->usec_delay = FM10K_MBX_INIT_DELAY; + + /* Split buffer for use by Tx/Rx FIFOs */ + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + mbx->mbmem_len = FM10K_MBMEM_PF_XOR; + + /* initialize the FIFOs, sizes are in 4 byte increments */ + fm10k_fifo_init(&mbx->tx, mbx->buffer, FM10K_MBX_TX_BUFFER_SIZE); + fm10k_fifo_init(&mbx->rx, &mbx->buffer[FM10K_MBX_TX_BUFFER_SIZE], + FM10K_MBX_RX_BUFFER_SIZE); + + /* initialize function pointers */ + mbx->ops.connect = fm10k_sm_mbx_connect; + mbx->ops.disconnect = fm10k_sm_mbx_disconnect; + mbx->ops.rx_ready = fm10k_mbx_rx_ready; + mbx->ops.tx_ready = fm10k_mbx_tx_ready; + mbx->ops.tx_complete = fm10k_mbx_tx_complete; + mbx->ops.enqueue_tx = fm10k_mbx_enqueue_tx; + mbx->ops.process = fm10k_sm_mbx_process; + mbx->ops.register_handlers = fm10k_mbx_register_handlers; + + return FM10K_SUCCESS; +} diff --git a/lib/librte_pmd_fm10k/SHARED/fm10k_mbx.h b/lib/librte_pmd_fm10k/SHARED/fm10k_mbx.h new file mode 100644 index 0000000..e54cba4 --- /dev/null +++ b/lib/librte_pmd_fm10k/SHARED/fm10k_mbx.h @@ -0,0 +1,329 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. 
Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_MBX_H_ +#define _FM10K_MBX_H_ + +/* forward declaration */ +struct fm10k_mbx_info; + +#include "fm10k_type.h" +#include "fm10k_tlv.h" + +/* PF Mailbox Registers */ +#define FM10K_MBMEM(_n) ((_n) + 0x18000) +#define FM10K_MBMEM_VF(_n, _m) (((_n) * 0x10) + (_m) + 0x18000) +#define FM10K_MBMEM_SM(_n) ((_n) + 0x18400) +#define FM10K_MBMEM_PF(_n) ((_n) + 0x18600) +/* XOR provides means of switching from Tx to Rx FIFO */ +#define FM10K_MBMEM_PF_XOR (FM10K_MBMEM_SM(0) ^ FM10K_MBMEM_PF(0)) +#define FM10K_MBX(_n) ((_n) + 0x18800) +#define FM10K_MBX_OWNER 0x00000001 +#define FM10K_MBX_REQ 0x00000002 +#define FM10K_MBX_ACK 0x00000004 +#define FM10K_MBX_REQ_INTERRUPT 0x00000008 +#define FM10K_MBX_ACK_INTERRUPT 0x00000010 +#define FM10K_MBX_INTERRUPT_ENABLE 0x00000020 +#define FM10K_MBX_INTERRUPT_DISABLE 0x00000040 +#define FM10K_MBICR(_n) ((_n) + 0x18840) +#define FM10K_GMBX 0x18842 + +/* VF Mailbox Registers */ +#define FM10K_VFMBX 0x00010 +#define FM10K_VFMBMEM(_n) ((_n) + 0x00020) +#define FM10K_VFMBMEM_LEN 16 +#define FM10K_VFMBMEM_VF_XOR (FM10K_VFMBMEM_LEN / 2) + +/* Delays/timeouts */ +#define FM10K_MBX_DISCONNECT_TIMEOUT 500 +#define FM10K_MBX_POLL_DELAY 19 +#define FM10K_MBX_INT_DELAY 20 + +#define FM10K_WRITE_MBX(hw, reg, value) FM10K_WRITE_REG(hw, reg, value) + +/* PF/VF Mailbox state machine + * + * +----------+ connect() +----------+ + * | CLOSED | --------------> | CONNECT | + * +----------+ +----------+ + * ^ ^ | + * | rcv: rcv: | | rcv: + * | Connect Disconnect | | Connect + * | Disconnect Error | | Data + * | | | + * | | V + * +----------+ disconnect() +----------+ + * |DISCONNECT| <-------------- | OPEN | + * +----------+ +----------+ + * + * The diagram above describes the PF/VF mailbox state machine. There + * are four main states to this machine. + * Closed: This state represents a mailbox that is in a standby state + * with interrupts disabled. In this state the mailbox should not + * read the mailbox or write any data. The only means of exiting + * this state is for the system to make the connect() call for the + * mailbox, it will then transition to the connect state. + * Connect: In this state the mailbox is seeking a connection. It will + * post a connect message with no specified destination and will + * wait for a reply from the other side of the mailbox. This state + * is exited when either a connect with the local mailbox as the + * destination is received or when a data message is received with + * a valid sequence number. 
+ * Open: In this state the mailbox is able to transfer data between the local + * entity and the remote. It will fall back to connect in the event of + * receiving either an error message, or a disconnect message. It will + * transition to disconnect on a call to disconnect(); + * Disconnect: In this state the mailbox is attempting to gracefully terminate + * the connection. It will do so at the first point where it knows + * that the remote endpoint is either done sending, or when the + * remote endpoint has fallen back into connect. + */ +enum fm10k_mbx_state { + FM10K_STATE_CLOSED, + FM10K_STATE_CONNECT, + FM10K_STATE_OPEN, + FM10K_STATE_DISCONNECT, +}; + +/* PF/VF Mailbox header format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | Size/Err_no/CRC | Rsvd0 | Head | Tail | Type | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * The layout above describes the format for the header used in the PF/VF + * mailbox. The header is broken out into the following fields: + * Type: There are 4 supported message types + * 0x8: Data header - used to transport message data + * 0xC: Connect header - used to establish connection + * 0xD: Disconnect header - used to tear down a connection + * 0xE: Error header - used to address message exceptions + * Tail: Tail index for local FIFO + * Tail index actually consists of two parts. The MSB of + * the head is a loop tracker, it is 0 on an even numbered + * loop through the FIFO, and 1 on the odd numbered loops. + * To get the actual mailbox offset based on the tail it + * is necessary to add bit 3 to bit 0 and clear bit 3. This + * gives us a valid range of 0x1 - 0xE. + * Head: Head index for remote FIFO + * Head index follows the same format as the tail index. + * Rsvd0: Reserved 0 portion of the mailbox header + * CRC: Running CRC for all data since connect plus current message header + * Size: Maximum message size - Applies only to connect headers + * The maximum message size is provided during connect to avoid + * jamming the mailbox with messages that do not fit. + * Err_no: Error number - Applies only to error headers + * The error number provides a indication of the type of error + * experienced. 
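+ *
+ * As a concrete example, the connect header assembled by
+ * fm10k_mbx_create_connect_hdr() in fm10k_mbx.c is
+ *   FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_CONNECT, TYPE) |
+ *   FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD) |
+ *   FM10K_MSG_HDR_FIELD_SET(mbx->rx.size - 1, CONNECT_SIZE)
+ * i.e. type 0xC in bits 3:0, the head index in bits 11:8 and the maximum
+ * message size in bits 31:16 of the header DWORD.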
+ */ + +/* macros for retriving and setting header values */ +#define FM10K_MSG_HDR_MASK(name) \ + ((0x1u << FM10K_MSG_##name##_SIZE) - 1) +#define FM10K_MSG_HDR_FIELD_SET(value, name) \ + (((u32)(value) & FM10K_MSG_HDR_MASK(name)) << FM10K_MSG_##name##_SHIFT) +#define FM10K_MSG_HDR_FIELD_GET(value, name) \ + ((u16)((value) >> FM10K_MSG_##name##_SHIFT) & FM10K_MSG_HDR_MASK(name)) + +/* offsets shared between all headers */ +#define FM10K_MSG_TYPE_SHIFT 0 +#define FM10K_MSG_TYPE_SIZE 4 +#define FM10K_MSG_TAIL_SHIFT 4 +#define FM10K_MSG_TAIL_SIZE 4 +#define FM10K_MSG_HEAD_SHIFT 8 +#define FM10K_MSG_HEAD_SIZE 4 +#define FM10K_MSG_RSVD0_SHIFT 12 +#define FM10K_MSG_RSVD0_SIZE 4 + +/* offsets for data/disconnect headers */ +#define FM10K_MSG_CRC_SHIFT 16 +#define FM10K_MSG_CRC_SIZE 16 + +/* offsets for connect headers */ +#define FM10K_MSG_CONNECT_SIZE_SHIFT 16 +#define FM10K_MSG_CONNECT_SIZE_SIZE 16 + +/* offsets for error headers */ +#define FM10K_MSG_ERR_NO_SHIFT 16 +#define FM10K_MSG_ERR_NO_SIZE 16 + +enum fm10k_msg_type { + FM10K_MSG_DATA = 0x8, + FM10K_MSG_CONNECT = 0xC, + FM10K_MSG_DISCONNECT = 0xD, + FM10K_MSG_ERROR = 0xE, +}; + +/* HNI/SM Mailbox FIFO format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-------+-----------------------+-------+-----------------------+ + * | Error | Remote Head |Version| Local Tail | + * +-------+-----------------------+-------+-----------------------+ + * | | + * . Local FIFO Data . + * . . + * +-------+-----------------------+-------+-----------------------+ + * + * The layout above describes the format for the FIFOs used by the host + * network interface and the switch manager to communicate messages back + * and forth. Both the HNI and the switch maintain one such FIFO. The + * layout in memory has the switch manager FIFO followed immediately by + * the HNI FIFO. For this reason I am using just the pointer to the + * HNI FIFO in the mailbox ops as the offset between the two is fixed. + * + * The header for the FIFO is broken out into the following fields: + * Local Tail: Offset into FIFO region for next DWORD to write. + * Version: Version info for mailbox, only values of 0/1 are supported. + * Remote Head: Offset into remote FIFO to indicate how much we have read. + * Error: Error indication, values TBD. + */ + +/* version number for switch manager mailboxes */ +#define FM10K_SM_MBX_VERSION 1 +#define FM10K_SM_MBX_FIFO_LEN (FM10K_MBMEM_PF_XOR - 1) +#define FM10K_SM_MBX_FIFO_HDR_LEN 1 + +/* offsets shared between all SM FIFO headers */ +#define FM10K_MSG_SM_TAIL_SHIFT 0 +#define FM10K_MSG_SM_TAIL_SIZE 12 +#define FM10K_MSG_SM_VER_SHIFT 12 +#define FM10K_MSG_SM_VER_SIZE 4 +#define FM10K_MSG_SM_HEAD_SHIFT 16 +#define FM10K_MSG_SM_HEAD_SIZE 12 +#define FM10K_MSG_SM_ERR_SHIFT 28 +#define FM10K_MSG_SM_ERR_SIZE 4 + +/* All error messages returned by mailbox functions + * The value -511 is 0xFE01 in hex. The idea is to order the errors + * from 0xFE01 - 0xFEFF so error codes are easily visible in the mailbox + * messages. This also helps to avoid error number collisions as Linux + * doesn't appear to use error numbers 256 - 511. 
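+ * For example, FM10K_MBX_ERR_CRC below evaluates to 0x0F - 512 = -497,
+ * which is 0xFE0F when viewed as a 16 bit value, keeping it inside the
+ * 0xFE01 - 0xFEFF range described above.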
+ */ +#define FM10K_MBX_ERR(_n) ((_n) - 512) +#define FM10K_MBX_ERR_NO_MBX FM10K_MBX_ERR(0x01) +#define FM10K_MBX_ERR_NO_MSG FM10K_MBX_ERR(0x02) +#define FM10K_MBX_ERR_NO_SPACE FM10K_MBX_ERR(0x03) +#define FM10K_MBX_ERR_LOCK FM10K_MBX_ERR(0x04) +#define FM10K_MBX_ERR_TAIL FM10K_MBX_ERR(0x05) +#define FM10K_MBX_ERR_HEAD FM10K_MBX_ERR(0x06) +#define FM10K_MBX_ERR_DST FM10K_MBX_ERR(0x07) +#define FM10K_MBX_ERR_SRC FM10K_MBX_ERR(0x08) +#define FM10K_MBX_ERR_TYPE FM10K_MBX_ERR(0x09) +#define FM10K_MBX_ERR_LEN FM10K_MBX_ERR(0x0A) +#define FM10K_MBX_ERR_SIZE FM10K_MBX_ERR(0x0B) +#define FM10K_MBX_ERR_BUSY FM10K_MBX_ERR(0x0C) +#define FM10K_MBX_ERR_VALUE FM10K_MBX_ERR(0x0D) +#define FM10K_MBX_ERR_RSVD0 FM10K_MBX_ERR(0x0E) +#define FM10K_MBX_ERR_CRC FM10K_MBX_ERR(0x0F) + +#define FM10K_MBX_CRC_SEED 0xFFFF + +struct fm10k_mbx_ops { + s32 (*connect)(struct fm10k_hw *, struct fm10k_mbx_info *); + void (*disconnect)(struct fm10k_hw *, struct fm10k_mbx_info *); + bool (*rx_ready)(struct fm10k_mbx_info *); + bool (*tx_ready)(struct fm10k_mbx_info *, u16); + bool (*tx_complete)(struct fm10k_mbx_info *); + s32 (*enqueue_tx)(struct fm10k_hw *, struct fm10k_mbx_info *, + const u32 *); + s32 (*process)(struct fm10k_hw *, struct fm10k_mbx_info *); + s32 (*register_handlers)(struct fm10k_mbx_info *, + const struct fm10k_msg_data *); +}; + +struct fm10k_mbx_fifo { + u32 *buffer; + u16 head; + u16 tail; + u16 size; +}; + +/* size of buffer to be stored in mailbox for FIFOs */ +#define FM10K_MBX_TX_BUFFER_SIZE 512 +#define FM10K_MBX_RX_BUFFER_SIZE 128 +#define FM10K_MBX_BUFFER_SIZE \ + (FM10K_MBX_TX_BUFFER_SIZE + FM10K_MBX_RX_BUFFER_SIZE) + +/* minimum and maximum message size in dwords */ +#define FM10K_MBX_MSG_MAX_SIZE \ + ((FM10K_MBX_TX_BUFFER_SIZE - 1) & (FM10K_MBX_RX_BUFFER_SIZE - 1)) +#define FM10K_VFMBX_MSG_MTU ((FM10K_VFMBMEM_LEN / 2) - 1) + +#define FM10K_MBX_INIT_TIMEOUT 2000 /* number of retries on mailbox */ +#define FM10K_MBX_INIT_DELAY 500 /* microseconds between retries */ + +struct fm10k_mbx_info { + /* function pointers for mailbox operations */ + struct fm10k_mbx_ops ops; + const struct fm10k_msg_data *msg_data; + + /* message FIFOs */ + struct fm10k_mbx_fifo rx; + struct fm10k_mbx_fifo tx; + + /* delay for handling timeouts */ + u32 timeout; + u32 usec_delay; + + /* mailbox state info */ + u32 mbx_reg, mbmem_reg, mbx_lock, mbx_hdr; + u16 max_size, mbmem_len; + u16 tail, tail_len, pulled; + u16 head, head_len, pushed; + u16 local, remote; + enum fm10k_mbx_state state; + + /* result of last mailbox test */ + s32 test_result; + + /* statistics */ + u64 tx_busy; + u64 tx_dropped; + u64 tx_messages; + u64 tx_dwords; + u64 rx_messages; + u64 rx_dwords; + u64 rx_parse_err; + + /* Buffer to store messages */ + u32 buffer[FM10K_MBX_BUFFER_SIZE]; +}; + +s32 fm10k_pfvf_mbx_init(struct fm10k_hw *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *, u8); +s32 fm10k_sm_mbx_init(struct fm10k_hw *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *); + +#endif /* _FM10K_MBX_H_ */ diff --git a/lib/librte_pmd_fm10k/SHARED/fm10k_osdep.h b/lib/librte_pmd_fm10k/SHARED/fm10k_osdep.h new file mode 100644 index 0000000..06b0cdc --- /dev/null +++ b/lib/librte_pmd_fm10k/SHARED/fm10k_osdep.h @@ -0,0 +1,116 @@ +/******************************************************************************* + +Copyright (c) 2001-2012, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. 
Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_OSDEP_H_ +#define _FM10K_OSDEP_H_ + +#include <stdint.h> +#include <string.h> +#include <rte_atomic.h> +#include <rte_byteorder.h> +#include <rte_cycles.h> +#include "../fm10k_logs.h" + +/* TODO: this does not look like it should be used... */ +#define ERROR_REPORT2(v1, v2, v3) do { } while (0) + +#define STATIC static +#define DEBUGFUNC(F) DEBUGOUT(F); +#define DEBUGOUT(S, args...) PMD_LOG(DEBUG, S, ##args) +#define DEBUGOUT1(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT2(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT3(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT6(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT7(S, args...) 
DEBUGOUT(S, ##args) + +#define FALSE 0 +#define TRUE 1 +#ifndef false +#define false FALSE +#endif +#ifndef true +#define true TRUE +#endif + +typedef uint8_t u8; +typedef int8_t s8; +typedef uint16_t u16; +typedef int16_t s16; +typedef uint32_t u32; +typedef int32_t s32; +typedef int64_t s64; +typedef uint64_t u64; +typedef int bool; + +#ifndef __le16 +#define __le16 u16 +#define __le32 u32 +#define __le64 u64 +#endif +#ifndef __be16 +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 +#endif + +/* offsets are WORD offsets, not BYTE offsets */ +#define FM10K_WRITE_REG(hw, reg, val) \ + ((((volatile uint32_t *)(hw)->hw_addr)[(reg)]) = (uint32_t)(val)) +#define FM10K_READ_REG(hw, reg) \ + (((volatile uint32_t *)(hw)->hw_addr)[(reg)]) +#define FM10K_WRITE_FLUSH(a) FM10K_READ_REG(a, FM10K_CTRL) + +#define FM10K_PCI_REG(reg) (*((volatile uint32_t *)(reg))) + +#define FM10K_PCI_REG_WRITE(reg, value) (FM10K_PCI_REG((reg)) = (value)) + +/* not implemented */ +#define FM10K_READ_PCI_WORD(hw, reg) 0 + +#define FM10K_WRITE_MBX(hw, reg, value) FM10K_WRITE_REG(hw, reg, value) +#define FM10K_READ_MBX(hw, reg) FM10K_READ_REG(hw, reg) + +#define FM10K_LE16_TO_CPU rte_le_to_cpu_16 +#define FM10K_LE32_TO_CPU rte_le_to_cpu_32 +#define FM10K_CPU_TO_LE32 rte_cpu_to_le_32 +#define FM10K_CPU_TO_LE16 rte_cpu_to_le_16 + +#define FM10K_RMB rte_rmb +#define FM10K_WMB rte_wmb + +#define usec_delay rte_delay_us + +#define FM10K_REMOVED(hw_addr) (!(hw_addr)) + + +#endif /* _FM10K_OSDEP_H_ */ diff --git a/lib/librte_pmd_fm10k/SHARED/fm10k_pf.c b/lib/librte_pmd_fm10k/SHARED/fm10k_pf.c new file mode 100644 index 0000000..87ed7d4 --- /dev/null +++ b/lib/librte_pmd_fm10k/SHARED/fm10k_pf.c @@ -0,0 +1,1877 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#include "fm10k_pf.h" +#include "fm10k_vf.h" + +/** + * fm10k_reset_hw_pf - PF hardware reset + * @hw: pointer to hardware structure + * + * This function should return the hardware to a state similar to the + * one it is in after being powered on. + **/ +STATIC s32 fm10k_reset_hw_pf(struct fm10k_hw *hw) +{ + s32 err; + u32 reg; + u16 i; + + DEBUGFUNC("fm10k_reset_hw_pf"); + + /* Disable interrupts */ + FM10K_WRITE_REG(hw, FM10K_EIMR, FM10K_EIMR_DISABLE(ALL)); + + /* Lock ITR2 reg 0 into itself and disable interrupt moderation */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), 0); + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, 0); + + /* We assume here Tx and Rx queue 0 are owned by the PF */ + + /* Shut off VF access to their queues forcing them to queue 0 */ + for (i = 0; i < FM10K_TQMAP_TABLE_SIZE; i++) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(i), 0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(i), 0); + } + + /* shut down all rings */ + err = fm10k_disable_queues_generic(hw, FM10K_MAX_QUEUES); + if (err) + return err; + + /* Verify that DMA is no longer active */ + reg = FM10K_READ_REG(hw, FM10K_DMA_CTRL); + if (reg & (FM10K_DMA_CTRL_TX_ACTIVE | FM10K_DMA_CTRL_RX_ACTIVE)) + return FM10K_ERR_DMA_PENDING; + + /* verify the switch is ready for reset */ + reg = FM10K_READ_REG(hw, FM10K_DMA_CTRL2); + if (!(reg & FM10K_DMA_CTRL2_SWITCH_READY)) + goto out; + + /* Inititate data path reset */ + reg |= FM10K_DMA_CTRL_DATAPATH_RESET; + FM10K_WRITE_REG(hw, FM10K_DMA_CTRL, reg); + + /* Flush write and allow 100us for reset to complete */ + FM10K_WRITE_FLUSH(hw); + usec_delay(FM10K_RESET_TIMEOUT); + + /* Verify we made it out of reset */ + reg = FM10K_READ_REG(hw, FM10K_IP); + if (!(reg & FM10K_IP_NOTINRESET)) + err = FM10K_ERR_RESET_FAILED; + +out: + return err; +} + +/** + * fm10k_is_ari_hierarchy_pf - Indicate ARI hierarchy support + * @hw: pointer to hardware structure + * + * Looks at the ARI hierarchy bit to determine whether ARI is supported or not. 
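+ * ARI (Alternative Routing-ID Interpretation) allows the endpoint to expose + * more than eight functions, which is why fm10k_init_hw_pf() below advertises + * 64 total VFs when the VFARI bit is set and only 7 otherwise.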
+ **/ +STATIC bool fm10k_is_ari_hierarchy_pf(struct fm10k_hw *hw) +{ + u16 sriov_ctrl = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_SRIOV_CTRL); + + DEBUGFUNC("fm10k_is_ari_hierarchy_pf"); + + return !!(sriov_ctrl & FM10K_PCIE_SRIOV_CTRL_VFARI); +} + +/** + * fm10k_init_hw_pf - PF hardware initialization + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_init_hw_pf(struct fm10k_hw *hw) +{ + u32 dma_ctrl, txqctl; + u16 i; + + DEBUGFUNC("fm10k_init_hw_pf"); + + /* Establish default VSI as valid */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(fm10k_dglort_default), 0); + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(fm10k_dglort_default), + FM10K_DGLORTMAP_ANY); + + /* Invalidate all other GLORT entries */ + for (i = 1; i < FM10K_DGLORT_COUNT; i++) + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(i), FM10K_DGLORTMAP_NONE); + + /* reset ITR2(0) to point to itself */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), 0); + + /* reset VF ITR2(0) to point to 0 avoid PF registers */ + FM10K_WRITE_REG(hw, FM10K_ITR2(FM10K_ITR_REG_COUNT_PF), 0); + + /* loop through all PF ITR2 registers pointing them to the previous */ + for (i = 1; i < FM10K_ITR_REG_COUNT_PF; i++) + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - 1); + + /* Enable interrupt moderator if not already enabled */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, FM10K_INT_CTRL_ENABLEMODERATOR); + + /* compute the default txqctl configuration */ + txqctl = FM10K_TXQCTL_PF | FM10K_TXQCTL_UNLIMITED_BW | + (hw->mac.default_vid << FM10K_TXQCTL_VID_SHIFT); + + for (i = 0; i < FM10K_MAX_QUEUES; i++) { + /* configure rings for 256 Queue / 32 Descriptor cache mode */ + FM10K_WRITE_REG(hw, FM10K_TQDLOC(i), + (i * FM10K_TQDLOC_BASE_32_DESC) | + FM10K_TQDLOC_SIZE_32_DESC); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), txqctl); + + /* configure rings to provide TPH processing hints */ + FM10K_WRITE_REG(hw, FM10K_TPH_TXCTRL(i), + FM10K_TPH_TXCTRL_DESC_TPHEN | + FM10K_TPH_TXCTRL_DESC_RROEN | + FM10K_TPH_TXCTRL_DESC_WROEN | + FM10K_TPH_TXCTRL_DATA_RROEN); + FM10K_WRITE_REG(hw, FM10K_TPH_RXCTRL(i), + FM10K_TPH_RXCTRL_DESC_TPHEN | + FM10K_TPH_RXCTRL_DESC_RROEN | + FM10K_TPH_RXCTRL_DATA_WROEN | + FM10K_TPH_RXCTRL_HDR_WROEN); + } + + /* set max hold interval to align with 1.024 usec in all modes */ + switch (hw->bus.speed) { + case fm10k_bus_speed_2500: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN1; + break; + case fm10k_bus_speed_5000: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN2; + break; + case fm10k_bus_speed_8000: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN3; + break; + default: + dma_ctrl = 0; + break; + } + + /* Configure TSO flags */ + FM10K_WRITE_REG(hw, FM10K_DTXTCPFLGL, FM10K_TSO_FLAGS_LOW); + FM10K_WRITE_REG(hw, FM10K_DTXTCPFLGH, FM10K_TSO_FLAGS_HI); + + /* Enable DMA engine + * Set Rx Descriptor size to 32 + * Set Minimum MSS to 64 + * Set Maximum number of Rx queues to 256 / 32 Descriptor + */ + dma_ctrl |= FM10K_DMA_CTRL_TX_ENABLE | FM10K_DMA_CTRL_RX_ENABLE | + FM10K_DMA_CTRL_RX_DESC_SIZE | FM10K_DMA_CTRL_MINMSS_64 | + FM10K_DMA_CTRL_32_DESC; + + FM10K_WRITE_REG(hw, FM10K_DMA_CTRL, dma_ctrl); + + /* record maximum queue count, we limit ourselves to 128 */ + hw->mac.max_queues = FM10K_MAX_QUEUES_PF; + + /* We support either 64 VFs or 7 VFs depending on if we have ARI */ + hw->iov.total_vfs = fm10k_is_ari_hierarchy_pf(hw) ? 
64 : 7; + + return FM10K_SUCCESS; +} + +/** + * fm10k_is_slot_appropriate_pf - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. + **/ +STATIC bool fm10k_is_slot_appropriate_pf(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_is_slot_appropriate_pf"); + + return (hw->bus.speed == hw->bus_caps.speed) && + (hw->bus.width == hw->bus_caps.width); +} + +/** + * fm10k_update_vlan_pf - Update status of VLAN ID in VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @vsi: Index indicating VF ID or PF ID in table + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for the corresponding function. In addition to the + * standard set/clear that supports one bit a multi-bit write is + * supported to set 64 bits at a time. + **/ +STATIC s32 fm10k_update_vlan_pf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set) +{ + u32 vlan_table, reg, mask, bit, len; + + /* verify the VSI index is valid */ + if (vsi > FM10K_VLAN_TABLE_VSI_MAX) + return FM10K_ERR_PARAM; + + /* VLAN multi-bit write: + * The multi-bit write has several parts to it. + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | RSVD0 | Length |C|RSVD0| VLAN ID | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * VLAN ID: Vlan Starting value + * RSVD0: Reserved section, must be 0 + * C: Flag field, 0 is set, 1 is clear (Used in VF VLAN message) + * Length: Number of times to repeat the bit being set + */ + len = vid >> 16; + vid = (vid << 17) >> 17; + + /* verify the reserved 0 fields are 0 */ + if (len >= FM10K_VLAN_TABLE_VID_MAX || + vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* Loop through the table updating all required VLANs */ + for (reg = FM10K_VLAN_TABLE(vsi, vid / 32), bit = vid % 32; + len < FM10K_VLAN_TABLE_VID_MAX; + len -= 32 - bit, reg++, bit = 0) { + /* record the initial state of the register */ + vlan_table = FM10K_READ_REG(hw, reg); + + /* truncate mask if we are at the start or end of the run */ + mask = (~(u32)0 >> ((len < 31) ? 31 - len : 0)) << bit; + + /* make necessary modifications to the register */ + mask &= set ? ~vlan_table : vlan_table; + if (mask) + FM10K_WRITE_REG(hw, reg, vlan_table ^ mask); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_read_mac_addr_pf - Read device MAC address + * @hw: pointer to the HW structure + * + * Reads the device MAC address from the SM_AREA and stores the value. 
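+ * The address is split across two SM_AREA words: for a MAC of aa:bb:cc:dd:ee:ff, + * SM_AREA(1) holds 0xaabbccFF and SM_AREA(0) holds 0xFFddeeff, with the + * all-ones byte in each word acting as a validity marker.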
+ **/ +STATIC s32 fm10k_read_mac_addr_pf(struct fm10k_hw *hw) +{ + u8 perm_addr[ETH_ALEN]; + u32 serial_num; + int i; + + DEBUGFUNC("fm10k_read_mac_addr_pf"); + + serial_num = FM10K_READ_REG(hw, FM10K_SM_AREA(1)); + + /* last byte should be all 1's */ + if ((~serial_num) << 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[0] = (u8)(serial_num >> 24); + perm_addr[1] = (u8)(serial_num >> 16); + perm_addr[2] = (u8)(serial_num >> 8); + + serial_num = FM10K_READ_REG(hw, FM10K_SM_AREA(0)); + + /* first byte should be all 1's */ + if ((~serial_num) >> 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[3] = (u8)(serial_num >> 16); + perm_addr[4] = (u8)(serial_num >> 8); + perm_addr[5] = (u8)(serial_num); + + for (i = 0; i < ETH_ALEN; i++) { + hw->mac.perm_addr[i] = perm_addr[i]; + hw->mac.addr[i] = perm_addr[i]; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_glort_valid_pf - Validate that the provided glort is valid + * @hw: pointer to the HW structure + * @glort: base glort to be validated + * + * This function will return false if the provided glort is invalid + **/ +bool fm10k_glort_valid_pf(struct fm10k_hw *hw, u16 glort) +{ + glort &= hw->mac.dglort_map >> FM10K_DGLORTMAP_MASK_SHIFT; + + return glort == (hw->mac.dglort_map & FM10K_DGLORTMAP_NONE); +} + +/** + * fm10k_update_xc_addr_pf - Update device addresses + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure + * + * This function generates a message to the Switch API requesting + * that the given logical port add/remove the given L2 MAC/VLAN address. + **/ +STATIC s32 fm10k_update_xc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + struct fm10k_mac_update mac_update; + u32 msg[5]; + + DEBUGFUNC("fm10k_update_xc_addr_pf"); + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* drop upper 4 bits of VLAN ID */ + vid = (vid << 4) >> 4; + + /* record fields */ + mac_update.mac_lower = FM10K_CPU_TO_LE32(((u32)mac[2] << 24) | + ((u32)mac[3] << 16) | + ((u32)mac[4] << 8) | + ((u32)mac[5])); + mac_update.mac_upper = FM10K_CPU_TO_LE16(((u32)mac[0] << 8) | + ((u32)mac[1])); + mac_update.vlan = FM10K_CPU_TO_LE16(vid); + mac_update.glort = FM10K_CPU_TO_LE16(glort); + mac_update.action = add ? 0 : 1; + mac_update.flags = flags; + + /* populate mac_update fields */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_UPDATE_MAC_FWD_RULE); + fm10k_tlv_attr_put_le_struct(msg, FM10K_PF_ATTR_ID_MAC_UPDATE, + &mac_update, sizeof(mac_update)); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_uc_addr_pf - Update device unicast addresses + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure + * + * This function is used to add or remove unicast addresses for + * the PF.
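+ * It simply validates the address and then defers to fm10k_update_xc_addr_pf() + * above, which packs bytes 0-1 of the MAC into mac_upper and bytes 2-5 into + * mac_lower before enqueueing the UPDATE_MAC_FWD_RULE mailbox message.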
+ **/ +STATIC s32 fm10k_update_uc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + DEBUGFUNC("fm10k_update_uc_addr_pf"); + + /* verify MAC address is valid */ + if (!FM10K_IS_VALID_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + return fm10k_update_xc_addr_pf(hw, glort, mac, vid, add, flags); +} + +/** + * fm10k_update_mc_addr_pf - Update device multicast addresses + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses for + * the PF. + **/ +STATIC s32 fm10k_update_mc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add) +{ + DEBUGFUNC("fm10k_update_mc_addr_pf"); + + /* verify multicast address is valid */ + if (!FM10K_IS_MULTICAST_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + return fm10k_update_xc_addr_pf(hw, glort, mac, vid, add, 0); +} + +/** + * fm10k_update_xcast_mode_pf - Request update of multicast mode + * @hw: pointer to hardware structure + * @glort: base resource tag for this request + * @mode: integer value indicating mode being requested + * + * This function will attempt to request a higher mode for the port + * so that it can enable either multicast, multicast promiscuous, or + * promiscuous mode of operation. + **/ +STATIC s32 fm10k_update_xcast_mode_pf(struct fm10k_hw *hw, u16 glort, u8 mode) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3], xcast_mode; + + DEBUGFUNC("fm10k_update_xcast_mode_pf"); + + if (mode > FM10K_XCAST_MODE_NONE) + return FM10K_ERR_PARAM; + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* write xcast mode as a single u32 value, + * lower 16 bits: glort + * upper 16 bits: mode + */ + xcast_mode = ((u32)mode << 16) | glort; + + /* generate message requesting to change xcast mode */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_XCAST_MODES); + fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_XCAST_MODE, xcast_mode); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_int_moderator_pf - Update interrupt moderator linked list + * @hw: pointer to hardware structure + * + * This function walks through the MSI-X vector table to determine the + * number of active interrupts and based on that information updates the + * interrupt moderator linked list. 
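+ * The list is built through the ITR2 registers: each entry points at the + * previous enabled vector, so the scan below walks down from the highest PF + * vector and re-links ITR2(0) (and the VF anchor) to the last unmasked entry.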
+ **/ +STATIC void fm10k_update_int_moderator_pf(struct fm10k_hw *hw) +{ + u32 i; + + /* Disable interrupt moderator */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, 0); + + /* loop through PF from last to first looking enabled vectors */ + for (i = FM10K_ITR_REG_COUNT_PF - 1; i; i--) { + if (!FM10K_READ_REG(hw, FM10K_MSIX_VECTOR_MASK(i))) + break; + } + + /* always reset VFITR2[0] to point to last enabled PF vector*/ + FM10K_WRITE_REG(hw, FM10K_ITR2(FM10K_ITR_REG_COUNT_PF), i); + + /* reset ITR2[0] to point to last enabled PF vector */ + if (!hw->iov.num_vfs) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), i); + + /* Enable interrupt moderator */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, FM10K_INT_CTRL_ENABLEMODERATOR); +} + +/** + * fm10k_update_lport_state_pf - Notify the switch of a change in port state + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @count: number of logical ports being updated + * @enable: boolean value indicating enable or disable + * + * This function is used to add/remove a logical port from the switch. + **/ +STATIC s32 fm10k_update_lport_state_pf(struct fm10k_hw *hw, u16 glort, + u16 count, bool enable) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3], lport_msg; + + DEBUGFUNC("fm10k_lport_state_pf"); + + /* do nothing if we are being asked to create or destroy 0 ports */ + if (!count) + return FM10K_SUCCESS; + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* construct the lport message from the 2 pieces of data we have */ + lport_msg = ((u32)count << 16) | glort; + + /* generate lport create/delete message */ + fm10k_tlv_msg_init(msg, enable ? FM10K_PF_MSG_ID_LPORT_CREATE : + FM10K_PF_MSG_ID_LPORT_DELETE); + fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_PORT, lport_msg); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_configure_dglort_map_pf - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned. 
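+ * For example, a configuration with rss_l = 2, pc_l = 1, vsi_l = 0 and + * queue_l = 0 programs 1 << (2 + 1) = 8 queues behind a single SGLORT, since + * queue_count below is 1 << (rss_l + pc_l) and vsi_count is + * 1 << (vsi_l + queue_l).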
+ **/ +STATIC s32 fm10k_configure_dglort_map_pf(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + u16 glort, queue_count, vsi_count, pc_count; + u16 vsi, queue, pc, q_idx; + u32 txqctl, dglortdec, dglortmap; + + /* verify the dglort pointer */ + if (!dglort) + return FM10K_ERR_PARAM; + + /* verify the dglort values */ + if ((dglort->idx > 7) || (dglort->rss_l > 7) || (dglort->pc_l > 3) || + (dglort->vsi_l > 6) || (dglort->vsi_b > 64) || + (dglort->queue_l > 8) || (dglort->queue_b >= 256)) + return FM10K_ERR_PARAM; + + /* determine count of VSIs and queues */ + queue_count = 1 << (dglort->rss_l + dglort->pc_l); + vsi_count = 1 << (dglort->vsi_l + dglort->queue_l); + glort = dglort->glort; + q_idx = dglort->queue_b; + + /* configure SGLORT for queues */ + for (vsi = 0; vsi < vsi_count; vsi++, glort++) { + for (queue = 0; queue < queue_count; queue++, q_idx++) { + if (q_idx >= FM10K_MAX_QUEUES) + break; + + FM10K_WRITE_REG(hw, FM10K_TX_SGLORT(q_idx), glort); + FM10K_WRITE_REG(hw, FM10K_RX_SGLORT(q_idx), glort); + } + } + + /* determine count of PCs and queues */ + queue_count = 1 << (dglort->queue_l + dglort->rss_l + dglort->vsi_l); + pc_count = 1 << dglort->pc_l; + + /* configure PC for Tx queues */ + for (pc = 0; pc < pc_count; pc++) { + q_idx = pc + dglort->queue_b; + for (queue = 0; queue < queue_count; queue++) { + if (q_idx >= FM10K_MAX_QUEUES) + break; + + txqctl = FM10K_READ_REG(hw, FM10K_TXQCTL(q_idx)); + txqctl &= ~FM10K_TXQCTL_PC_MASK; + txqctl |= pc << FM10K_TXQCTL_PC_SHIFT; + FM10K_WRITE_REG(hw, FM10K_TXQCTL(q_idx), txqctl); + + q_idx += pc_count; + } + } + + /* configure DGLORTDEC */ + dglortdec = ((u32)(dglort->rss_l) << FM10K_DGLORTDEC_RSSLENGTH_SHIFT) | + ((u32)(dglort->queue_b) << FM10K_DGLORTDEC_QBASE_SHIFT) | + ((u32)(dglort->pc_l) << FM10K_DGLORTDEC_PCLENGTH_SHIFT) | + ((u32)(dglort->vsi_b) << FM10K_DGLORTDEC_VSIBASE_SHIFT) | + ((u32)(dglort->vsi_l) << FM10K_DGLORTDEC_VSILENGTH_SHIFT) | + ((u32)(dglort->queue_l)); + if (dglort->inner_rss) + dglortdec |= FM10K_DGLORTDEC_INNERRSS_ENABLE; + + /* configure DGLORTMAP */ + dglortmap = (dglort->idx == fm10k_dglort_default) ? + FM10K_DGLORTMAP_ANY : FM10K_DGLORTMAP_ZERO; + dglortmap <<= dglort->vsi_l + dglort->queue_l + dglort->shared_l; + dglortmap |= dglort->glort; + + /* write values to hardware */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(dglort->idx), dglortdec); + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(dglort->idx), dglortmap); + + return FM10K_SUCCESS; +} + +u16 fm10k_queues_per_pool(struct fm10k_hw *hw) +{ + u16 num_pools = hw->iov.num_pools; + + return (num_pools > 32) ? 2 : (num_pools > 16) ? 4 : (num_pools > 8) ? + 8 : FM10K_MAX_QUEUES_POOL; +} + +u16 fm10k_vf_queue_index(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 num_vfs = hw->iov.num_vfs; + u16 vf_q_idx = FM10K_MAX_QUEUES; + + vf_q_idx -= fm10k_queues_per_pool(hw) * (num_vfs - vf_idx); + + return vf_q_idx; +} + +STATIC u16 fm10k_vectors_per_pool(struct fm10k_hw *hw) +{ + u16 num_pools = hw->iov.num_pools; + + return (num_pools > 32) ? 8 : (num_pools > 16) ? 
16 : + FM10K_MAX_VECTORS_POOL; +} + +STATIC u16 fm10k_vf_vector_index(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 vf_v_idx = FM10K_MAX_VECTORS_PF; + + vf_v_idx += fm10k_vectors_per_pool(hw) * vf_idx; + + return vf_v_idx; +} + +/** + * fm10k_iov_assign_resources_pf - Assign pool resources for virtualization + * @hw: pointer to the HW structure + * @num_vfs: number of VFs to be allocated + * @num_pools: number of virtualization pools to be allocated + * + * Allocates queues and traffic classes to virtualization entities to prepare + * the PF for SR-IOV and VMDq + **/ +STATIC s32 fm10k_iov_assign_resources_pf(struct fm10k_hw *hw, u16 num_vfs, + u16 num_pools) +{ + u16 qmap_stride, qpp, vpp, vf_q_idx, vf_q_idx0, qmap_idx; + u32 vid = hw->mac.default_vid << FM10K_TXQCTL_VID_SHIFT; + int i, j; + + /* hardware only supports up to 64 pools */ + if (num_pools > 64) + return FM10K_ERR_PARAM; + + /* the number of VFs cannot exceed the number of pools */ + if ((num_vfs > num_pools) || (num_vfs > hw->iov.total_vfs)) + return FM10K_ERR_PARAM; + + /* record number of virtualization entities */ + hw->iov.num_vfs = num_vfs; + hw->iov.num_pools = num_pools; + + /* determine qmap offsets and counts */ + qmap_stride = (num_vfs > 8) ? 32 : 256; + qpp = fm10k_queues_per_pool(hw); + vpp = fm10k_vectors_per_pool(hw); + + /* calculate starting index for queues */ + vf_q_idx = fm10k_vf_queue_index(hw, 0); + qmap_idx = 0; + + /* establish TCs with -1 credits and no quanta to prevent transmit */ + for (i = 0; i < num_vfs; i++) { + FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(i), 0); + FM10K_WRITE_REG(hw, FM10K_TC_RATE(i), 0); + FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(i), + FM10K_TC_CREDIT_CREDIT_MASK); + } + + /* zero out all mbmem registers */ + for (i = FM10K_VFMBMEM_LEN * num_vfs; i--;) + FM10K_WRITE_REG(hw, FM10K_MBMEM(i), 0); + + /* clear event notification of VF FLR */ + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(0), ~0); + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(1), ~0); + + /* loop through unallocated rings assigning them back to PF */ + for (i = FM10K_MAX_QUEUES_PF; i < vf_q_idx; i++) { + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), FM10K_TXQCTL_PF | vid); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), FM10K_RXQCTL_PF); + } + + /* PF should have already updated VFITR2[0] */ + + /* update all ITR registers to flow to VFITR2[0] */ + for (i = FM10K_ITR_REG_COUNT_PF + 1; i < FM10K_ITR_REG_COUNT; i++) { + if (!(i & (vpp - 1))) + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - vpp); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - 1); + } + + /* update PF ITR2[0] to reference the last vector */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), + fm10k_vf_vector_index(hw, num_vfs - 1)); + + /* loop through rings populating rings and TCs */ + for (i = 0; i < num_vfs; i++) { + /* record index for VF queue 0 for use in end of loop */ + vf_q_idx0 = vf_q_idx; + + for (j = 0; j < qpp; j++, qmap_idx++, vf_q_idx++) { + /* assign VF and locked TC to queues */ + FM10K_WRITE_REG(hw, FM10K_TXDCTL(vf_q_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(vf_q_idx), + (i << FM10K_TXQCTL_TC_SHIFT) | i | + FM10K_TXQCTL_VF | vid); + FM10K_WRITE_REG(hw, FM10K_RXDCTL(vf_q_idx), + FM10K_RXDCTL_WRITE_BACK_MIN_DELAY | + FM10K_RXDCTL_DROP_ON_EMPTY); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(vf_q_idx), + FM10K_RXQCTL_VF | + (i << FM10K_RXQCTL_VF_SHIFT)); + + /* map queue pair to VF */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), vf_q_idx); + } + + /* repeat the first ring for all of the remaining VF rings */ + for (; j 
< qmap_stride; j++, qmap_idx++) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), vf_q_idx0); + } + } + + /* loop through remaining indexes assigning all to queue 0 */ + while (qmap_idx < FM10K_TQMAP_TABLE_SIZE) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), 0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), 0); + qmap_idx++; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_configure_tc_pf - Configure the shaping group for VF + * @hw: pointer to the HW structure + * @vf_idx: index of VF receiving GLORT + * @rate: Rate indicated in Mb/s + * + * Configures the TC for a given VF to allow only up to a given number + * of Mb/s of outgoing Tx throughput. + **/ +STATIC s32 fm10k_iov_configure_tc_pf(struct fm10k_hw *hw, u16 vf_idx, int rate) +{ + /* configure defaults */ + u32 interval = FM10K_TC_RATE_INTERVAL_4US_GEN3; + u32 tc_rate = FM10K_TC_RATE_QUANTA_MASK; + + /* verify vf is in range */ + if (vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* set interval to align with 4.096 usec in all modes */ + switch (hw->bus.speed) { + case fm10k_bus_speed_2500: + interval = FM10K_TC_RATE_INTERVAL_4US_GEN1; + break; + case fm10k_bus_speed_5000: + interval = FM10K_TC_RATE_INTERVAL_4US_GEN2; + break; + default: + break; + } + + if (rate) { + if (rate > FM10K_VF_TC_MAX || rate < FM10K_VF_TC_MIN) + return FM10K_ERR_PARAM; + + /* The quanta is measured in Bytes per 4.096 or 8.192 usec + * The rate is provided in Mbits per second + * To translate from rate to quanta we need to multiply the + * rate by 8.192 usec and divide by 8 bits/byte. To avoid + * dealing with floating point we can round the values up + * to the nearest whole number ratio which gives us 128 / 125. + */ + tc_rate = (rate * 128) / 125; + + /* try to keep the rate limiting accurate by increasing + * the number of credits and interval for rates less than 4Gb/s + */ + if (rate < 4000) + interval <<= 1; + else + tc_rate >>= 1; + } + + /* update rate limiter with new values */ + FM10K_WRITE_REG(hw, FM10K_TC_RATE(vf_idx), tc_rate | interval); + FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(vf_idx), FM10K_TC_MAXCREDIT_64K); + FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(vf_idx), FM10K_TC_MAXCREDIT_64K); + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_assign_int_moderator_pf - Add VF interrupts to moderator list + * @hw: pointer to the HW structure + * @vf_idx: index of VF receiving GLORT + * + * Update the interrupt moderator linked list to include any MSI-X + * interrupts which the VF has enabled in the MSI-X vector table.
+ **/ +STATIC s32 fm10k_iov_assign_int_moderator_pf(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 vf_v_idx, vf_v_limit, i; + + /* verify vf is in range */ + if (vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* determine vector offset and count*/ + vf_v_idx = fm10k_vf_vector_index(hw, vf_idx); + vf_v_limit = vf_v_idx + fm10k_vectors_per_pool(hw); + + /* search for first vector that is not masked */ + for (i = vf_v_limit - 1; i > vf_v_idx; i--) { + if (!FM10K_READ_REG(hw, FM10K_MSIX_VECTOR_MASK(i))) + break; + } + + /* reset linked list so it now includes our active vectors */ + if (vf_idx == (hw->iov.num_vfs - 1)) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), i); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_limit), i); + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_assign_default_mac_vlan_pf - Assign a MAC and VLAN to VF + * @hw: pointer to the HW structure + * @vf_info: pointer to VF information structure + * + * Assign a MAC address and default VLAN to a VF and notify it of the update + **/ +STATIC s32 fm10k_iov_assign_default_mac_vlan_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u16 qmap_stride, queues_per_pool, vf_q_idx, timeout, qmap_idx, i; + u32 msg[4], txdctl, txqctl, tdbal = 0, tdbah = 0; + s32 err = FM10K_SUCCESS; + u16 vf_idx, vf_vid; + + /* verify vf is in range */ + if (!vf_info || vf_info->vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* determine qmap offsets and counts */ + qmap_stride = (hw->iov.num_vfs > 8) ? 32 : 256; + queues_per_pool = fm10k_queues_per_pool(hw); + + /* calculate starting index for queues */ + vf_idx = vf_info->vf_idx; + vf_q_idx = fm10k_vf_queue_index(hw, vf_idx); + qmap_idx = qmap_stride * vf_idx; + + /* MAP Tx queue back to 0 temporarily, and disable it */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(vf_q_idx), 0); + + /* determine correct default VLAN ID */ + if (vf_info->pf_vid) + vf_vid = vf_info->pf_vid | FM10K_VLAN_CLEAR; + else + vf_vid = vf_info->sw_vid; + + /* generate MAC_ADDR request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_DEFAULT_MAC, + vf_info->mac, vf_vid); + + /* load onto outgoing mailbox, ignore any errors on enqueue */ + if (vf_info->mbx.ops.enqueue_tx) + vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg); + + /* verify ring has disabled before modifying base address registers */ + txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(vf_q_idx)); + for (timeout = 0; txdctl & FM10K_TXDCTL_ENABLE; timeout++) { + /* limit ourselves to a 1ms timeout */ + if (timeout == 10) { + err = FM10K_ERR_DMA_PENDING; + goto err_out; + } + + usec_delay(100); + txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(vf_q_idx)); + } + + /* Update base address registers to contain MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac)) { + tdbal = (((u32)vf_info->mac[3]) << 24) | + (((u32)vf_info->mac[4]) << 16) | + (((u32)vf_info->mac[5]) << 8); + + tdbah = (((u32)0xFF) << 24) | + (((u32)vf_info->mac[0]) << 16) | + (((u32)vf_info->mac[1]) << 8) | + ((u32)vf_info->mac[2]); + } + + /* Record the base address into queue 0 */ + FM10K_WRITE_REG(hw, FM10K_TDBAL(vf_q_idx), tdbal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(vf_q_idx), tdbah); + +err_out: + /* configure Queue control register */ + txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) & + FM10K_TXQCTL_VID_MASK; + txqctl |= (vf_idx << FM10K_TXQCTL_TC_SHIFT) | + FM10K_TXQCTL_VF | vf_idx; + + /* assign VID */ + for (i = 0; i < queues_per_pool; i++) + FM10K_WRITE_REG(hw, FM10K_TXQCTL(vf_q_idx + i), 
txqctl); + + /* restore the queue back to VF ownership */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx); + return err; +} + +/** + * fm10k_iov_reset_resources_pf - Reassign queues and interrupts to a VF + * @hw: pointer to the HW structure + * @vf_info: pointer to VF information structure + * + * Reassign the interrupts and queues to a VF following an FLR + **/ +STATIC s32 fm10k_iov_reset_resources_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u16 qmap_stride, queues_per_pool, vf_q_idx, qmap_idx; + u32 tdbal = 0, tdbah = 0, txqctl, rxqctl; + u16 vf_v_idx, vf_v_limit, vf_vid; + u8 vf_idx = vf_info->vf_idx; + int i; + + /* verify vf is in range */ + if (vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* clear event notification of VF FLR */ + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(vf_idx / 32), 1 << (vf_idx % 32)); + + /* force timeout and then disconnect the mailbox */ + vf_info->mbx.timeout = 0; + if (vf_info->mbx.ops.disconnect) + vf_info->mbx.ops.disconnect(hw, &vf_info->mbx); + + /* determine vector offset and count*/ + vf_v_idx = fm10k_vf_vector_index(hw, vf_idx); + vf_v_limit = vf_v_idx + fm10k_vectors_per_pool(hw); + + /* determine qmap offsets and counts */ + qmap_stride = (hw->iov.num_vfs > 8) ? 32 : 256; + queues_per_pool = fm10k_queues_per_pool(hw); + qmap_idx = qmap_stride * vf_idx; + + /* make all the queues inaccessible to the VF */ + for (i = qmap_idx; i < (qmap_idx + qmap_stride); i++) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(i), 0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(i), 0); + } + + /* calculate starting index for queues */ + vf_q_idx = fm10k_vf_queue_index(hw, vf_idx); + + /* determine correct default VLAN ID */ + if (vf_info->pf_vid) + vf_vid = vf_info->pf_vid; + else + vf_vid = vf_info->sw_vid; + + /* configure Queue control register */ + txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) | + (vf_idx << FM10K_TXQCTL_TC_SHIFT) | + FM10K_TXQCTL_VF | vf_idx; + rxqctl = FM10K_RXQCTL_VF | (vf_idx << FM10K_RXQCTL_VF_SHIFT); + + /* stop further DMA and reset queue ownership back to VF */ + for (i = vf_q_idx; i < (queues_per_pool + vf_q_idx); i++) { + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), txqctl); + FM10K_WRITE_REG(hw, FM10K_RXDCTL(i), + FM10K_RXDCTL_WRITE_BACK_MIN_DELAY | + FM10K_RXDCTL_DROP_ON_EMPTY); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), rxqctl); + } + + /* reset TC with -1 credits and no quanta to prevent transmit */ + FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(vf_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TC_RATE(vf_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(vf_idx), + FM10K_TC_CREDIT_CREDIT_MASK); + + /* update our first entry in the table based on previous VF */ + if (!vf_idx) + hw->mac.ops.update_int_moderator(hw); + else + hw->iov.ops.assign_int_moderator(hw, vf_idx - 1); + + /* reset linked list so it now includes our active vectors */ + if (vf_idx == (hw->iov.num_vfs - 1)) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), vf_v_idx); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_limit), vf_v_idx); + + /* link remaining vectors so that next points to previous */ + for (vf_v_idx++; vf_v_idx < vf_v_limit; vf_v_idx++) + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_idx), vf_v_idx - 1); + + /* zero out MBMEM, VLAN_TABLE, RETA, RSSRK, and MRQC registers */ + for (i = FM10K_VFMBMEM_LEN; i--;) + FM10K_WRITE_REG(hw, FM10K_MBMEM_VF(vf_idx, i), 0); + for (i = FM10K_VLAN_TABLE_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_VLAN_TABLE(vf_info->vsi, i), 0); + for (i = FM10K_RETA_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_RETA(vf_info->vsi, i), 0); + for (i 
= FM10K_RSSRK_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_RSSRK(vf_info->vsi, i), 0); + FM10K_WRITE_REG(hw, FM10K_MRQC(vf_info->vsi), 0); + + /* Update base address registers to contain MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac)) { + tdbal = (((u32)vf_info->mac[3]) << 24) | + (((u32)vf_info->mac[4]) << 16) | + (((u32)vf_info->mac[5]) << 8); + tdbah = (((u32)0xFF) << 24) | + (((u32)vf_info->mac[0]) << 16) | + (((u32)vf_info->mac[1]) << 8) | + ((u32)vf_info->mac[2]); + } + + /* map queue pairs back to VF from last to first*/ + for (i = queues_per_pool; i--;) { + FM10K_WRITE_REG(hw, FM10K_TDBAL(vf_q_idx + i), tdbal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(vf_q_idx + i), tdbah); + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx + i), vf_q_idx + i); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx + i), vf_q_idx + i); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_set_lport_pf - Assign and enable a logical port for a given VF + * @hw: pointer to hardware structure + * @vf_info: pointer to VF information structure + * @lport_idx: Logical port offset from the hardware glort + * @flags: Set of capability flags to extend port beyond basic functionality + * + * This function allows enabling a VF port by assigning it a GLORT and + * setting the flags so that it can enable an Rx mode. + **/ +STATIC s32 fm10k_iov_set_lport_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info, + u16 lport_idx, u8 flags) +{ + u16 glort = (hw->mac.dglort_map + lport_idx) & FM10K_DGLORTMAP_NONE; + + DEBUGFUNC("fm10k_iov_set_lport_state_pf"); + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + vf_info->vf_flags = flags | FM10K_VF_FLAG_NONE_CAPABLE; + vf_info->glort = glort; + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_reset_lport_pf - Disable a logical port for a given VF + * @hw: pointer to hardware structure + * @vf_info: pointer to VF information structure + * + * This function disables a VF port by stripping it of a GLORT and + * setting the flags so that it cannot enable any Rx mode. + **/ +STATIC void fm10k_iov_reset_lport_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u32 msg[1]; + + DEBUGFUNC("fm10k_iov_reset_lport_state_pf"); + + /* need to disable the port if it is already enabled */ + if (FM10K_VF_FLAG_ENABLED(vf_info)) { + /* notify switch that this port has been disabled */ + fm10k_update_lport_state_pf(hw, vf_info->glort, 1, false); + + /* generate port state response to notify VF it is not ready */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg); + } + + /* clear flags and glort if it exists */ + vf_info->vf_flags = 0; + vf_info->glort = 0; +} + +/** + * fm10k_iov_update_stats_pf - Updates hardware related statistics for VFs + * @hw: pointer to hardware structure + * @q: stats for all queues of a VF + * @vf_idx: index of VF + * + * This function collects queue stats for VFs. 
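+ * It does so by reading the counters for the fm10k_queues_per_pool(hw) queues + * that start at fm10k_vf_queue_index(hw, vf_idx), i.e. the ring range owned + * by this particular VF.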
+ **/ +STATIC void fm10k_iov_update_stats_pf(struct fm10k_hw *hw, + struct fm10k_hw_stats_q *q, + u16 vf_idx) +{ + u32 idx, qpp; + + /* get stats for all of the queues */ + qpp = fm10k_queues_per_pool(hw); + idx = fm10k_vf_queue_index(hw, vf_idx); + fm10k_update_hw_stats_q(hw, q, idx, qpp); +} + +STATIC s32 fm10k_iov_report_timestamp_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info, + u64 timestamp) +{ + u32 msg[4]; + + /* generate 1588 timestamp message for the VF */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_1588); + fm10k_tlv_attr_put_u64(msg, FM10K_1588_MSG_TIMESTAMP, timestamp); + + return vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg); +} + +/** + * fm10k_iov_msg_msix_pf - Message handler for MSI-X request from VF + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Pointer to mailbox information structure + * + * This function is a default handler for MSI-X requests from the VF. The + * assumption is that in this case it is acceptable to just directly + * hand off the message from the VF to the underlying shared code. + **/ +s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx; + u8 vf_idx = vf_info->vf_idx; + + UNREFERENCED_1PARAMETER(results); + DEBUGFUNC("fm10k_iov_msg_msix_pf"); + + return hw->iov.ops.assign_int_moderator(hw, vf_idx); +} + +/** + * fm10k_iov_msg_mac_vlan_pf - Message handler for MAC/VLAN request from VF + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Pointer to mailbox information structure + * + * This function is a default handler for MAC/VLAN requests from the VF. + * The assumption is that in this case it is acceptable to just directly + * hand off the message from the VF to the underlying shared code.
+ **/ +s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx; + int err = FM10K_SUCCESS; + u8 mac[ETH_ALEN]; + u32 *result; + u16 vlan; + u32 vid; + + DEBUGFUNC("fm10k_iov_msg_mac_vlan_pf"); + + /* we shouldn't be updating rules on a disabled interface */ + if (!FM10K_VF_FLAG_ENABLED(vf_info)) + err = FM10K_ERR_PARAM; + + if (!err && !!results[FM10K_MAC_VLAN_MSG_VLAN]) { + result = results[FM10K_MAC_VLAN_MSG_VLAN]; + + /* record VLAN id requested */ + err = fm10k_tlv_attr_get_u32(result, &vid); + if (err) + return err; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vid || (vid == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vid |= vf_info->pf_vid; + else + vid |= vf_info->sw_vid; + } else if (vid != vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* update VSI info for VF in regards to VLAN table */ + err = hw->mac.ops.update_vlan(hw, vid, vf_info->vsi, + !(vid & FM10K_VLAN_CLEAR)); + } + + if (!err && !!results[FM10K_MAC_VLAN_MSG_MAC]) { + result = results[FM10K_MAC_VLAN_MSG_MAC]; + + /* record unicast MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan); + if (err) + return err; + + /* block attempts to set MAC for a locked device */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac) && + memcmp(mac, vf_info->mac, ETH_ALEN)) + return FM10K_ERR_PARAM; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vlan || (vlan == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vlan |= vf_info->pf_vid; + else + vlan |= vf_info->sw_vid; + } else if (vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* notify switch of request for new unicast address */ + err = hw->mac.ops.update_uc_addr(hw, vf_info->glort, mac, vlan, + !(vlan & FM10K_VLAN_CLEAR), 0); + } + + if (!err && !!results[FM10K_MAC_VLAN_MSG_MULTICAST]) { + result = results[FM10K_MAC_VLAN_MSG_MULTICAST]; + + /* record multicast MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan); + if (err) + return err; + + /* verify that the VF is allowed to request multicast */ + if (!(vf_info->vf_flags & FM10K_VF_FLAG_MULTI_ENABLED)) + return FM10K_ERR_PARAM; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vlan || (vlan == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vlan |= vf_info->pf_vid; + else + vlan |= vf_info->sw_vid; + } else if (vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* notify switch of request for new multicast address */ + err = hw->mac.ops.update_mc_addr(hw, vf_info->glort, mac, + !(vlan & FM10K_VLAN_CLEAR), 0); + } + + return err; +} + +/** + * fm10k_iov_supported_xcast_mode_pf - Determine best match for xcast mode + * @vf_info: VF info structure containing capability flags + * @mode: Requested xcast mode + * + * This function outputs the mode that most closely matches the requested + * mode. 
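+ * A VF that asks for PROMISC but is only ALLMULTI_CAPABLE, for instance, is + * demoted to ALLMULTI by the fall-through chain below.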
If no modes match, it will request that the port be disabled + **/ +STATIC u8 fm10k_iov_supported_xcast_mode_pf(struct fm10k_vf_info *vf_info, + u8 mode) +{ + u8 vf_flags = vf_info->vf_flags; + + /* match up mode to capabilities as best as possible */ + switch (mode) { + case FM10K_XCAST_MODE_PROMISC: + if (vf_flags & FM10K_VF_FLAG_PROMISC_CAPABLE) + return FM10K_XCAST_MODE_PROMISC; + /* fallthrough */ + case FM10K_XCAST_MODE_ALLMULTI: + if (vf_flags & FM10K_VF_FLAG_ALLMULTI_CAPABLE) + return FM10K_XCAST_MODE_ALLMULTI; + /* fallthrough */ + case FM10K_XCAST_MODE_MULTI: + if (vf_flags & FM10K_VF_FLAG_MULTI_CAPABLE) + return FM10K_XCAST_MODE_MULTI; + /* fallthrough */ + case FM10K_XCAST_MODE_NONE: + if (vf_flags & FM10K_VF_FLAG_NONE_CAPABLE) + return FM10K_XCAST_MODE_NONE; + /* fallthrough */ + default: + break; + } + + /* disable interface as it should not be able to request any */ + return FM10K_XCAST_MODE_DISABLE; +} + +/** + * fm10k_iov_msg_lport_state_pf - Message handler for port state requests + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Pointer to mailbox information structure + * + * This function is a default handler for port state requests. The port + * state requests for now are basic and consist of enabling or disabling + * the port. + **/ +s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx; + u32 *result; + s32 err = FM10K_SUCCESS; + u32 msg[2]; + u8 mode = 0; + + DEBUGFUNC("fm10k_iov_msg_lport_state_pf"); + + /* verify VF is allowed to enable even minimal mode */ + if (!(vf_info->vf_flags & FM10K_VF_FLAG_NONE_CAPABLE)) + return FM10K_ERR_PARAM; + + if (!!results[FM10K_LPORT_STATE_MSG_XCAST_MODE]) { + result = results[FM10K_LPORT_STATE_MSG_XCAST_MODE]; + + /* XCAST mode update requested */ + err = fm10k_tlv_attr_get_u8(result, &mode); + if (err) + return FM10K_ERR_PARAM; + + /* prep for possible demotion depending on capabilities */ + mode = fm10k_iov_supported_xcast_mode_pf(vf_info, mode); + + /* if mode is not currently enabled, enable it */ + if (!(FM10K_VF_FLAG_ENABLED(vf_info) & (1 << mode))) + fm10k_update_xcast_mode_pf(hw, vf_info->glort, mode); + + /* swap mode back to a bit flag */ + mode = FM10K_VF_FLAG_SET_MODE(mode); + } else if (!results[FM10K_LPORT_STATE_MSG_DISABLE]) { + /* need to disable the port if it is already enabled */ + if (FM10K_VF_FLAG_ENABLED(vf_info)) + err = fm10k_update_lport_state_pf(hw, vf_info->glort, + 1, false); + + /* when enabling the port we should reset the rate limiters */ + hw->iov.ops.configure_tc(hw, vf_info->vf_idx, vf_info->rate); + + /* set mode for minimal functionality */ + mode = FM10K_VF_FLAG_SET_MODE_NONE; + + /* generate port state response to notify VF it is ready */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + fm10k_tlv_attr_put_bool(msg, FM10K_LPORT_STATE_MSG_READY); + mbx->ops.enqueue_tx(hw, mbx, msg); + } + + /* if enable state toggled note the update */ + if (!err && (!FM10K_VF_FLAG_ENABLED(vf_info) != !mode)) + err = fm10k_update_lport_state_pf(hw, vf_info->glort, 1, + !!mode); + + /* if state change succeeded, then update our stored state */ + mode |= FM10K_VF_FLAG_CAPABLE(vf_info); + if (!err) + vf_info->vf_flags = mode; + + return err; +} + +const struct fm10k_msg_data fm10k_iov_msg_data_pf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MSIX_HANDLER(fm10k_iov_msg_msix_pf), +
FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_iov_msg_mac_vlan_pf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_iov_msg_lport_state_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_update_stats_hw_pf - Updates hardware related statistics of PF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function collects and aggregates global and per queue hardware + * statistics. + **/ +STATIC void fm10k_update_hw_stats_pf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + u32 timeout, ur, ca, um, xec, vlan_drop, loopback_drop, nodesc_drop; + u32 id, id_prev; + + DEBUGFUNC("fm10k_update_hw_stats_pf"); + + /* Use Tx queue 0 as a canary to detect a reset */ + id = FM10K_READ_REG(hw, FM10K_TXQCTL(0)); + + /* Read Global Statistics */ + do { + timeout = fm10k_read_hw_stats_32b(hw, FM10K_STATS_TIMEOUT, + &stats->timeout); + ur = fm10k_read_hw_stats_32b(hw, FM10K_STATS_UR, &stats->ur); + ca = fm10k_read_hw_stats_32b(hw, FM10K_STATS_CA, &stats->ca); + um = fm10k_read_hw_stats_32b(hw, FM10K_STATS_UM, &stats->um); + xec = fm10k_read_hw_stats_32b(hw, FM10K_STATS_XEC, &stats->xec); + vlan_drop = fm10k_read_hw_stats_32b(hw, FM10K_STATS_VLAN_DROP, + &stats->vlan_drop); + loopback_drop = fm10k_read_hw_stats_32b(hw, + FM10K_STATS_LOOPBACK_DROP, + &stats->loopback_drop); + nodesc_drop = fm10k_read_hw_stats_32b(hw, + FM10K_STATS_NODESC_DROP, + &stats->nodesc_drop); + + /* if value has not changed then we have consistent data */ + id_prev = id; + id = FM10K_READ_REG(hw, FM10K_TXQCTL(0)); + } while ((id ^ id_prev) & FM10K_TXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id &= FM10K_TXQCTL_ID_MASK; + id |= FM10K_STAT_VALID; + + /* Update Global Statistics */ + if (stats->stats_idx == id) { + stats->timeout.count += timeout; + stats->ur.count += ur; + stats->ca.count += ca; + stats->um.count += um; + stats->xec.count += xec; + stats->vlan_drop.count += vlan_drop; + stats->loopback_drop.count += loopback_drop; + stats->nodesc_drop.count += nodesc_drop; + } + + /* Update bases and record current PF id */ + fm10k_update_hw_base_32b(&stats->timeout, timeout); + fm10k_update_hw_base_32b(&stats->ur, ur); + fm10k_update_hw_base_32b(&stats->ca, ca); + fm10k_update_hw_base_32b(&stats->um, um); + fm10k_update_hw_base_32b(&stats->xec, xec); + fm10k_update_hw_base_32b(&stats->vlan_drop, vlan_drop); + fm10k_update_hw_base_32b(&stats->loopback_drop, loopback_drop); + fm10k_update_hw_base_32b(&stats->nodesc_drop, nodesc_drop); + stats->stats_idx = id; + + /* Update Queue Statistics */ + fm10k_update_hw_stats_q(hw, stats->q, 0, hw->mac.max_queues); +} + +/** + * fm10k_rebind_hw_stats_pf - Resets base for hardware statistics of PF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function resets the base for global and per queue hardware + * statistics. 
+ **/ +STATIC void fm10k_rebind_hw_stats_pf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_rebind_hw_stats_pf"); + + /* Unbind Global Statistics */ + fm10k_unbind_hw_stats_32b(&stats->timeout); + fm10k_unbind_hw_stats_32b(&stats->ur); + fm10k_unbind_hw_stats_32b(&stats->ca); + fm10k_unbind_hw_stats_32b(&stats->um); + fm10k_unbind_hw_stats_32b(&stats->xec); + fm10k_unbind_hw_stats_32b(&stats->vlan_drop); + fm10k_unbind_hw_stats_32b(&stats->loopback_drop); + fm10k_unbind_hw_stats_32b(&stats->nodesc_drop); + + /* Unbind Queue Statistics */ + fm10k_unbind_hw_stats_q(stats->q, 0, hw->mac.max_queues); + + /* Reinitialize bases for all stats */ + fm10k_update_hw_stats_pf(hw, stats); +} + +/** + * fm10k_set_dma_mask_pf - Configures PhyAddrSpace to limit DMA to system + * @hw: pointer to hardware structure + * @dma_mask: 64 bit DMA mask required for platform + * + * This function sets the PHYADDR.PhyAddrSpace bits for the endpoint in order + * to limit the access to memory beyond what is physically in the system. + **/ +STATIC void fm10k_set_dma_mask_pf(struct fm10k_hw *hw, u64 dma_mask) +{ + /* we need to write the upper 32 bits of DMA mask to PhyAddrSpace */ + u32 phyaddr = (u32)(dma_mask >> 32); + + DEBUGFUNC("fm10k_set_dma_mask_pf"); + + FM10K_WRITE_REG(hw, FM10K_PHYADDR, phyaddr); +} + +/** + * fm10k_get_fault_pf - Record a fault in one of the interface units + * @hw: pointer to hardware structure + * @type: pointer to fault type register offset + * @fault: pointer to memory location to record the fault + * + * Record the fault register contents to the fault data structure and + * clear the entry from the register. + * + * Returns ERR_PARAM if invalid register is specified or no error is present. + **/ +STATIC s32 fm10k_get_fault_pf(struct fm10k_hw *hw, int type, + struct fm10k_fault *fault) +{ + u32 func; + + DEBUGFUNC("fm10k_get_fault_pf"); + + /* verify the fault register is in range and is aligned */ + switch (type) { + case FM10K_PCA_FAULT: + case FM10K_THI_FAULT: + case FM10K_FUM_FAULT: + break; + default: + return FM10K_ERR_PARAM; + } + + /* only service faults that are valid */ + func = FM10K_READ_REG(hw, type + FM10K_FAULT_FUNC); + if (!(func & FM10K_FAULT_FUNC_VALID)) + return FM10K_ERR_PARAM; + + /* read remaining fields */ + fault->address = FM10K_READ_REG(hw, type + FM10K_FAULT_ADDR_HI); + fault->address <<= 32; + fault->address = FM10K_READ_REG(hw, type + FM10K_FAULT_ADDR_LO); + fault->specinfo = FM10K_READ_REG(hw, type + FM10K_FAULT_SPECINFO); + + /* clear valid bit to allow for next error */ + FM10K_WRITE_REG(hw, type + FM10K_FAULT_FUNC, FM10K_FAULT_FUNC_VALID); + + /* Record which function triggered the error */ + if (func & FM10K_FAULT_FUNC_PF) + fault->func = 0; + else + fault->func = 1 + ((func & FM10K_FAULT_FUNC_VF_MASK) >> + FM10K_FAULT_FUNC_VF_SHIFT); + + /* record fault type */ + fault->type = func & FM10K_FAULT_FUNC_TYPE_MASK; + + return FM10K_SUCCESS; +} + +/** + * fm10k_request_lport_map_pf - Request LPORT map from the switch API + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_request_lport_map_pf(struct fm10k_hw *hw) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[1]; + + DEBUGFUNC("fm10k_request_lport_pf"); + + /* issue request asking for LPORT map */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_LPORT_MAP); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_get_host_state_pf - Returns the state of the switch and mailbox + * @hw: pointer to hardware structure + * 
@switch_ready: pointer to boolean value that will record switch state + * + * This function will check the DMA_CTRL2 register and mailbox in order + * to determine if the switch is ready for the PF to begin requesting + * addresses and mapping traffic to the local interface. + **/ +STATIC s32 fm10k_get_host_state_pf(struct fm10k_hw *hw, bool *switch_ready) +{ + s32 ret_val = FM10K_SUCCESS; + u32 dma_ctrl2; + + DEBUGFUNC("fm10k_get_host_state_pf"); + + /* verify the switch is ready for interaction */ + dma_ctrl2 = FM10K_READ_REG(hw, FM10K_DMA_CTRL2); + if (!(dma_ctrl2 & FM10K_DMA_CTRL2_SWITCH_READY)) + goto out; + + /* retrieve generic host state info */ + ret_val = fm10k_get_host_state_generic(hw, switch_ready); + if (ret_val) + goto out; + + /* interface cannot receive traffic without logical ports */ + if (hw->mac.dglort_map == FM10K_DGLORTMAP_NONE) + ret_val = fm10k_request_lport_map_pf(hw); + +out: + return ret_val; +} + +/* This structure defines the attributes to be parsed below */ +const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_PF_ATTR_ID_LPORT_MAP), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_lport_map_pf - Message handler for lport_map message from SM + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler configures the lport mapping based on the reply from the + * switch API. + **/ +s32 fm10k_msg_lport_map_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u16 glort, mask; + u32 dglort_map; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_lport_map_pf"); + + err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_LPORT_MAP], + &dglort_map); + if (err) + return err; + + /* extract values out of the header */ + glort = FM10K_MSG_HDR_FIELD_GET(dglort_map, LPORT_MAP_GLORT); + mask = FM10K_MSG_HDR_FIELD_GET(dglort_map, LPORT_MAP_MASK); + + /* verify mask is set and none of the masked bits in glort are set */ + if (!mask || (glort & ~mask)) + return FM10K_ERR_PARAM; + + /* verify the mask is contiguous, and that it is 1's followed by 0's */ + if (((~(mask - 1) & mask) + mask) & FM10K_DGLORTMAP_NONE) + return FM10K_ERR_PARAM; + + /* record the glort, mask, and port count */ + hw->mac.dglort_map = dglort_map; + + return FM10K_SUCCESS; +} + +const struct fm10k_tlv_attr fm10k_update_pvid_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_PF_ATTR_ID_UPDATE_PVID), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_update_pvid_pf - Message handler for port VLAN message from SM + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler configures the default VLAN for the PF + **/ +s32 fm10k_msg_update_pvid_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u16 glort, pvid; + u32 pvid_update; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_update_pvid_pf"); + + err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_UPDATE_PVID], + &pvid_update); + if (err) + return err; + + /* extract values from the pvid update */ + glort = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_GLORT); + pvid = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_PVID); + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* verify VID is valid */ + if (pvid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* record the port VLAN ID
value */ + hw->mac.default_vid = pvid; + + return FM10K_SUCCESS; +} + +/** + * fm10k_record_global_table_data - Move global table data to swapi table info + * @from: pointer to source table data structure + * @to: pointer to destination table info structure + * + * This function is will copy table_data to the table_info contained in + * the hw struct. + **/ +static void fm10k_record_global_table_data(struct fm10k_global_table_data *from, + struct fm10k_swapi_table_info *to) +{ + /* convert from le32 struct to CPU byte ordered values */ + to->used = FM10K_LE32_TO_CPU(from->used); + to->avail = FM10K_LE32_TO_CPU(from->avail); +} + +const struct fm10k_tlv_attr fm10k_err_msg_attr[] = { + FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_ERR, + sizeof(struct fm10k_swapi_error)), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_err_pf - Message handler for error reply + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler will capture the data for any error replies to previous + * messages that the PF has sent. + **/ +s32 fm10k_msg_err_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_swapi_error err_msg; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_err_pf"); + + /* extract structure from message */ + err = fm10k_tlv_attr_get_le_struct(results[FM10K_PF_ATTR_ID_ERR], + &err_msg, sizeof(err_msg)); + if (err) + return err; + + /* record table status */ + fm10k_record_global_table_data(&err_msg.mac, &hw->swapi.mac); + fm10k_record_global_table_data(&err_msg.nexthop, &hw->swapi.nexthop); + fm10k_record_global_table_data(&err_msg.ffu, &hw->swapi.ffu); + + /* record SW API status value */ + hw->swapi.status = FM10K_LE32_TO_CPU(err_msg.status); + + return FM10K_SUCCESS; +} + +const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[] = { + FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_1588_TIMESTAMP, + sizeof(struct fm10k_swapi_1588_timestamp)), + FM10K_TLV_ATTR_LAST +}; + +/* currently there is no shared 1588 timestamp handler */ + +static const struct fm10k_msg_data fm10k_msg_data_pf[] = { + FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf), + FM10K_PF_MSG_LPORT_MAP_HANDLER(fm10k_msg_lport_map_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_CREATE, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_DELETE, fm10k_msg_err_pf), + FM10K_PF_MSG_UPDATE_PVID_HANDLER(fm10k_msg_update_pvid_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_init_ops_pf - Inits func ptrs and MAC type + * @hw: pointer to hardware structure + * + * Initialize the function pointers and assign the MAC type for PF. + * Does not touch the hardware. 
+ **/ +s32 fm10k_init_ops_pf(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + struct fm10k_iov_info *iov = &hw->iov; + + DEBUGFUNC("fm10k_init_ops_pf"); + + fm10k_init_ops_generic(hw); + + mac->ops.reset_hw = &fm10k_reset_hw_pf; + mac->ops.init_hw = &fm10k_init_hw_pf; + mac->ops.start_hw = &fm10k_start_hw_generic; + mac->ops.stop_hw = &fm10k_stop_hw_generic; + mac->ops.is_slot_appropriate = &fm10k_is_slot_appropriate_pf; + mac->ops.update_vlan = &fm10k_update_vlan_pf; + mac->ops.read_mac_addr = &fm10k_read_mac_addr_pf; + mac->ops.update_uc_addr = &fm10k_update_uc_addr_pf; + mac->ops.update_mc_addr = &fm10k_update_mc_addr_pf; + mac->ops.update_xcast_mode = &fm10k_update_xcast_mode_pf; + mac->ops.update_int_moderator = &fm10k_update_int_moderator_pf; + mac->ops.update_lport_state = &fm10k_update_lport_state_pf; + mac->ops.update_hw_stats = &fm10k_update_hw_stats_pf; + mac->ops.rebind_hw_stats = &fm10k_rebind_hw_stats_pf; + mac->ops.configure_dglort_map = &fm10k_configure_dglort_map_pf; + mac->ops.set_dma_mask = &fm10k_set_dma_mask_pf; + mac->ops.get_fault = &fm10k_get_fault_pf; + mac->ops.get_host_state = &fm10k_get_host_state_pf; + + mac->max_msix_vectors = fm10k_get_pcie_msix_count_generic(hw); + + iov->ops.assign_resources = &fm10k_iov_assign_resources_pf; + iov->ops.configure_tc = &fm10k_iov_configure_tc_pf; + iov->ops.assign_int_moderator = &fm10k_iov_assign_int_moderator_pf; + iov->ops.assign_default_mac_vlan = fm10k_iov_assign_default_mac_vlan_pf; + iov->ops.reset_resources = &fm10k_iov_reset_resources_pf; + iov->ops.set_lport = &fm10k_iov_set_lport_pf; + iov->ops.reset_lport = &fm10k_iov_reset_lport_pf; + iov->ops.update_stats = &fm10k_iov_update_stats_pf; + iov->ops.report_timestamp = &fm10k_iov_report_timestamp_pf; + + return fm10k_sm_mbx_init(hw, &hw->mbx, fm10k_msg_data_pf); +} diff --git a/lib/librte_pmd_fm10k/SHARED/fm10k_pf.h b/lib/librte_pmd_fm10k/SHARED/fm10k_pf.h new file mode 100644 index 0000000..2b4605a --- /dev/null +++ b/lib/librte_pmd_fm10k/SHARED/fm10k_pf.h @@ -0,0 +1,152 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_PF_H_ +#define _FM10K_PF_H_ + +#include "fm10k_type.h" +#include "fm10k_common.h" + +bool fm10k_glort_valid_pf(struct fm10k_hw *hw, u16 glort); +u16 fm10k_queues_per_pool(struct fm10k_hw *hw); +u16 fm10k_vf_queue_index(struct fm10k_hw *hw, u16 vf_idx); + +enum fm10k_pf_tlv_msg_id_v1 { + FM10K_PF_MSG_ID_TEST = 0x000, /* msg ID reserved */ + FM10K_PF_MSG_ID_XCAST_MODES = 0x001, + FM10K_PF_MSG_ID_UPDATE_MAC_FWD_RULE = 0x002, + FM10K_PF_MSG_ID_LPORT_MAP = 0x100, + FM10K_PF_MSG_ID_LPORT_CREATE = 0x200, + FM10K_PF_MSG_ID_LPORT_DELETE = 0x201, + FM10K_PF_MSG_ID_CONFIG = 0x300, + FM10K_PF_MSG_ID_UPDATE_PVID = 0x400, + FM10K_PF_MSG_ID_CREATE_FLOW_TABLE = 0x501, + FM10K_PF_MSG_ID_DELETE_FLOW_TABLE = 0x502, + FM10K_PF_MSG_ID_UPDATE_FLOW = 0x503, + FM10K_PF_MSG_ID_DELETE_FLOW = 0x504, + FM10K_PF_MSG_ID_SET_FLOW_STATE = 0x505, + FM10K_PF_MSG_ID_GET_1588_INFO = 0x506, + FM10K_PF_MSG_ID_1588_TIMESTAMP = 0x701, +}; + +enum fm10k_pf_tlv_attr_id_v1 { + FM10K_PF_ATTR_ID_ERR = 0x00, + FM10K_PF_ATTR_ID_LPORT_MAP = 0x01, + FM10K_PF_ATTR_ID_XCAST_MODE = 0x02, + FM10K_PF_ATTR_ID_MAC_UPDATE = 0x03, + FM10K_PF_ATTR_ID_VLAN_UPDATE = 0x04, + FM10K_PF_ATTR_ID_CONFIG = 0x05, + FM10K_PF_ATTR_ID_CREATE_FLOW_TABLE = 0x06, + FM10K_PF_ATTR_ID_DELETE_FLOW_TABLE = 0x07, + FM10K_PF_ATTR_ID_UPDATE_FLOW = 0x08, + FM10K_PF_ATTR_ID_FLOW_STATE = 0x09, + FM10K_PF_ATTR_ID_FLOW_HANDLE = 0x0A, + FM10K_PF_ATTR_ID_DELETE_FLOW = 0x0B, + FM10K_PF_ATTR_ID_PORT = 0x0C, + FM10K_PF_ATTR_ID_UPDATE_PVID = 0x0D, + FM10K_PF_ATTR_ID_1588_TIMESTAMP = 0x10, +}; + +#define FM10K_MSG_LPORT_MAP_GLORT_SHIFT 0 +#define FM10K_MSG_LPORT_MAP_GLORT_SIZE 16 +#define FM10K_MSG_LPORT_MAP_MASK_SHIFT 16 +#define FM10K_MSG_LPORT_MAP_MASK_SIZE 16 + +#define FM10K_MSG_UPDATE_PVID_GLORT_SHIFT 0 +#define FM10K_MSG_UPDATE_PVID_GLORT_SIZE 16 +#define FM10K_MSG_UPDATE_PVID_PVID_SHIFT 16 +#define FM10K_MSG_UPDATE_PVID_PVID_SIZE 16 + +struct fm10k_mac_update { + __le32 mac_lower; + __le16 mac_upper; + __le16 vlan; + __le16 glort; + u8 flags; + u8 action; +}; + +struct fm10k_global_table_data { + __le32 used; + __le32 avail; +}; + +struct fm10k_swapi_error { + __le32 status; + struct fm10k_global_table_data mac; + struct fm10k_global_table_data nexthop; + struct fm10k_global_table_data ffu; +}; + +struct fm10k_swapi_1588_timestamp { + __le64 egress; + __le64 ingress; + __le16 dglort; + __le16 sglort; +}; + +#define FM10K_PF_MSG_LPORT_CREATE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_CREATE, NULL, func) +#define FM10K_PF_MSG_LPORT_DELETE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_DELETE, NULL, func) +s32 fm10k_msg_lport_map_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[]; +#define FM10K_PF_MSG_LPORT_MAP_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_MAP, \ + fm10k_lport_map_msg_attr, func) +s32 fm10k_msg_update_pvid_pf(struct fm10k_hw *, u32 **, + struct 
fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_update_pvid_msg_attr[]; +#define FM10K_PF_MSG_UPDATE_PVID_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_UPDATE_PVID, \ + fm10k_update_pvid_msg_attr, func) + +s32 fm10k_msg_err_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_err_msg_attr[]; +#define FM10K_PF_MSG_ERR_HANDLER(msg, func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_##msg, fm10k_err_msg_attr, func) + +extern const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[]; +#define FM10K_PF_MSG_1588_TIMESTAMP_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_1588_TIMESTAMP, \ + fm10k_1588_timestamp_msg_attr, func) + +s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +extern const struct fm10k_msg_data fm10k_iov_msg_data_pf[]; + +s32 fm10k_init_ops_pf(struct fm10k_hw *hw); +#endif /* _FM10K_PF_H */ diff --git a/lib/librte_pmd_fm10k/SHARED/fm10k_tlv.c b/lib/librte_pmd_fm10k/SHARED/fm10k_tlv.c new file mode 100644 index 0000000..a192a2a --- /dev/null +++ b/lib/librte_pmd_fm10k/SHARED/fm10k_tlv.c @@ -0,0 +1,914 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#include "fm10k_tlv.h" + +/** + * fm10k_tlv_msg_init - Initialize message block for TLV data storage + * @msg: Pointer to message block + * @msg_id: Message ID indicating message type + * + * This function returns success if provided with a valid message pointer + **/ +s32 fm10k_tlv_msg_init(u32 *msg, u16 msg_id) +{ + DEBUGFUNC("fm10k_tlv_msg_init"); + + /* verify pointer is not NULL */ + if (!msg) + return FM10K_ERR_PARAM; + + *msg = (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT) | msg_id; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_null_string - Place null terminated string on message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @string: Pointer to string to be stored in attribute + * + * This function will reorder a string to be CPU endian and store it in + * the attribute buffer. It will return success if provided with valid + * pointers. + **/ +s32 fm10k_tlv_attr_put_null_string(u32 *msg, u16 attr_id, + const unsigned char *string) +{ + u32 attr_data = 0, len = 0; + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_null_string"); + + /* verify pointers are not NULL */ + if (!string || !msg) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* copy string into local variable and then write to msg */ + do { + /* write data to message */ + if (len && !(len % 4)) { + attr[len / 4] = attr_data; + attr_data = 0; + } + + /* record character to offset location */ + attr_data |= (u32)(*string) << (8 * (len % 4)); + len++; + + /* test for NULL and then increment */ + } while (*(string++)); + + /* write last piece of data to message */ + attr[(len + 3) / 4] = attr_data; + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_null_string - Get null terminated string from attribute + * @attr: Pointer to attribute + * @string: Pointer to location of destination string + * + * This function pulls the string back out of the attribute and will place + * it in the array pointed to by string. It will return success if provided + * with valid pointers. + **/ +s32 fm10k_tlv_attr_get_null_string(u32 *attr, unsigned char *string) +{ + u32 len; + + DEBUGFUNC("fm10k_tlv_attr_get_null_string"); + + /* verify pointers are not NULL */ + if (!string || !attr) + return FM10K_ERR_PARAM; + + len = *attr >> FM10K_TLV_LEN_SHIFT; + attr++; + + while (len--) + string[len] = (u8)(attr[len / 4] >> (8 * (len % 4))); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_mac_vlan - Store MAC/VLAN attribute in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @mac_addr: MAC address to be stored + * + * This function will reorder a MAC address to be CPU endian and store it + * in the attribute buffer. It will return success if provided with + * valid pointers.
+ **/ +s32 fm10k_tlv_attr_put_mac_vlan(u32 *msg, u16 attr_id, + const u8 *mac_addr, u16 vlan) +{ + u32 len = ETH_ALEN << FM10K_TLV_LEN_SHIFT; + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_mac_vlan"); + + /* verify pointers are not NULL */ + if (!msg || !mac_addr) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* record attribute header, update message length */ + attr[0] = len | attr_id; + + /* copy value into local variable and then write to msg */ + attr[1] = FM10K_LE32_TO_CPU(*(const __le32 *)&mac_addr[0]); + attr[2] = FM10K_LE16_TO_CPU(*(const __le16 *)&mac_addr[4]); + attr[2] |= (u32)vlan << 16; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_mac_vlan - Get MAC/VLAN stored in attribute + * @attr: Pointer to attribute + * @attr_id: Attribute ID + * @mac_addr: location of buffer to store MAC address + * + * This function pulls the MAC address back out of the attribute and will + * place it in the array pointed to by mac_addr. It will return success + * if provided with valid pointers. + **/ +s32 fm10k_tlv_attr_get_mac_vlan(u32 *attr, u8 *mac_addr, u16 *vlan) +{ + DEBUGFUNC("fm10k_tlv_attr_get_mac_vlan"); + + /* verify pointers are not NULL */ + if (!mac_addr || !attr) + return FM10K_ERR_PARAM; + + *(__le32 *)&mac_addr[0] = FM10K_CPU_TO_LE32(attr[1]); + *(__le16 *)&mac_addr[4] = FM10K_CPU_TO_LE16((u16)(attr[2])); + *vlan = (u16)(attr[2] >> 16); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_bool - Add header indicating value "true" + * @msg: Pointer to message block + * @attr_id: Attribute ID + * + * This function will simply add an attribute header, the fact + * that the header is here means the attribute value is true, else + * it is false. The function will return success if provided with a + * valid pointer. + **/ +s32 fm10k_tlv_attr_put_bool(u32 *msg, u16 attr_id) +{ + DEBUGFUNC("fm10k_tlv_attr_put_bool"); + + /* verify pointers are not NULL */ + if (!msg) + return FM10K_ERR_PARAM; + + /* record attribute header */ + msg[FM10K_TLV_DWORD_LEN(*msg)] = attr_id; + + /* add header length to length */ + *msg += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_value - Store integer value attribute in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @value: Value to be written + * @len: Size of value + * + * This function will place an integer value of up to 8 bytes in size + * in a message attribute. The function will return success provided + * that msg is a valid pointer, and len is 1, 2, 4, or 8.
+ **/ +s32 fm10k_tlv_attr_put_value(u32 *msg, u16 attr_id, s64 value, u32 len) +{ + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_value"); + + /* verify non-null msg and len is 1, 2, 4, or 8 */ + if (!msg || !len || len > 8 || (len & (len - 1))) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + if (len < 4) { + attr[1] = (u32)value & ((0x1ul << (8 * len)) - 1); + } else { + attr[1] = (u32)value; + if (len > 4) + attr[2] = (u32)(value >> 32); + } + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_value - Get integer value stored in attribute + * @attr: Pointer to attribute + * @value: Pointer to destination buffer + * @len: Size of value + * + * This function will place an integer value of up to 8 bytes in size + * in the offset pointed to by value. The function will return success + * provided that pointers are valid and the len value matches the + * attribute length. + **/ +s32 fm10k_tlv_attr_get_value(u32 *attr, void *value, u32 len) +{ + DEBUGFUNC("fm10k_tlv_attr_get_value"); + + /* verify pointers are not NULL */ + if (!attr || !value) + return FM10K_ERR_PARAM; + + if ((*attr >> FM10K_TLV_LEN_SHIFT) != len) + return FM10K_ERR_PARAM; + + if (len == 8) + *(u64 *)value = ((u64)attr[2] << 32) | attr[1]; + else if (len == 4) + *(u32 *)value = attr[1]; + else if (len == 2) + *(u16 *)value = (u16)attr[1]; + else + *(u8 *)value = (u8)attr[1]; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_le_struct - Store little endian structure in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @le_struct: Pointer to structure to be written + * @len: Size of le_struct + * + * This function will place a little endian structure value in a message + * attribute. The function will return success provided that all pointers + * are valid and length is a non-zero multiple of 4. + **/ +s32 fm10k_tlv_attr_put_le_struct(u32 *msg, u16 attr_id, + const void *le_struct, u32 len) +{ + const __le32 *le32_ptr = (const __le32 *)le_struct; + u32 *attr; + u32 i; + + DEBUGFUNC("fm10k_tlv_attr_put_le_struct"); + + /* verify non-null msg and len is in 32 bit words */ + if (!msg || !len || (len % 4)) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* copy le32 structure into host byte order at 32b boundaries */ + for (i = 0; i < (len / 4); i++) + attr[i + 1] = FM10K_LE32_TO_CPU(le32_ptr[i]); + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_le_struct - Get little endian struct form attribute + * @attr: Pointer to attribute + * @le_struct: Pointer to structure to be written + * @len: Size of structure + * + * This function will place a little endian structure in the buffer + * pointed to by le_struct. The function will return success + * provided that pointers are valid and the len value matches the + * attribute length. 
+ **/ +s32 fm10k_tlv_attr_get_le_struct(u32 *attr, void *le_struct, u32 len) +{ + __le32 *le32_ptr = (__le32 *)le_struct; + u32 i; + + DEBUGFUNC("fm10k_tlv_attr_get_le_struct"); + + /* verify pointers are not NULL */ + if (!le_struct || !attr) + return FM10K_ERR_PARAM; + + if ((*attr >> FM10K_TLV_LEN_SHIFT) != len) + return FM10K_ERR_PARAM; + + attr++; + + for (i = 0; len; i++, len -= 4) + le32_ptr[i] = FM10K_CPU_TO_LE32(attr[i]); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_nest_start - Start a set of nested attributes + * @msg: Pointer to message block + * @attr_id: Attribute ID + * + * This function will mark off a new nested region for encapsulating + * a given set of attributes. The idea is if you wish to place a secondary + * structure within the message this mechanism allows for that. The + * function will return NULL on failure, and a pointer to the start + * of the nested attributes on success. + **/ +u32 *fm10k_tlv_attr_nest_start(u32 *msg, u16 attr_id) +{ + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_nest_start"); + + /* verify pointer is not NULL */ + if (!msg) + return NULL; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + attr[0] = attr_id; + + /* return pointer to nest header */ + return attr; +} + +/** + * fm10k_tlv_attr_nest_stop - Stop a set of nested attributes + * @msg: Pointer to message block + * + * This function closes off an existing set of nested attributes. The + * message pointer should be pointing to the parent of the nest. So in + * the case of a nest within the nest this would be the outer nest pointer. + * This function will return success provided all pointers are valid. + **/ +s32 fm10k_tlv_attr_nest_stop(u32 *msg) +{ + u32 *attr; + u32 len; + + DEBUGFUNC("fm10k_tlv_attr_nest_stop"); + + /* verify pointer is not NULL */ + if (!msg) + return FM10K_ERR_PARAM; + + /* locate the nested header and retrieve its length */ + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + len = (attr[0] >> FM10K_TLV_LEN_SHIFT) << FM10K_TLV_LEN_SHIFT; + + /* only include nest if data was added to it */ + if (len) { + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += len; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_validate - Validate attribute metadata + * @attr: Pointer to attribute + * @tlv_attr: Type and length info for attribute + * + * This function does some basic validation of the input TLV. It + * verifies the length, and in the case of null terminated strings + * it verifies that the last byte is null. The function will + * return FM10K_ERR_PARAM if any attribute is malformed, otherwise + * it returns 0.
+ **/ +STATIC s32 fm10k_tlv_attr_validate(u32 *attr, + const struct fm10k_tlv_attr *tlv_attr) +{ + u32 attr_id = *attr & FM10K_TLV_ID_MASK; + u16 len = *attr >> FM10K_TLV_LEN_SHIFT; + + DEBUGFUNC("fm10k_tlv_attr_validate"); + + /* verify this is an attribute and not a message */ + if (*attr & (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT)) + return FM10K_ERR_PARAM; + + /* search through the list of attributes to find a matching ID */ + while (tlv_attr->id < attr_id) + tlv_attr++; + + /* if didn't find a match then we should exit */ + if (tlv_attr->id != attr_id) + return FM10K_NOT_IMPLEMENTED; + + /* move to start of attribute data */ + attr++; + + switch (tlv_attr->type) { + case FM10K_TLV_NULL_STRING: + if (!len || + (attr[(len - 1) / 4] & (0xFF << (8 * ((len - 1) % 4))))) + return FM10K_ERR_PARAM; + if (len > tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_MAC_ADDR: + if (len != ETH_ALEN) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_BOOL: + if (len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_UNSIGNED: + case FM10K_TLV_SIGNED: + if (len != tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_LE_STRUCT: + /* struct must be 4 byte aligned */ + if ((len % 4) || len != tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_NESTED: + /* nested attributes must be 4 byte aligned */ + if (len % 4) + return FM10K_ERR_PARAM; + break; + default: + /* attribute id is mapped to bad value */ + return FM10K_ERR_PARAM; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_parse - Parses stream of attribute data + * @attr: Pointer to attribute list + * @results: Pointer array to store pointers to attributes + * @tlv_attr: Type and length info for attributes + * + * This function validates a stream of attributes and parses them + * up into an array of pointers stored in results. The function will + * return FM10K_ERR_PARAM on any input or message error, + * FM10K_NOT_IMPLEMENTED for any attribute that is outside of the array + * and 0 on success. 
+ **/ +s32 fm10k_tlv_attr_parse(u32 *attr, u32 **results, + const struct fm10k_tlv_attr *tlv_attr) +{ + u32 i, attr_id, offset = 0; + s32 err = 0; + u16 len; + + DEBUGFUNC("fm10k_tlv_attr_parse"); + + /* verify pointers are not NULL */ + if (!attr || !results) + return FM10K_ERR_PARAM; + + /* initialize results to NULL */ + for (i = 0; i < FM10K_TLV_RESULTS_MAX; i++) + results[i] = NULL; + + /* pull length from the message header */ + len = *attr >> FM10K_TLV_LEN_SHIFT; + + /* no attributes to parse if there is no length */ + if (!len) + return FM10K_SUCCESS; + + /* no attributes to parse, just raw data, message becomes attribute */ + if (!tlv_attr) { + results[0] = attr; + return FM10K_SUCCESS; + } + + /* move to start of attribute data */ + attr++; + + /* run through list parsing all attributes */ + while (offset < len) { + attr_id = *attr & FM10K_TLV_ID_MASK; + + if (attr_id < FM10K_TLV_RESULTS_MAX) + err = fm10k_tlv_attr_validate(attr, tlv_attr); + else + err = FM10K_NOT_IMPLEMENTED; + + if (err < 0) + return err; + if (!err) + results[attr_id] = attr; + + /* update offset */ + offset += FM10K_TLV_DWORD_LEN(*attr) * 4; + + /* move to next attribute */ + attr = &attr[FM10K_TLV_DWORD_LEN(*attr)]; + } + + /* we should find ourselves at the end of the list */ + if (offset != len) + return FM10K_ERR_PARAM; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_msg_parse - Parses message header and calls function handler + * @hw: Pointer to hardware structure + * @msg: Pointer to message + * @mbx: Pointer to mailbox information structure + * @func: Function array containing list of message handling functions + * + * This function should be the first function called upon receiving a + * message. The handler will identify the message type and call the correct + * handler for the given message. It will return the value from the function + * call on a recognized message type, otherwise it will return + * FM10K_NOT_IMPLEMENTED on an unrecognized type. + **/ +s32 fm10k_tlv_msg_parse(struct fm10k_hw *hw, u32 *msg, + struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *data) +{ + u32 *results[FM10K_TLV_RESULTS_MAX]; + u32 msg_id; + s32 err; + + DEBUGFUNC("fm10k_tlv_msg_parse"); + + /* verify pointer is not NULL */ + if (!msg || !data) + return FM10K_ERR_PARAM; + + /* verify this is a message and not an attribute */ + if (!(*msg & (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT))) + return FM10K_ERR_PARAM; + + /* grab message ID */ + msg_id = *msg & FM10K_TLV_ID_MASK; + + while (data->id < msg_id) + data++; + + /* if we didn't find it then pass it up as an error */ + if (data->id != msg_id) { + while (data->id != FM10K_TLV_ERROR) + data++; + } + + /* parse the attributes into the results list */ + err = fm10k_tlv_attr_parse(msg, results, data->attr); + if (err < 0) + return err; + + return data->func(hw, results, mbx); +} + +/** + * fm10k_tlv_msg_error - Default handler for unrecognized TLV message IDs + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Unused mailbox pointer + * + * This function is a default handler for unrecognized messages. At a + * a minimum it just indicates that the message requested was + * unimplemented. 
+ **/ +s32 fm10k_tlv_msg_error(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + UNREFERENCED_3PARAMETER(hw, results, mbx); + DEBUGOUT1("Unknown message ID %u\n", **results & FM10K_TLV_ID_MASK); + return FM10K_NOT_IMPLEMENTED; +} + +STATIC const unsigned char test_str[] = "fm10k"; +STATIC const unsigned char test_mac[ETH_ALEN] = { 0x12, 0x34, 0x56, + 0x78, 0x9a, 0xbc }; +STATIC const u16 test_vlan = 0x0FED; +STATIC const u64 test_u64 = 0xfedcba9876543210ull; +STATIC const u32 test_u32 = 0x87654321; +STATIC const u16 test_u16 = 0x8765; +STATIC const u8 test_u8 = 0x87; +STATIC const s64 test_s64 = -0x123456789abcdef0ll; +STATIC const s32 test_s32 = -0x1235678; +STATIC const s16 test_s16 = -0x1234; +STATIC const s8 test_s8 = -0x12; +STATIC const __le32 test_le[2] = { FM10K_CPU_TO_LE32(0x12345678), + FM10K_CPU_TO_LE32(0x9abcdef0)}; + +/* The message below is meant to be used as a test message to demonstrate + * how to use the TLV interface and to test the types. Normally this code + * be compiled out by stripping the code wrapped in FM10K_TLV_TEST_MSG + */ +const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[] = { + FM10K_TLV_ATTR_NULL_STRING(FM10K_TEST_MSG_STRING, 80), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_TEST_MSG_MAC_ADDR), + FM10K_TLV_ATTR_U8(FM10K_TEST_MSG_U8), + FM10K_TLV_ATTR_U16(FM10K_TEST_MSG_U16), + FM10K_TLV_ATTR_U32(FM10K_TEST_MSG_U32), + FM10K_TLV_ATTR_U64(FM10K_TEST_MSG_U64), + FM10K_TLV_ATTR_S8(FM10K_TEST_MSG_S8), + FM10K_TLV_ATTR_S16(FM10K_TEST_MSG_S16), + FM10K_TLV_ATTR_S32(FM10K_TEST_MSG_S32), + FM10K_TLV_ATTR_S64(FM10K_TEST_MSG_S64), + FM10K_TLV_ATTR_LE_STRUCT(FM10K_TEST_MSG_LE_STRUCT, 8), + FM10K_TLV_ATTR_NESTED(FM10K_TEST_MSG_NESTED), + FM10K_TLV_ATTR_S32(FM10K_TEST_MSG_RESULT), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_tlv_msg_test_generate_data - Stuff message with data + * @msg: Pointer to message + * @attr_flags: List of flags indicating what attributes to add + * + * This function is meant to load a message buffer with attribute data + **/ +STATIC void fm10k_tlv_msg_test_generate_data(u32 *msg, u32 attr_flags) +{ + DEBUGFUNC("fm10k_tlv_msg_test_generate_data"); + + if (attr_flags & (1 << FM10K_TEST_MSG_STRING)) + fm10k_tlv_attr_put_null_string(msg, FM10K_TEST_MSG_STRING, + test_str); + if (attr_flags & (1 << FM10K_TEST_MSG_MAC_ADDR)) + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_TEST_MSG_MAC_ADDR, + test_mac, test_vlan); + if (attr_flags & (1 << FM10K_TEST_MSG_U8)) + fm10k_tlv_attr_put_u8(msg, FM10K_TEST_MSG_U8, test_u8); + if (attr_flags & (1 << FM10K_TEST_MSG_U16)) + fm10k_tlv_attr_put_u16(msg, FM10K_TEST_MSG_U16, test_u16); + if (attr_flags & (1 << FM10K_TEST_MSG_U32)) + fm10k_tlv_attr_put_u32(msg, FM10K_TEST_MSG_U32, test_u32); + if (attr_flags & (1 << FM10K_TEST_MSG_U64)) + fm10k_tlv_attr_put_u64(msg, FM10K_TEST_MSG_U64, test_u64); + if (attr_flags & (1 << FM10K_TEST_MSG_S8)) + fm10k_tlv_attr_put_s8(msg, FM10K_TEST_MSG_S8, test_s8); + if (attr_flags & (1 << FM10K_TEST_MSG_S16)) + fm10k_tlv_attr_put_s16(msg, FM10K_TEST_MSG_S16, test_s16); + if (attr_flags & (1 << FM10K_TEST_MSG_S32)) + fm10k_tlv_attr_put_s32(msg, FM10K_TEST_MSG_S32, test_s32); + if (attr_flags & (1 << FM10K_TEST_MSG_S64)) + fm10k_tlv_attr_put_s64(msg, FM10K_TEST_MSG_S64, test_s64); + if (attr_flags & (1 << FM10K_TEST_MSG_LE_STRUCT)) + fm10k_tlv_attr_put_le_struct(msg, FM10K_TEST_MSG_LE_STRUCT, + test_le, 8); +} + +/** + * fm10k_tlv_msg_test_create - Create a test message testing all attributes + * @msg: Pointer to message + * @attr_flags: List of flags indicating what attributes to add 
+ * + * This function is meant to load a message buffer with all attribute types + * including a nested attribute. + **/ +void fm10k_tlv_msg_test_create(u32 *msg, u32 attr_flags) +{ + u32 *nest = NULL; + + DEBUGFUNC("fm10k_tlv_msg_test_create"); + + fm10k_tlv_msg_init(msg, FM10K_TLV_MSG_ID_TEST); + + fm10k_tlv_msg_test_generate_data(msg, attr_flags); + + /* check for nested attributes */ + attr_flags >>= FM10K_TEST_MSG_NESTED; + + if (attr_flags) { + nest = fm10k_tlv_attr_nest_start(msg, FM10K_TEST_MSG_NESTED); + + fm10k_tlv_msg_test_generate_data(nest, attr_flags); + + fm10k_tlv_attr_nest_stop(msg); + } +} + +/** + * fm10k_tlv_msg_test - Validate all results on test message receive + * @hw: Pointer to hardware structure + * @results: Pointer array to attributes in the mesage + * @mbx: Pointer to mailbox information structure + * + * This function does a check to verify all attributes match what the test + * message placed in the message buffer. It is the default handler + * for TLV test messages. + **/ +s32 fm10k_tlv_msg_test(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u32 *nest_results[FM10K_TLV_RESULTS_MAX]; + unsigned char result_str[80]; + unsigned char result_mac[ETH_ALEN]; + s32 err = FM10K_SUCCESS; + __le32 result_le[2]; + u16 result_vlan; + u64 result_u64; + u32 result_u32; + u16 result_u16; + u8 result_u8; + s64 result_s64; + s32 result_s32; + s16 result_s16; + s8 result_s8; + u32 reply[3]; + + DEBUGFUNC("fm10k_tlv_msg_test"); + + /* retrieve results of a previous test */ + if (!!results[FM10K_TEST_MSG_RESULT]) + return fm10k_tlv_attr_get_s32(results[FM10K_TEST_MSG_RESULT], + &mbx->test_result); + +parse_nested: + if (!!results[FM10K_TEST_MSG_STRING]) { + err = fm10k_tlv_attr_get_null_string( + results[FM10K_TEST_MSG_STRING], + result_str); + if (!err && memcmp(test_str, result_str, sizeof(test_str))) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_MAC_ADDR]) { + err = fm10k_tlv_attr_get_mac_vlan( + results[FM10K_TEST_MSG_MAC_ADDR], + result_mac, &result_vlan); + if (!err && memcmp(test_mac, result_mac, ETH_ALEN)) + err = FM10K_ERR_INVALID_VALUE; + if (!err && test_vlan != result_vlan) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U8]) { + err = fm10k_tlv_attr_get_u8(results[FM10K_TEST_MSG_U8], + &result_u8); + if (!err && test_u8 != result_u8) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U16]) { + err = fm10k_tlv_attr_get_u16(results[FM10K_TEST_MSG_U16], + &result_u16); + if (!err && test_u16 != result_u16) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U32]) { + err = fm10k_tlv_attr_get_u32(results[FM10K_TEST_MSG_U32], + &result_u32); + if (!err && test_u32 != result_u32) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U64]) { + err = fm10k_tlv_attr_get_u64(results[FM10K_TEST_MSG_U64], + &result_u64); + if (!err && test_u64 != result_u64) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S8]) { + err = fm10k_tlv_attr_get_s8(results[FM10K_TEST_MSG_S8], + &result_s8); + if (!err && test_s8 != result_s8) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S16]) { + err = fm10k_tlv_attr_get_s16(results[FM10K_TEST_MSG_S16], + &result_s16); + if (!err && test_s16 != result_s16) + err = 
FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S32]) { + err = fm10k_tlv_attr_get_s32(results[FM10K_TEST_MSG_S32], + &result_s32); + if (!err && test_s32 != result_s32) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S64]) { + err = fm10k_tlv_attr_get_s64(results[FM10K_TEST_MSG_S64], + &result_s64); + if (!err && test_s64 != result_s64) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_LE_STRUCT]) { + err = fm10k_tlv_attr_get_le_struct( + results[FM10K_TEST_MSG_LE_STRUCT], + result_le, + sizeof(result_le)); + if (!err && memcmp(test_le, result_le, sizeof(test_le))) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + + if (!!results[FM10K_TEST_MSG_NESTED]) { + /* clear any pointers */ + memset(nest_results, 0, sizeof(nest_results)); + + /* parse the nested attributes into the nest results list */ + err = fm10k_tlv_attr_parse(results[FM10K_TEST_MSG_NESTED], + nest_results, + fm10k_tlv_msg_test_attr); + if (err) + goto report_result; + + /* loop back through to the start */ + results = nest_results; + goto parse_nested; + } + +report_result: + /* generate reply with test result */ + fm10k_tlv_msg_init(reply, FM10K_TLV_MSG_ID_TEST); + fm10k_tlv_attr_put_s32(reply, FM10K_TEST_MSG_RESULT, err); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, reply); +} diff --git a/lib/librte_pmd_fm10k/SHARED/fm10k_tlv.h b/lib/librte_pmd_fm10k/SHARED/fm10k_tlv.h new file mode 100644 index 0000000..1b7c4d4 --- /dev/null +++ b/lib/librte_pmd_fm10k/SHARED/fm10k_tlv.h @@ -0,0 +1,199 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _FM10K_TLV_H_ +#define _FM10K_TLV_H_ + +/* forward declaration */ +struct fm10k_msg_data; + +#include "fm10k_type.h" + +/* Message / Argument header format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | Length | Flags | Type / ID | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * The message header format described here is used for messages that are + * passed between the PF and the VF. To allow for messages larger then + * mailbox size we will provide a message with the above header and it + * will be segmented and transported to the mailbox to the other side where + * it is reassembled. It contains the following fields: + * Len: Length of the message in bytes excluding the message header + * Flags: TBD + * Rule: These will be the message/argument types we pass + */ +/* message data header */ +#define FM10K_TLV_ID_SHIFT 0 +#define FM10K_TLV_ID_SIZE 16 +#define FM10K_TLV_ID_MASK ((1u << FM10K_TLV_ID_SIZE) - 1) +#define FM10K_TLV_FLAGS_SHIFT 16 +#define FM10K_TLV_FLAGS_MSG 0x1 +#define FM10K_TLV_FLAGS_SIZE 4 +#define FM10K_TLV_LEN_SHIFT 20 +#define FM10K_TLV_LEN_SIZE 12 + +#define FM10K_TLV_HDR_LEN 4ul +#define FM10K_TLV_LEN_ALIGN_MASK \ + ((FM10K_TLV_HDR_LEN - 1) << FM10K_TLV_LEN_SHIFT) +#define FM10K_TLV_LEN_ALIGN(tlv) \ + (((tlv) + FM10K_TLV_LEN_ALIGN_MASK) & ~FM10K_TLV_LEN_ALIGN_MASK) +#define FM10K_TLV_DWORD_LEN(tlv) \ + ((u16)((FM10K_TLV_LEN_ALIGN(tlv)) >> (FM10K_TLV_LEN_SHIFT + 2)) + 1) + +#define FM10K_TLV_RESULTS_MAX 32 + +enum fm10k_tlv_type { + FM10K_TLV_NULL_STRING, + FM10K_TLV_MAC_ADDR, + FM10K_TLV_BOOL, + FM10K_TLV_UNSIGNED, + FM10K_TLV_SIGNED, + FM10K_TLV_LE_STRUCT, + FM10K_TLV_NESTED, + FM10K_TLV_MAX_TYPE +}; + +#define FM10K_TLV_ERROR (~0u) + +struct fm10k_tlv_attr { + unsigned int id; + enum fm10k_tlv_type type; + u16 len; +}; + +#define FM10K_TLV_ATTR_NULL_STRING(id, len) { id, FM10K_TLV_NULL_STRING, len } +#define FM10K_TLV_ATTR_MAC_ADDR(id) { id, FM10K_TLV_MAC_ADDR, 6 } +#define FM10K_TLV_ATTR_BOOL(id) { id, FM10K_TLV_BOOL, 0 } +#define FM10K_TLV_ATTR_U8(id) { id, FM10K_TLV_UNSIGNED, 1 } +#define FM10K_TLV_ATTR_U16(id) { id, FM10K_TLV_UNSIGNED, 2 } +#define FM10K_TLV_ATTR_U32(id) { id, FM10K_TLV_UNSIGNED, 4 } +#define FM10K_TLV_ATTR_U64(id) { id, FM10K_TLV_UNSIGNED, 8 } +#define FM10K_TLV_ATTR_S8(id) { id, FM10K_TLV_SIGNED, 1 } +#define FM10K_TLV_ATTR_S16(id) { id, FM10K_TLV_SIGNED, 2 } +#define FM10K_TLV_ATTR_S32(id) { id, FM10K_TLV_SIGNED, 4 } +#define FM10K_TLV_ATTR_S64(id) { id, FM10K_TLV_SIGNED, 8 } +#define FM10K_TLV_ATTR_LE_STRUCT(id, len) { id, FM10K_TLV_LE_STRUCT, len } +#define FM10K_TLV_ATTR_NESTED(id) { id, FM10K_TLV_NESTED } +#define FM10K_TLV_ATTR_LAST { FM10K_TLV_ERROR } + +struct fm10k_msg_data { + unsigned int id; + const struct fm10k_tlv_attr *attr; + s32 (*func)(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +}; + +#define FM10K_MSG_HANDLER(id, attr, func) { id, attr, func } + +s32 fm10k_tlv_msg_init(u32 *, u16); +s32 fm10k_tlv_attr_put_null_string(u32 *, u16, const unsigned char *); +s32 fm10k_tlv_attr_get_null_string(u32 *, unsigned char *); +s32 fm10k_tlv_attr_put_mac_vlan(u32 *, u16, const u8 *, u16); +s32 fm10k_tlv_attr_get_mac_vlan(u32 *, u8 *, u16 *); +s32 fm10k_tlv_attr_put_bool(u32 *, u16); +s32 fm10k_tlv_attr_put_value(u32 *, u16, s64, u32); +#define fm10k_tlv_attr_put_u8(msg, attr_id, val) \ + 
fm10k_tlv_attr_put_value(msg, attr_id, val, 1) +#define fm10k_tlv_attr_put_u16(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 2) +#define fm10k_tlv_attr_put_u32(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 4) +#define fm10k_tlv_attr_put_u64(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 8) +#define fm10k_tlv_attr_put_s8(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 1) +#define fm10k_tlv_attr_put_s16(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 2) +#define fm10k_tlv_attr_put_s32(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 4) +#define fm10k_tlv_attr_put_s64(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 8) +s32 fm10k_tlv_attr_get_value(u32 *, void *, u32); +#define fm10k_tlv_attr_get_u8(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u8)) +#define fm10k_tlv_attr_get_u16(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u16)) +#define fm10k_tlv_attr_get_u32(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u32)) +#define fm10k_tlv_attr_get_u64(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u64)) +#define fm10k_tlv_attr_get_s8(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s8)) +#define fm10k_tlv_attr_get_s16(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s16)) +#define fm10k_tlv_attr_get_s32(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s32)) +#define fm10k_tlv_attr_get_s64(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s64)) +s32 fm10k_tlv_attr_put_le_struct(u32 *, u16, const void *, u32); +s32 fm10k_tlv_attr_get_le_struct(u32 *, void *, u32); +u32 *fm10k_tlv_attr_nest_start(u32 *, u16); +s32 fm10k_tlv_attr_nest_stop(u32 *); +s32 fm10k_tlv_attr_parse(u32 *, u32 **, const struct fm10k_tlv_attr *); +s32 fm10k_tlv_msg_parse(struct fm10k_hw *, u32 *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *); +s32 fm10k_tlv_msg_error(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *); + +#define FM10K_TLV_MSG_ID_TEST 0 + +enum fm10k_tlv_test_attr_id { + FM10K_TEST_MSG_UNSET, + FM10K_TEST_MSG_STRING, + FM10K_TEST_MSG_MAC_ADDR, + FM10K_TEST_MSG_U8, + FM10K_TEST_MSG_U16, + FM10K_TEST_MSG_U32, + FM10K_TEST_MSG_U64, + FM10K_TEST_MSG_S8, + FM10K_TEST_MSG_S16, + FM10K_TEST_MSG_S32, + FM10K_TEST_MSG_S64, + FM10K_TEST_MSG_LE_STRUCT, + FM10K_TEST_MSG_NESTED, + FM10K_TEST_MSG_RESULT, + FM10K_TEST_MSG_MAX +}; + +extern const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[]; +void fm10k_tlv_msg_test_create(u32 *, u32); +s32 fm10k_tlv_msg_test(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); + +#define FM10K_TLV_MSG_TEST_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_TLV_MSG_ID_TEST, fm10k_tlv_msg_test_attr, func) +#define FM10K_TLV_MSG_ERROR_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_TLV_ERROR, NULL, func) +#endif /* _FM10K_MSG_H_ */ diff --git a/lib/librte_pmd_fm10k/SHARED/fm10k_type.h b/lib/librte_pmd_fm10k/SHARED/fm10k_type.h new file mode 100644 index 0000000..e7bf28d --- /dev/null +++ b/lib/librte_pmd_fm10k/SHARED/fm10k_type.h @@ -0,0 +1,925 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. 
+ + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_TYPE_H_ +#define _FM10K_TYPE_H_ + +/* forward declaration */ +struct fm10k_hw; + +#include "fm10k_osdep.h" +#include "fm10k_mbx.h" + +#define FM10K_INTEL_VENDOR_ID 0x8086 +#define FM10K_DEV_ID_PF 0x15A4 +#define FM10K_DEV_ID_VF 0x15A5 + +#define FM10K_MAX_QUEUES 256 +#define FM10K_MAX_QUEUES_PF 128 +#define FM10K_MAX_QUEUES_POOL 16 + +#define FM10K_48_BIT_MASK 0x0000FFFFFFFFFFFFull +#define FM10K_STAT_VALID 0x80000000 + +/* PCI Bus Info */ +#define FM10K_PCIE_LINK_CAP 0x7C +#define FM10K_PCIE_LINK_STATUS 0x82 +#define FM10K_PCIE_LINK_WIDTH 0x3F0 +#define FM10K_PCIE_LINK_WIDTH_1 0x10 +#define FM10K_PCIE_LINK_WIDTH_2 0x20 +#define FM10K_PCIE_LINK_WIDTH_4 0x40 +#define FM10K_PCIE_LINK_WIDTH_8 0x80 +#define FM10K_PCIE_LINK_SPEED 0xF +#define FM10K_PCIE_LINK_SPEED_2500 0x1 +#define FM10K_PCIE_LINK_SPEED_5000 0x2 +#define FM10K_PCIE_LINK_SPEED_8000 0x3 + +/* PCIe payload size */ +#define FM10K_PCIE_DEV_CAP 0x74 +#define FM10K_PCIE_DEV_CAP_PAYLOAD 0x07 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_128 0x00 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_256 0x01 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_512 0x02 +#define FM10K_PCIE_DEV_CTRL 0x78 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD 0xE0 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_128 0x00 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_256 0x20 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_512 0x40 + +/* PCIe MSI-X Capability info */ +#define FM10K_PCI_MSIX_MSG_CTRL 0xB2 +#define FM10K_PCI_MSIX_MSG_CTRL_TBL_SZ_MASK 0x7FF +#define FM10K_MAX_MSIX_VECTORS 256 +#define FM10K_MAX_VECTORS_PF 256 +#define FM10K_MAX_VECTORS_POOL 32 + +/* PCIe SR-IOV Info */ +#define FM10K_PCIE_SRIOV_CTRL 0x190 +#define FM10K_PCIE_SRIOV_CTRL_VFARI 0x10 + +#define FM10K_SUCCESS 0 +#define FM10K_ERR_DEVICE_NOT_SUPPORTED -1 +#define FM10K_ERR_PARAM -2 +#define FM10K_ERR_NO_RESOURCES -3 +#define FM10K_ERR_REQUESTS_PENDING -4 +#define FM10K_ERR_RESET_REQUESTED -5 +#define FM10K_ERR_DMA_PENDING -6 +#define FM10K_ERR_RESET_FAILED -7 +#define FM10K_ERR_INVALID_MAC_ADDR -8 +#define FM10K_ERR_INVALID_VALUE -9 +#define FM10K_NOT_IMPLEMENTED 0x7FFFFFFF + +#define UNREFERENCED_XPARAMETER +#define UNREFERENCED_1PARAMETER(_p) (_p) +#define UNREFERENCED_2PARAMETER(_p, _q) do { (_p); (_q); } while (0) +#define UNREFERENCED_3PARAMETER(_p, _q, _r) do { (_p); (_q); (_r); } while (0) + +/* 
Start of PF registers */ +#define FM10K_CTRL 0x0000 +#define FM10K_CTRL_BAR4_ALLOWED 0x00000004 + +#define FM10K_CTRL_EXT 0x0001 +#define FM10K_CTRL_EXT_NS_DIS 0x00000001 +#define FM10K_CTRL_EXT_RO_DIS 0x00000002 +#define FM10K_CTRL_EXT_SWITCH_LOOPBACK 0x00000004 +#define FM10K_EXVET 0x0002 +#define FM10K_EXVET_ETHERTYPE_MASK 0x000000FF +#define FM10K_EXVET_TAG_SIZE_SHIFT 16 +#define FM10K_EXVET_AFTER_VLAN 0x00040000 +#define FM10K_GCR 0x0003 +#define FM10K_FACTPS 0x0004 +#define FM10K_GCR_EXT 0x0005 + +/* Interrupt control registers */ +#define FM10K_EICR 0x0006 +#define FM10K_EICR_PCA_FAULT 0x00000001 +#define FM10K_EICR_THI_FAULT 0x00000004 +#define FM10K_EICR_FUM_FAULT 0x00000020 +#define FM10K_EICR_FAULT_MASK 0x0000003F +#define FM10K_EICR_MAILBOX 0x00000040 +#define FM10K_EICR_SWITCHREADY 0x00000080 +#define FM10K_EICR_SWITCHNOTREADY 0x00000100 +#define FM10K_EICR_SWITCHINTERRUPT 0x00000200 +#define FM10K_EICR_SRAMERROR 0x00000400 +#define FM10K_EICR_VFLR 0x00000800 +#define FM10K_EICR_MAXHOLDTIME 0x00001000 +#define FM10K_EIMR 0x0007 +#define FM10K_EIMR_PCA_FAULT 0x00000001 +#define FM10K_EIMR_THI_FAULT 0x00000010 +#define FM10K_EIMR_FUM_FAULT 0x00000400 +#define FM10K_EIMR_MAILBOX 0x00001000 +#define FM10K_EIMR_SWITCHREADY 0x00004000 +#define FM10K_EIMR_SWITCHNOTREADY 0x00010000 +#define FM10K_EIMR_SWITCHINTERRUPT 0x00040000 +#define FM10K_EIMR_SRAMERROR 0x00100000 +#define FM10K_EIMR_VFLR 0x00400000 +#define FM10K_EIMR_MAXHOLDTIME 0x01000000 +#define FM10K_EIMR_ALL 0x55555555 +#define FM10K_EIMR_DISABLE(NAME) ((FM10K_EIMR_ ## NAME) << 0) +#define FM10K_EIMR_ENABLE(NAME) ((FM10K_EIMR_ ## NAME) << 1) +#define FM10K_FAULT_ADDR_LO 0x0 +#define FM10K_FAULT_ADDR_HI 0x1 +#define FM10K_FAULT_SPECINFO 0x2 +#define FM10K_FAULT_FUNC 0x3 +#define FM10K_FAULT_SIZE 0x4 +#define FM10K_FAULT_FUNC_VALID 0x00008000 +#define FM10K_FAULT_FUNC_PF 0x00004000 +#define FM10K_FAULT_FUNC_VF_MASK 0x00003F00 +#define FM10K_FAULT_FUNC_VF_SHIFT 8 +#define FM10K_FAULT_FUNC_TYPE_MASK 0x000000FF + +#define FM10K_PCA_FAULT 0x0008 +#define FM10K_THI_FAULT 0x0010 +#define FM10K_FUM_FAULT 0x001C + +/* Rx queue timeout indicator */ +#define FM10K_MAXHOLDQ(_n) ((_n) + 0x0020) + +/* Switch Manager info */ +#define FM10K_SM_AREA(_n) ((_n) + 0x0028) + +/* GLORT mapping registers */ +#define FM10K_DGLORTMAP(_n) ((_n) + 0x0030) +#define FM10K_DGLORT_COUNT 8 +#define FM10K_DGLORTMAP_MASK_SHIFT 16 +#define FM10K_DGLORTMAP_ANY 0x00000000 +#define FM10K_DGLORTMAP_NONE 0x0000FFFF +#define FM10K_DGLORTMAP_ZERO 0xFFFF0000 +#define FM10K_DGLORTDEC(_n) ((_n) + 0x0038) +#define FM10K_DGLORTDEC_VSILENGTH_SHIFT 4 +#define FM10K_DGLORTDEC_VSIBASE_SHIFT 7 +#define FM10K_DGLORTDEC_PCLENGTH_SHIFT 14 +#define FM10K_DGLORTDEC_QBASE_SHIFT 16 +#define FM10K_DGLORTDEC_RSSLENGTH_SHIFT 24 +#define FM10K_DGLORTDEC_INNERRSS_ENABLE 0x08000000 +#define FM10K_TUNNEL_CFG 0x0040 +#define FM10K_TUNNEL_CFG_NVGRE_SHIFT 16 +#define FM10K_TUNNEL_CFG_GENEVE 0x0041 +#define FM10K_SWPRI_MAP(_n) ((_n) + 0x0050) +#define FM10K_SWPRI_MAX 16 +#define FM10K_RSSRK(_n, _m) (((_n) * 0x10) + (_m) + 0x0800) +#define FM10K_RSSRK_SIZE 10 +#define FM10K_RSSRK_ENTRIES_PER_REG 4 +#define FM10K_RETA(_n, _m) (((_n) * 0x20) + (_m) + 0x1000) +#define FM10K_RETA_SIZE 32 +#define FM10K_RETA_ENTRIES_PER_REG 4 +#define FM10K_MAX_RSS_INDICES 128 + +/* Rate limiting registers */ +#define FM10K_TC_CREDIT(_n) ((_n) + 0x2000) +#define FM10K_TC_CREDIT_CREDIT_MASK 0x001FFFFF +#define FM10K_TC_MAXCREDIT(_n) ((_n) + 0x2040) +#define FM10K_TC_MAXCREDIT_64K 0x00010000 +#define FM10K_TC_RATE(_n) 
((_n) + 0x2080) +#define FM10K_TC_RATE_QUANTA_MASK 0x0000FFFF +#define FM10K_TC_RATE_INTERVAL_4US_GEN1 0x00020000 +#define FM10K_TC_RATE_INTERVAL_4US_GEN2 0x00040000 +#define FM10K_TC_RATE_INTERVAL_4US_GEN3 0x00080000 +#define FM10K_TC_RATE_STATUS 0x20C0 +#define FM10K_PAUSE 0x20C2 + +/* DMA control registers */ +#define FM10K_DMA_CTRL 0x20C3 +#define FM10K_DMA_CTRL_TX_ENABLE 0x00000001 +#define FM10K_DMA_CTRL_TX_HOST_PENDING 0x00000002 +#define FM10K_DMA_CTRL_TX_DATA 0x00000004 +#define FM10K_DMA_CTRL_TX_ACTIVE 0x00000008 +#define FM10K_DMA_CTRL_RX_ENABLE 0x00000010 +#define FM10K_DMA_CTRL_RX_HOST_PENDING 0x00000020 +#define FM10K_DMA_CTRL_RX_DATA 0x00000040 +#define FM10K_DMA_CTRL_RX_ACTIVE 0x00000080 +#define FM10K_DMA_CTRL_RX_DESC_SIZE 0x00000100 +#define FM10K_DMA_CTRL_MINMSS_SHIFT 9 +#define FM10K_DMA_CTRL_MINMSS_64 0x00008000 +#define FM10K_DMA_CTRL_MAX_HOLD_TIME_SHIFT 23 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN3 0x04800000 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN2 0x04000000 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN1 0x03800000 +#define FM10K_DMA_CTRL_DATAPATH_RESET 0x20000000 +#define FM10K_DMA_CTRL_MAXNUMOFQ_MASK 0xC0000000 +#define FM10K_DMA_CTRL_32_DESC 0x00000000 +#define FM10K_DMA_CTRL_64_DESC 0x40000000 +#define FM10K_DMA_CTRL_128_DESC 0x80000000 + +#define FM10K_DMA_CTRL2 0x20C4 +#define FM10K_DMA_CTRL2_TX_FRAME_SPACING_SHIFT 5 +#define FM10K_DMA_CTRL2_SWITCH_READY 0x00002000 +#define FM10K_DMA_CTRL2_RX_DESC_READ_PRIO_SHIFT 14 +#define FM10K_DMA_CTRL2_TX_DESC_READ_PRIO_SHIFT 17 +#define FM10K_DMA_CTRL2_TX_DATA_READ_PRIO_SHIFT 20 + +/* TSO flags configuration + * First packet contains all flags except for fin and psh + * Middle packet contains only urg and ack + * Last packet contains urg, ack, fin, and psh + */ +#define FM10K_TSO_FLAGS_LOW 0x00300FF6 +#define FM10K_TSO_FLAGS_HI 0x00000039 +#define FM10K_DTXTCPFLGL 0x20C5 +#define FM10K_DTXTCPFLGH 0x20C6 + +#define FM10K_TPH_CTRL 0x20C7 +#define FM10K_TPH_CTRL_DISABLE_READ_HINT 0x00000080 +#define FM10K_MRQC(_n) ((_n) + 0x2100) +#define FM10K_MRQC_TCP_IPV4 0x00000001 +#define FM10K_MRQC_IPV4 0x00000002 +#define FM10K_MRQC_IPV6 0x00000010 +#define FM10K_MRQC_TCP_IPV6 0x00000020 +#define FM10K_MRQC_UDP_IPV4 0x00000040 +#define FM10K_MRQC_UDP_IPV6 0x00000080 + +#define FM10K_TQMAP(_n) ((_n) + 0x2800) +#define FM10K_TQMAP_TABLE_SIZE 2048 +#define FM10K_RQMAP(_n) ((_n) + 0x3000) +#define FM10K_RQMAP_TABLE_SIZE 2048 + +/* Hardware Statistics */ +#define FM10K_STATS_TIMEOUT 0x3800 +#define FM10K_STATS_UR 0x3801 +#define FM10K_STATS_CA 0x3802 +#define FM10K_STATS_UM 0x3803 +#define FM10K_STATS_XEC 0x3804 +#define FM10K_STATS_VLAN_DROP 0x3805 +#define FM10K_STATS_LOOPBACK_DROP 0x3806 +#define FM10K_STATS_NODESC_DROP 0x3807 + +/* Timesync registers */ +#define FM10K_RRTIME_CFG 0x3808 +#define FM10K_RRTIME_LIMIT(_n) ((_n) + 0x380C) +#define FM10K_RRTIME_COUNT(_n) ((_n) + 0x3810) +#define FM10K_SYSTIME 0x3814 +#define FM10K_SYSTIME0 0x3816 +#define FM10K_SYSTIME_CFG 0x3818 +#define FM10K_SYSTIME_CFG_STEP_MASK 0x0000000F + +/* PCIe state registers */ +#define FM10K_PFVFBME(_n) ((_n) + 0x381A) +#define FM10K_PHYADDR 0x381C + +/* Rx ring registers */ +#define FM10K_RDBAL(_n) ((0x40 * (_n)) + 0x4000) +#define FM10K_RDBAH(_n) ((0x40 * (_n)) + 0x4001) +#define FM10K_RDLEN(_n) ((0x40 * (_n)) + 0x4002) +#define FM10K_TPH_RXCTRL(_n) ((0x40 * (_n)) + 0x4003) +#define FM10K_TPH_RXCTRL_DESC_TPHEN 0x00000020 +#define FM10K_TPH_RXCTRL_HDR_TPHEN 0x00000040 +#define FM10K_TPH_RXCTRL_DATA_TPHEN 0x00000080 +#define FM10K_TPH_RXCTRL_DESC_RROEN 0x00000200 
+#define FM10K_TPH_RXCTRL_DATA_WROEN 0x00002000 +#define FM10K_TPH_RXCTRL_HDR_WROEN 0x00008000 +#define FM10K_RDH(_n) ((0x40 * (_n)) + 0x4004) +#define FM10K_RDT(_n) ((0x40 * (_n)) + 0x4005) +#define FM10K_RXQCTL(_n) ((0x40 * (_n)) + 0x4006) +#define FM10K_RXQCTL_ENABLE 0x00000001 +#define FM10K_RXQCTL_PF 0x000000FC +#define FM10K_RXQCTL_VF_SHIFT 2 +#define FM10K_RXQCTL_VF 0x00000100 +#define FM10K_RXQCTL_ID_MASK (FM10K_RXQCTL_PF | FM10K_RXQCTL_VF) +#define FM10K_RXDCTL(_n) ((0x40 * (_n)) + 0x4007) +#define FM10K_RXDCTL_WRITE_BACK_MIN_DELAY 0x00000001 +#define FM10K_RXDCTL_WRITE_BACK_IMM 0x00000100 +#define FM10K_RXDCTL_DROP_ON_EMPTY 0x00000200 +#define FM10K_RXINT(_n) ((0x40 * (_n)) + 0x4008) +#define FM10K_RXINT_TIMER_SHIFT 8 +#define FM10K_SRRCTL(_n) ((0x40 * (_n)) + 0x4009) +#define FM10K_SRRCTL_BSIZEPKT_SHIFT 8 /* shift _right_ */ +#define FM10K_SRRCTL_BSIZEHDR_SHIFT 2 /* shift _left_ */ +#define FM10K_SRRCTL_BSIZEHDR_MASK 0x00003F00 +#define FM10K_SRRCTL_DESCTYPE_HDR_SPLIT 0x00004000 +#define FM10K_SRRCTL_DESCTYPE_SIZE_SPLIT 0x00008000 +#define FM10K_SRRCTL_PSRTYPE_INNER_TCPHDR 0x00010000 +#define FM10K_SRRCTL_PSRTYPE_INNER_UDPHDR 0x00020000 +#define FM10K_SRRCTL_PSRTYPE_INNER_IPV4HDR 0x00040000 +#define FM10K_SRRCTL_PSRTYPE_INNER_IPV6HDR 0x00080000 +#define FM10K_SRRCTL_PSRTYPE_INNER_L2HDR 0x00100000 +#define FM10K_SRRCTL_PSRTYPE_ENCAPHDR 0x00200000 +#define FM10K_SRRCTL_PSRTYPE_TCPHDR 0x00400000 +#define FM10K_SRRCTL_PSRTYPE_UDPHDR 0x00800000 +#define FM10K_SRRCTL_PSRTYPE_IPV4HDR 0x01000000 +#define FM10K_SRRCTL_PSRTYPE_IPV6HDR 0x02000000 +#define FM10K_SRRCTL_PSRTYPE_L2HDR 0x04000000 +#define FM10K_SRRCTL_LOOPBACK_SUPPRESS 0x40000000 +#define FM10K_SRRCTL_BUFFER_CHAINING_EN 0x80000000 + +/* Rx Statistics */ +#define FM10K_QPRC(_n) ((0x40 * (_n)) + 0x400A) +#define FM10K_QPRDC(_n) ((0x40 * (_n)) + 0x400B) +#define FM10K_QBRC_L(_n) ((0x40 * (_n)) + 0x400C) +#define FM10K_QBRC_H(_n) ((0x40 * (_n)) + 0x400D) + +/* Rx GLORT register */ +#define FM10K_RX_SGLORT(_n) ((0x40 * (_n)) + 0x400E) + +/* Tx ring registers */ +#define FM10K_TDBAL(_n) ((0x40 * (_n)) + 0x8000) +#define FM10K_TDBAH(_n) ((0x40 * (_n)) + 0x8001) +#define FM10K_TDLEN(_n) ((0x40 * (_n)) + 0x8002) +#define FM10K_TPH_TXCTRL(_n) ((0x40 * (_n)) + 0x8003) +#define FM10K_TPH_TXCTRL_DESC_TPHEN 0x00000020 +#define FM10K_TPH_TXCTRL_DESC_RROEN 0x00000200 +#define FM10K_TPH_TXCTRL_DESC_WROEN 0x00000800 +#define FM10K_TPH_TXCTRL_DATA_RROEN 0x00002000 +#define FM10K_TDH(_n) ((0x40 * (_n)) + 0x8004) +#define FM10K_TDT(_n) ((0x40 * (_n)) + 0x8005) +#define FM10K_TXDCTL(_n) ((0x40 * (_n)) + 0x8006) +#define FM10K_TXDCTL_ENABLE 0x00004000 +#define FM10K_TXDCTL_MAX_TIME_SHIFT 16 +#define FM10K_TXDCTL_PUSH_DESC 0x10000000 +#define FM10K_TXQCTL(_n) ((0x40 * (_n)) + 0x8007) +#define FM10K_TXQCTL_PF 0x0000003F +#define FM10K_TXQCTL_VF 0x00000040 +#define FM10K_TXQCTL_ID_MASK (FM10K_TXQCTL_PF | FM10K_TXQCTL_VF) +#define FM10K_TXQCTL_PC_SHIFT 7 +#define FM10K_TXQCTL_PC_MASK 0x00000380 +#define FM10K_TXQCTL_TC_SHIFT 10 +#define FM10K_TXQCTL_TC_MASK 0x0000FC00 +#define FM10K_TXQCTL_VID_SHIFT 16 +#define FM10K_TXQCTL_VID_MASK 0x0FFF0000 +#define FM10K_TXQCTL_UNLIMITED_BW 0x10000000 +#define FM10K_TXQCTL_PUSHMODEDIS 0x20000000 +#define FM10K_TXINT(_n) ((0x40 * (_n)) + 0x8008) +#define FM10K_TXINT_TIMER_SHIFT 8 + +/* Tx Statistics */ +#define FM10K_QPTC(_n) ((0x40 * (_n)) + 0x8009) +#define FM10K_QBTC_L(_n) ((0x40 * (_n)) + 0x800A) +#define FM10K_QBTC_H(_n) ((0x40 * (_n)) + 0x800B) + +/* Tx Push registers */ +#define FM10K_TQDLOC(_n) ((0x40 * (_n)) 
+ 0x800C) +#define FM10K_TQDLOC_BASE_32_DESC 0x08 +#define FM10K_TQDLOC_BASE_64_DESC 0x10 +#define FM10K_TQDLOC_BASE_128_DESC 0x20 +#define FM10K_TQDLOC_SIZE_32_DESC 0x00050000 +#define FM10K_TQDLOC_SIZE_64_DESC 0x00060000 +#define FM10K_TQDLOC_SIZE_128_DESC 0x00070000 +#define FM10K_TQDLOC_SIZE_SHIFT 16 +#define FM10K_TX_DCACHE(_n, _m) ((0x400 * (_n)) + (0x4 * (_m)) + 0x40000) + +/* Tx GLORT registers */ +#define FM10K_TX_SGLORT(_n) ((0x40 * (_n)) + 0x800D) +#define FM10K_PFVTCTL(_n) ((0x40 * (_n)) + 0x800E) +#define FM10K_PFVTCTL_FTAG_DESC_ENABLE 0x00000001 + +/* Interrupt moderation and control registers */ +#define FM10K_PBACL(_n) ((_n) + 0x10000) +#define FM10K_INT_MAP(_n) ((_n) + 0x10080) +#define FM10K_INT_MAP_TIMER0 0x00000000 +#define FM10K_INT_MAP_TIMER1 0x00000100 +#define FM10K_INT_MAP_IMMEDIATE 0x00000200 +#define FM10K_INT_MAP_DISABLE 0x00000300 +#define FM10K_MSIX_VECTOR_ADDR_LO(_n) ((0x4 * (_n)) + 0x11000) +#define FM10K_MSIX_VECTOR_ADDR_HI(_n) ((0x4 * (_n)) + 0x11001) +#define FM10K_MSIX_VECTOR_DATA(_n) ((0x4 * (_n)) + 0x11002) +#define FM10K_MSIX_VECTOR_MASK(_n) ((0x4 * (_n)) + 0x11003) +#define FM10K_INT_CTRL 0x12000 +#define FM10K_INT_CTRL_ENABLEMODERATOR 0x00000400 +#define FM10K_ITR(_n) ((_n) + 0x12400) +#define FM10K_ITR_INTERVAL1_SHIFT 12 +#define FM10K_ITR_TIMER0_EXPIRED 0x01000000 +#define FM10K_ITR_TIMER1_EXPIRED 0x02000000 +#define FM10K_ITR_PENDING0 0x04000000 +#define FM10K_ITR_PENDING1 0x08000000 +#define FM10K_ITR_PENDING2 0x10000000 +#define FM10K_ITR_AUTOMASK 0x20000000 +#define FM10K_ITR_MASK_SET 0x40000000 +#define FM10K_ITR_MASK_CLEAR 0x80000000 +#define FM10K_ITR2(_n) ((0x2 * (_n)) + 0x12800) +#define FM10K_ITR2_LP(_n) ((0x2 * (_n)) + 0x12801) +#define FM10K_ITR_REG_COUNT 768 +#define FM10K_ITR_REG_COUNT_PF 256 + +/* Switch manager interrupt registers */ +#define FM10K_IP 0x13000 +#define FM10K_IP_HOT_RESET 0x00000001 +#define FM10K_IP_DEVICE_STATE_CHANGE 0x00000002 +#define FM10K_IP_MAILBOX 0x00000004 +#define FM10K_IP_VPD_REQUEST 0x00000008 +#define FM10K_IP_SRAMERROR 0x00000010 +#define FM10K_IP_PFLR 0x00000020 +#define FM10K_IP_DATAPATHRESET 0x00000040 +#define FM10K_IP_OUTOFRESET 0x00000080 +#define FM10K_IP_NOTINRESET 0x00000100 +#define FM10K_IP_TIMEOUT 0x00000200 +#define FM10K_IP_VFLR 0x00000400 +#define FM10K_IM 0x13001 +#define FM10K_IB 0x13002 +#define FM10K_SRAM_IP 0x13003 +#define FM10K_SRAM_IM 0x13004 + +/* VLAN registers */ +#define FM10K_VLAN_TABLE(_n, _m) ((0x80 * (_n)) + (_m) + 0x14000) +#define FM10K_VLAN_TABLE_SIZE 128 + +/* VLAN specific message offsets */ +#define FM10K_VLAN_TABLE_VID_MAX 4096 +#define FM10K_VLAN_TABLE_VSI_MAX 64 +#define FM10K_VLAN_LENGTH_SHIFT 16 +#define FM10K_VLAN_CLEAR (1 << 15) +#define FM10K_VLAN_ALL \ + ((FM10K_VLAN_TABLE_VID_MAX - 1) << FM10K_VLAN_LENGTH_SHIFT) + +/* VF FLR event notification registers */ +#define FM10K_PFVFLRE(_n) ((0x1 * (_n)) + 0x18844) +#define FM10K_PFVFLREC(_n) ((0x1 * (_n)) + 0x18846) + +/* Defines for size of uncacheable and write-combining memories */ +#define FM10K_UC_ADDR_START 0x000000 /* start of standard regs */ +#define FM10K_WC_ADDR_START 0x100000 /* start of Tx Desc Cache */ +#define FM10K_DBI_ADDR_START 0x200000 /* start of debug registers */ +#define FM10K_UC_ADDR_SIZE (FM10K_WC_ADDR_START - FM10K_UC_ADDR_START) +#define FM10K_WC_ADDR_SIZE (FM10K_DBI_ADDR_START - FM10K_WC_ADDR_START) + +/* Define timeouts for resets and disables */ +#define FM10K_QUEUE_DISABLE_TIMEOUT 100 +#define FM10K_RESET_TIMEOUT 150 + +/* Maximum supported combined inner and outer header length 
for encapsulation */ +#define FM10K_TUNNEL_HEADER_LENGTH 184 + +/* VF registers */ +#define FM10K_VFCTRL 0x00000 +#define FM10K_VFCTRL_RST 0x00000008 +#define FM10K_VFINT_MAP 0x00030 +#define FM10K_VFSYSTIME 0x00040 +#define FM10K_VFITR(_n) ((_n) + 0x00060) +#define FM10K_VFPBACL(_n) ((_n) + 0x00008) + +#ifndef ETH_ALEN +#define ETH_ALEN 6 +#endif /* ETH_ALEN */ + +/* make certain address is not 0 */ +#define FM10K_IS_ZERO_ETHER_ADDR(addr) \ +(!((addr)[0] | (addr)[1] | (addr)[2] | (addr)[3] | (addr)[4] | (addr)[5])) + +#define FM10K_IS_MULTICAST_ETHER_ADDR(addr) ((addr)[0] & 0x1) + +/* make certain address is not multicast or 0 */ +#define FM10K_IS_VALID_ETHER_ADDR(addr) \ +(!FM10K_IS_MULTICAST_ETHER_ADDR(addr) && !FM10K_IS_ZERO_ETHER_ADDR(addr)) + +enum fm10k_int_source { + fm10k_int_Mailbox = 0, + fm10k_int_PCIeFault = 1, + fm10k_int_SwitchUpDown = 2, + fm10k_int_SwitchEvent = 3, + fm10k_int_SRAM = 4, + fm10k_int_VFLR = 5, + fm10k_int_MaxHoldTime = 6, + fm10k_int_sources_max_pf +}; + +/* PCIe bus speeds */ +enum fm10k_bus_speed { + fm10k_bus_speed_unknown = 0, + fm10k_bus_speed_2500 = 2500, + fm10k_bus_speed_5000 = 5000, + fm10k_bus_speed_8000 = 8000, + fm10k_bus_speed_reserved +}; + +/* PCIe bus widths */ +enum fm10k_bus_width { + fm10k_bus_width_unknown = 0, + fm10k_bus_width_pcie_x1 = 1, + fm10k_bus_width_pcie_x2 = 2, + fm10k_bus_width_pcie_x4 = 4, + fm10k_bus_width_pcie_x8 = 8, + fm10k_bus_width_reserved +}; + +/* PCIe payload sizes */ +enum fm10k_bus_payload { + fm10k_bus_payload_unknown = 0, + fm10k_bus_payload_128 = 1, + fm10k_bus_payload_256 = 2, + fm10k_bus_payload_512 = 3, + fm10k_bus_payload_reserved +}; + +/* Bus parameters */ +struct fm10k_bus_info { + enum fm10k_bus_speed speed; + enum fm10k_bus_width width; + enum fm10k_bus_payload payload; +}; + +/* Statistics related declarations */ +struct fm10k_hw_stat { + u64 count; + u32 base_l; + u32 base_h; +}; + +struct fm10k_hw_stats_q { + struct fm10k_hw_stat tx_bytes; + struct fm10k_hw_stat tx_packets; +#define tx_stats_idx tx_packets.base_h + struct fm10k_hw_stat rx_bytes; + struct fm10k_hw_stat rx_packets; +#define rx_stats_idx rx_packets.base_h + struct fm10k_hw_stat rx_drops; +}; + +struct fm10k_hw_stats { + struct fm10k_hw_stat timeout; +#define stats_idx timeout.base_h + struct fm10k_hw_stat ur; + struct fm10k_hw_stat ca; + struct fm10k_hw_stat um; + struct fm10k_hw_stat xec; + struct fm10k_hw_stat vlan_drop; + struct fm10k_hw_stat loopback_drop; + struct fm10k_hw_stat nodesc_drop; + struct fm10k_hw_stats_q q[FM10K_MAX_QUEUES_PF]; +}; + +/* Establish DGLORT feature priority */ +enum fm10k_dglortdec_idx { + fm10k_dglort_default = 0, + fm10k_dglort_vf_rsvd0 = 1, + fm10k_dglort_vf_rss = 2, + fm10k_dglort_pf_rsvd0 = 3, + fm10k_dglort_pf_queue = 4, + fm10k_dglort_pf_vsi = 5, + fm10k_dglort_pf_rsvd1 = 6, + fm10k_dglort_pf_rss = 7 +}; + +struct fm10k_dglort_cfg { + u16 glort; /* GLORT base */ + u16 queue_b; /* Base value for queue */ + u8 vsi_b; /* Base value for VSI */ + u8 idx; /* index of DGLORTDEC entry */ + u8 rss_l; /* RSS indices */ + u8 pc_l; /* Priority Class indices */ + u8 vsi_l; /* Number of bits from GLORT used to determine VSI */ + u8 queue_l; /* Number of bits from GLORT used to determine queue */ + u8 shared_l; /* Ignored bits from GLORT resulting in shared VSI */ + u8 inner_rss; /* Boolean value if inner header is used for RSS */ +}; + +enum fm10k_pca_fault { + PCA_NO_FAULT, + PCA_UNMAPPED_ADDR, + PCA_BAD_QACCESS_PF, + PCA_BAD_QACCESS_VF, + PCA_MALICIOUS_REQ, + PCA_POISONED_TLP, + PCA_TLP_ABORT, + __PCA_MAX +}; 
+ +enum fm10k_thi_fault { + THI_NO_FAULT, + THI_MAL_DIS_Q_FAULT, + __THI_MAX +}; + +enum fm10k_fum_fault { + FUM_NO_FAULT, + FUM_UNMAPPED_ADDR, + FUM_POISONED_TLP, + FUM_BAD_VF_QACCESS, + FUM_ADD_DECODE_ERR, + FUM_RO_ERROR, + FUM_QPRC_CRC_ERROR, + FUM_CSR_TIMEOUT, + FUM_INVALID_TYPE, + FUM_INVALID_LENGTH, + FUM_INVALID_BE, + FUM_INVALID_ALIGN, + __FUM_MAX +}; + +struct fm10k_fault { + u64 address; /* Address at the time fault was detected */ + u32 specinfo; /* Extra info on this fault (fault dependent) */ + u8 type; /* Fault value dependent on subunit */ + u8 func; /* Function number of the fault */ +}; + +struct fm10k_mac_ops { + /* basic bring-up and tear-down */ + s32 (*reset_hw)(struct fm10k_hw *); + s32 (*init_hw)(struct fm10k_hw *); + s32 (*start_hw)(struct fm10k_hw *); + s32 (*stop_hw)(struct fm10k_hw *); + s32 (*get_bus_info)(struct fm10k_hw *); + s32 (*get_host_state)(struct fm10k_hw *, bool *); + bool (*is_slot_appropriate)(struct fm10k_hw *); + s32 (*update_vlan)(struct fm10k_hw *, u32, u8, bool); + s32 (*read_mac_addr)(struct fm10k_hw *); + s32 (*update_uc_addr)(struct fm10k_hw *, u16, const u8 *, + u16, bool, u8); + s32 (*update_mc_addr)(struct fm10k_hw *, u16, const u8 *, u16, bool); + s32 (*update_xcast_mode)(struct fm10k_hw *, u16, u8); + void (*update_int_moderator)(struct fm10k_hw *); + s32 (*update_lport_state)(struct fm10k_hw *, u16, u16, bool); + void (*update_hw_stats)(struct fm10k_hw *, struct fm10k_hw_stats *); + void (*rebind_hw_stats)(struct fm10k_hw *, struct fm10k_hw_stats *); + s32 (*configure_dglort_map)(struct fm10k_hw *, + struct fm10k_dglort_cfg *); + void (*set_dma_mask)(struct fm10k_hw *, u64); + s32 (*get_fault)(struct fm10k_hw *, int, struct fm10k_fault *); + void (*request_lport_map)(struct fm10k_hw *); +}; + +enum fm10k_mac_type { + fm10k_mac_unknown = 0, + fm10k_mac_pf, + fm10k_mac_vf, + fm10k_num_macs +}; + +struct fm10k_mac_info { + struct fm10k_mac_ops ops; + enum fm10k_mac_type type; + u8 addr[ETH_ALEN]; + u8 perm_addr[ETH_ALEN]; + u16 default_vid; + u16 max_msix_vectors; + u16 max_queues; + bool vlan_override; + bool get_host_state; + bool tx_ready; + u32 dglort_map; +}; + +struct fm10k_swapi_table_info { + u32 used; + u32 avail; +}; + +struct fm10k_swapi_info { + u32 status; + struct fm10k_swapi_table_info mac; + struct fm10k_swapi_table_info nexthop; + struct fm10k_swapi_table_info ffu; +}; + +enum fm10k_xcast_modes { + FM10K_XCAST_MODE_ALLMULTI = 0, + FM10K_XCAST_MODE_MULTI = 1, + FM10K_XCAST_MODE_PROMISC = 2, + FM10K_XCAST_MODE_NONE = 3, + FM10K_XCAST_MODE_DISABLE = 4 +}; + +#define FM10K_VF_TC_MAX 100000 /* 100,000 Mb/s aka 100Gb/s */ +#define FM10K_VF_TC_MIN 1 /* 1 Mb/s is the slowest rate */ + +struct fm10k_vf_info { + /* mbx must be first field in struct unless all default IOV message + * handlers are redone as the assumption is that vf_info starts + * at the same offset as the mailbox + */ + struct fm10k_mbx_info mbx; /* PF side of VF mailbox */ + int rate; /* Tx BW cap as defined by OS */ + u16 glort; /* resource tag for this VF */ + u16 sw_vid; /* Switch API assigned VLAN */ + u16 pf_vid; /* PF assigned Default VLAN */ + u8 mac[ETH_ALEN]; /* PF Default MAC address */ + u8 vsi; /* VSI idenfifier */ + u8 vf_idx; /* which VF this is */ + u8 vf_flags; /* flags indicating what modes + * are supported for the port + */ +}; + +#define FM10K_VF_FLAG_ALLMULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_ALLMULTI) +#define FM10K_VF_FLAG_MULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_MULTI) +#define FM10K_VF_FLAG_PROMISC_CAPABLE ((u8)1 << 
FM10K_XCAST_MODE_PROMISC) +#define FM10K_VF_FLAG_NONE_CAPABLE ((u8)1 << FM10K_XCAST_MODE_NONE) +#define FM10K_VF_FLAG_CAPABLE(vf_info) ((vf_info)->vf_flags & (u8)0xF) +#define FM10K_VF_FLAG_ENABLED(vf_info) ((vf_info)->vf_flags >> 4) +#define FM10K_VF_FLAG_SET_MODE(mode) ((u8)0x10 << (mode)) +#define FM10K_VF_FLAG_ENABLED_MODE_SHIFT 4 +#define FM10K_VF_FLAG_SET_MODE_MASK ((u8)0xF0) +#define FM10K_VF_FLAG_SET_MODE_NONE \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_NONE) +#define FM10K_VF_FLAG_MULTI_ENABLED \ + (FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_ALLMULTI) | \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_MULTI) | \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_PROMISC)) + +struct fm10k_iov_ops { + /* IOV related bring-up and tear-down */ + s32 (*assign_resources)(struct fm10k_hw *, u16, u16); + s32 (*configure_tc)(struct fm10k_hw *, u16, int); + s32 (*assign_int_moderator)(struct fm10k_hw *, u16); + s32 (*assign_default_mac_vlan)(struct fm10k_hw *, + struct fm10k_vf_info *); + s32 (*reset_resources)(struct fm10k_hw *, + struct fm10k_vf_info *); + s32 (*set_lport)(struct fm10k_hw *, struct fm10k_vf_info *, u16, u8); + void (*reset_lport)(struct fm10k_hw *, struct fm10k_vf_info *); + void (*update_stats)(struct fm10k_hw *, struct fm10k_hw_stats_q *, u16); + s32 (*report_timestamp)(struct fm10k_hw *, struct fm10k_vf_info *, u64); +}; + +struct fm10k_iov_info { + struct fm10k_iov_ops ops; + u16 total_vfs; + u16 num_vfs; + u16 num_pools; +}; + +struct fm10k_hw { + u32 *hw_addr; + void *back; + struct fm10k_mac_info mac; + struct fm10k_bus_info bus; + struct fm10k_bus_info bus_caps; + struct fm10k_iov_info iov; + struct fm10k_mbx_info mbx; + struct fm10k_swapi_info swapi; + u16 device_id; + u16 vendor_id; + u16 subsystem_device_id; + u16 subsystem_vendor_id; + u8 revision_id; +}; + +/* Number of Transmit and Receive Descriptors must be a multiple of 8 */ +#define FM10K_REQ_TX_DESCRIPTOR_MULTIPLE 8 +#define FM10K_REQ_RX_DESCRIPTOR_MULTIPLE 8 + +/* Transmit Descriptor */ +struct fm10k_tx_desc { + __le64 buffer_addr; /* Address of the descriptor's data buffer */ + __le16 buflen; /* Length of data to be DMAed */ + __le16 vlan; /* VLAN_ID and VPRI to be inserted in FTAG */ + __le16 mss; /* MSS for segmentation offload */ + u8 hdrlen; /* Header size for segmentation offload */ + u8 flags; /* Status and offload request flags */ +}; + +/* Transmit Descriptor Cache Structure */ +struct fm10k_tx_desc_cache { + struct fm10k_tx_desc tx_desc[256]; +}; + +#define FM10K_TXD_FLAG_INT 0x01 +#define FM10K_TXD_FLAG_TIME 0x02 +#define FM10K_TXD_FLAG_CSUM 0x04 +#define FM10K_TXD_FLAG_CSUM2 0x08 +#define FM10K_TXD_FLAG_FTAG 0x10 +#define FM10K_TXD_FLAG_RS 0x20 +#define FM10K_TXD_FLAG_LAST 0x40 +#define FM10K_TXD_FLAG_DONE 0x80 + +#define FM10K_TXD_VLAN_PRI_SHIFT 12 + +/* These macros are meant to enable optimal placement of the RS and INT + * bits. It will point us to the last descriptor in the cache for either the + * start of the packet, or the end of the packet. If the index is actually + * at the start of the FIFO it will point to the offset for the last index + * in the FIFO to prevent an unnecessary write. 
+ */ +#define FM10K_TXD_WB_FIFO_SIZE 4 +#define FM10K_TXD_WB_IDX(idx) \ + (((idx) - 1) | (FM10K_TXD_WB_FIFO_SIZE - 1)) + +/* Receive Descriptor - 32B */ +union fm10k_rx_desc { + struct { + __le64 pkt_addr; /* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ + __le64 reserved; /* Empty space, RSS hash */ + __le64 timestamp; + } q; /* Read, Writeback, 64b quad-words */ + struct { + __le32 data; /* RSS and header data */ + __le32 rss; /* RSS Hash */ + __le32 staterr; + __le32 vlan_len; + __le32 glort; /* sglort/dglort */ + } d; /* Writeback, 32b double-words */ + struct { + __le16 pkt_info; /* RSS, Pkt type */ + __le16 hdr_info; /* Splithdr, hdrlen, xC */ + __le16 rss_lower; + __le16 rss_upper; + __le16 status; /* status/error */ + __le16 csum_err; /* checksum or extended error value */ + __le16 length; /* Packet length */ + __le16 vlan; /* VLAN tag */ + __le16 dglort; + __le16 sglort; + } w; /* Writeback, 16b words */ +}; + +#define FM10K_RXD_RSSTYPE_MASK 0x000F +enum fm10k_rdesc_rss_type { + FM10K_RSSTYPE_NONE = 0x0, + FM10K_RSSTYPE_IPV4_TCP = 0x1, + FM10K_RSSTYPE_IPV4 = 0x2, + FM10K_RSSTYPE_IPV6_TCP = 0x3, + /* Reserved 0x4 */ + FM10K_RSSTYPE_IPV6 = 0x5, + /* Reserved 0x6 */ + FM10K_RSSTYPE_IPV4_UDP = 0x7, + FM10K_RSSTYPE_IPV6_UDP = 0x8 + /* Reserved 0x9 - 0xF */ +}; + +#define FM10K_RXD_PKTTYPE_MASK 0x03F0 +#define FM10K_RXD_PKTTYPE_MASK_L3 0x0070 +#define FM10K_RXD_PKTTYPE_MASK_L4 0x0380 +#define FM10K_RXD_PKTTYPE_SHIFT 4 +#define FM10K_RXD_PKTTYPE_INNER_MASK_L3 0x1C00 +#define FM10K_RXD_PKTTYPE_INNER_MASK_L4 0xE000 +#define FM10K_RXD_PKTTYPE_INNER_SHIFT 10 +enum fm10k_rdesc_pkt_type { + /* L3 type */ + FM10K_PKTTYPE_OTHER = 0x00, + FM10K_PKTTYPE_IPV4 = 0x01, + FM10K_PKTTYPE_IPV4_EX = 0x02, + FM10K_PKTTYPE_IPV6 = 0x03, + FM10K_PKTTYPE_IPV6_EX = 0x04, + + /* L4 type */ + FM10K_PKTTYPE_TCP = 0x08, + FM10K_PKTTYPE_UDP = 0x10, + FM10K_PKTTYPE_GRE = 0x18, + FM10K_PKTTYPE_VXLAN = 0x20, + FM10K_PKTTYPE_NVGRE = 0x28, + FM10K_PKTTYPE_GENEVE = 0x30 +}; + +#define FM10K_RXD_HDR_INFO_XC_MASK 0x0006 +enum fm10k_rxdesc_xc { + FM10K_XC_UNICAST = 0x0, + FM10K_XC_MULTICAST = 0x4, + FM10K_XC_BROADCAST = 0x6 +}; + +#define FM10K_RXD_HDR_INFO_LEN_SHIFT 5 +#define FM10K_RXD_HDR_INFO_SPH 0x8000 + +#define FM10K_RXD_STATUS_DD 0x0001 /* Descriptor done */ +#define FM10K_RXD_STATUS_EOP 0x0002 /* End of packet */ +#define FM10K_RXD_STATUS_VEXT 0x0004 /* A VLAN tag is present */ +#define FM10K_RXD_STATUS_IPCS 0x0008 /* Indicates IPv4 csum */ +#define FM10K_RXD_STATUS_L4CS 0x0010 /* Indicates an L4 csum */ +#define FM10K_RXD_STATUS_IPCS2 0x0020 /* Inner header IPv4 csum */ +#define FM10K_RXD_STATUS_L4CS2 0x0040 /* Inner header L4 csum */ +#define FM10K_RXD_STATUS_IPFRAG_MASK 0x0180 /* Fragment mask */ +#define FM10K_RXD_STATUS_IPFRAG_CSUM 0x0100 /* Fragment w/ CSUM field */ +#define FM10K_RXD_STATUS_VEXT2 0x0200 /* A custom tag is present */ +#define FM10K_RXD_STATUS_HBO 0x0400 /* header buffer overrun */ +#define FM10K_RXD_STATUS_L4E2 0x0800 /* Inner header L4 csum err */ +#define FM10K_RXD_STATUS_IPE2 0x1000 /* Inner header IPv4 csum err */ +#define FM10K_RXD_STATUS_RXE 0x2000 /* Generic Rx error */ +#define FM10K_RXD_STATUS_L4E 0x4000 /* L4 csum error */ +#define FM10K_RXD_STATUS_IPE 0x8000 /* IPv4 csum error */ + +#define FM10K_RXD_ERR_SWITCH_ERROR 0x0001 /* Switch found bad packet */ +#define FM10K_RXD_ERR_NO_DESCRIPTOR 0x0002 /* No descriptor available */ +#define FM10K_RXD_ERR_PP_ERROR 0x0004 /* RAM error during processing */ +#define FM10K_RXD_ERR_SWITCH_READY 0x0008 /* Link 
transition mid-packet */ +#define FM10K_RXD_ERR_TOO_BIG 0x0010 /* Pkt too big for single buf */ + +#define FM10K_RXD_VLAN_ID_MASK 0x0FFF +#define FM10K_RXD_VLAN_PRI_SHIFT FM10K_TXD_VLAN_PRI_SHIFT + +struct fm10k_ftag { + __be16 swpri_type_user; + __be16 vlan; + __be16 sglort; + __be16 dglort; +}; + +#endif /* _FM10K_TYPE_H */ diff --git a/lib/librte_pmd_fm10k/SHARED/fm10k_vf.c b/lib/librte_pmd_fm10k/SHARED/fm10k_vf.c new file mode 100644 index 0000000..640b08b --- /dev/null +++ b/lib/librte_pmd_fm10k/SHARED/fm10k_vf.c @@ -0,0 +1,586 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#include "fm10k_vf.h" + +/** + * fm10k_stop_hw_vf - Stop Tx/Rx units + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_stop_hw_vf(struct fm10k_hw *hw) +{ + u8 *perm_addr = hw->mac.perm_addr; + u32 bal = 0, bah = 0; + s32 err; + u16 i; + + DEBUGFUNC("fm10k_stop_hw_vf"); + + /* we need to disable the queues before taking further steps */ + err = fm10k_stop_hw_generic(hw); + if (err) + return err; + + /* If permanent address is set then we need to restore it */ + if (FM10K_IS_VALID_ETHER_ADDR(perm_addr)) { + bal = (((u32)perm_addr[3]) << 24) | + (((u32)perm_addr[4]) << 16) | + (((u32)perm_addr[5]) << 8); + bah = (((u32)0xFF) << 24) | + (((u32)perm_addr[0]) << 16) | + (((u32)perm_addr[1]) << 8) | + ((u32)perm_addr[2]); + } + + /* The queues have already been disabled so we just need to + * update their base address registers + */ + for (i = 0; i < hw->mac.max_queues; i++) { + FM10K_WRITE_REG(hw, FM10K_TDBAL(i), bal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(i), bah); + FM10K_WRITE_REG(hw, FM10K_RDBAL(i), bal); + FM10K_WRITE_REG(hw, FM10K_RDBAH(i), bah); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_reset_hw_vf - VF hardware reset + * @hw: pointer to hardware structure + * + * This function should return the hardware to a state similar to the + * one it is in after just being initialized. + **/ +STATIC s32 fm10k_reset_hw_vf(struct fm10k_hw *hw) +{ + s32 err; + + DEBUGFUNC("fm10k_reset_hw_vf"); + + /* shut down queues we own and reset DMA configuration */ + err = fm10k_stop_hw_vf(hw); + if (err) + return err; + + /* Initiate VF reset */ + FM10K_WRITE_REG(hw, FM10K_VFCTRL, FM10K_VFCTRL_RST); + + /* Flush write and allow 100us for reset to complete */ + FM10K_WRITE_FLUSH(hw); + usec_delay(FM10K_RESET_TIMEOUT); + + /* Clear reset bit and verify it was cleared */ + FM10K_WRITE_REG(hw, FM10K_VFCTRL, 0); + if (FM10K_READ_REG(hw, FM10K_VFCTRL) & FM10K_VFCTRL_RST) + err = FM10K_ERR_RESET_FAILED; + + return err; +} + +/** + * fm10k_init_hw_vf - VF hardware initialization + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_init_hw_vf(struct fm10k_hw *hw) +{ + u32 tqdloc, tqdloc0 = ~FM10K_READ_REG(hw, FM10K_TQDLOC(0)); + s32 err; + u16 i; + + DEBUGFUNC("fm10k_init_hw_vf"); + + /* assume we always have at least 1 queue */ + for (i = 1; tqdloc0 && (i < FM10K_MAX_QUEUES_POOL); i++) { + /* verify the Descriptor cache offsets are increasing */ + tqdloc = ~FM10K_READ_REG(hw, FM10K_TQDLOC(i)); + if (!tqdloc || (tqdloc == tqdloc0)) + break; + + /* check to verify the PF doesn't own any of our queues */ + if (!~FM10K_READ_REG(hw, FM10K_TXQCTL(i)) || + !~FM10K_READ_REG(hw, FM10K_RXQCTL(i))) + break; + } + + /* shut down queues we own and reset DMA configuration */ + err = fm10k_disable_queues_generic(hw, i); + if (err) + return err; + + /* record maximum queue count */ + hw->mac.max_queues = i; + + return FM10K_SUCCESS; +} + +/** + * fm10k_is_slot_appropriate_vf - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. Since the VF has no control over + * the "slot" it is in, always indicate that the slot is appropriate.
+ **/ +STATIC bool fm10k_is_slot_appropriate_vf(struct fm10k_hw *hw) +{ + UNREFERENCED_1PARAMETER(hw); + DEBUGFUNC("fm10k_is_slot_appropriate_vf"); + + return TRUE; +} + +/* This structure defines the attributes to be parsed below */ +const struct fm10k_tlv_attr fm10k_mac_vlan_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_MAC_VLAN_MSG_VLAN), + FM10K_TLV_ATTR_BOOL(FM10K_MAC_VLAN_MSG_SET), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_MAC), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_DEFAULT_MAC), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_MULTICAST), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_update_vlan_vf - Update status of VLAN ID in VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @vsi: Reserved, should always be 0 + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for this VF. + **/ +STATIC s32 fm10k_update_vlan_vf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[4]; + + /* verify the index is not set */ + if (vsi) + return FM10K_ERR_PARAM; + + /* verify upper 4 bits of vid and length are 0 */ + if ((vid << 16 | vid) >> 28) + return FM10K_ERR_PARAM; + + /* encode set bit into the VLAN ID */ + if (!set) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_u32(msg, FM10K_MAC_VLAN_MSG_VLAN, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_msg_mac_vlan_vf - Read device MAC address from mailbox message + * @hw: pointer to the HW structure + * @results: Attributes for message + * @mbx: unused mailbox data + * + * This function should determine the MAC address for the VF + **/ +s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u8 perm_addr[ETH_ALEN]; + u16 vid; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_mac_vlan_vf"); + + /* record MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan( + results[FM10K_MAC_VLAN_MSG_DEFAULT_MAC], + perm_addr, &vid); + if (err) + return err; + + memcpy(hw->mac.perm_addr, perm_addr, ETH_ALEN); + hw->mac.default_vid = vid & (FM10K_VLAN_TABLE_VID_MAX - 1); + hw->mac.vlan_override = !!(vid & FM10K_VLAN_CLEAR); + + return FM10K_SUCCESS; +} + +/** + * fm10k_read_mac_addr_vf - Read device MAC address + * @hw: pointer to the HW structure + * + * This function should determine the MAC address for the VF + **/ +STATIC s32 fm10k_read_mac_addr_vf(struct fm10k_hw *hw) +{ + u8 perm_addr[ETH_ALEN]; + u32 base_addr; + + DEBUGFUNC("fm10k_read_mac_addr_vf"); + + base_addr = FM10K_READ_REG(hw, FM10K_TDBAL(0)); + + /* last byte should be 0 */ + if (base_addr << 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[3] = (u8)(base_addr >> 24); + perm_addr[4] = (u8)(base_addr >> 16); + perm_addr[5] = (u8)(base_addr >> 8); + + base_addr = FM10K_READ_REG(hw, FM10K_TDBAH(0)); + + /* first byte should be all 1's */ + if ((~base_addr) >> 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[0] = (u8)(base_addr >> 16); + perm_addr[1] = (u8)(base_addr >> 8); + perm_addr[2] = (u8)(base_addr); + + memcpy(hw->mac.perm_addr, perm_addr, ETH_ALEN); + memcpy(hw->mac.addr, perm_addr, ETH_ALEN); + + return FM10K_SUCCESS; +} + +/** + * fm10k_update_uc_addr_vf - Update device unicast address + * @hw: pointer to the HW structure + * @glort: unused + * @mac: MAC address to add/remove from table +
* @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure - unused + * + * This function is used to add or remove unicast MAC addresses for + * the VF. + **/ +STATIC s32 fm10k_update_uc_addr_vf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[7]; + + DEBUGFUNC("fm10k_update_uc_addr_vf"); + + UNREFERENCED_2PARAMETER(glort, flags); + + /* verify VLAN ID is valid */ + if (vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* verify MAC address is valid */ + if (!FM10K_IS_VALID_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + /* verify we are not locked down on the MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(hw->mac.perm_addr) && + memcmp(hw->mac.perm_addr, mac, ETH_ALEN)) + return FM10K_ERR_PARAM; + + /* add bit to notify us if this is a set or clear operation */ + if (!add) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_MAC, mac, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_mc_addr_vf - Update device multicast address + * @hw: pointer to the HW structure + * @glort: unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses for + * the VF. + **/ +STATIC s32 fm10k_update_mc_addr_vf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[7]; + + DEBUGFUNC("fm10k_update_mc_addr_vf"); + + UNREFERENCED_1PARAMETER(glort); + + /* verify VLAN ID is valid */ + if (vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* verify multicast address is valid */ + if (!FM10K_IS_MULTICAST_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + /* add bit to notify us if this is a set or clear operation */ + if (!add) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_MULTICAST, + mac, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_int_moderator_vf - Request update of interrupt moderator list + * @hw: pointer to hardware structure + * + * This function will issue a request to the PF to rescan our MSI-X table + * and to update the interrupt moderator linked list.
+ **/ +STATIC void fm10k_update_int_moderator_vf(struct fm10k_hw *hw) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[1]; + + /* generate MSI-X request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MSIX); + + /* load onto outgoing mailbox */ + mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/* This structure defines the attributes to be parsed below */ +const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[] = { + FM10K_TLV_ATTR_BOOL(FM10K_LPORT_STATE_MSG_DISABLE), + FM10K_TLV_ATTR_U8(FM10K_LPORT_STATE_MSG_XCAST_MODE), + FM10K_TLV_ATTR_BOOL(FM10K_LPORT_STATE_MSG_READY), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_lport_state_vf - Message handler for lport_state message from PF + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler is meant to capture the indication from the PF that we + * are ready to bring up the interface. + **/ +s32 fm10k_msg_lport_state_vf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_lport_state_vf"); + + hw->mac.dglort_map = !results[FM10K_LPORT_STATE_MSG_READY] ? + FM10K_DGLORTMAP_NONE : FM10K_DGLORTMAP_ZERO; + + return FM10K_SUCCESS; +} + +/** + * fm10k_update_lport_state_vf - Update device state in lower device + * @hw: pointer to the HW structure + * @glort: unused + * @count: number of logical ports to enable - unused (always 1) + * @enable: boolean value indicating if this is an enable or disable request + * + * Notify the lower device of a state change. If the lower device is + * enabled we can add filters, if it is disabled all filters for this + * logical port are flushed. + **/ +STATIC s32 fm10k_update_lport_state_vf(struct fm10k_hw *hw, u16 glort, + u16 count, bool enable) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[2]; + + UNREFERENCED_2PARAMETER(glort, count); + DEBUGFUNC("fm10k_update_lport_state_vf"); + + /* reset glort mask 0 as we have to wait to be enabled */ + hw->mac.dglort_map = FM10K_DGLORTMAP_NONE; + + /* generate port state request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + if (!enable) + fm10k_tlv_attr_put_bool(msg, FM10K_LPORT_STATE_MSG_DISABLE); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_xcast_mode_vf - Request update of multicast mode + * @hw: pointer to hardware structure + * @glort: unused + * @mode: integer value indicating mode being requested + * + * This function will attempt to request a higher mode for the port + * so that it can enable either multicast, multicast promiscuous, or + * promiscuous mode of operation.
+ **/ +STATIC s32 fm10k_update_xcast_mode_vf(struct fm10k_hw *hw, u16 glort, u8 mode) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3]; + + UNREFERENCED_1PARAMETER(glort); + DEBUGFUNC("fm10k_update_xcast_mode_vf"); + + if (mode > FM10K_XCAST_MODE_NONE) + return FM10K_ERR_PARAM; + + /* generate message requesting to change xcast mode */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + fm10k_tlv_attr_put_u8(msg, FM10K_LPORT_STATE_MSG_XCAST_MODE, mode); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +const struct fm10k_tlv_attr fm10k_1588_msg_attr[] = { + FM10K_TLV_ATTR_U64(FM10K_1588_MSG_TIMESTAMP), + FM10K_TLV_ATTR_LAST +}; + +/* currently there is no shared 1588 timestamp handler */ + +/** + * fm10k_update_hw_stats_vf - Updates hardware related statistics of VF + * @hw: pointer to hardware structure + * @stats: pointer to statistics structure + * + * This function collects and aggregates per queue hardware statistics. + **/ +STATIC void fm10k_update_hw_stats_vf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_update_hw_stats_vf"); + + fm10k_update_hw_stats_q(hw, stats->q, 0, hw->mac.max_queues); +} + +/** + * fm10k_rebind_hw_stats_vf - Resets base for hardware statistics of VF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function resets the base for queue hardware statistics. + **/ +STATIC void fm10k_rebind_hw_stats_vf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_rebind_hw_stats_vf"); + + /* Unbind Queue Statistics */ + fm10k_unbind_hw_stats_q(stats->q, 0, hw->mac.max_queues); + + /* Reinitialize bases for all stats */ + fm10k_update_hw_stats_vf(hw, stats); +} + +/** + * fm10k_configure_dglort_map_vf - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned. + **/ +STATIC s32 fm10k_configure_dglort_map_vf(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + UNREFERENCED_1PARAMETER(hw); + DEBUGFUNC("fm10k_configure_dglort_map_vf"); + + /* verify the dglort pointer */ + if (!dglort) + return FM10K_ERR_PARAM; + + /* stub for now until we determine correct message for this */ + + return FM10K_SUCCESS; +} + +static const struct fm10k_msg_data fm10k_msg_data_vf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_msg_mac_vlan_vf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_init_ops_vf - Inits func ptrs and MAC type + * @hw: pointer to hardware structure + * + * Initialize the function pointers and assign the MAC type for VF. + * Does not touch the hardware. 
+ **/ +s32 fm10k_init_ops_vf(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + + DEBUGFUNC("fm10k_init_ops_vf"); + + fm10k_init_ops_generic(hw); + + mac->ops.reset_hw = &fm10k_reset_hw_vf; + mac->ops.init_hw = &fm10k_init_hw_vf; + mac->ops.start_hw = &fm10k_start_hw_generic; + mac->ops.stop_hw = &fm10k_stop_hw_vf; + mac->ops.is_slot_appropriate = &fm10k_is_slot_appropriate_vf; + mac->ops.update_vlan = &fm10k_update_vlan_vf; + mac->ops.read_mac_addr = &fm10k_read_mac_addr_vf; + mac->ops.update_uc_addr = &fm10k_update_uc_addr_vf; + mac->ops.update_mc_addr = &fm10k_update_mc_addr_vf; + mac->ops.update_xcast_mode = &fm10k_update_xcast_mode_vf; + mac->ops.update_int_moderator = &fm10k_update_int_moderator_vf; + mac->ops.update_lport_state = &fm10k_update_lport_state_vf; + mac->ops.update_hw_stats = &fm10k_update_hw_stats_vf; + mac->ops.rebind_hw_stats = &fm10k_rebind_hw_stats_vf; + mac->ops.configure_dglort_map = &fm10k_configure_dglort_map_vf; + mac->ops.get_host_state = &fm10k_get_host_state_generic; + + mac->max_msix_vectors = fm10k_get_pcie_msix_count_generic(hw); + + return fm10k_pfvf_mbx_init(hw, &hw->mbx, fm10k_msg_data_vf, 0); +} diff --git a/lib/librte_pmd_fm10k/SHARED/fm10k_vf.h b/lib/librte_pmd_fm10k/SHARED/fm10k_vf.h new file mode 100644 index 0000000..e0338c4 --- /dev/null +++ b/lib/librte_pmd_fm10k/SHARED/fm10k_vf.h @@ -0,0 +1,91 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _FM10K_VF_H_ +#define _FM10K_VF_H_ + +#include "fm10k_type.h" +#include "fm10k_common.h" + +enum fm10k_vf_tlv_msg_id { + FM10K_VF_MSG_ID_TEST = 0, /* msg ID reserved for testing */ + FM10K_VF_MSG_ID_MSIX, + FM10K_VF_MSG_ID_MAC_VLAN, + FM10K_VF_MSG_ID_LPORT_STATE, + FM10K_VF_MSG_ID_1588, + FM10K_VF_MSG_ID_MAX, +}; + +enum fm10k_tlv_mac_vlan_attr_id { + FM10K_MAC_VLAN_MSG_VLAN, + FM10K_MAC_VLAN_MSG_SET, + FM10K_MAC_VLAN_MSG_MAC, + FM10K_MAC_VLAN_MSG_DEFAULT_MAC, + FM10K_MAC_VLAN_MSG_MULTICAST, + FM10K_MAC_VLAN_MSG_ID_MAX +}; + +enum fm10k_tlv_lport_state_attr_id { + FM10K_LPORT_STATE_MSG_DISABLE, + FM10K_LPORT_STATE_MSG_XCAST_MODE, + FM10K_LPORT_STATE_MSG_READY, + FM10K_LPORT_STATE_MSG_MAX +}; + +enum fm10k_tlv_1588_attr_id { + FM10K_1588_MSG_TIMESTAMP, + FM10K_1588_MSG_MAX +}; + +#define FM10K_VF_MSG_MSIX_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_MSIX, NULL, func) + +s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_mac_vlan_msg_attr[]; +#define FM10K_VF_MSG_MAC_VLAN_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_MAC_VLAN, \ + fm10k_mac_vlan_msg_attr, func) + +s32 fm10k_msg_lport_state_vf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[]; +#define FM10K_VF_MSG_LPORT_STATE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_LPORT_STATE, \ + fm10k_lport_state_msg_attr, func) + +extern const struct fm10k_tlv_attr fm10k_1588_msg_attr[]; +#define FM10K_VF_MSG_1588_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_1588, fm10k_1588_msg_attr, func) + +s32 fm10k_init_ops_vf(struct fm10k_hw *hw); +#endif /* _FM10K_VF_H */ -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
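A side note on the VF MAC handling in the patch above: the VF has no dedicated MAC address register, so fm10k_stop_hw_vf() parks the permanent MAC in the TDBAL/TDBAH base-address registers of the (disabled) queues, and fm10k_read_mac_addr_vf() later recovers it from queue 0. The standalone sketch below replays that byte layout outside the driver so it can be checked in isolation; pack_mac() and unpack_mac() are illustrative helper names, not functions from the patch set, and the sample MAC value is arbitrary.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Pack a MAC the way fm10k_stop_hw_vf() writes it into TDBAL/TDBAH:
 * bytes 3-5 fill the upper bytes of BAL (lowest byte stays zero) and
 * bytes 0-2 fill the lower bytes of BAH with the top byte forced to 0xFF. */
static void pack_mac(const uint8_t mac[6], uint32_t *bal, uint32_t *bah)
{
	*bal = ((uint32_t)mac[3] << 24) |
	       ((uint32_t)mac[4] << 16) |
	       ((uint32_t)mac[5] << 8);
	*bah = ((uint32_t)0xFF << 24) |
	       ((uint32_t)mac[0] << 16) |
	       ((uint32_t)mac[1] << 8) |
	       (uint32_t)mac[2];
}

/* Decode the two register values back into a MAC, mirroring the checks in
 * fm10k_read_mac_addr_vf(); returns -1 if the encoding is not valid. */
static int unpack_mac(uint32_t bal, uint32_t bah, uint8_t mac[6])
{
	if (bal << 24)		/* lowest byte of BAL must be zero */
		return -1;
	if ((~bah) >> 24)	/* top byte of BAH must be all ones */
		return -1;

	mac[3] = (uint8_t)(bal >> 24);
	mac[4] = (uint8_t)(bal >> 16);
	mac[5] = (uint8_t)(bal >> 8);
	mac[0] = (uint8_t)(bah >> 16);
	mac[1] = (uint8_t)(bah >> 8);
	mac[2] = (uint8_t)bah;
	return 0;
}

int main(void)
{
	const uint8_t mac[6] = { 0x02, 0x11, 0x22, 0x33, 0x44, 0x55 };
	uint8_t out[6];
	uint32_t bal, bah;

	pack_mac(mac, &bal, &bah);
	if (unpack_mac(bal, bah, out) == 0 && !memcmp(mac, out, 6))
		printf("TDBAL=0x%08" PRIx32 " TDBAH=0x%08" PRIx32
		       " round-trips correctly\n", bal, bah);
	return 0;
}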
* [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver 2015-01-30 5:07 ` [dpdk-dev] [PATCH 01/18] fm10k: add base driver Chen Jing D(Mark) @ 2015-02-04 10:40 ` Chen Jing D(Mark) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 01/15] fm10k: add base driver Chen Jing D(Mark) ` (14 more replies) 0 siblings, 15 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-04 10:40 UTC (permalink / raw) To: dev From: "Chen Jing D(Mark)" <jing.d.chen@intel.com> The patch set add poll mode driver for the host interface of Intel Red Rock Canyon silicon, which integrates NIC and switch functionalities. The patch set include below features: 1. Basic RX/TX functions for PF/VF. 2. Interrupt handling mechanism for PF/VF. 3. per queue start/stop functions for PF/VF. 4. Mailbox handling between PF/VF and PF/Switch Manager. 5. Receive Side Scaling (RSS) for PF/VF. 6. Scatter receive function for PF/VF. 7. reta update/query for PF/VF. 8. VLAN filter set for PF. 9. Link status query for PF/VF. Changes in v2: - Merge 3 patches into 1 to configure fm10k compile environment. - Rework on log code to follow style in ixgbe. - Rework log message, remove redundant '\n' - Update Copyright year from "2014" to "2015" - Change base driver directory name from SHARED to base - Add more description in log for patch "add PF and VF interrupt" - Merge 2 patches into 1 to register fm10k driver - Define macro to replace numeric for lower 32-bit mask. Jeff Shaw (15): fm10k: add base driver fm10k: add fm10k device id fm10k: register fm10k pmd PF driver Change config files to add fm10k into compile fm10k: add reta update/requery functions fm10k: add rx_queue_setup/release function fm10k: add tx_queue_setup/release function fm10k: add RX/TX single queue start/stop function fm10k: add dev start/stop functions fm10k: add receive and tranmit function fm10k: add PF RSS support fm10k: Add scatter receive function fm10k: add function to set vlan fm10k: Add SRIOV-VF support fm10k: add PF and VF interrupt handling function config/common_bsdapp | 11 + config/common_linuxapp | 11 + lib/Makefile | 1 + lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 + lib/librte_pmd_fm10k/Makefile | 96 + lib/librte_pmd_fm10k/base/fm10k_api.c | 327 ++++ lib/librte_pmd_fm10k/base/fm10k_api.h | 60 + lib/librte_pmd_fm10k/base/fm10k_common.c | 573 ++++++ lib/librte_pmd_fm10k/base/fm10k_common.h | 52 + lib/librte_pmd_fm10k/base/fm10k_mbx.c | 2186 +++++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_mbx.h | 329 ++++ lib/librte_pmd_fm10k/base/fm10k_osdep.h | 118 ++ lib/librte_pmd_fm10k/base/fm10k_pf.c | 1877 +++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_pf.h | 152 ++ lib/librte_pmd_fm10k/base/fm10k_tlv.c | 914 ++++++++++ lib/librte_pmd_fm10k/base/fm10k_tlv.h | 199 ++ lib/librte_pmd_fm10k/base/fm10k_type.h | 925 ++++++++++ lib/librte_pmd_fm10k/base/fm10k_vf.c | 586 ++++++ lib/librte_pmd_fm10k/base/fm10k_vf.h | 91 + lib/librte_pmd_fm10k/fm10k.h | 293 +++ lib/librte_pmd_fm10k/fm10k_ethdev.c | 1868 +++++++++++++++++++ lib/librte_pmd_fm10k/fm10k_logs.h | 78 + lib/librte_pmd_fm10k/fm10k_rxtx.c | 427 +++++ mk/rte.app.mk | 4 + 24 files changed, 11200 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/Makefile create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.c create mode 100644 
lib/librte_pmd_fm10k/base/fm10k_mbx.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_osdep.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_type.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.h create mode 100644 lib/librte_pmd_fm10k/fm10k.h create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
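For context on how the feature list above is consumed, the PMD plugs into the generic ethdev API, so an application brings up an fm10k port the same way as any other NIC: configure the device, set up RX/TX queues backed by an mbuf pool, start the port, then poll with rte_eth_rx_burst(). The sketch below is illustrative only and is not part of the patch set; the pool sizing, the single-queue layout, and the use of default (NULL) queue configurations are assumptions, and exact signatures should be checked against the headers of the DPDK release being targeted.

#include <stdint.h>
#include <string.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define NB_MBUF  8192
#define NB_DESC  512
#define BURST_SZ 32

int main(int argc, char **argv)
{
	struct rte_eth_conf port_conf;
	struct rte_mempool *pool;
	struct rte_mbuf *pkts[BURST_SZ];
	uint8_t port = 0;	/* first port probed by the EAL */

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* mbuf pool sizing here is a placeholder, not a recommendation */
	pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 256, 0,
			2048 + RTE_PKTMBUF_HEADROOM, rte_socket_id());
	if (pool == NULL)
		return -1;

	memset(&port_conf, 0, sizeof(port_conf));

	/* one RX and one TX queue, default (NULL) per-queue configs */
	if (rte_eth_dev_configure(port, 1, 1, &port_conf) < 0 ||
	    rte_eth_rx_queue_setup(port, 0, NB_DESC, rte_socket_id(),
				   NULL, pool) < 0 ||
	    rte_eth_tx_queue_setup(port, 0, NB_DESC, rte_socket_id(),
				   NULL) < 0 ||
	    rte_eth_dev_start(port) < 0)
		return -1;

	/* minimal receive loop: poll queue 0 and drop everything */
	for (;;) {
		uint16_t i, nb = rte_eth_rx_burst(port, 0, pkts, BURST_SZ);

		for (i = 0; i < nb; i++)
			rte_pktmbuf_free(pkts[i]);
	}
	return 0;
}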
* [dpdk-dev] [PATCH v2 01/15] fm10k: add base driver 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) @ 2015-02-04 10:40 ` Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 02/15] fm10k: add fm10k device id Chen Jing D(Mark) ` (13 subsequent siblings) 14 siblings, 1 reply; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-04 10:40 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Base driver is developped and maintained by Intel ND team, includes basic functional service to Intel Red Rock Canyon silicon. Any suggestion on bug fix and improvement within this directory is welcome, but need this team to change and update. Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/base/fm10k_api.c | 327 +++++ lib/librte_pmd_fm10k/base/fm10k_api.h | 60 + lib/librte_pmd_fm10k/base/fm10k_common.c | 573 ++++++++ lib/librte_pmd_fm10k/base/fm10k_common.h | 52 + lib/librte_pmd_fm10k/base/fm10k_mbx.c | 2186 ++++++++++++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_mbx.h | 329 +++++ lib/librte_pmd_fm10k/base/fm10k_osdep.h | 118 ++ lib/librte_pmd_fm10k/base/fm10k_pf.c | 1877 +++++++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_pf.h | 152 +++ lib/librte_pmd_fm10k/base/fm10k_tlv.c | 914 +++++++++++++ lib/librte_pmd_fm10k/base/fm10k_tlv.h | 199 +++ lib/librte_pmd_fm10k/base/fm10k_type.h | 925 +++++++++++++ lib/librte_pmd_fm10k/base/fm10k_vf.c | 586 ++++++++ lib/librte_pmd_fm10k/base/fm10k_vf.h | 91 ++ 14 files changed, 8389 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_osdep.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_type.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.h diff --git a/lib/librte_pmd_fm10k/base/fm10k_api.c b/lib/librte_pmd_fm10k/base/fm10k_api.c new file mode 100644 index 0000000..56f2914 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_api.c @@ -0,0 +1,327 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. 
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_api.h" +#include "fm10k_common.h" + +/** + * fm10k_set_mac_type - Sets MAC type + * @hw: pointer to the HW structure + * + * This function sets the mac type of the adapter based on the + * vendor ID and device ID stored in the hw structure. + **/ +s32 fm10k_set_mac_type(struct fm10k_hw *hw) +{ + s32 ret_val = FM10K_SUCCESS; + + DEBUGFUNC("fm10k_set_mac_type"); + + if (hw->vendor_id != FM10K_INTEL_VENDOR_ID) { + ERROR_REPORT2(FM10K_ERROR_UNSUPPORTED, + "Unsupported vendor id: %x\n", hw->vendor_id); + return FM10K_ERR_DEVICE_NOT_SUPPORTED; + } + + switch (hw->device_id) { + case FM10K_DEV_ID_PF: + hw->mac.type = fm10k_mac_pf; + break; + case FM10K_DEV_ID_VF: + hw->mac.type = fm10k_mac_vf; + break; + default: + ret_val = FM10K_ERR_DEVICE_NOT_SUPPORTED; + ERROR_REPORT2(FM10K_ERROR_UNSUPPORTED, + "Unsupported device id: %x\n", + hw->device_id); + break; + } + + DEBUGOUT2("fm10k_set_mac_type found mac: %d, returns: %d\n", + hw->mac.type, ret_val); + + return ret_val; +} + +/** + * fm10k_init_shared_code - Initialize the shared code + * @hw: pointer to hardware structure + * + * This will assign function pointers and assign the MAC type and PHY code. + * Does not touch the hardware. This function must be called prior to any + * other function in the shared code. The fm10k_hw structure should be + * memset to 0 prior to calling this function. The following fields in + * hw structure should be filled in prior to calling this function: + * hw_addr, back, device_id, vendor_id, subsystem_device_id, + * subsystem_vendor_id, and revision_id + **/ +s32 fm10k_init_shared_code(struct fm10k_hw *hw) +{ + s32 status; + + DEBUGFUNC("fm10k_init_shared_code"); + + /* Set the mac type */ + fm10k_set_mac_type(hw); + + switch (hw->mac.type) { + case fm10k_mac_pf: + status = fm10k_init_ops_pf(hw); + break; + case fm10k_mac_vf: + status = fm10k_init_ops_vf(hw); + break; + default: + status = FM10K_ERR_DEVICE_NOT_SUPPORTED; + break; + } + + return status; +} + +#define fm10k_call_func(hw, func, params, error) \ + ((func) ? (func params) : (error)) + +/** + * fm10k_reset_hw - Reset the hardware to known good state + * @hw: pointer to hardware structure + * + * This function should return the hardware to a state similar to the + * one it is in after being powered on. 
+ **/ +s32 fm10k_reset_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.reset_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_init_hw - Initialize the hardware + * @hw: pointer to hardware structure + * + * Initialize the hardware by resetting and then starting the hardware + **/ +s32 fm10k_init_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.init_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_stop_hw - Prepares hardware to shutdown Rx/Tx + * @hw: pointer to hardware structure + * + * Disables Rx/Tx queues and disables the DMA engine. + **/ +s32 fm10k_stop_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.stop_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_start_hw - Prepares hardware for Rx/Tx + * @hw: pointer to hardware structure + * + * This function sets the flags indicating that the hardware is ready to + * begin operation. + **/ +s32 fm10k_start_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.start_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_get_bus_info - Set PCI bus info + * @hw: pointer to hardware structure + * + * Sets the PCI bus info (speed, width, type) within the fm10k_hw structure + **/ +s32 fm10k_get_bus_info(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.get_bus_info, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_is_slot_appropriate - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. + **/ +bool fm10k_is_slot_appropriate(struct fm10k_hw *hw) +{ + if (hw->mac.ops.is_slot_appropriate) + return hw->mac.ops.is_slot_appropriate(hw); + return true; +} + +/** + * fm10k_update_vlan - Clear VLAN ID to VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @idx: Index indicating VF ID or PF ID in table + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for the corresponding function. + **/ +s32 fm10k_update_vlan(struct fm10k_hw *hw, u32 vid, u8 idx, bool set) +{ + return fm10k_call_func(hw, hw->mac.ops.update_vlan, (hw, vid, idx, set), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_read_mac_addr - Reads MAC address + * @hw: pointer to hardware structure + * + * Reads the MAC address out of the interface and stores it in the HW + * structures. + **/ +s32 fm10k_read_mac_addr(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.read_mac_addr, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_hw_stats - Update hw statistics + * @hw: pointer to hardware structure + * + * This function updates statistics that are related to hardware. + * */ +void fm10k_update_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats) +{ + if (hw->mac.ops.update_hw_stats) + hw->mac.ops.update_hw_stats(hw, stats); +} + +/** + * fm10k_rebind_hw_stats - Reset base for hw statistics + * @hw: pointer to hardware structure + * + * This function resets the base for statistics that are related to hardware. 
+ * */ +void fm10k_rebind_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats) +{ + if (hw->mac.ops.rebind_hw_stats) + hw->mac.ops.rebind_hw_stats(hw, stats); +} + +/** + * fm10k_configure_dglort_map - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned. + **/ +s32 fm10k_configure_dglort_map(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + return fm10k_call_func(hw, hw->mac.ops.configure_dglort_map, + (hw, dglort), FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_set_dma_mask - Configures PhyAddrSpace to limit DMA to system + * @hw: pointer to hardware structure + * @dma_mask: 64 bit DMA mask required for platform + * + * This function configures the endpoint to limit the access to memory + * beyond what is physically in the system. + **/ +void fm10k_set_dma_mask(struct fm10k_hw *hw, u64 dma_mask) +{ + if (hw->mac.ops.set_dma_mask) + hw->mac.ops.set_dma_mask(hw, dma_mask); +} + +/** + * fm10k_get_fault - Record a fault in one of the interface units + * @hw: pointer to hardware structure + * @type: pointer to fault type register offset + * @fault: pointer to memory location to record the fault + * + * Record the fault register contents to the fault data structure and + * clear the entry from the register. + * + * Returns ERR_PARAM if invalid register is specified or no error is present. + **/ +s32 fm10k_get_fault(struct fm10k_hw *hw, int type, struct fm10k_fault *fault) +{ + return fm10k_call_func(hw, hw->mac.ops.get_fault, (hw, type, fault), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_uc_addr - Update device unicast address + * @hw: pointer to the HW structure + * @lport: logical port ID to update - unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure - unused + * + * This function is used to add or remove unicast MAC addresses + **/ +s32 fm10k_update_uc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + return fm10k_call_func(hw, hw->mac.ops.update_uc_addr, + (hw, lport, mac, vid, add, flags), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_mc_addr - Update device multicast address + * @hw: pointer to the HW structure + * @lport: logical port ID to update - unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses + **/ +s32 fm10k_update_mc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add) +{ + return fm10k_call_func(hw, hw->mac.ops.update_mc_addr, + (hw, lport, mac, vid, add), + FM10K_NOT_IMPLEMENTED); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_api.h b/lib/librte_pmd_fm10k/base/fm10k_api.h new file mode 100644 index 0000000..4c90352 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_api.h @@ -0,0 +1,60 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. 
Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_API_H_ +#define _FM10K_API_H_ + +#include "fm10k_pf.h" +#include "fm10k_vf.h" + +s32 fm10k_set_mac_type(struct fm10k_hw *hw); +s32 fm10k_reset_hw(struct fm10k_hw *hw); +s32 fm10k_init_hw(struct fm10k_hw *hw); +s32 fm10k_stop_hw(struct fm10k_hw *hw); +s32 fm10k_start_hw(struct fm10k_hw *hw); +s32 fm10k_init_shared_code(struct fm10k_hw *hw); +s32 fm10k_get_bus_info(struct fm10k_hw *hw); +bool fm10k_is_slot_appropriate(struct fm10k_hw *hw); +s32 fm10k_update_vlan(struct fm10k_hw *hw, u32 vid, u8 idx, bool set); +s32 fm10k_read_mac_addr(struct fm10k_hw *hw); +void fm10k_update_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats); +void fm10k_rebind_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats); +s32 fm10k_configure_dglort_map(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort); +void fm10k_set_dma_mask(struct fm10k_hw *hw, u64 dma_mask); +s32 fm10k_get_fault(struct fm10k_hw *hw, int type, struct fm10k_fault *fault); +s32 fm10k_update_uc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add, u8 flags); +s32 fm10k_update_mc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add); +#endif /* _FM10K_API_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_common.c b/lib/librte_pmd_fm10k/base/fm10k_common.c new file mode 100644 index 0000000..fee904f --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_common.c @@ -0,0 +1,573 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. 
Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_common.h" + +/** + * fm10k_get_bus_info_generic - Generic set PCI bus info + * @hw: pointer to hardware structure + * + * Gets the PCI bus info (speed, width, type) then calls helper function to + * store this data within the fm10k_hw structure. + **/ +STATIC s32 fm10k_get_bus_info_generic(struct fm10k_hw *hw) +{ + u16 link_cap, link_status, device_cap, device_control; + + DEBUGFUNC("fm10k_get_bus_info_generic"); + + /* Get the maximum link width and speed from PCIe config space */ + link_cap = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_LINK_CAP); + + switch (link_cap & FM10K_PCIE_LINK_WIDTH) { + case FM10K_PCIE_LINK_WIDTH_1: + hw->bus_caps.width = fm10k_bus_width_pcie_x1; + break; + case FM10K_PCIE_LINK_WIDTH_2: + hw->bus_caps.width = fm10k_bus_width_pcie_x2; + break; + case FM10K_PCIE_LINK_WIDTH_4: + hw->bus_caps.width = fm10k_bus_width_pcie_x4; + break; + case FM10K_PCIE_LINK_WIDTH_8: + hw->bus_caps.width = fm10k_bus_width_pcie_x8; + break; + default: + hw->bus_caps.width = fm10k_bus_width_unknown; + break; + } + + switch (link_cap & FM10K_PCIE_LINK_SPEED) { + case FM10K_PCIE_LINK_SPEED_2500: + hw->bus_caps.speed = fm10k_bus_speed_2500; + break; + case FM10K_PCIE_LINK_SPEED_5000: + hw->bus_caps.speed = fm10k_bus_speed_5000; + break; + case FM10K_PCIE_LINK_SPEED_8000: + hw->bus_caps.speed = fm10k_bus_speed_8000; + break; + default: + hw->bus_caps.speed = fm10k_bus_speed_unknown; + break; + } + + /* Get the PCIe maximum payload size for the PCIe function */ + device_cap = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_DEV_CAP); + + switch (device_cap & FM10K_PCIE_DEV_CAP_PAYLOAD) { + case FM10K_PCIE_DEV_CAP_PAYLOAD_128: + hw->bus_caps.payload = fm10k_bus_payload_128; + break; + case FM10K_PCIE_DEV_CAP_PAYLOAD_256: + hw->bus_caps.payload = fm10k_bus_payload_256; + break; + case FM10K_PCIE_DEV_CAP_PAYLOAD_512: + hw->bus_caps.payload = fm10k_bus_payload_512; + break; + default: + hw->bus_caps.payload = fm10k_bus_payload_unknown; + break; + } + + /* Get the negotiated link width and speed from PCIe config space */ + link_status = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_LINK_STATUS); + + switch (link_status & FM10K_PCIE_LINK_WIDTH) { + case FM10K_PCIE_LINK_WIDTH_1: + hw->bus.width = fm10k_bus_width_pcie_x1; + break; + case FM10K_PCIE_LINK_WIDTH_2: + hw->bus.width = fm10k_bus_width_pcie_x2; + break; + case FM10K_PCIE_LINK_WIDTH_4: + hw->bus.width = fm10k_bus_width_pcie_x4; + break; + case FM10K_PCIE_LINK_WIDTH_8: + hw->bus.width = fm10k_bus_width_pcie_x8; + break; + default: + 
hw->bus.width = fm10k_bus_width_unknown; + break; + } + + switch (link_status & FM10K_PCIE_LINK_SPEED) { + case FM10K_PCIE_LINK_SPEED_2500: + hw->bus.speed = fm10k_bus_speed_2500; + break; + case FM10K_PCIE_LINK_SPEED_5000: + hw->bus.speed = fm10k_bus_speed_5000; + break; + case FM10K_PCIE_LINK_SPEED_8000: + hw->bus.speed = fm10k_bus_speed_8000; + break; + default: + hw->bus.speed = fm10k_bus_speed_unknown; + break; + } + + /* Get the negotiated PCIe maximum payload size for the PCIe function */ + device_control = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_DEV_CTRL); + + switch (device_control & FM10K_PCIE_DEV_CTRL_PAYLOAD) { + case FM10K_PCIE_DEV_CTRL_PAYLOAD_128: + hw->bus.payload = fm10k_bus_payload_128; + break; + case FM10K_PCIE_DEV_CTRL_PAYLOAD_256: + hw->bus.payload = fm10k_bus_payload_256; + break; + case FM10K_PCIE_DEV_CTRL_PAYLOAD_512: + hw->bus.payload = fm10k_bus_payload_512; + break; + default: + hw->bus.payload = fm10k_bus_payload_unknown; + break; + } + + return FM10K_SUCCESS; +} + +u16 fm10k_get_pcie_msix_count_generic(struct fm10k_hw *hw) +{ + u16 msix_count; + + DEBUGFUNC("fm10k_get_pcie_msix_count_generic"); + + /* read in value from MSI-X capability register */ + msix_count = FM10K_READ_PCI_WORD(hw, FM10K_PCI_MSIX_MSG_CTRL); + msix_count &= FM10K_PCI_MSIX_MSG_CTRL_TBL_SZ_MASK; + + /* MSI-X count is zero-based in HW */ + msix_count++; + + if (msix_count > FM10K_MAX_MSIX_VECTORS) + msix_count = FM10K_MAX_MSIX_VECTORS; + + return msix_count; +} + +/** + * fm10k_init_ops_generic - Inits function ptrs + * @hw: pointer to the hardware structure + * + * Initialize the function pointers. + **/ +s32 fm10k_init_ops_generic(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + + DEBUGFUNC("fm10k_init_ops_generic"); + + /* MAC */ + mac->ops.get_bus_info = &fm10k_get_bus_info_generic; + + /* initialize GLORT state to avoid any false hits */ + mac->dglort_map = FM10K_DGLORTMAP_NONE; + + return FM10K_SUCCESS; +} + +/** + * fm10k_start_hw_generic - Prepare hardware for Tx/Rx + * @hw: pointer to hardware structure + * + * This function sets the Tx ready flag to indicate that the Tx path has + * been initialized. 
+ **/ +s32 fm10k_start_hw_generic(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_start_hw_generic"); + + /* set flag indicating we are beginning Tx */ + hw->mac.tx_ready = true; + + return FM10K_SUCCESS; +} + +/** + * fm10k_disable_queues_generic - Stop Tx/Rx queues + * @hw: pointer to hardware structure + * @q_cnt: number of queues to be disabled + * + **/ +s32 fm10k_disable_queues_generic(struct fm10k_hw *hw, u16 q_cnt) +{ + u32 reg; + u16 i, time; + + DEBUGFUNC("fm10k_disable_queues_generic"); + + /* clear tx_ready to prevent any false hits for reset */ + hw->mac.tx_ready = false; + + /* clear the enable bit for all rings */ + for (i = 0; i < q_cnt; i++) { + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), + reg & ~FM10K_TXDCTL_ENABLE); + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), + reg & ~FM10K_RXQCTL_ENABLE); + } + + FM10K_WRITE_FLUSH(hw); + usec_delay(1); + + /* loop through all queues to verify that they are all disabled */ + for (i = 0, time = FM10K_QUEUE_DISABLE_TIMEOUT; time;) { + /* if we are at end of rings all rings are disabled */ + if (i == q_cnt) + return FM10K_SUCCESS; + + /* if queue enables cleared, then move to next ring pair */ + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + if (!~reg || !(reg & FM10K_TXDCTL_ENABLE)) { + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + if (!~reg || !(reg & FM10K_RXQCTL_ENABLE)) { + i++; + continue; + } + } + + /* decrement time and wait 1 usec */ + time--; + if (time) + usec_delay(1); + } + + return FM10K_ERR_REQUESTS_PENDING; +} + +/** + * fm10k_stop_hw_generic - Stop Tx/Rx units + * @hw: pointer to hardware structure + * + **/ +s32 fm10k_stop_hw_generic(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_stop_hw_generic"); + + return fm10k_disable_queues_generic(hw, hw->mac.max_queues); +} + +/** + * fm10k_read_hw_stats_32b - Reads value of 32-bit registers + * @hw: pointer to the hardware structure + * @addr: address of register containing a 32-bit value + * + * Function reads the content of the register and returns the delta + * between the base and the current value. + * **/ +u32 fm10k_read_hw_stats_32b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat) +{ + u32 delta = FM10K_READ_REG(hw, addr) - stat->base_l; + + DEBUGFUNC("fm10k_read_hw_stats_32b"); + + if (FM10K_REMOVED(hw->hw_addr)) + stat->base_h = 0; + + return delta; +} + +/** + * fm10k_read_hw_stats_48b - Reads value of 48-bit registers + * @hw: pointer to the hardware structure + * @addr: address of register containing the lower 32-bit value + * + * Function reads the content of 2 registers, combined to represent a 48-bit + * statistical value. Extra processing is required to handle overflowing. + * Finally, a delta value is returned representing the difference between the + * values stored in registers and values stored in the statistic counters. 
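A short worked example of the delta arithmetic that follows (illustrative numbers only): suppose the stored base is base_h = 0x0, base_l = 0xFFFFFFF0, and the counter has since crossed the 32-bit boundary so the registers now read count_h = 0x1, count_l = 0x10. Then:

    delta  = ((u64)(count_h - base_h) << 32) + count_l;   /* 0x100000010 */
    delta -= base_l;                                      /* = 0x20 */
    delta &= FM10K_48_BIT_MASK;                           /* 0x20: 32 units counted */

The borrow from the low word is absorbed by the 64-bit subtraction, so no special wrap handling is needed beyond re-reading the high word as the loop below does.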
+ * **/ +STATIC u64 fm10k_read_hw_stats_48b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat) +{ + u32 count_l; + u32 count_h; + u32 count_tmp; + u64 delta; + + DEBUGFUNC("fm10k_read_hw_stats_48b"); + + count_h = FM10K_READ_REG(hw, addr + 1); + + /* Check for overflow */ + do { + count_tmp = count_h; + count_l = FM10K_READ_REG(hw, addr); + count_h = FM10K_READ_REG(hw, addr + 1); + } while (count_h != count_tmp); + + delta = ((u64)(count_h - stat->base_h) << 32) + count_l; + delta -= stat->base_l; + + return delta & FM10K_48_BIT_MASK; +} + +/** + * fm10k_update_hw_base_48b - Updates 48-bit statistic base value + * @stat: pointer to the hardware statistic structure + * @delta: value to be updated into the hardware statistic structure + * + * Function receives a value and determines if an update is required based on + * a delta calculation. Only the base value will be updated. + **/ +STATIC void fm10k_update_hw_base_48b(struct fm10k_hw_stat *stat, u64 delta) +{ + DEBUGFUNC("fm10k_update_hw_base_48b"); + + if (!delta) + return; + + /* update lower 32 bits */ + delta += stat->base_l; + stat->base_l = (u32)delta; + + /* update upper 32 bits */ + stat->base_h += (u32)(delta >> 32); +} + +/** + * fm10k_update_hw_stats_tx_q - Updates TX queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * + * Function updates the TX queue statistics counters that are related to the + * hardware. + **/ +STATIC void fm10k_update_hw_stats_tx_q(struct fm10k_hw *hw, + struct fm10k_hw_stats_q *q, + u32 idx) +{ + u32 id_tx, id_tx_prev, tx_packets; + u64 tx_bytes = 0; + + DEBUGFUNC("fm10k_update_hw_stats_tx_q"); + + /* Retrieve TX Owner Data */ + id_tx = FM10K_READ_REG(hw, FM10K_TXQCTL(idx)); + + /* Process TX Ring */ + do { + tx_packets = fm10k_read_hw_stats_32b(hw, FM10K_QPTC(idx), + &q->tx_packets); + + if (tx_packets) + tx_bytes = fm10k_read_hw_stats_48b(hw, + FM10K_QBTC_L(idx), + &q->tx_bytes); + + /* Re-Check Owner Data */ + id_tx_prev = id_tx; + id_tx = FM10K_READ_REG(hw, FM10K_TXQCTL(idx)); + } while ((id_tx ^ id_tx_prev) & FM10K_TXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id_tx &= FM10K_TXQCTL_ID_MASK; + id_tx |= FM10K_STAT_VALID; + + /* update packet counts */ + if (q->tx_stats_idx == id_tx) { + q->tx_packets.count += tx_packets; + q->tx_bytes.count += tx_bytes; + } + + /* update bases and record ID */ + fm10k_update_hw_base_32b(&q->tx_packets, tx_packets); + fm10k_update_hw_base_48b(&q->tx_bytes, tx_bytes); + + q->tx_stats_idx = id_tx; +} + +/** + * fm10k_update_hw_stats_rx_q - Updates RX queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * + * Function updates the RX queue statistics counters that are related to the + * hardware. 
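At the driver level these per-ring helpers are not called directly; the exported wrappers below walk a contiguous block of queues in one call. A typical invocation covering every queue owned by this function (q_stats is an assumed caller-side array of struct fm10k_hw_stats_q):

    fm10k_update_hw_stats_q(hw, q_stats, 0, hw->mac.max_queues);   /* refresh Tx+Rx counters */

    /* and on a statistics reset, forget the recorded queue IDs and bases first: */
    fm10k_unbind_hw_stats_q(q_stats, 0, hw->mac.max_queues);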
+ **/ +STATIC void fm10k_update_hw_stats_rx_q(struct fm10k_hw *hw, + struct fm10k_hw_stats_q *q, + u32 idx) +{ + u32 id_rx, id_rx_prev, rx_packets, rx_drops; + u64 rx_bytes = 0; + + DEBUGFUNC("fm10k_update_hw_stats_rx_q"); + + /* Retrieve RX Owner Data */ + id_rx = FM10K_READ_REG(hw, FM10K_RXQCTL(idx)); + + /* Process RX Ring*/ + do { + rx_drops = fm10k_read_hw_stats_32b(hw, FM10K_QPRDC(idx), + &q->rx_drops); + + rx_packets = fm10k_read_hw_stats_32b(hw, FM10K_QPRC(idx), + &q->rx_packets); + + if (rx_packets) + rx_bytes = fm10k_read_hw_stats_48b(hw, + FM10K_QBRC_L(idx), + &q->rx_bytes); + + /* Re-Check Owner Data */ + id_rx_prev = id_rx; + id_rx = FM10K_READ_REG(hw, FM10K_RXQCTL(idx)); + } while ((id_rx ^ id_rx_prev) & FM10K_RXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id_rx &= FM10K_RXQCTL_ID_MASK; + id_rx |= FM10K_STAT_VALID; + + /* update packet counts */ + if (q->rx_stats_idx == id_rx) { + q->rx_drops.count += rx_drops; + q->rx_packets.count += rx_packets; + q->rx_bytes.count += rx_bytes; + } + + /* update bases and record ID */ + fm10k_update_hw_base_32b(&q->rx_drops, rx_drops); + fm10k_update_hw_base_32b(&q->rx_packets, rx_packets); + fm10k_update_hw_base_48b(&q->rx_bytes, rx_bytes); + + q->rx_stats_idx = id_rx; +} + +/** + * fm10k_update_hw_stats_q - Updates queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * @count: number of queues to iterate over + * + * Function updates the queue statistics counters that are related to the + * hardware. + **/ +void fm10k_update_hw_stats_q(struct fm10k_hw *hw, struct fm10k_hw_stats_q *q, + u32 idx, u32 count) +{ + u32 i; + + DEBUGFUNC("fm10k_update_hw_stats_q"); + + for (i = 0; i < count; i++, idx++, q++) { + fm10k_update_hw_stats_tx_q(hw, q, idx); + fm10k_update_hw_stats_rx_q(hw, q, idx); + } +} + +/** + * fm10k_unbind_hw_stats_q - Unbind the queue counters from their queues + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * @count: number of queues to iterate over + * + * Function invalidates the index values for the queues so any updates that + * may have happened are ignored and the base for the queue stats is reset. + **/ + +void fm10k_unbind_hw_stats_q(struct fm10k_hw_stats_q *q, u32 idx, u32 count) +{ + u32 i; + + for (i = 0; i < count; i++, idx++, q++) { + q->rx_stats_idx = 0; + q->tx_stats_idx = 0; + } +} + +/** + * fm10k_get_host_state_generic - Returns the state of the host + * @hw: pointer to hardware structure + * @host_ready: pointer to boolean value that will record host state + * + * This function will check the health of the mailbox and Tx queue 0 + * in order to determine if we should report that the link is up or not. 
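The interesting result is returned through the host_ready output rather than the return code; a caller typically treats host_ready as the link indication and FM10K_ERR_RESET_REQUESTED as a request to re-initialize. A rough caller-side sketch, where link_up and schedule_reset() are hypothetical names, not defined in this file:

    bool host_ready = false;
    s32 err = fm10k_get_host_state_generic(hw, &host_ready);

    if (err == FM10K_SUCCESS)
            link_up = host_ready;   /* report link based on host readiness */
    else
            schedule_reset();       /* mailbox or Tx path stalled */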
+ **/ +s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + struct fm10k_mac_info *mac = &hw->mac; + s32 ret_val = FM10K_SUCCESS; + u32 txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(0)); + + DEBUGFUNC("fm10k_get_host_state_generic"); + + /* process upstream mailbox in case interrupts were disabled */ + mbx->ops.process(hw, mbx); + + /* If Tx is no longer enabled link should come down */ + if (!(~txdctl) || !(txdctl & FM10K_TXDCTL_ENABLE)) + mac->get_host_state = true; + + /* exit if not checking for link, or link cannot be changed */ + if (!mac->get_host_state || !(~txdctl)) + goto out; + + /* if we somehow dropped the Tx enable we should reset */ + if (hw->mac.tx_ready && !(txdctl & FM10K_TXDCTL_ENABLE)) { + ret_val = FM10K_ERR_RESET_REQUESTED; + goto out; + } + + /* if Mailbox timed out we should request reset */ + if (!mbx->timeout) { + ret_val = FM10K_ERR_RESET_REQUESTED; + goto out; + } + + /* verify Mailbox is still valid */ + if (!mbx->ops.tx_ready(mbx, FM10K_VFMBX_MSG_MTU)) + goto out; + + /* interface cannot receive traffic without logical ports */ + if (mac->dglort_map == FM10K_DGLORTMAP_NONE) + goto out; + + /* if we passed all the tests above then the switch is ready and we no + * longer need to check for link + */ + mac->get_host_state = false; + +out: + *host_ready = !mac->get_host_state; + return ret_val; +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_common.h b/lib/librte_pmd_fm10k/base/fm10k_common.h new file mode 100644 index 0000000..6e89051 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_common.h @@ -0,0 +1,52 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _FM10K_COMMON_H_ +#define _FM10K_COMMON_H_ + +#include "fm10k_type.h" + +u16 fm10k_get_pcie_msix_count_generic(struct fm10k_hw *hw); +s32 fm10k_init_ops_generic(struct fm10k_hw *hw); +s32 fm10k_disable_queues_generic(struct fm10k_hw *hw, u16 q_cnt); +s32 fm10k_start_hw_generic(struct fm10k_hw *hw); +s32 fm10k_stop_hw_generic(struct fm10k_hw *hw); +u32 fm10k_read_hw_stats_32b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat); +#define fm10k_update_hw_base_32b(stat, delta) ((stat)->base_l += (delta)) +void fm10k_update_hw_stats_q(struct fm10k_hw *hw, struct fm10k_hw_stats_q *q, + u32 idx, u32 count); +#define fm10k_unbind_hw_stats_32b(s) ((s)->base_h = 0) +void fm10k_unbind_hw_stats_q(struct fm10k_hw_stats_q *q, u32 idx, u32 count); +s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready); +#endif /* _FM10K_COMMON_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_mbx.c b/lib/librte_pmd_fm10k/base/fm10k_mbx.c new file mode 100644 index 0000000..dc579a4 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_mbx.c @@ -0,0 +1,2186 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#include "fm10k_common.h" + +/** + * fm10k_fifo_init - Initialize a message FIFO + * @fifo: pointer to FIFO + * @buffer: pointer to memory to be used to store FIFO + * @size: maximum message size to store in FIFO, must be 2^n - 1 + **/ +STATIC void fm10k_fifo_init(struct fm10k_mbx_fifo *fifo, u32 *buffer, u16 size) +{ + fifo->buffer = buffer; + fifo->size = size; + fifo->head = 0; + fifo->tail = 0; +} + +/** + * fm10k_fifo_used - Retrieve used space in FIFO + * @fifo: pointer to FIFO + * + * This function returns the number of DWORDs used in the FIFO + **/ +STATIC u16 fm10k_fifo_used(struct fm10k_mbx_fifo *fifo) +{ + return fifo->tail - fifo->head; +} + +/** + * fm10k_fifo_unused - Retrieve unused space in FIFO + * @fifo: pointer to FIFO + * + * This function returns the number of unused DWORDs in the FIFO + **/ +STATIC u16 fm10k_fifo_unused(struct fm10k_mbx_fifo *fifo) +{ + return fifo->size + fifo->head - fifo->tail; +} + +/** + * fm10k_fifo_empty - Test to verify if fifo is empty + * @fifo: pointer to FIFO + * + * This function returns true if the FIFO is empty, else false + **/ +STATIC bool fm10k_fifo_empty(struct fm10k_mbx_fifo *fifo) +{ + return fifo->head == fifo->tail; +} + +/** + * fm10k_fifo_head_offset - returns indices of head with given offset + * @fifo: pointer to FIFO + * @offset: offset to add to head + * + * This function returns the indicies into the fifo based on head + offset + **/ +STATIC u16 fm10k_fifo_head_offset(struct fm10k_mbx_fifo *fifo, u16 offset) +{ + return (fifo->head + offset) & (fifo->size - 1); +} + +/** + * fm10k_fifo_tail_offset - returns indices of tail with given offset + * @fifo: pointer to FIFO + * @offset: offset to add to tail + * + * This function returns the indicies into the fifo based on tail + offset + **/ +STATIC u16 fm10k_fifo_tail_offset(struct fm10k_mbx_fifo *fifo, u16 offset) +{ + return (fifo->tail + offset) & (fifo->size - 1); +} + +/** + * fm10k_fifo_head_len - Retrieve length of first message in FIFO + * @fifo: pointer to FIFO + * + * This function returns the size of the first message in the FIFO + **/ +STATIC u16 fm10k_fifo_head_len(struct fm10k_mbx_fifo *fifo) +{ + u32 *head = fifo->buffer + fm10k_fifo_head_offset(fifo, 0); + + /* verify there is at least 1 DWORD in the fifo so *head is valid */ + if (fm10k_fifo_empty(fifo)) + return 0; + + /* retieve the message length */ + return FM10K_TLV_DWORD_LEN(*head); +} + +/** + * fm10k_fifo_head_drop - Drop the first message in FIFO + * @fifo: pointer to FIFO + * + * This function returns the size of the message dropped from the FIFO + **/ +STATIC u16 fm10k_fifo_head_drop(struct fm10k_mbx_fifo *fifo) +{ + u16 len = fm10k_fifo_head_len(fifo); + + /* update head so it is at the start of next frame */ + fifo->head += len; + + return len; +} + +/** + * fm10k_mbx_index_len - Convert a head/tail index into a length value + * @mbx: pointer to mailbox + * @head: head index + * @tail: head index + * + * This function takes the head and tail index and determines the length + * of the data indicated by this pair. 
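Index 0 and the all-ones index are reserved in the mbmem head/tail space, so the length calculation below subtracts them back out whenever the pair has wrapped. A worked example, taking mbmem_len = 16 purely for illustration (indices then live in 0..31 with 0 and 31 reserved):

    head = 30; tail = 3;
    len  = (u16)(tail - head);    /* 0xFFE5: the subtraction wrapped            */
    len -= 2;                     /* len > tail, so skip index 0 and all-ones   */
    len &= (16 << 1) - 1;         /* & 31 -> 3 DWORDs outstanding               */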
+ **/ +STATIC u16 fm10k_mbx_index_len(struct fm10k_mbx_info *mbx, u16 head, u16 tail) +{ + u16 len = tail - head; + + /* we wrapped so subtract 2, one for index 0, one for all 1s index */ + if (len > tail) + len -= 2; + + return len & ((mbx->mbmem_len << 1) - 1); +} + +/** + * fm10k_mbx_tail_add - Determine new tail value with added offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local tail index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_tail_add(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 tail = (mbx->tail + offset + 1) & ((mbx->mbmem_len << 1) - 1); + + /* add/sub 1 because we cannot have offset 0 or all 1s */ + return (tail > mbx->tail) ? --tail : ++tail; +} + +/** + * fm10k_mbx_tail_sub - Determine new tail value with subtracted offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local tail index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_tail_sub(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 tail = (mbx->tail - offset - 1) & ((mbx->mbmem_len << 1) - 1); + + /* sub/add 1 because we cannot have offset 0 or all 1s */ + return (tail < mbx->tail) ? ++tail : --tail; +} + +/** + * fm10k_mbx_head_add - Determine new head value with added offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local head index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_head_add(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 head = (mbx->head + offset + 1) & ((mbx->mbmem_len << 1) - 1); + + /* add/sub 1 because we cannot have offset 0 or all 1s */ + return (head > mbx->head) ? --head : ++head; +} + +/** + * fm10k_mbx_head_sub - Determine new head value with subtracted offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local head index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_head_sub(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 head = (mbx->head - offset - 1) & ((mbx->mbmem_len << 1) - 1); + + /* sub/add 1 because we cannot have offset 0 or all 1s */ + return (head < mbx->head) ? ++head : --head; +} + +/** + * fm10k_mbx_pushed_tail_len - Retrieve the length of message being pushed + * @mbx: pointer to mailbox + * + * This function will return the length of the message currently being + * pushed onto the tail of the Rx queue. + **/ +STATIC u16 fm10k_mbx_pushed_tail_len(struct fm10k_mbx_info *mbx) +{ + u32 *tail = mbx->rx.buffer + fm10k_fifo_tail_offset(&mbx->rx, 0); + + /* pushed tail is only valid if pushed is set */ + if (!mbx->pushed) + return 0; + + return FM10K_TLV_DWORD_LEN(*tail); +} + +/** + * fm10k_fifo_write_copy - pulls data off of msg and places it in fifo + * @fifo: pointer to FIFO + * @msg: message array to populate + * @tail_offset: additional offset to add to tail pointer + * @len: length of FIFO to copy into message header + * + * This function will take a message and copy it into a section of the + * FIFO. In order to get something into a location other than just + * the tail you can use tail_offset to adjust the pointer. 
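As an example of the split copy performed below (sizes chosen arbitrarily): with a FIFO of 8 DWORDs whose physical tail offset works out to 6, writing a 4-DWORD message ends up as two memcpy() calls:

    /* end = 8 - 6 = 2 DWORDs remain before the wrap point, and 2 < len (4) */
    memcpy(fifo->buffer, msg + 2, (4 - 2) << 2);   /* msg[2..3] -> buffer[0..1] */
    memcpy(fifo->buffer + 6, msg, 2 << 2);         /* msg[0..1] -> buffer[6..7] */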
+ **/ +STATIC void fm10k_fifo_write_copy(struct fm10k_mbx_fifo *fifo, + const u32 *msg, u16 tail_offset, u16 len) +{ + u16 end = fm10k_fifo_tail_offset(fifo, tail_offset); + u32 *tail = fifo->buffer + end; + + /* track when we should cross the end of the FIFO */ + end = fifo->size - end; + + /* copy end of message before start of message */ + if (end < len) + memcpy(fifo->buffer, msg + end, (len - end) << 2); + else + end = len; + + /* Copy remaining message into Tx FIFO */ + memcpy(tail, msg, end << 2); +} + +/** + * fm10k_fifo_enqueue - Enqueues the message to the tail of the FIFO + * @fifo: pointer to FIFO + * @msg: message array to read + * + * This function enqueues a message up to the size specified by the length + * contained in the first DWORD of the message and will place at the tail + * of the FIFO. It will return 0 on success, or a negative value on error. + **/ +STATIC s32 fm10k_fifo_enqueue(struct fm10k_mbx_fifo *fifo, const u32 *msg) +{ + u16 len = FM10K_TLV_DWORD_LEN(*msg); + + DEBUGFUNC("fm10k_fifo_enqueue"); + + /* verify parameters */ + if (len > fifo->size) + return FM10K_MBX_ERR_SIZE; + + /* verify there is room for the message */ + if (len > fm10k_fifo_unused(fifo)) + return FM10K_MBX_ERR_NO_SPACE; + + /* Copy message into FIFO */ + fm10k_fifo_write_copy(fifo, msg, 0, len); + + /* memory barrier to guarantee FIFO is written before tail update */ + FM10K_WMB(); + + /* Update Tx FIFO tail */ + fifo->tail += len; + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_validate_msg_size - Validate incoming message based on size + * @mbx: pointer to mailbox + * @len: length of data pushed onto buffer + * + * This function analyzes the frame and will return a non-zero value when + * the start of a message larger than the mailbox is detected. + **/ +STATIC u16 fm10k_mbx_validate_msg_size(struct fm10k_mbx_info *mbx, u16 len) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 total_len = 0, msg_len; + u32 *msg; + + DEBUGFUNC("fm10k_mbx_validate_msg"); + + /* length should include previous amounts pushed */ + len += mbx->pushed; + + /* offset in message is based off of current message size */ + do { + msg = fifo->buffer + fm10k_fifo_tail_offset(fifo, total_len); + msg_len = FM10K_TLV_DWORD_LEN(*msg); + total_len += msg_len; + } while (total_len < len); + + /* message extends out of pushed section, but fits in FIFO */ + if ((len < total_len) && (msg_len <= mbx->rx.size)) + return 0; + + /* return length of invalid section */ + return (len < total_len) ? len : (len - total_len); +} + +/** + * fm10k_mbx_write_copy - pulls data off of Tx FIFO and places it in mbmem + * @mbx: pointer to mailbox + * + * This function will take a section of the Tx FIFO and copy it into the + * mailbox memory. The offset in mbmem is based on the lower bits of the + * tail and len determines the length to copy. 
+ **/ +STATIC void fm10k_mbx_write_copy(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->tx; + u32 mbmem = mbx->mbmem_reg; + u32 *head = fifo->buffer; + u16 end, len, tail, mask; + + DEBUGFUNC("fm10k_mbx_write_copy"); + + if (!mbx->tail_len) + return; + + /* determine data length and mbmem tail index */ + mask = mbx->mbmem_len - 1; + len = mbx->tail_len; + tail = fm10k_mbx_tail_sub(mbx, len); + if (tail > mask) + tail++; + + /* determine offset in the ring */ + end = fm10k_fifo_head_offset(fifo, mbx->pulled); + head += end; + + /* memory barrier to guarantee data is ready to be read */ + FM10K_RMB(); + + /* Copy message from Tx FIFO */ + for (end = fifo->size - end; len; head = fifo->buffer) { + do { + /* adjust tail to match offset for FIFO */ + tail &= mask; + if (!tail) + tail++; + + /* write message to hardware FIFO */ + FM10K_WRITE_MBX(hw, mbmem + tail++, *(head++)); + } while (--len && --end); + } +} + +/** + * fm10k_mbx_pull_head - Pulls data off of head of Tx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @head: acknowledgement number last received + * + * This function will push the tail index forward based on the remote + * head index. It will then pull up to mbmem_len DWORDs off of the + * head of the FIFO and will place it in the MBMEM registers + * associated with the mailbox. + **/ +STATIC void fm10k_mbx_pull_head(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + u16 mbmem_len, len, ack = fm10k_mbx_index_len(mbx, head, mbx->tail); + struct fm10k_mbx_fifo *fifo = &mbx->tx; + + /* update number of bytes pulled and update bytes in transit */ + mbx->pulled += mbx->tail_len - ack; + + /* determine length of data to pull, reserve space for mbmem header */ + mbmem_len = mbx->mbmem_len - 1; + len = fm10k_fifo_used(fifo) - mbx->pulled; + if (len > mbmem_len) + len = mbmem_len; + + /* update tail and record number of bytes in transit */ + mbx->tail = fm10k_mbx_tail_add(mbx, len - ack); + mbx->tail_len = len; + + /* drop pulled messages from the FIFO */ + for (len = fm10k_fifo_head_len(fifo); + len && (mbx->pulled >= len); + len = fm10k_fifo_head_len(fifo)) { + mbx->pulled -= fm10k_fifo_head_drop(fifo); + mbx->tx_messages++; + mbx->tx_dwords += len; + } + + /* Copy message out from the Tx FIFO */ + fm10k_mbx_write_copy(hw, mbx); +} + +/** + * fm10k_mbx_read_copy - pulls data off of mbmem and places it in Rx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will take a seciton of the mailbox memory and copy it + * into the Rx FIFO. The offset is based on the lower bits of the + * head and len determines the length to copy. 
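Note the XOR in the body below: the mailbox memory window holds two halves of mbmem_len DWORDs each, one owned by each end of the link, and flipping the mbmem_len bit of the base register switches between them. That is why this Rx-side copy reads from mbmem_reg ^ mbmem_len while fm10k_mbx_write_copy() above writes to mbmem_reg directly. Assuming the window base is aligned to twice mbmem_len, the arithmetic reduces to:

    u32 local_half  = mbx->mbmem_reg;                    /* where our Tx data is written   */
    u32 remote_half = mbx->mbmem_reg ^ mbx->mbmem_len;   /* where the peer's Tx data lands */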
+ **/ +STATIC void fm10k_mbx_read_copy(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u32 mbmem = mbx->mbmem_reg ^ mbx->mbmem_len; + u32 *tail = fifo->buffer; + u16 end, len, head; + + DEBUGFUNC("fm10k_mbx_read_copy"); + + /* determine data length and mbmem head index */ + len = mbx->head_len; + head = fm10k_mbx_head_sub(mbx, len); + if (head >= mbx->mbmem_len) + head++; + + /* determine offset in the ring */ + end = fm10k_fifo_tail_offset(fifo, mbx->pushed); + tail += end; + + /* Copy message into Rx FIFO */ + for (end = fifo->size - end; len; tail = fifo->buffer) { + do { + /* adjust head to match offset for FIFO */ + head &= mbx->mbmem_len - 1; + if (!head) + head++; + + /* read message from hardware FIFO */ + *(tail++) = FM10K_READ_MBX(hw, mbmem + head++); + } while (--len && --end); + } + + /* memory barrier to guarantee FIFO is written before tail update */ + FM10K_WMB(); +} + +/** + * fm10k_mbx_push_tail - Pushes up to 15 DWORDs on to tail of FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @tail: tail index of message + * + * This function will first validate the tail index and size for the + * incoming message. It then updates the acknowlegment number and + * copies the data into the FIFO. It will return the number of messages + * dequeued on success and a negative value on error. + **/ +STATIC s32 fm10k_mbx_push_tail(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, + u16 tail) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 len, seq = fm10k_mbx_index_len(mbx, mbx->head, tail); + + DEBUGFUNC("fm10k_mbx_push_tail"); + + /* determine length of data to push */ + len = fm10k_fifo_unused(fifo) - mbx->pushed; + if (len > seq) + len = seq; + + /* update head and record bytes received */ + mbx->head = fm10k_mbx_head_add(mbx, len); + mbx->head_len = len; + + /* nothing to do if there is no data */ + if (!len) + return FM10K_SUCCESS; + + /* Copy msg into Rx FIFO */ + fm10k_mbx_read_copy(hw, mbx); + + /* determine if there are any invalid lengths in message */ + if (fm10k_mbx_validate_msg_size(mbx, len)) + return FM10K_MBX_ERR_SIZE; + + /* Update pushed */ + mbx->pushed += len; + + /* flush any completed messages */ + for (len = fm10k_mbx_pushed_tail_len(mbx); + len && (mbx->pushed >= len); + len = fm10k_mbx_pushed_tail_len(mbx)) { + fifo->tail += len; + mbx->pushed -= len; + mbx->rx_messages++; + mbx->rx_dwords += len; + } + + return FM10K_SUCCESS; +} + +/* pre-generated data for generating the CRC based on the poly 0xAC9A. 
*/ +static const u16 fm10k_crc_16b_table[256] = { + 0x0000, 0x7956, 0xF2AC, 0x8BFA, 0xBC6D, 0xC53B, 0x4EC1, 0x3797, + 0x21EF, 0x58B9, 0xD343, 0xAA15, 0x9D82, 0xE4D4, 0x6F2E, 0x1678, + 0x43DE, 0x3A88, 0xB172, 0xC824, 0xFFB3, 0x86E5, 0x0D1F, 0x7449, + 0x6231, 0x1B67, 0x909D, 0xE9CB, 0xDE5C, 0xA70A, 0x2CF0, 0x55A6, + 0x87BC, 0xFEEA, 0x7510, 0x0C46, 0x3BD1, 0x4287, 0xC97D, 0xB02B, + 0xA653, 0xDF05, 0x54FF, 0x2DA9, 0x1A3E, 0x6368, 0xE892, 0x91C4, + 0xC462, 0xBD34, 0x36CE, 0x4F98, 0x780F, 0x0159, 0x8AA3, 0xF3F5, + 0xE58D, 0x9CDB, 0x1721, 0x6E77, 0x59E0, 0x20B6, 0xAB4C, 0xD21A, + 0x564D, 0x2F1B, 0xA4E1, 0xDDB7, 0xEA20, 0x9376, 0x188C, 0x61DA, + 0x77A2, 0x0EF4, 0x850E, 0xFC58, 0xCBCF, 0xB299, 0x3963, 0x4035, + 0x1593, 0x6CC5, 0xE73F, 0x9E69, 0xA9FE, 0xD0A8, 0x5B52, 0x2204, + 0x347C, 0x4D2A, 0xC6D0, 0xBF86, 0x8811, 0xF147, 0x7ABD, 0x03EB, + 0xD1F1, 0xA8A7, 0x235D, 0x5A0B, 0x6D9C, 0x14CA, 0x9F30, 0xE666, + 0xF01E, 0x8948, 0x02B2, 0x7BE4, 0x4C73, 0x3525, 0xBEDF, 0xC789, + 0x922F, 0xEB79, 0x6083, 0x19D5, 0x2E42, 0x5714, 0xDCEE, 0xA5B8, + 0xB3C0, 0xCA96, 0x416C, 0x383A, 0x0FAD, 0x76FB, 0xFD01, 0x8457, + 0xAC9A, 0xD5CC, 0x5E36, 0x2760, 0x10F7, 0x69A1, 0xE25B, 0x9B0D, + 0x8D75, 0xF423, 0x7FD9, 0x068F, 0x3118, 0x484E, 0xC3B4, 0xBAE2, + 0xEF44, 0x9612, 0x1DE8, 0x64BE, 0x5329, 0x2A7F, 0xA185, 0xD8D3, + 0xCEAB, 0xB7FD, 0x3C07, 0x4551, 0x72C6, 0x0B90, 0x806A, 0xF93C, + 0x2B26, 0x5270, 0xD98A, 0xA0DC, 0x974B, 0xEE1D, 0x65E7, 0x1CB1, + 0x0AC9, 0x739F, 0xF865, 0x8133, 0xB6A4, 0xCFF2, 0x4408, 0x3D5E, + 0x68F8, 0x11AE, 0x9A54, 0xE302, 0xD495, 0xADC3, 0x2639, 0x5F6F, + 0x4917, 0x3041, 0xBBBB, 0xC2ED, 0xF57A, 0x8C2C, 0x07D6, 0x7E80, + 0xFAD7, 0x8381, 0x087B, 0x712D, 0x46BA, 0x3FEC, 0xB416, 0xCD40, + 0xDB38, 0xA26E, 0x2994, 0x50C2, 0x6755, 0x1E03, 0x95F9, 0xECAF, + 0xB909, 0xC05F, 0x4BA5, 0x32F3, 0x0564, 0x7C32, 0xF7C8, 0x8E9E, + 0x98E6, 0xE1B0, 0x6A4A, 0x131C, 0x248B, 0x5DDD, 0xD627, 0xAF71, + 0x7D6B, 0x043D, 0x8FC7, 0xF691, 0xC106, 0xB850, 0x33AA, 0x4AFC, + 0x5C84, 0x25D2, 0xAE28, 0xD77E, 0xE0E9, 0x99BF, 0x1245, 0x6B13, + 0x3EB5, 0x47E3, 0xCC19, 0xB54F, 0x82D8, 0xFB8E, 0x7074, 0x0922, + 0x1F5A, 0x660C, 0xEDF6, 0x94A0, 0xA337, 0xDA61, 0x519B, 0x28CD }; + +/** + * fm10k_crc_16b - Generate a 16 bit CRC for a region of 16 bit data + * @data: pointer to data to process + * @seed: seed value for CRC + * @len: length measured in 16 bits words + * + * This function will generate a CRC based on the polynomial 0xAC9A and + * whatever value is stored in the seed variable. Note that this + * value inverts the local seed and the result in order to capture all + * leading and trailing zeros. 
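The 256-entry table above is the usual byte-at-a-time, LSB-first lookup table for the 0xAC9A polynomial and can be regenerated mechanically. A generator sketch (not part of the driver; it reproduces e.g. table[0x01] = 0x7956 and table[0x80] = 0xAC9A):

    u16 table[256];
    unsigned int i, bit;

    for (i = 0; i < 256; i++) {
            u16 crc = (u16)i;

            for (bit = 0; bit < 8; bit++)
                    crc = (crc & 1) ? (crc >> 1) ^ 0xAC9A : (crc >> 1);
            table[i] = crc;
    }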
+ */ +STATIC u16 fm10k_crc_16b(const u32 *data, u16 seed, u16 len) +{ + u32 result = seed; + + while (len--) { + result ^= *(data++); + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + + if (!(len--)) + break; + + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + } + + return (u16)result; +} + +/** + * fm10k_fifo_crc - generate a CRC based off of FIFO data + * @fifo: pointer to FIFO + * @offset: offset point for start of FIFO + * @len: number of DWORDS words to process + * @seed: seed value for CRC + * + * This function generates a CRC for some region of the FIFO + **/ +STATIC u16 fm10k_fifo_crc(struct fm10k_mbx_fifo *fifo, u16 offset, + u16 len, u16 seed) +{ + u32 *data = fifo->buffer + offset; + + /* track when we should cross the end of the FIFO */ + offset = fifo->size - offset; + + /* if we are in 2 blocks process the end of the FIFO first */ + if (offset < len) { + seed = fm10k_crc_16b(data, seed, offset * 2); + data = fifo->buffer; + len -= offset; + } + + /* process any remaining bits */ + return fm10k_crc_16b(data, seed, len * 2); +} + +/** + * fm10k_mbx_update_local_crc - Update the local CRC for outgoing data + * @mbx: pointer to mailbox + * @head: head index provided by remote mailbox + * + * This function will generate the CRC for all data from the end of the + * last head update to the current one. It uses the result of the + * previous CRC as the seed for this update. The result is stored in + * mbx->local. + **/ +STATIC void fm10k_mbx_update_local_crc(struct fm10k_mbx_info *mbx, u16 head) +{ + u16 len = mbx->tail_len - fm10k_mbx_index_len(mbx, head, mbx->tail); + + /* determine the offset for the start of the region to be pulled */ + head = fm10k_fifo_head_offset(&mbx->tx, mbx->pulled); + + /* update local CRC to include all of the pulled data */ + mbx->local = fm10k_fifo_crc(&mbx->tx, head, len, mbx->local); +} + +/** + * fm10k_mbx_verify_remote_crc - Verify the CRC is correct for current data + * @mbx: pointer to mailbox + * + * This function will take all data that has been provided from the remote + * end and generate a CRC for it. This is stored in mbx->remote. The + * CRC for the header is then computed and if the result is non-zero this + * is an error and we signal an error dropping all data and resetting the + * connection. + */ +STATIC s32 fm10k_mbx_verify_remote_crc(struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 len = mbx->head_len; + u16 offset = fm10k_fifo_tail_offset(fifo, mbx->pushed) - len; + u16 crc; + + /* update the remote CRC if new data has been received */ + if (len) + mbx->remote = fm10k_fifo_crc(fifo, offset, len, mbx->remote); + + /* process the full header as we have to validate the CRC */ + crc = fm10k_crc_16b(&mbx->mbx_hdr, mbx->remote, 1); + + /* notify other end if we have a problem */ + return crc ? FM10K_MBX_ERR_CRC : FM10K_SUCCESS; +} + +/** + * fm10k_mbx_rx_ready - Indicates that a message is ready in the Rx FIFO + * @mbx: pointer to mailbox + * + * This function returns true if there is a message in the Rx FIFO to dequeue. 
+ **/ +STATIC bool fm10k_mbx_rx_ready(struct fm10k_mbx_info *mbx) +{ + u16 msg_size = fm10k_fifo_head_len(&mbx->rx); + + return msg_size && (fm10k_fifo_used(&mbx->rx) >= msg_size); +} + +/** + * fm10k_mbx_tx_ready - Indicates that the mailbox is in state ready for Tx + * @mbx: pointer to mailbox + * @len: verify free space is >= this value + * + * This function returns true if the mailbox is in a state ready to transmit. + **/ +STATIC bool fm10k_mbx_tx_ready(struct fm10k_mbx_info *mbx, u16 len) +{ + u16 fifo_unused = fm10k_fifo_unused(&mbx->tx); + + return (mbx->state == FM10K_STATE_OPEN) && (fifo_unused >= len); +} + +/** + * fm10k_mbx_tx_complete - Indicates that the Tx FIFO has been emptied + * @mbx: pointer to mailbox + * + * This function returns true if the Tx FIFO is empty. + **/ +STATIC bool fm10k_mbx_tx_complete(struct fm10k_mbx_info *mbx) +{ + return fm10k_fifo_empty(&mbx->tx); +} + +/** + * fm10k_mbx_deqeueue_rx - Dequeues the message from the head in the Rx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function dequeues messages and hands them off to the tlv parser. + * It will return the number of messages processed when called. + **/ +STATIC u16 fm10k_mbx_dequeue_rx(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + s32 err; + u16 cnt; + + /* parse Rx messages out of the Rx FIFO to empty it */ + for (cnt = 0; !fm10k_fifo_empty(fifo); cnt++) { + err = fm10k_tlv_msg_parse(hw, fifo->buffer + fifo->head, + mbx, mbx->msg_data); + if (err < 0) + mbx->rx_parse_err++; + + fm10k_fifo_head_drop(fifo); + } + + /* shift remaining bytes back to start of FIFO */ + memmove(fifo->buffer, fifo->buffer + fifo->tail, mbx->pushed << 2); + + /* shift head and tail based on the memory we moved */ + fifo->tail -= fifo->head; + fifo->head = 0; + + return cnt; +} + +/** + * fm10k_mbx_enqueue_tx - Enqueues the message to the tail of the Tx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg: message array to read + * + * This function enqueues a message up to the size specified by the length + * contained in the first DWORD of the message and will place at the tail + * of the FIFO. It will return 0 on success, or a negative value on error. 
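Callers are expected to gate this on the tx_ready check above so that a full Tx FIFO stays the exception rather than the rule; roughly (sketch only, with msg being an already TLV-formatted buffer and the tx_ready/enqueue_tx ops wiring assumed to have been set up by the mailbox connect path):

    if (mbx->ops.tx_ready(mbx, FM10K_VFMBX_MSG_MTU))
            err = mbx->ops.enqueue_tx(hw, mbx, msg);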
+ **/ +STATIC s32 fm10k_mbx_enqueue_tx(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, const u32 *msg) +{ + u32 countdown = mbx->timeout; + s32 err; + + switch (mbx->state) { + case FM10K_STATE_CLOSED: + case FM10K_STATE_DISCONNECT: + return FM10K_MBX_ERR_NO_MBX; + default: + break; + } + + /* enqueue the message on the Tx FIFO */ + err = fm10k_fifo_enqueue(&mbx->tx, msg); + + /* if it failed give the FIFO a chance to drain */ + while (err && countdown) { + countdown--; + usec_delay(mbx->usec_delay); + mbx->ops.process(hw, mbx); + err = fm10k_fifo_enqueue(&mbx->tx, msg); + } + + /* if we failed trhead the error */ + if (err) { + mbx->timeout = 0; + mbx->tx_busy++; + } + + /* begin processing message, ignore errors as this is just meant + * to start the mailbox flow so we are not concerned if there + * is a bad error, or the mailbox is already busy with a request + */ + if (!mbx->tail_len) + mbx->ops.process(hw, mbx); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_read - Copies the mbmem to local message buffer + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function copies the message from the mbmem to the message array + **/ +STATIC s32 fm10k_mbx_read(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_read"); + + /* only allow one reader in here at a time */ + if (mbx->mbx_hdr) + return FM10K_MBX_ERR_BUSY; + + /* read to capture initial interrupt bits */ + if (FM10K_READ_MBX(hw, mbx->mbx_reg) & FM10K_MBX_REQ_INTERRUPT) + mbx->mbx_lock = FM10K_MBX_ACK; + + /* write back interrupt bits to clear */ + FM10K_WRITE_MBX(hw, mbx->mbx_reg, + FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT); + + /* read remote header */ + mbx->mbx_hdr = FM10K_READ_MBX(hw, mbx->mbmem_reg ^ mbx->mbmem_len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_write - Copies the local message buffer to mbmem + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function copies the message from the the message array to mbmem + **/ +STATIC void fm10k_mbx_write(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + u32 mbmem = mbx->mbmem_reg; + + DEBUGFUNC("fm10k_mbx_write"); + + /* write new msg header to notify recepient of change */ + FM10K_WRITE_MBX(hw, mbmem, mbx->mbx_hdr); + + /* write mailbox to sent interrupt */ + if (mbx->mbx_lock) + FM10K_WRITE_MBX(hw, mbx->mbx_reg, mbx->mbx_lock); + + /* we no longer are using the header so free it */ + mbx->mbx_hdr = 0; + mbx->mbx_lock = 0; +} + +/** + * fm10k_mbx_create_connect_hdr - Generate a connect mailbox header + * @mbx: pointer to mailbox + * + * This function returns a connection mailbox header + **/ +STATIC void fm10k_mbx_create_connect_hdr(struct fm10k_mbx_info *mbx) +{ + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_CONNECT, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD) | + FM10K_MSG_HDR_FIELD_SET(mbx->rx.size - 1, CONNECT_SIZE); +} + +/** + * fm10k_mbx_create_data_hdr - Generate a data mailbox header + * @mbx: pointer to mailbox + * + * This function returns a data mailbox header + **/ +STATIC void fm10k_mbx_create_data_hdr(struct fm10k_mbx_info *mbx) +{ + u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DATA, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->tail, TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); + struct fm10k_mbx_fifo *fifo = &mbx->tx; + u16 crc; + + if (mbx->tail_len) + mbx->mbx_lock |= FM10K_MBX_REQ; + + /* generate CRC for data in flight and header */ + crc = fm10k_fifo_crc(fifo, fm10k_fifo_head_offset(fifo, mbx->pulled), + 
mbx->tail_len, mbx->local); + crc = fm10k_crc_16b(&hdr, crc, 1); + + /* load header to memory to be written */ + mbx->mbx_hdr = hdr | FM10K_MSG_HDR_FIELD_SET(crc, CRC); +} + +/** + * fm10k_mbx_create_disconnect_hdr - Generate a disconnect mailbox header + * @mbx: pointer to mailbox + * + * This function returns a disconnect mailbox header + **/ +STATIC void fm10k_mbx_create_disconnect_hdr(struct fm10k_mbx_info *mbx) +{ + u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DISCONNECT, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->tail, TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); + u16 crc = fm10k_crc_16b(&hdr, mbx->local, 1); + + mbx->mbx_lock |= FM10K_MBX_ACK; + + /* load header to memory to be written */ + mbx->mbx_hdr = hdr | FM10K_MSG_HDR_FIELD_SET(crc, CRC); +} + +/** + * fm10k_mbx_create_error_msg - Generate a error message + * @mbx: pointer to mailbox + * @err: local error encountered + * + * This function will interpret the error provided by err, and based on + * that it may shift the message by 1 DWORD and then place an error header + * at the start of the message. + **/ +STATIC void fm10k_mbx_create_error_msg(struct fm10k_mbx_info *mbx, s32 err) +{ + /* only generate an error message for these types */ + switch (err) { + case FM10K_MBX_ERR_TAIL: + case FM10K_MBX_ERR_HEAD: + case FM10K_MBX_ERR_TYPE: + case FM10K_MBX_ERR_SIZE: + case FM10K_MBX_ERR_RSVD0: + case FM10K_MBX_ERR_CRC: + break; + default: + return; + } + + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_ERROR, TYPE) | + FM10K_MSG_HDR_FIELD_SET(err, ERR_NO) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); +} + +/** + * fm10k_mbx_validate_msg_hdr - Validate common fields in the message header + * @mbx: pointer to mailbox + * @msg: message array to read + * + * This function will parse up the fields in the mailbox header and return + * an error if the header contains any of a number of invalid configurations + * including unrecognized type, invalid route, or a malformed message. 
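The FM10K_MSG_HDR_FIELD_SET()/_GET() helpers used throughout this parsing are plain shift-and-mask accessors whose actual field positions come from fm10k_mbx.h; conceptually, with a made-up field rather than the real HEAD layout:

    /* hypothetical example field: 4 bits starting at bit 16 */
    #define EX_HEAD_SHIFT  16
    #define EX_HEAD_MASK   0xF

    hdr  |= ((u32)head & EX_HEAD_MASK) << EX_HEAD_SHIFT;    /* FIELD_SET */
    head  = (u16)((hdr >> EX_HEAD_SHIFT) & EX_HEAD_MASK);   /* FIELD_GET */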
+ **/ +STATIC s32 fm10k_mbx_validate_msg_hdr(struct fm10k_mbx_info *mbx) +{ + u16 type, rsvd0, head, tail, size; + const u32 *hdr = &mbx->mbx_hdr; + + DEBUGFUNC("fm10k_mbx_validate_msg_hdr"); + + type = FM10K_MSG_HDR_FIELD_GET(*hdr, TYPE); + rsvd0 = FM10K_MSG_HDR_FIELD_GET(*hdr, RSVD0); + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + size = FM10K_MSG_HDR_FIELD_GET(*hdr, CONNECT_SIZE); + + if (rsvd0) + return FM10K_MBX_ERR_RSVD0; + + switch (type) { + case FM10K_MSG_DISCONNECT: + /* validate that all data has been received */ + if (tail != mbx->head) + return FM10K_MBX_ERR_TAIL; + + /* fall through */ + case FM10K_MSG_DATA: + /* validate that head is moving correctly */ + if (!head || (head == FM10K_MSG_HDR_MASK(HEAD))) + return FM10K_MBX_ERR_HEAD; + if (fm10k_mbx_index_len(mbx, head, mbx->tail) > mbx->tail_len) + return FM10K_MBX_ERR_HEAD; + + /* validate that tail is moving correctly */ + if (!tail || (tail == FM10K_MSG_HDR_MASK(TAIL))) + return FM10K_MBX_ERR_TAIL; + if (fm10k_mbx_index_len(mbx, mbx->head, tail) < mbx->mbmem_len) + break; + + return FM10K_MBX_ERR_TAIL; + case FM10K_MSG_CONNECT: + /* validate size is in range and is power of 2 mask */ + if ((size < FM10K_VFMBX_MSG_MTU) || (size & (size + 1))) + return FM10K_MBX_ERR_SIZE; + + /* fall through */ + case FM10K_MSG_ERROR: + if (!head || (head == FM10K_MSG_HDR_MASK(HEAD))) + return FM10K_MBX_ERR_HEAD; + /* neither create nor error include a tail offset */ + if (tail) + return FM10K_MBX_ERR_TAIL; + + break; + default: + return FM10K_MBX_ERR_TYPE; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_create_reply - Generate reply based on state and remote head + * @mbx: pointer to mailbox + * @head: acknowledgement number + * + * This function will generate an outgoing message based on the current + * mailbox state and the remote fifo head. It will return the length + * of the outgoing message excluding header on success, and a negative value + * on error. + **/ +STATIC s32 fm10k_mbx_create_reply(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* update our checksum for the outgoing data */ + fm10k_mbx_update_local_crc(mbx, head); + + /* as long as other end recognizes us keep sending data */ + fm10k_mbx_pull_head(hw, mbx, head); + + /* generate new header based on data */ + if (mbx->tail_len || (mbx->state == FM10K_STATE_OPEN)) + fm10k_mbx_create_data_hdr(mbx); + else + fm10k_mbx_create_disconnect_hdr(mbx); + break; + case FM10K_STATE_CONNECT: + /* send disconnect even if we aren't connected */ + fm10k_mbx_create_connect_hdr(mbx); + break; + case FM10K_STATE_CLOSED: + /* generate new header based on data */ + fm10k_mbx_create_disconnect_hdr(mbx); + default: + break; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_reset_work- Reset internal pointers for any pending work + * @mbx: pointer to mailbox + * + * This function will reset all internal pointers so any work in progress + * is dropped. This call should occur every time we transition from the + * open state to the connect state. 
+ **/ +STATIC void fm10k_mbx_reset_work(struct fm10k_mbx_info *mbx) +{ + /* reset our outgoing max size back to Rx limits */ + mbx->max_size = mbx->rx.size - 1; + + /* just do a quick resysnc to start of message */ + mbx->pushed = 0; + mbx->pulled = 0; + mbx->tail_len = 0; + mbx->head_len = 0; + mbx->rx.tail = 0; + mbx->rx.head = 0; +} + +/** + * fm10k_mbx_update_max_size - Update the max_size and drop any large messages + * @mbx: pointer to mailbox + * @size: new value for max_size + * + * This function will update the max_size value and drop any outgoing messages + * from the head of the Tx FIFO that are larger than max_size. + **/ +STATIC void fm10k_mbx_update_max_size(struct fm10k_mbx_info *mbx, u16 size) +{ + u16 len; + + DEBUGFUNC("fm10k_mbx_update_max_size_hdr"); + + mbx->max_size = size; + + /* flush any oversized messages from the queue */ + for (len = fm10k_fifo_head_len(&mbx->tx); + len > size; + len = fm10k_fifo_head_len(&mbx->tx)) { + fm10k_fifo_head_drop(&mbx->tx); + mbx->tx_dropped++; + } +} + +/** + * fm10k_mbx_connect_reset - Reset following request for reset + * @mbx: pointer to mailbox + * + * This function resets the mailbox to either a disconnected state + * or a connect state depending on the current mailbox state + **/ +STATIC void fm10k_mbx_connect_reset(struct fm10k_mbx_info *mbx) +{ + /* just do a quick resysnc to start of frame */ + fm10k_mbx_reset_work(mbx); + + /* reset CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* we cannot exit connect until the size is good */ + if (mbx->state == FM10K_STATE_OPEN) + mbx->state = FM10K_STATE_CONNECT; + else + mbx->state = FM10K_STATE_CLOSED; +} + +/** + * fm10k_mbx_process_connect - Process connect header + * @mbx: pointer to mailbox + * @msg: message array to process + * + * This function will read an incoming connect header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. + **/ +STATIC s32 fm10k_mbx_process_connect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + const u32 *hdr = &mbx->mbx_hdr; + u16 size, head; + + /* we will need to pull all of the fields for verification */ + size = FM10K_MSG_HDR_FIELD_GET(*hdr, CONNECT_SIZE); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + + switch (state) { + case FM10K_STATE_DISCONNECT: + case FM10K_STATE_OPEN: + /* reset any in-progress work */ + fm10k_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* we cannot exit connect until the size is good */ + if (size > mbx->rx.size) { + mbx->max_size = mbx->rx.size - 1; + } else { + /* record the remote system requesting connection */ + mbx->state = FM10K_STATE_OPEN; + + fm10k_mbx_update_max_size(mbx, size); + } + break; + default: + break; + } + + /* align our tail index to remote head index */ + mbx->tail = head; + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_data - Process data header + * @mbx: pointer to mailbox + * + * This function will read an incoming data header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. 
+ **/ +STATIC s32 fm10k_mbx_process_data(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 head, tail; + s32 err; + + DEBUGFUNC("fm10k_mbx_process_data"); + + /* we will need to pull all of the fields for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL); + + /* if we are in connect just update our data and go */ + if (mbx->state == FM10K_STATE_CONNECT) { + mbx->tail = head; + mbx->state = FM10K_STATE_OPEN; + } + + /* abort on message size errors */ + err = fm10k_mbx_push_tail(hw, mbx, tail); + if (err < 0) + return err; + + /* verify the checksum on the incoming data */ + err = fm10k_mbx_verify_remote_crc(mbx); + if (err) + return err; + + /* process messages if we have received any */ + fm10k_mbx_dequeue_rx(hw, mbx); + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_disconnect - Process disconnect header + * @mbx: pointer to mailbox + * + * This function will read an incoming disconnect header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. + **/ +STATIC s32 fm10k_mbx_process_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + const u32 *hdr = &mbx->mbx_hdr; + u16 head, tail; + s32 err; + + /* we will need to pull all of the fields for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL); + + /* We should not be receiving disconnect if Rx is incomplete */ + if (mbx->pushed) + return FM10K_MBX_ERR_TAIL; + + /* we have already verified mbx->head == tail so we know this is 0 */ + mbx->head_len = 0; + + /* verify the checksum on the incoming header is correct */ + err = fm10k_mbx_verify_remote_crc(mbx); + if (err) + return err; + + switch (state) { + case FM10K_STATE_DISCONNECT: + case FM10K_STATE_OPEN: + /* state doesn't change if we still have work to do */ + if (!fm10k_mbx_tx_complete(mbx)) + break; + + /* verify the head indicates we completed all transmits */ + if (head != mbx->tail) + return FM10K_MBX_ERR_HEAD; + + /* reset any in-progress work */ + fm10k_mbx_connect_reset(mbx); + break; + default: + break; + } + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_error - Process error header + * @mbx: pointer to mailbox + * + * This function will read an incoming error header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. 
+ **/ +STATIC s32 fm10k_mbx_process_error(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + s32 err_no; + u16 head; + + /* we will need to pull all of the fields for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + + /* we only have lower 10 bits of error number os add upper bits */ + err_no = FM10K_MSG_HDR_FIELD_GET(*hdr, ERR_NO); + err_no |= ~FM10K_MSG_HDR_MASK(ERR_NO); + + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* flush any uncompleted work */ + fm10k_mbx_reset_work(mbx); + + /* reset CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* reset tail index and size to prepare for reconnect */ + mbx->tail = head; + + /* if open then reset max_size and go back to connect */ + if (mbx->state == FM10K_STATE_OPEN) { + mbx->state = FM10K_STATE_CONNECT; + break; + } + + /* send a connect message to get data flowing again */ + fm10k_mbx_create_connect_hdr(mbx); + return FM10K_SUCCESS; + default: + break; + } + + return fm10k_mbx_create_reply(hw, mbx, mbx->tail); +} + +/** + * fm10k_mbx_process - Process mailbox interrupt + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will process incoming mailbox events and generate mailbox + * replies. It will return a value indicating the number of DWORDs + * transmitted excluding header on success or a negative value on error. + **/ +STATIC s32 fm10k_mbx_process(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + s32 err; + + DEBUGFUNC("fm10k_mbx_process"); + + /* we do not read mailbox if closed */ + if (mbx->state == FM10K_STATE_CLOSED) + return FM10K_SUCCESS; + + /* copy data from mailbox */ + err = fm10k_mbx_read(hw, mbx); + if (err) + return err; + + /* validate type, source, and destination */ + err = fm10k_mbx_validate_msg_hdr(mbx); + if (err < 0) + goto msg_err; + + switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, TYPE)) { + case FM10K_MSG_CONNECT: + err = fm10k_mbx_process_connect(hw, mbx); + break; + case FM10K_MSG_DATA: + err = fm10k_mbx_process_data(hw, mbx); + break; + case FM10K_MSG_DISCONNECT: + err = fm10k_mbx_process_disconnect(hw, mbx); + break; + case FM10K_MSG_ERROR: + err = fm10k_mbx_process_error(hw, mbx); + break; + default: + err = FM10K_MBX_ERR_TYPE; + break; + } + +msg_err: + /* notify partner of errors on our end */ + if (err < 0) + fm10k_mbx_create_error_msg(mbx, err); + + /* copy data from mailbox */ + fm10k_mbx_write(hw, mbx); + + return err; +} + +/** + * fm10k_mbx_disconnect - Shutdown mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will shut down the mailbox. It places the mailbox first + * in the disconnect state, it then allows up to a predefined timeout for + * the mailbox to transition to close on its own. If this does not occur + * then the mailbox will be forced into the closed state. + * + * Any mailbox transactions not completed before calling this function + * are not guaranteed to complete and may be dropped. + **/ +STATIC void fm10k_mbx_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + int timeout = mbx->timeout ? 
FM10K_MBX_DISCONNECT_TIMEOUT : 0; + + DEBUGFUNC("fm10k_mbx_disconnect"); + + /* Place mbx in ready to disconnect state */ + mbx->state = FM10K_STATE_DISCONNECT; + + /* trigger interrupt to start shutdown process */ + FM10K_WRITE_MBX(hw, mbx->mbx_reg, FM10K_MBX_REQ | + FM10K_MBX_INTERRUPT_DISABLE); + do { + usec_delay(FM10K_MBX_POLL_DELAY); + mbx->ops.process(hw, mbx); + timeout -= FM10K_MBX_POLL_DELAY; + } while ((timeout > 0) && (mbx->state != FM10K_STATE_CLOSED)); + + /* in case we didn't close just force the mailbox into shutdown */ + fm10k_mbx_connect_reset(mbx); + fm10k_mbx_update_max_size(mbx, 0); + + FM10K_WRITE_MBX(hw, mbx->mbmem_reg, 0); +} + +/** + * fm10k_mbx_connect - Start mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will initiate a mailbox connection. It will populate the + * mailbox with a broadcast connect message and then initialize the lock. + * This is safe since the connect message is a single DWORD so the mailbox + * transaction is guaranteed to be atomic. + * + * This function will return an error if the mailbox has not been initiated + * or is currently in use. + **/ +STATIC s32 fm10k_mbx_connect(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_connect"); + + /* we cannot connect an uninitialized mailbox */ + if (!mbx->rx.buffer) + return FM10K_MBX_ERR_NO_SPACE; + + /* we cannot connect an already connected mailbox */ + if (mbx->state != FM10K_STATE_CLOSED) + return FM10K_MBX_ERR_BUSY; + + /* mailbox timeout can now become active */ + mbx->timeout = FM10K_MBX_INIT_TIMEOUT; + + /* Place mbx in ready to connect state */ + mbx->state = FM10K_STATE_CONNECT; + + /* initialize header of remote mailbox */ + fm10k_mbx_create_disconnect_hdr(mbx); + FM10K_WRITE_MBX(hw, mbx->mbmem_reg ^ mbx->mbmem_len, mbx->mbx_hdr); + + /* enable interrupt and notify other party of new message */ + mbx->mbx_lock = FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT | + FM10K_MBX_INTERRUPT_ENABLE; + + /* generate and load connect header into mailbox */ + fm10k_mbx_create_connect_hdr(mbx); + fm10k_mbx_write(hw, mbx); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_validate_handlers - Validate layout of message parsing data + * @msg_data: handlers for mailbox events + * + * This function validates the layout of the message parsing data. This + * should be mostly static, but it is important to catch any errors that + * are made when constructing the parsers. 
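Note how fm10k_mbx_connect() above writes the remote header to mbmem_reg ^ mbmem_len: the local and remote halves of the mailbox memory differ in a single address bit, so XOR with the half-length (or with FM10K_MBMEM_PF_XOR for the switch-manager FIFO) flips between them. A small standalone check of that arithmetic, with the register constants reproduced from fm10k_mbx.h later in this patch, illustrative only:

#include <stdio.h>
#include <stdint.h>

/* Register layout reproduced from fm10k_mbx.h in this patch. */
#define FM10K_MBMEM_SM(_n)   ((_n) + 0x18400)
#define FM10K_MBMEM_PF(_n)   ((_n) + 0x18600)
#define FM10K_MBMEM_PF_XOR   (FM10K_MBMEM_SM(0) ^ FM10K_MBMEM_PF(0))
#define FM10K_VFMBMEM(_n)    ((_n) + 0x00020)
#define FM10K_VFMBMEM_LEN    16
#define FM10K_VFMBMEM_VF_XOR (FM10K_VFMBMEM_LEN / 2)

int main(void)
{
        /* VF case: the local header lives at VFMBMEM(VF_XOR), the remote
         * header at VFMBMEM(0); XOR with the half-length flips between
         * the two halves, which is what mbmem_reg ^ mbmem_len does.
         */
        uint32_t vf_local  = FM10K_VFMBMEM(FM10K_VFMBMEM_VF_XOR);
        uint32_t vf_remote = vf_local ^ FM10K_VFMBMEM_VF_XOR;

        printf("VF local 0x%x remote 0x%x\n",
               (unsigned)vf_local, (unsigned)vf_remote);

        /* PF/SM case: the SM and PF FIFOs differ only in one bit, so the
         * same XOR trick (FM10K_MBMEM_PF_XOR == 0x200) switches FIFOs.
         */
        printf("PF/SM xor 0x%x: 0x%x <-> 0x%x\n",
               (unsigned)FM10K_MBMEM_PF_XOR,
               (unsigned)FM10K_MBMEM_PF(0),
               (unsigned)(FM10K_MBMEM_PF(0) ^ FM10K_MBMEM_PF_XOR));
        return 0;
}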
+ **/ +STATIC s32 fm10k_mbx_validate_handlers(const struct fm10k_msg_data *msg_data) +{ + const struct fm10k_tlv_attr *attr; + unsigned int id; + + DEBUGFUNC("fm10k_mbx_validate_handlers"); + + /* Allow NULL mailboxes that transmit but don't receive */ + if (!msg_data) + return FM10K_SUCCESS; + + while (msg_data->id != FM10K_TLV_ERROR) { + /* all messages should have a function handler */ + if (!msg_data->func) + return FM10K_ERR_PARAM; + + /* parser is optional */ + attr = msg_data->attr; + if (attr) { + while (attr->id != FM10K_TLV_ERROR) { + id = attr->id; + attr++; + /* ID should always be increasing */ + if (id >= attr->id) + return FM10K_ERR_PARAM; + /* ID should fit in results array */ + if (id >= FM10K_TLV_RESULTS_MAX) + return FM10K_ERR_PARAM; + } + + /* verify terminator is in the list */ + if (attr->id != FM10K_TLV_ERROR) + return FM10K_ERR_PARAM; + } + + id = msg_data->id; + msg_data++; + /* ID should always be increasing */ + if (id >= msg_data->id) + return FM10K_ERR_PARAM; + } + + /* verify terminator is in the list */ + if ((msg_data->id != FM10K_TLV_ERROR) || !msg_data->func) + return FM10K_ERR_PARAM; + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_register_handlers - Register a set of handler ops for mailbox + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * + * This function associates a set of message handling ops with a mailbox. + **/ +STATIC s32 fm10k_mbx_register_handlers(struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data) +{ + DEBUGFUNC("fm10k_mbx_register_handlers"); + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + return FM10K_SUCCESS; +} + +/** + * fm10k_pfvf_mbx_init - Initialize mailbox memory for PF/VF mailbox + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * @id: ID reference for PF as it supports up to 64 PF/VF mailboxes + * + * This function initializes the mailbox for use. It will split the + * buffer provided an use that th populate both the Tx and Rx FIFO by + * evenly splitting it. In order to allow for easy masking of head/tail + * the value reported in size must be a power of 2 and is reported in + * DWORDs, not bytes. Any invalid values will cause the mailbox to return + * error. 
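To make the ops table that fm10k_pfvf_mbx_init() populates below a little more concrete, here is a rough caller sequence: initialize, connect, enqueue a TLV message, service the mailbox until the transmit completes, then disconnect. This is a sketch only; real callers drive process() from the mailbox interrupt, the message ID 1000 is made up, and most error handling is trimmed:

#include "fm10k_mbx.h"
#include "fm10k_tlv.h"

/* Illustrative only: drive a PF/VF mailbox by polling. Assumes 'hw' has
 * been mapped and reset elsewhere and 'handlers' is a valid table
 * terminated by an FM10K_TLV_ERROR entry (see fm10k_mbx_validate_handlers
 * above).
 */
static s32 example_mbx_exchange(struct fm10k_hw *hw,
                                const struct fm10k_msg_data *handlers)
{
        struct fm10k_mbx_info *mbx = &hw->mbx;
        u32 msg[2];                     /* room for a header-only message */
        int spins = FM10K_MBX_INIT_TIMEOUT;
        s32 err;

        err = fm10k_pfvf_mbx_init(hw, mbx, handlers, 0);
        if (err)
                return err;

        err = mbx->ops.connect(hw, mbx);
        if (err)
                return err;

        fm10k_tlv_msg_init(msg, 1000);  /* 1000 is a made-up message ID */
        err = mbx->ops.enqueue_tx(hw, mbx, msg);

        /* service the mailbox until our message has been acknowledged */
        while (!err && !mbx->ops.tx_complete(mbx) && spins--) {
                usec_delay(FM10K_MBX_INT_DELAY);
                err = mbx->ops.process(hw, mbx);
        }

        mbx->ops.disconnect(hw, mbx);
        return err;
}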
+ **/ +s32 fm10k_pfvf_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data, u8 id) +{ + DEBUGFUNC("fm10k_pfvf_mbx_init"); + + /* initialize registers */ + switch (hw->mac.type) { + case fm10k_mac_vf: + mbx->mbx_reg = FM10K_VFMBX; + mbx->mbmem_reg = FM10K_VFMBMEM(FM10K_VFMBMEM_VF_XOR); + break; + case fm10k_mac_pf: + /* there are only 64 VF <-> PF mailboxes */ + if (id < 64) { + mbx->mbx_reg = FM10K_MBX(id); + mbx->mbmem_reg = FM10K_MBMEM_VF(id, 0); + break; + } + /* fallthough */ + default: + return FM10K_MBX_ERR_NO_MBX; + } + + /* start out in closed state */ + mbx->state = FM10K_STATE_CLOSED; + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + /* start mailbox as timed out and let the reset_hw call + * set the timeout value to begin communications + */ + mbx->timeout = 0; + mbx->usec_delay = FM10K_MBX_INIT_DELAY; + + /* initalize tail and head */ + mbx->tail = 1; + mbx->head = 1; + + /* initialize CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* Split buffer for use by Tx/Rx FIFOs */ + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + mbx->mbmem_len = FM10K_VFMBMEM_VF_XOR; + + /* initialize the FIFOs, sizes are in 4 byte increments */ + fm10k_fifo_init(&mbx->tx, mbx->buffer, FM10K_MBX_TX_BUFFER_SIZE); + fm10k_fifo_init(&mbx->rx, &mbx->buffer[FM10K_MBX_TX_BUFFER_SIZE], + FM10K_MBX_RX_BUFFER_SIZE); + + /* initialize function pointers */ + mbx->ops.connect = fm10k_mbx_connect; + mbx->ops.disconnect = fm10k_mbx_disconnect; + mbx->ops.rx_ready = fm10k_mbx_rx_ready; + mbx->ops.tx_ready = fm10k_mbx_tx_ready; + mbx->ops.tx_complete = fm10k_mbx_tx_complete; + mbx->ops.enqueue_tx = fm10k_mbx_enqueue_tx; + mbx->ops.process = fm10k_mbx_process; + mbx->ops.register_handlers = fm10k_mbx_register_handlers; + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_create_data_hdr - Generate a mailbox header for local FIFO + * @mbx: pointer to mailbox + * + * This function returns a connection mailbox header + **/ +STATIC void fm10k_sm_mbx_create_data_hdr(struct fm10k_mbx_info *mbx) +{ + if (mbx->tail_len) + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(mbx->tail, SM_TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->remote, SM_VER) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, SM_HEAD); +} + +/** + * fm10k_sm_mbx_create_connect_hdr - Generate a mailbox header for local FIFO + * @mbx: pointer to mailbox + * @err: error flags to report if any + * + * This function returns a connection mailbox header + **/ +STATIC void fm10k_sm_mbx_create_connect_hdr(struct fm10k_mbx_info *mbx, u8 err) +{ + if (mbx->local) + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(mbx->tail, SM_TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->remote, SM_VER) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, SM_HEAD) | + FM10K_MSG_HDR_FIELD_SET(err, SM_ERR); +} + +/** + * fm10k_sm_mbx_connect_reset - Reset following request for reset + * @mbx: pointer to mailbox + * + * This function resets the mailbox to a just connected state + **/ +STATIC void fm10k_sm_mbx_connect_reset(struct fm10k_mbx_info *mbx) +{ + /* flush any uncompleted work */ + fm10k_mbx_reset_work(mbx); + + /* set local version to max and remote version to 0 */ + mbx->local = FM10K_SM_MBX_VERSION; + mbx->remote = 0; + + /* initalize tail and head */ + mbx->tail = 1; + mbx->head = 1; + + /* reset state back to connect */ + 
mbx->state = FM10K_STATE_CONNECT; +} + +/** + * fm10k_sm_mbx_connect - Start switch manager mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will initiate a mailbox connection with the switch + * manager. To do this it will first disconnect the mailbox, and then + * reconnect it in order to complete a reset of the mailbox. + * + * This function will return an error if the mailbox has not been initiated + * or is currently in use. + **/ +STATIC s32 fm10k_sm_mbx_connect(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_connect"); + + /* we cannot connect an uninitialized mailbox */ + if (!mbx->rx.buffer) + return FM10K_MBX_ERR_NO_SPACE; + + /* we cannot connect an already connected mailbox */ + if (mbx->state != FM10K_STATE_CLOSED) + return FM10K_MBX_ERR_BUSY; + + /* mailbox timeout can now become active */ + mbx->timeout = FM10K_MBX_INIT_TIMEOUT; + + /* Place mbx in ready to connect state */ + mbx->state = FM10K_STATE_CONNECT; + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + + /* reset interface back to connect */ + fm10k_sm_mbx_connect_reset(mbx); + + /* enable interrupt and notify other party of new message */ + mbx->mbx_lock = FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT | + FM10K_MBX_INTERRUPT_ENABLE; + + /* generate and load connect header into mailbox */ + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + fm10k_mbx_write(hw, mbx); + + /* enable interrupt and notify other party of new message */ + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_disconnect - Shutdown mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will shut down the mailbox. It places the mailbox first + * in the disconnect state, it then allows up to a predefined timeout for + * the mailbox to transition to close on its own. If this does not occur + * then the mailbox will be forced into the closed state. + * + * Any mailbox transactions not completed before calling this function + * are not guaranteed to complete and may be dropped. + **/ +STATIC void fm10k_sm_mbx_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + int timeout = mbx->timeout ? FM10K_MBX_DISCONNECT_TIMEOUT : 0; + + DEBUGFUNC("fm10k_sm_mbx_disconnect"); + + /* Place mbx in ready to disconnect state */ + mbx->state = FM10K_STATE_DISCONNECT; + + /* trigger interrupt to start shutdown process */ + FM10K_WRITE_REG(hw, mbx->mbx_reg, FM10K_MBX_REQ | + FM10K_MBX_INTERRUPT_DISABLE); + do { + usec_delay(FM10K_MBX_POLL_DELAY); + mbx->ops.process(hw, mbx); + timeout -= FM10K_MBX_POLL_DELAY; + } while ((timeout > 0) && (mbx->state != FM10K_STATE_CLOSED)); + + /* in case we didn't close just force the mailbox into shutdown */ + mbx->state = FM10K_STATE_CLOSED; + mbx->remote = 0; + fm10k_mbx_reset_work(mbx); + fm10k_mbx_update_max_size(mbx, 0); + + FM10K_WRITE_REG(hw, mbx->mbmem_reg, 0); +} + +/** + * fm10k_mbx_validate_fifo_hdr - Validate fields in the remote FIFO header + * @mbx: pointer to mailbox + * + * This function will parse up the fields in the mailbox header and return + * an error if the header contains any of a number of invalid configurations + * including unrecognized offsets or version numbers. 
+ **/ +STATIC s32 fm10k_sm_mbx_validate_fifo_hdr(struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 tail, head, ver; + + DEBUGFUNC("fm10k_mbx_validate_msg_hdr"); + + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_TAIL); + ver = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_VER); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_HEAD); + + switch (ver) { + case 0: + break; + case FM10K_SM_MBX_VERSION: + if (!head || head > FM10K_SM_MBX_FIFO_LEN) + return FM10K_MBX_ERR_HEAD; + if (!tail || tail > FM10K_SM_MBX_FIFO_LEN) + return FM10K_MBX_ERR_TAIL; + if (mbx->tail < head) + head += mbx->mbmem_len - 1; + if (tail < mbx->head) + tail += mbx->mbmem_len - 1; + if (fm10k_mbx_index_len(mbx, head, mbx->tail) > mbx->tail_len) + return FM10K_MBX_ERR_HEAD; + if (fm10k_mbx_index_len(mbx, mbx->head, tail) < mbx->mbmem_len) + break; + return FM10K_MBX_ERR_TAIL; + default: + return FM10K_MBX_ERR_SRC; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_process_error - Process header with error flag set + * @mbx: pointer to mailbox + * + * This function is meant to respond to a request where the error flag + * is set. As a result we will terminate a connection if one is present + * and fall back into the reset state with a connection header of version + * 0 (RESET). + **/ +STATIC void fm10k_sm_mbx_process_error(struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + + switch (state) { + case FM10K_STATE_DISCONNECT: + /* if there is an error just disconnect */ + mbx->remote = 0; + break; + case FM10K_STATE_OPEN: + /* flush any uncompleted work */ + fm10k_sm_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* try connnecting at lower version */ + if (mbx->remote) { + while (mbx->local > 1) + mbx->local--; + mbx->remote = 0; + } + break; + default: + break; + } + + fm10k_sm_mbx_create_connect_hdr(mbx, 0); +} + +/** + * fm10k_sm_mbx_create_error_message - Process an error in FIFO hdr + * @mbx: pointer to mailbox + * @err: local error encountered + * + * This function will interpret the error provided by err, and based on + * that it may set the error bit in the local message header + **/ +STATIC void fm10k_sm_mbx_create_error_msg(struct fm10k_mbx_info *mbx, s32 err) +{ + /* only generate an error message for these types */ + switch (err) { + case FM10K_MBX_ERR_TAIL: + case FM10K_MBX_ERR_HEAD: + case FM10K_MBX_ERR_SRC: + case FM10K_MBX_ERR_SIZE: + case FM10K_MBX_ERR_RSVD0: + break; + default: + return; + } + + /* process it as though we received an error, and send error reply */ + fm10k_sm_mbx_process_error(mbx); + fm10k_sm_mbx_create_connect_hdr(mbx, 1); +} + +/** + * fm10k_sm_mbx_receive - Take message from Rx mailbox FIFO and put it in Rx + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will dequeue one message from the Rx switch manager mailbox + * FIFO and place it in the Rx mailbox FIFO for processing by software. 
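The switch-manager header validated above uses a different layout from the PF/VF header: a 12-bit local tail, a 4-bit version, a 12-bit remote head and a 4-bit error field (the SM_* shifts and sizes appear in fm10k_mbx.h further down). A small standalone round-trip of those fields, for illustration only:

#include <stdio.h>
#include <stdint.h>

typedef uint32_t u32;
typedef uint16_t u16;

/* SM FIFO header field layout, reproduced from fm10k_mbx.h below. */
#define FM10K_MSG_SM_TAIL_SHIFT  0
#define FM10K_MSG_SM_TAIL_SIZE   12
#define FM10K_MSG_SM_VER_SHIFT   12
#define FM10K_MSG_SM_VER_SIZE    4
#define FM10K_MSG_SM_HEAD_SHIFT  16
#define FM10K_MSG_SM_HEAD_SIZE   12
#define FM10K_MSG_SM_ERR_SHIFT   28
#define FM10K_MSG_SM_ERR_SIZE    4

#define FM10K_MSG_HDR_MASK(name) \
        ((0x1u << FM10K_MSG_##name##_SIZE) - 1)
#define FM10K_MSG_HDR_FIELD_SET(value, name) \
        (((u32)(value) & FM10K_MSG_HDR_MASK(name)) << FM10K_MSG_##name##_SHIFT)
#define FM10K_MSG_HDR_FIELD_GET(value, name) \
        ((u16)((value) >> FM10K_MSG_##name##_SHIFT) & FM10K_MSG_HDR_MASK(name))

int main(void)
{
        /* pack and unpack a version-1 header with tail = 1, head = 1 */
        u32 hdr = FM10K_MSG_HDR_FIELD_SET(1, SM_TAIL) |
                  FM10K_MSG_HDR_FIELD_SET(1, SM_VER) |
                  FM10K_MSG_HDR_FIELD_SET(1, SM_HEAD) |
                  FM10K_MSG_HDR_FIELD_SET(0, SM_ERR);

        printf("hdr 0x%08x tail %u ver %u head %u err %u\n", (unsigned)hdr,
               FM10K_MSG_HDR_FIELD_GET(hdr, SM_TAIL),
               FM10K_MSG_HDR_FIELD_GET(hdr, SM_VER),
               FM10K_MSG_HDR_FIELD_GET(hdr, SM_HEAD),
               FM10K_MSG_HDR_FIELD_GET(hdr, SM_ERR));
        return 0;       /* prints hdr 0x00011001 tail 1 ver 1 head 1 err 0 */
}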
+ **/ +STATIC s32 fm10k_sm_mbx_receive(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, + u16 tail) +{ + /* reduce length by 1 to convert to a mask */ + u16 mbmem_len = mbx->mbmem_len - 1; + s32 err; + + DEBUGFUNC("fm10k_sm_mbx_receive"); + + /* push tail in front of head */ + if (tail < mbx->head) + tail += mbmem_len; + + /* copy data to the Rx FIFO */ + err = fm10k_mbx_push_tail(hw, mbx, tail); + if (err < 0) + return err; + + /* process messages if we have received any */ + fm10k_mbx_dequeue_rx(hw, mbx); + + /* guarantee head aligns with the end of the last message */ + mbx->head = fm10k_mbx_head_sub(mbx, mbx->pushed); + mbx->pushed = 0; + + /* clear any extra bits left over since index adds 1 extra bit */ + if (mbx->head > mbmem_len) + mbx->head -= mbmem_len; + + return err; +} + +/** + * fm10k_sm_mbx_transmit - Take message from Tx and put it in Tx mailbox FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will dequeue one message from the Tx mailbox FIFO and place + * it in the Tx switch manager mailbox FIFO for processing by hardware. + **/ +STATIC void fm10k_sm_mbx_transmit(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + struct fm10k_mbx_fifo *fifo = &mbx->tx; + /* reduce length by 1 to convert to a mask */ + u16 mbmem_len = mbx->mbmem_len - 1; + u16 tail_len, len = 0; + u32 *msg; + + DEBUGFUNC("fm10k_sm_mbx_transmit"); + + /* push head behind tail */ + if (mbx->tail < head) + head += mbmem_len; + + fm10k_mbx_pull_head(hw, mbx, head); + + /* determine msg aligned offset for end of buffer */ + do { + msg = fifo->buffer + fm10k_fifo_head_offset(fifo, len); + tail_len = len; + len += FM10K_TLV_DWORD_LEN(*msg); + } while ((len <= mbx->tail_len) && (len < mbmem_len)); + + /* guarantee we stop on a message boundary */ + if (mbx->tail_len > tail_len) { + mbx->tail = fm10k_mbx_tail_sub(mbx, mbx->tail_len - tail_len); + mbx->tail_len = tail_len; + } + + /* clear any extra bits left over since index adds 1 extra bit */ + if (mbx->tail > mbmem_len) + mbx->tail -= mbmem_len; +} + +/** + * fm10k_sm_mbx_create_reply - Generate reply based on state and remote head + * @mbx: pointer to mailbox + * @head: acknowledgement number + * + * This function will generate an outgoing message based on the current + * mailbox state and the remote fifo head. It will return the length + * of the outgoing message excluding header on success, and a negative value + * on error. + **/ +STATIC void fm10k_sm_mbx_create_reply(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* flush out Tx data */ + fm10k_sm_mbx_transmit(hw, mbx, head); + + /* generate new header based on data */ + if (mbx->tail_len || (mbx->state == FM10K_STATE_OPEN)) { + fm10k_sm_mbx_create_data_hdr(mbx); + } else { + mbx->remote = 0; + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + } + break; + case FM10K_STATE_CONNECT: + case FM10K_STATE_CLOSED: + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + break; + default: + break; + } +} + +/** + * fm10k_sm_mbx_process_reset - Process header with version == 0 (RESET) + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function is meant to respond to a request where the version data + * is set to 0. As such we will either terminate the connection or go + * into the connect state in order to re-establish the connection. 
This + * function can also be used to respond to an error as the connection + * resetting would also be a means of dealing with errors. + **/ +STATIC void fm10k_sm_mbx_process_reset(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + + switch (state) { + case FM10K_STATE_DISCONNECT: + /* drop remote connections and disconnect */ + mbx->state = FM10K_STATE_CLOSED; + mbx->remote = 0; + mbx->local = 0; + break; + case FM10K_STATE_OPEN: + /* flush any incomplete work */ + fm10k_sm_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* Update remote value to match local value */ + mbx->remote = mbx->local; + default: + break; + } + + fm10k_sm_mbx_create_reply(hw, mbx, mbx->tail); +} + +/** + * fm10k_sm_mbx_process_version_1 - Process header with version == 1 + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function is meant to process messages received when the remote + * mailbox is active. + **/ +STATIC s32 fm10k_sm_mbx_process_version_1(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 head, tail; + s32 len; + + /* pull all fields needed for verification */ + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_TAIL); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_HEAD); + + /* if we are in connect and wanting version 1 then start up and go */ + if (mbx->state == FM10K_STATE_CONNECT) { + if (!mbx->remote) + goto send_reply; + if (mbx->remote != 1) + return FM10K_MBX_ERR_SRC; + + mbx->state = FM10K_STATE_OPEN; + } + + do { + /* abort on message size errors */ + len = fm10k_sm_mbx_receive(hw, mbx, tail); + if (len < 0) + return len; + + /* continue until we have flushed the Rx FIFO */ + } while (len); + +send_reply: + fm10k_sm_mbx_create_reply(hw, mbx, head); + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_process - Process mailbox switch mailbox interrupt + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will process incoming mailbox events and generate mailbox + * replies. It will return a value indicating the number of DWORDs + * transmitted excluding header on success or a negative value on error. 
+ **/ +STATIC s32 fm10k_sm_mbx_process(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + s32 err; + + DEBUGFUNC("fm10k_sm_mbx_process"); + + /* we do not read mailbox if closed */ + if (mbx->state == FM10K_STATE_CLOSED) + return FM10K_SUCCESS; + + /* retrieve data from switch manager */ + err = fm10k_mbx_read(hw, mbx); + if (err) + return err; + + err = fm10k_sm_mbx_validate_fifo_hdr(mbx); + if (err < 0) + goto fifo_err; + + if (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_ERR)) { + fm10k_sm_mbx_process_error(mbx); + goto fifo_err; + } + + switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_VER)) { + case 0: + fm10k_sm_mbx_process_reset(hw, mbx); + break; + case FM10K_SM_MBX_VERSION: + err = fm10k_sm_mbx_process_version_1(hw, mbx); + break; + } + +fifo_err: + if (err < 0) + fm10k_sm_mbx_create_error_msg(mbx, err); + + /* report data to switch manager */ + fm10k_mbx_write(hw, mbx); + + return err; +} + +/** + * fm10k_sm_mbx_init - Initialize mailbox memory for PF/SM mailbox + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * + * This function for now is used to stub out the PF/SM mailbox + **/ +s32 fm10k_sm_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data) +{ + DEBUGFUNC("fm10k_sm_mbx_init"); + UNREFERENCED_1PARAMETER(hw); + + mbx->mbx_reg = FM10K_GMBX; + mbx->mbmem_reg = FM10K_MBMEM_PF(0); + + /* start out in closed state */ + mbx->state = FM10K_STATE_CLOSED; + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + /* start mailbox as timed out and let the reset_hw call + * set the timeout value to begin communications + */ + mbx->timeout = 0; + mbx->usec_delay = FM10K_MBX_INIT_DELAY; + + /* Split buffer for use by Tx/Rx FIFOs */ + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + mbx->mbmem_len = FM10K_MBMEM_PF_XOR; + + /* initialize the FIFOs, sizes are in 4 byte increments */ + fm10k_fifo_init(&mbx->tx, mbx->buffer, FM10K_MBX_TX_BUFFER_SIZE); + fm10k_fifo_init(&mbx->rx, &mbx->buffer[FM10K_MBX_TX_BUFFER_SIZE], + FM10K_MBX_RX_BUFFER_SIZE); + + /* initialize function pointers */ + mbx->ops.connect = fm10k_sm_mbx_connect; + mbx->ops.disconnect = fm10k_sm_mbx_disconnect; + mbx->ops.rx_ready = fm10k_mbx_rx_ready; + mbx->ops.tx_ready = fm10k_mbx_tx_ready; + mbx->ops.tx_complete = fm10k_mbx_tx_complete; + mbx->ops.enqueue_tx = fm10k_mbx_enqueue_tx; + mbx->ops.process = fm10k_sm_mbx_process; + mbx->ops.register_handlers = fm10k_mbx_register_handlers; + + return FM10K_SUCCESS; +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_mbx.h b/lib/librte_pmd_fm10k/base/fm10k_mbx.h new file mode 100644 index 0000000..e54cba4 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_mbx.h @@ -0,0 +1,329 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. 
Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_MBX_H_ +#define _FM10K_MBX_H_ + +/* forward declaration */ +struct fm10k_mbx_info; + +#include "fm10k_type.h" +#include "fm10k_tlv.h" + +/* PF Mailbox Registers */ +#define FM10K_MBMEM(_n) ((_n) + 0x18000) +#define FM10K_MBMEM_VF(_n, _m) (((_n) * 0x10) + (_m) + 0x18000) +#define FM10K_MBMEM_SM(_n) ((_n) + 0x18400) +#define FM10K_MBMEM_PF(_n) ((_n) + 0x18600) +/* XOR provides means of switching from Tx to Rx FIFO */ +#define FM10K_MBMEM_PF_XOR (FM10K_MBMEM_SM(0) ^ FM10K_MBMEM_PF(0)) +#define FM10K_MBX(_n) ((_n) + 0x18800) +#define FM10K_MBX_OWNER 0x00000001 +#define FM10K_MBX_REQ 0x00000002 +#define FM10K_MBX_ACK 0x00000004 +#define FM10K_MBX_REQ_INTERRUPT 0x00000008 +#define FM10K_MBX_ACK_INTERRUPT 0x00000010 +#define FM10K_MBX_INTERRUPT_ENABLE 0x00000020 +#define FM10K_MBX_INTERRUPT_DISABLE 0x00000040 +#define FM10K_MBICR(_n) ((_n) + 0x18840) +#define FM10K_GMBX 0x18842 + +/* VF Mailbox Registers */ +#define FM10K_VFMBX 0x00010 +#define FM10K_VFMBMEM(_n) ((_n) + 0x00020) +#define FM10K_VFMBMEM_LEN 16 +#define FM10K_VFMBMEM_VF_XOR (FM10K_VFMBMEM_LEN / 2) + +/* Delays/timeouts */ +#define FM10K_MBX_DISCONNECT_TIMEOUT 500 +#define FM10K_MBX_POLL_DELAY 19 +#define FM10K_MBX_INT_DELAY 20 + +#define FM10K_WRITE_MBX(hw, reg, value) FM10K_WRITE_REG(hw, reg, value) + +/* PF/VF Mailbox state machine + * + * +----------+ connect() +----------+ + * | CLOSED | --------------> | CONNECT | + * +----------+ +----------+ + * ^ ^ | + * | rcv: rcv: | | rcv: + * | Connect Disconnect | | Connect + * | Disconnect Error | | Data + * | | | + * | | V + * +----------+ disconnect() +----------+ + * |DISCONNECT| <-------------- | OPEN | + * +----------+ +----------+ + * + * The diagram above describes the PF/VF mailbox state machine. There + * are four main states to this machine. + * Closed: This state represents a mailbox that is in a standby state + * with interrupts disabled. In this state the mailbox should not + * read the mailbox or write any data. The only means of exiting + * this state is for the system to make the connect() call for the + * mailbox, it will then transition to the connect state. + * Connect: In this state the mailbox is seeking a connection. It will + * post a connect message with no specified destination and will + * wait for a reply from the other side of the mailbox. This state + * is exited when either a connect with the local mailbox as the + * destination is received or when a data message is received with + * a valid sequence number. 
+ * Open: In this state the mailbox is able to transfer data between the local + * entity and the remote. It will fall back to connect in the event of + * receiving either an error message, or a disconnect message. It will + * transition to disconnect on a call to disconnect(); + * Disconnect: In this state the mailbox is attempting to gracefully terminate + * the connection. It will do so at the first point where it knows + * that the remote endpoint is either done sending, or when the + * remote endpoint has fallen back into connect. + */ +enum fm10k_mbx_state { + FM10K_STATE_CLOSED, + FM10K_STATE_CONNECT, + FM10K_STATE_OPEN, + FM10K_STATE_DISCONNECT, +}; + +/* PF/VF Mailbox header format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | Size/Err_no/CRC | Rsvd0 | Head | Tail | Type | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * The layout above describes the format for the header used in the PF/VF + * mailbox. The header is broken out into the following fields: + * Type: There are 4 supported message types + * 0x8: Data header - used to transport message data + * 0xC: Connect header - used to establish connection + * 0xD: Disconnect header - used to tear down a connection + * 0xE: Error header - used to address message exceptions + * Tail: Tail index for local FIFO + * Tail index actually consists of two parts. The MSB of + * the head is a loop tracker, it is 0 on an even numbered + * loop through the FIFO, and 1 on the odd numbered loops. + * To get the actual mailbox offset based on the tail it + * is necessary to add bit 3 to bit 0 and clear bit 3. This + * gives us a valid range of 0x1 - 0xE. + * Head: Head index for remote FIFO + * Head index follows the same format as the tail index. + * Rsvd0: Reserved 0 portion of the mailbox header + * CRC: Running CRC for all data since connect plus current message header + * Size: Maximum message size - Applies only to connect headers + * The maximum message size is provided during connect to avoid + * jamming the mailbox with messages that do not fit. + * Err_no: Error number - Applies only to error headers + * The error number provides a indication of the type of error + * experienced. 
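The field description above maps directly onto the FM10K_MSG_HDR_FIELD_SET/GET macros defined just below. As a quick standalone illustration of the layout, with the offsets copied from this header and an arbitrary example value:

#include <stdio.h>
#include <stdint.h>

typedef uint32_t u32;
typedef uint16_t u16;

/* field offsets and macros copied from the definitions below */
#define FM10K_MSG_TYPE_SHIFT  0
#define FM10K_MSG_TYPE_SIZE   4
#define FM10K_MSG_TAIL_SHIFT  4
#define FM10K_MSG_TAIL_SIZE   4
#define FM10K_MSG_HEAD_SHIFT  8
#define FM10K_MSG_HEAD_SIZE   4
#define FM10K_MSG_CRC_SHIFT   16
#define FM10K_MSG_CRC_SIZE    16

#define FM10K_MSG_HDR_MASK(name) \
        ((0x1u << FM10K_MSG_##name##_SIZE) - 1)
#define FM10K_MSG_HDR_FIELD_SET(value, name) \
        (((u32)(value) & FM10K_MSG_HDR_MASK(name)) << FM10K_MSG_##name##_SHIFT)
#define FM10K_MSG_HDR_FIELD_GET(value, name) \
        ((u16)((value) >> FM10K_MSG_##name##_SHIFT) & FM10K_MSG_HDR_MASK(name))

#define FM10K_MSG_DATA 0x8

int main(void)
{
        /* a data header with tail 3, head 5 and a CRC of 0xbeef */
        u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DATA, TYPE) |
                  FM10K_MSG_HDR_FIELD_SET(3, TAIL) |
                  FM10K_MSG_HDR_FIELD_SET(5, HEAD) |
                  FM10K_MSG_HDR_FIELD_SET(0xbeef, CRC);

        printf("hdr 0x%08x\n", (unsigned)hdr);  /* 0xbeef0538 */
        printf("type 0x%x tail %u head %u crc 0x%x\n",
               FM10K_MSG_HDR_FIELD_GET(hdr, TYPE),
               FM10K_MSG_HDR_FIELD_GET(hdr, TAIL),
               FM10K_MSG_HDR_FIELD_GET(hdr, HEAD),
               FM10K_MSG_HDR_FIELD_GET(hdr, CRC));
        return 0;
}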
+ */ + +/* macros for retriving and setting header values */ +#define FM10K_MSG_HDR_MASK(name) \ + ((0x1u << FM10K_MSG_##name##_SIZE) - 1) +#define FM10K_MSG_HDR_FIELD_SET(value, name) \ + (((u32)(value) & FM10K_MSG_HDR_MASK(name)) << FM10K_MSG_##name##_SHIFT) +#define FM10K_MSG_HDR_FIELD_GET(value, name) \ + ((u16)((value) >> FM10K_MSG_##name##_SHIFT) & FM10K_MSG_HDR_MASK(name)) + +/* offsets shared between all headers */ +#define FM10K_MSG_TYPE_SHIFT 0 +#define FM10K_MSG_TYPE_SIZE 4 +#define FM10K_MSG_TAIL_SHIFT 4 +#define FM10K_MSG_TAIL_SIZE 4 +#define FM10K_MSG_HEAD_SHIFT 8 +#define FM10K_MSG_HEAD_SIZE 4 +#define FM10K_MSG_RSVD0_SHIFT 12 +#define FM10K_MSG_RSVD0_SIZE 4 + +/* offsets for data/disconnect headers */ +#define FM10K_MSG_CRC_SHIFT 16 +#define FM10K_MSG_CRC_SIZE 16 + +/* offsets for connect headers */ +#define FM10K_MSG_CONNECT_SIZE_SHIFT 16 +#define FM10K_MSG_CONNECT_SIZE_SIZE 16 + +/* offsets for error headers */ +#define FM10K_MSG_ERR_NO_SHIFT 16 +#define FM10K_MSG_ERR_NO_SIZE 16 + +enum fm10k_msg_type { + FM10K_MSG_DATA = 0x8, + FM10K_MSG_CONNECT = 0xC, + FM10K_MSG_DISCONNECT = 0xD, + FM10K_MSG_ERROR = 0xE, +}; + +/* HNI/SM Mailbox FIFO format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-------+-----------------------+-------+-----------------------+ + * | Error | Remote Head |Version| Local Tail | + * +-------+-----------------------+-------+-----------------------+ + * | | + * . Local FIFO Data . + * . . + * +-------+-----------------------+-------+-----------------------+ + * + * The layout above describes the format for the FIFOs used by the host + * network interface and the switch manager to communicate messages back + * and forth. Both the HNI and the switch maintain one such FIFO. The + * layout in memory has the switch manager FIFO followed immediately by + * the HNI FIFO. For this reason I am using just the pointer to the + * HNI FIFO in the mailbox ops as the offset between the two is fixed. + * + * The header for the FIFO is broken out into the following fields: + * Local Tail: Offset into FIFO region for next DWORD to write. + * Version: Version info for mailbox, only values of 0/1 are supported. + * Remote Head: Offset into remote FIFO to indicate how much we have read. + * Error: Error indication, values TBD. + */ + +/* version number for switch manager mailboxes */ +#define FM10K_SM_MBX_VERSION 1 +#define FM10K_SM_MBX_FIFO_LEN (FM10K_MBMEM_PF_XOR - 1) +#define FM10K_SM_MBX_FIFO_HDR_LEN 1 + +/* offsets shared between all SM FIFO headers */ +#define FM10K_MSG_SM_TAIL_SHIFT 0 +#define FM10K_MSG_SM_TAIL_SIZE 12 +#define FM10K_MSG_SM_VER_SHIFT 12 +#define FM10K_MSG_SM_VER_SIZE 4 +#define FM10K_MSG_SM_HEAD_SHIFT 16 +#define FM10K_MSG_SM_HEAD_SIZE 12 +#define FM10K_MSG_SM_ERR_SHIFT 28 +#define FM10K_MSG_SM_ERR_SIZE 4 + +/* All error messages returned by mailbox functions + * The value -511 is 0xFE01 in hex. The idea is to order the errors + * from 0xFE01 - 0xFEFF so error codes are easily visible in the mailbox + * messages. This also helps to avoid error number collisions as Linux + * doesn't appear to use error numbers 256 - 511. 
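The FM10K_MBX_ERR() numbering scheme described above is easy to sanity-check: subtracting 512 places every mailbox error in the -512..-257 range, which reads as 0xFExx when the low 16 bits are viewed in hex. For instance, with the definitions that follow (illustrative only):

#include <stdio.h>
#include <stdint.h>

/* error numbering scheme from fm10k_mbx.h below */
#define FM10K_MBX_ERR(_n)    ((_n) - 512)
#define FM10K_MBX_ERR_NO_MBX FM10K_MBX_ERR(0x01)
#define FM10K_MBX_ERR_CRC    FM10K_MBX_ERR(0x0F)

int main(void)
{
        printf("ERR_NO_MBX = %d (0x%04x)\n", FM10K_MBX_ERR_NO_MBX,
               (uint16_t)FM10K_MBX_ERR_NO_MBX);    /* -511 (0xfe01) */
        printf("ERR_CRC    = %d (0x%04x)\n", FM10K_MBX_ERR_CRC,
               (uint16_t)FM10K_MBX_ERR_CRC);       /* -497 (0xfe0f) */
        return 0;
}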
+ */ +#define FM10K_MBX_ERR(_n) ((_n) - 512) +#define FM10K_MBX_ERR_NO_MBX FM10K_MBX_ERR(0x01) +#define FM10K_MBX_ERR_NO_MSG FM10K_MBX_ERR(0x02) +#define FM10K_MBX_ERR_NO_SPACE FM10K_MBX_ERR(0x03) +#define FM10K_MBX_ERR_LOCK FM10K_MBX_ERR(0x04) +#define FM10K_MBX_ERR_TAIL FM10K_MBX_ERR(0x05) +#define FM10K_MBX_ERR_HEAD FM10K_MBX_ERR(0x06) +#define FM10K_MBX_ERR_DST FM10K_MBX_ERR(0x07) +#define FM10K_MBX_ERR_SRC FM10K_MBX_ERR(0x08) +#define FM10K_MBX_ERR_TYPE FM10K_MBX_ERR(0x09) +#define FM10K_MBX_ERR_LEN FM10K_MBX_ERR(0x0A) +#define FM10K_MBX_ERR_SIZE FM10K_MBX_ERR(0x0B) +#define FM10K_MBX_ERR_BUSY FM10K_MBX_ERR(0x0C) +#define FM10K_MBX_ERR_VALUE FM10K_MBX_ERR(0x0D) +#define FM10K_MBX_ERR_RSVD0 FM10K_MBX_ERR(0x0E) +#define FM10K_MBX_ERR_CRC FM10K_MBX_ERR(0x0F) + +#define FM10K_MBX_CRC_SEED 0xFFFF + +struct fm10k_mbx_ops { + s32 (*connect)(struct fm10k_hw *, struct fm10k_mbx_info *); + void (*disconnect)(struct fm10k_hw *, struct fm10k_mbx_info *); + bool (*rx_ready)(struct fm10k_mbx_info *); + bool (*tx_ready)(struct fm10k_mbx_info *, u16); + bool (*tx_complete)(struct fm10k_mbx_info *); + s32 (*enqueue_tx)(struct fm10k_hw *, struct fm10k_mbx_info *, + const u32 *); + s32 (*process)(struct fm10k_hw *, struct fm10k_mbx_info *); + s32 (*register_handlers)(struct fm10k_mbx_info *, + const struct fm10k_msg_data *); +}; + +struct fm10k_mbx_fifo { + u32 *buffer; + u16 head; + u16 tail; + u16 size; +}; + +/* size of buffer to be stored in mailbox for FIFOs */ +#define FM10K_MBX_TX_BUFFER_SIZE 512 +#define FM10K_MBX_RX_BUFFER_SIZE 128 +#define FM10K_MBX_BUFFER_SIZE \ + (FM10K_MBX_TX_BUFFER_SIZE + FM10K_MBX_RX_BUFFER_SIZE) + +/* minimum and maximum message size in dwords */ +#define FM10K_MBX_MSG_MAX_SIZE \ + ((FM10K_MBX_TX_BUFFER_SIZE - 1) & (FM10K_MBX_RX_BUFFER_SIZE - 1)) +#define FM10K_VFMBX_MSG_MTU ((FM10K_VFMBMEM_LEN / 2) - 1) + +#define FM10K_MBX_INIT_TIMEOUT 2000 /* number of retries on mailbox */ +#define FM10K_MBX_INIT_DELAY 500 /* microseconds between retries */ + +struct fm10k_mbx_info { + /* function pointers for mailbox operations */ + struct fm10k_mbx_ops ops; + const struct fm10k_msg_data *msg_data; + + /* message FIFOs */ + struct fm10k_mbx_fifo rx; + struct fm10k_mbx_fifo tx; + + /* delay for handling timeouts */ + u32 timeout; + u32 usec_delay; + + /* mailbox state info */ + u32 mbx_reg, mbmem_reg, mbx_lock, mbx_hdr; + u16 max_size, mbmem_len; + u16 tail, tail_len, pulled; + u16 head, head_len, pushed; + u16 local, remote; + enum fm10k_mbx_state state; + + /* result of last mailbox test */ + s32 test_result; + + /* statistics */ + u64 tx_busy; + u64 tx_dropped; + u64 tx_messages; + u64 tx_dwords; + u64 rx_messages; + u64 rx_dwords; + u64 rx_parse_err; + + /* Buffer to store messages */ + u32 buffer[FM10K_MBX_BUFFER_SIZE]; +}; + +s32 fm10k_pfvf_mbx_init(struct fm10k_hw *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *, u8); +s32 fm10k_sm_mbx_init(struct fm10k_hw *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *); + +#endif /* _FM10K_MBX_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_osdep.h b/lib/librte_pmd_fm10k/base/fm10k_osdep.h new file mode 100644 index 0000000..96a6875 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_osdep.h @@ -0,0 +1,118 @@ +/******************************************************************************* + +Copyright (c) 2013-2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. 
Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_OSDEP_H_ +#define _FM10K_OSDEP_H_ + +#include <stdint.h> +#include <string.h> +#include <rte_atomic.h> +#include <rte_byteorder.h> +#include <rte_cycles.h> +#include "../fm10k_logs.h" + +/* TODO: this does not look like it should be used... */ +#define ERROR_REPORT2(v1, v2, v3) do { } while (0) + +#define STATIC static +#define DEBUGFUNC(F) DEBUGOUT(F); +#define DEBUGOUT(S, args...) PMD_DRV_LOG_RAW(DEBUG, S, ##args) +#define DEBUGOUT1(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT2(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT3(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT6(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT7(S, args...) 
DEBUGOUT(S, ##args) + +#define FALSE 0 +#define TRUE 1 +#ifndef false +#define false FALSE +#endif +#ifndef true +#define true TRUE +#endif + +typedef uint8_t u8; +typedef int8_t s8; +typedef uint16_t u16; +typedef int16_t s16; +typedef uint32_t u32; +typedef int32_t s32; +typedef int64_t s64; +typedef uint64_t u64; +typedef int bool; + +#ifndef __le16 +#define __le16 u16 +#define __le32 u32 +#define __le64 u64 +#endif +#ifndef __be16 +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 +#endif + +/* offsets are WORD offsets, not BYTE offsets */ +#define FM10K_WRITE_REG(hw, reg, val) \ + ((((volatile uint32_t *)(hw)->hw_addr)[(reg)]) = ((uint32_t)(val))) +#define FM10K_READ_REG(hw, reg) \ + (((volatile uint32_t *)(hw)->hw_addr)[(reg)]) +#define FM10K_WRITE_FLUSH(a) FM10K_READ_REG(a, FM10K_CTRL) + +#define FM10K_PCI_REG(reg) (*((volatile uint32_t *)(reg))) + +#define FM10K_PCI_REG_WRITE(reg, value) do { \ + FM10K_PCI_REG((reg)) = (value); \ +} while (0) + +/* not implemented */ +#define FM10K_READ_PCI_WORD(hw, reg) 0 + +#define FM10K_WRITE_MBX(hw, reg, value) FM10K_WRITE_REG(hw, reg, value) +#define FM10K_READ_MBX(hw, reg) FM10K_READ_REG(hw, reg) + +#define FM10K_LE16_TO_CPU rte_le_to_cpu_16 +#define FM10K_LE32_TO_CPU rte_le_to_cpu_32 +#define FM10K_CPU_TO_LE32 rte_cpu_to_le_32 +#define FM10K_CPU_TO_LE16 rte_cpu_to_le_16 + +#define FM10K_RMB rte_rmb +#define FM10K_WMB rte_wmb + +#define usec_delay rte_delay_us + +#define FM10K_REMOVED(hw_addr) (!(hw_addr)) + + +#endif /* _FM10K_OSDEP_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_pf.c b/lib/librte_pmd_fm10k/base/fm10k_pf.c new file mode 100644 index 0000000..87ed7d4 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_pf.c @@ -0,0 +1,1877 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#include "fm10k_pf.h" +#include "fm10k_vf.h" + +/** + * fm10k_reset_hw_pf - PF hardware reset + * @hw: pointer to hardware structure + * + * This function should return the hardware to a state similar to the + * one it is in after being powered on. + **/ +STATIC s32 fm10k_reset_hw_pf(struct fm10k_hw *hw) +{ + s32 err; + u32 reg; + u16 i; + + DEBUGFUNC("fm10k_reset_hw_pf"); + + /* Disable interrupts */ + FM10K_WRITE_REG(hw, FM10K_EIMR, FM10K_EIMR_DISABLE(ALL)); + + /* Lock ITR2 reg 0 into itself and disable interrupt moderation */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), 0); + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, 0); + + /* We assume here Tx and Rx queue 0 are owned by the PF */ + + /* Shut off VF access to their queues forcing them to queue 0 */ + for (i = 0; i < FM10K_TQMAP_TABLE_SIZE; i++) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(i), 0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(i), 0); + } + + /* shut down all rings */ + err = fm10k_disable_queues_generic(hw, FM10K_MAX_QUEUES); + if (err) + return err; + + /* Verify that DMA is no longer active */ + reg = FM10K_READ_REG(hw, FM10K_DMA_CTRL); + if (reg & (FM10K_DMA_CTRL_TX_ACTIVE | FM10K_DMA_CTRL_RX_ACTIVE)) + return FM10K_ERR_DMA_PENDING; + + /* verify the switch is ready for reset */ + reg = FM10K_READ_REG(hw, FM10K_DMA_CTRL2); + if (!(reg & FM10K_DMA_CTRL2_SWITCH_READY)) + goto out; + + /* Inititate data path reset */ + reg |= FM10K_DMA_CTRL_DATAPATH_RESET; + FM10K_WRITE_REG(hw, FM10K_DMA_CTRL, reg); + + /* Flush write and allow 100us for reset to complete */ + FM10K_WRITE_FLUSH(hw); + usec_delay(FM10K_RESET_TIMEOUT); + + /* Verify we made it out of reset */ + reg = FM10K_READ_REG(hw, FM10K_IP); + if (!(reg & FM10K_IP_NOTINRESET)) + err = FM10K_ERR_RESET_FAILED; + +out: + return err; +} + +/** + * fm10k_is_ari_hierarchy_pf - Indicate ARI hierarchy support + * @hw: pointer to hardware structure + * + * Looks at the ARI hierarchy bit to determine whether ARI is supported or not. 
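One thing worth keeping in mind when reading fm10k_reset_hw_pf() and the rest of this file: the FM10K_READ_REG()/FM10K_WRITE_REG() macros in fm10k_osdep.h above index hw_addr as an array of 32-bit words, so register "offsets" in this driver are dword indices rather than byte offsets. A minimal standalone illustration of the same access pattern (the register number and value are made up, and the macro names here are examples, not part of the driver):

#include <stdio.h>
#include <stdint.h>

/* Same shape as the osdep macros above: 'reg' is a dword index into the
 * BAR, i.e. register N lives at byte offset N * 4.
 */
#define EX_WRITE_REG(base, reg, val) \
        (((volatile uint32_t *)(base))[(reg)] = (uint32_t)(val))
#define EX_READ_REG(base, reg) \
        (((volatile uint32_t *)(base))[(reg)])

int main(void)
{
        uint32_t fake_bar[0x100] = { 0 };       /* stand-in for hw->hw_addr */

        EX_WRITE_REG(fake_bar, 0x10, 0xdeadbeef);

        printf("reg 0x10 = 0x%08x (byte offset 0x%x)\n",
               (unsigned)EX_READ_REG(fake_bar, 0x10), 0x10 * 4);
        return 0;
}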
+ **/ +STATIC bool fm10k_is_ari_hierarchy_pf(struct fm10k_hw *hw) +{ + u16 sriov_ctrl = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_SRIOV_CTRL); + + DEBUGFUNC("fm10k_is_ari_hierarchy_pf"); + + return !!(sriov_ctrl & FM10K_PCIE_SRIOV_CTRL_VFARI); +} + +/** + * fm10k_init_hw_pf - PF hardware initialization + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_init_hw_pf(struct fm10k_hw *hw) +{ + u32 dma_ctrl, txqctl; + u16 i; + + DEBUGFUNC("fm10k_init_hw_pf"); + + /* Establish default VSI as valid */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(fm10k_dglort_default), 0); + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(fm10k_dglort_default), + FM10K_DGLORTMAP_ANY); + + /* Invalidate all other GLORT entries */ + for (i = 1; i < FM10K_DGLORT_COUNT; i++) + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(i), FM10K_DGLORTMAP_NONE); + + /* reset ITR2(0) to point to itself */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), 0); + + /* reset VF ITR2(0) to point to 0 avoid PF registers */ + FM10K_WRITE_REG(hw, FM10K_ITR2(FM10K_ITR_REG_COUNT_PF), 0); + + /* loop through all PF ITR2 registers pointing them to the previous */ + for (i = 1; i < FM10K_ITR_REG_COUNT_PF; i++) + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - 1); + + /* Enable interrupt moderator if not already enabled */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, FM10K_INT_CTRL_ENABLEMODERATOR); + + /* compute the default txqctl configuration */ + txqctl = FM10K_TXQCTL_PF | FM10K_TXQCTL_UNLIMITED_BW | + (hw->mac.default_vid << FM10K_TXQCTL_VID_SHIFT); + + for (i = 0; i < FM10K_MAX_QUEUES; i++) { + /* configure rings for 256 Queue / 32 Descriptor cache mode */ + FM10K_WRITE_REG(hw, FM10K_TQDLOC(i), + (i * FM10K_TQDLOC_BASE_32_DESC) | + FM10K_TQDLOC_SIZE_32_DESC); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), txqctl); + + /* configure rings to provide TPH processing hints */ + FM10K_WRITE_REG(hw, FM10K_TPH_TXCTRL(i), + FM10K_TPH_TXCTRL_DESC_TPHEN | + FM10K_TPH_TXCTRL_DESC_RROEN | + FM10K_TPH_TXCTRL_DESC_WROEN | + FM10K_TPH_TXCTRL_DATA_RROEN); + FM10K_WRITE_REG(hw, FM10K_TPH_RXCTRL(i), + FM10K_TPH_RXCTRL_DESC_TPHEN | + FM10K_TPH_RXCTRL_DESC_RROEN | + FM10K_TPH_RXCTRL_DATA_WROEN | + FM10K_TPH_RXCTRL_HDR_WROEN); + } + + /* set max hold interval to align with 1.024 usec in all modes */ + switch (hw->bus.speed) { + case fm10k_bus_speed_2500: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN1; + break; + case fm10k_bus_speed_5000: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN2; + break; + case fm10k_bus_speed_8000: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN3; + break; + default: + dma_ctrl = 0; + break; + } + + /* Configure TSO flags */ + FM10K_WRITE_REG(hw, FM10K_DTXTCPFLGL, FM10K_TSO_FLAGS_LOW); + FM10K_WRITE_REG(hw, FM10K_DTXTCPFLGH, FM10K_TSO_FLAGS_HI); + + /* Enable DMA engine + * Set Rx Descriptor size to 32 + * Set Minimum MSS to 64 + * Set Maximum number of Rx queues to 256 / 32 Descriptor + */ + dma_ctrl |= FM10K_DMA_CTRL_TX_ENABLE | FM10K_DMA_CTRL_RX_ENABLE | + FM10K_DMA_CTRL_RX_DESC_SIZE | FM10K_DMA_CTRL_MINMSS_64 | + FM10K_DMA_CTRL_32_DESC; + + FM10K_WRITE_REG(hw, FM10K_DMA_CTRL, dma_ctrl); + + /* record maximum queue count, we limit ourselves to 128 */ + hw->mac.max_queues = FM10K_MAX_QUEUES_PF; + + /* We support either 64 VFs or 7 VFs depending on if we have ARI */ + hw->iov.total_vfs = fm10k_is_ari_hierarchy_pf(hw) ? 
64 : 7; + + return FM10K_SUCCESS; +} + +/** + * fm10k_is_slot_appropriate_pf - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. + **/ +STATIC bool fm10k_is_slot_appropriate_pf(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_is_slot_appropriate_pf"); + + return (hw->bus.speed == hw->bus_caps.speed) && + (hw->bus.width == hw->bus_caps.width); +} + +/** + * fm10k_update_vlan_pf - Update status of VLAN ID in VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @vsi: Index indicating VF ID or PF ID in table + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for the corresponding function. In addition to the + * standard set/clear that supports one bit a multi-bit write is + * supported to set 64 bits at a time. + **/ +STATIC s32 fm10k_update_vlan_pf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set) +{ + u32 vlan_table, reg, mask, bit, len; + + /* verify the VSI index is valid */ + if (vsi > FM10K_VLAN_TABLE_VSI_MAX) + return FM10K_ERR_PARAM; + + /* VLAN multi-bit write: + * The multi-bit write has several parts to it. + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | RSVD0 | Length |C|RSVD0| VLAN ID | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * VLAN ID: Vlan Starting value + * RSVD0: Reserved section, must be 0 + * C: Flag field, 0 is set, 1 is clear (Used in VF VLAN message) + * Length: Number of times to repeat the bit being set + */ + len = vid >> 16; + vid = (vid << 17) >> 17; + + /* verify the reserved 0 fields are 0 */ + if (len >= FM10K_VLAN_TABLE_VID_MAX || + vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* Loop through the table updating all required VLANs */ + for (reg = FM10K_VLAN_TABLE(vsi, vid / 32), bit = vid % 32; + len < FM10K_VLAN_TABLE_VID_MAX; + len -= 32 - bit, reg++, bit = 0) { + /* record the initial state of the register */ + vlan_table = FM10K_READ_REG(hw, reg); + + /* truncate mask if we are at the start or end of the run */ + mask = (~(u32)0 >> ((len < 31) ? 31 - len : 0)) << bit; + + /* make necessary modifications to the register */ + mask &= set ? ~vlan_table : vlan_table; + if (mask) + FM10K_WRITE_REG(hw, reg, vlan_table ^ mask); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_read_mac_addr_pf - Read device MAC address + * @hw: pointer to the HW structure + * + * Reads the device MAC address from the SM_AREA and stores the value. 
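The multi-bit write format documented above means one fm10k_update_vlan_pf() call can cover a run of consecutive VLAN IDs: the low half of the vid argument carries the starting VLAN and the upper half carries the repeat length, with 0 meaning a single VLAN. A tiny packing helper, written here only to illustrate the encoding and not part of the driver (the set/clear flag used by the VF VLAN message is outside its scope):

#include <stdint.h>

typedef uint32_t u32;
typedef uint16_t u16;

/* Illustrative helper: encode a run of 'count' consecutive VLAN IDs
 * starting at 'first_vid' for fm10k_update_vlan_pf(). Per the loop in
 * that function, the length field holds count - 1 (0 == one VLAN).
 */
static u32 example_vlan_run(u16 first_vid, u16 count)
{
        u32 len = count ? (u32)count - 1 : 0;

        return (len << 16) | first_vid;
}

/* e.g. fm10k_update_vlan_pf(hw, example_vlan_run(100, 64), 0, true)
 * would ask for VLAN IDs 100-163 to be set in the vsi 0 table, assuming
 * the reserved-field checks in the function pass.
 */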
+ **/ +STATIC s32 fm10k_read_mac_addr_pf(struct fm10k_hw *hw) +{ + u8 perm_addr[ETH_ALEN]; + u32 serial_num; + int i; + + DEBUGFUNC("fm10k_read_mac_addr_pf"); + + serial_num = FM10K_READ_REG(hw, FM10K_SM_AREA(1)); + + /* last byte should be all 1's */ + if ((~serial_num) << 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[0] = (u8)(serial_num >> 24); + perm_addr[1] = (u8)(serial_num >> 16); + perm_addr[2] = (u8)(serial_num >> 8); + + serial_num = FM10K_READ_REG(hw, FM10K_SM_AREA(0)); + + /* first byte should be all 1's */ + if ((~serial_num) >> 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[3] = (u8)(serial_num >> 16); + perm_addr[4] = (u8)(serial_num >> 8); + perm_addr[5] = (u8)(serial_num); + + for (i = 0; i < ETH_ALEN; i++) { + hw->mac.perm_addr[i] = perm_addr[i]; + hw->mac.addr[i] = perm_addr[i]; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_glort_valid_pf - Validate that the provided glort is valid + * @hw: pointer to the HW structure + * @glort: base glort to be validated + * + * This function will return an error if the provided glort is invalid + **/ +bool fm10k_glort_valid_pf(struct fm10k_hw *hw, u16 glort) +{ + glort &= hw->mac.dglort_map >> FM10K_DGLORTMAP_MASK_SHIFT; + + return glort == (hw->mac.dglort_map & FM10K_DGLORTMAP_NONE); +} + +/** + * fm10k_update_uc_addr_pf - Update device unicast addresss + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure + * + * This function generates a message to the Switch API requesting + * that the given logical port add/remove the given L2 MAC/VLAN address. + **/ +STATIC s32 fm10k_update_xc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + struct fm10k_mac_update mac_update; + u32 msg[5]; + + DEBUGFUNC("fm10k_update_xc_addr_pf"); + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* drop upper 4 bits of VLAN ID */ + vid = (vid << 4) >> 4; + + /* record fields */ + mac_update.mac_lower = FM10K_CPU_TO_LE32(((u32)mac[2] << 24) | + ((u32)mac[3] << 16) | + ((u32)mac[4] << 8) | + ((u32)mac[5])); + mac_update.mac_upper = FM10K_CPU_TO_LE16(((u32)mac[0] << 8) | + ((u32)mac[1])); + mac_update.vlan = FM10K_CPU_TO_LE16(vid); + mac_update.glort = FM10K_CPU_TO_LE16(glort); + mac_update.action = add ? 0 : 1; + mac_update.flags = flags; + + /* populate mac_update fields */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_UPDATE_MAC_FWD_RULE); + fm10k_tlv_attr_put_le_struct(msg, FM10K_PF_ATTR_ID_MAC_UPDATE, + &mac_update, sizeof(mac_update)); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_uc_addr_pf - Update device unicast addresss + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure + * + * This function is used to add or remove unicast addresses for + * the PF. 
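+ * A minimal usage sketch (glort and address values are illustrative
+ * only): a call such as
+ *   hw->mac.ops.update_uc_addr(hw, glort, addr, 0, true, 0);
+ * validates the address and forwards to fm10k_update_xc_addr_pf(),
+ * which queues an UPDATE_MAC_FWD_RULE request on the PF mailbox; the
+ * same call with 'false' removes the address again.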
+ **/ +STATIC s32 fm10k_update_uc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + DEBUGFUNC("fm10k_update_uc_addr_pf"); + + /* verify MAC address is valid */ + if (!FM10K_IS_VALID_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + return fm10k_update_xc_addr_pf(hw, glort, mac, vid, add, flags); +} + +/** + * fm10k_update_mc_addr_pf - Update device multicast addresses + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses for + * the PF. + **/ +STATIC s32 fm10k_update_mc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add) +{ + DEBUGFUNC("fm10k_update_mc_addr_pf"); + + /* verify multicast address is valid */ + if (!FM10K_IS_MULTICAST_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + return fm10k_update_xc_addr_pf(hw, glort, mac, vid, add, 0); +} + +/** + * fm10k_update_xcast_mode_pf - Request update of multicast mode + * @hw: pointer to hardware structure + * @glort: base resource tag for this request + * @mode: integer value indicating mode being requested + * + * This function will attempt to request a higher mode for the port + * so that it can enable either multicast, multicast promiscuous, or + * promiscuous mode of operation. + **/ +STATIC s32 fm10k_update_xcast_mode_pf(struct fm10k_hw *hw, u16 glort, u8 mode) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3], xcast_mode; + + DEBUGFUNC("fm10k_update_xcast_mode_pf"); + + if (mode > FM10K_XCAST_MODE_NONE) + return FM10K_ERR_PARAM; + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* write xcast mode as a single u32 value, + * lower 16 bits: glort + * upper 16 bits: mode + */ + xcast_mode = ((u32)mode << 16) | glort; + + /* generate message requesting to change xcast mode */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_XCAST_MODES); + fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_XCAST_MODE, xcast_mode); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_int_moderator_pf - Update interrupt moderator linked list + * @hw: pointer to hardware structure + * + * This function walks through the MSI-X vector table to determine the + * number of active interrupts and based on that information updates the + * interrupt moderator linked list. 
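+ * For illustration (vector numbers assumed): if PF vectors 1-3 are
+ * the only ones left unmasked, the scan below stops at i = 3,
+ * VFITR2[0] (i.e. ITR2[FM10K_ITR_REG_COUNT_PF]) is pointed at vector
+ * 3, and ITR2[0] is pointed there as well when no VFs are allocated.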
+ **/ +STATIC void fm10k_update_int_moderator_pf(struct fm10k_hw *hw) +{ + u32 i; + + /* Disable interrupt moderator */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, 0); + + /* loop through PF from last to first looking enabled vectors */ + for (i = FM10K_ITR_REG_COUNT_PF - 1; i; i--) { + if (!FM10K_READ_REG(hw, FM10K_MSIX_VECTOR_MASK(i))) + break; + } + + /* always reset VFITR2[0] to point to last enabled PF vector*/ + FM10K_WRITE_REG(hw, FM10K_ITR2(FM10K_ITR_REG_COUNT_PF), i); + + /* reset ITR2[0] to point to last enabled PF vector */ + if (!hw->iov.num_vfs) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), i); + + /* Enable interrupt moderator */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, FM10K_INT_CTRL_ENABLEMODERATOR); +} + +/** + * fm10k_update_lport_state_pf - Notify the switch of a change in port state + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @count: number of logical ports being updated + * @enable: boolean value indicating enable or disable + * + * This function is used to add/remove a logical port from the switch. + **/ +STATIC s32 fm10k_update_lport_state_pf(struct fm10k_hw *hw, u16 glort, + u16 count, bool enable) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3], lport_msg; + + DEBUGFUNC("fm10k_lport_state_pf"); + + /* do nothing if we are being asked to create or destroy 0 ports */ + if (!count) + return FM10K_SUCCESS; + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* construct the lport message from the 2 pieces of data we have */ + lport_msg = ((u32)count << 16) | glort; + + /* generate lport create/delete message */ + fm10k_tlv_msg_init(msg, enable ? FM10K_PF_MSG_ID_LPORT_CREATE : + FM10K_PF_MSG_ID_LPORT_DELETE); + fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_PORT, lport_msg); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_configure_dglort_map_pf - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned. 
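+ * A small worked example (field values chosen only for illustration):
+ * with rss_l = 2, pc_l = 0, vsi_l = 0, queue_l = 0 and queue_b = 8,
+ * the SGLORT loop below programs 1 << (rss_l + pc_l) = 4 queues
+ * (indices 8-11) with the configured glort, after which DGLORTDEC and
+ * DGLORTMAP for dglort->idx are built from the same fields.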
+ **/ +STATIC s32 fm10k_configure_dglort_map_pf(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + u16 glort, queue_count, vsi_count, pc_count; + u16 vsi, queue, pc, q_idx; + u32 txqctl, dglortdec, dglortmap; + + /* verify the dglort pointer */ + if (!dglort) + return FM10K_ERR_PARAM; + + /* verify the dglort values */ + if ((dglort->idx > 7) || (dglort->rss_l > 7) || (dglort->pc_l > 3) || + (dglort->vsi_l > 6) || (dglort->vsi_b > 64) || + (dglort->queue_l > 8) || (dglort->queue_b >= 256)) + return FM10K_ERR_PARAM; + + /* determine count of VSIs and queues */ + queue_count = 1 << (dglort->rss_l + dglort->pc_l); + vsi_count = 1 << (dglort->vsi_l + dglort->queue_l); + glort = dglort->glort; + q_idx = dglort->queue_b; + + /* configure SGLORT for queues */ + for (vsi = 0; vsi < vsi_count; vsi++, glort++) { + for (queue = 0; queue < queue_count; queue++, q_idx++) { + if (q_idx >= FM10K_MAX_QUEUES) + break; + + FM10K_WRITE_REG(hw, FM10K_TX_SGLORT(q_idx), glort); + FM10K_WRITE_REG(hw, FM10K_RX_SGLORT(q_idx), glort); + } + } + + /* determine count of PCs and queues */ + queue_count = 1 << (dglort->queue_l + dglort->rss_l + dglort->vsi_l); + pc_count = 1 << dglort->pc_l; + + /* configure PC for Tx queues */ + for (pc = 0; pc < pc_count; pc++) { + q_idx = pc + dglort->queue_b; + for (queue = 0; queue < queue_count; queue++) { + if (q_idx >= FM10K_MAX_QUEUES) + break; + + txqctl = FM10K_READ_REG(hw, FM10K_TXQCTL(q_idx)); + txqctl &= ~FM10K_TXQCTL_PC_MASK; + txqctl |= pc << FM10K_TXQCTL_PC_SHIFT; + FM10K_WRITE_REG(hw, FM10K_TXQCTL(q_idx), txqctl); + + q_idx += pc_count; + } + } + + /* configure DGLORTDEC */ + dglortdec = ((u32)(dglort->rss_l) << FM10K_DGLORTDEC_RSSLENGTH_SHIFT) | + ((u32)(dglort->queue_b) << FM10K_DGLORTDEC_QBASE_SHIFT) | + ((u32)(dglort->pc_l) << FM10K_DGLORTDEC_PCLENGTH_SHIFT) | + ((u32)(dglort->vsi_b) << FM10K_DGLORTDEC_VSIBASE_SHIFT) | + ((u32)(dglort->vsi_l) << FM10K_DGLORTDEC_VSILENGTH_SHIFT) | + ((u32)(dglort->queue_l)); + if (dglort->inner_rss) + dglortdec |= FM10K_DGLORTDEC_INNERRSS_ENABLE; + + /* configure DGLORTMAP */ + dglortmap = (dglort->idx == fm10k_dglort_default) ? + FM10K_DGLORTMAP_ANY : FM10K_DGLORTMAP_ZERO; + dglortmap <<= dglort->vsi_l + dglort->queue_l + dglort->shared_l; + dglortmap |= dglort->glort; + + /* write values to hardware */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(dglort->idx), dglortdec); + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(dglort->idx), dglortmap); + + return FM10K_SUCCESS; +} + +u16 fm10k_queues_per_pool(struct fm10k_hw *hw) +{ + u16 num_pools = hw->iov.num_pools; + + return (num_pools > 32) ? 2 : (num_pools > 16) ? 4 : (num_pools > 8) ? + 8 : FM10K_MAX_QUEUES_POOL; +} + +u16 fm10k_vf_queue_index(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 num_vfs = hw->iov.num_vfs; + u16 vf_q_idx = FM10K_MAX_QUEUES; + + vf_q_idx -= fm10k_queues_per_pool(hw) * (num_vfs - vf_idx); + + return vf_q_idx; +} + +STATIC u16 fm10k_vectors_per_pool(struct fm10k_hw *hw) +{ + u16 num_pools = hw->iov.num_pools; + + return (num_pools > 32) ? 8 : (num_pools > 16) ? 
16 : + FM10K_MAX_VECTORS_POOL; +} + +STATIC u16 fm10k_vf_vector_index(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 vf_v_idx = FM10K_MAX_VECTORS_PF; + + vf_v_idx += fm10k_vectors_per_pool(hw) * vf_idx; + + return vf_v_idx; +} + +/** + * fm10k_iov_assign_resources_pf - Assign pool resources for virtualization + * @hw: pointer to the HW structure + * @num_vfs: number of VFs to be allocated + * @num_pools: number of virtualization pools to be allocated + * + * Allocates queues and traffic classes to virtualization entities to prepare + * the PF for SR-IOV and VMDq + **/ +STATIC s32 fm10k_iov_assign_resources_pf(struct fm10k_hw *hw, u16 num_vfs, + u16 num_pools) +{ + u16 qmap_stride, qpp, vpp, vf_q_idx, vf_q_idx0, qmap_idx; + u32 vid = hw->mac.default_vid << FM10K_TXQCTL_VID_SHIFT; + int i, j; + + /* hardware only supports up to 64 pools */ + if (num_pools > 64) + return FM10K_ERR_PARAM; + + /* the number of VFs cannot exceed the number of pools */ + if ((num_vfs > num_pools) || (num_vfs > hw->iov.total_vfs)) + return FM10K_ERR_PARAM; + + /* record number of virtualization entities */ + hw->iov.num_vfs = num_vfs; + hw->iov.num_pools = num_pools; + + /* determine qmap offsets and counts */ + qmap_stride = (num_vfs > 8) ? 32 : 256; + qpp = fm10k_queues_per_pool(hw); + vpp = fm10k_vectors_per_pool(hw); + + /* calculate starting index for queues */ + vf_q_idx = fm10k_vf_queue_index(hw, 0); + qmap_idx = 0; + + /* establish TCs with -1 credits and no quanta to prevent transmit */ + for (i = 0; i < num_vfs; i++) { + FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(i), 0); + FM10K_WRITE_REG(hw, FM10K_TC_RATE(i), 0); + FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(i), + FM10K_TC_CREDIT_CREDIT_MASK); + } + + /* zero out all mbmem registers */ + for (i = FM10K_VFMBMEM_LEN * num_vfs; i--;) + FM10K_WRITE_REG(hw, FM10K_MBMEM(i), 0); + + /* clear event notification of VF FLR */ + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(0), ~0); + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(1), ~0); + + /* loop through unallocated rings assigning them back to PF */ + for (i = FM10K_MAX_QUEUES_PF; i < vf_q_idx; i++) { + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), FM10K_TXQCTL_PF | vid); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), FM10K_RXQCTL_PF); + } + + /* PF should have already updated VFITR2[0] */ + + /* update all ITR registers to flow to VFITR2[0] */ + for (i = FM10K_ITR_REG_COUNT_PF + 1; i < FM10K_ITR_REG_COUNT; i++) { + if (!(i & (vpp - 1))) + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - vpp); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - 1); + } + + /* update PF ITR2[0] to reference the last vector */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), + fm10k_vf_vector_index(hw, num_vfs - 1)); + + /* loop through rings populating rings and TCs */ + for (i = 0; i < num_vfs; i++) { + /* record index for VF queue 0 for use in end of loop */ + vf_q_idx0 = vf_q_idx; + + for (j = 0; j < qpp; j++, qmap_idx++, vf_q_idx++) { + /* assign VF and locked TC to queues */ + FM10K_WRITE_REG(hw, FM10K_TXDCTL(vf_q_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(vf_q_idx), + (i << FM10K_TXQCTL_TC_SHIFT) | i | + FM10K_TXQCTL_VF | vid); + FM10K_WRITE_REG(hw, FM10K_RXDCTL(vf_q_idx), + FM10K_RXDCTL_WRITE_BACK_MIN_DELAY | + FM10K_RXDCTL_DROP_ON_EMPTY); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(vf_q_idx), + FM10K_RXQCTL_VF | + (i << FM10K_RXQCTL_VF_SHIFT)); + + /* map queue pair to VF */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), vf_q_idx); + } + + /* repeat the first ring for all of the remaining VF rings */ + for (; j 
< qmap_stride; j++, qmap_idx++) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), vf_q_idx0); + } + } + + /* loop through remaining indexes assigning all to queue 0 */ + while (qmap_idx < FM10K_TQMAP_TABLE_SIZE) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), 0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), 0); + qmap_idx++; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_configure_tc_pf - Configure the shaping group for VF + * @hw: pointer to the HW structure + * @vf_idx: index of VF receiving GLORT + * @rate: Rate indicated in Mb/s + * + * Configured the TC for a given VF to allow only up to a given number + * of Mb/s of outgoing Tx throughput. + **/ +STATIC s32 fm10k_iov_configure_tc_pf(struct fm10k_hw *hw, u16 vf_idx, int rate) +{ + /* configure defaults */ + u32 interval = FM10K_TC_RATE_INTERVAL_4US_GEN3; + u32 tc_rate = FM10K_TC_RATE_QUANTA_MASK; + + /* verify vf is in range */ + if (vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* set interval to align with 4.096 usec in all modes */ + switch (hw->bus.speed) { + case fm10k_bus_speed_2500: + interval = FM10K_TC_RATE_INTERVAL_4US_GEN1; + break; + case fm10k_bus_speed_5000: + interval = FM10K_TC_RATE_INTERVAL_4US_GEN2; + break; + default: + break; + } + + if (rate) { + if (rate > FM10K_VF_TC_MAX || rate < FM10K_VF_TC_MIN) + return FM10K_ERR_PARAM; + + /* The quanta is measured in Bytes per 4.096 or 8.192 usec + * The rate is provided in Mbits per second + * To tralslate from rate to quanta we need to multiply the + * rate by 8.192 usec and divide by 8 bits/byte. To avoid + * dealing with floating point we can round the values up + * to the nearest whole number ratio which gives us 128 / 125. + */ + tc_rate = (rate * 128) / 125; + + /* try to keep the rate limiting accurate by increasing + * the number of credits and interval for rates less than 4Gb/s + */ + if (rate < 4000) + interval <<= 1; + else + tc_rate >>= 1; + } + + /* update rate limiter with new values */ + FM10K_WRITE_REG(hw, FM10K_TC_RATE(vf_idx), tc_rate | interval); + FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(vf_idx), FM10K_TC_MAXCREDIT_64K); + FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(vf_idx), FM10K_TC_MAXCREDIT_64K); + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_assign_int_moderator_pf - Add VF interrupts to moderator list + * @hw: pointer to the HW structure + * @vf_idx: index of VF receiving GLORT + * + * Update the interrupt moderator linked list to include any MSI-X + * interrupts which the VF has enabled in the MSI-X vector table. 
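+ * As a sketch (pool size assumed for illustration): with 8 vectors
+ * per pool the VF owns indices vf_v_idx through vf_v_idx + 7; the
+ * loop below scans that range from the top for the first unmasked
+ * vector and points the ITR2 entry that follows the pool (or ITR2[0]
+ * for the last VF) at it.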
+ **/ +STATIC s32 fm10k_iov_assign_int_moderator_pf(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 vf_v_idx, vf_v_limit, i; + + /* verify vf is in range */ + if (vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* determine vector offset and count*/ + vf_v_idx = fm10k_vf_vector_index(hw, vf_idx); + vf_v_limit = vf_v_idx + fm10k_vectors_per_pool(hw); + + /* search for first vector that is not masked */ + for (i = vf_v_limit - 1; i > vf_v_idx; i--) { + if (!FM10K_READ_REG(hw, FM10K_MSIX_VECTOR_MASK(i))) + break; + } + + /* reset linked list so it now includes our active vectors */ + if (vf_idx == (hw->iov.num_vfs - 1)) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), i); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_limit), i); + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_assign_default_mac_vlan_pf - Assign a MAC and VLAN to VF + * @hw: pointer to the HW structure + * @vf_info: pointer to VF information structure + * + * Assign a MAC address and default VLAN to a VF and notify it of the update + **/ +STATIC s32 fm10k_iov_assign_default_mac_vlan_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u16 qmap_stride, queues_per_pool, vf_q_idx, timeout, qmap_idx, i; + u32 msg[4], txdctl, txqctl, tdbal = 0, tdbah = 0; + s32 err = FM10K_SUCCESS; + u16 vf_idx, vf_vid; + + /* verify vf is in range */ + if (!vf_info || vf_info->vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* determine qmap offsets and counts */ + qmap_stride = (hw->iov.num_vfs > 8) ? 32 : 256; + queues_per_pool = fm10k_queues_per_pool(hw); + + /* calculate starting index for queues */ + vf_idx = vf_info->vf_idx; + vf_q_idx = fm10k_vf_queue_index(hw, vf_idx); + qmap_idx = qmap_stride * vf_idx; + + /* MAP Tx queue back to 0 temporarily, and disable it */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(vf_q_idx), 0); + + /* determine correct default VLAN ID */ + if (vf_info->pf_vid) + vf_vid = vf_info->pf_vid | FM10K_VLAN_CLEAR; + else + vf_vid = vf_info->sw_vid; + + /* generate MAC_ADDR request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_DEFAULT_MAC, + vf_info->mac, vf_vid); + + /* load onto outgoing mailbox, ignore any errors on enqueue */ + if (vf_info->mbx.ops.enqueue_tx) + vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg); + + /* verify ring has disabled before modifying base address registers */ + txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(vf_q_idx)); + for (timeout = 0; txdctl & FM10K_TXDCTL_ENABLE; timeout++) { + /* limit ourselves to a 1ms timeout */ + if (timeout == 10) { + err = FM10K_ERR_DMA_PENDING; + goto err_out; + } + + usec_delay(100); + txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(vf_q_idx)); + } + + /* Update base address registers to contain MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac)) { + tdbal = (((u32)vf_info->mac[3]) << 24) | + (((u32)vf_info->mac[4]) << 16) | + (((u32)vf_info->mac[5]) << 8); + + tdbah = (((u32)0xFF) << 24) | + (((u32)vf_info->mac[0]) << 16) | + (((u32)vf_info->mac[1]) << 8) | + ((u32)vf_info->mac[2]); + } + + /* Record the base address into queue 0 */ + FM10K_WRITE_REG(hw, FM10K_TDBAL(vf_q_idx), tdbal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(vf_q_idx), tdbah); + +err_out: + /* configure Queue control register */ + txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) & + FM10K_TXQCTL_VID_MASK; + txqctl |= (vf_idx << FM10K_TXQCTL_TC_SHIFT) | + FM10K_TXQCTL_VF | vf_idx; + + /* assign VID */ + for (i = 0; i < queues_per_pool; i++) + FM10K_WRITE_REG(hw, FM10K_TXQCTL(vf_q_idx + i), 
txqctl); + + /* restore the queue back to VF ownership */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx); + return err; +} + +/** + * fm10k_iov_reset_resources_pf - Reassign queues and interrupts to a VF + * @hw: pointer to the HW structure + * @vf_info: pointer to VF information structure + * + * Reassign the interrupts and queues to a VF following an FLR + **/ +STATIC s32 fm10k_iov_reset_resources_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u16 qmap_stride, queues_per_pool, vf_q_idx, qmap_idx; + u32 tdbal = 0, tdbah = 0, txqctl, rxqctl; + u16 vf_v_idx, vf_v_limit, vf_vid; + u8 vf_idx = vf_info->vf_idx; + int i; + + /* verify vf is in range */ + if (vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* clear event notification of VF FLR */ + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(vf_idx / 32), 1 << (vf_idx % 32)); + + /* force timeout and then disconnect the mailbox */ + vf_info->mbx.timeout = 0; + if (vf_info->mbx.ops.disconnect) + vf_info->mbx.ops.disconnect(hw, &vf_info->mbx); + + /* determine vector offset and count*/ + vf_v_idx = fm10k_vf_vector_index(hw, vf_idx); + vf_v_limit = vf_v_idx + fm10k_vectors_per_pool(hw); + + /* determine qmap offsets and counts */ + qmap_stride = (hw->iov.num_vfs > 8) ? 32 : 256; + queues_per_pool = fm10k_queues_per_pool(hw); + qmap_idx = qmap_stride * vf_idx; + + /* make all the queues inaccessible to the VF */ + for (i = qmap_idx; i < (qmap_idx + qmap_stride); i++) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(i), 0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(i), 0); + } + + /* calculate starting index for queues */ + vf_q_idx = fm10k_vf_queue_index(hw, vf_idx); + + /* determine correct default VLAN ID */ + if (vf_info->pf_vid) + vf_vid = vf_info->pf_vid; + else + vf_vid = vf_info->sw_vid; + + /* configure Queue control register */ + txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) | + (vf_idx << FM10K_TXQCTL_TC_SHIFT) | + FM10K_TXQCTL_VF | vf_idx; + rxqctl = FM10K_RXQCTL_VF | (vf_idx << FM10K_RXQCTL_VF_SHIFT); + + /* stop further DMA and reset queue ownership back to VF */ + for (i = vf_q_idx; i < (queues_per_pool + vf_q_idx); i++) { + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), txqctl); + FM10K_WRITE_REG(hw, FM10K_RXDCTL(i), + FM10K_RXDCTL_WRITE_BACK_MIN_DELAY | + FM10K_RXDCTL_DROP_ON_EMPTY); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), rxqctl); + } + + /* reset TC with -1 credits and no quanta to prevent transmit */ + FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(vf_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TC_RATE(vf_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(vf_idx), + FM10K_TC_CREDIT_CREDIT_MASK); + + /* update our first entry in the table based on previous VF */ + if (!vf_idx) + hw->mac.ops.update_int_moderator(hw); + else + hw->iov.ops.assign_int_moderator(hw, vf_idx - 1); + + /* reset linked list so it now includes our active vectors */ + if (vf_idx == (hw->iov.num_vfs - 1)) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), vf_v_idx); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_limit), vf_v_idx); + + /* link remaining vectors so that next points to previous */ + for (vf_v_idx++; vf_v_idx < vf_v_limit; vf_v_idx++) + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_idx), vf_v_idx - 1); + + /* zero out MBMEM, VLAN_TABLE, RETA, RSSRK, and MRQC registers */ + for (i = FM10K_VFMBMEM_LEN; i--;) + FM10K_WRITE_REG(hw, FM10K_MBMEM_VF(vf_idx, i), 0); + for (i = FM10K_VLAN_TABLE_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_VLAN_TABLE(vf_info->vsi, i), 0); + for (i = FM10K_RETA_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_RETA(vf_info->vsi, i), 0); + for (i 
= FM10K_RSSRK_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_RSSRK(vf_info->vsi, i), 0); + FM10K_WRITE_REG(hw, FM10K_MRQC(vf_info->vsi), 0); + + /* Update base address registers to contain MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac)) { + tdbal = (((u32)vf_info->mac[3]) << 24) | + (((u32)vf_info->mac[4]) << 16) | + (((u32)vf_info->mac[5]) << 8); + tdbah = (((u32)0xFF) << 24) | + (((u32)vf_info->mac[0]) << 16) | + (((u32)vf_info->mac[1]) << 8) | + ((u32)vf_info->mac[2]); + } + + /* map queue pairs back to VF from last to first*/ + for (i = queues_per_pool; i--;) { + FM10K_WRITE_REG(hw, FM10K_TDBAL(vf_q_idx + i), tdbal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(vf_q_idx + i), tdbah); + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx + i), vf_q_idx + i); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx + i), vf_q_idx + i); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_set_lport_pf - Assign and enable a logical port for a given VF + * @hw: pointer to hardware structure + * @vf_info: pointer to VF information structure + * @lport_idx: Logical port offset from the hardware glort + * @flags: Set of capability flags to extend port beyond basic functionality + * + * This function allows enabling a VF port by assigning it a GLORT and + * setting the flags so that it can enable an Rx mode. + **/ +STATIC s32 fm10k_iov_set_lport_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info, + u16 lport_idx, u8 flags) +{ + u16 glort = (hw->mac.dglort_map + lport_idx) & FM10K_DGLORTMAP_NONE; + + DEBUGFUNC("fm10k_iov_set_lport_state_pf"); + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + vf_info->vf_flags = flags | FM10K_VF_FLAG_NONE_CAPABLE; + vf_info->glort = glort; + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_reset_lport_pf - Disable a logical port for a given VF + * @hw: pointer to hardware structure + * @vf_info: pointer to VF information structure + * + * This function disables a VF port by stripping it of a GLORT and + * setting the flags so that it cannot enable any Rx mode. + **/ +STATIC void fm10k_iov_reset_lport_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u32 msg[1]; + + DEBUGFUNC("fm10k_iov_reset_lport_state_pf"); + + /* need to disable the port if it is already enabled */ + if (FM10K_VF_FLAG_ENABLED(vf_info)) { + /* notify switch that this port has been disabled */ + fm10k_update_lport_state_pf(hw, vf_info->glort, 1, false); + + /* generate port state response to notify VF it is not ready */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg); + } + + /* clear flags and glort if it exists */ + vf_info->vf_flags = 0; + vf_info->glort = 0; +} + +/** + * fm10k_iov_update_stats_pf - Updates hardware related statistics for VFs + * @hw: pointer to hardware structure + * @q: stats for all queues of a VF + * @vf_idx: index of VF + * + * This function collects queue stats for VFs. 
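+ * For example (counts assumed for illustration, and taking
+ * FM10K_MAX_QUEUES as 256): with 24 pools the device runs 4 queues
+ * per pool, so for 24 VFs the stats for VF 5 are gathered from the 4
+ * queues starting at index 256 - 4 * (24 - 5) = 180, as computed by
+ * fm10k_vf_queue_index().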
+ **/
+STATIC void fm10k_iov_update_stats_pf(struct fm10k_hw *hw,
+				       struct fm10k_hw_stats_q *q,
+				       u16 vf_idx)
+{
+	u32 idx, qpp;
+
+	/* get stats for all of the queues */
+	qpp = fm10k_queues_per_pool(hw);
+	idx = fm10k_vf_queue_index(hw, vf_idx);
+	fm10k_update_hw_stats_q(hw, q, idx, qpp);
+}
+
+STATIC s32 fm10k_iov_report_timestamp_pf(struct fm10k_hw *hw,
+					 struct fm10k_vf_info *vf_info,
+					 u64 timestamp)
+{
+	u32 msg[4];
+
+	/* generate a 1588 timestamp message carrying the value to the VF */
+	fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_1588);
+	fm10k_tlv_attr_put_u64(msg, FM10K_1588_MSG_TIMESTAMP, timestamp);
+
+	return vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg);
+}
+
+/**
+ * fm10k_iov_msg_msix_pf - Message handler for MSI-X request from VF
+ * @hw: Pointer to hardware structure
+ * @results: Pointer array to message, results[0] is pointer to message
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This function is a default handler for MSI-X requests from the VF. The
+ * assumption is that in this case it is acceptable to just directly
+ * hand off the message from the VF to the underlying shared code.
+ **/
+s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *hw, u32 **results,
+			  struct fm10k_mbx_info *mbx)
+{
+	struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx;
+	u8 vf_idx = vf_info->vf_idx;
+
+	UNREFERENCED_1PARAMETER(results);
+	DEBUGFUNC("fm10k_iov_msg_msix_pf");
+
+	return hw->iov.ops.assign_int_moderator(hw, vf_idx);
+}
+
+/**
+ * fm10k_iov_msg_mac_vlan_pf - Message handler for MAC/VLAN request from VF
+ * @hw: Pointer to hardware structure
+ * @results: Pointer array to message, results[0] is pointer to message
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This function is a default handler for MAC/VLAN requests from the VF.
+ * The assumption is that in this case it is acceptable to just directly
+ * hand off the message from the VF to the underlying shared code.
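+ * Note on the VLAN convention used below: a VLAN of 0, or one
+ * carrying only FM10K_VLAN_CLEAR, is rewritten to the VF's default
+ * VID (pf_vid if assigned, otherwise sw_vid), and the
+ * FM10K_VLAN_CLEAR bit decides whether the rule is added or removed.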
+ **/ +s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx; + int err = FM10K_SUCCESS; + u8 mac[ETH_ALEN]; + u32 *result; + u16 vlan; + u32 vid; + + DEBUGFUNC("fm10k_iov_msg_mac_vlan_pf"); + + /* we shouldn't be updating rules on a disabled interface */ + if (!FM10K_VF_FLAG_ENABLED(vf_info)) + err = FM10K_ERR_PARAM; + + if (!err && !!results[FM10K_MAC_VLAN_MSG_VLAN]) { + result = results[FM10K_MAC_VLAN_MSG_VLAN]; + + /* record VLAN id requested */ + err = fm10k_tlv_attr_get_u32(result, &vid); + if (err) + return err; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vid || (vid == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vid |= vf_info->pf_vid; + else + vid |= vf_info->sw_vid; + } else if (vid != vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* update VSI info for VF in regards to VLAN table */ + err = hw->mac.ops.update_vlan(hw, vid, vf_info->vsi, + !(vid & FM10K_VLAN_CLEAR)); + } + + if (!err && !!results[FM10K_MAC_VLAN_MSG_MAC]) { + result = results[FM10K_MAC_VLAN_MSG_MAC]; + + /* record unicast MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan); + if (err) + return err; + + /* block attempts to set MAC for a locked device */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac) && + memcmp(mac, vf_info->mac, ETH_ALEN)) + return FM10K_ERR_PARAM; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vlan || (vlan == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vlan |= vf_info->pf_vid; + else + vlan |= vf_info->sw_vid; + } else if (vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* notify switch of request for new unicast address */ + err = hw->mac.ops.update_uc_addr(hw, vf_info->glort, mac, vlan, + !(vlan & FM10K_VLAN_CLEAR), 0); + } + + if (!err && !!results[FM10K_MAC_VLAN_MSG_MULTICAST]) { + result = results[FM10K_MAC_VLAN_MSG_MULTICAST]; + + /* record multicast MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan); + if (err) + return err; + + /* verify that the VF is allowed to request multicast */ + if (!(vf_info->vf_flags & FM10K_VF_FLAG_MULTI_ENABLED)) + return FM10K_ERR_PARAM; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vlan || (vlan == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vlan |= vf_info->pf_vid; + else + vlan |= vf_info->sw_vid; + } else if (vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* notify switch of request for new multicast address */ + err = hw->mac.ops.update_mc_addr(hw, vf_info->glort, mac, + !(vlan & FM10K_VLAN_CLEAR), 0); + } + + return err; +} + +/** + * fm10k_iov_supported_xcast_mode_pf - Determine best match for xcast mode + * @vf_info: VF info structure containing capability flags + * @mode: Requested xcast mode + * + * This function outputs the mode that most closely matches the requested + * mode. 
If not modes match it will request we disable the port + **/ +STATIC u8 fm10k_iov_supported_xcast_mode_pf(struct fm10k_vf_info *vf_info, + u8 mode) +{ + u8 vf_flags = vf_info->vf_flags; + + /* match up mode to capabilities as best as possible */ + switch (mode) { + case FM10K_XCAST_MODE_PROMISC: + if (vf_flags & FM10K_VF_FLAG_PROMISC_CAPABLE) + return FM10K_XCAST_MODE_PROMISC; + /* fallthough */ + case FM10K_XCAST_MODE_ALLMULTI: + if (vf_flags & FM10K_VF_FLAG_ALLMULTI_CAPABLE) + return FM10K_XCAST_MODE_ALLMULTI; + /* fallthough */ + case FM10K_XCAST_MODE_MULTI: + if (vf_flags & FM10K_VF_FLAG_MULTI_CAPABLE) + return FM10K_XCAST_MODE_MULTI; + /* fallthough */ + case FM10K_XCAST_MODE_NONE: + if (vf_flags & FM10K_VF_FLAG_NONE_CAPABLE) + return FM10K_XCAST_MODE_NONE; + /* fallthough */ + default: + break; + } + + /* disable interface as it should not be able to request any */ + return FM10K_XCAST_MODE_DISABLE; +} + +/** + * fm10k_iov_msg_lport_state_pf - Message handler for port state requests + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Pointer to mailbox information structure + * + * This function is a default handler for port state requests. The port + * state requests for now are basic and consist of enabling or disabling + * the port. + **/ +s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx; + u32 *result; + s32 err = FM10K_SUCCESS; + u32 msg[2]; + u8 mode = 0; + + DEBUGFUNC("fm10k_iov_msg_lport_state_pf"); + + /* verify VF is allowed to enable even minimal mode */ + if (!(vf_info->vf_flags & FM10K_VF_FLAG_NONE_CAPABLE)) + return FM10K_ERR_PARAM; + + if (!!results[FM10K_LPORT_STATE_MSG_XCAST_MODE]) { + result = results[FM10K_LPORT_STATE_MSG_XCAST_MODE]; + + /* XCAST mode update requested */ + err = fm10k_tlv_attr_get_u8(result, &mode); + if (err) + return FM10K_ERR_PARAM; + + /* prep for possible demotion depending on capabilities */ + mode = fm10k_iov_supported_xcast_mode_pf(vf_info, mode); + + /* if mode is not currently enabled, enable it */ + if (!(FM10K_VF_FLAG_ENABLED(vf_info) & (1 << mode))) + fm10k_update_xcast_mode_pf(hw, vf_info->glort, mode); + + /* swap mode back to a bit flag */ + mode = FM10K_VF_FLAG_SET_MODE(mode); + } else if (!results[FM10K_LPORT_STATE_MSG_DISABLE]) { + /* need to disable the port if it is already enabled */ + if (FM10K_VF_FLAG_ENABLED(vf_info)) + err = fm10k_update_lport_state_pf(hw, vf_info->glort, + 1, false); + + /* when enabling the port we should reset the rate limiters */ + hw->iov.ops.configure_tc(hw, vf_info->vf_idx, vf_info->rate); + + /* set mode for minimal functionality */ + mode = FM10K_VF_FLAG_SET_MODE_NONE; + + /* generate port state response to notify VF it is ready */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + fm10k_tlv_attr_put_bool(msg, FM10K_LPORT_STATE_MSG_READY); + mbx->ops.enqueue_tx(hw, mbx, msg); + } + + /* if enable state toggled note the update */ + if (!err && (!FM10K_VF_FLAG_ENABLED(vf_info) != !mode)) + err = fm10k_update_lport_state_pf(hw, vf_info->glort, 1, + !!mode); + + /* if state change succeeded, then update our stored state */ + mode |= FM10K_VF_FLAG_CAPABLE(vf_info); + if (!err) + vf_info->vf_flags = mode; + + return err; +} + +const struct fm10k_msg_data fm10k_iov_msg_data_pf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MSIX_HANDLER(fm10k_iov_msg_msix_pf), + 
FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_iov_msg_mac_vlan_pf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_iov_msg_lport_state_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_update_stats_hw_pf - Updates hardware related statistics of PF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function collects and aggregates global and per queue hardware + * statistics. + **/ +STATIC void fm10k_update_hw_stats_pf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + u32 timeout, ur, ca, um, xec, vlan_drop, loopback_drop, nodesc_drop; + u32 id, id_prev; + + DEBUGFUNC("fm10k_update_hw_stats_pf"); + + /* Use Tx queue 0 as a canary to detect a reset */ + id = FM10K_READ_REG(hw, FM10K_TXQCTL(0)); + + /* Read Global Statistics */ + do { + timeout = fm10k_read_hw_stats_32b(hw, FM10K_STATS_TIMEOUT, + &stats->timeout); + ur = fm10k_read_hw_stats_32b(hw, FM10K_STATS_UR, &stats->ur); + ca = fm10k_read_hw_stats_32b(hw, FM10K_STATS_CA, &stats->ca); + um = fm10k_read_hw_stats_32b(hw, FM10K_STATS_UM, &stats->um); + xec = fm10k_read_hw_stats_32b(hw, FM10K_STATS_XEC, &stats->xec); + vlan_drop = fm10k_read_hw_stats_32b(hw, FM10K_STATS_VLAN_DROP, + &stats->vlan_drop); + loopback_drop = fm10k_read_hw_stats_32b(hw, + FM10K_STATS_LOOPBACK_DROP, + &stats->loopback_drop); + nodesc_drop = fm10k_read_hw_stats_32b(hw, + FM10K_STATS_NODESC_DROP, + &stats->nodesc_drop); + + /* if value has not changed then we have consistent data */ + id_prev = id; + id = FM10K_READ_REG(hw, FM10K_TXQCTL(0)); + } while ((id ^ id_prev) & FM10K_TXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id &= FM10K_TXQCTL_ID_MASK; + id |= FM10K_STAT_VALID; + + /* Update Global Statistics */ + if (stats->stats_idx == id) { + stats->timeout.count += timeout; + stats->ur.count += ur; + stats->ca.count += ca; + stats->um.count += um; + stats->xec.count += xec; + stats->vlan_drop.count += vlan_drop; + stats->loopback_drop.count += loopback_drop; + stats->nodesc_drop.count += nodesc_drop; + } + + /* Update bases and record current PF id */ + fm10k_update_hw_base_32b(&stats->timeout, timeout); + fm10k_update_hw_base_32b(&stats->ur, ur); + fm10k_update_hw_base_32b(&stats->ca, ca); + fm10k_update_hw_base_32b(&stats->um, um); + fm10k_update_hw_base_32b(&stats->xec, xec); + fm10k_update_hw_base_32b(&stats->vlan_drop, vlan_drop); + fm10k_update_hw_base_32b(&stats->loopback_drop, loopback_drop); + fm10k_update_hw_base_32b(&stats->nodesc_drop, nodesc_drop); + stats->stats_idx = id; + + /* Update Queue Statistics */ + fm10k_update_hw_stats_q(hw, stats->q, 0, hw->mac.max_queues); +} + +/** + * fm10k_rebind_hw_stats_pf - Resets base for hardware statistics of PF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function resets the base for global and per queue hardware + * statistics. 
+ **/ +STATIC void fm10k_rebind_hw_stats_pf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_rebind_hw_stats_pf"); + + /* Unbind Global Statistics */ + fm10k_unbind_hw_stats_32b(&stats->timeout); + fm10k_unbind_hw_stats_32b(&stats->ur); + fm10k_unbind_hw_stats_32b(&stats->ca); + fm10k_unbind_hw_stats_32b(&stats->um); + fm10k_unbind_hw_stats_32b(&stats->xec); + fm10k_unbind_hw_stats_32b(&stats->vlan_drop); + fm10k_unbind_hw_stats_32b(&stats->loopback_drop); + fm10k_unbind_hw_stats_32b(&stats->nodesc_drop); + + /* Unbind Queue Statistics */ + fm10k_unbind_hw_stats_q(stats->q, 0, hw->mac.max_queues); + + /* Reinitialize bases for all stats */ + fm10k_update_hw_stats_pf(hw, stats); +} + +/** + * fm10k_set_dma_mask_pf - Configures PhyAddrSpace to limit DMA to system + * @hw: pointer to hardware structure + * @dma_mask: 64 bit DMA mask required for platform + * + * This function sets the PHYADDR.PhyAddrSpace bits for the endpoint in order + * to limit the access to memory beyond what is physically in the system. + **/ +STATIC void fm10k_set_dma_mask_pf(struct fm10k_hw *hw, u64 dma_mask) +{ + /* we need to write the upper 32 bits of DMA mask to PhyAddrSpace */ + u32 phyaddr = (u32)(dma_mask >> 32); + + DEBUGFUNC("fm10k_set_dma_mask_pf"); + + FM10K_WRITE_REG(hw, FM10K_PHYADDR, phyaddr); +} + +/** + * fm10k_get_fault_pf - Record a fault in one of the interface units + * @hw: pointer to hardware structure + * @type: pointer to fault type register offset + * @fault: pointer to memory location to record the fault + * + * Record the fault register contents to the fault data structure and + * clear the entry from the register. + * + * Returns ERR_PARAM if invalid register is specified or no error is present. + **/ +STATIC s32 fm10k_get_fault_pf(struct fm10k_hw *hw, int type, + struct fm10k_fault *fault) +{ + u32 func; + + DEBUGFUNC("fm10k_get_fault_pf"); + + /* verify the fault register is in range and is aligned */ + switch (type) { + case FM10K_PCA_FAULT: + case FM10K_THI_FAULT: + case FM10K_FUM_FAULT: + break; + default: + return FM10K_ERR_PARAM; + } + + /* only service faults that are valid */ + func = FM10K_READ_REG(hw, type + FM10K_FAULT_FUNC); + if (!(func & FM10K_FAULT_FUNC_VALID)) + return FM10K_ERR_PARAM; + + /* read remaining fields */ + fault->address = FM10K_READ_REG(hw, type + FM10K_FAULT_ADDR_HI); + fault->address <<= 32; + fault->address = FM10K_READ_REG(hw, type + FM10K_FAULT_ADDR_LO); + fault->specinfo = FM10K_READ_REG(hw, type + FM10K_FAULT_SPECINFO); + + /* clear valid bit to allow for next error */ + FM10K_WRITE_REG(hw, type + FM10K_FAULT_FUNC, FM10K_FAULT_FUNC_VALID); + + /* Record which function triggered the error */ + if (func & FM10K_FAULT_FUNC_PF) + fault->func = 0; + else + fault->func = 1 + ((func & FM10K_FAULT_FUNC_VF_MASK) >> + FM10K_FAULT_FUNC_VF_SHIFT); + + /* record fault type */ + fault->type = func & FM10K_FAULT_FUNC_TYPE_MASK; + + return FM10K_SUCCESS; +} + +/** + * fm10k_request_lport_map_pf - Request LPORT map from the switch API + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_request_lport_map_pf(struct fm10k_hw *hw) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[1]; + + DEBUGFUNC("fm10k_request_lport_pf"); + + /* issue request asking for LPORT map */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_LPORT_MAP); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_get_host_state_pf - Returns the state of the switch and mailbox + * @hw: pointer to hardware structure + * 
@switch_ready: pointer to boolean value that will record switch state + * + * This funciton will check the DMA_CTRL2 register and mailbox in order + * to determine if the switch is ready for the PF to begin requesting + * addresses and mapping traffic to the local interface. + **/ +STATIC s32 fm10k_get_host_state_pf(struct fm10k_hw *hw, bool *switch_ready) +{ + s32 ret_val = FM10K_SUCCESS; + u32 dma_ctrl2; + + DEBUGFUNC("fm10k_get_host_state_pf"); + + /* verify the switch is ready for interraction */ + dma_ctrl2 = FM10K_READ_REG(hw, FM10K_DMA_CTRL2); + if (!(dma_ctrl2 & FM10K_DMA_CTRL2_SWITCH_READY)) + goto out; + + /* retrieve generic host state info */ + ret_val = fm10k_get_host_state_generic(hw, switch_ready); + if (ret_val) + goto out; + + /* interface cannot receive traffic without logical ports */ + if (hw->mac.dglort_map == FM10K_DGLORTMAP_NONE) + ret_val = fm10k_request_lport_map_pf(hw); + +out: + return ret_val; +} + +/* This structure defines the attibutes to be parsed below */ +const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_PF_ATTR_ID_LPORT_MAP), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_lport_map_pf - Message handler for lport_map message from SM + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler configures the lport mapping based on the reply from the + * switch API. + **/ +s32 fm10k_msg_lport_map_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u16 glort, mask; + u32 dglort_map; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_lport_map_pf"); + + err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_LPORT_MAP], + &dglort_map); + if (err) + return err; + + /* extract values out of the header */ + glort = FM10K_MSG_HDR_FIELD_GET(dglort_map, LPORT_MAP_GLORT); + mask = FM10K_MSG_HDR_FIELD_GET(dglort_map, LPORT_MAP_MASK); + + /* verify mask is set and none of the masked bits in glort are set */ + if (!mask || (glort & ~mask)) + return FM10K_ERR_PARAM; + + /* verify the mask is contiguous, and that it is 1's followed by 0's */ + if (((~(mask - 1) & mask) + mask) & FM10K_DGLORTMAP_NONE) + return FM10K_ERR_PARAM; + + /* record the glort, mask, and port count */ + hw->mac.dglort_map = dglort_map; + + return FM10K_SUCCESS; +} + +const struct fm10k_tlv_attr fm10k_update_pvid_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_PF_ATTR_ID_UPDATE_PVID), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_update_pvid_pf - Message handler for port VLAN message from SM + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler configures the default VLAN for the PF + **/ +s32 fm10k_msg_update_pvid_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u16 glort, pvid; + u32 pvid_update; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_update_pvid_pf"); + + err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_UPDATE_PVID], + &pvid_update); + if (err) + return err; + + /* extract values from the pvid update */ + glort = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_GLORT); + pvid = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_PVID); + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* verify VID is valid */ + if (pvid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* record the port VLAN ID 
value */ + hw->mac.default_vid = pvid; + + return FM10K_SUCCESS; +} + +/** + * fm10k_record_global_table_data - Move global table data to swapi table info + * @from: pointer to source table data structure + * @to: pointer to destination table info structure + * + * This function is will copy table_data to the table_info contained in + * the hw struct. + **/ +static void fm10k_record_global_table_data(struct fm10k_global_table_data *from, + struct fm10k_swapi_table_info *to) +{ + /* convert from le32 struct to CPU byte ordered values */ + to->used = FM10K_LE32_TO_CPU(from->used); + to->avail = FM10K_LE32_TO_CPU(from->avail); +} + +const struct fm10k_tlv_attr fm10k_err_msg_attr[] = { + FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_ERR, + sizeof(struct fm10k_swapi_error)), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_err_pf - Message handler for error reply + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler will capture the data for any error replies to previous + * messages that the PF has sent. + **/ +s32 fm10k_msg_err_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_swapi_error err_msg; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_err_pf"); + + /* extract structure from message */ + err = fm10k_tlv_attr_get_le_struct(results[FM10K_PF_ATTR_ID_ERR], + &err_msg, sizeof(err_msg)); + if (err) + return err; + + /* record table status */ + fm10k_record_global_table_data(&err_msg.mac, &hw->swapi.mac); + fm10k_record_global_table_data(&err_msg.nexthop, &hw->swapi.nexthop); + fm10k_record_global_table_data(&err_msg.ffu, &hw->swapi.ffu); + + /* record SW API status value */ + hw->swapi.status = FM10K_LE32_TO_CPU(err_msg.status); + + return FM10K_SUCCESS; +} + +const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[] = { + FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_1588_TIMESTAMP, + sizeof(struct fm10k_swapi_1588_timestamp)), + FM10K_TLV_ATTR_LAST +}; + +/* currently there is no shared 1588 timestamp handler */ + +static const struct fm10k_msg_data fm10k_msg_data_pf[] = { + FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf), + FM10K_PF_MSG_LPORT_MAP_HANDLER(fm10k_msg_lport_map_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_CREATE, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_DELETE, fm10k_msg_err_pf), + FM10K_PF_MSG_UPDATE_PVID_HANDLER(fm10k_msg_update_pvid_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_init_ops_pf - Inits func ptrs and MAC type + * @hw: pointer to hardware structure + * + * Initialize the function pointers and assign the MAC type for PF. + * Does not touch the hardware. 
+ **/ +s32 fm10k_init_ops_pf(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + struct fm10k_iov_info *iov = &hw->iov; + + DEBUGFUNC("fm10k_init_ops_pf"); + + fm10k_init_ops_generic(hw); + + mac->ops.reset_hw = &fm10k_reset_hw_pf; + mac->ops.init_hw = &fm10k_init_hw_pf; + mac->ops.start_hw = &fm10k_start_hw_generic; + mac->ops.stop_hw = &fm10k_stop_hw_generic; + mac->ops.is_slot_appropriate = &fm10k_is_slot_appropriate_pf; + mac->ops.update_vlan = &fm10k_update_vlan_pf; + mac->ops.read_mac_addr = &fm10k_read_mac_addr_pf; + mac->ops.update_uc_addr = &fm10k_update_uc_addr_pf; + mac->ops.update_mc_addr = &fm10k_update_mc_addr_pf; + mac->ops.update_xcast_mode = &fm10k_update_xcast_mode_pf; + mac->ops.update_int_moderator = &fm10k_update_int_moderator_pf; + mac->ops.update_lport_state = &fm10k_update_lport_state_pf; + mac->ops.update_hw_stats = &fm10k_update_hw_stats_pf; + mac->ops.rebind_hw_stats = &fm10k_rebind_hw_stats_pf; + mac->ops.configure_dglort_map = &fm10k_configure_dglort_map_pf; + mac->ops.set_dma_mask = &fm10k_set_dma_mask_pf; + mac->ops.get_fault = &fm10k_get_fault_pf; + mac->ops.get_host_state = &fm10k_get_host_state_pf; + + mac->max_msix_vectors = fm10k_get_pcie_msix_count_generic(hw); + + iov->ops.assign_resources = &fm10k_iov_assign_resources_pf; + iov->ops.configure_tc = &fm10k_iov_configure_tc_pf; + iov->ops.assign_int_moderator = &fm10k_iov_assign_int_moderator_pf; + iov->ops.assign_default_mac_vlan = fm10k_iov_assign_default_mac_vlan_pf; + iov->ops.reset_resources = &fm10k_iov_reset_resources_pf; + iov->ops.set_lport = &fm10k_iov_set_lport_pf; + iov->ops.reset_lport = &fm10k_iov_reset_lport_pf; + iov->ops.update_stats = &fm10k_iov_update_stats_pf; + iov->ops.report_timestamp = &fm10k_iov_report_timestamp_pf; + + return fm10k_sm_mbx_init(hw, &hw->mbx, fm10k_msg_data_pf); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_pf.h b/lib/librte_pmd_fm10k/base/fm10k_pf.h new file mode 100644 index 0000000..2b4605a --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_pf.h @@ -0,0 +1,152 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_PF_H_ +#define _FM10K_PF_H_ + +#include "fm10k_type.h" +#include "fm10k_common.h" + +bool fm10k_glort_valid_pf(struct fm10k_hw *hw, u16 glort); +u16 fm10k_queues_per_pool(struct fm10k_hw *hw); +u16 fm10k_vf_queue_index(struct fm10k_hw *hw, u16 vf_idx); + +enum fm10k_pf_tlv_msg_id_v1 { + FM10K_PF_MSG_ID_TEST = 0x000, /* msg ID reserved */ + FM10K_PF_MSG_ID_XCAST_MODES = 0x001, + FM10K_PF_MSG_ID_UPDATE_MAC_FWD_RULE = 0x002, + FM10K_PF_MSG_ID_LPORT_MAP = 0x100, + FM10K_PF_MSG_ID_LPORT_CREATE = 0x200, + FM10K_PF_MSG_ID_LPORT_DELETE = 0x201, + FM10K_PF_MSG_ID_CONFIG = 0x300, + FM10K_PF_MSG_ID_UPDATE_PVID = 0x400, + FM10K_PF_MSG_ID_CREATE_FLOW_TABLE = 0x501, + FM10K_PF_MSG_ID_DELETE_FLOW_TABLE = 0x502, + FM10K_PF_MSG_ID_UPDATE_FLOW = 0x503, + FM10K_PF_MSG_ID_DELETE_FLOW = 0x504, + FM10K_PF_MSG_ID_SET_FLOW_STATE = 0x505, + FM10K_PF_MSG_ID_GET_1588_INFO = 0x506, + FM10K_PF_MSG_ID_1588_TIMESTAMP = 0x701, +}; + +enum fm10k_pf_tlv_attr_id_v1 { + FM10K_PF_ATTR_ID_ERR = 0x00, + FM10K_PF_ATTR_ID_LPORT_MAP = 0x01, + FM10K_PF_ATTR_ID_XCAST_MODE = 0x02, + FM10K_PF_ATTR_ID_MAC_UPDATE = 0x03, + FM10K_PF_ATTR_ID_VLAN_UPDATE = 0x04, + FM10K_PF_ATTR_ID_CONFIG = 0x05, + FM10K_PF_ATTR_ID_CREATE_FLOW_TABLE = 0x06, + FM10K_PF_ATTR_ID_DELETE_FLOW_TABLE = 0x07, + FM10K_PF_ATTR_ID_UPDATE_FLOW = 0x08, + FM10K_PF_ATTR_ID_FLOW_STATE = 0x09, + FM10K_PF_ATTR_ID_FLOW_HANDLE = 0x0A, + FM10K_PF_ATTR_ID_DELETE_FLOW = 0x0B, + FM10K_PF_ATTR_ID_PORT = 0x0C, + FM10K_PF_ATTR_ID_UPDATE_PVID = 0x0D, + FM10K_PF_ATTR_ID_1588_TIMESTAMP = 0x10, +}; + +#define FM10K_MSG_LPORT_MAP_GLORT_SHIFT 0 +#define FM10K_MSG_LPORT_MAP_GLORT_SIZE 16 +#define FM10K_MSG_LPORT_MAP_MASK_SHIFT 16 +#define FM10K_MSG_LPORT_MAP_MASK_SIZE 16 + +#define FM10K_MSG_UPDATE_PVID_GLORT_SHIFT 0 +#define FM10K_MSG_UPDATE_PVID_GLORT_SIZE 16 +#define FM10K_MSG_UPDATE_PVID_PVID_SHIFT 16 +#define FM10K_MSG_UPDATE_PVID_PVID_SIZE 16 + +struct fm10k_mac_update { + __le32 mac_lower; + __le16 mac_upper; + __le16 vlan; + __le16 glort; + u8 flags; + u8 action; +}; + +struct fm10k_global_table_data { + __le32 used; + __le32 avail; +}; + +struct fm10k_swapi_error { + __le32 status; + struct fm10k_global_table_data mac; + struct fm10k_global_table_data nexthop; + struct fm10k_global_table_data ffu; +}; + +struct fm10k_swapi_1588_timestamp { + __le64 egress; + __le64 ingress; + __le16 dglort; + __le16 sglort; +}; + +#define FM10K_PF_MSG_LPORT_CREATE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_CREATE, NULL, func) +#define FM10K_PF_MSG_LPORT_DELETE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_DELETE, NULL, func) +s32 fm10k_msg_lport_map_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[]; +#define FM10K_PF_MSG_LPORT_MAP_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_MAP, \ + fm10k_lport_map_msg_attr, func) +s32 fm10k_msg_update_pvid_pf(struct fm10k_hw *, u32 **, + struct 
fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_update_pvid_msg_attr[]; +#define FM10K_PF_MSG_UPDATE_PVID_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_UPDATE_PVID, \ + fm10k_update_pvid_msg_attr, func) + +s32 fm10k_msg_err_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_err_msg_attr[]; +#define FM10K_PF_MSG_ERR_HANDLER(msg, func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_##msg, fm10k_err_msg_attr, func) + +extern const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[]; +#define FM10K_PF_MSG_1588_TIMESTAMP_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_1588_TIMESTAMP, \ + fm10k_1588_timestamp_msg_attr, func) + +s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +extern const struct fm10k_msg_data fm10k_iov_msg_data_pf[]; + +s32 fm10k_init_ops_pf(struct fm10k_hw *hw); +#endif /* _FM10K_PF_H */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_tlv.c b/lib/librte_pmd_fm10k/base/fm10k_tlv.c new file mode 100644 index 0000000..a192a2a --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_tlv.c @@ -0,0 +1,914 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#include "fm10k_tlv.h" + +/** + * fm10k_tlv_msg_init - Initialize message block for TLV data storage + * @msg: Pointer to message block + * @msg_id: Message ID indicating message type + * + * This function return success if provided with a valid message pointer + **/ +s32 fm10k_tlv_msg_init(u32 *msg, u16 msg_id) +{ + DEBUGFUNC("fm10k_tlv_msg_init"); + + /* verify pointer is not NULL */ + if (!msg) + return FM10K_ERR_PARAM; + + *msg = (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT) | msg_id; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_null_string - Place null terminated string on message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @string: Pointer to string to be stored in attribute + * + * This function will reorder a string to be CPU endian and store it in + * the attribute buffer. It will return success if provided with a valid + * pointers. + **/ +s32 fm10k_tlv_attr_put_null_string(u32 *msg, u16 attr_id, + const unsigned char *string) +{ + u32 attr_data = 0, len = 0; + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_null_string"); + + /* verify pointers are not NULL */ + if (!string || !msg) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* copy string into local variable and then write to msg */ + do { + /* write data to message */ + if (len && !(len % 4)) { + attr[len / 4] = attr_data; + attr_data = 0; + } + + /* record character to offset location */ + attr_data |= (u32)(*string) << (8 * (len % 4)); + len++; + + /* test for NULL and then increment */ + } while (*(string++)); + + /* write last piece of data to message */ + attr[(len + 3) / 4] = attr_data; + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_null_string - Get null terminated string from attribute + * @attr: Pointer to attribute + * @string: Pointer to location of destination string + * + * This function pulls the string back out of the attribute and will place + * it in the array pointed by by string. It will return success if provided + * with a valid pointers. + **/ +s32 fm10k_tlv_attr_get_null_string(u32 *attr, unsigned char *string) +{ + u32 len; + + DEBUGFUNC("fm10k_tlv_attr_get_null_string"); + + /* verify pointers are not NULL */ + if (!string || !attr) + return FM10K_ERR_PARAM; + + len = *attr >> FM10K_TLV_LEN_SHIFT; + attr++; + + while (len--) + string[len] = (u8)(attr[len / 4] >> (8 * (len % 4))); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_mac_vlan - Store MAC/VLAN attribute in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @mac_addr: MAC address to be stored + * + * This function will reorder a MAC address to be CPU endian and store it + * in the attribute buffer. It will return success if provided with a + * valid pointers. 
+ **/ +s32 fm10k_tlv_attr_put_mac_vlan(u32 *msg, u16 attr_id, + const u8 *mac_addr, u16 vlan) +{ + u32 len = ETH_ALEN << FM10K_TLV_LEN_SHIFT; + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_mac_vlan"); + + /* verify pointers are not NULL */ + if (!msg || !mac_addr) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* record attribute header, update message length */ + attr[0] = len | attr_id; + + /* copy value into local variable and then write to msg */ + attr[1] = FM10K_LE32_TO_CPU(*(const __le32 *)&mac_addr[0]); + attr[2] = FM10K_LE16_TO_CPU(*(const __le16 *)&mac_addr[4]); + attr[2] |= (u32)vlan << 16; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_mac_vlan - Get MAC/VLAN stored in attribute + * @attr: Pointer to attribute + * @attr_id: Attribute ID + * @mac_addr: location of buffer to store MAC address + * + * This function pulls the MAC address back out of the attribute and will + * place it in the array pointed by by mac_addr. It will return success + * if provided with a valid pointers. + **/ +s32 fm10k_tlv_attr_get_mac_vlan(u32 *attr, u8 *mac_addr, u16 *vlan) +{ + DEBUGFUNC("fm10k_tlv_attr_get_mac_vlan"); + + /* verify pointers are not NULL */ + if (!mac_addr || !attr) + return FM10K_ERR_PARAM; + + *(__le32 *)&mac_addr[0] = FM10K_CPU_TO_LE32(attr[1]); + *(__le16 *)&mac_addr[4] = FM10K_CPU_TO_LE16((u16)(attr[2])); + *vlan = (u16)(attr[2] >> 16); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_bool - Add header indicating value "true" + * @msg: Pointer to message block + * @attr_id: Attribute ID + * + * This function will simply add an attribute header, the fact + * that the header is here means the attribute value is true, else + * it is false. The function will return success if provided with a + * valid pointers. + **/ +s32 fm10k_tlv_attr_put_bool(u32 *msg, u16 attr_id) +{ + DEBUGFUNC("fm10k_tlv_attr_put_bool"); + + /* verify pointers are not NULL */ + if (!msg) + return FM10K_ERR_PARAM; + + /* record attribute header */ + msg[FM10K_TLV_DWORD_LEN(*msg)] = attr_id; + + /* add header length to length */ + *msg += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_value - Store integer value attribute in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @value: Value to be written + * @len: Size of value + * + * This function will place an integer value of up to 8 bytes in size + * in a message attribute. The function will return success provided + * that msg is a valid pointer, and len is 1, 2, 4, or 8. 
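+ *
+ * (Worked example added editorially, not part of the original patch; the
+ * numbers are hypothetical.) For attr_id 0x5, value 0x1234 and len 2, the
+ * function stores the attribute header (2 << FM10K_TLV_LEN_SHIFT) | 0x5 in
+ * attr[0] and the masked data word 0x00001234 in attr[1], then advances the
+ * message length field by 8 bytes: a 4-byte attribute header plus one
+ * 4-byte-aligned data word, as rounded up by FM10K_TLV_LEN_ALIGN().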
+ **/ +s32 fm10k_tlv_attr_put_value(u32 *msg, u16 attr_id, s64 value, u32 len) +{ + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_value"); + + /* verify non-null msg and len is 1, 2, 4, or 8 */ + if (!msg || !len || len > 8 || (len & (len - 1))) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + if (len < 4) { + attr[1] = (u32)value & ((0x1ul << (8 * len)) - 1); + } else { + attr[1] = (u32)value; + if (len > 4) + attr[2] = (u32)(value >> 32); + } + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_value - Get integer value stored in attribute + * @attr: Pointer to attribute + * @value: Pointer to destination buffer + * @len: Size of value + * + * This function will place an integer value of up to 8 bytes in size + * in the offset pointed to by value. The function will return success + * provided that pointers are valid and the len value matches the + * attribute length. + **/ +s32 fm10k_tlv_attr_get_value(u32 *attr, void *value, u32 len) +{ + DEBUGFUNC("fm10k_tlv_attr_get_value"); + + /* verify pointers are not NULL */ + if (!attr || !value) + return FM10K_ERR_PARAM; + + if ((*attr >> FM10K_TLV_LEN_SHIFT) != len) + return FM10K_ERR_PARAM; + + if (len == 8) + *(u64 *)value = ((u64)attr[2] << 32) | attr[1]; + else if (len == 4) + *(u32 *)value = attr[1]; + else if (len == 2) + *(u16 *)value = (u16)attr[1]; + else + *(u8 *)value = (u8)attr[1]; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_le_struct - Store little endian structure in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @le_struct: Pointer to structure to be written + * @len: Size of le_struct + * + * This function will place a little endian structure value in a message + * attribute. The function will return success provided that all pointers + * are valid and length is a non-zero multiple of 4. + **/ +s32 fm10k_tlv_attr_put_le_struct(u32 *msg, u16 attr_id, + const void *le_struct, u32 len) +{ + const __le32 *le32_ptr = (const __le32 *)le_struct; + u32 *attr; + u32 i; + + DEBUGFUNC("fm10k_tlv_attr_put_le_struct"); + + /* verify non-null msg and len is in 32 bit words */ + if (!msg || !len || (len % 4)) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* copy le32 structure into host byte order at 32b boundaries */ + for (i = 0; i < (len / 4); i++) + attr[i + 1] = FM10K_LE32_TO_CPU(le32_ptr[i]); + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_le_struct - Get little endian struct form attribute + * @attr: Pointer to attribute + * @le_struct: Pointer to structure to be written + * @len: Size of structure + * + * This function will place a little endian structure in the buffer + * pointed to by le_struct. The function will return success + * provided that pointers are valid and the len value matches the + * attribute length. 
+ **/
+s32 fm10k_tlv_attr_get_le_struct(u32 *attr, void *le_struct, u32 len)
+{
+	__le32 *le32_ptr = (__le32 *)le_struct;
+	u32 i;
+
+	DEBUGFUNC("fm10k_tlv_attr_get_le_struct");
+
+	/* verify pointers are not NULL */
+	if (!le_struct || !attr)
+		return FM10K_ERR_PARAM;
+
+	if ((*attr >> FM10K_TLV_LEN_SHIFT) != len)
+		return FM10K_ERR_PARAM;
+
+	attr++;
+
+	for (i = 0; len; i++, len -= 4)
+		le32_ptr[i] = FM10K_CPU_TO_LE32(attr[i]);
+
+	return FM10K_SUCCESS;
+}
+
+/**
+ * fm10k_tlv_attr_nest_start - Start a set of nested attributes
+ * @msg: Pointer to message block
+ * @attr_id: Attribute ID
+ *
+ * This function will mark off a new nested region for encapsulating
+ * a given set of attributes. The idea is if you wish to place a secondary
+ * structure within the message this mechanism allows for that. The
+ * function will return NULL on failure, and a pointer to the start
+ * of the nested attributes on success.
+ **/
+u32 *fm10k_tlv_attr_nest_start(u32 *msg, u16 attr_id)
+{
+	u32 *attr;
+
+	DEBUGFUNC("fm10k_tlv_attr_nest_start");
+
+	/* verify pointer is not NULL */
+	if (!msg)
+		return NULL;
+
+	attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
+
+	attr[0] = attr_id;
+
+	/* return pointer to nest header */
+	return attr;
+}
+
+/**
+ * fm10k_tlv_attr_nest_stop - Stop a set of nested attributes
+ * @msg: Pointer to message block
+ *
+ * This function closes off an existing set of nested attributes. The
+ * message pointer should be pointing to the parent of the nest. So in
+ * the case of a nest within the nest this would be the outer nest pointer.
+ * This function will return success provided all pointers are valid.
+ **/
+s32 fm10k_tlv_attr_nest_stop(u32 *msg)
+{
+	u32 *attr;
+	u32 len;
+
+	DEBUGFUNC("fm10k_tlv_attr_nest_stop");
+
+	/* verify pointer is not NULL */
+	if (!msg)
+		return FM10K_ERR_PARAM;
+
+	/* locate the nested header and retrieve its length */
+	attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
+	len = (attr[0] >> FM10K_TLV_LEN_SHIFT) << FM10K_TLV_LEN_SHIFT;
+
+	/* only include nest if data was added to it */
+	if (len) {
+		len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT;
+		*msg += len;
+	}
+
+	return FM10K_SUCCESS;
+}
+
+/**
+ * fm10k_tlv_attr_validate - Validate attribute metadata
+ * @attr: Pointer to attribute
+ * @tlv_attr: Type and length info for attribute
+ *
+ * This function does some basic validation of the input TLV. It
+ * verifies the length, and in the case of null terminated strings
+ * it verifies that the last byte is null. The function will
+ * return FM10K_ERR_PARAM if any attribute is malformed, otherwise
+ * it returns 0.
+ **/ +STATIC s32 fm10k_tlv_attr_validate(u32 *attr, + const struct fm10k_tlv_attr *tlv_attr) +{ + u32 attr_id = *attr & FM10K_TLV_ID_MASK; + u16 len = *attr >> FM10K_TLV_LEN_SHIFT; + + DEBUGFUNC("fm10k_tlv_attr_validate"); + + /* verify this is an attribute and not a message */ + if (*attr & (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT)) + return FM10K_ERR_PARAM; + + /* search through the list of attributes to find a matching ID */ + while (tlv_attr->id < attr_id) + tlv_attr++; + + /* if didn't find a match then we should exit */ + if (tlv_attr->id != attr_id) + return FM10K_NOT_IMPLEMENTED; + + /* move to start of attribute data */ + attr++; + + switch (tlv_attr->type) { + case FM10K_TLV_NULL_STRING: + if (!len || + (attr[(len - 1) / 4] & (0xFF << (8 * ((len - 1) % 4))))) + return FM10K_ERR_PARAM; + if (len > tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_MAC_ADDR: + if (len != ETH_ALEN) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_BOOL: + if (len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_UNSIGNED: + case FM10K_TLV_SIGNED: + if (len != tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_LE_STRUCT: + /* struct must be 4 byte aligned */ + if ((len % 4) || len != tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_NESTED: + /* nested attributes must be 4 byte aligned */ + if (len % 4) + return FM10K_ERR_PARAM; + break; + default: + /* attribute id is mapped to bad value */ + return FM10K_ERR_PARAM; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_parse - Parses stream of attribute data + * @attr: Pointer to attribute list + * @results: Pointer array to store pointers to attributes + * @tlv_attr: Type and length info for attributes + * + * This function validates a stream of attributes and parses them + * up into an array of pointers stored in results. The function will + * return FM10K_ERR_PARAM on any input or message error, + * FM10K_NOT_IMPLEMENTED for any attribute that is outside of the array + * and 0 on success. 
+ **/ +s32 fm10k_tlv_attr_parse(u32 *attr, u32 **results, + const struct fm10k_tlv_attr *tlv_attr) +{ + u32 i, attr_id, offset = 0; + s32 err = 0; + u16 len; + + DEBUGFUNC("fm10k_tlv_attr_parse"); + + /* verify pointers are not NULL */ + if (!attr || !results) + return FM10K_ERR_PARAM; + + /* initialize results to NULL */ + for (i = 0; i < FM10K_TLV_RESULTS_MAX; i++) + results[i] = NULL; + + /* pull length from the message header */ + len = *attr >> FM10K_TLV_LEN_SHIFT; + + /* no attributes to parse if there is no length */ + if (!len) + return FM10K_SUCCESS; + + /* no attributes to parse, just raw data, message becomes attribute */ + if (!tlv_attr) { + results[0] = attr; + return FM10K_SUCCESS; + } + + /* move to start of attribute data */ + attr++; + + /* run through list parsing all attributes */ + while (offset < len) { + attr_id = *attr & FM10K_TLV_ID_MASK; + + if (attr_id < FM10K_TLV_RESULTS_MAX) + err = fm10k_tlv_attr_validate(attr, tlv_attr); + else + err = FM10K_NOT_IMPLEMENTED; + + if (err < 0) + return err; + if (!err) + results[attr_id] = attr; + + /* update offset */ + offset += FM10K_TLV_DWORD_LEN(*attr) * 4; + + /* move to next attribute */ + attr = &attr[FM10K_TLV_DWORD_LEN(*attr)]; + } + + /* we should find ourselves at the end of the list */ + if (offset != len) + return FM10K_ERR_PARAM; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_msg_parse - Parses message header and calls function handler + * @hw: Pointer to hardware structure + * @msg: Pointer to message + * @mbx: Pointer to mailbox information structure + * @func: Function array containing list of message handling functions + * + * This function should be the first function called upon receiving a + * message. The handler will identify the message type and call the correct + * handler for the given message. It will return the value from the function + * call on a recognized message type, otherwise it will return + * FM10K_NOT_IMPLEMENTED on an unrecognized type. + **/ +s32 fm10k_tlv_msg_parse(struct fm10k_hw *hw, u32 *msg, + struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *data) +{ + u32 *results[FM10K_TLV_RESULTS_MAX]; + u32 msg_id; + s32 err; + + DEBUGFUNC("fm10k_tlv_msg_parse"); + + /* verify pointer is not NULL */ + if (!msg || !data) + return FM10K_ERR_PARAM; + + /* verify this is a message and not an attribute */ + if (!(*msg & (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT))) + return FM10K_ERR_PARAM; + + /* grab message ID */ + msg_id = *msg & FM10K_TLV_ID_MASK; + + while (data->id < msg_id) + data++; + + /* if we didn't find it then pass it up as an error */ + if (data->id != msg_id) { + while (data->id != FM10K_TLV_ERROR) + data++; + } + + /* parse the attributes into the results list */ + err = fm10k_tlv_attr_parse(msg, results, data->attr); + if (err < 0) + return err; + + return data->func(hw, results, mbx); +} + +/** + * fm10k_tlv_msg_error - Default handler for unrecognized TLV message IDs + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Unused mailbox pointer + * + * This function is a default handler for unrecognized messages. At a + * a minimum it just indicates that the message requested was + * unimplemented. 
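+ *
+ * (Illustrative sketch added editorially; the table name example_msg_data is
+ * hypothetical.) A message table is expected to contain a terminating
+ * FM10K_TLV_ERROR entry, which fm10k_tlv_msg_parse() falls back to when a
+ * message ID is not recognized, for example:
+ *
+ *	static const struct fm10k_msg_data example_msg_data[] = {
+ *		FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
+ *		FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error),
+ *	};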
+ **/ +s32 fm10k_tlv_msg_error(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + UNREFERENCED_3PARAMETER(hw, results, mbx); + DEBUGOUT1("Unknown message ID %u\n", **results & FM10K_TLV_ID_MASK); + return FM10K_NOT_IMPLEMENTED; +} + +STATIC const unsigned char test_str[] = "fm10k"; +STATIC const unsigned char test_mac[ETH_ALEN] = { 0x12, 0x34, 0x56, + 0x78, 0x9a, 0xbc }; +STATIC const u16 test_vlan = 0x0FED; +STATIC const u64 test_u64 = 0xfedcba9876543210ull; +STATIC const u32 test_u32 = 0x87654321; +STATIC const u16 test_u16 = 0x8765; +STATIC const u8 test_u8 = 0x87; +STATIC const s64 test_s64 = -0x123456789abcdef0ll; +STATIC const s32 test_s32 = -0x1235678; +STATIC const s16 test_s16 = -0x1234; +STATIC const s8 test_s8 = -0x12; +STATIC const __le32 test_le[2] = { FM10K_CPU_TO_LE32(0x12345678), + FM10K_CPU_TO_LE32(0x9abcdef0)}; + +/* The message below is meant to be used as a test message to demonstrate + * how to use the TLV interface and to test the types. Normally this code + * be compiled out by stripping the code wrapped in FM10K_TLV_TEST_MSG + */ +const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[] = { + FM10K_TLV_ATTR_NULL_STRING(FM10K_TEST_MSG_STRING, 80), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_TEST_MSG_MAC_ADDR), + FM10K_TLV_ATTR_U8(FM10K_TEST_MSG_U8), + FM10K_TLV_ATTR_U16(FM10K_TEST_MSG_U16), + FM10K_TLV_ATTR_U32(FM10K_TEST_MSG_U32), + FM10K_TLV_ATTR_U64(FM10K_TEST_MSG_U64), + FM10K_TLV_ATTR_S8(FM10K_TEST_MSG_S8), + FM10K_TLV_ATTR_S16(FM10K_TEST_MSG_S16), + FM10K_TLV_ATTR_S32(FM10K_TEST_MSG_S32), + FM10K_TLV_ATTR_S64(FM10K_TEST_MSG_S64), + FM10K_TLV_ATTR_LE_STRUCT(FM10K_TEST_MSG_LE_STRUCT, 8), + FM10K_TLV_ATTR_NESTED(FM10K_TEST_MSG_NESTED), + FM10K_TLV_ATTR_S32(FM10K_TEST_MSG_RESULT), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_tlv_msg_test_generate_data - Stuff message with data + * @msg: Pointer to message + * @attr_flags: List of flags indicating what attributes to add + * + * This function is meant to load a message buffer with attribute data + **/ +STATIC void fm10k_tlv_msg_test_generate_data(u32 *msg, u32 attr_flags) +{ + DEBUGFUNC("fm10k_tlv_msg_test_generate_data"); + + if (attr_flags & (1 << FM10K_TEST_MSG_STRING)) + fm10k_tlv_attr_put_null_string(msg, FM10K_TEST_MSG_STRING, + test_str); + if (attr_flags & (1 << FM10K_TEST_MSG_MAC_ADDR)) + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_TEST_MSG_MAC_ADDR, + test_mac, test_vlan); + if (attr_flags & (1 << FM10K_TEST_MSG_U8)) + fm10k_tlv_attr_put_u8(msg, FM10K_TEST_MSG_U8, test_u8); + if (attr_flags & (1 << FM10K_TEST_MSG_U16)) + fm10k_tlv_attr_put_u16(msg, FM10K_TEST_MSG_U16, test_u16); + if (attr_flags & (1 << FM10K_TEST_MSG_U32)) + fm10k_tlv_attr_put_u32(msg, FM10K_TEST_MSG_U32, test_u32); + if (attr_flags & (1 << FM10K_TEST_MSG_U64)) + fm10k_tlv_attr_put_u64(msg, FM10K_TEST_MSG_U64, test_u64); + if (attr_flags & (1 << FM10K_TEST_MSG_S8)) + fm10k_tlv_attr_put_s8(msg, FM10K_TEST_MSG_S8, test_s8); + if (attr_flags & (1 << FM10K_TEST_MSG_S16)) + fm10k_tlv_attr_put_s16(msg, FM10K_TEST_MSG_S16, test_s16); + if (attr_flags & (1 << FM10K_TEST_MSG_S32)) + fm10k_tlv_attr_put_s32(msg, FM10K_TEST_MSG_S32, test_s32); + if (attr_flags & (1 << FM10K_TEST_MSG_S64)) + fm10k_tlv_attr_put_s64(msg, FM10K_TEST_MSG_S64, test_s64); + if (attr_flags & (1 << FM10K_TEST_MSG_LE_STRUCT)) + fm10k_tlv_attr_put_le_struct(msg, FM10K_TEST_MSG_LE_STRUCT, + test_le, 8); +} + +/** + * fm10k_tlv_msg_test_create - Create a test message testing all attributes + * @msg: Pointer to message + * @attr_flags: List of flags indicating what attributes to add 
+ * + * This function is meant to load a message buffer with all attribute types + * including a nested attribute. + **/ +void fm10k_tlv_msg_test_create(u32 *msg, u32 attr_flags) +{ + u32 *nest = NULL; + + DEBUGFUNC("fm10k_tlv_msg_test_create"); + + fm10k_tlv_msg_init(msg, FM10K_TLV_MSG_ID_TEST); + + fm10k_tlv_msg_test_generate_data(msg, attr_flags); + + /* check for nested attributes */ + attr_flags >>= FM10K_TEST_MSG_NESTED; + + if (attr_flags) { + nest = fm10k_tlv_attr_nest_start(msg, FM10K_TEST_MSG_NESTED); + + fm10k_tlv_msg_test_generate_data(nest, attr_flags); + + fm10k_tlv_attr_nest_stop(msg); + } +} + +/** + * fm10k_tlv_msg_test - Validate all results on test message receive + * @hw: Pointer to hardware structure + * @results: Pointer array to attributes in the mesage + * @mbx: Pointer to mailbox information structure + * + * This function does a check to verify all attributes match what the test + * message placed in the message buffer. It is the default handler + * for TLV test messages. + **/ +s32 fm10k_tlv_msg_test(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u32 *nest_results[FM10K_TLV_RESULTS_MAX]; + unsigned char result_str[80]; + unsigned char result_mac[ETH_ALEN]; + s32 err = FM10K_SUCCESS; + __le32 result_le[2]; + u16 result_vlan; + u64 result_u64; + u32 result_u32; + u16 result_u16; + u8 result_u8; + s64 result_s64; + s32 result_s32; + s16 result_s16; + s8 result_s8; + u32 reply[3]; + + DEBUGFUNC("fm10k_tlv_msg_test"); + + /* retrieve results of a previous test */ + if (!!results[FM10K_TEST_MSG_RESULT]) + return fm10k_tlv_attr_get_s32(results[FM10K_TEST_MSG_RESULT], + &mbx->test_result); + +parse_nested: + if (!!results[FM10K_TEST_MSG_STRING]) { + err = fm10k_tlv_attr_get_null_string( + results[FM10K_TEST_MSG_STRING], + result_str); + if (!err && memcmp(test_str, result_str, sizeof(test_str))) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_MAC_ADDR]) { + err = fm10k_tlv_attr_get_mac_vlan( + results[FM10K_TEST_MSG_MAC_ADDR], + result_mac, &result_vlan); + if (!err && memcmp(test_mac, result_mac, ETH_ALEN)) + err = FM10K_ERR_INVALID_VALUE; + if (!err && test_vlan != result_vlan) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U8]) { + err = fm10k_tlv_attr_get_u8(results[FM10K_TEST_MSG_U8], + &result_u8); + if (!err && test_u8 != result_u8) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U16]) { + err = fm10k_tlv_attr_get_u16(results[FM10K_TEST_MSG_U16], + &result_u16); + if (!err && test_u16 != result_u16) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U32]) { + err = fm10k_tlv_attr_get_u32(results[FM10K_TEST_MSG_U32], + &result_u32); + if (!err && test_u32 != result_u32) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U64]) { + err = fm10k_tlv_attr_get_u64(results[FM10K_TEST_MSG_U64], + &result_u64); + if (!err && test_u64 != result_u64) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S8]) { + err = fm10k_tlv_attr_get_s8(results[FM10K_TEST_MSG_S8], + &result_s8); + if (!err && test_s8 != result_s8) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S16]) { + err = fm10k_tlv_attr_get_s16(results[FM10K_TEST_MSG_S16], + &result_s16); + if (!err && test_s16 != result_s16) + err = 
FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S32]) { + err = fm10k_tlv_attr_get_s32(results[FM10K_TEST_MSG_S32], + &result_s32); + if (!err && test_s32 != result_s32) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S64]) { + err = fm10k_tlv_attr_get_s64(results[FM10K_TEST_MSG_S64], + &result_s64); + if (!err && test_s64 != result_s64) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_LE_STRUCT]) { + err = fm10k_tlv_attr_get_le_struct( + results[FM10K_TEST_MSG_LE_STRUCT], + result_le, + sizeof(result_le)); + if (!err && memcmp(test_le, result_le, sizeof(test_le))) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + + if (!!results[FM10K_TEST_MSG_NESTED]) { + /* clear any pointers */ + memset(nest_results, 0, sizeof(nest_results)); + + /* parse the nested attributes into the nest results list */ + err = fm10k_tlv_attr_parse(results[FM10K_TEST_MSG_NESTED], + nest_results, + fm10k_tlv_msg_test_attr); + if (err) + goto report_result; + + /* loop back through to the start */ + results = nest_results; + goto parse_nested; + } + +report_result: + /* generate reply with test result */ + fm10k_tlv_msg_init(reply, FM10K_TLV_MSG_ID_TEST); + fm10k_tlv_attr_put_s32(reply, FM10K_TEST_MSG_RESULT, err); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, reply); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_tlv.h b/lib/librte_pmd_fm10k/base/fm10k_tlv.h new file mode 100644 index 0000000..1b7c4d4 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_tlv.h @@ -0,0 +1,199 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _FM10K_TLV_H_ +#define _FM10K_TLV_H_ + +/* forward declaration */ +struct fm10k_msg_data; + +#include "fm10k_type.h" + +/* Message / Argument header format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | Length | Flags | Type / ID | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * The message header format described here is used for messages that are + * passed between the PF and the VF. To allow for messages larger then + * mailbox size we will provide a message with the above header and it + * will be segmented and transported to the mailbox to the other side where + * it is reassembled. It contains the following fields: + * Len: Length of the message in bytes excluding the message header + * Flags: TBD + * Rule: These will be the message/argument types we pass + */ +/* message data header */ +#define FM10K_TLV_ID_SHIFT 0 +#define FM10K_TLV_ID_SIZE 16 +#define FM10K_TLV_ID_MASK ((1u << FM10K_TLV_ID_SIZE) - 1) +#define FM10K_TLV_FLAGS_SHIFT 16 +#define FM10K_TLV_FLAGS_MSG 0x1 +#define FM10K_TLV_FLAGS_SIZE 4 +#define FM10K_TLV_LEN_SHIFT 20 +#define FM10K_TLV_LEN_SIZE 12 + +#define FM10K_TLV_HDR_LEN 4ul +#define FM10K_TLV_LEN_ALIGN_MASK \ + ((FM10K_TLV_HDR_LEN - 1) << FM10K_TLV_LEN_SHIFT) +#define FM10K_TLV_LEN_ALIGN(tlv) \ + (((tlv) + FM10K_TLV_LEN_ALIGN_MASK) & ~FM10K_TLV_LEN_ALIGN_MASK) +#define FM10K_TLV_DWORD_LEN(tlv) \ + ((u16)((FM10K_TLV_LEN_ALIGN(tlv)) >> (FM10K_TLV_LEN_SHIFT + 2)) + 1) + +#define FM10K_TLV_RESULTS_MAX 32 + +enum fm10k_tlv_type { + FM10K_TLV_NULL_STRING, + FM10K_TLV_MAC_ADDR, + FM10K_TLV_BOOL, + FM10K_TLV_UNSIGNED, + FM10K_TLV_SIGNED, + FM10K_TLV_LE_STRUCT, + FM10K_TLV_NESTED, + FM10K_TLV_MAX_TYPE +}; + +#define FM10K_TLV_ERROR (~0u) + +struct fm10k_tlv_attr { + unsigned int id; + enum fm10k_tlv_type type; + u16 len; +}; + +#define FM10K_TLV_ATTR_NULL_STRING(id, len) { id, FM10K_TLV_NULL_STRING, len } +#define FM10K_TLV_ATTR_MAC_ADDR(id) { id, FM10K_TLV_MAC_ADDR, 6 } +#define FM10K_TLV_ATTR_BOOL(id) { id, FM10K_TLV_BOOL, 0 } +#define FM10K_TLV_ATTR_U8(id) { id, FM10K_TLV_UNSIGNED, 1 } +#define FM10K_TLV_ATTR_U16(id) { id, FM10K_TLV_UNSIGNED, 2 } +#define FM10K_TLV_ATTR_U32(id) { id, FM10K_TLV_UNSIGNED, 4 } +#define FM10K_TLV_ATTR_U64(id) { id, FM10K_TLV_UNSIGNED, 8 } +#define FM10K_TLV_ATTR_S8(id) { id, FM10K_TLV_SIGNED, 1 } +#define FM10K_TLV_ATTR_S16(id) { id, FM10K_TLV_SIGNED, 2 } +#define FM10K_TLV_ATTR_S32(id) { id, FM10K_TLV_SIGNED, 4 } +#define FM10K_TLV_ATTR_S64(id) { id, FM10K_TLV_SIGNED, 8 } +#define FM10K_TLV_ATTR_LE_STRUCT(id, len) { id, FM10K_TLV_LE_STRUCT, len } +#define FM10K_TLV_ATTR_NESTED(id) { id, FM10K_TLV_NESTED } +#define FM10K_TLV_ATTR_LAST { FM10K_TLV_ERROR } + +struct fm10k_msg_data { + unsigned int id; + const struct fm10k_tlv_attr *attr; + s32 (*func)(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +}; + +#define FM10K_MSG_HANDLER(id, attr, func) { id, attr, func } + +s32 fm10k_tlv_msg_init(u32 *, u16); +s32 fm10k_tlv_attr_put_null_string(u32 *, u16, const unsigned char *); +s32 fm10k_tlv_attr_get_null_string(u32 *, unsigned char *); +s32 fm10k_tlv_attr_put_mac_vlan(u32 *, u16, const u8 *, u16); +s32 fm10k_tlv_attr_get_mac_vlan(u32 *, u8 *, u16 *); +s32 fm10k_tlv_attr_put_bool(u32 *, u16); +s32 fm10k_tlv_attr_put_value(u32 *, u16, s64, u32); +#define fm10k_tlv_attr_put_u8(msg, attr_id, val) \ + 
fm10k_tlv_attr_put_value(msg, attr_id, val, 1) +#define fm10k_tlv_attr_put_u16(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 2) +#define fm10k_tlv_attr_put_u32(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 4) +#define fm10k_tlv_attr_put_u64(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 8) +#define fm10k_tlv_attr_put_s8(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 1) +#define fm10k_tlv_attr_put_s16(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 2) +#define fm10k_tlv_attr_put_s32(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 4) +#define fm10k_tlv_attr_put_s64(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 8) +s32 fm10k_tlv_attr_get_value(u32 *, void *, u32); +#define fm10k_tlv_attr_get_u8(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u8)) +#define fm10k_tlv_attr_get_u16(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u16)) +#define fm10k_tlv_attr_get_u32(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u32)) +#define fm10k_tlv_attr_get_u64(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u64)) +#define fm10k_tlv_attr_get_s8(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s8)) +#define fm10k_tlv_attr_get_s16(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s16)) +#define fm10k_tlv_attr_get_s32(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s32)) +#define fm10k_tlv_attr_get_s64(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s64)) +s32 fm10k_tlv_attr_put_le_struct(u32 *, u16, const void *, u32); +s32 fm10k_tlv_attr_get_le_struct(u32 *, void *, u32); +u32 *fm10k_tlv_attr_nest_start(u32 *, u16); +s32 fm10k_tlv_attr_nest_stop(u32 *); +s32 fm10k_tlv_attr_parse(u32 *, u32 **, const struct fm10k_tlv_attr *); +s32 fm10k_tlv_msg_parse(struct fm10k_hw *, u32 *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *); +s32 fm10k_tlv_msg_error(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *); + +#define FM10K_TLV_MSG_ID_TEST 0 + +enum fm10k_tlv_test_attr_id { + FM10K_TEST_MSG_UNSET, + FM10K_TEST_MSG_STRING, + FM10K_TEST_MSG_MAC_ADDR, + FM10K_TEST_MSG_U8, + FM10K_TEST_MSG_U16, + FM10K_TEST_MSG_U32, + FM10K_TEST_MSG_U64, + FM10K_TEST_MSG_S8, + FM10K_TEST_MSG_S16, + FM10K_TEST_MSG_S32, + FM10K_TEST_MSG_S64, + FM10K_TEST_MSG_LE_STRUCT, + FM10K_TEST_MSG_NESTED, + FM10K_TEST_MSG_RESULT, + FM10K_TEST_MSG_MAX +}; + +extern const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[]; +void fm10k_tlv_msg_test_create(u32 *, u32); +s32 fm10k_tlv_msg_test(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); + +#define FM10K_TLV_MSG_TEST_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_TLV_MSG_ID_TEST, fm10k_tlv_msg_test_attr, func) +#define FM10K_TLV_MSG_ERROR_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_TLV_ERROR, NULL, func) +#endif /* _FM10K_MSG_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_type.h b/lib/librte_pmd_fm10k/base/fm10k_type.h new file mode 100644 index 0000000..e7bf28d --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_type.h @@ -0,0 +1,925 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. 
Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_TYPE_H_ +#define _FM10K_TYPE_H_ + +/* forward declaration */ +struct fm10k_hw; + +#include "fm10k_osdep.h" +#include "fm10k_mbx.h" + +#define FM10K_INTEL_VENDOR_ID 0x8086 +#define FM10K_DEV_ID_PF 0x15A4 +#define FM10K_DEV_ID_VF 0x15A5 + +#define FM10K_MAX_QUEUES 256 +#define FM10K_MAX_QUEUES_PF 128 +#define FM10K_MAX_QUEUES_POOL 16 + +#define FM10K_48_BIT_MASK 0x0000FFFFFFFFFFFFull +#define FM10K_STAT_VALID 0x80000000 + +/* PCI Bus Info */ +#define FM10K_PCIE_LINK_CAP 0x7C +#define FM10K_PCIE_LINK_STATUS 0x82 +#define FM10K_PCIE_LINK_WIDTH 0x3F0 +#define FM10K_PCIE_LINK_WIDTH_1 0x10 +#define FM10K_PCIE_LINK_WIDTH_2 0x20 +#define FM10K_PCIE_LINK_WIDTH_4 0x40 +#define FM10K_PCIE_LINK_WIDTH_8 0x80 +#define FM10K_PCIE_LINK_SPEED 0xF +#define FM10K_PCIE_LINK_SPEED_2500 0x1 +#define FM10K_PCIE_LINK_SPEED_5000 0x2 +#define FM10K_PCIE_LINK_SPEED_8000 0x3 + +/* PCIe payload size */ +#define FM10K_PCIE_DEV_CAP 0x74 +#define FM10K_PCIE_DEV_CAP_PAYLOAD 0x07 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_128 0x00 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_256 0x01 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_512 0x02 +#define FM10K_PCIE_DEV_CTRL 0x78 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD 0xE0 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_128 0x00 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_256 0x20 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_512 0x40 + +/* PCIe MSI-X Capability info */ +#define FM10K_PCI_MSIX_MSG_CTRL 0xB2 +#define FM10K_PCI_MSIX_MSG_CTRL_TBL_SZ_MASK 0x7FF +#define FM10K_MAX_MSIX_VECTORS 256 +#define FM10K_MAX_VECTORS_PF 256 +#define FM10K_MAX_VECTORS_POOL 32 + +/* PCIe SR-IOV Info */ +#define FM10K_PCIE_SRIOV_CTRL 0x190 +#define FM10K_PCIE_SRIOV_CTRL_VFARI 0x10 + +#define FM10K_SUCCESS 0 +#define FM10K_ERR_DEVICE_NOT_SUPPORTED -1 +#define FM10K_ERR_PARAM -2 +#define FM10K_ERR_NO_RESOURCES -3 +#define FM10K_ERR_REQUESTS_PENDING -4 +#define FM10K_ERR_RESET_REQUESTED -5 +#define FM10K_ERR_DMA_PENDING -6 +#define FM10K_ERR_RESET_FAILED -7 +#define FM10K_ERR_INVALID_MAC_ADDR -8 +#define FM10K_ERR_INVALID_VALUE -9 +#define FM10K_NOT_IMPLEMENTED 0x7FFFFFFF + +#define UNREFERENCED_XPARAMETER +#define UNREFERENCED_1PARAMETER(_p) (_p) +#define UNREFERENCED_2PARAMETER(_p, _q) do { (_p); (_q); } while (0) +#define UNREFERENCED_3PARAMETER(_p, _q, _r) do { (_p); (_q); (_r); } while (0) + +/* Start of 
PF registers */ +#define FM10K_CTRL 0x0000 +#define FM10K_CTRL_BAR4_ALLOWED 0x00000004 + +#define FM10K_CTRL_EXT 0x0001 +#define FM10K_CTRL_EXT_NS_DIS 0x00000001 +#define FM10K_CTRL_EXT_RO_DIS 0x00000002 +#define FM10K_CTRL_EXT_SWITCH_LOOPBACK 0x00000004 +#define FM10K_EXVET 0x0002 +#define FM10K_EXVET_ETHERTYPE_MASK 0x000000FF +#define FM10K_EXVET_TAG_SIZE_SHIFT 16 +#define FM10K_EXVET_AFTER_VLAN 0x00040000 +#define FM10K_GCR 0x0003 +#define FM10K_FACTPS 0x0004 +#define FM10K_GCR_EXT 0x0005 + +/* Interrupt control registers */ +#define FM10K_EICR 0x0006 +#define FM10K_EICR_PCA_FAULT 0x00000001 +#define FM10K_EICR_THI_FAULT 0x00000004 +#define FM10K_EICR_FUM_FAULT 0x00000020 +#define FM10K_EICR_FAULT_MASK 0x0000003F +#define FM10K_EICR_MAILBOX 0x00000040 +#define FM10K_EICR_SWITCHREADY 0x00000080 +#define FM10K_EICR_SWITCHNOTREADY 0x00000100 +#define FM10K_EICR_SWITCHINTERRUPT 0x00000200 +#define FM10K_EICR_SRAMERROR 0x00000400 +#define FM10K_EICR_VFLR 0x00000800 +#define FM10K_EICR_MAXHOLDTIME 0x00001000 +#define FM10K_EIMR 0x0007 +#define FM10K_EIMR_PCA_FAULT 0x00000001 +#define FM10K_EIMR_THI_FAULT 0x00000010 +#define FM10K_EIMR_FUM_FAULT 0x00000400 +#define FM10K_EIMR_MAILBOX 0x00001000 +#define FM10K_EIMR_SWITCHREADY 0x00004000 +#define FM10K_EIMR_SWITCHNOTREADY 0x00010000 +#define FM10K_EIMR_SWITCHINTERRUPT 0x00040000 +#define FM10K_EIMR_SRAMERROR 0x00100000 +#define FM10K_EIMR_VFLR 0x00400000 +#define FM10K_EIMR_MAXHOLDTIME 0x01000000 +#define FM10K_EIMR_ALL 0x55555555 +#define FM10K_EIMR_DISABLE(NAME) ((FM10K_EIMR_ ## NAME) << 0) +#define FM10K_EIMR_ENABLE(NAME) ((FM10K_EIMR_ ## NAME) << 1) +#define FM10K_FAULT_ADDR_LO 0x0 +#define FM10K_FAULT_ADDR_HI 0x1 +#define FM10K_FAULT_SPECINFO 0x2 +#define FM10K_FAULT_FUNC 0x3 +#define FM10K_FAULT_SIZE 0x4 +#define FM10K_FAULT_FUNC_VALID 0x00008000 +#define FM10K_FAULT_FUNC_PF 0x00004000 +#define FM10K_FAULT_FUNC_VF_MASK 0x00003F00 +#define FM10K_FAULT_FUNC_VF_SHIFT 8 +#define FM10K_FAULT_FUNC_TYPE_MASK 0x000000FF + +#define FM10K_PCA_FAULT 0x0008 +#define FM10K_THI_FAULT 0x0010 +#define FM10K_FUM_FAULT 0x001C + +/* Rx queue timeout indicator */ +#define FM10K_MAXHOLDQ(_n) ((_n) + 0x0020) + +/* Switch Manager info */ +#define FM10K_SM_AREA(_n) ((_n) + 0x0028) + +/* GLORT mapping registers */ +#define FM10K_DGLORTMAP(_n) ((_n) + 0x0030) +#define FM10K_DGLORT_COUNT 8 +#define FM10K_DGLORTMAP_MASK_SHIFT 16 +#define FM10K_DGLORTMAP_ANY 0x00000000 +#define FM10K_DGLORTMAP_NONE 0x0000FFFF +#define FM10K_DGLORTMAP_ZERO 0xFFFF0000 +#define FM10K_DGLORTDEC(_n) ((_n) + 0x0038) +#define FM10K_DGLORTDEC_VSILENGTH_SHIFT 4 +#define FM10K_DGLORTDEC_VSIBASE_SHIFT 7 +#define FM10K_DGLORTDEC_PCLENGTH_SHIFT 14 +#define FM10K_DGLORTDEC_QBASE_SHIFT 16 +#define FM10K_DGLORTDEC_RSSLENGTH_SHIFT 24 +#define FM10K_DGLORTDEC_INNERRSS_ENABLE 0x08000000 +#define FM10K_TUNNEL_CFG 0x0040 +#define FM10K_TUNNEL_CFG_NVGRE_SHIFT 16 +#define FM10K_TUNNEL_CFG_GENEVE 0x0041 +#define FM10K_SWPRI_MAP(_n) ((_n) + 0x0050) +#define FM10K_SWPRI_MAX 16 +#define FM10K_RSSRK(_n, _m) (((_n) * 0x10) + (_m) + 0x0800) +#define FM10K_RSSRK_SIZE 10 +#define FM10K_RSSRK_ENTRIES_PER_REG 4 +#define FM10K_RETA(_n, _m) (((_n) * 0x20) + (_m) + 0x1000) +#define FM10K_RETA_SIZE 32 +#define FM10K_RETA_ENTRIES_PER_REG 4 +#define FM10K_MAX_RSS_INDICES 128 + +/* Rate limiting registers */ +#define FM10K_TC_CREDIT(_n) ((_n) + 0x2000) +#define FM10K_TC_CREDIT_CREDIT_MASK 0x001FFFFF +#define FM10K_TC_MAXCREDIT(_n) ((_n) + 0x2040) +#define FM10K_TC_MAXCREDIT_64K 0x00010000 +#define FM10K_TC_RATE(_n) ((_n) + 
0x2080) +#define FM10K_TC_RATE_QUANTA_MASK 0x0000FFFF +#define FM10K_TC_RATE_INTERVAL_4US_GEN1 0x00020000 +#define FM10K_TC_RATE_INTERVAL_4US_GEN2 0x00040000 +#define FM10K_TC_RATE_INTERVAL_4US_GEN3 0x00080000 +#define FM10K_TC_RATE_STATUS 0x20C0 +#define FM10K_PAUSE 0x20C2 + +/* DMA control registers */ +#define FM10K_DMA_CTRL 0x20C3 +#define FM10K_DMA_CTRL_TX_ENABLE 0x00000001 +#define FM10K_DMA_CTRL_TX_HOST_PENDING 0x00000002 +#define FM10K_DMA_CTRL_TX_DATA 0x00000004 +#define FM10K_DMA_CTRL_TX_ACTIVE 0x00000008 +#define FM10K_DMA_CTRL_RX_ENABLE 0x00000010 +#define FM10K_DMA_CTRL_RX_HOST_PENDING 0x00000020 +#define FM10K_DMA_CTRL_RX_DATA 0x00000040 +#define FM10K_DMA_CTRL_RX_ACTIVE 0x00000080 +#define FM10K_DMA_CTRL_RX_DESC_SIZE 0x00000100 +#define FM10K_DMA_CTRL_MINMSS_SHIFT 9 +#define FM10K_DMA_CTRL_MINMSS_64 0x00008000 +#define FM10K_DMA_CTRL_MAX_HOLD_TIME_SHIFT 23 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN3 0x04800000 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN2 0x04000000 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN1 0x03800000 +#define FM10K_DMA_CTRL_DATAPATH_RESET 0x20000000 +#define FM10K_DMA_CTRL_MAXNUMOFQ_MASK 0xC0000000 +#define FM10K_DMA_CTRL_32_DESC 0x00000000 +#define FM10K_DMA_CTRL_64_DESC 0x40000000 +#define FM10K_DMA_CTRL_128_DESC 0x80000000 + +#define FM10K_DMA_CTRL2 0x20C4 +#define FM10K_DMA_CTRL2_TX_FRAME_SPACING_SHIFT 5 +#define FM10K_DMA_CTRL2_SWITCH_READY 0x00002000 +#define FM10K_DMA_CTRL2_RX_DESC_READ_PRIO_SHIFT 14 +#define FM10K_DMA_CTRL2_TX_DESC_READ_PRIO_SHIFT 17 +#define FM10K_DMA_CTRL2_TX_DATA_READ_PRIO_SHIFT 20 + +/* TSO flags configuration + * First packet contains all flags except for fin and psh + * Middle packet contains only urg and ack + * Last packet contains urg, ack, fin, and psh + */ +#define FM10K_TSO_FLAGS_LOW 0x00300FF6 +#define FM10K_TSO_FLAGS_HI 0x00000039 +#define FM10K_DTXTCPFLGL 0x20C5 +#define FM10K_DTXTCPFLGH 0x20C6 + +#define FM10K_TPH_CTRL 0x20C7 +#define FM10K_TPH_CTRL_DISABLE_READ_HINT 0x00000080 +#define FM10K_MRQC(_n) ((_n) + 0x2100) +#define FM10K_MRQC_TCP_IPV4 0x00000001 +#define FM10K_MRQC_IPV4 0x00000002 +#define FM10K_MRQC_IPV6 0x00000010 +#define FM10K_MRQC_TCP_IPV6 0x00000020 +#define FM10K_MRQC_UDP_IPV4 0x00000040 +#define FM10K_MRQC_UDP_IPV6 0x00000080 + +#define FM10K_TQMAP(_n) ((_n) + 0x2800) +#define FM10K_TQMAP_TABLE_SIZE 2048 +#define FM10K_RQMAP(_n) ((_n) + 0x3000) +#define FM10K_RQMAP_TABLE_SIZE 2048 + +/* Hardware Statistics */ +#define FM10K_STATS_TIMEOUT 0x3800 +#define FM10K_STATS_UR 0x3801 +#define FM10K_STATS_CA 0x3802 +#define FM10K_STATS_UM 0x3803 +#define FM10K_STATS_XEC 0x3804 +#define FM10K_STATS_VLAN_DROP 0x3805 +#define FM10K_STATS_LOOPBACK_DROP 0x3806 +#define FM10K_STATS_NODESC_DROP 0x3807 + +/* Timesync registers */ +#define FM10K_RRTIME_CFG 0x3808 +#define FM10K_RRTIME_LIMIT(_n) ((_n) + 0x380C) +#define FM10K_RRTIME_COUNT(_n) ((_n) + 0x3810) +#define FM10K_SYSTIME 0x3814 +#define FM10K_SYSTIME0 0x3816 +#define FM10K_SYSTIME_CFG 0x3818 +#define FM10K_SYSTIME_CFG_STEP_MASK 0x0000000F + +/* PCIe state registers */ +#define FM10K_PFVFBME(_n) ((_n) + 0x381A) +#define FM10K_PHYADDR 0x381C + +/* Rx ring registers */ +#define FM10K_RDBAL(_n) ((0x40 * (_n)) + 0x4000) +#define FM10K_RDBAH(_n) ((0x40 * (_n)) + 0x4001) +#define FM10K_RDLEN(_n) ((0x40 * (_n)) + 0x4002) +#define FM10K_TPH_RXCTRL(_n) ((0x40 * (_n)) + 0x4003) +#define FM10K_TPH_RXCTRL_DESC_TPHEN 0x00000020 +#define FM10K_TPH_RXCTRL_HDR_TPHEN 0x00000040 +#define FM10K_TPH_RXCTRL_DATA_TPHEN 0x00000080 +#define FM10K_TPH_RXCTRL_DESC_RROEN 0x00000200 +#define 
FM10K_TPH_RXCTRL_DATA_WROEN 0x00002000 +#define FM10K_TPH_RXCTRL_HDR_WROEN 0x00008000 +#define FM10K_RDH(_n) ((0x40 * (_n)) + 0x4004) +#define FM10K_RDT(_n) ((0x40 * (_n)) + 0x4005) +#define FM10K_RXQCTL(_n) ((0x40 * (_n)) + 0x4006) +#define FM10K_RXQCTL_ENABLE 0x00000001 +#define FM10K_RXQCTL_PF 0x000000FC +#define FM10K_RXQCTL_VF_SHIFT 2 +#define FM10K_RXQCTL_VF 0x00000100 +#define FM10K_RXQCTL_ID_MASK (FM10K_RXQCTL_PF | FM10K_RXQCTL_VF) +#define FM10K_RXDCTL(_n) ((0x40 * (_n)) + 0x4007) +#define FM10K_RXDCTL_WRITE_BACK_MIN_DELAY 0x00000001 +#define FM10K_RXDCTL_WRITE_BACK_IMM 0x00000100 +#define FM10K_RXDCTL_DROP_ON_EMPTY 0x00000200 +#define FM10K_RXINT(_n) ((0x40 * (_n)) + 0x4008) +#define FM10K_RXINT_TIMER_SHIFT 8 +#define FM10K_SRRCTL(_n) ((0x40 * (_n)) + 0x4009) +#define FM10K_SRRCTL_BSIZEPKT_SHIFT 8 /* shift _right_ */ +#define FM10K_SRRCTL_BSIZEHDR_SHIFT 2 /* shift _left_ */ +#define FM10K_SRRCTL_BSIZEHDR_MASK 0x00003F00 +#define FM10K_SRRCTL_DESCTYPE_HDR_SPLIT 0x00004000 +#define FM10K_SRRCTL_DESCTYPE_SIZE_SPLIT 0x00008000 +#define FM10K_SRRCTL_PSRTYPE_INNER_TCPHDR 0x00010000 +#define FM10K_SRRCTL_PSRTYPE_INNER_UDPHDR 0x00020000 +#define FM10K_SRRCTL_PSRTYPE_INNER_IPV4HDR 0x00040000 +#define FM10K_SRRCTL_PSRTYPE_INNER_IPV6HDR 0x00080000 +#define FM10K_SRRCTL_PSRTYPE_INNER_L2HDR 0x00100000 +#define FM10K_SRRCTL_PSRTYPE_ENCAPHDR 0x00200000 +#define FM10K_SRRCTL_PSRTYPE_TCPHDR 0x00400000 +#define FM10K_SRRCTL_PSRTYPE_UDPHDR 0x00800000 +#define FM10K_SRRCTL_PSRTYPE_IPV4HDR 0x01000000 +#define FM10K_SRRCTL_PSRTYPE_IPV6HDR 0x02000000 +#define FM10K_SRRCTL_PSRTYPE_L2HDR 0x04000000 +#define FM10K_SRRCTL_LOOPBACK_SUPPRESS 0x40000000 +#define FM10K_SRRCTL_BUFFER_CHAINING_EN 0x80000000 + +/* Rx Statistics */ +#define FM10K_QPRC(_n) ((0x40 * (_n)) + 0x400A) +#define FM10K_QPRDC(_n) ((0x40 * (_n)) + 0x400B) +#define FM10K_QBRC_L(_n) ((0x40 * (_n)) + 0x400C) +#define FM10K_QBRC_H(_n) ((0x40 * (_n)) + 0x400D) + +/* Rx GLORT register */ +#define FM10K_RX_SGLORT(_n) ((0x40 * (_n)) + 0x400E) + +/* Tx ring registers */ +#define FM10K_TDBAL(_n) ((0x40 * (_n)) + 0x8000) +#define FM10K_TDBAH(_n) ((0x40 * (_n)) + 0x8001) +#define FM10K_TDLEN(_n) ((0x40 * (_n)) + 0x8002) +#define FM10K_TPH_TXCTRL(_n) ((0x40 * (_n)) + 0x8003) +#define FM10K_TPH_TXCTRL_DESC_TPHEN 0x00000020 +#define FM10K_TPH_TXCTRL_DESC_RROEN 0x00000200 +#define FM10K_TPH_TXCTRL_DESC_WROEN 0x00000800 +#define FM10K_TPH_TXCTRL_DATA_RROEN 0x00002000 +#define FM10K_TDH(_n) ((0x40 * (_n)) + 0x8004) +#define FM10K_TDT(_n) ((0x40 * (_n)) + 0x8005) +#define FM10K_TXDCTL(_n) ((0x40 * (_n)) + 0x8006) +#define FM10K_TXDCTL_ENABLE 0x00004000 +#define FM10K_TXDCTL_MAX_TIME_SHIFT 16 +#define FM10K_TXDCTL_PUSH_DESC 0x10000000 +#define FM10K_TXQCTL(_n) ((0x40 * (_n)) + 0x8007) +#define FM10K_TXQCTL_PF 0x0000003F +#define FM10K_TXQCTL_VF 0x00000040 +#define FM10K_TXQCTL_ID_MASK (FM10K_TXQCTL_PF | FM10K_TXQCTL_VF) +#define FM10K_TXQCTL_PC_SHIFT 7 +#define FM10K_TXQCTL_PC_MASK 0x00000380 +#define FM10K_TXQCTL_TC_SHIFT 10 +#define FM10K_TXQCTL_TC_MASK 0x0000FC00 +#define FM10K_TXQCTL_VID_SHIFT 16 +#define FM10K_TXQCTL_VID_MASK 0x0FFF0000 +#define FM10K_TXQCTL_UNLIMITED_BW 0x10000000 +#define FM10K_TXQCTL_PUSHMODEDIS 0x20000000 +#define FM10K_TXINT(_n) ((0x40 * (_n)) + 0x8008) +#define FM10K_TXINT_TIMER_SHIFT 8 + +/* Tx Statistics */ +#define FM10K_QPTC(_n) ((0x40 * (_n)) + 0x8009) +#define FM10K_QBTC_L(_n) ((0x40 * (_n)) + 0x800A) +#define FM10K_QBTC_H(_n) ((0x40 * (_n)) + 0x800B) + +/* Tx Push registers */ +#define FM10K_TQDLOC(_n) ((0x40 * (_n)) + 
0x800C) +#define FM10K_TQDLOC_BASE_32_DESC 0x08 +#define FM10K_TQDLOC_BASE_64_DESC 0x10 +#define FM10K_TQDLOC_BASE_128_DESC 0x20 +#define FM10K_TQDLOC_SIZE_32_DESC 0x00050000 +#define FM10K_TQDLOC_SIZE_64_DESC 0x00060000 +#define FM10K_TQDLOC_SIZE_128_DESC 0x00070000 +#define FM10K_TQDLOC_SIZE_SHIFT 16 +#define FM10K_TX_DCACHE(_n, _m) ((0x400 * (_n)) + (0x4 * (_m)) + 0x40000) + +/* Tx GLORT registers */ +#define FM10K_TX_SGLORT(_n) ((0x40 * (_n)) + 0x800D) +#define FM10K_PFVTCTL(_n) ((0x40 * (_n)) + 0x800E) +#define FM10K_PFVTCTL_FTAG_DESC_ENABLE 0x00000001 + +/* Interrupt moderation and control registers */ +#define FM10K_PBACL(_n) ((_n) + 0x10000) +#define FM10K_INT_MAP(_n) ((_n) + 0x10080) +#define FM10K_INT_MAP_TIMER0 0x00000000 +#define FM10K_INT_MAP_TIMER1 0x00000100 +#define FM10K_INT_MAP_IMMEDIATE 0x00000200 +#define FM10K_INT_MAP_DISABLE 0x00000300 +#define FM10K_MSIX_VECTOR_ADDR_LO(_n) ((0x4 * (_n)) + 0x11000) +#define FM10K_MSIX_VECTOR_ADDR_HI(_n) ((0x4 * (_n)) + 0x11001) +#define FM10K_MSIX_VECTOR_DATA(_n) ((0x4 * (_n)) + 0x11002) +#define FM10K_MSIX_VECTOR_MASK(_n) ((0x4 * (_n)) + 0x11003) +#define FM10K_INT_CTRL 0x12000 +#define FM10K_INT_CTRL_ENABLEMODERATOR 0x00000400 +#define FM10K_ITR(_n) ((_n) + 0x12400) +#define FM10K_ITR_INTERVAL1_SHIFT 12 +#define FM10K_ITR_TIMER0_EXPIRED 0x01000000 +#define FM10K_ITR_TIMER1_EXPIRED 0x02000000 +#define FM10K_ITR_PENDING0 0x04000000 +#define FM10K_ITR_PENDING1 0x08000000 +#define FM10K_ITR_PENDING2 0x10000000 +#define FM10K_ITR_AUTOMASK 0x20000000 +#define FM10K_ITR_MASK_SET 0x40000000 +#define FM10K_ITR_MASK_CLEAR 0x80000000 +#define FM10K_ITR2(_n) ((0x2 * (_n)) + 0x12800) +#define FM10K_ITR2_LP(_n) ((0x2 * (_n)) + 0x12801) +#define FM10K_ITR_REG_COUNT 768 +#define FM10K_ITR_REG_COUNT_PF 256 + +/* Switch manager interrupt registers */ +#define FM10K_IP 0x13000 +#define FM10K_IP_HOT_RESET 0x00000001 +#define FM10K_IP_DEVICE_STATE_CHANGE 0x00000002 +#define FM10K_IP_MAILBOX 0x00000004 +#define FM10K_IP_VPD_REQUEST 0x00000008 +#define FM10K_IP_SRAMERROR 0x00000010 +#define FM10K_IP_PFLR 0x00000020 +#define FM10K_IP_DATAPATHRESET 0x00000040 +#define FM10K_IP_OUTOFRESET 0x00000080 +#define FM10K_IP_NOTINRESET 0x00000100 +#define FM10K_IP_TIMEOUT 0x00000200 +#define FM10K_IP_VFLR 0x00000400 +#define FM10K_IM 0x13001 +#define FM10K_IB 0x13002 +#define FM10K_SRAM_IP 0x13003 +#define FM10K_SRAM_IM 0x13004 + +/* VLAN registers */ +#define FM10K_VLAN_TABLE(_n, _m) ((0x80 * (_n)) + (_m) + 0x14000) +#define FM10K_VLAN_TABLE_SIZE 128 + +/* VLAN specific message offsets */ +#define FM10K_VLAN_TABLE_VID_MAX 4096 +#define FM10K_VLAN_TABLE_VSI_MAX 64 +#define FM10K_VLAN_LENGTH_SHIFT 16 +#define FM10K_VLAN_CLEAR (1 << 15) +#define FM10K_VLAN_ALL \ + ((FM10K_VLAN_TABLE_VID_MAX - 1) << FM10K_VLAN_LENGTH_SHIFT) + +/* VF FLR event notification registers */ +#define FM10K_PFVFLRE(_n) ((0x1 * (_n)) + 0x18844) +#define FM10K_PFVFLREC(_n) ((0x1 * (_n)) + 0x18846) + +/* Defines for size of uncacheable and write-combining memories */ +#define FM10K_UC_ADDR_START 0x000000 /* start of standard regs */ +#define FM10K_WC_ADDR_START 0x100000 /* start of Tx Desc Cache */ +#define FM10K_DBI_ADDR_START 0x200000 /* start of debug registers */ +#define FM10K_UC_ADDR_SIZE (FM10K_WC_ADDR_START - FM10K_UC_ADDR_START) +#define FM10K_WC_ADDR_SIZE (FM10K_DBI_ADDR_START - FM10K_WC_ADDR_START) + +/* Define timeouts for resets and disables */ +#define FM10K_QUEUE_DISABLE_TIMEOUT 100 +#define FM10K_RESET_TIMEOUT 150 + +/* Maximum supported combined inner and outer header length for 
encapsulation */ +#define FM10K_TUNNEL_HEADER_LENGTH 184 + +/* VF registers */ +#define FM10K_VFCTRL 0x00000 +#define FM10K_VFCTRL_RST 0x00000008 +#define FM10K_VFINT_MAP 0x00030 +#define FM10K_VFSYSTIME 0x00040 +#define FM10K_VFITR(_n) ((_n) + 0x00060) +#define FM10K_VFPBACL(_n) ((_n) + 0x00008) + +#ifndef ETH_ALEN +#define ETH_ALEN 6 +#endif /* ETH_ALEN */ + +/* make certain address is not 0 */ +#define FM10K_IS_ZERO_ETHER_ADDR(addr) \ +(!((addr)[0] | (addr)[1] | (addr)[2] | (addr)[3] | (addr)[4] | (addr)[5])) + +#define FM10K_IS_MULTICAST_ETHER_ADDR(addr) ((addr)[0] & 0x1) + +/* make certain address is not multicast or 0 */ +#define FM10K_IS_VALID_ETHER_ADDR(addr) \ +(!FM10K_IS_MULTICAST_ETHER_ADDR(addr) && !FM10K_IS_ZERO_ETHER_ADDR(addr)) + +enum fm10k_int_source { + fm10k_int_Mailbox = 0, + fm10k_int_PCIeFault = 1, + fm10k_int_SwitchUpDown = 2, + fm10k_int_SwitchEvent = 3, + fm10k_int_SRAM = 4, + fm10k_int_VFLR = 5, + fm10k_int_MaxHoldTime = 6, + fm10k_int_sources_max_pf +}; + +/* PCIe bus speeds */ +enum fm10k_bus_speed { + fm10k_bus_speed_unknown = 0, + fm10k_bus_speed_2500 = 2500, + fm10k_bus_speed_5000 = 5000, + fm10k_bus_speed_8000 = 8000, + fm10k_bus_speed_reserved +}; + +/* PCIe bus widths */ +enum fm10k_bus_width { + fm10k_bus_width_unknown = 0, + fm10k_bus_width_pcie_x1 = 1, + fm10k_bus_width_pcie_x2 = 2, + fm10k_bus_width_pcie_x4 = 4, + fm10k_bus_width_pcie_x8 = 8, + fm10k_bus_width_reserved +}; + +/* PCIe payload sizes */ +enum fm10k_bus_payload { + fm10k_bus_payload_unknown = 0, + fm10k_bus_payload_128 = 1, + fm10k_bus_payload_256 = 2, + fm10k_bus_payload_512 = 3, + fm10k_bus_payload_reserved +}; + +/* Bus parameters */ +struct fm10k_bus_info { + enum fm10k_bus_speed speed; + enum fm10k_bus_width width; + enum fm10k_bus_payload payload; +}; + +/* Statistics related declarations */ +struct fm10k_hw_stat { + u64 count; + u32 base_l; + u32 base_h; +}; + +struct fm10k_hw_stats_q { + struct fm10k_hw_stat tx_bytes; + struct fm10k_hw_stat tx_packets; +#define tx_stats_idx tx_packets.base_h + struct fm10k_hw_stat rx_bytes; + struct fm10k_hw_stat rx_packets; +#define rx_stats_idx rx_packets.base_h + struct fm10k_hw_stat rx_drops; +}; + +struct fm10k_hw_stats { + struct fm10k_hw_stat timeout; +#define stats_idx timeout.base_h + struct fm10k_hw_stat ur; + struct fm10k_hw_stat ca; + struct fm10k_hw_stat um; + struct fm10k_hw_stat xec; + struct fm10k_hw_stat vlan_drop; + struct fm10k_hw_stat loopback_drop; + struct fm10k_hw_stat nodesc_drop; + struct fm10k_hw_stats_q q[FM10K_MAX_QUEUES_PF]; +}; + +/* Establish DGLORT feature priority */ +enum fm10k_dglortdec_idx { + fm10k_dglort_default = 0, + fm10k_dglort_vf_rsvd0 = 1, + fm10k_dglort_vf_rss = 2, + fm10k_dglort_pf_rsvd0 = 3, + fm10k_dglort_pf_queue = 4, + fm10k_dglort_pf_vsi = 5, + fm10k_dglort_pf_rsvd1 = 6, + fm10k_dglort_pf_rss = 7 +}; + +struct fm10k_dglort_cfg { + u16 glort; /* GLORT base */ + u16 queue_b; /* Base value for queue */ + u8 vsi_b; /* Base value for VSI */ + u8 idx; /* index of DGLORTDEC entry */ + u8 rss_l; /* RSS indices */ + u8 pc_l; /* Priority Class indices */ + u8 vsi_l; /* Number of bits from GLORT used to determine VSI */ + u8 queue_l; /* Number of bits from GLORT used to determine queue */ + u8 shared_l; /* Ignored bits from GLORT resulting in shared VSI */ + u8 inner_rss; /* Boolean value if inner header is used for RSS */ +}; + +enum fm10k_pca_fault { + PCA_NO_FAULT, + PCA_UNMAPPED_ADDR, + PCA_BAD_QACCESS_PF, + PCA_BAD_QACCESS_VF, + PCA_MALICIOUS_REQ, + PCA_POISONED_TLP, + PCA_TLP_ABORT, + __PCA_MAX +}; + 
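+/* Editorial sketch, not part of the original patch: minimal use of the
+ * address checks defined earlier in this header. The helper name is
+ * hypothetical. A locally administered unicast address such as
+ * 02:00:00:00:00:01 passes, while the all-zero address and any address
+ * with the multicast bit set are rejected.
+ */
+static inline bool fm10k_example_addr_usable(const u8 addr[ETH_ALEN])
+{
+	return FM10K_IS_VALID_ETHER_ADDR(addr);
+}
+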
+enum fm10k_thi_fault { + THI_NO_FAULT, + THI_MAL_DIS_Q_FAULT, + __THI_MAX +}; + +enum fm10k_fum_fault { + FUM_NO_FAULT, + FUM_UNMAPPED_ADDR, + FUM_POISONED_TLP, + FUM_BAD_VF_QACCESS, + FUM_ADD_DECODE_ERR, + FUM_RO_ERROR, + FUM_QPRC_CRC_ERROR, + FUM_CSR_TIMEOUT, + FUM_INVALID_TYPE, + FUM_INVALID_LENGTH, + FUM_INVALID_BE, + FUM_INVALID_ALIGN, + __FUM_MAX +}; + +struct fm10k_fault { + u64 address; /* Address at the time fault was detected */ + u32 specinfo; /* Extra info on this fault (fault dependent) */ + u8 type; /* Fault value dependent on subunit */ + u8 func; /* Function number of the fault */ +}; + +struct fm10k_mac_ops { + /* basic bring-up and tear-down */ + s32 (*reset_hw)(struct fm10k_hw *); + s32 (*init_hw)(struct fm10k_hw *); + s32 (*start_hw)(struct fm10k_hw *); + s32 (*stop_hw)(struct fm10k_hw *); + s32 (*get_bus_info)(struct fm10k_hw *); + s32 (*get_host_state)(struct fm10k_hw *, bool *); + bool (*is_slot_appropriate)(struct fm10k_hw *); + s32 (*update_vlan)(struct fm10k_hw *, u32, u8, bool); + s32 (*read_mac_addr)(struct fm10k_hw *); + s32 (*update_uc_addr)(struct fm10k_hw *, u16, const u8 *, + u16, bool, u8); + s32 (*update_mc_addr)(struct fm10k_hw *, u16, const u8 *, u16, bool); + s32 (*update_xcast_mode)(struct fm10k_hw *, u16, u8); + void (*update_int_moderator)(struct fm10k_hw *); + s32 (*update_lport_state)(struct fm10k_hw *, u16, u16, bool); + void (*update_hw_stats)(struct fm10k_hw *, struct fm10k_hw_stats *); + void (*rebind_hw_stats)(struct fm10k_hw *, struct fm10k_hw_stats *); + s32 (*configure_dglort_map)(struct fm10k_hw *, + struct fm10k_dglort_cfg *); + void (*set_dma_mask)(struct fm10k_hw *, u64); + s32 (*get_fault)(struct fm10k_hw *, int, struct fm10k_fault *); + void (*request_lport_map)(struct fm10k_hw *); +}; + +enum fm10k_mac_type { + fm10k_mac_unknown = 0, + fm10k_mac_pf, + fm10k_mac_vf, + fm10k_num_macs +}; + +struct fm10k_mac_info { + struct fm10k_mac_ops ops; + enum fm10k_mac_type type; + u8 addr[ETH_ALEN]; + u8 perm_addr[ETH_ALEN]; + u16 default_vid; + u16 max_msix_vectors; + u16 max_queues; + bool vlan_override; + bool get_host_state; + bool tx_ready; + u32 dglort_map; +}; + +struct fm10k_swapi_table_info { + u32 used; + u32 avail; +}; + +struct fm10k_swapi_info { + u32 status; + struct fm10k_swapi_table_info mac; + struct fm10k_swapi_table_info nexthop; + struct fm10k_swapi_table_info ffu; +}; + +enum fm10k_xcast_modes { + FM10K_XCAST_MODE_ALLMULTI = 0, + FM10K_XCAST_MODE_MULTI = 1, + FM10K_XCAST_MODE_PROMISC = 2, + FM10K_XCAST_MODE_NONE = 3, + FM10K_XCAST_MODE_DISABLE = 4 +}; + +#define FM10K_VF_TC_MAX 100000 /* 100,000 Mb/s aka 100Gb/s */ +#define FM10K_VF_TC_MIN 1 /* 1 Mb/s is the slowest rate */ + +struct fm10k_vf_info { + /* mbx must be first field in struct unless all default IOV message + * handlers are redone as the assumption is that vf_info starts + * at the same offset as the mailbox + */ + struct fm10k_mbx_info mbx; /* PF side of VF mailbox */ + int rate; /* Tx BW cap as defined by OS */ + u16 glort; /* resource tag for this VF */ + u16 sw_vid; /* Switch API assigned VLAN */ + u16 pf_vid; /* PF assigned Default VLAN */ + u8 mac[ETH_ALEN]; /* PF Default MAC address */ + u8 vsi; /* VSI idenfifier */ + u8 vf_idx; /* which VF this is */ + u8 vf_flags; /* flags indicating what modes + * are supported for the port + */ +}; + +#define FM10K_VF_FLAG_ALLMULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_ALLMULTI) +#define FM10K_VF_FLAG_MULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_MULTI) +#define FM10K_VF_FLAG_PROMISC_CAPABLE ((u8)1 << 
FM10K_XCAST_MODE_PROMISC) +#define FM10K_VF_FLAG_NONE_CAPABLE ((u8)1 << FM10K_XCAST_MODE_NONE) +#define FM10K_VF_FLAG_CAPABLE(vf_info) ((vf_info)->vf_flags & (u8)0xF) +#define FM10K_VF_FLAG_ENABLED(vf_info) ((vf_info)->vf_flags >> 4) +#define FM10K_VF_FLAG_SET_MODE(mode) ((u8)0x10 << (mode)) +#define FM10K_VF_FLAG_ENABLED_MODE_SHIFT 4 +#define FM10K_VF_FLAG_SET_MODE_MASK ((u8)0xF0) +#define FM10K_VF_FLAG_SET_MODE_NONE \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_NONE) +#define FM10K_VF_FLAG_MULTI_ENABLED \ + (FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_ALLMULTI) | \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_MULTI) | \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_PROMISC)) + +struct fm10k_iov_ops { + /* IOV related bring-up and tear-down */ + s32 (*assign_resources)(struct fm10k_hw *, u16, u16); + s32 (*configure_tc)(struct fm10k_hw *, u16, int); + s32 (*assign_int_moderator)(struct fm10k_hw *, u16); + s32 (*assign_default_mac_vlan)(struct fm10k_hw *, + struct fm10k_vf_info *); + s32 (*reset_resources)(struct fm10k_hw *, + struct fm10k_vf_info *); + s32 (*set_lport)(struct fm10k_hw *, struct fm10k_vf_info *, u16, u8); + void (*reset_lport)(struct fm10k_hw *, struct fm10k_vf_info *); + void (*update_stats)(struct fm10k_hw *, struct fm10k_hw_stats_q *, u16); + s32 (*report_timestamp)(struct fm10k_hw *, struct fm10k_vf_info *, u64); +}; + +struct fm10k_iov_info { + struct fm10k_iov_ops ops; + u16 total_vfs; + u16 num_vfs; + u16 num_pools; +}; + +struct fm10k_hw { + u32 *hw_addr; + void *back; + struct fm10k_mac_info mac; + struct fm10k_bus_info bus; + struct fm10k_bus_info bus_caps; + struct fm10k_iov_info iov; + struct fm10k_mbx_info mbx; + struct fm10k_swapi_info swapi; + u16 device_id; + u16 vendor_id; + u16 subsystem_device_id; + u16 subsystem_vendor_id; + u8 revision_id; +}; + +/* Number of Transmit and Receive Descriptors must be a multiple of 8 */ +#define FM10K_REQ_TX_DESCRIPTOR_MULTIPLE 8 +#define FM10K_REQ_RX_DESCRIPTOR_MULTIPLE 8 + +/* Transmit Descriptor */ +struct fm10k_tx_desc { + __le64 buffer_addr; /* Address of the descriptor's data buffer */ + __le16 buflen; /* Length of data to be DMAed */ + __le16 vlan; /* VLAN_ID and VPRI to be inserted in FTAG */ + __le16 mss; /* MSS for segmentation offload */ + u8 hdrlen; /* Header size for segmentation offload */ + u8 flags; /* Status and offload request flags */ +}; + +/* Transmit Descriptor Cache Structure */ +struct fm10k_tx_desc_cache { + struct fm10k_tx_desc tx_desc[256]; +}; + +#define FM10K_TXD_FLAG_INT 0x01 +#define FM10K_TXD_FLAG_TIME 0x02 +#define FM10K_TXD_FLAG_CSUM 0x04 +#define FM10K_TXD_FLAG_CSUM2 0x08 +#define FM10K_TXD_FLAG_FTAG 0x10 +#define FM10K_TXD_FLAG_RS 0x20 +#define FM10K_TXD_FLAG_LAST 0x40 +#define FM10K_TXD_FLAG_DONE 0x80 + +#define FM10K_TXD_VLAN_PRI_SHIFT 12 + +/* These macros are meant to enable optimal placement of the RS and INT + * bits. It will point us to the last descriptor in the cache for either the + * start of the packet, or the end of the packet. If the index is actually + * at the start of the FIFO it will point to the offset for the last index + * in the FIFO to prevent an unnecessary write. 
+ */ +#define FM10K_TXD_WB_FIFO_SIZE 4 +#define FM10K_TXD_WB_IDX(idx) \ + (((idx) - 1) | (FM10K_TXD_WB_FIFO_SIZE - 1)) + +/* Receive Descriptor - 32B */ +union fm10k_rx_desc { + struct { + __le64 pkt_addr; /* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ + __le64 reserved; /* Empty space, RSS hash */ + __le64 timestamp; + } q; /* Read, Writeback, 64b quad-words */ + struct { + __le32 data; /* RSS and header data */ + __le32 rss; /* RSS Hash */ + __le32 staterr; + __le32 vlan_len; + __le32 glort; /* sglort/dglort */ + } d; /* Writeback, 32b double-words */ + struct { + __le16 pkt_info; /* RSS, Pkt type */ + __le16 hdr_info; /* Splithdr, hdrlen, xC */ + __le16 rss_lower; + __le16 rss_upper; + __le16 status; /* status/error */ + __le16 csum_err; /* checksum or extended error value */ + __le16 length; /* Packet length */ + __le16 vlan; /* VLAN tag */ + __le16 dglort; + __le16 sglort; + } w; /* Writeback, 16b words */ +}; + +#define FM10K_RXD_RSSTYPE_MASK 0x000F +enum fm10k_rdesc_rss_type { + FM10K_RSSTYPE_NONE = 0x0, + FM10K_RSSTYPE_IPV4_TCP = 0x1, + FM10K_RSSTYPE_IPV4 = 0x2, + FM10K_RSSTYPE_IPV6_TCP = 0x3, + /* Reserved 0x4 */ + FM10K_RSSTYPE_IPV6 = 0x5, + /* Reserved 0x6 */ + FM10K_RSSTYPE_IPV4_UDP = 0x7, + FM10K_RSSTYPE_IPV6_UDP = 0x8 + /* Reserved 0x9 - 0xF */ +}; + +#define FM10K_RXD_PKTTYPE_MASK 0x03F0 +#define FM10K_RXD_PKTTYPE_MASK_L3 0x0070 +#define FM10K_RXD_PKTTYPE_MASK_L4 0x0380 +#define FM10K_RXD_PKTTYPE_SHIFT 4 +#define FM10K_RXD_PKTTYPE_INNER_MASK_L3 0x1C00 +#define FM10K_RXD_PKTTYPE_INNER_MASK_L4 0xE000 +#define FM10K_RXD_PKTTYPE_INNER_SHIFT 10 +enum fm10k_rdesc_pkt_type { + /* L3 type */ + FM10K_PKTTYPE_OTHER = 0x00, + FM10K_PKTTYPE_IPV4 = 0x01, + FM10K_PKTTYPE_IPV4_EX = 0x02, + FM10K_PKTTYPE_IPV6 = 0x03, + FM10K_PKTTYPE_IPV6_EX = 0x04, + + /* L4 type */ + FM10K_PKTTYPE_TCP = 0x08, + FM10K_PKTTYPE_UDP = 0x10, + FM10K_PKTTYPE_GRE = 0x18, + FM10K_PKTTYPE_VXLAN = 0x20, + FM10K_PKTTYPE_NVGRE = 0x28, + FM10K_PKTTYPE_GENEVE = 0x30 +}; + +#define FM10K_RXD_HDR_INFO_XC_MASK 0x0006 +enum fm10k_rxdesc_xc { + FM10K_XC_UNICAST = 0x0, + FM10K_XC_MULTICAST = 0x4, + FM10K_XC_BROADCAST = 0x6 +}; + +#define FM10K_RXD_HDR_INFO_LEN_SHIFT 5 +#define FM10K_RXD_HDR_INFO_SPH 0x8000 + +#define FM10K_RXD_STATUS_DD 0x0001 /* Descriptor done */ +#define FM10K_RXD_STATUS_EOP 0x0002 /* End of packet */ +#define FM10K_RXD_STATUS_VEXT 0x0004 /* A VLAN tag is present */ +#define FM10K_RXD_STATUS_IPCS 0x0008 /* Indicates IPv4 csum */ +#define FM10K_RXD_STATUS_L4CS 0x0010 /* Indicates an L4 csum */ +#define FM10K_RXD_STATUS_IPCS2 0x0020 /* Inner header IPv4 csum */ +#define FM10K_RXD_STATUS_L4CS2 0x0040 /* Inner header L4 csum */ +#define FM10K_RXD_STATUS_IPFRAG_MASK 0x0180 /* Fragment mask */ +#define FM10K_RXD_STATUS_IPFRAG_CSUM 0x0100 /* Fragment w/ CSUM field */ +#define FM10K_RXD_STATUS_VEXT2 0x0200 /* A custom tag is present */ +#define FM10K_RXD_STATUS_HBO 0x0400 /* header buffer overrun */ +#define FM10K_RXD_STATUS_L4E2 0x0800 /* Inner header L4 csum err */ +#define FM10K_RXD_STATUS_IPE2 0x1000 /* Inner header IPv4 csum err */ +#define FM10K_RXD_STATUS_RXE 0x2000 /* Generic Rx error */ +#define FM10K_RXD_STATUS_L4E 0x4000 /* L4 csum error */ +#define FM10K_RXD_STATUS_IPE 0x8000 /* IPv4 csum error */ + +#define FM10K_RXD_ERR_SWITCH_ERROR 0x0001 /* Switch found bad packet */ +#define FM10K_RXD_ERR_NO_DESCRIPTOR 0x0002 /* No descriptor available */ +#define FM10K_RXD_ERR_PP_ERROR 0x0004 /* RAM error during processing */ +#define FM10K_RXD_ERR_SWITCH_READY 0x0008 /* Link 
transition mid-packet */ +#define FM10K_RXD_ERR_TOO_BIG 0x0010 /* Pkt too big for single buf */ + +#define FM10K_RXD_VLAN_ID_MASK 0x0FFF +#define FM10K_RXD_VLAN_PRI_SHIFT FM10K_TXD_VLAN_PRI_SHIFT + +struct fm10k_ftag { + __be16 swpri_type_user; + __be16 vlan; + __be16 sglort; + __be16 dglort; +}; + +#endif /* _FM10K_TYPE_H */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_vf.c b/lib/librte_pmd_fm10k/base/fm10k_vf.c new file mode 100644 index 0000000..640b08b --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_vf.c @@ -0,0 +1,586 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_vf.h" + +/** + * fm10k_stop_hw_vf - Stop Tx/Rx units + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_stop_hw_vf(struct fm10k_hw *hw) +{ + u8 *perm_addr = hw->mac.perm_addr; + u32 bal = 0, bah = 0; + s32 err; + u16 i; + + DEBUGFUNC("fm10k_stop_hw_vf"); + + /* we need to disable the queues before taking further steps */ + err = fm10k_stop_hw_generic(hw); + if (err) + return err; + + /* If permenant address is set then we need to restore it */ + if (FM10K_IS_VALID_ETHER_ADDR(perm_addr)) { + bal = (((u32)perm_addr[3]) << 24) | + (((u32)perm_addr[4]) << 16) | + (((u32)perm_addr[5]) << 8); + bah = (((u32)0xFF) << 24) | + (((u32)perm_addr[0]) << 16) | + (((u32)perm_addr[1]) << 8) | + ((u32)perm_addr[2]); + } + + /* The queues have already been disabled so we just need to + * update their base address registers + */ + for (i = 0; i < hw->mac.max_queues; i++) { + FM10K_WRITE_REG(hw, FM10K_TDBAL(i), bal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(i), bah); + FM10K_WRITE_REG(hw, FM10K_RDBAL(i), bal); + FM10K_WRITE_REG(hw, FM10K_RDBAH(i), bah); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_reset_hw_vf - VF hardware reset + * @hw: pointer to hardware structure + * + * This function should return the hardare to a state similar to the + * one it is in after just being initialized. 
+ **/ +STATIC s32 fm10k_reset_hw_vf(struct fm10k_hw *hw) +{ + s32 err; + + DEBUGFUNC("fm10k_reset_hw_vf"); + + /* shut down queues we own and reset DMA configuration */ + err = fm10k_stop_hw_vf(hw); + if (err) + return err; + + /* Initiate VF reset */ + FM10K_WRITE_REG(hw, FM10K_VFCTRL, FM10K_VFCTRL_RST); + + /* Flush write and allow 100us for reset to complete */ + FM10K_WRITE_FLUSH(hw); + usec_delay(FM10K_RESET_TIMEOUT); + + /* Clear reset bit and verify it was cleared */ + FM10K_WRITE_REG(hw, FM10K_VFCTRL, 0); + if (FM10K_READ_REG(hw, FM10K_VFCTRL) & FM10K_VFCTRL_RST) + err = FM10K_ERR_RESET_FAILED; + + return err; +} + +/** + * fm10k_init_hw_vf - VF hardware initialization + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_init_hw_vf(struct fm10k_hw *hw) +{ + u32 tqdloc, tqdloc0 = ~FM10K_READ_REG(hw, FM10K_TQDLOC(0)); + s32 err; + u16 i; + + DEBUGFUNC("fm10k_init_hw_vf"); + + /* assume we always have at least 1 queue */ + for (i = 1; tqdloc0 && (i < FM10K_MAX_QUEUES_POOL); i++) { + /* verify the Descriptor cache offsets are increasing */ + tqdloc = ~FM10K_READ_REG(hw, FM10K_TQDLOC(i)); + if (!tqdloc || (tqdloc == tqdloc0)) + break; + + /* check to verify the PF doesn't own any of our queues */ + if (!~FM10K_READ_REG(hw, FM10K_TXQCTL(i)) || + !~FM10K_READ_REG(hw, FM10K_RXQCTL(i))) + break; + } + + /* shut down queues we own and reset DMA configuration */ + err = fm10k_disable_queues_generic(hw, i); + if (err) + return err; + + /* record maximum queue count */ + hw->mac.max_queues = i; + + return FM10K_SUCCESS; +} + +/** + * fm10k_is_slot_appropriate_vf - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. Since the VF has no control over + * the "slot" it is in, always indicate that the slot is appropriate. + **/ +STATIC bool fm10k_is_slot_appropriate_vf(struct fm10k_hw *hw) +{ + UNREFERENCED_1PARAMETER(hw); + DEBUGFUNC("fm10k_is_slot_appropriate_vf"); + + return TRUE; +} + +/* This structure defines the attributes to be parsed below */ +const struct fm10k_tlv_attr fm10k_mac_vlan_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_MAC_VLAN_MSG_VLAN), + FM10K_TLV_ATTR_BOOL(FM10K_MAC_VLAN_MSG_SET), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_MAC), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_DEFAULT_MAC), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_MULTICAST), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_update_vlan_vf - Update status of VLAN ID in VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @vsi: Reserved, should always be 0 + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for this VF.
+ **/ +STATIC s32 fm10k_update_vlan_vf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[4]; + + /* verify the index is not set */ + if (vsi) + return FM10K_ERR_PARAM; + + /* verify upper 4 bits of vid and length are 0 */ + if ((vid << 16 | vid) >> 28) + return FM10K_ERR_PARAM; + + /* encode set bit into the VLAN ID */ + if (!set) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_u32(msg, FM10K_MAC_VLAN_MSG_VLAN, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_msg_mac_vlan_vf - Read device MAC address from mailbox message + * @hw: pointer to the HW structure + * @results: Attributes for message + * @mbx: unused mailbox data + * + * This function should determine the MAC address for the VF + **/ +s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u8 perm_addr[ETH_ALEN]; + u16 vid; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_mac_vlan_vf"); + + /* record MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan( + results[FM10K_MAC_VLAN_MSG_DEFAULT_MAC], + perm_addr, &vid); + if (err) + return err; + + memcpy(hw->mac.perm_addr, perm_addr, ETH_ALEN); + hw->mac.default_vid = vid & (FM10K_VLAN_TABLE_VID_MAX - 1); + hw->mac.vlan_override = !!(vid & FM10K_VLAN_CLEAR); + + return FM10K_SUCCESS; +} + +/** + * fm10k_read_mac_addr_vf - Read device MAC address + * @hw: pointer to the HW structure + * + * This function should determine the MAC address for the VF + **/ +STATIC s32 fm10k_read_mac_addr_vf(struct fm10k_hw *hw) +{ + u8 perm_addr[ETH_ALEN]; + u32 base_addr; + + DEBUGFUNC("fm10k_read_mac_addr_vf"); + + base_addr = FM10K_READ_REG(hw, FM10K_TDBAL(0)); + + /* last byte should be 0 */ + if (base_addr << 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[3] = (u8)(base_addr >> 24); + perm_addr[4] = (u8)(base_addr >> 16); + perm_addr[5] = (u8)(base_addr >> 8); + + base_addr = FM10K_READ_REG(hw, FM10K_TDBAH(0)); + + /* first byte should be all 1's */ + if ((~base_addr) >> 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[0] = (u8)(base_addr >> 16); + perm_addr[1] = (u8)(base_addr >> 8); + perm_addr[2] = (u8)(base_addr); + + memcpy(hw->mac.perm_addr, perm_addr, ETH_ALEN); + memcpy(hw->mac.addr, perm_addr, ETH_ALEN); + + return FM10K_SUCCESS; +} + +/** + * fm10k_update_uc_addr_vf - Update device unicast address + * @hw: pointer to the HW structure + * @glort: unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure - unused + * + * This function is used to add or remove unicast MAC addresses for + * the VF. 
+ **/ +STATIC s32 fm10k_update_uc_addr_vf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[7]; + + DEBUGFUNC("fm10k_update_uc_addr_vf"); + + UNREFERENCED_2PARAMETER(glort, flags); + + /* verify VLAN ID is valid */ + if (vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* verify MAC address is valid */ + if (!FM10K_IS_VALID_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + /* verify we are not locked down on the MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(hw->mac.perm_addr) && + memcmp(hw->mac.perm_addr, mac, ETH_ALEN)) + return FM10K_ERR_PARAM; + + /* add bit to notify us if this is a set or clear operation */ + if (!add) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_MAC, mac, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_mc_addr_vf - Update device multicast address + * @hw: pointer to the HW structure + * @glort: unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses for + * the VF. + **/ +STATIC s32 fm10k_update_mc_addr_vf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[7]; + + DEBUGFUNC("fm10k_update_mc_addr_vf"); + + UNREFERENCED_1PARAMETER(glort); + + /* verify VLAN ID is valid */ + if (vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* verify multicast address is valid */ + if (!FM10K_IS_MULTICAST_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + /* add bit to notify us if this is a set or clear operation */ + if (!add) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_MULTICAST, + mac, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_int_moderator_vf - Request update of interrupt moderator list + * @hw: pointer to hardware structure + * + * This function will issue a request to the PF to rescan our MSI-X table + * and to update the interrupt moderator linked list. + **/ +STATIC void fm10k_update_int_moderator_vf(struct fm10k_hw *hw) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[1]; + + /* generate MSI-X request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MSIX); + + /* load onto outgoing mailbox */ + mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/* This structure defines the attributes to be parsed below */ +const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[] = { + FM10K_TLV_ATTR_BOOL(FM10K_LPORT_STATE_MSG_DISABLE), + FM10K_TLV_ATTR_U8(FM10K_LPORT_STATE_MSG_XCAST_MODE), + FM10K_TLV_ATTR_BOOL(FM10K_LPORT_STATE_MSG_READY), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_lport_state_vf - Message handler for lport_state message from PF + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler is meant to capture the indication from the PF that we + * are ready to bring up the interface.
+ **/ +s32 fm10k_msg_lport_state_vf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_lport_state_vf"); + + hw->mac.dglort_map = !results[FM10K_LPORT_STATE_MSG_READY] ? + FM10K_DGLORTMAP_NONE : FM10K_DGLORTMAP_ZERO; + + return FM10K_SUCCESS; +} + +/** + * fm10k_update_lport_state_vf - Update device state in lower device + * @hw: pointer to the HW structure + * @glort: unused + * @count: number of logical ports to enable - unused (always 1) + * @enable: boolean value indicating if this is an enable or disable request + * + * Notify the lower device of a state change. If the lower device is + * enabled we can add filters, if it is disabled all filters for this + * logical port are flushed. + **/ +STATIC s32 fm10k_update_lport_state_vf(struct fm10k_hw *hw, u16 glort, + u16 count, bool enable) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[2]; + + UNREFERENCED_2PARAMETER(glort, count); + DEBUGFUNC("fm10k_update_lport_state_vf"); + + /* reset glort mask 0 as we have to wait to be enabled */ + hw->mac.dglort_map = FM10K_DGLORTMAP_NONE; + + /* generate port state request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + if (!enable) + fm10k_tlv_attr_put_bool(msg, FM10K_LPORT_STATE_MSG_DISABLE); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_xcast_mode_vf - Request update of multicast mode + * @hw: pointer to hardware structure + * @glort: unused + * @mode: integer value indicating mode being requested + * + * This function will attempt to request a higher mode for the port + * so that it can enable either multicast, multicast promiscuous, or + * promiscuous mode of operation. + **/ +STATIC s32 fm10k_update_xcast_mode_vf(struct fm10k_hw *hw, u16 glort, u8 mode) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3]; + + UNREFERENCED_1PARAMETER(glort); + DEBUGFUNC("fm10k_update_xcast_mode_vf"); + + if (mode > FM10K_XCAST_MODE_NONE) + return FM10K_ERR_PARAM; + + /* generate message requesting to change xcast mode */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + fm10k_tlv_attr_put_u8(msg, FM10K_LPORT_STATE_MSG_XCAST_MODE, mode); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +const struct fm10k_tlv_attr fm10k_1588_msg_attr[] = { + FM10K_TLV_ATTR_U64(FM10K_1588_MSG_TIMESTAMP), + FM10K_TLV_ATTR_LAST +}; + +/* currently there is no shared 1588 timestamp handler */ + +/** + * fm10k_update_hw_stats_vf - Updates hardware related statistics of VF + * @hw: pointer to hardware structure + * @stats: pointer to statistics structure + * + * This function collects and aggregates per queue hardware statistics. + **/ +STATIC void fm10k_update_hw_stats_vf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_update_hw_stats_vf"); + + fm10k_update_hw_stats_q(hw, stats->q, 0, hw->mac.max_queues); +} + +/** + * fm10k_rebind_hw_stats_vf - Resets base for hardware statistics of VF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function resets the base for queue hardware statistics. 
+ **/ +STATIC void fm10k_rebind_hw_stats_vf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_rebind_hw_stats_vf"); + + /* Unbind Queue Statistics */ + fm10k_unbind_hw_stats_q(stats->q, 0, hw->mac.max_queues); + + /* Reinitialize bases for all stats */ + fm10k_update_hw_stats_vf(hw, stats); +} + +/** + * fm10k_configure_dglort_map_vf - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned. + **/ +STATIC s32 fm10k_configure_dglort_map_vf(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + UNREFERENCED_1PARAMETER(hw); + DEBUGFUNC("fm10k_configure_dglort_map_vf"); + + /* verify the dglort pointer */ + if (!dglort) + return FM10K_ERR_PARAM; + + /* stub for now until we determine correct message for this */ + + return FM10K_SUCCESS; +} + +static const struct fm10k_msg_data fm10k_msg_data_vf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_msg_mac_vlan_vf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_init_ops_vf - Inits func ptrs and MAC type + * @hw: pointer to hardware structure + * + * Initialize the function pointers and assign the MAC type for VF. + * Does not touch the hardware. + **/ +s32 fm10k_init_ops_vf(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + + DEBUGFUNC("fm10k_init_ops_vf"); + + fm10k_init_ops_generic(hw); + + mac->ops.reset_hw = &fm10k_reset_hw_vf; + mac->ops.init_hw = &fm10k_init_hw_vf; + mac->ops.start_hw = &fm10k_start_hw_generic; + mac->ops.stop_hw = &fm10k_stop_hw_vf; + mac->ops.is_slot_appropriate = &fm10k_is_slot_appropriate_vf; + mac->ops.update_vlan = &fm10k_update_vlan_vf; + mac->ops.read_mac_addr = &fm10k_read_mac_addr_vf; + mac->ops.update_uc_addr = &fm10k_update_uc_addr_vf; + mac->ops.update_mc_addr = &fm10k_update_mc_addr_vf; + mac->ops.update_xcast_mode = &fm10k_update_xcast_mode_vf; + mac->ops.update_int_moderator = &fm10k_update_int_moderator_vf; + mac->ops.update_lport_state = &fm10k_update_lport_state_vf; + mac->ops.update_hw_stats = &fm10k_update_hw_stats_vf; + mac->ops.rebind_hw_stats = &fm10k_rebind_hw_stats_vf; + mac->ops.configure_dglort_map = &fm10k_configure_dglort_map_vf; + mac->ops.get_host_state = &fm10k_get_host_state_generic; + + mac->max_msix_vectors = fm10k_get_pcie_msix_count_generic(hw); + + return fm10k_pfvf_mbx_init(hw, &hw->mbx, fm10k_msg_data_vf, 0); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_vf.h b/lib/librte_pmd_fm10k/base/fm10k_vf.h new file mode 100644 index 0000000..e0338c4 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_vf.h @@ -0,0 +1,91 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. 
Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_VF_H_ +#define _FM10K_VF_H_ + +#include "fm10k_type.h" +#include "fm10k_common.h" + +enum fm10k_vf_tlv_msg_id { + FM10K_VF_MSG_ID_TEST = 0, /* msg ID reserved for testing */ + FM10K_VF_MSG_ID_MSIX, + FM10K_VF_MSG_ID_MAC_VLAN, + FM10K_VF_MSG_ID_LPORT_STATE, + FM10K_VF_MSG_ID_1588, + FM10K_VF_MSG_ID_MAX, +}; + +enum fm10k_tlv_mac_vlan_attr_id { + FM10K_MAC_VLAN_MSG_VLAN, + FM10K_MAC_VLAN_MSG_SET, + FM10K_MAC_VLAN_MSG_MAC, + FM10K_MAC_VLAN_MSG_DEFAULT_MAC, + FM10K_MAC_VLAN_MSG_MULTICAST, + FM10K_MAC_VLAN_MSG_ID_MAX +}; + +enum fm10k_tlv_lport_state_attr_id { + FM10K_LPORT_STATE_MSG_DISABLE, + FM10K_LPORT_STATE_MSG_XCAST_MODE, + FM10K_LPORT_STATE_MSG_READY, + FM10K_LPORT_STATE_MSG_MAX +}; + +enum fm10k_tlv_1588_attr_id { + FM10K_1588_MSG_TIMESTAMP, + FM10K_1588_MSG_MAX +}; + +#define FM10K_VF_MSG_MSIX_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_MSIX, NULL, func) + +s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_mac_vlan_msg_attr[]; +#define FM10K_VF_MSG_MAC_VLAN_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_MAC_VLAN, \ + fm10k_mac_vlan_msg_attr, func) + +s32 fm10k_msg_lport_state_vf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[]; +#define FM10K_VF_MSG_LPORT_STATE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_LPORT_STATE, \ + fm10k_lport_state_msg_attr, func) + +extern const struct fm10k_tlv_attr fm10k_1588_msg_attr[]; +#define FM10K_VF_MSG_1588_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_1588, fm10k_1588_msg_attr, func) + +s32 fm10k_init_ops_vf(struct fm10k_hw *hw); +#endif /* _FM10K_VF_H */ -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
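The VF code above only installs function pointers; nothing runs until a caller drives them through the fm10k_api.c wrappers (shown in the base driver patch further down this thread). The following is a minimal bring-up sketch, illustrative only and not part of the patch: the example_vf_bringup name is hypothetical, the BAR mapping and PCI ID fields are assumed to be filled in already, and error paths, mailbox servicing and interrupt wiring are omitted.

#include "fm10k_api.h"

/* Hypothetical caller: drive the VF through the fm10k_api.c wrappers, which
 * dispatch to the ops installed by fm10k_init_ops_vf(). */
static s32 example_vf_bringup(struct fm10k_hw *hw)
{
	s32 err;

	/* hw->hw_addr and the PCI vendor/device IDs are assumed to have been
	 * filled in from the mapped BAR before this point */
	err = fm10k_init_shared_code(hw);	/* selects fm10k_init_ops_vf() for the VF device id */
	if (err)
		return err;

	err = fm10k_reset_hw(hw);		/* -> fm10k_reset_hw_vf() */
	if (err)
		return err;

	err = fm10k_init_hw(hw);		/* -> fm10k_init_hw_vf(), probes mac.max_queues */
	if (err)
		return err;

	/* a failure here need not be fatal: the permanent MAC can also be
	 * delivered later by the PF via fm10k_msg_mac_vlan_vf() */
	(void)fm10k_read_mac_addr(hw);		/* -> fm10k_read_mac_addr_vf() */

	return fm10k_start_hw(hw);		/* -> fm10k_start_hw_generic(), marks Tx ready */
}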
* [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 01/15] fm10k: add base driver Chen Jing D(Mark) @ 2015-02-10 7:02 ` Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 01/15] fm10k: add base driver Chen Jing D(Mark) ` (14 more replies) 0 siblings, 15 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-10 7:02 UTC (permalink / raw) To: dev From: "Chen Jing D(Mark)" <jing.d.chen@intel.com> The patch set add poll mode driver for the host interface of Intel Red Rock Canyon silicon, which integrates NIC and switch functionalities. The patch set include below features: 1. Basic RX/TX functions for PF/VF. 2. Interrupt handling mechanism for PF/VF. 3. per queue start/stop functions for PF/VF. 4. Mailbox handling between PF/VF and PF/Switch Manager. 5. Receive Side Scaling (RSS) for PF/VF. 6. Scatter receive function for PF/VF. 7. reta update/query for PF/VF. 8. VLAN filter set for PF. 9. Link status query for PF/VF. Changes in v3: - Update base driver. - Define several macros to pass base driver compile. Changes in v2: - Merge 3 patches into 1 to configure fm10k compile environment. - Rework on log code to follow style in ixgbe. - Rework log message, remove redundant '\n' - Update Copyright year from "2014" to "2015" - Change base driver directory name from SHARED to base - Add more description in log for patch "add PF and VF interrupt" - Merge 2 patches into 1 to register fm10k driver - Define macro to replace numeric for lower 32-bit mask. Jeff Shaw (15): fm10k: add base driver eal: add fm10k device id fm10k: register fm10k pmd PF driver Change config files to add fm10k into compile fm10k: add reta update/requery functions fm10k: add rx_queue_setup/release function fm10k: add tx_queue_setup/release function fm10k: add RX/TX single queue start/stop function fm10k: add dev start/stop functions fm10k: add receive and tranmit function fm10k: add PF RSS support fm10k: Add scatter receive function fm10k: add function to set vlan fm10k: Add SRIOV-VF support fm10k: add PF and VF interrupt handling function config/common_bsdapp | 11 + config/common_linuxapp | 11 + lib/Makefile | 1 + lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 + lib/librte_pmd_fm10k/Makefile | 96 + lib/librte_pmd_fm10k/base/fm10k_api.c | 341 ++++ lib/librte_pmd_fm10k/base/fm10k_api.h | 61 + lib/librte_pmd_fm10k/base/fm10k_common.c | 572 ++++++ lib/librte_pmd_fm10k/base/fm10k_common.h | 52 + lib/librte_pmd_fm10k/base/fm10k_mbx.c | 2185 +++++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_mbx.h | 329 ++++ lib/librte_pmd_fm10k/base/fm10k_osdep.h | 148 ++ lib/librte_pmd_fm10k/base/fm10k_pf.c | 1992 +++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_pf.h | 155 ++ lib/librte_pmd_fm10k/base/fm10k_tlv.c | 914 ++++++++++ lib/librte_pmd_fm10k/base/fm10k_tlv.h | 199 ++ lib/librte_pmd_fm10k/base/fm10k_type.h | 937 ++++++++++ lib/librte_pmd_fm10k/base/fm10k_vf.c | 641 +++++++ lib/librte_pmd_fm10k/base/fm10k_vf.h | 91 + lib/librte_pmd_fm10k/fm10k.h | 293 +++ lib/librte_pmd_fm10k/fm10k_ethdev.c | 1868 +++++++++++++++++++ lib/librte_pmd_fm10k/fm10k_logs.h | 78 + lib/librte_pmd_fm10k/fm10k_rxtx.c | 427 +++++ mk/rte.app.mk | 4 + 24 files changed, 11428 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/Makefile create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.h 
create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_osdep.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_type.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.h create mode 100644 lib/librte_pmd_fm10k/fm10k.h create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
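Applications reach the feature list above through the generic ethdev API rather than any fm10k-specific calls. Below is a minimal single-queue port bring-up sketch, not part of the patch set: the example_port_init helper and NB_RXD/NB_TXD names are hypothetical, exact types (e.g. the width of the port id) and NULL-queue-config handling vary between DPDK releases of this era, and error handling is trimmed.

#include <string.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_RXD 512	/* fm10k requires descriptor counts in multiples of 8 */
#define NB_TXD 512

/* Hypothetical helper: configure, start and poll one fm10k port */
static int example_port_init(uint8_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf port_conf;
	struct rte_mbuf *pkts[32];
	int ret;

	memset(&port_conf, 0, sizeof(port_conf));

	/* one RX queue and one TX queue, default (non-RSS) mode */
	ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	if (ret < 0)
		return ret;

	ret = rte_eth_rx_queue_setup(port_id, 0, NB_RXD, rte_socket_id(),
				     NULL, mb_pool);
	if (ret < 0)
		return ret;

	ret = rte_eth_tx_queue_setup(port_id, 0, NB_TXD, rte_socket_id(), NULL);
	if (ret < 0)
		return ret;

	/* start the device: enables the queues configured above */
	ret = rte_eth_dev_start(port_id);
	if (ret < 0)
		return ret;

	/* poll one burst to show the datapath entry point */
	(void)rte_eth_rx_burst(port_id, 0, pkts, 32);

	return 0;
}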
* [dpdk-dev] [PATCH v3 01/15] fm10k: add base driver 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) @ 2015-02-10 7:02 ` Chen Jing D(Mark) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 02/15] eal: add fm10k device id Chen Jing D(Mark) ` (13 subsequent siblings) 14 siblings, 1 reply; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-10 7:02 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Base driver is developped and maintained by Intel ND team, includes basic functional service to Intel Red Rock Canyon silicon. Any suggestion on bug fix and improvement within this directory is welcome, but need this team to change and update. Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/base/fm10k_api.c | 341 +++++ lib/librte_pmd_fm10k/base/fm10k_api.h | 61 + lib/librte_pmd_fm10k/base/fm10k_common.c | 572 ++++++++ lib/librte_pmd_fm10k/base/fm10k_common.h | 52 + lib/librte_pmd_fm10k/base/fm10k_mbx.c | 2185 ++++++++++++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_mbx.h | 329 +++++ lib/librte_pmd_fm10k/base/fm10k_osdep.h | 148 ++ lib/librte_pmd_fm10k/base/fm10k_pf.c | 1992 +++++++++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_pf.h | 155 +++ lib/librte_pmd_fm10k/base/fm10k_tlv.c | 914 +++++++++++++ lib/librte_pmd_fm10k/base/fm10k_tlv.h | 199 +++ lib/librte_pmd_fm10k/base/fm10k_type.h | 937 +++++++++++++ lib/librte_pmd_fm10k/base/fm10k_vf.c | 641 +++++++++ lib/librte_pmd_fm10k/base/fm10k_vf.h | 91 ++ 14 files changed, 8617 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_osdep.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_type.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.h diff --git a/lib/librte_pmd_fm10k/base/fm10k_api.c b/lib/librte_pmd_fm10k/base/fm10k_api.c new file mode 100644 index 0000000..c0f555c --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_api.c @@ -0,0 +1,341 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. 
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_api.h" +#include "fm10k_common.h" + +/** + * fm10k_set_mac_type - Sets MAC type + * @hw: pointer to the HW structure + * + * This function sets the mac type of the adapter based on the + * vendor ID and device ID stored in the hw structure. + **/ +s32 fm10k_set_mac_type(struct fm10k_hw *hw) +{ + s32 ret_val = FM10K_SUCCESS; + + DEBUGFUNC("fm10k_set_mac_type"); + + if (hw->vendor_id != FM10K_INTEL_VENDOR_ID) { + ERROR_REPORT2(FM10K_ERROR_UNSUPPORTED, + "Unsupported vendor id: %x\n", hw->vendor_id); + return FM10K_ERR_DEVICE_NOT_SUPPORTED; + } + + switch (hw->device_id) { + case FM10K_DEV_ID_PF: + hw->mac.type = fm10k_mac_pf; + break; + case FM10K_DEV_ID_VF: + hw->mac.type = fm10k_mac_vf; + break; + default: + ret_val = FM10K_ERR_DEVICE_NOT_SUPPORTED; + ERROR_REPORT2(FM10K_ERROR_UNSUPPORTED, + "Unsupported device id: %x\n", + hw->device_id); + break; + } + + DEBUGOUT2("fm10k_set_mac_type found mac: %d, returns: %d\n", + hw->mac.type, ret_val); + + return ret_val; +} + +/** + * fm10k_init_shared_code - Initialize the shared code + * @hw: pointer to hardware structure + * + * This will assign function pointers and assign the MAC type and PHY code. + * Does not touch the hardware. This function must be called prior to any + * other function in the shared code. The fm10k_hw structure should be + * memset to 0 prior to calling this function. The following fields in + * hw structure should be filled in prior to calling this function: + * hw_addr, back, device_id, vendor_id, subsystem_device_id, + * subsystem_vendor_id, and revision_id + **/ +s32 fm10k_init_shared_code(struct fm10k_hw *hw) +{ + s32 status; + + DEBUGFUNC("fm10k_init_shared_code"); + + /* Set the mac type */ + fm10k_set_mac_type(hw); + + switch (hw->mac.type) { + case fm10k_mac_pf: + status = fm10k_init_ops_pf(hw); + break; + case fm10k_mac_vf: + status = fm10k_init_ops_vf(hw); + break; + default: + status = FM10K_ERR_DEVICE_NOT_SUPPORTED; + break; + } + + return status; +} + +#define fm10k_call_func(hw, func, params, error) \ + ((func) ? (func params) : (error)) + +/** + * fm10k_reset_hw - Reset the hardware to known good state + * @hw: pointer to hardware structure + * + * This function should return the hardware to a state similar to the + * one it is in after being powered on. 
+ **/ +s32 fm10k_reset_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.reset_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_init_hw - Initialize the hardware + * @hw: pointer to hardware structure + * + * Initialize the hardware by resetting and then starting the hardware + **/ +s32 fm10k_init_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.init_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_stop_hw - Prepares hardware to shutdown Rx/Tx + * @hw: pointer to hardware structure + * + * Disables Rx/Tx queues and disables the DMA engine. + **/ +s32 fm10k_stop_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.stop_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_start_hw - Prepares hardware for Rx/Tx + * @hw: pointer to hardware structure + * + * This function sets the flags indicating that the hardware is ready to + * begin operation. + **/ +s32 fm10k_start_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.start_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_get_bus_info - Set PCI bus info + * @hw: pointer to hardware structure + * + * Sets the PCI bus info (speed, width, type) within the fm10k_hw structure + **/ +s32 fm10k_get_bus_info(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.get_bus_info, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_is_slot_appropriate - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. + **/ +bool fm10k_is_slot_appropriate(struct fm10k_hw *hw) +{ + if (hw->mac.ops.is_slot_appropriate) + return hw->mac.ops.is_slot_appropriate(hw); + return true; +} + +/** + * fm10k_update_vlan - Clear VLAN ID to VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @idx: Index indicating VF ID or PF ID in table + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for the corresponding function. + **/ +s32 fm10k_update_vlan(struct fm10k_hw *hw, u32 vid, u8 idx, bool set) +{ + return fm10k_call_func(hw, hw->mac.ops.update_vlan, (hw, vid, idx, set), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_read_mac_addr - Reads MAC address + * @hw: pointer to hardware structure + * + * Reads the MAC address out of the interface and stores it in the HW + * structures. + **/ +s32 fm10k_read_mac_addr(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.read_mac_addr, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_hw_stats - Update hw statistics + * @hw: pointer to hardware structure + * + * This function updates statistics that are related to hardware. + * */ +void fm10k_update_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats) +{ + if (hw->mac.ops.update_hw_stats) + hw->mac.ops.update_hw_stats(hw, stats); +} + +/** + * fm10k_rebind_hw_stats - Reset base for hw statistics + * @hw: pointer to hardware structure + * + * This function resets the base for statistics that are related to hardware. 
+ * */ +void fm10k_rebind_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats) +{ + if (hw->mac.ops.rebind_hw_stats) + hw->mac.ops.rebind_hw_stats(hw, stats); +} + +/** + * fm10k_configure_dglort_map - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned. + **/ +s32 fm10k_configure_dglort_map(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + return fm10k_call_func(hw, hw->mac.ops.configure_dglort_map, + (hw, dglort), FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_set_dma_mask - Configures PhyAddrSpace to limit DMA to system + * @hw: pointer to hardware structure + * @dma_mask: 64 bit DMA mask required for platform + * + * This function configures the endpoint to limit the access to memory + * beyond what is physically in the system. + **/ +void fm10k_set_dma_mask(struct fm10k_hw *hw, u64 dma_mask) +{ + if (hw->mac.ops.set_dma_mask) + hw->mac.ops.set_dma_mask(hw, dma_mask); +} + +/** + * fm10k_get_fault - Record a fault in one of the interface units + * @hw: pointer to hardware structure + * @type: pointer to fault type register offset + * @fault: pointer to memory location to record the fault + * + * Record the fault register contents to the fault data structure and + * clear the entry from the register. + * + * Returns ERR_PARAM if invalid register is specified or no error is present. + **/ +s32 fm10k_get_fault(struct fm10k_hw *hw, int type, struct fm10k_fault *fault) +{ + return fm10k_call_func(hw, hw->mac.ops.get_fault, (hw, type, fault), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_uc_addr - Update device unicast address + * @hw: pointer to the HW structure + * @lport: logical port ID to update - unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure - unused + * + * This function is used to add or remove unicast MAC addresses + **/ +s32 fm10k_update_uc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + return fm10k_call_func(hw, hw->mac.ops.update_uc_addr, + (hw, lport, mac, vid, add, flags), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_mc_addr - Update device multicast address + * @hw: pointer to the HW structure + * @lport: logical port ID to update - unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses + **/ +s32 fm10k_update_mc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add) +{ + return fm10k_call_func(hw, hw->mac.ops.update_mc_addr, + (hw, lport, mac, vid, add), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_adjust_systime - Adjust systime frequency + * @hw: pointer to hardware structure + * @ppb: adjustment rate in parts per billion + * + * This function is meant to update the frequency of the clock represented + * by the SYSTIME register. 
+ **/ +s32 fm10k_adjust_systime(struct fm10k_hw *hw, s32 ppb) +{ + return fm10k_call_func(hw, hw->mac.ops.adjust_systime, + (hw, ppb), FM10K_NOT_IMPLEMENTED); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_api.h b/lib/librte_pmd_fm10k/base/fm10k_api.h new file mode 100644 index 0000000..343d750 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_api.h @@ -0,0 +1,61 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _FM10K_API_H_ +#define _FM10K_API_H_ + +#include "fm10k_pf.h" +#include "fm10k_vf.h" + +s32 fm10k_set_mac_type(struct fm10k_hw *hw); +s32 fm10k_reset_hw(struct fm10k_hw *hw); +s32 fm10k_init_hw(struct fm10k_hw *hw); +s32 fm10k_stop_hw(struct fm10k_hw *hw); +s32 fm10k_start_hw(struct fm10k_hw *hw); +s32 fm10k_init_shared_code(struct fm10k_hw *hw); +s32 fm10k_get_bus_info(struct fm10k_hw *hw); +bool fm10k_is_slot_appropriate(struct fm10k_hw *hw); +s32 fm10k_update_vlan(struct fm10k_hw *hw, u32 vid, u8 idx, bool set); +s32 fm10k_read_mac_addr(struct fm10k_hw *hw); +void fm10k_update_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats); +void fm10k_rebind_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats); +s32 fm10k_configure_dglort_map(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort); +void fm10k_set_dma_mask(struct fm10k_hw *hw, u64 dma_mask); +s32 fm10k_get_fault(struct fm10k_hw *hw, int type, struct fm10k_fault *fault); +s32 fm10k_update_uc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add, u8 flags); +s32 fm10k_update_mc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add); +s32 fm10k_adjust_systime(struct fm10k_hw *hw, s32 ppb); +#endif /* _FM10K_API_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_common.c b/lib/librte_pmd_fm10k/base/fm10k_common.c new file mode 100644 index 0000000..a90d2f0 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_common.c @@ -0,0 +1,572 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_common.h" + +/** + * fm10k_get_bus_info_generic - Generic set PCI bus info + * @hw: pointer to hardware structure + * + * Gets the PCI bus info (speed, width, type) then calls helper function to + * store this data within the fm10k_hw structure. 
+ **/ +STATIC s32 fm10k_get_bus_info_generic(struct fm10k_hw *hw) +{ + u16 link_cap, link_status, device_cap, device_control; + + DEBUGFUNC("fm10k_get_bus_info_generic"); + + /* Get the maximum link width and speed from PCIe config space */ + link_cap = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_LINK_CAP); + + switch (link_cap & FM10K_PCIE_LINK_WIDTH) { + case FM10K_PCIE_LINK_WIDTH_1: + hw->bus_caps.width = fm10k_bus_width_pcie_x1; + break; + case FM10K_PCIE_LINK_WIDTH_2: + hw->bus_caps.width = fm10k_bus_width_pcie_x2; + break; + case FM10K_PCIE_LINK_WIDTH_4: + hw->bus_caps.width = fm10k_bus_width_pcie_x4; + break; + case FM10K_PCIE_LINK_WIDTH_8: + hw->bus_caps.width = fm10k_bus_width_pcie_x8; + break; + default: + hw->bus_caps.width = fm10k_bus_width_unknown; + break; + } + + switch (link_cap & FM10K_PCIE_LINK_SPEED) { + case FM10K_PCIE_LINK_SPEED_2500: + hw->bus_caps.speed = fm10k_bus_speed_2500; + break; + case FM10K_PCIE_LINK_SPEED_5000: + hw->bus_caps.speed = fm10k_bus_speed_5000; + break; + case FM10K_PCIE_LINK_SPEED_8000: + hw->bus_caps.speed = fm10k_bus_speed_8000; + break; + default: + hw->bus_caps.speed = fm10k_bus_speed_unknown; + break; + } + + /* Get the PCIe maximum payload size for the PCIe function */ + device_cap = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_DEV_CAP); + + switch (device_cap & FM10K_PCIE_DEV_CAP_PAYLOAD) { + case FM10K_PCIE_DEV_CAP_PAYLOAD_128: + hw->bus_caps.payload = fm10k_bus_payload_128; + break; + case FM10K_PCIE_DEV_CAP_PAYLOAD_256: + hw->bus_caps.payload = fm10k_bus_payload_256; + break; + case FM10K_PCIE_DEV_CAP_PAYLOAD_512: + hw->bus_caps.payload = fm10k_bus_payload_512; + break; + default: + hw->bus_caps.payload = fm10k_bus_payload_unknown; + break; + } + + /* Get the negotiated link width and speed from PCIe config space */ + link_status = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_LINK_STATUS); + + switch (link_status & FM10K_PCIE_LINK_WIDTH) { + case FM10K_PCIE_LINK_WIDTH_1: + hw->bus.width = fm10k_bus_width_pcie_x1; + break; + case FM10K_PCIE_LINK_WIDTH_2: + hw->bus.width = fm10k_bus_width_pcie_x2; + break; + case FM10K_PCIE_LINK_WIDTH_4: + hw->bus.width = fm10k_bus_width_pcie_x4; + break; + case FM10K_PCIE_LINK_WIDTH_8: + hw->bus.width = fm10k_bus_width_pcie_x8; + break; + default: + hw->bus.width = fm10k_bus_width_unknown; + break; + } + + switch (link_status & FM10K_PCIE_LINK_SPEED) { + case FM10K_PCIE_LINK_SPEED_2500: + hw->bus.speed = fm10k_bus_speed_2500; + break; + case FM10K_PCIE_LINK_SPEED_5000: + hw->bus.speed = fm10k_bus_speed_5000; + break; + case FM10K_PCIE_LINK_SPEED_8000: + hw->bus.speed = fm10k_bus_speed_8000; + break; + default: + hw->bus.speed = fm10k_bus_speed_unknown; + break; + } + + /* Get the negotiated PCIe maximum payload size for the PCIe function */ + device_control = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_DEV_CTRL); + + switch (device_control & FM10K_PCIE_DEV_CTRL_PAYLOAD) { + case FM10K_PCIE_DEV_CTRL_PAYLOAD_128: + hw->bus.payload = fm10k_bus_payload_128; + break; + case FM10K_PCIE_DEV_CTRL_PAYLOAD_256: + hw->bus.payload = fm10k_bus_payload_256; + break; + case FM10K_PCIE_DEV_CTRL_PAYLOAD_512: + hw->bus.payload = fm10k_bus_payload_512; + break; + default: + hw->bus.payload = fm10k_bus_payload_unknown; + break; + } + + return FM10K_SUCCESS; +} + +u16 fm10k_get_pcie_msix_count_generic(struct fm10k_hw *hw) +{ + u16 msix_count; + + DEBUGFUNC("fm10k_get_pcie_msix_count_generic"); + + /* read in value from MSI-X capability register */ + msix_count = FM10K_READ_PCI_WORD(hw, FM10K_PCI_MSIX_MSG_CTRL); + msix_count &= 
FM10K_PCI_MSIX_MSG_CTRL_TBL_SZ_MASK; + + /* MSI-X count is zero-based in HW */ + msix_count++; + + if (msix_count > FM10K_MAX_MSIX_VECTORS) + msix_count = FM10K_MAX_MSIX_VECTORS; + + return msix_count; +} + +/** + * fm10k_init_ops_generic - Inits function ptrs + * @hw: pointer to the hardware structure + * + * Initialize the function pointers. + **/ +s32 fm10k_init_ops_generic(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + + DEBUGFUNC("fm10k_init_ops_generic"); + + /* MAC */ + mac->ops.get_bus_info = &fm10k_get_bus_info_generic; + + /* initialize GLORT state to avoid any false hits */ + mac->dglort_map = FM10K_DGLORTMAP_NONE; + + return FM10K_SUCCESS; +} + +/** + * fm10k_start_hw_generic - Prepare hardware for Tx/Rx + * @hw: pointer to hardware structure + * + * This function sets the Tx ready flag to indicate that the Tx path has + * been initialized. + **/ +s32 fm10k_start_hw_generic(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_start_hw_generic"); + + /* set flag indicating we are beginning Tx */ + hw->mac.tx_ready = true; + + return FM10K_SUCCESS; +} + +/** + * fm10k_disable_queues_generic - Stop Tx/Rx queues + * @hw: pointer to hardware structure + * @q_cnt: number of queues to be disabled + * + **/ +s32 fm10k_disable_queues_generic(struct fm10k_hw *hw, u16 q_cnt) +{ + u32 reg; + u16 i, time; + + DEBUGFUNC("fm10k_disable_queues_generic"); + + /* clear tx_ready to prevent any false hits for reset */ + hw->mac.tx_ready = false; + + /* clear the enable bit for all rings */ + for (i = 0; i < q_cnt; i++) { + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), + reg & ~FM10K_TXDCTL_ENABLE); + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), + reg & ~FM10K_RXQCTL_ENABLE); + } + + FM10K_WRITE_FLUSH(hw); + usec_delay(1); + + /* loop through all queues to verify that they are all disabled */ + for (i = 0, time = FM10K_QUEUE_DISABLE_TIMEOUT; time;) { + /* if we are at end of rings all rings are disabled */ + if (i == q_cnt) + return FM10K_SUCCESS; + + /* if queue enables cleared, then move to next ring pair */ + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + if (!~reg || !(reg & FM10K_TXDCTL_ENABLE)) { + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + if (!~reg || !(reg & FM10K_RXQCTL_ENABLE)) { + i++; + continue; + } + } + + /* decrement time and wait 1 usec */ + time--; + if (time) + usec_delay(1); + } + + return FM10K_ERR_REQUESTS_PENDING; +} + +/** + * fm10k_stop_hw_generic - Stop Tx/Rx units + * @hw: pointer to hardware structure + * + **/ +s32 fm10k_stop_hw_generic(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_stop_hw_generic"); + + return fm10k_disable_queues_generic(hw, hw->mac.max_queues); +} + +/** + * fm10k_read_hw_stats_32b - Reads value of 32-bit registers + * @hw: pointer to the hardware structure + * @addr: address of register containing a 32-bit value + * + * Function reads the content of the register and returns the delta + * between the base and the current value. 
+ * **/ +u32 fm10k_read_hw_stats_32b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat) +{ + u32 delta = FM10K_READ_REG(hw, addr) - stat->base_l; + + DEBUGFUNC("fm10k_read_hw_stats_32b"); + + if (FM10K_REMOVED(hw->hw_addr)) + stat->base_h = 0; + + return delta; +} + +/** + * fm10k_read_hw_stats_48b - Reads value of 48-bit registers + * @hw: pointer to the hardware structure + * @addr: address of register containing the lower 32-bit value + * + * Function reads the content of 2 registers, combined to represent a 48-bit + * statistical value. Extra processing is required to handle overflowing. + * Finally, a delta value is returned representing the difference between the + * values stored in registers and values stored in the statistic counters. + * **/ +STATIC u64 fm10k_read_hw_stats_48b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat) +{ + u32 count_l; + u32 count_h; + u32 count_tmp; + u64 delta; + + DEBUGFUNC("fm10k_read_hw_stats_48b"); + + count_h = FM10K_READ_REG(hw, addr + 1); + + /* Check for overflow */ + do { + count_tmp = count_h; + count_l = FM10K_READ_REG(hw, addr); + count_h = FM10K_READ_REG(hw, addr + 1); + } while (count_h != count_tmp); + + delta = ((u64)(count_h - stat->base_h) << 32) + count_l; + delta -= stat->base_l; + + return delta & FM10K_48_BIT_MASK; +} + +/** + * fm10k_update_hw_base_48b - Updates 48-bit statistic base value + * @stat: pointer to the hardware statistic structure + * @delta: value to be updated into the hardware statistic structure + * + * Function receives a value and determines if an update is required based on + * a delta calculation. Only the base value will be updated. + **/ +STATIC void fm10k_update_hw_base_48b(struct fm10k_hw_stat *stat, u64 delta) +{ + DEBUGFUNC("fm10k_update_hw_base_48b"); + + if (!delta) + return; + + /* update lower 32 bits */ + delta += stat->base_l; + stat->base_l = (u32)delta; + + /* update upper 32 bits */ + stat->base_h += (u32)(delta >> 32); +} + +/** + * fm10k_update_hw_stats_tx_q - Updates TX queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * + * Function updates the TX queue statistics counters that are related to the + * hardware. 
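/* A minimal sketch of the tear-free read performed by
 * fm10k_read_hw_stats_48b() above: sample high, then low, then high again,
 * and retry until the high word is stable, so a carry between the two 32-bit
 * registers is never observed half-applied. The example_ globals stand in
 * for the two hardware registers and are assumptions of this sketch. */
#include <stdint.h>

volatile uint32_t example_cnt_lo, example_cnt_hi;

uint64_t example_read_48b(void)
{
	uint32_t lo, hi, hi_prev;

	hi = example_cnt_hi;
	do {
		hi_prev = hi;
		lo = example_cnt_lo;	/* low half may roll over here */
		hi = example_cnt_hi;	/* re-read the high half */
	} while (hi != hi_prev);	/* retry if it moved underneath us */

	return (((uint64_t)hi << 32) | lo) & 0xFFFFFFFFFFFFull; /* keep 48 bits */
}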
+ **/ +STATIC void fm10k_update_hw_stats_tx_q(struct fm10k_hw *hw, + struct fm10k_hw_stats_q *q, + u32 idx) +{ + u32 id_tx, id_tx_prev, tx_packets; + u64 tx_bytes = 0; + + DEBUGFUNC("fm10k_update_hw_stats_tx_q"); + + /* Retrieve TX Owner Data */ + id_tx = FM10K_READ_REG(hw, FM10K_TXQCTL(idx)); + + /* Process TX Ring */ + do { + tx_packets = fm10k_read_hw_stats_32b(hw, FM10K_QPTC(idx), + &q->tx_packets); + + if (tx_packets) + tx_bytes = fm10k_read_hw_stats_48b(hw, + FM10K_QBTC_L(idx), + &q->tx_bytes); + + /* Re-Check Owner Data */ + id_tx_prev = id_tx; + id_tx = FM10K_READ_REG(hw, FM10K_TXQCTL(idx)); + } while ((id_tx ^ id_tx_prev) & FM10K_TXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id_tx &= FM10K_TXQCTL_ID_MASK; + id_tx |= FM10K_STAT_VALID; + + /* update packet counts */ + if (q->tx_stats_idx == id_tx) { + q->tx_packets.count += tx_packets; + q->tx_bytes.count += tx_bytes; + } + + /* update bases and record ID */ + fm10k_update_hw_base_32b(&q->tx_packets, tx_packets); + fm10k_update_hw_base_48b(&q->tx_bytes, tx_bytes); + + q->tx_stats_idx = id_tx; +} + +/** + * fm10k_update_hw_stats_rx_q - Updates RX queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * + * Function updates the RX queue statistics counters that are related to the + * hardware. + **/ +STATIC void fm10k_update_hw_stats_rx_q(struct fm10k_hw *hw, + struct fm10k_hw_stats_q *q, + u32 idx) +{ + u32 id_rx, id_rx_prev, rx_packets, rx_drops; + u64 rx_bytes = 0; + + DEBUGFUNC("fm10k_update_hw_stats_rx_q"); + + /* Retrieve RX Owner Data */ + id_rx = FM10K_READ_REG(hw, FM10K_RXQCTL(idx)); + + /* Process RX Ring */ + do { + rx_drops = fm10k_read_hw_stats_32b(hw, FM10K_QPRDC(idx), + &q->rx_drops); + + rx_packets = fm10k_read_hw_stats_32b(hw, FM10K_QPRC(idx), + &q->rx_packets); + + if (rx_packets) + rx_bytes = fm10k_read_hw_stats_48b(hw, + FM10K_QBRC_L(idx), + &q->rx_bytes); + + /* Re-Check Owner Data */ + id_rx_prev = id_rx; + id_rx = FM10K_READ_REG(hw, FM10K_RXQCTL(idx)); + } while ((id_rx ^ id_rx_prev) & FM10K_RXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id_rx &= FM10K_RXQCTL_ID_MASK; + id_rx |= FM10K_STAT_VALID; + + /* update packet counts */ + if (q->rx_stats_idx == id_rx) { + q->rx_drops.count += rx_drops; + q->rx_packets.count += rx_packets; + q->rx_bytes.count += rx_bytes; + } + + /* update bases and record ID */ + fm10k_update_hw_base_32b(&q->rx_drops, rx_drops); + fm10k_update_hw_base_32b(&q->rx_packets, rx_packets); + fm10k_update_hw_base_48b(&q->rx_bytes, rx_bytes); + + q->rx_stats_idx = id_rx; +} + +/** + * fm10k_update_hw_stats_q - Updates queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * @count: number of queues to iterate over + * + * Function updates the queue statistics counters that are related to the + * hardware. 
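/* A small sketch of the ownership re-check loop used by
 * fm10k_update_hw_stats_tx_q()/_rx_q() above: sample the queue owner ID, read
 * the counters, then re-read the ID and retry if it changed, so counters read
 * while a queue is being re-assigned are never trusted. The struct below is
 * an illustrative stand-in, not the driver's register layout. */
#include <stdint.h>

struct example_q_regs {
	uint32_t owner_id;	/* stands in for the TXQCTL/RXQCTL ID field */
	uint32_t packets;	/* stands in for the packet count register */
};

uint32_t example_read_stable(const volatile struct example_q_regs *q,
			     uint32_t *owner_out)
{
	uint32_t id, id_prev, pkts;

	id = q->owner_id;
	do {
		id_prev = id;
		pkts = q->packets;	/* sample the counter */
		id = q->owner_id;	/* re-check ownership */
	} while (id != id_prev);	/* owner changed mid-read: retry */

	*owner_out = id;
	return pkts;
}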
+ **/ +void fm10k_update_hw_stats_q(struct fm10k_hw *hw, struct fm10k_hw_stats_q *q, + u32 idx, u32 count) +{ + u32 i; + + DEBUGFUNC("fm10k_update_hw_stats_q"); + + for (i = 0; i < count; i++, idx++, q++) { + fm10k_update_hw_stats_tx_q(hw, q, idx); + fm10k_update_hw_stats_rx_q(hw, q, idx); + } +} + +/** + * fm10k_unbind_hw_stats_q - Unbind the queue counters from their queues + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * @count: number of queues to iterate over + * + * Function invalidates the index values for the queues so any updates that + * may have happened are ignored and the base for the queue stats is reset. + **/ +void fm10k_unbind_hw_stats_q(struct fm10k_hw_stats_q *q, u32 idx, u32 count) +{ + u32 i; + + for (i = 0; i < count; i++, idx++, q++) { + q->rx_stats_idx = 0; + q->tx_stats_idx = 0; + } +} + +/** + * fm10k_get_host_state_generic - Returns the state of the host + * @hw: pointer to hardware structure + * @host_ready: pointer to boolean value that will record host state + * + * This function will check the health of the mailbox and Tx queue 0 + * in order to determine if we should report that the link is up or not. + **/ +s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + struct fm10k_mac_info *mac = &hw->mac; + s32 ret_val = FM10K_SUCCESS; + u32 txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(0)); + + DEBUGFUNC("fm10k_get_host_state_generic"); + + /* process upstream mailbox in case interrupts were disabled */ + mbx->ops.process(hw, mbx); + + /* If Tx is no longer enabled link should come down */ + if (!(~txdctl) || !(txdctl & FM10K_TXDCTL_ENABLE)) + mac->get_host_state = true; + + /* exit if not checking for link, or link cannot be changed */ + if (!mac->get_host_state || !(~txdctl)) + goto out; + + /* if we somehow dropped the Tx enable we should reset */ + if (hw->mac.tx_ready && !(txdctl & FM10K_TXDCTL_ENABLE)) { + ret_val = FM10K_ERR_RESET_REQUESTED; + goto out; + } + + /* if Mailbox timed out we should request reset */ + if (!mbx->timeout) { + ret_val = FM10K_ERR_RESET_REQUESTED; + goto out; + } + + /* verify Mailbox is still valid */ + if (!mbx->ops.tx_ready(mbx, FM10K_VFMBX_MSG_MTU)) + goto out; + + /* interface cannot receive traffic without logical ports */ + if (mac->dglort_map == FM10K_DGLORTMAP_NONE) + goto out; + + /* if we passed all the tests above then the switch is ready and we no + * longer need to check for link + */ + mac->get_host_state = false; + +out: + *host_ready = !mac->get_host_state; + return ret_val; +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_common.h b/lib/librte_pmd_fm10k/base/fm10k_common.h new file mode 100644 index 0000000..45fbbc0 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_common.h @@ -0,0 +1,52 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. 
Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_COMMON_H_ +#define _FM10K_COMMON_H_ + +#include "fm10k_type.h" + +u16 fm10k_get_pcie_msix_count_generic(struct fm10k_hw *hw); +s32 fm10k_init_ops_generic(struct fm10k_hw *hw); +s32 fm10k_disable_queues_generic(struct fm10k_hw *hw, u16 q_cnt); +s32 fm10k_start_hw_generic(struct fm10k_hw *hw); +s32 fm10k_stop_hw_generic(struct fm10k_hw *hw); +u32 fm10k_read_hw_stats_32b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat); +#define fm10k_update_hw_base_32b(stat, delta) ((stat)->base_l += (delta)) +void fm10k_update_hw_stats_q(struct fm10k_hw *hw, struct fm10k_hw_stats_q *q, + u32 idx, u32 count); +#define fm10k_unbind_hw_stats_32b(s) ((s)->base_h = 0) +void fm10k_unbind_hw_stats_q(struct fm10k_hw_stats_q *q, u32 idx, u32 count); +s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready); +#endif /* _FM10K_COMMON_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_mbx.c b/lib/librte_pmd_fm10k/base/fm10k_mbx.c new file mode 100644 index 0000000..2081414 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_mbx.c @@ -0,0 +1,2185 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_common.h" + +/** + * fm10k_fifo_init - Initialize a message FIFO + * @fifo: pointer to FIFO + * @buffer: pointer to memory to be used to store FIFO + * @size: maximum message size to store in FIFO, must be 2^n - 1 + **/ +STATIC void fm10k_fifo_init(struct fm10k_mbx_fifo *fifo, u32 *buffer, u16 size) +{ + fifo->buffer = buffer; + fifo->size = size; + fifo->head = 0; + fifo->tail = 0; +} + +/** + * fm10k_fifo_used - Retrieve used space in FIFO + * @fifo: pointer to FIFO + * + * This function returns the number of DWORDs used in the FIFO + **/ +STATIC u16 fm10k_fifo_used(struct fm10k_mbx_fifo *fifo) +{ + return fifo->tail - fifo->head; +} + +/** + * fm10k_fifo_unused - Retrieve unused space in FIFO + * @fifo: pointer to FIFO + * + * This function returns the number of unused DWORDs in the FIFO + **/ +STATIC u16 fm10k_fifo_unused(struct fm10k_mbx_fifo *fifo) +{ + return fifo->size + fifo->head - fifo->tail; +} + +/** + * fm10k_fifo_empty - Test to verify if fifo is empty + * @fifo: pointer to FIFO + * + * This function returns true if the FIFO is empty, else false + **/ +STATIC bool fm10k_fifo_empty(struct fm10k_mbx_fifo *fifo) +{ + return fifo->head == fifo->tail; +} + +/** + * fm10k_fifo_head_offset - returns indices of head with given offset + * @fifo: pointer to FIFO + * @offset: offset to add to head + * + * This function returns the indices into the fifo based on head + offset + **/ +STATIC u16 fm10k_fifo_head_offset(struct fm10k_mbx_fifo *fifo, u16 offset) +{ + return (fifo->head + offset) & (fifo->size - 1); +} + +/** + * fm10k_fifo_tail_offset - returns indices of tail with given offset + * @fifo: pointer to FIFO + * @offset: offset to add to tail + * + * This function returns the indices into the fifo based on tail + offset + **/ +STATIC u16 fm10k_fifo_tail_offset(struct fm10k_mbx_fifo *fifo, u16 offset) +{ + return (fifo->tail + offset) & (fifo->size - 1); +} + +/** + * fm10k_fifo_head_len - Retrieve length of first message in FIFO + * @fifo: pointer to FIFO + * + * This function returns the size of the first message in the FIFO + **/ +STATIC u16 fm10k_fifo_head_len(struct fm10k_mbx_fifo *fifo) +{ + u32 *head = fifo->buffer + fm10k_fifo_head_offset(fifo, 0); + + /* verify there is at least 1 DWORD in the fifo so *head is valid */ + if (fm10k_fifo_empty(fifo)) + return 0; + + /* retieve the message length */ + return FM10K_TLV_DWORD_LEN(*head); +} + +/** + * fm10k_fifo_head_drop - Drop the first message in FIFO + * @fifo: pointer to FIFO + * + * This function returns the size of the message dropped from the FIFO + **/ +STATIC u16 fm10k_fifo_head_drop(struct fm10k_mbx_fifo *fifo) +{ + u16 len = fm10k_fifo_head_len(fifo); + + /* update head so it is at the start of next frame */ + fifo->head += len; + + return len; +} + +/** + * fm10k_mbx_index_len - Convert a head/tail index into a length value + * @mbx: pointer to mailbox + * @head: head index + * @tail: head index + * + * 
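/* A compact model of the FIFO convention the helpers above rely on: head and
 * tail are free-running u16 counters that are never masked, so "tail - head"
 * is the used count even across u16 wrap-around, and masking with (size - 1)
 * happens only when an index is turned into a buffer position. This sketch
 * assumes a power-of-two FIFO size, as the masking arithmetic in
 * fm10k_fifo_head_offset()/_tail_offset() implies; names are illustrative. */
#include <stdint.h>

#define EXAMPLE_FIFO_SIZE 8u		/* power of two, in 32-bit words */

struct example_fifo {
	uint32_t buffer[EXAMPLE_FIFO_SIZE];
	uint16_t head, tail;		/* free-running, never masked */
};

uint16_t example_fifo_used(const struct example_fifo *f)
{
	return f->tail - f->head;	/* correct across u16 wrap-around */
}

uint16_t example_fifo_unused(const struct example_fifo *f)
{
	return EXAMPLE_FIFO_SIZE + f->head - f->tail;
}

uint32_t *example_fifo_slot(struct example_fifo *f, uint16_t offset)
{
	/* mask only when converting an index into a buffer position */
	return &f->buffer[(f->head + offset) & (EXAMPLE_FIFO_SIZE - 1)];
}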
This function takes the head and tail index and determines the length + * of the data indicated by this pair. + **/ +STATIC u16 fm10k_mbx_index_len(struct fm10k_mbx_info *mbx, u16 head, u16 tail) +{ + u16 len = tail - head; + + /* we wrapped so subtract 2, one for index 0, one for all 1s index */ + if (len > tail) + len -= 2; + + return len & ((mbx->mbmem_len << 1) - 1); +} + +/** + * fm10k_mbx_tail_add - Determine new tail value with added offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local tail index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_tail_add(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 tail = (mbx->tail + offset + 1) & ((mbx->mbmem_len << 1) - 1); + + /* add/sub 1 because we cannot have offset 0 or all 1s */ + return (tail > mbx->tail) ? --tail : ++tail; +} + +/** + * fm10k_mbx_tail_sub - Determine new tail value with subtracted offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local tail index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_tail_sub(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 tail = (mbx->tail - offset - 1) & ((mbx->mbmem_len << 1) - 1); + + /* sub/add 1 because we cannot have offset 0 or all 1s */ + return (tail < mbx->tail) ? ++tail : --tail; +} + +/** + * fm10k_mbx_head_add - Determine new head value with added offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local head index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_head_add(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 head = (mbx->head + offset + 1) & ((mbx->mbmem_len << 1) - 1); + + /* add/sub 1 because we cannot have offset 0 or all 1s */ + return (head > mbx->head) ? --head : ++head; +} + +/** + * fm10k_mbx_head_sub - Determine new head value with subtracted offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local head index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_head_sub(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 head = (mbx->head - offset - 1) & ((mbx->mbmem_len << 1) - 1); + + /* sub/add 1 because we cannot have offset 0 or all 1s */ + return (head < mbx->head) ? ++head : --head; +} + +/** + * fm10k_mbx_pushed_tail_len - Retrieve the length of message being pushed + * @mbx: pointer to mailbox + * + * This function will return the length of the message currently being + * pushed onto the tail of the Rx queue. + **/ +STATIC u16 fm10k_mbx_pushed_tail_len(struct fm10k_mbx_info *mbx) +{ + u32 *tail = mbx->rx.buffer + fm10k_fifo_tail_offset(&mbx->rx, 0); + + /* pushed tail is only valid if pushed is set */ + if (!mbx->pushed) + return 0; + + return FM10K_TLV_DWORD_LEN(*tail); +} + +/** + * fm10k_fifo_write_copy - pulls data off of msg and places it in fifo + * @fifo: pointer to FIFO + * @msg: message array to populate + * @tail_offset: additional offset to add to tail pointer + * @len: length of FIFO to copy into message header + * + * This function will take a message and copy it into a section of the + * FIFO. In order to get something into a location other than just + * the tail you can use tail_offset to adjust the pointer. 
+ **/ +STATIC void fm10k_fifo_write_copy(struct fm10k_mbx_fifo *fifo, + const u32 *msg, u16 tail_offset, u16 len) +{ + u16 end = fm10k_fifo_tail_offset(fifo, tail_offset); + u32 *tail = fifo->buffer + end; + + /* track when we should cross the end of the FIFO */ + end = fifo->size - end; + + /* copy end of message before start of message */ + if (end < len) + memcpy(fifo->buffer, msg + end, (len - end) << 2); + else + end = len; + + /* Copy remaining message into Tx FIFO */ + memcpy(tail, msg, end << 2); +} + +/** + * fm10k_fifo_enqueue - Enqueues the message to the tail of the FIFO + * @fifo: pointer to FIFO + * @msg: message array to read + * + * This function enqueues a message up to the size specified by the length + * contained in the first DWORD of the message and will place at the tail + * of the FIFO. It will return 0 on success, or a negative value on error. + **/ +STATIC s32 fm10k_fifo_enqueue(struct fm10k_mbx_fifo *fifo, const u32 *msg) +{ + u16 len = FM10K_TLV_DWORD_LEN(*msg); + + DEBUGFUNC("fm10k_fifo_enqueue"); + + /* verify parameters */ + if (len > fifo->size) + return FM10K_MBX_ERR_SIZE; + + /* verify there is room for the message */ + if (len > fm10k_fifo_unused(fifo)) + return FM10K_MBX_ERR_NO_SPACE; + + /* Copy message into FIFO */ + fm10k_fifo_write_copy(fifo, msg, 0, len); + + /* memory barrier to guarantee FIFO is written before tail update */ + FM10K_WMB(); + + /* Update Tx FIFO tail */ + fifo->tail += len; + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_validate_msg_size - Validate incoming message based on size + * @mbx: pointer to mailbox + * @len: length of data pushed onto buffer + * + * This function analyzes the frame and will return a non-zero value when + * the start of a message larger than the mailbox is detected. + **/ +STATIC u16 fm10k_mbx_validate_msg_size(struct fm10k_mbx_info *mbx, u16 len) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 total_len = 0, msg_len; + u32 *msg; + + DEBUGFUNC("fm10k_mbx_validate_msg"); + + /* length should include previous amounts pushed */ + len += mbx->pushed; + + /* offset in message is based off of current message size */ + do { + msg = fifo->buffer + fm10k_fifo_tail_offset(fifo, total_len); + msg_len = FM10K_TLV_DWORD_LEN(*msg); + total_len += msg_len; + } while (total_len < len); + + /* message extends out of pushed section, but fits in FIFO */ + if ((len < total_len) && (msg_len <= mbx->rx.size)) + return 0; + + /* return length of invalid section */ + return (len < total_len) ? len : (len - total_len); +} + +/** + * fm10k_mbx_write_copy - pulls data off of Tx FIFO and places it in mbmem + * @mbx: pointer to mailbox + * + * This function will take a section of the Tx FIFO and copy it into the + * mailbox memory. The offset in mbmem is based on the lower bits of the + * tail and len determines the length to copy.
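/* A standalone sketch of the split copy fm10k_fifo_write_copy() performs
 * above: when a message crosses the physical end of the ring, the copy is
 * done in two memcpy() calls, one for the segment up to the end of the
 * buffer and one for the wrapped remainder at its start. The ring size and
 * example_ name are assumptions of this sketch. */
#include <stdint.h>
#include <string.h>

#define EXAMPLE_RING_WORDS 8u		/* power of two, in 32-bit words */

void example_ring_write(uint32_t *ring, uint16_t tail,
			const uint32_t *msg, uint16_t len)
{
	uint16_t start = tail & (EXAMPLE_RING_WORDS - 1);
	uint16_t to_end = EXAMPLE_RING_WORDS - start;
	uint16_t first = (len < to_end) ? len : to_end;

	memcpy(&ring[start], msg, (size_t)first * sizeof(uint32_t));
	if (len > first)	/* wrapped: finish at the start of the ring */
		memcpy(ring, &msg[first],
		       (size_t)(len - first) * sizeof(uint32_t));
}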
+ **/ +STATIC void fm10k_mbx_write_copy(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->tx; + u32 mbmem = mbx->mbmem_reg; + u32 *head = fifo->buffer; + u16 end, len, tail, mask; + + DEBUGFUNC("fm10k_mbx_write_copy"); + + if (!mbx->tail_len) + return; + + /* determine data length and mbmem tail index */ + mask = mbx->mbmem_len - 1; + len = mbx->tail_len; + tail = fm10k_mbx_tail_sub(mbx, len); + if (tail > mask) + tail++; + + /* determine offset in the ring */ + end = fm10k_fifo_head_offset(fifo, mbx->pulled); + head += end; + + /* memory barrier to guarantee data is ready to be read */ + FM10K_RMB(); + + /* Copy message from Tx FIFO */ + for (end = fifo->size - end; len; head = fifo->buffer) { + do { + /* adjust tail to match offset for FIFO */ + tail &= mask; + if (!tail) + tail++; + + /* write message to hardware FIFO */ + FM10K_WRITE_MBX(hw, mbmem + tail++, *(head++)); + } while (--len && --end); + } +} + +/** + * fm10k_mbx_pull_head - Pulls data off of head of Tx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @head: acknowledgement number last received + * + * This function will push the tail index forward based on the remote + * head index. It will then pull up to mbmem_len DWORDs off of the + * head of the FIFO and will place it in the MBMEM registers + * associated with the mailbox. + **/ +STATIC void fm10k_mbx_pull_head(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + u16 mbmem_len, len, ack = fm10k_mbx_index_len(mbx, head, mbx->tail); + struct fm10k_mbx_fifo *fifo = &mbx->tx; + + /* update number of bytes pulled and update bytes in transit */ + mbx->pulled += mbx->tail_len - ack; + + /* determine length of data to pull, reserve space for mbmem header */ + mbmem_len = mbx->mbmem_len - 1; + len = fm10k_fifo_used(fifo) - mbx->pulled; + if (len > mbmem_len) + len = mbmem_len; + + /* update tail and record number of bytes in transit */ + mbx->tail = fm10k_mbx_tail_add(mbx, len - ack); + mbx->tail_len = len; + + /* drop pulled messages from the FIFO */ + for (len = fm10k_fifo_head_len(fifo); + len && (mbx->pulled >= len); + len = fm10k_fifo_head_len(fifo)) { + mbx->pulled -= fm10k_fifo_head_drop(fifo); + mbx->tx_messages++; + mbx->tx_dwords += len; + } + + /* Copy message out from the Tx FIFO */ + fm10k_mbx_write_copy(hw, mbx); +} + +/** + * fm10k_mbx_read_copy - pulls data off of mbmem and places it in Rx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will take a section of the mailbox memory and copy it + * into the Rx FIFO. The offset is based on the lower bits of the + * head and len determines the length to copy. 
+ **/ +STATIC void fm10k_mbx_read_copy(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u32 mbmem = mbx->mbmem_reg ^ mbx->mbmem_len; + u32 *tail = fifo->buffer; + u16 end, len, head; + + DEBUGFUNC("fm10k_mbx_read_copy"); + + /* determine data length and mbmem head index */ + len = mbx->head_len; + head = fm10k_mbx_head_sub(mbx, len); + if (head >= mbx->mbmem_len) + head++; + + /* determine offset in the ring */ + end = fm10k_fifo_tail_offset(fifo, mbx->pushed); + tail += end; + + /* Copy message into Rx FIFO */ + for (end = fifo->size - end; len; tail = fifo->buffer) { + do { + /* adjust head to match offset for FIFO */ + head &= mbx->mbmem_len - 1; + if (!head) + head++; + + /* read message from hardware FIFO */ + *(tail++) = FM10K_READ_MBX(hw, mbmem + head++); + } while (--len && --end); + } + + /* memory barrier to guarantee FIFO is written before tail update */ + FM10K_WMB(); +} + +/** + * fm10k_mbx_push_tail - Pushes up to 15 DWORDs on to tail of FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @tail: tail index of message + * + * This function will first validate the tail index and size for the + * incoming message. It then updates the acknowledgment number and + * copies the data into the FIFO. It will return the number of messages + * dequeued on success and a negative value on error. + **/ +STATIC s32 fm10k_mbx_push_tail(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, + u16 tail) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 len, seq = fm10k_mbx_index_len(mbx, mbx->head, tail); + + DEBUGFUNC("fm10k_mbx_push_tail"); + + /* determine length of data to push */ + len = fm10k_fifo_unused(fifo) - mbx->pushed; + if (len > seq) + len = seq; + + /* update head and record bytes received */ + mbx->head = fm10k_mbx_head_add(mbx, len); + mbx->head_len = len; + + /* nothing to do if there is no data */ + if (!len) + return FM10K_SUCCESS; + + /* Copy msg into Rx FIFO */ + fm10k_mbx_read_copy(hw, mbx); + + /* determine if there are any invalid lengths in message */ + if (fm10k_mbx_validate_msg_size(mbx, len)) + return FM10K_MBX_ERR_SIZE; + + /* Update pushed */ + mbx->pushed += len; + + /* flush any completed messages */ + for (len = fm10k_mbx_pushed_tail_len(mbx); + len && (mbx->pushed >= len); + len = fm10k_mbx_pushed_tail_len(mbx)) { + fifo->tail += len; + mbx->pushed -= len; + mbx->rx_messages++; + mbx->rx_dwords += len; + } + + return FM10K_SUCCESS; +} + +/* pre-generated data for generating the CRC based on the poly 0xAC9A. 
*/ +static const u16 fm10k_crc_16b_table[256] = { + 0x0000, 0x7956, 0xF2AC, 0x8BFA, 0xBC6D, 0xC53B, 0x4EC1, 0x3797, + 0x21EF, 0x58B9, 0xD343, 0xAA15, 0x9D82, 0xE4D4, 0x6F2E, 0x1678, + 0x43DE, 0x3A88, 0xB172, 0xC824, 0xFFB3, 0x86E5, 0x0D1F, 0x7449, + 0x6231, 0x1B67, 0x909D, 0xE9CB, 0xDE5C, 0xA70A, 0x2CF0, 0x55A6, + 0x87BC, 0xFEEA, 0x7510, 0x0C46, 0x3BD1, 0x4287, 0xC97D, 0xB02B, + 0xA653, 0xDF05, 0x54FF, 0x2DA9, 0x1A3E, 0x6368, 0xE892, 0x91C4, + 0xC462, 0xBD34, 0x36CE, 0x4F98, 0x780F, 0x0159, 0x8AA3, 0xF3F5, + 0xE58D, 0x9CDB, 0x1721, 0x6E77, 0x59E0, 0x20B6, 0xAB4C, 0xD21A, + 0x564D, 0x2F1B, 0xA4E1, 0xDDB7, 0xEA20, 0x9376, 0x188C, 0x61DA, + 0x77A2, 0x0EF4, 0x850E, 0xFC58, 0xCBCF, 0xB299, 0x3963, 0x4035, + 0x1593, 0x6CC5, 0xE73F, 0x9E69, 0xA9FE, 0xD0A8, 0x5B52, 0x2204, + 0x347C, 0x4D2A, 0xC6D0, 0xBF86, 0x8811, 0xF147, 0x7ABD, 0x03EB, + 0xD1F1, 0xA8A7, 0x235D, 0x5A0B, 0x6D9C, 0x14CA, 0x9F30, 0xE666, + 0xF01E, 0x8948, 0x02B2, 0x7BE4, 0x4C73, 0x3525, 0xBEDF, 0xC789, + 0x922F, 0xEB79, 0x6083, 0x19D5, 0x2E42, 0x5714, 0xDCEE, 0xA5B8, + 0xB3C0, 0xCA96, 0x416C, 0x383A, 0x0FAD, 0x76FB, 0xFD01, 0x8457, + 0xAC9A, 0xD5CC, 0x5E36, 0x2760, 0x10F7, 0x69A1, 0xE25B, 0x9B0D, + 0x8D75, 0xF423, 0x7FD9, 0x068F, 0x3118, 0x484E, 0xC3B4, 0xBAE2, + 0xEF44, 0x9612, 0x1DE8, 0x64BE, 0x5329, 0x2A7F, 0xA185, 0xD8D3, + 0xCEAB, 0xB7FD, 0x3C07, 0x4551, 0x72C6, 0x0B90, 0x806A, 0xF93C, + 0x2B26, 0x5270, 0xD98A, 0xA0DC, 0x974B, 0xEE1D, 0x65E7, 0x1CB1, + 0x0AC9, 0x739F, 0xF865, 0x8133, 0xB6A4, 0xCFF2, 0x4408, 0x3D5E, + 0x68F8, 0x11AE, 0x9A54, 0xE302, 0xD495, 0xADC3, 0x2639, 0x5F6F, + 0x4917, 0x3041, 0xBBBB, 0xC2ED, 0xF57A, 0x8C2C, 0x07D6, 0x7E80, + 0xFAD7, 0x8381, 0x087B, 0x712D, 0x46BA, 0x3FEC, 0xB416, 0xCD40, + 0xDB38, 0xA26E, 0x2994, 0x50C2, 0x6755, 0x1E03, 0x95F9, 0xECAF, + 0xB909, 0xC05F, 0x4BA5, 0x32F3, 0x0564, 0x7C32, 0xF7C8, 0x8E9E, + 0x98E6, 0xE1B0, 0x6A4A, 0x131C, 0x248B, 0x5DDD, 0xD627, 0xAF71, + 0x7D6B, 0x043D, 0x8FC7, 0xF691, 0xC106, 0xB850, 0x33AA, 0x4AFC, + 0x5C84, 0x25D2, 0xAE28, 0xD77E, 0xE0E9, 0x99BF, 0x1245, 0x6B13, + 0x3EB5, 0x47E3, 0xCC19, 0xB54F, 0x82D8, 0xFB8E, 0x7074, 0x0922, + 0x1F5A, 0x660C, 0xEDF6, 0x94A0, 0xA337, 0xDA61, 0x519B, 0x28CD }; + +/** + * fm10k_crc_16b - Generate a 16 bit CRC for a region of 16 bit data + * @data: pointer to data to process + * @seed: seed value for CRC + * @len: length measured in 16 bits words + * + * This function will generate a CRC based on the polynomial 0xAC9A and + * whatever value is stored in the seed variable. Note that this + * value inverts the local seed and the result in order to capture all + * leading and trailing zeros. 
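/* The 256-entry table above can be regenerated with the usual reflected
 * (LSB-first) table construction for the named polynomial 0xAC9A; the spot
 * checks in main() match the entries shown (index 0x01 -> 0x7956, index
 * 0x80 -> 0xAC9A). This generator is an illustration, not code from the
 * patch. */
#include <stdint.h>
#include <stdio.h>

#define EXAMPLE_CRC_POLY 0xAC9Au

void example_build_crc16_table(uint16_t table[256])
{
	unsigned int i, bit;

	for (i = 0; i < 256; i++) {
		uint16_t r = (uint16_t)i;

		for (bit = 0; bit < 8; bit++)
			r = (r & 1) ? (uint16_t)((r >> 1) ^ EXAMPLE_CRC_POLY)
				    : (uint16_t)(r >> 1);
		table[i] = r;
	}
}

int main(void)
{
	uint16_t t[256];

	example_build_crc16_table(t);
	printf("t[0x01]=0x%04X t[0x80]=0x%04X\n", t[0x01], t[0x80]);
	return 0;
}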
+ */ +STATIC u16 fm10k_crc_16b(const u32 *data, u16 seed, u16 len) +{ + u32 result = seed; + + while (len--) { + result ^= *(data++); + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + + if (!(len--)) + break; + + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + } + + return (u16)result; +} + +/** + * fm10k_fifo_crc - generate a CRC based off of FIFO data + * @fifo: pointer to FIFO + * @offset: offset point for start of FIFO + * @len: number of DWORDS words to process + * @seed: seed value for CRC + * + * This function generates a CRC for some region of the FIFO + **/ +STATIC u16 fm10k_fifo_crc(struct fm10k_mbx_fifo *fifo, u16 offset, + u16 len, u16 seed) +{ + u32 *data = fifo->buffer + offset; + + /* track when we should cross the end of the FIFO */ + offset = fifo->size - offset; + + /* if we are in 2 blocks process the end of the FIFO first */ + if (offset < len) { + seed = fm10k_crc_16b(data, seed, offset * 2); + data = fifo->buffer; + len -= offset; + } + + /* process any remaining bits */ + return fm10k_crc_16b(data, seed, len * 2); +} + +/** + * fm10k_mbx_update_local_crc - Update the local CRC for outgoing data + * @mbx: pointer to mailbox + * @head: head index provided by remote mailbox + * + * This function will generate the CRC for all data from the end of the + * last head update to the current one. It uses the result of the + * previous CRC as the seed for this update. The result is stored in + * mbx->local. + **/ +STATIC void fm10k_mbx_update_local_crc(struct fm10k_mbx_info *mbx, u16 head) +{ + u16 len = mbx->tail_len - fm10k_mbx_index_len(mbx, head, mbx->tail); + + /* determine the offset for the start of the region to be pulled */ + head = fm10k_fifo_head_offset(&mbx->tx, mbx->pulled); + + /* update local CRC to include all of the pulled data */ + mbx->local = fm10k_fifo_crc(&mbx->tx, head, len, mbx->local); +} + +/** + * fm10k_mbx_verify_remote_crc - Verify the CRC is correct for current data + * @mbx: pointer to mailbox + * + * This function will take all data that has been provided from the remote + * end and generate a CRC for it. This is stored in mbx->remote. The + * CRC for the header is then computed and if the result is non-zero this + * is an error and we signal an error dropping all data and resetting the + * connection. + */ +STATIC s32 fm10k_mbx_verify_remote_crc(struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 len = mbx->head_len; + u16 offset = fm10k_fifo_tail_offset(fifo, mbx->pushed) - len; + u16 crc; + + /* update the remote CRC if new data has been received */ + if (len) + mbx->remote = fm10k_fifo_crc(fifo, offset, len, mbx->remote); + + /* process the full header as we have to validate the CRC */ + crc = fm10k_crc_16b(&mbx->mbx_hdr, mbx->remote, 1); + + /* notify other end if we have a problem */ + return crc ? FM10K_MBX_ERR_CRC : FM10K_SUCCESS; +} + +/** + * fm10k_mbx_rx_ready - Indicates that a message is ready in the Rx FIFO + * @mbx: pointer to mailbox + * + * This function returns true if there is a message in the Rx FIFO to dequeue. 
+ **/ +STATIC bool fm10k_mbx_rx_ready(struct fm10k_mbx_info *mbx) +{ + u16 msg_size = fm10k_fifo_head_len(&mbx->rx); + + return msg_size && (fm10k_fifo_used(&mbx->rx) >= msg_size); +} + +/** + * fm10k_mbx_tx_ready - Indicates that the mailbox is in state ready for Tx + * @mbx: pointer to mailbox + * @len: verify free space is >= this value + * + * This function returns true if the mailbox is in a state ready to transmit. + **/ +STATIC bool fm10k_mbx_tx_ready(struct fm10k_mbx_info *mbx, u16 len) +{ + u16 fifo_unused = fm10k_fifo_unused(&mbx->tx); + + return (mbx->state == FM10K_STATE_OPEN) && (fifo_unused >= len); +} + +/** + * fm10k_mbx_tx_complete - Indicates that the Tx FIFO has been emptied + * @mbx: pointer to mailbox + * + * This function returns true if the Tx FIFO is empty. + **/ +STATIC bool fm10k_mbx_tx_complete(struct fm10k_mbx_info *mbx) +{ + return fm10k_fifo_empty(&mbx->tx); +} + +/** + * fm10k_mbx_deqeueue_rx - Dequeues the message from the head in the Rx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function dequeues messages and hands them off to the tlv parser. + * It will return the number of messages processed when called. + **/ +STATIC u16 fm10k_mbx_dequeue_rx(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + s32 err; + u16 cnt; + + /* parse Rx messages out of the Rx FIFO to empty it */ + for (cnt = 0; !fm10k_fifo_empty(fifo); cnt++) { + err = fm10k_tlv_msg_parse(hw, fifo->buffer + fifo->head, + mbx, mbx->msg_data); + if (err < 0) + mbx->rx_parse_err++; + + fm10k_fifo_head_drop(fifo); + } + + /* shift remaining bytes back to start of FIFO */ + memmove(fifo->buffer, fifo->buffer + fifo->tail, mbx->pushed << 2); + + /* shift head and tail based on the memory we moved */ + fifo->tail -= fifo->head; + fifo->head = 0; + + return cnt; +} + +/** + * fm10k_mbx_enqueue_tx - Enqueues the message to the tail of the Tx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg: message array to read + * + * This function enqueues a message up to the size specified by the length + * contained in the first DWORD of the message and will place at the tail + * of the FIFO. It will return 0 on success, or a negative value on error. 
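/* A simplified model of the compaction step at the end of
 * fm10k_mbx_dequeue_rx() above: once every complete message has been parsed
 * (head == tail), the words of a still-incomplete message are slid back to
 * the start of the buffer and both indices restart at zero (the patch writes
 * this as "tail -= head; head = 0", which is equivalent under that
 * precondition). Names here are illustrative only. */
#include <stdint.h>
#include <string.h>

void example_compact_partial(uint32_t *buf, uint16_t *head, uint16_t *tail,
			     uint16_t pushed_words)
{
	/* the unfinished words sit at buf[*tail .. *tail + pushed_words) */
	memmove(buf, buf + *tail, (size_t)pushed_words * sizeof(uint32_t));

	*tail = 0;
	*head = 0;
}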
+ **/ +STATIC s32 fm10k_mbx_enqueue_tx(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, const u32 *msg) +{ + u32 countdown = mbx->timeout; + s32 err; + + switch (mbx->state) { + case FM10K_STATE_CLOSED: + case FM10K_STATE_DISCONNECT: + return FM10K_MBX_ERR_NO_MBX; + default: + break; + } + + /* enqueue the message on the Tx FIFO */ + err = fm10k_fifo_enqueue(&mbx->tx, msg); + + /* if it failed give the FIFO a chance to drain */ + while (err && countdown) { + countdown--; + usec_delay(mbx->usec_delay); + mbx->ops.process(hw, mbx); + err = fm10k_fifo_enqueue(&mbx->tx, msg); + } + + /* if we failed treat the error */ + if (err) { + mbx->timeout = 0; + mbx->tx_busy++; + } + + /* begin processing message, ignore errors as this is just meant + * to start the mailbox flow so we are not concerned if there + * is a bad error, or the mailbox is already busy with a request + */ + if (!mbx->tail_len) + mbx->ops.process(hw, mbx); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_read - Copies the mbmem to local message buffer + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function copies the message from the mbmem to the message array + **/ +STATIC s32 fm10k_mbx_read(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_read"); + + /* only allow one reader in here at a time */ + if (mbx->mbx_hdr) + return FM10K_MBX_ERR_BUSY; + + /* read to capture initial interrupt bits */ + if (FM10K_READ_MBX(hw, mbx->mbx_reg) & FM10K_MBX_REQ_INTERRUPT) + mbx->mbx_lock = FM10K_MBX_ACK; + + /* write back interrupt bits to clear */ + FM10K_WRITE_MBX(hw, mbx->mbx_reg, + FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT); + + /* read remote header */ + mbx->mbx_hdr = FM10K_READ_MBX(hw, mbx->mbmem_reg ^ mbx->mbmem_len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_write - Copies the local message buffer to mbmem + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function copies the message from the the message array to mbmem + **/ +STATIC void fm10k_mbx_write(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + u32 mbmem = mbx->mbmem_reg; + + DEBUGFUNC("fm10k_mbx_write"); + + /* write new msg header to notify recipient of change */ + FM10K_WRITE_MBX(hw, mbmem, mbx->mbx_hdr); + + /* write mailbox to send interrupt */ + if (mbx->mbx_lock) + FM10K_WRITE_MBX(hw, mbx->mbx_reg, mbx->mbx_lock); + + /* we no longer are using the header so free it */ + mbx->mbx_hdr = 0; + mbx->mbx_lock = 0; +} + +/** + * fm10k_mbx_create_connect_hdr - Generate a connect mailbox header + * @mbx: pointer to mailbox + * + * This function returns a connection mailbox header + **/ +STATIC void fm10k_mbx_create_connect_hdr(struct fm10k_mbx_info *mbx) +{ + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_CONNECT, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD) | + FM10K_MSG_HDR_FIELD_SET(mbx->rx.size - 1, CONNECT_SIZE); +} + +/** + * fm10k_mbx_create_data_hdr - Generate a data mailbox header + * @mbx: pointer to mailbox + * + * This function returns a data mailbox header + **/ +STATIC void fm10k_mbx_create_data_hdr(struct fm10k_mbx_info *mbx) +{ + u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DATA, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->tail, TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); + struct fm10k_mbx_fifo *fifo = &mbx->tx; + u16 crc; + + if (mbx->tail_len) + mbx->mbx_lock |= FM10K_MBX_REQ; + + /* generate CRC for data in flight and header */ + crc = fm10k_fifo_crc(fifo, fm10k_fifo_head_offset(fifo, mbx->pulled), + 
mbx->tail_len, mbx->local); + crc = fm10k_crc_16b(&hdr, crc, 1); + + /* load header to memory to be written */ + mbx->mbx_hdr = hdr | FM10K_MSG_HDR_FIELD_SET(crc, CRC); +} + +/** + * fm10k_mbx_create_disconnect_hdr - Generate a disconnect mailbox header + * @mbx: pointer to mailbox + * + * This function returns a disconnect mailbox header + **/ +STATIC void fm10k_mbx_create_disconnect_hdr(struct fm10k_mbx_info *mbx) +{ + u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DISCONNECT, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->tail, TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); + u16 crc = fm10k_crc_16b(&hdr, mbx->local, 1); + + mbx->mbx_lock |= FM10K_MBX_ACK; + + /* load header to memory to be written */ + mbx->mbx_hdr = hdr | FM10K_MSG_HDR_FIELD_SET(crc, CRC); +} + +/** + * fm10k_mbx_create_error_msg - Generate a error message + * @mbx: pointer to mailbox + * @err: local error encountered + * + * This function will interpret the error provided by err, and based on + * that it may shift the message by 1 DWORD and then place an error header + * at the start of the message. + **/ +STATIC void fm10k_mbx_create_error_msg(struct fm10k_mbx_info *mbx, s32 err) +{ + /* only generate an error message for these types */ + switch (err) { + case FM10K_MBX_ERR_TAIL: + case FM10K_MBX_ERR_HEAD: + case FM10K_MBX_ERR_TYPE: + case FM10K_MBX_ERR_SIZE: + case FM10K_MBX_ERR_RSVD0: + case FM10K_MBX_ERR_CRC: + break; + default: + return; + } + + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_ERROR, TYPE) | + FM10K_MSG_HDR_FIELD_SET(err, ERR_NO) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); +} + +/** + * fm10k_mbx_validate_msg_hdr - Validate common fields in the message header + * @mbx: pointer to mailbox + * @msg: message array to read + * + * This function will parse up the fields in the mailbox header and return + * an error if the header contains any of a number of invalid configurations + * including unrecognized type, invalid route, or a malformed message. 
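/* The headers built above pack TYPE/HEAD/TAIL/CRC (or the SM_* fields) into
 * a single 32-bit word with FM10K_MSG_HDR_FIELD_SET()/_GET(), which are
 * defined in fm10k_mbx.h and not shown in this hunk. A generic shift-and-mask
 * equivalent, with made-up field positions, looks like this: */
#include <stdint.h>

#define EXAMPLE_FIELD_SET(val, shift, width) \
	(((uint32_t)(val) & ((1u << (width)) - 1u)) << (shift))
#define EXAMPLE_FIELD_GET(hdr, shift, width) \
	(((uint32_t)(hdr) >> (shift)) & ((1u << (width)) - 1u))

/* illustrative layout only: type in bits 0-3, head index in bits 4-15 */
uint32_t example_make_hdr(uint32_t type, uint32_t head)
{
	return EXAMPLE_FIELD_SET(type, 0, 4) | EXAMPLE_FIELD_SET(head, 4, 12);
}

uint32_t example_hdr_type(uint32_t hdr)
{
	return EXAMPLE_FIELD_GET(hdr, 0, 4);
}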
+ **/ +STATIC s32 fm10k_mbx_validate_msg_hdr(struct fm10k_mbx_info *mbx) +{ + u16 type, rsvd0, head, tail, size; + const u32 *hdr = &mbx->mbx_hdr; + + DEBUGFUNC("fm10k_mbx_validate_msg_hdr"); + + type = FM10K_MSG_HDR_FIELD_GET(*hdr, TYPE); + rsvd0 = FM10K_MSG_HDR_FIELD_GET(*hdr, RSVD0); + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + size = FM10K_MSG_HDR_FIELD_GET(*hdr, CONNECT_SIZE); + + if (rsvd0) + return FM10K_MBX_ERR_RSVD0; + + switch (type) { + case FM10K_MSG_DISCONNECT: + /* validate that all data has been received */ + if (tail != mbx->head) + return FM10K_MBX_ERR_TAIL; + + /* fall through */ + case FM10K_MSG_DATA: + /* validate that head is moving correctly */ + if (!head || (head == FM10K_MSG_HDR_MASK(HEAD))) + return FM10K_MBX_ERR_HEAD; + if (fm10k_mbx_index_len(mbx, head, mbx->tail) > mbx->tail_len) + return FM10K_MBX_ERR_HEAD; + + /* validate that tail is moving correctly */ + if (!tail || (tail == FM10K_MSG_HDR_MASK(TAIL))) + return FM10K_MBX_ERR_TAIL; + if (fm10k_mbx_index_len(mbx, mbx->head, tail) < mbx->mbmem_len) + break; + + return FM10K_MBX_ERR_TAIL; + case FM10K_MSG_CONNECT: + /* validate size is in range and is power of 2 mask */ + if ((size < FM10K_VFMBX_MSG_MTU) || (size & (size + 1))) + return FM10K_MBX_ERR_SIZE; + + /* fall through */ + case FM10K_MSG_ERROR: + if (!head || (head == FM10K_MSG_HDR_MASK(HEAD))) + return FM10K_MBX_ERR_HEAD; + /* neither create nor error include a tail offset */ + if (tail) + return FM10K_MBX_ERR_TAIL; + + break; + default: + return FM10K_MBX_ERR_TYPE; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_create_reply - Generate reply based on state and remote head + * @mbx: pointer to mailbox + * @head: acknowledgement number + * + * This function will generate an outgoing message based on the current + * mailbox state and the remote fifo head. It will return the length + * of the outgoing message excluding header on success, and a negative value + * on error. + **/ +STATIC s32 fm10k_mbx_create_reply(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* update our checksum for the outgoing data */ + fm10k_mbx_update_local_crc(mbx, head); + + /* as long as other end recognizes us keep sending data */ + fm10k_mbx_pull_head(hw, mbx, head); + + /* generate new header based on data */ + if (mbx->tail_len || (mbx->state == FM10K_STATE_OPEN)) + fm10k_mbx_create_data_hdr(mbx); + else + fm10k_mbx_create_disconnect_hdr(mbx); + break; + case FM10K_STATE_CONNECT: + /* send disconnect even if we aren't connected */ + fm10k_mbx_create_connect_hdr(mbx); + break; + case FM10K_STATE_CLOSED: + /* generate new header based on data */ + fm10k_mbx_create_disconnect_hdr(mbx); + default: + break; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_reset_work- Reset internal pointers for any pending work + * @mbx: pointer to mailbox + * + * This function will reset all internal pointers so any work in progress + * is dropped. This call should occur every time we transition from the + * open state to the connect state. 
+ **/ +STATIC void fm10k_mbx_reset_work(struct fm10k_mbx_info *mbx) +{ + /* reset our outgoing max size back to Rx limits */ + mbx->max_size = mbx->rx.size - 1; + + /* just do a quick resysnc to start of message */ + mbx->pushed = 0; + mbx->pulled = 0; + mbx->tail_len = 0; + mbx->head_len = 0; + mbx->rx.tail = 0; + mbx->rx.head = 0; +} + +/** + * fm10k_mbx_update_max_size - Update the max_size and drop any large messages + * @mbx: pointer to mailbox + * @size: new value for max_size + * + * This function will update the max_size value and drop any outgoing messages + * from the head of the Tx FIFO that are larger than max_size. + **/ +STATIC void fm10k_mbx_update_max_size(struct fm10k_mbx_info *mbx, u16 size) +{ + u16 len; + + DEBUGFUNC("fm10k_mbx_update_max_size_hdr"); + + mbx->max_size = size; + + /* flush any oversized messages from the queue */ + for (len = fm10k_fifo_head_len(&mbx->tx); + len > size; + len = fm10k_fifo_head_len(&mbx->tx)) { + fm10k_fifo_head_drop(&mbx->tx); + mbx->tx_dropped++; + } +} + +/** + * fm10k_mbx_connect_reset - Reset following request for reset + * @mbx: pointer to mailbox + * + * This function resets the mailbox to either a disconnected state + * or a connect state depending on the current mailbox state + **/ +STATIC void fm10k_mbx_connect_reset(struct fm10k_mbx_info *mbx) +{ + /* just do a quick resysnc to start of frame */ + fm10k_mbx_reset_work(mbx); + + /* reset CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* we cannot exit connect until the size is good */ + if (mbx->state == FM10K_STATE_OPEN) + mbx->state = FM10K_STATE_CONNECT; + else + mbx->state = FM10K_STATE_CLOSED; +} + +/** + * fm10k_mbx_process_connect - Process connect header + * @mbx: pointer to mailbox + * @msg: message array to process + * + * This function will read an incoming connect header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. + **/ +STATIC s32 fm10k_mbx_process_connect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + const u32 *hdr = &mbx->mbx_hdr; + u16 size, head; + + /* we will need to pull all of the fields for verification */ + size = FM10K_MSG_HDR_FIELD_GET(*hdr, CONNECT_SIZE); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + + switch (state) { + case FM10K_STATE_DISCONNECT: + case FM10K_STATE_OPEN: + /* reset any in-progress work */ + fm10k_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* we cannot exit connect until the size is good */ + if (size > mbx->rx.size) { + mbx->max_size = mbx->rx.size - 1; + } else { + /* record the remote system requesting connection */ + mbx->state = FM10K_STATE_OPEN; + + fm10k_mbx_update_max_size(mbx, size); + } + break; + default: + break; + } + + /* align our tail index to remote head index */ + mbx->tail = head; + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_data - Process data header + * @mbx: pointer to mailbox + * + * This function will read an incoming data header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. 
+ **/ +STATIC s32 fm10k_mbx_process_data(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 head, tail; + s32 err; + + DEBUGFUNC("fm10k_mbx_process_data"); + + /* we will need to pull all of the fields for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL); + + /* if we are in connect just update our data and go */ + if (mbx->state == FM10K_STATE_CONNECT) { + mbx->tail = head; + mbx->state = FM10K_STATE_OPEN; + } + + /* abort on message size errors */ + err = fm10k_mbx_push_tail(hw, mbx, tail); + if (err < 0) + return err; + + /* verify the checksum on the incoming data */ + err = fm10k_mbx_verify_remote_crc(mbx); + if (err) + return err; + + /* process messages if we have received any */ + fm10k_mbx_dequeue_rx(hw, mbx); + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_disconnect - Process disconnect header + * @mbx: pointer to mailbox + * + * This function will read an incoming disconnect header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. + **/ +STATIC s32 fm10k_mbx_process_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + const u32 *hdr = &mbx->mbx_hdr; + u16 head; + s32 err; + + /* we will need to pull the header field for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + + /* We should not be receiving disconnect if Rx is incomplete */ + if (mbx->pushed) + return FM10K_MBX_ERR_TAIL; + + /* we have already verified mbx->head == tail so we know this is 0 */ + mbx->head_len = 0; + + /* verify the checksum on the incoming header is correct */ + err = fm10k_mbx_verify_remote_crc(mbx); + if (err) + return err; + + switch (state) { + case FM10K_STATE_DISCONNECT: + case FM10K_STATE_OPEN: + /* state doesn't change if we still have work to do */ + if (!fm10k_mbx_tx_complete(mbx)) + break; + + /* verify the head indicates we completed all transmits */ + if (head != mbx->tail) + return FM10K_MBX_ERR_HEAD; + + /* reset any in-progress work */ + fm10k_mbx_connect_reset(mbx); + break; + default: + break; + } + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_error - Process error header + * @mbx: pointer to mailbox + * + * This function will read an incoming error header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. 
+ **/ +STATIC s32 fm10k_mbx_process_error(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + s32 err_no; + u16 head; + + /* we will need to pull all of the fields for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + + /* we only have lower 10 bits of error number so add upper bits */ + err_no = FM10K_MSG_HDR_FIELD_GET(*hdr, ERR_NO); + err_no |= ~FM10K_MSG_HDR_MASK(ERR_NO); + + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* flush any uncompleted work */ + fm10k_mbx_reset_work(mbx); + + /* reset CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* reset tail index and size to prepare for reconnect */ + mbx->tail = head; + + /* if open then reset max_size and go back to connect */ + if (mbx->state == FM10K_STATE_OPEN) { + mbx->state = FM10K_STATE_CONNECT; + break; + } + + /* send a connect message to get data flowing again */ + fm10k_mbx_create_connect_hdr(mbx); + return FM10K_SUCCESS; + default: + break; + } + + return fm10k_mbx_create_reply(hw, mbx, mbx->tail); +} + +/** + * fm10k_mbx_process - Process mailbox interrupt + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will process incoming mailbox events and generate mailbox + * replies. It will return a value indicating the number of DWORDs + * transmitted excluding header on success or a negative value on error. + **/ +STATIC s32 fm10k_mbx_process(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + s32 err; + + DEBUGFUNC("fm10k_mbx_process"); + + /* we do not read mailbox if closed */ + if (mbx->state == FM10K_STATE_CLOSED) + return FM10K_SUCCESS; + + /* copy data from mailbox */ + err = fm10k_mbx_read(hw, mbx); + if (err) + return err; + + /* validate type, source, and destination */ + err = fm10k_mbx_validate_msg_hdr(mbx); + if (err < 0) + goto msg_err; + + switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, TYPE)) { + case FM10K_MSG_CONNECT: + err = fm10k_mbx_process_connect(hw, mbx); + break; + case FM10K_MSG_DATA: + err = fm10k_mbx_process_data(hw, mbx); + break; + case FM10K_MSG_DISCONNECT: + err = fm10k_mbx_process_disconnect(hw, mbx); + break; + case FM10K_MSG_ERROR: + err = fm10k_mbx_process_error(hw, mbx); + break; + default: + err = FM10K_MBX_ERR_TYPE; + break; + } + +msg_err: + /* notify partner of errors on our end */ + if (err < 0) + fm10k_mbx_create_error_msg(mbx, err); + + /* copy data from mailbox */ + fm10k_mbx_write(hw, mbx); + + return err; +} + +/** + * fm10k_mbx_disconnect - Shutdown mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will shut down the mailbox. It places the mailbox first + * in the disconnect state, it then allows up to a predefined timeout for + * the mailbox to transition to close on its own. If this does not occur + * then the mailbox will be forced into the closed state. + * + * Any mailbox transactions not completed before calling this function + * are not guaranteed to complete and may be dropped. + **/ +STATIC void fm10k_mbx_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + int timeout = mbx->timeout ? 
FM10K_MBX_DISCONNECT_TIMEOUT : 0; + + DEBUGFUNC("fm10k_mbx_disconnect"); + + /* Place mbx in ready to disconnect state */ + mbx->state = FM10K_STATE_DISCONNECT; + + /* trigger interrupt to start shutdown process */ + FM10K_WRITE_MBX(hw, mbx->mbx_reg, FM10K_MBX_REQ | + FM10K_MBX_INTERRUPT_DISABLE); + do { + usec_delay(FM10K_MBX_POLL_DELAY); + mbx->ops.process(hw, mbx); + timeout -= FM10K_MBX_POLL_DELAY; + } while ((timeout > 0) && (mbx->state != FM10K_STATE_CLOSED)); + + /* in case we didn't close just force the mailbox into shutdown */ + fm10k_mbx_connect_reset(mbx); + fm10k_mbx_update_max_size(mbx, 0); + + FM10K_WRITE_MBX(hw, mbx->mbmem_reg, 0); +} + +/** + * fm10k_mbx_connect - Start mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will initiate a mailbox connection. It will populate the + * mailbox with a broadcast connect message and then initialize the lock. + * This is safe since the connect message is a single DWORD so the mailbox + * transaction is guaranteed to be atomic. + * + * This function will return an error if the mailbox has not been initiated + * or is currently in use. + **/ +STATIC s32 fm10k_mbx_connect(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_connect"); + + /* we cannot connect an uninitialized mailbox */ + if (!mbx->rx.buffer) + return FM10K_MBX_ERR_NO_SPACE; + + /* we cannot connect an already connected mailbox */ + if (mbx->state != FM10K_STATE_CLOSED) + return FM10K_MBX_ERR_BUSY; + + /* mailbox timeout can now become active */ + mbx->timeout = FM10K_MBX_INIT_TIMEOUT; + + /* Place mbx in ready to connect state */ + mbx->state = FM10K_STATE_CONNECT; + + /* initialize header of remote mailbox */ + fm10k_mbx_create_disconnect_hdr(mbx); + FM10K_WRITE_MBX(hw, mbx->mbmem_reg ^ mbx->mbmem_len, mbx->mbx_hdr); + + /* enable interrupt and notify other party of new message */ + mbx->mbx_lock = FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT | + FM10K_MBX_INTERRUPT_ENABLE; + + /* generate and load connect header into mailbox */ + fm10k_mbx_create_connect_hdr(mbx); + fm10k_mbx_write(hw, mbx); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_validate_handlers - Validate layout of message parsing data + * @msg_data: handlers for mailbox events + * + * This function validates the layout of the message parsing data. This + * should be mostly static, but it is important to catch any errors that + * are made when constructing the parsers. 
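/* A note on the "mbmem_reg ^ mbmem_len" addressing used by fm10k_mbx_read()
 * and fm10k_mbx_connect() above: the shared mailbox memory appears to hold
 * two equal, power-of-two sized halves (local and remote), so XOR-ing an
 * offset with the half length flips between the halves while preserving the
 * word index. A toy illustration with made-up sizes: */
#include <stdint.h>

#define EXAMPLE_MBMEM_HALF 16u		/* words per half, power of two */

uint32_t example_peer_offset(uint32_t local_offset)
{
	/* maps 0..15 <-> 16..31, word position unchanged */
	return local_offset ^ EXAMPLE_MBMEM_HALF;
}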
+ **/ +STATIC s32 fm10k_mbx_validate_handlers(const struct fm10k_msg_data *msg_data) +{ + const struct fm10k_tlv_attr *attr; + unsigned int id; + + DEBUGFUNC("fm10k_mbx_validate_handlers"); + + /* Allow NULL mailboxes that transmit but don't receive */ + if (!msg_data) + return FM10K_SUCCESS; + + while (msg_data->id != FM10K_TLV_ERROR) { + /* all messages should have a function handler */ + if (!msg_data->func) + return FM10K_ERR_PARAM; + + /* parser is optional */ + attr = msg_data->attr; + if (attr) { + while (attr->id != FM10K_TLV_ERROR) { + id = attr->id; + attr++; + /* ID should always be increasing */ + if (id >= attr->id) + return FM10K_ERR_PARAM; + /* ID should fit in results array */ + if (id >= FM10K_TLV_RESULTS_MAX) + return FM10K_ERR_PARAM; + } + + /* verify terminator is in the list */ + if (attr->id != FM10K_TLV_ERROR) + return FM10K_ERR_PARAM; + } + + id = msg_data->id; + msg_data++; + /* ID should always be increasing */ + if (id >= msg_data->id) + return FM10K_ERR_PARAM; + } + + /* verify terminator is in the list */ + if ((msg_data->id != FM10K_TLV_ERROR) || !msg_data->func) + return FM10K_ERR_PARAM; + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_register_handlers - Register a set of handler ops for mailbox + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * + * This function associates a set of message handling ops with a mailbox. + **/ +STATIC s32 fm10k_mbx_register_handlers(struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data) +{ + DEBUGFUNC("fm10k_mbx_register_handlers"); + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + return FM10K_SUCCESS; +} + +/** + * fm10k_pfvf_mbx_init - Initialize mailbox memory for PF/VF mailbox + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * @id: ID reference for PF as it supports up to 64 PF/VF mailboxes + * + * This function initializes the mailbox for use. It will split the + * buffer provided an use that th populate both the Tx and Rx FIFO by + * evenly splitting it. In order to allow for easy masking of head/tail + * the value reported in size must be a power of 2 and is reported in + * DWORDs, not bytes. Any invalid values will cause the mailbox to return + * error. 
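/* A distilled version of the rule fm10k_mbx_validate_handlers() enforces
 * above: every entry must carry a handler, IDs must be strictly increasing,
 * and the table must end in a terminator entry that still has a handler.
 * The struct below is a simplified stand-in for struct fm10k_msg_data, not
 * its real layout. */
#include <stdint.h>

#define EXAMPLE_ID_TERMINATOR 0xFFFFu

struct example_msg_handler {
	uint16_t id;
	int (*func)(void);
};

int example_validate_handlers(const struct example_msg_handler *tbl)
{
	uint16_t prev = 0;
	int first = 1;

	for (; tbl->id != EXAMPLE_ID_TERMINATOR; tbl++) {
		if (!tbl->func)
			return -1;	/* every entry needs a handler */
		if (!first && tbl->id <= prev)
			return -1;	/* IDs must strictly increase */
		prev = tbl->id;
		first = 0;
	}
	return tbl->func ? 0 : -1;	/* terminator also needs a handler */
}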
+ **/ +s32 fm10k_pfvf_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data, u8 id) +{ + DEBUGFUNC("fm10k_pfvf_mbx_init"); + + /* initialize registers */ + switch (hw->mac.type) { + case fm10k_mac_vf: + mbx->mbx_reg = FM10K_VFMBX; + mbx->mbmem_reg = FM10K_VFMBMEM(FM10K_VFMBMEM_VF_XOR); + break; + case fm10k_mac_pf: + /* there are only 64 VF <-> PF mailboxes */ + if (id < 64) { + mbx->mbx_reg = FM10K_MBX(id); + mbx->mbmem_reg = FM10K_MBMEM_VF(id, 0); + break; + } + /* fallthough */ + default: + return FM10K_MBX_ERR_NO_MBX; + } + + /* start out in closed state */ + mbx->state = FM10K_STATE_CLOSED; + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + /* start mailbox as timed out and let the reset_hw call + * set the timeout value to begin communications + */ + mbx->timeout = 0; + mbx->usec_delay = FM10K_MBX_INIT_DELAY; + + /* initialize tail and head */ + mbx->tail = 1; + mbx->head = 1; + + /* initialize CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* Split buffer for use by Tx/Rx FIFOs */ + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + mbx->mbmem_len = FM10K_VFMBMEM_VF_XOR; + + /* initialize the FIFOs, sizes are in 4 byte increments */ + fm10k_fifo_init(&mbx->tx, mbx->buffer, FM10K_MBX_TX_BUFFER_SIZE); + fm10k_fifo_init(&mbx->rx, &mbx->buffer[FM10K_MBX_TX_BUFFER_SIZE], + FM10K_MBX_RX_BUFFER_SIZE); + + /* initialize function pointers */ + mbx->ops.connect = fm10k_mbx_connect; + mbx->ops.disconnect = fm10k_mbx_disconnect; + mbx->ops.rx_ready = fm10k_mbx_rx_ready; + mbx->ops.tx_ready = fm10k_mbx_tx_ready; + mbx->ops.tx_complete = fm10k_mbx_tx_complete; + mbx->ops.enqueue_tx = fm10k_mbx_enqueue_tx; + mbx->ops.process = fm10k_mbx_process; + mbx->ops.register_handlers = fm10k_mbx_register_handlers; + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_create_data_hdr - Generate a mailbox header for local FIFO + * @mbx: pointer to mailbox + * + * This function returns a connection mailbox header + **/ +STATIC void fm10k_sm_mbx_create_data_hdr(struct fm10k_mbx_info *mbx) +{ + if (mbx->tail_len) + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(mbx->tail, SM_TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->remote, SM_VER) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, SM_HEAD); +} + +/** + * fm10k_sm_mbx_create_connect_hdr - Generate a mailbox header for local FIFO + * @mbx: pointer to mailbox + * @err: error flags to report if any + * + * This function returns a connection mailbox header + **/ +STATIC void fm10k_sm_mbx_create_connect_hdr(struct fm10k_mbx_info *mbx, u8 err) +{ + if (mbx->local) + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(mbx->tail, SM_TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->remote, SM_VER) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, SM_HEAD) | + FM10K_MSG_HDR_FIELD_SET(err, SM_ERR); +} + +/** + * fm10k_sm_mbx_connect_reset - Reset following request for reset + * @mbx: pointer to mailbox + * + * This function resets the mailbox to a just connected state + **/ +STATIC void fm10k_sm_mbx_connect_reset(struct fm10k_mbx_info *mbx) +{ + /* flush any uncompleted work */ + fm10k_mbx_reset_work(mbx); + + /* set local version to max and remote version to 0 */ + mbx->local = FM10K_SM_MBX_VERSION; + mbx->remote = 0; + + /* initialize tail and head */ + mbx->tail = 1; + mbx->head = 1; + + /* reset state back to connect */ + 
mbx->state = FM10K_STATE_CONNECT; +} + +/** + * fm10k_sm_mbx_connect - Start switch manager mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will initiate a mailbox connection with the switch + * manager. To do this it will first disconnect the mailbox, and then + * reconnect it in order to complete a reset of the mailbox. + * + * This function will return an error if the mailbox has not been initiated + * or is currently in use. + **/ +STATIC s32 fm10k_sm_mbx_connect(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_connect"); + + /* we cannot connect an uninitialized mailbox */ + if (!mbx->rx.buffer) + return FM10K_MBX_ERR_NO_SPACE; + + /* we cannot connect an already connected mailbox */ + if (mbx->state != FM10K_STATE_CLOSED) + return FM10K_MBX_ERR_BUSY; + + /* mailbox timeout can now become active */ + mbx->timeout = FM10K_MBX_INIT_TIMEOUT; + + /* Place mbx in ready to connect state */ + mbx->state = FM10K_STATE_CONNECT; + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + + /* reset interface back to connect */ + fm10k_sm_mbx_connect_reset(mbx); + + /* enable interrupt and notify other party of new message */ + mbx->mbx_lock = FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT | + FM10K_MBX_INTERRUPT_ENABLE; + + /* generate and load connect header into mailbox */ + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + fm10k_mbx_write(hw, mbx); + + /* enable interrupt and notify other party of new message */ + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_disconnect - Shutdown mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will shut down the mailbox. It places the mailbox first + * in the disconnect state, it then allows up to a predefined timeout for + * the mailbox to transition to close on its own. If this does not occur + * then the mailbox will be forced into the closed state. + * + * Any mailbox transactions not completed before calling this function + * are not guaranteed to complete and may be dropped. + **/ +STATIC void fm10k_sm_mbx_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + int timeout = mbx->timeout ? FM10K_MBX_DISCONNECT_TIMEOUT : 0; + + DEBUGFUNC("fm10k_sm_mbx_disconnect"); + + /* Place mbx in ready to disconnect state */ + mbx->state = FM10K_STATE_DISCONNECT; + + /* trigger interrupt to start shutdown process */ + FM10K_WRITE_REG(hw, mbx->mbx_reg, FM10K_MBX_REQ | + FM10K_MBX_INTERRUPT_DISABLE); + do { + usec_delay(FM10K_MBX_POLL_DELAY); + mbx->ops.process(hw, mbx); + timeout -= FM10K_MBX_POLL_DELAY; + } while ((timeout > 0) && (mbx->state != FM10K_STATE_CLOSED)); + + /* in case we didn't close just force the mailbox into shutdown */ + mbx->state = FM10K_STATE_CLOSED; + mbx->remote = 0; + fm10k_mbx_reset_work(mbx); + fm10k_mbx_update_max_size(mbx, 0); + + FM10K_WRITE_REG(hw, mbx->mbmem_reg, 0); +} + +/** + * fm10k_mbx_validate_fifo_hdr - Validate fields in the remote FIFO header + * @mbx: pointer to mailbox + * + * This function will parse up the fields in the mailbox header and return + * an error if the header contains any of a number of invalid configurations + * including unrecognized offsets or version numbers. 
+ **/ +STATIC s32 fm10k_sm_mbx_validate_fifo_hdr(struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 tail, head, ver; + + DEBUGFUNC("fm10k_mbx_validate_msg_hdr"); + + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_TAIL); + ver = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_VER); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_HEAD); + + switch (ver) { + case 0: + break; + case FM10K_SM_MBX_VERSION: + if (!head || head > FM10K_SM_MBX_FIFO_LEN) + return FM10K_MBX_ERR_HEAD; + if (!tail || tail > FM10K_SM_MBX_FIFO_LEN) + return FM10K_MBX_ERR_TAIL; + if (mbx->tail < head) + head += mbx->mbmem_len - 1; + if (tail < mbx->head) + tail += mbx->mbmem_len - 1; + if (fm10k_mbx_index_len(mbx, head, mbx->tail) > mbx->tail_len) + return FM10K_MBX_ERR_HEAD; + if (fm10k_mbx_index_len(mbx, mbx->head, tail) < mbx->mbmem_len) + break; + return FM10K_MBX_ERR_TAIL; + default: + return FM10K_MBX_ERR_SRC; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_process_error - Process header with error flag set + * @mbx: pointer to mailbox + * + * This function is meant to respond to a request where the error flag + * is set. As a result we will terminate a connection if one is present + * and fall back into the reset state with a connection header of version + * 0 (RESET). + **/ +STATIC void fm10k_sm_mbx_process_error(struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + + switch (state) { + case FM10K_STATE_DISCONNECT: + /* if there is an error just disconnect */ + mbx->remote = 0; + break; + case FM10K_STATE_OPEN: + /* flush any uncompleted work */ + fm10k_sm_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* try connnecting at lower version */ + if (mbx->remote) { + while (mbx->local > 1) + mbx->local--; + mbx->remote = 0; + } + break; + default: + break; + } + + fm10k_sm_mbx_create_connect_hdr(mbx, 0); +} + +/** + * fm10k_sm_mbx_create_error_message - Process an error in FIFO hdr + * @mbx: pointer to mailbox + * @err: local error encountered + * + * This function will interpret the error provided by err, and based on + * that it may set the error bit in the local message header + **/ +STATIC void fm10k_sm_mbx_create_error_msg(struct fm10k_mbx_info *mbx, s32 err) +{ + /* only generate an error message for these types */ + switch (err) { + case FM10K_MBX_ERR_TAIL: + case FM10K_MBX_ERR_HEAD: + case FM10K_MBX_ERR_SRC: + case FM10K_MBX_ERR_SIZE: + case FM10K_MBX_ERR_RSVD0: + break; + default: + return; + } + + /* process it as though we received an error, and send error reply */ + fm10k_sm_mbx_process_error(mbx); + fm10k_sm_mbx_create_connect_hdr(mbx, 1); +} + +/** + * fm10k_sm_mbx_receive - Take message from Rx mailbox FIFO and put it in Rx + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will dequeue one message from the Rx switch manager mailbox + * FIFO and place it in the Rx mailbox FIFO for processing by software. 
+ **/ +STATIC s32 fm10k_sm_mbx_receive(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, + u16 tail) +{ + /* reduce length by 1 to convert to a mask */ + u16 mbmem_len = mbx->mbmem_len - 1; + s32 err; + + DEBUGFUNC("fm10k_sm_mbx_receive"); + + /* push tail in front of head */ + if (tail < mbx->head) + tail += mbmem_len; + + /* copy data to the Rx FIFO */ + err = fm10k_mbx_push_tail(hw, mbx, tail); + if (err < 0) + return err; + + /* process messages if we have received any */ + fm10k_mbx_dequeue_rx(hw, mbx); + + /* guarantee head aligns with the end of the last message */ + mbx->head = fm10k_mbx_head_sub(mbx, mbx->pushed); + mbx->pushed = 0; + + /* clear any extra bits left over since index adds 1 extra bit */ + if (mbx->head > mbmem_len) + mbx->head -= mbmem_len; + + return err; +} + +/** + * fm10k_sm_mbx_transmit - Take message from Tx and put it in Tx mailbox FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will dequeue one message from the Tx mailbox FIFO and place + * it in the Tx switch manager mailbox FIFO for processing by hardware. + **/ +STATIC void fm10k_sm_mbx_transmit(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + struct fm10k_mbx_fifo *fifo = &mbx->tx; + /* reduce length by 1 to convert to a mask */ + u16 mbmem_len = mbx->mbmem_len - 1; + u16 tail_len, len = 0; + u32 *msg; + + DEBUGFUNC("fm10k_sm_mbx_transmit"); + + /* push head behind tail */ + if (mbx->tail < head) + head += mbmem_len; + + fm10k_mbx_pull_head(hw, mbx, head); + + /* determine msg aligned offset for end of buffer */ + do { + msg = fifo->buffer + fm10k_fifo_head_offset(fifo, len); + tail_len = len; + len += FM10K_TLV_DWORD_LEN(*msg); + } while ((len <= mbx->tail_len) && (len < mbmem_len)); + + /* guarantee we stop on a message boundary */ + if (mbx->tail_len > tail_len) { + mbx->tail = fm10k_mbx_tail_sub(mbx, mbx->tail_len - tail_len); + mbx->tail_len = tail_len; + } + + /* clear any extra bits left over since index adds 1 extra bit */ + if (mbx->tail > mbmem_len) + mbx->tail -= mbmem_len; +} + +/** + * fm10k_sm_mbx_create_reply - Generate reply based on state and remote head + * @mbx: pointer to mailbox + * @head: acknowledgement number + * + * This function will generate an outgoing message based on the current + * mailbox state and the remote fifo head. It will return the length + * of the outgoing message excluding header on success, and a negative value + * on error. + **/ +STATIC void fm10k_sm_mbx_create_reply(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* flush out Tx data */ + fm10k_sm_mbx_transmit(hw, mbx, head); + + /* generate new header based on data */ + if (mbx->tail_len || (mbx->state == FM10K_STATE_OPEN)) { + fm10k_sm_mbx_create_data_hdr(mbx); + } else { + mbx->remote = 0; + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + } + break; + case FM10K_STATE_CONNECT: + case FM10K_STATE_CLOSED: + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + break; + default: + break; + } +} + +/** + * fm10k_sm_mbx_process_reset - Process header with version == 0 (RESET) + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function is meant to respond to a request where the version data + * is set to 0. As such we will either terminate the connection or go + * into the connect state in order to re-establish the connection. 
This + * function can also be used to respond to an error as the connection + * resetting would also be a means of dealing with errors. + **/ +STATIC void fm10k_sm_mbx_process_reset(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + + switch (state) { + case FM10K_STATE_DISCONNECT: + /* drop remote connections and disconnect */ + mbx->state = FM10K_STATE_CLOSED; + mbx->remote = 0; + mbx->local = 0; + break; + case FM10K_STATE_OPEN: + /* flush any incomplete work */ + fm10k_sm_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* Update remote value to match local value */ + mbx->remote = mbx->local; + default: + break; + } + + fm10k_sm_mbx_create_reply(hw, mbx, mbx->tail); +} + +/** + * fm10k_sm_mbx_process_version_1 - Process header with version == 1 + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function is meant to process messages received when the remote + * mailbox is active. + **/ +STATIC s32 fm10k_sm_mbx_process_version_1(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 head, tail; + s32 len; + + /* pull all fields needed for verification */ + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_TAIL); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_HEAD); + + /* if we are in connect and wanting version 1 then start up and go */ + if (mbx->state == FM10K_STATE_CONNECT) { + if (!mbx->remote) + goto send_reply; + if (mbx->remote != 1) + return FM10K_MBX_ERR_SRC; + + mbx->state = FM10K_STATE_OPEN; + } + + do { + /* abort on message size errors */ + len = fm10k_sm_mbx_receive(hw, mbx, tail); + if (len < 0) + return len; + + /* continue until we have flushed the Rx FIFO */ + } while (len); + +send_reply: + fm10k_sm_mbx_create_reply(hw, mbx, head); + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_process - Process mailbox switch mailbox interrupt + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will process incoming mailbox events and generate mailbox + * replies. It will return a value indicating the number of DWORDs + * transmitted excluding header on success or a negative value on error. 
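As a usage sketch (not part of this patch, helper names illustrative): a PF host driver would typically initialize the switch-manager mailbox once with its handler table, connect it, and then let its mailbox interrupt drive the process op defined below.

static s32 example_sm_mbx_start(struct fm10k_hw *hw,
				const struct fm10k_msg_data *pf_sm_handlers)
{
	struct fm10k_mbx_info *mbx = &hw->mbx;
	s32 err;

	/* wire up the SM flavour of the mailbox ops and validate handlers */
	err = fm10k_sm_mbx_init(hw, mbx, pf_sm_handlers);
	if (err)
		return err;

	/* post the connect header and enable mailbox interrupts */
	return mbx->ops.connect(hw, mbx);
}

static void example_sm_mbx_interrupt(struct fm10k_hw *hw)
{
	struct fm10k_mbx_info *mbx = &hw->mbx;

	/* drain the remote FIFO, run registered handlers, post replies */
	mbx->ops.process(hw, mbx);
}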
+ **/ +STATIC s32 fm10k_sm_mbx_process(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + s32 err; + + DEBUGFUNC("fm10k_sm_mbx_process"); + + /* we do not read mailbox if closed */ + if (mbx->state == FM10K_STATE_CLOSED) + return FM10K_SUCCESS; + + /* retrieve data from switch manager */ + err = fm10k_mbx_read(hw, mbx); + if (err) + return err; + + err = fm10k_sm_mbx_validate_fifo_hdr(mbx); + if (err < 0) + goto fifo_err; + + if (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_ERR)) { + fm10k_sm_mbx_process_error(mbx); + goto fifo_err; + } + + switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_VER)) { + case 0: + fm10k_sm_mbx_process_reset(hw, mbx); + break; + case FM10K_SM_MBX_VERSION: + err = fm10k_sm_mbx_process_version_1(hw, mbx); + break; + } + +fifo_err: + if (err < 0) + fm10k_sm_mbx_create_error_msg(mbx, err); + + /* report data to switch manager */ + fm10k_mbx_write(hw, mbx); + + return err; +} + +/** + * fm10k_sm_mbx_init - Initialize mailbox memory for PF/SM mailbox + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * + * This function for now is used to stub out the PF/SM mailbox + **/ +s32 fm10k_sm_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data) +{ + DEBUGFUNC("fm10k_sm_mbx_init"); + UNREFERENCED_1PARAMETER(hw); + + mbx->mbx_reg = FM10K_GMBX; + mbx->mbmem_reg = FM10K_MBMEM_PF(0); + + /* start out in closed state */ + mbx->state = FM10K_STATE_CLOSED; + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + /* start mailbox as timed out and let the reset_hw call + * set the timeout value to begin communications + */ + mbx->timeout = 0; + mbx->usec_delay = FM10K_MBX_INIT_DELAY; + + /* Split buffer for use by Tx/Rx FIFOs */ + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + mbx->mbmem_len = FM10K_MBMEM_PF_XOR; + + /* initialize the FIFOs, sizes are in 4 byte increments */ + fm10k_fifo_init(&mbx->tx, mbx->buffer, FM10K_MBX_TX_BUFFER_SIZE); + fm10k_fifo_init(&mbx->rx, &mbx->buffer[FM10K_MBX_TX_BUFFER_SIZE], + FM10K_MBX_RX_BUFFER_SIZE); + + /* initialize function pointers */ + mbx->ops.connect = fm10k_sm_mbx_connect; + mbx->ops.disconnect = fm10k_sm_mbx_disconnect; + mbx->ops.rx_ready = fm10k_mbx_rx_ready; + mbx->ops.tx_ready = fm10k_mbx_tx_ready; + mbx->ops.tx_complete = fm10k_mbx_tx_complete; + mbx->ops.enqueue_tx = fm10k_mbx_enqueue_tx; + mbx->ops.process = fm10k_sm_mbx_process; + mbx->ops.register_handlers = fm10k_mbx_register_handlers; + + return FM10K_SUCCESS; +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_mbx.h b/lib/librte_pmd_fm10k/base/fm10k_mbx.h new file mode 100644 index 0000000..6332584 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_mbx.h @@ -0,0 +1,329 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. 
Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_MBX_H_ +#define _FM10K_MBX_H_ + +/* forward declaration */ +struct fm10k_mbx_info; + +#include "fm10k_type.h" +#include "fm10k_tlv.h" + +/* PF Mailbox Registers */ +#define FM10K_MBMEM(_n) ((_n) + 0x18000) +#define FM10K_MBMEM_VF(_n, _m) (((_n) * 0x10) + (_m) + 0x18000) +#define FM10K_MBMEM_SM(_n) ((_n) + 0x18400) +#define FM10K_MBMEM_PF(_n) ((_n) + 0x18600) +/* XOR provides means of switching from Tx to Rx FIFO */ +#define FM10K_MBMEM_PF_XOR (FM10K_MBMEM_SM(0) ^ FM10K_MBMEM_PF(0)) +#define FM10K_MBX(_n) ((_n) + 0x18800) +#define FM10K_MBX_OWNER 0x00000001 +#define FM10K_MBX_REQ 0x00000002 +#define FM10K_MBX_ACK 0x00000004 +#define FM10K_MBX_REQ_INTERRUPT 0x00000008 +#define FM10K_MBX_ACK_INTERRUPT 0x00000010 +#define FM10K_MBX_INTERRUPT_ENABLE 0x00000020 +#define FM10K_MBX_INTERRUPT_DISABLE 0x00000040 +#define FM10K_MBICR(_n) ((_n) + 0x18840) +#define FM10K_GMBX 0x18842 + +/* VF Mailbox Registers */ +#define FM10K_VFMBX 0x00010 +#define FM10K_VFMBMEM(_n) ((_n) + 0x00020) +#define FM10K_VFMBMEM_LEN 16 +#define FM10K_VFMBMEM_VF_XOR (FM10K_VFMBMEM_LEN / 2) + +/* Delays/timeouts */ +#define FM10K_MBX_DISCONNECT_TIMEOUT 500 +#define FM10K_MBX_POLL_DELAY 19 +#define FM10K_MBX_INT_DELAY 20 + +#define FM10K_WRITE_MBX(hw, reg, value) FM10K_WRITE_REG(hw, reg, value) + +/* PF/VF Mailbox state machine + * + * +----------+ connect() +----------+ + * | CLOSED | --------------> | CONNECT | + * +----------+ +----------+ + * ^ ^ | + * | rcv: rcv: | | rcv: + * | Connect Disconnect | | Connect + * | Disconnect Error | | Data + * | | | + * | | V + * +----------+ disconnect() +----------+ + * |DISCONNECT| <-------------- | OPEN | + * +----------+ +----------+ + * + * The diagram above describes the PF/VF mailbox state machine. There + * are four main states to this machine. + * Closed: This state represents a mailbox that is in a standby state + * with interrupts disabled. In this state the mailbox should not + * read the mailbox or write any data. The only means of exiting + * this state is for the system to make the connect() call for the + * mailbox, it will then transition to the connect state. + * Connect: In this state the mailbox is seeking a connection. It will + * post a connect message with no specified destination and will + * wait for a reply from the other side of the mailbox. This state + * is exited when either a connect with the local mailbox as the + * destination is received or when a data message is received with + * a valid sequence number. 
+ * Open: In this state the mailbox is able to transfer data between the local + * entity and the remote. It will fall back to connect in the event of + * receiving either an error message, or a disconnect message. It will + * transition to disconnect on a call to disconnect(); + * Disconnect: In this state the mailbox is attempting to gracefully terminate + * the connection. It will do so at the first point where it knows + * that the remote endpoint is either done sending, or when the + * remote endpoint has fallen back into connect. + */ +enum fm10k_mbx_state { + FM10K_STATE_CLOSED, + FM10K_STATE_CONNECT, + FM10K_STATE_OPEN, + FM10K_STATE_DISCONNECT, +}; + +/* PF/VF Mailbox header format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | Size/Err_no/CRC | Rsvd0 | Head | Tail | Type | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * The layout above describes the format for the header used in the PF/VF + * mailbox. The header is broken out into the following fields: + * Type: There are 4 supported message types + * 0x8: Data header - used to transport message data + * 0xC: Connect header - used to establish connection + * 0xD: Disconnect header - used to tear down a connection + * 0xE: Error header - used to address message exceptions + * Tail: Tail index for local FIFO + * Tail index actually consists of two parts. The MSB of + * the head is a loop tracker, it is 0 on an even numbered + * loop through the FIFO, and 1 on the odd numbered loops. + * To get the actual mailbox offset based on the tail it + * is necessary to add bit 3 to bit 0 and clear bit 3. This + * gives us a valid range of 0x1 - 0xE. + * Head: Head index for remote FIFO + * Head index follows the same format as the tail index. + * Rsvd0: Reserved 0 portion of the mailbox header + * CRC: Running CRC for all data since connect plus current message header + * Size: Maximum message size - Applies only to connect headers + * The maximum message size is provided during connect to avoid + * jamming the mailbox with messages that do not fit. + * Err_no: Error number - Applies only to error headers + * The error number provides an indication of the type of error + * experienced.
+ */ + +/* macros for retriving and setting header values */ +#define FM10K_MSG_HDR_MASK(name) \ + ((0x1u << FM10K_MSG_##name##_SIZE) - 1) +#define FM10K_MSG_HDR_FIELD_SET(value, name) \ + (((u32)(value) & FM10K_MSG_HDR_MASK(name)) << FM10K_MSG_##name##_SHIFT) +#define FM10K_MSG_HDR_FIELD_GET(value, name) \ + ((u16)((value) >> FM10K_MSG_##name##_SHIFT) & FM10K_MSG_HDR_MASK(name)) + +/* offsets shared between all headers */ +#define FM10K_MSG_TYPE_SHIFT 0 +#define FM10K_MSG_TYPE_SIZE 4 +#define FM10K_MSG_TAIL_SHIFT 4 +#define FM10K_MSG_TAIL_SIZE 4 +#define FM10K_MSG_HEAD_SHIFT 8 +#define FM10K_MSG_HEAD_SIZE 4 +#define FM10K_MSG_RSVD0_SHIFT 12 +#define FM10K_MSG_RSVD0_SIZE 4 + +/* offsets for data/disconnect headers */ +#define FM10K_MSG_CRC_SHIFT 16 +#define FM10K_MSG_CRC_SIZE 16 + +/* offsets for connect headers */ +#define FM10K_MSG_CONNECT_SIZE_SHIFT 16 +#define FM10K_MSG_CONNECT_SIZE_SIZE 16 + +/* offsets for error headers */ +#define FM10K_MSG_ERR_NO_SHIFT 16 +#define FM10K_MSG_ERR_NO_SIZE 16 + +enum fm10k_msg_type { + FM10K_MSG_DATA = 0x8, + FM10K_MSG_CONNECT = 0xC, + FM10K_MSG_DISCONNECT = 0xD, + FM10K_MSG_ERROR = 0xE, +}; + +/* HNI/SM Mailbox FIFO format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-------+-----------------------+-------+-----------------------+ + * | Error | Remote Head |Version| Local Tail | + * +-------+-----------------------+-------+-----------------------+ + * | | + * . Local FIFO Data . + * . . + * +-------+-----------------------+-------+-----------------------+ + * + * The layout above describes the format for the FIFOs used by the host + * network interface and the switch manager to communicate messages back + * and forth. Both the HNI and the switch maintain one such FIFO. The + * layout in memory has the switch manager FIFO followed immediately by + * the HNI FIFO. For this reason I am using just the pointer to the + * HNI FIFO in the mailbox ops as the offset between the two is fixed. + * + * The header for the FIFO is broken out into the following fields: + * Local Tail: Offset into FIFO region for next DWORD to write. + * Version: Version info for mailbox, only values of 0/1 are supported. + * Remote Head: Offset into remote FIFO to indicate how much we have read. + * Error: Error indication, values TBD. + */ + +/* version number for switch manager mailboxes */ +#define FM10K_SM_MBX_VERSION 1 +#define FM10K_SM_MBX_FIFO_LEN (FM10K_MBMEM_PF_XOR - 1) +#define FM10K_SM_MBX_FIFO_HDR_LEN 1 + +/* offsets shared between all SM FIFO headers */ +#define FM10K_MSG_SM_TAIL_SHIFT 0 +#define FM10K_MSG_SM_TAIL_SIZE 12 +#define FM10K_MSG_SM_VER_SHIFT 12 +#define FM10K_MSG_SM_VER_SIZE 4 +#define FM10K_MSG_SM_HEAD_SHIFT 16 +#define FM10K_MSG_SM_HEAD_SIZE 12 +#define FM10K_MSG_SM_ERR_SHIFT 28 +#define FM10K_MSG_SM_ERR_SIZE 4 + +/* All error messages returned by mailbox functions + * The value -511 is 0xFE01 in hex. The idea is to order the errors + * from 0xFE01 - 0xFEFF so error codes are easily visible in the mailbox + * messages. This also helps to avoid error number collisions as Linux + * doesn't appear to use error numbers 256 - 511. 
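For illustration (not part of this patch), the field macros above are enough to build and pick apart a PF/VF header; the first helper below mirrors what the data-header path does, and the trailing note shows what the FM10K_MBX_ERR() scheme defined next evaluates to. Helper names are invented.

/* compose a data header: Type = 0x8 plus local tail, remote head and CRC */
static u32 example_data_hdr(u16 tail, u16 head, u16 crc)
{
	return FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DATA, TYPE) |
	       FM10K_MSG_HDR_FIELD_SET(tail, TAIL) |
	       FM10K_MSG_HDR_FIELD_SET(head, HEAD) |
	       FM10K_MSG_HDR_FIELD_SET(crc, CRC);
}

/* pull the type back out of a received header */
static enum fm10k_msg_type example_hdr_type(u32 hdr)
{
	return (enum fm10k_msg_type)FM10K_MSG_HDR_FIELD_GET(hdr, TYPE);
}

/* FM10K_MBX_ERR_NO_MBX below is FM10K_MBX_ERR(0x01) = 0x01 - 512 = -511,
 * i.e. 0x...FE01 as a two's complement value, which keeps mailbox error
 * codes easy to spot and clear of the errno values Linux already uses.
 */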
+ */ +#define FM10K_MBX_ERR(_n) ((_n) - 512) +#define FM10K_MBX_ERR_NO_MBX FM10K_MBX_ERR(0x01) +#define FM10K_MBX_ERR_NO_MSG FM10K_MBX_ERR(0x02) +#define FM10K_MBX_ERR_NO_SPACE FM10K_MBX_ERR(0x03) +#define FM10K_MBX_ERR_LOCK FM10K_MBX_ERR(0x04) +#define FM10K_MBX_ERR_TAIL FM10K_MBX_ERR(0x05) +#define FM10K_MBX_ERR_HEAD FM10K_MBX_ERR(0x06) +#define FM10K_MBX_ERR_DST FM10K_MBX_ERR(0x07) +#define FM10K_MBX_ERR_SRC FM10K_MBX_ERR(0x08) +#define FM10K_MBX_ERR_TYPE FM10K_MBX_ERR(0x09) +#define FM10K_MBX_ERR_LEN FM10K_MBX_ERR(0x0A) +#define FM10K_MBX_ERR_SIZE FM10K_MBX_ERR(0x0B) +#define FM10K_MBX_ERR_BUSY FM10K_MBX_ERR(0x0C) +#define FM10K_MBX_ERR_VALUE FM10K_MBX_ERR(0x0D) +#define FM10K_MBX_ERR_RSVD0 FM10K_MBX_ERR(0x0E) +#define FM10K_MBX_ERR_CRC FM10K_MBX_ERR(0x0F) + +#define FM10K_MBX_CRC_SEED 0xFFFF + +struct fm10k_mbx_ops { + s32 (*connect)(struct fm10k_hw *, struct fm10k_mbx_info *); + void (*disconnect)(struct fm10k_hw *, struct fm10k_mbx_info *); + bool (*rx_ready)(struct fm10k_mbx_info *); + bool (*tx_ready)(struct fm10k_mbx_info *, u16); + bool (*tx_complete)(struct fm10k_mbx_info *); + s32 (*enqueue_tx)(struct fm10k_hw *, struct fm10k_mbx_info *, + const u32 *); + s32 (*process)(struct fm10k_hw *, struct fm10k_mbx_info *); + s32 (*register_handlers)(struct fm10k_mbx_info *, + const struct fm10k_msg_data *); +}; + +struct fm10k_mbx_fifo { + u32 *buffer; + u16 head; + u16 tail; + u16 size; +}; + +/* size of buffer to be stored in mailbox for FIFOs */ +#define FM10K_MBX_TX_BUFFER_SIZE 512 +#define FM10K_MBX_RX_BUFFER_SIZE 128 +#define FM10K_MBX_BUFFER_SIZE \ + (FM10K_MBX_TX_BUFFER_SIZE + FM10K_MBX_RX_BUFFER_SIZE) + +/* minimum and maximum message size in dwords */ +#define FM10K_MBX_MSG_MAX_SIZE \ + ((FM10K_MBX_TX_BUFFER_SIZE - 1) & (FM10K_MBX_RX_BUFFER_SIZE - 1)) +#define FM10K_VFMBX_MSG_MTU ((FM10K_VFMBMEM_LEN / 2) - 1) + +#define FM10K_MBX_INIT_TIMEOUT 2000 /* number of retries on mailbox */ +#define FM10K_MBX_INIT_DELAY 500 /* microseconds between retries */ + +struct fm10k_mbx_info { + /* function pointers for mailbox operations */ + struct fm10k_mbx_ops ops; + const struct fm10k_msg_data *msg_data; + + /* message FIFOs */ + struct fm10k_mbx_fifo rx; + struct fm10k_mbx_fifo tx; + + /* delay for handling timeouts */ + u32 timeout; + u32 usec_delay; + + /* mailbox state info */ + u32 mbx_reg, mbmem_reg, mbx_lock, mbx_hdr; + u16 max_size, mbmem_len; + u16 tail, tail_len, pulled; + u16 head, head_len, pushed; + u16 local, remote; + enum fm10k_mbx_state state; + + /* result of last mailbox test */ + s32 test_result; + + /* statistics */ + u64 tx_busy; + u64 tx_dropped; + u64 tx_messages; + u64 tx_dwords; + u64 rx_messages; + u64 rx_dwords; + u64 rx_parse_err; + + /* Buffer to store messages */ + u32 buffer[FM10K_MBX_BUFFER_SIZE]; +}; + +s32 fm10k_pfvf_mbx_init(struct fm10k_hw *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *, u8); +s32 fm10k_sm_mbx_init(struct fm10k_hw *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *); + +#endif /* _FM10K_MBX_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_osdep.h b/lib/librte_pmd_fm10k/base/fm10k_osdep.h new file mode 100644 index 0000000..04f8fe9 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_osdep.h @@ -0,0 +1,148 @@ +/******************************************************************************* + +Copyright (c) 2013-2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. 
Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_OSDEP_H_ +#define _FM10K_OSDEP_H_ + +#include <stdint.h> +#include <string.h> +#include <rte_atomic.h> +#include <rte_byteorder.h> +#include <rte_cycles.h> +#include "../fm10k_logs.h" + +/* TODO: this does not look like it should be used... */ +#define ERROR_REPORT2(v1, v2, v3) do { } while (0) + +#define STATIC static +#define DEBUGFUNC(F) DEBUGOUT(F); +#define DEBUGOUT(S, args...) PMD_DRV_LOG_RAW(DEBUG, S, ##args) +#define DEBUGOUT1(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT2(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT3(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT6(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT7(S, args...) 
DEBUGOUT(S, ##args) + +#define FALSE 0 +#define TRUE 1 +#ifndef false +#define false FALSE +#endif +#ifndef true +#define true TRUE +#endif + +typedef uint8_t u8; +typedef int8_t s8; +typedef uint16_t u16; +typedef int16_t s16; +typedef uint32_t u32; +typedef int32_t s32; +typedef int64_t s64; +typedef uint64_t u64; +typedef int bool; + +#ifndef __le16 +#define __le16 u16 +#define __le32 u32 +#define __le64 u64 +#endif +#ifndef __be16 +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 +#endif + +/* offsets are WORD offsets, not BYTE offsets */ +#define FM10K_WRITE_REG(hw, reg, val) \ + ((((volatile uint32_t *)(hw)->hw_addr)[(reg)]) = ((uint32_t)(val))) +#define FM10K_READ_REG(hw, reg) \ + (((volatile uint32_t *)(hw)->hw_addr)[(reg)]) +#define FM10K_WRITE_FLUSH(a) FM10K_READ_REG(a, FM10K_CTRL) + +#define FM10K_PCI_REG(reg) (*((volatile uint32_t *)(reg))) + +#define FM10K_PCI_REG_WRITE(reg, value) do { \ + FM10K_PCI_REG((reg)) = (value); \ +} while (0) + +/* not implemented */ +#define FM10K_READ_PCI_WORD(hw, reg) 0 + +#define FM10K_WRITE_MBX(hw, reg, value) FM10K_WRITE_REG(hw, reg, value) +#define FM10K_READ_MBX(hw, reg) FM10K_READ_REG(hw, reg) + +#define FM10K_LE16_TO_CPU rte_le_to_cpu_16 +#define FM10K_LE32_TO_CPU rte_le_to_cpu_32 +#define FM10K_CPU_TO_LE32 rte_cpu_to_le_32 +#define FM10K_CPU_TO_LE16 rte_cpu_to_le_16 + +#define FM10K_RMB rte_rmb +#define FM10K_WMB rte_wmb + +#define usec_delay rte_delay_us + +#define FM10K_REMOVED(hw_addr) (!(hw_addr)) + +#ifndef FM10K_IS_ZERO_ETHER_ADDR +/* make certain address is not 0 */ +#define FM10K_IS_ZERO_ETHER_ADDR(addr) \ +(!((addr)[0] | (addr)[1] | (addr)[2] | (addr)[3] | (addr)[4] | (addr)[5])) +#endif + +#ifndef FM10K_IS_MULTICAST_ETHER_ADDR +#define FM10K_IS_MULTICAST_ETHER_ADDR(addr) ((addr)[0] & 0x1) +#endif + +#ifndef FM10K_IS_VALID_ETHER_ADDR +/* make certain address is not multicast or 0 */ +#define FM10K_IS_VALID_ETHER_ADDR(addr) \ +(!FM10K_IS_MULTICAST_ETHER_ADDR(addr) && !FM10K_IS_ZERO_ETHER_ADDR(addr)) +#endif + +#ifndef do_div +#define do_div(n, base) ({\ + (n) = (n) / (base);\ +}) +#endif /* do_div */ + +/* DPDK can't access IOMEM directly */ +#ifndef FM10K_WRITE_SW_REG +#define FM10K_WRITE_SW_REG(v1, v2, v3) do { } while (0) +#endif + +#ifndef fm10k_read_reg +#define fm10k_read_reg FM10K_READ_REG +#endif + +#endif /* _FM10K_OSDEP_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_pf.c b/lib/librte_pmd_fm10k/base/fm10k_pf.c new file mode 100644 index 0000000..3545a24 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_pf.c @@ -0,0 +1,1992 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. 
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_pf.h" +#include "fm10k_vf.h" + +/** + * fm10k_reset_hw_pf - PF hardware reset + * @hw: pointer to hardware structure + * + * This function should return the hardware to a state similar to the + * one it is in after being powered on. + **/ +STATIC s32 fm10k_reset_hw_pf(struct fm10k_hw *hw) +{ + s32 err; + u32 reg; + u16 i; + + DEBUGFUNC("fm10k_reset_hw_pf"); + + /* Disable interrupts */ + FM10K_WRITE_REG(hw, FM10K_EIMR, FM10K_EIMR_DISABLE(ALL)); + + /* Lock ITR2 reg 0 into itself and disable interrupt moderation */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), 0); + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, 0); + + /* We assume here Tx and Rx queue 0 are owned by the PF */ + + /* Shut off VF access to their queues forcing them to queue 0 */ + for (i = 0; i < FM10K_TQMAP_TABLE_SIZE; i++) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(i), 0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(i), 0); + } + + /* shut down all rings */ + err = fm10k_disable_queues_generic(hw, FM10K_MAX_QUEUES); + if (err) + return err; + + /* Verify that DMA is no longer active */ + reg = FM10K_READ_REG(hw, FM10K_DMA_CTRL); + if (reg & (FM10K_DMA_CTRL_TX_ACTIVE | FM10K_DMA_CTRL_RX_ACTIVE)) + return FM10K_ERR_DMA_PENDING; + + /* verify the switch is ready for reset */ + reg = FM10K_READ_REG(hw, FM10K_DMA_CTRL2); + if (!(reg & FM10K_DMA_CTRL2_SWITCH_READY)) + goto out; + + /* Inititate data path reset */ + reg |= FM10K_DMA_CTRL_DATAPATH_RESET; + FM10K_WRITE_REG(hw, FM10K_DMA_CTRL, reg); + + /* Flush write and allow 100us for reset to complete */ + FM10K_WRITE_FLUSH(hw); + usec_delay(FM10K_RESET_TIMEOUT); + + /* Verify we made it out of reset */ + reg = FM10K_READ_REG(hw, FM10K_IP); + if (!(reg & FM10K_IP_NOTINRESET)) + err = FM10K_ERR_RESET_FAILED; + +out: + return err; +} + +/** + * fm10k_is_ari_hierarchy_pf - Indicate ARI hierarchy support + * @hw: pointer to hardware structure + * + * Looks at the ARI hierarchy bit to determine whether ARI is supported or not. 
+ **/ +STATIC bool fm10k_is_ari_hierarchy_pf(struct fm10k_hw *hw) +{ + u16 sriov_ctrl = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_SRIOV_CTRL); + + DEBUGFUNC("fm10k_is_ari_hierarchy_pf"); + + return !!(sriov_ctrl & FM10K_PCIE_SRIOV_CTRL_VFARI); +} + +/** + * fm10k_init_hw_pf - PF hardware initialization + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_init_hw_pf(struct fm10k_hw *hw) +{ + u32 dma_ctrl, txqctl; + u16 i; + + DEBUGFUNC("fm10k_init_hw_pf"); + + /* Establish default VSI as valid */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(fm10k_dglort_default), 0); + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(fm10k_dglort_default), + FM10K_DGLORTMAP_ANY); + + /* Invalidate all other GLORT entries */ + for (i = 1; i < FM10K_DGLORT_COUNT; i++) + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(i), FM10K_DGLORTMAP_NONE); + + /* reset ITR2(0) to point to itself */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), 0); + + /* reset VF ITR2(0) to point to 0 avoid PF registers */ + FM10K_WRITE_REG(hw, FM10K_ITR2(FM10K_ITR_REG_COUNT_PF), 0); + + /* loop through all PF ITR2 registers pointing them to the previous */ + for (i = 1; i < FM10K_ITR_REG_COUNT_PF; i++) + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - 1); + + /* Enable interrupt moderator if not already enabled */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, FM10K_INT_CTRL_ENABLEMODERATOR); + + /* compute the default txqctl configuration */ + txqctl = FM10K_TXQCTL_PF | FM10K_TXQCTL_UNLIMITED_BW | + (hw->mac.default_vid << FM10K_TXQCTL_VID_SHIFT); + + for (i = 0; i < FM10K_MAX_QUEUES; i++) { + /* configure rings for 256 Queue / 32 Descriptor cache mode */ + FM10K_WRITE_REG(hw, FM10K_TQDLOC(i), + (i * FM10K_TQDLOC_BASE_32_DESC) | + FM10K_TQDLOC_SIZE_32_DESC); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), txqctl); + + /* configure rings to provide TPH processing hints */ + FM10K_WRITE_REG(hw, FM10K_TPH_TXCTRL(i), + FM10K_TPH_TXCTRL_DESC_TPHEN | + FM10K_TPH_TXCTRL_DESC_RROEN | + FM10K_TPH_TXCTRL_DESC_WROEN | + FM10K_TPH_TXCTRL_DATA_RROEN); + FM10K_WRITE_REG(hw, FM10K_TPH_RXCTRL(i), + FM10K_TPH_RXCTRL_DESC_TPHEN | + FM10K_TPH_RXCTRL_DESC_RROEN | + FM10K_TPH_RXCTRL_DATA_WROEN | + FM10K_TPH_RXCTRL_HDR_WROEN); + } + + /* set max hold interval to align with 1.024 usec in all modes */ + switch (hw->bus.speed) { + case fm10k_bus_speed_2500: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN1; + break; + case fm10k_bus_speed_5000: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN2; + break; + case fm10k_bus_speed_8000: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN3; + break; + default: + dma_ctrl = 0; + break; + } + + /* Configure TSO flags */ + FM10K_WRITE_REG(hw, FM10K_DTXTCPFLGL, FM10K_TSO_FLAGS_LOW); + FM10K_WRITE_REG(hw, FM10K_DTXTCPFLGH, FM10K_TSO_FLAGS_HI); + + /* Enable DMA engine + * Set Rx Descriptor size to 32 + * Set Minimum MSS to 64 + * Set Maximum number of Rx queues to 256 / 32 Descriptor + */ + dma_ctrl |= FM10K_DMA_CTRL_TX_ENABLE | FM10K_DMA_CTRL_RX_ENABLE | + FM10K_DMA_CTRL_RX_DESC_SIZE | FM10K_DMA_CTRL_MINMSS_64 | + FM10K_DMA_CTRL_32_DESC; + + FM10K_WRITE_REG(hw, FM10K_DMA_CTRL, dma_ctrl); + + /* record maximum queue count, we limit ourselves to 128 */ + hw->mac.max_queues = FM10K_MAX_QUEUES_PF; + + /* We support either 64 VFs or 7 VFs depending on if we have ARI */ + hw->iov.total_vfs = fm10k_is_ari_hierarchy_pf(hw) ? 
64 : 7; + + return FM10K_SUCCESS; +} + +/** + * fm10k_is_slot_appropriate_pf - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. + **/ +STATIC bool fm10k_is_slot_appropriate_pf(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_is_slot_appropriate_pf"); + + return (hw->bus.speed == hw->bus_caps.speed) && + (hw->bus.width == hw->bus_caps.width); +} + +/** + * fm10k_update_vlan_pf - Update status of VLAN ID in VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @vsi: Index indicating VF ID or PF ID in table + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for the corresponding function. In addition to the + * standard set/clear that supports one bit a multi-bit write is + * supported to set 64 bits at a time. + **/ +STATIC s32 fm10k_update_vlan_pf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set) +{ + u32 vlan_table, reg, mask, bit, len; + + /* verify the VSI index is valid */ + if (vsi > FM10K_VLAN_TABLE_VSI_MAX) + return FM10K_ERR_PARAM; + + /* VLAN multi-bit write: + * The multi-bit write has several parts to it. + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | RSVD0 | Length |C|RSVD0| VLAN ID | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * VLAN ID: Vlan Starting value + * RSVD0: Reserved section, must be 0 + * C: Flag field, 0 is set, 1 is clear (Used in VF VLAN message) + * Length: Number of times to repeat the bit being set + */ + len = vid >> 16; + vid = (vid << 17) >> 17; + + /* verify the reserved 0 fields are 0 */ + if (len >= FM10K_VLAN_TABLE_VID_MAX || vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* Loop through the table updating all required VLANs */ + for (reg = FM10K_VLAN_TABLE(vsi, vid / 32), bit = vid % 32; + len < FM10K_VLAN_TABLE_VID_MAX; + len -= 32 - bit, reg++, bit = 0) { + /* record the initial state of the register */ + vlan_table = FM10K_READ_REG(hw, reg); + + /* truncate mask if we are at the start or end of the run */ + mask = (~(u32)0 >> ((len < 31) ? 31 - len : 0)) << bit; + + /* make necessary modifications to the register */ + mask &= set ? ~vlan_table : vlan_table; + if (mask) + FM10K_WRITE_REG(hw, reg, vlan_table ^ mask); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_read_mac_addr_pf - Read device MAC address + * @hw: pointer to the HW structure + * + * Reads the device MAC address from the SM_AREA and stores the value. 
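Tying this back to fm10k_update_vlan_pf() above: callers do not have to loop over individual VLAN IDs, because the vid argument doubles as the multi-bit descriptor shown in that function's comment. A hedged sketch of the packing, with an invented helper name (a Length field of 0 updates only the first VLAN ID):

static s32 example_update_vlan_range(struct fm10k_hw *hw, u16 first_vid,
				     u16 length, u8 vsi, bool set)
{
	/* Length in bits 31:16, starting VLAN ID in the low bits; the
	 * clear flag and reserved bits stay zero for a PF-side update.
	 */
	u32 vid = ((u32)length << 16) | first_vid;

	return fm10k_update_vlan_pf(hw, vid, vsi, set);
}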
+ **/ +STATIC s32 fm10k_read_mac_addr_pf(struct fm10k_hw *hw) +{ + u8 perm_addr[ETH_ALEN]; + u32 serial_num; + int i; + + DEBUGFUNC("fm10k_read_mac_addr_pf"); + + serial_num = FM10K_READ_REG(hw, FM10K_SM_AREA(1)); + + /* last byte should be all 1's */ + if ((~serial_num) << 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[0] = (u8)(serial_num >> 24); + perm_addr[1] = (u8)(serial_num >> 16); + perm_addr[2] = (u8)(serial_num >> 8); + + serial_num = FM10K_READ_REG(hw, FM10K_SM_AREA(0)); + + /* first byte should be all 1's */ + if ((~serial_num) >> 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[3] = (u8)(serial_num >> 16); + perm_addr[4] = (u8)(serial_num >> 8); + perm_addr[5] = (u8)(serial_num); + + for (i = 0; i < ETH_ALEN; i++) { + hw->mac.perm_addr[i] = perm_addr[i]; + hw->mac.addr[i] = perm_addr[i]; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_glort_valid_pf - Validate that the provided glort is valid + * @hw: pointer to the HW structure + * @glort: base glort to be validated + * + * This function will return an error if the provided glort is invalid + **/ +bool fm10k_glort_valid_pf(struct fm10k_hw *hw, u16 glort) +{ + glort &= hw->mac.dglort_map >> FM10K_DGLORTMAP_MASK_SHIFT; + + return glort == (hw->mac.dglort_map & FM10K_DGLORTMAP_NONE); +} + +/** + * fm10k_update_xc_addr_pf - Update device addresses + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure + * + * This function generates a message to the Switch API requesting + * that the given logical port add/remove the given L2 MAC/VLAN address. + **/ +STATIC s32 fm10k_update_xc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + struct fm10k_mac_update mac_update; + u32 msg[5]; + + DEBUGFUNC("fm10k_update_xc_addr_pf"); + + /* if glort or VLAN are not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort) || vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* record fields */ + mac_update.mac_lower = FM10K_CPU_TO_LE32(((u32)mac[2] << 24) | + ((u32)mac[3] << 16) | + ((u32)mac[4] << 8) | + ((u32)mac[5])); + mac_update.mac_upper = FM10K_CPU_TO_LE16(((u32)mac[0] << 8) | + ((u32)mac[1])); + mac_update.vlan = FM10K_CPU_TO_LE16(vid); + mac_update.glort = FM10K_CPU_TO_LE16(glort); + mac_update.action = add ? 0 : 1; + mac_update.flags = flags; + + /* populate mac_update fields */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_UPDATE_MAC_FWD_RULE); + fm10k_tlv_attr_put_le_struct(msg, FM10K_PF_ATTR_ID_MAC_UPDATE, + &mac_update, sizeof(mac_update)); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_uc_addr_pf - Update device unicast addresses + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure + * + * This function is used to add or remove unicast addresses for + * the PF. 
+ **/ +STATIC s32 fm10k_update_uc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + DEBUGFUNC("fm10k_update_uc_addr_pf"); + + /* verify MAC address is valid */ + if (!FM10K_IS_VALID_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + return fm10k_update_xc_addr_pf(hw, glort, mac, vid, add, flags); +} + +/** + * fm10k_update_mc_addr_pf - Update device multicast addresses + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses for + * the PF. + **/ +STATIC s32 fm10k_update_mc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add) +{ + DEBUGFUNC("fm10k_update_mc_addr_pf"); + + /* verify multicast address is valid */ + if (!FM10K_IS_MULTICAST_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + return fm10k_update_xc_addr_pf(hw, glort, mac, vid, add, 0); +} + +/** + * fm10k_update_xcast_mode_pf - Request update of multicast mode + * @hw: pointer to hardware structure + * @glort: base resource tag for this request + * @mode: integer value indicating mode being requested + * + * This function will attempt to request a higher mode for the port + * so that it can enable either multicast, multicast promiscuous, or + * promiscuous mode of operation. + **/ +STATIC s32 fm10k_update_xcast_mode_pf(struct fm10k_hw *hw, u16 glort, u8 mode) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3], xcast_mode; + + DEBUGFUNC("fm10k_update_xcast_mode_pf"); + + if (mode > FM10K_XCAST_MODE_NONE) + return FM10K_ERR_PARAM; + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* write xcast mode as a single u32 value, + * lower 16 bits: glort + * upper 16 bits: mode + */ + xcast_mode = ((u32)mode << 16) | glort; + + /* generate message requesting to change xcast mode */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_XCAST_MODES); + fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_XCAST_MODE, xcast_mode); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_int_moderator_pf - Update interrupt moderator linked list + * @hw: pointer to hardware structure + * + * This function walks through the MSI-X vector table to determine the + * number of active interrupts and based on that information updates the + * interrupt moderator linked list. 
+ **/ +STATIC void fm10k_update_int_moderator_pf(struct fm10k_hw *hw) +{ + u32 i; + + /* Disable interrupt moderator */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, 0); + + /* loop through PF from last to first looking for enabled vectors */ + for (i = FM10K_ITR_REG_COUNT_PF - 1; i; i--) { + if (!FM10K_READ_REG(hw, FM10K_MSIX_VECTOR_MASK(i))) + break; + } + + /* always reset VFITR2[0] to point to last enabled PF vector */ + FM10K_WRITE_REG(hw, FM10K_ITR2(FM10K_ITR_REG_COUNT_PF), i); + + /* reset ITR2[0] to point to last enabled PF vector */ + if (!hw->iov.num_vfs) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), i); + + /* Enable interrupt moderator */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, FM10K_INT_CTRL_ENABLEMODERATOR); +} + +/** + * fm10k_update_lport_state_pf - Notify the switch of a change in port state + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @count: number of logical ports being updated + * @enable: boolean value indicating enable or disable + * + * This function is used to add/remove a logical port from the switch. + **/ +STATIC s32 fm10k_update_lport_state_pf(struct fm10k_hw *hw, u16 glort, + u16 count, bool enable) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3], lport_msg; + + DEBUGFUNC("fm10k_lport_state_pf"); + + /* do nothing if we are being asked to create or destroy 0 ports */ + if (!count) + return FM10K_SUCCESS; + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* construct the lport message from the 2 pieces of data we have */ + lport_msg = ((u32)count << 16) | glort; + + /* generate lport create/delete message */ + fm10k_tlv_msg_init(msg, enable ? FM10K_PF_MSG_ID_LPORT_CREATE : + FM10K_PF_MSG_ID_LPORT_DELETE); + fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_PORT, lport_msg); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_configure_dglort_map_pf - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned.
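Before the implementation, a small sketch (not part of this patch) of the dglort configuration a caller might hand to this function for a plain PF RSS setup; the values are illustrative and any field left out of the initializer stays zero.

static s32 example_default_dglort(struct fm10k_hw *hw)
{
	struct fm10k_dglort_cfg dglort = { 0 };

	dglort.idx = fm10k_dglort_default;	/* DGLORTMAP/DEC entry to program */
	dglort.glort = 0;			/* base resource tag to match */
	dglort.queue_b = 0;			/* first queue of the range */
	dglort.rss_l = 2;			/* 2^2 = 4 RSS queues */

	return fm10k_configure_dglort_map_pf(hw, &dglort);
}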
+ **/ +STATIC s32 fm10k_configure_dglort_map_pf(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + u16 glort, queue_count, vsi_count, pc_count; + u16 vsi, queue, pc, q_idx; + u32 txqctl, dglortdec, dglortmap; + + /* verify the dglort pointer */ + if (!dglort) + return FM10K_ERR_PARAM; + + /* verify the dglort values */ + if ((dglort->idx > 7) || (dglort->rss_l > 7) || (dglort->pc_l > 3) || + (dglort->vsi_l > 6) || (dglort->vsi_b > 64) || + (dglort->queue_l > 8) || (dglort->queue_b >= 256)) + return FM10K_ERR_PARAM; + + /* determine count of VSIs and queues */ + queue_count = 1 << (dglort->rss_l + dglort->pc_l); + vsi_count = 1 << (dglort->vsi_l + dglort->queue_l); + glort = dglort->glort; + q_idx = dglort->queue_b; + + /* configure SGLORT for queues */ + for (vsi = 0; vsi < vsi_count; vsi++, glort++) { + for (queue = 0; queue < queue_count; queue++, q_idx++) { + if (q_idx >= FM10K_MAX_QUEUES) + break; + + FM10K_WRITE_REG(hw, FM10K_TX_SGLORT(q_idx), glort); + FM10K_WRITE_REG(hw, FM10K_RX_SGLORT(q_idx), glort); + } + } + + /* determine count of PCs and queues */ + queue_count = 1 << (dglort->queue_l + dglort->rss_l + dglort->vsi_l); + pc_count = 1 << dglort->pc_l; + + /* configure PC for Tx queues */ + for (pc = 0; pc < pc_count; pc++) { + q_idx = pc + dglort->queue_b; + for (queue = 0; queue < queue_count; queue++) { + if (q_idx >= FM10K_MAX_QUEUES) + break; + + txqctl = FM10K_READ_REG(hw, FM10K_TXQCTL(q_idx)); + txqctl &= ~FM10K_TXQCTL_PC_MASK; + txqctl |= pc << FM10K_TXQCTL_PC_SHIFT; + FM10K_WRITE_REG(hw, FM10K_TXQCTL(q_idx), txqctl); + + q_idx += pc_count; + } + } + + /* configure DGLORTDEC */ + dglortdec = ((u32)(dglort->rss_l) << FM10K_DGLORTDEC_RSSLENGTH_SHIFT) | + ((u32)(dglort->queue_b) << FM10K_DGLORTDEC_QBASE_SHIFT) | + ((u32)(dglort->pc_l) << FM10K_DGLORTDEC_PCLENGTH_SHIFT) | + ((u32)(dglort->vsi_b) << FM10K_DGLORTDEC_VSIBASE_SHIFT) | + ((u32)(dglort->vsi_l) << FM10K_DGLORTDEC_VSILENGTH_SHIFT) | + ((u32)(dglort->queue_l)); + if (dglort->inner_rss) + dglortdec |= FM10K_DGLORTDEC_INNERRSS_ENABLE; + + /* configure DGLORTMAP */ + dglortmap = (dglort->idx == fm10k_dglort_default) ? + FM10K_DGLORTMAP_ANY : FM10K_DGLORTMAP_ZERO; + dglortmap <<= dglort->vsi_l + dglort->queue_l + dglort->shared_l; + dglortmap |= dglort->glort; + + /* write values to hardware */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(dglort->idx), dglortdec); + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(dglort->idx), dglortmap); + + return FM10K_SUCCESS; +} + +u16 fm10k_queues_per_pool(struct fm10k_hw *hw) +{ + u16 num_pools = hw->iov.num_pools; + + return (num_pools > 32) ? 2 : (num_pools > 16) ? 4 : (num_pools > 8) ? + 8 : FM10K_MAX_QUEUES_POOL; +} + +u16 fm10k_vf_queue_index(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 num_vfs = hw->iov.num_vfs; + u16 vf_q_idx = FM10K_MAX_QUEUES; + + vf_q_idx -= fm10k_queues_per_pool(hw) * (num_vfs - vf_idx); + + return vf_q_idx; +} + +STATIC u16 fm10k_vectors_per_pool(struct fm10k_hw *hw) +{ + u16 num_pools = hw->iov.num_pools; + + return (num_pools > 32) ? 8 : (num_pools > 16) ? 
16 : + FM10K_MAX_VECTORS_POOL; +} + +STATIC u16 fm10k_vf_vector_index(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 vf_v_idx = FM10K_MAX_VECTORS_PF; + + vf_v_idx += fm10k_vectors_per_pool(hw) * vf_idx; + + return vf_v_idx; +} + +/** + * fm10k_iov_assign_resources_pf - Assign pool resources for virtualization + * @hw: pointer to the HW structure + * @num_vfs: number of VFs to be allocated + * @num_pools: number of virtualization pools to be allocated + * + * Allocates queues and traffic classes to virtualization entities to prepare + * the PF for SR-IOV and VMDq + **/ +STATIC s32 fm10k_iov_assign_resources_pf(struct fm10k_hw *hw, u16 num_vfs, + u16 num_pools) +{ + u16 qmap_stride, qpp, vpp, vf_q_idx, vf_q_idx0, qmap_idx; + u32 vid = hw->mac.default_vid << FM10K_TXQCTL_VID_SHIFT; + int i, j; + + /* hardware only supports up to 64 pools */ + if (num_pools > 64) + return FM10K_ERR_PARAM; + + /* the number of VFs cannot exceed the number of pools */ + if ((num_vfs > num_pools) || (num_vfs > hw->iov.total_vfs)) + return FM10K_ERR_PARAM; + + /* record number of virtualization entities */ + hw->iov.num_vfs = num_vfs; + hw->iov.num_pools = num_pools; + + /* determine qmap offsets and counts */ + qmap_stride = (num_vfs > 8) ? 32 : 256; + qpp = fm10k_queues_per_pool(hw); + vpp = fm10k_vectors_per_pool(hw); + + /* calculate starting index for queues */ + vf_q_idx = fm10k_vf_queue_index(hw, 0); + qmap_idx = 0; + + /* establish TCs with -1 credits and no quanta to prevent transmit */ + for (i = 0; i < num_vfs; i++) { + FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(i), 0); + FM10K_WRITE_REG(hw, FM10K_TC_RATE(i), 0); + FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(i), + FM10K_TC_CREDIT_CREDIT_MASK); + } + + /* zero out all mbmem registers */ + for (i = FM10K_VFMBMEM_LEN * num_vfs; i--;) + FM10K_WRITE_REG(hw, FM10K_MBMEM(i), 0); + + /* clear event notification of VF FLR */ + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(0), ~0); + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(1), ~0); + + /* loop through unallocated rings assigning them back to PF */ + for (i = FM10K_MAX_QUEUES_PF; i < vf_q_idx; i++) { + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), FM10K_TXQCTL_PF | vid); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), FM10K_RXQCTL_PF); + } + + /* PF should have already updated VFITR2[0] */ + + /* update all ITR registers to flow to VFITR2[0] */ + for (i = FM10K_ITR_REG_COUNT_PF + 1; i < FM10K_ITR_REG_COUNT; i++) { + if (!(i & (vpp - 1))) + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - vpp); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - 1); + } + + /* update PF ITR2[0] to reference the last vector */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), + fm10k_vf_vector_index(hw, num_vfs - 1)); + + /* loop through rings populating rings and TCs */ + for (i = 0; i < num_vfs; i++) { + /* record index for VF queue 0 for use in end of loop */ + vf_q_idx0 = vf_q_idx; + + for (j = 0; j < qpp; j++, qmap_idx++, vf_q_idx++) { + /* assign VF and locked TC to queues */ + FM10K_WRITE_REG(hw, FM10K_TXDCTL(vf_q_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(vf_q_idx), + (i << FM10K_TXQCTL_TC_SHIFT) | i | + FM10K_TXQCTL_VF | vid); + FM10K_WRITE_REG(hw, FM10K_RXDCTL(vf_q_idx), + FM10K_RXDCTL_WRITE_BACK_MIN_DELAY | + FM10K_RXDCTL_DROP_ON_EMPTY); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(vf_q_idx), + FM10K_RXQCTL_VF | + (i << FM10K_RXQCTL_VF_SHIFT)); + + /* map queue pair to VF */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), vf_q_idx); + } + + /* repeat the first ring for all of the remaining VF rings */ + for (; j 
< qmap_stride; j++, qmap_idx++) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), vf_q_idx0); + } + } + + /* loop through remaining indexes assigning all to queue 0 */ + while (qmap_idx < FM10K_TQMAP_TABLE_SIZE) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), 0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), 0); + qmap_idx++; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_configure_tc_pf - Configure the shaping group for VF + * @hw: pointer to the HW structure + * @vf_idx: index of VF receiving GLORT + * @rate: Rate indicated in Mb/s + * + * Configures the TC for a given VF to allow only up to a given number + * of Mb/s of outgoing Tx throughput. + **/ +STATIC s32 fm10k_iov_configure_tc_pf(struct fm10k_hw *hw, u16 vf_idx, int rate) +{ + /* configure defaults */ + u32 interval = FM10K_TC_RATE_INTERVAL_4US_GEN3; + u32 tc_rate = FM10K_TC_RATE_QUANTA_MASK; + + /* verify vf is in range */ + if (vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* set interval to align with 4.096 usec in all modes */ + switch (hw->bus.speed) { + case fm10k_bus_speed_2500: + interval = FM10K_TC_RATE_INTERVAL_4US_GEN1; + break; + case fm10k_bus_speed_5000: + interval = FM10K_TC_RATE_INTERVAL_4US_GEN2; + break; + default: + break; + } + + if (rate) { + if (rate > FM10K_VF_TC_MAX || rate < FM10K_VF_TC_MIN) + return FM10K_ERR_PARAM; + + /* The quanta is measured in Bytes per 4.096 or 8.192 usec + * The rate is provided in Mbits per second + * To translate from rate to quanta we need to multiply the + * rate by 8.192 usec and divide by 8 bits/byte. To avoid + * dealing with floating point we can round the values up + * to the nearest whole number ratio which gives us 128 / 125. + */ + tc_rate = (rate * 128) / 125; + + /* try to keep the rate limiting accurate by increasing + * the number of credits and interval for rates less than 4Gb/s + */ + if (rate < 4000) + interval <<= 1; + else + tc_rate >>= 1; + } + + /* update rate limiter with new values */ + FM10K_WRITE_REG(hw, FM10K_TC_RATE(vf_idx), tc_rate | interval); + FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(vf_idx), FM10K_TC_MAXCREDIT_64K); + FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(vf_idx), FM10K_TC_MAXCREDIT_64K); + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_assign_int_moderator_pf - Add VF interrupts to moderator list + * @hw: pointer to the HW structure + * @vf_idx: index of VF receiving GLORT + * + * Update the interrupt moderator linked list to include any MSI-X + * interrupts which the VF has enabled in the MSI-X vector table.
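To sanity-check the 128/125 conversion used by fm10k_iov_configure_tc_pf() above, the arithmetic can be reproduced with plain integers; this is a standalone sketch, not driver code:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t rate_mbps = 1000;                  /* requested VF Tx limit */
        uint32_t quanta = (rate_mbps * 128) / 125;  /* bytes per interval: 1024 */

        /* below 4 Gb/s the driver doubles the interval rather than
         * halving the quanta, to keep the limiter accurate */
        if (rate_mbps < 4000)
                printf("quanta=%u, interval doubled\n", (unsigned)quanta);
        else
                printf("quanta=%u, halved to %u\n",
                       (unsigned)quanta, (unsigned)(quanta >> 1));
        return 0;
}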
+ **/ +STATIC s32 fm10k_iov_assign_int_moderator_pf(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 vf_v_idx, vf_v_limit, i; + + /* verify vf is in range */ + if (vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* determine vector offset and count */ + vf_v_idx = fm10k_vf_vector_index(hw, vf_idx); + vf_v_limit = vf_v_idx + fm10k_vectors_per_pool(hw); + + /* search for first vector that is not masked */ + for (i = vf_v_limit - 1; i > vf_v_idx; i--) { + if (!FM10K_READ_REG(hw, FM10K_MSIX_VECTOR_MASK(i))) + break; + } + + /* reset linked list so it now includes our active vectors */ + if (vf_idx == (hw->iov.num_vfs - 1)) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), i); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_limit), i); + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_assign_default_mac_vlan_pf - Assign a MAC and VLAN to VF + * @hw: pointer to the HW structure + * @vf_info: pointer to VF information structure + * + * Assign a MAC address and default VLAN to a VF and notify it of the update + **/ +STATIC s32 fm10k_iov_assign_default_mac_vlan_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u16 qmap_stride, queues_per_pool, vf_q_idx, timeout, qmap_idx, i; + u32 msg[4], txdctl, txqctl, tdbal = 0, tdbah = 0; + s32 err = FM10K_SUCCESS; + u16 vf_idx, vf_vid; + + /* verify vf is in range */ + if (!vf_info || vf_info->vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* determine qmap offsets and counts */ + qmap_stride = (hw->iov.num_vfs > 8) ? 32 : 256; + queues_per_pool = fm10k_queues_per_pool(hw); + + /* calculate starting index for queues */ + vf_idx = vf_info->vf_idx; + vf_q_idx = fm10k_vf_queue_index(hw, vf_idx); + qmap_idx = qmap_stride * vf_idx; + + /* MAP Tx queue back to 0 temporarily, and disable it */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(vf_q_idx), 0); + + /* determine correct default VLAN ID */ + if (vf_info->pf_vid) + vf_vid = vf_info->pf_vid | FM10K_VLAN_CLEAR; + else + vf_vid = vf_info->sw_vid; + + /* generate MAC_ADDR request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_DEFAULT_MAC, + vf_info->mac, vf_vid); + + /* load onto outgoing mailbox, ignore any errors on enqueue */ + if (vf_info->mbx.ops.enqueue_tx) + vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg); + + /* verify ring has disabled before modifying base address registers */ + txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(vf_q_idx)); + for (timeout = 0; txdctl & FM10K_TXDCTL_ENABLE; timeout++) { + /* limit ourselves to a 1ms timeout */ + if (timeout == 10) { + err = FM10K_ERR_DMA_PENDING; + goto err_out; + } + + usec_delay(100); + txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(vf_q_idx)); + } + + /* Update base address registers to contain MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac)) { + tdbal = (((u32)vf_info->mac[3]) << 24) | + (((u32)vf_info->mac[4]) << 16) | + (((u32)vf_info->mac[5]) << 8); + + tdbah = (((u32)0xFF) << 24) | + (((u32)vf_info->mac[0]) << 16) | + (((u32)vf_info->mac[1]) << 8) | + ((u32)vf_info->mac[2]); + } + + /* Record the base address into queue 0 */ + FM10K_WRITE_REG(hw, FM10K_TDBAL(vf_q_idx), tdbal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(vf_q_idx), tdbah); + +err_out: + /* configure Queue control register */ + txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) & + FM10K_TXQCTL_VID_MASK; + txqctl |= (vf_idx << FM10K_TXQCTL_TC_SHIFT) | + FM10K_TXQCTL_VF | vf_idx; + + /* assign VID */ + for (i = 0; i < queues_per_pool; i++) + FM10K_WRITE_REG(hw, FM10K_TXQCTL(vf_q_idx + i), 
txqctl); + + /* restore the queue back to VF ownership */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx); + return err; +} + +/** + * fm10k_iov_reset_resources_pf - Reassign queues and interrupts to a VF + * @hw: pointer to the HW structure + * @vf_info: pointer to VF information structure + * + * Reassign the interrupts and queues to a VF following an FLR + **/ +STATIC s32 fm10k_iov_reset_resources_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u16 qmap_stride, queues_per_pool, vf_q_idx, qmap_idx; + u32 tdbal = 0, tdbah = 0, txqctl, rxqctl; + u16 vf_v_idx, vf_v_limit, vf_vid; + u8 vf_idx = vf_info->vf_idx; + int i; + + /* verify vf is in range */ + if (vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* clear event notification of VF FLR */ + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(vf_idx / 32), 1 << (vf_idx % 32)); + + /* force timeout and then disconnect the mailbox */ + vf_info->mbx.timeout = 0; + if (vf_info->mbx.ops.disconnect) + vf_info->mbx.ops.disconnect(hw, &vf_info->mbx); + + /* determine vector offset and count */ + vf_v_idx = fm10k_vf_vector_index(hw, vf_idx); + vf_v_limit = vf_v_idx + fm10k_vectors_per_pool(hw); + + /* determine qmap offsets and counts */ + qmap_stride = (hw->iov.num_vfs > 8) ? 32 : 256; + queues_per_pool = fm10k_queues_per_pool(hw); + qmap_idx = qmap_stride * vf_idx; + + /* make all the queues inaccessible to the VF */ + for (i = qmap_idx; i < (qmap_idx + qmap_stride); i++) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(i), 0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(i), 0); + } + + /* calculate starting index for queues */ + vf_q_idx = fm10k_vf_queue_index(hw, vf_idx); + + /* determine correct default VLAN ID */ + if (vf_info->pf_vid) + vf_vid = vf_info->pf_vid; + else + vf_vid = vf_info->sw_vid; + + /* configure Queue control register */ + txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) | + (vf_idx << FM10K_TXQCTL_TC_SHIFT) | + FM10K_TXQCTL_VF | vf_idx; + rxqctl = FM10K_RXQCTL_VF | (vf_idx << FM10K_RXQCTL_VF_SHIFT); + + /* stop further DMA and reset queue ownership back to VF */ + for (i = vf_q_idx; i < (queues_per_pool + vf_q_idx); i++) { + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), txqctl); + FM10K_WRITE_REG(hw, FM10K_RXDCTL(i), + FM10K_RXDCTL_WRITE_BACK_MIN_DELAY | + FM10K_RXDCTL_DROP_ON_EMPTY); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), rxqctl); + } + + /* reset TC with -1 credits and no quanta to prevent transmit */ + FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(vf_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TC_RATE(vf_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(vf_idx), + FM10K_TC_CREDIT_CREDIT_MASK); + + /* update our first entry in the table based on previous VF */ + if (!vf_idx) + hw->mac.ops.update_int_moderator(hw); + else + hw->iov.ops.assign_int_moderator(hw, vf_idx - 1); + + /* reset linked list so it now includes our active vectors */ + if (vf_idx == (hw->iov.num_vfs - 1)) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), vf_v_idx); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_limit), vf_v_idx); + + /* link remaining vectors so that next points to previous */ + for (vf_v_idx++; vf_v_idx < vf_v_limit; vf_v_idx++) + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_idx), vf_v_idx - 1); + + /* zero out MBMEM, VLAN_TABLE, RETA, RSSRK, and MRQC registers */ + for (i = FM10K_VFMBMEM_LEN; i--;) + FM10K_WRITE_REG(hw, FM10K_MBMEM_VF(vf_idx, i), 0); + for (i = FM10K_VLAN_TABLE_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_VLAN_TABLE(vf_info->vsi, i), 0); + for (i = FM10K_RETA_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_RETA(vf_info->vsi, i), 0); + for 
(i = FM10K_RSSRK_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_RSSRK(vf_info->vsi, i), 0); + FM10K_WRITE_REG(hw, FM10K_MRQC(vf_info->vsi), 0); + + /* Update base address registers to contain MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac)) { + tdbal = (((u32)vf_info->mac[3]) << 24) | + (((u32)vf_info->mac[4]) << 16) | + (((u32)vf_info->mac[5]) << 8); + tdbah = (((u32)0xFF) << 24) | + (((u32)vf_info->mac[0]) << 16) | + (((u32)vf_info->mac[1]) << 8) | + ((u32)vf_info->mac[2]); + } + + /* map queue pairs back to VF from last to first */ + for (i = queues_per_pool; i--;) { + FM10K_WRITE_REG(hw, FM10K_TDBAL(vf_q_idx + i), tdbal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(vf_q_idx + i), tdbah); + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx + i), vf_q_idx + i); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx + i), vf_q_idx + i); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_set_lport_pf - Assign and enable a logical port for a given VF + * @hw: pointer to hardware structure + * @vf_info: pointer to VF information structure + * @lport_idx: Logical port offset from the hardware glort + * @flags: Set of capability flags to extend port beyond basic functionality + * + * This function allows enabling a VF port by assigning it a GLORT and + * setting the flags so that it can enable an Rx mode. + **/ +STATIC s32 fm10k_iov_set_lport_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info, + u16 lport_idx, u8 flags) +{ + u16 glort = (hw->mac.dglort_map + lport_idx) & FM10K_DGLORTMAP_NONE; + + DEBUGFUNC("fm10k_iov_set_lport_state_pf"); + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + vf_info->vf_flags = flags | FM10K_VF_FLAG_NONE_CAPABLE; + vf_info->glort = glort; + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_reset_lport_pf - Disable a logical port for a given VF + * @hw: pointer to hardware structure + * @vf_info: pointer to VF information structure + * + * This function disables a VF port by stripping it of a GLORT and + * setting the flags so that it cannot enable any Rx mode. + **/ +STATIC void fm10k_iov_reset_lport_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u32 msg[1]; + + DEBUGFUNC("fm10k_iov_reset_lport_state_pf"); + + /* need to disable the port if it is already enabled */ + if (FM10K_VF_FLAG_ENABLED(vf_info)) { + /* notify switch that this port has been disabled */ + fm10k_update_lport_state_pf(hw, vf_info->glort, 1, false); + + /* generate port state response to notify VF it is not ready */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg); + } + + /* clear flags and glort if it exists */ + vf_info->vf_flags = 0; + vf_info->glort = 0; +} + +/** + * fm10k_iov_update_stats_pf - Updates hardware related statistics for VFs + * @hw: pointer to hardware structure + * @q: stats for all queues of a VF + * @vf_idx: index of VF + * + * This function collects queue stats for VFs. 
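Both fm10k_iov_assign_default_mac_vlan_pf() and fm10k_iov_reset_resources_pf() above stash the VF MAC address in the TDBAL/TDBAH registers of the VF's first (still disabled) queue, presumably so the VF side can recover its address later. A quick standalone sketch of the packing:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* example address only */
        const uint8_t mac[6] = { 0x00, 0x1b, 0x21, 0xaa, 0xbb, 0xcc };

        uint32_t tdbal = ((uint32_t)mac[3] << 24) |
                         ((uint32_t)mac[4] << 16) |
                         ((uint32_t)mac[5] << 8);
        uint32_t tdbah = ((uint32_t)0xFF << 24) |
                         ((uint32_t)mac[0] << 16) |
                         ((uint32_t)mac[1] << 8) |
                          (uint32_t)mac[2];

        /* -> TDBAL=0xaabbcc00 TDBAH=0xff001b21 */
        printf("TDBAL=0x%08x TDBAH=0x%08x\n", (unsigned)tdbal, (unsigned)tdbah);
        return 0;
}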
+ **/ +STATIC void fm10k_iov_update_stats_pf(struct fm10k_hw *hw, + struct fm10k_hw_stats_q *q, + u16 vf_idx) +{ + u32 idx, qpp; + + /* get stats for all of the queues */ + qpp = fm10k_queues_per_pool(hw); + idx = fm10k_vf_queue_index(hw, vf_idx); + fm10k_update_hw_stats_q(hw, q, idx, qpp); +} + +STATIC s32 fm10k_iov_report_timestamp_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info, + u64 timestamp) +{ + u32 msg[4]; + + /* generate port state response to notify VF it is not ready */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_1588); + fm10k_tlv_attr_put_u64(msg, FM10K_1588_MSG_TIMESTAMP, timestamp); + + return vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg); +} + +/** + * fm10k_iov_msg_msix_pf - Message handler for MSI-X request from VF + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Pointer to mailbox information structure + * + * This function is a default handler for MSI-X requests from the VF. The + * assumption is that in this case it is acceptable to just directly + * hand off the message from the VF to the underlying shared code. + **/ +s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx; + u8 vf_idx = vf_info->vf_idx; + + UNREFERENCED_1PARAMETER(results); + DEBUGFUNC("fm10k_iov_msg_msix_pf"); + + return hw->iov.ops.assign_int_moderator(hw, vf_idx); +} + +/** + * fm10k_iov_msg_mac_vlan_pf - Message handler for MAC/VLAN request from VF + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Pointer to mailbox information structure + * + * This function is a default handler for MAC/VLAN requests from the VF. + * The assumption is that in this case it is acceptable to just directly + * hand off the message from the VF to the underlying shared code. 
+ **/ +s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx; + int err = FM10K_SUCCESS; + u8 mac[ETH_ALEN]; + u32 *result; + u16 vlan; + u32 vid; + + DEBUGFUNC("fm10k_iov_msg_mac_vlan_pf"); + + /* we shouldn't be updating rules on a disabled interface */ + if (!FM10K_VF_FLAG_ENABLED(vf_info)) + err = FM10K_ERR_PARAM; + + if (!err && !!results[FM10K_MAC_VLAN_MSG_VLAN]) { + result = results[FM10K_MAC_VLAN_MSG_VLAN]; + + /* record VLAN id requested */ + err = fm10k_tlv_attr_get_u32(result, &vid); + if (err) + return err; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vid || (vid == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vid |= vf_info->pf_vid; + else + vid |= vf_info->sw_vid; + } else if (vid != vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* update VSI info for VF in regards to VLAN table */ + err = hw->mac.ops.update_vlan(hw, vid, vf_info->vsi, + !(vid & FM10K_VLAN_CLEAR)); + } + + if (!err && !!results[FM10K_MAC_VLAN_MSG_MAC]) { + result = results[FM10K_MAC_VLAN_MSG_MAC]; + + /* record unicast MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan); + if (err) + return err; + + /* block attempts to set MAC for a locked device */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac) && + memcmp(mac, vf_info->mac, ETH_ALEN)) + return FM10K_ERR_PARAM; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vlan || (vlan == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vlan |= vf_info->pf_vid; + else + vlan |= vf_info->sw_vid; + } else if (vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* notify switch of request for new unicast address */ + err = hw->mac.ops.update_uc_addr(hw, vf_info->glort, mac, vlan, + !(vlan & FM10K_VLAN_CLEAR), 0); + } + + if (!err && !!results[FM10K_MAC_VLAN_MSG_MULTICAST]) { + result = results[FM10K_MAC_VLAN_MSG_MULTICAST]; + + /* record multicast MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan); + if (err) + return err; + + /* verify that the VF is allowed to request multicast */ + if (!(vf_info->vf_flags & FM10K_VF_FLAG_MULTI_ENABLED)) + return FM10K_ERR_PARAM; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vlan || (vlan == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vlan |= vf_info->pf_vid; + else + vlan |= vf_info->sw_vid; + } else if (vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* notify switch of request for new multicast address */ + err = hw->mac.ops.update_mc_addr(hw, vf_info->glort, mac, + !(vlan & FM10K_VLAN_CLEAR), 0); + } + + return err; +} + +/** + * fm10k_iov_supported_xcast_mode_pf - Determine best match for xcast mode + * @vf_info: VF info structure containing capability flags + * @mode: Requested xcast mode + * + * This function outputs the mode that most closely matches the requested + * mode. 
If no mode matches, it will request that the port be disabled + **/ +STATIC u8 fm10k_iov_supported_xcast_mode_pf(struct fm10k_vf_info *vf_info, + u8 mode) +{ + u8 vf_flags = vf_info->vf_flags; + + /* match up mode to capabilities as best as possible */ + switch (mode) { + case FM10K_XCAST_MODE_PROMISC: + if (vf_flags & FM10K_VF_FLAG_PROMISC_CAPABLE) + return FM10K_XCAST_MODE_PROMISC; + /* fallthrough */ + case FM10K_XCAST_MODE_ALLMULTI: + if (vf_flags & FM10K_VF_FLAG_ALLMULTI_CAPABLE) + return FM10K_XCAST_MODE_ALLMULTI; + /* fallthrough */ + case FM10K_XCAST_MODE_MULTI: + if (vf_flags & FM10K_VF_FLAG_MULTI_CAPABLE) + return FM10K_XCAST_MODE_MULTI; + /* fallthrough */ + case FM10K_XCAST_MODE_NONE: + if (vf_flags & FM10K_VF_FLAG_NONE_CAPABLE) + return FM10K_XCAST_MODE_NONE; + /* fallthrough */ + default: + break; + } + + /* disable interface as it should not be able to request any */ + return FM10K_XCAST_MODE_DISABLE; +} + +/** + * fm10k_iov_msg_lport_state_pf - Message handler for port state requests + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Pointer to mailbox information structure + * + * This function is a default handler for port state requests. The port + * state requests for now are basic and consist of enabling or disabling + * the port. + **/ +s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx; + u32 *result; + s32 err = FM10K_SUCCESS; + u32 msg[2]; + u8 mode = 0; + + DEBUGFUNC("fm10k_iov_msg_lport_state_pf"); + + /* verify VF is allowed to enable even minimal mode */ + if (!(vf_info->vf_flags & FM10K_VF_FLAG_NONE_CAPABLE)) + return FM10K_ERR_PARAM; + + if (!!results[FM10K_LPORT_STATE_MSG_XCAST_MODE]) { + result = results[FM10K_LPORT_STATE_MSG_XCAST_MODE]; + + /* XCAST mode update requested */ + err = fm10k_tlv_attr_get_u8(result, &mode); + if (err) + return FM10K_ERR_PARAM; + + /* prep for possible demotion depending on capabilities */ + mode = fm10k_iov_supported_xcast_mode_pf(vf_info, mode); + + /* if mode is not currently enabled, enable it */ + if (!(FM10K_VF_FLAG_ENABLED(vf_info) & (1 << mode))) + fm10k_update_xcast_mode_pf(hw, vf_info->glort, mode); + + /* swap mode back to a bit flag */ + mode = FM10K_VF_FLAG_SET_MODE(mode); + } else if (!results[FM10K_LPORT_STATE_MSG_DISABLE]) { + /* need to disable the port if it is already enabled */ + if (FM10K_VF_FLAG_ENABLED(vf_info)) + err = fm10k_update_lport_state_pf(hw, vf_info->glort, + 1, false); + + /* when enabling the port we should reset the rate limiters */ + hw->iov.ops.configure_tc(hw, vf_info->vf_idx, vf_info->rate); + + /* set mode for minimal functionality */ + mode = FM10K_VF_FLAG_SET_MODE_NONE; + + /* generate port state response to notify VF it is ready */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + fm10k_tlv_attr_put_bool(msg, FM10K_LPORT_STATE_MSG_READY); + mbx->ops.enqueue_tx(hw, mbx, msg); + } + + /* if enable state toggled note the update */ + if (!err && (!FM10K_VF_FLAG_ENABLED(vf_info) != !mode)) + err = fm10k_update_lport_state_pf(hw, vf_info->glort, 1, + !!mode); + + /* if state change succeeded, then update our stored state */ + mode |= FM10K_VF_FLAG_CAPABLE(vf_info); + if (!err) + vf_info->vf_flags = mode; + + return err; +} + +const struct fm10k_msg_data fm10k_iov_msg_data_pf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MSIX_HANDLER(fm10k_iov_msg_msix_pf), +
FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_iov_msg_mac_vlan_pf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_iov_msg_lport_state_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_update_stats_hw_pf - Updates hardware related statistics of PF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function collects and aggregates global and per queue hardware + * statistics. + **/ +STATIC void fm10k_update_hw_stats_pf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + u32 timeout, ur, ca, um, xec, vlan_drop, loopback_drop, nodesc_drop; + u32 id, id_prev; + + DEBUGFUNC("fm10k_update_hw_stats_pf"); + + /* Use Tx queue 0 as a canary to detect a reset */ + id = FM10K_READ_REG(hw, FM10K_TXQCTL(0)); + + /* Read Global Statistics */ + do { + timeout = fm10k_read_hw_stats_32b(hw, FM10K_STATS_TIMEOUT, + &stats->timeout); + ur = fm10k_read_hw_stats_32b(hw, FM10K_STATS_UR, &stats->ur); + ca = fm10k_read_hw_stats_32b(hw, FM10K_STATS_CA, &stats->ca); + um = fm10k_read_hw_stats_32b(hw, FM10K_STATS_UM, &stats->um); + xec = fm10k_read_hw_stats_32b(hw, FM10K_STATS_XEC, &stats->xec); + vlan_drop = fm10k_read_hw_stats_32b(hw, FM10K_STATS_VLAN_DROP, + &stats->vlan_drop); + loopback_drop = fm10k_read_hw_stats_32b(hw, + FM10K_STATS_LOOPBACK_DROP, + &stats->loopback_drop); + nodesc_drop = fm10k_read_hw_stats_32b(hw, + FM10K_STATS_NODESC_DROP, + &stats->nodesc_drop); + + /* if value has not changed then we have consistent data */ + id_prev = id; + id = FM10K_READ_REG(hw, FM10K_TXQCTL(0)); + } while ((id ^ id_prev) & FM10K_TXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id &= FM10K_TXQCTL_ID_MASK; + id |= FM10K_STAT_VALID; + + /* Update Global Statistics */ + if (stats->stats_idx == id) { + stats->timeout.count += timeout; + stats->ur.count += ur; + stats->ca.count += ca; + stats->um.count += um; + stats->xec.count += xec; + stats->vlan_drop.count += vlan_drop; + stats->loopback_drop.count += loopback_drop; + stats->nodesc_drop.count += nodesc_drop; + } + + /* Update bases and record current PF id */ + fm10k_update_hw_base_32b(&stats->timeout, timeout); + fm10k_update_hw_base_32b(&stats->ur, ur); + fm10k_update_hw_base_32b(&stats->ca, ca); + fm10k_update_hw_base_32b(&stats->um, um); + fm10k_update_hw_base_32b(&stats->xec, xec); + fm10k_update_hw_base_32b(&stats->vlan_drop, vlan_drop); + fm10k_update_hw_base_32b(&stats->loopback_drop, loopback_drop); + fm10k_update_hw_base_32b(&stats->nodesc_drop, nodesc_drop); + stats->stats_idx = id; + + /* Update Queue Statistics */ + fm10k_update_hw_stats_q(hw, stats->q, 0, hw->mac.max_queues); +} + +/** + * fm10k_rebind_hw_stats_pf - Resets base for hardware statistics of PF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function resets the base for global and per queue hardware + * statistics. 
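The stats loop in fm10k_update_hw_stats_pf() above uses the queue ID field of TXQCTL(0) as a reset canary: the counters are only trusted if the ID read before and after the batch is unchanged. The same idiom, reduced to a standalone sketch with hypothetical register-read stand-ins for FM10K_READ_REG():

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins; in the driver the canary is the ID field of
 * TXQCTL(0), which the stats code re-reads to detect a reset. */
static uint32_t read_canary(void)  { return 0x5; }
static uint32_t read_counter(void) { return 1234; }

/* Re-read until the canary is unchanged, so the counter snapshot is known
 * not to span a device reset. */
static uint32_t read_counter_consistent(void)
{
        uint32_t before, after = read_canary(), val;

        do {
                before = after;
                val = read_counter();
                after = read_canary();
        } while (before != after);

        return val;
}

int main(void)
{
        printf("%u\n", (unsigned)read_counter_consistent());
        return 0;
}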
+ **/ +STATIC void fm10k_rebind_hw_stats_pf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_rebind_hw_stats_pf"); + + /* Unbind Global Statistics */ + fm10k_unbind_hw_stats_32b(&stats->timeout); + fm10k_unbind_hw_stats_32b(&stats->ur); + fm10k_unbind_hw_stats_32b(&stats->ca); + fm10k_unbind_hw_stats_32b(&stats->um); + fm10k_unbind_hw_stats_32b(&stats->xec); + fm10k_unbind_hw_stats_32b(&stats->vlan_drop); + fm10k_unbind_hw_stats_32b(&stats->loopback_drop); + fm10k_unbind_hw_stats_32b(&stats->nodesc_drop); + + /* Unbind Queue Statistics */ + fm10k_unbind_hw_stats_q(stats->q, 0, hw->mac.max_queues); + + /* Reinitialize bases for all stats */ + fm10k_update_hw_stats_pf(hw, stats); +} + +/** + * fm10k_set_dma_mask_pf - Configures PhyAddrSpace to limit DMA to system + * @hw: pointer to hardware structure + * @dma_mask: 64 bit DMA mask required for platform + * + * This function sets the PHYADDR.PhyAddrSpace bits for the endpoint in order + * to limit the access to memory beyond what is physically in the system. + **/ +STATIC void fm10k_set_dma_mask_pf(struct fm10k_hw *hw, u64 dma_mask) +{ + /* we need to write the upper 32 bits of DMA mask to PhyAddrSpace */ + u32 phyaddr = (u32)(dma_mask >> 32); + + DEBUGFUNC("fm10k_set_dma_mask_pf"); + + FM10K_WRITE_REG(hw, FM10K_PHYADDR, phyaddr); +} + +/** + * fm10k_get_fault_pf - Record a fault in one of the interface units + * @hw: pointer to hardware structure + * @type: pointer to fault type register offset + * @fault: pointer to memory location to record the fault + * + * Record the fault register contents to the fault data structure and + * clear the entry from the register. + * + * Returns ERR_PARAM if invalid register is specified or no error is present. + **/ +STATIC s32 fm10k_get_fault_pf(struct fm10k_hw *hw, int type, + struct fm10k_fault *fault) +{ + u32 func; + + DEBUGFUNC("fm10k_get_fault_pf"); + + /* verify the fault register is in range and is aligned */ + switch (type) { + case FM10K_PCA_FAULT: + case FM10K_THI_FAULT: + case FM10K_FUM_FAULT: + break; + default: + return FM10K_ERR_PARAM; + } + + /* only service faults that are valid */ + func = FM10K_READ_REG(hw, type + FM10K_FAULT_FUNC); + if (!(func & FM10K_FAULT_FUNC_VALID)) + return FM10K_ERR_PARAM; + + /* read remaining fields */ + fault->address = FM10K_READ_REG(hw, type + FM10K_FAULT_ADDR_HI); + fault->address <<= 32; + fault->address = FM10K_READ_REG(hw, type + FM10K_FAULT_ADDR_LO); + fault->specinfo = FM10K_READ_REG(hw, type + FM10K_FAULT_SPECINFO); + + /* clear valid bit to allow for next error */ + FM10K_WRITE_REG(hw, type + FM10K_FAULT_FUNC, FM10K_FAULT_FUNC_VALID); + + /* Record which function triggered the error */ + if (func & FM10K_FAULT_FUNC_PF) + fault->func = 0; + else + fault->func = 1 + ((func & FM10K_FAULT_FUNC_VF_MASK) >> + FM10K_FAULT_FUNC_VF_SHIFT); + + /* record fault type */ + fault->type = func & FM10K_FAULT_FUNC_TYPE_MASK; + + return FM10K_SUCCESS; +} + +/** + * fm10k_request_lport_map_pf - Request LPORT map from the switch API + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_request_lport_map_pf(struct fm10k_hw *hw) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[1]; + + DEBUGFUNC("fm10k_request_lport_pf"); + + /* issue request asking for LPORT map */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_LPORT_MAP); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_get_host_state_pf - Returns the state of the switch and mailbox + * @hw: pointer to hardware structure + * 
@switch_ready: pointer to boolean value that will record switch state + * + * This function will check the DMA_CTRL2 register and mailbox in order + * to determine if the switch is ready for the PF to begin requesting + * addresses and mapping traffic to the local interface. + **/ +STATIC s32 fm10k_get_host_state_pf(struct fm10k_hw *hw, bool *switch_ready) +{ + s32 ret_val = FM10K_SUCCESS; + u32 dma_ctrl2; + + DEBUGFUNC("fm10k_get_host_state_pf"); + + /* verify the switch is ready for interaction */ + dma_ctrl2 = FM10K_READ_REG(hw, FM10K_DMA_CTRL2); + if (!(dma_ctrl2 & FM10K_DMA_CTRL2_SWITCH_READY)) + goto out; + + /* retrieve generic host state info */ + ret_val = fm10k_get_host_state_generic(hw, switch_ready); + if (ret_val) + goto out; + + /* interface cannot receive traffic without logical ports */ + if (hw->mac.dglort_map == FM10K_DGLORTMAP_NONE) + ret_val = fm10k_request_lport_map_pf(hw); + +out: + return ret_val; +} + +/* This structure defines the attributes to be parsed below */ +const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_PF_ATTR_ID_LPORT_MAP), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_lport_map_pf - Message handler for lport_map message from SM + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler configures the lport mapping based on the reply from the + * switch API. + **/ +s32 fm10k_msg_lport_map_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u16 glort, mask; + u32 dglort_map; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_lport_map_pf"); + + err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_LPORT_MAP], + &dglort_map); + if (err) + return err; + + /* extract values out of the header */ + glort = FM10K_MSG_HDR_FIELD_GET(dglort_map, LPORT_MAP_GLORT); + mask = FM10K_MSG_HDR_FIELD_GET(dglort_map, LPORT_MAP_MASK); + + /* verify mask is set and none of the masked bits in glort are set */ + if (!mask || (glort & ~mask)) + return FM10K_ERR_PARAM; + + /* verify the mask is contiguous, and that it is 1's followed by 0's */ + if (((~(mask - 1) & mask) + mask) & FM10K_DGLORTMAP_NONE) + return FM10K_ERR_PARAM; + + /* record the glort, mask, and port count */ + hw->mac.dglort_map = dglort_map; + + return FM10K_SUCCESS; +} + +const struct fm10k_tlv_attr fm10k_update_pvid_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_PF_ATTR_ID_UPDATE_PVID), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_update_pvid_pf - Message handler for port VLAN message from SM + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler configures the default VLAN for the PF + **/ +s32 fm10k_msg_update_pvid_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u16 glort, pvid; + u32 pvid_update; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_update_pvid_pf"); + + err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_UPDATE_PVID], + &pvid_update); + if (err) + return err; + + /* extract values from the pvid update */ + glort = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_GLORT); + pvid = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_PVID); + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* verify VID is valid */ + if (pvid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* record the port VLAN ID
value */ + hw->mac.default_vid = pvid; + + return FM10K_SUCCESS; +} + +/** + * fm10k_record_global_table_data - Move global table data to swapi table info + * @from: pointer to source table data structure + * @to: pointer to destination table info structure + * + * This function is will copy table_data to the table_info contained in + * the hw struct. + **/ +static void fm10k_record_global_table_data(struct fm10k_global_table_data *from, + struct fm10k_swapi_table_info *to) +{ + /* convert from le32 struct to CPU byte ordered values */ + to->used = FM10K_LE32_TO_CPU(from->used); + to->avail = FM10K_LE32_TO_CPU(from->avail); +} + +const struct fm10k_tlv_attr fm10k_err_msg_attr[] = { + FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_ERR, + sizeof(struct fm10k_swapi_error)), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_err_pf - Message handler for error reply + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler will capture the data for any error replies to previous + * messages that the PF has sent. + **/ +s32 fm10k_msg_err_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_swapi_error err_msg; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_err_pf"); + + /* extract structure from message */ + err = fm10k_tlv_attr_get_le_struct(results[FM10K_PF_ATTR_ID_ERR], + &err_msg, sizeof(err_msg)); + if (err) + return err; + + /* record table status */ + fm10k_record_global_table_data(&err_msg.mac, &hw->swapi.mac); + fm10k_record_global_table_data(&err_msg.nexthop, &hw->swapi.nexthop); + fm10k_record_global_table_data(&err_msg.ffu, &hw->swapi.ffu); + + /* record SW API status value */ + hw->swapi.status = FM10K_LE32_TO_CPU(err_msg.status); + + return FM10K_SUCCESS; +} + +const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[] = { + FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_1588_TIMESTAMP, + sizeof(struct fm10k_swapi_1588_timestamp)), + FM10K_TLV_ATTR_LAST +}; + +/* currently there is no shared 1588 timestamp handler */ + +/** + * fm10k_request_tx_timestamp_mode_pf - Request a specific Tx timestamping mode + * @hw: pointer to hardware structure + * @glort: base resource tag for this request + * @mode: integer value indicating the requested mode + * + * This function will attempt to request a specific timestamp mode for the + * port so that it can receive Tx timestamp messages. 
+ **/ +STATIC s32 fm10k_request_tx_timestamp_mode_pf(struct fm10k_hw *hw, + u16 glort, + u8 mode) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3], timestamp_mode; + + DEBUGFUNC("fm10k_request_timestamp_mode_pf"); + + if (mode > FM10K_TIMESTAMP_MODE_PEP_TO_ANY) + return FM10K_ERR_PARAM; + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* write timestamp mode as a single u32 value, + * lower 16 bits: glort + * upper 16 bits: mode + */ + timestamp_mode = ((u32)mode << 16) | glort; + + /* generate message requesting change to xcast mode */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_TX_TIMESTAMP_MODE); + fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_TIMESTAMP_MODE_REQ, timestamp_mode); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_adjust_systime_pf - Adjust systime frequency + * @hw: pointer to hardware structure + * @ppb: adjustment rate in parts per billion + * + * This function will adjust the SYSTIME_CFG register contained in BAR 4 + * if this function is supported for BAR 4 access. The adjustment amount + * is based on the parts per billion value provided and adjusted to a + * value based on parts per 2^48 clock cycles. + * + * If adjustment is not supported or the requested value is too large + * we will return an error. + **/ +STATIC s32 fm10k_adjust_systime_pf(struct fm10k_hw *hw, s32 ppb) +{ + u64 systime_adjust; + + DEBUGFUNC("fm10k_adjust_systime_vf"); + + /* if sw_addr is not set we don't have switch register access */ + if (!hw->sw_addr) + return ppb ? FM10K_ERR_PARAM : FM10K_SUCCESS; + + /* we must convert the value from parts per billion to parts per + * 2^48 cycles. In addition I have opted to only use the 30 most + * significant bits of the adjustment value as the 8 least + * significant bits are located in another register and represent + * a value significantly less than a part per billion, the result + * of dropping the 8 least significant bits is that the adjustment + * value is effectively multiplied by 2^8 when we write it. + * + * As a result of all this the math for this breaks down as follows: + * ppb / 10^9 == adjust * 2^8 / 2^48 + * If we solve this for adjust, and simplify it comes out as: + * ppb * 2^31 / 5^9 == adjust + */ + systime_adjust = (ppb < 0) ? -ppb : ppb; + systime_adjust <<= 31; + do_div(systime_adjust, 1953125); + + /* verify the requested adjustment value is in range */ + if (systime_adjust > FM10K_SW_SYSTIME_ADJUST_MASK) + return FM10K_ERR_PARAM; + + if (ppb < 0) + systime_adjust |= FM10K_SW_SYSTIME_ADJUST_DIR_NEGATIVE; + + FM10K_WRITE_SW_REG(hw, FM10K_SW_SYSTIME_ADJUST, (u32)systime_adjust); + + return FM10K_SUCCESS; +} + +/** + * fm10k_read_systime_pf - Reads value of systime registers + * @hw: pointer to the hardware structure + * + * Function reads the content of 2 registers, combined to represent a 64 bit + * value measured in nanosecods. In order to guarantee the value is accurate + * we check the 32 most significant bits both before and after reading the + * 32 least significant bits to verify they didn't change as we were reading + * the registers. 
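The ppb-to-hardware-units conversion in fm10k_adjust_systime_pf() above (adjust = ppb * 2^31 / 5^9) follows from ppb / 10^9 == adjust * 2^8 / 2^48 once the low 8 bits of the adjustment are dropped; the arithmetic is easy to check with a standalone sketch:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        int32_t ppb = 1000;     /* requested frequency adjustment */
        uint64_t adjust = (uint64_t)(ppb < 0 ? -ppb : ppb);

        /* ppb / 10^9 == adjust * 2^8 / 2^48  =>  adjust = ppb * 2^31 / 5^9 */
        adjust <<= 31;
        adjust /= 1953125;      /* 5^9 */

        /* 1000 ppb -> 1099511 */
        printf("adjust = %llu\n", (unsigned long long)adjust);
        return 0;
}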
+ **/ +static u64 fm10k_read_systime_pf(struct fm10k_hw *hw) +{ + u32 systime_l, systime_h, systime_tmp; + + systime_h = fm10k_read_reg(hw, FM10K_SYSTIME + 1); + + do { + systime_tmp = systime_h; + systime_l = fm10k_read_reg(hw, FM10K_SYSTIME); + systime_h = fm10k_read_reg(hw, FM10K_SYSTIME + 1); + } while (systime_tmp != systime_h); + + return ((u64)systime_h << 32) | systime_l; +} + +static const struct fm10k_msg_data fm10k_msg_data_pf[] = { + FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf), + FM10K_PF_MSG_LPORT_MAP_HANDLER(fm10k_msg_lport_map_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_CREATE, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_DELETE, fm10k_msg_err_pf), + FM10K_PF_MSG_UPDATE_PVID_HANDLER(fm10k_msg_update_pvid_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_init_ops_pf - Inits func ptrs and MAC type + * @hw: pointer to hardware structure + * + * Initialize the function pointers and assign the MAC type for PF. + * Does not touch the hardware. + **/ +s32 fm10k_init_ops_pf(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + struct fm10k_iov_info *iov = &hw->iov; + + DEBUGFUNC("fm10k_init_ops_pf"); + + fm10k_init_ops_generic(hw); + + mac->ops.reset_hw = &fm10k_reset_hw_pf; + mac->ops.init_hw = &fm10k_init_hw_pf; + mac->ops.start_hw = &fm10k_start_hw_generic; + mac->ops.stop_hw = &fm10k_stop_hw_generic; + mac->ops.is_slot_appropriate = &fm10k_is_slot_appropriate_pf; + mac->ops.update_vlan = &fm10k_update_vlan_pf; + mac->ops.read_mac_addr = &fm10k_read_mac_addr_pf; + mac->ops.update_uc_addr = &fm10k_update_uc_addr_pf; + mac->ops.update_mc_addr = &fm10k_update_mc_addr_pf; + mac->ops.update_xcast_mode = &fm10k_update_xcast_mode_pf; + mac->ops.update_int_moderator = &fm10k_update_int_moderator_pf; + mac->ops.update_lport_state = &fm10k_update_lport_state_pf; + mac->ops.update_hw_stats = &fm10k_update_hw_stats_pf; + mac->ops.rebind_hw_stats = &fm10k_rebind_hw_stats_pf; + mac->ops.configure_dglort_map = &fm10k_configure_dglort_map_pf; + mac->ops.set_dma_mask = &fm10k_set_dma_mask_pf; + mac->ops.get_fault = &fm10k_get_fault_pf; + mac->ops.get_host_state = &fm10k_get_host_state_pf; + mac->ops.adjust_systime = &fm10k_adjust_systime_pf; + mac->ops.read_systime = &fm10k_read_systime_pf; + mac->ops.request_tx_timestamp_mode = &fm10k_request_tx_timestamp_mode_pf; + + mac->max_msix_vectors = fm10k_get_pcie_msix_count_generic(hw); + + iov->ops.assign_resources = &fm10k_iov_assign_resources_pf; + iov->ops.configure_tc = &fm10k_iov_configure_tc_pf; + iov->ops.assign_int_moderator = &fm10k_iov_assign_int_moderator_pf; + iov->ops.assign_default_mac_vlan = fm10k_iov_assign_default_mac_vlan_pf; + iov->ops.reset_resources = &fm10k_iov_reset_resources_pf; + iov->ops.set_lport = &fm10k_iov_set_lport_pf; + iov->ops.reset_lport = &fm10k_iov_reset_lport_pf; + iov->ops.update_stats = &fm10k_iov_update_stats_pf; + iov->ops.report_timestamp = &fm10k_iov_report_timestamp_pf; + + return fm10k_sm_mbx_init(hw, &hw->mbx, fm10k_msg_data_pf); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_pf.h b/lib/librte_pmd_fm10k/base/fm10k_pf.h new file mode 100644 index 0000000..f6c290a --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_pf.h @@ -0,0 +1,155 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. 
+ +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_PF_H_ +#define _FM10K_PF_H_ + +#include "fm10k_type.h" +#include "fm10k_common.h" + +bool fm10k_glort_valid_pf(struct fm10k_hw *hw, u16 glort); +u16 fm10k_queues_per_pool(struct fm10k_hw *hw); +u16 fm10k_vf_queue_index(struct fm10k_hw *hw, u16 vf_idx); + +enum fm10k_pf_tlv_msg_id_v1 { + FM10K_PF_MSG_ID_TEST = 0x000, /* msg ID reserved */ + FM10K_PF_MSG_ID_XCAST_MODES = 0x001, + FM10K_PF_MSG_ID_UPDATE_MAC_FWD_RULE = 0x002, + FM10K_PF_MSG_ID_LPORT_MAP = 0x100, + FM10K_PF_MSG_ID_LPORT_CREATE = 0x200, + FM10K_PF_MSG_ID_LPORT_DELETE = 0x201, + FM10K_PF_MSG_ID_CONFIG = 0x300, + FM10K_PF_MSG_ID_UPDATE_PVID = 0x400, + FM10K_PF_MSG_ID_CREATE_FLOW_TABLE = 0x501, + FM10K_PF_MSG_ID_DELETE_FLOW_TABLE = 0x502, + FM10K_PF_MSG_ID_UPDATE_FLOW = 0x503, + FM10K_PF_MSG_ID_DELETE_FLOW = 0x504, + FM10K_PF_MSG_ID_SET_FLOW_STATE = 0x505, + FM10K_PF_MSG_ID_GET_1588_INFO = 0x506, + FM10K_PF_MSG_ID_1588_TIMESTAMP = 0x701, + FM10K_PF_MSG_ID_TX_TIMESTAMP_MODE = 0x702, +}; + +enum fm10k_pf_tlv_attr_id_v1 { + FM10K_PF_ATTR_ID_ERR = 0x00, + FM10K_PF_ATTR_ID_LPORT_MAP = 0x01, + FM10K_PF_ATTR_ID_XCAST_MODE = 0x02, + FM10K_PF_ATTR_ID_MAC_UPDATE = 0x03, + FM10K_PF_ATTR_ID_VLAN_UPDATE = 0x04, + FM10K_PF_ATTR_ID_CONFIG = 0x05, + FM10K_PF_ATTR_ID_CREATE_FLOW_TABLE = 0x06, + FM10K_PF_ATTR_ID_DELETE_FLOW_TABLE = 0x07, + FM10K_PF_ATTR_ID_UPDATE_FLOW = 0x08, + FM10K_PF_ATTR_ID_FLOW_STATE = 0x09, + FM10K_PF_ATTR_ID_FLOW_HANDLE = 0x0A, + FM10K_PF_ATTR_ID_DELETE_FLOW = 0x0B, + FM10K_PF_ATTR_ID_PORT = 0x0C, + FM10K_PF_ATTR_ID_UPDATE_PVID = 0x0D, + FM10K_PF_ATTR_ID_1588_TIMESTAMP = 0x10, + FM10K_PF_ATTR_ID_TIMESTAMP_MODE_REQ = 0x11, + FM10K_PF_ATTR_ID_TIMESTAMP_MODE_RESP = 0x12, +}; + +#define FM10K_MSG_LPORT_MAP_GLORT_SHIFT 0 +#define FM10K_MSG_LPORT_MAP_GLORT_SIZE 16 +#define FM10K_MSG_LPORT_MAP_MASK_SHIFT 16 +#define FM10K_MSG_LPORT_MAP_MASK_SIZE 16 + +#define FM10K_MSG_UPDATE_PVID_GLORT_SHIFT 0 +#define FM10K_MSG_UPDATE_PVID_GLORT_SIZE 16 +#define FM10K_MSG_UPDATE_PVID_PVID_SHIFT 16 +#define 
FM10K_MSG_UPDATE_PVID_PVID_SIZE 16 + +struct fm10k_mac_update { + __le32 mac_lower; + __le16 mac_upper; + __le16 vlan; + __le16 glort; + u8 flags; + u8 action; +}; + +struct fm10k_global_table_data { + __le32 used; + __le32 avail; +}; + +struct fm10k_swapi_error { + __le32 status; + struct fm10k_global_table_data mac; + struct fm10k_global_table_data nexthop; + struct fm10k_global_table_data ffu; +}; + +struct fm10k_swapi_1588_timestamp { + __le64 egress; + __le64 ingress; + __le16 dglort; + __le16 sglort; +}; + +#define FM10K_PF_MSG_LPORT_CREATE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_CREATE, NULL, func) +#define FM10K_PF_MSG_LPORT_DELETE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_DELETE, NULL, func) +s32 fm10k_msg_lport_map_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[]; +#define FM10K_PF_MSG_LPORT_MAP_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_MAP, \ + fm10k_lport_map_msg_attr, func) +s32 fm10k_msg_update_pvid_pf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_update_pvid_msg_attr[]; +#define FM10K_PF_MSG_UPDATE_PVID_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_UPDATE_PVID, \ + fm10k_update_pvid_msg_attr, func) + +s32 fm10k_msg_err_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_err_msg_attr[]; +#define FM10K_PF_MSG_ERR_HANDLER(msg, func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_##msg, fm10k_err_msg_attr, func) + +extern const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[]; +#define FM10K_PF_MSG_1588_TIMESTAMP_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_1588_TIMESTAMP, \ + fm10k_1588_timestamp_msg_attr, func) + +s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +extern const struct fm10k_msg_data fm10k_iov_msg_data_pf[]; + +s32 fm10k_init_ops_pf(struct fm10k_hw *hw); +#endif /* _FM10K_PF_H */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_tlv.c b/lib/librte_pmd_fm10k/base/fm10k_tlv.c new file mode 100644 index 0000000..1d9d7d8 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_tlv.c @@ -0,0 +1,914 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_tlv.h" + +/** + * fm10k_tlv_msg_init - Initialize message block for TLV data storage + * @msg: Pointer to message block + * @msg_id: Message ID indicating message type + * + * This function return success if provided with a valid message pointer + **/ +s32 fm10k_tlv_msg_init(u32 *msg, u16 msg_id) +{ + DEBUGFUNC("fm10k_tlv_msg_init"); + + /* verify pointer is not NULL */ + if (!msg) + return FM10K_ERR_PARAM; + + *msg = (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT) | msg_id; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_null_string - Place null terminated string on message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @string: Pointer to string to be stored in attribute + * + * This function will reorder a string to be CPU endian and store it in + * the attribute buffer. It will return success if provided with a valid + * pointers. + **/ +s32 fm10k_tlv_attr_put_null_string(u32 *msg, u16 attr_id, + const unsigned char *string) +{ + u32 attr_data = 0, len = 0; + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_null_string"); + + /* verify pointers are not NULL */ + if (!string || !msg) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* copy string into local variable and then write to msg */ + do { + /* write data to message */ + if (len && !(len % 4)) { + attr[len / 4] = attr_data; + attr_data = 0; + } + + /* record character to offset location */ + attr_data |= (u32)(*string) << (8 * (len % 4)); + len++; + + /* test for NULL and then increment */ + } while (*(string++)); + + /* write last piece of data to message */ + attr[(len + 3) / 4] = attr_data; + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_null_string - Get null terminated string from attribute + * @attr: Pointer to attribute + * @string: Pointer to location of destination string + * + * This function pulls the string back out of the attribute and will place + * it in the array pointed by by string. It will return success if provided + * with a valid pointers. + **/ +s32 fm10k_tlv_attr_get_null_string(u32 *attr, unsigned char *string) +{ + u32 len; + + DEBUGFUNC("fm10k_tlv_attr_get_null_string"); + + /* verify pointers are not NULL */ + if (!string || !attr) + return FM10K_ERR_PARAM; + + len = *attr >> FM10K_TLV_LEN_SHIFT; + attr++; + + while (len--) + string[len] = (u8)(attr[len / 4] >> (8 * (len % 4))); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_mac_vlan - Store MAC/VLAN attribute in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @mac_addr: MAC address to be stored + * + * This function will reorder a MAC address to be CPU endian and store it + * in the attribute buffer. 
It will return success if provided with a + * valid pointers. + **/ +s32 fm10k_tlv_attr_put_mac_vlan(u32 *msg, u16 attr_id, + const u8 *mac_addr, u16 vlan) +{ + u32 len = ETH_ALEN << FM10K_TLV_LEN_SHIFT; + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_mac_vlan"); + + /* verify pointers are not NULL */ + if (!msg || !mac_addr) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* record attribute header, update message length */ + attr[0] = len | attr_id; + + /* copy value into local variable and then write to msg */ + attr[1] = FM10K_LE32_TO_CPU(*(const __le32 *)&mac_addr[0]); + attr[2] = FM10K_LE16_TO_CPU(*(const __le16 *)&mac_addr[4]); + attr[2] |= (u32)vlan << 16; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_mac_vlan - Get MAC/VLAN stored in attribute + * @attr: Pointer to attribute + * @attr_id: Attribute ID + * @mac_addr: location of buffer to store MAC address + * + * This function pulls the MAC address back out of the attribute and will + * place it in the array pointed by by mac_addr. It will return success + * if provided with a valid pointers. + **/ +s32 fm10k_tlv_attr_get_mac_vlan(u32 *attr, u8 *mac_addr, u16 *vlan) +{ + DEBUGFUNC("fm10k_tlv_attr_get_mac_vlan"); + + /* verify pointers are not NULL */ + if (!mac_addr || !attr) + return FM10K_ERR_PARAM; + + *(__le32 *)&mac_addr[0] = FM10K_CPU_TO_LE32(attr[1]); + *(__le16 *)&mac_addr[4] = FM10K_CPU_TO_LE16((u16)(attr[2])); + *vlan = (u16)(attr[2] >> 16); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_bool - Add header indicating value "true" + * @msg: Pointer to message block + * @attr_id: Attribute ID + * + * This function will simply add an attribute header, the fact + * that the header is here means the attribute value is true, else + * it is false. The function will return success if provided with a + * valid pointers. + **/ +s32 fm10k_tlv_attr_put_bool(u32 *msg, u16 attr_id) +{ + DEBUGFUNC("fm10k_tlv_attr_put_bool"); + + /* verify pointers are not NULL */ + if (!msg) + return FM10K_ERR_PARAM; + + /* record attribute header */ + msg[FM10K_TLV_DWORD_LEN(*msg)] = attr_id; + + /* add header length to length */ + *msg += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_value - Store integer value attribute in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @value: Value to be written + * @len: Size of value + * + * This function will place an integer value of up to 8 bytes in size + * in a message attribute. The function will return success provided + * that msg is a valid pointer, and len is 1, 2, 4, or 8. 
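As a concrete illustration of the payload layout produced by fm10k_tlv_attr_put_mac_vlan() above, the first four MAC bytes land in one 32-bit word and the last two share the next word with the VLAN ID; a little-endian host is assumed in this sketch:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        const uint8_t mac[6] = { 0x00, 0x1b, 0x21, 0xaa, 0xbb, 0xcc };
        uint16_t vlan = 100;
        uint32_t word1, word2 = 0;

        memcpy(&word1, &mac[0], 4);        /* attr[1]: first four bytes   */
        memcpy(&word2, &mac[4], 2);        /* attr[2]: last two bytes ... */
        word2 |= (uint32_t)vlan << 16;     /* ... plus the VLAN in 31:16  */

        /* -> attr[1]=0xaa211b00 attr[2]=0x0064ccbb on little-endian */
        printf("attr[1]=0x%08x attr[2]=0x%08x\n",
               (unsigned)word1, (unsigned)word2);
        return 0;
}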
+ **/ +s32 fm10k_tlv_attr_put_value(u32 *msg, u16 attr_id, s64 value, u32 len) +{ + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_value"); + + /* verify non-null msg and len is 1, 2, 4, or 8 */ + if (!msg || !len || len > 8 || (len & (len - 1))) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + if (len < 4) { + attr[1] = (u32)value & ((0x1ul << (8 * len)) - 1); + } else { + attr[1] = (u32)value; + if (len > 4) + attr[2] = (u32)(value >> 32); + } + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_value - Get integer value stored in attribute + * @attr: Pointer to attribute + * @value: Pointer to destination buffer + * @len: Size of value + * + * This function will place an integer value of up to 8 bytes in size + * at the location pointed to by value. The function will return success + * provided that pointers are valid and the len value matches the + * attribute length. + **/ +s32 fm10k_tlv_attr_get_value(u32 *attr, void *value, u32 len) +{ + DEBUGFUNC("fm10k_tlv_attr_get_value"); + + /* verify pointers are not NULL */ + if (!attr || !value) + return FM10K_ERR_PARAM; + + if ((*attr >> FM10K_TLV_LEN_SHIFT) != len) + return FM10K_ERR_PARAM; + + if (len == 8) + *(u64 *)value = ((u64)attr[2] << 32) | attr[1]; + else if (len == 4) + *(u32 *)value = attr[1]; + else if (len == 2) + *(u16 *)value = (u16)attr[1]; + else + *(u8 *)value = (u8)attr[1]; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_le_struct - Store little endian structure in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @le_struct: Pointer to structure to be written + * @len: Size of le_struct + * + * This function will place a little endian structure value in a message + * attribute. The function will return success provided that all pointers + * are valid and length is a non-zero multiple of 4. + **/ +s32 fm10k_tlv_attr_put_le_struct(u32 *msg, u16 attr_id, + const void *le_struct, u32 len) +{ + const __le32 *le32_ptr = (const __le32 *)le_struct; + u32 *attr; + u32 i; + + DEBUGFUNC("fm10k_tlv_attr_put_le_struct"); + + /* verify non-null msg and len is in 32 bit words */ + if (!msg || !len || (len % 4)) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* copy le32 structure into host byte order at 32b boundaries */ + for (i = 0; i < (len / 4); i++) + attr[i + 1] = FM10K_LE32_TO_CPU(le32_ptr[i]); + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_le_struct - Get little endian struct from attribute + * @attr: Pointer to attribute + * @le_struct: Pointer to structure to be written + * @len: Size of structure + * + * This function will place a little endian structure in the buffer + * pointed to by le_struct. The function will return success + * provided that pointers are valid and the len value matches the + * attribute length.
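+ *
+ * Editor's note (illustrative sketch, not part of the patch): the put/get
+ * pair round-trips a structure as long as both sides use the same byte
+ * length, e.g. with the 8-byte test vector defined later in this file:
+ *
+ *	__le32 out[2];
+ *	fm10k_tlv_attr_put_le_struct(msg, FM10K_TEST_MSG_LE_STRUCT,
+ *				     test_le, sizeof(test_le));
+ *	fm10k_tlv_attr_get_le_struct(results[FM10K_TEST_MSG_LE_STRUCT],
+ *				     out, sizeof(out));
+ *
+ * where results[] comes from fm10k_tlv_attr_parse() on the receiving side;
+ * a length mismatch makes the get call return FM10K_ERR_PARAM.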
+ **/ +s32 fm10k_tlv_attr_get_le_struct(u32 *attr, void *le_struct, u32 len) +{ + __le32 *le32_ptr = (__le32 *)le_struct; + u32 i; + + DEBUGFUNC("fm10k_tlv_attr_get_le_struct"); + + /* verify pointers are not NULL */ + if (!le_struct || !attr) + return FM10K_ERR_PARAM; + + if ((*attr >> FM10K_TLV_LEN_SHIFT) != len) + return FM10K_ERR_PARAM; + + attr++; + + for (i = 0; len; i++, len -= 4) + le32_ptr[i] = FM10K_CPU_TO_LE32(attr[i]); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_nest_start - Start a set of nested attributes + * @msg: Pointer to message block + * @attr_id: Attribute ID + * + * This function will mark off a new nested region for encapsulating + * a given set of attributes. The idea is that if you wish to place a secondary + * structure within the message, this mechanism allows for that. The + * function will return NULL on failure, and a pointer to the start + * of the nested attributes on success. + **/ +u32 *fm10k_tlv_attr_nest_start(u32 *msg, u16 attr_id) +{ + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_nest_start"); + + /* verify pointer is not NULL */ + if (!msg) + return NULL; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + attr[0] = attr_id; + + /* return pointer to nest header */ + return attr; +} + +/** + * fm10k_tlv_attr_nest_stop - Stop a set of nested attributes + * @msg: Pointer to message block + * + * This function closes off an existing set of nested attributes. The + * message pointer should be pointing to the parent of the nest. So in + * the case of a nest within the nest this would be the outer nest pointer. + * This function will return success provided all pointers are valid. + **/ +s32 fm10k_tlv_attr_nest_stop(u32 *msg) +{ + u32 *attr; + u32 len; + + DEBUGFUNC("fm10k_tlv_attr_nest_stop"); + + /* verify pointer is not NULL */ + if (!msg) + return FM10K_ERR_PARAM; + + /* locate the nested header and retrieve its length */ + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + len = (attr[0] >> FM10K_TLV_LEN_SHIFT) << FM10K_TLV_LEN_SHIFT; + + /* only include nest if data was added to it */ + if (len) { + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += len; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_validate - Validate attribute metadata + * @attr: Pointer to attribute + * @tlv_attr: Type and length info for attribute + * + * This function does some basic validation of the input TLV. It + * verifies the length, and in the case of null terminated strings + * it verifies that the last byte is null. The function will + * return FM10K_ERR_PARAM if any attribute is malformed, otherwise + * it returns 0.
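+ *
+ * Editor's note (illustrative, not part of the patch): each tlv_attr entry
+ * describes one attribute ID, e.g. FM10K_TLV_ATTR_U32(FM10K_TEST_MSG_U32)
+ * expands to { FM10K_TEST_MSG_U32, FM10K_TLV_UNSIGNED, 4 }, so an incoming
+ * attribute with that ID is rejected unless its length field is exactly 4.
+ * The table must be sorted by ascending ID and terminated with
+ * FM10K_TLV_ATTR_LAST for the linear search in this function to stop.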
+ **/ +STATIC s32 fm10k_tlv_attr_validate(u32 *attr, + const struct fm10k_tlv_attr *tlv_attr) +{ + u32 attr_id = *attr & FM10K_TLV_ID_MASK; + u16 len = *attr >> FM10K_TLV_LEN_SHIFT; + + DEBUGFUNC("fm10k_tlv_attr_validate"); + + /* verify this is an attribute and not a message */ + if (*attr & (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT)) + return FM10K_ERR_PARAM; + + /* search through the list of attributes to find a matching ID */ + while (tlv_attr->id < attr_id) + tlv_attr++; + + /* if didn't find a match then we should exit */ + if (tlv_attr->id != attr_id) + return FM10K_NOT_IMPLEMENTED; + + /* move to start of attribute data */ + attr++; + + switch (tlv_attr->type) { + case FM10K_TLV_NULL_STRING: + if (!len || + (attr[(len - 1) / 4] & (0xFF << (8 * ((len - 1) % 4))))) + return FM10K_ERR_PARAM; + if (len > tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_MAC_ADDR: + if (len != ETH_ALEN) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_BOOL: + if (len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_UNSIGNED: + case FM10K_TLV_SIGNED: + if (len != tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_LE_STRUCT: + /* struct must be 4 byte aligned */ + if ((len % 4) || len != tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_NESTED: + /* nested attributes must be 4 byte aligned */ + if (len % 4) + return FM10K_ERR_PARAM; + break; + default: + /* attribute id is mapped to bad value */ + return FM10K_ERR_PARAM; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_parse - Parses stream of attribute data + * @attr: Pointer to attribute list + * @results: Pointer array to store pointers to attributes + * @tlv_attr: Type and length info for attributes + * + * This function validates a stream of attributes and parses them + * up into an array of pointers stored in results. The function will + * return FM10K_ERR_PARAM on any input or message error, + * FM10K_NOT_IMPLEMENTED for any attribute that is outside of the array + * and 0 on success. 
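+ *
+ * Editor's note (illustrative sketch, not part of the patch): a typical
+ * receive path parses into a fixed-size pointer array and then fetches
+ * only the attributes it cares about, e.g.
+ *
+ *	u32 *results[FM10K_TLV_RESULTS_MAX];
+ *	u32 value;
+ *	s32 err = fm10k_tlv_attr_parse(msg, results, fm10k_tlv_msg_test_attr);
+ *	if (!err && results[FM10K_TEST_MSG_U32])
+ *		err = fm10k_tlv_attr_get_u32(results[FM10K_TEST_MSG_U32],
+ *					     &value);
+ *
+ * Attributes whose IDs are not known to the table are simply skipped
+ * (their results[] slot stays NULL) rather than failing the whole message.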
+ **/ +s32 fm10k_tlv_attr_parse(u32 *attr, u32 **results, + const struct fm10k_tlv_attr *tlv_attr) +{ + u32 i, attr_id, offset = 0; + s32 err = 0; + u16 len; + + DEBUGFUNC("fm10k_tlv_attr_parse"); + + /* verify pointers are not NULL */ + if (!attr || !results) + return FM10K_ERR_PARAM; + + /* initialize results to NULL */ + for (i = 0; i < FM10K_TLV_RESULTS_MAX; i++) + results[i] = NULL; + + /* pull length from the message header */ + len = *attr >> FM10K_TLV_LEN_SHIFT; + + /* no attributes to parse if there is no length */ + if (!len) + return FM10K_SUCCESS; + + /* no attributes to parse, just raw data, message becomes attribute */ + if (!tlv_attr) { + results[0] = attr; + return FM10K_SUCCESS; + } + + /* move to start of attribute data */ + attr++; + + /* run through list parsing all attributes */ + while (offset < len) { + attr_id = *attr & FM10K_TLV_ID_MASK; + + if (attr_id < FM10K_TLV_RESULTS_MAX) + err = fm10k_tlv_attr_validate(attr, tlv_attr); + else + err = FM10K_NOT_IMPLEMENTED; + + if (err < 0) + return err; + if (!err) + results[attr_id] = attr; + + /* update offset */ + offset += FM10K_TLV_DWORD_LEN(*attr) * 4; + + /* move to next attribute */ + attr = &attr[FM10K_TLV_DWORD_LEN(*attr)]; + } + + /* we should find ourselves at the end of the list */ + if (offset != len) + return FM10K_ERR_PARAM; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_msg_parse - Parses message header and calls function handler + * @hw: Pointer to hardware structure + * @msg: Pointer to message + * @mbx: Pointer to mailbox information structure + * @data: Array containing the list of message handling functions + * + * This function should be the first function called upon receiving a + * message. The handler will identify the message type and call the correct + * handler for the given message. It will return the value from the function + * call on a recognized message type, otherwise it will return + * FM10K_NOT_IMPLEMENTED on an unrecognized type. + **/ +s32 fm10k_tlv_msg_parse(struct fm10k_hw *hw, u32 *msg, + struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *data) +{ + u32 *results[FM10K_TLV_RESULTS_MAX]; + u32 msg_id; + s32 err; + + DEBUGFUNC("fm10k_tlv_msg_parse"); + + /* verify pointers are not NULL */ + if (!msg || !data) + return FM10K_ERR_PARAM; + + /* verify this is a message and not an attribute */ + if (!(*msg & (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT))) + return FM10K_ERR_PARAM; + + /* grab message ID */ + msg_id = *msg & FM10K_TLV_ID_MASK; + + while (data->id < msg_id) + data++; + + /* if we didn't find it then pass it up as an error */ + if (data->id != msg_id) { + while (data->id != FM10K_TLV_ERROR) + data++; + } + + /* parse the attributes into the results list */ + err = fm10k_tlv_attr_parse(msg, results, data->attr); + if (err < 0) + return err; + + return data->func(hw, results, mbx); +} + +/** + * fm10k_tlv_msg_error - Default handler for unrecognized TLV message IDs + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Unused mailbox pointer + * + * This function is a default handler for unrecognized messages. At a + * minimum it just indicates that the message requested was + * unimplemented.
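+ *
+ * Editor's note (illustrative sketch, not part of the patch): a handler
+ * table passed to fm10k_tlv_msg_parse() is sorted by ascending message ID
+ * and ends with the error handler, which also catches unknown IDs, e.g.
+ *
+ *	static const struct fm10k_msg_data example_msg_data[] = {
+ *		FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
+ *		FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error),
+ *	};
+ *	err = fm10k_tlv_msg_parse(hw, msg, mbx, example_msg_data);
+ *
+ * The table name example_msg_data is hypothetical; the macros and both
+ * handlers are defined elsewhere in this patch.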
+ **/ +s32 fm10k_tlv_msg_error(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + UNREFERENCED_3PARAMETER(hw, results, mbx); + DEBUGOUT1("Unknown message ID %u\n", **results & FM10K_TLV_ID_MASK); + return FM10K_NOT_IMPLEMENTED; +} + +STATIC const unsigned char test_str[] = "fm10k"; +STATIC const unsigned char test_mac[ETH_ALEN] = { 0x12, 0x34, 0x56, + 0x78, 0x9a, 0xbc }; +STATIC const u16 test_vlan = 0x0FED; +STATIC const u64 test_u64 = 0xfedcba9876543210ull; +STATIC const u32 test_u32 = 0x87654321; +STATIC const u16 test_u16 = 0x8765; +STATIC const u8 test_u8 = 0x87; +STATIC const s64 test_s64 = -0x123456789abcdef0ll; +STATIC const s32 test_s32 = -0x1235678; +STATIC const s16 test_s16 = -0x1234; +STATIC const s8 test_s8 = -0x12; +STATIC const __le32 test_le[2] = { FM10K_CPU_TO_LE32(0x12345678), + FM10K_CPU_TO_LE32(0x9abcdef0)}; + +/* The message below is meant to be used as a test message to demonstrate + * how to use the TLV interface and to test the types. Normally this code + * would be compiled out by stripping the code wrapped in FM10K_TLV_TEST_MSG + */ +const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[] = { + FM10K_TLV_ATTR_NULL_STRING(FM10K_TEST_MSG_STRING, 80), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_TEST_MSG_MAC_ADDR), + FM10K_TLV_ATTR_U8(FM10K_TEST_MSG_U8), + FM10K_TLV_ATTR_U16(FM10K_TEST_MSG_U16), + FM10K_TLV_ATTR_U32(FM10K_TEST_MSG_U32), + FM10K_TLV_ATTR_U64(FM10K_TEST_MSG_U64), + FM10K_TLV_ATTR_S8(FM10K_TEST_MSG_S8), + FM10K_TLV_ATTR_S16(FM10K_TEST_MSG_S16), + FM10K_TLV_ATTR_S32(FM10K_TEST_MSG_S32), + FM10K_TLV_ATTR_S64(FM10K_TEST_MSG_S64), + FM10K_TLV_ATTR_LE_STRUCT(FM10K_TEST_MSG_LE_STRUCT, 8), + FM10K_TLV_ATTR_NESTED(FM10K_TEST_MSG_NESTED), + FM10K_TLV_ATTR_S32(FM10K_TEST_MSG_RESULT), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_tlv_msg_test_generate_data - Stuff message with data + * @msg: Pointer to message + * @attr_flags: List of flags indicating what attributes to add + * + * This function is meant to load a message buffer with attribute data + **/ +STATIC void fm10k_tlv_msg_test_generate_data(u32 *msg, u32 attr_flags) +{ + DEBUGFUNC("fm10k_tlv_msg_test_generate_data"); + + if (attr_flags & (1 << FM10K_TEST_MSG_STRING)) + fm10k_tlv_attr_put_null_string(msg, FM10K_TEST_MSG_STRING, + test_str); + if (attr_flags & (1 << FM10K_TEST_MSG_MAC_ADDR)) + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_TEST_MSG_MAC_ADDR, + test_mac, test_vlan); + if (attr_flags & (1 << FM10K_TEST_MSG_U8)) + fm10k_tlv_attr_put_u8(msg, FM10K_TEST_MSG_U8, test_u8); + if (attr_flags & (1 << FM10K_TEST_MSG_U16)) + fm10k_tlv_attr_put_u16(msg, FM10K_TEST_MSG_U16, test_u16); + if (attr_flags & (1 << FM10K_TEST_MSG_U32)) + fm10k_tlv_attr_put_u32(msg, FM10K_TEST_MSG_U32, test_u32); + if (attr_flags & (1 << FM10K_TEST_MSG_U64)) + fm10k_tlv_attr_put_u64(msg, FM10K_TEST_MSG_U64, test_u64); + if (attr_flags & (1 << FM10K_TEST_MSG_S8)) + fm10k_tlv_attr_put_s8(msg, FM10K_TEST_MSG_S8, test_s8); + if (attr_flags & (1 << FM10K_TEST_MSG_S16)) + fm10k_tlv_attr_put_s16(msg, FM10K_TEST_MSG_S16, test_s16); + if (attr_flags & (1 << FM10K_TEST_MSG_S32)) + fm10k_tlv_attr_put_s32(msg, FM10K_TEST_MSG_S32, test_s32); + if (attr_flags & (1 << FM10K_TEST_MSG_S64)) + fm10k_tlv_attr_put_s64(msg, FM10K_TEST_MSG_S64, test_s64); + if (attr_flags & (1 << FM10K_TEST_MSG_LE_STRUCT)) + fm10k_tlv_attr_put_le_struct(msg, FM10K_TEST_MSG_LE_STRUCT, + test_le, 8); +} + +/** + * fm10k_tlv_msg_test_create - Create a test message testing all attributes + * @msg: Pointer to message + * @attr_flags: List of flags indicating what attributes to add
+ * + * This function is meant to load a message buffer with all attribute types + * including a nested attribute. + **/ +void fm10k_tlv_msg_test_create(u32 *msg, u32 attr_flags) +{ + u32 *nest = NULL; + + DEBUGFUNC("fm10k_tlv_msg_test_create"); + + fm10k_tlv_msg_init(msg, FM10K_TLV_MSG_ID_TEST); + + fm10k_tlv_msg_test_generate_data(msg, attr_flags); + + /* check for nested attributes */ + attr_flags >>= FM10K_TEST_MSG_NESTED; + + if (attr_flags) { + nest = fm10k_tlv_attr_nest_start(msg, FM10K_TEST_MSG_NESTED); + + fm10k_tlv_msg_test_generate_data(nest, attr_flags); + + fm10k_tlv_attr_nest_stop(msg); + } +} + +/** + * fm10k_tlv_msg_test - Validate all results on test message receive + * @hw: Pointer to hardware structure + * @results: Pointer array to attributes in the message + * @mbx: Pointer to mailbox information structure + * + * This function does a check to verify all attributes match what the test + * message placed in the message buffer. It is the default handler + * for TLV test messages. + **/ +s32 fm10k_tlv_msg_test(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u32 *nest_results[FM10K_TLV_RESULTS_MAX]; + unsigned char result_str[80]; + unsigned char result_mac[ETH_ALEN]; + s32 err = FM10K_SUCCESS; + __le32 result_le[2]; + u16 result_vlan; + u64 result_u64; + u32 result_u32; + u16 result_u16; + u8 result_u8; + s64 result_s64; + s32 result_s32; + s16 result_s16; + s8 result_s8; + u32 reply[3]; + + DEBUGFUNC("fm10k_tlv_msg_test"); + + /* retrieve results of a previous test */ + if (!!results[FM10K_TEST_MSG_RESULT]) + return fm10k_tlv_attr_get_s32(results[FM10K_TEST_MSG_RESULT], + &mbx->test_result); + +parse_nested: + if (!!results[FM10K_TEST_MSG_STRING]) { + err = fm10k_tlv_attr_get_null_string( + results[FM10K_TEST_MSG_STRING], + result_str); + if (!err && memcmp(test_str, result_str, sizeof(test_str))) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_MAC_ADDR]) { + err = fm10k_tlv_attr_get_mac_vlan( + results[FM10K_TEST_MSG_MAC_ADDR], + result_mac, &result_vlan); + if (!err && memcmp(test_mac, result_mac, ETH_ALEN)) + err = FM10K_ERR_INVALID_VALUE; + if (!err && test_vlan != result_vlan) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U8]) { + err = fm10k_tlv_attr_get_u8(results[FM10K_TEST_MSG_U8], + &result_u8); + if (!err && test_u8 != result_u8) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U16]) { + err = fm10k_tlv_attr_get_u16(results[FM10K_TEST_MSG_U16], + &result_u16); + if (!err && test_u16 != result_u16) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U32]) { + err = fm10k_tlv_attr_get_u32(results[FM10K_TEST_MSG_U32], + &result_u32); + if (!err && test_u32 != result_u32) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U64]) { + err = fm10k_tlv_attr_get_u64(results[FM10K_TEST_MSG_U64], + &result_u64); + if (!err && test_u64 != result_u64) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S8]) { + err = fm10k_tlv_attr_get_s8(results[FM10K_TEST_MSG_S8], + &result_s8); + if (!err && test_s8 != result_s8) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S16]) { + err = fm10k_tlv_attr_get_s16(results[FM10K_TEST_MSG_S16], + &result_s16); + if (!err && test_s16 != result_s16) + err = 
FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S32]) { + err = fm10k_tlv_attr_get_s32(results[FM10K_TEST_MSG_S32], + &result_s32); + if (!err && test_s32 != result_s32) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S64]) { + err = fm10k_tlv_attr_get_s64(results[FM10K_TEST_MSG_S64], + &result_s64); + if (!err && test_s64 != result_s64) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_LE_STRUCT]) { + err = fm10k_tlv_attr_get_le_struct( + results[FM10K_TEST_MSG_LE_STRUCT], + result_le, + sizeof(result_le)); + if (!err && memcmp(test_le, result_le, sizeof(test_le))) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + + if (!!results[FM10K_TEST_MSG_NESTED]) { + /* clear any pointers */ + memset(nest_results, 0, sizeof(nest_results)); + + /* parse the nested attributes into the nest results list */ + err = fm10k_tlv_attr_parse(results[FM10K_TEST_MSG_NESTED], + nest_results, + fm10k_tlv_msg_test_attr); + if (err) + goto report_result; + + /* loop back through to the start */ + results = nest_results; + goto parse_nested; + } + +report_result: + /* generate reply with test result */ + fm10k_tlv_msg_init(reply, FM10K_TLV_MSG_ID_TEST); + fm10k_tlv_attr_put_s32(reply, FM10K_TEST_MSG_RESULT, err); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, reply); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_tlv.h b/lib/librte_pmd_fm10k/base/fm10k_tlv.h new file mode 100644 index 0000000..ad97236 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_tlv.h @@ -0,0 +1,199 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _FM10K_TLV_H_ +#define _FM10K_TLV_H_ + +/* forward declaration */ +struct fm10k_msg_data; + +#include "fm10k_type.h" + +/* Message / Argument header format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | Length | Flags | Type / ID | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * The message header format described here is used for messages that are + * passed between the PF and the VF. To allow for messages larger then + * mailbox size we will provide a message with the above header and it + * will be segmented and transported to the mailbox to the other side where + * it is reassembled. It contains the following fields: + * Len: Length of the message in bytes excluding the message header + * Flags: TBD + * Rule: These will be the message/argument types we pass + */ +/* message data header */ +#define FM10K_TLV_ID_SHIFT 0 +#define FM10K_TLV_ID_SIZE 16 +#define FM10K_TLV_ID_MASK ((1u << FM10K_TLV_ID_SIZE) - 1) +#define FM10K_TLV_FLAGS_SHIFT 16 +#define FM10K_TLV_FLAGS_MSG 0x1 +#define FM10K_TLV_FLAGS_SIZE 4 +#define FM10K_TLV_LEN_SHIFT 20 +#define FM10K_TLV_LEN_SIZE 12 + +#define FM10K_TLV_HDR_LEN 4ul +#define FM10K_TLV_LEN_ALIGN_MASK \ + ((FM10K_TLV_HDR_LEN - 1) << FM10K_TLV_LEN_SHIFT) +#define FM10K_TLV_LEN_ALIGN(tlv) \ + (((tlv) + FM10K_TLV_LEN_ALIGN_MASK) & ~FM10K_TLV_LEN_ALIGN_MASK) +#define FM10K_TLV_DWORD_LEN(tlv) \ + ((u16)((FM10K_TLV_LEN_ALIGN(tlv)) >> (FM10K_TLV_LEN_SHIFT + 2)) + 1) + +#define FM10K_TLV_RESULTS_MAX 32 + +enum fm10k_tlv_type { + FM10K_TLV_NULL_STRING, + FM10K_TLV_MAC_ADDR, + FM10K_TLV_BOOL, + FM10K_TLV_UNSIGNED, + FM10K_TLV_SIGNED, + FM10K_TLV_LE_STRUCT, + FM10K_TLV_NESTED, + FM10K_TLV_MAX_TYPE +}; + +#define FM10K_TLV_ERROR (~0u) + +struct fm10k_tlv_attr { + unsigned int id; + enum fm10k_tlv_type type; + u16 len; +}; + +#define FM10K_TLV_ATTR_NULL_STRING(id, len) { id, FM10K_TLV_NULL_STRING, len } +#define FM10K_TLV_ATTR_MAC_ADDR(id) { id, FM10K_TLV_MAC_ADDR, 6 } +#define FM10K_TLV_ATTR_BOOL(id) { id, FM10K_TLV_BOOL, 0 } +#define FM10K_TLV_ATTR_U8(id) { id, FM10K_TLV_UNSIGNED, 1 } +#define FM10K_TLV_ATTR_U16(id) { id, FM10K_TLV_UNSIGNED, 2 } +#define FM10K_TLV_ATTR_U32(id) { id, FM10K_TLV_UNSIGNED, 4 } +#define FM10K_TLV_ATTR_U64(id) { id, FM10K_TLV_UNSIGNED, 8 } +#define FM10K_TLV_ATTR_S8(id) { id, FM10K_TLV_SIGNED, 1 } +#define FM10K_TLV_ATTR_S16(id) { id, FM10K_TLV_SIGNED, 2 } +#define FM10K_TLV_ATTR_S32(id) { id, FM10K_TLV_SIGNED, 4 } +#define FM10K_TLV_ATTR_S64(id) { id, FM10K_TLV_SIGNED, 8 } +#define FM10K_TLV_ATTR_LE_STRUCT(id, len) { id, FM10K_TLV_LE_STRUCT, len } +#define FM10K_TLV_ATTR_NESTED(id) { id, FM10K_TLV_NESTED } +#define FM10K_TLV_ATTR_LAST { FM10K_TLV_ERROR } + +struct fm10k_msg_data { + unsigned int id; + const struct fm10k_tlv_attr *attr; + s32 (*func)(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +}; + +#define FM10K_MSG_HANDLER(id, attr, func) { id, attr, func } + +s32 fm10k_tlv_msg_init(u32 *, u16); +s32 fm10k_tlv_attr_put_null_string(u32 *, u16, const unsigned char *); +s32 fm10k_tlv_attr_get_null_string(u32 *, unsigned char *); +s32 fm10k_tlv_attr_put_mac_vlan(u32 *, u16, const u8 *, u16); +s32 fm10k_tlv_attr_get_mac_vlan(u32 *, u8 *, u16 *); +s32 fm10k_tlv_attr_put_bool(u32 *, u16); +s32 fm10k_tlv_attr_put_value(u32 *, u16, s64, u32); +#define fm10k_tlv_attr_put_u8(msg, attr_id, val) \ + 
fm10k_tlv_attr_put_value(msg, attr_id, val, 1) +#define fm10k_tlv_attr_put_u16(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 2) +#define fm10k_tlv_attr_put_u32(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 4) +#define fm10k_tlv_attr_put_u64(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 8) +#define fm10k_tlv_attr_put_s8(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 1) +#define fm10k_tlv_attr_put_s16(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 2) +#define fm10k_tlv_attr_put_s32(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 4) +#define fm10k_tlv_attr_put_s64(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 8) +s32 fm10k_tlv_attr_get_value(u32 *, void *, u32); +#define fm10k_tlv_attr_get_u8(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u8)) +#define fm10k_tlv_attr_get_u16(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u16)) +#define fm10k_tlv_attr_get_u32(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u32)) +#define fm10k_tlv_attr_get_u64(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u64)) +#define fm10k_tlv_attr_get_s8(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s8)) +#define fm10k_tlv_attr_get_s16(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s16)) +#define fm10k_tlv_attr_get_s32(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s32)) +#define fm10k_tlv_attr_get_s64(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s64)) +s32 fm10k_tlv_attr_put_le_struct(u32 *, u16, const void *, u32); +s32 fm10k_tlv_attr_get_le_struct(u32 *, void *, u32); +u32 *fm10k_tlv_attr_nest_start(u32 *, u16); +s32 fm10k_tlv_attr_nest_stop(u32 *); +s32 fm10k_tlv_attr_parse(u32 *, u32 **, const struct fm10k_tlv_attr *); +s32 fm10k_tlv_msg_parse(struct fm10k_hw *, u32 *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *); +s32 fm10k_tlv_msg_error(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *); + +#define FM10K_TLV_MSG_ID_TEST 0 + +enum fm10k_tlv_test_attr_id { + FM10K_TEST_MSG_UNSET, + FM10K_TEST_MSG_STRING, + FM10K_TEST_MSG_MAC_ADDR, + FM10K_TEST_MSG_U8, + FM10K_TEST_MSG_U16, + FM10K_TEST_MSG_U32, + FM10K_TEST_MSG_U64, + FM10K_TEST_MSG_S8, + FM10K_TEST_MSG_S16, + FM10K_TEST_MSG_S32, + FM10K_TEST_MSG_S64, + FM10K_TEST_MSG_LE_STRUCT, + FM10K_TEST_MSG_NESTED, + FM10K_TEST_MSG_RESULT, + FM10K_TEST_MSG_MAX +}; + +extern const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[]; +void fm10k_tlv_msg_test_create(u32 *, u32); +s32 fm10k_tlv_msg_test(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); + +#define FM10K_TLV_MSG_TEST_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_TLV_MSG_ID_TEST, fm10k_tlv_msg_test_attr, func) +#define FM10K_TLV_MSG_ERROR_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_TLV_ERROR, NULL, func) +#endif /* _FM10K_MSG_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_type.h b/lib/librte_pmd_fm10k/base/fm10k_type.h new file mode 100644 index 0000000..534fab4 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_type.h @@ -0,0 +1,937 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. 
Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_TYPE_H_ +#define _FM10K_TYPE_H_ + +/* forward declaration */ +struct fm10k_hw; + +#include "fm10k_osdep.h" +#include "fm10k_mbx.h" + +#define FM10K_INTEL_VENDOR_ID 0x8086 +#define FM10K_DEV_ID_PF 0x15A4 +#define FM10K_DEV_ID_VF 0x15A5 + +#define FM10K_MAX_QUEUES 256 +#define FM10K_MAX_QUEUES_PF 128 +#define FM10K_MAX_QUEUES_POOL 16 + +#define FM10K_48_BIT_MASK 0x0000FFFFFFFFFFFFull +#define FM10K_STAT_VALID 0x80000000 + +/* PCI Bus Info */ +#define FM10K_PCIE_LINK_CAP 0x7C +#define FM10K_PCIE_LINK_STATUS 0x82 +#define FM10K_PCIE_LINK_WIDTH 0x3F0 +#define FM10K_PCIE_LINK_WIDTH_1 0x10 +#define FM10K_PCIE_LINK_WIDTH_2 0x20 +#define FM10K_PCIE_LINK_WIDTH_4 0x40 +#define FM10K_PCIE_LINK_WIDTH_8 0x80 +#define FM10K_PCIE_LINK_SPEED 0xF +#define FM10K_PCIE_LINK_SPEED_2500 0x1 +#define FM10K_PCIE_LINK_SPEED_5000 0x2 +#define FM10K_PCIE_LINK_SPEED_8000 0x3 + +/* PCIe payload size */ +#define FM10K_PCIE_DEV_CAP 0x74 +#define FM10K_PCIE_DEV_CAP_PAYLOAD 0x07 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_128 0x00 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_256 0x01 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_512 0x02 +#define FM10K_PCIE_DEV_CTRL 0x78 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD 0xE0 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_128 0x00 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_256 0x20 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_512 0x40 + +/* PCIe MSI-X Capability info */ +#define FM10K_PCI_MSIX_MSG_CTRL 0xB2 +#define FM10K_PCI_MSIX_MSG_CTRL_TBL_SZ_MASK 0x7FF +#define FM10K_MAX_MSIX_VECTORS 256 +#define FM10K_MAX_VECTORS_PF 256 +#define FM10K_MAX_VECTORS_POOL 32 + +/* PCIe SR-IOV Info */ +#define FM10K_PCIE_SRIOV_CTRL 0x190 +#define FM10K_PCIE_SRIOV_CTRL_VFARI 0x10 + +#define FM10K_SUCCESS 0 +#define FM10K_ERR_DEVICE_NOT_SUPPORTED -1 +#define FM10K_ERR_PARAM -2 +#define FM10K_ERR_NO_RESOURCES -3 +#define FM10K_ERR_REQUESTS_PENDING -4 +#define FM10K_ERR_RESET_REQUESTED -5 +#define FM10K_ERR_DMA_PENDING -6 +#define FM10K_ERR_RESET_FAILED -7 +#define FM10K_ERR_INVALID_MAC_ADDR -8 +#define FM10K_ERR_INVALID_VALUE -9 +#define FM10K_NOT_IMPLEMENTED 0x7FFFFFFF + +#define UNREFERENCED_XPARAMETER +#define UNREFERENCED_1PARAMETER(_p) (_p) +#define UNREFERENCED_2PARAMETER(_p, _q) do { (_p); (_q); } while (0) +#define UNREFERENCED_3PARAMETER(_p, _q, _r) do { (_p); (_q); (_r); } while (0) + +/* Start of 
PF registers */ +#define FM10K_CTRL 0x0000 +#define FM10K_CTRL_BAR4_ALLOWED 0x00000004 + +#define FM10K_CTRL_EXT 0x0001 +#define FM10K_CTRL_EXT_NS_DIS 0x00000001 +#define FM10K_CTRL_EXT_RO_DIS 0x00000002 +#define FM10K_CTRL_EXT_SWITCH_LOOPBACK 0x00000004 +#define FM10K_EXVET 0x0002 +#define FM10K_EXVET_ETHERTYPE_MASK 0x000000FF +#define FM10K_EXVET_TAG_SIZE_SHIFT 16 +#define FM10K_EXVET_AFTER_VLAN 0x00040000 +#define FM10K_GCR 0x0003 +#define FM10K_FACTPS 0x0004 +#define FM10K_GCR_EXT 0x0005 + +/* Interrupt control registers */ +#define FM10K_EICR 0x0006 +#define FM10K_EICR_PCA_FAULT 0x00000001 +#define FM10K_EICR_THI_FAULT 0x00000004 +#define FM10K_EICR_FUM_FAULT 0x00000020 +#define FM10K_EICR_FAULT_MASK 0x0000003F +#define FM10K_EICR_MAILBOX 0x00000040 +#define FM10K_EICR_SWITCHREADY 0x00000080 +#define FM10K_EICR_SWITCHNOTREADY 0x00000100 +#define FM10K_EICR_SWITCHINTERRUPT 0x00000200 +#define FM10K_EICR_SRAMERROR 0x00000400 +#define FM10K_EICR_VFLR 0x00000800 +#define FM10K_EICR_MAXHOLDTIME 0x00001000 +#define FM10K_EIMR 0x0007 +#define FM10K_EIMR_PCA_FAULT 0x00000001 +#define FM10K_EIMR_THI_FAULT 0x00000010 +#define FM10K_EIMR_FUM_FAULT 0x00000400 +#define FM10K_EIMR_MAILBOX 0x00001000 +#define FM10K_EIMR_SWITCHREADY 0x00004000 +#define FM10K_EIMR_SWITCHNOTREADY 0x00010000 +#define FM10K_EIMR_SWITCHINTERRUPT 0x00040000 +#define FM10K_EIMR_SRAMERROR 0x00100000 +#define FM10K_EIMR_VFLR 0x00400000 +#define FM10K_EIMR_MAXHOLDTIME 0x01000000 +#define FM10K_EIMR_ALL 0x55555555 +#define FM10K_EIMR_DISABLE(NAME) ((FM10K_EIMR_ ## NAME) << 0) +#define FM10K_EIMR_ENABLE(NAME) ((FM10K_EIMR_ ## NAME) << 1) +#define FM10K_FAULT_ADDR_LO 0x0 +#define FM10K_FAULT_ADDR_HI 0x1 +#define FM10K_FAULT_SPECINFO 0x2 +#define FM10K_FAULT_FUNC 0x3 +#define FM10K_FAULT_SIZE 0x4 +#define FM10K_FAULT_FUNC_VALID 0x00008000 +#define FM10K_FAULT_FUNC_PF 0x00004000 +#define FM10K_FAULT_FUNC_VF_MASK 0x00003F00 +#define FM10K_FAULT_FUNC_VF_SHIFT 8 +#define FM10K_FAULT_FUNC_TYPE_MASK 0x000000FF + +#define FM10K_PCA_FAULT 0x0008 +#define FM10K_THI_FAULT 0x0010 +#define FM10K_FUM_FAULT 0x001C + +/* Rx queue timeout indicator */ +#define FM10K_MAXHOLDQ(_n) ((_n) + 0x0020) + +/* Switch Manager info */ +#define FM10K_SM_AREA(_n) ((_n) + 0x0028) + +/* GLORT mapping registers */ +#define FM10K_DGLORTMAP(_n) ((_n) + 0x0030) +#define FM10K_DGLORT_COUNT 8 +#define FM10K_DGLORTMAP_MASK_SHIFT 16 +#define FM10K_DGLORTMAP_ANY 0x00000000 +#define FM10K_DGLORTMAP_NONE 0x0000FFFF +#define FM10K_DGLORTMAP_ZERO 0xFFFF0000 +#define FM10K_DGLORTDEC(_n) ((_n) + 0x0038) +#define FM10K_DGLORTDEC_VSILENGTH_SHIFT 4 +#define FM10K_DGLORTDEC_VSIBASE_SHIFT 7 +#define FM10K_DGLORTDEC_PCLENGTH_SHIFT 14 +#define FM10K_DGLORTDEC_QBASE_SHIFT 16 +#define FM10K_DGLORTDEC_RSSLENGTH_SHIFT 24 +#define FM10K_DGLORTDEC_INNERRSS_ENABLE 0x08000000 +#define FM10K_TUNNEL_CFG 0x0040 +#define FM10K_TUNNEL_CFG_NVGRE_SHIFT 16 +#define FM10K_TUNNEL_CFG_GENEVE 0x0041 +#define FM10K_SWPRI_MAP(_n) ((_n) + 0x0050) +#define FM10K_SWPRI_MAX 16 +#define FM10K_RSSRK(_n, _m) (((_n) * 0x10) + (_m) + 0x0800) +#define FM10K_RSSRK_SIZE 10 +#define FM10K_RSSRK_ENTRIES_PER_REG 4 +#define FM10K_RETA(_n, _m) (((_n) * 0x20) + (_m) + 0x1000) +#define FM10K_RETA_SIZE 32 +#define FM10K_RETA_ENTRIES_PER_REG 4 +#define FM10K_MAX_RSS_INDICES 128 + +/* Rate limiting registers */ +#define FM10K_TC_CREDIT(_n) ((_n) + 0x2000) +#define FM10K_TC_CREDIT_CREDIT_MASK 0x001FFFFF +#define FM10K_TC_MAXCREDIT(_n) ((_n) + 0x2040) +#define FM10K_TC_MAXCREDIT_64K 0x00010000 +#define FM10K_TC_RATE(_n) ((_n) + 
0x2080) +#define FM10K_TC_RATE_QUANTA_MASK 0x0000FFFF +#define FM10K_TC_RATE_INTERVAL_4US_GEN1 0x00020000 +#define FM10K_TC_RATE_INTERVAL_4US_GEN2 0x00040000 +#define FM10K_TC_RATE_INTERVAL_4US_GEN3 0x00080000 +#define FM10K_TC_RATE_STATUS 0x20C0 +#define FM10K_PAUSE 0x20C2 + +/* DMA control registers */ +#define FM10K_DMA_CTRL 0x20C3 +#define FM10K_DMA_CTRL_TX_ENABLE 0x00000001 +#define FM10K_DMA_CTRL_TX_HOST_PENDING 0x00000002 +#define FM10K_DMA_CTRL_TX_DATA 0x00000004 +#define FM10K_DMA_CTRL_TX_ACTIVE 0x00000008 +#define FM10K_DMA_CTRL_RX_ENABLE 0x00000010 +#define FM10K_DMA_CTRL_RX_HOST_PENDING 0x00000020 +#define FM10K_DMA_CTRL_RX_DATA 0x00000040 +#define FM10K_DMA_CTRL_RX_ACTIVE 0x00000080 +#define FM10K_DMA_CTRL_RX_DESC_SIZE 0x00000100 +#define FM10K_DMA_CTRL_MINMSS_SHIFT 9 +#define FM10K_DMA_CTRL_MINMSS_64 0x00008000 +#define FM10K_DMA_CTRL_MAX_HOLD_TIME_SHIFT 23 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN3 0x04800000 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN2 0x04000000 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN1 0x03800000 +#define FM10K_DMA_CTRL_DATAPATH_RESET 0x20000000 +#define FM10K_DMA_CTRL_MAXNUMOFQ_MASK 0xC0000000 +#define FM10K_DMA_CTRL_32_DESC 0x00000000 +#define FM10K_DMA_CTRL_64_DESC 0x40000000 +#define FM10K_DMA_CTRL_128_DESC 0x80000000 + +#define FM10K_DMA_CTRL2 0x20C4 +#define FM10K_DMA_CTRL2_TX_FRAME_SPACING_SHIFT 5 +#define FM10K_DMA_CTRL2_SWITCH_READY 0x00002000 +#define FM10K_DMA_CTRL2_RX_DESC_READ_PRIO_SHIFT 14 +#define FM10K_DMA_CTRL2_TX_DESC_READ_PRIO_SHIFT 17 +#define FM10K_DMA_CTRL2_TX_DATA_READ_PRIO_SHIFT 20 + +/* TSO flags configuration + * First packet contains all flags except for fin and psh + * Middle packet contains only urg and ack + * Last packet contains urg, ack, fin, and psh + */ +#define FM10K_TSO_FLAGS_LOW 0x00300FF6 +#define FM10K_TSO_FLAGS_HI 0x00000039 +#define FM10K_DTXTCPFLGL 0x20C5 +#define FM10K_DTXTCPFLGH 0x20C6 + +#define FM10K_TPH_CTRL 0x20C7 +#define FM10K_TPH_CTRL_DISABLE_READ_HINT 0x00000080 +#define FM10K_MRQC(_n) ((_n) + 0x2100) +#define FM10K_MRQC_TCP_IPV4 0x00000001 +#define FM10K_MRQC_IPV4 0x00000002 +#define FM10K_MRQC_IPV6 0x00000010 +#define FM10K_MRQC_TCP_IPV6 0x00000020 +#define FM10K_MRQC_UDP_IPV4 0x00000040 +#define FM10K_MRQC_UDP_IPV6 0x00000080 + +#define FM10K_TQMAP(_n) ((_n) + 0x2800) +#define FM10K_TQMAP_TABLE_SIZE 2048 +#define FM10K_RQMAP(_n) ((_n) + 0x3000) +#define FM10K_RQMAP_TABLE_SIZE 2048 + +/* Hardware Statistics */ +#define FM10K_STATS_TIMEOUT 0x3800 +#define FM10K_STATS_UR 0x3801 +#define FM10K_STATS_CA 0x3802 +#define FM10K_STATS_UM 0x3803 +#define FM10K_STATS_XEC 0x3804 +#define FM10K_STATS_VLAN_DROP 0x3805 +#define FM10K_STATS_LOOPBACK_DROP 0x3806 +#define FM10K_STATS_NODESC_DROP 0x3807 + +/* Timesync registers */ +#define FM10K_RRTIME_CFG 0x3808 +#define FM10K_RRTIME_LIMIT(_n) ((_n) + 0x380C) +#define FM10K_RRTIME_COUNT(_n) ((_n) + 0x3810) +#define FM10K_SYSTIME 0x3814 +#define FM10K_SYSTIME0 0x3816 +#define FM10K_SYSTIME_CFG 0x3818 +#define FM10K_SYSTIME_CFG_STEP_MASK 0x0000000F + +/* PCIe state registers */ +#define FM10K_PFVFBME(_n) ((_n) + 0x381A) +#define FM10K_PHYADDR 0x381C + +/* Rx ring registers */ +#define FM10K_RDBAL(_n) ((0x40 * (_n)) + 0x4000) +#define FM10K_RDBAH(_n) ((0x40 * (_n)) + 0x4001) +#define FM10K_RDLEN(_n) ((0x40 * (_n)) + 0x4002) +#define FM10K_TPH_RXCTRL(_n) ((0x40 * (_n)) + 0x4003) +#define FM10K_TPH_RXCTRL_DESC_TPHEN 0x00000020 +#define FM10K_TPH_RXCTRL_HDR_TPHEN 0x00000040 +#define FM10K_TPH_RXCTRL_DATA_TPHEN 0x00000080 +#define FM10K_TPH_RXCTRL_DESC_RROEN 0x00000200 +#define 
FM10K_TPH_RXCTRL_DATA_WROEN 0x00002000 +#define FM10K_TPH_RXCTRL_HDR_WROEN 0x00008000 +#define FM10K_RDH(_n) ((0x40 * (_n)) + 0x4004) +#define FM10K_RDT(_n) ((0x40 * (_n)) + 0x4005) +#define FM10K_RXQCTL(_n) ((0x40 * (_n)) + 0x4006) +#define FM10K_RXQCTL_ENABLE 0x00000001 +#define FM10K_RXQCTL_PF 0x000000FC +#define FM10K_RXQCTL_VF_SHIFT 2 +#define FM10K_RXQCTL_VF 0x00000100 +#define FM10K_RXQCTL_ID_MASK (FM10K_RXQCTL_PF | FM10K_RXQCTL_VF) +#define FM10K_RXDCTL(_n) ((0x40 * (_n)) + 0x4007) +#define FM10K_RXDCTL_WRITE_BACK_MIN_DELAY 0x00000001 +#define FM10K_RXDCTL_WRITE_BACK_IMM 0x00000100 +#define FM10K_RXDCTL_DROP_ON_EMPTY 0x00000200 +#define FM10K_RXINT(_n) ((0x40 * (_n)) + 0x4008) +#define FM10K_RXINT_TIMER_SHIFT 8 +#define FM10K_SRRCTL(_n) ((0x40 * (_n)) + 0x4009) +#define FM10K_SRRCTL_BSIZEPKT_SHIFT 8 /* shift _right_ */ +#define FM10K_SRRCTL_BSIZEHDR_SHIFT 2 /* shift _left_ */ +#define FM10K_SRRCTL_BSIZEHDR_MASK 0x00003F00 +#define FM10K_SRRCTL_DESCTYPE_HDR_SPLIT 0x00004000 +#define FM10K_SRRCTL_DESCTYPE_SIZE_SPLIT 0x00008000 +#define FM10K_SRRCTL_PSRTYPE_INNER_TCPHDR 0x00010000 +#define FM10K_SRRCTL_PSRTYPE_INNER_UDPHDR 0x00020000 +#define FM10K_SRRCTL_PSRTYPE_INNER_IPV4HDR 0x00040000 +#define FM10K_SRRCTL_PSRTYPE_INNER_IPV6HDR 0x00080000 +#define FM10K_SRRCTL_PSRTYPE_INNER_L2HDR 0x00100000 +#define FM10K_SRRCTL_PSRTYPE_ENCAPHDR 0x00200000 +#define FM10K_SRRCTL_PSRTYPE_TCPHDR 0x00400000 +#define FM10K_SRRCTL_PSRTYPE_UDPHDR 0x00800000 +#define FM10K_SRRCTL_PSRTYPE_IPV4HDR 0x01000000 +#define FM10K_SRRCTL_PSRTYPE_IPV6HDR 0x02000000 +#define FM10K_SRRCTL_PSRTYPE_L2HDR 0x04000000 +#define FM10K_SRRCTL_LOOPBACK_SUPPRESS 0x40000000 +#define FM10K_SRRCTL_BUFFER_CHAINING_EN 0x80000000 + +/* Rx Statistics */ +#define FM10K_QPRC(_n) ((0x40 * (_n)) + 0x400A) +#define FM10K_QPRDC(_n) ((0x40 * (_n)) + 0x400B) +#define FM10K_QBRC_L(_n) ((0x40 * (_n)) + 0x400C) +#define FM10K_QBRC_H(_n) ((0x40 * (_n)) + 0x400D) + +/* Rx GLORT register */ +#define FM10K_RX_SGLORT(_n) ((0x40 * (_n)) + 0x400E) + +/* Tx ring registers */ +#define FM10K_TDBAL(_n) ((0x40 * (_n)) + 0x8000) +#define FM10K_TDBAH(_n) ((0x40 * (_n)) + 0x8001) +#define FM10K_TDLEN(_n) ((0x40 * (_n)) + 0x8002) +#define FM10K_TPH_TXCTRL(_n) ((0x40 * (_n)) + 0x8003) +#define FM10K_TPH_TXCTRL_DESC_TPHEN 0x00000020 +#define FM10K_TPH_TXCTRL_DESC_RROEN 0x00000200 +#define FM10K_TPH_TXCTRL_DESC_WROEN 0x00000800 +#define FM10K_TPH_TXCTRL_DATA_RROEN 0x00002000 +#define FM10K_TDH(_n) ((0x40 * (_n)) + 0x8004) +#define FM10K_TDT(_n) ((0x40 * (_n)) + 0x8005) +#define FM10K_TXDCTL(_n) ((0x40 * (_n)) + 0x8006) +#define FM10K_TXDCTL_ENABLE 0x00004000 +#define FM10K_TXDCTL_MAX_TIME_SHIFT 16 +#define FM10K_TXDCTL_PUSH_DESC 0x10000000 +#define FM10K_TXQCTL(_n) ((0x40 * (_n)) + 0x8007) +#define FM10K_TXQCTL_PF 0x0000003F +#define FM10K_TXQCTL_VF 0x00000040 +#define FM10K_TXQCTL_ID_MASK (FM10K_TXQCTL_PF | FM10K_TXQCTL_VF) +#define FM10K_TXQCTL_PC_SHIFT 7 +#define FM10K_TXQCTL_PC_MASK 0x00000380 +#define FM10K_TXQCTL_TC_SHIFT 10 +#define FM10K_TXQCTL_TC_MASK 0x0000FC00 +#define FM10K_TXQCTL_VID_SHIFT 16 +#define FM10K_TXQCTL_VID_MASK 0x0FFF0000 +#define FM10K_TXQCTL_UNLIMITED_BW 0x10000000 +#define FM10K_TXQCTL_PUSHMODEDIS 0x20000000 +#define FM10K_TXINT(_n) ((0x40 * (_n)) + 0x8008) +#define FM10K_TXINT_TIMER_SHIFT 8 + +/* Tx Statistics */ +#define FM10K_QPTC(_n) ((0x40 * (_n)) + 0x8009) +#define FM10K_QBTC_L(_n) ((0x40 * (_n)) + 0x800A) +#define FM10K_QBTC_H(_n) ((0x40 * (_n)) + 0x800B) + +/* Tx Push registers */ +#define FM10K_TQDLOC(_n) ((0x40 * (_n)) + 
0x800C) +#define FM10K_TQDLOC_BASE_32_DESC 0x08 +#define FM10K_TQDLOC_BASE_64_DESC 0x10 +#define FM10K_TQDLOC_BASE_128_DESC 0x20 +#define FM10K_TQDLOC_SIZE_32_DESC 0x00050000 +#define FM10K_TQDLOC_SIZE_64_DESC 0x00060000 +#define FM10K_TQDLOC_SIZE_128_DESC 0x00070000 +#define FM10K_TQDLOC_SIZE_SHIFT 16 +#define FM10K_TX_DCACHE(_n, _m) ((0x400 * (_n)) + (0x4 * (_m)) + 0x40000) + +/* Tx GLORT registers */ +#define FM10K_TX_SGLORT(_n) ((0x40 * (_n)) + 0x800D) +#define FM10K_PFVTCTL(_n) ((0x40 * (_n)) + 0x800E) +#define FM10K_PFVTCTL_FTAG_DESC_ENABLE 0x00000001 + +/* Interrupt moderation and control registers */ +#define FM10K_PBACL(_n) ((_n) + 0x10000) +#define FM10K_INT_MAP(_n) ((_n) + 0x10080) +#define FM10K_INT_MAP_TIMER0 0x00000000 +#define FM10K_INT_MAP_TIMER1 0x00000100 +#define FM10K_INT_MAP_IMMEDIATE 0x00000200 +#define FM10K_INT_MAP_DISABLE 0x00000300 +#define FM10K_MSIX_VECTOR_ADDR_LO(_n) ((0x4 * (_n)) + 0x11000) +#define FM10K_MSIX_VECTOR_ADDR_HI(_n) ((0x4 * (_n)) + 0x11001) +#define FM10K_MSIX_VECTOR_DATA(_n) ((0x4 * (_n)) + 0x11002) +#define FM10K_MSIX_VECTOR_MASK(_n) ((0x4 * (_n)) + 0x11003) +#define FM10K_INT_CTRL 0x12000 +#define FM10K_INT_CTRL_ENABLEMODERATOR 0x00000400 +#define FM10K_ITR(_n) ((_n) + 0x12400) +#define FM10K_ITR_INTERVAL1_SHIFT 12 +#define FM10K_ITR_TIMER0_EXPIRED 0x01000000 +#define FM10K_ITR_TIMER1_EXPIRED 0x02000000 +#define FM10K_ITR_PENDING0 0x04000000 +#define FM10K_ITR_PENDING1 0x08000000 +#define FM10K_ITR_PENDING2 0x10000000 +#define FM10K_ITR_AUTOMASK 0x20000000 +#define FM10K_ITR_MASK_SET 0x40000000 +#define FM10K_ITR_MASK_CLEAR 0x80000000 +#define FM10K_ITR2(_n) ((0x2 * (_n)) + 0x12800) +#define FM10K_ITR2_LP(_n) ((0x2 * (_n)) + 0x12801) +#define FM10K_ITR_REG_COUNT 768 +#define FM10K_ITR_REG_COUNT_PF 256 + +/* Switch manager interrupt registers */ +#define FM10K_IP 0x13000 +#define FM10K_IP_HOT_RESET 0x00000001 +#define FM10K_IP_DEVICE_STATE_CHANGE 0x00000002 +#define FM10K_IP_MAILBOX 0x00000004 +#define FM10K_IP_VPD_REQUEST 0x00000008 +#define FM10K_IP_SRAMERROR 0x00000010 +#define FM10K_IP_PFLR 0x00000020 +#define FM10K_IP_DATAPATHRESET 0x00000040 +#define FM10K_IP_OUTOFRESET 0x00000080 +#define FM10K_IP_NOTINRESET 0x00000100 +#define FM10K_IP_TIMEOUT 0x00000200 +#define FM10K_IP_VFLR 0x00000400 +#define FM10K_IM 0x13001 +#define FM10K_IB 0x13002 +#define FM10K_SRAM_IP 0x13003 +#define FM10K_SRAM_IM 0x13004 + +/* VLAN registers */ +#define FM10K_VLAN_TABLE(_n, _m) ((0x80 * (_n)) + (_m) + 0x14000) +#define FM10K_VLAN_TABLE_SIZE 128 + +/* VLAN specific message offsets */ +#define FM10K_VLAN_TABLE_VID_MAX 4096 +#define FM10K_VLAN_TABLE_VSI_MAX 64 +#define FM10K_VLAN_LENGTH_SHIFT 16 +#define FM10K_VLAN_CLEAR (1 << 15) +#define FM10K_VLAN_ALL \ + ((FM10K_VLAN_TABLE_VID_MAX - 1) << FM10K_VLAN_LENGTH_SHIFT) + +/* VF FLR event notification registers */ +#define FM10K_PFVFLRE(_n) ((0x1 * (_n)) + 0x18844) +#define FM10K_PFVFLREC(_n) ((0x1 * (_n)) + 0x18846) + +/* Defines for size of uncacheable and write-combining memories */ +#define FM10K_UC_ADDR_START 0x000000 /* start of standard regs */ +#define FM10K_WC_ADDR_START 0x100000 /* start of Tx Desc Cache */ +#define FM10K_DBI_ADDR_START 0x200000 /* start of debug registers */ +#define FM10K_UC_ADDR_SIZE (FM10K_WC_ADDR_START - FM10K_UC_ADDR_START) +#define FM10K_WC_ADDR_SIZE (FM10K_DBI_ADDR_START - FM10K_WC_ADDR_START) + +/* Define timeouts for resets and disables */ +#define FM10K_QUEUE_DISABLE_TIMEOUT 100 +#define FM10K_RESET_TIMEOUT 150 + +/* Maximum supported combined inner and outer header length for 
encapsulation */ +#define FM10K_TUNNEL_HEADER_LENGTH 184 + +/* VF registers */ +#define FM10K_VFCTRL 0x00000 +#define FM10K_VFCTRL_RST 0x00000008 +#define FM10K_VFINT_MAP 0x00030 +#define FM10K_VFSYSTIME 0x00040 +#define FM10K_VFITR(_n) ((_n) + 0x00060) +#define FM10K_VFPBACL(_n) ((_n) + 0x00008) + +/* Registers contained in BAR 4 for Switch management */ +#define FM10K_SW_SYSTIME_CFG 0x0224C +#define FM10K_SW_SYSTIME_CFG_STEP_SHIFT 4 +#define FM10K_SW_SYSTIME_CFG_ADJUST_MASK 0xFF000000 +#define FM10K_SW_SYSTIME_ADJUST 0x0224D +#define FM10K_SW_SYSTIME_ADJUST_MASK 0x3FFFFFFF +#define FM10K_SW_SYSTIME_ADJUST_DIR_NEGATIVE 0x80000000 +#define FM10K_SW_SYSTIME_PULSE(_n) ((_n) + 0x02252) + +#ifndef ETH_ALEN +#define ETH_ALEN 6 +#endif /* ETH_ALEN */ + + + + +enum fm10k_int_source { + fm10k_int_Mailbox = 0, + fm10k_int_PCIeFault = 1, + fm10k_int_SwitchUpDown = 2, + fm10k_int_SwitchEvent = 3, + fm10k_int_SRAM = 4, + fm10k_int_VFLR = 5, + fm10k_int_MaxHoldTime = 6, + fm10k_int_sources_max_pf +}; + +/* PCIe bus speeds */ +enum fm10k_bus_speed { + fm10k_bus_speed_unknown = 0, + fm10k_bus_speed_2500 = 2500, + fm10k_bus_speed_5000 = 5000, + fm10k_bus_speed_8000 = 8000, + fm10k_bus_speed_reserved +}; + +/* PCIe bus widths */ +enum fm10k_bus_width { + fm10k_bus_width_unknown = 0, + fm10k_bus_width_pcie_x1 = 1, + fm10k_bus_width_pcie_x2 = 2, + fm10k_bus_width_pcie_x4 = 4, + fm10k_bus_width_pcie_x8 = 8, + fm10k_bus_width_reserved +}; + +/* PCIe payload sizes */ +enum fm10k_bus_payload { + fm10k_bus_payload_unknown = 0, + fm10k_bus_payload_128 = 1, + fm10k_bus_payload_256 = 2, + fm10k_bus_payload_512 = 3, + fm10k_bus_payload_reserved +}; + +/* Bus parameters */ +struct fm10k_bus_info { + enum fm10k_bus_speed speed; + enum fm10k_bus_width width; + enum fm10k_bus_payload payload; +}; + +/* Statistics related declarations */ +struct fm10k_hw_stat { + u64 count; + u32 base_l; + u32 base_h; +}; + +struct fm10k_hw_stats_q { + struct fm10k_hw_stat tx_bytes; + struct fm10k_hw_stat tx_packets; +#define tx_stats_idx tx_packets.base_h + struct fm10k_hw_stat rx_bytes; + struct fm10k_hw_stat rx_packets; +#define rx_stats_idx rx_packets.base_h + struct fm10k_hw_stat rx_drops; +}; + +struct fm10k_hw_stats { + struct fm10k_hw_stat timeout; +#define stats_idx timeout.base_h + struct fm10k_hw_stat ur; + struct fm10k_hw_stat ca; + struct fm10k_hw_stat um; + struct fm10k_hw_stat xec; + struct fm10k_hw_stat vlan_drop; + struct fm10k_hw_stat loopback_drop; + struct fm10k_hw_stat nodesc_drop; + struct fm10k_hw_stats_q q[FM10K_MAX_QUEUES_PF]; +}; + +/* Establish DGLORT feature priority */ +enum fm10k_dglortdec_idx { + fm10k_dglort_default = 0, + fm10k_dglort_vf_rsvd0 = 1, + fm10k_dglort_vf_rss = 2, + fm10k_dglort_pf_rsvd0 = 3, + fm10k_dglort_pf_queue = 4, + fm10k_dglort_pf_vsi = 5, + fm10k_dglort_pf_rsvd1 = 6, + fm10k_dglort_pf_rss = 7 +}; + +struct fm10k_dglort_cfg { + u16 glort; /* GLORT base */ + u16 queue_b; /* Base value for queue */ + u8 vsi_b; /* Base value for VSI */ + u8 idx; /* index of DGLORTDEC entry */ + u8 rss_l; /* RSS indices */ + u8 pc_l; /* Priority Class indices */ + u8 vsi_l; /* Number of bits from GLORT used to determine VSI */ + u8 queue_l; /* Number of bits from GLORT used to determine queue */ + u8 shared_l; /* Ignored bits from GLORT resulting in shared VSI */ + u8 inner_rss; /* Boolean value if inner header is used for RSS */ +}; + +enum fm10k_pca_fault { + PCA_NO_FAULT, + PCA_UNMAPPED_ADDR, + PCA_BAD_QACCESS_PF, + PCA_BAD_QACCESS_VF, + PCA_MALICIOUS_REQ, + PCA_POISONED_TLP, + PCA_TLP_ABORT, + __PCA_MAX 
+}; + +enum fm10k_thi_fault { + THI_NO_FAULT, + THI_MAL_DIS_Q_FAULT, + __THI_MAX +}; + +enum fm10k_fum_fault { + FUM_NO_FAULT, + FUM_UNMAPPED_ADDR, + FUM_POISONED_TLP, + FUM_BAD_VF_QACCESS, + FUM_ADD_DECODE_ERR, + FUM_RO_ERROR, + FUM_QPRC_CRC_ERROR, + FUM_CSR_TIMEOUT, + FUM_INVALID_TYPE, + FUM_INVALID_LENGTH, + FUM_INVALID_BE, + FUM_INVALID_ALIGN, + __FUM_MAX +}; + +struct fm10k_fault { + u64 address; /* Address at the time fault was detected */ + u32 specinfo; /* Extra info on this fault (fault dependent) */ + u8 type; /* Fault value dependent on subunit */ + u8 func; /* Function number of the fault */ +}; + +struct fm10k_mac_ops { + /* basic bring-up and tear-down */ + s32 (*reset_hw)(struct fm10k_hw *); + s32 (*init_hw)(struct fm10k_hw *); + s32 (*start_hw)(struct fm10k_hw *); + s32 (*stop_hw)(struct fm10k_hw *); + s32 (*get_bus_info)(struct fm10k_hw *); + s32 (*get_host_state)(struct fm10k_hw *, bool *); + bool (*is_slot_appropriate)(struct fm10k_hw *); + s32 (*update_vlan)(struct fm10k_hw *, u32, u8, bool); + s32 (*read_mac_addr)(struct fm10k_hw *); + s32 (*update_uc_addr)(struct fm10k_hw *, u16, const u8 *, + u16, bool, u8); + s32 (*update_mc_addr)(struct fm10k_hw *, u16, const u8 *, u16, bool); + s32 (*update_xcast_mode)(struct fm10k_hw *, u16, u8); + void (*update_int_moderator)(struct fm10k_hw *); + s32 (*update_lport_state)(struct fm10k_hw *, u16, u16, bool); + void (*update_hw_stats)(struct fm10k_hw *, struct fm10k_hw_stats *); + void (*rebind_hw_stats)(struct fm10k_hw *, struct fm10k_hw_stats *); + s32 (*configure_dglort_map)(struct fm10k_hw *, + struct fm10k_dglort_cfg *); + void (*set_dma_mask)(struct fm10k_hw *, u64); + s32 (*get_fault)(struct fm10k_hw *, int, struct fm10k_fault *); + void (*request_lport_map)(struct fm10k_hw *); + s32 (*adjust_systime)(struct fm10k_hw *, s32 ppb); + u64 (*read_systime)(struct fm10k_hw *); + s32 (*request_tx_timestamp_mode)(struct fm10k_hw *, u16, u8); +}; + +enum fm10k_mac_type { + fm10k_mac_unknown = 0, + fm10k_mac_pf, + fm10k_mac_vf, + fm10k_num_macs +}; + +struct fm10k_mac_info { + struct fm10k_mac_ops ops; + enum fm10k_mac_type type; + u8 addr[ETH_ALEN]; + u8 perm_addr[ETH_ALEN]; + u16 default_vid; + u16 max_msix_vectors; + u16 max_queues; + bool vlan_override; + bool get_host_state; + bool tx_ready; + u32 dglort_map; +}; + +struct fm10k_swapi_table_info { + u32 used; + u32 avail; +}; + +struct fm10k_swapi_info { + u32 status; + struct fm10k_swapi_table_info mac; + struct fm10k_swapi_table_info nexthop; + struct fm10k_swapi_table_info ffu; +}; + +enum fm10k_xcast_modes { + FM10K_XCAST_MODE_ALLMULTI = 0, + FM10K_XCAST_MODE_MULTI = 1, + FM10K_XCAST_MODE_PROMISC = 2, + FM10K_XCAST_MODE_NONE = 3, + FM10K_XCAST_MODE_DISABLE = 4 +}; + +enum fm10k_timestamp_modes { + FM10K_TIMESTAMP_MODE_NONE = 0, + FM10K_TIMESTAMP_MODE_PEP_TO_PEP = 1, + FM10K_TIMESTAMP_MODE_PEP_TO_ANY = 2, +}; + +#define FM10K_VF_TC_MAX 100000 /* 100,000 Mb/s aka 100Gb/s */ +#define FM10K_VF_TC_MIN 1 /* 1 Mb/s is the slowest rate */ + +struct fm10k_vf_info { + /* mbx must be first field in struct unless all default IOV message + * handlers are redone as the assumption is that vf_info starts + * at the same offset as the mailbox + */ + struct fm10k_mbx_info mbx; /* PF side of VF mailbox */ + int rate; /* Tx BW cap as defined by OS */ + u16 glort; /* resource tag for this VF */ + u16 sw_vid; /* Switch API assigned VLAN */ + u16 pf_vid; /* PF assigned Default VLAN */ + u8 mac[ETH_ALEN]; /* PF Default MAC address */ + u8 vsi; /* VSI identifier */ + u8 vf_idx; /* which VF this is 
*/ + u8 vf_flags; /* flags indicating what modes + * are supported for the port + */ +}; + +#define FM10K_VF_FLAG_ALLMULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_ALLMULTI) +#define FM10K_VF_FLAG_MULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_MULTI) +#define FM10K_VF_FLAG_PROMISC_CAPABLE ((u8)1 << FM10K_XCAST_MODE_PROMISC) +#define FM10K_VF_FLAG_NONE_CAPABLE ((u8)1 << FM10K_XCAST_MODE_NONE) +#define FM10K_VF_FLAG_CAPABLE(vf_info) ((vf_info)->vf_flags & (u8)0xF) +#define FM10K_VF_FLAG_ENABLED(vf_info) ((vf_info)->vf_flags >> 4) +#define FM10K_VF_FLAG_SET_MODE(mode) ((u8)0x10 << (mode)) +#define FM10K_VF_FLAG_ENABLED_MODE_SHIFT 4 +#define FM10K_VF_FLAG_SET_MODE_MASK ((u8)0xF0) +#define FM10K_VF_FLAG_SET_MODE_NONE \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_NONE) +#define FM10K_VF_FLAG_MULTI_ENABLED \ + (FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_ALLMULTI) | \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_MULTI) | \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_PROMISC)) + +struct fm10k_iov_ops { + /* IOV related bring-up and tear-down */ + s32 (*assign_resources)(struct fm10k_hw *, u16, u16); + s32 (*configure_tc)(struct fm10k_hw *, u16, int); + s32 (*assign_int_moderator)(struct fm10k_hw *, u16); + s32 (*assign_default_mac_vlan)(struct fm10k_hw *, + struct fm10k_vf_info *); + s32 (*reset_resources)(struct fm10k_hw *, + struct fm10k_vf_info *); + s32 (*set_lport)(struct fm10k_hw *, struct fm10k_vf_info *, u16, u8); + void (*reset_lport)(struct fm10k_hw *, struct fm10k_vf_info *); + void (*update_stats)(struct fm10k_hw *, struct fm10k_hw_stats_q *, u16); + s32 (*report_timestamp)(struct fm10k_hw *, struct fm10k_vf_info *, u64); +}; + +struct fm10k_iov_info { + struct fm10k_iov_ops ops; + u16 total_vfs; + u16 num_vfs; + u16 num_pools; +}; + +struct fm10k_hw { + u32 *hw_addr; + u32 *sw_addr; + void *back; + struct fm10k_mac_info mac; + struct fm10k_bus_info bus; + struct fm10k_bus_info bus_caps; + struct fm10k_iov_info iov; + struct fm10k_mbx_info mbx; + struct fm10k_swapi_info swapi; + u16 device_id; + u16 vendor_id; + u16 subsystem_device_id; + u16 subsystem_vendor_id; + u8 revision_id; +}; + +/* Number of Transmit and Receive Descriptors must be a multiple of 8 */ +#define FM10K_REQ_TX_DESCRIPTOR_MULTIPLE 8 +#define FM10K_REQ_RX_DESCRIPTOR_MULTIPLE 8 + +/* Transmit Descriptor */ +struct fm10k_tx_desc { + __le64 buffer_addr; /* Address of the descriptor's data buffer */ + __le16 buflen; /* Length of data to be DMAed */ + __le16 vlan; /* VLAN_ID and VPRI to be inserted in FTAG */ + __le16 mss; /* MSS for segmentation offload */ + u8 hdrlen; /* Header size for segmentation offload */ + u8 flags; /* Status and offload request flags */ +}; + +/* Transmit Descriptor Cache Structure */ +struct fm10k_tx_desc_cache { + struct fm10k_tx_desc tx_desc[256]; +}; + +#define FM10K_TXD_FLAG_INT 0x01 +#define FM10K_TXD_FLAG_TIME 0x02 +#define FM10K_TXD_FLAG_CSUM 0x04 +#define FM10K_TXD_FLAG_CSUM2 0x08 +#define FM10K_TXD_FLAG_FTAG 0x10 +#define FM10K_TXD_FLAG_RS 0x20 +#define FM10K_TXD_FLAG_LAST 0x40 +#define FM10K_TXD_FLAG_DONE 0x80 + +#define FM10K_TXD_VLAN_PRI_SHIFT 12 + +/* These macros are meant to enable optimal placement of the RS and INT + * bits. It will point us to the last descriptor in the cache for either the + * start of the packet, or the end of the packet. If the index is actually + * at the start of the FIFO it will point to the offset for the last index + * in the FIFO to prevent an unnecessary write. 
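+ *
+ * Editor's worked example (not part of the patch): with a write-back FIFO
+ * size of 4, FM10K_TXD_WB_IDX(6) = (5 | 3) = 7 and FM10K_TXD_WB_IDX(8) =
+ * (7 | 3) = 7, i.e. the macro always resolves to the last descriptor index
+ * within the group of four descriptors that contains index idx - 1.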
+ */ +#define FM10K_TXD_WB_FIFO_SIZE 4 +#define FM10K_TXD_WB_IDX(idx) \ + (((idx) - 1) | (FM10K_TXD_WB_FIFO_SIZE - 1)) + +/* Receive Descriptor - 32B */ +union fm10k_rx_desc { + struct { + __le64 pkt_addr; /* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ + __le64 reserved; /* Empty space, RSS hash */ + __le64 timestamp; + } q; /* Read, Writeback, 64b quad-words */ + struct { + __le32 data; /* RSS and header data */ + __le32 rss; /* RSS Hash */ + __le32 staterr; + __le32 vlan_len; + __le32 glort; /* sglort/dglort */ + } d; /* Writeback, 32b double-words */ + struct { + __le16 pkt_info; /* RSS, Pkt type */ + __le16 hdr_info; /* Splithdr, hdrlen, xC */ + __le16 rss_lower; + __le16 rss_upper; + __le16 status; /* status/error */ + __le16 csum_err; /* checksum or extended error value */ + __le16 length; /* Packet length */ + __le16 vlan; /* VLAN tag */ + __le16 dglort; + __le16 sglort; + } w; /* Writeback, 16b words */ +}; + +#define FM10K_RXD_RSSTYPE_MASK 0x000F +enum fm10k_rdesc_rss_type { + FM10K_RSSTYPE_NONE = 0x0, + FM10K_RSSTYPE_IPV4_TCP = 0x1, + FM10K_RSSTYPE_IPV4 = 0x2, + FM10K_RSSTYPE_IPV6_TCP = 0x3, + /* Reserved 0x4 */ + FM10K_RSSTYPE_IPV6 = 0x5, + /* Reserved 0x6 */ + FM10K_RSSTYPE_IPV4_UDP = 0x7, + FM10K_RSSTYPE_IPV6_UDP = 0x8 + /* Reserved 0x9 - 0xF */ +}; + +#define FM10K_RXD_PKTTYPE_MASK 0x03F0 +#define FM10K_RXD_PKTTYPE_MASK_L3 0x0070 +#define FM10K_RXD_PKTTYPE_MASK_L4 0x0380 +#define FM10K_RXD_PKTTYPE_SHIFT 4 +#define FM10K_RXD_PKTTYPE_INNER_MASK_L3 0x1C00 +#define FM10K_RXD_PKTTYPE_INNER_MASK_L4 0xE000 +#define FM10K_RXD_PKTTYPE_INNER_SHIFT 10 +enum fm10k_rdesc_pkt_type { + /* L3 type */ + FM10K_PKTTYPE_OTHER = 0x00, + FM10K_PKTTYPE_IPV4 = 0x01, + FM10K_PKTTYPE_IPV4_EX = 0x02, + FM10K_PKTTYPE_IPV6 = 0x03, + FM10K_PKTTYPE_IPV6_EX = 0x04, + + /* L4 type */ + FM10K_PKTTYPE_TCP = 0x08, + FM10K_PKTTYPE_UDP = 0x10, + FM10K_PKTTYPE_GRE = 0x18, + FM10K_PKTTYPE_VXLAN = 0x20, + FM10K_PKTTYPE_NVGRE = 0x28, + FM10K_PKTTYPE_GENEVE = 0x30 +}; + +#define FM10K_RXD_HDR_INFO_XC_MASK 0x0006 +enum fm10k_rxdesc_xc { + FM10K_XC_UNICAST = 0x0, + FM10K_XC_MULTICAST = 0x4, + FM10K_XC_BROADCAST = 0x6 +}; + +#define FM10K_RXD_HDR_INFO_LEN_SHIFT 5 +#define FM10K_RXD_HDR_INFO_SPH 0x8000 + +#define FM10K_RXD_STATUS_DD 0x0001 /* Descriptor done */ +#define FM10K_RXD_STATUS_EOP 0x0002 /* End of packet */ +#define FM10K_RXD_STATUS_VEXT 0x0004 /* A VLAN tag is present */ +#define FM10K_RXD_STATUS_IPCS 0x0008 /* Indicates IPv4 csum */ +#define FM10K_RXD_STATUS_L4CS 0x0010 /* Indicates an L4 csum */ +#define FM10K_RXD_STATUS_IPCS2 0x0020 /* Inner header IPv4 csum */ +#define FM10K_RXD_STATUS_L4CS2 0x0040 /* Inner header L4 csum */ +#define FM10K_RXD_STATUS_IPFRAG_MASK 0x0180 /* Fragment mask */ +#define FM10K_RXD_STATUS_IPFRAG_CSUM 0x0100 /* Fragment w/ CSUM field */ +#define FM10K_RXD_STATUS_VEXT2 0x0200 /* A custom tag is present */ +#define FM10K_RXD_STATUS_HBO 0x0400 /* header buffer overrun */ +#define FM10K_RXD_STATUS_L4E2 0x0800 /* Inner header L4 csum err */ +#define FM10K_RXD_STATUS_IPE2 0x1000 /* Inner header IPv4 csum err */ +#define FM10K_RXD_STATUS_RXE 0x2000 /* Generic Rx error */ +#define FM10K_RXD_STATUS_L4E 0x4000 /* L4 csum error */ +#define FM10K_RXD_STATUS_IPE 0x8000 /* IPv4 csum error */ + +#define FM10K_RXD_ERR_SWITCH_ERROR 0x0001 /* Switch found bad packet */ +#define FM10K_RXD_ERR_NO_DESCRIPTOR 0x0002 /* No descriptor available */ +#define FM10K_RXD_ERR_PP_ERROR 0x0004 /* RAM error during processing */ +#define FM10K_RXD_ERR_SWITCH_READY 0x0008 /* Link 
transition mid-packet */ +#define FM10K_RXD_ERR_TOO_BIG 0x0010 /* Pkt too big for single buf */ + +#define FM10K_RXD_VLAN_ID_MASK 0x0FFF +#define FM10K_RXD_VLAN_PRI_SHIFT FM10K_TXD_VLAN_PRI_SHIFT + +struct fm10k_ftag { + __be16 swpri_type_user; + __be16 vlan; + __be16 sglort; + __be16 dglort; +}; + +#endif /* _FM10K_TYPE_H */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_vf.c b/lib/librte_pmd_fm10k/base/fm10k_vf.c new file mode 100644 index 0000000..2246688 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_vf.c @@ -0,0 +1,641 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_vf.h" + +/** + * fm10k_stop_hw_vf - Stop Tx/Rx units + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_stop_hw_vf(struct fm10k_hw *hw) +{ + u8 *perm_addr = hw->mac.perm_addr; + u32 bal = 0, bah = 0; + s32 err; + u16 i; + + DEBUGFUNC("fm10k_stop_hw_vf"); + + /* we need to disable the queues before taking further steps */ + err = fm10k_stop_hw_generic(hw); + if (err) + return err; + + /* If permanent address is set then we need to restore it */ + if (FM10K_IS_VALID_ETHER_ADDR(perm_addr)) { + bal = (((u32)perm_addr[3]) << 24) | + (((u32)perm_addr[4]) << 16) | + (((u32)perm_addr[5]) << 8); + bah = (((u32)0xFF) << 24) | + (((u32)perm_addr[0]) << 16) | + (((u32)perm_addr[1]) << 8) | + ((u32)perm_addr[2]); + } + + /* The queues have already been disabled so we just need to + * update their base address registers + */ + for (i = 0; i < hw->mac.max_queues; i++) { + FM10K_WRITE_REG(hw, FM10K_TDBAL(i), bal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(i), bah); + FM10K_WRITE_REG(hw, FM10K_RDBAL(i), bal); + FM10K_WRITE_REG(hw, FM10K_RDBAH(i), bah); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_reset_hw_vf - VF hardware reset + * @hw: pointer to hardware structure + * + * This function should return the hardware to a state similar to the + * one it is in after just being initialized. 
+ **/ +STATIC s32 fm10k_reset_hw_vf(struct fm10k_hw *hw) +{ + s32 err; + + DEBUGFUNC("fm10k_reset_hw_vf"); + + /* shut down queues we own and reset DMA configuration */ + err = fm10k_stop_hw_vf(hw); + if (err) + return err; + + /* Initiate VF reset */ + FM10K_WRITE_REG(hw, FM10K_VFCTRL, FM10K_VFCTRL_RST); + + /* Flush write and allow 100us for reset to complete */ + FM10K_WRITE_FLUSH(hw); + usec_delay(FM10K_RESET_TIMEOUT); + + /* Clear reset bit and verify it was cleared */ + FM10K_WRITE_REG(hw, FM10K_VFCTRL, 0); + if (FM10K_READ_REG(hw, FM10K_VFCTRL) & FM10K_VFCTRL_RST) + err = FM10K_ERR_RESET_FAILED; + + return err; +} + +/** + * fm10k_init_hw_vf - VF hardware initialization + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_init_hw_vf(struct fm10k_hw *hw) +{ + u32 tqdloc, tqdloc0 = ~FM10K_READ_REG(hw, FM10K_TQDLOC(0)); + s32 err; + u16 i; + + DEBUGFUNC("fm10k_init_hw_vf"); + + /* assume we always have at least 1 queue */ + for (i = 1; tqdloc0 && (i < FM10K_MAX_QUEUES_POOL); i++) { + /* verify the Descriptor cache offsets are increasing */ + tqdloc = ~FM10K_READ_REG(hw, FM10K_TQDLOC(i)); + if (!tqdloc || (tqdloc == tqdloc0)) + break; + + /* check to verify the PF doesn't own any of our queues */ + if (!~FM10K_READ_REG(hw, FM10K_TXQCTL(i)) || + !~FM10K_READ_REG(hw, FM10K_RXQCTL(i))) + break; + } + + /* shut down queues we own and reset DMA configuration */ + err = fm10k_disable_queues_generic(hw, i); + if (err) + return err; + + /* record maximum queue count */ + hw->mac.max_queues = i; + + /* fetch default VLAN */ + hw->mac.default_vid = (FM10K_READ_REG(hw, FM10K_TXQCTL(0)) & + FM10K_TXQCTL_VID_MASK) >> FM10K_TXQCTL_VID_SHIFT; + + return FM10K_SUCCESS; +} + +/** + * fm10k_is_slot_appropriate_vf - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. Since the VF has no control over + * the "slot" it is in, always indicate that the slot is appropriate. + **/ +STATIC bool fm10k_is_slot_appropriate_vf(struct fm10k_hw *hw) +{ + UNREFERENCED_1PARAMETER(hw); + DEBUGFUNC("fm10k_is_slot_appropriate_vf"); + + return TRUE; +} + +/* This structure defines the attributes to be parsed below */ +const struct fm10k_tlv_attr fm10k_mac_vlan_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_MAC_VLAN_MSG_VLAN), + FM10K_TLV_ATTR_BOOL(FM10K_MAC_VLAN_MSG_SET), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_MAC), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_DEFAULT_MAC), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_MULTICAST), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_update_vlan_vf - Update status of VLAN ID in VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @vsi: Reserved, should always be 0 + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for this VF.
+ **/ +STATIC s32 fm10k_update_vlan_vf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[4]; + + /* verify the index is not set */ + if (vsi) + return FM10K_ERR_PARAM; + + /* verify upper 4 bits of vid and length are 0 */ + if ((vid << 16 | vid) >> 28) + return FM10K_ERR_PARAM; + + /* encode set bit into the VLAN ID */ + if (!set) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_u32(msg, FM10K_MAC_VLAN_MSG_VLAN, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_msg_mac_vlan_vf - Read device MAC address from mailbox message + * @hw: pointer to the HW structure + * @results: Attributes for message + * @mbx: unused mailbox data + * + * This function should determine the MAC address for the VF + **/ +s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u8 perm_addr[ETH_ALEN]; + u16 vid; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_mac_vlan_vf"); + + /* record MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan( + results[FM10K_MAC_VLAN_MSG_DEFAULT_MAC], + perm_addr, &vid); + if (err) + return err; + + memcpy(hw->mac.perm_addr, perm_addr, ETH_ALEN); + hw->mac.default_vid = vid & (FM10K_VLAN_TABLE_VID_MAX - 1); + hw->mac.vlan_override = !!(vid & FM10K_VLAN_CLEAR); + + return FM10K_SUCCESS; +} + +/** + * fm10k_read_mac_addr_vf - Read device MAC address + * @hw: pointer to the HW structure + * + * This function should determine the MAC address for the VF + **/ +STATIC s32 fm10k_read_mac_addr_vf(struct fm10k_hw *hw) +{ + u8 perm_addr[ETH_ALEN]; + u32 base_addr; + + DEBUGFUNC("fm10k_read_mac_addr_vf"); + + base_addr = FM10K_READ_REG(hw, FM10K_TDBAL(0)); + + /* last byte should be 0 */ + if (base_addr << 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[3] = (u8)(base_addr >> 24); + perm_addr[4] = (u8)(base_addr >> 16); + perm_addr[5] = (u8)(base_addr >> 8); + + base_addr = FM10K_READ_REG(hw, FM10K_TDBAH(0)); + + /* first byte should be all 1's */ + if ((~base_addr) >> 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[0] = (u8)(base_addr >> 16); + perm_addr[1] = (u8)(base_addr >> 8); + perm_addr[2] = (u8)(base_addr); + + memcpy(hw->mac.perm_addr, perm_addr, ETH_ALEN); + memcpy(hw->mac.addr, perm_addr, ETH_ALEN); + + return FM10K_SUCCESS; +} + +/** + * fm10k_update_uc_addr_vf - Update device unicast addresses + * @hw: pointer to the HW structure + * @glort: unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure - unused + * + * This function is used to add or remove unicast MAC addresses for + * the VF. 
+ **/ +STATIC s32 fm10k_update_uc_addr_vf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[7]; + + DEBUGFUNC("fm10k_update_uc_addr_vf"); + + UNREFERENCED_2PARAMETER(glort, flags); + + /* verify VLAN ID is valid */ + if (vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* verify MAC address is valid */ + if (!FM10K_IS_VALID_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + /* verify we are not locked down on the MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(hw->mac.perm_addr) && + memcmp(hw->mac.perm_addr, mac, ETH_ALEN)) + return FM10K_ERR_PARAM; + + /* add bit to notify us if this is a set or clear operation */ + if (!add) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_MAC, mac, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_mc_addr_vf - Update device multicast addresses + * @hw: pointer to the HW structure + * @glort: unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses for + * the VF. + **/ +STATIC s32 fm10k_update_mc_addr_vf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[7]; + + DEBUGFUNC("fm10k_update_mc_addr_vf"); + + UNREFERENCED_1PARAMETER(glort); + + /* verify VLAN ID is valid */ + if (vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* verify multicast address is valid */ + if (!FM10K_IS_MULTICAST_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + /* add bit to notify us if this is a set or clear operation */ + if (!add) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_MULTICAST, + mac, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_int_moderator_vf - Request update of interrupt moderator list + * @hw: pointer to hardware structure + * + * This function will issue a request to the PF to rescan our MSI-X table + * and to update the interrupt moderator linked list. + **/ +STATIC void fm10k_update_int_moderator_vf(struct fm10k_hw *hw) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[1]; + + /* generate MSI-X request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MSIX); + + /* load onto outgoing mailbox */ + mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/* This structure defines the attributes to be parsed below */ +const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[] = { + FM10K_TLV_ATTR_BOOL(FM10K_LPORT_STATE_MSG_DISABLE), + FM10K_TLV_ATTR_U8(FM10K_LPORT_STATE_MSG_XCAST_MODE), + FM10K_TLV_ATTR_BOOL(FM10K_LPORT_STATE_MSG_READY), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_lport_state_vf - Message handler for lport_state message from PF + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler is meant to capture the indication from the PF that we + * are ready to bring up the interface.
+ **/ +s32 fm10k_msg_lport_state_vf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_lport_state_vf"); + + hw->mac.dglort_map = !results[FM10K_LPORT_STATE_MSG_READY] ? + FM10K_DGLORTMAP_NONE : FM10K_DGLORTMAP_ZERO; + + return FM10K_SUCCESS; +} + +/** + * fm10k_update_lport_state_vf - Update device state in lower device + * @hw: pointer to the HW structure + * @glort: unused + * @count: number of logical ports to enable - unused (always 1) + * @enable: boolean value indicating if this is an enable or disable request + * + * Notify the lower device of a state change. If the lower device is + * enabled we can add filters, if it is disabled all filters for this + * logical port are flushed. + **/ +STATIC s32 fm10k_update_lport_state_vf(struct fm10k_hw *hw, u16 glort, + u16 count, bool enable) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[2]; + + UNREFERENCED_2PARAMETER(glort, count); + DEBUGFUNC("fm10k_update_lport_state_vf"); + + /* reset glort mask 0 as we have to wait to be enabled */ + hw->mac.dglort_map = FM10K_DGLORTMAP_NONE; + + /* generate port state request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + if (!enable) + fm10k_tlv_attr_put_bool(msg, FM10K_LPORT_STATE_MSG_DISABLE); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_xcast_mode_vf - Request update of multicast mode + * @hw: pointer to hardware structure + * @glort: unused + * @mode: integer value indicating mode being requested + * + * This function will attempt to request a higher mode for the port + * so that it can enable either multicast, multicast promiscuous, or + * promiscuous mode of operation. + **/ +STATIC s32 fm10k_update_xcast_mode_vf(struct fm10k_hw *hw, u16 glort, u8 mode) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3]; + + UNREFERENCED_1PARAMETER(glort); + DEBUGFUNC("fm10k_update_xcast_mode_vf"); + + if (mode > FM10K_XCAST_MODE_NONE) + return FM10K_ERR_PARAM; + + /* generate message requesting to change xcast mode */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + fm10k_tlv_attr_put_u8(msg, FM10K_LPORT_STATE_MSG_XCAST_MODE, mode); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +const struct fm10k_tlv_attr fm10k_1588_msg_attr[] = { + FM10K_TLV_ATTR_U64(FM10K_1588_MSG_TIMESTAMP), + FM10K_TLV_ATTR_LAST +}; + +/* currently there is no shared 1588 timestamp handler */ + +/** + * fm10k_update_hw_stats_vf - Updates hardware related statistics of VF + * @hw: pointer to hardware structure + * @stats: pointer to statistics structure + * + * This function collects and aggregates per queue hardware statistics. + **/ +STATIC void fm10k_update_hw_stats_vf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_update_hw_stats_vf"); + + fm10k_update_hw_stats_q(hw, stats->q, 0, hw->mac.max_queues); +} + +/** + * fm10k_rebind_hw_stats_vf - Resets base for hardware statistics of VF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function resets the base for queue hardware statistics. 
+ **/ +STATIC void fm10k_rebind_hw_stats_vf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_rebind_hw_stats_vf"); + + /* Unbind Queue Statistics */ + fm10k_unbind_hw_stats_q(stats->q, 0, hw->mac.max_queues); + + /* Reinitialize bases for all stats */ + fm10k_update_hw_stats_vf(hw, stats); +} + +/** + * fm10k_configure_dglort_map_vf - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned. + **/ +STATIC s32 fm10k_configure_dglort_map_vf(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + UNREFERENCED_1PARAMETER(hw); + DEBUGFUNC("fm10k_configure_dglort_map_vf"); + + /* verify the dglort pointer */ + if (!dglort) + return FM10K_ERR_PARAM; + + /* stub for now until we determine correct message for this */ + + return FM10K_SUCCESS; +} + +/** + * fm10k_adjust_systime_vf - Adjust systime frequency + * @hw: pointer to hardware structure + * @ppb: adjustment rate in parts per billion + * + * This function takes an adjustment rate in parts per billion and will + * verify that this value is 0 as the VF cannot support adjusting the + * systime clock. + * + * If the ppb value is non-zero the return is ERR_PARAM else success + **/ +STATIC s32 fm10k_adjust_systime_vf(struct fm10k_hw *hw, s32 ppb) +{ + UNREFERENCED_1PARAMETER(hw); + DEBUGFUNC("fm10k_adjust_systime_vf"); + + /* The VF cannot adjust the clock frequency, however it should + * already have a syntonic clock with whichever host interface is + * running as the master for the host interface clock domain so + * there should be not frequency adjustment necessary. + */ + return ppb ? FM10K_ERR_PARAM : FM10K_SUCCESS; +} + +/** + * fm10k_read_systime_vf - Reads value of systime registers + * @hw: pointer to the hardware structure + * + * Function reads the content of 2 registers, combined to represent a 64 bit + * value measured in nanoseconds. In order to guarantee the value is accurate + * we check the 32 most significant bits both before and after reading the + * 32 least significant bits to verify they didn't change as we were reading + * the registers. + **/ +static u64 fm10k_read_systime_vf(struct fm10k_hw *hw) +{ + u32 systime_l, systime_h, systime_tmp; + + systime_h = fm10k_read_reg(hw, FM10K_VFSYSTIME + 1); + + do { + systime_tmp = systime_h; + systime_l = fm10k_read_reg(hw, FM10K_VFSYSTIME); + systime_h = fm10k_read_reg(hw, FM10K_VFSYSTIME + 1); + } while (systime_tmp != systime_h); + + return ((u64)systime_h << 32) | systime_l; +} + +static const struct fm10k_msg_data fm10k_msg_data_vf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_msg_mac_vlan_vf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_init_ops_vf - Inits func ptrs and MAC type + * @hw: pointer to hardware structure + * + * Initialize the function pointers and assign the MAC type for VF. + * Does not touch the hardware. 
+ **/ +s32 fm10k_init_ops_vf(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + + DEBUGFUNC("fm10k_init_ops_vf"); + + fm10k_init_ops_generic(hw); + + mac->ops.reset_hw = &fm10k_reset_hw_vf; + mac->ops.init_hw = &fm10k_init_hw_vf; + mac->ops.start_hw = &fm10k_start_hw_generic; + mac->ops.stop_hw = &fm10k_stop_hw_vf; + mac->ops.is_slot_appropriate = &fm10k_is_slot_appropriate_vf; + mac->ops.update_vlan = &fm10k_update_vlan_vf; + mac->ops.read_mac_addr = &fm10k_read_mac_addr_vf; + mac->ops.update_uc_addr = &fm10k_update_uc_addr_vf; + mac->ops.update_mc_addr = &fm10k_update_mc_addr_vf; + mac->ops.update_xcast_mode = &fm10k_update_xcast_mode_vf; + mac->ops.update_int_moderator = &fm10k_update_int_moderator_vf; + mac->ops.update_lport_state = &fm10k_update_lport_state_vf; + mac->ops.update_hw_stats = &fm10k_update_hw_stats_vf; + mac->ops.rebind_hw_stats = &fm10k_rebind_hw_stats_vf; + mac->ops.configure_dglort_map = &fm10k_configure_dglort_map_vf; + mac->ops.get_host_state = &fm10k_get_host_state_generic; + mac->ops.adjust_systime = &fm10k_adjust_systime_vf; + mac->ops.read_systime = &fm10k_read_systime_vf, + + mac->max_msix_vectors = fm10k_get_pcie_msix_count_generic(hw); + + return fm10k_pfvf_mbx_init(hw, &hw->mbx, fm10k_msg_data_vf, 0); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_vf.h b/lib/librte_pmd_fm10k/base/fm10k_vf.h new file mode 100644 index 0000000..0438542 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_vf.h @@ -0,0 +1,91 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _FM10K_VF_H_ +#define _FM10K_VF_H_ + +#include "fm10k_type.h" +#include "fm10k_common.h" + +enum fm10k_vf_tlv_msg_id { + FM10K_VF_MSG_ID_TEST = 0, /* msg ID reserved for testing */ + FM10K_VF_MSG_ID_MSIX, + FM10K_VF_MSG_ID_MAC_VLAN, + FM10K_VF_MSG_ID_LPORT_STATE, + FM10K_VF_MSG_ID_1588, + FM10K_VF_MSG_ID_MAX, +}; + +enum fm10k_tlv_mac_vlan_attr_id { + FM10K_MAC_VLAN_MSG_VLAN, + FM10K_MAC_VLAN_MSG_SET, + FM10K_MAC_VLAN_MSG_MAC, + FM10K_MAC_VLAN_MSG_DEFAULT_MAC, + FM10K_MAC_VLAN_MSG_MULTICAST, + FM10K_MAC_VLAN_MSG_ID_MAX +}; + +enum fm10k_tlv_lport_state_attr_id { + FM10K_LPORT_STATE_MSG_DISABLE, + FM10K_LPORT_STATE_MSG_XCAST_MODE, + FM10K_LPORT_STATE_MSG_READY, + FM10K_LPORT_STATE_MSG_MAX +}; + +enum fm10k_tlv_1588_attr_id { + FM10K_1588_MSG_TIMESTAMP, + FM10K_1588_MSG_MAX +}; + +#define FM10K_VF_MSG_MSIX_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_MSIX, NULL, func) + +s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_mac_vlan_msg_attr[]; +#define FM10K_VF_MSG_MAC_VLAN_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_MAC_VLAN, \ + fm10k_mac_vlan_msg_attr, func) + +s32 fm10k_msg_lport_state_vf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[]; +#define FM10K_VF_MSG_LPORT_STATE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_LPORT_STATE, \ + fm10k_lport_state_msg_attr, func) + +extern const struct fm10k_tlv_attr fm10k_1588_msg_attr[]; +#define FM10K_VF_MSG_1588_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_1588, fm10k_1588_msg_attr, func) + +s32 fm10k_init_ops_vf(struct fm10k_hw *hw); +#endif /* _FM10K_VF_H */ -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
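Reading the VF code above is easier with its consumer in mind: nothing calls these STATIC functions directly; they are reached only through the ops table filled in by fm10k_init_ops_vf(). The sketch below is a rough illustration of that flow and is not part of the patch: the example_vf_bringup() helper and its bar0 argument are made up, while the fm10k_* identifiers are the ones defined in this series (fm10k_init_shared_code() comes from fm10k_api.c in the same base-driver patch).

#include <string.h>

#include "fm10k_api.h"

/* Hypothetical bring-up helper: only the fm10k_* names come from the base
 * driver; the helper itself and its bar0 argument are illustrative.
 */
static s32 example_vf_bringup(struct fm10k_hw *hw, void *bar0)
{
	s32 err;

	/* fm10k_init_shared_code() expects a zeroed hw struct with the PCI
	 * identifiers and register mapping already filled in.
	 */
	memset(hw, 0, sizeof(*hw));
	hw->hw_addr   = (u32 *)bar0;
	hw->vendor_id = FM10K_INTEL_VENDOR_ID;
	hw->device_id = FM10K_DEV_ID_VF;

	err = fm10k_init_shared_code(hw);	/* installs the VF ops above */
	if (err != FM10K_SUCCESS)
		return err;

	err = hw->mac.ops.reset_hw(hw);		/* fm10k_reset_hw_vf */
	if (!err)
		err = hw->mac.ops.init_hw(hw);	/* probes max_queues, default_vid */
	if (!err)
		err = hw->mac.ops.read_mac_addr(hw); /* MAC recovered from TDBAL/TDBAH */

	return err;
}

The PF path is symmetric: fm10k_set_mac_type() maps FM10K_DEV_ID_PF to fm10k_mac_pf, and fm10k_init_shared_code() then calls fm10k_init_ops_pf() instead.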
* [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 01/15] fm10k: add base driver Chen Jing D(Mark) @ 2015-02-11 1:31 ` Chen Jing D(Mark) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 01/15] fm10k: add base driver Chen Jing D(Mark) ` (15 more replies) 0 siblings, 16 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-11 1:31 UTC (permalink / raw) To: dev From: "Chen Jing D(Mark)" <jing.d.chen@intel.com> The patch set add poll mode driver for the host interface of Intel fm10k series of silicons, which integrate NIC and switch functionalities. The patch set include below features: 1. Basic RX/TX functions for PF/VF. 2. Interrupt handling mechanism for PF/VF. 3. per queue start/stop functions for PF/VF. 4. Mailbox handling between PF/VF and PF/Switch Manager. 5. Receive Side Scaling (RSS) for PF/VF. 6. Scatter receive function for PF/VF. 7. reta update/query for PF/VF. 8. VLAN filter set for PF. 9. Link status query for PF/VF. Change in v4: - Change commit log to remove improper words. Changes in v3: - Update base driver. - Define several macros to pass base driver compile. Changes in v2: - Merge 3 patches into 1 to configure fm10k compile environment. - Rework on log code to follow style in ixgbe. - Rework log message, remove redundant '\n' - Update Copyright year from "2014" to "2015" - Change base driver directory name from SHARED to base - Add more description in log for patch "add PF and VF interrupt" - Merge 2 patches into 1 to register fm10k driver - Define macro to replace numeric for lower 32-bit mask. Jeff Shaw (15): fm10k: add base driver eal: add fm10k device id fm10k: register fm10k pmd PF driver Change config files to add fm10k into compile fm10k: add reta update/requery functions fm10k: add rx_queue_setup/release function fm10k: add tx_queue_setup/release function fm10k: add RX/TX single queue start/stop function fm10k: add dev start/stop functions fm10k: add receive and tranmit function fm10k: add PF RSS support fm10k: Add scatter receive function fm10k: add function to set vlan fm10k: Add SRIOV-VF support fm10k: add PF and VF interrupt handling function config/common_bsdapp | 11 + config/common_linuxapp | 11 + lib/Makefile | 1 + lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 + lib/librte_pmd_fm10k/Makefile | 96 + lib/librte_pmd_fm10k/base/fm10k_api.c | 341 ++++ lib/librte_pmd_fm10k/base/fm10k_api.h | 61 + lib/librte_pmd_fm10k/base/fm10k_common.c | 572 ++++++ lib/librte_pmd_fm10k/base/fm10k_common.h | 52 + lib/librte_pmd_fm10k/base/fm10k_mbx.c | 2185 +++++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_mbx.h | 329 ++++ lib/librte_pmd_fm10k/base/fm10k_osdep.h | 148 ++ lib/librte_pmd_fm10k/base/fm10k_pf.c | 1992 +++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_pf.h | 155 ++ lib/librte_pmd_fm10k/base/fm10k_tlv.c | 914 ++++++++++ lib/librte_pmd_fm10k/base/fm10k_tlv.h | 199 ++ lib/librte_pmd_fm10k/base/fm10k_type.h | 937 ++++++++++ lib/librte_pmd_fm10k/base/fm10k_vf.c | 641 +++++++ lib/librte_pmd_fm10k/base/fm10k_vf.h | 91 + lib/librte_pmd_fm10k/fm10k.h | 293 +++ lib/librte_pmd_fm10k/fm10k_ethdev.c | 1868 +++++++++++++++++++ lib/librte_pmd_fm10k/fm10k_logs.h | 78 + lib/librte_pmd_fm10k/fm10k_rxtx.c | 427 +++++ mk/rte.app.mk | 4 + 24 files changed, 11428 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/Makefile create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.c 
create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_osdep.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_type.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.h create mode 100644 lib/librte_pmd_fm10k/fm10k.h create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
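The cover letter above lists what the PMD exposes; from an application's point of view all of it is reached through the standard ethdev API. The sketch below is a minimal illustration of that usage, not code from the patch set: the port id, descriptor counts, and the externally created mbuf pool are assumptions, and only the rte_eth_* calls are the regular DPDK API the driver plugs into.

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Illustrative bring-up of an fm10k-backed port via the generic ethdev API. */
static int fm10k_port_bringup_example(uint8_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf conf = {
		.rxmode.mq_mode = ETH_MQ_RX_RSS,		/* feature 5: RSS */
		.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IPV4 | ETH_RSS_IPV6,
	};
	struct rte_eth_link link;
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret < 0)
		return ret;

	/* queue setup/release are implemented per queue by the PMD */
	ret = rte_eth_rx_queue_setup(port_id, 0, 512,
			rte_eth_dev_socket_id(port_id), NULL, mb_pool);
	if (ret < 0)
		return ret;

	ret = rte_eth_tx_queue_setup(port_id, 0, 512,
			rte_eth_dev_socket_id(port_id), NULL);
	if (ret < 0)
		return ret;

	ret = rte_eth_dev_start(port_id);		/* features 1, 2: Rx/Tx, interrupts */
	if (ret < 0)
		return ret;

	rte_eth_link_get_nowait(port_id, &link);	/* feature 9: link status query */

	return link.link_status ? 0 : -1;
}

Per queue start/stop (feature 3) can additionally be driven through rte_eth_dev_rx_queue_start()/rte_eth_dev_tx_queue_start() and their stop counterparts once the port is configured.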
* [dpdk-dev] [PATCH v4 01/15] fm10k: add base driver 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) @ 2015-02-11 1:31 ` Chen Jing D(Mark) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 02/15] eal: add fm10k device id Chen Jing D(Mark) ` (14 subsequent siblings) 15 siblings, 1 reply; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-11 1:31 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Base driver is developped and maintained by Intel ND team, includes basic functional service to Intel fm10k series of silicons. Any suggestion on bug fix and improvement within this directory is welcome, but need this team to change and update. Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/base/fm10k_api.c | 341 +++++ lib/librte_pmd_fm10k/base/fm10k_api.h | 61 + lib/librte_pmd_fm10k/base/fm10k_common.c | 572 ++++++++ lib/librte_pmd_fm10k/base/fm10k_common.h | 52 + lib/librte_pmd_fm10k/base/fm10k_mbx.c | 2185 ++++++++++++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_mbx.h | 329 +++++ lib/librte_pmd_fm10k/base/fm10k_osdep.h | 148 ++ lib/librte_pmd_fm10k/base/fm10k_pf.c | 1992 +++++++++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_pf.h | 155 +++ lib/librte_pmd_fm10k/base/fm10k_tlv.c | 914 +++++++++++++ lib/librte_pmd_fm10k/base/fm10k_tlv.h | 199 +++ lib/librte_pmd_fm10k/base/fm10k_type.h | 937 +++++++++++++ lib/librte_pmd_fm10k/base/fm10k_vf.c | 641 +++++++++ lib/librte_pmd_fm10k/base/fm10k_vf.h | 91 ++ 14 files changed, 8617 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_osdep.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_type.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.h diff --git a/lib/librte_pmd_fm10k/base/fm10k_api.c b/lib/librte_pmd_fm10k/base/fm10k_api.c new file mode 100644 index 0000000..c0f555c --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_api.c @@ -0,0 +1,341 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. 
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_api.h" +#include "fm10k_common.h" + +/** + * fm10k_set_mac_type - Sets MAC type + * @hw: pointer to the HW structure + * + * This function sets the mac type of the adapter based on the + * vendor ID and device ID stored in the hw structure. + **/ +s32 fm10k_set_mac_type(struct fm10k_hw *hw) +{ + s32 ret_val = FM10K_SUCCESS; + + DEBUGFUNC("fm10k_set_mac_type"); + + if (hw->vendor_id != FM10K_INTEL_VENDOR_ID) { + ERROR_REPORT2(FM10K_ERROR_UNSUPPORTED, + "Unsupported vendor id: %x\n", hw->vendor_id); + return FM10K_ERR_DEVICE_NOT_SUPPORTED; + } + + switch (hw->device_id) { + case FM10K_DEV_ID_PF: + hw->mac.type = fm10k_mac_pf; + break; + case FM10K_DEV_ID_VF: + hw->mac.type = fm10k_mac_vf; + break; + default: + ret_val = FM10K_ERR_DEVICE_NOT_SUPPORTED; + ERROR_REPORT2(FM10K_ERROR_UNSUPPORTED, + "Unsupported device id: %x\n", + hw->device_id); + break; + } + + DEBUGOUT2("fm10k_set_mac_type found mac: %d, returns: %d\n", + hw->mac.type, ret_val); + + return ret_val; +} + +/** + * fm10k_init_shared_code - Initialize the shared code + * @hw: pointer to hardware structure + * + * This will assign function pointers and assign the MAC type and PHY code. + * Does not touch the hardware. This function must be called prior to any + * other function in the shared code. The fm10k_hw structure should be + * memset to 0 prior to calling this function. The following fields in + * hw structure should be filled in prior to calling this function: + * hw_addr, back, device_id, vendor_id, subsystem_device_id, + * subsystem_vendor_id, and revision_id + **/ +s32 fm10k_init_shared_code(struct fm10k_hw *hw) +{ + s32 status; + + DEBUGFUNC("fm10k_init_shared_code"); + + /* Set the mac type */ + fm10k_set_mac_type(hw); + + switch (hw->mac.type) { + case fm10k_mac_pf: + status = fm10k_init_ops_pf(hw); + break; + case fm10k_mac_vf: + status = fm10k_init_ops_vf(hw); + break; + default: + status = FM10K_ERR_DEVICE_NOT_SUPPORTED; + break; + } + + return status; +} + +#define fm10k_call_func(hw, func, params, error) \ + ((func) ? (func params) : (error)) + +/** + * fm10k_reset_hw - Reset the hardware to known good state + * @hw: pointer to hardware structure + * + * This function should return the hardware to a state similar to the + * one it is in after being powered on. 
+ **/ +s32 fm10k_reset_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.reset_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_init_hw - Initialize the hardware + * @hw: pointer to hardware structure + * + * Initialize the hardware by resetting and then starting the hardware + **/ +s32 fm10k_init_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.init_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_stop_hw - Prepares hardware to shutdown Rx/Tx + * @hw: pointer to hardware structure + * + * Disables Rx/Tx queues and disables the DMA engine. + **/ +s32 fm10k_stop_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.stop_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_start_hw - Prepares hardware for Rx/Tx + * @hw: pointer to hardware structure + * + * This function sets the flags indicating that the hardware is ready to + * begin operation. + **/ +s32 fm10k_start_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.start_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_get_bus_info - Set PCI bus info + * @hw: pointer to hardware structure + * + * Sets the PCI bus info (speed, width, type) within the fm10k_hw structure + **/ +s32 fm10k_get_bus_info(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.get_bus_info, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_is_slot_appropriate - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. + **/ +bool fm10k_is_slot_appropriate(struct fm10k_hw *hw) +{ + if (hw->mac.ops.is_slot_appropriate) + return hw->mac.ops.is_slot_appropriate(hw); + return true; +} + +/** + * fm10k_update_vlan - Clear VLAN ID to VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @idx: Index indicating VF ID or PF ID in table + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for the corresponding function. + **/ +s32 fm10k_update_vlan(struct fm10k_hw *hw, u32 vid, u8 idx, bool set) +{ + return fm10k_call_func(hw, hw->mac.ops.update_vlan, (hw, vid, idx, set), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_read_mac_addr - Reads MAC address + * @hw: pointer to hardware structure + * + * Reads the MAC address out of the interface and stores it in the HW + * structures. + **/ +s32 fm10k_read_mac_addr(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.read_mac_addr, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_hw_stats - Update hw statistics + * @hw: pointer to hardware structure + * + * This function updates statistics that are related to hardware. + * */ +void fm10k_update_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats) +{ + if (hw->mac.ops.update_hw_stats) + hw->mac.ops.update_hw_stats(hw, stats); +} + +/** + * fm10k_rebind_hw_stats - Reset base for hw statistics + * @hw: pointer to hardware structure + * + * This function resets the base for statistics that are related to hardware. 
+ * */ +void fm10k_rebind_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats) +{ + if (hw->mac.ops.rebind_hw_stats) + hw->mac.ops.rebind_hw_stats(hw, stats); +} + +/** + * fm10k_configure_dglort_map - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned. + **/ +s32 fm10k_configure_dglort_map(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + return fm10k_call_func(hw, hw->mac.ops.configure_dglort_map, + (hw, dglort), FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_set_dma_mask - Configures PhyAddrSpace to limit DMA to system + * @hw: pointer to hardware structure + * @dma_mask: 64 bit DMA mask required for platform + * + * This function configures the endpoint to limit the access to memory + * beyond what is physically in the system. + **/ +void fm10k_set_dma_mask(struct fm10k_hw *hw, u64 dma_mask) +{ + if (hw->mac.ops.set_dma_mask) + hw->mac.ops.set_dma_mask(hw, dma_mask); +} + +/** + * fm10k_get_fault - Record a fault in one of the interface units + * @hw: pointer to hardware structure + * @type: pointer to fault type register offset + * @fault: pointer to memory location to record the fault + * + * Record the fault register contents to the fault data structure and + * clear the entry from the register. + * + * Returns ERR_PARAM if invalid register is specified or no error is present. + **/ +s32 fm10k_get_fault(struct fm10k_hw *hw, int type, struct fm10k_fault *fault) +{ + return fm10k_call_func(hw, hw->mac.ops.get_fault, (hw, type, fault), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_uc_addr - Update device unicast address + * @hw: pointer to the HW structure + * @lport: logical port ID to update - unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure - unused + * + * This function is used to add or remove unicast MAC addresses + **/ +s32 fm10k_update_uc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + return fm10k_call_func(hw, hw->mac.ops.update_uc_addr, + (hw, lport, mac, vid, add, flags), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_mc_addr - Update device multicast address + * @hw: pointer to the HW structure + * @lport: logical port ID to update - unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses + **/ +s32 fm10k_update_mc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add) +{ + return fm10k_call_func(hw, hw->mac.ops.update_mc_addr, + (hw, lport, mac, vid, add), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_adjust_systime - Adjust systime frequency + * @hw: pointer to hardware structure + * @ppb: adjustment rate in parts per billion + * + * This function is meant to update the frequency of the clock represented + * by the SYSTIME register. 
+ **/ +s32 fm10k_adjust_systime(struct fm10k_hw *hw, s32 ppb) +{ + return fm10k_call_func(hw, hw->mac.ops.adjust_systime, + (hw, ppb), FM10K_NOT_IMPLEMENTED); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_api.h b/lib/librte_pmd_fm10k/base/fm10k_api.h new file mode 100644 index 0000000..343d750 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_api.h @@ -0,0 +1,61 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _FM10K_API_H_ +#define _FM10K_API_H_ + +#include "fm10k_pf.h" +#include "fm10k_vf.h" + +s32 fm10k_set_mac_type(struct fm10k_hw *hw); +s32 fm10k_reset_hw(struct fm10k_hw *hw); +s32 fm10k_init_hw(struct fm10k_hw *hw); +s32 fm10k_stop_hw(struct fm10k_hw *hw); +s32 fm10k_start_hw(struct fm10k_hw *hw); +s32 fm10k_init_shared_code(struct fm10k_hw *hw); +s32 fm10k_get_bus_info(struct fm10k_hw *hw); +bool fm10k_is_slot_appropriate(struct fm10k_hw *hw); +s32 fm10k_update_vlan(struct fm10k_hw *hw, u32 vid, u8 idx, bool set); +s32 fm10k_read_mac_addr(struct fm10k_hw *hw); +void fm10k_update_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats); +void fm10k_rebind_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats); +s32 fm10k_configure_dglort_map(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort); +void fm10k_set_dma_mask(struct fm10k_hw *hw, u64 dma_mask); +s32 fm10k_get_fault(struct fm10k_hw *hw, int type, struct fm10k_fault *fault); +s32 fm10k_update_uc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add, u8 flags); +s32 fm10k_update_mc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add); +s32 fm10k_adjust_systime(struct fm10k_hw *hw, s32 ppb); +#endif /* _FM10K_API_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_common.c b/lib/librte_pmd_fm10k/base/fm10k_common.c new file mode 100644 index 0000000..a90d2f0 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_common.c @@ -0,0 +1,572 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_common.h" + +/** + * fm10k_get_bus_info_generic - Generic set PCI bus info + * @hw: pointer to hardware structure + * + * Gets the PCI bus info (speed, width, type) then calls helper function to + * store this data within the fm10k_hw structure. 
+ **/ +STATIC s32 fm10k_get_bus_info_generic(struct fm10k_hw *hw) +{ + u16 link_cap, link_status, device_cap, device_control; + + DEBUGFUNC("fm10k_get_bus_info_generic"); + + /* Get the maximum link width and speed from PCIe config space */ + link_cap = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_LINK_CAP); + + switch (link_cap & FM10K_PCIE_LINK_WIDTH) { + case FM10K_PCIE_LINK_WIDTH_1: + hw->bus_caps.width = fm10k_bus_width_pcie_x1; + break; + case FM10K_PCIE_LINK_WIDTH_2: + hw->bus_caps.width = fm10k_bus_width_pcie_x2; + break; + case FM10K_PCIE_LINK_WIDTH_4: + hw->bus_caps.width = fm10k_bus_width_pcie_x4; + break; + case FM10K_PCIE_LINK_WIDTH_8: + hw->bus_caps.width = fm10k_bus_width_pcie_x8; + break; + default: + hw->bus_caps.width = fm10k_bus_width_unknown; + break; + } + + switch (link_cap & FM10K_PCIE_LINK_SPEED) { + case FM10K_PCIE_LINK_SPEED_2500: + hw->bus_caps.speed = fm10k_bus_speed_2500; + break; + case FM10K_PCIE_LINK_SPEED_5000: + hw->bus_caps.speed = fm10k_bus_speed_5000; + break; + case FM10K_PCIE_LINK_SPEED_8000: + hw->bus_caps.speed = fm10k_bus_speed_8000; + break; + default: + hw->bus_caps.speed = fm10k_bus_speed_unknown; + break; + } + + /* Get the PCIe maximum payload size for the PCIe function */ + device_cap = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_DEV_CAP); + + switch (device_cap & FM10K_PCIE_DEV_CAP_PAYLOAD) { + case FM10K_PCIE_DEV_CAP_PAYLOAD_128: + hw->bus_caps.payload = fm10k_bus_payload_128; + break; + case FM10K_PCIE_DEV_CAP_PAYLOAD_256: + hw->bus_caps.payload = fm10k_bus_payload_256; + break; + case FM10K_PCIE_DEV_CAP_PAYLOAD_512: + hw->bus_caps.payload = fm10k_bus_payload_512; + break; + default: + hw->bus_caps.payload = fm10k_bus_payload_unknown; + break; + } + + /* Get the negotiated link width and speed from PCIe config space */ + link_status = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_LINK_STATUS); + + switch (link_status & FM10K_PCIE_LINK_WIDTH) { + case FM10K_PCIE_LINK_WIDTH_1: + hw->bus.width = fm10k_bus_width_pcie_x1; + break; + case FM10K_PCIE_LINK_WIDTH_2: + hw->bus.width = fm10k_bus_width_pcie_x2; + break; + case FM10K_PCIE_LINK_WIDTH_4: + hw->bus.width = fm10k_bus_width_pcie_x4; + break; + case FM10K_PCIE_LINK_WIDTH_8: + hw->bus.width = fm10k_bus_width_pcie_x8; + break; + default: + hw->bus.width = fm10k_bus_width_unknown; + break; + } + + switch (link_status & FM10K_PCIE_LINK_SPEED) { + case FM10K_PCIE_LINK_SPEED_2500: + hw->bus.speed = fm10k_bus_speed_2500; + break; + case FM10K_PCIE_LINK_SPEED_5000: + hw->bus.speed = fm10k_bus_speed_5000; + break; + case FM10K_PCIE_LINK_SPEED_8000: + hw->bus.speed = fm10k_bus_speed_8000; + break; + default: + hw->bus.speed = fm10k_bus_speed_unknown; + break; + } + + /* Get the negotiated PCIe maximum payload size for the PCIe function */ + device_control = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_DEV_CTRL); + + switch (device_control & FM10K_PCIE_DEV_CTRL_PAYLOAD) { + case FM10K_PCIE_DEV_CTRL_PAYLOAD_128: + hw->bus.payload = fm10k_bus_payload_128; + break; + case FM10K_PCIE_DEV_CTRL_PAYLOAD_256: + hw->bus.payload = fm10k_bus_payload_256; + break; + case FM10K_PCIE_DEV_CTRL_PAYLOAD_512: + hw->bus.payload = fm10k_bus_payload_512; + break; + default: + hw->bus.payload = fm10k_bus_payload_unknown; + break; + } + + return FM10K_SUCCESS; +} + +u16 fm10k_get_pcie_msix_count_generic(struct fm10k_hw *hw) +{ + u16 msix_count; + + DEBUGFUNC("fm10k_get_pcie_msix_count_generic"); + + /* read in value from MSI-X capability register */ + msix_count = FM10K_READ_PCI_WORD(hw, FM10K_PCI_MSIX_MSG_CTRL); + msix_count &= 
FM10K_PCI_MSIX_MSG_CTRL_TBL_SZ_MASK; + + /* MSI-X count is zero-based in HW */ + msix_count++; + + if (msix_count > FM10K_MAX_MSIX_VECTORS) + msix_count = FM10K_MAX_MSIX_VECTORS; + + return msix_count; +} + +/** + * fm10k_init_ops_generic - Inits function ptrs + * @hw: pointer to the hardware structure + * + * Initialize the function pointers. + **/ +s32 fm10k_init_ops_generic(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + + DEBUGFUNC("fm10k_init_ops_generic"); + + /* MAC */ + mac->ops.get_bus_info = &fm10k_get_bus_info_generic; + + /* initialize GLORT state to avoid any false hits */ + mac->dglort_map = FM10K_DGLORTMAP_NONE; + + return FM10K_SUCCESS; +} + +/** + * fm10k_start_hw_generic - Prepare hardware for Tx/Rx + * @hw: pointer to hardware structure + * + * This function sets the Tx ready flag to indicate that the Tx path has + * been initialized. + **/ +s32 fm10k_start_hw_generic(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_start_hw_generic"); + + /* set flag indicating we are beginning Tx */ + hw->mac.tx_ready = true; + + return FM10K_SUCCESS; +} + +/** + * fm10k_disable_queues_generic - Stop Tx/Rx queues + * @hw: pointer to hardware structure + * @q_cnt: number of queues to be disabled + * + **/ +s32 fm10k_disable_queues_generic(struct fm10k_hw *hw, u16 q_cnt) +{ + u32 reg; + u16 i, time; + + DEBUGFUNC("fm10k_disable_queues_generic"); + + /* clear tx_ready to prevent any false hits for reset */ + hw->mac.tx_ready = false; + + /* clear the enable bit for all rings */ + for (i = 0; i < q_cnt; i++) { + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), + reg & ~FM10K_TXDCTL_ENABLE); + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), + reg & ~FM10K_RXQCTL_ENABLE); + } + + FM10K_WRITE_FLUSH(hw); + usec_delay(1); + + /* loop through all queues to verify that they are all disabled */ + for (i = 0, time = FM10K_QUEUE_DISABLE_TIMEOUT; time;) { + /* if we are at end of rings all rings are disabled */ + if (i == q_cnt) + return FM10K_SUCCESS; + + /* if queue enables cleared, then move to next ring pair */ + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + if (!~reg || !(reg & FM10K_TXDCTL_ENABLE)) { + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + if (!~reg || !(reg & FM10K_RXQCTL_ENABLE)) { + i++; + continue; + } + } + + /* decrement time and wait 1 usec */ + time--; + if (time) + usec_delay(1); + } + + return FM10K_ERR_REQUESTS_PENDING; +} + +/** + * fm10k_stop_hw_generic - Stop Tx/Rx units + * @hw: pointer to hardware structure + * + **/ +s32 fm10k_stop_hw_generic(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_stop_hw_generic"); + + return fm10k_disable_queues_generic(hw, hw->mac.max_queues); +} + +/** + * fm10k_read_hw_stats_32b - Reads value of 32-bit registers + * @hw: pointer to the hardware structure + * @addr: address of register containing a 32-bit value + * + * Function reads the content of the register and returns the delta + * between the base and the current value. 
+ * **/ +u32 fm10k_read_hw_stats_32b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat) +{ + u32 delta = FM10K_READ_REG(hw, addr) - stat->base_l; + + DEBUGFUNC("fm10k_read_hw_stats_32b"); + + if (FM10K_REMOVED(hw->hw_addr)) + stat->base_h = 0; + + return delta; +} + +/** + * fm10k_read_hw_stats_48b - Reads value of 48-bit registers + * @hw: pointer to the hardware structure + * @addr: address of register containing the lower 32-bit value + * + * Function reads the content of 2 registers, combined to represent a 48-bit + * statistical value. Extra processing is required to handle overflowing. + * Finally, a delta value is returned representing the difference between the + * values stored in registers and values stored in the statistic counters. + * **/ +STATIC u64 fm10k_read_hw_stats_48b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat) +{ + u32 count_l; + u32 count_h; + u32 count_tmp; + u64 delta; + + DEBUGFUNC("fm10k_read_hw_stats_48b"); + + count_h = FM10K_READ_REG(hw, addr + 1); + + /* Check for overflow */ + do { + count_tmp = count_h; + count_l = FM10K_READ_REG(hw, addr); + count_h = FM10K_READ_REG(hw, addr + 1); + } while (count_h != count_tmp); + + delta = ((u64)(count_h - stat->base_h) << 32) + count_l; + delta -= stat->base_l; + + return delta & FM10K_48_BIT_MASK; +} + +/** + * fm10k_update_hw_base_48b - Updates 48-bit statistic base value + * @stat: pointer to the hardware statistic structure + * @delta: value to be updated into the hardware statistic structure + * + * Function receives a value and determines if an update is required based on + * a delta calculation. Only the base value will be updated. + **/ +STATIC void fm10k_update_hw_base_48b(struct fm10k_hw_stat *stat, u64 delta) +{ + DEBUGFUNC("fm10k_update_hw_base_48b"); + + if (!delta) + return; + + /* update lower 32 bits */ + delta += stat->base_l; + stat->base_l = (u32)delta; + + /* update upper 32 bits */ + stat->base_h += (u32)(delta >> 32); +} + +/** + * fm10k_update_hw_stats_tx_q - Updates TX queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * + * Function updates the TX queue statistics counters that are related to the + * hardware. 
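The overflow-safe read of a 48-bit counter split across two 32-bit registers, as done by fm10k_read_hw_stats_48b() above, can be sketched in isolation like this (register access is simulated, names are illustrative only):

#include <stdint.h>
#include <stdio.h>

/* stand-ins for the two hardware registers: [0] low 32 bits, [1] upper 16 bits */
static uint32_t counter_regs[2];

static uint64_t read_48b_counter(void)
{
	uint32_t lo, hi, hi_prev;

	hi = counter_regs[1];
	do {
		hi_prev = hi;
		lo = counter_regs[0];
		hi = counter_regs[1];
	} while (hi != hi_prev);	/* retry if the high word moved under us */

	return (((uint64_t)hi << 32) | lo) & 0xFFFFFFFFFFFFULL;
}

int main(void)
{
	counter_regs[0] = 0xDEADBEEF;
	counter_regs[1] = 0x1234;
	printf("counter=0x%llx\n", (unsigned long long)read_48b_counter());
	return 0;
}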
+ **/ +STATIC void fm10k_update_hw_stats_tx_q(struct fm10k_hw *hw, + struct fm10k_hw_stats_q *q, + u32 idx) +{ + u32 id_tx, id_tx_prev, tx_packets; + u64 tx_bytes = 0; + + DEBUGFUNC("fm10k_update_hw_stats_tx_q"); + + /* Retrieve TX Owner Data */ + id_tx = FM10K_READ_REG(hw, FM10K_TXQCTL(idx)); + + /* Process TX Ring */ + do { + tx_packets = fm10k_read_hw_stats_32b(hw, FM10K_QPTC(idx), + &q->tx_packets); + + if (tx_packets) + tx_bytes = fm10k_read_hw_stats_48b(hw, + FM10K_QBTC_L(idx), + &q->tx_bytes); + + /* Re-Check Owner Data */ + id_tx_prev = id_tx; + id_tx = FM10K_READ_REG(hw, FM10K_TXQCTL(idx)); + } while ((id_tx ^ id_tx_prev) & FM10K_TXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id_tx &= FM10K_TXQCTL_ID_MASK; + id_tx |= FM10K_STAT_VALID; + + /* update packet counts */ + if (q->tx_stats_idx == id_tx) { + q->tx_packets.count += tx_packets; + q->tx_bytes.count += tx_bytes; + } + + /* update bases and record ID */ + fm10k_update_hw_base_32b(&q->tx_packets, tx_packets); + fm10k_update_hw_base_48b(&q->tx_bytes, tx_bytes); + + q->tx_stats_idx = id_tx; +} + +/** + * fm10k_update_hw_stats_rx_q - Updates RX queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * + * Function updates the RX queue statistics counters that are related to the + * hardware. + **/ +STATIC void fm10k_update_hw_stats_rx_q(struct fm10k_hw *hw, + struct fm10k_hw_stats_q *q, + u32 idx) +{ + u32 id_rx, id_rx_prev, rx_packets, rx_drops; + u64 rx_bytes = 0; + + DEBUGFUNC("fm10k_update_hw_stats_rx_q"); + + /* Retrieve RX Owner Data */ + id_rx = FM10K_READ_REG(hw, FM10K_RXQCTL(idx)); + + /* Process RX Ring */ + do { + rx_drops = fm10k_read_hw_stats_32b(hw, FM10K_QPRDC(idx), + &q->rx_drops); + + rx_packets = fm10k_read_hw_stats_32b(hw, FM10K_QPRC(idx), + &q->rx_packets); + + if (rx_packets) + rx_bytes = fm10k_read_hw_stats_48b(hw, + FM10K_QBRC_L(idx), + &q->rx_bytes); + + /* Re-Check Owner Data */ + id_rx_prev = id_rx; + id_rx = FM10K_READ_REG(hw, FM10K_RXQCTL(idx)); + } while ((id_rx ^ id_rx_prev) & FM10K_RXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id_rx &= FM10K_RXQCTL_ID_MASK; + id_rx |= FM10K_STAT_VALID; + + /* update packet counts */ + if (q->rx_stats_idx == id_rx) { + q->rx_drops.count += rx_drops; + q->rx_packets.count += rx_packets; + q->rx_bytes.count += rx_bytes; + } + + /* update bases and record ID */ + fm10k_update_hw_base_32b(&q->rx_drops, rx_drops); + fm10k_update_hw_base_32b(&q->rx_packets, rx_packets); + fm10k_update_hw_base_48b(&q->rx_bytes, rx_bytes); + + q->rx_stats_idx = id_rx; +} + +/** + * fm10k_update_hw_stats_q - Updates queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * @count: number of queues to iterate over + * + * Function updates the queue statistics counters that are related to the + * hardware. 
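A hypothetical caller of this wrapper and of fm10k_unbind_hw_stats_q() (both declared in fm10k_common.h) could drive the per-queue counters as follows; the queue count and statistics array are placeholders:

#include "fm10k_common.h"

/* poll path: fold the hardware counters of nb_queues queue pairs into q_stats */
static void example_poll_queue_stats(struct fm10k_hw *hw,
				     struct fm10k_hw_stats_q *q_stats,
				     u32 nb_queues)
{
	fm10k_update_hw_stats_q(hw, q_stats, 0, nb_queues);
}

/* teardown path: invalidate the owner IDs so stale hardware updates are ignored */
static void example_release_queue_stats(struct fm10k_hw_stats_q *q_stats,
					u32 nb_queues)
{
	fm10k_unbind_hw_stats_q(q_stats, 0, nb_queues);
}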
+ **/ +void fm10k_update_hw_stats_q(struct fm10k_hw *hw, struct fm10k_hw_stats_q *q, + u32 idx, u32 count) +{ + u32 i; + + DEBUGFUNC("fm10k_update_hw_stats_q"); + + for (i = 0; i < count; i++, idx++, q++) { + fm10k_update_hw_stats_tx_q(hw, q, idx); + fm10k_update_hw_stats_rx_q(hw, q, idx); + } +} + +/** + * fm10k_unbind_hw_stats_q - Unbind the queue counters from their queues + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * @count: number of queues to iterate over + * + * Function invalidates the index values for the queues so any updates that + * may have happened are ignored and the base for the queue stats is reset. + **/ +void fm10k_unbind_hw_stats_q(struct fm10k_hw_stats_q *q, u32 idx, u32 count) +{ + u32 i; + + for (i = 0; i < count; i++, idx++, q++) { + q->rx_stats_idx = 0; + q->tx_stats_idx = 0; + } +} + +/** + * fm10k_get_host_state_generic - Returns the state of the host + * @hw: pointer to hardware structure + * @host_ready: pointer to boolean value that will record host state + * + * This function will check the health of the mailbox and Tx queue 0 + * in order to determine if we should report that the link is up or not. + **/ +s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + struct fm10k_mac_info *mac = &hw->mac; + s32 ret_val = FM10K_SUCCESS; + u32 txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(0)); + + DEBUGFUNC("fm10k_get_host_state_generic"); + + /* process upstream mailbox in case interrupts were disabled */ + mbx->ops.process(hw, mbx); + + /* If Tx is no longer enabled link should come down */ + if (!(~txdctl) || !(txdctl & FM10K_TXDCTL_ENABLE)) + mac->get_host_state = true; + + /* exit if not checking for link, or link cannot be changed */ + if (!mac->get_host_state || !(~txdctl)) + goto out; + + /* if we somehow dropped the Tx enable we should reset */ + if (hw->mac.tx_ready && !(txdctl & FM10K_TXDCTL_ENABLE)) { + ret_val = FM10K_ERR_RESET_REQUESTED; + goto out; + } + + /* if Mailbox timed out we should request reset */ + if (!mbx->timeout) { + ret_val = FM10K_ERR_RESET_REQUESTED; + goto out; + } + + /* verify Mailbox is still valid */ + if (!mbx->ops.tx_ready(mbx, FM10K_VFMBX_MSG_MTU)) + goto out; + + /* interface cannot receive traffic without logical ports */ + if (mac->dglort_map == FM10K_DGLORTMAP_NONE) + goto out; + + /* if we passed all the tests above then the switch is ready and we no + * longer need to check for link + */ + mac->get_host_state = false; + +out: + *host_ready = !mac->get_host_state; + return ret_val; +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_common.h b/lib/librte_pmd_fm10k/base/fm10k_common.h new file mode 100644 index 0000000..45fbbc0 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_common.h @@ -0,0 +1,52 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. 
Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_COMMON_H_ +#define _FM10K_COMMON_H_ + +#include "fm10k_type.h" + +u16 fm10k_get_pcie_msix_count_generic(struct fm10k_hw *hw); +s32 fm10k_init_ops_generic(struct fm10k_hw *hw); +s32 fm10k_disable_queues_generic(struct fm10k_hw *hw, u16 q_cnt); +s32 fm10k_start_hw_generic(struct fm10k_hw *hw); +s32 fm10k_stop_hw_generic(struct fm10k_hw *hw); +u32 fm10k_read_hw_stats_32b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat); +#define fm10k_update_hw_base_32b(stat, delta) ((stat)->base_l += (delta)) +void fm10k_update_hw_stats_q(struct fm10k_hw *hw, struct fm10k_hw_stats_q *q, + u32 idx, u32 count); +#define fm10k_unbind_hw_stats_32b(s) ((s)->base_h = 0) +void fm10k_unbind_hw_stats_q(struct fm10k_hw_stats_q *q, u32 idx, u32 count); +s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready); +#endif /* _FM10K_COMMON_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_mbx.c b/lib/librte_pmd_fm10k/base/fm10k_mbx.c new file mode 100644 index 0000000..2081414 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_mbx.c @@ -0,0 +1,2185 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_common.h" + +/** + * fm10k_fifo_init - Initialize a message FIFO + * @fifo: pointer to FIFO + * @buffer: pointer to memory to be used to store FIFO + * @size: maximum message size to store in FIFO, must be 2^n - 1 + **/ +STATIC void fm10k_fifo_init(struct fm10k_mbx_fifo *fifo, u32 *buffer, u16 size) +{ + fifo->buffer = buffer; + fifo->size = size; + fifo->head = 0; + fifo->tail = 0; +} + +/** + * fm10k_fifo_used - Retrieve used space in FIFO + * @fifo: pointer to FIFO + * + * This function returns the number of DWORDs used in the FIFO + **/ +STATIC u16 fm10k_fifo_used(struct fm10k_mbx_fifo *fifo) +{ + return fifo->tail - fifo->head; +} + +/** + * fm10k_fifo_unused - Retrieve unused space in FIFO + * @fifo: pointer to FIFO + * + * This function returns the number of unused DWORDs in the FIFO + **/ +STATIC u16 fm10k_fifo_unused(struct fm10k_mbx_fifo *fifo) +{ + return fifo->size + fifo->head - fifo->tail; +} + +/** + * fm10k_fifo_empty - Test to verify if fifo is empty + * @fifo: pointer to FIFO + * + * This function returns true if the FIFO is empty, else false + **/ +STATIC bool fm10k_fifo_empty(struct fm10k_mbx_fifo *fifo) +{ + return fifo->head == fifo->tail; +} + +/** + * fm10k_fifo_head_offset - returns indices of head with given offset + * @fifo: pointer to FIFO + * @offset: offset to add to head + * + * This function returns the indices into the fifo based on head + offset + **/ +STATIC u16 fm10k_fifo_head_offset(struct fm10k_mbx_fifo *fifo, u16 offset) +{ + return (fifo->head + offset) & (fifo->size - 1); +} + +/** + * fm10k_fifo_tail_offset - returns indices of tail with given offset + * @fifo: pointer to FIFO + * @offset: offset to add to tail + * + * This function returns the indices into the fifo based on tail + offset + **/ +STATIC u16 fm10k_fifo_tail_offset(struct fm10k_mbx_fifo *fifo, u16 offset) +{ + return (fifo->tail + offset) & (fifo->size - 1); +} + +/** + * fm10k_fifo_head_len - Retrieve length of first message in FIFO + * @fifo: pointer to FIFO + * + * This function returns the size of the first message in the FIFO + **/ +STATIC u16 fm10k_fifo_head_len(struct fm10k_mbx_fifo *fifo) +{ + u32 *head = fifo->buffer + fm10k_fifo_head_offset(fifo, 0); + + /* verify there is at least 1 DWORD in the fifo so *head is valid */ + if (fm10k_fifo_empty(fifo)) + return 0; + + /* retieve the message length */ + return FM10K_TLV_DWORD_LEN(*head); +} + +/** + * fm10k_fifo_head_drop - Drop the first message in FIFO + * @fifo: pointer to FIFO + * + * This function returns the size of the message dropped from the FIFO + **/ +STATIC u16 fm10k_fifo_head_drop(struct fm10k_mbx_fifo *fifo) +{ + u16 len = fm10k_fifo_head_len(fifo); + + /* update head so it is at the start of next frame */ + fifo->head += len; + + return len; +} + +/** + * fm10k_mbx_index_len - Convert a head/tail index into a length value + * @mbx: pointer to mailbox + * @head: head index + * @tail: head index + * + * 
This function takes the head and tail index and determines the length + * of the data indicated by this pair. + **/ +STATIC u16 fm10k_mbx_index_len(struct fm10k_mbx_info *mbx, u16 head, u16 tail) +{ + u16 len = tail - head; + + /* we wrapped so subtract 2, one for index 0, one for all 1s index */ + if (len > tail) + len -= 2; + + return len & ((mbx->mbmem_len << 1) - 1); +} + +/** + * fm10k_mbx_tail_add - Determine new tail value with added offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local tail index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_tail_add(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 tail = (mbx->tail + offset + 1) & ((mbx->mbmem_len << 1) - 1); + + /* add/sub 1 because we cannot have offset 0 or all 1s */ + return (tail > mbx->tail) ? --tail : ++tail; +} + +/** + * fm10k_mbx_tail_sub - Determine new tail value with subtracted offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local tail index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_tail_sub(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 tail = (mbx->tail - offset - 1) & ((mbx->mbmem_len << 1) - 1); + + /* sub/add 1 because we cannot have offset 0 or all 1s */ + return (tail < mbx->tail) ? ++tail : --tail; +} + +/** + * fm10k_mbx_head_add - Determine new head value with added offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local head index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_head_add(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 head = (mbx->head + offset + 1) & ((mbx->mbmem_len << 1) - 1); + + /* add/sub 1 because we cannot have offset 0 or all 1s */ + return (head > mbx->head) ? --head : ++head; +} + +/** + * fm10k_mbx_head_sub - Determine new head value with subtracted offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local head index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_head_sub(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 head = (mbx->head - offset - 1) & ((mbx->mbmem_len << 1) - 1); + + /* sub/add 1 because we cannot have offset 0 or all 1s */ + return (head < mbx->head) ? ++head : --head; +} + +/** + * fm10k_mbx_pushed_tail_len - Retrieve the length of message being pushed + * @mbx: pointer to mailbox + * + * This function will return the length of the message currently being + * pushed onto the tail of the Rx queue. + **/ +STATIC u16 fm10k_mbx_pushed_tail_len(struct fm10k_mbx_info *mbx) +{ + u32 *tail = mbx->rx.buffer + fm10k_fifo_tail_offset(&mbx->rx, 0); + + /* pushed tail is only valid if pushed is set */ + if (!mbx->pushed) + return 0; + + return FM10K_TLV_DWORD_LEN(*tail); +} + +/** + * fm10k_fifo_write_copy - pulls data off of msg and places it in fifo + * @fifo: pointer to FIFO + * @msg: message array to populate + * @tail_offset: additional offset to add to tail pointer + * @len: length of FIFO to copy into message header + * + * This function will take a message and copy it into a section of the + * FIFO. In order to get something into a location other than just + * the tail you can use tail_offset to adjust the pointer. 
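The index arithmetic in these helpers relies on the FIFO size being a power of two, so head and tail can run freely and only their low bits select a slot. A toy standalone model of that indexing (not the driver's structures) is:

#include <stdint.h>
#include <stdio.h>

#define FIFO_SIZE 8	/* must be a power of two */

struct toy_fifo {
	uint32_t buffer[FIFO_SIZE];
	uint16_t head;	/* consumer index, never wrapped explicitly */
	uint16_t tail;	/* producer index, never wrapped explicitly */
};

static uint16_t fifo_used(const struct toy_fifo *f)
{
	return f->tail - f->head;
}

static uint16_t fifo_unused(const struct toy_fifo *f)
{
	return FIFO_SIZE + f->head - f->tail;
}

static int fifo_push(struct toy_fifo *f, uint32_t v)
{
	if (!fifo_unused(f))
		return -1;
	f->buffer[f->tail++ & (FIFO_SIZE - 1)] = v;
	return 0;
}

static int fifo_pop(struct toy_fifo *f, uint32_t *v)
{
	if (!fifo_used(f))
		return -1;
	*v = f->buffer[f->head++ & (FIFO_SIZE - 1)];
	return 0;
}

int main(void)
{
	struct toy_fifo f = { {0}, 0, 0 };
	uint32_t v;

	fifo_push(&f, 0x1234);
	fifo_pop(&f, &v);
	printf("popped 0x%x, used=%u\n", v, fifo_used(&f));
	return 0;
}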
+ **/ +STATIC void fm10k_fifo_write_copy(struct fm10k_mbx_fifo *fifo, + const u32 *msg, u16 tail_offset, u16 len) +{ + u16 end = fm10k_fifo_tail_offset(fifo, tail_offset); + u32 *tail = fifo->buffer + end; + + /* track when we should cross the end of the FIFO */ + end = fifo->size - end; + + /* copy end of message before start of message */ + if (end < len) + memcpy(fifo->buffer, msg + end, (len - end) << 2); + else + end = len; + + /* Copy remaining message into Tx FIFO */ + memcpy(tail, msg, end << 2); +} + +/** + * fm10k_fifo_enqueue - Enqueues the message to the tail of the FIFO + * @fifo: pointer to FIFO + * @msg: message array to read + * + * This function enqueues a message up to the size specified by the length + * contained in the first DWORD of the message and will place at the tail + * of the FIFO. It will return 0 on success, or a negative value on error. + **/ +STATIC s32 fm10k_fifo_enqueue(struct fm10k_mbx_fifo *fifo, const u32 *msg) +{ + u16 len = FM10K_TLV_DWORD_LEN(*msg); + + DEBUGFUNC("fm10k_fifo_enqueue"); + + /* verify parameters */ + if (len > fifo->size) + return FM10K_MBX_ERR_SIZE; + + /* verify there is room for the message */ + if (len > fm10k_fifo_unused(fifo)) + return FM10K_MBX_ERR_NO_SPACE; + + /* Copy message into FIFO */ + fm10k_fifo_write_copy(fifo, msg, 0, len); + + /* memory barrier to guarantee FIFO is written before tail update */ + FM10K_WMB(); + + /* Update Tx FIFO tail */ + fifo->tail += len; + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_validate_msg_size - Validate incoming message based on size + * @mbx: pointer to mailbox + * @len: length of data pushed onto buffer + * + * This function analyzes the frame and will return a non-zero value when + * the start of a message larger than the mailbox is detected. + **/ +STATIC u16 fm10k_mbx_validate_msg_size(struct fm10k_mbx_info *mbx, u16 len) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 total_len = 0, msg_len; + u32 *msg; + + DEBUGFUNC("fm10k_mbx_validate_msg"); + + /* length should include previous amounts pushed */ + len += mbx->pushed; + + /* offset in message is based off of current message size */ + do { + msg = fifo->buffer + fm10k_fifo_tail_offset(fifo, total_len); + msg_len = FM10K_TLV_DWORD_LEN(*msg); + total_len += msg_len; + } while (total_len < len); + + /* message extends out of pushed section, but fits in FIFO */ + if ((len < total_len) && (msg_len <= mbx->rx.size)) + return 0; + + /* return length of invalid section */ + return (len < total_len) ? len : (len - total_len); +} + +/** + * fm10k_mbx_write_copy - pulls data off of Tx FIFO and places it in mbmem + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will take a section of the Tx FIFO and copy it into the + * mailbox memory. The offset in mbmem is based on the lower bits of the + * tail and len determines the length to copy. 
+ **/ +STATIC void fm10k_mbx_write_copy(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->tx; + u32 mbmem = mbx->mbmem_reg; + u32 *head = fifo->buffer; + u16 end, len, tail, mask; + + DEBUGFUNC("fm10k_mbx_write_copy"); + + if (!mbx->tail_len) + return; + + /* determine data length and mbmem tail index */ + mask = mbx->mbmem_len - 1; + len = mbx->tail_len; + tail = fm10k_mbx_tail_sub(mbx, len); + if (tail > mask) + tail++; + + /* determine offset in the ring */ + end = fm10k_fifo_head_offset(fifo, mbx->pulled); + head += end; + + /* memory barrier to guarantee data is ready to be read */ + FM10K_RMB(); + + /* Copy message from Tx FIFO */ + for (end = fifo->size - end; len; head = fifo->buffer) { + do { + /* adjust tail to match offset for FIFO */ + tail &= mask; + if (!tail) + tail++; + + /* write message to hardware FIFO */ + FM10K_WRITE_MBX(hw, mbmem + tail++, *(head++)); + } while (--len && --end); + } +} + +/** + * fm10k_mbx_pull_head - Pulls data off of head of Tx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @head: acknowledgement number last received + * + * This function will push the tail index forward based on the remote + * head index. It will then pull up to mbmem_len DWORDs off of the + * head of the FIFO and will place it in the MBMEM registers + * associated with the mailbox. + **/ +STATIC void fm10k_mbx_pull_head(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + u16 mbmem_len, len, ack = fm10k_mbx_index_len(mbx, head, mbx->tail); + struct fm10k_mbx_fifo *fifo = &mbx->tx; + + /* update number of bytes pulled and update bytes in transit */ + mbx->pulled += mbx->tail_len - ack; + + /* determine length of data to pull, reserve space for mbmem header */ + mbmem_len = mbx->mbmem_len - 1; + len = fm10k_fifo_used(fifo) - mbx->pulled; + if (len > mbmem_len) + len = mbmem_len; + + /* update tail and record number of bytes in transit */ + mbx->tail = fm10k_mbx_tail_add(mbx, len - ack); + mbx->tail_len = len; + + /* drop pulled messages from the FIFO */ + for (len = fm10k_fifo_head_len(fifo); + len && (mbx->pulled >= len); + len = fm10k_fifo_head_len(fifo)) { + mbx->pulled -= fm10k_fifo_head_drop(fifo); + mbx->tx_messages++; + mbx->tx_dwords += len; + } + + /* Copy message out from the Tx FIFO */ + fm10k_mbx_write_copy(hw, mbx); +} + +/** + * fm10k_mbx_read_copy - pulls data off of mbmem and places it in Rx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will take a section of the mailbox memory and copy it + * into the Rx FIFO. The offset is based on the lower bits of the + * head and len determines the length to copy. 
+ **/ +STATIC void fm10k_mbx_read_copy(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u32 mbmem = mbx->mbmem_reg ^ mbx->mbmem_len; + u32 *tail = fifo->buffer; + u16 end, len, head; + + DEBUGFUNC("fm10k_mbx_read_copy"); + + /* determine data length and mbmem head index */ + len = mbx->head_len; + head = fm10k_mbx_head_sub(mbx, len); + if (head >= mbx->mbmem_len) + head++; + + /* determine offset in the ring */ + end = fm10k_fifo_tail_offset(fifo, mbx->pushed); + tail += end; + + /* Copy message into Rx FIFO */ + for (end = fifo->size - end; len; tail = fifo->buffer) { + do { + /* adjust head to match offset for FIFO */ + head &= mbx->mbmem_len - 1; + if (!head) + head++; + + /* read message from hardware FIFO */ + *(tail++) = FM10K_READ_MBX(hw, mbmem + head++); + } while (--len && --end); + } + + /* memory barrier to guarantee FIFO is written before tail update */ + FM10K_WMB(); +} + +/** + * fm10k_mbx_push_tail - Pushes up to 15 DWORDs on to tail of FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @tail: tail index of message + * + * This function will first validate the tail index and size for the + * incoming message. It then updates the acknowledgment number and + * copies the data into the FIFO. It will return the number of messages + * dequeued on success and a negative value on error. + **/ +STATIC s32 fm10k_mbx_push_tail(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, + u16 tail) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 len, seq = fm10k_mbx_index_len(mbx, mbx->head, tail); + + DEBUGFUNC("fm10k_mbx_push_tail"); + + /* determine length of data to push */ + len = fm10k_fifo_unused(fifo) - mbx->pushed; + if (len > seq) + len = seq; + + /* update head and record bytes received */ + mbx->head = fm10k_mbx_head_add(mbx, len); + mbx->head_len = len; + + /* nothing to do if there is no data */ + if (!len) + return FM10K_SUCCESS; + + /* Copy msg into Rx FIFO */ + fm10k_mbx_read_copy(hw, mbx); + + /* determine if there are any invalid lengths in message */ + if (fm10k_mbx_validate_msg_size(mbx, len)) + return FM10K_MBX_ERR_SIZE; + + /* Update pushed */ + mbx->pushed += len; + + /* flush any completed messages */ + for (len = fm10k_mbx_pushed_tail_len(mbx); + len && (mbx->pushed >= len); + len = fm10k_mbx_pushed_tail_len(mbx)) { + fifo->tail += len; + mbx->pushed -= len; + mbx->rx_messages++; + mbx->rx_dwords += len; + } + + return FM10K_SUCCESS; +} + +/* pre-generated data for generating the CRC based on the poly 0xAC9A. 
*/ +static const u16 fm10k_crc_16b_table[256] = { + 0x0000, 0x7956, 0xF2AC, 0x8BFA, 0xBC6D, 0xC53B, 0x4EC1, 0x3797, + 0x21EF, 0x58B9, 0xD343, 0xAA15, 0x9D82, 0xE4D4, 0x6F2E, 0x1678, + 0x43DE, 0x3A88, 0xB172, 0xC824, 0xFFB3, 0x86E5, 0x0D1F, 0x7449, + 0x6231, 0x1B67, 0x909D, 0xE9CB, 0xDE5C, 0xA70A, 0x2CF0, 0x55A6, + 0x87BC, 0xFEEA, 0x7510, 0x0C46, 0x3BD1, 0x4287, 0xC97D, 0xB02B, + 0xA653, 0xDF05, 0x54FF, 0x2DA9, 0x1A3E, 0x6368, 0xE892, 0x91C4, + 0xC462, 0xBD34, 0x36CE, 0x4F98, 0x780F, 0x0159, 0x8AA3, 0xF3F5, + 0xE58D, 0x9CDB, 0x1721, 0x6E77, 0x59E0, 0x20B6, 0xAB4C, 0xD21A, + 0x564D, 0x2F1B, 0xA4E1, 0xDDB7, 0xEA20, 0x9376, 0x188C, 0x61DA, + 0x77A2, 0x0EF4, 0x850E, 0xFC58, 0xCBCF, 0xB299, 0x3963, 0x4035, + 0x1593, 0x6CC5, 0xE73F, 0x9E69, 0xA9FE, 0xD0A8, 0x5B52, 0x2204, + 0x347C, 0x4D2A, 0xC6D0, 0xBF86, 0x8811, 0xF147, 0x7ABD, 0x03EB, + 0xD1F1, 0xA8A7, 0x235D, 0x5A0B, 0x6D9C, 0x14CA, 0x9F30, 0xE666, + 0xF01E, 0x8948, 0x02B2, 0x7BE4, 0x4C73, 0x3525, 0xBEDF, 0xC789, + 0x922F, 0xEB79, 0x6083, 0x19D5, 0x2E42, 0x5714, 0xDCEE, 0xA5B8, + 0xB3C0, 0xCA96, 0x416C, 0x383A, 0x0FAD, 0x76FB, 0xFD01, 0x8457, + 0xAC9A, 0xD5CC, 0x5E36, 0x2760, 0x10F7, 0x69A1, 0xE25B, 0x9B0D, + 0x8D75, 0xF423, 0x7FD9, 0x068F, 0x3118, 0x484E, 0xC3B4, 0xBAE2, + 0xEF44, 0x9612, 0x1DE8, 0x64BE, 0x5329, 0x2A7F, 0xA185, 0xD8D3, + 0xCEAB, 0xB7FD, 0x3C07, 0x4551, 0x72C6, 0x0B90, 0x806A, 0xF93C, + 0x2B26, 0x5270, 0xD98A, 0xA0DC, 0x974B, 0xEE1D, 0x65E7, 0x1CB1, + 0x0AC9, 0x739F, 0xF865, 0x8133, 0xB6A4, 0xCFF2, 0x4408, 0x3D5E, + 0x68F8, 0x11AE, 0x9A54, 0xE302, 0xD495, 0xADC3, 0x2639, 0x5F6F, + 0x4917, 0x3041, 0xBBBB, 0xC2ED, 0xF57A, 0x8C2C, 0x07D6, 0x7E80, + 0xFAD7, 0x8381, 0x087B, 0x712D, 0x46BA, 0x3FEC, 0xB416, 0xCD40, + 0xDB38, 0xA26E, 0x2994, 0x50C2, 0x6755, 0x1E03, 0x95F9, 0xECAF, + 0xB909, 0xC05F, 0x4BA5, 0x32F3, 0x0564, 0x7C32, 0xF7C8, 0x8E9E, + 0x98E6, 0xE1B0, 0x6A4A, 0x131C, 0x248B, 0x5DDD, 0xD627, 0xAF71, + 0x7D6B, 0x043D, 0x8FC7, 0xF691, 0xC106, 0xB850, 0x33AA, 0x4AFC, + 0x5C84, 0x25D2, 0xAE28, 0xD77E, 0xE0E9, 0x99BF, 0x1245, 0x6B13, + 0x3EB5, 0x47E3, 0xCC19, 0xB54F, 0x82D8, 0xFB8E, 0x7074, 0x0922, + 0x1F5A, 0x660C, 0xEDF6, 0x94A0, 0xA337, 0xDA61, 0x519B, 0x28CD }; + +/** + * fm10k_crc_16b - Generate a 16 bit CRC for a region of 16 bit data + * @data: pointer to data to process + * @seed: seed value for CRC + * @len: length measured in 16 bits words + * + * This function will generate a CRC based on the polynomial 0xAC9A and + * whatever value is stored in the seed variable. Note that this + * value inverts the local seed and the result in order to capture all + * leading and trailing zeros. 
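The 256-entry table above matches the standard byte-at-a-time, LSB-first table construction for polynomial 0xAC9A. The following standalone generator (illustrative only, not part of the patch) reproduces it:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t table[256];
	unsigned int i, bit;

	for (i = 0; i < 256; i++) {
		uint16_t crc = (uint16_t)i;

		for (bit = 0; bit < 8; bit++)
			crc = (crc & 1) ? (crc >> 1) ^ 0xAC9A : crc >> 1;
		table[i] = crc;
	}

	/* spot-check two entries against the table in the patch */
	printf("table[0x01]=0x%04X table[0x80]=0x%04X\n",
	       table[0x01], table[0x80]);	/* expect 0x7956 and 0xAC9A */
	return 0;
}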
+ */ +STATIC u16 fm10k_crc_16b(const u32 *data, u16 seed, u16 len) +{ + u32 result = seed; + + while (len--) { + result ^= *(data++); + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + + if (!(len--)) + break; + + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + } + + return (u16)result; +} + +/** + * fm10k_fifo_crc - generate a CRC based off of FIFO data + * @fifo: pointer to FIFO + * @offset: offset point for start of FIFO + * @len: number of DWORDS words to process + * @seed: seed value for CRC + * + * This function generates a CRC for some region of the FIFO + **/ +STATIC u16 fm10k_fifo_crc(struct fm10k_mbx_fifo *fifo, u16 offset, + u16 len, u16 seed) +{ + u32 *data = fifo->buffer + offset; + + /* track when we should cross the end of the FIFO */ + offset = fifo->size - offset; + + /* if we are in 2 blocks process the end of the FIFO first */ + if (offset < len) { + seed = fm10k_crc_16b(data, seed, offset * 2); + data = fifo->buffer; + len -= offset; + } + + /* process any remaining bits */ + return fm10k_crc_16b(data, seed, len * 2); +} + +/** + * fm10k_mbx_update_local_crc - Update the local CRC for outgoing data + * @mbx: pointer to mailbox + * @head: head index provided by remote mailbox + * + * This function will generate the CRC for all data from the end of the + * last head update to the current one. It uses the result of the + * previous CRC as the seed for this update. The result is stored in + * mbx->local. + **/ +STATIC void fm10k_mbx_update_local_crc(struct fm10k_mbx_info *mbx, u16 head) +{ + u16 len = mbx->tail_len - fm10k_mbx_index_len(mbx, head, mbx->tail); + + /* determine the offset for the start of the region to be pulled */ + head = fm10k_fifo_head_offset(&mbx->tx, mbx->pulled); + + /* update local CRC to include all of the pulled data */ + mbx->local = fm10k_fifo_crc(&mbx->tx, head, len, mbx->local); +} + +/** + * fm10k_mbx_verify_remote_crc - Verify the CRC is correct for current data + * @mbx: pointer to mailbox + * + * This function will take all data that has been provided from the remote + * end and generate a CRC for it. This is stored in mbx->remote. The + * CRC for the header is then computed and if the result is non-zero this + * is an error and we signal an error dropping all data and resetting the + * connection. + */ +STATIC s32 fm10k_mbx_verify_remote_crc(struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 len = mbx->head_len; + u16 offset = fm10k_fifo_tail_offset(fifo, mbx->pushed) - len; + u16 crc; + + /* update the remote CRC if new data has been received */ + if (len) + mbx->remote = fm10k_fifo_crc(fifo, offset, len, mbx->remote); + + /* process the full header as we have to validate the CRC */ + crc = fm10k_crc_16b(&mbx->mbx_hdr, mbx->remote, 1); + + /* notify other end if we have a problem */ + return crc ? FM10K_MBX_ERR_CRC : FM10K_SUCCESS; +} + +/** + * fm10k_mbx_rx_ready - Indicates that a message is ready in the Rx FIFO + * @mbx: pointer to mailbox + * + * This function returns true if there is a message in the Rx FIFO to dequeue. 
+ **/ +STATIC bool fm10k_mbx_rx_ready(struct fm10k_mbx_info *mbx) +{ + u16 msg_size = fm10k_fifo_head_len(&mbx->rx); + + return msg_size && (fm10k_fifo_used(&mbx->rx) >= msg_size); +} + +/** + * fm10k_mbx_tx_ready - Indicates that the mailbox is in state ready for Tx + * @mbx: pointer to mailbox + * @len: verify free space is >= this value + * + * This function returns true if the mailbox is in a state ready to transmit. + **/ +STATIC bool fm10k_mbx_tx_ready(struct fm10k_mbx_info *mbx, u16 len) +{ + u16 fifo_unused = fm10k_fifo_unused(&mbx->tx); + + return (mbx->state == FM10K_STATE_OPEN) && (fifo_unused >= len); +} + +/** + * fm10k_mbx_tx_complete - Indicates that the Tx FIFO has been emptied + * @mbx: pointer to mailbox + * + * This function returns true if the Tx FIFO is empty. + **/ +STATIC bool fm10k_mbx_tx_complete(struct fm10k_mbx_info *mbx) +{ + return fm10k_fifo_empty(&mbx->tx); +} + +/** + * fm10k_mbx_deqeueue_rx - Dequeues the message from the head in the Rx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function dequeues messages and hands them off to the tlv parser. + * It will return the number of messages processed when called. + **/ +STATIC u16 fm10k_mbx_dequeue_rx(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + s32 err; + u16 cnt; + + /* parse Rx messages out of the Rx FIFO to empty it */ + for (cnt = 0; !fm10k_fifo_empty(fifo); cnt++) { + err = fm10k_tlv_msg_parse(hw, fifo->buffer + fifo->head, + mbx, mbx->msg_data); + if (err < 0) + mbx->rx_parse_err++; + + fm10k_fifo_head_drop(fifo); + } + + /* shift remaining bytes back to start of FIFO */ + memmove(fifo->buffer, fifo->buffer + fifo->tail, mbx->pushed << 2); + + /* shift head and tail based on the memory we moved */ + fifo->tail -= fifo->head; + fifo->head = 0; + + return cnt; +} + +/** + * fm10k_mbx_enqueue_tx - Enqueues the message to the tail of the Tx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg: message array to read + * + * This function enqueues a message up to the size specified by the length + * contained in the first DWORD of the message and will place at the tail + * of the FIFO. It will return 0 on success, or a negative value on error. 
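A hypothetical caller would normally transmit through the ops table installed later in this file, first checking for space and then queuing a pre-built message whose length in DWORDs is encoded in its first DWORD; the message construction and length argument are placeholders here:

#include "fm10k_common.h"

static s32 example_send_msg(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx,
			    const u32 *msg, u16 len_dwords)
{
	/* refuse to queue if the Tx FIFO cannot currently hold the message */
	if (!mbx->ops.tx_ready(mbx, len_dwords))
		return FM10K_MBX_ERR_NO_SPACE;

	/* copies msg into the Tx FIFO and kicks mailbox processing */
	return mbx->ops.enqueue_tx(hw, mbx, msg);
}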
+ **/ +STATIC s32 fm10k_mbx_enqueue_tx(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, const u32 *msg) +{ + u32 countdown = mbx->timeout; + s32 err; + + switch (mbx->state) { + case FM10K_STATE_CLOSED: + case FM10K_STATE_DISCONNECT: + return FM10K_MBX_ERR_NO_MBX; + default: + break; + } + + /* enqueue the message on the Tx FIFO */ + err = fm10k_fifo_enqueue(&mbx->tx, msg); + + /* if it failed give the FIFO a chance to drain */ + while (err && countdown) { + countdown--; + usec_delay(mbx->usec_delay); + mbx->ops.process(hw, mbx); + err = fm10k_fifo_enqueue(&mbx->tx, msg); + } + + /* if we failed treat the error */ + if (err) { + mbx->timeout = 0; + mbx->tx_busy++; + } + + /* begin processing message, ignore errors as this is just meant + * to start the mailbox flow so we are not concerned if there + * is a bad error, or the mailbox is already busy with a request + */ + if (!mbx->tail_len) + mbx->ops.process(hw, mbx); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_read - Copies the mbmem to local message buffer + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function copies the message from the mbmem to the message array + **/ +STATIC s32 fm10k_mbx_read(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_read"); + + /* only allow one reader in here at a time */ + if (mbx->mbx_hdr) + return FM10K_MBX_ERR_BUSY; + + /* read to capture initial interrupt bits */ + if (FM10K_READ_MBX(hw, mbx->mbx_reg) & FM10K_MBX_REQ_INTERRUPT) + mbx->mbx_lock = FM10K_MBX_ACK; + + /* write back interrupt bits to clear */ + FM10K_WRITE_MBX(hw, mbx->mbx_reg, + FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT); + + /* read remote header */ + mbx->mbx_hdr = FM10K_READ_MBX(hw, mbx->mbmem_reg ^ mbx->mbmem_len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_write - Copies the local message buffer to mbmem + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function copies the message from the the message array to mbmem + **/ +STATIC void fm10k_mbx_write(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + u32 mbmem = mbx->mbmem_reg; + + DEBUGFUNC("fm10k_mbx_write"); + + /* write new msg header to notify recipient of change */ + FM10K_WRITE_MBX(hw, mbmem, mbx->mbx_hdr); + + /* write mailbox to send interrupt */ + if (mbx->mbx_lock) + FM10K_WRITE_MBX(hw, mbx->mbx_reg, mbx->mbx_lock); + + /* we no longer are using the header so free it */ + mbx->mbx_hdr = 0; + mbx->mbx_lock = 0; +} + +/** + * fm10k_mbx_create_connect_hdr - Generate a connect mailbox header + * @mbx: pointer to mailbox + * + * This function returns a connection mailbox header + **/ +STATIC void fm10k_mbx_create_connect_hdr(struct fm10k_mbx_info *mbx) +{ + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_CONNECT, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD) | + FM10K_MSG_HDR_FIELD_SET(mbx->rx.size - 1, CONNECT_SIZE); +} + +/** + * fm10k_mbx_create_data_hdr - Generate a data mailbox header + * @mbx: pointer to mailbox + * + * This function returns a data mailbox header + **/ +STATIC void fm10k_mbx_create_data_hdr(struct fm10k_mbx_info *mbx) +{ + u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DATA, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->tail, TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); + struct fm10k_mbx_fifo *fifo = &mbx->tx; + u16 crc; + + if (mbx->tail_len) + mbx->mbx_lock |= FM10K_MBX_REQ; + + /* generate CRC for data in flight and header */ + crc = fm10k_fifo_crc(fifo, fm10k_fifo_head_offset(fifo, mbx->pulled), + 
mbx->tail_len, mbx->local); + crc = fm10k_crc_16b(&hdr, crc, 1); + + /* load header to memory to be written */ + mbx->mbx_hdr = hdr | FM10K_MSG_HDR_FIELD_SET(crc, CRC); +} + +/** + * fm10k_mbx_create_disconnect_hdr - Generate a disconnect mailbox header + * @mbx: pointer to mailbox + * + * This function returns a disconnect mailbox header + **/ +STATIC void fm10k_mbx_create_disconnect_hdr(struct fm10k_mbx_info *mbx) +{ + u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DISCONNECT, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->tail, TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); + u16 crc = fm10k_crc_16b(&hdr, mbx->local, 1); + + mbx->mbx_lock |= FM10K_MBX_ACK; + + /* load header to memory to be written */ + mbx->mbx_hdr = hdr | FM10K_MSG_HDR_FIELD_SET(crc, CRC); +} + +/** + * fm10k_mbx_create_error_msg - Generate a error message + * @mbx: pointer to mailbox + * @err: local error encountered + * + * This function will interpret the error provided by err, and based on + * that it may shift the message by 1 DWORD and then place an error header + * at the start of the message. + **/ +STATIC void fm10k_mbx_create_error_msg(struct fm10k_mbx_info *mbx, s32 err) +{ + /* only generate an error message for these types */ + switch (err) { + case FM10K_MBX_ERR_TAIL: + case FM10K_MBX_ERR_HEAD: + case FM10K_MBX_ERR_TYPE: + case FM10K_MBX_ERR_SIZE: + case FM10K_MBX_ERR_RSVD0: + case FM10K_MBX_ERR_CRC: + break; + default: + return; + } + + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_ERROR, TYPE) | + FM10K_MSG_HDR_FIELD_SET(err, ERR_NO) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); +} + +/** + * fm10k_mbx_validate_msg_hdr - Validate common fields in the message header + * @mbx: pointer to mailbox + * @msg: message array to read + * + * This function will parse up the fields in the mailbox header and return + * an error if the header contains any of a number of invalid configurations + * including unrecognized type, invalid route, or a malformed message. 
+ **/ +STATIC s32 fm10k_mbx_validate_msg_hdr(struct fm10k_mbx_info *mbx) +{ + u16 type, rsvd0, head, tail, size; + const u32 *hdr = &mbx->mbx_hdr; + + DEBUGFUNC("fm10k_mbx_validate_msg_hdr"); + + type = FM10K_MSG_HDR_FIELD_GET(*hdr, TYPE); + rsvd0 = FM10K_MSG_HDR_FIELD_GET(*hdr, RSVD0); + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + size = FM10K_MSG_HDR_FIELD_GET(*hdr, CONNECT_SIZE); + + if (rsvd0) + return FM10K_MBX_ERR_RSVD0; + + switch (type) { + case FM10K_MSG_DISCONNECT: + /* validate that all data has been received */ + if (tail != mbx->head) + return FM10K_MBX_ERR_TAIL; + + /* fall through */ + case FM10K_MSG_DATA: + /* validate that head is moving correctly */ + if (!head || (head == FM10K_MSG_HDR_MASK(HEAD))) + return FM10K_MBX_ERR_HEAD; + if (fm10k_mbx_index_len(mbx, head, mbx->tail) > mbx->tail_len) + return FM10K_MBX_ERR_HEAD; + + /* validate that tail is moving correctly */ + if (!tail || (tail == FM10K_MSG_HDR_MASK(TAIL))) + return FM10K_MBX_ERR_TAIL; + if (fm10k_mbx_index_len(mbx, mbx->head, tail) < mbx->mbmem_len) + break; + + return FM10K_MBX_ERR_TAIL; + case FM10K_MSG_CONNECT: + /* validate size is in range and is power of 2 mask */ + if ((size < FM10K_VFMBX_MSG_MTU) || (size & (size + 1))) + return FM10K_MBX_ERR_SIZE; + + /* fall through */ + case FM10K_MSG_ERROR: + if (!head || (head == FM10K_MSG_HDR_MASK(HEAD))) + return FM10K_MBX_ERR_HEAD; + /* neither create nor error include a tail offset */ + if (tail) + return FM10K_MBX_ERR_TAIL; + + break; + default: + return FM10K_MBX_ERR_TYPE; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_create_reply - Generate reply based on state and remote head + * @mbx: pointer to mailbox + * @head: acknowledgement number + * + * This function will generate an outgoing message based on the current + * mailbox state and the remote fifo head. It will return the length + * of the outgoing message excluding header on success, and a negative value + * on error. + **/ +STATIC s32 fm10k_mbx_create_reply(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* update our checksum for the outgoing data */ + fm10k_mbx_update_local_crc(mbx, head); + + /* as long as other end recognizes us keep sending data */ + fm10k_mbx_pull_head(hw, mbx, head); + + /* generate new header based on data */ + if (mbx->tail_len || (mbx->state == FM10K_STATE_OPEN)) + fm10k_mbx_create_data_hdr(mbx); + else + fm10k_mbx_create_disconnect_hdr(mbx); + break; + case FM10K_STATE_CONNECT: + /* send disconnect even if we aren't connected */ + fm10k_mbx_create_connect_hdr(mbx); + break; + case FM10K_STATE_CLOSED: + /* generate new header based on data */ + fm10k_mbx_create_disconnect_hdr(mbx); + default: + break; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_reset_work- Reset internal pointers for any pending work + * @mbx: pointer to mailbox + * + * This function will reset all internal pointers so any work in progress + * is dropped. This call should occur every time we transition from the + * open state to the connect state. 
+ **/ +STATIC void fm10k_mbx_reset_work(struct fm10k_mbx_info *mbx) +{ + /* reset our outgoing max size back to Rx limits */ + mbx->max_size = mbx->rx.size - 1; + + /* just do a quick resysnc to start of message */ + mbx->pushed = 0; + mbx->pulled = 0; + mbx->tail_len = 0; + mbx->head_len = 0; + mbx->rx.tail = 0; + mbx->rx.head = 0; +} + +/** + * fm10k_mbx_update_max_size - Update the max_size and drop any large messages + * @mbx: pointer to mailbox + * @size: new value for max_size + * + * This function will update the max_size value and drop any outgoing messages + * from the head of the Tx FIFO that are larger than max_size. + **/ +STATIC void fm10k_mbx_update_max_size(struct fm10k_mbx_info *mbx, u16 size) +{ + u16 len; + + DEBUGFUNC("fm10k_mbx_update_max_size_hdr"); + + mbx->max_size = size; + + /* flush any oversized messages from the queue */ + for (len = fm10k_fifo_head_len(&mbx->tx); + len > size; + len = fm10k_fifo_head_len(&mbx->tx)) { + fm10k_fifo_head_drop(&mbx->tx); + mbx->tx_dropped++; + } +} + +/** + * fm10k_mbx_connect_reset - Reset following request for reset + * @mbx: pointer to mailbox + * + * This function resets the mailbox to either a disconnected state + * or a connect state depending on the current mailbox state + **/ +STATIC void fm10k_mbx_connect_reset(struct fm10k_mbx_info *mbx) +{ + /* just do a quick resysnc to start of frame */ + fm10k_mbx_reset_work(mbx); + + /* reset CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* we cannot exit connect until the size is good */ + if (mbx->state == FM10K_STATE_OPEN) + mbx->state = FM10K_STATE_CONNECT; + else + mbx->state = FM10K_STATE_CLOSED; +} + +/** + * fm10k_mbx_process_connect - Process connect header + * @mbx: pointer to mailbox + * @msg: message array to process + * + * This function will read an incoming connect header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. + **/ +STATIC s32 fm10k_mbx_process_connect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + const u32 *hdr = &mbx->mbx_hdr; + u16 size, head; + + /* we will need to pull all of the fields for verification */ + size = FM10K_MSG_HDR_FIELD_GET(*hdr, CONNECT_SIZE); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + + switch (state) { + case FM10K_STATE_DISCONNECT: + case FM10K_STATE_OPEN: + /* reset any in-progress work */ + fm10k_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* we cannot exit connect until the size is good */ + if (size > mbx->rx.size) { + mbx->max_size = mbx->rx.size - 1; + } else { + /* record the remote system requesting connection */ + mbx->state = FM10K_STATE_OPEN; + + fm10k_mbx_update_max_size(mbx, size); + } + break; + default: + break; + } + + /* align our tail index to remote head index */ + mbx->tail = head; + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_data - Process data header + * @mbx: pointer to mailbox + * + * This function will read an incoming data header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. 
+ **/ +STATIC s32 fm10k_mbx_process_data(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 head, tail; + s32 err; + + DEBUGFUNC("fm10k_mbx_process_data"); + + /* we will need to pull all of the fields for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL); + + /* if we are in connect just update our data and go */ + if (mbx->state == FM10K_STATE_CONNECT) { + mbx->tail = head; + mbx->state = FM10K_STATE_OPEN; + } + + /* abort on message size errors */ + err = fm10k_mbx_push_tail(hw, mbx, tail); + if (err < 0) + return err; + + /* verify the checksum on the incoming data */ + err = fm10k_mbx_verify_remote_crc(mbx); + if (err) + return err; + + /* process messages if we have received any */ + fm10k_mbx_dequeue_rx(hw, mbx); + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_disconnect - Process disconnect header + * @mbx: pointer to mailbox + * + * This function will read an incoming disconnect header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. + **/ +STATIC s32 fm10k_mbx_process_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + const u32 *hdr = &mbx->mbx_hdr; + u16 head; + s32 err; + + /* we will need to pull the header field for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + + /* We should not be receiving disconnect if Rx is incomplete */ + if (mbx->pushed) + return FM10K_MBX_ERR_TAIL; + + /* we have already verified mbx->head == tail so we know this is 0 */ + mbx->head_len = 0; + + /* verify the checksum on the incoming header is correct */ + err = fm10k_mbx_verify_remote_crc(mbx); + if (err) + return err; + + switch (state) { + case FM10K_STATE_DISCONNECT: + case FM10K_STATE_OPEN: + /* state doesn't change if we still have work to do */ + if (!fm10k_mbx_tx_complete(mbx)) + break; + + /* verify the head indicates we completed all transmits */ + if (head != mbx->tail) + return FM10K_MBX_ERR_HEAD; + + /* reset any in-progress work */ + fm10k_mbx_connect_reset(mbx); + break; + default: + break; + } + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_error - Process error header + * @mbx: pointer to mailbox + * + * This function will read an incoming error header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. 
+ **/ +STATIC s32 fm10k_mbx_process_error(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + s32 err_no; + u16 head; + + /* we will need to pull all of the fields for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + + /* we only have lower 10 bits of error number so add upper bits */ + err_no = FM10K_MSG_HDR_FIELD_GET(*hdr, ERR_NO); + err_no |= ~FM10K_MSG_HDR_MASK(ERR_NO); + + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* flush any uncompleted work */ + fm10k_mbx_reset_work(mbx); + + /* reset CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* reset tail index and size to prepare for reconnect */ + mbx->tail = head; + + /* if open then reset max_size and go back to connect */ + if (mbx->state == FM10K_STATE_OPEN) { + mbx->state = FM10K_STATE_CONNECT; + break; + } + + /* send a connect message to get data flowing again */ + fm10k_mbx_create_connect_hdr(mbx); + return FM10K_SUCCESS; + default: + break; + } + + return fm10k_mbx_create_reply(hw, mbx, mbx->tail); +} + +/** + * fm10k_mbx_process - Process mailbox interrupt + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will process incoming mailbox events and generate mailbox + * replies. It will return a value indicating the number of DWORDs + * transmitted excluding header on success or a negative value on error. + **/ +STATIC s32 fm10k_mbx_process(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + s32 err; + + DEBUGFUNC("fm10k_mbx_process"); + + /* we do not read mailbox if closed */ + if (mbx->state == FM10K_STATE_CLOSED) + return FM10K_SUCCESS; + + /* copy data from mailbox */ + err = fm10k_mbx_read(hw, mbx); + if (err) + return err; + + /* validate type, source, and destination */ + err = fm10k_mbx_validate_msg_hdr(mbx); + if (err < 0) + goto msg_err; + + switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, TYPE)) { + case FM10K_MSG_CONNECT: + err = fm10k_mbx_process_connect(hw, mbx); + break; + case FM10K_MSG_DATA: + err = fm10k_mbx_process_data(hw, mbx); + break; + case FM10K_MSG_DISCONNECT: + err = fm10k_mbx_process_disconnect(hw, mbx); + break; + case FM10K_MSG_ERROR: + err = fm10k_mbx_process_error(hw, mbx); + break; + default: + err = FM10K_MBX_ERR_TYPE; + break; + } + +msg_err: + /* notify partner of errors on our end */ + if (err < 0) + fm10k_mbx_create_error_msg(mbx, err); + + /* copy data from mailbox */ + fm10k_mbx_write(hw, mbx); + + return err; +} + +/** + * fm10k_mbx_disconnect - Shutdown mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will shut down the mailbox. It places the mailbox first + * in the disconnect state, it then allows up to a predefined timeout for + * the mailbox to transition to close on its own. If this does not occur + * then the mailbox will be forced into the closed state. + * + * Any mailbox transactions not completed before calling this function + * are not guaranteed to complete and may be dropped. + **/ +STATIC void fm10k_mbx_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + int timeout = mbx->timeout ? 
FM10K_MBX_DISCONNECT_TIMEOUT : 0; + + DEBUGFUNC("fm10k_mbx_disconnect"); + + /* Place mbx in ready to disconnect state */ + mbx->state = FM10K_STATE_DISCONNECT; + + /* trigger interrupt to start shutdown process */ + FM10K_WRITE_MBX(hw, mbx->mbx_reg, FM10K_MBX_REQ | + FM10K_MBX_INTERRUPT_DISABLE); + do { + usec_delay(FM10K_MBX_POLL_DELAY); + mbx->ops.process(hw, mbx); + timeout -= FM10K_MBX_POLL_DELAY; + } while ((timeout > 0) && (mbx->state != FM10K_STATE_CLOSED)); + + /* in case we didn't close just force the mailbox into shutdown */ + fm10k_mbx_connect_reset(mbx); + fm10k_mbx_update_max_size(mbx, 0); + + FM10K_WRITE_MBX(hw, mbx->mbmem_reg, 0); +} + +/** + * fm10k_mbx_connect - Start mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will initiate a mailbox connection. It will populate the + * mailbox with a broadcast connect message and then initialize the lock. + * This is safe since the connect message is a single DWORD so the mailbox + * transaction is guaranteed to be atomic. + * + * This function will return an error if the mailbox has not been initiated + * or is currently in use. + **/ +STATIC s32 fm10k_mbx_connect(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_connect"); + + /* we cannot connect an uninitialized mailbox */ + if (!mbx->rx.buffer) + return FM10K_MBX_ERR_NO_SPACE; + + /* we cannot connect an already connected mailbox */ + if (mbx->state != FM10K_STATE_CLOSED) + return FM10K_MBX_ERR_BUSY; + + /* mailbox timeout can now become active */ + mbx->timeout = FM10K_MBX_INIT_TIMEOUT; + + /* Place mbx in ready to connect state */ + mbx->state = FM10K_STATE_CONNECT; + + /* initialize header of remote mailbox */ + fm10k_mbx_create_disconnect_hdr(mbx); + FM10K_WRITE_MBX(hw, mbx->mbmem_reg ^ mbx->mbmem_len, mbx->mbx_hdr); + + /* enable interrupt and notify other party of new message */ + mbx->mbx_lock = FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT | + FM10K_MBX_INTERRUPT_ENABLE; + + /* generate and load connect header into mailbox */ + fm10k_mbx_create_connect_hdr(mbx); + fm10k_mbx_write(hw, mbx); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_validate_handlers - Validate layout of message parsing data + * @msg_data: handlers for mailbox events + * + * This function validates the layout of the message parsing data. This + * should be mostly static, but it is important to catch any errors that + * are made when constructing the parsers. 
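As an illustration of the layout these checks enforce, a handler table might look like the sketch below: IDs strictly increasing, every entry with a handler, and a terminating FM10K_TLV_ERROR entry that still carries a handler. The IDs, the trivial handler and the exact handler prototype are assumptions made for the example; such a table would then be passed to mbx->ops.register_handlers() or fm10k_pfvf_mbx_init().

#include "fm10k_common.h"

/* trivial stand-in handler; a real handler would decode the parsed
 * attributes handed over in results (prototype assumed to match the
 * func member of struct fm10k_msg_data) */
static s32 example_handle_msg(struct fm10k_hw *hw, u32 **results,
			      struct fm10k_mbx_info *mbx)
{
	(void)hw;
	(void)results;
	(void)mbx;
	return FM10K_SUCCESS;
}

static const struct fm10k_msg_data example_msg_table[] = {
	{ .id = 1,               .attr = NULL, .func = example_handle_msg },
	{ .id = 2,               .attr = NULL, .func = example_handle_msg },
	{ .id = FM10K_TLV_ERROR, .attr = NULL, .func = example_handle_msg },
};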
+ **/ +STATIC s32 fm10k_mbx_validate_handlers(const struct fm10k_msg_data *msg_data) +{ + const struct fm10k_tlv_attr *attr; + unsigned int id; + + DEBUGFUNC("fm10k_mbx_validate_handlers"); + + /* Allow NULL mailboxes that transmit but don't receive */ + if (!msg_data) + return FM10K_SUCCESS; + + while (msg_data->id != FM10K_TLV_ERROR) { + /* all messages should have a function handler */ + if (!msg_data->func) + return FM10K_ERR_PARAM; + + /* parser is optional */ + attr = msg_data->attr; + if (attr) { + while (attr->id != FM10K_TLV_ERROR) { + id = attr->id; + attr++; + /* ID should always be increasing */ + if (id >= attr->id) + return FM10K_ERR_PARAM; + /* ID should fit in results array */ + if (id >= FM10K_TLV_RESULTS_MAX) + return FM10K_ERR_PARAM; + } + + /* verify terminator is in the list */ + if (attr->id != FM10K_TLV_ERROR) + return FM10K_ERR_PARAM; + } + + id = msg_data->id; + msg_data++; + /* ID should always be increasing */ + if (id >= msg_data->id) + return FM10K_ERR_PARAM; + } + + /* verify terminator is in the list */ + if ((msg_data->id != FM10K_TLV_ERROR) || !msg_data->func) + return FM10K_ERR_PARAM; + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_register_handlers - Register a set of handler ops for mailbox + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * + * This function associates a set of message handling ops with a mailbox. + **/ +STATIC s32 fm10k_mbx_register_handlers(struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data) +{ + DEBUGFUNC("fm10k_mbx_register_handlers"); + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + return FM10K_SUCCESS; +} + +/** + * fm10k_pfvf_mbx_init - Initialize mailbox memory for PF/VF mailbox + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * @id: ID reference for PF as it supports up to 64 PF/VF mailboxes + * + * This function initializes the mailbox for use. It will split the + * buffer provided an use that th populate both the Tx and Rx FIFO by + * evenly splitting it. In order to allow for easy masking of head/tail + * the value reported in size must be a power of 2 and is reported in + * DWORDs, not bytes. Any invalid values will cause the mailbox to return + * error. 
+ **/ +s32 fm10k_pfvf_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data, u8 id) +{ + DEBUGFUNC("fm10k_pfvf_mbx_init"); + + /* initialize registers */ + switch (hw->mac.type) { + case fm10k_mac_vf: + mbx->mbx_reg = FM10K_VFMBX; + mbx->mbmem_reg = FM10K_VFMBMEM(FM10K_VFMBMEM_VF_XOR); + break; + case fm10k_mac_pf: + /* there are only 64 VF <-> PF mailboxes */ + if (id < 64) { + mbx->mbx_reg = FM10K_MBX(id); + mbx->mbmem_reg = FM10K_MBMEM_VF(id, 0); + break; + } + /* fallthough */ + default: + return FM10K_MBX_ERR_NO_MBX; + } + + /* start out in closed state */ + mbx->state = FM10K_STATE_CLOSED; + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + /* start mailbox as timed out and let the reset_hw call + * set the timeout value to begin communications + */ + mbx->timeout = 0; + mbx->usec_delay = FM10K_MBX_INIT_DELAY; + + /* initialize tail and head */ + mbx->tail = 1; + mbx->head = 1; + + /* initialize CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* Split buffer for use by Tx/Rx FIFOs */ + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + mbx->mbmem_len = FM10K_VFMBMEM_VF_XOR; + + /* initialize the FIFOs, sizes are in 4 byte increments */ + fm10k_fifo_init(&mbx->tx, mbx->buffer, FM10K_MBX_TX_BUFFER_SIZE); + fm10k_fifo_init(&mbx->rx, &mbx->buffer[FM10K_MBX_TX_BUFFER_SIZE], + FM10K_MBX_RX_BUFFER_SIZE); + + /* initialize function pointers */ + mbx->ops.connect = fm10k_mbx_connect; + mbx->ops.disconnect = fm10k_mbx_disconnect; + mbx->ops.rx_ready = fm10k_mbx_rx_ready; + mbx->ops.tx_ready = fm10k_mbx_tx_ready; + mbx->ops.tx_complete = fm10k_mbx_tx_complete; + mbx->ops.enqueue_tx = fm10k_mbx_enqueue_tx; + mbx->ops.process = fm10k_mbx_process; + mbx->ops.register_handlers = fm10k_mbx_register_handlers; + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_create_data_hdr - Generate a mailbox header for local FIFO + * @mbx: pointer to mailbox + * + * This function returns a connection mailbox header + **/ +STATIC void fm10k_sm_mbx_create_data_hdr(struct fm10k_mbx_info *mbx) +{ + if (mbx->tail_len) + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(mbx->tail, SM_TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->remote, SM_VER) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, SM_HEAD); +} + +/** + * fm10k_sm_mbx_create_connect_hdr - Generate a mailbox header for local FIFO + * @mbx: pointer to mailbox + * @err: error flags to report if any + * + * This function returns a connection mailbox header + **/ +STATIC void fm10k_sm_mbx_create_connect_hdr(struct fm10k_mbx_info *mbx, u8 err) +{ + if (mbx->local) + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(mbx->tail, SM_TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->remote, SM_VER) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, SM_HEAD) | + FM10K_MSG_HDR_FIELD_SET(err, SM_ERR); +} + +/** + * fm10k_sm_mbx_connect_reset - Reset following request for reset + * @mbx: pointer to mailbox + * + * This function resets the mailbox to a just connected state + **/ +STATIC void fm10k_sm_mbx_connect_reset(struct fm10k_mbx_info *mbx) +{ + /* flush any uncompleted work */ + fm10k_mbx_reset_work(mbx); + + /* set local version to max and remote version to 0 */ + mbx->local = FM10K_SM_MBX_VERSION; + mbx->remote = 0; + + /* initialize tail and head */ + mbx->tail = 1; + mbx->head = 1; + + /* reset state back to connect */ + 
mbx->state = FM10K_STATE_CONNECT; +} + +/** + * fm10k_sm_mbx_connect - Start switch manager mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will initiate a mailbox connection with the switch + * manager. To do this it will first disconnect the mailbox, and then + * reconnect it in order to complete a reset of the mailbox. + * + * This function will return an error if the mailbox has not been initiated + * or is currently in use. + **/ +STATIC s32 fm10k_sm_mbx_connect(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_connect"); + + /* we cannot connect an uninitialized mailbox */ + if (!mbx->rx.buffer) + return FM10K_MBX_ERR_NO_SPACE; + + /* we cannot connect an already connected mailbox */ + if (mbx->state != FM10K_STATE_CLOSED) + return FM10K_MBX_ERR_BUSY; + + /* mailbox timeout can now become active */ + mbx->timeout = FM10K_MBX_INIT_TIMEOUT; + + /* Place mbx in ready to connect state */ + mbx->state = FM10K_STATE_CONNECT; + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + + /* reset interface back to connect */ + fm10k_sm_mbx_connect_reset(mbx); + + /* enable interrupt and notify other party of new message */ + mbx->mbx_lock = FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT | + FM10K_MBX_INTERRUPT_ENABLE; + + /* generate and load connect header into mailbox */ + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + fm10k_mbx_write(hw, mbx); + + /* enable interrupt and notify other party of new message */ + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_disconnect - Shutdown mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will shut down the mailbox. It places the mailbox first + * in the disconnect state, it then allows up to a predefined timeout for + * the mailbox to transition to close on its own. If this does not occur + * then the mailbox will be forced into the closed state. + * + * Any mailbox transactions not completed before calling this function + * are not guaranteed to complete and may be dropped. + **/ +STATIC void fm10k_sm_mbx_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + int timeout = mbx->timeout ? FM10K_MBX_DISCONNECT_TIMEOUT : 0; + + DEBUGFUNC("fm10k_sm_mbx_disconnect"); + + /* Place mbx in ready to disconnect state */ + mbx->state = FM10K_STATE_DISCONNECT; + + /* trigger interrupt to start shutdown process */ + FM10K_WRITE_REG(hw, mbx->mbx_reg, FM10K_MBX_REQ | + FM10K_MBX_INTERRUPT_DISABLE); + do { + usec_delay(FM10K_MBX_POLL_DELAY); + mbx->ops.process(hw, mbx); + timeout -= FM10K_MBX_POLL_DELAY; + } while ((timeout > 0) && (mbx->state != FM10K_STATE_CLOSED)); + + /* in case we didn't close just force the mailbox into shutdown */ + mbx->state = FM10K_STATE_CLOSED; + mbx->remote = 0; + fm10k_mbx_reset_work(mbx); + fm10k_mbx_update_max_size(mbx, 0); + + FM10K_WRITE_REG(hw, mbx->mbmem_reg, 0); +} + +/** + * fm10k_mbx_validate_fifo_hdr - Validate fields in the remote FIFO header + * @mbx: pointer to mailbox + * + * This function will parse up the fields in the mailbox header and return + * an error if the header contains any of a number of invalid configurations + * including unrecognized offsets or version numbers. 
+ **/ +STATIC s32 fm10k_sm_mbx_validate_fifo_hdr(struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 tail, head, ver; + + DEBUGFUNC("fm10k_mbx_validate_msg_hdr"); + + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_TAIL); + ver = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_VER); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_HEAD); + + switch (ver) { + case 0: + break; + case FM10K_SM_MBX_VERSION: + if (!head || head > FM10K_SM_MBX_FIFO_LEN) + return FM10K_MBX_ERR_HEAD; + if (!tail || tail > FM10K_SM_MBX_FIFO_LEN) + return FM10K_MBX_ERR_TAIL; + if (mbx->tail < head) + head += mbx->mbmem_len - 1; + if (tail < mbx->head) + tail += mbx->mbmem_len - 1; + if (fm10k_mbx_index_len(mbx, head, mbx->tail) > mbx->tail_len) + return FM10K_MBX_ERR_HEAD; + if (fm10k_mbx_index_len(mbx, mbx->head, tail) < mbx->mbmem_len) + break; + return FM10K_MBX_ERR_TAIL; + default: + return FM10K_MBX_ERR_SRC; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_process_error - Process header with error flag set + * @mbx: pointer to mailbox + * + * This function is meant to respond to a request where the error flag + * is set. As a result we will terminate a connection if one is present + * and fall back into the reset state with a connection header of version + * 0 (RESET). + **/ +STATIC void fm10k_sm_mbx_process_error(struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + + switch (state) { + case FM10K_STATE_DISCONNECT: + /* if there is an error just disconnect */ + mbx->remote = 0; + break; + case FM10K_STATE_OPEN: + /* flush any uncompleted work */ + fm10k_sm_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* try connnecting at lower version */ + if (mbx->remote) { + while (mbx->local > 1) + mbx->local--; + mbx->remote = 0; + } + break; + default: + break; + } + + fm10k_sm_mbx_create_connect_hdr(mbx, 0); +} + +/** + * fm10k_sm_mbx_create_error_message - Process an error in FIFO hdr + * @mbx: pointer to mailbox + * @err: local error encountered + * + * This function will interpret the error provided by err, and based on + * that it may set the error bit in the local message header + **/ +STATIC void fm10k_sm_mbx_create_error_msg(struct fm10k_mbx_info *mbx, s32 err) +{ + /* only generate an error message for these types */ + switch (err) { + case FM10K_MBX_ERR_TAIL: + case FM10K_MBX_ERR_HEAD: + case FM10K_MBX_ERR_SRC: + case FM10K_MBX_ERR_SIZE: + case FM10K_MBX_ERR_RSVD0: + break; + default: + return; + } + + /* process it as though we received an error, and send error reply */ + fm10k_sm_mbx_process_error(mbx); + fm10k_sm_mbx_create_connect_hdr(mbx, 1); +} + +/** + * fm10k_sm_mbx_receive - Take message from Rx mailbox FIFO and put it in Rx + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will dequeue one message from the Rx switch manager mailbox + * FIFO and place it in the Rx mailbox FIFO for processing by software. 
+ **/ +STATIC s32 fm10k_sm_mbx_receive(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, + u16 tail) +{ + /* reduce length by 1 to convert to a mask */ + u16 mbmem_len = mbx->mbmem_len - 1; + s32 err; + + DEBUGFUNC("fm10k_sm_mbx_receive"); + + /* push tail in front of head */ + if (tail < mbx->head) + tail += mbmem_len; + + /* copy data to the Rx FIFO */ + err = fm10k_mbx_push_tail(hw, mbx, tail); + if (err < 0) + return err; + + /* process messages if we have received any */ + fm10k_mbx_dequeue_rx(hw, mbx); + + /* guarantee head aligns with the end of the last message */ + mbx->head = fm10k_mbx_head_sub(mbx, mbx->pushed); + mbx->pushed = 0; + + /* clear any extra bits left over since index adds 1 extra bit */ + if (mbx->head > mbmem_len) + mbx->head -= mbmem_len; + + return err; +} + +/** + * fm10k_sm_mbx_transmit - Take message from Tx and put it in Tx mailbox FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will dequeue one message from the Tx mailbox FIFO and place + * it in the Tx switch manager mailbox FIFO for processing by hardware. + **/ +STATIC void fm10k_sm_mbx_transmit(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + struct fm10k_mbx_fifo *fifo = &mbx->tx; + /* reduce length by 1 to convert to a mask */ + u16 mbmem_len = mbx->mbmem_len - 1; + u16 tail_len, len = 0; + u32 *msg; + + DEBUGFUNC("fm10k_sm_mbx_transmit"); + + /* push head behind tail */ + if (mbx->tail < head) + head += mbmem_len; + + fm10k_mbx_pull_head(hw, mbx, head); + + /* determine msg aligned offset for end of buffer */ + do { + msg = fifo->buffer + fm10k_fifo_head_offset(fifo, len); + tail_len = len; + len += FM10K_TLV_DWORD_LEN(*msg); + } while ((len <= mbx->tail_len) && (len < mbmem_len)); + + /* guarantee we stop on a message boundary */ + if (mbx->tail_len > tail_len) { + mbx->tail = fm10k_mbx_tail_sub(mbx, mbx->tail_len - tail_len); + mbx->tail_len = tail_len; + } + + /* clear any extra bits left over since index adds 1 extra bit */ + if (mbx->tail > mbmem_len) + mbx->tail -= mbmem_len; +} + +/** + * fm10k_sm_mbx_create_reply - Generate reply based on state and remote head + * @mbx: pointer to mailbox + * @head: acknowledgement number + * + * This function will generate an outgoing message based on the current + * mailbox state and the remote fifo head. It will return the length + * of the outgoing message excluding header on success, and a negative value + * on error. + **/ +STATIC void fm10k_sm_mbx_create_reply(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* flush out Tx data */ + fm10k_sm_mbx_transmit(hw, mbx, head); + + /* generate new header based on data */ + if (mbx->tail_len || (mbx->state == FM10K_STATE_OPEN)) { + fm10k_sm_mbx_create_data_hdr(mbx); + } else { + mbx->remote = 0; + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + } + break; + case FM10K_STATE_CONNECT: + case FM10K_STATE_CLOSED: + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + break; + default: + break; + } +} + +/** + * fm10k_sm_mbx_process_reset - Process header with version == 0 (RESET) + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function is meant to respond to a request where the version data + * is set to 0. As such we will either terminate the connection or go + * into the connect state in order to re-establish the connection. 
This + * function can also be used to respond to an error as the connection + * resetting would also be a means of dealing with errors. + **/ +STATIC void fm10k_sm_mbx_process_reset(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + + switch (state) { + case FM10K_STATE_DISCONNECT: + /* drop remote connections and disconnect */ + mbx->state = FM10K_STATE_CLOSED; + mbx->remote = 0; + mbx->local = 0; + break; + case FM10K_STATE_OPEN: + /* flush any incomplete work */ + fm10k_sm_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* Update remote value to match local value */ + mbx->remote = mbx->local; + default: + break; + } + + fm10k_sm_mbx_create_reply(hw, mbx, mbx->tail); +} + +/** + * fm10k_sm_mbx_process_version_1 - Process header with version == 1 + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function is meant to process messages received when the remote + * mailbox is active. + **/ +STATIC s32 fm10k_sm_mbx_process_version_1(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 head, tail; + s32 len; + + /* pull all fields needed for verification */ + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_TAIL); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_HEAD); + + /* if we are in connect and wanting version 1 then start up and go */ + if (mbx->state == FM10K_STATE_CONNECT) { + if (!mbx->remote) + goto send_reply; + if (mbx->remote != 1) + return FM10K_MBX_ERR_SRC; + + mbx->state = FM10K_STATE_OPEN; + } + + do { + /* abort on message size errors */ + len = fm10k_sm_mbx_receive(hw, mbx, tail); + if (len < 0) + return len; + + /* continue until we have flushed the Rx FIFO */ + } while (len); + +send_reply: + fm10k_sm_mbx_create_reply(hw, mbx, head); + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_process - Process mailbox switch mailbox interrupt + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will process incoming mailbox events and generate mailbox + * replies. It will return a value indicating the number of DWORDs + * transmitted excluding header on success or a negative value on error. 
+ **/ +STATIC s32 fm10k_sm_mbx_process(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + s32 err; + + DEBUGFUNC("fm10k_sm_mbx_process"); + + /* we do not read mailbox if closed */ + if (mbx->state == FM10K_STATE_CLOSED) + return FM10K_SUCCESS; + + /* retrieve data from switch manager */ + err = fm10k_mbx_read(hw, mbx); + if (err) + return err; + + err = fm10k_sm_mbx_validate_fifo_hdr(mbx); + if (err < 0) + goto fifo_err; + + if (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_ERR)) { + fm10k_sm_mbx_process_error(mbx); + goto fifo_err; + } + + switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_VER)) { + case 0: + fm10k_sm_mbx_process_reset(hw, mbx); + break; + case FM10K_SM_MBX_VERSION: + err = fm10k_sm_mbx_process_version_1(hw, mbx); + break; + } + +fifo_err: + if (err < 0) + fm10k_sm_mbx_create_error_msg(mbx, err); + + /* report data to switch manager */ + fm10k_mbx_write(hw, mbx); + + return err; +} + +/** + * fm10k_sm_mbx_init - Initialize mailbox memory for PF/SM mailbox + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * + * This function for now is used to stub out the PF/SM mailbox + **/ +s32 fm10k_sm_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data) +{ + DEBUGFUNC("fm10k_sm_mbx_init"); + UNREFERENCED_1PARAMETER(hw); + + mbx->mbx_reg = FM10K_GMBX; + mbx->mbmem_reg = FM10K_MBMEM_PF(0); + + /* start out in closed state */ + mbx->state = FM10K_STATE_CLOSED; + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + /* start mailbox as timed out and let the reset_hw call + * set the timeout value to begin communications + */ + mbx->timeout = 0; + mbx->usec_delay = FM10K_MBX_INIT_DELAY; + + /* Split buffer for use by Tx/Rx FIFOs */ + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + mbx->mbmem_len = FM10K_MBMEM_PF_XOR; + + /* initialize the FIFOs, sizes are in 4 byte increments */ + fm10k_fifo_init(&mbx->tx, mbx->buffer, FM10K_MBX_TX_BUFFER_SIZE); + fm10k_fifo_init(&mbx->rx, &mbx->buffer[FM10K_MBX_TX_BUFFER_SIZE], + FM10K_MBX_RX_BUFFER_SIZE); + + /* initialize function pointers */ + mbx->ops.connect = fm10k_sm_mbx_connect; + mbx->ops.disconnect = fm10k_sm_mbx_disconnect; + mbx->ops.rx_ready = fm10k_mbx_rx_ready; + mbx->ops.tx_ready = fm10k_mbx_tx_ready; + mbx->ops.tx_complete = fm10k_mbx_tx_complete; + mbx->ops.enqueue_tx = fm10k_mbx_enqueue_tx; + mbx->ops.process = fm10k_sm_mbx_process; + mbx->ops.register_handlers = fm10k_mbx_register_handlers; + + return FM10K_SUCCESS; +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_mbx.h b/lib/librte_pmd_fm10k/base/fm10k_mbx.h new file mode 100644 index 0000000..6332584 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_mbx.h @@ -0,0 +1,329 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. 
Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_MBX_H_ +#define _FM10K_MBX_H_ + +/* forward declaration */ +struct fm10k_mbx_info; + +#include "fm10k_type.h" +#include "fm10k_tlv.h" + +/* PF Mailbox Registers */ +#define FM10K_MBMEM(_n) ((_n) + 0x18000) +#define FM10K_MBMEM_VF(_n, _m) (((_n) * 0x10) + (_m) + 0x18000) +#define FM10K_MBMEM_SM(_n) ((_n) + 0x18400) +#define FM10K_MBMEM_PF(_n) ((_n) + 0x18600) +/* XOR provides means of switching from Tx to Rx FIFO */ +#define FM10K_MBMEM_PF_XOR (FM10K_MBMEM_SM(0) ^ FM10K_MBMEM_PF(0)) +#define FM10K_MBX(_n) ((_n) + 0x18800) +#define FM10K_MBX_OWNER 0x00000001 +#define FM10K_MBX_REQ 0x00000002 +#define FM10K_MBX_ACK 0x00000004 +#define FM10K_MBX_REQ_INTERRUPT 0x00000008 +#define FM10K_MBX_ACK_INTERRUPT 0x00000010 +#define FM10K_MBX_INTERRUPT_ENABLE 0x00000020 +#define FM10K_MBX_INTERRUPT_DISABLE 0x00000040 +#define FM10K_MBICR(_n) ((_n) + 0x18840) +#define FM10K_GMBX 0x18842 + +/* VF Mailbox Registers */ +#define FM10K_VFMBX 0x00010 +#define FM10K_VFMBMEM(_n) ((_n) + 0x00020) +#define FM10K_VFMBMEM_LEN 16 +#define FM10K_VFMBMEM_VF_XOR (FM10K_VFMBMEM_LEN / 2) + +/* Delays/timeouts */ +#define FM10K_MBX_DISCONNECT_TIMEOUT 500 +#define FM10K_MBX_POLL_DELAY 19 +#define FM10K_MBX_INT_DELAY 20 + +#define FM10K_WRITE_MBX(hw, reg, value) FM10K_WRITE_REG(hw, reg, value) + +/* PF/VF Mailbox state machine + * + * +----------+ connect() +----------+ + * | CLOSED | --------------> | CONNECT | + * +----------+ +----------+ + * ^ ^ | + * | rcv: rcv: | | rcv: + * | Connect Disconnect | | Connect + * | Disconnect Error | | Data + * | | | + * | | V + * +----------+ disconnect() +----------+ + * |DISCONNECT| <-------------- | OPEN | + * +----------+ +----------+ + * + * The diagram above describes the PF/VF mailbox state machine. There + * are four main states to this machine. + * Closed: This state represents a mailbox that is in a standby state + * with interrupts disabled. In this state the mailbox should not + * read the mailbox or write any data. The only means of exiting + * this state is for the system to make the connect() call for the + * mailbox, it will then transition to the connect state. + * Connect: In this state the mailbox is seeking a connection. It will + * post a connect message with no specified destination and will + * wait for a reply from the other side of the mailbox. This state + * is exited when either a connect with the local mailbox as the + * destination is received or when a data message is received with + * a valid sequence number. 
+ * Open: In this state the mailbox is able to transfer data between the local + * entity and the remote. It will fall back to connect in the event of + * receiving either an error message, or a disconnect message. It will + * transition to disconnect on a call to disconnect(); + * Disconnect: In this state the mailbox is attempting to gracefully terminate + * the connection. It will do so at the first point where it knows + * that the remote endpoint is either done sending, or when the + * remote endpoint has fallen back into connect. + */ +enum fm10k_mbx_state { + FM10K_STATE_CLOSED, + FM10K_STATE_CONNECT, + FM10K_STATE_OPEN, + FM10K_STATE_DISCONNECT, +}; + +/* PF/VF Mailbox header format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | Size/Err_no/CRC | Rsvd0 | Head | Tail | Type | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * The layout above describes the format for the header used in the PF/VF + * mailbox. The header is broken out into the following fields: + * Type: There are 4 supported message types + * 0x8: Data header - used to transport message data + * 0xC: Connect header - used to establish connection + * 0xD: Disconnect header - used to tear down a connection + * 0xE: Error header - used to address message exceptions + * Tail: Tail index for local FIFO + * Tail index actually consists of two parts. The MSB of + * the head is a loop tracker, it is 0 on an even numbered + * loop through the FIFO, and 1 on the odd numbered loops. + * To get the actual mailbox offset based on the tail it + * is necessary to add bit 3 to bit 0 and clear bit 3. This + * gives us a valid range of 0x1 - 0xE. + * Head: Head index for remote FIFO + * Head index follows the same format as the tail index. + * Rsvd0: Reserved 0 portion of the mailbox header + * CRC: Running CRC for all data since connect plus current message header + * Size: Maximum message size - Applies only to connect headers + * The maximum message size is provided during connect to avoid + * jamming the mailbox with messages that do not fit. + * Err_no: Error number - Applies only to error headers + * The error number provides a indication of the type of error + * experienced. 
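+ * + * As a worked example (illustrative values) using the SET/GET macros + * defined below, a data header with Tail 0x3, Head 0x9 and a running + * CRC of 0xABCD would be built as + * FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DATA, TYPE) | + * FM10K_MSG_HDR_FIELD_SET(0x3, TAIL) | + * FM10K_MSG_HDR_FIELD_SET(0x9, HEAD) | + * FM10K_MSG_HDR_FIELD_SET(0xABCD, CRC) + * which evaluates to 0xABCD0938, and FM10K_MSG_HDR_FIELD_GET(hdr, HEAD) + * recovers the 0x9.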
+ */ + +/* macros for retriving and setting header values */ +#define FM10K_MSG_HDR_MASK(name) \ + ((0x1u << FM10K_MSG_##name##_SIZE) - 1) +#define FM10K_MSG_HDR_FIELD_SET(value, name) \ + (((u32)(value) & FM10K_MSG_HDR_MASK(name)) << FM10K_MSG_##name##_SHIFT) +#define FM10K_MSG_HDR_FIELD_GET(value, name) \ + ((u16)((value) >> FM10K_MSG_##name##_SHIFT) & FM10K_MSG_HDR_MASK(name)) + +/* offsets shared between all headers */ +#define FM10K_MSG_TYPE_SHIFT 0 +#define FM10K_MSG_TYPE_SIZE 4 +#define FM10K_MSG_TAIL_SHIFT 4 +#define FM10K_MSG_TAIL_SIZE 4 +#define FM10K_MSG_HEAD_SHIFT 8 +#define FM10K_MSG_HEAD_SIZE 4 +#define FM10K_MSG_RSVD0_SHIFT 12 +#define FM10K_MSG_RSVD0_SIZE 4 + +/* offsets for data/disconnect headers */ +#define FM10K_MSG_CRC_SHIFT 16 +#define FM10K_MSG_CRC_SIZE 16 + +/* offsets for connect headers */ +#define FM10K_MSG_CONNECT_SIZE_SHIFT 16 +#define FM10K_MSG_CONNECT_SIZE_SIZE 16 + +/* offsets for error headers */ +#define FM10K_MSG_ERR_NO_SHIFT 16 +#define FM10K_MSG_ERR_NO_SIZE 16 + +enum fm10k_msg_type { + FM10K_MSG_DATA = 0x8, + FM10K_MSG_CONNECT = 0xC, + FM10K_MSG_DISCONNECT = 0xD, + FM10K_MSG_ERROR = 0xE, +}; + +/* HNI/SM Mailbox FIFO format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-------+-----------------------+-------+-----------------------+ + * | Error | Remote Head |Version| Local Tail | + * +-------+-----------------------+-------+-----------------------+ + * | | + * . Local FIFO Data . + * . . + * +-------+-----------------------+-------+-----------------------+ + * + * The layout above describes the format for the FIFOs used by the host + * network interface and the switch manager to communicate messages back + * and forth. Both the HNI and the switch maintain one such FIFO. The + * layout in memory has the switch manager FIFO followed immediately by + * the HNI FIFO. For this reason I am using just the pointer to the + * HNI FIFO in the mailbox ops as the offset between the two is fixed. + * + * The header for the FIFO is broken out into the following fields: + * Local Tail: Offset into FIFO region for next DWORD to write. + * Version: Version info for mailbox, only values of 0/1 are supported. + * Remote Head: Offset into remote FIFO to indicate how much we have read. + * Error: Error indication, values TBD. + */ + +/* version number for switch manager mailboxes */ +#define FM10K_SM_MBX_VERSION 1 +#define FM10K_SM_MBX_FIFO_LEN (FM10K_MBMEM_PF_XOR - 1) +#define FM10K_SM_MBX_FIFO_HDR_LEN 1 + +/* offsets shared between all SM FIFO headers */ +#define FM10K_MSG_SM_TAIL_SHIFT 0 +#define FM10K_MSG_SM_TAIL_SIZE 12 +#define FM10K_MSG_SM_VER_SHIFT 12 +#define FM10K_MSG_SM_VER_SIZE 4 +#define FM10K_MSG_SM_HEAD_SHIFT 16 +#define FM10K_MSG_SM_HEAD_SIZE 12 +#define FM10K_MSG_SM_ERR_SHIFT 28 +#define FM10K_MSG_SM_ERR_SIZE 4 + +/* All error messages returned by mailbox functions + * The value -511 is 0xFE01 in hex. The idea is to order the errors + * from 0xFE01 - 0xFEFF so error codes are easily visible in the mailbox + * messages. This also helps to avoid error number collisions as Linux + * doesn't appear to use error numbers 256 - 511. 
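+ * + * For example, FM10K_MBX_ERR_TAIL below is FM10K_MBX_ERR(0x05), i.e. + * 5 - 512 = -507, which appears as 0xFE05 when truncated to 16 bits.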
+ */ +#define FM10K_MBX_ERR(_n) ((_n) - 512) +#define FM10K_MBX_ERR_NO_MBX FM10K_MBX_ERR(0x01) +#define FM10K_MBX_ERR_NO_MSG FM10K_MBX_ERR(0x02) +#define FM10K_MBX_ERR_NO_SPACE FM10K_MBX_ERR(0x03) +#define FM10K_MBX_ERR_LOCK FM10K_MBX_ERR(0x04) +#define FM10K_MBX_ERR_TAIL FM10K_MBX_ERR(0x05) +#define FM10K_MBX_ERR_HEAD FM10K_MBX_ERR(0x06) +#define FM10K_MBX_ERR_DST FM10K_MBX_ERR(0x07) +#define FM10K_MBX_ERR_SRC FM10K_MBX_ERR(0x08) +#define FM10K_MBX_ERR_TYPE FM10K_MBX_ERR(0x09) +#define FM10K_MBX_ERR_LEN FM10K_MBX_ERR(0x0A) +#define FM10K_MBX_ERR_SIZE FM10K_MBX_ERR(0x0B) +#define FM10K_MBX_ERR_BUSY FM10K_MBX_ERR(0x0C) +#define FM10K_MBX_ERR_VALUE FM10K_MBX_ERR(0x0D) +#define FM10K_MBX_ERR_RSVD0 FM10K_MBX_ERR(0x0E) +#define FM10K_MBX_ERR_CRC FM10K_MBX_ERR(0x0F) + +#define FM10K_MBX_CRC_SEED 0xFFFF + +struct fm10k_mbx_ops { + s32 (*connect)(struct fm10k_hw *, struct fm10k_mbx_info *); + void (*disconnect)(struct fm10k_hw *, struct fm10k_mbx_info *); + bool (*rx_ready)(struct fm10k_mbx_info *); + bool (*tx_ready)(struct fm10k_mbx_info *, u16); + bool (*tx_complete)(struct fm10k_mbx_info *); + s32 (*enqueue_tx)(struct fm10k_hw *, struct fm10k_mbx_info *, + const u32 *); + s32 (*process)(struct fm10k_hw *, struct fm10k_mbx_info *); + s32 (*register_handlers)(struct fm10k_mbx_info *, + const struct fm10k_msg_data *); +}; + +struct fm10k_mbx_fifo { + u32 *buffer; + u16 head; + u16 tail; + u16 size; +}; + +/* size of buffer to be stored in mailbox for FIFOs */ +#define FM10K_MBX_TX_BUFFER_SIZE 512 +#define FM10K_MBX_RX_BUFFER_SIZE 128 +#define FM10K_MBX_BUFFER_SIZE \ + (FM10K_MBX_TX_BUFFER_SIZE + FM10K_MBX_RX_BUFFER_SIZE) + +/* minimum and maximum message size in dwords */ +#define FM10K_MBX_MSG_MAX_SIZE \ + ((FM10K_MBX_TX_BUFFER_SIZE - 1) & (FM10K_MBX_RX_BUFFER_SIZE - 1)) +#define FM10K_VFMBX_MSG_MTU ((FM10K_VFMBMEM_LEN / 2) - 1) + +#define FM10K_MBX_INIT_TIMEOUT 2000 /* number of retries on mailbox */ +#define FM10K_MBX_INIT_DELAY 500 /* microseconds between retries */ + +struct fm10k_mbx_info { + /* function pointers for mailbox operations */ + struct fm10k_mbx_ops ops; + const struct fm10k_msg_data *msg_data; + + /* message FIFOs */ + struct fm10k_mbx_fifo rx; + struct fm10k_mbx_fifo tx; + + /* delay for handling timeouts */ + u32 timeout; + u32 usec_delay; + + /* mailbox state info */ + u32 mbx_reg, mbmem_reg, mbx_lock, mbx_hdr; + u16 max_size, mbmem_len; + u16 tail, tail_len, pulled; + u16 head, head_len, pushed; + u16 local, remote; + enum fm10k_mbx_state state; + + /* result of last mailbox test */ + s32 test_result; + + /* statistics */ + u64 tx_busy; + u64 tx_dropped; + u64 tx_messages; + u64 tx_dwords; + u64 rx_messages; + u64 rx_dwords; + u64 rx_parse_err; + + /* Buffer to store messages */ + u32 buffer[FM10K_MBX_BUFFER_SIZE]; +}; + +s32 fm10k_pfvf_mbx_init(struct fm10k_hw *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *, u8); +s32 fm10k_sm_mbx_init(struct fm10k_hw *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *); + +#endif /* _FM10K_MBX_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_osdep.h b/lib/librte_pmd_fm10k/base/fm10k_osdep.h new file mode 100644 index 0000000..04f8fe9 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_osdep.h @@ -0,0 +1,148 @@ +/******************************************************************************* + +Copyright (c) 2013-2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. 
Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_OSDEP_H_ +#define _FM10K_OSDEP_H_ + +#include <stdint.h> +#include <string.h> +#include <rte_atomic.h> +#include <rte_byteorder.h> +#include <rte_cycles.h> +#include "../fm10k_logs.h" + +/* TODO: this does not look like it should be used... */ +#define ERROR_REPORT2(v1, v2, v3) do { } while (0) + +#define STATIC static +#define DEBUGFUNC(F) DEBUGOUT(F); +#define DEBUGOUT(S, args...) PMD_DRV_LOG_RAW(DEBUG, S, ##args) +#define DEBUGOUT1(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT2(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT3(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT6(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT7(S, args...) 
DEBUGOUT(S, ##args) + +#define FALSE 0 +#define TRUE 1 +#ifndef false +#define false FALSE +#endif +#ifndef true +#define true TRUE +#endif + +typedef uint8_t u8; +typedef int8_t s8; +typedef uint16_t u16; +typedef int16_t s16; +typedef uint32_t u32; +typedef int32_t s32; +typedef int64_t s64; +typedef uint64_t u64; +typedef int bool; + +#ifndef __le16 +#define __le16 u16 +#define __le32 u32 +#define __le64 u64 +#endif +#ifndef __be16 +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 +#endif + +/* offsets are WORD offsets, not BYTE offsets */ +#define FM10K_WRITE_REG(hw, reg, val) \ + ((((volatile uint32_t *)(hw)->hw_addr)[(reg)]) = ((uint32_t)(val))) +#define FM10K_READ_REG(hw, reg) \ + (((volatile uint32_t *)(hw)->hw_addr)[(reg)]) +#define FM10K_WRITE_FLUSH(a) FM10K_READ_REG(a, FM10K_CTRL) + +#define FM10K_PCI_REG(reg) (*((volatile uint32_t *)(reg))) + +#define FM10K_PCI_REG_WRITE(reg, value) do { \ + FM10K_PCI_REG((reg)) = (value); \ +} while (0) + +/* not implemented */ +#define FM10K_READ_PCI_WORD(hw, reg) 0 + +#define FM10K_WRITE_MBX(hw, reg, value) FM10K_WRITE_REG(hw, reg, value) +#define FM10K_READ_MBX(hw, reg) FM10K_READ_REG(hw, reg) + +#define FM10K_LE16_TO_CPU rte_le_to_cpu_16 +#define FM10K_LE32_TO_CPU rte_le_to_cpu_32 +#define FM10K_CPU_TO_LE32 rte_cpu_to_le_32 +#define FM10K_CPU_TO_LE16 rte_cpu_to_le_16 + +#define FM10K_RMB rte_rmb +#define FM10K_WMB rte_wmb + +#define usec_delay rte_delay_us + +#define FM10K_REMOVED(hw_addr) (!(hw_addr)) + +#ifndef FM10K_IS_ZERO_ETHER_ADDR +/* make certain address is not 0 */ +#define FM10K_IS_ZERO_ETHER_ADDR(addr) \ +(!((addr)[0] | (addr)[1] | (addr)[2] | (addr)[3] | (addr)[4] | (addr)[5])) +#endif + +#ifndef FM10K_IS_MULTICAST_ETHER_ADDR +#define FM10K_IS_MULTICAST_ETHER_ADDR(addr) ((addr)[0] & 0x1) +#endif + +#ifndef FM10K_IS_VALID_ETHER_ADDR +/* make certain address is not multicast or 0 */ +#define FM10K_IS_VALID_ETHER_ADDR(addr) \ +(!FM10K_IS_MULTICAST_ETHER_ADDR(addr) && !FM10K_IS_ZERO_ETHER_ADDR(addr)) +#endif + +#ifndef do_div +#define do_div(n, base) ({\ + (n) = (n) / (base);\ +}) +#endif /* do_div */ + +/* DPDK can't access IOMEM directly */ +#ifndef FM10K_WRITE_SW_REG +#define FM10K_WRITE_SW_REG(v1, v2, v3) do { } while (0) +#endif + +#ifndef fm10k_read_reg +#define fm10k_read_reg FM10K_READ_REG +#endif + +#endif /* _FM10K_OSDEP_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_pf.c b/lib/librte_pmd_fm10k/base/fm10k_pf.c new file mode 100644 index 0000000..3545a24 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_pf.c @@ -0,0 +1,1992 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. 
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_pf.h" +#include "fm10k_vf.h" + +/** + * fm10k_reset_hw_pf - PF hardware reset + * @hw: pointer to hardware structure + * + * This function should return the hardware to a state similar to the + * one it is in after being powered on. + **/ +STATIC s32 fm10k_reset_hw_pf(struct fm10k_hw *hw) +{ + s32 err; + u32 reg; + u16 i; + + DEBUGFUNC("fm10k_reset_hw_pf"); + + /* Disable interrupts */ + FM10K_WRITE_REG(hw, FM10K_EIMR, FM10K_EIMR_DISABLE(ALL)); + + /* Lock ITR2 reg 0 into itself and disable interrupt moderation */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), 0); + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, 0); + + /* We assume here Tx and Rx queue 0 are owned by the PF */ + + /* Shut off VF access to their queues forcing them to queue 0 */ + for (i = 0; i < FM10K_TQMAP_TABLE_SIZE; i++) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(i), 0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(i), 0); + } + + /* shut down all rings */ + err = fm10k_disable_queues_generic(hw, FM10K_MAX_QUEUES); + if (err) + return err; + + /* Verify that DMA is no longer active */ + reg = FM10K_READ_REG(hw, FM10K_DMA_CTRL); + if (reg & (FM10K_DMA_CTRL_TX_ACTIVE | FM10K_DMA_CTRL_RX_ACTIVE)) + return FM10K_ERR_DMA_PENDING; + + /* verify the switch is ready for reset */ + reg = FM10K_READ_REG(hw, FM10K_DMA_CTRL2); + if (!(reg & FM10K_DMA_CTRL2_SWITCH_READY)) + goto out; + + /* Inititate data path reset */ + reg |= FM10K_DMA_CTRL_DATAPATH_RESET; + FM10K_WRITE_REG(hw, FM10K_DMA_CTRL, reg); + + /* Flush write and allow 100us for reset to complete */ + FM10K_WRITE_FLUSH(hw); + usec_delay(FM10K_RESET_TIMEOUT); + + /* Verify we made it out of reset */ + reg = FM10K_READ_REG(hw, FM10K_IP); + if (!(reg & FM10K_IP_NOTINRESET)) + err = FM10K_ERR_RESET_FAILED; + +out: + return err; +} + +/** + * fm10k_is_ari_hierarchy_pf - Indicate ARI hierarchy support + * @hw: pointer to hardware structure + * + * Looks at the ARI hierarchy bit to determine whether ARI is supported or not. 
+ **/ +STATIC bool fm10k_is_ari_hierarchy_pf(struct fm10k_hw *hw) +{ + u16 sriov_ctrl = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_SRIOV_CTRL); + + DEBUGFUNC("fm10k_is_ari_hierarchy_pf"); + + return !!(sriov_ctrl & FM10K_PCIE_SRIOV_CTRL_VFARI); +} + +/** + * fm10k_init_hw_pf - PF hardware initialization + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_init_hw_pf(struct fm10k_hw *hw) +{ + u32 dma_ctrl, txqctl; + u16 i; + + DEBUGFUNC("fm10k_init_hw_pf"); + + /* Establish default VSI as valid */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(fm10k_dglort_default), 0); + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(fm10k_dglort_default), + FM10K_DGLORTMAP_ANY); + + /* Invalidate all other GLORT entries */ + for (i = 1; i < FM10K_DGLORT_COUNT; i++) + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(i), FM10K_DGLORTMAP_NONE); + + /* reset ITR2(0) to point to itself */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), 0); + + /* reset VF ITR2(0) to point to 0 avoid PF registers */ + FM10K_WRITE_REG(hw, FM10K_ITR2(FM10K_ITR_REG_COUNT_PF), 0); + + /* loop through all PF ITR2 registers pointing them to the previous */ + for (i = 1; i < FM10K_ITR_REG_COUNT_PF; i++) + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - 1); + + /* Enable interrupt moderator if not already enabled */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, FM10K_INT_CTRL_ENABLEMODERATOR); + + /* compute the default txqctl configuration */ + txqctl = FM10K_TXQCTL_PF | FM10K_TXQCTL_UNLIMITED_BW | + (hw->mac.default_vid << FM10K_TXQCTL_VID_SHIFT); + + for (i = 0; i < FM10K_MAX_QUEUES; i++) { + /* configure rings for 256 Queue / 32 Descriptor cache mode */ + FM10K_WRITE_REG(hw, FM10K_TQDLOC(i), + (i * FM10K_TQDLOC_BASE_32_DESC) | + FM10K_TQDLOC_SIZE_32_DESC); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), txqctl); + + /* configure rings to provide TPH processing hints */ + FM10K_WRITE_REG(hw, FM10K_TPH_TXCTRL(i), + FM10K_TPH_TXCTRL_DESC_TPHEN | + FM10K_TPH_TXCTRL_DESC_RROEN | + FM10K_TPH_TXCTRL_DESC_WROEN | + FM10K_TPH_TXCTRL_DATA_RROEN); + FM10K_WRITE_REG(hw, FM10K_TPH_RXCTRL(i), + FM10K_TPH_RXCTRL_DESC_TPHEN | + FM10K_TPH_RXCTRL_DESC_RROEN | + FM10K_TPH_RXCTRL_DATA_WROEN | + FM10K_TPH_RXCTRL_HDR_WROEN); + } + + /* set max hold interval to align with 1.024 usec in all modes */ + switch (hw->bus.speed) { + case fm10k_bus_speed_2500: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN1; + break; + case fm10k_bus_speed_5000: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN2; + break; + case fm10k_bus_speed_8000: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN3; + break; + default: + dma_ctrl = 0; + break; + } + + /* Configure TSO flags */ + FM10K_WRITE_REG(hw, FM10K_DTXTCPFLGL, FM10K_TSO_FLAGS_LOW); + FM10K_WRITE_REG(hw, FM10K_DTXTCPFLGH, FM10K_TSO_FLAGS_HI); + + /* Enable DMA engine + * Set Rx Descriptor size to 32 + * Set Minimum MSS to 64 + * Set Maximum number of Rx queues to 256 / 32 Descriptor + */ + dma_ctrl |= FM10K_DMA_CTRL_TX_ENABLE | FM10K_DMA_CTRL_RX_ENABLE | + FM10K_DMA_CTRL_RX_DESC_SIZE | FM10K_DMA_CTRL_MINMSS_64 | + FM10K_DMA_CTRL_32_DESC; + + FM10K_WRITE_REG(hw, FM10K_DMA_CTRL, dma_ctrl); + + /* record maximum queue count, we limit ourselves to 128 */ + hw->mac.max_queues = FM10K_MAX_QUEUES_PF; + + /* We support either 64 VFs or 7 VFs depending on if we have ARI */ + hw->iov.total_vfs = fm10k_is_ari_hierarchy_pf(hw) ? 
64 : 7; + + return FM10K_SUCCESS; +} + +/** + * fm10k_is_slot_appropriate_pf - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. + **/ +STATIC bool fm10k_is_slot_appropriate_pf(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_is_slot_appropriate_pf"); + + return (hw->bus.speed == hw->bus_caps.speed) && + (hw->bus.width == hw->bus_caps.width); +} + +/** + * fm10k_update_vlan_pf - Update status of VLAN ID in VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @vsi: Index indicating VF ID or PF ID in table + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for the corresponding function. In addition to the + * standard set/clear that supports one bit a multi-bit write is + * supported to set 64 bits at a time. + **/ +STATIC s32 fm10k_update_vlan_pf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set) +{ + u32 vlan_table, reg, mask, bit, len; + + /* verify the VSI index is valid */ + if (vsi > FM10K_VLAN_TABLE_VSI_MAX) + return FM10K_ERR_PARAM; + + /* VLAN multi-bit write: + * The multi-bit write has several parts to it. + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | RSVD0 | Length |C|RSVD0| VLAN ID | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * VLAN ID: Vlan Starting value + * RSVD0: Reserved section, must be 0 + * C: Flag field, 0 is set, 1 is clear (Used in VF VLAN message) + * Length: Number of times to repeat the bit being set + */ + len = vid >> 16; + vid = (vid << 17) >> 17; + + /* verify the reserved 0 fields are 0 */ + if (len >= FM10K_VLAN_TABLE_VID_MAX || vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* Loop through the table updating all required VLANs */ + for (reg = FM10K_VLAN_TABLE(vsi, vid / 32), bit = vid % 32; + len < FM10K_VLAN_TABLE_VID_MAX; + len -= 32 - bit, reg++, bit = 0) { + /* record the initial state of the register */ + vlan_table = FM10K_READ_REG(hw, reg); + + /* truncate mask if we are at the start or end of the run */ + mask = (~(u32)0 >> ((len < 31) ? 31 - len : 0)) << bit; + + /* make necessary modifications to the register */ + mask &= set ? ~vlan_table : vlan_table; + if (mask) + FM10K_WRITE_REG(hw, reg, vlan_table ^ mask); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_read_mac_addr_pf - Read device MAC address + * @hw: pointer to the HW structure + * + * Reads the device MAC address from the SM_AREA and stores the value. 
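+ * + * For example (illustrative address only), a permanent MAC of + * 00:a0:c9:12:34:56 would be read back as SM_AREA(1) = 0x00a0c9ff and + * SM_AREA(0) = 0xff123456; the all-ones byte in each word is what the + * validity checks below test for.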
+ **/ +STATIC s32 fm10k_read_mac_addr_pf(struct fm10k_hw *hw) +{ + u8 perm_addr[ETH_ALEN]; + u32 serial_num; + int i; + + DEBUGFUNC("fm10k_read_mac_addr_pf"); + + serial_num = FM10K_READ_REG(hw, FM10K_SM_AREA(1)); + + /* last byte should be all 1's */ + if ((~serial_num) << 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[0] = (u8)(serial_num >> 24); + perm_addr[1] = (u8)(serial_num >> 16); + perm_addr[2] = (u8)(serial_num >> 8); + + serial_num = FM10K_READ_REG(hw, FM10K_SM_AREA(0)); + + /* first byte should be all 1's */ + if ((~serial_num) >> 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[3] = (u8)(serial_num >> 16); + perm_addr[4] = (u8)(serial_num >> 8); + perm_addr[5] = (u8)(serial_num); + + for (i = 0; i < ETH_ALEN; i++) { + hw->mac.perm_addr[i] = perm_addr[i]; + hw->mac.addr[i] = perm_addr[i]; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_glort_valid_pf - Validate that the provided glort is valid + * @hw: pointer to the HW structure + * @glort: base glort to be validated + * + * This function will return an error if the provided glort is invalid + **/ +bool fm10k_glort_valid_pf(struct fm10k_hw *hw, u16 glort) +{ + glort &= hw->mac.dglort_map >> FM10K_DGLORTMAP_MASK_SHIFT; + + return glort == (hw->mac.dglort_map & FM10K_DGLORTMAP_NONE); +} + +/** + * fm10k_update_xc_addr_pf - Update device addresses + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure + * + * This function generates a message to the Switch API requesting + * that the given logical port add/remove the given L2 MAC/VLAN address. + **/ +STATIC s32 fm10k_update_xc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + struct fm10k_mac_update mac_update; + u32 msg[5]; + + DEBUGFUNC("fm10k_update_xc_addr_pf"); + + /* if glort or VLAN are not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort) || vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* record fields */ + mac_update.mac_lower = FM10K_CPU_TO_LE32(((u32)mac[2] << 24) | + ((u32)mac[3] << 16) | + ((u32)mac[4] << 8) | + ((u32)mac[5])); + mac_update.mac_upper = FM10K_CPU_TO_LE16(((u32)mac[0] << 8) | + ((u32)mac[1])); + mac_update.vlan = FM10K_CPU_TO_LE16(vid); + mac_update.glort = FM10K_CPU_TO_LE16(glort); + mac_update.action = add ? 0 : 1; + mac_update.flags = flags; + + /* populate mac_update fields */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_UPDATE_MAC_FWD_RULE); + fm10k_tlv_attr_put_le_struct(msg, FM10K_PF_ATTR_ID_MAC_UPDATE, + &mac_update, sizeof(mac_update)); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_uc_addr_pf - Update device unicast addresses + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure + * + * This function is used to add or remove unicast addresses for + * the PF. 
+ **/ +STATIC s32 fm10k_update_uc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + DEBUGFUNC("fm10k_update_uc_addr_pf"); + + /* verify MAC address is valid */ + if (!FM10K_IS_VALID_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + return fm10k_update_xc_addr_pf(hw, glort, mac, vid, add, flags); +} + +/** + * fm10k_update_mc_addr_pf - Update device multicast addresses + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses for + * the PF. + **/ +STATIC s32 fm10k_update_mc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add) +{ + DEBUGFUNC("fm10k_update_mc_addr_pf"); + + /* verify multicast address is valid */ + if (!FM10K_IS_MULTICAST_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + return fm10k_update_xc_addr_pf(hw, glort, mac, vid, add, 0); +} + +/** + * fm10k_update_xcast_mode_pf - Request update of multicast mode + * @hw: pointer to hardware structure + * @glort: base resource tag for this request + * @mode: integer value indicating mode being requested + * + * This function will attempt to request a higher mode for the port + * so that it can enable either multicast, multicast promiscuous, or + * promiscuous mode of operation. + **/ +STATIC s32 fm10k_update_xcast_mode_pf(struct fm10k_hw *hw, u16 glort, u8 mode) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3], xcast_mode; + + DEBUGFUNC("fm10k_update_xcast_mode_pf"); + + if (mode > FM10K_XCAST_MODE_NONE) + return FM10K_ERR_PARAM; + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* write xcast mode as a single u32 value, + * lower 16 bits: glort + * upper 16 bits: mode + */ + xcast_mode = ((u32)mode << 16) | glort; + + /* generate message requesting to change xcast mode */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_XCAST_MODES); + fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_XCAST_MODE, xcast_mode); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_int_moderator_pf - Update interrupt moderator linked list + * @hw: pointer to hardware structure + * + * This function walks through the MSI-X vector table to determine the + * number of active interrupts and based on that information updates the + * interrupt moderator linked list. 
+ **/ +STATIC void fm10k_update_int_moderator_pf(struct fm10k_hw *hw) +{ + u32 i; + + /* Disable interrupt moderator */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, 0); + + /* loop through PF from last to first looking enabled vectors */ + for (i = FM10K_ITR_REG_COUNT_PF - 1; i; i--) { + if (!FM10K_READ_REG(hw, FM10K_MSIX_VECTOR_MASK(i))) + break; + } + + /* always reset VFITR2[0] to point to last enabled PF vector */ + FM10K_WRITE_REG(hw, FM10K_ITR2(FM10K_ITR_REG_COUNT_PF), i); + + /* reset ITR2[0] to point to last enabled PF vector */ + if (!hw->iov.num_vfs) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), i); + + /* Enable interrupt moderator */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, FM10K_INT_CTRL_ENABLEMODERATOR); +} + +/** + * fm10k_update_lport_state_pf - Notify the switch of a change in port state + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @count: number of logical ports being updated + * @enable: boolean value indicating enable or disable + * + * This function is used to add/remove a logical port from the switch. + **/ +STATIC s32 fm10k_update_lport_state_pf(struct fm10k_hw *hw, u16 glort, + u16 count, bool enable) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3], lport_msg; + + DEBUGFUNC("fm10k_lport_state_pf"); + + /* do nothing if we are being asked to create or destroy 0 ports */ + if (!count) + return FM10K_SUCCESS; + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* construct the lport message from the 2 pieces of data we have */ + lport_msg = ((u32)count << 16) | glort; + + /* generate lport create/delete message */ + fm10k_tlv_msg_init(msg, enable ? FM10K_PF_MSG_ID_LPORT_CREATE : + FM10K_PF_MSG_ID_LPORT_DELETE); + fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_PORT, lport_msg); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_configure_dglort_map_pf - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned. 
+ **/ +STATIC s32 fm10k_configure_dglort_map_pf(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + u16 glort, queue_count, vsi_count, pc_count; + u16 vsi, queue, pc, q_idx; + u32 txqctl, dglortdec, dglortmap; + + /* verify the dglort pointer */ + if (!dglort) + return FM10K_ERR_PARAM; + + /* verify the dglort values */ + if ((dglort->idx > 7) || (dglort->rss_l > 7) || (dglort->pc_l > 3) || + (dglort->vsi_l > 6) || (dglort->vsi_b > 64) || + (dglort->queue_l > 8) || (dglort->queue_b >= 256)) + return FM10K_ERR_PARAM; + + /* determine count of VSIs and queues */ + queue_count = 1 << (dglort->rss_l + dglort->pc_l); + vsi_count = 1 << (dglort->vsi_l + dglort->queue_l); + glort = dglort->glort; + q_idx = dglort->queue_b; + + /* configure SGLORT for queues */ + for (vsi = 0; vsi < vsi_count; vsi++, glort++) { + for (queue = 0; queue < queue_count; queue++, q_idx++) { + if (q_idx >= FM10K_MAX_QUEUES) + break; + + FM10K_WRITE_REG(hw, FM10K_TX_SGLORT(q_idx), glort); + FM10K_WRITE_REG(hw, FM10K_RX_SGLORT(q_idx), glort); + } + } + + /* determine count of PCs and queues */ + queue_count = 1 << (dglort->queue_l + dglort->rss_l + dglort->vsi_l); + pc_count = 1 << dglort->pc_l; + + /* configure PC for Tx queues */ + for (pc = 0; pc < pc_count; pc++) { + q_idx = pc + dglort->queue_b; + for (queue = 0; queue < queue_count; queue++) { + if (q_idx >= FM10K_MAX_QUEUES) + break; + + txqctl = FM10K_READ_REG(hw, FM10K_TXQCTL(q_idx)); + txqctl &= ~FM10K_TXQCTL_PC_MASK; + txqctl |= pc << FM10K_TXQCTL_PC_SHIFT; + FM10K_WRITE_REG(hw, FM10K_TXQCTL(q_idx), txqctl); + + q_idx += pc_count; + } + } + + /* configure DGLORTDEC */ + dglortdec = ((u32)(dglort->rss_l) << FM10K_DGLORTDEC_RSSLENGTH_SHIFT) | + ((u32)(dglort->queue_b) << FM10K_DGLORTDEC_QBASE_SHIFT) | + ((u32)(dglort->pc_l) << FM10K_DGLORTDEC_PCLENGTH_SHIFT) | + ((u32)(dglort->vsi_b) << FM10K_DGLORTDEC_VSIBASE_SHIFT) | + ((u32)(dglort->vsi_l) << FM10K_DGLORTDEC_VSILENGTH_SHIFT) | + ((u32)(dglort->queue_l)); + if (dglort->inner_rss) + dglortdec |= FM10K_DGLORTDEC_INNERRSS_ENABLE; + + /* configure DGLORTMAP */ + dglortmap = (dglort->idx == fm10k_dglort_default) ? + FM10K_DGLORTMAP_ANY : FM10K_DGLORTMAP_ZERO; + dglortmap <<= dglort->vsi_l + dglort->queue_l + dglort->shared_l; + dglortmap |= dglort->glort; + + /* write values to hardware */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(dglort->idx), dglortdec); + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(dglort->idx), dglortmap); + + return FM10K_SUCCESS; +} + +u16 fm10k_queues_per_pool(struct fm10k_hw *hw) +{ + u16 num_pools = hw->iov.num_pools; + + return (num_pools > 32) ? 2 : (num_pools > 16) ? 4 : (num_pools > 8) ? + 8 : FM10K_MAX_QUEUES_POOL; +} + +u16 fm10k_vf_queue_index(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 num_vfs = hw->iov.num_vfs; + u16 vf_q_idx = FM10K_MAX_QUEUES; + + vf_q_idx -= fm10k_queues_per_pool(hw) * (num_vfs - vf_idx); + + return vf_q_idx; +} + +STATIC u16 fm10k_vectors_per_pool(struct fm10k_hw *hw) +{ + u16 num_pools = hw->iov.num_pools; + + return (num_pools > 32) ? 8 : (num_pools > 16) ? 
16 : + FM10K_MAX_VECTORS_POOL; +} + +STATIC u16 fm10k_vf_vector_index(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 vf_v_idx = FM10K_MAX_VECTORS_PF; + + vf_v_idx += fm10k_vectors_per_pool(hw) * vf_idx; + + return vf_v_idx; +} + +/** + * fm10k_iov_assign_resources_pf - Assign pool resources for virtualization + * @hw: pointer to the HW structure + * @num_vfs: number of VFs to be allocated + * @num_pools: number of virtualization pools to be allocated + * + * Allocates queues and traffic classes to virtualization entities to prepare + * the PF for SR-IOV and VMDq + **/ +STATIC s32 fm10k_iov_assign_resources_pf(struct fm10k_hw *hw, u16 num_vfs, + u16 num_pools) +{ + u16 qmap_stride, qpp, vpp, vf_q_idx, vf_q_idx0, qmap_idx; + u32 vid = hw->mac.default_vid << FM10K_TXQCTL_VID_SHIFT; + int i, j; + + /* hardware only supports up to 64 pools */ + if (num_pools > 64) + return FM10K_ERR_PARAM; + + /* the number of VFs cannot exceed the number of pools */ + if ((num_vfs > num_pools) || (num_vfs > hw->iov.total_vfs)) + return FM10K_ERR_PARAM; + + /* record number of virtualization entities */ + hw->iov.num_vfs = num_vfs; + hw->iov.num_pools = num_pools; + + /* determine qmap offsets and counts */ + qmap_stride = (num_vfs > 8) ? 32 : 256; + qpp = fm10k_queues_per_pool(hw); + vpp = fm10k_vectors_per_pool(hw); + + /* calculate starting index for queues */ + vf_q_idx = fm10k_vf_queue_index(hw, 0); + qmap_idx = 0; + + /* establish TCs with -1 credits and no quanta to prevent transmit */ + for (i = 0; i < num_vfs; i++) { + FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(i), 0); + FM10K_WRITE_REG(hw, FM10K_TC_RATE(i), 0); + FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(i), + FM10K_TC_CREDIT_CREDIT_MASK); + } + + /* zero out all mbmem registers */ + for (i = FM10K_VFMBMEM_LEN * num_vfs; i--;) + FM10K_WRITE_REG(hw, FM10K_MBMEM(i), 0); + + /* clear event notification of VF FLR */ + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(0), ~0); + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(1), ~0); + + /* loop through unallocated rings assigning them back to PF */ + for (i = FM10K_MAX_QUEUES_PF; i < vf_q_idx; i++) { + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), FM10K_TXQCTL_PF | vid); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), FM10K_RXQCTL_PF); + } + + /* PF should have already updated VFITR2[0] */ + + /* update all ITR registers to flow to VFITR2[0] */ + for (i = FM10K_ITR_REG_COUNT_PF + 1; i < FM10K_ITR_REG_COUNT; i++) { + if (!(i & (vpp - 1))) + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - vpp); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - 1); + } + + /* update PF ITR2[0] to reference the last vector */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), + fm10k_vf_vector_index(hw, num_vfs - 1)); + + /* loop through rings populating rings and TCs */ + for (i = 0; i < num_vfs; i++) { + /* record index for VF queue 0 for use in end of loop */ + vf_q_idx0 = vf_q_idx; + + for (j = 0; j < qpp; j++, qmap_idx++, vf_q_idx++) { + /* assign VF and locked TC to queues */ + FM10K_WRITE_REG(hw, FM10K_TXDCTL(vf_q_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(vf_q_idx), + (i << FM10K_TXQCTL_TC_SHIFT) | i | + FM10K_TXQCTL_VF | vid); + FM10K_WRITE_REG(hw, FM10K_RXDCTL(vf_q_idx), + FM10K_RXDCTL_WRITE_BACK_MIN_DELAY | + FM10K_RXDCTL_DROP_ON_EMPTY); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(vf_q_idx), + FM10K_RXQCTL_VF | + (i << FM10K_RXQCTL_VF_SHIFT)); + + /* map queue pair to VF */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), vf_q_idx); + } + + /* repeat the first ring for all of the remaining VF rings */ + for (; j 
< qmap_stride; j++, qmap_idx++) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), vf_q_idx0); + } + } + + /* loop through remaining indexes assigning all to queue 0 */ + while (qmap_idx < FM10K_TQMAP_TABLE_SIZE) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), 0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), 0); + qmap_idx++; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_configure_tc_pf - Configure the shaping group for VF + * @hw: pointer to the HW structure + * @vf_idx: index of VF receiving GLORT + * @rate: Rate indicated in Mb/s + * + * Configure the TC for a given VF to allow only up to a given number + * of Mb/s of outgoing Tx throughput. + **/ +STATIC s32 fm10k_iov_configure_tc_pf(struct fm10k_hw *hw, u16 vf_idx, int rate) +{ + /* configure defaults */ + u32 interval = FM10K_TC_RATE_INTERVAL_4US_GEN3; + u32 tc_rate = FM10K_TC_RATE_QUANTA_MASK; + + /* verify vf is in range */ + if (vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* set interval to align with 4.096 usec in all modes */ + switch (hw->bus.speed) { + case fm10k_bus_speed_2500: + interval = FM10K_TC_RATE_INTERVAL_4US_GEN1; + break; + case fm10k_bus_speed_5000: + interval = FM10K_TC_RATE_INTERVAL_4US_GEN2; + break; + default: + break; + } + + if (rate) { + if (rate > FM10K_VF_TC_MAX || rate < FM10K_VF_TC_MIN) + return FM10K_ERR_PARAM; + + /* The quanta is measured in Bytes per 4.096 or 8.192 usec + * The rate is provided in Mbits per second + * To translate from rate to quanta we need to multiply the + * rate by 8.192 usec and divide by 8 bits/byte. To avoid + * dealing with floating point we can round the values up + * to the nearest whole number ratio which gives us 128 / 125. + */ + tc_rate = (rate * 128) / 125; + + /* try to keep the rate limiting accurate by increasing + * the number of credits and interval for rates less than 4Gb/s + */ + if (rate < 4000) + interval <<= 1; + else + tc_rate >>= 1; + } + + /* update rate limiter with new values */ + FM10K_WRITE_REG(hw, FM10K_TC_RATE(vf_idx), tc_rate | interval); + FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(vf_idx), FM10K_TC_MAXCREDIT_64K); + FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(vf_idx), FM10K_TC_MAXCREDIT_64K); + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_assign_int_moderator_pf - Add VF interrupts to moderator list + * @hw: pointer to the HW structure + * @vf_idx: index of VF receiving GLORT + * + * Update the interrupt moderator linked list to include any MSI-X + * interrupts which the VF has enabled in the MSI-X vector table.
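The 128/125 ratio used for the quanta above is just the Mb/s-to-bytes-per-window conversion: R Mb/s is R * 125000 bytes per second, and over an 8.192 usec window that is R * 1.024 = R * 128 / 125 bytes. A small standalone check of the arithmetic (illustrative only, not part of the patch):

    #include <stdint.h>
    #include <stdio.h>

    /* bytes transmitted in one 8.192 us window at 'rate' Mb/s,
     * computed the same way as the shaper above: rate * 128 / 125 */
    static uint32_t quanta_8us(uint32_t rate_mbps)
    {
            return (rate_mbps * 128) / 125;
    }

    int main(void)
    {
            /* example rates in Mb/s; 4000 is the cutoff used above */
            const uint32_t rates[] = { 100, 1000, 4000, 10000 };
            size_t i;

            for (i = 0; i < sizeof(rates) / sizeof(rates[0]); i++) {
                    uint32_t q = quanta_8us(rates[i]);

                    /* below 4 Gb/s the driver keeps the full quanta and
                     * doubles the interval; at or above it halves the
                     * quanta instead */
                    if (rates[i] < 4000)
                            printf("%5u Mb/s -> %u bytes per 8.192 us\n",
                                   rates[i], q);
                    else
                            printf("%5u Mb/s -> %u bytes per 4.096 us\n",
                                   rates[i], q >> 1);
            }
            return 0;
    }

At 10000 Mb/s the halved quanta of 5120 bytes per 4.096 usec window still works out to 10 Gb/s, which is the consistency the interval/quanta shifts above are preserving.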
+ **/ +STATIC s32 fm10k_iov_assign_int_moderator_pf(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 vf_v_idx, vf_v_limit, i; + + /* verify vf is in range */ + if (vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* determine vector offset and count */ + vf_v_idx = fm10k_vf_vector_index(hw, vf_idx); + vf_v_limit = vf_v_idx + fm10k_vectors_per_pool(hw); + + /* search for first vector that is not masked */ + for (i = vf_v_limit - 1; i > vf_v_idx; i--) { + if (!FM10K_READ_REG(hw, FM10K_MSIX_VECTOR_MASK(i))) + break; + } + + /* reset linked list so it now includes our active vectors */ + if (vf_idx == (hw->iov.num_vfs - 1)) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), i); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_limit), i); + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_assign_default_mac_vlan_pf - Assign a MAC and VLAN to VF + * @hw: pointer to the HW structure + * @vf_info: pointer to VF information structure + * + * Assign a MAC address and default VLAN to a VF and notify it of the update + **/ +STATIC s32 fm10k_iov_assign_default_mac_vlan_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u16 qmap_stride, queues_per_pool, vf_q_idx, timeout, qmap_idx, i; + u32 msg[4], txdctl, txqctl, tdbal = 0, tdbah = 0; + s32 err = FM10K_SUCCESS; + u16 vf_idx, vf_vid; + + /* verify vf is in range */ + if (!vf_info || vf_info->vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* determine qmap offsets and counts */ + qmap_stride = (hw->iov.num_vfs > 8) ? 32 : 256; + queues_per_pool = fm10k_queues_per_pool(hw); + + /* calculate starting index for queues */ + vf_idx = vf_info->vf_idx; + vf_q_idx = fm10k_vf_queue_index(hw, vf_idx); + qmap_idx = qmap_stride * vf_idx; + + /* MAP Tx queue back to 0 temporarily, and disable it */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(vf_q_idx), 0); + + /* determine correct default VLAN ID */ + if (vf_info->pf_vid) + vf_vid = vf_info->pf_vid | FM10K_VLAN_CLEAR; + else + vf_vid = vf_info->sw_vid; + + /* generate MAC_ADDR request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_DEFAULT_MAC, + vf_info->mac, vf_vid); + + /* load onto outgoing mailbox, ignore any errors on enqueue */ + if (vf_info->mbx.ops.enqueue_tx) + vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg); + + /* verify ring has disabled before modifying base address registers */ + txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(vf_q_idx)); + for (timeout = 0; txdctl & FM10K_TXDCTL_ENABLE; timeout++) { + /* limit ourselves to a 1ms timeout */ + if (timeout == 10) { + err = FM10K_ERR_DMA_PENDING; + goto err_out; + } + + usec_delay(100); + txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(vf_q_idx)); + } + + /* Update base address registers to contain MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac)) { + tdbal = (((u32)vf_info->mac[3]) << 24) | + (((u32)vf_info->mac[4]) << 16) | + (((u32)vf_info->mac[5]) << 8); + + tdbah = (((u32)0xFF) << 24) | + (((u32)vf_info->mac[0]) << 16) | + (((u32)vf_info->mac[1]) << 8) | + ((u32)vf_info->mac[2]); + } + + /* Record the base address into queue 0 */ + FM10K_WRITE_REG(hw, FM10K_TDBAL(vf_q_idx), tdbal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(vf_q_idx), tdbah); + +err_out: + /* configure Queue control register */ + txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) & + FM10K_TXQCTL_VID_MASK; + txqctl |= (vf_idx << FM10K_TXQCTL_TC_SHIFT) | + FM10K_TXQCTL_VF | vf_idx; + + /* assign VID */ + for (i = 0; i < queues_per_pool; i++) + FM10K_WRITE_REG(hw, FM10K_TXQCTL(vf_q_idx + i), 
txqctl); + + /* restore the queue back to VF ownership */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx); + return err; +} + +/** + * fm10k_iov_reset_resources_pf - Reassign queues and interrupts to a VF + * @hw: pointer to the HW structure + * @vf_info: pointer to VF information structure + * + * Reassign the interrupts and queues to a VF following an FLR + **/ +STATIC s32 fm10k_iov_reset_resources_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u16 qmap_stride, queues_per_pool, vf_q_idx, qmap_idx; + u32 tdbal = 0, tdbah = 0, txqctl, rxqctl; + u16 vf_v_idx, vf_v_limit, vf_vid; + u8 vf_idx = vf_info->vf_idx; + int i; + + /* verify vf is in range */ + if (vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* clear event notification of VF FLR */ + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(vf_idx / 32), 1 << (vf_idx % 32)); + + /* force timeout and then disconnect the mailbox */ + vf_info->mbx.timeout = 0; + if (vf_info->mbx.ops.disconnect) + vf_info->mbx.ops.disconnect(hw, &vf_info->mbx); + + /* determine vector offset and count */ + vf_v_idx = fm10k_vf_vector_index(hw, vf_idx); + vf_v_limit = vf_v_idx + fm10k_vectors_per_pool(hw); + + /* determine qmap offsets and counts */ + qmap_stride = (hw->iov.num_vfs > 8) ? 32 : 256; + queues_per_pool = fm10k_queues_per_pool(hw); + qmap_idx = qmap_stride * vf_idx; + + /* make all the queues inaccessible to the VF */ + for (i = qmap_idx; i < (qmap_idx + qmap_stride); i++) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(i), 0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(i), 0); + } + + /* calculate starting index for queues */ + vf_q_idx = fm10k_vf_queue_index(hw, vf_idx); + + /* determine correct default VLAN ID */ + if (vf_info->pf_vid) + vf_vid = vf_info->pf_vid; + else + vf_vid = vf_info->sw_vid; + + /* configure Queue control register */ + txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) | + (vf_idx << FM10K_TXQCTL_TC_SHIFT) | + FM10K_TXQCTL_VF | vf_idx; + rxqctl = FM10K_RXQCTL_VF | (vf_idx << FM10K_RXQCTL_VF_SHIFT); + + /* stop further DMA and reset queue ownership back to VF */ + for (i = vf_q_idx; i < (queues_per_pool + vf_q_idx); i++) { + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), txqctl); + FM10K_WRITE_REG(hw, FM10K_RXDCTL(i), + FM10K_RXDCTL_WRITE_BACK_MIN_DELAY | + FM10K_RXDCTL_DROP_ON_EMPTY); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), rxqctl); + } + + /* reset TC with -1 credits and no quanta to prevent transmit */ + FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(vf_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TC_RATE(vf_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(vf_idx), + FM10K_TC_CREDIT_CREDIT_MASK); + + /* update our first entry in the table based on previous VF */ + if (!vf_idx) + hw->mac.ops.update_int_moderator(hw); + else + hw->iov.ops.assign_int_moderator(hw, vf_idx - 1); + + /* reset linked list so it now includes our active vectors */ + if (vf_idx == (hw->iov.num_vfs - 1)) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), vf_v_idx); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_limit), vf_v_idx); + + /* link remaining vectors so that next points to previous */ + for (vf_v_idx++; vf_v_idx < vf_v_limit; vf_v_idx++) + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_idx), vf_v_idx - 1); + + /* zero out MBMEM, VLAN_TABLE, RETA, RSSRK, and MRQC registers */ + for (i = FM10K_VFMBMEM_LEN; i--;) + FM10K_WRITE_REG(hw, FM10K_MBMEM_VF(vf_idx, i), 0); + for (i = FM10K_VLAN_TABLE_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_VLAN_TABLE(vf_info->vsi, i), 0); + for (i = FM10K_RETA_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_RETA(vf_info->vsi, i), 0); + for 
(i = FM10K_RSSRK_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_RSSRK(vf_info->vsi, i), 0); + FM10K_WRITE_REG(hw, FM10K_MRQC(vf_info->vsi), 0); + + /* Update base address registers to contain MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac)) { + tdbal = (((u32)vf_info->mac[3]) << 24) | + (((u32)vf_info->mac[4]) << 16) | + (((u32)vf_info->mac[5]) << 8); + tdbah = (((u32)0xFF) << 24) | + (((u32)vf_info->mac[0]) << 16) | + (((u32)vf_info->mac[1]) << 8) | + ((u32)vf_info->mac[2]); + } + + /* map queue pairs back to VF from last to first */ + for (i = queues_per_pool; i--;) { + FM10K_WRITE_REG(hw, FM10K_TDBAL(vf_q_idx + i), tdbal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(vf_q_idx + i), tdbah); + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx + i), vf_q_idx + i); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx + i), vf_q_idx + i); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_set_lport_pf - Assign and enable a logical port for a given VF + * @hw: pointer to hardware structure + * @vf_info: pointer to VF information structure + * @lport_idx: Logical port offset from the hardware glort + * @flags: Set of capability flags to extend port beyond basic functionality + * + * This function allows enabling a VF port by assigning it a GLORT and + * setting the flags so that it can enable an Rx mode. + **/ +STATIC s32 fm10k_iov_set_lport_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info, + u16 lport_idx, u8 flags) +{ + u16 glort = (hw->mac.dglort_map + lport_idx) & FM10K_DGLORTMAP_NONE; + + DEBUGFUNC("fm10k_iov_set_lport_state_pf"); + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + vf_info->vf_flags = flags | FM10K_VF_FLAG_NONE_CAPABLE; + vf_info->glort = glort; + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_reset_lport_pf - Disable a logical port for a given VF + * @hw: pointer to hardware structure + * @vf_info: pointer to VF information structure + * + * This function disables a VF port by stripping it of a GLORT and + * setting the flags so that it cannot enable any Rx mode. + **/ +STATIC void fm10k_iov_reset_lport_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u32 msg[1]; + + DEBUGFUNC("fm10k_iov_reset_lport_state_pf"); + + /* need to disable the port if it is already enabled */ + if (FM10K_VF_FLAG_ENABLED(vf_info)) { + /* notify switch that this port has been disabled */ + fm10k_update_lport_state_pf(hw, vf_info->glort, 1, false); + + /* generate port state response to notify VF it is not ready */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg); + } + + /* clear flags and glort if it exists */ + vf_info->vf_flags = 0; + vf_info->glort = 0; +} + +/** + * fm10k_iov_update_stats_pf - Updates hardware related statistics for VFs + * @hw: pointer to hardware structure + * @q: stats for all queues of a VF + * @vf_idx: index of VF + * + * This function collects queue stats for VFs. 
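Both fm10k_iov_assign_default_mac_vlan_pf() and fm10k_iov_reset_resources_pf() above stash the VF station MAC in the TDBAL/TDBAH values written to the VF queues, with 0xFF in the top byte of TDBAH marking the entry, presumably so the VF side can recover the address from registers it is allowed to read. A standalone sketch of that byte layout (illustrative only, not part of the patch):

    #include <stdint.h>
    #include <stdio.h>

    /* mirror of the TDBAL/TDBAH packing above: mac[0] is the most
     * significant byte of the address, 0xFF tags the TDBAH word */
    static void pack_mac(const uint8_t mac[6], uint32_t *tdbal, uint32_t *tdbah)
    {
            *tdbal = ((uint32_t)mac[3] << 24) | ((uint32_t)mac[4] << 16) |
                     ((uint32_t)mac[5] << 8);
            *tdbah = ((uint32_t)0xFF << 24) | ((uint32_t)mac[0] << 16) |
                     ((uint32_t)mac[1] << 8) | (uint32_t)mac[2];
    }

    static void unpack_mac(uint32_t tdbal, uint32_t tdbah, uint8_t mac[6])
    {
            mac[0] = (uint8_t)(tdbah >> 16);
            mac[1] = (uint8_t)(tdbah >> 8);
            mac[2] = (uint8_t)tdbah;
            mac[3] = (uint8_t)(tdbal >> 24);
            mac[4] = (uint8_t)(tdbal >> 16);
            mac[5] = (uint8_t)(tdbal >> 8);
    }

    int main(void)
    {
            const uint8_t mac[6] = { 0x02, 0x11, 0x22, 0x33, 0x44, 0x55 };
            uint8_t out[6];
            uint32_t lo, hi;

            pack_mac(mac, &lo, &hi);
            unpack_mac(lo, hi, out);
            printf("TDBAL=0x%08x TDBAH=0x%08x mac=%02x:%02x:%02x:%02x:%02x:%02x\n",
                   lo, hi, out[0], out[1], out[2], out[3], out[4], out[5]);
            return 0;
    }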
+ **/ +STATIC void fm10k_iov_update_stats_pf(struct fm10k_hw *hw, + struct fm10k_hw_stats_q *q, + u16 vf_idx) +{ + u32 idx, qpp; + + /* get stats for all of the queues */ + qpp = fm10k_queues_per_pool(hw); + idx = fm10k_vf_queue_index(hw, vf_idx); + fm10k_update_hw_stats_q(hw, q, idx, qpp); +} + +STATIC s32 fm10k_iov_report_timestamp_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info, + u64 timestamp) +{ + u32 msg[4]; + + /* generate port state response to notify VF it is not ready */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_1588); + fm10k_tlv_attr_put_u64(msg, FM10K_1588_MSG_TIMESTAMP, timestamp); + + return vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg); +} + +/** + * fm10k_iov_msg_msix_pf - Message handler for MSI-X request from VF + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Pointer to mailbox information structure + * + * This function is a default handler for MSI-X requests from the VF. The + * assumption is that in this case it is acceptable to just directly + * hand off the message from the VF to the underlying shared code. + **/ +s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx; + u8 vf_idx = vf_info->vf_idx; + + UNREFERENCED_1PARAMETER(results); + DEBUGFUNC("fm10k_iov_msg_msix_pf"); + + return hw->iov.ops.assign_int_moderator(hw, vf_idx); +} + +/** + * fm10k_iov_msg_mac_vlan_pf - Message handler for MAC/VLAN request from VF + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Pointer to mailbox information structure + * + * This function is a default handler for MAC/VLAN requests from the VF. + * The assumption is that in this case it is acceptable to just directly + * hand off the message from the VF to the underlying shared code. 
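For reference, the queue carving used by fm10k_queues_per_pool() and fm10k_vf_queue_index() earlier (and relied on by fm10k_iov_update_stats_pf here) hands pools out from the top of the queue space downward, so the last VF ends at the top queue. A standalone sketch of the arithmetic; the 256 queue total and 16 queue pool maximum are assumed values used only so the sketch compiles, the real constants live in fm10k_type.h:

    #include <stdint.h>
    #include <stdio.h>

    /* assumed illustration values; the driver takes these from fm10k_type.h */
    #define MAX_QUEUES       256
    #define MAX_QUEUES_POOL   16

    /* same selection logic as fm10k_queues_per_pool() above */
    static uint16_t queues_per_pool(uint16_t num_pools)
    {
            return (num_pools > 32) ? 2 : (num_pools > 16) ? 4 :
                   (num_pools > 8) ? 8 : MAX_QUEUES_POOL;
    }

    /* same layout as fm10k_vf_queue_index(): pools are packed at the top
     * of the queue space, so VF 0 sits lowest and the last VF ends at
     * MAX_QUEUES */
    static uint16_t vf_queue_index(uint16_t num_vfs, uint16_t num_pools,
                                   uint16_t vf_idx)
    {
            return MAX_QUEUES - queues_per_pool(num_pools) * (num_vfs - vf_idx);
    }

    int main(void)
    {
            uint16_t num_vfs = 4, num_pools = 4, i;

            for (i = 0; i < num_vfs; i++)
                    printf("VF %u: first queue %u, %u queues\n", i,
                           vf_queue_index(num_vfs, num_pools, i),
                           queues_per_pool(num_pools));
            return 0;
    }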
+ **/ +s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx; + int err = FM10K_SUCCESS; + u8 mac[ETH_ALEN]; + u32 *result; + u16 vlan; + u32 vid; + + DEBUGFUNC("fm10k_iov_msg_mac_vlan_pf"); + + /* we shouldn't be updating rules on a disabled interface */ + if (!FM10K_VF_FLAG_ENABLED(vf_info)) + err = FM10K_ERR_PARAM; + + if (!err && !!results[FM10K_MAC_VLAN_MSG_VLAN]) { + result = results[FM10K_MAC_VLAN_MSG_VLAN]; + + /* record VLAN id requested */ + err = fm10k_tlv_attr_get_u32(result, &vid); + if (err) + return err; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vid || (vid == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vid |= vf_info->pf_vid; + else + vid |= vf_info->sw_vid; + } else if (vid != vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* update VSI info for VF in regards to VLAN table */ + err = hw->mac.ops.update_vlan(hw, vid, vf_info->vsi, + !(vid & FM10K_VLAN_CLEAR)); + } + + if (!err && !!results[FM10K_MAC_VLAN_MSG_MAC]) { + result = results[FM10K_MAC_VLAN_MSG_MAC]; + + /* record unicast MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan); + if (err) + return err; + + /* block attempts to set MAC for a locked device */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac) && + memcmp(mac, vf_info->mac, ETH_ALEN)) + return FM10K_ERR_PARAM; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vlan || (vlan == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vlan |= vf_info->pf_vid; + else + vlan |= vf_info->sw_vid; + } else if (vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* notify switch of request for new unicast address */ + err = hw->mac.ops.update_uc_addr(hw, vf_info->glort, mac, vlan, + !(vlan & FM10K_VLAN_CLEAR), 0); + } + + if (!err && !!results[FM10K_MAC_VLAN_MSG_MULTICAST]) { + result = results[FM10K_MAC_VLAN_MSG_MULTICAST]; + + /* record multicast MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan); + if (err) + return err; + + /* verify that the VF is allowed to request multicast */ + if (!(vf_info->vf_flags & FM10K_VF_FLAG_MULTI_ENABLED)) + return FM10K_ERR_PARAM; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vlan || (vlan == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vlan |= vf_info->pf_vid; + else + vlan |= vf_info->sw_vid; + } else if (vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* notify switch of request for new multicast address */ + err = hw->mac.ops.update_mc_addr(hw, vf_info->glort, mac, + !(vlan & FM10K_VLAN_CLEAR), 0); + } + + return err; +} + +/** + * fm10k_iov_supported_xcast_mode_pf - Determine best match for xcast mode + * @vf_info: VF info structure containing capability flags + * @mode: Requested xcast mode + * + * This function outputs the mode that most closely matches the requested + * mode. 
If not modes match it will request we disable the port + **/ +STATIC u8 fm10k_iov_supported_xcast_mode_pf(struct fm10k_vf_info *vf_info, + u8 mode) +{ + u8 vf_flags = vf_info->vf_flags; + + /* match up mode to capabilities as best as possible */ + switch (mode) { + case FM10K_XCAST_MODE_PROMISC: + if (vf_flags & FM10K_VF_FLAG_PROMISC_CAPABLE) + return FM10K_XCAST_MODE_PROMISC; + /* fallthough */ + case FM10K_XCAST_MODE_ALLMULTI: + if (vf_flags & FM10K_VF_FLAG_ALLMULTI_CAPABLE) + return FM10K_XCAST_MODE_ALLMULTI; + /* fallthough */ + case FM10K_XCAST_MODE_MULTI: + if (vf_flags & FM10K_VF_FLAG_MULTI_CAPABLE) + return FM10K_XCAST_MODE_MULTI; + /* fallthough */ + case FM10K_XCAST_MODE_NONE: + if (vf_flags & FM10K_VF_FLAG_NONE_CAPABLE) + return FM10K_XCAST_MODE_NONE; + /* fallthough */ + default: + break; + } + + /* disable interface as it should not be able to request any */ + return FM10K_XCAST_MODE_DISABLE; +} + +/** + * fm10k_iov_msg_lport_state_pf - Message handler for port state requests + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Pointer to mailbox information structure + * + * This function is a default handler for port state requests. The port + * state requests for now are basic and consist of enabling or disabling + * the port. + **/ +s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx; + u32 *result; + s32 err = FM10K_SUCCESS; + u32 msg[2]; + u8 mode = 0; + + DEBUGFUNC("fm10k_iov_msg_lport_state_pf"); + + /* verify VF is allowed to enable even minimal mode */ + if (!(vf_info->vf_flags & FM10K_VF_FLAG_NONE_CAPABLE)) + return FM10K_ERR_PARAM; + + if (!!results[FM10K_LPORT_STATE_MSG_XCAST_MODE]) { + result = results[FM10K_LPORT_STATE_MSG_XCAST_MODE]; + + /* XCAST mode update requested */ + err = fm10k_tlv_attr_get_u8(result, &mode); + if (err) + return FM10K_ERR_PARAM; + + /* prep for possible demotion depending on capabilities */ + mode = fm10k_iov_supported_xcast_mode_pf(vf_info, mode); + + /* if mode is not currently enabled, enable it */ + if (!(FM10K_VF_FLAG_ENABLED(vf_info) & (1 << mode))) + fm10k_update_xcast_mode_pf(hw, vf_info->glort, mode); + + /* swap mode back to a bit flag */ + mode = FM10K_VF_FLAG_SET_MODE(mode); + } else if (!results[FM10K_LPORT_STATE_MSG_DISABLE]) { + /* need to disable the port if it is already enabled */ + if (FM10K_VF_FLAG_ENABLED(vf_info)) + err = fm10k_update_lport_state_pf(hw, vf_info->glort, + 1, false); + + /* when enabling the port we should reset the rate limiters */ + hw->iov.ops.configure_tc(hw, vf_info->vf_idx, vf_info->rate); + + /* set mode for minimal functionality */ + mode = FM10K_VF_FLAG_SET_MODE_NONE; + + /* generate port state response to notify VF it is ready */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + fm10k_tlv_attr_put_bool(msg, FM10K_LPORT_STATE_MSG_READY); + mbx->ops.enqueue_tx(hw, mbx, msg); + } + + /* if enable state toggled note the update */ + if (!err && (!FM10K_VF_FLAG_ENABLED(vf_info) != !mode)) + err = fm10k_update_lport_state_pf(hw, vf_info->glort, 1, + !!mode); + + /* if state change succeeded, then update our stored state */ + mode |= FM10K_VF_FLAG_CAPABLE(vf_info); + if (!err) + vf_info->vf_flags = mode; + + return err; +} + +const struct fm10k_msg_data fm10k_iov_msg_data_pf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MSIX_HANDLER(fm10k_iov_msg_msix_pf), + 
FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_iov_msg_mac_vlan_pf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_iov_msg_lport_state_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_update_stats_hw_pf - Updates hardware related statistics of PF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function collects and aggregates global and per queue hardware + * statistics. + **/ +STATIC void fm10k_update_hw_stats_pf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + u32 timeout, ur, ca, um, xec, vlan_drop, loopback_drop, nodesc_drop; + u32 id, id_prev; + + DEBUGFUNC("fm10k_update_hw_stats_pf"); + + /* Use Tx queue 0 as a canary to detect a reset */ + id = FM10K_READ_REG(hw, FM10K_TXQCTL(0)); + + /* Read Global Statistics */ + do { + timeout = fm10k_read_hw_stats_32b(hw, FM10K_STATS_TIMEOUT, + &stats->timeout); + ur = fm10k_read_hw_stats_32b(hw, FM10K_STATS_UR, &stats->ur); + ca = fm10k_read_hw_stats_32b(hw, FM10K_STATS_CA, &stats->ca); + um = fm10k_read_hw_stats_32b(hw, FM10K_STATS_UM, &stats->um); + xec = fm10k_read_hw_stats_32b(hw, FM10K_STATS_XEC, &stats->xec); + vlan_drop = fm10k_read_hw_stats_32b(hw, FM10K_STATS_VLAN_DROP, + &stats->vlan_drop); + loopback_drop = fm10k_read_hw_stats_32b(hw, + FM10K_STATS_LOOPBACK_DROP, + &stats->loopback_drop); + nodesc_drop = fm10k_read_hw_stats_32b(hw, + FM10K_STATS_NODESC_DROP, + &stats->nodesc_drop); + + /* if value has not changed then we have consistent data */ + id_prev = id; + id = FM10K_READ_REG(hw, FM10K_TXQCTL(0)); + } while ((id ^ id_prev) & FM10K_TXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id &= FM10K_TXQCTL_ID_MASK; + id |= FM10K_STAT_VALID; + + /* Update Global Statistics */ + if (stats->stats_idx == id) { + stats->timeout.count += timeout; + stats->ur.count += ur; + stats->ca.count += ca; + stats->um.count += um; + stats->xec.count += xec; + stats->vlan_drop.count += vlan_drop; + stats->loopback_drop.count += loopback_drop; + stats->nodesc_drop.count += nodesc_drop; + } + + /* Update bases and record current PF id */ + fm10k_update_hw_base_32b(&stats->timeout, timeout); + fm10k_update_hw_base_32b(&stats->ur, ur); + fm10k_update_hw_base_32b(&stats->ca, ca); + fm10k_update_hw_base_32b(&stats->um, um); + fm10k_update_hw_base_32b(&stats->xec, xec); + fm10k_update_hw_base_32b(&stats->vlan_drop, vlan_drop); + fm10k_update_hw_base_32b(&stats->loopback_drop, loopback_drop); + fm10k_update_hw_base_32b(&stats->nodesc_drop, nodesc_drop); + stats->stats_idx = id; + + /* Update Queue Statistics */ + fm10k_update_hw_stats_q(hw, stats->q, 0, hw->mac.max_queues); +} + +/** + * fm10k_rebind_hw_stats_pf - Resets base for hardware statistics of PF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function resets the base for global and per queue hardware + * statistics. 
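fm10k_update_hw_stats_pf() above uses the queue 0 TXQCTL ID field as a generation canary: if the ID changes while the global counters are being read, the snapshot may straddle a reset and is retried. A stripped-down sketch of the pattern (the register stand-ins are fake and exist only to make the example self-contained):

    #include <stdint.h>
    #include <stdio.h>

    /* toy stand-ins for the real registers, just to show the pattern */
    static uint32_t txqctl_id;     /* canary: changes across a reset        */
    static uint32_t hw_counter;    /* one of the global drop/error counters */

    static uint32_t read_canary(void)  { return txqctl_id; }
    static uint32_t read_counter(void) { return hw_counter; }

    /* same idea as fm10k_update_hw_stats_pf(): sample the canary, read the
     * counters, re-sample, and retry until both samples agree so the whole
     * snapshot is known to come from a single PF instance */
    static uint32_t consistent_read(void)
    {
            uint32_t id, id_prev, value;

            id = read_canary();
            do {
                    id_prev = id;
                    value = read_counter();
                    id = read_canary();
            } while (id != id_prev);

            return value;
    }

    int main(void)
    {
            txqctl_id = 0x5;
            hw_counter = 1234;
            printf("snapshot=%u\n", consistent_read());
            return 0;
    }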
+ **/ +STATIC void fm10k_rebind_hw_stats_pf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_rebind_hw_stats_pf"); + + /* Unbind Global Statistics */ + fm10k_unbind_hw_stats_32b(&stats->timeout); + fm10k_unbind_hw_stats_32b(&stats->ur); + fm10k_unbind_hw_stats_32b(&stats->ca); + fm10k_unbind_hw_stats_32b(&stats->um); + fm10k_unbind_hw_stats_32b(&stats->xec); + fm10k_unbind_hw_stats_32b(&stats->vlan_drop); + fm10k_unbind_hw_stats_32b(&stats->loopback_drop); + fm10k_unbind_hw_stats_32b(&stats->nodesc_drop); + + /* Unbind Queue Statistics */ + fm10k_unbind_hw_stats_q(stats->q, 0, hw->mac.max_queues); + + /* Reinitialize bases for all stats */ + fm10k_update_hw_stats_pf(hw, stats); +} + +/** + * fm10k_set_dma_mask_pf - Configures PhyAddrSpace to limit DMA to system + * @hw: pointer to hardware structure + * @dma_mask: 64 bit DMA mask required for platform + * + * This function sets the PHYADDR.PhyAddrSpace bits for the endpoint in order + * to limit the access to memory beyond what is physically in the system. + **/ +STATIC void fm10k_set_dma_mask_pf(struct fm10k_hw *hw, u64 dma_mask) +{ + /* we need to write the upper 32 bits of DMA mask to PhyAddrSpace */ + u32 phyaddr = (u32)(dma_mask >> 32); + + DEBUGFUNC("fm10k_set_dma_mask_pf"); + + FM10K_WRITE_REG(hw, FM10K_PHYADDR, phyaddr); +} + +/** + * fm10k_get_fault_pf - Record a fault in one of the interface units + * @hw: pointer to hardware structure + * @type: pointer to fault type register offset + * @fault: pointer to memory location to record the fault + * + * Record the fault register contents to the fault data structure and + * clear the entry from the register. + * + * Returns ERR_PARAM if invalid register is specified or no error is present. + **/ +STATIC s32 fm10k_get_fault_pf(struct fm10k_hw *hw, int type, + struct fm10k_fault *fault) +{ + u32 func; + + DEBUGFUNC("fm10k_get_fault_pf"); + + /* verify the fault register is in range and is aligned */ + switch (type) { + case FM10K_PCA_FAULT: + case FM10K_THI_FAULT: + case FM10K_FUM_FAULT: + break; + default: + return FM10K_ERR_PARAM; + } + + /* only service faults that are valid */ + func = FM10K_READ_REG(hw, type + FM10K_FAULT_FUNC); + if (!(func & FM10K_FAULT_FUNC_VALID)) + return FM10K_ERR_PARAM; + + /* read remaining fields */ + fault->address = FM10K_READ_REG(hw, type + FM10K_FAULT_ADDR_HI); + fault->address <<= 32; + fault->address = FM10K_READ_REG(hw, type + FM10K_FAULT_ADDR_LO); + fault->specinfo = FM10K_READ_REG(hw, type + FM10K_FAULT_SPECINFO); + + /* clear valid bit to allow for next error */ + FM10K_WRITE_REG(hw, type + FM10K_FAULT_FUNC, FM10K_FAULT_FUNC_VALID); + + /* Record which function triggered the error */ + if (func & FM10K_FAULT_FUNC_PF) + fault->func = 0; + else + fault->func = 1 + ((func & FM10K_FAULT_FUNC_VF_MASK) >> + FM10K_FAULT_FUNC_VF_SHIFT); + + /* record fault type */ + fault->type = func & FM10K_FAULT_FUNC_TYPE_MASK; + + return FM10K_SUCCESS; +} + +/** + * fm10k_request_lport_map_pf - Request LPORT map from the switch API + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_request_lport_map_pf(struct fm10k_hw *hw) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[1]; + + DEBUGFUNC("fm10k_request_lport_pf"); + + /* issue request asking for LPORT map */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_LPORT_MAP); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_get_host_state_pf - Returns the state of the switch and mailbox + * @hw: pointer to hardware structure + * 
@switch_ready: pointer to boolean value that will record switch state + * + * This funciton will check the DMA_CTRL2 register and mailbox in order + * to determine if the switch is ready for the PF to begin requesting + * addresses and mapping traffic to the local interface. + **/ +STATIC s32 fm10k_get_host_state_pf(struct fm10k_hw *hw, bool *switch_ready) +{ + s32 ret_val = FM10K_SUCCESS; + u32 dma_ctrl2; + + DEBUGFUNC("fm10k_get_host_state_pf"); + + /* verify the switch is ready for interaction */ + dma_ctrl2 = FM10K_READ_REG(hw, FM10K_DMA_CTRL2); + if (!(dma_ctrl2 & FM10K_DMA_CTRL2_SWITCH_READY)) + goto out; + + /* retrieve generic host state info */ + ret_val = fm10k_get_host_state_generic(hw, switch_ready); + if (ret_val) + goto out; + + /* interface cannot receive traffic without logical ports */ + if (hw->mac.dglort_map == FM10K_DGLORTMAP_NONE) + ret_val = fm10k_request_lport_map_pf(hw); + +out: + return ret_val; +} + +/* This structure defines the attibutes to be parsed below */ +const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_PF_ATTR_ID_LPORT_MAP), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_lport_map_pf - Message handler for lport_map message from SM + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler configures the lport mapping based on the reply from the + * switch API. + **/ +s32 fm10k_msg_lport_map_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u16 glort, mask; + u32 dglort_map; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_lport_map_pf"); + + err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_LPORT_MAP], + &dglort_map); + if (err) + return err; + + /* extract values out of the header */ + glort = FM10K_MSG_HDR_FIELD_GET(dglort_map, LPORT_MAP_GLORT); + mask = FM10K_MSG_HDR_FIELD_GET(dglort_map, LPORT_MAP_MASK); + + /* verify mask is set and none of the masked bits in glort are set */ + if (!mask || (glort & ~mask)) + return FM10K_ERR_PARAM; + + /* verify the mask is contiguous, and that it is 1's followed by 0's */ + if (((~(mask - 1) & mask) + mask) & FM10K_DGLORTMAP_NONE) + return FM10K_ERR_PARAM; + + /* record the glort, mask, and port count */ + hw->mac.dglort_map = dglort_map; + + return FM10K_SUCCESS; +} + +const struct fm10k_tlv_attr fm10k_update_pvid_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_PF_ATTR_ID_UPDATE_PVID), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_update_pvid_pf - Message handler for port VLAN message from SM + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler configures the default VLAN for the PF + **/ +s32 fm10k_msg_update_pvid_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u16 glort, pvid; + u32 pvid_update; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_update_pvid_pf"); + + err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_UPDATE_PVID], + &pvid_update); + if (err) + return err; + + /* extract values from the pvid update */ + glort = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_GLORT); + pvid = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_PVID); + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* verify VID is valid */ + if (pvid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* record the port VLAN ID 
value */ + hw->mac.default_vid = pvid; + + return FM10K_SUCCESS; +} + +/** + * fm10k_record_global_table_data - Move global table data to swapi table info + * @from: pointer to source table data structure + * @to: pointer to destination table info structure + * + * This function is will copy table_data to the table_info contained in + * the hw struct. + **/ +static void fm10k_record_global_table_data(struct fm10k_global_table_data *from, + struct fm10k_swapi_table_info *to) +{ + /* convert from le32 struct to CPU byte ordered values */ + to->used = FM10K_LE32_TO_CPU(from->used); + to->avail = FM10K_LE32_TO_CPU(from->avail); +} + +const struct fm10k_tlv_attr fm10k_err_msg_attr[] = { + FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_ERR, + sizeof(struct fm10k_swapi_error)), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_err_pf - Message handler for error reply + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler will capture the data for any error replies to previous + * messages that the PF has sent. + **/ +s32 fm10k_msg_err_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_swapi_error err_msg; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_err_pf"); + + /* extract structure from message */ + err = fm10k_tlv_attr_get_le_struct(results[FM10K_PF_ATTR_ID_ERR], + &err_msg, sizeof(err_msg)); + if (err) + return err; + + /* record table status */ + fm10k_record_global_table_data(&err_msg.mac, &hw->swapi.mac); + fm10k_record_global_table_data(&err_msg.nexthop, &hw->swapi.nexthop); + fm10k_record_global_table_data(&err_msg.ffu, &hw->swapi.ffu); + + /* record SW API status value */ + hw->swapi.status = FM10K_LE32_TO_CPU(err_msg.status); + + return FM10K_SUCCESS; +} + +const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[] = { + FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_1588_TIMESTAMP, + sizeof(struct fm10k_swapi_1588_timestamp)), + FM10K_TLV_ATTR_LAST +}; + +/* currently there is no shared 1588 timestamp handler */ + +/** + * fm10k_request_tx_timestamp_mode_pf - Request a specific Tx timestamping mode + * @hw: pointer to hardware structure + * @glort: base resource tag for this request + * @mode: integer value indicating the requested mode + * + * This function will attempt to request a specific timestamp mode for the + * port so that it can receive Tx timestamp messages. 
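Worth spelling out from fm10k_msg_lport_map_pf() earlier: the mask sanity check isolates the lowest set bit with ~(mask - 1) & mask and adds it back; for a mask that is a contiguous run of 1's reaching bit 15 the sum carries out of the 16 bit glort field, otherwise a bit is left behind and the message is rejected. A standalone sketch (the 0xFFFF field mask here stands in for FM10K_DGLORTMAP_NONE and is an assumed value, used only so the example is self-contained):

    #include <stdio.h>

    /* assumed stand-in for FM10K_DGLORTMAP_NONE */
    #define GLORT_FIELD 0xFFFFu

    /* same trick as fm10k_msg_lport_map_pf(): a valid mask is a block of
     * 1's in the upper bits followed by 0's.  Isolating the lowest set bit
     * and adding it back carries out of the 16 bit glort field only when
     * the 1's are contiguous up to bit 15. */
    static int glort_mask_valid(unsigned mask)
    {
            if (!mask)
                    return 0;
            return !((((~(mask - 1)) & mask) + mask) & GLORT_FIELD);
    }

    int main(void)
    {
            const unsigned masks[] = { 0xFF00, 0xFFC0, 0x0FF0, 0xFF0F, 0x0000 };
            size_t i;

            for (i = 0; i < sizeof(masks) / sizeof(masks[0]); i++)
                    printf("mask 0x%04x -> %s\n", masks[i],
                           glort_mask_valid(masks[i]) ? "ok" : "rejected");
            return 0;
    }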
+ **/ +STATIC s32 fm10k_request_tx_timestamp_mode_pf(struct fm10k_hw *hw, + u16 glort, + u8 mode) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3], timestamp_mode; + + DEBUGFUNC("fm10k_request_timestamp_mode_pf"); + + if (mode > FM10K_TIMESTAMP_MODE_PEP_TO_ANY) + return FM10K_ERR_PARAM; + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* write timestamp mode as a single u32 value, + * lower 16 bits: glort + * upper 16 bits: mode + */ + timestamp_mode = ((u32)mode << 16) | glort; + + /* generate message requesting change to xcast mode */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_TX_TIMESTAMP_MODE); + fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_TIMESTAMP_MODE_REQ, timestamp_mode); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_adjust_systime_pf - Adjust systime frequency + * @hw: pointer to hardware structure + * @ppb: adjustment rate in parts per billion + * + * This function will adjust the SYSTIME_CFG register contained in BAR 4 + * if this function is supported for BAR 4 access. The adjustment amount + * is based on the parts per billion value provided and adjusted to a + * value based on parts per 2^48 clock cycles. + * + * If adjustment is not supported or the requested value is too large + * we will return an error. + **/ +STATIC s32 fm10k_adjust_systime_pf(struct fm10k_hw *hw, s32 ppb) +{ + u64 systime_adjust; + + DEBUGFUNC("fm10k_adjust_systime_vf"); + + /* if sw_addr is not set we don't have switch register access */ + if (!hw->sw_addr) + return ppb ? FM10K_ERR_PARAM : FM10K_SUCCESS; + + /* we must convert the value from parts per billion to parts per + * 2^48 cycles. In addition I have opted to only use the 30 most + * significant bits of the adjustment value as the 8 least + * significant bits are located in another register and represent + * a value significantly less than a part per billion, the result + * of dropping the 8 least significant bits is that the adjustment + * value is effectively multiplied by 2^8 when we write it. + * + * As a result of all this the math for this breaks down as follows: + * ppb / 10^9 == adjust * 2^8 / 2^48 + * If we solve this for adjust, and simplify it comes out as: + * ppb * 2^31 / 5^9 == adjust + */ + systime_adjust = (ppb < 0) ? -ppb : ppb; + systime_adjust <<= 31; + do_div(systime_adjust, 1953125); + + /* verify the requested adjustment value is in range */ + if (systime_adjust > FM10K_SW_SYSTIME_ADJUST_MASK) + return FM10K_ERR_PARAM; + + if (ppb < 0) + systime_adjust |= FM10K_SW_SYSTIME_ADJUST_DIR_NEGATIVE; + + FM10K_WRITE_SW_REG(hw, FM10K_SW_SYSTIME_ADJUST, (u32)systime_adjust); + + return FM10K_SUCCESS; +} + +/** + * fm10k_read_systime_pf - Reads value of systime registers + * @hw: pointer to the hardware structure + * + * Function reads the content of 2 registers, combined to represent a 64 bit + * value measured in nanosecods. In order to guarantee the value is accurate + * we check the 32 most significant bits both before and after reading the + * 32 least significant bits to verify they didn't change as we were reading + * the registers. 
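The constant 1953125 in fm10k_adjust_systime_pf() above is 5^9: from ppb / 10^9 == adjust * 2^8 / 2^48 we get adjust = ppb * 2^40 / 10^9, and since 10^9 = 2^9 * 5^9 this reduces to ppb * 2^31 / 5^9. A standalone check of that conversion (illustrative only; the sign is carried separately by the direction bit, as in the code above):

    #include <stdint.h>
    #include <stdio.h>
    #include <inttypes.h>

    /* same conversion as fm10k_adjust_systime_pf():
     *   ppb / 10^9 == adjust * 2^8 / 2^48
     * so adjust = ppb * 2^40 / 10^9 = ppb * 2^31 / 5^9, and 5^9 = 1953125 */
    static uint64_t ppb_to_adjust(int32_t ppb)
    {
            uint64_t adj = (ppb < 0) ? (uint64_t)(-(int64_t)ppb) : (uint64_t)ppb;

            adj <<= 31;
            adj /= 1953125;          /* 5^9 */
            return adj;
    }

    int main(void)
    {
            const int32_t samples[] = { 1, 100, -50000 };
            size_t i;

            for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
                    printf("%7d ppb -> raw adjust %" PRIu64 "\n",
                           samples[i], ppb_to_adjust(samples[i]));
            return 0;
    }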
+ **/ +static u64 fm10k_read_systime_pf(struct fm10k_hw *hw) +{ + u32 systime_l, systime_h, systime_tmp; + + systime_h = fm10k_read_reg(hw, FM10K_SYSTIME + 1); + + do { + systime_tmp = systime_h; + systime_l = fm10k_read_reg(hw, FM10K_SYSTIME); + systime_h = fm10k_read_reg(hw, FM10K_SYSTIME + 1); + } while (systime_tmp != systime_h); + + return ((u64)systime_h << 32) | systime_l; +} + +static const struct fm10k_msg_data fm10k_msg_data_pf[] = { + FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf), + FM10K_PF_MSG_LPORT_MAP_HANDLER(fm10k_msg_lport_map_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_CREATE, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_DELETE, fm10k_msg_err_pf), + FM10K_PF_MSG_UPDATE_PVID_HANDLER(fm10k_msg_update_pvid_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_init_ops_pf - Inits func ptrs and MAC type + * @hw: pointer to hardware structure + * + * Initialize the function pointers and assign the MAC type for PF. + * Does not touch the hardware. + **/ +s32 fm10k_init_ops_pf(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + struct fm10k_iov_info *iov = &hw->iov; + + DEBUGFUNC("fm10k_init_ops_pf"); + + fm10k_init_ops_generic(hw); + + mac->ops.reset_hw = &fm10k_reset_hw_pf; + mac->ops.init_hw = &fm10k_init_hw_pf; + mac->ops.start_hw = &fm10k_start_hw_generic; + mac->ops.stop_hw = &fm10k_stop_hw_generic; + mac->ops.is_slot_appropriate = &fm10k_is_slot_appropriate_pf; + mac->ops.update_vlan = &fm10k_update_vlan_pf; + mac->ops.read_mac_addr = &fm10k_read_mac_addr_pf; + mac->ops.update_uc_addr = &fm10k_update_uc_addr_pf; + mac->ops.update_mc_addr = &fm10k_update_mc_addr_pf; + mac->ops.update_xcast_mode = &fm10k_update_xcast_mode_pf; + mac->ops.update_int_moderator = &fm10k_update_int_moderator_pf; + mac->ops.update_lport_state = &fm10k_update_lport_state_pf; + mac->ops.update_hw_stats = &fm10k_update_hw_stats_pf; + mac->ops.rebind_hw_stats = &fm10k_rebind_hw_stats_pf; + mac->ops.configure_dglort_map = &fm10k_configure_dglort_map_pf; + mac->ops.set_dma_mask = &fm10k_set_dma_mask_pf; + mac->ops.get_fault = &fm10k_get_fault_pf; + mac->ops.get_host_state = &fm10k_get_host_state_pf; + mac->ops.adjust_systime = &fm10k_adjust_systime_pf; + mac->ops.read_systime = &fm10k_read_systime_pf; + mac->ops.request_tx_timestamp_mode = &fm10k_request_tx_timestamp_mode_pf; + + mac->max_msix_vectors = fm10k_get_pcie_msix_count_generic(hw); + + iov->ops.assign_resources = &fm10k_iov_assign_resources_pf; + iov->ops.configure_tc = &fm10k_iov_configure_tc_pf; + iov->ops.assign_int_moderator = &fm10k_iov_assign_int_moderator_pf; + iov->ops.assign_default_mac_vlan = fm10k_iov_assign_default_mac_vlan_pf; + iov->ops.reset_resources = &fm10k_iov_reset_resources_pf; + iov->ops.set_lport = &fm10k_iov_set_lport_pf; + iov->ops.reset_lport = &fm10k_iov_reset_lport_pf; + iov->ops.update_stats = &fm10k_iov_update_stats_pf; + iov->ops.report_timestamp = &fm10k_iov_report_timestamp_pf; + + return fm10k_sm_mbx_init(hw, &hw->mbx, fm10k_msg_data_pf); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_pf.h b/lib/librte_pmd_fm10k/base/fm10k_pf.h new file mode 100644 index 0000000..f6c290a --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_pf.h @@ -0,0 +1,155 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. 
+ +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_PF_H_ +#define _FM10K_PF_H_ + +#include "fm10k_type.h" +#include "fm10k_common.h" + +bool fm10k_glort_valid_pf(struct fm10k_hw *hw, u16 glort); +u16 fm10k_queues_per_pool(struct fm10k_hw *hw); +u16 fm10k_vf_queue_index(struct fm10k_hw *hw, u16 vf_idx); + +enum fm10k_pf_tlv_msg_id_v1 { + FM10K_PF_MSG_ID_TEST = 0x000, /* msg ID reserved */ + FM10K_PF_MSG_ID_XCAST_MODES = 0x001, + FM10K_PF_MSG_ID_UPDATE_MAC_FWD_RULE = 0x002, + FM10K_PF_MSG_ID_LPORT_MAP = 0x100, + FM10K_PF_MSG_ID_LPORT_CREATE = 0x200, + FM10K_PF_MSG_ID_LPORT_DELETE = 0x201, + FM10K_PF_MSG_ID_CONFIG = 0x300, + FM10K_PF_MSG_ID_UPDATE_PVID = 0x400, + FM10K_PF_MSG_ID_CREATE_FLOW_TABLE = 0x501, + FM10K_PF_MSG_ID_DELETE_FLOW_TABLE = 0x502, + FM10K_PF_MSG_ID_UPDATE_FLOW = 0x503, + FM10K_PF_MSG_ID_DELETE_FLOW = 0x504, + FM10K_PF_MSG_ID_SET_FLOW_STATE = 0x505, + FM10K_PF_MSG_ID_GET_1588_INFO = 0x506, + FM10K_PF_MSG_ID_1588_TIMESTAMP = 0x701, + FM10K_PF_MSG_ID_TX_TIMESTAMP_MODE = 0x702, +}; + +enum fm10k_pf_tlv_attr_id_v1 { + FM10K_PF_ATTR_ID_ERR = 0x00, + FM10K_PF_ATTR_ID_LPORT_MAP = 0x01, + FM10K_PF_ATTR_ID_XCAST_MODE = 0x02, + FM10K_PF_ATTR_ID_MAC_UPDATE = 0x03, + FM10K_PF_ATTR_ID_VLAN_UPDATE = 0x04, + FM10K_PF_ATTR_ID_CONFIG = 0x05, + FM10K_PF_ATTR_ID_CREATE_FLOW_TABLE = 0x06, + FM10K_PF_ATTR_ID_DELETE_FLOW_TABLE = 0x07, + FM10K_PF_ATTR_ID_UPDATE_FLOW = 0x08, + FM10K_PF_ATTR_ID_FLOW_STATE = 0x09, + FM10K_PF_ATTR_ID_FLOW_HANDLE = 0x0A, + FM10K_PF_ATTR_ID_DELETE_FLOW = 0x0B, + FM10K_PF_ATTR_ID_PORT = 0x0C, + FM10K_PF_ATTR_ID_UPDATE_PVID = 0x0D, + FM10K_PF_ATTR_ID_1588_TIMESTAMP = 0x10, + FM10K_PF_ATTR_ID_TIMESTAMP_MODE_REQ = 0x11, + FM10K_PF_ATTR_ID_TIMESTAMP_MODE_RESP = 0x12, +}; + +#define FM10K_MSG_LPORT_MAP_GLORT_SHIFT 0 +#define FM10K_MSG_LPORT_MAP_GLORT_SIZE 16 +#define FM10K_MSG_LPORT_MAP_MASK_SHIFT 16 +#define FM10K_MSG_LPORT_MAP_MASK_SIZE 16 + +#define FM10K_MSG_UPDATE_PVID_GLORT_SHIFT 0 +#define FM10K_MSG_UPDATE_PVID_GLORT_SIZE 16 +#define FM10K_MSG_UPDATE_PVID_PVID_SHIFT 16 +#define 
FM10K_MSG_UPDATE_PVID_PVID_SIZE 16 + +struct fm10k_mac_update { + __le32 mac_lower; + __le16 mac_upper; + __le16 vlan; + __le16 glort; + u8 flags; + u8 action; +}; + +struct fm10k_global_table_data { + __le32 used; + __le32 avail; +}; + +struct fm10k_swapi_error { + __le32 status; + struct fm10k_global_table_data mac; + struct fm10k_global_table_data nexthop; + struct fm10k_global_table_data ffu; +}; + +struct fm10k_swapi_1588_timestamp { + __le64 egress; + __le64 ingress; + __le16 dglort; + __le16 sglort; +}; + +#define FM10K_PF_MSG_LPORT_CREATE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_CREATE, NULL, func) +#define FM10K_PF_MSG_LPORT_DELETE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_DELETE, NULL, func) +s32 fm10k_msg_lport_map_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[]; +#define FM10K_PF_MSG_LPORT_MAP_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_MAP, \ + fm10k_lport_map_msg_attr, func) +s32 fm10k_msg_update_pvid_pf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_update_pvid_msg_attr[]; +#define FM10K_PF_MSG_UPDATE_PVID_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_UPDATE_PVID, \ + fm10k_update_pvid_msg_attr, func) + +s32 fm10k_msg_err_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_err_msg_attr[]; +#define FM10K_PF_MSG_ERR_HANDLER(msg, func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_##msg, fm10k_err_msg_attr, func) + +extern const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[]; +#define FM10K_PF_MSG_1588_TIMESTAMP_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_1588_TIMESTAMP, \ + fm10k_1588_timestamp_msg_attr, func) + +s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +extern const struct fm10k_msg_data fm10k_iov_msg_data_pf[]; + +s32 fm10k_init_ops_pf(struct fm10k_hw *hw); +#endif /* _FM10K_PF_H */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_tlv.c b/lib/librte_pmd_fm10k/base/fm10k_tlv.c new file mode 100644 index 0000000..1d9d7d8 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_tlv.c @@ -0,0 +1,914 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_tlv.h" + +/** + * fm10k_tlv_msg_init - Initialize message block for TLV data storage + * @msg: Pointer to message block + * @msg_id: Message ID indicating message type + * + * This function return success if provided with a valid message pointer + **/ +s32 fm10k_tlv_msg_init(u32 *msg, u16 msg_id) +{ + DEBUGFUNC("fm10k_tlv_msg_init"); + + /* verify pointer is not NULL */ + if (!msg) + return FM10K_ERR_PARAM; + + *msg = (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT) | msg_id; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_null_string - Place null terminated string on message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @string: Pointer to string to be stored in attribute + * + * This function will reorder a string to be CPU endian and store it in + * the attribute buffer. It will return success if provided with a valid + * pointers. + **/ +s32 fm10k_tlv_attr_put_null_string(u32 *msg, u16 attr_id, + const unsigned char *string) +{ + u32 attr_data = 0, len = 0; + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_null_string"); + + /* verify pointers are not NULL */ + if (!string || !msg) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* copy string into local variable and then write to msg */ + do { + /* write data to message */ + if (len && !(len % 4)) { + attr[len / 4] = attr_data; + attr_data = 0; + } + + /* record character to offset location */ + attr_data |= (u32)(*string) << (8 * (len % 4)); + len++; + + /* test for NULL and then increment */ + } while (*(string++)); + + /* write last piece of data to message */ + attr[(len + 3) / 4] = attr_data; + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_null_string - Get null terminated string from attribute + * @attr: Pointer to attribute + * @string: Pointer to location of destination string + * + * This function pulls the string back out of the attribute and will place + * it in the array pointed by by string. It will return success if provided + * with a valid pointers. + **/ +s32 fm10k_tlv_attr_get_null_string(u32 *attr, unsigned char *string) +{ + u32 len; + + DEBUGFUNC("fm10k_tlv_attr_get_null_string"); + + /* verify pointers are not NULL */ + if (!string || !attr) + return FM10K_ERR_PARAM; + + len = *attr >> FM10K_TLV_LEN_SHIFT; + attr++; + + while (len--) + string[len] = (u8)(attr[len / 4] >> (8 * (len % 4))); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_mac_vlan - Store MAC/VLAN attribute in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @mac_addr: MAC address to be stored + * + * This function will reorder a MAC address to be CPU endian and store it + * in the attribute buffer. 
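The TLV encoding used throughout this file packs an ID and a byte length into each 32 bit header word, and the running message length is accumulated in the message header as attributes are appended. A standalone sketch of that header layout; the 16 bit split assumed here is for illustration only, the real FM10K_TLV_LEN_SHIFT and related masks are defined in fm10k_tlv.h:

    #include <stdint.h>
    #include <stdio.h>

    /* assumed layout for illustration: attribute/message ID in the low
     * 16 bits, byte length in the upper 16 bits */
    #define TLV_LEN_SHIFT 16
    #define TLV_ID_MASK   0xFFFFu

    static uint32_t tlv_hdr(uint16_t id, uint16_t len_bytes)
    {
            return ((uint32_t)len_bytes << TLV_LEN_SHIFT) | id;
    }

    static void tlv_parse(uint32_t hdr, uint16_t *id, uint16_t *len_bytes)
    {
            *id = (uint16_t)(hdr & TLV_ID_MASK);
            *len_bytes = (uint16_t)(hdr >> TLV_LEN_SHIFT);
    }

    int main(void)
    {
            uint16_t id, len;
            uint32_t hdr = tlv_hdr(0x0C /* e.g. the PORT attribute ID */, 4);

            tlv_parse(hdr, &id, &len);
            printf("hdr=0x%08x id=0x%03x len=%u bytes\n", hdr, id, len);
            return 0;
    }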
It will return success if provided with a + * valid pointers. + **/ +s32 fm10k_tlv_attr_put_mac_vlan(u32 *msg, u16 attr_id, + const u8 *mac_addr, u16 vlan) +{ + u32 len = ETH_ALEN << FM10K_TLV_LEN_SHIFT; + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_mac_vlan"); + + /* verify pointers are not NULL */ + if (!msg || !mac_addr) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* record attribute header, update message length */ + attr[0] = len | attr_id; + + /* copy value into local variable and then write to msg */ + attr[1] = FM10K_LE32_TO_CPU(*(const __le32 *)&mac_addr[0]); + attr[2] = FM10K_LE16_TO_CPU(*(const __le16 *)&mac_addr[4]); + attr[2] |= (u32)vlan << 16; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_mac_vlan - Get MAC/VLAN stored in attribute + * @attr: Pointer to attribute + * @attr_id: Attribute ID + * @mac_addr: location of buffer to store MAC address + * + * This function pulls the MAC address back out of the attribute and will + * place it in the array pointed by by mac_addr. It will return success + * if provided with a valid pointers. + **/ +s32 fm10k_tlv_attr_get_mac_vlan(u32 *attr, u8 *mac_addr, u16 *vlan) +{ + DEBUGFUNC("fm10k_tlv_attr_get_mac_vlan"); + + /* verify pointers are not NULL */ + if (!mac_addr || !attr) + return FM10K_ERR_PARAM; + + *(__le32 *)&mac_addr[0] = FM10K_CPU_TO_LE32(attr[1]); + *(__le16 *)&mac_addr[4] = FM10K_CPU_TO_LE16((u16)(attr[2])); + *vlan = (u16)(attr[2] >> 16); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_bool - Add header indicating value "true" + * @msg: Pointer to message block + * @attr_id: Attribute ID + * + * This function will simply add an attribute header, the fact + * that the header is here means the attribute value is true, else + * it is false. The function will return success if provided with a + * valid pointers. + **/ +s32 fm10k_tlv_attr_put_bool(u32 *msg, u16 attr_id) +{ + DEBUGFUNC("fm10k_tlv_attr_put_bool"); + + /* verify pointers are not NULL */ + if (!msg) + return FM10K_ERR_PARAM; + + /* record attribute header */ + msg[FM10K_TLV_DWORD_LEN(*msg)] = attr_id; + + /* add header length to length */ + *msg += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_value - Store integer value attribute in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @value: Value to be written + * @len: Size of value + * + * This function will place an integer value of up to 8 bytes in size + * in a message attribute. The function will return success provided + * that msg is a valid pointer, and len is 1, 2, 4, or 8. 
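fm10k_tlv_attr_put_value() accepts only 1, 2, 4 or 8 byte values; the (len & (len - 1)) test in its body is the usual power-of-two check, and values shorter than a word are masked down before being stored. A standalone illustration (not part of the patch):

    #include <stdio.h>

    /* same acceptance test as fm10k_tlv_attr_put_value(): non-zero,
     * at most 8, and a power of two */
    static int len_ok(unsigned len)
    {
            return len && len <= 8 && !(len & (len - 1));
    }

    int main(void)
    {
            unsigned len;

            for (len = 0; len <= 9; len++)
                    printf("len %u: %s\n", len, len_ok(len) ? "ok" : "rejected");

            /* a 2 byte value is masked before being stored, exactly like
             * attr[1] = (u32)value & ((0x1ul << (8 * len)) - 1) above */
            printf("0x12345678 stored as 2 bytes -> 0x%04lx\n",
                   0x12345678ul & ((0x1ul << (8 * 2)) - 1));
            return 0;
    }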
+ **/ +s32 fm10k_tlv_attr_put_value(u32 *msg, u16 attr_id, s64 value, u32 len) +{ + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_value"); + + /* verify non-null msg and len is 1, 2, 4, or 8 */ + if (!msg || !len || len > 8 || (len & (len - 1))) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + if (len < 4) { + attr[1] = (u32)value & ((0x1ul << (8 * len)) - 1); + } else { + attr[1] = (u32)value; + if (len > 4) + attr[2] = (u32)(value >> 32); + } + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_value - Get integer value stored in attribute + * @attr: Pointer to attribute + * @value: Pointer to destination buffer + * @len: Size of value + * + * This function will place an integer value of up to 8 bytes in size + * in the offset pointed to by value. The function will return success + * provided that pointers are valid and the len value matches the + * attribute length. + **/ +s32 fm10k_tlv_attr_get_value(u32 *attr, void *value, u32 len) +{ + DEBUGFUNC("fm10k_tlv_attr_get_value"); + + /* verify pointers are not NULL */ + if (!attr || !value) + return FM10K_ERR_PARAM; + + if ((*attr >> FM10K_TLV_LEN_SHIFT) != len) + return FM10K_ERR_PARAM; + + if (len == 8) + *(u64 *)value = ((u64)attr[2] << 32) | attr[1]; + else if (len == 4) + *(u32 *)value = attr[1]; + else if (len == 2) + *(u16 *)value = (u16)attr[1]; + else + *(u8 *)value = (u8)attr[1]; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_le_struct - Store little endian structure in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @le_struct: Pointer to structure to be written + * @len: Size of le_struct + * + * This function will place a little endian structure value in a message + * attribute. The function will return success provided that all pointers + * are valid and length is a non-zero multiple of 4. + **/ +s32 fm10k_tlv_attr_put_le_struct(u32 *msg, u16 attr_id, + const void *le_struct, u32 len) +{ + const __le32 *le32_ptr = (const __le32 *)le_struct; + u32 *attr; + u32 i; + + DEBUGFUNC("fm10k_tlv_attr_put_le_struct"); + + /* verify non-null msg and len is in 32 bit words */ + if (!msg || !len || (len % 4)) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* copy le32 structure into host byte order at 32b boundaries */ + for (i = 0; i < (len / 4); i++) + attr[i + 1] = FM10K_LE32_TO_CPU(le32_ptr[i]); + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_le_struct - Get little endian struct form attribute + * @attr: Pointer to attribute + * @le_struct: Pointer to structure to be written + * @len: Size of structure + * + * This function will place a little endian structure in the buffer + * pointed to by le_struct. The function will return success + * provided that pointers are valid and the len value matches the + * attribute length. 
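A value attribute stores 1, 2, 4 or 8 bytes after the header, and the get side insists that the caller's size match the stored length exactly. A small sketch using the put/get wrappers declared in fm10k_tlv.h; the attribute ID is an example value only.

static void example_u32_round_trip(void)
{
	u32 msg[16];
	u32 mtu = 0;

	fm10k_tlv_msg_init(msg, FM10K_TLV_MSG_ID_TEST);

	/* equivalent to fm10k_tlv_attr_put_value(msg, 3, 1500, 4) */
	fm10k_tlv_attr_put_u32(msg, 3 /* example ID */, 1500);

	/* returns FM10K_ERR_PARAM if sizeof(mtu) != stored length */
	fm10k_tlv_attr_get_u32(&msg[1], &mtu);
}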
+ **/ +s32 fm10k_tlv_attr_get_le_struct(u32 *attr, void *le_struct, u32 len) +{ + __le32 *le32_ptr = (__le32 *)le_struct; + u32 i; + + DEBUGFUNC("fm10k_tlv_attr_get_le_struct"); + + /* verify pointers are not NULL */ + if (!le_struct || !attr) + return FM10K_ERR_PARAM; + + if ((*attr >> FM10K_TLV_LEN_SHIFT) != len) + return FM10K_ERR_PARAM; + + attr++; + + for (i = 0; len; i++, len -= 4) + le32_ptr[i] = FM10K_CPU_TO_LE32(attr[i]); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_nest_start - Start a set of nested attributes + * @msg: Pointer to message block + * @attr_id: Attribute ID + * + * This function will mark off a new nested region for encapsulating + * a given set of attributes. The idea is if you wish to place a secondary + * structure within the message this mechanism allows for that. The + * function will return NULL on failure, and a pointer to the start + * of the nested attributes on success. + **/ +u32 *fm10k_tlv_attr_nest_start(u32 *msg, u16 attr_id) +{ + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_nest_start"); + + /* verify pointer is not NULL */ + if (!msg) + return NULL; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + attr[0] = attr_id; + + /* return pointer to nest header */ + return attr; +} + +/** + * fm10k_tlv_attr_nest_start - Start a set of nested attributes + * @msg: Pointer to message block + * + * This function closes off an existing set of nested attributes. The + * message pointer should be pointing to the parent of the nest. So in + * the case of a nest within the nest this would be the outer nest pointer. + * This function will return success provided all pointers are valid. + **/ +s32 fm10k_tlv_attr_nest_stop(u32 *msg) +{ + u32 *attr; + u32 len; + + DEBUGFUNC("fm10k_tlv_attr_nest_stop"); + + /* verify pointer is not NULL */ + if (!msg) + return FM10K_ERR_PARAM; + + /* locate the nested header and retrieve its length */ + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + len = (attr[0] >> FM10K_TLV_LEN_SHIFT) << FM10K_TLV_LEN_SHIFT; + + /* only include nest if data was added to it */ + if (len) { + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += len; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_validate - Validate attribute metadata + * @attr: Pointer to attribute + * @tlv_attr: Type and length info for attribute + * + * This function does some basic validation of the input TLV. It + * verifies the length, and in the case of null terminated strings + * it verifies that the last byte is null. The function will + * return FM10K_ERR_PARAM if any attribute is malformed, otherwise + * it returns 0. 
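The nesting pair above lets a message carry a sub-message: attributes written through the pointer returned by nest_start() accumulate inside the nest, and nest_stop() folds the nest length back into the outer message. A sketch mirroring how fm10k_tlv_msg_test_create() uses it:

static void example_nested_attr(void)
{
	u32 msg[64];
	u32 *nest;

	fm10k_tlv_msg_init(msg, FM10K_TLV_MSG_ID_TEST);

	nest = fm10k_tlv_attr_nest_start(msg, FM10K_TEST_MSG_NESTED);
	if (!nest)
		return;

	/* nested attributes are written relative to the nest header */
	fm10k_tlv_attr_put_u32(nest, FM10K_TEST_MSG_U32, 0x87654321);

	/* close the nest against the outer message pointer, not 'nest' */
	fm10k_tlv_attr_nest_stop(msg);
}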
+ **/ +STATIC s32 fm10k_tlv_attr_validate(u32 *attr, + const struct fm10k_tlv_attr *tlv_attr) +{ + u32 attr_id = *attr & FM10K_TLV_ID_MASK; + u16 len = *attr >> FM10K_TLV_LEN_SHIFT; + + DEBUGFUNC("fm10k_tlv_attr_validate"); + + /* verify this is an attribute and not a message */ + if (*attr & (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT)) + return FM10K_ERR_PARAM; + + /* search through the list of attributes to find a matching ID */ + while (tlv_attr->id < attr_id) + tlv_attr++; + + /* if didn't find a match then we should exit */ + if (tlv_attr->id != attr_id) + return FM10K_NOT_IMPLEMENTED; + + /* move to start of attribute data */ + attr++; + + switch (tlv_attr->type) { + case FM10K_TLV_NULL_STRING: + if (!len || + (attr[(len - 1) / 4] & (0xFF << (8 * ((len - 1) % 4))))) + return FM10K_ERR_PARAM; + if (len > tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_MAC_ADDR: + if (len != ETH_ALEN) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_BOOL: + if (len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_UNSIGNED: + case FM10K_TLV_SIGNED: + if (len != tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_LE_STRUCT: + /* struct must be 4 byte aligned */ + if ((len % 4) || len != tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_NESTED: + /* nested attributes must be 4 byte aligned */ + if (len % 4) + return FM10K_ERR_PARAM; + break; + default: + /* attribute id is mapped to bad value */ + return FM10K_ERR_PARAM; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_parse - Parses stream of attribute data + * @attr: Pointer to attribute list + * @results: Pointer array to store pointers to attributes + * @tlv_attr: Type and length info for attributes + * + * This function validates a stream of attributes and parses them + * up into an array of pointers stored in results. The function will + * return FM10K_ERR_PARAM on any input or message error, + * FM10K_NOT_IMPLEMENTED for any attribute that is outside of the array + * and 0 on success. 
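fm10k_tlv_attr_validate() checks each received attribute against a per-message descriptor table sorted by ascending ID and terminated with FM10K_TLV_ATTR_LAST (fm10k_tlv_msg_test_attr further down is the in-tree example). A hypothetical table of the same shape:

/* IDs are invented for illustration; they must stay below
 * FM10K_TLV_RESULTS_MAX so fm10k_tlv_attr_parse() can index results[].
 */
enum example_attr_id {
	EXAMPLE_ATTR_NAME = 1,
	EXAMPLE_ATTR_MAC  = 2,
	EXAMPLE_ATTR_MTU  = 3,
};

static const struct fm10k_tlv_attr example_msg_attr[] = {
	FM10K_TLV_ATTR_NULL_STRING(EXAMPLE_ATTR_NAME, 32),
	FM10K_TLV_ATTR_MAC_ADDR(EXAMPLE_ATTR_MAC),
	FM10K_TLV_ATTR_U32(EXAMPLE_ATTR_MTU),
	FM10K_TLV_ATTR_LAST
};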
+ **/ +s32 fm10k_tlv_attr_parse(u32 *attr, u32 **results, + const struct fm10k_tlv_attr *tlv_attr) +{ + u32 i, attr_id, offset = 0; + s32 err = 0; + u16 len; + + DEBUGFUNC("fm10k_tlv_attr_parse"); + + /* verify pointers are not NULL */ + if (!attr || !results) + return FM10K_ERR_PARAM; + + /* initialize results to NULL */ + for (i = 0; i < FM10K_TLV_RESULTS_MAX; i++) + results[i] = NULL; + + /* pull length from the message header */ + len = *attr >> FM10K_TLV_LEN_SHIFT; + + /* no attributes to parse if there is no length */ + if (!len) + return FM10K_SUCCESS; + + /* no attributes to parse, just raw data, message becomes attribute */ + if (!tlv_attr) { + results[0] = attr; + return FM10K_SUCCESS; + } + + /* move to start of attribute data */ + attr++; + + /* run through list parsing all attributes */ + while (offset < len) { + attr_id = *attr & FM10K_TLV_ID_MASK; + + if (attr_id < FM10K_TLV_RESULTS_MAX) + err = fm10k_tlv_attr_validate(attr, tlv_attr); + else + err = FM10K_NOT_IMPLEMENTED; + + if (err < 0) + return err; + if (!err) + results[attr_id] = attr; + + /* update offset */ + offset += FM10K_TLV_DWORD_LEN(*attr) * 4; + + /* move to next attribute */ + attr = &attr[FM10K_TLV_DWORD_LEN(*attr)]; + } + + /* we should find ourselves at the end of the list */ + if (offset != len) + return FM10K_ERR_PARAM; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_msg_parse - Parses message header and calls function handler + * @hw: Pointer to hardware structure + * @msg: Pointer to message + * @mbx: Pointer to mailbox information structure + * @func: Function array containing list of message handling functions + * + * This function should be the first function called upon receiving a + * message. The handler will identify the message type and call the correct + * handler for the given message. It will return the value from the function + * call on a recognized message type, otherwise it will return + * FM10K_NOT_IMPLEMENTED on an unrecognized type. + **/ +s32 fm10k_tlv_msg_parse(struct fm10k_hw *hw, u32 *msg, + struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *data) +{ + u32 *results[FM10K_TLV_RESULTS_MAX]; + u32 msg_id; + s32 err; + + DEBUGFUNC("fm10k_tlv_msg_parse"); + + /* verify pointer is not NULL */ + if (!msg || !data) + return FM10K_ERR_PARAM; + + /* verify this is a message and not an attribute */ + if (!(*msg & (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT))) + return FM10K_ERR_PARAM; + + /* grab message ID */ + msg_id = *msg & FM10K_TLV_ID_MASK; + + while (data->id < msg_id) + data++; + + /* if we didn't find it then pass it up as an error */ + if (data->id != msg_id) { + while (data->id != FM10K_TLV_ERROR) + data++; + } + + /* parse the attributes into the results list */ + err = fm10k_tlv_attr_parse(msg, results, data->attr); + if (err < 0) + return err; + + return data->func(hw, results, mbx); +} + +/** + * fm10k_tlv_msg_error - Default handler for unrecognized TLV message IDs + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Unused mailbox pointer + * + * This function is a default handler for unrecognized messages. At a + * a minimum it just indicates that the message requested was + * unimplemented. 
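Receive-side consumers hand fm10k_tlv_msg_parse() a fm10k_msg_data table, again sorted by message ID, with an FM10K_TLV_ERROR entry at the end so unrecognized IDs land in a default handler. A sketch built from the handlers defined in this file:

static const struct fm10k_msg_data example_msg_data[] = {
	FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
	FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error),
};

/* on mailbox receive (hw, msg and mbx supplied by the caller):
 *	err = fm10k_tlv_msg_parse(hw, msg, mbx, example_msg_data);
 */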
+ **/ +s32 fm10k_tlv_msg_error(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + UNREFERENCED_3PARAMETER(hw, results, mbx); + DEBUGOUT1("Unknown message ID %u\n", **results & FM10K_TLV_ID_MASK); + return FM10K_NOT_IMPLEMENTED; +} + +STATIC const unsigned char test_str[] = "fm10k"; +STATIC const unsigned char test_mac[ETH_ALEN] = { 0x12, 0x34, 0x56, + 0x78, 0x9a, 0xbc }; +STATIC const u16 test_vlan = 0x0FED; +STATIC const u64 test_u64 = 0xfedcba9876543210ull; +STATIC const u32 test_u32 = 0x87654321; +STATIC const u16 test_u16 = 0x8765; +STATIC const u8 test_u8 = 0x87; +STATIC const s64 test_s64 = -0x123456789abcdef0ll; +STATIC const s32 test_s32 = -0x1235678; +STATIC const s16 test_s16 = -0x1234; +STATIC const s8 test_s8 = -0x12; +STATIC const __le32 test_le[2] = { FM10K_CPU_TO_LE32(0x12345678), + FM10K_CPU_TO_LE32(0x9abcdef0)}; + +/* The message below is meant to be used as a test message to demonstrate + * how to use the TLV interface and to test the types. Normally this code + * be compiled out by stripping the code wrapped in FM10K_TLV_TEST_MSG + */ +const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[] = { + FM10K_TLV_ATTR_NULL_STRING(FM10K_TEST_MSG_STRING, 80), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_TEST_MSG_MAC_ADDR), + FM10K_TLV_ATTR_U8(FM10K_TEST_MSG_U8), + FM10K_TLV_ATTR_U16(FM10K_TEST_MSG_U16), + FM10K_TLV_ATTR_U32(FM10K_TEST_MSG_U32), + FM10K_TLV_ATTR_U64(FM10K_TEST_MSG_U64), + FM10K_TLV_ATTR_S8(FM10K_TEST_MSG_S8), + FM10K_TLV_ATTR_S16(FM10K_TEST_MSG_S16), + FM10K_TLV_ATTR_S32(FM10K_TEST_MSG_S32), + FM10K_TLV_ATTR_S64(FM10K_TEST_MSG_S64), + FM10K_TLV_ATTR_LE_STRUCT(FM10K_TEST_MSG_LE_STRUCT, 8), + FM10K_TLV_ATTR_NESTED(FM10K_TEST_MSG_NESTED), + FM10K_TLV_ATTR_S32(FM10K_TEST_MSG_RESULT), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_tlv_msg_test_generate_data - Stuff message with data + * @msg: Pointer to message + * @attr_flags: List of flags indicating what attributes to add + * + * This function is meant to load a message buffer with attribute data + **/ +STATIC void fm10k_tlv_msg_test_generate_data(u32 *msg, u32 attr_flags) +{ + DEBUGFUNC("fm10k_tlv_msg_test_generate_data"); + + if (attr_flags & (1 << FM10K_TEST_MSG_STRING)) + fm10k_tlv_attr_put_null_string(msg, FM10K_TEST_MSG_STRING, + test_str); + if (attr_flags & (1 << FM10K_TEST_MSG_MAC_ADDR)) + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_TEST_MSG_MAC_ADDR, + test_mac, test_vlan); + if (attr_flags & (1 << FM10K_TEST_MSG_U8)) + fm10k_tlv_attr_put_u8(msg, FM10K_TEST_MSG_U8, test_u8); + if (attr_flags & (1 << FM10K_TEST_MSG_U16)) + fm10k_tlv_attr_put_u16(msg, FM10K_TEST_MSG_U16, test_u16); + if (attr_flags & (1 << FM10K_TEST_MSG_U32)) + fm10k_tlv_attr_put_u32(msg, FM10K_TEST_MSG_U32, test_u32); + if (attr_flags & (1 << FM10K_TEST_MSG_U64)) + fm10k_tlv_attr_put_u64(msg, FM10K_TEST_MSG_U64, test_u64); + if (attr_flags & (1 << FM10K_TEST_MSG_S8)) + fm10k_tlv_attr_put_s8(msg, FM10K_TEST_MSG_S8, test_s8); + if (attr_flags & (1 << FM10K_TEST_MSG_S16)) + fm10k_tlv_attr_put_s16(msg, FM10K_TEST_MSG_S16, test_s16); + if (attr_flags & (1 << FM10K_TEST_MSG_S32)) + fm10k_tlv_attr_put_s32(msg, FM10K_TEST_MSG_S32, test_s32); + if (attr_flags & (1 << FM10K_TEST_MSG_S64)) + fm10k_tlv_attr_put_s64(msg, FM10K_TEST_MSG_S64, test_s64); + if (attr_flags & (1 << FM10K_TEST_MSG_LE_STRUCT)) + fm10k_tlv_attr_put_le_struct(msg, FM10K_TEST_MSG_LE_STRUCT, + test_le, 8); +} + +/** + * fm10k_tlv_msg_test_create - Create a test message testing all attributes + * @msg: Pointer to message + * @attr_flags: List of flags indicating what attributes to add 
+ * + * This function is meant to load a message buffer with all attribute types + * including a nested attribute. + **/ +void fm10k_tlv_msg_test_create(u32 *msg, u32 attr_flags) +{ + u32 *nest = NULL; + + DEBUGFUNC("fm10k_tlv_msg_test_create"); + + fm10k_tlv_msg_init(msg, FM10K_TLV_MSG_ID_TEST); + + fm10k_tlv_msg_test_generate_data(msg, attr_flags); + + /* check for nested attributes */ + attr_flags >>= FM10K_TEST_MSG_NESTED; + + if (attr_flags) { + nest = fm10k_tlv_attr_nest_start(msg, FM10K_TEST_MSG_NESTED); + + fm10k_tlv_msg_test_generate_data(nest, attr_flags); + + fm10k_tlv_attr_nest_stop(msg); + } +} + +/** + * fm10k_tlv_msg_test - Validate all results on test message receive + * @hw: Pointer to hardware structure + * @results: Pointer array to attributes in the message + * @mbx: Pointer to mailbox information structure + * + * This function does a check to verify all attributes match what the test + * message placed in the message buffer. It is the default handler + * for TLV test messages. + **/ +s32 fm10k_tlv_msg_test(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u32 *nest_results[FM10K_TLV_RESULTS_MAX]; + unsigned char result_str[80]; + unsigned char result_mac[ETH_ALEN]; + s32 err = FM10K_SUCCESS; + __le32 result_le[2]; + u16 result_vlan; + u64 result_u64; + u32 result_u32; + u16 result_u16; + u8 result_u8; + s64 result_s64; + s32 result_s32; + s16 result_s16; + s8 result_s8; + u32 reply[3]; + + DEBUGFUNC("fm10k_tlv_msg_test"); + + /* retrieve results of a previous test */ + if (!!results[FM10K_TEST_MSG_RESULT]) + return fm10k_tlv_attr_get_s32(results[FM10K_TEST_MSG_RESULT], + &mbx->test_result); + +parse_nested: + if (!!results[FM10K_TEST_MSG_STRING]) { + err = fm10k_tlv_attr_get_null_string( + results[FM10K_TEST_MSG_STRING], + result_str); + if (!err && memcmp(test_str, result_str, sizeof(test_str))) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_MAC_ADDR]) { + err = fm10k_tlv_attr_get_mac_vlan( + results[FM10K_TEST_MSG_MAC_ADDR], + result_mac, &result_vlan); + if (!err && memcmp(test_mac, result_mac, ETH_ALEN)) + err = FM10K_ERR_INVALID_VALUE; + if (!err && test_vlan != result_vlan) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U8]) { + err = fm10k_tlv_attr_get_u8(results[FM10K_TEST_MSG_U8], + &result_u8); + if (!err && test_u8 != result_u8) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U16]) { + err = fm10k_tlv_attr_get_u16(results[FM10K_TEST_MSG_U16], + &result_u16); + if (!err && test_u16 != result_u16) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U32]) { + err = fm10k_tlv_attr_get_u32(results[FM10K_TEST_MSG_U32], + &result_u32); + if (!err && test_u32 != result_u32) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U64]) { + err = fm10k_tlv_attr_get_u64(results[FM10K_TEST_MSG_U64], + &result_u64); + if (!err && test_u64 != result_u64) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S8]) { + err = fm10k_tlv_attr_get_s8(results[FM10K_TEST_MSG_S8], + &result_s8); + if (!err && test_s8 != result_s8) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S16]) { + err = fm10k_tlv_attr_get_s16(results[FM10K_TEST_MSG_S16], + &result_s16); + if (!err && test_s16 != result_s16) + err = 
FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S32]) { + err = fm10k_tlv_attr_get_s32(results[FM10K_TEST_MSG_S32], + &result_s32); + if (!err && test_s32 != result_s32) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S64]) { + err = fm10k_tlv_attr_get_s64(results[FM10K_TEST_MSG_S64], + &result_s64); + if (!err && test_s64 != result_s64) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_LE_STRUCT]) { + err = fm10k_tlv_attr_get_le_struct( + results[FM10K_TEST_MSG_LE_STRUCT], + result_le, + sizeof(result_le)); + if (!err && memcmp(test_le, result_le, sizeof(test_le))) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + + if (!!results[FM10K_TEST_MSG_NESTED]) { + /* clear any pointers */ + memset(nest_results, 0, sizeof(nest_results)); + + /* parse the nested attributes into the nest results list */ + err = fm10k_tlv_attr_parse(results[FM10K_TEST_MSG_NESTED], + nest_results, + fm10k_tlv_msg_test_attr); + if (err) + goto report_result; + + /* loop back through to the start */ + results = nest_results; + goto parse_nested; + } + +report_result: + /* generate reply with test result */ + fm10k_tlv_msg_init(reply, FM10K_TLV_MSG_ID_TEST); + fm10k_tlv_attr_put_s32(reply, FM10K_TEST_MSG_RESULT, err); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, reply); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_tlv.h b/lib/librte_pmd_fm10k/base/fm10k_tlv.h new file mode 100644 index 0000000..ad97236 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_tlv.h @@ -0,0 +1,199 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _FM10K_TLV_H_ +#define _FM10K_TLV_H_ + +/* forward declaration */ +struct fm10k_msg_data; + +#include "fm10k_type.h" + +/* Message / Argument header format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | Length | Flags | Type / ID | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * The message header format described here is used for messages that are + * passed between the PF and the VF. To allow for messages larger then + * mailbox size we will provide a message with the above header and it + * will be segmented and transported to the mailbox to the other side where + * it is reassembled. It contains the following fields: + * Len: Length of the message in bytes excluding the message header + * Flags: TBD + * Rule: These will be the message/argument types we pass + */ +/* message data header */ +#define FM10K_TLV_ID_SHIFT 0 +#define FM10K_TLV_ID_SIZE 16 +#define FM10K_TLV_ID_MASK ((1u << FM10K_TLV_ID_SIZE) - 1) +#define FM10K_TLV_FLAGS_SHIFT 16 +#define FM10K_TLV_FLAGS_MSG 0x1 +#define FM10K_TLV_FLAGS_SIZE 4 +#define FM10K_TLV_LEN_SHIFT 20 +#define FM10K_TLV_LEN_SIZE 12 + +#define FM10K_TLV_HDR_LEN 4ul +#define FM10K_TLV_LEN_ALIGN_MASK \ + ((FM10K_TLV_HDR_LEN - 1) << FM10K_TLV_LEN_SHIFT) +#define FM10K_TLV_LEN_ALIGN(tlv) \ + (((tlv) + FM10K_TLV_LEN_ALIGN_MASK) & ~FM10K_TLV_LEN_ALIGN_MASK) +#define FM10K_TLV_DWORD_LEN(tlv) \ + ((u16)((FM10K_TLV_LEN_ALIGN(tlv)) >> (FM10K_TLV_LEN_SHIFT + 2)) + 1) + +#define FM10K_TLV_RESULTS_MAX 32 + +enum fm10k_tlv_type { + FM10K_TLV_NULL_STRING, + FM10K_TLV_MAC_ADDR, + FM10K_TLV_BOOL, + FM10K_TLV_UNSIGNED, + FM10K_TLV_SIGNED, + FM10K_TLV_LE_STRUCT, + FM10K_TLV_NESTED, + FM10K_TLV_MAX_TYPE +}; + +#define FM10K_TLV_ERROR (~0u) + +struct fm10k_tlv_attr { + unsigned int id; + enum fm10k_tlv_type type; + u16 len; +}; + +#define FM10K_TLV_ATTR_NULL_STRING(id, len) { id, FM10K_TLV_NULL_STRING, len } +#define FM10K_TLV_ATTR_MAC_ADDR(id) { id, FM10K_TLV_MAC_ADDR, 6 } +#define FM10K_TLV_ATTR_BOOL(id) { id, FM10K_TLV_BOOL, 0 } +#define FM10K_TLV_ATTR_U8(id) { id, FM10K_TLV_UNSIGNED, 1 } +#define FM10K_TLV_ATTR_U16(id) { id, FM10K_TLV_UNSIGNED, 2 } +#define FM10K_TLV_ATTR_U32(id) { id, FM10K_TLV_UNSIGNED, 4 } +#define FM10K_TLV_ATTR_U64(id) { id, FM10K_TLV_UNSIGNED, 8 } +#define FM10K_TLV_ATTR_S8(id) { id, FM10K_TLV_SIGNED, 1 } +#define FM10K_TLV_ATTR_S16(id) { id, FM10K_TLV_SIGNED, 2 } +#define FM10K_TLV_ATTR_S32(id) { id, FM10K_TLV_SIGNED, 4 } +#define FM10K_TLV_ATTR_S64(id) { id, FM10K_TLV_SIGNED, 8 } +#define FM10K_TLV_ATTR_LE_STRUCT(id, len) { id, FM10K_TLV_LE_STRUCT, len } +#define FM10K_TLV_ATTR_NESTED(id) { id, FM10K_TLV_NESTED } +#define FM10K_TLV_ATTR_LAST { FM10K_TLV_ERROR } + +struct fm10k_msg_data { + unsigned int id; + const struct fm10k_tlv_attr *attr; + s32 (*func)(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +}; + +#define FM10K_MSG_HANDLER(id, attr, func) { id, attr, func } + +s32 fm10k_tlv_msg_init(u32 *, u16); +s32 fm10k_tlv_attr_put_null_string(u32 *, u16, const unsigned char *); +s32 fm10k_tlv_attr_get_null_string(u32 *, unsigned char *); +s32 fm10k_tlv_attr_put_mac_vlan(u32 *, u16, const u8 *, u16); +s32 fm10k_tlv_attr_get_mac_vlan(u32 *, u8 *, u16 *); +s32 fm10k_tlv_attr_put_bool(u32 *, u16); +s32 fm10k_tlv_attr_put_value(u32 *, u16, s64, u32); +#define fm10k_tlv_attr_put_u8(msg, attr_id, val) \ + 
fm10k_tlv_attr_put_value(msg, attr_id, val, 1) +#define fm10k_tlv_attr_put_u16(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 2) +#define fm10k_tlv_attr_put_u32(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 4) +#define fm10k_tlv_attr_put_u64(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 8) +#define fm10k_tlv_attr_put_s8(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 1) +#define fm10k_tlv_attr_put_s16(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 2) +#define fm10k_tlv_attr_put_s32(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 4) +#define fm10k_tlv_attr_put_s64(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 8) +s32 fm10k_tlv_attr_get_value(u32 *, void *, u32); +#define fm10k_tlv_attr_get_u8(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u8)) +#define fm10k_tlv_attr_get_u16(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u16)) +#define fm10k_tlv_attr_get_u32(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u32)) +#define fm10k_tlv_attr_get_u64(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u64)) +#define fm10k_tlv_attr_get_s8(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s8)) +#define fm10k_tlv_attr_get_s16(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s16)) +#define fm10k_tlv_attr_get_s32(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s32)) +#define fm10k_tlv_attr_get_s64(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s64)) +s32 fm10k_tlv_attr_put_le_struct(u32 *, u16, const void *, u32); +s32 fm10k_tlv_attr_get_le_struct(u32 *, void *, u32); +u32 *fm10k_tlv_attr_nest_start(u32 *, u16); +s32 fm10k_tlv_attr_nest_stop(u32 *); +s32 fm10k_tlv_attr_parse(u32 *, u32 **, const struct fm10k_tlv_attr *); +s32 fm10k_tlv_msg_parse(struct fm10k_hw *, u32 *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *); +s32 fm10k_tlv_msg_error(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *); + +#define FM10K_TLV_MSG_ID_TEST 0 + +enum fm10k_tlv_test_attr_id { + FM10K_TEST_MSG_UNSET, + FM10K_TEST_MSG_STRING, + FM10K_TEST_MSG_MAC_ADDR, + FM10K_TEST_MSG_U8, + FM10K_TEST_MSG_U16, + FM10K_TEST_MSG_U32, + FM10K_TEST_MSG_U64, + FM10K_TEST_MSG_S8, + FM10K_TEST_MSG_S16, + FM10K_TEST_MSG_S32, + FM10K_TEST_MSG_S64, + FM10K_TEST_MSG_LE_STRUCT, + FM10K_TEST_MSG_NESTED, + FM10K_TEST_MSG_RESULT, + FM10K_TEST_MSG_MAX +}; + +extern const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[]; +void fm10k_tlv_msg_test_create(u32 *, u32); +s32 fm10k_tlv_msg_test(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); + +#define FM10K_TLV_MSG_TEST_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_TLV_MSG_ID_TEST, fm10k_tlv_msg_test_attr, func) +#define FM10K_TLV_MSG_ERROR_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_TLV_ERROR, NULL, func) +#endif /* _FM10K_MSG_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_type.h b/lib/librte_pmd_fm10k/base/fm10k_type.h new file mode 100644 index 0000000..534fab4 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_type.h @@ -0,0 +1,937 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. 
Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_TYPE_H_ +#define _FM10K_TYPE_H_ + +/* forward declaration */ +struct fm10k_hw; + +#include "fm10k_osdep.h" +#include "fm10k_mbx.h" + +#define FM10K_INTEL_VENDOR_ID 0x8086 +#define FM10K_DEV_ID_PF 0x15A4 +#define FM10K_DEV_ID_VF 0x15A5 + +#define FM10K_MAX_QUEUES 256 +#define FM10K_MAX_QUEUES_PF 128 +#define FM10K_MAX_QUEUES_POOL 16 + +#define FM10K_48_BIT_MASK 0x0000FFFFFFFFFFFFull +#define FM10K_STAT_VALID 0x80000000 + +/* PCI Bus Info */ +#define FM10K_PCIE_LINK_CAP 0x7C +#define FM10K_PCIE_LINK_STATUS 0x82 +#define FM10K_PCIE_LINK_WIDTH 0x3F0 +#define FM10K_PCIE_LINK_WIDTH_1 0x10 +#define FM10K_PCIE_LINK_WIDTH_2 0x20 +#define FM10K_PCIE_LINK_WIDTH_4 0x40 +#define FM10K_PCIE_LINK_WIDTH_8 0x80 +#define FM10K_PCIE_LINK_SPEED 0xF +#define FM10K_PCIE_LINK_SPEED_2500 0x1 +#define FM10K_PCIE_LINK_SPEED_5000 0x2 +#define FM10K_PCIE_LINK_SPEED_8000 0x3 + +/* PCIe payload size */ +#define FM10K_PCIE_DEV_CAP 0x74 +#define FM10K_PCIE_DEV_CAP_PAYLOAD 0x07 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_128 0x00 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_256 0x01 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_512 0x02 +#define FM10K_PCIE_DEV_CTRL 0x78 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD 0xE0 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_128 0x00 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_256 0x20 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_512 0x40 + +/* PCIe MSI-X Capability info */ +#define FM10K_PCI_MSIX_MSG_CTRL 0xB2 +#define FM10K_PCI_MSIX_MSG_CTRL_TBL_SZ_MASK 0x7FF +#define FM10K_MAX_MSIX_VECTORS 256 +#define FM10K_MAX_VECTORS_PF 256 +#define FM10K_MAX_VECTORS_POOL 32 + +/* PCIe SR-IOV Info */ +#define FM10K_PCIE_SRIOV_CTRL 0x190 +#define FM10K_PCIE_SRIOV_CTRL_VFARI 0x10 + +#define FM10K_SUCCESS 0 +#define FM10K_ERR_DEVICE_NOT_SUPPORTED -1 +#define FM10K_ERR_PARAM -2 +#define FM10K_ERR_NO_RESOURCES -3 +#define FM10K_ERR_REQUESTS_PENDING -4 +#define FM10K_ERR_RESET_REQUESTED -5 +#define FM10K_ERR_DMA_PENDING -6 +#define FM10K_ERR_RESET_FAILED -7 +#define FM10K_ERR_INVALID_MAC_ADDR -8 +#define FM10K_ERR_INVALID_VALUE -9 +#define FM10K_NOT_IMPLEMENTED 0x7FFFFFFF + +#define UNREFERENCED_XPARAMETER +#define UNREFERENCED_1PARAMETER(_p) (_p) +#define UNREFERENCED_2PARAMETER(_p, _q) do { (_p); (_q); } while (0) +#define UNREFERENCED_3PARAMETER(_p, _q, _r) do { (_p); (_q); (_r); } while (0) + +/* Start of 
PF registers */ +#define FM10K_CTRL 0x0000 +#define FM10K_CTRL_BAR4_ALLOWED 0x00000004 + +#define FM10K_CTRL_EXT 0x0001 +#define FM10K_CTRL_EXT_NS_DIS 0x00000001 +#define FM10K_CTRL_EXT_RO_DIS 0x00000002 +#define FM10K_CTRL_EXT_SWITCH_LOOPBACK 0x00000004 +#define FM10K_EXVET 0x0002 +#define FM10K_EXVET_ETHERTYPE_MASK 0x000000FF +#define FM10K_EXVET_TAG_SIZE_SHIFT 16 +#define FM10K_EXVET_AFTER_VLAN 0x00040000 +#define FM10K_GCR 0x0003 +#define FM10K_FACTPS 0x0004 +#define FM10K_GCR_EXT 0x0005 + +/* Interrupt control registers */ +#define FM10K_EICR 0x0006 +#define FM10K_EICR_PCA_FAULT 0x00000001 +#define FM10K_EICR_THI_FAULT 0x00000004 +#define FM10K_EICR_FUM_FAULT 0x00000020 +#define FM10K_EICR_FAULT_MASK 0x0000003F +#define FM10K_EICR_MAILBOX 0x00000040 +#define FM10K_EICR_SWITCHREADY 0x00000080 +#define FM10K_EICR_SWITCHNOTREADY 0x00000100 +#define FM10K_EICR_SWITCHINTERRUPT 0x00000200 +#define FM10K_EICR_SRAMERROR 0x00000400 +#define FM10K_EICR_VFLR 0x00000800 +#define FM10K_EICR_MAXHOLDTIME 0x00001000 +#define FM10K_EIMR 0x0007 +#define FM10K_EIMR_PCA_FAULT 0x00000001 +#define FM10K_EIMR_THI_FAULT 0x00000010 +#define FM10K_EIMR_FUM_FAULT 0x00000400 +#define FM10K_EIMR_MAILBOX 0x00001000 +#define FM10K_EIMR_SWITCHREADY 0x00004000 +#define FM10K_EIMR_SWITCHNOTREADY 0x00010000 +#define FM10K_EIMR_SWITCHINTERRUPT 0x00040000 +#define FM10K_EIMR_SRAMERROR 0x00100000 +#define FM10K_EIMR_VFLR 0x00400000 +#define FM10K_EIMR_MAXHOLDTIME 0x01000000 +#define FM10K_EIMR_ALL 0x55555555 +#define FM10K_EIMR_DISABLE(NAME) ((FM10K_EIMR_ ## NAME) << 0) +#define FM10K_EIMR_ENABLE(NAME) ((FM10K_EIMR_ ## NAME) << 1) +#define FM10K_FAULT_ADDR_LO 0x0 +#define FM10K_FAULT_ADDR_HI 0x1 +#define FM10K_FAULT_SPECINFO 0x2 +#define FM10K_FAULT_FUNC 0x3 +#define FM10K_FAULT_SIZE 0x4 +#define FM10K_FAULT_FUNC_VALID 0x00008000 +#define FM10K_FAULT_FUNC_PF 0x00004000 +#define FM10K_FAULT_FUNC_VF_MASK 0x00003F00 +#define FM10K_FAULT_FUNC_VF_SHIFT 8 +#define FM10K_FAULT_FUNC_TYPE_MASK 0x000000FF + +#define FM10K_PCA_FAULT 0x0008 +#define FM10K_THI_FAULT 0x0010 +#define FM10K_FUM_FAULT 0x001C + +/* Rx queue timeout indicator */ +#define FM10K_MAXHOLDQ(_n) ((_n) + 0x0020) + +/* Switch Manager info */ +#define FM10K_SM_AREA(_n) ((_n) + 0x0028) + +/* GLORT mapping registers */ +#define FM10K_DGLORTMAP(_n) ((_n) + 0x0030) +#define FM10K_DGLORT_COUNT 8 +#define FM10K_DGLORTMAP_MASK_SHIFT 16 +#define FM10K_DGLORTMAP_ANY 0x00000000 +#define FM10K_DGLORTMAP_NONE 0x0000FFFF +#define FM10K_DGLORTMAP_ZERO 0xFFFF0000 +#define FM10K_DGLORTDEC(_n) ((_n) + 0x0038) +#define FM10K_DGLORTDEC_VSILENGTH_SHIFT 4 +#define FM10K_DGLORTDEC_VSIBASE_SHIFT 7 +#define FM10K_DGLORTDEC_PCLENGTH_SHIFT 14 +#define FM10K_DGLORTDEC_QBASE_SHIFT 16 +#define FM10K_DGLORTDEC_RSSLENGTH_SHIFT 24 +#define FM10K_DGLORTDEC_INNERRSS_ENABLE 0x08000000 +#define FM10K_TUNNEL_CFG 0x0040 +#define FM10K_TUNNEL_CFG_NVGRE_SHIFT 16 +#define FM10K_TUNNEL_CFG_GENEVE 0x0041 +#define FM10K_SWPRI_MAP(_n) ((_n) + 0x0050) +#define FM10K_SWPRI_MAX 16 +#define FM10K_RSSRK(_n, _m) (((_n) * 0x10) + (_m) + 0x0800) +#define FM10K_RSSRK_SIZE 10 +#define FM10K_RSSRK_ENTRIES_PER_REG 4 +#define FM10K_RETA(_n, _m) (((_n) * 0x20) + (_m) + 0x1000) +#define FM10K_RETA_SIZE 32 +#define FM10K_RETA_ENTRIES_PER_REG 4 +#define FM10K_MAX_RSS_INDICES 128 + +/* Rate limiting registers */ +#define FM10K_TC_CREDIT(_n) ((_n) + 0x2000) +#define FM10K_TC_CREDIT_CREDIT_MASK 0x001FFFFF +#define FM10K_TC_MAXCREDIT(_n) ((_n) + 0x2040) +#define FM10K_TC_MAXCREDIT_64K 0x00010000 +#define FM10K_TC_RATE(_n) ((_n) + 
0x2080) +#define FM10K_TC_RATE_QUANTA_MASK 0x0000FFFF +#define FM10K_TC_RATE_INTERVAL_4US_GEN1 0x00020000 +#define FM10K_TC_RATE_INTERVAL_4US_GEN2 0x00040000 +#define FM10K_TC_RATE_INTERVAL_4US_GEN3 0x00080000 +#define FM10K_TC_RATE_STATUS 0x20C0 +#define FM10K_PAUSE 0x20C2 + +/* DMA control registers */ +#define FM10K_DMA_CTRL 0x20C3 +#define FM10K_DMA_CTRL_TX_ENABLE 0x00000001 +#define FM10K_DMA_CTRL_TX_HOST_PENDING 0x00000002 +#define FM10K_DMA_CTRL_TX_DATA 0x00000004 +#define FM10K_DMA_CTRL_TX_ACTIVE 0x00000008 +#define FM10K_DMA_CTRL_RX_ENABLE 0x00000010 +#define FM10K_DMA_CTRL_RX_HOST_PENDING 0x00000020 +#define FM10K_DMA_CTRL_RX_DATA 0x00000040 +#define FM10K_DMA_CTRL_RX_ACTIVE 0x00000080 +#define FM10K_DMA_CTRL_RX_DESC_SIZE 0x00000100 +#define FM10K_DMA_CTRL_MINMSS_SHIFT 9 +#define FM10K_DMA_CTRL_MINMSS_64 0x00008000 +#define FM10K_DMA_CTRL_MAX_HOLD_TIME_SHIFT 23 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN3 0x04800000 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN2 0x04000000 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN1 0x03800000 +#define FM10K_DMA_CTRL_DATAPATH_RESET 0x20000000 +#define FM10K_DMA_CTRL_MAXNUMOFQ_MASK 0xC0000000 +#define FM10K_DMA_CTRL_32_DESC 0x00000000 +#define FM10K_DMA_CTRL_64_DESC 0x40000000 +#define FM10K_DMA_CTRL_128_DESC 0x80000000 + +#define FM10K_DMA_CTRL2 0x20C4 +#define FM10K_DMA_CTRL2_TX_FRAME_SPACING_SHIFT 5 +#define FM10K_DMA_CTRL2_SWITCH_READY 0x00002000 +#define FM10K_DMA_CTRL2_RX_DESC_READ_PRIO_SHIFT 14 +#define FM10K_DMA_CTRL2_TX_DESC_READ_PRIO_SHIFT 17 +#define FM10K_DMA_CTRL2_TX_DATA_READ_PRIO_SHIFT 20 + +/* TSO flags configuration + * First packet contains all flags except for fin and psh + * Middle packet contains only urg and ack + * Last packet contains urg, ack, fin, and psh + */ +#define FM10K_TSO_FLAGS_LOW 0x00300FF6 +#define FM10K_TSO_FLAGS_HI 0x00000039 +#define FM10K_DTXTCPFLGL 0x20C5 +#define FM10K_DTXTCPFLGH 0x20C6 + +#define FM10K_TPH_CTRL 0x20C7 +#define FM10K_TPH_CTRL_DISABLE_READ_HINT 0x00000080 +#define FM10K_MRQC(_n) ((_n) + 0x2100) +#define FM10K_MRQC_TCP_IPV4 0x00000001 +#define FM10K_MRQC_IPV4 0x00000002 +#define FM10K_MRQC_IPV6 0x00000010 +#define FM10K_MRQC_TCP_IPV6 0x00000020 +#define FM10K_MRQC_UDP_IPV4 0x00000040 +#define FM10K_MRQC_UDP_IPV6 0x00000080 + +#define FM10K_TQMAP(_n) ((_n) + 0x2800) +#define FM10K_TQMAP_TABLE_SIZE 2048 +#define FM10K_RQMAP(_n) ((_n) + 0x3000) +#define FM10K_RQMAP_TABLE_SIZE 2048 + +/* Hardware Statistics */ +#define FM10K_STATS_TIMEOUT 0x3800 +#define FM10K_STATS_UR 0x3801 +#define FM10K_STATS_CA 0x3802 +#define FM10K_STATS_UM 0x3803 +#define FM10K_STATS_XEC 0x3804 +#define FM10K_STATS_VLAN_DROP 0x3805 +#define FM10K_STATS_LOOPBACK_DROP 0x3806 +#define FM10K_STATS_NODESC_DROP 0x3807 + +/* Timesync registers */ +#define FM10K_RRTIME_CFG 0x3808 +#define FM10K_RRTIME_LIMIT(_n) ((_n) + 0x380C) +#define FM10K_RRTIME_COUNT(_n) ((_n) + 0x3810) +#define FM10K_SYSTIME 0x3814 +#define FM10K_SYSTIME0 0x3816 +#define FM10K_SYSTIME_CFG 0x3818 +#define FM10K_SYSTIME_CFG_STEP_MASK 0x0000000F + +/* PCIe state registers */ +#define FM10K_PFVFBME(_n) ((_n) + 0x381A) +#define FM10K_PHYADDR 0x381C + +/* Rx ring registers */ +#define FM10K_RDBAL(_n) ((0x40 * (_n)) + 0x4000) +#define FM10K_RDBAH(_n) ((0x40 * (_n)) + 0x4001) +#define FM10K_RDLEN(_n) ((0x40 * (_n)) + 0x4002) +#define FM10K_TPH_RXCTRL(_n) ((0x40 * (_n)) + 0x4003) +#define FM10K_TPH_RXCTRL_DESC_TPHEN 0x00000020 +#define FM10K_TPH_RXCTRL_HDR_TPHEN 0x00000040 +#define FM10K_TPH_RXCTRL_DATA_TPHEN 0x00000080 +#define FM10K_TPH_RXCTRL_DESC_RROEN 0x00000200 +#define 
FM10K_TPH_RXCTRL_DATA_WROEN 0x00002000 +#define FM10K_TPH_RXCTRL_HDR_WROEN 0x00008000 +#define FM10K_RDH(_n) ((0x40 * (_n)) + 0x4004) +#define FM10K_RDT(_n) ((0x40 * (_n)) + 0x4005) +#define FM10K_RXQCTL(_n) ((0x40 * (_n)) + 0x4006) +#define FM10K_RXQCTL_ENABLE 0x00000001 +#define FM10K_RXQCTL_PF 0x000000FC +#define FM10K_RXQCTL_VF_SHIFT 2 +#define FM10K_RXQCTL_VF 0x00000100 +#define FM10K_RXQCTL_ID_MASK (FM10K_RXQCTL_PF | FM10K_RXQCTL_VF) +#define FM10K_RXDCTL(_n) ((0x40 * (_n)) + 0x4007) +#define FM10K_RXDCTL_WRITE_BACK_MIN_DELAY 0x00000001 +#define FM10K_RXDCTL_WRITE_BACK_IMM 0x00000100 +#define FM10K_RXDCTL_DROP_ON_EMPTY 0x00000200 +#define FM10K_RXINT(_n) ((0x40 * (_n)) + 0x4008) +#define FM10K_RXINT_TIMER_SHIFT 8 +#define FM10K_SRRCTL(_n) ((0x40 * (_n)) + 0x4009) +#define FM10K_SRRCTL_BSIZEPKT_SHIFT 8 /* shift _right_ */ +#define FM10K_SRRCTL_BSIZEHDR_SHIFT 2 /* shift _left_ */ +#define FM10K_SRRCTL_BSIZEHDR_MASK 0x00003F00 +#define FM10K_SRRCTL_DESCTYPE_HDR_SPLIT 0x00004000 +#define FM10K_SRRCTL_DESCTYPE_SIZE_SPLIT 0x00008000 +#define FM10K_SRRCTL_PSRTYPE_INNER_TCPHDR 0x00010000 +#define FM10K_SRRCTL_PSRTYPE_INNER_UDPHDR 0x00020000 +#define FM10K_SRRCTL_PSRTYPE_INNER_IPV4HDR 0x00040000 +#define FM10K_SRRCTL_PSRTYPE_INNER_IPV6HDR 0x00080000 +#define FM10K_SRRCTL_PSRTYPE_INNER_L2HDR 0x00100000 +#define FM10K_SRRCTL_PSRTYPE_ENCAPHDR 0x00200000 +#define FM10K_SRRCTL_PSRTYPE_TCPHDR 0x00400000 +#define FM10K_SRRCTL_PSRTYPE_UDPHDR 0x00800000 +#define FM10K_SRRCTL_PSRTYPE_IPV4HDR 0x01000000 +#define FM10K_SRRCTL_PSRTYPE_IPV6HDR 0x02000000 +#define FM10K_SRRCTL_PSRTYPE_L2HDR 0x04000000 +#define FM10K_SRRCTL_LOOPBACK_SUPPRESS 0x40000000 +#define FM10K_SRRCTL_BUFFER_CHAINING_EN 0x80000000 + +/* Rx Statistics */ +#define FM10K_QPRC(_n) ((0x40 * (_n)) + 0x400A) +#define FM10K_QPRDC(_n) ((0x40 * (_n)) + 0x400B) +#define FM10K_QBRC_L(_n) ((0x40 * (_n)) + 0x400C) +#define FM10K_QBRC_H(_n) ((0x40 * (_n)) + 0x400D) + +/* Rx GLORT register */ +#define FM10K_RX_SGLORT(_n) ((0x40 * (_n)) + 0x400E) + +/* Tx ring registers */ +#define FM10K_TDBAL(_n) ((0x40 * (_n)) + 0x8000) +#define FM10K_TDBAH(_n) ((0x40 * (_n)) + 0x8001) +#define FM10K_TDLEN(_n) ((0x40 * (_n)) + 0x8002) +#define FM10K_TPH_TXCTRL(_n) ((0x40 * (_n)) + 0x8003) +#define FM10K_TPH_TXCTRL_DESC_TPHEN 0x00000020 +#define FM10K_TPH_TXCTRL_DESC_RROEN 0x00000200 +#define FM10K_TPH_TXCTRL_DESC_WROEN 0x00000800 +#define FM10K_TPH_TXCTRL_DATA_RROEN 0x00002000 +#define FM10K_TDH(_n) ((0x40 * (_n)) + 0x8004) +#define FM10K_TDT(_n) ((0x40 * (_n)) + 0x8005) +#define FM10K_TXDCTL(_n) ((0x40 * (_n)) + 0x8006) +#define FM10K_TXDCTL_ENABLE 0x00004000 +#define FM10K_TXDCTL_MAX_TIME_SHIFT 16 +#define FM10K_TXDCTL_PUSH_DESC 0x10000000 +#define FM10K_TXQCTL(_n) ((0x40 * (_n)) + 0x8007) +#define FM10K_TXQCTL_PF 0x0000003F +#define FM10K_TXQCTL_VF 0x00000040 +#define FM10K_TXQCTL_ID_MASK (FM10K_TXQCTL_PF | FM10K_TXQCTL_VF) +#define FM10K_TXQCTL_PC_SHIFT 7 +#define FM10K_TXQCTL_PC_MASK 0x00000380 +#define FM10K_TXQCTL_TC_SHIFT 10 +#define FM10K_TXQCTL_TC_MASK 0x0000FC00 +#define FM10K_TXQCTL_VID_SHIFT 16 +#define FM10K_TXQCTL_VID_MASK 0x0FFF0000 +#define FM10K_TXQCTL_UNLIMITED_BW 0x10000000 +#define FM10K_TXQCTL_PUSHMODEDIS 0x20000000 +#define FM10K_TXINT(_n) ((0x40 * (_n)) + 0x8008) +#define FM10K_TXINT_TIMER_SHIFT 8 + +/* Tx Statistics */ +#define FM10K_QPTC(_n) ((0x40 * (_n)) + 0x8009) +#define FM10K_QBTC_L(_n) ((0x40 * (_n)) + 0x800A) +#define FM10K_QBTC_H(_n) ((0x40 * (_n)) + 0x800B) + +/* Tx Push registers */ +#define FM10K_TQDLOC(_n) ((0x40 * (_n)) + 
0x800C) +#define FM10K_TQDLOC_BASE_32_DESC 0x08 +#define FM10K_TQDLOC_BASE_64_DESC 0x10 +#define FM10K_TQDLOC_BASE_128_DESC 0x20 +#define FM10K_TQDLOC_SIZE_32_DESC 0x00050000 +#define FM10K_TQDLOC_SIZE_64_DESC 0x00060000 +#define FM10K_TQDLOC_SIZE_128_DESC 0x00070000 +#define FM10K_TQDLOC_SIZE_SHIFT 16 +#define FM10K_TX_DCACHE(_n, _m) ((0x400 * (_n)) + (0x4 * (_m)) + 0x40000) + +/* Tx GLORT registers */ +#define FM10K_TX_SGLORT(_n) ((0x40 * (_n)) + 0x800D) +#define FM10K_PFVTCTL(_n) ((0x40 * (_n)) + 0x800E) +#define FM10K_PFVTCTL_FTAG_DESC_ENABLE 0x00000001 + +/* Interrupt moderation and control registers */ +#define FM10K_PBACL(_n) ((_n) + 0x10000) +#define FM10K_INT_MAP(_n) ((_n) + 0x10080) +#define FM10K_INT_MAP_TIMER0 0x00000000 +#define FM10K_INT_MAP_TIMER1 0x00000100 +#define FM10K_INT_MAP_IMMEDIATE 0x00000200 +#define FM10K_INT_MAP_DISABLE 0x00000300 +#define FM10K_MSIX_VECTOR_ADDR_LO(_n) ((0x4 * (_n)) + 0x11000) +#define FM10K_MSIX_VECTOR_ADDR_HI(_n) ((0x4 * (_n)) + 0x11001) +#define FM10K_MSIX_VECTOR_DATA(_n) ((0x4 * (_n)) + 0x11002) +#define FM10K_MSIX_VECTOR_MASK(_n) ((0x4 * (_n)) + 0x11003) +#define FM10K_INT_CTRL 0x12000 +#define FM10K_INT_CTRL_ENABLEMODERATOR 0x00000400 +#define FM10K_ITR(_n) ((_n) + 0x12400) +#define FM10K_ITR_INTERVAL1_SHIFT 12 +#define FM10K_ITR_TIMER0_EXPIRED 0x01000000 +#define FM10K_ITR_TIMER1_EXPIRED 0x02000000 +#define FM10K_ITR_PENDING0 0x04000000 +#define FM10K_ITR_PENDING1 0x08000000 +#define FM10K_ITR_PENDING2 0x10000000 +#define FM10K_ITR_AUTOMASK 0x20000000 +#define FM10K_ITR_MASK_SET 0x40000000 +#define FM10K_ITR_MASK_CLEAR 0x80000000 +#define FM10K_ITR2(_n) ((0x2 * (_n)) + 0x12800) +#define FM10K_ITR2_LP(_n) ((0x2 * (_n)) + 0x12801) +#define FM10K_ITR_REG_COUNT 768 +#define FM10K_ITR_REG_COUNT_PF 256 + +/* Switch manager interrupt registers */ +#define FM10K_IP 0x13000 +#define FM10K_IP_HOT_RESET 0x00000001 +#define FM10K_IP_DEVICE_STATE_CHANGE 0x00000002 +#define FM10K_IP_MAILBOX 0x00000004 +#define FM10K_IP_VPD_REQUEST 0x00000008 +#define FM10K_IP_SRAMERROR 0x00000010 +#define FM10K_IP_PFLR 0x00000020 +#define FM10K_IP_DATAPATHRESET 0x00000040 +#define FM10K_IP_OUTOFRESET 0x00000080 +#define FM10K_IP_NOTINRESET 0x00000100 +#define FM10K_IP_TIMEOUT 0x00000200 +#define FM10K_IP_VFLR 0x00000400 +#define FM10K_IM 0x13001 +#define FM10K_IB 0x13002 +#define FM10K_SRAM_IP 0x13003 +#define FM10K_SRAM_IM 0x13004 + +/* VLAN registers */ +#define FM10K_VLAN_TABLE(_n, _m) ((0x80 * (_n)) + (_m) + 0x14000) +#define FM10K_VLAN_TABLE_SIZE 128 + +/* VLAN specific message offsets */ +#define FM10K_VLAN_TABLE_VID_MAX 4096 +#define FM10K_VLAN_TABLE_VSI_MAX 64 +#define FM10K_VLAN_LENGTH_SHIFT 16 +#define FM10K_VLAN_CLEAR (1 << 15) +#define FM10K_VLAN_ALL \ + ((FM10K_VLAN_TABLE_VID_MAX - 1) << FM10K_VLAN_LENGTH_SHIFT) + +/* VF FLR event notification registers */ +#define FM10K_PFVFLRE(_n) ((0x1 * (_n)) + 0x18844) +#define FM10K_PFVFLREC(_n) ((0x1 * (_n)) + 0x18846) + +/* Defines for size of uncacheable and write-combining memories */ +#define FM10K_UC_ADDR_START 0x000000 /* start of standard regs */ +#define FM10K_WC_ADDR_START 0x100000 /* start of Tx Desc Cache */ +#define FM10K_DBI_ADDR_START 0x200000 /* start of debug registers */ +#define FM10K_UC_ADDR_SIZE (FM10K_WC_ADDR_START - FM10K_UC_ADDR_START) +#define FM10K_WC_ADDR_SIZE (FM10K_DBI_ADDR_START - FM10K_WC_ADDR_START) + +/* Define timeouts for resets and disables */ +#define FM10K_QUEUE_DISABLE_TIMEOUT 100 +#define FM10K_RESET_TIMEOUT 150 + +/* Maximum supported combined inner and outer header length for 
encapsulation */ +#define FM10K_TUNNEL_HEADER_LENGTH 184 + +/* VF registers */ +#define FM10K_VFCTRL 0x00000 +#define FM10K_VFCTRL_RST 0x00000008 +#define FM10K_VFINT_MAP 0x00030 +#define FM10K_VFSYSTIME 0x00040 +#define FM10K_VFITR(_n) ((_n) + 0x00060) +#define FM10K_VFPBACL(_n) ((_n) + 0x00008) + +/* Registers contained in BAR 4 for Switch management */ +#define FM10K_SW_SYSTIME_CFG 0x0224C +#define FM10K_SW_SYSTIME_CFG_STEP_SHIFT 4 +#define FM10K_SW_SYSTIME_CFG_ADJUST_MASK 0xFF000000 +#define FM10K_SW_SYSTIME_ADJUST 0x0224D +#define FM10K_SW_SYSTIME_ADJUST_MASK 0x3FFFFFFF +#define FM10K_SW_SYSTIME_ADJUST_DIR_NEGATIVE 0x80000000 +#define FM10K_SW_SYSTIME_PULSE(_n) ((_n) + 0x02252) + +#ifndef ETH_ALEN +#define ETH_ALEN 6 +#endif /* ETH_ALEN */ + + + + +enum fm10k_int_source { + fm10k_int_Mailbox = 0, + fm10k_int_PCIeFault = 1, + fm10k_int_SwitchUpDown = 2, + fm10k_int_SwitchEvent = 3, + fm10k_int_SRAM = 4, + fm10k_int_VFLR = 5, + fm10k_int_MaxHoldTime = 6, + fm10k_int_sources_max_pf +}; + +/* PCIe bus speeds */ +enum fm10k_bus_speed { + fm10k_bus_speed_unknown = 0, + fm10k_bus_speed_2500 = 2500, + fm10k_bus_speed_5000 = 5000, + fm10k_bus_speed_8000 = 8000, + fm10k_bus_speed_reserved +}; + +/* PCIe bus widths */ +enum fm10k_bus_width { + fm10k_bus_width_unknown = 0, + fm10k_bus_width_pcie_x1 = 1, + fm10k_bus_width_pcie_x2 = 2, + fm10k_bus_width_pcie_x4 = 4, + fm10k_bus_width_pcie_x8 = 8, + fm10k_bus_width_reserved +}; + +/* PCIe payload sizes */ +enum fm10k_bus_payload { + fm10k_bus_payload_unknown = 0, + fm10k_bus_payload_128 = 1, + fm10k_bus_payload_256 = 2, + fm10k_bus_payload_512 = 3, + fm10k_bus_payload_reserved +}; + +/* Bus parameters */ +struct fm10k_bus_info { + enum fm10k_bus_speed speed; + enum fm10k_bus_width width; + enum fm10k_bus_payload payload; +}; + +/* Statistics related declarations */ +struct fm10k_hw_stat { + u64 count; + u32 base_l; + u32 base_h; +}; + +struct fm10k_hw_stats_q { + struct fm10k_hw_stat tx_bytes; + struct fm10k_hw_stat tx_packets; +#define tx_stats_idx tx_packets.base_h + struct fm10k_hw_stat rx_bytes; + struct fm10k_hw_stat rx_packets; +#define rx_stats_idx rx_packets.base_h + struct fm10k_hw_stat rx_drops; +}; + +struct fm10k_hw_stats { + struct fm10k_hw_stat timeout; +#define stats_idx timeout.base_h + struct fm10k_hw_stat ur; + struct fm10k_hw_stat ca; + struct fm10k_hw_stat um; + struct fm10k_hw_stat xec; + struct fm10k_hw_stat vlan_drop; + struct fm10k_hw_stat loopback_drop; + struct fm10k_hw_stat nodesc_drop; + struct fm10k_hw_stats_q q[FM10K_MAX_QUEUES_PF]; +}; + +/* Establish DGLORT feature priority */ +enum fm10k_dglortdec_idx { + fm10k_dglort_default = 0, + fm10k_dglort_vf_rsvd0 = 1, + fm10k_dglort_vf_rss = 2, + fm10k_dglort_pf_rsvd0 = 3, + fm10k_dglort_pf_queue = 4, + fm10k_dglort_pf_vsi = 5, + fm10k_dglort_pf_rsvd1 = 6, + fm10k_dglort_pf_rss = 7 +}; + +struct fm10k_dglort_cfg { + u16 glort; /* GLORT base */ + u16 queue_b; /* Base value for queue */ + u8 vsi_b; /* Base value for VSI */ + u8 idx; /* index of DGLORTDEC entry */ + u8 rss_l; /* RSS indices */ + u8 pc_l; /* Priority Class indices */ + u8 vsi_l; /* Number of bits from GLORT used to determine VSI */ + u8 queue_l; /* Number of bits from GLORT used to determine queue */ + u8 shared_l; /* Ignored bits from GLORT resulting in shared VSI */ + u8 inner_rss; /* Boolean value if inner header is used for RSS */ +}; + +enum fm10k_pca_fault { + PCA_NO_FAULT, + PCA_UNMAPPED_ADDR, + PCA_BAD_QACCESS_PF, + PCA_BAD_QACCESS_VF, + PCA_MALICIOUS_REQ, + PCA_POISONED_TLP, + PCA_TLP_ABORT, + __PCA_MAX 
+}; + +enum fm10k_thi_fault { + THI_NO_FAULT, + THI_MAL_DIS_Q_FAULT, + __THI_MAX +}; + +enum fm10k_fum_fault { + FUM_NO_FAULT, + FUM_UNMAPPED_ADDR, + FUM_POISONED_TLP, + FUM_BAD_VF_QACCESS, + FUM_ADD_DECODE_ERR, + FUM_RO_ERROR, + FUM_QPRC_CRC_ERROR, + FUM_CSR_TIMEOUT, + FUM_INVALID_TYPE, + FUM_INVALID_LENGTH, + FUM_INVALID_BE, + FUM_INVALID_ALIGN, + __FUM_MAX +}; + +struct fm10k_fault { + u64 address; /* Address at the time fault was detected */ + u32 specinfo; /* Extra info on this fault (fault dependent) */ + u8 type; /* Fault value dependent on subunit */ + u8 func; /* Function number of the fault */ +}; + +struct fm10k_mac_ops { + /* basic bring-up and tear-down */ + s32 (*reset_hw)(struct fm10k_hw *); + s32 (*init_hw)(struct fm10k_hw *); + s32 (*start_hw)(struct fm10k_hw *); + s32 (*stop_hw)(struct fm10k_hw *); + s32 (*get_bus_info)(struct fm10k_hw *); + s32 (*get_host_state)(struct fm10k_hw *, bool *); + bool (*is_slot_appropriate)(struct fm10k_hw *); + s32 (*update_vlan)(struct fm10k_hw *, u32, u8, bool); + s32 (*read_mac_addr)(struct fm10k_hw *); + s32 (*update_uc_addr)(struct fm10k_hw *, u16, const u8 *, + u16, bool, u8); + s32 (*update_mc_addr)(struct fm10k_hw *, u16, const u8 *, u16, bool); + s32 (*update_xcast_mode)(struct fm10k_hw *, u16, u8); + void (*update_int_moderator)(struct fm10k_hw *); + s32 (*update_lport_state)(struct fm10k_hw *, u16, u16, bool); + void (*update_hw_stats)(struct fm10k_hw *, struct fm10k_hw_stats *); + void (*rebind_hw_stats)(struct fm10k_hw *, struct fm10k_hw_stats *); + s32 (*configure_dglort_map)(struct fm10k_hw *, + struct fm10k_dglort_cfg *); + void (*set_dma_mask)(struct fm10k_hw *, u64); + s32 (*get_fault)(struct fm10k_hw *, int, struct fm10k_fault *); + void (*request_lport_map)(struct fm10k_hw *); + s32 (*adjust_systime)(struct fm10k_hw *, s32 ppb); + u64 (*read_systime)(struct fm10k_hw *); + s32 (*request_tx_timestamp_mode)(struct fm10k_hw *, u16, u8); +}; + +enum fm10k_mac_type { + fm10k_mac_unknown = 0, + fm10k_mac_pf, + fm10k_mac_vf, + fm10k_num_macs +}; + +struct fm10k_mac_info { + struct fm10k_mac_ops ops; + enum fm10k_mac_type type; + u8 addr[ETH_ALEN]; + u8 perm_addr[ETH_ALEN]; + u16 default_vid; + u16 max_msix_vectors; + u16 max_queues; + bool vlan_override; + bool get_host_state; + bool tx_ready; + u32 dglort_map; +}; + +struct fm10k_swapi_table_info { + u32 used; + u32 avail; +}; + +struct fm10k_swapi_info { + u32 status; + struct fm10k_swapi_table_info mac; + struct fm10k_swapi_table_info nexthop; + struct fm10k_swapi_table_info ffu; +}; + +enum fm10k_xcast_modes { + FM10K_XCAST_MODE_ALLMULTI = 0, + FM10K_XCAST_MODE_MULTI = 1, + FM10K_XCAST_MODE_PROMISC = 2, + FM10K_XCAST_MODE_NONE = 3, + FM10K_XCAST_MODE_DISABLE = 4 +}; + +enum fm10k_timestamp_modes { + FM10K_TIMESTAMP_MODE_NONE = 0, + FM10K_TIMESTAMP_MODE_PEP_TO_PEP = 1, + FM10K_TIMESTAMP_MODE_PEP_TO_ANY = 2, +}; + +#define FM10K_VF_TC_MAX 100000 /* 100,000 Mb/s aka 100Gb/s */ +#define FM10K_VF_TC_MIN 1 /* 1 Mb/s is the slowest rate */ + +struct fm10k_vf_info { + /* mbx must be first field in struct unless all default IOV message + * handlers are redone as the assumption is that vf_info starts + * at the same offset as the mailbox + */ + struct fm10k_mbx_info mbx; /* PF side of VF mailbox */ + int rate; /* Tx BW cap as defined by OS */ + u16 glort; /* resource tag for this VF */ + u16 sw_vid; /* Switch API assigned VLAN */ + u16 pf_vid; /* PF assigned Default VLAN */ + u8 mac[ETH_ALEN]; /* PF Default MAC address */ + u8 vsi; /* VSI identifier */ + u8 vf_idx; /* which VF this is 
*/ + u8 vf_flags; /* flags indicating what modes + * are supported for the port + */ +}; + +#define FM10K_VF_FLAG_ALLMULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_ALLMULTI) +#define FM10K_VF_FLAG_MULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_MULTI) +#define FM10K_VF_FLAG_PROMISC_CAPABLE ((u8)1 << FM10K_XCAST_MODE_PROMISC) +#define FM10K_VF_FLAG_NONE_CAPABLE ((u8)1 << FM10K_XCAST_MODE_NONE) +#define FM10K_VF_FLAG_CAPABLE(vf_info) ((vf_info)->vf_flags & (u8)0xF) +#define FM10K_VF_FLAG_ENABLED(vf_info) ((vf_info)->vf_flags >> 4) +#define FM10K_VF_FLAG_SET_MODE(mode) ((u8)0x10 << (mode)) +#define FM10K_VF_FLAG_ENABLED_MODE_SHIFT 4 +#define FM10K_VF_FLAG_SET_MODE_MASK ((u8)0xF0) +#define FM10K_VF_FLAG_SET_MODE_NONE \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_NONE) +#define FM10K_VF_FLAG_MULTI_ENABLED \ + (FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_ALLMULTI) | \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_MULTI) | \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_PROMISC)) + +struct fm10k_iov_ops { + /* IOV related bring-up and tear-down */ + s32 (*assign_resources)(struct fm10k_hw *, u16, u16); + s32 (*configure_tc)(struct fm10k_hw *, u16, int); + s32 (*assign_int_moderator)(struct fm10k_hw *, u16); + s32 (*assign_default_mac_vlan)(struct fm10k_hw *, + struct fm10k_vf_info *); + s32 (*reset_resources)(struct fm10k_hw *, + struct fm10k_vf_info *); + s32 (*set_lport)(struct fm10k_hw *, struct fm10k_vf_info *, u16, u8); + void (*reset_lport)(struct fm10k_hw *, struct fm10k_vf_info *); + void (*update_stats)(struct fm10k_hw *, struct fm10k_hw_stats_q *, u16); + s32 (*report_timestamp)(struct fm10k_hw *, struct fm10k_vf_info *, u64); +}; + +struct fm10k_iov_info { + struct fm10k_iov_ops ops; + u16 total_vfs; + u16 num_vfs; + u16 num_pools; +}; + +struct fm10k_hw { + u32 *hw_addr; + u32 *sw_addr; + void *back; + struct fm10k_mac_info mac; + struct fm10k_bus_info bus; + struct fm10k_bus_info bus_caps; + struct fm10k_iov_info iov; + struct fm10k_mbx_info mbx; + struct fm10k_swapi_info swapi; + u16 device_id; + u16 vendor_id; + u16 subsystem_device_id; + u16 subsystem_vendor_id; + u8 revision_id; +}; + +/* Number of Transmit and Receive Descriptors must be a multiple of 8 */ +#define FM10K_REQ_TX_DESCRIPTOR_MULTIPLE 8 +#define FM10K_REQ_RX_DESCRIPTOR_MULTIPLE 8 + +/* Transmit Descriptor */ +struct fm10k_tx_desc { + __le64 buffer_addr; /* Address of the descriptor's data buffer */ + __le16 buflen; /* Length of data to be DMAed */ + __le16 vlan; /* VLAN_ID and VPRI to be inserted in FTAG */ + __le16 mss; /* MSS for segmentation offload */ + u8 hdrlen; /* Header size for segmentation offload */ + u8 flags; /* Status and offload request flags */ +}; + +/* Transmit Descriptor Cache Structure */ +struct fm10k_tx_desc_cache { + struct fm10k_tx_desc tx_desc[256]; +}; + +#define FM10K_TXD_FLAG_INT 0x01 +#define FM10K_TXD_FLAG_TIME 0x02 +#define FM10K_TXD_FLAG_CSUM 0x04 +#define FM10K_TXD_FLAG_CSUM2 0x08 +#define FM10K_TXD_FLAG_FTAG 0x10 +#define FM10K_TXD_FLAG_RS 0x20 +#define FM10K_TXD_FLAG_LAST 0x40 +#define FM10K_TXD_FLAG_DONE 0x80 + +#define FM10K_TXD_VLAN_PRI_SHIFT 12 + +/* These macros are meant to enable optimal placement of the RS and INT + * bits. It will point us to the last descriptor in the cache for either the + * start of the packet, or the end of the packet. If the index is actually + * at the start of the FIFO it will point to the offset for the last index + * in the FIFO to prevent an unnecessary write. 
+ */ +#define FM10K_TXD_WB_FIFO_SIZE 4 +#define FM10K_TXD_WB_IDX(idx) \ + (((idx) - 1) | (FM10K_TXD_WB_FIFO_SIZE - 1)) + +/* Receive Descriptor - 32B */ +union fm10k_rx_desc { + struct { + __le64 pkt_addr; /* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ + __le64 reserved; /* Empty space, RSS hash */ + __le64 timestamp; + } q; /* Read, Writeback, 64b quad-words */ + struct { + __le32 data; /* RSS and header data */ + __le32 rss; /* RSS Hash */ + __le32 staterr; + __le32 vlan_len; + __le32 glort; /* sglort/dglort */ + } d; /* Writeback, 32b double-words */ + struct { + __le16 pkt_info; /* RSS, Pkt type */ + __le16 hdr_info; /* Splithdr, hdrlen, xC */ + __le16 rss_lower; + __le16 rss_upper; + __le16 status; /* status/error */ + __le16 csum_err; /* checksum or extended error value */ + __le16 length; /* Packet length */ + __le16 vlan; /* VLAN tag */ + __le16 dglort; + __le16 sglort; + } w; /* Writeback, 16b words */ +}; + +#define FM10K_RXD_RSSTYPE_MASK 0x000F +enum fm10k_rdesc_rss_type { + FM10K_RSSTYPE_NONE = 0x0, + FM10K_RSSTYPE_IPV4_TCP = 0x1, + FM10K_RSSTYPE_IPV4 = 0x2, + FM10K_RSSTYPE_IPV6_TCP = 0x3, + /* Reserved 0x4 */ + FM10K_RSSTYPE_IPV6 = 0x5, + /* Reserved 0x6 */ + FM10K_RSSTYPE_IPV4_UDP = 0x7, + FM10K_RSSTYPE_IPV6_UDP = 0x8 + /* Reserved 0x9 - 0xF */ +}; + +#define FM10K_RXD_PKTTYPE_MASK 0x03F0 +#define FM10K_RXD_PKTTYPE_MASK_L3 0x0070 +#define FM10K_RXD_PKTTYPE_MASK_L4 0x0380 +#define FM10K_RXD_PKTTYPE_SHIFT 4 +#define FM10K_RXD_PKTTYPE_INNER_MASK_L3 0x1C00 +#define FM10K_RXD_PKTTYPE_INNER_MASK_L4 0xE000 +#define FM10K_RXD_PKTTYPE_INNER_SHIFT 10 +enum fm10k_rdesc_pkt_type { + /* L3 type */ + FM10K_PKTTYPE_OTHER = 0x00, + FM10K_PKTTYPE_IPV4 = 0x01, + FM10K_PKTTYPE_IPV4_EX = 0x02, + FM10K_PKTTYPE_IPV6 = 0x03, + FM10K_PKTTYPE_IPV6_EX = 0x04, + + /* L4 type */ + FM10K_PKTTYPE_TCP = 0x08, + FM10K_PKTTYPE_UDP = 0x10, + FM10K_PKTTYPE_GRE = 0x18, + FM10K_PKTTYPE_VXLAN = 0x20, + FM10K_PKTTYPE_NVGRE = 0x28, + FM10K_PKTTYPE_GENEVE = 0x30 +}; + +#define FM10K_RXD_HDR_INFO_XC_MASK 0x0006 +enum fm10k_rxdesc_xc { + FM10K_XC_UNICAST = 0x0, + FM10K_XC_MULTICAST = 0x4, + FM10K_XC_BROADCAST = 0x6 +}; + +#define FM10K_RXD_HDR_INFO_LEN_SHIFT 5 +#define FM10K_RXD_HDR_INFO_SPH 0x8000 + +#define FM10K_RXD_STATUS_DD 0x0001 /* Descriptor done */ +#define FM10K_RXD_STATUS_EOP 0x0002 /* End of packet */ +#define FM10K_RXD_STATUS_VEXT 0x0004 /* A VLAN tag is present */ +#define FM10K_RXD_STATUS_IPCS 0x0008 /* Indicates IPv4 csum */ +#define FM10K_RXD_STATUS_L4CS 0x0010 /* Indicates an L4 csum */ +#define FM10K_RXD_STATUS_IPCS2 0x0020 /* Inner header IPv4 csum */ +#define FM10K_RXD_STATUS_L4CS2 0x0040 /* Inner header L4 csum */ +#define FM10K_RXD_STATUS_IPFRAG_MASK 0x0180 /* Fragment mask */ +#define FM10K_RXD_STATUS_IPFRAG_CSUM 0x0100 /* Fragment w/ CSUM field */ +#define FM10K_RXD_STATUS_VEXT2 0x0200 /* A custom tag is present */ +#define FM10K_RXD_STATUS_HBO 0x0400 /* header buffer overrun */ +#define FM10K_RXD_STATUS_L4E2 0x0800 /* Inner header L4 csum err */ +#define FM10K_RXD_STATUS_IPE2 0x1000 /* Inner header IPv4 csum err */ +#define FM10K_RXD_STATUS_RXE 0x2000 /* Generic Rx error */ +#define FM10K_RXD_STATUS_L4E 0x4000 /* L4 csum error */ +#define FM10K_RXD_STATUS_IPE 0x8000 /* IPv4 csum error */ + +#define FM10K_RXD_ERR_SWITCH_ERROR 0x0001 /* Switch found bad packet */ +#define FM10K_RXD_ERR_NO_DESCRIPTOR 0x0002 /* No descriptor available */ +#define FM10K_RXD_ERR_PP_ERROR 0x0004 /* RAM error during processing */ +#define FM10K_RXD_ERR_SWITCH_READY 0x0008 /* Link 
transition mid-packet */ +#define FM10K_RXD_ERR_TOO_BIG 0x0010 /* Pkt too big for single buf */ + +#define FM10K_RXD_VLAN_ID_MASK 0x0FFF +#define FM10K_RXD_VLAN_PRI_SHIFT FM10K_TXD_VLAN_PRI_SHIFT + +struct fm10k_ftag { + __be16 swpri_type_user; + __be16 vlan; + __be16 sglort; + __be16 dglort; +}; + +#endif /* _FM10K_TYPE_H */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_vf.c b/lib/librte_pmd_fm10k/base/fm10k_vf.c new file mode 100644 index 0000000..2246688 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_vf.c @@ -0,0 +1,641 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_vf.h" + +/** + * fm10k_stop_hw_vf - Stop Tx/Rx units + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_stop_hw_vf(struct fm10k_hw *hw) +{ + u8 *perm_addr = hw->mac.perm_addr; + u32 bal = 0, bah = 0; + s32 err; + u16 i; + + DEBUGFUNC("fm10k_stop_hw_vf"); + + /* we need to disable the queues before taking further steps */ + err = fm10k_stop_hw_generic(hw); + if (err) + return err; + + /* If permanent address is set then we need to restore it */ + if (FM10K_IS_VALID_ETHER_ADDR(perm_addr)) { + bal = (((u32)perm_addr[3]) << 24) | + (((u32)perm_addr[4]) << 16) | + (((u32)perm_addr[5]) << 8); + bah = (((u32)0xFF) << 24) | + (((u32)perm_addr[0]) << 16) | + (((u32)perm_addr[1]) << 8) | + ((u32)perm_addr[2]); + } + + /* The queues have already been disabled so we just need to + * update their base address registers + */ + for (i = 0; i < hw->mac.max_queues; i++) { + FM10K_WRITE_REG(hw, FM10K_TDBAL(i), bal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(i), bah); + FM10K_WRITE_REG(hw, FM10K_RDBAL(i), bal); + FM10K_WRITE_REG(hw, FM10K_RDBAH(i), bah); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_reset_hw_vf - VF hardware reset + * @hw: pointer to hardware structure + * + * This function should return the hardware to a state similar to the + * one it is in after just being initialized. 
+ **/ +STATIC s32 fm10k_reset_hw_vf(struct fm10k_hw *hw) +{ + s32 err; + + DEBUGFUNC("fm10k_reset_hw_vf"); + + /* shut down queues we own and reset DMA configuration */ + err = fm10k_stop_hw_vf(hw); + if (err) + return err; + + /* Inititate VF reset */ + FM10K_WRITE_REG(hw, FM10K_VFCTRL, FM10K_VFCTRL_RST); + + /* Flush write and allow 100us for reset to complete */ + FM10K_WRITE_FLUSH(hw); + usec_delay(FM10K_RESET_TIMEOUT); + + /* Clear reset bit and verify it was cleared */ + FM10K_WRITE_REG(hw, FM10K_VFCTRL, 0); + if (FM10K_READ_REG(hw, FM10K_VFCTRL) & FM10K_VFCTRL_RST) + err = FM10K_ERR_RESET_FAILED; + + return err; +} + +/** + * fm10k_init_hw_vf - VF hardware initialization + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_init_hw_vf(struct fm10k_hw *hw) +{ + u32 tqdloc, tqdloc0 = ~FM10K_READ_REG(hw, FM10K_TQDLOC(0)); + s32 err; + u16 i; + + DEBUGFUNC("fm10k_init_hw_vf"); + + /* assume we always have at least 1 queue */ + for (i = 1; tqdloc0 && (i < FM10K_MAX_QUEUES_POOL); i++) { + /* verify the Descriptor cache offsets are increasing */ + tqdloc = ~FM10K_READ_REG(hw, FM10K_TQDLOC(i)); + if (!tqdloc || (tqdloc == tqdloc0)) + break; + + /* check to verify the PF doesn't own any of our queues */ + if (!~FM10K_READ_REG(hw, FM10K_TXQCTL(i)) || + !~FM10K_READ_REG(hw, FM10K_RXQCTL(i))) + break; + } + + /* shut down queues we own and reset DMA configuration */ + err = fm10k_disable_queues_generic(hw, i); + if (err) + return err; + + /* record maximum queue count */ + hw->mac.max_queues = i; + + /* fetch default VLAN */ + hw->mac.default_vid = (FM10K_READ_REG(hw, FM10K_TXQCTL(0)) & + FM10K_TXQCTL_VID_MASK) >> FM10K_TXQCTL_VID_SHIFT; + + return FM10K_SUCCESS; +} + +/** + * fm10k_is_slot_appropriate_vf - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. Since the VF has no control over + * the "slot" it is in, always indicate that the slot is appropriate. + **/ +STATIC bool fm10k_is_slot_appropriate_vf(struct fm10k_hw *hw) +{ + UNREFERENCED_1PARAMETER(hw); + DEBUGFUNC("fm10k_is_slot_appropriate_vf"); + + return TRUE; +} + +/* This structure defines the attibutes to be parsed below */ +const struct fm10k_tlv_attr fm10k_mac_vlan_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_MAC_VLAN_MSG_VLAN), + FM10K_TLV_ATTR_BOOL(FM10K_MAC_VLAN_MSG_SET), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_MAC), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_DEFAULT_MAC), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_MULTICAST), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_update_vlan_vf - Update status of VLAN ID in VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @vsi: Reserved, should always be 0 + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for this VF. 
+ **/ +STATIC s32 fm10k_update_vlan_vf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[4]; + + /* verify the index is not set */ + if (vsi) + return FM10K_ERR_PARAM; + + /* verify upper 4 bits of vid and length are 0 */ + if ((vid << 16 | vid) >> 28) + return FM10K_ERR_PARAM; + + /* encode set bit into the VLAN ID */ + if (!set) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_u32(msg, FM10K_MAC_VLAN_MSG_VLAN, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_msg_mac_vlan_vf - Read device MAC address from mailbox message + * @hw: pointer to the HW structure + * @results: Attributes for message + * @mbx: unused mailbox data + * + * This function should determine the MAC address for the VF + **/ +s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u8 perm_addr[ETH_ALEN]; + u16 vid; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_mac_vlan_vf"); + + /* record MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan( + results[FM10K_MAC_VLAN_MSG_DEFAULT_MAC], + perm_addr, &vid); + if (err) + return err; + + memcpy(hw->mac.perm_addr, perm_addr, ETH_ALEN); + hw->mac.default_vid = vid & (FM10K_VLAN_TABLE_VID_MAX - 1); + hw->mac.vlan_override = !!(vid & FM10K_VLAN_CLEAR); + + return FM10K_SUCCESS; +} + +/** + * fm10k_read_mac_addr_vf - Read device MAC address + * @hw: pointer to the HW structure + * + * This function should determine the MAC address for the VF + **/ +STATIC s32 fm10k_read_mac_addr_vf(struct fm10k_hw *hw) +{ + u8 perm_addr[ETH_ALEN]; + u32 base_addr; + + DEBUGFUNC("fm10k_read_mac_addr_vf"); + + base_addr = FM10K_READ_REG(hw, FM10K_TDBAL(0)); + + /* last byte should be 0 */ + if (base_addr << 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[3] = (u8)(base_addr >> 24); + perm_addr[4] = (u8)(base_addr >> 16); + perm_addr[5] = (u8)(base_addr >> 8); + + base_addr = FM10K_READ_REG(hw, FM10K_TDBAH(0)); + + /* first byte should be all 1's */ + if ((~base_addr) >> 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[0] = (u8)(base_addr >> 16); + perm_addr[1] = (u8)(base_addr >> 8); + perm_addr[2] = (u8)(base_addr); + + memcpy(hw->mac.perm_addr, perm_addr, ETH_ALEN); + memcpy(hw->mac.addr, perm_addr, ETH_ALEN); + + return FM10K_SUCCESS; +} + +/** + * fm10k_update_uc_addr_vf - Update device unicast addresses + * @hw: pointer to the HW structure + * @glort: unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure - unused + * + * This function is used to add or remove unicast MAC addresses for + * the VF. 
+ **/ +STATIC s32 fm10k_update_uc_addr_vf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[7]; + + DEBUGFUNC("fm10k_update_uc_addr_vf"); + + UNREFERENCED_2PARAMETER(glort, flags); + + /* verify VLAN ID is valid */ + if (vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* verify MAC address is valid */ + if (!FM10K_IS_VALID_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + /* verify we are not locked down on the MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(hw->mac.perm_addr) && + memcmp(hw->mac.perm_addr, mac, ETH_ALEN)) + return FM10K_ERR_PARAM; + + /* add bit to notify us if this is a set or clear operation */ + if (!add) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_MAC, mac, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_mc_addr_vf - Update device multicast addresses + * @hw: pointer to the HW structure + * @glort: unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses for + * the VF. + **/ +STATIC s32 fm10k_update_mc_addr_vf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[7]; + + DEBUGFUNC("fm10k_update_uc_addr_vf"); + + UNREFERENCED_1PARAMETER(glort); + + /* verify VLAN ID is valid */ + if (vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* verify multicast address is valid */ + if (!FM10K_IS_MULTICAST_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + /* add bit to notify us if this is a set or clear operation */ + if (!add) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_MULTICAST, + mac, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_int_moderator_vf - Request update of interrupt moderator list + * @hw: pointer to hardware structure + * + * This function will issue a request to the PF to rescan our MSI-X table + * and to update the interrupt moderator linked list. + **/ +STATIC void fm10k_update_int_moderator_vf(struct fm10k_hw *hw) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[1]; + + /* generate MSI-X request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MSIX); + + /* load onto outgoing mailbox */ + mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/* This structure defines the attibutes to be parsed below */ +const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[] = { + FM10K_TLV_ATTR_BOOL(FM10K_LPORT_STATE_MSG_DISABLE), + FM10K_TLV_ATTR_U8(FM10K_LPORT_STATE_MSG_XCAST_MODE), + FM10K_TLV_ATTR_BOOL(FM10K_LPORT_STATE_MSG_READY), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_lport_state_vf - Message handler for lport_state message from PF + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler is meant to capture the indication from the PF that we + * are ready to bring up the interface. 
+ **/ +s32 fm10k_msg_lport_state_vf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_lport_state_vf"); + + hw->mac.dglort_map = !results[FM10K_LPORT_STATE_MSG_READY] ? + FM10K_DGLORTMAP_NONE : FM10K_DGLORTMAP_ZERO; + + return FM10K_SUCCESS; +} + +/** + * fm10k_update_lport_state_vf - Update device state in lower device + * @hw: pointer to the HW structure + * @glort: unused + * @count: number of logical ports to enable - unused (always 1) + * @enable: boolean value indicating if this is an enable or disable request + * + * Notify the lower device of a state change. If the lower device is + * enabled we can add filters, if it is disabled all filters for this + * logical port are flushed. + **/ +STATIC s32 fm10k_update_lport_state_vf(struct fm10k_hw *hw, u16 glort, + u16 count, bool enable) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[2]; + + UNREFERENCED_2PARAMETER(glort, count); + DEBUGFUNC("fm10k_update_lport_state_vf"); + + /* reset glort mask 0 as we have to wait to be enabled */ + hw->mac.dglort_map = FM10K_DGLORTMAP_NONE; + + /* generate port state request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + if (!enable) + fm10k_tlv_attr_put_bool(msg, FM10K_LPORT_STATE_MSG_DISABLE); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_xcast_mode_vf - Request update of multicast mode + * @hw: pointer to hardware structure + * @glort: unused + * @mode: integer value indicating mode being requested + * + * This function will attempt to request a higher mode for the port + * so that it can enable either multicast, multicast promiscuous, or + * promiscuous mode of operation. + **/ +STATIC s32 fm10k_update_xcast_mode_vf(struct fm10k_hw *hw, u16 glort, u8 mode) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3]; + + UNREFERENCED_1PARAMETER(glort); + DEBUGFUNC("fm10k_update_xcast_mode_vf"); + + if (mode > FM10K_XCAST_MODE_NONE) + return FM10K_ERR_PARAM; + + /* generate message requesting to change xcast mode */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + fm10k_tlv_attr_put_u8(msg, FM10K_LPORT_STATE_MSG_XCAST_MODE, mode); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +const struct fm10k_tlv_attr fm10k_1588_msg_attr[] = { + FM10K_TLV_ATTR_U64(FM10K_1588_MSG_TIMESTAMP), + FM10K_TLV_ATTR_LAST +}; + +/* currently there is no shared 1588 timestamp handler */ + +/** + * fm10k_update_hw_stats_vf - Updates hardware related statistics of VF + * @hw: pointer to hardware structure + * @stats: pointer to statistics structure + * + * This function collects and aggregates per queue hardware statistics. + **/ +STATIC void fm10k_update_hw_stats_vf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_update_hw_stats_vf"); + + fm10k_update_hw_stats_q(hw, stats->q, 0, hw->mac.max_queues); +} + +/** + * fm10k_rebind_hw_stats_vf - Resets base for hardware statistics of VF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function resets the base for queue hardware statistics. 
+ **/ +STATIC void fm10k_rebind_hw_stats_vf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_rebind_hw_stats_vf"); + + /* Unbind Queue Statistics */ + fm10k_unbind_hw_stats_q(stats->q, 0, hw->mac.max_queues); + + /* Reinitialize bases for all stats */ + fm10k_update_hw_stats_vf(hw, stats); +} + +/** + * fm10k_configure_dglort_map_vf - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned. + **/ +STATIC s32 fm10k_configure_dglort_map_vf(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + UNREFERENCED_1PARAMETER(hw); + DEBUGFUNC("fm10k_configure_dglort_map_vf"); + + /* verify the dglort pointer */ + if (!dglort) + return FM10K_ERR_PARAM; + + /* stub for now until we determine correct message for this */ + + return FM10K_SUCCESS; +} + +/** + * fm10k_adjust_systime_vf - Adjust systime frequency + * @hw: pointer to hardware structure + * @ppb: adjustment rate in parts per billion + * + * This function takes an adjustment rate in parts per billion and will + * verify that this value is 0 as the VF cannot support adjusting the + * systime clock. + * + * If the ppb value is non-zero the return is ERR_PARAM else success + **/ +STATIC s32 fm10k_adjust_systime_vf(struct fm10k_hw *hw, s32 ppb) +{ + UNREFERENCED_1PARAMETER(hw); + DEBUGFUNC("fm10k_adjust_systime_vf"); + + /* The VF cannot adjust the clock frequency, however it should + * already have a syntonic clock with whichever host interface is + * running as the master for the host interface clock domain so + * there should be not frequency adjustment necessary. + */ + return ppb ? FM10K_ERR_PARAM : FM10K_SUCCESS; +} + +/** + * fm10k_read_systime_vf - Reads value of systime registers + * @hw: pointer to the hardware structure + * + * Function reads the content of 2 registers, combined to represent a 64 bit + * value measured in nanoseconds. In order to guarantee the value is accurate + * we check the 32 most significant bits both before and after reading the + * 32 least significant bits to verify they didn't change as we were reading + * the registers. + **/ +static u64 fm10k_read_systime_vf(struct fm10k_hw *hw) +{ + u32 systime_l, systime_h, systime_tmp; + + systime_h = fm10k_read_reg(hw, FM10K_VFSYSTIME + 1); + + do { + systime_tmp = systime_h; + systime_l = fm10k_read_reg(hw, FM10K_VFSYSTIME); + systime_h = fm10k_read_reg(hw, FM10K_VFSYSTIME + 1); + } while (systime_tmp != systime_h); + + return ((u64)systime_h << 32) | systime_l; +} + +static const struct fm10k_msg_data fm10k_msg_data_vf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_msg_mac_vlan_vf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_init_ops_vf - Inits func ptrs and MAC type + * @hw: pointer to hardware structure + * + * Initialize the function pointers and assign the MAC type for VF. + * Does not touch the hardware. 
+ **/ +s32 fm10k_init_ops_vf(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + + DEBUGFUNC("fm10k_init_ops_vf"); + + fm10k_init_ops_generic(hw); + + mac->ops.reset_hw = &fm10k_reset_hw_vf; + mac->ops.init_hw = &fm10k_init_hw_vf; + mac->ops.start_hw = &fm10k_start_hw_generic; + mac->ops.stop_hw = &fm10k_stop_hw_vf; + mac->ops.is_slot_appropriate = &fm10k_is_slot_appropriate_vf; + mac->ops.update_vlan = &fm10k_update_vlan_vf; + mac->ops.read_mac_addr = &fm10k_read_mac_addr_vf; + mac->ops.update_uc_addr = &fm10k_update_uc_addr_vf; + mac->ops.update_mc_addr = &fm10k_update_mc_addr_vf; + mac->ops.update_xcast_mode = &fm10k_update_xcast_mode_vf; + mac->ops.update_int_moderator = &fm10k_update_int_moderator_vf; + mac->ops.update_lport_state = &fm10k_update_lport_state_vf; + mac->ops.update_hw_stats = &fm10k_update_hw_stats_vf; + mac->ops.rebind_hw_stats = &fm10k_rebind_hw_stats_vf; + mac->ops.configure_dglort_map = &fm10k_configure_dglort_map_vf; + mac->ops.get_host_state = &fm10k_get_host_state_generic; + mac->ops.adjust_systime = &fm10k_adjust_systime_vf; + mac->ops.read_systime = &fm10k_read_systime_vf, + + mac->max_msix_vectors = fm10k_get_pcie_msix_count_generic(hw); + + return fm10k_pfvf_mbx_init(hw, &hw->mbx, fm10k_msg_data_vf, 0); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_vf.h b/lib/librte_pmd_fm10k/base/fm10k_vf.h new file mode 100644 index 0000000..0438542 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_vf.h @@ -0,0 +1,91 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _FM10K_VF_H_ +#define _FM10K_VF_H_ + +#include "fm10k_type.h" +#include "fm10k_common.h" + +enum fm10k_vf_tlv_msg_id { + FM10K_VF_MSG_ID_TEST = 0, /* msg ID reserved for testing */ + FM10K_VF_MSG_ID_MSIX, + FM10K_VF_MSG_ID_MAC_VLAN, + FM10K_VF_MSG_ID_LPORT_STATE, + FM10K_VF_MSG_ID_1588, + FM10K_VF_MSG_ID_MAX, +}; + +enum fm10k_tlv_mac_vlan_attr_id { + FM10K_MAC_VLAN_MSG_VLAN, + FM10K_MAC_VLAN_MSG_SET, + FM10K_MAC_VLAN_MSG_MAC, + FM10K_MAC_VLAN_MSG_DEFAULT_MAC, + FM10K_MAC_VLAN_MSG_MULTICAST, + FM10K_MAC_VLAN_MSG_ID_MAX +}; + +enum fm10k_tlv_lport_state_attr_id { + FM10K_LPORT_STATE_MSG_DISABLE, + FM10K_LPORT_STATE_MSG_XCAST_MODE, + FM10K_LPORT_STATE_MSG_READY, + FM10K_LPORT_STATE_MSG_MAX +}; + +enum fm10k_tlv_1588_attr_id { + FM10K_1588_MSG_TIMESTAMP, + FM10K_1588_MSG_MAX +}; + +#define FM10K_VF_MSG_MSIX_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_MSIX, NULL, func) + +s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_mac_vlan_msg_attr[]; +#define FM10K_VF_MSG_MAC_VLAN_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_MAC_VLAN, \ + fm10k_mac_vlan_msg_attr, func) + +s32 fm10k_msg_lport_state_vf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[]; +#define FM10K_VF_MSG_LPORT_STATE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_LPORT_STATE, \ + fm10k_lport_state_msg_attr, func) + +extern const struct fm10k_tlv_attr fm10k_1588_msg_attr[]; +#define FM10K_VF_MSG_1588_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_1588, fm10k_1588_msg_attr, func) + +s32 fm10k_init_ops_vf(struct fm10k_hw *hw); +#endif /* _FM10K_VF_H */ -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
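Every VF-to-PF request in fm10k_vf.c above follows the same pattern: fm10k_tlv_msg_init() builds the message header, one or more fm10k_tlv_attr_put_*() calls append attributes, and mbx->ops.enqueue_tx() loads the message onto the mailbox, while inbound messages are dispatched through the fm10k_msg_data table registered with fm10k_pfvf_mbx_init(). As a minimal sketch of extending that table, the fragment below registers a handler for the 1588 timestamp message, for which the header ships fm10k_1588_msg_attr and FM10K_VF_MSG_1588_HANDLER but no shared handler; the handler body, the fm10k_tlv_attr_get_u64() accessor and the storage variable are illustrative assumptions, not part of the patch.

#include "fm10k_vf.h"

/* Hypothetical VF-side consumer of the 1588 timestamp message; the base
 * driver only provides the attribute table, so where the value ends up is
 * purely illustrative. */
static u64 example_last_timestamp;

static s32 example_msg_1588_vf(struct fm10k_hw *hw, u32 **results,
			       struct fm10k_mbx_info *mbx)
{
	u64 timestamp;
	s32 err;

	UNREFERENCED_2PARAMETER(hw, mbx);

	/* assumes a u64 accessor exists alongside the other
	 * fm10k_tlv_attr_get_* helpers in fm10k_tlv.h */
	err = fm10k_tlv_attr_get_u64(results[FM10K_1588_MSG_TIMESTAMP],
				     &timestamp);
	if (err)
		return err;

	example_last_timestamp = timestamp;

	return FM10K_SUCCESS;
}

/* Message table mirroring fm10k_msg_data_vf with the extra handler added;
 * it would be registered the same way fm10k_init_ops_vf() does it:
 * fm10k_pfvf_mbx_init(hw, &hw->mbx, example_msg_data_vf, 0); */
static const struct fm10k_msg_data example_msg_data_vf[] = {
	FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
	FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_msg_mac_vlan_vf),
	FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf),
	FM10K_VF_MSG_1588_HANDLER(example_msg_1588_vf),
	FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error),
};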
* [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 01/15] fm10k: add base driver Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 01/17] fm10k: add base driver Chen Jing D(Mark) ` (18 more replies) 0 siblings, 19 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: "Chen Jing D(Mark)" <jing.d.chen@intel.com> The patch set adds a poll mode driver for the host interface of the Intel Ethernet Switch FM10000 Series silicon, which integrates NIC and switch functionalities. The patch set includes the following features: 1. Basic RX/TX functions for PF/VF. 2. Interrupt handling mechanism for PF/VF. 3. Per-queue start/stop functions for PF/VF. 4. Mailbox handling between PF/VF and PF/Switch Manager. 5. Receive Side Scaling (RSS) for PF/VF. 6. Scatter receive function for PF/VF. 7. RETA update/query for PF/VF. 8. VLAN filter set for PF. 9. Link status query for PF/VF. Changes in v5: - Add sanity check for mbuf allocation. - Add a new patch to claim fm10k driver review. - Change commit log. - Add unlikely() in rx_desc_to_ol_flags to improve performance. - Add a new patch to add ABI version. Changes in v4: - Change commit log to remove improper words. Changes in v3: - Update base driver. - Define several macros to pass base driver compile. Changes in v2: - Merge 3 patches into 1 to configure the fm10k compile environment. - Rework log code to follow the style in ixgbe. - Rework log messages, remove redundant '\n'. - Update Copyright year from "2014" to "2015". - Change base driver directory name from SHARED to base. - Add more description in log for patch "add PF and VF interrupt". - Merge 2 patches into 1 to register fm10k driver. - Define macro to replace numeric for lower 32-bit mask.
Chen Jing D(Mark) (1): maintainers: claim for fm10k review Jeff Shaw (15): fm10k: add base driver eal: add fm10k device id fm10k: register fm10k pmd PF driver Change config files to add fm10k into compile fm10k: add reta update/requery functions fm10k: add rx_queue_setup/release function fm10k: add tx_queue_setup/release function fm10k: add RX/TX single queue start/stop function fm10k: add dev start/stop functions fm10k: add receive and tranmit function fm10k: add PF RSS support fm10k: Add scatter receive function fm10k: add function to set vlan fm10k: Add SRIOV-VF support fm10k: add PF and VF interrupt handling function Michael Qiu (1): fm10k: Add ABI version of librte_pmd_fm10k MAINTAINERS | 4 + config/common_bsdapp | 11 + config/common_linuxapp | 11 + lib/Makefile | 1 + lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 + lib/librte_pmd_fm10k/Makefile | 100 + lib/librte_pmd_fm10k/base/fm10k_api.c | 341 ++++ lib/librte_pmd_fm10k/base/fm10k_api.h | 61 + lib/librte_pmd_fm10k/base/fm10k_common.c | 572 ++++++ lib/librte_pmd_fm10k/base/fm10k_common.h | 52 + lib/librte_pmd_fm10k/base/fm10k_mbx.c | 2185 +++++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_mbx.h | 329 ++++ lib/librte_pmd_fm10k/base/fm10k_osdep.h | 148 ++ lib/librte_pmd_fm10k/base/fm10k_pf.c | 1992 +++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_pf.h | 155 ++ lib/librte_pmd_fm10k/base/fm10k_tlv.c | 914 ++++++++++ lib/librte_pmd_fm10k/base/fm10k_tlv.h | 199 ++ lib/librte_pmd_fm10k/base/fm10k_type.h | 937 ++++++++++ lib/librte_pmd_fm10k/base/fm10k_vf.c | 641 +++++++ lib/librte_pmd_fm10k/base/fm10k_vf.h | 91 + lib/librte_pmd_fm10k/fm10k.h | 293 +++ lib/librte_pmd_fm10k/fm10k_ethdev.c | 1868 +++++++++++++++++++ lib/librte_pmd_fm10k/fm10k_logs.h | 78 + lib/librte_pmd_fm10k/fm10k_rxtx.c | 459 +++++ lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map | 4 + mk/rte.app.mk | 4 + 26 files changed, 11472 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/Makefile create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_osdep.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_type.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.h create mode 100644 lib/librte_pmd_fm10k/fm10k.h create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c create mode 100644 lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
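For reviewers who want to see where the features above surface, a minimal sketch of driving the PMD through the standard ethdev API follows; the single queue pair, descriptor counts and the pre-created mbuf mempool are assumptions, and the exact configuration-structure fields vary between DPDK releases, so this is an illustration rather than a reference.

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Bring up one fm10k-backed port with a single Rx/Tx queue pair.
 * "mp" is assumed to be an mbuf mempool created by the application. */
static int example_port_init(uint8_t port, struct rte_mempool *mp)
{
	struct rte_eth_conf port_conf;
	int ret;

	memset(&port_conf, 0, sizeof(port_conf));

	ret = rte_eth_dev_configure(port, 1, 1, &port_conf);
	if (ret < 0)
		return ret;

	/* NULL queue configs pick up the defaults reported by the driver */
	ret = rte_eth_rx_queue_setup(port, 0, 128,
				     rte_eth_dev_socket_id(port), NULL, mp);
	if (ret < 0)
		return ret;

	ret = rte_eth_tx_queue_setup(port, 0, 512,
				     rte_eth_dev_socket_id(port), NULL);
	if (ret < 0)
		return ret;

	/* queues and the mailbox to the switch manager come up here */
	return rte_eth_dev_start(port);
}

/* Trivial echo loop exercising the Rx/Tx burst paths added by this series. */
static void example_echo_loop(uint8_t port)
{
	struct rte_mbuf *pkts[32];

	for (;;) {
		uint16_t nb_rx = rte_eth_rx_burst(port, 0, pkts, 32);
		uint16_t nb_tx = rte_eth_tx_burst(port, 0, pkts, nb_rx);

		while (nb_tx < nb_rx)	/* drop what the Tx ring refused */
			rte_pktmbuf_free(pkts[nb_tx++]);
	}
}

Once the port is started, the RETA update/query and VLAN filter features from the list above are reached in the same way through rte_eth_dev_rss_reta_update() and rte_eth_dev_vlan_filter().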
* [dpdk-dev] [PATCH v5 01/17] fm10k: add base driver 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 02/17] eal: add fm10k device id Chen Jing D(Mark) ` (17 subsequent siblings) 18 siblings, 1 reply; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Base driver is developed and maintained by Intel ND team, includes basic functional service to Intel Ethernet Switch FM10000 Series of silicons. Any suggestion on bug fix and improvement within this directory is welcome, but need this team to change and update. Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/base/fm10k_api.c | 341 +++++ lib/librte_pmd_fm10k/base/fm10k_api.h | 61 + lib/librte_pmd_fm10k/base/fm10k_common.c | 572 ++++++++ lib/librte_pmd_fm10k/base/fm10k_common.h | 52 + lib/librte_pmd_fm10k/base/fm10k_mbx.c | 2185 ++++++++++++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_mbx.h | 329 +++++ lib/librte_pmd_fm10k/base/fm10k_osdep.h | 148 ++ lib/librte_pmd_fm10k/base/fm10k_pf.c | 1992 +++++++++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_pf.h | 155 +++ lib/librte_pmd_fm10k/base/fm10k_tlv.c | 914 +++++++++++++ lib/librte_pmd_fm10k/base/fm10k_tlv.h | 199 +++ lib/librte_pmd_fm10k/base/fm10k_type.h | 937 +++++++++++++ lib/librte_pmd_fm10k/base/fm10k_vf.c | 641 +++++++++ lib/librte_pmd_fm10k/base/fm10k_vf.h | 91 ++ 14 files changed, 8617 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_osdep.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_type.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.h diff --git a/lib/librte_pmd_fm10k/base/fm10k_api.c b/lib/librte_pmd_fm10k/base/fm10k_api.c new file mode 100644 index 0000000..c0f555c --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_api.c @@ -0,0 +1,341 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. 
Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_api.h" +#include "fm10k_common.h" + +/** + * fm10k_set_mac_type - Sets MAC type + * @hw: pointer to the HW structure + * + * This function sets the mac type of the adapter based on the + * vendor ID and device ID stored in the hw structure. + **/ +s32 fm10k_set_mac_type(struct fm10k_hw *hw) +{ + s32 ret_val = FM10K_SUCCESS; + + DEBUGFUNC("fm10k_set_mac_type"); + + if (hw->vendor_id != FM10K_INTEL_VENDOR_ID) { + ERROR_REPORT2(FM10K_ERROR_UNSUPPORTED, + "Unsupported vendor id: %x\n", hw->vendor_id); + return FM10K_ERR_DEVICE_NOT_SUPPORTED; + } + + switch (hw->device_id) { + case FM10K_DEV_ID_PF: + hw->mac.type = fm10k_mac_pf; + break; + case FM10K_DEV_ID_VF: + hw->mac.type = fm10k_mac_vf; + break; + default: + ret_val = FM10K_ERR_DEVICE_NOT_SUPPORTED; + ERROR_REPORT2(FM10K_ERROR_UNSUPPORTED, + "Unsupported device id: %x\n", + hw->device_id); + break; + } + + DEBUGOUT2("fm10k_set_mac_type found mac: %d, returns: %d\n", + hw->mac.type, ret_val); + + return ret_val; +} + +/** + * fm10k_init_shared_code - Initialize the shared code + * @hw: pointer to hardware structure + * + * This will assign function pointers and assign the MAC type and PHY code. + * Does not touch the hardware. This function must be called prior to any + * other function in the shared code. The fm10k_hw structure should be + * memset to 0 prior to calling this function. The following fields in + * hw structure should be filled in prior to calling this function: + * hw_addr, back, device_id, vendor_id, subsystem_device_id, + * subsystem_vendor_id, and revision_id + **/ +s32 fm10k_init_shared_code(struct fm10k_hw *hw) +{ + s32 status; + + DEBUGFUNC("fm10k_init_shared_code"); + + /* Set the mac type */ + fm10k_set_mac_type(hw); + + switch (hw->mac.type) { + case fm10k_mac_pf: + status = fm10k_init_ops_pf(hw); + break; + case fm10k_mac_vf: + status = fm10k_init_ops_vf(hw); + break; + default: + status = FM10K_ERR_DEVICE_NOT_SUPPORTED; + break; + } + + return status; +} + +#define fm10k_call_func(hw, func, params, error) \ + ((func) ? (func params) : (error)) + +/** + * fm10k_reset_hw - Reset the hardware to known good state + * @hw: pointer to hardware structure + * + * This function should return the hardware to a state similar to the + * one it is in after being powered on. 
+ **/ +s32 fm10k_reset_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.reset_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_init_hw - Initialize the hardware + * @hw: pointer to hardware structure + * + * Initialize the hardware by resetting and then starting the hardware + **/ +s32 fm10k_init_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.init_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_stop_hw - Prepares hardware to shutdown Rx/Tx + * @hw: pointer to hardware structure + * + * Disables Rx/Tx queues and disables the DMA engine. + **/ +s32 fm10k_stop_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.stop_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_start_hw - Prepares hardware for Rx/Tx + * @hw: pointer to hardware structure + * + * This function sets the flags indicating that the hardware is ready to + * begin operation. + **/ +s32 fm10k_start_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.start_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_get_bus_info - Set PCI bus info + * @hw: pointer to hardware structure + * + * Sets the PCI bus info (speed, width, type) within the fm10k_hw structure + **/ +s32 fm10k_get_bus_info(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.get_bus_info, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_is_slot_appropriate - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. + **/ +bool fm10k_is_slot_appropriate(struct fm10k_hw *hw) +{ + if (hw->mac.ops.is_slot_appropriate) + return hw->mac.ops.is_slot_appropriate(hw); + return true; +} + +/** + * fm10k_update_vlan - Clear VLAN ID to VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @idx: Index indicating VF ID or PF ID in table + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for the corresponding function. + **/ +s32 fm10k_update_vlan(struct fm10k_hw *hw, u32 vid, u8 idx, bool set) +{ + return fm10k_call_func(hw, hw->mac.ops.update_vlan, (hw, vid, idx, set), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_read_mac_addr - Reads MAC address + * @hw: pointer to hardware structure + * + * Reads the MAC address out of the interface and stores it in the HW + * structures. + **/ +s32 fm10k_read_mac_addr(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.read_mac_addr, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_hw_stats - Update hw statistics + * @hw: pointer to hardware structure + * + * This function updates statistics that are related to hardware. + * */ +void fm10k_update_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats) +{ + if (hw->mac.ops.update_hw_stats) + hw->mac.ops.update_hw_stats(hw, stats); +} + +/** + * fm10k_rebind_hw_stats - Reset base for hw statistics + * @hw: pointer to hardware structure + * + * This function resets the base for statistics that are related to hardware. 
+ * */ +void fm10k_rebind_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats) +{ + if (hw->mac.ops.rebind_hw_stats) + hw->mac.ops.rebind_hw_stats(hw, stats); +} + +/** + * fm10k_configure_dglort_map - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned. + **/ +s32 fm10k_configure_dglort_map(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + return fm10k_call_func(hw, hw->mac.ops.configure_dglort_map, + (hw, dglort), FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_set_dma_mask - Configures PhyAddrSpace to limit DMA to system + * @hw: pointer to hardware structure + * @dma_mask: 64 bit DMA mask required for platform + * + * This function configures the endpoint to limit the access to memory + * beyond what is physically in the system. + **/ +void fm10k_set_dma_mask(struct fm10k_hw *hw, u64 dma_mask) +{ + if (hw->mac.ops.set_dma_mask) + hw->mac.ops.set_dma_mask(hw, dma_mask); +} + +/** + * fm10k_get_fault - Record a fault in one of the interface units + * @hw: pointer to hardware structure + * @type: pointer to fault type register offset + * @fault: pointer to memory location to record the fault + * + * Record the fault register contents to the fault data structure and + * clear the entry from the register. + * + * Returns ERR_PARAM if invalid register is specified or no error is present. + **/ +s32 fm10k_get_fault(struct fm10k_hw *hw, int type, struct fm10k_fault *fault) +{ + return fm10k_call_func(hw, hw->mac.ops.get_fault, (hw, type, fault), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_uc_addr - Update device unicast address + * @hw: pointer to the HW structure + * @lport: logical port ID to update - unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure - unused + * + * This function is used to add or remove unicast MAC addresses + **/ +s32 fm10k_update_uc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + return fm10k_call_func(hw, hw->mac.ops.update_uc_addr, + (hw, lport, mac, vid, add, flags), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_mc_addr - Update device multicast address + * @hw: pointer to the HW structure + * @lport: logical port ID to update - unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses + **/ +s32 fm10k_update_mc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add) +{ + return fm10k_call_func(hw, hw->mac.ops.update_mc_addr, + (hw, lport, mac, vid, add), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_adjust_systime - Adjust systime frequency + * @hw: pointer to hardware structure + * @ppb: adjustment rate in parts per billion + * + * This function is meant to update the frequency of the clock represented + * by the SYSTIME register. 
+ **/ +s32 fm10k_adjust_systime(struct fm10k_hw *hw, s32 ppb) +{ + return fm10k_call_func(hw, hw->mac.ops.adjust_systime, + (hw, ppb), FM10K_NOT_IMPLEMENTED); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_api.h b/lib/librte_pmd_fm10k/base/fm10k_api.h new file mode 100644 index 0000000..343d750 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_api.h @@ -0,0 +1,61 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _FM10K_API_H_ +#define _FM10K_API_H_ + +#include "fm10k_pf.h" +#include "fm10k_vf.h" + +s32 fm10k_set_mac_type(struct fm10k_hw *hw); +s32 fm10k_reset_hw(struct fm10k_hw *hw); +s32 fm10k_init_hw(struct fm10k_hw *hw); +s32 fm10k_stop_hw(struct fm10k_hw *hw); +s32 fm10k_start_hw(struct fm10k_hw *hw); +s32 fm10k_init_shared_code(struct fm10k_hw *hw); +s32 fm10k_get_bus_info(struct fm10k_hw *hw); +bool fm10k_is_slot_appropriate(struct fm10k_hw *hw); +s32 fm10k_update_vlan(struct fm10k_hw *hw, u32 vid, u8 idx, bool set); +s32 fm10k_read_mac_addr(struct fm10k_hw *hw); +void fm10k_update_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats); +void fm10k_rebind_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats); +s32 fm10k_configure_dglort_map(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort); +void fm10k_set_dma_mask(struct fm10k_hw *hw, u64 dma_mask); +s32 fm10k_get_fault(struct fm10k_hw *hw, int type, struct fm10k_fault *fault); +s32 fm10k_update_uc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add, u8 flags); +s32 fm10k_update_mc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add); +s32 fm10k_adjust_systime(struct fm10k_hw *hw, s32 ppb); +#endif /* _FM10K_API_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_common.c b/lib/librte_pmd_fm10k/base/fm10k_common.c new file mode 100644 index 0000000..a90d2f0 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_common.c @@ -0,0 +1,572 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_common.h" + +/** + * fm10k_get_bus_info_generic - Generic set PCI bus info + * @hw: pointer to hardware structure + * + * Gets the PCI bus info (speed, width, type) then calls helper function to + * store this data within the fm10k_hw structure. 
+ **/ +STATIC s32 fm10k_get_bus_info_generic(struct fm10k_hw *hw) +{ + u16 link_cap, link_status, device_cap, device_control; + + DEBUGFUNC("fm10k_get_bus_info_generic"); + + /* Get the maximum link width and speed from PCIe config space */ + link_cap = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_LINK_CAP); + + switch (link_cap & FM10K_PCIE_LINK_WIDTH) { + case FM10K_PCIE_LINK_WIDTH_1: + hw->bus_caps.width = fm10k_bus_width_pcie_x1; + break; + case FM10K_PCIE_LINK_WIDTH_2: + hw->bus_caps.width = fm10k_bus_width_pcie_x2; + break; + case FM10K_PCIE_LINK_WIDTH_4: + hw->bus_caps.width = fm10k_bus_width_pcie_x4; + break; + case FM10K_PCIE_LINK_WIDTH_8: + hw->bus_caps.width = fm10k_bus_width_pcie_x8; + break; + default: + hw->bus_caps.width = fm10k_bus_width_unknown; + break; + } + + switch (link_cap & FM10K_PCIE_LINK_SPEED) { + case FM10K_PCIE_LINK_SPEED_2500: + hw->bus_caps.speed = fm10k_bus_speed_2500; + break; + case FM10K_PCIE_LINK_SPEED_5000: + hw->bus_caps.speed = fm10k_bus_speed_5000; + break; + case FM10K_PCIE_LINK_SPEED_8000: + hw->bus_caps.speed = fm10k_bus_speed_8000; + break; + default: + hw->bus_caps.speed = fm10k_bus_speed_unknown; + break; + } + + /* Get the PCIe maximum payload size for the PCIe function */ + device_cap = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_DEV_CAP); + + switch (device_cap & FM10K_PCIE_DEV_CAP_PAYLOAD) { + case FM10K_PCIE_DEV_CAP_PAYLOAD_128: + hw->bus_caps.payload = fm10k_bus_payload_128; + break; + case FM10K_PCIE_DEV_CAP_PAYLOAD_256: + hw->bus_caps.payload = fm10k_bus_payload_256; + break; + case FM10K_PCIE_DEV_CAP_PAYLOAD_512: + hw->bus_caps.payload = fm10k_bus_payload_512; + break; + default: + hw->bus_caps.payload = fm10k_bus_payload_unknown; + break; + } + + /* Get the negotiated link width and speed from PCIe config space */ + link_status = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_LINK_STATUS); + + switch (link_status & FM10K_PCIE_LINK_WIDTH) { + case FM10K_PCIE_LINK_WIDTH_1: + hw->bus.width = fm10k_bus_width_pcie_x1; + break; + case FM10K_PCIE_LINK_WIDTH_2: + hw->bus.width = fm10k_bus_width_pcie_x2; + break; + case FM10K_PCIE_LINK_WIDTH_4: + hw->bus.width = fm10k_bus_width_pcie_x4; + break; + case FM10K_PCIE_LINK_WIDTH_8: + hw->bus.width = fm10k_bus_width_pcie_x8; + break; + default: + hw->bus.width = fm10k_bus_width_unknown; + break; + } + + switch (link_status & FM10K_PCIE_LINK_SPEED) { + case FM10K_PCIE_LINK_SPEED_2500: + hw->bus.speed = fm10k_bus_speed_2500; + break; + case FM10K_PCIE_LINK_SPEED_5000: + hw->bus.speed = fm10k_bus_speed_5000; + break; + case FM10K_PCIE_LINK_SPEED_8000: + hw->bus.speed = fm10k_bus_speed_8000; + break; + default: + hw->bus.speed = fm10k_bus_speed_unknown; + break; + } + + /* Get the negotiated PCIe maximum payload size for the PCIe function */ + device_control = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_DEV_CTRL); + + switch (device_control & FM10K_PCIE_DEV_CTRL_PAYLOAD) { + case FM10K_PCIE_DEV_CTRL_PAYLOAD_128: + hw->bus.payload = fm10k_bus_payload_128; + break; + case FM10K_PCIE_DEV_CTRL_PAYLOAD_256: + hw->bus.payload = fm10k_bus_payload_256; + break; + case FM10K_PCIE_DEV_CTRL_PAYLOAD_512: + hw->bus.payload = fm10k_bus_payload_512; + break; + default: + hw->bus.payload = fm10k_bus_payload_unknown; + break; + } + + return FM10K_SUCCESS; +} + +u16 fm10k_get_pcie_msix_count_generic(struct fm10k_hw *hw) +{ + u16 msix_count; + + DEBUGFUNC("fm10k_get_pcie_msix_count_generic"); + + /* read in value from MSI-X capability register */ + msix_count = FM10K_READ_PCI_WORD(hw, FM10K_PCI_MSIX_MSG_CTRL); + msix_count &= 
FM10K_PCI_MSIX_MSG_CTRL_TBL_SZ_MASK; + + /* MSI-X count is zero-based in HW */ + msix_count++; + + if (msix_count > FM10K_MAX_MSIX_VECTORS) + msix_count = FM10K_MAX_MSIX_VECTORS; + + return msix_count; +} + +/** + * fm10k_init_ops_generic - Inits function ptrs + * @hw: pointer to the hardware structure + * + * Initialize the function pointers. + **/ +s32 fm10k_init_ops_generic(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + + DEBUGFUNC("fm10k_init_ops_generic"); + + /* MAC */ + mac->ops.get_bus_info = &fm10k_get_bus_info_generic; + + /* initialize GLORT state to avoid any false hits */ + mac->dglort_map = FM10K_DGLORTMAP_NONE; + + return FM10K_SUCCESS; +} + +/** + * fm10k_start_hw_generic - Prepare hardware for Tx/Rx + * @hw: pointer to hardware structure + * + * This function sets the Tx ready flag to indicate that the Tx path has + * been initialized. + **/ +s32 fm10k_start_hw_generic(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_start_hw_generic"); + + /* set flag indicating we are beginning Tx */ + hw->mac.tx_ready = true; + + return FM10K_SUCCESS; +} + +/** + * fm10k_disable_queues_generic - Stop Tx/Rx queues + * @hw: pointer to hardware structure + * @q_cnt: number of queues to be disabled + * + **/ +s32 fm10k_disable_queues_generic(struct fm10k_hw *hw, u16 q_cnt) +{ + u32 reg; + u16 i, time; + + DEBUGFUNC("fm10k_disable_queues_generic"); + + /* clear tx_ready to prevent any false hits for reset */ + hw->mac.tx_ready = false; + + /* clear the enable bit for all rings */ + for (i = 0; i < q_cnt; i++) { + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), + reg & ~FM10K_TXDCTL_ENABLE); + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), + reg & ~FM10K_RXQCTL_ENABLE); + } + + FM10K_WRITE_FLUSH(hw); + usec_delay(1); + + /* loop through all queues to verify that they are all disabled */ + for (i = 0, time = FM10K_QUEUE_DISABLE_TIMEOUT; time;) { + /* if we are at end of rings all rings are disabled */ + if (i == q_cnt) + return FM10K_SUCCESS; + + /* if queue enables cleared, then move to next ring pair */ + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + if (!~reg || !(reg & FM10K_TXDCTL_ENABLE)) { + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + if (!~reg || !(reg & FM10K_RXQCTL_ENABLE)) { + i++; + continue; + } + } + + /* decrement time and wait 1 usec */ + time--; + if (time) + usec_delay(1); + } + + return FM10K_ERR_REQUESTS_PENDING; +} + +/** + * fm10k_stop_hw_generic - Stop Tx/Rx units + * @hw: pointer to hardware structure + * + **/ +s32 fm10k_stop_hw_generic(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_stop_hw_generic"); + + return fm10k_disable_queues_generic(hw, hw->mac.max_queues); +} + +/** + * fm10k_read_hw_stats_32b - Reads value of 32-bit registers + * @hw: pointer to the hardware structure + * @addr: address of register containing a 32-bit value + * + * Function reads the content of the register and returns the delta + * between the base and the current value. 
+ * **/ +u32 fm10k_read_hw_stats_32b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat) +{ + u32 delta = FM10K_READ_REG(hw, addr) - stat->base_l; + + DEBUGFUNC("fm10k_read_hw_stats_32b"); + + if (FM10K_REMOVED(hw->hw_addr)) + stat->base_h = 0; + + return delta; +} + +/** + * fm10k_read_hw_stats_48b - Reads value of 48-bit registers + * @hw: pointer to the hardware structure + * @addr: address of register containing the lower 32-bit value + * + * Function reads the content of 2 registers, combined to represent a 48-bit + * statistical value. Extra processing is required to handle overflowing. + * Finally, a delta value is returned representing the difference between the + * values stored in registers and values stored in the statistic counters. + * **/ +STATIC u64 fm10k_read_hw_stats_48b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat) +{ + u32 count_l; + u32 count_h; + u32 count_tmp; + u64 delta; + + DEBUGFUNC("fm10k_read_hw_stats_48b"); + + count_h = FM10K_READ_REG(hw, addr + 1); + + /* Check for overflow */ + do { + count_tmp = count_h; + count_l = FM10K_READ_REG(hw, addr); + count_h = FM10K_READ_REG(hw, addr + 1); + } while (count_h != count_tmp); + + delta = ((u64)(count_h - stat->base_h) << 32) + count_l; + delta -= stat->base_l; + + return delta & FM10K_48_BIT_MASK; +} + +/** + * fm10k_update_hw_base_48b - Updates 48-bit statistic base value + * @stat: pointer to the hardware statistic structure + * @delta: value to be updated into the hardware statistic structure + * + * Function receives a value and determines if an update is required based on + * a delta calculation. Only the base value will be updated. + **/ +STATIC void fm10k_update_hw_base_48b(struct fm10k_hw_stat *stat, u64 delta) +{ + DEBUGFUNC("fm10k_update_hw_base_48b"); + + if (!delta) + return; + + /* update lower 32 bits */ + delta += stat->base_l; + stat->base_l = (u32)delta; + + /* update upper 32 bits */ + stat->base_h += (u32)(delta >> 32); +} + +/** + * fm10k_update_hw_stats_tx_q - Updates TX queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * + * Function updates the TX queue statistics counters that are related to the + * hardware. 
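
[Editorial sketch, not part of the patch] The 48-bit byte counters live in a register pair; fm10k_read_hw_stats_48b reads the high word before and after the low word and retries until it is stable, so a low-word rollover cannot be missed, and fm10k_update_hw_base_48b folds the delta back into the stored base. A self-contained sketch of the same pattern, assuming a simulated register pair instead of FM10K_READ_REG:

#include <stdint.h>
#include <stdio.h>

/* simulated 48-bit counter split across two 32-bit "registers" */
static uint32_t reg_lo, reg_hi;

struct stat48 {
	uint32_t base_l;  /* low 32 bits of the recorded base */
	uint32_t base_h;  /* high 16 bits of the recorded base */
};

/* return the delta against the stored base, re-reading the high word
 * until it is stable across the low-word read
 */
static uint64_t read_stat_48b(struct stat48 *s)
{
	uint32_t lo, hi, tmp;
	uint64_t delta;

	hi = reg_hi;
	do {
		tmp = hi;
		lo = reg_lo;
		hi = reg_hi;
	} while (hi != tmp);

	delta = ((uint64_t)(hi - s->base_h) << 32) + lo;
	delta -= s->base_l;

	return delta & 0xFFFFFFFFFFFFULL;  /* keep 48 bits */
}

/* fold the delta back into the base, mirroring fm10k_update_hw_base_48b */
static void update_base_48b(struct stat48 *s, uint64_t delta)
{
	if (!delta)
		return;
	delta += s->base_l;
	s->base_l = (uint32_t)delta;
	s->base_h += (uint32_t)(delta >> 32);
}

int main(void)
{
	struct stat48 s = { 0, 0 };

	reg_lo = 0xFFFFFFF0; reg_hi = 0x0000;
	printf("delta=%llu\n", (unsigned long long)read_stat_48b(&s));
	update_base_48b(&s, read_stat_48b(&s));

	reg_lo = 0x00000010; reg_hi = 0x0001;   /* low word rolled over */
	printf("delta=%llu\n", (unsigned long long)read_stat_48b(&s));
	return 0;
}
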
+ **/ +STATIC void fm10k_update_hw_stats_tx_q(struct fm10k_hw *hw, + struct fm10k_hw_stats_q *q, + u32 idx) +{ + u32 id_tx, id_tx_prev, tx_packets; + u64 tx_bytes = 0; + + DEBUGFUNC("fm10k_update_hw_stats_tx_q"); + + /* Retrieve TX Owner Data */ + id_tx = FM10K_READ_REG(hw, FM10K_TXQCTL(idx)); + + /* Process TX Ring */ + do { + tx_packets = fm10k_read_hw_stats_32b(hw, FM10K_QPTC(idx), + &q->tx_packets); + + if (tx_packets) + tx_bytes = fm10k_read_hw_stats_48b(hw, + FM10K_QBTC_L(idx), + &q->tx_bytes); + + /* Re-Check Owner Data */ + id_tx_prev = id_tx; + id_tx = FM10K_READ_REG(hw, FM10K_TXQCTL(idx)); + } while ((id_tx ^ id_tx_prev) & FM10K_TXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id_tx &= FM10K_TXQCTL_ID_MASK; + id_tx |= FM10K_STAT_VALID; + + /* update packet counts */ + if (q->tx_stats_idx == id_tx) { + q->tx_packets.count += tx_packets; + q->tx_bytes.count += tx_bytes; + } + + /* update bases and record ID */ + fm10k_update_hw_base_32b(&q->tx_packets, tx_packets); + fm10k_update_hw_base_48b(&q->tx_bytes, tx_bytes); + + q->tx_stats_idx = id_tx; +} + +/** + * fm10k_update_hw_stats_rx_q - Updates RX queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * + * Function updates the RX queue statistics counters that are related to the + * hardware. + **/ +STATIC void fm10k_update_hw_stats_rx_q(struct fm10k_hw *hw, + struct fm10k_hw_stats_q *q, + u32 idx) +{ + u32 id_rx, id_rx_prev, rx_packets, rx_drops; + u64 rx_bytes = 0; + + DEBUGFUNC("fm10k_update_hw_stats_rx_q"); + + /* Retrieve RX Owner Data */ + id_rx = FM10K_READ_REG(hw, FM10K_RXQCTL(idx)); + + /* Process RX Ring */ + do { + rx_drops = fm10k_read_hw_stats_32b(hw, FM10K_QPRDC(idx), + &q->rx_drops); + + rx_packets = fm10k_read_hw_stats_32b(hw, FM10K_QPRC(idx), + &q->rx_packets); + + if (rx_packets) + rx_bytes = fm10k_read_hw_stats_48b(hw, + FM10K_QBRC_L(idx), + &q->rx_bytes); + + /* Re-Check Owner Data */ + id_rx_prev = id_rx; + id_rx = FM10K_READ_REG(hw, FM10K_RXQCTL(idx)); + } while ((id_rx ^ id_rx_prev) & FM10K_RXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id_rx &= FM10K_RXQCTL_ID_MASK; + id_rx |= FM10K_STAT_VALID; + + /* update packet counts */ + if (q->rx_stats_idx == id_rx) { + q->rx_drops.count += rx_drops; + q->rx_packets.count += rx_packets; + q->rx_bytes.count += rx_bytes; + } + + /* update bases and record ID */ + fm10k_update_hw_base_32b(&q->rx_drops, rx_drops); + fm10k_update_hw_base_32b(&q->rx_packets, rx_packets); + fm10k_update_hw_base_48b(&q->rx_bytes, rx_bytes); + + q->rx_stats_idx = id_rx; +} + +/** + * fm10k_update_hw_stats_q - Updates queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * @count: number of queues to iterate over + * + * Function updates the queue statistics counters that are related to the + * hardware. 
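
[Editorial sketch, not part of the patch] The per-queue update above samples the counters inside a retry loop guarded by the owner ID in the queue-control register: the ID is read before and after the sample, and the sample is discarded if the ID changed underneath. A simplified, standalone version of that consistent-snapshot loop with simulated registers and hypothetical names:

#include <stdint.h>
#include <stdio.h>

/* simulated queue-control register and packet counter */
static uint32_t qctl_owner = 0x5;   /* owner/ID bits */
static uint32_t qptc = 100;         /* packets counted by hardware */

#define QCTL_ID_MASK 0xFF

/* sample the packet counter, retrying while the owner ID changes,
 * mirroring the id_tx/id_tx_prev loop in the patch
 */
static uint32_t sample_tx_packets(uint32_t *owner_out)
{
	uint32_t id, id_prev, packets;

	id = qctl_owner;
	do {
		packets = qptc;
		id_prev = id;
		id = qctl_owner;
	} while ((id ^ id_prev) & QCTL_ID_MASK);

	*owner_out = id & QCTL_ID_MASK;
	return packets;
}

int main(void)
{
	uint32_t owner;
	uint32_t pkts = sample_tx_packets(&owner);

	printf("owner=%u packets=%u\n", owner, pkts);
	return 0;
}
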
+ **/ +void fm10k_update_hw_stats_q(struct fm10k_hw *hw, struct fm10k_hw_stats_q *q, + u32 idx, u32 count) +{ + u32 i; + + DEBUGFUNC("fm10k_update_hw_stats_q"); + + for (i = 0; i < count; i++, idx++, q++) { + fm10k_update_hw_stats_tx_q(hw, q, idx); + fm10k_update_hw_stats_rx_q(hw, q, idx); + } +} + +/** + * fm10k_unbind_hw_stats_q - Unbind the queue counters from their queues + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * @count: number of queues to iterate over + * + * Function invalidates the index values for the queues so any updates that + * may have happened are ignored and the base for the queue stats is reset. + **/ +void fm10k_unbind_hw_stats_q(struct fm10k_hw_stats_q *q, u32 idx, u32 count) +{ + u32 i; + + for (i = 0; i < count; i++, idx++, q++) { + q->rx_stats_idx = 0; + q->tx_stats_idx = 0; + } +} + +/** + * fm10k_get_host_state_generic - Returns the state of the host + * @hw: pointer to hardware structure + * @host_ready: pointer to boolean value that will record host state + * + * This function will check the health of the mailbox and Tx queue 0 + * in order to determine if we should report that the link is up or not. + **/ +s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + struct fm10k_mac_info *mac = &hw->mac; + s32 ret_val = FM10K_SUCCESS; + u32 txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(0)); + + DEBUGFUNC("fm10k_get_host_state_generic"); + + /* process upstream mailbox in case interrupts were disabled */ + mbx->ops.process(hw, mbx); + + /* If Tx is no longer enabled link should come down */ + if (!(~txdctl) || !(txdctl & FM10K_TXDCTL_ENABLE)) + mac->get_host_state = true; + + /* exit if not checking for link, or link cannot be changed */ + if (!mac->get_host_state || !(~txdctl)) + goto out; + + /* if we somehow dropped the Tx enable we should reset */ + if (hw->mac.tx_ready && !(txdctl & FM10K_TXDCTL_ENABLE)) { + ret_val = FM10K_ERR_RESET_REQUESTED; + goto out; + } + + /* if Mailbox timed out we should request reset */ + if (!mbx->timeout) { + ret_val = FM10K_ERR_RESET_REQUESTED; + goto out; + } + + /* verify Mailbox is still valid */ + if (!mbx->ops.tx_ready(mbx, FM10K_VFMBX_MSG_MTU)) + goto out; + + /* interface cannot receive traffic without logical ports */ + if (mac->dglort_map == FM10K_DGLORTMAP_NONE) + goto out; + + /* if we passed all the tests above then the switch is ready and we no + * longer need to check for link + */ + mac->get_host_state = false; + +out: + *host_ready = !mac->get_host_state; + return ret_val; +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_common.h b/lib/librte_pmd_fm10k/base/fm10k_common.h new file mode 100644 index 0000000..45fbbc0 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_common.h @@ -0,0 +1,52 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. 
Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_COMMON_H_ +#define _FM10K_COMMON_H_ + +#include "fm10k_type.h" + +u16 fm10k_get_pcie_msix_count_generic(struct fm10k_hw *hw); +s32 fm10k_init_ops_generic(struct fm10k_hw *hw); +s32 fm10k_disable_queues_generic(struct fm10k_hw *hw, u16 q_cnt); +s32 fm10k_start_hw_generic(struct fm10k_hw *hw); +s32 fm10k_stop_hw_generic(struct fm10k_hw *hw); +u32 fm10k_read_hw_stats_32b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat); +#define fm10k_update_hw_base_32b(stat, delta) ((stat)->base_l += (delta)) +void fm10k_update_hw_stats_q(struct fm10k_hw *hw, struct fm10k_hw_stats_q *q, + u32 idx, u32 count); +#define fm10k_unbind_hw_stats_32b(s) ((s)->base_h = 0) +void fm10k_unbind_hw_stats_q(struct fm10k_hw_stats_q *q, u32 idx, u32 count); +s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready); +#endif /* _FM10K_COMMON_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_mbx.c b/lib/librte_pmd_fm10k/base/fm10k_mbx.c new file mode 100644 index 0000000..2081414 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_mbx.c @@ -0,0 +1,2185 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_common.h" + +/** + * fm10k_fifo_init - Initialize a message FIFO + * @fifo: pointer to FIFO + * @buffer: pointer to memory to be used to store FIFO + * @size: maximum message size to store in FIFO, must be 2^n - 1 + **/ +STATIC void fm10k_fifo_init(struct fm10k_mbx_fifo *fifo, u32 *buffer, u16 size) +{ + fifo->buffer = buffer; + fifo->size = size; + fifo->head = 0; + fifo->tail = 0; +} + +/** + * fm10k_fifo_used - Retrieve used space in FIFO + * @fifo: pointer to FIFO + * + * This function returns the number of DWORDs used in the FIFO + **/ +STATIC u16 fm10k_fifo_used(struct fm10k_mbx_fifo *fifo) +{ + return fifo->tail - fifo->head; +} + +/** + * fm10k_fifo_unused - Retrieve unused space in FIFO + * @fifo: pointer to FIFO + * + * This function returns the number of unused DWORDs in the FIFO + **/ +STATIC u16 fm10k_fifo_unused(struct fm10k_mbx_fifo *fifo) +{ + return fifo->size + fifo->head - fifo->tail; +} + +/** + * fm10k_fifo_empty - Test to verify if fifo is empty + * @fifo: pointer to FIFO + * + * This function returns true if the FIFO is empty, else false + **/ +STATIC bool fm10k_fifo_empty(struct fm10k_mbx_fifo *fifo) +{ + return fifo->head == fifo->tail; +} + +/** + * fm10k_fifo_head_offset - returns indices of head with given offset + * @fifo: pointer to FIFO + * @offset: offset to add to head + * + * This function returns the indices into the fifo based on head + offset + **/ +STATIC u16 fm10k_fifo_head_offset(struct fm10k_mbx_fifo *fifo, u16 offset) +{ + return (fifo->head + offset) & (fifo->size - 1); +} + +/** + * fm10k_fifo_tail_offset - returns indices of tail with given offset + * @fifo: pointer to FIFO + * @offset: offset to add to tail + * + * This function returns the indices into the fifo based on tail + offset + **/ +STATIC u16 fm10k_fifo_tail_offset(struct fm10k_mbx_fifo *fifo, u16 offset) +{ + return (fifo->tail + offset) & (fifo->size - 1); +} + +/** + * fm10k_fifo_head_len - Retrieve length of first message in FIFO + * @fifo: pointer to FIFO + * + * This function returns the size of the first message in the FIFO + **/ +STATIC u16 fm10k_fifo_head_len(struct fm10k_mbx_fifo *fifo) +{ + u32 *head = fifo->buffer + fm10k_fifo_head_offset(fifo, 0); + + /* verify there is at least 1 DWORD in the fifo so *head is valid */ + if (fm10k_fifo_empty(fifo)) + return 0; + + /* retieve the message length */ + return FM10K_TLV_DWORD_LEN(*head); +} + +/** + * fm10k_fifo_head_drop - Drop the first message in FIFO + * @fifo: pointer to FIFO + * + * This function returns the size of the message dropped from the FIFO + **/ +STATIC u16 fm10k_fifo_head_drop(struct fm10k_mbx_fifo *fifo) +{ + u16 len = fm10k_fifo_head_len(fifo); + + /* update head so it is at the start of next frame */ + fifo->head += len; + + return len; +} + +/** + * fm10k_mbx_index_len - Convert a head/tail index into a length value + * @mbx: pointer to mailbox + * @head: head index + * @tail: head index + * + * 
This function takes the head and tail index and determines the length + * of the data indicated by this pair. + **/ +STATIC u16 fm10k_mbx_index_len(struct fm10k_mbx_info *mbx, u16 head, u16 tail) +{ + u16 len = tail - head; + + /* we wrapped so subtract 2, one for index 0, one for all 1s index */ + if (len > tail) + len -= 2; + + return len & ((mbx->mbmem_len << 1) - 1); +} + +/** + * fm10k_mbx_tail_add - Determine new tail value with added offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local tail index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_tail_add(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 tail = (mbx->tail + offset + 1) & ((mbx->mbmem_len << 1) - 1); + + /* add/sub 1 because we cannot have offset 0 or all 1s */ + return (tail > mbx->tail) ? --tail : ++tail; +} + +/** + * fm10k_mbx_tail_sub - Determine new tail value with subtracted offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local tail index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_tail_sub(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 tail = (mbx->tail - offset - 1) & ((mbx->mbmem_len << 1) - 1); + + /* sub/add 1 because we cannot have offset 0 or all 1s */ + return (tail < mbx->tail) ? ++tail : --tail; +} + +/** + * fm10k_mbx_head_add - Determine new head value with added offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local head index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_head_add(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 head = (mbx->head + offset + 1) & ((mbx->mbmem_len << 1) - 1); + + /* add/sub 1 because we cannot have offset 0 or all 1s */ + return (head > mbx->head) ? --head : ++head; +} + +/** + * fm10k_mbx_head_sub - Determine new head value with subtracted offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local head index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_head_sub(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 head = (mbx->head - offset - 1) & ((mbx->mbmem_len << 1) - 1); + + /* sub/add 1 because we cannot have offset 0 or all 1s */ + return (head < mbx->head) ? ++head : --head; +} + +/** + * fm10k_mbx_pushed_tail_len - Retrieve the length of message being pushed + * @mbx: pointer to mailbox + * + * This function will return the length of the message currently being + * pushed onto the tail of the Rx queue. + **/ +STATIC u16 fm10k_mbx_pushed_tail_len(struct fm10k_mbx_info *mbx) +{ + u32 *tail = mbx->rx.buffer + fm10k_fifo_tail_offset(&mbx->rx, 0); + + /* pushed tail is only valid if pushed is set */ + if (!mbx->pushed) + return 0; + + return FM10K_TLV_DWORD_LEN(*tail); +} + +/** + * fm10k_fifo_write_copy - pulls data off of msg and places it in fifo + * @fifo: pointer to FIFO + * @msg: message array to populate + * @tail_offset: additional offset to add to tail pointer + * @len: length of FIFO to copy into message header + * + * This function will take a message and copy it into a section of the + * FIFO. In order to get something into a location other than just + * the tail you can use tail_offset to adjust the pointer. 
+ **/ +STATIC void fm10k_fifo_write_copy(struct fm10k_mbx_fifo *fifo, + const u32 *msg, u16 tail_offset, u16 len) +{ + u16 end = fm10k_fifo_tail_offset(fifo, tail_offset); + u32 *tail = fifo->buffer + end; + + /* track when we should cross the end of the FIFO */ + end = fifo->size - end; + + /* copy end of message before start of message */ + if (end < len) + memcpy(fifo->buffer, msg + end, (len - end) << 2); + else + end = len; + + /* Copy remaining message into Tx FIFO */ + memcpy(tail, msg, end << 2); +} + +/** + * fm10k_fifo_enqueue - Enqueues the message to the tail of the FIFO + * @fifo: pointer to FIFO + * @msg: message array to read + * + * This function enqueues a message up to the size specified by the length + * contained in the first DWORD of the message and will place at the tail + * of the FIFO. It will return 0 on success, or a negative value on error. + **/ +STATIC s32 fm10k_fifo_enqueue(struct fm10k_mbx_fifo *fifo, const u32 *msg) +{ + u16 len = FM10K_TLV_DWORD_LEN(*msg); + + DEBUGFUNC("fm10k_fifo_enqueue"); + + /* verify parameters */ + if (len > fifo->size) + return FM10K_MBX_ERR_SIZE; + + /* verify there is room for the message */ + if (len > fm10k_fifo_unused(fifo)) + return FM10K_MBX_ERR_NO_SPACE; + + /* Copy message into FIFO */ + fm10k_fifo_write_copy(fifo, msg, 0, len); + + /* memory barrier to guarantee FIFO is written before tail update */ + FM10K_WMB(); + + /* Update Tx FIFO tail */ + fifo->tail += len; + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_validate_msg_size - Validate incoming message based on size + * @mbx: pointer to mailbox + * @len: length of data pushed onto buffer + * + * This function analyzes the frame and will return a non-zero value when + * the start of a message larger than the mailbox is detected. + **/ +STATIC u16 fm10k_mbx_validate_msg_size(struct fm10k_mbx_info *mbx, u16 len) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 total_len = 0, msg_len; + u32 *msg; + + DEBUGFUNC("fm10k_mbx_validate_msg"); + + /* length should include previous amounts pushed */ + len += mbx->pushed; + + /* offset in message is based off of current message size */ + do { + msg = fifo->buffer + fm10k_fifo_tail_offset(fifo, total_len); + msg_len = FM10K_TLV_DWORD_LEN(*msg); + total_len += msg_len; + } while (total_len < len); + + /* message extends out of pushed section, but fits in FIFO */ + if ((len < total_len) && (msg_len <= mbx->rx.size)) + return 0; + + /* return length of invalid section */ + return (len < total_len) ? len : (len - total_len); +} + +/** + * fm10k_mbx_write_copy - pulls data off of Tx FIFO and places it in mbmem + * @mbx: pointer to mailbox + * + * This function will take a section of the Rx FIFO and copy it into the + mbx->tail--; + * mailbox memory. The offset in mbmem is based on the lower bits of the + * tail and len determines the length to copy. 
+ **/ +STATIC void fm10k_mbx_write_copy(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->tx; + u32 mbmem = mbx->mbmem_reg; + u32 *head = fifo->buffer; + u16 end, len, tail, mask; + + DEBUGFUNC("fm10k_mbx_write_copy"); + + if (!mbx->tail_len) + return; + + /* determine data length and mbmem tail index */ + mask = mbx->mbmem_len - 1; + len = mbx->tail_len; + tail = fm10k_mbx_tail_sub(mbx, len); + if (tail > mask) + tail++; + + /* determine offset in the ring */ + end = fm10k_fifo_head_offset(fifo, mbx->pulled); + head += end; + + /* memory barrier to guarantee data is ready to be read */ + FM10K_RMB(); + + /* Copy message from Tx FIFO */ + for (end = fifo->size - end; len; head = fifo->buffer) { + do { + /* adjust tail to match offset for FIFO */ + tail &= mask; + if (!tail) + tail++; + + /* write message to hardware FIFO */ + FM10K_WRITE_MBX(hw, mbmem + tail++, *(head++)); + } while (--len && --end); + } +} + +/** + * fm10k_mbx_pull_head - Pulls data off of head of Tx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @head: acknowledgement number last received + * + * This function will push the tail index forward based on the remote + * head index. It will then pull up to mbmem_len DWORDs off of the + * head of the FIFO and will place it in the MBMEM registers + * associated with the mailbox. + **/ +STATIC void fm10k_mbx_pull_head(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + u16 mbmem_len, len, ack = fm10k_mbx_index_len(mbx, head, mbx->tail); + struct fm10k_mbx_fifo *fifo = &mbx->tx; + + /* update number of bytes pulled and update bytes in transit */ + mbx->pulled += mbx->tail_len - ack; + + /* determine length of data to pull, reserve space for mbmem header */ + mbmem_len = mbx->mbmem_len - 1; + len = fm10k_fifo_used(fifo) - mbx->pulled; + if (len > mbmem_len) + len = mbmem_len; + + /* update tail and record number of bytes in transit */ + mbx->tail = fm10k_mbx_tail_add(mbx, len - ack); + mbx->tail_len = len; + + /* drop pulled messages from the FIFO */ + for (len = fm10k_fifo_head_len(fifo); + len && (mbx->pulled >= len); + len = fm10k_fifo_head_len(fifo)) { + mbx->pulled -= fm10k_fifo_head_drop(fifo); + mbx->tx_messages++; + mbx->tx_dwords += len; + } + + /* Copy message out from the Tx FIFO */ + fm10k_mbx_write_copy(hw, mbx); +} + +/** + * fm10k_mbx_read_copy - pulls data off of mbmem and places it in Rx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will take a section of the mailbox memory and copy it + * into the Rx FIFO. The offset is based on the lower bits of the + * head and len determines the length to copy. 
+ **/ +STATIC void fm10k_mbx_read_copy(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u32 mbmem = mbx->mbmem_reg ^ mbx->mbmem_len; + u32 *tail = fifo->buffer; + u16 end, len, head; + + DEBUGFUNC("fm10k_mbx_read_copy"); + + /* determine data length and mbmem head index */ + len = mbx->head_len; + head = fm10k_mbx_head_sub(mbx, len); + if (head >= mbx->mbmem_len) + head++; + + /* determine offset in the ring */ + end = fm10k_fifo_tail_offset(fifo, mbx->pushed); + tail += end; + + /* Copy message into Rx FIFO */ + for (end = fifo->size - end; len; tail = fifo->buffer) { + do { + /* adjust head to match offset for FIFO */ + head &= mbx->mbmem_len - 1; + if (!head) + head++; + + /* read message from hardware FIFO */ + *(tail++) = FM10K_READ_MBX(hw, mbmem + head++); + } while (--len && --end); + } + + /* memory barrier to guarantee FIFO is written before tail update */ + FM10K_WMB(); +} + +/** + * fm10k_mbx_push_tail - Pushes up to 15 DWORDs on to tail of FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @tail: tail index of message + * + * This function will first validate the tail index and size for the + * incoming message. It then updates the acknowledgment number and + * copies the data into the FIFO. It will return the number of messages + * dequeued on success and a negative value on error. + **/ +STATIC s32 fm10k_mbx_push_tail(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, + u16 tail) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 len, seq = fm10k_mbx_index_len(mbx, mbx->head, tail); + + DEBUGFUNC("fm10k_mbx_push_tail"); + + /* determine length of data to push */ + len = fm10k_fifo_unused(fifo) - mbx->pushed; + if (len > seq) + len = seq; + + /* update head and record bytes received */ + mbx->head = fm10k_mbx_head_add(mbx, len); + mbx->head_len = len; + + /* nothing to do if there is no data */ + if (!len) + return FM10K_SUCCESS; + + /* Copy msg into Rx FIFO */ + fm10k_mbx_read_copy(hw, mbx); + + /* determine if there are any invalid lengths in message */ + if (fm10k_mbx_validate_msg_size(mbx, len)) + return FM10K_MBX_ERR_SIZE; + + /* Update pushed */ + mbx->pushed += len; + + /* flush any completed messages */ + for (len = fm10k_mbx_pushed_tail_len(mbx); + len && (mbx->pushed >= len); + len = fm10k_mbx_pushed_tail_len(mbx)) { + fifo->tail += len; + mbx->pushed -= len; + mbx->rx_messages++; + mbx->rx_dwords += len; + } + + return FM10K_SUCCESS; +} + +/* pre-generated data for generating the CRC based on the poly 0xAC9A. 
*/ +static const u16 fm10k_crc_16b_table[256] = { + 0x0000, 0x7956, 0xF2AC, 0x8BFA, 0xBC6D, 0xC53B, 0x4EC1, 0x3797, + 0x21EF, 0x58B9, 0xD343, 0xAA15, 0x9D82, 0xE4D4, 0x6F2E, 0x1678, + 0x43DE, 0x3A88, 0xB172, 0xC824, 0xFFB3, 0x86E5, 0x0D1F, 0x7449, + 0x6231, 0x1B67, 0x909D, 0xE9CB, 0xDE5C, 0xA70A, 0x2CF0, 0x55A6, + 0x87BC, 0xFEEA, 0x7510, 0x0C46, 0x3BD1, 0x4287, 0xC97D, 0xB02B, + 0xA653, 0xDF05, 0x54FF, 0x2DA9, 0x1A3E, 0x6368, 0xE892, 0x91C4, + 0xC462, 0xBD34, 0x36CE, 0x4F98, 0x780F, 0x0159, 0x8AA3, 0xF3F5, + 0xE58D, 0x9CDB, 0x1721, 0x6E77, 0x59E0, 0x20B6, 0xAB4C, 0xD21A, + 0x564D, 0x2F1B, 0xA4E1, 0xDDB7, 0xEA20, 0x9376, 0x188C, 0x61DA, + 0x77A2, 0x0EF4, 0x850E, 0xFC58, 0xCBCF, 0xB299, 0x3963, 0x4035, + 0x1593, 0x6CC5, 0xE73F, 0x9E69, 0xA9FE, 0xD0A8, 0x5B52, 0x2204, + 0x347C, 0x4D2A, 0xC6D0, 0xBF86, 0x8811, 0xF147, 0x7ABD, 0x03EB, + 0xD1F1, 0xA8A7, 0x235D, 0x5A0B, 0x6D9C, 0x14CA, 0x9F30, 0xE666, + 0xF01E, 0x8948, 0x02B2, 0x7BE4, 0x4C73, 0x3525, 0xBEDF, 0xC789, + 0x922F, 0xEB79, 0x6083, 0x19D5, 0x2E42, 0x5714, 0xDCEE, 0xA5B8, + 0xB3C0, 0xCA96, 0x416C, 0x383A, 0x0FAD, 0x76FB, 0xFD01, 0x8457, + 0xAC9A, 0xD5CC, 0x5E36, 0x2760, 0x10F7, 0x69A1, 0xE25B, 0x9B0D, + 0x8D75, 0xF423, 0x7FD9, 0x068F, 0x3118, 0x484E, 0xC3B4, 0xBAE2, + 0xEF44, 0x9612, 0x1DE8, 0x64BE, 0x5329, 0x2A7F, 0xA185, 0xD8D3, + 0xCEAB, 0xB7FD, 0x3C07, 0x4551, 0x72C6, 0x0B90, 0x806A, 0xF93C, + 0x2B26, 0x5270, 0xD98A, 0xA0DC, 0x974B, 0xEE1D, 0x65E7, 0x1CB1, + 0x0AC9, 0x739F, 0xF865, 0x8133, 0xB6A4, 0xCFF2, 0x4408, 0x3D5E, + 0x68F8, 0x11AE, 0x9A54, 0xE302, 0xD495, 0xADC3, 0x2639, 0x5F6F, + 0x4917, 0x3041, 0xBBBB, 0xC2ED, 0xF57A, 0x8C2C, 0x07D6, 0x7E80, + 0xFAD7, 0x8381, 0x087B, 0x712D, 0x46BA, 0x3FEC, 0xB416, 0xCD40, + 0xDB38, 0xA26E, 0x2994, 0x50C2, 0x6755, 0x1E03, 0x95F9, 0xECAF, + 0xB909, 0xC05F, 0x4BA5, 0x32F3, 0x0564, 0x7C32, 0xF7C8, 0x8E9E, + 0x98E6, 0xE1B0, 0x6A4A, 0x131C, 0x248B, 0x5DDD, 0xD627, 0xAF71, + 0x7D6B, 0x043D, 0x8FC7, 0xF691, 0xC106, 0xB850, 0x33AA, 0x4AFC, + 0x5C84, 0x25D2, 0xAE28, 0xD77E, 0xE0E9, 0x99BF, 0x1245, 0x6B13, + 0x3EB5, 0x47E3, 0xCC19, 0xB54F, 0x82D8, 0xFB8E, 0x7074, 0x0922, + 0x1F5A, 0x660C, 0xEDF6, 0x94A0, 0xA337, 0xDA61, 0x519B, 0x28CD }; + +/** + * fm10k_crc_16b - Generate a 16 bit CRC for a region of 16 bit data + * @data: pointer to data to process + * @seed: seed value for CRC + * @len: length measured in 16 bits words + * + * This function will generate a CRC based on the polynomial 0xAC9A and + * whatever value is stored in the seed variable. Note that this + * value inverts the local seed and the result in order to capture all + * leading and trailing zeros. 
+ */ +STATIC u16 fm10k_crc_16b(const u32 *data, u16 seed, u16 len) +{ + u32 result = seed; + + while (len--) { + result ^= *(data++); + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + + if (!(len--)) + break; + + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + } + + return (u16)result; +} + +/** + * fm10k_fifo_crc - generate a CRC based off of FIFO data + * @fifo: pointer to FIFO + * @offset: offset point for start of FIFO + * @len: number of DWORDS words to process + * @seed: seed value for CRC + * + * This function generates a CRC for some region of the FIFO + **/ +STATIC u16 fm10k_fifo_crc(struct fm10k_mbx_fifo *fifo, u16 offset, + u16 len, u16 seed) +{ + u32 *data = fifo->buffer + offset; + + /* track when we should cross the end of the FIFO */ + offset = fifo->size - offset; + + /* if we are in 2 blocks process the end of the FIFO first */ + if (offset < len) { + seed = fm10k_crc_16b(data, seed, offset * 2); + data = fifo->buffer; + len -= offset; + } + + /* process any remaining bits */ + return fm10k_crc_16b(data, seed, len * 2); +} + +/** + * fm10k_mbx_update_local_crc - Update the local CRC for outgoing data + * @mbx: pointer to mailbox + * @head: head index provided by remote mailbox + * + * This function will generate the CRC for all data from the end of the + * last head update to the current one. It uses the result of the + * previous CRC as the seed for this update. The result is stored in + * mbx->local. + **/ +STATIC void fm10k_mbx_update_local_crc(struct fm10k_mbx_info *mbx, u16 head) +{ + u16 len = mbx->tail_len - fm10k_mbx_index_len(mbx, head, mbx->tail); + + /* determine the offset for the start of the region to be pulled */ + head = fm10k_fifo_head_offset(&mbx->tx, mbx->pulled); + + /* update local CRC to include all of the pulled data */ + mbx->local = fm10k_fifo_crc(&mbx->tx, head, len, mbx->local); +} + +/** + * fm10k_mbx_verify_remote_crc - Verify the CRC is correct for current data + * @mbx: pointer to mailbox + * + * This function will take all data that has been provided from the remote + * end and generate a CRC for it. This is stored in mbx->remote. The + * CRC for the header is then computed and if the result is non-zero this + * is an error and we signal an error dropping all data and resetting the + * connection. + */ +STATIC s32 fm10k_mbx_verify_remote_crc(struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 len = mbx->head_len; + u16 offset = fm10k_fifo_tail_offset(fifo, mbx->pushed) - len; + u16 crc; + + /* update the remote CRC if new data has been received */ + if (len) + mbx->remote = fm10k_fifo_crc(fifo, offset, len, mbx->remote); + + /* process the full header as we have to validate the CRC */ + crc = fm10k_crc_16b(&mbx->mbx_hdr, mbx->remote, 1); + + /* notify other end if we have a problem */ + return crc ? FM10K_MBX_ERR_CRC : FM10K_SUCCESS; +} + +/** + * fm10k_mbx_rx_ready - Indicates that a message is ready in the Rx FIFO + * @mbx: pointer to mailbox + * + * This function returns true if there is a message in the Rx FIFO to dequeue. 
+ **/ +STATIC bool fm10k_mbx_rx_ready(struct fm10k_mbx_info *mbx) +{ + u16 msg_size = fm10k_fifo_head_len(&mbx->rx); + + return msg_size && (fm10k_fifo_used(&mbx->rx) >= msg_size); +} + +/** + * fm10k_mbx_tx_ready - Indicates that the mailbox is in state ready for Tx + * @mbx: pointer to mailbox + * @len: verify free space is >= this value + * + * This function returns true if the mailbox is in a state ready to transmit. + **/ +STATIC bool fm10k_mbx_tx_ready(struct fm10k_mbx_info *mbx, u16 len) +{ + u16 fifo_unused = fm10k_fifo_unused(&mbx->tx); + + return (mbx->state == FM10K_STATE_OPEN) && (fifo_unused >= len); +} + +/** + * fm10k_mbx_tx_complete - Indicates that the Tx FIFO has been emptied + * @mbx: pointer to mailbox + * + * This function returns true if the Tx FIFO is empty. + **/ +STATIC bool fm10k_mbx_tx_complete(struct fm10k_mbx_info *mbx) +{ + return fm10k_fifo_empty(&mbx->tx); +} + +/** + * fm10k_mbx_deqeueue_rx - Dequeues the message from the head in the Rx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function dequeues messages and hands them off to the tlv parser. + * It will return the number of messages processed when called. + **/ +STATIC u16 fm10k_mbx_dequeue_rx(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + s32 err; + u16 cnt; + + /* parse Rx messages out of the Rx FIFO to empty it */ + for (cnt = 0; !fm10k_fifo_empty(fifo); cnt++) { + err = fm10k_tlv_msg_parse(hw, fifo->buffer + fifo->head, + mbx, mbx->msg_data); + if (err < 0) + mbx->rx_parse_err++; + + fm10k_fifo_head_drop(fifo); + } + + /* shift remaining bytes back to start of FIFO */ + memmove(fifo->buffer, fifo->buffer + fifo->tail, mbx->pushed << 2); + + /* shift head and tail based on the memory we moved */ + fifo->tail -= fifo->head; + fifo->head = 0; + + return cnt; +} + +/** + * fm10k_mbx_enqueue_tx - Enqueues the message to the tail of the Tx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg: message array to read + * + * This function enqueues a message up to the size specified by the length + * contained in the first DWORD of the message and will place at the tail + * of the FIFO. It will return 0 on success, or a negative value on error. 
+ **/ +STATIC s32 fm10k_mbx_enqueue_tx(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, const u32 *msg) +{ + u32 countdown = mbx->timeout; + s32 err; + + switch (mbx->state) { + case FM10K_STATE_CLOSED: + case FM10K_STATE_DISCONNECT: + return FM10K_MBX_ERR_NO_MBX; + default: + break; + } + + /* enqueue the message on the Tx FIFO */ + err = fm10k_fifo_enqueue(&mbx->tx, msg); + + /* if it failed give the FIFO a chance to drain */ + while (err && countdown) { + countdown--; + usec_delay(mbx->usec_delay); + mbx->ops.process(hw, mbx); + err = fm10k_fifo_enqueue(&mbx->tx, msg); + } + + /* if we failed treat the error */ + if (err) { + mbx->timeout = 0; + mbx->tx_busy++; + } + + /* begin processing message, ignore errors as this is just meant + * to start the mailbox flow so we are not concerned if there + * is a bad error, or the mailbox is already busy with a request + */ + if (!mbx->tail_len) + mbx->ops.process(hw, mbx); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_read - Copies the mbmem to local message buffer + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function copies the message from the mbmem to the message array + **/ +STATIC s32 fm10k_mbx_read(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_read"); + + /* only allow one reader in here at a time */ + if (mbx->mbx_hdr) + return FM10K_MBX_ERR_BUSY; + + /* read to capture initial interrupt bits */ + if (FM10K_READ_MBX(hw, mbx->mbx_reg) & FM10K_MBX_REQ_INTERRUPT) + mbx->mbx_lock = FM10K_MBX_ACK; + + /* write back interrupt bits to clear */ + FM10K_WRITE_MBX(hw, mbx->mbx_reg, + FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT); + + /* read remote header */ + mbx->mbx_hdr = FM10K_READ_MBX(hw, mbx->mbmem_reg ^ mbx->mbmem_len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_write - Copies the local message buffer to mbmem + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function copies the message from the the message array to mbmem + **/ +STATIC void fm10k_mbx_write(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + u32 mbmem = mbx->mbmem_reg; + + DEBUGFUNC("fm10k_mbx_write"); + + /* write new msg header to notify recipient of change */ + FM10K_WRITE_MBX(hw, mbmem, mbx->mbx_hdr); + + /* write mailbox to send interrupt */ + if (mbx->mbx_lock) + FM10K_WRITE_MBX(hw, mbx->mbx_reg, mbx->mbx_lock); + + /* we no longer are using the header so free it */ + mbx->mbx_hdr = 0; + mbx->mbx_lock = 0; +} + +/** + * fm10k_mbx_create_connect_hdr - Generate a connect mailbox header + * @mbx: pointer to mailbox + * + * This function returns a connection mailbox header + **/ +STATIC void fm10k_mbx_create_connect_hdr(struct fm10k_mbx_info *mbx) +{ + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_CONNECT, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD) | + FM10K_MSG_HDR_FIELD_SET(mbx->rx.size - 1, CONNECT_SIZE); +} + +/** + * fm10k_mbx_create_data_hdr - Generate a data mailbox header + * @mbx: pointer to mailbox + * + * This function returns a data mailbox header + **/ +STATIC void fm10k_mbx_create_data_hdr(struct fm10k_mbx_info *mbx) +{ + u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DATA, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->tail, TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); + struct fm10k_mbx_fifo *fifo = &mbx->tx; + u16 crc; + + if (mbx->tail_len) + mbx->mbx_lock |= FM10K_MBX_REQ; + + /* generate CRC for data in flight and header */ + crc = fm10k_fifo_crc(fifo, fm10k_fifo_head_offset(fifo, mbx->pulled), + 
mbx->tail_len, mbx->local); + crc = fm10k_crc_16b(&hdr, crc, 1); + + /* load header to memory to be written */ + mbx->mbx_hdr = hdr | FM10K_MSG_HDR_FIELD_SET(crc, CRC); +} + +/** + * fm10k_mbx_create_disconnect_hdr - Generate a disconnect mailbox header + * @mbx: pointer to mailbox + * + * This function returns a disconnect mailbox header + **/ +STATIC void fm10k_mbx_create_disconnect_hdr(struct fm10k_mbx_info *mbx) +{ + u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DISCONNECT, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->tail, TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); + u16 crc = fm10k_crc_16b(&hdr, mbx->local, 1); + + mbx->mbx_lock |= FM10K_MBX_ACK; + + /* load header to memory to be written */ + mbx->mbx_hdr = hdr | FM10K_MSG_HDR_FIELD_SET(crc, CRC); +} + +/** + * fm10k_mbx_create_error_msg - Generate a error message + * @mbx: pointer to mailbox + * @err: local error encountered + * + * This function will interpret the error provided by err, and based on + * that it may shift the message by 1 DWORD and then place an error header + * at the start of the message. + **/ +STATIC void fm10k_mbx_create_error_msg(struct fm10k_mbx_info *mbx, s32 err) +{ + /* only generate an error message for these types */ + switch (err) { + case FM10K_MBX_ERR_TAIL: + case FM10K_MBX_ERR_HEAD: + case FM10K_MBX_ERR_TYPE: + case FM10K_MBX_ERR_SIZE: + case FM10K_MBX_ERR_RSVD0: + case FM10K_MBX_ERR_CRC: + break; + default: + return; + } + + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_ERROR, TYPE) | + FM10K_MSG_HDR_FIELD_SET(err, ERR_NO) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); +} + +/** + * fm10k_mbx_validate_msg_hdr - Validate common fields in the message header + * @mbx: pointer to mailbox + * @msg: message array to read + * + * This function will parse up the fields in the mailbox header and return + * an error if the header contains any of a number of invalid configurations + * including unrecognized type, invalid route, or a malformed message. 
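
[Editorial sketch, not part of the patch] The header validation below pulls its fields out of a single DWORD with the FM10K_MSG_HDR_FIELD_SET/GET shift-and-mask macros. Their exact field layout is defined elsewhere in the patch set, so the sketch uses made-up TYPE/HEAD/TAIL positions purely to show the pack/unpack pattern:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical field layout for illustration only; the real shifts and
 * sizes live in fm10k_mbx.h.
 */
#define HDR_TYPE_SHIFT  0
#define HDR_TYPE_SIZE   4
#define HDR_HEAD_SHIFT  4
#define HDR_HEAD_SIZE   12
#define HDR_TAIL_SHIFT  16
#define HDR_TAIL_SIZE   12

#define HDR_MASK(name)         ((1u << HDR_##name##_SIZE) - 1)
#define HDR_FIELD_SET(v, name)  (((uint32_t)(v) & HDR_MASK(name)) << HDR_##name##_SHIFT)
#define HDR_FIELD_GET(h, name)  (((h) >> HDR_##name##_SHIFT) & HDR_MASK(name))

int main(void)
{
	/* build a data-style header, then pull the fields back out */
	uint32_t hdr = HDR_FIELD_SET(2, TYPE) |
		       HDR_FIELD_SET(1, HEAD) |
		       HDR_FIELD_SET(5, TAIL);

	printf("type=%u head=%u tail=%u\n",
	       (unsigned)HDR_FIELD_GET(hdr, TYPE),
	       (unsigned)HDR_FIELD_GET(hdr, HEAD),
	       (unsigned)HDR_FIELD_GET(hdr, TAIL));
	return 0;
}
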
+ **/ +STATIC s32 fm10k_mbx_validate_msg_hdr(struct fm10k_mbx_info *mbx) +{ + u16 type, rsvd0, head, tail, size; + const u32 *hdr = &mbx->mbx_hdr; + + DEBUGFUNC("fm10k_mbx_validate_msg_hdr"); + + type = FM10K_MSG_HDR_FIELD_GET(*hdr, TYPE); + rsvd0 = FM10K_MSG_HDR_FIELD_GET(*hdr, RSVD0); + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + size = FM10K_MSG_HDR_FIELD_GET(*hdr, CONNECT_SIZE); + + if (rsvd0) + return FM10K_MBX_ERR_RSVD0; + + switch (type) { + case FM10K_MSG_DISCONNECT: + /* validate that all data has been received */ + if (tail != mbx->head) + return FM10K_MBX_ERR_TAIL; + + /* fall through */ + case FM10K_MSG_DATA: + /* validate that head is moving correctly */ + if (!head || (head == FM10K_MSG_HDR_MASK(HEAD))) + return FM10K_MBX_ERR_HEAD; + if (fm10k_mbx_index_len(mbx, head, mbx->tail) > mbx->tail_len) + return FM10K_MBX_ERR_HEAD; + + /* validate that tail is moving correctly */ + if (!tail || (tail == FM10K_MSG_HDR_MASK(TAIL))) + return FM10K_MBX_ERR_TAIL; + if (fm10k_mbx_index_len(mbx, mbx->head, tail) < mbx->mbmem_len) + break; + + return FM10K_MBX_ERR_TAIL; + case FM10K_MSG_CONNECT: + /* validate size is in range and is power of 2 mask */ + if ((size < FM10K_VFMBX_MSG_MTU) || (size & (size + 1))) + return FM10K_MBX_ERR_SIZE; + + /* fall through */ + case FM10K_MSG_ERROR: + if (!head || (head == FM10K_MSG_HDR_MASK(HEAD))) + return FM10K_MBX_ERR_HEAD; + /* neither create nor error include a tail offset */ + if (tail) + return FM10K_MBX_ERR_TAIL; + + break; + default: + return FM10K_MBX_ERR_TYPE; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_create_reply - Generate reply based on state and remote head + * @mbx: pointer to mailbox + * @head: acknowledgement number + * + * This function will generate an outgoing message based on the current + * mailbox state and the remote fifo head. It will return the length + * of the outgoing message excluding header on success, and a negative value + * on error. + **/ +STATIC s32 fm10k_mbx_create_reply(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* update our checksum for the outgoing data */ + fm10k_mbx_update_local_crc(mbx, head); + + /* as long as other end recognizes us keep sending data */ + fm10k_mbx_pull_head(hw, mbx, head); + + /* generate new header based on data */ + if (mbx->tail_len || (mbx->state == FM10K_STATE_OPEN)) + fm10k_mbx_create_data_hdr(mbx); + else + fm10k_mbx_create_disconnect_hdr(mbx); + break; + case FM10K_STATE_CONNECT: + /* send disconnect even if we aren't connected */ + fm10k_mbx_create_connect_hdr(mbx); + break; + case FM10K_STATE_CLOSED: + /* generate new header based on data */ + fm10k_mbx_create_disconnect_hdr(mbx); + default: + break; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_reset_work- Reset internal pointers for any pending work + * @mbx: pointer to mailbox + * + * This function will reset all internal pointers so any work in progress + * is dropped. This call should occur every time we transition from the + * open state to the connect state. 
+ **/ +STATIC void fm10k_mbx_reset_work(struct fm10k_mbx_info *mbx) +{ + /* reset our outgoing max size back to Rx limits */ + mbx->max_size = mbx->rx.size - 1; + + /* just do a quick resysnc to start of message */ + mbx->pushed = 0; + mbx->pulled = 0; + mbx->tail_len = 0; + mbx->head_len = 0; + mbx->rx.tail = 0; + mbx->rx.head = 0; +} + +/** + * fm10k_mbx_update_max_size - Update the max_size and drop any large messages + * @mbx: pointer to mailbox + * @size: new value for max_size + * + * This function will update the max_size value and drop any outgoing messages + * from the head of the Tx FIFO that are larger than max_size. + **/ +STATIC void fm10k_mbx_update_max_size(struct fm10k_mbx_info *mbx, u16 size) +{ + u16 len; + + DEBUGFUNC("fm10k_mbx_update_max_size_hdr"); + + mbx->max_size = size; + + /* flush any oversized messages from the queue */ + for (len = fm10k_fifo_head_len(&mbx->tx); + len > size; + len = fm10k_fifo_head_len(&mbx->tx)) { + fm10k_fifo_head_drop(&mbx->tx); + mbx->tx_dropped++; + } +} + +/** + * fm10k_mbx_connect_reset - Reset following request for reset + * @mbx: pointer to mailbox + * + * This function resets the mailbox to either a disconnected state + * or a connect state depending on the current mailbox state + **/ +STATIC void fm10k_mbx_connect_reset(struct fm10k_mbx_info *mbx) +{ + /* just do a quick resysnc to start of frame */ + fm10k_mbx_reset_work(mbx); + + /* reset CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* we cannot exit connect until the size is good */ + if (mbx->state == FM10K_STATE_OPEN) + mbx->state = FM10K_STATE_CONNECT; + else + mbx->state = FM10K_STATE_CLOSED; +} + +/** + * fm10k_mbx_process_connect - Process connect header + * @mbx: pointer to mailbox + * @msg: message array to process + * + * This function will read an incoming connect header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. + **/ +STATIC s32 fm10k_mbx_process_connect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + const u32 *hdr = &mbx->mbx_hdr; + u16 size, head; + + /* we will need to pull all of the fields for verification */ + size = FM10K_MSG_HDR_FIELD_GET(*hdr, CONNECT_SIZE); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + + switch (state) { + case FM10K_STATE_DISCONNECT: + case FM10K_STATE_OPEN: + /* reset any in-progress work */ + fm10k_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* we cannot exit connect until the size is good */ + if (size > mbx->rx.size) { + mbx->max_size = mbx->rx.size - 1; + } else { + /* record the remote system requesting connection */ + mbx->state = FM10K_STATE_OPEN; + + fm10k_mbx_update_max_size(mbx, size); + } + break; + default: + break; + } + + /* align our tail index to remote head index */ + mbx->tail = head; + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_data - Process data header + * @mbx: pointer to mailbox + * + * This function will read an incoming data header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. 
+ **/ +STATIC s32 fm10k_mbx_process_data(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 head, tail; + s32 err; + + DEBUGFUNC("fm10k_mbx_process_data"); + + /* we will need to pull all of the fields for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL); + + /* if we are in connect just update our data and go */ + if (mbx->state == FM10K_STATE_CONNECT) { + mbx->tail = head; + mbx->state = FM10K_STATE_OPEN; + } + + /* abort on message size errors */ + err = fm10k_mbx_push_tail(hw, mbx, tail); + if (err < 0) + return err; + + /* verify the checksum on the incoming data */ + err = fm10k_mbx_verify_remote_crc(mbx); + if (err) + return err; + + /* process messages if we have received any */ + fm10k_mbx_dequeue_rx(hw, mbx); + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_disconnect - Process disconnect header + * @mbx: pointer to mailbox + * + * This function will read an incoming disconnect header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. + **/ +STATIC s32 fm10k_mbx_process_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + const u32 *hdr = &mbx->mbx_hdr; + u16 head; + s32 err; + + /* we will need to pull the header field for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + + /* We should not be receiving disconnect if Rx is incomplete */ + if (mbx->pushed) + return FM10K_MBX_ERR_TAIL; + + /* we have already verified mbx->head == tail so we know this is 0 */ + mbx->head_len = 0; + + /* verify the checksum on the incoming header is correct */ + err = fm10k_mbx_verify_remote_crc(mbx); + if (err) + return err; + + switch (state) { + case FM10K_STATE_DISCONNECT: + case FM10K_STATE_OPEN: + /* state doesn't change if we still have work to do */ + if (!fm10k_mbx_tx_complete(mbx)) + break; + + /* verify the head indicates we completed all transmits */ + if (head != mbx->tail) + return FM10K_MBX_ERR_HEAD; + + /* reset any in-progress work */ + fm10k_mbx_connect_reset(mbx); + break; + default: + break; + } + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_error - Process error header + * @mbx: pointer to mailbox + * + * This function will read an incoming error header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. 
+ **/ +STATIC s32 fm10k_mbx_process_error(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + s32 err_no; + u16 head; + + /* we will need to pull all of the fields for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + + /* we only have lower 10 bits of error number so add upper bits */ + err_no = FM10K_MSG_HDR_FIELD_GET(*hdr, ERR_NO); + err_no |= ~FM10K_MSG_HDR_MASK(ERR_NO); + + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* flush any uncompleted work */ + fm10k_mbx_reset_work(mbx); + + /* reset CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* reset tail index and size to prepare for reconnect */ + mbx->tail = head; + + /* if open then reset max_size and go back to connect */ + if (mbx->state == FM10K_STATE_OPEN) { + mbx->state = FM10K_STATE_CONNECT; + break; + } + + /* send a connect message to get data flowing again */ + fm10k_mbx_create_connect_hdr(mbx); + return FM10K_SUCCESS; + default: + break; + } + + return fm10k_mbx_create_reply(hw, mbx, mbx->tail); +} + +/** + * fm10k_mbx_process - Process mailbox interrupt + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will process incoming mailbox events and generate mailbox + * replies. It will return a value indicating the number of DWORDs + * transmitted excluding header on success or a negative value on error. + **/ +STATIC s32 fm10k_mbx_process(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + s32 err; + + DEBUGFUNC("fm10k_mbx_process"); + + /* we do not read mailbox if closed */ + if (mbx->state == FM10K_STATE_CLOSED) + return FM10K_SUCCESS; + + /* copy data from mailbox */ + err = fm10k_mbx_read(hw, mbx); + if (err) + return err; + + /* validate type, source, and destination */ + err = fm10k_mbx_validate_msg_hdr(mbx); + if (err < 0) + goto msg_err; + + switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, TYPE)) { + case FM10K_MSG_CONNECT: + err = fm10k_mbx_process_connect(hw, mbx); + break; + case FM10K_MSG_DATA: + err = fm10k_mbx_process_data(hw, mbx); + break; + case FM10K_MSG_DISCONNECT: + err = fm10k_mbx_process_disconnect(hw, mbx); + break; + case FM10K_MSG_ERROR: + err = fm10k_mbx_process_error(hw, mbx); + break; + default: + err = FM10K_MBX_ERR_TYPE; + break; + } + +msg_err: + /* notify partner of errors on our end */ + if (err < 0) + fm10k_mbx_create_error_msg(mbx, err); + + /* copy data from mailbox */ + fm10k_mbx_write(hw, mbx); + + return err; +} + +/** + * fm10k_mbx_disconnect - Shutdown mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will shut down the mailbox. It places the mailbox first + * in the disconnect state, it then allows up to a predefined timeout for + * the mailbox to transition to close on its own. If this does not occur + * then the mailbox will be forced into the closed state. + * + * Any mailbox transactions not completed before calling this function + * are not guaranteed to complete and may be dropped. + **/ +STATIC void fm10k_mbx_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + int timeout = mbx->timeout ? 
FM10K_MBX_DISCONNECT_TIMEOUT : 0; + + DEBUGFUNC("fm10k_mbx_disconnect"); + + /* Place mbx in ready to disconnect state */ + mbx->state = FM10K_STATE_DISCONNECT; + + /* trigger interrupt to start shutdown process */ + FM10K_WRITE_MBX(hw, mbx->mbx_reg, FM10K_MBX_REQ | + FM10K_MBX_INTERRUPT_DISABLE); + do { + usec_delay(FM10K_MBX_POLL_DELAY); + mbx->ops.process(hw, mbx); + timeout -= FM10K_MBX_POLL_DELAY; + } while ((timeout > 0) && (mbx->state != FM10K_STATE_CLOSED)); + + /* in case we didn't close just force the mailbox into shutdown */ + fm10k_mbx_connect_reset(mbx); + fm10k_mbx_update_max_size(mbx, 0); + + FM10K_WRITE_MBX(hw, mbx->mbmem_reg, 0); +} + +/** + * fm10k_mbx_connect - Start mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will initiate a mailbox connection. It will populate the + * mailbox with a broadcast connect message and then initialize the lock. + * This is safe since the connect message is a single DWORD so the mailbox + * transaction is guaranteed to be atomic. + * + * This function will return an error if the mailbox has not been initiated + * or is currently in use. + **/ +STATIC s32 fm10k_mbx_connect(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_connect"); + + /* we cannot connect an uninitialized mailbox */ + if (!mbx->rx.buffer) + return FM10K_MBX_ERR_NO_SPACE; + + /* we cannot connect an already connected mailbox */ + if (mbx->state != FM10K_STATE_CLOSED) + return FM10K_MBX_ERR_BUSY; + + /* mailbox timeout can now become active */ + mbx->timeout = FM10K_MBX_INIT_TIMEOUT; + + /* Place mbx in ready to connect state */ + mbx->state = FM10K_STATE_CONNECT; + + /* initialize header of remote mailbox */ + fm10k_mbx_create_disconnect_hdr(mbx); + FM10K_WRITE_MBX(hw, mbx->mbmem_reg ^ mbx->mbmem_len, mbx->mbx_hdr); + + /* enable interrupt and notify other party of new message */ + mbx->mbx_lock = FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT | + FM10K_MBX_INTERRUPT_ENABLE; + + /* generate and load connect header into mailbox */ + fm10k_mbx_create_connect_hdr(mbx); + fm10k_mbx_write(hw, mbx); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_validate_handlers - Validate layout of message parsing data + * @msg_data: handlers for mailbox events + * + * This function validates the layout of the message parsing data. This + * should be mostly static, but it is important to catch any errors that + * are made when constructing the parsers. 
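
[Editorial sketch, not part of the patch] fm10k_mbx_validate_handlers below expects the handler array to be sorted by strictly increasing message ID and terminated by an FM10K_TLV_ERROR entry that still carries a function pointer. The real struct definitions live in fm10k_tlv.h, so this sketch uses simplified stand-ins and hypothetical handler names just to show the required shape:

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the fm10k_msg_data layout, illustration only */
#define TLV_ERROR 0xFFFF

struct msg_data {
	uint16_t id;
	int (*func)(uint32_t **results);
};

static int handle_mac_vlan(uint32_t **results) { (void)results; return 0; }
static int handle_lport(uint32_t **results)    { (void)results; return 0; }
static int handle_error(uint32_t **results)    { (void)results; return -1; }

/* IDs strictly increasing, terminated by a TLV_ERROR entry that still
 * has a handler: the shape the validator checks for
 */
static const struct msg_data handlers[] = {
	{ 1,         handle_mac_vlan },
	{ 2,         handle_lport },
	{ TLV_ERROR, handle_error },
};

int main(void)
{
	unsigned int i;

	for (i = 0; handlers[i].id != TLV_ERROR; i++)
		printf("handler registered for id %u\n", handlers[i].id);
	return 0;
}
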
+ **/ +STATIC s32 fm10k_mbx_validate_handlers(const struct fm10k_msg_data *msg_data) +{ + const struct fm10k_tlv_attr *attr; + unsigned int id; + + DEBUGFUNC("fm10k_mbx_validate_handlers"); + + /* Allow NULL mailboxes that transmit but don't receive */ + if (!msg_data) + return FM10K_SUCCESS; + + while (msg_data->id != FM10K_TLV_ERROR) { + /* all messages should have a function handler */ + if (!msg_data->func) + return FM10K_ERR_PARAM; + + /* parser is optional */ + attr = msg_data->attr; + if (attr) { + while (attr->id != FM10K_TLV_ERROR) { + id = attr->id; + attr++; + /* ID should always be increasing */ + if (id >= attr->id) + return FM10K_ERR_PARAM; + /* ID should fit in results array */ + if (id >= FM10K_TLV_RESULTS_MAX) + return FM10K_ERR_PARAM; + } + + /* verify terminator is in the list */ + if (attr->id != FM10K_TLV_ERROR) + return FM10K_ERR_PARAM; + } + + id = msg_data->id; + msg_data++; + /* ID should always be increasing */ + if (id >= msg_data->id) + return FM10K_ERR_PARAM; + } + + /* verify terminator is in the list */ + if ((msg_data->id != FM10K_TLV_ERROR) || !msg_data->func) + return FM10K_ERR_PARAM; + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_register_handlers - Register a set of handler ops for mailbox + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * + * This function associates a set of message handling ops with a mailbox. + **/ +STATIC s32 fm10k_mbx_register_handlers(struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data) +{ + DEBUGFUNC("fm10k_mbx_register_handlers"); + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + return FM10K_SUCCESS; +} + +/** + * fm10k_pfvf_mbx_init - Initialize mailbox memory for PF/VF mailbox + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * @id: ID reference for PF as it supports up to 64 PF/VF mailboxes + * + * This function initializes the mailbox for use. It will split the + * buffer provided an use that th populate both the Tx and Rx FIFO by + * evenly splitting it. In order to allow for easy masking of head/tail + * the value reported in size must be a power of 2 and is reported in + * DWORDs, not bytes. Any invalid values will cause the mailbox to return + * error. 
+ **/ +s32 fm10k_pfvf_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data, u8 id) +{ + DEBUGFUNC("fm10k_pfvf_mbx_init"); + + /* initialize registers */ + switch (hw->mac.type) { + case fm10k_mac_vf: + mbx->mbx_reg = FM10K_VFMBX; + mbx->mbmem_reg = FM10K_VFMBMEM(FM10K_VFMBMEM_VF_XOR); + break; + case fm10k_mac_pf: + /* there are only 64 VF <-> PF mailboxes */ + if (id < 64) { + mbx->mbx_reg = FM10K_MBX(id); + mbx->mbmem_reg = FM10K_MBMEM_VF(id, 0); + break; + } + /* fallthough */ + default: + return FM10K_MBX_ERR_NO_MBX; + } + + /* start out in closed state */ + mbx->state = FM10K_STATE_CLOSED; + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + /* start mailbox as timed out and let the reset_hw call + * set the timeout value to begin communications + */ + mbx->timeout = 0; + mbx->usec_delay = FM10K_MBX_INIT_DELAY; + + /* initialize tail and head */ + mbx->tail = 1; + mbx->head = 1; + + /* initialize CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* Split buffer for use by Tx/Rx FIFOs */ + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + mbx->mbmem_len = FM10K_VFMBMEM_VF_XOR; + + /* initialize the FIFOs, sizes are in 4 byte increments */ + fm10k_fifo_init(&mbx->tx, mbx->buffer, FM10K_MBX_TX_BUFFER_SIZE); + fm10k_fifo_init(&mbx->rx, &mbx->buffer[FM10K_MBX_TX_BUFFER_SIZE], + FM10K_MBX_RX_BUFFER_SIZE); + + /* initialize function pointers */ + mbx->ops.connect = fm10k_mbx_connect; + mbx->ops.disconnect = fm10k_mbx_disconnect; + mbx->ops.rx_ready = fm10k_mbx_rx_ready; + mbx->ops.tx_ready = fm10k_mbx_tx_ready; + mbx->ops.tx_complete = fm10k_mbx_tx_complete; + mbx->ops.enqueue_tx = fm10k_mbx_enqueue_tx; + mbx->ops.process = fm10k_mbx_process; + mbx->ops.register_handlers = fm10k_mbx_register_handlers; + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_create_data_hdr - Generate a mailbox header for local FIFO + * @mbx: pointer to mailbox + * + * This function returns a connection mailbox header + **/ +STATIC void fm10k_sm_mbx_create_data_hdr(struct fm10k_mbx_info *mbx) +{ + if (mbx->tail_len) + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(mbx->tail, SM_TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->remote, SM_VER) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, SM_HEAD); +} + +/** + * fm10k_sm_mbx_create_connect_hdr - Generate a mailbox header for local FIFO + * @mbx: pointer to mailbox + * @err: error flags to report if any + * + * This function returns a connection mailbox header + **/ +STATIC void fm10k_sm_mbx_create_connect_hdr(struct fm10k_mbx_info *mbx, u8 err) +{ + if (mbx->local) + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(mbx->tail, SM_TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->remote, SM_VER) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, SM_HEAD) | + FM10K_MSG_HDR_FIELD_SET(err, SM_ERR); +} + +/** + * fm10k_sm_mbx_connect_reset - Reset following request for reset + * @mbx: pointer to mailbox + * + * This function resets the mailbox to a just connected state + **/ +STATIC void fm10k_sm_mbx_connect_reset(struct fm10k_mbx_info *mbx) +{ + /* flush any uncompleted work */ + fm10k_mbx_reset_work(mbx); + + /* set local version to max and remote version to 0 */ + mbx->local = FM10K_SM_MBX_VERSION; + mbx->remote = 0; + + /* initialize tail and head */ + mbx->tail = 1; + mbx->head = 1; + + /* reset state back to connect */ + 
mbx->state = FM10K_STATE_CONNECT; +} + +/** + * fm10k_sm_mbx_connect - Start switch manager mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will initiate a mailbox connection with the switch + * manager. To do this it will first disconnect the mailbox, and then + * reconnect it in order to complete a reset of the mailbox. + * + * This function will return an error if the mailbox has not been initiated + * or is currently in use. + **/ +STATIC s32 fm10k_sm_mbx_connect(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_connect"); + + /* we cannot connect an uninitialized mailbox */ + if (!mbx->rx.buffer) + return FM10K_MBX_ERR_NO_SPACE; + + /* we cannot connect an already connected mailbox */ + if (mbx->state != FM10K_STATE_CLOSED) + return FM10K_MBX_ERR_BUSY; + + /* mailbox timeout can now become active */ + mbx->timeout = FM10K_MBX_INIT_TIMEOUT; + + /* Place mbx in ready to connect state */ + mbx->state = FM10K_STATE_CONNECT; + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + + /* reset interface back to connect */ + fm10k_sm_mbx_connect_reset(mbx); + + /* enable interrupt and notify other party of new message */ + mbx->mbx_lock = FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT | + FM10K_MBX_INTERRUPT_ENABLE; + + /* generate and load connect header into mailbox */ + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + fm10k_mbx_write(hw, mbx); + + /* enable interrupt and notify other party of new message */ + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_disconnect - Shutdown mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will shut down the mailbox. It places the mailbox first + * in the disconnect state, it then allows up to a predefined timeout for + * the mailbox to transition to close on its own. If this does not occur + * then the mailbox will be forced into the closed state. + * + * Any mailbox transactions not completed before calling this function + * are not guaranteed to complete and may be dropped. + **/ +STATIC void fm10k_sm_mbx_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + int timeout = mbx->timeout ? FM10K_MBX_DISCONNECT_TIMEOUT : 0; + + DEBUGFUNC("fm10k_sm_mbx_disconnect"); + + /* Place mbx in ready to disconnect state */ + mbx->state = FM10K_STATE_DISCONNECT; + + /* trigger interrupt to start shutdown process */ + FM10K_WRITE_REG(hw, mbx->mbx_reg, FM10K_MBX_REQ | + FM10K_MBX_INTERRUPT_DISABLE); + do { + usec_delay(FM10K_MBX_POLL_DELAY); + mbx->ops.process(hw, mbx); + timeout -= FM10K_MBX_POLL_DELAY; + } while ((timeout > 0) && (mbx->state != FM10K_STATE_CLOSED)); + + /* in case we didn't close just force the mailbox into shutdown */ + mbx->state = FM10K_STATE_CLOSED; + mbx->remote = 0; + fm10k_mbx_reset_work(mbx); + fm10k_mbx_update_max_size(mbx, 0); + + FM10K_WRITE_REG(hw, mbx->mbmem_reg, 0); +} + +/** + * fm10k_mbx_validate_fifo_hdr - Validate fields in the remote FIFO header + * @mbx: pointer to mailbox + * + * This function will parse up the fields in the mailbox header and return + * an error if the header contains any of a number of invalid configurations + * including unrecognized offsets or version numbers. 
+ **/ +STATIC s32 fm10k_sm_mbx_validate_fifo_hdr(struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 tail, head, ver; + + DEBUGFUNC("fm10k_mbx_validate_msg_hdr"); + + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_TAIL); + ver = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_VER); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_HEAD); + + switch (ver) { + case 0: + break; + case FM10K_SM_MBX_VERSION: + if (!head || head > FM10K_SM_MBX_FIFO_LEN) + return FM10K_MBX_ERR_HEAD; + if (!tail || tail > FM10K_SM_MBX_FIFO_LEN) + return FM10K_MBX_ERR_TAIL; + if (mbx->tail < head) + head += mbx->mbmem_len - 1; + if (tail < mbx->head) + tail += mbx->mbmem_len - 1; + if (fm10k_mbx_index_len(mbx, head, mbx->tail) > mbx->tail_len) + return FM10K_MBX_ERR_HEAD; + if (fm10k_mbx_index_len(mbx, mbx->head, tail) < mbx->mbmem_len) + break; + return FM10K_MBX_ERR_TAIL; + default: + return FM10K_MBX_ERR_SRC; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_process_error - Process header with error flag set + * @mbx: pointer to mailbox + * + * This function is meant to respond to a request where the error flag + * is set. As a result we will terminate a connection if one is present + * and fall back into the reset state with a connection header of version + * 0 (RESET). + **/ +STATIC void fm10k_sm_mbx_process_error(struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + + switch (state) { + case FM10K_STATE_DISCONNECT: + /* if there is an error just disconnect */ + mbx->remote = 0; + break; + case FM10K_STATE_OPEN: + /* flush any uncompleted work */ + fm10k_sm_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* try connnecting at lower version */ + if (mbx->remote) { + while (mbx->local > 1) + mbx->local--; + mbx->remote = 0; + } + break; + default: + break; + } + + fm10k_sm_mbx_create_connect_hdr(mbx, 0); +} + +/** + * fm10k_sm_mbx_create_error_message - Process an error in FIFO hdr + * @mbx: pointer to mailbox + * @err: local error encountered + * + * This function will interpret the error provided by err, and based on + * that it may set the error bit in the local message header + **/ +STATIC void fm10k_sm_mbx_create_error_msg(struct fm10k_mbx_info *mbx, s32 err) +{ + /* only generate an error message for these types */ + switch (err) { + case FM10K_MBX_ERR_TAIL: + case FM10K_MBX_ERR_HEAD: + case FM10K_MBX_ERR_SRC: + case FM10K_MBX_ERR_SIZE: + case FM10K_MBX_ERR_RSVD0: + break; + default: + return; + } + + /* process it as though we received an error, and send error reply */ + fm10k_sm_mbx_process_error(mbx); + fm10k_sm_mbx_create_connect_hdr(mbx, 1); +} + +/** + * fm10k_sm_mbx_receive - Take message from Rx mailbox FIFO and put it in Rx + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will dequeue one message from the Rx switch manager mailbox + * FIFO and place it in the Rx mailbox FIFO for processing by software. 
+ **/ +STATIC s32 fm10k_sm_mbx_receive(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, + u16 tail) +{ + /* reduce length by 1 to convert to a mask */ + u16 mbmem_len = mbx->mbmem_len - 1; + s32 err; + + DEBUGFUNC("fm10k_sm_mbx_receive"); + + /* push tail in front of head */ + if (tail < mbx->head) + tail += mbmem_len; + + /* copy data to the Rx FIFO */ + err = fm10k_mbx_push_tail(hw, mbx, tail); + if (err < 0) + return err; + + /* process messages if we have received any */ + fm10k_mbx_dequeue_rx(hw, mbx); + + /* guarantee head aligns with the end of the last message */ + mbx->head = fm10k_mbx_head_sub(mbx, mbx->pushed); + mbx->pushed = 0; + + /* clear any extra bits left over since index adds 1 extra bit */ + if (mbx->head > mbmem_len) + mbx->head -= mbmem_len; + + return err; +} + +/** + * fm10k_sm_mbx_transmit - Take message from Tx and put it in Tx mailbox FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will dequeue one message from the Tx mailbox FIFO and place + * it in the Tx switch manager mailbox FIFO for processing by hardware. + **/ +STATIC void fm10k_sm_mbx_transmit(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + struct fm10k_mbx_fifo *fifo = &mbx->tx; + /* reduce length by 1 to convert to a mask */ + u16 mbmem_len = mbx->mbmem_len - 1; + u16 tail_len, len = 0; + u32 *msg; + + DEBUGFUNC("fm10k_sm_mbx_transmit"); + + /* push head behind tail */ + if (mbx->tail < head) + head += mbmem_len; + + fm10k_mbx_pull_head(hw, mbx, head); + + /* determine msg aligned offset for end of buffer */ + do { + msg = fifo->buffer + fm10k_fifo_head_offset(fifo, len); + tail_len = len; + len += FM10K_TLV_DWORD_LEN(*msg); + } while ((len <= mbx->tail_len) && (len < mbmem_len)); + + /* guarantee we stop on a message boundary */ + if (mbx->tail_len > tail_len) { + mbx->tail = fm10k_mbx_tail_sub(mbx, mbx->tail_len - tail_len); + mbx->tail_len = tail_len; + } + + /* clear any extra bits left over since index adds 1 extra bit */ + if (mbx->tail > mbmem_len) + mbx->tail -= mbmem_len; +} + +/** + * fm10k_sm_mbx_create_reply - Generate reply based on state and remote head + * @mbx: pointer to mailbox + * @head: acknowledgement number + * + * This function will generate an outgoing message based on the current + * mailbox state and the remote fifo head. It will return the length + * of the outgoing message excluding header on success, and a negative value + * on error. + **/ +STATIC void fm10k_sm_mbx_create_reply(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* flush out Tx data */ + fm10k_sm_mbx_transmit(hw, mbx, head); + + /* generate new header based on data */ + if (mbx->tail_len || (mbx->state == FM10K_STATE_OPEN)) { + fm10k_sm_mbx_create_data_hdr(mbx); + } else { + mbx->remote = 0; + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + } + break; + case FM10K_STATE_CONNECT: + case FM10K_STATE_CLOSED: + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + break; + default: + break; + } +} + +/** + * fm10k_sm_mbx_process_reset - Process header with version == 0 (RESET) + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function is meant to respond to a request where the version data + * is set to 0. As such we will either terminate the connection or go + * into the connect state in order to re-establish the connection. 
This + * function can also be used to respond to an error as the connection + * resetting would also be a means of dealing with errors. + **/ +STATIC void fm10k_sm_mbx_process_reset(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + + switch (state) { + case FM10K_STATE_DISCONNECT: + /* drop remote connections and disconnect */ + mbx->state = FM10K_STATE_CLOSED; + mbx->remote = 0; + mbx->local = 0; + break; + case FM10K_STATE_OPEN: + /* flush any incomplete work */ + fm10k_sm_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* Update remote value to match local value */ + mbx->remote = mbx->local; + default: + break; + } + + fm10k_sm_mbx_create_reply(hw, mbx, mbx->tail); +} + +/** + * fm10k_sm_mbx_process_version_1 - Process header with version == 1 + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function is meant to process messages received when the remote + * mailbox is active. + **/ +STATIC s32 fm10k_sm_mbx_process_version_1(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 head, tail; + s32 len; + + /* pull all fields needed for verification */ + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_TAIL); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_HEAD); + + /* if we are in connect and wanting version 1 then start up and go */ + if (mbx->state == FM10K_STATE_CONNECT) { + if (!mbx->remote) + goto send_reply; + if (mbx->remote != 1) + return FM10K_MBX_ERR_SRC; + + mbx->state = FM10K_STATE_OPEN; + } + + do { + /* abort on message size errors */ + len = fm10k_sm_mbx_receive(hw, mbx, tail); + if (len < 0) + return len; + + /* continue until we have flushed the Rx FIFO */ + } while (len); + +send_reply: + fm10k_sm_mbx_create_reply(hw, mbx, head); + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_process - Process mailbox switch mailbox interrupt + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will process incoming mailbox events and generate mailbox + * replies. It will return a value indicating the number of DWORDs + * transmitted excluding header on success or a negative value on error. 
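Before the function body, it may help to note how a caller is expected to drive this: the PMD keeps a struct fm10k_mbx_info embedded in struct fm10k_hw (the PF code below reaches it as &hw->mbx) and simply invokes the ops table from its interrupt or polling context. A minimal sketch, assuming only the ops wiring established by the init functions in this file:

    /* sketch: service routine a caller might run on a mailbox interrupt */
    static s32 example_mbx_service(struct fm10k_hw *hw)
    {
            struct fm10k_mbx_info *mbx = &hw->mbx;

            /* nothing to do while the mailbox is closed */
            if (mbx->state == FM10K_STATE_CLOSED)
                    return FM10K_SUCCESS;

            /* process() reads the hardware mailbox, dispatches complete messages
             * to the registered handlers and writes back a reply header */
            return mbx->ops.process(hw, mbx);
    }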
+ **/ +STATIC s32 fm10k_sm_mbx_process(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + s32 err; + + DEBUGFUNC("fm10k_sm_mbx_process"); + + /* we do not read mailbox if closed */ + if (mbx->state == FM10K_STATE_CLOSED) + return FM10K_SUCCESS; + + /* retrieve data from switch manager */ + err = fm10k_mbx_read(hw, mbx); + if (err) + return err; + + err = fm10k_sm_mbx_validate_fifo_hdr(mbx); + if (err < 0) + goto fifo_err; + + if (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_ERR)) { + fm10k_sm_mbx_process_error(mbx); + goto fifo_err; + } + + switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_VER)) { + case 0: + fm10k_sm_mbx_process_reset(hw, mbx); + break; + case FM10K_SM_MBX_VERSION: + err = fm10k_sm_mbx_process_version_1(hw, mbx); + break; + } + +fifo_err: + if (err < 0) + fm10k_sm_mbx_create_error_msg(mbx, err); + + /* report data to switch manager */ + fm10k_mbx_write(hw, mbx); + + return err; +} + +/** + * fm10k_sm_mbx_init - Initialize mailbox memory for PF/SM mailbox + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * + * This function for now is used to stub out the PF/SM mailbox + **/ +s32 fm10k_sm_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data) +{ + DEBUGFUNC("fm10k_sm_mbx_init"); + UNREFERENCED_1PARAMETER(hw); + + mbx->mbx_reg = FM10K_GMBX; + mbx->mbmem_reg = FM10K_MBMEM_PF(0); + + /* start out in closed state */ + mbx->state = FM10K_STATE_CLOSED; + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + /* start mailbox as timed out and let the reset_hw call + * set the timeout value to begin communications + */ + mbx->timeout = 0; + mbx->usec_delay = FM10K_MBX_INIT_DELAY; + + /* Split buffer for use by Tx/Rx FIFOs */ + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + mbx->mbmem_len = FM10K_MBMEM_PF_XOR; + + /* initialize the FIFOs, sizes are in 4 byte increments */ + fm10k_fifo_init(&mbx->tx, mbx->buffer, FM10K_MBX_TX_BUFFER_SIZE); + fm10k_fifo_init(&mbx->rx, &mbx->buffer[FM10K_MBX_TX_BUFFER_SIZE], + FM10K_MBX_RX_BUFFER_SIZE); + + /* initialize function pointers */ + mbx->ops.connect = fm10k_sm_mbx_connect; + mbx->ops.disconnect = fm10k_sm_mbx_disconnect; + mbx->ops.rx_ready = fm10k_mbx_rx_ready; + mbx->ops.tx_ready = fm10k_mbx_tx_ready; + mbx->ops.tx_complete = fm10k_mbx_tx_complete; + mbx->ops.enqueue_tx = fm10k_mbx_enqueue_tx; + mbx->ops.process = fm10k_sm_mbx_process; + mbx->ops.register_handlers = fm10k_mbx_register_handlers; + + return FM10K_SUCCESS; +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_mbx.h b/lib/librte_pmd_fm10k/base/fm10k_mbx.h new file mode 100644 index 0000000..6332584 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_mbx.h @@ -0,0 +1,329 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. 
Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_MBX_H_ +#define _FM10K_MBX_H_ + +/* forward declaration */ +struct fm10k_mbx_info; + +#include "fm10k_type.h" +#include "fm10k_tlv.h" + +/* PF Mailbox Registers */ +#define FM10K_MBMEM(_n) ((_n) + 0x18000) +#define FM10K_MBMEM_VF(_n, _m) (((_n) * 0x10) + (_m) + 0x18000) +#define FM10K_MBMEM_SM(_n) ((_n) + 0x18400) +#define FM10K_MBMEM_PF(_n) ((_n) + 0x18600) +/* XOR provides means of switching from Tx to Rx FIFO */ +#define FM10K_MBMEM_PF_XOR (FM10K_MBMEM_SM(0) ^ FM10K_MBMEM_PF(0)) +#define FM10K_MBX(_n) ((_n) + 0x18800) +#define FM10K_MBX_OWNER 0x00000001 +#define FM10K_MBX_REQ 0x00000002 +#define FM10K_MBX_ACK 0x00000004 +#define FM10K_MBX_REQ_INTERRUPT 0x00000008 +#define FM10K_MBX_ACK_INTERRUPT 0x00000010 +#define FM10K_MBX_INTERRUPT_ENABLE 0x00000020 +#define FM10K_MBX_INTERRUPT_DISABLE 0x00000040 +#define FM10K_MBICR(_n) ((_n) + 0x18840) +#define FM10K_GMBX 0x18842 + +/* VF Mailbox Registers */ +#define FM10K_VFMBX 0x00010 +#define FM10K_VFMBMEM(_n) ((_n) + 0x00020) +#define FM10K_VFMBMEM_LEN 16 +#define FM10K_VFMBMEM_VF_XOR (FM10K_VFMBMEM_LEN / 2) + +/* Delays/timeouts */ +#define FM10K_MBX_DISCONNECT_TIMEOUT 500 +#define FM10K_MBX_POLL_DELAY 19 +#define FM10K_MBX_INT_DELAY 20 + +#define FM10K_WRITE_MBX(hw, reg, value) FM10K_WRITE_REG(hw, reg, value) + +/* PF/VF Mailbox state machine + * + * +----------+ connect() +----------+ + * | CLOSED | --------------> | CONNECT | + * +----------+ +----------+ + * ^ ^ | + * | rcv: rcv: | | rcv: + * | Connect Disconnect | | Connect + * | Disconnect Error | | Data + * | | | + * | | V + * +----------+ disconnect() +----------+ + * |DISCONNECT| <-------------- | OPEN | + * +----------+ +----------+ + * + * The diagram above describes the PF/VF mailbox state machine. There + * are four main states to this machine. + * Closed: This state represents a mailbox that is in a standby state + * with interrupts disabled. In this state the mailbox should not + * read the mailbox or write any data. The only means of exiting + * this state is for the system to make the connect() call for the + * mailbox, it will then transition to the connect state. + * Connect: In this state the mailbox is seeking a connection. It will + * post a connect message with no specified destination and will + * wait for a reply from the other side of the mailbox. This state + * is exited when either a connect with the local mailbox as the + * destination is received or when a data message is received with + * a valid sequence number. 
+ * Open: In this state the mailbox is able to transfer data between the local + * entity and the remote. It will fall back to connect in the event of + * receiving either an error message, or a disconnect message. It will + * transition to disconnect on a call to disconnect(); + * Disconnect: In this state the mailbox is attempting to gracefully terminate + * the connection. It will do so at the first point where it knows + * that the remote endpoint is either done sending, or when the + * remote endpoint has fallen back into connect. + */ +enum fm10k_mbx_state { + FM10K_STATE_CLOSED, + FM10K_STATE_CONNECT, + FM10K_STATE_OPEN, + FM10K_STATE_DISCONNECT, +}; + +/* PF/VF Mailbox header format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | Size/Err_no/CRC | Rsvd0 | Head | Tail | Type | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * The layout above describes the format for the header used in the PF/VF + * mailbox. The header is broken out into the following fields: + * Type: There are 4 supported message types + * 0x8: Data header - used to transport message data + * 0xC: Connect header - used to establish connection + * 0xD: Disconnect header - used to tear down a connection + * 0xE: Error header - used to address message exceptions + * Tail: Tail index for local FIFO + * Tail index actually consists of two parts. The MSB of + * the head is a loop tracker, it is 0 on an even numbered + * loop through the FIFO, and 1 on the odd numbered loops. + * To get the actual mailbox offset based on the tail it + * is necessary to add bit 3 to bit 0 and clear bit 3. This + * gives us a valid range of 0x1 - 0xE. + * Head: Head index for remote FIFO + * Head index follows the same format as the tail index. + * Rsvd0: Reserved 0 portion of the mailbox header + * CRC: Running CRC for all data since connect plus current message header + * Size: Maximum message size - Applies only to connect headers + * The maximum message size is provided during connect to avoid + * jamming the mailbox with messages that do not fit. + * Err_no: Error number - Applies only to error headers + * The error number provides a indication of the type of error + * experienced. 
+ */ + +/* macros for retriving and setting header values */ +#define FM10K_MSG_HDR_MASK(name) \ + ((0x1u << FM10K_MSG_##name##_SIZE) - 1) +#define FM10K_MSG_HDR_FIELD_SET(value, name) \ + (((u32)(value) & FM10K_MSG_HDR_MASK(name)) << FM10K_MSG_##name##_SHIFT) +#define FM10K_MSG_HDR_FIELD_GET(value, name) \ + ((u16)((value) >> FM10K_MSG_##name##_SHIFT) & FM10K_MSG_HDR_MASK(name)) + +/* offsets shared between all headers */ +#define FM10K_MSG_TYPE_SHIFT 0 +#define FM10K_MSG_TYPE_SIZE 4 +#define FM10K_MSG_TAIL_SHIFT 4 +#define FM10K_MSG_TAIL_SIZE 4 +#define FM10K_MSG_HEAD_SHIFT 8 +#define FM10K_MSG_HEAD_SIZE 4 +#define FM10K_MSG_RSVD0_SHIFT 12 +#define FM10K_MSG_RSVD0_SIZE 4 + +/* offsets for data/disconnect headers */ +#define FM10K_MSG_CRC_SHIFT 16 +#define FM10K_MSG_CRC_SIZE 16 + +/* offsets for connect headers */ +#define FM10K_MSG_CONNECT_SIZE_SHIFT 16 +#define FM10K_MSG_CONNECT_SIZE_SIZE 16 + +/* offsets for error headers */ +#define FM10K_MSG_ERR_NO_SHIFT 16 +#define FM10K_MSG_ERR_NO_SIZE 16 + +enum fm10k_msg_type { + FM10K_MSG_DATA = 0x8, + FM10K_MSG_CONNECT = 0xC, + FM10K_MSG_DISCONNECT = 0xD, + FM10K_MSG_ERROR = 0xE, +}; + +/* HNI/SM Mailbox FIFO format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-------+-----------------------+-------+-----------------------+ + * | Error | Remote Head |Version| Local Tail | + * +-------+-----------------------+-------+-----------------------+ + * | | + * . Local FIFO Data . + * . . + * +-------+-----------------------+-------+-----------------------+ + * + * The layout above describes the format for the FIFOs used by the host + * network interface and the switch manager to communicate messages back + * and forth. Both the HNI and the switch maintain one such FIFO. The + * layout in memory has the switch manager FIFO followed immediately by + * the HNI FIFO. For this reason I am using just the pointer to the + * HNI FIFO in the mailbox ops as the offset between the two is fixed. + * + * The header for the FIFO is broken out into the following fields: + * Local Tail: Offset into FIFO region for next DWORD to write. + * Version: Version info for mailbox, only values of 0/1 are supported. + * Remote Head: Offset into remote FIFO to indicate how much we have read. + * Error: Error indication, values TBD. + */ + +/* version number for switch manager mailboxes */ +#define FM10K_SM_MBX_VERSION 1 +#define FM10K_SM_MBX_FIFO_LEN (FM10K_MBMEM_PF_XOR - 1) +#define FM10K_SM_MBX_FIFO_HDR_LEN 1 + +/* offsets shared between all SM FIFO headers */ +#define FM10K_MSG_SM_TAIL_SHIFT 0 +#define FM10K_MSG_SM_TAIL_SIZE 12 +#define FM10K_MSG_SM_VER_SHIFT 12 +#define FM10K_MSG_SM_VER_SIZE 4 +#define FM10K_MSG_SM_HEAD_SHIFT 16 +#define FM10K_MSG_SM_HEAD_SIZE 12 +#define FM10K_MSG_SM_ERR_SHIFT 28 +#define FM10K_MSG_SM_ERR_SIZE 4 + +/* All error messages returned by mailbox functions + * The value -511 is 0xFE01 in hex. The idea is to order the errors + * from 0xFE01 - 0xFEFF so error codes are easily visible in the mailbox + * messages. This also helps to avoid error number collisions as Linux + * doesn't appear to use error numbers 256 - 511. 
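The FIELD_SET/FIELD_GET macros just defined above are the whole story for header construction, so a small self-contained demo makes the bit layout concrete. It copies the macros and the PF/VF data-header offsets verbatim and packs/unpacks an example header (the tail/head/CRC values are arbitrary):

    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t u32;
    typedef uint16_t u16;

    /* copied from fm10k_mbx.h in this patch */
    #define FM10K_MSG_HDR_MASK(name) \
            ((0x1u << FM10K_MSG_##name##_SIZE) - 1)
    #define FM10K_MSG_HDR_FIELD_SET(value, name) \
            (((u32)(value) & FM10K_MSG_HDR_MASK(name)) << FM10K_MSG_##name##_SHIFT)
    #define FM10K_MSG_HDR_FIELD_GET(value, name) \
            ((u16)((value) >> FM10K_MSG_##name##_SHIFT) & FM10K_MSG_HDR_MASK(name))

    #define FM10K_MSG_TYPE_SHIFT 0
    #define FM10K_MSG_TYPE_SIZE  4
    #define FM10K_MSG_TAIL_SHIFT 4
    #define FM10K_MSG_TAIL_SIZE  4
    #define FM10K_MSG_HEAD_SHIFT 8
    #define FM10K_MSG_HEAD_SIZE  4
    #define FM10K_MSG_CRC_SHIFT  16
    #define FM10K_MSG_CRC_SIZE   16

    #define FM10K_MSG_DATA 0x8

    int main(void)
    {
            u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DATA, TYPE) |
                      FM10K_MSG_HDR_FIELD_SET(5, TAIL) |
                      FM10K_MSG_HDR_FIELD_SET(9, HEAD) |
                      FM10K_MSG_HDR_FIELD_SET(0xBEEF, CRC);

            /* prints: hdr=0xbeef0958 type=8 tail=5 head=9 crc=0xbeef */
            printf("hdr=0x%08x type=%u tail=%u head=%u crc=0x%04x\n", hdr,
                   FM10K_MSG_HDR_FIELD_GET(hdr, TYPE),
                   FM10K_MSG_HDR_FIELD_GET(hdr, TAIL),
                   FM10K_MSG_HDR_FIELD_GET(hdr, HEAD),
                   FM10K_MSG_HDR_FIELD_GET(hdr, CRC));
            return 0;
    }

The switch-manager header uses the same macros with the SM_TAIL/SM_VER/SM_HEAD/SM_ERR offsets defined above.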
+ */ +#define FM10K_MBX_ERR(_n) ((_n) - 512) +#define FM10K_MBX_ERR_NO_MBX FM10K_MBX_ERR(0x01) +#define FM10K_MBX_ERR_NO_MSG FM10K_MBX_ERR(0x02) +#define FM10K_MBX_ERR_NO_SPACE FM10K_MBX_ERR(0x03) +#define FM10K_MBX_ERR_LOCK FM10K_MBX_ERR(0x04) +#define FM10K_MBX_ERR_TAIL FM10K_MBX_ERR(0x05) +#define FM10K_MBX_ERR_HEAD FM10K_MBX_ERR(0x06) +#define FM10K_MBX_ERR_DST FM10K_MBX_ERR(0x07) +#define FM10K_MBX_ERR_SRC FM10K_MBX_ERR(0x08) +#define FM10K_MBX_ERR_TYPE FM10K_MBX_ERR(0x09) +#define FM10K_MBX_ERR_LEN FM10K_MBX_ERR(0x0A) +#define FM10K_MBX_ERR_SIZE FM10K_MBX_ERR(0x0B) +#define FM10K_MBX_ERR_BUSY FM10K_MBX_ERR(0x0C) +#define FM10K_MBX_ERR_VALUE FM10K_MBX_ERR(0x0D) +#define FM10K_MBX_ERR_RSVD0 FM10K_MBX_ERR(0x0E) +#define FM10K_MBX_ERR_CRC FM10K_MBX_ERR(0x0F) + +#define FM10K_MBX_CRC_SEED 0xFFFF + +struct fm10k_mbx_ops { + s32 (*connect)(struct fm10k_hw *, struct fm10k_mbx_info *); + void (*disconnect)(struct fm10k_hw *, struct fm10k_mbx_info *); + bool (*rx_ready)(struct fm10k_mbx_info *); + bool (*tx_ready)(struct fm10k_mbx_info *, u16); + bool (*tx_complete)(struct fm10k_mbx_info *); + s32 (*enqueue_tx)(struct fm10k_hw *, struct fm10k_mbx_info *, + const u32 *); + s32 (*process)(struct fm10k_hw *, struct fm10k_mbx_info *); + s32 (*register_handlers)(struct fm10k_mbx_info *, + const struct fm10k_msg_data *); +}; + +struct fm10k_mbx_fifo { + u32 *buffer; + u16 head; + u16 tail; + u16 size; +}; + +/* size of buffer to be stored in mailbox for FIFOs */ +#define FM10K_MBX_TX_BUFFER_SIZE 512 +#define FM10K_MBX_RX_BUFFER_SIZE 128 +#define FM10K_MBX_BUFFER_SIZE \ + (FM10K_MBX_TX_BUFFER_SIZE + FM10K_MBX_RX_BUFFER_SIZE) + +/* minimum and maximum message size in dwords */ +#define FM10K_MBX_MSG_MAX_SIZE \ + ((FM10K_MBX_TX_BUFFER_SIZE - 1) & (FM10K_MBX_RX_BUFFER_SIZE - 1)) +#define FM10K_VFMBX_MSG_MTU ((FM10K_VFMBMEM_LEN / 2) - 1) + +#define FM10K_MBX_INIT_TIMEOUT 2000 /* number of retries on mailbox */ +#define FM10K_MBX_INIT_DELAY 500 /* microseconds between retries */ + +struct fm10k_mbx_info { + /* function pointers for mailbox operations */ + struct fm10k_mbx_ops ops; + const struct fm10k_msg_data *msg_data; + + /* message FIFOs */ + struct fm10k_mbx_fifo rx; + struct fm10k_mbx_fifo tx; + + /* delay for handling timeouts */ + u32 timeout; + u32 usec_delay; + + /* mailbox state info */ + u32 mbx_reg, mbmem_reg, mbx_lock, mbx_hdr; + u16 max_size, mbmem_len; + u16 tail, tail_len, pulled; + u16 head, head_len, pushed; + u16 local, remote; + enum fm10k_mbx_state state; + + /* result of last mailbox test */ + s32 test_result; + + /* statistics */ + u64 tx_busy; + u64 tx_dropped; + u64 tx_messages; + u64 tx_dwords; + u64 rx_messages; + u64 rx_dwords; + u64 rx_parse_err; + + /* Buffer to store messages */ + u32 buffer[FM10K_MBX_BUFFER_SIZE]; +}; + +s32 fm10k_pfvf_mbx_init(struct fm10k_hw *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *, u8); +s32 fm10k_sm_mbx_init(struct fm10k_hw *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *); + +#endif /* _FM10K_MBX_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_osdep.h b/lib/librte_pmd_fm10k/base/fm10k_osdep.h new file mode 100644 index 0000000..04f8fe9 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_osdep.h @@ -0,0 +1,148 @@ +/******************************************************************************* + +Copyright (c) 2013-2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. 
Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_OSDEP_H_ +#define _FM10K_OSDEP_H_ + +#include <stdint.h> +#include <string.h> +#include <rte_atomic.h> +#include <rte_byteorder.h> +#include <rte_cycles.h> +#include "../fm10k_logs.h" + +/* TODO: this does not look like it should be used... */ +#define ERROR_REPORT2(v1, v2, v3) do { } while (0) + +#define STATIC static +#define DEBUGFUNC(F) DEBUGOUT(F); +#define DEBUGOUT(S, args...) PMD_DRV_LOG_RAW(DEBUG, S, ##args) +#define DEBUGOUT1(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT2(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT3(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT6(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT7(S, args...) 
DEBUGOUT(S, ##args) + +#define FALSE 0 +#define TRUE 1 +#ifndef false +#define false FALSE +#endif +#ifndef true +#define true TRUE +#endif + +typedef uint8_t u8; +typedef int8_t s8; +typedef uint16_t u16; +typedef int16_t s16; +typedef uint32_t u32; +typedef int32_t s32; +typedef int64_t s64; +typedef uint64_t u64; +typedef int bool; + +#ifndef __le16 +#define __le16 u16 +#define __le32 u32 +#define __le64 u64 +#endif +#ifndef __be16 +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 +#endif + +/* offsets are WORD offsets, not BYTE offsets */ +#define FM10K_WRITE_REG(hw, reg, val) \ + ((((volatile uint32_t *)(hw)->hw_addr)[(reg)]) = ((uint32_t)(val))) +#define FM10K_READ_REG(hw, reg) \ + (((volatile uint32_t *)(hw)->hw_addr)[(reg)]) +#define FM10K_WRITE_FLUSH(a) FM10K_READ_REG(a, FM10K_CTRL) + +#define FM10K_PCI_REG(reg) (*((volatile uint32_t *)(reg))) + +#define FM10K_PCI_REG_WRITE(reg, value) do { \ + FM10K_PCI_REG((reg)) = (value); \ +} while (0) + +/* not implemented */ +#define FM10K_READ_PCI_WORD(hw, reg) 0 + +#define FM10K_WRITE_MBX(hw, reg, value) FM10K_WRITE_REG(hw, reg, value) +#define FM10K_READ_MBX(hw, reg) FM10K_READ_REG(hw, reg) + +#define FM10K_LE16_TO_CPU rte_le_to_cpu_16 +#define FM10K_LE32_TO_CPU rte_le_to_cpu_32 +#define FM10K_CPU_TO_LE32 rte_cpu_to_le_32 +#define FM10K_CPU_TO_LE16 rte_cpu_to_le_16 + +#define FM10K_RMB rte_rmb +#define FM10K_WMB rte_wmb + +#define usec_delay rte_delay_us + +#define FM10K_REMOVED(hw_addr) (!(hw_addr)) + +#ifndef FM10K_IS_ZERO_ETHER_ADDR +/* make certain address is not 0 */ +#define FM10K_IS_ZERO_ETHER_ADDR(addr) \ +(!((addr)[0] | (addr)[1] | (addr)[2] | (addr)[3] | (addr)[4] | (addr)[5])) +#endif + +#ifndef FM10K_IS_MULTICAST_ETHER_ADDR +#define FM10K_IS_MULTICAST_ETHER_ADDR(addr) ((addr)[0] & 0x1) +#endif + +#ifndef FM10K_IS_VALID_ETHER_ADDR +/* make certain address is not multicast or 0 */ +#define FM10K_IS_VALID_ETHER_ADDR(addr) \ +(!FM10K_IS_MULTICAST_ETHER_ADDR(addr) && !FM10K_IS_ZERO_ETHER_ADDR(addr)) +#endif + +#ifndef do_div +#define do_div(n, base) ({\ + (n) = (n) / (base);\ +}) +#endif /* do_div */ + +/* DPDK can't access IOMEM directly */ +#ifndef FM10K_WRITE_SW_REG +#define FM10K_WRITE_SW_REG(v1, v2, v3) do { } while (0) +#endif + +#ifndef fm10k_read_reg +#define fm10k_read_reg FM10K_READ_REG +#endif + +#endif /* _FM10K_OSDEP_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_pf.c b/lib/librte_pmd_fm10k/base/fm10k_pf.c new file mode 100644 index 0000000..3545a24 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_pf.c @@ -0,0 +1,1992 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. 
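One easy-to-miss detail in fm10k_osdep.h above is that FM10K_WRITE_REG/FM10K_READ_REG index the mapped BAR as an array of 32-bit words, so all the register constants in this driver (FM10K_MBMEM(_n) = (_n) + 0x18000 and friends) are DWORD offsets, not byte offsets. A self-contained toy with the same access pattern (the struct and macro names here are illustrative, not the driver's):

    #include <stdint.h>
    #include <stdio.h>

    struct toy_hw {
            volatile uint32_t *hw_addr;     /* stands in for hw->hw_addr (mapped BAR) */
    };

    /* same shape as FM10K_WRITE_REG/FM10K_READ_REG: 'reg' counts 32-bit words */
    #define TOY_WRITE_REG(hw, reg, val) (((volatile uint32_t *)(hw)->hw_addr)[(reg)] = (uint32_t)(val))
    #define TOY_READ_REG(hw, reg)       (((volatile uint32_t *)(hw)->hw_addr)[(reg)])

    int main(void)
    {
            static uint32_t fake_bar[0x20000];      /* pretend BAR backing store */
            struct toy_hw hw = { .hw_addr = fake_bar };

            TOY_WRITE_REG(&hw, 0x18000, 0xdeadbeef);        /* e.g. FM10K_MBMEM(0) */
            printf("word offset 0x18000 is byte offset 0x%lx, value 0x%08x\n",
                   (unsigned long)(0x18000UL * sizeof(uint32_t)),
                   TOY_READ_REG(&hw, 0x18000));
            return 0;
    }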
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_pf.h" +#include "fm10k_vf.h" + +/** + * fm10k_reset_hw_pf - PF hardware reset + * @hw: pointer to hardware structure + * + * This function should return the hardware to a state similar to the + * one it is in after being powered on. + **/ +STATIC s32 fm10k_reset_hw_pf(struct fm10k_hw *hw) +{ + s32 err; + u32 reg; + u16 i; + + DEBUGFUNC("fm10k_reset_hw_pf"); + + /* Disable interrupts */ + FM10K_WRITE_REG(hw, FM10K_EIMR, FM10K_EIMR_DISABLE(ALL)); + + /* Lock ITR2 reg 0 into itself and disable interrupt moderation */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), 0); + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, 0); + + /* We assume here Tx and Rx queue 0 are owned by the PF */ + + /* Shut off VF access to their queues forcing them to queue 0 */ + for (i = 0; i < FM10K_TQMAP_TABLE_SIZE; i++) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(i), 0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(i), 0); + } + + /* shut down all rings */ + err = fm10k_disable_queues_generic(hw, FM10K_MAX_QUEUES); + if (err) + return err; + + /* Verify that DMA is no longer active */ + reg = FM10K_READ_REG(hw, FM10K_DMA_CTRL); + if (reg & (FM10K_DMA_CTRL_TX_ACTIVE | FM10K_DMA_CTRL_RX_ACTIVE)) + return FM10K_ERR_DMA_PENDING; + + /* verify the switch is ready for reset */ + reg = FM10K_READ_REG(hw, FM10K_DMA_CTRL2); + if (!(reg & FM10K_DMA_CTRL2_SWITCH_READY)) + goto out; + + /* Inititate data path reset */ + reg |= FM10K_DMA_CTRL_DATAPATH_RESET; + FM10K_WRITE_REG(hw, FM10K_DMA_CTRL, reg); + + /* Flush write and allow 100us for reset to complete */ + FM10K_WRITE_FLUSH(hw); + usec_delay(FM10K_RESET_TIMEOUT); + + /* Verify we made it out of reset */ + reg = FM10K_READ_REG(hw, FM10K_IP); + if (!(reg & FM10K_IP_NOTINRESET)) + err = FM10K_ERR_RESET_FAILED; + +out: + return err; +} + +/** + * fm10k_is_ari_hierarchy_pf - Indicate ARI hierarchy support + * @hw: pointer to hardware structure + * + * Looks at the ARI hierarchy bit to determine whether ARI is supported or not. 
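Worth calling out from fm10k_reset_hw_pf() above: the data-path reset is the classic MMIO write / flush / wait / verify sequence, where the "flush" is simply a read-back (FM10K_WRITE_FLUSH reads FM10K_CTRL) so the posted write reaches the device before the delay starts. A generic sketch of that idiom with placeholder register names and bits (not the device's real layout):

    #include <stdint.h>
    #include <unistd.h>

    #define CTRL_REG             0x0000u        /* placeholder word offsets and bits */
    #define CTRL_DATAPATH_RESET  0x00000800u
    #define STATUS_REG           0x0001u
    #define STATUS_NOT_IN_RESET  0x00001000u
    #define RESET_TIMEOUT_US     100            /* driver waits FM10K_RESET_TIMEOUT */

    static volatile uint32_t fake_bar[16];      /* stand-in for the mapped BAR */

    static uint32_t rd(uint32_t reg)             { return fake_bar[reg]; }
    static void     wr(uint32_t reg, uint32_t v) { fake_bar[reg] = v; }

    /* 0 on success, -1 if the device never reported leaving reset */
    int toy_datapath_reset(void)
    {
            wr(CTRL_REG, rd(CTRL_REG) | CTRL_DATAPATH_RESET);
            (void)rd(CTRL_REG);         /* read back: flush the posted write */
            usleep(RESET_TIMEOUT_US);   /* give the datapath time to come back */

            return (rd(STATUS_REG) & STATUS_NOT_IN_RESET) ? 0 : -1;
    }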
+ **/ +STATIC bool fm10k_is_ari_hierarchy_pf(struct fm10k_hw *hw) +{ + u16 sriov_ctrl = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_SRIOV_CTRL); + + DEBUGFUNC("fm10k_is_ari_hierarchy_pf"); + + return !!(sriov_ctrl & FM10K_PCIE_SRIOV_CTRL_VFARI); +} + +/** + * fm10k_init_hw_pf - PF hardware initialization + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_init_hw_pf(struct fm10k_hw *hw) +{ + u32 dma_ctrl, txqctl; + u16 i; + + DEBUGFUNC("fm10k_init_hw_pf"); + + /* Establish default VSI as valid */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(fm10k_dglort_default), 0); + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(fm10k_dglort_default), + FM10K_DGLORTMAP_ANY); + + /* Invalidate all other GLORT entries */ + for (i = 1; i < FM10K_DGLORT_COUNT; i++) + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(i), FM10K_DGLORTMAP_NONE); + + /* reset ITR2(0) to point to itself */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), 0); + + /* reset VF ITR2(0) to point to 0 avoid PF registers */ + FM10K_WRITE_REG(hw, FM10K_ITR2(FM10K_ITR_REG_COUNT_PF), 0); + + /* loop through all PF ITR2 registers pointing them to the previous */ + for (i = 1; i < FM10K_ITR_REG_COUNT_PF; i++) + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - 1); + + /* Enable interrupt moderator if not already enabled */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, FM10K_INT_CTRL_ENABLEMODERATOR); + + /* compute the default txqctl configuration */ + txqctl = FM10K_TXQCTL_PF | FM10K_TXQCTL_UNLIMITED_BW | + (hw->mac.default_vid << FM10K_TXQCTL_VID_SHIFT); + + for (i = 0; i < FM10K_MAX_QUEUES; i++) { + /* configure rings for 256 Queue / 32 Descriptor cache mode */ + FM10K_WRITE_REG(hw, FM10K_TQDLOC(i), + (i * FM10K_TQDLOC_BASE_32_DESC) | + FM10K_TQDLOC_SIZE_32_DESC); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), txqctl); + + /* configure rings to provide TPH processing hints */ + FM10K_WRITE_REG(hw, FM10K_TPH_TXCTRL(i), + FM10K_TPH_TXCTRL_DESC_TPHEN | + FM10K_TPH_TXCTRL_DESC_RROEN | + FM10K_TPH_TXCTRL_DESC_WROEN | + FM10K_TPH_TXCTRL_DATA_RROEN); + FM10K_WRITE_REG(hw, FM10K_TPH_RXCTRL(i), + FM10K_TPH_RXCTRL_DESC_TPHEN | + FM10K_TPH_RXCTRL_DESC_RROEN | + FM10K_TPH_RXCTRL_DATA_WROEN | + FM10K_TPH_RXCTRL_HDR_WROEN); + } + + /* set max hold interval to align with 1.024 usec in all modes */ + switch (hw->bus.speed) { + case fm10k_bus_speed_2500: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN1; + break; + case fm10k_bus_speed_5000: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN2; + break; + case fm10k_bus_speed_8000: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN3; + break; + default: + dma_ctrl = 0; + break; + } + + /* Configure TSO flags */ + FM10K_WRITE_REG(hw, FM10K_DTXTCPFLGL, FM10K_TSO_FLAGS_LOW); + FM10K_WRITE_REG(hw, FM10K_DTXTCPFLGH, FM10K_TSO_FLAGS_HI); + + /* Enable DMA engine + * Set Rx Descriptor size to 32 + * Set Minimum MSS to 64 + * Set Maximum number of Rx queues to 256 / 32 Descriptor + */ + dma_ctrl |= FM10K_DMA_CTRL_TX_ENABLE | FM10K_DMA_CTRL_RX_ENABLE | + FM10K_DMA_CTRL_RX_DESC_SIZE | FM10K_DMA_CTRL_MINMSS_64 | + FM10K_DMA_CTRL_32_DESC; + + FM10K_WRITE_REG(hw, FM10K_DMA_CTRL, dma_ctrl); + + /* record maximum queue count, we limit ourselves to 128 */ + hw->mac.max_queues = FM10K_MAX_QUEUES_PF; + + /* We support either 64 VFs or 7 VFs depending on if we have ARI */ + hw->iov.total_vfs = fm10k_is_ari_hierarchy_pf(hw) ? 
64 : 7; + + return FM10K_SUCCESS; +} + +/** + * fm10k_is_slot_appropriate_pf - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. + **/ +STATIC bool fm10k_is_slot_appropriate_pf(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_is_slot_appropriate_pf"); + + return (hw->bus.speed == hw->bus_caps.speed) && + (hw->bus.width == hw->bus_caps.width); +} + +/** + * fm10k_update_vlan_pf - Update status of VLAN ID in VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @vsi: Index indicating VF ID or PF ID in table + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for the corresponding function. In addition to the + * standard set/clear that supports one bit a multi-bit write is + * supported to set 64 bits at a time. + **/ +STATIC s32 fm10k_update_vlan_pf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set) +{ + u32 vlan_table, reg, mask, bit, len; + + /* verify the VSI index is valid */ + if (vsi > FM10K_VLAN_TABLE_VSI_MAX) + return FM10K_ERR_PARAM; + + /* VLAN multi-bit write: + * The multi-bit write has several parts to it. + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | RSVD0 | Length |C|RSVD0| VLAN ID | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * VLAN ID: Vlan Starting value + * RSVD0: Reserved section, must be 0 + * C: Flag field, 0 is set, 1 is clear (Used in VF VLAN message) + * Length: Number of times to repeat the bit being set + */ + len = vid >> 16; + vid = (vid << 17) >> 17; + + /* verify the reserved 0 fields are 0 */ + if (len >= FM10K_VLAN_TABLE_VID_MAX || vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* Loop through the table updating all required VLANs */ + for (reg = FM10K_VLAN_TABLE(vsi, vid / 32), bit = vid % 32; + len < FM10K_VLAN_TABLE_VID_MAX; + len -= 32 - bit, reg++, bit = 0) { + /* record the initial state of the register */ + vlan_table = FM10K_READ_REG(hw, reg); + + /* truncate mask if we are at the start or end of the run */ + mask = (~(u32)0 >> ((len < 31) ? 31 - len : 0)) << bit; + + /* make necessary modifications to the register */ + mask &= set ? ~vlan_table : vlan_table; + if (mask) + FM10K_WRITE_REG(hw, reg, vlan_table ^ mask); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_read_mac_addr_pf - Read device MAC address + * @hw: pointer to the HW structure + * + * Reads the device MAC address from the SM_AREA and stores the value. 
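A note on the argument encoding used by fm10k_update_vlan_pf() above: the vid parameter doubles as a (starting VLAN, repeat length) pair, with the length in bits 31:16 and the VLAN ID in the low bits; reading the mask arithmetic, a length of 0 still updates the single starting entry. The C flag is left clear here since, per the comment, it is only used in the VF VLAN message. A small sketch of how a caller might build that argument (the helper name is hypothetical):

    #include <stdint.h>

    typedef uint16_t u16;
    typedef uint32_t u32;

    /* pack a starting VLAN plus repeat length the way fm10k_update_vlan_pf()
     * unpacks them: length in bits 31:16, starting VLAN ID in the low bits */
    u32 vlan_multibit_arg(u16 start_vid, u16 extra_repeats)
    {
            return ((u32)extra_repeats << 16) | start_vid;
    }

    /* e.g. mark VLANs 100..163 present in VSI 0's table:
     *      fm10k_update_vlan_pf(hw, vlan_multibit_arg(100, 63), 0, true);
     */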
+ **/ +STATIC s32 fm10k_read_mac_addr_pf(struct fm10k_hw *hw) +{ + u8 perm_addr[ETH_ALEN]; + u32 serial_num; + int i; + + DEBUGFUNC("fm10k_read_mac_addr_pf"); + + serial_num = FM10K_READ_REG(hw, FM10K_SM_AREA(1)); + + /* last byte should be all 1's */ + if ((~serial_num) << 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[0] = (u8)(serial_num >> 24); + perm_addr[1] = (u8)(serial_num >> 16); + perm_addr[2] = (u8)(serial_num >> 8); + + serial_num = FM10K_READ_REG(hw, FM10K_SM_AREA(0)); + + /* first byte should be all 1's */ + if ((~serial_num) >> 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[3] = (u8)(serial_num >> 16); + perm_addr[4] = (u8)(serial_num >> 8); + perm_addr[5] = (u8)(serial_num); + + for (i = 0; i < ETH_ALEN; i++) { + hw->mac.perm_addr[i] = perm_addr[i]; + hw->mac.addr[i] = perm_addr[i]; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_glort_valid_pf - Validate that the provided glort is valid + * @hw: pointer to the HW structure + * @glort: base glort to be validated + * + * This function will return an error if the provided glort is invalid + **/ +bool fm10k_glort_valid_pf(struct fm10k_hw *hw, u16 glort) +{ + glort &= hw->mac.dglort_map >> FM10K_DGLORTMAP_MASK_SHIFT; + + return glort == (hw->mac.dglort_map & FM10K_DGLORTMAP_NONE); +} + +/** + * fm10k_update_xc_addr_pf - Update device addresses + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure + * + * This function generates a message to the Switch API requesting + * that the given logical port add/remove the given L2 MAC/VLAN address. + **/ +STATIC s32 fm10k_update_xc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + struct fm10k_mac_update mac_update; + u32 msg[5]; + + DEBUGFUNC("fm10k_update_xc_addr_pf"); + + /* if glort or VLAN are not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort) || vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* record fields */ + mac_update.mac_lower = FM10K_CPU_TO_LE32(((u32)mac[2] << 24) | + ((u32)mac[3] << 16) | + ((u32)mac[4] << 8) | + ((u32)mac[5])); + mac_update.mac_upper = FM10K_CPU_TO_LE16(((u32)mac[0] << 8) | + ((u32)mac[1])); + mac_update.vlan = FM10K_CPU_TO_LE16(vid); + mac_update.glort = FM10K_CPU_TO_LE16(glort); + mac_update.action = add ? 0 : 1; + mac_update.flags = flags; + + /* populate mac_update fields */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_UPDATE_MAC_FWD_RULE); + fm10k_tlv_attr_put_le_struct(msg, FM10K_PF_ATTR_ID_MAC_UPDATE, + &mac_update, sizeof(mac_update)); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_uc_addr_pf - Update device unicast addresses + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure + * + * This function is used to add or remove unicast addresses for + * the PF. 
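The MAC recovery in fm10k_read_mac_addr_pf() above stitches the address together from two SM_AREA words, using the spare byte in each word (which must read as all 1's) as a cheap validity check. A standalone restatement of just that unpacking (the function name is illustrative):

    #include <stdint.h>
    #include <stdio.h>

    typedef uint8_t u8;
    typedef uint32_t u32;

    /* mimic fm10k_read_mac_addr_pf(): SM_AREA(1) carries bytes 0-2 in its upper
     * three bytes (low byte must be all 1's), SM_AREA(0) carries bytes 3-5
     * (high byte must be all 1's) */
    static int unpack_mac(u32 sm_area1, u32 sm_area0, u8 mac[6])
    {
            if ((~sm_area1) << 24)          /* low byte of word 1 not all 1's */
                    return -1;
            if ((~sm_area0) >> 24)          /* high byte of word 0 not all 1's */
                    return -1;

            mac[0] = (u8)(sm_area1 >> 24);
            mac[1] = (u8)(sm_area1 >> 16);
            mac[2] = (u8)(sm_area1 >> 8);
            mac[3] = (u8)(sm_area0 >> 16);
            mac[4] = (u8)(sm_area0 >> 8);
            mac[5] = (u8)sm_area0;
            return 0;
    }

    int main(void)
    {
            u8 mac[6];

            /* example encoding of a0:36:9f:12:34:56 */
            if (!unpack_mac(0xa0369fffu, 0xff123456u, mac))
                    printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
                           mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
            return 0;
    }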
+ **/ +STATIC s32 fm10k_update_uc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + DEBUGFUNC("fm10k_update_uc_addr_pf"); + + /* verify MAC address is valid */ + if (!FM10K_IS_VALID_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + return fm10k_update_xc_addr_pf(hw, glort, mac, vid, add, flags); +} + +/** + * fm10k_update_mc_addr_pf - Update device multicast addresses + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses for + * the PF. + **/ +STATIC s32 fm10k_update_mc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add) +{ + DEBUGFUNC("fm10k_update_mc_addr_pf"); + + /* verify multicast address is valid */ + if (!FM10K_IS_MULTICAST_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + return fm10k_update_xc_addr_pf(hw, glort, mac, vid, add, 0); +} + +/** + * fm10k_update_xcast_mode_pf - Request update of multicast mode + * @hw: pointer to hardware structure + * @glort: base resource tag for this request + * @mode: integer value indicating mode being requested + * + * This function will attempt to request a higher mode for the port + * so that it can enable either multicast, multicast promiscuous, or + * promiscuous mode of operation. + **/ +STATIC s32 fm10k_update_xcast_mode_pf(struct fm10k_hw *hw, u16 glort, u8 mode) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3], xcast_mode; + + DEBUGFUNC("fm10k_update_xcast_mode_pf"); + + if (mode > FM10K_XCAST_MODE_NONE) + return FM10K_ERR_PARAM; + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* write xcast mode as a single u32 value, + * lower 16 bits: glort + * upper 16 bits: mode + */ + xcast_mode = ((u32)mode << 16) | glort; + + /* generate message requesting to change xcast mode */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_XCAST_MODES); + fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_XCAST_MODE, xcast_mode); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_int_moderator_pf - Update interrupt moderator linked list + * @hw: pointer to hardware structure + * + * This function walks through the MSI-X vector table to determine the + * number of active interrupts and based on that information updates the + * interrupt moderator linked list. 
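Several of these switch-manager requests pack a glort and a 16-bit parameter into a single u32 attribute, glort in the low half and the mode (or, for the logical-port message that follows, a count) in the high half. A trivial illustration of that packing and how it comes apart again (values are arbitrary):

    #include <stdint.h>
    #include <stdio.h>

    typedef uint16_t u16;
    typedef uint32_t u32;

    static u32 pack_glort_param(u16 glort, u16 param)
    {
            return ((u32)param << 16) | glort;  /* same shape as xcast_mode/lport_msg */
    }

    int main(void)
    {
            u32 word = pack_glort_param(0x1000, 2);     /* e.g. glort 0x1000, mode 2 */

            printf("attr=0x%08x glort=0x%04x param=%u\n",
                   word, (u16)word, (u16)(word >> 16));
            return 0;
    }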
+ **/ +STATIC void fm10k_update_int_moderator_pf(struct fm10k_hw *hw) +{ + u32 i; + + /* Disable interrupt moderator */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, 0); + + /* loop through PF from last to first looking enabled vectors */ + for (i = FM10K_ITR_REG_COUNT_PF - 1; i; i--) { + if (!FM10K_READ_REG(hw, FM10K_MSIX_VECTOR_MASK(i))) + break; + } + + /* always reset VFITR2[0] to point to last enabled PF vector */ + FM10K_WRITE_REG(hw, FM10K_ITR2(FM10K_ITR_REG_COUNT_PF), i); + + /* reset ITR2[0] to point to last enabled PF vector */ + if (!hw->iov.num_vfs) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), i); + + /* Enable interrupt moderator */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, FM10K_INT_CTRL_ENABLEMODERATOR); +} + +/** + * fm10k_update_lport_state_pf - Notify the switch of a change in port state + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @count: number of logical ports being updated + * @enable: boolean value indicating enable or disable + * + * This function is used to add/remove a logical port from the switch. + **/ +STATIC s32 fm10k_update_lport_state_pf(struct fm10k_hw *hw, u16 glort, + u16 count, bool enable) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3], lport_msg; + + DEBUGFUNC("fm10k_lport_state_pf"); + + /* do nothing if we are being asked to create or destroy 0 ports */ + if (!count) + return FM10K_SUCCESS; + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* construct the lport message from the 2 pieces of data we have */ + lport_msg = ((u32)count << 16) | glort; + + /* generate lport create/delete message */ + fm10k_tlv_msg_init(msg, enable ? FM10K_PF_MSG_ID_LPORT_CREATE : + FM10K_PF_MSG_ID_LPORT_DELETE); + fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_PORT, lport_msg); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_configure_dglort_map_pf - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned. 
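Since the DGLORT decoder works purely in bit-lengths, the queue, VSI and traffic-class counts that fm10k_configure_dglort_map_pf() derives below are all powers of two. A small worked example of the same arithmetic for a hypothetical configuration (rss_l = 2, pc_l = 0, vsi_l = 3, queue_l = 0):

    #include <stdio.h>

    int main(void)
    {
            unsigned rss_l = 2, pc_l = 0, vsi_l = 3, queue_l = 0;

            /* per-VSI queue fan-out and number of VSIs, as in the SGLORT loop */
            unsigned queue_count = 1u << (rss_l + pc_l);    /* 4 queues per VSI */
            unsigned vsi_count   = 1u << (vsi_l + queue_l); /* 8 VSIs           */

            /* per-PC queue span used when programming the Tx queue control */
            unsigned pc_span  = 1u << (queue_l + rss_l + vsi_l);    /* 32 queues       */
            unsigned pc_count = 1u << pc_l;                         /* 1 traffic class */

            printf("%u queues/VSI, %u VSIs, %u queues/PC span, %u PCs\n",
                   queue_count, vsi_count, pc_span, pc_count);
            return 0;
    }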
+ **/ +STATIC s32 fm10k_configure_dglort_map_pf(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + u16 glort, queue_count, vsi_count, pc_count; + u16 vsi, queue, pc, q_idx; + u32 txqctl, dglortdec, dglortmap; + + /* verify the dglort pointer */ + if (!dglort) + return FM10K_ERR_PARAM; + + /* verify the dglort values */ + if ((dglort->idx > 7) || (dglort->rss_l > 7) || (dglort->pc_l > 3) || + (dglort->vsi_l > 6) || (dglort->vsi_b > 64) || + (dglort->queue_l > 8) || (dglort->queue_b >= 256)) + return FM10K_ERR_PARAM; + + /* determine count of VSIs and queues */ + queue_count = 1 << (dglort->rss_l + dglort->pc_l); + vsi_count = 1 << (dglort->vsi_l + dglort->queue_l); + glort = dglort->glort; + q_idx = dglort->queue_b; + + /* configure SGLORT for queues */ + for (vsi = 0; vsi < vsi_count; vsi++, glort++) { + for (queue = 0; queue < queue_count; queue++, q_idx++) { + if (q_idx >= FM10K_MAX_QUEUES) + break; + + FM10K_WRITE_REG(hw, FM10K_TX_SGLORT(q_idx), glort); + FM10K_WRITE_REG(hw, FM10K_RX_SGLORT(q_idx), glort); + } + } + + /* determine count of PCs and queues */ + queue_count = 1 << (dglort->queue_l + dglort->rss_l + dglort->vsi_l); + pc_count = 1 << dglort->pc_l; + + /* configure PC for Tx queues */ + for (pc = 0; pc < pc_count; pc++) { + q_idx = pc + dglort->queue_b; + for (queue = 0; queue < queue_count; queue++) { + if (q_idx >= FM10K_MAX_QUEUES) + break; + + txqctl = FM10K_READ_REG(hw, FM10K_TXQCTL(q_idx)); + txqctl &= ~FM10K_TXQCTL_PC_MASK; + txqctl |= pc << FM10K_TXQCTL_PC_SHIFT; + FM10K_WRITE_REG(hw, FM10K_TXQCTL(q_idx), txqctl); + + q_idx += pc_count; + } + } + + /* configure DGLORTDEC */ + dglortdec = ((u32)(dglort->rss_l) << FM10K_DGLORTDEC_RSSLENGTH_SHIFT) | + ((u32)(dglort->queue_b) << FM10K_DGLORTDEC_QBASE_SHIFT) | + ((u32)(dglort->pc_l) << FM10K_DGLORTDEC_PCLENGTH_SHIFT) | + ((u32)(dglort->vsi_b) << FM10K_DGLORTDEC_VSIBASE_SHIFT) | + ((u32)(dglort->vsi_l) << FM10K_DGLORTDEC_VSILENGTH_SHIFT) | + ((u32)(dglort->queue_l)); + if (dglort->inner_rss) + dglortdec |= FM10K_DGLORTDEC_INNERRSS_ENABLE; + + /* configure DGLORTMAP */ + dglortmap = (dglort->idx == fm10k_dglort_default) ? + FM10K_DGLORTMAP_ANY : FM10K_DGLORTMAP_ZERO; + dglortmap <<= dglort->vsi_l + dglort->queue_l + dglort->shared_l; + dglortmap |= dglort->glort; + + /* write values to hardware */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(dglort->idx), dglortdec); + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(dglort->idx), dglortmap); + + return FM10K_SUCCESS; +} + +u16 fm10k_queues_per_pool(struct fm10k_hw *hw) +{ + u16 num_pools = hw->iov.num_pools; + + return (num_pools > 32) ? 2 : (num_pools > 16) ? 4 : (num_pools > 8) ? + 8 : FM10K_MAX_QUEUES_POOL; +} + +u16 fm10k_vf_queue_index(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 num_vfs = hw->iov.num_vfs; + u16 vf_q_idx = FM10K_MAX_QUEUES; + + vf_q_idx -= fm10k_queues_per_pool(hw) * (num_vfs - vf_idx); + + return vf_q_idx; +} + +STATIC u16 fm10k_vectors_per_pool(struct fm10k_hw *hw) +{ + u16 num_pools = hw->iov.num_pools; + + return (num_pools > 32) ? 8 : (num_pools > 16) ? 
16 : + FM10K_MAX_VECTORS_POOL; +} + +STATIC u16 fm10k_vf_vector_index(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 vf_v_idx = FM10K_MAX_VECTORS_PF; + + vf_v_idx += fm10k_vectors_per_pool(hw) * vf_idx; + + return vf_v_idx; +} + +/** + * fm10k_iov_assign_resources_pf - Assign pool resources for virtualization + * @hw: pointer to the HW structure + * @num_vfs: number of VFs to be allocated + * @num_pools: number of virtualization pools to be allocated + * + * Allocates queues and traffic classes to virtualization entities to prepare + * the PF for SR-IOV and VMDq + **/ +STATIC s32 fm10k_iov_assign_resources_pf(struct fm10k_hw *hw, u16 num_vfs, + u16 num_pools) +{ + u16 qmap_stride, qpp, vpp, vf_q_idx, vf_q_idx0, qmap_idx; + u32 vid = hw->mac.default_vid << FM10K_TXQCTL_VID_SHIFT; + int i, j; + + /* hardware only supports up to 64 pools */ + if (num_pools > 64) + return FM10K_ERR_PARAM; + + /* the number of VFs cannot exceed the number of pools */ + if ((num_vfs > num_pools) || (num_vfs > hw->iov.total_vfs)) + return FM10K_ERR_PARAM; + + /* record number of virtualization entities */ + hw->iov.num_vfs = num_vfs; + hw->iov.num_pools = num_pools; + + /* determine qmap offsets and counts */ + qmap_stride = (num_vfs > 8) ? 32 : 256; + qpp = fm10k_queues_per_pool(hw); + vpp = fm10k_vectors_per_pool(hw); + + /* calculate starting index for queues */ + vf_q_idx = fm10k_vf_queue_index(hw, 0); + qmap_idx = 0; + + /* establish TCs with -1 credits and no quanta to prevent transmit */ + for (i = 0; i < num_vfs; i++) { + FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(i), 0); + FM10K_WRITE_REG(hw, FM10K_TC_RATE(i), 0); + FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(i), + FM10K_TC_CREDIT_CREDIT_MASK); + } + + /* zero out all mbmem registers */ + for (i = FM10K_VFMBMEM_LEN * num_vfs; i--;) + FM10K_WRITE_REG(hw, FM10K_MBMEM(i), 0); + + /* clear event notification of VF FLR */ + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(0), ~0); + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(1), ~0); + + /* loop through unallocated rings assigning them back to PF */ + for (i = FM10K_MAX_QUEUES_PF; i < vf_q_idx; i++) { + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), FM10K_TXQCTL_PF | vid); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), FM10K_RXQCTL_PF); + } + + /* PF should have already updated VFITR2[0] */ + + /* update all ITR registers to flow to VFITR2[0] */ + for (i = FM10K_ITR_REG_COUNT_PF + 1; i < FM10K_ITR_REG_COUNT; i++) { + if (!(i & (vpp - 1))) + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - vpp); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - 1); + } + + /* update PF ITR2[0] to reference the last vector */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), + fm10k_vf_vector_index(hw, num_vfs - 1)); + + /* loop through rings populating rings and TCs */ + for (i = 0; i < num_vfs; i++) { + /* record index for VF queue 0 for use in end of loop */ + vf_q_idx0 = vf_q_idx; + + for (j = 0; j < qpp; j++, qmap_idx++, vf_q_idx++) { + /* assign VF and locked TC to queues */ + FM10K_WRITE_REG(hw, FM10K_TXDCTL(vf_q_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(vf_q_idx), + (i << FM10K_TXQCTL_TC_SHIFT) | i | + FM10K_TXQCTL_VF | vid); + FM10K_WRITE_REG(hw, FM10K_RXDCTL(vf_q_idx), + FM10K_RXDCTL_WRITE_BACK_MIN_DELAY | + FM10K_RXDCTL_DROP_ON_EMPTY); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(vf_q_idx), + FM10K_RXQCTL_VF | + (i << FM10K_RXQCTL_VF_SHIFT)); + + /* map queue pair to VF */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), vf_q_idx); + } + + /* repeat the first ring for all of the remaining VF rings */ + for (; j 
< qmap_stride; j++, qmap_idx++) {
+ FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx0);
+ FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), vf_q_idx0);
+ }
+ }
+
+ /* loop through remaining indexes assigning all to queue 0 */
+ while (qmap_idx < FM10K_TQMAP_TABLE_SIZE) {
+ FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), 0);
+ FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), 0);
+ qmap_idx++;
+ }
+
+ return FM10K_SUCCESS;
+}
+
+/**
+ * fm10k_iov_configure_tc_pf - Configure the shaping group for VF
+ * @hw: pointer to the HW structure
+ * @vf_idx: index of VF receiving GLORT
+ * @rate: Rate indicated in Mb/s
+ *
+ * Configures the TC for a given VF to allow only up to a given number
+ * of Mb/s of outgoing Tx throughput.
+ **/
+STATIC s32 fm10k_iov_configure_tc_pf(struct fm10k_hw *hw, u16 vf_idx, int rate)
+{
+ /* configure defaults */
+ u32 interval = FM10K_TC_RATE_INTERVAL_4US_GEN3;
+ u32 tc_rate = FM10K_TC_RATE_QUANTA_MASK;
+
+ /* verify vf is in range */
+ if (vf_idx >= hw->iov.num_vfs)
+ return FM10K_ERR_PARAM;
+
+ /* set interval to align with 4.096 usec in all modes */
+ switch (hw->bus.speed) {
+ case fm10k_bus_speed_2500:
+ interval = FM10K_TC_RATE_INTERVAL_4US_GEN1;
+ break;
+ case fm10k_bus_speed_5000:
+ interval = FM10K_TC_RATE_INTERVAL_4US_GEN2;
+ break;
+ default:
+ break;
+ }
+
+ if (rate) {
+ if (rate > FM10K_VF_TC_MAX || rate < FM10K_VF_TC_MIN)
+ return FM10K_ERR_PARAM;
+
+ /* The quanta is measured in Bytes per 4.096 or 8.192 usec
+ * The rate is provided in Mbits per second
+ * To translate from rate to quanta we need to multiply the
+ * rate by 8.192 usec and divide by 8 bits/byte. To avoid
+ * dealing with floating point we can round the values up
+ * to the nearest whole number ratio which gives us 128 / 125.
+ */
+ tc_rate = (rate * 128) / 125;
+
+ /* try to keep the rate limiting accurate by increasing
+ * the number of credits and interval for rates less than 4Gb/s
+ */
+ if (rate < 4000)
+ interval <<= 1;
+ else
+ tc_rate >>= 1;
+ }
+
+ /* update rate limiter with new values */
+ FM10K_WRITE_REG(hw, FM10K_TC_RATE(vf_idx), tc_rate | interval);
+ FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(vf_idx), FM10K_TC_MAXCREDIT_64K);
+ FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(vf_idx), FM10K_TC_MAXCREDIT_64K);
+
+ return FM10K_SUCCESS;
+}
+
+/**
+ * fm10k_iov_assign_int_moderator_pf - Add VF interrupts to moderator list
+ * @hw: pointer to the HW structure
+ * @vf_idx: index of VF receiving GLORT
+ *
+ * Update the interrupt moderator linked list to include any MSI-X
+ * interrupts which the VF has enabled in the MSI-X vector table.
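To make the conversion in fm10k_iov_configure_tc_pf concrete: a requested rate of 1000 Mb/s gives tc_rate = (1000 * 128) / 125 = 1024; since 1000 < 4000 the interval is doubled, so the limiter is programmed with 1024 bytes per 8.192 usec window, which is exactly 10^9 b/s * 8.192 us / 8 bits per byte. A 10000 Mb/s request gives 10240, which is then halved to 5120 bytes per 4.096 usec, again matching 10^10 b/s * 4.096 us / 8.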
+ **/ +STATIC s32 fm10k_iov_assign_int_moderator_pf(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 vf_v_idx, vf_v_limit, i; + + /* verify vf is in range */ + if (vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* determine vector offset and count */ + vf_v_idx = fm10k_vf_vector_index(hw, vf_idx); + vf_v_limit = vf_v_idx + fm10k_vectors_per_pool(hw); + + /* search for first vector that is not masked */ + for (i = vf_v_limit - 1; i > vf_v_idx; i--) { + if (!FM10K_READ_REG(hw, FM10K_MSIX_VECTOR_MASK(i))) + break; + } + + /* reset linked list so it now includes our active vectors */ + if (vf_idx == (hw->iov.num_vfs - 1)) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), i); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_limit), i); + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_assign_default_mac_vlan_pf - Assign a MAC and VLAN to VF + * @hw: pointer to the HW structure + * @vf_info: pointer to VF information structure + * + * Assign a MAC address and default VLAN to a VF and notify it of the update + **/ +STATIC s32 fm10k_iov_assign_default_mac_vlan_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u16 qmap_stride, queues_per_pool, vf_q_idx, timeout, qmap_idx, i; + u32 msg[4], txdctl, txqctl, tdbal = 0, tdbah = 0; + s32 err = FM10K_SUCCESS; + u16 vf_idx, vf_vid; + + /* verify vf is in range */ + if (!vf_info || vf_info->vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* determine qmap offsets and counts */ + qmap_stride = (hw->iov.num_vfs > 8) ? 32 : 256; + queues_per_pool = fm10k_queues_per_pool(hw); + + /* calculate starting index for queues */ + vf_idx = vf_info->vf_idx; + vf_q_idx = fm10k_vf_queue_index(hw, vf_idx); + qmap_idx = qmap_stride * vf_idx; + + /* MAP Tx queue back to 0 temporarily, and disable it */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(vf_q_idx), 0); + + /* determine correct default VLAN ID */ + if (vf_info->pf_vid) + vf_vid = vf_info->pf_vid | FM10K_VLAN_CLEAR; + else + vf_vid = vf_info->sw_vid; + + /* generate MAC_ADDR request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_DEFAULT_MAC, + vf_info->mac, vf_vid); + + /* load onto outgoing mailbox, ignore any errors on enqueue */ + if (vf_info->mbx.ops.enqueue_tx) + vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg); + + /* verify ring has disabled before modifying base address registers */ + txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(vf_q_idx)); + for (timeout = 0; txdctl & FM10K_TXDCTL_ENABLE; timeout++) { + /* limit ourselves to a 1ms timeout */ + if (timeout == 10) { + err = FM10K_ERR_DMA_PENDING; + goto err_out; + } + + usec_delay(100); + txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(vf_q_idx)); + } + + /* Update base address registers to contain MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac)) { + tdbal = (((u32)vf_info->mac[3]) << 24) | + (((u32)vf_info->mac[4]) << 16) | + (((u32)vf_info->mac[5]) << 8); + + tdbah = (((u32)0xFF) << 24) | + (((u32)vf_info->mac[0]) << 16) | + (((u32)vf_info->mac[1]) << 8) | + ((u32)vf_info->mac[2]); + } + + /* Record the base address into queue 0 */ + FM10K_WRITE_REG(hw, FM10K_TDBAL(vf_q_idx), tdbal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(vf_q_idx), tdbah); + +err_out: + /* configure Queue control register */ + txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) & + FM10K_TXQCTL_VID_MASK; + txqctl |= (vf_idx << FM10K_TXQCTL_TC_SHIFT) | + FM10K_TXQCTL_VF | vf_idx; + + /* assign VID */ + for (i = 0; i < queues_per_pool; i++) + FM10K_WRITE_REG(hw, FM10K_TXQCTL(vf_q_idx + i), 
txqctl); + + /* restore the queue back to VF ownership */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx); + return err; +} + +/** + * fm10k_iov_reset_resources_pf - Reassign queues and interrupts to a VF + * @hw: pointer to the HW structure + * @vf_info: pointer to VF information structure + * + * Reassign the interrupts and queues to a VF following an FLR + **/ +STATIC s32 fm10k_iov_reset_resources_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u16 qmap_stride, queues_per_pool, vf_q_idx, qmap_idx; + u32 tdbal = 0, tdbah = 0, txqctl, rxqctl; + u16 vf_v_idx, vf_v_limit, vf_vid; + u8 vf_idx = vf_info->vf_idx; + int i; + + /* verify vf is in range */ + if (vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* clear event notification of VF FLR */ + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(vf_idx / 32), 1 << (vf_idx % 32)); + + /* force timeout and then disconnect the mailbox */ + vf_info->mbx.timeout = 0; + if (vf_info->mbx.ops.disconnect) + vf_info->mbx.ops.disconnect(hw, &vf_info->mbx); + + /* determine vector offset and count */ + vf_v_idx = fm10k_vf_vector_index(hw, vf_idx); + vf_v_limit = vf_v_idx + fm10k_vectors_per_pool(hw); + + /* determine qmap offsets and counts */ + qmap_stride = (hw->iov.num_vfs > 8) ? 32 : 256; + queues_per_pool = fm10k_queues_per_pool(hw); + qmap_idx = qmap_stride * vf_idx; + + /* make all the queues inaccessible to the VF */ + for (i = qmap_idx; i < (qmap_idx + qmap_stride); i++) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(i), 0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(i), 0); + } + + /* calculate starting index for queues */ + vf_q_idx = fm10k_vf_queue_index(hw, vf_idx); + + /* determine correct default VLAN ID */ + if (vf_info->pf_vid) + vf_vid = vf_info->pf_vid; + else + vf_vid = vf_info->sw_vid; + + /* configure Queue control register */ + txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) | + (vf_idx << FM10K_TXQCTL_TC_SHIFT) | + FM10K_TXQCTL_VF | vf_idx; + rxqctl = FM10K_RXQCTL_VF | (vf_idx << FM10K_RXQCTL_VF_SHIFT); + + /* stop further DMA and reset queue ownership back to VF */ + for (i = vf_q_idx; i < (queues_per_pool + vf_q_idx); i++) { + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), txqctl); + FM10K_WRITE_REG(hw, FM10K_RXDCTL(i), + FM10K_RXDCTL_WRITE_BACK_MIN_DELAY | + FM10K_RXDCTL_DROP_ON_EMPTY); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), rxqctl); + } + + /* reset TC with -1 credits and no quanta to prevent transmit */ + FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(vf_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TC_RATE(vf_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(vf_idx), + FM10K_TC_CREDIT_CREDIT_MASK); + + /* update our first entry in the table based on previous VF */ + if (!vf_idx) + hw->mac.ops.update_int_moderator(hw); + else + hw->iov.ops.assign_int_moderator(hw, vf_idx - 1); + + /* reset linked list so it now includes our active vectors */ + if (vf_idx == (hw->iov.num_vfs - 1)) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), vf_v_idx); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_limit), vf_v_idx); + + /* link remaining vectors so that next points to previous */ + for (vf_v_idx++; vf_v_idx < vf_v_limit; vf_v_idx++) + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_idx), vf_v_idx - 1); + + /* zero out MBMEM, VLAN_TABLE, RETA, RSSRK, and MRQC registers */ + for (i = FM10K_VFMBMEM_LEN; i--;) + FM10K_WRITE_REG(hw, FM10K_MBMEM_VF(vf_idx, i), 0); + for (i = FM10K_VLAN_TABLE_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_VLAN_TABLE(vf_info->vsi, i), 0); + for (i = FM10K_RETA_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_RETA(vf_info->vsi, i), 0); + for 
(i = FM10K_RSSRK_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_RSSRK(vf_info->vsi, i), 0); + FM10K_WRITE_REG(hw, FM10K_MRQC(vf_info->vsi), 0); + + /* Update base address registers to contain MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac)) { + tdbal = (((u32)vf_info->mac[3]) << 24) | + (((u32)vf_info->mac[4]) << 16) | + (((u32)vf_info->mac[5]) << 8); + tdbah = (((u32)0xFF) << 24) | + (((u32)vf_info->mac[0]) << 16) | + (((u32)vf_info->mac[1]) << 8) | + ((u32)vf_info->mac[2]); + } + + /* map queue pairs back to VF from last to first */ + for (i = queues_per_pool; i--;) { + FM10K_WRITE_REG(hw, FM10K_TDBAL(vf_q_idx + i), tdbal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(vf_q_idx + i), tdbah); + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx + i), vf_q_idx + i); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx + i), vf_q_idx + i); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_set_lport_pf - Assign and enable a logical port for a given VF + * @hw: pointer to hardware structure + * @vf_info: pointer to VF information structure + * @lport_idx: Logical port offset from the hardware glort + * @flags: Set of capability flags to extend port beyond basic functionality + * + * This function allows enabling a VF port by assigning it a GLORT and + * setting the flags so that it can enable an Rx mode. + **/ +STATIC s32 fm10k_iov_set_lport_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info, + u16 lport_idx, u8 flags) +{ + u16 glort = (hw->mac.dglort_map + lport_idx) & FM10K_DGLORTMAP_NONE; + + DEBUGFUNC("fm10k_iov_set_lport_state_pf"); + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + vf_info->vf_flags = flags | FM10K_VF_FLAG_NONE_CAPABLE; + vf_info->glort = glort; + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_reset_lport_pf - Disable a logical port for a given VF + * @hw: pointer to hardware structure + * @vf_info: pointer to VF information structure + * + * This function disables a VF port by stripping it of a GLORT and + * setting the flags so that it cannot enable any Rx mode. + **/ +STATIC void fm10k_iov_reset_lport_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u32 msg[1]; + + DEBUGFUNC("fm10k_iov_reset_lport_state_pf"); + + /* need to disable the port if it is already enabled */ + if (FM10K_VF_FLAG_ENABLED(vf_info)) { + /* notify switch that this port has been disabled */ + fm10k_update_lport_state_pf(hw, vf_info->glort, 1, false); + + /* generate port state response to notify VF it is not ready */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg); + } + + /* clear flags and glort if it exists */ + vf_info->vf_flags = 0; + vf_info->glort = 0; +} + +/** + * fm10k_iov_update_stats_pf - Updates hardware related statistics for VFs + * @hw: pointer to hardware structure + * @q: stats for all queues of a VF + * @vf_idx: index of VF + * + * This function collects queue stats for VFs. 
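For reference, the pool arithmetic used here and in fm10k_iov_assign_resources_pf works out as follows (taking FM10K_MAX_QUEUES as 256, the limit implied by the queue_b check in fm10k_configure_dglort_map_pf): with 64 pools fm10k_queues_per_pool() returns 2, so fm10k_vf_queue_index(hw, 0) = 256 - 2 * 64 = 128; VF 0 therefore owns queues 128-129, VF 63 owns queues 254-255, and everything below the first VF queue stays with the PF.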
+ **/
+STATIC void fm10k_iov_update_stats_pf(struct fm10k_hw *hw,
+ struct fm10k_hw_stats_q *q,
+ u16 vf_idx)
+{
+ u32 idx, qpp;
+
+ /* get stats for all of the queues */
+ qpp = fm10k_queues_per_pool(hw);
+ idx = fm10k_vf_queue_index(hw, vf_idx);
+ fm10k_update_hw_stats_q(hw, q, idx, qpp);
+}
+
+STATIC s32 fm10k_iov_report_timestamp_pf(struct fm10k_hw *hw,
+ struct fm10k_vf_info *vf_info,
+ u64 timestamp)
+{
+ u32 msg[4];
+
+ /* generate a 1588 timestamp message carrying the value to the VF */
+ fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_1588);
+ fm10k_tlv_attr_put_u64(msg, FM10K_1588_MSG_TIMESTAMP, timestamp);
+
+ return vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg);
+}
+
+/**
+ * fm10k_iov_msg_msix_pf - Message handler for MSI-X request from VF
+ * @hw: Pointer to hardware structure
+ * @results: Pointer array to message, results[0] is pointer to message
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This function is a default handler for MSI-X requests from the VF. The
+ * assumption is that in this case it is acceptable to just directly
+ * hand off the message from the VF to the underlying shared code.
+ **/
+s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *hw, u32 **results,
+ struct fm10k_mbx_info *mbx)
+{
+ struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx;
+ u8 vf_idx = vf_info->vf_idx;
+
+ UNREFERENCED_1PARAMETER(results);
+ DEBUGFUNC("fm10k_iov_msg_msix_pf");
+
+ return hw->iov.ops.assign_int_moderator(hw, vf_idx);
+}
+
+/**
+ * fm10k_iov_msg_mac_vlan_pf - Message handler for MAC/VLAN request from VF
+ * @hw: Pointer to hardware structure
+ * @results: Pointer array to message, results[0] is pointer to message
+ * @mbx: Pointer to mailbox information structure
+ *
+ * This function is a default handler for MAC/VLAN requests from the VF.
+ * The assumption is that in this case it is acceptable to just directly
+ * hand off the message from the VF to the underlying shared code.
+ **/ +s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx; + int err = FM10K_SUCCESS; + u8 mac[ETH_ALEN]; + u32 *result; + u16 vlan; + u32 vid; + + DEBUGFUNC("fm10k_iov_msg_mac_vlan_pf"); + + /* we shouldn't be updating rules on a disabled interface */ + if (!FM10K_VF_FLAG_ENABLED(vf_info)) + err = FM10K_ERR_PARAM; + + if (!err && !!results[FM10K_MAC_VLAN_MSG_VLAN]) { + result = results[FM10K_MAC_VLAN_MSG_VLAN]; + + /* record VLAN id requested */ + err = fm10k_tlv_attr_get_u32(result, &vid); + if (err) + return err; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vid || (vid == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vid |= vf_info->pf_vid; + else + vid |= vf_info->sw_vid; + } else if (vid != vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* update VSI info for VF in regards to VLAN table */ + err = hw->mac.ops.update_vlan(hw, vid, vf_info->vsi, + !(vid & FM10K_VLAN_CLEAR)); + } + + if (!err && !!results[FM10K_MAC_VLAN_MSG_MAC]) { + result = results[FM10K_MAC_VLAN_MSG_MAC]; + + /* record unicast MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan); + if (err) + return err; + + /* block attempts to set MAC for a locked device */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac) && + memcmp(mac, vf_info->mac, ETH_ALEN)) + return FM10K_ERR_PARAM; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vlan || (vlan == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vlan |= vf_info->pf_vid; + else + vlan |= vf_info->sw_vid; + } else if (vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* notify switch of request for new unicast address */ + err = hw->mac.ops.update_uc_addr(hw, vf_info->glort, mac, vlan, + !(vlan & FM10K_VLAN_CLEAR), 0); + } + + if (!err && !!results[FM10K_MAC_VLAN_MSG_MULTICAST]) { + result = results[FM10K_MAC_VLAN_MSG_MULTICAST]; + + /* record multicast MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan); + if (err) + return err; + + /* verify that the VF is allowed to request multicast */ + if (!(vf_info->vf_flags & FM10K_VF_FLAG_MULTI_ENABLED)) + return FM10K_ERR_PARAM; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vlan || (vlan == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vlan |= vf_info->pf_vid; + else + vlan |= vf_info->sw_vid; + } else if (vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* notify switch of request for new multicast address */ + err = hw->mac.ops.update_mc_addr(hw, vf_info->glort, mac, + !(vlan & FM10K_VLAN_CLEAR), 0); + } + + return err; +} + +/** + * fm10k_iov_supported_xcast_mode_pf - Determine best match for xcast mode + * @vf_info: VF info structure containing capability flags + * @mode: Requested xcast mode + * + * This function outputs the mode that most closely matches the requested + * mode. 
If not modes match it will request we disable the port + **/ +STATIC u8 fm10k_iov_supported_xcast_mode_pf(struct fm10k_vf_info *vf_info, + u8 mode) +{ + u8 vf_flags = vf_info->vf_flags; + + /* match up mode to capabilities as best as possible */ + switch (mode) { + case FM10K_XCAST_MODE_PROMISC: + if (vf_flags & FM10K_VF_FLAG_PROMISC_CAPABLE) + return FM10K_XCAST_MODE_PROMISC; + /* fallthough */ + case FM10K_XCAST_MODE_ALLMULTI: + if (vf_flags & FM10K_VF_FLAG_ALLMULTI_CAPABLE) + return FM10K_XCAST_MODE_ALLMULTI; + /* fallthough */ + case FM10K_XCAST_MODE_MULTI: + if (vf_flags & FM10K_VF_FLAG_MULTI_CAPABLE) + return FM10K_XCAST_MODE_MULTI; + /* fallthough */ + case FM10K_XCAST_MODE_NONE: + if (vf_flags & FM10K_VF_FLAG_NONE_CAPABLE) + return FM10K_XCAST_MODE_NONE; + /* fallthough */ + default: + break; + } + + /* disable interface as it should not be able to request any */ + return FM10K_XCAST_MODE_DISABLE; +} + +/** + * fm10k_iov_msg_lport_state_pf - Message handler for port state requests + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Pointer to mailbox information structure + * + * This function is a default handler for port state requests. The port + * state requests for now are basic and consist of enabling or disabling + * the port. + **/ +s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx; + u32 *result; + s32 err = FM10K_SUCCESS; + u32 msg[2]; + u8 mode = 0; + + DEBUGFUNC("fm10k_iov_msg_lport_state_pf"); + + /* verify VF is allowed to enable even minimal mode */ + if (!(vf_info->vf_flags & FM10K_VF_FLAG_NONE_CAPABLE)) + return FM10K_ERR_PARAM; + + if (!!results[FM10K_LPORT_STATE_MSG_XCAST_MODE]) { + result = results[FM10K_LPORT_STATE_MSG_XCAST_MODE]; + + /* XCAST mode update requested */ + err = fm10k_tlv_attr_get_u8(result, &mode); + if (err) + return FM10K_ERR_PARAM; + + /* prep for possible demotion depending on capabilities */ + mode = fm10k_iov_supported_xcast_mode_pf(vf_info, mode); + + /* if mode is not currently enabled, enable it */ + if (!(FM10K_VF_FLAG_ENABLED(vf_info) & (1 << mode))) + fm10k_update_xcast_mode_pf(hw, vf_info->glort, mode); + + /* swap mode back to a bit flag */ + mode = FM10K_VF_FLAG_SET_MODE(mode); + } else if (!results[FM10K_LPORT_STATE_MSG_DISABLE]) { + /* need to disable the port if it is already enabled */ + if (FM10K_VF_FLAG_ENABLED(vf_info)) + err = fm10k_update_lport_state_pf(hw, vf_info->glort, + 1, false); + + /* when enabling the port we should reset the rate limiters */ + hw->iov.ops.configure_tc(hw, vf_info->vf_idx, vf_info->rate); + + /* set mode for minimal functionality */ + mode = FM10K_VF_FLAG_SET_MODE_NONE; + + /* generate port state response to notify VF it is ready */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + fm10k_tlv_attr_put_bool(msg, FM10K_LPORT_STATE_MSG_READY); + mbx->ops.enqueue_tx(hw, mbx, msg); + } + + /* if enable state toggled note the update */ + if (!err && (!FM10K_VF_FLAG_ENABLED(vf_info) != !mode)) + err = fm10k_update_lport_state_pf(hw, vf_info->glort, 1, + !!mode); + + /* if state change succeeded, then update our stored state */ + mode |= FM10K_VF_FLAG_CAPABLE(vf_info); + if (!err) + vf_info->vf_flags = mode; + + return err; +} + +const struct fm10k_msg_data fm10k_iov_msg_data_pf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MSIX_HANDLER(fm10k_iov_msg_msix_pf), + 
FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_iov_msg_mac_vlan_pf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_iov_msg_lport_state_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_update_stats_hw_pf - Updates hardware related statistics of PF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function collects and aggregates global and per queue hardware + * statistics. + **/ +STATIC void fm10k_update_hw_stats_pf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + u32 timeout, ur, ca, um, xec, vlan_drop, loopback_drop, nodesc_drop; + u32 id, id_prev; + + DEBUGFUNC("fm10k_update_hw_stats_pf"); + + /* Use Tx queue 0 as a canary to detect a reset */ + id = FM10K_READ_REG(hw, FM10K_TXQCTL(0)); + + /* Read Global Statistics */ + do { + timeout = fm10k_read_hw_stats_32b(hw, FM10K_STATS_TIMEOUT, + &stats->timeout); + ur = fm10k_read_hw_stats_32b(hw, FM10K_STATS_UR, &stats->ur); + ca = fm10k_read_hw_stats_32b(hw, FM10K_STATS_CA, &stats->ca); + um = fm10k_read_hw_stats_32b(hw, FM10K_STATS_UM, &stats->um); + xec = fm10k_read_hw_stats_32b(hw, FM10K_STATS_XEC, &stats->xec); + vlan_drop = fm10k_read_hw_stats_32b(hw, FM10K_STATS_VLAN_DROP, + &stats->vlan_drop); + loopback_drop = fm10k_read_hw_stats_32b(hw, + FM10K_STATS_LOOPBACK_DROP, + &stats->loopback_drop); + nodesc_drop = fm10k_read_hw_stats_32b(hw, + FM10K_STATS_NODESC_DROP, + &stats->nodesc_drop); + + /* if value has not changed then we have consistent data */ + id_prev = id; + id = FM10K_READ_REG(hw, FM10K_TXQCTL(0)); + } while ((id ^ id_prev) & FM10K_TXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id &= FM10K_TXQCTL_ID_MASK; + id |= FM10K_STAT_VALID; + + /* Update Global Statistics */ + if (stats->stats_idx == id) { + stats->timeout.count += timeout; + stats->ur.count += ur; + stats->ca.count += ca; + stats->um.count += um; + stats->xec.count += xec; + stats->vlan_drop.count += vlan_drop; + stats->loopback_drop.count += loopback_drop; + stats->nodesc_drop.count += nodesc_drop; + } + + /* Update bases and record current PF id */ + fm10k_update_hw_base_32b(&stats->timeout, timeout); + fm10k_update_hw_base_32b(&stats->ur, ur); + fm10k_update_hw_base_32b(&stats->ca, ca); + fm10k_update_hw_base_32b(&stats->um, um); + fm10k_update_hw_base_32b(&stats->xec, xec); + fm10k_update_hw_base_32b(&stats->vlan_drop, vlan_drop); + fm10k_update_hw_base_32b(&stats->loopback_drop, loopback_drop); + fm10k_update_hw_base_32b(&stats->nodesc_drop, nodesc_drop); + stats->stats_idx = id; + + /* Update Queue Statistics */ + fm10k_update_hw_stats_q(hw, stats->q, 0, hw->mac.max_queues); +} + +/** + * fm10k_rebind_hw_stats_pf - Resets base for hardware statistics of PF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function resets the base for global and per queue hardware + * statistics. 
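How fm10k_update_hw_stats_pf above and fm10k_rebind_hw_stats_pf that follows are meant to be paired can be sketched as follows (illustration only, not code from this patch; stats is assumed to be driver-owned storage):

    struct fm10k_hw_stats stats;

    memset(&stats, 0, sizeof(stats));
    hw->mac.ops.rebind_hw_stats(hw, &stats);  /* re-establish bases after a reset */
    /* ... later, on each stats poll ... */
    hw->mac.ops.update_hw_stats(hw, &stats);  /* accumulate deltas since the rebind */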
+ **/ +STATIC void fm10k_rebind_hw_stats_pf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_rebind_hw_stats_pf"); + + /* Unbind Global Statistics */ + fm10k_unbind_hw_stats_32b(&stats->timeout); + fm10k_unbind_hw_stats_32b(&stats->ur); + fm10k_unbind_hw_stats_32b(&stats->ca); + fm10k_unbind_hw_stats_32b(&stats->um); + fm10k_unbind_hw_stats_32b(&stats->xec); + fm10k_unbind_hw_stats_32b(&stats->vlan_drop); + fm10k_unbind_hw_stats_32b(&stats->loopback_drop); + fm10k_unbind_hw_stats_32b(&stats->nodesc_drop); + + /* Unbind Queue Statistics */ + fm10k_unbind_hw_stats_q(stats->q, 0, hw->mac.max_queues); + + /* Reinitialize bases for all stats */ + fm10k_update_hw_stats_pf(hw, stats); +} + +/** + * fm10k_set_dma_mask_pf - Configures PhyAddrSpace to limit DMA to system + * @hw: pointer to hardware structure + * @dma_mask: 64 bit DMA mask required for platform + * + * This function sets the PHYADDR.PhyAddrSpace bits for the endpoint in order + * to limit the access to memory beyond what is physically in the system. + **/ +STATIC void fm10k_set_dma_mask_pf(struct fm10k_hw *hw, u64 dma_mask) +{ + /* we need to write the upper 32 bits of DMA mask to PhyAddrSpace */ + u32 phyaddr = (u32)(dma_mask >> 32); + + DEBUGFUNC("fm10k_set_dma_mask_pf"); + + FM10K_WRITE_REG(hw, FM10K_PHYADDR, phyaddr); +} + +/** + * fm10k_get_fault_pf - Record a fault in one of the interface units + * @hw: pointer to hardware structure + * @type: pointer to fault type register offset + * @fault: pointer to memory location to record the fault + * + * Record the fault register contents to the fault data structure and + * clear the entry from the register. + * + * Returns ERR_PARAM if invalid register is specified or no error is present. + **/ +STATIC s32 fm10k_get_fault_pf(struct fm10k_hw *hw, int type, + struct fm10k_fault *fault) +{ + u32 func; + + DEBUGFUNC("fm10k_get_fault_pf"); + + /* verify the fault register is in range and is aligned */ + switch (type) { + case FM10K_PCA_FAULT: + case FM10K_THI_FAULT: + case FM10K_FUM_FAULT: + break; + default: + return FM10K_ERR_PARAM; + } + + /* only service faults that are valid */ + func = FM10K_READ_REG(hw, type + FM10K_FAULT_FUNC); + if (!(func & FM10K_FAULT_FUNC_VALID)) + return FM10K_ERR_PARAM; + + /* read remaining fields */ + fault->address = FM10K_READ_REG(hw, type + FM10K_FAULT_ADDR_HI); + fault->address <<= 32; + fault->address = FM10K_READ_REG(hw, type + FM10K_FAULT_ADDR_LO); + fault->specinfo = FM10K_READ_REG(hw, type + FM10K_FAULT_SPECINFO); + + /* clear valid bit to allow for next error */ + FM10K_WRITE_REG(hw, type + FM10K_FAULT_FUNC, FM10K_FAULT_FUNC_VALID); + + /* Record which function triggered the error */ + if (func & FM10K_FAULT_FUNC_PF) + fault->func = 0; + else + fault->func = 1 + ((func & FM10K_FAULT_FUNC_VF_MASK) >> + FM10K_FAULT_FUNC_VF_SHIFT); + + /* record fault type */ + fault->type = func & FM10K_FAULT_FUNC_TYPE_MASK; + + return FM10K_SUCCESS; +} + +/** + * fm10k_request_lport_map_pf - Request LPORT map from the switch API + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_request_lport_map_pf(struct fm10k_hw *hw) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[1]; + + DEBUGFUNC("fm10k_request_lport_pf"); + + /* issue request asking for LPORT map */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_LPORT_MAP); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_get_host_state_pf - Returns the state of the switch and mailbox + * @hw: pointer to hardware structure + * 
@switch_ready: pointer to boolean value that will record switch state + * + * This funciton will check the DMA_CTRL2 register and mailbox in order + * to determine if the switch is ready for the PF to begin requesting + * addresses and mapping traffic to the local interface. + **/ +STATIC s32 fm10k_get_host_state_pf(struct fm10k_hw *hw, bool *switch_ready) +{ + s32 ret_val = FM10K_SUCCESS; + u32 dma_ctrl2; + + DEBUGFUNC("fm10k_get_host_state_pf"); + + /* verify the switch is ready for interaction */ + dma_ctrl2 = FM10K_READ_REG(hw, FM10K_DMA_CTRL2); + if (!(dma_ctrl2 & FM10K_DMA_CTRL2_SWITCH_READY)) + goto out; + + /* retrieve generic host state info */ + ret_val = fm10k_get_host_state_generic(hw, switch_ready); + if (ret_val) + goto out; + + /* interface cannot receive traffic without logical ports */ + if (hw->mac.dglort_map == FM10K_DGLORTMAP_NONE) + ret_val = fm10k_request_lport_map_pf(hw); + +out: + return ret_val; +} + +/* This structure defines the attibutes to be parsed below */ +const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_PF_ATTR_ID_LPORT_MAP), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_lport_map_pf - Message handler for lport_map message from SM + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler configures the lport mapping based on the reply from the + * switch API. + **/ +s32 fm10k_msg_lport_map_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u16 glort, mask; + u32 dglort_map; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_lport_map_pf"); + + err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_LPORT_MAP], + &dglort_map); + if (err) + return err; + + /* extract values out of the header */ + glort = FM10K_MSG_HDR_FIELD_GET(dglort_map, LPORT_MAP_GLORT); + mask = FM10K_MSG_HDR_FIELD_GET(dglort_map, LPORT_MAP_MASK); + + /* verify mask is set and none of the masked bits in glort are set */ + if (!mask || (glort & ~mask)) + return FM10K_ERR_PARAM; + + /* verify the mask is contiguous, and that it is 1's followed by 0's */ + if (((~(mask - 1) & mask) + mask) & FM10K_DGLORTMAP_NONE) + return FM10K_ERR_PARAM; + + /* record the glort, mask, and port count */ + hw->mac.dglort_map = dglort_map; + + return FM10K_SUCCESS; +} + +const struct fm10k_tlv_attr fm10k_update_pvid_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_PF_ATTR_ID_UPDATE_PVID), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_update_pvid_pf - Message handler for port VLAN message from SM + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler configures the default VLAN for the PF + **/ +s32 fm10k_msg_update_pvid_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u16 glort, pvid; + u32 pvid_update; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_update_pvid_pf"); + + err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_UPDATE_PVID], + &pvid_update); + if (err) + return err; + + /* extract values from the pvid update */ + glort = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_GLORT); + pvid = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_PVID); + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* verify VID is valid */ + if (pvid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* record the port VLAN ID 
value */ + hw->mac.default_vid = pvid; + + return FM10K_SUCCESS; +} + +/** + * fm10k_record_global_table_data - Move global table data to swapi table info + * @from: pointer to source table data structure + * @to: pointer to destination table info structure + * + * This function is will copy table_data to the table_info contained in + * the hw struct. + **/ +static void fm10k_record_global_table_data(struct fm10k_global_table_data *from, + struct fm10k_swapi_table_info *to) +{ + /* convert from le32 struct to CPU byte ordered values */ + to->used = FM10K_LE32_TO_CPU(from->used); + to->avail = FM10K_LE32_TO_CPU(from->avail); +} + +const struct fm10k_tlv_attr fm10k_err_msg_attr[] = { + FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_ERR, + sizeof(struct fm10k_swapi_error)), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_err_pf - Message handler for error reply + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler will capture the data for any error replies to previous + * messages that the PF has sent. + **/ +s32 fm10k_msg_err_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_swapi_error err_msg; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_err_pf"); + + /* extract structure from message */ + err = fm10k_tlv_attr_get_le_struct(results[FM10K_PF_ATTR_ID_ERR], + &err_msg, sizeof(err_msg)); + if (err) + return err; + + /* record table status */ + fm10k_record_global_table_data(&err_msg.mac, &hw->swapi.mac); + fm10k_record_global_table_data(&err_msg.nexthop, &hw->swapi.nexthop); + fm10k_record_global_table_data(&err_msg.ffu, &hw->swapi.ffu); + + /* record SW API status value */ + hw->swapi.status = FM10K_LE32_TO_CPU(err_msg.status); + + return FM10K_SUCCESS; +} + +const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[] = { + FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_1588_TIMESTAMP, + sizeof(struct fm10k_swapi_1588_timestamp)), + FM10K_TLV_ATTR_LAST +}; + +/* currently there is no shared 1588 timestamp handler */ + +/** + * fm10k_request_tx_timestamp_mode_pf - Request a specific Tx timestamping mode + * @hw: pointer to hardware structure + * @glort: base resource tag for this request + * @mode: integer value indicating the requested mode + * + * This function will attempt to request a specific timestamp mode for the + * port so that it can receive Tx timestamp messages. 
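As a concrete case for fm10k_msg_update_pvid_pf above: per the FM10K_MSG_UPDATE_PVID_* field layout in fm10k_pf.h (glort in bits 15:0, PVID in bits 31:16), a pvid_update attribute of 0x002A1000 decodes to glort 0x1000 and port VLAN 42, and the handler records 42 in hw->mac.default_vid provided the glort is valid and the VID is below FM10K_VLAN_TABLE_VID_MAX.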
+ **/ +STATIC s32 fm10k_request_tx_timestamp_mode_pf(struct fm10k_hw *hw, + u16 glort, + u8 mode) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3], timestamp_mode; + + DEBUGFUNC("fm10k_request_timestamp_mode_pf"); + + if (mode > FM10K_TIMESTAMP_MODE_PEP_TO_ANY) + return FM10K_ERR_PARAM; + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* write timestamp mode as a single u32 value, + * lower 16 bits: glort + * upper 16 bits: mode + */ + timestamp_mode = ((u32)mode << 16) | glort; + + /* generate message requesting change to xcast mode */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_TX_TIMESTAMP_MODE); + fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_TIMESTAMP_MODE_REQ, timestamp_mode); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_adjust_systime_pf - Adjust systime frequency + * @hw: pointer to hardware structure + * @ppb: adjustment rate in parts per billion + * + * This function will adjust the SYSTIME_CFG register contained in BAR 4 + * if this function is supported for BAR 4 access. The adjustment amount + * is based on the parts per billion value provided and adjusted to a + * value based on parts per 2^48 clock cycles. + * + * If adjustment is not supported or the requested value is too large + * we will return an error. + **/ +STATIC s32 fm10k_adjust_systime_pf(struct fm10k_hw *hw, s32 ppb) +{ + u64 systime_adjust; + + DEBUGFUNC("fm10k_adjust_systime_vf"); + + /* if sw_addr is not set we don't have switch register access */ + if (!hw->sw_addr) + return ppb ? FM10K_ERR_PARAM : FM10K_SUCCESS; + + /* we must convert the value from parts per billion to parts per + * 2^48 cycles. In addition I have opted to only use the 30 most + * significant bits of the adjustment value as the 8 least + * significant bits are located in another register and represent + * a value significantly less than a part per billion, the result + * of dropping the 8 least significant bits is that the adjustment + * value is effectively multiplied by 2^8 when we write it. + * + * As a result of all this the math for this breaks down as follows: + * ppb / 10^9 == adjust * 2^8 / 2^48 + * If we solve this for adjust, and simplify it comes out as: + * ppb * 2^31 / 5^9 == adjust + */ + systime_adjust = (ppb < 0) ? -ppb : ppb; + systime_adjust <<= 31; + do_div(systime_adjust, 1953125); + + /* verify the requested adjustment value is in range */ + if (systime_adjust > FM10K_SW_SYSTIME_ADJUST_MASK) + return FM10K_ERR_PARAM; + + if (ppb < 0) + systime_adjust |= FM10K_SW_SYSTIME_ADJUST_DIR_NEGATIVE; + + FM10K_WRITE_SW_REG(hw, FM10K_SW_SYSTIME_ADJUST, (u32)systime_adjust); + + return FM10K_SUCCESS; +} + +/** + * fm10k_read_systime_pf - Reads value of systime registers + * @hw: pointer to the hardware structure + * + * Function reads the content of 2 registers, combined to represent a 64 bit + * value measured in nanosecods. In order to guarantee the value is accurate + * we check the 32 most significant bits both before and after reading the + * 32 least significant bits to verify they didn't change as we were reading + * the registers. 
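Working through the arithmetic in fm10k_adjust_systime_pf above: a request of ppb = 100 yields systime_adjust = (100 << 31) / 1953125 = 109951; because the 8 least significant bits live in a separate register, the written value is effectively scaled by 2^8 when applied, and 109951 * 256 / 2^48 is roughly 1.0e-7, i.e. the requested 100 parts per billion.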
+ **/ +static u64 fm10k_read_systime_pf(struct fm10k_hw *hw) +{ + u32 systime_l, systime_h, systime_tmp; + + systime_h = fm10k_read_reg(hw, FM10K_SYSTIME + 1); + + do { + systime_tmp = systime_h; + systime_l = fm10k_read_reg(hw, FM10K_SYSTIME); + systime_h = fm10k_read_reg(hw, FM10K_SYSTIME + 1); + } while (systime_tmp != systime_h); + + return ((u64)systime_h << 32) | systime_l; +} + +static const struct fm10k_msg_data fm10k_msg_data_pf[] = { + FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf), + FM10K_PF_MSG_LPORT_MAP_HANDLER(fm10k_msg_lport_map_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_CREATE, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_DELETE, fm10k_msg_err_pf), + FM10K_PF_MSG_UPDATE_PVID_HANDLER(fm10k_msg_update_pvid_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_init_ops_pf - Inits func ptrs and MAC type + * @hw: pointer to hardware structure + * + * Initialize the function pointers and assign the MAC type for PF. + * Does not touch the hardware. + **/ +s32 fm10k_init_ops_pf(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + struct fm10k_iov_info *iov = &hw->iov; + + DEBUGFUNC("fm10k_init_ops_pf"); + + fm10k_init_ops_generic(hw); + + mac->ops.reset_hw = &fm10k_reset_hw_pf; + mac->ops.init_hw = &fm10k_init_hw_pf; + mac->ops.start_hw = &fm10k_start_hw_generic; + mac->ops.stop_hw = &fm10k_stop_hw_generic; + mac->ops.is_slot_appropriate = &fm10k_is_slot_appropriate_pf; + mac->ops.update_vlan = &fm10k_update_vlan_pf; + mac->ops.read_mac_addr = &fm10k_read_mac_addr_pf; + mac->ops.update_uc_addr = &fm10k_update_uc_addr_pf; + mac->ops.update_mc_addr = &fm10k_update_mc_addr_pf; + mac->ops.update_xcast_mode = &fm10k_update_xcast_mode_pf; + mac->ops.update_int_moderator = &fm10k_update_int_moderator_pf; + mac->ops.update_lport_state = &fm10k_update_lport_state_pf; + mac->ops.update_hw_stats = &fm10k_update_hw_stats_pf; + mac->ops.rebind_hw_stats = &fm10k_rebind_hw_stats_pf; + mac->ops.configure_dglort_map = &fm10k_configure_dglort_map_pf; + mac->ops.set_dma_mask = &fm10k_set_dma_mask_pf; + mac->ops.get_fault = &fm10k_get_fault_pf; + mac->ops.get_host_state = &fm10k_get_host_state_pf; + mac->ops.adjust_systime = &fm10k_adjust_systime_pf; + mac->ops.read_systime = &fm10k_read_systime_pf; + mac->ops.request_tx_timestamp_mode = &fm10k_request_tx_timestamp_mode_pf; + + mac->max_msix_vectors = fm10k_get_pcie_msix_count_generic(hw); + + iov->ops.assign_resources = &fm10k_iov_assign_resources_pf; + iov->ops.configure_tc = &fm10k_iov_configure_tc_pf; + iov->ops.assign_int_moderator = &fm10k_iov_assign_int_moderator_pf; + iov->ops.assign_default_mac_vlan = fm10k_iov_assign_default_mac_vlan_pf; + iov->ops.reset_resources = &fm10k_iov_reset_resources_pf; + iov->ops.set_lport = &fm10k_iov_set_lport_pf; + iov->ops.reset_lport = &fm10k_iov_reset_lport_pf; + iov->ops.update_stats = &fm10k_iov_update_stats_pf; + iov->ops.report_timestamp = &fm10k_iov_report_timestamp_pf; + + return fm10k_sm_mbx_init(hw, &hw->mbx, fm10k_msg_data_pf); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_pf.h b/lib/librte_pmd_fm10k/base/fm10k_pf.h new file mode 100644 index 0000000..f6c290a --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_pf.h @@ -0,0 +1,155 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. 
+ +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_PF_H_ +#define _FM10K_PF_H_ + +#include "fm10k_type.h" +#include "fm10k_common.h" + +bool fm10k_glort_valid_pf(struct fm10k_hw *hw, u16 glort); +u16 fm10k_queues_per_pool(struct fm10k_hw *hw); +u16 fm10k_vf_queue_index(struct fm10k_hw *hw, u16 vf_idx); + +enum fm10k_pf_tlv_msg_id_v1 { + FM10K_PF_MSG_ID_TEST = 0x000, /* msg ID reserved */ + FM10K_PF_MSG_ID_XCAST_MODES = 0x001, + FM10K_PF_MSG_ID_UPDATE_MAC_FWD_RULE = 0x002, + FM10K_PF_MSG_ID_LPORT_MAP = 0x100, + FM10K_PF_MSG_ID_LPORT_CREATE = 0x200, + FM10K_PF_MSG_ID_LPORT_DELETE = 0x201, + FM10K_PF_MSG_ID_CONFIG = 0x300, + FM10K_PF_MSG_ID_UPDATE_PVID = 0x400, + FM10K_PF_MSG_ID_CREATE_FLOW_TABLE = 0x501, + FM10K_PF_MSG_ID_DELETE_FLOW_TABLE = 0x502, + FM10K_PF_MSG_ID_UPDATE_FLOW = 0x503, + FM10K_PF_MSG_ID_DELETE_FLOW = 0x504, + FM10K_PF_MSG_ID_SET_FLOW_STATE = 0x505, + FM10K_PF_MSG_ID_GET_1588_INFO = 0x506, + FM10K_PF_MSG_ID_1588_TIMESTAMP = 0x701, + FM10K_PF_MSG_ID_TX_TIMESTAMP_MODE = 0x702, +}; + +enum fm10k_pf_tlv_attr_id_v1 { + FM10K_PF_ATTR_ID_ERR = 0x00, + FM10K_PF_ATTR_ID_LPORT_MAP = 0x01, + FM10K_PF_ATTR_ID_XCAST_MODE = 0x02, + FM10K_PF_ATTR_ID_MAC_UPDATE = 0x03, + FM10K_PF_ATTR_ID_VLAN_UPDATE = 0x04, + FM10K_PF_ATTR_ID_CONFIG = 0x05, + FM10K_PF_ATTR_ID_CREATE_FLOW_TABLE = 0x06, + FM10K_PF_ATTR_ID_DELETE_FLOW_TABLE = 0x07, + FM10K_PF_ATTR_ID_UPDATE_FLOW = 0x08, + FM10K_PF_ATTR_ID_FLOW_STATE = 0x09, + FM10K_PF_ATTR_ID_FLOW_HANDLE = 0x0A, + FM10K_PF_ATTR_ID_DELETE_FLOW = 0x0B, + FM10K_PF_ATTR_ID_PORT = 0x0C, + FM10K_PF_ATTR_ID_UPDATE_PVID = 0x0D, + FM10K_PF_ATTR_ID_1588_TIMESTAMP = 0x10, + FM10K_PF_ATTR_ID_TIMESTAMP_MODE_REQ = 0x11, + FM10K_PF_ATTR_ID_TIMESTAMP_MODE_RESP = 0x12, +}; + +#define FM10K_MSG_LPORT_MAP_GLORT_SHIFT 0 +#define FM10K_MSG_LPORT_MAP_GLORT_SIZE 16 +#define FM10K_MSG_LPORT_MAP_MASK_SHIFT 16 +#define FM10K_MSG_LPORT_MAP_MASK_SIZE 16 + +#define FM10K_MSG_UPDATE_PVID_GLORT_SHIFT 0 +#define FM10K_MSG_UPDATE_PVID_GLORT_SIZE 16 +#define FM10K_MSG_UPDATE_PVID_PVID_SHIFT 16 +#define 
FM10K_MSG_UPDATE_PVID_PVID_SIZE 16 + +struct fm10k_mac_update { + __le32 mac_lower; + __le16 mac_upper; + __le16 vlan; + __le16 glort; + u8 flags; + u8 action; +}; + +struct fm10k_global_table_data { + __le32 used; + __le32 avail; +}; + +struct fm10k_swapi_error { + __le32 status; + struct fm10k_global_table_data mac; + struct fm10k_global_table_data nexthop; + struct fm10k_global_table_data ffu; +}; + +struct fm10k_swapi_1588_timestamp { + __le64 egress; + __le64 ingress; + __le16 dglort; + __le16 sglort; +}; + +#define FM10K_PF_MSG_LPORT_CREATE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_CREATE, NULL, func) +#define FM10K_PF_MSG_LPORT_DELETE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_DELETE, NULL, func) +s32 fm10k_msg_lport_map_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[]; +#define FM10K_PF_MSG_LPORT_MAP_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_MAP, \ + fm10k_lport_map_msg_attr, func) +s32 fm10k_msg_update_pvid_pf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_update_pvid_msg_attr[]; +#define FM10K_PF_MSG_UPDATE_PVID_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_UPDATE_PVID, \ + fm10k_update_pvid_msg_attr, func) + +s32 fm10k_msg_err_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_err_msg_attr[]; +#define FM10K_PF_MSG_ERR_HANDLER(msg, func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_##msg, fm10k_err_msg_attr, func) + +extern const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[]; +#define FM10K_PF_MSG_1588_TIMESTAMP_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_1588_TIMESTAMP, \ + fm10k_1588_timestamp_msg_attr, func) + +s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +extern const struct fm10k_msg_data fm10k_iov_msg_data_pf[]; + +s32 fm10k_init_ops_pf(struct fm10k_hw *hw); +#endif /* _FM10K_PF_H */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_tlv.c b/lib/librte_pmd_fm10k/base/fm10k_tlv.c new file mode 100644 index 0000000..1d9d7d8 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_tlv.c @@ -0,0 +1,914 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_tlv.h" + +/** + * fm10k_tlv_msg_init - Initialize message block for TLV data storage + * @msg: Pointer to message block + * @msg_id: Message ID indicating message type + * + * This function return success if provided with a valid message pointer + **/ +s32 fm10k_tlv_msg_init(u32 *msg, u16 msg_id) +{ + DEBUGFUNC("fm10k_tlv_msg_init"); + + /* verify pointer is not NULL */ + if (!msg) + return FM10K_ERR_PARAM; + + *msg = (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT) | msg_id; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_null_string - Place null terminated string on message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @string: Pointer to string to be stored in attribute + * + * This function will reorder a string to be CPU endian and store it in + * the attribute buffer. It will return success if provided with a valid + * pointers. + **/ +s32 fm10k_tlv_attr_put_null_string(u32 *msg, u16 attr_id, + const unsigned char *string) +{ + u32 attr_data = 0, len = 0; + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_null_string"); + + /* verify pointers are not NULL */ + if (!string || !msg) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* copy string into local variable and then write to msg */ + do { + /* write data to message */ + if (len && !(len % 4)) { + attr[len / 4] = attr_data; + attr_data = 0; + } + + /* record character to offset location */ + attr_data |= (u32)(*string) << (8 * (len % 4)); + len++; + + /* test for NULL and then increment */ + } while (*(string++)); + + /* write last piece of data to message */ + attr[(len + 3) / 4] = attr_data; + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_null_string - Get null terminated string from attribute + * @attr: Pointer to attribute + * @string: Pointer to location of destination string + * + * This function pulls the string back out of the attribute and will place + * it in the array pointed by by string. It will return success if provided + * with a valid pointers. + **/ +s32 fm10k_tlv_attr_get_null_string(u32 *attr, unsigned char *string) +{ + u32 len; + + DEBUGFUNC("fm10k_tlv_attr_get_null_string"); + + /* verify pointers are not NULL */ + if (!string || !attr) + return FM10K_ERR_PARAM; + + len = *attr >> FM10K_TLV_LEN_SHIFT; + attr++; + + while (len--) + string[len] = (u8)(attr[len / 4] >> (8 * (len % 4))); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_mac_vlan - Store MAC/VLAN attribute in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @mac_addr: MAC address to be stored + * + * This function will reorder a MAC address to be CPU endian and store it + * in the attribute buffer. 
It will return success if provided with a + * valid pointers. + **/ +s32 fm10k_tlv_attr_put_mac_vlan(u32 *msg, u16 attr_id, + const u8 *mac_addr, u16 vlan) +{ + u32 len = ETH_ALEN << FM10K_TLV_LEN_SHIFT; + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_mac_vlan"); + + /* verify pointers are not NULL */ + if (!msg || !mac_addr) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* record attribute header, update message length */ + attr[0] = len | attr_id; + + /* copy value into local variable and then write to msg */ + attr[1] = FM10K_LE32_TO_CPU(*(const __le32 *)&mac_addr[0]); + attr[2] = FM10K_LE16_TO_CPU(*(const __le16 *)&mac_addr[4]); + attr[2] |= (u32)vlan << 16; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_mac_vlan - Get MAC/VLAN stored in attribute + * @attr: Pointer to attribute + * @attr_id: Attribute ID + * @mac_addr: location of buffer to store MAC address + * + * This function pulls the MAC address back out of the attribute and will + * place it in the array pointed by by mac_addr. It will return success + * if provided with a valid pointers. + **/ +s32 fm10k_tlv_attr_get_mac_vlan(u32 *attr, u8 *mac_addr, u16 *vlan) +{ + DEBUGFUNC("fm10k_tlv_attr_get_mac_vlan"); + + /* verify pointers are not NULL */ + if (!mac_addr || !attr) + return FM10K_ERR_PARAM; + + *(__le32 *)&mac_addr[0] = FM10K_CPU_TO_LE32(attr[1]); + *(__le16 *)&mac_addr[4] = FM10K_CPU_TO_LE16((u16)(attr[2])); + *vlan = (u16)(attr[2] >> 16); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_bool - Add header indicating value "true" + * @msg: Pointer to message block + * @attr_id: Attribute ID + * + * This function will simply add an attribute header, the fact + * that the header is here means the attribute value is true, else + * it is false. The function will return success if provided with a + * valid pointers. + **/ +s32 fm10k_tlv_attr_put_bool(u32 *msg, u16 attr_id) +{ + DEBUGFUNC("fm10k_tlv_attr_put_bool"); + + /* verify pointers are not NULL */ + if (!msg) + return FM10K_ERR_PARAM; + + /* record attribute header */ + msg[FM10K_TLV_DWORD_LEN(*msg)] = attr_id; + + /* add header length to length */ + *msg += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_value - Store integer value attribute in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @value: Value to be written + * @len: Size of value + * + * This function will place an integer value of up to 8 bytes in size + * in a message attribute. The function will return success provided + * that msg is a valid pointer, and len is 1, 2, 4, or 8. 
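Putting the helpers together, the one-attribute request pattern used throughout fm10k_pf.c looks like this (a sketch only, not code from this patch; glort and count carry example values and hw is assumed to be an initialized struct fm10k_hw pointer):

    u32 msg[3];            /* message header + attribute header + one data word */
    u16 glort = 0x100, count = 1;
    s32 err;

    err = fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_LPORT_CREATE);
    if (!err)
        err = fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_PORT,
                                     ((u32)count << 16) | glort);
    if (!err)
        err = hw->mbx.ops.enqueue_tx(hw, &hw->mbx, msg);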
+ **/ +s32 fm10k_tlv_attr_put_value(u32 *msg, u16 attr_id, s64 value, u32 len) +{ + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_value"); + + /* verify non-null msg and len is 1, 2, 4, or 8 */ + if (!msg || !len || len > 8 || (len & (len - 1))) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + if (len < 4) { + attr[1] = (u32)value & ((0x1ul << (8 * len)) - 1); + } else { + attr[1] = (u32)value; + if (len > 4) + attr[2] = (u32)(value >> 32); + } + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_value - Get integer value stored in attribute + * @attr: Pointer to attribute + * @value: Pointer to destination buffer + * @len: Size of value + * + * This function will place an integer value of up to 8 bytes in size + * in the offset pointed to by value. The function will return success + * provided that pointers are valid and the len value matches the + * attribute length. + **/ +s32 fm10k_tlv_attr_get_value(u32 *attr, void *value, u32 len) +{ + DEBUGFUNC("fm10k_tlv_attr_get_value"); + + /* verify pointers are not NULL */ + if (!attr || !value) + return FM10K_ERR_PARAM; + + if ((*attr >> FM10K_TLV_LEN_SHIFT) != len) + return FM10K_ERR_PARAM; + + if (len == 8) + *(u64 *)value = ((u64)attr[2] << 32) | attr[1]; + else if (len == 4) + *(u32 *)value = attr[1]; + else if (len == 2) + *(u16 *)value = (u16)attr[1]; + else + *(u8 *)value = (u8)attr[1]; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_le_struct - Store little endian structure in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @le_struct: Pointer to structure to be written + * @len: Size of le_struct + * + * This function will place a little endian structure value in a message + * attribute. The function will return success provided that all pointers + * are valid and length is a non-zero multiple of 4. + **/ +s32 fm10k_tlv_attr_put_le_struct(u32 *msg, u16 attr_id, + const void *le_struct, u32 len) +{ + const __le32 *le32_ptr = (const __le32 *)le_struct; + u32 *attr; + u32 i; + + DEBUGFUNC("fm10k_tlv_attr_put_le_struct"); + + /* verify non-null msg and len is in 32 bit words */ + if (!msg || !len || (len % 4)) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* copy le32 structure into host byte order at 32b boundaries */ + for (i = 0; i < (len / 4); i++) + attr[i + 1] = FM10K_LE32_TO_CPU(le32_ptr[i]); + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_le_struct - Get little endian struct form attribute + * @attr: Pointer to attribute + * @le_struct: Pointer to structure to be written + * @len: Size of structure + * + * This function will place a little endian structure in the buffer + * pointed to by le_struct. The function will return success + * provided that pointers are valid and the len value matches the + * attribute length. 
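
For illustration only (not part of this patch), a minimal sketch of how a caller might round-trip an integer attribute with the value helpers above; the message-ID and attribute-ID constants are the test ones from fm10k_tlv.h and the buffer size is arbitrary:

    #include "fm10k_tlv.h"

    static void example_value_round_trip(void)
    {
            u32 msg[32];    /* caller-provided message buffer */
            u32 val = 0;

            fm10k_tlv_msg_init(msg, FM10K_TLV_MSG_ID_TEST);
            fm10k_tlv_attr_put_u32(msg, FM10K_TEST_MSG_U32, 0x12345678);

            /* the first attribute starts one DWORD after the message header */
            fm10k_tlv_attr_get_u32(&msg[1], &val);  /* val is now 0x12345678 */
    }
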
+ **/ +s32 fm10k_tlv_attr_get_le_struct(u32 *attr, void *le_struct, u32 len) +{ + __le32 *le32_ptr = (__le32 *)le_struct; + u32 i; + + DEBUGFUNC("fm10k_tlv_attr_get_le_struct"); + + /* verify pointers are not NULL */ + if (!le_struct || !attr) + return FM10K_ERR_PARAM; + + if ((*attr >> FM10K_TLV_LEN_SHIFT) != len) + return FM10K_ERR_PARAM; + + attr++; + + for (i = 0; len; i++, len -= 4) + le32_ptr[i] = FM10K_CPU_TO_LE32(attr[i]); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_nest_start - Start a set of nested attributes + * @msg: Pointer to message block + * @attr_id: Attribute ID + * + * This function will mark off a new nested region for encapsulating + * a given set of attributes. The idea is if you wish to place a secondary + * structure within the message this mechanism allows for that. The + * function will return NULL on failure, and a pointer to the start + * of the nested attributes on success. + **/ +u32 *fm10k_tlv_attr_nest_start(u32 *msg, u16 attr_id) +{ + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_nest_start"); + + /* verify pointer is not NULL */ + if (!msg) + return NULL; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + attr[0] = attr_id; + + /* return pointer to nest header */ + return attr; +} + +/** + * fm10k_tlv_attr_nest_start - Start a set of nested attributes + * @msg: Pointer to message block + * + * This function closes off an existing set of nested attributes. The + * message pointer should be pointing to the parent of the nest. So in + * the case of a nest within the nest this would be the outer nest pointer. + * This function will return success provided all pointers are valid. + **/ +s32 fm10k_tlv_attr_nest_stop(u32 *msg) +{ + u32 *attr; + u32 len; + + DEBUGFUNC("fm10k_tlv_attr_nest_stop"); + + /* verify pointer is not NULL */ + if (!msg) + return FM10K_ERR_PARAM; + + /* locate the nested header and retrieve its length */ + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + len = (attr[0] >> FM10K_TLV_LEN_SHIFT) << FM10K_TLV_LEN_SHIFT; + + /* only include nest if data was added to it */ + if (len) { + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += len; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_validate - Validate attribute metadata + * @attr: Pointer to attribute + * @tlv_attr: Type and length info for attribute + * + * This function does some basic validation of the input TLV. It + * verifies the length, and in the case of null terminated strings + * it verifies that the last byte is null. The function will + * return FM10K_ERR_PARAM if any attribute is malformed, otherwise + * it returns 0. 
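
For illustration, the ID, length, and message flag packed into each header DWORD (the fields the validation below inspects) can be unpacked with the masks from fm10k_tlv.h; this is only a sketch of the layout:

    u16 attr_id  = (u16)(*attr & FM10K_TLV_ID_MASK);      /* bits 0-15  */
    u16 attr_len = (u16)(*attr >> FM10K_TLV_LEN_SHIFT);   /* bits 20-31 */
    bool is_msg  = !!(*attr &
                      (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT)); /* bit 16 */
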
+ **/ +STATIC s32 fm10k_tlv_attr_validate(u32 *attr, + const struct fm10k_tlv_attr *tlv_attr) +{ + u32 attr_id = *attr & FM10K_TLV_ID_MASK; + u16 len = *attr >> FM10K_TLV_LEN_SHIFT; + + DEBUGFUNC("fm10k_tlv_attr_validate"); + + /* verify this is an attribute and not a message */ + if (*attr & (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT)) + return FM10K_ERR_PARAM; + + /* search through the list of attributes to find a matching ID */ + while (tlv_attr->id < attr_id) + tlv_attr++; + + /* if didn't find a match then we should exit */ + if (tlv_attr->id != attr_id) + return FM10K_NOT_IMPLEMENTED; + + /* move to start of attribute data */ + attr++; + + switch (tlv_attr->type) { + case FM10K_TLV_NULL_STRING: + if (!len || + (attr[(len - 1) / 4] & (0xFF << (8 * ((len - 1) % 4))))) + return FM10K_ERR_PARAM; + if (len > tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_MAC_ADDR: + if (len != ETH_ALEN) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_BOOL: + if (len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_UNSIGNED: + case FM10K_TLV_SIGNED: + if (len != tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_LE_STRUCT: + /* struct must be 4 byte aligned */ + if ((len % 4) || len != tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_NESTED: + /* nested attributes must be 4 byte aligned */ + if (len % 4) + return FM10K_ERR_PARAM; + break; + default: + /* attribute id is mapped to bad value */ + return FM10K_ERR_PARAM; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_parse - Parses stream of attribute data + * @attr: Pointer to attribute list + * @results: Pointer array to store pointers to attributes + * @tlv_attr: Type and length info for attributes + * + * This function validates a stream of attributes and parses them + * up into an array of pointers stored in results. The function will + * return FM10K_ERR_PARAM on any input or message error, + * FM10K_NOT_IMPLEMENTED for any attribute that is outside of the array + * and 0 on success. 
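
For illustration only, a type/length table like the ones passed to fm10k_tlv_attr_parse() can be built from the FM10K_TLV_ATTR_* macros in fm10k_tlv.h; entries must be sorted by ID, IDs must stay below FM10K_TLV_RESULTS_MAX to be captured in the results array, and the table must end with FM10K_TLV_ATTR_LAST. The IDs here are hypothetical:

    enum { EXAMPLE_ATTR_MAC = 1, EXAMPLE_ATTR_MTU = 2 };  /* hypothetical IDs */

    static const struct fm10k_tlv_attr example_attr[] = {
            FM10K_TLV_ATTR_MAC_ADDR(EXAMPLE_ATTR_MAC),
            FM10K_TLV_ATTR_U32(EXAMPLE_ATTR_MTU),
            FM10K_TLV_ATTR_LAST
    };
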
+ **/ +s32 fm10k_tlv_attr_parse(u32 *attr, u32 **results, + const struct fm10k_tlv_attr *tlv_attr) +{ + u32 i, attr_id, offset = 0; + s32 err = 0; + u16 len; + + DEBUGFUNC("fm10k_tlv_attr_parse"); + + /* verify pointers are not NULL */ + if (!attr || !results) + return FM10K_ERR_PARAM; + + /* initialize results to NULL */ + for (i = 0; i < FM10K_TLV_RESULTS_MAX; i++) + results[i] = NULL; + + /* pull length from the message header */ + len = *attr >> FM10K_TLV_LEN_SHIFT; + + /* no attributes to parse if there is no length */ + if (!len) + return FM10K_SUCCESS; + + /* no attributes to parse, just raw data, message becomes attribute */ + if (!tlv_attr) { + results[0] = attr; + return FM10K_SUCCESS; + } + + /* move to start of attribute data */ + attr++; + + /* run through list parsing all attributes */ + while (offset < len) { + attr_id = *attr & FM10K_TLV_ID_MASK; + + if (attr_id < FM10K_TLV_RESULTS_MAX) + err = fm10k_tlv_attr_validate(attr, tlv_attr); + else + err = FM10K_NOT_IMPLEMENTED; + + if (err < 0) + return err; + if (!err) + results[attr_id] = attr; + + /* update offset */ + offset += FM10K_TLV_DWORD_LEN(*attr) * 4; + + /* move to next attribute */ + attr = &attr[FM10K_TLV_DWORD_LEN(*attr)]; + } + + /* we should find ourselves at the end of the list */ + if (offset != len) + return FM10K_ERR_PARAM; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_msg_parse - Parses message header and calls function handler + * @hw: Pointer to hardware structure + * @msg: Pointer to message + * @mbx: Pointer to mailbox information structure + * @func: Function array containing list of message handling functions + * + * This function should be the first function called upon receiving a + * message. The handler will identify the message type and call the correct + * handler for the given message. It will return the value from the function + * call on a recognized message type, otherwise it will return + * FM10K_NOT_IMPLEMENTED on an unrecognized type. + **/ +s32 fm10k_tlv_msg_parse(struct fm10k_hw *hw, u32 *msg, + struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *data) +{ + u32 *results[FM10K_TLV_RESULTS_MAX]; + u32 msg_id; + s32 err; + + DEBUGFUNC("fm10k_tlv_msg_parse"); + + /* verify pointer is not NULL */ + if (!msg || !data) + return FM10K_ERR_PARAM; + + /* verify this is a message and not an attribute */ + if (!(*msg & (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT))) + return FM10K_ERR_PARAM; + + /* grab message ID */ + msg_id = *msg & FM10K_TLV_ID_MASK; + + while (data->id < msg_id) + data++; + + /* if we didn't find it then pass it up as an error */ + if (data->id != msg_id) { + while (data->id != FM10K_TLV_ERROR) + data++; + } + + /* parse the attributes into the results list */ + err = fm10k_tlv_attr_parse(msg, results, data->attr); + if (err < 0) + return err; + + return data->func(hw, results, mbx); +} + +/** + * fm10k_tlv_msg_error - Default handler for unrecognized TLV message IDs + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Unused mailbox pointer + * + * This function is a default handler for unrecognized messages. At a + * a minimum it just indicates that the message requested was + * unimplemented. 
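
For illustration only, a fm10k_msg_data table as consumed by fm10k_tlv_msg_parse() above can be declared with the handler macros from fm10k_tlv.h, sorted by message ID and terminated with an FM10K_TLV_ERROR entry so unknown IDs fall through to fm10k_tlv_msg_error() below:

    static const struct fm10k_msg_data example_msg_data[] = {
            FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
            FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error),
    };

    /* typically invoked from the mailbox receive path:
     * err = fm10k_tlv_msg_parse(hw, msg, mbx, example_msg_data);
     */
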
+ **/ +s32 fm10k_tlv_msg_error(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + UNREFERENCED_3PARAMETER(hw, results, mbx); + DEBUGOUT1("Unknown message ID %u\n", **results & FM10K_TLV_ID_MASK); + return FM10K_NOT_IMPLEMENTED; +} + +STATIC const unsigned char test_str[] = "fm10k"; +STATIC const unsigned char test_mac[ETH_ALEN] = { 0x12, 0x34, 0x56, + 0x78, 0x9a, 0xbc }; +STATIC const u16 test_vlan = 0x0FED; +STATIC const u64 test_u64 = 0xfedcba9876543210ull; +STATIC const u32 test_u32 = 0x87654321; +STATIC const u16 test_u16 = 0x8765; +STATIC const u8 test_u8 = 0x87; +STATIC const s64 test_s64 = -0x123456789abcdef0ll; +STATIC const s32 test_s32 = -0x1235678; +STATIC const s16 test_s16 = -0x1234; +STATIC const s8 test_s8 = -0x12; +STATIC const __le32 test_le[2] = { FM10K_CPU_TO_LE32(0x12345678), + FM10K_CPU_TO_LE32(0x9abcdef0)}; + +/* The message below is meant to be used as a test message to demonstrate + * how to use the TLV interface and to test the types. Normally this code + * be compiled out by stripping the code wrapped in FM10K_TLV_TEST_MSG + */ +const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[] = { + FM10K_TLV_ATTR_NULL_STRING(FM10K_TEST_MSG_STRING, 80), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_TEST_MSG_MAC_ADDR), + FM10K_TLV_ATTR_U8(FM10K_TEST_MSG_U8), + FM10K_TLV_ATTR_U16(FM10K_TEST_MSG_U16), + FM10K_TLV_ATTR_U32(FM10K_TEST_MSG_U32), + FM10K_TLV_ATTR_U64(FM10K_TEST_MSG_U64), + FM10K_TLV_ATTR_S8(FM10K_TEST_MSG_S8), + FM10K_TLV_ATTR_S16(FM10K_TEST_MSG_S16), + FM10K_TLV_ATTR_S32(FM10K_TEST_MSG_S32), + FM10K_TLV_ATTR_S64(FM10K_TEST_MSG_S64), + FM10K_TLV_ATTR_LE_STRUCT(FM10K_TEST_MSG_LE_STRUCT, 8), + FM10K_TLV_ATTR_NESTED(FM10K_TEST_MSG_NESTED), + FM10K_TLV_ATTR_S32(FM10K_TEST_MSG_RESULT), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_tlv_msg_test_generate_data - Stuff message with data + * @msg: Pointer to message + * @attr_flags: List of flags indicating what attributes to add + * + * This function is meant to load a message buffer with attribute data + **/ +STATIC void fm10k_tlv_msg_test_generate_data(u32 *msg, u32 attr_flags) +{ + DEBUGFUNC("fm10k_tlv_msg_test_generate_data"); + + if (attr_flags & (1 << FM10K_TEST_MSG_STRING)) + fm10k_tlv_attr_put_null_string(msg, FM10K_TEST_MSG_STRING, + test_str); + if (attr_flags & (1 << FM10K_TEST_MSG_MAC_ADDR)) + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_TEST_MSG_MAC_ADDR, + test_mac, test_vlan); + if (attr_flags & (1 << FM10K_TEST_MSG_U8)) + fm10k_tlv_attr_put_u8(msg, FM10K_TEST_MSG_U8, test_u8); + if (attr_flags & (1 << FM10K_TEST_MSG_U16)) + fm10k_tlv_attr_put_u16(msg, FM10K_TEST_MSG_U16, test_u16); + if (attr_flags & (1 << FM10K_TEST_MSG_U32)) + fm10k_tlv_attr_put_u32(msg, FM10K_TEST_MSG_U32, test_u32); + if (attr_flags & (1 << FM10K_TEST_MSG_U64)) + fm10k_tlv_attr_put_u64(msg, FM10K_TEST_MSG_U64, test_u64); + if (attr_flags & (1 << FM10K_TEST_MSG_S8)) + fm10k_tlv_attr_put_s8(msg, FM10K_TEST_MSG_S8, test_s8); + if (attr_flags & (1 << FM10K_TEST_MSG_S16)) + fm10k_tlv_attr_put_s16(msg, FM10K_TEST_MSG_S16, test_s16); + if (attr_flags & (1 << FM10K_TEST_MSG_S32)) + fm10k_tlv_attr_put_s32(msg, FM10K_TEST_MSG_S32, test_s32); + if (attr_flags & (1 << FM10K_TEST_MSG_S64)) + fm10k_tlv_attr_put_s64(msg, FM10K_TEST_MSG_S64, test_s64); + if (attr_flags & (1 << FM10K_TEST_MSG_LE_STRUCT)) + fm10k_tlv_attr_put_le_struct(msg, FM10K_TEST_MSG_LE_STRUCT, + test_le, 8); +} + +/** + * fm10k_tlv_msg_test_create - Create a test message testing all attributes + * @msg: Pointer to message + * @attr_flags: List of flags indicating what attributes to add 
+ * + * This function is meant to load a message buffer with all attribute types + * including a nested attribute. + **/ +void fm10k_tlv_msg_test_create(u32 *msg, u32 attr_flags) +{ + u32 *nest = NULL; + + DEBUGFUNC("fm10k_tlv_msg_test_create"); + + fm10k_tlv_msg_init(msg, FM10K_TLV_MSG_ID_TEST); + + fm10k_tlv_msg_test_generate_data(msg, attr_flags); + + /* check for nested attributes */ + attr_flags >>= FM10K_TEST_MSG_NESTED; + + if (attr_flags) { + nest = fm10k_tlv_attr_nest_start(msg, FM10K_TEST_MSG_NESTED); + + fm10k_tlv_msg_test_generate_data(nest, attr_flags); + + fm10k_tlv_attr_nest_stop(msg); + } +} + +/** + * fm10k_tlv_msg_test - Validate all results on test message receive + * @hw: Pointer to hardware structure + * @results: Pointer array to attributes in the message + * @mbx: Pointer to mailbox information structure + * + * This function does a check to verify all attributes match what the test + * message placed in the message buffer. It is the default handler + * for TLV test messages. + **/ +s32 fm10k_tlv_msg_test(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u32 *nest_results[FM10K_TLV_RESULTS_MAX]; + unsigned char result_str[80]; + unsigned char result_mac[ETH_ALEN]; + s32 err = FM10K_SUCCESS; + __le32 result_le[2]; + u16 result_vlan; + u64 result_u64; + u32 result_u32; + u16 result_u16; + u8 result_u8; + s64 result_s64; + s32 result_s32; + s16 result_s16; + s8 result_s8; + u32 reply[3]; + + DEBUGFUNC("fm10k_tlv_msg_test"); + + /* retrieve results of a previous test */ + if (!!results[FM10K_TEST_MSG_RESULT]) + return fm10k_tlv_attr_get_s32(results[FM10K_TEST_MSG_RESULT], + &mbx->test_result); + +parse_nested: + if (!!results[FM10K_TEST_MSG_STRING]) { + err = fm10k_tlv_attr_get_null_string( + results[FM10K_TEST_MSG_STRING], + result_str); + if (!err && memcmp(test_str, result_str, sizeof(test_str))) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_MAC_ADDR]) { + err = fm10k_tlv_attr_get_mac_vlan( + results[FM10K_TEST_MSG_MAC_ADDR], + result_mac, &result_vlan); + if (!err && memcmp(test_mac, result_mac, ETH_ALEN)) + err = FM10K_ERR_INVALID_VALUE; + if (!err && test_vlan != result_vlan) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U8]) { + err = fm10k_tlv_attr_get_u8(results[FM10K_TEST_MSG_U8], + &result_u8); + if (!err && test_u8 != result_u8) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U16]) { + err = fm10k_tlv_attr_get_u16(results[FM10K_TEST_MSG_U16], + &result_u16); + if (!err && test_u16 != result_u16) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U32]) { + err = fm10k_tlv_attr_get_u32(results[FM10K_TEST_MSG_U32], + &result_u32); + if (!err && test_u32 != result_u32) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U64]) { + err = fm10k_tlv_attr_get_u64(results[FM10K_TEST_MSG_U64], + &result_u64); + if (!err && test_u64 != result_u64) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S8]) { + err = fm10k_tlv_attr_get_s8(results[FM10K_TEST_MSG_S8], + &result_s8); + if (!err && test_s8 != result_s8) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S16]) { + err = fm10k_tlv_attr_get_s16(results[FM10K_TEST_MSG_S16], + &result_s16); + if (!err && test_s16 != result_s16) + err = 
FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S32]) { + err = fm10k_tlv_attr_get_s32(results[FM10K_TEST_MSG_S32], + &result_s32); + if (!err && test_s32 != result_s32) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S64]) { + err = fm10k_tlv_attr_get_s64(results[FM10K_TEST_MSG_S64], + &result_s64); + if (!err && test_s64 != result_s64) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_LE_STRUCT]) { + err = fm10k_tlv_attr_get_le_struct( + results[FM10K_TEST_MSG_LE_STRUCT], + result_le, + sizeof(result_le)); + if (!err && memcmp(test_le, result_le, sizeof(test_le))) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + + if (!!results[FM10K_TEST_MSG_NESTED]) { + /* clear any pointers */ + memset(nest_results, 0, sizeof(nest_results)); + + /* parse the nested attributes into the nest results list */ + err = fm10k_tlv_attr_parse(results[FM10K_TEST_MSG_NESTED], + nest_results, + fm10k_tlv_msg_test_attr); + if (err) + goto report_result; + + /* loop back through to the start */ + results = nest_results; + goto parse_nested; + } + +report_result: + /* generate reply with test result */ + fm10k_tlv_msg_init(reply, FM10K_TLV_MSG_ID_TEST); + fm10k_tlv_attr_put_s32(reply, FM10K_TEST_MSG_RESULT, err); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, reply); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_tlv.h b/lib/librte_pmd_fm10k/base/fm10k_tlv.h new file mode 100644 index 0000000..ad97236 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_tlv.h @@ -0,0 +1,199 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _FM10K_TLV_H_ +#define _FM10K_TLV_H_ + +/* forward declaration */ +struct fm10k_msg_data; + +#include "fm10k_type.h" + +/* Message / Argument header format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | Length | Flags | Type / ID | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * The message header format described here is used for messages that are + * passed between the PF and the VF. To allow for messages larger then + * mailbox size we will provide a message with the above header and it + * will be segmented and transported to the mailbox to the other side where + * it is reassembled. It contains the following fields: + * Len: Length of the message in bytes excluding the message header + * Flags: TBD + * Rule: These will be the message/argument types we pass + */ +/* message data header */ +#define FM10K_TLV_ID_SHIFT 0 +#define FM10K_TLV_ID_SIZE 16 +#define FM10K_TLV_ID_MASK ((1u << FM10K_TLV_ID_SIZE) - 1) +#define FM10K_TLV_FLAGS_SHIFT 16 +#define FM10K_TLV_FLAGS_MSG 0x1 +#define FM10K_TLV_FLAGS_SIZE 4 +#define FM10K_TLV_LEN_SHIFT 20 +#define FM10K_TLV_LEN_SIZE 12 + +#define FM10K_TLV_HDR_LEN 4ul +#define FM10K_TLV_LEN_ALIGN_MASK \ + ((FM10K_TLV_HDR_LEN - 1) << FM10K_TLV_LEN_SHIFT) +#define FM10K_TLV_LEN_ALIGN(tlv) \ + (((tlv) + FM10K_TLV_LEN_ALIGN_MASK) & ~FM10K_TLV_LEN_ALIGN_MASK) +#define FM10K_TLV_DWORD_LEN(tlv) \ + ((u16)((FM10K_TLV_LEN_ALIGN(tlv)) >> (FM10K_TLV_LEN_SHIFT + 2)) + 1) + +#define FM10K_TLV_RESULTS_MAX 32 + +enum fm10k_tlv_type { + FM10K_TLV_NULL_STRING, + FM10K_TLV_MAC_ADDR, + FM10K_TLV_BOOL, + FM10K_TLV_UNSIGNED, + FM10K_TLV_SIGNED, + FM10K_TLV_LE_STRUCT, + FM10K_TLV_NESTED, + FM10K_TLV_MAX_TYPE +}; + +#define FM10K_TLV_ERROR (~0u) + +struct fm10k_tlv_attr { + unsigned int id; + enum fm10k_tlv_type type; + u16 len; +}; + +#define FM10K_TLV_ATTR_NULL_STRING(id, len) { id, FM10K_TLV_NULL_STRING, len } +#define FM10K_TLV_ATTR_MAC_ADDR(id) { id, FM10K_TLV_MAC_ADDR, 6 } +#define FM10K_TLV_ATTR_BOOL(id) { id, FM10K_TLV_BOOL, 0 } +#define FM10K_TLV_ATTR_U8(id) { id, FM10K_TLV_UNSIGNED, 1 } +#define FM10K_TLV_ATTR_U16(id) { id, FM10K_TLV_UNSIGNED, 2 } +#define FM10K_TLV_ATTR_U32(id) { id, FM10K_TLV_UNSIGNED, 4 } +#define FM10K_TLV_ATTR_U64(id) { id, FM10K_TLV_UNSIGNED, 8 } +#define FM10K_TLV_ATTR_S8(id) { id, FM10K_TLV_SIGNED, 1 } +#define FM10K_TLV_ATTR_S16(id) { id, FM10K_TLV_SIGNED, 2 } +#define FM10K_TLV_ATTR_S32(id) { id, FM10K_TLV_SIGNED, 4 } +#define FM10K_TLV_ATTR_S64(id) { id, FM10K_TLV_SIGNED, 8 } +#define FM10K_TLV_ATTR_LE_STRUCT(id, len) { id, FM10K_TLV_LE_STRUCT, len } +#define FM10K_TLV_ATTR_NESTED(id) { id, FM10K_TLV_NESTED } +#define FM10K_TLV_ATTR_LAST { FM10K_TLV_ERROR } + +struct fm10k_msg_data { + unsigned int id; + const struct fm10k_tlv_attr *attr; + s32 (*func)(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +}; + +#define FM10K_MSG_HANDLER(id, attr, func) { id, attr, func } + +s32 fm10k_tlv_msg_init(u32 *, u16); +s32 fm10k_tlv_attr_put_null_string(u32 *, u16, const unsigned char *); +s32 fm10k_tlv_attr_get_null_string(u32 *, unsigned char *); +s32 fm10k_tlv_attr_put_mac_vlan(u32 *, u16, const u8 *, u16); +s32 fm10k_tlv_attr_get_mac_vlan(u32 *, u8 *, u16 *); +s32 fm10k_tlv_attr_put_bool(u32 *, u16); +s32 fm10k_tlv_attr_put_value(u32 *, u16, s64, u32); +#define fm10k_tlv_attr_put_u8(msg, attr_id, val) \ + 
fm10k_tlv_attr_put_value(msg, attr_id, val, 1) +#define fm10k_tlv_attr_put_u16(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 2) +#define fm10k_tlv_attr_put_u32(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 4) +#define fm10k_tlv_attr_put_u64(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 8) +#define fm10k_tlv_attr_put_s8(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 1) +#define fm10k_tlv_attr_put_s16(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 2) +#define fm10k_tlv_attr_put_s32(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 4) +#define fm10k_tlv_attr_put_s64(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 8) +s32 fm10k_tlv_attr_get_value(u32 *, void *, u32); +#define fm10k_tlv_attr_get_u8(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u8)) +#define fm10k_tlv_attr_get_u16(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u16)) +#define fm10k_tlv_attr_get_u32(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u32)) +#define fm10k_tlv_attr_get_u64(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u64)) +#define fm10k_tlv_attr_get_s8(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s8)) +#define fm10k_tlv_attr_get_s16(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s16)) +#define fm10k_tlv_attr_get_s32(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s32)) +#define fm10k_tlv_attr_get_s64(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s64)) +s32 fm10k_tlv_attr_put_le_struct(u32 *, u16, const void *, u32); +s32 fm10k_tlv_attr_get_le_struct(u32 *, void *, u32); +u32 *fm10k_tlv_attr_nest_start(u32 *, u16); +s32 fm10k_tlv_attr_nest_stop(u32 *); +s32 fm10k_tlv_attr_parse(u32 *, u32 **, const struct fm10k_tlv_attr *); +s32 fm10k_tlv_msg_parse(struct fm10k_hw *, u32 *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *); +s32 fm10k_tlv_msg_error(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *); + +#define FM10K_TLV_MSG_ID_TEST 0 + +enum fm10k_tlv_test_attr_id { + FM10K_TEST_MSG_UNSET, + FM10K_TEST_MSG_STRING, + FM10K_TEST_MSG_MAC_ADDR, + FM10K_TEST_MSG_U8, + FM10K_TEST_MSG_U16, + FM10K_TEST_MSG_U32, + FM10K_TEST_MSG_U64, + FM10K_TEST_MSG_S8, + FM10K_TEST_MSG_S16, + FM10K_TEST_MSG_S32, + FM10K_TEST_MSG_S64, + FM10K_TEST_MSG_LE_STRUCT, + FM10K_TEST_MSG_NESTED, + FM10K_TEST_MSG_RESULT, + FM10K_TEST_MSG_MAX +}; + +extern const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[]; +void fm10k_tlv_msg_test_create(u32 *, u32); +s32 fm10k_tlv_msg_test(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); + +#define FM10K_TLV_MSG_TEST_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_TLV_MSG_ID_TEST, fm10k_tlv_msg_test_attr, func) +#define FM10K_TLV_MSG_ERROR_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_TLV_ERROR, NULL, func) +#endif /* _FM10K_MSG_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_type.h b/lib/librte_pmd_fm10k/base/fm10k_type.h new file mode 100644 index 0000000..534fab4 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_type.h @@ -0,0 +1,937 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. 
Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_TYPE_H_ +#define _FM10K_TYPE_H_ + +/* forward declaration */ +struct fm10k_hw; + +#include "fm10k_osdep.h" +#include "fm10k_mbx.h" + +#define FM10K_INTEL_VENDOR_ID 0x8086 +#define FM10K_DEV_ID_PF 0x15A4 +#define FM10K_DEV_ID_VF 0x15A5 + +#define FM10K_MAX_QUEUES 256 +#define FM10K_MAX_QUEUES_PF 128 +#define FM10K_MAX_QUEUES_POOL 16 + +#define FM10K_48_BIT_MASK 0x0000FFFFFFFFFFFFull +#define FM10K_STAT_VALID 0x80000000 + +/* PCI Bus Info */ +#define FM10K_PCIE_LINK_CAP 0x7C +#define FM10K_PCIE_LINK_STATUS 0x82 +#define FM10K_PCIE_LINK_WIDTH 0x3F0 +#define FM10K_PCIE_LINK_WIDTH_1 0x10 +#define FM10K_PCIE_LINK_WIDTH_2 0x20 +#define FM10K_PCIE_LINK_WIDTH_4 0x40 +#define FM10K_PCIE_LINK_WIDTH_8 0x80 +#define FM10K_PCIE_LINK_SPEED 0xF +#define FM10K_PCIE_LINK_SPEED_2500 0x1 +#define FM10K_PCIE_LINK_SPEED_5000 0x2 +#define FM10K_PCIE_LINK_SPEED_8000 0x3 + +/* PCIe payload size */ +#define FM10K_PCIE_DEV_CAP 0x74 +#define FM10K_PCIE_DEV_CAP_PAYLOAD 0x07 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_128 0x00 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_256 0x01 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_512 0x02 +#define FM10K_PCIE_DEV_CTRL 0x78 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD 0xE0 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_128 0x00 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_256 0x20 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_512 0x40 + +/* PCIe MSI-X Capability info */ +#define FM10K_PCI_MSIX_MSG_CTRL 0xB2 +#define FM10K_PCI_MSIX_MSG_CTRL_TBL_SZ_MASK 0x7FF +#define FM10K_MAX_MSIX_VECTORS 256 +#define FM10K_MAX_VECTORS_PF 256 +#define FM10K_MAX_VECTORS_POOL 32 + +/* PCIe SR-IOV Info */ +#define FM10K_PCIE_SRIOV_CTRL 0x190 +#define FM10K_PCIE_SRIOV_CTRL_VFARI 0x10 + +#define FM10K_SUCCESS 0 +#define FM10K_ERR_DEVICE_NOT_SUPPORTED -1 +#define FM10K_ERR_PARAM -2 +#define FM10K_ERR_NO_RESOURCES -3 +#define FM10K_ERR_REQUESTS_PENDING -4 +#define FM10K_ERR_RESET_REQUESTED -5 +#define FM10K_ERR_DMA_PENDING -6 +#define FM10K_ERR_RESET_FAILED -7 +#define FM10K_ERR_INVALID_MAC_ADDR -8 +#define FM10K_ERR_INVALID_VALUE -9 +#define FM10K_NOT_IMPLEMENTED 0x7FFFFFFF + +#define UNREFERENCED_XPARAMETER +#define UNREFERENCED_1PARAMETER(_p) (_p) +#define UNREFERENCED_2PARAMETER(_p, _q) do { (_p); (_q); } while (0) +#define UNREFERENCED_3PARAMETER(_p, _q, _r) do { (_p); (_q); (_r); } while (0) + +/* Start of 
PF registers */ +#define FM10K_CTRL 0x0000 +#define FM10K_CTRL_BAR4_ALLOWED 0x00000004 + +#define FM10K_CTRL_EXT 0x0001 +#define FM10K_CTRL_EXT_NS_DIS 0x00000001 +#define FM10K_CTRL_EXT_RO_DIS 0x00000002 +#define FM10K_CTRL_EXT_SWITCH_LOOPBACK 0x00000004 +#define FM10K_EXVET 0x0002 +#define FM10K_EXVET_ETHERTYPE_MASK 0x000000FF +#define FM10K_EXVET_TAG_SIZE_SHIFT 16 +#define FM10K_EXVET_AFTER_VLAN 0x00040000 +#define FM10K_GCR 0x0003 +#define FM10K_FACTPS 0x0004 +#define FM10K_GCR_EXT 0x0005 + +/* Interrupt control registers */ +#define FM10K_EICR 0x0006 +#define FM10K_EICR_PCA_FAULT 0x00000001 +#define FM10K_EICR_THI_FAULT 0x00000004 +#define FM10K_EICR_FUM_FAULT 0x00000020 +#define FM10K_EICR_FAULT_MASK 0x0000003F +#define FM10K_EICR_MAILBOX 0x00000040 +#define FM10K_EICR_SWITCHREADY 0x00000080 +#define FM10K_EICR_SWITCHNOTREADY 0x00000100 +#define FM10K_EICR_SWITCHINTERRUPT 0x00000200 +#define FM10K_EICR_SRAMERROR 0x00000400 +#define FM10K_EICR_VFLR 0x00000800 +#define FM10K_EICR_MAXHOLDTIME 0x00001000 +#define FM10K_EIMR 0x0007 +#define FM10K_EIMR_PCA_FAULT 0x00000001 +#define FM10K_EIMR_THI_FAULT 0x00000010 +#define FM10K_EIMR_FUM_FAULT 0x00000400 +#define FM10K_EIMR_MAILBOX 0x00001000 +#define FM10K_EIMR_SWITCHREADY 0x00004000 +#define FM10K_EIMR_SWITCHNOTREADY 0x00010000 +#define FM10K_EIMR_SWITCHINTERRUPT 0x00040000 +#define FM10K_EIMR_SRAMERROR 0x00100000 +#define FM10K_EIMR_VFLR 0x00400000 +#define FM10K_EIMR_MAXHOLDTIME 0x01000000 +#define FM10K_EIMR_ALL 0x55555555 +#define FM10K_EIMR_DISABLE(NAME) ((FM10K_EIMR_ ## NAME) << 0) +#define FM10K_EIMR_ENABLE(NAME) ((FM10K_EIMR_ ## NAME) << 1) +#define FM10K_FAULT_ADDR_LO 0x0 +#define FM10K_FAULT_ADDR_HI 0x1 +#define FM10K_FAULT_SPECINFO 0x2 +#define FM10K_FAULT_FUNC 0x3 +#define FM10K_FAULT_SIZE 0x4 +#define FM10K_FAULT_FUNC_VALID 0x00008000 +#define FM10K_FAULT_FUNC_PF 0x00004000 +#define FM10K_FAULT_FUNC_VF_MASK 0x00003F00 +#define FM10K_FAULT_FUNC_VF_SHIFT 8 +#define FM10K_FAULT_FUNC_TYPE_MASK 0x000000FF + +#define FM10K_PCA_FAULT 0x0008 +#define FM10K_THI_FAULT 0x0010 +#define FM10K_FUM_FAULT 0x001C + +/* Rx queue timeout indicator */ +#define FM10K_MAXHOLDQ(_n) ((_n) + 0x0020) + +/* Switch Manager info */ +#define FM10K_SM_AREA(_n) ((_n) + 0x0028) + +/* GLORT mapping registers */ +#define FM10K_DGLORTMAP(_n) ((_n) + 0x0030) +#define FM10K_DGLORT_COUNT 8 +#define FM10K_DGLORTMAP_MASK_SHIFT 16 +#define FM10K_DGLORTMAP_ANY 0x00000000 +#define FM10K_DGLORTMAP_NONE 0x0000FFFF +#define FM10K_DGLORTMAP_ZERO 0xFFFF0000 +#define FM10K_DGLORTDEC(_n) ((_n) + 0x0038) +#define FM10K_DGLORTDEC_VSILENGTH_SHIFT 4 +#define FM10K_DGLORTDEC_VSIBASE_SHIFT 7 +#define FM10K_DGLORTDEC_PCLENGTH_SHIFT 14 +#define FM10K_DGLORTDEC_QBASE_SHIFT 16 +#define FM10K_DGLORTDEC_RSSLENGTH_SHIFT 24 +#define FM10K_DGLORTDEC_INNERRSS_ENABLE 0x08000000 +#define FM10K_TUNNEL_CFG 0x0040 +#define FM10K_TUNNEL_CFG_NVGRE_SHIFT 16 +#define FM10K_TUNNEL_CFG_GENEVE 0x0041 +#define FM10K_SWPRI_MAP(_n) ((_n) + 0x0050) +#define FM10K_SWPRI_MAX 16 +#define FM10K_RSSRK(_n, _m) (((_n) * 0x10) + (_m) + 0x0800) +#define FM10K_RSSRK_SIZE 10 +#define FM10K_RSSRK_ENTRIES_PER_REG 4 +#define FM10K_RETA(_n, _m) (((_n) * 0x20) + (_m) + 0x1000) +#define FM10K_RETA_SIZE 32 +#define FM10K_RETA_ENTRIES_PER_REG 4 +#define FM10K_MAX_RSS_INDICES 128 + +/* Rate limiting registers */ +#define FM10K_TC_CREDIT(_n) ((_n) + 0x2000) +#define FM10K_TC_CREDIT_CREDIT_MASK 0x001FFFFF +#define FM10K_TC_MAXCREDIT(_n) ((_n) + 0x2040) +#define FM10K_TC_MAXCREDIT_64K 0x00010000 +#define FM10K_TC_RATE(_n) ((_n) + 
0x2080) +#define FM10K_TC_RATE_QUANTA_MASK 0x0000FFFF +#define FM10K_TC_RATE_INTERVAL_4US_GEN1 0x00020000 +#define FM10K_TC_RATE_INTERVAL_4US_GEN2 0x00040000 +#define FM10K_TC_RATE_INTERVAL_4US_GEN3 0x00080000 +#define FM10K_TC_RATE_STATUS 0x20C0 +#define FM10K_PAUSE 0x20C2 + +/* DMA control registers */ +#define FM10K_DMA_CTRL 0x20C3 +#define FM10K_DMA_CTRL_TX_ENABLE 0x00000001 +#define FM10K_DMA_CTRL_TX_HOST_PENDING 0x00000002 +#define FM10K_DMA_CTRL_TX_DATA 0x00000004 +#define FM10K_DMA_CTRL_TX_ACTIVE 0x00000008 +#define FM10K_DMA_CTRL_RX_ENABLE 0x00000010 +#define FM10K_DMA_CTRL_RX_HOST_PENDING 0x00000020 +#define FM10K_DMA_CTRL_RX_DATA 0x00000040 +#define FM10K_DMA_CTRL_RX_ACTIVE 0x00000080 +#define FM10K_DMA_CTRL_RX_DESC_SIZE 0x00000100 +#define FM10K_DMA_CTRL_MINMSS_SHIFT 9 +#define FM10K_DMA_CTRL_MINMSS_64 0x00008000 +#define FM10K_DMA_CTRL_MAX_HOLD_TIME_SHIFT 23 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN3 0x04800000 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN2 0x04000000 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN1 0x03800000 +#define FM10K_DMA_CTRL_DATAPATH_RESET 0x20000000 +#define FM10K_DMA_CTRL_MAXNUMOFQ_MASK 0xC0000000 +#define FM10K_DMA_CTRL_32_DESC 0x00000000 +#define FM10K_DMA_CTRL_64_DESC 0x40000000 +#define FM10K_DMA_CTRL_128_DESC 0x80000000 + +#define FM10K_DMA_CTRL2 0x20C4 +#define FM10K_DMA_CTRL2_TX_FRAME_SPACING_SHIFT 5 +#define FM10K_DMA_CTRL2_SWITCH_READY 0x00002000 +#define FM10K_DMA_CTRL2_RX_DESC_READ_PRIO_SHIFT 14 +#define FM10K_DMA_CTRL2_TX_DESC_READ_PRIO_SHIFT 17 +#define FM10K_DMA_CTRL2_TX_DATA_READ_PRIO_SHIFT 20 + +/* TSO flags configuration + * First packet contains all flags except for fin and psh + * Middle packet contains only urg and ack + * Last packet contains urg, ack, fin, and psh + */ +#define FM10K_TSO_FLAGS_LOW 0x00300FF6 +#define FM10K_TSO_FLAGS_HI 0x00000039 +#define FM10K_DTXTCPFLGL 0x20C5 +#define FM10K_DTXTCPFLGH 0x20C6 + +#define FM10K_TPH_CTRL 0x20C7 +#define FM10K_TPH_CTRL_DISABLE_READ_HINT 0x00000080 +#define FM10K_MRQC(_n) ((_n) + 0x2100) +#define FM10K_MRQC_TCP_IPV4 0x00000001 +#define FM10K_MRQC_IPV4 0x00000002 +#define FM10K_MRQC_IPV6 0x00000010 +#define FM10K_MRQC_TCP_IPV6 0x00000020 +#define FM10K_MRQC_UDP_IPV4 0x00000040 +#define FM10K_MRQC_UDP_IPV6 0x00000080 + +#define FM10K_TQMAP(_n) ((_n) + 0x2800) +#define FM10K_TQMAP_TABLE_SIZE 2048 +#define FM10K_RQMAP(_n) ((_n) + 0x3000) +#define FM10K_RQMAP_TABLE_SIZE 2048 + +/* Hardware Statistics */ +#define FM10K_STATS_TIMEOUT 0x3800 +#define FM10K_STATS_UR 0x3801 +#define FM10K_STATS_CA 0x3802 +#define FM10K_STATS_UM 0x3803 +#define FM10K_STATS_XEC 0x3804 +#define FM10K_STATS_VLAN_DROP 0x3805 +#define FM10K_STATS_LOOPBACK_DROP 0x3806 +#define FM10K_STATS_NODESC_DROP 0x3807 + +/* Timesync registers */ +#define FM10K_RRTIME_CFG 0x3808 +#define FM10K_RRTIME_LIMIT(_n) ((_n) + 0x380C) +#define FM10K_RRTIME_COUNT(_n) ((_n) + 0x3810) +#define FM10K_SYSTIME 0x3814 +#define FM10K_SYSTIME0 0x3816 +#define FM10K_SYSTIME_CFG 0x3818 +#define FM10K_SYSTIME_CFG_STEP_MASK 0x0000000F + +/* PCIe state registers */ +#define FM10K_PFVFBME(_n) ((_n) + 0x381A) +#define FM10K_PHYADDR 0x381C + +/* Rx ring registers */ +#define FM10K_RDBAL(_n) ((0x40 * (_n)) + 0x4000) +#define FM10K_RDBAH(_n) ((0x40 * (_n)) + 0x4001) +#define FM10K_RDLEN(_n) ((0x40 * (_n)) + 0x4002) +#define FM10K_TPH_RXCTRL(_n) ((0x40 * (_n)) + 0x4003) +#define FM10K_TPH_RXCTRL_DESC_TPHEN 0x00000020 +#define FM10K_TPH_RXCTRL_HDR_TPHEN 0x00000040 +#define FM10K_TPH_RXCTRL_DATA_TPHEN 0x00000080 +#define FM10K_TPH_RXCTRL_DESC_RROEN 0x00000200 +#define 
FM10K_TPH_RXCTRL_DATA_WROEN 0x00002000 +#define FM10K_TPH_RXCTRL_HDR_WROEN 0x00008000 +#define FM10K_RDH(_n) ((0x40 * (_n)) + 0x4004) +#define FM10K_RDT(_n) ((0x40 * (_n)) + 0x4005) +#define FM10K_RXQCTL(_n) ((0x40 * (_n)) + 0x4006) +#define FM10K_RXQCTL_ENABLE 0x00000001 +#define FM10K_RXQCTL_PF 0x000000FC +#define FM10K_RXQCTL_VF_SHIFT 2 +#define FM10K_RXQCTL_VF 0x00000100 +#define FM10K_RXQCTL_ID_MASK (FM10K_RXQCTL_PF | FM10K_RXQCTL_VF) +#define FM10K_RXDCTL(_n) ((0x40 * (_n)) + 0x4007) +#define FM10K_RXDCTL_WRITE_BACK_MIN_DELAY 0x00000001 +#define FM10K_RXDCTL_WRITE_BACK_IMM 0x00000100 +#define FM10K_RXDCTL_DROP_ON_EMPTY 0x00000200 +#define FM10K_RXINT(_n) ((0x40 * (_n)) + 0x4008) +#define FM10K_RXINT_TIMER_SHIFT 8 +#define FM10K_SRRCTL(_n) ((0x40 * (_n)) + 0x4009) +#define FM10K_SRRCTL_BSIZEPKT_SHIFT 8 /* shift _right_ */ +#define FM10K_SRRCTL_BSIZEHDR_SHIFT 2 /* shift _left_ */ +#define FM10K_SRRCTL_BSIZEHDR_MASK 0x00003F00 +#define FM10K_SRRCTL_DESCTYPE_HDR_SPLIT 0x00004000 +#define FM10K_SRRCTL_DESCTYPE_SIZE_SPLIT 0x00008000 +#define FM10K_SRRCTL_PSRTYPE_INNER_TCPHDR 0x00010000 +#define FM10K_SRRCTL_PSRTYPE_INNER_UDPHDR 0x00020000 +#define FM10K_SRRCTL_PSRTYPE_INNER_IPV4HDR 0x00040000 +#define FM10K_SRRCTL_PSRTYPE_INNER_IPV6HDR 0x00080000 +#define FM10K_SRRCTL_PSRTYPE_INNER_L2HDR 0x00100000 +#define FM10K_SRRCTL_PSRTYPE_ENCAPHDR 0x00200000 +#define FM10K_SRRCTL_PSRTYPE_TCPHDR 0x00400000 +#define FM10K_SRRCTL_PSRTYPE_UDPHDR 0x00800000 +#define FM10K_SRRCTL_PSRTYPE_IPV4HDR 0x01000000 +#define FM10K_SRRCTL_PSRTYPE_IPV6HDR 0x02000000 +#define FM10K_SRRCTL_PSRTYPE_L2HDR 0x04000000 +#define FM10K_SRRCTL_LOOPBACK_SUPPRESS 0x40000000 +#define FM10K_SRRCTL_BUFFER_CHAINING_EN 0x80000000 + +/* Rx Statistics */ +#define FM10K_QPRC(_n) ((0x40 * (_n)) + 0x400A) +#define FM10K_QPRDC(_n) ((0x40 * (_n)) + 0x400B) +#define FM10K_QBRC_L(_n) ((0x40 * (_n)) + 0x400C) +#define FM10K_QBRC_H(_n) ((0x40 * (_n)) + 0x400D) + +/* Rx GLORT register */ +#define FM10K_RX_SGLORT(_n) ((0x40 * (_n)) + 0x400E) + +/* Tx ring registers */ +#define FM10K_TDBAL(_n) ((0x40 * (_n)) + 0x8000) +#define FM10K_TDBAH(_n) ((0x40 * (_n)) + 0x8001) +#define FM10K_TDLEN(_n) ((0x40 * (_n)) + 0x8002) +#define FM10K_TPH_TXCTRL(_n) ((0x40 * (_n)) + 0x8003) +#define FM10K_TPH_TXCTRL_DESC_TPHEN 0x00000020 +#define FM10K_TPH_TXCTRL_DESC_RROEN 0x00000200 +#define FM10K_TPH_TXCTRL_DESC_WROEN 0x00000800 +#define FM10K_TPH_TXCTRL_DATA_RROEN 0x00002000 +#define FM10K_TDH(_n) ((0x40 * (_n)) + 0x8004) +#define FM10K_TDT(_n) ((0x40 * (_n)) + 0x8005) +#define FM10K_TXDCTL(_n) ((0x40 * (_n)) + 0x8006) +#define FM10K_TXDCTL_ENABLE 0x00004000 +#define FM10K_TXDCTL_MAX_TIME_SHIFT 16 +#define FM10K_TXDCTL_PUSH_DESC 0x10000000 +#define FM10K_TXQCTL(_n) ((0x40 * (_n)) + 0x8007) +#define FM10K_TXQCTL_PF 0x0000003F +#define FM10K_TXQCTL_VF 0x00000040 +#define FM10K_TXQCTL_ID_MASK (FM10K_TXQCTL_PF | FM10K_TXQCTL_VF) +#define FM10K_TXQCTL_PC_SHIFT 7 +#define FM10K_TXQCTL_PC_MASK 0x00000380 +#define FM10K_TXQCTL_TC_SHIFT 10 +#define FM10K_TXQCTL_TC_MASK 0x0000FC00 +#define FM10K_TXQCTL_VID_SHIFT 16 +#define FM10K_TXQCTL_VID_MASK 0x0FFF0000 +#define FM10K_TXQCTL_UNLIMITED_BW 0x10000000 +#define FM10K_TXQCTL_PUSHMODEDIS 0x20000000 +#define FM10K_TXINT(_n) ((0x40 * (_n)) + 0x8008) +#define FM10K_TXINT_TIMER_SHIFT 8 + +/* Tx Statistics */ +#define FM10K_QPTC(_n) ((0x40 * (_n)) + 0x8009) +#define FM10K_QBTC_L(_n) ((0x40 * (_n)) + 0x800A) +#define FM10K_QBTC_H(_n) ((0x40 * (_n)) + 0x800B) + +/* Tx Push registers */ +#define FM10K_TQDLOC(_n) ((0x40 * (_n)) + 
0x800C) +#define FM10K_TQDLOC_BASE_32_DESC 0x08 +#define FM10K_TQDLOC_BASE_64_DESC 0x10 +#define FM10K_TQDLOC_BASE_128_DESC 0x20 +#define FM10K_TQDLOC_SIZE_32_DESC 0x00050000 +#define FM10K_TQDLOC_SIZE_64_DESC 0x00060000 +#define FM10K_TQDLOC_SIZE_128_DESC 0x00070000 +#define FM10K_TQDLOC_SIZE_SHIFT 16 +#define FM10K_TX_DCACHE(_n, _m) ((0x400 * (_n)) + (0x4 * (_m)) + 0x40000) + +/* Tx GLORT registers */ +#define FM10K_TX_SGLORT(_n) ((0x40 * (_n)) + 0x800D) +#define FM10K_PFVTCTL(_n) ((0x40 * (_n)) + 0x800E) +#define FM10K_PFVTCTL_FTAG_DESC_ENABLE 0x00000001 + +/* Interrupt moderation and control registers */ +#define FM10K_PBACL(_n) ((_n) + 0x10000) +#define FM10K_INT_MAP(_n) ((_n) + 0x10080) +#define FM10K_INT_MAP_TIMER0 0x00000000 +#define FM10K_INT_MAP_TIMER1 0x00000100 +#define FM10K_INT_MAP_IMMEDIATE 0x00000200 +#define FM10K_INT_MAP_DISABLE 0x00000300 +#define FM10K_MSIX_VECTOR_ADDR_LO(_n) ((0x4 * (_n)) + 0x11000) +#define FM10K_MSIX_VECTOR_ADDR_HI(_n) ((0x4 * (_n)) + 0x11001) +#define FM10K_MSIX_VECTOR_DATA(_n) ((0x4 * (_n)) + 0x11002) +#define FM10K_MSIX_VECTOR_MASK(_n) ((0x4 * (_n)) + 0x11003) +#define FM10K_INT_CTRL 0x12000 +#define FM10K_INT_CTRL_ENABLEMODERATOR 0x00000400 +#define FM10K_ITR(_n) ((_n) + 0x12400) +#define FM10K_ITR_INTERVAL1_SHIFT 12 +#define FM10K_ITR_TIMER0_EXPIRED 0x01000000 +#define FM10K_ITR_TIMER1_EXPIRED 0x02000000 +#define FM10K_ITR_PENDING0 0x04000000 +#define FM10K_ITR_PENDING1 0x08000000 +#define FM10K_ITR_PENDING2 0x10000000 +#define FM10K_ITR_AUTOMASK 0x20000000 +#define FM10K_ITR_MASK_SET 0x40000000 +#define FM10K_ITR_MASK_CLEAR 0x80000000 +#define FM10K_ITR2(_n) ((0x2 * (_n)) + 0x12800) +#define FM10K_ITR2_LP(_n) ((0x2 * (_n)) + 0x12801) +#define FM10K_ITR_REG_COUNT 768 +#define FM10K_ITR_REG_COUNT_PF 256 + +/* Switch manager interrupt registers */ +#define FM10K_IP 0x13000 +#define FM10K_IP_HOT_RESET 0x00000001 +#define FM10K_IP_DEVICE_STATE_CHANGE 0x00000002 +#define FM10K_IP_MAILBOX 0x00000004 +#define FM10K_IP_VPD_REQUEST 0x00000008 +#define FM10K_IP_SRAMERROR 0x00000010 +#define FM10K_IP_PFLR 0x00000020 +#define FM10K_IP_DATAPATHRESET 0x00000040 +#define FM10K_IP_OUTOFRESET 0x00000080 +#define FM10K_IP_NOTINRESET 0x00000100 +#define FM10K_IP_TIMEOUT 0x00000200 +#define FM10K_IP_VFLR 0x00000400 +#define FM10K_IM 0x13001 +#define FM10K_IB 0x13002 +#define FM10K_SRAM_IP 0x13003 +#define FM10K_SRAM_IM 0x13004 + +/* VLAN registers */ +#define FM10K_VLAN_TABLE(_n, _m) ((0x80 * (_n)) + (_m) + 0x14000) +#define FM10K_VLAN_TABLE_SIZE 128 + +/* VLAN specific message offsets */ +#define FM10K_VLAN_TABLE_VID_MAX 4096 +#define FM10K_VLAN_TABLE_VSI_MAX 64 +#define FM10K_VLAN_LENGTH_SHIFT 16 +#define FM10K_VLAN_CLEAR (1 << 15) +#define FM10K_VLAN_ALL \ + ((FM10K_VLAN_TABLE_VID_MAX - 1) << FM10K_VLAN_LENGTH_SHIFT) + +/* VF FLR event notification registers */ +#define FM10K_PFVFLRE(_n) ((0x1 * (_n)) + 0x18844) +#define FM10K_PFVFLREC(_n) ((0x1 * (_n)) + 0x18846) + +/* Defines for size of uncacheable and write-combining memories */ +#define FM10K_UC_ADDR_START 0x000000 /* start of standard regs */ +#define FM10K_WC_ADDR_START 0x100000 /* start of Tx Desc Cache */ +#define FM10K_DBI_ADDR_START 0x200000 /* start of debug registers */ +#define FM10K_UC_ADDR_SIZE (FM10K_WC_ADDR_START - FM10K_UC_ADDR_START) +#define FM10K_WC_ADDR_SIZE (FM10K_DBI_ADDR_START - FM10K_WC_ADDR_START) + +/* Define timeouts for resets and disables */ +#define FM10K_QUEUE_DISABLE_TIMEOUT 100 +#define FM10K_RESET_TIMEOUT 150 + +/* Maximum supported combined inner and outer header length for 
encapsulation */ +#define FM10K_TUNNEL_HEADER_LENGTH 184 + +/* VF registers */ +#define FM10K_VFCTRL 0x00000 +#define FM10K_VFCTRL_RST 0x00000008 +#define FM10K_VFINT_MAP 0x00030 +#define FM10K_VFSYSTIME 0x00040 +#define FM10K_VFITR(_n) ((_n) + 0x00060) +#define FM10K_VFPBACL(_n) ((_n) + 0x00008) + +/* Registers contained in BAR 4 for Switch management */ +#define FM10K_SW_SYSTIME_CFG 0x0224C +#define FM10K_SW_SYSTIME_CFG_STEP_SHIFT 4 +#define FM10K_SW_SYSTIME_CFG_ADJUST_MASK 0xFF000000 +#define FM10K_SW_SYSTIME_ADJUST 0x0224D +#define FM10K_SW_SYSTIME_ADJUST_MASK 0x3FFFFFFF +#define FM10K_SW_SYSTIME_ADJUST_DIR_NEGATIVE 0x80000000 +#define FM10K_SW_SYSTIME_PULSE(_n) ((_n) + 0x02252) + +#ifndef ETH_ALEN +#define ETH_ALEN 6 +#endif /* ETH_ALEN */ + + + + +enum fm10k_int_source { + fm10k_int_Mailbox = 0, + fm10k_int_PCIeFault = 1, + fm10k_int_SwitchUpDown = 2, + fm10k_int_SwitchEvent = 3, + fm10k_int_SRAM = 4, + fm10k_int_VFLR = 5, + fm10k_int_MaxHoldTime = 6, + fm10k_int_sources_max_pf +}; + +/* PCIe bus speeds */ +enum fm10k_bus_speed { + fm10k_bus_speed_unknown = 0, + fm10k_bus_speed_2500 = 2500, + fm10k_bus_speed_5000 = 5000, + fm10k_bus_speed_8000 = 8000, + fm10k_bus_speed_reserved +}; + +/* PCIe bus widths */ +enum fm10k_bus_width { + fm10k_bus_width_unknown = 0, + fm10k_bus_width_pcie_x1 = 1, + fm10k_bus_width_pcie_x2 = 2, + fm10k_bus_width_pcie_x4 = 4, + fm10k_bus_width_pcie_x8 = 8, + fm10k_bus_width_reserved +}; + +/* PCIe payload sizes */ +enum fm10k_bus_payload { + fm10k_bus_payload_unknown = 0, + fm10k_bus_payload_128 = 1, + fm10k_bus_payload_256 = 2, + fm10k_bus_payload_512 = 3, + fm10k_bus_payload_reserved +}; + +/* Bus parameters */ +struct fm10k_bus_info { + enum fm10k_bus_speed speed; + enum fm10k_bus_width width; + enum fm10k_bus_payload payload; +}; + +/* Statistics related declarations */ +struct fm10k_hw_stat { + u64 count; + u32 base_l; + u32 base_h; +}; + +struct fm10k_hw_stats_q { + struct fm10k_hw_stat tx_bytes; + struct fm10k_hw_stat tx_packets; +#define tx_stats_idx tx_packets.base_h + struct fm10k_hw_stat rx_bytes; + struct fm10k_hw_stat rx_packets; +#define rx_stats_idx rx_packets.base_h + struct fm10k_hw_stat rx_drops; +}; + +struct fm10k_hw_stats { + struct fm10k_hw_stat timeout; +#define stats_idx timeout.base_h + struct fm10k_hw_stat ur; + struct fm10k_hw_stat ca; + struct fm10k_hw_stat um; + struct fm10k_hw_stat xec; + struct fm10k_hw_stat vlan_drop; + struct fm10k_hw_stat loopback_drop; + struct fm10k_hw_stat nodesc_drop; + struct fm10k_hw_stats_q q[FM10K_MAX_QUEUES_PF]; +}; + +/* Establish DGLORT feature priority */ +enum fm10k_dglortdec_idx { + fm10k_dglort_default = 0, + fm10k_dglort_vf_rsvd0 = 1, + fm10k_dglort_vf_rss = 2, + fm10k_dglort_pf_rsvd0 = 3, + fm10k_dglort_pf_queue = 4, + fm10k_dglort_pf_vsi = 5, + fm10k_dglort_pf_rsvd1 = 6, + fm10k_dglort_pf_rss = 7 +}; + +struct fm10k_dglort_cfg { + u16 glort; /* GLORT base */ + u16 queue_b; /* Base value for queue */ + u8 vsi_b; /* Base value for VSI */ + u8 idx; /* index of DGLORTDEC entry */ + u8 rss_l; /* RSS indices */ + u8 pc_l; /* Priority Class indices */ + u8 vsi_l; /* Number of bits from GLORT used to determine VSI */ + u8 queue_l; /* Number of bits from GLORT used to determine queue */ + u8 shared_l; /* Ignored bits from GLORT resulting in shared VSI */ + u8 inner_rss; /* Boolean value if inner header is used for RSS */ +}; + +enum fm10k_pca_fault { + PCA_NO_FAULT, + PCA_UNMAPPED_ADDR, + PCA_BAD_QACCESS_PF, + PCA_BAD_QACCESS_VF, + PCA_MALICIOUS_REQ, + PCA_POISONED_TLP, + PCA_TLP_ABORT, + __PCA_MAX 
+}; + +enum fm10k_thi_fault { + THI_NO_FAULT, + THI_MAL_DIS_Q_FAULT, + __THI_MAX +}; + +enum fm10k_fum_fault { + FUM_NO_FAULT, + FUM_UNMAPPED_ADDR, + FUM_POISONED_TLP, + FUM_BAD_VF_QACCESS, + FUM_ADD_DECODE_ERR, + FUM_RO_ERROR, + FUM_QPRC_CRC_ERROR, + FUM_CSR_TIMEOUT, + FUM_INVALID_TYPE, + FUM_INVALID_LENGTH, + FUM_INVALID_BE, + FUM_INVALID_ALIGN, + __FUM_MAX +}; + +struct fm10k_fault { + u64 address; /* Address at the time fault was detected */ + u32 specinfo; /* Extra info on this fault (fault dependent) */ + u8 type; /* Fault value dependent on subunit */ + u8 func; /* Function number of the fault */ +}; + +struct fm10k_mac_ops { + /* basic bring-up and tear-down */ + s32 (*reset_hw)(struct fm10k_hw *); + s32 (*init_hw)(struct fm10k_hw *); + s32 (*start_hw)(struct fm10k_hw *); + s32 (*stop_hw)(struct fm10k_hw *); + s32 (*get_bus_info)(struct fm10k_hw *); + s32 (*get_host_state)(struct fm10k_hw *, bool *); + bool (*is_slot_appropriate)(struct fm10k_hw *); + s32 (*update_vlan)(struct fm10k_hw *, u32, u8, bool); + s32 (*read_mac_addr)(struct fm10k_hw *); + s32 (*update_uc_addr)(struct fm10k_hw *, u16, const u8 *, + u16, bool, u8); + s32 (*update_mc_addr)(struct fm10k_hw *, u16, const u8 *, u16, bool); + s32 (*update_xcast_mode)(struct fm10k_hw *, u16, u8); + void (*update_int_moderator)(struct fm10k_hw *); + s32 (*update_lport_state)(struct fm10k_hw *, u16, u16, bool); + void (*update_hw_stats)(struct fm10k_hw *, struct fm10k_hw_stats *); + void (*rebind_hw_stats)(struct fm10k_hw *, struct fm10k_hw_stats *); + s32 (*configure_dglort_map)(struct fm10k_hw *, + struct fm10k_dglort_cfg *); + void (*set_dma_mask)(struct fm10k_hw *, u64); + s32 (*get_fault)(struct fm10k_hw *, int, struct fm10k_fault *); + void (*request_lport_map)(struct fm10k_hw *); + s32 (*adjust_systime)(struct fm10k_hw *, s32 ppb); + u64 (*read_systime)(struct fm10k_hw *); + s32 (*request_tx_timestamp_mode)(struct fm10k_hw *, u16, u8); +}; + +enum fm10k_mac_type { + fm10k_mac_unknown = 0, + fm10k_mac_pf, + fm10k_mac_vf, + fm10k_num_macs +}; + +struct fm10k_mac_info { + struct fm10k_mac_ops ops; + enum fm10k_mac_type type; + u8 addr[ETH_ALEN]; + u8 perm_addr[ETH_ALEN]; + u16 default_vid; + u16 max_msix_vectors; + u16 max_queues; + bool vlan_override; + bool get_host_state; + bool tx_ready; + u32 dglort_map; +}; + +struct fm10k_swapi_table_info { + u32 used; + u32 avail; +}; + +struct fm10k_swapi_info { + u32 status; + struct fm10k_swapi_table_info mac; + struct fm10k_swapi_table_info nexthop; + struct fm10k_swapi_table_info ffu; +}; + +enum fm10k_xcast_modes { + FM10K_XCAST_MODE_ALLMULTI = 0, + FM10K_XCAST_MODE_MULTI = 1, + FM10K_XCAST_MODE_PROMISC = 2, + FM10K_XCAST_MODE_NONE = 3, + FM10K_XCAST_MODE_DISABLE = 4 +}; + +enum fm10k_timestamp_modes { + FM10K_TIMESTAMP_MODE_NONE = 0, + FM10K_TIMESTAMP_MODE_PEP_TO_PEP = 1, + FM10K_TIMESTAMP_MODE_PEP_TO_ANY = 2, +}; + +#define FM10K_VF_TC_MAX 100000 /* 100,000 Mb/s aka 100Gb/s */ +#define FM10K_VF_TC_MIN 1 /* 1 Mb/s is the slowest rate */ + +struct fm10k_vf_info { + /* mbx must be first field in struct unless all default IOV message + * handlers are redone as the assumption is that vf_info starts + * at the same offset as the mailbox + */ + struct fm10k_mbx_info mbx; /* PF side of VF mailbox */ + int rate; /* Tx BW cap as defined by OS */ + u16 glort; /* resource tag for this VF */ + u16 sw_vid; /* Switch API assigned VLAN */ + u16 pf_vid; /* PF assigned Default VLAN */ + u8 mac[ETH_ALEN]; /* PF Default MAC address */ + u8 vsi; /* VSI identifier */ + u8 vf_idx; /* which VF this is 
*/ + u8 vf_flags; /* flags indicating what modes + * are supported for the port + */ +}; + +#define FM10K_VF_FLAG_ALLMULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_ALLMULTI) +#define FM10K_VF_FLAG_MULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_MULTI) +#define FM10K_VF_FLAG_PROMISC_CAPABLE ((u8)1 << FM10K_XCAST_MODE_PROMISC) +#define FM10K_VF_FLAG_NONE_CAPABLE ((u8)1 << FM10K_XCAST_MODE_NONE) +#define FM10K_VF_FLAG_CAPABLE(vf_info) ((vf_info)->vf_flags & (u8)0xF) +#define FM10K_VF_FLAG_ENABLED(vf_info) ((vf_info)->vf_flags >> 4) +#define FM10K_VF_FLAG_SET_MODE(mode) ((u8)0x10 << (mode)) +#define FM10K_VF_FLAG_ENABLED_MODE_SHIFT 4 +#define FM10K_VF_FLAG_SET_MODE_MASK ((u8)0xF0) +#define FM10K_VF_FLAG_SET_MODE_NONE \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_NONE) +#define FM10K_VF_FLAG_MULTI_ENABLED \ + (FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_ALLMULTI) | \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_MULTI) | \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_PROMISC)) + +struct fm10k_iov_ops { + /* IOV related bring-up and tear-down */ + s32 (*assign_resources)(struct fm10k_hw *, u16, u16); + s32 (*configure_tc)(struct fm10k_hw *, u16, int); + s32 (*assign_int_moderator)(struct fm10k_hw *, u16); + s32 (*assign_default_mac_vlan)(struct fm10k_hw *, + struct fm10k_vf_info *); + s32 (*reset_resources)(struct fm10k_hw *, + struct fm10k_vf_info *); + s32 (*set_lport)(struct fm10k_hw *, struct fm10k_vf_info *, u16, u8); + void (*reset_lport)(struct fm10k_hw *, struct fm10k_vf_info *); + void (*update_stats)(struct fm10k_hw *, struct fm10k_hw_stats_q *, u16); + s32 (*report_timestamp)(struct fm10k_hw *, struct fm10k_vf_info *, u64); +}; + +struct fm10k_iov_info { + struct fm10k_iov_ops ops; + u16 total_vfs; + u16 num_vfs; + u16 num_pools; +}; + +struct fm10k_hw { + u32 *hw_addr; + u32 *sw_addr; + void *back; + struct fm10k_mac_info mac; + struct fm10k_bus_info bus; + struct fm10k_bus_info bus_caps; + struct fm10k_iov_info iov; + struct fm10k_mbx_info mbx; + struct fm10k_swapi_info swapi; + u16 device_id; + u16 vendor_id; + u16 subsystem_device_id; + u16 subsystem_vendor_id; + u8 revision_id; +}; + +/* Number of Transmit and Receive Descriptors must be a multiple of 8 */ +#define FM10K_REQ_TX_DESCRIPTOR_MULTIPLE 8 +#define FM10K_REQ_RX_DESCRIPTOR_MULTIPLE 8 + +/* Transmit Descriptor */ +struct fm10k_tx_desc { + __le64 buffer_addr; /* Address of the descriptor's data buffer */ + __le16 buflen; /* Length of data to be DMAed */ + __le16 vlan; /* VLAN_ID and VPRI to be inserted in FTAG */ + __le16 mss; /* MSS for segmentation offload */ + u8 hdrlen; /* Header size for segmentation offload */ + u8 flags; /* Status and offload request flags */ +}; + +/* Transmit Descriptor Cache Structure */ +struct fm10k_tx_desc_cache { + struct fm10k_tx_desc tx_desc[256]; +}; + +#define FM10K_TXD_FLAG_INT 0x01 +#define FM10K_TXD_FLAG_TIME 0x02 +#define FM10K_TXD_FLAG_CSUM 0x04 +#define FM10K_TXD_FLAG_CSUM2 0x08 +#define FM10K_TXD_FLAG_FTAG 0x10 +#define FM10K_TXD_FLAG_RS 0x20 +#define FM10K_TXD_FLAG_LAST 0x40 +#define FM10K_TXD_FLAG_DONE 0x80 + +#define FM10K_TXD_VLAN_PRI_SHIFT 12 + +/* These macros are meant to enable optimal placement of the RS and INT + * bits. It will point us to the last descriptor in the cache for either the + * start of the packet, or the end of the packet. If the index is actually + * at the start of the FIFO it will point to the offset for the last index + * in the FIFO to prevent an unnecessary write. 
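 * For example, with the FIFO size of 4 defined just below, FM10K_TXD_WB_IDX()
 * maps indices 1-4 to 3 and indices 5-8 to 7, i.e. always the last slot of
 * the four-descriptor group containing index - 1.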
+ */ +#define FM10K_TXD_WB_FIFO_SIZE 4 +#define FM10K_TXD_WB_IDX(idx) \ + (((idx) - 1) | (FM10K_TXD_WB_FIFO_SIZE - 1)) + +/* Receive Descriptor - 32B */ +union fm10k_rx_desc { + struct { + __le64 pkt_addr; /* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ + __le64 reserved; /* Empty space, RSS hash */ + __le64 timestamp; + } q; /* Read, Writeback, 64b quad-words */ + struct { + __le32 data; /* RSS and header data */ + __le32 rss; /* RSS Hash */ + __le32 staterr; + __le32 vlan_len; + __le32 glort; /* sglort/dglort */ + } d; /* Writeback, 32b double-words */ + struct { + __le16 pkt_info; /* RSS, Pkt type */ + __le16 hdr_info; /* Splithdr, hdrlen, xC */ + __le16 rss_lower; + __le16 rss_upper; + __le16 status; /* status/error */ + __le16 csum_err; /* checksum or extended error value */ + __le16 length; /* Packet length */ + __le16 vlan; /* VLAN tag */ + __le16 dglort; + __le16 sglort; + } w; /* Writeback, 16b words */ +}; + +#define FM10K_RXD_RSSTYPE_MASK 0x000F +enum fm10k_rdesc_rss_type { + FM10K_RSSTYPE_NONE = 0x0, + FM10K_RSSTYPE_IPV4_TCP = 0x1, + FM10K_RSSTYPE_IPV4 = 0x2, + FM10K_RSSTYPE_IPV6_TCP = 0x3, + /* Reserved 0x4 */ + FM10K_RSSTYPE_IPV6 = 0x5, + /* Reserved 0x6 */ + FM10K_RSSTYPE_IPV4_UDP = 0x7, + FM10K_RSSTYPE_IPV6_UDP = 0x8 + /* Reserved 0x9 - 0xF */ +}; + +#define FM10K_RXD_PKTTYPE_MASK 0x03F0 +#define FM10K_RXD_PKTTYPE_MASK_L3 0x0070 +#define FM10K_RXD_PKTTYPE_MASK_L4 0x0380 +#define FM10K_RXD_PKTTYPE_SHIFT 4 +#define FM10K_RXD_PKTTYPE_INNER_MASK_L3 0x1C00 +#define FM10K_RXD_PKTTYPE_INNER_MASK_L4 0xE000 +#define FM10K_RXD_PKTTYPE_INNER_SHIFT 10 +enum fm10k_rdesc_pkt_type { + /* L3 type */ + FM10K_PKTTYPE_OTHER = 0x00, + FM10K_PKTTYPE_IPV4 = 0x01, + FM10K_PKTTYPE_IPV4_EX = 0x02, + FM10K_PKTTYPE_IPV6 = 0x03, + FM10K_PKTTYPE_IPV6_EX = 0x04, + + /* L4 type */ + FM10K_PKTTYPE_TCP = 0x08, + FM10K_PKTTYPE_UDP = 0x10, + FM10K_PKTTYPE_GRE = 0x18, + FM10K_PKTTYPE_VXLAN = 0x20, + FM10K_PKTTYPE_NVGRE = 0x28, + FM10K_PKTTYPE_GENEVE = 0x30 +}; + +#define FM10K_RXD_HDR_INFO_XC_MASK 0x0006 +enum fm10k_rxdesc_xc { + FM10K_XC_UNICAST = 0x0, + FM10K_XC_MULTICAST = 0x4, + FM10K_XC_BROADCAST = 0x6 +}; + +#define FM10K_RXD_HDR_INFO_LEN_SHIFT 5 +#define FM10K_RXD_HDR_INFO_SPH 0x8000 + +#define FM10K_RXD_STATUS_DD 0x0001 /* Descriptor done */ +#define FM10K_RXD_STATUS_EOP 0x0002 /* End of packet */ +#define FM10K_RXD_STATUS_VEXT 0x0004 /* A VLAN tag is present */ +#define FM10K_RXD_STATUS_IPCS 0x0008 /* Indicates IPv4 csum */ +#define FM10K_RXD_STATUS_L4CS 0x0010 /* Indicates an L4 csum */ +#define FM10K_RXD_STATUS_IPCS2 0x0020 /* Inner header IPv4 csum */ +#define FM10K_RXD_STATUS_L4CS2 0x0040 /* Inner header L4 csum */ +#define FM10K_RXD_STATUS_IPFRAG_MASK 0x0180 /* Fragment mask */ +#define FM10K_RXD_STATUS_IPFRAG_CSUM 0x0100 /* Fragment w/ CSUM field */ +#define FM10K_RXD_STATUS_VEXT2 0x0200 /* A custom tag is present */ +#define FM10K_RXD_STATUS_HBO 0x0400 /* header buffer overrun */ +#define FM10K_RXD_STATUS_L4E2 0x0800 /* Inner header L4 csum err */ +#define FM10K_RXD_STATUS_IPE2 0x1000 /* Inner header IPv4 csum err */ +#define FM10K_RXD_STATUS_RXE 0x2000 /* Generic Rx error */ +#define FM10K_RXD_STATUS_L4E 0x4000 /* L4 csum error */ +#define FM10K_RXD_STATUS_IPE 0x8000 /* IPv4 csum error */ + +#define FM10K_RXD_ERR_SWITCH_ERROR 0x0001 /* Switch found bad packet */ +#define FM10K_RXD_ERR_NO_DESCRIPTOR 0x0002 /* No descriptor available */ +#define FM10K_RXD_ERR_PP_ERROR 0x0004 /* RAM error during processing */ +#define FM10K_RXD_ERR_SWITCH_READY 0x0008 /* Link 
transition mid-packet */ +#define FM10K_RXD_ERR_TOO_BIG 0x0010 /* Pkt too big for single buf */ + +#define FM10K_RXD_VLAN_ID_MASK 0x0FFF +#define FM10K_RXD_VLAN_PRI_SHIFT FM10K_TXD_VLAN_PRI_SHIFT + +struct fm10k_ftag { + __be16 swpri_type_user; + __be16 vlan; + __be16 sglort; + __be16 dglort; +}; + +#endif /* _FM10K_TYPE_H */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_vf.c b/lib/librte_pmd_fm10k/base/fm10k_vf.c new file mode 100644 index 0000000..2246688 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_vf.c @@ -0,0 +1,641 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_vf.h" + +/** + * fm10k_stop_hw_vf - Stop Tx/Rx units + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_stop_hw_vf(struct fm10k_hw *hw) +{ + u8 *perm_addr = hw->mac.perm_addr; + u32 bal = 0, bah = 0; + s32 err; + u16 i; + + DEBUGFUNC("fm10k_stop_hw_vf"); + + /* we need to disable the queues before taking further steps */ + err = fm10k_stop_hw_generic(hw); + if (err) + return err; + + /* If permanent address is set then we need to restore it */ + if (FM10K_IS_VALID_ETHER_ADDR(perm_addr)) { + bal = (((u32)perm_addr[3]) << 24) | + (((u32)perm_addr[4]) << 16) | + (((u32)perm_addr[5]) << 8); + bah = (((u32)0xFF) << 24) | + (((u32)perm_addr[0]) << 16) | + (((u32)perm_addr[1]) << 8) | + ((u32)perm_addr[2]); + } + + /* The queues have already been disabled so we just need to + * update their base address registers + */ + for (i = 0; i < hw->mac.max_queues; i++) { + FM10K_WRITE_REG(hw, FM10K_TDBAL(i), bal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(i), bah); + FM10K_WRITE_REG(hw, FM10K_RDBAL(i), bal); + FM10K_WRITE_REG(hw, FM10K_RDBAH(i), bah); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_reset_hw_vf - VF hardware reset + * @hw: pointer to hardware structure + * + * This function should return the hardware to a state similar to the + * one it is in after just being initialized. 
+ **/ +STATIC s32 fm10k_reset_hw_vf(struct fm10k_hw *hw) +{ + s32 err; + + DEBUGFUNC("fm10k_reset_hw_vf"); + + /* shut down queues we own and reset DMA configuration */ + err = fm10k_stop_hw_vf(hw); + if (err) + return err; + + /* Inititate VF reset */ + FM10K_WRITE_REG(hw, FM10K_VFCTRL, FM10K_VFCTRL_RST); + + /* Flush write and allow 100us for reset to complete */ + FM10K_WRITE_FLUSH(hw); + usec_delay(FM10K_RESET_TIMEOUT); + + /* Clear reset bit and verify it was cleared */ + FM10K_WRITE_REG(hw, FM10K_VFCTRL, 0); + if (FM10K_READ_REG(hw, FM10K_VFCTRL) & FM10K_VFCTRL_RST) + err = FM10K_ERR_RESET_FAILED; + + return err; +} + +/** + * fm10k_init_hw_vf - VF hardware initialization + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_init_hw_vf(struct fm10k_hw *hw) +{ + u32 tqdloc, tqdloc0 = ~FM10K_READ_REG(hw, FM10K_TQDLOC(0)); + s32 err; + u16 i; + + DEBUGFUNC("fm10k_init_hw_vf"); + + /* assume we always have at least 1 queue */ + for (i = 1; tqdloc0 && (i < FM10K_MAX_QUEUES_POOL); i++) { + /* verify the Descriptor cache offsets are increasing */ + tqdloc = ~FM10K_READ_REG(hw, FM10K_TQDLOC(i)); + if (!tqdloc || (tqdloc == tqdloc0)) + break; + + /* check to verify the PF doesn't own any of our queues */ + if (!~FM10K_READ_REG(hw, FM10K_TXQCTL(i)) || + !~FM10K_READ_REG(hw, FM10K_RXQCTL(i))) + break; + } + + /* shut down queues we own and reset DMA configuration */ + err = fm10k_disable_queues_generic(hw, i); + if (err) + return err; + + /* record maximum queue count */ + hw->mac.max_queues = i; + + /* fetch default VLAN */ + hw->mac.default_vid = (FM10K_READ_REG(hw, FM10K_TXQCTL(0)) & + FM10K_TXQCTL_VID_MASK) >> FM10K_TXQCTL_VID_SHIFT; + + return FM10K_SUCCESS; +} + +/** + * fm10k_is_slot_appropriate_vf - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. Since the VF has no control over + * the "slot" it is in, always indicate that the slot is appropriate. + **/ +STATIC bool fm10k_is_slot_appropriate_vf(struct fm10k_hw *hw) +{ + UNREFERENCED_1PARAMETER(hw); + DEBUGFUNC("fm10k_is_slot_appropriate_vf"); + + return TRUE; +} + +/* This structure defines the attibutes to be parsed below */ +const struct fm10k_tlv_attr fm10k_mac_vlan_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_MAC_VLAN_MSG_VLAN), + FM10K_TLV_ATTR_BOOL(FM10K_MAC_VLAN_MSG_SET), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_MAC), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_DEFAULT_MAC), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_MULTICAST), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_update_vlan_vf - Update status of VLAN ID in VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @vsi: Reserved, should always be 0 + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for this VF. 
+ **/ +STATIC s32 fm10k_update_vlan_vf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[4]; + + /* verify the index is not set */ + if (vsi) + return FM10K_ERR_PARAM; + + /* verify upper 4 bits of vid and length are 0 */ + if ((vid << 16 | vid) >> 28) + return FM10K_ERR_PARAM; + + /* encode set bit into the VLAN ID */ + if (!set) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_u32(msg, FM10K_MAC_VLAN_MSG_VLAN, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_msg_mac_vlan_vf - Read device MAC address from mailbox message + * @hw: pointer to the HW structure + * @results: Attributes for message + * @mbx: unused mailbox data + * + * This function should determine the MAC address for the VF + **/ +s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u8 perm_addr[ETH_ALEN]; + u16 vid; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_mac_vlan_vf"); + + /* record MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan( + results[FM10K_MAC_VLAN_MSG_DEFAULT_MAC], + perm_addr, &vid); + if (err) + return err; + + memcpy(hw->mac.perm_addr, perm_addr, ETH_ALEN); + hw->mac.default_vid = vid & (FM10K_VLAN_TABLE_VID_MAX - 1); + hw->mac.vlan_override = !!(vid & FM10K_VLAN_CLEAR); + + return FM10K_SUCCESS; +} + +/** + * fm10k_read_mac_addr_vf - Read device MAC address + * @hw: pointer to the HW structure + * + * This function should determine the MAC address for the VF + **/ +STATIC s32 fm10k_read_mac_addr_vf(struct fm10k_hw *hw) +{ + u8 perm_addr[ETH_ALEN]; + u32 base_addr; + + DEBUGFUNC("fm10k_read_mac_addr_vf"); + + base_addr = FM10K_READ_REG(hw, FM10K_TDBAL(0)); + + /* last byte should be 0 */ + if (base_addr << 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[3] = (u8)(base_addr >> 24); + perm_addr[4] = (u8)(base_addr >> 16); + perm_addr[5] = (u8)(base_addr >> 8); + + base_addr = FM10K_READ_REG(hw, FM10K_TDBAH(0)); + + /* first byte should be all 1's */ + if ((~base_addr) >> 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[0] = (u8)(base_addr >> 16); + perm_addr[1] = (u8)(base_addr >> 8); + perm_addr[2] = (u8)(base_addr); + + memcpy(hw->mac.perm_addr, perm_addr, ETH_ALEN); + memcpy(hw->mac.addr, perm_addr, ETH_ALEN); + + return FM10K_SUCCESS; +} + +/** + * fm10k_update_uc_addr_vf - Update device unicast addresses + * @hw: pointer to the HW structure + * @glort: unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure - unused + * + * This function is used to add or remove unicast MAC addresses for + * the VF. 
+ **/ +STATIC s32 fm10k_update_uc_addr_vf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[7]; + + DEBUGFUNC("fm10k_update_uc_addr_vf"); + + UNREFERENCED_2PARAMETER(glort, flags); + + /* verify VLAN ID is valid */ + if (vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* verify MAC address is valid */ + if (!FM10K_IS_VALID_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + /* verify we are not locked down on the MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(hw->mac.perm_addr) && + memcmp(hw->mac.perm_addr, mac, ETH_ALEN)) + return FM10K_ERR_PARAM; + + /* add bit to notify us if this is a set or clear operation */ + if (!add) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_MAC, mac, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_mc_addr_vf - Update device multicast addresses + * @hw: pointer to the HW structure + * @glort: unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses for + * the VF. + **/ +STATIC s32 fm10k_update_mc_addr_vf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[7]; + + DEBUGFUNC("fm10k_update_uc_addr_vf"); + + UNREFERENCED_1PARAMETER(glort); + + /* verify VLAN ID is valid */ + if (vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* verify multicast address is valid */ + if (!FM10K_IS_MULTICAST_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + /* add bit to notify us if this is a set or clear operation */ + if (!add) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_MULTICAST, + mac, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_int_moderator_vf - Request update of interrupt moderator list + * @hw: pointer to hardware structure + * + * This function will issue a request to the PF to rescan our MSI-X table + * and to update the interrupt moderator linked list. + **/ +STATIC void fm10k_update_int_moderator_vf(struct fm10k_hw *hw) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[1]; + + /* generate MSI-X request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MSIX); + + /* load onto outgoing mailbox */ + mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/* This structure defines the attibutes to be parsed below */ +const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[] = { + FM10K_TLV_ATTR_BOOL(FM10K_LPORT_STATE_MSG_DISABLE), + FM10K_TLV_ATTR_U8(FM10K_LPORT_STATE_MSG_XCAST_MODE), + FM10K_TLV_ATTR_BOOL(FM10K_LPORT_STATE_MSG_READY), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_lport_state_vf - Message handler for lport_state message from PF + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler is meant to capture the indication from the PF that we + * are ready to bring up the interface. 
+ **/ +s32 fm10k_msg_lport_state_vf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_lport_state_vf"); + + hw->mac.dglort_map = !results[FM10K_LPORT_STATE_MSG_READY] ? + FM10K_DGLORTMAP_NONE : FM10K_DGLORTMAP_ZERO; + + return FM10K_SUCCESS; +} + +/** + * fm10k_update_lport_state_vf - Update device state in lower device + * @hw: pointer to the HW structure + * @glort: unused + * @count: number of logical ports to enable - unused (always 1) + * @enable: boolean value indicating if this is an enable or disable request + * + * Notify the lower device of a state change. If the lower device is + * enabled we can add filters, if it is disabled all filters for this + * logical port are flushed. + **/ +STATIC s32 fm10k_update_lport_state_vf(struct fm10k_hw *hw, u16 glort, + u16 count, bool enable) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[2]; + + UNREFERENCED_2PARAMETER(glort, count); + DEBUGFUNC("fm10k_update_lport_state_vf"); + + /* reset glort mask 0 as we have to wait to be enabled */ + hw->mac.dglort_map = FM10K_DGLORTMAP_NONE; + + /* generate port state request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + if (!enable) + fm10k_tlv_attr_put_bool(msg, FM10K_LPORT_STATE_MSG_DISABLE); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_xcast_mode_vf - Request update of multicast mode + * @hw: pointer to hardware structure + * @glort: unused + * @mode: integer value indicating mode being requested + * + * This function will attempt to request a higher mode for the port + * so that it can enable either multicast, multicast promiscuous, or + * promiscuous mode of operation. + **/ +STATIC s32 fm10k_update_xcast_mode_vf(struct fm10k_hw *hw, u16 glort, u8 mode) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3]; + + UNREFERENCED_1PARAMETER(glort); + DEBUGFUNC("fm10k_update_xcast_mode_vf"); + + if (mode > FM10K_XCAST_MODE_NONE) + return FM10K_ERR_PARAM; + + /* generate message requesting to change xcast mode */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + fm10k_tlv_attr_put_u8(msg, FM10K_LPORT_STATE_MSG_XCAST_MODE, mode); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +const struct fm10k_tlv_attr fm10k_1588_msg_attr[] = { + FM10K_TLV_ATTR_U64(FM10K_1588_MSG_TIMESTAMP), + FM10K_TLV_ATTR_LAST +}; + +/* currently there is no shared 1588 timestamp handler */ + +/** + * fm10k_update_hw_stats_vf - Updates hardware related statistics of VF + * @hw: pointer to hardware structure + * @stats: pointer to statistics structure + * + * This function collects and aggregates per queue hardware statistics. + **/ +STATIC void fm10k_update_hw_stats_vf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_update_hw_stats_vf"); + + fm10k_update_hw_stats_q(hw, stats->q, 0, hw->mac.max_queues); +} + +/** + * fm10k_rebind_hw_stats_vf - Resets base for hardware statistics of VF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function resets the base for queue hardware statistics. 
+ **/ +STATIC void fm10k_rebind_hw_stats_vf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_rebind_hw_stats_vf"); + + /* Unbind Queue Statistics */ + fm10k_unbind_hw_stats_q(stats->q, 0, hw->mac.max_queues); + + /* Reinitialize bases for all stats */ + fm10k_update_hw_stats_vf(hw, stats); +} + +/** + * fm10k_configure_dglort_map_vf - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned. + **/ +STATIC s32 fm10k_configure_dglort_map_vf(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + UNREFERENCED_1PARAMETER(hw); + DEBUGFUNC("fm10k_configure_dglort_map_vf"); + + /* verify the dglort pointer */ + if (!dglort) + return FM10K_ERR_PARAM; + + /* stub for now until we determine correct message for this */ + + return FM10K_SUCCESS; +} + +/** + * fm10k_adjust_systime_vf - Adjust systime frequency + * @hw: pointer to hardware structure + * @ppb: adjustment rate in parts per billion + * + * This function takes an adjustment rate in parts per billion and will + * verify that this value is 0 as the VF cannot support adjusting the + * systime clock. + * + * If the ppb value is non-zero the return is ERR_PARAM else success + **/ +STATIC s32 fm10k_adjust_systime_vf(struct fm10k_hw *hw, s32 ppb) +{ + UNREFERENCED_1PARAMETER(hw); + DEBUGFUNC("fm10k_adjust_systime_vf"); + + /* The VF cannot adjust the clock frequency, however it should + * already have a syntonic clock with whichever host interface is + * running as the master for the host interface clock domain so + * there should be not frequency adjustment necessary. + */ + return ppb ? FM10K_ERR_PARAM : FM10K_SUCCESS; +} + +/** + * fm10k_read_systime_vf - Reads value of systime registers + * @hw: pointer to the hardware structure + * + * Function reads the content of 2 registers, combined to represent a 64 bit + * value measured in nanoseconds. In order to guarantee the value is accurate + * we check the 32 most significant bits both before and after reading the + * 32 least significant bits to verify they didn't change as we were reading + * the registers. + **/ +static u64 fm10k_read_systime_vf(struct fm10k_hw *hw) +{ + u32 systime_l, systime_h, systime_tmp; + + systime_h = fm10k_read_reg(hw, FM10K_VFSYSTIME + 1); + + do { + systime_tmp = systime_h; + systime_l = fm10k_read_reg(hw, FM10K_VFSYSTIME); + systime_h = fm10k_read_reg(hw, FM10K_VFSYSTIME + 1); + } while (systime_tmp != systime_h); + + return ((u64)systime_h << 32) | systime_l; +} + +static const struct fm10k_msg_data fm10k_msg_data_vf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_msg_mac_vlan_vf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_init_ops_vf - Inits func ptrs and MAC type + * @hw: pointer to hardware structure + * + * Initialize the function pointers and assign the MAC type for VF. + * Does not touch the hardware. 
+ **/ +s32 fm10k_init_ops_vf(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + + DEBUGFUNC("fm10k_init_ops_vf"); + + fm10k_init_ops_generic(hw); + + mac->ops.reset_hw = &fm10k_reset_hw_vf; + mac->ops.init_hw = &fm10k_init_hw_vf; + mac->ops.start_hw = &fm10k_start_hw_generic; + mac->ops.stop_hw = &fm10k_stop_hw_vf; + mac->ops.is_slot_appropriate = &fm10k_is_slot_appropriate_vf; + mac->ops.update_vlan = &fm10k_update_vlan_vf; + mac->ops.read_mac_addr = &fm10k_read_mac_addr_vf; + mac->ops.update_uc_addr = &fm10k_update_uc_addr_vf; + mac->ops.update_mc_addr = &fm10k_update_mc_addr_vf; + mac->ops.update_xcast_mode = &fm10k_update_xcast_mode_vf; + mac->ops.update_int_moderator = &fm10k_update_int_moderator_vf; + mac->ops.update_lport_state = &fm10k_update_lport_state_vf; + mac->ops.update_hw_stats = &fm10k_update_hw_stats_vf; + mac->ops.rebind_hw_stats = &fm10k_rebind_hw_stats_vf; + mac->ops.configure_dglort_map = &fm10k_configure_dglort_map_vf; + mac->ops.get_host_state = &fm10k_get_host_state_generic; + mac->ops.adjust_systime = &fm10k_adjust_systime_vf; + mac->ops.read_systime = &fm10k_read_systime_vf, + + mac->max_msix_vectors = fm10k_get_pcie_msix_count_generic(hw); + + return fm10k_pfvf_mbx_init(hw, &hw->mbx, fm10k_msg_data_vf, 0); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_vf.h b/lib/librte_pmd_fm10k/base/fm10k_vf.h new file mode 100644 index 0000000..0438542 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_vf.h @@ -0,0 +1,91 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _FM10K_VF_H_ +#define _FM10K_VF_H_ + +#include "fm10k_type.h" +#include "fm10k_common.h" + +enum fm10k_vf_tlv_msg_id { + FM10K_VF_MSG_ID_TEST = 0, /* msg ID reserved for testing */ + FM10K_VF_MSG_ID_MSIX, + FM10K_VF_MSG_ID_MAC_VLAN, + FM10K_VF_MSG_ID_LPORT_STATE, + FM10K_VF_MSG_ID_1588, + FM10K_VF_MSG_ID_MAX, +}; + +enum fm10k_tlv_mac_vlan_attr_id { + FM10K_MAC_VLAN_MSG_VLAN, + FM10K_MAC_VLAN_MSG_SET, + FM10K_MAC_VLAN_MSG_MAC, + FM10K_MAC_VLAN_MSG_DEFAULT_MAC, + FM10K_MAC_VLAN_MSG_MULTICAST, + FM10K_MAC_VLAN_MSG_ID_MAX +}; + +enum fm10k_tlv_lport_state_attr_id { + FM10K_LPORT_STATE_MSG_DISABLE, + FM10K_LPORT_STATE_MSG_XCAST_MODE, + FM10K_LPORT_STATE_MSG_READY, + FM10K_LPORT_STATE_MSG_MAX +}; + +enum fm10k_tlv_1588_attr_id { + FM10K_1588_MSG_TIMESTAMP, + FM10K_1588_MSG_MAX +}; + +#define FM10K_VF_MSG_MSIX_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_MSIX, NULL, func) + +s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_mac_vlan_msg_attr[]; +#define FM10K_VF_MSG_MAC_VLAN_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_MAC_VLAN, \ + fm10k_mac_vlan_msg_attr, func) + +s32 fm10k_msg_lport_state_vf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[]; +#define FM10K_VF_MSG_LPORT_STATE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_LPORT_STATE, \ + fm10k_lport_state_msg_attr, func) + +extern const struct fm10k_tlv_attr fm10k_1588_msg_attr[]; +#define FM10K_VF_MSG_1588_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_1588, fm10k_1588_msg_attr, func) + +s32 fm10k_init_ops_vf(struct fm10k_hw *hw); +#endif /* _FM10K_VF_H */ -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
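To make the VF control flow above easier to follow, here is a minimal bring-up sketch. It is not part of the patch: the wrapper name example_vf_bringup is illustrative, and it assumes the caller has already zeroed the fm10k_hw structure and mapped the device BAR into hw->hw_addr. It simply invokes the ops installed by fm10k_init_ops_vf() in their expected order.

#include "fm10k_vf.h"

/* Illustrative only: exercise the VF ops in the order the base driver
 * expects during device bring-up.
 */
static s32 example_vf_bringup(struct fm10k_hw *hw)
{
	s32 err;

	/* install VF function pointers and the PF/VF mailbox handlers */
	err = fm10k_init_ops_vf(hw);
	if (err)
		return err;

	/* return the device to a post-power-on state */
	err = hw->mac.ops.reset_hw(hw);
	if (err)
		return err;

	/* discover the queues and default VLAN assigned by the PF */
	err = hw->mac.ops.init_hw(hw);
	if (err)
		return err;

	/* recover the permanent MAC address from TDBAL(0)/TDBAH(0) */
	err = hw->mac.ops.read_mac_addr(hw);
	if (err)
		return err;

	/* mark the Tx path ready (fm10k_start_hw_generic) */
	return hw->mac.ops.start_hw(hw);
}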
* [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 01/17] fm10k: add base driver Chen Jing D(Mark) @ 2015-02-17 14:18 ` Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 01/16] fm10k: add base driver Chen Jing D(Mark) ` (16 more replies) 0 siblings, 17 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-17 14:18 UTC (permalink / raw) To: dev From: "Chen Jing D(Mark)" <jing.d.chen@intel.com> The patch set add poll mode driver for the host interface of Intel Ethernet Switch FM10000 Series of silicons, which integrate NIC and switch functionalities. The patch set include below features: 1. Basic RX/TX functions for PF/VF. 2. Interrupt handling mechanism for PF/VF. 3. per queue start/stop functions for PF/VF. 4. Mailbox handling between PF/VF and PF/Switch Manager. 5. Receive Side Scaling (RSS) for PF/VF. 6. Scatter receive function for PF/VF. 7. reta update/query for PF/VF. 8. VLAN filter set for PF. 9. Link status query for PF/VF. Change in v6: - Merge ABI patch with fm10k driver regsiter patch. - Fix typo. - Rework comments. - Minor adjustment on commit log. - Increase error variable after mbuf allocation failed. Change in v5: - Add sanity check for mbuf allocation. - Add a new patch to claim fm10k driver review - Change commit log. - Add unlikely in func rx_desc_to_ol_flags to gain performance - Add a new patch to add ABI version Change in v4: - Change commit log to remove improper words. Changes in v3: - Update base driver. - Define several macros to pass base driver compile. Changes in v2: - Merge 3 patches into 1 to configure fm10k compile environment. - Rework on log code to follow style in ixgbe. - Rework log message, remove redundant '\n' - Update Copyright year from "2014" to "2015" - Change base driver directory name from SHARED to base - Add more description in log for patch "add PF and VF interrupt" - Merge 2 patches into 1 to register fm10k driver - Define macro to replace numeric for lower 32-bit mask. 
Chen Jing D(Mark) (1): maintainers: claim for fm10k review Jeff Shaw (15): fm10k: add base driver eal: add fm10k device id fm10k: register fm10k pmd PF driver config: change config files to add fm10k into compile fm10k: add reta update/requery functions fm10k: add Rx queue setup/release function fm10k: add Tx queue setup/release function fm10k: add Rx/Tx single queue start/stop function fm10k: add dev start/stop functions fm10k: add receive and tranmit function fm10k: add PF RSS support fm10k: add scatter receive function fm10k: add function to set vlan fm10k: add SRIOV-VF support fm10k: add PF and VF interrupt handling function MAINTAINERS | 4 + config/common_bsdapp | 11 + config/common_linuxapp | 11 + lib/Makefile | 1 + lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 + lib/librte_pmd_fm10k/Makefile | 100 + lib/librte_pmd_fm10k/base/fm10k_api.c | 341 ++++ lib/librte_pmd_fm10k/base/fm10k_api.h | 61 + lib/librte_pmd_fm10k/base/fm10k_common.c | 572 ++++++ lib/librte_pmd_fm10k/base/fm10k_common.h | 52 + lib/librte_pmd_fm10k/base/fm10k_mbx.c | 2185 +++++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_mbx.h | 329 ++++ lib/librte_pmd_fm10k/base/fm10k_osdep.h | 148 ++ lib/librte_pmd_fm10k/base/fm10k_pf.c | 1992 +++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_pf.h | 155 ++ lib/librte_pmd_fm10k/base/fm10k_tlv.c | 914 ++++++++++ lib/librte_pmd_fm10k/base/fm10k_tlv.h | 199 ++ lib/librte_pmd_fm10k/base/fm10k_type.h | 937 ++++++++++ lib/librte_pmd_fm10k/base/fm10k_vf.c | 641 +++++++ lib/librte_pmd_fm10k/base/fm10k_vf.h | 91 + lib/librte_pmd_fm10k/fm10k.h | 292 +++ lib/librte_pmd_fm10k/fm10k_ethdev.c | 1867 +++++++++++++++++++ lib/librte_pmd_fm10k/fm10k_logs.h | 78 + lib/librte_pmd_fm10k/fm10k_rxtx.c | 462 +++++ lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map | 4 + mk/rte.app.mk | 4 + 26 files changed, 11473 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/Makefile create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_osdep.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_type.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.h create mode 100644 lib/librte_pmd_fm10k/fm10k.h create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c create mode 100644 lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
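For context on how the features listed in the cover letter (basic Rx/Tx, per-queue start/stop, RSS) are consumed, the sketch below shows a typical application-side sequence against a DPDK ethdev port, such as one backed by this fm10k PMD. It is illustrative only: the port id, descriptor counts and the pre-created mbuf pool mp are assumptions, and error handling is reduced to early returns.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Illustrative only: configure one Rx and one Tx queue on a port,
 * start it and poll once; 'mp' is an mbuf pool created elsewhere.
 */
static int example_port_init(uint8_t port, struct rte_mempool *mp)
{
	static const struct rte_eth_conf port_conf; /* all-default config */
	struct rte_mbuf *bufs[32];
	uint16_t nb_rx;
	int ret;

	ret = rte_eth_dev_configure(port, 1, 1, &port_conf);
	if (ret < 0)
		return ret;

	ret = rte_eth_rx_queue_setup(port, 0, 512,
				     rte_eth_dev_socket_id(port), NULL, mp);
	if (ret < 0)
		return ret;

	ret = rte_eth_tx_queue_setup(port, 0, 512,
				     rte_eth_dev_socket_id(port), NULL);
	if (ret < 0)
		return ret;

	ret = rte_eth_dev_start(port);
	if (ret < 0)
		return ret;

	/* poll once for up to 32 packets and free them again */
	nb_rx = rte_eth_rx_burst(port, 0, bufs, 32);
	while (nb_rx)
		rte_pktmbuf_free(bufs[--nb_rx]);

	return 0;
}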
* [dpdk-dev] [PATCH v6 01/16] fm10k: add base driver 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) @ 2015-02-17 14:18 ` Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 02/16] eal: add fm10k device id Chen Jing D(Mark) ` (15 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-17 14:18 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Base driver is developed and maintained by Intel ND team, includes basic functional service to Intel Ethernet Switch FM10000 Series of silicons. Any suggestion on bug fix and improvement within this directory is welcome, but need this team to change and update. Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/base/fm10k_api.c | 341 +++++ lib/librte_pmd_fm10k/base/fm10k_api.h | 61 + lib/librte_pmd_fm10k/base/fm10k_common.c | 572 ++++++++ lib/librte_pmd_fm10k/base/fm10k_common.h | 52 + lib/librte_pmd_fm10k/base/fm10k_mbx.c | 2185 ++++++++++++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_mbx.h | 329 +++++ lib/librte_pmd_fm10k/base/fm10k_osdep.h | 148 ++ lib/librte_pmd_fm10k/base/fm10k_pf.c | 1992 +++++++++++++++++++++++++++ lib/librte_pmd_fm10k/base/fm10k_pf.h | 155 +++ lib/librte_pmd_fm10k/base/fm10k_tlv.c | 914 +++++++++++++ lib/librte_pmd_fm10k/base/fm10k_tlv.h | 199 +++ lib/librte_pmd_fm10k/base/fm10k_type.h | 937 +++++++++++++ lib/librte_pmd_fm10k/base/fm10k_vf.c | 641 +++++++++ lib/librte_pmd_fm10k/base/fm10k_vf.h | 91 ++ 14 files changed, 8617 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_osdep.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_type.h create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.c create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.h diff --git a/lib/librte_pmd_fm10k/base/fm10k_api.c b/lib/librte_pmd_fm10k/base/fm10k_api.c new file mode 100644 index 0000000..c0f555c --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_api.c @@ -0,0 +1,341 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. 
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_api.h" +#include "fm10k_common.h" + +/** + * fm10k_set_mac_type - Sets MAC type + * @hw: pointer to the HW structure + * + * This function sets the mac type of the adapter based on the + * vendor ID and device ID stored in the hw structure. + **/ +s32 fm10k_set_mac_type(struct fm10k_hw *hw) +{ + s32 ret_val = FM10K_SUCCESS; + + DEBUGFUNC("fm10k_set_mac_type"); + + if (hw->vendor_id != FM10K_INTEL_VENDOR_ID) { + ERROR_REPORT2(FM10K_ERROR_UNSUPPORTED, + "Unsupported vendor id: %x\n", hw->vendor_id); + return FM10K_ERR_DEVICE_NOT_SUPPORTED; + } + + switch (hw->device_id) { + case FM10K_DEV_ID_PF: + hw->mac.type = fm10k_mac_pf; + break; + case FM10K_DEV_ID_VF: + hw->mac.type = fm10k_mac_vf; + break; + default: + ret_val = FM10K_ERR_DEVICE_NOT_SUPPORTED; + ERROR_REPORT2(FM10K_ERROR_UNSUPPORTED, + "Unsupported device id: %x\n", + hw->device_id); + break; + } + + DEBUGOUT2("fm10k_set_mac_type found mac: %d, returns: %d\n", + hw->mac.type, ret_val); + + return ret_val; +} + +/** + * fm10k_init_shared_code - Initialize the shared code + * @hw: pointer to hardware structure + * + * This will assign function pointers and assign the MAC type and PHY code. + * Does not touch the hardware. This function must be called prior to any + * other function in the shared code. The fm10k_hw structure should be + * memset to 0 prior to calling this function. The following fields in + * hw structure should be filled in prior to calling this function: + * hw_addr, back, device_id, vendor_id, subsystem_device_id, + * subsystem_vendor_id, and revision_id + **/ +s32 fm10k_init_shared_code(struct fm10k_hw *hw) +{ + s32 status; + + DEBUGFUNC("fm10k_init_shared_code"); + + /* Set the mac type */ + fm10k_set_mac_type(hw); + + switch (hw->mac.type) { + case fm10k_mac_pf: + status = fm10k_init_ops_pf(hw); + break; + case fm10k_mac_vf: + status = fm10k_init_ops_vf(hw); + break; + default: + status = FM10K_ERR_DEVICE_NOT_SUPPORTED; + break; + } + + return status; +} + +#define fm10k_call_func(hw, func, params, error) \ + ((func) ? (func params) : (error)) + +/** + * fm10k_reset_hw - Reset the hardware to known good state + * @hw: pointer to hardware structure + * + * This function should return the hardware to a state similar to the + * one it is in after being powered on. 
+ **/ +s32 fm10k_reset_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.reset_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_init_hw - Initialize the hardware + * @hw: pointer to hardware structure + * + * Initialize the hardware by resetting and then starting the hardware + **/ +s32 fm10k_init_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.init_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_stop_hw - Prepares hardware to shutdown Rx/Tx + * @hw: pointer to hardware structure + * + * Disables Rx/Tx queues and disables the DMA engine. + **/ +s32 fm10k_stop_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.stop_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_start_hw - Prepares hardware for Rx/Tx + * @hw: pointer to hardware structure + * + * This function sets the flags indicating that the hardware is ready to + * begin operation. + **/ +s32 fm10k_start_hw(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.start_hw, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_get_bus_info - Set PCI bus info + * @hw: pointer to hardware structure + * + * Sets the PCI bus info (speed, width, type) within the fm10k_hw structure + **/ +s32 fm10k_get_bus_info(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.get_bus_info, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_is_slot_appropriate - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. + **/ +bool fm10k_is_slot_appropriate(struct fm10k_hw *hw) +{ + if (hw->mac.ops.is_slot_appropriate) + return hw->mac.ops.is_slot_appropriate(hw); + return true; +} + +/** + * fm10k_update_vlan - Clear VLAN ID to VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @idx: Index indicating VF ID or PF ID in table + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for the corresponding function. + **/ +s32 fm10k_update_vlan(struct fm10k_hw *hw, u32 vid, u8 idx, bool set) +{ + return fm10k_call_func(hw, hw->mac.ops.update_vlan, (hw, vid, idx, set), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_read_mac_addr - Reads MAC address + * @hw: pointer to hardware structure + * + * Reads the MAC address out of the interface and stores it in the HW + * structures. + **/ +s32 fm10k_read_mac_addr(struct fm10k_hw *hw) +{ + return fm10k_call_func(hw, hw->mac.ops.read_mac_addr, (hw), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_hw_stats - Update hw statistics + * @hw: pointer to hardware structure + * + * This function updates statistics that are related to hardware. + * */ +void fm10k_update_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats) +{ + if (hw->mac.ops.update_hw_stats) + hw->mac.ops.update_hw_stats(hw, stats); +} + +/** + * fm10k_rebind_hw_stats - Reset base for hw statistics + * @hw: pointer to hardware structure + * + * This function resets the base for statistics that are related to hardware. 
+ * */ +void fm10k_rebind_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats) +{ + if (hw->mac.ops.rebind_hw_stats) + hw->mac.ops.rebind_hw_stats(hw, stats); +} + +/** + * fm10k_configure_dglort_map - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned. + **/ +s32 fm10k_configure_dglort_map(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + return fm10k_call_func(hw, hw->mac.ops.configure_dglort_map, + (hw, dglort), FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_set_dma_mask - Configures PhyAddrSpace to limit DMA to system + * @hw: pointer to hardware structure + * @dma_mask: 64 bit DMA mask required for platform + * + * This function configures the endpoint to limit the access to memory + * beyond what is physically in the system. + **/ +void fm10k_set_dma_mask(struct fm10k_hw *hw, u64 dma_mask) +{ + if (hw->mac.ops.set_dma_mask) + hw->mac.ops.set_dma_mask(hw, dma_mask); +} + +/** + * fm10k_get_fault - Record a fault in one of the interface units + * @hw: pointer to hardware structure + * @type: pointer to fault type register offset + * @fault: pointer to memory location to record the fault + * + * Record the fault register contents to the fault data structure and + * clear the entry from the register. + * + * Returns ERR_PARAM if invalid register is specified or no error is present. + **/ +s32 fm10k_get_fault(struct fm10k_hw *hw, int type, struct fm10k_fault *fault) +{ + return fm10k_call_func(hw, hw->mac.ops.get_fault, (hw, type, fault), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_uc_addr - Update device unicast address + * @hw: pointer to the HW structure + * @lport: logical port ID to update - unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure - unused + * + * This function is used to add or remove unicast MAC addresses + **/ +s32 fm10k_update_uc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + return fm10k_call_func(hw, hw->mac.ops.update_uc_addr, + (hw, lport, mac, vid, add, flags), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_update_mc_addr - Update device multicast address + * @hw: pointer to the HW structure + * @lport: logical port ID to update - unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses + **/ +s32 fm10k_update_mc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add) +{ + return fm10k_call_func(hw, hw->mac.ops.update_mc_addr, + (hw, lport, mac, vid, add), + FM10K_NOT_IMPLEMENTED); +} + +/** + * fm10k_adjust_systime - Adjust systime frequency + * @hw: pointer to hardware structure + * @ppb: adjustment rate in parts per billion + * + * This function is meant to update the frequency of the clock represented + * by the SYSTIME register. 
+ **/ +s32 fm10k_adjust_systime(struct fm10k_hw *hw, s32 ppb) +{ + return fm10k_call_func(hw, hw->mac.ops.adjust_systime, + (hw, ppb), FM10K_NOT_IMPLEMENTED); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_api.h b/lib/librte_pmd_fm10k/base/fm10k_api.h new file mode 100644 index 0000000..343d750 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_api.h @@ -0,0 +1,61 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _FM10K_API_H_ +#define _FM10K_API_H_ + +#include "fm10k_pf.h" +#include "fm10k_vf.h" + +s32 fm10k_set_mac_type(struct fm10k_hw *hw); +s32 fm10k_reset_hw(struct fm10k_hw *hw); +s32 fm10k_init_hw(struct fm10k_hw *hw); +s32 fm10k_stop_hw(struct fm10k_hw *hw); +s32 fm10k_start_hw(struct fm10k_hw *hw); +s32 fm10k_init_shared_code(struct fm10k_hw *hw); +s32 fm10k_get_bus_info(struct fm10k_hw *hw); +bool fm10k_is_slot_appropriate(struct fm10k_hw *hw); +s32 fm10k_update_vlan(struct fm10k_hw *hw, u32 vid, u8 idx, bool set); +s32 fm10k_read_mac_addr(struct fm10k_hw *hw); +void fm10k_update_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats); +void fm10k_rebind_hw_stats(struct fm10k_hw *hw, struct fm10k_hw_stats *stats); +s32 fm10k_configure_dglort_map(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort); +void fm10k_set_dma_mask(struct fm10k_hw *hw, u64 dma_mask); +s32 fm10k_get_fault(struct fm10k_hw *hw, int type, struct fm10k_fault *fault); +s32 fm10k_update_uc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add, u8 flags); +s32 fm10k_update_mc_addr(struct fm10k_hw *hw, u16 lport, + const u8 *mac, u16 vid, bool add); +s32 fm10k_adjust_systime(struct fm10k_hw *hw, s32 ppb); +#endif /* _FM10K_API_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_common.c b/lib/librte_pmd_fm10k/base/fm10k_common.c new file mode 100644 index 0000000..a90d2f0 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_common.c @@ -0,0 +1,572 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_common.h" + +/** + * fm10k_get_bus_info_generic - Generic set PCI bus info + * @hw: pointer to hardware structure + * + * Gets the PCI bus info (speed, width, type) then calls helper function to + * store this data within the fm10k_hw structure. 
+ **/ +STATIC s32 fm10k_get_bus_info_generic(struct fm10k_hw *hw) +{ + u16 link_cap, link_status, device_cap, device_control; + + DEBUGFUNC("fm10k_get_bus_info_generic"); + + /* Get the maximum link width and speed from PCIe config space */ + link_cap = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_LINK_CAP); + + switch (link_cap & FM10K_PCIE_LINK_WIDTH) { + case FM10K_PCIE_LINK_WIDTH_1: + hw->bus_caps.width = fm10k_bus_width_pcie_x1; + break; + case FM10K_PCIE_LINK_WIDTH_2: + hw->bus_caps.width = fm10k_bus_width_pcie_x2; + break; + case FM10K_PCIE_LINK_WIDTH_4: + hw->bus_caps.width = fm10k_bus_width_pcie_x4; + break; + case FM10K_PCIE_LINK_WIDTH_8: + hw->bus_caps.width = fm10k_bus_width_pcie_x8; + break; + default: + hw->bus_caps.width = fm10k_bus_width_unknown; + break; + } + + switch (link_cap & FM10K_PCIE_LINK_SPEED) { + case FM10K_PCIE_LINK_SPEED_2500: + hw->bus_caps.speed = fm10k_bus_speed_2500; + break; + case FM10K_PCIE_LINK_SPEED_5000: + hw->bus_caps.speed = fm10k_bus_speed_5000; + break; + case FM10K_PCIE_LINK_SPEED_8000: + hw->bus_caps.speed = fm10k_bus_speed_8000; + break; + default: + hw->bus_caps.speed = fm10k_bus_speed_unknown; + break; + } + + /* Get the PCIe maximum payload size for the PCIe function */ + device_cap = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_DEV_CAP); + + switch (device_cap & FM10K_PCIE_DEV_CAP_PAYLOAD) { + case FM10K_PCIE_DEV_CAP_PAYLOAD_128: + hw->bus_caps.payload = fm10k_bus_payload_128; + break; + case FM10K_PCIE_DEV_CAP_PAYLOAD_256: + hw->bus_caps.payload = fm10k_bus_payload_256; + break; + case FM10K_PCIE_DEV_CAP_PAYLOAD_512: + hw->bus_caps.payload = fm10k_bus_payload_512; + break; + default: + hw->bus_caps.payload = fm10k_bus_payload_unknown; + break; + } + + /* Get the negotiated link width and speed from PCIe config space */ + link_status = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_LINK_STATUS); + + switch (link_status & FM10K_PCIE_LINK_WIDTH) { + case FM10K_PCIE_LINK_WIDTH_1: + hw->bus.width = fm10k_bus_width_pcie_x1; + break; + case FM10K_PCIE_LINK_WIDTH_2: + hw->bus.width = fm10k_bus_width_pcie_x2; + break; + case FM10K_PCIE_LINK_WIDTH_4: + hw->bus.width = fm10k_bus_width_pcie_x4; + break; + case FM10K_PCIE_LINK_WIDTH_8: + hw->bus.width = fm10k_bus_width_pcie_x8; + break; + default: + hw->bus.width = fm10k_bus_width_unknown; + break; + } + + switch (link_status & FM10K_PCIE_LINK_SPEED) { + case FM10K_PCIE_LINK_SPEED_2500: + hw->bus.speed = fm10k_bus_speed_2500; + break; + case FM10K_PCIE_LINK_SPEED_5000: + hw->bus.speed = fm10k_bus_speed_5000; + break; + case FM10K_PCIE_LINK_SPEED_8000: + hw->bus.speed = fm10k_bus_speed_8000; + break; + default: + hw->bus.speed = fm10k_bus_speed_unknown; + break; + } + + /* Get the negotiated PCIe maximum payload size for the PCIe function */ + device_control = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_DEV_CTRL); + + switch (device_control & FM10K_PCIE_DEV_CTRL_PAYLOAD) { + case FM10K_PCIE_DEV_CTRL_PAYLOAD_128: + hw->bus.payload = fm10k_bus_payload_128; + break; + case FM10K_PCIE_DEV_CTRL_PAYLOAD_256: + hw->bus.payload = fm10k_bus_payload_256; + break; + case FM10K_PCIE_DEV_CTRL_PAYLOAD_512: + hw->bus.payload = fm10k_bus_payload_512; + break; + default: + hw->bus.payload = fm10k_bus_payload_unknown; + break; + } + + return FM10K_SUCCESS; +} + +u16 fm10k_get_pcie_msix_count_generic(struct fm10k_hw *hw) +{ + u16 msix_count; + + DEBUGFUNC("fm10k_get_pcie_msix_count_generic"); + + /* read in value from MSI-X capability register */ + msix_count = FM10K_READ_PCI_WORD(hw, FM10K_PCI_MSIX_MSG_CTRL); + msix_count &= 
FM10K_PCI_MSIX_MSG_CTRL_TBL_SZ_MASK; + + /* MSI-X count is zero-based in HW */ + msix_count++; + + if (msix_count > FM10K_MAX_MSIX_VECTORS) + msix_count = FM10K_MAX_MSIX_VECTORS; + + return msix_count; +} + +/** + * fm10k_init_ops_generic - Inits function ptrs + * @hw: pointer to the hardware structure + * + * Initialize the function pointers. + **/ +s32 fm10k_init_ops_generic(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + + DEBUGFUNC("fm10k_init_ops_generic"); + + /* MAC */ + mac->ops.get_bus_info = &fm10k_get_bus_info_generic; + + /* initialize GLORT state to avoid any false hits */ + mac->dglort_map = FM10K_DGLORTMAP_NONE; + + return FM10K_SUCCESS; +} + +/** + * fm10k_start_hw_generic - Prepare hardware for Tx/Rx + * @hw: pointer to hardware structure + * + * This function sets the Tx ready flag to indicate that the Tx path has + * been initialized. + **/ +s32 fm10k_start_hw_generic(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_start_hw_generic"); + + /* set flag indicating we are beginning Tx */ + hw->mac.tx_ready = true; + + return FM10K_SUCCESS; +} + +/** + * fm10k_disable_queues_generic - Stop Tx/Rx queues + * @hw: pointer to hardware structure + * @q_cnt: number of queues to be disabled + * + **/ +s32 fm10k_disable_queues_generic(struct fm10k_hw *hw, u16 q_cnt) +{ + u32 reg; + u16 i, time; + + DEBUGFUNC("fm10k_disable_queues_generic"); + + /* clear tx_ready to prevent any false hits for reset */ + hw->mac.tx_ready = false; + + /* clear the enable bit for all rings */ + for (i = 0; i < q_cnt; i++) { + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), + reg & ~FM10K_TXDCTL_ENABLE); + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), + reg & ~FM10K_RXQCTL_ENABLE); + } + + FM10K_WRITE_FLUSH(hw); + usec_delay(1); + + /* loop through all queues to verify that they are all disabled */ + for (i = 0, time = FM10K_QUEUE_DISABLE_TIMEOUT; time;) { + /* if we are at end of rings all rings are disabled */ + if (i == q_cnt) + return FM10K_SUCCESS; + + /* if queue enables cleared, then move to next ring pair */ + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + if (!~reg || !(reg & FM10K_TXDCTL_ENABLE)) { + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + if (!~reg || !(reg & FM10K_RXQCTL_ENABLE)) { + i++; + continue; + } + } + + /* decrement time and wait 1 usec */ + time--; + if (time) + usec_delay(1); + } + + return FM10K_ERR_REQUESTS_PENDING; +} + +/** + * fm10k_stop_hw_generic - Stop Tx/Rx units + * @hw: pointer to hardware structure + * + **/ +s32 fm10k_stop_hw_generic(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_stop_hw_generic"); + + return fm10k_disable_queues_generic(hw, hw->mac.max_queues); +} + +/** + * fm10k_read_hw_stats_32b - Reads value of 32-bit registers + * @hw: pointer to the hardware structure + * @addr: address of register containing a 32-bit value + * + * Function reads the content of the register and returns the delta + * between the base and the current value. 
+ * **/ +u32 fm10k_read_hw_stats_32b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat) +{ + u32 delta = FM10K_READ_REG(hw, addr) - stat->base_l; + + DEBUGFUNC("fm10k_read_hw_stats_32b"); + + if (FM10K_REMOVED(hw->hw_addr)) + stat->base_h = 0; + + return delta; +} + +/** + * fm10k_read_hw_stats_48b - Reads value of 48-bit registers + * @hw: pointer to the hardware structure + * @addr: address of register containing the lower 32-bit value + * + * Function reads the content of 2 registers, combined to represent a 48-bit + * statistical value. Extra processing is required to handle overflowing. + * Finally, a delta value is returned representing the difference between the + * values stored in registers and values stored in the statistic counters. + * **/ +STATIC u64 fm10k_read_hw_stats_48b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat) +{ + u32 count_l; + u32 count_h; + u32 count_tmp; + u64 delta; + + DEBUGFUNC("fm10k_read_hw_stats_48b"); + + count_h = FM10K_READ_REG(hw, addr + 1); + + /* Check for overflow */ + do { + count_tmp = count_h; + count_l = FM10K_READ_REG(hw, addr); + count_h = FM10K_READ_REG(hw, addr + 1); + } while (count_h != count_tmp); + + delta = ((u64)(count_h - stat->base_h) << 32) + count_l; + delta -= stat->base_l; + + return delta & FM10K_48_BIT_MASK; +} + +/** + * fm10k_update_hw_base_48b - Updates 48-bit statistic base value + * @stat: pointer to the hardware statistic structure + * @delta: value to be updated into the hardware statistic structure + * + * Function receives a value and determines if an update is required based on + * a delta calculation. Only the base value will be updated. + **/ +STATIC void fm10k_update_hw_base_48b(struct fm10k_hw_stat *stat, u64 delta) +{ + DEBUGFUNC("fm10k_update_hw_base_48b"); + + if (!delta) + return; + + /* update lower 32 bits */ + delta += stat->base_l; + stat->base_l = (u32)delta; + + /* update upper 32 bits */ + stat->base_h += (u32)(delta >> 32); +} + +/** + * fm10k_update_hw_stats_tx_q - Updates TX queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * + * Function updates the TX queue statistics counters that are related to the + * hardware. 
+ **/ +STATIC void fm10k_update_hw_stats_tx_q(struct fm10k_hw *hw, + struct fm10k_hw_stats_q *q, + u32 idx) +{ + u32 id_tx, id_tx_prev, tx_packets; + u64 tx_bytes = 0; + + DEBUGFUNC("fm10k_update_hw_stats_tx_q"); + + /* Retrieve TX Owner Data */ + id_tx = FM10K_READ_REG(hw, FM10K_TXQCTL(idx)); + + /* Process TX Ring */ + do { + tx_packets = fm10k_read_hw_stats_32b(hw, FM10K_QPTC(idx), + &q->tx_packets); + + if (tx_packets) + tx_bytes = fm10k_read_hw_stats_48b(hw, + FM10K_QBTC_L(idx), + &q->tx_bytes); + + /* Re-Check Owner Data */ + id_tx_prev = id_tx; + id_tx = FM10K_READ_REG(hw, FM10K_TXQCTL(idx)); + } while ((id_tx ^ id_tx_prev) & FM10K_TXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id_tx &= FM10K_TXQCTL_ID_MASK; + id_tx |= FM10K_STAT_VALID; + + /* update packet counts */ + if (q->tx_stats_idx == id_tx) { + q->tx_packets.count += tx_packets; + q->tx_bytes.count += tx_bytes; + } + + /* update bases and record ID */ + fm10k_update_hw_base_32b(&q->tx_packets, tx_packets); + fm10k_update_hw_base_48b(&q->tx_bytes, tx_bytes); + + q->tx_stats_idx = id_tx; +} + +/** + * fm10k_update_hw_stats_rx_q - Updates RX queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * + * Function updates the RX queue statistics counters that are related to the + * hardware. + **/ +STATIC void fm10k_update_hw_stats_rx_q(struct fm10k_hw *hw, + struct fm10k_hw_stats_q *q, + u32 idx) +{ + u32 id_rx, id_rx_prev, rx_packets, rx_drops; + u64 rx_bytes = 0; + + DEBUGFUNC("fm10k_update_hw_stats_rx_q"); + + /* Retrieve RX Owner Data */ + id_rx = FM10K_READ_REG(hw, FM10K_RXQCTL(idx)); + + /* Process RX Ring */ + do { + rx_drops = fm10k_read_hw_stats_32b(hw, FM10K_QPRDC(idx), + &q->rx_drops); + + rx_packets = fm10k_read_hw_stats_32b(hw, FM10K_QPRC(idx), + &q->rx_packets); + + if (rx_packets) + rx_bytes = fm10k_read_hw_stats_48b(hw, + FM10K_QBRC_L(idx), + &q->rx_bytes); + + /* Re-Check Owner Data */ + id_rx_prev = id_rx; + id_rx = FM10K_READ_REG(hw, FM10K_RXQCTL(idx)); + } while ((id_rx ^ id_rx_prev) & FM10K_RXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id_rx &= FM10K_RXQCTL_ID_MASK; + id_rx |= FM10K_STAT_VALID; + + /* update packet counts */ + if (q->rx_stats_idx == id_rx) { + q->rx_drops.count += rx_drops; + q->rx_packets.count += rx_packets; + q->rx_bytes.count += rx_bytes; + } + + /* update bases and record ID */ + fm10k_update_hw_base_32b(&q->rx_drops, rx_drops); + fm10k_update_hw_base_32b(&q->rx_packets, rx_packets); + fm10k_update_hw_base_48b(&q->rx_bytes, rx_bytes); + + q->rx_stats_idx = id_rx; +} + +/** + * fm10k_update_hw_stats_q - Updates queue statistics counters + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * @count: number of queues to iterate over + * + * Function updates the queue statistics counters that are related to the + * hardware. 
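The 48-bit counters handled above are split across two 32-bit registers, and only a delta against a stored base is reported; the base is then advanced by that delta. Below is a rough standalone sketch of that bookkeeping, with the hardware reads replaced by plain function arguments (reg_lo/reg_hi and struct stat48 are illustrative stand-ins, not driver symbols) and without the high-word re-read loop that guards against rollover:

/* illustrative only: mimics the 48-bit base/delta accounting described above */
#include <stdint.h>
#include <stdio.h>

struct stat48 {
	uint32_t base_l;   /* lower 32 bits of last-seen counter value */
	uint32_t base_h;   /* upper bits of last-seen counter value */
	uint64_t count;    /* software-accumulated total */
};

static uint64_t read_delta_48b(uint32_t reg_lo, uint32_t reg_hi, struct stat48 *s)
{
	/* combine the two register halves and subtract the stored base,
	 * keeping only 48 bits of result */
	uint64_t delta = ((uint64_t)(reg_hi - s->base_h) << 32) + reg_lo;

	delta -= s->base_l;
	return delta & 0xFFFFFFFFFFFFULL;
}

static void update_base_48b(struct stat48 *s, uint64_t delta)
{
	if (!delta)
		return;
	/* fold the delta into the lower base, carrying into the upper base */
	delta += s->base_l;
	s->base_l = (uint32_t)delta;
	s->base_h += (uint32_t)(delta >> 32);
}

int main(void)
{
	struct stat48 s = { 0, 0, 0 };
	/* pretend the hardware counter advanced to 0x1_0000_0005 */
	uint64_t delta = read_delta_48b(0x00000005, 0x00000001, &s);

	s.count += delta;
	update_base_48b(&s, delta);
	printf("delta=%llu total=%llu\n",
	       (unsigned long long)delta, (unsigned long long)s.count);
	return 0;
}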
+ **/ +void fm10k_update_hw_stats_q(struct fm10k_hw *hw, struct fm10k_hw_stats_q *q, + u32 idx, u32 count) +{ + u32 i; + + DEBUGFUNC("fm10k_update_hw_stats_q"); + + for (i = 0; i < count; i++, idx++, q++) { + fm10k_update_hw_stats_tx_q(hw, q, idx); + fm10k_update_hw_stats_rx_q(hw, q, idx); + } +} + +/** + * fm10k_unbind_hw_stats_q - Unbind the queue counters from their queues + * @hw: pointer to the hardware structure + * @q: pointer to the ring of hardware statistics queue + * @idx: index pointing to the start of the ring iteration + * @count: number of queues to iterate over + * + * Function invalidates the index values for the queues so any updates that + * may have happened are ignored and the base for the queue stats is reset. + **/ +void fm10k_unbind_hw_stats_q(struct fm10k_hw_stats_q *q, u32 idx, u32 count) +{ + u32 i; + + for (i = 0; i < count; i++, idx++, q++) { + q->rx_stats_idx = 0; + q->tx_stats_idx = 0; + } +} + +/** + * fm10k_get_host_state_generic - Returns the state of the host + * @hw: pointer to hardware structure + * @host_ready: pointer to boolean value that will record host state + * + * This function will check the health of the mailbox and Tx queue 0 + * in order to determine if we should report that the link is up or not. + **/ +s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + struct fm10k_mac_info *mac = &hw->mac; + s32 ret_val = FM10K_SUCCESS; + u32 txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(0)); + + DEBUGFUNC("fm10k_get_host_state_generic"); + + /* process upstream mailbox in case interrupts were disabled */ + mbx->ops.process(hw, mbx); + + /* If Tx is no longer enabled link should come down */ + if (!(~txdctl) || !(txdctl & FM10K_TXDCTL_ENABLE)) + mac->get_host_state = true; + + /* exit if not checking for link, or link cannot be changed */ + if (!mac->get_host_state || !(~txdctl)) + goto out; + + /* if we somehow dropped the Tx enable we should reset */ + if (hw->mac.tx_ready && !(txdctl & FM10K_TXDCTL_ENABLE)) { + ret_val = FM10K_ERR_RESET_REQUESTED; + goto out; + } + + /* if Mailbox timed out we should request reset */ + if (!mbx->timeout) { + ret_val = FM10K_ERR_RESET_REQUESTED; + goto out; + } + + /* verify Mailbox is still valid */ + if (!mbx->ops.tx_ready(mbx, FM10K_VFMBX_MSG_MTU)) + goto out; + + /* interface cannot receive traffic without logical ports */ + if (mac->dglort_map == FM10K_DGLORTMAP_NONE) + goto out; + + /* if we passed all the tests above then the switch is ready and we no + * longer need to check for link + */ + mac->get_host_state = false; + +out: + *host_ready = !mac->get_host_state; + return ret_val; +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_common.h b/lib/librte_pmd_fm10k/base/fm10k_common.h new file mode 100644 index 0000000..45fbbc0 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_common.h @@ -0,0 +1,52 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. 
Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_COMMON_H_ +#define _FM10K_COMMON_H_ + +#include "fm10k_type.h" + +u16 fm10k_get_pcie_msix_count_generic(struct fm10k_hw *hw); +s32 fm10k_init_ops_generic(struct fm10k_hw *hw); +s32 fm10k_disable_queues_generic(struct fm10k_hw *hw, u16 q_cnt); +s32 fm10k_start_hw_generic(struct fm10k_hw *hw); +s32 fm10k_stop_hw_generic(struct fm10k_hw *hw); +u32 fm10k_read_hw_stats_32b(struct fm10k_hw *hw, u32 addr, + struct fm10k_hw_stat *stat); +#define fm10k_update_hw_base_32b(stat, delta) ((stat)->base_l += (delta)) +void fm10k_update_hw_stats_q(struct fm10k_hw *hw, struct fm10k_hw_stats_q *q, + u32 idx, u32 count); +#define fm10k_unbind_hw_stats_32b(s) ((s)->base_h = 0) +void fm10k_unbind_hw_stats_q(struct fm10k_hw_stats_q *q, u32 idx, u32 count); +s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready); +#endif /* _FM10K_COMMON_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_mbx.c b/lib/librte_pmd_fm10k/base/fm10k_mbx.c new file mode 100644 index 0000000..2081414 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_mbx.c @@ -0,0 +1,2185 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_common.h" + +/** + * fm10k_fifo_init - Initialize a message FIFO + * @fifo: pointer to FIFO + * @buffer: pointer to memory to be used to store FIFO + * @size: maximum message size to store in FIFO, must be 2^n - 1 + **/ +STATIC void fm10k_fifo_init(struct fm10k_mbx_fifo *fifo, u32 *buffer, u16 size) +{ + fifo->buffer = buffer; + fifo->size = size; + fifo->head = 0; + fifo->tail = 0; +} + +/** + * fm10k_fifo_used - Retrieve used space in FIFO + * @fifo: pointer to FIFO + * + * This function returns the number of DWORDs used in the FIFO + **/ +STATIC u16 fm10k_fifo_used(struct fm10k_mbx_fifo *fifo) +{ + return fifo->tail - fifo->head; +} + +/** + * fm10k_fifo_unused - Retrieve unused space in FIFO + * @fifo: pointer to FIFO + * + * This function returns the number of unused DWORDs in the FIFO + **/ +STATIC u16 fm10k_fifo_unused(struct fm10k_mbx_fifo *fifo) +{ + return fifo->size + fifo->head - fifo->tail; +} + +/** + * fm10k_fifo_empty - Test to verify if fifo is empty + * @fifo: pointer to FIFO + * + * This function returns true if the FIFO is empty, else false + **/ +STATIC bool fm10k_fifo_empty(struct fm10k_mbx_fifo *fifo) +{ + return fifo->head == fifo->tail; +} + +/** + * fm10k_fifo_head_offset - returns indices of head with given offset + * @fifo: pointer to FIFO + * @offset: offset to add to head + * + * This function returns the indices into the fifo based on head + offset + **/ +STATIC u16 fm10k_fifo_head_offset(struct fm10k_mbx_fifo *fifo, u16 offset) +{ + return (fifo->head + offset) & (fifo->size - 1); +} + +/** + * fm10k_fifo_tail_offset - returns indices of tail with given offset + * @fifo: pointer to FIFO + * @offset: offset to add to tail + * + * This function returns the indices into the fifo based on tail + offset + **/ +STATIC u16 fm10k_fifo_tail_offset(struct fm10k_mbx_fifo *fifo, u16 offset) +{ + return (fifo->tail + offset) & (fifo->size - 1); +} + +/** + * fm10k_fifo_head_len - Retrieve length of first message in FIFO + * @fifo: pointer to FIFO + * + * This function returns the size of the first message in the FIFO + **/ +STATIC u16 fm10k_fifo_head_len(struct fm10k_mbx_fifo *fifo) +{ + u32 *head = fifo->buffer + fm10k_fifo_head_offset(fifo, 0); + + /* verify there is at least 1 DWORD in the fifo so *head is valid */ + if (fm10k_fifo_empty(fifo)) + return 0; + + /* retieve the message length */ + return FM10K_TLV_DWORD_LEN(*head); +} + +/** + * fm10k_fifo_head_drop - Drop the first message in FIFO + * @fifo: pointer to FIFO + * + * This function returns the size of the message dropped from the FIFO + **/ +STATIC u16 fm10k_fifo_head_drop(struct fm10k_mbx_fifo *fifo) +{ + u16 len = fm10k_fifo_head_len(fifo); + + /* update head so it is at the start of next frame */ + fifo->head += len; + + return len; +} + +/** + * fm10k_mbx_index_len - Convert a head/tail index into a length value + * @mbx: pointer to mailbox + * @head: head index + * @tail: head index + * + * 
This function takes the head and tail index and determines the length + * of the data indicated by this pair. + **/ +STATIC u16 fm10k_mbx_index_len(struct fm10k_mbx_info *mbx, u16 head, u16 tail) +{ + u16 len = tail - head; + + /* we wrapped so subtract 2, one for index 0, one for all 1s index */ + if (len > tail) + len -= 2; + + return len & ((mbx->mbmem_len << 1) - 1); +} + +/** + * fm10k_mbx_tail_add - Determine new tail value with added offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local tail index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_tail_add(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 tail = (mbx->tail + offset + 1) & ((mbx->mbmem_len << 1) - 1); + + /* add/sub 1 because we cannot have offset 0 or all 1s */ + return (tail > mbx->tail) ? --tail : ++tail; +} + +/** + * fm10k_mbx_tail_sub - Determine new tail value with subtracted offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local tail index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_tail_sub(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 tail = (mbx->tail - offset - 1) & ((mbx->mbmem_len << 1) - 1); + + /* sub/add 1 because we cannot have offset 0 or all 1s */ + return (tail < mbx->tail) ? ++tail : --tail; +} + +/** + * fm10k_mbx_head_add - Determine new head value with added offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local head index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_head_add(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 head = (mbx->head + offset + 1) & ((mbx->mbmem_len << 1) - 1); + + /* add/sub 1 because we cannot have offset 0 or all 1s */ + return (head > mbx->head) ? --head : ++head; +} + +/** + * fm10k_mbx_head_sub - Determine new head value with subtracted offset + * @mbx: pointer to mailbox + * @offset: length to add to head offset + * + * This function takes the local head index and recomputes it for + * a given length added as an offset. + **/ +STATIC u16 fm10k_mbx_head_sub(struct fm10k_mbx_info *mbx, u16 offset) +{ + u16 head = (mbx->head - offset - 1) & ((mbx->mbmem_len << 1) - 1); + + /* sub/add 1 because we cannot have offset 0 or all 1s */ + return (head < mbx->head) ? ++head : --head; +} + +/** + * fm10k_mbx_pushed_tail_len - Retrieve the length of message being pushed + * @mbx: pointer to mailbox + * + * This function will return the length of the message currently being + * pushed onto the tail of the Rx queue. + **/ +STATIC u16 fm10k_mbx_pushed_tail_len(struct fm10k_mbx_info *mbx) +{ + u32 *tail = mbx->rx.buffer + fm10k_fifo_tail_offset(&mbx->rx, 0); + + /* pushed tail is only valid if pushed is set */ + if (!mbx->pushed) + return 0; + + return FM10K_TLV_DWORD_LEN(*tail); +} + +/** + * fm10k_fifo_write_copy - pulls data off of msg and places it in fifo + * @fifo: pointer to FIFO + * @msg: message array to populate + * @tail_offset: additional offset to add to tail pointer + * @len: length of FIFO to copy into message header + * + * This function will take a message and copy it into a section of the + * FIFO. In order to get something into a location other than just + * the tail you can use tail_offset to adjust the pointer. 
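The mailbox head/tail indices above deliberately skip 0 and the all-ones value, so a wrapped distance comes out two short of the plain modular difference. A small standalone sketch of that length calculation, using an arbitrary example size of 16 DWORDs (MBMEM_LEN and index_len are illustrative names, not driver symbols):

/* illustrative only: the wrapped-distance math used by the index helpers above */
#include <stdint.h>
#include <stdio.h>

#define MBMEM_LEN 16u                      /* example mailbox size */
#define INDEX_MASK ((MBMEM_LEN << 1) - 1)  /* indices wrap modulo 32 */

static uint16_t index_len(uint16_t head, uint16_t tail)
{
	uint16_t len = tail - head;

	/* a wrap skips index 0 and the all-ones index, so drop 2 */
	if (len > tail)
		len -= 2;

	return len & INDEX_MASK;
}

int main(void)
{
	/* no wrap: 5 DWORDs between head 3 and tail 8 */
	printf("%u\n", (unsigned)index_len(3, 8));
	/* wrapped case: head near the top, tail already wrapped past 0 */
	printf("%u\n", (unsigned)index_len(30, 4));
	return 0;
}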
+ **/ +STATIC void fm10k_fifo_write_copy(struct fm10k_mbx_fifo *fifo, + const u32 *msg, u16 tail_offset, u16 len) +{ + u16 end = fm10k_fifo_tail_offset(fifo, tail_offset); + u32 *tail = fifo->buffer + end; + + /* track when we should cross the end of the FIFO */ + end = fifo->size - end; + + /* copy end of message before start of message */ + if (end < len) + memcpy(fifo->buffer, msg + end, (len - end) << 2); + else + end = len; + + /* Copy remaining message into Tx FIFO */ + memcpy(tail, msg, end << 2); +} + +/** + * fm10k_fifo_enqueue - Enqueues the message to the tail of the FIFO + * @fifo: pointer to FIFO + * @msg: message array to read + * + * This function enqueues a message up to the size specified by the length + * contained in the first DWORD of the message and will place at the tail + * of the FIFO. It will return 0 on success, or a negative value on error. + **/ +STATIC s32 fm10k_fifo_enqueue(struct fm10k_mbx_fifo *fifo, const u32 *msg) +{ + u16 len = FM10K_TLV_DWORD_LEN(*msg); + + DEBUGFUNC("fm10k_fifo_enqueue"); + + /* verify parameters */ + if (len > fifo->size) + return FM10K_MBX_ERR_SIZE; + + /* verify there is room for the message */ + if (len > fm10k_fifo_unused(fifo)) + return FM10K_MBX_ERR_NO_SPACE; + + /* Copy message into FIFO */ + fm10k_fifo_write_copy(fifo, msg, 0, len); + + /* memory barrier to guarantee FIFO is written before tail update */ + FM10K_WMB(); + + /* Update Tx FIFO tail */ + fifo->tail += len; + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_validate_msg_size - Validate incoming message based on size + * @mbx: pointer to mailbox + * @len: length of data pushed onto buffer + * + * This function analyzes the frame and will return a non-zero value when + * the start of a message larger than the mailbox is detected. + **/ +STATIC u16 fm10k_mbx_validate_msg_size(struct fm10k_mbx_info *mbx, u16 len) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 total_len = 0, msg_len; + u32 *msg; + + DEBUGFUNC("fm10k_mbx_validate_msg"); + + /* length should include previous amounts pushed */ + len += mbx->pushed; + + /* offset in message is based off of current message size */ + do { + msg = fifo->buffer + fm10k_fifo_tail_offset(fifo, total_len); + msg_len = FM10K_TLV_DWORD_LEN(*msg); + total_len += msg_len; + } while (total_len < len); + + /* message extends out of pushed section, but fits in FIFO */ + if ((len < total_len) && (msg_len <= mbx->rx.size)) + return 0; + + /* return length of invalid section */ + return (len < total_len) ? len : (len - total_len); +} + +/** + * fm10k_mbx_write_copy - pulls data off of Tx FIFO and places it in mbmem + * @mbx: pointer to mailbox + * + * This function will take a section of the Tx FIFO and copy it into the + * mailbox memory. The offset in mbmem is based on the lower bits of the + * tail and len determines the length to copy.
+ **/ +STATIC void fm10k_mbx_write_copy(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->tx; + u32 mbmem = mbx->mbmem_reg; + u32 *head = fifo->buffer; + u16 end, len, tail, mask; + + DEBUGFUNC("fm10k_mbx_write_copy"); + + if (!mbx->tail_len) + return; + + /* determine data length and mbmem tail index */ + mask = mbx->mbmem_len - 1; + len = mbx->tail_len; + tail = fm10k_mbx_tail_sub(mbx, len); + if (tail > mask) + tail++; + + /* determine offset in the ring */ + end = fm10k_fifo_head_offset(fifo, mbx->pulled); + head += end; + + /* memory barrier to guarantee data is ready to be read */ + FM10K_RMB(); + + /* Copy message from Tx FIFO */ + for (end = fifo->size - end; len; head = fifo->buffer) { + do { + /* adjust tail to match offset for FIFO */ + tail &= mask; + if (!tail) + tail++; + + /* write message to hardware FIFO */ + FM10K_WRITE_MBX(hw, mbmem + tail++, *(head++)); + } while (--len && --end); + } +} + +/** + * fm10k_mbx_pull_head - Pulls data off of head of Tx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @head: acknowledgement number last received + * + * This function will push the tail index forward based on the remote + * head index. It will then pull up to mbmem_len DWORDs off of the + * head of the FIFO and will place it in the MBMEM registers + * associated with the mailbox. + **/ +STATIC void fm10k_mbx_pull_head(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + u16 mbmem_len, len, ack = fm10k_mbx_index_len(mbx, head, mbx->tail); + struct fm10k_mbx_fifo *fifo = &mbx->tx; + + /* update number of bytes pulled and update bytes in transit */ + mbx->pulled += mbx->tail_len - ack; + + /* determine length of data to pull, reserve space for mbmem header */ + mbmem_len = mbx->mbmem_len - 1; + len = fm10k_fifo_used(fifo) - mbx->pulled; + if (len > mbmem_len) + len = mbmem_len; + + /* update tail and record number of bytes in transit */ + mbx->tail = fm10k_mbx_tail_add(mbx, len - ack); + mbx->tail_len = len; + + /* drop pulled messages from the FIFO */ + for (len = fm10k_fifo_head_len(fifo); + len && (mbx->pulled >= len); + len = fm10k_fifo_head_len(fifo)) { + mbx->pulled -= fm10k_fifo_head_drop(fifo); + mbx->tx_messages++; + mbx->tx_dwords += len; + } + + /* Copy message out from the Tx FIFO */ + fm10k_mbx_write_copy(hw, mbx); +} + +/** + * fm10k_mbx_read_copy - pulls data off of mbmem and places it in Rx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will take a section of the mailbox memory and copy it + * into the Rx FIFO. The offset is based on the lower bits of the + * head and len determines the length to copy. 
+ **/ +STATIC void fm10k_mbx_read_copy(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u32 mbmem = mbx->mbmem_reg ^ mbx->mbmem_len; + u32 *tail = fifo->buffer; + u16 end, len, head; + + DEBUGFUNC("fm10k_mbx_read_copy"); + + /* determine data length and mbmem head index */ + len = mbx->head_len; + head = fm10k_mbx_head_sub(mbx, len); + if (head >= mbx->mbmem_len) + head++; + + /* determine offset in the ring */ + end = fm10k_fifo_tail_offset(fifo, mbx->pushed); + tail += end; + + /* Copy message into Rx FIFO */ + for (end = fifo->size - end; len; tail = fifo->buffer) { + do { + /* adjust head to match offset for FIFO */ + head &= mbx->mbmem_len - 1; + if (!head) + head++; + + /* read message from hardware FIFO */ + *(tail++) = FM10K_READ_MBX(hw, mbmem + head++); + } while (--len && --end); + } + + /* memory barrier to guarantee FIFO is written before tail update */ + FM10K_WMB(); +} + +/** + * fm10k_mbx_push_tail - Pushes up to 15 DWORDs on to tail of FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @tail: tail index of message + * + * This function will first validate the tail index and size for the + * incoming message. It then updates the acknowledgment number and + * copies the data into the FIFO. It will return the number of messages + * dequeued on success and a negative value on error. + **/ +STATIC s32 fm10k_mbx_push_tail(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, + u16 tail) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 len, seq = fm10k_mbx_index_len(mbx, mbx->head, tail); + + DEBUGFUNC("fm10k_mbx_push_tail"); + + /* determine length of data to push */ + len = fm10k_fifo_unused(fifo) - mbx->pushed; + if (len > seq) + len = seq; + + /* update head and record bytes received */ + mbx->head = fm10k_mbx_head_add(mbx, len); + mbx->head_len = len; + + /* nothing to do if there is no data */ + if (!len) + return FM10K_SUCCESS; + + /* Copy msg into Rx FIFO */ + fm10k_mbx_read_copy(hw, mbx); + + /* determine if there are any invalid lengths in message */ + if (fm10k_mbx_validate_msg_size(mbx, len)) + return FM10K_MBX_ERR_SIZE; + + /* Update pushed */ + mbx->pushed += len; + + /* flush any completed messages */ + for (len = fm10k_mbx_pushed_tail_len(mbx); + len && (mbx->pushed >= len); + len = fm10k_mbx_pushed_tail_len(mbx)) { + fifo->tail += len; + mbx->pushed -= len; + mbx->rx_messages++; + mbx->rx_dwords += len; + } + + return FM10K_SUCCESS; +} + +/* pre-generated data for generating the CRC based on the poly 0xAC9A. 
*/ +static const u16 fm10k_crc_16b_table[256] = { + 0x0000, 0x7956, 0xF2AC, 0x8BFA, 0xBC6D, 0xC53B, 0x4EC1, 0x3797, + 0x21EF, 0x58B9, 0xD343, 0xAA15, 0x9D82, 0xE4D4, 0x6F2E, 0x1678, + 0x43DE, 0x3A88, 0xB172, 0xC824, 0xFFB3, 0x86E5, 0x0D1F, 0x7449, + 0x6231, 0x1B67, 0x909D, 0xE9CB, 0xDE5C, 0xA70A, 0x2CF0, 0x55A6, + 0x87BC, 0xFEEA, 0x7510, 0x0C46, 0x3BD1, 0x4287, 0xC97D, 0xB02B, + 0xA653, 0xDF05, 0x54FF, 0x2DA9, 0x1A3E, 0x6368, 0xE892, 0x91C4, + 0xC462, 0xBD34, 0x36CE, 0x4F98, 0x780F, 0x0159, 0x8AA3, 0xF3F5, + 0xE58D, 0x9CDB, 0x1721, 0x6E77, 0x59E0, 0x20B6, 0xAB4C, 0xD21A, + 0x564D, 0x2F1B, 0xA4E1, 0xDDB7, 0xEA20, 0x9376, 0x188C, 0x61DA, + 0x77A2, 0x0EF4, 0x850E, 0xFC58, 0xCBCF, 0xB299, 0x3963, 0x4035, + 0x1593, 0x6CC5, 0xE73F, 0x9E69, 0xA9FE, 0xD0A8, 0x5B52, 0x2204, + 0x347C, 0x4D2A, 0xC6D0, 0xBF86, 0x8811, 0xF147, 0x7ABD, 0x03EB, + 0xD1F1, 0xA8A7, 0x235D, 0x5A0B, 0x6D9C, 0x14CA, 0x9F30, 0xE666, + 0xF01E, 0x8948, 0x02B2, 0x7BE4, 0x4C73, 0x3525, 0xBEDF, 0xC789, + 0x922F, 0xEB79, 0x6083, 0x19D5, 0x2E42, 0x5714, 0xDCEE, 0xA5B8, + 0xB3C0, 0xCA96, 0x416C, 0x383A, 0x0FAD, 0x76FB, 0xFD01, 0x8457, + 0xAC9A, 0xD5CC, 0x5E36, 0x2760, 0x10F7, 0x69A1, 0xE25B, 0x9B0D, + 0x8D75, 0xF423, 0x7FD9, 0x068F, 0x3118, 0x484E, 0xC3B4, 0xBAE2, + 0xEF44, 0x9612, 0x1DE8, 0x64BE, 0x5329, 0x2A7F, 0xA185, 0xD8D3, + 0xCEAB, 0xB7FD, 0x3C07, 0x4551, 0x72C6, 0x0B90, 0x806A, 0xF93C, + 0x2B26, 0x5270, 0xD98A, 0xA0DC, 0x974B, 0xEE1D, 0x65E7, 0x1CB1, + 0x0AC9, 0x739F, 0xF865, 0x8133, 0xB6A4, 0xCFF2, 0x4408, 0x3D5E, + 0x68F8, 0x11AE, 0x9A54, 0xE302, 0xD495, 0xADC3, 0x2639, 0x5F6F, + 0x4917, 0x3041, 0xBBBB, 0xC2ED, 0xF57A, 0x8C2C, 0x07D6, 0x7E80, + 0xFAD7, 0x8381, 0x087B, 0x712D, 0x46BA, 0x3FEC, 0xB416, 0xCD40, + 0xDB38, 0xA26E, 0x2994, 0x50C2, 0x6755, 0x1E03, 0x95F9, 0xECAF, + 0xB909, 0xC05F, 0x4BA5, 0x32F3, 0x0564, 0x7C32, 0xF7C8, 0x8E9E, + 0x98E6, 0xE1B0, 0x6A4A, 0x131C, 0x248B, 0x5DDD, 0xD627, 0xAF71, + 0x7D6B, 0x043D, 0x8FC7, 0xF691, 0xC106, 0xB850, 0x33AA, 0x4AFC, + 0x5C84, 0x25D2, 0xAE28, 0xD77E, 0xE0E9, 0x99BF, 0x1245, 0x6B13, + 0x3EB5, 0x47E3, 0xCC19, 0xB54F, 0x82D8, 0xFB8E, 0x7074, 0x0922, + 0x1F5A, 0x660C, 0xEDF6, 0x94A0, 0xA337, 0xDA61, 0x519B, 0x28CD }; + +/** + * fm10k_crc_16b - Generate a 16 bit CRC for a region of 16 bit data + * @data: pointer to data to process + * @seed: seed value for CRC + * @len: length measured in 16 bits words + * + * This function will generate a CRC based on the polynomial 0xAC9A and + * whatever value is stored in the seed variable. Note that this + * value inverts the local seed and the result in order to capture all + * leading and trailing zeros. 
+ */ +STATIC u16 fm10k_crc_16b(const u32 *data, u16 seed, u16 len) +{ + u32 result = seed; + + while (len--) { + result ^= *(data++); + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + + if (!(len--)) + break; + + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + result = (result >> 8) ^ fm10k_crc_16b_table[result & 0xFF]; + } + + return (u16)result; +} + +/** + * fm10k_fifo_crc - generate a CRC based off of FIFO data + * @fifo: pointer to FIFO + * @offset: offset point for start of FIFO + * @len: number of DWORDS words to process + * @seed: seed value for CRC + * + * This function generates a CRC for some region of the FIFO + **/ +STATIC u16 fm10k_fifo_crc(struct fm10k_mbx_fifo *fifo, u16 offset, + u16 len, u16 seed) +{ + u32 *data = fifo->buffer + offset; + + /* track when we should cross the end of the FIFO */ + offset = fifo->size - offset; + + /* if we are in 2 blocks process the end of the FIFO first */ + if (offset < len) { + seed = fm10k_crc_16b(data, seed, offset * 2); + data = fifo->buffer; + len -= offset; + } + + /* process any remaining bits */ + return fm10k_crc_16b(data, seed, len * 2); +} + +/** + * fm10k_mbx_update_local_crc - Update the local CRC for outgoing data + * @mbx: pointer to mailbox + * @head: head index provided by remote mailbox + * + * This function will generate the CRC for all data from the end of the + * last head update to the current one. It uses the result of the + * previous CRC as the seed for this update. The result is stored in + * mbx->local. + **/ +STATIC void fm10k_mbx_update_local_crc(struct fm10k_mbx_info *mbx, u16 head) +{ + u16 len = mbx->tail_len - fm10k_mbx_index_len(mbx, head, mbx->tail); + + /* determine the offset for the start of the region to be pulled */ + head = fm10k_fifo_head_offset(&mbx->tx, mbx->pulled); + + /* update local CRC to include all of the pulled data */ + mbx->local = fm10k_fifo_crc(&mbx->tx, head, len, mbx->local); +} + +/** + * fm10k_mbx_verify_remote_crc - Verify the CRC is correct for current data + * @mbx: pointer to mailbox + * + * This function will take all data that has been provided from the remote + * end and generate a CRC for it. This is stored in mbx->remote. The + * CRC for the header is then computed and if the result is non-zero this + * is an error and we signal an error dropping all data and resetting the + * connection. + */ +STATIC s32 fm10k_mbx_verify_remote_crc(struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + u16 len = mbx->head_len; + u16 offset = fm10k_fifo_tail_offset(fifo, mbx->pushed) - len; + u16 crc; + + /* update the remote CRC if new data has been received */ + if (len) + mbx->remote = fm10k_fifo_crc(fifo, offset, len, mbx->remote); + + /* process the full header as we have to validate the CRC */ + crc = fm10k_crc_16b(&mbx->mbx_hdr, mbx->remote, 1); + + /* notify other end if we have a problem */ + return crc ? FM10K_MBX_ERR_CRC : FM10K_SUCCESS; +} + +/** + * fm10k_mbx_rx_ready - Indicates that a message is ready in the Rx FIFO + * @mbx: pointer to mailbox + * + * This function returns true if there is a message in the Rx FIFO to dequeue. 
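fm10k_fifo_crc() above has to checksum a region that may wrap around the end of the circular buffer, so it runs the CRC over the tail piece first and feeds the result back in as the seed for the remainder. A standalone sketch of that chaining, where step16() is only a trivial stand-in for the table-driven 0xAC9A CRC used by the driver:

/* illustrative only: seed chaining across a circular-buffer wrap */
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 8u   /* example FIFO size in DWORDs (power of 2) */

static uint16_t step16(const uint32_t *data, uint16_t seed, uint16_t words)
{
	/* placeholder mixing function with the same interface shape as
	 * the real CRC helper: consumes DWORDs, carries a 16-bit state */
	while (words--) {
		seed = (uint16_t)(seed * 31u + (uint16_t)(*data & 0xFFFF));
		seed = (uint16_t)(seed * 31u + (uint16_t)(*data >> 16));
		data++;
	}
	return seed;
}

static uint16_t ring_crc(const uint32_t *ring, uint16_t offset,
			 uint16_t len, uint16_t seed)
{
	/* if the region wraps past the end, process the tail piece first
	 * and feed its result in as the seed for the rest */
	uint16_t to_end = RING_SIZE - offset;

	if (to_end < len) {
		seed = step16(ring + offset, seed, to_end);
		return step16(ring, seed, len - to_end);
	}
	return step16(ring + offset, seed, len);
}

int main(void)
{
	uint32_t ring[RING_SIZE] = { 1, 2, 3, 4, 5, 6, 7, 8 };

	/* 5 DWORDs starting at offset 6 wrap around to the front */
	printf("0x%04x\n", (unsigned)ring_crc(ring, 6, 5, 0xFFFF));
	return 0;
}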
+ **/ +STATIC bool fm10k_mbx_rx_ready(struct fm10k_mbx_info *mbx) +{ + u16 msg_size = fm10k_fifo_head_len(&mbx->rx); + + return msg_size && (fm10k_fifo_used(&mbx->rx) >= msg_size); +} + +/** + * fm10k_mbx_tx_ready - Indicates that the mailbox is in state ready for Tx + * @mbx: pointer to mailbox + * @len: verify free space is >= this value + * + * This function returns true if the mailbox is in a state ready to transmit. + **/ +STATIC bool fm10k_mbx_tx_ready(struct fm10k_mbx_info *mbx, u16 len) +{ + u16 fifo_unused = fm10k_fifo_unused(&mbx->tx); + + return (mbx->state == FM10K_STATE_OPEN) && (fifo_unused >= len); +} + +/** + * fm10k_mbx_tx_complete - Indicates that the Tx FIFO has been emptied + * @mbx: pointer to mailbox + * + * This function returns true if the Tx FIFO is empty. + **/ +STATIC bool fm10k_mbx_tx_complete(struct fm10k_mbx_info *mbx) +{ + return fm10k_fifo_empty(&mbx->tx); +} + +/** + * fm10k_mbx_deqeueue_rx - Dequeues the message from the head in the Rx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function dequeues messages and hands them off to the tlv parser. + * It will return the number of messages processed when called. + **/ +STATIC u16 fm10k_mbx_dequeue_rx(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_mbx_fifo *fifo = &mbx->rx; + s32 err; + u16 cnt; + + /* parse Rx messages out of the Rx FIFO to empty it */ + for (cnt = 0; !fm10k_fifo_empty(fifo); cnt++) { + err = fm10k_tlv_msg_parse(hw, fifo->buffer + fifo->head, + mbx, mbx->msg_data); + if (err < 0) + mbx->rx_parse_err++; + + fm10k_fifo_head_drop(fifo); + } + + /* shift remaining bytes back to start of FIFO */ + memmove(fifo->buffer, fifo->buffer + fifo->tail, mbx->pushed << 2); + + /* shift head and tail based on the memory we moved */ + fifo->tail -= fifo->head; + fifo->head = 0; + + return cnt; +} + +/** + * fm10k_mbx_enqueue_tx - Enqueues the message to the tail of the Tx FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg: message array to read + * + * This function enqueues a message up to the size specified by the length + * contained in the first DWORD of the message and will place at the tail + * of the FIFO. It will return 0 on success, or a negative value on error. 
+ **/ +STATIC s32 fm10k_mbx_enqueue_tx(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, const u32 *msg) +{ + u32 countdown = mbx->timeout; + s32 err; + + switch (mbx->state) { + case FM10K_STATE_CLOSED: + case FM10K_STATE_DISCONNECT: + return FM10K_MBX_ERR_NO_MBX; + default: + break; + } + + /* enqueue the message on the Tx FIFO */ + err = fm10k_fifo_enqueue(&mbx->tx, msg); + + /* if it failed give the FIFO a chance to drain */ + while (err && countdown) { + countdown--; + usec_delay(mbx->usec_delay); + mbx->ops.process(hw, mbx); + err = fm10k_fifo_enqueue(&mbx->tx, msg); + } + + /* if we failed treat the error */ + if (err) { + mbx->timeout = 0; + mbx->tx_busy++; + } + + /* begin processing message, ignore errors as this is just meant + * to start the mailbox flow so we are not concerned if there + * is a bad error, or the mailbox is already busy with a request + */ + if (!mbx->tail_len) + mbx->ops.process(hw, mbx); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_read - Copies the mbmem to local message buffer + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function copies the message from the mbmem to the message array + **/ +STATIC s32 fm10k_mbx_read(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_read"); + + /* only allow one reader in here at a time */ + if (mbx->mbx_hdr) + return FM10K_MBX_ERR_BUSY; + + /* read to capture initial interrupt bits */ + if (FM10K_READ_MBX(hw, mbx->mbx_reg) & FM10K_MBX_REQ_INTERRUPT) + mbx->mbx_lock = FM10K_MBX_ACK; + + /* write back interrupt bits to clear */ + FM10K_WRITE_MBX(hw, mbx->mbx_reg, + FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT); + + /* read remote header */ + mbx->mbx_hdr = FM10K_READ_MBX(hw, mbx->mbmem_reg ^ mbx->mbmem_len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_write - Copies the local message buffer to mbmem + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function copies the message from the the message array to mbmem + **/ +STATIC void fm10k_mbx_write(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + u32 mbmem = mbx->mbmem_reg; + + DEBUGFUNC("fm10k_mbx_write"); + + /* write new msg header to notify recipient of change */ + FM10K_WRITE_MBX(hw, mbmem, mbx->mbx_hdr); + + /* write mailbox to send interrupt */ + if (mbx->mbx_lock) + FM10K_WRITE_MBX(hw, mbx->mbx_reg, mbx->mbx_lock); + + /* we no longer are using the header so free it */ + mbx->mbx_hdr = 0; + mbx->mbx_lock = 0; +} + +/** + * fm10k_mbx_create_connect_hdr - Generate a connect mailbox header + * @mbx: pointer to mailbox + * + * This function returns a connection mailbox header + **/ +STATIC void fm10k_mbx_create_connect_hdr(struct fm10k_mbx_info *mbx) +{ + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_CONNECT, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD) | + FM10K_MSG_HDR_FIELD_SET(mbx->rx.size - 1, CONNECT_SIZE); +} + +/** + * fm10k_mbx_create_data_hdr - Generate a data mailbox header + * @mbx: pointer to mailbox + * + * This function returns a data mailbox header + **/ +STATIC void fm10k_mbx_create_data_hdr(struct fm10k_mbx_info *mbx) +{ + u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DATA, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->tail, TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); + struct fm10k_mbx_fifo *fifo = &mbx->tx; + u16 crc; + + if (mbx->tail_len) + mbx->mbx_lock |= FM10K_MBX_REQ; + + /* generate CRC for data in flight and header */ + crc = fm10k_fifo_crc(fifo, fm10k_fifo_head_offset(fifo, mbx->pulled), + 
mbx->tail_len, mbx->local); + crc = fm10k_crc_16b(&hdr, crc, 1); + + /* load header to memory to be written */ + mbx->mbx_hdr = hdr | FM10K_MSG_HDR_FIELD_SET(crc, CRC); +} + +/** + * fm10k_mbx_create_disconnect_hdr - Generate a disconnect mailbox header + * @mbx: pointer to mailbox + * + * This function returns a disconnect mailbox header + **/ +STATIC void fm10k_mbx_create_disconnect_hdr(struct fm10k_mbx_info *mbx) +{ + u32 hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_DISCONNECT, TYPE) | + FM10K_MSG_HDR_FIELD_SET(mbx->tail, TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); + u16 crc = fm10k_crc_16b(&hdr, mbx->local, 1); + + mbx->mbx_lock |= FM10K_MBX_ACK; + + /* load header to memory to be written */ + mbx->mbx_hdr = hdr | FM10K_MSG_HDR_FIELD_SET(crc, CRC); +} + +/** + * fm10k_mbx_create_error_msg - Generate a error message + * @mbx: pointer to mailbox + * @err: local error encountered + * + * This function will interpret the error provided by err, and based on + * that it may shift the message by 1 DWORD and then place an error header + * at the start of the message. + **/ +STATIC void fm10k_mbx_create_error_msg(struct fm10k_mbx_info *mbx, s32 err) +{ + /* only generate an error message for these types */ + switch (err) { + case FM10K_MBX_ERR_TAIL: + case FM10K_MBX_ERR_HEAD: + case FM10K_MBX_ERR_TYPE: + case FM10K_MBX_ERR_SIZE: + case FM10K_MBX_ERR_RSVD0: + case FM10K_MBX_ERR_CRC: + break; + default: + return; + } + + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(FM10K_MSG_ERROR, TYPE) | + FM10K_MSG_HDR_FIELD_SET(err, ERR_NO) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, HEAD); +} + +/** + * fm10k_mbx_validate_msg_hdr - Validate common fields in the message header + * @mbx: pointer to mailbox + * @msg: message array to read + * + * This function will parse up the fields in the mailbox header and return + * an error if the header contains any of a number of invalid configurations + * including unrecognized type, invalid route, or a malformed message. 
+ **/ +STATIC s32 fm10k_mbx_validate_msg_hdr(struct fm10k_mbx_info *mbx) +{ + u16 type, rsvd0, head, tail, size; + const u32 *hdr = &mbx->mbx_hdr; + + DEBUGFUNC("fm10k_mbx_validate_msg_hdr"); + + type = FM10K_MSG_HDR_FIELD_GET(*hdr, TYPE); + rsvd0 = FM10K_MSG_HDR_FIELD_GET(*hdr, RSVD0); + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + size = FM10K_MSG_HDR_FIELD_GET(*hdr, CONNECT_SIZE); + + if (rsvd0) + return FM10K_MBX_ERR_RSVD0; + + switch (type) { + case FM10K_MSG_DISCONNECT: + /* validate that all data has been received */ + if (tail != mbx->head) + return FM10K_MBX_ERR_TAIL; + + /* fall through */ + case FM10K_MSG_DATA: + /* validate that head is moving correctly */ + if (!head || (head == FM10K_MSG_HDR_MASK(HEAD))) + return FM10K_MBX_ERR_HEAD; + if (fm10k_mbx_index_len(mbx, head, mbx->tail) > mbx->tail_len) + return FM10K_MBX_ERR_HEAD; + + /* validate that tail is moving correctly */ + if (!tail || (tail == FM10K_MSG_HDR_MASK(TAIL))) + return FM10K_MBX_ERR_TAIL; + if (fm10k_mbx_index_len(mbx, mbx->head, tail) < mbx->mbmem_len) + break; + + return FM10K_MBX_ERR_TAIL; + case FM10K_MSG_CONNECT: + /* validate size is in range and is power of 2 mask */ + if ((size < FM10K_VFMBX_MSG_MTU) || (size & (size + 1))) + return FM10K_MBX_ERR_SIZE; + + /* fall through */ + case FM10K_MSG_ERROR: + if (!head || (head == FM10K_MSG_HDR_MASK(HEAD))) + return FM10K_MBX_ERR_HEAD; + /* neither create nor error include a tail offset */ + if (tail) + return FM10K_MBX_ERR_TAIL; + + break; + default: + return FM10K_MBX_ERR_TYPE; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_create_reply - Generate reply based on state and remote head + * @mbx: pointer to mailbox + * @head: acknowledgement number + * + * This function will generate an outgoing message based on the current + * mailbox state and the remote fifo head. It will return the length + * of the outgoing message excluding header on success, and a negative value + * on error. + **/ +STATIC s32 fm10k_mbx_create_reply(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* update our checksum for the outgoing data */ + fm10k_mbx_update_local_crc(mbx, head); + + /* as long as other end recognizes us keep sending data */ + fm10k_mbx_pull_head(hw, mbx, head); + + /* generate new header based on data */ + if (mbx->tail_len || (mbx->state == FM10K_STATE_OPEN)) + fm10k_mbx_create_data_hdr(mbx); + else + fm10k_mbx_create_disconnect_hdr(mbx); + break; + case FM10K_STATE_CONNECT: + /* send disconnect even if we aren't connected */ + fm10k_mbx_create_connect_hdr(mbx); + break; + case FM10K_STATE_CLOSED: + /* generate new header based on data */ + fm10k_mbx_create_disconnect_hdr(mbx); + default: + break; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_reset_work- Reset internal pointers for any pending work + * @mbx: pointer to mailbox + * + * This function will reset all internal pointers so any work in progress + * is dropped. This call should occur every time we transition from the + * open state to the connect state. 
+ **/ +STATIC void fm10k_mbx_reset_work(struct fm10k_mbx_info *mbx) +{ + /* reset our outgoing max size back to Rx limits */ + mbx->max_size = mbx->rx.size - 1; + + /* just do a quick resysnc to start of message */ + mbx->pushed = 0; + mbx->pulled = 0; + mbx->tail_len = 0; + mbx->head_len = 0; + mbx->rx.tail = 0; + mbx->rx.head = 0; +} + +/** + * fm10k_mbx_update_max_size - Update the max_size and drop any large messages + * @mbx: pointer to mailbox + * @size: new value for max_size + * + * This function will update the max_size value and drop any outgoing messages + * from the head of the Tx FIFO that are larger than max_size. + **/ +STATIC void fm10k_mbx_update_max_size(struct fm10k_mbx_info *mbx, u16 size) +{ + u16 len; + + DEBUGFUNC("fm10k_mbx_update_max_size_hdr"); + + mbx->max_size = size; + + /* flush any oversized messages from the queue */ + for (len = fm10k_fifo_head_len(&mbx->tx); + len > size; + len = fm10k_fifo_head_len(&mbx->tx)) { + fm10k_fifo_head_drop(&mbx->tx); + mbx->tx_dropped++; + } +} + +/** + * fm10k_mbx_connect_reset - Reset following request for reset + * @mbx: pointer to mailbox + * + * This function resets the mailbox to either a disconnected state + * or a connect state depending on the current mailbox state + **/ +STATIC void fm10k_mbx_connect_reset(struct fm10k_mbx_info *mbx) +{ + /* just do a quick resysnc to start of frame */ + fm10k_mbx_reset_work(mbx); + + /* reset CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* we cannot exit connect until the size is good */ + if (mbx->state == FM10K_STATE_OPEN) + mbx->state = FM10K_STATE_CONNECT; + else + mbx->state = FM10K_STATE_CLOSED; +} + +/** + * fm10k_mbx_process_connect - Process connect header + * @mbx: pointer to mailbox + * @msg: message array to process + * + * This function will read an incoming connect header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. + **/ +STATIC s32 fm10k_mbx_process_connect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + const u32 *hdr = &mbx->mbx_hdr; + u16 size, head; + + /* we will need to pull all of the fields for verification */ + size = FM10K_MSG_HDR_FIELD_GET(*hdr, CONNECT_SIZE); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + + switch (state) { + case FM10K_STATE_DISCONNECT: + case FM10K_STATE_OPEN: + /* reset any in-progress work */ + fm10k_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* we cannot exit connect until the size is good */ + if (size > mbx->rx.size) { + mbx->max_size = mbx->rx.size - 1; + } else { + /* record the remote system requesting connection */ + mbx->state = FM10K_STATE_OPEN; + + fm10k_mbx_update_max_size(mbx, size); + } + break; + default: + break; + } + + /* align our tail index to remote head index */ + mbx->tail = head; + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_data - Process data header + * @mbx: pointer to mailbox + * + * This function will read an incoming data header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. 
+ **/ +STATIC s32 fm10k_mbx_process_data(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 head, tail; + s32 err; + + DEBUGFUNC("fm10k_mbx_process_data"); + + /* we will need to pull all of the fields for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, TAIL); + + /* if we are in connect just update our data and go */ + if (mbx->state == FM10K_STATE_CONNECT) { + mbx->tail = head; + mbx->state = FM10K_STATE_OPEN; + } + + /* abort on message size errors */ + err = fm10k_mbx_push_tail(hw, mbx, tail); + if (err < 0) + return err; + + /* verify the checksum on the incoming data */ + err = fm10k_mbx_verify_remote_crc(mbx); + if (err) + return err; + + /* process messages if we have received any */ + fm10k_mbx_dequeue_rx(hw, mbx); + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_disconnect - Process disconnect header + * @mbx: pointer to mailbox + * + * This function will read an incoming disconnect header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. + **/ +STATIC s32 fm10k_mbx_process_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + const u32 *hdr = &mbx->mbx_hdr; + u16 head; + s32 err; + + /* we will need to pull the header field for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + + /* We should not be receiving disconnect if Rx is incomplete */ + if (mbx->pushed) + return FM10K_MBX_ERR_TAIL; + + /* we have already verified mbx->head == tail so we know this is 0 */ + mbx->head_len = 0; + + /* verify the checksum on the incoming header is correct */ + err = fm10k_mbx_verify_remote_crc(mbx); + if (err) + return err; + + switch (state) { + case FM10K_STATE_DISCONNECT: + case FM10K_STATE_OPEN: + /* state doesn't change if we still have work to do */ + if (!fm10k_mbx_tx_complete(mbx)) + break; + + /* verify the head indicates we completed all transmits */ + if (head != mbx->tail) + return FM10K_MBX_ERR_HEAD; + + /* reset any in-progress work */ + fm10k_mbx_connect_reset(mbx); + break; + default: + break; + } + + return fm10k_mbx_create_reply(hw, mbx, head); +} + +/** + * fm10k_mbx_process_error - Process error header + * @mbx: pointer to mailbox + * + * This function will read an incoming error header and reply with the + * appropriate message. It will return a value indicating the number of + * data DWORDs on success, or will return a negative value on failure. 
+ **/ +STATIC s32 fm10k_mbx_process_error(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + s32 err_no; + u16 head; + + /* we will need to pull all of the fields for verification */ + head = FM10K_MSG_HDR_FIELD_GET(*hdr, HEAD); + + /* we only have lower 10 bits of error number so add upper bits */ + err_no = FM10K_MSG_HDR_FIELD_GET(*hdr, ERR_NO); + err_no |= ~FM10K_MSG_HDR_MASK(ERR_NO); + + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* flush any uncompleted work */ + fm10k_mbx_reset_work(mbx); + + /* reset CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* reset tail index and size to prepare for reconnect */ + mbx->tail = head; + + /* if open then reset max_size and go back to connect */ + if (mbx->state == FM10K_STATE_OPEN) { + mbx->state = FM10K_STATE_CONNECT; + break; + } + + /* send a connect message to get data flowing again */ + fm10k_mbx_create_connect_hdr(mbx); + return FM10K_SUCCESS; + default: + break; + } + + return fm10k_mbx_create_reply(hw, mbx, mbx->tail); +} + +/** + * fm10k_mbx_process - Process mailbox interrupt + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will process incoming mailbox events and generate mailbox + * replies. It will return a value indicating the number of DWORDs + * transmitted excluding header on success or a negative value on error. + **/ +STATIC s32 fm10k_mbx_process(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + s32 err; + + DEBUGFUNC("fm10k_mbx_process"); + + /* we do not read mailbox if closed */ + if (mbx->state == FM10K_STATE_CLOSED) + return FM10K_SUCCESS; + + /* copy data from mailbox */ + err = fm10k_mbx_read(hw, mbx); + if (err) + return err; + + /* validate type, source, and destination */ + err = fm10k_mbx_validate_msg_hdr(mbx); + if (err < 0) + goto msg_err; + + switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, TYPE)) { + case FM10K_MSG_CONNECT: + err = fm10k_mbx_process_connect(hw, mbx); + break; + case FM10K_MSG_DATA: + err = fm10k_mbx_process_data(hw, mbx); + break; + case FM10K_MSG_DISCONNECT: + err = fm10k_mbx_process_disconnect(hw, mbx); + break; + case FM10K_MSG_ERROR: + err = fm10k_mbx_process_error(hw, mbx); + break; + default: + err = FM10K_MBX_ERR_TYPE; + break; + } + +msg_err: + /* notify partner of errors on our end */ + if (err < 0) + fm10k_mbx_create_error_msg(mbx, err); + + /* copy data from mailbox */ + fm10k_mbx_write(hw, mbx); + + return err; +} + +/** + * fm10k_mbx_disconnect - Shutdown mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will shut down the mailbox. It places the mailbox first + * in the disconnect state, it then allows up to a predefined timeout for + * the mailbox to transition to close on its own. If this does not occur + * then the mailbox will be forced into the closed state. + * + * Any mailbox transactions not completed before calling this function + * are not guaranteed to complete and may be dropped. + **/ +STATIC void fm10k_mbx_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + int timeout = mbx->timeout ? 
FM10K_MBX_DISCONNECT_TIMEOUT : 0; + + DEBUGFUNC("fm10k_mbx_disconnect"); + + /* Place mbx in ready to disconnect state */ + mbx->state = FM10K_STATE_DISCONNECT; + + /* trigger interrupt to start shutdown process */ + FM10K_WRITE_MBX(hw, mbx->mbx_reg, FM10K_MBX_REQ | + FM10K_MBX_INTERRUPT_DISABLE); + do { + usec_delay(FM10K_MBX_POLL_DELAY); + mbx->ops.process(hw, mbx); + timeout -= FM10K_MBX_POLL_DELAY; + } while ((timeout > 0) && (mbx->state != FM10K_STATE_CLOSED)); + + /* in case we didn't close just force the mailbox into shutdown */ + fm10k_mbx_connect_reset(mbx); + fm10k_mbx_update_max_size(mbx, 0); + + FM10K_WRITE_MBX(hw, mbx->mbmem_reg, 0); +} + +/** + * fm10k_mbx_connect - Start mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will initiate a mailbox connection. It will populate the + * mailbox with a broadcast connect message and then initialize the lock. + * This is safe since the connect message is a single DWORD so the mailbox + * transaction is guaranteed to be atomic. + * + * This function will return an error if the mailbox has not been initiated + * or is currently in use. + **/ +STATIC s32 fm10k_mbx_connect(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_connect"); + + /* we cannot connect an uninitialized mailbox */ + if (!mbx->rx.buffer) + return FM10K_MBX_ERR_NO_SPACE; + + /* we cannot connect an already connected mailbox */ + if (mbx->state != FM10K_STATE_CLOSED) + return FM10K_MBX_ERR_BUSY; + + /* mailbox timeout can now become active */ + mbx->timeout = FM10K_MBX_INIT_TIMEOUT; + + /* Place mbx in ready to connect state */ + mbx->state = FM10K_STATE_CONNECT; + + /* initialize header of remote mailbox */ + fm10k_mbx_create_disconnect_hdr(mbx); + FM10K_WRITE_MBX(hw, mbx->mbmem_reg ^ mbx->mbmem_len, mbx->mbx_hdr); + + /* enable interrupt and notify other party of new message */ + mbx->mbx_lock = FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT | + FM10K_MBX_INTERRUPT_ENABLE; + + /* generate and load connect header into mailbox */ + fm10k_mbx_create_connect_hdr(mbx); + fm10k_mbx_write(hw, mbx); + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_validate_handlers - Validate layout of message parsing data + * @msg_data: handlers for mailbox events + * + * This function validates the layout of the message parsing data. This + * should be mostly static, but it is important to catch any errors that + * are made when constructing the parsers. 
+ **/ +STATIC s32 fm10k_mbx_validate_handlers(const struct fm10k_msg_data *msg_data) +{ + const struct fm10k_tlv_attr *attr; + unsigned int id; + + DEBUGFUNC("fm10k_mbx_validate_handlers"); + + /* Allow NULL mailboxes that transmit but don't receive */ + if (!msg_data) + return FM10K_SUCCESS; + + while (msg_data->id != FM10K_TLV_ERROR) { + /* all messages should have a function handler */ + if (!msg_data->func) + return FM10K_ERR_PARAM; + + /* parser is optional */ + attr = msg_data->attr; + if (attr) { + while (attr->id != FM10K_TLV_ERROR) { + id = attr->id; + attr++; + /* ID should always be increasing */ + if (id >= attr->id) + return FM10K_ERR_PARAM; + /* ID should fit in results array */ + if (id >= FM10K_TLV_RESULTS_MAX) + return FM10K_ERR_PARAM; + } + + /* verify terminator is in the list */ + if (attr->id != FM10K_TLV_ERROR) + return FM10K_ERR_PARAM; + } + + id = msg_data->id; + msg_data++; + /* ID should always be increasing */ + if (id >= msg_data->id) + return FM10K_ERR_PARAM; + } + + /* verify terminator is in the list */ + if ((msg_data->id != FM10K_TLV_ERROR) || !msg_data->func) + return FM10K_ERR_PARAM; + + return FM10K_SUCCESS; +} + +/** + * fm10k_mbx_register_handlers - Register a set of handler ops for mailbox + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * + * This function associates a set of message handling ops with a mailbox. + **/ +STATIC s32 fm10k_mbx_register_handlers(struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data) +{ + DEBUGFUNC("fm10k_mbx_register_handlers"); + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + return FM10K_SUCCESS; +} + +/** + * fm10k_pfvf_mbx_init - Initialize mailbox memory for PF/VF mailbox + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * @id: ID reference for PF as it supports up to 64 PF/VF mailboxes + * + * This function initializes the mailbox for use. It will split the + * buffer provided and use that to populate both the Tx and Rx FIFO by + * evenly splitting it. In order to allow for easy masking of head/tail + * the value reported in size must be a power of 2 and is reported in + * DWORDs, not bytes. Any invalid values will cause the mailbox to return + * error.
+ **/ +s32 fm10k_pfvf_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data, u8 id) +{ + DEBUGFUNC("fm10k_pfvf_mbx_init"); + + /* initialize registers */ + switch (hw->mac.type) { + case fm10k_mac_vf: + mbx->mbx_reg = FM10K_VFMBX; + mbx->mbmem_reg = FM10K_VFMBMEM(FM10K_VFMBMEM_VF_XOR); + break; + case fm10k_mac_pf: + /* there are only 64 VF <-> PF mailboxes */ + if (id < 64) { + mbx->mbx_reg = FM10K_MBX(id); + mbx->mbmem_reg = FM10K_MBMEM_VF(id, 0); + break; + } + /* fallthough */ + default: + return FM10K_MBX_ERR_NO_MBX; + } + + /* start out in closed state */ + mbx->state = FM10K_STATE_CLOSED; + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + /* start mailbox as timed out and let the reset_hw call + * set the timeout value to begin communications + */ + mbx->timeout = 0; + mbx->usec_delay = FM10K_MBX_INIT_DELAY; + + /* initialize tail and head */ + mbx->tail = 1; + mbx->head = 1; + + /* initialize CRC seeds */ + mbx->local = FM10K_MBX_CRC_SEED; + mbx->remote = FM10K_MBX_CRC_SEED; + + /* Split buffer for use by Tx/Rx FIFOs */ + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + mbx->mbmem_len = FM10K_VFMBMEM_VF_XOR; + + /* initialize the FIFOs, sizes are in 4 byte increments */ + fm10k_fifo_init(&mbx->tx, mbx->buffer, FM10K_MBX_TX_BUFFER_SIZE); + fm10k_fifo_init(&mbx->rx, &mbx->buffer[FM10K_MBX_TX_BUFFER_SIZE], + FM10K_MBX_RX_BUFFER_SIZE); + + /* initialize function pointers */ + mbx->ops.connect = fm10k_mbx_connect; + mbx->ops.disconnect = fm10k_mbx_disconnect; + mbx->ops.rx_ready = fm10k_mbx_rx_ready; + mbx->ops.tx_ready = fm10k_mbx_tx_ready; + mbx->ops.tx_complete = fm10k_mbx_tx_complete; + mbx->ops.enqueue_tx = fm10k_mbx_enqueue_tx; + mbx->ops.process = fm10k_mbx_process; + mbx->ops.register_handlers = fm10k_mbx_register_handlers; + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_create_data_hdr - Generate a mailbox header for local FIFO + * @mbx: pointer to mailbox + * + * This function returns a connection mailbox header + **/ +STATIC void fm10k_sm_mbx_create_data_hdr(struct fm10k_mbx_info *mbx) +{ + if (mbx->tail_len) + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(mbx->tail, SM_TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->remote, SM_VER) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, SM_HEAD); +} + +/** + * fm10k_sm_mbx_create_connect_hdr - Generate a mailbox header for local FIFO + * @mbx: pointer to mailbox + * @err: error flags to report if any + * + * This function returns a connection mailbox header + **/ +STATIC void fm10k_sm_mbx_create_connect_hdr(struct fm10k_mbx_info *mbx, u8 err) +{ + if (mbx->local) + mbx->mbx_lock |= FM10K_MBX_REQ; + + mbx->mbx_hdr = FM10K_MSG_HDR_FIELD_SET(mbx->tail, SM_TAIL) | + FM10K_MSG_HDR_FIELD_SET(mbx->remote, SM_VER) | + FM10K_MSG_HDR_FIELD_SET(mbx->head, SM_HEAD) | + FM10K_MSG_HDR_FIELD_SET(err, SM_ERR); +} + +/** + * fm10k_sm_mbx_connect_reset - Reset following request for reset + * @mbx: pointer to mailbox + * + * This function resets the mailbox to a just connected state + **/ +STATIC void fm10k_sm_mbx_connect_reset(struct fm10k_mbx_info *mbx) +{ + /* flush any uncompleted work */ + fm10k_mbx_reset_work(mbx); + + /* set local version to max and remote version to 0 */ + mbx->local = FM10K_SM_MBX_VERSION; + mbx->remote = 0; + + /* initialize tail and head */ + mbx->tail = 1; + mbx->head = 1; + + /* reset state back to connect */ + 
mbx->state = FM10K_STATE_CONNECT; +} + +/** + * fm10k_sm_mbx_connect - Start switch manager mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will initiate a mailbox connection with the switch + * manager. To do this it will first disconnect the mailbox, and then + * reconnect it in order to complete a reset of the mailbox. + * + * This function will return an error if the mailbox has not been initiated + * or is currently in use. + **/ +STATIC s32 fm10k_sm_mbx_connect(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx) +{ + DEBUGFUNC("fm10k_mbx_connect"); + + /* we cannot connect an uninitialized mailbox */ + if (!mbx->rx.buffer) + return FM10K_MBX_ERR_NO_SPACE; + + /* we cannot connect an already connected mailbox */ + if (mbx->state != FM10K_STATE_CLOSED) + return FM10K_MBX_ERR_BUSY; + + /* mailbox timeout can now become active */ + mbx->timeout = FM10K_MBX_INIT_TIMEOUT; + + /* Place mbx in ready to connect state */ + mbx->state = FM10K_STATE_CONNECT; + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + + /* reset interface back to connect */ + fm10k_sm_mbx_connect_reset(mbx); + + /* enable interrupt and notify other party of new message */ + mbx->mbx_lock = FM10K_MBX_REQ_INTERRUPT | FM10K_MBX_ACK_INTERRUPT | + FM10K_MBX_INTERRUPT_ENABLE; + + /* generate and load connect header into mailbox */ + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + fm10k_mbx_write(hw, mbx); + + /* enable interrupt and notify other party of new message */ + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_disconnect - Shutdown mailbox connection + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will shut down the mailbox. It places the mailbox first + * in the disconnect state, it then allows up to a predefined timeout for + * the mailbox to transition to close on its own. If this does not occur + * then the mailbox will be forced into the closed state. + * + * Any mailbox transactions not completed before calling this function + * are not guaranteed to complete and may be dropped. + **/ +STATIC void fm10k_sm_mbx_disconnect(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + int timeout = mbx->timeout ? FM10K_MBX_DISCONNECT_TIMEOUT : 0; + + DEBUGFUNC("fm10k_sm_mbx_disconnect"); + + /* Place mbx in ready to disconnect state */ + mbx->state = FM10K_STATE_DISCONNECT; + + /* trigger interrupt to start shutdown process */ + FM10K_WRITE_REG(hw, mbx->mbx_reg, FM10K_MBX_REQ | + FM10K_MBX_INTERRUPT_DISABLE); + do { + usec_delay(FM10K_MBX_POLL_DELAY); + mbx->ops.process(hw, mbx); + timeout -= FM10K_MBX_POLL_DELAY; + } while ((timeout > 0) && (mbx->state != FM10K_STATE_CLOSED)); + + /* in case we didn't close just force the mailbox into shutdown */ + mbx->state = FM10K_STATE_CLOSED; + mbx->remote = 0; + fm10k_mbx_reset_work(mbx); + fm10k_mbx_update_max_size(mbx, 0); + + FM10K_WRITE_REG(hw, mbx->mbmem_reg, 0); +} + +/** + * fm10k_mbx_validate_fifo_hdr - Validate fields in the remote FIFO header + * @mbx: pointer to mailbox + * + * This function will parse up the fields in the mailbox header and return + * an error if the header contains any of a number of invalid configurations + * including unrecognized offsets or version numbers. 
+ **/ +STATIC s32 fm10k_sm_mbx_validate_fifo_hdr(struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 tail, head, ver; + + DEBUGFUNC("fm10k_mbx_validate_msg_hdr"); + + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_TAIL); + ver = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_VER); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_HEAD); + + switch (ver) { + case 0: + break; + case FM10K_SM_MBX_VERSION: + if (!head || head > FM10K_SM_MBX_FIFO_LEN) + return FM10K_MBX_ERR_HEAD; + if (!tail || tail > FM10K_SM_MBX_FIFO_LEN) + return FM10K_MBX_ERR_TAIL; + if (mbx->tail < head) + head += mbx->mbmem_len - 1; + if (tail < mbx->head) + tail += mbx->mbmem_len - 1; + if (fm10k_mbx_index_len(mbx, head, mbx->tail) > mbx->tail_len) + return FM10K_MBX_ERR_HEAD; + if (fm10k_mbx_index_len(mbx, mbx->head, tail) < mbx->mbmem_len) + break; + return FM10K_MBX_ERR_TAIL; + default: + return FM10K_MBX_ERR_SRC; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_process_error - Process header with error flag set + * @mbx: pointer to mailbox + * + * This function is meant to respond to a request where the error flag + * is set. As a result we will terminate a connection if one is present + * and fall back into the reset state with a connection header of version + * 0 (RESET). + **/ +STATIC void fm10k_sm_mbx_process_error(struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + + switch (state) { + case FM10K_STATE_DISCONNECT: + /* if there is an error just disconnect */ + mbx->remote = 0; + break; + case FM10K_STATE_OPEN: + /* flush any uncompleted work */ + fm10k_sm_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* try connnecting at lower version */ + if (mbx->remote) { + while (mbx->local > 1) + mbx->local--; + mbx->remote = 0; + } + break; + default: + break; + } + + fm10k_sm_mbx_create_connect_hdr(mbx, 0); +} + +/** + * fm10k_sm_mbx_create_error_message - Process an error in FIFO hdr + * @mbx: pointer to mailbox + * @err: local error encountered + * + * This function will interpret the error provided by err, and based on + * that it may set the error bit in the local message header + **/ +STATIC void fm10k_sm_mbx_create_error_msg(struct fm10k_mbx_info *mbx, s32 err) +{ + /* only generate an error message for these types */ + switch (err) { + case FM10K_MBX_ERR_TAIL: + case FM10K_MBX_ERR_HEAD: + case FM10K_MBX_ERR_SRC: + case FM10K_MBX_ERR_SIZE: + case FM10K_MBX_ERR_RSVD0: + break; + default: + return; + } + + /* process it as though we received an error, and send error reply */ + fm10k_sm_mbx_process_error(mbx); + fm10k_sm_mbx_create_connect_hdr(mbx, 1); +} + +/** + * fm10k_sm_mbx_receive - Take message from Rx mailbox FIFO and put it in Rx + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will dequeue one message from the Rx switch manager mailbox + * FIFO and place it in the Rx mailbox FIFO for processing by software. 
+ **/ +STATIC s32 fm10k_sm_mbx_receive(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, + u16 tail) +{ + /* reduce length by 1 to convert to a mask */ + u16 mbmem_len = mbx->mbmem_len - 1; + s32 err; + + DEBUGFUNC("fm10k_sm_mbx_receive"); + + /* push tail in front of head */ + if (tail < mbx->head) + tail += mbmem_len; + + /* copy data to the Rx FIFO */ + err = fm10k_mbx_push_tail(hw, mbx, tail); + if (err < 0) + return err; + + /* process messages if we have received any */ + fm10k_mbx_dequeue_rx(hw, mbx); + + /* guarantee head aligns with the end of the last message */ + mbx->head = fm10k_mbx_head_sub(mbx, mbx->pushed); + mbx->pushed = 0; + + /* clear any extra bits left over since index adds 1 extra bit */ + if (mbx->head > mbmem_len) + mbx->head -= mbmem_len; + + return err; +} + +/** + * fm10k_sm_mbx_transmit - Take message from Tx and put it in Tx mailbox FIFO + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will dequeue one message from the Tx mailbox FIFO and place + * it in the Tx switch manager mailbox FIFO for processing by hardware. + **/ +STATIC void fm10k_sm_mbx_transmit(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + struct fm10k_mbx_fifo *fifo = &mbx->tx; + /* reduce length by 1 to convert to a mask */ + u16 mbmem_len = mbx->mbmem_len - 1; + u16 tail_len, len = 0; + u32 *msg; + + DEBUGFUNC("fm10k_sm_mbx_transmit"); + + /* push head behind tail */ + if (mbx->tail < head) + head += mbmem_len; + + fm10k_mbx_pull_head(hw, mbx, head); + + /* determine msg aligned offset for end of buffer */ + do { + msg = fifo->buffer + fm10k_fifo_head_offset(fifo, len); + tail_len = len; + len += FM10K_TLV_DWORD_LEN(*msg); + } while ((len <= mbx->tail_len) && (len < mbmem_len)); + + /* guarantee we stop on a message boundary */ + if (mbx->tail_len > tail_len) { + mbx->tail = fm10k_mbx_tail_sub(mbx, mbx->tail_len - tail_len); + mbx->tail_len = tail_len; + } + + /* clear any extra bits left over since index adds 1 extra bit */ + if (mbx->tail > mbmem_len) + mbx->tail -= mbmem_len; +} + +/** + * fm10k_sm_mbx_create_reply - Generate reply based on state and remote head + * @mbx: pointer to mailbox + * @head: acknowledgement number + * + * This function will generate an outgoing message based on the current + * mailbox state and the remote fifo head. It will return the length + * of the outgoing message excluding header on success, and a negative value + * on error. + **/ +STATIC void fm10k_sm_mbx_create_reply(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx, u16 head) +{ + switch (mbx->state) { + case FM10K_STATE_OPEN: + case FM10K_STATE_DISCONNECT: + /* flush out Tx data */ + fm10k_sm_mbx_transmit(hw, mbx, head); + + /* generate new header based on data */ + if (mbx->tail_len || (mbx->state == FM10K_STATE_OPEN)) { + fm10k_sm_mbx_create_data_hdr(mbx); + } else { + mbx->remote = 0; + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + } + break; + case FM10K_STATE_CONNECT: + case FM10K_STATE_CLOSED: + fm10k_sm_mbx_create_connect_hdr(mbx, 0); + break; + default: + break; + } +} + +/** + * fm10k_sm_mbx_process_reset - Process header with version == 0 (RESET) + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function is meant to respond to a request where the version data + * is set to 0. As such we will either terminate the connection or go + * into the connect state in order to re-establish the connection. 
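One way to read the index arithmetic in fm10k_sm_mbx_receive()/fm10k_sm_mbx_transmit() above: DWORD 0 of each mbmem window holds the header, so data indices run from 1 to mbmem_len - 1 and unwrapping one index past another adds mbmem_len - 1. A hypothetical helper expressing that interpretation (not part of the patch):

static u16 example_sm_index_distance(u16 from, u16 to, u16 mbmem_len)
{
        /* "push to in front of from", as the receive path does with tail */
        if (to < from)
                to += mbmem_len - 1;
        return to - from;
}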
This + * function can also be used to respond to an error as the connection + * resetting would also be a means of dealing with errors. + **/ +STATIC void fm10k_sm_mbx_process_reset(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const enum fm10k_mbx_state state = mbx->state; + + switch (state) { + case FM10K_STATE_DISCONNECT: + /* drop remote connections and disconnect */ + mbx->state = FM10K_STATE_CLOSED; + mbx->remote = 0; + mbx->local = 0; + break; + case FM10K_STATE_OPEN: + /* flush any incomplete work */ + fm10k_sm_mbx_connect_reset(mbx); + break; + case FM10K_STATE_CONNECT: + /* Update remote value to match local value */ + mbx->remote = mbx->local; + default: + break; + } + + fm10k_sm_mbx_create_reply(hw, mbx, mbx->tail); +} + +/** + * fm10k_sm_mbx_process_version_1 - Process header with version == 1 + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function is meant to process messages received when the remote + * mailbox is active. + **/ +STATIC s32 fm10k_sm_mbx_process_version_1(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + const u32 *hdr = &mbx->mbx_hdr; + u16 head, tail; + s32 len; + + /* pull all fields needed for verification */ + tail = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_TAIL); + head = FM10K_MSG_HDR_FIELD_GET(*hdr, SM_HEAD); + + /* if we are in connect and wanting version 1 then start up and go */ + if (mbx->state == FM10K_STATE_CONNECT) { + if (!mbx->remote) + goto send_reply; + if (mbx->remote != 1) + return FM10K_MBX_ERR_SRC; + + mbx->state = FM10K_STATE_OPEN; + } + + do { + /* abort on message size errors */ + len = fm10k_sm_mbx_receive(hw, mbx, tail); + if (len < 0) + return len; + + /* continue until we have flushed the Rx FIFO */ + } while (len); + +send_reply: + fm10k_sm_mbx_create_reply(hw, mbx, head); + + return FM10K_SUCCESS; +} + +/** + * fm10k_sm_mbx_process - Process mailbox switch mailbox interrupt + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * + * This function will process incoming mailbox events and generate mailbox + * replies. It will return a value indicating the number of DWORDs + * transmitted excluding header on success or a negative value on error. 
+ **/ +STATIC s32 fm10k_sm_mbx_process(struct fm10k_hw *hw, + struct fm10k_mbx_info *mbx) +{ + s32 err; + + DEBUGFUNC("fm10k_sm_mbx_process"); + + /* we do not read mailbox if closed */ + if (mbx->state == FM10K_STATE_CLOSED) + return FM10K_SUCCESS; + + /* retrieve data from switch manager */ + err = fm10k_mbx_read(hw, mbx); + if (err) + return err; + + err = fm10k_sm_mbx_validate_fifo_hdr(mbx); + if (err < 0) + goto fifo_err; + + if (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_ERR)) { + fm10k_sm_mbx_process_error(mbx); + goto fifo_err; + } + + switch (FM10K_MSG_HDR_FIELD_GET(mbx->mbx_hdr, SM_VER)) { + case 0: + fm10k_sm_mbx_process_reset(hw, mbx); + break; + case FM10K_SM_MBX_VERSION: + err = fm10k_sm_mbx_process_version_1(hw, mbx); + break; + } + +fifo_err: + if (err < 0) + fm10k_sm_mbx_create_error_msg(mbx, err); + + /* report data to switch manager */ + fm10k_mbx_write(hw, mbx); + + return err; +} + +/** + * fm10k_sm_mbx_init - Initialize mailbox memory for PF/SM mailbox + * @hw: pointer to hardware structure + * @mbx: pointer to mailbox + * @msg_data: handlers for mailbox events + * + * This function for now is used to stub out the PF/SM mailbox + **/ +s32 fm10k_sm_mbx_init(struct fm10k_hw *hw, struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *msg_data) +{ + DEBUGFUNC("fm10k_sm_mbx_init"); + UNREFERENCED_1PARAMETER(hw); + + mbx->mbx_reg = FM10K_GMBX; + mbx->mbmem_reg = FM10K_MBMEM_PF(0); + + /* start out in closed state */ + mbx->state = FM10K_STATE_CLOSED; + + /* validate layout of handlers before assigning them */ + if (fm10k_mbx_validate_handlers(msg_data)) + return FM10K_ERR_PARAM; + + /* initialize the message handlers */ + mbx->msg_data = msg_data; + + /* start mailbox as timed out and let the reset_hw call + * set the timeout value to begin communications + */ + mbx->timeout = 0; + mbx->usec_delay = FM10K_MBX_INIT_DELAY; + + /* Split buffer for use by Tx/Rx FIFOs */ + mbx->max_size = FM10K_MBX_MSG_MAX_SIZE; + mbx->mbmem_len = FM10K_MBMEM_PF_XOR; + + /* initialize the FIFOs, sizes are in 4 byte increments */ + fm10k_fifo_init(&mbx->tx, mbx->buffer, FM10K_MBX_TX_BUFFER_SIZE); + fm10k_fifo_init(&mbx->rx, &mbx->buffer[FM10K_MBX_TX_BUFFER_SIZE], + FM10K_MBX_RX_BUFFER_SIZE); + + /* initialize function pointers */ + mbx->ops.connect = fm10k_sm_mbx_connect; + mbx->ops.disconnect = fm10k_sm_mbx_disconnect; + mbx->ops.rx_ready = fm10k_mbx_rx_ready; + mbx->ops.tx_ready = fm10k_mbx_tx_ready; + mbx->ops.tx_complete = fm10k_mbx_tx_complete; + mbx->ops.enqueue_tx = fm10k_mbx_enqueue_tx; + mbx->ops.process = fm10k_sm_mbx_process; + mbx->ops.register_handlers = fm10k_mbx_register_handlers; + + return FM10K_SUCCESS; +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_mbx.h b/lib/librte_pmd_fm10k/base/fm10k_mbx.h new file mode 100644 index 0000000..6332584 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_mbx.h @@ -0,0 +1,329 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. 
Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_MBX_H_ +#define _FM10K_MBX_H_ + +/* forward declaration */ +struct fm10k_mbx_info; + +#include "fm10k_type.h" +#include "fm10k_tlv.h" + +/* PF Mailbox Registers */ +#define FM10K_MBMEM(_n) ((_n) + 0x18000) +#define FM10K_MBMEM_VF(_n, _m) (((_n) * 0x10) + (_m) + 0x18000) +#define FM10K_MBMEM_SM(_n) ((_n) + 0x18400) +#define FM10K_MBMEM_PF(_n) ((_n) + 0x18600) +/* XOR provides means of switching from Tx to Rx FIFO */ +#define FM10K_MBMEM_PF_XOR (FM10K_MBMEM_SM(0) ^ FM10K_MBMEM_PF(0)) +#define FM10K_MBX(_n) ((_n) + 0x18800) +#define FM10K_MBX_OWNER 0x00000001 +#define FM10K_MBX_REQ 0x00000002 +#define FM10K_MBX_ACK 0x00000004 +#define FM10K_MBX_REQ_INTERRUPT 0x00000008 +#define FM10K_MBX_ACK_INTERRUPT 0x00000010 +#define FM10K_MBX_INTERRUPT_ENABLE 0x00000020 +#define FM10K_MBX_INTERRUPT_DISABLE 0x00000040 +#define FM10K_MBICR(_n) ((_n) + 0x18840) +#define FM10K_GMBX 0x18842 + +/* VF Mailbox Registers */ +#define FM10K_VFMBX 0x00010 +#define FM10K_VFMBMEM(_n) ((_n) + 0x00020) +#define FM10K_VFMBMEM_LEN 16 +#define FM10K_VFMBMEM_VF_XOR (FM10K_VFMBMEM_LEN / 2) + +/* Delays/timeouts */ +#define FM10K_MBX_DISCONNECT_TIMEOUT 500 +#define FM10K_MBX_POLL_DELAY 19 +#define FM10K_MBX_INT_DELAY 20 + +#define FM10K_WRITE_MBX(hw, reg, value) FM10K_WRITE_REG(hw, reg, value) + +/* PF/VF Mailbox state machine + * + * +----------+ connect() +----------+ + * | CLOSED | --------------> | CONNECT | + * +----------+ +----------+ + * ^ ^ | + * | rcv: rcv: | | rcv: + * | Connect Disconnect | | Connect + * | Disconnect Error | | Data + * | | | + * | | V + * +----------+ disconnect() +----------+ + * |DISCONNECT| <-------------- | OPEN | + * +----------+ +----------+ + * + * The diagram above describes the PF/VF mailbox state machine. There + * are four main states to this machine. + * Closed: This state represents a mailbox that is in a standby state + * with interrupts disabled. In this state the mailbox should not + * read the mailbox or write any data. The only means of exiting + * this state is for the system to make the connect() call for the + * mailbox, it will then transition to the connect state. + * Connect: In this state the mailbox is seeking a connection. It will + * post a connect message with no specified destination and will + * wait for a reply from the other side of the mailbox. This state + * is exited when either a connect with the local mailbox as the + * destination is received or when a data message is received with + * a valid sequence number. 
+ * Open: In this state the mailbox is able to transfer data between the local + * entity and the remote. It will fall back to connect in the event of + * receiving either an error message, or a disconnect message. It will + * transition to disconnect on a call to disconnect(); + * Disconnect: In this state the mailbox is attempting to gracefully terminate + * the connection. It will do so at the first point where it knows + * that the remote endpoint is either done sending, or when the + * remote endpoint has fallen back into connect. + */ +enum fm10k_mbx_state { + FM10K_STATE_CLOSED, + FM10K_STATE_CONNECT, + FM10K_STATE_OPEN, + FM10K_STATE_DISCONNECT, +}; + +/* PF/VF Mailbox header format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | Size/Err_no/CRC | Rsvd0 | Head | Tail | Type | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * The layout above describes the format for the header used in the PF/VF + * mailbox. The header is broken out into the following fields: + * Type: There are 4 supported message types + * 0x8: Data header - used to transport message data + * 0xC: Connect header - used to establish connection + * 0xD: Disconnect header - used to tear down a connection + * 0xE: Error header - used to address message exceptions + * Tail: Tail index for local FIFO + * Tail index actually consists of two parts. The MSB of + * the head is a loop tracker, it is 0 on an even numbered + * loop through the FIFO, and 1 on the odd numbered loops. + * To get the actual mailbox offset based on the tail it + * is necessary to add bit 3 to bit 0 and clear bit 3. This + * gives us a valid range of 0x1 - 0xE. + * Head: Head index for remote FIFO + * Head index follows the same format as the tail index. + * Rsvd0: Reserved 0 portion of the mailbox header + * CRC: Running CRC for all data since connect plus current message header + * Size: Maximum message size - Applies only to connect headers + * The maximum message size is provided during connect to avoid + * jamming the mailbox with messages that do not fit. + * Err_no: Error number - Applies only to error headers + * The error number provides a indication of the type of error + * experienced. 
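The message-driven transitions described above can be condensed into a small table. The sketch below is illustrative only; the driver derives the next state inside its process routines rather than through a helper like this:

static enum fm10k_mbx_state
example_next_state(enum fm10k_mbx_state cur, enum fm10k_msg_type rcvd)
{
        switch (cur) {
        case FM10K_STATE_CONNECT:
                /* a matching connect or a valid data message opens the mbx */
                if (rcvd == FM10K_MSG_CONNECT || rcvd == FM10K_MSG_DATA)
                        return FM10K_STATE_OPEN;
                return FM10K_STATE_CONNECT;
        case FM10K_STATE_OPEN:
                /* an error or disconnect message falls back to connect */
                if (rcvd == FM10K_MSG_ERROR || rcvd == FM10K_MSG_DISCONNECT)
                        return FM10K_STATE_CONNECT;
                return FM10K_STATE_OPEN;
        default:
                /* CLOSED and DISCONNECT are left by connect()/disconnect()
                 * and by the remote side completing, respectively */
                return cur;
        }
}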
+ */ + +/* macros for retriving and setting header values */ +#define FM10K_MSG_HDR_MASK(name) \ + ((0x1u << FM10K_MSG_##name##_SIZE) - 1) +#define FM10K_MSG_HDR_FIELD_SET(value, name) \ + (((u32)(value) & FM10K_MSG_HDR_MASK(name)) << FM10K_MSG_##name##_SHIFT) +#define FM10K_MSG_HDR_FIELD_GET(value, name) \ + ((u16)((value) >> FM10K_MSG_##name##_SHIFT) & FM10K_MSG_HDR_MASK(name)) + +/* offsets shared between all headers */ +#define FM10K_MSG_TYPE_SHIFT 0 +#define FM10K_MSG_TYPE_SIZE 4 +#define FM10K_MSG_TAIL_SHIFT 4 +#define FM10K_MSG_TAIL_SIZE 4 +#define FM10K_MSG_HEAD_SHIFT 8 +#define FM10K_MSG_HEAD_SIZE 4 +#define FM10K_MSG_RSVD0_SHIFT 12 +#define FM10K_MSG_RSVD0_SIZE 4 + +/* offsets for data/disconnect headers */ +#define FM10K_MSG_CRC_SHIFT 16 +#define FM10K_MSG_CRC_SIZE 16 + +/* offsets for connect headers */ +#define FM10K_MSG_CONNECT_SIZE_SHIFT 16 +#define FM10K_MSG_CONNECT_SIZE_SIZE 16 + +/* offsets for error headers */ +#define FM10K_MSG_ERR_NO_SHIFT 16 +#define FM10K_MSG_ERR_NO_SIZE 16 + +enum fm10k_msg_type { + FM10K_MSG_DATA = 0x8, + FM10K_MSG_CONNECT = 0xC, + FM10K_MSG_DISCONNECT = 0xD, + FM10K_MSG_ERROR = 0xE, +}; + +/* HNI/SM Mailbox FIFO format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-------+-----------------------+-------+-----------------------+ + * | Error | Remote Head |Version| Local Tail | + * +-------+-----------------------+-------+-----------------------+ + * | | + * . Local FIFO Data . + * . . + * +-------+-----------------------+-------+-----------------------+ + * + * The layout above describes the format for the FIFOs used by the host + * network interface and the switch manager to communicate messages back + * and forth. Both the HNI and the switch maintain one such FIFO. The + * layout in memory has the switch manager FIFO followed immediately by + * the HNI FIFO. For this reason I am using just the pointer to the + * HNI FIFO in the mailbox ops as the offset between the two is fixed. + * + * The header for the FIFO is broken out into the following fields: + * Local Tail: Offset into FIFO region for next DWORD to write. + * Version: Version info for mailbox, only values of 0/1 are supported. + * Remote Head: Offset into remote FIFO to indicate how much we have read. + * Error: Error indication, values TBD. + */ + +/* version number for switch manager mailboxes */ +#define FM10K_SM_MBX_VERSION 1 +#define FM10K_SM_MBX_FIFO_LEN (FM10K_MBMEM_PF_XOR - 1) +#define FM10K_SM_MBX_FIFO_HDR_LEN 1 + +/* offsets shared between all SM FIFO headers */ +#define FM10K_MSG_SM_TAIL_SHIFT 0 +#define FM10K_MSG_SM_TAIL_SIZE 12 +#define FM10K_MSG_SM_VER_SHIFT 12 +#define FM10K_MSG_SM_VER_SIZE 4 +#define FM10K_MSG_SM_HEAD_SHIFT 16 +#define FM10K_MSG_SM_HEAD_SIZE 12 +#define FM10K_MSG_SM_ERR_SHIFT 28 +#define FM10K_MSG_SM_ERR_SIZE 4 + +/* All error messages returned by mailbox functions + * The value -511 is 0xFE01 in hex. The idea is to order the errors + * from 0xFE01 - 0xFEFF so error codes are easily visible in the mailbox + * messages. This also helps to avoid error number collisions as Linux + * doesn't appear to use error numbers 256 - 511. 
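As a worked example of the field macros above, packing tail = 1, version = 1 and head = 1 into a switch-manager header yields 0x00011001, and FM10K_MSG_HDR_FIELD_GET() recovers each field again (the values here are made up):

static u32 example_sm_hdr(void)
{
        u32 hdr = FM10K_MSG_HDR_FIELD_SET(1, SM_TAIL) |
                  FM10K_MSG_HDR_FIELD_SET(FM10K_SM_MBX_VERSION, SM_VER) |
                  FM10K_MSG_HDR_FIELD_SET(1, SM_HEAD);

        /* FM10K_MSG_HDR_FIELD_GET(hdr, SM_VER) == FM10K_SM_MBX_VERSION */
        return hdr;
}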
+ */ +#define FM10K_MBX_ERR(_n) ((_n) - 512) +#define FM10K_MBX_ERR_NO_MBX FM10K_MBX_ERR(0x01) +#define FM10K_MBX_ERR_NO_MSG FM10K_MBX_ERR(0x02) +#define FM10K_MBX_ERR_NO_SPACE FM10K_MBX_ERR(0x03) +#define FM10K_MBX_ERR_LOCK FM10K_MBX_ERR(0x04) +#define FM10K_MBX_ERR_TAIL FM10K_MBX_ERR(0x05) +#define FM10K_MBX_ERR_HEAD FM10K_MBX_ERR(0x06) +#define FM10K_MBX_ERR_DST FM10K_MBX_ERR(0x07) +#define FM10K_MBX_ERR_SRC FM10K_MBX_ERR(0x08) +#define FM10K_MBX_ERR_TYPE FM10K_MBX_ERR(0x09) +#define FM10K_MBX_ERR_LEN FM10K_MBX_ERR(0x0A) +#define FM10K_MBX_ERR_SIZE FM10K_MBX_ERR(0x0B) +#define FM10K_MBX_ERR_BUSY FM10K_MBX_ERR(0x0C) +#define FM10K_MBX_ERR_VALUE FM10K_MBX_ERR(0x0D) +#define FM10K_MBX_ERR_RSVD0 FM10K_MBX_ERR(0x0E) +#define FM10K_MBX_ERR_CRC FM10K_MBX_ERR(0x0F) + +#define FM10K_MBX_CRC_SEED 0xFFFF + +struct fm10k_mbx_ops { + s32 (*connect)(struct fm10k_hw *, struct fm10k_mbx_info *); + void (*disconnect)(struct fm10k_hw *, struct fm10k_mbx_info *); + bool (*rx_ready)(struct fm10k_mbx_info *); + bool (*tx_ready)(struct fm10k_mbx_info *, u16); + bool (*tx_complete)(struct fm10k_mbx_info *); + s32 (*enqueue_tx)(struct fm10k_hw *, struct fm10k_mbx_info *, + const u32 *); + s32 (*process)(struct fm10k_hw *, struct fm10k_mbx_info *); + s32 (*register_handlers)(struct fm10k_mbx_info *, + const struct fm10k_msg_data *); +}; + +struct fm10k_mbx_fifo { + u32 *buffer; + u16 head; + u16 tail; + u16 size; +}; + +/* size of buffer to be stored in mailbox for FIFOs */ +#define FM10K_MBX_TX_BUFFER_SIZE 512 +#define FM10K_MBX_RX_BUFFER_SIZE 128 +#define FM10K_MBX_BUFFER_SIZE \ + (FM10K_MBX_TX_BUFFER_SIZE + FM10K_MBX_RX_BUFFER_SIZE) + +/* minimum and maximum message size in dwords */ +#define FM10K_MBX_MSG_MAX_SIZE \ + ((FM10K_MBX_TX_BUFFER_SIZE - 1) & (FM10K_MBX_RX_BUFFER_SIZE - 1)) +#define FM10K_VFMBX_MSG_MTU ((FM10K_VFMBMEM_LEN / 2) - 1) + +#define FM10K_MBX_INIT_TIMEOUT 2000 /* number of retries on mailbox */ +#define FM10K_MBX_INIT_DELAY 500 /* microseconds between retries */ + +struct fm10k_mbx_info { + /* function pointers for mailbox operations */ + struct fm10k_mbx_ops ops; + const struct fm10k_msg_data *msg_data; + + /* message FIFOs */ + struct fm10k_mbx_fifo rx; + struct fm10k_mbx_fifo tx; + + /* delay for handling timeouts */ + u32 timeout; + u32 usec_delay; + + /* mailbox state info */ + u32 mbx_reg, mbmem_reg, mbx_lock, mbx_hdr; + u16 max_size, mbmem_len; + u16 tail, tail_len, pulled; + u16 head, head_len, pushed; + u16 local, remote; + enum fm10k_mbx_state state; + + /* result of last mailbox test */ + s32 test_result; + + /* statistics */ + u64 tx_busy; + u64 tx_dropped; + u64 tx_messages; + u64 tx_dwords; + u64 rx_messages; + u64 rx_dwords; + u64 rx_parse_err; + + /* Buffer to store messages */ + u32 buffer[FM10K_MBX_BUFFER_SIZE]; +}; + +s32 fm10k_pfvf_mbx_init(struct fm10k_hw *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *, u8); +s32 fm10k_sm_mbx_init(struct fm10k_hw *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *); + +#endif /* _FM10K_MBX_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_osdep.h b/lib/librte_pmd_fm10k/base/fm10k_osdep.h new file mode 100644 index 0000000..04f8fe9 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_osdep.h @@ -0,0 +1,148 @@ +/******************************************************************************* + +Copyright (c) 2013-2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. 
Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_OSDEP_H_ +#define _FM10K_OSDEP_H_ + +#include <stdint.h> +#include <string.h> +#include <rte_atomic.h> +#include <rte_byteorder.h> +#include <rte_cycles.h> +#include "../fm10k_logs.h" + +/* TODO: this does not look like it should be used... */ +#define ERROR_REPORT2(v1, v2, v3) do { } while (0) + +#define STATIC static +#define DEBUGFUNC(F) DEBUGOUT(F); +#define DEBUGOUT(S, args...) PMD_DRV_LOG_RAW(DEBUG, S, ##args) +#define DEBUGOUT1(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT2(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT3(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT6(S, args...) DEBUGOUT(S, ##args) +#define DEBUGOUT7(S, args...) 
DEBUGOUT(S, ##args) + +#define FALSE 0 +#define TRUE 1 +#ifndef false +#define false FALSE +#endif +#ifndef true +#define true TRUE +#endif + +typedef uint8_t u8; +typedef int8_t s8; +typedef uint16_t u16; +typedef int16_t s16; +typedef uint32_t u32; +typedef int32_t s32; +typedef int64_t s64; +typedef uint64_t u64; +typedef int bool; + +#ifndef __le16 +#define __le16 u16 +#define __le32 u32 +#define __le64 u64 +#endif +#ifndef __be16 +#define __be16 u16 +#define __be32 u32 +#define __be64 u64 +#endif + +/* offsets are WORD offsets, not BYTE offsets */ +#define FM10K_WRITE_REG(hw, reg, val) \ + ((((volatile uint32_t *)(hw)->hw_addr)[(reg)]) = ((uint32_t)(val))) +#define FM10K_READ_REG(hw, reg) \ + (((volatile uint32_t *)(hw)->hw_addr)[(reg)]) +#define FM10K_WRITE_FLUSH(a) FM10K_READ_REG(a, FM10K_CTRL) + +#define FM10K_PCI_REG(reg) (*((volatile uint32_t *)(reg))) + +#define FM10K_PCI_REG_WRITE(reg, value) do { \ + FM10K_PCI_REG((reg)) = (value); \ +} while (0) + +/* not implemented */ +#define FM10K_READ_PCI_WORD(hw, reg) 0 + +#define FM10K_WRITE_MBX(hw, reg, value) FM10K_WRITE_REG(hw, reg, value) +#define FM10K_READ_MBX(hw, reg) FM10K_READ_REG(hw, reg) + +#define FM10K_LE16_TO_CPU rte_le_to_cpu_16 +#define FM10K_LE32_TO_CPU rte_le_to_cpu_32 +#define FM10K_CPU_TO_LE32 rte_cpu_to_le_32 +#define FM10K_CPU_TO_LE16 rte_cpu_to_le_16 + +#define FM10K_RMB rte_rmb +#define FM10K_WMB rte_wmb + +#define usec_delay rte_delay_us + +#define FM10K_REMOVED(hw_addr) (!(hw_addr)) + +#ifndef FM10K_IS_ZERO_ETHER_ADDR +/* make certain address is not 0 */ +#define FM10K_IS_ZERO_ETHER_ADDR(addr) \ +(!((addr)[0] | (addr)[1] | (addr)[2] | (addr)[3] | (addr)[4] | (addr)[5])) +#endif + +#ifndef FM10K_IS_MULTICAST_ETHER_ADDR +#define FM10K_IS_MULTICAST_ETHER_ADDR(addr) ((addr)[0] & 0x1) +#endif + +#ifndef FM10K_IS_VALID_ETHER_ADDR +/* make certain address is not multicast or 0 */ +#define FM10K_IS_VALID_ETHER_ADDR(addr) \ +(!FM10K_IS_MULTICAST_ETHER_ADDR(addr) && !FM10K_IS_ZERO_ETHER_ADDR(addr)) +#endif + +#ifndef do_div +#define do_div(n, base) ({\ + (n) = (n) / (base);\ +}) +#endif /* do_div */ + +/* DPDK can't access IOMEM directly */ +#ifndef FM10K_WRITE_SW_REG +#define FM10K_WRITE_SW_REG(v1, v2, v3) do { } while (0) +#endif + +#ifndef fm10k_read_reg +#define fm10k_read_reg FM10K_READ_REG +#endif + +#endif /* _FM10K_OSDEP_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_pf.c b/lib/librte_pmd_fm10k/base/fm10k_pf.c new file mode 100644 index 0000000..3545a24 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_pf.c @@ -0,0 +1,1992 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. 
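The register access macros in fm10k_osdep.h above index hw_addr as an array of 32-bit words, so the "offset" passed to FM10K_READ_REG()/FM10K_WRITE_REG() is a DWORD index and lands at byte offset 4 * reg of the BAR. A hypothetical spelled-out equivalent of the write macro:

static void example_write_reg(struct fm10k_hw *hw, uint32_t reg, uint32_t val)
{
        volatile uint32_t *bar = (volatile uint32_t *)hw->hw_addr;

        bar[reg] = val;         /* writes byte offset 4 * reg of the BAR */
}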
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_pf.h" +#include "fm10k_vf.h" + +/** + * fm10k_reset_hw_pf - PF hardware reset + * @hw: pointer to hardware structure + * + * This function should return the hardware to a state similar to the + * one it is in after being powered on. + **/ +STATIC s32 fm10k_reset_hw_pf(struct fm10k_hw *hw) +{ + s32 err; + u32 reg; + u16 i; + + DEBUGFUNC("fm10k_reset_hw_pf"); + + /* Disable interrupts */ + FM10K_WRITE_REG(hw, FM10K_EIMR, FM10K_EIMR_DISABLE(ALL)); + + /* Lock ITR2 reg 0 into itself and disable interrupt moderation */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), 0); + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, 0); + + /* We assume here Tx and Rx queue 0 are owned by the PF */ + + /* Shut off VF access to their queues forcing them to queue 0 */ + for (i = 0; i < FM10K_TQMAP_TABLE_SIZE; i++) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(i), 0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(i), 0); + } + + /* shut down all rings */ + err = fm10k_disable_queues_generic(hw, FM10K_MAX_QUEUES); + if (err) + return err; + + /* Verify that DMA is no longer active */ + reg = FM10K_READ_REG(hw, FM10K_DMA_CTRL); + if (reg & (FM10K_DMA_CTRL_TX_ACTIVE | FM10K_DMA_CTRL_RX_ACTIVE)) + return FM10K_ERR_DMA_PENDING; + + /* verify the switch is ready for reset */ + reg = FM10K_READ_REG(hw, FM10K_DMA_CTRL2); + if (!(reg & FM10K_DMA_CTRL2_SWITCH_READY)) + goto out; + + /* Inititate data path reset */ + reg |= FM10K_DMA_CTRL_DATAPATH_RESET; + FM10K_WRITE_REG(hw, FM10K_DMA_CTRL, reg); + + /* Flush write and allow 100us for reset to complete */ + FM10K_WRITE_FLUSH(hw); + usec_delay(FM10K_RESET_TIMEOUT); + + /* Verify we made it out of reset */ + reg = FM10K_READ_REG(hw, FM10K_IP); + if (!(reg & FM10K_IP_NOTINRESET)) + err = FM10K_ERR_RESET_FAILED; + +out: + return err; +} + +/** + * fm10k_is_ari_hierarchy_pf - Indicate ARI hierarchy support + * @hw: pointer to hardware structure + * + * Looks at the ARI hierarchy bit to determine whether ARI is supported or not. 
+ **/ +STATIC bool fm10k_is_ari_hierarchy_pf(struct fm10k_hw *hw) +{ + u16 sriov_ctrl = FM10K_READ_PCI_WORD(hw, FM10K_PCIE_SRIOV_CTRL); + + DEBUGFUNC("fm10k_is_ari_hierarchy_pf"); + + return !!(sriov_ctrl & FM10K_PCIE_SRIOV_CTRL_VFARI); +} + +/** + * fm10k_init_hw_pf - PF hardware initialization + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_init_hw_pf(struct fm10k_hw *hw) +{ + u32 dma_ctrl, txqctl; + u16 i; + + DEBUGFUNC("fm10k_init_hw_pf"); + + /* Establish default VSI as valid */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(fm10k_dglort_default), 0); + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(fm10k_dglort_default), + FM10K_DGLORTMAP_ANY); + + /* Invalidate all other GLORT entries */ + for (i = 1; i < FM10K_DGLORT_COUNT; i++) + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(i), FM10K_DGLORTMAP_NONE); + + /* reset ITR2(0) to point to itself */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), 0); + + /* reset VF ITR2(0) to point to 0 avoid PF registers */ + FM10K_WRITE_REG(hw, FM10K_ITR2(FM10K_ITR_REG_COUNT_PF), 0); + + /* loop through all PF ITR2 registers pointing them to the previous */ + for (i = 1; i < FM10K_ITR_REG_COUNT_PF; i++) + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - 1); + + /* Enable interrupt moderator if not already enabled */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, FM10K_INT_CTRL_ENABLEMODERATOR); + + /* compute the default txqctl configuration */ + txqctl = FM10K_TXQCTL_PF | FM10K_TXQCTL_UNLIMITED_BW | + (hw->mac.default_vid << FM10K_TXQCTL_VID_SHIFT); + + for (i = 0; i < FM10K_MAX_QUEUES; i++) { + /* configure rings for 256 Queue / 32 Descriptor cache mode */ + FM10K_WRITE_REG(hw, FM10K_TQDLOC(i), + (i * FM10K_TQDLOC_BASE_32_DESC) | + FM10K_TQDLOC_SIZE_32_DESC); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), txqctl); + + /* configure rings to provide TPH processing hints */ + FM10K_WRITE_REG(hw, FM10K_TPH_TXCTRL(i), + FM10K_TPH_TXCTRL_DESC_TPHEN | + FM10K_TPH_TXCTRL_DESC_RROEN | + FM10K_TPH_TXCTRL_DESC_WROEN | + FM10K_TPH_TXCTRL_DATA_RROEN); + FM10K_WRITE_REG(hw, FM10K_TPH_RXCTRL(i), + FM10K_TPH_RXCTRL_DESC_TPHEN | + FM10K_TPH_RXCTRL_DESC_RROEN | + FM10K_TPH_RXCTRL_DATA_WROEN | + FM10K_TPH_RXCTRL_HDR_WROEN); + } + + /* set max hold interval to align with 1.024 usec in all modes */ + switch (hw->bus.speed) { + case fm10k_bus_speed_2500: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN1; + break; + case fm10k_bus_speed_5000: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN2; + break; + case fm10k_bus_speed_8000: + dma_ctrl = FM10K_DMA_CTRL_MAX_HOLD_1US_GEN3; + break; + default: + dma_ctrl = 0; + break; + } + + /* Configure TSO flags */ + FM10K_WRITE_REG(hw, FM10K_DTXTCPFLGL, FM10K_TSO_FLAGS_LOW); + FM10K_WRITE_REG(hw, FM10K_DTXTCPFLGH, FM10K_TSO_FLAGS_HI); + + /* Enable DMA engine + * Set Rx Descriptor size to 32 + * Set Minimum MSS to 64 + * Set Maximum number of Rx queues to 256 / 32 Descriptor + */ + dma_ctrl |= FM10K_DMA_CTRL_TX_ENABLE | FM10K_DMA_CTRL_RX_ENABLE | + FM10K_DMA_CTRL_RX_DESC_SIZE | FM10K_DMA_CTRL_MINMSS_64 | + FM10K_DMA_CTRL_32_DESC; + + FM10K_WRITE_REG(hw, FM10K_DMA_CTRL, dma_ctrl); + + /* record maximum queue count, we limit ourselves to 128 */ + hw->mac.max_queues = FM10K_MAX_QUEUES_PF; + + /* We support either 64 VFs or 7 VFs depending on if we have ARI */ + hw->iov.total_vfs = fm10k_is_ari_hierarchy_pf(hw) ? 
64 : 7; + + return FM10K_SUCCESS; +} + +/** + * fm10k_is_slot_appropriate_pf - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. + **/ +STATIC bool fm10k_is_slot_appropriate_pf(struct fm10k_hw *hw) +{ + DEBUGFUNC("fm10k_is_slot_appropriate_pf"); + + return (hw->bus.speed == hw->bus_caps.speed) && + (hw->bus.width == hw->bus_caps.width); +} + +/** + * fm10k_update_vlan_pf - Update status of VLAN ID in VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @vsi: Index indicating VF ID or PF ID in table + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for the corresponding function. In addition to the + * standard set/clear that supports one bit a multi-bit write is + * supported to set 64 bits at a time. + **/ +STATIC s32 fm10k_update_vlan_pf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set) +{ + u32 vlan_table, reg, mask, bit, len; + + /* verify the VSI index is valid */ + if (vsi > FM10K_VLAN_TABLE_VSI_MAX) + return FM10K_ERR_PARAM; + + /* VLAN multi-bit write: + * The multi-bit write has several parts to it. + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | RSVD0 | Length |C|RSVD0| VLAN ID | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * VLAN ID: Vlan Starting value + * RSVD0: Reserved section, must be 0 + * C: Flag field, 0 is set, 1 is clear (Used in VF VLAN message) + * Length: Number of times to repeat the bit being set + */ + len = vid >> 16; + vid = (vid << 17) >> 17; + + /* verify the reserved 0 fields are 0 */ + if (len >= FM10K_VLAN_TABLE_VID_MAX || vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* Loop through the table updating all required VLANs */ + for (reg = FM10K_VLAN_TABLE(vsi, vid / 32), bit = vid % 32; + len < FM10K_VLAN_TABLE_VID_MAX; + len -= 32 - bit, reg++, bit = 0) { + /* record the initial state of the register */ + vlan_table = FM10K_READ_REG(hw, reg); + + /* truncate mask if we are at the start or end of the run */ + mask = (~(u32)0 >> ((len < 31) ? 31 - len : 0)) << bit; + + /* make necessary modifications to the register */ + mask &= set ? ~vlan_table : vlan_table; + if (mask) + FM10K_WRITE_REG(hw, reg, vlan_table ^ mask); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_read_mac_addr_pf - Read device MAC address + * @hw: pointer to the HW structure + * + * Reads the device MAC address from the SM_AREA and stores the value. 
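A worked example of the multi-bit VLAN write format used by fm10k_update_vlan_pf() above: the length field counts repeats beyond the first bit, and bit 15 (the clear flag) only matters in the VF VLAN message, so setting 64 consecutive VLANs starting at ID 100 means passing (63 << 16) | 100 as the vid argument. The helper and values below are hypothetical:

static u32 example_vlan_multibit(u16 start_vid, u16 extra_repeats)
{
        /* covers VLAN IDs start_vid .. start_vid + extra_repeats */
        return ((u32)extra_repeats << 16) | start_vid;
}

/* example_vlan_multibit(100, 63) -> VLAN IDs 100..163 set in one call */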
+ **/ +STATIC s32 fm10k_read_mac_addr_pf(struct fm10k_hw *hw) +{ + u8 perm_addr[ETH_ALEN]; + u32 serial_num; + int i; + + DEBUGFUNC("fm10k_read_mac_addr_pf"); + + serial_num = FM10K_READ_REG(hw, FM10K_SM_AREA(1)); + + /* last byte should be all 1's */ + if ((~serial_num) << 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[0] = (u8)(serial_num >> 24); + perm_addr[1] = (u8)(serial_num >> 16); + perm_addr[2] = (u8)(serial_num >> 8); + + serial_num = FM10K_READ_REG(hw, FM10K_SM_AREA(0)); + + /* first byte should be all 1's */ + if ((~serial_num) >> 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[3] = (u8)(serial_num >> 16); + perm_addr[4] = (u8)(serial_num >> 8); + perm_addr[5] = (u8)(serial_num); + + for (i = 0; i < ETH_ALEN; i++) { + hw->mac.perm_addr[i] = perm_addr[i]; + hw->mac.addr[i] = perm_addr[i]; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_glort_valid_pf - Validate that the provided glort is valid + * @hw: pointer to the HW structure + * @glort: base glort to be validated + * + * This function will return an error if the provided glort is invalid + **/ +bool fm10k_glort_valid_pf(struct fm10k_hw *hw, u16 glort) +{ + glort &= hw->mac.dglort_map >> FM10K_DGLORTMAP_MASK_SHIFT; + + return glort == (hw->mac.dglort_map & FM10K_DGLORTMAP_NONE); +} + +/** + * fm10k_update_xc_addr_pf - Update device addresses + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure + * + * This function generates a message to the Switch API requesting + * that the given logical port add/remove the given L2 MAC/VLAN address. + **/ +STATIC s32 fm10k_update_xc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + struct fm10k_mac_update mac_update; + u32 msg[5]; + + DEBUGFUNC("fm10k_update_xc_addr_pf"); + + /* if glort or VLAN are not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort) || vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* record fields */ + mac_update.mac_lower = FM10K_CPU_TO_LE32(((u32)mac[2] << 24) | + ((u32)mac[3] << 16) | + ((u32)mac[4] << 8) | + ((u32)mac[5])); + mac_update.mac_upper = FM10K_CPU_TO_LE16(((u32)mac[0] << 8) | + ((u32)mac[1])); + mac_update.vlan = FM10K_CPU_TO_LE16(vid); + mac_update.glort = FM10K_CPU_TO_LE16(glort); + mac_update.action = add ? 0 : 1; + mac_update.flags = flags; + + /* populate mac_update fields */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_UPDATE_MAC_FWD_RULE); + fm10k_tlv_attr_put_le_struct(msg, FM10K_PF_ATTR_ID_MAC_UPDATE, + &mac_update, sizeof(mac_update)); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_uc_addr_pf - Update device unicast addresses + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure + * + * This function is used to add or remove unicast addresses for + * the PF. 
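To make the unpacking in fm10k_read_mac_addr_pf() above concrete, here is a hypothetical pair of SM_AREA values and the address they decode to (register contents made up):

/* illustration only:
 *   FM10K_SM_AREA(1) = 0xA036CFFF -> addr[0..2] = a0:36:cf
 *                                    (low byte must be all 1's)
 *   FM10K_SM_AREA(0) = 0xFF123456 -> addr[3..5] = 12:34:56
 *                                    (high byte must be all 1's)
 * giving the permanent MAC a0:36:cf:12:34:56
 */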
+ **/ +STATIC s32 fm10k_update_uc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + DEBUGFUNC("fm10k_update_uc_addr_pf"); + + /* verify MAC address is valid */ + if (!FM10K_IS_VALID_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + return fm10k_update_xc_addr_pf(hw, glort, mac, vid, add, flags); +} + +/** + * fm10k_update_mc_addr_pf - Update device multicast addresses + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses for + * the PF. + **/ +STATIC s32 fm10k_update_mc_addr_pf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add) +{ + DEBUGFUNC("fm10k_update_mc_addr_pf"); + + /* verify multicast address is valid */ + if (!FM10K_IS_MULTICAST_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + return fm10k_update_xc_addr_pf(hw, glort, mac, vid, add, 0); +} + +/** + * fm10k_update_xcast_mode_pf - Request update of multicast mode + * @hw: pointer to hardware structure + * @glort: base resource tag for this request + * @mode: integer value indicating mode being requested + * + * This function will attempt to request a higher mode for the port + * so that it can enable either multicast, multicast promiscuous, or + * promiscuous mode of operation. + **/ +STATIC s32 fm10k_update_xcast_mode_pf(struct fm10k_hw *hw, u16 glort, u8 mode) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3], xcast_mode; + + DEBUGFUNC("fm10k_update_xcast_mode_pf"); + + if (mode > FM10K_XCAST_MODE_NONE) + return FM10K_ERR_PARAM; + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* write xcast mode as a single u32 value, + * lower 16 bits: glort + * upper 16 bits: mode + */ + xcast_mode = ((u32)mode << 16) | glort; + + /* generate message requesting to change xcast mode */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_XCAST_MODES); + fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_XCAST_MODE, xcast_mode); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_int_moderator_pf - Update interrupt moderator linked list + * @hw: pointer to hardware structure + * + * This function walks through the MSI-X vector table to determine the + * number of active interrupts and based on that information updates the + * interrupt moderator linked list. 
+ **/ +STATIC void fm10k_update_int_moderator_pf(struct fm10k_hw *hw) +{ + u32 i; + + /* Disable interrupt moderator */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, 0); + + /* loop through PF from last to first looking enabled vectors */ + for (i = FM10K_ITR_REG_COUNT_PF - 1; i; i--) { + if (!FM10K_READ_REG(hw, FM10K_MSIX_VECTOR_MASK(i))) + break; + } + + /* always reset VFITR2[0] to point to last enabled PF vector */ + FM10K_WRITE_REG(hw, FM10K_ITR2(FM10K_ITR_REG_COUNT_PF), i); + + /* reset ITR2[0] to point to last enabled PF vector */ + if (!hw->iov.num_vfs) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), i); + + /* Enable interrupt moderator */ + FM10K_WRITE_REG(hw, FM10K_INT_CTRL, FM10K_INT_CTRL_ENABLEMODERATOR); +} + +/** + * fm10k_update_lport_state_pf - Notify the switch of a change in port state + * @hw: pointer to the HW structure + * @glort: base resource tag for this request + * @count: number of logical ports being updated + * @enable: boolean value indicating enable or disable + * + * This function is used to add/remove a logical port from the switch. + **/ +STATIC s32 fm10k_update_lport_state_pf(struct fm10k_hw *hw, u16 glort, + u16 count, bool enable) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3], lport_msg; + + DEBUGFUNC("fm10k_lport_state_pf"); + + /* do nothing if we are being asked to create or destroy 0 ports */ + if (!count) + return FM10K_SUCCESS; + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* construct the lport message from the 2 pieces of data we have */ + lport_msg = ((u32)count << 16) | glort; + + /* generate lport create/delete message */ + fm10k_tlv_msg_init(msg, enable ? FM10K_PF_MSG_ID_LPORT_CREATE : + FM10K_PF_MSG_ID_LPORT_DELETE); + fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_PORT, lport_msg); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_configure_dglort_map_pf - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned. 
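For scale, the sizing computed at the top of fm10k_configure_dglort_map_pf() below works out as follows for a hypothetical configuration (all field values made up):

static void example_dglort_sizing(void)
{
        u16 rss_l = 2, pc_l = 1, vsi_l = 3, queue_l = 0; /* hypothetical */
        u16 queues_per_vsi = 1 << (rss_l + pc_l);        /* 8 queues per VSI */
        u16 vsi_count = 1 << (vsi_l + queue_l);          /* 8 VSIs */

        /* => 8 * 8 = 64 SGLORT entries programmed, starting at queue_b */
}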
+ **/ +STATIC s32 fm10k_configure_dglort_map_pf(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + u16 glort, queue_count, vsi_count, pc_count; + u16 vsi, queue, pc, q_idx; + u32 txqctl, dglortdec, dglortmap; + + /* verify the dglort pointer */ + if (!dglort) + return FM10K_ERR_PARAM; + + /* verify the dglort values */ + if ((dglort->idx > 7) || (dglort->rss_l > 7) || (dglort->pc_l > 3) || + (dglort->vsi_l > 6) || (dglort->vsi_b > 64) || + (dglort->queue_l > 8) || (dglort->queue_b >= 256)) + return FM10K_ERR_PARAM; + + /* determine count of VSIs and queues */ + queue_count = 1 << (dglort->rss_l + dglort->pc_l); + vsi_count = 1 << (dglort->vsi_l + dglort->queue_l); + glort = dglort->glort; + q_idx = dglort->queue_b; + + /* configure SGLORT for queues */ + for (vsi = 0; vsi < vsi_count; vsi++, glort++) { + for (queue = 0; queue < queue_count; queue++, q_idx++) { + if (q_idx >= FM10K_MAX_QUEUES) + break; + + FM10K_WRITE_REG(hw, FM10K_TX_SGLORT(q_idx), glort); + FM10K_WRITE_REG(hw, FM10K_RX_SGLORT(q_idx), glort); + } + } + + /* determine count of PCs and queues */ + queue_count = 1 << (dglort->queue_l + dglort->rss_l + dglort->vsi_l); + pc_count = 1 << dglort->pc_l; + + /* configure PC for Tx queues */ + for (pc = 0; pc < pc_count; pc++) { + q_idx = pc + dglort->queue_b; + for (queue = 0; queue < queue_count; queue++) { + if (q_idx >= FM10K_MAX_QUEUES) + break; + + txqctl = FM10K_READ_REG(hw, FM10K_TXQCTL(q_idx)); + txqctl &= ~FM10K_TXQCTL_PC_MASK; + txqctl |= pc << FM10K_TXQCTL_PC_SHIFT; + FM10K_WRITE_REG(hw, FM10K_TXQCTL(q_idx), txqctl); + + q_idx += pc_count; + } + } + + /* configure DGLORTDEC */ + dglortdec = ((u32)(dglort->rss_l) << FM10K_DGLORTDEC_RSSLENGTH_SHIFT) | + ((u32)(dglort->queue_b) << FM10K_DGLORTDEC_QBASE_SHIFT) | + ((u32)(dglort->pc_l) << FM10K_DGLORTDEC_PCLENGTH_SHIFT) | + ((u32)(dglort->vsi_b) << FM10K_DGLORTDEC_VSIBASE_SHIFT) | + ((u32)(dglort->vsi_l) << FM10K_DGLORTDEC_VSILENGTH_SHIFT) | + ((u32)(dglort->queue_l)); + if (dglort->inner_rss) + dglortdec |= FM10K_DGLORTDEC_INNERRSS_ENABLE; + + /* configure DGLORTMAP */ + dglortmap = (dglort->idx == fm10k_dglort_default) ? + FM10K_DGLORTMAP_ANY : FM10K_DGLORTMAP_ZERO; + dglortmap <<= dglort->vsi_l + dglort->queue_l + dglort->shared_l; + dglortmap |= dglort->glort; + + /* write values to hardware */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(dglort->idx), dglortdec); + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(dglort->idx), dglortmap); + + return FM10K_SUCCESS; +} + +u16 fm10k_queues_per_pool(struct fm10k_hw *hw) +{ + u16 num_pools = hw->iov.num_pools; + + return (num_pools > 32) ? 2 : (num_pools > 16) ? 4 : (num_pools > 8) ? + 8 : FM10K_MAX_QUEUES_POOL; +} + +u16 fm10k_vf_queue_index(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 num_vfs = hw->iov.num_vfs; + u16 vf_q_idx = FM10K_MAX_QUEUES; + + vf_q_idx -= fm10k_queues_per_pool(hw) * (num_vfs - vf_idx); + + return vf_q_idx; +} + +STATIC u16 fm10k_vectors_per_pool(struct fm10k_hw *hw) +{ + u16 num_pools = hw->iov.num_pools; + + return (num_pools > 32) ? 8 : (num_pools > 16) ? 
16 : + FM10K_MAX_VECTORS_POOL; +} + +STATIC u16 fm10k_vf_vector_index(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 vf_v_idx = FM10K_MAX_VECTORS_PF; + + vf_v_idx += fm10k_vectors_per_pool(hw) * vf_idx; + + return vf_v_idx; +} + +/** + * fm10k_iov_assign_resources_pf - Assign pool resources for virtualization + * @hw: pointer to the HW structure + * @num_vfs: number of VFs to be allocated + * @num_pools: number of virtualization pools to be allocated + * + * Allocates queues and traffic classes to virtualization entities to prepare + * the PF for SR-IOV and VMDq + **/ +STATIC s32 fm10k_iov_assign_resources_pf(struct fm10k_hw *hw, u16 num_vfs, + u16 num_pools) +{ + u16 qmap_stride, qpp, vpp, vf_q_idx, vf_q_idx0, qmap_idx; + u32 vid = hw->mac.default_vid << FM10K_TXQCTL_VID_SHIFT; + int i, j; + + /* hardware only supports up to 64 pools */ + if (num_pools > 64) + return FM10K_ERR_PARAM; + + /* the number of VFs cannot exceed the number of pools */ + if ((num_vfs > num_pools) || (num_vfs > hw->iov.total_vfs)) + return FM10K_ERR_PARAM; + + /* record number of virtualization entities */ + hw->iov.num_vfs = num_vfs; + hw->iov.num_pools = num_pools; + + /* determine qmap offsets and counts */ + qmap_stride = (num_vfs > 8) ? 32 : 256; + qpp = fm10k_queues_per_pool(hw); + vpp = fm10k_vectors_per_pool(hw); + + /* calculate starting index for queues */ + vf_q_idx = fm10k_vf_queue_index(hw, 0); + qmap_idx = 0; + + /* establish TCs with -1 credits and no quanta to prevent transmit */ + for (i = 0; i < num_vfs; i++) { + FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(i), 0); + FM10K_WRITE_REG(hw, FM10K_TC_RATE(i), 0); + FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(i), + FM10K_TC_CREDIT_CREDIT_MASK); + } + + /* zero out all mbmem registers */ + for (i = FM10K_VFMBMEM_LEN * num_vfs; i--;) + FM10K_WRITE_REG(hw, FM10K_MBMEM(i), 0); + + /* clear event notification of VF FLR */ + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(0), ~0); + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(1), ~0); + + /* loop through unallocated rings assigning them back to PF */ + for (i = FM10K_MAX_QUEUES_PF; i < vf_q_idx; i++) { + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), FM10K_TXQCTL_PF | vid); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), FM10K_RXQCTL_PF); + } + + /* PF should have already updated VFITR2[0] */ + + /* update all ITR registers to flow to VFITR2[0] */ + for (i = FM10K_ITR_REG_COUNT_PF + 1; i < FM10K_ITR_REG_COUNT; i++) { + if (!(i & (vpp - 1))) + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - vpp); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(i), i - 1); + } + + /* update PF ITR2[0] to reference the last vector */ + FM10K_WRITE_REG(hw, FM10K_ITR2(0), + fm10k_vf_vector_index(hw, num_vfs - 1)); + + /* loop through rings populating rings and TCs */ + for (i = 0; i < num_vfs; i++) { + /* record index for VF queue 0 for use in end of loop */ + vf_q_idx0 = vf_q_idx; + + for (j = 0; j < qpp; j++, qmap_idx++, vf_q_idx++) { + /* assign VF and locked TC to queues */ + FM10K_WRITE_REG(hw, FM10K_TXDCTL(vf_q_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(vf_q_idx), + (i << FM10K_TXQCTL_TC_SHIFT) | i | + FM10K_TXQCTL_VF | vid); + FM10K_WRITE_REG(hw, FM10K_RXDCTL(vf_q_idx), + FM10K_RXDCTL_WRITE_BACK_MIN_DELAY | + FM10K_RXDCTL_DROP_ON_EMPTY); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(vf_q_idx), + FM10K_RXQCTL_VF | + (i << FM10K_RXQCTL_VF_SHIFT)); + + /* map queue pair to VF */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), vf_q_idx); + } + + /* repeat the first ring for all of the remaining VF rings */ + for (; j 
< qmap_stride; j++, qmap_idx++) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), vf_q_idx0); + } + } + + /* loop through remaining indexes assigning all to queue 0 */ + while (qmap_idx < FM10K_TQMAP_TABLE_SIZE) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), 0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx), 0); + qmap_idx++; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_configure_tc_pf - Configure the shaping group for VF + * @hw: pointer to the HW structure + * @vf_idx: index of VF receiving GLORT + * @rate: Rate indicated in Mb/s + * + * Configures the TC for a given VF to allow only up to a given number + * of Mb/s of outgoing Tx throughput. + **/ +STATIC s32 fm10k_iov_configure_tc_pf(struct fm10k_hw *hw, u16 vf_idx, int rate) +{ + /* configure defaults */ + u32 interval = FM10K_TC_RATE_INTERVAL_4US_GEN3; + u32 tc_rate = FM10K_TC_RATE_QUANTA_MASK; + + /* verify vf is in range */ + if (vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* set interval to align with 4.096 usec in all modes */ + switch (hw->bus.speed) { + case fm10k_bus_speed_2500: + interval = FM10K_TC_RATE_INTERVAL_4US_GEN1; + break; + case fm10k_bus_speed_5000: + interval = FM10K_TC_RATE_INTERVAL_4US_GEN2; + break; + default: + break; + } + + if (rate) { + if (rate > FM10K_VF_TC_MAX || rate < FM10K_VF_TC_MIN) + return FM10K_ERR_PARAM; + + /* The quanta is measured in Bytes per 4.096 or 8.192 usec + * The rate is provided in Mbits per second + * To translate from rate to quanta we need to multiply the + * rate by 8.192 usec and divide by 8 bits/byte. To avoid + * dealing with floating point we can round the values up + * to the nearest whole number ratio which gives us 128 / 125. + */ + tc_rate = (rate * 128) / 125; + + /* try to keep the rate limiting accurate by increasing + * the number of credits and interval for rates less than 4Gb/s + */ + if (rate < 4000) + interval <<= 1; + else + tc_rate >>= 1; + } + + /* update rate limiter with new values */ + FM10K_WRITE_REG(hw, FM10K_TC_RATE(vf_idx), tc_rate | interval); + FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(vf_idx), FM10K_TC_MAXCREDIT_64K); + FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(vf_idx), FM10K_TC_MAXCREDIT_64K); + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_assign_int_moderator_pf - Add VF interrupts to moderator list + * @hw: pointer to the HW structure + * @vf_idx: index of VF receiving GLORT + * + * Update the interrupt moderator linked list to include any MSI-X + * interrupts which the VF has enabled in the MSI-X vector table.
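+ *
+ * The search walks the VF's vectors from highest to lowest and links
+ * the first unmasked vector it finds into the ITR2 chain: from ITR2(0)
+ * when this is the last VF, or from the first vector of the next VF's
+ * block otherwise.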
+ **/ +STATIC s32 fm10k_iov_assign_int_moderator_pf(struct fm10k_hw *hw, u16 vf_idx) +{ + u16 vf_v_idx, vf_v_limit, i; + + /* verify vf is in range */ + if (vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* determine vector offset and count */ + vf_v_idx = fm10k_vf_vector_index(hw, vf_idx); + vf_v_limit = vf_v_idx + fm10k_vectors_per_pool(hw); + + /* search for first vector that is not masked */ + for (i = vf_v_limit - 1; i > vf_v_idx; i--) { + if (!FM10K_READ_REG(hw, FM10K_MSIX_VECTOR_MASK(i))) + break; + } + + /* reset linked list so it now includes our active vectors */ + if (vf_idx == (hw->iov.num_vfs - 1)) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), i); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_limit), i); + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_assign_default_mac_vlan_pf - Assign a MAC and VLAN to VF + * @hw: pointer to the HW structure + * @vf_info: pointer to VF information structure + * + * Assign a MAC address and default VLAN to a VF and notify it of the update + **/ +STATIC s32 fm10k_iov_assign_default_mac_vlan_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u16 qmap_stride, queues_per_pool, vf_q_idx, timeout, qmap_idx, i; + u32 msg[4], txdctl, txqctl, tdbal = 0, tdbah = 0; + s32 err = FM10K_SUCCESS; + u16 vf_idx, vf_vid; + + /* verify vf is in range */ + if (!vf_info || vf_info->vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* determine qmap offsets and counts */ + qmap_stride = (hw->iov.num_vfs > 8) ? 32 : 256; + queues_per_pool = fm10k_queues_per_pool(hw); + + /* calculate starting index for queues */ + vf_idx = vf_info->vf_idx; + vf_q_idx = fm10k_vf_queue_index(hw, vf_idx); + qmap_idx = qmap_stride * vf_idx; + + /* MAP Tx queue back to 0 temporarily, and disable it */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(vf_q_idx), 0); + + /* determine correct default VLAN ID */ + if (vf_info->pf_vid) + vf_vid = vf_info->pf_vid | FM10K_VLAN_CLEAR; + else + vf_vid = vf_info->sw_vid; + + /* generate MAC_ADDR request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_DEFAULT_MAC, + vf_info->mac, vf_vid); + + /* load onto outgoing mailbox, ignore any errors on enqueue */ + if (vf_info->mbx.ops.enqueue_tx) + vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg); + + /* verify ring has disabled before modifying base address registers */ + txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(vf_q_idx)); + for (timeout = 0; txdctl & FM10K_TXDCTL_ENABLE; timeout++) { + /* limit ourselves to a 1ms timeout */ + if (timeout == 10) { + err = FM10K_ERR_DMA_PENDING; + goto err_out; + } + + usec_delay(100); + txdctl = FM10K_READ_REG(hw, FM10K_TXDCTL(vf_q_idx)); + } + + /* Update base address registers to contain MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac)) { + tdbal = (((u32)vf_info->mac[3]) << 24) | + (((u32)vf_info->mac[4]) << 16) | + (((u32)vf_info->mac[5]) << 8); + + tdbah = (((u32)0xFF) << 24) | + (((u32)vf_info->mac[0]) << 16) | + (((u32)vf_info->mac[1]) << 8) | + ((u32)vf_info->mac[2]); + } + + /* Record the base address into queue 0 */ + FM10K_WRITE_REG(hw, FM10K_TDBAL(vf_q_idx), tdbal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(vf_q_idx), tdbah); + +err_out: + /* configure Queue control register */ + txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) & + FM10K_TXQCTL_VID_MASK; + txqctl |= (vf_idx << FM10K_TXQCTL_TC_SHIFT) | + FM10K_TXQCTL_VF | vf_idx; + + /* assign VID */ + for (i = 0; i < queues_per_pool; i++) + FM10K_WRITE_REG(hw, FM10K_TXQCTL(vf_q_idx + i), 
txqctl); + + /* restore the queue back to VF ownership */ + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx), vf_q_idx); + return err; +} + +/** + * fm10k_iov_reset_resources_pf - Reassign queues and interrupts to a VF + * @hw: pointer to the HW structure + * @vf_info: pointer to VF information structure + * + * Reassign the interrupts and queues to a VF following an FLR + **/ +STATIC s32 fm10k_iov_reset_resources_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u16 qmap_stride, queues_per_pool, vf_q_idx, qmap_idx; + u32 tdbal = 0, tdbah = 0, txqctl, rxqctl; + u16 vf_v_idx, vf_v_limit, vf_vid; + u8 vf_idx = vf_info->vf_idx; + int i; + + /* verify vf is in range */ + if (vf_idx >= hw->iov.num_vfs) + return FM10K_ERR_PARAM; + + /* clear event notification of VF FLR */ + FM10K_WRITE_REG(hw, FM10K_PFVFLREC(vf_idx / 32), 1 << (vf_idx % 32)); + + /* force timeout and then disconnect the mailbox */ + vf_info->mbx.timeout = 0; + if (vf_info->mbx.ops.disconnect) + vf_info->mbx.ops.disconnect(hw, &vf_info->mbx); + + /* determine vector offset and count */ + vf_v_idx = fm10k_vf_vector_index(hw, vf_idx); + vf_v_limit = vf_v_idx + fm10k_vectors_per_pool(hw); + + /* determine qmap offsets and counts */ + qmap_stride = (hw->iov.num_vfs > 8) ? 32 : 256; + queues_per_pool = fm10k_queues_per_pool(hw); + qmap_idx = qmap_stride * vf_idx; + + /* make all the queues inaccessible to the VF */ + for (i = qmap_idx; i < (qmap_idx + qmap_stride); i++) { + FM10K_WRITE_REG(hw, FM10K_TQMAP(i), 0); + FM10K_WRITE_REG(hw, FM10K_RQMAP(i), 0); + } + + /* calculate starting index for queues */ + vf_q_idx = fm10k_vf_queue_index(hw, vf_idx); + + /* determine correct default VLAN ID */ + if (vf_info->pf_vid) + vf_vid = vf_info->pf_vid; + else + vf_vid = vf_info->sw_vid; + + /* configure Queue control register */ + txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) | + (vf_idx << FM10K_TXQCTL_TC_SHIFT) | + FM10K_TXQCTL_VF | vf_idx; + rxqctl = FM10K_RXQCTL_VF | (vf_idx << FM10K_RXQCTL_VF_SHIFT); + + /* stop further DMA and reset queue ownership back to VF */ + for (i = vf_q_idx; i < (queues_per_pool + vf_q_idx); i++) { + FM10K_WRITE_REG(hw, FM10K_TXDCTL(i), 0); + FM10K_WRITE_REG(hw, FM10K_TXQCTL(i), txqctl); + FM10K_WRITE_REG(hw, FM10K_RXDCTL(i), + FM10K_RXDCTL_WRITE_BACK_MIN_DELAY | + FM10K_RXDCTL_DROP_ON_EMPTY); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(i), rxqctl); + } + + /* reset TC with -1 credits and no quanta to prevent transmit */ + FM10K_WRITE_REG(hw, FM10K_TC_MAXCREDIT(vf_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TC_RATE(vf_idx), 0); + FM10K_WRITE_REG(hw, FM10K_TC_CREDIT(vf_idx), + FM10K_TC_CREDIT_CREDIT_MASK); + + /* update our first entry in the table based on previous VF */ + if (!vf_idx) + hw->mac.ops.update_int_moderator(hw); + else + hw->iov.ops.assign_int_moderator(hw, vf_idx - 1); + + /* reset linked list so it now includes our active vectors */ + if (vf_idx == (hw->iov.num_vfs - 1)) + FM10K_WRITE_REG(hw, FM10K_ITR2(0), vf_v_idx); + else + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_limit), vf_v_idx); + + /* link remaining vectors so that next points to previous */ + for (vf_v_idx++; vf_v_idx < vf_v_limit; vf_v_idx++) + FM10K_WRITE_REG(hw, FM10K_ITR2(vf_v_idx), vf_v_idx - 1); + + /* zero out MBMEM, VLAN_TABLE, RETA, RSSRK, and MRQC registers */ + for (i = FM10K_VFMBMEM_LEN; i--;) + FM10K_WRITE_REG(hw, FM10K_MBMEM_VF(vf_idx, i), 0); + for (i = FM10K_VLAN_TABLE_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_VLAN_TABLE(vf_info->vsi, i), 0); + for (i = FM10K_RETA_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_RETA(vf_info->vsi, i), 0); + for 
(i = FM10K_RSSRK_SIZE; i--;) + FM10K_WRITE_REG(hw, FM10K_RSSRK(vf_info->vsi, i), 0); + FM10K_WRITE_REG(hw, FM10K_MRQC(vf_info->vsi), 0); + + /* Update base address registers to contain MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac)) { + tdbal = (((u32)vf_info->mac[3]) << 24) | + (((u32)vf_info->mac[4]) << 16) | + (((u32)vf_info->mac[5]) << 8); + tdbah = (((u32)0xFF) << 24) | + (((u32)vf_info->mac[0]) << 16) | + (((u32)vf_info->mac[1]) << 8) | + ((u32)vf_info->mac[2]); + } + + /* map queue pairs back to VF from last to first */ + for (i = queues_per_pool; i--;) { + FM10K_WRITE_REG(hw, FM10K_TDBAL(vf_q_idx + i), tdbal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(vf_q_idx + i), tdbah); + FM10K_WRITE_REG(hw, FM10K_TQMAP(qmap_idx + i), vf_q_idx + i); + FM10K_WRITE_REG(hw, FM10K_RQMAP(qmap_idx + i), vf_q_idx + i); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_set_lport_pf - Assign and enable a logical port for a given VF + * @hw: pointer to hardware structure + * @vf_info: pointer to VF information structure + * @lport_idx: Logical port offset from the hardware glort + * @flags: Set of capability flags to extend port beyond basic functionality + * + * This function allows enabling a VF port by assigning it a GLORT and + * setting the flags so that it can enable an Rx mode. + **/ +STATIC s32 fm10k_iov_set_lport_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info, + u16 lport_idx, u8 flags) +{ + u16 glort = (hw->mac.dglort_map + lport_idx) & FM10K_DGLORTMAP_NONE; + + DEBUGFUNC("fm10k_iov_set_lport_state_pf"); + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + vf_info->vf_flags = flags | FM10K_VF_FLAG_NONE_CAPABLE; + vf_info->glort = glort; + + return FM10K_SUCCESS; +} + +/** + * fm10k_iov_reset_lport_pf - Disable a logical port for a given VF + * @hw: pointer to hardware structure + * @vf_info: pointer to VF information structure + * + * This function disables a VF port by stripping it of a GLORT and + * setting the flags so that it cannot enable any Rx mode. + **/ +STATIC void fm10k_iov_reset_lport_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info) +{ + u32 msg[1]; + + DEBUGFUNC("fm10k_iov_reset_lport_state_pf"); + + /* need to disable the port if it is already enabled */ + if (FM10K_VF_FLAG_ENABLED(vf_info)) { + /* notify switch that this port has been disabled */ + fm10k_update_lport_state_pf(hw, vf_info->glort, 1, false); + + /* generate port state response to notify VF it is not ready */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg); + } + + /* clear flags and glort if it exists */ + vf_info->vf_flags = 0; + vf_info->glort = 0; +} + +/** + * fm10k_iov_update_stats_pf - Updates hardware related statistics for VFs + * @hw: pointer to hardware structure + * @q: stats for all queues of a VF + * @vf_idx: index of VF + * + * This function collects queue stats for VFs. 
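+ *
+ * The queue range is derived from the pool layout; e.g. with
+ * hw->iov.num_pools = 24, fm10k_queues_per_pool() yields 4 queues
+ * starting at fm10k_vf_queue_index(hw, vf_idx).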
+ **/ +STATIC void fm10k_iov_update_stats_pf(struct fm10k_hw *hw, + struct fm10k_hw_stats_q *q, + u16 vf_idx) +{ + u32 idx, qpp; + + /* get stats for all of the queues */ + qpp = fm10k_queues_per_pool(hw); + idx = fm10k_vf_queue_index(hw, vf_idx); + fm10k_update_hw_stats_q(hw, q, idx, qpp); +} + +STATIC s32 fm10k_iov_report_timestamp_pf(struct fm10k_hw *hw, + struct fm10k_vf_info *vf_info, + u64 timestamp) +{ + u32 msg[4]; + + /* generate port state response to notify VF it is not ready */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_1588); + fm10k_tlv_attr_put_u64(msg, FM10K_1588_MSG_TIMESTAMP, timestamp); + + return vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg); +} + +/** + * fm10k_iov_msg_msix_pf - Message handler for MSI-X request from VF + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Pointer to mailbox information structure + * + * This function is a default handler for MSI-X requests from the VF. The + * assumption is that in this case it is acceptable to just directly + * hand off the message from the VF to the underlying shared code. + **/ +s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx; + u8 vf_idx = vf_info->vf_idx; + + UNREFERENCED_1PARAMETER(results); + DEBUGFUNC("fm10k_iov_msg_msix_pf"); + + return hw->iov.ops.assign_int_moderator(hw, vf_idx); +} + +/** + * fm10k_iov_msg_mac_vlan_pf - Message handler for MAC/VLAN request from VF + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Pointer to mailbox information structure + * + * This function is a default handler for MAC/VLAN requests from the VF. + * The assumption is that in this case it is acceptable to just directly + * hand off the message from the VF to the underlying shared code. 
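+ *
+ * Requests from a disabled VF are rejected, MAC updates must match any
+ * administratively locked address, and VLAN requests other than the
+ * default are limited to the PF-assigned VLAN.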
+ **/ +s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx; + int err = FM10K_SUCCESS; + u8 mac[ETH_ALEN]; + u32 *result; + u16 vlan; + u32 vid; + + DEBUGFUNC("fm10k_iov_msg_mac_vlan_pf"); + + /* we shouldn't be updating rules on a disabled interface */ + if (!FM10K_VF_FLAG_ENABLED(vf_info)) + err = FM10K_ERR_PARAM; + + if (!err && !!results[FM10K_MAC_VLAN_MSG_VLAN]) { + result = results[FM10K_MAC_VLAN_MSG_VLAN]; + + /* record VLAN id requested */ + err = fm10k_tlv_attr_get_u32(result, &vid); + if (err) + return err; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vid || (vid == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vid |= vf_info->pf_vid; + else + vid |= vf_info->sw_vid; + } else if (vid != vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* update VSI info for VF in regards to VLAN table */ + err = hw->mac.ops.update_vlan(hw, vid, vf_info->vsi, + !(vid & FM10K_VLAN_CLEAR)); + } + + if (!err && !!results[FM10K_MAC_VLAN_MSG_MAC]) { + result = results[FM10K_MAC_VLAN_MSG_MAC]; + + /* record unicast MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan); + if (err) + return err; + + /* block attempts to set MAC for a locked device */ + if (FM10K_IS_VALID_ETHER_ADDR(vf_info->mac) && + memcmp(mac, vf_info->mac, ETH_ALEN)) + return FM10K_ERR_PARAM; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vlan || (vlan == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vlan |= vf_info->pf_vid; + else + vlan |= vf_info->sw_vid; + } else if (vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* notify switch of request for new unicast address */ + err = hw->mac.ops.update_uc_addr(hw, vf_info->glort, mac, vlan, + !(vlan & FM10K_VLAN_CLEAR), 0); + } + + if (!err && !!results[FM10K_MAC_VLAN_MSG_MULTICAST]) { + result = results[FM10K_MAC_VLAN_MSG_MULTICAST]; + + /* record multicast MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan(result, mac, &vlan); + if (err) + return err; + + /* verify that the VF is allowed to request multicast */ + if (!(vf_info->vf_flags & FM10K_VF_FLAG_MULTI_ENABLED)) + return FM10K_ERR_PARAM; + + /* if VLAN ID is 0, set the default VLAN ID instead of 0 */ + if (!vlan || (vlan == FM10K_VLAN_CLEAR)) { + if (vf_info->pf_vid) + vlan |= vf_info->pf_vid; + else + vlan |= vf_info->sw_vid; + } else if (vf_info->pf_vid) { + return FM10K_ERR_PARAM; + } + + /* notify switch of request for new multicast address */ + err = hw->mac.ops.update_mc_addr(hw, vf_info->glort, mac, + !(vlan & FM10K_VLAN_CLEAR), 0); + } + + return err; +} + +/** + * fm10k_iov_supported_xcast_mode_pf - Determine best match for xcast mode + * @vf_info: VF info structure containing capability flags + * @mode: Requested xcast mode + * + * This function outputs the mode that most closely matches the requested + * mode. 
If no modes match it will request we disable the port + **/ +STATIC u8 fm10k_iov_supported_xcast_mode_pf(struct fm10k_vf_info *vf_info, + u8 mode) +{ + u8 vf_flags = vf_info->vf_flags; + + /* match up mode to capabilities as best as possible */ + switch (mode) { + case FM10K_XCAST_MODE_PROMISC: + if (vf_flags & FM10K_VF_FLAG_PROMISC_CAPABLE) + return FM10K_XCAST_MODE_PROMISC; + /* fallthrough */ + case FM10K_XCAST_MODE_ALLMULTI: + if (vf_flags & FM10K_VF_FLAG_ALLMULTI_CAPABLE) + return FM10K_XCAST_MODE_ALLMULTI; + /* fallthrough */ + case FM10K_XCAST_MODE_MULTI: + if (vf_flags & FM10K_VF_FLAG_MULTI_CAPABLE) + return FM10K_XCAST_MODE_MULTI; + /* fallthrough */ + case FM10K_XCAST_MODE_NONE: + if (vf_flags & FM10K_VF_FLAG_NONE_CAPABLE) + return FM10K_XCAST_MODE_NONE; + /* fallthrough */ + default: + break; + } + + /* disable interface as it should not be able to request any */ + return FM10K_XCAST_MODE_DISABLE; +} + +/** + * fm10k_iov_msg_lport_state_pf - Message handler for port state requests + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Pointer to mailbox information structure + * + * This function is a default handler for port state requests. The port + * state requests for now are basic and consist of enabling or disabling + * the port. + **/ +s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_vf_info *vf_info = (struct fm10k_vf_info *)mbx; + u32 *result; + s32 err = FM10K_SUCCESS; + u32 msg[2]; + u8 mode = 0; + + DEBUGFUNC("fm10k_iov_msg_lport_state_pf"); + + /* verify VF is allowed to enable even minimal mode */ + if (!(vf_info->vf_flags & FM10K_VF_FLAG_NONE_CAPABLE)) + return FM10K_ERR_PARAM; + + if (!!results[FM10K_LPORT_STATE_MSG_XCAST_MODE]) { + result = results[FM10K_LPORT_STATE_MSG_XCAST_MODE]; + + /* XCAST mode update requested */ + err = fm10k_tlv_attr_get_u8(result, &mode); + if (err) + return FM10K_ERR_PARAM; + + /* prep for possible demotion depending on capabilities */ + mode = fm10k_iov_supported_xcast_mode_pf(vf_info, mode); + + /* if mode is not currently enabled, enable it */ + if (!(FM10K_VF_FLAG_ENABLED(vf_info) & (1 << mode))) + fm10k_update_xcast_mode_pf(hw, vf_info->glort, mode); + + /* swap mode back to a bit flag */ + mode = FM10K_VF_FLAG_SET_MODE(mode); + } else if (!results[FM10K_LPORT_STATE_MSG_DISABLE]) { + /* need to disable the port if it is already enabled */ + if (FM10K_VF_FLAG_ENABLED(vf_info)) + err = fm10k_update_lport_state_pf(hw, vf_info->glort, + 1, false); + + /* when enabling the port we should reset the rate limiters */ + hw->iov.ops.configure_tc(hw, vf_info->vf_idx, vf_info->rate); + + /* set mode for minimal functionality */ + mode = FM10K_VF_FLAG_SET_MODE_NONE; + + /* generate port state response to notify VF it is ready */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + fm10k_tlv_attr_put_bool(msg, FM10K_LPORT_STATE_MSG_READY); + mbx->ops.enqueue_tx(hw, mbx, msg); + } + + /* if enable state toggled note the update */ + if (!err && (!FM10K_VF_FLAG_ENABLED(vf_info) != !mode)) + err = fm10k_update_lport_state_pf(hw, vf_info->glort, 1, + !!mode); + + /* if state change succeeded, then update our stored state */ + mode |= FM10K_VF_FLAG_CAPABLE(vf_info); + if (!err) + vf_info->vf_flags = mode; + + return err; +} + +const struct fm10k_msg_data fm10k_iov_msg_data_pf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MSIX_HANDLER(fm10k_iov_msg_msix_pf), +
FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_iov_msg_mac_vlan_pf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_iov_msg_lport_state_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_update_stats_hw_pf - Updates hardware related statistics of PF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function collects and aggregates global and per queue hardware + * statistics. + **/ +STATIC void fm10k_update_hw_stats_pf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + u32 timeout, ur, ca, um, xec, vlan_drop, loopback_drop, nodesc_drop; + u32 id, id_prev; + + DEBUGFUNC("fm10k_update_hw_stats_pf"); + + /* Use Tx queue 0 as a canary to detect a reset */ + id = FM10K_READ_REG(hw, FM10K_TXQCTL(0)); + + /* Read Global Statistics */ + do { + timeout = fm10k_read_hw_stats_32b(hw, FM10K_STATS_TIMEOUT, + &stats->timeout); + ur = fm10k_read_hw_stats_32b(hw, FM10K_STATS_UR, &stats->ur); + ca = fm10k_read_hw_stats_32b(hw, FM10K_STATS_CA, &stats->ca); + um = fm10k_read_hw_stats_32b(hw, FM10K_STATS_UM, &stats->um); + xec = fm10k_read_hw_stats_32b(hw, FM10K_STATS_XEC, &stats->xec); + vlan_drop = fm10k_read_hw_stats_32b(hw, FM10K_STATS_VLAN_DROP, + &stats->vlan_drop); + loopback_drop = fm10k_read_hw_stats_32b(hw, + FM10K_STATS_LOOPBACK_DROP, + &stats->loopback_drop); + nodesc_drop = fm10k_read_hw_stats_32b(hw, + FM10K_STATS_NODESC_DROP, + &stats->nodesc_drop); + + /* if value has not changed then we have consistent data */ + id_prev = id; + id = FM10K_READ_REG(hw, FM10K_TXQCTL(0)); + } while ((id ^ id_prev) & FM10K_TXQCTL_ID_MASK); + + /* drop non-ID bits and set VALID ID bit */ + id &= FM10K_TXQCTL_ID_MASK; + id |= FM10K_STAT_VALID; + + /* Update Global Statistics */ + if (stats->stats_idx == id) { + stats->timeout.count += timeout; + stats->ur.count += ur; + stats->ca.count += ca; + stats->um.count += um; + stats->xec.count += xec; + stats->vlan_drop.count += vlan_drop; + stats->loopback_drop.count += loopback_drop; + stats->nodesc_drop.count += nodesc_drop; + } + + /* Update bases and record current PF id */ + fm10k_update_hw_base_32b(&stats->timeout, timeout); + fm10k_update_hw_base_32b(&stats->ur, ur); + fm10k_update_hw_base_32b(&stats->ca, ca); + fm10k_update_hw_base_32b(&stats->um, um); + fm10k_update_hw_base_32b(&stats->xec, xec); + fm10k_update_hw_base_32b(&stats->vlan_drop, vlan_drop); + fm10k_update_hw_base_32b(&stats->loopback_drop, loopback_drop); + fm10k_update_hw_base_32b(&stats->nodesc_drop, nodesc_drop); + stats->stats_idx = id; + + /* Update Queue Statistics */ + fm10k_update_hw_stats_q(hw, stats->q, 0, hw->mac.max_queues); +} + +/** + * fm10k_rebind_hw_stats_pf - Resets base for hardware statistics of PF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function resets the base for global and per queue hardware + * statistics. 
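+ *
+ * This is typically called after a device reset so that later calls to
+ * update_hw_stats compute deltas against freshly captured base values.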
+ **/ +STATIC void fm10k_rebind_hw_stats_pf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_rebind_hw_stats_pf"); + + /* Unbind Global Statistics */ + fm10k_unbind_hw_stats_32b(&stats->timeout); + fm10k_unbind_hw_stats_32b(&stats->ur); + fm10k_unbind_hw_stats_32b(&stats->ca); + fm10k_unbind_hw_stats_32b(&stats->um); + fm10k_unbind_hw_stats_32b(&stats->xec); + fm10k_unbind_hw_stats_32b(&stats->vlan_drop); + fm10k_unbind_hw_stats_32b(&stats->loopback_drop); + fm10k_unbind_hw_stats_32b(&stats->nodesc_drop); + + /* Unbind Queue Statistics */ + fm10k_unbind_hw_stats_q(stats->q, 0, hw->mac.max_queues); + + /* Reinitialize bases for all stats */ + fm10k_update_hw_stats_pf(hw, stats); +} + +/** + * fm10k_set_dma_mask_pf - Configures PhyAddrSpace to limit DMA to system + * @hw: pointer to hardware structure + * @dma_mask: 64 bit DMA mask required for platform + * + * This function sets the PHYADDR.PhyAddrSpace bits for the endpoint in order + * to limit the access to memory beyond what is physically in the system. + **/ +STATIC void fm10k_set_dma_mask_pf(struct fm10k_hw *hw, u64 dma_mask) +{ + /* we need to write the upper 32 bits of DMA mask to PhyAddrSpace */ + u32 phyaddr = (u32)(dma_mask >> 32); + + DEBUGFUNC("fm10k_set_dma_mask_pf"); + + FM10K_WRITE_REG(hw, FM10K_PHYADDR, phyaddr); +} + +/** + * fm10k_get_fault_pf - Record a fault in one of the interface units + * @hw: pointer to hardware structure + * @type: pointer to fault type register offset + * @fault: pointer to memory location to record the fault + * + * Record the fault register contents to the fault data structure and + * clear the entry from the register. + * + * Returns ERR_PARAM if invalid register is specified or no error is present. + **/ +STATIC s32 fm10k_get_fault_pf(struct fm10k_hw *hw, int type, + struct fm10k_fault *fault) +{ + u32 func; + + DEBUGFUNC("fm10k_get_fault_pf"); + + /* verify the fault register is in range and is aligned */ + switch (type) { + case FM10K_PCA_FAULT: + case FM10K_THI_FAULT: + case FM10K_FUM_FAULT: + break; + default: + return FM10K_ERR_PARAM; + } + + /* only service faults that are valid */ + func = FM10K_READ_REG(hw, type + FM10K_FAULT_FUNC); + if (!(func & FM10K_FAULT_FUNC_VALID)) + return FM10K_ERR_PARAM; + + /* read remaining fields */ + fault->address = FM10K_READ_REG(hw, type + FM10K_FAULT_ADDR_HI); + fault->address <<= 32; + fault->address = FM10K_READ_REG(hw, type + FM10K_FAULT_ADDR_LO); + fault->specinfo = FM10K_READ_REG(hw, type + FM10K_FAULT_SPECINFO); + + /* clear valid bit to allow for next error */ + FM10K_WRITE_REG(hw, type + FM10K_FAULT_FUNC, FM10K_FAULT_FUNC_VALID); + + /* Record which function triggered the error */ + if (func & FM10K_FAULT_FUNC_PF) + fault->func = 0; + else + fault->func = 1 + ((func & FM10K_FAULT_FUNC_VF_MASK) >> + FM10K_FAULT_FUNC_VF_SHIFT); + + /* record fault type */ + fault->type = func & FM10K_FAULT_FUNC_TYPE_MASK; + + return FM10K_SUCCESS; +} + +/** + * fm10k_request_lport_map_pf - Request LPORT map from the switch API + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_request_lport_map_pf(struct fm10k_hw *hw) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[1]; + + DEBUGFUNC("fm10k_request_lport_pf"); + + /* issue request asking for LPORT map */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_LPORT_MAP); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_get_host_state_pf - Returns the state of the switch and mailbox + * @hw: pointer to hardware structure + * 
@switch_ready: pointer to boolean value that will record switch state + * + * This function will check the DMA_CTRL2 register and mailbox in order + * to determine if the switch is ready for the PF to begin requesting + * addresses and mapping traffic to the local interface. + **/ +STATIC s32 fm10k_get_host_state_pf(struct fm10k_hw *hw, bool *switch_ready) +{ + s32 ret_val = FM10K_SUCCESS; + u32 dma_ctrl2; + + DEBUGFUNC("fm10k_get_host_state_pf"); + + /* verify the switch is ready for interaction */ + dma_ctrl2 = FM10K_READ_REG(hw, FM10K_DMA_CTRL2); + if (!(dma_ctrl2 & FM10K_DMA_CTRL2_SWITCH_READY)) + goto out; + + /* retrieve generic host state info */ + ret_val = fm10k_get_host_state_generic(hw, switch_ready); + if (ret_val) + goto out; + + /* interface cannot receive traffic without logical ports */ + if (hw->mac.dglort_map == FM10K_DGLORTMAP_NONE) + ret_val = fm10k_request_lport_map_pf(hw); + +out: + return ret_val; +} + +/* This structure defines the attributes to be parsed below */ +const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_PF_ATTR_ID_LPORT_MAP), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_lport_map_pf - Message handler for lport_map message from SM + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler configures the lport mapping based on the reply from the + * switch API. + **/ +s32 fm10k_msg_lport_map_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u16 glort, mask; + u32 dglort_map; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_lport_map_pf"); + + err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_LPORT_MAP], + &dglort_map); + if (err) + return err; + + /* extract values out of the header */ + glort = FM10K_MSG_HDR_FIELD_GET(dglort_map, LPORT_MAP_GLORT); + mask = FM10K_MSG_HDR_FIELD_GET(dglort_map, LPORT_MAP_MASK); + + /* verify mask is set and none of the masked bits in glort are set */ + if (!mask || (glort & ~mask)) + return FM10K_ERR_PARAM; + + /* verify the mask is contiguous, and that it is 1's followed by 0's */ + if (((~(mask - 1) & mask) + mask) & FM10K_DGLORTMAP_NONE) + return FM10K_ERR_PARAM; + + /* record the glort, mask, and port count */ + hw->mac.dglort_map = dglort_map; + + return FM10K_SUCCESS; +} + +const struct fm10k_tlv_attr fm10k_update_pvid_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_PF_ATTR_ID_UPDATE_PVID), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_update_pvid_pf - Message handler for port VLAN message from SM + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler configures the default VLAN for the PF + **/ +s32 fm10k_msg_update_pvid_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u16 glort, pvid; + u32 pvid_update; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_update_pvid_pf"); + + err = fm10k_tlv_attr_get_u32(results[FM10K_PF_ATTR_ID_UPDATE_PVID], + &pvid_update); + if (err) + return err; + + /* extract values from the pvid update */ + glort = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_GLORT); + pvid = FM10K_MSG_HDR_FIELD_GET(pvid_update, UPDATE_PVID_PVID); + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* verify VID is valid */ + if (pvid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* record the port VLAN ID
value */ + hw->mac.default_vid = pvid; + + return FM10K_SUCCESS; +} + +/** + * fm10k_record_global_table_data - Move global table data to swapi table info + * @from: pointer to source table data structure + * @to: pointer to destination table info structure + * + * This function is will copy table_data to the table_info contained in + * the hw struct. + **/ +static void fm10k_record_global_table_data(struct fm10k_global_table_data *from, + struct fm10k_swapi_table_info *to) +{ + /* convert from le32 struct to CPU byte ordered values */ + to->used = FM10K_LE32_TO_CPU(from->used); + to->avail = FM10K_LE32_TO_CPU(from->avail); +} + +const struct fm10k_tlv_attr fm10k_err_msg_attr[] = { + FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_ERR, + sizeof(struct fm10k_swapi_error)), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_err_pf - Message handler for error reply + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler will capture the data for any error replies to previous + * messages that the PF has sent. + **/ +s32 fm10k_msg_err_pf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + struct fm10k_swapi_error err_msg; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_err_pf"); + + /* extract structure from message */ + err = fm10k_tlv_attr_get_le_struct(results[FM10K_PF_ATTR_ID_ERR], + &err_msg, sizeof(err_msg)); + if (err) + return err; + + /* record table status */ + fm10k_record_global_table_data(&err_msg.mac, &hw->swapi.mac); + fm10k_record_global_table_data(&err_msg.nexthop, &hw->swapi.nexthop); + fm10k_record_global_table_data(&err_msg.ffu, &hw->swapi.ffu); + + /* record SW API status value */ + hw->swapi.status = FM10K_LE32_TO_CPU(err_msg.status); + + return FM10K_SUCCESS; +} + +const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[] = { + FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_1588_TIMESTAMP, + sizeof(struct fm10k_swapi_1588_timestamp)), + FM10K_TLV_ATTR_LAST +}; + +/* currently there is no shared 1588 timestamp handler */ + +/** + * fm10k_request_tx_timestamp_mode_pf - Request a specific Tx timestamping mode + * @hw: pointer to hardware structure + * @glort: base resource tag for this request + * @mode: integer value indicating the requested mode + * + * This function will attempt to request a specific timestamp mode for the + * port so that it can receive Tx timestamp messages. 
+ **/ +STATIC s32 fm10k_request_tx_timestamp_mode_pf(struct fm10k_hw *hw, + u16 glort, + u8 mode) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3], timestamp_mode; + + DEBUGFUNC("fm10k_request_timestamp_mode_pf"); + + if (mode > FM10K_TIMESTAMP_MODE_PEP_TO_ANY) + return FM10K_ERR_PARAM; + + /* if glort is not valid return error */ + if (!fm10k_glort_valid_pf(hw, glort)) + return FM10K_ERR_PARAM; + + /* write timestamp mode as a single u32 value, + * lower 16 bits: glort + * upper 16 bits: mode + */ + timestamp_mode = ((u32)mode << 16) | glort; + + /* generate message requesting change to xcast mode */ + fm10k_tlv_msg_init(msg, FM10K_PF_MSG_ID_TX_TIMESTAMP_MODE); + fm10k_tlv_attr_put_u32(msg, FM10K_PF_ATTR_ID_TIMESTAMP_MODE_REQ, timestamp_mode); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_adjust_systime_pf - Adjust systime frequency + * @hw: pointer to hardware structure + * @ppb: adjustment rate in parts per billion + * + * This function will adjust the SYSTIME_CFG register contained in BAR 4 + * if this function is supported for BAR 4 access. The adjustment amount + * is based on the parts per billion value provided and adjusted to a + * value based on parts per 2^48 clock cycles. + * + * If adjustment is not supported or the requested value is too large + * we will return an error. + **/ +STATIC s32 fm10k_adjust_systime_pf(struct fm10k_hw *hw, s32 ppb) +{ + u64 systime_adjust; + + DEBUGFUNC("fm10k_adjust_systime_vf"); + + /* if sw_addr is not set we don't have switch register access */ + if (!hw->sw_addr) + return ppb ? FM10K_ERR_PARAM : FM10K_SUCCESS; + + /* we must convert the value from parts per billion to parts per + * 2^48 cycles. In addition I have opted to only use the 30 most + * significant bits of the adjustment value as the 8 least + * significant bits are located in another register and represent + * a value significantly less than a part per billion, the result + * of dropping the 8 least significant bits is that the adjustment + * value is effectively multiplied by 2^8 when we write it. + * + * As a result of all this the math for this breaks down as follows: + * ppb / 10^9 == adjust * 2^8 / 2^48 + * If we solve this for adjust, and simplify it comes out as: + * ppb * 2^31 / 5^9 == adjust + */ + systime_adjust = (ppb < 0) ? -ppb : ppb; + systime_adjust <<= 31; + do_div(systime_adjust, 1953125); + + /* verify the requested adjustment value is in range */ + if (systime_adjust > FM10K_SW_SYSTIME_ADJUST_MASK) + return FM10K_ERR_PARAM; + + if (ppb < 0) + systime_adjust |= FM10K_SW_SYSTIME_ADJUST_DIR_NEGATIVE; + + FM10K_WRITE_SW_REG(hw, FM10K_SW_SYSTIME_ADJUST, (u32)systime_adjust); + + return FM10K_SUCCESS; +} + +/** + * fm10k_read_systime_pf - Reads value of systime registers + * @hw: pointer to the hardware structure + * + * Function reads the content of 2 registers, combined to represent a 64 bit + * value measured in nanosecods. In order to guarantee the value is accurate + * we check the 32 most significant bits both before and after reading the + * 32 least significant bits to verify they didn't change as we were reading + * the registers. 
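+ *
+ * Should the upper 32 bits roll over while the lower half is being
+ * read, the mismatch is detected and the read sequence is retried.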
+ **/ +static u64 fm10k_read_systime_pf(struct fm10k_hw *hw) +{ + u32 systime_l, systime_h, systime_tmp; + + systime_h = fm10k_read_reg(hw, FM10K_SYSTIME + 1); + + do { + systime_tmp = systime_h; + systime_l = fm10k_read_reg(hw, FM10K_SYSTIME); + systime_h = fm10k_read_reg(hw, FM10K_SYSTIME + 1); + } while (systime_tmp != systime_h); + + return ((u64)systime_h << 32) | systime_l; +} + +static const struct fm10k_msg_data fm10k_msg_data_pf[] = { + FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf), + FM10K_PF_MSG_LPORT_MAP_HANDLER(fm10k_msg_lport_map_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_CREATE, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_DELETE, fm10k_msg_err_pf), + FM10K_PF_MSG_UPDATE_PVID_HANDLER(fm10k_msg_update_pvid_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_init_ops_pf - Inits func ptrs and MAC type + * @hw: pointer to hardware structure + * + * Initialize the function pointers and assign the MAC type for PF. + * Does not touch the hardware. + **/ +s32 fm10k_init_ops_pf(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + struct fm10k_iov_info *iov = &hw->iov; + + DEBUGFUNC("fm10k_init_ops_pf"); + + fm10k_init_ops_generic(hw); + + mac->ops.reset_hw = &fm10k_reset_hw_pf; + mac->ops.init_hw = &fm10k_init_hw_pf; + mac->ops.start_hw = &fm10k_start_hw_generic; + mac->ops.stop_hw = &fm10k_stop_hw_generic; + mac->ops.is_slot_appropriate = &fm10k_is_slot_appropriate_pf; + mac->ops.update_vlan = &fm10k_update_vlan_pf; + mac->ops.read_mac_addr = &fm10k_read_mac_addr_pf; + mac->ops.update_uc_addr = &fm10k_update_uc_addr_pf; + mac->ops.update_mc_addr = &fm10k_update_mc_addr_pf; + mac->ops.update_xcast_mode = &fm10k_update_xcast_mode_pf; + mac->ops.update_int_moderator = &fm10k_update_int_moderator_pf; + mac->ops.update_lport_state = &fm10k_update_lport_state_pf; + mac->ops.update_hw_stats = &fm10k_update_hw_stats_pf; + mac->ops.rebind_hw_stats = &fm10k_rebind_hw_stats_pf; + mac->ops.configure_dglort_map = &fm10k_configure_dglort_map_pf; + mac->ops.set_dma_mask = &fm10k_set_dma_mask_pf; + mac->ops.get_fault = &fm10k_get_fault_pf; + mac->ops.get_host_state = &fm10k_get_host_state_pf; + mac->ops.adjust_systime = &fm10k_adjust_systime_pf; + mac->ops.read_systime = &fm10k_read_systime_pf; + mac->ops.request_tx_timestamp_mode = &fm10k_request_tx_timestamp_mode_pf; + + mac->max_msix_vectors = fm10k_get_pcie_msix_count_generic(hw); + + iov->ops.assign_resources = &fm10k_iov_assign_resources_pf; + iov->ops.configure_tc = &fm10k_iov_configure_tc_pf; + iov->ops.assign_int_moderator = &fm10k_iov_assign_int_moderator_pf; + iov->ops.assign_default_mac_vlan = fm10k_iov_assign_default_mac_vlan_pf; + iov->ops.reset_resources = &fm10k_iov_reset_resources_pf; + iov->ops.set_lport = &fm10k_iov_set_lport_pf; + iov->ops.reset_lport = &fm10k_iov_reset_lport_pf; + iov->ops.update_stats = &fm10k_iov_update_stats_pf; + iov->ops.report_timestamp = &fm10k_iov_report_timestamp_pf; + + return fm10k_sm_mbx_init(hw, &hw->mbx, fm10k_msg_data_pf); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_pf.h b/lib/librte_pmd_fm10k/base/fm10k_pf.h new file mode 100644 index 0000000..f6c290a --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_pf.h @@ -0,0 +1,155 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. 
+ +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_PF_H_ +#define _FM10K_PF_H_ + +#include "fm10k_type.h" +#include "fm10k_common.h" + +bool fm10k_glort_valid_pf(struct fm10k_hw *hw, u16 glort); +u16 fm10k_queues_per_pool(struct fm10k_hw *hw); +u16 fm10k_vf_queue_index(struct fm10k_hw *hw, u16 vf_idx); + +enum fm10k_pf_tlv_msg_id_v1 { + FM10K_PF_MSG_ID_TEST = 0x000, /* msg ID reserved */ + FM10K_PF_MSG_ID_XCAST_MODES = 0x001, + FM10K_PF_MSG_ID_UPDATE_MAC_FWD_RULE = 0x002, + FM10K_PF_MSG_ID_LPORT_MAP = 0x100, + FM10K_PF_MSG_ID_LPORT_CREATE = 0x200, + FM10K_PF_MSG_ID_LPORT_DELETE = 0x201, + FM10K_PF_MSG_ID_CONFIG = 0x300, + FM10K_PF_MSG_ID_UPDATE_PVID = 0x400, + FM10K_PF_MSG_ID_CREATE_FLOW_TABLE = 0x501, + FM10K_PF_MSG_ID_DELETE_FLOW_TABLE = 0x502, + FM10K_PF_MSG_ID_UPDATE_FLOW = 0x503, + FM10K_PF_MSG_ID_DELETE_FLOW = 0x504, + FM10K_PF_MSG_ID_SET_FLOW_STATE = 0x505, + FM10K_PF_MSG_ID_GET_1588_INFO = 0x506, + FM10K_PF_MSG_ID_1588_TIMESTAMP = 0x701, + FM10K_PF_MSG_ID_TX_TIMESTAMP_MODE = 0x702, +}; + +enum fm10k_pf_tlv_attr_id_v1 { + FM10K_PF_ATTR_ID_ERR = 0x00, + FM10K_PF_ATTR_ID_LPORT_MAP = 0x01, + FM10K_PF_ATTR_ID_XCAST_MODE = 0x02, + FM10K_PF_ATTR_ID_MAC_UPDATE = 0x03, + FM10K_PF_ATTR_ID_VLAN_UPDATE = 0x04, + FM10K_PF_ATTR_ID_CONFIG = 0x05, + FM10K_PF_ATTR_ID_CREATE_FLOW_TABLE = 0x06, + FM10K_PF_ATTR_ID_DELETE_FLOW_TABLE = 0x07, + FM10K_PF_ATTR_ID_UPDATE_FLOW = 0x08, + FM10K_PF_ATTR_ID_FLOW_STATE = 0x09, + FM10K_PF_ATTR_ID_FLOW_HANDLE = 0x0A, + FM10K_PF_ATTR_ID_DELETE_FLOW = 0x0B, + FM10K_PF_ATTR_ID_PORT = 0x0C, + FM10K_PF_ATTR_ID_UPDATE_PVID = 0x0D, + FM10K_PF_ATTR_ID_1588_TIMESTAMP = 0x10, + FM10K_PF_ATTR_ID_TIMESTAMP_MODE_REQ = 0x11, + FM10K_PF_ATTR_ID_TIMESTAMP_MODE_RESP = 0x12, +}; + +#define FM10K_MSG_LPORT_MAP_GLORT_SHIFT 0 +#define FM10K_MSG_LPORT_MAP_GLORT_SIZE 16 +#define FM10K_MSG_LPORT_MAP_MASK_SHIFT 16 +#define FM10K_MSG_LPORT_MAP_MASK_SIZE 16 + +#define FM10K_MSG_UPDATE_PVID_GLORT_SHIFT 0 +#define FM10K_MSG_UPDATE_PVID_GLORT_SIZE 16 +#define FM10K_MSG_UPDATE_PVID_PVID_SHIFT 16 +#define 
FM10K_MSG_UPDATE_PVID_PVID_SIZE 16 + +struct fm10k_mac_update { + __le32 mac_lower; + __le16 mac_upper; + __le16 vlan; + __le16 glort; + u8 flags; + u8 action; +}; + +struct fm10k_global_table_data { + __le32 used; + __le32 avail; +}; + +struct fm10k_swapi_error { + __le32 status; + struct fm10k_global_table_data mac; + struct fm10k_global_table_data nexthop; + struct fm10k_global_table_data ffu; +}; + +struct fm10k_swapi_1588_timestamp { + __le64 egress; + __le64 ingress; + __le16 dglort; + __le16 sglort; +}; + +#define FM10K_PF_MSG_LPORT_CREATE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_CREATE, NULL, func) +#define FM10K_PF_MSG_LPORT_DELETE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_DELETE, NULL, func) +s32 fm10k_msg_lport_map_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[]; +#define FM10K_PF_MSG_LPORT_MAP_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_LPORT_MAP, \ + fm10k_lport_map_msg_attr, func) +s32 fm10k_msg_update_pvid_pf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_update_pvid_msg_attr[]; +#define FM10K_PF_MSG_UPDATE_PVID_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_UPDATE_PVID, \ + fm10k_update_pvid_msg_attr, func) + +s32 fm10k_msg_err_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_err_msg_attr[]; +#define FM10K_PF_MSG_ERR_HANDLER(msg, func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_##msg, fm10k_err_msg_attr, func) + +extern const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[]; +#define FM10K_PF_MSG_1588_TIMESTAMP_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_1588_TIMESTAMP, \ + fm10k_1588_timestamp_msg_attr, func) + +s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +extern const struct fm10k_msg_data fm10k_iov_msg_data_pf[]; + +s32 fm10k_init_ops_pf(struct fm10k_hw *hw); +#endif /* _FM10K_PF_H */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_tlv.c b/lib/librte_pmd_fm10k/base/fm10k_tlv.c new file mode 100644 index 0000000..1d9d7d8 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_tlv.c @@ -0,0 +1,914 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_tlv.h" + +/** + * fm10k_tlv_msg_init - Initialize message block for TLV data storage + * @msg: Pointer to message block + * @msg_id: Message ID indicating message type + * + * This function return success if provided with a valid message pointer + **/ +s32 fm10k_tlv_msg_init(u32 *msg, u16 msg_id) +{ + DEBUGFUNC("fm10k_tlv_msg_init"); + + /* verify pointer is not NULL */ + if (!msg) + return FM10K_ERR_PARAM; + + *msg = (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT) | msg_id; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_null_string - Place null terminated string on message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @string: Pointer to string to be stored in attribute + * + * This function will reorder a string to be CPU endian and store it in + * the attribute buffer. It will return success if provided with a valid + * pointers. + **/ +s32 fm10k_tlv_attr_put_null_string(u32 *msg, u16 attr_id, + const unsigned char *string) +{ + u32 attr_data = 0, len = 0; + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_null_string"); + + /* verify pointers are not NULL */ + if (!string || !msg) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* copy string into local variable and then write to msg */ + do { + /* write data to message */ + if (len && !(len % 4)) { + attr[len / 4] = attr_data; + attr_data = 0; + } + + /* record character to offset location */ + attr_data |= (u32)(*string) << (8 * (len % 4)); + len++; + + /* test for NULL and then increment */ + } while (*(string++)); + + /* write last piece of data to message */ + attr[(len + 3) / 4] = attr_data; + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_null_string - Get null terminated string from attribute + * @attr: Pointer to attribute + * @string: Pointer to location of destination string + * + * This function pulls the string back out of the attribute and will place + * it in the array pointed by by string. It will return success if provided + * with a valid pointers. + **/ +s32 fm10k_tlv_attr_get_null_string(u32 *attr, unsigned char *string) +{ + u32 len; + + DEBUGFUNC("fm10k_tlv_attr_get_null_string"); + + /* verify pointers are not NULL */ + if (!string || !attr) + return FM10K_ERR_PARAM; + + len = *attr >> FM10K_TLV_LEN_SHIFT; + attr++; + + while (len--) + string[len] = (u8)(attr[len / 4] >> (8 * (len % 4))); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_mac_vlan - Store MAC/VLAN attribute in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @mac_addr: MAC address to be stored + * + * This function will reorder a MAC address to be CPU endian and store it + * in the attribute buffer. 
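+ * The 6-byte address occupies the first data DWORD and the low 16 bits
+ * of the second, with the VLAN ID packed into the upper 16 bits of the
+ * second DWORD.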
It will return success if provided with a + * valid pointers. + **/ +s32 fm10k_tlv_attr_put_mac_vlan(u32 *msg, u16 attr_id, + const u8 *mac_addr, u16 vlan) +{ + u32 len = ETH_ALEN << FM10K_TLV_LEN_SHIFT; + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_mac_vlan"); + + /* verify pointers are not NULL */ + if (!msg || !mac_addr) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* record attribute header, update message length */ + attr[0] = len | attr_id; + + /* copy value into local variable and then write to msg */ + attr[1] = FM10K_LE32_TO_CPU(*(const __le32 *)&mac_addr[0]); + attr[2] = FM10K_LE16_TO_CPU(*(const __le16 *)&mac_addr[4]); + attr[2] |= (u32)vlan << 16; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_mac_vlan - Get MAC/VLAN stored in attribute + * @attr: Pointer to attribute + * @attr_id: Attribute ID + * @mac_addr: location of buffer to store MAC address + * + * This function pulls the MAC address back out of the attribute and will + * place it in the array pointed by by mac_addr. It will return success + * if provided with a valid pointers. + **/ +s32 fm10k_tlv_attr_get_mac_vlan(u32 *attr, u8 *mac_addr, u16 *vlan) +{ + DEBUGFUNC("fm10k_tlv_attr_get_mac_vlan"); + + /* verify pointers are not NULL */ + if (!mac_addr || !attr) + return FM10K_ERR_PARAM; + + *(__le32 *)&mac_addr[0] = FM10K_CPU_TO_LE32(attr[1]); + *(__le16 *)&mac_addr[4] = FM10K_CPU_TO_LE16((u16)(attr[2])); + *vlan = (u16)(attr[2] >> 16); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_bool - Add header indicating value "true" + * @msg: Pointer to message block + * @attr_id: Attribute ID + * + * This function will simply add an attribute header, the fact + * that the header is here means the attribute value is true, else + * it is false. The function will return success if provided with a + * valid pointers. + **/ +s32 fm10k_tlv_attr_put_bool(u32 *msg, u16 attr_id) +{ + DEBUGFUNC("fm10k_tlv_attr_put_bool"); + + /* verify pointers are not NULL */ + if (!msg) + return FM10K_ERR_PARAM; + + /* record attribute header */ + msg[FM10K_TLV_DWORD_LEN(*msg)] = attr_id; + + /* add header length to length */ + *msg += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_value - Store integer value attribute in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @value: Value to be written + * @len: Size of value + * + * This function will place an integer value of up to 8 bytes in size + * in a message attribute. The function will return success provided + * that msg is a valid pointer, and len is 1, 2, 4, or 8. 
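+ *
+ * For example, storing a 16-bit value writes one header DWORD with a
+ * length of 2 followed by a single DWORD holding the value.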
+ **/ +s32 fm10k_tlv_attr_put_value(u32 *msg, u16 attr_id, s64 value, u32 len) +{ + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_put_value"); + + /* verify non-null msg and len is 1, 2, 4, or 8 */ + if (!msg || !len || len > 8 || (len & (len - 1))) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + if (len < 4) { + attr[1] = (u32)value & ((0x1ul << (8 * len)) - 1); + } else { + attr[1] = (u32)value; + if (len > 4) + attr[2] = (u32)(value >> 32); + } + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_value - Get integer value stored in attribute + * @attr: Pointer to attribute + * @value: Pointer to destination buffer + * @len: Size of value + * + * This function will place an integer value of up to 8 bytes in size + * in the offset pointed to by value. The function will return success + * provided that pointers are valid and the len value matches the + * attribute length. + **/ +s32 fm10k_tlv_attr_get_value(u32 *attr, void *value, u32 len) +{ + DEBUGFUNC("fm10k_tlv_attr_get_value"); + + /* verify pointers are not NULL */ + if (!attr || !value) + return FM10K_ERR_PARAM; + + if ((*attr >> FM10K_TLV_LEN_SHIFT) != len) + return FM10K_ERR_PARAM; + + if (len == 8) + *(u64 *)value = ((u64)attr[2] << 32) | attr[1]; + else if (len == 4) + *(u32 *)value = attr[1]; + else if (len == 2) + *(u16 *)value = (u16)attr[1]; + else + *(u8 *)value = (u8)attr[1]; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_put_le_struct - Store little endian structure in message + * @msg: Pointer to message block + * @attr_id: Attribute ID + * @le_struct: Pointer to structure to be written + * @len: Size of le_struct + * + * This function will place a little endian structure value in a message + * attribute. The function will return success provided that all pointers + * are valid and length is a non-zero multiple of 4. + **/ +s32 fm10k_tlv_attr_put_le_struct(u32 *msg, u16 attr_id, + const void *le_struct, u32 len) +{ + const __le32 *le32_ptr = (const __le32 *)le_struct; + u32 *attr; + u32 i; + + DEBUGFUNC("fm10k_tlv_attr_put_le_struct"); + + /* verify non-null msg and len is in 32 bit words */ + if (!msg || !len || (len % 4)) + return FM10K_ERR_PARAM; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + /* copy le32 structure into host byte order at 32b boundaries */ + for (i = 0; i < (len / 4); i++) + attr[i + 1] = FM10K_LE32_TO_CPU(le32_ptr[i]); + + /* record attribute header, update message length */ + len <<= FM10K_TLV_LEN_SHIFT; + attr[0] = len | attr_id; + + /* add header length to length */ + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += FM10K_TLV_LEN_ALIGN(len); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_get_le_struct - Get little endian struct form attribute + * @attr: Pointer to attribute + * @le_struct: Pointer to structure to be written + * @len: Size of structure + * + * This function will place a little endian structure in the buffer + * pointed to by le_struct. The function will return success + * provided that pointers are valid and the len value matches the + * attribute length. 
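A typical caller pairs the put and get helpers above around the same message buffer. A minimal sketch of such a round trip, assuming the fm10k_tlv.h declarations are in scope (the buffer size is arbitrary and the attribute ID reuses the illustrative FM10K_TEST_MSG_U32 from the test message declared later in this file):

/* Illustrative round trip through the value helpers above. */
static s32 example_value_round_trip(void)
{
	u32 msg[16];		/* scratch message buffer */
	u32 timeout_ms = 250;	/* arbitrary example payload */
	u32 readback = 0;
	s32 err;

	err = fm10k_tlv_msg_init(msg, FM10K_TLV_MSG_ID_TEST);
	if (err)
		return err;

	/* append a 4-byte unsigned attribute */
	err = fm10k_tlv_attr_put_u32(msg, FM10K_TEST_MSG_U32, timeout_ms);
	if (err)
		return err;

	/* msg_init leaves the message length at zero, so the first
	 * appended attribute always starts at msg[1]
	 */
	return fm10k_tlv_attr_get_u32(&msg[1], &readback);
}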
+ **/ +s32 fm10k_tlv_attr_get_le_struct(u32 *attr, void *le_struct, u32 len) +{ + __le32 *le32_ptr = (__le32 *)le_struct; + u32 i; + + DEBUGFUNC("fm10k_tlv_attr_get_le_struct"); + + /* verify pointers are not NULL */ + if (!le_struct || !attr) + return FM10K_ERR_PARAM; + + if ((*attr >> FM10K_TLV_LEN_SHIFT) != len) + return FM10K_ERR_PARAM; + + attr++; + + for (i = 0; len; i++, len -= 4) + le32_ptr[i] = FM10K_CPU_TO_LE32(attr[i]); + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_nest_start - Start a set of nested attributes + * @msg: Pointer to message block + * @attr_id: Attribute ID + * + * This function will mark off a new nested region for encapsulating + * a given set of attributes. The idea is if you wish to place a secondary + * structure within the message this mechanism allows for that. The + * function will return NULL on failure, and a pointer to the start + * of the nested attributes on success. + **/ +u32 *fm10k_tlv_attr_nest_start(u32 *msg, u16 attr_id) +{ + u32 *attr; + + DEBUGFUNC("fm10k_tlv_attr_nest_start"); + + /* verify pointer is not NULL */ + if (!msg) + return NULL; + + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + + attr[0] = attr_id; + + /* return pointer to nest header */ + return attr; +} + +/** + * fm10k_tlv_attr_nest_start - Start a set of nested attributes + * @msg: Pointer to message block + * + * This function closes off an existing set of nested attributes. The + * message pointer should be pointing to the parent of the nest. So in + * the case of a nest within the nest this would be the outer nest pointer. + * This function will return success provided all pointers are valid. + **/ +s32 fm10k_tlv_attr_nest_stop(u32 *msg) +{ + u32 *attr; + u32 len; + + DEBUGFUNC("fm10k_tlv_attr_nest_stop"); + + /* verify pointer is not NULL */ + if (!msg) + return FM10K_ERR_PARAM; + + /* locate the nested header and retrieve its length */ + attr = &msg[FM10K_TLV_DWORD_LEN(*msg)]; + len = (attr[0] >> FM10K_TLV_LEN_SHIFT) << FM10K_TLV_LEN_SHIFT; + + /* only include nest if data was added to it */ + if (len) { + len += FM10K_TLV_HDR_LEN << FM10K_TLV_LEN_SHIFT; + *msg += len; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_validate - Validate attribute metadata + * @attr: Pointer to attribute + * @tlv_attr: Type and length info for attribute + * + * This function does some basic validation of the input TLV. It + * verifies the length, and in the case of null terminated strings + * it verifies that the last byte is null. The function will + * return FM10K_ERR_PARAM if any attribute is malformed, otherwise + * it returns 0. 
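The nest_start/nest_stop pair above is used in the pattern sketched below, which mirrors what fm10k_tlv_msg_test_create() does later in this file; the attribute IDs are the illustrative ones from the test message:

/* Sketch of the nesting pattern: attributes added through the returned
 * nest pointer are wrapped between nest_start and nest_stop, and the
 * outer message length is only updated when the nest is closed.  The
 * caller must supply a buffer large enough for the whole message.
 */
static void example_build_nested(u32 *msg)
{
	u32 *nest;

	fm10k_tlv_msg_init(msg, FM10K_TLV_MSG_ID_TEST);

	/* top-level attribute */
	fm10k_tlv_attr_put_u32(msg, FM10K_TEST_MSG_U32, 0x87654321);

	/* open a nested region and add attributes inside it */
	nest = fm10k_tlv_attr_nest_start(msg, FM10K_TEST_MSG_NESTED);
	if (nest) {
		fm10k_tlv_attr_put_u16(nest, FM10K_TEST_MSG_U16, 0x8765);
		fm10k_tlv_attr_nest_stop(msg);
	}
}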
+ **/ +STATIC s32 fm10k_tlv_attr_validate(u32 *attr, + const struct fm10k_tlv_attr *tlv_attr) +{ + u32 attr_id = *attr & FM10K_TLV_ID_MASK; + u16 len = *attr >> FM10K_TLV_LEN_SHIFT; + + DEBUGFUNC("fm10k_tlv_attr_validate"); + + /* verify this is an attribute and not a message */ + if (*attr & (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT)) + return FM10K_ERR_PARAM; + + /* search through the list of attributes to find a matching ID */ + while (tlv_attr->id < attr_id) + tlv_attr++; + + /* if didn't find a match then we should exit */ + if (tlv_attr->id != attr_id) + return FM10K_NOT_IMPLEMENTED; + + /* move to start of attribute data */ + attr++; + + switch (tlv_attr->type) { + case FM10K_TLV_NULL_STRING: + if (!len || + (attr[(len - 1) / 4] & (0xFF << (8 * ((len - 1) % 4))))) + return FM10K_ERR_PARAM; + if (len > tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_MAC_ADDR: + if (len != ETH_ALEN) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_BOOL: + if (len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_UNSIGNED: + case FM10K_TLV_SIGNED: + if (len != tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_LE_STRUCT: + /* struct must be 4 byte aligned */ + if ((len % 4) || len != tlv_attr->len) + return FM10K_ERR_PARAM; + break; + case FM10K_TLV_NESTED: + /* nested attributes must be 4 byte aligned */ + if (len % 4) + return FM10K_ERR_PARAM; + break; + default: + /* attribute id is mapped to bad value */ + return FM10K_ERR_PARAM; + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_attr_parse - Parses stream of attribute data + * @attr: Pointer to attribute list + * @results: Pointer array to store pointers to attributes + * @tlv_attr: Type and length info for attributes + * + * This function validates a stream of attributes and parses them + * up into an array of pointers stored in results. The function will + * return FM10K_ERR_PARAM on any input or message error, + * FM10K_NOT_IMPLEMENTED for any attribute that is outside of the array + * and 0 on success. 
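The results array this parser fills in is consumed by handlers of the shape sketched below (the handler name is hypothetical; the attribute ID is the illustrative FM10K_TEST_MSG_U32 from the test message):

/* Sketch of a handler of the form fm10k_tlv_msg_parse() dispatches to:
 * it receives the results[] array produced by fm10k_tlv_attr_parse()
 * and pulls out whichever attributes were present.
 */
static s32 example_msg_handler(struct fm10k_hw *hw, u32 **results,
			       struct fm10k_mbx_info *mbx)
{
	u32 value = 0;
	s32 err = FM10K_SUCCESS;

	UNREFERENCED_2PARAMETER(hw, mbx);

	/* absent attributes are left as NULL pointers by the parser */
	if (results[FM10K_TEST_MSG_U32])
		err = fm10k_tlv_attr_get_u32(results[FM10K_TEST_MSG_U32],
					     &value);

	return err;
}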
+ **/ +s32 fm10k_tlv_attr_parse(u32 *attr, u32 **results, + const struct fm10k_tlv_attr *tlv_attr) +{ + u32 i, attr_id, offset = 0; + s32 err = 0; + u16 len; + + DEBUGFUNC("fm10k_tlv_attr_parse"); + + /* verify pointers are not NULL */ + if (!attr || !results) + return FM10K_ERR_PARAM; + + /* initialize results to NULL */ + for (i = 0; i < FM10K_TLV_RESULTS_MAX; i++) + results[i] = NULL; + + /* pull length from the message header */ + len = *attr >> FM10K_TLV_LEN_SHIFT; + + /* no attributes to parse if there is no length */ + if (!len) + return FM10K_SUCCESS; + + /* no attributes to parse, just raw data, message becomes attribute */ + if (!tlv_attr) { + results[0] = attr; + return FM10K_SUCCESS; + } + + /* move to start of attribute data */ + attr++; + + /* run through list parsing all attributes */ + while (offset < len) { + attr_id = *attr & FM10K_TLV_ID_MASK; + + if (attr_id < FM10K_TLV_RESULTS_MAX) + err = fm10k_tlv_attr_validate(attr, tlv_attr); + else + err = FM10K_NOT_IMPLEMENTED; + + if (err < 0) + return err; + if (!err) + results[attr_id] = attr; + + /* update offset */ + offset += FM10K_TLV_DWORD_LEN(*attr) * 4; + + /* move to next attribute */ + attr = &attr[FM10K_TLV_DWORD_LEN(*attr)]; + } + + /* we should find ourselves at the end of the list */ + if (offset != len) + return FM10K_ERR_PARAM; + + return FM10K_SUCCESS; +} + +/** + * fm10k_tlv_msg_parse - Parses message header and calls function handler + * @hw: Pointer to hardware structure + * @msg: Pointer to message + * @mbx: Pointer to mailbox information structure + * @func: Function array containing list of message handling functions + * + * This function should be the first function called upon receiving a + * message. The handler will identify the message type and call the correct + * handler for the given message. It will return the value from the function + * call on a recognized message type, otherwise it will return + * FM10K_NOT_IMPLEMENTED on an unrecognized type. + **/ +s32 fm10k_tlv_msg_parse(struct fm10k_hw *hw, u32 *msg, + struct fm10k_mbx_info *mbx, + const struct fm10k_msg_data *data) +{ + u32 *results[FM10K_TLV_RESULTS_MAX]; + u32 msg_id; + s32 err; + + DEBUGFUNC("fm10k_tlv_msg_parse"); + + /* verify pointer is not NULL */ + if (!msg || !data) + return FM10K_ERR_PARAM; + + /* verify this is a message and not an attribute */ + if (!(*msg & (FM10K_TLV_FLAGS_MSG << FM10K_TLV_FLAGS_SHIFT))) + return FM10K_ERR_PARAM; + + /* grab message ID */ + msg_id = *msg & FM10K_TLV_ID_MASK; + + while (data->id < msg_id) + data++; + + /* if we didn't find it then pass it up as an error */ + if (data->id != msg_id) { + while (data->id != FM10K_TLV_ERROR) + data++; + } + + /* parse the attributes into the results list */ + err = fm10k_tlv_attr_parse(msg, results, data->attr); + if (err < 0) + return err; + + return data->func(hw, results, mbx); +} + +/** + * fm10k_tlv_msg_error - Default handler for unrecognized TLV message IDs + * @hw: Pointer to hardware structure + * @results: Pointer array to message, results[0] is pointer to message + * @mbx: Unused mailbox pointer + * + * This function is a default handler for unrecognized messages. At a + * a minimum it just indicates that the message requested was + * unimplemented. 
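Putting the pieces together, a caller registers handlers in a fm10k_msg_data table and hands received messages to fm10k_tlv_msg_parse(). A sketch of that wiring, using the FM10K_MSG_HANDLER macros defined in fm10k_tlv.h below (example_msg_handler is the hypothetical handler sketched earlier):

/* The table must be sorted by message ID and terminated with an
 * FM10K_TLV_ERROR entry so unknown IDs fall through to
 * fm10k_tlv_msg_error(), since the parser advances through it with a
 * simple less-than scan.
 */
static const struct fm10k_msg_data example_msg_table[] = {
	FM10K_MSG_HANDLER(FM10K_TLV_MSG_ID_TEST, fm10k_tlv_msg_test_attr,
			  example_msg_handler),
	FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error),
};

/* typical receive path: hand a message pulled from the mailbox to the
 * parser, which validates it and invokes the matching handler
 */
static s32 example_dispatch(struct fm10k_hw *hw, u32 *msg,
			    struct fm10k_mbx_info *mbx)
{
	return fm10k_tlv_msg_parse(hw, msg, mbx, example_msg_table);
}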
+ **/ +s32 fm10k_tlv_msg_error(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + UNREFERENCED_3PARAMETER(hw, results, mbx); + DEBUGOUT1("Unknown message ID %u\n", **results & FM10K_TLV_ID_MASK); + return FM10K_NOT_IMPLEMENTED; +} + +STATIC const unsigned char test_str[] = "fm10k"; +STATIC const unsigned char test_mac[ETH_ALEN] = { 0x12, 0x34, 0x56, + 0x78, 0x9a, 0xbc }; +STATIC const u16 test_vlan = 0x0FED; +STATIC const u64 test_u64 = 0xfedcba9876543210ull; +STATIC const u32 test_u32 = 0x87654321; +STATIC const u16 test_u16 = 0x8765; +STATIC const u8 test_u8 = 0x87; +STATIC const s64 test_s64 = -0x123456789abcdef0ll; +STATIC const s32 test_s32 = -0x1235678; +STATIC const s16 test_s16 = -0x1234; +STATIC const s8 test_s8 = -0x12; +STATIC const __le32 test_le[2] = { FM10K_CPU_TO_LE32(0x12345678), + FM10K_CPU_TO_LE32(0x9abcdef0)}; + +/* The message below is meant to be used as a test message to demonstrate + * how to use the TLV interface and to test the types. Normally this code + * be compiled out by stripping the code wrapped in FM10K_TLV_TEST_MSG + */ +const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[] = { + FM10K_TLV_ATTR_NULL_STRING(FM10K_TEST_MSG_STRING, 80), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_TEST_MSG_MAC_ADDR), + FM10K_TLV_ATTR_U8(FM10K_TEST_MSG_U8), + FM10K_TLV_ATTR_U16(FM10K_TEST_MSG_U16), + FM10K_TLV_ATTR_U32(FM10K_TEST_MSG_U32), + FM10K_TLV_ATTR_U64(FM10K_TEST_MSG_U64), + FM10K_TLV_ATTR_S8(FM10K_TEST_MSG_S8), + FM10K_TLV_ATTR_S16(FM10K_TEST_MSG_S16), + FM10K_TLV_ATTR_S32(FM10K_TEST_MSG_S32), + FM10K_TLV_ATTR_S64(FM10K_TEST_MSG_S64), + FM10K_TLV_ATTR_LE_STRUCT(FM10K_TEST_MSG_LE_STRUCT, 8), + FM10K_TLV_ATTR_NESTED(FM10K_TEST_MSG_NESTED), + FM10K_TLV_ATTR_S32(FM10K_TEST_MSG_RESULT), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_tlv_msg_test_generate_data - Stuff message with data + * @msg: Pointer to message + * @attr_flags: List of flags indicating what attributes to add + * + * This function is meant to load a message buffer with attribute data + **/ +STATIC void fm10k_tlv_msg_test_generate_data(u32 *msg, u32 attr_flags) +{ + DEBUGFUNC("fm10k_tlv_msg_test_generate_data"); + + if (attr_flags & (1 << FM10K_TEST_MSG_STRING)) + fm10k_tlv_attr_put_null_string(msg, FM10K_TEST_MSG_STRING, + test_str); + if (attr_flags & (1 << FM10K_TEST_MSG_MAC_ADDR)) + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_TEST_MSG_MAC_ADDR, + test_mac, test_vlan); + if (attr_flags & (1 << FM10K_TEST_MSG_U8)) + fm10k_tlv_attr_put_u8(msg, FM10K_TEST_MSG_U8, test_u8); + if (attr_flags & (1 << FM10K_TEST_MSG_U16)) + fm10k_tlv_attr_put_u16(msg, FM10K_TEST_MSG_U16, test_u16); + if (attr_flags & (1 << FM10K_TEST_MSG_U32)) + fm10k_tlv_attr_put_u32(msg, FM10K_TEST_MSG_U32, test_u32); + if (attr_flags & (1 << FM10K_TEST_MSG_U64)) + fm10k_tlv_attr_put_u64(msg, FM10K_TEST_MSG_U64, test_u64); + if (attr_flags & (1 << FM10K_TEST_MSG_S8)) + fm10k_tlv_attr_put_s8(msg, FM10K_TEST_MSG_S8, test_s8); + if (attr_flags & (1 << FM10K_TEST_MSG_S16)) + fm10k_tlv_attr_put_s16(msg, FM10K_TEST_MSG_S16, test_s16); + if (attr_flags & (1 << FM10K_TEST_MSG_S32)) + fm10k_tlv_attr_put_s32(msg, FM10K_TEST_MSG_S32, test_s32); + if (attr_flags & (1 << FM10K_TEST_MSG_S64)) + fm10k_tlv_attr_put_s64(msg, FM10K_TEST_MSG_S64, test_s64); + if (attr_flags & (1 << FM10K_TEST_MSG_LE_STRUCT)) + fm10k_tlv_attr_put_le_struct(msg, FM10K_TEST_MSG_LE_STRUCT, + test_le, 8); +} + +/** + * fm10k_tlv_msg_test_create - Create a test message testing all attributes + * @msg: Pointer to message + * @attr_flags: List of flags indicating what attributes to add 
+ * + * This function is meant to load a message buffer with all attribute types + * including a nested attribute. + **/ +void fm10k_tlv_msg_test_create(u32 *msg, u32 attr_flags) +{ + u32 *nest = NULL; + + DEBUGFUNC("fm10k_tlv_msg_test_create"); + + fm10k_tlv_msg_init(msg, FM10K_TLV_MSG_ID_TEST); + + fm10k_tlv_msg_test_generate_data(msg, attr_flags); + + /* check for nested attributes */ + attr_flags >>= FM10K_TEST_MSG_NESTED; + + if (attr_flags) { + nest = fm10k_tlv_attr_nest_start(msg, FM10K_TEST_MSG_NESTED); + + fm10k_tlv_msg_test_generate_data(nest, attr_flags); + + fm10k_tlv_attr_nest_stop(msg); + } +} + +/** + * fm10k_tlv_msg_test - Validate all results on test message receive + * @hw: Pointer to hardware structure + * @results: Pointer array to attributes in the message + * @mbx: Pointer to mailbox information structure + * + * This function does a check to verify all attributes match what the test + * message placed in the message buffer. It is the default handler + * for TLV test messages. + **/ +s32 fm10k_tlv_msg_test(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u32 *nest_results[FM10K_TLV_RESULTS_MAX]; + unsigned char result_str[80]; + unsigned char result_mac[ETH_ALEN]; + s32 err = FM10K_SUCCESS; + __le32 result_le[2]; + u16 result_vlan; + u64 result_u64; + u32 result_u32; + u16 result_u16; + u8 result_u8; + s64 result_s64; + s32 result_s32; + s16 result_s16; + s8 result_s8; + u32 reply[3]; + + DEBUGFUNC("fm10k_tlv_msg_test"); + + /* retrieve results of a previous test */ + if (!!results[FM10K_TEST_MSG_RESULT]) + return fm10k_tlv_attr_get_s32(results[FM10K_TEST_MSG_RESULT], + &mbx->test_result); + +parse_nested: + if (!!results[FM10K_TEST_MSG_STRING]) { + err = fm10k_tlv_attr_get_null_string( + results[FM10K_TEST_MSG_STRING], + result_str); + if (!err && memcmp(test_str, result_str, sizeof(test_str))) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_MAC_ADDR]) { + err = fm10k_tlv_attr_get_mac_vlan( + results[FM10K_TEST_MSG_MAC_ADDR], + result_mac, &result_vlan); + if (!err && memcmp(test_mac, result_mac, ETH_ALEN)) + err = FM10K_ERR_INVALID_VALUE; + if (!err && test_vlan != result_vlan) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U8]) { + err = fm10k_tlv_attr_get_u8(results[FM10K_TEST_MSG_U8], + &result_u8); + if (!err && test_u8 != result_u8) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U16]) { + err = fm10k_tlv_attr_get_u16(results[FM10K_TEST_MSG_U16], + &result_u16); + if (!err && test_u16 != result_u16) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U32]) { + err = fm10k_tlv_attr_get_u32(results[FM10K_TEST_MSG_U32], + &result_u32); + if (!err && test_u32 != result_u32) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_U64]) { + err = fm10k_tlv_attr_get_u64(results[FM10K_TEST_MSG_U64], + &result_u64); + if (!err && test_u64 != result_u64) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S8]) { + err = fm10k_tlv_attr_get_s8(results[FM10K_TEST_MSG_S8], + &result_s8); + if (!err && test_s8 != result_s8) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S16]) { + err = fm10k_tlv_attr_get_s16(results[FM10K_TEST_MSG_S16], + &result_s16); + if (!err && test_s16 != result_s16) + err = 
FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S32]) { + err = fm10k_tlv_attr_get_s32(results[FM10K_TEST_MSG_S32], + &result_s32); + if (!err && test_s32 != result_s32) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_S64]) { + err = fm10k_tlv_attr_get_s64(results[FM10K_TEST_MSG_S64], + &result_s64); + if (!err && test_s64 != result_s64) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + if (!!results[FM10K_TEST_MSG_LE_STRUCT]) { + err = fm10k_tlv_attr_get_le_struct( + results[FM10K_TEST_MSG_LE_STRUCT], + result_le, + sizeof(result_le)); + if (!err && memcmp(test_le, result_le, sizeof(test_le))) + err = FM10K_ERR_INVALID_VALUE; + if (err) + goto report_result; + } + + if (!!results[FM10K_TEST_MSG_NESTED]) { + /* clear any pointers */ + memset(nest_results, 0, sizeof(nest_results)); + + /* parse the nested attributes into the nest results list */ + err = fm10k_tlv_attr_parse(results[FM10K_TEST_MSG_NESTED], + nest_results, + fm10k_tlv_msg_test_attr); + if (err) + goto report_result; + + /* loop back through to the start */ + results = nest_results; + goto parse_nested; + } + +report_result: + /* generate reply with test result */ + fm10k_tlv_msg_init(reply, FM10K_TLV_MSG_ID_TEST); + fm10k_tlv_attr_put_s32(reply, FM10K_TEST_MSG_RESULT, err); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, reply); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_tlv.h b/lib/librte_pmd_fm10k/base/fm10k_tlv.h new file mode 100644 index 0000000..ad97236 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_tlv.h @@ -0,0 +1,199 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _FM10K_TLV_H_ +#define _FM10K_TLV_H_ + +/* forward declaration */ +struct fm10k_msg_data; + +#include "fm10k_type.h" + +/* Message / Argument header format + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | Length | Flags | Type / ID | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * The message header format described here is used for messages that are + * passed between the PF and the VF. To allow for messages larger then + * mailbox size we will provide a message with the above header and it + * will be segmented and transported to the mailbox to the other side where + * it is reassembled. It contains the following fields: + * Len: Length of the message in bytes excluding the message header + * Flags: TBD + * Rule: These will be the message/argument types we pass + */ +/* message data header */ +#define FM10K_TLV_ID_SHIFT 0 +#define FM10K_TLV_ID_SIZE 16 +#define FM10K_TLV_ID_MASK ((1u << FM10K_TLV_ID_SIZE) - 1) +#define FM10K_TLV_FLAGS_SHIFT 16 +#define FM10K_TLV_FLAGS_MSG 0x1 +#define FM10K_TLV_FLAGS_SIZE 4 +#define FM10K_TLV_LEN_SHIFT 20 +#define FM10K_TLV_LEN_SIZE 12 + +#define FM10K_TLV_HDR_LEN 4ul +#define FM10K_TLV_LEN_ALIGN_MASK \ + ((FM10K_TLV_HDR_LEN - 1) << FM10K_TLV_LEN_SHIFT) +#define FM10K_TLV_LEN_ALIGN(tlv) \ + (((tlv) + FM10K_TLV_LEN_ALIGN_MASK) & ~FM10K_TLV_LEN_ALIGN_MASK) +#define FM10K_TLV_DWORD_LEN(tlv) \ + ((u16)((FM10K_TLV_LEN_ALIGN(tlv)) >> (FM10K_TLV_LEN_SHIFT + 2)) + 1) + +#define FM10K_TLV_RESULTS_MAX 32 + +enum fm10k_tlv_type { + FM10K_TLV_NULL_STRING, + FM10K_TLV_MAC_ADDR, + FM10K_TLV_BOOL, + FM10K_TLV_UNSIGNED, + FM10K_TLV_SIGNED, + FM10K_TLV_LE_STRUCT, + FM10K_TLV_NESTED, + FM10K_TLV_MAX_TYPE +}; + +#define FM10K_TLV_ERROR (~0u) + +struct fm10k_tlv_attr { + unsigned int id; + enum fm10k_tlv_type type; + u16 len; +}; + +#define FM10K_TLV_ATTR_NULL_STRING(id, len) { id, FM10K_TLV_NULL_STRING, len } +#define FM10K_TLV_ATTR_MAC_ADDR(id) { id, FM10K_TLV_MAC_ADDR, 6 } +#define FM10K_TLV_ATTR_BOOL(id) { id, FM10K_TLV_BOOL, 0 } +#define FM10K_TLV_ATTR_U8(id) { id, FM10K_TLV_UNSIGNED, 1 } +#define FM10K_TLV_ATTR_U16(id) { id, FM10K_TLV_UNSIGNED, 2 } +#define FM10K_TLV_ATTR_U32(id) { id, FM10K_TLV_UNSIGNED, 4 } +#define FM10K_TLV_ATTR_U64(id) { id, FM10K_TLV_UNSIGNED, 8 } +#define FM10K_TLV_ATTR_S8(id) { id, FM10K_TLV_SIGNED, 1 } +#define FM10K_TLV_ATTR_S16(id) { id, FM10K_TLV_SIGNED, 2 } +#define FM10K_TLV_ATTR_S32(id) { id, FM10K_TLV_SIGNED, 4 } +#define FM10K_TLV_ATTR_S64(id) { id, FM10K_TLV_SIGNED, 8 } +#define FM10K_TLV_ATTR_LE_STRUCT(id, len) { id, FM10K_TLV_LE_STRUCT, len } +#define FM10K_TLV_ATTR_NESTED(id) { id, FM10K_TLV_NESTED } +#define FM10K_TLV_ATTR_LAST { FM10K_TLV_ERROR } + +struct fm10k_msg_data { + unsigned int id; + const struct fm10k_tlv_attr *attr; + s32 (*func)(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +}; + +#define FM10K_MSG_HANDLER(id, attr, func) { id, attr, func } + +s32 fm10k_tlv_msg_init(u32 *, u16); +s32 fm10k_tlv_attr_put_null_string(u32 *, u16, const unsigned char *); +s32 fm10k_tlv_attr_get_null_string(u32 *, unsigned char *); +s32 fm10k_tlv_attr_put_mac_vlan(u32 *, u16, const u8 *, u16); +s32 fm10k_tlv_attr_get_mac_vlan(u32 *, u8 *, u16 *); +s32 fm10k_tlv_attr_put_bool(u32 *, u16); +s32 fm10k_tlv_attr_put_value(u32 *, u16, s64, u32); +#define fm10k_tlv_attr_put_u8(msg, attr_id, val) \ + 
fm10k_tlv_attr_put_value(msg, attr_id, val, 1) +#define fm10k_tlv_attr_put_u16(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 2) +#define fm10k_tlv_attr_put_u32(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 4) +#define fm10k_tlv_attr_put_u64(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 8) +#define fm10k_tlv_attr_put_s8(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 1) +#define fm10k_tlv_attr_put_s16(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 2) +#define fm10k_tlv_attr_put_s32(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 4) +#define fm10k_tlv_attr_put_s64(msg, attr_id, val) \ + fm10k_tlv_attr_put_value(msg, attr_id, val, 8) +s32 fm10k_tlv_attr_get_value(u32 *, void *, u32); +#define fm10k_tlv_attr_get_u8(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u8)) +#define fm10k_tlv_attr_get_u16(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u16)) +#define fm10k_tlv_attr_get_u32(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u32)) +#define fm10k_tlv_attr_get_u64(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(u64)) +#define fm10k_tlv_attr_get_s8(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s8)) +#define fm10k_tlv_attr_get_s16(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s16)) +#define fm10k_tlv_attr_get_s32(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s32)) +#define fm10k_tlv_attr_get_s64(attr, ptr) \ + fm10k_tlv_attr_get_value(attr, ptr, sizeof(s64)) +s32 fm10k_tlv_attr_put_le_struct(u32 *, u16, const void *, u32); +s32 fm10k_tlv_attr_get_le_struct(u32 *, void *, u32); +u32 *fm10k_tlv_attr_nest_start(u32 *, u16); +s32 fm10k_tlv_attr_nest_stop(u32 *); +s32 fm10k_tlv_attr_parse(u32 *, u32 **, const struct fm10k_tlv_attr *); +s32 fm10k_tlv_msg_parse(struct fm10k_hw *, u32 *, struct fm10k_mbx_info *, + const struct fm10k_msg_data *); +s32 fm10k_tlv_msg_error(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *); + +#define FM10K_TLV_MSG_ID_TEST 0 + +enum fm10k_tlv_test_attr_id { + FM10K_TEST_MSG_UNSET, + FM10K_TEST_MSG_STRING, + FM10K_TEST_MSG_MAC_ADDR, + FM10K_TEST_MSG_U8, + FM10K_TEST_MSG_U16, + FM10K_TEST_MSG_U32, + FM10K_TEST_MSG_U64, + FM10K_TEST_MSG_S8, + FM10K_TEST_MSG_S16, + FM10K_TEST_MSG_S32, + FM10K_TEST_MSG_S64, + FM10K_TEST_MSG_LE_STRUCT, + FM10K_TEST_MSG_NESTED, + FM10K_TEST_MSG_RESULT, + FM10K_TEST_MSG_MAX +}; + +extern const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[]; +void fm10k_tlv_msg_test_create(u32 *, u32); +s32 fm10k_tlv_msg_test(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); + +#define FM10K_TLV_MSG_TEST_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_TLV_MSG_ID_TEST, fm10k_tlv_msg_test_attr, func) +#define FM10K_TLV_MSG_ERROR_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_TLV_ERROR, NULL, func) +#endif /* _FM10K_MSG_H_ */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_type.h b/lib/librte_pmd_fm10k/base/fm10k_type.h new file mode 100644 index 0000000..534fab4 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_type.h @@ -0,0 +1,937 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. 
Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _FM10K_TYPE_H_ +#define _FM10K_TYPE_H_ + +/* forward declaration */ +struct fm10k_hw; + +#include "fm10k_osdep.h" +#include "fm10k_mbx.h" + +#define FM10K_INTEL_VENDOR_ID 0x8086 +#define FM10K_DEV_ID_PF 0x15A4 +#define FM10K_DEV_ID_VF 0x15A5 + +#define FM10K_MAX_QUEUES 256 +#define FM10K_MAX_QUEUES_PF 128 +#define FM10K_MAX_QUEUES_POOL 16 + +#define FM10K_48_BIT_MASK 0x0000FFFFFFFFFFFFull +#define FM10K_STAT_VALID 0x80000000 + +/* PCI Bus Info */ +#define FM10K_PCIE_LINK_CAP 0x7C +#define FM10K_PCIE_LINK_STATUS 0x82 +#define FM10K_PCIE_LINK_WIDTH 0x3F0 +#define FM10K_PCIE_LINK_WIDTH_1 0x10 +#define FM10K_PCIE_LINK_WIDTH_2 0x20 +#define FM10K_PCIE_LINK_WIDTH_4 0x40 +#define FM10K_PCIE_LINK_WIDTH_8 0x80 +#define FM10K_PCIE_LINK_SPEED 0xF +#define FM10K_PCIE_LINK_SPEED_2500 0x1 +#define FM10K_PCIE_LINK_SPEED_5000 0x2 +#define FM10K_PCIE_LINK_SPEED_8000 0x3 + +/* PCIe payload size */ +#define FM10K_PCIE_DEV_CAP 0x74 +#define FM10K_PCIE_DEV_CAP_PAYLOAD 0x07 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_128 0x00 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_256 0x01 +#define FM10K_PCIE_DEV_CAP_PAYLOAD_512 0x02 +#define FM10K_PCIE_DEV_CTRL 0x78 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD 0xE0 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_128 0x00 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_256 0x20 +#define FM10K_PCIE_DEV_CTRL_PAYLOAD_512 0x40 + +/* PCIe MSI-X Capability info */ +#define FM10K_PCI_MSIX_MSG_CTRL 0xB2 +#define FM10K_PCI_MSIX_MSG_CTRL_TBL_SZ_MASK 0x7FF +#define FM10K_MAX_MSIX_VECTORS 256 +#define FM10K_MAX_VECTORS_PF 256 +#define FM10K_MAX_VECTORS_POOL 32 + +/* PCIe SR-IOV Info */ +#define FM10K_PCIE_SRIOV_CTRL 0x190 +#define FM10K_PCIE_SRIOV_CTRL_VFARI 0x10 + +#define FM10K_SUCCESS 0 +#define FM10K_ERR_DEVICE_NOT_SUPPORTED -1 +#define FM10K_ERR_PARAM -2 +#define FM10K_ERR_NO_RESOURCES -3 +#define FM10K_ERR_REQUESTS_PENDING -4 +#define FM10K_ERR_RESET_REQUESTED -5 +#define FM10K_ERR_DMA_PENDING -6 +#define FM10K_ERR_RESET_FAILED -7 +#define FM10K_ERR_INVALID_MAC_ADDR -8 +#define FM10K_ERR_INVALID_VALUE -9 +#define FM10K_NOT_IMPLEMENTED 0x7FFFFFFF + +#define UNREFERENCED_XPARAMETER +#define UNREFERENCED_1PARAMETER(_p) (_p) +#define UNREFERENCED_2PARAMETER(_p, _q) do { (_p); (_q); } while (0) +#define UNREFERENCED_3PARAMETER(_p, _q, _r) do { (_p); (_q); (_r); } while (0) + +/* Start of 
PF registers */ +#define FM10K_CTRL 0x0000 +#define FM10K_CTRL_BAR4_ALLOWED 0x00000004 + +#define FM10K_CTRL_EXT 0x0001 +#define FM10K_CTRL_EXT_NS_DIS 0x00000001 +#define FM10K_CTRL_EXT_RO_DIS 0x00000002 +#define FM10K_CTRL_EXT_SWITCH_LOOPBACK 0x00000004 +#define FM10K_EXVET 0x0002 +#define FM10K_EXVET_ETHERTYPE_MASK 0x000000FF +#define FM10K_EXVET_TAG_SIZE_SHIFT 16 +#define FM10K_EXVET_AFTER_VLAN 0x00040000 +#define FM10K_GCR 0x0003 +#define FM10K_FACTPS 0x0004 +#define FM10K_GCR_EXT 0x0005 + +/* Interrupt control registers */ +#define FM10K_EICR 0x0006 +#define FM10K_EICR_PCA_FAULT 0x00000001 +#define FM10K_EICR_THI_FAULT 0x00000004 +#define FM10K_EICR_FUM_FAULT 0x00000020 +#define FM10K_EICR_FAULT_MASK 0x0000003F +#define FM10K_EICR_MAILBOX 0x00000040 +#define FM10K_EICR_SWITCHREADY 0x00000080 +#define FM10K_EICR_SWITCHNOTREADY 0x00000100 +#define FM10K_EICR_SWITCHINTERRUPT 0x00000200 +#define FM10K_EICR_SRAMERROR 0x00000400 +#define FM10K_EICR_VFLR 0x00000800 +#define FM10K_EICR_MAXHOLDTIME 0x00001000 +#define FM10K_EIMR 0x0007 +#define FM10K_EIMR_PCA_FAULT 0x00000001 +#define FM10K_EIMR_THI_FAULT 0x00000010 +#define FM10K_EIMR_FUM_FAULT 0x00000400 +#define FM10K_EIMR_MAILBOX 0x00001000 +#define FM10K_EIMR_SWITCHREADY 0x00004000 +#define FM10K_EIMR_SWITCHNOTREADY 0x00010000 +#define FM10K_EIMR_SWITCHINTERRUPT 0x00040000 +#define FM10K_EIMR_SRAMERROR 0x00100000 +#define FM10K_EIMR_VFLR 0x00400000 +#define FM10K_EIMR_MAXHOLDTIME 0x01000000 +#define FM10K_EIMR_ALL 0x55555555 +#define FM10K_EIMR_DISABLE(NAME) ((FM10K_EIMR_ ## NAME) << 0) +#define FM10K_EIMR_ENABLE(NAME) ((FM10K_EIMR_ ## NAME) << 1) +#define FM10K_FAULT_ADDR_LO 0x0 +#define FM10K_FAULT_ADDR_HI 0x1 +#define FM10K_FAULT_SPECINFO 0x2 +#define FM10K_FAULT_FUNC 0x3 +#define FM10K_FAULT_SIZE 0x4 +#define FM10K_FAULT_FUNC_VALID 0x00008000 +#define FM10K_FAULT_FUNC_PF 0x00004000 +#define FM10K_FAULT_FUNC_VF_MASK 0x00003F00 +#define FM10K_FAULT_FUNC_VF_SHIFT 8 +#define FM10K_FAULT_FUNC_TYPE_MASK 0x000000FF + +#define FM10K_PCA_FAULT 0x0008 +#define FM10K_THI_FAULT 0x0010 +#define FM10K_FUM_FAULT 0x001C + +/* Rx queue timeout indicator */ +#define FM10K_MAXHOLDQ(_n) ((_n) + 0x0020) + +/* Switch Manager info */ +#define FM10K_SM_AREA(_n) ((_n) + 0x0028) + +/* GLORT mapping registers */ +#define FM10K_DGLORTMAP(_n) ((_n) + 0x0030) +#define FM10K_DGLORT_COUNT 8 +#define FM10K_DGLORTMAP_MASK_SHIFT 16 +#define FM10K_DGLORTMAP_ANY 0x00000000 +#define FM10K_DGLORTMAP_NONE 0x0000FFFF +#define FM10K_DGLORTMAP_ZERO 0xFFFF0000 +#define FM10K_DGLORTDEC(_n) ((_n) + 0x0038) +#define FM10K_DGLORTDEC_VSILENGTH_SHIFT 4 +#define FM10K_DGLORTDEC_VSIBASE_SHIFT 7 +#define FM10K_DGLORTDEC_PCLENGTH_SHIFT 14 +#define FM10K_DGLORTDEC_QBASE_SHIFT 16 +#define FM10K_DGLORTDEC_RSSLENGTH_SHIFT 24 +#define FM10K_DGLORTDEC_INNERRSS_ENABLE 0x08000000 +#define FM10K_TUNNEL_CFG 0x0040 +#define FM10K_TUNNEL_CFG_NVGRE_SHIFT 16 +#define FM10K_TUNNEL_CFG_GENEVE 0x0041 +#define FM10K_SWPRI_MAP(_n) ((_n) + 0x0050) +#define FM10K_SWPRI_MAX 16 +#define FM10K_RSSRK(_n, _m) (((_n) * 0x10) + (_m) + 0x0800) +#define FM10K_RSSRK_SIZE 10 +#define FM10K_RSSRK_ENTRIES_PER_REG 4 +#define FM10K_RETA(_n, _m) (((_n) * 0x20) + (_m) + 0x1000) +#define FM10K_RETA_SIZE 32 +#define FM10K_RETA_ENTRIES_PER_REG 4 +#define FM10K_MAX_RSS_INDICES 128 + +/* Rate limiting registers */ +#define FM10K_TC_CREDIT(_n) ((_n) + 0x2000) +#define FM10K_TC_CREDIT_CREDIT_MASK 0x001FFFFF +#define FM10K_TC_MAXCREDIT(_n) ((_n) + 0x2040) +#define FM10K_TC_MAXCREDIT_64K 0x00010000 +#define FM10K_TC_RATE(_n) ((_n) + 
0x2080) +#define FM10K_TC_RATE_QUANTA_MASK 0x0000FFFF +#define FM10K_TC_RATE_INTERVAL_4US_GEN1 0x00020000 +#define FM10K_TC_RATE_INTERVAL_4US_GEN2 0x00040000 +#define FM10K_TC_RATE_INTERVAL_4US_GEN3 0x00080000 +#define FM10K_TC_RATE_STATUS 0x20C0 +#define FM10K_PAUSE 0x20C2 + +/* DMA control registers */ +#define FM10K_DMA_CTRL 0x20C3 +#define FM10K_DMA_CTRL_TX_ENABLE 0x00000001 +#define FM10K_DMA_CTRL_TX_HOST_PENDING 0x00000002 +#define FM10K_DMA_CTRL_TX_DATA 0x00000004 +#define FM10K_DMA_CTRL_TX_ACTIVE 0x00000008 +#define FM10K_DMA_CTRL_RX_ENABLE 0x00000010 +#define FM10K_DMA_CTRL_RX_HOST_PENDING 0x00000020 +#define FM10K_DMA_CTRL_RX_DATA 0x00000040 +#define FM10K_DMA_CTRL_RX_ACTIVE 0x00000080 +#define FM10K_DMA_CTRL_RX_DESC_SIZE 0x00000100 +#define FM10K_DMA_CTRL_MINMSS_SHIFT 9 +#define FM10K_DMA_CTRL_MINMSS_64 0x00008000 +#define FM10K_DMA_CTRL_MAX_HOLD_TIME_SHIFT 23 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN3 0x04800000 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN2 0x04000000 +#define FM10K_DMA_CTRL_MAX_HOLD_1US_GEN1 0x03800000 +#define FM10K_DMA_CTRL_DATAPATH_RESET 0x20000000 +#define FM10K_DMA_CTRL_MAXNUMOFQ_MASK 0xC0000000 +#define FM10K_DMA_CTRL_32_DESC 0x00000000 +#define FM10K_DMA_CTRL_64_DESC 0x40000000 +#define FM10K_DMA_CTRL_128_DESC 0x80000000 + +#define FM10K_DMA_CTRL2 0x20C4 +#define FM10K_DMA_CTRL2_TX_FRAME_SPACING_SHIFT 5 +#define FM10K_DMA_CTRL2_SWITCH_READY 0x00002000 +#define FM10K_DMA_CTRL2_RX_DESC_READ_PRIO_SHIFT 14 +#define FM10K_DMA_CTRL2_TX_DESC_READ_PRIO_SHIFT 17 +#define FM10K_DMA_CTRL2_TX_DATA_READ_PRIO_SHIFT 20 + +/* TSO flags configuration + * First packet contains all flags except for fin and psh + * Middle packet contains only urg and ack + * Last packet contains urg, ack, fin, and psh + */ +#define FM10K_TSO_FLAGS_LOW 0x00300FF6 +#define FM10K_TSO_FLAGS_HI 0x00000039 +#define FM10K_DTXTCPFLGL 0x20C5 +#define FM10K_DTXTCPFLGH 0x20C6 + +#define FM10K_TPH_CTRL 0x20C7 +#define FM10K_TPH_CTRL_DISABLE_READ_HINT 0x00000080 +#define FM10K_MRQC(_n) ((_n) + 0x2100) +#define FM10K_MRQC_TCP_IPV4 0x00000001 +#define FM10K_MRQC_IPV4 0x00000002 +#define FM10K_MRQC_IPV6 0x00000010 +#define FM10K_MRQC_TCP_IPV6 0x00000020 +#define FM10K_MRQC_UDP_IPV4 0x00000040 +#define FM10K_MRQC_UDP_IPV6 0x00000080 + +#define FM10K_TQMAP(_n) ((_n) + 0x2800) +#define FM10K_TQMAP_TABLE_SIZE 2048 +#define FM10K_RQMAP(_n) ((_n) + 0x3000) +#define FM10K_RQMAP_TABLE_SIZE 2048 + +/* Hardware Statistics */ +#define FM10K_STATS_TIMEOUT 0x3800 +#define FM10K_STATS_UR 0x3801 +#define FM10K_STATS_CA 0x3802 +#define FM10K_STATS_UM 0x3803 +#define FM10K_STATS_XEC 0x3804 +#define FM10K_STATS_VLAN_DROP 0x3805 +#define FM10K_STATS_LOOPBACK_DROP 0x3806 +#define FM10K_STATS_NODESC_DROP 0x3807 + +/* Timesync registers */ +#define FM10K_RRTIME_CFG 0x3808 +#define FM10K_RRTIME_LIMIT(_n) ((_n) + 0x380C) +#define FM10K_RRTIME_COUNT(_n) ((_n) + 0x3810) +#define FM10K_SYSTIME 0x3814 +#define FM10K_SYSTIME0 0x3816 +#define FM10K_SYSTIME_CFG 0x3818 +#define FM10K_SYSTIME_CFG_STEP_MASK 0x0000000F + +/* PCIe state registers */ +#define FM10K_PFVFBME(_n) ((_n) + 0x381A) +#define FM10K_PHYADDR 0x381C + +/* Rx ring registers */ +#define FM10K_RDBAL(_n) ((0x40 * (_n)) + 0x4000) +#define FM10K_RDBAH(_n) ((0x40 * (_n)) + 0x4001) +#define FM10K_RDLEN(_n) ((0x40 * (_n)) + 0x4002) +#define FM10K_TPH_RXCTRL(_n) ((0x40 * (_n)) + 0x4003) +#define FM10K_TPH_RXCTRL_DESC_TPHEN 0x00000020 +#define FM10K_TPH_RXCTRL_HDR_TPHEN 0x00000040 +#define FM10K_TPH_RXCTRL_DATA_TPHEN 0x00000080 +#define FM10K_TPH_RXCTRL_DESC_RROEN 0x00000200 +#define 
FM10K_TPH_RXCTRL_DATA_WROEN 0x00002000 +#define FM10K_TPH_RXCTRL_HDR_WROEN 0x00008000 +#define FM10K_RDH(_n) ((0x40 * (_n)) + 0x4004) +#define FM10K_RDT(_n) ((0x40 * (_n)) + 0x4005) +#define FM10K_RXQCTL(_n) ((0x40 * (_n)) + 0x4006) +#define FM10K_RXQCTL_ENABLE 0x00000001 +#define FM10K_RXQCTL_PF 0x000000FC +#define FM10K_RXQCTL_VF_SHIFT 2 +#define FM10K_RXQCTL_VF 0x00000100 +#define FM10K_RXQCTL_ID_MASK (FM10K_RXQCTL_PF | FM10K_RXQCTL_VF) +#define FM10K_RXDCTL(_n) ((0x40 * (_n)) + 0x4007) +#define FM10K_RXDCTL_WRITE_BACK_MIN_DELAY 0x00000001 +#define FM10K_RXDCTL_WRITE_BACK_IMM 0x00000100 +#define FM10K_RXDCTL_DROP_ON_EMPTY 0x00000200 +#define FM10K_RXINT(_n) ((0x40 * (_n)) + 0x4008) +#define FM10K_RXINT_TIMER_SHIFT 8 +#define FM10K_SRRCTL(_n) ((0x40 * (_n)) + 0x4009) +#define FM10K_SRRCTL_BSIZEPKT_SHIFT 8 /* shift _right_ */ +#define FM10K_SRRCTL_BSIZEHDR_SHIFT 2 /* shift _left_ */ +#define FM10K_SRRCTL_BSIZEHDR_MASK 0x00003F00 +#define FM10K_SRRCTL_DESCTYPE_HDR_SPLIT 0x00004000 +#define FM10K_SRRCTL_DESCTYPE_SIZE_SPLIT 0x00008000 +#define FM10K_SRRCTL_PSRTYPE_INNER_TCPHDR 0x00010000 +#define FM10K_SRRCTL_PSRTYPE_INNER_UDPHDR 0x00020000 +#define FM10K_SRRCTL_PSRTYPE_INNER_IPV4HDR 0x00040000 +#define FM10K_SRRCTL_PSRTYPE_INNER_IPV6HDR 0x00080000 +#define FM10K_SRRCTL_PSRTYPE_INNER_L2HDR 0x00100000 +#define FM10K_SRRCTL_PSRTYPE_ENCAPHDR 0x00200000 +#define FM10K_SRRCTL_PSRTYPE_TCPHDR 0x00400000 +#define FM10K_SRRCTL_PSRTYPE_UDPHDR 0x00800000 +#define FM10K_SRRCTL_PSRTYPE_IPV4HDR 0x01000000 +#define FM10K_SRRCTL_PSRTYPE_IPV6HDR 0x02000000 +#define FM10K_SRRCTL_PSRTYPE_L2HDR 0x04000000 +#define FM10K_SRRCTL_LOOPBACK_SUPPRESS 0x40000000 +#define FM10K_SRRCTL_BUFFER_CHAINING_EN 0x80000000 + +/* Rx Statistics */ +#define FM10K_QPRC(_n) ((0x40 * (_n)) + 0x400A) +#define FM10K_QPRDC(_n) ((0x40 * (_n)) + 0x400B) +#define FM10K_QBRC_L(_n) ((0x40 * (_n)) + 0x400C) +#define FM10K_QBRC_H(_n) ((0x40 * (_n)) + 0x400D) + +/* Rx GLORT register */ +#define FM10K_RX_SGLORT(_n) ((0x40 * (_n)) + 0x400E) + +/* Tx ring registers */ +#define FM10K_TDBAL(_n) ((0x40 * (_n)) + 0x8000) +#define FM10K_TDBAH(_n) ((0x40 * (_n)) + 0x8001) +#define FM10K_TDLEN(_n) ((0x40 * (_n)) + 0x8002) +#define FM10K_TPH_TXCTRL(_n) ((0x40 * (_n)) + 0x8003) +#define FM10K_TPH_TXCTRL_DESC_TPHEN 0x00000020 +#define FM10K_TPH_TXCTRL_DESC_RROEN 0x00000200 +#define FM10K_TPH_TXCTRL_DESC_WROEN 0x00000800 +#define FM10K_TPH_TXCTRL_DATA_RROEN 0x00002000 +#define FM10K_TDH(_n) ((0x40 * (_n)) + 0x8004) +#define FM10K_TDT(_n) ((0x40 * (_n)) + 0x8005) +#define FM10K_TXDCTL(_n) ((0x40 * (_n)) + 0x8006) +#define FM10K_TXDCTL_ENABLE 0x00004000 +#define FM10K_TXDCTL_MAX_TIME_SHIFT 16 +#define FM10K_TXDCTL_PUSH_DESC 0x10000000 +#define FM10K_TXQCTL(_n) ((0x40 * (_n)) + 0x8007) +#define FM10K_TXQCTL_PF 0x0000003F +#define FM10K_TXQCTL_VF 0x00000040 +#define FM10K_TXQCTL_ID_MASK (FM10K_TXQCTL_PF | FM10K_TXQCTL_VF) +#define FM10K_TXQCTL_PC_SHIFT 7 +#define FM10K_TXQCTL_PC_MASK 0x00000380 +#define FM10K_TXQCTL_TC_SHIFT 10 +#define FM10K_TXQCTL_TC_MASK 0x0000FC00 +#define FM10K_TXQCTL_VID_SHIFT 16 +#define FM10K_TXQCTL_VID_MASK 0x0FFF0000 +#define FM10K_TXQCTL_UNLIMITED_BW 0x10000000 +#define FM10K_TXQCTL_PUSHMODEDIS 0x20000000 +#define FM10K_TXINT(_n) ((0x40 * (_n)) + 0x8008) +#define FM10K_TXINT_TIMER_SHIFT 8 + +/* Tx Statistics */ +#define FM10K_QPTC(_n) ((0x40 * (_n)) + 0x8009) +#define FM10K_QBTC_L(_n) ((0x40 * (_n)) + 0x800A) +#define FM10K_QBTC_H(_n) ((0x40 * (_n)) + 0x800B) + +/* Tx Push registers */ +#define FM10K_TQDLOC(_n) ((0x40 * (_n)) + 
0x800C) +#define FM10K_TQDLOC_BASE_32_DESC 0x08 +#define FM10K_TQDLOC_BASE_64_DESC 0x10 +#define FM10K_TQDLOC_BASE_128_DESC 0x20 +#define FM10K_TQDLOC_SIZE_32_DESC 0x00050000 +#define FM10K_TQDLOC_SIZE_64_DESC 0x00060000 +#define FM10K_TQDLOC_SIZE_128_DESC 0x00070000 +#define FM10K_TQDLOC_SIZE_SHIFT 16 +#define FM10K_TX_DCACHE(_n, _m) ((0x400 * (_n)) + (0x4 * (_m)) + 0x40000) + +/* Tx GLORT registers */ +#define FM10K_TX_SGLORT(_n) ((0x40 * (_n)) + 0x800D) +#define FM10K_PFVTCTL(_n) ((0x40 * (_n)) + 0x800E) +#define FM10K_PFVTCTL_FTAG_DESC_ENABLE 0x00000001 + +/* Interrupt moderation and control registers */ +#define FM10K_PBACL(_n) ((_n) + 0x10000) +#define FM10K_INT_MAP(_n) ((_n) + 0x10080) +#define FM10K_INT_MAP_TIMER0 0x00000000 +#define FM10K_INT_MAP_TIMER1 0x00000100 +#define FM10K_INT_MAP_IMMEDIATE 0x00000200 +#define FM10K_INT_MAP_DISABLE 0x00000300 +#define FM10K_MSIX_VECTOR_ADDR_LO(_n) ((0x4 * (_n)) + 0x11000) +#define FM10K_MSIX_VECTOR_ADDR_HI(_n) ((0x4 * (_n)) + 0x11001) +#define FM10K_MSIX_VECTOR_DATA(_n) ((0x4 * (_n)) + 0x11002) +#define FM10K_MSIX_VECTOR_MASK(_n) ((0x4 * (_n)) + 0x11003) +#define FM10K_INT_CTRL 0x12000 +#define FM10K_INT_CTRL_ENABLEMODERATOR 0x00000400 +#define FM10K_ITR(_n) ((_n) + 0x12400) +#define FM10K_ITR_INTERVAL1_SHIFT 12 +#define FM10K_ITR_TIMER0_EXPIRED 0x01000000 +#define FM10K_ITR_TIMER1_EXPIRED 0x02000000 +#define FM10K_ITR_PENDING0 0x04000000 +#define FM10K_ITR_PENDING1 0x08000000 +#define FM10K_ITR_PENDING2 0x10000000 +#define FM10K_ITR_AUTOMASK 0x20000000 +#define FM10K_ITR_MASK_SET 0x40000000 +#define FM10K_ITR_MASK_CLEAR 0x80000000 +#define FM10K_ITR2(_n) ((0x2 * (_n)) + 0x12800) +#define FM10K_ITR2_LP(_n) ((0x2 * (_n)) + 0x12801) +#define FM10K_ITR_REG_COUNT 768 +#define FM10K_ITR_REG_COUNT_PF 256 + +/* Switch manager interrupt registers */ +#define FM10K_IP 0x13000 +#define FM10K_IP_HOT_RESET 0x00000001 +#define FM10K_IP_DEVICE_STATE_CHANGE 0x00000002 +#define FM10K_IP_MAILBOX 0x00000004 +#define FM10K_IP_VPD_REQUEST 0x00000008 +#define FM10K_IP_SRAMERROR 0x00000010 +#define FM10K_IP_PFLR 0x00000020 +#define FM10K_IP_DATAPATHRESET 0x00000040 +#define FM10K_IP_OUTOFRESET 0x00000080 +#define FM10K_IP_NOTINRESET 0x00000100 +#define FM10K_IP_TIMEOUT 0x00000200 +#define FM10K_IP_VFLR 0x00000400 +#define FM10K_IM 0x13001 +#define FM10K_IB 0x13002 +#define FM10K_SRAM_IP 0x13003 +#define FM10K_SRAM_IM 0x13004 + +/* VLAN registers */ +#define FM10K_VLAN_TABLE(_n, _m) ((0x80 * (_n)) + (_m) + 0x14000) +#define FM10K_VLAN_TABLE_SIZE 128 + +/* VLAN specific message offsets */ +#define FM10K_VLAN_TABLE_VID_MAX 4096 +#define FM10K_VLAN_TABLE_VSI_MAX 64 +#define FM10K_VLAN_LENGTH_SHIFT 16 +#define FM10K_VLAN_CLEAR (1 << 15) +#define FM10K_VLAN_ALL \ + ((FM10K_VLAN_TABLE_VID_MAX - 1) << FM10K_VLAN_LENGTH_SHIFT) + +/* VF FLR event notification registers */ +#define FM10K_PFVFLRE(_n) ((0x1 * (_n)) + 0x18844) +#define FM10K_PFVFLREC(_n) ((0x1 * (_n)) + 0x18846) + +/* Defines for size of uncacheable and write-combining memories */ +#define FM10K_UC_ADDR_START 0x000000 /* start of standard regs */ +#define FM10K_WC_ADDR_START 0x100000 /* start of Tx Desc Cache */ +#define FM10K_DBI_ADDR_START 0x200000 /* start of debug registers */ +#define FM10K_UC_ADDR_SIZE (FM10K_WC_ADDR_START - FM10K_UC_ADDR_START) +#define FM10K_WC_ADDR_SIZE (FM10K_DBI_ADDR_START - FM10K_WC_ADDR_START) + +/* Define timeouts for resets and disables */ +#define FM10K_QUEUE_DISABLE_TIMEOUT 100 +#define FM10K_RESET_TIMEOUT 150 + +/* Maximum supported combined inner and outer header length for 
encapsulation */ +#define FM10K_TUNNEL_HEADER_LENGTH 184 + +/* VF registers */ +#define FM10K_VFCTRL 0x00000 +#define FM10K_VFCTRL_RST 0x00000008 +#define FM10K_VFINT_MAP 0x00030 +#define FM10K_VFSYSTIME 0x00040 +#define FM10K_VFITR(_n) ((_n) + 0x00060) +#define FM10K_VFPBACL(_n) ((_n) + 0x00008) + +/* Registers contained in BAR 4 for Switch management */ +#define FM10K_SW_SYSTIME_CFG 0x0224C +#define FM10K_SW_SYSTIME_CFG_STEP_SHIFT 4 +#define FM10K_SW_SYSTIME_CFG_ADJUST_MASK 0xFF000000 +#define FM10K_SW_SYSTIME_ADJUST 0x0224D +#define FM10K_SW_SYSTIME_ADJUST_MASK 0x3FFFFFFF +#define FM10K_SW_SYSTIME_ADJUST_DIR_NEGATIVE 0x80000000 +#define FM10K_SW_SYSTIME_PULSE(_n) ((_n) + 0x02252) + +#ifndef ETH_ALEN +#define ETH_ALEN 6 +#endif /* ETH_ALEN */ + + + + +enum fm10k_int_source { + fm10k_int_Mailbox = 0, + fm10k_int_PCIeFault = 1, + fm10k_int_SwitchUpDown = 2, + fm10k_int_SwitchEvent = 3, + fm10k_int_SRAM = 4, + fm10k_int_VFLR = 5, + fm10k_int_MaxHoldTime = 6, + fm10k_int_sources_max_pf +}; + +/* PCIe bus speeds */ +enum fm10k_bus_speed { + fm10k_bus_speed_unknown = 0, + fm10k_bus_speed_2500 = 2500, + fm10k_bus_speed_5000 = 5000, + fm10k_bus_speed_8000 = 8000, + fm10k_bus_speed_reserved +}; + +/* PCIe bus widths */ +enum fm10k_bus_width { + fm10k_bus_width_unknown = 0, + fm10k_bus_width_pcie_x1 = 1, + fm10k_bus_width_pcie_x2 = 2, + fm10k_bus_width_pcie_x4 = 4, + fm10k_bus_width_pcie_x8 = 8, + fm10k_bus_width_reserved +}; + +/* PCIe payload sizes */ +enum fm10k_bus_payload { + fm10k_bus_payload_unknown = 0, + fm10k_bus_payload_128 = 1, + fm10k_bus_payload_256 = 2, + fm10k_bus_payload_512 = 3, + fm10k_bus_payload_reserved +}; + +/* Bus parameters */ +struct fm10k_bus_info { + enum fm10k_bus_speed speed; + enum fm10k_bus_width width; + enum fm10k_bus_payload payload; +}; + +/* Statistics related declarations */ +struct fm10k_hw_stat { + u64 count; + u32 base_l; + u32 base_h; +}; + +struct fm10k_hw_stats_q { + struct fm10k_hw_stat tx_bytes; + struct fm10k_hw_stat tx_packets; +#define tx_stats_idx tx_packets.base_h + struct fm10k_hw_stat rx_bytes; + struct fm10k_hw_stat rx_packets; +#define rx_stats_idx rx_packets.base_h + struct fm10k_hw_stat rx_drops; +}; + +struct fm10k_hw_stats { + struct fm10k_hw_stat timeout; +#define stats_idx timeout.base_h + struct fm10k_hw_stat ur; + struct fm10k_hw_stat ca; + struct fm10k_hw_stat um; + struct fm10k_hw_stat xec; + struct fm10k_hw_stat vlan_drop; + struct fm10k_hw_stat loopback_drop; + struct fm10k_hw_stat nodesc_drop; + struct fm10k_hw_stats_q q[FM10K_MAX_QUEUES_PF]; +}; + +/* Establish DGLORT feature priority */ +enum fm10k_dglortdec_idx { + fm10k_dglort_default = 0, + fm10k_dglort_vf_rsvd0 = 1, + fm10k_dglort_vf_rss = 2, + fm10k_dglort_pf_rsvd0 = 3, + fm10k_dglort_pf_queue = 4, + fm10k_dglort_pf_vsi = 5, + fm10k_dglort_pf_rsvd1 = 6, + fm10k_dglort_pf_rss = 7 +}; + +struct fm10k_dglort_cfg { + u16 glort; /* GLORT base */ + u16 queue_b; /* Base value for queue */ + u8 vsi_b; /* Base value for VSI */ + u8 idx; /* index of DGLORTDEC entry */ + u8 rss_l; /* RSS indices */ + u8 pc_l; /* Priority Class indices */ + u8 vsi_l; /* Number of bits from GLORT used to determine VSI */ + u8 queue_l; /* Number of bits from GLORT used to determine queue */ + u8 shared_l; /* Ignored bits from GLORT resulting in shared VSI */ + u8 inner_rss; /* Boolean value if inner header is used for RSS */ +}; + +enum fm10k_pca_fault { + PCA_NO_FAULT, + PCA_UNMAPPED_ADDR, + PCA_BAD_QACCESS_PF, + PCA_BAD_QACCESS_VF, + PCA_MALICIOUS_REQ, + PCA_POISONED_TLP, + PCA_TLP_ABORT, + __PCA_MAX 
+}; + +enum fm10k_thi_fault { + THI_NO_FAULT, + THI_MAL_DIS_Q_FAULT, + __THI_MAX +}; + +enum fm10k_fum_fault { + FUM_NO_FAULT, + FUM_UNMAPPED_ADDR, + FUM_POISONED_TLP, + FUM_BAD_VF_QACCESS, + FUM_ADD_DECODE_ERR, + FUM_RO_ERROR, + FUM_QPRC_CRC_ERROR, + FUM_CSR_TIMEOUT, + FUM_INVALID_TYPE, + FUM_INVALID_LENGTH, + FUM_INVALID_BE, + FUM_INVALID_ALIGN, + __FUM_MAX +}; + +struct fm10k_fault { + u64 address; /* Address at the time fault was detected */ + u32 specinfo; /* Extra info on this fault (fault dependent) */ + u8 type; /* Fault value dependent on subunit */ + u8 func; /* Function number of the fault */ +}; + +struct fm10k_mac_ops { + /* basic bring-up and tear-down */ + s32 (*reset_hw)(struct fm10k_hw *); + s32 (*init_hw)(struct fm10k_hw *); + s32 (*start_hw)(struct fm10k_hw *); + s32 (*stop_hw)(struct fm10k_hw *); + s32 (*get_bus_info)(struct fm10k_hw *); + s32 (*get_host_state)(struct fm10k_hw *, bool *); + bool (*is_slot_appropriate)(struct fm10k_hw *); + s32 (*update_vlan)(struct fm10k_hw *, u32, u8, bool); + s32 (*read_mac_addr)(struct fm10k_hw *); + s32 (*update_uc_addr)(struct fm10k_hw *, u16, const u8 *, + u16, bool, u8); + s32 (*update_mc_addr)(struct fm10k_hw *, u16, const u8 *, u16, bool); + s32 (*update_xcast_mode)(struct fm10k_hw *, u16, u8); + void (*update_int_moderator)(struct fm10k_hw *); + s32 (*update_lport_state)(struct fm10k_hw *, u16, u16, bool); + void (*update_hw_stats)(struct fm10k_hw *, struct fm10k_hw_stats *); + void (*rebind_hw_stats)(struct fm10k_hw *, struct fm10k_hw_stats *); + s32 (*configure_dglort_map)(struct fm10k_hw *, + struct fm10k_dglort_cfg *); + void (*set_dma_mask)(struct fm10k_hw *, u64); + s32 (*get_fault)(struct fm10k_hw *, int, struct fm10k_fault *); + void (*request_lport_map)(struct fm10k_hw *); + s32 (*adjust_systime)(struct fm10k_hw *, s32 ppb); + u64 (*read_systime)(struct fm10k_hw *); + s32 (*request_tx_timestamp_mode)(struct fm10k_hw *, u16, u8); +}; + +enum fm10k_mac_type { + fm10k_mac_unknown = 0, + fm10k_mac_pf, + fm10k_mac_vf, + fm10k_num_macs +}; + +struct fm10k_mac_info { + struct fm10k_mac_ops ops; + enum fm10k_mac_type type; + u8 addr[ETH_ALEN]; + u8 perm_addr[ETH_ALEN]; + u16 default_vid; + u16 max_msix_vectors; + u16 max_queues; + bool vlan_override; + bool get_host_state; + bool tx_ready; + u32 dglort_map; +}; + +struct fm10k_swapi_table_info { + u32 used; + u32 avail; +}; + +struct fm10k_swapi_info { + u32 status; + struct fm10k_swapi_table_info mac; + struct fm10k_swapi_table_info nexthop; + struct fm10k_swapi_table_info ffu; +}; + +enum fm10k_xcast_modes { + FM10K_XCAST_MODE_ALLMULTI = 0, + FM10K_XCAST_MODE_MULTI = 1, + FM10K_XCAST_MODE_PROMISC = 2, + FM10K_XCAST_MODE_NONE = 3, + FM10K_XCAST_MODE_DISABLE = 4 +}; + +enum fm10k_timestamp_modes { + FM10K_TIMESTAMP_MODE_NONE = 0, + FM10K_TIMESTAMP_MODE_PEP_TO_PEP = 1, + FM10K_TIMESTAMP_MODE_PEP_TO_ANY = 2, +}; + +#define FM10K_VF_TC_MAX 100000 /* 100,000 Mb/s aka 100Gb/s */ +#define FM10K_VF_TC_MIN 1 /* 1 Mb/s is the slowest rate */ + +struct fm10k_vf_info { + /* mbx must be first field in struct unless all default IOV message + * handlers are redone as the assumption is that vf_info starts + * at the same offset as the mailbox + */ + struct fm10k_mbx_info mbx; /* PF side of VF mailbox */ + int rate; /* Tx BW cap as defined by OS */ + u16 glort; /* resource tag for this VF */ + u16 sw_vid; /* Switch API assigned VLAN */ + u16 pf_vid; /* PF assigned Default VLAN */ + u8 mac[ETH_ALEN]; /* PF Default MAC address */ + u8 vsi; /* VSI identifier */ + u8 vf_idx; /* which VF this is 
*/ + u8 vf_flags; /* flags indicating what modes + * are supported for the port + */ +}; + +#define FM10K_VF_FLAG_ALLMULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_ALLMULTI) +#define FM10K_VF_FLAG_MULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_MULTI) +#define FM10K_VF_FLAG_PROMISC_CAPABLE ((u8)1 << FM10K_XCAST_MODE_PROMISC) +#define FM10K_VF_FLAG_NONE_CAPABLE ((u8)1 << FM10K_XCAST_MODE_NONE) +#define FM10K_VF_FLAG_CAPABLE(vf_info) ((vf_info)->vf_flags & (u8)0xF) +#define FM10K_VF_FLAG_ENABLED(vf_info) ((vf_info)->vf_flags >> 4) +#define FM10K_VF_FLAG_SET_MODE(mode) ((u8)0x10 << (mode)) +#define FM10K_VF_FLAG_ENABLED_MODE_SHIFT 4 +#define FM10K_VF_FLAG_SET_MODE_MASK ((u8)0xF0) +#define FM10K_VF_FLAG_SET_MODE_NONE \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_NONE) +#define FM10K_VF_FLAG_MULTI_ENABLED \ + (FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_ALLMULTI) | \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_MULTI) | \ + FM10K_VF_FLAG_SET_MODE(FM10K_XCAST_MODE_PROMISC)) + +struct fm10k_iov_ops { + /* IOV related bring-up and tear-down */ + s32 (*assign_resources)(struct fm10k_hw *, u16, u16); + s32 (*configure_tc)(struct fm10k_hw *, u16, int); + s32 (*assign_int_moderator)(struct fm10k_hw *, u16); + s32 (*assign_default_mac_vlan)(struct fm10k_hw *, + struct fm10k_vf_info *); + s32 (*reset_resources)(struct fm10k_hw *, + struct fm10k_vf_info *); + s32 (*set_lport)(struct fm10k_hw *, struct fm10k_vf_info *, u16, u8); + void (*reset_lport)(struct fm10k_hw *, struct fm10k_vf_info *); + void (*update_stats)(struct fm10k_hw *, struct fm10k_hw_stats_q *, u16); + s32 (*report_timestamp)(struct fm10k_hw *, struct fm10k_vf_info *, u64); +}; + +struct fm10k_iov_info { + struct fm10k_iov_ops ops; + u16 total_vfs; + u16 num_vfs; + u16 num_pools; +}; + +struct fm10k_hw { + u32 *hw_addr; + u32 *sw_addr; + void *back; + struct fm10k_mac_info mac; + struct fm10k_bus_info bus; + struct fm10k_bus_info bus_caps; + struct fm10k_iov_info iov; + struct fm10k_mbx_info mbx; + struct fm10k_swapi_info swapi; + u16 device_id; + u16 vendor_id; + u16 subsystem_device_id; + u16 subsystem_vendor_id; + u8 revision_id; +}; + +/* Number of Transmit and Receive Descriptors must be a multiple of 8 */ +#define FM10K_REQ_TX_DESCRIPTOR_MULTIPLE 8 +#define FM10K_REQ_RX_DESCRIPTOR_MULTIPLE 8 + +/* Transmit Descriptor */ +struct fm10k_tx_desc { + __le64 buffer_addr; /* Address of the descriptor's data buffer */ + __le16 buflen; /* Length of data to be DMAed */ + __le16 vlan; /* VLAN_ID and VPRI to be inserted in FTAG */ + __le16 mss; /* MSS for segmentation offload */ + u8 hdrlen; /* Header size for segmentation offload */ + u8 flags; /* Status and offload request flags */ +}; + +/* Transmit Descriptor Cache Structure */ +struct fm10k_tx_desc_cache { + struct fm10k_tx_desc tx_desc[256]; +}; + +#define FM10K_TXD_FLAG_INT 0x01 +#define FM10K_TXD_FLAG_TIME 0x02 +#define FM10K_TXD_FLAG_CSUM 0x04 +#define FM10K_TXD_FLAG_CSUM2 0x08 +#define FM10K_TXD_FLAG_FTAG 0x10 +#define FM10K_TXD_FLAG_RS 0x20 +#define FM10K_TXD_FLAG_LAST 0x40 +#define FM10K_TXD_FLAG_DONE 0x80 + +#define FM10K_TXD_VLAN_PRI_SHIFT 12 + +/* These macros are meant to enable optimal placement of the RS and INT + * bits. It will point us to the last descriptor in the cache for either the + * start of the packet, or the end of the packet. If the index is actually + * at the start of the FIFO it will point to the offset for the last index + * in the FIFO to prevent an unnecessary write. 
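The index arithmetic this comment describes is easy to check in isolation; a small standalone program exercising the same expression as FM10K_TXD_WB_IDX (defined just below):

/* With a 4-descriptor cache line, ((idx) - 1) | 3 maps every index onto
 * the last slot of the cache line holding the previous descriptor, so
 * indices 1-4 all report at slot 3, indices 5-8 at slot 7, and so on.
 */
#include <stdio.h>

int main(void)
{
	unsigned int idx;

	for (idx = 1; idx <= 8; idx++)
		printf("idx %u -> write-back slot %u\n", idx, (idx - 1) | 3);

	return 0;
}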
+ */ +#define FM10K_TXD_WB_FIFO_SIZE 4 +#define FM10K_TXD_WB_IDX(idx) \ + (((idx) - 1) | (FM10K_TXD_WB_FIFO_SIZE - 1)) + +/* Receive Descriptor - 32B */ +union fm10k_rx_desc { + struct { + __le64 pkt_addr; /* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ + __le64 reserved; /* Empty space, RSS hash */ + __le64 timestamp; + } q; /* Read, Writeback, 64b quad-words */ + struct { + __le32 data; /* RSS and header data */ + __le32 rss; /* RSS Hash */ + __le32 staterr; + __le32 vlan_len; + __le32 glort; /* sglort/dglort */ + } d; /* Writeback, 32b double-words */ + struct { + __le16 pkt_info; /* RSS, Pkt type */ + __le16 hdr_info; /* Splithdr, hdrlen, xC */ + __le16 rss_lower; + __le16 rss_upper; + __le16 status; /* status/error */ + __le16 csum_err; /* checksum or extended error value */ + __le16 length; /* Packet length */ + __le16 vlan; /* VLAN tag */ + __le16 dglort; + __le16 sglort; + } w; /* Writeback, 16b words */ +}; + +#define FM10K_RXD_RSSTYPE_MASK 0x000F +enum fm10k_rdesc_rss_type { + FM10K_RSSTYPE_NONE = 0x0, + FM10K_RSSTYPE_IPV4_TCP = 0x1, + FM10K_RSSTYPE_IPV4 = 0x2, + FM10K_RSSTYPE_IPV6_TCP = 0x3, + /* Reserved 0x4 */ + FM10K_RSSTYPE_IPV6 = 0x5, + /* Reserved 0x6 */ + FM10K_RSSTYPE_IPV4_UDP = 0x7, + FM10K_RSSTYPE_IPV6_UDP = 0x8 + /* Reserved 0x9 - 0xF */ +}; + +#define FM10K_RXD_PKTTYPE_MASK 0x03F0 +#define FM10K_RXD_PKTTYPE_MASK_L3 0x0070 +#define FM10K_RXD_PKTTYPE_MASK_L4 0x0380 +#define FM10K_RXD_PKTTYPE_SHIFT 4 +#define FM10K_RXD_PKTTYPE_INNER_MASK_L3 0x1C00 +#define FM10K_RXD_PKTTYPE_INNER_MASK_L4 0xE000 +#define FM10K_RXD_PKTTYPE_INNER_SHIFT 10 +enum fm10k_rdesc_pkt_type { + /* L3 type */ + FM10K_PKTTYPE_OTHER = 0x00, + FM10K_PKTTYPE_IPV4 = 0x01, + FM10K_PKTTYPE_IPV4_EX = 0x02, + FM10K_PKTTYPE_IPV6 = 0x03, + FM10K_PKTTYPE_IPV6_EX = 0x04, + + /* L4 type */ + FM10K_PKTTYPE_TCP = 0x08, + FM10K_PKTTYPE_UDP = 0x10, + FM10K_PKTTYPE_GRE = 0x18, + FM10K_PKTTYPE_VXLAN = 0x20, + FM10K_PKTTYPE_NVGRE = 0x28, + FM10K_PKTTYPE_GENEVE = 0x30 +}; + +#define FM10K_RXD_HDR_INFO_XC_MASK 0x0006 +enum fm10k_rxdesc_xc { + FM10K_XC_UNICAST = 0x0, + FM10K_XC_MULTICAST = 0x4, + FM10K_XC_BROADCAST = 0x6 +}; + +#define FM10K_RXD_HDR_INFO_LEN_SHIFT 5 +#define FM10K_RXD_HDR_INFO_SPH 0x8000 + +#define FM10K_RXD_STATUS_DD 0x0001 /* Descriptor done */ +#define FM10K_RXD_STATUS_EOP 0x0002 /* End of packet */ +#define FM10K_RXD_STATUS_VEXT 0x0004 /* A VLAN tag is present */ +#define FM10K_RXD_STATUS_IPCS 0x0008 /* Indicates IPv4 csum */ +#define FM10K_RXD_STATUS_L4CS 0x0010 /* Indicates an L4 csum */ +#define FM10K_RXD_STATUS_IPCS2 0x0020 /* Inner header IPv4 csum */ +#define FM10K_RXD_STATUS_L4CS2 0x0040 /* Inner header L4 csum */ +#define FM10K_RXD_STATUS_IPFRAG_MASK 0x0180 /* Fragment mask */ +#define FM10K_RXD_STATUS_IPFRAG_CSUM 0x0100 /* Fragment w/ CSUM field */ +#define FM10K_RXD_STATUS_VEXT2 0x0200 /* A custom tag is present */ +#define FM10K_RXD_STATUS_HBO 0x0400 /* header buffer overrun */ +#define FM10K_RXD_STATUS_L4E2 0x0800 /* Inner header L4 csum err */ +#define FM10K_RXD_STATUS_IPE2 0x1000 /* Inner header IPv4 csum err */ +#define FM10K_RXD_STATUS_RXE 0x2000 /* Generic Rx error */ +#define FM10K_RXD_STATUS_L4E 0x4000 /* L4 csum error */ +#define FM10K_RXD_STATUS_IPE 0x8000 /* IPv4 csum error */ + +#define FM10K_RXD_ERR_SWITCH_ERROR 0x0001 /* Switch found bad packet */ +#define FM10K_RXD_ERR_NO_DESCRIPTOR 0x0002 /* No descriptor available */ +#define FM10K_RXD_ERR_PP_ERROR 0x0004 /* RAM error during processing */ +#define FM10K_RXD_ERR_SWITCH_READY 0x0008 /* Link 
transition mid-packet */ +#define FM10K_RXD_ERR_TOO_BIG 0x0010 /* Pkt too big for single buf */ + +#define FM10K_RXD_VLAN_ID_MASK 0x0FFF +#define FM10K_RXD_VLAN_PRI_SHIFT FM10K_TXD_VLAN_PRI_SHIFT + +struct fm10k_ftag { + __be16 swpri_type_user; + __be16 vlan; + __be16 sglort; + __be16 dglort; +}; + +#endif /* _FM10K_TYPE_H */ diff --git a/lib/librte_pmd_fm10k/base/fm10k_vf.c b/lib/librte_pmd_fm10k/base/fm10k_vf.c new file mode 100644 index 0000000..2246688 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_vf.c @@ -0,0 +1,641 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "fm10k_vf.h" + +/** + * fm10k_stop_hw_vf - Stop Tx/Rx units + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_stop_hw_vf(struct fm10k_hw *hw) +{ + u8 *perm_addr = hw->mac.perm_addr; + u32 bal = 0, bah = 0; + s32 err; + u16 i; + + DEBUGFUNC("fm10k_stop_hw_vf"); + + /* we need to disable the queues before taking further steps */ + err = fm10k_stop_hw_generic(hw); + if (err) + return err; + + /* If permanent address is set then we need to restore it */ + if (FM10K_IS_VALID_ETHER_ADDR(perm_addr)) { + bal = (((u32)perm_addr[3]) << 24) | + (((u32)perm_addr[4]) << 16) | + (((u32)perm_addr[5]) << 8); + bah = (((u32)0xFF) << 24) | + (((u32)perm_addr[0]) << 16) | + (((u32)perm_addr[1]) << 8) | + ((u32)perm_addr[2]); + } + + /* The queues have already been disabled so we just need to + * update their base address registers + */ + for (i = 0; i < hw->mac.max_queues; i++) { + FM10K_WRITE_REG(hw, FM10K_TDBAL(i), bal); + FM10K_WRITE_REG(hw, FM10K_TDBAH(i), bah); + FM10K_WRITE_REG(hw, FM10K_RDBAL(i), bal); + FM10K_WRITE_REG(hw, FM10K_RDBAH(i), bah); + } + + return FM10K_SUCCESS; +} + +/** + * fm10k_reset_hw_vf - VF hardware reset + * @hw: pointer to hardware structure + * + * This function should return the hardware to a state similar to the + * one it is in after just being initialized. 
+ **/ +STATIC s32 fm10k_reset_hw_vf(struct fm10k_hw *hw) +{ + s32 err; + + DEBUGFUNC("fm10k_reset_hw_vf"); + + /* shut down queues we own and reset DMA configuration */ + err = fm10k_stop_hw_vf(hw); + if (err) + return err; + + /* Inititate VF reset */ + FM10K_WRITE_REG(hw, FM10K_VFCTRL, FM10K_VFCTRL_RST); + + /* Flush write and allow 100us for reset to complete */ + FM10K_WRITE_FLUSH(hw); + usec_delay(FM10K_RESET_TIMEOUT); + + /* Clear reset bit and verify it was cleared */ + FM10K_WRITE_REG(hw, FM10K_VFCTRL, 0); + if (FM10K_READ_REG(hw, FM10K_VFCTRL) & FM10K_VFCTRL_RST) + err = FM10K_ERR_RESET_FAILED; + + return err; +} + +/** + * fm10k_init_hw_vf - VF hardware initialization + * @hw: pointer to hardware structure + * + **/ +STATIC s32 fm10k_init_hw_vf(struct fm10k_hw *hw) +{ + u32 tqdloc, tqdloc0 = ~FM10K_READ_REG(hw, FM10K_TQDLOC(0)); + s32 err; + u16 i; + + DEBUGFUNC("fm10k_init_hw_vf"); + + /* assume we always have at least 1 queue */ + for (i = 1; tqdloc0 && (i < FM10K_MAX_QUEUES_POOL); i++) { + /* verify the Descriptor cache offsets are increasing */ + tqdloc = ~FM10K_READ_REG(hw, FM10K_TQDLOC(i)); + if (!tqdloc || (tqdloc == tqdloc0)) + break; + + /* check to verify the PF doesn't own any of our queues */ + if (!~FM10K_READ_REG(hw, FM10K_TXQCTL(i)) || + !~FM10K_READ_REG(hw, FM10K_RXQCTL(i))) + break; + } + + /* shut down queues we own and reset DMA configuration */ + err = fm10k_disable_queues_generic(hw, i); + if (err) + return err; + + /* record maximum queue count */ + hw->mac.max_queues = i; + + /* fetch default VLAN */ + hw->mac.default_vid = (FM10K_READ_REG(hw, FM10K_TXQCTL(0)) & + FM10K_TXQCTL_VID_MASK) >> FM10K_TXQCTL_VID_SHIFT; + + return FM10K_SUCCESS; +} + +/** + * fm10k_is_slot_appropriate_vf - Indicate appropriate slot for this SKU + * @hw: pointer to hardware structure + * + * Looks at the PCIe bus info to confirm whether or not this slot can support + * the necessary bandwidth for this device. Since the VF has no control over + * the "slot" it is in, always indicate that the slot is appropriate. + **/ +STATIC bool fm10k_is_slot_appropriate_vf(struct fm10k_hw *hw) +{ + UNREFERENCED_1PARAMETER(hw); + DEBUGFUNC("fm10k_is_slot_appropriate_vf"); + + return TRUE; +} + +/* This structure defines the attibutes to be parsed below */ +const struct fm10k_tlv_attr fm10k_mac_vlan_msg_attr[] = { + FM10K_TLV_ATTR_U32(FM10K_MAC_VLAN_MSG_VLAN), + FM10K_TLV_ATTR_BOOL(FM10K_MAC_VLAN_MSG_SET), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_MAC), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_DEFAULT_MAC), + FM10K_TLV_ATTR_MAC_ADDR(FM10K_MAC_VLAN_MSG_MULTICAST), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_update_vlan_vf - Update status of VLAN ID in VLAN filter table + * @hw: pointer to hardware structure + * @vid: VLAN ID to add to table + * @vsi: Reserved, should always be 0 + * @set: Indicates if this is a set or clear operation + * + * This function adds or removes the corresponding VLAN ID from the VLAN + * filter table for this VF. 
+ **/ +STATIC s32 fm10k_update_vlan_vf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[4]; + + /* verify the index is not set */ + if (vsi) + return FM10K_ERR_PARAM; + + /* verify upper 4 bits of vid and length are 0 */ + if ((vid << 16 | vid) >> 28) + return FM10K_ERR_PARAM; + + /* encode set bit into the VLAN ID */ + if (!set) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_u32(msg, FM10K_MAC_VLAN_MSG_VLAN, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_msg_mac_vlan_vf - Read device MAC address from mailbox message + * @hw: pointer to the HW structure + * @results: Attributes for message + * @mbx: unused mailbox data + * + * This function should determine the MAC address for the VF + **/ +s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + u8 perm_addr[ETH_ALEN]; + u16 vid; + s32 err; + + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_mac_vlan_vf"); + + /* record MAC address requested */ + err = fm10k_tlv_attr_get_mac_vlan( + results[FM10K_MAC_VLAN_MSG_DEFAULT_MAC], + perm_addr, &vid); + if (err) + return err; + + memcpy(hw->mac.perm_addr, perm_addr, ETH_ALEN); + hw->mac.default_vid = vid & (FM10K_VLAN_TABLE_VID_MAX - 1); + hw->mac.vlan_override = !!(vid & FM10K_VLAN_CLEAR); + + return FM10K_SUCCESS; +} + +/** + * fm10k_read_mac_addr_vf - Read device MAC address + * @hw: pointer to the HW structure + * + * This function should determine the MAC address for the VF + **/ +STATIC s32 fm10k_read_mac_addr_vf(struct fm10k_hw *hw) +{ + u8 perm_addr[ETH_ALEN]; + u32 base_addr; + + DEBUGFUNC("fm10k_read_mac_addr_vf"); + + base_addr = FM10K_READ_REG(hw, FM10K_TDBAL(0)); + + /* last byte should be 0 */ + if (base_addr << 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[3] = (u8)(base_addr >> 24); + perm_addr[4] = (u8)(base_addr >> 16); + perm_addr[5] = (u8)(base_addr >> 8); + + base_addr = FM10K_READ_REG(hw, FM10K_TDBAH(0)); + + /* first byte should be all 1's */ + if ((~base_addr) >> 24) + return FM10K_ERR_INVALID_MAC_ADDR; + + perm_addr[0] = (u8)(base_addr >> 16); + perm_addr[1] = (u8)(base_addr >> 8); + perm_addr[2] = (u8)(base_addr); + + memcpy(hw->mac.perm_addr, perm_addr, ETH_ALEN); + memcpy(hw->mac.addr, perm_addr, ETH_ALEN); + + return FM10K_SUCCESS; +} + +/** + * fm10k_update_uc_addr_vf - Update device unicast addresses + * @hw: pointer to the HW structure + * @glort: unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * @flags: flags field to indicate add and secure - unused + * + * This function is used to add or remove unicast MAC addresses for + * the VF. 
+ **/ +STATIC s32 fm10k_update_uc_addr_vf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add, u8 flags) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[7]; + + DEBUGFUNC("fm10k_update_uc_addr_vf"); + + UNREFERENCED_2PARAMETER(glort, flags); + + /* verify VLAN ID is valid */ + if (vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* verify MAC address is valid */ + if (!FM10K_IS_VALID_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + /* verify we are not locked down on the MAC address */ + if (FM10K_IS_VALID_ETHER_ADDR(hw->mac.perm_addr) && + memcmp(hw->mac.perm_addr, mac, ETH_ALEN)) + return FM10K_ERR_PARAM; + + /* add bit to notify us if this is a set or clear operation */ + if (!add) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_MAC, mac, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_mc_addr_vf - Update device multicast addresses + * @hw: pointer to the HW structure + * @glort: unused + * @mac: MAC address to add/remove from table + * @vid: VLAN ID to add/remove from table + * @add: Indicates if this is an add or remove operation + * + * This function is used to add or remove multicast MAC addresses for + * the VF. + **/ +STATIC s32 fm10k_update_mc_addr_vf(struct fm10k_hw *hw, u16 glort, + const u8 *mac, u16 vid, bool add) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[7]; + + DEBUGFUNC("fm10k_update_uc_addr_vf"); + + UNREFERENCED_1PARAMETER(glort); + + /* verify VLAN ID is valid */ + if (vid >= FM10K_VLAN_TABLE_VID_MAX) + return FM10K_ERR_PARAM; + + /* verify multicast address is valid */ + if (!FM10K_IS_MULTICAST_ETHER_ADDR(mac)) + return FM10K_ERR_PARAM; + + /* add bit to notify us if this is a set or clear operation */ + if (!add) + vid |= FM10K_VLAN_CLEAR; + + /* generate VLAN request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MAC_VLAN); + fm10k_tlv_attr_put_mac_vlan(msg, FM10K_MAC_VLAN_MSG_MULTICAST, + mac, vid); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_int_moderator_vf - Request update of interrupt moderator list + * @hw: pointer to hardware structure + * + * This function will issue a request to the PF to rescan our MSI-X table + * and to update the interrupt moderator linked list. + **/ +STATIC void fm10k_update_int_moderator_vf(struct fm10k_hw *hw) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[1]; + + /* generate MSI-X request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_MSIX); + + /* load onto outgoing mailbox */ + mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/* This structure defines the attibutes to be parsed below */ +const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[] = { + FM10K_TLV_ATTR_BOOL(FM10K_LPORT_STATE_MSG_DISABLE), + FM10K_TLV_ATTR_U8(FM10K_LPORT_STATE_MSG_XCAST_MODE), + FM10K_TLV_ATTR_BOOL(FM10K_LPORT_STATE_MSG_READY), + FM10K_TLV_ATTR_LAST +}; + +/** + * fm10k_msg_lport_state_vf - Message handler for lport_state message from PF + * @hw: Pointer to hardware structure + * @results: pointer array containing parsed data + * @mbx: Pointer to mailbox information structure + * + * This handler is meant to capture the indication from the PF that we + * are ready to bring up the interface. 
+ **/ +s32 fm10k_msg_lport_state_vf(struct fm10k_hw *hw, u32 **results, + struct fm10k_mbx_info *mbx) +{ + UNREFERENCED_1PARAMETER(mbx); + DEBUGFUNC("fm10k_msg_lport_state_vf"); + + hw->mac.dglort_map = !results[FM10K_LPORT_STATE_MSG_READY] ? + FM10K_DGLORTMAP_NONE : FM10K_DGLORTMAP_ZERO; + + return FM10K_SUCCESS; +} + +/** + * fm10k_update_lport_state_vf - Update device state in lower device + * @hw: pointer to the HW structure + * @glort: unused + * @count: number of logical ports to enable - unused (always 1) + * @enable: boolean value indicating if this is an enable or disable request + * + * Notify the lower device of a state change. If the lower device is + * enabled we can add filters, if it is disabled all filters for this + * logical port are flushed. + **/ +STATIC s32 fm10k_update_lport_state_vf(struct fm10k_hw *hw, u16 glort, + u16 count, bool enable) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[2]; + + UNREFERENCED_2PARAMETER(glort, count); + DEBUGFUNC("fm10k_update_lport_state_vf"); + + /* reset glort mask 0 as we have to wait to be enabled */ + hw->mac.dglort_map = FM10K_DGLORTMAP_NONE; + + /* generate port state request */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + if (!enable) + fm10k_tlv_attr_put_bool(msg, FM10K_LPORT_STATE_MSG_DISABLE); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +/** + * fm10k_update_xcast_mode_vf - Request update of multicast mode + * @hw: pointer to hardware structure + * @glort: unused + * @mode: integer value indicating mode being requested + * + * This function will attempt to request a higher mode for the port + * so that it can enable either multicast, multicast promiscuous, or + * promiscuous mode of operation. + **/ +STATIC s32 fm10k_update_xcast_mode_vf(struct fm10k_hw *hw, u16 glort, u8 mode) +{ + struct fm10k_mbx_info *mbx = &hw->mbx; + u32 msg[3]; + + UNREFERENCED_1PARAMETER(glort); + DEBUGFUNC("fm10k_update_xcast_mode_vf"); + + if (mode > FM10K_XCAST_MODE_NONE) + return FM10K_ERR_PARAM; + + /* generate message requesting to change xcast mode */ + fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_LPORT_STATE); + fm10k_tlv_attr_put_u8(msg, FM10K_LPORT_STATE_MSG_XCAST_MODE, mode); + + /* load onto outgoing mailbox */ + return mbx->ops.enqueue_tx(hw, mbx, msg); +} + +const struct fm10k_tlv_attr fm10k_1588_msg_attr[] = { + FM10K_TLV_ATTR_U64(FM10K_1588_MSG_TIMESTAMP), + FM10K_TLV_ATTR_LAST +}; + +/* currently there is no shared 1588 timestamp handler */ + +/** + * fm10k_update_hw_stats_vf - Updates hardware related statistics of VF + * @hw: pointer to hardware structure + * @stats: pointer to statistics structure + * + * This function collects and aggregates per queue hardware statistics. + **/ +STATIC void fm10k_update_hw_stats_vf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_update_hw_stats_vf"); + + fm10k_update_hw_stats_q(hw, stats->q, 0, hw->mac.max_queues); +} + +/** + * fm10k_rebind_hw_stats_vf - Resets base for hardware statistics of VF + * @hw: pointer to hardware structure + * @stats: pointer to the stats structure to update + * + * This function resets the base for queue hardware statistics. 
+ **/ +STATIC void fm10k_rebind_hw_stats_vf(struct fm10k_hw *hw, + struct fm10k_hw_stats *stats) +{ + DEBUGFUNC("fm10k_rebind_hw_stats_vf"); + + /* Unbind Queue Statistics */ + fm10k_unbind_hw_stats_q(stats->q, 0, hw->mac.max_queues); + + /* Reinitialize bases for all stats */ + fm10k_update_hw_stats_vf(hw, stats); +} + +/** + * fm10k_configure_dglort_map_vf - Configures GLORT entry and queues + * @hw: pointer to hardware structure + * @dglort: pointer to dglort configuration structure + * + * Reads the configuration structure contained in dglort_cfg and uses + * that information to then populate a DGLORTMAP/DEC entry and the queues + * to which it has been assigned. + **/ +STATIC s32 fm10k_configure_dglort_map_vf(struct fm10k_hw *hw, + struct fm10k_dglort_cfg *dglort) +{ + UNREFERENCED_1PARAMETER(hw); + DEBUGFUNC("fm10k_configure_dglort_map_vf"); + + /* verify the dglort pointer */ + if (!dglort) + return FM10K_ERR_PARAM; + + /* stub for now until we determine correct message for this */ + + return FM10K_SUCCESS; +} + +/** + * fm10k_adjust_systime_vf - Adjust systime frequency + * @hw: pointer to hardware structure + * @ppb: adjustment rate in parts per billion + * + * This function takes an adjustment rate in parts per billion and will + * verify that this value is 0 as the VF cannot support adjusting the + * systime clock. + * + * If the ppb value is non-zero the return is ERR_PARAM else success + **/ +STATIC s32 fm10k_adjust_systime_vf(struct fm10k_hw *hw, s32 ppb) +{ + UNREFERENCED_1PARAMETER(hw); + DEBUGFUNC("fm10k_adjust_systime_vf"); + + /* The VF cannot adjust the clock frequency, however it should + * already have a syntonic clock with whichever host interface is + * running as the master for the host interface clock domain so + * there should be not frequency adjustment necessary. + */ + return ppb ? FM10K_ERR_PARAM : FM10K_SUCCESS; +} + +/** + * fm10k_read_systime_vf - Reads value of systime registers + * @hw: pointer to the hardware structure + * + * Function reads the content of 2 registers, combined to represent a 64 bit + * value measured in nanoseconds. In order to guarantee the value is accurate + * we check the 32 most significant bits both before and after reading the + * 32 least significant bits to verify they didn't change as we were reading + * the registers. + **/ +static u64 fm10k_read_systime_vf(struct fm10k_hw *hw) +{ + u32 systime_l, systime_h, systime_tmp; + + systime_h = fm10k_read_reg(hw, FM10K_VFSYSTIME + 1); + + do { + systime_tmp = systime_h; + systime_l = fm10k_read_reg(hw, FM10K_VFSYSTIME); + systime_h = fm10k_read_reg(hw, FM10K_VFSYSTIME + 1); + } while (systime_tmp != systime_h); + + return ((u64)systime_h << 32) | systime_l; +} + +static const struct fm10k_msg_data fm10k_msg_data_vf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_msg_mac_vlan_vf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/** + * fm10k_init_ops_vf - Inits func ptrs and MAC type + * @hw: pointer to hardware structure + * + * Initialize the function pointers and assign the MAC type for VF. + * Does not touch the hardware. 
+ **/ +s32 fm10k_init_ops_vf(struct fm10k_hw *hw) +{ + struct fm10k_mac_info *mac = &hw->mac; + + DEBUGFUNC("fm10k_init_ops_vf"); + + fm10k_init_ops_generic(hw); + + mac->ops.reset_hw = &fm10k_reset_hw_vf; + mac->ops.init_hw = &fm10k_init_hw_vf; + mac->ops.start_hw = &fm10k_start_hw_generic; + mac->ops.stop_hw = &fm10k_stop_hw_vf; + mac->ops.is_slot_appropriate = &fm10k_is_slot_appropriate_vf; + mac->ops.update_vlan = &fm10k_update_vlan_vf; + mac->ops.read_mac_addr = &fm10k_read_mac_addr_vf; + mac->ops.update_uc_addr = &fm10k_update_uc_addr_vf; + mac->ops.update_mc_addr = &fm10k_update_mc_addr_vf; + mac->ops.update_xcast_mode = &fm10k_update_xcast_mode_vf; + mac->ops.update_int_moderator = &fm10k_update_int_moderator_vf; + mac->ops.update_lport_state = &fm10k_update_lport_state_vf; + mac->ops.update_hw_stats = &fm10k_update_hw_stats_vf; + mac->ops.rebind_hw_stats = &fm10k_rebind_hw_stats_vf; + mac->ops.configure_dglort_map = &fm10k_configure_dglort_map_vf; + mac->ops.get_host_state = &fm10k_get_host_state_generic; + mac->ops.adjust_systime = &fm10k_adjust_systime_vf; + mac->ops.read_systime = &fm10k_read_systime_vf, + + mac->max_msix_vectors = fm10k_get_pcie_msix_count_generic(hw); + + return fm10k_pfvf_mbx_init(hw, &hw->mbx, fm10k_msg_data_vf, 0); +} diff --git a/lib/librte_pmd_fm10k/base/fm10k_vf.h b/lib/librte_pmd_fm10k/base/fm10k_vf.h new file mode 100644 index 0000000..0438542 --- /dev/null +++ b/lib/librte_pmd_fm10k/base/fm10k_vf.h @@ -0,0 +1,91 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2015, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _FM10K_VF_H_ +#define _FM10K_VF_H_ + +#include "fm10k_type.h" +#include "fm10k_common.h" + +enum fm10k_vf_tlv_msg_id { + FM10K_VF_MSG_ID_TEST = 0, /* msg ID reserved for testing */ + FM10K_VF_MSG_ID_MSIX, + FM10K_VF_MSG_ID_MAC_VLAN, + FM10K_VF_MSG_ID_LPORT_STATE, + FM10K_VF_MSG_ID_1588, + FM10K_VF_MSG_ID_MAX, +}; + +enum fm10k_tlv_mac_vlan_attr_id { + FM10K_MAC_VLAN_MSG_VLAN, + FM10K_MAC_VLAN_MSG_SET, + FM10K_MAC_VLAN_MSG_MAC, + FM10K_MAC_VLAN_MSG_DEFAULT_MAC, + FM10K_MAC_VLAN_MSG_MULTICAST, + FM10K_MAC_VLAN_MSG_ID_MAX +}; + +enum fm10k_tlv_lport_state_attr_id { + FM10K_LPORT_STATE_MSG_DISABLE, + FM10K_LPORT_STATE_MSG_XCAST_MODE, + FM10K_LPORT_STATE_MSG_READY, + FM10K_LPORT_STATE_MSG_MAX +}; + +enum fm10k_tlv_1588_attr_id { + FM10K_1588_MSG_TIMESTAMP, + FM10K_1588_MSG_MAX +}; + +#define FM10K_VF_MSG_MSIX_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_MSIX, NULL, func) + +s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_mac_vlan_msg_attr[]; +#define FM10K_VF_MSG_MAC_VLAN_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_MAC_VLAN, \ + fm10k_mac_vlan_msg_attr, func) + +s32 fm10k_msg_lport_state_vf(struct fm10k_hw *, u32 **, + struct fm10k_mbx_info *); +extern const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[]; +#define FM10K_VF_MSG_LPORT_STATE_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_LPORT_STATE, \ + fm10k_lport_state_msg_attr, func) + +extern const struct fm10k_tlv_attr fm10k_1588_msg_attr[]; +#define FM10K_VF_MSG_1588_HANDLER(func) \ + FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_1588, fm10k_1588_msg_attr, func) + +s32 fm10k_init_ops_vf(struct fm10k_hw *hw); +#endif /* _FM10K_VF_H */ -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
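Note on using the VF base code above: nothing calls the fm10k_*_vf handlers directly; they are only reached through the function-pointer table that fm10k_init_ops_vf() fills in. A minimal bring-up sketch follows, using only entry points visible in this patch. The example_vf_bring_up() wrapper and its error handling are illustrative and not part of the base code, and a real caller (the PMD glue added later in this series) must already have mapped BAR0 into hw->hw_addr.

#include "fm10k_vf.h"

/* Sketch only: exercise the VF ops table populated by fm10k_init_ops_vf(). */
static s32 example_vf_bring_up(struct fm10k_hw *hw)
{
	s32 err;

	/* install the fm10k_*_vf handlers shown in this patch */
	err = fm10k_init_ops_vf(hw);
	if (err)
		return err;

	/* reset_hw -> fm10k_reset_hw_vf(): stop the queues, pulse FM10K_VFCTRL_RST */
	err = hw->mac.ops.reset_hw(hw);
	if (err)
		return err;

	/* init_hw -> fm10k_init_hw_vf(): probe TQDLOC to learn max_queues and the
	 * default VLAN assigned by the PF
	 */
	err = hw->mac.ops.init_hw(hw);
	if (err)
		return err;

	/* read_mac_addr -> fm10k_read_mac_addr_vf(): recover the permanent MAC
	 * address the PF stored in TDBAL/TDBAH of queue 0
	 */
	return hw->mac.ops.read_mac_addr(hw);
}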
* [dpdk-dev] [PATCH v6 02/16] eal: add fm10k device id 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 01/16] fm10k: add base driver Chen Jing D(Mark) @ 2015-02-17 14:18 ` Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 03/16] fm10k: register fm10k pmd PF driver Chen Jing D(Mark) ` (14 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-17 14:18 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k device ID list into rte_pci_dev_ids.h. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 ++++++++++++++++++++++ 1 files changed, 22 insertions(+), 0 deletions(-) diff --git a/lib/librte_eal/common/include/rte_pci_dev_ids.h b/lib/librte_eal/common/include/rte_pci_dev_ids.h index c922de9..f54800e 100644 --- a/lib/librte_eal/common/include/rte_pci_dev_ids.h +++ b/lib/librte_eal/common/include/rte_pci_dev_ids.h @@ -132,6 +132,14 @@ #define RTE_PCI_DEV_ID_DECL_VMXNET3(vend, dev) #endif +#ifndef RTE_PCI_DEV_ID_DECL_FM10K +#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) +#endif + +#ifndef RTE_PCI_DEV_ID_DECL_FM10KVF +#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) +#endif + #ifndef PCI_VENDOR_ID_INTEL /** Vendor ID used by Intel devices */ #define PCI_VENDOR_ID_INTEL 0x8086 @@ -474,6 +482,12 @@ RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_QSFP_B) RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_QSFP_C) RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_10G_BASE_T) +/*************** Physical FM10K devices from fm10k_type.h ***************/ + +#define FM10K_DEV_ID_PF 0x15A4 + +RTE_PCI_DEV_ID_DECL_FM10K(PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_PF) + /****************** Virtual IGB devices from e1000_hw.h ******************/ #define E1000_DEV_ID_82576_VF 0x10CA @@ -526,6 +540,12 @@ RTE_PCI_DEV_ID_DECL_VIRTIO(PCI_VENDOR_ID_QUMRANET, QUMRANET_DEV_ID_VIRTIO) RTE_PCI_DEV_ID_DECL_VMXNET3(PCI_VENDOR_ID_VMWARE, VMWARE_DEV_ID_VMXNET3) +/*************** Virtual FM10K devices from fm10k_type.h ***************/ + +#define FM10K_DEV_ID_VF 0x15A5 + +RTE_PCI_DEV_ID_DECL_FM10KVF(PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_VF) + /* * Undef all RTE_PCI_DEV_ID_DECL_* here. */ @@ -538,3 +558,5 @@ RTE_PCI_DEV_ID_DECL_VMXNET3(PCI_VENDOR_ID_VMWARE, VMWARE_DEV_ID_VMXNET3) #undef RTE_PCI_DEV_ID_DECL_I40EVF #undef RTE_PCI_DEV_ID_DECL_VIRTIO #undef RTE_PCI_DEV_ID_DECL_VMXNET3 +#undef RTE_PCI_DEV_ID_DECL_FM10K +#undef RTE_PCI_DEV_ID_DECL_FM10KVF \ No newline at end of file -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
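Each RTE_PCI_DEV_ID_DECL_* macro above defaults to an empty expansion, so the header contributes nothing until a driver redefines the macro it cares about and includes rte_pci_dev_ids.h to expand only its own IDs into a table. Patch 03/16 below does exactly this for the PF ID. The sketch here covers both new IDs; the table name is illustrative and struct rte_pci_id / RTE_PCI_DEVICE are assumed to come from rte_pci.h as usual.

#include <rte_pci.h>	/* struct rte_pci_id and RTE_PCI_DEVICE */

/* Illustrative table name; patch 03/16 builds pci_id_fm10k_map the same way
 * for the PF ID only.
 */
static struct rte_pci_id example_fm10k_id_map[] = {
#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev)   { RTE_PCI_DEVICE(vend, dev) },
#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) { RTE_PCI_DEVICE(vend, dev) },
#include "rte_pci_dev_ids.h"
	{ .vendor_id = 0, /* sentinel */ },
};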
* [dpdk-dev] [PATCH v6 03/16] fm10k: register fm10k pmd PF driver 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 01/16] fm10k: add base driver Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 02/16] eal: add fm10k device id Chen Jing D(Mark) @ 2015-02-17 14:18 ` Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 04/16] config: change config files to add fm10k into compile Chen Jing D(Mark) ` (13 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-17 14:18 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add init function to scan and initialize fm10k PF device. 2. Add implementation to register fm10k pmd PF driver. 3. Add 3 functions fm10k_dev_configure, fm10k_stats_get and fm10k_stats_get. 4. Add fm10k.h to define macros and basic data structure. 5. Add fm10k_logs.h to control log message output. 6. Add Makefile. 7. Add ABI version of librte_pmd_fm10k Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> Signed-off-by: Michael Qiu <michael.qiu@intel.com> --- lib/librte_pmd_fm10k/Makefile | 99 +++++++ lib/librte_pmd_fm10k/fm10k.h | 224 +++++++++++++++ lib/librte_pmd_fm10k/fm10k_ethdev.c | 343 ++++++++++++++++++++++++ lib/librte_pmd_fm10k/fm10k_logs.h | 78 ++++++ lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map | 4 + 5 files changed, 748 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/Makefile create mode 100644 lib/librte_pmd_fm10k/fm10k.h create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h create mode 100644 lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map diff --git a/lib/librte_pmd_fm10k/Makefile b/lib/librte_pmd_fm10k/Makefile new file mode 100644 index 0000000..b24cc67 --- /dev/null +++ b/lib/librte_pmd_fm10k/Makefile @@ -0,0 +1,99 @@ +# BSD LICENSE +# +# Copyright(c) 2013-2015 Intel Corporation. All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +include $(RTE_SDK)/mk/rte.vars.mk + +# +# library name +# +LIB = librte_pmd_fm10k.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +EXPORT_MAP := rte_pmd_fm10k_version.map + +LIBABIVER := 1 + +ifeq ($(CC), icc) +# +# CFLAGS for icc +# +CFLAGS_BASE_DRIVER = -wd174 -wd593 -wd869 -wd981 -wd2259 + +else ifeq ($(CC), clang) +# +## CFLAGS for clang +# +CFLAGS_BASE_DRIVER = -Wno-unused-parameter -Wno-unused-value +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing -Wno-format-extra-args +CFLAGS_BASE_DRIVER += -Wno-unused-variable -Wno-unused-but-set-variable +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers + +else +# +# CFLAGS for gcc +# +ifneq ($(shell test $(GCC_MAJOR_VERSION) -le 4 -a $(GCC_MINOR_VERSION) -le 3 && echo 1), 1) +CFLAGS += -Wno-deprecated +endif +CFLAGS_BASE_DRIVER = -Wno-unused-parameter -Wno-unused-value +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing -Wno-format-extra-args +CFLAGS_BASE_DRIVER += -Wno-unused-variable -Wno-unused-but-set-variable +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers +endif + +# +# Add extra flags for base driver source files to disable warnings in them +# +BASE_DRIVER_OBJS=$(patsubst %.c,%.o,$(notdir $(wildcard $(RTE_SDK)/lib/librte_pmd_fm10k/base/*.c))) +$(foreach obj, $(BASE_DRIVER_OBJS), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER))) + +VPATH += $(RTE_SDK)/lib/librte_pmd_fm10k/base + +# +# all source are stored in SRCS-y +# +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_ethdev.c + +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_pf.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_tlv.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_common.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_mbx.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_vf.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_api.c + +# this lib depends upon: +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_eal lib/librte_ether +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_mempool lib/librte_mbuf +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_net lib/librte_malloc + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h new file mode 100644 index 0000000..1468040 --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -0,0 +1,224 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _FM10K_H_ +#define _FM10K_H_ + +#include <stdint.h> +#include <rte_mbuf.h> +#include <rte_mempool.h> +#include <rte_malloc.h> +#include <rte_spinlock.h> +#include "fm10k_logs.h" +#include "base/fm10k_type.h" + +/* descriptor ring base addresses must be aligned to the following */ +#define FM10K_ALIGN_RX_DESC 128 +#define FM10K_ALIGN_TX_DESC 128 + +/* The maximum packet size that FM10K supports */ +#define FM10K_MAX_PKT_SIZE (15 * 1024) + +/* Minimum size of RX buffer FM10K supported */ +#define FM10K_MIN_RX_BUF_SIZE 256 + +/* The maximum of SRIOV VFs per port supported */ +#define FM10K_MAX_VF_NUM 64 + +/* number of descriptors must be a multiple of the following */ +#define FM10K_MULT_RX_DESC FM10K_REQ_RX_DESCRIPTOR_MULTIPLE +#define FM10K_MULT_TX_DESC FM10K_REQ_TX_DESCRIPTOR_MULTIPLE + +/* maximum size of descriptor rings */ +#define FM10K_MAX_RX_RING_SZ (512 * 1024) +#define FM10K_MAX_TX_RING_SZ (512 * 1024) + +/* minimum and maximum number of descriptors in a ring */ +#define FM10K_MIN_RX_DESC 32 +#define FM10K_MIN_TX_DESC 32 +#define FM10K_MAX_RX_DESC (FM10K_MAX_RX_RING_SZ / sizeof(union fm10k_rx_desc)) +#define FM10K_MAX_TX_DESC (FM10K_MAX_TX_RING_SZ / sizeof(struct fm10k_tx_desc)) + +/* + * byte aligment for HW RX data buffer + * Datasheet requires RX buffer addresses shall either be 512-byte aligned or + * be 8-byte aligned but without crossing host memory pages (4KB alignment + * boundaries). Satisfy first option. 
+ */ +#define FM10K_RX_DATABUF_ALIGN 512 + +/* + * threshold default, min, max, and divisor constraints + * the configured values must satisfy the following: + * MIN <= value <= MAX + * DIV % value == 0 + */ +#define FM10K_RX_FREE_THRESH_DEFAULT(rxq) 32 +#define FM10K_RX_FREE_THRESH_MIN(rxq) 1 +#define FM10K_RX_FREE_THRESH_MAX(rxq) ((rxq)->nb_desc - 1) +#define FM10K_RX_FREE_THRESH_DIV(rxq) ((rxq)->nb_desc) + +#define FM10K_TX_FREE_THRESH_DEFAULT(txq) 32 +#define FM10K_TX_FREE_THRESH_MIN(txq) 1 +#define FM10K_TX_FREE_THRESH_MAX(txq) ((txq)->nb_desc - 3) +#define FM10K_TX_FREE_THRESH_DIV(txq) 0 + +#define FM10K_DEFAULT_RX_PTHRESH 8 +#define FM10K_DEFAULT_RX_HTHRESH 8 +#define FM10K_DEFAULT_RX_WTHRESH 0 + +#define FM10K_DEFAULT_TX_PTHRESH 32 +#define FM10K_DEFAULT_TX_HTHRESH 0 +#define FM10K_DEFAULT_TX_WTHRESH 0 + +#define FM10K_TX_RS_THRESH_DEFAULT(txq) 32 +#define FM10K_TX_RS_THRESH_MIN(txq) 1 +#define FM10K_TX_RS_THRESH_MAX(txq) \ + RTE_MIN(((txq)->nb_desc - 2), (txq)->free_thresh) +#define FM10K_TX_RS_THRESH_DIV(txq) ((txq)->nb_desc) + +#define FM10K_VLAN_TAG_SIZE 4 + +struct fm10k_dev_info { + volatile uint32_t enable; + volatile uint32_t glort; + /* Protect the mailbox to avoid race condition */ + rte_spinlock_t mbx_lock; +}; + +/* + * Structure to store private data for each driver instance. + */ +struct fm10k_adapter { + struct fm10k_hw hw; + struct fm10k_hw_stats stats; + struct fm10k_dev_info info; +}; + +#define FM10K_DEV_PRIVATE_TO_HW(adapter) \ + (&((struct fm10k_adapter *)adapter)->hw) + +#define FM10K_DEV_PRIVATE_TO_STATS(adapter) \ + (&((struct fm10k_adapter *)adapter)->stats) + +#define FM10K_DEV_PRIVATE_TO_INFO(adapter) \ + (&((struct fm10k_adapter *)adapter)->info) + +#define FM10K_DEV_PRIVATE_TO_MBXLOCK(adapter) \ + (&(((struct fm10k_adapter *)adapter)->info.mbx_lock)) + +struct fm10k_rx_queue { + struct rte_mempool *mp; + struct rte_mbuf **sw_ring; + volatile union fm10k_rx_desc *hw_ring; + struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */ + struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */ + uint64_t hw_ring_phys_addr; + uint16_t next_dd; + uint16_t next_alloc; + uint16_t next_trigger; + uint16_t alloc_thresh; + volatile uint32_t *tail_ptr; + uint16_t nb_desc; + uint16_t queue_id; + uint8_t port_id; + uint8_t drop_en; + uint8_t rx_deferred_start; /**< don't start this queue in dev start. */ +}; + +/* + * a FIFO is used to track which descriptors have their RS bit set for Tx + * queues which are configured to allow multiple descriptors per packet + */ +struct fifo { + uint16_t *list; + uint16_t *head; + uint16_t *tail; + uint16_t *endp; +}; + +struct fm10k_tx_queue { + struct rte_mbuf **sw_ring; + struct fm10k_tx_desc *hw_ring; + uint64_t hw_ring_phys_addr; + struct fifo rs_tracker; + uint16_t last_free; + uint16_t next_free; + uint16_t nb_free; + uint16_t nb_used; + uint16_t free_trigger; + uint16_t free_thresh; + uint16_t rs_thresh; + volatile uint32_t *tail_ptr; + uint16_t nb_desc; + uint8_t port_id; + uint8_t tx_deferred_start; /** < don't start this queue in dev start. 
*/ + uint16_t queue_id; +}; + +#define MBUF_DMA_ADDR(mb) \ + ((uint64_t) ((mb)->buf_physaddr + (mb)->data_off)) + +/* enforce 512B alignment on default Rx DMA addresses */ +#define MBUF_DMA_ADDR_DEFAULT(mb) \ + ((uint64_t) RTE_ALIGN(((mb)->buf_physaddr + RTE_PKTMBUF_HEADROOM), 512)) + +static inline void fifo_reset(struct fifo *fifo, uint32_t len) +{ + fifo->head = fifo->tail = fifo->list; + fifo->endp = fifo->list + len; +} + +static inline void fifo_insert(struct fifo *fifo, uint16_t val) +{ + *fifo->head = val; + if (++fifo->head == fifo->endp) + fifo->head = fifo->list; +} + +/* do not worry about list being empty since we only check it once we know + * we have used enough descriptors to set the RS bit at least once */ +static inline uint16_t fifo_peek(struct fifo *fifo) +{ + return *fifo->tail; +} + +static inline uint16_t fifo_remove(struct fifo *fifo) +{ + uint16_t val; + val = *fifo->tail; + if (++fifo->tail == fifo->endp) + fifo->tail = fifo->list; + return val; +} +#endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c new file mode 100644 index 0000000..0b75299 --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -0,0 +1,343 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include <rte_ethdev.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <rte_string_fns.h> +#include <rte_dev.h> +#include <rte_spinlock.h> + +#include "fm10k.h" +#include "base/fm10k_api.h" + +/* Default delay to acquire mailbox lock */ +#define FM10K_MBXLOCK_DELAY_US 20 + +static void +fm10k_mbx_initlock(struct fm10k_hw *hw) +{ + rte_spinlock_init(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); +} + +static void +fm10k_mbx_lock(struct fm10k_hw *hw) +{ + while (!rte_spinlock_trylock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back))) + rte_delay_us(FM10K_MBXLOCK_DELAY_US); +} + +static void +fm10k_mbx_unlock(struct fm10k_hw *hw) +{ + rte_spinlock_unlock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); +} + +static int +fm10k_dev_configure(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.hw_strip_crc == 0) + PMD_INIT_LOG(WARNING, "fm10k always strip CRC"); + + return 0; +} + +static void +fm10k_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + uint64_t ipackets, opackets, ibytes, obytes; + struct fm10k_hw *hw = + FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_hw_stats *hw_stats = + FM10K_DEV_PRIVATE_TO_STATS(dev->data->dev_private); + int i; + + PMD_INIT_FUNC_TRACE(); + + fm10k_update_hw_stats(hw, hw_stats); + + ipackets = opackets = ibytes = obytes = 0; + for (i = 0; (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) && + (i < FM10K_MAX_QUEUES_PF); ++i) { + stats->q_ipackets[i] = hw_stats->q[i].rx_packets.count; + stats->q_opackets[i] = hw_stats->q[i].tx_packets.count; + stats->q_ibytes[i] = hw_stats->q[i].rx_bytes.count; + stats->q_obytes[i] = hw_stats->q[i].tx_bytes.count; + ipackets += stats->q_ipackets[i]; + opackets += stats->q_opackets[i]; + ibytes += stats->q_ibytes[i]; + obytes += stats->q_obytes[i]; + } + stats->ipackets = ipackets; + stats->opackets = opackets; + stats->ibytes = ibytes; + stats->obytes = obytes; +} + +static void +fm10k_stats_reset(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_hw_stats *hw_stats = + FM10K_DEV_PRIVATE_TO_STATS(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + memset(hw_stats, 0, sizeof(*hw_stats)); + fm10k_rebind_hw_stats(hw, hw_stats); +} + +/* Mailbox message handler in VF */ +static const struct fm10k_msg_data fm10k_msgdata_vf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_msg_mac_vlan_vf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/* Mailbox message handler in PF */ +static const struct fm10k_msg_data fm10k_msgdata_pf[] = { + FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf), + FM10K_PF_MSG_LPORT_MAP_HANDLER(fm10k_msg_lport_map_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_CREATE, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_DELETE, fm10k_msg_err_pf), + FM10K_PF_MSG_UPDATE_PVID_HANDLER(fm10k_msg_update_pvid_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +static int +fm10k_setup_mbx_service(struct fm10k_hw *hw) +{ + int err; + + /* Initialize mailbox lock */ + fm10k_mbx_initlock(hw); + + /* Replace default message handler with new ones */ + if (hw->mac.type == fm10k_mac_pf) + err = hw->mbx.ops.register_handlers(&hw->mbx, fm10k_msgdata_pf); + else + err = hw->mbx.ops.register_handlers(&hw->mbx, fm10k_msgdata_vf); + + if (err) { + PMD_INIT_LOG(ERR, "Failed to register mailbox handler.err:%d", + err); + return 
err; + } + /* Connect to SM for PF device or PF for VF device */ + return hw->mbx.ops.connect(hw, &hw->mbx); +} + +static struct eth_dev_ops fm10k_eth_dev_ops = { + .dev_configure = fm10k_dev_configure, + .stats_get = fm10k_stats_get, + .stats_reset = fm10k_stats_reset, +}; + +static int +eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, + struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int diag; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &fm10k_eth_dev_ops; + + /* only initialize in the primary process */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + /* Vendor and Device ID need to be set before init of shared code */ + memset(hw, 0, sizeof(*hw)); + hw->device_id = dev->pci_dev->id.device_id; + hw->vendor_id = dev->pci_dev->id.vendor_id; + hw->subsystem_device_id = dev->pci_dev->id.subsystem_device_id; + hw->subsystem_vendor_id = dev->pci_dev->id.subsystem_vendor_id; + hw->revision_id = 0; + hw->hw_addr = (void *)dev->pci_dev->mem_resource[0].addr; + if (hw->hw_addr == NULL) { + PMD_INIT_LOG(ERR, "Bad mem resource." + " Try to blacklist unused devices."); + return -EIO; + } + + /* Store fm10k_adapter pointer */ + hw->back = dev->data->dev_private; + + /* Initialize the shared code */ + diag = fm10k_init_shared_code(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Shared code init failed: %d", diag); + return -EIO; + } + + /* + * Inialize bus info. Normally we would call fm10k_get_bus_info(), but + * there is no way to get link status without reading BAR4. Until this + * works, assume we have maximum bandwidth. + * @todo - fix bus info + */ + hw->bus_caps.speed = fm10k_bus_speed_8000; + hw->bus_caps.width = fm10k_bus_width_pcie_x8; + hw->bus_caps.payload = fm10k_bus_payload_512; + hw->bus.speed = fm10k_bus_speed_8000; + hw->bus.width = fm10k_bus_width_pcie_x8; + hw->bus.payload = fm10k_bus_payload_256; + + /* Initialize the hw */ + diag = fm10k_init_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware init failed: %d", diag); + return -EIO; + } + + /* Initialize MAC address(es) */ + dev->data->mac_addrs = rte_zmalloc("fm10k", ETHER_ADDR_LEN, 0); + if (dev->data->mac_addrs == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate memory for MAC addresses"); + return -ENOMEM; + } + + diag = fm10k_read_mac_addr(hw); + if (diag != FM10K_SUCCESS) { + /* + * TODO: remove special handling on VF. Need shared code to + * fix first. + */ + if (hw->mac.type == fm10k_mac_pf) { + PMD_INIT_LOG(ERR, "Read MAC addr failed: %d", diag); + return -EIO; + } else { + /* Generate a random addr */ + eth_random_addr(hw->mac.addr); + memcpy(hw->mac.perm_addr, hw->mac.addr, ETH_ALEN); + } + } + + ether_addr_copy((const struct ether_addr *)hw->mac.addr, + &dev->data->mac_addrs[0]); + + /* Reset the hw statistics */ + fm10k_stats_reset(dev); + + /* Reset the hw */ + diag = fm10k_reset_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware reset failed: %d", diag); + return -EIO; + } + + /* Setup mailbox service */ + diag = fm10k_setup_mbx_service(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Failed to setup mailbox: %d", diag); + return -EIO; + } + + /* + * Below function will trigger operations on mailbox, acquire lock to + * avoid race condition from interrupt handler. Operations on mailbox + * FIFO will trigger interrupt to PF/SM, in which interrupt handler + * will handle and generate an interrupt to our side. Then, FIFO in + * mailbox will be touched. 
+ */ + fm10k_mbx_lock(hw); + /* Enable port first */ + hw->mac.ops.update_lport_state(hw, 0, 0, 1); + + /* Update default vlan */ + hw->mac.ops.update_vlan(hw, hw->mac.default_vid, 0, true); + + /* + * Add default mac/vlan filter. glort is assigned by SM for PF, while is + * unused for VF. PF will assign correct glort for VF. + */ + hw->mac.ops.update_uc_addr(hw, hw->mac.dglort_map, hw->mac.addr, + hw->mac.default_vid, 1, 0); + + /* Set unicast mode by default. App can change to other mode in other + * API func. + */ + hw->mac.ops.update_xcast_mode(hw, hw->mac.dglort_map, + FM10K_XCAST_MODE_MULTI); + + fm10k_mbx_unlock(hw); + + return 0; +} + +/* + * The set of PCI devices this driver supports. This driver will enable both PF + * and SRIOV-VF devices. + */ +static struct rte_pci_id pci_id_fm10k_map[] = { +#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, +#include "rte_pci_dev_ids.h" + { .vendor_id = 0, /* sentinel */ }, +}; + +static struct eth_driver rte_pmd_fm10k = { + { + .name = "rte_pmd_fm10k", + .id_table = pci_id_fm10k_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING, + }, + .eth_dev_init = eth_fm10k_dev_init, + .dev_private_size = sizeof(struct fm10k_adapter), +}; + +/* + * Driver initialization routine. + * Invoked once at EAL init time. + * Register itself as the [Poll Mode] Driver of PCI FM10K devices. + */ +static int +rte_pmd_fm10k_init(__rte_unused const char *name, + __rte_unused const char *params) +{ + PMD_INIT_FUNC_TRACE(); + rte_eth_driver_register(&rte_pmd_fm10k); + return 0; +} + +static struct rte_driver rte_fm10k_driver = { + .type = PMD_PDEV, + .init = rte_pmd_fm10k_init, +}; + +PMD_REGISTER_DRIVER(rte_fm10k_driver); diff --git a/lib/librte_pmd_fm10k/fm10k_logs.h b/lib/librte_pmd_fm10k/fm10k_logs.h new file mode 100644 index 0000000..febd796 --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k_logs.h @@ -0,0 +1,78 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#ifndef _FM10K_LOGS_H_ +#define _FM10K_LOGS_H_ + +#define PMD_INIT_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \ + "PMD: %s(): " fmt "\n", __func__, ##args) + +#ifdef RTE_LIBRTE_FM10K_DEBUG_INIT +#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>") +#else +#define PMD_INIT_FUNC_TRACE() do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX +#define PMD_RX_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_RX_LOG(level, fmt, args...) do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_TX +#define PMD_TX_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_TX_LOG(level, fmt, args...) do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_TX_FREE +#define PMD_TX_FREE_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_DRIVER +#define PMD_DRV_LOG_RAW(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args) +#else +#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0) +#endif + +#define PMD_DRV_LOG(level, fmt, args...) \ + PMD_DRV_LOG_RAW(level, fmt "\n", ## args) + +#endif /* _FM10K_LOGS_H_ */ diff --git a/lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map b/lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map new file mode 100644 index 0000000..ef35398 --- /dev/null +++ b/lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map @@ -0,0 +1,4 @@ +DPDK_2.0 { + + local: *; +}; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
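The fifo_* helpers declared in fm10k.h above exist to remember which Tx descriptors were given the RS bit, so that descriptor cleanup only has to poll the DONE write-back on indices for which status was actually requested. A hedged sketch of that intent follows; the example_* functions are illustrative and are not the code of the later transmit patch, which also sizes the tracker with fifo_reset(&q->rs_tracker, q->nb_desc) during queue setup.

#include "fm10k.h"

/* Sketch only: record an RS request for one descriptor. */
static void example_mark_rs(struct fm10k_tx_queue *q, uint16_t idx)
{
	/* request a status write-back for this descriptor and remember its index */
	q->hw_ring[idx].flags |= FM10K_TXD_FLAG_RS;
	fifo_insert(&q->rs_tracker, idx);
}

/* Sketch only: check whether the oldest RS-marked descriptor has completed. */
static int example_oldest_rs_done(struct fm10k_tx_queue *q)
{
	uint16_t idx = fifo_peek(&q->rs_tracker);

	/* hardware sets DONE in the write-back once the RS-marked slot is sent */
	if (!(q->hw_ring[idx].flags & FM10K_TXD_FLAG_DONE))
		return 0;

	fifo_remove(&q->rs_tracker);
	return 1;
}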
* [dpdk-dev] [PATCH v6 04/16] config: change config files to add fm10k into compile 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (2 preceding siblings ...) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 03/16] fm10k: register fm10k pmd PF driver Chen Jing D(Mark) @ 2015-02-17 14:18 ` Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 05/16] fm10k: add reta update/requery functions Chen Jing D(Mark) ` (12 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-17 14:18 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Change config/common_bsdapp and config/common_linuxapp, add macros to control fm10k pmd driver compile for linux and bsd. 2. Change lib/Makefile to add fm10k driver into compile list. 3. Change mk/rte.app.mk to add fm10k lib into link. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- config/common_bsdapp | 11 +++++++++++ config/common_linuxapp | 11 +++++++++++ lib/Makefile | 1 + mk/rte.app.mk | 4 ++++ 4 files changed, 27 insertions(+), 0 deletions(-) diff --git a/config/common_bsdapp b/config/common_bsdapp index 57bacb8..8cfa4e6 100644 --- a/config/common_bsdapp +++ b/config/common_bsdapp @@ -182,6 +182,17 @@ CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4 CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1 # +# Compile burst-oriented FM10K PMD +# +CONFIG_RTE_LIBRTE_FM10K_PMD=y +CONFIG_RTE_LIBRTE_FM10K_DEBUG_INIT=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_RX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX_FREE=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_DRIVER=n +CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y + +# # Compile burst-oriented Cisco ENIC PMD driver # CONFIG_RTE_LIBRTE_ENIC_PMD=y diff --git a/config/common_linuxapp b/config/common_linuxapp index d428f84..db8332d 100644 --- a/config/common_linuxapp +++ b/config/common_linuxapp @@ -180,6 +180,17 @@ CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4 CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1 # +# Compile burst-oriented FM10K PMD +# +CONFIG_RTE_LIBRTE_FM10K_PMD=y +CONFIG_RTE_LIBRTE_FM10K_DEBUG_INIT=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_RX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX_FREE=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_DRIVER=n +CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y + +# # Compile burst-oriented Cisco ENIC PMD driver # CONFIG_RTE_LIBRTE_ENIC_PMD=y diff --git a/lib/Makefile b/lib/Makefile index d617d81..561a696 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -44,6 +44,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether DIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += librte_pmd_e1000 DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += librte_pmd_ixgbe DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += librte_pmd_i40e +DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += librte_pmd_fm10k DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += librte_pmd_enic DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += librte_pmd_bond DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += librte_pmd_ring diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 95dbb0b..d181eb1 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -211,6 +211,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_I40E_PMD),y) LDLIBS += -lrte_pmd_i40e endif +ifeq ($(CONFIG_RTE_LIBRTE_FM10K_PMD),y) +LDLIBS += -lrte_pmd_fm10k +endif + ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_PMD),y) LDLIBS += -lrte_pmd_ixgbe endif -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
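With the make-based build system of this era, each CONFIG_RTE_* line above is emitted (minus the CONFIG_ prefix) as a #define in the generated rte_config.h, which is how the driver sources and applications test these options at compile time. A small, purely illustrative sketch:

    #include <stdio.h>
    #include <rte_config.h>  /* generated from config/common_linuxapp or common_bsdapp */

    int main(void)
    {
    #ifdef RTE_LIBRTE_FM10K_PMD
        printf("fm10k PMD compiled in (CONFIG_RTE_LIBRTE_FM10K_PMD=y)\n");
    #else
        printf("fm10k PMD not built into this DPDK\n");
    #endif
        return 0;
    }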
* [dpdk-dev] [PATCH v6 05/16] fm10k: add reta update/requery functions 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (3 preceding siblings ...) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 04/16] config: change config files to add fm10k into compile Chen Jing D(Mark) @ 2015-02-17 14:18 ` Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 06/16] fm10k: add Rx queue setup/release function Chen Jing D(Mark) ` (11 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-17 14:18 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_reta_update and fm10k_reta_query functions. 2. Add fm10k_link_update and fm10k_dev_infos_get functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 162 +++++++++++++++++++++++++++++++++++ 1 files changed, 162 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 0b75299..b3d4d79 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -44,6 +44,10 @@ /* Default delay to acquire mailbox lock */ #define FM10K_MBXLOCK_DELAY_US 20 +/* Number of chars per uint32 type */ +#define CHARS_PER_UINT32 (sizeof(uint32_t)) +#define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1) + static void fm10k_mbx_initlock(struct fm10k_hw *hw) { @@ -74,6 +78,22 @@ fm10k_dev_configure(struct rte_eth_dev *dev) return 0; } +static int +fm10k_link_update(struct rte_eth_dev *dev, + __rte_unused int wait_to_complete) +{ + PMD_INIT_FUNC_TRACE(); + + /* The host-interface link is always up. The speed is ~50Gbps per Gen3 + * x8 PCIe interface. For now, we leave the speed undefined since there + * is no 50Gbps Ethernet. 
*/ + dev->data->dev_link.link_speed = 0; + dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX; + dev->data->dev_link.link_status = 1; + + return 0; +} + static void fm10k_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { @@ -119,6 +139,144 @@ fm10k_stats_reset(struct rte_eth_dev *dev) fm10k_rebind_hw_stats(hw, hw_stats); } +static void +fm10k_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + dev_info->min_rx_bufsize = FM10K_MIN_RX_BUF_SIZE; + dev_info->max_rx_pktlen = FM10K_MAX_PKT_SIZE; + dev_info->max_rx_queues = hw->mac.max_queues; + dev_info->max_tx_queues = hw->mac.max_queues; + dev_info->max_mac_addrs = 1; + dev_info->max_hash_mac_addrs = 0; + dev_info->max_vfs = FM10K_MAX_VF_NUM; + dev_info->max_vmdq_pools = ETH_64_POOLS; + dev_info->rx_offload_capa = + DEV_RX_OFFLOAD_IPV4_CKSUM | + DEV_RX_OFFLOAD_UDP_CKSUM | + DEV_RX_OFFLOAD_TCP_CKSUM; + dev_info->tx_offload_capa = 0; + dev_info->reta_size = FM10K_MAX_RSS_INDICES; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = FM10K_DEFAULT_RX_PTHRESH, + .hthresh = FM10K_DEFAULT_RX_HTHRESH, + .wthresh = FM10K_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = FM10K_RX_FREE_THRESH_DEFAULT(0), + .rx_drop_en = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = FM10K_DEFAULT_TX_PTHRESH, + .hthresh = FM10K_DEFAULT_TX_HTHRESH, + .wthresh = FM10K_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = FM10K_TX_FREE_THRESH_DEFAULT(0), + .tx_rs_thresh = FM10K_TX_RS_THRESH_DEFAULT(0), + .txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS | + ETH_TXQ_FLAGS_NOOFFLOADS, + }; + +} + +static int +fm10k_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint16_t i, j, idx, shift; + uint8_t mask; + uint32_t reta; + + PMD_INIT_FUNC_TRACE(); + + if (reta_size > FM10K_MAX_RSS_INDICES) { + PMD_INIT_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number hardware can supported " + "(%d)", reta_size, FM10K_MAX_RSS_INDICES); + return -EINVAL; + } + + /* + * Update Redirection Table RETA[n], n=0..31. 
The redirection table has + * 128-entries in 32 registers + */ + for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) { + idx = i / RTE_RETA_GROUP_SIZE; + shift = i % RTE_RETA_GROUP_SIZE; + mask = (uint8_t)((reta_conf[idx].mask >> shift) & + BIT_MASK_PER_UINT32); + if (mask == 0) + continue; + + reta = 0; + if (mask != BIT_MASK_PER_UINT32) + reta = FM10K_READ_REG(hw, FM10K_RETA(0, i >> 2)); + + for (j = 0; j < CHARS_PER_UINT32; j++) { + if (mask & (0x1 << j)) { + if (mask != 0xF) + reta &= ~(UINT8_MAX << CHAR_BIT * j); + reta |= reta_conf[idx].reta[shift + j] << + (CHAR_BIT * j); + } + } + FM10K_WRITE_REG(hw, FM10K_RETA(0, i >> 2), reta); + } + + return 0; +} + +static int +fm10k_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint16_t i, j, idx, shift; + uint8_t mask; + uint32_t reta; + + PMD_INIT_FUNC_TRACE(); + + if (reta_size < FM10K_MAX_RSS_INDICES) { + PMD_INIT_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number hardware can supported " + "(%d)", reta_size, FM10K_MAX_RSS_INDICES); + return -EINVAL; + } + + /* + * Read Redirection Table RETA[n], n=0..31. The redirection table has + * 128-entries in 32 registers + */ + for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) { + idx = i / RTE_RETA_GROUP_SIZE; + shift = i % RTE_RETA_GROUP_SIZE; + mask = (uint8_t)((reta_conf[idx].mask >> shift) & + BIT_MASK_PER_UINT32); + if (mask == 0) + continue; + + reta = FM10K_READ_REG(hw, FM10K_RETA(0, i >> 2)); + for (j = 0; j < CHARS_PER_UINT32; j++) { + if (mask & (0x1 << j)) + reta_conf[idx].reta[shift + j] = ((reta >> + CHAR_BIT * j) & UINT8_MAX); + } + } + + return 0; +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -165,6 +323,10 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_configure = fm10k_dev_configure, .stats_get = fm10k_stats_get, .stats_reset = fm10k_stats_reset, + .link_update = fm10k_link_update, + .dev_infos_get = fm10k_dev_infos_get, + .reta_update = fm10k_reta_update, + .reta_query = fm10k_reta_query, }; static int -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
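For reference, an application normally reaches fm10k_reta_update() through the generic ethdev wrapper; the sketch below assumes the rte_eth_dev_rss_reta_update() call of this DPDK generation and spreads the 128-entry table (FM10K_MAX_RSS_INDICES) round-robin over nb_q queues. Note that the query path above insists on reta_size covering all 128 entries, so reads should always pass the full size.

    #include <string.h>
    #include <rte_ethdev.h>

    /* 128 RETA entries / RTE_RETA_GROUP_SIZE (64) = 2 groups of 64 */
    static int
    spread_reta(uint8_t port_id, uint16_t nb_q)
    {
        struct rte_eth_rss_reta_entry64 reta_conf[2];
        uint16_t i;

        memset(reta_conf, 0, sizeof(reta_conf));
        for (i = 0; i < 128; i++) {
            uint16_t idx = i / RTE_RETA_GROUP_SIZE;
            uint16_t shift = i % RTE_RETA_GROUP_SIZE;

            reta_conf[idx].mask |= 1ULL << shift;   /* mark entry valid */
            reta_conf[idx].reta[shift] = i % nb_q;  /* round-robin queue */
        }
        return rte_eth_dev_rss_reta_update(port_id, reta_conf, 128);
    }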
* [dpdk-dev] [PATCH v6 06/16] fm10k: add Rx queue setup/release function 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (4 preceding siblings ...) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 05/16] fm10k: add reta update/requery functions Chen Jing D(Mark) @ 2015-02-17 14:18 ` Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 07/16] fm10k: add Tx " Chen Jing D(Mark) ` (10 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-17 14:18 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_rx_queue_setup and fm10k_rx_queue_release functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 254 +++++++++++++++++++++++++++++++++++ 1 files changed, 254 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index b3d4d79..8799c1a 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -41,6 +41,7 @@ #include "fm10k.h" #include "base/fm10k_api.h" +#define FM10K_RX_BUFF_ALIGN 512 /* Default delay to acquire mailbox lock */ #define FM10K_MBXLOCK_DELAY_US 20 @@ -67,6 +68,46 @@ fm10k_mbx_unlock(struct fm10k_hw *hw) rte_spinlock_unlock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); } +/* + * clean queue, descriptor rings, free software buffers used when stopping + * device. + */ +static inline void +rx_queue_clean(struct fm10k_rx_queue *q) +{ + union fm10k_rx_desc zero = {.q = {0, 0, 0, 0} }; + uint32_t i; + PMD_INIT_FUNC_TRACE(); + + /* zero descriptor rings */ + for (i = 0; i < q->nb_desc; ++i) + q->hw_ring[i] = zero; + + /* free software buffers */ + for (i = 0; i < q->nb_desc; ++i) { + if (q->sw_ring[i]) { + rte_pktmbuf_free_seg(q->sw_ring[i]); + q->sw_ring[i] = NULL; + } + } +} + +/* + * free all queue memory used when releasing the queue (i.e. configure) + */ +static inline void +rx_queue_free(struct fm10k_rx_queue *q) +{ + PMD_INIT_FUNC_TRACE(); + if (q) { + PMD_INIT_LOG(DEBUG, "Freeing rx queue %p", q); + rx_queue_clean(q); + if (q->sw_ring) + rte_free(q->sw_ring); + rte_free(q); + } +} + static int fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -186,6 +227,217 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev, } +static inline int +check_nb_desc(uint16_t min, uint16_t max, uint16_t mult, uint16_t request) +{ + if ((request < min) || (request > max) || ((request % mult) != 0)) + return -1; + else + return 0; +} + +/* + * Create a memzone for hardware descriptor rings. Malloc cannot be used since + * the physical address is required. If the memzone is already created, then + * this function returns a pointer to the existing memzone. 
+ */ +static inline const struct rte_memzone * +allocate_hw_ring(const char *driver_name, const char *ring_name, + uint8_t port_id, uint16_t queue_id, int socket_id, + uint32_t size, uint32_t align) +{ + char name[RTE_MEMZONE_NAMESIZE]; + const struct rte_memzone *mz; + + snprintf(name, sizeof(name), "%s_%s_%d_%d_%d", + driver_name, ring_name, port_id, queue_id, socket_id); + + /* return the memzone if it already exists */ + mz = rte_memzone_lookup(name); + if (mz) + return mz; + +#ifdef RTE_LIBRTE_XEN_DOM0 + return rte_memzone_reserve_bounded(name, size, socket_id, 0, align, + RTE_PGSIZE_2M); +#else + return rte_memzone_reserve_aligned(name, size, socket_id, 0, align); +#endif +} + +static inline int +check_thresh(uint16_t min, uint16_t max, uint16_t div, uint16_t request) +{ + if ((request < min) || (request > max) || ((div % request) != 0)) + return -1; + else + return 0; +} + +static inline int +handle_rxconf(struct fm10k_rx_queue *q, const struct rte_eth_rxconf *conf) +{ + uint16_t rx_free_thresh; + + if (conf->rx_free_thresh == 0) + rx_free_thresh = FM10K_RX_FREE_THRESH_DEFAULT(q); + else + rx_free_thresh = conf->rx_free_thresh; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_RX_FREE_THRESH_MIN(q), + FM10K_RX_FREE_THRESH_MAX(q), + FM10K_RX_FREE_THRESH_DIV(q), + rx_free_thresh)) { + PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + rx_free_thresh, FM10K_RX_FREE_THRESH_MAX(q), + FM10K_RX_FREE_THRESH_MIN(q), + FM10K_RX_FREE_THRESH_DIV(q)); + return (-EINVAL); + } + + q->alloc_thresh = rx_free_thresh; + q->drop_en = conf->rx_drop_en; + q->rx_deferred_start = conf->rx_deferred_start; + + return 0; +} + +/* + * Hardware requires specific alignment for Rx packet buffers. At + * least one of the following two conditions must be satisfied. + * 1. Address is 512B aligned + * 2. Address is 8B aligned and buffer does not cross 4K boundary. + * + * As such, the driver may need to adjust the DMA address within the + * buffer by up to 512B. The mempool element size is checked here + * to make sure a maximally sized Ethernet frame can still be wholly + * contained within the buffer after 512B alignment. + * + * return 1 if the element size is valid, otherwise return 0. + */ +static int +mempool_element_size_valid(struct rte_mempool *mp) +{ + uint32_t min_size; + + /* elt_size includes mbuf header and headroom */ + min_size = mp->elt_size - sizeof(struct rte_mbuf) - + RTE_PKTMBUF_HEADROOM; + + /* account for up to 512B of alignment */ + min_size -= FM10K_RX_BUFF_ALIGN; + + /* sanity check for overflow */ + if (min_size > mp->elt_size) + return 0; + + if (min_size < ETHER_MAX_VLAN_FRAME_LEN) + return 0; + + /* size is valid */ + return 1; +} + +static int +fm10k_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_rxconf *conf, struct rte_mempool *mp) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_rx_queue *q; + const struct rte_memzone *mz; + + PMD_INIT_FUNC_TRACE(); + + /* make sure the mempool element size can account for alignment. 
*/ + if (!mempool_element_size_valid(mp)) { + PMD_INIT_LOG(ERR, "Error : Mempool element size is too small"); + return (-EINVAL); + } + + /* make sure a valid number of descriptors have been requested */ + if (check_nb_desc(FM10K_MIN_RX_DESC, FM10K_MAX_RX_DESC, + FM10K_MULT_RX_DESC, nb_desc)) { + PMD_INIT_LOG(ERR, "Number of Rx descriptors (%u) must be " + "less than or equal to %"PRIu32", " + "greater than or equal to %u, " + "and a multiple of %u", + nb_desc, (uint32_t)FM10K_MAX_RX_DESC, FM10K_MIN_RX_DESC, + FM10K_MULT_RX_DESC); + return (-EINVAL); + } + + /* + * if this queue existed already, free the associated memory. The + * queue cannot be reused in case we need to allocate memory on + * different socket than was previously used. + */ + if (dev->data->rx_queues[queue_id] != NULL) { + rx_queue_free(dev->data->rx_queues[queue_id]); + dev->data->rx_queues[queue_id] = NULL; + } + + /* allocate memory for the queue structure */ + q = rte_zmalloc_socket("fm10k", sizeof(*q), RTE_CACHE_LINE_SIZE, + socket_id); + if (q == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate queue structure"); + return (-ENOMEM); + } + + /* setup queue */ + q->mp = mp; + q->nb_desc = nb_desc; + q->port_id = dev->data->port_id; + q->queue_id = queue_id; + q->tail_ptr = (volatile uint32_t *) + &((uint32_t *)hw->hw_addr)[FM10K_RDT(queue_id)]; + if (handle_rxconf(q, conf)) + return (-EINVAL); + + /* allocate memory for the software ring */ + q->sw_ring = rte_zmalloc_socket("fm10k sw ring", + nb_desc * sizeof(struct rte_mbuf *), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->sw_ring == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate software ring"); + rte_free(q); + return (-ENOMEM); + } + + /* + * allocate memory for the hardware descriptor ring. A memzone large + * enough to hold the maximum ring size is requested to allow for + * resizing in later calls to the queue setup function. + */ + mz = allocate_hw_ring(dev->driver->pci_drv.name, "rx_ring", + dev->data->port_id, queue_id, socket_id, + FM10K_MAX_RX_RING_SZ, FM10K_ALIGN_RX_DESC); + if (mz == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate hardware ring"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + q->hw_ring = mz->addr; + q->hw_ring_phys_addr = mz->phys_addr; + + dev->data->rx_queues[queue_id] = q; + return 0; +} + +static void +fm10k_rx_queue_release(void *queue) +{ + PMD_INIT_FUNC_TRACE(); + + rx_queue_free(queue); +} + static int fm10k_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, @@ -325,6 +577,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, .dev_infos_get = fm10k_dev_infos_get, + .rx_queue_setup = fm10k_rx_queue_setup, + .rx_queue_release = fm10k_rx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
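The mempool_element_size_valid() check above effectively demands a data room of at least ETHER_MAX_VLAN_FRAME_LEN (1522 B) plus the 512 B of alignment slack the driver may consume. Below is a sketch of creating a compliant pool with the mempool API of this DPDK generation; the 2048 B data room, pool size and cache size are illustrative, and passing NULL to rte_pktmbuf_pool_init() is assumed to select the default data room of 2048 B plus headroom, which matches the element size computed here.

    #include <rte_ether.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* 2048 >= 1522 (max VLAN frame) + 512 (worst-case alignment shift) */
    #define MBUF_DATA_ROOM 2048
    #define MBUF_SIZE (MBUF_DATA_ROOM + RTE_PKTMBUF_HEADROOM + sizeof(struct rte_mbuf))

    static struct rte_mempool *
    fm10k_rx_pool_create(int socket_id)
    {
        return rte_mempool_create("fm10k_rx_pool",
                8192,                   /* number of mbufs */
                MBUF_SIZE,              /* element size checked by the PMD */
                256,                    /* per-lcore cache */
                sizeof(struct rte_pktmbuf_pool_private),
                rte_pktmbuf_pool_init, NULL,
                rte_pktmbuf_init, NULL,
                socket_id, 0);
    }

A pool built this way passes the element-size check; the nb_desc later handed to rte_eth_rx_queue_setup() must still respect the FM10K_MIN/MAX/MULT_RX_DESC constraints enforced by check_nb_desc().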
* [dpdk-dev] [PATCH v6 07/16] fm10k: add Tx queue setup/release function 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (5 preceding siblings ...) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 06/16] fm10k: add Rx queue setup/release function Chen Jing D(Mark) @ 2015-02-17 14:18 ` Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 08/16] fm10k: add Rx/Tx single queue start/stop function Chen Jing D(Mark) ` (9 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-17 14:18 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_tx_queue_setup and fm10k_tx_queue_release functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 205 +++++++++++++++++++++++++++++++++++ 1 files changed, 205 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 8799c1a..47bfe59 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -108,6 +108,48 @@ rx_queue_free(struct fm10k_rx_queue *q) } } +/* + * clean queue, descriptor rings, free software buffers used when stopping + * device + */ +static inline void +tx_queue_clean(struct fm10k_tx_queue *q) +{ + struct fm10k_tx_desc zero = {0, 0, 0, 0, 0, 0}; + uint32_t i; + PMD_INIT_FUNC_TRACE(); + + /* zero descriptor rings */ + for (i = 0; i < q->nb_desc; ++i) + q->hw_ring[i] = zero; + + /* free software buffers */ + for (i = 0; i < q->nb_desc; ++i) { + if (q->sw_ring[i]) { + rte_pktmbuf_free_seg(q->sw_ring[i]); + q->sw_ring[i] = NULL; + } + } +} + +/* + * free all queue memory used when releasing the queue (i.e. 
configure) + */ +static inline void +tx_queue_free(struct fm10k_tx_queue *q) +{ + PMD_INIT_FUNC_TRACE(); + if (q) { + PMD_INIT_LOG(DEBUG, "Freeing tx queue %p", q); + tx_queue_clean(q); + if (q->rs_tracker.list) + rte_free(q->rs_tracker.list); + if (q->sw_ring) + rte_free(q->sw_ring); + rte_free(q); + } +} + static int fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -438,6 +480,167 @@ fm10k_rx_queue_release(void *queue) rx_queue_free(queue); } +static inline int +handle_txconf(struct fm10k_tx_queue *q, const struct rte_eth_txconf *conf) +{ + uint16_t tx_free_thresh; + uint16_t tx_rs_thresh; + + /* constraint MACROs require that tx_free_thresh is configured + * before tx_rs_thresh */ + if (conf->tx_free_thresh == 0) + tx_free_thresh = FM10K_TX_FREE_THRESH_DEFAULT(q); + else + tx_free_thresh = conf->tx_free_thresh; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_TX_FREE_THRESH_MIN(q), + FM10K_TX_FREE_THRESH_MAX(q), + FM10K_TX_FREE_THRESH_DIV(q), + tx_free_thresh)) { + PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + tx_free_thresh, FM10K_TX_FREE_THRESH_MAX(q), + FM10K_TX_FREE_THRESH_MIN(q), + FM10K_TX_FREE_THRESH_DIV(q)); + return (-EINVAL); + } + + q->free_thresh = tx_free_thresh; + + if (conf->tx_rs_thresh == 0) + tx_rs_thresh = FM10K_TX_RS_THRESH_DEFAULT(q); + else + tx_rs_thresh = conf->tx_rs_thresh; + + q->tx_deferred_start = conf->tx_deferred_start; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_TX_RS_THRESH_MIN(q), + FM10K_TX_RS_THRESH_MAX(q), + FM10K_TX_RS_THRESH_DIV(q), + tx_rs_thresh)) { + PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + tx_rs_thresh, FM10K_TX_RS_THRESH_MAX(q), + FM10K_TX_RS_THRESH_MIN(q), + FM10K_TX_RS_THRESH_DIV(q)); + return (-EINVAL); + } + + q->rs_thresh = tx_rs_thresh; + + return 0; +} + +static int +fm10k_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_txconf *conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_tx_queue *q; + const struct rte_memzone *mz; + + PMD_INIT_FUNC_TRACE(); + + /* make sure a valid number of descriptors have been requested */ + if (check_nb_desc(FM10K_MIN_TX_DESC, FM10K_MAX_TX_DESC, + FM10K_MULT_TX_DESC, nb_desc)) { + PMD_INIT_LOG(ERR, "Number of Tx descriptors (%u) must be " + "less than or equal to %"PRIu32", " + "greater than or equal to %u, " + "and a multiple of %u", + nb_desc, (uint32_t)FM10K_MAX_TX_DESC, FM10K_MIN_TX_DESC, + FM10K_MULT_TX_DESC); + return (-EINVAL); + } + + /* + * if this queue existed already, free the associated memory. The + * queue cannot be reused in case we need to allocate memory on + * different socket than was previously used. 
+ */ + if (dev->data->tx_queues[queue_id] != NULL) { + tx_queue_free(dev->data->tx_queues[queue_id]); + dev->data->tx_queues[queue_id] = NULL; + } + + /* allocate memory for the queue structure */ + q = rte_zmalloc_socket("fm10k", sizeof(*q), RTE_CACHE_LINE_SIZE, + socket_id); + if (q == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate queue structure"); + return (-ENOMEM); + } + + /* setup queue */ + q->nb_desc = nb_desc; + q->port_id = dev->data->port_id; + q->queue_id = queue_id; + q->tail_ptr = (volatile uint32_t *) + &((uint32_t *)hw->hw_addr)[FM10K_TDT(queue_id)]; + if (handle_txconf(q, conf)) + return (-EINVAL); + + /* allocate memory for the software ring */ + q->sw_ring = rte_zmalloc_socket("fm10k sw ring", + nb_desc * sizeof(struct rte_mbuf *), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->sw_ring == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate software ring"); + rte_free(q); + return (-ENOMEM); + } + + /* + * allocate memory for the hardware descriptor ring. A memzone large + * enough to hold the maximum ring size is requested to allow for + * resizing in later calls to the queue setup function. + */ + mz = allocate_hw_ring(dev->driver->pci_drv.name, "tx_ring", + dev->data->port_id, queue_id, socket_id, + FM10K_MAX_TX_RING_SZ, FM10K_ALIGN_TX_DESC); + if (mz == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate hardware ring"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + q->hw_ring = mz->addr; + q->hw_ring_phys_addr = mz->phys_addr; + + /* + * allocate memory for the RS bit tracker. Enough slots to hold the + * descriptor index for each RS bit needing to be set are required. + */ + q->rs_tracker.list = rte_zmalloc_socket("fm10k rs tracker", + ((nb_desc + 1) / q->rs_thresh) * + sizeof(uint16_t), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->rs_tracker.list == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate RS bit tracker"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + + dev->data->tx_queues[queue_id] = q; + return 0; +} + +static void +fm10k_tx_queue_release(void *queue) +{ + PMD_INIT_FUNC_TRACE(); + + tx_queue_free(queue); +} + static int fm10k_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, @@ -579,6 +782,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_infos_get = fm10k_dev_infos_get, .rx_queue_setup = fm10k_rx_queue_setup, .rx_queue_release = fm10k_rx_queue_release, + .tx_queue_setup = fm10k_tx_queue_setup, + .tx_queue_release = fm10k_tx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
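As with the Rx path, the Tx thresholds are validated by check_thresh() above, so values supplied through rte_eth_txconf must fall inside the FM10K_TX_*_THRESH_MIN/MAX window and evenly divide the corresponding FM10K_TX_*_THRESH_DIV quantity. The sketch below is illustrative only; the ring size and thresholds are plausible but not taken from the patch, and passing a NULL txconf to pick up the defaults advertised in fm10k_dev_infos_get() works just as well.

    #include <rte_ethdev.h>

    static int
    setup_tx_queue(uint8_t port_id, uint16_t queue_id, int socket_id)
    {
        struct rte_eth_txconf txconf = {
            .tx_free_thresh = 64,  /* reclaim descriptors in batches of 64 */
            .tx_rs_thresh = 32,    /* request DONE write-back every 32 descs */
            .txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS | ETH_TXQ_FLAGS_NOOFFLOADS,
        };

        /* 512 descriptors: must stay within FM10K_MIN/MAX_TX_DESC and be a
         * multiple of FM10K_MULT_TX_DESC per check_nb_desc() */
        return rte_eth_tx_queue_setup(port_id, queue_id, 512, socket_id, &txconf);
    }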
* [dpdk-dev] [PATCH v6 08/16] fm10k: add Rx/Tx single queue start/stop function 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (6 preceding siblings ...) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 07/16] fm10k: add Tx " Chen Jing D(Mark) @ 2015-02-17 14:18 ` Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 09/16] fm10k: add dev start/stop functions Chen Jing D(Mark) ` (8 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-17 14:18 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add 4 functions fm10k_dev_rx_queue_start, fm10k_dev_rx_queue_stop, fm10k_dev_tx_queue_start, and fm10k_dev_tx_queue_stop. 2. verify Rx packet buffer alignment is valid. Hardware requires specific alignment for Rx packet buffers. At least one of the following two conditions must be satisfied. 1) Address is 512B aligned 2) Address is 8B aligned and buffer does not cross 4K boundary. Alignment is checked by the driver when the Rx queue is reset. It is assumed that if an entire descriptor ring can be filled with buffers containing valid alignment, then all buffers in that mempool have valid address alignment. It is the responsibility of the user to ensure all buffers have valid alignment, as it is the user who creates the mempool. It is assumed the buffer needs only to store a maximum size Ethernet frame. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k.h | 58 +++++++++ lib/librte_pmd_fm10k/fm10k_ethdev.c | 222 +++++++++++++++++++++++++++++++++++ 2 files changed, 280 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index 1468040..b23a3a6 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -221,4 +221,62 @@ static inline uint16_t fifo_remove(struct fifo *fifo) fifo->tail = fifo->list; return val; } + +static inline void +fm10k_pktmbuf_reset(struct rte_mbuf *mb, uint8_t in_port) +{ + rte_mbuf_refcnt_set(mb, 1); + mb->next = NULL; + mb->nb_segs = 1; + + /* enforce 512B alignment on default Rx virtual addresses */ + mb->data_off = (uint16_t)(RTE_PTR_ALIGN((char *)mb->buf_addr + + RTE_PKTMBUF_HEADROOM, FM10K_RX_DATABUF_ALIGN) + - (char *)mb->buf_addr); + mb->port = in_port; +} + +/* + * Verify Rx packet buffer alignment is valid. + * + * Hardware requires specific alignment for Rx packet buffers. At + * least one of the following two conditions must be satisfied. + * 1. Address is 512B aligned + * 2. Address is 8B aligned and buffer does not cross 4K boundary. + * + * Return 1 if buffer alignment satisfies at least one condition, + * otherwise return 0. + * + * Note: Alignment is checked by the driver when the Rx queue is reset. It + * is assumed that if an entire descriptor ring can be filled with + * buffers containing valid alignment, then all buffers in that mempool + * have valid address alignment. It is the responsibility of the user + * to ensure all buffers have valid alignment, as it is the user who + * creates the mempool. + * Note: It is assumed the buffer needs only to store a maximum size Ethernet + * frame. + */ +static inline int +fm10k_addr_alignment_valid(struct rte_mbuf *mb) +{ + uint64_t addr = MBUF_DMA_ADDR_DEFAULT(mb); + uint64_t boundary1, boundary2; + + /* 512B aligned? 
*/ + if (RTE_ALIGN(addr, 512) == addr) + return 1; + + /* 8B aligned, and max Ethernet frame would not cross a 4KB boundary? */ + if (RTE_ALIGN(addr, 8) == addr) { + boundary1 = RTE_ALIGN_FLOOR(addr, 4096); + boundary2 = RTE_ALIGN_FLOOR(addr + ETHER_MAX_VLAN_FRAME_LEN, + 4096); + if (boundary1 == boundary2) + return 1; + } + + PMD_INIT_LOG(ERR, "Error: Invalid buffer alignment!"); + + return 0; +} #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 47bfe59..0fb7b95 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -69,6 +69,43 @@ fm10k_mbx_unlock(struct fm10k_hw *hw) } /* + * reset queue to initial state, allocate software buffers used when starting + * device. + * return 0 on success + * return -ENOMEM if buffers cannot be allocated + * return -EINVAL if buffers do not satisfy alignment condition + */ +static inline int +rx_queue_reset(struct fm10k_rx_queue *q) +{ + uint64_t dma_addr; + int i, diag; + PMD_INIT_FUNC_TRACE(); + + diag = rte_mempool_get_bulk(q->mp, (void **)q->sw_ring, q->nb_desc); + if (diag != 0) + return -ENOMEM; + + for (i = 0; i < q->nb_desc; ++i) { + fm10k_pktmbuf_reset(q->sw_ring[i], q->port_id); + if (!fm10k_addr_alignment_valid(q->sw_ring[i])) { + rte_mempool_put_bulk(q->mp, (void **)q->sw_ring, + q->nb_desc); + return -EINVAL; + } + dma_addr = MBUF_DMA_ADDR_DEFAULT(q->sw_ring[i]); + q->hw_ring[i].q.pkt_addr = dma_addr; + q->hw_ring[i].q.hdr_addr = dma_addr; + } + + q->next_dd = 0; + q->next_alloc = 0; + q->next_trigger = q->alloc_thresh - 1; + FM10K_PCI_REG_WRITE(q->tail_ptr, q->nb_desc - 1); + return 0; +} + +/* * clean queue, descriptor rings, free software buffers used when stopping * device. */ @@ -109,6 +146,49 @@ rx_queue_free(struct fm10k_rx_queue *q) } /* + * disable RX queue, wait unitl HW finished necessary flush operation + */ +static inline int +rx_queue_disable(struct fm10k_hw *hw, uint16_t qnum) +{ + uint32_t reg, i; + + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(qnum)); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(qnum), + reg & ~FM10K_RXQCTL_ENABLE); + + /* Wait 100us at most */ + for (i = 0; i < FM10K_QUEUE_DISABLE_TIMEOUT; i++) { + rte_delay_us(1); + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + if (!(reg & FM10K_RXQCTL_ENABLE)) + break; + } + + if (i == FM10K_QUEUE_DISABLE_TIMEOUT) + return -1; + + return 0; +} + +/* + * reset queue to initial state, allocate software buffers used when starting + * device + */ +static inline void +tx_queue_reset(struct fm10k_tx_queue *q) +{ + PMD_INIT_FUNC_TRACE(); + q->last_free = 0; + q->next_free = 0; + q->nb_used = 0; + q->nb_free = q->nb_desc - 1; + q->free_trigger = q->nb_free - q->free_thresh; + fifo_reset(&q->rs_tracker, (q->nb_desc + 1) / q->rs_thresh); + FM10K_PCI_REG_WRITE(q->tail_ptr, 0); +} + +/* * clean queue, descriptor rings, free software buffers used when stopping * device */ @@ -150,6 +230,32 @@ tx_queue_free(struct fm10k_tx_queue *q) } } +/* + * disable TX queue, wait unitl HW finished necessary flush operation + */ +static inline int +tx_queue_disable(struct fm10k_hw *hw, uint16_t qnum) +{ + uint32_t reg, i; + + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(qnum)); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(qnum), + reg & ~FM10K_TXDCTL_ENABLE); + + /* Wait 100us at most */ + for (i = 0; i < FM10K_QUEUE_DISABLE_TIMEOUT; i++) { + rte_delay_us(1); + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + if (!(reg & FM10K_TXDCTL_ENABLE)) + break; + } + + if (i == FM10K_QUEUE_DISABLE_TIMEOUT) + return -1; + + return 0; +} + static int 
fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -162,6 +268,118 @@ fm10k_dev_configure(struct rte_eth_dev *dev) } static int +fm10k_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int err = -1; + uint32_t reg; + struct fm10k_rx_queue *rxq; + + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id < dev->data->nb_rx_queues) { + rxq = dev->data->rx_queues[rx_queue_id]; + err = rx_queue_reset(rxq); + if (err == -ENOMEM) { + PMD_INIT_LOG(ERR, "Failed to alloc memory : %d", err); + return err; + } else if (err == -EINVAL) { + PMD_INIT_LOG(ERR, "Invalid buffer address alignment :" + " %d", err); + return err; + } + + /* Setup the HW Rx Head and Tail Descriptor Pointers + * Note: this must be done AFTER the queue is enabled on real + * hardware, but BEFORE the queue is enabled when using the + * emulation platform. Do it in both places for now and remove + * this comment and the following two register writes when the + * emulation platform is no longer being used. + */ + FM10K_WRITE_REG(hw, FM10K_RDH(rx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_RDT(rx_queue_id), rxq->nb_desc - 1); + + /* Set PF ownership flag for PF devices */ + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(rx_queue_id)); + if (hw->mac.type == fm10k_mac_pf) + reg |= FM10K_RXQCTL_PF; + reg |= FM10K_RXQCTL_ENABLE; + /* enable RX queue */ + FM10K_WRITE_REG(hw, FM10K_RXQCTL(rx_queue_id), reg); + FM10K_WRITE_FLUSH(hw); + + /* Setup the HW Rx Head and Tail Descriptor Pointers + * Note: this must be done AFTER the queue is enabled + */ + FM10K_WRITE_REG(hw, FM10K_RDH(rx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_RDT(rx_queue_id), rxq->nb_desc - 1); + } + + return err; +} + +static int +fm10k_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id < dev->data->nb_rx_queues) { + /* Disable RX queue */ + rx_queue_disable(hw, rx_queue_id); + + /* Free mbuf and clean HW ring */ + rx_queue_clean(dev->data->rx_queues[rx_queue_id]); + } + + return 0; +} + +static int +fm10k_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + /** @todo - this should be defined in the shared code */ +#define FM10K_TXDCTL_WRITE_BACK_MIN_DELAY 0x00010000 + uint32_t txdctl = FM10K_TXDCTL_WRITE_BACK_MIN_DELAY; + int err = 0; + + PMD_INIT_FUNC_TRACE(); + + if (tx_queue_id < dev->data->nb_tx_queues) { + tx_queue_reset(dev->data->tx_queues[tx_queue_id]); + + /* reset head and tail pointers */ + FM10K_WRITE_REG(hw, FM10K_TDH(tx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_TDT(tx_queue_id), 0); + + /* enable TX queue */ + FM10K_WRITE_REG(hw, FM10K_TXDCTL(tx_queue_id), + FM10K_TXDCTL_ENABLE | txdctl); + FM10K_WRITE_FLUSH(hw); + } else + err = -1; + + return err; +} + +static int +fm10k_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + if (tx_queue_id < dev->data->nb_tx_queues) { + tx_queue_disable(hw, tx_queue_id); + tx_queue_clean(dev->data->tx_queues[tx_queue_id]); + } + + return 0; +} + +static int fm10k_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) { @@ -780,6 +998,10 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, .dev_infos_get = fm10k_dev_infos_get, + 
.rx_queue_start = fm10k_dev_rx_queue_start, + .rx_queue_stop = fm10k_dev_rx_queue_stop, + .tx_queue_start = fm10k_dev_tx_queue_start, + .tx_queue_stop = fm10k_dev_tx_queue_stop, .rx_queue_setup = fm10k_rx_queue_setup, .rx_queue_release = fm10k_rx_queue_release, .tx_queue_setup = fm10k_tx_queue_setup, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
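The per-queue hooks above are reached through the generic rte_eth_dev_rx/tx_queue_start()/stop() calls and combine with the rx_deferred_start/tx_deferred_start flags stored during queue setup: a deferred queue is skipped when the device starts and only enabled once the application asks for it. A hedged sketch follows; it assumes the port was configured with at least two Rx queues and that queue 0 and the mempool were set up elsewhere.

    #include <rte_ethdev.h>

    static int
    start_rx_queue_1_late(uint8_t port_id, int socket_id, struct rte_mempool *mp)
    {
        struct rte_eth_rxconf rxconf = { .rx_deferred_start = 1 };
        int ret;

        /* queue 1 is created but will not be enabled by device start */
        ret = rte_eth_rx_queue_setup(port_id, 1, 512, socket_id, &rxconf, mp);
        if (ret < 0)
            return ret;

        ret = rte_eth_dev_start(port_id);
        if (ret < 0)
            return ret;

        /* later, when the application is ready for traffic on queue 1 */
        return rte_eth_dev_rx_queue_start(port_id, 1);
    }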
* [dpdk-dev] [PATCH v6 09/16] fm10k: add dev start/stop functions 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (7 preceding siblings ...) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 08/16] fm10k: add Rx/Tx single queue start/stop function Chen Jing D(Mark) @ 2015-02-17 14:18 ` Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 10/16] fm10k: add receive and tranmit function Chen Jing D(Mark) ` (7 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-17 14:18 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add function to initialize RX queues. 2. Add function to initialize TX queues. 3. Add fm10k_dev_start, fm10k_dev_stop and fm10k_dev_close functions. 4. Add function to close mailbox service. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 224 +++++++++++++++++++++++++++++++++++ 1 files changed, 224 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 0fb7b95..b79badc 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -44,11 +44,14 @@ #define FM10K_RX_BUFF_ALIGN 512 /* Default delay to acquire mailbox lock */ #define FM10K_MBXLOCK_DELAY_US 20 +#define UINT64_LOWER_32BITS_MASK 0x00000000ffffffffULL /* Number of chars per uint32 type */ #define CHARS_PER_UINT32 (sizeof(uint32_t)) #define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1) +static void fm10k_close_mbx_service(struct fm10k_hw *hw); + static void fm10k_mbx_initlock(struct fm10k_hw *hw) { @@ -268,6 +271,98 @@ fm10k_dev_configure(struct rte_eth_dev *dev) } static int +fm10k_dev_tx_init(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, ret; + struct fm10k_tx_queue *txq; + uint64_t base_addr; + uint32_t size; + + /* Disable TXINT to avoid possible interrupt */ + for (i = 0; i < hw->mac.max_queues; i++) + FM10K_WRITE_REG(hw, FM10K_TXINT(i), + 3 << FM10K_TXINT_TIMER_SHIFT); + + /* Setup TX queue */ + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = dev->data->tx_queues[i]; + base_addr = txq->hw_ring_phys_addr; + size = txq->nb_desc * sizeof(struct fm10k_tx_desc); + + /* disable queue to avoid issues while updating state */ + ret = tx_queue_disable(hw, i); + if (ret) { + PMD_INIT_LOG(ERR, "failed to disable queue %d", i); + return -1; + } + + /* set location and size for descriptor ring */ + FM10K_WRITE_REG(hw, FM10K_TDBAL(i), + base_addr & UINT64_LOWER_32BITS_MASK); + FM10K_WRITE_REG(hw, FM10K_TDBAH(i), + base_addr >> (CHAR_BIT * sizeof(uint32_t))); + FM10K_WRITE_REG(hw, FM10K_TDLEN(i), size); + } + return 0; +} + +static int +fm10k_dev_rx_init(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, ret; + struct fm10k_rx_queue *rxq; + uint64_t base_addr; + uint32_t size; + uint32_t rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY; + uint16_t buf_size; + struct rte_pktmbuf_pool_private *mbp_priv; + + /* Disable RXINT to avoid possible interrupt */ + for (i = 0; i < hw->mac.max_queues; i++) + FM10K_WRITE_REG(hw, FM10K_RXINT(i), + 3 << FM10K_RXINT_TIMER_SHIFT); + + /* Setup RX queues */ + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = dev->data->rx_queues[i]; + base_addr = rxq->hw_ring_phys_addr; + size = rxq->nb_desc * sizeof(union fm10k_rx_desc); + + /* disable 
queue to avoid issues while updating state */ + ret = rx_queue_disable(hw, i); + if (ret) { + PMD_INIT_LOG(ERR, "failed to disable queue %d", i); + return -1; + } + + /* Setup the Base and Length of the Rx Descriptor Ring */ + FM10K_WRITE_REG(hw, FM10K_RDBAL(i), + base_addr & UINT64_LOWER_32BITS_MASK); + FM10K_WRITE_REG(hw, FM10K_RDBAH(i), + base_addr >> (CHAR_BIT * sizeof(uint32_t))); + FM10K_WRITE_REG(hw, FM10K_RDLEN(i), size); + + /* Configure the Rx buffer size for one buff without split */ + mbp_priv = rte_mempool_get_priv(rxq->mp); + buf_size = (uint16_t) (mbp_priv->mbuf_data_room_size - + RTE_PKTMBUF_HEADROOM); + FM10K_WRITE_REG(hw, FM10K_SRRCTL(i), + buf_size >> FM10K_SRRCTL_BSIZEPKT_SHIFT); + + /* Enable drop on empty, it's RO for VF */ + if (hw->mac.type == fm10k_mac_pf && rxq->drop_en) + rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY; + + FM10K_WRITE_REG(hw, FM10K_RXDCTL(i), rxdctl); + FM10K_WRITE_FLUSH(hw); + } + + return 0; +} + +static int fm10k_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) { struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -379,6 +474,125 @@ fm10k_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) return 0; } +/* fls = find last set bit = 32 minus the number of leading zeros */ +#ifndef fls +#define fls(x) (((x) == 0) ? 0 : (32 - __builtin_clz((x)))) +#endif +#define BSIZEPKT_ROUNDUP ((1 << FM10K_SRRCTL_BSIZEPKT_SHIFT) - 1) +static int +fm10k_dev_start(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, diag; + + PMD_INIT_FUNC_TRACE(); + + /* stop, init, then start the hw */ + diag = fm10k_stop_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware stop failed: %d", diag); + return -EIO; + } + + diag = fm10k_init_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware init failed: %d", diag); + return -EIO; + } + + diag = fm10k_start_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware start failed: %d", diag); + return -EIO; + } + + diag = fm10k_dev_tx_init(dev); + if (diag) { + PMD_INIT_LOG(ERR, "TX init failed: %d", diag); + return diag; + } + + diag = fm10k_dev_rx_init(dev); + if (diag) { + PMD_INIT_LOG(ERR, "RX init failed: %d", diag); + return diag; + } + + if (hw->mac.type == fm10k_mac_pf) { + /* Establish only VSI 0 as valid */ + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(0), FM10K_DGLORTMAP_ANY); + + /* Configure RSS bits used in RETA table */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(0), + fls(dev->data->nb_rx_queues - 1) << + FM10K_DGLORTDEC_RSSLENGTH_SHIFT); + + /* Invalidate all other GLORT entries */ + for (i = 1; i < FM10K_DGLORT_COUNT; i++) + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(i), + FM10K_DGLORTMAP_NONE); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + struct fm10k_rx_queue *rxq; + rxq = dev->data->rx_queues[i]; + + if (rxq->rx_deferred_start) + continue; + diag = fm10k_dev_rx_queue_start(dev, i); + if (diag != 0) { + int j; + for (j = 0; j < i; ++j) + rx_queue_clean(dev->data->rx_queues[j]); + return diag; + } + } + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + struct fm10k_tx_queue *txq; + txq = dev->data->tx_queues[i]; + + if (txq->tx_deferred_start) + continue; + diag = fm10k_dev_tx_queue_start(dev, i); + if (diag != 0) { + int j; + for (j = 0; j < dev->data->nb_rx_queues; ++j) + rx_queue_clean(dev->data->rx_queues[j]); + return diag; + } + } + + return 0; +} + +static void +fm10k_dev_stop(struct rte_eth_dev *dev) +{ + int i; + + PMD_INIT_FUNC_TRACE(); + + for (i = 0; i < 
dev->data->nb_tx_queues; i++) + fm10k_dev_tx_queue_stop(dev, i); + + for (i = 0; i < dev->data->nb_rx_queues; i++) + fm10k_dev_rx_queue_stop(dev, i); +} + +static void +fm10k_dev_close(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + /* Stop mailbox service first */ + fm10k_close_mbx_service(hw); + fm10k_dev_stop(dev); + fm10k_stop_hw(hw); +} + static int fm10k_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) @@ -992,8 +1206,18 @@ fm10k_setup_mbx_service(struct fm10k_hw *hw) return hw->mbx.ops.connect(hw, &hw->mbx); } +static void +fm10k_close_mbx_service(struct fm10k_hw *hw) +{ + /* Disconnect from SM for PF device or PF for VF device */ + hw->mbx.ops.disconnect(hw, &hw->mbx); +} + static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_configure = fm10k_dev_configure, + .dev_start = fm10k_dev_start, + .dev_stop = fm10k_dev_stop, + .dev_close = fm10k_dev_close, .stats_get = fm10k_stats_get, .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
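The fls() helper used when programming FM10K_DGLORTDEC(0) converts a queue count into the number of RSS hash bits the switch logic should consume. A stand-alone sketch of the same arithmetic (queue counts here are purely illustrative):

    #include <stdio.h>

    /* find last set bit = 32 minus the number of leading zeros, as in
     * fm10k_dev_start() above */
    #define fls(x) (((x) == 0) ? 0 : (32 - __builtin_clz((x))))

    int main(void)
    {
        unsigned int nb_rx_queues;

        /* 1 queue -> 0 bits, 2 -> 1, 4 -> 2, 8 -> 3, 16 -> 4 */
        for (nb_rx_queues = 1; nb_rx_queues <= 16; nb_rx_queues *= 2)
            printf("%2u Rx queues -> RSSLENGTH field = %d\n",
                   nb_rx_queues, fls(nb_rx_queues - 1));
        return 0;
    }

For queue counts that are not a power of two the field simply rounds up (e.g. 5 queues -> 3 bits), with the redirection table deciding which real queue each hash bucket finally lands on.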
* [dpdk-dev] [PATCH v6 10/16] fm10k: add receive and tranmit function 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (8 preceding siblings ...) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 09/16] fm10k: add dev start/stop functions Chen Jing D(Mark) @ 2015-02-17 14:18 ` Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 11/16] fm10k: add PF RSS support Chen Jing D(Mark) ` (6 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-17 14:18 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_recv_pkts and fm10k_xmit_pkts functions. 2. Link app function pointer to actual fm10k recv/xmit functions. 3. Change Makefile to compile new file fm10k_rxtx.c Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/Makefile | 1 + lib/librte_pmd_fm10k/fm10k.h | 7 + lib/librte_pmd_fm10k/fm10k_ethdev.c | 2 + lib/librte_pmd_fm10k/fm10k_rxtx.c | 317 +++++++++++++++++++++++++++++++++++ 4 files changed, 327 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c diff --git a/lib/librte_pmd_fm10k/Makefile b/lib/librte_pmd_fm10k/Makefile index b24cc67..986f4ef 100644 --- a/lib/librte_pmd_fm10k/Makefile +++ b/lib/librte_pmd_fm10k/Makefile @@ -83,6 +83,7 @@ VPATH += $(RTE_SDK)/lib/librte_pmd_fm10k/base # all source are stored in SRCS-y # SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_ethdev.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_rxtx.c SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_pf.c SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_tlv.c diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index b23a3a6..b2ff10e 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -279,4 +279,11 @@ fm10k_addr_alignment_valid(struct rte_mbuf *mb) return 0; } + +/* Rx and Tx prototypes */ +uint16_t fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); + +uint16_t fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index b79badc..7451a44 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1244,6 +1244,8 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, PMD_INIT_FUNC_TRACE(); dev->dev_ops = &fm10k_eth_dev_ops; + dev->rx_pkt_burst = &fm10k_recv_pkts; + dev->tx_pkt_burst = &fm10k_xmit_pkts; /* only initialize in the primary process */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c new file mode 100644 index 0000000..022bfe6 --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c @@ -0,0 +1,317 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ +#include <rte_ethdev.h> +#include <rte_common.h> +#include "fm10k.h" +#include "base/fm10k_type.h" + +#ifdef RTE_PMD_PACKET_PREFETCH +#define rte_packet_prefetch(p) rte_prefetch1(p) +#else +#define rte_packet_prefetch(p) do {} while (0) +#endif + +static inline void dump_rxd(union fm10k_rx_desc *rxd) +{ +#ifndef RTE_LIBRTE_FM10K_DEBUG_RX + RTE_SET_USED(rxd); +#endif + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| GLORT | PKT HDR & TYPE |"); + PMD_RX_LOG(DEBUG, "| 0x%08x | 0x%08x |", rxd->d.glort, + rxd->d.data); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| VLAN & LEN | STATUS |"); + PMD_RX_LOG(DEBUG, "| 0x%08x | 0x%08x |", rxd->d.vlan_len, + rxd->d.staterr); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| RESERVED | RSS_HASH |"); + PMD_RX_LOG(DEBUG, "| 0x%08x | 0x%08x |", 0, rxd->d.rss); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| TIME TAG |"); + PMD_RX_LOG(DEBUG, "| 0x%016lx |", rxd->q.timestamp); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); +} + +static inline void +rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d) +{ + uint16_t ptype; + static const uint16_t pt_lut[] = { 0, + PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, + PKT_RX_IPV6_HDR, PKT_RX_IPV6_HDR_EXT, + 0, 0, 0 + }; + + if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK) + m->ol_flags |= PKT_RX_RSS_HASH; + + if (unlikely((d->d.staterr & + (FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)) == + (FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE))) + m->ol_flags |= PKT_RX_IP_CKSUM_BAD; + + if (unlikely((d->d.staterr & + (FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)) == + (FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E))) + m->ol_flags |= PKT_RX_L4_CKSUM_BAD; + + if (d->d.staterr & FM10K_RXD_STATUS_VEXT) + m->ol_flags |= PKT_RX_VLAN_PKT; + + if (unlikely(d->d.staterr & FM10K_RXD_STATUS_HBO)) + m->ol_flags |= PKT_RX_HBUF_OVERFLOW; + + if (unlikely(d->d.staterr & FM10K_RXD_STATUS_RXE)) + m->ol_flags |= PKT_RX_RECIP_ERR; + + ptype = (d->d.data & FM10K_RXD_PKTTYPE_MASK_L3) >> + FM10K_RXD_PKTTYPE_SHIFT; + m->ol_flags |= pt_lut[(uint8_t)ptype]; +} + +uint16_t +fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct rte_mbuf *mbuf; + union fm10k_rx_desc desc; + struct fm10k_rx_queue *q = rx_queue; + uint16_t count = 0; + int alloc = 0; + uint16_t next_dd; + int ret; + + next_dd = q->next_dd; + + nb_pkts = RTE_MIN(nb_pkts, q->alloc_thresh); + for (count = 0; 
count < nb_pkts; ++count) { + mbuf = q->sw_ring[next_dd]; + desc = q->hw_ring[next_dd]; + if (!(desc.d.staterr & FM10K_RXD_STATUS_DD)) + break; +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX + dump_rxd(&desc); +#endif + rte_pktmbuf_pkt_len(mbuf) = desc.w.length; + rte_pktmbuf_data_len(mbuf) = desc.w.length; + + mbuf->ol_flags = 0; +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE + rx_desc_to_ol_flags(mbuf, &desc); +#endif + + mbuf->hash.rss = desc.d.rss; + + rx_pkts[count] = mbuf; + if (++next_dd == q->nb_desc) { + next_dd = 0; + alloc = 1; + } + + /* Prefetch next mbuf while processing current one. */ + rte_prefetch0(q->sw_ring[next_dd]); + + /* + * When next RX descriptor is on a cache-line boundary, + * prefetch the next 4 RX descriptors and the next 8 pointers + * to mbufs. + */ + if ((next_dd & 0x3) == 0) { + rte_prefetch0(&q->hw_ring[next_dd]); + rte_prefetch0(&q->sw_ring[next_dd]); + } + } + + q->next_dd = next_dd; + + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { + ret = rte_mempool_get_bulk(q->mp, + (void **)&q->sw_ring[q->next_alloc], + q->alloc_thresh); + + if (unlikely(ret != 0)) { + uint8_t port = q->port_id; + PMD_RX_LOG(ERR, "Failed to alloc mbuf"); + /* + * Need to restore next_dd if we cannot allocate new + * buffers to replenish the old ones. + */ + q->next_dd = (q->next_dd + q->nb_desc - count) % + q->nb_desc; + rte_eth_devices[port].data->rx_mbuf_alloc_failed++; + return 0; + } + + for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) { + mbuf = q->sw_ring[q->next_alloc]; + + /* setup static mbuf fields */ + fm10k_pktmbuf_reset(mbuf, q->port_id); + + /* write descriptor */ + desc.q.pkt_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + desc.q.hdr_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + q->hw_ring[q->next_alloc] = desc; + } + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger); + q->next_trigger += q->alloc_thresh; + if (q->next_trigger >= q->nb_desc) { + q->next_trigger = q->alloc_thresh - 1; + q->next_alloc = 0; + } + } + + return count; +} + +static inline void tx_free_descriptors(struct fm10k_tx_queue *q) +{ + uint16_t next_rs, count = 0; + + next_rs = fifo_peek(&q->rs_tracker); + if (!(q->hw_ring[next_rs].flags & FM10K_TXD_FLAG_DONE)) + return; + + /* the DONE flag is set on this descriptor so remove the ID + * from the RS bit tracker and free the buffers */ + fifo_remove(&q->rs_tracker); + + /* wrap around? 
if so, free buffers from last_free up to but NOT + * including nb_desc */ + if (q->last_free > next_rs) { + count = q->nb_desc - q->last_free; + while (q->last_free < q->nb_desc) { + rte_pktmbuf_free_seg(q->sw_ring[q->last_free]); + q->sw_ring[q->last_free] = NULL; + ++q->last_free; + } + q->last_free = 0; + } + + /* adjust free descriptor count before the next loop */ + q->nb_free += count + (next_rs + 1 - q->last_free); + + /* free buffers from last_free, up to and including next_rs */ + while (q->last_free <= next_rs) { + rte_pktmbuf_free_seg(q->sw_ring[q->last_free]); + q->sw_ring[q->last_free] = NULL; + ++q->last_free; + } + + if (q->last_free == q->nb_desc) + q->last_free = 0; +} + +static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb) +{ + uint16_t last_id; + uint8_t flags; + + /* always set the LAST flag on the last descriptor used to + * transmit the packet */ + flags = FM10K_TXD_FLAG_LAST; + last_id = q->next_free + mb->nb_segs - 1; + if (last_id >= q->nb_desc) + last_id = last_id - q->nb_desc; + + /* but only set the RS flag on the last descriptor if rs_thresh + * descriptors will be used since the RS flag was last set */ + if ((q->nb_used + mb->nb_segs) >= q->rs_thresh) { + flags |= FM10K_TXD_FLAG_RS; + fifo_insert(&q->rs_tracker, last_id); + q->nb_used = 0; + } else { + q->nb_used = q->nb_used + mb->nb_segs; + } + + q->hw_ring[last_id].flags = flags; + q->nb_free -= mb->nb_segs; + + /* set checksum flags on first descriptor of packet. SCTP checksum + * offload is not supported, but we do not explicitly check for this + * case in favor of greatly simplified processing. */ + if (mb->ol_flags & (PKT_TX_IPV4_CSUM | PKT_TX_L4_MASK)) + q->hw_ring[q->next_free].flags |= FM10K_TXD_FLAG_CSUM; + + /* set vlan if requested */ + if (mb->ol_flags & PKT_TX_VLAN_PKT) + q->hw_ring[q->next_free].vlan = mb->vlan_tci; + + /* fill up the rings */ + for (; mb != NULL; mb = mb->next) { + q->sw_ring[q->next_free] = mb; + q->hw_ring[q->next_free].buffer_addr = + rte_cpu_to_le_64(MBUF_DMA_ADDR(mb)); + q->hw_ring[q->next_free].buflen = + rte_cpu_to_le_16(rte_pktmbuf_data_len(mb)); + if (++q->next_free == q->nb_desc) + q->next_free = 0; + } +} + +uint16_t +fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + struct fm10k_tx_queue *q = tx_queue; + struct rte_mbuf *mb; + uint16_t count; + + for (count = 0; count < nb_pkts; ++count) { + mb = tx_pkts[count]; + + /* running low on descriptors? try to free some... */ + if (q->nb_free < q->free_trigger) + tx_free_descriptors(q); + + /* make sure there are enough free descriptors to transmit the + * entire packet before doing anything */ + if (q->nb_free < mb->nb_segs) + break; + + /* sanity check to make sure the mbuf is valid */ + if ((mb->nb_segs == 0) || + ((mb->nb_segs > 1) && (mb->next == NULL))) + break; + + /* process the packet */ + tx_xmit_pkt(q, mb); + } + + /* update the tail pointer if any packets were processed */ + if (likely(count > 0)) + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_free); + + return count; +} -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
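From the application side the two burst functions are reached through rte_eth_rx_burst()/rte_eth_tx_burst(); the minimal forwarding sketch below is illustrative (port and queue numbers assumed). Two behaviours of the code above are worth keeping in mind: fm10k_recv_pkts() clamps the request to the queue's alloc_thresh, and fm10k_xmit_pkts() stops early when descriptors run out or an mbuf chain looks invalid, so the caller must free whatever was not transmitted.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST 32

    /* echo traffic from Rx queue 0 back out Tx queue 0 of the same port */
    static void
    fwd_loop(uint8_t port_id)
    {
        struct rte_mbuf *pkts[BURST];
        uint16_t nb_rx, nb_tx, i;

        for (;;) {
            nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST);
            if (nb_rx == 0)
                continue;

            nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);

            /* drop anything the Tx path did not accept */
            for (i = nb_tx; i < nb_rx; i++)
                rte_pktmbuf_free(pkts[i]);
        }
    }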
* [dpdk-dev] [PATCH v6 11/16] fm10k: add PF RSS support 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (9 preceding siblings ...) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 10/16] fm10k: add receive and tranmit function Chen Jing D(Mark) @ 2015-02-17 14:18 ` Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 12/16] fm10k: add scatter receive function Chen Jing D(Mark) ` (5 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-17 14:18 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Configure RSS in fm10k_dev_rx_init function. 2. Add fm10k_rss_hash_update and fm10k_rss_hash_conf_get to get and inquery RSS configuration. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 156 +++++++++++++++++++++++++++++++++++ 1 files changed, 156 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 7451a44..0f4d339 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -270,6 +270,78 @@ fm10k_dev_configure(struct rte_eth_dev *dev) return 0; } +static void +fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct rte_eth_conf *dev_conf = &dev->data->dev_conf; + uint32_t mrqc, *key, i, reta, j; + uint64_t hf; + +#define RSS_KEY_SIZE 40 + static uint8_t rss_intel_key[RSS_KEY_SIZE] = { + 0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2, + 0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0, + 0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4, + 0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C, + 0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA, + }; + + if (dev->data->nb_rx_queues == 1 || + dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS || + dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) + return; + + /* random key is rss_intel_key (default) or user provided (rss_key) */ + if (dev_conf->rx_adv_conf.rss_conf.rss_key == NULL) + key = (uint32_t *)rss_intel_key; + else + key = (uint32_t *)dev_conf->rx_adv_conf.rss_conf.rss_key; + + /* Now fill our hash function seeds, 4 bytes at a time */ + for (i = 0; i < RSS_KEY_SIZE / sizeof(*key); ++i) + FM10K_WRITE_REG(hw, FM10K_RSSRK(0, i), key[i]); + + /* + * Fill in redirection table + * The byte-swap is needed because NIC registers are in + * little-endian order. + */ + reta = 0; + for (i = 0, j = 0; i < FM10K_RETA_SIZE; i++, j++) { + if (j == dev->data->nb_rx_queues) + j = 0; + reta = (reta << CHAR_BIT) | j; + if ((i & 3) == 3) + FM10K_WRITE_REG(hw, FM10K_RETA(0, i >> 2), + rte_bswap32(reta)); + } + + /* + * Generate RSS hash based on packet types, TCP/UDP + * port numbers and/or IPv4/v6 src and dst addresses + */ + hf = dev_conf->rx_adv_conf.rss_conf.rss_hf; + mrqc = 0; + mrqc |= (hf & ETH_RSS_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? 
FM10K_MRQC_UDP_IPV6 : 0; + + if (mrqc == 0) { + PMD_INIT_LOG(ERR, "Specified RSS mode 0x%"PRIx64"is not" + "supported", hf); + return; + } + + FM10K_WRITE_REG(hw, FM10K_MRQC(0), mrqc); +} + static int fm10k_dev_tx_init(struct rte_eth_dev *dev) { @@ -359,6 +431,8 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_FLUSH(hw); } + /* Configure RSS if applicable */ + fm10k_dev_mq_rx_configure(dev); return 0; } @@ -1164,6 +1238,86 @@ fm10k_reta_query(struct rte_eth_dev *dev, return 0; } +static int +fm10k_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t *key = (uint32_t *)rss_conf->rss_key; + uint32_t mrqc; + uint64_t hf = rss_conf->rss_hf; + int i; + + PMD_INIT_FUNC_TRACE(); + + if (rss_conf->rss_key_len < FM10K_RSSRK_SIZE * + FM10K_RSSRK_ENTRIES_PER_REG) + return -EINVAL; + + if (hf == 0) + return -EINVAL; + + mrqc = 0; + mrqc |= (hf & ETH_RSS_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0; + + /* If the mapping doesn't fit any supported, return */ + if (mrqc == 0) + return -EINVAL; + + if (key != NULL) + for (i = 0; i < FM10K_RSSRK_SIZE; ++i) + FM10K_WRITE_REG(hw, FM10K_RSSRK(0, i), key[i]); + + FM10K_WRITE_REG(hw, FM10K_MRQC(0), mrqc); + + return 0; +} + +static int +fm10k_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t *key = (uint32_t *)rss_conf->rss_key; + uint32_t mrqc; + uint64_t hf; + int i; + + PMD_INIT_FUNC_TRACE(); + + if (rss_conf->rss_key_len < FM10K_RSSRK_SIZE * + FM10K_RSSRK_ENTRIES_PER_REG) + return -EINVAL; + + if (key != NULL) + for (i = 0; i < FM10K_RSSRK_SIZE; ++i) + key[i] = FM10K_READ_REG(hw, FM10K_RSSRK(0, i)); + + mrqc = FM10K_READ_REG(hw, FM10K_MRQC(0)); + hf = 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? ETH_RSS_IPV4_TCP : 0; + hf |= (mrqc & FM10K_MRQC_IPV4) ? ETH_RSS_IPV4 : 0; + hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6 : 0; + hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6_EX : 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP : 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP_EX : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? ETH_RSS_IPV4_UDP : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP_EX : 0; + + rss_conf->rss_hf = hf; + + return 0; +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -1232,6 +1386,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .tx_queue_release = fm10k_tx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, + .rss_hash_update = fm10k_rss_hash_update, + .rss_hash_conf_get = fm10k_rss_hash_conf_get, }; static int -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
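On the application side, the RSS path added here is taken only when more than one RX queue is configured, mq_mode is ETH_MQ_RX_RSS and rss_hf is non-zero. A sketch of such a configuration (configure_rss is a hypothetical helper; passing a NULL rss_key makes the driver fall back to the built-in rss_intel_key):

#include <rte_ethdev.h>

static int
configure_rss(uint8_t port, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			.mq_mode = ETH_MQ_RX_RSS,
		},
		.rx_adv_conf = {
			.rss_conf = {
				.rss_key = NULL, /* use the default key */
				.rss_hf  = ETH_RSS_IPV4 | ETH_RSS_IPV4_TCP |
					   ETH_RSS_IPV4_UDP,
			},
		},
	};

	/* the hash flags end up in FM10K_MRQC(0) and the key in
	 * FM10K_RSSRK during fm10k_dev_rx_init() */
	return rte_eth_dev_configure(port, nb_rxq, nb_txq, &conf);
}
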
* [dpdk-dev] [PATCH v6 12/16] fm10k: add scatter receive function 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (10 preceding siblings ...) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 11/16] fm10k: add PF RSS support Chen Jing D(Mark) @ 2015-02-17 14:18 ` Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 13/16] fm10k: add function to set vlan Chen Jing D(Mark) ` (4 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-17 14:18 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_recv_scattered_pkts function to receive jumbo frame and multi-segment packets. 2. Configure correct receive function in rx_init and dev_init. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k.h | 3 + lib/librte_pmd_fm10k/fm10k_ethdev.c | 15 ++++ lib/librte_pmd_fm10k/fm10k_rxtx.c | 145 +++++++++++++++++++++++++++++++++++ 3 files changed, 163 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index b2ff10e..0e31796 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -284,6 +284,9 @@ fm10k_addr_alignment_valid(struct rte_mbuf *mb) uint16_t fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +uint16_t fm10k_recv_scattered_pkts(void *rx_queue, + struct rte_mbuf **rx_pkts, uint16_t nb_pkts); + uint16_t fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 0f4d339..923f23c 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -423,6 +423,13 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_REG(hw, FM10K_SRRCTL(i), buf_size >> FM10K_SRRCTL_BSIZEPKT_SHIFT); + /* It adds dual VLAN length for supporting dual VLAN */ + if ((dev->data->dev_conf.rxmode.max_rx_pkt_len + + 2 * FM10K_VLAN_TAG_SIZE) > buf_size){ + dev->data->scattered_rx = 1; + dev->rx_pkt_burst = fm10k_recv_scattered_pkts; + } + /* Enable drop on empty, it's RO for VF */ if (hw->mac.type == fm10k_mac_pf && rxq->drop_en) rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY; @@ -431,6 +438,11 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_FLUSH(hw); } + if (dev->data->dev_conf.rxmode.enable_scatter) { + dev->rx_pkt_burst = fm10k_recv_scattered_pkts; + dev->data->scattered_rx = 1; + } + /* Configure RSS if applicable */ fm10k_dev_mq_rx_configure(dev); return 0; @@ -1403,6 +1415,9 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, dev->rx_pkt_burst = &fm10k_recv_pkts; dev->tx_pkt_burst = &fm10k_xmit_pkts; + if (dev->data->scattered_rx) + dev->rx_pkt_burst = &fm10k_recv_scattered_pkts; + /* only initialize in the primary process */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c index 022bfe6..bf1c537 100644 --- a/lib/librte_pmd_fm10k/fm10k_rxtx.c +++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c @@ -195,6 +195,151 @@ fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, return count; } +uint16_t +fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct rte_mbuf *mbuf; + union fm10k_rx_desc desc; + struct fm10k_rx_queue *q = rx_queue; + uint16_t count = 0; + uint16_t nb_rcv, nb_seg; + int alloc = 0; + uint16_t next_dd; + struct 
rte_mbuf *first_seg = q->pkt_first_seg; + struct rte_mbuf *last_seg = q->pkt_last_seg; + int ret; + + next_dd = q->next_dd; + nb_rcv = 0; + + nb_seg = RTE_MIN(nb_pkts, q->alloc_thresh); + for (count = 0; count < nb_seg; count++) { + mbuf = q->sw_ring[next_dd]; + desc = q->hw_ring[next_dd]; + if (!(desc.d.staterr & FM10K_RXD_STATUS_DD)) + break; +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX + dump_rxd(&desc); +#endif + + if (++next_dd == q->nb_desc) { + next_dd = 0; + alloc = 1; + } + + /* Prefetch next mbuf while processing current one. */ + rte_prefetch0(q->sw_ring[next_dd]); + + /* + * When next RX descriptor is on a cache-line boundary, + * prefetch the next 4 RX descriptors and the next 8 pointers + * to mbufs. + */ + if ((next_dd & 0x3) == 0) { + rte_prefetch0(&q->hw_ring[next_dd]); + rte_prefetch0(&q->sw_ring[next_dd]); + } + + /* Fill data length */ + rte_pktmbuf_data_len(mbuf) = desc.w.length; + + /* + * If this is the first buffer of the received packet, + * set the pointer to the first mbuf of the packet and + * initialize its context. + * Otherwise, update the total length and the number of segments + * of the current scattered packet, and update the pointer to + * the last mbuf of the current packet. + */ + if (!first_seg) { + first_seg = mbuf; + first_seg->pkt_len = desc.w.length; + } else { + first_seg->pkt_len = + (uint16_t)(first_seg->pkt_len + + rte_pktmbuf_data_len(mbuf)); + first_seg->nb_segs++; + last_seg->next = mbuf; + } + + /* + * If this is not the last buffer of the received packet, + * update the pointer to the last mbuf of the current scattered + * packet and continue to parse the RX ring. + */ + if (!(desc.d.staterr & FM10K_RXD_STATUS_EOP)) { + last_seg = mbuf; + continue; + } + + first_seg->ol_flags = 0; +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE + rx_desc_to_ol_flags(first_seg, &desc); +#endif + first_seg->hash.rss = desc.d.rss; + + /* Prefetch data of first segment, if configured to do so. */ + rte_packet_prefetch((char *)first_seg->buf_addr + + first_seg->data_off); + + /* + * Store the mbuf address into the next entry of the array + * of returned packets. + */ + rx_pkts[nb_rcv++] = first_seg; + + /* + * Setup receipt context for a new packet. + */ + first_seg = NULL; + } + + q->next_dd = next_dd; + + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { + ret = rte_mempool_get_bulk(q->mp, + (void **)&q->sw_ring[q->next_alloc], + q->alloc_thresh); + + if (unlikely(ret != 0)) { + uint8_t port = q->port_id; + PMD_RX_LOG(ERR, "Failed to alloc mbuf"); + /* + * Need to restore next_dd if we cannot allocate new + * buffers to replenish the old ones. + */ + q->next_dd = (q->next_dd + q->nb_desc - count) % + q->nb_desc; + rte_eth_devices[port].data->rx_mbuf_alloc_failed++; + return 0; + } + + for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) { + mbuf = q->sw_ring[q->next_alloc]; + + /* setup static mbuf fields */ + fm10k_pktmbuf_reset(mbuf, q->port_id); + + /* write descriptor */ + desc.q.pkt_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + desc.q.hdr_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + q->hw_ring[q->next_alloc] = desc; + } + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger); + q->next_trigger += q->alloc_thresh; + if (q->next_trigger >= q->nb_desc) { + q->next_trigger = q->alloc_thresh - 1; + q->next_alloc = 0; + } + } + + q->pkt_first_seg = first_seg; + q->pkt_last_seg = last_seg; + + return nb_rcv; +} + static inline void tx_free_descriptors(struct fm10k_tx_queue *q) { uint16_t next_rs, count = 0; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
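The scattered receive path is selected either when the application sets enable_scatter explicitly or when max_rx_pkt_len plus room for two VLAN tags no longer fits in a single RX buffer. A sketch of a jumbo-frame configuration that would make fm10k_dev_rx_init() install fm10k_recv_scattered_pkts (configure_jumbo is a hypothetical helper and the 9000-byte frame size is only an example):

#include <rte_ethdev.h>

static int
configure_jumbo(uint8_t port)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			.jumbo_frame    = 1,    /* accept frames above the default size */
			.enable_scatter = 1,    /* force the multi-segment RX path */
			.max_rx_pkt_len = 9000, /* larger than one mbuf data buffer */
		},
	};

	return rte_eth_dev_configure(port, 1, 1, &conf);
}
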
* [dpdk-dev] [PATCH v6 13/16] fm10k: add function to set vlan 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (11 preceding siblings ...) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 12/16] fm10k: add scatter receive function Chen Jing D(Mark) @ 2015-02-17 14:18 ` Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 14/16] fm10k: add SRIOV-VF support Chen Jing D(Mark) ` (3 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-17 14:18 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_vlan_filter_set to set vlan. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 15 +++++++++++++++ 1 files changed, 15 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 923f23c..12394e5 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -787,6 +787,20 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev, } +static int +fm10k_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + /* @todo - add support for the VF */ + if (hw->mac.type != fm10k_mac_pf) + return -ENOTSUP; + + return fm10k_update_vlan(hw, vlan_id, 0, on); +} + static inline int check_nb_desc(uint16_t min, uint16_t max, uint16_t mult, uint16_t request) { @@ -1388,6 +1402,7 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, .dev_infos_get = fm10k_dev_infos_get, + .vlan_filter_set = fm10k_vlan_filter_set, .rx_queue_start = fm10k_dev_rx_queue_start, .rx_queue_stop = fm10k_dev_rx_queue_stop, .tx_queue_start = fm10k_dev_tx_queue_start, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
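Applications reach the new handler through the generic ethdev VLAN API; the ethdev layer normally forwards the call only when hw_vlan_filter was enabled in rxmode at configure time. A small hypothetical caller:

#include <rte_ethdev.h>

static int
allow_vlan(uint8_t port, uint16_t vlan_id)
{
	/* ends up in fm10k_vlan_filter_set(), i.e. fm10k_update_vlan();
	 * returns -ENOTSUP on a VF until VF support is added */
	return rte_eth_dev_vlan_filter(port, vlan_id, 1);
}
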
* [dpdk-dev] [PATCH v6 14/16] fm10k: add SRIOV-VF support 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (12 preceding siblings ...) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 13/16] fm10k: add function to set vlan Chen Jing D(Mark) @ 2015-02-17 14:18 ` Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 15/16] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark) ` (2 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-17 14:18 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> The fm10k pmd driver supports both PF and VF devices with a single copy of code. The NIC maps registers that serve the same function in PF and VF to the same PCI I/O address, so the PF and VF drivers use the same addresses to access their own registers and the hardware translates each request to the correct unit. For functionality that is unique to the PF, the driver checks the current device type and behaves accordingly. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 12394e5..8261fe8 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1562,6 +1562,7 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, */ static struct rte_pci_id pci_id_fm10k_map[] = { #define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, +#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, #include "rte_pci_dev_ids.h" { .vendor_id = 0, /* sentinel */ }, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
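The "check the current device type" behaviour the commit message describes follows one pattern throughout fm10k_ethdev.c: shared register accessors work for both function types, while PF-only features bail out early on a VF. An illustrative sketch (fm10k_pf_only_feature is a made-up name, not a function in the patch):

#include <errno.h>
#include <rte_ethdev.h>
#include "fm10k.h"

static int
fm10k_pf_only_feature(struct rte_eth_dev *dev)
{
	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);

	if (hw->mac.type != fm10k_mac_pf)
		return -ENOTSUP; /* VF: feature not supported (yet) */

	/* ... PF-specific register programming would go here ... */
	return 0;
}
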
* [dpdk-dev] [PATCH v6 15/16] fm10k: add PF and VF interrupt handling function 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (13 preceding siblings ...) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 14/16] fm10k: add SRIOV-VF support Chen Jing D(Mark) @ 2015-02-17 14:18 ` Chen Jing D(Mark) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 16/16] maintainers: claim for fm10k review Chen Jing D(Mark) 2015-02-18 0:13 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Thomas Monjalon 16 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-17 14:18 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add functions to enable PF/VF interrupt. 2. Add function to process error message passed from interrupt. 2. Add 2 interrupt handling functions, one for PF and one for VF. 2. Enable interrupt after completing initialization of NIC. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 268 +++++++++++++++++++++++++++++++++++ 1 files changed, 268 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 8261fe8..38f6925 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1344,6 +1344,256 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev, return 0; } +static void +fm10k_dev_enable_intr_pf(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t int_map = FM10K_INT_MAP_IMMEDIATE; + + /* Bind all local non-queue interrupt to vector 0 */ + int_map |= 0; + + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_Mailbox), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_PCIeFault), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SwitchUpDown), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SwitchEvent), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SRAM), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_VFLR), int_map); + + /* Enable misc causes */ + FM10K_WRITE_REG(hw, FM10K_EIMR, FM10K_EIMR_ENABLE(PCA_FAULT) | + FM10K_EIMR_ENABLE(THI_FAULT) | + FM10K_EIMR_ENABLE(FUM_FAULT) | + FM10K_EIMR_ENABLE(MAILBOX) | + FM10K_EIMR_ENABLE(SWITCHREADY) | + FM10K_EIMR_ENABLE(SWITCHNOTREADY) | + FM10K_EIMR_ENABLE(SRAMERROR) | + FM10K_EIMR_ENABLE(VFLR)); + + /* Enable ITR 0 */ + FM10K_WRITE_REG(hw, FM10K_ITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + FM10K_WRITE_FLUSH(hw); +} + +static void +fm10k_dev_enable_intr_vf(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t int_map = FM10K_INT_MAP_IMMEDIATE; + + /* Bind all local non-queue interrupt to vector 0 */ + int_map |= 0; + + /* Only INT 0 available, other 15 are reserved. 
*/ + FM10K_WRITE_REG(hw, FM10K_VFINT_MAP, int_map); + + /* Enable ITR 0 */ + FM10K_WRITE_REG(hw, FM10K_VFITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + FM10K_WRITE_FLUSH(hw); +} + +static int +fm10k_dev_handle_fault(struct fm10k_hw *hw, uint32_t eicr) +{ + struct fm10k_fault fault; + int err; + const char *estr = "Unknown error"; + + /* Process PCA fault */ + if (eicr & FM10K_EIMR_PCA_FAULT) { + err = fm10k_get_fault(hw, FM10K_PCA_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case PCA_NO_FAULT: + estr = "PCA_NO_FAULT"; break; + case PCA_UNMAPPED_ADDR: + estr = "PCA_UNMAPPED_ADDR"; break; + case PCA_BAD_QACCESS_PF: + estr = "PCA_BAD_QACCESS_PF"; break; + case PCA_BAD_QACCESS_VF: + estr = "PCA_BAD_QACCESS_VF"; break; + case PCA_MALICIOUS_REQ: + estr = "PCA_MALICIOUS_REQ"; break; + case PCA_POISONED_TLP: + estr = "PCA_POISONED_TLP"; break; + case PCA_TLP_ABORT: + estr = "PCA_TLP_ABORT"; break; + default: + goto error; + } + PMD_INIT_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIx64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + /* Process THI fault */ + if (eicr & FM10K_EIMR_THI_FAULT) { + err = fm10k_get_fault(hw, FM10K_THI_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case THI_NO_FAULT: + estr = "THI_NO_FAULT"; break; + case THI_MAL_DIS_Q_FAULT: + estr = "THI_MAL_DIS_Q_FAULT"; break; + default: + goto error; + } + PMD_INIT_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIx64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + /* Process FUM fault */ + if (eicr & FM10K_EIMR_FUM_FAULT) { + err = fm10k_get_fault(hw, FM10K_FUM_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case FUM_NO_FAULT: + estr = "FUM_NO_FAULT"; break; + case FUM_UNMAPPED_ADDR: + estr = "FUM_UNMAPPED_ADDR"; break; + case FUM_POISONED_TLP: + estr = "FUM_POISONED_TLP"; break; + case FUM_BAD_VF_QACCESS: + estr = "FUM_BAD_VF_QACCESS"; break; + case FUM_ADD_DECODE_ERR: + estr = "FUM_ADD_DECODE_ERR"; break; + case FUM_RO_ERROR: + estr = "FUM_RO_ERROR"; break; + case FUM_QPRC_CRC_ERROR: + estr = "FUM_QPRC_CRC_ERROR"; break; + case FUM_CSR_TIMEOUT: + estr = "FUM_CSR_TIMEOUT"; break; + case FUM_INVALID_TYPE: + estr = "FUM_INVALID_TYPE"; break; + case FUM_INVALID_LENGTH: + estr = "FUM_INVALID_LENGTH"; break; + case FUM_INVALID_BE: + estr = "FUM_INVALID_BE"; break; + case FUM_INVALID_ALIGN: + estr = "FUM_INVALID_ALIGN"; break; + default: + goto error; + } + PMD_INIT_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIx64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + if (estr) + return 0; + return 0; +error: + PMD_INIT_LOG(ERR, "Failed to handle fault event."); + return err; +} + +/** + * PF interrupt handler triggered by NIC for handling specific interrupt. + * + * @param handle + * Pointer to interrupt handle. + * @param param + * The address of parameter (struct rte_eth_dev *) regsitered before. 
+ * + * @return + * void + */ +static void +fm10k_dev_interrupt_handler_pf( + __rte_unused struct rte_intr_handle *handle, + void *param) +{ + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t cause, status; + + if (hw->mac.type != fm10k_mac_pf) + return; + + cause = FM10K_READ_REG(hw, FM10K_EICR); + + /* Handle PCI fault cases */ + if (cause & FM10K_EICR_FAULT_MASK) { + PMD_INIT_LOG(ERR, "INT: find fault!"); + fm10k_dev_handle_fault(hw, cause); + } + + /* Handle switch up/down */ + if (cause & FM10K_EICR_SWITCHNOTREADY) + PMD_INIT_LOG(ERR, "INT: Switch is not ready"); + + if (cause & FM10K_EICR_SWITCHREADY) + PMD_INIT_LOG(INFO, "INT: Switch is ready"); + + /* Handle mailbox message */ + fm10k_mbx_lock(hw); + hw->mbx.ops.process(hw, &hw->mbx); + fm10k_mbx_unlock(hw); + + /* Handle SRAM error */ + if (cause & FM10K_EICR_SRAMERROR) { + PMD_INIT_LOG(ERR, "INT: SRAM error on PEP"); + + status = FM10K_READ_REG(hw, FM10K_SRAM_IP); + /* Write to clear pending bits */ + FM10K_WRITE_REG(hw, FM10K_SRAM_IP, status); + + /* Todo: print out error message after shared code updates */ + } + + /* Clear these 3 events if having any */ + cause &= FM10K_EICR_SWITCHNOTREADY | FM10K_EICR_MAILBOX | + FM10K_EICR_SWITCHREADY; + if (cause) + FM10K_WRITE_REG(hw, FM10K_EICR, cause); + + /* Re-enable interrupt from device side */ + FM10K_WRITE_REG(hw, FM10K_ITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + /* Re-enable interrupt from host side */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); +} + +/** + * VF interrupt handler triggered by NIC for handling specific interrupt. + * + * @param handle + * Pointer to interrupt handle. + * @param param + * The address of parameter (struct rte_eth_dev *) regsitered before. + * + * @return + * void + */ +static void +fm10k_dev_interrupt_handler_vf( + __rte_unused struct rte_intr_handle *handle, + void *param) +{ + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + if (hw->mac.type != fm10k_mac_vf) + return; + + /* Handle mailbox message if lock is acquired */ + fm10k_mbx_lock(hw); + hw->mbx.ops.process(hw, &hw->mbx); + fm10k_mbx_unlock(hw); + + /* Re-enable interrupt from device side */ + FM10K_WRITE_REG(hw, FM10K_VFITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + /* Re-enable interrupt from host side */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -1524,6 +1774,21 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, return -EIO; } + /*PF/VF has different interrupt handling mechanism */ + if (hw->mac.type == fm10k_mac_pf) { + /* register callback func to eal lib */ + rte_intr_callback_register(&(dev->pci_dev->intr_handle), + fm10k_dev_interrupt_handler_pf, (void *)dev); + + /* enable MISC interrupt */ + fm10k_dev_enable_intr_pf(dev); + } else { /* VF */ + rte_intr_callback_register(&(dev->pci_dev->intr_handle), + fm10k_dev_interrupt_handler_vf, (void *)dev); + + fm10k_dev_enable_intr_vf(dev); + } + /* * Below function will trigger operations on mailbox, acquire lock to * avoid race condition from interrupt handler. 
Operations on mailbox @@ -1553,6 +1818,9 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, fm10k_mbx_unlock(hw); + /* enable uio intr after callback registered */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); + return 0; } -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v6 16/16] maintainers: claim for fm10k review 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (14 preceding siblings ...) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 15/16] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark) @ 2015-02-17 14:18 ` Chen Jing D(Mark) 2015-02-18 0:13 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Thomas Monjalon 16 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-17 14:18 UTC (permalink / raw) To: dev From: "Chen Jing D(Mark)" <jing.d.chen@intel.com> Claim for fm10k polling mode driver review. Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- MAINTAINERS | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diff --git a/MAINTAINERS b/MAINTAINERS index a771fa3..e7a425b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -213,6 +213,10 @@ Intel i40e M: Helin Zhang <helin.zhang@intel.com> F: lib/librte_pmd_i40e/ +Intel fm10k +M: Jing Chen <jing.d.chen@intel.com> +F: lib/librte_pmd_fm10k/ + RedHat virtio M: Changchun Ouyang <changchun.ouyang@intel.com> F: lib/librte_pmd_virtio/ -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (15 preceding siblings ...) 2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 16/16] maintainers: claim for fm10k review Chen Jing D(Mark) @ 2015-02-18 0:13 ` Thomas Monjalon 16 siblings, 0 replies; 147+ messages in thread From: Thomas Monjalon @ 2015-02-18 0:13 UTC (permalink / raw) To: Chen Jing D(Mark); +Cc: dev 2015-02-17 22:18, Chen Jing D: > From: "Chen Jing D(Mark)" <jing.d.chen@intel.com> > > The patch set add poll mode driver for the host interface of Intel > Ethernet Switch FM10000 Series of silicons, which integrate NIC and > switch functionalities. The patch set include below features: > > 1. Basic RX/TX functions for PF/VF. > 2. Interrupt handling mechanism for PF/VF. > 3. per queue start/stop functions for PF/VF. > 4. Mailbox handling between PF/VF and PF/Switch Manager. > 5. Receive Side Scaling (RSS) for PF/VF. > 6. Scatter receive function for PF/VF. > 7. reta update/query for PF/VF. > 8. VLAN filter set for PF. > 9. Link status query for PF/VF. > > Change in v6: > - Merge ABI patch with fm10k driver regsiter patch. > - Fix typo. > - Rework comments. > - Minor adjustment on commit log. > - Increase error variable after mbuf allocation failed. > > Change in v5: > - Add sanity check for mbuf allocation. > - Add a new patch to claim fm10k driver review > - Change commit log. > - Add unlikely in func rx_desc_to_ol_flags to gain performance > - Add a new patch to add ABI version > > Change in v4: > - Change commit log to remove improper words. > > Changes in v3: > - Update base driver. > - Define several macros to pass base driver compile. > > Changes in v2: > - Merge 3 patches into 1 to configure fm10k compile environment. > - Rework on log code to follow style in ixgbe. > - Rework log message, remove redundant '\n' > - Update Copyright year from "2014" to "2015" > - Change base driver directory name from SHARED to base > - Add more description in log for patch "add PF and VF interrupt" > - Merge 2 patches into 1 to register fm10k driver > - Define macro to replace numeric for lower 32-bit mask. 
> > Chen Jing D(Mark) (1): > maintainers: claim for fm10k review > > Jeff Shaw (15): > fm10k: add base driver > eal: add fm10k device id > fm10k: register fm10k pmd PF driver > config: change config files to add fm10k into compile > fm10k: add reta update/requery functions > fm10k: add Rx queue setup/release function > fm10k: add Tx queue setup/release function > fm10k: add Rx/Tx single queue start/stop function > fm10k: add dev start/stop functions > fm10k: add receive and tranmit function > fm10k: add PF RSS support > fm10k: add scatter receive function > fm10k: add function to set vlan > fm10k: add SRIOV-VF support > fm10k: add PF and VF interrupt handling function > > MAINTAINERS | 4 + > config/common_bsdapp | 11 + > config/common_linuxapp | 11 + > lib/Makefile | 1 + > lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 + > lib/librte_pmd_fm10k/Makefile | 100 + > lib/librte_pmd_fm10k/base/fm10k_api.c | 341 ++++ > lib/librte_pmd_fm10k/base/fm10k_api.h | 61 + > lib/librte_pmd_fm10k/base/fm10k_common.c | 572 ++++++ > lib/librte_pmd_fm10k/base/fm10k_common.h | 52 + > lib/librte_pmd_fm10k/base/fm10k_mbx.c | 2185 +++++++++++++++++++++++ > lib/librte_pmd_fm10k/base/fm10k_mbx.h | 329 ++++ > lib/librte_pmd_fm10k/base/fm10k_osdep.h | 148 ++ > lib/librte_pmd_fm10k/base/fm10k_pf.c | 1992 +++++++++++++++++++++ > lib/librte_pmd_fm10k/base/fm10k_pf.h | 155 ++ > lib/librte_pmd_fm10k/base/fm10k_tlv.c | 914 ++++++++++ > lib/librte_pmd_fm10k/base/fm10k_tlv.h | 199 ++ > lib/librte_pmd_fm10k/base/fm10k_type.h | 937 ++++++++++ > lib/librte_pmd_fm10k/base/fm10k_vf.c | 641 +++++++ > lib/librte_pmd_fm10k/base/fm10k_vf.h | 91 + > lib/librte_pmd_fm10k/fm10k.h | 292 +++ > lib/librte_pmd_fm10k/fm10k_ethdev.c | 1867 +++++++++++++++++++ > lib/librte_pmd_fm10k/fm10k_logs.h | 78 + > lib/librte_pmd_fm10k/fm10k_rxtx.c | 462 +++++ > lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map | 4 + > mk/rte.app.mk | 4 + Pulled from next/dpdk-fm10k, thanks. ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v5 02/17] eal: add fm10k device id 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 01/17] fm10k: add base driver Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 03/17] fm10k: register fm10k pmd PF driver Chen Jing D(Mark) ` (16 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k device ID list into rte_pci_dev_ids.h. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 ++++++++++++++++++++++ 1 files changed, 22 insertions(+), 0 deletions(-) diff --git a/lib/librte_eal/common/include/rte_pci_dev_ids.h b/lib/librte_eal/common/include/rte_pci_dev_ids.h index c922de9..f54800e 100644 --- a/lib/librte_eal/common/include/rte_pci_dev_ids.h +++ b/lib/librte_eal/common/include/rte_pci_dev_ids.h @@ -132,6 +132,14 @@ #define RTE_PCI_DEV_ID_DECL_VMXNET3(vend, dev) #endif +#ifndef RTE_PCI_DEV_ID_DECL_FM10K +#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) +#endif + +#ifndef RTE_PCI_DEV_ID_DECL_FM10KVF +#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) +#endif + #ifndef PCI_VENDOR_ID_INTEL /** Vendor ID used by Intel devices */ #define PCI_VENDOR_ID_INTEL 0x8086 @@ -474,6 +482,12 @@ RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_QSFP_B) RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_QSFP_C) RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_10G_BASE_T) +/*************** Physical FM10K devices from fm10k_type.h ***************/ + +#define FM10K_DEV_ID_PF 0x15A4 + +RTE_PCI_DEV_ID_DECL_FM10K(PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_PF) + /****************** Virtual IGB devices from e1000_hw.h ******************/ #define E1000_DEV_ID_82576_VF 0x10CA @@ -526,6 +540,12 @@ RTE_PCI_DEV_ID_DECL_VIRTIO(PCI_VENDOR_ID_QUMRANET, QUMRANET_DEV_ID_VIRTIO) RTE_PCI_DEV_ID_DECL_VMXNET3(PCI_VENDOR_ID_VMWARE, VMWARE_DEV_ID_VMXNET3) +/*************** Virtual FM10K devices from fm10k_type.h ***************/ + +#define FM10K_DEV_ID_VF 0x15A5 + +RTE_PCI_DEV_ID_DECL_FM10KVF(PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_VF) + /* * Undef all RTE_PCI_DEV_ID_DECL_* here. */ @@ -538,3 +558,5 @@ RTE_PCI_DEV_ID_DECL_VMXNET3(PCI_VENDOR_ID_VMWARE, VMWARE_DEV_ID_VMXNET3) #undef RTE_PCI_DEV_ID_DECL_I40EVF #undef RTE_PCI_DEV_ID_DECL_VIRTIO #undef RTE_PCI_DEV_ID_DECL_VMXNET3 +#undef RTE_PCI_DEV_ID_DECL_FM10K +#undef RTE_PCI_DEV_ID_DECL_FM10KVF \ No newline at end of file -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
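These DECL macros are consumed by a PMD that defines them before including rte_pci_dev_ids.h, so the list expands into that driver's PCI ID table. After preprocessing, the fm10k table registered later in this series is roughly equivalent to the following sketch:

#include <rte_pci.h>

static struct rte_pci_id pci_id_fm10k_map[] = {
	{ RTE_PCI_DEVICE(0x8086, 0x15A4) }, /* PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_PF */
	{ RTE_PCI_DEVICE(0x8086, 0x15A5) }, /* PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_VF */
	{ .vendor_id = 0, /* sentinel */ },
};
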
* [dpdk-dev] [PATCH v5 03/17] fm10k: register fm10k pmd PF driver 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 01/17] fm10k: add base driver Chen Jing D(Mark) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 02/17] eal: add fm10k device id Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 04/17] Change config files to add fm10k into compile Chen Jing D(Mark) ` (15 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add init function to scan and initialize fm10k PF device. 2. Add implementation to register fm10k pmd PF driver. 3. Add 3 functions fm10k_dev_configure, fm10k_stats_get and fm10k_stats_get. 4. Add fm10k.h to define macros and basic data structure. 5. Add fm10k_logs.h to control log message output. 6. Add Makefile. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/Makefile | 95 ++++++++++ lib/librte_pmd_fm10k/fm10k.h | 224 +++++++++++++++++++++++ lib/librte_pmd_fm10k/fm10k_ethdev.c | 343 +++++++++++++++++++++++++++++++++++ lib/librte_pmd_fm10k/fm10k_logs.h | 78 ++++++++ 4 files changed, 740 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/Makefile create mode 100644 lib/librte_pmd_fm10k/fm10k.h create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h diff --git a/lib/librte_pmd_fm10k/Makefile b/lib/librte_pmd_fm10k/Makefile new file mode 100644 index 0000000..1da84e9 --- /dev/null +++ b/lib/librte_pmd_fm10k/Makefile @@ -0,0 +1,95 @@ +# BSD LICENSE +# +# Copyright(c) 2013-2015 Intel Corporation. All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# +# library name +# +LIB = librte_pmd_fm10k.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +ifeq ($(CC), icc) +# +# CFLAGS for icc +# +CFLAGS_BASE_DRIVER = -wd174 -wd593 -wd869 -wd981 -wd2259 + +else ifeq ($(CC), clang) +# +## CFLAGS for clang +# +CFLAGS_BASE_DRIVER = -Wno-unused-parameter -Wno-unused-value +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing -Wno-format-extra-args +CFLAGS_BASE_DRIVER += -Wno-unused-variable -Wno-unused-but-set-variable +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers + +else +# +# CFLAGS for gcc +# +ifneq ($(shell test $(GCC_MAJOR_VERSION) -le 4 -a $(GCC_MINOR_VERSION) -le 3 && echo 1), 1) +CFLAGS += -Wno-deprecated +endif +CFLAGS_BASE_DRIVER = -Wno-unused-parameter -Wno-unused-value +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing -Wno-format-extra-args +CFLAGS_BASE_DRIVER += -Wno-unused-variable -Wno-unused-but-set-variable +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers +endif + +# +# Add extra flags for base driver source files to disable warnings in them +# +BASE_DRIVER_OBJS=$(patsubst %.c,%.o,$(notdir $(wildcard $(RTE_SDK)/lib/librte_pmd_fm10k/base/*.c))) +$(foreach obj, $(BASE_DRIVER_OBJS), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER))) + +VPATH += $(RTE_SDK)/lib/librte_pmd_fm10k/base + +# +# all source are stored in SRCS-y +# +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_ethdev.c + +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_pf.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_tlv.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_common.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_mbx.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_vf.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_api.c + +# this lib depends upon: +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_eal lib/librte_ether +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_mempool lib/librte_mbuf +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_net lib/librte_malloc + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h new file mode 100644 index 0000000..1468040 --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -0,0 +1,224 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _FM10K_H_ +#define _FM10K_H_ + +#include <stdint.h> +#include <rte_mbuf.h> +#include <rte_mempool.h> +#include <rte_malloc.h> +#include <rte_spinlock.h> +#include "fm10k_logs.h" +#include "base/fm10k_type.h" + +/* descriptor ring base addresses must be aligned to the following */ +#define FM10K_ALIGN_RX_DESC 128 +#define FM10K_ALIGN_TX_DESC 128 + +/* The maximum packet size that FM10K supports */ +#define FM10K_MAX_PKT_SIZE (15 * 1024) + +/* Minimum size of RX buffer FM10K supported */ +#define FM10K_MIN_RX_BUF_SIZE 256 + +/* The maximum of SRIOV VFs per port supported */ +#define FM10K_MAX_VF_NUM 64 + +/* number of descriptors must be a multiple of the following */ +#define FM10K_MULT_RX_DESC FM10K_REQ_RX_DESCRIPTOR_MULTIPLE +#define FM10K_MULT_TX_DESC FM10K_REQ_TX_DESCRIPTOR_MULTIPLE + +/* maximum size of descriptor rings */ +#define FM10K_MAX_RX_RING_SZ (512 * 1024) +#define FM10K_MAX_TX_RING_SZ (512 * 1024) + +/* minimum and maximum number of descriptors in a ring */ +#define FM10K_MIN_RX_DESC 32 +#define FM10K_MIN_TX_DESC 32 +#define FM10K_MAX_RX_DESC (FM10K_MAX_RX_RING_SZ / sizeof(union fm10k_rx_desc)) +#define FM10K_MAX_TX_DESC (FM10K_MAX_TX_RING_SZ / sizeof(struct fm10k_tx_desc)) + +/* + * byte aligment for HW RX data buffer + * Datasheet requires RX buffer addresses shall either be 512-byte aligned or + * be 8-byte aligned but without crossing host memory pages (4KB alignment + * boundaries). Satisfy first option. + */ +#define FM10K_RX_DATABUF_ALIGN 512 + +/* + * threshold default, min, max, and divisor constraints + * the configured values must satisfy the following: + * MIN <= value <= MAX + * DIV % value == 0 + */ +#define FM10K_RX_FREE_THRESH_DEFAULT(rxq) 32 +#define FM10K_RX_FREE_THRESH_MIN(rxq) 1 +#define FM10K_RX_FREE_THRESH_MAX(rxq) ((rxq)->nb_desc - 1) +#define FM10K_RX_FREE_THRESH_DIV(rxq) ((rxq)->nb_desc) + +#define FM10K_TX_FREE_THRESH_DEFAULT(txq) 32 +#define FM10K_TX_FREE_THRESH_MIN(txq) 1 +#define FM10K_TX_FREE_THRESH_MAX(txq) ((txq)->nb_desc - 3) +#define FM10K_TX_FREE_THRESH_DIV(txq) 0 + +#define FM10K_DEFAULT_RX_PTHRESH 8 +#define FM10K_DEFAULT_RX_HTHRESH 8 +#define FM10K_DEFAULT_RX_WTHRESH 0 + +#define FM10K_DEFAULT_TX_PTHRESH 32 +#define FM10K_DEFAULT_TX_HTHRESH 0 +#define FM10K_DEFAULT_TX_WTHRESH 0 + +#define FM10K_TX_RS_THRESH_DEFAULT(txq) 32 +#define FM10K_TX_RS_THRESH_MIN(txq) 1 +#define FM10K_TX_RS_THRESH_MAX(txq) \ + RTE_MIN(((txq)->nb_desc - 2), (txq)->free_thresh) +#define FM10K_TX_RS_THRESH_DIV(txq) ((txq)->nb_desc) + +#define FM10K_VLAN_TAG_SIZE 4 + +struct fm10k_dev_info { + volatile uint32_t enable; + volatile uint32_t glort; + /* Protect the mailbox to avoid race condition */ + rte_spinlock_t mbx_lock; +}; + +/* + * Structure to store private data for each driver instance. 
+ */ +struct fm10k_adapter { + struct fm10k_hw hw; + struct fm10k_hw_stats stats; + struct fm10k_dev_info info; +}; + +#define FM10K_DEV_PRIVATE_TO_HW(adapter) \ + (&((struct fm10k_adapter *)adapter)->hw) + +#define FM10K_DEV_PRIVATE_TO_STATS(adapter) \ + (&((struct fm10k_adapter *)adapter)->stats) + +#define FM10K_DEV_PRIVATE_TO_INFO(adapter) \ + (&((struct fm10k_adapter *)adapter)->info) + +#define FM10K_DEV_PRIVATE_TO_MBXLOCK(adapter) \ + (&(((struct fm10k_adapter *)adapter)->info.mbx_lock)) + +struct fm10k_rx_queue { + struct rte_mempool *mp; + struct rte_mbuf **sw_ring; + volatile union fm10k_rx_desc *hw_ring; + struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */ + struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */ + uint64_t hw_ring_phys_addr; + uint16_t next_dd; + uint16_t next_alloc; + uint16_t next_trigger; + uint16_t alloc_thresh; + volatile uint32_t *tail_ptr; + uint16_t nb_desc; + uint16_t queue_id; + uint8_t port_id; + uint8_t drop_en; + uint8_t rx_deferred_start; /**< don't start this queue in dev start. */ +}; + +/* + * a FIFO is used to track which descriptors have their RS bit set for Tx + * queues which are configured to allow multiple descriptors per packet + */ +struct fifo { + uint16_t *list; + uint16_t *head; + uint16_t *tail; + uint16_t *endp; +}; + +struct fm10k_tx_queue { + struct rte_mbuf **sw_ring; + struct fm10k_tx_desc *hw_ring; + uint64_t hw_ring_phys_addr; + struct fifo rs_tracker; + uint16_t last_free; + uint16_t next_free; + uint16_t nb_free; + uint16_t nb_used; + uint16_t free_trigger; + uint16_t free_thresh; + uint16_t rs_thresh; + volatile uint32_t *tail_ptr; + uint16_t nb_desc; + uint8_t port_id; + uint8_t tx_deferred_start; /** < don't start this queue in dev start. */ + uint16_t queue_id; +}; + +#define MBUF_DMA_ADDR(mb) \ + ((uint64_t) ((mb)->buf_physaddr + (mb)->data_off)) + +/* enforce 512B alignment on default Rx DMA addresses */ +#define MBUF_DMA_ADDR_DEFAULT(mb) \ + ((uint64_t) RTE_ALIGN(((mb)->buf_physaddr + RTE_PKTMBUF_HEADROOM), 512)) + +static inline void fifo_reset(struct fifo *fifo, uint32_t len) +{ + fifo->head = fifo->tail = fifo->list; + fifo->endp = fifo->list + len; +} + +static inline void fifo_insert(struct fifo *fifo, uint16_t val) +{ + *fifo->head = val; + if (++fifo->head == fifo->endp) + fifo->head = fifo->list; +} + +/* do not worry about list being empty since we only check it once we know + * we have used enough descriptors to set the RS bit at least once */ +static inline uint16_t fifo_peek(struct fifo *fifo) +{ + return *fifo->tail; +} + +static inline uint16_t fifo_remove(struct fifo *fifo) +{ + uint16_t val; + val = *fifo->tail; + if (++fifo->tail == fifo->endp) + fifo->tail = fifo->list; + return val; +} +#endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c new file mode 100644 index 0000000..0b75299 --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -0,0 +1,343 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. 
+ * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <rte_ethdev.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <rte_string_fns.h> +#include <rte_dev.h> +#include <rte_spinlock.h> + +#include "fm10k.h" +#include "base/fm10k_api.h" + +/* Default delay to acquire mailbox lock */ +#define FM10K_MBXLOCK_DELAY_US 20 + +static void +fm10k_mbx_initlock(struct fm10k_hw *hw) +{ + rte_spinlock_init(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); +} + +static void +fm10k_mbx_lock(struct fm10k_hw *hw) +{ + while (!rte_spinlock_trylock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back))) + rte_delay_us(FM10K_MBXLOCK_DELAY_US); +} + +static void +fm10k_mbx_unlock(struct fm10k_hw *hw) +{ + rte_spinlock_unlock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); +} + +static int +fm10k_dev_configure(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.hw_strip_crc == 0) + PMD_INIT_LOG(WARNING, "fm10k always strip CRC"); + + return 0; +} + +static void +fm10k_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + uint64_t ipackets, opackets, ibytes, obytes; + struct fm10k_hw *hw = + FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_hw_stats *hw_stats = + FM10K_DEV_PRIVATE_TO_STATS(dev->data->dev_private); + int i; + + PMD_INIT_FUNC_TRACE(); + + fm10k_update_hw_stats(hw, hw_stats); + + ipackets = opackets = ibytes = obytes = 0; + for (i = 0; (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) && + (i < FM10K_MAX_QUEUES_PF); ++i) { + stats->q_ipackets[i] = hw_stats->q[i].rx_packets.count; + stats->q_opackets[i] = hw_stats->q[i].tx_packets.count; + stats->q_ibytes[i] = hw_stats->q[i].rx_bytes.count; + stats->q_obytes[i] = hw_stats->q[i].tx_bytes.count; + ipackets += stats->q_ipackets[i]; + opackets += stats->q_opackets[i]; + ibytes += stats->q_ibytes[i]; + obytes += stats->q_obytes[i]; + } + stats->ipackets = ipackets; + stats->opackets = opackets; + stats->ibytes = ibytes; + stats->obytes = obytes; +} + +static void +fm10k_stats_reset(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_hw_stats *hw_stats = + FM10K_DEV_PRIVATE_TO_STATS(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + memset(hw_stats, 0, sizeof(*hw_stats)); + fm10k_rebind_hw_stats(hw, hw_stats); +} + +/* Mailbox message handler in VF */ +static const struct fm10k_msg_data 
fm10k_msgdata_vf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_msg_mac_vlan_vf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/* Mailbox message handler in PF */ +static const struct fm10k_msg_data fm10k_msgdata_pf[] = { + FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf), + FM10K_PF_MSG_LPORT_MAP_HANDLER(fm10k_msg_lport_map_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_CREATE, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_DELETE, fm10k_msg_err_pf), + FM10K_PF_MSG_UPDATE_PVID_HANDLER(fm10k_msg_update_pvid_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +static int +fm10k_setup_mbx_service(struct fm10k_hw *hw) +{ + int err; + + /* Initialize mailbox lock */ + fm10k_mbx_initlock(hw); + + /* Replace default message handler with new ones */ + if (hw->mac.type == fm10k_mac_pf) + err = hw->mbx.ops.register_handlers(&hw->mbx, fm10k_msgdata_pf); + else + err = hw->mbx.ops.register_handlers(&hw->mbx, fm10k_msgdata_vf); + + if (err) { + PMD_INIT_LOG(ERR, "Failed to register mailbox handler.err:%d", + err); + return err; + } + /* Connect to SM for PF device or PF for VF device */ + return hw->mbx.ops.connect(hw, &hw->mbx); +} + +static struct eth_dev_ops fm10k_eth_dev_ops = { + .dev_configure = fm10k_dev_configure, + .stats_get = fm10k_stats_get, + .stats_reset = fm10k_stats_reset, +}; + +static int +eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, + struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int diag; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &fm10k_eth_dev_ops; + + /* only initialize in the primary process */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + /* Vendor and Device ID need to be set before init of shared code */ + memset(hw, 0, sizeof(*hw)); + hw->device_id = dev->pci_dev->id.device_id; + hw->vendor_id = dev->pci_dev->id.vendor_id; + hw->subsystem_device_id = dev->pci_dev->id.subsystem_device_id; + hw->subsystem_vendor_id = dev->pci_dev->id.subsystem_vendor_id; + hw->revision_id = 0; + hw->hw_addr = (void *)dev->pci_dev->mem_resource[0].addr; + if (hw->hw_addr == NULL) { + PMD_INIT_LOG(ERR, "Bad mem resource." + " Try to blacklist unused devices."); + return -EIO; + } + + /* Store fm10k_adapter pointer */ + hw->back = dev->data->dev_private; + + /* Initialize the shared code */ + diag = fm10k_init_shared_code(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Shared code init failed: %d", diag); + return -EIO; + } + + /* + * Inialize bus info. Normally we would call fm10k_get_bus_info(), but + * there is no way to get link status without reading BAR4. Until this + * works, assume we have maximum bandwidth. 
+ * @todo - fix bus info + */ + hw->bus_caps.speed = fm10k_bus_speed_8000; + hw->bus_caps.width = fm10k_bus_width_pcie_x8; + hw->bus_caps.payload = fm10k_bus_payload_512; + hw->bus.speed = fm10k_bus_speed_8000; + hw->bus.width = fm10k_bus_width_pcie_x8; + hw->bus.payload = fm10k_bus_payload_256; + + /* Initialize the hw */ + diag = fm10k_init_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware init failed: %d", diag); + return -EIO; + } + + /* Initialize MAC address(es) */ + dev->data->mac_addrs = rte_zmalloc("fm10k", ETHER_ADDR_LEN, 0); + if (dev->data->mac_addrs == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate memory for MAC addresses"); + return -ENOMEM; + } + + diag = fm10k_read_mac_addr(hw); + if (diag != FM10K_SUCCESS) { + /* + * TODO: remove special handling on VF. Need shared code to + * fix first. + */ + if (hw->mac.type == fm10k_mac_pf) { + PMD_INIT_LOG(ERR, "Read MAC addr failed: %d", diag); + return -EIO; + } else { + /* Generate a random addr */ + eth_random_addr(hw->mac.addr); + memcpy(hw->mac.perm_addr, hw->mac.addr, ETH_ALEN); + } + } + + ether_addr_copy((const struct ether_addr *)hw->mac.addr, + &dev->data->mac_addrs[0]); + + /* Reset the hw statistics */ + fm10k_stats_reset(dev); + + /* Reset the hw */ + diag = fm10k_reset_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware reset failed: %d", diag); + return -EIO; + } + + /* Setup mailbox service */ + diag = fm10k_setup_mbx_service(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Failed to setup mailbox: %d", diag); + return -EIO; + } + + /* + * Below function will trigger operations on mailbox, acquire lock to + * avoid race condition from interrupt handler. Operations on mailbox + * FIFO will trigger interrupt to PF/SM, in which interrupt handler + * will handle and generate an interrupt to our side. Then, FIFO in + * mailbox will be touched. + */ + fm10k_mbx_lock(hw); + /* Enable port first */ + hw->mac.ops.update_lport_state(hw, 0, 0, 1); + + /* Update default vlan */ + hw->mac.ops.update_vlan(hw, hw->mac.default_vid, 0, true); + + /* + * Add default mac/vlan filter. glort is assigned by SM for PF, while is + * unused for VF. PF will assign correct glort for VF. + */ + hw->mac.ops.update_uc_addr(hw, hw->mac.dglort_map, hw->mac.addr, + hw->mac.default_vid, 1, 0); + + /* Set unicast mode by default. App can change to other mode in other + * API func. + */ + hw->mac.ops.update_xcast_mode(hw, hw->mac.dglort_map, + FM10K_XCAST_MODE_MULTI); + + fm10k_mbx_unlock(hw); + + return 0; +} + +/* + * The set of PCI devices this driver supports. This driver will enable both PF + * and SRIOV-VF devices. + */ +static struct rte_pci_id pci_id_fm10k_map[] = { +#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, +#include "rte_pci_dev_ids.h" + { .vendor_id = 0, /* sentinel */ }, +}; + +static struct eth_driver rte_pmd_fm10k = { + { + .name = "rte_pmd_fm10k", + .id_table = pci_id_fm10k_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING, + }, + .eth_dev_init = eth_fm10k_dev_init, + .dev_private_size = sizeof(struct fm10k_adapter), +}; + +/* + * Driver initialization routine. + * Invoked once at EAL init time. + * Register itself as the [Poll Mode] Driver of PCI FM10K devices. 
+ */ +static int +rte_pmd_fm10k_init(__rte_unused const char *name, + __rte_unused const char *params) +{ + PMD_INIT_FUNC_TRACE(); + rte_eth_driver_register(&rte_pmd_fm10k); + return 0; +} + +static struct rte_driver rte_fm10k_driver = { + .type = PMD_PDEV, + .init = rte_pmd_fm10k_init, +}; + +PMD_REGISTER_DRIVER(rte_fm10k_driver); diff --git a/lib/librte_pmd_fm10k/fm10k_logs.h b/lib/librte_pmd_fm10k/fm10k_logs.h new file mode 100644 index 0000000..febd796 --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k_logs.h @@ -0,0 +1,78 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _FM10K_LOGS_H_ +#define _FM10K_LOGS_H_ + +#define PMD_INIT_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \ + "PMD: %s(): " fmt "\n", __func__, ##args) + +#ifdef RTE_LIBRTE_FM10K_DEBUG_INIT +#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>") +#else +#define PMD_INIT_FUNC_TRACE() do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX +#define PMD_RX_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_RX_LOG(level, fmt, args...) do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_TX +#define PMD_TX_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_TX_LOG(level, fmt, args...) do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_TX_FREE +#define PMD_TX_FREE_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_DRIVER +#define PMD_DRV_LOG_RAW(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args) +#else +#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0) +#endif + +#define PMD_DRV_LOG(level, fmt, args...) 
\ + PMD_DRV_LOG_RAW(level, fmt "\n", ## args) + +#endif /* _FM10K_LOGS_H_ */ -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
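The log macros defined above compile to no-ops unless the matching RTE_LIBRTE_FM10K_DEBUG_* config option is enabled, so data-path logging costs nothing in a default build while init-path messages always reach rte_log(). A minimal sketch of the intended usage, built around a hypothetical helper that is not part of the patch:

	/* Illustrative only: example_rx_check() is a made-up helper. */
	static int
	example_rx_check(uint16_t nb_desc)
	{
		PMD_INIT_FUNC_TRACE();	/* emitted only with RTE_LIBRTE_FM10K_DEBUG_INIT=y */

		if (nb_desc == 0) {
			/* always compiled in; goes to rte_log() at ERR level */
			PMD_INIT_LOG(ERR, "invalid descriptor count %u", nb_desc);
			return -EINVAL;
		}

		/* compiled away unless RTE_LIBRTE_FM10K_DEBUG_RX=y */
		PMD_RX_LOG(DEBUG, "ring sized with %u descriptors", nb_desc);
		return 0;
	}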
* [dpdk-dev] [PATCH v5 04/17] Change config files to add fm10k into compile 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (2 preceding siblings ...) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 03/17] fm10k: register fm10k pmd PF driver Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 05/17] fm10k: add reta update/requery functions Chen Jing D(Mark) ` (14 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Change config/common_bsdapp and config/common_linuxapp, add macros to control fm10k pmd driver compile for linux and bsd. 2. Change lib/Makefile to add fm10k driver into compile list. 3. Change mk/rte.app.mk to add fm10k lib into link. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- config/common_bsdapp | 11 +++++++++++ config/common_linuxapp | 11 +++++++++++ lib/Makefile | 1 + mk/rte.app.mk | 4 ++++ 4 files changed, 27 insertions(+), 0 deletions(-) diff --git a/config/common_bsdapp b/config/common_bsdapp index 57bacb8..8cfa4e6 100644 --- a/config/common_bsdapp +++ b/config/common_bsdapp @@ -182,6 +182,17 @@ CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4 CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1 # +# Compile burst-oriented FM10K PMD +# +CONFIG_RTE_LIBRTE_FM10K_PMD=y +CONFIG_RTE_LIBRTE_FM10K_DEBUG_INIT=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_RX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX_FREE=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_DRIVER=n +CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y + +# # Compile burst-oriented Cisco ENIC PMD driver # CONFIG_RTE_LIBRTE_ENIC_PMD=y diff --git a/config/common_linuxapp b/config/common_linuxapp index d428f84..db8332d 100644 --- a/config/common_linuxapp +++ b/config/common_linuxapp @@ -180,6 +180,17 @@ CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4 CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1 # +# Compile burst-oriented FM10K PMD +# +CONFIG_RTE_LIBRTE_FM10K_PMD=y +CONFIG_RTE_LIBRTE_FM10K_DEBUG_INIT=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_RX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX_FREE=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_DRIVER=n +CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y + +# # Compile burst-oriented Cisco ENIC PMD driver # CONFIG_RTE_LIBRTE_ENIC_PMD=y diff --git a/lib/Makefile b/lib/Makefile index d617d81..561a696 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -44,6 +44,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether DIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += librte_pmd_e1000 DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += librte_pmd_ixgbe DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += librte_pmd_i40e +DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += librte_pmd_fm10k DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += librte_pmd_enic DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += librte_pmd_bond DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += librte_pmd_ring diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 95dbb0b..d181eb1 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -211,6 +211,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_I40E_PMD),y) LDLIBS += -lrte_pmd_i40e endif +ifeq ($(CONFIG_RTE_LIBRTE_FM10K_PMD),y) +LDLIBS += -lrte_pmd_fm10k +endif + ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_PMD),y) LDLIBS += -lrte_pmd_ixgbe endif -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
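With CONFIG_RTE_LIBRTE_FM10K_PMD=y the driver is pulled into the build by lib/Makefile and linked into applications through the -lrte_pmd_fm10k line in rte.app.mk, so fm10k host interfaces are probed like any other port at EAL init. A minimal sketch of an application confirming that ports were found (illustrative only; it assumes the usual static-linking model of this DPDK version and that the devices are bound to a DPDK-capable driver):

	#include <stdio.h>
	#include <stdlib.h>
	#include <rte_eal.h>
	#include <rte_debug.h>
	#include <rte_ethdev.h>

	int
	main(int argc, char **argv)
	{
		int ret = rte_eal_init(argc, argv);
		if (ret < 0)
			rte_exit(EXIT_FAILURE, "EAL init failed\n");

		/* fm10k ports show up in the ordinary ethdev count */
		printf("probed %u ethernet port(s)\n", (unsigned)rte_eth_dev_count());
		return 0;
	}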
* [dpdk-dev] [PATCH v5 05/17] fm10k: add reta update/requery functions 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (3 preceding siblings ...) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 04/17] Change config files to add fm10k into compile Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 06/17] fm10k: add rx_queue_setup/release function Chen Jing D(Mark) ` (13 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_reta_update and fm10k_reta_query functions. 2. Add fm10k_link_update and fm10k_dev_infos_get functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 162 +++++++++++++++++++++++++++++++++++ 1 files changed, 162 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 0b75299..b3d4d79 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -44,6 +44,10 @@ /* Default delay to acquire mailbox lock */ #define FM10K_MBXLOCK_DELAY_US 20 +/* Number of chars per uint32 type */ +#define CHARS_PER_UINT32 (sizeof(uint32_t)) +#define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1) + static void fm10k_mbx_initlock(struct fm10k_hw *hw) { @@ -74,6 +78,22 @@ fm10k_dev_configure(struct rte_eth_dev *dev) return 0; } +static int +fm10k_link_update(struct rte_eth_dev *dev, + __rte_unused int wait_to_complete) +{ + PMD_INIT_FUNC_TRACE(); + + /* The host-interface link is always up. The speed is ~50Gbps per Gen3 + * x8 PCIe interface. For now, we leave the speed undefined since there + * is no 50Gbps Ethernet. 
*/ + dev->data->dev_link.link_speed = 0; + dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX; + dev->data->dev_link.link_status = 1; + + return 0; +} + static void fm10k_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { @@ -119,6 +139,144 @@ fm10k_stats_reset(struct rte_eth_dev *dev) fm10k_rebind_hw_stats(hw, hw_stats); } +static void +fm10k_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + dev_info->min_rx_bufsize = FM10K_MIN_RX_BUF_SIZE; + dev_info->max_rx_pktlen = FM10K_MAX_PKT_SIZE; + dev_info->max_rx_queues = hw->mac.max_queues; + dev_info->max_tx_queues = hw->mac.max_queues; + dev_info->max_mac_addrs = 1; + dev_info->max_hash_mac_addrs = 0; + dev_info->max_vfs = FM10K_MAX_VF_NUM; + dev_info->max_vmdq_pools = ETH_64_POOLS; + dev_info->rx_offload_capa = + DEV_RX_OFFLOAD_IPV4_CKSUM | + DEV_RX_OFFLOAD_UDP_CKSUM | + DEV_RX_OFFLOAD_TCP_CKSUM; + dev_info->tx_offload_capa = 0; + dev_info->reta_size = FM10K_MAX_RSS_INDICES; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = FM10K_DEFAULT_RX_PTHRESH, + .hthresh = FM10K_DEFAULT_RX_HTHRESH, + .wthresh = FM10K_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = FM10K_RX_FREE_THRESH_DEFAULT(0), + .rx_drop_en = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = FM10K_DEFAULT_TX_PTHRESH, + .hthresh = FM10K_DEFAULT_TX_HTHRESH, + .wthresh = FM10K_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = FM10K_TX_FREE_THRESH_DEFAULT(0), + .tx_rs_thresh = FM10K_TX_RS_THRESH_DEFAULT(0), + .txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS | + ETH_TXQ_FLAGS_NOOFFLOADS, + }; + +} + +static int +fm10k_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint16_t i, j, idx, shift; + uint8_t mask; + uint32_t reta; + + PMD_INIT_FUNC_TRACE(); + + if (reta_size > FM10K_MAX_RSS_INDICES) { + PMD_INIT_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number hardware can supported " + "(%d)", reta_size, FM10K_MAX_RSS_INDICES); + return -EINVAL; + } + + /* + * Update Redirection Table RETA[n], n=0..31. 
The redirection table has + * 128-entries in 32 registers + */ + for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) { + idx = i / RTE_RETA_GROUP_SIZE; + shift = i % RTE_RETA_GROUP_SIZE; + mask = (uint8_t)((reta_conf[idx].mask >> shift) & + BIT_MASK_PER_UINT32); + if (mask == 0) + continue; + + reta = 0; + if (mask != BIT_MASK_PER_UINT32) + reta = FM10K_READ_REG(hw, FM10K_RETA(0, i >> 2)); + + for (j = 0; j < CHARS_PER_UINT32; j++) { + if (mask & (0x1 << j)) { + if (mask != 0xF) + reta &= ~(UINT8_MAX << CHAR_BIT * j); + reta |= reta_conf[idx].reta[shift + j] << + (CHAR_BIT * j); + } + } + FM10K_WRITE_REG(hw, FM10K_RETA(0, i >> 2), reta); + } + + return 0; +} + +static int +fm10k_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint16_t i, j, idx, shift; + uint8_t mask; + uint32_t reta; + + PMD_INIT_FUNC_TRACE(); + + if (reta_size < FM10K_MAX_RSS_INDICES) { + PMD_INIT_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number hardware can supported " + "(%d)", reta_size, FM10K_MAX_RSS_INDICES); + return -EINVAL; + } + + /* + * Read Redirection Table RETA[n], n=0..31. The redirection table has + * 128-entries in 32 registers + */ + for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) { + idx = i / RTE_RETA_GROUP_SIZE; + shift = i % RTE_RETA_GROUP_SIZE; + mask = (uint8_t)((reta_conf[idx].mask >> shift) & + BIT_MASK_PER_UINT32); + if (mask == 0) + continue; + + reta = FM10K_READ_REG(hw, FM10K_RETA(0, i >> 2)); + for (j = 0; j < CHARS_PER_UINT32; j++) { + if (mask & (0x1 << j)) + reta_conf[idx].reta[shift + j] = ((reta >> + CHAR_BIT * j) & UINT8_MAX); + } + } + + return 0; +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -165,6 +323,10 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_configure = fm10k_dev_configure, .stats_get = fm10k_stats_get, .stats_reset = fm10k_stats_reset, + .link_update = fm10k_link_update, + .dev_infos_get = fm10k_dev_infos_get, + .reta_update = fm10k_reta_update, + .reta_query = fm10k_reta_query, }; static int -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
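From the application side the new reta_update/reta_query dev_ops are reached through rte_eth_dev_rss_reta_update() and rte_eth_dev_rss_reta_query(). A sketch of filling the 128-entry fm10k table round-robin across the configured RX queues (it assumes dev_info.reta_size is 128 as advertised by fm10k_dev_infos_get(); error handling is trimmed):

	#include <string.h>
	#include <errno.h>
	#include <rte_common.h>
	#include <rte_ethdev.h>

	static int
	spread_reta(uint8_t port, uint16_t nb_rx_queues)
	{
		struct rte_eth_dev_info info;
		struct rte_eth_rss_reta_entry64 conf[2];	/* 128 / RTE_RETA_GROUP_SIZE */
		uint16_t i;

		rte_eth_dev_info_get(port, &info);
		if (info.reta_size > RTE_DIM(conf) * RTE_RETA_GROUP_SIZE)
			return -EINVAL;

		memset(conf, 0, sizeof(conf));
		for (i = 0; i < info.reta_size; i++) {
			conf[i / RTE_RETA_GROUP_SIZE].mask |=
					1ULL << (i % RTE_RETA_GROUP_SIZE);
			conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
					i % nb_rx_queues;
		}

		return rte_eth_dev_rss_reta_update(port, conf, info.reta_size);
	}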
* [dpdk-dev] [PATCH v5 06/17] fm10k: add rx_queue_setup/release function 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (4 preceding siblings ...) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 05/17] fm10k: add reta update/requery functions Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-13 11:08 ` David Marchand 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 07/17] fm10k: add tx_queue_setup/release function Chen Jing D(Mark) ` (12 subsequent siblings) 18 siblings, 1 reply; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_rx_queue_setup and fm10k_rx_queue_release functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 255 +++++++++++++++++++++++++++++++++++ 1 files changed, 255 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index b3d4d79..7280501 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -41,6 +41,7 @@ #include "fm10k.h" #include "base/fm10k_api.h" +#define FM10K_RX_BUFF_ALIGN 512 /* Default delay to acquire mailbox lock */ #define FM10K_MBXLOCK_DELAY_US 20 @@ -67,6 +68,46 @@ fm10k_mbx_unlock(struct fm10k_hw *hw) rte_spinlock_unlock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); } +/* + * clean queue, descriptor rings, free software buffers used when stopping + * device. + */ +static inline void +rx_queue_clean(struct fm10k_rx_queue *q) +{ + union fm10k_rx_desc zero = {.q = {0, 0, 0, 0} }; + uint32_t i; + PMD_INIT_FUNC_TRACE(); + + /* zero descriptor rings */ + for (i = 0; i < q->nb_desc; ++i) + q->hw_ring[i] = zero; + + /* free software buffers */ + for (i = 0; i < q->nb_desc; ++i) { + if (q->sw_ring[i]) { + rte_pktmbuf_free_seg(q->sw_ring[i]); + q->sw_ring[i] = NULL; + } + } +} + +/* + * free all queue memory used when releasing the queue (i.e. configure) + */ +static inline void +rx_queue_free(struct fm10k_rx_queue *q) +{ + PMD_INIT_FUNC_TRACE(); + if (q) { + PMD_INIT_LOG(DEBUG, "Freeing rx queue %p", q); + rx_queue_clean(q); + if (q->sw_ring) + rte_free(q->sw_ring); + rte_free(q); + } +} + static int fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -186,6 +227,218 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev, } +static inline int +check_nb_desc(uint16_t min, uint16_t max, uint16_t mult, uint16_t request) +{ + if ((request < min) || (request > max) || ((request % mult) != 0)) + return -1; + else + return 0; +} + +/* + * Create a memzone for hardware descriptor rings. Malloc cannot be used since + * the physical address is required. If the memzone is already created, then + * this function returns a pointer to the existing memzone. 
+ */ +static inline const struct rte_memzone * +allocate_hw_ring(const char *driver_name, const char *ring_name, + uint8_t port_id, uint16_t queue_id, int socket_id, + uint32_t size, uint32_t align) +{ + char name[RTE_MEMZONE_NAMESIZE]; + const struct rte_memzone *mz; + + snprintf(name, sizeof(name), "%s_%s_%d_%d_%d", + driver_name, ring_name, port_id, queue_id, socket_id); + + /* return the memzone if it already exists */ + mz = rte_memzone_lookup(name); + if (mz) + return mz; + +#ifdef RTE_LIBRTE_XEN_DOM0 + return rte_memzone_reserve_bounded(name, size, socket_id, 0, align, + RTE_PGSIZE_2M); +#else + return rte_memzone_reserve_aligned(name, size, socket_id, 0, align); +#endif +} + +static inline int +check_thresh(uint16_t min, uint16_t max, uint16_t div, uint16_t request) +{ + if ((request < min) || (request > max) || ((div % request) != 0)) + return -1; + else + return 0; +} + +static inline int +handle_rxconf(struct fm10k_rx_queue *q, const struct rte_eth_rxconf *conf) +{ + uint16_t rx_free_thresh; + + if (conf->rx_free_thresh == 0) + rx_free_thresh = FM10K_RX_FREE_THRESH_DEFAULT(q); + else + rx_free_thresh = conf->rx_free_thresh; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_RX_FREE_THRESH_MIN(q), + FM10K_RX_FREE_THRESH_MAX(q), + FM10K_RX_FREE_THRESH_DIV(q), + rx_free_thresh)) { + PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + rx_free_thresh, FM10K_RX_FREE_THRESH_MAX(q), + FM10K_RX_FREE_THRESH_MIN(q), + FM10K_RX_FREE_THRESH_DIV(q)); + return (-EINVAL); + } + + q->alloc_thresh = rx_free_thresh; + q->drop_en = conf->rx_drop_en; + q->rx_deferred_start = conf->rx_deferred_start; + + return 0; +} + +/* + * Hardware requires specific alignment for Rx packet buffers. At + * least one of the following two conditions must be satisfied. + * 1. Address is 512B aligned + * 2. Address is 8B aligned and buffer does not cross 4K boundary. + * + * As such, the driver may need to adjust the DMA address within the + * buffer by up to 512B. The mempool element size is checked here + * to make sure a maximally sized Ethernet frame can still be wholly + * contained within the buffer after 512B alignment. + * + * return 1 if the element size is valid, otherwise return 0. + */ +static int +mempool_element_size_valid(struct rte_mempool *mp) +{ + uint32_t min_size; + + /* elt_size includes mbuf header and headroom */ + min_size = mp->elt_size - sizeof(struct rte_mbuf) - + RTE_PKTMBUF_HEADROOM; + + /* account for up to 512B of alignment */ + min_size -= FM10K_RX_BUFF_ALIGN; + + /* sanity check for overflow */ + if (min_size > mp->elt_size) + return 0; + + if (min_size < ETHER_MAX_VLAN_FRAME_LEN) + return 0; + + /* size is valid */ + return 1; +} + +static int +fm10k_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_rxconf *conf, struct rte_mempool *mp) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_rx_queue *q; + const struct rte_memzone *mz; + + PMD_INIT_FUNC_TRACE(); + + /* make sure the mempool element size can account for alignment. Use + * RTE_LOG directly to make sure this error is seen. 
*/ + if (!mempool_element_size_valid(mp)) { + PMD_INIT_LOG(ERR, "Error : Mempool element size is too small"); + return (-EINVAL); + } + + /* make sure a valid number of descriptors have been requested */ + if (check_nb_desc(FM10K_MIN_RX_DESC, FM10K_MAX_RX_DESC, + FM10K_MULT_RX_DESC, nb_desc)) { + PMD_INIT_LOG(ERR, "Number of Rx descriptors (%u) must be " + "less than or equal to %"PRIu32", " + "greater than or equal to %u, " + "and a multiple of %u", + nb_desc, (uint32_t)FM10K_MAX_RX_DESC, FM10K_MIN_RX_DESC, + FM10K_MULT_RX_DESC); + return (-EINVAL); + } + + /* + * if this queue existed already, free the associated memory. The + * queue cannot be reused in case we need to allocate memory on + * different socket than was previously used. + */ + if (dev->data->rx_queues[queue_id] != NULL) { + rx_queue_free(dev->data->rx_queues[queue_id]); + dev->data->rx_queues[queue_id] = NULL; + } + + /* allocate memory for the queue structure */ + q = rte_zmalloc_socket("fm10k", sizeof(*q), RTE_CACHE_LINE_SIZE, + socket_id); + if (q == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate queue structure"); + return (-ENOMEM); + } + + /* setup queue */ + q->mp = mp; + q->nb_desc = nb_desc; + q->port_id = dev->data->port_id; + q->queue_id = queue_id; + q->tail_ptr = (volatile uint32_t *) + &((uint32_t *)hw->hw_addr)[FM10K_RDT(queue_id)]; + if (handle_rxconf(q, conf)) + return (-EINVAL); + + /* allocate memory for the software ring */ + q->sw_ring = rte_zmalloc_socket("fm10k sw ring", + nb_desc * sizeof(struct rte_mbuf *), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->sw_ring == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate software ring"); + rte_free(q); + return (-ENOMEM); + } + + /* + * allocate memory for the hardware descriptor ring. A memzone large + * enough to hold the maximum ring size is requested to allow for + * resizing in later calls to the queue setup function. + */ + mz = allocate_hw_ring(dev->driver->pci_drv.name, "rx_ring", + dev->data->port_id, queue_id, socket_id, + FM10K_MAX_RX_RING_SZ, FM10K_ALIGN_RX_DESC); + if (mz == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate hardware ring"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + q->hw_ring = mz->addr; + q->hw_ring_phys_addr = mz->phys_addr; + + dev->data->rx_queues[queue_id] = q; + return 0; +} + +static void +fm10k_rx_queue_release(void *queue) +{ + PMD_INIT_FUNC_TRACE(); + + rx_queue_free(queue); +} + static int fm10k_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, @@ -325,6 +578,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, .dev_infos_get = fm10k_dev_infos_get, + .rx_queue_setup = fm10k_rx_queue_setup, + .rx_queue_release = fm10k_rx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
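Because the PMD may shift the packet data pointer by up to 512 bytes to meet the hardware alignment rule, mempool_element_size_valid() requires each element to hold RTE_PKTMBUF_HEADROOM + 512 + ETHER_MAX_VLAN_FRAME_LEN (1522) bytes of data space; with the default 128-byte headroom, the common 2048-byte data room plus headroom clears that check by 14 bytes. A sketch of the matching setup call from the application, relying on the PMD's default rxconf (the 512-descriptor count is an assumption chosen to satisfy the min/max/multiple check; the mempool mp is created elsewhere with a large enough element size):

	#include <rte_ethdev.h>
	#include <rte_lcore.h>
	#include <rte_mempool.h>

	static int
	setup_fm10k_rx(uint8_t port, struct rte_mempool *mp)
	{
		/* NULL rxconf picks up default_rxconf from fm10k_dev_infos_get() */
		return rte_eth_rx_queue_setup(port, 0 /* queue */, 512 /* descriptors */,
					      rte_socket_id(), NULL, mp);
	}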
* Re: [dpdk-dev] [PATCH v5 06/17] fm10k: add rx_queue_setup/release function 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 06/17] fm10k: add rx_queue_setup/release function Chen Jing D(Mark) @ 2015-02-13 11:08 ` David Marchand 2015-02-17 13:01 ` Chen, Jing D 0 siblings, 1 reply; 147+ messages in thread From: David Marchand @ 2015-02-13 11:08 UTC (permalink / raw) To: Chen Jing D(Mark); +Cc: dev Hello, On Fri, Feb 13, 2015 at 9:19 AM, Chen Jing D(Mark) <jing.d.chen@intel.com> wrote: [snip] +static int > +fm10k_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, > + uint16_t nb_desc, unsigned int socket_id, > + const struct rte_eth_rxconf *conf, struct rte_mempool *mp) > +{ > + struct fm10k_hw *hw = > FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); > + struct fm10k_rx_queue *q; > + const struct rte_memzone *mz; > + > + PMD_INIT_FUNC_TRACE(); > + > + /* make sure the mempool element size can account for alignment. > Use > + * RTE_LOG directly to make sure this error is seen. */ > Comment is not valid anymore since you call PMD_INIT_LOG. > + if (!mempool_element_size_valid(mp)) { > + PMD_INIT_LOG(ERR, "Error : Mempool element size is too > small"); > + return (-EINVAL); > + } > + > -- David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v5 06/17] fm10k: add rx_queue_setup/release function 2015-02-13 11:08 ` David Marchand @ 2015-02-17 13:01 ` Chen, Jing D 0 siblings, 0 replies; 147+ messages in thread From: Chen, Jing D @ 2015-02-17 13:01 UTC (permalink / raw) To: David Marchand; +Cc: dev Hi, From: David Marchand [mailto:david.marchand@6wind.com] Sent: Friday, February 13, 2015 7:08 PM To: Chen, Jing D Cc: dev@dpdk.org; Zhang, Helin; Qiu, Michael; Neil Horman; Thomas Monjalon; Shaw, Jeffrey B Subject: Re: [PATCH v5 06/17] fm10k: add rx_queue_setup/release function Hello, On Fri, Feb 13, 2015 at 9:19 AM, Chen Jing D(Mark) <jing.d.chen@intel.com> wrote: [snip] +static int +fm10k_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_rxconf *conf, struct rte_mempool *mp) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_rx_queue *q; + const struct rte_memzone *mz; + + PMD_INIT_FUNC_TRACE(); + + /* make sure the mempool element size can account for alignment. Use + * RTE_LOG directly to make sure this error is seen. */ Comment is not valid anymore since you call PMD_INIT_LOG. [Mark] Thanks! I'll change the comments. + if (!mempool_element_size_valid(mp)) { + PMD_INIT_LOG(ERR, "Error : Mempool element size is too small"); + return (-EINVAL); + } + -- David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v5 07/17] fm10k: add tx_queue_setup/release function 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (5 preceding siblings ...) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 06/17] fm10k: add rx_queue_setup/release function Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 08/17] fm10k: add RX/TX single queue start/stop function Chen Jing D(Mark) ` (11 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_tx_queue_setup and fm10k_tx_queue_release functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 205 +++++++++++++++++++++++++++++++++++ 1 files changed, 205 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 7280501..989ecfd 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -108,6 +108,48 @@ rx_queue_free(struct fm10k_rx_queue *q) } } +/* + * clean queue, descriptor rings, free software buffers used when stopping + * device + */ +static inline void +tx_queue_clean(struct fm10k_tx_queue *q) +{ + struct fm10k_tx_desc zero = {0, 0, 0, 0, 0, 0}; + uint32_t i; + PMD_INIT_FUNC_TRACE(); + + /* zero descriptor rings */ + for (i = 0; i < q->nb_desc; ++i) + q->hw_ring[i] = zero; + + /* free software buffers */ + for (i = 0; i < q->nb_desc; ++i) { + if (q->sw_ring[i]) { + rte_pktmbuf_free_seg(q->sw_ring[i]); + q->sw_ring[i] = NULL; + } + } +} + +/* + * free all queue memory used when releasing the queue (i.e. 
configure) + */ +static inline void +tx_queue_free(struct fm10k_tx_queue *q) +{ + PMD_INIT_FUNC_TRACE(); + if (q) { + PMD_INIT_LOG(DEBUG, "Freeing tx queue %p", q); + tx_queue_clean(q); + if (q->rs_tracker.list) + rte_free(q->rs_tracker.list); + if (q->sw_ring) + rte_free(q->sw_ring); + rte_free(q); + } +} + static int fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -439,6 +481,167 @@ fm10k_rx_queue_release(void *queue) rx_queue_free(queue); } +static inline int +handle_txconf(struct fm10k_tx_queue *q, const struct rte_eth_txconf *conf) +{ + uint16_t tx_free_thresh; + uint16_t tx_rs_thresh; + + /* constraint MACROs require that tx_free_thresh is configured + * before tx_rs_thresh */ + if (conf->tx_free_thresh == 0) + tx_free_thresh = FM10K_TX_FREE_THRESH_DEFAULT(q); + else + tx_free_thresh = conf->tx_free_thresh; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_TX_FREE_THRESH_MIN(q), + FM10K_TX_FREE_THRESH_MAX(q), + FM10K_TX_FREE_THRESH_DIV(q), + tx_free_thresh)) { + PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + tx_free_thresh, FM10K_TX_FREE_THRESH_MAX(q), + FM10K_TX_FREE_THRESH_MIN(q), + FM10K_TX_FREE_THRESH_DIV(q)); + return (-EINVAL); + } + + q->free_thresh = tx_free_thresh; + + if (conf->tx_rs_thresh == 0) + tx_rs_thresh = FM10K_TX_RS_THRESH_DEFAULT(q); + else + tx_rs_thresh = conf->tx_rs_thresh; + + q->tx_deferred_start = conf->tx_deferred_start; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_TX_RS_THRESH_MIN(q), + FM10K_TX_RS_THRESH_MAX(q), + FM10K_TX_RS_THRESH_DIV(q), + tx_rs_thresh)) { + PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + tx_rs_thresh, FM10K_TX_RS_THRESH_MAX(q), + FM10K_TX_RS_THRESH_MIN(q), + FM10K_TX_RS_THRESH_DIV(q)); + return (-EINVAL); + } + + q->rs_thresh = tx_rs_thresh; + + return 0; +} + +static int +fm10k_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_txconf *conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_tx_queue *q; + const struct rte_memzone *mz; + + PMD_INIT_FUNC_TRACE(); + + /* make sure a valid number of descriptors have been requested */ + if (check_nb_desc(FM10K_MIN_TX_DESC, FM10K_MAX_TX_DESC, + FM10K_MULT_TX_DESC, nb_desc)) { + PMD_INIT_LOG(ERR, "Number of Tx descriptors (%u) must be " + "less than or equal to %"PRIu32", " + "greater than or equal to %u, " + "and a multiple of %u", + nb_desc, (uint32_t)FM10K_MAX_TX_DESC, FM10K_MIN_TX_DESC, + FM10K_MULT_TX_DESC); + return (-EINVAL); + } + + /* + * if this queue existed already, free the associated memory. The + * queue cannot be reused in case we need to allocate memory on + * different socket than was previously used. 
+ */ + if (dev->data->tx_queues[queue_id] != NULL) { + tx_queue_free(dev->data->tx_queues[queue_id]); + dev->data->tx_queues[queue_id] = NULL; + } + + /* allocate memory for the queue structure */ + q = rte_zmalloc_socket("fm10k", sizeof(*q), RTE_CACHE_LINE_SIZE, + socket_id); + if (q == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate queue structure"); + return (-ENOMEM); + } + + /* setup queue */ + q->nb_desc = nb_desc; + q->port_id = dev->data->port_id; + q->queue_id = queue_id; + q->tail_ptr = (volatile uint32_t *) + &((uint32_t *)hw->hw_addr)[FM10K_TDT(queue_id)]; + if (handle_txconf(q, conf)) + return (-EINVAL); + + /* allocate memory for the software ring */ + q->sw_ring = rte_zmalloc_socket("fm10k sw ring", + nb_desc * sizeof(struct rte_mbuf *), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->sw_ring == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate software ring"); + rte_free(q); + return (-ENOMEM); + } + + /* + * allocate memory for the hardware descriptor ring. A memzone large + * enough to hold the maximum ring size is requested to allow for + * resizing in later calls to the queue setup function. + */ + mz = allocate_hw_ring(dev->driver->pci_drv.name, "tx_ring", + dev->data->port_id, queue_id, socket_id, + FM10K_MAX_TX_RING_SZ, FM10K_ALIGN_TX_DESC); + if (mz == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate hardware ring"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + q->hw_ring = mz->addr; + q->hw_ring_phys_addr = mz->phys_addr; + + /* + * allocate memory for the RS bit tracker. Enough slots to hold the + * descriptor index for each RS bit needing to be set are required. + */ + q->rs_tracker.list = rte_zmalloc_socket("fm10k rs tracker", + ((nb_desc + 1) / q->rs_thresh) * + sizeof(uint16_t), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->rs_tracker.list == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate RS bit tracker"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + + dev->data->tx_queues[queue_id] = q; + return 0; +} + +static void +fm10k_tx_queue_release(void *queue) +{ + PMD_INIT_FUNC_TRACE(); + + tx_queue_free(queue); +} + static int fm10k_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, @@ -580,6 +783,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_infos_get = fm10k_dev_infos_get, .rx_queue_setup = fm10k_rx_queue_setup, .rx_queue_release = fm10k_rx_queue_release, + .tx_queue_setup = fm10k_tx_queue_setup, + .tx_queue_release = fm10k_tx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
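handle_txconf() validates tx_free_thresh before tx_rs_thresh because the RS default is derived from the free threshold, and both values must divide their FM10K_TX_*_THRESH_DIV constraints evenly. A sketch of a TX queue setup that overrides the thresholds on top of the PMD defaults (the 512/64/32 values are assumptions picked to pass those checks, not values taken from the patch):

	#include <rte_ethdev.h>

	static int
	setup_fm10k_tx(uint8_t port, unsigned int socket_id)
	{
		struct rte_eth_dev_info info;
		struct rte_eth_txconf txconf;

		rte_eth_dev_info_get(port, &info);
		txconf = info.default_txconf;	/* from fm10k_dev_infos_get() */
		txconf.tx_free_thresh = 64;	/* checked by handle_txconf() */
		txconf.tx_rs_thresh = 32;

		return rte_eth_tx_queue_setup(port, 0, 512, socket_id, &txconf);
	}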
* [dpdk-dev] [PATCH v5 08/17] fm10k: add RX/TX single queue start/stop function 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (6 preceding siblings ...) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 07/17] fm10k: add tx_queue_setup/release function Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-13 11:31 ` David Marchand 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 09/17] fm10k: add dev start/stop functions Chen Jing D(Mark) ` (10 subsequent siblings) 18 siblings, 1 reply; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add 4 functions fm10k_dev_rx_queue_start, fm10k_dev_rx_queue_stop, fm10k_dev_tx_queue_start, and fm10k_dev_tx_queue_stop. 2. verify Rx packet buffer alignment is valid. Hardware requires specific alignment for Rx packet buffers. At least one of the following two conditions must be satisfied. 1) Address is 512B aligned 2) Address is 8B aligned and buffer does not cross 4K boundary. Alignment is checked by the driver when the Rx queue is reset. It is assumed that if an entire descriptor ring can be filled with buffers containing valid alignment, then all buffers in that mempool have valid address alignment. It is the responsibility of the user to ensure all buffers have valid alignment, as it is the user who creates the mempool. It is assumed the buffer needs only to store a maximum size Ethernet frame. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k.h | 59 +++++++++ lib/librte_pmd_fm10k/fm10k_ethdev.c | 222 +++++++++++++++++++++++++++++++++++ 2 files changed, 281 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index 1468040..be990e5 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -221,4 +221,63 @@ static inline uint16_t fifo_remove(struct fifo *fifo) fifo->tail = fifo->list; return val; } + +static inline void +fm10k_pktmbuf_reset(struct rte_mbuf *mb, uint8_t in_port) +{ + rte_mbuf_refcnt_set(mb, 1); + mb->next = NULL; + mb->nb_segs = 1; + + /* enforce 512B alignment on default Rx virtual addresses */ + mb->data_off = (uint16_t)(RTE_PTR_ALIGN((char *)mb->buf_addr + + RTE_PKTMBUF_HEADROOM, FM10K_RX_DATABUF_ALIGN) + - (char *)mb->buf_addr); + mb->port = in_port; +} + +/* + * Verify Rx packet buffer alignment is valid. + * + * Hardware requires specific alignment for Rx packet buffers. At + * least one of the following two conditions must be satisfied. + * 1. Address is 512B aligned + * 2. Address is 8B aligned and buffer does not cross 4K boundary. + * + * Return 1 if buffer alignment satisfies at least one condition, + * otherwise return 0. + * + * Note: Alignment is checked by the driver when the Rx queue is reset. It + * is assumed that if an entire descriptor ring can be filled with + * buffers containing valid alignment, then all buffers in that mempool + * have valid address alignment. It is the responsibility of the user + * to ensure all buffers have valid alignment, as it is the user who + * creates the mempool. + * Note: It is assumed the buffer needs only to store a maximum size Ethernet + * frame. + */ +static inline int +fm10k_addr_alignment_valid(struct rte_mbuf *mb) +{ + uint64_t addr = MBUF_DMA_ADDR_DEFAULT(mb); + uint64_t boundary1, boundary2; + + /* 512B aligned? 
*/ + if (RTE_ALIGN(addr, 512) == addr) + return 1; + + /* 8B aligned, and max Ethernet frame would not cross a 4KB boundary? */ + if (RTE_ALIGN(addr, 8) == addr) { + boundary1 = RTE_ALIGN_FLOOR(addr, 4096); + boundary2 = RTE_ALIGN_FLOOR(addr + ETHER_MAX_VLAN_FRAME_LEN, + 4096); + if (boundary1 == boundary2) + return 1; + } + + /* use RTE_LOG directly to make sure this error is seen */ + RTE_LOG(ERR, PMD, "%s(): Error: Invalid buffer alignment\n", __func__); + + return 0; +} #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 989ecfd..06609be 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -69,6 +69,43 @@ fm10k_mbx_unlock(struct fm10k_hw *hw) } /* + * reset queue to initial state, allocate software buffers used when starting + * device. + * return 0 on success + * return -ENOMEM if buffers cannot be allocated + * return -EINVAL if buffers do not satisfy alignment condition + */ +static inline int +rx_queue_reset(struct fm10k_rx_queue *q) +{ + uint64_t dma_addr; + int i, diag; + PMD_INIT_FUNC_TRACE(); + + diag = rte_mempool_get_bulk(q->mp, (void **)q->sw_ring, q->nb_desc); + if (diag != 0) + return -ENOMEM; + + for (i = 0; i < q->nb_desc; ++i) { + fm10k_pktmbuf_reset(q->sw_ring[i], q->port_id); + if (!fm10k_addr_alignment_valid(q->sw_ring[i])) { + rte_mempool_put_bulk(q->mp, (void **)q->sw_ring, + q->nb_desc); + return -EINVAL; + } + dma_addr = MBUF_DMA_ADDR_DEFAULT(q->sw_ring[i]); + q->hw_ring[i].q.pkt_addr = dma_addr; + q->hw_ring[i].q.hdr_addr = dma_addr; + } + + q->next_dd = 0; + q->next_alloc = 0; + q->next_trigger = q->alloc_thresh - 1; + FM10K_PCI_REG_WRITE(q->tail_ptr, q->nb_desc - 1); + return 0; +} + +/* * clean queue, descriptor rings, free software buffers used when stopping * device. 
*/ @@ -109,6 +146,49 @@ rx_queue_free(struct fm10k_rx_queue *q) } /* + * disable RX queue, wait unitl HW finished necessary flush operation + */ +static inline int +rx_queue_disable(struct fm10k_hw *hw, uint16_t qnum) +{ + uint32_t reg, i; + + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(qnum)); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(qnum), + reg & ~FM10K_RXQCTL_ENABLE); + + /* Wait 100us at most */ + for (i = 0; i < FM10K_QUEUE_DISABLE_TIMEOUT; i++) { + rte_delay_us(1); + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + if (!(reg & FM10K_RXQCTL_ENABLE)) + break; + } + + if (i == FM10K_QUEUE_DISABLE_TIMEOUT) + return -1; + + return 0; +} + +/* + * reset queue to initial state, allocate software buffers used when starting + * device + */ +static inline void +tx_queue_reset(struct fm10k_tx_queue *q) +{ + PMD_INIT_FUNC_TRACE(); + q->last_free = 0; + q->next_free = 0; + q->nb_used = 0; + q->nb_free = q->nb_desc - 1; + q->free_trigger = q->nb_free - q->free_thresh; + fifo_reset(&q->rs_tracker, (q->nb_desc + 1) / q->rs_thresh); + FM10K_PCI_REG_WRITE(q->tail_ptr, 0); +} + +/* * clean queue, descriptor rings, free software buffers used when stopping * device */ @@ -150,6 +230,32 @@ tx_queue_free(struct fm10k_tx_queue *q) } } +/* + * disable TX queue, wait unitl HW finished necessary flush operation + */ +static inline int +tx_queue_disable(struct fm10k_hw *hw, uint16_t qnum) +{ + uint32_t reg, i; + + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(qnum)); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(qnum), + reg & ~FM10K_TXDCTL_ENABLE); + + /* Wait 100us at most */ + for (i = 0; i < FM10K_QUEUE_DISABLE_TIMEOUT; i++) { + rte_delay_us(1); + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + if (!(reg & FM10K_TXDCTL_ENABLE)) + break; + } + + if (i == FM10K_QUEUE_DISABLE_TIMEOUT) + return -1; + + return 0; +} + static int fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -162,6 +268,118 @@ fm10k_dev_configure(struct rte_eth_dev *dev) } static int +fm10k_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int err = -1; + uint32_t reg; + struct fm10k_rx_queue *rxq; + + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id < dev->data->nb_rx_queues) { + rxq = dev->data->rx_queues[rx_queue_id]; + err = rx_queue_reset(rxq); + if (err == -ENOMEM) { + PMD_INIT_LOG(ERR, "Failed to alloc memory : %d", err); + return err; + } else if (err == -EINVAL) { + PMD_INIT_LOG(ERR, "Invalid buffer address alignment :" + " %d", err); + return err; + } + + /* Setup the HW Rx Head and Tail Descriptor Pointers + * Note: this must be done AFTER the queue is enabled on real + * hardware, but BEFORE the queue is enabled when using the + * emulation platform. Do it in both places for now and remove + * this comment and the following two register writes when the + * emulation platform is no longer being used. 
+ */ + FM10K_WRITE_REG(hw, FM10K_RDH(rx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_RDT(rx_queue_id), rxq->nb_desc - 1); + + /* Set PF ownership flag for PF devices */ + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(rx_queue_id)); + if (hw->mac.type == fm10k_mac_pf) + reg |= FM10K_RXQCTL_PF; + reg |= FM10K_RXQCTL_ENABLE; + /* enable RX queue */ + FM10K_WRITE_REG(hw, FM10K_RXQCTL(rx_queue_id), reg); + FM10K_WRITE_FLUSH(hw); + + /* Setup the HW Rx Head and Tail Descriptor Pointers + * Note: this must be done AFTER the queue is enabled + */ + FM10K_WRITE_REG(hw, FM10K_RDH(rx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_RDT(rx_queue_id), rxq->nb_desc - 1); + } + + return err; +} + +static int +fm10k_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id < dev->data->nb_rx_queues) { + /* Disable RX queue */ + rx_queue_disable(hw, rx_queue_id); + + /* Free mbuf and clean HW ring */ + rx_queue_clean(dev->data->rx_queues[rx_queue_id]); + } + + return 0; +} + +static int +fm10k_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + /** @todo - this should be defined in the shared code */ +#define FM10K_TXDCTL_WRITE_BACK_MIN_DELAY 0x00010000 + uint32_t txdctl = FM10K_TXDCTL_WRITE_BACK_MIN_DELAY; + int err = 0; + + PMD_INIT_FUNC_TRACE(); + + if (tx_queue_id < dev->data->nb_tx_queues) { + tx_queue_reset(dev->data->tx_queues[tx_queue_id]); + + /* reset head and tail pointers */ + FM10K_WRITE_REG(hw, FM10K_TDH(tx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_TDT(tx_queue_id), 0); + + /* enable TX queue */ + FM10K_WRITE_REG(hw, FM10K_TXDCTL(tx_queue_id), + FM10K_TXDCTL_ENABLE | txdctl); + FM10K_WRITE_FLUSH(hw); + } else + err = -1; + + return err; +} + +static int +fm10k_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + if (tx_queue_id < dev->data->nb_tx_queues) { + tx_queue_disable(hw, tx_queue_id); + tx_queue_clean(dev->data->tx_queues[tx_queue_id]); + } + + return 0; +} + +static int fm10k_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) { @@ -781,6 +999,10 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, .dev_infos_get = fm10k_dev_infos_get, + .rx_queue_start = fm10k_dev_rx_queue_start, + .rx_queue_stop = fm10k_dev_rx_queue_stop, + .tx_queue_start = fm10k_dev_tx_queue_start, + .tx_queue_stop = fm10k_dev_tx_queue_stop, .rx_queue_setup = fm10k_rx_queue_setup, .rx_queue_release = fm10k_rx_queue_release, .tx_queue_setup = fm10k_tx_queue_setup, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
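The per-queue start/stop entry points are what make deferred start work: a queue set up with rx_deferred_start (or tx_deferred_start) is skipped by dev_start and only enabled when the application calls the generic queue-start API. A sketch of that flow, assuming the port has a second RX queue and a mempool mp prepared by the caller (return codes ignored for brevity):

	#include <rte_ethdev.h>
	#include <rte_lcore.h>

	static void
	bring_up_deferred_queue(uint8_t port, struct rte_mempool *mp)
	{
		struct rte_eth_dev_info info;
		struct rte_eth_rxconf rxconf;

		rte_eth_dev_info_get(port, &info);
		rxconf = info.default_rxconf;
		rxconf.rx_deferred_start = 1;

		rte_eth_rx_queue_setup(port, 1, 512, rte_socket_id(), &rxconf, mp);
		rte_eth_dev_start(port);		/* queue 1 stays disabled */

		/* ... later, once the application is ready to poll queue 1 ... */
		rte_eth_dev_rx_queue_start(port, 1);	/* reaches fm10k_dev_rx_queue_start() */
	}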
* Re: [dpdk-dev] [PATCH v5 08/17] fm10k: add RX/TX single queue start/stop function 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 08/17] fm10k: add RX/TX single queue start/stop function Chen Jing D(Mark) @ 2015-02-13 11:31 ` David Marchand 2015-02-13 16:45 ` Jeff Shaw 0 siblings, 1 reply; 147+ messages in thread From: David Marchand @ 2015-02-13 11:31 UTC (permalink / raw) To: Chen Jing D(Mark); +Cc: dev Hello, On Fri, Feb 13, 2015 at 9:19 AM, Chen Jing D(Mark) <jing.d.chen@intel.com> wrote: [snip] +/* > + * Verify Rx packet buffer alignment is valid. > + * > + * Hardware requires specific alignment for Rx packet buffers. At > + * least one of the following two conditions must be satisfied. > + * 1. Address is 512B aligned > + * 2. Address is 8B aligned and buffer does not cross 4K boundary. > + * > + * Return 1 if buffer alignment satisfies at least one condition, > + * otherwise return 0. > + * > + * Note: Alignment is checked by the driver when the Rx queue is reset. It > + * is assumed that if an entire descriptor ring can be filled with > + * buffers containing valid alignment, then all buffers in that > mempool > + * have valid address alignment. It is the responsibility of the > user > + * to ensure all buffers have valid alignment, as it is the user who > + * creates the mempool. > + * Note: It is assumed the buffer needs only to store a maximum size > Ethernet > + * frame. > + */ > +static inline int > +fm10k_addr_alignment_valid(struct rte_mbuf *mb) > +{ > + uint64_t addr = MBUF_DMA_ADDR_DEFAULT(mb); > + uint64_t boundary1, boundary2; > + > + /* 512B aligned? */ > + if (RTE_ALIGN(addr, 512) == addr) > + return 1; > + > + /* 8B aligned, and max Ethernet frame would not cross a 4KB > boundary? */ > + if (RTE_ALIGN(addr, 8) == addr) { > + boundary1 = RTE_ALIGN_FLOOR(addr, 4096); > + boundary2 = RTE_ALIGN_FLOOR(addr + > ETHER_MAX_VLAN_FRAME_LEN, > + 4096); > + if (boundary1 == boundary2) > + return 1; > + } > + > + /* use RTE_LOG directly to make sure this error is seen */ > + RTE_LOG(ERR, PMD, "%s(): Error: Invalid buffer alignment\n", > __func__); > + > + return 0; > +} > Same comment as before, do not directly use RTE_LOG. This is init stuff, you have a PMD_INIT_LOG macro. By the way, I need to dig deeper into this, but I can see multiple patches ensuring buffer alignment. Do we really need to validate this alignment here, if we already validated this constraint at the mempool level ? -- David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v5 08/17] fm10k: add RX/TX single queue start/stop function 2015-02-13 11:31 ` David Marchand @ 2015-02-13 16:45 ` Jeff Shaw 0 siblings, 0 replies; 147+ messages in thread From: Jeff Shaw @ 2015-02-13 16:45 UTC (permalink / raw) To: David Marchand; +Cc: dev Hi David, thanks for the review. On Fri, Feb 13, 2015 at 12:31:16PM +0100, David Marchand wrote: > Hello, > > On Fri, Feb 13, 2015 at 9:19 AM, Chen Jing D(Mark) <jing.d.chen@intel.com> > wrote: > > [snip] > > +/* > > + * Verify Rx packet buffer alignment is valid. > > + * > > + * Hardware requires specific alignment for Rx packet buffers. At > > + * least one of the following two conditions must be satisfied. > > + * 1. Address is 512B aligned > > + * 2. Address is 8B aligned and buffer does not cross 4K boundary. > > + * > > + * Return 1 if buffer alignment satisfies at least one condition, > > + * otherwise return 0. > > + * > > + * Note: Alignment is checked by the driver when the Rx queue is reset. It > > + * is assumed that if an entire descriptor ring can be filled with > > + * buffers containing valid alignment, then all buffers in that > > mempool > > + * have valid address alignment. It is the responsibility of the > > user > > + * to ensure all buffers have valid alignment, as it is the user who > > + * creates the mempool. > > + * Note: It is assumed the buffer needs only to store a maximum size > > Ethernet > > + * frame. > > + */ > > +static inline int > > +fm10k_addr_alignment_valid(struct rte_mbuf *mb) > > +{ > > + uint64_t addr = MBUF_DMA_ADDR_DEFAULT(mb); > > + uint64_t boundary1, boundary2; > > + > > + /* 512B aligned? */ > > + if (RTE_ALIGN(addr, 512) == addr) > > + return 1; > > + > > + /* 8B aligned, and max Ethernet frame would not cross a 4KB > > boundary? */ > > + if (RTE_ALIGN(addr, 8) == addr) { > > + boundary1 = RTE_ALIGN_FLOOR(addr, 4096); > > + boundary2 = RTE_ALIGN_FLOOR(addr + > > ETHER_MAX_VLAN_FRAME_LEN, > > + 4096); > > + if (boundary1 == boundary2) > > + return 1; > > + } > > + > > + /* use RTE_LOG directly to make sure this error is seen */ > > + RTE_LOG(ERR, PMD, "%s(): Error: Invalid buffer alignment\n", > > __func__); > > + > > + return 0; > > +} > > > > Same comment as before, do not directly use RTE_LOG. > This is init stuff, you have a PMD_INIT_LOG macro. Agreed, the comment should be fixed. > > By the way, I need to dig deeper into this, but I can see multiple patches > ensuring buffer alignment. > Do we really need to validate this alignment here, if we already validated > this constraint at the mempool level ? > This is really a sanity check. The buffer alignment needs to be checked at runtime becuase a user could modify the alignment. We provide a check here to be extra safe, and hopefully to fail at init time rather than later. There are two ways to satisfy the alignment requirements for the hardware. Currently the driver implements the 512B alignment, but it is possible somebody may want to the other 8B alignment w/o crossing a 4K page boundary. This sanity check would help catch any possible issues in the future related to buffer alignment. -Jeff ^ permalink raw reply [flat|nested] 147+ messages in thread
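To make the two acceptance rules concrete, a few illustrative buffer addresses (the addresses themselves are made up): an address ending in 0xa00 is 512B aligned and passes rule 1; one ending in 0x808 is only 8B aligned, but 0x808 + 1522 = 0xdfa stays inside the same 4KB page, so it passes rule 2; one ending in 0xff8 is 8B aligned, yet 0xff8 + 1522 crosses the page boundary at 0x1000, so fm10k_addr_alignment_valid() rejects it.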
* [dpdk-dev] [PATCH v5 09/17] fm10k: add dev start/stop functions 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (7 preceding siblings ...) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 08/17] fm10k: add RX/TX single queue start/stop function Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 10/17] fm10k: add receive and tranmit function Chen Jing D(Mark) ` (9 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add function to initialize RX queues. 2. Add function to initialize TX queues. 3. Add fm10k_dev_start, fm10k_dev_stop and fm10k_dev_close functions. 4. Add function to close mailbox service. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 224 +++++++++++++++++++++++++++++++++++ 1 files changed, 224 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 06609be..6a4b16a 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -44,11 +44,14 @@ #define FM10K_RX_BUFF_ALIGN 512 /* Default delay to acquire mailbox lock */ #define FM10K_MBXLOCK_DELAY_US 20 +#define UINT64_LOWER_32BITS_MASK 0x00000000ffffffffULL /* Number of chars per uint32 type */ #define CHARS_PER_UINT32 (sizeof(uint32_t)) #define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1) +static void fm10k_close_mbx_service(struct fm10k_hw *hw); + static void fm10k_mbx_initlock(struct fm10k_hw *hw) { @@ -268,6 +271,98 @@ fm10k_dev_configure(struct rte_eth_dev *dev) } static int +fm10k_dev_tx_init(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, ret; + struct fm10k_tx_queue *txq; + uint64_t base_addr; + uint32_t size; + + /* Disable TXINT to avoid possible interrupt */ + for (i = 0; i < hw->mac.max_queues; i++) + FM10K_WRITE_REG(hw, FM10K_TXINT(i), + 3 << FM10K_TXINT_TIMER_SHIFT); + + /* Setup TX queue */ + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = dev->data->tx_queues[i]; + base_addr = txq->hw_ring_phys_addr; + size = txq->nb_desc * sizeof(struct fm10k_tx_desc); + + /* disable queue to avoid issues while updating state */ + ret = tx_queue_disable(hw, i); + if (ret) { + PMD_INIT_LOG(ERR, "failed to disable queue %d", i); + return -1; + } + + /* set location and size for descriptor ring */ + FM10K_WRITE_REG(hw, FM10K_TDBAL(i), + base_addr & UINT64_LOWER_32BITS_MASK); + FM10K_WRITE_REG(hw, FM10K_TDBAH(i), + base_addr >> (CHAR_BIT * sizeof(uint32_t))); + FM10K_WRITE_REG(hw, FM10K_TDLEN(i), size); + } + return 0; +} + +static int +fm10k_dev_rx_init(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, ret; + struct fm10k_rx_queue *rxq; + uint64_t base_addr; + uint32_t size; + uint32_t rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY; + uint16_t buf_size; + struct rte_pktmbuf_pool_private *mbp_priv; + + /* Disable RXINT to avoid possible interrupt */ + for (i = 0; i < hw->mac.max_queues; i++) + FM10K_WRITE_REG(hw, FM10K_RXINT(i), + 3 << FM10K_RXINT_TIMER_SHIFT); + + /* Setup RX queues */ + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = dev->data->rx_queues[i]; + base_addr = rxq->hw_ring_phys_addr; + size = rxq->nb_desc * sizeof(union fm10k_rx_desc); + + /* disable queue 
to avoid issues while updating state */ + ret = rx_queue_disable(hw, i); + if (ret) { + PMD_INIT_LOG(ERR, "failed to disable queue %d", i); + return -1; + } + + /* Setup the Base and Length of the Rx Descriptor Ring */ + FM10K_WRITE_REG(hw, FM10K_RDBAL(i), + base_addr & UINT64_LOWER_32BITS_MASK); + FM10K_WRITE_REG(hw, FM10K_RDBAH(i), + base_addr >> (CHAR_BIT * sizeof(uint32_t))); + FM10K_WRITE_REG(hw, FM10K_RDLEN(i), size); + + /* Configure the Rx buffer size for one buff without split */ + mbp_priv = rte_mempool_get_priv(rxq->mp); + buf_size = (uint16_t) (mbp_priv->mbuf_data_room_size - + RTE_PKTMBUF_HEADROOM); + FM10K_WRITE_REG(hw, FM10K_SRRCTL(i), + buf_size >> FM10K_SRRCTL_BSIZEPKT_SHIFT); + + /* Enable drop on empty, it's RO for VF */ + if (hw->mac.type == fm10k_mac_pf && rxq->drop_en) + rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY; + + FM10K_WRITE_REG(hw, FM10K_RXDCTL(i), rxdctl); + FM10K_WRITE_FLUSH(hw); + } + + return 0; +} + +static int fm10k_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) { struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -379,6 +474,125 @@ fm10k_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) return 0; } +/* fls = find last set bit = 32 minus the number of leading zeros */ +#ifndef fls +#define fls(x) (((x) == 0) ? 0 : (32 - __builtin_clz((x)))) +#endif +#define BSIZEPKT_ROUNDUP ((1 << FM10K_SRRCTL_BSIZEPKT_SHIFT) - 1) +static int +fm10k_dev_start(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, diag; + + PMD_INIT_FUNC_TRACE(); + + /* stop, init, then start the hw */ + diag = fm10k_stop_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware stop failed: %d", diag); + return -EIO; + } + + diag = fm10k_init_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware init failed: %d", diag); + return -EIO; + } + + diag = fm10k_start_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware start failed: %d", diag); + return -EIO; + } + + diag = fm10k_dev_tx_init(dev); + if (diag) { + PMD_INIT_LOG(ERR, "TX init failed: %d", diag); + return diag; + } + + diag = fm10k_dev_rx_init(dev); + if (diag) { + PMD_INIT_LOG(ERR, "RX init failed: %d", diag); + return diag; + } + + if (hw->mac.type == fm10k_mac_pf) { + /* Establish only VSI 0 as valid */ + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(0), FM10K_DGLORTMAP_ANY); + + /* Configure RSS bits used in RETA table */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(0), + fls(dev->data->nb_rx_queues - 1) << + FM10K_DGLORTDEC_RSSLENGTH_SHIFT); + + /* Invalidate all other GLORT entries */ + for (i = 1; i < FM10K_DGLORT_COUNT; i++) + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(i), + FM10K_DGLORTMAP_NONE); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + struct fm10k_rx_queue *rxq; + rxq = dev->data->rx_queues[i]; + + if (rxq->rx_deferred_start) + continue; + diag = fm10k_dev_rx_queue_start(dev, i); + if (diag != 0) { + int j; + for (j = 0; j < i; ++j) + rx_queue_clean(dev->data->rx_queues[j]); + return diag; + } + } + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + struct fm10k_tx_queue *txq; + txq = dev->data->tx_queues[i]; + + if (txq->tx_deferred_start) + continue; + diag = fm10k_dev_tx_queue_start(dev, i); + if (diag != 0) { + int j; + for (j = 0; j < dev->data->nb_rx_queues; ++j) + rx_queue_clean(dev->data->rx_queues[j]); + return diag; + } + } + + return 0; +} + +static void +fm10k_dev_stop(struct rte_eth_dev *dev) +{ + int i; + + PMD_INIT_FUNC_TRACE(); + + for (i = 0; i < 
dev->data->nb_tx_queues; i++) + fm10k_dev_tx_queue_stop(dev, i); + + for (i = 0; i < dev->data->nb_rx_queues; i++) + fm10k_dev_rx_queue_stop(dev, i); +} + +static void +fm10k_dev_close(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + /* Stop mailbox service first */ + fm10k_close_mbx_service(hw); + fm10k_dev_stop(dev); + fm10k_stop_hw(hw); +} + static int fm10k_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) @@ -993,8 +1207,18 @@ fm10k_setup_mbx_service(struct fm10k_hw *hw) return hw->mbx.ops.connect(hw, &hw->mbx); } +static void +fm10k_close_mbx_service(struct fm10k_hw *hw) +{ + /* Disconnect from SM for PF device or PF for VF device */ + hw->mbx.ops.disconnect(hw, &hw->mbx); +} + static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_configure = fm10k_dev_configure, + .dev_start = fm10k_dev_start, + .dev_stop = fm10k_dev_stop, + .dev_close = fm10k_dev_close, .stats_get = fm10k_stats_get, .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
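With dev_start/dev_stop/dev_close in place, an fm10k port follows the standard ethdev life cycle from the application's point of view. A sketch of the order these new dev_ops slot into (port_conf, mp and nb_queues are assumed to be prepared by the caller; return codes are ignored for brevity):

	#include <rte_ethdev.h>
	#include <rte_lcore.h>

	static void
	fm10k_port_lifecycle(uint8_t port, uint16_t nb_queues,
			const struct rte_eth_conf *port_conf, struct rte_mempool *mp)
	{
		uint16_t q;

		rte_eth_dev_configure(port, nb_queues, nb_queues, port_conf);
		for (q = 0; q < nb_queues; q++) {
			rte_eth_rx_queue_setup(port, q, 512, rte_socket_id(), NULL, mp);
			rte_eth_tx_queue_setup(port, q, 512, rte_socket_id(), NULL);
		}

		rte_eth_dev_start(port);	/* fm10k_dev_start(): hw init, enable queues */
		/* ... forward traffic ... */
		rte_eth_dev_stop(port);		/* fm10k_dev_stop(): disable all queues */
		rte_eth_dev_close(port);	/* fm10k_dev_close(): disconnect the mailbox */
	}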
* [dpdk-dev] [PATCH v5 10/17] fm10k: add receive and tranmit function 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (8 preceding siblings ...) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 09/17] fm10k: add dev start/stop functions Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-13 11:42 ` David Marchand 2015-02-13 11:53 ` David Marchand 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 11/17] fm10k: add PF RSS support Chen Jing D(Mark) ` (8 subsequent siblings) 18 siblings, 2 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_recv_pkts and fm10k_xmit_pkts functions. 2. Link app function pointer to actual fm10k recv/xmit functions. 3. Change Makefile to compile new file fm10k_rxtx.c Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/Makefile | 1 + lib/librte_pmd_fm10k/fm10k.h | 7 + lib/librte_pmd_fm10k/fm10k_ethdev.c | 2 + lib/librte_pmd_fm10k/fm10k_rxtx.c | 316 +++++++++++++++++++++++++++++++++++ 4 files changed, 326 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c diff --git a/lib/librte_pmd_fm10k/Makefile b/lib/librte_pmd_fm10k/Makefile index 1da84e9..8ab788c 100644 --- a/lib/librte_pmd_fm10k/Makefile +++ b/lib/librte_pmd_fm10k/Makefile @@ -79,6 +79,7 @@ VPATH += $(RTE_SDK)/lib/librte_pmd_fm10k/base # all source are stored in SRCS-y # SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_ethdev.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_rxtx.c SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_pf.c SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_tlv.c diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index be990e5..a9b19cd 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -280,4 +280,11 @@ fm10k_addr_alignment_valid(struct rte_mbuf *mb) return 0; } + +/* Rx and Tx prototypes */ +uint16_t fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); + +uint16_t fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 6a4b16a..e04d200 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1245,6 +1245,8 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, PMD_INIT_FUNC_TRACE(); dev->dev_ops = &fm10k_eth_dev_ops; + dev->rx_pkt_burst = &fm10k_recv_pkts; + dev->tx_pkt_burst = &fm10k_xmit_pkts; /* only initialize in the primary process */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c new file mode 100644 index 0000000..9cead3a --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c @@ -0,0 +1,316 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. 
+ * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ +#include <rte_ethdev.h> +#include <rte_common.h> +#include "fm10k.h" +#include "base/fm10k_type.h" + +#ifdef RTE_PMD_PACKET_PREFETCH +#define rte_packet_prefetch(p) rte_prefetch1(p) +#else +#define rte_packet_prefetch(p) do {} while (0) +#endif + +static inline void dump_rxd(union fm10k_rx_desc *rxd) +{ +#ifndef RTE_LIBRTE_FM10K_DEBUG_RX + RTE_SET_USED(rxd); +#endif + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| GLORT | PKT HDR & TYPE |"); + PMD_RX_LOG(DEBUG, "| 0x%08x | 0x%08x |", rxd->d.glort, + rxd->d.data); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| VLAN & LEN | STATUS |"); + PMD_RX_LOG(DEBUG, "| 0x%08x | 0x%08x |", rxd->d.vlan_len, + rxd->d.staterr); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| RESERVED | RSS_HASH |"); + PMD_RX_LOG(DEBUG, "| 0x%08x | 0x%08x |", 0, rxd->d.rss); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| TIME TAG |"); + PMD_RX_LOG(DEBUG, "| 0x%016lx |", rxd->q.timestamp); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); +} + +static inline void +rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d) +{ + uint16_t ptype; + static const uint16_t pt_lut[] = { 0, + PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, + PKT_RX_IPV6_HDR, PKT_RX_IPV6_HDR_EXT, + 0, 0, 0 + }; + + if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK) + m->ol_flags |= PKT_RX_RSS_HASH; + + if (unlikely((d->d.staterr & + (FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)) == + (FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE))) + m->ol_flags |= PKT_RX_IP_CKSUM_BAD; + + if (unlikely((d->d.staterr & + (FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)) == + (FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E))) + m->ol_flags |= PKT_RX_L4_CKSUM_BAD; + + if (d->d.staterr & FM10K_RXD_STATUS_VEXT) + m->ol_flags |= PKT_RX_VLAN_PKT; + + if (unlikely(d->d.staterr & FM10K_RXD_STATUS_HBO)) + m->ol_flags |= PKT_RX_HBUF_OVERFLOW; + + if (unlikely(d->d.staterr & FM10K_RXD_STATUS_RXE)) + m->ol_flags |= PKT_RX_RECIP_ERR; + + ptype = (d->d.data & FM10K_RXD_PKTTYPE_MASK_L3) >> + FM10K_RXD_PKTTYPE_SHIFT; + m->ol_flags |= pt_lut[(uint8_t)ptype]; +} + +uint16_t +fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct rte_mbuf *mbuf; + union 
fm10k_rx_desc desc; + struct fm10k_rx_queue *q = rx_queue; + uint16_t count = 0; + int alloc = 0; + uint16_t next_dd; + int ret; + + next_dd = q->next_dd; + + nb_pkts = RTE_MIN(nb_pkts, q->alloc_thresh); + for (count = 0; count < nb_pkts; ++count) { + mbuf = q->sw_ring[next_dd]; + desc = q->hw_ring[next_dd]; + if (!(desc.d.staterr & FM10K_RXD_STATUS_DD)) + break; +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX + dump_rxd(&desc); +#endif + rte_pktmbuf_pkt_len(mbuf) = desc.w.length; + rte_pktmbuf_data_len(mbuf) = desc.w.length; + + mbuf->ol_flags = 0; +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE + rx_desc_to_ol_flags(mbuf, &desc); +#endif + + mbuf->hash.rss = desc.d.rss; + + rx_pkts[count] = mbuf; + if (++next_dd == q->nb_desc) { + next_dd = 0; + alloc = 1; + } + + /* Prefetch next mbuf while processing current one. */ + rte_prefetch0(q->sw_ring[next_dd]); + + /* + * When next RX descriptor is on a cache-line boundary, + * prefetch the next 4 RX descriptors and the next 8 pointers + * to mbufs. + */ + if ((next_dd & 0x3) == 0) { + rte_prefetch0(&q->hw_ring[next_dd]); + rte_prefetch0(&q->sw_ring[next_dd]); + } + } + + q->next_dd = next_dd; + + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { + ret = rte_mempool_get_bulk(q->mp, + (void **)&q->sw_ring[q->next_alloc], + q->alloc_thresh); + + if (unlikely(ret != 0)) { + PMD_RX_LOG(ERR, "Failed to alloc mbuf"); + /* + * Need to restore next_dd if we cannot allocate new + * buffers to replenish the old ones. + */ + q->next_dd = (q->next_dd + q->nb_desc - count) % + q->nb_desc; + + return 0; + } + + for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) { + mbuf = q->sw_ring[q->next_alloc]; + + /* setup static mbuf fields */ + fm10k_pktmbuf_reset(mbuf, q->port_id); + + /* write descriptor */ + desc.q.pkt_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + desc.q.hdr_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + q->hw_ring[q->next_alloc] = desc; + } + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger); + q->next_trigger += q->alloc_thresh; + if (q->next_trigger >= q->nb_desc) { + q->next_trigger = q->alloc_thresh - 1; + q->next_alloc = 0; + } + } + + return count; +} + +static inline void tx_free_descriptors(struct fm10k_tx_queue *q) +{ + uint16_t next_rs, count = 0; + + next_rs = fifo_peek(&q->rs_tracker); + if (!(q->hw_ring[next_rs].flags & FM10K_TXD_FLAG_DONE)) + return; + + /* the DONE flag is set on this descriptor so remove the ID + * from the RS bit tracker and free the buffers */ + fifo_remove(&q->rs_tracker); + + /* wrap around? 
if so, free buffers from last_free up to but NOT + * including nb_desc */ + if (q->last_free > next_rs) { + count = q->nb_desc - q->last_free; + while (q->last_free < q->nb_desc) { + rte_pktmbuf_free_seg(q->sw_ring[q->last_free]); + q->sw_ring[q->last_free] = NULL; + ++q->last_free; + } + q->last_free = 0; + } + + /* adjust free descriptor count before the next loop */ + q->nb_free += count + (next_rs + 1 - q->last_free); + + /* free buffers from last_free, up to and including next_rs */ + while (q->last_free <= next_rs) { + rte_pktmbuf_free_seg(q->sw_ring[q->last_free]); + q->sw_ring[q->last_free] = NULL; + ++q->last_free; + } + + if (q->last_free == q->nb_desc) + q->last_free = 0; +} + +static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb) +{ + uint16_t last_id; + uint8_t flags; + + /* always set the LAST flag on the last descriptor used to + * transmit the packet */ + flags = FM10K_TXD_FLAG_LAST; + last_id = q->next_free + mb->nb_segs - 1; + if (last_id >= q->nb_desc) + last_id = last_id - q->nb_desc; + + /* but only set the RS flag on the last descriptor if rs_thresh + * descriptors will be used since the RS flag was last set */ + if ((q->nb_used + mb->nb_segs) >= q->rs_thresh) { + flags |= FM10K_TXD_FLAG_RS; + fifo_insert(&q->rs_tracker, last_id); + q->nb_used = 0; + } else { + q->nb_used = q->nb_used + mb->nb_segs; + } + + q->hw_ring[last_id].flags = flags; + q->nb_free -= mb->nb_segs; + + /* set checksum flags on first descriptor of packet. SCTP checksum + * offload is not supported, but we do not explicitely check for this + * case in favor of greatly simplified processing. */ + if (mb->ol_flags & (PKT_TX_IPV4_CSUM | PKT_TX_L4_MASK)) + q->hw_ring[q->next_free].flags |= FM10K_TXD_FLAG_CSUM; + + /* set vlan if requested */ + if (mb->ol_flags & PKT_TX_VLAN_PKT) + q->hw_ring[q->next_free].vlan = mb->vlan_tci; + + /* fill up the rings */ + for (; mb != NULL; mb = mb->next) { + q->sw_ring[q->next_free] = mb; + q->hw_ring[q->next_free].buffer_addr = + rte_cpu_to_le_64(MBUF_DMA_ADDR(mb)); + q->hw_ring[q->next_free].buflen = + rte_cpu_to_le_16(rte_pktmbuf_data_len(mb)); + if (++q->next_free == q->nb_desc) + q->next_free = 0; + } +} + +uint16_t +fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + struct fm10k_tx_queue *q = tx_queue; + struct rte_mbuf *mb; + uint16_t count; + + for (count = 0; count < nb_pkts; ++count) { + mb = tx_pkts[count]; + + /* running low on descriptors? try to free some... */ + if (q->nb_free < q->free_trigger) + tx_free_descriptors(q); + + /* make sure there are enough free descriptors to transmit the + * entire packet before doing anything */ + if (q->nb_free < mb->nb_segs) + break; + + /* sanity check to make sure the mbuf is valid */ + if ((mb->nb_segs == 0) || + ((mb->nb_segs > 1) && (mb->next == NULL))) + break; + + /* process the packet */ + tx_xmit_pkt(q, mb); + } + + /* update the tail pointer if any packets were processed */ + if (likely(count > 0)) + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_free); + + return count; +} -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
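A minimal forwarding-loop sketch showing how the two burst functions hooked up above (dev->rx_pkt_burst / dev->tx_pkt_burst) are reached from an application. The port id, queue id and burst size are illustrative assumptions.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    static void
    forward_one_burst(uint8_t port_id)
    {
        struct rte_mbuf *pkts[BURST_SIZE];
        uint16_t nb_rx, nb_tx, i;

        /* dispatches to fm10k_recv_pkts() through dev->rx_pkt_burst */
        nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST_SIZE);
        if (nb_rx == 0)
            return;

        /* dispatches to fm10k_xmit_pkts() through dev->tx_pkt_burst */
        nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);

        /* fm10k_xmit_pkts() stops early when it runs out of free descriptors
         * or meets a malformed mbuf, so the un-sent tail must be freed */
        for (i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(pkts[i]);
    }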
* Re: [dpdk-dev] [PATCH v5 10/17] fm10k: add receive and tranmit function 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 10/17] fm10k: add receive and tranmit function Chen Jing D(Mark) @ 2015-02-13 11:42 ` David Marchand 2015-02-17 13:07 ` Chen, Jing D 2015-02-13 11:53 ` David Marchand 1 sibling, 1 reply; 147+ messages in thread From: David Marchand @ 2015-02-13 11:42 UTC (permalink / raw) To: Chen Jing D(Mark); +Cc: dev Hello, On Fri, Feb 13, 2015 at 9:19 AM, Chen Jing D(Mark) <jing.d.chen@intel.com> wrote: [snip] + > + /* set checksum flags on first descriptor of packet. SCTP checksum > + * offload is not supported, but we do not explicitely check for > this > + * case in favor of greatly simplified processing. */ > Checkpatch is complaining : WARNING: 'explicitely' may be misspelled - perhaps 'explicitly'? #328: FILE: lib/librte_pmd_fm10k/fm10k_rxtx.c:261: + * offload is not supported, but we do not explicitely check for this -- David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v5 10/17] fm10k: add receive and tranmit function 2015-02-13 11:42 ` David Marchand @ 2015-02-17 13:07 ` Chen, Jing D 0 siblings, 0 replies; 147+ messages in thread From: Chen, Jing D @ 2015-02-17 13:07 UTC (permalink / raw) To: David Marchand; +Cc: dev Hi, From: David Marchand [mailto:david.marchand@6wind.com] Sent: Friday, February 13, 2015 7:43 PM To: Chen, Jing D Cc: dev@dpdk.org; Zhang, Helin; Qiu, Michael; Neil Horman; Thomas Monjalon; Shaw, Jeffrey B Subject: Re: [PATCH v5 10/17] fm10k: add receive and tranmit function Hello, On Fri, Feb 13, 2015 at 9:19 AM, Chen Jing D(Mark) <jing.d.chen@intel.com> wrote: [snip] + + /* set checksum flags on first descriptor of packet. SCTP checksum + * offload is not supported, but we do not explicitely check for this + * case in favor of greatly simplified processing. */ Checkpatch is complaining : WARNING: 'explicitely' may be misspelled - perhaps 'explicitly'? [Mark] Thanks, I'll change it. #328: FILE: lib/librte_pmd_fm10k/fm10k_rxtx.c:261: + * offload is not supported, but we do not explicitely check for this -- David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v5 10/17] fm10k: add receive and tranmit function 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 10/17] fm10k: add receive and tranmit function Chen Jing D(Mark) 2015-02-13 11:42 ` David Marchand @ 2015-02-13 11:53 ` David Marchand 2015-02-17 13:10 ` Chen, Jing D 1 sibling, 1 reply; 147+ messages in thread From: David Marchand @ 2015-02-13 11:53 UTC (permalink / raw) To: Chen Jing D(Mark); +Cc: dev On Fri, Feb 13, 2015 at 9:19 AM, Chen Jing D(Mark) <jing.d.chen@intel.com> wrote: [snip] + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { > + ret = rte_mempool_get_bulk(q->mp, > + (void > **)&q->sw_ring[q->next_alloc], > + q->alloc_thresh); > + > + if (unlikely(ret != 0)) { > + PMD_RX_LOG(ERR, "Failed to alloc mbuf"); > rx_mbuf_alloc_failed++ ? -- David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v5 10/17] fm10k: add receive and tranmit function 2015-02-13 11:53 ` David Marchand @ 2015-02-17 13:10 ` Chen, Jing D 0 siblings, 0 replies; 147+ messages in thread From: Chen, Jing D @ 2015-02-17 13:10 UTC (permalink / raw) To: David Marchand; +Cc: dev Hi, From: David Marchand [mailto:david.marchand@6wind.com] Sent: Friday, February 13, 2015 7:54 PM To: Chen, Jing D Cc: dev@dpdk.org; Zhang, Helin; Qiu, Michael; Neil Horman; Thomas Monjalon; Shaw, Jeffrey B Subject: Re: [PATCH v5 10/17] fm10k: add receive and tranmit function On Fri, Feb 13, 2015 at 9:19 AM, Chen Jing D(Mark) <jing.d.chen@intel.com<mailto:jing.d.chen@intel.com>> wrote: [snip] + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { + ret = rte_mempool_get_bulk(q->mp, + (void **)&q->sw_ring[q->next_alloc], + q->alloc_thresh); + + if (unlikely(ret != 0)) { + PMD_RX_LOG(ERR, "Failed to alloc mbuf"); rx_mbuf_alloc_failed++ ? [Mark] Thanks, I’ll change that. -- David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
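For reference, the change agreed to above would look roughly like the sketch below; the exact counter used in the merged driver may differ, this only illustrates bumping the ethdev-level allocation-failure statistic instead of just logging.

    if (unlikely(ret != 0)) {
        uint8_t port = q->port_id;
        PMD_RX_LOG(ERR, "Failed to alloc mbuf");
        /* hypothetical: account the failure in the generic ethdev stats */
        rte_eth_devices[port].data->rx_mbuf_alloc_failed++;
        /* restore next_dd as before */
        q->next_dd = (q->next_dd + q->nb_desc - count) % q->nb_desc;
        return 0;
    }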
* [dpdk-dev] [PATCH v5 11/17] fm10k: add PF RSS support 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (9 preceding siblings ...) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 10/17] fm10k: add receive and tranmit function Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 12/17] fm10k: Add scatter receive function Chen Jing D(Mark) ` (7 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Configure RSS in fm10k_dev_rx_init function. 2. Add fm10k_rss_hash_update and fm10k_rss_hash_conf_get to get and inquery RSS configuration. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 156 +++++++++++++++++++++++++++++++++++ 1 files changed, 156 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index e04d200..5858504 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -270,6 +270,78 @@ fm10k_dev_configure(struct rte_eth_dev *dev) return 0; } +static void +fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct rte_eth_conf *dev_conf = &dev->data->dev_conf; + uint32_t mrqc, *key, i, reta, j; + uint64_t hf; + +#define RSS_KEY_SIZE 40 + static uint8_t rss_intel_key[RSS_KEY_SIZE] = { + 0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2, + 0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0, + 0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4, + 0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C, + 0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA, + }; + + if (dev->data->nb_rx_queues == 1 || + dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS || + dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) + return; + + /* random key is rss_intel_key (default) or user provided (rss_key) */ + if (dev_conf->rx_adv_conf.rss_conf.rss_key == NULL) + key = (uint32_t *)rss_intel_key; + else + key = (uint32_t *)dev_conf->rx_adv_conf.rss_conf.rss_key; + + /* Now fill our hash function seeds, 4 bytes at a time */ + for (i = 0; i < RSS_KEY_SIZE / sizeof(*key); ++i) + FM10K_WRITE_REG(hw, FM10K_RSSRK(0, i), key[i]); + + /* + * Fill in redirection table + * The byte-swap is needed because NIC registers are in + * little-endian order. + */ + reta = 0; + for (i = 0, j = 0; i < FM10K_RETA_SIZE; i++, j++) { + if (j == dev->data->nb_rx_queues) + j = 0; + reta = (reta << CHAR_BIT) | j; + if ((i & 3) == 3) + FM10K_WRITE_REG(hw, FM10K_RETA(0, i >> 2), + rte_bswap32(reta)); + } + + /* + * Generate RSS hash based on packet types, TCP/UDP + * port numbers and/or IPv4/v6 src and dst addresses + */ + hf = dev_conf->rx_adv_conf.rss_conf.rss_hf; + mrqc = 0; + mrqc |= (hf & ETH_RSS_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? 
FM10K_MRQC_UDP_IPV6 : 0; + + if (mrqc == 0) { + PMD_INIT_LOG(ERR, "Specified RSS mode 0x%"PRIx64"is not" + "supported", hf); + return; + } + + FM10K_WRITE_REG(hw, FM10K_MRQC(0), mrqc); +} + static int fm10k_dev_tx_init(struct rte_eth_dev *dev) { @@ -359,6 +431,8 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_FLUSH(hw); } + /* Configure RSS if applicable */ + fm10k_dev_mq_rx_configure(dev); return 0; } @@ -1165,6 +1239,86 @@ fm10k_reta_query(struct rte_eth_dev *dev, return 0; } +static int +fm10k_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t *key = (uint32_t *)rss_conf->rss_key; + uint32_t mrqc; + uint64_t hf = rss_conf->rss_hf; + int i; + + PMD_INIT_FUNC_TRACE(); + + if (rss_conf->rss_key_len < FM10K_RSSRK_SIZE * + FM10K_RSSRK_ENTRIES_PER_REG) + return -EINVAL; + + if (hf == 0) + return -EINVAL; + + mrqc = 0; + mrqc |= (hf & ETH_RSS_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0; + + /* If the mapping doesn't fit any supported, return */ + if (mrqc == 0) + return -EINVAL; + + if (key != NULL) + for (i = 0; i < FM10K_RSSRK_SIZE; ++i) + FM10K_WRITE_REG(hw, FM10K_RSSRK(0, i), key[i]); + + FM10K_WRITE_REG(hw, FM10K_MRQC(0), mrqc); + + return 0; +} + +static int +fm10k_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t *key = (uint32_t *)rss_conf->rss_key; + uint32_t mrqc; + uint64_t hf; + int i; + + PMD_INIT_FUNC_TRACE(); + + if (rss_conf->rss_key_len < FM10K_RSSRK_SIZE * + FM10K_RSSRK_ENTRIES_PER_REG) + return -EINVAL; + + if (key != NULL) + for (i = 0; i < FM10K_RSSRK_SIZE; ++i) + key[i] = FM10K_READ_REG(hw, FM10K_RSSRK(0, i)); + + mrqc = FM10K_READ_REG(hw, FM10K_MRQC(0)); + hf = 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? ETH_RSS_IPV4_TCP : 0; + hf |= (mrqc & FM10K_MRQC_IPV4) ? ETH_RSS_IPV4 : 0; + hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6 : 0; + hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6_EX : 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP : 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP_EX : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? ETH_RSS_IPV4_UDP : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP_EX : 0; + + rss_conf->rss_hf = hf; + + return 0; +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -1233,6 +1387,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .tx_queue_release = fm10k_tx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, + .rss_hash_update = fm10k_rss_hash_update, + .rss_hash_conf_get = fm10k_rss_hash_conf_get, }; static int -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
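A short application-side sketch of the two new ops. The key bytes and hash-field set below are illustrative assumptions; only the 40-byte key length matches the RSS_KEY_SIZE programmed in fm10k_dev_mq_rx_configure() above.

    #include <rte_ethdev.h>

    static int
    reprogram_rss(uint8_t port_id)
    {
        /* 40 bytes; the remaining bytes stay zero here purely for brevity */
        static uint8_t rss_key[40] = { 0x6D, 0x5A, 0x56, 0xDA };
        struct rte_eth_rss_conf conf = {
            .rss_key = rss_key,
            .rss_key_len = sizeof(rss_key),
            .rss_hf = ETH_RSS_IPV4 | ETH_RSS_IPV4_TCP | ETH_RSS_IPV4_UDP,
        };

        /* reaches fm10k_rss_hash_update(); returns -EINVAL if the key is too
         * short or none of the requested hash fields is supported */
        return rte_eth_dev_rss_hash_update(port_id, &conf);
    }

    static int
    dump_rss(uint8_t port_id)
    {
        uint8_t key[40];
        struct rte_eth_rss_conf conf = {
            .rss_key = key,
            .rss_key_len = sizeof(key),
        };

        /* reaches fm10k_rss_hash_conf_get() and fills in both key and rss_hf */
        return rte_eth_dev_rss_hash_conf_get(port_id, &conf);
    }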
* [dpdk-dev] [PATCH v5 12/17] fm10k: Add scatter receive function 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (10 preceding siblings ...) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 11/17] fm10k: add PF RSS support Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-13 11:55 ` David Marchand 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 13/17] fm10k: add function to set vlan Chen Jing D(Mark) ` (6 subsequent siblings) 18 siblings, 1 reply; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_recv_scattered_pkts function to receive jumbo frame and multi-segment packets. 2. Configure correct receive function in rx_init and dev_init. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k.h | 3 + lib/librte_pmd_fm10k/fm10k_ethdev.c | 15 ++++ lib/librte_pmd_fm10k/fm10k_rxtx.c | 143 +++++++++++++++++++++++++++++++++++ 3 files changed, 161 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index a9b19cd..8741936 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -285,6 +285,9 @@ fm10k_addr_alignment_valid(struct rte_mbuf *mb) uint16_t fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +uint16_t fm10k_recv_scattered_pkts(void *rx_queue, + struct rte_mbuf **rx_pkts, uint16_t nb_pkts); + uint16_t fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 5858504..ead1585 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -423,6 +423,13 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_REG(hw, FM10K_SRRCTL(i), buf_size >> FM10K_SRRCTL_BSIZEPKT_SHIFT); + /* It adds dual VLAN length for supporting dual VLAN */ + if ((dev->data->dev_conf.rxmode.max_rx_pkt_len + + 2 * FM10K_VLAN_TAG_SIZE) > buf_size){ + dev->data->scattered_rx = 1; + dev->rx_pkt_burst = fm10k_recv_scattered_pkts; + } + /* Enable drop on empty, it's RO for VF */ if (hw->mac.type == fm10k_mac_pf && rxq->drop_en) rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY; @@ -431,6 +438,11 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_FLUSH(hw); } + if (dev->data->dev_conf.rxmode.enable_scatter) { + dev->rx_pkt_burst = fm10k_recv_scattered_pkts; + dev->data->scattered_rx = 1; + } + /* Configure RSS if applicable */ fm10k_dev_mq_rx_configure(dev); return 0; @@ -1404,6 +1416,9 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, dev->rx_pkt_burst = &fm10k_recv_pkts; dev->tx_pkt_burst = &fm10k_xmit_pkts; + if (dev->data->scattered_rx) + dev->rx_pkt_burst = &fm10k_recv_scattered_pkts; + /* only initialize in the primary process */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c index 9cead3a..2a2fa84 100644 --- a/lib/librte_pmd_fm10k/fm10k_rxtx.c +++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c @@ -194,6 +194,149 @@ fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, return count; } +uint16_t +fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct rte_mbuf *mbuf; + union fm10k_rx_desc desc; + struct fm10k_rx_queue *q = rx_queue; + uint16_t count = 0; + uint16_t nb_rcv, nb_seg; + int alloc = 0; + 
uint16_t next_dd; + struct rte_mbuf *first_seg = q->pkt_first_seg; + struct rte_mbuf *last_seg = q->pkt_last_seg; + int ret; + + next_dd = q->next_dd; + nb_rcv = 0; + + nb_seg = RTE_MIN(nb_pkts, q->alloc_thresh); + for (count = 0; count < nb_seg; count++) { + mbuf = q->sw_ring[next_dd]; + desc = q->hw_ring[next_dd]; + if (!(desc.d.staterr & FM10K_RXD_STATUS_DD)) + break; +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX + dump_rxd(&desc); +#endif + + if (++next_dd == q->nb_desc) { + next_dd = 0; + alloc = 1; + } + + /* Prefetch next mbuf while processing current one. */ + rte_prefetch0(q->sw_ring[next_dd]); + + /* + * When next RX descriptor is on a cache-line boundary, + * prefetch the next 4 RX descriptors and the next 8 pointers + * to mbufs. + */ + if ((next_dd & 0x3) == 0) { + rte_prefetch0(&q->hw_ring[next_dd]); + rte_prefetch0(&q->sw_ring[next_dd]); + } + + /* Fill data length */ + rte_pktmbuf_data_len(mbuf) = desc.w.length; + + /* + * If this is the first buffer of the received packet, + * set the pointer to the first mbuf of the packet and + * initialize its context. + * Otherwise, update the total length and the number of segments + * of the current scattered packet, and update the pointer to + * the last mbuf of the current packet. + */ + if (!first_seg) { + first_seg = mbuf; + first_seg->pkt_len = desc.w.length; + } else { + first_seg->pkt_len = + (uint16_t)(first_seg->pkt_len + + rte_pktmbuf_data_len(mbuf)); + first_seg->nb_segs++; + last_seg->next = mbuf; + } + + /* + * If this is not the last buffer of the received packet, + * update the pointer to the last mbuf of the current scattered + * packet and continue to parse the RX ring. + */ + if (!(desc.d.staterr & FM10K_RXD_STATUS_EOP)) { + last_seg = mbuf; + continue; + } + + first_seg->ol_flags = 0; +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE + rx_desc_to_ol_flags(first_seg, &desc); +#endif + first_seg->hash.rss = desc.d.rss; + + /* Prefetch data of first segment, if configured to do so. */ + rte_packet_prefetch((char *)first_seg->buf_addr + + first_seg->data_off); + + /* + * Store the mbuf address into the next entry of the array + * of returned packets. + */ + rx_pkts[nb_rcv++] = first_seg; + + /* + * Setup receipt context for a new packet. + */ + first_seg = NULL; + } + + q->next_dd = next_dd; + + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { + ret = rte_mempool_get_bulk(q->mp, + (void **)&q->sw_ring[q->next_alloc], + q->alloc_thresh); + + if (unlikely(ret != 0)) { + PMD_RX_LOG(ERR, "Failed to alloc mbuf"); + /* + * Need to restore next_dd if we cannot allocate new + * buffers to replenish the old ones. + */ + q->next_dd = (q->next_dd + q->nb_desc - count) % + q->nb_desc; + return 0; + } + + for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) { + mbuf = q->sw_ring[q->next_alloc]; + + /* setup static mbuf fields */ + fm10k_pktmbuf_reset(mbuf, q->port_id); + + /* write descriptor */ + desc.q.pkt_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + desc.q.hdr_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + q->hw_ring[q->next_alloc] = desc; + } + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger); + q->next_trigger += q->alloc_thresh; + if (q->next_trigger >= q->nb_desc) { + q->next_trigger = q->alloc_thresh - 1; + q->next_alloc = 0; + } + } + + q->pkt_first_seg = first_seg; + q->pkt_last_seg = last_seg; + + return nb_rcv; +} + static inline void tx_free_descriptors(struct fm10k_tx_queue *q) { uint16_t next_rs, count = 0; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
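A small consumer-side sketch: with scattered receive a packet comes back as a chain of mbufs, and only the first segment carries pkt_len, nb_segs, ol_flags and the RSS hash set above. The path is selected either explicitly through rxmode.enable_scatter or implicitly when max_rx_pkt_len plus the dual VLAN allowance exceeds the per-mbuf buffer size, as done in fm10k_dev_rx_init() above. The helper below is an illustrative assumption, not part of the patch.

    #include <rte_mbuf.h>

    static uint32_t
    count_packet_bytes(const struct rte_mbuf *m)
    {
        uint32_t bytes = 0;

        /* walk the segment chain built by fm10k_recv_scattered_pkts() */
        while (m != NULL) {
            bytes += rte_pktmbuf_data_len(m);   /* per-segment length */
            m = m->next;
        }
        /* for a well-formed chain this equals rte_pktmbuf_pkt_len()
         * of the first segment */
        return bytes;
    }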
* Re: [dpdk-dev] [PATCH v5 12/17] fm10k: Add scatter receive function 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 12/17] fm10k: Add scatter receive function Chen Jing D(Mark) @ 2015-02-13 11:55 ` David Marchand 2015-02-17 13:11 ` Chen, Jing D 0 siblings, 1 reply; 147+ messages in thread From: David Marchand @ 2015-02-13 11:55 UTC (permalink / raw) To: Chen Jing D(Mark); +Cc: dev Hello, On Fri, Feb 13, 2015 at 9:19 AM, Chen Jing D(Mark) <jing.d.chen@intel.com> wrote: [snip] + if (unlikely(ret != 0)) { > + PMD_RX_LOG(ERR, "Failed to alloc mbuf"); > Idem mono segment. rx_mbuf_alloc_failed++ ? -- David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v5 12/17] fm10k: Add scatter receive function 2015-02-13 11:55 ` David Marchand @ 2015-02-17 13:11 ` Chen, Jing D 0 siblings, 0 replies; 147+ messages in thread From: Chen, Jing D @ 2015-02-17 13:11 UTC (permalink / raw) To: David Marchand; +Cc: dev Hi, From: David Marchand [mailto:david.marchand@6wind.com] Sent: Friday, February 13, 2015 7:55 PM To: Chen, Jing D Cc: dev@dpdk.org; Zhang, Helin; Qiu, Michael; Neil Horman; Thomas Monjalon; Shaw, Jeffrey B Subject: Re: [PATCH v5 12/17] fm10k: Add scatter receive function Hello, On Fri, Feb 13, 2015 at 9:19 AM, Chen Jing D(Mark) <jing.d.chen@intel.com<mailto:jing.d.chen@intel.com>> wrote: [snip] + if (unlikely(ret != 0)) { + PMD_RX_LOG(ERR, "Failed to alloc mbuf"); Idem mono segment. rx_mbuf_alloc_failed++ ? [Mark] Thanks, I’ll change that. -- David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v5 13/17] fm10k: add function to set vlan 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (11 preceding siblings ...) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 12/17] fm10k: Add scatter receive function Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 14/17] fm10k: Add SRIOV-VF support Chen Jing D(Mark) ` (5 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_vlan_filter_set to set vlan. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 15 +++++++++++++++ 1 files changed, 15 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index ead1585..e28a94c 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -787,6 +787,20 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev, } +static int +fm10k_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + /* @todo - add support for the VF */ + if (hw->mac.type != fm10k_mac_pf) + return -ENOTSUP; + + return fm10k_update_vlan(hw, vlan_id, 0, on); +} + static inline int check_nb_desc(uint16_t min, uint16_t max, uint16_t mult, uint16_t request) { @@ -1389,6 +1403,7 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, .dev_infos_get = fm10k_dev_infos_get, + .vlan_filter_set = fm10k_vlan_filter_set, .rx_queue_start = fm10k_dev_rx_queue_start, .rx_queue_stop = fm10k_dev_rx_queue_stop, .tx_queue_start = fm10k_dev_tx_queue_start, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
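The new op is reached through the generic ethdev API; a minimal sketch follows (the VLAN id is an arbitrary example).

    #include <rte_ethdev.h>

    static int
    join_vlan_100(uint8_t port_id)
    {
        /* on = 1 adds the membership entry, on = 0 removes it; this lands in
         * fm10k_vlan_filter_set(), which returns -ENOTSUP on a VF at this
         * point in the series */
        return rte_eth_dev_vlan_filter(port_id, 100, 1);
    }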
* [dpdk-dev] [PATCH v5 14/17] fm10k: Add SRIOV-VF support 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (12 preceding siblings ...) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 13/17] fm10k: add function to set vlan Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 15/17] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark) ` (4 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> fm10k pmd driver will support both PF and VF device with single copy of code. The reason is NIC maps registers with same function in PF and VF to same PCI I/O address. Then, PF/VF drivers use same address to access registers belonging to it, HW will translate the request to correct units. For some functionalities that are unique to PF, driver will check current driver type and behave correctly. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index e28a94c..c3b4914 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1563,6 +1563,7 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, */ static struct rte_pci_id pci_id_fm10k_map[] = { #define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, +#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, #include "rte_pci_dev_ids.h" { .vendor_id = 0, /* sentinel */ }, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
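For readers wondering how a single added #define enables the VF: pci_id_fm10k_map is populated by defining the two RTE_PCI_DEV_ID_DECL_FM10K* macros immediately before including rte_pci_dev_ids.h, so after preprocessing the table is roughly equivalent to the sketch below (device ids taken from the eal patch in this series).

    static struct rte_pci_id pci_id_fm10k_map[] = {
        { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_PF) },  /* 0x15A4 */
        { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_VF) },  /* 0x15A5 */
        { .vendor_id = 0, /* sentinel */ },
    };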
* [dpdk-dev] [PATCH v5 15/17] fm10k: add PF and VF interrupt handling function 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (13 preceding siblings ...) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 14/17] fm10k: Add SRIOV-VF support Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-13 11:42 ` David Marchand 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 16/17] maintainers: claim for fm10k review Chen Jing D(Mark) ` (3 subsequent siblings) 18 siblings, 1 reply; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add functions to enable PF/VF interrupt. 2. Add function to process error message passed from interrupt. 2. Add 2 interrupt handling functions, one for PF and one for VF. 2. Enable interrupt after completing initialization of NIC. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 268 +++++++++++++++++++++++++++++++++++ 1 files changed, 268 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index c3b4914..0a095c8 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1345,6 +1345,256 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev, return 0; } +static void +fm10k_dev_enable_intr_pf(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t int_map = FM10K_INT_MAP_IMMEDIATE; + + /* Bind all local non-queue interrupt to vector 0 */ + int_map |= 0; + + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_Mailbox), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_PCIeFault), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SwitchUpDown), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SwitchEvent), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SRAM), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_VFLR), int_map); + + /* Enable misc causes */ + FM10K_WRITE_REG(hw, FM10K_EIMR, FM10K_EIMR_ENABLE(PCA_FAULT) | + FM10K_EIMR_ENABLE(THI_FAULT) | + FM10K_EIMR_ENABLE(FUM_FAULT) | + FM10K_EIMR_ENABLE(MAILBOX) | + FM10K_EIMR_ENABLE(SWITCHREADY) | + FM10K_EIMR_ENABLE(SWITCHNOTREADY) | + FM10K_EIMR_ENABLE(SRAMERROR) | + FM10K_EIMR_ENABLE(VFLR)); + + /* Enable ITR 0 */ + FM10K_WRITE_REG(hw, FM10K_ITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + FM10K_WRITE_FLUSH(hw); +} + +static void +fm10k_dev_enable_intr_vf(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t int_map = FM10K_INT_MAP_IMMEDIATE; + + /* Bind all local non-queue interrupt to vector 0 */ + int_map |= 0; + + /* Only INT 0 availiable, other 15 are reserved. 
*/ + FM10K_WRITE_REG(hw, FM10K_VFINT_MAP, int_map); + + /* Enable ITR 0 */ + FM10K_WRITE_REG(hw, FM10K_VFITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + FM10K_WRITE_FLUSH(hw); +} + +static int +fm10k_dev_handle_fault(struct fm10k_hw *hw, uint32_t eicr) +{ + struct fm10k_fault fault; + int err; + const char *estr = "Unknown error"; + + /* Process PCA fault */ + if (eicr & FM10K_EIMR_PCA_FAULT) { + err = fm10k_get_fault(hw, FM10K_PCA_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case PCA_NO_FAULT: + estr = "PCA_NO_FAULT"; break; + case PCA_UNMAPPED_ADDR: + estr = "PCA_UNMAPPED_ADDR"; break; + case PCA_BAD_QACCESS_PF: + estr = "PCA_BAD_QACCESS_PF"; break; + case PCA_BAD_QACCESS_VF: + estr = "PCA_BAD_QACCESS_VF"; break; + case PCA_MALICIOUS_REQ: + estr = "PCA_MALICIOUS_REQ"; break; + case PCA_POISONED_TLP: + estr = "PCA_POISONED_TLP"; break; + case PCA_TLP_ABORT: + estr = "PCA_TLP_ABORT"; break; + default: + goto error; + } + PMD_INIT_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIx64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + /* Process THI fault */ + if (eicr & FM10K_EIMR_THI_FAULT) { + err = fm10k_get_fault(hw, FM10K_THI_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case THI_NO_FAULT: + estr = "THI_NO_FAULT"; break; + case THI_MAL_DIS_Q_FAULT: + estr = "THI_MAL_DIS_Q_FAULT"; break; + default: + goto error; + } + PMD_INIT_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIx64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + /* Process FUM fault */ + if (eicr & FM10K_EIMR_FUM_FAULT) { + err = fm10k_get_fault(hw, FM10K_FUM_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case FUM_NO_FAULT: + estr = "FUM_NO_FAULT"; break; + case FUM_UNMAPPED_ADDR: + estr = "FUM_UNMAPPED_ADDR"; break; + case FUM_POISONED_TLP: + estr = "FUM_POISONED_TLP"; break; + case FUM_BAD_VF_QACCESS: + estr = "FUM_BAD_VF_QACCESS"; break; + case FUM_ADD_DECODE_ERR: + estr = "FUM_ADD_DECODE_ERR"; break; + case FUM_RO_ERROR: + estr = "FUM_RO_ERROR"; break; + case FUM_QPRC_CRC_ERROR: + estr = "FUM_QPRC_CRC_ERROR"; break; + case FUM_CSR_TIMEOUT: + estr = "FUM_CSR_TIMEOUT"; break; + case FUM_INVALID_TYPE: + estr = "FUM_INVALID_TYPE"; break; + case FUM_INVALID_LENGTH: + estr = "FUM_INVALID_LENGTH"; break; + case FUM_INVALID_BE: + estr = "FUM_INVALID_BE"; break; + case FUM_INVALID_ALIGN: + estr = "FUM_INVALID_ALIGN"; break; + default: + goto error; + } + PMD_INIT_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIx64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + if (estr) + return 0; + return 0; +error: + PMD_INIT_LOG(ERR, "Failed to handle fault event."); + return err; +} + +/** + * PF interrupt handler triggered by NIC for handling specific interrupt. + * + * @param handle + * Pointer to interrupt handle. + * @param param + * The address of parameter (struct rte_eth_dev *) regsitered before. 
+ * + * @return + * void + */ +static void +fm10k_dev_interrupt_handler_pf( + __rte_unused struct rte_intr_handle *handle, + void *param) +{ + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t cause, status; + + if (hw->mac.type != fm10k_mac_pf) + return; + + cause = FM10K_READ_REG(hw, FM10K_EICR); + + /* Handle PCI fault cases */ + if (cause & FM10K_EICR_FAULT_MASK) { + PMD_INIT_LOG(ERR, "INT: find fault!"); + fm10k_dev_handle_fault(hw, cause); + } + + /* Handle switch up/down */ + if (cause & FM10K_EICR_SWITCHNOTREADY) + PMD_INIT_LOG(ERR, "INT: Switch is not ready"); + + if (cause & FM10K_EICR_SWITCHREADY) + PMD_INIT_LOG(INFO, "INT: Switch is ready"); + + /* Handle mailbox message */ + fm10k_mbx_lock(hw); + hw->mbx.ops.process(hw, &hw->mbx); + fm10k_mbx_unlock(hw); + + /* Handle SRAM error */ + if (cause & FM10K_EICR_SRAMERROR) { + PMD_INIT_LOG(ERR, "INT: SRAM error on PEP"); + + status = FM10K_READ_REG(hw, FM10K_SRAM_IP); + /* Write to clear pending bits */ + FM10K_WRITE_REG(hw, FM10K_SRAM_IP, status); + + /* Todo: print out error message after shared code updates */ + } + + /* Clear these 3 events if having any */ + cause &= FM10K_EICR_SWITCHNOTREADY | FM10K_EICR_MAILBOX | + FM10K_EICR_SWITCHREADY; + if (cause) + FM10K_WRITE_REG(hw, FM10K_EICR, cause); + + /* Re-enable interrupt from device side */ + FM10K_WRITE_REG(hw, FM10K_ITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + /* Re-enable interrupt from host side */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); +} + +/** + * VF interrupt handler triggered by NIC for handling specific interrupt. + * + * @param handle + * Pointer to interrupt handle. + * @param param + * The address of parameter (struct rte_eth_dev *) regsitered before. + * + * @return + * void + */ +static void +fm10k_dev_interrupt_handler_vf( + __rte_unused struct rte_intr_handle *handle, + void *param) +{ + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + if (hw->mac.type != fm10k_mac_vf) + return; + + /* Handle mailbox message if lock is acquired */ + fm10k_mbx_lock(hw); + hw->mbx.ops.process(hw, &hw->mbx); + fm10k_mbx_unlock(hw); + + /* Re-enable interrupt from device side */ + FM10K_WRITE_REG(hw, FM10K_VFITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + /* Re-enable interrupt from host side */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -1525,6 +1775,21 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, return -EIO; } + /*PF/VF has different interrupt handling mechanism */ + if (hw->mac.type == fm10k_mac_pf) { + /* register callback func to eal lib */ + rte_intr_callback_register(&(dev->pci_dev->intr_handle), + fm10k_dev_interrupt_handler_pf, (void *)dev); + + /* enable MISC interrupt */ + fm10k_dev_enable_intr_pf(dev); + } else { /* VF */ + rte_intr_callback_register(&(dev->pci_dev->intr_handle), + fm10k_dev_interrupt_handler_vf, (void *)dev); + + fm10k_dev_enable_intr_vf(dev); + } + /* * Below function will trigger operations on mailbox, acquire lock to * avoid race condition from interrupt handler. 
Operations on mailbox @@ -1554,6 +1819,9 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, fm10k_mbx_unlock(hw); + /* enable uio intr after callback registered */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); + return 0; } -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
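The two handlers above follow the usual EAL interrupt pattern; stripped of the fm10k specifics it is roughly the sketch below (function names are illustrative, the ordering is the part dictated by the EAL/uio model).

    #include <rte_common.h>
    #include <rte_interrupts.h>
    #include <rte_ethdev.h>

    static void
    example_intr_handler(__rte_unused struct rte_intr_handle *handle, void *param)
    {
        struct rte_eth_dev *dev = (struct rte_eth_dev *)param;

        /* 1. read the cause register(s) and service them (faults, mailbox, ...) */
        /* 2. write back to clear the handled causes                             */
        /* 3. unmask the interrupt on the device side (ITR register)             */
        /* 4. finally re-arm the interrupt on the host/uio side                  */
        rte_intr_enable(&dev->pci_dev->intr_handle);
    }

    static void
    example_intr_setup(struct rte_eth_dev *dev)
    {
        /* register the callback first, enable the host-side interrupt last */
        rte_intr_callback_register(&dev->pci_dev->intr_handle,
                example_intr_handler, (void *)dev);
        rte_intr_enable(&dev->pci_dev->intr_handle);
    }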
* Re: [dpdk-dev] [PATCH v5 15/17] fm10k: add PF and VF interrupt handling function 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 15/17] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark) @ 2015-02-13 11:42 ` David Marchand 2015-02-17 13:12 ` Chen, Jing D 0 siblings, 1 reply; 147+ messages in thread From: David Marchand @ 2015-02-13 11:42 UTC (permalink / raw) To: Chen Jing D(Mark); +Cc: dev Hello, On Fri, Feb 13, 2015 at 9:19 AM, Chen Jing D(Mark) <jing.d.chen@intel.com> wrote: [snip] + > + /* Only INT 0 availiable, other 15 are reserved. */ > available. -- David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v5 15/17] fm10k: add PF and VF interrupt handling function 2015-02-13 11:42 ` David Marchand @ 2015-02-17 13:12 ` Chen, Jing D 0 siblings, 0 replies; 147+ messages in thread From: Chen, Jing D @ 2015-02-17 13:12 UTC (permalink / raw) To: David Marchand; +Cc: dev Hi, From: David Marchand [mailto:david.marchand@6wind.com] Sent: Friday, February 13, 2015 7:42 PM To: Chen, Jing D Cc: dev@dpdk.org; Zhang, Helin; Qiu, Michael; Neil Horman; Thomas Monjalon; Shaw, Jeffrey B Subject: Re: [PATCH v5 15/17] fm10k: add PF and VF interrupt handling function Hello, On Fri, Feb 13, 2015 at 9:19 AM, Chen Jing D(Mark) <jing.d.chen@intel.com<mailto:jing.d.chen@intel.com>> wrote: [snip] + + /* Only INT 0 availiable, other 15 are reserved. */ available. [Mark] Thanks, I’ll change it. -- David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v5 16/17] maintainers: claim for fm10k review 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (14 preceding siblings ...) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 15/17] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 17/17] fm10k: Add ABI version of librte_pmd_fm10k Chen Jing D(Mark) ` (2 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: "Chen Jing D(Mark)" <jing.d.chen@intel.com> Claim for fm10k polling mode driver review. Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- MAINTAINERS | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diff --git a/MAINTAINERS b/MAINTAINERS index a771fa3..e7a425b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -213,6 +213,10 @@ Intel i40e M: Helin Zhang <helin.zhang@intel.com> F: lib/librte_pmd_i40e/ +Intel fm10k +M: Jing Chen <jing.d.chen@intel.com> +F: lib/librte_pmd_fm10k/ + RedHat virtio M: Changchun Ouyang <changchun.ouyang@intel.com> F: lib/librte_pmd_virtio/ -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v5 17/17] fm10k: Add ABI version of librte_pmd_fm10k 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (15 preceding siblings ...) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 16/17] maintainers: claim for fm10k review Chen Jing D(Mark) @ 2015-02-13 8:19 ` Chen Jing D(Mark) 2015-02-13 8:37 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Zhang, Helin 2015-02-15 5:07 ` Qiu, Michael 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-13 8:19 UTC (permalink / raw) To: dev From: Michael Qiu <michael.qiu@intel.com> ABI version must be specified, set to version 1 for DPDK 2.0 Signed-off-by: Michael Qiu <michael.qiu@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/Makefile | 4 ++++ lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map | 4 ++++ 2 files changed, 8 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map diff --git a/lib/librte_pmd_fm10k/Makefile b/lib/librte_pmd_fm10k/Makefile index 8ab788c..986f4ef 100644 --- a/lib/librte_pmd_fm10k/Makefile +++ b/lib/librte_pmd_fm10k/Makefile @@ -39,6 +39,10 @@ LIB = librte_pmd_fm10k.a CFLAGS += -O3 CFLAGS += $(WERROR_FLAGS) +EXPORT_MAP := rte_pmd_fm10k_version.map + +LIBABIVER := 1 + ifeq ($(CC), icc) # # CFLAGS for icc diff --git a/lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map b/lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map new file mode 100644 index 0000000..ef35398 --- /dev/null +++ b/lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map @@ -0,0 +1,4 @@ +DPDK_2.0 { + + local: *; +}; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (16 preceding siblings ...) 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 17/17] fm10k: Add ABI version of librte_pmd_fm10k Chen Jing D(Mark) @ 2015-02-13 8:37 ` Zhang, Helin 2015-02-15 5:07 ` Qiu, Michael 18 siblings, 0 replies; 147+ messages in thread From: Zhang, Helin @ 2015-02-13 8:37 UTC (permalink / raw) To: Chen, Jing D, dev > -----Original Message----- > From: Chen, Jing D > Sent: Friday, February 13, 2015 4:20 PM > To: dev@dpdk.org > Cc: Zhang, Helin; Qiu, Michael; nhorman@tuxdriver.com; > thomas.monjalon@6wind.com; david.marchand@6wind.com; Chen, Jing D > Subject: [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver > > From: "Chen Jing D(Mark)" <jing.d.chen@intel.com> > > The patch set add poll mode driver for the host interface of Intel Ethernet > Switch FM10000 Series of silicons, which integrate NIC and switch > functionalities. The patch set include below features: > > 1. Basic RX/TX functions for PF/VF. > 2. Interrupt handling mechanism for PF/VF. > 3. per queue start/stop functions for PF/VF. > 4. Mailbox handling between PF/VF and PF/Switch Manager. > 5. Receive Side Scaling (RSS) for PF/VF. > 6. Scatter receive function for PF/VF. > 7. reta update/query for PF/VF. > 8. VLAN filter set for PF. > 9. Link status query for PF/VF. > > Change in v5: > - Add sanity check for mbuf allocation. > - Add a new patch to claim fm10k driver review > - Change commit log. > - Add unlikely in func rx_desc_to_ol_flags to gain performance > - Add a new patch to add ABI version > > Change in v4: > - Change commit log to remove improper words. > > Changes in v3: > - Update base driver. > - Define several macros to pass base driver compile. > > Changes in v2: > - Merge 3 patches into 1 to configure fm10k compile environment. > - Rework on log code to follow style in ixgbe. > - Rework log message, remove redundant '\n' > - Update Copyright year from "2014" to "2015" > - Change base driver directory name from SHARED to base > - Add more description in log for patch "add PF and VF interrupt" > - Merge 2 patches into 1 to register fm10k driver > - Define macro to replace numeric for lower 32-bit mask. 
> > Chen Jing D(Mark) (1): > maintainers: claim for fm10k review > > Jeff Shaw (15): > fm10k: add base driver > eal: add fm10k device id > fm10k: register fm10k pmd PF driver > Change config files to add fm10k into compile > fm10k: add reta update/requery functions > fm10k: add rx_queue_setup/release function > fm10k: add tx_queue_setup/release function > fm10k: add RX/TX single queue start/stop function > fm10k: add dev start/stop functions > fm10k: add receive and tranmit function > fm10k: add PF RSS support > fm10k: Add scatter receive function > fm10k: add function to set vlan > fm10k: Add SRIOV-VF support > fm10k: add PF and VF interrupt handling function > > Michael Qiu (1): > fm10k: Add ABI version of librte_pmd_fm10k > > MAINTAINERS | 4 + > config/common_bsdapp | 11 + > config/common_linuxapp | 11 + > lib/Makefile | 1 + > lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 + > lib/librte_pmd_fm10k/Makefile | 100 + > lib/librte_pmd_fm10k/base/fm10k_api.c | 341 ++++ > lib/librte_pmd_fm10k/base/fm10k_api.h | 61 + > lib/librte_pmd_fm10k/base/fm10k_common.c | 572 ++++++ > lib/librte_pmd_fm10k/base/fm10k_common.h | 52 + > lib/librte_pmd_fm10k/base/fm10k_mbx.c | 2185 > +++++++++++++++++++++++ > lib/librte_pmd_fm10k/base/fm10k_mbx.h | 329 ++++ > lib/librte_pmd_fm10k/base/fm10k_osdep.h | 148 ++ > lib/librte_pmd_fm10k/base/fm10k_pf.c | 1992 > +++++++++++++++++++++ > lib/librte_pmd_fm10k/base/fm10k_pf.h | 155 ++ > lib/librte_pmd_fm10k/base/fm10k_tlv.c | 914 ++++++++++ > lib/librte_pmd_fm10k/base/fm10k_tlv.h | 199 ++ > lib/librte_pmd_fm10k/base/fm10k_type.h | 937 ++++++++++ > lib/librte_pmd_fm10k/base/fm10k_vf.c | 641 +++++++ > lib/librte_pmd_fm10k/base/fm10k_vf.h | 91 + > lib/librte_pmd_fm10k/fm10k.h | 293 +++ > lib/librte_pmd_fm10k/fm10k_ethdev.c | 1868 > +++++++++++++++++++ > lib/librte_pmd_fm10k/fm10k_logs.h | 78 + > lib/librte_pmd_fm10k/fm10k_rxtx.c | 459 +++++ > lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map | 4 + > mk/rte.app.mk | 4 + > 26 files changed, 11472 insertions(+), 0 deletions(-) create mode 100644 > lib/librte_pmd_fm10k/Makefile create mode 100644 > lib/librte_pmd_fm10k/base/fm10k_api.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_osdep.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_type.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.h > create mode 100644 lib/librte_pmd_fm10k/fm10k.h create mode 100644 > lib/librte_pmd_fm10k/fm10k_ethdev.c > create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h create mode > 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c create mode 100644 > lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map > > -- > 1.7.7.6 Acked-by: Helin Zhang <helin.zhang@intel.com> ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver 2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (17 preceding siblings ...) 2015-02-13 8:37 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Zhang, Helin @ 2015-02-15 5:07 ` Qiu, Michael 18 siblings, 0 replies; 147+ messages in thread From: Qiu, Michael @ 2015-02-15 5:07 UTC (permalink / raw) To: Chen, Jing D, dev On 2/13/2015 4:20 PM, Chen, Jing D wrote: > From: "Chen Jing D(Mark)" <jing.d.chen@intel.com> > > The patch set add poll mode driver for the host interface of Intel > Ethernet Switch FM10000 Series of silicons, which integrate NIC and > switch functionalities. The patch set include below features: > > 1. Basic RX/TX functions for PF/VF. > 2. Interrupt handling mechanism for PF/VF. > 3. per queue start/stop functions for PF/VF. > 4. Mailbox handling between PF/VF and PF/Switch Manager. > 5. Receive Side Scaling (RSS) for PF/VF. > 6. Scatter receive function for PF/VF. > 7. reta update/query for PF/VF. > 8. VLAN filter set for PF. > 9. Link status query for PF/VF. > > Change in v5: > - Add sanity check for mbuf allocation. > - Add a new patch to claim fm10k driver review > - Change commit log. > - Add unlikely in func rx_desc_to_ol_flags to gain performance > - Add a new patch to add ABI version > > Change in v4: > - Change commit log to remove improper words. > > Changes in v3: > - Update base driver. > - Define several macros to pass base driver compile. > > Changes in v2: > - Merge 3 patches into 1 to configure fm10k compile environment. > - Rework on log code to follow style in ixgbe. > - Rework log message, remove redundant '\n' > - Update Copyright year from "2014" to "2015" > - Change base driver directory name from SHARED to base > - Add more description in log for patch "add PF and VF interrupt" > - Merge 2 patches into 1 to register fm10k driver > - Define macro to replace numeric for lower 32-bit mask. 
> > Chen Jing D(Mark) (1): > maintainers: claim for fm10k review > > Jeff Shaw (15): > fm10k: add base driver > eal: add fm10k device id > fm10k: register fm10k pmd PF driver > Change config files to add fm10k into compile > fm10k: add reta update/requery functions > fm10k: add rx_queue_setup/release function > fm10k: add tx_queue_setup/release function > fm10k: add RX/TX single queue start/stop function > fm10k: add dev start/stop functions > fm10k: add receive and tranmit function > fm10k: add PF RSS support > fm10k: Add scatter receive function > fm10k: add function to set vlan > fm10k: Add SRIOV-VF support > fm10k: add PF and VF interrupt handling function > > Michael Qiu (1): > fm10k: Add ABI version of librte_pmd_fm10k > > MAINTAINERS | 4 + > config/common_bsdapp | 11 + > config/common_linuxapp | 11 + > lib/Makefile | 1 + > lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 + > lib/librte_pmd_fm10k/Makefile | 100 + > lib/librte_pmd_fm10k/base/fm10k_api.c | 341 ++++ > lib/librte_pmd_fm10k/base/fm10k_api.h | 61 + > lib/librte_pmd_fm10k/base/fm10k_common.c | 572 ++++++ > lib/librte_pmd_fm10k/base/fm10k_common.h | 52 + > lib/librte_pmd_fm10k/base/fm10k_mbx.c | 2185 +++++++++++++++++++++++ > lib/librte_pmd_fm10k/base/fm10k_mbx.h | 329 ++++ > lib/librte_pmd_fm10k/base/fm10k_osdep.h | 148 ++ > lib/librte_pmd_fm10k/base/fm10k_pf.c | 1992 +++++++++++++++++++++ > lib/librte_pmd_fm10k/base/fm10k_pf.h | 155 ++ > lib/librte_pmd_fm10k/base/fm10k_tlv.c | 914 ++++++++++ > lib/librte_pmd_fm10k/base/fm10k_tlv.h | 199 ++ > lib/librte_pmd_fm10k/base/fm10k_type.h | 937 ++++++++++ > lib/librte_pmd_fm10k/base/fm10k_vf.c | 641 +++++++ > lib/librte_pmd_fm10k/base/fm10k_vf.h | 91 + > lib/librte_pmd_fm10k/fm10k.h | 293 +++ > lib/librte_pmd_fm10k/fm10k_ethdev.c | 1868 +++++++++++++++++++ > lib/librte_pmd_fm10k/fm10k_logs.h | 78 + > lib/librte_pmd_fm10k/fm10k_rxtx.c | 459 +++++ > lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map | 4 + > mk/rte.app.mk | 4 + > 26 files changed, 11472 insertions(+), 0 deletions(-) > create mode 100644 lib/librte_pmd_fm10k/Makefile > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_osdep.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_type.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.h > create mode 100644 lib/librte_pmd_fm10k/fm10k.h > create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c > create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h > create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c > create mode 100644 lib/librte_pmd_fm10k/rte_pmd_fm10k_version.map > Acked-by: Michael Qiu <michael.qiu@intel.com> ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v4 02/15] eal: add fm10k device id 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 01/15] fm10k: add base driver Chen Jing D(Mark) @ 2015-02-11 1:31 ` Chen Jing D(Mark) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 03/15] fm10k: register fm10k pmd PF driver Chen Jing D(Mark) ` (13 subsequent siblings) 15 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-11 1:31 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k device ID list into rte_pci_dev_ids.h. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 ++++++++++++++++++++++ 1 files changed, 22 insertions(+), 0 deletions(-) diff --git a/lib/librte_eal/common/include/rte_pci_dev_ids.h b/lib/librte_eal/common/include/rte_pci_dev_ids.h index c922de9..f54800e 100644 --- a/lib/librte_eal/common/include/rte_pci_dev_ids.h +++ b/lib/librte_eal/common/include/rte_pci_dev_ids.h @@ -132,6 +132,14 @@ #define RTE_PCI_DEV_ID_DECL_VMXNET3(vend, dev) #endif +#ifndef RTE_PCI_DEV_ID_DECL_FM10K +#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) +#endif + +#ifndef RTE_PCI_DEV_ID_DECL_FM10KVF +#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) +#endif + #ifndef PCI_VENDOR_ID_INTEL /** Vendor ID used by Intel devices */ #define PCI_VENDOR_ID_INTEL 0x8086 @@ -474,6 +482,12 @@ RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_QSFP_B) RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_QSFP_C) RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_10G_BASE_T) +/*************** Physical FM10K devices from fm10k_type.h ***************/ + +#define FM10K_DEV_ID_PF 0x15A4 + +RTE_PCI_DEV_ID_DECL_FM10K(PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_PF) + /****************** Virtual IGB devices from e1000_hw.h ******************/ #define E1000_DEV_ID_82576_VF 0x10CA @@ -526,6 +540,12 @@ RTE_PCI_DEV_ID_DECL_VIRTIO(PCI_VENDOR_ID_QUMRANET, QUMRANET_DEV_ID_VIRTIO) RTE_PCI_DEV_ID_DECL_VMXNET3(PCI_VENDOR_ID_VMWARE, VMWARE_DEV_ID_VMXNET3) +/*************** Virtual FM10K devices from fm10k_type.h ***************/ + +#define FM10K_DEV_ID_VF 0x15A5 + +RTE_PCI_DEV_ID_DECL_FM10KVF(PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_VF) + /* * Undef all RTE_PCI_DEV_ID_DECL_* here. */ @@ -538,3 +558,5 @@ RTE_PCI_DEV_ID_DECL_VMXNET3(PCI_VENDOR_ID_VMWARE, VMWARE_DEV_ID_VMXNET3) #undef RTE_PCI_DEV_ID_DECL_I40EVF #undef RTE_PCI_DEV_ID_DECL_VIRTIO #undef RTE_PCI_DEV_ID_DECL_VMXNET3 +#undef RTE_PCI_DEV_ID_DECL_FM10K +#undef RTE_PCI_DEV_ID_DECL_FM10KVF \ No newline at end of file -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
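These RTE_PCI_DEV_ID_DECL_FM10K()/RTE_PCI_DEV_ID_DECL_FM10KVF() entries are consumed X-macro style: a driver defines the macro to emit one table row and then includes the header, while the empty #ifndef defaults at the top make every other device family expand to nothing. A minimal sketch of that expansion, mirroring the pci_id_fm10k_map table that the ethdev patch builds later in this series (the array name here is illustrative):

#include <rte_pci.h>

/* Expand only the fm10k PF IDs into a PCI ID table; all other
 * RTE_PCI_DEV_ID_DECL_* families fall back to their empty defaults. */
static struct rte_pci_id example_fm10k_pci_ids[] = {
#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) { RTE_PCI_DEVICE(vend, dev) },
#include "rte_pci_dev_ids.h"
	{ .vendor_id = 0, /* sentinel */ },
};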
* [dpdk-dev] [PATCH v4 03/15] fm10k: register fm10k pmd PF driver 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 01/15] fm10k: add base driver Chen Jing D(Mark) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 02/15] eal: add fm10k device id Chen Jing D(Mark) @ 2015-02-11 1:31 ` Chen Jing D(Mark) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 04/15] Change config files to add fm10k into compile Chen Jing D(Mark) ` (12 subsequent siblings) 15 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-11 1:31 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add init function to scan and initialize fm10k PF device. 2. Add implementation to register fm10k pmd PF driver. 3. Add 3 functions fm10k_dev_configure, fm10k_stats_get and fm10k_stats_get. 4. Add fm10k.h to define macros and basic data structure. 5. Add fm10k_logs.h to control log message output. 6. Add Makefile. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/Makefile | 95 ++++++++++ lib/librte_pmd_fm10k/fm10k.h | 224 +++++++++++++++++++++++ lib/librte_pmd_fm10k/fm10k_ethdev.c | 343 +++++++++++++++++++++++++++++++++++ lib/librte_pmd_fm10k/fm10k_logs.h | 78 ++++++++ 4 files changed, 740 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/Makefile create mode 100644 lib/librte_pmd_fm10k/fm10k.h create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h diff --git a/lib/librte_pmd_fm10k/Makefile b/lib/librte_pmd_fm10k/Makefile new file mode 100644 index 0000000..1da84e9 --- /dev/null +++ b/lib/librte_pmd_fm10k/Makefile @@ -0,0 +1,95 @@ +# BSD LICENSE +# +# Copyright(c) 2013-2015 Intel Corporation. All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# +# library name +# +LIB = librte_pmd_fm10k.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +ifeq ($(CC), icc) +# +# CFLAGS for icc +# +CFLAGS_BASE_DRIVER = -wd174 -wd593 -wd869 -wd981 -wd2259 + +else ifeq ($(CC), clang) +# +## CFLAGS for clang +# +CFLAGS_BASE_DRIVER = -Wno-unused-parameter -Wno-unused-value +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing -Wno-format-extra-args +CFLAGS_BASE_DRIVER += -Wno-unused-variable -Wno-unused-but-set-variable +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers + +else +# +# CFLAGS for gcc +# +ifneq ($(shell test $(GCC_MAJOR_VERSION) -le 4 -a $(GCC_MINOR_VERSION) -le 3 && echo 1), 1) +CFLAGS += -Wno-deprecated +endif +CFLAGS_BASE_DRIVER = -Wno-unused-parameter -Wno-unused-value +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing -Wno-format-extra-args +CFLAGS_BASE_DRIVER += -Wno-unused-variable -Wno-unused-but-set-variable +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers +endif + +# +# Add extra flags for base driver source files to disable warnings in them +# +BASE_DRIVER_OBJS=$(patsubst %.c,%.o,$(notdir $(wildcard $(RTE_SDK)/lib/librte_pmd_fm10k/base/*.c))) +$(foreach obj, $(BASE_DRIVER_OBJS), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER))) + +VPATH += $(RTE_SDK)/lib/librte_pmd_fm10k/base + +# +# all source are stored in SRCS-y +# +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_ethdev.c + +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_pf.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_tlv.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_common.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_mbx.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_vf.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_api.c + +# this lib depends upon: +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_eal lib/librte_ether +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_mempool lib/librte_mbuf +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_net lib/librte_malloc + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h new file mode 100644 index 0000000..1468040 --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -0,0 +1,224 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _FM10K_H_ +#define _FM10K_H_ + +#include <stdint.h> +#include <rte_mbuf.h> +#include <rte_mempool.h> +#include <rte_malloc.h> +#include <rte_spinlock.h> +#include "fm10k_logs.h" +#include "base/fm10k_type.h" + +/* descriptor ring base addresses must be aligned to the following */ +#define FM10K_ALIGN_RX_DESC 128 +#define FM10K_ALIGN_TX_DESC 128 + +/* The maximum packet size that FM10K supports */ +#define FM10K_MAX_PKT_SIZE (15 * 1024) + +/* Minimum size of RX buffer FM10K supported */ +#define FM10K_MIN_RX_BUF_SIZE 256 + +/* The maximum of SRIOV VFs per port supported */ +#define FM10K_MAX_VF_NUM 64 + +/* number of descriptors must be a multiple of the following */ +#define FM10K_MULT_RX_DESC FM10K_REQ_RX_DESCRIPTOR_MULTIPLE +#define FM10K_MULT_TX_DESC FM10K_REQ_TX_DESCRIPTOR_MULTIPLE + +/* maximum size of descriptor rings */ +#define FM10K_MAX_RX_RING_SZ (512 * 1024) +#define FM10K_MAX_TX_RING_SZ (512 * 1024) + +/* minimum and maximum number of descriptors in a ring */ +#define FM10K_MIN_RX_DESC 32 +#define FM10K_MIN_TX_DESC 32 +#define FM10K_MAX_RX_DESC (FM10K_MAX_RX_RING_SZ / sizeof(union fm10k_rx_desc)) +#define FM10K_MAX_TX_DESC (FM10K_MAX_TX_RING_SZ / sizeof(struct fm10k_tx_desc)) + +/* + * byte aligment for HW RX data buffer + * Datasheet requires RX buffer addresses shall either be 512-byte aligned or + * be 8-byte aligned but without crossing host memory pages (4KB alignment + * boundaries). Satisfy first option. + */ +#define FM10K_RX_DATABUF_ALIGN 512 + +/* + * threshold default, min, max, and divisor constraints + * the configured values must satisfy the following: + * MIN <= value <= MAX + * DIV % value == 0 + */ +#define FM10K_RX_FREE_THRESH_DEFAULT(rxq) 32 +#define FM10K_RX_FREE_THRESH_MIN(rxq) 1 +#define FM10K_RX_FREE_THRESH_MAX(rxq) ((rxq)->nb_desc - 1) +#define FM10K_RX_FREE_THRESH_DIV(rxq) ((rxq)->nb_desc) + +#define FM10K_TX_FREE_THRESH_DEFAULT(txq) 32 +#define FM10K_TX_FREE_THRESH_MIN(txq) 1 +#define FM10K_TX_FREE_THRESH_MAX(txq) ((txq)->nb_desc - 3) +#define FM10K_TX_FREE_THRESH_DIV(txq) 0 + +#define FM10K_DEFAULT_RX_PTHRESH 8 +#define FM10K_DEFAULT_RX_HTHRESH 8 +#define FM10K_DEFAULT_RX_WTHRESH 0 + +#define FM10K_DEFAULT_TX_PTHRESH 32 +#define FM10K_DEFAULT_TX_HTHRESH 0 +#define FM10K_DEFAULT_TX_WTHRESH 0 + +#define FM10K_TX_RS_THRESH_DEFAULT(txq) 32 +#define FM10K_TX_RS_THRESH_MIN(txq) 1 +#define FM10K_TX_RS_THRESH_MAX(txq) \ + RTE_MIN(((txq)->nb_desc - 2), (txq)->free_thresh) +#define FM10K_TX_RS_THRESH_DIV(txq) ((txq)->nb_desc) + +#define FM10K_VLAN_TAG_SIZE 4 + +struct fm10k_dev_info { + volatile uint32_t enable; + volatile uint32_t glort; + /* Protect the mailbox to avoid race condition */ + rte_spinlock_t mbx_lock; +}; + +/* + * Structure to store private data for each driver instance. 
+ */ +struct fm10k_adapter { + struct fm10k_hw hw; + struct fm10k_hw_stats stats; + struct fm10k_dev_info info; +}; + +#define FM10K_DEV_PRIVATE_TO_HW(adapter) \ + (&((struct fm10k_adapter *)adapter)->hw) + +#define FM10K_DEV_PRIVATE_TO_STATS(adapter) \ + (&((struct fm10k_adapter *)adapter)->stats) + +#define FM10K_DEV_PRIVATE_TO_INFO(adapter) \ + (&((struct fm10k_adapter *)adapter)->info) + +#define FM10K_DEV_PRIVATE_TO_MBXLOCK(adapter) \ + (&(((struct fm10k_adapter *)adapter)->info.mbx_lock)) + +struct fm10k_rx_queue { + struct rte_mempool *mp; + struct rte_mbuf **sw_ring; + volatile union fm10k_rx_desc *hw_ring; + struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */ + struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */ + uint64_t hw_ring_phys_addr; + uint16_t next_dd; + uint16_t next_alloc; + uint16_t next_trigger; + uint16_t alloc_thresh; + volatile uint32_t *tail_ptr; + uint16_t nb_desc; + uint16_t queue_id; + uint8_t port_id; + uint8_t drop_en; + uint8_t rx_deferred_start; /**< don't start this queue in dev start. */ +}; + +/* + * a FIFO is used to track which descriptors have their RS bit set for Tx + * queues which are configured to allow multiple descriptors per packet + */ +struct fifo { + uint16_t *list; + uint16_t *head; + uint16_t *tail; + uint16_t *endp; +}; + +struct fm10k_tx_queue { + struct rte_mbuf **sw_ring; + struct fm10k_tx_desc *hw_ring; + uint64_t hw_ring_phys_addr; + struct fifo rs_tracker; + uint16_t last_free; + uint16_t next_free; + uint16_t nb_free; + uint16_t nb_used; + uint16_t free_trigger; + uint16_t free_thresh; + uint16_t rs_thresh; + volatile uint32_t *tail_ptr; + uint16_t nb_desc; + uint8_t port_id; + uint8_t tx_deferred_start; /** < don't start this queue in dev start. */ + uint16_t queue_id; +}; + +#define MBUF_DMA_ADDR(mb) \ + ((uint64_t) ((mb)->buf_physaddr + (mb)->data_off)) + +/* enforce 512B alignment on default Rx DMA addresses */ +#define MBUF_DMA_ADDR_DEFAULT(mb) \ + ((uint64_t) RTE_ALIGN(((mb)->buf_physaddr + RTE_PKTMBUF_HEADROOM), 512)) + +static inline void fifo_reset(struct fifo *fifo, uint32_t len) +{ + fifo->head = fifo->tail = fifo->list; + fifo->endp = fifo->list + len; +} + +static inline void fifo_insert(struct fifo *fifo, uint16_t val) +{ + *fifo->head = val; + if (++fifo->head == fifo->endp) + fifo->head = fifo->list; +} + +/* do not worry about list being empty since we only check it once we know + * we have used enough descriptors to set the RS bit at least once */ +static inline uint16_t fifo_peek(struct fifo *fifo) +{ + return *fifo->tail; +} + +static inline uint16_t fifo_remove(struct fifo *fifo) +{ + uint16_t val; + val = *fifo->tail; + if (++fifo->tail == fifo->endp) + fifo->tail = fifo->list; + return val; +} +#endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c new file mode 100644 index 0000000..0b75299 --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -0,0 +1,343 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. 
+ * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <rte_ethdev.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <rte_string_fns.h> +#include <rte_dev.h> +#include <rte_spinlock.h> + +#include "fm10k.h" +#include "base/fm10k_api.h" + +/* Default delay to acquire mailbox lock */ +#define FM10K_MBXLOCK_DELAY_US 20 + +static void +fm10k_mbx_initlock(struct fm10k_hw *hw) +{ + rte_spinlock_init(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); +} + +static void +fm10k_mbx_lock(struct fm10k_hw *hw) +{ + while (!rte_spinlock_trylock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back))) + rte_delay_us(FM10K_MBXLOCK_DELAY_US); +} + +static void +fm10k_mbx_unlock(struct fm10k_hw *hw) +{ + rte_spinlock_unlock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); +} + +static int +fm10k_dev_configure(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.hw_strip_crc == 0) + PMD_INIT_LOG(WARNING, "fm10k always strip CRC"); + + return 0; +} + +static void +fm10k_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + uint64_t ipackets, opackets, ibytes, obytes; + struct fm10k_hw *hw = + FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_hw_stats *hw_stats = + FM10K_DEV_PRIVATE_TO_STATS(dev->data->dev_private); + int i; + + PMD_INIT_FUNC_TRACE(); + + fm10k_update_hw_stats(hw, hw_stats); + + ipackets = opackets = ibytes = obytes = 0; + for (i = 0; (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) && + (i < FM10K_MAX_QUEUES_PF); ++i) { + stats->q_ipackets[i] = hw_stats->q[i].rx_packets.count; + stats->q_opackets[i] = hw_stats->q[i].tx_packets.count; + stats->q_ibytes[i] = hw_stats->q[i].rx_bytes.count; + stats->q_obytes[i] = hw_stats->q[i].tx_bytes.count; + ipackets += stats->q_ipackets[i]; + opackets += stats->q_opackets[i]; + ibytes += stats->q_ibytes[i]; + obytes += stats->q_obytes[i]; + } + stats->ipackets = ipackets; + stats->opackets = opackets; + stats->ibytes = ibytes; + stats->obytes = obytes; +} + +static void +fm10k_stats_reset(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_hw_stats *hw_stats = + FM10K_DEV_PRIVATE_TO_STATS(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + memset(hw_stats, 0, sizeof(*hw_stats)); + fm10k_rebind_hw_stats(hw, hw_stats); +} + +/* Mailbox message handler in VF */ +static const struct fm10k_msg_data 
fm10k_msgdata_vf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_msg_mac_vlan_vf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/* Mailbox message handler in PF */ +static const struct fm10k_msg_data fm10k_msgdata_pf[] = { + FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf), + FM10K_PF_MSG_LPORT_MAP_HANDLER(fm10k_msg_lport_map_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_CREATE, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_DELETE, fm10k_msg_err_pf), + FM10K_PF_MSG_UPDATE_PVID_HANDLER(fm10k_msg_update_pvid_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +static int +fm10k_setup_mbx_service(struct fm10k_hw *hw) +{ + int err; + + /* Initialize mailbox lock */ + fm10k_mbx_initlock(hw); + + /* Replace default message handler with new ones */ + if (hw->mac.type == fm10k_mac_pf) + err = hw->mbx.ops.register_handlers(&hw->mbx, fm10k_msgdata_pf); + else + err = hw->mbx.ops.register_handlers(&hw->mbx, fm10k_msgdata_vf); + + if (err) { + PMD_INIT_LOG(ERR, "Failed to register mailbox handler.err:%d", + err); + return err; + } + /* Connect to SM for PF device or PF for VF device */ + return hw->mbx.ops.connect(hw, &hw->mbx); +} + +static struct eth_dev_ops fm10k_eth_dev_ops = { + .dev_configure = fm10k_dev_configure, + .stats_get = fm10k_stats_get, + .stats_reset = fm10k_stats_reset, +}; + +static int +eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, + struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int diag; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &fm10k_eth_dev_ops; + + /* only initialize in the primary process */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + /* Vendor and Device ID need to be set before init of shared code */ + memset(hw, 0, sizeof(*hw)); + hw->device_id = dev->pci_dev->id.device_id; + hw->vendor_id = dev->pci_dev->id.vendor_id; + hw->subsystem_device_id = dev->pci_dev->id.subsystem_device_id; + hw->subsystem_vendor_id = dev->pci_dev->id.subsystem_vendor_id; + hw->revision_id = 0; + hw->hw_addr = (void *)dev->pci_dev->mem_resource[0].addr; + if (hw->hw_addr == NULL) { + PMD_INIT_LOG(ERR, "Bad mem resource." + " Try to blacklist unused devices."); + return -EIO; + } + + /* Store fm10k_adapter pointer */ + hw->back = dev->data->dev_private; + + /* Initialize the shared code */ + diag = fm10k_init_shared_code(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Shared code init failed: %d", diag); + return -EIO; + } + + /* + * Inialize bus info. Normally we would call fm10k_get_bus_info(), but + * there is no way to get link status without reading BAR4. Until this + * works, assume we have maximum bandwidth. 
+ * @todo - fix bus info + */ + hw->bus_caps.speed = fm10k_bus_speed_8000; + hw->bus_caps.width = fm10k_bus_width_pcie_x8; + hw->bus_caps.payload = fm10k_bus_payload_512; + hw->bus.speed = fm10k_bus_speed_8000; + hw->bus.width = fm10k_bus_width_pcie_x8; + hw->bus.payload = fm10k_bus_payload_256; + + /* Initialize the hw */ + diag = fm10k_init_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware init failed: %d", diag); + return -EIO; + } + + /* Initialize MAC address(es) */ + dev->data->mac_addrs = rte_zmalloc("fm10k", ETHER_ADDR_LEN, 0); + if (dev->data->mac_addrs == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate memory for MAC addresses"); + return -ENOMEM; + } + + diag = fm10k_read_mac_addr(hw); + if (diag != FM10K_SUCCESS) { + /* + * TODO: remove special handling on VF. Need shared code to + * fix first. + */ + if (hw->mac.type == fm10k_mac_pf) { + PMD_INIT_LOG(ERR, "Read MAC addr failed: %d", diag); + return -EIO; + } else { + /* Generate a random addr */ + eth_random_addr(hw->mac.addr); + memcpy(hw->mac.perm_addr, hw->mac.addr, ETH_ALEN); + } + } + + ether_addr_copy((const struct ether_addr *)hw->mac.addr, + &dev->data->mac_addrs[0]); + + /* Reset the hw statistics */ + fm10k_stats_reset(dev); + + /* Reset the hw */ + diag = fm10k_reset_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware reset failed: %d", diag); + return -EIO; + } + + /* Setup mailbox service */ + diag = fm10k_setup_mbx_service(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Failed to setup mailbox: %d", diag); + return -EIO; + } + + /* + * Below function will trigger operations on mailbox, acquire lock to + * avoid race condition from interrupt handler. Operations on mailbox + * FIFO will trigger interrupt to PF/SM, in which interrupt handler + * will handle and generate an interrupt to our side. Then, FIFO in + * mailbox will be touched. + */ + fm10k_mbx_lock(hw); + /* Enable port first */ + hw->mac.ops.update_lport_state(hw, 0, 0, 1); + + /* Update default vlan */ + hw->mac.ops.update_vlan(hw, hw->mac.default_vid, 0, true); + + /* + * Add default mac/vlan filter. glort is assigned by SM for PF, while is + * unused for VF. PF will assign correct glort for VF. + */ + hw->mac.ops.update_uc_addr(hw, hw->mac.dglort_map, hw->mac.addr, + hw->mac.default_vid, 1, 0); + + /* Set unicast mode by default. App can change to other mode in other + * API func. + */ + hw->mac.ops.update_xcast_mode(hw, hw->mac.dglort_map, + FM10K_XCAST_MODE_MULTI); + + fm10k_mbx_unlock(hw); + + return 0; +} + +/* + * The set of PCI devices this driver supports. This driver will enable both PF + * and SRIOV-VF devices. + */ +static struct rte_pci_id pci_id_fm10k_map[] = { +#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, +#include "rte_pci_dev_ids.h" + { .vendor_id = 0, /* sentinel */ }, +}; + +static struct eth_driver rte_pmd_fm10k = { + { + .name = "rte_pmd_fm10k", + .id_table = pci_id_fm10k_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING, + }, + .eth_dev_init = eth_fm10k_dev_init, + .dev_private_size = sizeof(struct fm10k_adapter), +}; + +/* + * Driver initialization routine. + * Invoked once at EAL init time. + * Register itself as the [Poll Mode] Driver of PCI FM10K devices. 
+ */ +static int +rte_pmd_fm10k_init(__rte_unused const char *name, + __rte_unused const char *params) +{ + PMD_INIT_FUNC_TRACE(); + rte_eth_driver_register(&rte_pmd_fm10k); + return 0; +} + +static struct rte_driver rte_fm10k_driver = { + .type = PMD_PDEV, + .init = rte_pmd_fm10k_init, +}; + +PMD_REGISTER_DRIVER(rte_fm10k_driver); diff --git a/lib/librte_pmd_fm10k/fm10k_logs.h b/lib/librte_pmd_fm10k/fm10k_logs.h new file mode 100644 index 0000000..febd796 --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k_logs.h @@ -0,0 +1,78 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _FM10K_LOGS_H_ +#define _FM10K_LOGS_H_ + +#define PMD_INIT_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \ + "PMD: %s(): " fmt "\n", __func__, ##args) + +#ifdef RTE_LIBRTE_FM10K_DEBUG_INIT +#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>") +#else +#define PMD_INIT_FUNC_TRACE() do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX +#define PMD_RX_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_RX_LOG(level, fmt, args...) do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_TX +#define PMD_TX_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_TX_LOG(level, fmt, args...) do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_TX_FREE +#define PMD_TX_FREE_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_DRIVER +#define PMD_DRV_LOG_RAW(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args) +#else +#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0) +#endif + +#define PMD_DRV_LOG(level, fmt, args...) 
\ + PMD_DRV_LOG_RAW(level, fmt "\n", ## args) + +#endif /* _FM10K_LOGS_H_ */ -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
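The struct fifo helpers added to fm10k.h have no callers in this patch; a rough sketch of how the transmit path is expected to use the RS-bit tracker (the function names and the reclaim step below are illustrative, the real users arrive with the Tx function later in the series):

#include "fm10k.h"	/* struct fifo, struct fm10k_tx_queue and the inline helpers */

/* Tx path: whenever a descriptor gets its RS bit set, remember its ring
 * index so the free path knows which write-backs to wait for. */
static inline void
example_record_rs(struct fm10k_tx_queue *q, uint16_t desc_idx)
{
	fifo_insert(&q->rs_tracker, desc_idx);
}

/* Free path: index of the oldest descriptor carrying an RS bit; once
 * hardware marks it done, mbufs up to this index can be freed and the
 * entry is popped with fifo_remove(). */
static inline uint16_t
example_next_rs(struct fm10k_tx_queue *q)
{
	return fifo_peek(&q->rs_tracker);
}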
* [dpdk-dev] [PATCH v4 04/15] Change config files to add fm10k into compile 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (2 preceding siblings ...) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 03/15] fm10k: register fm10k pmd PF driver Chen Jing D(Mark) @ 2015-02-11 1:31 ` Chen Jing D(Mark) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 05/15] fm10k: add reta update/requery functions Chen Jing D(Mark) ` (11 subsequent siblings) 15 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-11 1:31 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Change config/common_bsdapp and config/common_linuxapp, add macros to control fm10k pmd driver compile for linux and bsd. 2. Change lib/Makefile to add fm10k driver into compile list. 3. Change mk/rte.app.mk to add fm10k lib into link. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- config/common_bsdapp | 11 +++++++++++ config/common_linuxapp | 11 +++++++++++ lib/Makefile | 1 + mk/rte.app.mk | 4 ++++ 4 files changed, 27 insertions(+), 0 deletions(-) diff --git a/config/common_bsdapp b/config/common_bsdapp index 9177db1..77e8a64 100644 --- a/config/common_bsdapp +++ b/config/common_bsdapp @@ -182,6 +182,17 @@ CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4 CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1 # +# Compile burst-oriented FM10K PMD +# +CONFIG_RTE_LIBRTE_FM10K_PMD=y +CONFIG_RTE_LIBRTE_FM10K_DEBUG_INIT=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_RX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX_FREE=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_DRIVER=n +CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y + +# # Compile burst-oriented Cisco ENIC PMD driver # CONFIG_RTE_LIBRTE_ENIC_PMD=y diff --git a/config/common_linuxapp b/config/common_linuxapp index 2f9643b..b076b5d 100644 --- a/config/common_linuxapp +++ b/config/common_linuxapp @@ -180,6 +180,17 @@ CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4 CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1 # +# Compile burst-oriented FM10K PMD +# +CONFIG_RTE_LIBRTE_FM10K_PMD=y +CONFIG_RTE_LIBRTE_FM10K_DEBUG_INIT=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_RX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX_FREE=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_DRIVER=n +CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y + +# # Compile burst-oriented Cisco ENIC PMD driver # CONFIG_RTE_LIBRTE_ENIC_PMD=y diff --git a/lib/Makefile b/lib/Makefile index 0ffc982..b1f3860 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -43,6 +43,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether DIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += librte_pmd_e1000 DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += librte_pmd_ixgbe DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += librte_pmd_i40e +DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += librte_pmd_fm10k DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += librte_pmd_enic DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += librte_pmd_bond DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += librte_pmd_ring diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 4294d9a..87d8763 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -211,6 +211,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_I40E_PMD),y) LDLIBS += -lrte_pmd_i40e endif +ifeq ($(CONFIG_RTE_LIBRTE_FM10K_PMD),y) +LDLIBS += -lrte_pmd_fm10k +endif + ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_PMD),y) LDLIBS += -lrte_pmd_ixgbe endif -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
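These CONFIG_RTE_LIBRTE_FM10K_* switches reach the sources as plain defines: the build system drops the CONFIG_ prefix when generating rte_config.h, so setting CONFIG_RTE_LIBRTE_FM10K_DEBUG_RX=y is what turns the PMD_RX_LOG() macro from fm10k_logs.h into a real rte_log() call. A small illustration (the helper name is made up):

#include <stdint.h>
#include <rte_log.h>
#include "fm10k_logs.h"

/* Compiles to an empty statement unless CONFIG_RTE_LIBRTE_FM10K_DEBUG_RX=y,
 * i.e. unless RTE_LIBRTE_FM10K_DEBUG_RX ends up defined for the build. */
static inline void
example_trace_rx_burst(uint16_t nb_rx)
{
	PMD_RX_LOG(DEBUG, "received %u packets", nb_rx);
}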
* [dpdk-dev] [PATCH v4 05/15] fm10k: add reta update/requery functions 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (3 preceding siblings ...) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 04/15] Change config files to add fm10k into compile Chen Jing D(Mark) @ 2015-02-11 1:31 ` Chen Jing D(Mark) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 06/15] fm10k: add rx_queue_setup/release function Chen Jing D(Mark) ` (10 subsequent siblings) 15 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-11 1:31 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_reta_update and fm10k_reta_query functions. 2. Add fm10k_link_update and fm10k_dev_infos_get functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 162 +++++++++++++++++++++++++++++++++++ 1 files changed, 162 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 0b75299..b3d4d79 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -44,6 +44,10 @@ /* Default delay to acquire mailbox lock */ #define FM10K_MBXLOCK_DELAY_US 20 +/* Number of chars per uint32 type */ +#define CHARS_PER_UINT32 (sizeof(uint32_t)) +#define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1) + static void fm10k_mbx_initlock(struct fm10k_hw *hw) { @@ -74,6 +78,22 @@ fm10k_dev_configure(struct rte_eth_dev *dev) return 0; } +static int +fm10k_link_update(struct rte_eth_dev *dev, + __rte_unused int wait_to_complete) +{ + PMD_INIT_FUNC_TRACE(); + + /* The host-interface link is always up. The speed is ~50Gbps per Gen3 + * x8 PCIe interface. For now, we leave the speed undefined since there + * is no 50Gbps Ethernet. 
*/ + dev->data->dev_link.link_speed = 0; + dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX; + dev->data->dev_link.link_status = 1; + + return 0; +} + static void fm10k_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { @@ -119,6 +139,144 @@ fm10k_stats_reset(struct rte_eth_dev *dev) fm10k_rebind_hw_stats(hw, hw_stats); } +static void +fm10k_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + dev_info->min_rx_bufsize = FM10K_MIN_RX_BUF_SIZE; + dev_info->max_rx_pktlen = FM10K_MAX_PKT_SIZE; + dev_info->max_rx_queues = hw->mac.max_queues; + dev_info->max_tx_queues = hw->mac.max_queues; + dev_info->max_mac_addrs = 1; + dev_info->max_hash_mac_addrs = 0; + dev_info->max_vfs = FM10K_MAX_VF_NUM; + dev_info->max_vmdq_pools = ETH_64_POOLS; + dev_info->rx_offload_capa = + DEV_RX_OFFLOAD_IPV4_CKSUM | + DEV_RX_OFFLOAD_UDP_CKSUM | + DEV_RX_OFFLOAD_TCP_CKSUM; + dev_info->tx_offload_capa = 0; + dev_info->reta_size = FM10K_MAX_RSS_INDICES; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = FM10K_DEFAULT_RX_PTHRESH, + .hthresh = FM10K_DEFAULT_RX_HTHRESH, + .wthresh = FM10K_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = FM10K_RX_FREE_THRESH_DEFAULT(0), + .rx_drop_en = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = FM10K_DEFAULT_TX_PTHRESH, + .hthresh = FM10K_DEFAULT_TX_HTHRESH, + .wthresh = FM10K_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = FM10K_TX_FREE_THRESH_DEFAULT(0), + .tx_rs_thresh = FM10K_TX_RS_THRESH_DEFAULT(0), + .txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS | + ETH_TXQ_FLAGS_NOOFFLOADS, + }; + +} + +static int +fm10k_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint16_t i, j, idx, shift; + uint8_t mask; + uint32_t reta; + + PMD_INIT_FUNC_TRACE(); + + if (reta_size > FM10K_MAX_RSS_INDICES) { + PMD_INIT_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number hardware can supported " + "(%d)", reta_size, FM10K_MAX_RSS_INDICES); + return -EINVAL; + } + + /* + * Update Redirection Table RETA[n], n=0..31. 
The redirection table has + * 128-entries in 32 registers + */ + for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) { + idx = i / RTE_RETA_GROUP_SIZE; + shift = i % RTE_RETA_GROUP_SIZE; + mask = (uint8_t)((reta_conf[idx].mask >> shift) & + BIT_MASK_PER_UINT32); + if (mask == 0) + continue; + + reta = 0; + if (mask != BIT_MASK_PER_UINT32) + reta = FM10K_READ_REG(hw, FM10K_RETA(0, i >> 2)); + + for (j = 0; j < CHARS_PER_UINT32; j++) { + if (mask & (0x1 << j)) { + if (mask != 0xF) + reta &= ~(UINT8_MAX << CHAR_BIT * j); + reta |= reta_conf[idx].reta[shift + j] << + (CHAR_BIT * j); + } + } + FM10K_WRITE_REG(hw, FM10K_RETA(0, i >> 2), reta); + } + + return 0; +} + +static int +fm10k_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint16_t i, j, idx, shift; + uint8_t mask; + uint32_t reta; + + PMD_INIT_FUNC_TRACE(); + + if (reta_size < FM10K_MAX_RSS_INDICES) { + PMD_INIT_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number hardware can supported " + "(%d)", reta_size, FM10K_MAX_RSS_INDICES); + return -EINVAL; + } + + /* + * Read Redirection Table RETA[n], n=0..31. The redirection table has + * 128-entries in 32 registers + */ + for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) { + idx = i / RTE_RETA_GROUP_SIZE; + shift = i % RTE_RETA_GROUP_SIZE; + mask = (uint8_t)((reta_conf[idx].mask >> shift) & + BIT_MASK_PER_UINT32); + if (mask == 0) + continue; + + reta = FM10K_READ_REG(hw, FM10K_RETA(0, i >> 2)); + for (j = 0; j < CHARS_PER_UINT32; j++) { + if (mask & (0x1 << j)) + reta_conf[idx].reta[shift + j] = ((reta >> + CHAR_BIT * j) & UINT8_MAX); + } + } + + return 0; +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -165,6 +323,10 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_configure = fm10k_dev_configure, .stats_get = fm10k_stats_get, .stats_reset = fm10k_stats_reset, + .link_update = fm10k_link_update, + .dev_infos_get = fm10k_dev_infos_get, + .reta_update = fm10k_reta_update, + .reta_query = fm10k_reta_query, }; static int -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
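For context, a minimal application-side sketch of exercising these callbacks through the generic ethdev API; the 128-entry table size comes from the register comment above, while spreading entries round-robin over nb_queues is only an example policy:

#include <string.h>
#include <rte_ethdev.h>

static int
example_spread_reta(uint8_t port_id, uint16_t nb_queues)
{
	/* 128 RETA entries packed into two 64-entry groups */
	struct rte_eth_rss_reta_entry64 reta_conf[128 / RTE_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < 128; i++) {
		reta_conf[i / RTE_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_RETA_GROUP_SIZE);
		/* round-robin over the Rx queues; nb_queues must be non-zero */
		reta_conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
			i % nb_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf, 128);
}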
* [dpdk-dev] [PATCH v4 06/15] fm10k: add rx_queue_setup/release function 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (4 preceding siblings ...) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 05/15] fm10k: add reta update/requery functions Chen Jing D(Mark) @ 2015-02-11 1:31 ` Chen Jing D(Mark) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 07/15] fm10k: add tx_queue_setup/release function Chen Jing D(Mark) ` (9 subsequent siblings) 15 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-11 1:31 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_rx_queue_setup and fm10k_rx_queue_release functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 255 +++++++++++++++++++++++++++++++++++ 1 files changed, 255 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index b3d4d79..63e8a65 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -41,6 +41,7 @@ #include "fm10k.h" #include "base/fm10k_api.h" +#define FM10K_RX_BUFF_ALIGN 512 /* Default delay to acquire mailbox lock */ #define FM10K_MBXLOCK_DELAY_US 20 @@ -67,6 +68,46 @@ fm10k_mbx_unlock(struct fm10k_hw *hw) rte_spinlock_unlock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); } +/* + * clean queue, descriptor rings, free software buffers used when stopping + * device. + */ +static inline void +rx_queue_clean(struct fm10k_rx_queue *q) +{ + union fm10k_rx_desc zero = {.q = {0, 0, 0, 0} }; + uint32_t i; + PMD_INIT_FUNC_TRACE(); + + /* zero descriptor rings */ + for (i = 0; i < q->nb_desc; ++i) + q->hw_ring[i] = zero; + + /* free software buffers */ + for (i = 0; i < q->nb_desc; ++i) { + if (q->sw_ring[i]) { + rte_pktmbuf_free_seg(q->sw_ring[i]); + q->sw_ring[i] = NULL; + } + } +} + +/* + * free all queue memory used when releasing the queue (i.e. configure) + */ +static inline void +rx_queue_free(struct fm10k_rx_queue *q) +{ + PMD_INIT_FUNC_TRACE(); + if (q) { + PMD_INIT_LOG(DEBUG, "Freeing rx queue %p", q); + rx_queue_clean(q); + if (q->sw_ring) + rte_free(q->sw_ring); + rte_free(q); + } +} + static int fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -186,6 +227,218 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev, } +static inline int +check_nb_desc(uint16_t min, uint16_t max, uint16_t mult, uint16_t request) +{ + if ((request < min) || (request > max) || ((request % mult) != 0)) + return -1; + else + return 0; +} + +/* + * Create a memzone for hardware descriptor rings. Malloc cannot be used since + * the physical address is required. If the memzone is already created, then + * this function returns a pointer to the existing memzone. 
+ */ +static inline const struct rte_memzone * +allocate_hw_ring(const char *driver_name, const char *ring_name, + uint8_t port_id, uint16_t queue_id, int socket_id, + uint32_t size, uint32_t align) +{ + char name[RTE_MEMZONE_NAMESIZE]; + const struct rte_memzone *mz; + + snprintf(name, sizeof(name), "%s_%s_%d_%d_%d", + driver_name, ring_name, port_id, queue_id, socket_id); + + /* return the memzone if it already exists */ + mz = rte_memzone_lookup(name); + if (mz) + return mz; + +#ifdef RTE_LIBRTE_XEN_DOM0 + return rte_memzone_reserve_bounded(name, size, socket_id, 0, align, + RTE_PGSIZE_2M); +#else + return rte_memzone_reserve_aligned(name, size, socket_id, 0, align); +#endif +} + +static inline int +check_thresh(uint16_t min, uint16_t max, uint16_t div, uint16_t request) +{ + if ((request < min) || (request > max) || ((div % request) != 0)) + return -1; + else + return 0; +} + +static inline int +handle_rxconf(struct fm10k_rx_queue *q, const struct rte_eth_rxconf *conf) +{ + uint16_t rx_free_thresh; + + if (conf->rx_free_thresh == 0) + rx_free_thresh = FM10K_RX_FREE_THRESH_DEFAULT(q); + else + rx_free_thresh = conf->rx_free_thresh; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_RX_FREE_THRESH_MIN(q), + FM10K_RX_FREE_THRESH_MAX(q), + FM10K_RX_FREE_THRESH_DIV(q), + rx_free_thresh)) { + PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + rx_free_thresh, FM10K_RX_FREE_THRESH_MAX(q), + FM10K_RX_FREE_THRESH_MIN(q), + FM10K_RX_FREE_THRESH_DIV(q)); + return (-EINVAL); + } + + q->alloc_thresh = rx_free_thresh; + q->drop_en = conf->rx_drop_en; + q->rx_deferred_start = conf->rx_deferred_start; + + return 0; +} + +/* + * Hardware requires specific alignment for Rx packet buffers. At + * least one of the following two conditions must be satisfied. + * 1. Address is 512B aligned + * 2. Address is 8B aligned and buffer does not cross 4K boundary. + * + * As such, the driver may need to adjust the DMA address within the + * buffer by up to 512B. The mempool element size is checked here + * to make sure a maximally sized Ethernet frame can still be wholly + * contained within the buffer after 512B alignment. + * + * return 1 if the element size is valid, otherwise return 0. + */ +static int +mempool_element_size_valid(struct rte_mempool *mp) +{ + uint32_t min_size; + + /* elt_size includes mbuf header and headroom */ + min_size = mp->elt_size - sizeof(struct rte_mbuf) - + RTE_PKTMBUF_HEADROOM; + + /* account for up to 512B of alignment */ + min_size -= FM10K_RX_BUFF_ALIGN; + + /* sanity check for overflow */ + if (min_size > mp->elt_size) + return 0; + + if (min_size < ETHER_MAX_VLAN_FRAME_LEN) + return 0; + + /* size is valid */ + return 1; +} + +static int +fm10k_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_rxconf *conf, struct rte_mempool *mp) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_rx_queue *q; + const struct rte_memzone *mz; + + PMD_INIT_FUNC_TRACE(); + + /* make sure the mempool element size can account for alignment. Use + * RTE_LOG directly to make sure this error is seen. 
*/ + if (!mempool_element_size_valid(mp)) { + PMD_INIT_LOG(ERR, "Error : Mempool element size is too small"); + return (-EINVAL); + } + + /* make sure a valid number of descriptors have been requested */ + if (check_nb_desc(FM10K_MIN_RX_DESC, FM10K_MAX_RX_DESC, + FM10K_MULT_RX_DESC, nb_desc)) { + PMD_INIT_LOG(ERR, "Number of Rx descriptors (%u) must be " + "less than or equal to %lu, " + "greater than or equal to %u, " + "and a multiple of %u", + nb_desc, FM10K_MAX_RX_DESC, FM10K_MIN_RX_DESC, + FM10K_MULT_RX_DESC); + return (-EINVAL); + } + + /* + * if this queue existed already, free the associated memory. The + * queue cannot be reused in case we need to allocate memory on + * different socket than was previously used. + */ + if (dev->data->rx_queues[queue_id] != NULL) { + rx_queue_free(dev->data->rx_queues[queue_id]); + dev->data->rx_queues[queue_id] = NULL; + } + + /* allocate memory for the queue structure */ + q = rte_zmalloc_socket("fm10k", sizeof(*q), RTE_CACHE_LINE_SIZE, + socket_id); + if (q == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate queue structure"); + return (-ENOMEM); + } + + /* setup queue */ + q->mp = mp; + q->nb_desc = nb_desc; + q->port_id = dev->data->port_id; + q->queue_id = queue_id; + q->tail_ptr = (volatile uint32_t *) + &((uint32_t *)hw->hw_addr)[FM10K_RDT(queue_id)]; + if (handle_rxconf(q, conf)) + return (-EINVAL); + + /* allocate memory for the software ring */ + q->sw_ring = rte_zmalloc_socket("fm10k sw ring", + nb_desc * sizeof(struct rte_mbuf *), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->sw_ring == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate software ring"); + rte_free(q); + return (-ENOMEM); + } + + /* + * allocate memory for the hardware descriptor ring. A memzone large + * enough to hold the maximum ring size is requested to allow for + * resizing in later calls to the queue setup function. + */ + mz = allocate_hw_ring(dev->driver->pci_drv.name, "rx_ring", + dev->data->port_id, queue_id, socket_id, + FM10K_MAX_RX_RING_SZ, FM10K_ALIGN_RX_DESC); + if (mz == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate hardware ring"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + q->hw_ring = mz->addr; + q->hw_ring_phys_addr = mz->phys_addr; + + dev->data->rx_queues[queue_id] = q; + return 0; +} + +static void +fm10k_rx_queue_release(void *queue) +{ + PMD_INIT_FUNC_TRACE(); + + rx_queue_free(queue); +} + static int fm10k_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, @@ -325,6 +578,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, .dev_infos_get = fm10k_dev_infos_get, + .rx_queue_setup = fm10k_rx_queue_setup, + .rx_queue_release = fm10k_rx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
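Since the element-size check above rejects pools whose buffers cannot hold a maximum frame after the up-to-512B DMA shift, the Rx mempool has to be created with that slack included. A sketch under the usual pktmbuf pool API (the pool name, mbuf count and cache size are arbitrary):

#include <rte_ether.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Data room = max VLAN frame + headroom + worst-case 512B alignment shift,
 * which is exactly what mempool_element_size_valid() above requires. */
#define EXAMPLE_RX_DATA_ROOM \
	(ETHER_MAX_VLAN_FRAME_LEN + RTE_PKTMBUF_HEADROOM + 512)

static struct rte_mempool *
example_create_rx_pool(int socket_id)
{
	return rte_pktmbuf_pool_create("fm10k_rx_pool", 8192 /* mbufs */,
				       256 /* per-lcore cache */,
				       0 /* priv size */,
				       EXAMPLE_RX_DATA_ROOM, socket_id);
}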
* [dpdk-dev] [PATCH v4 07/15] fm10k: add tx_queue_setup/release function 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (5 preceding siblings ...) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 06/15] fm10k: add rx_queue_setup/release function Chen Jing D(Mark) @ 2015-02-11 1:31 ` Chen Jing D(Mark) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 08/15] fm10k: add RX/TX single queue start/stop function Chen Jing D(Mark) ` (8 subsequent siblings) 15 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-11 1:31 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_tx_queue_setup and fm10k_tx_queue_release functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 205 +++++++++++++++++++++++++++++++++++ 1 files changed, 205 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 63e8a65..caaccf6 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -108,6 +108,48 @@ rx_queue_free(struct fm10k_rx_queue *q) } } +/* + * clean queue, descriptor rings, free software buffers used when stopping + * device + */ +static inline void +tx_queue_clean(struct fm10k_tx_queue *q) +{ + struct fm10k_tx_desc zero = {0, 0, 0, 0, 0, 0}; + uint32_t i; + PMD_INIT_FUNC_TRACE(); + + /* zero descriptor rings */ + for (i = 0; i < q->nb_desc; ++i) + q->hw_ring[i] = zero; + + /* free software buffers */ + for (i = 0; i < q->nb_desc; ++i) { + if (q->sw_ring[i]) { + rte_pktmbuf_free_seg(q->sw_ring[i]); + q->sw_ring[i] = NULL; + } + } +} + +/* + * free all queue memory used when releasing the queue (i.e. 
configure) + */ +static inline void +tx_queue_free(struct fm10k_tx_queue *q) +{ + PMD_INIT_FUNC_TRACE(); + if (q) { + PMD_INIT_LOG(DEBUG, "Freeing tx queue %p", q); + tx_queue_clean(q); + if (q->rs_tracker.list) + rte_free(q->rs_tracker.list); + if (q->sw_ring) + rte_free(q->sw_ring); + rte_free(q); + } +} + static int fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -439,6 +481,167 @@ fm10k_rx_queue_release(void *queue) rx_queue_free(queue); } +static inline int +handle_txconf(struct fm10k_tx_queue *q, const struct rte_eth_txconf *conf) +{ + uint16_t tx_free_thresh; + uint16_t tx_rs_thresh; + + /* constraint MACROs require that tx_free_thresh is configured + * before tx_rs_thresh */ + if (conf->tx_free_thresh == 0) + tx_free_thresh = FM10K_TX_FREE_THRESH_DEFAULT(q); + else + tx_free_thresh = conf->tx_free_thresh; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_TX_FREE_THRESH_MIN(q), + FM10K_TX_FREE_THRESH_MAX(q), + FM10K_TX_FREE_THRESH_DIV(q), + tx_free_thresh)) { + PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + tx_free_thresh, FM10K_TX_FREE_THRESH_MAX(q), + FM10K_TX_FREE_THRESH_MIN(q), + FM10K_TX_FREE_THRESH_DIV(q)); + return (-EINVAL); + } + + q->free_thresh = tx_free_thresh; + + if (conf->tx_rs_thresh == 0) + tx_rs_thresh = FM10K_TX_RS_THRESH_DEFAULT(q); + else + tx_rs_thresh = conf->tx_rs_thresh; + + q->tx_deferred_start = conf->tx_deferred_start; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_TX_RS_THRESH_MIN(q), + FM10K_TX_RS_THRESH_MAX(q), + FM10K_TX_RS_THRESH_DIV(q), + tx_rs_thresh)) { + PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + tx_rs_thresh, FM10K_TX_RS_THRESH_MAX(q), + FM10K_TX_RS_THRESH_MIN(q), + FM10K_TX_RS_THRESH_DIV(q)); + return (-EINVAL); + } + + q->rs_thresh = tx_rs_thresh; + + return 0; +} + +static int +fm10k_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_txconf *conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_tx_queue *q; + const struct rte_memzone *mz; + + PMD_INIT_FUNC_TRACE(); + + /* make sure a valid number of descriptors have been requested */ + if (check_nb_desc(FM10K_MIN_TX_DESC, FM10K_MAX_TX_DESC, + FM10K_MULT_TX_DESC, nb_desc)) { + PMD_INIT_LOG(ERR, "Number of Tx descriptors (%u) must be " + "less than or equal to %lu, " + "greater than or equal to %u, " + "and a multiple of %u", + nb_desc, FM10K_MAX_TX_DESC, FM10K_MIN_TX_DESC, + FM10K_MULT_TX_DESC); + return (-EINVAL); + } + + /* + * if this queue existed already, free the associated memory. The + * queue cannot be reused in case we need to allocate memory on + * different socket than was previously used. 
+ */ + if (dev->data->tx_queues[queue_id] != NULL) { + tx_queue_free(dev->data->tx_queues[queue_id]); + dev->data->tx_queues[queue_id] = NULL; + } + + /* allocate memory for the queue structure */ + q = rte_zmalloc_socket("fm10k", sizeof(*q), RTE_CACHE_LINE_SIZE, + socket_id); + if (q == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate queue structure"); + return (-ENOMEM); + } + + /* setup queue */ + q->nb_desc = nb_desc; + q->port_id = dev->data->port_id; + q->queue_id = queue_id; + q->tail_ptr = (volatile uint32_t *) + &((uint32_t *)hw->hw_addr)[FM10K_TDT(queue_id)]; + if (handle_txconf(q, conf)) + return (-EINVAL); + + /* allocate memory for the software ring */ + q->sw_ring = rte_zmalloc_socket("fm10k sw ring", + nb_desc * sizeof(struct rte_mbuf *), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->sw_ring == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate software ring"); + rte_free(q); + return (-ENOMEM); + } + + /* + * allocate memory for the hardware descriptor ring. A memzone large + * enough to hold the maximum ring size is requested to allow for + * resizing in later calls to the queue setup function. + */ + mz = allocate_hw_ring(dev->driver->pci_drv.name, "tx_ring", + dev->data->port_id, queue_id, socket_id, + FM10K_MAX_TX_RING_SZ, FM10K_ALIGN_TX_DESC); + if (mz == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate hardware ring"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + q->hw_ring = mz->addr; + q->hw_ring_phys_addr = mz->phys_addr; + + /* + * allocate memory for the RS bit tracker. Enough slots to hold the + * descriptor index for each RS bit needing to be set are required. + */ + q->rs_tracker.list = rte_zmalloc_socket("fm10k rs tracker", + ((nb_desc + 1) / q->rs_thresh) * + sizeof(uint16_t), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->rs_tracker.list == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate RS bit tracker"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + + dev->data->tx_queues[queue_id] = q; + return 0; +} + +static void +fm10k_tx_queue_release(void *queue) +{ + PMD_INIT_FUNC_TRACE(); + + tx_queue_free(queue); +} + static int fm10k_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, @@ -580,6 +783,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_infos_get = fm10k_dev_infos_get, .rx_queue_setup = fm10k_rx_queue_setup, .rx_queue_release = fm10k_rx_queue_release, + .tx_queue_setup = fm10k_tx_queue_setup, + .tx_queue_release = fm10k_tx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
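For completeness, one application-side combination that satisfies the constraints handle_txconf() enforces; the 512-descriptor ring and the value 32 are simply a valid pairing (32 divides 512 and is no larger than nb_desc - 3), not values recommended by the patch:

#include <rte_ethdev.h>

static int
example_setup_tx_queue(uint8_t port_id, uint16_t queue_id,
		       unsigned int socket_id)
{
	struct rte_eth_txconf txconf = {
		.tx_free_thresh = 32,	/* within [1, nb_desc - 3] */
		.tx_rs_thresh   = 32,	/* divides nb_desc, <= free_thresh */
		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS | ETH_TXQ_FLAGS_NOOFFLOADS,
	};

	return rte_eth_tx_queue_setup(port_id, queue_id, 512 /* nb_desc */,
				      socket_id, &txconf);
}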
* [dpdk-dev] [PATCH v4 08/15] fm10k: add RX/TX single queue start/stop function 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (6 preceding siblings ...) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 07/15] fm10k: add tx_queue_setup/release function Chen Jing D(Mark) @ 2015-02-11 1:31 ` Chen Jing D(Mark) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 09/15] fm10k: add dev start/stop functions Chen Jing D(Mark) ` (7 subsequent siblings) 15 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-11 1:31 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add 4 functions fm10k_dev_rx_queue_start, fm10k_dev_rx_queue_stop, fm10k_dev_tx_queue_start, and fm10k_dev_tx_queue_stop. 2. verify Rx packet buffer alignment is valid. Hardware requires specific alignment for Rx packet buffers. At least one of the following two conditions must be satisfied. 1) Address is 512B aligned 2) Address is 8B aligned and buffer does not cross 4K boundary. Alignment is checked by the driver when the Rx queue is reset. It is assumed that if an entire descriptor ring can be filled with buffers containing valid alignment, then all buffers in that mempool have valid address alignment. It is the responsibility of the user to ensure all buffers have valid alignment, as it is the user who creates the mempool. It is assumed the buffer needs only to store a maximum size Ethernet frame. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k.h | 59 +++++++++ lib/librte_pmd_fm10k/fm10k_ethdev.c | 222 +++++++++++++++++++++++++++++++++++ 2 files changed, 281 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index 1468040..be990e5 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -221,4 +221,63 @@ static inline uint16_t fifo_remove(struct fifo *fifo) fifo->tail = fifo->list; return val; } + +static inline void +fm10k_pktmbuf_reset(struct rte_mbuf *mb, uint8_t in_port) +{ + rte_mbuf_refcnt_set(mb, 1); + mb->next = NULL; + mb->nb_segs = 1; + + /* enforce 512B alignment on default Rx virtual addresses */ + mb->data_off = (uint16_t)(RTE_PTR_ALIGN((char *)mb->buf_addr + + RTE_PKTMBUF_HEADROOM, FM10K_RX_DATABUF_ALIGN) + - (char *)mb->buf_addr); + mb->port = in_port; +} + +/* + * Verify Rx packet buffer alignment is valid. + * + * Hardware requires specific alignment for Rx packet buffers. At + * least one of the following two conditions must be satisfied. + * 1. Address is 512B aligned + * 2. Address is 8B aligned and buffer does not cross 4K boundary. + * + * Return 1 if buffer alignment satisfies at least one condition, + * otherwise return 0. + * + * Note: Alignment is checked by the driver when the Rx queue is reset. It + * is assumed that if an entire descriptor ring can be filled with + * buffers containing valid alignment, then all buffers in that mempool + * have valid address alignment. It is the responsibility of the user + * to ensure all buffers have valid alignment, as it is the user who + * creates the mempool. + * Note: It is assumed the buffer needs only to store a maximum size Ethernet + * frame. + */ +static inline int +fm10k_addr_alignment_valid(struct rte_mbuf *mb) +{ + uint64_t addr = MBUF_DMA_ADDR_DEFAULT(mb); + uint64_t boundary1, boundary2; + + /* 512B aligned? 
*/ + if (RTE_ALIGN(addr, 512) == addr) + return 1; + + /* 8B aligned, and max Ethernet frame would not cross a 4KB boundary? */ + if (RTE_ALIGN(addr, 8) == addr) { + boundary1 = RTE_ALIGN_FLOOR(addr, 4096); + boundary2 = RTE_ALIGN_FLOOR(addr + ETHER_MAX_VLAN_FRAME_LEN, + 4096); + if (boundary1 == boundary2) + return 1; + } + + /* use RTE_LOG directly to make sure this error is seen */ + RTE_LOG(ERR, PMD, "%s(): Error: Invalid buffer alignment\n", __func__); + + return 0; +} #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index caaccf6..4a893e1 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -69,6 +69,43 @@ fm10k_mbx_unlock(struct fm10k_hw *hw) } /* + * reset queue to initial state, allocate software buffers used when starting + * device. + * return 0 on success + * return -ENOMEM if buffers cannot be allocated + * return -EINVAL if buffers do not satisfy alignment condition + */ +static inline int +rx_queue_reset(struct fm10k_rx_queue *q) +{ + uint64_t dma_addr; + int i, diag; + PMD_INIT_FUNC_TRACE(); + + diag = rte_mempool_get_bulk(q->mp, (void **)q->sw_ring, q->nb_desc); + if (diag != 0) + return -ENOMEM; + + for (i = 0; i < q->nb_desc; ++i) { + fm10k_pktmbuf_reset(q->sw_ring[i], q->port_id); + if (!fm10k_addr_alignment_valid(q->sw_ring[i])) { + rte_mempool_put_bulk(q->mp, (void **)q->sw_ring, + q->nb_desc); + return -EINVAL; + } + dma_addr = MBUF_DMA_ADDR_DEFAULT(q->sw_ring[i]); + q->hw_ring[i].q.pkt_addr = dma_addr; + q->hw_ring[i].q.hdr_addr = dma_addr; + } + + q->next_dd = 0; + q->next_alloc = 0; + q->next_trigger = q->alloc_thresh - 1; + FM10K_PCI_REG_WRITE(q->tail_ptr, q->nb_desc - 1); + return 0; +} + +/* * clean queue, descriptor rings, free software buffers used when stopping * device. 
*/ @@ -109,6 +146,49 @@ rx_queue_free(struct fm10k_rx_queue *q) } /* + * disable RX queue, wait unitl HW finished necessary flush operation + */ +static inline int +rx_queue_disable(struct fm10k_hw *hw, uint16_t qnum) +{ + uint32_t reg, i; + + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(qnum)); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(qnum), + reg & ~FM10K_RXQCTL_ENABLE); + + /* Wait 100us at most */ + for (i = 0; i < FM10K_QUEUE_DISABLE_TIMEOUT; i++) { + rte_delay_us(1); + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + if (!(reg & FM10K_RXQCTL_ENABLE)) + break; + } + + if (i == FM10K_QUEUE_DISABLE_TIMEOUT) + return -1; + + return 0; +} + +/* + * reset queue to initial state, allocate software buffers used when starting + * device + */ +static inline void +tx_queue_reset(struct fm10k_tx_queue *q) +{ + PMD_INIT_FUNC_TRACE(); + q->last_free = 0; + q->next_free = 0; + q->nb_used = 0; + q->nb_free = q->nb_desc - 1; + q->free_trigger = q->nb_free - q->free_thresh; + fifo_reset(&q->rs_tracker, (q->nb_desc + 1) / q->rs_thresh); + FM10K_PCI_REG_WRITE(q->tail_ptr, 0); +} + +/* * clean queue, descriptor rings, free software buffers used when stopping * device */ @@ -150,6 +230,32 @@ tx_queue_free(struct fm10k_tx_queue *q) } } +/* + * disable TX queue, wait unitl HW finished necessary flush operation + */ +static inline int +tx_queue_disable(struct fm10k_hw *hw, uint16_t qnum) +{ + uint32_t reg, i; + + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(qnum)); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(qnum), + reg & ~FM10K_TXDCTL_ENABLE); + + /* Wait 100us at most */ + for (i = 0; i < FM10K_QUEUE_DISABLE_TIMEOUT; i++) { + rte_delay_us(1); + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + if (!(reg & FM10K_TXDCTL_ENABLE)) + break; + } + + if (i == FM10K_QUEUE_DISABLE_TIMEOUT) + return -1; + + return 0; +} + static int fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -162,6 +268,118 @@ fm10k_dev_configure(struct rte_eth_dev *dev) } static int +fm10k_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int err = -1; + uint32_t reg; + struct fm10k_rx_queue *rxq; + + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id < dev->data->nb_rx_queues) { + rxq = dev->data->rx_queues[rx_queue_id]; + err = rx_queue_reset(rxq); + if (err == -ENOMEM) { + PMD_INIT_LOG(ERR, "Failed to alloc memory : %d", err); + return err; + } else if (err == -EINVAL) { + PMD_INIT_LOG(ERR, "Invalid buffer address alignment :" + " %d", err); + return err; + } + + /* Setup the HW Rx Head and Tail Descriptor Pointers + * Note: this must be done AFTER the queue is enabled on real + * hardware, but BEFORE the queue is enabled when using the + * emulation platform. Do it in both places for now and remove + * this comment and the following two register writes when the + * emulation platform is no longer being used. 
+ */ + FM10K_WRITE_REG(hw, FM10K_RDH(rx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_RDT(rx_queue_id), rxq->nb_desc - 1); + + /* Set PF ownership flag for PF devices */ + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(rx_queue_id)); + if (hw->mac.type == fm10k_mac_pf) + reg |= FM10K_RXQCTL_PF; + reg |= FM10K_RXQCTL_ENABLE; + /* enable RX queue */ + FM10K_WRITE_REG(hw, FM10K_RXQCTL(rx_queue_id), reg); + FM10K_WRITE_FLUSH(hw); + + /* Setup the HW Rx Head and Tail Descriptor Pointers + * Note: this must be done AFTER the queue is enabled + */ + FM10K_WRITE_REG(hw, FM10K_RDH(rx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_RDT(rx_queue_id), rxq->nb_desc - 1); + } + + return err; +} + +static int +fm10k_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id < dev->data->nb_rx_queues) { + /* Disable RX queue */ + rx_queue_disable(hw, rx_queue_id); + + /* Free mbuf and clean HW ring */ + rx_queue_clean(dev->data->rx_queues[rx_queue_id]); + } + + return 0; +} + +static int +fm10k_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + /** @todo - this should be defined in the shared code */ +#define FM10K_TXDCTL_WRITE_BACK_MIN_DELAY 0x00010000 + uint32_t txdctl = FM10K_TXDCTL_WRITE_BACK_MIN_DELAY; + int err = 0; + + PMD_INIT_FUNC_TRACE(); + + if (tx_queue_id < dev->data->nb_tx_queues) { + tx_queue_reset(dev->data->tx_queues[tx_queue_id]); + + /* reset head and tail pointers */ + FM10K_WRITE_REG(hw, FM10K_TDH(tx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_TDT(tx_queue_id), 0); + + /* enable TX queue */ + FM10K_WRITE_REG(hw, FM10K_TXDCTL(tx_queue_id), + FM10K_TXDCTL_ENABLE | txdctl); + FM10K_WRITE_FLUSH(hw); + } else + err = -1; + + return err; +} + +static int +fm10k_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + if (tx_queue_id < dev->data->nb_tx_queues) { + tx_queue_disable(hw, tx_queue_id); + tx_queue_clean(dev->data->tx_queues[tx_queue_id]); + } + + return 0; +} + +static int fm10k_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) { @@ -781,6 +999,10 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, .dev_infos_get = fm10k_dev_infos_get, + .rx_queue_start = fm10k_dev_rx_queue_start, + .rx_queue_stop = fm10k_dev_rx_queue_stop, + .tx_queue_start = fm10k_dev_tx_queue_start, + .tx_queue_stop = fm10k_dev_tx_queue_stop, .rx_queue_setup = fm10k_rx_queue_setup, .rx_queue_release = fm10k_rx_queue_release, .tx_queue_setup = fm10k_tx_queue_setup, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
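The alignment rule spelled out in the commit message (512 B aligned, or 8 B aligned with a maximum-size frame staying inside one 4 KB region) can be expressed without any mbuf plumbing. The stand-alone sketch below mirrors the check in fm10k_addr_alignment_valid(); the 1522-byte frame length corresponds to ETHER_MAX_VLAN_FRAME_LEN and is an assumption about the largest frame a buffer must hold.

    #include <stdint.h>

    #define RX_ALIGN_STRICT  512
    #define RX_ALIGN_RELAXED 8
    #define REGION_4K        4096
    #define MAX_VLAN_FRAME   1522 /* ETHER_MAX_VLAN_FRAME_LEN */

    /* Return 1 when 'addr' satisfies the fm10k Rx buffer alignment rule. */
    static int
    rx_buf_alignment_ok(uint64_t addr)
    {
            if ((addr % RX_ALIGN_STRICT) == 0)
                    return 1; /* condition 1: 512B aligned */

            if ((addr % RX_ALIGN_RELAXED) == 0) {
                    /* condition 2: 8B aligned and a maximum-size frame
                     * does not cross a 4KB boundary */
                    if (addr / REGION_4K == (addr + MAX_VLAN_FRAME) / REGION_4K)
                            return 1;
            }
            return 0;
    }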
* [dpdk-dev] [PATCH v4 09/15] fm10k: add dev start/stop functions 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (7 preceding siblings ...) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 08/15] fm10k: add RX/TX single queue start/stop function Chen Jing D(Mark) @ 2015-02-11 1:31 ` Chen Jing D(Mark) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 10/15] fm10k: add receive and tranmit function Chen Jing D(Mark) ` (6 subsequent siblings) 15 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-11 1:31 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add function to initialize RX queues. 2. Add function to initialize TX queues. 3. Add fm10k_dev_start, fm10k_dev_stop and fm10k_dev_close functions. 4. Add function to close mailbox service. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 224 +++++++++++++++++++++++++++++++++++ 1 files changed, 224 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 4a893e1..011b0bc 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -44,11 +44,14 @@ #define FM10K_RX_BUFF_ALIGN 512 /* Default delay to acquire mailbox lock */ #define FM10K_MBXLOCK_DELAY_US 20 +#define UINT64_LOWER_32BITS_MASK 0x00000000ffffffffULL /* Number of chars per uint32 type */ #define CHARS_PER_UINT32 (sizeof(uint32_t)) #define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1) +static void fm10k_close_mbx_service(struct fm10k_hw *hw); + static void fm10k_mbx_initlock(struct fm10k_hw *hw) { @@ -268,6 +271,98 @@ fm10k_dev_configure(struct rte_eth_dev *dev) } static int +fm10k_dev_tx_init(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, ret; + struct fm10k_tx_queue *txq; + uint64_t base_addr; + uint32_t size; + + /* Disable TXINT to avoid possible interrupt */ + for (i = 0; i < hw->mac.max_queues; i++) + FM10K_WRITE_REG(hw, FM10K_TXINT(i), + 3 << FM10K_TXINT_TIMER_SHIFT); + + /* Setup TX queue */ + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = dev->data->tx_queues[i]; + base_addr = txq->hw_ring_phys_addr; + size = txq->nb_desc * sizeof(struct fm10k_tx_desc); + + /* disable queue to avoid issues while updating state */ + ret = tx_queue_disable(hw, i); + if (ret) { + PMD_INIT_LOG(ERR, "failed to disable queue %d", i); + return -1; + } + + /* set location and size for descriptor ring */ + FM10K_WRITE_REG(hw, FM10K_TDBAL(i), + base_addr & UINT64_LOWER_32BITS_MASK); + FM10K_WRITE_REG(hw, FM10K_TDBAH(i), + base_addr >> (CHAR_BIT * sizeof(uint32_t))); + FM10K_WRITE_REG(hw, FM10K_TDLEN(i), size); + } + return 0; +} + +static int +fm10k_dev_rx_init(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, ret; + struct fm10k_rx_queue *rxq; + uint64_t base_addr; + uint32_t size; + uint32_t rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY; + uint16_t buf_size; + struct rte_pktmbuf_pool_private *mbp_priv; + + /* Disable RXINT to avoid possible interrupt */ + for (i = 0; i < hw->mac.max_queues; i++) + FM10K_WRITE_REG(hw, FM10K_RXINT(i), + 3 << FM10K_RXINT_TIMER_SHIFT); + + /* Setup RX queues */ + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = dev->data->rx_queues[i]; + base_addr = rxq->hw_ring_phys_addr; + size = rxq->nb_desc * sizeof(union fm10k_rx_desc); + + /* disable queue 
to avoid issues while updating state */ + ret = rx_queue_disable(hw, i); + if (ret) { + PMD_INIT_LOG(ERR, "failed to disable queue %d", i); + return -1; + } + + /* Setup the Base and Length of the Rx Descriptor Ring */ + FM10K_WRITE_REG(hw, FM10K_RDBAL(i), + base_addr & UINT64_LOWER_32BITS_MASK); + FM10K_WRITE_REG(hw, FM10K_RDBAH(i), + base_addr >> (CHAR_BIT * sizeof(uint32_t))); + FM10K_WRITE_REG(hw, FM10K_RDLEN(i), size); + + /* Configure the Rx buffer size for one buff without split */ + mbp_priv = rte_mempool_get_priv(rxq->mp); + buf_size = (uint16_t) (mbp_priv->mbuf_data_room_size - + RTE_PKTMBUF_HEADROOM); + FM10K_WRITE_REG(hw, FM10K_SRRCTL(i), + buf_size >> FM10K_SRRCTL_BSIZEPKT_SHIFT); + + /* Enable drop on empty, it's RO for VF */ + if (hw->mac.type == fm10k_mac_pf && rxq->drop_en) + rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY; + + FM10K_WRITE_REG(hw, FM10K_RXDCTL(i), rxdctl); + FM10K_WRITE_FLUSH(hw); + } + + return 0; +} + +static int fm10k_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) { struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -379,6 +474,125 @@ fm10k_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) return 0; } +/* fls = find last set bit = 32 minus the number of leading zeros */ +#ifndef fls +#define fls(x) (((x) == 0) ? 0 : (32 - __builtin_clz((x)))) +#endif +#define BSIZEPKT_ROUNDUP ((1 << FM10K_SRRCTL_BSIZEPKT_SHIFT) - 1) +static int +fm10k_dev_start(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, diag; + + PMD_INIT_FUNC_TRACE(); + + /* stop, init, then start the hw */ + diag = fm10k_stop_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware stop failed: %d", diag); + return -EIO; + } + + diag = fm10k_init_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware init failed: %d", diag); + return -EIO; + } + + diag = fm10k_start_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware start failed: %d", diag); + return -EIO; + } + + diag = fm10k_dev_tx_init(dev); + if (diag) { + PMD_INIT_LOG(ERR, "TX init failed: %d", diag); + return diag; + } + + diag = fm10k_dev_rx_init(dev); + if (diag) { + PMD_INIT_LOG(ERR, "RX init failed: %d", diag); + return diag; + } + + if (hw->mac.type == fm10k_mac_pf) { + /* Establish only VSI 0 as valid */ + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(0), FM10K_DGLORTMAP_ANY); + + /* Configure RSS bits used in RETA table */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(0), + fls(dev->data->nb_rx_queues - 1) << + FM10K_DGLORTDEC_RSSLENGTH_SHIFT); + + /* Invalidate all other GLORT entries */ + for (i = 1; i < FM10K_DGLORT_COUNT; i++) + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(i), + FM10K_DGLORTMAP_NONE); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + struct fm10k_rx_queue *rxq; + rxq = dev->data->rx_queues[i]; + + if (rxq->rx_deferred_start) + continue; + diag = fm10k_dev_rx_queue_start(dev, i); + if (diag != 0) { + int j; + for (j = 0; j < i; ++j) + rx_queue_clean(dev->data->rx_queues[j]); + return diag; + } + } + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + struct fm10k_tx_queue *txq; + txq = dev->data->tx_queues[i]; + + if (txq->tx_deferred_start) + continue; + diag = fm10k_dev_tx_queue_start(dev, i); + if (diag != 0) { + int j; + for (j = 0; j < dev->data->nb_rx_queues; ++j) + rx_queue_clean(dev->data->rx_queues[j]); + return diag; + } + } + + return 0; +} + +static void +fm10k_dev_stop(struct rte_eth_dev *dev) +{ + int i; + + PMD_INIT_FUNC_TRACE(); + + for (i = 0; i < 
dev->data->nb_tx_queues; i++) + fm10k_dev_tx_queue_stop(dev, i); + + for (i = 0; i < dev->data->nb_rx_queues; i++) + fm10k_dev_rx_queue_stop(dev, i); +} + +static void +fm10k_dev_close(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + /* Stop mailbox service first */ + fm10k_close_mbx_service(hw); + fm10k_dev_stop(dev); + fm10k_stop_hw(hw); +} + static int fm10k_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) @@ -993,8 +1207,18 @@ fm10k_setup_mbx_service(struct fm10k_hw *hw) return hw->mbx.ops.connect(hw, &hw->mbx); } +static void +fm10k_close_mbx_service(struct fm10k_hw *hw) +{ + /* Disconnect from SM for PF device or PF for VF device */ + hw->mbx.ops.disconnect(hw, &hw->mbx); +} + static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_configure = fm10k_dev_configure, + .dev_start = fm10k_dev_start, + .dev_stop = fm10k_dev_stop, + .dev_close = fm10k_dev_close, .stats_get = fm10k_stats_get, .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
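For orientation, the new dev_start/dev_stop/dev_close ops slot into the usual ethdev bring-up sequence driven by the application. The sketch below shows that sequence with a single Rx and Tx queue; the descriptor counts are illustrative and error handling is reduced to the essentials.

    #include <rte_ethdev.h>

    static int
    bring_up_port(uint8_t port_id, struct rte_mempool *mb_pool,
                  unsigned int socket_id)
    {
            struct rte_eth_conf conf = {
                    .rxmode = { .mq_mode = ETH_MQ_RX_NONE },
            };
            int ret;

            /* fm10k_dev_configure() runs here */
            ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
            if (ret < 0)
                    return ret;

            /* fm10k_rx_queue_setup() / fm10k_tx_queue_setup() run here */
            ret = rte_eth_rx_queue_setup(port_id, 0, 512, socket_id, NULL, mb_pool);
            if (ret < 0)
                    return ret;
            ret = rte_eth_tx_queue_setup(port_id, 0, 512, socket_id, NULL);
            if (ret < 0)
                    return ret;

            /* fm10k_dev_start() -> fm10k_dev_tx_init()/fm10k_dev_rx_init(),
             * then the per-queue start functions from the previous patch */
            return rte_eth_dev_start(port_id);
    }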
* [dpdk-dev] [PATCH v4 10/15] fm10k: add receive and tranmit function 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (8 preceding siblings ...) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 09/15] fm10k: add dev start/stop functions Chen Jing D(Mark) @ 2015-02-11 1:31 ` Chen Jing D(Mark) 2015-02-11 17:28 ` Jeff Shaw 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 11/15] fm10k: add PF RSS support Chen Jing D(Mark) ` (5 subsequent siblings) 15 siblings, 1 reply; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-11 1:31 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_recv_pkts and fm10k_xmit_pkts functions. 2. Link app function pointer to actual fm10k recv/xmit functions. 3. Change Makefile to compile new file fm10k_rxtx.c Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/Makefile | 1 + lib/librte_pmd_fm10k/fm10k.h | 7 + lib/librte_pmd_fm10k/fm10k_ethdev.c | 2 + lib/librte_pmd_fm10k/fm10k_rxtx.c | 299 +++++++++++++++++++++++++++++++++++ 4 files changed, 309 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c diff --git a/lib/librte_pmd_fm10k/Makefile b/lib/librte_pmd_fm10k/Makefile index 1da84e9..8ab788c 100644 --- a/lib/librte_pmd_fm10k/Makefile +++ b/lib/librte_pmd_fm10k/Makefile @@ -79,6 +79,7 @@ VPATH += $(RTE_SDK)/lib/librte_pmd_fm10k/base # all source are stored in SRCS-y # SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_ethdev.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_rxtx.c SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_pf.c SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_tlv.c diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index be990e5..a9b19cd 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -280,4 +280,11 @@ fm10k_addr_alignment_valid(struct rte_mbuf *mb) return 0; } + +/* Rx and Tx prototypes */ +uint16_t fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); + +uint16_t fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 011b0bc..af68ad7 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1245,6 +1245,8 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, PMD_INIT_FUNC_TRACE(); dev->dev_ops = &fm10k_eth_dev_ops; + dev->rx_pkt_burst = &fm10k_recv_pkts; + dev->tx_pkt_burst = &fm10k_xmit_pkts; /* only initialize in the primary process */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c new file mode 100644 index 0000000..1d95bff --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c @@ -0,0 +1,299 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ +#include <rte_ethdev.h> +#include <rte_common.h> +#include "fm10k.h" +#include "base/fm10k_type.h" + +#ifdef RTE_PMD_PACKET_PREFETCH +#define rte_packet_prefetch(p) rte_prefetch1(p) +#else +#define rte_packet_prefetch(p) do {} while (0) +#endif + +static inline void dump_rxd(union fm10k_rx_desc *rxd) +{ +#ifndef RTE_LIBRTE_FM10K_DEBUG_RX + RTE_SET_USED(rxd); +#endif + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| GLORT | PKT HDR & TYPE |"); + PMD_RX_LOG(DEBUG, "| 0x%08x | 0x%08x |", rxd->d.glort, + rxd->d.data); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| VLAN & LEN | STATUS |"); + PMD_RX_LOG(DEBUG, "| 0x%08x | 0x%08x |", rxd->d.vlan_len, + rxd->d.staterr); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| RESERVED | RSS_HASH |"); + PMD_RX_LOG(DEBUG, "| 0x%08x | 0x%08x |", 0, rxd->d.rss); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| TIME TAG |"); + PMD_RX_LOG(DEBUG, "| 0x%016lx |", rxd->q.timestamp); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); +} + +static inline void +rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d) +{ + uint16_t ptype; + static const uint16_t pt_lut[] = { 0, + PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, + PKT_RX_IPV6_HDR, PKT_RX_IPV6_HDR_EXT, + 0, 0, 0 + }; + + if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK) + m->ol_flags |= PKT_RX_RSS_HASH; + + if ((d->d.staterr & (FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)) == + (FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)) + m->ol_flags |= PKT_RX_IP_CKSUM_BAD; + + if ((d->d.staterr & (FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)) == + (FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)) + m->ol_flags |= PKT_RX_L4_CKSUM_BAD; + + if (d->d.staterr & FM10K_RXD_STATUS_VEXT) + m->ol_flags |= PKT_RX_VLAN_PKT; + + if (d->d.staterr & FM10K_RXD_STATUS_HBO) + m->ol_flags |= PKT_RX_HBUF_OVERFLOW; + + if (d->d.staterr & FM10K_RXD_STATUS_RXE) + m->ol_flags |= PKT_RX_RECIP_ERR; + + ptype = (d->d.data & FM10K_RXD_PKTTYPE_MASK_L3) >> + FM10K_RXD_PKTTYPE_SHIFT; + m->ol_flags |= pt_lut[(uint8_t)ptype]; +} + +uint16_t +fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct rte_mbuf *mbuf; + union fm10k_rx_desc desc; + struct fm10k_rx_queue *q = rx_queue; + uint16_t count = 0; + int alloc = 0; + uint16_t next_dd; + + next_dd = q->next_dd; + + nb_pkts = RTE_MIN(nb_pkts, q->alloc_thresh); + for (count = 0; count < nb_pkts; ++count) { + mbuf = q->sw_ring[next_dd]; 
+ desc = q->hw_ring[next_dd]; + if (!(desc.d.staterr & FM10K_RXD_STATUS_DD)) + break; +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX + dump_rxd(&desc); +#endif + rte_pktmbuf_pkt_len(mbuf) = desc.w.length; + rte_pktmbuf_data_len(mbuf) = desc.w.length; + + mbuf->ol_flags = 0; +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE + rx_desc_to_ol_flags(mbuf, &desc); +#endif + + mbuf->hash.rss = desc.d.rss; + + rx_pkts[count] = mbuf; + if (++next_dd == q->nb_desc) { + next_dd = 0; + alloc = 1; + } + + /* Prefetch next mbuf while processing current one. */ + rte_prefetch0(q->sw_ring[next_dd]); + + /* + * When next RX descriptor is on a cache-line boundary, + * prefetch the next 4 RX descriptors and the next 8 pointers + * to mbufs. + */ + if ((next_dd & 0x3) == 0) { + rte_prefetch0(&q->hw_ring[next_dd]); + rte_prefetch0(&q->sw_ring[next_dd]); + } + } + + q->next_dd = next_dd; + + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { + rte_mempool_get_bulk(q->mp, (void **)&q->sw_ring[q->next_alloc], + q->alloc_thresh); + for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) { + mbuf = q->sw_ring[q->next_alloc]; + + /* setup static mbuf fields */ + fm10k_pktmbuf_reset(mbuf, q->port_id); + + /* write descriptor */ + desc.q.pkt_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + desc.q.hdr_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + q->hw_ring[q->next_alloc] = desc; + } + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger); + q->next_trigger += q->alloc_thresh; + if (q->next_trigger >= q->nb_desc) { + q->next_trigger = q->alloc_thresh - 1; + q->next_alloc = 0; + } + } + + return count; +} + +static inline void tx_free_descriptors(struct fm10k_tx_queue *q) +{ + uint16_t next_rs, count = 0; + + next_rs = fifo_peek(&q->rs_tracker); + if (!(q->hw_ring[next_rs].flags & FM10K_TXD_FLAG_DONE)) + return; + + /* the DONE flag is set on this descriptor so remove the ID + * from the RS bit tracker and free the buffers */ + fifo_remove(&q->rs_tracker); + + /* wrap around? if so, free buffers from last_free up to but NOT + * including nb_desc */ + if (q->last_free > next_rs) { + count = q->nb_desc - q->last_free; + while (q->last_free < q->nb_desc) { + rte_pktmbuf_free_seg(q->sw_ring[q->last_free]); + q->sw_ring[q->last_free] = NULL; + ++q->last_free; + } + q->last_free = 0; + } + + /* adjust free descriptor count before the next loop */ + q->nb_free += count + (next_rs + 1 - q->last_free); + + /* free buffers from last_free, up to and including next_rs */ + while (q->last_free <= next_rs) { + rte_pktmbuf_free_seg(q->sw_ring[q->last_free]); + q->sw_ring[q->last_free] = NULL; + ++q->last_free; + } + + if (q->last_free == q->nb_desc) + q->last_free = 0; +} + +static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb) +{ + uint16_t last_id; + uint8_t flags; + + /* always set the LAST flag on the last descriptor used to + * transmit the packet */ + flags = FM10K_TXD_FLAG_LAST; + last_id = q->next_free + mb->nb_segs - 1; + if (last_id >= q->nb_desc) + last_id = last_id - q->nb_desc; + + /* but only set the RS flag on the last descriptor if rs_thresh + * descriptors will be used since the RS flag was last set */ + if ((q->nb_used + mb->nb_segs) >= q->rs_thresh) { + flags |= FM10K_TXD_FLAG_RS; + fifo_insert(&q->rs_tracker, last_id); + q->nb_used = 0; + } else { + q->nb_used = q->nb_used + mb->nb_segs; + } + + q->hw_ring[last_id].flags = flags; + q->nb_free -= mb->nb_segs; + + /* set checksum flags on first descriptor of packet. 
SCTP checksum + * offload is not supported, but we do not explicitely check for this + * case in favor of greatly simplified processing. */ + if (mb->ol_flags & (PKT_TX_IPV4_CSUM | PKT_TX_L4_MASK)) + q->hw_ring[q->next_free].flags |= FM10K_TXD_FLAG_CSUM; + + /* set vlan if requested */ + if (mb->ol_flags & PKT_TX_VLAN_PKT) + q->hw_ring[q->next_free].vlan = mb->vlan_tci; + + /* fill up the rings */ + for (; mb != NULL; mb = mb->next) { + q->sw_ring[q->next_free] = mb; + q->hw_ring[q->next_free].buffer_addr = + rte_cpu_to_le_64(MBUF_DMA_ADDR(mb)); + q->hw_ring[q->next_free].buflen = + rte_cpu_to_le_16(rte_pktmbuf_data_len(mb)); + if (++q->next_free == q->nb_desc) + q->next_free = 0; + } +} + +uint16_t +fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + struct fm10k_tx_queue *q = tx_queue; + struct rte_mbuf *mb; + uint16_t count; + + for (count = 0; count < nb_pkts; ++count) { + mb = tx_pkts[count]; + + /* running low on descriptors? try to free some... */ + if (q->nb_free < q->free_trigger) + tx_free_descriptors(q); + + /* make sure there are enough free descriptors to transmit the + * entire packet before doing anything */ + if (q->nb_free < mb->nb_segs) + break; + + /* sanity check to make sure the mbuf is valid */ + if ((mb->nb_segs == 0) || + ((mb->nb_segs > 1) && (mb->next == NULL))) + break; + + /* process the packet */ + tx_xmit_pkt(q, mb); + } + + /* update the tail pointer if any packets were processed */ + if (count > 0) + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_free); + + return count; +} -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
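Once dev->rx_pkt_burst and dev->tx_pkt_burst point at these functions, applications reach them through the generic burst API. A minimal forwarding sketch (burst size and queue 0 are arbitrary choices) also shows why fm10k_xmit_pkts() may legitimately send fewer packets than requested: the caller keeps ownership of, and must free, whatever is left over.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* Receive one burst on rx_port and send it out of tx_port. */
    static void
    forward_once(uint8_t rx_port, uint8_t tx_port)
    {
            struct rte_mbuf *bufs[BURST_SIZE];
            uint16_t nb_rx, nb_tx, i;

            nb_rx = rte_eth_rx_burst(rx_port, 0, bufs, BURST_SIZE); /* fm10k_recv_pkts() */
            if (nb_rx == 0)
                    return;

            nb_tx = rte_eth_tx_burst(tx_port, 0, bufs, nb_rx);      /* fm10k_xmit_pkts() */

            /* the Tx burst stops early when descriptors run short */
            for (i = nb_tx; i < nb_rx; i++)
                    rte_pktmbuf_free(bufs[i]);
    }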
* Re: [dpdk-dev] [PATCH v4 10/15] fm10k: add receive and tranmit function 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 10/15] fm10k: add receive and tranmit function Chen Jing D(Mark) @ 2015-02-11 17:28 ` Jeff Shaw 2015-02-12 4:04 ` Chen, Jing D 0 siblings, 1 reply; 147+ messages in thread From: Jeff Shaw @ 2015-02-11 17:28 UTC (permalink / raw) To: Chen Jing D(Mark); +Cc: dev On Wed, Feb 11, 2015 at 09:31:33AM +0800, Chen Jing D(Mark) wrote: > +uint16_t > +fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, > + uint16_t nb_pkts) > +{ > + struct rte_mbuf *mbuf; > + union fm10k_rx_desc desc; > + struct fm10k_rx_queue *q = rx_queue; > + uint16_t count = 0; > + int alloc = 0; > + uint16_t next_dd; > + > + next_dd = q->next_dd; > + > + nb_pkts = RTE_MIN(nb_pkts, q->alloc_thresh); > + for (count = 0; count < nb_pkts; ++count) { > + mbuf = q->sw_ring[next_dd]; > + desc = q->hw_ring[next_dd]; > + if (!(desc.d.staterr & FM10K_RXD_STATUS_DD)) > + break; > +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX > + dump_rxd(&desc); > +#endif > + rte_pktmbuf_pkt_len(mbuf) = desc.w.length; > + rte_pktmbuf_data_len(mbuf) = desc.w.length; > + > + mbuf->ol_flags = 0; > +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE > + rx_desc_to_ol_flags(mbuf, &desc); > +#endif > + > + mbuf->hash.rss = desc.d.rss; > + > + rx_pkts[count] = mbuf; > + if (++next_dd == q->nb_desc) { > + next_dd = 0; > + alloc = 1; > + } > + > + /* Prefetch next mbuf while processing current one. */ > + rte_prefetch0(q->sw_ring[next_dd]); > + > + /* > + * When next RX descriptor is on a cache-line boundary, > + * prefetch the next 4 RX descriptors and the next 8 pointers > + * to mbufs. > + */ > + if ((next_dd & 0x3) == 0) { > + rte_prefetch0(&q->hw_ring[next_dd]); > + rte_prefetch0(&q->sw_ring[next_dd]); > + } > + } > + > + q->next_dd = next_dd; > + > + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { > + rte_mempool_get_bulk(q->mp, (void **)&q->sw_ring[q->next_alloc], > + q->alloc_thresh); The return value should be checked here in case the mempool runs out of buffers. Thanks Helin for spotting this. I'm not sure how I missed it originally. > + for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) { > + mbuf = q->sw_ring[q->next_alloc]; > + > + /* setup static mbuf fields */ > + fm10k_pktmbuf_reset(mbuf, q->port_id); > + > + /* write descriptor */ > + desc.q.pkt_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); > + desc.q.hdr_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); > + q->hw_ring[q->next_alloc] = desc; > + } > + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger); > + q->next_trigger += q->alloc_thresh; > + if (q->next_trigger >= q->nb_desc) { > + q->next_trigger = q->alloc_thresh - 1; > + q->next_alloc = 0; > + } > + } > + > + return count; > +} > + Thanks, Jeff ^ permalink raw reply [flat|nested] 147+ messages in thread
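One way the requested check could look is sketched below. It is written against the structures and helpers quoted in the earlier patches (fm10k.h) and is only an illustration, not the exact change that went into the next revision: skip the refill when the mempool is empty so the packets already harvested are still returned, and retry on a later burst.

    #include <string.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_log.h>
    #include "fm10k.h"
    #include "base/fm10k_type.h"

    /* Refill free Rx ring slots, giving up cleanly if the mempool is
     * exhausted instead of writing stale pointers into the ring. */
    static inline void
    rx_ring_checked_refill(struct fm10k_rx_queue *q)
    {
            union fm10k_rx_desc desc;
            struct rte_mbuf *mbuf;

            if (rte_mempool_get_bulk(q->mp, (void **)&q->sw_ring[q->next_alloc],
                            q->alloc_thresh) != 0) {
                    RTE_LOG(ERR, PMD, "%s(): mempool empty, Rx refill skipped\n",
                            __func__);
                    return;
            }

            memset(&desc, 0, sizeof(desc));
            for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) {
                    mbuf = q->sw_ring[q->next_alloc];
                    fm10k_pktmbuf_reset(mbuf, q->port_id);
                    desc.q.pkt_addr = MBUF_DMA_ADDR_DEFAULT(mbuf);
                    desc.q.hdr_addr = MBUF_DMA_ADDR_DEFAULT(mbuf);
                    q->hw_ring[q->next_alloc] = desc;
            }
            FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger);
            q->next_trigger += q->alloc_thresh;
            if (q->next_trigger >= q->nb_desc) {
                    q->next_trigger = q->alloc_thresh - 1;
                    q->next_alloc = 0;
            }
    }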
* Re: [dpdk-dev] [PATCH v4 10/15] fm10k: add receive and tranmit function 2015-02-11 17:28 ` Jeff Shaw @ 2015-02-12 4:04 ` Chen, Jing D 0 siblings, 0 replies; 147+ messages in thread From: Chen, Jing D @ 2015-02-12 4:04 UTC (permalink / raw) To: Shaw, Jeffrey B; +Cc: dev > -----Original Message----- > From: Shaw, Jeffrey B > Sent: Thursday, February 12, 2015 1:29 AM > To: Chen, Jing D > Cc: dev@dpdk.org; Zhang, Helin; Qiu, Michael; nhorman@tuxdriver.com; > thomas.monjalon@6wind.com; david.marchand@6wind.com > Subject: Re: [PATCH v4 10/15] fm10k: add receive and tranmit function > > On Wed, Feb 11, 2015 at 09:31:33AM +0800, Chen Jing D(Mark) wrote: > > > +uint16_t > > +fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, > > + uint16_t nb_pkts) > > +{ > > + struct rte_mbuf *mbuf; > > + union fm10k_rx_desc desc; > > + struct fm10k_rx_queue *q = rx_queue; > > + uint16_t count = 0; > > + int alloc = 0; > > + uint16_t next_dd; > > + > > + next_dd = q->next_dd; > > + > > + nb_pkts = RTE_MIN(nb_pkts, q->alloc_thresh); > > + for (count = 0; count < nb_pkts; ++count) { > > + mbuf = q->sw_ring[next_dd]; > > + desc = q->hw_ring[next_dd]; > > + if (!(desc.d.staterr & FM10K_RXD_STATUS_DD)) > > + break; > > +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX > > + dump_rxd(&desc); > > +#endif > > + rte_pktmbuf_pkt_len(mbuf) = desc.w.length; > > + rte_pktmbuf_data_len(mbuf) = desc.w.length; > > + > > + mbuf->ol_flags = 0; > > +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE > > + rx_desc_to_ol_flags(mbuf, &desc); > > +#endif > > + > > + mbuf->hash.rss = desc.d.rss; > > + > > + rx_pkts[count] = mbuf; > > + if (++next_dd == q->nb_desc) { > > + next_dd = 0; > > + alloc = 1; > > + } > > + > > + /* Prefetch next mbuf while processing current one. */ > > + rte_prefetch0(q->sw_ring[next_dd]); > > + > > + /* > > + * When next RX descriptor is on a cache-line boundary, > > + * prefetch the next 4 RX descriptors and the next 8 pointers > > + * to mbufs. > > + */ > > + if ((next_dd & 0x3) == 0) { > > + rte_prefetch0(&q->hw_ring[next_dd]); > > + rte_prefetch0(&q->sw_ring[next_dd]); > > + } > > + } > > + > > + q->next_dd = next_dd; > > + > > + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { > > + rte_mempool_get_bulk(q->mp, (void **)&q->sw_ring[q- > >next_alloc], > > + q->alloc_thresh); > > The return value should be checked here in case the mempool runs out > of buffers. Thanks Helin for spotting this. I'm not sure how I missed it > originally. Thanks! I'll change accordingly. > > > + for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) { > > + mbuf = q->sw_ring[q->next_alloc]; > > + > > + /* setup static mbuf fields */ > > + fm10k_pktmbuf_reset(mbuf, q->port_id); > > + > > + /* write descriptor */ > > + desc.q.pkt_addr = > MBUF_DMA_ADDR_DEFAULT(mbuf); > > + desc.q.hdr_addr = > MBUF_DMA_ADDR_DEFAULT(mbuf); > > + q->hw_ring[q->next_alloc] = desc; > > + } > > + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger); > > + q->next_trigger += q->alloc_thresh; > > + if (q->next_trigger >= q->nb_desc) { > > + q->next_trigger = q->alloc_thresh - 1; > > + q->next_alloc = 0; > > + } > > + } > > + > > + return count; > > +} > > + > > Thanks, > Jeff ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v4 11/15] fm10k: add PF RSS support 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (9 preceding siblings ...) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 10/15] fm10k: add receive and tranmit function Chen Jing D(Mark) @ 2015-02-11 1:31 ` Chen Jing D(Mark) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 12/15] fm10k: Add scatter receive function Chen Jing D(Mark) ` (4 subsequent siblings) 15 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-11 1:31 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Configure RSS in fm10k_dev_rx_init function. 2. Add fm10k_rss_hash_update and fm10k_rss_hash_conf_get to get and inquery RSS configuration. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 156 +++++++++++++++++++++++++++++++++++ 1 files changed, 156 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index af68ad7..d434b02 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -270,6 +270,78 @@ fm10k_dev_configure(struct rte_eth_dev *dev) return 0; } +static void +fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct rte_eth_conf *dev_conf = &dev->data->dev_conf; + uint32_t mrqc, *key, i, reta, j; + uint64_t hf; + +#define RSS_KEY_SIZE 40 + static uint8_t rss_intel_key[RSS_KEY_SIZE] = { + 0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2, + 0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0, + 0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4, + 0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C, + 0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA, + }; + + if (dev->data->nb_rx_queues == 1 || + dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS || + dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) + return; + + /* random key is rss_intel_key (default) or user provided (rss_key) */ + if (dev_conf->rx_adv_conf.rss_conf.rss_key == NULL) + key = (uint32_t *)rss_intel_key; + else + key = (uint32_t *)dev_conf->rx_adv_conf.rss_conf.rss_key; + + /* Now fill our hash function seeds, 4 bytes at a time */ + for (i = 0; i < RSS_KEY_SIZE / sizeof(*key); ++i) + FM10K_WRITE_REG(hw, FM10K_RSSRK(0, i), key[i]); + + /* + * Fill in redirection table + * The byte-swap is needed because NIC registers are in + * little-endian order. + */ + reta = 0; + for (i = 0, j = 0; i < FM10K_RETA_SIZE; i++, j++) { + if (j == dev->data->nb_rx_queues) + j = 0; + reta = (reta << CHAR_BIT) | j; + if ((i & 3) == 3) + FM10K_WRITE_REG(hw, FM10K_RETA(0, i >> 2), + rte_bswap32(reta)); + } + + /* + * Generate RSS hash based on packet types, TCP/UDP + * port numbers and/or IPv4/v6 src and dst addresses + */ + hf = dev_conf->rx_adv_conf.rss_conf.rss_hf; + mrqc = 0; + mrqc |= (hf & ETH_RSS_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? 
FM10K_MRQC_UDP_IPV6 : 0; + + if (mrqc == 0) { + PMD_INIT_LOG(ERR, "Specified RSS mode 0x%"PRIx64"is not" + "supported", hf); + return; + } + + FM10K_WRITE_REG(hw, FM10K_MRQC(0), mrqc); +} + static int fm10k_dev_tx_init(struct rte_eth_dev *dev) { @@ -359,6 +431,8 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_FLUSH(hw); } + /* Configure RSS if applicable */ + fm10k_dev_mq_rx_configure(dev); return 0; } @@ -1165,6 +1239,86 @@ fm10k_reta_query(struct rte_eth_dev *dev, return 0; } +static int +fm10k_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t *key = (uint32_t *)rss_conf->rss_key; + uint32_t mrqc; + uint64_t hf = rss_conf->rss_hf; + int i; + + PMD_INIT_FUNC_TRACE(); + + if (rss_conf->rss_key_len < FM10K_RSSRK_SIZE * + FM10K_RSSRK_ENTRIES_PER_REG) + return -EINVAL; + + if (hf == 0) + return -EINVAL; + + mrqc = 0; + mrqc |= (hf & ETH_RSS_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0; + + /* If the mapping doesn't fit any supported, return */ + if (mrqc == 0) + return -EINVAL; + + if (key != NULL) + for (i = 0; i < FM10K_RSSRK_SIZE; ++i) + FM10K_WRITE_REG(hw, FM10K_RSSRK(0, i), key[i]); + + FM10K_WRITE_REG(hw, FM10K_MRQC(0), mrqc); + + return 0; +} + +static int +fm10k_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t *key = (uint32_t *)rss_conf->rss_key; + uint32_t mrqc; + uint64_t hf; + int i; + + PMD_INIT_FUNC_TRACE(); + + if (rss_conf->rss_key_len < FM10K_RSSRK_SIZE * + FM10K_RSSRK_ENTRIES_PER_REG) + return -EINVAL; + + if (key != NULL) + for (i = 0; i < FM10K_RSSRK_SIZE; ++i) + key[i] = FM10K_READ_REG(hw, FM10K_RSSRK(0, i)); + + mrqc = FM10K_READ_REG(hw, FM10K_MRQC(0)); + hf = 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? ETH_RSS_IPV4_TCP : 0; + hf |= (mrqc & FM10K_MRQC_IPV4) ? ETH_RSS_IPV4 : 0; + hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6 : 0; + hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6_EX : 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP : 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP_EX : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? ETH_RSS_IPV4_UDP : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP_EX : 0; + + rss_conf->rss_hf = hf; + + return 0; +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -1233,6 +1387,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .tx_queue_release = fm10k_tx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, + .rss_hash_update = fm10k_rss_hash_update, + .rss_hash_conf_get = fm10k_rss_hash_conf_get, }; static int -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
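At the ethdev level the two new ops are reached through rte_eth_dev_rss_hash_update() and rte_eth_dev_rss_hash_conf_get(). The sketch below updates the hash configuration at run time; the choice of hash types is arbitrary, and a full 40-byte key is supplied because the op rejects anything shorter (the key bytes are the same default key used in fm10k_dev_mq_rx_configure()).

    #include <rte_ethdev.h>

    static uint8_t rss_key_40[40] = {
            0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2,
            0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0,
            0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4,
            0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C,
            0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
    };

    /* Restrict RSS to TCP over IPv4/IPv6; ends up in fm10k_rss_hash_update(). */
    static int
    set_rss_tcp_only(uint8_t port_id)
    {
            struct rte_eth_rss_conf rss_conf = {
                    .rss_key     = rss_key_40,
                    .rss_key_len = sizeof(rss_key_40),
                    .rss_hf      = ETH_RSS_IPV4_TCP | ETH_RSS_IPV6_TCP,
            };

            return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
    }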
* [dpdk-dev] [PATCH v4 12/15] fm10k: Add scatter receive function 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (10 preceding siblings ...) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 11/15] fm10k: add PF RSS support Chen Jing D(Mark) @ 2015-02-11 1:31 ` Chen Jing D(Mark) 2015-02-11 17:32 ` Jeff Shaw 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 13/15] fm10k: add function to set vlan Chen Jing D(Mark) ` (3 subsequent siblings) 15 siblings, 1 reply; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-11 1:31 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_recv_scattered_pkts function to receive jumbo frame and multi-segment packets. 2. Configure correct receive function in rx_init and dev_init. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k.h | 3 + lib/librte_pmd_fm10k/fm10k_ethdev.c | 15 ++++ lib/librte_pmd_fm10k/fm10k_rxtx.c | 128 +++++++++++++++++++++++++++++++++++ 3 files changed, 146 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index a9b19cd..8741936 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -285,6 +285,9 @@ fm10k_addr_alignment_valid(struct rte_mbuf *mb) uint16_t fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +uint16_t fm10k_recv_scattered_pkts(void *rx_queue, + struct rte_mbuf **rx_pkts, uint16_t nb_pkts); + uint16_t fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index d434b02..a2fcd63 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -423,6 +423,13 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_REG(hw, FM10K_SRRCTL(i), buf_size >> FM10K_SRRCTL_BSIZEPKT_SHIFT); + /* It adds dual VLAN length for supporting dual VLAN */ + if ((dev->data->dev_conf.rxmode.max_rx_pkt_len + + 2 * FM10K_VLAN_TAG_SIZE) > buf_size){ + dev->data->scattered_rx = 1; + dev->rx_pkt_burst = fm10k_recv_scattered_pkts; + } + /* Enable drop on empty, it's RO for VF */ if (hw->mac.type == fm10k_mac_pf && rxq->drop_en) rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY; @@ -431,6 +438,11 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_FLUSH(hw); } + if (dev->data->dev_conf.rxmode.enable_scatter) { + dev->rx_pkt_burst = fm10k_recv_scattered_pkts; + dev->data->scattered_rx = 1; + } + /* Configure RSS if applicable */ fm10k_dev_mq_rx_configure(dev); return 0; @@ -1404,6 +1416,9 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, dev->rx_pkt_burst = &fm10k_recv_pkts; dev->tx_pkt_burst = &fm10k_xmit_pkts; + if (dev->data->scattered_rx) + dev->rx_pkt_burst = &fm10k_recv_scattered_pkts; + /* only initialize in the primary process */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c index 1d95bff..7591a99 100644 --- a/lib/librte_pmd_fm10k/fm10k_rxtx.c +++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c @@ -177,6 +177,134 @@ fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, return count; } +uint16_t +fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct rte_mbuf *mbuf; + union fm10k_rx_desc desc; + struct fm10k_rx_queue *q = rx_queue; + uint16_t count = 0; + uint16_t nb_rcv, nb_seg; + int alloc = 0; + 
uint16_t next_dd; + struct rte_mbuf *first_seg = q->pkt_first_seg; + struct rte_mbuf *last_seg = q->pkt_last_seg; + + next_dd = q->next_dd; + nb_rcv = 0; + + nb_seg = RTE_MIN(nb_pkts, q->alloc_thresh); + for (count = 0; count < nb_seg; count++) { + mbuf = q->sw_ring[next_dd]; + desc = q->hw_ring[next_dd]; + if (!(desc.d.staterr & FM10K_RXD_STATUS_DD)) + break; +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX + dump_rxd(&desc); +#endif + + if (++next_dd == q->nb_desc) { + next_dd = 0; + alloc = 1; + } + + /* Prefetch next mbuf while processing current one. */ + rte_prefetch0(q->sw_ring[next_dd]); + + /* + * When next RX descriptor is on a cache-line boundary, + * prefetch the next 4 RX descriptors and the next 8 pointers + * to mbufs. + */ + if ((next_dd & 0x3) == 0) { + rte_prefetch0(&q->hw_ring[next_dd]); + rte_prefetch0(&q->sw_ring[next_dd]); + } + + /* Fill data length */ + rte_pktmbuf_data_len(mbuf) = desc.w.length; + + /* + * If this is the first buffer of the received packet, + * set the pointer to the first mbuf of the packet and + * initialize its context. + * Otherwise, update the total length and the number of segments + * of the current scattered packet, and update the pointer to + * the last mbuf of the current packet. + */ + if (!first_seg) { + first_seg = mbuf; + first_seg->pkt_len = desc.w.length; + } else { + first_seg->pkt_len = + (uint16_t)(first_seg->pkt_len + + rte_pktmbuf_data_len(mbuf)); + first_seg->nb_segs++; + last_seg->next = mbuf; + } + + /* + * If this is not the last buffer of the received packet, + * update the pointer to the last mbuf of the current scattered + * packet and continue to parse the RX ring. + */ + if (!(desc.d.staterr & FM10K_RXD_STATUS_EOP)) { + last_seg = mbuf; + continue; + } + + first_seg->ol_flags = 0; +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE + rx_desc_to_ol_flags(first_seg, &desc); +#endif + first_seg->hash.rss = desc.d.rss; + + /* Prefetch data of first segment, if configured to do so. */ + rte_packet_prefetch((char *)first_seg->buf_addr + + first_seg->data_off); + + /* + * Store the mbuf address into the next entry of the array + * of returned packets. + */ + rx_pkts[nb_rcv++] = first_seg; + + /* + * Setup receipt context for a new packet. + */ + first_seg = NULL; + } + + q->next_dd = next_dd; + q->pkt_first_seg = first_seg; + q->pkt_last_seg = last_seg; + + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { + rte_mempool_get_bulk(q->mp, (void **)&q->sw_ring[q->next_alloc], + q->alloc_thresh); + for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) { + mbuf = q->sw_ring[q->next_alloc]; + + /* setup static mbuf fields */ + fm10k_pktmbuf_reset(mbuf, q->port_id); + + /* write descriptor */ + desc.q.pkt_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + desc.q.hdr_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + q->hw_ring[q->next_alloc] = desc; + } + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger); + q->next_trigger += q->alloc_thresh; + if (q->next_trigger >= q->nb_desc) { + q->next_trigger = q->alloc_thresh - 1; + q->next_alloc = 0; + } + } + + return nb_rcv; +} + static inline void tx_free_descriptors(struct fm10k_tx_queue *q) { uint16_t next_rs, count = 0; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
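The scattered receive path is selected either automatically, when the configured maximum frame no longer fits a single mbuf data room, or explicitly through the rxmode flag. A sketch of the explicit route at configure time follows; the 9000-byte frame length is an illustrative jumbo value, not a driver requirement.

    #include <rte_ethdev.h>

    /* Request jumbo frames with multi-segment Rx; fm10k_dev_rx_init()
     * then installs fm10k_recv_scattered_pkts() as the Rx burst handler. */
    static int
    configure_scattered_rx(uint8_t port_id)
    {
            struct rte_eth_conf conf = {
                    .rxmode = {
                            .jumbo_frame    = 1,
                            .enable_scatter = 1,
                            .max_rx_pkt_len = 9000,
                    },
            };

            return rte_eth_dev_configure(port_id, 1, 1, &conf);
    }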
* Re: [dpdk-dev] [PATCH v4 12/15] fm10k: Add scatter receive function 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 12/15] fm10k: Add scatter receive function Chen Jing D(Mark) @ 2015-02-11 17:32 ` Jeff Shaw 2015-02-12 4:04 ` Chen, Jing D 0 siblings, 1 reply; 147+ messages in thread From: Jeff Shaw @ 2015-02-11 17:32 UTC (permalink / raw) To: Chen Jing D(Mark); +Cc: dev On Wed, Feb 11, 2015 at 09:31:35AM +0800, Chen Jing D(Mark) wrote: > > +uint16_t > +fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, > + uint16_t nb_pkts) > +{ > + struct rte_mbuf *mbuf; > + union fm10k_rx_desc desc; > + struct fm10k_rx_queue *q = rx_queue; > + uint16_t count = 0; > + uint16_t nb_rcv, nb_seg; > + int alloc = 0; > + uint16_t next_dd; > + struct rte_mbuf *first_seg = q->pkt_first_seg; > + struct rte_mbuf *last_seg = q->pkt_last_seg; > + > + next_dd = q->next_dd; > + nb_rcv = 0; > + > + nb_seg = RTE_MIN(nb_pkts, q->alloc_thresh); > + for (count = 0; count < nb_seg; count++) { > + mbuf = q->sw_ring[next_dd]; > + desc = q->hw_ring[next_dd]; > + if (!(desc.d.staterr & FM10K_RXD_STATUS_DD)) > + break; > +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX > + dump_rxd(&desc); > +#endif > + > + if (++next_dd == q->nb_desc) { > + next_dd = 0; > + alloc = 1; > + } > + > + /* Prefetch next mbuf while processing current one. */ > + rte_prefetch0(q->sw_ring[next_dd]); > + > + /* > + * When next RX descriptor is on a cache-line boundary, > + * prefetch the next 4 RX descriptors and the next 8 pointers > + * to mbufs. > + */ > + if ((next_dd & 0x3) == 0) { > + rte_prefetch0(&q->hw_ring[next_dd]); > + rte_prefetch0(&q->sw_ring[next_dd]); > + } > + > + /* Fill data length */ > + rte_pktmbuf_data_len(mbuf) = desc.w.length; > + > + /* > + * If this is the first buffer of the received packet, > + * set the pointer to the first mbuf of the packet and > + * initialize its context. > + * Otherwise, update the total length and the number of segments > + * of the current scattered packet, and update the pointer to > + * the last mbuf of the current packet. > + */ > + if (!first_seg) { > + first_seg = mbuf; > + first_seg->pkt_len = desc.w.length; > + } else { > + first_seg->pkt_len = > + (uint16_t)(first_seg->pkt_len + > + rte_pktmbuf_data_len(mbuf)); > + first_seg->nb_segs++; > + last_seg->next = mbuf; > + } > + > + /* > + * If this is not the last buffer of the received packet, > + * update the pointer to the last mbuf of the current scattered > + * packet and continue to parse the RX ring. > + */ > + if (!(desc.d.staterr & FM10K_RXD_STATUS_EOP)) { > + last_seg = mbuf; > + continue; > + } > + > + first_seg->ol_flags = 0; > +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE > + rx_desc_to_ol_flags(first_seg, &desc); > +#endif > + first_seg->hash.rss = desc.d.rss; > + > + /* Prefetch data of first segment, if configured to do so. */ > + rte_packet_prefetch((char *)first_seg->buf_addr + > + first_seg->data_off); > + > + /* > + * Store the mbuf address into the next entry of the array > + * of returned packets. > + */ > + rx_pkts[nb_rcv++] = first_seg; > + > + /* > + * Setup receipt context for a new packet. > + */ > + first_seg = NULL; > + } > + > + q->next_dd = next_dd; > + q->pkt_first_seg = first_seg; > + q->pkt_last_seg = last_seg; > + > + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { > + rte_mempool_get_bulk(q->mp, (void **)&q->sw_ring[q->next_alloc], > + q->alloc_thresh); Same thing here. The return value should be checked in case the mempool runs out of buffers. 
> + for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) { > + mbuf = q->sw_ring[q->next_alloc]; > + > + /* setup static mbuf fields */ > + fm10k_pktmbuf_reset(mbuf, q->port_id); > + > + /* write descriptor */ > + desc.q.pkt_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); > + desc.q.hdr_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); > + q->hw_ring[q->next_alloc] = desc; > + } > + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger); > + q->next_trigger += q->alloc_thresh; > + if (q->next_trigger >= q->nb_desc) { > + q->next_trigger = q->alloc_thresh - 1; > + q->next_alloc = 0; > + } > + } > + > + return nb_rcv; > +} > + ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v4 12/15] fm10k: Add scatter receive function 2015-02-11 17:32 ` Jeff Shaw @ 2015-02-12 4:04 ` Chen, Jing D 0 siblings, 0 replies; 147+ messages in thread From: Chen, Jing D @ 2015-02-12 4:04 UTC (permalink / raw) To: Shaw, Jeffrey B; +Cc: dev > -----Original Message----- > From: Shaw, Jeffrey B > Sent: Thursday, February 12, 2015 1:33 AM > To: Chen, Jing D > Cc: dev@dpdk.org; Zhang, Helin; Qiu, Michael; nhorman@tuxdriver.com; > thomas.monjalon@6wind.com; david.marchand@6wind.com > Subject: Re: [PATCH v4 12/15] fm10k: Add scatter receive function > > On Wed, Feb 11, 2015 at 09:31:35AM +0800, Chen Jing D(Mark) wrote: > > > > +uint16_t > > +fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, > > + uint16_t nb_pkts) > > +{ > > + struct rte_mbuf *mbuf; > > + union fm10k_rx_desc desc; > > + struct fm10k_rx_queue *q = rx_queue; > > + uint16_t count = 0; > > + uint16_t nb_rcv, nb_seg; > > + int alloc = 0; > > + uint16_t next_dd; > > + struct rte_mbuf *first_seg = q->pkt_first_seg; > > + struct rte_mbuf *last_seg = q->pkt_last_seg; > > + > > + next_dd = q->next_dd; > > + nb_rcv = 0; > > + > > + nb_seg = RTE_MIN(nb_pkts, q->alloc_thresh); > > + for (count = 0; count < nb_seg; count++) { > > + mbuf = q->sw_ring[next_dd]; > > + desc = q->hw_ring[next_dd]; > > + if (!(desc.d.staterr & FM10K_RXD_STATUS_DD)) > > + break; > > +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX > > + dump_rxd(&desc); > > +#endif > > + > > + if (++next_dd == q->nb_desc) { > > + next_dd = 0; > > + alloc = 1; > > + } > > + > > + /* Prefetch next mbuf while processing current one. */ > > + rte_prefetch0(q->sw_ring[next_dd]); > > + > > + /* > > + * When next RX descriptor is on a cache-line boundary, > > + * prefetch the next 4 RX descriptors and the next 8 pointers > > + * to mbufs. > > + */ > > + if ((next_dd & 0x3) == 0) { > > + rte_prefetch0(&q->hw_ring[next_dd]); > > + rte_prefetch0(&q->sw_ring[next_dd]); > > + } > > + > > + /* Fill data length */ > > + rte_pktmbuf_data_len(mbuf) = desc.w.length; > > + > > + /* > > + * If this is the first buffer of the received packet, > > + * set the pointer to the first mbuf of the packet and > > + * initialize its context. > > + * Otherwise, update the total length and the number of > segments > > + * of the current scattered packet, and update the pointer to > > + * the last mbuf of the current packet. > > + */ > > + if (!first_seg) { > > + first_seg = mbuf; > > + first_seg->pkt_len = desc.w.length; > > + } else { > > + first_seg->pkt_len = > > + (uint16_t)(first_seg->pkt_len + > > + rte_pktmbuf_data_len(mbuf)); > > + first_seg->nb_segs++; > > + last_seg->next = mbuf; > > + } > > + > > + /* > > + * If this is not the last buffer of the received packet, > > + * update the pointer to the last mbuf of the current > scattered > > + * packet and continue to parse the RX ring. > > + */ > > + if (!(desc.d.staterr & FM10K_RXD_STATUS_EOP)) { > > + last_seg = mbuf; > > + continue; > > + } > > + > > + first_seg->ol_flags = 0; > > +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE > > + rx_desc_to_ol_flags(first_seg, &desc); > > +#endif > > + first_seg->hash.rss = desc.d.rss; > > + > > + /* Prefetch data of first segment, if configured to do so. */ > > + rte_packet_prefetch((char *)first_seg->buf_addr + > > + first_seg->data_off); > > + > > + /* > > + * Store the mbuf address into the next entry of the array > > + * of returned packets. > > + */ > > + rx_pkts[nb_rcv++] = first_seg; > > + > > + /* > > + * Setup receipt context for a new packet. 
> > + */ > > + first_seg = NULL; > > + } > > + > > + q->next_dd = next_dd; > > + q->pkt_first_seg = first_seg; > > + q->pkt_last_seg = last_seg; > > + > > + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { > > + rte_mempool_get_bulk(q->mp, (void **)&q->sw_ring[q- > >next_alloc], > > + q->alloc_thresh); > > Same thing here. The return value should be checked in case the > mempool runs out of buffers. Thanks! I'll change accordingly. > > > + for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) { > > + mbuf = q->sw_ring[q->next_alloc]; > > + > > + /* setup static mbuf fields */ > > + fm10k_pktmbuf_reset(mbuf, q->port_id); > > + > > + /* write descriptor */ > > + desc.q.pkt_addr = > MBUF_DMA_ADDR_DEFAULT(mbuf); > > + desc.q.hdr_addr = > MBUF_DMA_ADDR_DEFAULT(mbuf); > > + q->hw_ring[q->next_alloc] = desc; > > + } > > + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger); > > + q->next_trigger += q->alloc_thresh; > > + if (q->next_trigger >= q->nb_desc) { > > + q->next_trigger = q->alloc_thresh - 1; > > + q->next_alloc = 0; > > + } > > + } > > + > > + return nb_rcv; > > +} > > + ^ permalink raw reply [flat|nested] 147+ messages in thread
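The point being fixed in the exchange above is that rte_mempool_get_bulk() returns non-zero when the pool cannot supply alloc_thresh buffers at once, so a failed refill has to be skipped entirely (and retried on a later poll) rather than leaving NULL entries in sw_ring. The toy program below uses local stand-ins, not DPDK objects, and is not the committed fix; it only illustrates the all-or-nothing check the reviewer is asking for:

#include <stdio.h>

#define POOL_SZ 8

static void *pool[POOL_SZ];
static int pool_cnt;

/* stand-in for rte_mempool_get_bulk(): 0 on success, negative on failure,
 * and on failure it hands out nothing at all */
static int get_bulk(void **objs, int n)
{
	if (pool_cnt < n)
		return -1;
	for (int i = 0; i < n; i++)
		objs[i] = pool[--pool_cnt];
	return 0;
}

int main(void)
{
	void *ring[4];
	char bufs[POOL_SZ][64];

	for (int i = 0; i < 3; i++)	/* only 3 buffers available */
		pool[pool_cnt++] = bufs[i];

	/* a refill of 4 must be skipped, never half-applied */
	if (get_bulk(ring, 4) < 0)
		printf("refill skipped, retry on next poll\n");
	else
		printf("ring rearmed\n");
	return 0;
}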
* [dpdk-dev] [PATCH v4 13/15] fm10k: add function to set vlan 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (11 preceding siblings ...) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 12/15] fm10k: Add scatter receive function Chen Jing D(Mark) @ 2015-02-11 1:31 ` Chen Jing D(Mark) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 14/15] fm10k: Add SRIOV-VF support Chen Jing D(Mark) ` (2 subsequent siblings) 15 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-11 1:31 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_vlan_filter_set to set vlan. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 15 +++++++++++++++ 1 files changed, 15 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index a2fcd63..1cf6def 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -787,6 +787,20 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev, } +static int +fm10k_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + /* @todo - add support for the VF */ + if (hw->mac.type != fm10k_mac_pf) + return -ENOTSUP; + + return fm10k_update_vlan(hw, vlan_id, 0, on); +} + static inline int check_nb_desc(uint16_t min, uint16_t max, uint16_t mult, uint16_t request) { @@ -1389,6 +1403,7 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, .dev_infos_get = fm10k_dev_infos_get, + .vlan_filter_set = fm10k_vlan_filter_set, .rx_queue_start = fm10k_dev_rx_queue_start, .rx_queue_stop = fm10k_dev_rx_queue_stop, .tx_queue_start = fm10k_dev_tx_queue_start, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
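For reference, the new .vlan_filter_set hook is reached through the generic ethdev API, so an application would drive it roughly as below. This is a hedged usage sketch against the DPDK API of this period: it assumes the port was configured with rxmode.hw_vlan_filter enabled (otherwise the ethdev layer rejects the call before the PMD is reached), and on a VF the function above returns -ENOTSUP.

#include <rte_ethdev.h>

/* Sketch: whitelist VLAN 100 and remove VLAN 200 on a started port.
 * rte_eth_dev_vlan_filter() ends up in fm10k_vlan_filter_set() above. */
static int configure_vlans(uint8_t port_id)
{
	int ret;

	ret = rte_eth_dev_vlan_filter(port_id, 100, 1);	/* add VLAN 100 */
	if (ret < 0)
		return ret;				/* e.g. -ENOTSUP on a VF */

	return rte_eth_dev_vlan_filter(port_id, 200, 0);	/* drop VLAN 200 */
}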
* [dpdk-dev] [PATCH v4 14/15] fm10k: Add SRIOV-VF support 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (12 preceding siblings ...) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 13/15] fm10k: add function to set vlan Chen Jing D(Mark) @ 2015-02-11 1:31 ` Chen Jing D(Mark) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 15/15] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark) 2015-02-11 1:50 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Qiu, Michael 15 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-11 1:31 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> fm10k pmd driver will support both PF and VF device with single copy of code. The reason is NIC maps registers with same function in PF and VF to same PCI I/O address. Then, PF/VF drivers use same address to access registers belonging to it, HW will translate the request to correct units. For some functionalities that are unique to PF, driver will check current driver type and behave correctly. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 1cf6def..8969d84 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1563,6 +1563,7 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, */ static struct rte_pci_id pci_id_fm10k_map[] = { #define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, +#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, #include "rte_pci_dev_ids.h" { .vendor_id = 0, /* sentinel */ }, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
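Because one code path serves both device types, PF-only functionality is expected to check the MAC type and bail out on a VF, as fm10k_vlan_filter_set already does. A tiny standalone illustration of that guard pattern (local stand-in types, not the driver's shared code):

#include <errno.h>
#include <stdio.h>

enum mac_type { MAC_PF, MAC_VF };
struct hw { enum mac_type type; };

/* PF-only operations return early on a VF; everything else can run the
 * same register accesses because PF and VF map them at the same offsets */
static int pf_only_op(struct hw *hw)
{
	if (hw->type != MAC_PF)
		return -ENOTSUP;
	/* PF-specific register programming would go here */
	return 0;
}

int main(void)
{
	struct hw pf = { MAC_PF }, vf = { MAC_VF };
	printf("pf=%d vf=%d\n", pf_only_op(&pf), pf_only_op(&vf));
	return 0;
}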
* [dpdk-dev] [PATCH v4 15/15] fm10k: add PF and VF interrupt handling function 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (13 preceding siblings ...) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 14/15] fm10k: Add SRIOV-VF support Chen Jing D(Mark) @ 2015-02-11 1:31 ` Chen Jing D(Mark) 2015-02-11 1:50 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Qiu, Michael 15 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-11 1:31 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add functions to enable PF/VF interrupt. 2. Add function to process error message passed from interrupt. 2. Add 2 interrupt handling functions, one for PF and one for VF. 2. Enable interrupt after completing initialization of NIC. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 268 +++++++++++++++++++++++++++++++++++ 1 files changed, 268 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 8969d84..37b417c 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1345,6 +1345,256 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev, return 0; } +static void +fm10k_dev_enable_intr_pf(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t int_map = FM10K_INT_MAP_IMMEDIATE; + + /* Bind all local non-queue interrupt to vector 0 */ + int_map |= 0; + + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_Mailbox), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_PCIeFault), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SwitchUpDown), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SwitchEvent), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SRAM), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_VFLR), int_map); + + /* Enable misc causes */ + FM10K_WRITE_REG(hw, FM10K_EIMR, FM10K_EIMR_ENABLE(PCA_FAULT) | + FM10K_EIMR_ENABLE(THI_FAULT) | + FM10K_EIMR_ENABLE(FUM_FAULT) | + FM10K_EIMR_ENABLE(MAILBOX) | + FM10K_EIMR_ENABLE(SWITCHREADY) | + FM10K_EIMR_ENABLE(SWITCHNOTREADY) | + FM10K_EIMR_ENABLE(SRAMERROR) | + FM10K_EIMR_ENABLE(VFLR)); + + /* Enable ITR 0 */ + FM10K_WRITE_REG(hw, FM10K_ITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + FM10K_WRITE_FLUSH(hw); +} + +static void +fm10k_dev_enable_intr_vf(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t int_map = FM10K_INT_MAP_IMMEDIATE; + + /* Bind all local non-queue interrupt to vector 0 */ + int_map |= 0; + + /* Only INT 0 availiable, other 15 are reserved. 
*/ + FM10K_WRITE_REG(hw, FM10K_VFINT_MAP, int_map); + + /* Enable ITR 0 */ + FM10K_WRITE_REG(hw, FM10K_VFITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + FM10K_WRITE_FLUSH(hw); +} + +static int +fm10k_dev_handle_fault(struct fm10k_hw *hw, uint32_t eicr) +{ + struct fm10k_fault fault; + int err; + const char *estr = "Unknown error"; + + /* Process PCA fault */ + if (eicr & FM10K_EIMR_PCA_FAULT) { + err = fm10k_get_fault(hw, FM10K_PCA_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case PCA_NO_FAULT: + estr = "PCA_NO_FAULT"; break; + case PCA_UNMAPPED_ADDR: + estr = "PCA_UNMAPPED_ADDR"; break; + case PCA_BAD_QACCESS_PF: + estr = "PCA_BAD_QACCESS_PF"; break; + case PCA_BAD_QACCESS_VF: + estr = "PCA_BAD_QACCESS_VF"; break; + case PCA_MALICIOUS_REQ: + estr = "PCA_MALICIOUS_REQ"; break; + case PCA_POISONED_TLP: + estr = "PCA_POISONED_TLP"; break; + case PCA_TLP_ABORT: + estr = "PCA_TLP_ABORT"; break; + default: + goto error; + } + PMD_INIT_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIx64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + /* Process THI fault */ + if (eicr & FM10K_EIMR_THI_FAULT) { + err = fm10k_get_fault(hw, FM10K_THI_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case THI_NO_FAULT: + estr = "THI_NO_FAULT"; break; + case THI_MAL_DIS_Q_FAULT: + estr = "THI_MAL_DIS_Q_FAULT"; break; + default: + goto error; + } + PMD_INIT_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIx64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + /* Process FUM fault */ + if (eicr & FM10K_EIMR_FUM_FAULT) { + err = fm10k_get_fault(hw, FM10K_FUM_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case FUM_NO_FAULT: + estr = "FUM_NO_FAULT"; break; + case FUM_UNMAPPED_ADDR: + estr = "FUM_UNMAPPED_ADDR"; break; + case FUM_POISONED_TLP: + estr = "FUM_POISONED_TLP"; break; + case FUM_BAD_VF_QACCESS: + estr = "FUM_BAD_VF_QACCESS"; break; + case FUM_ADD_DECODE_ERR: + estr = "FUM_ADD_DECODE_ERR"; break; + case FUM_RO_ERROR: + estr = "FUM_RO_ERROR"; break; + case FUM_QPRC_CRC_ERROR: + estr = "FUM_QPRC_CRC_ERROR"; break; + case FUM_CSR_TIMEOUT: + estr = "FUM_CSR_TIMEOUT"; break; + case FUM_INVALID_TYPE: + estr = "FUM_INVALID_TYPE"; break; + case FUM_INVALID_LENGTH: + estr = "FUM_INVALID_LENGTH"; break; + case FUM_INVALID_BE: + estr = "FUM_INVALID_BE"; break; + case FUM_INVALID_ALIGN: + estr = "FUM_INVALID_ALIGN"; break; + default: + goto error; + } + PMD_INIT_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIx64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + if (estr) + return 0; + return 0; +error: + PMD_INIT_LOG(ERR, "Failed to handle fault event."); + return err; +} + +/** + * PF interrupt handler triggered by NIC for handling specific interrupt. + * + * @param handle + * Pointer to interrupt handle. + * @param param + * The address of parameter (struct rte_eth_dev *) regsitered before. 
+ * + * @return + * void + */ +static void +fm10k_dev_interrupt_handler_pf( + __rte_unused struct rte_intr_handle *handle, + void *param) +{ + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t cause, status; + + if (hw->mac.type != fm10k_mac_pf) + return; + + cause = FM10K_READ_REG(hw, FM10K_EICR); + + /* Handle PCI fault cases */ + if (cause & FM10K_EICR_FAULT_MASK) { + PMD_INIT_LOG(ERR, "INT: find fault!"); + fm10k_dev_handle_fault(hw, cause); + } + + /* Handle switch up/down */ + if (cause & FM10K_EICR_SWITCHNOTREADY) + PMD_INIT_LOG(ERR, "INT: Switch is not ready"); + + if (cause & FM10K_EICR_SWITCHREADY) + PMD_INIT_LOG(INFO, "INT: Switch is ready"); + + /* Handle mailbox message */ + fm10k_mbx_lock(hw); + hw->mbx.ops.process(hw, &hw->mbx); + fm10k_mbx_unlock(hw); + + /* Handle SRAM error */ + if (cause & FM10K_EICR_SRAMERROR) { + PMD_INIT_LOG(ERR, "INT: SRAM error on PEP"); + + status = FM10K_READ_REG(hw, FM10K_SRAM_IP); + /* Write to clear pending bits */ + FM10K_WRITE_REG(hw, FM10K_SRAM_IP, status); + + /* Todo: print out error message after shared code updates */ + } + + /* Clear these 3 events if having any */ + cause &= FM10K_EICR_SWITCHNOTREADY | FM10K_EICR_MAILBOX | + FM10K_EICR_SWITCHREADY; + if (cause) + FM10K_WRITE_REG(hw, FM10K_EICR, cause); + + /* Re-enable interrupt from device side */ + FM10K_WRITE_REG(hw, FM10K_ITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + /* Re-enable interrupt from host side */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); +} + +/** + * VF interrupt handler triggered by NIC for handling specific interrupt. + * + * @param handle + * Pointer to interrupt handle. + * @param param + * The address of parameter (struct rte_eth_dev *) regsitered before. + * + * @return + * void + */ +static void +fm10k_dev_interrupt_handler_vf( + __rte_unused struct rte_intr_handle *handle, + void *param) +{ + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + if (hw->mac.type != fm10k_mac_vf) + return; + + /* Handle mailbox message if lock is acquired */ + fm10k_mbx_lock(hw); + hw->mbx.ops.process(hw, &hw->mbx); + fm10k_mbx_unlock(hw); + + /* Re-enable interrupt from device side */ + FM10K_WRITE_REG(hw, FM10K_VFITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + /* Re-enable interrupt from host side */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -1525,6 +1775,21 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, return -EIO; } + /*PF/VF has different interrupt handling mechanism */ + if (hw->mac.type == fm10k_mac_pf) { + /* register callback func to eal lib */ + rte_intr_callback_register(&(dev->pci_dev->intr_handle), + fm10k_dev_interrupt_handler_pf, (void *)dev); + + /* enable MISC interrupt */ + fm10k_dev_enable_intr_pf(dev); + } else { /* VF */ + rte_intr_callback_register(&(dev->pci_dev->intr_handle), + fm10k_dev_interrupt_handler_vf, (void *)dev); + + fm10k_dev_enable_intr_vf(dev); + } + /* * Below function will trigger operations on mailbox, acquire lock to * avoid race condition from interrupt handler. 
Operations on mailbox @@ -1554,6 +1819,9 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, fm10k_mbx_unlock(hw); + /* enable uio intr after callback registered */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); + return 0; } -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
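Both handlers added above follow the same ordering: read or process the pending causes, write-1-to-clear the serviced bits, re-arm the device-side ITR register, and finally re-enable the interrupt on the host side via rte_intr_enable(). A condensed standalone sketch of that sequence, with the register accesses stubbed out (the names and values here are illustrative only, not the driver's):

#include <stdint.h>
#include <stdio.h>

static uint32_t fake_eicr = 0x5;	/* pretend two cause bits are pending */
static uint32_t reg_read(const char *name) { printf("read  %s\n", name); return fake_eicr; }
static void reg_write(const char *name, uint32_t v) { printf("write %s <= 0x%x\n", name, v); }
static void intr_enable(void) { printf("host-side interrupt re-enabled\n"); }

#define ITR_AUTOMASK	(1u << 30)
#define ITR_MASK_CLEAR	(1u << 31)

static void isr(void)
{
	uint32_t cause = reg_read("EICR");

	/* 1. service each pending cause (faults, mailbox, switch events) */
	if (cause)
		printf("handling cause 0x%x\n", cause);

	/* 2. write-1-to-clear the bits that were serviced */
	reg_write("EICR", cause);

	/* 3. re-arm the device-side interrupt throttle register */
	reg_write("ITR0", ITR_AUTOMASK | ITR_MASK_CLEAR);

	/* 4. re-enable the interrupt on the host side */
	intr_enable();
}

int main(void) { isr(); return 0; }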
* Re: [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (14 preceding siblings ...) 2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 15/15] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark) @ 2015-02-11 1:50 ` Qiu, Michael 15 siblings, 0 replies; 147+ messages in thread From: Qiu, Michael @ 2015-02-11 1:50 UTC (permalink / raw) To: Chen, Jing D, dev On 2/11/2015 9:31 AM, Chen, Jing D wrote: > From: "Chen Jing D(Mark)" <jing.d.chen@intel.com> > > The patch set add poll mode driver for the host interface of Intel > fm10k series of silicons, which integrate NIC and switch > functionalities. The patch set include below features: > > 1. Basic RX/TX functions for PF/VF. > 2. Interrupt handling mechanism for PF/VF. > 3. per queue start/stop functions for PF/VF. > 4. Mailbox handling between PF/VF and PF/Switch Manager. > 5. Receive Side Scaling (RSS) for PF/VF. > 6. Scatter receive function for PF/VF. > 7. reta update/query for PF/VF. > 8. VLAN filter set for PF. > 9. Link status query for PF/VF. > > Change in v4: > - Change commit log to remove improper words. > > Changes in v3: > - Update base driver. > - Define several macros to pass base driver compile. > > Changes in v2: > - Merge 3 patches into 1 to configure fm10k compile environment. > - Rework on log code to follow style in ixgbe. > - Rework log message, remove redundant '\n' > - Update Copyright year from "2014" to "2015" > - Change base driver directory name from SHARED to base > - Add more description in log for patch "add PF and VF interrupt" > - Merge 2 patches into 1 to register fm10k driver > - Define macro to replace numeric for lower 32-bit mask. > > Jeff Shaw (15): > fm10k: add base driver > eal: add fm10k device id > fm10k: register fm10k pmd PF driver > Change config files to add fm10k into compile > fm10k: add reta update/requery functions > fm10k: add rx_queue_setup/release function > fm10k: add tx_queue_setup/release function > fm10k: add RX/TX single queue start/stop function > fm10k: add dev start/stop functions > fm10k: add receive and tranmit function > fm10k: add PF RSS support > fm10k: Add scatter receive function > fm10k: add function to set vlan > fm10k: Add SRIOV-VF support > fm10k: add PF and VF interrupt handling function > > config/common_bsdapp | 11 + > config/common_linuxapp | 11 + > lib/Makefile | 1 + > lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 + > lib/librte_pmd_fm10k/Makefile | 96 + > lib/librte_pmd_fm10k/base/fm10k_api.c | 341 ++++ > lib/librte_pmd_fm10k/base/fm10k_api.h | 61 + > lib/librte_pmd_fm10k/base/fm10k_common.c | 572 ++++++ > lib/librte_pmd_fm10k/base/fm10k_common.h | 52 + > lib/librte_pmd_fm10k/base/fm10k_mbx.c | 2185 +++++++++++++++++++++++ > lib/librte_pmd_fm10k/base/fm10k_mbx.h | 329 ++++ > lib/librte_pmd_fm10k/base/fm10k_osdep.h | 148 ++ > lib/librte_pmd_fm10k/base/fm10k_pf.c | 1992 +++++++++++++++++++++ > lib/librte_pmd_fm10k/base/fm10k_pf.h | 155 ++ > lib/librte_pmd_fm10k/base/fm10k_tlv.c | 914 ++++++++++ > lib/librte_pmd_fm10k/base/fm10k_tlv.h | 199 ++ > lib/librte_pmd_fm10k/base/fm10k_type.h | 937 ++++++++++ > lib/librte_pmd_fm10k/base/fm10k_vf.c | 641 +++++++ > lib/librte_pmd_fm10k/base/fm10k_vf.h | 91 + > lib/librte_pmd_fm10k/fm10k.h | 293 +++ > lib/librte_pmd_fm10k/fm10k_ethdev.c | 1868 +++++++++++++++++++ > lib/librte_pmd_fm10k/fm10k_logs.h | 78 + > lib/librte_pmd_fm10k/fm10k_rxtx.c | 427 +++++ > mk/rte.app.mk | 4 + > 24 files changed, 11428 
insertions(+), 0 deletions(-) > create mode 100644 lib/librte_pmd_fm10k/Makefile > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_api.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_common.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_mbx.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_osdep.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_pf.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_tlv.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_type.h > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.c > create mode 100644 lib/librte_pmd_fm10k/base/fm10k_vf.h > create mode 100644 lib/librte_pmd_fm10k/fm10k.h > create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c > create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h > create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c > Acked-by: Michael Qiu <michael.qiu@intel.com> ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v3 02/15] eal: add fm10k device id 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 01/15] fm10k: add base driver Chen Jing D(Mark) @ 2015-02-10 7:02 ` Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 03/15] fm10k: register fm10k pmd PF driver Chen Jing D(Mark) ` (12 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-10 7:02 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k device ID list into rte_pci_dev_ids.h. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 ++++++++++++++++++++++ 1 files changed, 22 insertions(+), 0 deletions(-) diff --git a/lib/librte_eal/common/include/rte_pci_dev_ids.h b/lib/librte_eal/common/include/rte_pci_dev_ids.h index c922de9..f54800e 100644 --- a/lib/librte_eal/common/include/rte_pci_dev_ids.h +++ b/lib/librte_eal/common/include/rte_pci_dev_ids.h @@ -132,6 +132,14 @@ #define RTE_PCI_DEV_ID_DECL_VMXNET3(vend, dev) #endif +#ifndef RTE_PCI_DEV_ID_DECL_FM10K +#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) +#endif + +#ifndef RTE_PCI_DEV_ID_DECL_FM10KVF +#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) +#endif + #ifndef PCI_VENDOR_ID_INTEL /** Vendor ID used by Intel devices */ #define PCI_VENDOR_ID_INTEL 0x8086 @@ -474,6 +482,12 @@ RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_QSFP_B) RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_QSFP_C) RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_10G_BASE_T) +/*************** Physical FM10K devices from fm10k_type.h ***************/ + +#define FM10K_DEV_ID_PF 0x15A4 + +RTE_PCI_DEV_ID_DECL_FM10K(PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_PF) + /****************** Virtual IGB devices from e1000_hw.h ******************/ #define E1000_DEV_ID_82576_VF 0x10CA @@ -526,6 +540,12 @@ RTE_PCI_DEV_ID_DECL_VIRTIO(PCI_VENDOR_ID_QUMRANET, QUMRANET_DEV_ID_VIRTIO) RTE_PCI_DEV_ID_DECL_VMXNET3(PCI_VENDOR_ID_VMWARE, VMWARE_DEV_ID_VMXNET3) +/*************** Virtual FM10K devices from fm10k_type.h ***************/ + +#define FM10K_DEV_ID_VF 0x15A5 + +RTE_PCI_DEV_ID_DECL_FM10KVF(PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_VF) + /* * Undef all RTE_PCI_DEV_ID_DECL_* here. */ @@ -538,3 +558,5 @@ RTE_PCI_DEV_ID_DECL_VMXNET3(PCI_VENDOR_ID_VMWARE, VMWARE_DEV_ID_VMXNET3) #undef RTE_PCI_DEV_ID_DECL_I40EVF #undef RTE_PCI_DEV_ID_DECL_VIRTIO #undef RTE_PCI_DEV_ID_DECL_VMXNET3 +#undef RTE_PCI_DEV_ID_DECL_FM10K +#undef RTE_PCI_DEV_ID_DECL_FM10KVF \ No newline at end of file -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
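rte_pci_dev_ids.h works as an X-macro list: the empty #ifndef defaults added in this patch make every FM10K entry disappear unless the including driver first defines the matching RTE_PCI_DEV_ID_DECL_* macro, in which case each entry expands into a table row. A small self-contained imitation of that mechanism (hypothetical macro and type names, same idea):

#include <stdint.h>
#include <stdio.h>

/* --- what the shared ID header does, inlined here: empty defaults --- */
#ifndef DECL_FM10K_LIKE
#define DECL_FM10K_LIKE(vend, dev)
#endif
#ifndef DECL_FM10K_LIKE_VF
#define DECL_FM10K_LIKE_VF(vend, dev)
#endif

#define DEVICE_ID_LIST \
	DECL_FM10K_LIKE(0x8086, 0x15A4) \
	DECL_FM10K_LIKE_VF(0x8086, 0x15A5)

struct pci_id { uint16_t vendor; uint16_t device; };

/* --- what a driver does: redefine the macros it wants as table rows --- */
#undef DECL_FM10K_LIKE
#undef DECL_FM10K_LIKE_VF
#define DECL_FM10K_LIKE(vend, dev)	{ (vend), (dev) },
#define DECL_FM10K_LIKE_VF(vend, dev)	{ (vend), (dev) },

static const struct pci_id table[] = {
	DEVICE_ID_LIST
	{ 0, 0 }	/* sentinel */
};

int main(void)
{
	for (unsigned i = 0; table[i].vendor != 0; i++)
		printf("%04x:%04x\n", table[i].vendor, table[i].device);
	return 0;
}

A driver that only wants the PF would define just the first macro and the VF entry would expand to nothing, which is exactly how the PF-only and PF+VF ID tables in this series differ.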
* [dpdk-dev] [PATCH v3 03/15] fm10k: register fm10k pmd PF driver 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 01/15] fm10k: add base driver Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 02/15] eal: add fm10k device id Chen Jing D(Mark) @ 2015-02-10 7:02 ` Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 04/15] Change config files to add fm10k into compile Chen Jing D(Mark) ` (11 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-10 7:02 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add init function to scan and initialize fm10k PF device. 2. Add implementation to register fm10k pmd PF driver. 3. Add 3 functions fm10k_dev_configure, fm10k_stats_get and fm10k_stats_get. 4. Add fm10k.h to define macros and basic data structure. 5. Add fm10k_logs.h to control log message output. 6. Add Makefile. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/Makefile | 95 ++++++++++ lib/librte_pmd_fm10k/fm10k.h | 224 +++++++++++++++++++++++ lib/librte_pmd_fm10k/fm10k_ethdev.c | 343 +++++++++++++++++++++++++++++++++++ lib/librte_pmd_fm10k/fm10k_logs.h | 78 ++++++++ 4 files changed, 740 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/Makefile create mode 100644 lib/librte_pmd_fm10k/fm10k.h create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h diff --git a/lib/librte_pmd_fm10k/Makefile b/lib/librte_pmd_fm10k/Makefile new file mode 100644 index 0000000..1da84e9 --- /dev/null +++ b/lib/librte_pmd_fm10k/Makefile @@ -0,0 +1,95 @@ +# BSD LICENSE +# +# Copyright(c) 2013-2015 Intel Corporation. All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# +# library name +# +LIB = librte_pmd_fm10k.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +ifeq ($(CC), icc) +# +# CFLAGS for icc +# +CFLAGS_BASE_DRIVER = -wd174 -wd593 -wd869 -wd981 -wd2259 + +else ifeq ($(CC), clang) +# +## CFLAGS for clang +# +CFLAGS_BASE_DRIVER = -Wno-unused-parameter -Wno-unused-value +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing -Wno-format-extra-args +CFLAGS_BASE_DRIVER += -Wno-unused-variable -Wno-unused-but-set-variable +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers + +else +# +# CFLAGS for gcc +# +ifneq ($(shell test $(GCC_MAJOR_VERSION) -le 4 -a $(GCC_MINOR_VERSION) -le 3 && echo 1), 1) +CFLAGS += -Wno-deprecated +endif +CFLAGS_BASE_DRIVER = -Wno-unused-parameter -Wno-unused-value +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing -Wno-format-extra-args +CFLAGS_BASE_DRIVER += -Wno-unused-variable -Wno-unused-but-set-variable +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers +endif + +# +# Add extra flags for base driver source files to disable warnings in them +# +BASE_DRIVER_OBJS=$(patsubst %.c,%.o,$(notdir $(wildcard $(RTE_SDK)/lib/librte_pmd_fm10k/base/*.c))) +$(foreach obj, $(BASE_DRIVER_OBJS), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER))) + +VPATH += $(RTE_SDK)/lib/librte_pmd_fm10k/base + +# +# all source are stored in SRCS-y +# +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_ethdev.c + +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_pf.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_tlv.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_common.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_mbx.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_vf.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_api.c + +# this lib depends upon: +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_eal lib/librte_ether +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_mempool lib/librte_mbuf +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_net lib/librte_malloc + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h new file mode 100644 index 0000000..1468040 --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -0,0 +1,224 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _FM10K_H_ +#define _FM10K_H_ + +#include <stdint.h> +#include <rte_mbuf.h> +#include <rte_mempool.h> +#include <rte_malloc.h> +#include <rte_spinlock.h> +#include "fm10k_logs.h" +#include "base/fm10k_type.h" + +/* descriptor ring base addresses must be aligned to the following */ +#define FM10K_ALIGN_RX_DESC 128 +#define FM10K_ALIGN_TX_DESC 128 + +/* The maximum packet size that FM10K supports */ +#define FM10K_MAX_PKT_SIZE (15 * 1024) + +/* Minimum size of RX buffer FM10K supported */ +#define FM10K_MIN_RX_BUF_SIZE 256 + +/* The maximum of SRIOV VFs per port supported */ +#define FM10K_MAX_VF_NUM 64 + +/* number of descriptors must be a multiple of the following */ +#define FM10K_MULT_RX_DESC FM10K_REQ_RX_DESCRIPTOR_MULTIPLE +#define FM10K_MULT_TX_DESC FM10K_REQ_TX_DESCRIPTOR_MULTIPLE + +/* maximum size of descriptor rings */ +#define FM10K_MAX_RX_RING_SZ (512 * 1024) +#define FM10K_MAX_TX_RING_SZ (512 * 1024) + +/* minimum and maximum number of descriptors in a ring */ +#define FM10K_MIN_RX_DESC 32 +#define FM10K_MIN_TX_DESC 32 +#define FM10K_MAX_RX_DESC (FM10K_MAX_RX_RING_SZ / sizeof(union fm10k_rx_desc)) +#define FM10K_MAX_TX_DESC (FM10K_MAX_TX_RING_SZ / sizeof(struct fm10k_tx_desc)) + +/* + * byte aligment for HW RX data buffer + * Datasheet requires RX buffer addresses shall either be 512-byte aligned or + * be 8-byte aligned but without crossing host memory pages (4KB alignment + * boundaries). Satisfy first option. + */ +#define FM10K_RX_DATABUF_ALIGN 512 + +/* + * threshold default, min, max, and divisor constraints + * the configured values must satisfy the following: + * MIN <= value <= MAX + * DIV % value == 0 + */ +#define FM10K_RX_FREE_THRESH_DEFAULT(rxq) 32 +#define FM10K_RX_FREE_THRESH_MIN(rxq) 1 +#define FM10K_RX_FREE_THRESH_MAX(rxq) ((rxq)->nb_desc - 1) +#define FM10K_RX_FREE_THRESH_DIV(rxq) ((rxq)->nb_desc) + +#define FM10K_TX_FREE_THRESH_DEFAULT(txq) 32 +#define FM10K_TX_FREE_THRESH_MIN(txq) 1 +#define FM10K_TX_FREE_THRESH_MAX(txq) ((txq)->nb_desc - 3) +#define FM10K_TX_FREE_THRESH_DIV(txq) 0 + +#define FM10K_DEFAULT_RX_PTHRESH 8 +#define FM10K_DEFAULT_RX_HTHRESH 8 +#define FM10K_DEFAULT_RX_WTHRESH 0 + +#define FM10K_DEFAULT_TX_PTHRESH 32 +#define FM10K_DEFAULT_TX_HTHRESH 0 +#define FM10K_DEFAULT_TX_WTHRESH 0 + +#define FM10K_TX_RS_THRESH_DEFAULT(txq) 32 +#define FM10K_TX_RS_THRESH_MIN(txq) 1 +#define FM10K_TX_RS_THRESH_MAX(txq) \ + RTE_MIN(((txq)->nb_desc - 2), (txq)->free_thresh) +#define FM10K_TX_RS_THRESH_DIV(txq) ((txq)->nb_desc) + +#define FM10K_VLAN_TAG_SIZE 4 + +struct fm10k_dev_info { + volatile uint32_t enable; + volatile uint32_t glort; + /* Protect the mailbox to avoid race condition */ + rte_spinlock_t mbx_lock; +}; + +/* + * Structure to store private data for each driver instance. 
+ */ +struct fm10k_adapter { + struct fm10k_hw hw; + struct fm10k_hw_stats stats; + struct fm10k_dev_info info; +}; + +#define FM10K_DEV_PRIVATE_TO_HW(adapter) \ + (&((struct fm10k_adapter *)adapter)->hw) + +#define FM10K_DEV_PRIVATE_TO_STATS(adapter) \ + (&((struct fm10k_adapter *)adapter)->stats) + +#define FM10K_DEV_PRIVATE_TO_INFO(adapter) \ + (&((struct fm10k_adapter *)adapter)->info) + +#define FM10K_DEV_PRIVATE_TO_MBXLOCK(adapter) \ + (&(((struct fm10k_adapter *)adapter)->info.mbx_lock)) + +struct fm10k_rx_queue { + struct rte_mempool *mp; + struct rte_mbuf **sw_ring; + volatile union fm10k_rx_desc *hw_ring; + struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */ + struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */ + uint64_t hw_ring_phys_addr; + uint16_t next_dd; + uint16_t next_alloc; + uint16_t next_trigger; + uint16_t alloc_thresh; + volatile uint32_t *tail_ptr; + uint16_t nb_desc; + uint16_t queue_id; + uint8_t port_id; + uint8_t drop_en; + uint8_t rx_deferred_start; /**< don't start this queue in dev start. */ +}; + +/* + * a FIFO is used to track which descriptors have their RS bit set for Tx + * queues which are configured to allow multiple descriptors per packet + */ +struct fifo { + uint16_t *list; + uint16_t *head; + uint16_t *tail; + uint16_t *endp; +}; + +struct fm10k_tx_queue { + struct rte_mbuf **sw_ring; + struct fm10k_tx_desc *hw_ring; + uint64_t hw_ring_phys_addr; + struct fifo rs_tracker; + uint16_t last_free; + uint16_t next_free; + uint16_t nb_free; + uint16_t nb_used; + uint16_t free_trigger; + uint16_t free_thresh; + uint16_t rs_thresh; + volatile uint32_t *tail_ptr; + uint16_t nb_desc; + uint8_t port_id; + uint8_t tx_deferred_start; /** < don't start this queue in dev start. */ + uint16_t queue_id; +}; + +#define MBUF_DMA_ADDR(mb) \ + ((uint64_t) ((mb)->buf_physaddr + (mb)->data_off)) + +/* enforce 512B alignment on default Rx DMA addresses */ +#define MBUF_DMA_ADDR_DEFAULT(mb) \ + ((uint64_t) RTE_ALIGN(((mb)->buf_physaddr + RTE_PKTMBUF_HEADROOM), 512)) + +static inline void fifo_reset(struct fifo *fifo, uint32_t len) +{ + fifo->head = fifo->tail = fifo->list; + fifo->endp = fifo->list + len; +} + +static inline void fifo_insert(struct fifo *fifo, uint16_t val) +{ + *fifo->head = val; + if (++fifo->head == fifo->endp) + fifo->head = fifo->list; +} + +/* do not worry about list being empty since we only check it once we know + * we have used enough descriptors to set the RS bit at least once */ +static inline uint16_t fifo_peek(struct fifo *fifo) +{ + return *fifo->tail; +} + +static inline uint16_t fifo_remove(struct fifo *fifo) +{ + uint16_t val; + val = *fifo->tail; + if (++fifo->tail == fifo->endp) + fifo->tail = fifo->list; + return val; +} +#endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c new file mode 100644 index 0000000..0b75299 --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -0,0 +1,343 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. 
+ * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <rte_ethdev.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <rte_string_fns.h> +#include <rte_dev.h> +#include <rte_spinlock.h> + +#include "fm10k.h" +#include "base/fm10k_api.h" + +/* Default delay to acquire mailbox lock */ +#define FM10K_MBXLOCK_DELAY_US 20 + +static void +fm10k_mbx_initlock(struct fm10k_hw *hw) +{ + rte_spinlock_init(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); +} + +static void +fm10k_mbx_lock(struct fm10k_hw *hw) +{ + while (!rte_spinlock_trylock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back))) + rte_delay_us(FM10K_MBXLOCK_DELAY_US); +} + +static void +fm10k_mbx_unlock(struct fm10k_hw *hw) +{ + rte_spinlock_unlock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); +} + +static int +fm10k_dev_configure(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.hw_strip_crc == 0) + PMD_INIT_LOG(WARNING, "fm10k always strip CRC"); + + return 0; +} + +static void +fm10k_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + uint64_t ipackets, opackets, ibytes, obytes; + struct fm10k_hw *hw = + FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_hw_stats *hw_stats = + FM10K_DEV_PRIVATE_TO_STATS(dev->data->dev_private); + int i; + + PMD_INIT_FUNC_TRACE(); + + fm10k_update_hw_stats(hw, hw_stats); + + ipackets = opackets = ibytes = obytes = 0; + for (i = 0; (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) && + (i < FM10K_MAX_QUEUES_PF); ++i) { + stats->q_ipackets[i] = hw_stats->q[i].rx_packets.count; + stats->q_opackets[i] = hw_stats->q[i].tx_packets.count; + stats->q_ibytes[i] = hw_stats->q[i].rx_bytes.count; + stats->q_obytes[i] = hw_stats->q[i].tx_bytes.count; + ipackets += stats->q_ipackets[i]; + opackets += stats->q_opackets[i]; + ibytes += stats->q_ibytes[i]; + obytes += stats->q_obytes[i]; + } + stats->ipackets = ipackets; + stats->opackets = opackets; + stats->ibytes = ibytes; + stats->obytes = obytes; +} + +static void +fm10k_stats_reset(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_hw_stats *hw_stats = + FM10K_DEV_PRIVATE_TO_STATS(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + memset(hw_stats, 0, sizeof(*hw_stats)); + fm10k_rebind_hw_stats(hw, hw_stats); +} + +/* Mailbox message handler in VF */ +static const struct fm10k_msg_data 
fm10k_msgdata_vf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_msg_mac_vlan_vf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/* Mailbox message handler in PF */ +static const struct fm10k_msg_data fm10k_msgdata_pf[] = { + FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf), + FM10K_PF_MSG_LPORT_MAP_HANDLER(fm10k_msg_lport_map_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_CREATE, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_DELETE, fm10k_msg_err_pf), + FM10K_PF_MSG_UPDATE_PVID_HANDLER(fm10k_msg_update_pvid_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +static int +fm10k_setup_mbx_service(struct fm10k_hw *hw) +{ + int err; + + /* Initialize mailbox lock */ + fm10k_mbx_initlock(hw); + + /* Replace default message handler with new ones */ + if (hw->mac.type == fm10k_mac_pf) + err = hw->mbx.ops.register_handlers(&hw->mbx, fm10k_msgdata_pf); + else + err = hw->mbx.ops.register_handlers(&hw->mbx, fm10k_msgdata_vf); + + if (err) { + PMD_INIT_LOG(ERR, "Failed to register mailbox handler.err:%d", + err); + return err; + } + /* Connect to SM for PF device or PF for VF device */ + return hw->mbx.ops.connect(hw, &hw->mbx); +} + +static struct eth_dev_ops fm10k_eth_dev_ops = { + .dev_configure = fm10k_dev_configure, + .stats_get = fm10k_stats_get, + .stats_reset = fm10k_stats_reset, +}; + +static int +eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, + struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int diag; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &fm10k_eth_dev_ops; + + /* only initialize in the primary process */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + /* Vendor and Device ID need to be set before init of shared code */ + memset(hw, 0, sizeof(*hw)); + hw->device_id = dev->pci_dev->id.device_id; + hw->vendor_id = dev->pci_dev->id.vendor_id; + hw->subsystem_device_id = dev->pci_dev->id.subsystem_device_id; + hw->subsystem_vendor_id = dev->pci_dev->id.subsystem_vendor_id; + hw->revision_id = 0; + hw->hw_addr = (void *)dev->pci_dev->mem_resource[0].addr; + if (hw->hw_addr == NULL) { + PMD_INIT_LOG(ERR, "Bad mem resource." + " Try to blacklist unused devices."); + return -EIO; + } + + /* Store fm10k_adapter pointer */ + hw->back = dev->data->dev_private; + + /* Initialize the shared code */ + diag = fm10k_init_shared_code(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Shared code init failed: %d", diag); + return -EIO; + } + + /* + * Inialize bus info. Normally we would call fm10k_get_bus_info(), but + * there is no way to get link status without reading BAR4. Until this + * works, assume we have maximum bandwidth. 
+ * @todo - fix bus info + */ + hw->bus_caps.speed = fm10k_bus_speed_8000; + hw->bus_caps.width = fm10k_bus_width_pcie_x8; + hw->bus_caps.payload = fm10k_bus_payload_512; + hw->bus.speed = fm10k_bus_speed_8000; + hw->bus.width = fm10k_bus_width_pcie_x8; + hw->bus.payload = fm10k_bus_payload_256; + + /* Initialize the hw */ + diag = fm10k_init_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware init failed: %d", diag); + return -EIO; + } + + /* Initialize MAC address(es) */ + dev->data->mac_addrs = rte_zmalloc("fm10k", ETHER_ADDR_LEN, 0); + if (dev->data->mac_addrs == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate memory for MAC addresses"); + return -ENOMEM; + } + + diag = fm10k_read_mac_addr(hw); + if (diag != FM10K_SUCCESS) { + /* + * TODO: remove special handling on VF. Need shared code to + * fix first. + */ + if (hw->mac.type == fm10k_mac_pf) { + PMD_INIT_LOG(ERR, "Read MAC addr failed: %d", diag); + return -EIO; + } else { + /* Generate a random addr */ + eth_random_addr(hw->mac.addr); + memcpy(hw->mac.perm_addr, hw->mac.addr, ETH_ALEN); + } + } + + ether_addr_copy((const struct ether_addr *)hw->mac.addr, + &dev->data->mac_addrs[0]); + + /* Reset the hw statistics */ + fm10k_stats_reset(dev); + + /* Reset the hw */ + diag = fm10k_reset_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware reset failed: %d", diag); + return -EIO; + } + + /* Setup mailbox service */ + diag = fm10k_setup_mbx_service(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Failed to setup mailbox: %d", diag); + return -EIO; + } + + /* + * Below function will trigger operations on mailbox, acquire lock to + * avoid race condition from interrupt handler. Operations on mailbox + * FIFO will trigger interrupt to PF/SM, in which interrupt handler + * will handle and generate an interrupt to our side. Then, FIFO in + * mailbox will be touched. + */ + fm10k_mbx_lock(hw); + /* Enable port first */ + hw->mac.ops.update_lport_state(hw, 0, 0, 1); + + /* Update default vlan */ + hw->mac.ops.update_vlan(hw, hw->mac.default_vid, 0, true); + + /* + * Add default mac/vlan filter. glort is assigned by SM for PF, while is + * unused for VF. PF will assign correct glort for VF. + */ + hw->mac.ops.update_uc_addr(hw, hw->mac.dglort_map, hw->mac.addr, + hw->mac.default_vid, 1, 0); + + /* Set unicast mode by default. App can change to other mode in other + * API func. + */ + hw->mac.ops.update_xcast_mode(hw, hw->mac.dglort_map, + FM10K_XCAST_MODE_MULTI); + + fm10k_mbx_unlock(hw); + + return 0; +} + +/* + * The set of PCI devices this driver supports. This driver will enable both PF + * and SRIOV-VF devices. + */ +static struct rte_pci_id pci_id_fm10k_map[] = { +#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, +#include "rte_pci_dev_ids.h" + { .vendor_id = 0, /* sentinel */ }, +}; + +static struct eth_driver rte_pmd_fm10k = { + { + .name = "rte_pmd_fm10k", + .id_table = pci_id_fm10k_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING, + }, + .eth_dev_init = eth_fm10k_dev_init, + .dev_private_size = sizeof(struct fm10k_adapter), +}; + +/* + * Driver initialization routine. + * Invoked once at EAL init time. + * Register itself as the [Poll Mode] Driver of PCI FM10K devices. 
+ */ +static int +rte_pmd_fm10k_init(__rte_unused const char *name, + __rte_unused const char *params) +{ + PMD_INIT_FUNC_TRACE(); + rte_eth_driver_register(&rte_pmd_fm10k); + return 0; +} + +static struct rte_driver rte_fm10k_driver = { + .type = PMD_PDEV, + .init = rte_pmd_fm10k_init, +}; + +PMD_REGISTER_DRIVER(rte_fm10k_driver); diff --git a/lib/librte_pmd_fm10k/fm10k_logs.h b/lib/librte_pmd_fm10k/fm10k_logs.h new file mode 100644 index 0000000..febd796 --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k_logs.h @@ -0,0 +1,78 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _FM10K_LOGS_H_ +#define _FM10K_LOGS_H_ + +#define PMD_INIT_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \ + "PMD: %s(): " fmt "\n", __func__, ##args) + +#ifdef RTE_LIBRTE_FM10K_DEBUG_INIT +#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>") +#else +#define PMD_INIT_FUNC_TRACE() do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX +#define PMD_RX_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_RX_LOG(level, fmt, args...) do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_TX +#define PMD_TX_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_TX_LOG(level, fmt, args...) do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_TX_FREE +#define PMD_TX_FREE_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_DRIVER +#define PMD_DRV_LOG_RAW(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args) +#else +#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0) +#endif + +#define PMD_DRV_LOG(level, fmt, args...) 
\ + PMD_DRV_LOG_RAW(level, fmt "\n", ## args) + +#endif /* _FM10K_LOGS_H_ */ -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
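The fm10k.h introduced in the patch above also carries a small software FIFO (fifo_reset/fifo_insert/fifo_peek/fifo_remove) that the Tx path later uses to remember which descriptors had the RS bit set. The standalone sketch below copies those inline helpers and shows how insert on the transmit side pairs with remove on the cleanup side; backing storage is supplied by the caller as in the driver, and this is an illustration rather than driver code:

#include <stdint.h>
#include <stdio.h>

struct fifo {
	uint16_t *list, *head, *tail, *endp;
};

static inline void fifo_reset(struct fifo *f, uint32_t len)
{
	f->head = f->tail = f->list;
	f->endp = f->list + len;
}

static inline void fifo_insert(struct fifo *f, uint16_t val)
{
	*f->head = val;
	if (++f->head == f->endp)
		f->head = f->list;
}

static inline uint16_t fifo_remove(struct fifo *f)
{
	uint16_t val = *f->tail;
	if (++f->tail == f->endp)
		f->tail = f->list;
	return val;
}

int main(void)
{
	uint16_t storage[4];
	struct fifo f = { .list = storage };

	fifo_reset(&f, 4);

	/* Tx path: record the descriptor index each time RS is set */
	fifo_insert(&f, 31);
	fifo_insert(&f, 63);

	/* cleanup path: free mbufs up to the oldest RS descriptor */
	printf("free up to descriptor %u\n", fifo_remove(&f));
	printf("then up to descriptor %u\n", fifo_remove(&f));
	return 0;
}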
* [dpdk-dev] [PATCH v3 04/15] Change config files to add fm10k into compile 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (2 preceding siblings ...) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 03/15] fm10k: register fm10k pmd PF driver Chen Jing D(Mark) @ 2015-02-10 7:02 ` Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 05/15] fm10k: add reta update/requery functions Chen Jing D(Mark) ` (10 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-10 7:02 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Change config/common_bsdapp and config/common_linuxapp, add macros to control fm10k pmd driver compile for linux and bsd. 2. Change lib/Makefile to add fm10k driver into compile list. 3. Change mk/rte.app.mk to add fm10k lib into link. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- config/common_bsdapp | 11 +++++++++++ config/common_linuxapp | 11 +++++++++++ lib/Makefile | 1 + mk/rte.app.mk | 4 ++++ 4 files changed, 27 insertions(+), 0 deletions(-) diff --git a/config/common_bsdapp b/config/common_bsdapp index 9177db1..77e8a64 100644 --- a/config/common_bsdapp +++ b/config/common_bsdapp @@ -182,6 +182,17 @@ CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4 CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1 # +# Compile burst-oriented FM10K PMD +# +CONFIG_RTE_LIBRTE_FM10K_PMD=y +CONFIG_RTE_LIBRTE_FM10K_DEBUG_INIT=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_RX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX_FREE=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_DRIVER=n +CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y + +# # Compile burst-oriented Cisco ENIC PMD driver # CONFIG_RTE_LIBRTE_ENIC_PMD=y diff --git a/config/common_linuxapp b/config/common_linuxapp index 2f9643b..b076b5d 100644 --- a/config/common_linuxapp +++ b/config/common_linuxapp @@ -180,6 +180,17 @@ CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4 CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1 # +# Compile burst-oriented FM10K PMD +# +CONFIG_RTE_LIBRTE_FM10K_PMD=y +CONFIG_RTE_LIBRTE_FM10K_DEBUG_INIT=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_RX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX_FREE=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_DRIVER=n +CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y + +# # Compile burst-oriented Cisco ENIC PMD driver # CONFIG_RTE_LIBRTE_ENIC_PMD=y diff --git a/lib/Makefile b/lib/Makefile index 0ffc982..b1f3860 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -43,6 +43,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether DIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += librte_pmd_e1000 DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += librte_pmd_ixgbe DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += librte_pmd_i40e +DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += librte_pmd_fm10k DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += librte_pmd_enic DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += librte_pmd_bond DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += librte_pmd_ring diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 4294d9a..87d8763 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -211,6 +211,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_I40E_PMD),y) LDLIBS += -lrte_pmd_i40e endif +ifeq ($(CONFIG_RTE_LIBRTE_FM10K_PMD),y) +LDLIBS += -lrte_pmd_fm10k +endif + ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_PMD),y) LDLIBS += -lrte_pmd_ixgbe endif -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v3 05/15] fm10k: add reta update/requery functions 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (3 preceding siblings ...) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 04/15] Change config files to add fm10k into compile Chen Jing D(Mark) @ 2015-02-10 7:02 ` Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 06/15] fm10k: add rx_queue_setup/release function Chen Jing D(Mark) ` (9 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-10 7:02 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_reta_update and fm10k_reta_query functions. 2. Add fm10k_link_update and fm10k_dev_infos_get functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 162 +++++++++++++++++++++++++++++++++++ 1 files changed, 162 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 0b75299..b3d4d79 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -44,6 +44,10 @@ /* Default delay to acquire mailbox lock */ #define FM10K_MBXLOCK_DELAY_US 20 +/* Number of chars per uint32 type */ +#define CHARS_PER_UINT32 (sizeof(uint32_t)) +#define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1) + static void fm10k_mbx_initlock(struct fm10k_hw *hw) { @@ -74,6 +78,22 @@ fm10k_dev_configure(struct rte_eth_dev *dev) return 0; } +static int +fm10k_link_update(struct rte_eth_dev *dev, + __rte_unused int wait_to_complete) +{ + PMD_INIT_FUNC_TRACE(); + + /* The host-interface link is always up. The speed is ~50Gbps per Gen3 + * x8 PCIe interface. For now, we leave the speed undefined since there + * is no 50Gbps Ethernet. 
*/ + dev->data->dev_link.link_speed = 0; + dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX; + dev->data->dev_link.link_status = 1; + + return 0; +} + static void fm10k_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { @@ -119,6 +139,144 @@ fm10k_stats_reset(struct rte_eth_dev *dev) fm10k_rebind_hw_stats(hw, hw_stats); } +static void +fm10k_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + dev_info->min_rx_bufsize = FM10K_MIN_RX_BUF_SIZE; + dev_info->max_rx_pktlen = FM10K_MAX_PKT_SIZE; + dev_info->max_rx_queues = hw->mac.max_queues; + dev_info->max_tx_queues = hw->mac.max_queues; + dev_info->max_mac_addrs = 1; + dev_info->max_hash_mac_addrs = 0; + dev_info->max_vfs = FM10K_MAX_VF_NUM; + dev_info->max_vmdq_pools = ETH_64_POOLS; + dev_info->rx_offload_capa = + DEV_RX_OFFLOAD_IPV4_CKSUM | + DEV_RX_OFFLOAD_UDP_CKSUM | + DEV_RX_OFFLOAD_TCP_CKSUM; + dev_info->tx_offload_capa = 0; + dev_info->reta_size = FM10K_MAX_RSS_INDICES; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = FM10K_DEFAULT_RX_PTHRESH, + .hthresh = FM10K_DEFAULT_RX_HTHRESH, + .wthresh = FM10K_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = FM10K_RX_FREE_THRESH_DEFAULT(0), + .rx_drop_en = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = FM10K_DEFAULT_TX_PTHRESH, + .hthresh = FM10K_DEFAULT_TX_HTHRESH, + .wthresh = FM10K_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = FM10K_TX_FREE_THRESH_DEFAULT(0), + .tx_rs_thresh = FM10K_TX_RS_THRESH_DEFAULT(0), + .txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS | + ETH_TXQ_FLAGS_NOOFFLOADS, + }; + +} + +static int +fm10k_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint16_t i, j, idx, shift; + uint8_t mask; + uint32_t reta; + + PMD_INIT_FUNC_TRACE(); + + if (reta_size > FM10K_MAX_RSS_INDICES) { + PMD_INIT_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number hardware can supported " + "(%d)", reta_size, FM10K_MAX_RSS_INDICES); + return -EINVAL; + } + + /* + * Update Redirection Table RETA[n], n=0..31. 
The redirection table has + * 128-entries in 32 registers + */ + for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) { + idx = i / RTE_RETA_GROUP_SIZE; + shift = i % RTE_RETA_GROUP_SIZE; + mask = (uint8_t)((reta_conf[idx].mask >> shift) & + BIT_MASK_PER_UINT32); + if (mask == 0) + continue; + + reta = 0; + if (mask != BIT_MASK_PER_UINT32) + reta = FM10K_READ_REG(hw, FM10K_RETA(0, i >> 2)); + + for (j = 0; j < CHARS_PER_UINT32; j++) { + if (mask & (0x1 << j)) { + if (mask != 0xF) + reta &= ~(UINT8_MAX << CHAR_BIT * j); + reta |= reta_conf[idx].reta[shift + j] << + (CHAR_BIT * j); + } + } + FM10K_WRITE_REG(hw, FM10K_RETA(0, i >> 2), reta); + } + + return 0; +} + +static int +fm10k_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint16_t i, j, idx, shift; + uint8_t mask; + uint32_t reta; + + PMD_INIT_FUNC_TRACE(); + + if (reta_size < FM10K_MAX_RSS_INDICES) { + PMD_INIT_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number hardware can supported " + "(%d)", reta_size, FM10K_MAX_RSS_INDICES); + return -EINVAL; + } + + /* + * Read Redirection Table RETA[n], n=0..31. The redirection table has + * 128-entries in 32 registers + */ + for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) { + idx = i / RTE_RETA_GROUP_SIZE; + shift = i % RTE_RETA_GROUP_SIZE; + mask = (uint8_t)((reta_conf[idx].mask >> shift) & + BIT_MASK_PER_UINT32); + if (mask == 0) + continue; + + reta = FM10K_READ_REG(hw, FM10K_RETA(0, i >> 2)); + for (j = 0; j < CHARS_PER_UINT32; j++) { + if (mask & (0x1 << j)) + reta_conf[idx].reta[shift + j] = ((reta >> + CHAR_BIT * j) & UINT8_MAX); + } + } + + return 0; +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -165,6 +323,10 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_configure = fm10k_dev_configure, .stats_get = fm10k_stats_get, .stats_reset = fm10k_stats_reset, + .link_update = fm10k_link_update, + .dev_infos_get = fm10k_dev_infos_get, + .reta_update = fm10k_reta_update, + .reta_query = fm10k_reta_query, }; static int -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
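Each 32-bit FM10K_RETA register packs four 8-bit queue indices, which is why the loops above walk the 128-entry table four entries at a time and only read-modify-write a register when its 4-bit per-register mask is partially set. The standalone helper below shows just that pack/unpack arithmetic (illustrative names, no register access):

#include <stdint.h>
#include <stdio.h>
#include <limits.h>

#define ENTRIES_PER_REG 4	/* four 8-bit queue indices per 32-bit register */

/* pack reta[i..i+3] into one register value */
static uint32_t reta_pack(const uint8_t *entries)
{
	uint32_t reg = 0;
	for (int j = 0; j < ENTRIES_PER_REG; j++)
		reg |= (uint32_t)entries[j] << (CHAR_BIT * j);
	return reg;
}

/* unpack one register value back into four queue indices */
static void reta_unpack(uint32_t reg, uint8_t *entries)
{
	for (int j = 0; j < ENTRIES_PER_REG; j++)
		entries[j] = (reg >> (CHAR_BIT * j)) & UINT8_MAX;
}

int main(void)
{
	uint8_t in[ENTRIES_PER_REG] = { 0, 1, 2, 3 };	/* spread over 4 queues */
	uint8_t out[ENTRIES_PER_REG];
	uint32_t reg = reta_pack(in);

	reta_unpack(reg, out);
	printf("reg=0x%08x -> %u %u %u %u\n", reg, out[0], out[1], out[2], out[3]);
	return 0;
}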
* [dpdk-dev] [PATCH v3 06/15] fm10k: add rx_queue_setup/release function 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (4 preceding siblings ...) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 05/15] fm10k: add reta update/requery functions Chen Jing D(Mark) @ 2015-02-10 7:02 ` Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 07/15] fm10k: add tx_queue_setup/release function Chen Jing D(Mark) ` (8 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-10 7:02 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_rx_queue_setup and fm10k_rx_queue_release functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 255 +++++++++++++++++++++++++++++++++++ 1 files changed, 255 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index b3d4d79..63e8a65 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -41,6 +41,7 @@ #include "fm10k.h" #include "base/fm10k_api.h" +#define FM10K_RX_BUFF_ALIGN 512 /* Default delay to acquire mailbox lock */ #define FM10K_MBXLOCK_DELAY_US 20 @@ -67,6 +68,46 @@ fm10k_mbx_unlock(struct fm10k_hw *hw) rte_spinlock_unlock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); } +/* + * clean queue, descriptor rings, free software buffers used when stopping + * device. + */ +static inline void +rx_queue_clean(struct fm10k_rx_queue *q) +{ + union fm10k_rx_desc zero = {.q = {0, 0, 0, 0} }; + uint32_t i; + PMD_INIT_FUNC_TRACE(); + + /* zero descriptor rings */ + for (i = 0; i < q->nb_desc; ++i) + q->hw_ring[i] = zero; + + /* free software buffers */ + for (i = 0; i < q->nb_desc; ++i) { + if (q->sw_ring[i]) { + rte_pktmbuf_free_seg(q->sw_ring[i]); + q->sw_ring[i] = NULL; + } + } +} + +/* + * free all queue memory used when releasing the queue (i.e. configure) + */ +static inline void +rx_queue_free(struct fm10k_rx_queue *q) +{ + PMD_INIT_FUNC_TRACE(); + if (q) { + PMD_INIT_LOG(DEBUG, "Freeing rx queue %p", q); + rx_queue_clean(q); + if (q->sw_ring) + rte_free(q->sw_ring); + rte_free(q); + } +} + static int fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -186,6 +227,218 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev, } +static inline int +check_nb_desc(uint16_t min, uint16_t max, uint16_t mult, uint16_t request) +{ + if ((request < min) || (request > max) || ((request % mult) != 0)) + return -1; + else + return 0; +} + +/* + * Create a memzone for hardware descriptor rings. Malloc cannot be used since + * the physical address is required. If the memzone is already created, then + * this function returns a pointer to the existing memzone. 
+ */ +static inline const struct rte_memzone * +allocate_hw_ring(const char *driver_name, const char *ring_name, + uint8_t port_id, uint16_t queue_id, int socket_id, + uint32_t size, uint32_t align) +{ + char name[RTE_MEMZONE_NAMESIZE]; + const struct rte_memzone *mz; + + snprintf(name, sizeof(name), "%s_%s_%d_%d_%d", + driver_name, ring_name, port_id, queue_id, socket_id); + + /* return the memzone if it already exists */ + mz = rte_memzone_lookup(name); + if (mz) + return mz; + +#ifdef RTE_LIBRTE_XEN_DOM0 + return rte_memzone_reserve_bounded(name, size, socket_id, 0, align, + RTE_PGSIZE_2M); +#else + return rte_memzone_reserve_aligned(name, size, socket_id, 0, align); +#endif +} + +static inline int +check_thresh(uint16_t min, uint16_t max, uint16_t div, uint16_t request) +{ + if ((request < min) || (request > max) || ((div % request) != 0)) + return -1; + else + return 0; +} + +static inline int +handle_rxconf(struct fm10k_rx_queue *q, const struct rte_eth_rxconf *conf) +{ + uint16_t rx_free_thresh; + + if (conf->rx_free_thresh == 0) + rx_free_thresh = FM10K_RX_FREE_THRESH_DEFAULT(q); + else + rx_free_thresh = conf->rx_free_thresh; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_RX_FREE_THRESH_MIN(q), + FM10K_RX_FREE_THRESH_MAX(q), + FM10K_RX_FREE_THRESH_DIV(q), + rx_free_thresh)) { + PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + rx_free_thresh, FM10K_RX_FREE_THRESH_MAX(q), + FM10K_RX_FREE_THRESH_MIN(q), + FM10K_RX_FREE_THRESH_DIV(q)); + return (-EINVAL); + } + + q->alloc_thresh = rx_free_thresh; + q->drop_en = conf->rx_drop_en; + q->rx_deferred_start = conf->rx_deferred_start; + + return 0; +} + +/* + * Hardware requires specific alignment for Rx packet buffers. At + * least one of the following two conditions must be satisfied. + * 1. Address is 512B aligned + * 2. Address is 8B aligned and buffer does not cross 4K boundary. + * + * As such, the driver may need to adjust the DMA address within the + * buffer by up to 512B. The mempool element size is checked here + * to make sure a maximally sized Ethernet frame can still be wholly + * contained within the buffer after 512B alignment. + * + * return 1 if the element size is valid, otherwise return 0. + */ +static int +mempool_element_size_valid(struct rte_mempool *mp) +{ + uint32_t min_size; + + /* elt_size includes mbuf header and headroom */ + min_size = mp->elt_size - sizeof(struct rte_mbuf) - + RTE_PKTMBUF_HEADROOM; + + /* account for up to 512B of alignment */ + min_size -= FM10K_RX_BUFF_ALIGN; + + /* sanity check for overflow */ + if (min_size > mp->elt_size) + return 0; + + if (min_size < ETHER_MAX_VLAN_FRAME_LEN) + return 0; + + /* size is valid */ + return 1; +} + +static int +fm10k_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_rxconf *conf, struct rte_mempool *mp) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_rx_queue *q; + const struct rte_memzone *mz; + + PMD_INIT_FUNC_TRACE(); + + /* make sure the mempool element size can account for alignment. Use + * RTE_LOG directly to make sure this error is seen. 
*/ + if (!mempool_element_size_valid(mp)) { + PMD_INIT_LOG(ERR, "Error : Mempool element size is too small"); + return (-EINVAL); + } + + /* make sure a valid number of descriptors have been requested */ + if (check_nb_desc(FM10K_MIN_RX_DESC, FM10K_MAX_RX_DESC, + FM10K_MULT_RX_DESC, nb_desc)) { + PMD_INIT_LOG(ERR, "Number of Rx descriptors (%u) must be " + "less than or equal to %lu, " + "greater than or equal to %u, " + "and a multiple of %u", + nb_desc, FM10K_MAX_RX_DESC, FM10K_MIN_RX_DESC, + FM10K_MULT_RX_DESC); + return (-EINVAL); + } + + /* + * if this queue existed already, free the associated memory. The + * queue cannot be reused in case we need to allocate memory on + * different socket than was previously used. + */ + if (dev->data->rx_queues[queue_id] != NULL) { + rx_queue_free(dev->data->rx_queues[queue_id]); + dev->data->rx_queues[queue_id] = NULL; + } + + /* allocate memory for the queue structure */ + q = rte_zmalloc_socket("fm10k", sizeof(*q), RTE_CACHE_LINE_SIZE, + socket_id); + if (q == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate queue structure"); + return (-ENOMEM); + } + + /* setup queue */ + q->mp = mp; + q->nb_desc = nb_desc; + q->port_id = dev->data->port_id; + q->queue_id = queue_id; + q->tail_ptr = (volatile uint32_t *) + &((uint32_t *)hw->hw_addr)[FM10K_RDT(queue_id)]; + if (handle_rxconf(q, conf)) + return (-EINVAL); + + /* allocate memory for the software ring */ + q->sw_ring = rte_zmalloc_socket("fm10k sw ring", + nb_desc * sizeof(struct rte_mbuf *), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->sw_ring == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate software ring"); + rte_free(q); + return (-ENOMEM); + } + + /* + * allocate memory for the hardware descriptor ring. A memzone large + * enough to hold the maximum ring size is requested to allow for + * resizing in later calls to the queue setup function. + */ + mz = allocate_hw_ring(dev->driver->pci_drv.name, "rx_ring", + dev->data->port_id, queue_id, socket_id, + FM10K_MAX_RX_RING_SZ, FM10K_ALIGN_RX_DESC); + if (mz == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate hardware ring"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + q->hw_ring = mz->addr; + q->hw_ring_phys_addr = mz->phys_addr; + + dev->data->rx_queues[queue_id] = q; + return 0; +} + +static void +fm10k_rx_queue_release(void *queue) +{ + PMD_INIT_FUNC_TRACE(); + + rx_queue_free(queue); +} + static int fm10k_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, @@ -325,6 +578,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, .dev_infos_get = fm10k_dev_infos_get, + .rx_queue_setup = fm10k_rx_queue_setup, + .rx_queue_release = fm10k_rx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
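A minimal sketch of the application side of this hook, assuming port 0, a 512-descriptor ring and an already-created mbuf pool (none of which are part of the patch). Passing a NULL rxconf picks up the defaults advertised by fm10k_dev_infos_get(), and the mempool elements must leave room for a maximum-size frame even after the up-to-512B alignment adjustment described above:

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mempool.h>

    static int
    example_rx_queue_setup(struct rte_mempool *rx_pool)
    {
            /* NULL rxconf: the ethdev layer substitutes default_rxconf from
             * fm10k_dev_infos_get(), including FM10K_RX_FREE_THRESH_DEFAULT */
            return rte_eth_rx_queue_setup(0 /* port */, 0 /* queue */,
                                          512 /* nb_desc */, rte_socket_id(),
                                          NULL, rx_pool);
    }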
* [dpdk-dev] [PATCH v3 07/15] fm10k: add tx_queue_setup/release function 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (5 preceding siblings ...) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 06/15] fm10k: add rx_queue_setup/release function Chen Jing D(Mark) @ 2015-02-10 7:02 ` Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 08/15] fm10k: add RX/TX single queue start/stop function Chen Jing D(Mark) ` (7 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-10 7:02 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_tx_queue_setup and fm10k_tx_queue_release functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 205 +++++++++++++++++++++++++++++++++++ 1 files changed, 205 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 63e8a65..caaccf6 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -108,6 +108,48 @@ rx_queue_free(struct fm10k_rx_queue *q) } } +/* + * clean queue, descriptor rings, free software buffers used when stopping + * device + */ +static inline void +tx_queue_clean(struct fm10k_tx_queue *q) +{ + struct fm10k_tx_desc zero = {0, 0, 0, 0, 0, 0}; + uint32_t i; + PMD_INIT_FUNC_TRACE(); + + /* zero descriptor rings */ + for (i = 0; i < q->nb_desc; ++i) + q->hw_ring[i] = zero; + + /* free software buffers */ + for (i = 0; i < q->nb_desc; ++i) { + if (q->sw_ring[i]) { + rte_pktmbuf_free_seg(q->sw_ring[i]); + q->sw_ring[i] = NULL; + } + } +} + +/* + * free all queue memory used when releasing the queue (i.e. 
configure) + */ +static inline void +tx_queue_free(struct fm10k_tx_queue *q) +{ + PMD_INIT_FUNC_TRACE(); + if (q) { + PMD_INIT_LOG(DEBUG, "Freeing tx queue %p", q); + tx_queue_clean(q); + if (q->rs_tracker.list) + rte_free(q->rs_tracker.list); + if (q->sw_ring) + rte_free(q->sw_ring); + rte_free(q); + } +} + static int fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -439,6 +481,167 @@ fm10k_rx_queue_release(void *queue) rx_queue_free(queue); } +static inline int +handle_txconf(struct fm10k_tx_queue *q, const struct rte_eth_txconf *conf) +{ + uint16_t tx_free_thresh; + uint16_t tx_rs_thresh; + + /* constraint MACROs require that tx_free_thresh is configured + * before tx_rs_thresh */ + if (conf->tx_free_thresh == 0) + tx_free_thresh = FM10K_TX_FREE_THRESH_DEFAULT(q); + else + tx_free_thresh = conf->tx_free_thresh; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_TX_FREE_THRESH_MIN(q), + FM10K_TX_FREE_THRESH_MAX(q), + FM10K_TX_FREE_THRESH_DIV(q), + tx_free_thresh)) { + PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + tx_free_thresh, FM10K_TX_FREE_THRESH_MAX(q), + FM10K_TX_FREE_THRESH_MIN(q), + FM10K_TX_FREE_THRESH_DIV(q)); + return (-EINVAL); + } + + q->free_thresh = tx_free_thresh; + + if (conf->tx_rs_thresh == 0) + tx_rs_thresh = FM10K_TX_RS_THRESH_DEFAULT(q); + else + tx_rs_thresh = conf->tx_rs_thresh; + + q->tx_deferred_start = conf->tx_deferred_start; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_TX_RS_THRESH_MIN(q), + FM10K_TX_RS_THRESH_MAX(q), + FM10K_TX_RS_THRESH_DIV(q), + tx_rs_thresh)) { + PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + tx_rs_thresh, FM10K_TX_RS_THRESH_MAX(q), + FM10K_TX_RS_THRESH_MIN(q), + FM10K_TX_RS_THRESH_DIV(q)); + return (-EINVAL); + } + + q->rs_thresh = tx_rs_thresh; + + return 0; +} + +static int +fm10k_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_txconf *conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_tx_queue *q; + const struct rte_memzone *mz; + + PMD_INIT_FUNC_TRACE(); + + /* make sure a valid number of descriptors have been requested */ + if (check_nb_desc(FM10K_MIN_TX_DESC, FM10K_MAX_TX_DESC, + FM10K_MULT_TX_DESC, nb_desc)) { + PMD_INIT_LOG(ERR, "Number of Tx descriptors (%u) must be " + "less than or equal to %lu, " + "greater than or equal to %u, " + "and a multiple of %u", + nb_desc, FM10K_MAX_TX_DESC, FM10K_MIN_TX_DESC, + FM10K_MULT_TX_DESC); + return (-EINVAL); + } + + /* + * if this queue existed already, free the associated memory. The + * queue cannot be reused in case we need to allocate memory on + * different socket than was previously used. 
+ */ + if (dev->data->tx_queues[queue_id] != NULL) { + tx_queue_free(dev->data->tx_queues[queue_id]); + dev->data->tx_queues[queue_id] = NULL; + } + + /* allocate memory for the queue structure */ + q = rte_zmalloc_socket("fm10k", sizeof(*q), RTE_CACHE_LINE_SIZE, + socket_id); + if (q == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate queue structure"); + return (-ENOMEM); + } + + /* setup queue */ + q->nb_desc = nb_desc; + q->port_id = dev->data->port_id; + q->queue_id = queue_id; + q->tail_ptr = (volatile uint32_t *) + &((uint32_t *)hw->hw_addr)[FM10K_TDT(queue_id)]; + if (handle_txconf(q, conf)) + return (-EINVAL); + + /* allocate memory for the software ring */ + q->sw_ring = rte_zmalloc_socket("fm10k sw ring", + nb_desc * sizeof(struct rte_mbuf *), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->sw_ring == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate software ring"); + rte_free(q); + return (-ENOMEM); + } + + /* + * allocate memory for the hardware descriptor ring. A memzone large + * enough to hold the maximum ring size is requested to allow for + * resizing in later calls to the queue setup function. + */ + mz = allocate_hw_ring(dev->driver->pci_drv.name, "tx_ring", + dev->data->port_id, queue_id, socket_id, + FM10K_MAX_TX_RING_SZ, FM10K_ALIGN_TX_DESC); + if (mz == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate hardware ring"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + q->hw_ring = mz->addr; + q->hw_ring_phys_addr = mz->phys_addr; + + /* + * allocate memory for the RS bit tracker. Enough slots to hold the + * descriptor index for each RS bit needing to be set are required. + */ + q->rs_tracker.list = rte_zmalloc_socket("fm10k rs tracker", + ((nb_desc + 1) / q->rs_thresh) * + sizeof(uint16_t), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->rs_tracker.list == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate RS bit tracker"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + + dev->data->tx_queues[queue_id] = q; + return 0; +} + +static void +fm10k_tx_queue_release(void *queue) +{ + PMD_INIT_FUNC_TRACE(); + + tx_queue_free(queue); +} + static int fm10k_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, @@ -580,6 +783,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_infos_get = fm10k_dev_infos_get, .rx_queue_setup = fm10k_rx_queue_setup, .rx_queue_release = fm10k_rx_queue_release, + .tx_queue_setup = fm10k_tx_queue_setup, + .tx_queue_release = fm10k_tx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
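The Tx side mirrors the Rx setup; a short sketch with explicit thresholds, assuming port 0 and a 512-entry ring (the values are illustrative; a NULL txconf would instead take the defaults from fm10k_dev_infos_get()). handle_txconf() rejects thresholds outside the min/max range or that do not divide the FM10K_TX_*_THRESH_DIV constraint evenly, which both values below satisfy for a 512-descriptor ring:

    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    static int
    example_tx_queue_setup(uint8_t port_id)
    {
            struct rte_eth_txconf txconf = {
                    .tx_free_thresh = 32,  /* reclaim completed descriptors in batches */
                    .tx_rs_thresh   = 32,  /* set the RS bit every 32 descriptors */
            };

            return rte_eth_tx_queue_setup(port_id, 0 /* queue */, 512 /* nb_desc */,
                                          rte_socket_id(), &txconf);
    }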
* [dpdk-dev] [PATCH v3 08/15] fm10k: add RX/TX single queue start/stop function 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (6 preceding siblings ...) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 07/15] fm10k: add tx_queue_setup/release function Chen Jing D(Mark) @ 2015-02-10 7:02 ` Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 09/15] fm10k: add dev start/stop functions Chen Jing D(Mark) ` (6 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-10 7:02 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add 4 functions fm10k_dev_rx_queue_start, fm10k_dev_rx_queue_stop, fm10k_dev_tx_queue_start, and fm10k_dev_tx_queue_stop. 2. verify Rx packet buffer alignment is valid. Hardware requires specific alignment for Rx packet buffers. At least one of the following two conditions must be satisfied. 1) Address is 512B aligned 2) Address is 8B aligned and buffer does not cross 4K boundary. Alignment is checked by the driver when the Rx queue is reset. It is assumed that if an entire descriptor ring can be filled with buffers containing valid alignment, then all buffers in that mempool have valid address alignment. It is the responsibility of the user to ensure all buffers have valid alignment, as it is the user who creates the mempool. It is assumed the buffer needs only to store a maximum size Ethernet frame. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k.h | 59 +++++++++ lib/librte_pmd_fm10k/fm10k_ethdev.c | 222 +++++++++++++++++++++++++++++++++++ 2 files changed, 281 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index 1468040..be990e5 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -221,4 +221,63 @@ static inline uint16_t fifo_remove(struct fifo *fifo) fifo->tail = fifo->list; return val; } + +static inline void +fm10k_pktmbuf_reset(struct rte_mbuf *mb, uint8_t in_port) +{ + rte_mbuf_refcnt_set(mb, 1); + mb->next = NULL; + mb->nb_segs = 1; + + /* enforce 512B alignment on default Rx virtual addresses */ + mb->data_off = (uint16_t)(RTE_PTR_ALIGN((char *)mb->buf_addr + + RTE_PKTMBUF_HEADROOM, FM10K_RX_DATABUF_ALIGN) + - (char *)mb->buf_addr); + mb->port = in_port; +} + +/* + * Verify Rx packet buffer alignment is valid. + * + * Hardware requires specific alignment for Rx packet buffers. At + * least one of the following two conditions must be satisfied. + * 1. Address is 512B aligned + * 2. Address is 8B aligned and buffer does not cross 4K boundary. + * + * Return 1 if buffer alignment satisfies at least one condition, + * otherwise return 0. + * + * Note: Alignment is checked by the driver when the Rx queue is reset. It + * is assumed that if an entire descriptor ring can be filled with + * buffers containing valid alignment, then all buffers in that mempool + * have valid address alignment. It is the responsibility of the user + * to ensure all buffers have valid alignment, as it is the user who + * creates the mempool. + * Note: It is assumed the buffer needs only to store a maximum size Ethernet + * frame. + */ +static inline int +fm10k_addr_alignment_valid(struct rte_mbuf *mb) +{ + uint64_t addr = MBUF_DMA_ADDR_DEFAULT(mb); + uint64_t boundary1, boundary2; + + /* 512B aligned? 
*/ + if (RTE_ALIGN(addr, 512) == addr) + return 1; + + /* 8B aligned, and max Ethernet frame would not cross a 4KB boundary? */ + if (RTE_ALIGN(addr, 8) == addr) { + boundary1 = RTE_ALIGN_FLOOR(addr, 4096); + boundary2 = RTE_ALIGN_FLOOR(addr + ETHER_MAX_VLAN_FRAME_LEN, + 4096); + if (boundary1 == boundary2) + return 1; + } + + /* use RTE_LOG directly to make sure this error is seen */ + RTE_LOG(ERR, PMD, "%s(): Error: Invalid buffer alignment\n", __func__); + + return 0; +} #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index caaccf6..4a893e1 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -69,6 +69,43 @@ fm10k_mbx_unlock(struct fm10k_hw *hw) } /* + * reset queue to initial state, allocate software buffers used when starting + * device. + * return 0 on success + * return -ENOMEM if buffers cannot be allocated + * return -EINVAL if buffers do not satisfy alignment condition + */ +static inline int +rx_queue_reset(struct fm10k_rx_queue *q) +{ + uint64_t dma_addr; + int i, diag; + PMD_INIT_FUNC_TRACE(); + + diag = rte_mempool_get_bulk(q->mp, (void **)q->sw_ring, q->nb_desc); + if (diag != 0) + return -ENOMEM; + + for (i = 0; i < q->nb_desc; ++i) { + fm10k_pktmbuf_reset(q->sw_ring[i], q->port_id); + if (!fm10k_addr_alignment_valid(q->sw_ring[i])) { + rte_mempool_put_bulk(q->mp, (void **)q->sw_ring, + q->nb_desc); + return -EINVAL; + } + dma_addr = MBUF_DMA_ADDR_DEFAULT(q->sw_ring[i]); + q->hw_ring[i].q.pkt_addr = dma_addr; + q->hw_ring[i].q.hdr_addr = dma_addr; + } + + q->next_dd = 0; + q->next_alloc = 0; + q->next_trigger = q->alloc_thresh - 1; + FM10K_PCI_REG_WRITE(q->tail_ptr, q->nb_desc - 1); + return 0; +} + +/* * clean queue, descriptor rings, free software buffers used when stopping * device. 
*/ @@ -109,6 +146,49 @@ rx_queue_free(struct fm10k_rx_queue *q) } /* + * disable RX queue, wait unitl HW finished necessary flush operation + */ +static inline int +rx_queue_disable(struct fm10k_hw *hw, uint16_t qnum) +{ + uint32_t reg, i; + + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(qnum)); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(qnum), + reg & ~FM10K_RXQCTL_ENABLE); + + /* Wait 100us at most */ + for (i = 0; i < FM10K_QUEUE_DISABLE_TIMEOUT; i++) { + rte_delay_us(1); + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + if (!(reg & FM10K_RXQCTL_ENABLE)) + break; + } + + if (i == FM10K_QUEUE_DISABLE_TIMEOUT) + return -1; + + return 0; +} + +/* + * reset queue to initial state, allocate software buffers used when starting + * device + */ +static inline void +tx_queue_reset(struct fm10k_tx_queue *q) +{ + PMD_INIT_FUNC_TRACE(); + q->last_free = 0; + q->next_free = 0; + q->nb_used = 0; + q->nb_free = q->nb_desc - 1; + q->free_trigger = q->nb_free - q->free_thresh; + fifo_reset(&q->rs_tracker, (q->nb_desc + 1) / q->rs_thresh); + FM10K_PCI_REG_WRITE(q->tail_ptr, 0); +} + +/* * clean queue, descriptor rings, free software buffers used when stopping * device */ @@ -150,6 +230,32 @@ tx_queue_free(struct fm10k_tx_queue *q) } } +/* + * disable TX queue, wait unitl HW finished necessary flush operation + */ +static inline int +tx_queue_disable(struct fm10k_hw *hw, uint16_t qnum) +{ + uint32_t reg, i; + + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(qnum)); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(qnum), + reg & ~FM10K_TXDCTL_ENABLE); + + /* Wait 100us at most */ + for (i = 0; i < FM10K_QUEUE_DISABLE_TIMEOUT; i++) { + rte_delay_us(1); + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + if (!(reg & FM10K_TXDCTL_ENABLE)) + break; + } + + if (i == FM10K_QUEUE_DISABLE_TIMEOUT) + return -1; + + return 0; +} + static int fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -162,6 +268,118 @@ fm10k_dev_configure(struct rte_eth_dev *dev) } static int +fm10k_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int err = -1; + uint32_t reg; + struct fm10k_rx_queue *rxq; + + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id < dev->data->nb_rx_queues) { + rxq = dev->data->rx_queues[rx_queue_id]; + err = rx_queue_reset(rxq); + if (err == -ENOMEM) { + PMD_INIT_LOG(ERR, "Failed to alloc memory : %d", err); + return err; + } else if (err == -EINVAL) { + PMD_INIT_LOG(ERR, "Invalid buffer address alignment :" + " %d", err); + return err; + } + + /* Setup the HW Rx Head and Tail Descriptor Pointers + * Note: this must be done AFTER the queue is enabled on real + * hardware, but BEFORE the queue is enabled when using the + * emulation platform. Do it in both places for now and remove + * this comment and the following two register writes when the + * emulation platform is no longer being used. 
+ */ + FM10K_WRITE_REG(hw, FM10K_RDH(rx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_RDT(rx_queue_id), rxq->nb_desc - 1); + + /* Set PF ownership flag for PF devices */ + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(rx_queue_id)); + if (hw->mac.type == fm10k_mac_pf) + reg |= FM10K_RXQCTL_PF; + reg |= FM10K_RXQCTL_ENABLE; + /* enable RX queue */ + FM10K_WRITE_REG(hw, FM10K_RXQCTL(rx_queue_id), reg); + FM10K_WRITE_FLUSH(hw); + + /* Setup the HW Rx Head and Tail Descriptor Pointers + * Note: this must be done AFTER the queue is enabled + */ + FM10K_WRITE_REG(hw, FM10K_RDH(rx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_RDT(rx_queue_id), rxq->nb_desc - 1); + } + + return err; +} + +static int +fm10k_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id < dev->data->nb_rx_queues) { + /* Disable RX queue */ + rx_queue_disable(hw, rx_queue_id); + + /* Free mbuf and clean HW ring */ + rx_queue_clean(dev->data->rx_queues[rx_queue_id]); + } + + return 0; +} + +static int +fm10k_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + /** @todo - this should be defined in the shared code */ +#define FM10K_TXDCTL_WRITE_BACK_MIN_DELAY 0x00010000 + uint32_t txdctl = FM10K_TXDCTL_WRITE_BACK_MIN_DELAY; + int err = 0; + + PMD_INIT_FUNC_TRACE(); + + if (tx_queue_id < dev->data->nb_tx_queues) { + tx_queue_reset(dev->data->tx_queues[tx_queue_id]); + + /* reset head and tail pointers */ + FM10K_WRITE_REG(hw, FM10K_TDH(tx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_TDT(tx_queue_id), 0); + + /* enable TX queue */ + FM10K_WRITE_REG(hw, FM10K_TXDCTL(tx_queue_id), + FM10K_TXDCTL_ENABLE | txdctl); + FM10K_WRITE_FLUSH(hw); + } else + err = -1; + + return err; +} + +static int +fm10k_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + if (tx_queue_id < dev->data->nb_tx_queues) { + tx_queue_disable(hw, tx_queue_id); + tx_queue_clean(dev->data->tx_queues[tx_queue_id]); + } + + return 0; +} + +static int fm10k_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) { @@ -781,6 +999,10 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, .dev_infos_get = fm10k_dev_infos_get, + .rx_queue_start = fm10k_dev_rx_queue_start, + .rx_queue_stop = fm10k_dev_rx_queue_stop, + .tx_queue_start = fm10k_dev_tx_queue_start, + .tx_queue_stop = fm10k_dev_tx_queue_stop, .rx_queue_setup = fm10k_rx_queue_setup, .rx_queue_release = fm10k_rx_queue_release, .tx_queue_setup = fm10k_tx_queue_setup, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
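For context, these hooks back the generic per-queue start/stop API, which is typically paired with deferred start. A minimal sketch, assuming port 0, queue 0, a 512-descriptor ring and an existing mempool (all illustrative):

    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    static int
    example_deferred_rx_start(struct rte_mempool *pool)
    {
            struct rte_eth_rxconf rxconf = { .rx_deferred_start = 1 };
            int ret;

            ret = rte_eth_rx_queue_setup(0, 0, 512, rte_socket_id(), &rxconf, pool);
            if (ret < 0)
                    return ret;

            /* rte_eth_dev_start(0) happens here; fm10k_dev_start() (patch 09)
             * skips queues that have rx_deferred_start set */

            ret = rte_eth_dev_rx_queue_start(0, 0);  /* fm10k_dev_rx_queue_start */
            if (ret < 0)
                    return ret;
            return rte_eth_dev_rx_queue_stop(0, 0);  /* fm10k_dev_rx_queue_stop */
    }

Note that rx_queue_reset() pre-fills the entire ring from the mempool, so the alignment of every buffer is validated at queue start, matching the assumption stated in the commit message.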
* [dpdk-dev] [PATCH v3 09/15] fm10k: add dev start/stop functions 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (7 preceding siblings ...) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 08/15] fm10k: add RX/TX single queue start/stop function Chen Jing D(Mark) @ 2015-02-10 7:02 ` Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 10/15] fm10k: add receive and tranmit function Chen Jing D(Mark) ` (5 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-10 7:02 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add function to initialize RX queues. 2. Add function to initialize TX queues. 3. Add fm10k_dev_start, fm10k_dev_stop and fm10k_dev_close functions. 4. Add function to close mailbox service. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 224 +++++++++++++++++++++++++++++++++++ 1 files changed, 224 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 4a893e1..011b0bc 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -44,11 +44,14 @@ #define FM10K_RX_BUFF_ALIGN 512 /* Default delay to acquire mailbox lock */ #define FM10K_MBXLOCK_DELAY_US 20 +#define UINT64_LOWER_32BITS_MASK 0x00000000ffffffffULL /* Number of chars per uint32 type */ #define CHARS_PER_UINT32 (sizeof(uint32_t)) #define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1) +static void fm10k_close_mbx_service(struct fm10k_hw *hw); + static void fm10k_mbx_initlock(struct fm10k_hw *hw) { @@ -268,6 +271,98 @@ fm10k_dev_configure(struct rte_eth_dev *dev) } static int +fm10k_dev_tx_init(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, ret; + struct fm10k_tx_queue *txq; + uint64_t base_addr; + uint32_t size; + + /* Disable TXINT to avoid possible interrupt */ + for (i = 0; i < hw->mac.max_queues; i++) + FM10K_WRITE_REG(hw, FM10K_TXINT(i), + 3 << FM10K_TXINT_TIMER_SHIFT); + + /* Setup TX queue */ + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = dev->data->tx_queues[i]; + base_addr = txq->hw_ring_phys_addr; + size = txq->nb_desc * sizeof(struct fm10k_tx_desc); + + /* disable queue to avoid issues while updating state */ + ret = tx_queue_disable(hw, i); + if (ret) { + PMD_INIT_LOG(ERR, "failed to disable queue %d", i); + return -1; + } + + /* set location and size for descriptor ring */ + FM10K_WRITE_REG(hw, FM10K_TDBAL(i), + base_addr & UINT64_LOWER_32BITS_MASK); + FM10K_WRITE_REG(hw, FM10K_TDBAH(i), + base_addr >> (CHAR_BIT * sizeof(uint32_t))); + FM10K_WRITE_REG(hw, FM10K_TDLEN(i), size); + } + return 0; +} + +static int +fm10k_dev_rx_init(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, ret; + struct fm10k_rx_queue *rxq; + uint64_t base_addr; + uint32_t size; + uint32_t rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY; + uint16_t buf_size; + struct rte_pktmbuf_pool_private *mbp_priv; + + /* Disable RXINT to avoid possible interrupt */ + for (i = 0; i < hw->mac.max_queues; i++) + FM10K_WRITE_REG(hw, FM10K_RXINT(i), + 3 << FM10K_RXINT_TIMER_SHIFT); + + /* Setup RX queues */ + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = dev->data->rx_queues[i]; + base_addr = rxq->hw_ring_phys_addr; + size = rxq->nb_desc * sizeof(union fm10k_rx_desc); + + /* disable queue 
to avoid issues while updating state */ + ret = rx_queue_disable(hw, i); + if (ret) { + PMD_INIT_LOG(ERR, "failed to disable queue %d", i); + return -1; + } + + /* Setup the Base and Length of the Rx Descriptor Ring */ + FM10K_WRITE_REG(hw, FM10K_RDBAL(i), + base_addr & UINT64_LOWER_32BITS_MASK); + FM10K_WRITE_REG(hw, FM10K_RDBAH(i), + base_addr >> (CHAR_BIT * sizeof(uint32_t))); + FM10K_WRITE_REG(hw, FM10K_RDLEN(i), size); + + /* Configure the Rx buffer size for one buff without split */ + mbp_priv = rte_mempool_get_priv(rxq->mp); + buf_size = (uint16_t) (mbp_priv->mbuf_data_room_size - + RTE_PKTMBUF_HEADROOM); + FM10K_WRITE_REG(hw, FM10K_SRRCTL(i), + buf_size >> FM10K_SRRCTL_BSIZEPKT_SHIFT); + + /* Enable drop on empty, it's RO for VF */ + if (hw->mac.type == fm10k_mac_pf && rxq->drop_en) + rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY; + + FM10K_WRITE_REG(hw, FM10K_RXDCTL(i), rxdctl); + FM10K_WRITE_FLUSH(hw); + } + + return 0; +} + +static int fm10k_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) { struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -379,6 +474,125 @@ fm10k_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) return 0; } +/* fls = find last set bit = 32 minus the number of leading zeros */ +#ifndef fls +#define fls(x) (((x) == 0) ? 0 : (32 - __builtin_clz((x)))) +#endif +#define BSIZEPKT_ROUNDUP ((1 << FM10K_SRRCTL_BSIZEPKT_SHIFT) - 1) +static int +fm10k_dev_start(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, diag; + + PMD_INIT_FUNC_TRACE(); + + /* stop, init, then start the hw */ + diag = fm10k_stop_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware stop failed: %d", diag); + return -EIO; + } + + diag = fm10k_init_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware init failed: %d", diag); + return -EIO; + } + + diag = fm10k_start_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware start failed: %d", diag); + return -EIO; + } + + diag = fm10k_dev_tx_init(dev); + if (diag) { + PMD_INIT_LOG(ERR, "TX init failed: %d", diag); + return diag; + } + + diag = fm10k_dev_rx_init(dev); + if (diag) { + PMD_INIT_LOG(ERR, "RX init failed: %d", diag); + return diag; + } + + if (hw->mac.type == fm10k_mac_pf) { + /* Establish only VSI 0 as valid */ + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(0), FM10K_DGLORTMAP_ANY); + + /* Configure RSS bits used in RETA table */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(0), + fls(dev->data->nb_rx_queues - 1) << + FM10K_DGLORTDEC_RSSLENGTH_SHIFT); + + /* Invalidate all other GLORT entries */ + for (i = 1; i < FM10K_DGLORT_COUNT; i++) + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(i), + FM10K_DGLORTMAP_NONE); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + struct fm10k_rx_queue *rxq; + rxq = dev->data->rx_queues[i]; + + if (rxq->rx_deferred_start) + continue; + diag = fm10k_dev_rx_queue_start(dev, i); + if (diag != 0) { + int j; + for (j = 0; j < i; ++j) + rx_queue_clean(dev->data->rx_queues[j]); + return diag; + } + } + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + struct fm10k_tx_queue *txq; + txq = dev->data->tx_queues[i]; + + if (txq->tx_deferred_start) + continue; + diag = fm10k_dev_tx_queue_start(dev, i); + if (diag != 0) { + int j; + for (j = 0; j < dev->data->nb_rx_queues; ++j) + rx_queue_clean(dev->data->rx_queues[j]); + return diag; + } + } + + return 0; +} + +static void +fm10k_dev_stop(struct rte_eth_dev *dev) +{ + int i; + + PMD_INIT_FUNC_TRACE(); + + for (i = 0; i < 
dev->data->nb_tx_queues; i++) + fm10k_dev_tx_queue_stop(dev, i); + + for (i = 0; i < dev->data->nb_rx_queues; i++) + fm10k_dev_rx_queue_stop(dev, i); +} + +static void +fm10k_dev_close(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + /* Stop mailbox service first */ + fm10k_close_mbx_service(hw); + fm10k_dev_stop(dev); + fm10k_stop_hw(hw); +} + static int fm10k_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) @@ -993,8 +1207,18 @@ fm10k_setup_mbx_service(struct fm10k_hw *hw) return hw->mbx.ops.connect(hw, &hw->mbx); } +static void +fm10k_close_mbx_service(struct fm10k_hw *hw) +{ + /* Disconnect from SM for PF device or PF for VF device */ + hw->mbx.ops.disconnect(hw, &hw->mbx); +} + static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_configure = fm10k_dev_configure, + .dev_start = fm10k_dev_start, + .dev_stop = fm10k_dev_stop, + .dev_close = fm10k_dev_close, .stats_get = fm10k_stats_get, .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
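Taken together with the earlier queue hooks, this completes the bring-up sequence an application drives through the generic API. A compact sketch, assuming a single Rx/Tx queue pair on port 0 and an existing mbuf pool; the queue counts and ring sizes are illustrative:

    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    static int
    example_port_bringup(struct rte_mempool *pool)
    {
            struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = ETH_MQ_RX_NONE } };
            int ret;

            ret = rte_eth_dev_configure(0, 1, 1, &port_conf);   /* fm10k_dev_configure */
            if (ret < 0)
                    return ret;
            ret = rte_eth_rx_queue_setup(0, 0, 512, rte_socket_id(), NULL, pool);
            if (ret < 0)
                    return ret;
            ret = rte_eth_tx_queue_setup(0, 0, 512, rte_socket_id(), NULL);
            if (ret < 0)
                    return ret;
            ret = rte_eth_dev_start(0);     /* fm10k_dev_start: hw init + queue start */
            if (ret < 0)
                    return ret;

            /* ... traffic runs here ... */

            rte_eth_dev_stop(0);            /* fm10k_dev_stop */
            rte_eth_dev_close(0);           /* fm10k_dev_close: mbox disconnect, hw stop */
            return 0;
    }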
* [dpdk-dev] [PATCH v3 10/15] fm10k: add receive and tranmit function 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (8 preceding siblings ...) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 09/15] fm10k: add dev start/stop functions Chen Jing D(Mark) @ 2015-02-10 7:02 ` Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 11/15] fm10k: add PF RSS support Chen Jing D(Mark) ` (4 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-10 7:02 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_recv_pkts and fm10k_xmit_pkts functions. 2. Link app function pointer to actual fm10k recv/xmit functions. 3. Change Makefile to compile new file fm10k_rxtx.c Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/Makefile | 1 + lib/librte_pmd_fm10k/fm10k.h | 7 + lib/librte_pmd_fm10k/fm10k_ethdev.c | 2 + lib/librte_pmd_fm10k/fm10k_rxtx.c | 299 +++++++++++++++++++++++++++++++++++ 4 files changed, 309 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c diff --git a/lib/librte_pmd_fm10k/Makefile b/lib/librte_pmd_fm10k/Makefile index 1da84e9..8ab788c 100644 --- a/lib/librte_pmd_fm10k/Makefile +++ b/lib/librte_pmd_fm10k/Makefile @@ -79,6 +79,7 @@ VPATH += $(RTE_SDK)/lib/librte_pmd_fm10k/base # all source are stored in SRCS-y # SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_ethdev.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_rxtx.c SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_pf.c SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_tlv.c diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index be990e5..a9b19cd 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -280,4 +280,11 @@ fm10k_addr_alignment_valid(struct rte_mbuf *mb) return 0; } + +/* Rx and Tx prototypes */ +uint16_t fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); + +uint16_t fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 011b0bc..af68ad7 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1245,6 +1245,8 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, PMD_INIT_FUNC_TRACE(); dev->dev_ops = &fm10k_eth_dev_ops; + dev->rx_pkt_burst = &fm10k_recv_pkts; + dev->tx_pkt_burst = &fm10k_xmit_pkts; /* only initialize in the primary process */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c new file mode 100644 index 0000000..1d95bff --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c @@ -0,0 +1,299 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ +#include <rte_ethdev.h> +#include <rte_common.h> +#include "fm10k.h" +#include "base/fm10k_type.h" + +#ifdef RTE_PMD_PACKET_PREFETCH +#define rte_packet_prefetch(p) rte_prefetch1(p) +#else +#define rte_packet_prefetch(p) do {} while (0) +#endif + +static inline void dump_rxd(union fm10k_rx_desc *rxd) +{ +#ifndef RTE_LIBRTE_FM10K_DEBUG_RX + RTE_SET_USED(rxd); +#endif + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| GLORT | PKT HDR & TYPE |"); + PMD_RX_LOG(DEBUG, "| 0x%08x | 0x%08x |", rxd->d.glort, + rxd->d.data); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| VLAN & LEN | STATUS |"); + PMD_RX_LOG(DEBUG, "| 0x%08x | 0x%08x |", rxd->d.vlan_len, + rxd->d.staterr); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| RESERVED | RSS_HASH |"); + PMD_RX_LOG(DEBUG, "| 0x%08x | 0x%08x |", 0, rxd->d.rss); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| TIME TAG |"); + PMD_RX_LOG(DEBUG, "| 0x%016lx |", rxd->q.timestamp); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); +} + +static inline void +rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d) +{ + uint16_t ptype; + static const uint16_t pt_lut[] = { 0, + PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, + PKT_RX_IPV6_HDR, PKT_RX_IPV6_HDR_EXT, + 0, 0, 0 + }; + + if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK) + m->ol_flags |= PKT_RX_RSS_HASH; + + if ((d->d.staterr & (FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)) == + (FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)) + m->ol_flags |= PKT_RX_IP_CKSUM_BAD; + + if ((d->d.staterr & (FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)) == + (FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)) + m->ol_flags |= PKT_RX_L4_CKSUM_BAD; + + if (d->d.staterr & FM10K_RXD_STATUS_VEXT) + m->ol_flags |= PKT_RX_VLAN_PKT; + + if (d->d.staterr & FM10K_RXD_STATUS_HBO) + m->ol_flags |= PKT_RX_HBUF_OVERFLOW; + + if (d->d.staterr & FM10K_RXD_STATUS_RXE) + m->ol_flags |= PKT_RX_RECIP_ERR; + + ptype = (d->d.data & FM10K_RXD_PKTTYPE_MASK_L3) >> + FM10K_RXD_PKTTYPE_SHIFT; + m->ol_flags |= pt_lut[(uint8_t)ptype]; +} + +uint16_t +fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct rte_mbuf *mbuf; + union fm10k_rx_desc desc; + struct fm10k_rx_queue *q = rx_queue; + uint16_t count = 0; + int alloc = 0; + uint16_t next_dd; + + next_dd = q->next_dd; + + nb_pkts = RTE_MIN(nb_pkts, q->alloc_thresh); + for (count = 0; count < nb_pkts; ++count) { + mbuf = q->sw_ring[next_dd]; 
+ desc = q->hw_ring[next_dd]; + if (!(desc.d.staterr & FM10K_RXD_STATUS_DD)) + break; +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX + dump_rxd(&desc); +#endif + rte_pktmbuf_pkt_len(mbuf) = desc.w.length; + rte_pktmbuf_data_len(mbuf) = desc.w.length; + + mbuf->ol_flags = 0; +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE + rx_desc_to_ol_flags(mbuf, &desc); +#endif + + mbuf->hash.rss = desc.d.rss; + + rx_pkts[count] = mbuf; + if (++next_dd == q->nb_desc) { + next_dd = 0; + alloc = 1; + } + + /* Prefetch next mbuf while processing current one. */ + rte_prefetch0(q->sw_ring[next_dd]); + + /* + * When next RX descriptor is on a cache-line boundary, + * prefetch the next 4 RX descriptors and the next 8 pointers + * to mbufs. + */ + if ((next_dd & 0x3) == 0) { + rte_prefetch0(&q->hw_ring[next_dd]); + rte_prefetch0(&q->sw_ring[next_dd]); + } + } + + q->next_dd = next_dd; + + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { + rte_mempool_get_bulk(q->mp, (void **)&q->sw_ring[q->next_alloc], + q->alloc_thresh); + for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) { + mbuf = q->sw_ring[q->next_alloc]; + + /* setup static mbuf fields */ + fm10k_pktmbuf_reset(mbuf, q->port_id); + + /* write descriptor */ + desc.q.pkt_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + desc.q.hdr_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + q->hw_ring[q->next_alloc] = desc; + } + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger); + q->next_trigger += q->alloc_thresh; + if (q->next_trigger >= q->nb_desc) { + q->next_trigger = q->alloc_thresh - 1; + q->next_alloc = 0; + } + } + + return count; +} + +static inline void tx_free_descriptors(struct fm10k_tx_queue *q) +{ + uint16_t next_rs, count = 0; + + next_rs = fifo_peek(&q->rs_tracker); + if (!(q->hw_ring[next_rs].flags & FM10K_TXD_FLAG_DONE)) + return; + + /* the DONE flag is set on this descriptor so remove the ID + * from the RS bit tracker and free the buffers */ + fifo_remove(&q->rs_tracker); + + /* wrap around? if so, free buffers from last_free up to but NOT + * including nb_desc */ + if (q->last_free > next_rs) { + count = q->nb_desc - q->last_free; + while (q->last_free < q->nb_desc) { + rte_pktmbuf_free_seg(q->sw_ring[q->last_free]); + q->sw_ring[q->last_free] = NULL; + ++q->last_free; + } + q->last_free = 0; + } + + /* adjust free descriptor count before the next loop */ + q->nb_free += count + (next_rs + 1 - q->last_free); + + /* free buffers from last_free, up to and including next_rs */ + while (q->last_free <= next_rs) { + rte_pktmbuf_free_seg(q->sw_ring[q->last_free]); + q->sw_ring[q->last_free] = NULL; + ++q->last_free; + } + + if (q->last_free == q->nb_desc) + q->last_free = 0; +} + +static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb) +{ + uint16_t last_id; + uint8_t flags; + + /* always set the LAST flag on the last descriptor used to + * transmit the packet */ + flags = FM10K_TXD_FLAG_LAST; + last_id = q->next_free + mb->nb_segs - 1; + if (last_id >= q->nb_desc) + last_id = last_id - q->nb_desc; + + /* but only set the RS flag on the last descriptor if rs_thresh + * descriptors will be used since the RS flag was last set */ + if ((q->nb_used + mb->nb_segs) >= q->rs_thresh) { + flags |= FM10K_TXD_FLAG_RS; + fifo_insert(&q->rs_tracker, last_id); + q->nb_used = 0; + } else { + q->nb_used = q->nb_used + mb->nb_segs; + } + + q->hw_ring[last_id].flags = flags; + q->nb_free -= mb->nb_segs; + + /* set checksum flags on first descriptor of packet. 
SCTP checksum + * offload is not supported, but we do not explicitely check for this + * case in favor of greatly simplified processing. */ + if (mb->ol_flags & (PKT_TX_IPV4_CSUM | PKT_TX_L4_MASK)) + q->hw_ring[q->next_free].flags |= FM10K_TXD_FLAG_CSUM; + + /* set vlan if requested */ + if (mb->ol_flags & PKT_TX_VLAN_PKT) + q->hw_ring[q->next_free].vlan = mb->vlan_tci; + + /* fill up the rings */ + for (; mb != NULL; mb = mb->next) { + q->sw_ring[q->next_free] = mb; + q->hw_ring[q->next_free].buffer_addr = + rte_cpu_to_le_64(MBUF_DMA_ADDR(mb)); + q->hw_ring[q->next_free].buflen = + rte_cpu_to_le_16(rte_pktmbuf_data_len(mb)); + if (++q->next_free == q->nb_desc) + q->next_free = 0; + } +} + +uint16_t +fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + struct fm10k_tx_queue *q = tx_queue; + struct rte_mbuf *mb; + uint16_t count; + + for (count = 0; count < nb_pkts; ++count) { + mb = tx_pkts[count]; + + /* running low on descriptors? try to free some... */ + if (q->nb_free < q->free_trigger) + tx_free_descriptors(q); + + /* make sure there are enough free descriptors to transmit the + * entire packet before doing anything */ + if (q->nb_free < mb->nb_segs) + break; + + /* sanity check to make sure the mbuf is valid */ + if ((mb->nb_segs == 0) || + ((mb->nb_segs > 1) && (mb->next == NULL))) + break; + + /* process the packet */ + tx_xmit_pkt(q, mb); + } + + /* update the tail pointer if any packets were processed */ + if (count > 0) + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_free); + + return count; +} -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
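Once dev->rx_pkt_burst and dev->tx_pkt_burst point at these functions, the application reaches them through the burst API. A minimal forwarding loop as a sketch; port 0, queue 0 and the burst size of 32 are assumptions:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static void
    example_fwd_loop(void)
    {
            struct rte_mbuf *bufs[32];
            uint16_t nb_rx, nb_tx, i;

            for (;;) {
                    nb_rx = rte_eth_rx_burst(0, 0, bufs, 32);   /* fm10k_recv_pkts */
                    if (nb_rx == 0)
                            continue;
                    nb_tx = rte_eth_tx_burst(0, 0, bufs, nb_rx); /* fm10k_xmit_pkts */
                    /* free any packets the Tx ring could not accept */
                    for (i = nb_tx; i < nb_rx; i++)
                            rte_pktmbuf_free(bufs[i]);
            }
    }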
* [dpdk-dev] [PATCH v3 11/15] fm10k: add PF RSS support 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (9 preceding siblings ...) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 10/15] fm10k: add receive and tranmit function Chen Jing D(Mark) @ 2015-02-10 7:02 ` Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 12/15] fm10k: Add scatter receive function Chen Jing D(Mark) ` (3 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-10 7:02 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Configure RSS in fm10k_dev_rx_init function. 2. Add fm10k_rss_hash_update and fm10k_rss_hash_conf_get to get and inquery RSS configuration. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 156 +++++++++++++++++++++++++++++++++++ 1 files changed, 156 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index af68ad7..d434b02 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -270,6 +270,78 @@ fm10k_dev_configure(struct rte_eth_dev *dev) return 0; } +static void +fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct rte_eth_conf *dev_conf = &dev->data->dev_conf; + uint32_t mrqc, *key, i, reta, j; + uint64_t hf; + +#define RSS_KEY_SIZE 40 + static uint8_t rss_intel_key[RSS_KEY_SIZE] = { + 0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2, + 0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0, + 0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4, + 0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C, + 0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA, + }; + + if (dev->data->nb_rx_queues == 1 || + dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS || + dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) + return; + + /* random key is rss_intel_key (default) or user provided (rss_key) */ + if (dev_conf->rx_adv_conf.rss_conf.rss_key == NULL) + key = (uint32_t *)rss_intel_key; + else + key = (uint32_t *)dev_conf->rx_adv_conf.rss_conf.rss_key; + + /* Now fill our hash function seeds, 4 bytes at a time */ + for (i = 0; i < RSS_KEY_SIZE / sizeof(*key); ++i) + FM10K_WRITE_REG(hw, FM10K_RSSRK(0, i), key[i]); + + /* + * Fill in redirection table + * The byte-swap is needed because NIC registers are in + * little-endian order. + */ + reta = 0; + for (i = 0, j = 0; i < FM10K_RETA_SIZE; i++, j++) { + if (j == dev->data->nb_rx_queues) + j = 0; + reta = (reta << CHAR_BIT) | j; + if ((i & 3) == 3) + FM10K_WRITE_REG(hw, FM10K_RETA(0, i >> 2), + rte_bswap32(reta)); + } + + /* + * Generate RSS hash based on packet types, TCP/UDP + * port numbers and/or IPv4/v6 src and dst addresses + */ + hf = dev_conf->rx_adv_conf.rss_conf.rss_hf; + mrqc = 0; + mrqc |= (hf & ETH_RSS_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? 
FM10K_MRQC_UDP_IPV6 : 0; + + if (mrqc == 0) { + PMD_INIT_LOG(ERR, "Specified RSS mode 0x%"PRIx64"is not" + "supported", hf); + return; + } + + FM10K_WRITE_REG(hw, FM10K_MRQC(0), mrqc); +} + static int fm10k_dev_tx_init(struct rte_eth_dev *dev) { @@ -359,6 +431,8 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_FLUSH(hw); } + /* Configure RSS if applicable */ + fm10k_dev_mq_rx_configure(dev); return 0; } @@ -1165,6 +1239,86 @@ fm10k_reta_query(struct rte_eth_dev *dev, return 0; } +static int +fm10k_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t *key = (uint32_t *)rss_conf->rss_key; + uint32_t mrqc; + uint64_t hf = rss_conf->rss_hf; + int i; + + PMD_INIT_FUNC_TRACE(); + + if (rss_conf->rss_key_len < FM10K_RSSRK_SIZE * + FM10K_RSSRK_ENTRIES_PER_REG) + return -EINVAL; + + if (hf == 0) + return -EINVAL; + + mrqc = 0; + mrqc |= (hf & ETH_RSS_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0; + + /* If the mapping doesn't fit any supported, return */ + if (mrqc == 0) + return -EINVAL; + + if (key != NULL) + for (i = 0; i < FM10K_RSSRK_SIZE; ++i) + FM10K_WRITE_REG(hw, FM10K_RSSRK(0, i), key[i]); + + FM10K_WRITE_REG(hw, FM10K_MRQC(0), mrqc); + + return 0; +} + +static int +fm10k_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t *key = (uint32_t *)rss_conf->rss_key; + uint32_t mrqc; + uint64_t hf; + int i; + + PMD_INIT_FUNC_TRACE(); + + if (rss_conf->rss_key_len < FM10K_RSSRK_SIZE * + FM10K_RSSRK_ENTRIES_PER_REG) + return -EINVAL; + + if (key != NULL) + for (i = 0; i < FM10K_RSSRK_SIZE; ++i) + key[i] = FM10K_READ_REG(hw, FM10K_RSSRK(0, i)); + + mrqc = FM10K_READ_REG(hw, FM10K_MRQC(0)); + hf = 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? ETH_RSS_IPV4_TCP : 0; + hf |= (mrqc & FM10K_MRQC_IPV4) ? ETH_RSS_IPV4 : 0; + hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6 : 0; + hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6_EX : 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP : 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP_EX : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? ETH_RSS_IPV4_UDP : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP_EX : 0; + + rss_conf->rss_hf = hf; + + return 0; +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -1233,6 +1387,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .tx_queue_release = fm10k_tx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, + .rss_hash_update = fm10k_rss_hash_update, + .rss_hash_conf_get = fm10k_rss_hash_conf_get, }; static int -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
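Applications normally drive this through rte_eth_dev_rss_hash_update(). One detail worth noting in the code above: the key-length check runs before the key pointer is examined, so rss_key_len has to cover the full key, which appears to be 40 bytes (RSS_KEY_SIZE in fm10k_dev_mq_rx_configure), even when only the hash-field selection changes. A hedged sketch; the key bytes and field selection are illustrative:

    #include <rte_ethdev.h>

    static int
    example_rss_hash_update(uint8_t port_id)
    {
            /* example key only; remaining bytes stay zero-initialized */
            static uint8_t key[40] = { 0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2 };
            struct rte_eth_rss_conf rss_conf = {
                    .rss_key     = key,
                    .rss_key_len = sizeof(key),
                    .rss_hf      = ETH_RSS_IPV4 | ETH_RSS_IPV4_TCP | ETH_RSS_IPV4_UDP,
            };

            /* fm10k_rss_hash_update() rewrites FM10K_RSSRK with the key and maps
             * the rss_hf bits into FM10K_MRQC(0) */
            return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
    }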
* [dpdk-dev] [PATCH v3 12/15] fm10k: Add scatter receive function 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (10 preceding siblings ...) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 11/15] fm10k: add PF RSS support Chen Jing D(Mark) @ 2015-02-10 7:02 ` Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 13/15] fm10k: add function to set vlan Chen Jing D(Mark) ` (2 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-10 7:02 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_recv_scattered_pkts function to receive jumbo frame and multi-segment packets. 2. Configure correct receive function in rx_init and dev_init. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k.h | 3 + lib/librte_pmd_fm10k/fm10k_ethdev.c | 15 ++++ lib/librte_pmd_fm10k/fm10k_rxtx.c | 128 +++++++++++++++++++++++++++++++++++ 3 files changed, 146 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index a9b19cd..8741936 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -285,6 +285,9 @@ fm10k_addr_alignment_valid(struct rte_mbuf *mb) uint16_t fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +uint16_t fm10k_recv_scattered_pkts(void *rx_queue, + struct rte_mbuf **rx_pkts, uint16_t nb_pkts); + uint16_t fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index d434b02..a2fcd63 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -423,6 +423,13 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_REG(hw, FM10K_SRRCTL(i), buf_size >> FM10K_SRRCTL_BSIZEPKT_SHIFT); + /* It adds dual VLAN length for supporting dual VLAN */ + if ((dev->data->dev_conf.rxmode.max_rx_pkt_len + + 2 * FM10K_VLAN_TAG_SIZE) > buf_size){ + dev->data->scattered_rx = 1; + dev->rx_pkt_burst = fm10k_recv_scattered_pkts; + } + /* Enable drop on empty, it's RO for VF */ if (hw->mac.type == fm10k_mac_pf && rxq->drop_en) rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY; @@ -431,6 +438,11 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_FLUSH(hw); } + if (dev->data->dev_conf.rxmode.enable_scatter) { + dev->rx_pkt_burst = fm10k_recv_scattered_pkts; + dev->data->scattered_rx = 1; + } + /* Configure RSS if applicable */ fm10k_dev_mq_rx_configure(dev); return 0; @@ -1404,6 +1416,9 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, dev->rx_pkt_burst = &fm10k_recv_pkts; dev->tx_pkt_burst = &fm10k_xmit_pkts; + if (dev->data->scattered_rx) + dev->rx_pkt_burst = &fm10k_recv_scattered_pkts; + /* only initialize in the primary process */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c index 1d95bff..7591a99 100644 --- a/lib/librte_pmd_fm10k/fm10k_rxtx.c +++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c @@ -177,6 +177,134 @@ fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, return count; } +uint16_t +fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct rte_mbuf *mbuf; + union fm10k_rx_desc desc; + struct fm10k_rx_queue *q = rx_queue; + uint16_t count = 0; + uint16_t nb_rcv, nb_seg; + int alloc = 0; + uint16_t next_dd; + struct 
rte_mbuf *first_seg = q->pkt_first_seg; + struct rte_mbuf *last_seg = q->pkt_last_seg; + + next_dd = q->next_dd; + nb_rcv = 0; + + nb_seg = RTE_MIN(nb_pkts, q->alloc_thresh); + for (count = 0; count < nb_seg; count++) { + mbuf = q->sw_ring[next_dd]; + desc = q->hw_ring[next_dd]; + if (!(desc.d.staterr & FM10K_RXD_STATUS_DD)) + break; +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX + dump_rxd(&desc); +#endif + + if (++next_dd == q->nb_desc) { + next_dd = 0; + alloc = 1; + } + + /* Prefetch next mbuf while processing current one. */ + rte_prefetch0(q->sw_ring[next_dd]); + + /* + * When next RX descriptor is on a cache-line boundary, + * prefetch the next 4 RX descriptors and the next 8 pointers + * to mbufs. + */ + if ((next_dd & 0x3) == 0) { + rte_prefetch0(&q->hw_ring[next_dd]); + rte_prefetch0(&q->sw_ring[next_dd]); + } + + /* Fill data length */ + rte_pktmbuf_data_len(mbuf) = desc.w.length; + + /* + * If this is the first buffer of the received packet, + * set the pointer to the first mbuf of the packet and + * initialize its context. + * Otherwise, update the total length and the number of segments + * of the current scattered packet, and update the pointer to + * the last mbuf of the current packet. + */ + if (!first_seg) { + first_seg = mbuf; + first_seg->pkt_len = desc.w.length; + } else { + first_seg->pkt_len = + (uint16_t)(first_seg->pkt_len + + rte_pktmbuf_data_len(mbuf)); + first_seg->nb_segs++; + last_seg->next = mbuf; + } + + /* + * If this is not the last buffer of the received packet, + * update the pointer to the last mbuf of the current scattered + * packet and continue to parse the RX ring. + */ + if (!(desc.d.staterr & FM10K_RXD_STATUS_EOP)) { + last_seg = mbuf; + continue; + } + + first_seg->ol_flags = 0; +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE + rx_desc_to_ol_flags(first_seg, &desc); +#endif + first_seg->hash.rss = desc.d.rss; + + /* Prefetch data of first segment, if configured to do so. */ + rte_packet_prefetch((char *)first_seg->buf_addr + + first_seg->data_off); + + /* + * Store the mbuf address into the next entry of the array + * of returned packets. + */ + rx_pkts[nb_rcv++] = first_seg; + + /* + * Setup receipt context for a new packet. + */ + first_seg = NULL; + } + + q->next_dd = next_dd; + q->pkt_first_seg = first_seg; + q->pkt_last_seg = last_seg; + + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { + rte_mempool_get_bulk(q->mp, (void **)&q->sw_ring[q->next_alloc], + q->alloc_thresh); + for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) { + mbuf = q->sw_ring[q->next_alloc]; + + /* setup static mbuf fields */ + fm10k_pktmbuf_reset(mbuf, q->port_id); + + /* write descriptor */ + desc.q.pkt_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + desc.q.hdr_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + q->hw_ring[q->next_alloc] = desc; + } + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger); + q->next_trigger += q->alloc_thresh; + if (q->next_trigger >= q->nb_desc) { + q->next_trigger = q->alloc_thresh - 1; + q->next_alloc = 0; + } + } + + return nb_rcv; +} + static inline void tx_free_descriptors(struct fm10k_tx_queue *q) { uint16_t next_rs, count = 0; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
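The scattered path is selected either automatically (when max_rx_pkt_len plus two VLAN tags exceeds the per-buffer size) or explicitly by the application. A hedged sketch of the explicit route follows; it assumes the rxmode field names of this DPDK release, and the 9000-byte frame size is arbitrary.

#include <string.h>
#include <rte_ethdev.h>

static int
example_enable_scatter(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.jumbo_frame = 1;        /* accept frames above the default max */
	conf.rxmode.max_rx_pkt_len = 9000;  /* larger than a single mbuf buffer */
	conf.rxmode.enable_scatter = 1;     /* selects fm10k_recv_scattered_pkts */

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}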
* [dpdk-dev] [PATCH v3 13/15] fm10k: add function to set vlan 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (11 preceding siblings ...) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 12/15] fm10k: Add scatter receive function Chen Jing D(Mark) @ 2015-02-10 7:02 ` Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 14/15] fm10k: Add SRIOV-VF support Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 15/15] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-10 7:02 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_vlan_filter_set to set vlan. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 15 +++++++++++++++ 1 files changed, 15 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index a2fcd63..1cf6def 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -787,6 +787,20 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev, } +static int +fm10k_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + /* @todo - add support for the VF */ + if (hw->mac.type != fm10k_mac_pf) + return -ENOTSUP; + + return fm10k_update_vlan(hw, vlan_id, 0, on); +} + static inline int check_nb_desc(uint16_t min, uint16_t max, uint16_t mult, uint16_t request) { @@ -1389,6 +1403,7 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, .dev_infos_get = fm10k_dev_infos_get, + .vlan_filter_set = fm10k_vlan_filter_set, .rx_queue_start = fm10k_dev_rx_queue_start, .rx_queue_stop = fm10k_dev_rx_queue_stop, .tx_queue_start = fm10k_dev_tx_queue_start, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
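From the application side the new callback is reached through rte_eth_dev_vlan_filter(). A minimal usage sketch, assuming this era's ethdev layer (which rejects the call unless hw_vlan_filter was enabled in rxmode at configure time); the helper name and VLAN ID are illustrative:

#include <rte_ethdev.h>

static int
example_vlan_filter(uint8_t port_id)
{
	int ret;

	/* add VLAN 100; this lands in fm10k_vlan_filter_set(dev, 100, 1) and is
	 * rejected with -ENOTSUP on a VF until VF support is added */
	ret = rte_eth_dev_vlan_filter(port_id, 100, 1);
	if (ret < 0)
		return ret;

	/* remove it again */
	return rte_eth_dev_vlan_filter(port_id, 100, 0);
}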
* [dpdk-dev] [PATCH v3 14/15] fm10k: Add SRIOV-VF support 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (12 preceding siblings ...) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 13/15] fm10k: add function to set vlan Chen Jing D(Mark) @ 2015-02-10 7:02 ` Chen Jing D(Mark) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 15/15] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-10 7:02 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> The fm10k pmd driver supports both PF and VF devices with a single copy of code. This works because the NIC maps registers that serve the same function in PF and VF to the same PCI I/O addresses: each PF/VF driver accesses its own registers through the same offsets, and the hardware routes the access to the correct unit. For functionality that is unique to the PF, the driver checks the current MAC type and behaves accordingly. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 1cf6def..8969d84 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1563,6 +1563,7 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, */ static struct rte_pci_id pci_id_fm10k_map[] = { #define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, +#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, #include "rte_pci_dev_ids.h" { .vendor_id = 0, /* sentinel */ }, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
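To make the one-line change above concrete: with both DECL macros defined before the include, the device table effectively picks up the PF and VF IDs from rte_pci_dev_ids.h, so a single eth_driver matches both functions. The snippet below is an illustrative sketch of that expanded result (literal IDs shown instead of the FM10K_DEV_ID_* names), not literal code from the patch.

#include <rte_pci.h>

/* what pci_id_fm10k_map effectively contains once both DECL macros expand */
static struct rte_pci_id example_fm10k_ids[] = {
	{ RTE_PCI_DEVICE(0x8086, 0x15A4) }, /* FM10K_DEV_ID_PF */
	{ RTE_PCI_DEVICE(0x8086, 0x15A5) }, /* FM10K_DEV_ID_VF */
	{ .vendor_id = 0, /* sentinel */ },
};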
* [dpdk-dev] [PATCH v3 15/15] fm10k: add PF and VF interrupt handling function 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (13 preceding siblings ...) 2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 14/15] fm10k: Add SRIOV-VF support Chen Jing D(Mark) @ 2015-02-10 7:02 ` Chen Jing D(Mark) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-10 7:02 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add functions to enable PF/VF interrupt. 2. Add function to process error message passed from interrupt. 2. Add 2 interrupt handling functions, one for PF and one for VF. 2. Enable interrupt after completing initialization of NIC. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 268 +++++++++++++++++++++++++++++++++++ 1 files changed, 268 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 8969d84..37b417c 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1345,6 +1345,256 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev, return 0; } +static void +fm10k_dev_enable_intr_pf(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t int_map = FM10K_INT_MAP_IMMEDIATE; + + /* Bind all local non-queue interrupt to vector 0 */ + int_map |= 0; + + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_Mailbox), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_PCIeFault), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SwitchUpDown), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SwitchEvent), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SRAM), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_VFLR), int_map); + + /* Enable misc causes */ + FM10K_WRITE_REG(hw, FM10K_EIMR, FM10K_EIMR_ENABLE(PCA_FAULT) | + FM10K_EIMR_ENABLE(THI_FAULT) | + FM10K_EIMR_ENABLE(FUM_FAULT) | + FM10K_EIMR_ENABLE(MAILBOX) | + FM10K_EIMR_ENABLE(SWITCHREADY) | + FM10K_EIMR_ENABLE(SWITCHNOTREADY) | + FM10K_EIMR_ENABLE(SRAMERROR) | + FM10K_EIMR_ENABLE(VFLR)); + + /* Enable ITR 0 */ + FM10K_WRITE_REG(hw, FM10K_ITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + FM10K_WRITE_FLUSH(hw); +} + +static void +fm10k_dev_enable_intr_vf(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t int_map = FM10K_INT_MAP_IMMEDIATE; + + /* Bind all local non-queue interrupt to vector 0 */ + int_map |= 0; + + /* Only INT 0 availiable, other 15 are reserved. 
*/ + FM10K_WRITE_REG(hw, FM10K_VFINT_MAP, int_map); + + /* Enable ITR 0 */ + FM10K_WRITE_REG(hw, FM10K_VFITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + FM10K_WRITE_FLUSH(hw); +} + +static int +fm10k_dev_handle_fault(struct fm10k_hw *hw, uint32_t eicr) +{ + struct fm10k_fault fault; + int err; + const char *estr = "Unknown error"; + + /* Process PCA fault */ + if (eicr & FM10K_EIMR_PCA_FAULT) { + err = fm10k_get_fault(hw, FM10K_PCA_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case PCA_NO_FAULT: + estr = "PCA_NO_FAULT"; break; + case PCA_UNMAPPED_ADDR: + estr = "PCA_UNMAPPED_ADDR"; break; + case PCA_BAD_QACCESS_PF: + estr = "PCA_BAD_QACCESS_PF"; break; + case PCA_BAD_QACCESS_VF: + estr = "PCA_BAD_QACCESS_VF"; break; + case PCA_MALICIOUS_REQ: + estr = "PCA_MALICIOUS_REQ"; break; + case PCA_POISONED_TLP: + estr = "PCA_POISONED_TLP"; break; + case PCA_TLP_ABORT: + estr = "PCA_TLP_ABORT"; break; + default: + goto error; + } + PMD_INIT_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIx64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + /* Process THI fault */ + if (eicr & FM10K_EIMR_THI_FAULT) { + err = fm10k_get_fault(hw, FM10K_THI_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case THI_NO_FAULT: + estr = "THI_NO_FAULT"; break; + case THI_MAL_DIS_Q_FAULT: + estr = "THI_MAL_DIS_Q_FAULT"; break; + default: + goto error; + } + PMD_INIT_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIx64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + /* Process FUM fault */ + if (eicr & FM10K_EIMR_FUM_FAULT) { + err = fm10k_get_fault(hw, FM10K_FUM_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case FUM_NO_FAULT: + estr = "FUM_NO_FAULT"; break; + case FUM_UNMAPPED_ADDR: + estr = "FUM_UNMAPPED_ADDR"; break; + case FUM_POISONED_TLP: + estr = "FUM_POISONED_TLP"; break; + case FUM_BAD_VF_QACCESS: + estr = "FUM_BAD_VF_QACCESS"; break; + case FUM_ADD_DECODE_ERR: + estr = "FUM_ADD_DECODE_ERR"; break; + case FUM_RO_ERROR: + estr = "FUM_RO_ERROR"; break; + case FUM_QPRC_CRC_ERROR: + estr = "FUM_QPRC_CRC_ERROR"; break; + case FUM_CSR_TIMEOUT: + estr = "FUM_CSR_TIMEOUT"; break; + case FUM_INVALID_TYPE: + estr = "FUM_INVALID_TYPE"; break; + case FUM_INVALID_LENGTH: + estr = "FUM_INVALID_LENGTH"; break; + case FUM_INVALID_BE: + estr = "FUM_INVALID_BE"; break; + case FUM_INVALID_ALIGN: + estr = "FUM_INVALID_ALIGN"; break; + default: + goto error; + } + PMD_INIT_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIx64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + if (estr) + return 0; + return 0; +error: + PMD_INIT_LOG(ERR, "Failed to handle fault event."); + return err; +} + +/** + * PF interrupt handler triggered by NIC for handling specific interrupt. + * + * @param handle + * Pointer to interrupt handle. + * @param param + * The address of parameter (struct rte_eth_dev *) regsitered before. 
+ * + * @return + * void + */ +static void +fm10k_dev_interrupt_handler_pf( + __rte_unused struct rte_intr_handle *handle, + void *param) +{ + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t cause, status; + + if (hw->mac.type != fm10k_mac_pf) + return; + + cause = FM10K_READ_REG(hw, FM10K_EICR); + + /* Handle PCI fault cases */ + if (cause & FM10K_EICR_FAULT_MASK) { + PMD_INIT_LOG(ERR, "INT: find fault!"); + fm10k_dev_handle_fault(hw, cause); + } + + /* Handle switch up/down */ + if (cause & FM10K_EICR_SWITCHNOTREADY) + PMD_INIT_LOG(ERR, "INT: Switch is not ready"); + + if (cause & FM10K_EICR_SWITCHREADY) + PMD_INIT_LOG(INFO, "INT: Switch is ready"); + + /* Handle mailbox message */ + fm10k_mbx_lock(hw); + hw->mbx.ops.process(hw, &hw->mbx); + fm10k_mbx_unlock(hw); + + /* Handle SRAM error */ + if (cause & FM10K_EICR_SRAMERROR) { + PMD_INIT_LOG(ERR, "INT: SRAM error on PEP"); + + status = FM10K_READ_REG(hw, FM10K_SRAM_IP); + /* Write to clear pending bits */ + FM10K_WRITE_REG(hw, FM10K_SRAM_IP, status); + + /* Todo: print out error message after shared code updates */ + } + + /* Clear these 3 events if having any */ + cause &= FM10K_EICR_SWITCHNOTREADY | FM10K_EICR_MAILBOX | + FM10K_EICR_SWITCHREADY; + if (cause) + FM10K_WRITE_REG(hw, FM10K_EICR, cause); + + /* Re-enable interrupt from device side */ + FM10K_WRITE_REG(hw, FM10K_ITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + /* Re-enable interrupt from host side */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); +} + +/** + * VF interrupt handler triggered by NIC for handling specific interrupt. + * + * @param handle + * Pointer to interrupt handle. + * @param param + * The address of parameter (struct rte_eth_dev *) regsitered before. + * + * @return + * void + */ +static void +fm10k_dev_interrupt_handler_vf( + __rte_unused struct rte_intr_handle *handle, + void *param) +{ + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + if (hw->mac.type != fm10k_mac_vf) + return; + + /* Handle mailbox message if lock is acquired */ + fm10k_mbx_lock(hw); + hw->mbx.ops.process(hw, &hw->mbx); + fm10k_mbx_unlock(hw); + + /* Re-enable interrupt from device side */ + FM10K_WRITE_REG(hw, FM10K_VFITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + /* Re-enable interrupt from host side */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -1525,6 +1775,21 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, return -EIO; } + /*PF/VF has different interrupt handling mechanism */ + if (hw->mac.type == fm10k_mac_pf) { + /* register callback func to eal lib */ + rte_intr_callback_register(&(dev->pci_dev->intr_handle), + fm10k_dev_interrupt_handler_pf, (void *)dev); + + /* enable MISC interrupt */ + fm10k_dev_enable_intr_pf(dev); + } else { /* VF */ + rte_intr_callback_register(&(dev->pci_dev->intr_handle), + fm10k_dev_interrupt_handler_vf, (void *)dev); + + fm10k_dev_enable_intr_vf(dev); + } + /* * Below function will trigger operations on mailbox, acquire lock to * avoid race condition from interrupt handler. 
Operations on mailbox @@ -1554,6 +1819,9 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, fm10k_mbx_unlock(hw); + /* enable uio intr after callback registered */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); + return 0; } -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
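The registration calls above follow the standard EAL interrupt pattern: register a callback on the port's intr_handle, unmask the cause in hardware, then enable the interrupt on the host side. A stripped-down, hypothetical version of that pattern (not the driver's actual PF/VF handlers) is sketched below.

#include <rte_common.h>
#include <rte_interrupts.h>
#include <rte_ethdev.h>

static void
example_intr_cb(__rte_unused struct rte_intr_handle *handle, void *param)
{
	struct rte_eth_dev *dev = param;

	/* service the device here (mailbox, faults, ...), then re-arm the
	 * interrupt from the host side exactly as the PF/VF handlers do */
	rte_intr_enable(&dev->pci_dev->intr_handle);
}

static void
example_intr_setup(struct rte_eth_dev *dev)
{
	rte_intr_callback_register(&dev->pci_dev->intr_handle,
				   example_intr_cb, dev);
	rte_intr_enable(&dev->pci_dev->intr_handle);
}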
* [dpdk-dev] [PATCH v2 02/15] fm10k: add fm10k device id 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 01/15] fm10k: add base driver Chen Jing D(Mark) @ 2015-02-04 10:40 ` Chen Jing D(Mark) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 03/15] fm10k: register fm10k pmd PF driver Chen Jing D(Mark) ` (12 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-04 10:40 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k device ID list into rte_pci_dev_ids.h. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 ++++++++++++++++++++++ 1 files changed, 22 insertions(+), 0 deletions(-) diff --git a/lib/librte_eal/common/include/rte_pci_dev_ids.h b/lib/librte_eal/common/include/rte_pci_dev_ids.h index c922de9..f54800e 100644 --- a/lib/librte_eal/common/include/rte_pci_dev_ids.h +++ b/lib/librte_eal/common/include/rte_pci_dev_ids.h @@ -132,6 +132,14 @@ #define RTE_PCI_DEV_ID_DECL_VMXNET3(vend, dev) #endif +#ifndef RTE_PCI_DEV_ID_DECL_FM10K +#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) +#endif + +#ifndef RTE_PCI_DEV_ID_DECL_FM10KVF +#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) +#endif + #ifndef PCI_VENDOR_ID_INTEL /** Vendor ID used by Intel devices */ #define PCI_VENDOR_ID_INTEL 0x8086 @@ -474,6 +482,12 @@ RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_QSFP_B) RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_QSFP_C) RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_10G_BASE_T) +/*************** Physical FM10K devices from fm10k_type.h ***************/ + +#define FM10K_DEV_ID_PF 0x15A4 + +RTE_PCI_DEV_ID_DECL_FM10K(PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_PF) + /****************** Virtual IGB devices from e1000_hw.h ******************/ #define E1000_DEV_ID_82576_VF 0x10CA @@ -526,6 +540,12 @@ RTE_PCI_DEV_ID_DECL_VIRTIO(PCI_VENDOR_ID_QUMRANET, QUMRANET_DEV_ID_VIRTIO) RTE_PCI_DEV_ID_DECL_VMXNET3(PCI_VENDOR_ID_VMWARE, VMWARE_DEV_ID_VMXNET3) +/*************** Virtual FM10K devices from fm10k_type.h ***************/ + +#define FM10K_DEV_ID_VF 0x15A5 + +RTE_PCI_DEV_ID_DECL_FM10KVF(PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_VF) + /* * Undef all RTE_PCI_DEV_ID_DECL_* here. */ @@ -538,3 +558,5 @@ RTE_PCI_DEV_ID_DECL_VMXNET3(PCI_VENDOR_ID_VMWARE, VMWARE_DEV_ID_VMXNET3) #undef RTE_PCI_DEV_ID_DECL_I40EVF #undef RTE_PCI_DEV_ID_DECL_VIRTIO #undef RTE_PCI_DEV_ID_DECL_VMXNET3 +#undef RTE_PCI_DEV_ID_DECL_FM10K +#undef RTE_PCI_DEV_ID_DECL_FM10KVF \ No newline at end of file -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
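The header works on an opt-in basis: every RTE_PCI_DEV_ID_DECL_* macro a consumer leaves undefined falls back to the empty default added above, and all DECL macros are #undef'd again at the end of the file. A sketch of how a driver is expected to consume it (hypothetical consumer code, mirroring what the fm10k ethdev file does):

#include <rte_pci.h>

static struct rte_pci_id example_id_table[] = {
/* only the fm10k entries expand; every other DECL macro stays empty */
#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) { RTE_PCI_DEVICE(vend, dev) },
#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) { RTE_PCI_DEVICE(vend, dev) },
#include "rte_pci_dev_ids.h"
	{ .vendor_id = 0, /* sentinel */ },
};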
* [dpdk-dev] [PATCH v2 03/15] fm10k: register fm10k pmd PF driver 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 01/15] fm10k: add base driver Chen Jing D(Mark) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 02/15] fm10k: add fm10k device id Chen Jing D(Mark) @ 2015-02-04 10:40 ` Chen Jing D(Mark) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 04/15] Change config files to add fm10k into compile Chen Jing D(Mark) ` (11 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-04 10:40 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add init function to scan and initialize fm10k PF device. 2. Add implementation to register fm10k pmd PF driver. 3. Add 3 functions fm10k_dev_configure, fm10k_stats_get and fm10k_stats_get. 4. Add fm10k.h to define macros and basic data structure. 5. Add fm10k_logs.h to control log message output. 6. Add Makefile. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/Makefile | 95 ++++++++++ lib/librte_pmd_fm10k/fm10k.h | 224 +++++++++++++++++++++++ lib/librte_pmd_fm10k/fm10k_ethdev.c | 343 +++++++++++++++++++++++++++++++++++ lib/librte_pmd_fm10k/fm10k_logs.h | 78 ++++++++ 4 files changed, 740 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/Makefile create mode 100644 lib/librte_pmd_fm10k/fm10k.h create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h diff --git a/lib/librte_pmd_fm10k/Makefile b/lib/librte_pmd_fm10k/Makefile new file mode 100644 index 0000000..1da84e9 --- /dev/null +++ b/lib/librte_pmd_fm10k/Makefile @@ -0,0 +1,95 @@ +# BSD LICENSE +# +# Copyright(c) 2013-2015 Intel Corporation. All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# +# library name +# +LIB = librte_pmd_fm10k.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +ifeq ($(CC), icc) +# +# CFLAGS for icc +# +CFLAGS_BASE_DRIVER = -wd174 -wd593 -wd869 -wd981 -wd2259 + +else ifeq ($(CC), clang) +# +## CFLAGS for clang +# +CFLAGS_BASE_DRIVER = -Wno-unused-parameter -Wno-unused-value +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing -Wno-format-extra-args +CFLAGS_BASE_DRIVER += -Wno-unused-variable -Wno-unused-but-set-variable +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers + +else +# +# CFLAGS for gcc +# +ifneq ($(shell test $(GCC_MAJOR_VERSION) -le 4 -a $(GCC_MINOR_VERSION) -le 3 && echo 1), 1) +CFLAGS += -Wno-deprecated +endif +CFLAGS_BASE_DRIVER = -Wno-unused-parameter -Wno-unused-value +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing -Wno-format-extra-args +CFLAGS_BASE_DRIVER += -Wno-unused-variable -Wno-unused-but-set-variable +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers +endif + +# +# Add extra flags for base driver source files to disable warnings in them +# +BASE_DRIVER_OBJS=$(patsubst %.c,%.o,$(notdir $(wildcard $(RTE_SDK)/lib/librte_pmd_fm10k/base/*.c))) +$(foreach obj, $(BASE_DRIVER_OBJS), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER))) + +VPATH += $(RTE_SDK)/lib/librte_pmd_fm10k/base + +# +# all source are stored in SRCS-y +# +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_ethdev.c + +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_pf.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_tlv.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_common.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_mbx.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_vf.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_api.c + +# this lib depends upon: +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_eal lib/librte_ether +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_mempool lib/librte_mbuf +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_net lib/librte_malloc + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h new file mode 100644 index 0000000..1468040 --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -0,0 +1,224 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _FM10K_H_ +#define _FM10K_H_ + +#include <stdint.h> +#include <rte_mbuf.h> +#include <rte_mempool.h> +#include <rte_malloc.h> +#include <rte_spinlock.h> +#include "fm10k_logs.h" +#include "base/fm10k_type.h" + +/* descriptor ring base addresses must be aligned to the following */ +#define FM10K_ALIGN_RX_DESC 128 +#define FM10K_ALIGN_TX_DESC 128 + +/* The maximum packet size that FM10K supports */ +#define FM10K_MAX_PKT_SIZE (15 * 1024) + +/* Minimum size of RX buffer FM10K supported */ +#define FM10K_MIN_RX_BUF_SIZE 256 + +/* The maximum of SRIOV VFs per port supported */ +#define FM10K_MAX_VF_NUM 64 + +/* number of descriptors must be a multiple of the following */ +#define FM10K_MULT_RX_DESC FM10K_REQ_RX_DESCRIPTOR_MULTIPLE +#define FM10K_MULT_TX_DESC FM10K_REQ_TX_DESCRIPTOR_MULTIPLE + +/* maximum size of descriptor rings */ +#define FM10K_MAX_RX_RING_SZ (512 * 1024) +#define FM10K_MAX_TX_RING_SZ (512 * 1024) + +/* minimum and maximum number of descriptors in a ring */ +#define FM10K_MIN_RX_DESC 32 +#define FM10K_MIN_TX_DESC 32 +#define FM10K_MAX_RX_DESC (FM10K_MAX_RX_RING_SZ / sizeof(union fm10k_rx_desc)) +#define FM10K_MAX_TX_DESC (FM10K_MAX_TX_RING_SZ / sizeof(struct fm10k_tx_desc)) + +/* + * byte aligment for HW RX data buffer + * Datasheet requires RX buffer addresses shall either be 512-byte aligned or + * be 8-byte aligned but without crossing host memory pages (4KB alignment + * boundaries). Satisfy first option. + */ +#define FM10K_RX_DATABUF_ALIGN 512 + +/* + * threshold default, min, max, and divisor constraints + * the configured values must satisfy the following: + * MIN <= value <= MAX + * DIV % value == 0 + */ +#define FM10K_RX_FREE_THRESH_DEFAULT(rxq) 32 +#define FM10K_RX_FREE_THRESH_MIN(rxq) 1 +#define FM10K_RX_FREE_THRESH_MAX(rxq) ((rxq)->nb_desc - 1) +#define FM10K_RX_FREE_THRESH_DIV(rxq) ((rxq)->nb_desc) + +#define FM10K_TX_FREE_THRESH_DEFAULT(txq) 32 +#define FM10K_TX_FREE_THRESH_MIN(txq) 1 +#define FM10K_TX_FREE_THRESH_MAX(txq) ((txq)->nb_desc - 3) +#define FM10K_TX_FREE_THRESH_DIV(txq) 0 + +#define FM10K_DEFAULT_RX_PTHRESH 8 +#define FM10K_DEFAULT_RX_HTHRESH 8 +#define FM10K_DEFAULT_RX_WTHRESH 0 + +#define FM10K_DEFAULT_TX_PTHRESH 32 +#define FM10K_DEFAULT_TX_HTHRESH 0 +#define FM10K_DEFAULT_TX_WTHRESH 0 + +#define FM10K_TX_RS_THRESH_DEFAULT(txq) 32 +#define FM10K_TX_RS_THRESH_MIN(txq) 1 +#define FM10K_TX_RS_THRESH_MAX(txq) \ + RTE_MIN(((txq)->nb_desc - 2), (txq)->free_thresh) +#define FM10K_TX_RS_THRESH_DIV(txq) ((txq)->nb_desc) + +#define FM10K_VLAN_TAG_SIZE 4 + +struct fm10k_dev_info { + volatile uint32_t enable; + volatile uint32_t glort; + /* Protect the mailbox to avoid race condition */ + rte_spinlock_t mbx_lock; +}; + +/* + * Structure to store private data for each driver instance. 
+ */ +struct fm10k_adapter { + struct fm10k_hw hw; + struct fm10k_hw_stats stats; + struct fm10k_dev_info info; +}; + +#define FM10K_DEV_PRIVATE_TO_HW(adapter) \ + (&((struct fm10k_adapter *)adapter)->hw) + +#define FM10K_DEV_PRIVATE_TO_STATS(adapter) \ + (&((struct fm10k_adapter *)adapter)->stats) + +#define FM10K_DEV_PRIVATE_TO_INFO(adapter) \ + (&((struct fm10k_adapter *)adapter)->info) + +#define FM10K_DEV_PRIVATE_TO_MBXLOCK(adapter) \ + (&(((struct fm10k_adapter *)adapter)->info.mbx_lock)) + +struct fm10k_rx_queue { + struct rte_mempool *mp; + struct rte_mbuf **sw_ring; + volatile union fm10k_rx_desc *hw_ring; + struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */ + struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */ + uint64_t hw_ring_phys_addr; + uint16_t next_dd; + uint16_t next_alloc; + uint16_t next_trigger; + uint16_t alloc_thresh; + volatile uint32_t *tail_ptr; + uint16_t nb_desc; + uint16_t queue_id; + uint8_t port_id; + uint8_t drop_en; + uint8_t rx_deferred_start; /**< don't start this queue in dev start. */ +}; + +/* + * a FIFO is used to track which descriptors have their RS bit set for Tx + * queues which are configured to allow multiple descriptors per packet + */ +struct fifo { + uint16_t *list; + uint16_t *head; + uint16_t *tail; + uint16_t *endp; +}; + +struct fm10k_tx_queue { + struct rte_mbuf **sw_ring; + struct fm10k_tx_desc *hw_ring; + uint64_t hw_ring_phys_addr; + struct fifo rs_tracker; + uint16_t last_free; + uint16_t next_free; + uint16_t nb_free; + uint16_t nb_used; + uint16_t free_trigger; + uint16_t free_thresh; + uint16_t rs_thresh; + volatile uint32_t *tail_ptr; + uint16_t nb_desc; + uint8_t port_id; + uint8_t tx_deferred_start; /** < don't start this queue in dev start. */ + uint16_t queue_id; +}; + +#define MBUF_DMA_ADDR(mb) \ + ((uint64_t) ((mb)->buf_physaddr + (mb)->data_off)) + +/* enforce 512B alignment on default Rx DMA addresses */ +#define MBUF_DMA_ADDR_DEFAULT(mb) \ + ((uint64_t) RTE_ALIGN(((mb)->buf_physaddr + RTE_PKTMBUF_HEADROOM), 512)) + +static inline void fifo_reset(struct fifo *fifo, uint32_t len) +{ + fifo->head = fifo->tail = fifo->list; + fifo->endp = fifo->list + len; +} + +static inline void fifo_insert(struct fifo *fifo, uint16_t val) +{ + *fifo->head = val; + if (++fifo->head == fifo->endp) + fifo->head = fifo->list; +} + +/* do not worry about list being empty since we only check it once we know + * we have used enough descriptors to set the RS bit at least once */ +static inline uint16_t fifo_peek(struct fifo *fifo) +{ + return *fifo->tail; +} + +static inline uint16_t fifo_remove(struct fifo *fifo) +{ + uint16_t val; + val = *fifo->tail; + if (++fifo->tail == fifo->endp) + fifo->tail = fifo->list; + return val; +} +#endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c new file mode 100644 index 0000000..0b75299 --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -0,0 +1,343 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. 
+ * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <rte_ethdev.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <rte_string_fns.h> +#include <rte_dev.h> +#include <rte_spinlock.h> + +#include "fm10k.h" +#include "base/fm10k_api.h" + +/* Default delay to acquire mailbox lock */ +#define FM10K_MBXLOCK_DELAY_US 20 + +static void +fm10k_mbx_initlock(struct fm10k_hw *hw) +{ + rte_spinlock_init(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); +} + +static void +fm10k_mbx_lock(struct fm10k_hw *hw) +{ + while (!rte_spinlock_trylock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back))) + rte_delay_us(FM10K_MBXLOCK_DELAY_US); +} + +static void +fm10k_mbx_unlock(struct fm10k_hw *hw) +{ + rte_spinlock_unlock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); +} + +static int +fm10k_dev_configure(struct rte_eth_dev *dev) +{ + PMD_INIT_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.hw_strip_crc == 0) + PMD_INIT_LOG(WARNING, "fm10k always strip CRC"); + + return 0; +} + +static void +fm10k_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + uint64_t ipackets, opackets, ibytes, obytes; + struct fm10k_hw *hw = + FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_hw_stats *hw_stats = + FM10K_DEV_PRIVATE_TO_STATS(dev->data->dev_private); + int i; + + PMD_INIT_FUNC_TRACE(); + + fm10k_update_hw_stats(hw, hw_stats); + + ipackets = opackets = ibytes = obytes = 0; + for (i = 0; (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) && + (i < FM10K_MAX_QUEUES_PF); ++i) { + stats->q_ipackets[i] = hw_stats->q[i].rx_packets.count; + stats->q_opackets[i] = hw_stats->q[i].tx_packets.count; + stats->q_ibytes[i] = hw_stats->q[i].rx_bytes.count; + stats->q_obytes[i] = hw_stats->q[i].tx_bytes.count; + ipackets += stats->q_ipackets[i]; + opackets += stats->q_opackets[i]; + ibytes += stats->q_ibytes[i]; + obytes += stats->q_obytes[i]; + } + stats->ipackets = ipackets; + stats->opackets = opackets; + stats->ibytes = ibytes; + stats->obytes = obytes; +} + +static void +fm10k_stats_reset(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_hw_stats *hw_stats = + FM10K_DEV_PRIVATE_TO_STATS(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + memset(hw_stats, 0, sizeof(*hw_stats)); + fm10k_rebind_hw_stats(hw, hw_stats); +} + +/* Mailbox message handler in VF */ +static const struct fm10k_msg_data 
fm10k_msgdata_vf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_msg_mac_vlan_vf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/* Mailbox message handler in PF */ +static const struct fm10k_msg_data fm10k_msgdata_pf[] = { + FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf), + FM10K_PF_MSG_LPORT_MAP_HANDLER(fm10k_msg_lport_map_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_CREATE, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_DELETE, fm10k_msg_err_pf), + FM10K_PF_MSG_UPDATE_PVID_HANDLER(fm10k_msg_update_pvid_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +static int +fm10k_setup_mbx_service(struct fm10k_hw *hw) +{ + int err; + + /* Initialize mailbox lock */ + fm10k_mbx_initlock(hw); + + /* Replace default message handler with new ones */ + if (hw->mac.type == fm10k_mac_pf) + err = hw->mbx.ops.register_handlers(&hw->mbx, fm10k_msgdata_pf); + else + err = hw->mbx.ops.register_handlers(&hw->mbx, fm10k_msgdata_vf); + + if (err) { + PMD_INIT_LOG(ERR, "Failed to register mailbox handler.err:%d", + err); + return err; + } + /* Connect to SM for PF device or PF for VF device */ + return hw->mbx.ops.connect(hw, &hw->mbx); +} + +static struct eth_dev_ops fm10k_eth_dev_ops = { + .dev_configure = fm10k_dev_configure, + .stats_get = fm10k_stats_get, + .stats_reset = fm10k_stats_reset, +}; + +static int +eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, + struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int diag; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &fm10k_eth_dev_ops; + + /* only initialize in the primary process */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + /* Vendor and Device ID need to be set before init of shared code */ + memset(hw, 0, sizeof(*hw)); + hw->device_id = dev->pci_dev->id.device_id; + hw->vendor_id = dev->pci_dev->id.vendor_id; + hw->subsystem_device_id = dev->pci_dev->id.subsystem_device_id; + hw->subsystem_vendor_id = dev->pci_dev->id.subsystem_vendor_id; + hw->revision_id = 0; + hw->hw_addr = (void *)dev->pci_dev->mem_resource[0].addr; + if (hw->hw_addr == NULL) { + PMD_INIT_LOG(ERR, "Bad mem resource." + " Try to blacklist unused devices."); + return -EIO; + } + + /* Store fm10k_adapter pointer */ + hw->back = dev->data->dev_private; + + /* Initialize the shared code */ + diag = fm10k_init_shared_code(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Shared code init failed: %d", diag); + return -EIO; + } + + /* + * Inialize bus info. Normally we would call fm10k_get_bus_info(), but + * there is no way to get link status without reading BAR4. Until this + * works, assume we have maximum bandwidth. 
+ * @todo - fix bus info + */ + hw->bus_caps.speed = fm10k_bus_speed_8000; + hw->bus_caps.width = fm10k_bus_width_pcie_x8; + hw->bus_caps.payload = fm10k_bus_payload_512; + hw->bus.speed = fm10k_bus_speed_8000; + hw->bus.width = fm10k_bus_width_pcie_x8; + hw->bus.payload = fm10k_bus_payload_256; + + /* Initialize the hw */ + diag = fm10k_init_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware init failed: %d", diag); + return -EIO; + } + + /* Initialize MAC address(es) */ + dev->data->mac_addrs = rte_zmalloc("fm10k", ETHER_ADDR_LEN, 0); + if (dev->data->mac_addrs == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate memory for MAC addresses"); + return -ENOMEM; + } + + diag = fm10k_read_mac_addr(hw); + if (diag != FM10K_SUCCESS) { + /* + * TODO: remove special handling on VF. Need shared code to + * fix first. + */ + if (hw->mac.type == fm10k_mac_pf) { + PMD_INIT_LOG(ERR, "Read MAC addr failed: %d", diag); + return -EIO; + } else { + /* Generate a random addr */ + eth_random_addr(hw->mac.addr); + memcpy(hw->mac.perm_addr, hw->mac.addr, ETH_ALEN); + } + } + + ether_addr_copy((const struct ether_addr *)hw->mac.addr, + &dev->data->mac_addrs[0]); + + /* Reset the hw statistics */ + fm10k_stats_reset(dev); + + /* Reset the hw */ + diag = fm10k_reset_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware reset failed: %d", diag); + return -EIO; + } + + /* Setup mailbox service */ + diag = fm10k_setup_mbx_service(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Failed to setup mailbox: %d", diag); + return -EIO; + } + + /* + * Below function will trigger operations on mailbox, acquire lock to + * avoid race condition from interrupt handler. Operations on mailbox + * FIFO will trigger interrupt to PF/SM, in which interrupt handler + * will handle and generate an interrupt to our side. Then, FIFO in + * mailbox will be touched. + */ + fm10k_mbx_lock(hw); + /* Enable port first */ + hw->mac.ops.update_lport_state(hw, 0, 0, 1); + + /* Update default vlan */ + hw->mac.ops.update_vlan(hw, hw->mac.default_vid, 0, true); + + /* + * Add default mac/vlan filter. glort is assigned by SM for PF, while is + * unused for VF. PF will assign correct glort for VF. + */ + hw->mac.ops.update_uc_addr(hw, hw->mac.dglort_map, hw->mac.addr, + hw->mac.default_vid, 1, 0); + + /* Set unicast mode by default. App can change to other mode in other + * API func. + */ + hw->mac.ops.update_xcast_mode(hw, hw->mac.dglort_map, + FM10K_XCAST_MODE_MULTI); + + fm10k_mbx_unlock(hw); + + return 0; +} + +/* + * The set of PCI devices this driver supports. This driver will enable both PF + * and SRIOV-VF devices. + */ +static struct rte_pci_id pci_id_fm10k_map[] = { +#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, +#include "rte_pci_dev_ids.h" + { .vendor_id = 0, /* sentinel */ }, +}; + +static struct eth_driver rte_pmd_fm10k = { + { + .name = "rte_pmd_fm10k", + .id_table = pci_id_fm10k_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING, + }, + .eth_dev_init = eth_fm10k_dev_init, + .dev_private_size = sizeof(struct fm10k_adapter), +}; + +/* + * Driver initialization routine. + * Invoked once at EAL init time. + * Register itself as the [Poll Mode] Driver of PCI FM10K devices. 
+ */ +static int +rte_pmd_fm10k_init(__rte_unused const char *name, + __rte_unused const char *params) +{ + PMD_INIT_FUNC_TRACE(); + rte_eth_driver_register(&rte_pmd_fm10k); + return 0; +} + +static struct rte_driver rte_fm10k_driver = { + .type = PMD_PDEV, + .init = rte_pmd_fm10k_init, +}; + +PMD_REGISTER_DRIVER(rte_fm10k_driver); diff --git a/lib/librte_pmd_fm10k/fm10k_logs.h b/lib/librte_pmd_fm10k/fm10k_logs.h new file mode 100644 index 0000000..febd796 --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k_logs.h @@ -0,0 +1,78 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _FM10K_LOGS_H_ +#define _FM10K_LOGS_H_ + +#define PMD_INIT_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \ + "PMD: %s(): " fmt "\n", __func__, ##args) + +#ifdef RTE_LIBRTE_FM10K_DEBUG_INIT +#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>") +#else +#define PMD_INIT_FUNC_TRACE() do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX +#define PMD_RX_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_RX_LOG(level, fmt, args...) do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_TX +#define PMD_TX_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_TX_LOG(level, fmt, args...) do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_TX_FREE +#define PMD_TX_FREE_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_DRIVER +#define PMD_DRV_LOG_RAW(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args) +#else +#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0) +#endif + +#define PMD_DRV_LOG(level, fmt, args...) 
\ + PMD_DRV_LOG_RAW(level, fmt "\n", ## args) + +#endif /* _FM10K_LOGS_H_ */ -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
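Once PMD_REGISTER_DRIVER() has run, matching PCI devices are picked up during EAL's PCI probe and eth_fm10k_dev_init() is invoked per device. A hypothetical application start-up sketch follows; it assumes a release where the PCI probe is performed as part of rte_eal_init() (older releases called rte_eal_pci_probe() explicitly).

#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* every probed fm10k port is now visible through the ethdev API */
	printf("%u port(s) available\n", (unsigned)rte_eth_dev_count());
	return 0;
}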
* [dpdk-dev] [PATCH v2 04/15] Change config files to add fm10k into compile 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (2 preceding siblings ...) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 03/15] fm10k: register fm10k pmd PF driver Chen Jing D(Mark) @ 2015-02-04 10:40 ` Chen Jing D(Mark) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 05/15] fm10k: add reta update/requery functions Chen Jing D(Mark) ` (10 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-04 10:40 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Change config/common_bsdapp and config/common_linuxapp, add macros to control fm10k pmd driver compile for linux and bsd. 2. Change lib/Makefile to add fm10k driver into compile list. 3. Change mk/rte.app.mk to add fm10k lib into link. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- config/common_bsdapp | 11 +++++++++++ config/common_linuxapp | 11 +++++++++++ lib/Makefile | 1 + mk/rte.app.mk | 4 ++++ 4 files changed, 27 insertions(+), 0 deletions(-) diff --git a/config/common_bsdapp b/config/common_bsdapp index 9177db1..77e8a64 100644 --- a/config/common_bsdapp +++ b/config/common_bsdapp @@ -182,6 +182,17 @@ CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4 CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1 # +# Compile burst-oriented FM10K PMD +# +CONFIG_RTE_LIBRTE_FM10K_PMD=y +CONFIG_RTE_LIBRTE_FM10K_DEBUG_INIT=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_RX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX_FREE=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_DRIVER=n +CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y + +# # Compile burst-oriented Cisco ENIC PMD driver # CONFIG_RTE_LIBRTE_ENIC_PMD=y diff --git a/config/common_linuxapp b/config/common_linuxapp index 2f9643b..b076b5d 100644 --- a/config/common_linuxapp +++ b/config/common_linuxapp @@ -180,6 +180,17 @@ CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4 CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1 # +# Compile burst-oriented FM10K PMD +# +CONFIG_RTE_LIBRTE_FM10K_PMD=y +CONFIG_RTE_LIBRTE_FM10K_DEBUG_INIT=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_RX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX_FREE=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_DRIVER=n +CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y + +# # Compile burst-oriented Cisco ENIC PMD driver # CONFIG_RTE_LIBRTE_ENIC_PMD=y diff --git a/lib/Makefile b/lib/Makefile index 0ffc982..b1f3860 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -43,6 +43,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether DIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += librte_pmd_e1000 DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += librte_pmd_ixgbe DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += librte_pmd_i40e +DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += librte_pmd_fm10k DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += librte_pmd_enic DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += librte_pmd_bond DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += librte_pmd_ring diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 4294d9a..87d8763 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -211,6 +211,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_I40E_PMD),y) LDLIBS += -lrte_pmd_i40e endif +ifeq ($(CONFIG_RTE_LIBRTE_FM10K_PMD),y) +LDLIBS += -lrte_pmd_fm10k +endif + ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_PMD),y) LDLIBS += -lrte_pmd_ixgbe endif -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
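Each CONFIG_RTE_* option set to y in config/common_* surfaces in C code as an RTE_* define (without the CONFIG_ prefix) in rte_config.h; that is how the debug and RX_OLFLAGS switches above reach fm10k_logs.h and the RX path. A small, hypothetical compile-time check illustrating the mechanism:

#include <stdio.h>
#include <rte_config.h>

int
main(void)
{
#ifdef RTE_LIBRTE_FM10K_PMD
	puts("fm10k PMD is compiled into this build");
#else
	puts("fm10k PMD is disabled in this build");
#endif
	return 0;
}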
* [dpdk-dev] [PATCH v2 05/15] fm10k: add reta update/requery functions 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (3 preceding siblings ...) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 04/15] Change config files to add fm10k into compile Chen Jing D(Mark) @ 2015-02-04 10:40 ` Chen Jing D(Mark) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 06/15] fm10k: add rx_queue_setup/release function Chen Jing D(Mark) ` (9 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-04 10:40 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_reta_update and fm10k_reta_query functions. 2. Add fm10k_link_update and fm10k_dev_infos_get functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 162 +++++++++++++++++++++++++++++++++++ 1 files changed, 162 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 0b75299..b3d4d79 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -44,6 +44,10 @@ /* Default delay to acquire mailbox lock */ #define FM10K_MBXLOCK_DELAY_US 20 +/* Number of chars per uint32 type */ +#define CHARS_PER_UINT32 (sizeof(uint32_t)) +#define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1) + static void fm10k_mbx_initlock(struct fm10k_hw *hw) { @@ -74,6 +78,22 @@ fm10k_dev_configure(struct rte_eth_dev *dev) return 0; } +static int +fm10k_link_update(struct rte_eth_dev *dev, + __rte_unused int wait_to_complete) +{ + PMD_INIT_FUNC_TRACE(); + + /* The host-interface link is always up. The speed is ~50Gbps per Gen3 + * x8 PCIe interface. For now, we leave the speed undefined since there + * is no 50Gbps Ethernet. 
*/ + dev->data->dev_link.link_speed = 0; + dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX; + dev->data->dev_link.link_status = 1; + + return 0; +} + static void fm10k_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { @@ -119,6 +139,144 @@ fm10k_stats_reset(struct rte_eth_dev *dev) fm10k_rebind_hw_stats(hw, hw_stats); } +static void +fm10k_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + dev_info->min_rx_bufsize = FM10K_MIN_RX_BUF_SIZE; + dev_info->max_rx_pktlen = FM10K_MAX_PKT_SIZE; + dev_info->max_rx_queues = hw->mac.max_queues; + dev_info->max_tx_queues = hw->mac.max_queues; + dev_info->max_mac_addrs = 1; + dev_info->max_hash_mac_addrs = 0; + dev_info->max_vfs = FM10K_MAX_VF_NUM; + dev_info->max_vmdq_pools = ETH_64_POOLS; + dev_info->rx_offload_capa = + DEV_RX_OFFLOAD_IPV4_CKSUM | + DEV_RX_OFFLOAD_UDP_CKSUM | + DEV_RX_OFFLOAD_TCP_CKSUM; + dev_info->tx_offload_capa = 0; + dev_info->reta_size = FM10K_MAX_RSS_INDICES; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = FM10K_DEFAULT_RX_PTHRESH, + .hthresh = FM10K_DEFAULT_RX_HTHRESH, + .wthresh = FM10K_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = FM10K_RX_FREE_THRESH_DEFAULT(0), + .rx_drop_en = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = FM10K_DEFAULT_TX_PTHRESH, + .hthresh = FM10K_DEFAULT_TX_HTHRESH, + .wthresh = FM10K_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = FM10K_TX_FREE_THRESH_DEFAULT(0), + .tx_rs_thresh = FM10K_TX_RS_THRESH_DEFAULT(0), + .txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS | + ETH_TXQ_FLAGS_NOOFFLOADS, + }; + +} + +static int +fm10k_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint16_t i, j, idx, shift; + uint8_t mask; + uint32_t reta; + + PMD_INIT_FUNC_TRACE(); + + if (reta_size > FM10K_MAX_RSS_INDICES) { + PMD_INIT_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number hardware can supported " + "(%d)", reta_size, FM10K_MAX_RSS_INDICES); + return -EINVAL; + } + + /* + * Update Redirection Table RETA[n], n=0..31. 
The redirection table has + * 128-entries in 32 registers + */ + for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) { + idx = i / RTE_RETA_GROUP_SIZE; + shift = i % RTE_RETA_GROUP_SIZE; + mask = (uint8_t)((reta_conf[idx].mask >> shift) & + BIT_MASK_PER_UINT32); + if (mask == 0) + continue; + + reta = 0; + if (mask != BIT_MASK_PER_UINT32) + reta = FM10K_READ_REG(hw, FM10K_RETA(0, i >> 2)); + + for (j = 0; j < CHARS_PER_UINT32; j++) { + if (mask & (0x1 << j)) { + if (mask != 0xF) + reta &= ~(UINT8_MAX << CHAR_BIT * j); + reta |= reta_conf[idx].reta[shift + j] << + (CHAR_BIT * j); + } + } + FM10K_WRITE_REG(hw, FM10K_RETA(0, i >> 2), reta); + } + + return 0; +} + +static int +fm10k_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint16_t i, j, idx, shift; + uint8_t mask; + uint32_t reta; + + PMD_INIT_FUNC_TRACE(); + + if (reta_size < FM10K_MAX_RSS_INDICES) { + PMD_INIT_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number hardware can supported " + "(%d)", reta_size, FM10K_MAX_RSS_INDICES); + return -EINVAL; + } + + /* + * Read Redirection Table RETA[n], n=0..31. The redirection table has + * 128-entries in 32 registers + */ + for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) { + idx = i / RTE_RETA_GROUP_SIZE; + shift = i % RTE_RETA_GROUP_SIZE; + mask = (uint8_t)((reta_conf[idx].mask >> shift) & + BIT_MASK_PER_UINT32); + if (mask == 0) + continue; + + reta = FM10K_READ_REG(hw, FM10K_RETA(0, i >> 2)); + for (j = 0; j < CHARS_PER_UINT32; j++) { + if (mask & (0x1 << j)) + reta_conf[idx].reta[shift + j] = ((reta >> + CHAR_BIT * j) & UINT8_MAX); + } + } + + return 0; +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -165,6 +323,10 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_configure = fm10k_dev_configure, .stats_get = fm10k_stats_get, .stats_reset = fm10k_stats_reset, + .link_update = fm10k_link_update, + .dev_infos_get = fm10k_dev_infos_get, + .reta_update = fm10k_reta_update, + .reta_query = fm10k_reta_query, }; static int -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v2 06/15] fm10k: add rx_queue_setup/release function 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (4 preceding siblings ...) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 05/15] fm10k: add reta update/requery functions Chen Jing D(Mark) @ 2015-02-04 10:40 ` Chen Jing D(Mark) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 07/15] fm10k: add tx_queue_setup/release function Chen Jing D(Mark) ` (8 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-04 10:40 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_rx_queue_setup and fm10k_rx_queue_release functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 255 +++++++++++++++++++++++++++++++++++ 1 files changed, 255 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index b3d4d79..63e8a65 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -41,6 +41,7 @@ #include "fm10k.h" #include "base/fm10k_api.h" +#define FM10K_RX_BUFF_ALIGN 512 /* Default delay to acquire mailbox lock */ #define FM10K_MBXLOCK_DELAY_US 20 @@ -67,6 +68,46 @@ fm10k_mbx_unlock(struct fm10k_hw *hw) rte_spinlock_unlock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); } +/* + * clean queue, descriptor rings, free software buffers used when stopping + * device. + */ +static inline void +rx_queue_clean(struct fm10k_rx_queue *q) +{ + union fm10k_rx_desc zero = {.q = {0, 0, 0, 0} }; + uint32_t i; + PMD_INIT_FUNC_TRACE(); + + /* zero descriptor rings */ + for (i = 0; i < q->nb_desc; ++i) + q->hw_ring[i] = zero; + + /* free software buffers */ + for (i = 0; i < q->nb_desc; ++i) { + if (q->sw_ring[i]) { + rte_pktmbuf_free_seg(q->sw_ring[i]); + q->sw_ring[i] = NULL; + } + } +} + +/* + * free all queue memory used when releasing the queue (i.e. configure) + */ +static inline void +rx_queue_free(struct fm10k_rx_queue *q) +{ + PMD_INIT_FUNC_TRACE(); + if (q) { + PMD_INIT_LOG(DEBUG, "Freeing rx queue %p", q); + rx_queue_clean(q); + if (q->sw_ring) + rte_free(q->sw_ring); + rte_free(q); + } +} + static int fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -186,6 +227,218 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev, } +static inline int +check_nb_desc(uint16_t min, uint16_t max, uint16_t mult, uint16_t request) +{ + if ((request < min) || (request > max) || ((request % mult) != 0)) + return -1; + else + return 0; +} + +/* + * Create a memzone for hardware descriptor rings. Malloc cannot be used since + * the physical address is required. If the memzone is already created, then + * this function returns a pointer to the existing memzone. 
+ */ +static inline const struct rte_memzone * +allocate_hw_ring(const char *driver_name, const char *ring_name, + uint8_t port_id, uint16_t queue_id, int socket_id, + uint32_t size, uint32_t align) +{ + char name[RTE_MEMZONE_NAMESIZE]; + const struct rte_memzone *mz; + + snprintf(name, sizeof(name), "%s_%s_%d_%d_%d", + driver_name, ring_name, port_id, queue_id, socket_id); + + /* return the memzone if it already exists */ + mz = rte_memzone_lookup(name); + if (mz) + return mz; + +#ifdef RTE_LIBRTE_XEN_DOM0 + return rte_memzone_reserve_bounded(name, size, socket_id, 0, align, + RTE_PGSIZE_2M); +#else + return rte_memzone_reserve_aligned(name, size, socket_id, 0, align); +#endif +} + +static inline int +check_thresh(uint16_t min, uint16_t max, uint16_t div, uint16_t request) +{ + if ((request < min) || (request > max) || ((div % request) != 0)) + return -1; + else + return 0; +} + +static inline int +handle_rxconf(struct fm10k_rx_queue *q, const struct rte_eth_rxconf *conf) +{ + uint16_t rx_free_thresh; + + if (conf->rx_free_thresh == 0) + rx_free_thresh = FM10K_RX_FREE_THRESH_DEFAULT(q); + else + rx_free_thresh = conf->rx_free_thresh; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_RX_FREE_THRESH_MIN(q), + FM10K_RX_FREE_THRESH_MAX(q), + FM10K_RX_FREE_THRESH_DIV(q), + rx_free_thresh)) { + PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + rx_free_thresh, FM10K_RX_FREE_THRESH_MAX(q), + FM10K_RX_FREE_THRESH_MIN(q), + FM10K_RX_FREE_THRESH_DIV(q)); + return (-EINVAL); + } + + q->alloc_thresh = rx_free_thresh; + q->drop_en = conf->rx_drop_en; + q->rx_deferred_start = conf->rx_deferred_start; + + return 0; +} + +/* + * Hardware requires specific alignment for Rx packet buffers. At + * least one of the following two conditions must be satisfied. + * 1. Address is 512B aligned + * 2. Address is 8B aligned and buffer does not cross 4K boundary. + * + * As such, the driver may need to adjust the DMA address within the + * buffer by up to 512B. The mempool element size is checked here + * to make sure a maximally sized Ethernet frame can still be wholly + * contained within the buffer after 512B alignment. + * + * return 1 if the element size is valid, otherwise return 0. + */ +static int +mempool_element_size_valid(struct rte_mempool *mp) +{ + uint32_t min_size; + + /* elt_size includes mbuf header and headroom */ + min_size = mp->elt_size - sizeof(struct rte_mbuf) - + RTE_PKTMBUF_HEADROOM; + + /* account for up to 512B of alignment */ + min_size -= FM10K_RX_BUFF_ALIGN; + + /* sanity check for overflow */ + if (min_size > mp->elt_size) + return 0; + + if (min_size < ETHER_MAX_VLAN_FRAME_LEN) + return 0; + + /* size is valid */ + return 1; +} + +static int +fm10k_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_rxconf *conf, struct rte_mempool *mp) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_rx_queue *q; + const struct rte_memzone *mz; + + PMD_INIT_FUNC_TRACE(); + + /* make sure the mempool element size can account for alignment. Use + * RTE_LOG directly to make sure this error is seen. 
*/ + if (!mempool_element_size_valid(mp)) { + PMD_INIT_LOG(ERR, "Error : Mempool element size is too small"); + return (-EINVAL); + } + + /* make sure a valid number of descriptors have been requested */ + if (check_nb_desc(FM10K_MIN_RX_DESC, FM10K_MAX_RX_DESC, + FM10K_MULT_RX_DESC, nb_desc)) { + PMD_INIT_LOG(ERR, "Number of Rx descriptors (%u) must be " + "less than or equal to %lu, " + "greater than or equal to %u, " + "and a multiple of %u", + nb_desc, FM10K_MAX_RX_DESC, FM10K_MIN_RX_DESC, + FM10K_MULT_RX_DESC); + return (-EINVAL); + } + + /* + * if this queue existed already, free the associated memory. The + * queue cannot be reused in case we need to allocate memory on + * different socket than was previously used. + */ + if (dev->data->rx_queues[queue_id] != NULL) { + rx_queue_free(dev->data->rx_queues[queue_id]); + dev->data->rx_queues[queue_id] = NULL; + } + + /* allocate memory for the queue structure */ + q = rte_zmalloc_socket("fm10k", sizeof(*q), RTE_CACHE_LINE_SIZE, + socket_id); + if (q == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate queue structure"); + return (-ENOMEM); + } + + /* setup queue */ + q->mp = mp; + q->nb_desc = nb_desc; + q->port_id = dev->data->port_id; + q->queue_id = queue_id; + q->tail_ptr = (volatile uint32_t *) + &((uint32_t *)hw->hw_addr)[FM10K_RDT(queue_id)]; + if (handle_rxconf(q, conf)) + return (-EINVAL); + + /* allocate memory for the software ring */ + q->sw_ring = rte_zmalloc_socket("fm10k sw ring", + nb_desc * sizeof(struct rte_mbuf *), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->sw_ring == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate software ring"); + rte_free(q); + return (-ENOMEM); + } + + /* + * allocate memory for the hardware descriptor ring. A memzone large + * enough to hold the maximum ring size is requested to allow for + * resizing in later calls to the queue setup function. + */ + mz = allocate_hw_ring(dev->driver->pci_drv.name, "rx_ring", + dev->data->port_id, queue_id, socket_id, + FM10K_MAX_RX_RING_SZ, FM10K_ALIGN_RX_DESC); + if (mz == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate hardware ring"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + q->hw_ring = mz->addr; + q->hw_ring_phys_addr = mz->phys_addr; + + dev->data->rx_queues[queue_id] = q; + return 0; +} + +static void +fm10k_rx_queue_release(void *queue) +{ + PMD_INIT_FUNC_TRACE(); + + rx_queue_free(queue); +} + static int fm10k_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, @@ -325,6 +578,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, .dev_infos_get = fm10k_dev_infos_get, + .rx_queue_setup = fm10k_rx_queue_setup, + .rx_queue_release = fm10k_rx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
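On the application side, the constraints enforced above translate into mempool sizing and rte_eth_rxconf choices. A hedged sketch follows; the descriptor count and thresholds are illustrative values assumed to satisfy the FM10K_MIN/MAX/MULT_RX_DESC and check_thresh() constraints, and the mempool is assumed to leave 512B of alignment slack plus a 1522B frame in each data buffer, as mempool_element_size_valid() requires:

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static int
example_rx_setup(uint8_t port_id, uint16_t queue_id, unsigned int socket_id,
                 struct rte_mempool *mp)
{
        struct rte_eth_rxconf rxconf = {
                .rx_free_thresh = 32,   /* must evenly divide the ring size */
                .rx_drop_en = 1,        /* request drop-on-empty where supported */
        };

        /* 512 descriptors, assumed to lie inside the driver's min/max range
         * and to be a multiple of FM10K_MULT_RX_DESC. */
        return rte_eth_rx_queue_setup(port_id, queue_id, 512, socket_id,
                                      &rxconf, mp);
}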
* [dpdk-dev] [PATCH v2 07/15] fm10k: add tx_queue_setup/release function 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (5 preceding siblings ...) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 06/15] fm10k: add rx_queue_setup/release function Chen Jing D(Mark) @ 2015-02-04 10:40 ` Chen Jing D(Mark) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 08/15] fm10k: add RX/TX single queue start/stop function Chen Jing D(Mark) ` (7 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-04 10:40 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_tx_queue_setup and fm10k_tx_queue_release functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 205 +++++++++++++++++++++++++++++++++++ 1 files changed, 205 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 63e8a65..caaccf6 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -108,6 +108,48 @@ rx_queue_free(struct fm10k_rx_queue *q) } } +/* + * clean queue, descriptor rings, free software buffers used when stopping + * device + */ +static inline void +tx_queue_clean(struct fm10k_tx_queue *q) +{ + struct fm10k_tx_desc zero = {0, 0, 0, 0, 0, 0}; + uint32_t i; + PMD_INIT_FUNC_TRACE(); + + /* zero descriptor rings */ + for (i = 0; i < q->nb_desc; ++i) + q->hw_ring[i] = zero; + + /* free software buffers */ + for (i = 0; i < q->nb_desc; ++i) { + if (q->sw_ring[i]) { + rte_pktmbuf_free_seg(q->sw_ring[i]); + q->sw_ring[i] = NULL; + } + } +} + +/* + * free all queue memory used when releasing the queue (i.e. 
configure) + */ +static inline void +tx_queue_free(struct fm10k_tx_queue *q) +{ + PMD_INIT_FUNC_TRACE(); + if (q) { + PMD_INIT_LOG(DEBUG, "Freeing tx queue %p", q); + tx_queue_clean(q); + if (q->rs_tracker.list) + rte_free(q->rs_tracker.list); + if (q->sw_ring) + rte_free(q->sw_ring); + rte_free(q); + } +} + static int fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -439,6 +481,167 @@ fm10k_rx_queue_release(void *queue) rx_queue_free(queue); } +static inline int +handle_txconf(struct fm10k_tx_queue *q, const struct rte_eth_txconf *conf) +{ + uint16_t tx_free_thresh; + uint16_t tx_rs_thresh; + + /* constraint MACROs require that tx_free_thresh is configured + * before tx_rs_thresh */ + if (conf->tx_free_thresh == 0) + tx_free_thresh = FM10K_TX_FREE_THRESH_DEFAULT(q); + else + tx_free_thresh = conf->tx_free_thresh; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_TX_FREE_THRESH_MIN(q), + FM10K_TX_FREE_THRESH_MAX(q), + FM10K_TX_FREE_THRESH_DIV(q), + tx_free_thresh)) { + PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + tx_free_thresh, FM10K_TX_FREE_THRESH_MAX(q), + FM10K_TX_FREE_THRESH_MIN(q), + FM10K_TX_FREE_THRESH_DIV(q)); + return (-EINVAL); + } + + q->free_thresh = tx_free_thresh; + + if (conf->tx_rs_thresh == 0) + tx_rs_thresh = FM10K_TX_RS_THRESH_DEFAULT(q); + else + tx_rs_thresh = conf->tx_rs_thresh; + + q->tx_deferred_start = conf->tx_deferred_start; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_TX_RS_THRESH_MIN(q), + FM10K_TX_RS_THRESH_MAX(q), + FM10K_TX_RS_THRESH_DIV(q), + tx_rs_thresh)) { + PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + tx_rs_thresh, FM10K_TX_RS_THRESH_MAX(q), + FM10K_TX_RS_THRESH_MIN(q), + FM10K_TX_RS_THRESH_DIV(q)); + return (-EINVAL); + } + + q->rs_thresh = tx_rs_thresh; + + return 0; +} + +static int +fm10k_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_txconf *conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_tx_queue *q; + const struct rte_memzone *mz; + + PMD_INIT_FUNC_TRACE(); + + /* make sure a valid number of descriptors have been requested */ + if (check_nb_desc(FM10K_MIN_TX_DESC, FM10K_MAX_TX_DESC, + FM10K_MULT_TX_DESC, nb_desc)) { + PMD_INIT_LOG(ERR, "Number of Tx descriptors (%u) must be " + "less than or equal to %lu, " + "greater than or equal to %u, " + "and a multiple of %u", + nb_desc, FM10K_MAX_TX_DESC, FM10K_MIN_TX_DESC, + FM10K_MULT_TX_DESC); + return (-EINVAL); + } + + /* + * if this queue existed already, free the associated memory. The + * queue cannot be reused in case we need to allocate memory on + * different socket than was previously used. 
+ */ + if (dev->data->tx_queues[queue_id] != NULL) { + tx_queue_free(dev->data->tx_queues[queue_id]); + dev->data->tx_queues[queue_id] = NULL; + } + + /* allocate memory for the queue structure */ + q = rte_zmalloc_socket("fm10k", sizeof(*q), RTE_CACHE_LINE_SIZE, + socket_id); + if (q == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate queue structure"); + return (-ENOMEM); + } + + /* setup queue */ + q->nb_desc = nb_desc; + q->port_id = dev->data->port_id; + q->queue_id = queue_id; + q->tail_ptr = (volatile uint32_t *) + &((uint32_t *)hw->hw_addr)[FM10K_TDT(queue_id)]; + if (handle_txconf(q, conf)) + return (-EINVAL); + + /* allocate memory for the software ring */ + q->sw_ring = rte_zmalloc_socket("fm10k sw ring", + nb_desc * sizeof(struct rte_mbuf *), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->sw_ring == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate software ring"); + rte_free(q); + return (-ENOMEM); + } + + /* + * allocate memory for the hardware descriptor ring. A memzone large + * enough to hold the maximum ring size is requested to allow for + * resizing in later calls to the queue setup function. + */ + mz = allocate_hw_ring(dev->driver->pci_drv.name, "tx_ring", + dev->data->port_id, queue_id, socket_id, + FM10K_MAX_TX_RING_SZ, FM10K_ALIGN_TX_DESC); + if (mz == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate hardware ring"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + q->hw_ring = mz->addr; + q->hw_ring_phys_addr = mz->phys_addr; + + /* + * allocate memory for the RS bit tracker. Enough slots to hold the + * descriptor index for each RS bit needing to be set are required. + */ + q->rs_tracker.list = rte_zmalloc_socket("fm10k rs tracker", + ((nb_desc + 1) / q->rs_thresh) * + sizeof(uint16_t), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->rs_tracker.list == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate RS bit tracker"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + + dev->data->tx_queues[queue_id] = q; + return 0; +} + +static void +fm10k_tx_queue_release(void *queue) +{ + PMD_INIT_FUNC_TRACE(); + + tx_queue_free(queue); +} + static int fm10k_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, @@ -580,6 +783,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_infos_get = fm10k_dev_infos_get, .rx_queue_setup = fm10k_rx_queue_setup, .rx_queue_release = fm10k_rx_queue_release, + .tx_queue_setup = fm10k_tx_queue_setup, + .tx_queue_release = fm10k_tx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
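The TX path mirrors the RX path from the application's point of view; a sketch of a matching setup call (values illustrative, chosen so both tx_free_thresh and tx_rs_thresh are valid divisors of the ring size, leaving the RS bit tracker (512 + 1) / 32 = 16 slots):

#include <rte_ethdev.h>

static int
example_tx_setup(uint8_t port_id, uint16_t queue_id, unsigned int socket_id)
{
        struct rte_eth_txconf txconf = {
                .tx_free_thresh = 32,   /* checked against FM10K_TX_FREE_THRESH_* */
                .tx_rs_thresh = 32,     /* one RS bit roughly every 32 descriptors */
        };

        return rte_eth_tx_queue_setup(port_id, queue_id, 512, socket_id,
                                      &txconf);
}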
* [dpdk-dev] [PATCH v2 08/15] fm10k: add RX/TX single queue start/stop function 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (6 preceding siblings ...) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 07/15] fm10k: add tx_queue_setup/release function Chen Jing D(Mark) @ 2015-02-04 10:40 ` Chen Jing D(Mark) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 09/15] fm10k: add dev start/stop functions Chen Jing D(Mark) ` (6 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-04 10:40 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add 4 functions fm10k_dev_rx_queue_start, fm10k_dev_rx_queue_stop, fm10k_dev_tx_queue_start, and fm10k_dev_tx_queue_stop. 2. verify Rx packet buffer alignment is valid. Hardware requires specific alignment for Rx packet buffers. At least one of the following two conditions must be satisfied. 1) Address is 512B aligned 2) Address is 8B aligned and buffer does not cross 4K boundary. Alignment is checked by the driver when the Rx queue is reset. It is assumed that if an entire descriptor ring can be filled with buffers containing valid alignment, then all buffers in that mempool have valid address alignment. It is the responsibility of the user to ensure all buffers have valid alignment, as it is the user who creates the mempool. It is assumed the buffer needs only to store a maximum size Ethernet frame. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k.h | 59 +++++++++ lib/librte_pmd_fm10k/fm10k_ethdev.c | 222 +++++++++++++++++++++++++++++++++++ 2 files changed, 281 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index 1468040..be990e5 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -221,4 +221,63 @@ static inline uint16_t fifo_remove(struct fifo *fifo) fifo->tail = fifo->list; return val; } + +static inline void +fm10k_pktmbuf_reset(struct rte_mbuf *mb, uint8_t in_port) +{ + rte_mbuf_refcnt_set(mb, 1); + mb->next = NULL; + mb->nb_segs = 1; + + /* enforce 512B alignment on default Rx virtual addresses */ + mb->data_off = (uint16_t)(RTE_PTR_ALIGN((char *)mb->buf_addr + + RTE_PKTMBUF_HEADROOM, FM10K_RX_DATABUF_ALIGN) + - (char *)mb->buf_addr); + mb->port = in_port; +} + +/* + * Verify Rx packet buffer alignment is valid. + * + * Hardware requires specific alignment for Rx packet buffers. At + * least one of the following two conditions must be satisfied. + * 1. Address is 512B aligned + * 2. Address is 8B aligned and buffer does not cross 4K boundary. + * + * Return 1 if buffer alignment satisfies at least one condition, + * otherwise return 0. + * + * Note: Alignment is checked by the driver when the Rx queue is reset. It + * is assumed that if an entire descriptor ring can be filled with + * buffers containing valid alignment, then all buffers in that mempool + * have valid address alignment. It is the responsibility of the user + * to ensure all buffers have valid alignment, as it is the user who + * creates the mempool. + * Note: It is assumed the buffer needs only to store a maximum size Ethernet + * frame. + */ +static inline int +fm10k_addr_alignment_valid(struct rte_mbuf *mb) +{ + uint64_t addr = MBUF_DMA_ADDR_DEFAULT(mb); + uint64_t boundary1, boundary2; + + /* 512B aligned? 
*/ + if (RTE_ALIGN(addr, 512) == addr) + return 1; + + /* 8B aligned, and max Ethernet frame would not cross a 4KB boundary? */ + if (RTE_ALIGN(addr, 8) == addr) { + boundary1 = RTE_ALIGN_FLOOR(addr, 4096); + boundary2 = RTE_ALIGN_FLOOR(addr + ETHER_MAX_VLAN_FRAME_LEN, + 4096); + if (boundary1 == boundary2) + return 1; + } + + /* use RTE_LOG directly to make sure this error is seen */ + RTE_LOG(ERR, PMD, "%s(): Error: Invalid buffer alignment\n", __func__); + + return 0; +} #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index caaccf6..4a893e1 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -69,6 +69,43 @@ fm10k_mbx_unlock(struct fm10k_hw *hw) } /* + * reset queue to initial state, allocate software buffers used when starting + * device. + * return 0 on success + * return -ENOMEM if buffers cannot be allocated + * return -EINVAL if buffers do not satisfy alignment condition + */ +static inline int +rx_queue_reset(struct fm10k_rx_queue *q) +{ + uint64_t dma_addr; + int i, diag; + PMD_INIT_FUNC_TRACE(); + + diag = rte_mempool_get_bulk(q->mp, (void **)q->sw_ring, q->nb_desc); + if (diag != 0) + return -ENOMEM; + + for (i = 0; i < q->nb_desc; ++i) { + fm10k_pktmbuf_reset(q->sw_ring[i], q->port_id); + if (!fm10k_addr_alignment_valid(q->sw_ring[i])) { + rte_mempool_put_bulk(q->mp, (void **)q->sw_ring, + q->nb_desc); + return -EINVAL; + } + dma_addr = MBUF_DMA_ADDR_DEFAULT(q->sw_ring[i]); + q->hw_ring[i].q.pkt_addr = dma_addr; + q->hw_ring[i].q.hdr_addr = dma_addr; + } + + q->next_dd = 0; + q->next_alloc = 0; + q->next_trigger = q->alloc_thresh - 1; + FM10K_PCI_REG_WRITE(q->tail_ptr, q->nb_desc - 1); + return 0; +} + +/* * clean queue, descriptor rings, free software buffers used when stopping * device. 
*/ @@ -109,6 +146,49 @@ rx_queue_free(struct fm10k_rx_queue *q) } /* + * disable RX queue, wait unitl HW finished necessary flush operation + */ +static inline int +rx_queue_disable(struct fm10k_hw *hw, uint16_t qnum) +{ + uint32_t reg, i; + + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(qnum)); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(qnum), + reg & ~FM10K_RXQCTL_ENABLE); + + /* Wait 100us at most */ + for (i = 0; i < FM10K_QUEUE_DISABLE_TIMEOUT; i++) { + rte_delay_us(1); + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + if (!(reg & FM10K_RXQCTL_ENABLE)) + break; + } + + if (i == FM10K_QUEUE_DISABLE_TIMEOUT) + return -1; + + return 0; +} + +/* + * reset queue to initial state, allocate software buffers used when starting + * device + */ +static inline void +tx_queue_reset(struct fm10k_tx_queue *q) +{ + PMD_INIT_FUNC_TRACE(); + q->last_free = 0; + q->next_free = 0; + q->nb_used = 0; + q->nb_free = q->nb_desc - 1; + q->free_trigger = q->nb_free - q->free_thresh; + fifo_reset(&q->rs_tracker, (q->nb_desc + 1) / q->rs_thresh); + FM10K_PCI_REG_WRITE(q->tail_ptr, 0); +} + +/* * clean queue, descriptor rings, free software buffers used when stopping * device */ @@ -150,6 +230,32 @@ tx_queue_free(struct fm10k_tx_queue *q) } } +/* + * disable TX queue, wait unitl HW finished necessary flush operation + */ +static inline int +tx_queue_disable(struct fm10k_hw *hw, uint16_t qnum) +{ + uint32_t reg, i; + + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(qnum)); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(qnum), + reg & ~FM10K_TXDCTL_ENABLE); + + /* Wait 100us at most */ + for (i = 0; i < FM10K_QUEUE_DISABLE_TIMEOUT; i++) { + rte_delay_us(1); + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + if (!(reg & FM10K_TXDCTL_ENABLE)) + break; + } + + if (i == FM10K_QUEUE_DISABLE_TIMEOUT) + return -1; + + return 0; +} + static int fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -162,6 +268,118 @@ fm10k_dev_configure(struct rte_eth_dev *dev) } static int +fm10k_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int err = -1; + uint32_t reg; + struct fm10k_rx_queue *rxq; + + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id < dev->data->nb_rx_queues) { + rxq = dev->data->rx_queues[rx_queue_id]; + err = rx_queue_reset(rxq); + if (err == -ENOMEM) { + PMD_INIT_LOG(ERR, "Failed to alloc memory : %d", err); + return err; + } else if (err == -EINVAL) { + PMD_INIT_LOG(ERR, "Invalid buffer address alignment :" + " %d", err); + return err; + } + + /* Setup the HW Rx Head and Tail Descriptor Pointers + * Note: this must be done AFTER the queue is enabled on real + * hardware, but BEFORE the queue is enabled when using the + * emulation platform. Do it in both places for now and remove + * this comment and the following two register writes when the + * emulation platform is no longer being used. 
+ */ + FM10K_WRITE_REG(hw, FM10K_RDH(rx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_RDT(rx_queue_id), rxq->nb_desc - 1); + + /* Set PF ownership flag for PF devices */ + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(rx_queue_id)); + if (hw->mac.type == fm10k_mac_pf) + reg |= FM10K_RXQCTL_PF; + reg |= FM10K_RXQCTL_ENABLE; + /* enable RX queue */ + FM10K_WRITE_REG(hw, FM10K_RXQCTL(rx_queue_id), reg); + FM10K_WRITE_FLUSH(hw); + + /* Setup the HW Rx Head and Tail Descriptor Pointers + * Note: this must be done AFTER the queue is enabled + */ + FM10K_WRITE_REG(hw, FM10K_RDH(rx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_RDT(rx_queue_id), rxq->nb_desc - 1); + } + + return err; +} + +static int +fm10k_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id < dev->data->nb_rx_queues) { + /* Disable RX queue */ + rx_queue_disable(hw, rx_queue_id); + + /* Free mbuf and clean HW ring */ + rx_queue_clean(dev->data->rx_queues[rx_queue_id]); + } + + return 0; +} + +static int +fm10k_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + /** @todo - this should be defined in the shared code */ +#define FM10K_TXDCTL_WRITE_BACK_MIN_DELAY 0x00010000 + uint32_t txdctl = FM10K_TXDCTL_WRITE_BACK_MIN_DELAY; + int err = 0; + + PMD_INIT_FUNC_TRACE(); + + if (tx_queue_id < dev->data->nb_tx_queues) { + tx_queue_reset(dev->data->tx_queues[tx_queue_id]); + + /* reset head and tail pointers */ + FM10K_WRITE_REG(hw, FM10K_TDH(tx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_TDT(tx_queue_id), 0); + + /* enable TX queue */ + FM10K_WRITE_REG(hw, FM10K_TXDCTL(tx_queue_id), + FM10K_TXDCTL_ENABLE | txdctl); + FM10K_WRITE_FLUSH(hw); + } else + err = -1; + + return err; +} + +static int +fm10k_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + if (tx_queue_id < dev->data->nb_tx_queues) { + tx_queue_disable(hw, tx_queue_id); + tx_queue_clean(dev->data->tx_queues[tx_queue_id]); + } + + return 0; +} + +static int fm10k_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) { @@ -781,6 +999,10 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, .dev_infos_get = fm10k_dev_infos_get, + .rx_queue_start = fm10k_dev_rx_queue_start, + .rx_queue_stop = fm10k_dev_rx_queue_stop, + .tx_queue_start = fm10k_dev_tx_queue_start, + .tx_queue_stop = fm10k_dev_tx_queue_stop, .rx_queue_setup = fm10k_rx_queue_setup, .rx_queue_release = fm10k_rx_queue_release, .tx_queue_setup = fm10k_tx_queue_setup, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
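From an application, the per-queue hooks added above are driven through the generic queue start/stop API; the deferred-start flags recorded in handle_rxconf()/handle_txconf() decide whether dev_start touches a queue at all. A sketch of the intended usage (port, queue and ring size are illustrative, the port is assumed already configured, error handling omitted):

#include <rte_ethdev.h>

static void
example_deferred_rx_start(uint8_t port_id, unsigned int socket_id,
                          struct rte_mempool *mp)
{
        struct rte_eth_rxconf rxconf = {
                .rx_free_thresh = 32,
                .rx_deferred_start = 1, /* leave the queue disabled in dev_start */
        };

        rte_eth_rx_queue_setup(port_id, 0, 512, socket_id, &rxconf, mp);
        rte_eth_dev_start(port_id);             /* queue 0 is skipped here */
        rte_eth_dev_rx_queue_start(port_id, 0); /* fm10k_dev_rx_queue_start() */
        /* ... traffic ... */
        rte_eth_dev_rx_queue_stop(port_id, 0);  /* fm10k_dev_rx_queue_stop() */
}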
* [dpdk-dev] [PATCH v2 09/15] fm10k: add dev start/stop functions 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (7 preceding siblings ...) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 08/15] fm10k: add RX/TX single queue start/stop function Chen Jing D(Mark) @ 2015-02-04 10:40 ` Chen Jing D(Mark) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 10/15] fm10k: add receive and tranmit function Chen Jing D(Mark) ` (5 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-04 10:40 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add function to initialize RX queues. 2. Add function to initialize TX queues. 3. Add fm10k_dev_start, fm10k_dev_stop and fm10k_dev_close functions. 4. Add function to close mailbox service. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 224 +++++++++++++++++++++++++++++++++++ 1 files changed, 224 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 4a893e1..011b0bc 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -44,11 +44,14 @@ #define FM10K_RX_BUFF_ALIGN 512 /* Default delay to acquire mailbox lock */ #define FM10K_MBXLOCK_DELAY_US 20 +#define UINT64_LOWER_32BITS_MASK 0x00000000ffffffffULL /* Number of chars per uint32 type */ #define CHARS_PER_UINT32 (sizeof(uint32_t)) #define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1) +static void fm10k_close_mbx_service(struct fm10k_hw *hw); + static void fm10k_mbx_initlock(struct fm10k_hw *hw) { @@ -268,6 +271,98 @@ fm10k_dev_configure(struct rte_eth_dev *dev) } static int +fm10k_dev_tx_init(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, ret; + struct fm10k_tx_queue *txq; + uint64_t base_addr; + uint32_t size; + + /* Disable TXINT to avoid possible interrupt */ + for (i = 0; i < hw->mac.max_queues; i++) + FM10K_WRITE_REG(hw, FM10K_TXINT(i), + 3 << FM10K_TXINT_TIMER_SHIFT); + + /* Setup TX queue */ + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = dev->data->tx_queues[i]; + base_addr = txq->hw_ring_phys_addr; + size = txq->nb_desc * sizeof(struct fm10k_tx_desc); + + /* disable queue to avoid issues while updating state */ + ret = tx_queue_disable(hw, i); + if (ret) { + PMD_INIT_LOG(ERR, "failed to disable queue %d", i); + return -1; + } + + /* set location and size for descriptor ring */ + FM10K_WRITE_REG(hw, FM10K_TDBAL(i), + base_addr & UINT64_LOWER_32BITS_MASK); + FM10K_WRITE_REG(hw, FM10K_TDBAH(i), + base_addr >> (CHAR_BIT * sizeof(uint32_t))); + FM10K_WRITE_REG(hw, FM10K_TDLEN(i), size); + } + return 0; +} + +static int +fm10k_dev_rx_init(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, ret; + struct fm10k_rx_queue *rxq; + uint64_t base_addr; + uint32_t size; + uint32_t rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY; + uint16_t buf_size; + struct rte_pktmbuf_pool_private *mbp_priv; + + /* Disable RXINT to avoid possible interrupt */ + for (i = 0; i < hw->mac.max_queues; i++) + FM10K_WRITE_REG(hw, FM10K_RXINT(i), + 3 << FM10K_RXINT_TIMER_SHIFT); + + /* Setup RX queues */ + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = dev->data->rx_queues[i]; + base_addr = rxq->hw_ring_phys_addr; + size = rxq->nb_desc * sizeof(union fm10k_rx_desc); + + /* disable 
queue to avoid issues while updating state */ + ret = rx_queue_disable(hw, i); + if (ret) { + PMD_INIT_LOG(ERR, "failed to disable queue %d", i); + return -1; + } + + /* Setup the Base and Length of the Rx Descriptor Ring */ + FM10K_WRITE_REG(hw, FM10K_RDBAL(i), + base_addr & UINT64_LOWER_32BITS_MASK); + FM10K_WRITE_REG(hw, FM10K_RDBAH(i), + base_addr >> (CHAR_BIT * sizeof(uint32_t))); + FM10K_WRITE_REG(hw, FM10K_RDLEN(i), size); + + /* Configure the Rx buffer size for one buff without split */ + mbp_priv = rte_mempool_get_priv(rxq->mp); + buf_size = (uint16_t) (mbp_priv->mbuf_data_room_size - + RTE_PKTMBUF_HEADROOM); + FM10K_WRITE_REG(hw, FM10K_SRRCTL(i), + buf_size >> FM10K_SRRCTL_BSIZEPKT_SHIFT); + + /* Enable drop on empty, it's RO for VF */ + if (hw->mac.type == fm10k_mac_pf && rxq->drop_en) + rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY; + + FM10K_WRITE_REG(hw, FM10K_RXDCTL(i), rxdctl); + FM10K_WRITE_FLUSH(hw); + } + + return 0; +} + +static int fm10k_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) { struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -379,6 +474,125 @@ fm10k_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) return 0; } +/* fls = find last set bit = 32 minus the number of leading zeros */ +#ifndef fls +#define fls(x) (((x) == 0) ? 0 : (32 - __builtin_clz((x)))) +#endif +#define BSIZEPKT_ROUNDUP ((1 << FM10K_SRRCTL_BSIZEPKT_SHIFT) - 1) +static int +fm10k_dev_start(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, diag; + + PMD_INIT_FUNC_TRACE(); + + /* stop, init, then start the hw */ + diag = fm10k_stop_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware stop failed: %d", diag); + return -EIO; + } + + diag = fm10k_init_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware init failed: %d", diag); + return -EIO; + } + + diag = fm10k_start_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_INIT_LOG(ERR, "Hardware start failed: %d", diag); + return -EIO; + } + + diag = fm10k_dev_tx_init(dev); + if (diag) { + PMD_INIT_LOG(ERR, "TX init failed: %d", diag); + return diag; + } + + diag = fm10k_dev_rx_init(dev); + if (diag) { + PMD_INIT_LOG(ERR, "RX init failed: %d", diag); + return diag; + } + + if (hw->mac.type == fm10k_mac_pf) { + /* Establish only VSI 0 as valid */ + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(0), FM10K_DGLORTMAP_ANY); + + /* Configure RSS bits used in RETA table */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(0), + fls(dev->data->nb_rx_queues - 1) << + FM10K_DGLORTDEC_RSSLENGTH_SHIFT); + + /* Invalidate all other GLORT entries */ + for (i = 1; i < FM10K_DGLORT_COUNT; i++) + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(i), + FM10K_DGLORTMAP_NONE); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + struct fm10k_rx_queue *rxq; + rxq = dev->data->rx_queues[i]; + + if (rxq->rx_deferred_start) + continue; + diag = fm10k_dev_rx_queue_start(dev, i); + if (diag != 0) { + int j; + for (j = 0; j < i; ++j) + rx_queue_clean(dev->data->rx_queues[j]); + return diag; + } + } + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + struct fm10k_tx_queue *txq; + txq = dev->data->tx_queues[i]; + + if (txq->tx_deferred_start) + continue; + diag = fm10k_dev_tx_queue_start(dev, i); + if (diag != 0) { + int j; + for (j = 0; j < dev->data->nb_rx_queues; ++j) + rx_queue_clean(dev->data->rx_queues[j]); + return diag; + } + } + + return 0; +} + +static void +fm10k_dev_stop(struct rte_eth_dev *dev) +{ + int i; + + PMD_INIT_FUNC_TRACE(); + + for (i = 0; i < 
dev->data->nb_tx_queues; i++) + fm10k_dev_tx_queue_stop(dev, i); + + for (i = 0; i < dev->data->nb_rx_queues; i++) + fm10k_dev_rx_queue_stop(dev, i); +} + +static void +fm10k_dev_close(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + /* Stop mailbox service first */ + fm10k_close_mbx_service(hw); + fm10k_dev_stop(dev); + fm10k_stop_hw(hw); +} + static int fm10k_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) @@ -993,8 +1207,18 @@ fm10k_setup_mbx_service(struct fm10k_hw *hw) return hw->mbx.ops.connect(hw, &hw->mbx); } +static void +fm10k_close_mbx_service(struct fm10k_hw *hw) +{ + /* Disconnect from SM for PF device or PF for VF device */ + hw->mbx.ops.disconnect(hw, &hw->mbx); +} + static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_configure = fm10k_dev_configure, + .dev_start = fm10k_dev_start, + .dev_stop = fm10k_dev_stop, + .dev_close = fm10k_dev_close, .stats_get = fm10k_stats_get, .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
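The fls() macro above is what sizes the RSS field written to DGLORTDEC: the driver programs fls(nb_rx_queues - 1) into the RSSLENGTH shift, a power-of-two encoding of how many RETA indices are in use. A standalone check of the macro (plain C, independent of the driver):

#include <stdio.h>

/* fls = find last set bit = 32 minus the number of leading zeros */
#define fls(x) (((x) == 0) ? 0 : (32 - __builtin_clz((x))))

int main(void)
{
        printf("fls(0) = %d\n", fls(0)); /* 0: 1 queue  -> 2^0 = 1 index  */
        printf("fls(1) = %d\n", fls(1)); /* 1: 2 queues -> 2^1 = 2 indices */
        printf("fls(7) = %d\n", fls(7)); /* 3: 8 queues -> 2^3 = 8 indices */
        return 0;
}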
* [dpdk-dev] [PATCH v2 10/15] fm10k: add receive and tranmit function 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (8 preceding siblings ...) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 09/15] fm10k: add dev start/stop functions Chen Jing D(Mark) @ 2015-02-04 10:40 ` Chen Jing D(Mark) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 11/15] fm10k: add PF RSS support Chen Jing D(Mark) ` (4 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-04 10:40 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_recv_pkts and fm10k_xmit_pkts functions. 2. Link app function pointer to actual fm10k recv/xmit functions. 3. Change Makefile to compile new file fm10k_rxtx.c Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/Makefile | 1 + lib/librte_pmd_fm10k/fm10k.h | 7 + lib/librte_pmd_fm10k/fm10k_ethdev.c | 2 + lib/librte_pmd_fm10k/fm10k_rxtx.c | 299 +++++++++++++++++++++++++++++++++++ 4 files changed, 309 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c diff --git a/lib/librte_pmd_fm10k/Makefile b/lib/librte_pmd_fm10k/Makefile index 1da84e9..8ab788c 100644 --- a/lib/librte_pmd_fm10k/Makefile +++ b/lib/librte_pmd_fm10k/Makefile @@ -79,6 +79,7 @@ VPATH += $(RTE_SDK)/lib/librte_pmd_fm10k/base # all source are stored in SRCS-y # SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_ethdev.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_rxtx.c SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_pf.c SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_tlv.c diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index be990e5..a9b19cd 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -280,4 +280,11 @@ fm10k_addr_alignment_valid(struct rte_mbuf *mb) return 0; } + +/* Rx and Tx prototypes */ +uint16_t fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); + +uint16_t fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 011b0bc..af68ad7 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1245,6 +1245,8 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, PMD_INIT_FUNC_TRACE(); dev->dev_ops = &fm10k_eth_dev_ops; + dev->rx_pkt_burst = &fm10k_recv_pkts; + dev->tx_pkt_burst = &fm10k_xmit_pkts; /* only initialize in the primary process */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c new file mode 100644 index 0000000..1d95bff --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c @@ -0,0 +1,299 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ +#include <rte_ethdev.h> +#include <rte_common.h> +#include "fm10k.h" +#include "base/fm10k_type.h" + +#ifdef RTE_PMD_PACKET_PREFETCH +#define rte_packet_prefetch(p) rte_prefetch1(p) +#else +#define rte_packet_prefetch(p) do {} while (0) +#endif + +static inline void dump_rxd(union fm10k_rx_desc *rxd) +{ +#ifndef RTE_LIBRTE_FM10K_DEBUG_RX + RTE_SET_USED(rxd); +#endif + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| GLORT | PKT HDR & TYPE |"); + PMD_RX_LOG(DEBUG, "| 0x%08x | 0x%08x |", rxd->d.glort, + rxd->d.data); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| VLAN & LEN | STATUS |"); + PMD_RX_LOG(DEBUG, "| 0x%08x | 0x%08x |", rxd->d.vlan_len, + rxd->d.staterr); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| RESERVED | RSS_HASH |"); + PMD_RX_LOG(DEBUG, "| 0x%08x | 0x%08x |", 0, rxd->d.rss); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); + PMD_RX_LOG(DEBUG, "| TIME TAG |"); + PMD_RX_LOG(DEBUG, "| 0x%016lx |", rxd->q.timestamp); + PMD_RX_LOG(DEBUG, "+----------------|----------------+"); +} + +static inline void +rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d) +{ + uint16_t ptype; + static const uint16_t pt_lut[] = { 0, + PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, + PKT_RX_IPV6_HDR, PKT_RX_IPV6_HDR_EXT, + 0, 0, 0 + }; + + if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK) + m->ol_flags |= PKT_RX_RSS_HASH; + + if ((d->d.staterr & (FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)) == + (FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)) + m->ol_flags |= PKT_RX_IP_CKSUM_BAD; + + if ((d->d.staterr & (FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)) == + (FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)) + m->ol_flags |= PKT_RX_L4_CKSUM_BAD; + + if (d->d.staterr & FM10K_RXD_STATUS_VEXT) + m->ol_flags |= PKT_RX_VLAN_PKT; + + if (d->d.staterr & FM10K_RXD_STATUS_HBO) + m->ol_flags |= PKT_RX_HBUF_OVERFLOW; + + if (d->d.staterr & FM10K_RXD_STATUS_RXE) + m->ol_flags |= PKT_RX_RECIP_ERR; + + ptype = (d->d.data & FM10K_RXD_PKTTYPE_MASK_L3) >> + FM10K_RXD_PKTTYPE_SHIFT; + m->ol_flags |= pt_lut[(uint8_t)ptype]; +} + +uint16_t +fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct rte_mbuf *mbuf; + union fm10k_rx_desc desc; + struct fm10k_rx_queue *q = rx_queue; + uint16_t count = 0; + int alloc = 0; + uint16_t next_dd; + + next_dd = q->next_dd; + + nb_pkts = RTE_MIN(nb_pkts, q->alloc_thresh); + for (count = 0; count < nb_pkts; ++count) { + mbuf = q->sw_ring[next_dd]; 
+ desc = q->hw_ring[next_dd]; + if (!(desc.d.staterr & FM10K_RXD_STATUS_DD)) + break; +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX + dump_rxd(&desc); +#endif + rte_pktmbuf_pkt_len(mbuf) = desc.w.length; + rte_pktmbuf_data_len(mbuf) = desc.w.length; + + mbuf->ol_flags = 0; +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE + rx_desc_to_ol_flags(mbuf, &desc); +#endif + + mbuf->hash.rss = desc.d.rss; + + rx_pkts[count] = mbuf; + if (++next_dd == q->nb_desc) { + next_dd = 0; + alloc = 1; + } + + /* Prefetch next mbuf while processing current one. */ + rte_prefetch0(q->sw_ring[next_dd]); + + /* + * When next RX descriptor is on a cache-line boundary, + * prefetch the next 4 RX descriptors and the next 8 pointers + * to mbufs. + */ + if ((next_dd & 0x3) == 0) { + rte_prefetch0(&q->hw_ring[next_dd]); + rte_prefetch0(&q->sw_ring[next_dd]); + } + } + + q->next_dd = next_dd; + + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { + rte_mempool_get_bulk(q->mp, (void **)&q->sw_ring[q->next_alloc], + q->alloc_thresh); + for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) { + mbuf = q->sw_ring[q->next_alloc]; + + /* setup static mbuf fields */ + fm10k_pktmbuf_reset(mbuf, q->port_id); + + /* write descriptor */ + desc.q.pkt_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + desc.q.hdr_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + q->hw_ring[q->next_alloc] = desc; + } + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger); + q->next_trigger += q->alloc_thresh; + if (q->next_trigger >= q->nb_desc) { + q->next_trigger = q->alloc_thresh - 1; + q->next_alloc = 0; + } + } + + return count; +} + +static inline void tx_free_descriptors(struct fm10k_tx_queue *q) +{ + uint16_t next_rs, count = 0; + + next_rs = fifo_peek(&q->rs_tracker); + if (!(q->hw_ring[next_rs].flags & FM10K_TXD_FLAG_DONE)) + return; + + /* the DONE flag is set on this descriptor so remove the ID + * from the RS bit tracker and free the buffers */ + fifo_remove(&q->rs_tracker); + + /* wrap around? if so, free buffers from last_free up to but NOT + * including nb_desc */ + if (q->last_free > next_rs) { + count = q->nb_desc - q->last_free; + while (q->last_free < q->nb_desc) { + rte_pktmbuf_free_seg(q->sw_ring[q->last_free]); + q->sw_ring[q->last_free] = NULL; + ++q->last_free; + } + q->last_free = 0; + } + + /* adjust free descriptor count before the next loop */ + q->nb_free += count + (next_rs + 1 - q->last_free); + + /* free buffers from last_free, up to and including next_rs */ + while (q->last_free <= next_rs) { + rte_pktmbuf_free_seg(q->sw_ring[q->last_free]); + q->sw_ring[q->last_free] = NULL; + ++q->last_free; + } + + if (q->last_free == q->nb_desc) + q->last_free = 0; +} + +static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb) +{ + uint16_t last_id; + uint8_t flags; + + /* always set the LAST flag on the last descriptor used to + * transmit the packet */ + flags = FM10K_TXD_FLAG_LAST; + last_id = q->next_free + mb->nb_segs - 1; + if (last_id >= q->nb_desc) + last_id = last_id - q->nb_desc; + + /* but only set the RS flag on the last descriptor if rs_thresh + * descriptors will be used since the RS flag was last set */ + if ((q->nb_used + mb->nb_segs) >= q->rs_thresh) { + flags |= FM10K_TXD_FLAG_RS; + fifo_insert(&q->rs_tracker, last_id); + q->nb_used = 0; + } else { + q->nb_used = q->nb_used + mb->nb_segs; + } + + q->hw_ring[last_id].flags = flags; + q->nb_free -= mb->nb_segs; + + /* set checksum flags on first descriptor of packet. 
SCTP checksum + * offload is not supported, but we do not explicitely check for this + * case in favor of greatly simplified processing. */ + if (mb->ol_flags & (PKT_TX_IPV4_CSUM | PKT_TX_L4_MASK)) + q->hw_ring[q->next_free].flags |= FM10K_TXD_FLAG_CSUM; + + /* set vlan if requested */ + if (mb->ol_flags & PKT_TX_VLAN_PKT) + q->hw_ring[q->next_free].vlan = mb->vlan_tci; + + /* fill up the rings */ + for (; mb != NULL; mb = mb->next) { + q->sw_ring[q->next_free] = mb; + q->hw_ring[q->next_free].buffer_addr = + rte_cpu_to_le_64(MBUF_DMA_ADDR(mb)); + q->hw_ring[q->next_free].buflen = + rte_cpu_to_le_16(rte_pktmbuf_data_len(mb)); + if (++q->next_free == q->nb_desc) + q->next_free = 0; + } +} + +uint16_t +fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + struct fm10k_tx_queue *q = tx_queue; + struct rte_mbuf *mb; + uint16_t count; + + for (count = 0; count < nb_pkts; ++count) { + mb = tx_pkts[count]; + + /* running low on descriptors? try to free some... */ + if (q->nb_free < q->free_trigger) + tx_free_descriptors(q); + + /* make sure there are enough free descriptors to transmit the + * entire packet before doing anything */ + if (q->nb_free < mb->nb_segs) + break; + + /* sanity check to make sure the mbuf is valid */ + if ((mb->nb_segs == 0) || + ((mb->nb_segs > 1) && (mb->next == NULL))) + break; + + /* process the packet */ + tx_xmit_pkt(q, mb); + } + + /* update the tail pointer if any packets were processed */ + if (count > 0) + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_free); + + return count; +} -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
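The receive and transmit routines above sit behind the standard burst API; a minimal forwarding loop exercising them (port and queue numbers illustrative) is sketched below. Note that fm10k_xmit_pkts() may legitimately return fewer packets than requested when descriptors run short, so unsent mbufs must be freed by the caller:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void
example_fwd_loop(uint8_t port_id)
{
        struct rte_mbuf *pkts[BURST_SIZE];
        uint16_t nb_rx, nb_tx, i;

        for (;;) {
                nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST_SIZE);
                if (nb_rx == 0)
                        continue;
                nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);
                /* free whatever the TX path did not accept */
                for (i = nb_tx; i < nb_rx; i++)
                        rte_pktmbuf_free(pkts[i]);
        }
}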
* [dpdk-dev] [PATCH v2 11/15] fm10k: add PF RSS support 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (9 preceding siblings ...) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 10/15] fm10k: add receive and tranmit function Chen Jing D(Mark) @ 2015-02-04 10:40 ` Chen Jing D(Mark) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 12/15] fm10k: Add scatter receive function Chen Jing D(Mark) ` (3 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-04 10:40 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Configure RSS in fm10k_dev_rx_init function. 2. Add fm10k_rss_hash_update and fm10k_rss_hash_conf_get to get and inquery RSS configuration. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 156 +++++++++++++++++++++++++++++++++++ 1 files changed, 156 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index af68ad7..d434b02 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -270,6 +270,78 @@ fm10k_dev_configure(struct rte_eth_dev *dev) return 0; } +static void +fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct rte_eth_conf *dev_conf = &dev->data->dev_conf; + uint32_t mrqc, *key, i, reta, j; + uint64_t hf; + +#define RSS_KEY_SIZE 40 + static uint8_t rss_intel_key[RSS_KEY_SIZE] = { + 0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2, + 0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0, + 0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4, + 0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C, + 0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA, + }; + + if (dev->data->nb_rx_queues == 1 || + dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS || + dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) + return; + + /* random key is rss_intel_key (default) or user provided (rss_key) */ + if (dev_conf->rx_adv_conf.rss_conf.rss_key == NULL) + key = (uint32_t *)rss_intel_key; + else + key = (uint32_t *)dev_conf->rx_adv_conf.rss_conf.rss_key; + + /* Now fill our hash function seeds, 4 bytes at a time */ + for (i = 0; i < RSS_KEY_SIZE / sizeof(*key); ++i) + FM10K_WRITE_REG(hw, FM10K_RSSRK(0, i), key[i]); + + /* + * Fill in redirection table + * The byte-swap is needed because NIC registers are in + * little-endian order. + */ + reta = 0; + for (i = 0, j = 0; i < FM10K_RETA_SIZE; i++, j++) { + if (j == dev->data->nb_rx_queues) + j = 0; + reta = (reta << CHAR_BIT) | j; + if ((i & 3) == 3) + FM10K_WRITE_REG(hw, FM10K_RETA(0, i >> 2), + rte_bswap32(reta)); + } + + /* + * Generate RSS hash based on packet types, TCP/UDP + * port numbers and/or IPv4/v6 src and dst addresses + */ + hf = dev_conf->rx_adv_conf.rss_conf.rss_hf; + mrqc = 0; + mrqc |= (hf & ETH_RSS_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? 
FM10K_MRQC_UDP_IPV6 : 0; + + if (mrqc == 0) { + PMD_INIT_LOG(ERR, "Specified RSS mode 0x%"PRIx64"is not" + "supported", hf); + return; + } + + FM10K_WRITE_REG(hw, FM10K_MRQC(0), mrqc); +} + static int fm10k_dev_tx_init(struct rte_eth_dev *dev) { @@ -359,6 +431,8 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_FLUSH(hw); } + /* Configure RSS if applicable */ + fm10k_dev_mq_rx_configure(dev); return 0; } @@ -1165,6 +1239,86 @@ fm10k_reta_query(struct rte_eth_dev *dev, return 0; } +static int +fm10k_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t *key = (uint32_t *)rss_conf->rss_key; + uint32_t mrqc; + uint64_t hf = rss_conf->rss_hf; + int i; + + PMD_INIT_FUNC_TRACE(); + + if (rss_conf->rss_key_len < FM10K_RSSRK_SIZE * + FM10K_RSSRK_ENTRIES_PER_REG) + return -EINVAL; + + if (hf == 0) + return -EINVAL; + + mrqc = 0; + mrqc |= (hf & ETH_RSS_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0; + + /* If the mapping doesn't fit any supported, return */ + if (mrqc == 0) + return -EINVAL; + + if (key != NULL) + for (i = 0; i < FM10K_RSSRK_SIZE; ++i) + FM10K_WRITE_REG(hw, FM10K_RSSRK(0, i), key[i]); + + FM10K_WRITE_REG(hw, FM10K_MRQC(0), mrqc); + + return 0; +} + +static int +fm10k_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t *key = (uint32_t *)rss_conf->rss_key; + uint32_t mrqc; + uint64_t hf; + int i; + + PMD_INIT_FUNC_TRACE(); + + if (rss_conf->rss_key_len < FM10K_RSSRK_SIZE * + FM10K_RSSRK_ENTRIES_PER_REG) + return -EINVAL; + + if (key != NULL) + for (i = 0; i < FM10K_RSSRK_SIZE; ++i) + key[i] = FM10K_READ_REG(hw, FM10K_RSSRK(0, i)); + + mrqc = FM10K_READ_REG(hw, FM10K_MRQC(0)); + hf = 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? ETH_RSS_IPV4_TCP : 0; + hf |= (mrqc & FM10K_MRQC_IPV4) ? ETH_RSS_IPV4 : 0; + hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6 : 0; + hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6_EX : 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP : 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP_EX : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? ETH_RSS_IPV4_UDP : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP_EX : 0; + + rss_conf->rss_hf = hf; + + return 0; +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -1233,6 +1387,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .tx_queue_release = fm10k_tx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, + .rss_hash_update = fm10k_rss_hash_update, + .rss_hash_conf_get = fm10k_rss_hash_conf_get, }; static int -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
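fm10k_dev_mq_rx_configure() above only takes effect when the application requests RSS at configure time; a sketch of the corresponding device configuration (hash types taken from the MRQC mapping in the patch, queue counts illustrative):

#include <rte_ethdev.h>

static int
example_configure_rss(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
        struct rte_eth_conf port_conf = {
                .rxmode = {
                        .mq_mode = ETH_MQ_RX_RSS,
                },
                .rx_adv_conf = {
                        .rss_conf = {
                                .rss_key = NULL, /* fall back to rss_intel_key */
                                .rss_hf = ETH_RSS_IPV4 | ETH_RSS_IPV4_TCP |
                                          ETH_RSS_IPV4_UDP,
                        },
                },
        };

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
}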
* [dpdk-dev] [PATCH v2 12/15] fm10k: Add scatter receive function 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (10 preceding siblings ...) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 11/15] fm10k: add PF RSS support Chen Jing D(Mark) @ 2015-02-04 10:40 ` Chen Jing D(Mark) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 13/15] fm10k: add function to set vlan Chen Jing D(Mark) ` (2 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-04 10:40 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_recv_scattered_pkts function to receive jumbo frame and multi-segment packets. 2. Configure correct receive function in rx_init and dev_init. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k.h | 3 + lib/librte_pmd_fm10k/fm10k_ethdev.c | 15 ++++ lib/librte_pmd_fm10k/fm10k_rxtx.c | 128 +++++++++++++++++++++++++++++++++++ 3 files changed, 146 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index a9b19cd..8741936 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -285,6 +285,9 @@ fm10k_addr_alignment_valid(struct rte_mbuf *mb) uint16_t fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +uint16_t fm10k_recv_scattered_pkts(void *rx_queue, + struct rte_mbuf **rx_pkts, uint16_t nb_pkts); + uint16_t fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index d434b02..a2fcd63 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -423,6 +423,13 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_REG(hw, FM10K_SRRCTL(i), buf_size >> FM10K_SRRCTL_BSIZEPKT_SHIFT); + /* It adds dual VLAN length for supporting dual VLAN */ + if ((dev->data->dev_conf.rxmode.max_rx_pkt_len + + 2 * FM10K_VLAN_TAG_SIZE) > buf_size){ + dev->data->scattered_rx = 1; + dev->rx_pkt_burst = fm10k_recv_scattered_pkts; + } + /* Enable drop on empty, it's RO for VF */ if (hw->mac.type == fm10k_mac_pf && rxq->drop_en) rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY; @@ -431,6 +438,11 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_FLUSH(hw); } + if (dev->data->dev_conf.rxmode.enable_scatter) { + dev->rx_pkt_burst = fm10k_recv_scattered_pkts; + dev->data->scattered_rx = 1; + } + /* Configure RSS if applicable */ fm10k_dev_mq_rx_configure(dev); return 0; @@ -1404,6 +1416,9 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, dev->rx_pkt_burst = &fm10k_recv_pkts; dev->tx_pkt_burst = &fm10k_xmit_pkts; + if (dev->data->scattered_rx) + dev->rx_pkt_burst = &fm10k_recv_scattered_pkts; + /* only initialize in the primary process */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c index 1d95bff..7591a99 100644 --- a/lib/librte_pmd_fm10k/fm10k_rxtx.c +++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c @@ -177,6 +177,134 @@ fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, return count; } +uint16_t +fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct rte_mbuf *mbuf; + union fm10k_rx_desc desc; + struct fm10k_rx_queue *q = rx_queue; + uint16_t count = 0; + uint16_t nb_rcv, nb_seg; + int alloc = 0; + uint16_t next_dd; + struct 
rte_mbuf *first_seg = q->pkt_first_seg; + struct rte_mbuf *last_seg = q->pkt_last_seg; + + next_dd = q->next_dd; + nb_rcv = 0; + + nb_seg = RTE_MIN(nb_pkts, q->alloc_thresh); + for (count = 0; count < nb_seg; count++) { + mbuf = q->sw_ring[next_dd]; + desc = q->hw_ring[next_dd]; + if (!(desc.d.staterr & FM10K_RXD_STATUS_DD)) + break; +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX + dump_rxd(&desc); +#endif + + if (++next_dd == q->nb_desc) { + next_dd = 0; + alloc = 1; + } + + /* Prefetch next mbuf while processing current one. */ + rte_prefetch0(q->sw_ring[next_dd]); + + /* + * When next RX descriptor is on a cache-line boundary, + * prefetch the next 4 RX descriptors and the next 8 pointers + * to mbufs. + */ + if ((next_dd & 0x3) == 0) { + rte_prefetch0(&q->hw_ring[next_dd]); + rte_prefetch0(&q->sw_ring[next_dd]); + } + + /* Fill data length */ + rte_pktmbuf_data_len(mbuf) = desc.w.length; + + /* + * If this is the first buffer of the received packet, + * set the pointer to the first mbuf of the packet and + * initialize its context. + * Otherwise, update the total length and the number of segments + * of the current scattered packet, and update the pointer to + * the last mbuf of the current packet. + */ + if (!first_seg) { + first_seg = mbuf; + first_seg->pkt_len = desc.w.length; + } else { + first_seg->pkt_len = + (uint16_t)(first_seg->pkt_len + + rte_pktmbuf_data_len(mbuf)); + first_seg->nb_segs++; + last_seg->next = mbuf; + } + + /* + * If this is not the last buffer of the received packet, + * update the pointer to the last mbuf of the current scattered + * packet and continue to parse the RX ring. + */ + if (!(desc.d.staterr & FM10K_RXD_STATUS_EOP)) { + last_seg = mbuf; + continue; + } + + first_seg->ol_flags = 0; +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE + rx_desc_to_ol_flags(first_seg, &desc); +#endif + first_seg->hash.rss = desc.d.rss; + + /* Prefetch data of first segment, if configured to do so. */ + rte_packet_prefetch((char *)first_seg->buf_addr + + first_seg->data_off); + + /* + * Store the mbuf address into the next entry of the array + * of returned packets. + */ + rx_pkts[nb_rcv++] = first_seg; + + /* + * Setup receipt context for a new packet. + */ + first_seg = NULL; + } + + q->next_dd = next_dd; + q->pkt_first_seg = first_seg; + q->pkt_last_seg = last_seg; + + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { + rte_mempool_get_bulk(q->mp, (void **)&q->sw_ring[q->next_alloc], + q->alloc_thresh); + for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) { + mbuf = q->sw_ring[q->next_alloc]; + + /* setup static mbuf fields */ + fm10k_pktmbuf_reset(mbuf, q->port_id); + + /* write descriptor */ + desc.q.pkt_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + desc.q.hdr_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + q->hw_ring[q->next_alloc] = desc; + } + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger); + q->next_trigger += q->alloc_thresh; + if (q->next_trigger >= q->nb_desc) { + q->next_trigger = q->alloc_thresh - 1; + q->next_alloc = 0; + } + } + + return nb_rcv; +} + static inline void tx_free_descriptors(struct fm10k_tx_queue *q) { uint16_t next_rs, count = 0; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
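The scattered receive path above is selected either implicitly, when max_rx_pkt_len plus two VLAN tags no longer fits one mbuf data buffer, or explicitly via enable_scatter; a sketch of a jumbo-frame configuration that triggers it (the 9000-byte limit is an illustrative value, not a documented maximum for this device):

#include <rte_ethdev.h>

static int
example_configure_jumbo(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
        struct rte_eth_conf port_conf = {
                .rxmode = {
                        .jumbo_frame = 1,
                        .enable_scatter = 1,    /* also selects the path directly */
                        .max_rx_pkt_len = 9000,
                },
        };

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
}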
* [dpdk-dev] [PATCH v2 13/15] fm10k: add function to set vlan 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (11 preceding siblings ...) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 12/15] fm10k: Add scatter receive function Chen Jing D(Mark) @ 2015-02-04 10:40 ` Chen Jing D(Mark) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 14/15] fm10k: Add SRIOV-VF support Chen Jing D(Mark) 2015-02-04 10:41 ` [dpdk-dev] [PATCH v2 15/15] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-04 10:40 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_vlan_filter_set to set vlan. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 15 +++++++++++++++ 1 files changed, 15 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index a2fcd63..1cf6def 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -787,6 +787,20 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev, } +static int +fm10k_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + /* @todo - add support for the VF */ + if (hw->mac.type != fm10k_mac_pf) + return -ENOTSUP; + + return fm10k_update_vlan(hw, vlan_id, 0, on); +} + static inline int check_nb_desc(uint16_t min, uint16_t max, uint16_t mult, uint16_t request) { @@ -1389,6 +1403,7 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_reset = fm10k_stats_reset, .link_update = fm10k_link_update, .dev_infos_get = fm10k_dev_infos_get, + .vlan_filter_set = fm10k_vlan_filter_set, .rx_queue_start = fm10k_dev_rx_queue_start, .rx_queue_stop = fm10k_dev_rx_queue_stop, .tx_queue_start = fm10k_dev_tx_queue_start, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
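A hypothetical caller-side sketch for the new callback: the PMD's .vlan_filter_set is reached through the generic ethdev API, so an application of this period would simply do something like the following (port and VLAN id are illustrative):

#include <rte_ethdev.h>

/* Illustrative only: admit VLAN 100 on an assumed port; on a VF the
 * handler above returns -ENOTSUP until VF support is added. */
static int
allow_vlan_100(uint8_t port_id)
{
	return rte_eth_dev_vlan_filter(port_id, 100, 1 /* on */);
}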
* [dpdk-dev] [PATCH v2 14/15] fm10k: Add SRIOV-VF support 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (12 preceding siblings ...) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 13/15] fm10k: add function to set vlan Chen Jing D(Mark) @ 2015-02-04 10:40 ` Chen Jing D(Mark) 2015-02-04 10:41 ` [dpdk-dev] [PATCH v2 15/15] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-04 10:40 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> fm10k pmd driver will support both PF and VF device with single copy of code. The reason is NIC maps registers with same function in PF and VF to same PCI I/O address. Then, PF/VF drivers use same address to access registers belonging to it, HW will translate the request to correct units. For some functionalities that are unique to PF, driver will check current driver type and behave correctly. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 1cf6def..8969d84 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1563,6 +1563,7 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, */ static struct rte_pci_id pci_id_fm10k_map[] = { #define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, +#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, #include "rte_pci_dev_ids.h" { .vendor_id = 0, /* sentinel */ }, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
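To make the single-copy PF/VF design concrete, the pattern used throughout the series for PF-only paths looks roughly like the sketch below (the function name is invented for illustration); everything else runs through the same register offsets for both PF and VF:

/* Illustrative pattern only: shared code paths test the MAC type before
 * touching PF-only functionality; all other paths are identical for PF/VF. */
static int
fm10k_example_pf_only_op(struct fm10k_hw *hw)
{
	if (hw->mac.type != fm10k_mac_pf)
		return -ENOTSUP;	/* running as a VF: not supported */

	/* ... PF-only register programming would go here ... */
	return 0;
}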
* [dpdk-dev] [PATCH v2 15/15] fm10k: add PF and VF interrupt handling function 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (13 preceding siblings ...) 2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 14/15] fm10k: Add SRIOV-VF support Chen Jing D(Mark) @ 2015-02-04 10:41 ` Chen Jing D(Mark) 14 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-02-04 10:41 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add functions to enable PF/VF interrupt. 2. Add function to process error message passed from interrupt. 2. Add 2 interrupt handling functions, one for PF and one for VF. 2. Enable interrupt after completing initialization of NIC. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 268 +++++++++++++++++++++++++++++++++++ 1 files changed, 268 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 8969d84..37b417c 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1345,6 +1345,256 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev, return 0; } +static void +fm10k_dev_enable_intr_pf(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t int_map = FM10K_INT_MAP_IMMEDIATE; + + /* Bind all local non-queue interrupt to vector 0 */ + int_map |= 0; + + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_Mailbox), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_PCIeFault), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SwitchUpDown), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SwitchEvent), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SRAM), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_VFLR), int_map); + + /* Enable misc causes */ + FM10K_WRITE_REG(hw, FM10K_EIMR, FM10K_EIMR_ENABLE(PCA_FAULT) | + FM10K_EIMR_ENABLE(THI_FAULT) | + FM10K_EIMR_ENABLE(FUM_FAULT) | + FM10K_EIMR_ENABLE(MAILBOX) | + FM10K_EIMR_ENABLE(SWITCHREADY) | + FM10K_EIMR_ENABLE(SWITCHNOTREADY) | + FM10K_EIMR_ENABLE(SRAMERROR) | + FM10K_EIMR_ENABLE(VFLR)); + + /* Enable ITR 0 */ + FM10K_WRITE_REG(hw, FM10K_ITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + FM10K_WRITE_FLUSH(hw); +} + +static void +fm10k_dev_enable_intr_vf(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t int_map = FM10K_INT_MAP_IMMEDIATE; + + /* Bind all local non-queue interrupt to vector 0 */ + int_map |= 0; + + /* Only INT 0 availiable, other 15 are reserved. 
*/ + FM10K_WRITE_REG(hw, FM10K_VFINT_MAP, int_map); + + /* Enable ITR 0 */ + FM10K_WRITE_REG(hw, FM10K_VFITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + FM10K_WRITE_FLUSH(hw); +} + +static int +fm10k_dev_handle_fault(struct fm10k_hw *hw, uint32_t eicr) +{ + struct fm10k_fault fault; + int err; + const char *estr = "Unknown error"; + + /* Process PCA fault */ + if (eicr & FM10K_EIMR_PCA_FAULT) { + err = fm10k_get_fault(hw, FM10K_PCA_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case PCA_NO_FAULT: + estr = "PCA_NO_FAULT"; break; + case PCA_UNMAPPED_ADDR: + estr = "PCA_UNMAPPED_ADDR"; break; + case PCA_BAD_QACCESS_PF: + estr = "PCA_BAD_QACCESS_PF"; break; + case PCA_BAD_QACCESS_VF: + estr = "PCA_BAD_QACCESS_VF"; break; + case PCA_MALICIOUS_REQ: + estr = "PCA_MALICIOUS_REQ"; break; + case PCA_POISONED_TLP: + estr = "PCA_POISONED_TLP"; break; + case PCA_TLP_ABORT: + estr = "PCA_TLP_ABORT"; break; + default: + goto error; + } + PMD_INIT_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIx64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + /* Process THI fault */ + if (eicr & FM10K_EIMR_THI_FAULT) { + err = fm10k_get_fault(hw, FM10K_THI_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case THI_NO_FAULT: + estr = "THI_NO_FAULT"; break; + case THI_MAL_DIS_Q_FAULT: + estr = "THI_MAL_DIS_Q_FAULT"; break; + default: + goto error; + } + PMD_INIT_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIx64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + /* Process FUM fault */ + if (eicr & FM10K_EIMR_FUM_FAULT) { + err = fm10k_get_fault(hw, FM10K_FUM_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case FUM_NO_FAULT: + estr = "FUM_NO_FAULT"; break; + case FUM_UNMAPPED_ADDR: + estr = "FUM_UNMAPPED_ADDR"; break; + case FUM_POISONED_TLP: + estr = "FUM_POISONED_TLP"; break; + case FUM_BAD_VF_QACCESS: + estr = "FUM_BAD_VF_QACCESS"; break; + case FUM_ADD_DECODE_ERR: + estr = "FUM_ADD_DECODE_ERR"; break; + case FUM_RO_ERROR: + estr = "FUM_RO_ERROR"; break; + case FUM_QPRC_CRC_ERROR: + estr = "FUM_QPRC_CRC_ERROR"; break; + case FUM_CSR_TIMEOUT: + estr = "FUM_CSR_TIMEOUT"; break; + case FUM_INVALID_TYPE: + estr = "FUM_INVALID_TYPE"; break; + case FUM_INVALID_LENGTH: + estr = "FUM_INVALID_LENGTH"; break; + case FUM_INVALID_BE: + estr = "FUM_INVALID_BE"; break; + case FUM_INVALID_ALIGN: + estr = "FUM_INVALID_ALIGN"; break; + default: + goto error; + } + PMD_INIT_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIx64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + if (estr) + return 0; + return 0; +error: + PMD_INIT_LOG(ERR, "Failed to handle fault event."); + return err; +} + +/** + * PF interrupt handler triggered by NIC for handling specific interrupt. + * + * @param handle + * Pointer to interrupt handle. + * @param param + * The address of parameter (struct rte_eth_dev *) regsitered before. 
+ * + * @return + * void + */ +static void +fm10k_dev_interrupt_handler_pf( + __rte_unused struct rte_intr_handle *handle, + void *param) +{ + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t cause, status; + + if (hw->mac.type != fm10k_mac_pf) + return; + + cause = FM10K_READ_REG(hw, FM10K_EICR); + + /* Handle PCI fault cases */ + if (cause & FM10K_EICR_FAULT_MASK) { + PMD_INIT_LOG(ERR, "INT: find fault!"); + fm10k_dev_handle_fault(hw, cause); + } + + /* Handle switch up/down */ + if (cause & FM10K_EICR_SWITCHNOTREADY) + PMD_INIT_LOG(ERR, "INT: Switch is not ready"); + + if (cause & FM10K_EICR_SWITCHREADY) + PMD_INIT_LOG(INFO, "INT: Switch is ready"); + + /* Handle mailbox message */ + fm10k_mbx_lock(hw); + hw->mbx.ops.process(hw, &hw->mbx); + fm10k_mbx_unlock(hw); + + /* Handle SRAM error */ + if (cause & FM10K_EICR_SRAMERROR) { + PMD_INIT_LOG(ERR, "INT: SRAM error on PEP"); + + status = FM10K_READ_REG(hw, FM10K_SRAM_IP); + /* Write to clear pending bits */ + FM10K_WRITE_REG(hw, FM10K_SRAM_IP, status); + + /* Todo: print out error message after shared code updates */ + } + + /* Clear these 3 events if having any */ + cause &= FM10K_EICR_SWITCHNOTREADY | FM10K_EICR_MAILBOX | + FM10K_EICR_SWITCHREADY; + if (cause) + FM10K_WRITE_REG(hw, FM10K_EICR, cause); + + /* Re-enable interrupt from device side */ + FM10K_WRITE_REG(hw, FM10K_ITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + /* Re-enable interrupt from host side */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); +} + +/** + * VF interrupt handler triggered by NIC for handling specific interrupt. + * + * @param handle + * Pointer to interrupt handle. + * @param param + * The address of parameter (struct rte_eth_dev *) regsitered before. + * + * @return + * void + */ +static void +fm10k_dev_interrupt_handler_vf( + __rte_unused struct rte_intr_handle *handle, + void *param) +{ + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + if (hw->mac.type != fm10k_mac_vf) + return; + + /* Handle mailbox message if lock is acquired */ + fm10k_mbx_lock(hw); + hw->mbx.ops.process(hw, &hw->mbx); + fm10k_mbx_unlock(hw); + + /* Re-enable interrupt from device side */ + FM10K_WRITE_REG(hw, FM10K_VFITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + /* Re-enable interrupt from host side */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -1525,6 +1775,21 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, return -EIO; } + /*PF/VF has different interrupt handling mechanism */ + if (hw->mac.type == fm10k_mac_pf) { + /* register callback func to eal lib */ + rte_intr_callback_register(&(dev->pci_dev->intr_handle), + fm10k_dev_interrupt_handler_pf, (void *)dev); + + /* enable MISC interrupt */ + fm10k_dev_enable_intr_pf(dev); + } else { /* VF */ + rte_intr_callback_register(&(dev->pci_dev->intr_handle), + fm10k_dev_interrupt_handler_vf, (void *)dev); + + fm10k_dev_enable_intr_vf(dev); + } + /* * Below function will trigger operations on mailbox, acquire lock to * avoid race condition from interrupt handler. 
Operations on mailbox @@ -1554,6 +1819,9 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, fm10k_mbx_unlock(hw); + /* enable uio intr after callback registered */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); + return 0; } -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH 02/18] Change config/ files to add macros for fm10k 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 01/18] fm10k: add base driver Chen Jing D(Mark) @ 2015-01-30 5:07 ` Chen Jing D(Mark) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 03/18] fm10k: Add empty fm10k files Chen Jing D(Mark) ` (16 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-01-30 5:07 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Change config/common_bsdapp and config/common_linuxapp, add macros to control fm10k pmd driver compile for linux and bsd. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- config/common_bsdapp | 9 +++++++++ config/common_linuxapp | 9 +++++++++ 2 files changed, 18 insertions(+), 0 deletions(-) diff --git a/config/common_bsdapp b/config/common_bsdapp index 9177db1..b2fa259 100644 --- a/config/common_bsdapp +++ b/config/common_bsdapp @@ -182,6 +182,15 @@ CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4 CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1 # +# Compile burst-oriented FM10K PMD +# +CONFIG_RTE_LIBRTE_FM10K_PMD=y +CONFIG_RTE_LIBRTE_FM10K_DEBUG=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_RX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX=n +CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y + +# # Compile burst-oriented Cisco ENIC PMD driver # CONFIG_RTE_LIBRTE_ENIC_PMD=y diff --git a/config/common_linuxapp b/config/common_linuxapp index 2f9643b..3705e80 100644 --- a/config/common_linuxapp +++ b/config/common_linuxapp @@ -180,6 +180,15 @@ CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4 CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1 # +# Compile burst-oriented FM10K PMD +# +CONFIG_RTE_LIBRTE_FM10K_PMD=y +CONFIG_RTE_LIBRTE_FM10K_DEBUG=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_RX=n +CONFIG_RTE_LIBRTE_FM10K_DEBUG_TX=n +CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y + +# # Compile burst-oriented Cisco ENIC PMD driver # CONFIG_RTE_LIBRTE_ENIC_PMD=y -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH 03/18] fm10k: Add empty fm10k files 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 01/18] fm10k: add base driver Chen Jing D(Mark) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 02/18] Change config/ files to add macros for fm10k Chen Jing D(Mark) @ 2015-01-30 5:07 ` Chen Jing D(Mark) 2015-01-31 14:02 ` Neil Horman 2015-02-01 13:01 ` David Marchand 2015-01-30 5:07 ` [dpdk-dev] [PATCH 04/18] fm10k: add fm10k device id Chen Jing D(Mark) ` (15 subsequent siblings) 18 siblings, 2 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-01-30 5:07 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Define macros and basic data structure. Define rte_log wrapper functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/Makefile | 96 ++++++++++++++++ lib/librte_pmd_fm10k/fm10k.h | 224 +++++++++++++++++++++++++++++++++++++ lib/librte_pmd_fm10k/fm10k_logs.h | 66 +++++++++++ 3 files changed, 386 insertions(+), 0 deletions(-) create mode 100644 lib/librte_pmd_fm10k/Makefile create mode 100644 lib/librte_pmd_fm10k/fm10k.h create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c diff --git a/lib/librte_pmd_fm10k/Makefile b/lib/librte_pmd_fm10k/Makefile new file mode 100644 index 0000000..3d76387 --- /dev/null +++ b/lib/librte_pmd_fm10k/Makefile @@ -0,0 +1,96 @@ +# BSD LICENSE +# +# Copyright(c) 2013-2014 Intel Corporation. All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# +# library name +# +LIB = librte_pmd_fm10k.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +ifeq ($(CC), icc) +# +# CFLAGS for icc +# +CFLAGS_BASE_DRIVER = -wd174 -wd593 -wd869 -wd981 -wd2259 + +else ifeq ($(CC), clang) +# +## CFLAGS for clang +# +CFLAGS_BASE_DRIVER = -Wno-unused-parameter -Wno-unused-value +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing -Wno-format-extra-args +CFLAGS_BASE_DRIVER += -Wno-unused-variable -Wno-unused-but-set-variable +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers + +else +# +# CFLAGS for gcc +# +ifneq ($(shell test $(GCC_MAJOR_VERSION) -le 4 -a $(GCC_MINOR_VERSION) -le 3 && echo 1), 1) +CFLAGS += -Wno-deprecated +endif +CFLAGS_BASE_DRIVER = -Wno-unused-parameter -Wno-unused-value +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing -Wno-format-extra-args +CFLAGS_BASE_DRIVER += -Wno-unused-variable -Wno-unused-but-set-variable +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers +endif + +# +# Add extra flags for base driver source files to disable warnings in them +# +BASE_DRIVER_OBJS=$(patsubst %.c,%.o,$(notdir $(wildcard $(RTE_SDK)/lib/librte_pmd_fm10k/SHARED/*.c))) +$(foreach obj, $(BASE_DRIVER_OBJS), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER))) + +VPATH += $(RTE_SDK)/lib/librte_pmd_fm10k/SHARED + +# +# all source are stored in SRCS-y +# +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_ethdev.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_rxtx.c + +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_pf.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_tlv.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_common.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_mbx.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_vf.c +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_api.c + +# this lib depends upon: +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_eal lib/librte_ether +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_mempool lib/librte_mbuf +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_net lib/librte_malloc + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h new file mode 100644 index 0000000..9b2d3da --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -0,0 +1,224 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _FM10K_H_ +#define _FM10K_H_ + +#include <stdint.h> +#include <rte_mbuf.h> +#include <rte_mempool.h> +#include <rte_malloc.h> +#include <rte_spinlock.h> +#include "fm10k_logs.h" +#include "SHARED/fm10k_type.h" + +/* descriptor ring base addresses must be aligned to the following */ +#define FM10K_ALIGN_RX_DESC 128 +#define FM10K_ALIGN_TX_DESC 128 + +/* The maximum packet size that FM10K supports */ +#define FM10K_MAX_PKT_SIZE (15 * 1024) + +/* Minimum size of RX buffer FM10K supported */ +#define FM10K_MIN_RX_BUF_SIZE 256 + +/* The maximum of SRIOV VFs per port supported */ +#define FM10K_MAX_VF_NUM 64 + +/* number of descriptors must be a multiple of the following */ +#define FM10K_MULT_RX_DESC FM10K_REQ_RX_DESCRIPTOR_MULTIPLE +#define FM10K_MULT_TX_DESC FM10K_REQ_TX_DESCRIPTOR_MULTIPLE + +/* maximum size of descriptor rings */ +#define FM10K_MAX_RX_RING_SZ (512 * 1024) +#define FM10K_MAX_TX_RING_SZ (512 * 1024) + +/* minimum and maximum number of descriptors in a ring */ +#define FM10K_MIN_RX_DESC 32 +#define FM10K_MIN_TX_DESC 32 +#define FM10K_MAX_RX_DESC (FM10K_MAX_RX_RING_SZ / sizeof(union fm10k_rx_desc)) +#define FM10K_MAX_TX_DESC (FM10K_MAX_TX_RING_SZ / sizeof(struct fm10k_tx_desc)) + +/* + * byte aligment for HW RX data buffer + * Datasheet requires RX buffer addresses shall either be 512-byte aligned or + * be 8-byte aligned but without crossing host memory pages (4KB alignment + * boundaries). Satisfy first option. + */ +#define FM10K_RX_DATABUF_ALIGN 512 + +/* + * threshold default, min, max, and divisor constraints + * the configured values must satisfy the following: + * MIN <= value <= MAX + * DIV % value == 0 + */ +#define FM10K_RX_FREE_THRESH_DEFAULT(rxq) 32 +#define FM10K_RX_FREE_THRESH_MIN(rxq) 1 +#define FM10K_RX_FREE_THRESH_MAX(rxq) ((rxq)->nb_desc - 1) +#define FM10K_RX_FREE_THRESH_DIV(rxq) ((rxq)->nb_desc) + +#define FM10K_TX_FREE_THRESH_DEFAULT(txq) 32 +#define FM10K_TX_FREE_THRESH_MIN(txq) 1 +#define FM10K_TX_FREE_THRESH_MAX(txq) ((txq)->nb_desc - 3) +#define FM10K_TX_FREE_THRESH_DIV(txq) 0 + +#define FM10K_DEFAULT_RX_PTHRESH 8 +#define FM10K_DEFAULT_RX_HTHRESH 8 +#define FM10K_DEFAULT_RX_WTHRESH 0 + +#define FM10K_DEFAULT_TX_PTHRESH 32 +#define FM10K_DEFAULT_TX_HTHRESH 0 +#define FM10K_DEFAULT_TX_WTHRESH 0 + +#define FM10K_TX_RS_THRESH_DEFAULT(txq) 32 +#define FM10K_TX_RS_THRESH_MIN(txq) 1 +#define FM10K_TX_RS_THRESH_MAX(txq) \ + RTE_MIN(((txq)->nb_desc - 2), (txq)->free_thresh) +#define FM10K_TX_RS_THRESH_DIV(txq) ((txq)->nb_desc) + +#define FM10K_VLAN_TAG_SIZE 4 + +struct fm10k_dev_info { + volatile uint32_t enable; + volatile uint32_t glort; + /* Protect the mailbox to avoid race condition */ + rte_spinlock_t mbx_lock; +}; + +/* + * Structure to store private data for each driver instance. 
+ */ +struct fm10k_adapter { + struct fm10k_hw hw; + struct fm10k_hw_stats stats; + struct fm10k_dev_info info; +}; + +#define FM10K_DEV_PRIVATE_TO_HW(adapter) \ + (&((struct fm10k_adapter *)adapter)->hw) + +#define FM10K_DEV_PRIVATE_TO_STATS(adapter) \ + (&((struct fm10k_adapter *)adapter)->stats) + +#define FM10K_DEV_PRIVATE_TO_INFO(adapter) \ + (&((struct fm10k_adapter *)adapter)->info) + +#define FM10K_DEV_PRIVATE_TO_MBXLOCK(adapter) \ + (&(((struct fm10k_adapter *)adapter)->info.mbx_lock)) + +struct fm10k_rx_queue { + struct rte_mempool *mp; + struct rte_mbuf **sw_ring; + volatile union fm10k_rx_desc *hw_ring; + struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */ + struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */ + uint64_t hw_ring_phys_addr; + uint16_t next_dd; + uint16_t next_alloc; + uint16_t next_trigger; + uint16_t alloc_thresh; + volatile uint32_t *tail_ptr; + uint16_t nb_desc; + uint16_t queue_id; + uint8_t port_id; + uint8_t drop_en; + uint8_t rx_deferred_start; /** < don't start this queue in dev start. */ +}; + +/* + * a FIFO is used to track which descriptors have their RS bit set for Tx + * queues which are configured to allow multiple descriptors per packet + */ +struct fifo { + uint16_t *list; + uint16_t *head; + uint16_t *tail; + uint16_t *endp; +}; + +struct fm10k_tx_queue { + struct rte_mbuf **sw_ring; + struct fm10k_tx_desc *hw_ring; + uint64_t hw_ring_phys_addr; + struct fifo rs_tracker; + uint16_t last_free; + uint16_t next_free; + uint16_t nb_free; + uint16_t nb_used; + uint16_t free_trigger; + uint16_t free_thresh; + uint16_t rs_thresh; + volatile uint32_t *tail_ptr; + uint16_t nb_desc; + uint8_t port_id; + uint8_t tx_deferred_start; /** < don't start this queue in dev start. */ + uint16_t queue_id; +}; + +#define MBUF_DMA_ADDR(mb) \ + ((uint64_t) ((mb)->buf_physaddr + (mb)->data_off)) + +/* enforce 512B alignment on default Rx DMA addresses */ +#define MBUF_DMA_ADDR_DEFAULT(mb) \ + ((uint64_t) RTE_ALIGN(((mb)->buf_physaddr + RTE_PKTMBUF_HEADROOM), 512)) + +static inline void fifo_reset(struct fifo *fifo, uint32_t len) +{ + fifo->head = fifo->tail = fifo->list; + fifo->endp = fifo->list + len; +} + +static inline void fifo_insert(struct fifo *fifo, uint16_t val) +{ + *fifo->head = val; + if (++fifo->head == fifo->endp) + fifo->head = fifo->list; +} + +/* do not worry about list being empty since we only check it once we know + * we have used enough descriptors to set the RS bit at least once */ +static inline uint16_t fifo_peek(struct fifo *fifo) +{ + return *fifo->tail; +} + +static inline uint16_t fifo_remove(struct fifo *fifo) +{ + uint16_t val; + val = *fifo->tail; + if (++fifo->tail == fifo->endp) + fifo->tail = fifo->list; + return val; +} +#endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c new file mode 100644 index 0000000..e69de29 diff --git a/lib/librte_pmd_fm10k/fm10k_logs.h b/lib/librte_pmd_fm10k/fm10k_logs.h new file mode 100644 index 0000000..0b4cd24 --- /dev/null +++ b/lib/librte_pmd_fm10k/fm10k_logs.h @@ -0,0 +1,66 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. 
+ * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _FM10K_LOGS_H_ +#define _FM10K_LOGS_H_ + +#include <rte_log.h> + +#ifdef RTE_LIBRTE_FM10K_DEBUG +#define PMD_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#define PMD_FUNC_TRACE() PMD_LOG(DEBUG, " >>") +#else +#define PMD_LOG(level, fmt, args...) do { } while (0) +#define PMD_FUNC_TRACE() do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX +#define PMD_LOG_RX(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#define PMD_FUNC_TRACE_RX() PMD_LOG_RX(DEBUG, " >>") +#else +#define PMD_LOG_RX(level, fmt, args...) do { } while (0) +#define PMD_FUNC_TRACE_RX() do { } while (0) +#endif + +#ifdef RTE_LIBRTE_FM10K_DEBUG_TX +#define PMD_LOG_TX(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#define PMD_FUNC_TRACE_TX() PMD_LOG_TX(DEBUG, " >>") +#else +#define PMD_LOG_TX(level, fmt, args...) do { } while (0) +#define PMD_FUNC_TRACE_TX() do { } while (0) +#endif + +#endif /* _FM10K_LOGS_H_ */ diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c new file mode 100644 index 0000000..e69de29 -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 03/18] fm10k: Add empty fm10k files 2015-01-30 5:07 ` [dpdk-dev] [PATCH 03/18] fm10k: Add empty fm10k files Chen Jing D(Mark) @ 2015-01-31 14:02 ` Neil Horman 2015-02-02 5:34 ` Chen, Jing D 2015-02-01 13:01 ` David Marchand 1 sibling, 1 reply; 147+ messages in thread From: Neil Horman @ 2015-01-31 14:02 UTC (permalink / raw) To: Chen Jing D(Mark); +Cc: dev On Fri, Jan 30, 2015 at 01:07:19PM +0800, Chen Jing D(Mark) wrote: > From: Jeff Shaw <jeffrey.b.shaw@intel.com> > > Define macros and basic data structure. > Define rte_log wrapper functions. > > Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> > Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> > --- > lib/librte_pmd_fm10k/Makefile | 96 ++++++++++++++++ > lib/librte_pmd_fm10k/fm10k.h | 224 +++++++++++++++++++++++++++++++++++++ > lib/librte_pmd_fm10k/fm10k_logs.h | 66 +++++++++++ > 3 files changed, 386 insertions(+), 0 deletions(-) > create mode 100644 lib/librte_pmd_fm10k/Makefile > create mode 100644 lib/librte_pmd_fm10k/fm10k.h > create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c > create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h > create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c > Why are you adding empty files? Neil ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 03/18] fm10k: Add empty fm10k files 2015-01-31 14:02 ` Neil Horman @ 2015-02-02 5:34 ` Chen, Jing D [not found] ` <20150202133848.GA21700@hmsreliant.think-freely.org> 0 siblings, 1 reply; 147+ messages in thread From: Chen, Jing D @ 2015-02-02 5:34 UTC (permalink / raw) To: Neil Horman; +Cc: dev Hi Neil, > -----Original Message----- > From: Neil Horman [mailto:nhorman@tuxdriver.com] > Sent: Saturday, January 31, 2015 10:02 PM > To: Chen, Jing D > Cc: dev@dpdk.org > Subject: Re: [dpdk-dev] [PATCH 03/18] fm10k: Add empty fm10k files > > On Fri, Jan 30, 2015 at 01:07:19PM +0800, Chen Jing D(Mark) wrote: > > From: Jeff Shaw <jeffrey.b.shaw@intel.com> > > > > Define macros and basic data structure. > > Define rte_log wrapper functions. > > > > Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> > > Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> > > --- > > lib/librte_pmd_fm10k/Makefile | 96 ++++++++++++++++ > > lib/librte_pmd_fm10k/fm10k.h | 224 > +++++++++++++++++++++++++++++++++++++ > > lib/librte_pmd_fm10k/fm10k_logs.h | 66 +++++++++++ > > 3 files changed, 386 insertions(+), 0 deletions(-) > > create mode 100644 lib/librte_pmd_fm10k/Makefile > > create mode 100644 lib/librte_pmd_fm10k/fm10k.h > > create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c > > create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h > > create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c > > > Why are you adding empty files? The 2 ".c" files are empty while the 2 ".h" files include code. "Makefile" includes rules to compile the ".c" files, I don't like to break the compile for every single patch, that's why the 2 ".c" files are added in this patch. > > Neil Thanks for your comments. Mark ^ permalink raw reply [flat|nested] 147+ messages in thread
[parent not found: <20150202133848.GA21700@hmsreliant.think-freely.org>]
* Re: [dpdk-dev] [PATCH 03/18] fm10k: Add empty fm10k files [not found] ` <20150202133848.GA21700@hmsreliant.think-freely.org> @ 2015-02-03 6:47 ` Chen, Jing D 0 siblings, 0 replies; 147+ messages in thread From: Chen, Jing D @ 2015-02-03 6:47 UTC (permalink / raw) To: Neil Horman; +Cc: dev Hi Neil, > -----Original Message----- > From: Neil Horman [mailto:nhorman@tuxdriver.com] > Sent: Monday, February 02, 2015 9:39 PM > To: Chen, Jing D > Cc: dev@dpdk.org > Subject: Re: [dpdk-dev] [PATCH 03/18] fm10k: Add empty fm10k files > > On Mon, Feb 02, 2015 at 05:34:43AM +0000, Chen, Jing D wrote: > > Hi Neil, > > > > > -----Original Message----- > > > From: Neil Horman [mailto:nhorman@tuxdriver.com] > > > Sent: Saturday, January 31, 2015 10:02 PM > > > To: Chen, Jing D > > > Cc: dev@dpdk.org > > > Subject: Re: [dpdk-dev] [PATCH 03/18] fm10k: Add empty fm10k files > > > > > > On Fri, Jan 30, 2015 at 01:07:19PM +0800, Chen Jing D(Mark) wrote: > > > > From: Jeff Shaw <jeffrey.b.shaw@intel.com> > > > > > > > > Define macros and basic data structure. > > > > Define rte_log wrapper functions. > > > > > > > > Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> > > > > Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> > > > > --- > > > > lib/librte_pmd_fm10k/Makefile | 96 ++++++++++++++++ > > > > lib/librte_pmd_fm10k/fm10k.h | 224 > > > +++++++++++++++++++++++++++++++++++++ > > > > lib/librte_pmd_fm10k/fm10k_logs.h | 66 +++++++++++ > > > > 3 files changed, 386 insertions(+), 0 deletions(-) > > > > create mode 100644 lib/librte_pmd_fm10k/Makefile > > > > create mode 100644 lib/librte_pmd_fm10k/fm10k.h > > > > create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c > > > > create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h > > > > create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c > > > > > > > Why are you adding empty files? > > > > The 2 ".c" files are empty while the 2 ".h" files include code. "Makefile" > includes rules to > > compile the ".c" files, I don't like to break the compile for every single patch, > that's why > > the 2 ".c" files are added in this patch. > > > That doesn't really answer the question. Theres no need to add empty files > here. Just add the headers alone and add the empy files on the first commit > where you have code to put in them. Adjust the makefile so that you add > them > into the compilation in the same commit that you populate the file to avoid a > FTBFS error. > Neil Got you. I'll add the content with new files. Thanks! > > > > > > > Neil > > > > Thanks for your comments. > > Mark > > ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 03/18] fm10k: Add empty fm10k files 2015-01-30 5:07 ` [dpdk-dev] [PATCH 03/18] fm10k: Add empty fm10k files Chen Jing D(Mark) 2015-01-31 14:02 ` Neil Horman @ 2015-02-01 13:01 ` David Marchand 1 sibling, 0 replies; 147+ messages in thread From: David Marchand @ 2015-02-01 13:01 UTC (permalink / raw) To: Chen Jing D(Mark); +Cc: dev On Fri, Jan 30, 2015 at 6:07 AM, Chen Jing D(Mark) <jing.d.chen@intel.com> wrote: > From: Jeff Shaw <jeffrey.b.shaw@intel.com> > > Define macros and basic data structure. > Define rte_log wrapper functions. > This comment applies to the logs macro (and the rest of the patchset). - don't use a build option for logs to be displayed, especially if these are init messages or error messages that prevent the pmd from working - you can remove this "Use RTE_LOG directly to make sure this error is seen." in fm10k_rx_queue_setup if you use a "init" macro that is not under a build option - don't use \n in logs, only one is enough Please, check the cleanup work that has been done in other Intel pmd (for example, ixgbe). I would really prefer we have consistent logs across dpdk. -- David Marchand > > Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> > Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> > --- > lib/librte_pmd_fm10k/Makefile | 96 ++++++++++++++++ > lib/librte_pmd_fm10k/fm10k.h | 224 > +++++++++++++++++++++++++++++++++++++ > lib/librte_pmd_fm10k/fm10k_logs.h | 66 +++++++++++ > 3 files changed, 386 insertions(+), 0 deletions(-) > create mode 100644 lib/librte_pmd_fm10k/Makefile > create mode 100644 lib/librte_pmd_fm10k/fm10k.h > create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c > create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h > create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c > > diff --git a/lib/librte_pmd_fm10k/Makefile b/lib/librte_pmd_fm10k/Makefile > new file mode 100644 > index 0000000..3d76387 > --- /dev/null > +++ b/lib/librte_pmd_fm10k/Makefile > @@ -0,0 +1,96 @@ > +# BSD LICENSE > +# > +# Copyright(c) 2013-2014 Intel Corporation. All rights reserved. > +# All rights reserved. > +# > +# Redistribution and use in source and binary forms, with or without > +# modification, are permitted provided that the following conditions > +# are met: > +# > +# * Redistributions of source code must retain the above copyright > +# notice, this list of conditions and the following disclaimer. > +# * Redistributions in binary form must reproduce the above copyright > +# notice, this list of conditions and the following disclaimer in > +# the documentation and/or other materials provided with the > +# distribution. > +# * Neither the name of Intel Corporation nor the names of its > +# contributors may be used to endorse or promote products derived > +# from this software without specific prior written permission. > +# > +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS > +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT > +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR > +# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT > +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, > +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT > +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, > +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY > +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT > +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE > +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. > + > +include $(RTE_SDK)/mk/rte.vars.mk > + > +# > +# library name > +# > +LIB = librte_pmd_fm10k.a > + > +CFLAGS += -O3 > +CFLAGS += $(WERROR_FLAGS) > + > +ifeq ($(CC), icc) > +# > +# CFLAGS for icc > +# > +CFLAGS_BASE_DRIVER = -wd174 -wd593 -wd869 -wd981 -wd2259 > + > +else ifeq ($(CC), clang) > +# > +## CFLAGS for clang > +# > +CFLAGS_BASE_DRIVER = -Wno-unused-parameter -Wno-unused-value > +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing -Wno-format-extra-args > +CFLAGS_BASE_DRIVER += -Wno-unused-variable -Wno-unused-but-set-variable > +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers > + > +else > +# > +# CFLAGS for gcc > +# > +ifneq ($(shell test $(GCC_MAJOR_VERSION) -le 4 -a $(GCC_MINOR_VERSION) > -le 3 && echo 1), 1) > +CFLAGS += -Wno-deprecated > +endif > +CFLAGS_BASE_DRIVER = -Wno-unused-parameter -Wno-unused-value > +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing -Wno-format-extra-args > +CFLAGS_BASE_DRIVER += -Wno-unused-variable -Wno-unused-but-set-variable > +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers > +endif > + > +# > +# Add extra flags for base driver source files to disable warnings in them > +# > +BASE_DRIVER_OBJS=$(patsubst %.c,%.o,$(notdir $(wildcard > $(RTE_SDK)/lib/librte_pmd_fm10k/SHARED/*.c))) > +$(foreach obj, $(BASE_DRIVER_OBJS), $(eval > CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER))) > + > +VPATH += $(RTE_SDK)/lib/librte_pmd_fm10k/SHARED > + > +# > +# all source are stored in SRCS-y > +# > +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_ethdev.c > +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_rxtx.c > + > +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_pf.c > +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_tlv.c > +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_common.c > +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_mbx.c > +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_vf.c > +SRCS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k_api.c > + > +# this lib depends upon: > +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_eal lib/librte_ether > +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_mempool > lib/librte_mbuf > +DEPDIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += lib/librte_net lib/librte_malloc > + > +include $(RTE_SDK)/mk/rte.lib.mk > diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h > new file mode 100644 > index 0000000..9b2d3da > --- /dev/null > +++ b/lib/librte_pmd_fm10k/fm10k.h > @@ -0,0 +1,224 @@ > +/*- > + * BSD LICENSE > + * > + * Copyright(c) 2013-2014 Intel Corporation. All rights reserved. > + * All rights reserved. > + * > + * Redistribution and use in source and binary forms, with or without > + * modification, are permitted provided that the following conditions > + * are met: > + * > + * * Redistributions of source code must retain the above copyright > + * notice, this list of conditions and the following disclaimer. 
> + * * Redistributions in binary form must reproduce the above copyright > + * notice, this list of conditions and the following disclaimer in > + * the documentation and/or other materials provided with the > + * distribution. > + * * Neither the name of Intel Corporation nor the names of its > + * contributors may be used to endorse or promote products derived > + * from this software without specific prior written permission. > + * > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS > + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT > + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR > + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT > + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, > + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT > + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, > + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY > + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT > + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE > + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. > + */ > + > +#ifndef _FM10K_H_ > +#define _FM10K_H_ > + > +#include <stdint.h> > +#include <rte_mbuf.h> > +#include <rte_mempool.h> > +#include <rte_malloc.h> > +#include <rte_spinlock.h> > +#include "fm10k_logs.h" > +#include "SHARED/fm10k_type.h" > + > +/* descriptor ring base addresses must be aligned to the following */ > +#define FM10K_ALIGN_RX_DESC 128 > +#define FM10K_ALIGN_TX_DESC 128 > + > +/* The maximum packet size that FM10K supports */ > +#define FM10K_MAX_PKT_SIZE (15 * 1024) > + > +/* Minimum size of RX buffer FM10K supported */ > +#define FM10K_MIN_RX_BUF_SIZE 256 > + > +/* The maximum of SRIOV VFs per port supported */ > +#define FM10K_MAX_VF_NUM 64 > + > +/* number of descriptors must be a multiple of the following */ > +#define FM10K_MULT_RX_DESC FM10K_REQ_RX_DESCRIPTOR_MULTIPLE > +#define FM10K_MULT_TX_DESC FM10K_REQ_TX_DESCRIPTOR_MULTIPLE > + > +/* maximum size of descriptor rings */ > +#define FM10K_MAX_RX_RING_SZ (512 * 1024) > +#define FM10K_MAX_TX_RING_SZ (512 * 1024) > + > +/* minimum and maximum number of descriptors in a ring */ > +#define FM10K_MIN_RX_DESC 32 > +#define FM10K_MIN_TX_DESC 32 > +#define FM10K_MAX_RX_DESC (FM10K_MAX_RX_RING_SZ / sizeof(union > fm10k_rx_desc)) > +#define FM10K_MAX_TX_DESC (FM10K_MAX_TX_RING_SZ / sizeof(struct > fm10k_tx_desc)) > + > +/* > + * byte aligment for HW RX data buffer > + * Datasheet requires RX buffer addresses shall either be 512-byte > aligned or > + * be 8-byte aligned but without crossing host memory pages (4KB alignment > + * boundaries). Satisfy first option. 
> + */ > +#define FM10K_RX_DATABUF_ALIGN 512 > + > +/* > + * threshold default, min, max, and divisor constraints > + * the configured values must satisfy the following: > + * MIN <= value <= MAX > + * DIV % value == 0 > + */ > +#define FM10K_RX_FREE_THRESH_DEFAULT(rxq) 32 > +#define FM10K_RX_FREE_THRESH_MIN(rxq) 1 > +#define FM10K_RX_FREE_THRESH_MAX(rxq) ((rxq)->nb_desc - 1) > +#define FM10K_RX_FREE_THRESH_DIV(rxq) ((rxq)->nb_desc) > + > +#define FM10K_TX_FREE_THRESH_DEFAULT(txq) 32 > +#define FM10K_TX_FREE_THRESH_MIN(txq) 1 > +#define FM10K_TX_FREE_THRESH_MAX(txq) ((txq)->nb_desc - 3) > +#define FM10K_TX_FREE_THRESH_DIV(txq) 0 > + > +#define FM10K_DEFAULT_RX_PTHRESH 8 > +#define FM10K_DEFAULT_RX_HTHRESH 8 > +#define FM10K_DEFAULT_RX_WTHRESH 0 > + > +#define FM10K_DEFAULT_TX_PTHRESH 32 > +#define FM10K_DEFAULT_TX_HTHRESH 0 > +#define FM10K_DEFAULT_TX_WTHRESH 0 > + > +#define FM10K_TX_RS_THRESH_DEFAULT(txq) 32 > +#define FM10K_TX_RS_THRESH_MIN(txq) 1 > +#define FM10K_TX_RS_THRESH_MAX(txq) \ > + RTE_MIN(((txq)->nb_desc - 2), (txq)->free_thresh) > +#define FM10K_TX_RS_THRESH_DIV(txq) ((txq)->nb_desc) > + > +#define FM10K_VLAN_TAG_SIZE 4 > + > +struct fm10k_dev_info { > + volatile uint32_t enable; > + volatile uint32_t glort; > + /* Protect the mailbox to avoid race condition */ > + rte_spinlock_t mbx_lock; > +}; > + > +/* > + * Structure to store private data for each driver instance. > + */ > +struct fm10k_adapter { > + struct fm10k_hw hw; > + struct fm10k_hw_stats stats; > + struct fm10k_dev_info info; > +}; > + > +#define FM10K_DEV_PRIVATE_TO_HW(adapter) \ > + (&((struct fm10k_adapter *)adapter)->hw) > + > +#define FM10K_DEV_PRIVATE_TO_STATS(adapter) \ > + (&((struct fm10k_adapter *)adapter)->stats) > + > +#define FM10K_DEV_PRIVATE_TO_INFO(adapter) \ > + (&((struct fm10k_adapter *)adapter)->info) > + > +#define FM10K_DEV_PRIVATE_TO_MBXLOCK(adapter) \ > + (&(((struct fm10k_adapter *)adapter)->info.mbx_lock)) > + > +struct fm10k_rx_queue { > + struct rte_mempool *mp; > + struct rte_mbuf **sw_ring; > + volatile union fm10k_rx_desc *hw_ring; > + struct rte_mbuf *pkt_first_seg; /**< First segment of current > packet. */ > + struct rte_mbuf *pkt_last_seg; /**< Last segment of current > packet. */ > + uint64_t hw_ring_phys_addr; > + uint16_t next_dd; > + uint16_t next_alloc; > + uint16_t next_trigger; > + uint16_t alloc_thresh; > + volatile uint32_t *tail_ptr; > + uint16_t nb_desc; > + uint16_t queue_id; > + uint8_t port_id; > + uint8_t drop_en; > + uint8_t rx_deferred_start; /** < don't start this queue in dev > start. */ > +}; > + > +/* > + * a FIFO is used to track which descriptors have their RS bit set for Tx > + * queues which are configured to allow multiple descriptors per packet > + */ > +struct fifo { > + uint16_t *list; > + uint16_t *head; > + uint16_t *tail; > + uint16_t *endp; > +}; > + > +struct fm10k_tx_queue { > + struct rte_mbuf **sw_ring; > + struct fm10k_tx_desc *hw_ring; > + uint64_t hw_ring_phys_addr; > + struct fifo rs_tracker; > + uint16_t last_free; > + uint16_t next_free; > + uint16_t nb_free; > + uint16_t nb_used; > + uint16_t free_trigger; > + uint16_t free_thresh; > + uint16_t rs_thresh; > + volatile uint32_t *tail_ptr; > + uint16_t nb_desc; > + uint8_t port_id; > + uint8_t tx_deferred_start; /** < don't start this queue in dev > start. 
*/ > + uint16_t queue_id; > +}; > + > +#define MBUF_DMA_ADDR(mb) \ > + ((uint64_t) ((mb)->buf_physaddr + (mb)->data_off)) > + > +/* enforce 512B alignment on default Rx DMA addresses */ > +#define MBUF_DMA_ADDR_DEFAULT(mb) \ > + ((uint64_t) RTE_ALIGN(((mb)->buf_physaddr + RTE_PKTMBUF_HEADROOM), > 512)) > + > +static inline void fifo_reset(struct fifo *fifo, uint32_t len) > +{ > + fifo->head = fifo->tail = fifo->list; > + fifo->endp = fifo->list + len; > +} > + > +static inline void fifo_insert(struct fifo *fifo, uint16_t val) > +{ > + *fifo->head = val; > + if (++fifo->head == fifo->endp) > + fifo->head = fifo->list; > +} > + > +/* do not worry about list being empty since we only check it once we know > + * we have used enough descriptors to set the RS bit at least once */ > +static inline uint16_t fifo_peek(struct fifo *fifo) > +{ > + return *fifo->tail; > +} > + > +static inline uint16_t fifo_remove(struct fifo *fifo) > +{ > + uint16_t val; > + val = *fifo->tail; > + if (++fifo->tail == fifo->endp) > + fifo->tail = fifo->list; > + return val; > +} > +#endif > diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c > b/lib/librte_pmd_fm10k/fm10k_ethdev.c > new file mode 100644 > index 0000000..e69de29 > diff --git a/lib/librte_pmd_fm10k/fm10k_logs.h > b/lib/librte_pmd_fm10k/fm10k_logs.h > new file mode 100644 > index 0000000..0b4cd24 > --- /dev/null > +++ b/lib/librte_pmd_fm10k/fm10k_logs.h > @@ -0,0 +1,66 @@ > +/*- > + * BSD LICENSE > + * > + * Copyright(c) 2013-2014 Intel Corporation. All rights reserved. > + * All rights reserved. > + * > + * Redistribution and use in source and binary forms, with or without > + * modification, are permitted provided that the following conditions > + * are met: > + * > + * * Redistributions of source code must retain the above copyright > + * notice, this list of conditions and the following disclaimer. > + * * Redistributions in binary form must reproduce the above copyright > + * notice, this list of conditions and the following disclaimer in > + * the documentation and/or other materials provided with the > + * distribution. > + * * Neither the name of Intel Corporation nor the names of its > + * contributors may be used to endorse or promote products derived > + * from this software without specific prior written permission. > + * > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS > + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT > + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR > + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT > + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, > + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT > + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, > + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY > + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT > + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE > + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. > + */ > + > +#ifndef _FM10K_LOGS_H_ > +#define _FM10K_LOGS_H_ > + > +#include <rte_log.h> > + > +#ifdef RTE_LIBRTE_FM10K_DEBUG > +#define PMD_LOG(level, fmt, args...) \ > + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) > +#define PMD_FUNC_TRACE() PMD_LOG(DEBUG, " >>") > +#else > +#define PMD_LOG(level, fmt, args...) 
do { } while (0) > +#define PMD_FUNC_TRACE() do { } while (0) > +#endif > + > +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX > +#define PMD_LOG_RX(level, fmt, args...) \ > + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) > +#define PMD_FUNC_TRACE_RX() PMD_LOG_RX(DEBUG, " >>") > +#else > +#define PMD_LOG_RX(level, fmt, args...) do { } while (0) > +#define PMD_FUNC_TRACE_RX() do { } while (0) > +#endif > + > +#ifdef RTE_LIBRTE_FM10K_DEBUG_TX > +#define PMD_LOG_TX(level, fmt, args...) \ > + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) > +#define PMD_FUNC_TRACE_TX() PMD_LOG_TX(DEBUG, " >>") > +#else > +#define PMD_LOG_TX(level, fmt, args...) do { } while (0) > +#define PMD_FUNC_TRACE_TX() do { } while (0) > +#endif > + > +#endif /* _FM10K_LOGS_H_ */ > diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c > b/lib/librte_pmd_fm10k/fm10k_rxtx.c > new file mode 100644 > index 0000000..e69de29 > -- > 1.7.7.6 > > ^ permalink raw reply [flat|nested] 147+ messages in thread
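For reference, a sketch of the macro layout the review asks for, modeled on the ixgbe cleanup: init and error messages compiled in unconditionally, with only the data-path logs kept behind build options (macro names follow the convention used by the other Intel PMDs and are assumptions here):

/* Always available, so init/error messages do not depend on a build option. */
#define PMD_INIT_LOG(level, fmt, args...) \
	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)

/* Data-path logging may stay behind a compile-time switch. */
#ifdef RTE_LIBRTE_FM10K_DEBUG_RX
#define PMD_RX_LOG(level, fmt, args...) \
	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
#else
#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
#endif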
* [dpdk-dev] [PATCH 04/18] fm10k: add fm10k device id 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (2 preceding siblings ...) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 03/18] fm10k: Add empty fm10k files Chen Jing D(Mark) @ 2015-01-30 5:07 ` Chen Jing D(Mark) 2015-01-31 14:19 ` Neil Horman 2015-01-30 5:07 ` [dpdk-dev] [PATCH 05/18] fm10k: Add code to register fm10k pmd PF driver Chen Jing D(Mark) ` (14 subsequent siblings) 18 siblings, 1 reply; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-01-30 5:07 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k device ID list into rte_pci_dev_ids.h. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 ++++++++++++++++++++++ 1 files changed, 22 insertions(+), 0 deletions(-) diff --git a/lib/librte_eal/common/include/rte_pci_dev_ids.h b/lib/librte_eal/common/include/rte_pci_dev_ids.h index c922de9..f54800e 100644 --- a/lib/librte_eal/common/include/rte_pci_dev_ids.h +++ b/lib/librte_eal/common/include/rte_pci_dev_ids.h @@ -132,6 +132,14 @@ #define RTE_PCI_DEV_ID_DECL_VMXNET3(vend, dev) #endif +#ifndef RTE_PCI_DEV_ID_DECL_FM10K +#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) +#endif + +#ifndef RTE_PCI_DEV_ID_DECL_FM10KVF +#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) +#endif + #ifndef PCI_VENDOR_ID_INTEL /** Vendor ID used by Intel devices */ #define PCI_VENDOR_ID_INTEL 0x8086 @@ -474,6 +482,12 @@ RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_QSFP_B) RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_QSFP_C) RTE_PCI_DEV_ID_DECL_I40E(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_10G_BASE_T) +/*************** Physical FM10K devices from fm10k_type.h ***************/ + +#define FM10K_DEV_ID_PF 0x15A4 + +RTE_PCI_DEV_ID_DECL_FM10K(PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_PF) + /****************** Virtual IGB devices from e1000_hw.h ******************/ #define E1000_DEV_ID_82576_VF 0x10CA @@ -526,6 +540,12 @@ RTE_PCI_DEV_ID_DECL_VIRTIO(PCI_VENDOR_ID_QUMRANET, QUMRANET_DEV_ID_VIRTIO) RTE_PCI_DEV_ID_DECL_VMXNET3(PCI_VENDOR_ID_VMWARE, VMWARE_DEV_ID_VMXNET3) +/*************** Virtual FM10K devices from fm10k_type.h ***************/ + +#define FM10K_DEV_ID_VF 0x15A5 + +RTE_PCI_DEV_ID_DECL_FM10KVF(PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_VF) + /* * Undef all RTE_PCI_DEV_ID_DECL_* here. */ @@ -538,3 +558,5 @@ RTE_PCI_DEV_ID_DECL_VMXNET3(PCI_VENDOR_ID_VMWARE, VMWARE_DEV_ID_VMXNET3) #undef RTE_PCI_DEV_ID_DECL_I40EVF #undef RTE_PCI_DEV_ID_DECL_VIRTIO #undef RTE_PCI_DEV_ID_DECL_VMXNET3 +#undef RTE_PCI_DEV_ID_DECL_FM10K +#undef RTE_PCI_DEV_ID_DECL_FM10KVF \ No newline at end of file -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 04/18] fm10k: add fm10k device id 2015-01-30 5:07 ` [dpdk-dev] [PATCH 04/18] fm10k: add fm10k device id Chen Jing D(Mark) @ 2015-01-31 14:19 ` Neil Horman 2015-01-31 16:07 ` David Marchand 2015-02-02 7:54 ` Chen, Jing D 0 siblings, 2 replies; 147+ messages in thread From: Neil Horman @ 2015-01-31 14:19 UTC (permalink / raw) To: Chen Jing D(Mark); +Cc: dev On Fri, Jan 30, 2015 at 01:07:20PM +0800, Chen Jing D(Mark) wrote: > From: Jeff Shaw <jeffrey.b.shaw@intel.com> > > Add fm10k device ID list into rte_pci_dev_ids.h. > > Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> > Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> > --- > lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 ++++++++++++++++++++++ > 1 files changed, 22 insertions(+), 0 deletions(-) > > diff --git a/lib/librte_eal/common/include/rte_pci_dev_ids.h b/lib/librte_eal/common/include/rte_pci_dev_ids.h > index c922de9..f54800e 100644 > --- a/lib/librte_eal/common/include/rte_pci_dev_ids.h > +++ b/lib/librte_eal/common/include/rte_pci_dev_ids.h > @@ -132,6 +132,14 @@ > #define RTE_PCI_DEV_ID_DECL_VMXNET3(vend, dev) > #endif > > +#ifndef RTE_PCI_DEV_ID_DECL_FM10K > +#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) > +#endif > + > +#ifndef RTE_PCI_DEV_ID_DECL_FM10KVF > +#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) > +#endif > + I know this isn't the job of this patch series, but I don't really understand why we bother with this pattern for filling out pci id tables. A PMD supports specific hardware, we might as well use the generic RTE_PCI_DEVICE macro in the driver rather than creating a FM10K specific wrapper, only to have to do some ifdef trickery in the rte_cpi_dev_ids file and some include magic to fill it out. I'd suggest that you just use RTE_PCI_DEVICE macro here, and make your own table (keep the specific device id values in the common file. Then we can clean out the macro maggic in a later update. Neil ^ permalink raw reply [flat|nested] 147+ messages in thread
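A sketch of what the suggestion amounts to inside the PMD, using the generic RTE_PCI_DEVICE macro directly rather than the FM10K-specific wrappers (device id values as defined in the patch under review; the table name matches the one already used in fm10k_ethdev.c):

static struct rte_pci_id pci_id_fm10k_map[] = {
	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_PF) },  /* 0x15A4 */
	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, FM10K_DEV_ID_VF) },  /* 0x15A5 */
	{ .vendor_id = 0, /* sentinel */ },
};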
* Re: [dpdk-dev] [PATCH 04/18] fm10k: add fm10k device id 2015-01-31 14:19 ` Neil Horman @ 2015-01-31 16:07 ` David Marchand 2015-01-31 16:32 ` Neil Horman 2015-02-02 7:54 ` Chen, Jing D 1 sibling, 1 reply; 147+ messages in thread From: David Marchand @ 2015-01-31 16:07 UTC (permalink / raw) To: Neil Horman; +Cc: dev On Sat, Jan 31, 2015 at 3:19 PM, Neil Horman <nhorman@tuxdriver.com> wrote: > On Fri, Jan 30, 2015 at 01:07:20PM +0800, Chen Jing D(Mark) wrote: > > From: Jeff Shaw <jeffrey.b.shaw@intel.com> > > > > Add fm10k device ID list into rte_pci_dev_ids.h. > > > > Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> > > Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> > > --- > > lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 > ++++++++++++++++++++++ > > 1 files changed, 22 insertions(+), 0 deletions(-) > > > > diff --git a/lib/librte_eal/common/include/rte_pci_dev_ids.h > b/lib/librte_eal/common/include/rte_pci_dev_ids.h > > index c922de9..f54800e 100644 > > --- a/lib/librte_eal/common/include/rte_pci_dev_ids.h > > +++ b/lib/librte_eal/common/include/rte_pci_dev_ids.h > > @@ -132,6 +132,14 @@ > > #define RTE_PCI_DEV_ID_DECL_VMXNET3(vend, dev) > > #endif > > > > +#ifndef RTE_PCI_DEV_ID_DECL_FM10K > > +#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) > > +#endif > > + > > +#ifndef RTE_PCI_DEV_ID_DECL_FM10KVF > > +#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) > > +#endif > > + > I know this isn't the job of this patch series, but I don't really > understand > why we bother with this pattern for filling out pci id tables. A PMD > supports > specific hardware, we might as well use the generic RTE_PCI_DEVICE macro > in the > driver rather than creating a FM10K specific wrapper, only to have to do > some > ifdef trickery in the rte_cpi_dev_ids file and some include magic to fill > it > out. > +1 > I'd suggest that you just use RTE_PCI_DEVICE macro here, and make your own > table > (keep the specific device id values in the common file. Then we can clean > out > the macro maggic in a later update. > If we are going that way, my only concern is that this makes it hard to "discover" the pci devices that need to be bound to igb_uio/vfio/whatever kernel driver, and so we must rely either on a global list (like this rte_pci_dev_ids.h) or on scripts with hardcoded pci device ids in them ... In the end, we miss something to have dpdk work automatically like it used to be, before the pci devices ids were stripped out of igb_uio. I can see two solutions : - all pmds export the pci device ids they support (this sounds like modalias :-)) or they register into the eal that exports this information for use by application, but to me the application should not bother with this ... - the pmd handles this automatically (like binding/unbinding on a kernel driver), with a _runtime_ option to enable this behavior (default being "no automatic bind") Comments ? Ideas ? -- David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 04/18] fm10k: add fm10k device id 2015-01-31 16:07 ` David Marchand @ 2015-01-31 16:32 ` Neil Horman 2015-01-31 16:55 ` David Marchand 0 siblings, 1 reply; 147+ messages in thread From: Neil Horman @ 2015-01-31 16:32 UTC (permalink / raw) To: David Marchand; +Cc: dev On Sat, Jan 31, 2015 at 05:07:28PM +0100, David Marchand wrote: > On Sat, Jan 31, 2015 at 3:19 PM, Neil Horman <nhorman@tuxdriver.com> wrote: > > > On Fri, Jan 30, 2015 at 01:07:20PM +0800, Chen Jing D(Mark) wrote: > > > From: Jeff Shaw <jeffrey.b.shaw@intel.com> > > > > > > Add fm10k device ID list into rte_pci_dev_ids.h. > > > > > > Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> > > > Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> > > > --- > > > lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 > > ++++++++++++++++++++++ > > > 1 files changed, 22 insertions(+), 0 deletions(-) > > > > > > diff --git a/lib/librte_eal/common/include/rte_pci_dev_ids.h > > b/lib/librte_eal/common/include/rte_pci_dev_ids.h > > > index c922de9..f54800e 100644 > > > --- a/lib/librte_eal/common/include/rte_pci_dev_ids.h > > > +++ b/lib/librte_eal/common/include/rte_pci_dev_ids.h > > > @@ -132,6 +132,14 @@ > > > #define RTE_PCI_DEV_ID_DECL_VMXNET3(vend, dev) > > > #endif > > > > > > +#ifndef RTE_PCI_DEV_ID_DECL_FM10K > > > +#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) > > > +#endif > > > + > > > +#ifndef RTE_PCI_DEV_ID_DECL_FM10KVF > > > +#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) > > > +#endif > > > + > > I know this isn't the job of this patch series, but I don't really > > understand > > why we bother with this pattern for filling out pci id tables. A PMD > > supports > > specific hardware, we might as well use the generic RTE_PCI_DEVICE macro > > in the > > driver rather than creating a FM10K specific wrapper, only to have to do > > some > > ifdef trickery in the rte_cpi_dev_ids file and some include magic to fill > > it > > out. > > > > +1 > > > > I'd suggest that you just use RTE_PCI_DEVICE macro here, and make your own > > table > > (keep the specific device id values in the common file. Then we can clean > > out > > the macro maggic in a later update. > > > > If we are going that way, my only concern is that this makes it hard to > "discover" the pci devices that need to be bound to igb_uio/vfio/whatever > kernel driver, and so we must rely either on a global list (like this > rte_pci_dev_ids.h) or on scripts with hardcoded pci device ids in them ... > > In the end, we miss something to have dpdk work automatically like it used > to be, before the pci devices ids were stripped out of igb_uio. > > I can see two solutions : > - all pmds export the pci device ids they support (this sounds like > modalias :-)) or they register into the eal that exports this information > for use by application, but to me the application should not bother with > this ... > - the pmd handles this automatically (like binding/unbinding on a kernel > driver), with a _runtime_ option to enable this behavior (default being "no > automatic bind") > > Comments ? Ideas ? > I like the modalias idea, as it transports a table for uio/vfio to identify with the binary that needs it, preventing a possible discrepancy in what the core dpdk library identifies as supported devices and what the pmd DSO's actually do support. To implement it we would either provide our own linker script or have to do some other make trickery. 
The linker script is a bit more labor-intensive, but might provide some future benefit (if you see a need for non-standard sections that you would like to suck into a given DSO in the future). Make trickery would be more straightforward, but requires that we add some additional make targets to build out the dynamic table to be readable by outside utilities. Maybe we could use objcopy to add in a separate section after the DSO link. Not sure yet. Neil > -- > David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
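As a purely hypothetical shape of that "make trickery" option, the device-ID list could be generated at build time and grafted onto the already-linked DSO; the section name and file names below are invented for illustration:

    /*
     * Hypothetical post-link step in the PMD Makefile: write the supported
     * vendor:device pairs to a small file and attach it to the DSO as a
     * dedicated, non-loadable section that bind scripts can read without
     * loading the library.
     *
     *   printf '8086:15a4 8086:15a5' > fm10k.pciids
     *   objcopy --add-section .rte_pmd_pci_ids=fm10k.pciids \
     *           --set-section-flags .rte_pmd_pci_ids=noload,readonly \
     *           librte_pmd_fm10k.so
     */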
* Re: [dpdk-dev] [PATCH 04/18] fm10k: add fm10k device id 2015-01-31 16:32 ` Neil Horman @ 2015-01-31 16:55 ` David Marchand 2015-01-31 18:35 ` Neil Horman 0 siblings, 1 reply; 147+ messages in thread From: David Marchand @ 2015-01-31 16:55 UTC (permalink / raw) To: Neil Horman; +Cc: dev On Sat, Jan 31, 2015 at 5:32 PM, Neil Horman <nhorman@tuxdriver.com> wrote: > On Sat, Jan 31, 2015 at 05:07:28PM +0100, David Marchand wrote: > > In the end, we miss something to have dpdk work automatically like it > used > > to be, before the pci devices ids were stripped out of igb_uio. > > > > I can see two solutions : > > - all pmds export the pci device ids they support (this sounds like > > modalias :-)) or they register into the eal that exports this information > > for use by application, but to me the application should not bother with > > this ... > > - the pmd handles this automatically (like binding/unbinding on a kernel > > driver), with a _runtime_ option to enable this behavior (default being > "no > > automatic bind") > > > > Comments ? Ideas ? > > > I like the modalias idea, as it transports a table for uio/vfio to > identify with > the binary that needs it, preventing a possible discrepancy in what the > core > dpdk library identifies as supported devices and what the pmd DSO's > actually do > support. > Yes, but if a pmd can do this itself alone, there is no discrepancy either. Thinking aloud ... As long as the pmd does all the magic bindings, the dpdk core does not need to know about it. And if the dpdk core really needs to know about this (I can see no use case here, just want to avoid being blocked later) a dynamic register system would be fine too. > To implement it we would either provide our own linker script or have to > do some > other make trickery. The linker script is a bit more labor intensive, but > might > provide some future benefit (if you see a need for non-standard sections > that > you would like to suck into a given DSO in the future). Make trickery > would be > more straightforward, but requires that we add some additional make > targets to > build out the dynamic table to be readable by outside utilities. Maybe we > could > use objcopy to add in a separate section after the dso link. Not sure yet > Not sure either, I just think we may reinvent the wheel :-) -- David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 04/18] fm10k: add fm10k device id 2015-01-31 16:55 ` David Marchand @ 2015-01-31 18:35 ` Neil Horman 2015-05-07 11:06 ` David Marchand 0 siblings, 1 reply; 147+ messages in thread From: Neil Horman @ 2015-01-31 18:35 UTC (permalink / raw) To: David Marchand; +Cc: dev On Sat, Jan 31, 2015 at 05:55:07PM +0100, David Marchand wrote: > On Sat, Jan 31, 2015 at 5:32 PM, Neil Horman <nhorman@tuxdriver.com> wrote: > > > On Sat, Jan 31, 2015 at 05:07:28PM +0100, David Marchand wrote: > > > In the end, we miss something to have dpdk work automatically like it > > used > > > to be, before the pci devices ids were stripped out of igb_uio. > > > > > > I can see two solutions : > > > - all pmds export the pci device ids they support (this sounds like > > > modalias :-)) or they register into the eal that exports this information > > > for use by application, but to me the application should not bother with > > > this ... > > > - the pmd handles this automatically (like binding/unbinding on a kernel > > > driver), with a _runtime_ option to enable this behavior (default being > > "no > > > automatic bind") > > > > > > Comments ? Ideas ? > > > > > I like the modalias idea, as it transports a table for uio/vfio to > > identify with > > the binary that needs it, preventing a possible discrepancy in what the > > core > > dpdk library identifies as supported devices and what the pmd DSO's > > actually do > > support. > > > > Yes, but if a pmd can do this itself alone, there is no discrepancy either. > Yes, absolutely, but it needs to be done in a way that an external binary can inspect the object independently of its being loaded, and preferably in a non-programatic context (since the uio bind/unbind utilities are separate scripts). The modinfo method involves putting info into a special data section that gets processed as part of the kernel modpost build stage. There, the additional section gets translated into a C file, and built into its own object thats attached to the final binary module. We can so the same thing here if you like We could do something simpler, too. For instance we could add a field to the struct that gets registered as part of the RTE_REGISTER_PMD macro, and export it via a new ethdev library call. That would be very straightforward, but the implication there is that you would need a programatic method to interrogate that information (a binary to call into the dpdk), which is not part of how the bind/unbind scripts work today. > Thinking aloud ... > As long as the pmd does all the magic bindings, the dpdk core does not need > to know about it. Yes, I'm not suggesting anything other than the pmd be responsible for codifying its binding information. Its how that information is exported to other utilities that potentially increases the complexity of this operation. > And if the dpdk core really needs to know about this (I can see no use case > here, just want to avoid being blocked later) a dynamic register system > would be fine too. Sure, I don't see a problem with that. If we properly macro-wrap this, we can likely add a dynamic registration mechanism without having to change the pmds later Neil > > > > > To implement it we would either provide our own linker script or have to > > do some > > other make trickery. The linker script is a bit more labor intensive, but > > might > > provide some future benefit (if you see a need for non-standard sections > > that > > you would like to suck into a given DSO in the future). 
Make trickery > > would be > > more straightforward, but requires that we add some additional make > > targets to > > build out the dynamic table to be readable by outside utilities. Maybe we > > could > > use objcopy to add in a separate section after the dso link. Not sure yet > > > > Not sure either, I just think we may reinvent the wheel :-) > > > -- > David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
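One loose sketch of the macro-wrapping idea floated above, with every name invented for illustration (this is not an existing DPDK API): the registration macro records the driver's PCI ID table in a well-known section, so a dynamic discovery mechanism could be added later without touching individual PMDs.

    #include <rte_pci.h>
    #include <rte_dev.h>

    /* Hypothetical descriptor emitted once per PMD into a dedicated section. */
    struct pmd_pci_ids_entry {
        const char *drv_name;
        const struct rte_pci_id *ids;   /* terminated by vendor_id == 0 */
    };

    #define PMD_REGISTER_DRIVER_WITH_IDS(drv, table)                \
        static const struct pmd_pci_ids_entry drv##_ids_entry       \
            __attribute__((used, section(".pmd_pci_ids"))) =        \
            { #drv, (table) };                                       \
        PMD_REGISTER_DRIVER(drv)

    /* A PMD would then register as, e.g.:
     *   PMD_REGISTER_DRIVER_WITH_IDS(rte_fm10k_driver, pci_id_fm10k_map);
     */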
* Re: [dpdk-dev] [PATCH 04/18] fm10k: add fm10k device id 2015-01-31 18:35 ` Neil Horman @ 2015-05-07 11:06 ` David Marchand 2015-05-07 13:36 ` Neil Horman 0 siblings, 1 reply; 147+ messages in thread From: David Marchand @ 2015-05-07 11:06 UTC (permalink / raw) To: Neil Horman; +Cc: dev Hello Neil, Reviving this old thread. On Sat, Jan 31, 2015 at 7:35 PM, Neil Horman <nhorman@tuxdriver.com> wrote: > On Sat, Jan 31, 2015 at 05:55:07PM +0100, David Marchand wrote: > > On Sat, Jan 31, 2015 at 5:32 PM, Neil Horman <nhorman@tuxdriver.com> > wrote: > > > > > On Sat, Jan 31, 2015 at 05:07:28PM +0100, David Marchand wrote: > > > > In the end, we miss something to have dpdk work automatically like it > > > used > > > > to be, before the pci devices ids were stripped out of igb_uio. > > > > > > > > I can see two solutions : > > > > - all pmds export the pci device ids they support (this sounds like > > > > modalias :-)) or they register into the eal that exports this > information > > > > for use by application, but to me the application should not bother > with > > > > this ... > > > > - the pmd handles this automatically (like binding/unbinding on a > kernel > > > > driver), with a _runtime_ option to enable this behavior (default > being > > > "no > > > > automatic bind") > > > > > > > > Comments ? Ideas ? > > > > > > > I like the modalias idea, as it transports a table for uio/vfio to > > > identify with > > > the binary that needs it, preventing a possible discrepancy in what the > > > core > > > dpdk library identifies as supported devices and what the pmd DSO's > > > actually do > > > support. > > > > > > > Yes, but if a pmd can do this itself alone, there is no discrepancy > either. > > > Yes, absolutely, but it needs to be done in a way that an external binary > can > inspect the object independently of its being loaded, and preferably in a > non-programatic context (since the uio bind/unbind utilities are separate > scripts). > > The modinfo method involves putting info into a special data section that > gets > processed as part of the kernel modpost build stage. There, the additional > section gets translated into a C file, and built into its own object thats > attached to the final binary module. We can so the same thing here if you > like > > We could do something simpler, too. For instance we could add a field to > the > struct that gets registered as part of the RTE_REGISTER_PMD macro, and > export it > via a new ethdev library call. That would be very straightforward, but the > implication there is that you would need a programatic method to > interrogate > that information (a binary to call into the dpdk), which is not part of > how the > bind/unbind scripts work today. > > > Thinking aloud ... > > As long as the pmd does all the magic bindings, the dpdk core does not > need > > to know about it. > Yes, I'm not suggesting anything other than the pmd be responsible for > codifying > its binding information. Its how that information is exported to other > utilities that potentially increases the complexity of this operation. > > > And if the dpdk core really needs to know about this (I can see no use > case > > here, just want to avoid being blocked later) a dynamic register system > > would be fine too. > Sure, I don't see a problem with that. If we properly macro-wrap this, we > can > likely add a dynamic registration mechanism without having to change the > pmds > later > Did you work on this ? I have been playing with this, I will most likely propose patches soon. 
My preferred approach is to dedicate an ELF section for this ("à la" kernel .modinfo). I tried to reuse modinfo, but the problem is that the kmod implementation checks the filename extension against .ko and .ko.gz. I find it a bit of a shame to have to rewrite this kind of tool just for dpdk ... but on the other hand we would need something for bsd as well, or we provide a shell script that relies on readelf to retrieve this section. -- David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
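A minimal sketch of that .modinfo-like approach, assuming a made-up section name and string format (nothing here is existing fm10k or EAL code):

    /* Exported data, never referenced at run time; only there for tools. */
    static const char fm10k_pci_ids_export[]
        __attribute__((used, section(".rte_pmd_pci_ids"))) =
        "fm10k pci_ids=8086:15a4,8086:15a5";

    /* A bind script could then recover the list with stock binutils instead
     * of a kmod-style tool, e.g.:
     *   readelf -p .rte_pmd_pci_ids librte_pmd_fm10k.so
     */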
* Re: [dpdk-dev] [PATCH 04/18] fm10k: add fm10k device id 2015-05-07 11:06 ` David Marchand @ 2015-05-07 13:36 ` Neil Horman 2015-05-07 13:39 ` David Marchand 0 siblings, 1 reply; 147+ messages in thread From: Neil Horman @ 2015-05-07 13:36 UTC (permalink / raw) To: David Marchand; +Cc: dev On Thu, May 07, 2015 at 01:06:02PM +0200, David Marchand wrote: > Hello Neil, > > Reviving this old thread. > > On Sat, Jan 31, 2015 at 7:35 PM, Neil Horman <nhorman@tuxdriver.com> wrote: > > > On Sat, Jan 31, 2015 at 05:55:07PM +0100, David Marchand wrote: > > > On Sat, Jan 31, 2015 at 5:32 PM, Neil Horman <nhorman@tuxdriver.com> > > wrote: > > > > > > > On Sat, Jan 31, 2015 at 05:07:28PM +0100, David Marchand wrote: > > > > > In the end, we miss something to have dpdk work automatically like it > > > > used > > > > > to be, before the pci devices ids were stripped out of igb_uio. > > > > > > > > > > I can see two solutions : > > > > > - all pmds export the pci device ids they support (this sounds like > > > > > modalias :-)) or they register into the eal that exports this > > information > > > > > for use by application, but to me the application should not bother > > with > > > > > this ... > > > > > - the pmd handles this automatically (like binding/unbinding on a > > kernel > > > > > driver), with a _runtime_ option to enable this behavior (default > > being > > > > "no > > > > > automatic bind") > > > > > > > > > > Comments ? Ideas ? > > > > > > > > > I like the modalias idea, as it transports a table for uio/vfio to > > > > identify with > > > > the binary that needs it, preventing a possible discrepancy in what the > > > > core > > > > dpdk library identifies as supported devices and what the pmd DSO's > > > > actually do > > > > support. > > > > > > > > > > Yes, but if a pmd can do this itself alone, there is no discrepancy > > either. > > > > > Yes, absolutely, but it needs to be done in a way that an external binary > > can > > inspect the object independently of its being loaded, and preferably in a > > non-programatic context (since the uio bind/unbind utilities are separate > > scripts). > > > > The modinfo method involves putting info into a special data section that > > gets > > processed as part of the kernel modpost build stage. There, the additional > > section gets translated into a C file, and built into its own object thats > > attached to the final binary module. We can so the same thing here if you > > like > > > > We could do something simpler, too. For instance we could add a field to > > the > > struct that gets registered as part of the RTE_REGISTER_PMD macro, and > > export it > > via a new ethdev library call. That would be very straightforward, but the > > implication there is that you would need a programatic method to > > interrogate > > that information (a binary to call into the dpdk), which is not part of > > how the > > bind/unbind scripts work today. > > > > > Thinking aloud ... > > > As long as the pmd does all the magic bindings, the dpdk core does not > > need > > > to know about it. > > Yes, I'm not suggesting anything other than the pmd be responsible for > > codifying > > its binding information. Its how that information is exported to other > > utilities that potentially increases the complexity of this operation. > > > > > And if the dpdk core really needs to know about this (I can see no use > > case > > > here, just want to avoid being blocked later) a dynamic register system > > > would be fine too. > > Sure, I don't see a problem with that. 
If we properly macro-wrap this, we > > can > > likely add a dynamic registration mechanism without having to change the > > pmds > > later > > > > Did you work on this ? > No, I had assumed from our previous discussion that you were. > > I have been playing with this, I will most likely propose patches soon. > > My preferred approach is to dedicate an elf section for this ("à la" kernel > .modinfo). Yes, this makes sense. > I tried to reuse modinfo, but the problem is that kmod implementation is > checking the filename extension against .ko and .ko.gz. > Well, you can alter modinfo so that it looks at .so files if you like, but thats not the only tool you can use. Truthfully you can just use objdump if you like. > I find it a bit too bad to have to rewrite this kind of tool just for dpdk > ... but on the other hand we would need something for bsd as well or we > give a shell script that rely on readelf to retrieve theis section. > See above, try objdump -j=.modinfo -S /path/to/kernel/module. objdump doesn't care about file extensions, as long as its ELF. With that you can: 1) Dump out any section contents you like 2) strip away the application top end, and just use libbfd to get at the elf contents if you like. Neil > > -- > David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
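For reference, a form of that check which dumps a section's raw contents with GNU binutils (section name and path are only examples):

    /*
     *   objdump -s -j .modinfo /path/to/kernel/module.ko
     *
     * -j selects the section and -s prints its full contents; objdump does
     * not care about the file extension as long as the file is ELF.
     */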
* Re: [dpdk-dev] [PATCH 04/18] fm10k: add fm10k device id 2015-05-07 13:36 ` Neil Horman @ 2015-05-07 13:39 ` David Marchand 0 siblings, 0 replies; 147+ messages in thread From: David Marchand @ 2015-05-07 13:39 UTC (permalink / raw) To: Neil Horman; +Cc: dev On Thu, May 7, 2015 at 3:36 PM, Neil Horman <nhorman@tuxdriver.com> wrote: > > I tried to reuse modinfo, but the problem is that kmod implementation is > > checking the filename extension against .ko and .ko.gz. > > > Well, you can alter modinfo so that it looks at .so files if you like, but > thats > not the only tool you can use. Truthfully you can just use objdump if you > like. > > > I find it a bit too bad to have to rewrite this kind of tool just for > dpdk > > ... but on the other hand we would need something for bsd as well or we > > give a shell script that rely on readelf to retrieve theis section. > > > See above, try objdump -j=.modinfo -S /path/to/kernel/module. objdump > doesn't > care about file extensions, as long as its ELF. With that you can: > > 1) Dump out any section contents you like > 2) strip away the application top end, and just use libbfd to get at the > elf > contents if you like. > Yes, I reached the same conclusion. Ok, I will see what I can do. Thanks. -- David Marchand ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 04/18] fm10k: add fm10k device id 2015-01-31 14:19 ` Neil Horman 2015-01-31 16:07 ` David Marchand @ 2015-02-02 7:54 ` Chen, Jing D 1 sibling, 0 replies; 147+ messages in thread From: Chen, Jing D @ 2015-02-02 7:54 UTC (permalink / raw) To: Neil Horman; +Cc: dev Hi Neil, > -----Original Message----- > From: Neil Horman [mailto:nhorman@tuxdriver.com] > Sent: Saturday, January 31, 2015 10:20 PM > To: Chen, Jing D > Cc: dev@dpdk.org > Subject: Re: [dpdk-dev] [PATCH 04/18] fm10k: add fm10k device id > > On Fri, Jan 30, 2015 at 01:07:20PM +0800, Chen Jing D(Mark) wrote: > > From: Jeff Shaw <jeffrey.b.shaw@intel.com> > > > > Add fm10k device ID list into rte_pci_dev_ids.h. > > > > Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> > > Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> > > --- > > lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 > ++++++++++++++++++++++ > > 1 files changed, 22 insertions(+), 0 deletions(-) > > > > diff --git a/lib/librte_eal/common/include/rte_pci_dev_ids.h > b/lib/librte_eal/common/include/rte_pci_dev_ids.h > > index c922de9..f54800e 100644 > > --- a/lib/librte_eal/common/include/rte_pci_dev_ids.h > > +++ b/lib/librte_eal/common/include/rte_pci_dev_ids.h > > @@ -132,6 +132,14 @@ > > #define RTE_PCI_DEV_ID_DECL_VMXNET3(vend, dev) > > #endif > > > > +#ifndef RTE_PCI_DEV_ID_DECL_FM10K > > +#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) > > +#endif > > + > > +#ifndef RTE_PCI_DEV_ID_DECL_FM10KVF > > +#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) > > +#endif > > + > I know this isn't the job of this patch series, but I don't really understand > why we bother with this pattern for filling out pci id tables. A PMD supports > specific hardware, we might as well use the generic RTE_PCI_DEVICE macro > in the > driver rather than creating a FM10K specific wrapper, only to have to do > some > ifdef trickery in the rte_cpi_dev_ids file and some include magic to fill it > out. > > I'd suggest that you just use RTE_PCI_DEVICE macro here, and make your > own table > (keep the specific device id values in the common file. Then we can clean out > the macro maggic in a later update. I partially agree with you. Maybe a better solution is to use the mechanism that applied in kernel to register PCI driver. Driver maintains a device list that it can manage and provide a hook function to detect if new device can be managed by the driver. Then, DPDK core library needn't worry about the long device list (Maybe script that unbind/bind device needs such info, it's another story). But I'd like to keep current implementation unchanged since final decision is not made yet. If new mechanism is introduced, I would like to update to adapt to the new changes. > Neil Best Regards, Mark ^ permalink raw reply [flat|nested] 147+ messages in thread
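A rough sketch of the kernel-style mechanism described above, with all names hypothetical (none of this is existing EAL or fm10k code): the driver owns its ID list and supplies a match hook, so the core only iterates over registered drivers instead of maintaining one global device list.

    #include <rte_pci.h>

    struct hypothetical_pci_driver {
        const char *name;
        const struct rte_pci_id *id_table;   /* terminated by vendor_id == 0 */
        /* return 1 if this driver can manage the device, 0 otherwise */
        int (*match)(const struct rte_pci_addr *addr,
                     const struct rte_pci_id *id);
        /* called by the core once a match is found */
        int (*probe)(struct rte_pci_device *dev);
    };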
* [dpdk-dev] [PATCH 05/18] fm10k: Add code to register fm10k pmd PF driver 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (3 preceding siblings ...) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 04/18] fm10k: add fm10k device id Chen Jing D(Mark) @ 2015-01-30 5:07 ` Chen Jing D(Mark) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 06/18] fm10k: add reta update/requery functions Chen Jing D(Mark) ` (13 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-01-30 5:07 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add function to scan and initialize fm10k PF device. 2. Add implementation to register fm10k pmd PF driver. 3. Add 3 functions fm10k_dev_configure, fm10k_stats_get and fm10k_stats_get. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 339 +++++++++++++++++++++++++++++++++++ 1 files changed, 339 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index e69de29..400d841 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -0,0 +1,339 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include <rte_ethdev.h> +#include <rte_malloc.h> +#include <rte_memzone.h> +#include <rte_string_fns.h> +#include <rte_dev.h> +#include <rte_spinlock.h> + +#include "fm10k.h" +#include "SHARED/fm10k_api.h" + +/* Default delay to acquire mailbox lock */ +#define FM10K_MBXLOCK_DELAY_US 20 + +static void +fm10k_mbx_initlock(struct fm10k_hw *hw) +{ + rte_spinlock_init(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); +} + +static void +fm10k_mbx_lock(struct fm10k_hw *hw) +{ + while (!rte_spinlock_trylock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back))) + rte_delay_us(FM10K_MBXLOCK_DELAY_US); +} + +static void +fm10k_mbx_unlock(struct fm10k_hw *hw) +{ + rte_spinlock_unlock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); +} + +static int +fm10k_dev_configure(struct rte_eth_dev *dev) +{ + PMD_FUNC_TRACE(); + + if (dev->data->dev_conf.rxmode.hw_strip_crc == 0) + PMD_LOG(WARNING, "fm10k always strip CRC"); + + return 0; +} + +static void +fm10k_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + uint64_t ipackets, opackets, ibytes, obytes; + struct fm10k_hw *hw = + FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_hw_stats *hw_stats = + FM10K_DEV_PRIVATE_TO_STATS(dev->data->dev_private); + int i; + PMD_FUNC_TRACE(); + + fm10k_update_hw_stats(hw, hw_stats); + + ipackets = opackets = ibytes = obytes = 0; + for (i = 0; (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) && + (i < FM10K_MAX_QUEUES_PF); ++i) { + stats->q_ipackets[i] = hw_stats->q[i].rx_packets.count; + stats->q_opackets[i] = hw_stats->q[i].tx_packets.count; + stats->q_ibytes[i] = hw_stats->q[i].rx_bytes.count; + stats->q_obytes[i] = hw_stats->q[i].tx_bytes.count; + ipackets += stats->q_ipackets[i]; + opackets += stats->q_opackets[i]; + ibytes += stats->q_ibytes[i]; + obytes += stats->q_obytes[i]; + } + stats->ipackets = ipackets; + stats->opackets = opackets; + stats->ibytes = ibytes; + stats->obytes = obytes; +} + +static void +fm10k_stats_reset(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_hw_stats *hw_stats = + FM10K_DEV_PRIVATE_TO_STATS(dev->data->dev_private); + PMD_FUNC_TRACE(); + + memset(hw_stats, 0, sizeof(*hw_stats)); + fm10k_rebind_hw_stats(hw, hw_stats); +} + +/* Mailbox message handler in VF */ +static const struct fm10k_msg_data fm10k_msgdata_vf[] = { + FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), + FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_msg_mac_vlan_vf), + FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +/* Mailbox message handler in PF */ +static const struct fm10k_msg_data fm10k_msgdata_pf[] = { + FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf), + FM10K_PF_MSG_LPORT_MAP_HANDLER(fm10k_msg_lport_map_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_CREATE, fm10k_msg_err_pf), + FM10K_PF_MSG_ERR_HANDLER(LPORT_DELETE, fm10k_msg_err_pf), + FM10K_PF_MSG_UPDATE_PVID_HANDLER(fm10k_msg_update_pvid_pf), + FM10K_TLV_MSG_ERROR_HANDLER(fm10k_tlv_msg_error), +}; + +static int +fm10k_setup_mbx_service(struct fm10k_hw *hw) +{ + int err; + + /* Initialize mailbox lock */ + fm10k_mbx_initlock(hw); + + /* Replace default message handler with new ones */ + if (hw->mac.type == fm10k_mac_pf) + err = hw->mbx.ops.register_handlers(&hw->mbx, fm10k_msgdata_pf); + else + err = hw->mbx.ops.register_handlers(&hw->mbx, fm10k_msgdata_vf); + + if (err) { + PMD_LOG(ERR, "Failed to register mailbox handler.err:%d", err); + return err; + } + /* Connect to SM 
for PF device or PF for VF device */ + return hw->mbx.ops.connect(hw, &hw->mbx); +} + +static struct eth_dev_ops fm10k_eth_dev_ops = { + .dev_configure = fm10k_dev_configure, + .stats_get = fm10k_stats_get, + .stats_reset = fm10k_stats_reset, +}; + +static int +eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, + struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int diag; + PMD_FUNC_TRACE(); + + dev->dev_ops = &fm10k_eth_dev_ops; + + /* only initialize in the primary process */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + /* Vendor and Device ID need to be set before init of shared code */ + memset(hw, 0, sizeof(*hw)); + hw->device_id = dev->pci_dev->id.device_id; + hw->vendor_id = dev->pci_dev->id.vendor_id; + hw->subsystem_device_id = dev->pci_dev->id.subsystem_device_id; + hw->subsystem_vendor_id = dev->pci_dev->id.subsystem_vendor_id; + hw->revision_id = 0; + hw->hw_addr = (void *)dev->pci_dev->mem_resource[0].addr; + if (hw->hw_addr == NULL) { + PMD_LOG(ERR, "Bad mem resource." + " Try to blacklist unused devices.\n"); + return -EIO; + } + + /* Store fm10k_adapter pointer */ + hw->back = dev->data->dev_private; + + /* Initialize the shared code */ + diag = fm10k_init_shared_code(hw); + if (diag != FM10K_SUCCESS) { + PMD_LOG(ERR, "Shared code init failed: %d", diag); + return -EIO; + } + + /* + * Inialize bus info. Normally we would call fm10k_get_bus_info(), but + * there is no way to get link status without reading BAR4. Until this + * works, assume we have maximum bandwidth. + * @todo - fix bus info + */ + hw->bus_caps.speed = fm10k_bus_speed_8000; + hw->bus_caps.width = fm10k_bus_width_pcie_x8; + hw->bus_caps.payload = fm10k_bus_payload_512; + hw->bus.speed = fm10k_bus_speed_8000; + hw->bus.width = fm10k_bus_width_pcie_x8; + hw->bus.payload = fm10k_bus_payload_256; + + /* Initialize the hw */ + diag = fm10k_init_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_LOG(ERR, "Hardware init failed: %d", diag); + return -EIO; + } + + /* Initialize MAC address(es) */ + dev->data->mac_addrs = rte_zmalloc("fm10k", ETHER_ADDR_LEN, 0); + if (dev->data->mac_addrs == NULL) { + PMD_LOG(ERR, "Cannot allocate memory for MAC addresses"); + return -ENOMEM; + } + + diag = fm10k_read_mac_addr(hw); + if (diag != FM10K_SUCCESS) { + /* + * TODO: remove special handling on VF. Need shared code to + * fix first. + */ + if (hw->mac.type == fm10k_mac_pf) { + PMD_LOG(ERR, "Read MAC addr failed: %d", diag); + return -EIO; + } else { + /* Generate a random addr */ + eth_random_addr(hw->mac.addr); + memcpy(hw->mac.perm_addr, hw->mac.addr, ETH_ALEN); + } + } + + ether_addr_copy((const struct ether_addr *)hw->mac.addr, + &dev->data->mac_addrs[0]); + + /* Reset the hw statistics */ + fm10k_stats_reset(dev); + + /* Reset the hw */ + diag = fm10k_reset_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_LOG(ERR, "Hardware reset failed: %d", diag); + return -EIO; + } + + /* Setup mailbox service */ + diag = fm10k_setup_mbx_service(hw); + if (diag != FM10K_SUCCESS) { + PMD_LOG(ERR, "Failed to setup mailbox: %d", diag); + return -EIO; + } + + /* + * Below function will trigger operations on mailbox, acquire lock to + * avoid race condition from interrupt handler. Operations on mailbox + * FIFO will trigger interrupt to PF/SM, in which interrupt handler + * will handle and generate an interrupt to our side. Then, FIFO in + * mailbox will be touched. 
+ */ + fm10k_mbx_lock(hw); + /* Enable port first */ + hw->mac.ops.update_lport_state(hw, 0, 0, 1); + + /* Update default vlan */ + hw->mac.ops.update_vlan(hw, hw->mac.default_vid, 0, true); + + /* + * Add default mac/vlan filter. glort is assigned by SM for PF, while is + * unused for VF. PF will assign correct glort for VF. + */ + hw->mac.ops.update_uc_addr(hw, hw->mac.dglort_map, hw->mac.addr, + hw->mac.default_vid, 1, 0); + + /* Set unicast mode by default. App can change to other mode in other + * API func. + */ + hw->mac.ops.update_xcast_mode(hw, hw->mac.dglort_map, + FM10K_XCAST_MODE_MULTI); + + fm10k_mbx_unlock(hw); + + return 0; +} + +/* + * The set of PCI devices this driver supports. This driver will enable both PF + * and SRIOV-VF devices. + */ +static struct rte_pci_id pci_id_fm10k_map[] = { +#define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, +#include "rte_pci_dev_ids.h" + { .vendor_id = 0, /* sentinel */ }, +}; + +static struct eth_driver rte_pmd_fm10k = { + { + .name = "rte_pmd_fm10k", + .id_table = pci_id_fm10k_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING, + }, + .eth_dev_init = eth_fm10k_dev_init, + .dev_private_size = sizeof(struct fm10k_adapter), +}; + +/* + * Driver initialization routine. + * Invoked once at EAL init time. + * Register itself as the [Poll Mode] Driver of PCI FM10K devices. + */ +static int +rte_pmd_fm10k_init(__rte_unused const char *name, + __rte_unused const char *params) +{ + PMD_FUNC_TRACE(); + rte_eth_driver_register(&rte_pmd_fm10k); + return 0; +} + +static struct rte_driver rte_fm10k_driver = { + .type = PMD_PDEV, + .init = rte_pmd_fm10k_init, +}; + +PMD_REGISTER_DRIVER(rte_fm10k_driver); -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
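With the driver registered this way, an fm10k port shows up through the normal ethdev API once EAL has scanned the PCI bus; a minimal application-side sketch against the DPDK 1.x/2.x-era API (error handling trimmed):

    #include <rte_eal.h>
    #include <rte_ethdev.h>

    int main(int argc, char **argv)
    {
        /* fm10k always strips CRC, so request it to avoid the warning
         * logged by fm10k_dev_configure() above */
        struct rte_eth_conf conf = { .rxmode = { .hw_strip_crc = 1, }, };

        if (rte_eal_init(argc, argv) < 0)
            return -1;
        if (rte_eth_dev_count() == 0)
            return -1;   /* no ports bound to a DPDK-aware driver */
        return rte_eth_dev_configure(0, 1, 1, &conf);
    }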
* [dpdk-dev] [PATCH 06/18] fm10k: add reta update/requery functions 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (4 preceding siblings ...) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 05/18] fm10k: Add code to register fm10k pmd PF driver Chen Jing D(Mark) @ 2015-01-30 5:07 ` Chen Jing D(Mark) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 07/18] fm10k: add rx_queue_setup/release function Chen Jing D(Mark) ` (12 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-01-30 5:07 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_reta_update and fm10k_reta_query functions. 2. Add fm10k_link_update and fm10k_dev_infos_get functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 161 +++++++++++++++++++++++++++++++++++ 1 files changed, 161 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 400d841..991d6ee 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -44,6 +44,10 @@ /* Default delay to acquire mailbox lock */ #define FM10K_MBXLOCK_DELAY_US 20 +/* Number of chars per uint32 type */ +#define CHARS_PER_UINT32 (sizeof(uint32_t)) +#define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1) + static void fm10k_mbx_initlock(struct fm10k_hw *hw) { @@ -74,6 +78,22 @@ fm10k_dev_configure(struct rte_eth_dev *dev) return 0; } +static int +fm10k_link_update(struct rte_eth_dev *dev, + __rte_unused int wait_to_complete) +{ + PMD_FUNC_TRACE(); + + /* The host-interface link is always up. The speed is ~50Gbps per Gen3 + * x8 PCIe interface. For now, we leave the speed undefined since there + * is no 50Gbps Ethernet. 
*/ + dev->data->dev_link.link_speed = 0; + dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX; + dev->data->dev_link.link_status = 1; + + return 0; +} + static void fm10k_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { @@ -117,6 +137,143 @@ fm10k_stats_reset(struct rte_eth_dev *dev) fm10k_rebind_hw_stats(hw, hw_stats); } +static void +fm10k_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + PMD_FUNC_TRACE(); + + dev_info->min_rx_bufsize = FM10K_MIN_RX_BUF_SIZE; + dev_info->max_rx_pktlen = FM10K_MAX_PKT_SIZE; + dev_info->max_rx_queues = hw->mac.max_queues; + dev_info->max_tx_queues = hw->mac.max_queues; + dev_info->max_mac_addrs = 1; + dev_info->max_hash_mac_addrs = 0; + dev_info->max_vfs = FM10K_MAX_VF_NUM; + dev_info->max_vmdq_pools = ETH_64_POOLS; + dev_info->rx_offload_capa = + DEV_RX_OFFLOAD_IPV4_CKSUM | + DEV_RX_OFFLOAD_UDP_CKSUM | + DEV_RX_OFFLOAD_TCP_CKSUM; + dev_info->tx_offload_capa = 0; + dev_info->reta_size = FM10K_MAX_RSS_INDICES; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = FM10K_DEFAULT_RX_PTHRESH, + .hthresh = FM10K_DEFAULT_RX_HTHRESH, + .wthresh = FM10K_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = FM10K_RX_FREE_THRESH_DEFAULT(0), + .rx_drop_en = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = FM10K_DEFAULT_TX_PTHRESH, + .hthresh = FM10K_DEFAULT_TX_HTHRESH, + .wthresh = FM10K_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = FM10K_TX_FREE_THRESH_DEFAULT(0), + .tx_rs_thresh = FM10K_TX_RS_THRESH_DEFAULT(0), + .txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS | + ETH_TXQ_FLAGS_NOOFFLOADS, + }; + +} + +static int +fm10k_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint16_t i, j, idx, shift; + uint8_t mask; + uint32_t reta; + + PMD_FUNC_TRACE(); + + if (reta_size > FM10K_MAX_RSS_INDICES) { + PMD_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number hardware can supported " + "(%d)\n", reta_size, FM10K_MAX_RSS_INDICES); + return -EINVAL; + } + + /* + * Update Redirection Table RETA[n], n=0..31. The redirection table has + * 128-entries in 32 registers + */ + for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) { + idx = i / RTE_RETA_GROUP_SIZE; + shift = i % RTE_RETA_GROUP_SIZE; + mask = (uint8_t)((reta_conf[idx].mask >> shift) & + BIT_MASK_PER_UINT32); + if (mask == 0) + continue; + + reta = 0; + if (mask != BIT_MASK_PER_UINT32) + reta = FM10K_READ_REG(hw, FM10K_RETA(0, i >> 2)); + + for (j = 0; j < CHARS_PER_UINT32; j++) { + if (mask & (0x1 << j)) { + if (mask != 0xF) + reta &= ~(UINT8_MAX << CHAR_BIT * j); + reta |= reta_conf[idx].reta[shift + j] << + (CHAR_BIT * j); + } + } + FM10K_WRITE_REG(hw, FM10K_RETA(0, i >> 2), reta); + } + + return 0; +} + +static int +fm10k_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint16_t i, j, idx, shift; + uint8_t mask; + uint32_t reta; + + PMD_FUNC_TRACE(); + + if (reta_size < FM10K_MAX_RSS_INDICES) { + PMD_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number hardware can supported " + "(%d)\n", reta_size, FM10K_MAX_RSS_INDICES); + return -EINVAL; + } + + /* + * Read Redirection Table RETA[n], n=0..31. 
The redirection table has + * 128-entries in 32 registers + */ + for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) { + idx = i / RTE_RETA_GROUP_SIZE; + shift = i % RTE_RETA_GROUP_SIZE; + mask = (uint8_t)((reta_conf[idx].mask >> shift) & + BIT_MASK_PER_UINT32); + if (mask == 0) + continue; + + reta = FM10K_READ_REG(hw, FM10K_RETA(0, i >> 2)); + for (j = 0; j < CHARS_PER_UINT32; j++) { + if (mask & (0x1 << j)) + reta_conf[idx].reta[shift + j] = ((reta >> + CHAR_BIT * j) & UINT8_MAX); + } + } + + return 0; +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -160,8 +317,12 @@ fm10k_setup_mbx_service(struct fm10k_hw *hw) static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_configure = fm10k_dev_configure, + .link_update = fm10k_link_update, .stats_get = fm10k_stats_get, .stats_reset = fm10k_stats_reset, + .dev_infos_get = fm10k_dev_infos_get, + .reta_update = fm10k_reta_update, + .reta_query = fm10k_reta_query, }; static int -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
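From the application side these hooks are reached through rte_eth_dev_rss_reta_update()/query(); a small sketch that spreads the 128 fm10k RETA entries round-robin across a few Rx queues (it assumes the 128-entry table size advertised above):

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    fm10k_example_spread_reta(uint8_t port, uint16_t nb_rx_queues)
    {
        /* 128 entries / RTE_RETA_GROUP_SIZE (64) = 2 groups */
        struct rte_eth_rss_reta_entry64 conf[2];
        uint16_t i;

        memset(conf, 0, sizeof(conf));
        for (i = 0; i < 128; i++) {
            conf[i / RTE_RETA_GROUP_SIZE].mask |=
                1ULL << (i % RTE_RETA_GROUP_SIZE);
            conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
                i % nb_rx_queues;
        }
        return rte_eth_dev_rss_reta_update(port, conf, 128);
    }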
* [dpdk-dev] [PATCH 07/18] fm10k: add rx_queue_setup/release function 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (5 preceding siblings ...) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 06/18] fm10k: add reta update/requery functions Chen Jing D(Mark) @ 2015-01-30 5:07 ` Chen Jing D(Mark) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 08/18] fm10k: add tx_queue_setup/release function Chen Jing D(Mark) ` (11 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-01-30 5:07 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_rx_queue_setup and fm10k_rx_queue_release functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 253 +++++++++++++++++++++++++++++++++++ 1 files changed, 253 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 991d6ee..32388cd 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -41,6 +41,7 @@ #include "fm10k.h" #include "SHARED/fm10k_api.h" +#define FM10K_RX_BUFF_ALIGN 512 /* Default delay to acquire mailbox lock */ #define FM10K_MBXLOCK_DELAY_US 20 @@ -67,6 +68,46 @@ fm10k_mbx_unlock(struct fm10k_hw *hw) rte_spinlock_unlock(FM10K_DEV_PRIVATE_TO_MBXLOCK(hw->back)); } +/* + * clean queue, descriptor rings, free software buffers used when stopping + * device. + */ +static inline void +rx_queue_clean(struct fm10k_rx_queue *q) +{ + union fm10k_rx_desc zero = {.q = {0, 0, 0, 0} }; + uint32_t i; + PMD_FUNC_TRACE(); + + /* zero descriptor rings */ + for (i = 0; i < q->nb_desc; ++i) + q->hw_ring[i] = zero; + + /* free software buffers */ + for (i = 0; i < q->nb_desc; ++i) { + if (q->sw_ring[i]) { + rte_pktmbuf_free_seg(q->sw_ring[i]); + q->sw_ring[i] = NULL; + } + } +} + +/* + * free all queue memory used when releasing the queue (i.e. configure) + */ +static inline void +rx_queue_free(struct fm10k_rx_queue *q) +{ + PMD_FUNC_TRACE(); + if (q) { + PMD_LOG(DEBUG, "Freeing rx queue %p", q); + rx_queue_clean(q); + if (q->sw_ring) + rte_free(q->sw_ring); + rte_free(q); + } +} + static int fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -183,6 +224,216 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev, } +static inline int +check_nb_desc(uint16_t min, uint16_t max, uint16_t mult, uint16_t request) +{ + if ((request < min) || (request > max) || ((request % mult) != 0)) + return -1; + else + return 0; +} + +/* + * Create a memzone for hardware descriptor rings. Malloc cannot be used since + * the physical address is required. If the memzone is already created, then + * this function returns a pointer to the existing memzone. 
+ */ +static inline const struct rte_memzone * +allocate_hw_ring(const char *driver_name, const char *ring_name, + uint8_t port_id, uint16_t queue_id, int socket_id, + uint32_t size, uint32_t align) +{ + char name[RTE_MEMZONE_NAMESIZE]; + const struct rte_memzone *mz; + + snprintf(name, sizeof(name), "%s_%s_%d_%d_%d", + driver_name, ring_name, port_id, queue_id, socket_id); + + /* return the memzone if it already exists */ + mz = rte_memzone_lookup(name); + if (mz) + return mz; + +#ifdef RTE_LIBRTE_XEN_DOM0 + return rte_memzone_reserve_bounded(name, size, socket_id, 0, align, + RTE_PGSIZE_2M); +#else + return rte_memzone_reserve_aligned(name, size, socket_id, 0, align); +#endif +} + +static inline int +check_thresh(uint16_t min, uint16_t max, uint16_t div, uint16_t request) +{ + if ((request < min) || (request > max) || ((div % request) != 0)) + return -1; + else + return 0; +} + +static inline int +handle_rxconf(struct fm10k_rx_queue *q, const struct rte_eth_rxconf *conf) +{ + uint16_t rx_free_thresh; + + if (conf->rx_free_thresh == 0) + rx_free_thresh = FM10K_RX_FREE_THRESH_DEFAULT(q); + else + rx_free_thresh = conf->rx_free_thresh; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_RX_FREE_THRESH_MIN(q), + FM10K_RX_FREE_THRESH_MAX(q), + FM10K_RX_FREE_THRESH_DIV(q), + rx_free_thresh)) { + PMD_LOG(ERR, "rx_free_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + rx_free_thresh, FM10K_RX_FREE_THRESH_MAX(q), + FM10K_RX_FREE_THRESH_MIN(q), + FM10K_RX_FREE_THRESH_DIV(q)); + return (-EINVAL); + } + + q->alloc_thresh = rx_free_thresh; + q->drop_en = conf->rx_drop_en; + q->rx_deferred_start = conf->rx_deferred_start; + + return 0; +} + +/* + * Hardware requires specific alignment for Rx packet buffers. At + * least one of the following two conditions must be satisfied. + * 1. Address is 512B aligned + * 2. Address is 8B aligned and buffer does not cross 4K boundary. + * + * As such, the driver may need to adjust the DMA address within the + * buffer by up to 512B. The mempool element size is checked here + * to make sure a maximally sized Ethernet frame can still be wholly + * contained within the buffer after 512B alignment. + * + * return 1 if the element size is valid, otherwise return 0. + */ +static int +mempool_element_size_valid(struct rte_mempool *mp) +{ + uint32_t min_size; + + /* elt_size includes mbuf header and headroom */ + min_size = mp->elt_size - sizeof(struct rte_mbuf) - + RTE_PKTMBUF_HEADROOM; + + /* account for up to 512B of alignment */ + min_size -= FM10K_RX_BUFF_ALIGN; + + /* sanity check for overflow */ + if (min_size > mp->elt_size) + return 0; + + if (min_size < ETHER_MAX_VLAN_FRAME_LEN) + return 0; + + /* size is valid */ + return 1; +} + +static int +fm10k_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_rxconf *conf, struct rte_mempool *mp) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_rx_queue *q; + const struct rte_memzone *mz; + PMD_FUNC_TRACE(); + + /* make sure the mempool element size can account for alignment. Use + * RTE_LOG directly to make sure this error is seen. 
*/ + if (!mempool_element_size_valid(mp)) { + RTE_LOG(ERR, PMD, "Error:Mempool element size is too small\n"); + return (-EINVAL); + } + + /* make sure a valid number of descriptors have been requested */ + if (check_nb_desc(FM10K_MIN_RX_DESC, FM10K_MAX_RX_DESC, + FM10K_MULT_RX_DESC, nb_desc)) { + PMD_LOG(ERR, "Number of Rx descriptors (%u) must be " + "less than or equal to %lu, " + "greater than or equal to %u, " + "and a multiple of %u", + nb_desc, FM10K_MAX_RX_DESC, FM10K_MIN_RX_DESC, + FM10K_MULT_RX_DESC); + return (-EINVAL); + } + + /* + * if this queue existed already, free the associated memory. The + * queue cannot be reused in case we need to allocate memory on + * different socket than was previously used. + */ + if (dev->data->rx_queues[queue_id] != NULL) { + rx_queue_free(dev->data->rx_queues[queue_id]); + dev->data->rx_queues[queue_id] = NULL; + } + + /* allocate memory for the queue structure */ + q = rte_zmalloc_socket("fm10k", sizeof(*q), RTE_CACHE_LINE_SIZE, + socket_id); + if (q == NULL) { + PMD_LOG(ERR, "Cannot allocate queue structure"); + return (-ENOMEM); + } + + /* setup queue */ + q->mp = mp; + q->nb_desc = nb_desc; + q->port_id = dev->data->port_id; + q->queue_id = queue_id; + q->tail_ptr = (volatile uint32_t *) + &((uint32_t *)hw->hw_addr)[FM10K_RDT(queue_id)]; + if (handle_rxconf(q, conf)) + return (-EINVAL); + + /* allocate memory for the software ring */ + q->sw_ring = rte_zmalloc_socket("fm10k sw ring", + nb_desc * sizeof(struct rte_mbuf *), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->sw_ring == NULL) { + PMD_LOG(ERR, "Cannot allocate software ring"); + rte_free(q); + return (-ENOMEM); + } + + /* + * allocate memory for the hardware descriptor ring. A memzone large + * enough to hold the maximum ring size is requested to allow for + * resizing in later calls to the queue setup function. + */ + mz = allocate_hw_ring(dev->driver->pci_drv.name, "rx_ring", + dev->data->port_id, queue_id, socket_id, + FM10K_MAX_RX_RING_SZ, FM10K_ALIGN_RX_DESC); + if (mz == NULL) { + PMD_LOG(ERR, "Cannot allocate hardware ring"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + q->hw_ring = mz->addr; + q->hw_ring_phys_addr = mz->phys_addr; + + dev->data->rx_queues[queue_id] = q; + return 0; +} + +static void +fm10k_rx_queue_release(void *queue) +{ + PMD_FUNC_TRACE(); + rx_queue_free(queue); +} + static int fm10k_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, @@ -321,6 +572,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_get = fm10k_stats_get, .stats_reset = fm10k_stats_reset, .dev_infos_get = fm10k_dev_infos_get, + .rx_queue_setup = fm10k_rx_queue_setup, + .rx_queue_release = fm10k_rx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
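Seen from the application side, the mempool element-size check above translates into a minimum data room per mbuf; a small sketch of that arithmetic (the 512B constant comes from this patch, the frame and headroom constants from rte_ether.h and the DPDK build configuration):

    #include <rte_mbuf.h>
    #include <rte_ether.h>

    #define FM10K_RX_BUFF_ALIGN 512   /* same value the driver uses above */

    /* Data room each Rx mempool element must provide (beyond the rte_mbuf
     * header itself) so a maximally sized VLAN frame still fits after the
     * buffer address is pushed forward by up to 512B for alignment. */
    static inline uint32_t
    fm10k_example_min_rx_data_room(void)
    {
        return RTE_PKTMBUF_HEADROOM + FM10K_RX_BUFF_ALIGN +
            ETHER_MAX_VLAN_FRAME_LEN;
    }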
* [dpdk-dev] [PATCH 08/18] fm10k: add tx_queue_setup/release function 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (6 preceding siblings ...) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 07/18] fm10k: add rx_queue_setup/release function Chen Jing D(Mark) @ 2015-01-30 5:07 ` Chen Jing D(Mark) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 09/18] fm10k: add RX/TX single queue start/stop function Chen Jing D(Mark) ` (10 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-01-30 5:07 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_tx_queue_setup and fm10k_tx_queue_release functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 203 +++++++++++++++++++++++++++++++++++ 1 files changed, 203 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 32388cd..2decf30 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -108,6 +108,48 @@ rx_queue_free(struct fm10k_rx_queue *q) } } +/* + * clean queue, descriptor rings, free software buffers used when stopping + * device + */ +static inline void +tx_queue_clean(struct fm10k_tx_queue *q) +{ + struct fm10k_tx_desc zero = {0, 0, 0, 0, 0, 0}; + uint32_t i; + PMD_FUNC_TRACE(); + + /* zero descriptor rings */ + for (i = 0; i < q->nb_desc; ++i) + q->hw_ring[i] = zero; + + /* free software buffers */ + for (i = 0; i < q->nb_desc; ++i) { + if (q->sw_ring[i]) { + rte_pktmbuf_free_seg(q->sw_ring[i]); + q->sw_ring[i] = NULL; + } + } +} + +/* + * free all queue memory used when releasing the queue (i.e. 
configure) + */ +static inline void +tx_queue_free(struct fm10k_tx_queue *q) +{ + PMD_FUNC_TRACE(); + if (q) { + PMD_LOG(DEBUG, "Freeing tx queue %p", q); + tx_queue_clean(q); + if (q->rs_tracker.list) + rte_free(q->rs_tracker.list); + if (q->sw_ring) + rte_free(q->sw_ring); + rte_free(q); + } +} + static int fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -434,6 +476,165 @@ fm10k_rx_queue_release(void *queue) rx_queue_free(queue); } +static inline int +handle_txconf(struct fm10k_tx_queue *q, const struct rte_eth_txconf *conf) +{ + uint16_t tx_free_thresh; + uint16_t tx_rs_thresh; + + /* constraint MACROs require that tx_free_thresh is configured + * before tx_rs_thresh */ + if (conf->tx_free_thresh == 0) + tx_free_thresh = FM10K_TX_FREE_THRESH_DEFAULT(q); + else + tx_free_thresh = conf->tx_free_thresh; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_TX_FREE_THRESH_MIN(q), + FM10K_TX_FREE_THRESH_MAX(q), + FM10K_TX_FREE_THRESH_DIV(q), + tx_free_thresh)) { + PMD_LOG(ERR, "tx_free_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + tx_free_thresh, FM10K_TX_FREE_THRESH_MAX(q), + FM10K_TX_FREE_THRESH_MIN(q), + FM10K_TX_FREE_THRESH_DIV(q)); + return (-EINVAL); + } + + q->free_thresh = tx_free_thresh; + + if (conf->tx_rs_thresh == 0) + tx_rs_thresh = FM10K_TX_RS_THRESH_DEFAULT(q); + else + tx_rs_thresh = conf->tx_rs_thresh; + + q->tx_deferred_start = conf->tx_deferred_start; + + /* make sure the requested threshold satisfies the constraints */ + if (check_thresh(FM10K_TX_RS_THRESH_MIN(q), + FM10K_TX_RS_THRESH_MAX(q), + FM10K_TX_RS_THRESH_DIV(q), + tx_rs_thresh)) { + PMD_LOG(ERR, "tx_rs_thresh (%u) must be " + "less than or equal to %u, " + "greater than or equal to %u, " + "and a divisor of %u", + tx_rs_thresh, FM10K_TX_RS_THRESH_MAX(q), + FM10K_TX_RS_THRESH_MIN(q), + FM10K_TX_RS_THRESH_DIV(q)); + return (-EINVAL); + } + + q->rs_thresh = tx_rs_thresh; + + return 0; +} + +static int +fm10k_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_txconf *conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct fm10k_tx_queue *q; + const struct rte_memzone *mz; + PMD_FUNC_TRACE(); + + /* make sure a valid number of descriptors have been requested */ + if (check_nb_desc(FM10K_MIN_TX_DESC, FM10K_MAX_TX_DESC, + FM10K_MULT_TX_DESC, nb_desc)) { + PMD_LOG(ERR, "Number of Tx descriptors (%u) must be " + "less than or equal to %lu, " + "greater than or equal to %u, " + "and a multiple of %u", + nb_desc, FM10K_MAX_TX_DESC, FM10K_MIN_TX_DESC, + FM10K_MULT_TX_DESC); + return (-EINVAL); + } + + /* + * if this queue existed already, free the associated memory. The + * queue cannot be reused in case we need to allocate memory on + * different socket than was previously used. 
+ */ + if (dev->data->tx_queues[queue_id] != NULL) { + tx_queue_free(dev->data->tx_queues[queue_id]); + dev->data->tx_queues[queue_id] = NULL; + } + + /* allocate memory for the queue structure */ + q = rte_zmalloc_socket("fm10k", sizeof(*q), RTE_CACHE_LINE_SIZE, + socket_id); + if (q == NULL) { + PMD_LOG(ERR, "Cannot allocate queue structure"); + return (-ENOMEM); + } + + /* setup queue */ + q->nb_desc = nb_desc; + q->port_id = dev->data->port_id; + q->queue_id = queue_id; + q->tail_ptr = (volatile uint32_t *) + &((uint32_t *)hw->hw_addr)[FM10K_TDT(queue_id)]; + if (handle_txconf(q, conf)) + return (-EINVAL); + + /* allocate memory for the software ring */ + q->sw_ring = rte_zmalloc_socket("fm10k sw ring", + nb_desc * sizeof(struct rte_mbuf *), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->sw_ring == NULL) { + PMD_LOG(ERR, "Cannot allocate software ring"); + rte_free(q); + return (-ENOMEM); + } + + /* + * allocate memory for the hardware descriptor ring. A memzone large + * enough to hold the maximum ring size is requested to allow for + * resizing in later calls to the queue setup function. + */ + mz = allocate_hw_ring(dev->driver->pci_drv.name, "tx_ring", + dev->data->port_id, queue_id, socket_id, + FM10K_MAX_TX_RING_SZ, FM10K_ALIGN_TX_DESC); + if (mz == NULL) { + PMD_LOG(ERR, "Cannot allocate hardware ring"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + q->hw_ring = mz->addr; + q->hw_ring_phys_addr = mz->phys_addr; + + /* + * allocate memory for the RS bit tracker. Enough slots to hold the + * descriptor index for each RS bit needing to be set are required. + */ + q->rs_tracker.list = rte_zmalloc_socket("fm10k rs tracker", + ((nb_desc + 1) / q->rs_thresh) * + sizeof(uint16_t), + RTE_CACHE_LINE_SIZE, socket_id); + if (q->rs_tracker.list == NULL) { + PMD_LOG(ERR, "Cannot allocate RS bit tracker"); + rte_free(q->sw_ring); + rte_free(q); + return (-ENOMEM); + } + + dev->data->tx_queues[queue_id] = q; + return 0; +} + +static void +fm10k_tx_queue_release(void *queue) +{ + PMD_FUNC_TRACE(); + tx_queue_free(queue); +} + static int fm10k_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, @@ -574,6 +775,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_infos_get = fm10k_dev_infos_get, .rx_queue_setup = fm10k_rx_queue_setup, .rx_queue_release = fm10k_rx_queue_release, + .tx_queue_setup = fm10k_tx_queue_setup, + .tx_queue_release = fm10k_tx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
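The tx_free_thresh/tx_rs_thresh handling in handle_txconf() relies on a check_thresh() helper that is not part of this diff. A minimal sketch of the constraint it enforces, assuming the helper simply validates range and divisibility (the real implementation in fm10k.h may differ):

    /*
     * Sketch of a check_thresh()-style validator: a requested threshold is
     * accepted only if it lies in [min, max] and evenly divides div.
     * Returns 0 on success, non-zero otherwise, matching how
     * handle_txconf() interprets the result before returning -EINVAL.
     */
    static inline int
    check_thresh(uint16_t min, uint16_t max, uint16_t div, uint16_t request)
    {
            if (request == 0 || request < min || request > max)
                    return -1;      /* outside the allowed range */
            if (div % request != 0)
                    return -1;      /* must be a divisor of div */
            return 0;
    }

With that convention, the error paths above can report exactly which of the three constraints the requested threshold violated.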
* [dpdk-dev] [PATCH 09/18] fm10k: add RX/TX single queue start/stop function 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (7 preceding siblings ...) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 08/18] fm10k: add tx_queue_setup/release function Chen Jing D(Mark) @ 2015-01-30 5:07 ` Chen Jing D(Mark) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 10/18] fm10k: add dev start/stop functions Chen Jing D(Mark) ` (9 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-01-30 5:07 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add 4 functions fm10k_dev_rx_queue_start, fm10k_dev_rx_queue_stop, fm10k_dev_tx_queue_start, and fm10k_dev_tx_queue_stop. 2. verify Rx packet buffer alignment is valid. Hardware requires specific alignment for Rx packet buffers. At least one of the following two conditions must be satisfied. 1. Address is 512B aligned 2. Address is 8B aligned and buffer does not cross 4K boundary. Alignment is checked by the driver when the Rx queue is reset. It is assumed that if an entire descriptor ring can be filled with buffers containing valid alignment, then all buffers in that mempool have valid address alignment. It is the responsibility of the user to ensure all buffers have valid alignment, as it is the user who creates the mempool. It is assumed the buffer needs only to store a maximum size Ethernet frame. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k.h | 59 ++++++++++ lib/librte_pmd_fm10k/fm10k_ethdev.c | 214 +++++++++++++++++++++++++++++++++++ 2 files changed, 273 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index 9b2d3da..6e9effa 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -221,4 +221,63 @@ static inline uint16_t fifo_remove(struct fifo *fifo) fifo->tail = fifo->list; return val; } + +static inline void +fm10k_pktmbuf_reset(struct rte_mbuf *mb, uint8_t in_port) +{ + rte_mbuf_refcnt_set(mb, 1); + mb->next = NULL; + mb->nb_segs = 1; + + /* enforce 512B alignment on default Rx virtual addresses */ + mb->data_off = (uint16_t)(RTE_PTR_ALIGN((char *)mb->buf_addr + + RTE_PKTMBUF_HEADROOM, FM10K_RX_DATABUF_ALIGN) + - (char *)mb->buf_addr); + mb->port = in_port; +} + +/* + * Verify Rx packet buffer alignment is valid. + * + * Hardware requires specific alignment for Rx packet buffers. At + * least one of the following two conditions must be satisfied. + * 1. Address is 512B aligned + * 2. Address is 8B aligned and buffer does not cross 4K boundary. + * + * Return 1 if buffer alignment satisfies at least one condition, + * otherwise return 0. + * + * Note: Alignment is checked by the driver when the Rx queue is reset. It + * is assumed that if an entire descriptor ring can be filled with + * buffers containing valid alignment, then all buffers in that mempool + * have valid address alignment. It is the responsibility of the user + * to ensure all buffers have valid alignment, as it is the user who + * creates the mempool. + * Note: It is assumed the buffer needs only to store a maximum size Ethernet + * frame. + */ +static inline int +fm10k_addr_alignment_valid(struct rte_mbuf *mb) +{ + uint64_t addr = MBUF_DMA_ADDR_DEFAULT(mb); + uint64_t boundary1, boundary2; + + /* 512B aligned? 
*/ + if (RTE_ALIGN(addr, 512) == addr) + return 1; + + /* 8B aligned, and max Ethernet frame would not cross a 4KB boundary? */ + if (RTE_ALIGN(addr, 8) == addr) { + boundary1 = RTE_ALIGN_FLOOR(addr, 4096); + boundary2 = RTE_ALIGN_FLOOR(addr + ETHER_MAX_VLAN_FRAME_LEN, + 4096); + if (boundary1 == boundary2) + return 1; + } + + /* use RTE_LOG directly to make sure this error is seen */ + RTE_LOG(ERR, PMD, "%s(): Error: Invalid buffer alignment\n", __func__); + + return 0; +} #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 2decf30..b4b49cd 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -69,6 +69,43 @@ fm10k_mbx_unlock(struct fm10k_hw *hw) } /* + * reset queue to initial state, allocate software buffers used when starting + * device. + * return 0 on success + * return -ENOMEM if buffers cannot be allocated + * return -EINVAL if buffers do not satisfy alignment condition + */ +static inline int +rx_queue_reset(struct fm10k_rx_queue *q) +{ + uint64_t dma_addr; + int i, diag; + PMD_FUNC_TRACE(); + + diag = rte_mempool_get_bulk(q->mp, (void **)q->sw_ring, q->nb_desc); + if (diag != 0) + return -ENOMEM; + + for (i = 0; i < q->nb_desc; ++i) { + fm10k_pktmbuf_reset(q->sw_ring[i], q->port_id); + if (!fm10k_addr_alignment_valid(q->sw_ring[i])) { + rte_mempool_put_bulk(q->mp, (void **)q->sw_ring, + q->nb_desc); + return -EINVAL; + } + dma_addr = MBUF_DMA_ADDR_DEFAULT(q->sw_ring[i]); + q->hw_ring[i].q.pkt_addr = dma_addr; + q->hw_ring[i].q.hdr_addr = dma_addr; + } + + q->next_dd = 0; + q->next_alloc = 0; + q->next_trigger = q->alloc_thresh - 1; + FM10K_PCI_REG_WRITE(q->tail_ptr, q->nb_desc - 1); + return 0; +} + +/* * clean queue, descriptor rings, free software buffers used when stopping * device. 
*/ @@ -109,6 +146,49 @@ rx_queue_free(struct fm10k_rx_queue *q) } /* + * disable RX queue, wait unitl HW finished necessary flush operation + */ +static inline int +rx_queue_disable(struct fm10k_hw *hw, uint16_t qnum) +{ + uint32_t reg, i; + + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(qnum)); + FM10K_WRITE_REG(hw, FM10K_RXQCTL(qnum), + reg & ~FM10K_RXQCTL_ENABLE); + + /* Wait 100us at most */ + for (i = 0; i < FM10K_QUEUE_DISABLE_TIMEOUT; i++) { + rte_delay_us(1); + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(i)); + if (!(reg & FM10K_RXQCTL_ENABLE)) + break; + } + + if (i == FM10K_QUEUE_DISABLE_TIMEOUT) + return -1; + + return 0; +} + +/* + * reset queue to initial state, allocate software buffers used when starting + * device + */ +static inline void +tx_queue_reset(struct fm10k_tx_queue *q) +{ + PMD_FUNC_TRACE(); + q->last_free = 0; + q->next_free = 0; + q->nb_used = 0; + q->nb_free = q->nb_desc - 1; + q->free_trigger = q->nb_free - q->free_thresh; + fifo_reset(&q->rs_tracker, (q->nb_desc + 1) / q->rs_thresh); + FM10K_PCI_REG_WRITE(q->tail_ptr, 0); +} + +/* * clean queue, descriptor rings, free software buffers used when stopping * device */ @@ -150,6 +230,32 @@ tx_queue_free(struct fm10k_tx_queue *q) } } +/* + * disable TX queue, wait unitl HW finished necessary flush operation + */ +static inline int +tx_queue_disable(struct fm10k_hw *hw, uint16_t qnum) +{ + uint32_t reg, i; + + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(qnum)); + FM10K_WRITE_REG(hw, FM10K_TXDCTL(qnum), + reg & ~FM10K_TXDCTL_ENABLE); + + /* Wait 100us at most */ + for (i = 0; i < FM10K_QUEUE_DISABLE_TIMEOUT; i++) { + rte_delay_us(1); + reg = FM10K_READ_REG(hw, FM10K_TXDCTL(i)); + if (!(reg & FM10K_TXDCTL_ENABLE)) + break; + } + + if (i == FM10K_QUEUE_DISABLE_TIMEOUT) + return -1; + + return 0; +} + static int fm10k_dev_configure(struct rte_eth_dev *dev) { @@ -162,6 +268,110 @@ fm10k_dev_configure(struct rte_eth_dev *dev) } static int +fm10k_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int err = -1; + uint32_t reg; + struct fm10k_rx_queue *rxq; + + if (rx_queue_id < dev->data->nb_rx_queues) { + rxq = dev->data->rx_queues[rx_queue_id]; + err = rx_queue_reset(rxq); + if (err == -ENOMEM) { + PMD_LOG(ERR, "Failed to alloc memory : %d", err); + return err; + } else if (err == -EINVAL) { + PMD_LOG(ERR, "Invalid buffer address alignment : %d", + err); + return err; + } + + /* Setup the HW Rx Head and Tail Descriptor Pointers + * Note: this must be done AFTER the queue is enabled on real + * hardware, but BEFORE the queue is enabled when using the + * emulation platform. Do it in both places for now and remove + * this comment and the following two register writes when the + * emulation platform is no longer being used. 
+ */ + FM10K_WRITE_REG(hw, FM10K_RDH(rx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_RDT(rx_queue_id), rxq->nb_desc - 1); + + /* Set PF ownership flag for PF devices */ + reg = FM10K_READ_REG(hw, FM10K_RXQCTL(rx_queue_id)); + if (hw->mac.type == fm10k_mac_pf) + reg |= FM10K_RXQCTL_PF; + reg |= FM10K_RXQCTL_ENABLE; + /* enable RX queue */ + FM10K_WRITE_REG(hw, FM10K_RXQCTL(rx_queue_id), reg); + FM10K_WRITE_FLUSH(hw); + + /* Setup the HW Rx Head and Tail Descriptor Pointers + * Note: this must be done AFTER the queue is enabled + */ + FM10K_WRITE_REG(hw, FM10K_RDH(rx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_RDT(rx_queue_id), rxq->nb_desc - 1); + } + + return err; +} + +static int +fm10k_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + if (rx_queue_id < dev->data->nb_rx_queues) { + /* Disable RX queue */ + rx_queue_disable(hw, rx_queue_id); + + /* Free mbuf and clean HW ring */ + rx_queue_clean(dev->data->rx_queues[rx_queue_id]); + } + + return 0; +} + +static int +fm10k_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + /** @todo - this should be defined in the shared code */ +#define FM10K_TXDCTL_WRITE_BACK_MIN_DELAY 0x00010000 + uint32_t txdctl = FM10K_TXDCTL_WRITE_BACK_MIN_DELAY; + int err = 0; + + if (tx_queue_id < dev->data->nb_tx_queues) { + tx_queue_reset(dev->data->tx_queues[tx_queue_id]); + + /* reset head and tail pointers */ + FM10K_WRITE_REG(hw, FM10K_TDH(tx_queue_id), 0); + FM10K_WRITE_REG(hw, FM10K_TDT(tx_queue_id), 0); + + /* enable TX queue */ + FM10K_WRITE_REG(hw, FM10K_TXDCTL(tx_queue_id), + FM10K_TXDCTL_ENABLE | txdctl); + FM10K_WRITE_FLUSH(hw); + } else + err = -1; + + return err; +} + +static int +fm10k_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + if (tx_queue_id < dev->data->nb_tx_queues) { + tx_queue_disable(hw, tx_queue_id); + tx_queue_clean(dev->data->tx_queues[tx_queue_id]); + } + + return 0; +} + +static int fm10k_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) { @@ -773,6 +983,10 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_get = fm10k_stats_get, .stats_reset = fm10k_stats_reset, .dev_infos_get = fm10k_dev_infos_get, + .rx_queue_start = fm10k_dev_rx_queue_start, + .rx_queue_stop = fm10k_dev_rx_queue_stop, + .tx_queue_start = fm10k_dev_tx_queue_start, + .tx_queue_stop = fm10k_dev_tx_queue_stop, .rx_queue_setup = fm10k_rx_queue_setup, .rx_queue_release = fm10k_rx_queue_release, .tx_queue_setup = fm10k_tx_queue_setup, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
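The alignment rule documented for fm10k_addr_alignment_valid() can also be checked by an application before it creates the mempool it intends to attach to an Rx queue. A self-contained restatement of the same two conditions on a raw physical address (rx_buf_addr_ok(), buf_paddr and frame_len are illustrative names, not driver symbols):

    #include <stdint.h>

    /* Either 512B aligned, or 8B aligned with a maximum-size frame kept
     * inside a single 4KB region -- the same test the driver applies when
     * the Rx queue is reset. */
    static int
    rx_buf_addr_ok(uint64_t buf_paddr, uint32_t frame_len)
    {
            if ((buf_paddr & 511) == 0)
                    return 1;       /* condition 1: 512B aligned */
            if ((buf_paddr & 7) == 0 &&
                (buf_paddr & ~4095ULL) == ((buf_paddr + frame_len) & ~4095ULL))
                    return 1;       /* condition 2: 8B aligned, no 4KB crossing */
            return 0;
    }

Checking this up front avoids the -EINVAL path in rx_queue_reset(), which only reports the problem once the queue is started.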
* [dpdk-dev] [PATCH 10/18] fm10k: add dev start/stop functions 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (8 preceding siblings ...) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 09/18] fm10k: add RX/TX single queue start/stop function Chen Jing D(Mark) @ 2015-01-30 5:07 ` Chen Jing D(Mark) 2015-02-04 2:36 ` Qiu, Michael 2015-01-30 5:07 ` [dpdk-dev] [PATCH 11/18] fm10k: add receive and tranmit function Chen Jing D(Mark) ` (8 subsequent siblings) 18 siblings, 1 reply; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-01-30 5:07 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add function to initialize single RX queue. 2. Add function to initialize single TX queue. 3. Add fm10k_dev_start, fm10k_dev_stop and fm10k_dev_close functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 220 +++++++++++++++++++++++++++++++++++ 1 files changed, 220 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index b4b49cd..3cf5e25 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -49,6 +49,8 @@ #define CHARS_PER_UINT32 (sizeof(uint32_t)) #define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1) +static void fm10k_close_mbx_service(struct fm10k_hw *hw); + static void fm10k_mbx_initlock(struct fm10k_hw *hw) { @@ -268,6 +270,98 @@ fm10k_dev_configure(struct rte_eth_dev *dev) } static int +fm10k_dev_tx_init(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, ret; + struct fm10k_tx_queue *txq; + uint64_t base_addr; + uint32_t size; + + /* Disable TXINT to avoid possible interrupt */ + for (i = 0; i < hw->mac.max_queues; i++) + FM10K_WRITE_REG(hw, FM10K_TXINT(i), + 3 << FM10K_TXINT_TIMER_SHIFT); + + /* Setup TX queue */ + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + txq = dev->data->tx_queues[i]; + base_addr = txq->hw_ring_phys_addr; + size = txq->nb_desc * sizeof(struct fm10k_tx_desc); + + /* disable queue to avoid issues while updating state */ + ret = tx_queue_disable(hw, i); + if (ret) { + PMD_LOG(ERR, "failed to disable queue %d\n", i); + return -1; + } + + /* set location and size for descriptor ring */ + FM10K_WRITE_REG(hw, FM10K_TDBAL(i), + base_addr & 0x00000000ffffffffULL); + FM10K_WRITE_REG(hw, FM10K_TDBAH(i), + base_addr >> (CHAR_BIT * sizeof(uint32_t))); + FM10K_WRITE_REG(hw, FM10K_TDLEN(i), size); + } + return 0; +} + +static int +fm10k_dev_rx_init(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, ret; + struct fm10k_rx_queue *rxq; + uint64_t base_addr; + uint32_t size; + uint32_t rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY; + uint16_t buf_size; + struct rte_pktmbuf_pool_private *mbp_priv; + + /* Disable RXINT to avoid possible interrupt */ + for (i = 0; i < hw->mac.max_queues; i++) + FM10K_WRITE_REG(hw, FM10K_RXINT(i), + 3 << FM10K_RXINT_TIMER_SHIFT); + + /* Setup RX queues */ + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + rxq = dev->data->rx_queues[i]; + base_addr = rxq->hw_ring_phys_addr; + size = rxq->nb_desc * sizeof(union fm10k_rx_desc); + + /* disable queue to avoid issues while updating state */ + ret = rx_queue_disable(hw, i); + if (ret) { + PMD_LOG(ERR, "failed to disable queue %d\n", i); + return -1; + } + + /* Setup the Base and Length of the Rx Descriptor Ring */ + 
FM10K_WRITE_REG(hw, FM10K_RDBAL(i), + base_addr & 0x00000000ffffffffULL); + FM10K_WRITE_REG(hw, FM10K_RDBAH(i), + base_addr >> (CHAR_BIT * sizeof(uint32_t))); + FM10K_WRITE_REG(hw, FM10K_RDLEN(i), size); + + /* Configure the Rx buffer size for one buff without split */ + mbp_priv = rte_mempool_get_priv(rxq->mp); + buf_size = (uint16_t) (mbp_priv->mbuf_data_room_size - + RTE_PKTMBUF_HEADROOM); + FM10K_WRITE_REG(hw, FM10K_SRRCTL(i), + buf_size >> FM10K_SRRCTL_BSIZEPKT_SHIFT); + + /* Enable drop on empty, it's RO for VF */ + if (hw->mac.type == fm10k_mac_pf && rxq->drop_en) + rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY; + + FM10K_WRITE_REG(hw, FM10K_RXDCTL(i), rxdctl); + FM10K_WRITE_FLUSH(hw); + } + + return 0; +} + +static int fm10k_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) { struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -371,6 +465,122 @@ fm10k_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) return 0; } +/* fls = find last set bit = 32 minus the number of leading zeros */ +#ifndef fls +#define fls(x) (((x) == 0) ? 0 : (32 - __builtin_clz((x)))) +#endif +#define BSIZEPKT_ROUNDUP ((1 << FM10K_SRRCTL_BSIZEPKT_SHIFT) - 1) +static int +fm10k_dev_start(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int i, diag; + + PMD_FUNC_TRACE(); + + /* stop, init, then start the hw */ + diag = fm10k_stop_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_LOG(ERR, "Hardware stop failed: %d", diag); + return -EIO; + } + + diag = fm10k_init_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_LOG(ERR, "Hardware init failed: %d", diag); + return -EIO; + } + + diag = fm10k_start_hw(hw); + if (diag != FM10K_SUCCESS) { + PMD_LOG(ERR, "Hardware start failed: %d", diag); + return -EIO; + } + + diag = fm10k_dev_tx_init(dev); + if (diag) { + PMD_LOG(ERR, "TX init failed: %d", diag); + return diag; + } + + diag = fm10k_dev_rx_init(dev); + if (diag) { + PMD_LOG(ERR, "RX init failed: %d", diag); + return diag; + } + + if (hw->mac.type == fm10k_mac_pf) { + /* Establish only VSI 0 as valid */ + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(0), FM10K_DGLORTMAP_ANY); + + /* Configure RSS bits used in RETA table */ + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(0), + fls(dev->data->nb_rx_queues - 1) << + FM10K_DGLORTDEC_RSSLENGTH_SHIFT); + + /* Invalidate all other GLORT entries */ + for (i = 1; i < FM10K_DGLORT_COUNT; i++) + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(i), + FM10K_DGLORTMAP_NONE); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + struct fm10k_rx_queue *rxq; + rxq = dev->data->rx_queues[i]; + + if (rxq->rx_deferred_start) + continue; + diag = fm10k_dev_rx_queue_start(dev, i); + if (diag != 0) { + int j; + for (j = 0; j < i; ++j) + rx_queue_clean(dev->data->rx_queues[j]); + return diag; + } + } + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + struct fm10k_tx_queue *txq; + txq = dev->data->tx_queues[i]; + + if (txq->tx_deferred_start) + continue; + diag = fm10k_dev_tx_queue_start(dev, i); + if (diag != 0) { + int j; + for (j = 0; j < dev->data->nb_rx_queues; ++j) + rx_queue_clean(dev->data->rx_queues[j]); + return diag; + } + } + + return 0; +} + +static void +fm10k_dev_stop(struct rte_eth_dev *dev) +{ + int i; + PMD_FUNC_TRACE(); + + for (i = 0; i < dev->data->nb_tx_queues; i++) + fm10k_dev_tx_queue_stop(dev, i); + + for (i = 0; i < dev->data->nb_rx_queues; i++) + fm10k_dev_rx_queue_stop(dev, i); +} + +static void +fm10k_dev_close(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = 
FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + /* Stop mailbox service first */ + fm10k_close_mbx_service(hw); + fm10k_dev_stop(dev); + fm10k_stop_hw(hw); +} + static int fm10k_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) @@ -977,8 +1187,18 @@ fm10k_setup_mbx_service(struct fm10k_hw *hw) return hw->mbx.ops.connect(hw, &hw->mbx); } +static void +fm10k_close_mbx_service(struct fm10k_hw *hw) +{ + /* Disconnect from SM for PF device or PF for VF device */ + hw->mbx.ops.disconnect(hw, &hw->mbx); +} + static struct eth_dev_ops fm10k_eth_dev_ops = { .dev_configure = fm10k_dev_configure, + .dev_start = fm10k_dev_start, + .dev_stop = fm10k_dev_stop, + .dev_close = fm10k_dev_close, .link_update = fm10k_link_update, .stats_get = fm10k_stats_get, .stats_reset = fm10k_stats_reset, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
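The fls() macro defined just before fm10k_dev_start() is used to program FM10K_DGLORTDEC with the RSS table span: fls(nb_rx_queues - 1) is effectively ceil(log2(nb_rx_queues)). A small standalone check of that behaviour (test code only, not part of the driver):

    #include <assert.h>

    /* fls = find last set bit = 32 minus the number of leading zeros */
    #ifndef fls
    #define fls(x) (((x) == 0) ? 0 : (32 - __builtin_clz((x))))
    #endif

    int main(void)
    {
            /* fls(nb_rx_queues - 1) == ceil(log2(nb_rx_queues)) */
            assert(fls(1 - 1) == 0);        /* 1 queue  -> RSS length 0 */
            assert(fls(2 - 1) == 1);        /* 2 queues -> RSS length 1 */
            assert(fls(3 - 1) == 2);        /* 3 queues -> rounded up to 2 */
            assert(fls(8 - 1) == 3);        /* 8 queues -> RSS length 3 */
            return 0;
    }

Rounding up matters for non-power-of-two queue counts, so the indexed range covers every configured queue.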
* Re: [dpdk-dev] [PATCH 10/18] fm10k: add dev start/stop functions 2015-01-30 5:07 ` [dpdk-dev] [PATCH 10/18] fm10k: add dev start/stop functions Chen Jing D(Mark) @ 2015-02-04 2:36 ` Qiu, Michael 2015-02-04 9:55 ` Chen, Jing D 0 siblings, 1 reply; 147+ messages in thread From: Qiu, Michael @ 2015-02-04 2:36 UTC (permalink / raw) To: Chen, Jing D, dev On 1/30/2015 1:08 PM, Chen, Jing D wrote: > From: Jeff Shaw <jeffrey.b.shaw@intel.com> > > 1. Add function to initialize single RX queue. > 2. Add function to initialize single TX queue. > 3. Add fm10k_dev_start, fm10k_dev_stop and fm10k_dev_close > functions. > > Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> > Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> > --- > lib/librte_pmd_fm10k/fm10k_ethdev.c | 220 +++++++++++++++++++++++++++++++++++ > 1 files changed, 220 insertions(+), 0 deletions(-) > > diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c > index b4b49cd..3cf5e25 100644 > --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c > +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c > @@ -49,6 +49,8 @@ > #define CHARS_PER_UINT32 (sizeof(uint32_t)) > #define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1) > > +static void fm10k_close_mbx_service(struct fm10k_hw *hw); > + > static void > fm10k_mbx_initlock(struct fm10k_hw *hw) > { > @@ -268,6 +270,98 @@ fm10k_dev_configure(struct rte_eth_dev *dev) > } > > static int > +fm10k_dev_tx_init(struct rte_eth_dev *dev) > +{ > + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); > + int i, ret; > + struct fm10k_tx_queue *txq; > + uint64_t base_addr; > + uint32_t size; > + > + /* Disable TXINT to avoid possible interrupt */ > + for (i = 0; i < hw->mac.max_queues; i++) > + FM10K_WRITE_REG(hw, FM10K_TXINT(i), > + 3 << FM10K_TXINT_TIMER_SHIFT); > + > + /* Setup TX queue */ > + for (i = 0; i < dev->data->nb_tx_queues; ++i) { > + txq = dev->data->tx_queues[i]; > + base_addr = txq->hw_ring_phys_addr; > + size = txq->nb_desc * sizeof(struct fm10k_tx_desc); > + > + /* disable queue to avoid issues while updating state */ > + ret = tx_queue_disable(hw, i); > + if (ret) { > + PMD_LOG(ERR, "failed to disable queue %d\n", i); > + return -1; > + } > + > + /* set location and size for descriptor ring */ > + FM10K_WRITE_REG(hw, FM10K_TDBAL(i), > + base_addr & 0x00000000ffffffffULL); Here better to make a address mask here. 
> + FM10K_WRITE_REG(hw, FM10K_TDBAH(i), > + base_addr >> (CHAR_BIT * sizeof(uint32_t))); > + FM10K_WRITE_REG(hw, FM10K_TDLEN(i), size); > + } > + return 0; > +} > + > +static int > +fm10k_dev_rx_init(struct rte_eth_dev *dev) > +{ > + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); > + int i, ret; > + struct fm10k_rx_queue *rxq; > + uint64_t base_addr; > + uint32_t size; > + uint32_t rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY; > + uint16_t buf_size; > + struct rte_pktmbuf_pool_private *mbp_priv; > + > + /* Disable RXINT to avoid possible interrupt */ > + for (i = 0; i < hw->mac.max_queues; i++) > + FM10K_WRITE_REG(hw, FM10K_RXINT(i), > + 3 << FM10K_RXINT_TIMER_SHIFT); > + > + /* Setup RX queues */ > + for (i = 0; i < dev->data->nb_rx_queues; ++i) { > + rxq = dev->data->rx_queues[i]; > + base_addr = rxq->hw_ring_phys_addr; > + size = rxq->nb_desc * sizeof(union fm10k_rx_desc); > + > + /* disable queue to avoid issues while updating state */ > + ret = rx_queue_disable(hw, i); > + if (ret) { > + PMD_LOG(ERR, "failed to disable queue %d\n", i); > + return -1; > + } > + > + /* Setup the Base and Length of the Rx Descriptor Ring */ > + FM10K_WRITE_REG(hw, FM10K_RDBAL(i), > + base_addr & 0x00000000ffffffffULL); Here better to make a address mask here. Thanks, Michael > + FM10K_WRITE_REG(hw, FM10K_RDBAH(i), > + base_addr >> (CHAR_BIT * sizeof(uint32_t))); > + FM10K_WRITE_REG(hw, FM10K_RDLEN(i), size); > + > + /* Configure the Rx buffer size for one buff without split */ > + mbp_priv = rte_mempool_get_priv(rxq->mp); > + buf_size = (uint16_t) (mbp_priv->mbuf_data_room_size - > + RTE_PKTMBUF_HEADROOM); > + FM10K_WRITE_REG(hw, FM10K_SRRCTL(i), > + buf_size >> FM10K_SRRCTL_BSIZEPKT_SHIFT); > + > + /* Enable drop on empty, it's RO for VF */ > + if (hw->mac.type == fm10k_mac_pf && rxq->drop_en) > + rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY; > + > + FM10K_WRITE_REG(hw, FM10K_RXDCTL(i), rxdctl); > + FM10K_WRITE_FLUSH(hw); > + } > + > + return 0; > +} > + > +static int > fm10k_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) > { > struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); > @@ -371,6 +465,122 @@ fm10k_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) > return 0; > } > > +/* fls = find last set bit = 32 minus the number of leading zeros */ > +#ifndef fls > +#define fls(x) (((x) == 0) ? 
0 : (32 - __builtin_clz((x)))) > +#endif > +#define BSIZEPKT_ROUNDUP ((1 << FM10K_SRRCTL_BSIZEPKT_SHIFT) - 1) > +static int > +fm10k_dev_start(struct rte_eth_dev *dev) > +{ > + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); > + int i, diag; > + > + PMD_FUNC_TRACE(); > + > + /* stop, init, then start the hw */ > + diag = fm10k_stop_hw(hw); > + if (diag != FM10K_SUCCESS) { > + PMD_LOG(ERR, "Hardware stop failed: %d", diag); > + return -EIO; > + } > + > + diag = fm10k_init_hw(hw); > + if (diag != FM10K_SUCCESS) { > + PMD_LOG(ERR, "Hardware init failed: %d", diag); > + return -EIO; > + } > + > + diag = fm10k_start_hw(hw); > + if (diag != FM10K_SUCCESS) { > + PMD_LOG(ERR, "Hardware start failed: %d", diag); > + return -EIO; > + } > + > + diag = fm10k_dev_tx_init(dev); > + if (diag) { > + PMD_LOG(ERR, "TX init failed: %d", diag); > + return diag; > + } > + > + diag = fm10k_dev_rx_init(dev); > + if (diag) { > + PMD_LOG(ERR, "RX init failed: %d", diag); > + return diag; > + } > + > + if (hw->mac.type == fm10k_mac_pf) { > + /* Establish only VSI 0 as valid */ > + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(0), FM10K_DGLORTMAP_ANY); > + > + /* Configure RSS bits used in RETA table */ > + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(0), > + fls(dev->data->nb_rx_queues - 1) << > + FM10K_DGLORTDEC_RSSLENGTH_SHIFT); > + > + /* Invalidate all other GLORT entries */ > + for (i = 1; i < FM10K_DGLORT_COUNT; i++) > + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(i), > + FM10K_DGLORTMAP_NONE); > + } > + > + for (i = 0; i < dev->data->nb_rx_queues; i++) { > + struct fm10k_rx_queue *rxq; > + rxq = dev->data->rx_queues[i]; > + > + if (rxq->rx_deferred_start) > + continue; > + diag = fm10k_dev_rx_queue_start(dev, i); > + if (diag != 0) { > + int j; > + for (j = 0; j < i; ++j) > + rx_queue_clean(dev->data->rx_queues[j]); > + return diag; > + } > + } > + > + for (i = 0; i < dev->data->nb_tx_queues; i++) { > + struct fm10k_tx_queue *txq; > + txq = dev->data->tx_queues[i]; > + > + if (txq->tx_deferred_start) > + continue; > + diag = fm10k_dev_tx_queue_start(dev, i); > + if (diag != 0) { > + int j; > + for (j = 0; j < dev->data->nb_rx_queues; ++j) > + rx_queue_clean(dev->data->rx_queues[j]); > + return diag; > + } > + } > + > + return 0; > +} > + > +static void > +fm10k_dev_stop(struct rte_eth_dev *dev) > +{ > + int i; > + PMD_FUNC_TRACE(); > + > + for (i = 0; i < dev->data->nb_tx_queues; i++) > + fm10k_dev_tx_queue_stop(dev, i); > + > + for (i = 0; i < dev->data->nb_rx_queues; i++) > + fm10k_dev_rx_queue_stop(dev, i); > +} > + > +static void > +fm10k_dev_close(struct rte_eth_dev *dev) > +{ > + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); > + > + /* Stop mailbox service first */ > + fm10k_close_mbx_service(hw); > + fm10k_dev_stop(dev); > + fm10k_stop_hw(hw); > +} > + > static int > fm10k_link_update(struct rte_eth_dev *dev, > __rte_unused int wait_to_complete) > @@ -977,8 +1187,18 @@ fm10k_setup_mbx_service(struct fm10k_hw *hw) > return hw->mbx.ops.connect(hw, &hw->mbx); > } > > +static void > +fm10k_close_mbx_service(struct fm10k_hw *hw) > +{ > + /* Disconnect from SM for PF device or PF for VF device */ > + hw->mbx.ops.disconnect(hw, &hw->mbx); > +} > + > static struct eth_dev_ops fm10k_eth_dev_ops = { > .dev_configure = fm10k_dev_configure, > + .dev_start = fm10k_dev_start, > + .dev_stop = fm10k_dev_stop, > + .dev_close = fm10k_dev_close, > .link_update = fm10k_link_update, > .stats_get = fm10k_stats_get, > .stats_reset = fm10k_stats_reset, ^ permalink raw reply [flat|nested] 147+ messages in 
thread
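The review comment asks for the open-coded 0x00000000ffffffffULL to be replaced with a named mask when splitting the 64-bit ring base address across the BAL/BAH register pair. One possible shape for that, with hypothetical macro names (nothing below exists in the shared code as posted):

    #include <stdint.h>

    #define FM10K_DMA_ADDR_LO_MASK  UINT64_C(0x00000000ffffffff)
    #define FM10K_DMA_ADDR_LO(addr) ((uint32_t)((addr) & FM10K_DMA_ADDR_LO_MASK))
    #define FM10K_DMA_ADDR_HI(addr) ((uint32_t)((addr) >> 32))

    /* fm10k_dev_tx_init() would then program the ring base as:
     *   FM10K_WRITE_REG(hw, FM10K_TDBAL(i), FM10K_DMA_ADDR_LO(base_addr));
     *   FM10K_WRITE_REG(hw, FM10K_TDBAH(i), FM10K_DMA_ADDR_HI(base_addr));
     * and fm10k_dev_rx_init() the same for RDBAL/RDBAH.
     */

The author acknowledges the point in the follow-up below.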
* Re: [dpdk-dev] [PATCH 10/18] fm10k: add dev start/stop functions 2015-02-04 2:36 ` Qiu, Michael @ 2015-02-04 9:55 ` Chen, Jing D 0 siblings, 0 replies; 147+ messages in thread From: Chen, Jing D @ 2015-02-04 9:55 UTC (permalink / raw) To: Qiu, Michael, dev Hi Michael, > -----Original Message----- > From: Qiu, Michael > Sent: Wednesday, February 04, 2015 10:36 AM > To: Chen, Jing D; dev@dpdk.org > Cc: Zhang, Helin; Shaw, Jeffrey B > Subject: Re: [PATCH 10/18] fm10k: add dev start/stop functions > > On 1/30/2015 1:08 PM, Chen, Jing D wrote: > > From: Jeff Shaw <jeffrey.b.shaw@intel.com> > > > > 1. Add function to initialize single RX queue. > > 2. Add function to initialize single TX queue. > > 3. Add fm10k_dev_start, fm10k_dev_stop and fm10k_dev_close > > functions. > > > > Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> > > Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> > > --- > > lib/librte_pmd_fm10k/fm10k_ethdev.c | 220 > +++++++++++++++++++++++++++++++++++ > > 1 files changed, 220 insertions(+), 0 deletions(-) > > > > diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c > b/lib/librte_pmd_fm10k/fm10k_ethdev.c > > index b4b49cd..3cf5e25 100644 > > --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c > > +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c > > @@ -49,6 +49,8 @@ > > #define CHARS_PER_UINT32 (sizeof(uint32_t)) > > #define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1) > > > > +static void fm10k_close_mbx_service(struct fm10k_hw *hw); > > + > > static void > > fm10k_mbx_initlock(struct fm10k_hw *hw) > > { > > @@ -268,6 +270,98 @@ fm10k_dev_configure(struct rte_eth_dev *dev) > > } > > > > static int > > +fm10k_dev_tx_init(struct rte_eth_dev *dev) > > +{ > > + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data- > >dev_private); > > + int i, ret; > > + struct fm10k_tx_queue *txq; > > + uint64_t base_addr; > > + uint32_t size; > > + > > + /* Disable TXINT to avoid possible interrupt */ > > + for (i = 0; i < hw->mac.max_queues; i++) > > + FM10K_WRITE_REG(hw, FM10K_TXINT(i), > > + 3 << FM10K_TXINT_TIMER_SHIFT); > > + > > + /* Setup TX queue */ > > + for (i = 0; i < dev->data->nb_tx_queues; ++i) { > > + txq = dev->data->tx_queues[i]; > > + base_addr = txq->hw_ring_phys_addr; > > + size = txq->nb_desc * sizeof(struct fm10k_tx_desc); > > + > > + /* disable queue to avoid issues while updating state */ > > + ret = tx_queue_disable(hw, i); > > + if (ret) { > > + PMD_LOG(ERR, "failed to disable queue %d\n", i); > > + return -1; > > + } > > + > > + /* set location and size for descriptor ring */ > > + FM10K_WRITE_REG(hw, FM10K_TDBAL(i), > > + base_addr & 0x00000000ffffffffULL); > > Here better to make a address mask here. OK. Thanks for pointing it out. 
> > > + FM10K_WRITE_REG(hw, FM10K_TDBAH(i), > > + base_addr >> (CHAR_BIT * sizeof(uint32_t))); > > + FM10K_WRITE_REG(hw, FM10K_TDLEN(i), size); > > + } > > + return 0; > > +} > > + > > +static int > > +fm10k_dev_rx_init(struct rte_eth_dev *dev) > > +{ > > + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data- > >dev_private); > > + int i, ret; > > + struct fm10k_rx_queue *rxq; > > + uint64_t base_addr; > > + uint32_t size; > > + uint32_t rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY; > > + uint16_t buf_size; > > + struct rte_pktmbuf_pool_private *mbp_priv; > > + > > + /* Disable RXINT to avoid possible interrupt */ > > + for (i = 0; i < hw->mac.max_queues; i++) > > + FM10K_WRITE_REG(hw, FM10K_RXINT(i), > > + 3 << FM10K_RXINT_TIMER_SHIFT); > > + > > + /* Setup RX queues */ > > + for (i = 0; i < dev->data->nb_rx_queues; ++i) { > > + rxq = dev->data->rx_queues[i]; > > + base_addr = rxq->hw_ring_phys_addr; > > + size = rxq->nb_desc * sizeof(union fm10k_rx_desc); > > + > > + /* disable queue to avoid issues while updating state */ > > + ret = rx_queue_disable(hw, i); > > + if (ret) { > > + PMD_LOG(ERR, "failed to disable queue %d\n", i); > > + return -1; > > + } > > + > > + /* Setup the Base and Length of the Rx Descriptor Ring */ > > + FM10K_WRITE_REG(hw, FM10K_RDBAL(i), > > + base_addr & 0x00000000ffffffffULL); > > Here better to make a address mask here. > OK. Thanks for pointing it out. > Thanks, > Michael > > + FM10K_WRITE_REG(hw, FM10K_RDBAH(i), > > + base_addr >> (CHAR_BIT * sizeof(uint32_t))); > > + FM10K_WRITE_REG(hw, FM10K_RDLEN(i), size); > > + > > + /* Configure the Rx buffer size for one buff without split */ > > + mbp_priv = rte_mempool_get_priv(rxq->mp); > > + buf_size = (uint16_t) (mbp_priv->mbuf_data_room_size - > > + RTE_PKTMBUF_HEADROOM); > > + FM10K_WRITE_REG(hw, FM10K_SRRCTL(i), > > + buf_size >> > FM10K_SRRCTL_BSIZEPKT_SHIFT); > > + > > + /* Enable drop on empty, it's RO for VF */ > > + if (hw->mac.type == fm10k_mac_pf && rxq->drop_en) > > + rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY; > > + > > + FM10K_WRITE_REG(hw, FM10K_RXDCTL(i), rxdctl); > > + FM10K_WRITE_FLUSH(hw); > > + } > > + > > + return 0; > > +} > > + > > +static int > > fm10k_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t > rx_queue_id) > > { > > struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data- > >dev_private); > > @@ -371,6 +465,122 @@ fm10k_dev_tx_queue_stop(struct rte_eth_dev > *dev, uint16_t tx_queue_id) > > return 0; > > } > > > > +/* fls = find last set bit = 32 minus the number of leading zeros */ > > +#ifndef fls > > +#define fls(x) (((x) == 0) ? 
0 : (32 - __builtin_clz((x)))) > > +#endif > > +#define BSIZEPKT_ROUNDUP ((1 << FM10K_SRRCTL_BSIZEPKT_SHIFT) - 1) > > +static int > > +fm10k_dev_start(struct rte_eth_dev *dev) > > +{ > > + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data- > >dev_private); > > + int i, diag; > > + > > + PMD_FUNC_TRACE(); > > + > > + /* stop, init, then start the hw */ > > + diag = fm10k_stop_hw(hw); > > + if (diag != FM10K_SUCCESS) { > > + PMD_LOG(ERR, "Hardware stop failed: %d", diag); > > + return -EIO; > > + } > > + > > + diag = fm10k_init_hw(hw); > > + if (diag != FM10K_SUCCESS) { > > + PMD_LOG(ERR, "Hardware init failed: %d", diag); > > + return -EIO; > > + } > > + > > + diag = fm10k_start_hw(hw); > > + if (diag != FM10K_SUCCESS) { > > + PMD_LOG(ERR, "Hardware start failed: %d", diag); > > + return -EIO; > > + } > > + > > + diag = fm10k_dev_tx_init(dev); > > + if (diag) { > > + PMD_LOG(ERR, "TX init failed: %d", diag); > > + return diag; > > + } > > + > > + diag = fm10k_dev_rx_init(dev); > > + if (diag) { > > + PMD_LOG(ERR, "RX init failed: %d", diag); > > + return diag; > > + } > > + > > + if (hw->mac.type == fm10k_mac_pf) { > > + /* Establish only VSI 0 as valid */ > > + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(0), > FM10K_DGLORTMAP_ANY); > > + > > + /* Configure RSS bits used in RETA table */ > > + FM10K_WRITE_REG(hw, FM10K_DGLORTDEC(0), > > + fls(dev->data->nb_rx_queues - 1) << > > + FM10K_DGLORTDEC_RSSLENGTH_SHIFT); > > + > > + /* Invalidate all other GLORT entries */ > > + for (i = 1; i < FM10K_DGLORT_COUNT; i++) > > + FM10K_WRITE_REG(hw, FM10K_DGLORTMAP(i), > > + FM10K_DGLORTMAP_NONE); > > + } > > + > > + for (i = 0; i < dev->data->nb_rx_queues; i++) { > > + struct fm10k_rx_queue *rxq; > > + rxq = dev->data->rx_queues[i]; > > + > > + if (rxq->rx_deferred_start) > > + continue; > > + diag = fm10k_dev_rx_queue_start(dev, i); > > + if (diag != 0) { > > + int j; > > + for (j = 0; j < i; ++j) > > + rx_queue_clean(dev->data->rx_queues[j]); > > + return diag; > > + } > > + } > > + > > + for (i = 0; i < dev->data->nb_tx_queues; i++) { > > + struct fm10k_tx_queue *txq; > > + txq = dev->data->tx_queues[i]; > > + > > + if (txq->tx_deferred_start) > > + continue; > > + diag = fm10k_dev_tx_queue_start(dev, i); > > + if (diag != 0) { > > + int j; > > + for (j = 0; j < dev->data->nb_rx_queues; ++j) > > + rx_queue_clean(dev->data->rx_queues[j]); > > + return diag; > > + } > > + } > > + > > + return 0; > > +} > > + > > +static void > > +fm10k_dev_stop(struct rte_eth_dev *dev) > > +{ > > + int i; > > + PMD_FUNC_TRACE(); > > + > > + for (i = 0; i < dev->data->nb_tx_queues; i++) > > + fm10k_dev_tx_queue_stop(dev, i); > > + > > + for (i = 0; i < dev->data->nb_rx_queues; i++) > > + fm10k_dev_rx_queue_stop(dev, i); > > +} > > + > > +static void > > +fm10k_dev_close(struct rte_eth_dev *dev) > > +{ > > + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data- > >dev_private); > > + > > + /* Stop mailbox service first */ > > + fm10k_close_mbx_service(hw); > > + fm10k_dev_stop(dev); > > + fm10k_stop_hw(hw); > > +} > > + > > static int > > fm10k_link_update(struct rte_eth_dev *dev, > > __rte_unused int wait_to_complete) > > @@ -977,8 +1187,18 @@ fm10k_setup_mbx_service(struct fm10k_hw > *hw) > > return hw->mbx.ops.connect(hw, &hw->mbx); > > } > > > > +static void > > +fm10k_close_mbx_service(struct fm10k_hw *hw) > > +{ > > + /* Disconnect from SM for PF device or PF for VF device */ > > + hw->mbx.ops.disconnect(hw, &hw->mbx); > > +} > > + > > static struct eth_dev_ops fm10k_eth_dev_ops = { > > .dev_configure = 
fm10k_dev_configure, > > + .dev_start = fm10k_dev_start, > > + .dev_stop = fm10k_dev_stop, > > + .dev_close = fm10k_dev_close, > > .link_update = fm10k_link_update, > > .stats_get = fm10k_stats_get, > > .stats_reset = fm10k_stats_reset, ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH 11/18] fm10k: add receive and tranmit function 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (9 preceding siblings ...) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 10/18] fm10k: add dev start/stop functions Chen Jing D(Mark) @ 2015-01-30 5:07 ` Chen Jing D(Mark) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 12/18] fm10k: add PF RSS support Chen Jing D(Mark) ` (7 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-01-30 5:07 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_recv_pkts and fm10k_xmit_pkts functions. 2. Link app function pointer to actual fm10k recv/xmit functions. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k.h | 7 + lib/librte_pmd_fm10k/fm10k_ethdev.c | 2 + lib/librte_pmd_fm10k/fm10k_rxtx.c | 299 +++++++++++++++++++++++++++++++++++ 3 files changed, 308 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index 6e9effa..bf8b132 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -280,4 +280,11 @@ fm10k_addr_alignment_valid(struct rte_mbuf *mb) return 0; } + +/* Rx and Tx prototypes */ +uint16_t fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); + +uint16_t fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 3cf5e25..9907906 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1224,6 +1224,8 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, PMD_FUNC_TRACE(); dev->dev_ops = &fm10k_eth_dev_ops; + dev->rx_pkt_burst = &fm10k_recv_pkts; + dev->tx_pkt_burst = &fm10k_xmit_pkts; /* only initialize in the primary process */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c index e69de29..952bb0e 100644 --- a/lib/librte_pmd_fm10k/fm10k_rxtx.c +++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c @@ -0,0 +1,299 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2013-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ +#include <rte_ethdev.h> +#include <rte_common.h> +#include "fm10k.h" +#include "SHARED/fm10k_type.h" + +#ifdef RTE_PMD_PACKET_PREFETCH +#define rte_packet_prefetch(p) rte_prefetch1(p) +#else +#define rte_packet_prefetch(p) do {} while (0) +#endif + +static inline void dump_rxd(union fm10k_rx_desc *rxd) +{ +#ifndef RTE_LIBRTE_FM10K_DEBUG_RX + RTE_SET_USED(rxd); +#endif + PMD_LOG_RX(DEBUG, "+----------------|----------------+"); + PMD_LOG_RX(DEBUG, "| GLORT | PKT HDR & TYPE |"); + PMD_LOG_RX(DEBUG, "| 0x%08x | 0x%08x |", rxd->d.glort, + rxd->d.data); + PMD_LOG_RX(DEBUG, "+----------------|----------------+"); + PMD_LOG_RX(DEBUG, "| VLAN & LEN | STATUS |"); + PMD_LOG_RX(DEBUG, "| 0x%08x | 0x%08x |", rxd->d.vlan_len, + rxd->d.staterr); + PMD_LOG_RX(DEBUG, "+----------------|----------------+"); + PMD_LOG_RX(DEBUG, "| RESERVED | RSS_HASH |"); + PMD_LOG_RX(DEBUG, "| 0x%08x | 0x%08x |", 0, rxd->d.rss); + PMD_LOG_RX(DEBUG, "+----------------|----------------+"); + PMD_LOG_RX(DEBUG, "| TIME TAG |"); + PMD_LOG_RX(DEBUG, "| 0x%016lx |", rxd->q.timestamp); + PMD_LOG_RX(DEBUG, "+----------------|----------------+"); +} + +static inline void +rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d) +{ + uint16_t ptype; + static const uint16_t pt_lut[] = { 0, + PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, + PKT_RX_IPV6_HDR, PKT_RX_IPV6_HDR_EXT, + 0, 0, 0 + }; + + if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK) + m->ol_flags |= PKT_RX_RSS_HASH; + + if ((d->d.staterr & (FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)) == + (FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)) + m->ol_flags |= PKT_RX_IP_CKSUM_BAD; + + if ((d->d.staterr & (FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)) == + (FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)) + m->ol_flags |= PKT_RX_L4_CKSUM_BAD; + + if (d->d.staterr & FM10K_RXD_STATUS_VEXT) + m->ol_flags |= PKT_RX_VLAN_PKT; + + if (d->d.staterr & FM10K_RXD_STATUS_HBO) + m->ol_flags |= PKT_RX_HBUF_OVERFLOW; + + if (d->d.staterr & FM10K_RXD_STATUS_RXE) + m->ol_flags |= PKT_RX_RECIP_ERR; + + ptype = (d->d.data & FM10K_RXD_PKTTYPE_MASK_L3) >> + FM10K_RXD_PKTTYPE_SHIFT; + m->ol_flags |= pt_lut[(uint8_t)ptype]; +} + +uint16_t +fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct rte_mbuf *mbuf; + union fm10k_rx_desc desc; + struct fm10k_rx_queue *q = rx_queue; + uint16_t count = 0; + int alloc = 0; + uint16_t next_dd; + + next_dd = q->next_dd; + + nb_pkts = RTE_MIN(nb_pkts, q->alloc_thresh); + for (count = 0; count < nb_pkts; ++count) { + mbuf = q->sw_ring[next_dd]; + desc = q->hw_ring[next_dd]; + if (!(desc.d.staterr & FM10K_RXD_STATUS_DD)) + break; +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX + dump_rxd(&desc); +#endif + rte_pktmbuf_pkt_len(mbuf) = desc.w.length; + rte_pktmbuf_data_len(mbuf) = desc.w.length; + + mbuf->ol_flags = 0; +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE + rx_desc_to_ol_flags(mbuf, &desc); +#endif + + mbuf->hash.rss = desc.d.rss; + + rx_pkts[count] = mbuf; + if (++next_dd == q->nb_desc) { + next_dd = 0; + 
alloc = 1; + } + + /* Prefetch next mbuf while processing current one. */ + rte_prefetch0(q->sw_ring[next_dd]); + + /* + * When next RX descriptor is on a cache-line boundary, + * prefetch the next 4 RX descriptors and the next 8 pointers + * to mbufs. + */ + if ((next_dd & 0x3) == 0) { + rte_prefetch0(&q->hw_ring[next_dd]); + rte_prefetch0(&q->sw_ring[next_dd]); + } + } + + q->next_dd = next_dd; + + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { + rte_mempool_get_bulk(q->mp, (void **)&q->sw_ring[q->next_alloc], + q->alloc_thresh); + for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) { + mbuf = q->sw_ring[q->next_alloc]; + + /* setup static mbuf fields */ + fm10k_pktmbuf_reset(mbuf, q->port_id); + + /* write descriptor */ + desc.q.pkt_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + desc.q.hdr_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + q->hw_ring[q->next_alloc] = desc; + } + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger); + q->next_trigger += q->alloc_thresh; + if (q->next_trigger >= q->nb_desc) { + q->next_trigger = q->alloc_thresh - 1; + q->next_alloc = 0; + } + } + + return count; +} + +static inline void tx_free_descriptors(struct fm10k_tx_queue *q) +{ + uint16_t next_rs, count = 0; + + next_rs = fifo_peek(&q->rs_tracker); + if (!(q->hw_ring[next_rs].flags & FM10K_TXD_FLAG_DONE)) + return; + + /* the DONE flag is set on this descriptor so remove the ID + * from the RS bit tracker and free the buffers */ + fifo_remove(&q->rs_tracker); + + /* wrap around? if so, free buffers from last_free up to but NOT + * including nb_desc */ + if (q->last_free > next_rs) { + count = q->nb_desc - q->last_free; + while (q->last_free < q->nb_desc) { + rte_pktmbuf_free_seg(q->sw_ring[q->last_free]); + q->sw_ring[q->last_free] = NULL; + ++q->last_free; + } + q->last_free = 0; + } + + /* adjust free descriptor count before the next loop */ + q->nb_free += count + (next_rs + 1 - q->last_free); + + /* free buffers from last_free, up to and including next_rs */ + while (q->last_free <= next_rs) { + rte_pktmbuf_free_seg(q->sw_ring[q->last_free]); + q->sw_ring[q->last_free] = NULL; + ++q->last_free; + } + + if (q->last_free == q->nb_desc) + q->last_free = 0; +} + +static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb) +{ + uint16_t last_id; + uint8_t flags; + + /* always set the LAST flag on the last descriptor used to + * transmit the packet */ + flags = FM10K_TXD_FLAG_LAST; + last_id = q->next_free + mb->nb_segs - 1; + if (last_id >= q->nb_desc) + last_id = last_id - q->nb_desc; + + /* but only set the RS flag on the last descriptor if rs_thresh + * descriptors will be used since the RS flag was last set */ + if ((q->nb_used + mb->nb_segs) >= q->rs_thresh) { + flags |= FM10K_TXD_FLAG_RS; + fifo_insert(&q->rs_tracker, last_id); + q->nb_used = 0; + } else { + q->nb_used = q->nb_used + mb->nb_segs; + } + + q->hw_ring[last_id].flags = flags; + q->nb_free -= mb->nb_segs; + + /* set checksum flags on first descriptor of packet. SCTP checksum + * offload is not supported, but we do not explicitely check for this + * case in favor of greatly simplified processing. 
*/ + if (mb->ol_flags & (PKT_TX_IPV4_CSUM | PKT_TX_L4_MASK)) + q->hw_ring[q->next_free].flags |= FM10K_TXD_FLAG_CSUM; + + /* set vlan if requested */ + if (mb->ol_flags & PKT_TX_VLAN_PKT) + q->hw_ring[q->next_free].vlan = mb->vlan_tci; + + /* fill up the rings */ + for (; mb != NULL; mb = mb->next) { + q->sw_ring[q->next_free] = mb; + q->hw_ring[q->next_free].buffer_addr = + rte_cpu_to_le_64(MBUF_DMA_ADDR(mb)); + q->hw_ring[q->next_free].buflen = + rte_cpu_to_le_16(rte_pktmbuf_data_len(mb)); + if (++q->next_free == q->nb_desc) + q->next_free = 0; + } +} + +uint16_t +fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + struct fm10k_tx_queue *q = tx_queue; + struct rte_mbuf *mb; + uint16_t count; + + for (count = 0; count < nb_pkts; ++count) { + mb = tx_pkts[count]; + + /* running low on descriptors? try to free some... */ + if (q->nb_free < q->free_trigger) + tx_free_descriptors(q); + + /* make sure there are enough free descriptors to transmit the + * entire packet before doing anything */ + if (q->nb_free < mb->nb_segs) + break; + + /* sanity check to make sure the mbuf is valid */ + if ((mb->nb_segs == 0) || + ((mb->nb_segs > 1) && (mb->next == NULL))) + break; + + /* process the packet */ + tx_xmit_pkt(q, mb); + } + + /* update the tail pointer if any packets were processed */ + if (count > 0) + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_free); + + return count; +} -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
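fm10k_xmit_pkts() leans on the small fifo_insert()/fifo_peek()/fifo_remove() helpers from fm10k.h to remember which descriptor index carries each RS bit. The exact structure is not part of this diff; a rough approximation, to make the RS tracking in tx_free_descriptors() easier to follow (field names here are guesses):

    #include <stdint.h>

    /* Ring of descriptor indices, one slot per pending RS bit. */
    struct fifo {
            uint16_t *list;         /* backing array                        */
            uint16_t *head;         /* next slot written by fifo_insert()   */
            uint16_t *tail;         /* oldest entry, read by fifo_peek()    */
            uint16_t *endp;         /* one past the last slot (wrap point)  */
    };

    static inline void fifo_insert(struct fifo *f, uint16_t val)
    {
            *f->head = val;
            if (++f->head == f->endp)
                    f->head = f->list;
    }

    static inline uint16_t fifo_peek(struct fifo *f)
    {
            return *f->tail;
    }

    static inline uint16_t fifo_remove(struct fifo *f)
    {
            uint16_t val = *f->tail;
            if (++f->tail == f->endp)
                    f->tail = f->list;
            return val;
    }

Because tx_xmit_pkt() only inserts an entry every rs_thresh descriptors, the tracker needs at most (nb_desc + 1) / rs_thresh slots, which is exactly what fm10k_tx_queue_setup() allocates for rs_tracker.list.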
* [dpdk-dev] [PATCH 12/18] fm10k: add PF RSS support 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (10 preceding siblings ...) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 11/18] fm10k: add receive and tranmit function Chen Jing D(Mark) @ 2015-01-30 5:07 ` Chen Jing D(Mark) 2015-02-01 0:38 ` Neil Horman 2015-01-30 5:07 ` [dpdk-dev] [PATCH 13/18] fm10k: Add scatter receive function Chen Jing D(Mark) ` (6 subsequent siblings) 18 siblings, 1 reply; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-01-30 5:07 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Configure RSS in fm10k_dev_rx_init function. 2. Add fm10k_rss_hash_update and fm10k_rss_hash_conf_get to get and inquery RSS configuration. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 156 +++++++++++++++++++++++++++++++++++ 1 files changed, 156 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 9907906..4711047 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -269,6 +269,78 @@ fm10k_dev_configure(struct rte_eth_dev *dev) return 0; } +static void +fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct rte_eth_conf *dev_conf = &dev->data->dev_conf; + uint32_t mrqc, *key, i, reta, j; + uint64_t hf; + +#define RSS_KEY_SIZE 40 + static uint8_t rss_intel_key[RSS_KEY_SIZE] = { + 0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2, + 0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0, + 0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4, + 0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C, + 0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA, + }; + + if (dev->data->nb_rx_queues == 1 || + dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS || + dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) + return; + + /* random key is rss_intel_key (default) or user provided (rss_key) */ + if (dev_conf->rx_adv_conf.rss_conf.rss_key == NULL) + key = (uint32_t *)rss_intel_key; + else + key = (uint32_t *)dev_conf->rx_adv_conf.rss_conf.rss_key; + + /* Now fill our hash function seeds, 4 bytes at a time */ + for (i = 0; i < RSS_KEY_SIZE / sizeof(*key); ++i) + FM10K_WRITE_REG(hw, FM10K_RSSRK(0, i), key[i]); + + /* + * Fill in redirection table + * The byte-swap is needed because NIC registers are in + * little-endian order. + */ + reta = 0; + for (i = 0, j = 0; i < FM10K_RETA_SIZE; i++, j++) { + if (j == dev->data->nb_rx_queues) + j = 0; + reta = (reta << CHAR_BIT) | j; + if ((i & 3) == 3) + FM10K_WRITE_REG(hw, FM10K_RETA(0, i >> 2), + rte_bswap32(reta)); + } + + /* + * Generate RSS hash based on packet types, TCP/UDP + * port numbers and/or IPv4/v6 src and dst addresses + */ + hf = dev_conf->rx_adv_conf.rss_conf.rss_hf; + mrqc = 0; + mrqc |= (hf & ETH_RSS_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? 
FM10K_MRQC_UDP_IPV6 : 0; + + if (mrqc == 0) { + PMD_LOG(ERR, "Specified RSS mode 0x%"PRIx64"is not supported\n", + hf); + return; + } + + FM10K_WRITE_REG(hw, FM10K_MRQC(0), mrqc); +} + static int fm10k_dev_tx_init(struct rte_eth_dev *dev) { @@ -358,6 +430,8 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_FLUSH(hw); } + /* Configure RSS if applicable */ + fm10k_dev_mq_rx_configure(dev); return 0; } @@ -1146,6 +1220,86 @@ fm10k_reta_query(struct rte_eth_dev *dev, return 0; } +static int +fm10k_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t *key = (uint32_t *)rss_conf->rss_key; + uint32_t mrqc; + uint64_t hf = rss_conf->rss_hf; + int i; + + PMD_FUNC_TRACE(); + + if (rss_conf->rss_key_len < FM10K_RSSRK_SIZE * + FM10K_RSSRK_ENTRIES_PER_REG) + return -EINVAL; + + if (hf == 0) + return -EINVAL; + + mrqc = 0; + mrqc |= (hf & ETH_RSS_IPV4_TCP) ? FM10K_MRQC_TCP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV4) ? FM10K_MRQC_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_EX) ? FM10K_MRQC_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_TCP_EX) ? FM10K_MRQC_TCP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV4_UDP) ? FM10K_MRQC_UDP_IPV4 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP) ? FM10K_MRQC_UDP_IPV6 : 0; + mrqc |= (hf & ETH_RSS_IPV6_UDP_EX) ? FM10K_MRQC_UDP_IPV6 : 0; + + /* If the mapping doesn't fit any supported, return */ + if (mrqc == 0) + return -EINVAL; + + if (key != NULL) + for (i = 0; i < FM10K_RSSRK_SIZE; ++i) + FM10K_WRITE_REG(hw, FM10K_RSSRK(0, i), key[i]); + + FM10K_WRITE_REG(hw, FM10K_MRQC(0), mrqc); + + return 0; +} + +static int +fm10k_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t *key = (uint32_t *)rss_conf->rss_key; + uint32_t mrqc; + uint64_t hf; + int i; + + PMD_FUNC_TRACE(); + + if (rss_conf->rss_key_len < FM10K_RSSRK_SIZE * + FM10K_RSSRK_ENTRIES_PER_REG) + return -EINVAL; + + if (key != NULL) + for (i = 0; i < FM10K_RSSRK_SIZE; ++i) + key[i] = FM10K_READ_REG(hw, FM10K_RSSRK(0, i)); + + mrqc = FM10K_READ_REG(hw, FM10K_MRQC(0)); + hf = 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? ETH_RSS_IPV4_TCP : 0; + hf |= (mrqc & FM10K_MRQC_IPV4) ? ETH_RSS_IPV4 : 0; + hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6 : 0; + hf |= (mrqc & FM10K_MRQC_IPV6) ? ETH_RSS_IPV6_EX : 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP : 0; + hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP_EX : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? ETH_RSS_IPV4_UDP : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP : 0; + hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP_EX : 0; + + rss_conf->rss_hf = hf; + + return 0; +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -1213,6 +1367,8 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .tx_queue_release = fm10k_tx_queue_release, .reta_update = fm10k_reta_update, .reta_query = fm10k_reta_query, + .rss_hash_update = fm10k_rss_hash_update, + .rss_hash_conf_get = fm10k_rss_hash_conf_get, }; static int -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
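The RETA fill loop in fm10k_dev_mq_rx_configure() packs four one-byte queue indices into each 32-bit FM10K_RETA register and byte-swaps the result so that the lowest table entry ends up in the least-significant byte. A standalone illustration of the packing for a 4-queue port (it prints the values the loop would write; no hardware access, and RETA_SIZE/__builtin_bswap32 stand in for the driver's FM10K_RETA_SIZE and rte_bswap32):

    #include <stdint.h>
    #include <stdio.h>

    #define RETA_SIZE 128                   /* placeholder for FM10K_RETA_SIZE */

    int main(void)
    {
            uint32_t reta = 0;
            unsigned int nb_rx_queues = 4, i, j;

            for (i = 0, j = 0; i < RETA_SIZE; i++, j++) {
                    if (j == nb_rx_queues)
                            j = 0;
                    reta = (reta << 8) | j;
                    if ((i & 3) == 3)
                            printf("RETA[%u] = 0x%08x\n", i >> 2,
                                   __builtin_bswap32(reta));
            }
            /* Every register prints 0x03020100: queue 0 in the LSB,
             * queue 3 in the MSB, i.e. little-endian table order. */
            return 0;
    }

The older bytes shifted out of reta are harmless because only the last four entries are live when the register is written.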
* Re: [dpdk-dev] [PATCH 12/18] fm10k: add PF RSS support 2015-01-30 5:07 ` [dpdk-dev] [PATCH 12/18] fm10k: add PF RSS support Chen Jing D(Mark) @ 2015-02-01 0:38 ` Neil Horman 0 siblings, 0 replies; 147+ messages in thread From: Neil Horman @ 2015-02-01 0:38 UTC (permalink / raw) To: Chen Jing D(Mark); +Cc: dev On Fri, Jan 30, 2015 at 01:07:28PM +0800, Chen Jing D(Mark) wrote: > From: Jeff Shaw <jeffrey.b.shaw@intel.com> > > 1. Configure RSS in fm10k_dev_rx_init function. > 2. Add fm10k_rss_hash_update and fm10k_rss_hash_conf_get to get > and inquery RSS configuration. > > Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> > Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> > --- > lib/librte_pmd_fm10k/fm10k_ethdev.c | 156 +++++++++++++++++++++++++++++++++++ > 1 files changed, 156 insertions(+), 0 deletions(-) > > diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c > index 9907906..4711047 100644 > --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c > +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c > @@ -269,6 +269,78 @@ fm10k_dev_configure(struct rte_eth_dev *dev) > return 0; > } > > +static void > +fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev) > +{ > + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); > + struct rte_eth_conf *dev_conf = &dev->data->dev_conf; > + uint32_t mrqc, *key, i, reta, j; > + uint64_t hf; > + > +#define RSS_KEY_SIZE 40 > + static uint8_t rss_intel_key[RSS_KEY_SIZE] = { > + 0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2, > + 0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0, > + 0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4, > + 0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C, > + 0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA, > + }; > + > + if (dev->data->nb_rx_queues == 1 || > + dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS || > + dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) > + return; > + > + /* random key is rss_intel_key (default) or user provided (rss_key) */ > + if (dev_conf->rx_adv_conf.rss_conf.rss_key == NULL) > + key = (uint32_t *)rss_intel_key; Its not really random if its statically allocated and assigned. The kernel has started randomizing these default keys, it seems like a sensible thing to do here as well, as this is the init path and not performance sensitive Neil ^ permalink raw reply [flat|nested] 147+ messages in thread
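A minimal sketch of the randomization Neil suggests, assuming rte_rand() from <rte_random.h> is an acceptable entropy source on this init-only path; the helper name is made up for illustration and is not part of the posted patch:

	#include <rte_random.h>

	/* Fill the default RSS key with pseudo-random bytes, once, at init. */
	static void
	fm10k_randomize_rss_key(uint8_t *key, unsigned int len)
	{
		unsigned int i;

		for (i = 0; i < len; i++)
			key[i] = (uint8_t)rte_rand();
	}

	/* In fm10k_dev_mq_rx_configure(), when the user provides no key: */
	if (dev_conf->rx_adv_conf.rss_conf.rss_key == NULL) {
		fm10k_randomize_rss_key(rss_intel_key, RSS_KEY_SIZE);
		key = (uint32_t *)rss_intel_key;
	}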
* [dpdk-dev] [PATCH 13/18] fm10k: Add scatter receive function 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (11 preceding siblings ...) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 12/18] fm10k: add PF RSS support Chen Jing D(Mark) @ 2015-01-30 5:07 ` Chen Jing D(Mark) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 14/18] fm10k: add function to set vlan Chen Jing D(Mark) ` (5 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-01-30 5:07 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add fm10k_recv_scattered_pkts function to receive jumbo frame and multi-segment packets. 2. Configure correct receive function in rx_init and dev_init. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k.h | 3 + lib/librte_pmd_fm10k/fm10k_ethdev.c | 15 ++++ lib/librte_pmd_fm10k/fm10k_rxtx.c | 128 +++++++++++++++++++++++++++++++++++ 3 files changed, 146 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k.h b/lib/librte_pmd_fm10k/fm10k.h index bf8b132..8bdefad 100644 --- a/lib/librte_pmd_fm10k/fm10k.h +++ b/lib/librte_pmd_fm10k/fm10k.h @@ -285,6 +285,9 @@ fm10k_addr_alignment_valid(struct rte_mbuf *mb) uint16_t fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +uint16_t fm10k_recv_scattered_pkts(void *rx_queue, + struct rte_mbuf **rx_pkts, uint16_t nb_pkts); + uint16_t fm10k_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); #endif diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 4711047..b231c31 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -422,6 +422,13 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_REG(hw, FM10K_SRRCTL(i), buf_size >> FM10K_SRRCTL_BSIZEPKT_SHIFT); + /* It adds dual VLAN length for supporting dual VLAN */ + if ((dev->data->dev_conf.rxmode.max_rx_pkt_len + + 2 * FM10K_VLAN_TAG_SIZE) > buf_size){ + dev->data->scattered_rx = 1; + dev->rx_pkt_burst = fm10k_recv_scattered_pkts; + } + /* Enable drop on empty, it's RO for VF */ if (hw->mac.type == fm10k_mac_pf && rxq->drop_en) rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY; @@ -430,6 +437,11 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_WRITE_FLUSH(hw); } + if (dev->data->dev_conf.rxmode.enable_scatter) { + dev->rx_pkt_burst = fm10k_recv_scattered_pkts; + dev->data->scattered_rx = 1; + } + /* Configure RSS if applicable */ fm10k_dev_mq_rx_configure(dev); return 0; @@ -1383,6 +1395,9 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, dev->rx_pkt_burst = &fm10k_recv_pkts; dev->tx_pkt_burst = &fm10k_xmit_pkts; + if (dev->data->scattered_rx) + dev->rx_pkt_burst = &fm10k_recv_scattered_pkts; + /* only initialize in the primary process */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c index 952bb0e..0400b69 100644 --- a/lib/librte_pmd_fm10k/fm10k_rxtx.c +++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c @@ -177,6 +177,134 @@ fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, return count; } +uint16_t +fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct rte_mbuf *mbuf; + union fm10k_rx_desc desc; + struct fm10k_rx_queue *q = rx_queue; + uint16_t count = 0; + uint16_t nb_rcv, nb_seg; + int alloc = 0; + uint16_t next_dd; + struct rte_mbuf *first_seg 
= q->pkt_first_seg; + struct rte_mbuf *last_seg = q->pkt_last_seg; + + next_dd = q->next_dd; + nb_rcv = 0; + + nb_seg = RTE_MIN(nb_pkts, q->alloc_thresh); + for (count = 0; count < nb_seg; count++) { + mbuf = q->sw_ring[next_dd]; + desc = q->hw_ring[next_dd]; + if (!(desc.d.staterr & FM10K_RXD_STATUS_DD)) + break; +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX + dump_rxd(&desc); +#endif + + if (++next_dd == q->nb_desc) { + next_dd = 0; + alloc = 1; + } + + /* Prefetch next mbuf while processing current one. */ + rte_prefetch0(q->sw_ring[next_dd]); + + /* + * When next RX descriptor is on a cache-line boundary, + * prefetch the next 4 RX descriptors and the next 8 pointers + * to mbufs. + */ + if ((next_dd & 0x3) == 0) { + rte_prefetch0(&q->hw_ring[next_dd]); + rte_prefetch0(&q->sw_ring[next_dd]); + } + + /* Fill data length */ + rte_pktmbuf_data_len(mbuf) = desc.w.length; + + /* + * If this is the first buffer of the received packet, + * set the pointer to the first mbuf of the packet and + * initialize its context. + * Otherwise, update the total length and the number of segments + * of the current scattered packet, and update the pointer to + * the last mbuf of the current packet. + */ + if (!first_seg) { + first_seg = mbuf; + first_seg->pkt_len = desc.w.length; + } else { + first_seg->pkt_len = + (uint16_t)(first_seg->pkt_len + + rte_pktmbuf_data_len(mbuf)); + first_seg->nb_segs++; + last_seg->next = mbuf; + } + + /* + * If this is not the last buffer of the received packet, + * update the pointer to the last mbuf of the current scattered + * packet and continue to parse the RX ring. + */ + if (!(desc.d.staterr & FM10K_RXD_STATUS_EOP)) { + last_seg = mbuf; + continue; + } + + first_seg->ol_flags = 0; +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE + rx_desc_to_ol_flags(first_seg, &desc); +#endif + first_seg->hash.rss = desc.d.rss; + + /* Prefetch data of first segment, if configured to do so. */ + rte_packet_prefetch((char *)first_seg->buf_addr + + first_seg->data_off); + + /* + * Store the mbuf address into the next entry of the array + * of returned packets. + */ + rx_pkts[nb_rcv++] = first_seg; + + /* + * Setup receipt context for a new packet. + */ + first_seg = NULL; + } + + q->next_dd = next_dd; + q->pkt_first_seg = first_seg; + q->pkt_last_seg = last_seg; + + if ((q->next_dd > q->next_trigger) || (alloc == 1)) { + rte_mempool_get_bulk(q->mp, (void **)&q->sw_ring[q->next_alloc], + q->alloc_thresh); + for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) { + mbuf = q->sw_ring[q->next_alloc]; + + /* setup static mbuf fields */ + fm10k_pktmbuf_reset(mbuf, q->port_id); + + /* write descriptor */ + desc.q.pkt_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + desc.q.hdr_addr = MBUF_DMA_ADDR_DEFAULT(mbuf); + q->hw_ring[q->next_alloc] = desc; + } + FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger); + q->next_trigger += q->alloc_thresh; + if (q->next_trigger >= q->nb_desc) { + q->next_trigger = q->alloc_thresh - 1; + q->next_alloc = 0; + } + } + + return nb_rcv; +} + static inline void tx_free_descriptors(struct fm10k_tx_queue *q) { uint16_t next_rs, count = 0; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
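The scattered path above is selected in two ways: implicitly, when max_rx_pkt_len plus the dual VLAN allowance no longer fits in one mbuf data buffer, and explicitly through rxmode.enable_scatter. A minimal application-side sketch of the explicit case, reusing the rxmode fields this patch reads (the 9000-byte frame size is illustrative):

	struct rte_eth_conf port_conf = {
		.rxmode = {
			.jumbo_frame = 1,
			.max_rx_pkt_len = 9000,  /* larger than one mbuf data room */
			.enable_scatter = 1,     /* forces fm10k_recv_scattered_pkts */
		},
	};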
* [dpdk-dev] [PATCH 14/18] fm10k: add function to set vlan 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (12 preceding siblings ...) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 13/18] fm10k: Add scatter receive function Chen Jing D(Mark) @ 2015-01-30 5:07 ` Chen Jing D(Mark) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 15/18] fm10k: Add SRIOV-VF support Chen Jing D(Mark) ` (4 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-01-30 5:07 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Add fm10k_vlan_filter_set to set vlan. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 14 ++++++++++++++ 1 files changed, 14 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index b231c31..daa687e 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -772,6 +772,19 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev, } +static int +fm10k_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + PMD_FUNC_TRACE(); + + /* @todo - add support for the VF */ + if (hw->mac.type != fm10k_mac_pf) + return -ENOTSUP; + + return fm10k_update_vlan(hw, vlan_id, 0, on); +} + static inline int check_nb_desc(uint16_t min, uint16_t max, uint16_t mult, uint16_t request) { @@ -1369,6 +1382,7 @@ static struct eth_dev_ops fm10k_eth_dev_ops = { .stats_get = fm10k_stats_get, .stats_reset = fm10k_stats_reset, .dev_infos_get = fm10k_dev_infos_get, + .vlan_filter_set = fm10k_vlan_filter_set, .rx_queue_start = fm10k_dev_rx_queue_start, .rx_queue_stop = fm10k_dev_rx_queue_stop, .tx_queue_start = fm10k_dev_tx_queue_start, -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
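For reference, the .vlan_filter_set hook registered above is what the generic rte_eth_dev_vlan_filter() API dispatches to; a minimal usage sketch, assuming hw_vlan_filter was enabled in rxmode at configure time (port and VLAN ids are illustrative):

	/* Accept VLAN 100 on port 0; a VF returns -ENOTSUP with this patch. */
	int ret = rte_eth_dev_vlan_filter(0, 100, 1);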
* [dpdk-dev] [PATCH 15/18] fm10k: Add SRIOV-VF support 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (13 preceding siblings ...) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 14/18] fm10k: add function to set vlan Chen Jing D(Mark) @ 2015-01-30 5:07 ` Chen Jing D(Mark) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 16/18] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark) ` (3 subsequent siblings) 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-01-30 5:07 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> The fm10k pmd driver supports both PF and VF devices with a single copy of code. The reason is that the NIC maps registers with the same function in PF and VF to the same PCI I/O address, so PF/VF drivers use the same address to access the registers belonging to them and the HW will translate the request to the correct unit. For the functionality that is unique to the PF, the driver checks the current driver type and behaves accordingly. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index daa687e..40e3a2b 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1541,6 +1541,7 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, */ static struct rte_pci_id pci_id_fm10k_map[] = { #define RTE_PCI_DEV_ID_DECL_FM10K(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, +#define RTE_PCI_DEV_ID_DECL_FM10KVF(vend, dev) { RTE_PCI_DEVICE(vend, dev) }, #include "rte_pci_dev_ids.h" { .vendor_id = 0, /* sentinel */ }, }; -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH 16/18] fm10k: add PF and VF interrupt handling function 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (14 preceding siblings ...) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 15/18] fm10k: Add SRIOV-VF support Chen Jing D(Mark) @ 2015-01-30 5:07 ` Chen Jing D(Mark) 2015-02-01 0:42 ` Neil Horman 2015-01-30 5:07 ` [dpdk-dev] [PATCH 17/18] Change lib/Makefile to add fm10k driver into compile list Chen Jing D(Mark) ` (2 subsequent siblings) 18 siblings, 1 reply; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-01-30 5:07 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> 1. Add 2 interrupt handling functions, one for PF and one for VF. 2. Enable interrupt after completing initialization of NIC. Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/librte_pmd_fm10k/fm10k_ethdev.c | 268 +++++++++++++++++++++++++++++++++++ 1 files changed, 268 insertions(+), 0 deletions(-) diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c index 40e3a2b..685fa8f 100644 --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c @@ -1325,6 +1325,256 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev, return 0; } +static void +fm10k_dev_enable_intr_pf(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t int_map = FM10K_INT_MAP_IMMEDIATE; + + /* Bind all local non-queue interrupt to vector 0 */ + int_map |= 0; + + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_Mailbox), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_PCIeFault), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SwitchUpDown), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SwitchEvent), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SRAM), int_map); + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_VFLR), int_map); + + /* Enable misc causes */ + FM10K_WRITE_REG(hw, FM10K_EIMR, FM10K_EIMR_ENABLE(PCA_FAULT) | + FM10K_EIMR_ENABLE(THI_FAULT) | + FM10K_EIMR_ENABLE(FUM_FAULT) | + FM10K_EIMR_ENABLE(MAILBOX) | + FM10K_EIMR_ENABLE(SWITCHREADY) | + FM10K_EIMR_ENABLE(SWITCHNOTREADY) | + FM10K_EIMR_ENABLE(SRAMERROR) | + FM10K_EIMR_ENABLE(VFLR)); + + /* Enable ITR 0 */ + FM10K_WRITE_REG(hw, FM10K_ITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + FM10K_WRITE_FLUSH(hw); +} + +static void +fm10k_dev_enable_intr_vf(struct rte_eth_dev *dev) +{ + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t int_map = FM10K_INT_MAP_IMMEDIATE; + + /* Bind all local non-queue interrupt to vector 0 */ + int_map |= 0; + + /* Only INT 0 availiable, other 15 are reserved. 
*/ + FM10K_WRITE_REG(hw, FM10K_VFINT_MAP, int_map); + + /* Enable ITR 0 */ + FM10K_WRITE_REG(hw, FM10K_VFITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + FM10K_WRITE_FLUSH(hw); +} + +static int +fm10k_dev_handle_fault(struct fm10k_hw *hw, uint32_t eicr) +{ + struct fm10k_fault fault; + int err; + const char *estr = "Unknown error"; + + /* Process PCA fault */ + if (eicr & FM10K_EIMR_PCA_FAULT) { + err = fm10k_get_fault(hw, FM10K_PCA_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case PCA_NO_FAULT: + estr = "PCA_NO_FAULT"; break; + case PCA_UNMAPPED_ADDR: + estr = "PCA_UNMAPPED_ADDR"; break; + case PCA_BAD_QACCESS_PF: + estr = "PCA_BAD_QACCESS_PF"; break; + case PCA_BAD_QACCESS_VF: + estr = "PCA_BAD_QACCESS_VF"; break; + case PCA_MALICIOUS_REQ: + estr = "PCA_MALICIOUS_REQ"; break; + case PCA_POISONED_TLP: + estr = "PCA_POISONED_TLP"; break; + case PCA_TLP_ABORT: + estr = "PCA_TLP_ABORT"; break; + default: + goto error; + } + PMD_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIu64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + /* Process THI fault */ + if (eicr & FM10K_EIMR_THI_FAULT) { + err = fm10k_get_fault(hw, FM10K_THI_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case THI_NO_FAULT: + estr = "THI_NO_FAULT"; break; + case THI_MAL_DIS_Q_FAULT: + estr = "THI_MAL_DIS_Q_FAULT"; break; + default: + goto error; + } + PMD_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIu64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + /* Process FUM fault */ + if (eicr & FM10K_EIMR_FUM_FAULT) { + err = fm10k_get_fault(hw, FM10K_FUM_FAULT, &fault); + if (err) + goto error; + switch (fault.type) { + case FUM_NO_FAULT: + estr = "FUM_NO_FAULT"; break; + case FUM_UNMAPPED_ADDR: + estr = "FUM_UNMAPPED_ADDR"; break; + case FUM_POISONED_TLP: + estr = "FUM_POISONED_TLP"; break; + case FUM_BAD_VF_QACCESS: + estr = "FUM_BAD_VF_QACCESS"; break; + case FUM_ADD_DECODE_ERR: + estr = "FUM_ADD_DECODE_ERR"; break; + case FUM_RO_ERROR: + estr = "FUM_RO_ERROR"; break; + case FUM_QPRC_CRC_ERROR: + estr = "FUM_QPRC_CRC_ERROR"; break; + case FUM_CSR_TIMEOUT: + estr = "FUM_CSR_TIMEOUT"; break; + case FUM_INVALID_TYPE: + estr = "FUM_INVALID_TYPE"; break; + case FUM_INVALID_LENGTH: + estr = "FUM_INVALID_LENGTH"; break; + case FUM_INVALID_BE: + estr = "FUM_INVALID_BE"; break; + case FUM_INVALID_ALIGN: + estr = "FUM_INVALID_ALIGN"; break; + default: + goto error; + } + PMD_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIu64" Spec: 0x%x", + estr, fault.func ? "VF" : "PF", fault.func, + fault.address, fault.specinfo); + } + + if (estr) + return 0; + return 0; +error: + PMD_LOG(ERR, "Failed to handle fault event."); + return err; +} + +/** + * PF interrupt handler triggered by NIC for handling specific interrupt. + * + * @param handle + * Pointer to interrupt handle. + * @param param + * The address of parameter (struct rte_eth_dev *) regsitered before. 
+ * + * @return + * void + */ +static void +fm10k_dev_interrupt_handler_pf( + __rte_unused struct rte_intr_handle *handle, + void *param) +{ + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t cause, status; + + if (hw->mac.type != fm10k_mac_pf) + return; + + cause = FM10K_READ_REG(hw, FM10K_EICR); + + /* Handle PCI fault cases */ + if (cause & FM10K_EICR_FAULT_MASK) { + PMD_LOG(ERR, "INT: find fault!"); + fm10k_dev_handle_fault(hw, cause); + } + + /* Handle switch up/down */ + if (cause & FM10K_EICR_SWITCHNOTREADY) + PMD_LOG(ERR, "INT: Switch is not ready"); + + if (cause & FM10K_EICR_SWITCHREADY) + PMD_LOG(ERR, "INT: Switch is ready"); + + /* Handle mailbox message */ + fm10k_mbx_lock(hw); + hw->mbx.ops.process(hw, &hw->mbx); + fm10k_mbx_unlock(hw); + + /* Handle SRAM error */ + if (cause & FM10K_EICR_SRAMERROR) { + PMD_LOG(ERR, "INT: SRAM error on PEP"); + + status = FM10K_READ_REG(hw, FM10K_SRAM_IP); + /* Write to clear pending bits */ + FM10K_WRITE_REG(hw, FM10K_SRAM_IP, status); + + /* Todo: print out error message after shared code updates */ + } + + /* Clear these 3 events if having any */ + cause &= FM10K_EICR_SWITCHNOTREADY | FM10K_EICR_MAILBOX | + FM10K_EICR_SWITCHREADY; + if (cause) + FM10K_WRITE_REG(hw, FM10K_EICR, cause); + + /* Re-enable interrupt from device side */ + FM10K_WRITE_REG(hw, FM10K_ITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + /* Re-enable interrupt from host side */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); +} + +/** + * VF interrupt handler triggered by NIC for handling specific interrupt. + * + * @param handle + * Pointer to interrupt handle. + * @param param + * The address of parameter (struct rte_eth_dev *) regsitered before. + * + * @return + * void + */ +static void +fm10k_dev_interrupt_handler_vf( + __rte_unused struct rte_intr_handle *handle, + void *param) +{ + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + if (hw->mac.type != fm10k_mac_vf) + return; + + /* Handle mailbox message if lock is acquired */ + fm10k_mbx_lock(hw); + hw->mbx.ops.process(hw, &hw->mbx); + fm10k_mbx_unlock(hw); + + /* Re-enable interrupt from device side */ + FM10K_WRITE_REG(hw, FM10K_VFITR(0), FM10K_ITR_AUTOMASK | + FM10K_ITR_MASK_CLEAR); + /* Re-enable interrupt from host side */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); +} + /* Mailbox message handler in VF */ static const struct fm10k_msg_data fm10k_msgdata_vf[] = { FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), @@ -1503,6 +1753,21 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, return -EIO; } + /*PF/VF has different interrupt handling mechanism */ + if (hw->mac.type == fm10k_mac_pf) { + /* register callback func to eal lib */ + rte_intr_callback_register(&(dev->pci_dev->intr_handle), + fm10k_dev_interrupt_handler_pf, (void *)dev); + + /* enable MISC interrupt */ + fm10k_dev_enable_intr_pf(dev); + } else { /* VF */ + rte_intr_callback_register(&(dev->pci_dev->intr_handle), + fm10k_dev_interrupt_handler_vf, (void *)dev); + + fm10k_dev_enable_intr_vf(dev); + } + /* * Below function will trigger operations on mailbox, acquire lock to * avoid race condition from interrupt handler. 
Operations on mailbox @@ -1532,6 +1797,9 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, fm10k_mbx_unlock(hw); + /* enable uio intr after callback registered */ + rte_intr_enable(&(dev->pci_dev->intr_handle)); + return 0; } -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
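As a side note on the fault decoding above: the three long switch blocks in fm10k_dev_handle_fault() only map fault codes to strings. A table-driven sketch of the same mapping is shown below for the PCA case; it assumes the PCA_* enumerators are small, dense values (if they are sparse, the switch form in the patch is the right one):

	static const char * const pca_fault_str[] = {
		[PCA_NO_FAULT]       = "PCA_NO_FAULT",
		[PCA_UNMAPPED_ADDR]  = "PCA_UNMAPPED_ADDR",
		[PCA_BAD_QACCESS_PF] = "PCA_BAD_QACCESS_PF",
		[PCA_BAD_QACCESS_VF] = "PCA_BAD_QACCESS_VF",
		[PCA_MALICIOUS_REQ]  = "PCA_MALICIOUS_REQ",
		[PCA_POISONED_TLP]   = "PCA_POISONED_TLP",
		[PCA_TLP_ABORT]      = "PCA_TLP_ABORT",
	};

	/* Unknown or unnamed fault types still take the error path. */
	if (fault.type < sizeof(pca_fault_str) / sizeof(pca_fault_str[0]) &&
			pca_fault_str[fault.type] != NULL)
		estr = pca_fault_str[fault.type];
	else
		goto error;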
* Re: [dpdk-dev] [PATCH 16/18] fm10k: add PF and VF interrupt handling function 2015-01-30 5:07 ` [dpdk-dev] [PATCH 16/18] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark) @ 2015-02-01 0:42 ` Neil Horman 2015-02-02 7:59 ` Chen, Jing D 0 siblings, 1 reply; 147+ messages in thread From: Neil Horman @ 2015-02-01 0:42 UTC (permalink / raw) To: Chen Jing D(Mark); +Cc: dev On Fri, Jan 30, 2015 at 01:07:32PM +0800, Chen Jing D(Mark) wrote: > From: Jeff Shaw <jeffrey.b.shaw@intel.com> > > 1. Add 2 interrupt handling functions, one for PF and one for VF. > 2. Enable interrupt after completing initialization of NIC. > This seems to do way more than enable interrupt handling. Can you be a bit more desriptive here? Neil > Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> > Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> > --- > lib/librte_pmd_fm10k/fm10k_ethdev.c | 268 +++++++++++++++++++++++++++++++++++ > 1 files changed, 268 insertions(+), 0 deletions(-) > > diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c b/lib/librte_pmd_fm10k/fm10k_ethdev.c > index 40e3a2b..685fa8f 100644 > --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c > +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c > @@ -1325,6 +1325,256 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev, > return 0; > } > > +static void > +fm10k_dev_enable_intr_pf(struct rte_eth_dev *dev) > +{ > + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); > + uint32_t int_map = FM10K_INT_MAP_IMMEDIATE; > + > + /* Bind all local non-queue interrupt to vector 0 */ > + int_map |= 0; > + > + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_Mailbox), int_map); > + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_PCIeFault), int_map); > + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SwitchUpDown), int_map); > + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SwitchEvent), int_map); > + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SRAM), int_map); > + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_VFLR), int_map); > + > + /* Enable misc causes */ > + FM10K_WRITE_REG(hw, FM10K_EIMR, FM10K_EIMR_ENABLE(PCA_FAULT) | > + FM10K_EIMR_ENABLE(THI_FAULT) | > + FM10K_EIMR_ENABLE(FUM_FAULT) | > + FM10K_EIMR_ENABLE(MAILBOX) | > + FM10K_EIMR_ENABLE(SWITCHREADY) | > + FM10K_EIMR_ENABLE(SWITCHNOTREADY) | > + FM10K_EIMR_ENABLE(SRAMERROR) | > + FM10K_EIMR_ENABLE(VFLR)); > + > + /* Enable ITR 0 */ > + FM10K_WRITE_REG(hw, FM10K_ITR(0), FM10K_ITR_AUTOMASK | > + FM10K_ITR_MASK_CLEAR); > + FM10K_WRITE_FLUSH(hw); > +} > + > +static void > +fm10k_dev_enable_intr_vf(struct rte_eth_dev *dev) > +{ > + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); > + uint32_t int_map = FM10K_INT_MAP_IMMEDIATE; > + > + /* Bind all local non-queue interrupt to vector 0 */ > + int_map |= 0; > + > + /* Only INT 0 availiable, other 15 are reserved. 
*/ > + FM10K_WRITE_REG(hw, FM10K_VFINT_MAP, int_map); > + > + /* Enable ITR 0 */ > + FM10K_WRITE_REG(hw, FM10K_VFITR(0), FM10K_ITR_AUTOMASK | > + FM10K_ITR_MASK_CLEAR); > + FM10K_WRITE_FLUSH(hw); > +} > + > +static int > +fm10k_dev_handle_fault(struct fm10k_hw *hw, uint32_t eicr) > +{ > + struct fm10k_fault fault; > + int err; > + const char *estr = "Unknown error"; > + > + /* Process PCA fault */ > + if (eicr & FM10K_EIMR_PCA_FAULT) { > + err = fm10k_get_fault(hw, FM10K_PCA_FAULT, &fault); > + if (err) > + goto error; > + switch (fault.type) { > + case PCA_NO_FAULT: > + estr = "PCA_NO_FAULT"; break; > + case PCA_UNMAPPED_ADDR: > + estr = "PCA_UNMAPPED_ADDR"; break; > + case PCA_BAD_QACCESS_PF: > + estr = "PCA_BAD_QACCESS_PF"; break; > + case PCA_BAD_QACCESS_VF: > + estr = "PCA_BAD_QACCESS_VF"; break; > + case PCA_MALICIOUS_REQ: > + estr = "PCA_MALICIOUS_REQ"; break; > + case PCA_POISONED_TLP: > + estr = "PCA_POISONED_TLP"; break; > + case PCA_TLP_ABORT: > + estr = "PCA_TLP_ABORT"; break; > + default: > + goto error; > + } > + PMD_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIu64" Spec: 0x%x", > + estr, fault.func ? "VF" : "PF", fault.func, > + fault.address, fault.specinfo); > + } > + > + /* Process THI fault */ > + if (eicr & FM10K_EIMR_THI_FAULT) { > + err = fm10k_get_fault(hw, FM10K_THI_FAULT, &fault); > + if (err) > + goto error; > + switch (fault.type) { > + case THI_NO_FAULT: > + estr = "THI_NO_FAULT"; break; > + case THI_MAL_DIS_Q_FAULT: > + estr = "THI_MAL_DIS_Q_FAULT"; break; > + default: > + goto error; > + } > + PMD_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIu64" Spec: 0x%x", > + estr, fault.func ? "VF" : "PF", fault.func, > + fault.address, fault.specinfo); > + } > + > + /* Process FUM fault */ > + if (eicr & FM10K_EIMR_FUM_FAULT) { > + err = fm10k_get_fault(hw, FM10K_FUM_FAULT, &fault); > + if (err) > + goto error; > + switch (fault.type) { > + case FUM_NO_FAULT: > + estr = "FUM_NO_FAULT"; break; > + case FUM_UNMAPPED_ADDR: > + estr = "FUM_UNMAPPED_ADDR"; break; > + case FUM_POISONED_TLP: > + estr = "FUM_POISONED_TLP"; break; > + case FUM_BAD_VF_QACCESS: > + estr = "FUM_BAD_VF_QACCESS"; break; > + case FUM_ADD_DECODE_ERR: > + estr = "FUM_ADD_DECODE_ERR"; break; > + case FUM_RO_ERROR: > + estr = "FUM_RO_ERROR"; break; > + case FUM_QPRC_CRC_ERROR: > + estr = "FUM_QPRC_CRC_ERROR"; break; > + case FUM_CSR_TIMEOUT: > + estr = "FUM_CSR_TIMEOUT"; break; > + case FUM_INVALID_TYPE: > + estr = "FUM_INVALID_TYPE"; break; > + case FUM_INVALID_LENGTH: > + estr = "FUM_INVALID_LENGTH"; break; > + case FUM_INVALID_BE: > + estr = "FUM_INVALID_BE"; break; > + case FUM_INVALID_ALIGN: > + estr = "FUM_INVALID_ALIGN"; break; > + default: > + goto error; > + } > + PMD_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIu64" Spec: 0x%x", > + estr, fault.func ? "VF" : "PF", fault.func, > + fault.address, fault.specinfo); > + } > + > + if (estr) > + return 0; > + return 0; > +error: > + PMD_LOG(ERR, "Failed to handle fault event."); > + return err; > +} > + > +/** > + * PF interrupt handler triggered by NIC for handling specific interrupt. > + * > + * @param handle > + * Pointer to interrupt handle. > + * @param param > + * The address of parameter (struct rte_eth_dev *) regsitered before. 
> + * > + * @return > + * void > + */ > +static void > +fm10k_dev_interrupt_handler_pf( > + __rte_unused struct rte_intr_handle *handle, > + void *param) > +{ > + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; > + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); > + uint32_t cause, status; > + > + if (hw->mac.type != fm10k_mac_pf) > + return; > + > + cause = FM10K_READ_REG(hw, FM10K_EICR); > + > + /* Handle PCI fault cases */ > + if (cause & FM10K_EICR_FAULT_MASK) { > + PMD_LOG(ERR, "INT: find fault!"); > + fm10k_dev_handle_fault(hw, cause); > + } > + > + /* Handle switch up/down */ > + if (cause & FM10K_EICR_SWITCHNOTREADY) > + PMD_LOG(ERR, "INT: Switch is not ready"); > + > + if (cause & FM10K_EICR_SWITCHREADY) > + PMD_LOG(ERR, "INT: Switch is ready"); > + > + /* Handle mailbox message */ > + fm10k_mbx_lock(hw); > + hw->mbx.ops.process(hw, &hw->mbx); > + fm10k_mbx_unlock(hw); > + > + /* Handle SRAM error */ > + if (cause & FM10K_EICR_SRAMERROR) { > + PMD_LOG(ERR, "INT: SRAM error on PEP"); > + > + status = FM10K_READ_REG(hw, FM10K_SRAM_IP); > + /* Write to clear pending bits */ > + FM10K_WRITE_REG(hw, FM10K_SRAM_IP, status); > + > + /* Todo: print out error message after shared code updates */ > + } > + > + /* Clear these 3 events if having any */ > + cause &= FM10K_EICR_SWITCHNOTREADY | FM10K_EICR_MAILBOX | > + FM10K_EICR_SWITCHREADY; > + if (cause) > + FM10K_WRITE_REG(hw, FM10K_EICR, cause); > + > + /* Re-enable interrupt from device side */ > + FM10K_WRITE_REG(hw, FM10K_ITR(0), FM10K_ITR_AUTOMASK | > + FM10K_ITR_MASK_CLEAR); > + /* Re-enable interrupt from host side */ > + rte_intr_enable(&(dev->pci_dev->intr_handle)); > +} > + > +/** > + * VF interrupt handler triggered by NIC for handling specific interrupt. > + * > + * @param handle > + * Pointer to interrupt handle. > + * @param param > + * The address of parameter (struct rte_eth_dev *) regsitered before. > + * > + * @return > + * void > + */ > +static void > +fm10k_dev_interrupt_handler_vf( > + __rte_unused struct rte_intr_handle *handle, > + void *param) > +{ > + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; > + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private); > + > + if (hw->mac.type != fm10k_mac_vf) > + return; > + > + /* Handle mailbox message if lock is acquired */ > + fm10k_mbx_lock(hw); > + hw->mbx.ops.process(hw, &hw->mbx); > + fm10k_mbx_unlock(hw); > + > + /* Re-enable interrupt from device side */ > + FM10K_WRITE_REG(hw, FM10K_VFITR(0), FM10K_ITR_AUTOMASK | > + FM10K_ITR_MASK_CLEAR); > + /* Re-enable interrupt from host side */ > + rte_intr_enable(&(dev->pci_dev->intr_handle)); > +} > + > /* Mailbox message handler in VF */ > static const struct fm10k_msg_data fm10k_msgdata_vf[] = { > FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), > @@ -1503,6 +1753,21 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, > return -EIO; > } > > + /*PF/VF has different interrupt handling mechanism */ > + if (hw->mac.type == fm10k_mac_pf) { > + /* register callback func to eal lib */ > + rte_intr_callback_register(&(dev->pci_dev->intr_handle), > + fm10k_dev_interrupt_handler_pf, (void *)dev); > + > + /* enable MISC interrupt */ > + fm10k_dev_enable_intr_pf(dev); > + } else { /* VF */ > + rte_intr_callback_register(&(dev->pci_dev->intr_handle), > + fm10k_dev_interrupt_handler_vf, (void *)dev); > + > + fm10k_dev_enable_intr_vf(dev); > + } > + > /* > * Below function will trigger operations on mailbox, acquire lock to > * avoid race condition from interrupt handler. 
Operations on mailbox > @@ -1532,6 +1797,9 @@ eth_fm10k_dev_init(__rte_unused struct eth_driver *eth_drv, > > fm10k_mbx_unlock(hw); > > + /* enable uio intr after callback registered */ > + rte_intr_enable(&(dev->pci_dev->intr_handle)); > + > return 0; > } > > -- > 1.7.7.6 > > ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 16/18] fm10k: add PF and VF interrupt handling function 2015-02-01 0:42 ` Neil Horman @ 2015-02-02 7:59 ` Chen, Jing D 0 siblings, 0 replies; 147+ messages in thread From: Chen, Jing D @ 2015-02-02 7:59 UTC (permalink / raw) To: Neil Horman; +Cc: dev Hi Neil, > -----Original Message----- > From: Neil Horman [mailto:nhorman@tuxdriver.com] > Sent: Sunday, February 01, 2015 8:43 AM > To: Chen, Jing D > Cc: dev@dpdk.org > Subject: Re: [dpdk-dev] [PATCH 16/18] fm10k: add PF and VF interrupt > handling function > > On Fri, Jan 30, 2015 at 01:07:32PM +0800, Chen Jing D(Mark) wrote: > > From: Jeff Shaw <jeffrey.b.shaw@intel.com> > > > > 1. Add 2 interrupt handling functions, one for PF and one for VF. > > 2. Enable interrupt after completing initialization of NIC. > > > This seems to do way more than enable interrupt handling. Can you be a bit > more > desriptive here? OK, I'll try to add more description in the log. > Neil > > > Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> > > Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> > > --- > > lib/librte_pmd_fm10k/fm10k_ethdev.c | 268 > +++++++++++++++++++++++++++++++++++ > > 1 files changed, 268 insertions(+), 0 deletions(-) > > > > diff --git a/lib/librte_pmd_fm10k/fm10k_ethdev.c > b/lib/librte_pmd_fm10k/fm10k_ethdev.c > > index 40e3a2b..685fa8f 100644 > > --- a/lib/librte_pmd_fm10k/fm10k_ethdev.c > > +++ b/lib/librte_pmd_fm10k/fm10k_ethdev.c > > @@ -1325,6 +1325,256 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev > *dev, > > return 0; > > } > > > > +static void > > +fm10k_dev_enable_intr_pf(struct rte_eth_dev *dev) > > +{ > > + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data- > >dev_private); > > + uint32_t int_map = FM10K_INT_MAP_IMMEDIATE; > > + > > + /* Bind all local non-queue interrupt to vector 0 */ > > + int_map |= 0; > > + > > + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_Mailbox), > int_map); > > + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_PCIeFault), > int_map); > > + FM10K_WRITE_REG(hw, > FM10K_INT_MAP(fm10k_int_SwitchUpDown), int_map); > > + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SwitchEvent), > int_map); > > + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_SRAM), > int_map); > > + FM10K_WRITE_REG(hw, FM10K_INT_MAP(fm10k_int_VFLR), > int_map); > > + > > + /* Enable misc causes */ > > + FM10K_WRITE_REG(hw, FM10K_EIMR, > FM10K_EIMR_ENABLE(PCA_FAULT) | > > + FM10K_EIMR_ENABLE(THI_FAULT) | > > + FM10K_EIMR_ENABLE(FUM_FAULT) | > > + FM10K_EIMR_ENABLE(MAILBOX) | > > + FM10K_EIMR_ENABLE(SWITCHREADY) | > > + FM10K_EIMR_ENABLE(SWITCHNOTREADY) | > > + FM10K_EIMR_ENABLE(SRAMERROR) | > > + FM10K_EIMR_ENABLE(VFLR)); > > + > > + /* Enable ITR 0 */ > > + FM10K_WRITE_REG(hw, FM10K_ITR(0), FM10K_ITR_AUTOMASK | > > + FM10K_ITR_MASK_CLEAR); > > + FM10K_WRITE_FLUSH(hw); > > +} > > + > > +static void > > +fm10k_dev_enable_intr_vf(struct rte_eth_dev *dev) > > +{ > > + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data- > >dev_private); > > + uint32_t int_map = FM10K_INT_MAP_IMMEDIATE; > > + > > + /* Bind all local non-queue interrupt to vector 0 */ > > + int_map |= 0; > > + > > + /* Only INT 0 availiable, other 15 are reserved. 
*/ > > + FM10K_WRITE_REG(hw, FM10K_VFINT_MAP, int_map); > > + > > + /* Enable ITR 0 */ > > + FM10K_WRITE_REG(hw, FM10K_VFITR(0), FM10K_ITR_AUTOMASK | > > + FM10K_ITR_MASK_CLEAR); > > + FM10K_WRITE_FLUSH(hw); > > +} > > + > > +static int > > +fm10k_dev_handle_fault(struct fm10k_hw *hw, uint32_t eicr) > > +{ > > + struct fm10k_fault fault; > > + int err; > > + const char *estr = "Unknown error"; > > + > > + /* Process PCA fault */ > > + if (eicr & FM10K_EIMR_PCA_FAULT) { > > + err = fm10k_get_fault(hw, FM10K_PCA_FAULT, &fault); > > + if (err) > > + goto error; > > + switch (fault.type) { > > + case PCA_NO_FAULT: > > + estr = "PCA_NO_FAULT"; break; > > + case PCA_UNMAPPED_ADDR: > > + estr = "PCA_UNMAPPED_ADDR"; break; > > + case PCA_BAD_QACCESS_PF: > > + estr = "PCA_BAD_QACCESS_PF"; break; > > + case PCA_BAD_QACCESS_VF: > > + estr = "PCA_BAD_QACCESS_VF"; break; > > + case PCA_MALICIOUS_REQ: > > + estr = "PCA_MALICIOUS_REQ"; break; > > + case PCA_POISONED_TLP: > > + estr = "PCA_POISONED_TLP"; break; > > + case PCA_TLP_ABORT: > > + estr = "PCA_TLP_ABORT"; break; > > + default: > > + goto error; > > + } > > + PMD_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIu64" Spec: 0x%x", > > + estr, fault.func ? "VF" : "PF", fault.func, > > + fault.address, fault.specinfo); > > + } > > + > > + /* Process THI fault */ > > + if (eicr & FM10K_EIMR_THI_FAULT) { > > + err = fm10k_get_fault(hw, FM10K_THI_FAULT, &fault); > > + if (err) > > + goto error; > > + switch (fault.type) { > > + case THI_NO_FAULT: > > + estr = "THI_NO_FAULT"; break; > > + case THI_MAL_DIS_Q_FAULT: > > + estr = "THI_MAL_DIS_Q_FAULT"; break; > > + default: > > + goto error; > > + } > > + PMD_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIu64" Spec: 0x%x", > > + estr, fault.func ? "VF" : "PF", fault.func, > > + fault.address, fault.specinfo); > > + } > > + > > + /* Process FUM fault */ > > + if (eicr & FM10K_EIMR_FUM_FAULT) { > > + err = fm10k_get_fault(hw, FM10K_FUM_FAULT, &fault); > > + if (err) > > + goto error; > > + switch (fault.type) { > > + case FUM_NO_FAULT: > > + estr = "FUM_NO_FAULT"; break; > > + case FUM_UNMAPPED_ADDR: > > + estr = "FUM_UNMAPPED_ADDR"; break; > > + case FUM_POISONED_TLP: > > + estr = "FUM_POISONED_TLP"; break; > > + case FUM_BAD_VF_QACCESS: > > + estr = "FUM_BAD_VF_QACCESS"; break; > > + case FUM_ADD_DECODE_ERR: > > + estr = "FUM_ADD_DECODE_ERR"; break; > > + case FUM_RO_ERROR: > > + estr = "FUM_RO_ERROR"; break; > > + case FUM_QPRC_CRC_ERROR: > > + estr = "FUM_QPRC_CRC_ERROR"; break; > > + case FUM_CSR_TIMEOUT: > > + estr = "FUM_CSR_TIMEOUT"; break; > > + case FUM_INVALID_TYPE: > > + estr = "FUM_INVALID_TYPE"; break; > > + case FUM_INVALID_LENGTH: > > + estr = "FUM_INVALID_LENGTH"; break; > > + case FUM_INVALID_BE: > > + estr = "FUM_INVALID_BE"; break; > > + case FUM_INVALID_ALIGN: > > + estr = "FUM_INVALID_ALIGN"; break; > > + default: > > + goto error; > > + } > > + PMD_LOG(ERR, "%s: %s(%d) Addr:0x%"PRIu64" Spec: 0x%x", > > + estr, fault.func ? "VF" : "PF", fault.func, > > + fault.address, fault.specinfo); > > + } > > + > > + if (estr) > > + return 0; > > + return 0; > > +error: > > + PMD_LOG(ERR, "Failed to handle fault event."); > > + return err; > > +} > > + > > +/** > > + * PF interrupt handler triggered by NIC for handling specific interrupt. > > + * > > + * @param handle > > + * Pointer to interrupt handle. > > + * @param param > > + * The address of parameter (struct rte_eth_dev *) regsitered before. 
> > + * > > + * @return > > + * void > > + */ > > +static void > > +fm10k_dev_interrupt_handler_pf( > > + __rte_unused struct rte_intr_handle *handle, > > + void *param) > > +{ > > + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; > > + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data- > >dev_private); > > + uint32_t cause, status; > > + > > + if (hw->mac.type != fm10k_mac_pf) > > + return; > > + > > + cause = FM10K_READ_REG(hw, FM10K_EICR); > > + > > + /* Handle PCI fault cases */ > > + if (cause & FM10K_EICR_FAULT_MASK) { > > + PMD_LOG(ERR, "INT: find fault!"); > > + fm10k_dev_handle_fault(hw, cause); > > + } > > + > > + /* Handle switch up/down */ > > + if (cause & FM10K_EICR_SWITCHNOTREADY) > > + PMD_LOG(ERR, "INT: Switch is not ready"); > > + > > + if (cause & FM10K_EICR_SWITCHREADY) > > + PMD_LOG(ERR, "INT: Switch is ready"); > > + > > + /* Handle mailbox message */ > > + fm10k_mbx_lock(hw); > > + hw->mbx.ops.process(hw, &hw->mbx); > > + fm10k_mbx_unlock(hw); > > + > > + /* Handle SRAM error */ > > + if (cause & FM10K_EICR_SRAMERROR) { > > + PMD_LOG(ERR, "INT: SRAM error on PEP"); > > + > > + status = FM10K_READ_REG(hw, FM10K_SRAM_IP); > > + /* Write to clear pending bits */ > > + FM10K_WRITE_REG(hw, FM10K_SRAM_IP, status); > > + > > + /* Todo: print out error message after shared code updates > */ > > + } > > + > > + /* Clear these 3 events if having any */ > > + cause &= FM10K_EICR_SWITCHNOTREADY | FM10K_EICR_MAILBOX > | > > + FM10K_EICR_SWITCHREADY; > > + if (cause) > > + FM10K_WRITE_REG(hw, FM10K_EICR, cause); > > + > > + /* Re-enable interrupt from device side */ > > + FM10K_WRITE_REG(hw, FM10K_ITR(0), FM10K_ITR_AUTOMASK | > > + FM10K_ITR_MASK_CLEAR); > > + /* Re-enable interrupt from host side */ > > + rte_intr_enable(&(dev->pci_dev->intr_handle)); > > +} > > + > > +/** > > + * VF interrupt handler triggered by NIC for handling specific interrupt. > > + * > > + * @param handle > > + * Pointer to interrupt handle. > > + * @param param > > + * The address of parameter (struct rte_eth_dev *) regsitered before. 
> > + * > > + * @return > > + * void > > + */ > > +static void > > +fm10k_dev_interrupt_handler_vf( > > + __rte_unused struct rte_intr_handle *handle, > > + void *param) > > +{ > > + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; > > + struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data- > >dev_private); > > + > > + if (hw->mac.type != fm10k_mac_vf) > > + return; > > + > > + /* Handle mailbox message if lock is acquired */ > > + fm10k_mbx_lock(hw); > > + hw->mbx.ops.process(hw, &hw->mbx); > > + fm10k_mbx_unlock(hw); > > + > > + /* Re-enable interrupt from device side */ > > + FM10K_WRITE_REG(hw, FM10K_VFITR(0), FM10K_ITR_AUTOMASK | > > + FM10K_ITR_MASK_CLEAR); > > + /* Re-enable interrupt from host side */ > > + rte_intr_enable(&(dev->pci_dev->intr_handle)); > > +} > > + > > /* Mailbox message handler in VF */ > > static const struct fm10k_msg_data fm10k_msgdata_vf[] = { > > FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test), > > @@ -1503,6 +1753,21 @@ eth_fm10k_dev_init(__rte_unused struct > eth_driver *eth_drv, > > return -EIO; > > } > > > > + /*PF/VF has different interrupt handling mechanism */ > > + if (hw->mac.type == fm10k_mac_pf) { > > + /* register callback func to eal lib */ > > + rte_intr_callback_register(&(dev->pci_dev->intr_handle), > > + fm10k_dev_interrupt_handler_pf, (void *)dev); > > + > > + /* enable MISC interrupt */ > > + fm10k_dev_enable_intr_pf(dev); > > + } else { /* VF */ > > + rte_intr_callback_register(&(dev->pci_dev->intr_handle), > > + fm10k_dev_interrupt_handler_vf, (void *)dev); > > + > > + fm10k_dev_enable_intr_vf(dev); > > + } > > + > > /* > > * Below function will trigger operations on mailbox, acquire lock to > > * avoid race condition from interrupt handler. Operations on mailbox > > @@ -1532,6 +1797,9 @@ eth_fm10k_dev_init(__rte_unused struct > eth_driver *eth_drv, > > > > fm10k_mbx_unlock(hw); > > > > + /* enable uio intr after callback registered */ > > + rte_intr_enable(&(dev->pci_dev->intr_handle)); > > + > > return 0; > > } > > > > -- > > 1.7.7.6 > > > > ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH 17/18] Change lib/Makefile to add fm10k driver into compile list. 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (15 preceding siblings ...) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 16/18] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark) @ 2015-01-30 5:07 ` Chen Jing D(Mark) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 18/18] Change mk/rte.app.mk to add fm10k lib into link Chen Jing D(Mark) 2015-01-30 21:26 ` [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Neil Horman 18 siblings, 0 replies; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-01-30 5:07 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- lib/Makefile | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/lib/Makefile b/lib/Makefile index 0ffc982..b1f3860 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -43,6 +43,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether DIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += librte_pmd_e1000 DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += librte_pmd_ixgbe DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += librte_pmd_i40e +DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += librte_pmd_fm10k DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += librte_pmd_enic DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += librte_pmd_bond DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += librte_pmd_ring -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH 18/18] Change mk/rte.app.mk to add fm10k lib into link 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (16 preceding siblings ...) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 17/18] Change lib/Makefile to add fm10k driver into compile list Chen Jing D(Mark) @ 2015-01-30 5:07 ` Chen Jing D(Mark) 2015-02-01 0:50 ` Neil Horman 2015-01-30 21:26 ` [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Neil Horman 18 siblings, 1 reply; 147+ messages in thread From: Chen Jing D(Mark) @ 2015-01-30 5:07 UTC (permalink / raw) To: dev From: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> --- mk/rte.app.mk | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 4294d9a..87d8763 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -211,6 +211,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_I40E_PMD),y) LDLIBS += -lrte_pmd_i40e endif +ifeq ($(CONFIG_RTE_LIBRTE_FM10K_PMD),y) +LDLIBS += -lrte_pmd_fm10k +endif + ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_PMD),y) LDLIBS += -lrte_pmd_ixgbe endif -- 1.7.7.6 ^ permalink raw reply [flat|nested] 147+ messages in thread
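Both this rte.app.mk hunk and the lib/Makefile hunk in patch 17 are gated on the switch that patch 2 adds to config/common_linuxapp and config/common_bsdapp; the one entry a build needs for the new PMD to be compiled and linked is shown below (any additional FM10K_* debug options belong to patch 2 and are not reproduced here):

	CONFIG_RTE_LIBRTE_FM10K_PMD=y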
* Re: [dpdk-dev] [PATCH 18/18] Change mk/rte.app.mk to add fm10k lib into link 2015-01-30 5:07 ` [dpdk-dev] [PATCH 18/18] Change mk/rte.app.mk to add fm10k lib into link Chen Jing D(Mark) @ 2015-02-01 0:50 ` Neil Horman 2015-02-02 8:10 ` Chen, Jing D 0 siblings, 1 reply; 147+ messages in thread From: Neil Horman @ 2015-02-01 0:50 UTC (permalink / raw) To: Chen Jing D(Mark); +Cc: dev On Fri, Jan 30, 2015 at 01:07:34PM +0800, Chen Jing D(Mark) wrote: > From: Jeff Shaw <jeffrey.b.shaw@intel.com> > > Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> > Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> > --- > mk/rte.app.mk | 4 ++++ > 1 files changed, 4 insertions(+), 0 deletions(-) > > diff --git a/mk/rte.app.mk b/mk/rte.app.mk > index 4294d9a..87d8763 100644 > --- a/mk/rte.app.mk > +++ b/mk/rte.app.mk > @@ -211,6 +211,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_I40E_PMD),y) > LDLIBS += -lrte_pmd_i40e > endif > > +ifeq ($(CONFIG_RTE_LIBRTE_FM10K_PMD),y) > +LDLIBS += -lrte_pmd_fm10k > +endif > + > ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_PMD),y) > LDLIBS += -lrte_pmd_ixgbe > endif > -- > 1.7.7.6 > > This patch should be merged with patch 17, and patch 2, and placed at the end of your series to avoid a FTBFS issue Neil ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 18/18] Change mk/rte.app.mk to add fm10k lib into link 2015-02-01 0:50 ` Neil Horman @ 2015-02-02 8:10 ` Chen, Jing D 0 siblings, 0 replies; 147+ messages in thread From: Chen, Jing D @ 2015-02-02 8:10 UTC (permalink / raw) To: Neil Horman; +Cc: dev Hi Neil, > -----Original Message----- > From: Neil Horman [mailto:nhorman@tuxdriver.com] > Sent: Sunday, February 01, 2015 8:51 AM > To: Chen, Jing D > Cc: dev@dpdk.org > Subject: Re: [dpdk-dev] [PATCH 18/18] Change mk/rte.app.mk to add fm10k > lib into link > > On Fri, Jan 30, 2015 at 01:07:34PM +0800, Chen Jing D(Mark) wrote: > > From: Jeff Shaw <jeffrey.b.shaw@intel.com> > > > > Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com> > > Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com> > > --- > > mk/rte.app.mk | 4 ++++ > > 1 files changed, 4 insertions(+), 0 deletions(-) > > > > diff --git a/mk/rte.app.mk b/mk/rte.app.mk > > index 4294d9a..87d8763 100644 > > --- a/mk/rte.app.mk > > +++ b/mk/rte.app.mk > > @@ -211,6 +211,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_I40E_PMD),y) > > LDLIBS += -lrte_pmd_i40e > > endif > > > > +ifeq ($(CONFIG_RTE_LIBRTE_FM10K_PMD),y) > > +LDLIBS += -lrte_pmd_fm10k > > +endif > > + > > ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_PMD),y) > > LDLIBS += -lrte_pmd_ixgbe > > endif > > -- > > 1.7.7.6 > > > > > This patch should be merged with patch 17, and patch 2, and placed at the > end of > your series to avoid a FTBFS issue My rationale is to make every single patch not to break the compile. So, I'd like to add the binary library into compile and link in last 2 patches, after all the actual code are patched. For Patch 2, I think you are right, maybe a better way is to move it as patch "16". But I'm not sure whether I should merge these 3 together. You know, somebody may not happy to see the changes in different directory to appear in single patch. > Neil ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver 2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark) ` (17 preceding siblings ...) 2015-01-30 5:07 ` [dpdk-dev] [PATCH 18/18] Change mk/rte.app.mk to add fm10k lib into link Chen Jing D(Mark) @ 2015-01-30 21:26 ` Neil Horman 2015-01-30 21:46 ` Jeff Shaw 18 siblings, 1 reply; 147+ messages in thread From: Neil Horman @ 2015-01-30 21:26 UTC (permalink / raw) To: Chen Jing D(Mark); +Cc: dev On Fri, Jan 30, 2015 at 01:07:16PM +0800, Chen Jing D(Mark) wrote: > From: "Chen Jing D(Mark)" <jing.d.chen@intel.com> > > The patch set add poll mode driver for the host interface of Intel > Red Rock Canyon silicon, which integrates NIC and switch functionalities. > The patch set include below features: > > 1. Basic RX/TX functions for PF/VF. > 2. Interrupt handling mechanism for PF/VF. > 3. per queue start/stop functions for PF/VF. > 4. Mailbox handling between PF/VF and PF/Switch Manager. > 5. Receive Side Scaling (RSS) for PF/VF. > 6. Scatter receive function for PF/VF. > 7. reta update/query for PF/VF. > 8. VLAN filter set for PF. > 9. Link status query for PF/VF. > > Jeff Shaw (18): > fm10k: add base driver > Change config/ files to add macros for fm10k > fm10k: Add empty fm10k files > fm10k: add fm10k device id > fm10k: Add code to register fm10k pmd PF driver > fm10k: add reta update/requery functions > fm10k: add rx_queue_setup/release function > fm10k: add tx_queue_setup/release function > fm10k: add RX/TX single queue start/stop function > fm10k: add dev start/stop functions > fm10k: add receive and tranmit function > fm10k: add PF RSS support > fm10k: Add scatter receive function > fm10k: add function to set vlan > fm10k: Add SRIOV-VF support > fm10k: add PF and VF interrupt handling function > Change lib/Makefile to add fm10k driver into compile list. 
> Change mk/rte.app.mk to add fm10k lib into link > > config/common_bsdapp | 9 + > config/common_linuxapp | 9 + > lib/Makefile | 1 + > lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 + > lib/librte_pmd_fm10k/Makefile | 96 + > lib/librte_pmd_fm10k/SHARED/fm10k_api.c | 327 ++++ > lib/librte_pmd_fm10k/SHARED/fm10k_api.h | 60 + > lib/librte_pmd_fm10k/SHARED/fm10k_common.c | 573 ++++++ > lib/librte_pmd_fm10k/SHARED/fm10k_common.h | 52 + > lib/librte_pmd_fm10k/SHARED/fm10k_mbx.c | 2186 +++++++++++++++++++++++ > lib/librte_pmd_fm10k/SHARED/fm10k_mbx.h | 329 ++++ > lib/librte_pmd_fm10k/SHARED/fm10k_osdep.h | 116 ++ > lib/librte_pmd_fm10k/SHARED/fm10k_pf.c | 1877 +++++++++++++++++++ > lib/librte_pmd_fm10k/SHARED/fm10k_pf.h | 152 ++ > lib/librte_pmd_fm10k/SHARED/fm10k_tlv.c | 914 ++++++++++ > lib/librte_pmd_fm10k/SHARED/fm10k_tlv.h | 199 ++ > lib/librte_pmd_fm10k/SHARED/fm10k_type.h | 925 ++++++++++ > lib/librte_pmd_fm10k/SHARED/fm10k_vf.c | 586 ++++++ > lib/librte_pmd_fm10k/SHARED/fm10k_vf.h | 91 + > lib/librte_pmd_fm10k/fm10k.h | 293 +++ > lib/librte_pmd_fm10k/fm10k_ethdev.c | 1846 +++++++++++++++++++ > lib/librte_pmd_fm10k/fm10k_logs.h | 66 + > lib/librte_pmd_fm10k/fm10k_rxtx.c | 427 +++++ > mk/rte.app.mk | 4 + > 24 files changed, 11160 insertions(+), 0 deletions(-) > create mode 100644 lib/librte_pmd_fm10k/Makefile > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_api.c > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_api.h > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_common.c > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_common.h > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_mbx.c > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_mbx.h > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_osdep.h > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_pf.c > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_pf.h > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_tlv.c > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_tlv.h > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_type.h > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_vf.c > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_vf.h > create mode 100644 lib/librte_pmd_fm10k/fm10k.h > create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c > create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h > create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c > Why is there a SHARED directory in the driver? Are there other drivers that use the shared fm10k code? Neil ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver 2015-01-30 21:26 ` [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Neil Horman @ 2015-01-30 21:46 ` Jeff Shaw 2015-01-30 22:19 ` Thomas Monjalon 0 siblings, 1 reply; 147+ messages in thread From: Jeff Shaw @ 2015-01-30 21:46 UTC (permalink / raw) To: Neil Horman; +Cc: dev On Fri, Jan 30, 2015 at 04:26:33PM -0500, Neil Horman wrote: > On Fri, Jan 30, 2015 at 01:07:16PM +0800, Chen Jing D(Mark) wrote: > > From: "Chen Jing D(Mark)" <jing.d.chen@intel.com> > > > > The patch set add poll mode driver for the host interface of Intel > > Red Rock Canyon silicon, which integrates NIC and switch functionalities. > > The patch set include below features: > > > > 1. Basic RX/TX functions for PF/VF. > > 2. Interrupt handling mechanism for PF/VF. > > 3. per queue start/stop functions for PF/VF. > > 4. Mailbox handling between PF/VF and PF/Switch Manager. > > 5. Receive Side Scaling (RSS) for PF/VF. > > 6. Scatter receive function for PF/VF. > > 7. reta update/query for PF/VF. > > 8. VLAN filter set for PF. > > 9. Link status query for PF/VF. > > > > Jeff Shaw (18): > > fm10k: add base driver > > Change config/ files to add macros for fm10k > > fm10k: Add empty fm10k files > > fm10k: add fm10k device id > > fm10k: Add code to register fm10k pmd PF driver > > fm10k: add reta update/requery functions > > fm10k: add rx_queue_setup/release function > > fm10k: add tx_queue_setup/release function > > fm10k: add RX/TX single queue start/stop function > > fm10k: add dev start/stop functions > > fm10k: add receive and tranmit function > > fm10k: add PF RSS support > > fm10k: Add scatter receive function > > fm10k: add function to set vlan > > fm10k: Add SRIOV-VF support > > fm10k: add PF and VF interrupt handling function > > Change lib/Makefile to add fm10k driver into compile list. 
> > Change mk/rte.app.mk to add fm10k lib into link > > > > config/common_bsdapp | 9 + > > config/common_linuxapp | 9 + > > lib/Makefile | 1 + > > lib/librte_eal/common/include/rte_pci_dev_ids.h | 22 + > > lib/librte_pmd_fm10k/Makefile | 96 + > > lib/librte_pmd_fm10k/SHARED/fm10k_api.c | 327 ++++ > > lib/librte_pmd_fm10k/SHARED/fm10k_api.h | 60 + > > lib/librte_pmd_fm10k/SHARED/fm10k_common.c | 573 ++++++ > > lib/librte_pmd_fm10k/SHARED/fm10k_common.h | 52 + > > lib/librte_pmd_fm10k/SHARED/fm10k_mbx.c | 2186 +++++++++++++++++++++++ > > lib/librte_pmd_fm10k/SHARED/fm10k_mbx.h | 329 ++++ > > lib/librte_pmd_fm10k/SHARED/fm10k_osdep.h | 116 ++ > > lib/librte_pmd_fm10k/SHARED/fm10k_pf.c | 1877 +++++++++++++++++++ > > lib/librte_pmd_fm10k/SHARED/fm10k_pf.h | 152 ++ > > lib/librte_pmd_fm10k/SHARED/fm10k_tlv.c | 914 ++++++++++ > > lib/librte_pmd_fm10k/SHARED/fm10k_tlv.h | 199 ++ > > lib/librte_pmd_fm10k/SHARED/fm10k_type.h | 925 ++++++++++ > > lib/librte_pmd_fm10k/SHARED/fm10k_vf.c | 586 ++++++ > > lib/librte_pmd_fm10k/SHARED/fm10k_vf.h | 91 + > > lib/librte_pmd_fm10k/fm10k.h | 293 +++ > > lib/librte_pmd_fm10k/fm10k_ethdev.c | 1846 +++++++++++++++++++ > > lib/librte_pmd_fm10k/fm10k_logs.h | 66 + > > lib/librte_pmd_fm10k/fm10k_rxtx.c | 427 +++++ > > mk/rte.app.mk | 4 + > > 24 files changed, 11160 insertions(+), 0 deletions(-) > > create mode 100644 lib/librte_pmd_fm10k/Makefile > > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_api.c > > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_api.h > > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_common.c > > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_common.h > > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_mbx.c > > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_mbx.h > > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_osdep.h > > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_pf.c > > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_pf.h > > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_tlv.c > > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_tlv.h > > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_type.h > > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_vf.c > > create mode 100644 lib/librte_pmd_fm10k/SHARED/fm10k_vf.h > > create mode 100644 lib/librte_pmd_fm10k/fm10k.h > > create mode 100644 lib/librte_pmd_fm10k/fm10k_ethdev.c > > create mode 100644 lib/librte_pmd_fm10k/fm10k_logs.h > > create mode 100644 lib/librte_pmd_fm10k/fm10k_rxtx.c > > > > Why is there a SHARED directory in the driver? Are there other drivers that use > the shared fm10k code? No, the other poll-mode drivers do not use the shared fm10k code. The directory is similar to the 'ixgbe' and 'i40e' directories in their respective PMDs, only that it is named 'SHARED' for the fm10k driver. -Jeff > > Neil > ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver 2015-01-30 21:46 ` Jeff Shaw @ 2015-01-30 22:19 ` Thomas Monjalon 2015-02-02 2:59 ` Chen, Jing D 0 siblings, 1 reply; 147+ messages in thread From: Thomas Monjalon @ 2015-01-30 22:19 UTC (permalink / raw) To: Jeff Shaw; +Cc: dev 2015-01-30 13:46, Jeff Shaw: > On Fri, Jan 30, 2015 at 04:26:33PM -0500, Neil Horman wrote: > > On Fri, Jan 30, 2015 at 01:07:16PM +0800, Chen Jing D(Mark) wrote: > > > From: "Chen Jing D(Mark)" <jing.d.chen@intel.com> > > > Jeff Shaw (18): > > > fm10k: add base driver [...] > > > lib/librte_pmd_fm10k/SHARED/fm10k_api.c | 327 ++++ [...] > > > > Why is there a SHARED directory in the driver? Are there other drivers that use > > the shared fm10k code? > > No, the other poll-mode drivers do not use the shared fm10k code. The > directory is similar to the 'ixgbe' and 'i40e' directories in their > respective PMDs, only that it is named 'SHARED' for the fm10k driver. So shared is a bad name in the context of DPDK. Inside Intel, it can be understood that you share it between projects, but in DPDK, it's only a base driver. -- Thomas ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver 2015-01-30 22:19 ` Thomas Monjalon @ 2015-02-02 2:59 ` Chen, Jing D 2015-02-02 8:19 ` Thomas Monjalon 0 siblings, 1 reply; 147+ messages in thread From: Chen, Jing D @ 2015-02-02 2:59 UTC (permalink / raw) To: Thomas Monjalon, Shaw, Jeffrey B; +Cc: dev Hi, > -----Original Message----- > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas Monjalon > Sent: Saturday, January 31, 2015 6:19 AM > To: Shaw, Jeffrey B > Cc: dev@dpdk.org > Subject: Re: [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd > driver > > 2015-01-30 13:46, Jeff Shaw: > > On Fri, Jan 30, 2015 at 04:26:33PM -0500, Neil Horman wrote: > > > On Fri, Jan 30, 2015 at 01:07:16PM +0800, Chen Jing D(Mark) wrote: > > > > From: "Chen Jing D(Mark)" <jing.d.chen@intel.com> > > > > Jeff Shaw (18): > > > > fm10k: add base driver > [...] > > > > lib/librte_pmd_fm10k/SHARED/fm10k_api.c | 327 ++++ > [...] > > > > > > Why is there a SHARED directory in the driver? Are there other drivers > that use > > > the shared fm10k code? > > > > No, the other poll-mode drivers do not use the shared fm10k code. The > > directory is similar to the 'ixgbe' and 'i40e' directories in their > > respective PMDs, only that it is named 'SHARED' for the fm10k driver. > > So shared is a bad name in the context of DPDK. > Inside Intel, it can be understood that you share it between projects, > but in DPDK, it's only a base driver. > OK, I'll change "SHARED" to "fm10k". > -- > Thomas ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver 2015-02-02 2:59 ` Chen, Jing D @ 2015-02-02 8:19 ` Thomas Monjalon 0 siblings, 0 replies; 147+ messages in thread From: Thomas Monjalon @ 2015-02-02 8:19 UTC (permalink / raw) To: Chen, Jing D; +Cc: dev 2015-02-02 02:59, Chen, Jing D: > Hi, > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas Monjalon > > 2015-01-30 13:46, Jeff Shaw: > > > On Fri, Jan 30, 2015 at 04:26:33PM -0500, Neil Horman wrote: > > > > On Fri, Jan 30, 2015 at 01:07:16PM +0800, Chen Jing D(Mark) wrote: > > > > > From: "Chen Jing D(Mark)" <jing.d.chen@intel.com> > > > > > Jeff Shaw (18): > > > > > fm10k: add base driver > > [...] > > > > > lib/librte_pmd_fm10k/SHARED/fm10k_api.c | 327 ++++ > > [...] > > > > > > > > Why is there a SHARED directory in the driver? Are there other drivers > > that use > > > > the shared fm10k code? > > > > > > No, the other poll-mode drivers do not use the shared fm10k code. The > > > directory is similar to the 'ixgbe' and 'i40e' directories in their > > > respective PMDs, only that it is named 'SHARED' for the fm10k driver. > > > > So shared is a bad name in the context of DPDK. > > Inside Intel, it can be understood that you share it between projects, > > but in DPDK, it's only a base driver. > > > > OK, I'll change "SHARED" to "fm10k". I think that "base" would be more appropriate: fm10k/base instead of fm10k/fm10k. -- Thomas ^ permalink raw reply [flat|nested] 147+ messages in thread
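For reference, the rename agreed in this subthread amounts to the layout sketched below. This is an illustration only, based on the file names in the cover letter's diffstat and the directory name proposed here; the actual tree is whatever the revised patch series introduces:

  lib/librte_pmd_fm10k/
    Makefile
    base/            (was SHARED/): fm10k_api.c/.h, fm10k_common.c/.h, fm10k_mbx.c/.h,
                     fm10k_osdep.h, fm10k_pf.c/.h, fm10k_tlv.c/.h, fm10k_type.h, fm10k_vf.c/.h
    fm10k.h
    fm10k_ethdev.c
    fm10k_logs.h
    fm10k_rxtx.c

The PMD Makefile would then pick the base driver code up from the new path, e.g. something along the lines of "VPATH += $(SRCDIR)/base" in place of the SHARED path (illustrative; the exact Makefile change belongs to the revised patches).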
I40E_RPB_REPORT_LL_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_RPB_REPORT_LL_MEM_CFG_ECC_EN_SHIFT) +#define I40E_RPB_REPORT_LL_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_RPB_REPORT_LL_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_RPB_REPORT_LL_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_RPB_REPORT_LL_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_RPB_REPORT_LL_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_RPB_REPORT_LL_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_RPB_REPORT_LL_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_RPB_REPORT_LL_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_RPB_REPORT_LL_MEM_CFG_LS_FORCE_SHIFT) +#define I40E_RPB_REPORT_LL_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_RPB_REPORT_LL_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_RPB_REPORT_LL_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_RPB_REPORT_LL_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_RPB_REPORT_LL_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_RPB_REPORT_LL_MEM_CFG_MASK_INT_SHIFT) +#define I40E_RPB_REPORT_LL_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_RPB_REPORT_LL_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_RPB_REPORT_LL_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_RPB_REPORT_LL_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_RPB_REPORT_LL_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_RPB_REPORT_LL_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_RPB_REPORT_LL_MEM_CFG_RME_SHIFT 12 +#define I40E_RPB_REPORT_LL_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_RPB_REPORT_LL_MEM_CFG_RME_SHIFT) +#define I40E_RPB_REPORT_LL_MEM_CFG_RM_SHIFT 16 +#define I40E_RPB_REPORT_LL_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_RPB_REPORT_LL_MEM_CFG_RM_SHIFT) + +#define I40E_RPB_REPORT_LL_MEM_STATUS 0x000AC884 /* Reset: POR */ +#define I40E_RPB_REPORT_LL_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_RPB_REPORT_LL_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_RPB_REPORT_LL_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_RPB_REPORT_LL_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_RPB_REPORT_LL_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_RPB_REPORT_LL_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_RPB_REPORT_LL_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_RPB_REPORT_LL_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_RPB_REPORT_LL_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_RPB_REPORT_LL_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_RPB_REPORT_LL_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_RPB_REPORT_LL_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_RPB_REPORT_MEM_CFG 0x000AC888 /* Reset: POR */ +#define I40E_RPB_REPORT_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_RPB_REPORT_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_RPB_REPORT_MEM_CFG_ECC_EN_SHIFT) +#define I40E_RPB_REPORT_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_RPB_REPORT_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_RPB_REPORT_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_RPB_REPORT_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_RPB_REPORT_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_RPB_REPORT_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_RPB_REPORT_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_RPB_REPORT_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_RPB_REPORT_MEM_CFG_LS_FORCE_SHIFT) +#define I40E_RPB_REPORT_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_RPB_REPORT_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_RPB_REPORT_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_RPB_REPORT_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_RPB_REPORT_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_RPB_REPORT_MEM_CFG_MASK_INT_SHIFT) +#define I40E_RPB_REPORT_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_RPB_REPORT_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_RPB_REPORT_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_RPB_REPORT_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_RPB_REPORT_MEM_CFG_ERR_CNT_MASK 
I40E_MASK(0x1, I40E_RPB_REPORT_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_RPB_REPORT_MEM_CFG_RME_SHIFT 12 +#define I40E_RPB_REPORT_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_RPB_REPORT_MEM_CFG_RME_SHIFT) +#define I40E_RPB_REPORT_MEM_CFG_RM_SHIFT 16 +#define I40E_RPB_REPORT_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_RPB_REPORT_MEM_CFG_RM_SHIFT) + +#define I40E_RPB_REPORT_MEM_STATUS 0x000AC88C /* Reset: POR */ +#define I40E_RPB_REPORT_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_RPB_REPORT_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_RPB_REPORT_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_RPB_REPORT_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_RPB_REPORT_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_RPB_REPORT_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_RPB_REPORT_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_RPB_REPORT_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_RPB_REPORT_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_RPB_REPORT_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_RPB_REPORT_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_RPB_REPORT_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_RPB_RPT_CNT 0x000AC950 /* Reset: CORER */ +#define I40E_RPB_RPT_CNT_RPB_RPT_CNT_SHIFT 0 +#define I40E_RPB_RPT_CNT_RPB_RPT_CNT_MASK I40E_MASK(0xFFFF, I40E_RPB_RPT_CNT_RPB_RPT_CNT_SHIFT) + +#define I40E_RPB_RPT_STAT 0x000AC954 /* Reset: CORER */ +#define I40E_RPB_RPT_STAT_RPB_RPT_STAT_SHIFT 0 +#define I40E_RPB_RPT_STAT_RPB_RPT_STAT_MASK I40E_MASK(0xFFFFFFFF, I40E_RPB_RPT_STAT_RPB_RPT_STAT_SHIFT) + +#define I40E_RPB_SHR_MOD_CNT 0x000AC90C /* Reset: CORER */ +#define I40E_RPB_SHR_MOD_CNT_RPB_SHR_MOD_CNT_SHIFT 0 +#define I40E_RPB_SHR_MOD_CNT_RPB_SHR_MOD_CNT_MASK I40E_MASK(0xFFFFFFFF, I40E_RPB_SHR_MOD_CNT_RPB_SHR_MOD_CNT_SHIFT) + +#define I40E_TCB_ECC_COR_ERR 0x000AE0A8 /* Reset: POR */ +#define I40E_TCB_ECC_COR_ERR_CNT_SHIFT 0 +#define I40E_TCB_ECC_COR_ERR_CNT_MASK I40E_MASK(0xFFF, I40E_TCB_ECC_COR_ERR_CNT_SHIFT) + +#define I40E_TCB_ECC_UNCOR_ERR 0x000AE0A4 /* Reset: POR */ +#define I40E_TCB_ECC_UNCOR_ERR_CNT_SHIFT 0 +#define I40E_TCB_ECC_UNCOR_ERR_CNT_MASK I40E_MASK(0xFFF, I40E_TCB_ECC_UNCOR_ERR_CNT_SHIFT) + +#define I40E_TCB_PORT_CMD_BUF_DBG_CTL 0x000AE0B4 /* Reset: CORER */ +#define I40E_TCB_PORT_CMD_BUF_DBG_CTL_ADR_SHIFT 0 +#define I40E_TCB_PORT_CMD_BUF_DBG_CTL_ADR_MASK I40E_MASK(0x3FFFF, I40E_TCB_PORT_CMD_BUF_DBG_CTL_ADR_SHIFT) +#define I40E_TCB_PORT_CMD_BUF_DBG_CTL_DW_SEL_SHIFT 18 +#define I40E_TCB_PORT_CMD_BUF_DBG_CTL_DW_SEL_MASK I40E_MASK(0xFF, I40E_TCB_PORT_CMD_BUF_DBG_CTL_DW_SEL_SHIFT) +#define I40E_TCB_PORT_CMD_BUF_DBG_CTL_RD_EN_SHIFT 30 +#define I40E_TCB_PORT_CMD_BUF_DBG_CTL_RD_EN_MASK I40E_MASK(0x1, I40E_TCB_PORT_CMD_BUF_DBG_CTL_RD_EN_SHIFT) +#define I40E_TCB_PORT_CMD_BUF_DBG_CTL_DONE_SHIFT 31 +#define I40E_TCB_PORT_CMD_BUF_DBG_CTL_DONE_MASK I40E_MASK(0x1, I40E_TCB_PORT_CMD_BUF_DBG_CTL_DONE_SHIFT) + +#define I40E_TCB_PORT_CMD_BUF_DBG_DATA 0x000AE0CC /* Reset: CORER */ +#define I40E_TCB_PORT_CMD_BUF_DBG_DATA_RD_DW_SHIFT 0 +#define I40E_TCB_PORT_CMD_BUF_DBG_DATA_RD_DW_MASK I40E_MASK(0xFFFFFFFF, I40E_TCB_PORT_CMD_BUF_DBG_DATA_RD_DW_SHIFT) + +#define I40E_TCB_PORT_CMD_MNG_DBG_CTL 0x000AE0B8 /* Reset: CORER */ +#define I40E_TCB_PORT_CMD_MNG_DBG_CTL_ADR_SHIFT 0 +#define I40E_TCB_PORT_CMD_MNG_DBG_CTL_ADR_MASK I40E_MASK(0x3FFFF, I40E_TCB_PORT_CMD_MNG_DBG_CTL_ADR_SHIFT) +#define I40E_TCB_PORT_CMD_MNG_DBG_CTL_DW_SEL_SHIFT 18 +#define I40E_TCB_PORT_CMD_MNG_DBG_CTL_DW_SEL_MASK I40E_MASK(0xFF, I40E_TCB_PORT_CMD_MNG_DBG_CTL_DW_SEL_SHIFT) +#define I40E_TCB_PORT_CMD_MNG_DBG_CTL_RD_EN_SHIFT 30 +#define I40E_TCB_PORT_CMD_MNG_DBG_CTL_RD_EN_MASK 
I40E_MASK(0x1, I40E_TCB_PORT_CMD_MNG_DBG_CTL_RD_EN_SHIFT) +#define I40E_TCB_PORT_CMD_MNG_DBG_CTL_DONE_SHIFT 31 +#define I40E_TCB_PORT_CMD_MNG_DBG_CTL_DONE_MASK I40E_MASK(0x1, I40E_TCB_PORT_CMD_MNG_DBG_CTL_DONE_SHIFT) + +#define I40E_TCB_PORT_CMD_MNG_DBG_DATA 0x000AE0C0 /* Reset: CORER */ +#define I40E_TCB_PORT_CMD_MNG_DBG_DATA_RD_DW_SHIFT 0 +#define I40E_TCB_PORT_CMD_MNG_DBG_DATA_RD_DW_MASK I40E_MASK(0xFFFFFFFF, I40E_TCB_PORT_CMD_MNG_DBG_DATA_RD_DW_SHIFT) + +#define I40E_TCB_WAIT_CMD_BUF_DBG_CTL 0x000AE0BC /* Reset: CORER */ +#define I40E_TCB_WAIT_CMD_BUF_DBG_CTL_ADR_SHIFT 0 +#define I40E_TCB_WAIT_CMD_BUF_DBG_CTL_ADR_MASK I40E_MASK(0x3FFFF, I40E_TCB_WAIT_CMD_BUF_DBG_CTL_ADR_SHIFT) +#define I40E_TCB_WAIT_CMD_BUF_DBG_CTL_DW_SEL_SHIFT 18 +#define I40E_TCB_WAIT_CMD_BUF_DBG_CTL_DW_SEL_MASK I40E_MASK(0xFF, I40E_TCB_WAIT_CMD_BUF_DBG_CTL_DW_SEL_SHIFT) +#define I40E_TCB_WAIT_CMD_BUF_DBG_CTL_RD_EN_SHIFT 30 +#define I40E_TCB_WAIT_CMD_BUF_DBG_CTL_RD_EN_MASK I40E_MASK(0x1, I40E_TCB_WAIT_CMD_BUF_DBG_CTL_RD_EN_SHIFT) +#define I40E_TCB_WAIT_CMD_BUF_DBG_CTL_DONE_SHIFT 31 +#define I40E_TCB_WAIT_CMD_BUF_DBG_CTL_DONE_MASK I40E_MASK(0x1, I40E_TCB_WAIT_CMD_BUF_DBG_CTL_DONE_SHIFT) + +#define I40E_TCB_WAIT_CMD_BUF_DBG_DATA 0x000AE0C4 /* Reset: CORER */ +#define I40E_TCB_WAIT_CMD_BUF_DBG_DATA_RD_DW_SHIFT 0 +#define I40E_TCB_WAIT_CMD_BUF_DBG_DATA_RD_DW_MASK I40E_MASK(0xFFFFFFFF, I40E_TCB_WAIT_CMD_BUF_DBG_DATA_RD_DW_SHIFT) + +#define I40E_TCB_WAIT_CMD_MNG_DBG_CTL 0x000AE0B0 /* Reset: CORER */ +#define I40E_TCB_WAIT_CMD_MNG_DBG_CTL_ADR_SHIFT 0 +#define I40E_TCB_WAIT_CMD_MNG_DBG_CTL_ADR_MASK I40E_MASK(0x3FFFF, I40E_TCB_WAIT_CMD_MNG_DBG_CTL_ADR_SHIFT) +#define I40E_TCB_WAIT_CMD_MNG_DBG_CTL_DW_SEL_SHIFT 18 +#define I40E_TCB_WAIT_CMD_MNG_DBG_CTL_DW_SEL_MASK I40E_MASK(0xFF, I40E_TCB_WAIT_CMD_MNG_DBG_CTL_DW_SEL_SHIFT) +#define I40E_TCB_WAIT_CMD_MNG_DBG_CTL_RD_EN_SHIFT 30 +#define I40E_TCB_WAIT_CMD_MNG_DBG_CTL_RD_EN_MASK I40E_MASK(0x1, I40E_TCB_WAIT_CMD_MNG_DBG_CTL_RD_EN_SHIFT) +#define I40E_TCB_WAIT_CMD_MNG_DBG_CTL_DONE_SHIFT 31 +#define I40E_TCB_WAIT_CMD_MNG_DBG_CTL_DONE_MASK I40E_MASK(0x1, I40E_TCB_WAIT_CMD_MNG_DBG_CTL_DONE_SHIFT) + +#define I40E_TCB_WAIT_CMD_MNG_DBG_DATA 0x000AE0C8 /* Reset: CORER */ +#define I40E_TCB_WAIT_CMD_MNG_DBG_DATA_RD_DW_SHIFT 0 +#define I40E_TCB_WAIT_CMD_MNG_DBG_DATA_RD_DW_MASK I40E_MASK(0xFFFFFFFF, I40E_TCB_WAIT_CMD_MNG_DBG_DATA_RD_DW_SHIFT) + +#define I40E_TDPU_CMD_MUX_MEM_CFG 0x00044304 /* Reset: POR */ +#define I40E_TDPU_CMD_MUX_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_TDPU_CMD_MUX_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TDPU_CMD_MUX_MEM_CFG_ECC_EN_SHIFT) +#define I40E_TDPU_CMD_MUX_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TDPU_CMD_MUX_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TDPU_CMD_MUX_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TDPU_CMD_MUX_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TDPU_CMD_MUX_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TDPU_CMD_MUX_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TDPU_CMD_MUX_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_TDPU_CMD_MUX_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TDPU_CMD_MUX_MEM_CFG_LS_FORCE_SHIFT) +#define I40E_TDPU_CMD_MUX_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TDPU_CMD_MUX_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TDPU_CMD_MUX_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_TDPU_CMD_MUX_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_TDPU_CMD_MUX_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TDPU_CMD_MUX_MEM_CFG_MASK_INT_SHIFT) +#define I40E_TDPU_CMD_MUX_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_TDPU_CMD_MUX_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, 
I40E_TDPU_CMD_MUX_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_TDPU_CMD_MUX_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_TDPU_CMD_MUX_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TDPU_CMD_MUX_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_TDPU_CMD_MUX_MEM_CFG_RME_SHIFT 12 +#define I40E_TDPU_CMD_MUX_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_TDPU_CMD_MUX_MEM_CFG_RME_SHIFT) +#define I40E_TDPU_CMD_MUX_MEM_CFG_RM_SHIFT 16 +#define I40E_TDPU_CMD_MUX_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_TDPU_CMD_MUX_MEM_CFG_RM_SHIFT) + +#define I40E_TDPU_CMD_MUX_MEM_STATUS 0x00044330 /* Reset: POR */ +#define I40E_TDPU_CMD_MUX_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TDPU_CMD_MUX_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TDPU_CMD_MUX_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_TDPU_CMD_MUX_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TDPU_CMD_MUX_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TDPU_CMD_MUX_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_TDPU_CMD_MUX_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TDPU_CMD_MUX_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TDPU_CMD_MUX_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_TDPU_CMD_MUX_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TDPU_CMD_MUX_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TDPU_CMD_MUX_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TDPU_DAC_MEM_CFG 0x00044310 /* Reset: POR */ +#define I40E_TDPU_DAC_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_TDPU_DAC_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MEM_CFG_ECC_EN_SHIFT) +#define I40E_TDPU_DAC_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TDPU_DAC_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TDPU_DAC_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TDPU_DAC_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TDPU_DAC_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_TDPU_DAC_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MEM_CFG_LS_FORCE_SHIFT) +#define I40E_TDPU_DAC_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TDPU_DAC_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_TDPU_DAC_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_TDPU_DAC_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MEM_CFG_MASK_INT_SHIFT) +#define I40E_TDPU_DAC_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_TDPU_DAC_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_TDPU_DAC_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_TDPU_DAC_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_TDPU_DAC_MEM_CFG_RME_SHIFT 12 +#define I40E_TDPU_DAC_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MEM_CFG_RME_SHIFT) +#define I40E_TDPU_DAC_MEM_CFG_RM_SHIFT 16 +#define I40E_TDPU_DAC_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_TDPU_DAC_MEM_CFG_RM_SHIFT) + +#define I40E_TDPU_DAC_MEM_STATUS 0x00044328 /* Reset: POR */ +#define I40E_TDPU_DAC_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TDPU_DAC_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_TDPU_DAC_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TDPU_DAC_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_TDPU_DAC_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TDPU_DAC_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_TDPU_DAC_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TDPU_DAC_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TDPU_DAC_MNG_MEM_CFG 0x0004430C /* Reset: POR */ +#define 
I40E_TDPU_DAC_MNG_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_TDPU_DAC_MNG_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MNG_MEM_CFG_ECC_EN_SHIFT) +#define I40E_TDPU_DAC_MNG_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TDPU_DAC_MNG_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MNG_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TDPU_DAC_MNG_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TDPU_DAC_MNG_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MNG_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TDPU_DAC_MNG_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_TDPU_DAC_MNG_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MNG_MEM_CFG_LS_FORCE_SHIFT) +#define I40E_TDPU_DAC_MNG_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TDPU_DAC_MNG_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MNG_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_TDPU_DAC_MNG_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_TDPU_DAC_MNG_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MNG_MEM_CFG_MASK_INT_SHIFT) +#define I40E_TDPU_DAC_MNG_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_TDPU_DAC_MNG_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MNG_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_TDPU_DAC_MNG_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_TDPU_DAC_MNG_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MNG_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_TDPU_DAC_MNG_MEM_CFG_RME_SHIFT 12 +#define I40E_TDPU_DAC_MNG_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MNG_MEM_CFG_RME_SHIFT) +#define I40E_TDPU_DAC_MNG_MEM_CFG_RM_SHIFT 16 +#define I40E_TDPU_DAC_MNG_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_TDPU_DAC_MNG_MEM_CFG_RM_SHIFT) + +#define I40E_TDPU_DAC_MNG_MEM_STATUS 0x0004432C /* Reset: POR */ +#define I40E_TDPU_DAC_MNG_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TDPU_DAC_MNG_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MNG_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_TDPU_DAC_MNG_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TDPU_DAC_MNG_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MNG_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_TDPU_DAC_MNG_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TDPU_DAC_MNG_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MNG_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_TDPU_DAC_MNG_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TDPU_DAC_MNG_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TDPU_DAC_MNG_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TDPU_ECC_COR_ERR 0x0004433C /* Reset: POR */ +#define I40E_TDPU_ECC_COR_ERR_CNT_SHIFT 0 +#define I40E_TDPU_ECC_COR_ERR_CNT_MASK I40E_MASK(0xFFF, I40E_TDPU_ECC_COR_ERR_CNT_SHIFT) + +#define I40E_TDPU_ECC_UNCOR_ERR 0x00044338 /* Reset: POR */ +#define I40E_TDPU_ECC_UNCOR_ERR_CNT_SHIFT 0 +#define I40E_TDPU_ECC_UNCOR_ERR_CNT_MASK I40E_MASK(0xFFF, I40E_TDPU_ECC_UNCOR_ERR_CNT_SHIFT) + +#define I40E_TDPU_IMEM_CFG 0x000442F8 /* Reset: POR */ +#define I40E_TDPU_IMEM_CFG_ECC_EN_SHIFT 0 +#define I40E_TDPU_IMEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TDPU_IMEM_CFG_ECC_EN_SHIFT) +#define I40E_TDPU_IMEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TDPU_IMEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TDPU_IMEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TDPU_IMEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TDPU_IMEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TDPU_IMEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TDPU_IMEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_TDPU_IMEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TDPU_IMEM_CFG_LS_FORCE_SHIFT) +#define I40E_TDPU_IMEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TDPU_IMEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TDPU_IMEM_CFG_LS_BYPASS_SHIFT) +#define I40E_TDPU_IMEM_CFG_MASK_INT_SHIFT 5 +#define 
I40E_TDPU_IMEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TDPU_IMEM_CFG_MASK_INT_SHIFT) +#define I40E_TDPU_IMEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_TDPU_IMEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TDPU_IMEM_CFG_FIX_CNT_SHIFT) +#define I40E_TDPU_IMEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_TDPU_IMEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TDPU_IMEM_CFG_ERR_CNT_SHIFT) +#define I40E_TDPU_IMEM_CFG_RME_SHIFT 12 +#define I40E_TDPU_IMEM_CFG_RME_MASK I40E_MASK(0x1, I40E_TDPU_IMEM_CFG_RME_SHIFT) +#define I40E_TDPU_IMEM_CFG_RM_SHIFT 16 +#define I40E_TDPU_IMEM_CFG_RM_MASK I40E_MASK(0xF, I40E_TDPU_IMEM_CFG_RM_SHIFT) + +#define I40E_TDPU_IMEM_STATUS 0x00044318 /* Reset: POR */ +#define I40E_TDPU_IMEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TDPU_IMEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TDPU_IMEM_STATUS_ECC_ERR_SHIFT) +#define I40E_TDPU_IMEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TDPU_IMEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TDPU_IMEM_STATUS_ECC_FIX_SHIFT) +#define I40E_TDPU_IMEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TDPU_IMEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TDPU_IMEM_STATUS_INIT_DONE_SHIFT) +#define I40E_TDPU_IMEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TDPU_IMEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TDPU_IMEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TDPU_RECIPE_ADDR_CFG 0x000442FC /* Reset: POR */ +#define I40E_TDPU_RECIPE_ADDR_CFG_ECC_EN_SHIFT 0 +#define I40E_TDPU_RECIPE_ADDR_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TDPU_RECIPE_ADDR_CFG_ECC_EN_SHIFT) +#define I40E_TDPU_RECIPE_ADDR_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TDPU_RECIPE_ADDR_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TDPU_RECIPE_ADDR_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TDPU_RECIPE_ADDR_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TDPU_RECIPE_ADDR_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TDPU_RECIPE_ADDR_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TDPU_RECIPE_ADDR_CFG_LS_FORCE_SHIFT 3 +#define I40E_TDPU_RECIPE_ADDR_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TDPU_RECIPE_ADDR_CFG_LS_FORCE_SHIFT) +#define I40E_TDPU_RECIPE_ADDR_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TDPU_RECIPE_ADDR_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TDPU_RECIPE_ADDR_CFG_LS_BYPASS_SHIFT) +#define I40E_TDPU_RECIPE_ADDR_CFG_MASK_INT_SHIFT 5 +#define I40E_TDPU_RECIPE_ADDR_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TDPU_RECIPE_ADDR_CFG_MASK_INT_SHIFT) +#define I40E_TDPU_RECIPE_ADDR_CFG_FIX_CNT_SHIFT 8 +#define I40E_TDPU_RECIPE_ADDR_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TDPU_RECIPE_ADDR_CFG_FIX_CNT_SHIFT) +#define I40E_TDPU_RECIPE_ADDR_CFG_ERR_CNT_SHIFT 9 +#define I40E_TDPU_RECIPE_ADDR_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TDPU_RECIPE_ADDR_CFG_ERR_CNT_SHIFT) +#define I40E_TDPU_RECIPE_ADDR_CFG_RME_SHIFT 12 +#define I40E_TDPU_RECIPE_ADDR_CFG_RME_MASK I40E_MASK(0x1, I40E_TDPU_RECIPE_ADDR_CFG_RME_SHIFT) +#define I40E_TDPU_RECIPE_ADDR_CFG_RM_SHIFT 16 +#define I40E_TDPU_RECIPE_ADDR_CFG_RM_MASK I40E_MASK(0xF, I40E_TDPU_RECIPE_ADDR_CFG_RM_SHIFT) + +#define I40E_TDPU_RECIPE_ADDR_STATUS 0x0004431C /* Reset: POR */ +#define I40E_TDPU_RECIPE_ADDR_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TDPU_RECIPE_ADDR_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TDPU_RECIPE_ADDR_STATUS_ECC_ERR_SHIFT) +#define I40E_TDPU_RECIPE_ADDR_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TDPU_RECIPE_ADDR_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TDPU_RECIPE_ADDR_STATUS_ECC_FIX_SHIFT) +#define I40E_TDPU_RECIPE_ADDR_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TDPU_RECIPE_ADDR_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TDPU_RECIPE_ADDR_STATUS_INIT_DONE_SHIFT) +#define I40E_TDPU_RECIPE_ADDR_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define 
I40E_TDPU_RECIPE_ADDR_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TDPU_RECIPE_ADDR_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TDPU_TDRD_MEM_CFG 0x00044314 /* Reset: POR */ +#define I40E_TDPU_TDRD_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_TDPU_TDRD_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TDPU_TDRD_MEM_CFG_ECC_EN_SHIFT) +#define I40E_TDPU_TDRD_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TDPU_TDRD_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TDPU_TDRD_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TDPU_TDRD_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TDPU_TDRD_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TDPU_TDRD_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TDPU_TDRD_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_TDPU_TDRD_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TDPU_TDRD_MEM_CFG_LS_FORCE_SHIFT) +#define I40E_TDPU_TDRD_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TDPU_TDRD_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TDPU_TDRD_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_TDPU_TDRD_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_TDPU_TDRD_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TDPU_TDRD_MEM_CFG_MASK_INT_SHIFT) +#define I40E_TDPU_TDRD_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_TDPU_TDRD_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TDPU_TDRD_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_TDPU_TDRD_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_TDPU_TDRD_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TDPU_TDRD_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_TDPU_TDRD_MEM_CFG_RME_SHIFT 12 +#define I40E_TDPU_TDRD_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_TDPU_TDRD_MEM_CFG_RME_SHIFT) +#define I40E_TDPU_TDRD_MEM_CFG_RM_SHIFT 16 +#define I40E_TDPU_TDRD_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_TDPU_TDRD_MEM_CFG_RM_SHIFT) + +#define I40E_TDPU_TDRD_MEM_STATUS 0x00044324 /* Reset: POR */ +#define I40E_TDPU_TDRD_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TDPU_TDRD_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TDPU_TDRD_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_TDPU_TDRD_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TDPU_TDRD_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TDPU_TDRD_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_TDPU_TDRD_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TDPU_TDRD_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TDPU_TDRD_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_TDPU_TDRD_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TDPU_TDRD_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TDPU_TDRD_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TDPU_TDWR_MEM_CFG 0x00044308 /* Reset: POR */ +#define I40E_TDPU_TDWR_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_TDPU_TDWR_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TDPU_TDWR_MEM_CFG_ECC_EN_SHIFT) +#define I40E_TDPU_TDWR_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TDPU_TDWR_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TDPU_TDWR_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TDPU_TDWR_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TDPU_TDWR_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TDPU_TDWR_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TDPU_TDWR_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_TDPU_TDWR_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TDPU_TDWR_MEM_CFG_LS_FORCE_SHIFT) +#define I40E_TDPU_TDWR_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TDPU_TDWR_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TDPU_TDWR_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_TDPU_TDWR_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_TDPU_TDWR_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TDPU_TDWR_MEM_CFG_MASK_INT_SHIFT) +#define I40E_TDPU_TDWR_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_TDPU_TDWR_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TDPU_TDWR_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_TDPU_TDWR_MEM_CFG_ERR_CNT_SHIFT 9 +#define 
I40E_TDPU_TDWR_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TDPU_TDWR_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_TDPU_TDWR_MEM_CFG_RME_SHIFT 12 +#define I40E_TDPU_TDWR_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_TDPU_TDWR_MEM_CFG_RME_SHIFT) +#define I40E_TDPU_TDWR_MEM_CFG_RM_SHIFT 16 +#define I40E_TDPU_TDWR_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_TDPU_TDWR_MEM_CFG_RM_SHIFT) + +#define I40E_TDPU_TDWR_MEM_STATUS 0x00044334 /* Reset: POR */ +#define I40E_TDPU_TDWR_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TDPU_TDWR_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TDPU_TDWR_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_TDPU_TDWR_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TDPU_TDWR_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TDPU_TDWR_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_TDPU_TDWR_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TDPU_TDWR_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TDPU_TDWR_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_TDPU_TDWR_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TDPU_TDWR_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TDPU_TDWR_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG 0x00044300 /* Reset: POR */ +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_ECC_EN_SHIFT) +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_LS_FORCE_SHIFT) +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_MASK_INT_SHIFT) +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_RME_SHIFT 12 +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_RME_SHIFT) +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_RM_SHIFT 16 +#define I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_TDPU_VSI_LY2_INSERT_MEM_CFG_RM_SHIFT) + +#define I40E_TDPU_VSI_LY2_INSERT_MEM_STATUS 0x00044320 /* Reset: POR */ +#define I40E_TDPU_VSI_LY2_INSERT_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TDPU_VSI_LY2_INSERT_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TDPU_VSI_LY2_INSERT_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_TDPU_VSI_LY2_INSERT_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TDPU_VSI_LY2_INSERT_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TDPU_VSI_LY2_INSERT_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_TDPU_VSI_LY2_INSERT_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TDPU_VSI_LY2_INSERT_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, 
I40E_TDPU_VSI_LY2_INSERT_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_TDPU_VSI_LY2_INSERT_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TDPU_VSI_LY2_INSERT_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TDPU_VSI_LY2_INSERT_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TLAN_DEC_MEM_CFG 0x000E6490 /* Reset: POR */ +#define I40E_TLAN_DEC_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_TLAN_DEC_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MEM_CFG_ECC_EN_SHIFT) +#define I40E_TLAN_DEC_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TLAN_DEC_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TLAN_DEC_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TLAN_DEC_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TLAN_DEC_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_TLAN_DEC_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MEM_CFG_LS_FORCE_SHIFT) +#define I40E_TLAN_DEC_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TLAN_DEC_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_TLAN_DEC_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_TLAN_DEC_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MEM_CFG_MASK_INT_SHIFT) +#define I40E_TLAN_DEC_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_TLAN_DEC_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_TLAN_DEC_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_TLAN_DEC_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_TLAN_DEC_MEM_CFG_RME_SHIFT 12 +#define I40E_TLAN_DEC_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MEM_CFG_RME_SHIFT) +#define I40E_TLAN_DEC_MEM_CFG_RM_SHIFT 16 +#define I40E_TLAN_DEC_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_TLAN_DEC_MEM_CFG_RM_SHIFT) + +#define I40E_TLAN_DEC_MEM_STATUS 0x000E6494 /* Reset: POR */ +#define I40E_TLAN_DEC_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TLAN_DEC_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_TLAN_DEC_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TLAN_DEC_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_TLAN_DEC_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TLAN_DEC_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_TLAN_DEC_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TLAN_DEC_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TLAN_DEC_MNG_MEM_CFG 0x000E64A0 /* Reset: POR */ +#define I40E_TLAN_DEC_MNG_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_TLAN_DEC_MNG_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MNG_MEM_CFG_ECC_EN_SHIFT) +#define I40E_TLAN_DEC_MNG_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TLAN_DEC_MNG_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MNG_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TLAN_DEC_MNG_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TLAN_DEC_MNG_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MNG_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TLAN_DEC_MNG_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_TLAN_DEC_MNG_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MNG_MEM_CFG_LS_FORCE_SHIFT) +#define I40E_TLAN_DEC_MNG_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TLAN_DEC_MNG_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MNG_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_TLAN_DEC_MNG_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_TLAN_DEC_MNG_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MNG_MEM_CFG_MASK_INT_SHIFT) +#define 
I40E_TLAN_DEC_MNG_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_TLAN_DEC_MNG_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MNG_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_TLAN_DEC_MNG_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_TLAN_DEC_MNG_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MNG_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_TLAN_DEC_MNG_MEM_CFG_RME_SHIFT 12 +#define I40E_TLAN_DEC_MNG_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MNG_MEM_CFG_RME_SHIFT) +#define I40E_TLAN_DEC_MNG_MEM_CFG_RM_SHIFT 16 +#define I40E_TLAN_DEC_MNG_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_TLAN_DEC_MNG_MEM_CFG_RM_SHIFT) + +#define I40E_TLAN_DEC_MNG_MEM_STATUS 0x000E64A4 /* Reset: POR */ +#define I40E_TLAN_DEC_MNG_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TLAN_DEC_MNG_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MNG_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_TLAN_DEC_MNG_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TLAN_DEC_MNG_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MNG_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_TLAN_DEC_MNG_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TLAN_DEC_MNG_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MNG_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_TLAN_DEC_MNG_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TLAN_DEC_MNG_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TLAN_DEC_MNG_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TLAN_DEC_PTRS_MEM_CFG 0x000E6498 /* Reset: POR */ +#define I40E_TLAN_DEC_PTRS_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_TLAN_DEC_PTRS_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TLAN_DEC_PTRS_MEM_CFG_ECC_EN_SHIFT) +#define I40E_TLAN_DEC_PTRS_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TLAN_DEC_PTRS_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TLAN_DEC_PTRS_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TLAN_DEC_PTRS_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TLAN_DEC_PTRS_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TLAN_DEC_PTRS_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TLAN_DEC_PTRS_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_TLAN_DEC_PTRS_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TLAN_DEC_PTRS_MEM_CFG_LS_FORCE_SHIFT) +#define I40E_TLAN_DEC_PTRS_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TLAN_DEC_PTRS_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TLAN_DEC_PTRS_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_TLAN_DEC_PTRS_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_TLAN_DEC_PTRS_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TLAN_DEC_PTRS_MEM_CFG_MASK_INT_SHIFT) +#define I40E_TLAN_DEC_PTRS_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_TLAN_DEC_PTRS_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TLAN_DEC_PTRS_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_TLAN_DEC_PTRS_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_TLAN_DEC_PTRS_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TLAN_DEC_PTRS_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_TLAN_DEC_PTRS_MEM_CFG_RME_SHIFT 12 +#define I40E_TLAN_DEC_PTRS_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_TLAN_DEC_PTRS_MEM_CFG_RME_SHIFT) +#define I40E_TLAN_DEC_PTRS_MEM_CFG_RM_SHIFT 16 +#define I40E_TLAN_DEC_PTRS_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_TLAN_DEC_PTRS_MEM_CFG_RM_SHIFT) + +#define I40E_TLAN_DEC_PTRS_MEM_STATUS 0x000E649C /* Reset: POR */ +#define I40E_TLAN_DEC_PTRS_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TLAN_DEC_PTRS_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TLAN_DEC_PTRS_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_TLAN_DEC_PTRS_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TLAN_DEC_PTRS_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TLAN_DEC_PTRS_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_TLAN_DEC_PTRS_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TLAN_DEC_PTRS_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, 
I40E_TLAN_DEC_PTRS_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_TLAN_DEC_PTRS_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TLAN_DEC_PTRS_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TLAN_DEC_PTRS_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TLAN_ECC_COR_ERR 0x000E64B4 /* Reset: POR */ +#define I40E_TLAN_ECC_COR_ERR_CNT_SHIFT 0 +#define I40E_TLAN_ECC_COR_ERR_CNT_MASK I40E_MASK(0xFFF, I40E_TLAN_ECC_COR_ERR_CNT_SHIFT) + +#define I40E_TLAN_ECC_UNCOR_ERR 0x000E64B0 /* Reset: POR */ +#define I40E_TLAN_ECC_UNCOR_ERR_CNT_SHIFT 0 +#define I40E_TLAN_ECC_UNCOR_ERR_CNT_MASK I40E_MASK(0xFFF, I40E_TLAN_ECC_UNCOR_ERR_CNT_SHIFT) + +#define I40E_TLAN_HEAD_WB_CFG 0x000E64A8 /* Reset: POR */ +#define I40E_TLAN_HEAD_WB_CFG_ECC_EN_SHIFT 0 +#define I40E_TLAN_HEAD_WB_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TLAN_HEAD_WB_CFG_ECC_EN_SHIFT) +#define I40E_TLAN_HEAD_WB_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TLAN_HEAD_WB_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TLAN_HEAD_WB_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TLAN_HEAD_WB_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TLAN_HEAD_WB_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TLAN_HEAD_WB_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TLAN_HEAD_WB_CFG_LS_FORCE_SHIFT 3 +#define I40E_TLAN_HEAD_WB_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TLAN_HEAD_WB_CFG_LS_FORCE_SHIFT) +#define I40E_TLAN_HEAD_WB_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TLAN_HEAD_WB_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TLAN_HEAD_WB_CFG_LS_BYPASS_SHIFT) +#define I40E_TLAN_HEAD_WB_CFG_MASK_INT_SHIFT 5 +#define I40E_TLAN_HEAD_WB_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TLAN_HEAD_WB_CFG_MASK_INT_SHIFT) +#define I40E_TLAN_HEAD_WB_CFG_FIX_CNT_SHIFT 8 +#define I40E_TLAN_HEAD_WB_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TLAN_HEAD_WB_CFG_FIX_CNT_SHIFT) +#define I40E_TLAN_HEAD_WB_CFG_ERR_CNT_SHIFT 9 +#define I40E_TLAN_HEAD_WB_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TLAN_HEAD_WB_CFG_ERR_CNT_SHIFT) +#define I40E_TLAN_HEAD_WB_CFG_RME_SHIFT 12 +#define I40E_TLAN_HEAD_WB_CFG_RME_MASK I40E_MASK(0x1, I40E_TLAN_HEAD_WB_CFG_RME_SHIFT) +#define I40E_TLAN_HEAD_WB_CFG_RM_SHIFT 16 +#define I40E_TLAN_HEAD_WB_CFG_RM_MASK I40E_MASK(0xF, I40E_TLAN_HEAD_WB_CFG_RM_SHIFT) + +#define I40E_TLAN_HEAD_WB_STATUS 0x000E64AC /* Reset: POR */ +#define I40E_TLAN_HEAD_WB_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TLAN_HEAD_WB_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TLAN_HEAD_WB_STATUS_ECC_ERR_SHIFT) +#define I40E_TLAN_HEAD_WB_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TLAN_HEAD_WB_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TLAN_HEAD_WB_STATUS_ECC_FIX_SHIFT) +#define I40E_TLAN_HEAD_WB_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TLAN_HEAD_WB_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TLAN_HEAD_WB_STATUS_INIT_DONE_SHIFT) +#define I40E_TLAN_HEAD_WB_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TLAN_HEAD_WB_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TLAN_HEAD_WB_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TPB_CLID_MEM_CFG 0x0009808C /* Reset: POR */ +#define I40E_TPB_CLID_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_TPB_CLID_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TPB_CLID_MEM_CFG_ECC_EN_SHIFT) +#define I40E_TPB_CLID_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TPB_CLID_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TPB_CLID_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TPB_CLID_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TPB_CLID_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TPB_CLID_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TPB_CLID_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_TPB_CLID_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TPB_CLID_MEM_CFG_LS_FORCE_SHIFT) +#define 
I40E_TPB_CLID_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TPB_CLID_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TPB_CLID_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_TPB_CLID_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_TPB_CLID_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TPB_CLID_MEM_CFG_MASK_INT_SHIFT) +#define I40E_TPB_CLID_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_TPB_CLID_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TPB_CLID_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_TPB_CLID_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_TPB_CLID_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TPB_CLID_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_TPB_CLID_MEM_CFG_RME_SHIFT 12 +#define I40E_TPB_CLID_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_TPB_CLID_MEM_CFG_RME_SHIFT) +#define I40E_TPB_CLID_MEM_CFG_RM_SHIFT 16 +#define I40E_TPB_CLID_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_TPB_CLID_MEM_CFG_RM_SHIFT) + +#define I40E_TPB_CLID_MEM_DBG_CTL 0x000980C8 /* Reset: CORER */ +#define I40E_TPB_CLID_MEM_DBG_CTL_ADR_SHIFT 0 +#define I40E_TPB_CLID_MEM_DBG_CTL_ADR_MASK I40E_MASK(0x3FFFF, I40E_TPB_CLID_MEM_DBG_CTL_ADR_SHIFT) +#define I40E_TPB_CLID_MEM_DBG_CTL_DW_SEL_SHIFT 18 +#define I40E_TPB_CLID_MEM_DBG_CTL_DW_SEL_MASK I40E_MASK(0xFF, I40E_TPB_CLID_MEM_DBG_CTL_DW_SEL_SHIFT) +#define I40E_TPB_CLID_MEM_DBG_CTL_RD_EN_SHIFT 30 +#define I40E_TPB_CLID_MEM_DBG_CTL_RD_EN_MASK I40E_MASK(0x1, I40E_TPB_CLID_MEM_DBG_CTL_RD_EN_SHIFT) +#define I40E_TPB_CLID_MEM_DBG_CTL_DONE_SHIFT 31 +#define I40E_TPB_CLID_MEM_DBG_CTL_DONE_MASK I40E_MASK(0x1, I40E_TPB_CLID_MEM_DBG_CTL_DONE_SHIFT) + +#define I40E_TPB_CLID_MEM_DBG_DATA 0x000980D4 /* Reset: CORER */ +#define I40E_TPB_CLID_MEM_DBG_DATA_RD_DW_SHIFT 0 +#define I40E_TPB_CLID_MEM_DBG_DATA_RD_DW_MASK I40E_MASK(0xFFFFFFFF, I40E_TPB_CLID_MEM_DBG_DATA_RD_DW_SHIFT) + +#define I40E_TPB_CLID_MEM_STATUS 0x00098090 /* Reset: POR */ +#define I40E_TPB_CLID_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TPB_CLID_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TPB_CLID_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_TPB_CLID_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TPB_CLID_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TPB_CLID_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_TPB_CLID_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TPB_CLID_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TPB_CLID_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_TPB_CLID_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TPB_CLID_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TPB_CLID_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TPB_DBG_FEAT 0x00098084 /* Reset: CORER */ +#define I40E_TPB_DBG_FEAT_DIS_MIB_SHIFT 0 +#define I40E_TPB_DBG_FEAT_DIS_MIB_MASK I40E_MASK(0xF, I40E_TPB_DBG_FEAT_DIS_MIB_SHIFT) +#define I40E_TPB_DBG_FEAT_FORCE_FC_IND_SHIFT 4 +#define I40E_TPB_DBG_FEAT_FORCE_FC_IND_MASK I40E_MASK(0xF, I40E_TPB_DBG_FEAT_FORCE_FC_IND_SHIFT) +#define I40E_TPB_DBG_FEAT_OBEY_FC_OVR_SHIFT 8 +#define I40E_TPB_DBG_FEAT_OBEY_FC_OVR_MASK I40E_MASK(0xF, I40E_TPB_DBG_FEAT_OBEY_FC_OVR_SHIFT) +#define I40E_TPB_DBG_FEAT_DIS_BURST_CTL_SHIFT 12 +#define I40E_TPB_DBG_FEAT_DIS_BURST_CTL_MASK I40E_MASK(0xF, I40E_TPB_DBG_FEAT_DIS_BURST_CTL_SHIFT) + +#define I40E_TPB_ECC_COR_ERR 0x000980B8 /* Reset: POR */ +#define I40E_TPB_ECC_COR_ERR_CNT_SHIFT 0 +#define I40E_TPB_ECC_COR_ERR_CNT_MASK I40E_MASK(0xFFF, I40E_TPB_ECC_COR_ERR_CNT_SHIFT) + +#define I40E_TPB_ECC_UNCOR_ERR 0x000980B4 /* Reset: POR */ +#define I40E_TPB_ECC_UNCOR_ERR_CNT_SHIFT 0 +#define I40E_TPB_ECC_UNCOR_ERR_CNT_MASK I40E_MASK(0xFFF, I40E_TPB_ECC_UNCOR_ERR_CNT_SHIFT) + +#define I40E_TPB_FC_OVR 0x00098088 /* Reset: CORER */ +#define I40E_TPB_FC_OVR_TPB_FC_OVR_SHIFT 0 
+#define I40E_TPB_FC_OVR_TPB_FC_OVR_MASK I40E_MASK(0xFFFFFFFF, I40E_TPB_FC_OVR_TPB_FC_OVR_SHIFT) + +#define I40E_TPB_PKT_MEM_CFG 0x00098094 /* Reset: POR */ +#define I40E_TPB_PKT_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_TPB_PKT_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TPB_PKT_MEM_CFG_ECC_EN_SHIFT) +#define I40E_TPB_PKT_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TPB_PKT_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TPB_PKT_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TPB_PKT_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TPB_PKT_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TPB_PKT_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TPB_PKT_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_TPB_PKT_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TPB_PKT_MEM_CFG_LS_FORCE_SHIFT) +#define I40E_TPB_PKT_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TPB_PKT_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TPB_PKT_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_TPB_PKT_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_TPB_PKT_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TPB_PKT_MEM_CFG_MASK_INT_SHIFT) +#define I40E_TPB_PKT_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_TPB_PKT_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TPB_PKT_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_TPB_PKT_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_TPB_PKT_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TPB_PKT_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_TPB_PKT_MEM_CFG_RME_SHIFT 12 +#define I40E_TPB_PKT_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_TPB_PKT_MEM_CFG_RME_SHIFT) +#define I40E_TPB_PKT_MEM_CFG_RM_SHIFT 16 +#define I40E_TPB_PKT_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_TPB_PKT_MEM_CFG_RM_SHIFT) + +#define I40E_TPB_PKT_MEM_DBG_CTL 0x000980CC /* Reset: CORER */ +#define I40E_TPB_PKT_MEM_DBG_CTL_ADR_SHIFT 0 +#define I40E_TPB_PKT_MEM_DBG_CTL_ADR_MASK I40E_MASK(0x3FFFF, I40E_TPB_PKT_MEM_DBG_CTL_ADR_SHIFT) +#define I40E_TPB_PKT_MEM_DBG_CTL_DW_SEL_SHIFT 18 +#define I40E_TPB_PKT_MEM_DBG_CTL_DW_SEL_MASK I40E_MASK(0xFF, I40E_TPB_PKT_MEM_DBG_CTL_DW_SEL_SHIFT) +#define I40E_TPB_PKT_MEM_DBG_CTL_RD_EN_SHIFT 30 +#define I40E_TPB_PKT_MEM_DBG_CTL_RD_EN_MASK I40E_MASK(0x1, I40E_TPB_PKT_MEM_DBG_CTL_RD_EN_SHIFT) +#define I40E_TPB_PKT_MEM_DBG_CTL_DONE_SHIFT 31 +#define I40E_TPB_PKT_MEM_DBG_CTL_DONE_MASK I40E_MASK(0x1, I40E_TPB_PKT_MEM_DBG_CTL_DONE_SHIFT) + +#define I40E_TPB_PKT_MEM_DBG_DATA 0x000980E0 /* Reset: CORER */ +#define I40E_TPB_PKT_MEM_DBG_DATA_RD_DW_SHIFT 0 +#define I40E_TPB_PKT_MEM_DBG_DATA_RD_DW_MASK I40E_MASK(0xFFFFFFFF, I40E_TPB_PKT_MEM_DBG_DATA_RD_DW_SHIFT) + +#define I40E_TPB_PKT_MEM_STATUS 0x00098098 /* Reset: POR */ +#define I40E_TPB_PKT_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TPB_PKT_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TPB_PKT_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_TPB_PKT_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TPB_PKT_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TPB_PKT_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_TPB_PKT_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TPB_PKT_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TPB_PKT_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_TPB_PKT_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TPB_PKT_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TPB_PKT_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TPB_REPORT_LL_MEM_CFG 0x0009809C /* Reset: POR */ +#define I40E_TPB_REPORT_LL_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_TPB_REPORT_LL_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TPB_REPORT_LL_MEM_CFG_ECC_EN_SHIFT) +#define I40E_TPB_REPORT_LL_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TPB_REPORT_LL_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TPB_REPORT_LL_MEM_CFG_ECC_INVERT_1_SHIFT) +#define 
I40E_TPB_REPORT_LL_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TPB_REPORT_LL_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TPB_REPORT_LL_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TPB_REPORT_LL_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_TPB_REPORT_LL_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TPB_REPORT_LL_MEM_CFG_LS_FORCE_SHIFT) +#define I40E_TPB_REPORT_LL_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TPB_REPORT_LL_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TPB_REPORT_LL_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_TPB_REPORT_LL_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_TPB_REPORT_LL_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TPB_REPORT_LL_MEM_CFG_MASK_INT_SHIFT) +#define I40E_TPB_REPORT_LL_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_TPB_REPORT_LL_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TPB_REPORT_LL_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_TPB_REPORT_LL_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_TPB_REPORT_LL_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TPB_REPORT_LL_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_TPB_REPORT_LL_MEM_CFG_RME_SHIFT 12 +#define I40E_TPB_REPORT_LL_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_TPB_REPORT_LL_MEM_CFG_RME_SHIFT) +#define I40E_TPB_REPORT_LL_MEM_CFG_RM_SHIFT 16 +#define I40E_TPB_REPORT_LL_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_TPB_REPORT_LL_MEM_CFG_RM_SHIFT) + +#define I40E_TPB_REPORT_LL_MEM_DBG_CTL 0x000980C0 /* Reset: CORER */ +#define I40E_TPB_REPORT_LL_MEM_DBG_CTL_ADR_SHIFT 0 +#define I40E_TPB_REPORT_LL_MEM_DBG_CTL_ADR_MASK I40E_MASK(0x3FFFF, I40E_TPB_REPORT_LL_MEM_DBG_CTL_ADR_SHIFT) +#define I40E_TPB_REPORT_LL_MEM_DBG_CTL_DW_SEL_SHIFT 18 +#define I40E_TPB_REPORT_LL_MEM_DBG_CTL_DW_SEL_MASK I40E_MASK(0xFF, I40E_TPB_REPORT_LL_MEM_DBG_CTL_DW_SEL_SHIFT) +#define I40E_TPB_REPORT_LL_MEM_DBG_CTL_RD_EN_SHIFT 30 +#define I40E_TPB_REPORT_LL_MEM_DBG_CTL_RD_EN_MASK I40E_MASK(0x1, I40E_TPB_REPORT_LL_MEM_DBG_CTL_RD_EN_SHIFT) +#define I40E_TPB_REPORT_LL_MEM_DBG_CTL_DONE_SHIFT 31 +#define I40E_TPB_REPORT_LL_MEM_DBG_CTL_DONE_MASK I40E_MASK(0x1, I40E_TPB_REPORT_LL_MEM_DBG_CTL_DONE_SHIFT) + +#define I40E_TPB_REPORT_LL_MEM_DBG_DATA 0x000980D8 /* Reset: CORER */ +#define I40E_TPB_REPORT_LL_MEM_DBG_DATA_RD_DW_SHIFT 0 +#define I40E_TPB_REPORT_LL_MEM_DBG_DATA_RD_DW_MASK I40E_MASK(0xFFFFFFFF, I40E_TPB_REPORT_LL_MEM_DBG_DATA_RD_DW_SHIFT) + +#define I40E_TPB_REPORT_LL_MEM_STATUS 0x000980A0 /* Reset: POR */ +#define I40E_TPB_REPORT_LL_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TPB_REPORT_LL_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TPB_REPORT_LL_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_TPB_REPORT_LL_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TPB_REPORT_LL_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TPB_REPORT_LL_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_TPB_REPORT_LL_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TPB_REPORT_LL_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TPB_REPORT_LL_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_TPB_REPORT_LL_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TPB_REPORT_LL_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TPB_REPORT_LL_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TPB_REPORT_MEM_CFG 0x000980A4 /* Reset: POR */ +#define I40E_TPB_REPORT_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_TPB_REPORT_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TPB_REPORT_MEM_CFG_ECC_EN_SHIFT) +#define I40E_TPB_REPORT_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TPB_REPORT_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TPB_REPORT_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TPB_REPORT_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TPB_REPORT_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TPB_REPORT_MEM_CFG_ECC_INVERT_2_SHIFT) +#define 
I40E_TPB_REPORT_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_TPB_REPORT_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TPB_REPORT_MEM_CFG_LS_FORCE_SHIFT) +#define I40E_TPB_REPORT_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TPB_REPORT_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TPB_REPORT_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_TPB_REPORT_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_TPB_REPORT_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TPB_REPORT_MEM_CFG_MASK_INT_SHIFT) +#define I40E_TPB_REPORT_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_TPB_REPORT_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TPB_REPORT_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_TPB_REPORT_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_TPB_REPORT_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TPB_REPORT_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_TPB_REPORT_MEM_CFG_RME_SHIFT 12 +#define I40E_TPB_REPORT_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_TPB_REPORT_MEM_CFG_RME_SHIFT) +#define I40E_TPB_REPORT_MEM_CFG_RM_SHIFT 16 +#define I40E_TPB_REPORT_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_TPB_REPORT_MEM_CFG_RM_SHIFT) + +#define I40E_TPB_REPORT_MEM_DBG_CTL 0x000980C4 /* Reset: CORER */ +#define I40E_TPB_REPORT_MEM_DBG_CTL_ADR_SHIFT 0 +#define I40E_TPB_REPORT_MEM_DBG_CTL_ADR_MASK I40E_MASK(0x3FFFF, I40E_TPB_REPORT_MEM_DBG_CTL_ADR_SHIFT) +#define I40E_TPB_REPORT_MEM_DBG_CTL_DW_SEL_SHIFT 18 +#define I40E_TPB_REPORT_MEM_DBG_CTL_DW_SEL_MASK I40E_MASK(0xFF, I40E_TPB_REPORT_MEM_DBG_CTL_DW_SEL_SHIFT) +#define I40E_TPB_REPORT_MEM_DBG_CTL_RD_EN_SHIFT 30 +#define I40E_TPB_REPORT_MEM_DBG_CTL_RD_EN_MASK I40E_MASK(0x1, I40E_TPB_REPORT_MEM_DBG_CTL_RD_EN_SHIFT) +#define I40E_TPB_REPORT_MEM_DBG_CTL_DONE_SHIFT 31 +#define I40E_TPB_REPORT_MEM_DBG_CTL_DONE_MASK I40E_MASK(0x1, I40E_TPB_REPORT_MEM_DBG_CTL_DONE_SHIFT) + +#define I40E_TPB_REPORT_MEM_DBG_DATA 0x000980DC /* Reset: CORER */ +#define I40E_TPB_REPORT_MEM_DBG_DATA_RD_DW_SHIFT 0 +#define I40E_TPB_REPORT_MEM_DBG_DATA_RD_DW_MASK I40E_MASK(0xFFFFFFFF, I40E_TPB_REPORT_MEM_DBG_DATA_RD_DW_SHIFT) + +#define I40E_TPB_REPORT_MEM_STATUS 0x000980A8 /* Reset: POR */ +#define I40E_TPB_REPORT_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TPB_REPORT_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TPB_REPORT_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_TPB_REPORT_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TPB_REPORT_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TPB_REPORT_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_TPB_REPORT_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TPB_REPORT_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TPB_REPORT_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_TPB_REPORT_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TPB_REPORT_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TPB_REPORT_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TPB_RPB_BUFF_MEM_CFG 0x000980AC /* Reset: POR */ +#define I40E_TPB_RPB_BUFF_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_TPB_RPB_BUFF_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TPB_RPB_BUFF_MEM_CFG_ECC_EN_SHIFT) +#define I40E_TPB_RPB_BUFF_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TPB_RPB_BUFF_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TPB_RPB_BUFF_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TPB_RPB_BUFF_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TPB_RPB_BUFF_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TPB_RPB_BUFF_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TPB_RPB_BUFF_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_TPB_RPB_BUFF_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TPB_RPB_BUFF_MEM_CFG_LS_FORCE_SHIFT) +#define I40E_TPB_RPB_BUFF_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TPB_RPB_BUFF_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TPB_RPB_BUFF_MEM_CFG_LS_BYPASS_SHIFT) 
+#define I40E_TPB_RPB_BUFF_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_TPB_RPB_BUFF_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TPB_RPB_BUFF_MEM_CFG_MASK_INT_SHIFT) +#define I40E_TPB_RPB_BUFF_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_TPB_RPB_BUFF_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TPB_RPB_BUFF_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_TPB_RPB_BUFF_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_TPB_RPB_BUFF_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TPB_RPB_BUFF_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_TPB_RPB_BUFF_MEM_CFG_RME_SHIFT 12 +#define I40E_TPB_RPB_BUFF_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_TPB_RPB_BUFF_MEM_CFG_RME_SHIFT) +#define I40E_TPB_RPB_BUFF_MEM_CFG_RM_SHIFT 16 +#define I40E_TPB_RPB_BUFF_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_TPB_RPB_BUFF_MEM_CFG_RM_SHIFT) + +#define I40E_TPB_RPB_BUFF_MEM_DBG_CTL 0x000980BC /* Reset: CORER */ +#define I40E_TPB_RPB_BUFF_MEM_DBG_CTL_ADR_SHIFT 0 +#define I40E_TPB_RPB_BUFF_MEM_DBG_CTL_ADR_MASK I40E_MASK(0x3FFFF, I40E_TPB_RPB_BUFF_MEM_DBG_CTL_ADR_SHIFT) +#define I40E_TPB_RPB_BUFF_MEM_DBG_CTL_DW_SEL_SHIFT 18 +#define I40E_TPB_RPB_BUFF_MEM_DBG_CTL_DW_SEL_MASK I40E_MASK(0xFF, I40E_TPB_RPB_BUFF_MEM_DBG_CTL_DW_SEL_SHIFT) +#define I40E_TPB_RPB_BUFF_MEM_DBG_CTL_RD_EN_SHIFT 30 +#define I40E_TPB_RPB_BUFF_MEM_DBG_CTL_RD_EN_MASK I40E_MASK(0x1, I40E_TPB_RPB_BUFF_MEM_DBG_CTL_RD_EN_SHIFT) +#define I40E_TPB_RPB_BUFF_MEM_DBG_CTL_DONE_SHIFT 31 +#define I40E_TPB_RPB_BUFF_MEM_DBG_CTL_DONE_MASK I40E_MASK(0x1, I40E_TPB_RPB_BUFF_MEM_DBG_CTL_DONE_SHIFT) + +#define I40E_TPB_RPB_BUFF_MEM_DBG_DATA 0x000980D0 /* Reset: CORER */ +#define I40E_TPB_RPB_BUFF_MEM_DBG_DATA_RD_DW_SHIFT 0 +#define I40E_TPB_RPB_BUFF_MEM_DBG_DATA_RD_DW_MASK I40E_MASK(0xFFFFFFFF, I40E_TPB_RPB_BUFF_MEM_DBG_DATA_RD_DW_SHIFT) + +#define I40E_TPB_RPB_BUFF_MEM_STATUS 0x000980B0 /* Reset: POR */ +#define I40E_TPB_RPB_BUFF_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TPB_RPB_BUFF_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TPB_RPB_BUFF_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_TPB_RPB_BUFF_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TPB_RPB_BUFF_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TPB_RPB_BUFF_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_TPB_RPB_BUFF_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TPB_RPB_BUFF_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TPB_RPB_BUFF_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_TPB_RPB_BUFF_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TPB_RPB_BUFF_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TPB_RPB_BUFF_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TSCD_BRANCH_TABLE_CFG 0x000B2218 /* Reset: POR */ +#define I40E_TSCD_BRANCH_TABLE_CFG_ECC_EN_SHIFT 0 +#define I40E_TSCD_BRANCH_TABLE_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TSCD_BRANCH_TABLE_CFG_ECC_EN_SHIFT) +#define I40E_TSCD_BRANCH_TABLE_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TSCD_BRANCH_TABLE_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TSCD_BRANCH_TABLE_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TSCD_BRANCH_TABLE_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TSCD_BRANCH_TABLE_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TSCD_BRANCH_TABLE_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TSCD_BRANCH_TABLE_CFG_LS_FORCE_SHIFT 3 +#define I40E_TSCD_BRANCH_TABLE_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TSCD_BRANCH_TABLE_CFG_LS_FORCE_SHIFT) +#define I40E_TSCD_BRANCH_TABLE_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TSCD_BRANCH_TABLE_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TSCD_BRANCH_TABLE_CFG_LS_BYPASS_SHIFT) +#define I40E_TSCD_BRANCH_TABLE_CFG_MASK_INT_SHIFT 5 +#define I40E_TSCD_BRANCH_TABLE_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TSCD_BRANCH_TABLE_CFG_MASK_INT_SHIFT) +#define 
I40E_TSCD_BRANCH_TABLE_CFG_FIX_CNT_SHIFT 8 +#define I40E_TSCD_BRANCH_TABLE_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TSCD_BRANCH_TABLE_CFG_FIX_CNT_SHIFT) +#define I40E_TSCD_BRANCH_TABLE_CFG_ERR_CNT_SHIFT 9 +#define I40E_TSCD_BRANCH_TABLE_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TSCD_BRANCH_TABLE_CFG_ERR_CNT_SHIFT) +#define I40E_TSCD_BRANCH_TABLE_CFG_RME_SHIFT 12 +#define I40E_TSCD_BRANCH_TABLE_CFG_RME_MASK I40E_MASK(0x1, I40E_TSCD_BRANCH_TABLE_CFG_RME_SHIFT) +#define I40E_TSCD_BRANCH_TABLE_CFG_RM_SHIFT 16 +#define I40E_TSCD_BRANCH_TABLE_CFG_RM_MASK I40E_MASK(0xF, I40E_TSCD_BRANCH_TABLE_CFG_RM_SHIFT) + +#define I40E_TSCD_BRANCH_TABLE_STATUS 0x000B2230 /* Reset: POR */ +#define I40E_TSCD_BRANCH_TABLE_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TSCD_BRANCH_TABLE_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TSCD_BRANCH_TABLE_STATUS_ECC_ERR_SHIFT) +#define I40E_TSCD_BRANCH_TABLE_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TSCD_BRANCH_TABLE_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TSCD_BRANCH_TABLE_STATUS_ECC_FIX_SHIFT) +#define I40E_TSCD_BRANCH_TABLE_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TSCD_BRANCH_TABLE_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TSCD_BRANCH_TABLE_STATUS_INIT_DONE_SHIFT) +#define I40E_TSCD_BRANCH_TABLE_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TSCD_BRANCH_TABLE_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TSCD_BRANCH_TABLE_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TSCD_BW_LIMIT_TABLE_CFG 0x000B2204 /* Reset: POR */ +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_ECC_EN_SHIFT 0 +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TSCD_BW_LIMIT_TABLE_CFG_ECC_EN_SHIFT) +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TSCD_BW_LIMIT_TABLE_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TSCD_BW_LIMIT_TABLE_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_LS_FORCE_SHIFT 3 +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TSCD_BW_LIMIT_TABLE_CFG_LS_FORCE_SHIFT) +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TSCD_BW_LIMIT_TABLE_CFG_LS_BYPASS_SHIFT) +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_MASK_INT_SHIFT 5 +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TSCD_BW_LIMIT_TABLE_CFG_MASK_INT_SHIFT) +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_FIX_CNT_SHIFT 8 +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TSCD_BW_LIMIT_TABLE_CFG_FIX_CNT_SHIFT) +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_ERR_CNT_SHIFT 9 +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TSCD_BW_LIMIT_TABLE_CFG_ERR_CNT_SHIFT) +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_RME_A_SHIFT 12 +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_RME_A_MASK I40E_MASK(0x1, I40E_TSCD_BW_LIMIT_TABLE_CFG_RME_A_SHIFT) +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_RME_B_SHIFT 13 +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_RME_B_MASK I40E_MASK(0x1, I40E_TSCD_BW_LIMIT_TABLE_CFG_RME_B_SHIFT) +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_RM_A_SHIFT 16 +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_RM_A_MASK I40E_MASK(0xF, I40E_TSCD_BW_LIMIT_TABLE_CFG_RM_A_SHIFT) +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_RM_B_SHIFT 20 +#define I40E_TSCD_BW_LIMIT_TABLE_CFG_RM_B_MASK I40E_MASK(0xF, I40E_TSCD_BW_LIMIT_TABLE_CFG_RM_B_SHIFT) + +#define I40E_TSCD_BW_LIMIT_TABLE_STATUS 0x000B2228 /* Reset: POR */ +#define I40E_TSCD_BW_LIMIT_TABLE_STATUS_ECC_ERR_SHIFT 0 +#define 
I40E_TSCD_BW_LIMIT_TABLE_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TSCD_BW_LIMIT_TABLE_STATUS_ECC_ERR_SHIFT) +#define I40E_TSCD_BW_LIMIT_TABLE_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TSCD_BW_LIMIT_TABLE_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TSCD_BW_LIMIT_TABLE_STATUS_ECC_FIX_SHIFT) +#define I40E_TSCD_BW_LIMIT_TABLE_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TSCD_BW_LIMIT_TABLE_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TSCD_BW_LIMIT_TABLE_STATUS_INIT_DONE_SHIFT) +#define I40E_TSCD_BW_LIMIT_TABLE_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TSCD_BW_LIMIT_TABLE_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TSCD_BW_LIMIT_TABLE_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TSCD_ECC_COR_ERR 0x000B223c /* Reset: POR */ +#define I40E_TSCD_ECC_COR_ERR_CNT_SHIFT 0 +#define I40E_TSCD_ECC_COR_ERR_CNT_MASK I40E_MASK(0xFFF, I40E_TSCD_ECC_COR_ERR_CNT_SHIFT) + +#define I40E_TSCD_ECC_UNCOR_ERR 0x000B2238 /* Reset: POR */ +#define I40E_TSCD_ECC_UNCOR_ERR_CNT_SHIFT 0 +#define I40E_TSCD_ECC_UNCOR_ERR_CNT_MASK I40E_MASK(0xFFF, I40E_TSCD_ECC_UNCOR_ERR_CNT_SHIFT) + +#define I40E_TSCD_NEXT_NODE_TABLE_CFG 0x000B220C /* Reset: POR */ +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_ECC_EN_SHIFT 0 +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TSCD_NEXT_NODE_TABLE_CFG_ECC_EN_SHIFT) +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TSCD_NEXT_NODE_TABLE_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TSCD_NEXT_NODE_TABLE_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_LS_FORCE_SHIFT 3 +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TSCD_NEXT_NODE_TABLE_CFG_LS_FORCE_SHIFT) +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TSCD_NEXT_NODE_TABLE_CFG_LS_BYPASS_SHIFT) +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_MASK_INT_SHIFT 5 +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TSCD_NEXT_NODE_TABLE_CFG_MASK_INT_SHIFT) +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_FIX_CNT_SHIFT 8 +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TSCD_NEXT_NODE_TABLE_CFG_FIX_CNT_SHIFT) +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_ERR_CNT_SHIFT 9 +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TSCD_NEXT_NODE_TABLE_CFG_ERR_CNT_SHIFT) +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_RME_SHIFT 12 +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_RME_MASK I40E_MASK(0x1, I40E_TSCD_NEXT_NODE_TABLE_CFG_RME_SHIFT) +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_RM_SHIFT 16 +#define I40E_TSCD_NEXT_NODE_TABLE_CFG_RM_MASK I40E_MASK(0xF, I40E_TSCD_NEXT_NODE_TABLE_CFG_RM_SHIFT) + +#define I40E_TSCD_NEXT_NODE_TABLE_STATUS 0x000B222c /* Reset: POR */ +#define I40E_TSCD_NEXT_NODE_TABLE_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TSCD_NEXT_NODE_TABLE_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TSCD_NEXT_NODE_TABLE_STATUS_ECC_ERR_SHIFT) +#define I40E_TSCD_NEXT_NODE_TABLE_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TSCD_NEXT_NODE_TABLE_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TSCD_NEXT_NODE_TABLE_STATUS_ECC_FIX_SHIFT) +#define I40E_TSCD_NEXT_NODE_TABLE_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TSCD_NEXT_NODE_TABLE_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TSCD_NEXT_NODE_TABLE_STATUS_INIT_DONE_SHIFT) +#define I40E_TSCD_NEXT_NODE_TABLE_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TSCD_NEXT_NODE_TABLE_STATUS_GLOBAL_INIT_DONE_MASK 
I40E_MASK(0x1, I40E_TSCD_NEXT_NODE_TABLE_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TSCD_NODE_TABLE_CFG 0x000B2210 /* Reset: POR */ +#define I40E_TSCD_NODE_TABLE_CFG_ECC_EN_SHIFT 0 +#define I40E_TSCD_NODE_TABLE_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TSCD_NODE_TABLE_CFG_ECC_EN_SHIFT) +#define I40E_TSCD_NODE_TABLE_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TSCD_NODE_TABLE_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TSCD_NODE_TABLE_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TSCD_NODE_TABLE_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TSCD_NODE_TABLE_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TSCD_NODE_TABLE_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TSCD_NODE_TABLE_CFG_LS_FORCE_SHIFT 3 +#define I40E_TSCD_NODE_TABLE_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TSCD_NODE_TABLE_CFG_LS_FORCE_SHIFT) +#define I40E_TSCD_NODE_TABLE_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TSCD_NODE_TABLE_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TSCD_NODE_TABLE_CFG_LS_BYPASS_SHIFT) +#define I40E_TSCD_NODE_TABLE_CFG_MASK_INT_SHIFT 5 +#define I40E_TSCD_NODE_TABLE_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TSCD_NODE_TABLE_CFG_MASK_INT_SHIFT) +#define I40E_TSCD_NODE_TABLE_CFG_FIX_CNT_SHIFT 8 +#define I40E_TSCD_NODE_TABLE_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TSCD_NODE_TABLE_CFG_FIX_CNT_SHIFT) +#define I40E_TSCD_NODE_TABLE_CFG_ERR_CNT_SHIFT 9 +#define I40E_TSCD_NODE_TABLE_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TSCD_NODE_TABLE_CFG_ERR_CNT_SHIFT) +#define I40E_TSCD_NODE_TABLE_CFG_RME_SHIFT 12 +#define I40E_TSCD_NODE_TABLE_CFG_RME_MASK I40E_MASK(0x1, I40E_TSCD_NODE_TABLE_CFG_RME_SHIFT) +#define I40E_TSCD_NODE_TABLE_CFG_RM_SHIFT 16 +#define I40E_TSCD_NODE_TABLE_CFG_RM_MASK I40E_MASK(0xF, I40E_TSCD_NODE_TABLE_CFG_RM_SHIFT) + +#define I40E_TSCD_NODE_TABLE_STATUS 0x000B2220 /* Reset: POR */ +#define I40E_TSCD_NODE_TABLE_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TSCD_NODE_TABLE_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TSCD_NODE_TABLE_STATUS_ECC_ERR_SHIFT) +#define I40E_TSCD_NODE_TABLE_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TSCD_NODE_TABLE_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TSCD_NODE_TABLE_STATUS_ECC_FIX_SHIFT) +#define I40E_TSCD_NODE_TABLE_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TSCD_NODE_TABLE_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TSCD_NODE_TABLE_STATUS_INIT_DONE_SHIFT) +#define I40E_TSCD_NODE_TABLE_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TSCD_NODE_TABLE_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TSCD_NODE_TABLE_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TSCD_RL_MAP_TABLE_CFG 0x000B2214 /* Reset: POR */ +#define I40E_TSCD_RL_MAP_TABLE_CFG_ECC_EN_SHIFT 0 +#define I40E_TSCD_RL_MAP_TABLE_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TSCD_RL_MAP_TABLE_CFG_ECC_EN_SHIFT) +#define I40E_TSCD_RL_MAP_TABLE_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TSCD_RL_MAP_TABLE_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TSCD_RL_MAP_TABLE_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TSCD_RL_MAP_TABLE_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TSCD_RL_MAP_TABLE_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TSCD_RL_MAP_TABLE_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TSCD_RL_MAP_TABLE_CFG_LS_FORCE_SHIFT 3 +#define I40E_TSCD_RL_MAP_TABLE_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TSCD_RL_MAP_TABLE_CFG_LS_FORCE_SHIFT) +#define I40E_TSCD_RL_MAP_TABLE_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TSCD_RL_MAP_TABLE_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TSCD_RL_MAP_TABLE_CFG_LS_BYPASS_SHIFT) +#define I40E_TSCD_RL_MAP_TABLE_CFG_MASK_INT_SHIFT 5 +#define I40E_TSCD_RL_MAP_TABLE_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TSCD_RL_MAP_TABLE_CFG_MASK_INT_SHIFT) +#define I40E_TSCD_RL_MAP_TABLE_CFG_FIX_CNT_SHIFT 8 +#define 
I40E_TSCD_RL_MAP_TABLE_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TSCD_RL_MAP_TABLE_CFG_FIX_CNT_SHIFT) +#define I40E_TSCD_RL_MAP_TABLE_CFG_ERR_CNT_SHIFT 9 +#define I40E_TSCD_RL_MAP_TABLE_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TSCD_RL_MAP_TABLE_CFG_ERR_CNT_SHIFT) +#define I40E_TSCD_RL_MAP_TABLE_CFG_RME_SHIFT 12 +#define I40E_TSCD_RL_MAP_TABLE_CFG_RME_MASK I40E_MASK(0x1, I40E_TSCD_RL_MAP_TABLE_CFG_RME_SHIFT) +#define I40E_TSCD_RL_MAP_TABLE_CFG_RM_SHIFT 16 +#define I40E_TSCD_RL_MAP_TABLE_CFG_RM_MASK I40E_MASK(0xF, I40E_TSCD_RL_MAP_TABLE_CFG_RM_SHIFT) + +#define I40E_TSCD_RL_MAP_TABLE_STATUS 0x000B2224 /* Reset: POR */ +#define I40E_TSCD_RL_MAP_TABLE_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TSCD_RL_MAP_TABLE_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TSCD_RL_MAP_TABLE_STATUS_ECC_ERR_SHIFT) +#define I40E_TSCD_RL_MAP_TABLE_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TSCD_RL_MAP_TABLE_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TSCD_RL_MAP_TABLE_STATUS_ECC_FIX_SHIFT) +#define I40E_TSCD_RL_MAP_TABLE_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TSCD_RL_MAP_TABLE_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TSCD_RL_MAP_TABLE_STATUS_INIT_DONE_SHIFT) +#define I40E_TSCD_RL_MAP_TABLE_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TSCD_RL_MAP_TABLE_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TSCD_RL_MAP_TABLE_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG 0x000B2200 /* Reset: POR */ +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_ECC_EN_SHIFT 0 +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_ECC_EN_SHIFT) +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_ECC_INVERT_1_SHIFT) +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_ECC_INVERT_2_SHIFT) +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_LS_FORCE_SHIFT 3 +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_LS_FORCE_SHIFT) +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_LS_BYPASS_SHIFT 4 +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_LS_BYPASS_SHIFT) +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_MASK_INT_SHIFT 5 +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_MASK_INT_SHIFT) +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_FIX_CNT_SHIFT 8 +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_FIX_CNT_SHIFT) +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_ERR_CNT_SHIFT 9 +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_ERR_CNT_SHIFT) +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_RME_SHIFT 12 +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_RME_MASK I40E_MASK(0x1, I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_RME_SHIFT) +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_RM_SHIFT 16 +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_RM_MASK I40E_MASK(0xF, I40E_TSCD_SHARED_BW_LIMIT_TABLE_CFG_RM_SHIFT) + +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_STATUS 0x000B2234 /* Reset: POR */ +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_STATUS_ECC_ERR_SHIFT 0 +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_TSCD_SHARED_BW_LIMIT_TABLE_STATUS_ECC_ERR_SHIFT) +#define 
I40E_TSCD_SHARED_BW_LIMIT_TABLE_STATUS_ECC_FIX_SHIFT 1 +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_TSCD_SHARED_BW_LIMIT_TABLE_STATUS_ECC_FIX_SHIFT) +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_STATUS_INIT_DONE_SHIFT 2 +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_TSCD_SHARED_BW_LIMIT_TABLE_STATUS_INIT_DONE_SHIFT) +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_TSCD_SHARED_BW_LIMIT_TABLE_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_TSCD_SHARED_BW_LIMIT_TABLE_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_TXDBG_GL_CNTRL 0x000BC000 /* Reset: CORER */ +#define I40E_TXDBG_GL_CNTRL_TXDBG_MODE_SHIFT 0 +#define I40E_TXDBG_GL_CNTRL_TXDBG_MODE_MASK I40E_MASK(0x7, I40E_TXDBG_GL_CNTRL_TXDBG_MODE_SHIFT) + +#define I40E_TXDBG_RD_ENTITY 0x000BC004 /* Reset: CORER */ +#define I40E_TXDBG_RD_ENTITY_RD_LOG_SHIFT 0 +#define I40E_TXDBG_RD_ENTITY_RD_LOG_MASK I40E_MASK(0xFFFFFFFF, I40E_TXDBG_RD_ENTITY_RD_LOG_SHIFT) + +#define I40E_TXDBG_RD_ENTITY_CNTRL 0x000BC008 /* Reset: CORER */ +#define I40E_TXDBG_RD_ENTITY_CNTRL_RD_ENTITY_NUM_SHIFT 0 +#define I40E_TXDBG_RD_ENTITY_CNTRL_RD_ENTITY_NUM_MASK I40E_MASK(0xFFF, I40E_TXDBG_RD_ENTITY_CNTRL_RD_ENTITY_NUM_SHIFT) + +#define I40E_TXUPDBG_ITR_CAUSE_CTL 0x000E0018 /* Reset: CORER */ +#define I40E_TXUPDBG_ITR_CAUSE_CTL_FILTER_FLOW_EN_SHIFT 0 +#define I40E_TXUPDBG_ITR_CAUSE_CTL_FILTER_FLOW_EN_MASK I40E_MASK(0x1, I40E_TXUPDBG_ITR_CAUSE_CTL_FILTER_FLOW_EN_SHIFT) +#define I40E_TXUPDBG_ITR_CAUSE_CTL_FLOW_ID_SHIFT 1 +#define I40E_TXUPDBG_ITR_CAUSE_CTL_FLOW_ID_MASK I40E_MASK(0xFFF, I40E_TXUPDBG_ITR_CAUSE_CTL_FLOW_ID_SHIFT) +#define I40E_TXUPDBG_ITR_CAUSE_CTL_EVENT_ID_SHIFT 13 +#define I40E_TXUPDBG_ITR_CAUSE_CTL_EVENT_ID_MASK I40E_MASK(0x7, I40E_TXUPDBG_ITR_CAUSE_CTL_EVENT_ID_SHIFT) + +#define I40E_TXUPDBG_ITR_DONE_CTL 0x000E0020 /* Reset: CORER */ +#define I40E_TXUPDBG_ITR_DONE_CTL_FILTER_FLOW_EN_SHIFT 0 +#define I40E_TXUPDBG_ITR_DONE_CTL_FILTER_FLOW_EN_MASK I40E_MASK(0x1, I40E_TXUPDBG_ITR_DONE_CTL_FILTER_FLOW_EN_SHIFT) +#define I40E_TXUPDBG_ITR_DONE_CTL_FLOW_ID_SHIFT 1 +#define I40E_TXUPDBG_ITR_DONE_CTL_FLOW_ID_MASK I40E_MASK(0xFFF, I40E_TXUPDBG_ITR_DONE_CTL_FLOW_ID_SHIFT) +#define I40E_TXUPDBG_ITR_DONE_CTL_EVENT_ID_SHIFT 13 +#define I40E_TXUPDBG_ITR_DONE_CTL_EVENT_ID_MASK I40E_MASK(0x7, I40E_TXUPDBG_ITR_DONE_CTL_EVENT_ID_SHIFT) + +#define I40E_TXUPDBG_ITR_EXP_CTL 0x000E001C /* Reset: CORER */ +#define I40E_TXUPDBG_ITR_EXP_CTL_FILTER_FLOW_EN_SHIFT 0 +#define I40E_TXUPDBG_ITR_EXP_CTL_FILTER_FLOW_EN_MASK I40E_MASK(0x1, I40E_TXUPDBG_ITR_EXP_CTL_FILTER_FLOW_EN_SHIFT) +#define I40E_TXUPDBG_ITR_EXP_CTL_FLOW_ID_SHIFT 1 +#define I40E_TXUPDBG_ITR_EXP_CTL_FLOW_ID_MASK I40E_MASK(0xFFF, I40E_TXUPDBG_ITR_EXP_CTL_FLOW_ID_SHIFT) +#define I40E_TXUPDBG_ITR_EXP_CTL_EVENT_ID_SHIFT 13 +#define I40E_TXUPDBG_ITR_EXP_CTL_EVENT_ID_MASK I40E_MASK(0x7, I40E_TXUPDBG_ITR_EXP_CTL_EVENT_ID_SHIFT) + +#define I40E_TXUPDBG_MAC0IN_CTL 0x000E2008 /* Reset: CORER */ +#define I40E_TXUPDBG_MAC0IN_CTL_FILTER_FLOW_EN_SHIFT 0 +#define I40E_TXUPDBG_MAC0IN_CTL_FILTER_FLOW_EN_MASK I40E_MASK(0x1, I40E_TXUPDBG_MAC0IN_CTL_FILTER_FLOW_EN_SHIFT) +#define I40E_TXUPDBG_MAC0IN_CTL_FLOW_ID_SHIFT 1 +#define I40E_TXUPDBG_MAC0IN_CTL_FLOW_ID_MASK I40E_MASK(0xFFF, I40E_TXUPDBG_MAC0IN_CTL_FLOW_ID_SHIFT) +#define I40E_TXUPDBG_MAC0IN_CTL_EVENT_ID_SHIFT 13 +#define I40E_TXUPDBG_MAC0IN_CTL_EVENT_ID_MASK I40E_MASK(0x7, I40E_TXUPDBG_MAC0IN_CTL_EVENT_ID_SHIFT) + +#define I40E_TXUPDBG_MAC1IN_CTL 0x000E200C /* Reset: CORER */ +#define 
I40E_TXUPDBG_MAC1IN_CTL_FILTER_FLOW_EN_SHIFT 0 +#define I40E_TXUPDBG_MAC1IN_CTL_FILTER_FLOW_EN_MASK I40E_MASK(0x1, I40E_TXUPDBG_MAC1IN_CTL_FILTER_FLOW_EN_SHIFT) +#define I40E_TXUPDBG_MAC1IN_CTL_FLOW_ID_SHIFT 1 +#define I40E_TXUPDBG_MAC1IN_CTL_FLOW_ID_MASK I40E_MASK(0xFFF, I40E_TXUPDBG_MAC1IN_CTL_FLOW_ID_SHIFT) +#define I40E_TXUPDBG_MAC1IN_CTL_EVENT_ID_SHIFT 13 +#define I40E_TXUPDBG_MAC1IN_CTL_EVENT_ID_MASK I40E_MASK(0x7, I40E_TXUPDBG_MAC1IN_CTL_EVENT_ID_SHIFT) + +#define I40E_TXUPDBG_MAC2IN_CTL 0x000E2010 /* Reset: CORER */ +#define I40E_TXUPDBG_MAC2IN_CTL_FILTER_FLOW_EN_SHIFT 0 +#define I40E_TXUPDBG_MAC2IN_CTL_FILTER_FLOW_EN_MASK I40E_MASK(0x1, I40E_TXUPDBG_MAC2IN_CTL_FILTER_FLOW_EN_SHIFT) +#define I40E_TXUPDBG_MAC2IN_CTL_FLOW_ID_SHIFT 1 +#define I40E_TXUPDBG_MAC2IN_CTL_FLOW_ID_MASK I40E_MASK(0xFFF, I40E_TXUPDBG_MAC2IN_CTL_FLOW_ID_SHIFT) +#define I40E_TXUPDBG_MAC2IN_CTL_EVENT_ID_SHIFT 13 +#define I40E_TXUPDBG_MAC2IN_CTL_EVENT_ID_MASK I40E_MASK(0x7, I40E_TXUPDBG_MAC2IN_CTL_EVENT_ID_SHIFT) + +#define I40E_TXUPDBG_MAC3IN_CTL 0x000E2014 /* Reset: CORER */ +#define I40E_TXUPDBG_MAC3IN_CTL_FILTER_FLOW_EN_SHIFT 0 +#define I40E_TXUPDBG_MAC3IN_CTL_FILTER_FLOW_EN_MASK I40E_MASK(0x1, I40E_TXUPDBG_MAC3IN_CTL_FILTER_FLOW_EN_SHIFT) +#define I40E_TXUPDBG_MAC3IN_CTL_FLOW_ID_SHIFT 1 +#define I40E_TXUPDBG_MAC3IN_CTL_FLOW_ID_MASK I40E_MASK(0xFFF, I40E_TXUPDBG_MAC3IN_CTL_FLOW_ID_SHIFT) +#define I40E_TXUPDBG_MAC3IN_CTL_EVENT_ID_SHIFT 13 +#define I40E_TXUPDBG_MAC3IN_CTL_EVENT_ID_MASK I40E_MASK(0x7, I40E_TXUPDBG_MAC3IN_CTL_EVENT_ID_SHIFT) + +#define I40E_TXUPDBG_MSIX_CTL 0x000BC00C /* Reset: CORER */ +#define I40E_TXUPDBG_MSIX_CTL_FILTER_FLOW_EN_SHIFT 0 +#define I40E_TXUPDBG_MSIX_CTL_FILTER_FLOW_EN_MASK I40E_MASK(0x1, I40E_TXUPDBG_MSIX_CTL_FILTER_FLOW_EN_SHIFT) +#define I40E_TXUPDBG_MSIX_CTL_FLOW_ID_SHIFT 1 +#define I40E_TXUPDBG_MSIX_CTL_FLOW_ID_MASK I40E_MASK(0xFFF, I40E_TXUPDBG_MSIX_CTL_FLOW_ID_SHIFT) +#define I40E_TXUPDBG_MSIX_CTL_EVENT_ID_SHIFT 13 +#define I40E_TXUPDBG_MSIX_CTL_EVENT_ID_MASK I40E_MASK(0x7, I40E_TXUPDBG_MSIX_CTL_EVENT_ID_SHIFT) + +#define I40E_TXUPDBG_Q_SCHED_CTL 0x000E000C /* Reset: CORER */ +#define I40E_TXUPDBG_Q_SCHED_CTL_FILTER_FLOW_EN_SHIFT 0 +#define I40E_TXUPDBG_Q_SCHED_CTL_FILTER_FLOW_EN_MASK I40E_MASK(0x1, I40E_TXUPDBG_Q_SCHED_CTL_FILTER_FLOW_EN_SHIFT) +#define I40E_TXUPDBG_Q_SCHED_CTL_FLOW_ID_SHIFT 1 +#define I40E_TXUPDBG_Q_SCHED_CTL_FLOW_ID_MASK I40E_MASK(0xFFF, I40E_TXUPDBG_Q_SCHED_CTL_FLOW_ID_SHIFT) +#define I40E_TXUPDBG_Q_SCHED_CTL_FLOW_CHOOSER_SHIFT 13 +#define I40E_TXUPDBG_Q_SCHED_CTL_FLOW_CHOOSER_MASK I40E_MASK(0x3, I40E_TXUPDBG_Q_SCHED_CTL_FLOW_CHOOSER_SHIFT) +#define I40E_TXUPDBG_Q_SCHED_CTL_EVENT_ID_SHIFT 15 +#define I40E_TXUPDBG_Q_SCHED_CTL_EVENT_ID_MASK I40E_MASK(0x7, I40E_TXUPDBG_Q_SCHED_CTL_EVENT_ID_SHIFT) + +#define I40E_TXUPDBG_QG_SCHED_CTL 0x000E0008 /* Reset: CORER */ +#define I40E_TXUPDBG_QG_SCHED_CTL_FILTER_FLOW_EN_SHIFT 0 +#define I40E_TXUPDBG_QG_SCHED_CTL_FILTER_FLOW_EN_MASK I40E_MASK(0x1, I40E_TXUPDBG_QG_SCHED_CTL_FILTER_FLOW_EN_SHIFT) +#define I40E_TXUPDBG_QG_SCHED_CTL_FLOW_ID_SHIFT 1 +#define I40E_TXUPDBG_QG_SCHED_CTL_FLOW_ID_MASK I40E_MASK(0xFFF, I40E_TXUPDBG_QG_SCHED_CTL_FLOW_ID_SHIFT) +#define I40E_TXUPDBG_QG_SCHED_CTL_EVENT_ID_SHIFT 13 +#define I40E_TXUPDBG_QG_SCHED_CTL_EVENT_ID_MASK I40E_MASK(0x7, I40E_TXUPDBG_QG_SCHED_CTL_EVENT_ID_SHIFT) + +#define I40E_TXUPDBG_TAIL_BUMP_CTL 0x000E0000 /* Reset: CORER */ +#define I40E_TXUPDBG_TAIL_BUMP_CTL_FILTER_FLOW_EN_SHIFT 0 +#define I40E_TXUPDBG_TAIL_BUMP_CTL_FILTER_FLOW_EN_MASK I40E_MASK(0x1, 
I40E_TXUPDBG_TAIL_BUMP_CTL_FILTER_FLOW_EN_SHIFT) +#define I40E_TXUPDBG_TAIL_BUMP_CTL_FLOW_ID_SHIFT 1 +#define I40E_TXUPDBG_TAIL_BUMP_CTL_FLOW_ID_MASK I40E_MASK(0xFFF, I40E_TXUPDBG_TAIL_BUMP_CTL_FLOW_ID_SHIFT) +#define I40E_TXUPDBG_TAIL_BUMP_CTL_EVENT_ID_SHIFT 13 +#define I40E_TXUPDBG_TAIL_BUMP_CTL_EVENT_ID_MASK I40E_MASK(0x7, I40E_TXUPDBG_TAIL_BUMP_CTL_EVENT_ID_SHIFT) + +#define I40E_TXUPDBG_TCBIN_CTL 0x000E0010 /* Reset: CORER */ +#define I40E_TXUPDBG_TCBIN_CTL_FILTER_FLOW_EN_SHIFT 0 +#define I40E_TXUPDBG_TCBIN_CTL_FILTER_FLOW_EN_MASK I40E_MASK(0x1, I40E_TXUPDBG_TCBIN_CTL_FILTER_FLOW_EN_SHIFT) +#define I40E_TXUPDBG_TCBIN_CTL_FLOW_ID_SHIFT 1 +#define I40E_TXUPDBG_TCBIN_CTL_FLOW_ID_MASK I40E_MASK(0xFFF, I40E_TXUPDBG_TCBIN_CTL_FLOW_ID_SHIFT) +#define I40E_TXUPDBG_TCBIN_CTL_FLOW_CHOOSER_SHIFT 13 +#define I40E_TXUPDBG_TCBIN_CTL_FLOW_CHOOSER_MASK I40E_MASK(0x3, I40E_TXUPDBG_TCBIN_CTL_FLOW_CHOOSER_SHIFT) +#define I40E_TXUPDBG_TCBIN_CTL_EVENT_ID_SHIFT 15 +#define I40E_TXUPDBG_TCBIN_CTL_EVENT_ID_MASK I40E_MASK(0x7, I40E_TXUPDBG_TCBIN_CTL_EVENT_ID_SHIFT) + +#define I40E_TXUPDBG_TDPUIN_CTL 0x000E2000 /* Reset: CORER */ +#define I40E_TXUPDBG_TDPUIN_CTL_FILTER_FLOW_EN_SHIFT 0 +#define I40E_TXUPDBG_TDPUIN_CTL_FILTER_FLOW_EN_MASK I40E_MASK(0x1, I40E_TXUPDBG_TDPUIN_CTL_FILTER_FLOW_EN_SHIFT) +#define I40E_TXUPDBG_TDPUIN_CTL_FLOW_ID_SHIFT 1 +#define I40E_TXUPDBG_TDPUIN_CTL_FLOW_ID_MASK I40E_MASK(0xFFF, I40E_TXUPDBG_TDPUIN_CTL_FLOW_ID_SHIFT) +#define I40E_TXUPDBG_TDPUIN_CTL_FLOW_CHOOSER_SHIFT 13 +#define I40E_TXUPDBG_TDPUIN_CTL_FLOW_CHOOSER_MASK I40E_MASK(0x3, I40E_TXUPDBG_TDPUIN_CTL_FLOW_CHOOSER_SHIFT) +#define I40E_TXUPDBG_TDPUIN_CTL_EVENT_ID_SHIFT 15 +#define I40E_TXUPDBG_TDPUIN_CTL_EVENT_ID_MASK I40E_MASK(0x7, I40E_TXUPDBG_TDPUIN_CTL_EVENT_ID_SHIFT) + +#define I40E_TXUPDBG_TLAN2_CTL 0x000E0014 /* Reset: CORER */ +#define I40E_TXUPDBG_TLAN2_CTL_FILTER_FLOW_EN_SHIFT 0 +#define I40E_TXUPDBG_TLAN2_CTL_FILTER_FLOW_EN_MASK I40E_MASK(0x1, I40E_TXUPDBG_TLAN2_CTL_FILTER_FLOW_EN_SHIFT) +#define I40E_TXUPDBG_TLAN2_CTL_FLOW_ID_SHIFT 1 +#define I40E_TXUPDBG_TLAN2_CTL_FLOW_ID_MASK I40E_MASK(0xFFF, I40E_TXUPDBG_TLAN2_CTL_FLOW_ID_SHIFT) +#define I40E_TXUPDBG_TLAN2_CTL_FLOW_CHOOSER_SHIFT 13 +#define I40E_TXUPDBG_TLAN2_CTL_FLOW_CHOOSER_MASK I40E_MASK(0x3, I40E_TXUPDBG_TLAN2_CTL_FLOW_CHOOSER_SHIFT) +#define I40E_TXUPDBG_TLAN2_CTL_EVENT_ID_A_SHIFT 15 +#define I40E_TXUPDBG_TLAN2_CTL_EVENT_ID_A_MASK I40E_MASK(0x7, I40E_TXUPDBG_TLAN2_CTL_EVENT_ID_A_SHIFT) +#define I40E_TXUPDBG_TLAN2_CTL_EVENT_ID_B_SHIFT 18 +#define I40E_TXUPDBG_TLAN2_CTL_EVENT_ID_B_MASK I40E_MASK(0x7, I40E_TXUPDBG_TLAN2_CTL_EVENT_ID_B_SHIFT) + +#define I40E_TXUPDBG_TPBIN_CTL 0x000E2004 /* Reset: CORER */ +#define I40E_TXUPDBG_TPBIN_CTL_FILTER_FLOW_EN_SHIFT 0 +#define I40E_TXUPDBG_TPBIN_CTL_FILTER_FLOW_EN_MASK I40E_MASK(0x1, I40E_TXUPDBG_TPBIN_CTL_FILTER_FLOW_EN_SHIFT) +#define I40E_TXUPDBG_TPBIN_CTL_FLOW_ID_SHIFT 1 +#define I40E_TXUPDBG_TPBIN_CTL_FLOW_ID_MASK I40E_MASK(0xFFF, I40E_TXUPDBG_TPBIN_CTL_FLOW_ID_SHIFT) +#define I40E_TXUPDBG_TPBIN_CTL_FLOW_CHOOSER_SHIFT 13 +#define I40E_TXUPDBG_TPBIN_CTL_FLOW_CHOOSER_MASK I40E_MASK(0x3, I40E_TXUPDBG_TPBIN_CTL_FLOW_CHOOSER_SHIFT) +#define I40E_TXUPDBG_TPBIN_CTL_EVENT_ID_SHIFT 15 +#define I40E_TXUPDBG_TPBIN_CTL_EVENT_ID_MASK I40E_MASK(0x7, I40E_TXUPDBG_TPBIN_CTL_EVENT_ID_SHIFT) + +#define I40E_TXUPDBG_WA_CTL 0x000E0004 /* Reset: CORER */ +#define I40E_TXUPDBG_WA_CTL_FILTER_FLOW_EN_SHIFT 0 +#define I40E_TXUPDBG_WA_CTL_FILTER_FLOW_EN_MASK I40E_MASK(0x1, I40E_TXUPDBG_WA_CTL_FILTER_FLOW_EN_SHIFT) +#define 
I40E_TXUPDBG_WA_CTL_FLOW_ID_SHIFT 1 +#define I40E_TXUPDBG_WA_CTL_FLOW_ID_MASK I40E_MASK(0xFFF, I40E_TXUPDBG_WA_CTL_FLOW_ID_SHIFT) +#define I40E_TXUPDBG_WA_CTL_EVENT_ID_A_SHIFT 13 +#define I40E_TXUPDBG_WA_CTL_EVENT_ID_A_MASK I40E_MASK(0x7, I40E_TXUPDBG_WA_CTL_EVENT_ID_A_SHIFT) +#define I40E_TXUPDBG_WA_CTL_EVENT_ID_B_SHIFT 16 +#define I40E_TXUPDBG_WA_CTL_EVENT_ID_B_MASK I40E_MASK(0x7, I40E_TXUPDBG_WA_CTL_EVENT_ID_B_SHIFT) + +#define I40E_WAIT_CMD_BUF_MEM_CFG 0x000AE088 /* Reset: POR */ +#define I40E_WAIT_CMD_BUF_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_WAIT_CMD_BUF_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_WAIT_CMD_BUF_MEM_CFG_ECC_EN_SHIFT) +#define I40E_WAIT_CMD_BUF_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_WAIT_CMD_BUF_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_WAIT_CMD_BUF_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_WAIT_CMD_BUF_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_WAIT_CMD_BUF_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_WAIT_CMD_BUF_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_WAIT_CMD_BUF_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_WAIT_CMD_BUF_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_WAIT_CMD_BUF_MEM_CFG_LS_FORCE_SHIFT) +#define I40E_WAIT_CMD_BUF_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_WAIT_CMD_BUF_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_WAIT_CMD_BUF_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_WAIT_CMD_BUF_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_WAIT_CMD_BUF_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_WAIT_CMD_BUF_MEM_CFG_MASK_INT_SHIFT) +#define I40E_WAIT_CMD_BUF_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_WAIT_CMD_BUF_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_WAIT_CMD_BUF_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_WAIT_CMD_BUF_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_WAIT_CMD_BUF_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_WAIT_CMD_BUF_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_WAIT_CMD_BUF_MEM_CFG_RME_SHIFT 12 +#define I40E_WAIT_CMD_BUF_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_WAIT_CMD_BUF_MEM_CFG_RME_SHIFT) +#define I40E_WAIT_CMD_BUF_MEM_CFG_RM_SHIFT 16 +#define I40E_WAIT_CMD_BUF_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_WAIT_CMD_BUF_MEM_CFG_RM_SHIFT) + +#define I40E_WAIT_CMD_BUF_MEM_STATUS 0x000AE08C /* Reset: POR */ +#define I40E_WAIT_CMD_BUF_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_WAIT_CMD_BUF_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_WAIT_CMD_BUF_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_WAIT_CMD_BUF_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_WAIT_CMD_BUF_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_WAIT_CMD_BUF_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_WAIT_CMD_BUF_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_WAIT_CMD_BUF_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_WAIT_CMD_BUF_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_WAIT_CMD_BUF_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_WAIT_CMD_BUF_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_WAIT_CMD_BUF_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_WAIT_CMD_MNG_MEM_CFG 0x000AE084 /* Reset: POR */ +#define I40E_WAIT_CMD_MNG_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_WAIT_CMD_MNG_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_WAIT_CMD_MNG_MEM_CFG_ECC_EN_SHIFT) +#define I40E_WAIT_CMD_MNG_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_WAIT_CMD_MNG_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_WAIT_CMD_MNG_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_WAIT_CMD_MNG_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_WAIT_CMD_MNG_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_WAIT_CMD_MNG_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_WAIT_CMD_MNG_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_WAIT_CMD_MNG_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_WAIT_CMD_MNG_MEM_CFG_LS_FORCE_SHIFT) +#define 
I40E_WAIT_CMD_MNG_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_WAIT_CMD_MNG_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_WAIT_CMD_MNG_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_WAIT_CMD_MNG_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_WAIT_CMD_MNG_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_WAIT_CMD_MNG_MEM_CFG_MASK_INT_SHIFT) +#define I40E_WAIT_CMD_MNG_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_WAIT_CMD_MNG_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_WAIT_CMD_MNG_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_WAIT_CMD_MNG_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_WAIT_CMD_MNG_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_WAIT_CMD_MNG_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_WAIT_CMD_MNG_MEM_CFG_RME_SHIFT 12 +#define I40E_WAIT_CMD_MNG_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_WAIT_CMD_MNG_MEM_CFG_RME_SHIFT) +#define I40E_WAIT_CMD_MNG_MEM_CFG_RM_SHIFT 16 +#define I40E_WAIT_CMD_MNG_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_WAIT_CMD_MNG_MEM_CFG_RM_SHIFT) + +#define I40E_WAIT_CMD_MNG_MEM_STATUS 0x000AE090 /* Reset: POR */ +#define I40E_WAIT_CMD_MNG_MEM_STATUS_ECC_ERR_SHIFT 0 +#define I40E_WAIT_CMD_MNG_MEM_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_WAIT_CMD_MNG_MEM_STATUS_ECC_ERR_SHIFT) +#define I40E_WAIT_CMD_MNG_MEM_STATUS_ECC_FIX_SHIFT 1 +#define I40E_WAIT_CMD_MNG_MEM_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_WAIT_CMD_MNG_MEM_STATUS_ECC_FIX_SHIFT) +#define I40E_WAIT_CMD_MNG_MEM_STATUS_INIT_DONE_SHIFT 2 +#define I40E_WAIT_CMD_MNG_MEM_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_WAIT_CMD_MNG_MEM_STATUS_INIT_DONE_SHIFT) +#define I40E_WAIT_CMD_MNG_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_WAIT_CMD_MNG_MEM_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_WAIT_CMD_MNG_MEM_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_WUC_ECC_COR_ERR 0x0006E8AC /* Reset: POR */ +#define I40E_WUC_ECC_COR_ERR_CNT_SHIFT 0 +#define I40E_WUC_ECC_COR_ERR_CNT_MASK I40E_MASK(0xFFF, I40E_WUC_ECC_COR_ERR_CNT_SHIFT) + +#define I40E_WUC_ECC_UNCOR_ERR 0x0006E8A8 /* Reset: POR */ +#define I40E_WUC_ECC_UNCOR_ERR_CNT_SHIFT 0 +#define I40E_WUC_ECC_UNCOR_ERR_CNT_MASK I40E_MASK(0xFFF, I40E_WUC_ECC_UNCOR_ERR_CNT_SHIFT) + +#define I40E_WUC_SP_FLEX_CFG 0x0006E898 /* Reset: POR */ +#define I40E_WUC_SP_FLEX_CFG_ECC_EN_SHIFT 0 +#define I40E_WUC_SP_FLEX_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_CFG_ECC_EN_SHIFT) +#define I40E_WUC_SP_FLEX_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_WUC_SP_FLEX_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_CFG_ECC_INVERT_1_SHIFT) +#define I40E_WUC_SP_FLEX_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_WUC_SP_FLEX_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_CFG_ECC_INVERT_2_SHIFT) +#define I40E_WUC_SP_FLEX_CFG_LS_FORCE_SHIFT 3 +#define I40E_WUC_SP_FLEX_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_CFG_LS_FORCE_SHIFT) +#define I40E_WUC_SP_FLEX_CFG_LS_BYPASS_SHIFT 4 +#define I40E_WUC_SP_FLEX_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_CFG_LS_BYPASS_SHIFT) +#define I40E_WUC_SP_FLEX_CFG_MASK_INT_SHIFT 5 +#define I40E_WUC_SP_FLEX_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_CFG_MASK_INT_SHIFT) +#define I40E_WUC_SP_FLEX_CFG_FIX_CNT_SHIFT 8 +#define I40E_WUC_SP_FLEX_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_CFG_FIX_CNT_SHIFT) +#define I40E_WUC_SP_FLEX_CFG_ERR_CNT_SHIFT 9 +#define I40E_WUC_SP_FLEX_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_CFG_ERR_CNT_SHIFT) +#define I40E_WUC_SP_FLEX_CFG_RME_SHIFT 12 +#define I40E_WUC_SP_FLEX_CFG_RME_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_CFG_RME_SHIFT) +#define I40E_WUC_SP_FLEX_CFG_RM_SHIFT 16 +#define I40E_WUC_SP_FLEX_CFG_RM_MASK I40E_MASK(0xF, I40E_WUC_SP_FLEX_CFG_RM_SHIFT) + +#define 
I40E_WUC_SP_FLEX_MASK_MEM_CFG 0x0006E890 /* Reset: POR */ +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_ECC_EN_SHIFT 0 +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_ECC_EN_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_MASK_MEM_CFG_ECC_EN_SHIFT) +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_ECC_INVERT_1_SHIFT 1 +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_ECC_INVERT_1_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_MASK_MEM_CFG_ECC_INVERT_1_SHIFT) +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_ECC_INVERT_2_SHIFT 2 +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_ECC_INVERT_2_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_MASK_MEM_CFG_ECC_INVERT_2_SHIFT) +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_LS_FORCE_SHIFT 3 +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_LS_FORCE_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_MASK_MEM_CFG_LS_FORCE_SHIFT) +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_LS_BYPASS_SHIFT 4 +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_LS_BYPASS_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_MASK_MEM_CFG_LS_BYPASS_SHIFT) +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_MASK_INT_SHIFT 5 +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_MASK_INT_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_MASK_MEM_CFG_MASK_INT_SHIFT) +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_FIX_CNT_SHIFT 8 +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_FIX_CNT_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_MASK_MEM_CFG_FIX_CNT_SHIFT) +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_ERR_CNT_SHIFT 9 +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_ERR_CNT_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_MASK_MEM_CFG_ERR_CNT_SHIFT) +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_RME_SHIFT 12 +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_RME_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_MASK_MEM_CFG_RME_SHIFT) +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_RM_SHIFT 16 +#define I40E_WUC_SP_FLEX_MASK_MEM_CFG_RM_MASK I40E_MASK(0xF, I40E_WUC_SP_FLEX_MASK_MEM_CFG_RM_SHIFT) + +#define I40E_WUC_SP_FLEX_MASK_STATUS 0x0006E894 /* Reset: POR */ +#define I40E_WUC_SP_FLEX_MASK_STATUS_ECC_ERR_SHIFT 0 +#define I40E_WUC_SP_FLEX_MASK_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_MASK_STATUS_ECC_ERR_SHIFT) +#define I40E_WUC_SP_FLEX_MASK_STATUS_ECC_FIX_SHIFT 1 +#define I40E_WUC_SP_FLEX_MASK_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_MASK_STATUS_ECC_FIX_SHIFT) +#define I40E_WUC_SP_FLEX_MASK_STATUS_INIT_DONE_SHIFT 2 +#define I40E_WUC_SP_FLEX_MASK_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_MASK_STATUS_INIT_DONE_SHIFT) +#define I40E_WUC_SP_FLEX_MASK_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_WUC_SP_FLEX_MASK_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_MASK_STATUS_GLOBAL_INIT_DONE_SHIFT) + +#define I40E_WUC_SP_FLEX_STATUS 0x0006E89C /* Reset: POR */ +#define I40E_WUC_SP_FLEX_STATUS_ECC_ERR_SHIFT 0 +#define I40E_WUC_SP_FLEX_STATUS_ECC_ERR_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_STATUS_ECC_ERR_SHIFT) +#define I40E_WUC_SP_FLEX_STATUS_ECC_FIX_SHIFT 1 +#define I40E_WUC_SP_FLEX_STATUS_ECC_FIX_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_STATUS_ECC_FIX_SHIFT) +#define I40E_WUC_SP_FLEX_STATUS_INIT_DONE_SHIFT 2 +#define I40E_WUC_SP_FLEX_STATUS_INIT_DONE_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_STATUS_INIT_DONE_SHIFT) +#define I40E_WUC_SP_FLEX_STATUS_GLOBAL_INIT_DONE_SHIFT 3 +#define I40E_WUC_SP_FLEX_STATUS_GLOBAL_INIT_DONE_MASK I40E_MASK(0x1, I40E_WUC_SP_FLEX_STATUS_GLOBAL_INIT_DONE_SHIFT) + +/* PF - Internal Fuses */ + +/* PF - Interrupt Registers */ + +#define I40E_GLINT_CTL 0x0003F800 /* Reset: CORER */ +#define I40E_GLINT_CTL_DIS_AUTOMASK_PF0_SHIFT 0 +#define I40E_GLINT_CTL_DIS_AUTOMASK_PF0_MASK I40E_MASK(0x1, I40E_GLINT_CTL_DIS_AUTOMASK_PF0_SHIFT) +#define I40E_GLINT_CTL_DIS_AUTOMASK_VF0_SHIFT 1 +#define I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK 
I40E_MASK(0x1, I40E_GLINT_CTL_DIS_AUTOMASK_VF0_SHIFT) +#define I40E_GLINT_CTL_DIS_AUTOMASK_N_SHIFT 2 +#define I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK I40E_MASK(0x1, I40E_GLINT_CTL_DIS_AUTOMASK_N_SHIFT) + +#define I40E_PFINT_ITR0_STAT(_i) (0x00038200 + ((_i) * 128)) /* _i=0...2 */ /* Reset: PFR */ +#define I40E_PFINT_ITR0_STAT_MAX_INDEX 2 +#define I40E_PFINT_ITR0_STAT_ITR_EXPIRE_SHIFT 0 +#define I40E_PFINT_ITR0_STAT_ITR_EXPIRE_MASK I40E_MASK(0x1, I40E_PFINT_ITR0_STAT_ITR_EXPIRE_SHIFT) +#define I40E_PFINT_ITR0_STAT_EVENT_SHIFT 1 +#define I40E_PFINT_ITR0_STAT_EVENT_MASK I40E_MASK(0x1, I40E_PFINT_ITR0_STAT_EVENT_SHIFT) +#define I40E_PFINT_ITR0_STAT_ITR_TIME_SHIFT 2 +#define I40E_PFINT_ITR0_STAT_ITR_TIME_MASK I40E_MASK(0xFFF, I40E_PFINT_ITR0_STAT_ITR_TIME_SHIFT) + +#define I40E_PFINT_ITRN_STAT(_i, _INTPF) (0x00032000 + ((_i) * 2048 + (_INTPF) * 4)) /* _i=0...2, _INTPF=0...511 */ /* Reset: PFR */ +#define I40E_PFINT_ITRN_STAT_MAX_INDEX 2 +#define I40E_PFINT_ITRN_STAT_ITR_EXPIRE_SHIFT 0 +#define I40E_PFINT_ITRN_STAT_ITR_EXPIRE_MASK I40E_MASK(0x1, I40E_PFINT_ITRN_STAT_ITR_EXPIRE_SHIFT) +#define I40E_PFINT_ITRN_STAT_EVENT_SHIFT 1 +#define I40E_PFINT_ITRN_STAT_EVENT_MASK I40E_MASK(0x1, I40E_PFINT_ITRN_STAT_EVENT_SHIFT) +#define I40E_PFINT_ITRN_STAT_ITR_TIME_SHIFT 2 +#define I40E_PFINT_ITRN_STAT_ITR_TIME_MASK I40E_MASK(0xFFF, I40E_PFINT_ITRN_STAT_ITR_TIME_SHIFT) + +#define I40E_PFINT_RATE0_STAT 0x00038600 /* Reset: PFR */ +#define I40E_PFINT_RATE0_STAT_CREDIT_SHIFT 0 +#define I40E_PFINT_RATE0_STAT_CREDIT_MASK I40E_MASK(0xF, I40E_PFINT_RATE0_STAT_CREDIT_SHIFT) +#define I40E_PFINT_RATE0_STAT_INTRL_TIME_SHIFT 4 +#define I40E_PFINT_RATE0_STAT_INTRL_TIME_MASK I40E_MASK(0x3F, I40E_PFINT_RATE0_STAT_INTRL_TIME_SHIFT) + +#define I40E_PFINT_RATEN_STAT(_INTPF) (0x00036000 + ((_INTPF) * 4)) /* _i=0...511 */ /* Reset: PFR */ +#define I40E_PFINT_RATEN_STAT_MAX_INDEX 511 +#define I40E_PFINT_RATEN_STAT_CREDIT_SHIFT 0 +#define I40E_PFINT_RATEN_STAT_CREDIT_MASK I40E_MASK(0xF, I40E_PFINT_RATEN_STAT_CREDIT_SHIFT) +#define I40E_PFINT_RATEN_STAT_INTRL_TIME_SHIFT 4 +#define I40E_PFINT_RATEN_STAT_INTRL_TIME_MASK I40E_MASK(0x3F, I40E_PFINT_RATEN_STAT_INTRL_TIME_SHIFT) + +#define I40E_VFINT_ITR0_STAT(_i, _VF) (0x00029000 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...2, _VF=0...127 */ /* Reset: VFR */ +#define I40E_VFINT_ITR0_STAT_MAX_INDEX 2 +#define I40E_VFINT_ITR0_STAT_ITR_EXPIRE_SHIFT 0 +#define I40E_VFINT_ITR0_STAT_ITR_EXPIRE_MASK I40E_MASK(0x1, I40E_VFINT_ITR0_STAT_ITR_EXPIRE_SHIFT) +#define I40E_VFINT_ITR0_STAT_EVENT_SHIFT 1 +#define I40E_VFINT_ITR0_STAT_EVENT_MASK I40E_MASK(0x1, I40E_VFINT_ITR0_STAT_EVENT_SHIFT) +#define I40E_VFINT_ITR0_STAT_ITR_TIME_SHIFT 2 +#define I40E_VFINT_ITR0_STAT_ITR_TIME_MASK I40E_MASK(0xFFF, I40E_VFINT_ITR0_STAT_ITR_TIME_SHIFT) + +#define I40E_VFINT_ITRN_STAT(_i, _INTVF) (0x00022000 + ((_i) * 2048 + (_INTVF) * 4)) /* _i=0...2, _INTVF=0...511 */ /* Reset: VFR */ +#define I40E_VFINT_ITRN_STAT_MAX_INDEX 2 +#define I40E_VFINT_ITRN_STAT_ITR_EXPIRE_SHIFT 0 +#define I40E_VFINT_ITRN_STAT_ITR_EXPIRE_MASK I40E_MASK(0x1, I40E_VFINT_ITRN_STAT_ITR_EXPIRE_SHIFT) +#define I40E_VFINT_ITRN_STAT_EVENT_SHIFT 1 +#define I40E_VFINT_ITRN_STAT_EVENT_MASK I40E_MASK(0x1, I40E_VFINT_ITRN_STAT_EVENT_SHIFT) +#define I40E_VFINT_ITRN_STAT_ITR_TIME_SHIFT 2 +#define I40E_VFINT_ITRN_STAT_ITR_TIME_MASK I40E_MASK(0xFFF, I40E_VFINT_ITRN_STAT_ITR_TIME_SHIFT) + +#define I40E_VFINT_RATE0_STAT(_VF) (0x0002B000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */ +#define I40E_VFINT_RATE0_STAT_MAX_INDEX 127 +#define 
I40E_VFINT_RATE0_STAT_CREDIT_SHIFT 0 +#define I40E_VFINT_RATE0_STAT_CREDIT_MASK I40E_MASK(0xF, I40E_VFINT_RATE0_STAT_CREDIT_SHIFT) +#define I40E_VFINT_RATE0_STAT_INTRL_TIME_SHIFT 4 +#define I40E_VFINT_RATE0_STAT_INTRL_TIME_MASK I40E_MASK(0x3F, I40E_VFINT_RATE0_STAT_INTRL_TIME_SHIFT) + +#define I40E_VFINT_RATEN_STAT(_INTVF) (0x00026000 + ((_INTVF) * 4)) /* _i=0...511 */ /* Reset: VFR */ +#define I40E_VFINT_RATEN_STAT_MAX_INDEX 511 +#define I40E_VFINT_RATEN_STAT_CREDIT_SHIFT 0 +#define I40E_VFINT_RATEN_STAT_CREDIT_MASK I40E_MASK(0xF, I40E_VFINT_RATEN_STAT_CREDIT_SHIFT) +#define I40E_VFINT_RATEN_STAT_INTRL_TIME_SHIFT 4 +#define I40E_VFINT_RATEN_STAT_INTRL_TIME_MASK I40E_MASK(0x3F, I40E_VFINT_RATEN_STAT_INTRL_TIME_SHIFT) + +/* PF - LAN Transmit Receive Registers */ + +#define I40E_GLLAN_PF_RECIPE(_i) (0x0012A5E0 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLLAN_PF_RECIPE_MAX_INDEX 15 +#define I40E_GLLAN_PF_RECIPE_RECIPE_SHIFT 0 +#define I40E_GLLAN_PF_RECIPE_RECIPE_MASK I40E_MASK(0x3, I40E_GLLAN_PF_RECIPE_RECIPE_SHIFT) + +#define I40E_GLLAN_RCTL_1 0x0012A504 /* Reset: CORER */ +#define I40E_GLLAN_RCTL_1_RXMAX_EXPANSION_SHIFT 12 +#define I40E_GLLAN_RCTL_1_RXMAX_EXPANSION_MASK I40E_MASK(0xF, I40E_GLLAN_RCTL_1_RXMAX_EXPANSION_SHIFT) +#define I40E_GLLAN_RCTL_1_RXDWBCTL_SHIFT 16 +#define I40E_GLLAN_RCTL_1_RXDWBCTL_MASK I40E_MASK(0x1, I40E_GLLAN_RCTL_1_RXDWBCTL_SHIFT) +#define I40E_GLLAN_RCTL_1_RXDRDCTL_SHIFT 17 +#define I40E_GLLAN_RCTL_1_RXDRDCTL_MASK I40E_MASK(0x1, I40E_GLLAN_RCTL_1_RXDRDCTL_SHIFT) +#define I40E_GLLAN_RCTL_1_RXDESCRDROEN_SHIFT 18 +#define I40E_GLLAN_RCTL_1_RXDESCRDROEN_MASK I40E_MASK(0x1, I40E_GLLAN_RCTL_1_RXDESCRDROEN_SHIFT) +#define I40E_GLLAN_RCTL_1_RXDATAWRROEN_SHIFT 19 +#define I40E_GLLAN_RCTL_1_RXDATAWRROEN_MASK I40E_MASK(0x1, I40E_GLLAN_RCTL_1_RXDATAWRROEN_SHIFT) + +#define I40E_GLLAN_TCTL_0 0x000E6488 /* Reset: CORER */ +#define I40E_GLLAN_TCTL_0_TXLANTH_SHIFT 0 +#define I40E_GLLAN_TCTL_0_TXLANTH_MASK I40E_MASK(0x3F, I40E_GLLAN_TCTL_0_TXLANTH_SHIFT) +#define I40E_GLLAN_TCTL_0_TXDESCRDROEN_SHIFT 6 +#define I40E_GLLAN_TCTL_0_TXDESCRDROEN_MASK I40E_MASK(0x1, I40E_GLLAN_TCTL_0_TXDESCRDROEN_SHIFT) + +#define I40E_GLLAN_TCTL_1 0x000442F0 /* Reset: CORER */ +#define I40E_GLLAN_TCTL_1_TXMAX_EXPANSION_SHIFT 0 +#define I40E_GLLAN_TCTL_1_TXMAX_EXPANSION_MASK I40E_MASK(0xF, I40E_GLLAN_TCTL_1_TXMAX_EXPANSION_SHIFT) +#define I40E_GLLAN_TCTL_1_TXDATARDROEN_SHIFT 4 +#define I40E_GLLAN_TCTL_1_TXDATARDROEN_MASK I40E_MASK(0x1, I40E_GLLAN_TCTL_1_TXDATARDROEN_SHIFT) +#define I40E_GLLAN_TCTL_1_RCU_BYPASS_SHIFT 5 +#define I40E_GLLAN_TCTL_1_RCU_BYPASS_MASK I40E_MASK(0x1, I40E_GLLAN_TCTL_1_RCU_BYPASS_SHIFT) +#define I40E_GLLAN_TCTL_1_LSO_CACHE_BYPASS_SHIFT 6 +#define I40E_GLLAN_TCTL_1_LSO_CACHE_BYPASS_MASK I40E_MASK(0x1, I40E_GLLAN_TCTL_1_LSO_CACHE_BYPASS_SHIFT) +#define I40E_GLLAN_TCTL_1_DBG_WB_SEL_SHIFT 7 +#define I40E_GLLAN_TCTL_1_DBG_WB_SEL_MASK I40E_MASK(0xF, I40E_GLLAN_TCTL_1_DBG_WB_SEL_SHIFT) +#define I40E_GLLAN_TCTL_1_DBG_FORCE_RS_SHIFT 11 +#define I40E_GLLAN_TCTL_1_DBG_FORCE_RS_MASK I40E_MASK(0x1, I40E_GLLAN_TCTL_1_DBG_FORCE_RS_SHIFT) +#define I40E_GLLAN_TCTL_1_DBG_BYPASS_SHIFT 12 +#define I40E_GLLAN_TCTL_1_DBG_BYPASS_MASK I40E_MASK(0x3FF, I40E_GLLAN_TCTL_1_DBG_BYPASS_SHIFT) +#define I40E_GLLAN_TCTL_1_PRE_L2_ENA_SHIFT 22 +#define I40E_GLLAN_TCTL_1_PRE_L2_ENA_MASK I40E_MASK(0x1, I40E_GLLAN_TCTL_1_PRE_L2_ENA_SHIFT) +#define I40E_GLLAN_TCTL_1_UR_PROT_DIS_SHIFT 23 +#define I40E_GLLAN_TCTL_1_UR_PROT_DIS_MASK I40E_MASK(0x1, I40E_GLLAN_TCTL_1_UR_PROT_DIS_SHIFT) +#define 
I40E_GLLAN_TCTL_1_DBG_ECO_SHIFT 24 +#define I40E_GLLAN_TCTL_1_DBG_ECO_MASK I40E_MASK(0xFF, I40E_GLLAN_TCTL_1_DBG_ECO_SHIFT) + +#define I40E_GLLAN_TCTL_2 0x000AE080 /* Reset: CORER */ +#define I40E_GLLAN_TCTL_2_TXMAX_EXPANSION_SHIFT 0 +#define I40E_GLLAN_TCTL_2_TXMAX_EXPANSION_MASK I40E_MASK(0xF, I40E_GLLAN_TCTL_2_TXMAX_EXPANSION_SHIFT) +#define I40E_GLLAN_TCTL_2_STAT_DBG_ADDR_SHIFT 4 +#define I40E_GLLAN_TCTL_2_STAT_DBG_ADDR_MASK I40E_MASK(0x1F, I40E_GLLAN_TCTL_2_STAT_DBG_ADDR_SHIFT) +#define I40E_GLLAN_TCTL_2_STAT_DBG_DSEL_SHIFT 9 +#define I40E_GLLAN_TCTL_2_STAT_DBG_DSEL_MASK I40E_MASK(0x7, I40E_GLLAN_TCTL_2_STAT_DBG_DSEL_SHIFT) +#define I40E_GLLAN_TCTL_2_ECO_SHIFT 12 +#define I40E_GLLAN_TCTL_2_ECO_MASK I40E_MASK(0xFFFFF, I40E_GLLAN_TCTL_2_ECO_SHIFT) + +#define I40E_GLLAN_TXEMP_EN 0x000AE0AC /* Reset: CORER */ +#define I40E_GLLAN_TXEMP_EN_TXHOST_EN_SHIFT 0 +#define I40E_GLLAN_TXEMP_EN_TXHOST_EN_MASK I40E_MASK(0x1, I40E_GLLAN_TXEMP_EN_TXHOST_EN_SHIFT) + +#define I40E_GLLAN_TXHOST_EN 0x000A2208 /* Reset: CORER */ +#define I40E_GLLAN_TXHOST_EN_TXHOST_EN_SHIFT 0 +#define I40E_GLLAN_TXHOST_EN_TXHOST_EN_MASK I40E_MASK(0x1, I40E_GLLAN_TXHOST_EN_TXHOST_EN_SHIFT) + +#define I40E_GLRCU_INDIRECT_ADDRESS 0x001C0AA4 /* Reset: CORER */ +#define I40E_GLRCU_INDIRECT_ADDRESS_GLRCU_INDIRECT_ADDRESS_SHIFT 0 +#define I40E_GLRCU_INDIRECT_ADDRESS_GLRCU_INDIRECT_ADDRESS_MASK I40E_MASK(0xFFFF, I40E_GLRCU_INDIRECT_ADDRESS_GLRCU_INDIRECT_ADDRESS_SHIFT) + +#define I40E_GLRCU_INDIRECT_DATA(_i) (0x001C0AA8 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */ +#define I40E_GLRCU_INDIRECT_DATA_MAX_INDEX 1 +#define I40E_GLRCU_INDIRECT_DATA_GLRCU_INDIRECT_DATA_SHIFT 0 +#define I40E_GLRCU_INDIRECT_DATA_GLRCU_INDIRECT_DATA_MASK I40E_MASK(0xFFFFFFFF, I40E_GLRCU_INDIRECT_DATA_GLRCU_INDIRECT_DATA_SHIFT) + +#define I40E_GLRCU_LB_INDIRECT_ADDRESS 0x00269BD4 /* Reset: CORER */ +#define I40E_GLRCU_LB_INDIRECT_ADDRESS_GLRCU_INDIRECT_ADDRESS_SHIFT 0 +#define I40E_GLRCU_LB_INDIRECT_ADDRESS_GLRCU_INDIRECT_ADDRESS_MASK I40E_MASK(0xFFFF, I40E_GLRCU_LB_INDIRECT_ADDRESS_GLRCU_INDIRECT_ADDRESS_SHIFT) + +#define I40E_GLRCU_LB_INDIRECT_DATA(_i) (0x00269898 + ((_i) * 4)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLRCU_LB_INDIRECT_DATA_MAX_INDEX 3 +#define I40E_GLRCU_LB_INDIRECT_DATA_GLRCU_INDIRECT_DATA_SHIFT 0 +#define I40E_GLRCU_LB_INDIRECT_DATA_GLRCU_INDIRECT_DATA_MASK I40E_MASK(0xFFFFFFFF, I40E_GLRCU_LB_INDIRECT_DATA_GLRCU_INDIRECT_DATA_SHIFT) + +#define I40E_GLRCU_RX_INDIRECT_ADDRESS 0x00269BCC /* Reset: CORER */ +#define I40E_GLRCU_RX_INDIRECT_ADDRESS_GLRCU_INDIRECT_ADDRESS_SHIFT 0 +#define I40E_GLRCU_RX_INDIRECT_ADDRESS_GLRCU_INDIRECT_ADDRESS_MASK I40E_MASK(0xFFFF, I40E_GLRCU_RX_INDIRECT_ADDRESS_GLRCU_INDIRECT_ADDRESS_SHIFT) + +#define I40E_GLRCU_RX_INDIRECT_DATA(_i) (0x00269888 + ((_i) * 4)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLRCU_RX_INDIRECT_DATA_MAX_INDEX 3 +#define I40E_GLRCU_RX_INDIRECT_DATA_GLRCU_INDIRECT_DATA_SHIFT 0 +#define I40E_GLRCU_RX_INDIRECT_DATA_GLRCU_INDIRECT_DATA_MASK I40E_MASK(0xFFFFFFFF, I40E_GLRCU_RX_INDIRECT_DATA_GLRCU_INDIRECT_DATA_SHIFT) + +#define I40E_GLRDPU_INDIRECT_ADDRESS 0x00051040 /* Reset: CORER */ +#define I40E_GLRDPU_INDIRECT_ADDRESS_GLRCU_INDIRECT_ADDRESS_SHIFT 0 +#define I40E_GLRDPU_INDIRECT_ADDRESS_GLRCU_INDIRECT_ADDRESS_MASK I40E_MASK(0xFFFF, I40E_GLRDPU_INDIRECT_ADDRESS_GLRCU_INDIRECT_ADDRESS_SHIFT) + +#define I40E_GLRDPU_INDIRECT_DATA(_i) (0x00051044 + ((_i) * 4)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLRDPU_INDIRECT_DATA_MAX_INDEX 3 +#define 
I40E_GLRDPU_INDIRECT_DATA_GLRCU_INDIRECT_DATA_SHIFT 0 +#define I40E_GLRDPU_INDIRECT_DATA_GLRCU_INDIRECT_DATA_MASK I40E_MASK(0xFFFFFFFF, I40E_GLRDPU_INDIRECT_DATA_GLRCU_INDIRECT_DATA_SHIFT) + +#define I40E_GLTDPU_INDIRECT_ADDRESS 0x00044264 /* Reset: CORER */ +#define I40E_GLTDPU_INDIRECT_ADDRESS_GLRCU_INDIRECT_ADDRESS_SHIFT 0 +#define I40E_GLTDPU_INDIRECT_ADDRESS_GLRCU_INDIRECT_ADDRESS_MASK I40E_MASK(0xFFFF, I40E_GLTDPU_INDIRECT_ADDRESS_GLRCU_INDIRECT_ADDRESS_SHIFT) + +#define I40E_GLTDPU_INDIRECT_DATA(_i) (0x00044268 + ((_i) * 4)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLTDPU_INDIRECT_DATA_MAX_INDEX 3 +#define I40E_GLTDPU_INDIRECT_DATA_GLRCU_INDIRECT_DATA_SHIFT 0 +#define I40E_GLTDPU_INDIRECT_DATA_GLRCU_INDIRECT_DATA_MASK I40E_MASK(0xFFFFFFFF, I40E_GLTDPU_INDIRECT_DATA_GLRCU_INDIRECT_DATA_SHIFT) + +#define I40E_GLTLAN_MIN_MAX_MSS 0x000E64dC /* Reset: CORER */ +#define I40E_GLTLAN_MIN_MAX_MSS_MAHDL_SHIFT 0 +#define I40E_GLTLAN_MIN_MAX_MSS_MAHDL_MASK I40E_MASK(0x3FFF, I40E_GLTLAN_MIN_MAX_MSS_MAHDL_SHIFT) +#define I40E_GLTLAN_MIN_MAX_MSS_MIHDL_SHIFT 16 +#define I40E_GLTLAN_MIN_MAX_MSS_MIHDL_MASK I40E_MASK(0x3FF, I40E_GLTLAN_MIN_MAX_MSS_MIHDL_SHIFT) +#define I40E_GLTLAN_MIN_MAX_MSS_RSV_SHIFT 26 +#define I40E_GLTLAN_MIN_MAX_MSS_RSV_MASK I40E_MASK(0x3F, I40E_GLTLAN_MIN_MAX_MSS_RSV_SHIFT) + +#define I40E_GLTLAN_MIN_MAX_PKT 0x000E64DC /* Reset: CORER */ +#define I40E_GLTLAN_MIN_MAX_PKT_MAHDL_SHIFT 0 +#define I40E_GLTLAN_MIN_MAX_PKT_MAHDL_MASK I40E_MASK(0x3FFF, I40E_GLTLAN_MIN_MAX_PKT_MAHDL_SHIFT) +#define I40E_GLTLAN_MIN_MAX_PKT_MIHDL_SHIFT 16 +#define I40E_GLTLAN_MIN_MAX_PKT_MIHDL_MASK I40E_MASK(0x3F, I40E_GLTLAN_MIN_MAX_PKT_MIHDL_SHIFT) +#define I40E_GLTLAN_MIN_MAX_PKT_RSV_SHIFT 22 +#define I40E_GLTLAN_MIN_MAX_PKT_RSV_MASK I40E_MASK(0x3FF, I40E_GLTLAN_MIN_MAX_PKT_RSV_SHIFT) + +#define I40E_PF_VT_PFALLOC_RLAN 0x0012A480 /* Reset: CORER */ +#define I40E_PF_VT_PFALLOC_RLAN_FIRSTVF_SHIFT 0 +#define I40E_PF_VT_PFALLOC_RLAN_FIRSTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_RLAN_FIRSTVF_SHIFT) +#define I40E_PF_VT_PFALLOC_RLAN_LASTVF_SHIFT 8 +#define I40E_PF_VT_PFALLOC_RLAN_LASTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_RLAN_LASTVF_SHIFT) +#define I40E_PF_VT_PFALLOC_RLAN_VALID_SHIFT 31 +#define I40E_PF_VT_PFALLOC_RLAN_VALID_MASK I40E_MASK(0x1, I40E_PF_VT_PFALLOC_RLAN_VALID_SHIFT) + +#define I40E_PFLAN_QALLOC_CSR 0x00078E00 /* Reset: CORER */ +#define I40E_PFLAN_QALLOC_CSR_FIRSTQ_SHIFT 0 +#define I40E_PFLAN_QALLOC_CSR_FIRSTQ_MASK I40E_MASK(0x7FF, I40E_PFLAN_QALLOC_CSR_FIRSTQ_SHIFT) +#define I40E_PFLAN_QALLOC_CSR_LASTQ_SHIFT 16 +#define I40E_PFLAN_QALLOC_CSR_LASTQ_MASK I40E_MASK(0x7FF, I40E_PFLAN_QALLOC_CSR_LASTQ_SHIFT) +#define I40E_PFLAN_QALLOC_CSR_VALID_SHIFT 31 +#define I40E_PFLAN_QALLOC_CSR_VALID_MASK I40E_MASK(0x1, I40E_PFLAN_QALLOC_CSR_VALID_SHIFT) + +#define I40E_PFLAN_QALLOC_INT 0x0003F000 /* Reset: CORER */ +#define I40E_PFLAN_QALLOC_INT_FIRSTQ_SHIFT 0 +#define I40E_PFLAN_QALLOC_INT_FIRSTQ_MASK I40E_MASK(0x7FF, I40E_PFLAN_QALLOC_INT_FIRSTQ_SHIFT) +#define I40E_PFLAN_QALLOC_INT_LASTQ_SHIFT 16 +#define I40E_PFLAN_QALLOC_INT_LASTQ_MASK I40E_MASK(0x7FF, I40E_PFLAN_QALLOC_INT_LASTQ_SHIFT) +#define I40E_PFLAN_QALLOC_INT_VALID_SHIFT 31 +#define I40E_PFLAN_QALLOC_INT_VALID_MASK I40E_MASK(0x1, I40E_PFLAN_QALLOC_INT_VALID_SHIFT) + +#define I40E_PFLAN_QALLOC_PMAT 0x000C0600 /* Reset: CORER */ +#define I40E_PFLAN_QALLOC_PMAT_FIRSTQ_SHIFT 0 +#define I40E_PFLAN_QALLOC_PMAT_FIRSTQ_MASK I40E_MASK(0x7FF, I40E_PFLAN_QALLOC_PMAT_FIRSTQ_SHIFT) +#define I40E_PFLAN_QALLOC_PMAT_LASTQ_SHIFT 16 +#define 
I40E_PFLAN_QALLOC_PMAT_LASTQ_MASK I40E_MASK(0x7FF, I40E_PFLAN_QALLOC_PMAT_LASTQ_SHIFT) +#define I40E_PFLAN_QALLOC_PMAT_VALID_SHIFT 31 +#define I40E_PFLAN_QALLOC_PMAT_VALID_MASK I40E_MASK(0x1, I40E_PFLAN_QALLOC_PMAT_VALID_SHIFT) + +#define I40E_PFLAN_QALLOC_RCB 0x00122080 /* Reset: CORER */ +#define I40E_PFLAN_QALLOC_RCB_FIRSTQ_SHIFT 0 +#define I40E_PFLAN_QALLOC_RCB_FIRSTQ_MASK I40E_MASK(0x7FF, I40E_PFLAN_QALLOC_RCB_FIRSTQ_SHIFT) +#define I40E_PFLAN_QALLOC_RCB_LASTQ_SHIFT 16 +#define I40E_PFLAN_QALLOC_RCB_LASTQ_MASK I40E_MASK(0x7FF, I40E_PFLAN_QALLOC_RCB_LASTQ_SHIFT) +#define I40E_PFLAN_QALLOC_RCB_VALID_SHIFT 31 +#define I40E_PFLAN_QALLOC_RCB_VALID_MASK I40E_MASK(0x1, I40E_PFLAN_QALLOC_RCB_VALID_SHIFT) + +#define I40E_PFLAN_QALLOC_RCU 0x00246780 /* Reset: CORER */ +#define I40E_PFLAN_QALLOC_RCU_FIRSTQ_SHIFT 0 +#define I40E_PFLAN_QALLOC_RCU_FIRSTQ_MASK I40E_MASK(0x7FF, I40E_PFLAN_QALLOC_RCU_FIRSTQ_SHIFT) +#define I40E_PFLAN_QALLOC_RCU_LASTQ_SHIFT 16 +#define I40E_PFLAN_QALLOC_RCU_LASTQ_MASK I40E_MASK(0x7FF, I40E_PFLAN_QALLOC_RCU_LASTQ_SHIFT) +#define I40E_PFLAN_QALLOC_RCU_VALID_SHIFT 31 +#define I40E_PFLAN_QALLOC_RCU_VALID_MASK I40E_MASK(0x1, I40E_PFLAN_QALLOC_RCU_VALID_SHIFT) + +#define I40E_PRTLAN_RXEMP_EN 0x001E4780 /* Reset: GLOBR */ +#define I40E_PRTLAN_RXEMP_EN_RXHOST_EN_SHIFT 0 +#define I40E_PRTLAN_RXEMP_EN_RXHOST_EN_MASK I40E_MASK(0x1, I40E_PRTLAN_RXEMP_EN_RXHOST_EN_SHIFT) + +/* PF - MAC Registers */ + +#define I40E_PRTDCB_MPVCTL 0x001E2460 /* Reset: GLOBR */ +#define I40E_PRTDCB_MPVCTL_PFCV_SHIFT 0 +#define I40E_PRTDCB_MPVCTL_PFCV_MASK I40E_MASK(0xFFFF, I40E_PRTDCB_MPVCTL_PFCV_SHIFT) +#define I40E_PRTDCB_MPVCTL_RFCV_SHIFT 16 +#define I40E_PRTDCB_MPVCTL_RFCV_MASK I40E_MASK(0xFFFF, I40E_PRTDCB_MPVCTL_RFCV_SHIFT) + +#define I40E_PRTMAC_AN_LP_STATUS1 0x0008C680 /* Reset: GLOBR */ +#define I40E_PRTMAC_AN_LP_STATUS1_LP_AN_PAGE_LOW_SHIFT 0 +#define I40E_PRTMAC_AN_LP_STATUS1_LP_AN_PAGE_LOW_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_AN_LP_STATUS1_LP_AN_PAGE_LOW_SHIFT) +#define I40E_PRTMAC_AN_LP_STATUS1_AN_ARB_STATE_SHIFT 16 +#define I40E_PRTMAC_AN_LP_STATUS1_AN_ARB_STATE_MASK I40E_MASK(0xF, I40E_PRTMAC_AN_LP_STATUS1_AN_ARB_STATE_SHIFT) +#define I40E_PRTMAC_AN_LP_STATUS1_RSVD_SHIFT 20 +#define I40E_PRTMAC_AN_LP_STATUS1_RSVD_MASK I40E_MASK(0xFFF, I40E_PRTMAC_AN_LP_STATUS1_RSVD_SHIFT) + +#define I40E_PRTMAC_HLCTL 0x001E2000 /* Reset: GLOBR */ +#define I40E_PRTMAC_HLCTL_APPEND_CRC_SHIFT 0 +#define I40E_PRTMAC_HLCTL_APPEND_CRC_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_APPEND_CRC_SHIFT) +#define I40E_PRTMAC_HLCTL_RXCRCSTRP_SHIFT 1 +#define I40E_PRTMAC_HLCTL_RXCRCSTRP_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_RXCRCSTRP_SHIFT) +#define I40E_PRTMAC_HLCTL_JUMBOEN_SHIFT 2 +#define I40E_PRTMAC_HLCTL_JUMBOEN_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_JUMBOEN_SHIFT) +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD3_SHIFT 3 +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD3_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_LEGACY_RSVD3_SHIFT) +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD4_SHIFT 4 +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD4_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_LEGACY_RSVD4_SHIFT) +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD5_SHIFT 5 +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD5_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_LEGACY_RSVD5_SHIFT) +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD6_SHIFT 6 +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD6_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_LEGACY_RSVD6_SHIFT) +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD7_SHIFT 7 +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD7_MASK I40E_MASK(0x3, I40E_PRTMAC_HLCTL_LEGACY_RSVD7_SHIFT) +#define I40E_PRTMAC_HLCTL_SOFTRESET_SHIFT 9 
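The PFLAN_QALLOC_* registers above encode a first/last LAN queue range plus a validity bit; a hedged sketch of how that range would be decoded, again assuming the rd32() accessor and the I40E_MASK() semantics from the shared base code:

/* Illustrative sketch only, not part of the patch. */
static int pf_lan_queue_range(struct i40e_hw *hw, u16 *first, u16 *last)
{
	u32 qalloc = rd32(hw, I40E_PFLAN_QALLOC_CSR);

	if (!(qalloc & I40E_PFLAN_QALLOC_CSR_VALID_MASK))
		return -1;	/* no LAN queue allocation for this PF */

	*first = (qalloc & I40E_PFLAN_QALLOC_CSR_FIRSTQ_MASK) >>
		 I40E_PFLAN_QALLOC_CSR_FIRSTQ_SHIFT;
	*last = (qalloc & I40E_PFLAN_QALLOC_CSR_LASTQ_MASK) >>
		I40E_PFLAN_QALLOC_CSR_LASTQ_SHIFT;
	return 0;
}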
+#define I40E_PRTMAC_HLCTL_SOFTRESET_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_SOFTRESET_SHIFT) +#define I40E_PRTMAC_HLCTL_TXPADEN_SHIFT 10 +#define I40E_PRTMAC_HLCTL_TXPADEN_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_TXPADEN_SHIFT) +#define I40E_PRTMAC_HLCTL_TX_ENABLE_SHIFT 11 +#define I40E_PRTMAC_HLCTL_TX_ENABLE_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_TX_ENABLE_SHIFT) +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD12_SHIFT 12 +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD12_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_LEGACY_RSVD12_SHIFT) +#define I40E_PRTMAC_HLCTL_RX_ENABLE_SHIFT 13 +#define I40E_PRTMAC_HLCTL_RX_ENABLE_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_RX_ENABLE_SHIFT) +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD14_SHIFT 14 +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD14_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_LEGACY_RSVD14_SHIFT) +#define I40E_PRTMAC_HLCTL_LPBK_SHIFT 15 +#define I40E_PRTMAC_HLCTL_LPBK_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_LPBK_SHIFT) +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD16_SHIFT 16 +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD16_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_LEGACY_RSVD16_SHIFT) +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD17_SHIFT 17 +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD17_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_LEGACY_RSVD17_SHIFT) +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD18_SHIFT 18 +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD18_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_LEGACY_RSVD18_SHIFT) +#define I40E_PRTMAC_HLCTL_TXLB2NET_SHIFT 19 +#define I40E_PRTMAC_HLCTL_TXLB2NET_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_TXLB2NET_SHIFT) +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD20_SHIFT 20 +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD20_MASK I40E_MASK(0xF, I40E_PRTMAC_HLCTL_LEGACY_RSVD20_SHIFT) +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD24_SHIFT 24 +#define I40E_PRTMAC_HLCTL_LEGACY_RSVD24_MASK I40E_MASK(0x7, I40E_PRTMAC_HLCTL_LEGACY_RSVD24_SHIFT) +#define I40E_PRTMAC_HLCTL_RXLNGTHERREN_SHIFT 27 +#define I40E_PRTMAC_HLCTL_RXLNGTHERREN_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_RXLNGTHERREN_SHIFT) +#define I40E_PRTMAC_HLCTL_RXPADSTRIPEN_SHIFT 28 +#define I40E_PRTMAC_HLCTL_RXPADSTRIPEN_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTL_RXPADSTRIPEN_SHIFT) + +#define I40E_PRTMAC_HLCTLA 0x001E4760 /* Reset: GLOBR */ +#define I40E_PRTMAC_HLCTLA_DROP_US_PKTS_SHIFT 0 +#define I40E_PRTMAC_HLCTLA_DROP_US_PKTS_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTLA_DROP_US_PKTS_SHIFT) +#define I40E_PRTMAC_HLCTLA_RX_FWRD_CTRL_SHIFT 1 +#define I40E_PRTMAC_HLCTLA_RX_FWRD_CTRL_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTLA_RX_FWRD_CTRL_SHIFT) +#define I40E_PRTMAC_HLCTLA_CHOP_OS_PKT_SHIFT 2 +#define I40E_PRTMAC_HLCTLA_CHOP_OS_PKT_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTLA_CHOP_OS_PKT_SHIFT) +#define I40E_PRTMAC_HLCTLA_TX_HYSTERESIS_SHIFT 4 +#define I40E_PRTMAC_HLCTLA_TX_HYSTERESIS_MASK I40E_MASK(0x7, I40E_PRTMAC_HLCTLA_TX_HYSTERESIS_SHIFT) +#define I40E_PRTMAC_HLCTLA_HYS_FLUSH_PKT_SHIFT 7 +#define I40E_PRTMAC_HLCTLA_HYS_FLUSH_PKT_MASK I40E_MASK(0x1, I40E_PRTMAC_HLCTLA_HYS_FLUSH_PKT_SHIFT) + +#define I40E_PRTMAC_HLSTA 0x001E2020 /* Reset: GLOBR */ +#define I40E_PRTMAC_HLSTA_REVID_SHIFT 0 +#define I40E_PRTMAC_HLSTA_REVID_MASK I40E_MASK(0xF, I40E_PRTMAC_HLSTA_REVID_SHIFT) +#define I40E_PRTMAC_HLSTA_RESERVED_2_SHIFT 4 +#define I40E_PRTMAC_HLSTA_RESERVED_2_MASK I40E_MASK(0x1, I40E_PRTMAC_HLSTA_RESERVED_2_SHIFT) +#define I40E_PRTMAC_HLSTA_RXERRSYM_SHIFT 5 +#define I40E_PRTMAC_HLSTA_RXERRSYM_MASK I40E_MASK(0x1, I40E_PRTMAC_HLSTA_RXERRSYM_SHIFT) +#define I40E_PRTMAC_HLSTA_RXILLSYM_SHIFT 6 +#define I40E_PRTMAC_HLSTA_RXILLSYM_MASK I40E_MASK(0x1, I40E_PRTMAC_HLSTA_RXILLSYM_SHIFT) +#define I40E_PRTMAC_HLSTA_RXIDLERR_SHIFT 7 +#define 
I40E_PRTMAC_HLSTA_RXIDLERR_MASK I40E_MASK(0x1, I40E_PRTMAC_HLSTA_RXIDLERR_SHIFT) +#define I40E_PRTMAC_HLSTA_LEGACY_RSVD1_SHIFT 8 +#define I40E_PRTMAC_HLSTA_LEGACY_RSVD1_MASK I40E_MASK(0x1, I40E_PRTMAC_HLSTA_LEGACY_RSVD1_SHIFT) +#define I40E_PRTMAC_HLSTA_LEGACY_RSVD2_SHIFT 9 +#define I40E_PRTMAC_HLSTA_LEGACY_RSVD2_MASK I40E_MASK(0x1, I40E_PRTMAC_HLSTA_LEGACY_RSVD2_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_INTERNAL 0x001E3530 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_INTERNAL_HSEC_RX_SWZL_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_INTERNAL_HSEC_RX_SWZL_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_INTERNAL_HSEC_RX_SWZL_SHIFT) +#define I40E_PRTMAC_HSEC_CTL_INTERNAL_HSEC_TX_SWZL_SHIFT 1 +#define I40E_PRTMAC_HSEC_CTL_INTERNAL_HSEC_TX_SWZL_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_INTERNAL_HSEC_TX_SWZL_SHIFT) +#define I40E_PRTMAC_HSEC_CTL_INTERNAL_HSEC_CTL_RX_CHECK_ACK_SHIFT 2 +#define I40E_PRTMAC_HSEC_CTL_INTERNAL_HSEC_CTL_RX_CHECK_ACK_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_INTERNAL_HSEC_CTL_RX_CHECK_ACK_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_ETYPE_GCP 0x001E3160 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_ETYPE_GCP_HSEC_CTL_RX_CHECK_ETYPE_GCP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_ETYPE_GCP_HSEC_CTL_RX_CHECK_ETYPE_GCP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_ETYPE_GCP_HSEC_CTL_RX_CHECK_ETYPE_GCP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_ETYPE_GPP 0x001E32A0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_ETYPE_GPP_HSEC_CTL_RX_CHECK_ETYPE_GPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_ETYPE_GPP_HSEC_CTL_RX_CHECK_ETYPE_GPP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_ETYPE_GPP_HSEC_CTL_RX_CHECK_ETYPE_GPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_ETYPE_PCP 0x001E3210 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_ETYPE_PCP_HSEC_CTL_RX_CHECK_ETYPE_PCP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_ETYPE_PCP_HSEC_CTL_RX_CHECK_ETYPE_PCP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_ETYPE_PCP_HSEC_CTL_RX_CHECK_ETYPE_PCP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_ETYPE_PPP 0x001E3320 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_ETYPE_PPP_HSEC_CTL_RX_CHECK_ETYPE_PPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_ETYPE_PPP_HSEC_CTL_RX_CHECK_ETYPE_PPP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_ETYPE_PPP_HSEC_CTL_RX_CHECK_ETYPE_PPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_MCAST_GCP 0x001E30F0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_MCAST_GCP_HSEC_CTL_RX_CHECK_MCAST_GCP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_MCAST_GCP_HSEC_CTL_RX_CHECK_MCAST_GCP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_MCAST_GCP_HSEC_CTL_RX_CHECK_MCAST_GCP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_MCAST_GPP 0x001E3270 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_MCAST_GPP_HSEC_CTL_RX_CHECK_MCAST_GPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_MCAST_GPP_HSEC_CTL_RX_CHECK_MCAST_GPP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_MCAST_GPP_HSEC_CTL_RX_CHECK_MCAST_GPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_MCAST_PCP 0x001E31C0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_MCAST_PCP_HSEC_CTL_RX_CHECK_MCAST_PCP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_MCAST_PCP_HSEC_CTL_RX_CHECK_MCAST_PCP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_MCAST_PCP_HSEC_CTL_RX_CHECK_MCAST_PCP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_MCAST_PPP 0x001E32F0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_MCAST_PPP_HSEC_CTL_RX_CHECK_MCAST_PPP_SHIFT 0 
+#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_MCAST_PPP_HSEC_CTL_RX_CHECK_MCAST_PPP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_MCAST_PPP_HSEC_CTL_RX_CHECK_MCAST_PPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_OPCODE_GCP 0x001E3170 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_OPCODE_GCP_HSEC_CTL_RX_CHECK_OPCODE_GCP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_OPCODE_GCP_HSEC_CTL_RX_CHECK_OPCODE_GCP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_OPCODE_GCP_HSEC_CTL_RX_CHECK_OPCODE_GCP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_OPCODE_GPP 0x001E32C0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_OPCODE_GPP_HSEC_CTL_RX_CHECK_OPCODE_GPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_OPCODE_GPP_HSEC_CTL_RX_CHECK_OPCODE_GPP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_OPCODE_GPP_HSEC_CTL_RX_CHECK_OPCODE_GPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_OPCODE_PCP 0x001E3230 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_OPCODE_PCP_HSEC_CTL_RX_CHECK_OPCODE_PCP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_OPCODE_PCP_HSEC_CTL_RX_CHECK_OPCODE_PCP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_OPCODE_PCP_HSEC_CTL_RX_CHECK_OPCODE_PCP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_OPCODE_PPP 0x001E3340 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_OPCODE_PPP_HSEC_CTL_RX_CHECK_OPCODE_PPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_OPCODE_PPP_HSEC_CTL_RX_CHECK_OPCODE_PPP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_OPCODE_PPP_HSEC_CTL_RX_CHECK_OPCODE_PPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_SA_GCP 0x001E3130 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_SA_GCP_HSEC_CTL_RX_CHECK_SA_GCP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_SA_GCP_HSEC_CTL_RX_CHECK_SA_GCP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_SA_GCP_HSEC_CTL_RX_CHECK_SA_GCP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_SA_GPP 0x001E3290 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_SA_GPP_HSEC_CTL_RX_CHECK_SA_GPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_SA_GPP_HSEC_CTL_RX_CHECK_SA_GPP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_SA_GPP_HSEC_CTL_RX_CHECK_SA_GPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_SA_PCP 0x001E3200 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_SA_PCP_HSEC_CTL_RX_CHECK_SA_PCP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_SA_PCP_HSEC_CTL_RX_CHECK_SA_PCP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_SA_PCP_HSEC_CTL_RX_CHECK_SA_PCP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_SA_PPP 0x001E3310 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_SA_PPP_HSEC_CTL_RX_CHECK_SA_PPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_SA_PPP_HSEC_CTL_RX_CHECK_SA_PPP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_SA_PPP_HSEC_CTL_RX_CHECK_SA_PPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_UCAST_GCP 0x001E3100 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_UCAST_GCP_HSEC_CTL_RX_CHECK_UCAST_GCP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_UCAST_GCP_HSEC_CTL_RX_CHECK_UCAST_GCP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_UCAST_GCP_HSEC_CTL_RX_CHECK_UCAST_GCP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_UCAST_GPP 0x001E3280 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_UCAST_GPP_HSEC_CTL_RX_CHECK_UCAST_GPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_UCAST_GPP_HSEC_CTL_RX_CHECK_UCAST_GPP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_UCAST_GPP_HSEC_CTL_RX_CHECK_UCAST_GPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_UCAST_PCP 
0x001E31D0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_UCAST_PCP_HSEC_CTL_RX_CHECK_UCAST_PCP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_UCAST_PCP_HSEC_CTL_RX_CHECK_UCAST_PCP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_UCAST_PCP_HSEC_CTL_RX_CHECK_UCAST_PCP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_UCAST_PPP 0x001E3300 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_UCAST_PPP_HSEC_CTL_RX_CHECK_UCAST_PPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_CHECK_UCAST_PPP_HSEC_CTL_RX_CHECK_UCAST_PPP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_CHECK_UCAST_PPP_HSEC_CTL_RX_CHECK_UCAST_PPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_DELETE_FCS 0x001E3080 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_DELETE_FCS_HSEC_CTL_RX_DELETE_FCS_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_DELETE_FCS_HSEC_CTL_RX_DELETE_FCS_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_DELETE_FCS_HSEC_CTL_RX_DELETE_FCS_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE 0x001E3070 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_HSEC_CTL_RX_ENABLE_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_HSEC_CTL_RX_ENABLE_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_ENABLE_HSEC_CTL_RX_ENABLE_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PCP 0x001E31B0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PCP_HSEC_CTL_RX_ENABLE_PCP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PCP_HSEC_CTL_RX_ENABLE_PCP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PCP_HSEC_CTL_RX_ENABLE_PCP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_ETYPE_GCP 0x001E31A0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_ETYPE_GCP_HSEC_CTL_RX_ETYPE_GCP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_ETYPE_GCP_HSEC_CTL_RX_ETYPE_GCP_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_RX_ETYPE_GCP_HSEC_CTL_RX_ETYPE_GCP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_ETYPE_GPP 0x001E32B0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_ETYPE_GPP_HSEC_CTL_RX_ETYPE_GPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_ETYPE_GPP_HSEC_CTL_RX_ETYPE_GPP_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_RX_ETYPE_GPP_HSEC_CTL_RX_ETYPE_GPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_ETYPE_PCP 0x001E3220 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_ETYPE_PCP_HSEC_CTL_RX_ETYPE_PCP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_ETYPE_PCP_HSEC_CTL_RX_ETYPE_PCP_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_RX_ETYPE_PCP_HSEC_CTL_RX_ETYPE_PCP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_ETYPE_PPP 0x001E3330 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_ETYPE_PPP_HSEC_CTL_RX_ETYPE_PPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_ETYPE_PPP_HSEC_CTL_RX_ETYPE_PPP_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_RX_ETYPE_PPP_HSEC_CTL_RX_ETYPE_PPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_IGNORE_FCS 0x001E3090 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_IGNORE_FCS_HSEC_CTL_RX_IGNORE_FCS_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_IGNORE_FCS_HSEC_CTL_RX_IGNORE_FCS_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_IGNORE_FCS_HSEC_CTL_RX_IGNORE_FCS_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_MAX_PACKET_LEN 0x001E30A0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_MAX_PACKET_LEN_HSEC_CTL_RX_MAX_PACKET_LEN_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_MAX_PACKET_LEN_HSEC_CTL_RX_MAX_PACKET_LEN_MASK I40E_MASK(0x7FFF, I40E_PRTMAC_HSEC_CTL_RX_MAX_PACKET_LEN_HSEC_CTL_RX_MAX_PACKET_LEN_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_MIN_PACKET_LEN 0x001E30B0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_MIN_PACKET_LEN_HSEC_CTL_RX_MIN_PACKET_LEN_SHIFT 0 +#define 
I40E_PRTMAC_HSEC_CTL_RX_MIN_PACKET_LEN_HSEC_CTL_RX_MIN_PACKET_LEN_MASK I40E_MASK(0xFF, I40E_PRTMAC_HSEC_CTL_RX_MIN_PACKET_LEN_HSEC_CTL_RX_MIN_PACKET_LEN_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_GPP 0x001E32D0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_GPP_HSEC_CTL_RX_OPCODE_GPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_GPP_HSEC_CTL_RX_OPCODE_GPP_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_RX_OPCODE_GPP_HSEC_CTL_RX_OPCODE_GPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_MAX_GCP 0x001E3190 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_MAX_GCP_HSEC_CTL_RX_OPCODE_MAX_GCP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_MAX_GCP_HSEC_CTL_RX_OPCODE_MAX_GCP_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_RX_OPCODE_MAX_GCP_HSEC_CTL_RX_OPCODE_MAX_GCP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_MAX_PCP 0x001E3250 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_MAX_PCP_HSEC_CTL_RX_OPCODE_MAX_PCP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_MAX_PCP_HSEC_CTL_RX_OPCODE_MAX_PCP_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_RX_OPCODE_MAX_PCP_HSEC_CTL_RX_OPCODE_MAX_PCP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_MIN_GCP 0x001E3180 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_MIN_GCP_HSEC_CTL_RX_OPCODE_MIN_GCP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_MIN_GCP_HSEC_CTL_RX_OPCODE_MIN_GCP_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_RX_OPCODE_MIN_GCP_HSEC_CTL_RX_OPCODE_MIN_GCP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_MIN_PCP 0x001E3240 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_MIN_PCP_HSEC_CTL_RX_OPCODE_MIN_PCP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_MIN_PCP_HSEC_CTL_RX_OPCODE_MIN_PCP_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_RX_OPCODE_MIN_PCP_HSEC_CTL_RX_OPCODE_MIN_PCP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_PPP 0x001E3350 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_PPP_HSEC_CTL_RX_OPCODE_PPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_OPCODE_PPP_HSEC_CTL_RX_OPCODE_PPP_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_RX_OPCODE_PPP_HSEC_CTL_RX_OPCODE_PPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_MCAST_PART1 0x001E31E0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_MCAST_PART1_HSEC_CTL_RX_PAUSE_DA_MCAST_PART1_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_MCAST_PART1_HSEC_CTL_RX_PAUSE_DA_MCAST_PART1_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_MCAST_PART1_HSEC_CTL_RX_PAUSE_DA_MCAST_PART1_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_MCAST_PART2 0x001E31F0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_MCAST_PART2_HSEC_CTL_RX_PAUSE_DA_MCAST_PART2_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_MCAST_PART2_HSEC_CTL_RX_PAUSE_DA_MCAST_PART2_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_MCAST_PART2_HSEC_CTL_RX_PAUSE_DA_MCAST_PART2_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_TX_DA_GPP_PART1 0x001E3490 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_TX_DA_GPP_PART1_HSEC_CTL_TX_DA_GPP_PART1_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_TX_DA_GPP_PART1_HSEC_CTL_TX_DA_GPP_PART1_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTMAC_HSEC_CTL_TX_DA_GPP_PART1_HSEC_CTL_TX_DA_GPP_PART1_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_TX_DA_GPP_PART2 0x001E34A0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_TX_DA_GPP_PART2_HSEC_CTL_TX_DA_GPP_PART2_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_TX_DA_GPP_PART2_HSEC_CTL_TX_DA_GPP_PART2_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_TX_DA_GPP_PART2_HSEC_CTL_TX_DA_GPP_PART2_SHIFT) + +#define 
I40E_PRTMAC_HSEC_CTL_TX_DA_PPP_PART1 0x001E34F0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_TX_DA_PPP_PART1_HSEC_CTL_TX_DA_PPP_PART1_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_TX_DA_PPP_PART1_HSEC_CTL_TX_DA_PPP_PART1_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTMAC_HSEC_CTL_TX_DA_PPP_PART1_HSEC_CTL_TX_DA_PPP_PART1_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_TX_DA_PPP_PART2 0x001E3500 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_TX_DA_PPP_PART2_HSEC_CTL_TX_DA_PPP_PART2_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_TX_DA_PPP_PART2_HSEC_CTL_TX_DA_PPP_PART2_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_TX_DA_PPP_PART2_HSEC_CTL_TX_DA_PPP_PART2_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_TX_ENABLE 0x001E3000 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_TX_ENABLE_HSEC_CTL_TX_ENABLE_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_TX_ENABLE_HSEC_CTL_TX_ENABLE_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_TX_ENABLE_HSEC_CTL_TX_ENABLE_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_TX_ERR_PKT_MODE 0x001E3060 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_TX_ERR_PKT_MODE_HSEC_CTL_TX_ERR_PKT_MODE_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_TX_ERR_PKT_MODE_HSEC_CTL_TX_ERR_PKT_MODE_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_TX_ERR_PKT_MODE_HSEC_CTL_TX_ERR_PKT_MODE_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_TX_ETHERTYPE_GPP 0x001E34D0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_TX_ETHERTYPE_GPP_HSEC_CTL_TX_ETHERTYPE_GPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_TX_ETHERTYPE_GPP_HSEC_CTL_TX_ETHERTYPE_GPP_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_TX_ETHERTYPE_GPP_HSEC_CTL_TX_ETHERTYPE_GPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_TX_ETHERTYPE_PPP 0x001E3510 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_TX_ETHERTYPE_PPP_HSEC_CTL_TX_ETHERTYPE_PPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_TX_ETHERTYPE_PPP_HSEC_CTL_TX_ETHERTYPE_PPP_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_TX_ETHERTYPE_PPP_HSEC_CTL_TX_ETHERTYPE_PPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_TX_FCS_INS_ENABLE 0x001E3020 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_TX_FCS_INS_ENABLE_TX_FCS_INS_EN_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_TX_FCS_INS_ENABLE_TX_FCS_INS_EN_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_TX_FCS_INS_ENABLE_TX_FCS_INS_EN_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_TX_FCS_STOMP 0x001E3030 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_TX_FCS_STOMP_HSEC_CTL_TX_FCS_STOMP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_TX_FCS_STOMP_HSEC_CTL_TX_FCS_STOMP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_TX_FCS_STOMP_HSEC_CTL_TX_FCS_STOMP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_TX_IGNORE_FCS 0x001E3040 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_TX_IGNORE_FCS_HSEC_CTL_TX_IGNORE_FCS_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_TX_IGNORE_FCS_HSEC_CTL_TX_IGNORE_FCS_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_TX_IGNORE_FCS_HSEC_CTL_TX_IGNORE_FCS_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_TX_OPCODE_GPP 0x001E34E0 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_TX_OPCODE_GPP_HSEC_CTL_TX_OPCODE_GPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_TX_OPCODE_GPP_HSEC_CTL_TX_OPCODE_GPP_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_TX_OPCODE_GPP_HSEC_CTL_TX_OPCODE_GPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_TX_OPCODE_PPP 0x001E3520 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_TX_OPCODE_PPP_HSEC_CTL_TX_OPCODE_PPP_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_TX_OPCODE_PPP_HSEC_CTL_TX_OPCODE_PPP_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_TX_OPCODE_PPP_HSEC_CTL_TX_OPCODE_PPP_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_TX_RDYOUT_THRESH 0x001E3010 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_TX_RDYOUT_THRESH_HSEC_CTL_TX_RDYOUT_THRESH_SHIFT 0 +#define 
I40E_PRTMAC_HSEC_CTL_TX_RDYOUT_THRESH_HSEC_CTL_TX_RDYOUT_THRESH_MASK I40E_MASK(0xF, I40E_PRTMAC_HSEC_CTL_TX_RDYOUT_THRESH_HSEC_CTL_TX_RDYOUT_THRESH_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_TX_TO_RX_LOOPBACK 0x001E3050 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_TX_TO_RX_LOOPBACK_HSEC_CTL_TX_TO_RX_LOOPBACK_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_TX_TO_RX_LOOPBACK_HSEC_CTL_TX_TO_RX_LOOPBACK_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_TX_TO_RX_LOOPBACK_HSEC_CTL_TX_TO_RX_LOOPBACK_SHIFT) + +#define I40E_PRTMAC_HSEC_CTL_XLGMII 0x001E3550 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_CTL_XLGMII_LB_PHY_SHIFT 0 +#define I40E_PRTMAC_HSEC_CTL_XLGMII_LB_PHY_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_XLGMII_LB_PHY_SHIFT) +#define I40E_PRTMAC_HSEC_CTL_XLGMII_HI_TH_SHIFT 1 +#define I40E_PRTMAC_HSEC_CTL_XLGMII_HI_TH_MASK I40E_MASK(0xF, I40E_PRTMAC_HSEC_CTL_XLGMII_HI_TH_SHIFT) +#define I40E_PRTMAC_HSEC_CTL_XLGMII_LO_TH_SHIFT 5 +#define I40E_PRTMAC_HSEC_CTL_XLGMII_LO_TH_MASK I40E_MASK(0xF, I40E_PRTMAC_HSEC_CTL_XLGMII_LO_TH_SHIFT) +#define I40E_PRTMAC_HSEC_CTL_XLGMII_RX_SWP_CTL_SHIFT 9 +#define I40E_PRTMAC_HSEC_CTL_XLGMII_RX_SWP_CTL_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_XLGMII_RX_SWP_CTL_SHIFT) +#define I40E_PRTMAC_HSEC_CTL_XLGMII_RX_SWP_DAT_SHIFT 10 +#define I40E_PRTMAC_HSEC_CTL_XLGMII_RX_SWP_DAT_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_XLGMII_RX_SWP_DAT_SHIFT) +#define I40E_PRTMAC_HSEC_CTL_XLGMII_RX_SWZL_DAT_SHIFT 11 +#define I40E_PRTMAC_HSEC_CTL_XLGMII_RX_SWZL_DAT_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_XLGMII_RX_SWZL_DAT_SHIFT) +#define I40E_PRTMAC_HSEC_CTL_XLGMII_TX_SWP_CTL_SHIFT 12 +#define I40E_PRTMAC_HSEC_CTL_XLGMII_TX_SWP_CTL_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_XLGMII_TX_SWP_CTL_SHIFT) +#define I40E_PRTMAC_HSEC_CTL_XLGMII_TX_SWP_DAT_SHIFT 13 +#define I40E_PRTMAC_HSEC_CTL_XLGMII_TX_SWP_DAT_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_XLGMII_TX_SWP_DAT_SHIFT) +#define I40E_PRTMAC_HSEC_CTL_XLGMII_TX_SWZL_DAT_SHIFT 14 +#define I40E_PRTMAC_HSEC_CTL_XLGMII_TX_SWZL_DAT_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_XLGMII_TX_SWZL_DAT_SHIFT) +#define I40E_PRTMAC_HSEC_CTL_XLGMII_RX_BYP_INP_SHIFT 15 +#define I40E_PRTMAC_HSEC_CTL_XLGMII_RX_BYP_INP_MASK I40E_MASK(0x3, I40E_PRTMAC_HSEC_CTL_XLGMII_RX_BYP_INP_SHIFT) +#define I40E_PRTMAC_HSEC_CTL_XLGMII_LB2TX_SHIFT 17 +#define I40E_PRTMAC_HSEC_CTL_XLGMII_LB2TX_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_XLGMII_LB2TX_SHIFT) + +#define I40E_PRTMAC_HSEC_SINGLE_40G_PORT_SELECT 0x001E3540 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSEC_SINGLE_40G_PORT_SELECT_MAC_SINGLE_40G_PORT_SELECT_SHIFT 0 +#define I40E_PRTMAC_HSEC_SINGLE_40G_PORT_SELECT_MAC_SINGLE_40G_PORT_SELECT_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_SINGLE_40G_PORT_SELECT_MAC_SINGLE_40G_PORT_SELECT_SHIFT) + +#define I40E_PRTMAC_HSECTL1 0x001E3560 /* Reset: GLOBR */ +#define I40E_PRTMAC_HSECTL1_DROP_US_PKTS_SHIFT 0 +#define I40E_PRTMAC_HSECTL1_DROP_US_PKTS_MASK I40E_MASK(0x1, I40E_PRTMAC_HSECTL1_DROP_US_PKTS_SHIFT) +#define I40E_PRTMAC_HSECTL1_PAD_US_PKT_SHIFT 3 +#define I40E_PRTMAC_HSECTL1_PAD_US_PKT_MASK I40E_MASK(0x1, I40E_PRTMAC_HSECTL1_PAD_US_PKT_SHIFT) +#define I40E_PRTMAC_HSECTL1_TX_HYSTERESIS_SHIFT 4 +#define I40E_PRTMAC_HSECTL1_TX_HYSTERESIS_MASK I40E_MASK(0x7, I40E_PRTMAC_HSECTL1_TX_HYSTERESIS_SHIFT) +#define I40E_PRTMAC_HSECTL1_HYS_FLUSH_PKT_SHIFT 7 +#define I40E_PRTMAC_HSECTL1_HYS_FLUSH_PKT_MASK I40E_MASK(0x1, I40E_PRTMAC_HSECTL1_HYS_FLUSH_PKT_SHIFT) +#define I40E_PRTMAC_HSECTL1_EN_SFD_CHECK_SHIFT 30 +#define I40E_PRTMAC_HSECTL1_EN_SFD_CHECK_MASK I40E_MASK(0x1, I40E_PRTMAC_HSECTL1_EN_SFD_CHECK_SHIFT) +#define 
I40E_PRTMAC_HSECTL1_EN_PREAMBLE_CHECK_SHIFT 31 +#define I40E_PRTMAC_HSECTL1_EN_PREAMBLE_CHECK_MASK I40E_MASK(0x1, I40E_PRTMAC_HSECTL1_EN_PREAMBLE_CHECK_SHIFT) + +#define I40E_PRTMAC_LINKSTA 0x001E2420 /* Reset: GLOBR */ +#define I40E_PRTMAC_LINKSTA_FIFO_MTAR_STS_RX_EMPTY_SHIFT 0 +#define I40E_PRTMAC_LINKSTA_FIFO_MTAR_STS_RX_EMPTY_MASK I40E_MASK(0x1, I40E_PRTMAC_LINKSTA_FIFO_MTAR_STS_RX_EMPTY_SHIFT) +#define I40E_PRTMAC_LINKSTA_FIFO_MTAR_STS_RX_FULL_SHIFT 1 +#define I40E_PRTMAC_LINKSTA_FIFO_MTAR_STS_RX_FULL_MASK I40E_MASK(0x1, I40E_PRTMAC_LINKSTA_FIFO_MTAR_STS_RX_FULL_SHIFT) +#define I40E_PRTMAC_LINKSTA_MAC_RX_LINK_FAULT_RF_SHIFT 2 +#define I40E_PRTMAC_LINKSTA_MAC_RX_LINK_FAULT_RF_MASK I40E_MASK(0x1, I40E_PRTMAC_LINKSTA_MAC_RX_LINK_FAULT_RF_SHIFT) +#define I40E_PRTMAC_LINKSTA_MAC_RX_LINK_FAULT_LF_SHIFT 3 +#define I40E_PRTMAC_LINKSTA_MAC_RX_LINK_FAULT_LF_MASK I40E_MASK(0x1, I40E_PRTMAC_LINKSTA_MAC_RX_LINK_FAULT_LF_SHIFT) +#define I40E_PRTMAC_LINKSTA_MAC_LINK_UP_PREV_SHIFT 7 +#define I40E_PRTMAC_LINKSTA_MAC_LINK_UP_PREV_MASK I40E_MASK(0x1, I40E_PRTMAC_LINKSTA_MAC_LINK_UP_PREV_SHIFT) +#define I40E_PRTMAC_LINKSTA_MAC_LINK_SPEED_SHIFT 27 +#define I40E_PRTMAC_LINKSTA_MAC_LINK_SPEED_MASK I40E_MASK(0x7, I40E_PRTMAC_LINKSTA_MAC_LINK_SPEED_SHIFT) +#define I40E_PRTMAC_LINKSTA_MAC_LINK_UP_SHIFT 30 +#define I40E_PRTMAC_LINKSTA_MAC_LINK_UP_MASK I40E_MASK(0x1, I40E_PRTMAC_LINKSTA_MAC_LINK_UP_SHIFT) + +#define I40E_PRTMAC_MACC 0x001E24E0 /* Reset: GLOBR */ +#define I40E_PRTMAC_MACC_FORCE_LINK_SHIFT 0 +#define I40E_PRTMAC_MACC_FORCE_LINK_MASK I40E_MASK(0x1, I40E_PRTMAC_MACC_FORCE_LINK_SHIFT) +#define I40E_PRTMAC_MACC_PHY_LOOP_BACK_SHIFT 1 +#define I40E_PRTMAC_MACC_PHY_LOOP_BACK_MASK I40E_MASK(0x1, I40E_PRTMAC_MACC_PHY_LOOP_BACK_SHIFT) +#define I40E_PRTMAC_MACC_TX_SWIZZLE_DATA_SHIFT 2 +#define I40E_PRTMAC_MACC_TX_SWIZZLE_DATA_MASK I40E_MASK(0x1, I40E_PRTMAC_MACC_TX_SWIZZLE_DATA_SHIFT) +#define I40E_PRTMAC_MACC_TX_SWAP_DATA_SHIFT 3 +#define I40E_PRTMAC_MACC_TX_SWAP_DATA_MASK I40E_MASK(0x1, I40E_PRTMAC_MACC_TX_SWAP_DATA_SHIFT) +#define I40E_PRTMAC_MACC_TX_SWAP_CTRL_SHIFT 4 +#define I40E_PRTMAC_MACC_TX_SWAP_CTRL_MASK I40E_MASK(0x1, I40E_PRTMAC_MACC_TX_SWAP_CTRL_SHIFT) +#define I40E_PRTMAC_MACC_RX_SWIZZLE_DATA_SHIFT 5 +#define I40E_PRTMAC_MACC_RX_SWIZZLE_DATA_MASK I40E_MASK(0x1, I40E_PRTMAC_MACC_RX_SWIZZLE_DATA_SHIFT) +#define I40E_PRTMAC_MACC_RX_SWAP_DATA_SHIFT 6 +#define I40E_PRTMAC_MACC_RX_SWAP_DATA_MASK I40E_MASK(0x1, I40E_PRTMAC_MACC_RX_SWAP_DATA_SHIFT) +#define I40E_PRTMAC_MACC_RX_SWAP_CTRL_SHIFT 7 +#define I40E_PRTMAC_MACC_RX_SWAP_CTRL_MASK I40E_MASK(0x1, I40E_PRTMAC_MACC_RX_SWAP_CTRL_SHIFT) +#define I40E_PRTMAC_MACC_FIFO_THRSHLD_HI_SHIFT 8 +#define I40E_PRTMAC_MACC_FIFO_THRSHLD_HI_MASK I40E_MASK(0xF, I40E_PRTMAC_MACC_FIFO_THRSHLD_HI_SHIFT) +#define I40E_PRTMAC_MACC_FIFO_THRSHLD_LO_SHIFT 12 +#define I40E_PRTMAC_MACC_FIFO_THRSHLD_LO_MASK I40E_MASK(0xF, I40E_PRTMAC_MACC_FIFO_THRSHLD_LO_SHIFT) +#define I40E_PRTMAC_MACC_LEGACY_RSRVD_SHIFT 16 +#define I40E_PRTMAC_MACC_LEGACY_RSRVD_MASK I40E_MASK(0x7, I40E_PRTMAC_MACC_LEGACY_RSRVD_SHIFT) +#define I40E_PRTMAC_MACC_MASK_FAULT_STATE_SHIFT 19 +#define I40E_PRTMAC_MACC_MASK_FAULT_STATE_MASK I40E_MASK(0x1, I40E_PRTMAC_MACC_MASK_FAULT_STATE_SHIFT) +#define I40E_PRTMAC_MACC_MASK_XGMII_IF_SHIFT 20 +#define I40E_PRTMAC_MACC_MASK_XGMII_IF_MASK I40E_MASK(0x3, I40E_PRTMAC_MACC_MASK_XGMII_IF_SHIFT) +#define I40E_PRTMAC_MACC_MASK_LINK_SHIFT 22 +#define I40E_PRTMAC_MACC_MASK_LINK_MASK I40E_MASK(0x1, I40E_PRTMAC_MACC_MASK_LINK_SHIFT) +#define 
I40E_PRTMAC_MACC_FORCE_SPEED_VALUE_SHIFT 23 +#define I40E_PRTMAC_MACC_FORCE_SPEED_VALUE_MASK I40E_MASK(0x7, I40E_PRTMAC_MACC_FORCE_SPEED_VALUE_SHIFT) +#define I40E_PRTMAC_MACC_FORCE_SPEED_EN_SHIFT 26 +#define I40E_PRTMAC_MACC_FORCE_SPEED_EN_MASK I40E_MASK(0x1, I40E_PRTMAC_MACC_FORCE_SPEED_EN_SHIFT) + +#define I40E_PRTMAC_PAP 0x001E2040 /* Reset: GLOBR */ +#define I40E_PRTMAC_PAP_TXPAUSECNT_SHIFT 0 +#define I40E_PRTMAC_PAP_TXPAUSECNT_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_PAP_TXPAUSECNT_SHIFT) +#define I40E_PRTMAC_PAP_PACE_SHIFT 16 +#define I40E_PRTMAC_PAP_PACE_MASK I40E_MASK(0xF, I40E_PRTMAC_PAP_PACE_SHIFT) + +#define I40E_PRTMAC_PCS_AN_CONTROL1 0x0008C600 /* Reset: GLOBR */ +#define I40E_PRTMAC_PCS_AN_CONTROL1_ANACK2_SHIFT 1 +#define I40E_PRTMAC_PCS_AN_CONTROL1_ANACK2_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL1_ANACK2_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL1_ANSF_SHIFT 2 +#define I40E_PRTMAC_PCS_AN_CONTROL1_ANSF_MASK I40E_MASK(0x1F, I40E_PRTMAC_PCS_AN_CONTROL1_ANSF_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL1_D10GMP_SHIFT 10 +#define I40E_PRTMAC_PCS_AN_CONTROL1_D10GMP_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL1_D10GMP_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL1_RATD_SHIFT 11 +#define I40E_PRTMAC_PCS_AN_CONTROL1_RATD_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL1_RATD_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL1_ANRXAT_SHIFT 19 +#define I40E_PRTMAC_PCS_AN_CONTROL1_ANRXAT_MASK I40E_MASK(0xF, I40E_PRTMAC_PCS_AN_CONTROL1_ANRXAT_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL1_ANRXDM_SHIFT 23 +#define I40E_PRTMAC_PCS_AN_CONTROL1_ANRXDM_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL1_ANRXDM_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL1_ANRXLM_SHIFT 24 +#define I40E_PRTMAC_PCS_AN_CONTROL1_ANRXLM_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL1_ANRXLM_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL1_ANPDT_SHIFT 25 +#define I40E_PRTMAC_PCS_AN_CONTROL1_ANPDT_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_AN_CONTROL1_ANPDT_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL1_RF_SHIFT 27 +#define I40E_PRTMAC_PCS_AN_CONTROL1_RF_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL1_RF_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL1_PB_SHIFT 28 +#define I40E_PRTMAC_PCS_AN_CONTROL1_PB_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_AN_CONTROL1_PB_SHIFT) + +#define I40E_PRTMAC_PCS_AN_CONTROL2 0x0008C620 /* Reset: GLOBR */ +#define I40E_PRTMAC_PCS_AN_CONTROL2_AN_PAGE_D_LOW_OVRD_SHIFT 0 +#define I40E_PRTMAC_PCS_AN_CONTROL2_AN_PAGE_D_LOW_OVRD_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_PCS_AN_CONTROL2_AN_PAGE_D_LOW_OVRD_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL2_RSVD_SHIFT 16 +#define I40E_PRTMAC_PCS_AN_CONTROL2_RSVD_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_AN_CONTROL2_RSVD_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL2_DDPT_SHIFT 18 +#define I40E_PRTMAC_PCS_AN_CONTROL2_DDPT_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL2_DDPT_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL2_AAPLL_SHIFT 19 +#define I40E_PRTMAC_PCS_AN_CONTROL2_AAPLL_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL2_AAPLL_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL2_ANAPO_SHIFT 20 +#define I40E_PRTMAC_PCS_AN_CONTROL2_ANAPO_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_AN_CONTROL2_ANAPO_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL2_LH1GSI_SHIFT 22 +#define I40E_PRTMAC_PCS_AN_CONTROL2_LH1GSI_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL2_LH1GSI_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL2_LH1GAI_SHIFT 23 +#define I40E_PRTMAC_PCS_AN_CONTROL2_LH1GAI_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL2_LH1GAI_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL2_FANAS_SHIFT 24 +#define I40E_PRTMAC_PCS_AN_CONTROL2_FANAS_MASK I40E_MASK(0xF, 
I40E_PRTMAC_PCS_AN_CONTROL2_FANAS_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL2_FASM_SHIFT 28 +#define I40E_PRTMAC_PCS_AN_CONTROL2_FASM_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_AN_CONTROL2_FASM_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL2_PDD_SHIFT 30 +#define I40E_PRTMAC_PCS_AN_CONTROL2_PDD_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL2_PDD_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL2_FEC_FORCE_SHIFT 31 +#define I40E_PRTMAC_PCS_AN_CONTROL2_FEC_FORCE_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL2_FEC_FORCE_SHIFT) + +#define I40E_PRTMAC_PCS_AN_CONTROL4 0x0008C660 /* Reset: GLOBR */ +#define I40E_PRTMAC_PCS_AN_CONTROL4_RESERVED0_SHIFT 0 +#define I40E_PRTMAC_PCS_AN_CONTROL4_RESERVED0_MASK I40E_MASK(0x1FFF, I40E_PRTMAC_PCS_AN_CONTROL4_RESERVED0_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KR_EEE_AN_VALUE_SHIFT 13 +#define I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KR_EEE_AN_VALUE_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KR_EEE_AN_VALUE_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KR_EEE_AN_RESULT_SHIFT 14 +#define I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KR_EEE_AN_RESULT_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KR_EEE_AN_RESULT_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KX4_EEE_AN_VALUE_SHIFT 15 +#define I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KX4_EEE_AN_VALUE_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KX4_EEE_AN_VALUE_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KX4_EEE_AN_RESULT_SHIFT 16 +#define I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KX4_EEE_AN_RESULT_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KX4_EEE_AN_RESULT_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KX_EEE_AN_VALUE_SHIFT 17 +#define I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KX_EEE_AN_VALUE_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KX_EEE_AN_VALUE_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KX_EEE_AN_RESULT_SHIFT 18 +#define I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KX_EEE_AN_RESULT_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_KX_EEE_AN_RESULT_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_SGMII_EEE_AN_VALUE_SHIFT 19 +#define I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_SGMII_EEE_AN_VALUE_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_SGMII_EEE_AN_VALUE_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_SGMII_EEE_AN_RESULT_SHIFT 20 +#define I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_SGMII_EEE_AN_RESULT_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_AN_CONTROL4_FORCE_SGMII_EEE_AN_RESULT_SHIFT) +#define I40E_PRTMAC_PCS_AN_CONTROL4_RESERVED1_SHIFT 21 +#define I40E_PRTMAC_PCS_AN_CONTROL4_RESERVED1_MASK I40E_MASK(0x7FF, I40E_PRTMAC_PCS_AN_CONTROL4_RESERVED1_SHIFT) + +#define I40E_PRTMAC_PCS_LINK_CTRL 0x0008C260 /* Reset: GLOBR */ +#define I40E_PRTMAC_PCS_LINK_CTRL_PMD_40G_R_TYPE_SELECTION_SHIFT 0 +#define I40E_PRTMAC_PCS_LINK_CTRL_PMD_40G_R_TYPE_SELECTION_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_LINK_CTRL_PMD_40G_R_TYPE_SELECTION_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_PMD_10G_R_TYPE_SELECTION_SHIFT 2 +#define I40E_PRTMAC_PCS_LINK_CTRL_PMD_10G_R_TYPE_SELECTION_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_LINK_CTRL_PMD_10G_R_TYPE_SELECTION_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_PMD_10G_X_TYPE_SELECTION_SHIFT 4 +#define I40E_PRTMAC_PCS_LINK_CTRL_PMD_10G_X_TYPE_SELECTION_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_LINK_CTRL_PMD_10G_X_TYPE_SELECTION_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_PMD_1G_X_TYPE_SELECTION0_SHIFT 6 +#define I40E_PRTMAC_PCS_LINK_CTRL_PMD_1G_X_TYPE_SELECTION0_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_CTRL_PMD_1G_X_TYPE_SELECTION0_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_SPEED_SELECTION_SHIFT 8 +#define 
I40E_PRTMAC_PCS_LINK_CTRL_SPEED_SELECTION_MASK I40E_MASK(0x7, I40E_PRTMAC_PCS_LINK_CTRL_SPEED_SELECTION_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_PMD_1G_X_TYPE_SELECTION1_SHIFT 12 +#define I40E_PRTMAC_PCS_LINK_CTRL_PMD_1G_X_TYPE_SELECTION1_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_CTRL_PMD_1G_X_TYPE_SELECTION1_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_AN_CLAUSE_37_ENABLE_SHIFT 13 +#define I40E_PRTMAC_PCS_LINK_CTRL_AN_CLAUSE_37_ENABLE_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_CTRL_AN_CLAUSE_37_ENABLE_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_FEC_CAPABILITY_SHIFT 14 +#define I40E_PRTMAC_PCS_LINK_CTRL_FEC_CAPABILITY_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_LINK_CTRL_FEC_CAPABILITY_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_KX_ABILITY_SHIFT 16 +#define I40E_PRTMAC_PCS_LINK_CTRL_KX_ABILITY_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_CTRL_KX_ABILITY_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_KX4_ABILITY_SHIFT 17 +#define I40E_PRTMAC_PCS_LINK_CTRL_KX4_ABILITY_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_CTRL_KX4_ABILITY_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_KR_ABILITY_SHIFT 18 +#define I40E_PRTMAC_PCS_LINK_CTRL_KR_ABILITY_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_CTRL_KR_ABILITY_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_KR4_ABILITY_SHIFT 19 +#define I40E_PRTMAC_PCS_LINK_CTRL_KR4_ABILITY_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_CTRL_KR4_ABILITY_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_CR4_ABILITY_SHIFT 20 +#define I40E_PRTMAC_PCS_LINK_CTRL_CR4_ABILITY_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_CTRL_CR4_ABILITY_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_KR2_ABILITY_SHIFT 21 +#define I40E_PRTMAC_PCS_LINK_CTRL_KR2_ABILITY_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_CTRL_KR2_ABILITY_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_POWER_DOWN_SHIFT 23 +#define I40E_PRTMAC_PCS_LINK_CTRL_POWER_DOWN_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_CTRL_POWER_DOWN_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_KX_EEE_ABILITY_SHIFT 24 +#define I40E_PRTMAC_PCS_LINK_CTRL_KX_EEE_ABILITY_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_CTRL_KX_EEE_ABILITY_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_KX4_EEE_ABILITY_SHIFT 25 +#define I40E_PRTMAC_PCS_LINK_CTRL_KX4_EEE_ABILITY_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_CTRL_KX4_EEE_ABILITY_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_KR_EEE_ABILITY_SHIFT 26 +#define I40E_PRTMAC_PCS_LINK_CTRL_KR_EEE_ABILITY_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_CTRL_KR_EEE_ABILITY_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_AUTO_NEG_ENABLE_SHIFT 29 +#define I40E_PRTMAC_PCS_LINK_CTRL_AUTO_NEG_ENABLE_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_CTRL_AUTO_NEG_ENABLE_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_FORCE_LINK_UP_SHIFT 30 +#define I40E_PRTMAC_PCS_LINK_CTRL_FORCE_LINK_UP_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_CTRL_FORCE_LINK_UP_SHIFT) +#define I40E_PRTMAC_PCS_LINK_CTRL_RESTART_AUTO_NEG_SHIFT 31 +#define I40E_PRTMAC_PCS_LINK_CTRL_RESTART_AUTO_NEG_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_CTRL_RESTART_AUTO_NEG_SHIFT) + +#define I40E_PRTMAC_PCS_LINK_STATUS1 0x0008C200 /* Reset: GLOBR */ +#define I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_1G_MODE_SHIFT 5 +#define I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_1G_MODE_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_1G_MODE_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_LANE_0_SHIFT 6 +#define I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_LANE_0_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_LANE_0_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_LANE_1_SHIFT 7 +#define I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_LANE_1_MASK 
I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_LANE_1_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_LANE_2_SHIFT 8 +#define I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_LANE_2_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_LANE_2_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_LANE_3_SHIFT 9 +#define I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_LANE_3_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_LANE_3_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_COMBINED_SHIFT 10 +#define I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_COMBINED_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_SIGNAL_DETECTED_COMBINED_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_SYNCH_LANE0_SHIFT 11 +#define I40E_PRTMAC_PCS_LINK_STATUS1_SYNCH_LANE0_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_SYNCH_LANE0_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_SYNCH_LANE1_SHIFT 12 +#define I40E_PRTMAC_PCS_LINK_STATUS1_SYNCH_LANE1_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_SYNCH_LANE1_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_SYNCH_LANE2_SHIFT 13 +#define I40E_PRTMAC_PCS_LINK_STATUS1_SYNCH_LANE2_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_SYNCH_LANE2_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_SYNCH_LANE3_SHIFT 14 +#define I40E_PRTMAC_PCS_LINK_STATUS1_SYNCH_LANE3_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_SYNCH_LANE3_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_1000_BASE_X_AN_SHIFT 15 +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_1000_BASE_X_AN_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_LINK_1000_BASE_X_AN_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_1000_BASE_X_SHIFT 16 +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_1000_BASE_X_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_LINK_1000_BASE_X_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_10G_BASE_X4_PARALLEL_SHIFT 17 +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_10G_BASE_X4_PARALLEL_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_LINK_10G_BASE_X4_PARALLEL_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_FEC_10G_ENABLED_SHIFT 18 +#define I40E_PRTMAC_PCS_LINK_STATUS1_FEC_10G_ENABLED_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_FEC_10G_ENABLED_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_40G_BASE_R4_SHIFT 19 +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_40G_BASE_R4_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_LINK_40G_BASE_R4_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_10G_BASE_R1_SHIFT 20 +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_10G_BASE_R1_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_LINK_10G_BASE_R1_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_1G_SGMII_SHIFT 21 +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_1G_SGMII_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_LINK_1G_SGMII_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_MODE_SHIFT 22 +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_MODE_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_LINK_STATUS1_LINK_MODE_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_SPEED_SHIFT 24 +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_SPEED_MASK I40E_MASK(0x7, I40E_PRTMAC_PCS_LINK_STATUS1_LINK_SPEED_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_UP_SHIFT 27 +#define I40E_PRTMAC_PCS_LINK_STATUS1_LINK_UP_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_LINK_UP_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_AN_COMPLETED_SHIFT 28 +#define I40E_PRTMAC_PCS_LINK_STATUS1_AN_COMPLETED_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_AN_COMPLETED_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_PCS_READY_SHIFT 29 +#define 
I40E_PRTMAC_PCS_LINK_STATUS1_PCS_READY_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_PCS_READY_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS1_MAC_READY_SHIFT 30 +#define I40E_PRTMAC_PCS_LINK_STATUS1_MAC_READY_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS1_MAC_READY_SHIFT) + +#define I40E_PRTMAC_PCS_LINK_STATUS2 0x0008C220 /* Reset: GLOBR */ +#define I40E_PRTMAC_PCS_LINK_STATUS2_SIGNAL_DETECTED_FEC_SHIFT 1 +#define I40E_PRTMAC_PCS_LINK_STATUS2_SIGNAL_DETECTED_FEC_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS2_SIGNAL_DETECTED_FEC_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS2_FEC_BLOCK_LOCK_SHIFT 2 +#define I40E_PRTMAC_PCS_LINK_STATUS2_FEC_BLOCK_LOCK_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS2_FEC_BLOCK_LOCK_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS2_KR_HI_BERR_SHIFT 3 +#define I40E_PRTMAC_PCS_LINK_STATUS2_KR_HI_BERR_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS2_KR_HI_BERR_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS2_KR_10G_PCS_LCOK_SHIFT 4 +#define I40E_PRTMAC_PCS_LINK_STATUS2_KR_10G_PCS_LCOK_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS2_KR_10G_PCS_LCOK_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS2_AN_NEXT_PAGE_RECEIVED_SHIFT 5 +#define I40E_PRTMAC_PCS_LINK_STATUS2_AN_NEXT_PAGE_RECEIVED_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS2_AN_NEXT_PAGE_RECEIVED_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS2_AN_PAGE_RECEIVED_SHIFT 6 +#define I40E_PRTMAC_PCS_LINK_STATUS2_AN_PAGE_RECEIVED_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS2_AN_PAGE_RECEIVED_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS2_LINK_STATUS_SHIFT 7 +#define I40E_PRTMAC_PCS_LINK_STATUS2_LINK_STATUS_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS2_LINK_STATUS_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS2_ALIGNMENT_STATUS_10G_SHIFT 17 +#define I40E_PRTMAC_PCS_LINK_STATUS2_ALIGNMENT_STATUS_10G_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS2_ALIGNMENT_STATUS_10G_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS2_ALIGNMENT_STATUS_1G_SHIFT 18 +#define I40E_PRTMAC_PCS_LINK_STATUS2_ALIGNMENT_STATUS_1G_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS2_ALIGNMENT_STATUS_1G_SHIFT) +#define I40E_PRTMAC_PCS_LINK_STATUS2_BP_AN_RECEIVER_IDLE_SHIFT 19 +#define I40E_PRTMAC_PCS_LINK_STATUS2_BP_AN_RECEIVER_IDLE_MASK I40E_MASK(0x1, I40E_PRTMAC_PCS_LINK_STATUS2_BP_AN_RECEIVER_IDLE_SHIFT) + +#define I40E_PRTMAC_PCS_MUX_KR 0x0008C000 /* Reset: GLOBR */ +#define I40E_PRTMAC_PCS_MUX_KR_PCS_MUX_KR_SHIFT 0 +#define I40E_PRTMAC_PCS_MUX_KR_PCS_MUX_KR_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTMAC_PCS_MUX_KR_PCS_MUX_KR_SHIFT) + +#define I40E_PRTMAC_PCS_MUX_KX 0x0008C008 /* Reset: GLOBR */ +#define I40E_PRTMAC_PCS_MUX_KX_PCS_MUX_KX_SHIFT 0 +#define I40E_PRTMAC_PCS_MUX_KX_PCS_MUX_KX_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTMAC_PCS_MUX_KX_PCS_MUX_KX_SHIFT) + +#define I40E_PRTMAC_PHY_ANA_ADD 0x000A4038 /* Reset: GLOBR */ +#define I40E_PRTMAC_PHY_ANA_ADD_ADDRESS_SHIFT 0 +#define I40E_PRTMAC_PHY_ANA_ADD_ADDRESS_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_PHY_ANA_ADD_ADDRESS_SHIFT) +#define I40E_PRTMAC_PHY_ANA_ADD_BYTE_EN_SHIFT 28 +#define I40E_PRTMAC_PHY_ANA_ADD_BYTE_EN_MASK I40E_MASK(0xF, I40E_PRTMAC_PHY_ANA_ADD_BYTE_EN_SHIFT) + +#define I40E_PRTMAC_PHY_ANA_DATA 0x000A403c /* Reset: GLOBR */ +#define I40E_PRTMAC_PHY_ANA_DATA_DATA_SHIFT 0 +#define I40E_PRTMAC_PHY_ANA_DATA_DATA_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTMAC_PHY_ANA_DATA_DATA_SHIFT) + +#define I40E_PRTMAC_PMD_MUX_KR 0x0008C004 /* Reset: GLOBR */ +#define I40E_PRTMAC_PMD_MUX_KR_PMD_MUX_KR_SHIFT 0 +#define I40E_PRTMAC_PMD_MUX_KR_PMD_MUX_KR_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTMAC_PMD_MUX_KR_PMD_MUX_KR_SHIFT) + +#define 
I40E_PRTMAC_PMD_MUX_KX 0x0008C00C /* Reset: GLOBR */ +#define I40E_PRTMAC_PMD_MUX_KX_PMD_MUX_KX_SHIFT 0 +#define I40E_PRTMAC_PMD_MUX_KX_PMD_MUX_KX_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTMAC_PMD_MUX_KX_PMD_MUX_KX_SHIFT) + +#define I40E_PRTMAC_TREG 0x001E2160 /* Reset: GLOBR */ +#define I40E_PRTMAC_TREG_ILGLCODETXERRTST_SHIFT 0 +#define I40E_PRTMAC_TREG_ILGLCODETXERRTST_MASK I40E_MASK(0xFF, I40E_PRTMAC_TREG_ILGLCODETXERRTST_SHIFT) +#define I40E_PRTMAC_TREG_CTRLTXERRTST_SHIFT 8 +#define I40E_PRTMAC_TREG_CTRLTXERRTST_MASK I40E_MASK(0x1, I40E_PRTMAC_TREG_CTRLTXERRTST_SHIFT) +#define I40E_PRTMAC_TREG_TXXGMIITSTMODE_SHIFT 9 +#define I40E_PRTMAC_TREG_TXXGMIITSTMODE_MASK I40E_MASK(0x1, I40E_PRTMAC_TREG_TXXGMIITSTMODE_SHIFT) +#define I40E_PRTMAC_TREG_BUSYIDLCODE_SHIFT 15 +#define I40E_PRTMAC_TREG_BUSYIDLCODE_MASK I40E_MASK(0xFF, I40E_PRTMAC_TREG_BUSYIDLCODE_SHIFT) +#define I40E_PRTMAC_TREG_BUSYIDLEN_SHIFT 23 +#define I40E_PRTMAC_TREG_BUSYIDLEN_MASK I40E_MASK(0x1, I40E_PRTMAC_TREG_BUSYIDLEN_SHIFT) + +/* PF - Manageability Registers */ + +#define I40E_EMP_TCO_ISOLATE 0x00078E80 /* Reset: POR */ +#define I40E_EMP_TCO_ISOLATE_EMP_TCO_ISOLATE_SHIFT 0 +#define I40E_EMP_TCO_ISOLATE_EMP_TCO_ISOLATE_MASK I40E_MASK(0xFFFF, I40E_EMP_TCO_ISOLATE_EMP_TCO_ISOLATE_SHIFT) + +#define I40E_GL_MNG_FRIACR 0x00083240 /* Reset: EMPR */ +#define I40E_GL_MNG_FRIACR_ADDR_SHIFT 0 +#define I40E_GL_MNG_FRIACR_ADDR_MASK I40E_MASK(0x1FFFFF, I40E_GL_MNG_FRIACR_ADDR_SHIFT) +#define I40E_GL_MNG_FRIACR_WR_SHIFT 24 +#define I40E_GL_MNG_FRIACR_WR_MASK I40E_MASK(0x1, I40E_GL_MNG_FRIACR_WR_SHIFT) +#define I40E_GL_MNG_FRIACR_RD_SHIFT 25 +#define I40E_GL_MNG_FRIACR_RD_MASK I40E_MASK(0x1, I40E_GL_MNG_FRIACR_RD_SHIFT) + +#define I40E_GL_MNG_FRIARDR 0x00083248 /* Reset: EMPR */ +#define I40E_GL_MNG_FRIARDR_RDATA_SHIFT 0 +#define I40E_GL_MNG_FRIARDR_RDATA_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_MNG_FRIARDR_RDATA_SHIFT) + +#define I40E_GL_MNG_FRIARR 0x0008324C /* Reset: EMPR */ +#define I40E_GL_MNG_FRIARR_HALT_SHIFT 0 +#define I40E_GL_MNG_FRIARR_HALT_MASK I40E_MASK(0x1, I40E_GL_MNG_FRIARR_HALT_SHIFT) +#define I40E_GL_MNG_FRIARR_RST_EN_SHIFT 1 +#define I40E_GL_MNG_FRIARR_RST_EN_MASK I40E_MASK(0x1, I40E_GL_MNG_FRIARR_RST_EN_SHIFT) + +#define I40E_GL_MNG_FRIAWDR 0x00083244 /* Reset: EMPR */ +#define I40E_GL_MNG_FRIAWDR_WDATA_SHIFT 0 +#define I40E_GL_MNG_FRIAWDR_WDATA_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_MNG_FRIAWDR_WDATA_SHIFT) + +#define I40E_GL_MNG_RRDFM 0x00083040 /* Reset: EMPR */ +#define I40E_GL_MNG_RRDFM_RMII_DBG_FIL_0_SHIFT 0 +#define I40E_GL_MNG_RRDFM_RMII_DBG_FIL_0_MASK I40E_MASK(0x1, I40E_GL_MNG_RRDFM_RMII_DBG_FIL_0_SHIFT) +#define I40E_GL_MNG_RRDFM_RMII_DBG_FIL_1_SHIFT 1 +#define I40E_GL_MNG_RRDFM_RMII_DBG_FIL_1_MASK I40E_MASK(0x1, I40E_GL_MNG_RRDFM_RMII_DBG_FIL_1_SHIFT) +#define I40E_GL_MNG_RRDFM_RMII_DBG_FIL_2_SHIFT 2 +#define I40E_GL_MNG_RRDFM_RMII_DBG_FIL_2_MASK I40E_MASK(0x1, I40E_GL_MNG_RRDFM_RMII_DBG_FIL_2_SHIFT) +#define I40E_GL_MNG_RRDFM_RMII_DBG_FIL_3_SHIFT 3 +#define I40E_GL_MNG_RRDFM_RMII_DBG_FIL_3_MASK I40E_MASK(0x1, I40E_GL_MNG_RRDFM_RMII_DBG_FIL_3_SHIFT) + +#define I40E_GL_SWR_PL_THR 0x00269FDC /* Reset: CORER */ +#define I40E_GL_SWR_PL_THR_PIPE_LIMIT_SHIFT 0 +#define I40E_GL_SWR_PL_THR_PIPE_LIMIT_MASK I40E_MASK(0xFF, I40E_GL_SWR_PL_THR_PIPE_LIMIT_SHIFT) + +#define I40E_GL_SWR_PM_UP_THR 0x00269FBC /* Reset: CORER */ +#define I40E_GL_SWR_PM_UP_THR_UP_PORT_0_SHIFT 0 +#define I40E_GL_SWR_PM_UP_THR_UP_PORT_0_MASK I40E_MASK(0xFF, I40E_GL_SWR_PM_UP_THR_UP_PORT_0_SHIFT) +#define I40E_GL_SWR_PM_UP_THR_UP_PORT_1_SHIFT 8 +#define 
I40E_GL_SWR_PM_UP_THR_UP_PORT_1_MASK I40E_MASK(0xFF, I40E_GL_SWR_PM_UP_THR_UP_PORT_1_SHIFT) +#define I40E_GL_SWR_PM_UP_THR_UP_PORT_2_SHIFT 16 +#define I40E_GL_SWR_PM_UP_THR_UP_PORT_2_MASK I40E_MASK(0xFF, I40E_GL_SWR_PM_UP_THR_UP_PORT_2_SHIFT) +#define I40E_GL_SWR_PM_UP_THR_UP_PORT_3_SHIFT 24 +#define I40E_GL_SWR_PM_UP_THR_UP_PORT_3_MASK I40E_MASK(0xFF, I40E_GL_SWR_PM_UP_THR_UP_PORT_3_SHIFT) + +#define I40E_PRT_MNG_FTFT_IGNORETAGS 0x00085280 /* Reset: POR */ +#define I40E_PRT_MNG_FTFT_IGNORETAGS_PRT_MNG_FTFT_IGNORETAGS_0_SHIFT 0 +#define I40E_PRT_MNG_FTFT_IGNORETAGS_PRT_MNG_FTFT_IGNORETAGS_0_MASK I40E_MASK(0x1, I40E_PRT_MNG_FTFT_IGNORETAGS_PRT_MNG_FTFT_IGNORETAGS_0_SHIFT) +#define I40E_PRT_MNG_FTFT_IGNORETAGS_PRT_MNG_FTFT_IGNORETAGS_SHIFT 2 +#define I40E_PRT_MNG_FTFT_IGNORETAGS_PRT_MNG_FTFT_IGNORETAGS_MASK I40E_MASK(0xFF, I40E_PRT_MNG_FTFT_IGNORETAGS_PRT_MNG_FTFT_IGNORETAGS_SHIFT) + +/* PF - MSI-X Table Registers */ + +/* PF - NVM Registers */ + +#define I40E_EMPNVM_FLCNT 0x000B6128 /* Reset: POR */ +#define I40E_EMPNVM_FLCNT_RDCNT_SHIFT 0 +#define I40E_EMPNVM_FLCNT_RDCNT_MASK I40E_MASK(0x1FFFFFF, I40E_EMPNVM_FLCNT_RDCNT_SHIFT) +#define I40E_EMPNVM_FLCNT_ABORT_SHIFT 31 +#define I40E_EMPNVM_FLCNT_ABORT_MASK I40E_MASK(0x1, I40E_EMPNVM_FLCNT_ABORT_SHIFT) + +#define I40E_EMPNVM_FLCTL 0x000B6120 /* Reset: POR */ +#define I40E_EMPNVM_FLCTL_ADDR_SHIFT 0 +#define I40E_EMPNVM_FLCTL_ADDR_MASK I40E_MASK(0xFFFFFF, I40E_EMPNVM_FLCTL_ADDR_SHIFT) +#define I40E_EMPNVM_FLCTL_CMD_SHIFT 24 +#define I40E_EMPNVM_FLCTL_CMD_MASK I40E_MASK(0x3, I40E_EMPNVM_FLCTL_CMD_SHIFT) +#define I40E_EMPNVM_FLCTL_CMDV_SHIFT 26 +#define I40E_EMPNVM_FLCTL_CMDV_MASK I40E_MASK(0x1, I40E_EMPNVM_FLCTL_CMDV_SHIFT) +#define I40E_EMPNVM_FLCTL_FLBUSY_SHIFT 27 +#define I40E_EMPNVM_FLCTL_FLBUSY_MASK I40E_MASK(0x1, I40E_EMPNVM_FLCTL_FLBUSY_SHIFT) +#define I40E_EMPNVM_FLCTL_DONE_SHIFT 30 +#define I40E_EMPNVM_FLCTL_DONE_MASK I40E_MASK(0x1, I40E_EMPNVM_FLCTL_DONE_SHIFT) +#define I40E_EMPNVM_FLCTL_GLDONE_SHIFT 31 +#define I40E_EMPNVM_FLCTL_GLDONE_MASK I40E_MASK(0x1, I40E_EMPNVM_FLCTL_GLDONE_SHIFT) + +#define I40E_EMPNVM_FLDATA 0x000B6124 /* Reset: POR */ +#define I40E_EMPNVM_FLDATA_FLMNGDATA_SHIFT 0 +#define I40E_EMPNVM_FLDATA_FLMNGDATA_MASK I40E_MASK(0xFFFFFFFF, I40E_EMPNVM_FLDATA_FLMNGDATA_SHIFT) + +#define I40E_EMPNVM_SRCTL 0x000B6118 /* Reset: POR */ +#define I40E_EMPNVM_SRCTL_ADDR_SHIFT 0 +#define I40E_EMPNVM_SRCTL_ADDR_MASK I40E_MASK(0x7FFF, I40E_EMPNVM_SRCTL_ADDR_SHIFT) +#define I40E_EMPNVM_SRCTL_START_SHIFT 15 +#define I40E_EMPNVM_SRCTL_START_MASK I40E_MASK(0x1, I40E_EMPNVM_SRCTL_START_SHIFT) +#define I40E_EMPNVM_SRCTL_WRITE_SHIFT 16 +#define I40E_EMPNVM_SRCTL_WRITE_MASK I40E_MASK(0x1, I40E_EMPNVM_SRCTL_WRITE_SHIFT) +#define I40E_EMPNVM_SRCTL_SRBUSY_SHIFT 17 +#define I40E_EMPNVM_SRCTL_SRBUSY_MASK I40E_MASK(0x1, I40E_EMPNVM_SRCTL_SRBUSY_SHIFT) +#define I40E_EMPNVM_SRCTL_TRANS_ABORTED_SHIFT 20 +#define I40E_EMPNVM_SRCTL_TRANS_ABORTED_MASK I40E_MASK(0x1, I40E_EMPNVM_SRCTL_TRANS_ABORTED_SHIFT) +#define I40E_EMPNVM_SRCTL_DEFERAL_SHIFT 29 +#define I40E_EMPNVM_SRCTL_DEFERAL_MASK I40E_MASK(0x1, I40E_EMPNVM_SRCTL_DEFERAL_SHIFT) +#define I40E_EMPNVM_SRCTL_SR_LOAD_SHIFT 30 +#define I40E_EMPNVM_SRCTL_SR_LOAD_MASK I40E_MASK(0x1, I40E_EMPNVM_SRCTL_SR_LOAD_SHIFT) +#define I40E_EMPNVM_SRCTL_DONE_SHIFT 31 +#define I40E_EMPNVM_SRCTL_DONE_MASK I40E_MASK(0x1, I40E_EMPNVM_SRCTL_DONE_SHIFT) + +#define I40E_EMPNVM_SRDATA 0x000B611C /* Reset: POR */ +#define I40E_EMPNVM_SRDATA_WRDATA_SHIFT 0 +#define I40E_EMPNVM_SRDATA_WRDATA_MASK I40E_MASK(0xFFFF, 
I40E_EMPNVM_SRDATA_WRDATA_SHIFT) +#define I40E_EMPNVM_SRDATA_RDDATA_SHIFT 16 +#define I40E_EMPNVM_SRDATA_RDDATA_MASK I40E_MASK(0xFFFF, I40E_EMPNVM_SRDATA_RDDATA_SHIFT) + +#define I40E_GLNVM_ALTIMERS 0x000B6140 /* Reset: POR */ +#define I40E_GLNVM_ALTIMERS_PCI_ALTIMER_SHIFT 0 +#define I40E_GLNVM_ALTIMERS_PCI_ALTIMER_MASK I40E_MASK(0xFFF, I40E_GLNVM_ALTIMERS_PCI_ALTIMER_SHIFT) +#define I40E_GLNVM_ALTIMERS_GEN_ALTIMER_SHIFT 12 +#define I40E_GLNVM_ALTIMERS_GEN_ALTIMER_MASK I40E_MASK(0xFFFFF, I40E_GLNVM_ALTIMERS_GEN_ALTIMER_SHIFT) + +#define I40E_GLNVM_EMPLD 0x000B610C /* Reset: POR */ +#define I40E_GLNVM_EMPLD_EMP_CORE_DONE_SHIFT 3 +#define I40E_GLNVM_EMPLD_EMP_CORE_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_EMPLD_EMP_CORE_DONE_SHIFT) +#define I40E_GLNVM_EMPLD_EMP_GLOBAL_DONE_SHIFT 4 +#define I40E_GLNVM_EMPLD_EMP_GLOBAL_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_EMPLD_EMP_GLOBAL_DONE_SHIFT) + +#define I40E_GLNVM_EMPRQ 0x000B613C /* Reset: POR */ +#define I40E_GLNVM_EMPRQ_EMP_CORE_REQD_SHIFT 3 +#define I40E_GLNVM_EMPRQ_EMP_CORE_REQD_MASK I40E_MASK(0x1, I40E_GLNVM_EMPRQ_EMP_CORE_REQD_SHIFT) +#define I40E_GLNVM_EMPRQ_EMP_GLOBAL_REQD_SHIFT 4 +#define I40E_GLNVM_EMPRQ_EMP_GLOBAL_REQD_MASK I40E_MASK(0x1, I40E_GLNVM_EMPRQ_EMP_GLOBAL_REQD_SHIFT) + +#define I40E_GLNVM_SRLD 0x000B600C /* Reset: POR */ +#define I40E_GLNVM_SRLD_HW_PCIR_DONE_SHIFT 0 +#define I40E_GLNVM_SRLD_HW_PCIR_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_SRLD_HW_PCIR_DONE_SHIFT) +#define I40E_GLNVM_SRLD_HW_PCIRTL_DONE_SHIFT 1 +#define I40E_GLNVM_SRLD_HW_PCIRTL_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_SRLD_HW_PCIRTL_DONE_SHIFT) +#define I40E_GLNVM_SRLD_HW_LCB_DONE_SHIFT 2 +#define I40E_GLNVM_SRLD_HW_LCB_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_SRLD_HW_LCB_DONE_SHIFT) +#define I40E_GLNVM_SRLD_HW_CORE_DONE_SHIFT 3 +#define I40E_GLNVM_SRLD_HW_CORE_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_SRLD_HW_CORE_DONE_SHIFT) +#define I40E_GLNVM_SRLD_HW_GLOBAL_DONE_SHIFT 4 +#define I40E_GLNVM_SRLD_HW_GLOBAL_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_SRLD_HW_GLOBAL_DONE_SHIFT) +#define I40E_GLNVM_SRLD_HW_POR_DONE_SHIFT 5 +#define I40E_GLNVM_SRLD_HW_POR_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_SRLD_HW_POR_DONE_SHIFT) +#define I40E_GLNVM_SRLD_HW_PCIE_ANA_DONE_SHIFT 6 +#define I40E_GLNVM_SRLD_HW_PCIE_ANA_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_SRLD_HW_PCIE_ANA_DONE_SHIFT) +#define I40E_GLNVM_SRLD_HW_PHY_ANA_DONE_SHIFT 7 +#define I40E_GLNVM_SRLD_HW_PHY_ANA_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_SRLD_HW_PHY_ANA_DONE_SHIFT) +#define I40E_GLNVM_SRLD_HW_EMP_DONE_SHIFT 8 +#define I40E_GLNVM_SRLD_HW_EMP_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_SRLD_HW_EMP_DONE_SHIFT) +#define I40E_GLNVM_SRLD_HW_PCIALT_DONE_SHIFT 9 +#define I40E_GLNVM_SRLD_HW_PCIALT_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_SRLD_HW_PCIALT_DONE_SHIFT) + +#define I40E_GLNVM_ULT 0x000B6154 /* Reset: POR */ +#define I40E_GLNVM_ULT_CONF_PCIR_AE_SHIFT 0 +#define I40E_GLNVM_ULT_CONF_PCIR_AE_MASK I40E_MASK(0x1, I40E_GLNVM_ULT_CONF_PCIR_AE_SHIFT) +#define I40E_GLNVM_ULT_CONF_PCIRTL_AE_SHIFT 1 +#define I40E_GLNVM_ULT_CONF_PCIRTL_AE_MASK I40E_MASK(0x1, I40E_GLNVM_ULT_CONF_PCIRTL_AE_SHIFT) +#define I40E_GLNVM_ULT_CONF_LCB_AE_SHIFT 2 +#define I40E_GLNVM_ULT_CONF_LCB_AE_MASK I40E_MASK(0x1, I40E_GLNVM_ULT_CONF_LCB_AE_SHIFT) +#define I40E_GLNVM_ULT_CONF_CORE_AE_SHIFT 3 +#define I40E_GLNVM_ULT_CONF_CORE_AE_MASK I40E_MASK(0x1, I40E_GLNVM_ULT_CONF_CORE_AE_SHIFT) +#define I40E_GLNVM_ULT_CONF_GLOBAL_AE_SHIFT 4 +#define I40E_GLNVM_ULT_CONF_GLOBAL_AE_MASK I40E_MASK(0x1, I40E_GLNVM_ULT_CONF_GLOBAL_AE_SHIFT) +#define I40E_GLNVM_ULT_CONF_POR_AE_SHIFT 5 +#define 
I40E_GLNVM_ULT_CONF_POR_AE_MASK I40E_MASK(0x1, I40E_GLNVM_ULT_CONF_POR_AE_SHIFT) +#define I40E_GLNVM_ULT_CONF_EMP_AE_SHIFT 8 +#define I40E_GLNVM_ULT_CONF_EMP_AE_MASK I40E_MASK(0x1, I40E_GLNVM_ULT_CONF_EMP_AE_SHIFT) +#define I40E_GLNVM_ULT_CONF_PCIALT_AE_SHIFT 9 +#define I40E_GLNVM_ULT_CONF_PCIALT_AE_MASK I40E_MASK(0x1, I40E_GLNVM_ULT_CONF_PCIALT_AE_SHIFT) + +#define I40E_MEM_INIT_GATE_AL_DONE 0x000B6004 /* Reset: POR */ +#define I40E_MEM_INIT_GATE_AL_DONE_CMLAN_INIT_DONE_GATE_AL_DONE_SHIFT 0 +#define I40E_MEM_INIT_GATE_AL_DONE_CMLAN_INIT_DONE_GATE_AL_DONE_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_DONE_CMLAN_INIT_DONE_GATE_AL_DONE_SHIFT) +#define I40E_MEM_INIT_GATE_AL_DONE_PMAT_INIT_DONE_GATE_AL_DONE_SHIFT 1 +#define I40E_MEM_INIT_GATE_AL_DONE_PMAT_INIT_DONE_GATE_AL_DONE_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_DONE_PMAT_INIT_DONE_GATE_AL_DONE_SHIFT) +#define I40E_MEM_INIT_GATE_AL_DONE_RCU_INIT_DONE_GATE_AL_DONE_SHIFT 2 +#define I40E_MEM_INIT_GATE_AL_DONE_RCU_INIT_DONE_GATE_AL_DONE_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_DONE_RCU_INIT_DONE_GATE_AL_DONE_SHIFT) +#define I40E_MEM_INIT_GATE_AL_DONE_TDPU_INIT_DONE_GATE_AL_DONE_SHIFT 3 +#define I40E_MEM_INIT_GATE_AL_DONE_TDPU_INIT_DONE_GATE_AL_DONE_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_DONE_TDPU_INIT_DONE_GATE_AL_DONE_SHIFT) +#define I40E_MEM_INIT_GATE_AL_DONE_TLAN_INIT_DONE_GATE_AL_DONE_SHIFT 4 +#define I40E_MEM_INIT_GATE_AL_DONE_TLAN_INIT_DONE_GATE_AL_DONE_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_DONE_TLAN_INIT_DONE_GATE_AL_DONE_SHIFT) +#define I40E_MEM_INIT_GATE_AL_DONE_RLAN_INIT_DONE_GATE_AL_DONE_SHIFT 5 +#define I40E_MEM_INIT_GATE_AL_DONE_RLAN_INIT_DONE_GATE_AL_DONE_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_DONE_RLAN_INIT_DONE_GATE_AL_DONE_SHIFT) +#define I40E_MEM_INIT_GATE_AL_DONE_RDPU_INIT_DONE_GATE_AL_DONE_SHIFT 6 +#define I40E_MEM_INIT_GATE_AL_DONE_RDPU_INIT_DONE_GATE_AL_DONE_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_DONE_RDPU_INIT_DONE_GATE_AL_DONE_SHIFT) +#define I40E_MEM_INIT_GATE_AL_DONE_PPRS_INIT_DONE_GATE_AL_DONE_SHIFT 7 +#define I40E_MEM_INIT_GATE_AL_DONE_PPRS_INIT_DONE_GATE_AL_DONE_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_DONE_PPRS_INIT_DONE_GATE_AL_DONE_SHIFT) +#define I40E_MEM_INIT_GATE_AL_DONE_RPB_INIT_DONE_GATE_AL_DONE_SHIFT 8 +#define I40E_MEM_INIT_GATE_AL_DONE_RPB_INIT_DONE_GATE_AL_DONE_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_DONE_RPB_INIT_DONE_GATE_AL_DONE_SHIFT) +#define I40E_MEM_INIT_GATE_AL_DONE_TPB_INIT_DONE_GATE_AL_DONE_SHIFT 9 +#define I40E_MEM_INIT_GATE_AL_DONE_TPB_INIT_DONE_GATE_AL_DONE_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_DONE_TPB_INIT_DONE_GATE_AL_DONE_SHIFT) +#define I40E_MEM_INIT_GATE_AL_DONE_FOC_INIT_DONE_GATE_AL_DONE_SHIFT 10 +#define I40E_MEM_INIT_GATE_AL_DONE_FOC_INIT_DONE_GATE_AL_DONE_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_DONE_FOC_INIT_DONE_GATE_AL_DONE_SHIFT) +#define I40E_MEM_INIT_GATE_AL_DONE_TSCD_INIT_DONE_GATE_AL_DONE_SHIFT 11 +#define I40E_MEM_INIT_GATE_AL_DONE_TSCD_INIT_DONE_GATE_AL_DONE_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_DONE_TSCD_INIT_DONE_GATE_AL_DONE_SHIFT) +#define I40E_MEM_INIT_GATE_AL_DONE_TCB_INIT_DONE_GATE_AL_DONE_SHIFT 12 +#define I40E_MEM_INIT_GATE_AL_DONE_TCB_INIT_DONE_GATE_AL_DONE_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_DONE_TCB_INIT_DONE_GATE_AL_DONE_SHIFT) +#define I40E_MEM_INIT_GATE_AL_DONE_RCB_INIT_DONE_GATE_AL_DONE_SHIFT 13 +#define I40E_MEM_INIT_GATE_AL_DONE_RCB_INIT_DONE_GATE_AL_DONE_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_DONE_RCB_INIT_DONE_GATE_AL_DONE_SHIFT) +#define I40E_MEM_INIT_GATE_AL_DONE_WUC_INIT_DONE_GATE_AL_DONE_SHIFT 14 +#define 
I40E_MEM_INIT_GATE_AL_DONE_WUC_INIT_DONE_GATE_AL_DONE_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_DONE_WUC_INIT_DONE_GATE_AL_DONE_SHIFT) +#define I40E_MEM_INIT_GATE_AL_DONE_STAT_INIT_DONE_GATE_AL_DONE_SHIFT 15 +#define I40E_MEM_INIT_GATE_AL_DONE_STAT_INIT_DONE_GATE_AL_DONE_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_DONE_STAT_INIT_DONE_GATE_AL_DONE_SHIFT) +#define I40E_MEM_INIT_GATE_AL_DONE_ITR_INIT_DONE_GATE_AL_DONE_SHIFT 16 +#define I40E_MEM_INIT_GATE_AL_DONE_ITR_INIT_DONE_GATE_AL_DONE_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_DONE_ITR_INIT_DONE_GATE_AL_DONE_SHIFT) + +#define I40E_MEM_INIT_GATE_AL_STR 0x000B6000 /* Reset: POR */ +#define I40E_MEM_INIT_GATE_AL_STR_CMLAN_INIT_DONE_GATE_AL_STRT_SHIFT 0 +#define I40E_MEM_INIT_GATE_AL_STR_CMLAN_INIT_DONE_GATE_AL_STRT_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_STR_CMLAN_INIT_DONE_GATE_AL_STRT_SHIFT) +#define I40E_MEM_INIT_GATE_AL_STR_PMAT_INIT_DONE_GATE_AL_STRT_SHIFT 1 +#define I40E_MEM_INIT_GATE_AL_STR_PMAT_INIT_DONE_GATE_AL_STRT_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_STR_PMAT_INIT_DONE_GATE_AL_STRT_SHIFT) +#define I40E_MEM_INIT_GATE_AL_STR_RCU_INIT_DONE_GATE_AL_STRT_SHIFT 2 +#define I40E_MEM_INIT_GATE_AL_STR_RCU_INIT_DONE_GATE_AL_STRT_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_STR_RCU_INIT_DONE_GATE_AL_STRT_SHIFT) +#define I40E_MEM_INIT_GATE_AL_STR_TDPU_INIT_DONE_GATE_AL_STRT_SHIFT 3 +#define I40E_MEM_INIT_GATE_AL_STR_TDPU_INIT_DONE_GATE_AL_STRT_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_STR_TDPU_INIT_DONE_GATE_AL_STRT_SHIFT) +#define I40E_MEM_INIT_GATE_AL_STR_TLAN_INIT_DONE_GATE_AL_STRT_SHIFT 4 +#define I40E_MEM_INIT_GATE_AL_STR_TLAN_INIT_DONE_GATE_AL_STRT_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_STR_TLAN_INIT_DONE_GATE_AL_STRT_SHIFT) +#define I40E_MEM_INIT_GATE_AL_STR_RLAN_INIT_DONE_GATE_AL_STRT_SHIFT 5 +#define I40E_MEM_INIT_GATE_AL_STR_RLAN_INIT_DONE_GATE_AL_STRT_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_STR_RLAN_INIT_DONE_GATE_AL_STRT_SHIFT) +#define I40E_MEM_INIT_GATE_AL_STR_RDPU_INIT_DONE_GATE_AL_STRT_SHIFT 6 +#define I40E_MEM_INIT_GATE_AL_STR_RDPU_INIT_DONE_GATE_AL_STRT_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_STR_RDPU_INIT_DONE_GATE_AL_STRT_SHIFT) +#define I40E_MEM_INIT_GATE_AL_STR_PPRS_INIT_DONE_GATE_AL_STRT_SHIFT 7 +#define I40E_MEM_INIT_GATE_AL_STR_PPRS_INIT_DONE_GATE_AL_STRT_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_STR_PPRS_INIT_DONE_GATE_AL_STRT_SHIFT) +#define I40E_MEM_INIT_GATE_AL_STR_RPB_INIT_DONE_GATE_AL_STRT_SHIFT 8 +#define I40E_MEM_INIT_GATE_AL_STR_RPB_INIT_DONE_GATE_AL_STRT_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_STR_RPB_INIT_DONE_GATE_AL_STRT_SHIFT) +#define I40E_MEM_INIT_GATE_AL_STR_TPB_INIT_DONE_GATE_AL_STRT_SHIFT 9 +#define I40E_MEM_INIT_GATE_AL_STR_TPB_INIT_DONE_GATE_AL_STRT_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_STR_TPB_INIT_DONE_GATE_AL_STRT_SHIFT) +#define I40E_MEM_INIT_GATE_AL_STR_FOC_INIT_DONE_GATE_AL_STRT_SHIFT 10 +#define I40E_MEM_INIT_GATE_AL_STR_FOC_INIT_DONE_GATE_AL_STRT_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_STR_FOC_INIT_DONE_GATE_AL_STRT_SHIFT) +#define I40E_MEM_INIT_GATE_AL_STR_TSCD_INIT_DONE_GATE_AL_STRT_SHIFT 11 +#define I40E_MEM_INIT_GATE_AL_STR_TSCD_INIT_DONE_GATE_AL_STRT_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_STR_TSCD_INIT_DONE_GATE_AL_STRT_SHIFT) +#define I40E_MEM_INIT_GATE_AL_STR_TCB_INIT_DONE_GATE_AL_STRT_SHIFT 12 +#define I40E_MEM_INIT_GATE_AL_STR_TCB_INIT_DONE_GATE_AL_STRT_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_STR_TCB_INIT_DONE_GATE_AL_STRT_SHIFT) +#define I40E_MEM_INIT_GATE_AL_STR_RCB_INIT_DONE_GATE_AL_STRT_SHIFT 13 +#define 
I40E_MEM_INIT_GATE_AL_STR_RCB_INIT_DONE_GATE_AL_STRT_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_STR_RCB_INIT_DONE_GATE_AL_STRT_SHIFT) +#define I40E_MEM_INIT_GATE_AL_STR_WUC_INIT_DONE_GATE_AL_STRT_SHIFT 14 +#define I40E_MEM_INIT_GATE_AL_STR_WUC_INIT_DONE_GATE_AL_STRT_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_STR_WUC_INIT_DONE_GATE_AL_STRT_SHIFT) +#define I40E_MEM_INIT_GATE_AL_STR_STAT_INIT_DONE_GATE_AL_STRT_SHIFT 15 +#define I40E_MEM_INIT_GATE_AL_STR_STAT_INIT_DONE_GATE_AL_STRT_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_STR_STAT_INIT_DONE_GATE_AL_STRT_SHIFT) +#define I40E_MEM_INIT_GATE_AL_STR_ITR_INIT_DONE_GATE_AL_STRT_SHIFT 16 +#define I40E_MEM_INIT_GATE_AL_STR_ITR_INIT_DONE_GATE_AL_STRT_MASK I40E_MASK(0x1, I40E_MEM_INIT_GATE_AL_STR_ITR_INIT_DONE_GATE_AL_STRT_SHIFT) + +/* PF - PCIe Registers */ + +#define I40E_EMP_PCI_CIAA 0x0009C4D0 /* Reset: PCIR */ +#define I40E_EMP_PCI_CIAA_ADDRESS_SHIFT 0 +#define I40E_EMP_PCI_CIAA_ADDRESS_MASK I40E_MASK(0xFFF, I40E_EMP_PCI_CIAA_ADDRESS_SHIFT) +#define I40E_EMP_PCI_CIAA_FNUM_SHIFT 12 +#define I40E_EMP_PCI_CIAA_FNUM_MASK I40E_MASK(0x7F, I40E_EMP_PCI_CIAA_FNUM_SHIFT) +#define I40E_EMP_PCI_CIAA_PF_SHIFT 19 +#define I40E_EMP_PCI_CIAA_PF_MASK I40E_MASK(0x1, I40E_EMP_PCI_CIAA_PF_SHIFT) + +#define I40E_EMP_PCI_CIAD 0x0009C4D4 /* Reset: PCIR */ +#define I40E_EMP_PCI_CIAD_DATA_SHIFT 0 +#define I40E_EMP_PCI_CIAD_DATA_MASK I40E_MASK(0xFFFFFFFF, I40E_EMP_PCI_CIAD_DATA_SHIFT) + +#define I40E_GL_PCI_DBGCTL 0x000BE4F4 /* Reset: PCIR */ +#define I40E_GL_PCI_DBGCTL_CONFIG_ACCESS_ENABLE_SHIFT 0 +#define I40E_GL_PCI_DBGCTL_CONFIG_ACCESS_ENABLE_MASK I40E_MASK(0x1, I40E_GL_PCI_DBGCTL_CONFIG_ACCESS_ENABLE_SHIFT) + +#define I40E_GLGEN_FWPFRSTAT 0x0009C4E8 /* Reset: PCIR */ +#define I40E_GLGEN_FWPFRSTAT_PF_FLR_SHIFT 0 +#define I40E_GLGEN_FWPFRSTAT_PF_FLR_MASK I40E_MASK(0xFFFF, I40E_GLGEN_FWPFRSTAT_PF_FLR_SHIFT) + +#define I40E_GLGEN_FWVFRSTAT(_i) (0x0009C4D8 + ((_i) * 4)) /* _i=0...3 */ /* Reset: PCIR */ +#define I40E_GLGEN_FWVFRSTAT_MAX_INDEX 3 +#define I40E_GLGEN_FWVFRSTAT_VF_FLR_SHIFT 0 +#define I40E_GLGEN_FWVFRSTAT_VF_FLR_MASK I40E_MASK(0xFFFFFFFF, I40E_GLGEN_FWVFRSTAT_VF_FLR_SHIFT) + +#define I40E_GLGEN_PCIFCNCNT_PCI 0x000BE4A0 /* Reset: PCIR */ +#define I40E_GLGEN_PCIFCNCNT_PCI_PCIPFCNT_SHIFT 0 +#define I40E_GLGEN_PCIFCNCNT_PCI_PCIPFCNT_MASK I40E_MASK(0x1F, I40E_GLGEN_PCIFCNCNT_PCI_PCIPFCNT_SHIFT) +#define I40E_GLGEN_PCIFCNCNT_PCI_PCIVFCNT_SHIFT 16 +#define I40E_GLGEN_PCIFCNCNT_PCI_PCIVFCNT_MASK I40E_MASK(0xFF, I40E_GLGEN_PCIFCNCNT_PCI_PCIVFCNT_SHIFT) + +#define I40E_GLPCI_ANA_ADD 0x000BA000 /* Reset: POR */ +#define I40E_GLPCI_ANA_ADD_ADDRESS_SHIFT 0 +#define I40E_GLPCI_ANA_ADD_ADDRESS_MASK I40E_MASK(0xFFFF, I40E_GLPCI_ANA_ADD_ADDRESS_SHIFT) +#define I40E_GLPCI_ANA_ADD_BYTE_EN_SHIFT 28 +#define I40E_GLPCI_ANA_ADD_BYTE_EN_MASK I40E_MASK(0xF, I40E_GLPCI_ANA_ADD_BYTE_EN_SHIFT) + +#define I40E_GLPCI_ANA_DATA 0x000BA004 /* Reset: POR */ +#define I40E_GLPCI_ANA_DATA_DATA_SHIFT 0 +#define I40E_GLPCI_ANA_DATA_DATA_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_ANA_DATA_DATA_SHIFT) + +#define I40E_GLPCI_LCBADD 0x0009C4C0 /* Reset: PCIR */ +#define I40E_GLPCI_LCBADD_ADDRESS_SHIFT 0 +#define I40E_GLPCI_LCBADD_ADDRESS_MASK I40E_MASK(0x3FFFF, I40E_GLPCI_LCBADD_ADDRESS_SHIFT) +#define I40E_GLPCI_LCBADD_BLOCK_ID_SHIFT 20 +#define I40E_GLPCI_LCBADD_BLOCK_ID_MASK I40E_MASK(0x7FF, I40E_GLPCI_LCBADD_BLOCK_ID_SHIFT) +#define I40E_GLPCI_LCBADD_LOCK_SHIFT 31 +#define I40E_GLPCI_LCBADD_LOCK_MASK I40E_MASK(0x1, I40E_GLPCI_LCBADD_LOCK_SHIFT) + +#define I40E_GLPCI_LCBDATA 0x0009C4C4 /* Reset: PCIR */ 
+#define I40E_GLPCI_LCBDATA_LCB_DATA_SHIFT 0 +#define I40E_GLPCI_LCBDATA_LCB_DATA_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_LCBDATA_LCB_DATA_SHIFT) + +#define I40E_GLPCI_PCITEST1 0x000BE488 /* Reset: PCIR */ +#define I40E_GLPCI_PCITEST1_IGNORE_RID_SHIFT 0 +#define I40E_GLPCI_PCITEST1_IGNORE_RID_MASK I40E_MASK(0x1, I40E_GLPCI_PCITEST1_IGNORE_RID_SHIFT) +#define I40E_GLPCI_PCITEST1_V_MSIX_EN_SHIFT 2 +#define I40E_GLPCI_PCITEST1_V_MSIX_EN_MASK I40E_MASK(0x1, I40E_GLPCI_PCITEST1_V_MSIX_EN_SHIFT) + +#define I40E_GLPCI_PCITEST2 0x000BE4BC /* Reset: PCIR */ +#define I40E_GLPCI_PCITEST2_IOV_TEST_MODE_SHIFT 0 +#define I40E_GLPCI_PCITEST2_IOV_TEST_MODE_MASK I40E_MASK(0x1, I40E_GLPCI_PCITEST2_IOV_TEST_MODE_SHIFT) +#define I40E_GLPCI_PCITEST2_TAG_ALLOC_SHIFT 1 +#define I40E_GLPCI_PCITEST2_TAG_ALLOC_MASK I40E_MASK(0x1, I40E_GLPCI_PCITEST2_TAG_ALLOC_SHIFT) + +#define I40E_GLTPH_CTRL 0x000BE480 /* Reset: PCIR */ +#define I40E_GLTPH_CTRL_DISABLE_READ_HINT_SHIFT 8 +#define I40E_GLTPH_CTRL_DISABLE_READ_HINT_MASK I40E_MASK(0x1, I40E_GLTPH_CTRL_DISABLE_READ_HINT_SHIFT) +#define I40E_GLTPH_CTRL_DESC_PH_SHIFT 9 +#define I40E_GLTPH_CTRL_DESC_PH_MASK I40E_MASK(0x3, I40E_GLTPH_CTRL_DESC_PH_SHIFT) +#define I40E_GLTPH_CTRL_DATA_PH_SHIFT 11 +#define I40E_GLTPH_CTRL_DATA_PH_MASK I40E_MASK(0x3, I40E_GLTPH_CTRL_DATA_PH_SHIFT) +#define I40E_GLTPH_CTRL_TPH_AUTOLEARN_SHIFT 13 +#define I40E_GLTPH_CTRL_TPH_AUTOLEARN_MASK I40E_MASK(0x1, I40E_GLTPH_CTRL_TPH_AUTOLEARN_SHIFT) + +#define I40E_PF_VT_PFALLOC_PCIE 0x000BE380 /* Reset: PCIR */ +#define I40E_PF_VT_PFALLOC_PCIE_FIRSTVF_SHIFT 0 +#define I40E_PF_VT_PFALLOC_PCIE_FIRSTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_PCIE_FIRSTVF_SHIFT) +#define I40E_PF_VT_PFALLOC_PCIE_LASTVF_SHIFT 8 +#define I40E_PF_VT_PFALLOC_PCIE_LASTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_PCIE_LASTVF_SHIFT) +#define I40E_PF_VT_PFALLOC_PCIE_VALID_SHIFT 31 +#define I40E_PF_VT_PFALLOC_PCIE_VALID_MASK I40E_MASK(0x1, I40E_PF_VT_PFALLOC_PCIE_VALID_SHIFT) + +/* PF - Power Management Registers */ + +#define I40E_GLPCI_PM_EN_STAT 0x000BE4E4 /* Reset: POR */ +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF0_SHIFT 0 +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF0_MASK I40E_MASK(0x1, I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF0_SHIFT) +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF1_SHIFT 1 +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF1_MASK I40E_MASK(0x1, I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF1_SHIFT) +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF2_SHIFT 2 +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF2_MASK I40E_MASK(0x1, I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF2_SHIFT) +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF3_SHIFT 3 +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF3_MASK I40E_MASK(0x1, I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF3_SHIFT) +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF4_SHIFT 4 +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF4_MASK I40E_MASK(0x1, I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF4_SHIFT) +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF5_SHIFT 5 +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF5_MASK I40E_MASK(0x1, I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF5_SHIFT) +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF6_SHIFT 6 +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF6_MASK I40E_MASK(0x1, I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF6_SHIFT) +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF7_SHIFT 7 +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF7_MASK I40E_MASK(0x1, I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF7_SHIFT) +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF8_SHIFT 8 +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF8_MASK I40E_MASK(0x1, 
I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF8_SHIFT) +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF9_SHIFT 9 +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF9_MASK I40E_MASK(0x1, I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF9_SHIFT) +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF10_SHIFT 10 +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF10_MASK I40E_MASK(0x1, I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF10_SHIFT) +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF11_SHIFT 11 +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF11_MASK I40E_MASK(0x1, I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF11_SHIFT) +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF12_SHIFT 12 +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF12_MASK I40E_MASK(0x1, I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF12_SHIFT) +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF13_SHIFT 13 +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF13_MASK I40E_MASK(0x1, I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF13_SHIFT) +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF14_SHIFT 14 +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF14_MASK I40E_MASK(0x1, I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF14_SHIFT) +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF15_SHIFT 15 +#define I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF15_MASK I40E_MASK(0x1, I40E_GLPCI_PM_EN_STAT_PCIE_PME_EN_PF15_SHIFT) + +#define I40E_GLPM_DMAC_ENC 0x000881F0 /* Reset: CORER */ +#define I40E_GLPM_DMAC_ENC_DMACENTRY_SHIFT 0 +#define I40E_GLPM_DMAC_ENC_DMACENTRY_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPM_DMAC_ENC_DMACENTRY_SHIFT) + +#define I40E_GLPM_DMAC_EXC 0x000881FC /* Reset: CORER */ +#define I40E_GLPM_DMAC_EXC_DMACTIMEREXIT_SHIFT 0 +#define I40E_GLPM_DMAC_EXC_DMACTIMEREXIT_MASK I40E_MASK(0xFFFF, I40E_GLPM_DMAC_EXC_DMACTIMEREXIT_SHIFT) +#define I40E_GLPM_DMAC_EXC_DMAIMMEXIT_SHIFT 16 +#define I40E_GLPM_DMAC_EXC_DMAIMMEXIT_MASK I40E_MASK(0xFFFF, I40E_GLPM_DMAC_EXC_DMAIMMEXIT_SHIFT) + +#define I40E_GLPM_DMACR 0x000881F4 /* Reset: CORER */ +#define I40E_GLPM_DMACR_DMACWT_SHIFT 0 +#define I40E_GLPM_DMACR_DMACWT_MASK I40E_MASK(0xFFFF, I40E_GLPM_DMACR_DMACWT_SHIFT) +#define I40E_GLPM_DMACR_EXIT_DC_SHIFT 29 +#define I40E_GLPM_DMACR_EXIT_DC_MASK I40E_MASK(0x1, I40E_GLPM_DMACR_EXIT_DC_SHIFT) +#define I40E_GLPM_DMACR_LX_COALESCING_INDICATION_SHIFT 30 +#define I40E_GLPM_DMACR_LX_COALESCING_INDICATION_MASK I40E_MASK(0x1, I40E_GLPM_DMACR_LX_COALESCING_INDICATION_SHIFT) +#define I40E_GLPM_DMACR_DMAC_EN_SHIFT 31 +#define I40E_GLPM_DMACR_DMAC_EN_MASK I40E_MASK(0x1, I40E_GLPM_DMACR_DMAC_EN_SHIFT) + +#define I40E_GLPM_DMCTH 0x000AC7E4 /* Reset: CORER */ +#define I40E_GLPM_DMCTH_DMACRXT_SHIFT 0 +#define I40E_GLPM_DMCTH_DMACRXT_MASK I40E_MASK(0x3FF, I40E_GLPM_DMCTH_DMACRXT_SHIFT) + +#define I40E_GLPM_DMCTLX 0x000881F8 /* Reset: CORER */ +#define I40E_GLPM_DMCTLX_TTLX_SHIFT 0 +#define I40E_GLPM_DMCTLX_TTLX_MASK I40E_MASK(0xFFF, I40E_GLPM_DMCTLX_TTLX_SHIFT) + +#define I40E_GLPM_EEE_SU 0x001E4340 /* Reset: GLOBR */ +#define I40E_GLPM_EEE_SU_DTW_MIN_1000_BASE_T_SHIFT 0 +#define I40E_GLPM_EEE_SU_DTW_MIN_1000_BASE_T_MASK I40E_MASK(0xFF, I40E_GLPM_EEE_SU_DTW_MIN_1000_BASE_T_SHIFT) +#define I40E_GLPM_EEE_SU_DTW_MIN_100_BASE_TX_SHIFT 8 +#define I40E_GLPM_EEE_SU_DTW_MIN_100_BASE_TX_MASK I40E_MASK(0xFF, I40E_GLPM_EEE_SU_DTW_MIN_100_BASE_TX_SHIFT) + +#define I40E_GLPM_EEE_SU_EXT 0x001E4344 /* Reset: GLOBR */ +#define I40E_GLPM_EEE_SU_EXT_DTW_MIN_1000_BASE_KX_SHIFT 0 +#define I40E_GLPM_EEE_SU_EXT_DTW_MIN_1000_BASE_KX_MASK I40E_MASK(0xFF, I40E_GLPM_EEE_SU_EXT_DTW_MIN_1000_BASE_KX_SHIFT) +#define I40E_GLPM_EEE_SU_EXT_DTW_MIN_10GBASE_KX4_SHIFT 8 +#define I40E_GLPM_EEE_SU_EXT_DTW_MIN_10GBASE_KX4_MASK I40E_MASK(0xFF, 
I40E_GLPM_EEE_SU_EXT_DTW_MIN_10GBASE_KX4_SHIFT) +#define I40E_GLPM_EEE_SU_EXT_DTW_MIN_10GBASE_KR_SHIFT 16 +#define I40E_GLPM_EEE_SU_EXT_DTW_MIN_10GBASE_KR_MASK I40E_MASK(0xFF, I40E_GLPM_EEE_SU_EXT_DTW_MIN_10GBASE_KR_SHIFT) +#define I40E_GLPM_EEE_SU_EXT_DTW_MIN_10GBASE_T_SHIFT 24 +#define I40E_GLPM_EEE_SU_EXT_DTW_MIN_10GBASE_T_MASK I40E_MASK(0xFF, I40E_GLPM_EEE_SU_EXT_DTW_MIN_10GBASE_T_SHIFT) + +#define I40E_GLPM_LTRC 0x000BE500 /* Reset: PCIR */ +#define I40E_GLPM_LTRC_SLTRV_SHIFT 0 +#define I40E_GLPM_LTRC_SLTRV_MASK I40E_MASK(0x3FF, I40E_GLPM_LTRC_SLTRV_SHIFT) +#define I40E_GLPM_LTRC_SSCALE_SHIFT 10 +#define I40E_GLPM_LTRC_SSCALE_MASK I40E_MASK(0x7, I40E_GLPM_LTRC_SSCALE_SHIFT) +#define I40E_GLPM_LTRC_LTRS_REQUIREMENT_SHIFT 15 +#define I40E_GLPM_LTRC_LTRS_REQUIREMENT_MASK I40E_MASK(0x1, I40E_GLPM_LTRC_LTRS_REQUIREMENT_SHIFT) +#define I40E_GLPM_LTRC_NSLTRV_SHIFT 16 +#define I40E_GLPM_LTRC_NSLTRV_MASK I40E_MASK(0x3FF, I40E_GLPM_LTRC_NSLTRV_SHIFT) +#define I40E_GLPM_LTRC_NSSCALE_SHIFT 26 +#define I40E_GLPM_LTRC_NSSCALE_MASK I40E_MASK(0x7, I40E_GLPM_LTRC_NSSCALE_SHIFT) +#define I40E_GLPM_LTRC_LTR_SEND_SHIFT 30 +#define I40E_GLPM_LTRC_LTR_SEND_MASK I40E_MASK(0x1, I40E_GLPM_LTRC_LTR_SEND_SHIFT) +#define I40E_GLPM_LTRC_LTRNS_REQUIREMENT_SHIFT 31 +#define I40E_GLPM_LTRC_LTRNS_REQUIREMENT_MASK I40E_MASK(0x1, I40E_GLPM_LTRC_LTRNS_REQUIREMENT_SHIFT) + +#define I40E_PRTPM_EEEDBG 0x001E4420 /* Reset: GLOBR */ +#define I40E_PRTPM_EEEDBG_FORCE_TLPI_SHIFT 0 +#define I40E_PRTPM_EEEDBG_FORCE_TLPI_MASK I40E_MASK(0x1, I40E_PRTPM_EEEDBG_FORCE_TLPI_SHIFT) + +#define I40E_PRTPM_HPTC 0x000AC800 /* Reset: CORER */ +#define I40E_PRTPM_HPTC_HIGH_PRI_TC_SHIFT 0 +#define I40E_PRTPM_HPTC_HIGH_PRI_TC_MASK I40E_MASK(0xFF, I40E_PRTPM_HPTC_HIGH_PRI_TC_SHIFT) + +/* PF - Receive Packet Buffer Registers */ + +#define I40E_GLRPB_DHWS 0x000AC820 /* Reset: CORER */ +#define I40E_GLRPB_DHWS_DHW_TCN_SHIFT 0 +#define I40E_GLRPB_DHWS_DHW_TCN_MASK I40E_MASK(0xFFFFF, I40E_GLRPB_DHWS_DHW_TCN_SHIFT) + +#define I40E_GLRPB_DLWS 0x000AC824 /* Reset: CORER */ +#define I40E_GLRPB_DLWS_DLW_TCN_SHIFT 0 +#define I40E_GLRPB_DLWS_DLW_TCN_MASK I40E_MASK(0xFFFFF, I40E_GLRPB_DLWS_DLW_TCN_SHIFT) + +#define I40E_GLRPB_GFC 0x000AC82C /* Reset: CORER */ +#define I40E_GLRPB_GFC_GFC_SHIFT 0 +#define I40E_GLRPB_GFC_GFC_MASK I40E_MASK(0xFFFFF, I40E_GLRPB_GFC_GFC_SHIFT) + +#define I40E_GLRPB_GPC 0x000AC838 /* Reset: CORER */ +#define I40E_GLRPB_GPC_GPC_SHIFT 0 +#define I40E_GLRPB_GPC_GPC_MASK I40E_MASK(0x3FFF, I40E_GLRPB_GPC_GPC_SHIFT) + +#define I40E_GLRPB_LTRTL 0x000AC83C /* Reset: CORER */ +#define I40E_GLRPB_LTRTL_LTRTL_SHIFT 0 +#define I40E_GLRPB_LTRTL_LTRTL_MASK I40E_MASK(0x3FF, I40E_GLRPB_LTRTL_LTRTL_SHIFT) + +#define I40E_GLRPB_LTRTV 0x000AC840 /* Reset: CORER */ +#define I40E_GLRPB_LTRTV_LTRTV_SHIFT 0 +#define I40E_GLRPB_LTRTV_LTRTV_MASK I40E_MASK(0x3FF, I40E_GLRPB_LTRTV_LTRTV_SHIFT) + +#define I40E_GLRPB_SHTS 0x000AC84C /* Reset: CORER */ +#define I40E_GLRPB_SHTS_SHT_TCN_SHIFT 0 +#define I40E_GLRPB_SHTS_SHT_TCN_MASK I40E_MASK(0xFFFFF, I40E_GLRPB_SHTS_SHT_TCN_SHIFT) + +#define I40E_GLRPB_SHWS 0x000AC850 /* Reset: CORER */ +#define I40E_GLRPB_SHWS_SHW_SHIFT 0 +#define I40E_GLRPB_SHWS_SHW_MASK I40E_MASK(0xFFFFF, I40E_GLRPB_SHWS_SHW_SHIFT) + +#define I40E_GLRPB_SLTS 0x000AC854 /* Reset: CORER */ +#define I40E_GLRPB_SLTS_SLT_TCN_SHIFT 0 +#define I40E_GLRPB_SLTS_SLT_TCN_MASK I40E_MASK(0xFFFFF, I40E_GLRPB_SLTS_SLT_TCN_SHIFT) + +#define I40E_GLRPB_SLWS 0x000AC858 /* Reset: CORER */ +#define I40E_GLRPB_SLWS_SLW_SHIFT 0 +#define I40E_GLRPB_SLWS_SLW_MASK 
I40E_MASK(0xFFFFF, I40E_GLRPB_SLWS_SLW_SHIFT) + +#define I40E_GLRPB_SPSS 0x000AC85C /* Reset: CORER */ +#define I40E_GLRPB_SPSS_SPS_SHIFT 0 +#define I40E_GLRPB_SPSS_SPS_MASK I40E_MASK(0xFFFFF, I40E_GLRPB_SPSS_SPS_SHIFT) + +#define I40E_PRTRPB_DFC(_i) (0x000AC000 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTRPB_DFC_MAX_INDEX 7 +#define I40E_PRTRPB_DFC_DFC_TCN_SHIFT 0 +#define I40E_PRTRPB_DFC_DFC_TCN_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_DFC_DFC_TCN_SHIFT) + +#define I40E_PRTRPB_PFC 0x000AC420 /* Reset: CORER */ +#define I40E_PRTRPB_PFC_PFC_SHIFT 0 +#define I40E_PRTRPB_PFC_PFC_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_PFC_PFC_SHIFT) + +#define I40E_PRTRPB_RUP2TC 0x000AC440 /* Reset: CORER */ +#define I40E_PRTRPB_RUP2TC_UP0TC_SHIFT 0 +#define I40E_PRTRPB_RUP2TC_UP0TC_MASK I40E_MASK(0x7, I40E_PRTRPB_RUP2TC_UP0TC_SHIFT) +#define I40E_PRTRPB_RUP2TC_UP1TC_SHIFT 3 +#define I40E_PRTRPB_RUP2TC_UP1TC_MASK I40E_MASK(0x7, I40E_PRTRPB_RUP2TC_UP1TC_SHIFT) +#define I40E_PRTRPB_RUP2TC_UP2TC_SHIFT 6 +#define I40E_PRTRPB_RUP2TC_UP2TC_MASK I40E_MASK(0x7, I40E_PRTRPB_RUP2TC_UP2TC_SHIFT) +#define I40E_PRTRPB_RUP2TC_UP3TC_SHIFT 9 +#define I40E_PRTRPB_RUP2TC_UP3TC_MASK I40E_MASK(0x7, I40E_PRTRPB_RUP2TC_UP3TC_SHIFT) +#define I40E_PRTRPB_RUP2TC_UP4TC_SHIFT 12 +#define I40E_PRTRPB_RUP2TC_UP4TC_MASK I40E_MASK(0x7, I40E_PRTRPB_RUP2TC_UP4TC_SHIFT) +#define I40E_PRTRPB_RUP2TC_UP5TC_SHIFT 15 +#define I40E_PRTRPB_RUP2TC_UP5TC_MASK I40E_MASK(0x7, I40E_PRTRPB_RUP2TC_UP5TC_SHIFT) +#define I40E_PRTRPB_RUP2TC_UP6TC_SHIFT 18 +#define I40E_PRTRPB_RUP2TC_UP6TC_MASK I40E_MASK(0x7, I40E_PRTRPB_RUP2TC_UP6TC_SHIFT) +#define I40E_PRTRPB_RUP2TC_UP7TC_SHIFT 21 +#define I40E_PRTRPB_RUP2TC_UP7TC_MASK I40E_MASK(0x7, I40E_PRTRPB_RUP2TC_UP7TC_SHIFT) + +#define I40E_PRTRPB_SFC 0x000AC460 /* Reset: CORER */ +#define I40E_PRTRPB_SFC_SFC_SHIFT 0 +#define I40E_PRTRPB_SFC_SFC_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_SFC_SFC_SHIFT) + +#define I40E_PRTRPB_SOC(_i) (0x000AC6C0 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTRPB_SOC_MAX_INDEX 7 +#define I40E_PRTRPB_SOC_SOC_TCN_SHIFT 0 +#define I40E_PRTRPB_SOC_SOC_TCN_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_SOC_SOC_TCN_SHIFT) + +#define I40E_PRTRPB_TC2PFC 0x000AC200 /* Reset: CORER */ +#define I40E_PRTRPB_TC2PFC_TC2PFC_SHIFT 0 +#define I40E_PRTRPB_TC2PFC_TC2PFC_MASK I40E_MASK(0xFF, I40E_PRTRPB_TC2PFC_TC2PFC_SHIFT) + +/* PF - Rx Filters Registers */ + +#define I40E_GL_PRS_FVBM(_i) (0x00269760 + ((_i) * 4)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GL_PRS_FVBM_MAX_INDEX 3 +#define I40E_GL_PRS_FVBM_FV_BYTE_INDX_SHIFT 0 +#define I40E_GL_PRS_FVBM_FV_BYTE_INDX_MASK I40E_MASK(0x7F, I40E_GL_PRS_FVBM_FV_BYTE_INDX_SHIFT) +#define I40E_GL_PRS_FVBM_RULE_BUS_INDX_SHIFT 8 +#define I40E_GL_PRS_FVBM_RULE_BUS_INDX_MASK I40E_MASK(0x3F, I40E_GL_PRS_FVBM_RULE_BUS_INDX_SHIFT) +#define I40E_GL_PRS_FVBM_MSK_ENA_SHIFT 31 +#define I40E_GL_PRS_FVBM_MSK_ENA_MASK I40E_MASK(0x1, I40E_GL_PRS_FVBM_MSK_ENA_SHIFT) + +#define I40E_GLCM_LAN_FCOEQCNT 0x0010C438 /* Reset: CORER */ +#define I40E_GLCM_LAN_FCOEQCNT_FCOE_DDP_CNT_SHIFT 10 +#define I40E_GLCM_LAN_FCOEQCNT_FCOE_DDP_CNT_MASK I40E_MASK(0x3FF, I40E_GLCM_LAN_FCOEQCNT_FCOE_DDP_CNT_SHIFT) + +#define I40E_GLCM_LAN_LANQCNT 0x0010C434 /* Reset: CORER */ +#define I40E_GLCM_LAN_LANQCNT_LANTX_CNT_SHIFT 0 +#define I40E_GLCM_LAN_LANQCNT_LANTX_CNT_MASK I40E_MASK(0x3FF, I40E_GLCM_LAN_LANQCNT_LANTX_CNT_SHIFT) +#define I40E_GLCM_LAN_LANQCNT_LANRX_CNT_SHIFT 10 +#define I40E_GLCM_LAN_LANQCNT_LANRX_CNT_MASK I40E_MASK(0x3FF, I40E_GLCM_LAN_LANQCNT_LANRX_CNT_SHIFT) + +#define 
I40E_GLFOC_CACHE_CTL 0x000AA000 /* Reset: CORER */ +#define I40E_GLFOC_CACHE_CTL_FD_ALLOCATION_SHIFT 0 +#define I40E_GLFOC_CACHE_CTL_FD_ALLOCATION_MASK I40E_MASK(0x3, I40E_GLFOC_CACHE_CTL_FD_ALLOCATION_SHIFT) +#define I40E_GLFOC_CACHE_CTL_SCALE_FACTOR_SHIFT 2 +#define I40E_GLFOC_CACHE_CTL_SCALE_FACTOR_MASK I40E_MASK(0x3, I40E_GLFOC_CACHE_CTL_SCALE_FACTOR_SHIFT) +#define I40E_GLFOC_CACHE_CTL_DBGMUX_EN_SHIFT 4 +#define I40E_GLFOC_CACHE_CTL_DBGMUX_EN_MASK I40E_MASK(0x1, I40E_GLFOC_CACHE_CTL_DBGMUX_EN_SHIFT) +#define I40E_GLFOC_CACHE_CTL_DBGMUX_SEL_LO_SHIFT 8 +#define I40E_GLFOC_CACHE_CTL_DBGMUX_SEL_LO_MASK I40E_MASK(0x1F, I40E_GLFOC_CACHE_CTL_DBGMUX_SEL_LO_SHIFT) +#define I40E_GLFOC_CACHE_CTL_DBGMUX_SEL_HI_SHIFT 16 +#define I40E_GLFOC_CACHE_CTL_DBGMUX_SEL_HI_MASK I40E_MASK(0x1F, I40E_GLFOC_CACHE_CTL_DBGMUX_SEL_HI_SHIFT) + +#define I40E_GLFOC_FSTAT 0x000AA004 /* Reset: CORER */ +#define I40E_GLFOC_FSTAT_PE_CNT_SHIFT 0 +#define I40E_GLFOC_FSTAT_PE_CNT_MASK I40E_MASK(0x7FF, I40E_GLFOC_FSTAT_PE_CNT_SHIFT) +#define I40E_GLFOC_FSTAT_FC_CNT_SHIFT 16 +#define I40E_GLFOC_FSTAT_FC_CNT_MASK I40E_MASK(0x7FF, I40E_GLFOC_FSTAT_FC_CNT_SHIFT) + +#define I40E_GLQF_FC_INSET(_i, _j) (0x002695A0 + ((_i) * 4 + (_j) * 8)) /* _i=0...1, _j=0...3 */ /* Reset: CORER */ +#define I40E_GLQF_FC_INSET_MAX_INDEX 1 +#define I40E_GLQF_FC_INSET_INSET_SHIFT 0 +#define I40E_GLQF_FC_INSET_INSET_MASK I40E_MASK(0xFFFFFFFF, I40E_GLQF_FC_INSET_INSET_SHIFT) + +#define I40E_GLQF_FC_MSK(_i, _j) (0x002690C0 + ((_i) * 4 + (_j) * 16)) /* _i=0...3, _j=0...3 */ /* Reset: CORER */ +#define I40E_GLQF_FC_MSK_MAX_INDEX 3 +#define I40E_GLQF_FC_MSK_MASK_SHIFT 0 +#define I40E_GLQF_FC_MSK_MASK_MASK I40E_MASK(0xFFFF, I40E_GLQF_FC_MSK_MASK_SHIFT) +#define I40E_GLQF_FC_MSK_OFFSET_SHIFT 16 +#define I40E_GLQF_FC_MSK_OFFSET_MASK I40E_MASK(0x3F, I40E_GLQF_FC_MSK_OFFSET_SHIFT) + +#define I40E_GLQF_FCTYPE(_i) (0x00269520 + ((_i) * 4)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLQF_FCTYPE_MAX_INDEX 3 +#define I40E_GLQF_FCTYPE_PCTYPE_INDEX_SHIFT 0 +#define I40E_GLQF_FCTYPE_PCTYPE_INDEX_MASK I40E_MASK(0x3F, I40E_GLQF_FCTYPE_PCTYPE_INDEX_SHIFT) +#define I40E_GLQF_FCTYPE_PCTYPE_ENA_SHIFT 7 +#define I40E_GLQF_FCTYPE_PCTYPE_ENA_MASK I40E_MASK(0x1, I40E_GLQF_FCTYPE_PCTYPE_ENA_SHIFT) + +#define I40E_GLQF_FD_MSK(_i, _j) (0x00267200 + ((_i) * 4 + (_j) * 8)) /* _i=0...1, _j=0...63 */ /* Reset: CORER */ +#define I40E_GLQF_FD_MSK_MAX_INDEX 1 +#define I40E_GLQF_FD_MSK_MASK_SHIFT 0 +#define I40E_GLQF_FD_MSK_MASK_MASK I40E_MASK(0xFFFF, I40E_GLQF_FD_MSK_MASK_SHIFT) +#define I40E_GLQF_FD_MSK_OFFSET_SHIFT 16 +#define I40E_GLQF_FD_MSK_OFFSET_MASK I40E_MASK(0x3F, I40E_GLQF_FD_MSK_OFFSET_SHIFT) + +#define I40E_GLQF_FDCNT_1 0x00269BB4 /* Reset: CORER */ +#define I40E_GLQF_FDCNT_1_BUCKETCNT_SHIFT 0 +#define I40E_GLQF_FDCNT_1_BUCKETCNT_MASK I40E_MASK(0x3FFF, I40E_GLQF_FDCNT_1_BUCKETCNT_SHIFT) + +#define I40E_GLQF_FDCNT_2 0x00269BBC /* Reset: CORER */ +#define I40E_GLQF_FDCNT_2_HITSBCNT_SHIFT 0 +#define I40E_GLQF_FDCNT_2_HITSBCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLQF_FDCNT_2_HITSBCNT_SHIFT) + +#define I40E_GLQF_FDCNT_3 0x00269BC4 /* Reset: CORER */ +#define I40E_GLQF_FDCNT_3_HITLBCNT_SHIFT 0 +#define I40E_GLQF_FDCNT_3_HITLBCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLQF_FDCNT_3_HITLBCNT_SHIFT) + +#define I40E_GLQF_FDENA(_i) (0x002698A8 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */ +#define I40E_GLQF_FDENA_MAX_INDEX 1 +#define I40E_GLQF_FDENA_FD_ENA_SHIFT 0 +#define I40E_GLQF_FDENA_FD_ENA_MASK I40E_MASK(0xFFFFFFFF, I40E_GLQF_FDENA_FD_ENA_SHIFT) + +#define I40E_GLQF_HASH_INSET(_i, _j) 
(0x00267600 + ((_i) * 4 + (_j) * 8)) /* _i=0...1, _j=0...63 */ /* Reset: CORER */ +#define I40E_GLQF_HASH_INSET_MAX_INDEX 1 +#define I40E_GLQF_HASH_INSET_INSET_SHIFT 0 +#define I40E_GLQF_HASH_INSET_INSET_MASK I40E_MASK(0xFFFFFFFF, I40E_GLQF_HASH_INSET_INSET_SHIFT) + +#define I40E_GLQF_HASH_MSK(_i, _j) (0x00267A00 + ((_i) * 4 + (_j) * 8)) /* _i=0...1, _j=0...63 */ /* Reset: CORER */ +#define I40E_GLQF_HASH_MSK_MAX_INDEX 1 +#define I40E_GLQF_HASH_MSK_MASK_SHIFT 0 +#define I40E_GLQF_HASH_MSK_MASK_MASK I40E_MASK(0xFFFF, I40E_GLQF_HASH_MSK_MASK_SHIFT) +#define I40E_GLQF_HASH_MSK_OFFSET_SHIFT 16 +#define I40E_GLQF_HASH_MSK_OFFSET_MASK I40E_MASK(0x3F, I40E_GLQF_HASH_MSK_OFFSET_SHIFT) + +#define I40E_GLQF_ORT(_i) (0x00268900 + ((_i) * 4)) /* _i=0...63 */ /* Reset: CORER */ +#define I40E_GLQF_ORT_MAX_INDEX 63 +#define I40E_GLQF_ORT_PIT_INDX_SHIFT 0 +#define I40E_GLQF_ORT_PIT_INDX_MASK I40E_MASK(0x1F, I40E_GLQF_ORT_PIT_INDX_SHIFT) +#define I40E_GLQF_ORT_FIELD_CNT_SHIFT 5 +#define I40E_GLQF_ORT_FIELD_CNT_MASK I40E_MASK(0x3, I40E_GLQF_ORT_FIELD_CNT_SHIFT) +#define I40E_GLQF_ORT_FLX_PAYLOAD_SHIFT 7 +#define I40E_GLQF_ORT_FLX_PAYLOAD_MASK I40E_MASK(0x1, I40E_GLQF_ORT_FLX_PAYLOAD_SHIFT) + +#define I40E_GLQF_PE_INSET(_i, _j) (0x00269140 + ((_i) * 4 + (_j) * 8)) /* _i=0...1, _j=0...7 */ /* Reset: CORER */ +#define I40E_GLQF_PE_INSET_MAX_INDEX 1 +#define I40E_GLQF_PE_INSET_INSET_SHIFT 0 +#define I40E_GLQF_PE_INSET_INSET_MASK I40E_MASK(0xFFFFFFFF, I40E_GLQF_PE_INSET_INSET_SHIFT) + +#define I40E_GLQF_PE_MSK(_i, _j) (0x002691C0 + ((_i) * 4 + (_j) * 8)) /* _i=0...1, _j=0...7 */ /* Reset: CORER */ +#define I40E_GLQF_PE_MSK_MAX_INDEX 1 +#define I40E_GLQF_PE_MSK_MASK_SHIFT 0 +#define I40E_GLQF_PE_MSK_MASK_MASK I40E_MASK(0xFFFF, I40E_GLQF_PE_MSK_MASK_SHIFT) +#define I40E_GLQF_PE_MSK_OFFSET_SHIFT 16 +#define I40E_GLQF_PE_MSK_OFFSET_MASK I40E_MASK(0x3F, I40E_GLQF_PE_MSK_OFFSET_SHIFT) + +#define I40E_GLQF_PECNT_0 0x00269FA4 /* Reset: CORER */ +#define I40E_GLQF_PECNT_0_PROG_CNT_SHIFT 0 +#define I40E_GLQF_PECNT_0_PROG_CNT_MASK I40E_MASK(0x1F, I40E_GLQF_PECNT_0_PROG_CNT_SHIFT) + +#define I40E_GLQF_PECNT_1 0x00269FAC /* Reset: CORER */ +#define I40E_GLQF_PECNT_1_ADD_OK_SHIFT 0 +#define I40E_GLQF_PECNT_1_ADD_OK_MASK I40E_MASK(0x1F, I40E_GLQF_PECNT_1_ADD_OK_SHIFT) +#define I40E_GLQF_PECNT_1_ADD_FAIL_SHIFT 8 +#define I40E_GLQF_PECNT_1_ADD_FAIL_MASK I40E_MASK(0x1F, I40E_GLQF_PECNT_1_ADD_FAIL_SHIFT) +#define I40E_GLQF_PECNT_1_REMOVE_OK_SHIFT 16 +#define I40E_GLQF_PECNT_1_REMOVE_OK_MASK I40E_MASK(0x1F, I40E_GLQF_PECNT_1_REMOVE_OK_SHIFT) +#define I40E_GLQF_PECNT_1_REMOVE_FAIL_SHIFT 24 +#define I40E_GLQF_PECNT_1_REMOVE_FAIL_MASK I40E_MASK(0x1F, I40E_GLQF_PECNT_1_REMOVE_FAIL_SHIFT) + +#define I40E_GLQF_PETYPE(_i) (0x00269560 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_GLQF_PETYPE_MAX_INDEX 7 +#define I40E_GLQF_PETYPE_PCTYPE_INDEX_SHIFT 0 +#define I40E_GLQF_PETYPE_PCTYPE_INDEX_MASK I40E_MASK(0x3F, I40E_GLQF_PETYPE_PCTYPE_INDEX_SHIFT) +#define I40E_GLQF_PETYPE_PCTYPE_ENA_SHIFT 7 +#define I40E_GLQF_PETYPE_PCTYPE_ENA_MASK I40E_MASK(0x1, I40E_GLQF_PETYPE_PCTYPE_ENA_SHIFT) + +#define I40E_GLQF_PIT(_i) (0x00268C80 + ((_i) * 4)) /* _i=0...23 */ /* Reset: CORER */ +#define I40E_GLQF_PIT_MAX_INDEX 23 +#define I40E_GLQF_PIT_SOURCE_OFF_SHIFT 0 +#define I40E_GLQF_PIT_SOURCE_OFF_MASK I40E_MASK(0x1F, I40E_GLQF_PIT_SOURCE_OFF_SHIFT) +#define I40E_GLQF_PIT_FSIZE_SHIFT 5 +#define I40E_GLQF_PIT_FSIZE_MASK I40E_MASK(0x1F, I40E_GLQF_PIT_FSIZE_SHIFT) +#define I40E_GLQF_PIT_DEST_OFF_SHIFT 10 +#define I40E_GLQF_PIT_DEST_OFF_MASK 
I40E_MASK(0x3F, I40E_GLQF_PIT_DEST_OFF_SHIFT) + +#define I40E_GLQF_PTYPE(_i, _j) (0x00268200 + ((_i) * 4 + (_j) * 8)) /* _i=0...1, _j=0...63 */ /* Reset: CORER */ +#define I40E_GLQF_PTYPE_MAX_INDEX 1 +#define I40E_GLQF_PTYPE_PROT_LAYER_SHIFT 0 +#define I40E_GLQF_PTYPE_PROT_LAYER_MASK I40E_MASK(0xFFFFFFFF, I40E_GLQF_PTYPE_PROT_LAYER_SHIFT) + +#define I40E_GLQF_PTYPE_ENA(_i, _j) (0x00268600 + ((_i) * 4 + (_j) * 8)) /* _i=0...1, _j=0...63 */ /* Reset: CORER */ +#define I40E_GLQF_PTYPE_ENA_MAX_INDEX 1 +#define I40E_GLQF_PTYPE_ENA_PROT_LAYER_SHIFT 0 +#define I40E_GLQF_PTYPE_ENA_PROT_LAYER_MASK I40E_MASK(0xFFFFFFFF, I40E_GLQF_PTYPE_ENA_PROT_LAYER_SHIFT) + +#define I40E_PFQF_CTL_0_PMAT 0x000C0700 /* Reset: CORER */ +#define I40E_PFQF_CTL_0_PMAT_PEHSIZE_SHIFT 0 +#define I40E_PFQF_CTL_0_PMAT_PEHSIZE_MASK I40E_MASK(0x1F, I40E_PFQF_CTL_0_PMAT_PEHSIZE_SHIFT) +#define I40E_PFQF_CTL_0_PMAT_PEDSIZE_SHIFT 5 +#define I40E_PFQF_CTL_0_PMAT_PEDSIZE_MASK I40E_MASK(0x1F, I40E_PFQF_CTL_0_PMAT_PEDSIZE_SHIFT) +#define I40E_PFQF_CTL_0_PMAT_PFFCHSIZE_SHIFT 10 +#define I40E_PFQF_CTL_0_PMAT_PFFCHSIZE_MASK I40E_MASK(0xF, I40E_PFQF_CTL_0_PMAT_PFFCHSIZE_SHIFT) +#define I40E_PFQF_CTL_0_PMAT_PFFCDSIZE_SHIFT 14 +#define I40E_PFQF_CTL_0_PMAT_PFFCDSIZE_MASK I40E_MASK(0x3, I40E_PFQF_CTL_0_PMAT_PFFCDSIZE_SHIFT) +#define I40E_PFQF_CTL_0_PMAT_HASHLUTSIZE_SHIFT 16 +#define I40E_PFQF_CTL_0_PMAT_HASHLUTSIZE_MASK I40E_MASK(0x1, I40E_PFQF_CTL_0_PMAT_HASHLUTSIZE_SHIFT) +#define I40E_PFQF_CTL_0_PMAT_FD_ENA_SHIFT 17 +#define I40E_PFQF_CTL_0_PMAT_FD_ENA_MASK I40E_MASK(0x1, I40E_PFQF_CTL_0_PMAT_FD_ENA_SHIFT) +#define I40E_PFQF_CTL_0_PMAT_ETYPE_ENA_SHIFT 18 +#define I40E_PFQF_CTL_0_PMAT_ETYPE_ENA_MASK I40E_MASK(0x1, I40E_PFQF_CTL_0_PMAT_ETYPE_ENA_SHIFT) +#define I40E_PFQF_CTL_0_PMAT_MACVLAN_ENA_SHIFT 19 +#define I40E_PFQF_CTL_0_PMAT_MACVLAN_ENA_MASK I40E_MASK(0x1, I40E_PFQF_CTL_0_PMAT_MACVLAN_ENA_SHIFT) +#define I40E_PFQF_CTL_0_PMAT_VFFCHSIZE_SHIFT 20 +#define I40E_PFQF_CTL_0_PMAT_VFFCHSIZE_MASK I40E_MASK(0xF, I40E_PFQF_CTL_0_PMAT_VFFCHSIZE_SHIFT) +#define I40E_PFQF_CTL_0_PMAT_VFFCDSIZE_SHIFT 24 +#define I40E_PFQF_CTL_0_PMAT_VFFCDSIZE_MASK I40E_MASK(0x3, I40E_PFQF_CTL_0_PMAT_VFFCDSIZE_SHIFT) + +#define I40E_PFQF_CTL_0_RCU 0x00245C80 /* Reset: CORER */ +#define I40E_PFQF_CTL_0_RCU_PEHSIZE_SHIFT 0 +#define I40E_PFQF_CTL_0_RCU_PEHSIZE_MASK I40E_MASK(0x1F, I40E_PFQF_CTL_0_RCU_PEHSIZE_SHIFT) +#define I40E_PFQF_CTL_0_RCU_PEDSIZE_SHIFT 5 +#define I40E_PFQF_CTL_0_RCU_PEDSIZE_MASK I40E_MASK(0x1F, I40E_PFQF_CTL_0_RCU_PEDSIZE_SHIFT) +#define I40E_PFQF_CTL_0_RCU_PFFCHSIZE_SHIFT 10 +#define I40E_PFQF_CTL_0_RCU_PFFCHSIZE_MASK I40E_MASK(0xF, I40E_PFQF_CTL_0_RCU_PFFCHSIZE_SHIFT) +#define I40E_PFQF_CTL_0_RCU_PFFCDSIZE_SHIFT 14 +#define I40E_PFQF_CTL_0_RCU_PFFCDSIZE_MASK I40E_MASK(0x3, I40E_PFQF_CTL_0_RCU_PFFCDSIZE_SHIFT) +#define I40E_PFQF_CTL_0_RCU_HASHLUTSIZE_SHIFT 16 +#define I40E_PFQF_CTL_0_RCU_HASHLUTSIZE_MASK I40E_MASK(0x1, I40E_PFQF_CTL_0_RCU_HASHLUTSIZE_SHIFT) +#define I40E_PFQF_CTL_0_RCU_FD_ENA_SHIFT 17 +#define I40E_PFQF_CTL_0_RCU_FD_ENA_MASK I40E_MASK(0x1, I40E_PFQF_CTL_0_RCU_FD_ENA_SHIFT) +#define I40E_PFQF_CTL_0_RCU_ETYPE_ENA_SHIFT 18 +#define I40E_PFQF_CTL_0_RCU_ETYPE_ENA_MASK I40E_MASK(0x1, I40E_PFQF_CTL_0_RCU_ETYPE_ENA_SHIFT) +#define I40E_PFQF_CTL_0_RCU_MACVLAN_ENA_SHIFT 19 +#define I40E_PFQF_CTL_0_RCU_MACVLAN_ENA_MASK I40E_MASK(0x1, I40E_PFQF_CTL_0_RCU_MACVLAN_ENA_SHIFT) +#define I40E_PFQF_CTL_0_RCU_VFFCHSIZE_SHIFT 20 +#define I40E_PFQF_CTL_0_RCU_VFFCHSIZE_MASK I40E_MASK(0xF, I40E_PFQF_CTL_0_RCU_VFFCHSIZE_SHIFT) +#define 
I40E_PFQF_CTL_0_RCU_VFFCDSIZE_SHIFT 24 +#define I40E_PFQF_CTL_0_RCU_VFFCDSIZE_MASK I40E_MASK(0x3, I40E_PFQF_CTL_0_RCU_VFFCDSIZE_SHIFT) + +#define I40E_PFQF_DDPCNT 0x00246180 /* Reset: CORER */ +#define I40E_PFQF_DDPCNT_DDP_CNT_SHIFT 0 +#define I40E_PFQF_DDPCNT_DDP_CNT_MASK I40E_MASK(0x1FFF, I40E_PFQF_DDPCNT_DDP_CNT_SHIFT) + +#define I40E_PFQF_FCCNT_0 0x00245E80 /* Reset: CORER */ +#define I40E_PFQF_FCCNT_0_BUCKETCNT_SHIFT 0 +#define I40E_PFQF_FCCNT_0_BUCKETCNT_MASK I40E_MASK(0x1FFF, I40E_PFQF_FCCNT_0_BUCKETCNT_SHIFT) + +#define I40E_PFQF_FCCNT_1 0x00245F80 /* Reset: PFR */ +#define I40E_PFQF_FCCNT_1_HITSBCNT_SHIFT 0 +#define I40E_PFQF_FCCNT_1_HITSBCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_PFQF_FCCNT_1_HITSBCNT_SHIFT) + +#define I40E_PFQF_FCCNT_2 0x00246080 /* Reset: PFR */ +#define I40E_PFQF_FCCNT_2_HITLBCNT_SHIFT 0 +#define I40E_PFQF_FCCNT_2_HITLBCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_PFQF_FCCNT_2_HITLBCNT_SHIFT) + +#define I40E_PFQF_HREGION(_i) (0x00245400 + ((_i) * 128)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PFQF_HREGION_MAX_INDEX 7 +#define I40E_PFQF_HREGION_OVERRIDE_ENA_0_SHIFT 0 +#define I40E_PFQF_HREGION_OVERRIDE_ENA_0_MASK I40E_MASK(0x1, I40E_PFQF_HREGION_OVERRIDE_ENA_0_SHIFT) +#define I40E_PFQF_HREGION_REGION_0_SHIFT 1 +#define I40E_PFQF_HREGION_REGION_0_MASK I40E_MASK(0x7, I40E_PFQF_HREGION_REGION_0_SHIFT) +#define I40E_PFQF_HREGION_OVERRIDE_ENA_1_SHIFT 4 +#define I40E_PFQF_HREGION_OVERRIDE_ENA_1_MASK I40E_MASK(0x1, I40E_PFQF_HREGION_OVERRIDE_ENA_1_SHIFT) +#define I40E_PFQF_HREGION_REGION_1_SHIFT 5 +#define I40E_PFQF_HREGION_REGION_1_MASK I40E_MASK(0x7, I40E_PFQF_HREGION_REGION_1_SHIFT) +#define I40E_PFQF_HREGION_OVERRIDE_ENA_2_SHIFT 8 +#define I40E_PFQF_HREGION_OVERRIDE_ENA_2_MASK I40E_MASK(0x1, I40E_PFQF_HREGION_OVERRIDE_ENA_2_SHIFT) +#define I40E_PFQF_HREGION_REGION_2_SHIFT 9 +#define I40E_PFQF_HREGION_REGION_2_MASK I40E_MASK(0x7, I40E_PFQF_HREGION_REGION_2_SHIFT) +#define I40E_PFQF_HREGION_OVERRIDE_ENA_3_SHIFT 12 +#define I40E_PFQF_HREGION_OVERRIDE_ENA_3_MASK I40E_MASK(0x1, I40E_PFQF_HREGION_OVERRIDE_ENA_3_SHIFT) +#define I40E_PFQF_HREGION_REGION_3_SHIFT 13 +#define I40E_PFQF_HREGION_REGION_3_MASK I40E_MASK(0x7, I40E_PFQF_HREGION_REGION_3_SHIFT) +#define I40E_PFQF_HREGION_OVERRIDE_ENA_4_SHIFT 16 +#define I40E_PFQF_HREGION_OVERRIDE_ENA_4_MASK I40E_MASK(0x1, I40E_PFQF_HREGION_OVERRIDE_ENA_4_SHIFT) +#define I40E_PFQF_HREGION_REGION_4_SHIFT 17 +#define I40E_PFQF_HREGION_REGION_4_MASK I40E_MASK(0x7, I40E_PFQF_HREGION_REGION_4_SHIFT) +#define I40E_PFQF_HREGION_OVERRIDE_ENA_5_SHIFT 20 +#define I40E_PFQF_HREGION_OVERRIDE_ENA_5_MASK I40E_MASK(0x1, I40E_PFQF_HREGION_OVERRIDE_ENA_5_SHIFT) +#define I40E_PFQF_HREGION_REGION_5_SHIFT 21 +#define I40E_PFQF_HREGION_REGION_5_MASK I40E_MASK(0x7, I40E_PFQF_HREGION_REGION_5_SHIFT) +#define I40E_PFQF_HREGION_OVERRIDE_ENA_6_SHIFT 24 +#define I40E_PFQF_HREGION_OVERRIDE_ENA_6_MASK I40E_MASK(0x1, I40E_PFQF_HREGION_OVERRIDE_ENA_6_SHIFT) +#define I40E_PFQF_HREGION_REGION_6_SHIFT 25 +#define I40E_PFQF_HREGION_REGION_6_MASK I40E_MASK(0x7, I40E_PFQF_HREGION_REGION_6_SHIFT) +#define I40E_PFQF_HREGION_OVERRIDE_ENA_7_SHIFT 28 +#define I40E_PFQF_HREGION_OVERRIDE_ENA_7_MASK I40E_MASK(0x1, I40E_PFQF_HREGION_OVERRIDE_ENA_7_SHIFT) +#define I40E_PFQF_HREGION_REGION_7_SHIFT 29 +#define I40E_PFQF_HREGION_REGION_7_MASK I40E_MASK(0x7, I40E_PFQF_HREGION_REGION_7_SHIFT) + +#define I40E_PFQF_PECNT_0 0x00246480 /* Reset: CORER */ +#define I40E_PFQF_PECNT_0_BUCKETCNT_SHIFT 0 +#define I40E_PFQF_PECNT_0_BUCKETCNT_MASK I40E_MASK(0x7FFFF, I40E_PFQF_PECNT_0_BUCKETCNT_SHIFT) + 
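For reference, a minimal sketch (not part of the patch) of how the SHIFT/MASK pairs above are consumed, assuming I40E_MASK(mask, shift) expands to (mask << shift) as in the existing i40e base code; the helper names below are illustrative only and rely on the I40E_PFQF_PECNT_0 definitions just above:

#include <stdint.h>

/* Read the BUCKETCNT field out of a raw I40E_PFQF_PECNT_0 register value:
 * isolate the field with its MASK, then shift it down to bit 0. */
static inline uint32_t
example_get_pecnt0_bucketcnt(uint32_t reg_val)
{
	return (reg_val & I40E_PFQF_PECNT_0_BUCKETCNT_MASK) >>
	       I40E_PFQF_PECNT_0_BUCKETCNT_SHIFT;
}

/* Write a new BUCKETCNT value: clear the old field, then insert the new
 * value shifted into position and bounded by the field MASK. */
static inline uint32_t
example_set_pecnt0_bucketcnt(uint32_t reg_val, uint32_t cnt)
{
	reg_val &= ~I40E_PFQF_PECNT_0_BUCKETCNT_MASK;
	reg_val |= (cnt << I40E_PFQF_PECNT_0_BUCKETCNT_SHIFT) &
		   I40E_PFQF_PECNT_0_BUCKETCNT_MASK;
	return reg_val;
}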
+#define I40E_PFQF_PECNT_1 0x00246580 /* Reset: PFR */ +#define I40E_PFQF_PECNT_1_HITSBCNT_SHIFT 0 +#define I40E_PFQF_PECNT_1_HITSBCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_PFQF_PECNT_1_HITSBCNT_SHIFT) + +#define I40E_PFQF_PECNT_2 0x00246680 /* Reset: PFR */ +#define I40E_PFQF_PECNT_2_HITLBCNT_SHIFT 0 +#define I40E_PFQF_PECNT_2_HITLBCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_PFQF_PECNT_2_HITLBCNT_SHIFT) + +#define I40E_PFQF_PECNT_CNTX 0x0026CA80 /* Reset: CORER */ +#define I40E_PFQF_PECNT_CNTX_FLTCNT_SHIFT 0 +#define I40E_PFQF_PECNT_CNTX_FLTCNT_MASK I40E_MASK(0x7FFFF, I40E_PFQF_PECNT_CNTX_FLTCNT_SHIFT) + +#define I40E_PRTQF_FD_INSET(_i, _j) (0x00250000 + ((_i) * 64 + (_j) * 32)) /* _i=0...63, _j=0...1 */ /* Reset: CORER */ +#define I40E_PRTQF_FD_INSET_MAX_INDEX 63 +#define I40E_PRTQF_FD_INSET_INSET_SHIFT 0 +#define I40E_PRTQF_FD_INSET_INSET_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTQF_FD_INSET_INSET_SHIFT) + +#define I40E_VPQF_CTL_RCU(_VF) (0x00231C00 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_VPQF_CTL_RCU_MAX_INDEX 127 +#define I40E_VPQF_CTL_RCU_PEHSIZE_SHIFT 0 +#define I40E_VPQF_CTL_RCU_PEHSIZE_MASK I40E_MASK(0x1F, I40E_VPQF_CTL_RCU_PEHSIZE_SHIFT) +#define I40E_VPQF_CTL_RCU_PEDSIZE_SHIFT 5 +#define I40E_VPQF_CTL_RCU_PEDSIZE_MASK I40E_MASK(0x1F, I40E_VPQF_CTL_RCU_PEDSIZE_SHIFT) +#define I40E_VPQF_CTL_RCU_FCHSIZE_SHIFT 10 +#define I40E_VPQF_CTL_RCU_FCHSIZE_MASK I40E_MASK(0xF, I40E_VPQF_CTL_RCU_FCHSIZE_SHIFT) +#define I40E_VPQF_CTL_RCU_FCDSIZE_SHIFT 14 +#define I40E_VPQF_CTL_RCU_FCDSIZE_MASK I40E_MASK(0x3, I40E_VPQF_CTL_RCU_FCDSIZE_SHIFT) + +#define I40E_VPQF_DDPCNT1(_VF) (0x00231400 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_VPQF_DDPCNT1_MAX_INDEX 127 +#define I40E_VPQF_DDPCNT1_DDP_CNT_SHIFT 0 +#define I40E_VPQF_DDPCNT1_DDP_CNT_MASK I40E_MASK(0x1FFF, I40E_VPQF_DDPCNT1_DDP_CNT_SHIFT) + +#define I40E_VPQF_FCCNT_0(_VF) (0x0026A400 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_VPQF_FCCNT_0_MAX_INDEX 127 +#define I40E_VPQF_FCCNT_0_BUCKETCNT_SHIFT 0 +#define I40E_VPQF_FCCNT_0_BUCKETCNT_MASK I40E_MASK(0x1FFF, I40E_VPQF_FCCNT_0_BUCKETCNT_SHIFT) + +#define I40E_VPQF_PECNT_0(_VF) (0x0026B400 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_VPQF_PECNT_0_MAX_INDEX 127 +#define I40E_VPQF_PECNT_0_BUCKETCNT_SHIFT 0 +#define I40E_VPQF_PECNT_0_BUCKETCNT_MASK I40E_MASK(0x7FFFF, I40E_VPQF_PECNT_0_BUCKETCNT_SHIFT) + +#define I40E_VPQF_PECNT_1(_VF) (0x0026BC00 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_VPQF_PECNT_1_MAX_INDEX 127 +#define I40E_VPQF_PECNT_1_FLTCNT_SHIFT 0 +#define I40E_VPQF_PECNT_1_FLTCNT_MASK I40E_MASK(0x7FFFF, I40E_VPQF_PECNT_1_FLTCNT_SHIFT) + +/* PF - Statistics Registers */ + +#define I40E_GLPRT_AORCH(_i) (0x00300A44 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_AORCH_MAX_INDEX 3 +#define I40E_GLPRT_AORCH_AORCH_SHIFT 0 +#define I40E_GLPRT_AORCH_AORCH_MASK I40E_MASK(0xFFFF, I40E_GLPRT_AORCH_AORCH_SHIFT) + +#define I40E_GLPRT_AORCL(_i) (0x00300A40 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_AORCL_MAX_INDEX 3 +#define I40E_GLPRT_AORCL_VGORC_SHIFT 0 +#define I40E_GLPRT_AORCL_VGORC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_AORCL_VGORC_SHIFT) + +#define I40E_GLPRT_ERRBC(_i) (0x003000C0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_ERRBC_MAX_INDEX 3 +#define I40E_GLPRT_ERRBC_ERRBC_SHIFT 0 +#define I40E_GLPRT_ERRBC_ERRBC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_ERRBC_ERRBC_SHIFT) + +#define I40E_GLPRT_MSPDC(_i) (0x00300060 + ((_i) * 8)) /* _i=0...3 */ /* 
Reset: CORER */ +#define I40E_GLPRT_MSPDC_MAX_INDEX 3 +#define I40E_GLPRT_MSPDC_MSPDC_SHIFT 0 +#define I40E_GLPRT_MSPDC_MSPDC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_MSPDC_MSPDC_SHIFT) + +#define I40E_GLPRT_STDC(_i) (0x00300640 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_STDC_MAX_INDEX 3 +#define I40E_GLPRT_STDC_STDC_SHIFT 0 +#define I40E_GLPRT_STDC_STDC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_STDC_STDC_SHIFT) + +/* PF - Switch Registers */ + +#define I40E_EMP_MTG_FLU_ICH 0x00269BE4 /* Reset: CORER */ +#define I40E_EMP_MTG_FLU_ICH_PROTOCOL_ID_SHIFT 0 +#define I40E_EMP_MTG_FLU_ICH_PROTOCOL_ID_MASK I40E_MASK(0x3F, I40E_EMP_MTG_FLU_ICH_PROTOCOL_ID_SHIFT) +#define I40E_EMP_MTG_FLU_ICH_IGNORE_PROTOCOL_SHIFT 6 +#define I40E_EMP_MTG_FLU_ICH_IGNORE_PROTOCOL_MASK I40E_MASK(0x1, I40E_EMP_MTG_FLU_ICH_IGNORE_PROTOCOL_SHIFT) +#define I40E_EMP_MTG_FLU_ICH_USE_MAN_SHIFT 7 +#define I40E_EMP_MTG_FLU_ICH_USE_MAN_MASK I40E_MASK(0x1, I40E_EMP_MTG_FLU_ICH_USE_MAN_SHIFT) + +#define I40E_EMP_MTG_FLU_ICL 0x00269BDC /* Reset: CORER */ +#define I40E_EMP_MTG_FLU_ICL_W0_OFFSET_SHIFT 0 +#define I40E_EMP_MTG_FLU_ICL_W0_OFFSET_MASK I40E_MASK(0x3F, I40E_EMP_MTG_FLU_ICL_W0_OFFSET_SHIFT) +#define I40E_EMP_MTG_FLU_ICL_W0_STATUS_SHIFT 6 +#define I40E_EMP_MTG_FLU_ICL_W0_STATUS_MASK I40E_MASK(0x1, I40E_EMP_MTG_FLU_ICL_W0_STATUS_SHIFT) +#define I40E_EMP_MTG_FLU_ICL_W1_OFFSET_SHIFT 8 +#define I40E_EMP_MTG_FLU_ICL_W1_OFFSET_MASK I40E_MASK(0x3F, I40E_EMP_MTG_FLU_ICL_W1_OFFSET_SHIFT) +#define I40E_EMP_MTG_FLU_ICL_W1_STATUS_SHIFT 14 +#define I40E_EMP_MTG_FLU_ICL_W1_STATUS_MASK I40E_MASK(0x1, I40E_EMP_MTG_FLU_ICL_W1_STATUS_SHIFT) +#define I40E_EMP_MTG_FLU_ICL_W2_OFFSET_SHIFT 16 +#define I40E_EMP_MTG_FLU_ICL_W2_OFFSET_MASK I40E_MASK(0x3F, I40E_EMP_MTG_FLU_ICL_W2_OFFSET_SHIFT) +#define I40E_EMP_MTG_FLU_ICL_W2_STATUS_SHIFT 22 +#define I40E_EMP_MTG_FLU_ICL_W2_STATUS_MASK I40E_MASK(0x1, I40E_EMP_MTG_FLU_ICL_W2_STATUS_SHIFT) +#define I40E_EMP_MTG_FLU_ICL_ETYPE_ENABLE_SHIFT 28 +#define I40E_EMP_MTG_FLU_ICL_ETYPE_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_MTG_FLU_ICL_ETYPE_ENABLE_SHIFT) +#define I40E_EMP_MTG_FLU_ICL_IGNORE_PHASE_SHIFT 29 +#define I40E_EMP_MTG_FLU_ICL_IGNORE_PHASE_MASK I40E_MASK(0x1, I40E_EMP_MTG_FLU_ICL_IGNORE_PHASE_SHIFT) +#define I40E_EMP_MTG_FLU_ICL_EGRESS_SHIFT 30 +#define I40E_EMP_MTG_FLU_ICL_EGRESS_MASK I40E_MASK(0x1, I40E_EMP_MTG_FLU_ICL_EGRESS_SHIFT) +#define I40E_EMP_MTG_FLU_ICL_PORT_ENABLE_SHIFT 31 +#define I40E_EMP_MTG_FLU_ICL_PORT_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_MTG_FLU_ICL_PORT_ENABLE_SHIFT) + +#define I40E_EMP_SWT_CCTRL 0x00269770 /* Reset: POR */ +#define I40E_EMP_SWT_CCTRL_LLVSI_SHIFT 10 +#define I40E_EMP_SWT_CCTRL_LLVSI_MASK I40E_MASK(0x3FF, I40E_EMP_SWT_CCTRL_LLVSI_SHIFT) +#define I40E_EMP_SWT_CCTRL_PROXYVSI_SHIFT 20 +#define I40E_EMP_SWT_CCTRL_PROXYVSI_MASK I40E_MASK(0x3FF, I40E_EMP_SWT_CCTRL_PROXYVSI_SHIFT) + +#define I40E_EMP_SWT_CGEN 0x0006D000 /* Reset: POR */ +#define I40E_EMP_SWT_CGEN_GLEN_SHIFT 0 +#define I40E_EMP_SWT_CGEN_GLEN_MASK I40E_MASK(0x1, I40E_EMP_SWT_CGEN_GLEN_SHIFT) + +#define I40E_EMP_SWT_CLLE(_i) (0x00269790 + ((_i) * 4)) /* _i=0...3 */ /* Reset: POR */ +#define I40E_EMP_SWT_CLLE_MAX_INDEX 3 +#define I40E_EMP_SWT_CLLE_TAG_SHIFT 0 +#define I40E_EMP_SWT_CLLE_TAG_MASK I40E_MASK(0xFFFF, I40E_EMP_SWT_CLLE_TAG_SHIFT) +#define I40E_EMP_SWT_CLLE_IGNORE_TAG_SHIFT 16 +#define I40E_EMP_SWT_CLLE_IGNORE_TAG_MASK I40E_MASK(0x1, I40E_EMP_SWT_CLLE_IGNORE_TAG_SHIFT) +#define I40E_EMP_SWT_CLLE_PORT_NUMBER_SHIFT 17 +#define I40E_EMP_SWT_CLLE_PORT_NUMBER_MASK I40E_MASK(0x3, 
I40E_EMP_SWT_CLLE_PORT_NUMBER_SHIFT) +#define I40E_EMP_SWT_CLLE_ENABLE_SHIFT 31 +#define I40E_EMP_SWT_CLLE_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_CLLE_ENABLE_SHIFT) + +#define I40E_EMP_SWT_CMASK 0x0006D180 /* Reset: POR */ +#define I40E_EMP_SWT_CMASK_UNICASTTAGMASK_SHIFT 0 +#define I40E_EMP_SWT_CMASK_UNICASTTAGMASK_MASK I40E_MASK(0xFFFF, I40E_EMP_SWT_CMASK_UNICASTTAGMASK_SHIFT) +#define I40E_EMP_SWT_CMASK_MULTICASTTAGMASK_SHIFT 16 +#define I40E_EMP_SWT_CMASK_MULTICASTTAGMASK_MASK I40E_MASK(0xFFFF, I40E_EMP_SWT_CMASK_MULTICASTTAGMASK_SHIFT) + +#define I40E_EMP_SWT_CMTTD(_i) (0x0006E000 + ((_i) * 4)) /* _i=0...511 */ /* Reset: POR */ +#define I40E_EMP_SWT_CMTTD_MAX_INDEX 511 +#define I40E_EMP_SWT_CMTTD_PFLIST_SHIFT 0 +#define I40E_EMP_SWT_CMTTD_PFLIST_MASK I40E_MASK(0xFFFF, I40E_EMP_SWT_CMTTD_PFLIST_SHIFT) + +#define I40E_EMP_SWT_CMTTL(_i) (0x0006D800 + ((_i) * 4)) /* _i=0...511 */ /* Reset: POR */ +#define I40E_EMP_SWT_CMTTL_MAX_INDEX 511 +#define I40E_EMP_SWT_CMTTL_MTAG_SHIFT 0 +#define I40E_EMP_SWT_CMTTL_MTAG_MASK I40E_MASK(0xFFFF, I40E_EMP_SWT_CMTTL_MTAG_SHIFT) +#define I40E_EMP_SWT_CMTTL_PORT_SHIFT 16 +#define I40E_EMP_SWT_CMTTL_PORT_MASK I40E_MASK(0x3, I40E_EMP_SWT_CMTTL_PORT_SHIFT) +#define I40E_EMP_SWT_CMTTL_ENABLE_SHIFT 18 +#define I40E_EMP_SWT_CMTTL_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_CMTTL_ENABLE_SHIFT) + +#define I40E_EMP_SWT_COFFSET 0x0006D200 /* Reset: POR */ +#define I40E_EMP_SWT_COFFSET_UNICASTTAGOFFSET_SHIFT 0 +#define I40E_EMP_SWT_COFFSET_UNICASTTAGOFFSET_MASK I40E_MASK(0x1F, I40E_EMP_SWT_COFFSET_UNICASTTAGOFFSET_SHIFT) +#define I40E_EMP_SWT_COFFSET_RESERVED_2_SHIFT 5 +#define I40E_EMP_SWT_COFFSET_RESERVED_2_MASK I40E_MASK(0x7, I40E_EMP_SWT_COFFSET_RESERVED_2_SHIFT) +#define I40E_EMP_SWT_COFFSET_MULTICASTTAGOFFSET_SHIFT 8 +#define I40E_EMP_SWT_COFFSET_MULTICASTTAGOFFSET_MASK I40E_MASK(0x1F, I40E_EMP_SWT_COFFSET_MULTICASTTAGOFFSET_SHIFT) + +#define I40E_EMP_SWT_CPFE(_i) (0x001C09E0 + ((_i) * 4)) /* _i=0...15 */ /* Reset: POR */ +#define I40E_EMP_SWT_CPFE_MAX_INDEX 15 +#define I40E_EMP_SWT_CPFE_TAG_SHIFT 0 +#define I40E_EMP_SWT_CPFE_TAG_MASK I40E_MASK(0xFFFF, I40E_EMP_SWT_CPFE_TAG_SHIFT) +#define I40E_EMP_SWT_CPFE_IGNORE_TAG_SHIFT 16 +#define I40E_EMP_SWT_CPFE_IGNORE_TAG_MASK I40E_MASK(0x1, I40E_EMP_SWT_CPFE_IGNORE_TAG_SHIFT) +#define I40E_EMP_SWT_CPFE_PORT_NUMBER_SHIFT 17 +#define I40E_EMP_SWT_CPFE_PORT_NUMBER_MASK I40E_MASK(0x3, I40E_EMP_SWT_CPFE_PORT_NUMBER_SHIFT) +#define I40E_EMP_SWT_CPFE_ENABLE_SHIFT 31 +#define I40E_EMP_SWT_CPFE_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_CPFE_ENABLE_SHIFT) + +#define I40E_EMP_SWT_CPFE_RCU(_i) (0x00269040 + ((_i) * 4)) /* _i=0...15 */ /* Reset: POR */ +#define I40E_EMP_SWT_CPFE_RCU_MAX_INDEX 15 +#define I40E_EMP_SWT_CPFE_RCU_TAG_SHIFT 0 +#define I40E_EMP_SWT_CPFE_RCU_TAG_MASK I40E_MASK(0xFFFF, I40E_EMP_SWT_CPFE_RCU_TAG_SHIFT) +#define I40E_EMP_SWT_CPFE_RCU_IGNORE_TAG_SHIFT 16 +#define I40E_EMP_SWT_CPFE_RCU_IGNORE_TAG_MASK I40E_MASK(0x1, I40E_EMP_SWT_CPFE_RCU_IGNORE_TAG_SHIFT) +#define I40E_EMP_SWT_CPFE_RCU_PORT_NUMBER_SHIFT 17 +#define I40E_EMP_SWT_CPFE_RCU_PORT_NUMBER_MASK I40E_MASK(0x3, I40E_EMP_SWT_CPFE_RCU_PORT_NUMBER_SHIFT) +#define I40E_EMP_SWT_CPFE_RCU_ENABLE_SHIFT 31 +#define I40E_EMP_SWT_CPFE_RCU_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_CPFE_RCU_ENABLE_SHIFT) + +#define I40E_EMP_SWT_CPFE_WUC(_i) (0x0006D080 + ((_i) * 4)) /* _i=0...15 */ /* Reset: POR */ +#define I40E_EMP_SWT_CPFE_WUC_MAX_INDEX 15 +#define I40E_EMP_SWT_CPFE_WUC_TAG_SHIFT 0 +#define I40E_EMP_SWT_CPFE_WUC_TAG_MASK I40E_MASK(0xFFFF, 
I40E_EMP_SWT_CPFE_WUC_TAG_SHIFT) +#define I40E_EMP_SWT_CPFE_WUC_IGNORE_TAG_SHIFT 16 +#define I40E_EMP_SWT_CPFE_WUC_IGNORE_TAG_MASK I40E_MASK(0x1, I40E_EMP_SWT_CPFE_WUC_IGNORE_TAG_SHIFT) +#define I40E_EMP_SWT_CPFE_WUC_PORT_NUMBER_SHIFT 17 +#define I40E_EMP_SWT_CPFE_WUC_PORT_NUMBER_MASK I40E_MASK(0x3, I40E_EMP_SWT_CPFE_WUC_PORT_NUMBER_SHIFT) +#define I40E_EMP_SWT_CPFE_WUC_ENABLE_SHIFT 31 +#define I40E_EMP_SWT_CPFE_WUC_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_CPFE_WUC_ENABLE_SHIFT) + +#define I40E_EMP_SWT_CPTE(_i) (0x002697B0 + ((_i) * 4)) /* _i=0...3 */ /* Reset: POR */ +#define I40E_EMP_SWT_CPTE_MAX_INDEX 3 +#define I40E_EMP_SWT_CPTE_TAG_SHIFT 0 +#define I40E_EMP_SWT_CPTE_TAG_MASK I40E_MASK(0xFFFF, I40E_EMP_SWT_CPTE_TAG_SHIFT) +#define I40E_EMP_SWT_CPTE_IGNORE_TAG_SHIFT 16 +#define I40E_EMP_SWT_CPTE_IGNORE_TAG_MASK I40E_MASK(0x1, I40E_EMP_SWT_CPTE_IGNORE_TAG_SHIFT) +#define I40E_EMP_SWT_CPTE_PORT_NUMBER_SHIFT 17 +#define I40E_EMP_SWT_CPTE_PORT_NUMBER_MASK I40E_MASK(0x3, I40E_EMP_SWT_CPTE_PORT_NUMBER_SHIFT) +#define I40E_EMP_SWT_CPTE_ENABLE_SHIFT 31 +#define I40E_EMP_SWT_CPTE_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_CPTE_ENABLE_SHIFT) + +#define I40E_EMP_SWT_CPTE2(_i) (0x002697D0 + ((_i) * 4)) /* _i=0...3 */ /* Reset: POR */ +#define I40E_EMP_SWT_CPTE2_MAX_INDEX 3 +#define I40E_EMP_SWT_CPTE2_TAG_SHIFT 0 +#define I40E_EMP_SWT_CPTE2_TAG_MASK I40E_MASK(0xFFFF, I40E_EMP_SWT_CPTE2_TAG_SHIFT) +#define I40E_EMP_SWT_CPTE2_IGNORE_TAG_SHIFT 16 +#define I40E_EMP_SWT_CPTE2_IGNORE_TAG_MASK I40E_MASK(0x1, I40E_EMP_SWT_CPTE2_IGNORE_TAG_SHIFT) +#define I40E_EMP_SWT_CPTE2_PORT_NUMBER_SHIFT 17 +#define I40E_EMP_SWT_CPTE2_PORT_NUMBER_MASK I40E_MASK(0x3, I40E_EMP_SWT_CPTE2_PORT_NUMBER_SHIFT) +#define I40E_EMP_SWT_CPTE2_ENABLE_SHIFT 31 +#define I40E_EMP_SWT_CPTE2_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_CPTE2_ENABLE_SHIFT) + +#define I40E_EMP_SWT_CTAG 0x00269B64 /* Reset: POR */ +#define I40E_EMP_SWT_CTAG_TAG_INDEX_SHIFT 0 +#define I40E_EMP_SWT_CTAG_TAG_INDEX_MASK I40E_MASK(0x3F, I40E_EMP_SWT_CTAG_TAG_INDEX_SHIFT) +#define I40E_EMP_SWT_CTAG_TAG_MASK_SHIFT 10 +#define I40E_EMP_SWT_CTAG_TAG_MASK_MASK I40E_MASK(0xFFFF, I40E_EMP_SWT_CTAG_TAG_MASK_SHIFT) + +#define I40E_EMP_SWT_CUPD 0x0006D100 /* Reset: POR */ +#define I40E_EMP_SWT_CUPD_UNTAGGED_PORT0_PF_SHIFT 0 +#define I40E_EMP_SWT_CUPD_UNTAGGED_PORT0_PF_MASK I40E_MASK(0xF, I40E_EMP_SWT_CUPD_UNTAGGED_PORT0_PF_SHIFT) +#define I40E_EMP_SWT_CUPD_UNTAGGED_PORT1_PF_SHIFT 4 +#define I40E_EMP_SWT_CUPD_UNTAGGED_PORT1_PF_MASK I40E_MASK(0xF, I40E_EMP_SWT_CUPD_UNTAGGED_PORT1_PF_SHIFT) +#define I40E_EMP_SWT_CUPD_UNTAGGED_PORT2_PF_SHIFT 8 +#define I40E_EMP_SWT_CUPD_UNTAGGED_PORT2_PF_MASK I40E_MASK(0xF, I40E_EMP_SWT_CUPD_UNTAGGED_PORT2_PF_SHIFT) +#define I40E_EMP_SWT_CUPD_UNTAGGED_PORT3_PF_SHIFT 12 +#define I40E_EMP_SWT_CUPD_UNTAGGED_PORT3_PF_MASK I40E_MASK(0xF, I40E_EMP_SWT_CUPD_UNTAGGED_PORT3_PF_SHIFT) +#define I40E_EMP_SWT_CUPD_ACCEPTUNTAGGEDPORT0_SHIFT 26 +#define I40E_EMP_SWT_CUPD_ACCEPTUNTAGGEDPORT0_MASK I40E_MASK(0x1, I40E_EMP_SWT_CUPD_ACCEPTUNTAGGEDPORT0_SHIFT) +#define I40E_EMP_SWT_CUPD_ACCEPTUNTAGGEDPORT1_SHIFT 27 +#define I40E_EMP_SWT_CUPD_ACCEPTUNTAGGEDPORT1_MASK I40E_MASK(0x1, I40E_EMP_SWT_CUPD_ACCEPTUNTAGGEDPORT1_SHIFT) +#define I40E_EMP_SWT_CUPD_ACCEPTUNTAGGEDPORT2_SHIFT 28 +#define I40E_EMP_SWT_CUPD_ACCEPTUNTAGGEDPORT2_MASK I40E_MASK(0x1, I40E_EMP_SWT_CUPD_ACCEPTUNTAGGEDPORT2_SHIFT) +#define I40E_EMP_SWT_CUPD_ACCEPTUNTAGGEDPORT3_SHIFT 29 +#define I40E_EMP_SWT_CUPD_ACCEPTUNTAGGEDPORT3_MASK I40E_MASK(0x1, I40E_EMP_SWT_CUPD_ACCEPTUNTAGGEDPORT3_SHIFT) +#define 
I40E_EMP_SWT_CUPD_ACCEPTUNMATCHEDUCTST_SHIFT 30 +#define I40E_EMP_SWT_CUPD_ACCEPTUNMATCHEDUCTST_MASK I40E_MASK(0x1, I40E_EMP_SWT_CUPD_ACCEPTUNMATCHEDUCTST_SHIFT) +#define I40E_EMP_SWT_CUPD_ACCEPTUNMATCHEDMCTST_SHIFT 31 +#define I40E_EMP_SWT_CUPD_ACCEPTUNMATCHEDMCTST_MASK I40E_MASK(0x1, I40E_EMP_SWT_CUPD_ACCEPTUNMATCHEDMCTST_SHIFT) + +#define I40E_EMP_SWT_ETHMATCH 0x00269B6C /* Reset: POR */ +#define I40E_EMP_SWT_ETHMATCH_ETHMATCH_SHIFT 0 +#define I40E_EMP_SWT_ETHMATCH_ETHMATCH_MASK I40E_MASK(0xFFFF, I40E_EMP_SWT_ETHMATCH_ETHMATCH_SHIFT) + +#define I40E_EMP_SWT_FLU_L1_ICH_PHASE0(_i) (0x002695E0 + ((_i) * 4)) /* _i=0...4 */ /* Reset: CORER */ +#define I40E_EMP_SWT_FLU_L1_ICH_PHASE0_MAX_INDEX 4 +#define I40E_EMP_SWT_FLU_L1_ICH_PHASE0_PROTOCOL_ID_SHIFT 0 +#define I40E_EMP_SWT_FLU_L1_ICH_PHASE0_PROTOCOL_ID_MASK I40E_MASK(0x3F, I40E_EMP_SWT_FLU_L1_ICH_PHASE0_PROTOCOL_ID_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICH_PHASE0_IGNORE_PROTOCOL_SHIFT 6 +#define I40E_EMP_SWT_FLU_L1_ICH_PHASE0_IGNORE_PROTOCOL_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICH_PHASE0_IGNORE_PROTOCOL_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICH_PHASE0_USE_MAN_SHIFT 7 +#define I40E_EMP_SWT_FLU_L1_ICH_PHASE0_USE_MAN_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICH_PHASE0_USE_MAN_SHIFT) + +#define I40E_EMP_SWT_FLU_L1_ICH_PHASE1(_i) (0x00269660 + ((_i) * 4)) /* _i=0...4 */ /* Reset: CORER */ +#define I40E_EMP_SWT_FLU_L1_ICH_PHASE1_MAX_INDEX 4 +#define I40E_EMP_SWT_FLU_L1_ICH_PHASE1_PROTOCOL_ID_SHIFT 0 +#define I40E_EMP_SWT_FLU_L1_ICH_PHASE1_PROTOCOL_ID_MASK I40E_MASK(0x3F, I40E_EMP_SWT_FLU_L1_ICH_PHASE1_PROTOCOL_ID_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICH_PHASE1_IGNORE_PROTOCOL_SHIFT 6 +#define I40E_EMP_SWT_FLU_L1_ICH_PHASE1_IGNORE_PROTOCOL_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICH_PHASE1_IGNORE_PROTOCOL_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICH_PHASE1_USE_MAN_SHIFT 7 +#define I40E_EMP_SWT_FLU_L1_ICH_PHASE1_USE_MAN_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICH_PHASE1_USE_MAN_SHIFT) + +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0(_i) (0x00269620 + ((_i) * 4)) /* _i=0...6 */ /* Reset: CORER */ +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_MAX_INDEX 6 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W0_OFFSET_SHIFT 0 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W0_OFFSET_MASK I40E_MASK(0x3F, I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W0_OFFSET_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W0_STATUS_SHIFT 6 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W0_STATUS_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W0_STATUS_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W1_OFFSET_SHIFT 8 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W1_OFFSET_MASK I40E_MASK(0x3F, I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W1_OFFSET_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W1_STATUS_SHIFT 14 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W1_STATUS_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W1_STATUS_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W2_OFFSET_SHIFT 16 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W2_OFFSET_MASK I40E_MASK(0x3F, I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W2_OFFSET_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W2_STATUS_SHIFT 22 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W2_STATUS_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICL_PHASE0_W2_STATUS_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_ETYPE_ENABLE_SHIFT 28 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_ETYPE_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICL_PHASE0_ETYPE_ENABLE_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_IGNORE_PHASE_SHIFT 29 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_IGNORE_PHASE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICL_PHASE0_IGNORE_PHASE_SHIFT) +#define 
I40E_EMP_SWT_FLU_L1_ICL_PHASE0_EGRESS_SHIFT 30 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_EGRESS_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICL_PHASE0_EGRESS_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_PORT_ENABLE_SHIFT 31 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE0_PORT_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICL_PHASE0_PORT_ENABLE_SHIFT) + +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1(_i) (0x002696A0 + ((_i) * 4)) /* _i=0...6 */ /* Reset: CORER */ +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_MAX_INDEX 6 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W0_OFFSET_SHIFT 0 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W0_OFFSET_MASK I40E_MASK(0x3F, I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W0_OFFSET_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W0_STATUS_SHIFT 6 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W0_STATUS_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W0_STATUS_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W1_OFFSET_SHIFT 8 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W1_OFFSET_MASK I40E_MASK(0x3F, I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W1_OFFSET_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W1_STATUS_SHIFT 14 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W1_STATUS_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W1_STATUS_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W2_OFFSET_SHIFT 16 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W2_OFFSET_MASK I40E_MASK(0x3F, I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W2_OFFSET_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W2_STATUS_SHIFT 22 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W2_STATUS_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICL_PHASE1_W2_STATUS_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_ETYPE_ENABLE_SHIFT 28 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_ETYPE_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICL_PHASE1_ETYPE_ENABLE_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_IGNORE_PHASE_SHIFT 29 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_IGNORE_PHASE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICL_PHASE1_IGNORE_PHASE_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_EGRESS_SHIFT 30 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_EGRESS_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICL_PHASE1_EGRESS_SHIFT) +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_PORT_ENABLE_SHIFT 31 +#define I40E_EMP_SWT_FLU_L1_ICL_PHASE1_PORT_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L1_ICL_PHASE1_PORT_ENABLE_SHIFT) + +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0(_i) (0x002696E0 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_MAX_INDEX 7 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD0_L1_OBJECT_TYPE_SHIFT 0 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD0_L1_OBJECT_TYPE_MASK I40E_MASK(0xF, I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD0_L1_OBJECT_TYPE_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD0_ENABLE_SHIFT 4 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD0_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD0_ENABLE_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD1_L1_OBJECT_TYPE_SHIFT 5 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD1_L1_OBJECT_TYPE_MASK I40E_MASK(0xF, I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD1_L1_OBJECT_TYPE_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD1_ENABLE_SHIFT 9 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD1_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD1_ENABLE_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD2_L1_OBJECT_TYPE_SHIFT 10 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD2_L1_OBJECT_TYPE_MASK I40E_MASK(0xF, I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD2_L1_OBJECT_TYPE_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD2_ENABLE_SHIFT 14 +#define 
I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD2_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L2_IC_PHASE0_FIELD2_ENABLE_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_ETYPE_ENABLE_SHIFT 18 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_ETYPE_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L2_IC_PHASE0_ETYPE_ENABLE_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_IGNORE_PHASE_SHIFT 29 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_IGNORE_PHASE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L2_IC_PHASE0_IGNORE_PHASE_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_EGRESS_INGRESS_SHIFT 30 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_EGRESS_INGRESS_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L2_IC_PHASE0_EGRESS_INGRESS_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_PORT_ENABLE_SHIFT 31 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE0_PORT_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L2_IC_PHASE0_PORT_ENABLE_SHIFT) + +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1(_i) (0x00269720 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_MAX_INDEX 7 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD0_L1_OBJECT_TYPE_SHIFT 0 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD0_L1_OBJECT_TYPE_MASK I40E_MASK(0xF, I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD0_L1_OBJECT_TYPE_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD0_ENABLE_SHIFT 4 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD0_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD0_ENABLE_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD1_L1_OBJECT_TYPE_SHIFT 5 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD1_L1_OBJECT_TYPE_MASK I40E_MASK(0xF, I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD1_L1_OBJECT_TYPE_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD1_ENABLE_SHIFT 9 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD1_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD1_ENABLE_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD2_L1_OBJECT_TYPE_SHIFT 10 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD2_L1_OBJECT_TYPE_MASK I40E_MASK(0xF, I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD2_L1_OBJECT_TYPE_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD2_ENABLE_SHIFT 14 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD2_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L2_IC_PHASE1_FIELD2_ENABLE_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_ETYPE_ENABLE_SHIFT 18 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_ETYPE_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L2_IC_PHASE1_ETYPE_ENABLE_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_IGNORE_PHASE_SHIFT 29 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_IGNORE_PHASE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L2_IC_PHASE1_IGNORE_PHASE_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_EGRESS_INGRESS_SHIFT 30 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_EGRESS_INGRESS_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L2_IC_PHASE1_EGRESS_INGRESS_SHIFT) +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_PORT_ENABLE_SHIFT 31 +#define I40E_EMP_SWT_FLU_L2_IC_PHASE1_PORT_ENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_FLU_L2_IC_PHASE1_PORT_ENABLE_SHIFT) + +#define I40E_EMP_SWT_LOCMD(_i) (0x00269460 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_EMP_SWT_LOCMD_MAX_INDEX 7 +#define I40E_EMP_SWT_LOCMD_COMMAND_SHIFT 0 +#define I40E_EMP_SWT_LOCMD_COMMAND_MASK I40E_MASK(0xFFFFFFFF, I40E_EMP_SWT_LOCMD_COMMAND_SHIFT) + +#define I40E_EMP_SWT_LOFV(_i) (0x00268D80 + ((_i) * 4)) /* _i=0...31 */ /* Reset: CORER */ +#define I40E_EMP_SWT_LOFV_MAX_INDEX 31 +#define I40E_EMP_SWT_LOFV_FIELDVECTOR_SHIFT 0 +#define I40E_EMP_SWT_LOFV_FIELDVECTOR_MASK I40E_MASK(0xFFFFFFFF, I40E_EMP_SWT_LOFV_FIELDVECTOR_SHIFT) + +#define I40E_EMP_SWT_MIREGVSI(_i, _j) (0x00263000 + ((_i) * 4 + 
(_j) * 8)) /* _i=0...1, _j=0...383 */ /* Reset: CORER */ +#define I40E_EMP_SWT_MIREGVSI_MAX_INDEX 1 +#define I40E_EMP_SWT_MIREGVSI_ENABLEDRULES_SHIFT 0 +#define I40E_EMP_SWT_MIREGVSI_ENABLEDRULES_MASK I40E_MASK(0xFFFFFFFF, I40E_EMP_SWT_MIREGVSI_ENABLEDRULES_SHIFT) + +#define I40E_EMP_SWT_MIRIGVSI(_i, _j) (0x00265000 + ((_i) * 4 + (_j) * 8)) /* _i=0...1, _j=0...383 */ /* Reset: CORER */ +#define I40E_EMP_SWT_MIRIGVSI_MAX_INDEX 1 +#define I40E_EMP_SWT_MIRIGVSI_ENABLEDRULES_SHIFT 0 +#define I40E_EMP_SWT_MIRIGVSI_ENABLEDRULES_MASK I40E_MASK(0xFFFFFFFF, I40E_EMP_SWT_MIRIGVSI_ENABLEDRULES_SHIFT) + +#define I40E_EMP_SWT_MIRTARVSI(_i) (0x00268B00 + ((_i) * 4)) /* _i=0...63 */ /* Reset: CORER */ +#define I40E_EMP_SWT_MIRTARVSI_MAX_INDEX 63 +#define I40E_EMP_SWT_MIRTARVSI_TARGETVSI_SHIFT 0 +#define I40E_EMP_SWT_MIRTARVSI_TARGETVSI_MASK I40E_MASK(0x1FF, I40E_EMP_SWT_MIRTARVSI_TARGETVSI_SHIFT) +#define I40E_EMP_SWT_MIRTARVSI_VFVMNUMBER_SHIFT 9 +#define I40E_EMP_SWT_MIRTARVSI_VFVMNUMBER_MASK I40E_MASK(0x3FF, I40E_EMP_SWT_MIRTARVSI_VFVMNUMBER_SHIFT) +#define I40E_EMP_SWT_MIRTARVSI_PFNUMBER_SHIFT 19 +#define I40E_EMP_SWT_MIRTARVSI_PFNUMBER_MASK I40E_MASK(0xF, I40E_EMP_SWT_MIRTARVSI_PFNUMBER_SHIFT) +#define I40E_EMP_SWT_MIRTARVSI_FUNCTIONTYPE_SHIFT 23 +#define I40E_EMP_SWT_MIRTARVSI_FUNCTIONTYPE_MASK I40E_MASK(0x3, I40E_EMP_SWT_MIRTARVSI_FUNCTIONTYPE_SHIFT) +#define I40E_EMP_SWT_MIRTARVSI_RULEENABLE_SHIFT 31 +#define I40E_EMP_SWT_MIRTARVSI_RULEENABLE_MASK I40E_MASK(0x1, I40E_EMP_SWT_MIRTARVSI_RULEENABLE_SHIFT) + +#define I40E_EMP_SWT_STS(_i) (0x002692C0 + ((_i) * 4)) /* _i=0...9 */ /* Reset: CORER */ +#define I40E_EMP_SWT_STS_MAX_INDEX 9 +#define I40E_EMP_SWT_STS_EMP_SWT_STS_SHIFT 0 +#define I40E_EMP_SWT_STS_EMP_SWT_STS_MASK I40E_MASK(0xFFFFFFFF, I40E_EMP_SWT_STS_EMP_SWT_STS_SHIFT) + +#define I40E_GL_MTG_FLU_MSK_L 0x00269F44 /* Reset: CORER */ +#define I40E_GL_MTG_FLU_MSK_L_MASK_LOW_SHIFT 0 +#define I40E_GL_MTG_FLU_MSK_L_MASK_LOW_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_MTG_FLU_MSK_L_MASK_LOW_SHIFT) + +#define I40E_GL_PRE_FLU_MSK_PH0_H(_i) (0x00269EA0 + ((_i) * 4)) /* _i=0...6 */ /* Reset: CORER */ +#define I40E_GL_PRE_FLU_MSK_PH0_H_MAX_INDEX 6 +#define I40E_GL_PRE_FLU_MSK_PH0_H_MASK_HIGH_SHIFT 0 +#define I40E_GL_PRE_FLU_MSK_PH0_H_MASK_HIGH_MASK I40E_MASK(0xFFFF, I40E_GL_PRE_FLU_MSK_PH0_H_MASK_HIGH_SHIFT) + +#define I40E_GL_PRE_FLU_MSK_PH0_L(_i) (0x00269E60 + ((_i) * 4)) /* _i=0...6 */ /* Reset: CORER */ +#define I40E_GL_PRE_FLU_MSK_PH0_L_MAX_INDEX 6 +#define I40E_GL_PRE_FLU_MSK_PH0_L_MASK_LOW_SHIFT 0 +#define I40E_GL_PRE_FLU_MSK_PH0_L_MASK_LOW_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_PRE_FLU_MSK_PH0_L_MASK_LOW_SHIFT) + +#define I40E_GL_PRE_FLU_MSK_PH1_H(_i) (0x00269F20 + ((_i) * 4)) /* _i=0...6 */ /* Reset: CORER */ +#define I40E_GL_PRE_FLU_MSK_PH1_H_MAX_INDEX 6 +#define I40E_GL_PRE_FLU_MSK_PH1_H_MASK_HIGH_SHIFT 0 +#define I40E_GL_PRE_FLU_MSK_PH1_H_MASK_HIGH_MASK I40E_MASK(0xFFFF, I40E_GL_PRE_FLU_MSK_PH1_H_MASK_HIGH_SHIFT) + +#define I40E_GL_PRE_FLU_MSK_PH1_L(_i) (0x00269EE0 + ((_i) * 4)) /* _i=0...6 */ /* Reset: CORER */ +#define I40E_GL_PRE_FLU_MSK_PH1_L_MAX_INDEX 6 +#define I40E_GL_PRE_FLU_MSK_PH1_L_MASK_LOW_SHIFT 0 +#define I40E_GL_PRE_FLU_MSK_PH1_L_MASK_LOW_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_PRE_FLU_MSK_PH1_L_MASK_LOW_SHIFT) + +#define I40E_GL_PRE_GEN_CFG 0x002699A4 /* Reset: CORER */ +#define I40E_GL_PRE_GEN_CFG_FILTER_ENABLE_SHIFT 0 +#define I40E_GL_PRE_GEN_CFG_FILTER_ENABLE_MASK I40E_MASK(0x1, I40E_GL_PRE_GEN_CFG_FILTER_ENABLE_SHIFT) +#define I40E_GL_PRE_GEN_CFG_HASH_MODE_SHIFT 6 +#define 
I40E_GL_PRE_GEN_CFG_HASH_MODE_MASK I40E_MASK(0x3, I40E_GL_PRE_GEN_CFG_HASH_MODE_SHIFT) + +#define I40E_GL_PRE_PRX_BIG_ENT_D0 0x002699C4 /* Reset: CORER */ +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F0_SRC_IDX_SHIFT 0 +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F0_SRC_IDX_MASK I40E_MASK(0x3F, I40E_GL_PRE_PRX_BIG_ENT_D0_F0_SRC_IDX_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F0_SRC_SEL_SHIFT 6 +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F0_SRC_SEL_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_BIG_ENT_D0_F0_SRC_SEL_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F0_SRC_VLD_SHIFT 7 +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F0_SRC_VLD_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_BIG_ENT_D0_F0_SRC_VLD_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F1_SRC_IDX_SHIFT 8 +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F1_SRC_IDX_MASK I40E_MASK(0x3F, I40E_GL_PRE_PRX_BIG_ENT_D0_F1_SRC_IDX_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F1_SRC_SEL_SHIFT 14 +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F1_SRC_SEL_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_BIG_ENT_D0_F1_SRC_SEL_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F1_SRC_VLD_SHIFT 15 +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F1_SRC_VLD_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_BIG_ENT_D0_F1_SRC_VLD_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F2_SRC_VLD_SHIFT 16 +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F2_SRC_VLD_MASK I40E_MASK(0x3F, I40E_GL_PRE_PRX_BIG_ENT_D0_F2_SRC_VLD_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F2_SRC_SEL_SHIFT 22 +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F2_SRC_SEL_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_BIG_ENT_D0_F2_SRC_SEL_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F2_SRC_IDX_SHIFT 23 +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F2_SRC_IDX_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_BIG_ENT_D0_F2_SRC_IDX_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F3_SRC_VLD_SHIFT 24 +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F3_SRC_VLD_MASK I40E_MASK(0x3F, I40E_GL_PRE_PRX_BIG_ENT_D0_F3_SRC_VLD_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F3_SRC_IDX_SHIFT 31 +#define I40E_GL_PRE_PRX_BIG_ENT_D0_F3_SRC_IDX_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_BIG_ENT_D0_F3_SRC_IDX_SHIFT) + +#define I40E_GL_PRE_PRX_BIG_ENT_D1 0x002699D4 /* Reset: CORER */ +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F4_SRC_IDX_SHIFT 0 +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F4_SRC_IDX_MASK I40E_MASK(0x3F, I40E_GL_PRE_PRX_BIG_ENT_D1_F4_SRC_IDX_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F4_SRC_SEL_SHIFT 6 +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F4_SRC_SEL_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_BIG_ENT_D1_F4_SRC_SEL_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F4_SRC_VLD_SHIFT 7 +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F4_SRC_VLD_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_BIG_ENT_D1_F4_SRC_VLD_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F5_SRC_IDX_SHIFT 8 +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F5_SRC_IDX_MASK I40E_MASK(0x3F, I40E_GL_PRE_PRX_BIG_ENT_D1_F5_SRC_IDX_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F5_SRC_SEL_SHIFT 14 +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F5_SRC_SEL_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_BIG_ENT_D1_F5_SRC_SEL_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F5_SRC_VLD_SHIFT 15 +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F5_SRC_VLD_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_BIG_ENT_D1_F5_SRC_VLD_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F6_SRC_VLD_SHIFT 16 +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F6_SRC_VLD_MASK I40E_MASK(0x3F, I40E_GL_PRE_PRX_BIG_ENT_D1_F6_SRC_VLD_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F6_SRC_SEL_SHIFT 22 +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F6_SRC_SEL_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_BIG_ENT_D1_F6_SRC_SEL_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F6_SRC_IDX_SHIFT 23 +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F6_SRC_IDX_MASK I40E_MASK(0x1, 
I40E_GL_PRE_PRX_BIG_ENT_D1_F6_SRC_IDX_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F7_SRC_VLD_SHIFT 24 +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F7_SRC_VLD_MASK I40E_MASK(0x3F, I40E_GL_PRE_PRX_BIG_ENT_D1_F7_SRC_VLD_SHIFT) +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F7_SRC_IDX_SHIFT 31 +#define I40E_GL_PRE_PRX_BIG_ENT_D1_F7_SRC_IDX_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_BIG_ENT_D1_F7_SRC_IDX_SHIFT) + +#define I40E_GL_PRE_PRX_BIG_ENT_D3 0x00269A0C /* Reset: CORER */ +#define I40E_GL_PRE_PRX_BIG_ENT_D3_BIT_MSK0_SHIFT 0 +#define I40E_GL_PRE_PRX_BIG_ENT_D3_BIT_MSK0_MASK I40E_MASK(0xFF, I40E_GL_PRE_PRX_BIG_ENT_D3_BIT_MSK0_SHIFT) + +#define I40E_GL_PRE_PRX_BIG_HSH_KEY_D1 0x00269A34 /* Reset: CORER */ +#define I40E_GL_PRE_PRX_BIG_HSH_KEY_D1_H1_SHIFT 0 +#define I40E_GL_PRE_PRX_BIG_HSH_KEY_D1_H1_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_PRE_PRX_BIG_HSH_KEY_D1_H1_SHIFT) + +#define I40E_GL_PRE_PRX_BIG_HSH_KEY_D3 0x00269A54 /* Reset: CORER */ +#define I40E_GL_PRE_PRX_BIG_HSH_KEY_D3_H3_SHIFT 0 +#define I40E_GL_PRE_PRX_BIG_HSH_KEY_D3_H3_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_PRE_PRX_BIG_HSH_KEY_D3_H3_SHIFT) + +#define I40E_GL_PRE_PRX_H_PHASE0 0x00269B74 /* Reset: CORER */ +#define I40E_GL_PRE_PRX_H_PHASE0_PROTOCOL_ID_SHIFT 0 +#define I40E_GL_PRE_PRX_H_PHASE0_PROTOCOL_ID_MASK I40E_MASK(0x3F, I40E_GL_PRE_PRX_H_PHASE0_PROTOCOL_ID_SHIFT) +#define I40E_GL_PRE_PRX_H_PHASE0_IGNORE_PROTOCOL_SHIFT 6 +#define I40E_GL_PRE_PRX_H_PHASE0_IGNORE_PROTOCOL_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_H_PHASE0_IGNORE_PROTOCOL_SHIFT) +#define I40E_GL_PRE_PRX_H_PHASE0_MASK0_INDEX_SHIFT 8 +#define I40E_GL_PRE_PRX_H_PHASE0_MASK0_INDEX_MASK I40E_MASK(0xF, I40E_GL_PRE_PRX_H_PHASE0_MASK0_INDEX_SHIFT) +#define I40E_GL_PRE_PRX_H_PHASE0_MASK1_INDEX_SHIFT 12 +#define I40E_GL_PRE_PRX_H_PHASE0_MASK1_INDEX_MASK I40E_MASK(0xF, I40E_GL_PRE_PRX_H_PHASE0_MASK1_INDEX_SHIFT) +#define I40E_GL_PRE_PRX_H_PHASE0_MASK0_BITS_SHIFT 16 +#define I40E_GL_PRE_PRX_H_PHASE0_MASK0_BITS_MASK I40E_MASK(0xFF, I40E_GL_PRE_PRX_H_PHASE0_MASK0_BITS_SHIFT) +#define I40E_GL_PRE_PRX_H_PHASE0_MASK1_BITS_SHIFT 24 +#define I40E_GL_PRE_PRX_H_PHASE0_MASK1_BITS_MASK I40E_MASK(0xFF, I40E_GL_PRE_PRX_H_PHASE0_MASK1_BITS_SHIFT) + +#define I40E_GL_PRE_PRX_H_PHASE1 0x00269B7C /* Reset: CORER */ +#define I40E_GL_PRE_PRX_H_PHASE1_PROTOCOL_ID_SHIFT 0 +#define I40E_GL_PRE_PRX_H_PHASE1_PROTOCOL_ID_MASK I40E_MASK(0x3F, I40E_GL_PRE_PRX_H_PHASE1_PROTOCOL_ID_SHIFT) +#define I40E_GL_PRE_PRX_H_PHASE1_IGNORE_PROTOCOL_SHIFT 6 +#define I40E_GL_PRE_PRX_H_PHASE1_IGNORE_PROTOCOL_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_H_PHASE1_IGNORE_PROTOCOL_SHIFT) +#define I40E_GL_PRE_PRX_H_PHASE1_MASK0_INDEX_SHIFT 8 +#define I40E_GL_PRE_PRX_H_PHASE1_MASK0_INDEX_MASK I40E_MASK(0xF, I40E_GL_PRE_PRX_H_PHASE1_MASK0_INDEX_SHIFT) +#define I40E_GL_PRE_PRX_H_PHASE1_MASK1_INDEX_SHIFT 12 +#define I40E_GL_PRE_PRX_H_PHASE1_MASK1_INDEX_MASK I40E_MASK(0xF, I40E_GL_PRE_PRX_H_PHASE1_MASK1_INDEX_SHIFT) +#define I40E_GL_PRE_PRX_H_PHASE1_MASK0_BITS_SHIFT 16 +#define I40E_GL_PRE_PRX_H_PHASE1_MASK0_BITS_MASK I40E_MASK(0xFF, I40E_GL_PRE_PRX_H_PHASE1_MASK0_BITS_SHIFT) +#define I40E_GL_PRE_PRX_H_PHASE1_MASK1_BITS_SHIFT 24 +#define I40E_GL_PRE_PRX_H_PHASE1_MASK1_BITS_MASK I40E_MASK(0xFF, I40E_GL_PRE_PRX_H_PHASE1_MASK1_BITS_SHIFT) + +#define I40E_GL_PRE_PRX_HSH_KEY_D0 0x00269A24 /* Reset: CORER */ +#define I40E_GL_PRE_PRX_HSH_KEY_D0_H0_SHIFT 0 +#define I40E_GL_PRE_PRX_HSH_KEY_D0_H0_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_PRE_PRX_HSH_KEY_D0_H0_SHIFT) + +#define I40E_GL_PRE_PRX_L_PHASE0 0x00269B8C /* Reset: CORER */ +#define I40E_GL_PRE_PRX_L_PHASE0_W0_OFFSET_SHIFT 0 +#define 
I40E_GL_PRE_PRX_L_PHASE0_W0_OFFSET_MASK I40E_MASK(0x3F, I40E_GL_PRE_PRX_L_PHASE0_W0_OFFSET_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE0_W0_STATUS_SHIFT 6 +#define I40E_GL_PRE_PRX_L_PHASE0_W0_STATUS_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE0_W0_STATUS_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE0_W0_VALID_SHIFT 7 +#define I40E_GL_PRE_PRX_L_PHASE0_W0_VALID_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE0_W0_VALID_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE0_W1_OFFSET_SHIFT 8 +#define I40E_GL_PRE_PRX_L_PHASE0_W1_OFFSET_MASK I40E_MASK(0x3F, I40E_GL_PRE_PRX_L_PHASE0_W1_OFFSET_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE0_W1_STATUS_SHIFT 14 +#define I40E_GL_PRE_PRX_L_PHASE0_W1_STATUS_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE0_W1_STATUS_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE0_W1_VALID_SHIFT 15 +#define I40E_GL_PRE_PRX_L_PHASE0_W1_VALID_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE0_W1_VALID_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE0_W2_OFFSET_SHIFT 16 +#define I40E_GL_PRE_PRX_L_PHASE0_W2_OFFSET_MASK I40E_MASK(0x3F, I40E_GL_PRE_PRX_L_PHASE0_W2_OFFSET_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE0_W2_STATUS_SHIFT 22 +#define I40E_GL_PRE_PRX_L_PHASE0_W2_STATUS_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE0_W2_STATUS_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE0_W2_VALID_SHIFT 23 +#define I40E_GL_PRE_PRX_L_PHASE0_W2_VALID_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE0_W2_VALID_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE0_ETYPE_ENABLE_SHIFT 28 +#define I40E_GL_PRE_PRX_L_PHASE0_ETYPE_ENABLE_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE0_ETYPE_ENABLE_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE0_PRUNE_SHIFT 29 +#define I40E_GL_PRE_PRX_L_PHASE0_PRUNE_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE0_PRUNE_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE0_EGRESS_SHIFT 30 +#define I40E_GL_PRE_PRX_L_PHASE0_EGRESS_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE0_EGRESS_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE0_PORT_ENABLE_SHIFT 31 +#define I40E_GL_PRE_PRX_L_PHASE0_PORT_ENABLE_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE0_PORT_ENABLE_SHIFT) + +#define I40E_GL_PRE_PRX_L_PHASE1 0x00269B84 /* Reset: CORER */ +#define I40E_GL_PRE_PRX_L_PHASE1_W0_OFFSET_SHIFT 0 +#define I40E_GL_PRE_PRX_L_PHASE1_W0_OFFSET_MASK I40E_MASK(0x3F, I40E_GL_PRE_PRX_L_PHASE1_W0_OFFSET_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE1_W0_STATUS_SHIFT 6 +#define I40E_GL_PRE_PRX_L_PHASE1_W0_STATUS_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE1_W0_STATUS_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE1_W0_VALID_SHIFT 7 +#define I40E_GL_PRE_PRX_L_PHASE1_W0_VALID_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE1_W0_VALID_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE1_W1_OFFSET_SHIFT 8 +#define I40E_GL_PRE_PRX_L_PHASE1_W1_OFFSET_MASK I40E_MASK(0x3F, I40E_GL_PRE_PRX_L_PHASE1_W1_OFFSET_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE1_W1_STATUS_SHIFT 14 +#define I40E_GL_PRE_PRX_L_PHASE1_W1_STATUS_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE1_W1_STATUS_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE1_W1_VALID_SHIFT 15 +#define I40E_GL_PRE_PRX_L_PHASE1_W1_VALID_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE1_W1_VALID_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE1_W2_OFFSET_SHIFT 16 +#define I40E_GL_PRE_PRX_L_PHASE1_W2_OFFSET_MASK I40E_MASK(0x3F, I40E_GL_PRE_PRX_L_PHASE1_W2_OFFSET_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE1_W2_STATUS_SHIFT 22 +#define I40E_GL_PRE_PRX_L_PHASE1_W2_STATUS_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE1_W2_STATUS_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE1_W2_VALID_SHIFT 23 +#define I40E_GL_PRE_PRX_L_PHASE1_W2_VALID_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE1_W2_VALID_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE1_ETYPE_ENABLE_SHIFT 28 +#define 
I40E_GL_PRE_PRX_L_PHASE1_ETYPE_ENABLE_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE1_ETYPE_ENABLE_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE1_PRUNE_SHIFT 29 +#define I40E_GL_PRE_PRX_L_PHASE1_PRUNE_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE1_PRUNE_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE1_EGRESS_SHIFT 30 +#define I40E_GL_PRE_PRX_L_PHASE1_EGRESS_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE1_EGRESS_SHIFT) +#define I40E_GL_PRE_PRX_L_PHASE1_PORT_ENABLE_SHIFT 31 +#define I40E_GL_PRE_PRX_L_PHASE1_PORT_ENABLE_MASK I40E_MASK(0x1, I40E_GL_PRE_PRX_L_PHASE1_PORT_ENABLE_SHIFT) + +#define I40E_GL_SW_SWT_STS(_i) (0x00269340 + ((_i) * 4)) /* _i=0...9 */ /* Reset: CORER */ +#define I40E_GL_SW_SWT_STS_MAX_INDEX 9 +#define I40E_GL_SW_SWT_STS_EMP_SWT_STS_SHIFT 0 +#define I40E_GL_SW_SWT_STS_EMP_SWT_STS_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_SW_SWT_STS_EMP_SWT_STS_SHIFT) + +#define I40E_GL_SWR_FILTERS_NEED_HIT(_i) (0x0026CF00 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */ +#define I40E_GL_SWR_FILTERS_NEED_HIT_MAX_INDEX 1 +#define I40E_GL_SWR_FILTERS_NEED_HIT_FILTERS_NEED_HIT_SHIFT 0 +#define I40E_GL_SWR_FILTERS_NEED_HIT_FILTERS_NEED_HIT_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_SWR_FILTERS_NEED_HIT_FILTERS_NEED_HIT_SHIFT) + +#define I40E_GL_SWR_FILTERS_NEED_MISS(_i) (0x0026CF10 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */ +#define I40E_GL_SWR_FILTERS_NEED_MISS_MAX_INDEX 1 +#define I40E_GL_SWR_FILTERS_NEED_MISS_FILTERS_NEED_MISS_SHIFT 0 +#define I40E_GL_SWR_FILTERS_NEED_MISS_FILTERS_NEED_MISS_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_SWR_FILTERS_NEED_MISS_FILTERS_NEED_MISS_SHIFT) + +#define I40E_GL_SWR_HIT_FILTERS(_i) (0x0026CF08 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */ +#define I40E_GL_SWR_HIT_FILTERS_MAX_INDEX 1 +#define I40E_GL_SWR_HIT_FILTERS_HIT_FILTERS_SHIFT 0 +#define I40E_GL_SWR_HIT_FILTERS_HIT_FILTERS_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_SWR_HIT_FILTERS_HIT_FILTERS_SHIFT) + +#define I40E_GL_SWR_MISS_FILTERS(_i) (0x0026CF18 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */ +#define I40E_GL_SWR_MISS_FILTERS_MAX_INDEX 1 +#define I40E_GL_SWR_MISS_FILTERS_MISS_FILTERS_SHIFT 0 +#define I40E_GL_SWR_MISS_FILTERS_MISS_FILTERS_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_SWR_MISS_FILTERS_MISS_FILTERS_SHIFT) + +#define I40E_GL_SWR_PRI_JOIN_MAP(_i) (0x0026CE20 + ((_i) * 4)) /* _i=0...8 */ /* Reset: CORER */ +#define I40E_GL_SWR_PRI_JOIN_MAP_MAX_INDEX 8 +#define I40E_GL_SWR_PRI_JOIN_MAP_GL_SWR_PRI_MAP_SHIFT 0 +#define I40E_GL_SWR_PRI_JOIN_MAP_GL_SWR_PRI_MAP_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_SWR_PRI_JOIN_MAP_GL_SWR_PRI_MAP_SHIFT) + +#define I40E_GL_SWR_PRI_MAP(_i) (0x0026CDE0 + ((_i) * 4)) /* _i=0...8 */ /* Reset: CORER */ +#define I40E_GL_SWR_PRI_MAP_MAX_INDEX 8 +#define I40E_GL_SWR_PRI_MAP_GL_SWR_PRI_MAP_SHIFT 0 +#define I40E_GL_SWR_PRI_MAP_GL_SWR_PRI_MAP_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_SWR_PRI_MAP_GL_SWR_PRI_MAP_SHIFT) + +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0 0x002699BC /* Reset: CORER */ +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F0_SRC_IDX_SHIFT 0 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F0_SRC_IDX_MASK I40E_MASK(0x3F, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F0_SRC_IDX_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F0_SRC_SEL_SHIFT 6 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F0_SRC_SEL_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F0_SRC_SEL_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F0_SRC_VLD_SHIFT 7 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F0_SRC_VLD_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F0_SRC_VLD_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F1_SRC_IDX_SHIFT 8 +#define 
I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F1_SRC_IDX_MASK I40E_MASK(0x3F, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F1_SRC_IDX_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F1_SRC_SEL_SHIFT 14 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F1_SRC_SEL_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F1_SRC_SEL_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F1_SRC_VLD_SHIFT 15 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F1_SRC_VLD_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F1_SRC_VLD_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F2_SRC_VLD_SHIFT 16 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F2_SRC_VLD_MASK I40E_MASK(0x3F, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F2_SRC_VLD_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F2_SRC_SEL_SHIFT 22 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F2_SRC_SEL_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F2_SRC_SEL_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F2_SRC_IDX_SHIFT 23 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F2_SRC_IDX_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F2_SRC_IDX_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F3_SRC_VLD_SHIFT 24 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F3_SRC_VLD_MASK I40E_MASK(0x3F, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F3_SRC_VLD_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F3_SRC_IDX_SHIFT 31 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F3_SRC_IDX_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D0_F3_SRC_IDX_SHIFT) + +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1 0x002699CC /* Reset: CORER */ +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F4_SRC_IDX_SHIFT 0 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F4_SRC_IDX_MASK I40E_MASK(0x3F, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F4_SRC_IDX_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F4_SRC_SEL_SHIFT 6 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F4_SRC_SEL_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F4_SRC_SEL_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F4_SRC_VLD_SHIFT 7 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F4_SRC_VLD_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F4_SRC_VLD_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F5_SRC_IDX_SHIFT 8 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F5_SRC_IDX_MASK I40E_MASK(0x3F, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F5_SRC_IDX_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F5_SRC_SEL_SHIFT 14 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F5_SRC_SEL_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F5_SRC_SEL_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F5_SRC_VLD_SHIFT 15 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F5_SRC_VLD_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F5_SRC_VLD_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F6_SRC_VLD_SHIFT 16 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F6_SRC_VLD_MASK I40E_MASK(0x3F, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F6_SRC_VLD_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F6_SRC_SEL_SHIFT 22 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F6_SRC_SEL_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F6_SRC_SEL_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F6_SRC_IDX_SHIFT 23 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F6_SRC_IDX_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F6_SRC_IDX_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F7_SRC_VLD_SHIFT 24 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F7_SRC_VLD_MASK I40E_MASK(0x3F, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F7_SRC_VLD_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F7_SRC_IDX_SHIFT 31 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F7_SRC_IDX_MASK I40E_MASK(0x1, 
I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D1_F7_SRC_IDX_SHIFT) + +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2 0x002699FC /* Reset: CORER */ +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_USE_PHASE_SHIFT 0 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_USE_PHASE_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_USE_PHASE_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_USE_INGR_SHIFT 1 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_USE_INGR_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_USE_INGR_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_USE_PORT_SHIFT 7 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_USE_PORT_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_USE_PORT_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_TR_INDEX_SHIFT 8 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_TR_INDEX_MASK I40E_MASK(0x3F, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_TR_INDEX_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_USE_MAN_SHIFT 14 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_USE_MAN_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_USE_MAN_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_USE_TR_SHIFT 15 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_USE_TR_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_USE_TR_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_BYTE_MSK0_SHIFT 16 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_BYTE_MSK0_MASK I40E_MASK(0xF, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_BYTE_MSK0_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_BYTE_MSK1_SHIFT 20 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_BYTE_MSK1_MASK I40E_MASK(0xF, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_BYTE_MSK1_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_BIT_MSK0_SHIFT 24 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_BIT_MSK0_MASK I40E_MASK(0xFF, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D2_BIT_MSK0_SHIFT) + +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D3 0x00269A14 /* Reset: CORER */ +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D3_BIT_MSK0_SHIFT 0 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D3_BIT_MSK0_MASK I40E_MASK(0xFF, I40E_GL_SWT_FLU_BIG_ENT_PHASE0_D3_BIT_MSK0_SHIFT) + +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0 0x002699DC /* Reset: CORER */ +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F0_SRC_IDX_SHIFT 0 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F0_SRC_IDX_MASK I40E_MASK(0x3F, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F0_SRC_IDX_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F0_SRC_SEL_SHIFT 6 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F0_SRC_SEL_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F0_SRC_SEL_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F0_SRC_VLD_SHIFT 7 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F0_SRC_VLD_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F0_SRC_VLD_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F1_SRC_IDX_SHIFT 8 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F1_SRC_IDX_MASK I40E_MASK(0x3F, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F1_SRC_IDX_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F1_SRC_SEL_SHIFT 14 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F1_SRC_SEL_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F1_SRC_SEL_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F1_SRC_VLD_SHIFT 15 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F1_SRC_VLD_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F1_SRC_VLD_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F2_SRC_VLD_SHIFT 16 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F2_SRC_VLD_MASK I40E_MASK(0x3F, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F2_SRC_VLD_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F2_SRC_SEL_SHIFT 22 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F2_SRC_SEL_MASK 
I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F2_SRC_SEL_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F2_SRC_IDX_SHIFT 23 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F2_SRC_IDX_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F2_SRC_IDX_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F3_SRC_VLD_SHIFT 24 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F3_SRC_VLD_MASK I40E_MASK(0x3F, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F3_SRC_VLD_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F3_SRC_IDX_SHIFT 31 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F3_SRC_IDX_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D0_F3_SRC_IDX_SHIFT) + +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1 0x002699E4 /* Reset: CORER */ +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F4_SRC_IDX_SHIFT 0 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F4_SRC_IDX_MASK I40E_MASK(0x3F, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F4_SRC_IDX_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F4_SRC_SEL_SHIFT 6 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F4_SRC_SEL_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F4_SRC_SEL_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F4_SRC_VLD_SHIFT 7 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F4_SRC_VLD_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F4_SRC_VLD_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F5_SRC_IDX_SHIFT 8 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F5_SRC_IDX_MASK I40E_MASK(0x3F, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F5_SRC_IDX_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F5_SRC_SEL_SHIFT 14 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F5_SRC_SEL_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F5_SRC_SEL_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F5_SRC_VLD_SHIFT 15 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F5_SRC_VLD_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F5_SRC_VLD_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F6_SRC_VLD_SHIFT 16 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F6_SRC_VLD_MASK I40E_MASK(0x3F, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F6_SRC_VLD_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F6_SRC_SEL_SHIFT 22 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F6_SRC_SEL_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F6_SRC_SEL_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F6_SRC_IDX_SHIFT 23 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F6_SRC_IDX_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F6_SRC_IDX_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F7_SRC_VLD_SHIFT 24 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F7_SRC_VLD_MASK I40E_MASK(0x3F, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F7_SRC_VLD_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F7_SRC_IDX_SHIFT 31 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F7_SRC_IDX_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D1_F7_SRC_IDX_SHIFT) + +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2 0x002699F4 /* Reset: CORER */ +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_USE_PHASE_SHIFT 0 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_USE_PHASE_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_USE_PHASE_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_USE_INGR_SHIFT 1 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_USE_INGR_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_USE_INGR_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_USE_PORT_SHIFT 7 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_USE_PORT_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_USE_PORT_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_TR_INDEX_SHIFT 8 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_TR_INDEX_MASK I40E_MASK(0x3F, 
I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_TR_INDEX_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_USE_MAN_SHIFT 14 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_USE_MAN_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_USE_MAN_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_USE_TR_SHIFT 15 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_USE_TR_MASK I40E_MASK(0x1, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_USE_TR_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_BYTE_MSK0_SHIFT 16 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_BYTE_MSK0_MASK I40E_MASK(0xF, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_BYTE_MSK0_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_BYTE_MSK1_SHIFT 20 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_BYTE_MSK1_MASK I40E_MASK(0xF, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_BYTE_MSK1_SHIFT) +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_BIT_MSK0_SHIFT 24 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_BIT_MSK0_MASK I40E_MASK(0xFF, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D2_BIT_MSK0_SHIFT) + +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D3 0x00269A04 /* Reset: CORER */ +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D3_BIT_MSK0_SHIFT 0 +#define I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D3_BIT_MSK0_MASK I40E_MASK(0xFF, I40E_GL_SWT_FLU_BIG_ENT_PHASE1_D3_BIT_MSK0_SHIFT) + +#define I40E_GL_SWT_LOCMD_PE(_i) (0x002694A0 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_GL_SWT_LOCMD_PE_MAX_INDEX 7 +#define I40E_GL_SWT_LOCMD_PE_COMMAND_SHIFT 0 +#define I40E_GL_SWT_LOCMD_PE_COMMAND_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_SWT_LOCMD_PE_COMMAND_SHIFT) + +#define I40E_GL_SWT_LOCMD_SW(_i) (0x002694E0 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_GL_SWT_LOCMD_SW_MAX_INDEX 7 +#define I40E_GL_SWT_LOCMD_SW_COMMAND_SHIFT 0 +#define I40E_GL_SWT_LOCMD_SW_COMMAND_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_SWT_LOCMD_SW_COMMAND_SHIFT) + +#define I40E_GL_SWT_LOFV_PE(_i) (0x00268E80 + ((_i) * 4)) /* _i=0...31 */ /* Reset: CORER */ +#define I40E_GL_SWT_LOFV_PE_MAX_INDEX 31 +#define I40E_GL_SWT_LOFV_PE_FIELDVECTOR_SHIFT 0 +#define I40E_GL_SWT_LOFV_PE_FIELDVECTOR_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_SWT_LOFV_PE_FIELDVECTOR_SHIFT) + +#define I40E_GL_SWT_LOFV_SW(_i) (0x00268F80 + ((_i) * 4)) /* _i=0...31 */ /* Reset: CORER */ +#define I40E_GL_SWT_LOFV_SW_MAX_INDEX 31 +#define I40E_GL_SWT_LOFV_SW_FIELDVECTOR_SHIFT 0 +#define I40E_GL_SWT_LOFV_SW_FIELDVECTOR_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_SWT_LOFV_SW_FIELDVECTOR_SHIFT) + +#define I40E_PRT_MSCCNT 0x00256BA0 /* Reset: CORER */ +#define I40E_PRT_MSCCNT_CCOUNT_SHIFT 0 +#define I40E_PRT_MSCCNT_CCOUNT_MASK I40E_MASK(0x1FFFFFF, I40E_PRT_MSCCNT_CCOUNT_SHIFT) + +#define I40E_PRT_SBPVSI 0x00256BE0 /* Reset: CORER */ +#define I40E_PRT_SBPVSI_BAD_FRAMES_VSI_SHIFT 0 +#define I40E_PRT_SBPVSI_BAD_FRAMES_VSI_MASK I40E_MASK(0x1FF, I40E_PRT_SBPVSI_BAD_FRAMES_VSI_SHIFT) +#define I40E_PRT_SBPVSI_SBP_SHIFT 31 +#define I40E_PRT_SBPVSI_SBP_MASK I40E_MASK(0x1, I40E_PRT_SBPVSI_SBP_SHIFT) + +#define I40E_PRT_SCSTS 0x00256C20 /* Reset: CORER */ +#define I40E_PRT_SCSTS_BSCA_SHIFT 0 +#define I40E_PRT_SCSTS_BSCA_MASK I40E_MASK(0x1, I40E_PRT_SCSTS_BSCA_SHIFT) +#define I40E_PRT_SCSTS_BSCAP_SHIFT 1 +#define I40E_PRT_SCSTS_BSCAP_MASK I40E_MASK(0x1, I40E_PRT_SCSTS_BSCAP_SHIFT) +#define I40E_PRT_SCSTS_MSCA_SHIFT 2 +#define I40E_PRT_SCSTS_MSCA_MASK I40E_MASK(0x1, I40E_PRT_SCSTS_MSCA_SHIFT) +#define I40E_PRT_SCSTS_MSCAP_SHIFT 3 +#define I40E_PRT_SCSTS_MSCAP_MASK I40E_MASK(0x1, I40E_PRT_SCSTS_MSCAP_SHIFT) + +#define I40E_PRT_SWT_BSCCNT 0x00256C60 /* Reset: CORER */ +#define I40E_PRT_SWT_BSCCNT_CCOUNT_SHIFT 0 +#define I40E_PRT_SWT_BSCCNT_CCOUNT_MASK 
I40E_MASK(0x1FFFFFF, I40E_PRT_SWT_BSCCNT_CCOUNT_SHIFT) + +#define I40E_PRT_SWT_BSCTRH 0x00256CA0 /* Reset: CORER */ +#define I40E_PRT_SWT_BSCTRH_UTRESH_SHIFT 0 +#define I40E_PRT_SWT_BSCTRH_UTRESH_MASK I40E_MASK(0x7FFFF, I40E_PRT_SWT_BSCTRH_UTRESH_SHIFT) + +#define I40E_PRT_SWT_DEFPORTS 0x00256CE0 /* Reset: CORER */ +#define I40E_PRT_SWT_DEFPORTS_DEFAULT_VSI_SHIFT 0 +#define I40E_PRT_SWT_DEFPORTS_DEFAULT_VSI_MASK I40E_MASK(0x1FF, I40E_PRT_SWT_DEFPORTS_DEFAULT_VSI_SHIFT) +#define I40E_PRT_SWT_DEFPORTS_DEFAULT_VSI_VALID_SHIFT 31 +#define I40E_PRT_SWT_DEFPORTS_DEFAULT_VSI_VALID_MASK I40E_MASK(0x1, I40E_PRT_SWT_DEFPORTS_DEFAULT_VSI_VALID_SHIFT) + +#define I40E_PRT_SWT_MSCTRH 0x00256D20 /* Reset: CORER */ +#define I40E_PRT_SWT_MSCTRH_UTRESH_SHIFT 0 +#define I40E_PRT_SWT_MSCTRH_UTRESH_MASK I40E_MASK(0x7FFFF, I40E_PRT_SWT_MSCTRH_UTRESH_SHIFT) + +#define I40E_PRT_SWT_SCBI 0x00256D60 /* Reset: CORER */ +#define I40E_PRT_SWT_SCBI_BI_SHIFT 0 +#define I40E_PRT_SWT_SCBI_BI_MASK I40E_MASK(0x1FFFFFF, I40E_PRT_SWT_SCBI_BI_SHIFT) + +#define I40E_PRT_SWT_SCCRL 0x00256DA0 /* Reset: CORER */ +#define I40E_PRT_SWT_SCCRL_MDIPW_SHIFT 0 +#define I40E_PRT_SWT_SCCRL_MDIPW_MASK I40E_MASK(0x1, I40E_PRT_SWT_SCCRL_MDIPW_SHIFT) +#define I40E_PRT_SWT_SCCRL_MDICW_SHIFT 1 +#define I40E_PRT_SWT_SCCRL_MDICW_MASK I40E_MASK(0x1, I40E_PRT_SWT_SCCRL_MDICW_SHIFT) +#define I40E_PRT_SWT_SCCRL_BDIPW_SHIFT 2 +#define I40E_PRT_SWT_SCCRL_BDIPW_MASK I40E_MASK(0x1, I40E_PRT_SWT_SCCRL_BDIPW_SHIFT) +#define I40E_PRT_SWT_SCCRL_BDICW_SHIFT 3 +#define I40E_PRT_SWT_SCCRL_BDICW_MASK I40E_MASK(0x1, I40E_PRT_SWT_SCCRL_BDICW_SHIFT) +#define I40E_PRT_SWT_SCCRL_BIDU_SHIFT 4 +#define I40E_PRT_SWT_SCCRL_BIDU_MASK I40E_MASK(0x1, I40E_PRT_SWT_SCCRL_BIDU_SHIFT) +#define I40E_PRT_SWT_SCCRL_INTERVAL_SHIFT 8 +#define I40E_PRT_SWT_SCCRL_INTERVAL_MASK I40E_MASK(0x3FF, I40E_PRT_SWT_SCCRL_INTERVAL_SHIFT) + +#define I40E_PRT_SWT_SCTC 0x00256DE0 /* Reset: CORER */ +#define I40E_PRT_SWT_SCTC_COUNT_SHIFT 0 +#define I40E_PRT_SWT_SCTC_COUNT_MASK I40E_MASK(0x3FF, I40E_PRT_SWT_SCTC_COUNT_SHIFT) + +#define I40E_PRT_SWT_SWITCHID 0x00256E20 /* Reset: CORER */ +#define I40E_PRT_SWT_SWITCHID_SWID_SHIFT 0 +#define I40E_PRT_SWT_SWITCHID_SWID_MASK I40E_MASK(0xFFF, I40E_PRT_SWT_SWITCHID_SWID_SHIFT) +#define I40E_PRT_SWT_SWITCHID_ISNSTAG_SHIFT 12 +#define I40E_PRT_SWT_SWITCHID_ISNSTAG_MASK I40E_MASK(0x1, I40E_PRT_SWT_SWITCHID_ISNSTAG_SHIFT) +#define I40E_PRT_SWT_SWITCHID_SWIDVALID_SHIFT 13 +#define I40E_PRT_SWT_SWITCHID_SWIDVALID_MASK I40E_MASK(0x1, I40E_PRT_SWT_SWITCHID_SWIDVALID_SHIFT) +#define I40E_PRT_SWT_SWITCHID_FORWARD_MUTICAST_ETAG_SHIFT 31 +#define I40E_PRT_SWT_SWITCHID_FORWARD_MUTICAST_ETAG_MASK I40E_MASK(0x1, I40E_PRT_SWT_SWITCHID_FORWARD_MUTICAST_ETAG_SHIFT) + +#define I40E_PRT_TCTUPR(_i) (0x00044000 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRT_TCTUPR_MAX_INDEX 7 +#define I40E_PRT_TCTUPR_UP0_SHIFT 0 +#define I40E_PRT_TCTUPR_UP0_MASK I40E_MASK(0x7, I40E_PRT_TCTUPR_UP0_SHIFT) +#define I40E_PRT_TCTUPR_UP1_SHIFT 3 +#define I40E_PRT_TCTUPR_UP1_MASK I40E_MASK(0x7, I40E_PRT_TCTUPR_UP1_SHIFT) +#define I40E_PRT_TCTUPR_UP2_SHIFT 6 +#define I40E_PRT_TCTUPR_UP2_MASK I40E_MASK(0x7, I40E_PRT_TCTUPR_UP2_SHIFT) +#define I40E_PRT_TCTUPR_UP3_SHIFT 9 +#define I40E_PRT_TCTUPR_UP3_MASK I40E_MASK(0x7, I40E_PRT_TCTUPR_UP3_SHIFT) +#define I40E_PRT_TCTUPR_UP4_SHIFT 12 +#define I40E_PRT_TCTUPR_UP4_MASK I40E_MASK(0x7, I40E_PRT_TCTUPR_UP4_SHIFT) +#define I40E_PRT_TCTUPR_UP5_SHIFT 15 +#define I40E_PRT_TCTUPR_UP5_MASK I40E_MASK(0x7, I40E_PRT_TCTUPR_UP5_SHIFT) +#define 
I40E_PRT_TCTUPR_UP6_SHIFT 18 +#define I40E_PRT_TCTUPR_UP6_MASK I40E_MASK(0x7, I40E_PRT_TCTUPR_UP6_SHIFT) +#define I40E_PRT_TCTUPR_UP7_SHIFT 21 +#define I40E_PRT_TCTUPR_UP7_MASK I40E_MASK(0x7, I40E_PRT_TCTUPR_UP7_SHIFT) + +/* PF - TimeSync (IEEE 1588) Registers */ + +#define I40E_PRTTSYN_VFTIME_H 0x001E4020 /* Reset: GLOBR */ +#define I40E_PRTTSYN_VFTIME_H_TSYNTIME_H_SHIFT 0 +#define I40E_PRTTSYN_VFTIME_H_TSYNTIME_H_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_VFTIME_H_TSYNTIME_H_SHIFT) + +#define I40E_PRTTSYN_VFTIME_L 0x001E4000 /* Reset: GLOBR */ +#define I40E_PRTTSYN_VFTIME_L_TSYNTIME_L_SHIFT 0 +#define I40E_PRTTSYN_VFTIME_L_TSYNTIME_L_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_VFTIME_L_TSYNTIME_L_SHIFT) + +/* PF - Transmit Scheduler Registers */ + +#define I40E_GLSCD_BWLCREDUPDATE 0x000B2148 /* Reset: CORER */ +#define I40E_GLSCD_BWLCREDUPDATE_BWLCREDUPDATE_SHIFT 0 +#define I40E_GLSCD_BWLCREDUPDATE_BWLCREDUPDATE_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSCD_BWLCREDUPDATE_BWLCREDUPDATE_SHIFT) + +#define I40E_GLSCD_BWLLINESPERARB 0x000B214C /* Reset: CORER */ +#define I40E_GLSCD_BWLLINESPERARB_BWLLINESPERARB_SHIFT 0 +#define I40E_GLSCD_BWLLINESPERARB_BWLLINESPERARB_MASK I40E_MASK(0x7FF, I40E_GLSCD_BWLLINESPERARB_BWLLINESPERARB_SHIFT) + +#define I40E_GLSCD_CREDITSPERQUANTA 0x000B2144 /* Reset: CORER */ +#define I40E_GLSCD_CREDITSPERQUANTA_TSCDCREDITSPERQUANTA_SHIFT 0 +#define I40E_GLSCD_CREDITSPERQUANTA_TSCDCREDITSPERQUANTA_MASK I40E_MASK(0xFFFF, I40E_GLSCD_CREDITSPERQUANTA_TSCDCREDITSPERQUANTA_SHIFT) + +#define I40E_GLSCD_ERRSTATREG 0x000B2150 /* Reset: CORER */ +#define I40E_GLSCD_ERRSTATREG_LOOP_DETECTED_SHIFT 0 +#define I40E_GLSCD_ERRSTATREG_LOOP_DETECTED_MASK I40E_MASK(0x1, I40E_GLSCD_ERRSTATREG_LOOP_DETECTED_SHIFT) +#define I40E_GLSCD_ERRSTATREG_SHRTBWLIMUPDATEPER_SHIFT 1 +#define I40E_GLSCD_ERRSTATREG_SHRTBWLIMUPDATEPER_MASK I40E_MASK(0x1, I40E_GLSCD_ERRSTATREG_SHRTBWLIMUPDATEPER_SHIFT) + +#define I40E_GLSCD_IFBCMDH 0x000B20A0 /* Reset: CORER */ +#define I40E_GLSCD_IFBCMDH_FLDOFFS_NUMENTS_SHIFT 0 +#define I40E_GLSCD_IFBCMDH_FLDOFFS_NUMENTS_MASK I40E_MASK(0x7F, I40E_GLSCD_IFBCMDH_FLDOFFS_NUMENTS_SHIFT) +#define I40E_GLSCD_IFBCMDH_FLDSZ_SHIFT 7 +#define I40E_GLSCD_IFBCMDH_FLDSZ_MASK I40E_MASK(0x1F, I40E_GLSCD_IFBCMDH_FLDSZ_SHIFT) +#define I40E_GLSCD_IFBCMDH_VALUE_ENTRYIDX_SHIFT 12 +#define I40E_GLSCD_IFBCMDH_VALUE_ENTRYIDX_MASK I40E_MASK(0x7FFFF, I40E_GLSCD_IFBCMDH_VALUE_ENTRYIDX_SHIFT) +#define I40E_GLSCD_IFBCMDH_RSVD_SHIFT 31 +#define I40E_GLSCD_IFBCMDH_RSVD_MASK I40E_MASK(0x1, I40E_GLSCD_IFBCMDH_RSVD_SHIFT) + +#define I40E_GLSCD_IFBCMDL 0x000B209c /* Reset: CORER */ +#define I40E_GLSCD_IFBCMDL_OPCODE_SHIFT 0 +#define I40E_GLSCD_IFBCMDL_OPCODE_MASK I40E_MASK(0xF, I40E_GLSCD_IFBCMDL_OPCODE_SHIFT) +#define I40E_GLSCD_IFBCMDL_TBLTYPE_SHIFT 4 +#define I40E_GLSCD_IFBCMDL_TBLTYPE_MASK I40E_MASK(0xF, I40E_GLSCD_IFBCMDL_TBLTYPE_SHIFT) +#define I40E_GLSCD_IFBCMDL_TBLENTRYIDX_SHIFT 8 +#define I40E_GLSCD_IFBCMDL_TBLENTRYIDX_MASK I40E_MASK(0x7FF, I40E_GLSCD_IFBCMDL_TBLENTRYIDX_SHIFT) +#define I40E_GLSCD_IFBCMDL_CTRLTYPE_SHIFT 19 +#define I40E_GLSCD_IFBCMDL_CTRLTYPE_MASK I40E_MASK(0x7, I40E_GLSCD_IFBCMDL_CTRLTYPE_SHIFT) +#define I40E_GLSCD_IFBCMDL_RSVD_SHIFT 22 +#define I40E_GLSCD_IFBCMDL_RSVD_MASK I40E_MASK(0x3FF, I40E_GLSCD_IFBCMDL_RSVD_SHIFT) + +#define I40E_GLSCD_IFCTRL 0x000B20A8 /* Reset: CORER */ +#define I40E_GLSCD_IFCTRL_BCMDDB_SHIFT 0 +#define I40E_GLSCD_IFCTRL_BCMDDB_MASK I40E_MASK(0x1, I40E_GLSCD_IFCTRL_BCMDDB_SHIFT) +#define I40E_GLSCD_IFCTRL_ICMDCLRERR_SHIFT 1 +#define 
I40E_GLSCD_IFCTRL_ICMDCLRERR_MASK I40E_MASK(0x1, I40E_GLSCD_IFCTRL_ICMDCLRERR_SHIFT) +#define I40E_GLSCD_IFCTRL_BCMDCLRERR_SHIFT 2 +#define I40E_GLSCD_IFCTRL_BCMDCLRERR_MASK I40E_MASK(0x1, I40E_GLSCD_IFCTRL_BCMDCLRERR_SHIFT) +#define I40E_GLSCD_IFCTRL_SCH_ENA_SHIFT 3 +#define I40E_GLSCD_IFCTRL_SCH_ENA_MASK I40E_MASK(0x1, I40E_GLSCD_IFCTRL_SCH_ENA_SHIFT) +#define I40E_GLSCD_IFCTRL_SMALL_CRED_DISABLE_SHIFT 4 +#define I40E_GLSCD_IFCTRL_SMALL_CRED_DISABLE_MASK I40E_MASK(0x1, I40E_GLSCD_IFCTRL_SMALL_CRED_DISABLE_SHIFT) + +#define I40E_GLSCD_IFDATA(_i) (0x000B2084 + ((_i) * 4)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLSCD_IFDATA_MAX_INDEX 3 +#define I40E_GLSCD_IFDATA_TSCDIFDATA_SHIFT 0 +#define I40E_GLSCD_IFDATA_TSCDIFDATA_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSCD_IFDATA_TSCDIFDATA_SHIFT) + +#define I40E_GLSCD_IFICMDH 0x000B2098 /* Reset: CORER */ +#define I40E_GLSCD_IFICMDH_FLDOFFS_NUMENTS_SHIFT 0 +#define I40E_GLSCD_IFICMDH_FLDOFFS_NUMENTS_MASK I40E_MASK(0x7F, I40E_GLSCD_IFICMDH_FLDOFFS_NUMENTS_SHIFT) +#define I40E_GLSCD_IFICMDH_FLDSZ_SHIFT 7 +#define I40E_GLSCD_IFICMDH_FLDSZ_MASK I40E_MASK(0x1F, I40E_GLSCD_IFICMDH_FLDSZ_SHIFT) +#define I40E_GLSCD_IFICMDH_VALUE_ENTRYIDX_SHIFT 12 +#define I40E_GLSCD_IFICMDH_VALUE_ENTRYIDX_MASK I40E_MASK(0x7FFFF, I40E_GLSCD_IFICMDH_VALUE_ENTRYIDX_SHIFT) +#define I40E_GLSCD_IFICMDH_RSVD_SHIFT 31 +#define I40E_GLSCD_IFICMDH_RSVD_MASK I40E_MASK(0x1, I40E_GLSCD_IFICMDH_RSVD_SHIFT) + +#define I40E_GLSCD_IFICMDL 0x000B2094 /* Reset: CORER */ +#define I40E_GLSCD_IFICMDL_OPCODE_SHIFT 0 +#define I40E_GLSCD_IFICMDL_OPCODE_MASK I40E_MASK(0xF, I40E_GLSCD_IFICMDL_OPCODE_SHIFT) +#define I40E_GLSCD_IFICMDL_TBLTYPE_SHIFT 4 +#define I40E_GLSCD_IFICMDL_TBLTYPE_MASK I40E_MASK(0xF, I40E_GLSCD_IFICMDL_TBLTYPE_SHIFT) +#define I40E_GLSCD_IFICMDL_TBLENTRYIDX_SHIFT 8 +#define I40E_GLSCD_IFICMDL_TBLENTRYIDX_MASK I40E_MASK(0x7FF, I40E_GLSCD_IFICMDL_TBLENTRYIDX_SHIFT) +#define I40E_GLSCD_IFICMDL_CTRLTYPE_SHIFT 19 +#define I40E_GLSCD_IFICMDL_CTRLTYPE_MASK I40E_MASK(0x7, I40E_GLSCD_IFICMDL_CTRLTYPE_SHIFT) +#define I40E_GLSCD_IFICMDL_RSVD_SHIFT 22 +#define I40E_GLSCD_IFICMDL_RSVD_MASK I40E_MASK(0x3FF, I40E_GLSCD_IFICMDL_RSVD_SHIFT) + +#define I40E_GLSCD_IFSTATUS 0x000B20A4 /* Reset: CORER */ +#define I40E_GLSCD_IFSTATUS_ENTRAVAIL_SHIFT 0 +#define I40E_GLSCD_IFSTATUS_ENTRAVAIL_MASK I40E_MASK(0x3F, I40E_GLSCD_IFSTATUS_ENTRAVAIL_SHIFT) +#define I40E_GLSCD_IFSTATUS_ICMDBZ_SHIFT 6 +#define I40E_GLSCD_IFSTATUS_ICMDBZ_MASK I40E_MASK(0x1, I40E_GLSCD_IFSTATUS_ICMDBZ_SHIFT) +#define I40E_GLSCD_IFSTATUS_ICMDERR_SHIFT 7 +#define I40E_GLSCD_IFSTATUS_ICMDERR_MASK I40E_MASK(0x1, I40E_GLSCD_IFSTATUS_ICMDERR_SHIFT) +#define I40E_GLSCD_IFSTATUS_BCMDERR_SHIFT 8 +#define I40E_GLSCD_IFSTATUS_BCMDERR_MASK I40E_MASK(0x1, I40E_GLSCD_IFSTATUS_BCMDERR_SHIFT) +#define I40E_GLSCD_IFSTATUS_SCH_ENA_SHIFT 9 +#define I40E_GLSCD_IFSTATUS_SCH_ENA_MASK I40E_MASK(0x1, I40E_GLSCD_IFSTATUS_SCH_ENA_SHIFT) +#define I40E_GLSCD_IFSTATUS_RSVD_SHIFT 10 +#define I40E_GLSCD_IFSTATUS_RSVD_MASK I40E_MASK(0x3FFFFF, I40E_GLSCD_IFSTATUS_RSVD_SHIFT) + +#define I40E_GLSCD_INCSCHEDCFGCOUNT 0x000B2140 /* Reset: CORER */ +#define I40E_GLSCD_INCSCHEDCFGCOUNT_INCSCHEDCFGCOUNT_SHIFT 0 +#define I40E_GLSCD_INCSCHEDCFGCOUNT_INCSCHEDCFGCOUNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSCD_INCSCHEDCFGCOUNT_INCSCHEDCFGCOUNT_SHIFT) + +#define I40E_GLSCD_LANTCBCMDS 0x000B2154 /* Reset: CORER */ +#define I40E_GLSCD_LANTCBCMDS_NUMLANTCBCMDS_SHIFT 0 +#define I40E_GLSCD_LANTCBCMDS_NUMLANTCBCMDS_MASK I40E_MASK(0x7F, I40E_GLSCD_LANTCBCMDS_NUMLANTCBCMDS_SHIFT) + 
+#define I40E_GLSCD_LLPREALTHRESH 0x000B213C /* Reset: CORER */ +#define I40E_GLSCD_LLPREALTHRESH_LLPREALTHRESH_SHIFT 0 +#define I40E_GLSCD_LLPREALTHRESH_LLPREALTHRESH_MASK I40E_MASK(0xF, I40E_GLSCD_LLPREALTHRESH_LLPREALTHRESH_SHIFT) + +#define I40E_GLSCD_PRGPERFCONTROL(_i) (0x000B20FC + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSCD_PRGPERFCONTROL_MAX_INDEX 15 +#define I40E_GLSCD_PRGPERFCONTROL_COUNTERTYPE_SHIFT 0 +#define I40E_GLSCD_PRGPERFCONTROL_COUNTERTYPE_MASK I40E_MASK(0x7, I40E_GLSCD_PRGPERFCONTROL_COUNTERTYPE_SHIFT) +#define I40E_GLSCD_PRGPERFCONTROL_RESOURCESELECT_SHIFT 4 +#define I40E_GLSCD_PRGPERFCONTROL_RESOURCESELECT_MASK I40E_MASK(0x3, I40E_GLSCD_PRGPERFCONTROL_RESOURCESELECT_SHIFT) +#define I40E_GLSCD_PRGPERFCONTROL_PORTINDEX_SHIFT 6 +#define I40E_GLSCD_PRGPERFCONTROL_PORTINDEX_MASK I40E_MASK(0x3, I40E_GLSCD_PRGPERFCONTROL_PORTINDEX_SHIFT) +#define I40E_GLSCD_PRGPERFCONTROL_TCINDEX_SHIFT 8 +#define I40E_GLSCD_PRGPERFCONTROL_TCINDEX_MASK I40E_MASK(0x7, I40E_GLSCD_PRGPERFCONTROL_TCINDEX_SHIFT) +#define I40E_GLSCD_PRGPERFCONTROL_QSINDEX_SHIFT 16 +#define I40E_GLSCD_PRGPERFCONTROL_QSINDEX_MASK I40E_MASK(0x3FF, I40E_GLSCD_PRGPERFCONTROL_QSINDEX_SHIFT) + +#define I40E_GLSCD_PRGPERFCOUNT(_i) (0x000B20BC + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSCD_PRGPERFCOUNT_MAX_INDEX 15 +#define I40E_GLSCD_PRGPERFCOUNT_PRGPERFCOUNT_SHIFT 0 +#define I40E_GLSCD_PRGPERFCOUNT_PRGPERFCOUNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSCD_PRGPERFCOUNT_PRGPERFCOUNT_SHIFT) + +#define I40E_GLSCD_RAM_DBG_CTL(_i) (0x000B28c0 + ((_i) * 4)) /* _i=0...9 */ /* Reset: POR */ +#define I40E_GLSCD_RAM_DBG_CTL_MAX_INDEX 9 +#define I40E_GLSCD_RAM_DBG_CTL_ADR_SHIFT 0 +#define I40E_GLSCD_RAM_DBG_CTL_ADR_MASK I40E_MASK(0x3FFFF, I40E_GLSCD_RAM_DBG_CTL_ADR_SHIFT) +#define I40E_GLSCD_RAM_DBG_CTL_DW_SEL_SHIFT 18 +#define I40E_GLSCD_RAM_DBG_CTL_DW_SEL_MASK I40E_MASK(0xFF, I40E_GLSCD_RAM_DBG_CTL_DW_SEL_SHIFT) +#define I40E_GLSCD_RAM_DBG_CTL_RD_EN_SHIFT 30 +#define I40E_GLSCD_RAM_DBG_CTL_RD_EN_MASK I40E_MASK(0x1, I40E_GLSCD_RAM_DBG_CTL_RD_EN_SHIFT) +#define I40E_GLSCD_RAM_DBG_CTL_DONE_SHIFT 31 +#define I40E_GLSCD_RAM_DBG_CTL_DONE_MASK I40E_MASK(0x1, I40E_GLSCD_RAM_DBG_CTL_DONE_SHIFT) + +#define I40E_GLSCD_RAM_DBG_DATA(_i) (0x000b28e8 + ((_i) * 4)) /* _i=0...9 */ /* Reset: POR */ +#define I40E_GLSCD_RAM_DBG_DATA_MAX_INDEX 9 +#define I40E_GLSCD_RAM_DBG_DATA_GLSCD_RAM_DBG_DATA_SHIFT 0 +#define I40E_GLSCD_RAM_DBG_DATA_GLSCD_RAM_DBG_DATA_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSCD_RAM_DBG_DATA_GLSCD_RAM_DBG_DATA_SHIFT) + +#define I40E_GLSCD_RLMTBLRD2CMD 0x000B2158 /* Reset: CORER */ +#define I40E_GLSCD_RLMTBLRD2CMD_RLMTBLIDX_SHIFT 0 +#define I40E_GLSCD_RLMTBLRD2CMD_RLMTBLIDX_MASK I40E_MASK(0x3FF, I40E_GLSCD_RLMTBLRD2CMD_RLMTBLIDX_SHIFT) + +#define I40E_GLSCD_RLMTBLRD2DATAHI 0x000B2164 /* Reset: CORER */ +#define I40E_GLSCD_RLMTBLRD2DATAHI_DATA_SHIFT 0 +#define I40E_GLSCD_RLMTBLRD2DATAHI_DATA_MASK I40E_MASK(0x7FFFFFF, I40E_GLSCD_RLMTBLRD2DATAHI_DATA_SHIFT) + +#define I40E_GLSCD_RLMTBLRD2DATALO 0x000B2160 /* Reset: CORER */ +#define I40E_GLSCD_RLMTBLRD2DATALO_DATA_SHIFT 0 +#define I40E_GLSCD_RLMTBLRD2DATALO_DATA_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSCD_RLMTBLRD2DATALO_DATA_SHIFT) + +#define I40E_GLSCD_RLMTBLRD2STATUS 0x000B215C /* Reset: CORER */ +#define I40E_GLSCD_RLMTBLRD2STATUS_VALID_SHIFT 0 +#define I40E_GLSCD_RLMTBLRD2STATUS_VALID_MASK I40E_MASK(0x1, I40E_GLSCD_RLMTBLRD2STATUS_VALID_SHIFT) +#define I40E_GLSCD_RLMTBLRD2STATUS_RSVD_SHIFT 1 +#define I40E_GLSCD_RLMTBLRD2STATUS_RSVD_MASK 
I40E_MASK(0x7FFFFFFF, I40E_GLSCD_RLMTBLRD2STATUS_RSVD_SHIFT) + +#define I40E_GLSCD_RLMTBLRDCMD 0x000B20AC /* Reset: CORER */ +#define I40E_GLSCD_RLMTBLRDCMD_RLMTBLIDX_SHIFT 0 +#define I40E_GLSCD_RLMTBLRDCMD_RLMTBLIDX_MASK I40E_MASK(0x3FF, I40E_GLSCD_RLMTBLRDCMD_RLMTBLIDX_SHIFT) + +#define I40E_GLSCD_RLMTBLRDDATAHI 0x000B20B8 /* Reset: CORER */ +#define I40E_GLSCD_RLMTBLRDDATAHI_DATA_SHIFT 0 +#define I40E_GLSCD_RLMTBLRDDATAHI_DATA_MASK I40E_MASK(0x7FFFFFF, I40E_GLSCD_RLMTBLRDDATAHI_DATA_SHIFT) + +#define I40E_GLSCD_RLMTBLRDDATALO 0x000B20B4 /* Reset: CORER */ +#define I40E_GLSCD_RLMTBLRDDATALO_DATA_SHIFT 0 +#define I40E_GLSCD_RLMTBLRDDATALO_DATA_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSCD_RLMTBLRDDATALO_DATA_SHIFT) + +#define I40E_GLSCD_RLMTBLRDSTATUS 0x000B20B0 /* Reset: CORER */ +#define I40E_GLSCD_RLMTBLRDSTATUS_VALID_SHIFT 0 +#define I40E_GLSCD_RLMTBLRDSTATUS_VALID_MASK I40E_MASK(0x1, I40E_GLSCD_RLMTBLRDSTATUS_VALID_SHIFT) +#define I40E_GLSCD_RLMTBLRDSTATUS_RSVD_SHIFT 1 +#define I40E_GLSCD_RLMTBLRDSTATUS_RSVD_MASK I40E_MASK(0x7FFFFFFF, I40E_GLSCD_RLMTBLRDSTATUS_RSVD_SHIFT) + +#define I40E_PFSCD_DEFQSETHNDL 0x000B2000 /* Reset: PFR */ +#define I40E_PFSCD_DEFQSETHNDL_DEFQSETHNDL_SHIFT 0 +#define I40E_PFSCD_DEFQSETHNDL_DEFQSETHNDL_MASK I40E_MASK(0xFFFF, I40E_PFSCD_DEFQSETHNDL_DEFQSETHNDL_SHIFT) + +/* PF - Virtualization PF Registers */ + +#define I40E_GL_MDCK_RX 0x0012A50C /* Reset: CORER */ +#define I40E_GL_MDCK_RX_DESC_ADDR_SHIFT 0 +#define I40E_GL_MDCK_RX_DESC_ADDR_MASK I40E_MASK(0x1, I40E_GL_MDCK_RX_DESC_ADDR_SHIFT) + +#define I40E_GL_MDCK_TCMD 0x000E648C /* Reset: CORER */ +#define I40E_GL_MDCK_TCMD_DESC_ADDR_SHIFT 0 +#define I40E_GL_MDCK_TCMD_DESC_ADDR_MASK I40E_MASK(0x1, I40E_GL_MDCK_TCMD_DESC_ADDR_SHIFT) +#define I40E_GL_MDCK_TCMD_MAX_BUFF_SHIFT 2 +#define I40E_GL_MDCK_TCMD_MAX_BUFF_MASK I40E_MASK(0x1, I40E_GL_MDCK_TCMD_MAX_BUFF_SHIFT) +#define I40E_GL_MDCK_TCMD_MAX_HEAD_SHIFT 3 +#define I40E_GL_MDCK_TCMD_MAX_HEAD_MASK I40E_MASK(0x1, I40E_GL_MDCK_TCMD_MAX_HEAD_SHIFT) +#define I40E_GL_MDCK_TCMD_NO_HEAD_SHIFT 4 +#define I40E_GL_MDCK_TCMD_NO_HEAD_MASK I40E_MASK(0x1, I40E_GL_MDCK_TCMD_NO_HEAD_SHIFT) +#define I40E_GL_MDCK_TCMD_TOO_LONG_SHIFT 5 +#define I40E_GL_MDCK_TCMD_TOO_LONG_MASK I40E_MASK(0x1, I40E_GL_MDCK_TCMD_TOO_LONG_SHIFT) +#define I40E_GL_MDCK_TCMD_SINGLE_TX_SIZE_SHIFT 6 +#define I40E_GL_MDCK_TCMD_SINGLE_TX_SIZE_MASK I40E_MASK(0x1, I40E_GL_MDCK_TCMD_SINGLE_TX_SIZE_SHIFT) +#define I40E_GL_MDCK_TCMD_ENDLESS_TX_SHIFT 7 +#define I40E_GL_MDCK_TCMD_ENDLESS_TX_MASK I40E_MASK(0x1, I40E_GL_MDCK_TCMD_ENDLESS_TX_SHIFT) +#define I40E_GL_MDCK_TCMD_BAD_LSO_LEN_SHIFT 8 +#define I40E_GL_MDCK_TCMD_BAD_LSO_LEN_MASK I40E_MASK(0x1, I40E_GL_MDCK_TCMD_BAD_LSO_LEN_SHIFT) +#define I40E_GL_MDCK_TCMD_BAD_LSO_MSS_SHIFT 9 +#define I40E_GL_MDCK_TCMD_BAD_LSO_MSS_MASK I40E_MASK(0x1, I40E_GL_MDCK_TCMD_BAD_LSO_MSS_SHIFT) +#define I40E_GL_MDCK_TCMD_M_CONTEXTS_SHIFT 12 +#define I40E_GL_MDCK_TCMD_M_CONTEXTS_MASK I40E_MASK(0x1, I40E_GL_MDCK_TCMD_M_CONTEXTS_SHIFT) +#define I40E_GL_MDCK_TCMD_BAD_DESC_SEQUENCE_SHIFT 13 +#define I40E_GL_MDCK_TCMD_BAD_DESC_SEQUENCE_MASK I40E_MASK(0x1, I40E_GL_MDCK_TCMD_BAD_DESC_SEQUENCE_SHIFT) +#define I40E_GL_MDCK_TCMD_BAD_FC_FD_DESC_SHIFT 14 +#define I40E_GL_MDCK_TCMD_BAD_FC_FD_DESC_MASK I40E_MASK(0x1, I40E_GL_MDCK_TCMD_BAD_FC_FD_DESC_SHIFT) +#define I40E_GL_MDCK_TCMD_NO_PACKET_SHIFT 15 +#define I40E_GL_MDCK_TCMD_NO_PACKET_MASK I40E_MASK(0x1, I40E_GL_MDCK_TCMD_NO_PACKET_SHIFT) +#define I40E_GL_MDCK_TCMD_DIS_DIF_DIX_SHIFT 16 +#define I40E_GL_MDCK_TCMD_DIS_DIF_DIX_MASK I40E_MASK(0x1, 
I40E_GL_MDCK_TCMD_DIS_DIF_DIX_SHIFT) +#define I40E_GL_MDCK_TCMD_DIS_FLEX_SHIFT 17 +#define I40E_GL_MDCK_TCMD_DIS_FLEX_MASK I40E_MASK(0x1, I40E_GL_MDCK_TCMD_DIS_FLEX_SHIFT) +#define I40E_GL_MDCK_TCMD_ZERO_BSIZE_SHIFT 18 +#define I40E_GL_MDCK_TCMD_ZERO_BSIZE_MASK I40E_MASK(0x1, I40E_GL_MDCK_TCMD_ZERO_BSIZE_SHIFT) + +#define I40E_GL_MDCK_TDAT 0x000442F4 /* Reset: CORER */ +#define I40E_GL_MDCK_TDAT_BIG_OFFSET_SHIFT 0 +#define I40E_GL_MDCK_TDAT_BIG_OFFSET_MASK I40E_MASK(0x1, I40E_GL_MDCK_TDAT_BIG_OFFSET_SHIFT) +#define I40E_GL_MDCK_TDAT_BUFF_ADDR_SHIFT 1 +#define I40E_GL_MDCK_TDAT_BUFF_ADDR_MASK I40E_MASK(0x1, I40E_GL_MDCK_TDAT_BUFF_ADDR_SHIFT) +#define I40E_GL_MDCK_TDAT_MAL_LENGTH_DIS_SHIFT 2 +#define I40E_GL_MDCK_TDAT_MAL_LENGTH_DIS_MASK I40E_MASK(0x1, I40E_GL_MDCK_TDAT_MAL_LENGTH_DIS_SHIFT) +#define I40E_GL_MDCK_TDAT_MAL_CMD_DIS_SHIFT 3 +#define I40E_GL_MDCK_TDAT_MAL_CMD_DIS_MASK I40E_MASK(0x1, I40E_GL_MDCK_TDAT_MAL_CMD_DIS_SHIFT) + +#define I40E_PF_VIRT_VSTATUS 0x0009C400 /* Reset: PFR */ +#define I40E_PF_VIRT_VSTATUS_NUM_VFS_SHIFT 0 +#define I40E_PF_VIRT_VSTATUS_NUM_VFS_MASK I40E_MASK(0xFF, I40E_PF_VIRT_VSTATUS_NUM_VFS_SHIFT) +#define I40E_PF_VIRT_VSTATUS_TOTAL_VFS_SHIFT 8 +#define I40E_PF_VIRT_VSTATUS_TOTAL_VFS_MASK I40E_MASK(0xFF, I40E_PF_VIRT_VSTATUS_TOTAL_VFS_SHIFT) +#define I40E_PF_VIRT_VSTATUS_IOV_ACTIVE_SHIFT 16 +#define I40E_PF_VIRT_VSTATUS_IOV_ACTIVE_MASK I40E_MASK(0x1, I40E_PF_VIRT_VSTATUS_IOV_ACTIVE_SHIFT) + +#define I40E_PF_VT_PFALLOC_CSR 0x00078D80 /* Reset: CORER */ +#define I40E_PF_VT_PFALLOC_CSR_FIRSTVF_SHIFT 0 +#define I40E_PF_VT_PFALLOC_CSR_FIRSTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_CSR_FIRSTVF_SHIFT) +#define I40E_PF_VT_PFALLOC_CSR_LASTVF_SHIFT 8 +#define I40E_PF_VT_PFALLOC_CSR_LASTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_CSR_LASTVF_SHIFT) +#define I40E_PF_VT_PFALLOC_CSR_VALID_SHIFT 31 +#define I40E_PF_VT_PFALLOC_CSR_VALID_MASK I40E_MASK(0x1, I40E_PF_VT_PFALLOC_CSR_VALID_SHIFT) + +#define I40E_PF_VT_PFALLOC_INT 0x0003F080 /* Reset: CORER */ +#define I40E_PF_VT_PFALLOC_INT_FIRSTVF_SHIFT 0 +#define I40E_PF_VT_PFALLOC_INT_FIRSTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_INT_FIRSTVF_SHIFT) +#define I40E_PF_VT_PFALLOC_INT_LASTVF_SHIFT 8 +#define I40E_PF_VT_PFALLOC_INT_LASTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_INT_LASTVF_SHIFT) +#define I40E_PF_VT_PFALLOC_INT_VALID_SHIFT 31 +#define I40E_PF_VT_PFALLOC_INT_VALID_MASK I40E_MASK(0x1, I40E_PF_VT_PFALLOC_INT_VALID_SHIFT) + +#define I40E_PF_VT_PFALLOC_PMAT 0x000C0680 /* Reset: CORER */ +#define I40E_PF_VT_PFALLOC_PMAT_FIRSTVF_SHIFT 0 +#define I40E_PF_VT_PFALLOC_PMAT_FIRSTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_PMAT_FIRSTVF_SHIFT) +#define I40E_PF_VT_PFALLOC_PMAT_LASTVF_SHIFT 8 +#define I40E_PF_VT_PFALLOC_PMAT_LASTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_PMAT_LASTVF_SHIFT) +#define I40E_PF_VT_PFALLOC_PMAT_VALID_SHIFT 31 +#define I40E_PF_VT_PFALLOC_PMAT_VALID_MASK I40E_MASK(0x1, I40E_PF_VT_PFALLOC_PMAT_VALID_SHIFT) + +#define I40E_PF_VT_PFALLOC_TSCD 0x000B2280 /* Reset: CORER */ +#define I40E_PF_VT_PFALLOC_TSCD_FIRSTVF_SHIFT 0 +#define I40E_PF_VT_PFALLOC_TSCD_FIRSTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_TSCD_FIRSTVF_SHIFT) +#define I40E_PF_VT_PFALLOC_TSCD_LASTVF_SHIFT 8 +#define I40E_PF_VT_PFALLOC_TSCD_LASTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_TSCD_LASTVF_SHIFT) +#define I40E_PF_VT_PFALLOC_TSCD_VALID_SHIFT 31 +#define I40E_PF_VT_PFALLOC_TSCD_VALID_MASK I40E_MASK(0x1, I40E_PF_VT_PFALLOC_TSCD_VALID_SHIFT) + +#define I40E_PF_VT_PFALLOC_VMLR 0x00092580 /* Reset: CORER */ +#define 
I40E_PF_VT_PFALLOC_VMLR_FIRSTVF_SHIFT 0 +#define I40E_PF_VT_PFALLOC_VMLR_FIRSTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_VMLR_FIRSTVF_SHIFT) +#define I40E_PF_VT_PFALLOC_VMLR_LASTVF_SHIFT 8 +#define I40E_PF_VT_PFALLOC_VMLR_LASTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_VMLR_LASTVF_SHIFT) +#define I40E_PF_VT_PFALLOC_VMLR_VALID_SHIFT 31 +#define I40E_PF_VT_PFALLOC_VMLR_VALID_MASK I40E_MASK(0x1, I40E_PF_VT_PFALLOC_VMLR_VALID_SHIFT) + +/* PF - VSI Context */ + +#define I40E_VSI_L2TAGSTXVALID(_VSI) (0x00042800 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */ +#define I40E_VSI_L2TAGSTXVALID_MAX_INDEX 383 +#define I40E_VSI_L2TAGSTXVALID_L2TAG1INSERTID_SHIFT 0 +#define I40E_VSI_L2TAGSTXVALID_L2TAG1INSERTID_MASK I40E_MASK(0x7, I40E_VSI_L2TAGSTXVALID_L2TAG1INSERTID_SHIFT) +#define I40E_VSI_L2TAGSTXVALID_L2TAG1INSERTID_VALID_SHIFT 3 +#define I40E_VSI_L2TAGSTXVALID_L2TAG1INSERTID_VALID_MASK I40E_MASK(0x1, I40E_VSI_L2TAGSTXVALID_L2TAG1INSERTID_VALID_SHIFT) +#define I40E_VSI_L2TAGSTXVALID_L2TAG2INSERTID_SHIFT 4 +#define I40E_VSI_L2TAGSTXVALID_L2TAG2INSERTID_MASK I40E_MASK(0x7, I40E_VSI_L2TAGSTXVALID_L2TAG2INSERTID_SHIFT) +#define I40E_VSI_L2TAGSTXVALID_L2TAG2INSERTID_VALID_SHIFT 7 +#define I40E_VSI_L2TAGSTXVALID_L2TAG2INSERTID_VALID_MASK I40E_MASK(0x1, I40E_VSI_L2TAGSTXVALID_L2TAG2INSERTID_VALID_SHIFT) +#define I40E_VSI_L2TAGSTXVALID_TIR0INSERTID_SHIFT 16 +#define I40E_VSI_L2TAGSTXVALID_TIR0INSERTID_MASK I40E_MASK(0x7, I40E_VSI_L2TAGSTXVALID_TIR0INSERTID_SHIFT) +#define I40E_VSI_L2TAGSTXVALID_TIR0_INSERT_SHIFT 19 +#define I40E_VSI_L2TAGSTXVALID_TIR0_INSERT_MASK I40E_MASK(0x1, I40E_VSI_L2TAGSTXVALID_TIR0_INSERT_SHIFT) +#define I40E_VSI_L2TAGSTXVALID_TIR1INSERTID_SHIFT 20 +#define I40E_VSI_L2TAGSTXVALID_TIR1INSERTID_MASK I40E_MASK(0x7, I40E_VSI_L2TAGSTXVALID_TIR1INSERTID_SHIFT) +#define I40E_VSI_L2TAGSTXVALID_TIR1_INSERT_SHIFT 23 +#define I40E_VSI_L2TAGSTXVALID_TIR1_INSERT_MASK I40E_MASK(0x1, I40E_VSI_L2TAGSTXVALID_TIR1_INSERT_SHIFT) +#define I40E_VSI_L2TAGSTXVALID_TIR2INSERTID_SHIFT 24 +#define I40E_VSI_L2TAGSTXVALID_TIR2INSERTID_MASK I40E_MASK(0x7, I40E_VSI_L2TAGSTXVALID_TIR2INSERTID_SHIFT) +#define I40E_VSI_L2TAGSTXVALID_TIR2_INSERT_SHIFT 27 +#define I40E_VSI_L2TAGSTXVALID_TIR2_INSERT_MASK I40E_MASK(0x1, I40E_VSI_L2TAGSTXVALID_TIR2_INSERT_SHIFT) + +#define I40E_VSI_PORT(_VSI) (0x000B22C0 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */ +#define I40E_VSI_PORT_MAX_INDEX 383 +#define I40E_VSI_PORT_PORT_NUM_SHIFT 0 +#define I40E_VSI_PORT_PORT_NUM_MASK I40E_MASK(0x3, I40E_VSI_PORT_PORT_NUM_SHIFT) + +#define I40E_VSI_RUPR(_VSI) (0x00050000 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */ +#define I40E_VSI_RUPR_MAX_INDEX 383 +#define I40E_VSI_RUPR_UP0_SHIFT 0 +#define I40E_VSI_RUPR_UP0_MASK I40E_MASK(0x7, I40E_VSI_RUPR_UP0_SHIFT) +#define I40E_VSI_RUPR_UP1_SHIFT 3 +#define I40E_VSI_RUPR_UP1_MASK I40E_MASK(0x7, I40E_VSI_RUPR_UP1_SHIFT) +#define I40E_VSI_RUPR_UP2_SHIFT 6 +#define I40E_VSI_RUPR_UP2_MASK I40E_MASK(0x7, I40E_VSI_RUPR_UP2_SHIFT) +#define I40E_VSI_RUPR_UP3_SHIFT 9 +#define I40E_VSI_RUPR_UP3_MASK I40E_MASK(0x7, I40E_VSI_RUPR_UP3_SHIFT) +#define I40E_VSI_RUPR_UP4_SHIFT 12 +#define I40E_VSI_RUPR_UP4_MASK I40E_MASK(0x7, I40E_VSI_RUPR_UP4_SHIFT) +#define I40E_VSI_RUPR_UP5_SHIFT 15 +#define I40E_VSI_RUPR_UP5_MASK I40E_MASK(0x7, I40E_VSI_RUPR_UP5_SHIFT) +#define I40E_VSI_RUPR_UP6_SHIFT 18 +#define I40E_VSI_RUPR_UP6_MASK I40E_MASK(0x7, I40E_VSI_RUPR_UP6_SHIFT) +#define I40E_VSI_RUPR_UP7_SHIFT 21 +#define I40E_VSI_RUPR_UP7_MASK I40E_MASK(0x7, I40E_VSI_RUPR_UP7_SHIFT) + +#define I40E_VSI_RXSWCTRL(_VSI) 
(0x00208800 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */ +#define I40E_VSI_RXSWCTRL_MAX_INDEX 383 +#define I40E_VSI_RXSWCTRL_MACVSIPRUNEENABLE_SHIFT 0 +#define I40E_VSI_RXSWCTRL_MACVSIPRUNEENABLE_MASK I40E_MASK(0x1, I40E_VSI_RXSWCTRL_MACVSIPRUNEENABLE_SHIFT) +#define I40E_VSI_RXSWCTRL_VLANPRUNEENABLE_SHIFT 1 +#define I40E_VSI_RXSWCTRL_VLANPRUNEENABLE_MASK I40E_MASK(0x1, I40E_VSI_RXSWCTRL_VLANPRUNEENABLE_SHIFT) + +#define I40E_VSI_SRCSWCTRL(_VSI) (0x00209800 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */ +#define I40E_VSI_SRCSWCTRL_MAX_INDEX 383 +#define I40E_VSI_SRCSWCTRL_SWID_SHIFT 0 +#define I40E_VSI_SRCSWCTRL_SWID_MASK I40E_MASK(0xFFF, I40E_VSI_SRCSWCTRL_SWID_SHIFT) +#define I40E_VSI_SRCSWCTRL_ISNSTAG_SHIFT 12 +#define I40E_VSI_SRCSWCTRL_ISNSTAG_MASK I40E_MASK(0x1, I40E_VSI_SRCSWCTRL_ISNSTAG_SHIFT) +#define I40E_VSI_SRCSWCTRL_SWIDVALID_SHIFT 17 +#define I40E_VSI_SRCSWCTRL_SWIDVALID_MASK I40E_MASK(0x1, I40E_VSI_SRCSWCTRL_SWIDVALID_SHIFT) +#define I40E_VSI_SRCSWCTRL_ALLOWDESTOVERRIDE_SHIFT 19 +#define I40E_VSI_SRCSWCTRL_ALLOWDESTOVERRIDE_MASK I40E_MASK(0x1, I40E_VSI_SRCSWCTRL_ALLOWDESTOVERRIDE_SHIFT) +#define I40E_VSI_SRCSWCTRL_ALLOWLOOPBACK_SHIFT 20 +#define I40E_VSI_SRCSWCTRL_ALLOWLOOPBACK_MASK I40E_MASK(0x1, I40E_VSI_SRCSWCTRL_ALLOWLOOPBACK_SHIFT) +#define I40E_VSI_SRCSWCTRL_LANENABLE_SHIFT 21 +#define I40E_VSI_SRCSWCTRL_LANENABLE_MASK I40E_MASK(0x1, I40E_VSI_SRCSWCTRL_LANENABLE_SHIFT) +#define I40E_VSI_SRCSWCTRL_VLANAS_SHIFT 22 +#define I40E_VSI_SRCSWCTRL_VLANAS_MASK I40E_MASK(0x1, I40E_VSI_SRCSWCTRL_VLANAS_SHIFT) +#define I40E_VSI_SRCSWCTRL_MACAS_SHIFT 23 +#define I40E_VSI_SRCSWCTRL_MACAS_MASK I40E_MASK(0x1, I40E_VSI_SRCSWCTRL_MACAS_SHIFT) + +#define I40E_VSI_TAIR(_VSI) (0x00041800 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */ +#define I40E_VSI_TAIR_MAX_INDEX 383 +#define I40E_VSI_TAIR_PORT_TAG_ID_SHIFT 0 +#define I40E_VSI_TAIR_PORT_TAG_ID_MASK I40E_MASK(0xFFFF, I40E_VSI_TAIR_PORT_TAG_ID_SHIFT) + +#define I40E_VSI_TAR(_VSI) (0x00042000 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */ +#define I40E_VSI_TAR_MAX_INDEX 383 +#define I40E_VSI_TAR_ACCEPTTAGGED_SHIFT 0 +#define I40E_VSI_TAR_ACCEPTTAGGED_MASK I40E_MASK(0x3FF, I40E_VSI_TAR_ACCEPTTAGGED_SHIFT) +#define I40E_VSI_TAR_ACCEPTUNTAGGED_SHIFT 16 +#define I40E_VSI_TAR_ACCEPTUNTAGGED_MASK I40E_MASK(0x3FF, I40E_VSI_TAR_ACCEPTUNTAGGED_SHIFT) + +#define I40E_VSI_TIR_0(_VSI) (0x00040000 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */ +#define I40E_VSI_TIR_0_MAX_INDEX 383 +#define I40E_VSI_TIR_0_PORT_TAG_ID_SHIFT 0 +#define I40E_VSI_TIR_0_PORT_TAG_ID_MASK I40E_MASK(0xFFFF, I40E_VSI_TIR_0_PORT_TAG_ID_SHIFT) + +#define I40E_VSI_TIR_1(_VSI) (0x00040800 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */ +#define I40E_VSI_TIR_1_MAX_INDEX 383 +#define I40E_VSI_TIR_1_PORT_TAG_ID_SHIFT 0 +#define I40E_VSI_TIR_1_PORT_TAG_ID_MASK I40E_MASK(0xFFFFFFFF, I40E_VSI_TIR_1_PORT_TAG_ID_SHIFT) + +#define I40E_VSI_TIR_2(_VSI) (0x00041000 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */ +#define I40E_VSI_TIR_2_MAX_INDEX 383 +#define I40E_VSI_TIR_2_PORT_TAG_ID_SHIFT 0 +#define I40E_VSI_TIR_2_PORT_TAG_ID_MASK I40E_MASK(0xFFFF, I40E_VSI_TIR_2_PORT_TAG_ID_SHIFT) + +#define I40E_VSI_TSR(_VSI) (0x00050800 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */ +#define I40E_VSI_TSR_MAX_INDEX 383 +#define I40E_VSI_TSR_STRIPTAG_SHIFT 0 +#define I40E_VSI_TSR_STRIPTAG_MASK I40E_MASK(0x3FF, I40E_VSI_TSR_STRIPTAG_SHIFT) +#define I40E_VSI_TSR_SHOWTAG_SHIFT 10 +#define I40E_VSI_TSR_SHOWTAG_MASK I40E_MASK(0x3FF, I40E_VSI_TSR_SHOWTAG_SHIFT) +#define 
I40E_VSI_TSR_SHOWPRIONLY_SHIFT 20 +#define I40E_VSI_TSR_SHOWPRIONLY_MASK I40E_MASK(0x3FF, I40E_VSI_TSR_SHOWPRIONLY_SHIFT) + +#define I40E_VSI_TUPIOM(_VSI) (0x00043800 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */ +#define I40E_VSI_TUPIOM_MAX_INDEX 383 +#define I40E_VSI_TUPIOM_UP0_SHIFT 0 +#define I40E_VSI_TUPIOM_UP0_MASK I40E_MASK(0x7, I40E_VSI_TUPIOM_UP0_SHIFT) +#define I40E_VSI_TUPIOM_UP1_SHIFT 3 +#define I40E_VSI_TUPIOM_UP1_MASK I40E_MASK(0x7, I40E_VSI_TUPIOM_UP1_SHIFT) +#define I40E_VSI_TUPIOM_UP2_SHIFT 6 +#define I40E_VSI_TUPIOM_UP2_MASK I40E_MASK(0x7, I40E_VSI_TUPIOM_UP2_SHIFT) +#define I40E_VSI_TUPIOM_UP3_SHIFT 9 +#define I40E_VSI_TUPIOM_UP3_MASK I40E_MASK(0x7, I40E_VSI_TUPIOM_UP3_SHIFT) +#define I40E_VSI_TUPIOM_UP4_SHIFT 12 +#define I40E_VSI_TUPIOM_UP4_MASK I40E_MASK(0x7, I40E_VSI_TUPIOM_UP4_SHIFT) +#define I40E_VSI_TUPIOM_UP5_SHIFT 15 +#define I40E_VSI_TUPIOM_UP5_MASK I40E_MASK(0x7, I40E_VSI_TUPIOM_UP5_SHIFT) +#define I40E_VSI_TUPIOM_UP6_SHIFT 18 +#define I40E_VSI_TUPIOM_UP6_MASK I40E_MASK(0x7, I40E_VSI_TUPIOM_UP6_SHIFT) +#define I40E_VSI_TUPIOM_UP7_SHIFT 21 +#define I40E_VSI_TUPIOM_UP7_MASK I40E_MASK(0x7, I40E_VSI_TUPIOM_UP7_SHIFT) + +#define I40E_VSI_TUPR(_VSI) (0x00043000 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */ +#define I40E_VSI_TUPR_MAX_INDEX 383 +#define I40E_VSI_TUPR_UP0_SHIFT 0 +#define I40E_VSI_TUPR_UP0_MASK I40E_MASK(0x7, I40E_VSI_TUPR_UP0_SHIFT) +#define I40E_VSI_TUPR_UP1_SHIFT 3 +#define I40E_VSI_TUPR_UP1_MASK I40E_MASK(0x7, I40E_VSI_TUPR_UP1_SHIFT) +#define I40E_VSI_TUPR_UP2_SHIFT 6 +#define I40E_VSI_TUPR_UP2_MASK I40E_MASK(0x7, I40E_VSI_TUPR_UP2_SHIFT) +#define I40E_VSI_TUPR_UP3_SHIFT 9 +#define I40E_VSI_TUPR_UP3_MASK I40E_MASK(0x7, I40E_VSI_TUPR_UP3_SHIFT) +#define I40E_VSI_TUPR_UP4_SHIFT 12 +#define I40E_VSI_TUPR_UP4_MASK I40E_MASK(0x7, I40E_VSI_TUPR_UP4_SHIFT) +#define I40E_VSI_TUPR_UP5_SHIFT 15 +#define I40E_VSI_TUPR_UP5_MASK I40E_MASK(0x7, I40E_VSI_TUPR_UP5_SHIFT) +#define I40E_VSI_TUPR_UP6_SHIFT 18 +#define I40E_VSI_TUPR_UP6_MASK I40E_MASK(0x7, I40E_VSI_TUPR_UP6_SHIFT) +#define I40E_VSI_TUPR_UP7_SHIFT 21 +#define I40E_VSI_TUPR_UP7_MASK I40E_MASK(0x7, I40E_VSI_TUPR_UP7_SHIFT) + +#define I40E_VSI_VSI2F(_VSI) (0x0020B800 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */ +#define I40E_VSI_VSI2F_MAX_INDEX 383 +#define I40E_VSI_VSI2F_VFVMNUMBER_SHIFT 0 +#define I40E_VSI_VSI2F_VFVMNUMBER_MASK I40E_MASK(0x3FF, I40E_VSI_VSI2F_VFVMNUMBER_SHIFT) +#define I40E_VSI_VSI2F_PFNUMBER_SHIFT 10 +#define I40E_VSI_VSI2F_PFNUMBER_MASK I40E_MASK(0xF, I40E_VSI_VSI2F_PFNUMBER_SHIFT) +#define I40E_VSI_VSI2F_FUNCTIONTYPE_SHIFT 14 +#define I40E_VSI_VSI2F_FUNCTIONTYPE_MASK I40E_MASK(0x3, I40E_VSI_VSI2F_FUNCTIONTYPE_SHIFT) +#define I40E_VSI_VSI2F_BUFFERNUMBER_SHIFT 16 +#define I40E_VSI_VSI2F_BUFFERNUMBER_MASK I40E_MASK(0x7, I40E_VSI_VSI2F_BUFFERNUMBER_SHIFT) +#define I40E_VSI_VSI2F_RESERVED_5_SHIFT 19 +#define I40E_VSI_VSI2F_RESERVED_5_MASK I40E_MASK(0x7, I40E_VSI_VSI2F_RESERVED_5_SHIFT) +#define I40E_VSI_VSI2F_VSI_ENABLE_SHIFT 22 +#define I40E_VSI_VSI2F_VSI_ENABLE_MASK I40E_MASK(0x1, I40E_VSI_VSI2F_VSI_ENABLE_SHIFT) +#define I40E_VSI_VSI2F_VSI_NUMBER_SHIFT 23 +#define I40E_VSI_VSI2F_VSI_NUMBER_MASK I40E_MASK(0x1FF, I40E_VSI_VSI2F_VSI_NUMBER_SHIFT) + +/* PF - Wake-Up and Proxying Registers */ + +#define I40E_PFPM_FHFT_DATA(_i, _j) (0x00060000 + ((_i) * 4096 + (_j) * 128)) /* _i=0...7, _j=0...31 */ /* Reset: POR */ +#define I40E_PFPM_FHFT_DATA_MAX_INDEX 7 +#define I40E_PFPM_FHFT_DATA_DWORD_SHIFT 0 +#define I40E_PFPM_FHFT_DATA_DWORD_MASK I40E_MASK(0xFFFFFFFF, 
I40E_PFPM_FHFT_DATA_DWORD_SHIFT) + +#define I40E_PFPM_FHFT_MASK(_i, _j) (0x00068000 + ((_i) * 1024 + (_j) * 128)) /* _i=0...7, _j=0...7 */ /* Reset: POR */ +#define I40E_PFPM_FHFT_MASK_MAX_INDEX 7 +#define I40E_PFPM_FHFT_MASK_MASK_SHIFT 0 +#define I40E_PFPM_FHFT_MASK_MASK_MASK I40E_MASK(0xFFFF, I40E_PFPM_FHFT_MASK_MASK_SHIFT) + +#define I40E_PFPM_PROXYFC 0x00245A80 /* Reset: POR */ +#define I40E_PFPM_PROXYFC_PPROXYE_SHIFT 0 +#define I40E_PFPM_PROXYFC_PPROXYE_MASK I40E_MASK(0x1, I40E_PFPM_PROXYFC_PPROXYE_SHIFT) +#define I40E_PFPM_PROXYFC_EX_SHIFT 1 +#define I40E_PFPM_PROXYFC_EX_MASK I40E_MASK(0x1, I40E_PFPM_PROXYFC_EX_SHIFT) +#define I40E_PFPM_PROXYFC_ARP_SHIFT 4 +#define I40E_PFPM_PROXYFC_ARP_MASK I40E_MASK(0x1, I40E_PFPM_PROXYFC_ARP_SHIFT) +#define I40E_PFPM_PROXYFC_ARP_DIRECTED_SHIFT 5 +#define I40E_PFPM_PROXYFC_ARP_DIRECTED_MASK I40E_MASK(0x1, I40E_PFPM_PROXYFC_ARP_DIRECTED_SHIFT) +#define I40E_PFPM_PROXYFC_NS_SHIFT 9 +#define I40E_PFPM_PROXYFC_NS_MASK I40E_MASK(0x1, I40E_PFPM_PROXYFC_NS_SHIFT) +#define I40E_PFPM_PROXYFC_NS_DIRECTED_SHIFT 10 +#define I40E_PFPM_PROXYFC_NS_DIRECTED_MASK I40E_MASK(0x1, I40E_PFPM_PROXYFC_NS_DIRECTED_SHIFT) +#define I40E_PFPM_PROXYFC_MLD_SHIFT 12 +#define I40E_PFPM_PROXYFC_MLD_MASK I40E_MASK(0x1, I40E_PFPM_PROXYFC_MLD_SHIFT) + +#define I40E_PFPM_PROXYS 0x00245B80 /* Reset: POR */ +#define I40E_PFPM_PROXYS_EX_SHIFT 1 +#define I40E_PFPM_PROXYS_EX_MASK I40E_MASK(0x1, I40E_PFPM_PROXYS_EX_SHIFT) +#define I40E_PFPM_PROXYS_ARP_SHIFT 4 +#define I40E_PFPM_PROXYS_ARP_MASK I40E_MASK(0x1, I40E_PFPM_PROXYS_ARP_SHIFT) +#define I40E_PFPM_PROXYS_ARP_DIRECTED_SHIFT 5 +#define I40E_PFPM_PROXYS_ARP_DIRECTED_MASK I40E_MASK(0x1, I40E_PFPM_PROXYS_ARP_DIRECTED_SHIFT) +#define I40E_PFPM_PROXYS_NS_SHIFT 9 +#define I40E_PFPM_PROXYS_NS_MASK I40E_MASK(0x1, I40E_PFPM_PROXYS_NS_SHIFT) +#define I40E_PFPM_PROXYS_NS_DIRECTED_SHIFT 10 +#define I40E_PFPM_PROXYS_NS_DIRECTED_MASK I40E_MASK(0x1, I40E_PFPM_PROXYS_NS_DIRECTED_SHIFT) +#define I40E_PFPM_PROXYS_MLD_SHIFT 12 +#define I40E_PFPM_PROXYS_MLD_MASK I40E_MASK(0x1, I40E_PFPM_PROXYS_MLD_SHIFT) + +/* VF - Admin Queue */ + +/* VF - General Registers */ + +/* VF - Interrupts */ + +#define I40E_VFINT_ITR0_STAT1(_i) (0x00004400 + ((_i) * 4)) /* _i=0...2 */ /* Reset: VFR */ +#define I40E_VFINT_ITR0_STAT1_MAX_INDEX 2 +#define I40E_VFINT_ITR0_STAT1_ITR_EXPIRE_SHIFT 0 +#define I40E_VFINT_ITR0_STAT1_ITR_EXPIRE_MASK I40E_MASK(0x1, I40E_VFINT_ITR0_STAT1_ITR_EXPIRE_SHIFT) +#define I40E_VFINT_ITR0_STAT1_EVENT_SHIFT 1 +#define I40E_VFINT_ITR0_STAT1_EVENT_MASK I40E_MASK(0x1, I40E_VFINT_ITR0_STAT1_EVENT_SHIFT) +#define I40E_VFINT_ITR0_STAT1_ITR_TIME_SHIFT 2 +#define I40E_VFINT_ITR0_STAT1_ITR_TIME_MASK I40E_MASK(0xFFF, I40E_VFINT_ITR0_STAT1_ITR_TIME_SHIFT) + +#define I40E_VFINT_ITRN_STAT1(_i, _INTVF) (0x00003000 + ((_i) * 64 + (_INTVF) * 4)) /* _i=0...2, _INTVF=0...15 */ /* Reset: VFR */ +#define I40E_VFINT_ITRN_STAT1_MAX_INDEX 2 +#define I40E_VFINT_ITRN_STAT1_ITR_EXPIRE_SHIFT 0 +#define I40E_VFINT_ITRN_STAT1_ITR_EXPIRE_MASK I40E_MASK(0x1, I40E_VFINT_ITRN_STAT1_ITR_EXPIRE_SHIFT) +#define I40E_VFINT_ITRN_STAT1_EVENT_SHIFT 1 +#define I40E_VFINT_ITRN_STAT1_EVENT_MASK I40E_MASK(0x1, I40E_VFINT_ITRN_STAT1_EVENT_SHIFT) +#define I40E_VFINT_ITRN_STAT1_ITR_TIME_SHIFT 2 +#define I40E_VFINT_ITRN_STAT1_ITR_TIME_MASK I40E_MASK(0xFFF, I40E_VFINT_ITRN_STAT1_ITR_TIME_SHIFT) + +#define I40E_VFINT_RATE0_STAT1 0x00005800 /* Reset: VFR */ +#define I40E_VFINT_RATE0_STAT1_CREDIT_SHIFT 0 +#define I40E_VFINT_RATE0_STAT1_CREDIT_MASK I40E_MASK(0xF, I40E_VFINT_RATE0_STAT1_CREDIT_SHIFT) 
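/*
 * Illustrative sketch only, not part of this patch: the paired _SHIFT/_MASK
 * definitions above (and throughout this header) are consumed by masking a
 * 32-bit register read and shifting the field down, e.g. for the interrupt
 * rate-limit credit count:
 *
 *     u32 val = rd32(hw, I40E_VFINT_RATE0_STAT1);
 *     u32 credit = (val & I40E_VFINT_RATE0_STAT1_CREDIT_MASK) >>
 *                  I40E_VFINT_RATE0_STAT1_CREDIT_SHIFT;
 *
 * rd32() is assumed here to be the osdep register-read helper; any accessor
 * that returns the raw 32-bit register value works the same way.
 */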
+#define I40E_VFINT_RATE0_STAT1_INTRL_TIME_SHIFT 4 +#define I40E_VFINT_RATE0_STAT1_INTRL_TIME_MASK I40E_MASK(0x3F, I40E_VFINT_RATE0_STAT1_INTRL_TIME_SHIFT) + +#define I40E_VFINT_RATEN_STAT1(_INTVF) (0x00004000 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */ +#define I40E_VFINT_RATEN_STAT1_MAX_INDEX 15 +#define I40E_VFINT_RATEN_STAT1_CREDIT_SHIFT 0 +#define I40E_VFINT_RATEN_STAT1_CREDIT_MASK I40E_MASK(0xF, I40E_VFINT_RATEN_STAT1_CREDIT_SHIFT) +#define I40E_VFINT_RATEN_STAT1_INTRL_TIME_SHIFT 4 +#define I40E_VFINT_RATEN_STAT1_INTRL_TIME_MASK I40E_MASK(0x3F, I40E_VFINT_RATEN_STAT1_INTRL_TIME_SHIFT) + +/* VF - LAN Transmit Receive Registers */ + +/* VF - MSI-X Table Registers */ + +/* VF - PE Registers */ + +/* VF - Rx Filters Registers */ + +#define I40E_VPQF_DDPCNT 0x0000C800 /* Reset: CORER */ +#define I40E_VPQF_DDPCNT_DDP_CNT_SHIFT 0 +#define I40E_VPQF_DDPCNT_DDP_CNT_MASK I40E_MASK(0x1FFF, I40E_VPQF_DDPCNT_DDP_CNT_SHIFT) + +/* VF - Time Sync Registers */ + +#define I40E_PRTTSYN_VFTIME_H1 0x0000E020 /* Reset: GLOBR */ +#define I40E_PRTTSYN_VFTIME_H1_TSYNTIME_H_SHIFT 0 +#define I40E_PRTTSYN_VFTIME_H1_TSYNTIME_H_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_VFTIME_H1_TSYNTIME_H_SHIFT) + +#define I40E_PRTTSYN_VFTIME_L1 0x0000E000 /* Reset: GLOBR */ +#define I40E_PRTTSYN_VFTIME_L1_TSYNTIME_L_SHIFT 0 +#define I40E_PRTTSYN_VFTIME_L1_TSYNTIME_L_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_VFTIME_L1_TSYNTIME_L_SHIFT) + +/* Used in A0 code flow */ +#define I40E_GLHMC_PEXFMAX 0x000C2048 +#define I40E_GLHMC_PEXFMAX_PMPEXFMAX_SHIFT 0 +#define I40E_GLHMC_PEXFMAX_PMPEXFMAX_MASK (0x3FFFFFF << I40E_GLHMC_PEXFMAX_PMPEXFMAX_SHIFT) +#endif diff --git a/lib/librte_pmd_i40e/i40e/i40e_status.h b/lib/librte_pmd_i40e/i40e/i40e_status.h new file mode 100644 index 0000000..2e693a3 --- /dev/null +++ b/lib/librte_pmd_i40e/i40e/i40e_status.h @@ -0,0 +1,107 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _I40E_STATUS_H_ +#define _I40E_STATUS_H_ + +/* Error Codes */ +enum i40e_status_code { + I40E_SUCCESS = 0, + I40E_ERR_NVM = -1, + I40E_ERR_NVM_CHECKSUM = -2, + I40E_ERR_PHY = -3, + I40E_ERR_CONFIG = -4, + I40E_ERR_PARAM = -5, + I40E_ERR_MAC_TYPE = -6, + I40E_ERR_UNKNOWN_PHY = -7, + I40E_ERR_LINK_SETUP = -8, + I40E_ERR_ADAPTER_STOPPED = -9, + I40E_ERR_INVALID_MAC_ADDR = -10, + I40E_ERR_DEVICE_NOT_SUPPORTED = -11, + I40E_ERR_MASTER_REQUESTS_PENDING = -12, + I40E_ERR_INVALID_LINK_SETTINGS = -13, + I40E_ERR_AUTONEG_NOT_COMPLETE = -14, + I40E_ERR_RESET_FAILED = -15, + I40E_ERR_SWFW_SYNC = -16, + I40E_ERR_NO_AVAILABLE_VSI = -17, + I40E_ERR_NO_MEMORY = -18, + I40E_ERR_BAD_PTR = -19, + I40E_ERR_RING_FULL = -20, + I40E_ERR_INVALID_PD_ID = -21, + I40E_ERR_INVALID_QP_ID = -22, + I40E_ERR_INVALID_CQ_ID = -23, + I40E_ERR_INVALID_CEQ_ID = -24, + I40E_ERR_INVALID_AEQ_ID = -25, + I40E_ERR_INVALID_SIZE = -26, + I40E_ERR_INVALID_ARP_INDEX = -27, + I40E_ERR_INVALID_FPM_FUNC_ID = -28, + I40E_ERR_QP_INVALID_MSG_SIZE = -29, + I40E_ERR_QP_TOOMANY_WRS_POSTED = -30, + I40E_ERR_INVALID_FRAG_COUNT = -31, + I40E_ERR_QUEUE_EMPTY = -32, + I40E_ERR_INVALID_ALIGNMENT = -33, + I40E_ERR_FLUSHED_QUEUE = -34, + I40E_ERR_INVALID_PUSH_PAGE_INDEX = -35, + I40E_ERR_INVALID_IMM_DATA_SIZE = -36, + I40E_ERR_TIMEOUT = -37, + I40E_ERR_OPCODE_MISMATCH = -38, + I40E_ERR_CQP_COMPL_ERROR = -39, + I40E_ERR_INVALID_VF_ID = -40, + I40E_ERR_INVALID_HMCFN_ID = -41, + I40E_ERR_BACKING_PAGE_ERROR = -42, + I40E_ERR_NO_PBLCHUNKS_AVAILABLE = -43, + I40E_ERR_INVALID_PBLE_INDEX = -44, + I40E_ERR_INVALID_SD_INDEX = -45, + I40E_ERR_INVALID_PAGE_DESC_INDEX = -46, + I40E_ERR_INVALID_SD_TYPE = -47, + I40E_ERR_MEMCPY_FAILED = -48, + I40E_ERR_INVALID_HMC_OBJ_INDEX = -49, + I40E_ERR_INVALID_HMC_OBJ_COUNT = -50, + I40E_ERR_INVALID_SRQ_ARM_LIMIT = -51, + I40E_ERR_SRQ_ENABLED = -52, + I40E_ERR_ADMIN_QUEUE_ERROR = -53, + I40E_ERR_ADMIN_QUEUE_TIMEOUT = -54, + I40E_ERR_BUF_TOO_SHORT = -55, + I40E_ERR_ADMIN_QUEUE_FULL = -56, + I40E_ERR_ADMIN_QUEUE_NO_WORK = -57, + I40E_ERR_BAD_IWARP_CQE = -58, + I40E_ERR_NVM_BLANK_MODE = -59, + I40E_ERR_NOT_IMPLEMENTED = -60, + I40E_ERR_PE_DOORBELL_NOT_ENABLED = -61, + I40E_ERR_DIAG_TEST_FAILED = -62, + I40E_ERR_NOT_READY = -63, + I40E_NOT_SUPPORTED = -64, + I40E_ERR_FIRMWARE_API_VERSION = -65, +}; + +#endif /* _I40E_STATUS_H_ */ diff --git a/lib/librte_pmd_i40e/i40e/i40e_type.h b/lib/librte_pmd_i40e/i40e/i40e_type.h new file mode 100644 index 0000000..aca8102 --- /dev/null +++ b/lib/librte_pmd_i40e/i40e/i40e_type.h @@ -0,0 +1,1462 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. 
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _I40E_TYPE_H_ +#define _I40E_TYPE_H_ + +#include "i40e_status.h" +#include "i40e_osdep.h" +#include "i40e_register.h" +#include "i40e_adminq.h" +#include "i40e_hmc.h" +#include "i40e_lan_hmc.h" + +#define UNREFERENCED_XPARAMETER +#define UNREFERENCED_1PARAMETER(_p) (_p); +#define UNREFERENCED_2PARAMETER(_p, _q) (_p); (_q); +#define UNREFERENCED_3PARAMETER(_p, _q, _r) (_p); (_q); (_r); +#define UNREFERENCED_4PARAMETER(_p, _q, _r, _s) (_p); (_q); (_r); (_s); +#define UNREFERENCED_5PARAMETER(_p, _q, _r, _s, _t) (_p); (_q); (_r); (_s); (_t); + +/* Vendor ID */ +#define I40E_INTEL_VENDOR_ID 0x8086 + +/* Device IDs */ +#define I40E_DEV_ID_SFP_XL710 0x1572 +#define I40E_DEV_ID_QEMU 0x1574 +#define I40E_DEV_ID_KX_A 0x157F +#define I40E_DEV_ID_KX_B 0x1580 +#define I40E_DEV_ID_KX_C 0x1581 +#define I40E_DEV_ID_QSFP_A 0x1583 +#define I40E_DEV_ID_QSFP_B 0x1584 +#define I40E_DEV_ID_QSFP_C 0x1585 +#define I40E_DEV_ID_VF 0x154C +#define I40E_DEV_ID_VF_HV 0x1571 + +#define i40e_is_40G_device(d) ((d) == I40E_DEV_ID_QSFP_A || \ + (d) == I40E_DEV_ID_QSFP_B || \ + (d) == I40E_DEV_ID_QSFP_C) + +/* I40E_MASK is a macro used on 32 bit registers */ +#define I40E_MASK(mask, shift) (mask << shift) + +#define I40E_MAX_PF 16 +#define I40E_MAX_PF_VSI 64 +#define I40E_MAX_PF_QP 128 +#define I40E_MAX_VSI_QP 16 +#define I40E_MAX_VF_VSI 3 +#define I40E_MAX_CHAINED_RX_BUFFERS 5 +#define I40E_MAX_PF_UDP_OFFLOAD_PORTS 16 + +/* something less than 1 minute */ +#define I40E_HEARTBEAT_TIMEOUT (HZ * 50) + +/* Max default timeout in ms, */ +#define I40E_MAX_NVM_TIMEOUT 18000 + +/* Check whether address is multicast. */ +#define I40E_IS_MULTICAST(address) (bool)(((u8 *)(address))[0] & ((u8)0x01)) + +/* Check whether an address is broadcast. */ +#define I40E_IS_BROADCAST(address) \ + ((((u8 *)(address))[0] == ((u8)0xff)) && \ + (((u8 *)(address))[1] == ((u8)0xff))) + +/* Switch from ms to the 1usec global time (this is the GTIME resolution) */ +#define I40E_MS_TO_GTIME(time) ((time) * 1000) + +/* forward declaration */ +struct i40e_hw; +typedef void (*I40E_ADMINQ_CALLBACK)(struct i40e_hw *, struct i40e_aq_desc *); + +#define I40E_ETH_LENGTH_OF_ADDRESS 6 +/* Data type manipulation macros. */ +#define I40E_HI_DWORD(x) ((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF)) +#define I40E_LO_DWORD(x) ((u32)((x) & 0xFFFFFFFF)) + +#define I40E_HI_WORD(x) ((u16)(((x) >> 16) & 0xFFFF)) +#define I40E_LO_WORD(x) ((u16)((x) & 0xFFFF)) + +#define I40E_HI_BYTE(x) ((u8)(((x) >> 8) & 0xFF)) +#define I40E_LO_BYTE(x) ((u8)((x) & 0xFF)) + +/* Number of Transmit Descriptors must be a multiple of 8. 
*/ +#define I40E_REQ_TX_DESCRIPTOR_MULTIPLE 8 +/* Number of Receive Descriptors must be a multiple of 32 if + * the number of descriptors is greater than 32. + */ +#define I40E_REQ_RX_DESCRIPTOR_MULTIPLE 32 + +#define I40E_DESC_UNUSED(R) \ + ((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \ + (R)->next_to_clean - (R)->next_to_use - 1) + +/* bitfields for Tx queue mapping in QTX_CTL */ +#define I40E_QTX_CTL_VF_QUEUE 0x0 +#define I40E_QTX_CTL_VM_QUEUE 0x1 +#define I40E_QTX_CTL_PF_QUEUE 0x2 + +/* debug masks - set these bits in hw->debug_mask to control output */ +enum i40e_debug_mask { + I40E_DEBUG_INIT = 0x00000001, + I40E_DEBUG_RELEASE = 0x00000002, + + I40E_DEBUG_LINK = 0x00000010, + I40E_DEBUG_PHY = 0x00000020, + I40E_DEBUG_HMC = 0x00000040, + I40E_DEBUG_NVM = 0x00000080, + I40E_DEBUG_LAN = 0x00000100, + I40E_DEBUG_FLOW = 0x00000200, + I40E_DEBUG_DCB = 0x00000400, + I40E_DEBUG_DIAG = 0x00000800, + I40E_DEBUG_FD = 0x00001000, + + I40E_DEBUG_AQ_MESSAGE = 0x01000000, + I40E_DEBUG_AQ_DESCRIPTOR = 0x02000000, + I40E_DEBUG_AQ_DESC_BUFFER = 0x04000000, + I40E_DEBUG_AQ_COMMAND = 0x06000000, + I40E_DEBUG_AQ = 0x0F000000, + + I40E_DEBUG_USER = 0xF0000000, + + I40E_DEBUG_ALL = 0xFFFFFFFF +}; + +/* PCI Bus Info */ +#define I40E_PCI_LINK_STATUS 0xB2 +#define I40E_PCI_LINK_WIDTH 0x3F0 +#define I40E_PCI_LINK_WIDTH_1 0x10 +#define I40E_PCI_LINK_WIDTH_2 0x20 +#define I40E_PCI_LINK_WIDTH_4 0x40 +#define I40E_PCI_LINK_WIDTH_8 0x80 +#define I40E_PCI_LINK_SPEED 0xF +#define I40E_PCI_LINK_SPEED_2500 0x1 +#define I40E_PCI_LINK_SPEED_5000 0x2 +#define I40E_PCI_LINK_SPEED_8000 0x3 + +/* Memory types */ +enum i40e_memset_type { + I40E_NONDMA_MEM = 0, + I40E_DMA_MEM +}; + +/* Memcpy types */ +enum i40e_memcpy_type { + I40E_NONDMA_TO_NONDMA = 0, + I40E_NONDMA_TO_DMA, + I40E_DMA_TO_DMA, + I40E_DMA_TO_NONDMA +}; + +/* These are structs for managing the hardware information and the operations. + * The structures of function pointers are filled out at init time when we + * know for sure exactly which hardware we're working with. This gives us the + * flexibility of using the same main driver code but adapting to slightly + * different hardware needs as new parts are developed. For this architecture, + * the Firmware and AdminQ are intended to insulate the driver from most of the + * future changes, but these structures will also do part of the job. 
+ */ +enum i40e_mac_type { + I40E_MAC_UNKNOWN = 0, + I40E_MAC_X710, + I40E_MAC_XL710, + I40E_MAC_VF, + I40E_MAC_GENERIC, +}; + +enum i40e_media_type { + I40E_MEDIA_TYPE_UNKNOWN = 0, + I40E_MEDIA_TYPE_FIBER, + I40E_MEDIA_TYPE_BASET, + I40E_MEDIA_TYPE_BACKPLANE, + I40E_MEDIA_TYPE_CX4, + I40E_MEDIA_TYPE_DA, + I40E_MEDIA_TYPE_VIRTUAL +}; + +enum i40e_fc_mode { + I40E_FC_NONE = 0, + I40E_FC_RX_PAUSE, + I40E_FC_TX_PAUSE, + I40E_FC_FULL, + I40E_FC_PFC, + I40E_FC_DEFAULT +}; + +enum i40e_set_fc_aq_failures { + I40E_SET_FC_AQ_FAIL_NONE = 0, + I40E_SET_FC_AQ_FAIL_GET1 = 1, + I40E_SET_FC_AQ_FAIL_SET = 2, + I40E_SET_FC_AQ_FAIL_GET2 = 4, + I40E_SET_FC_AQ_FAIL_SET_GET = 6 +}; + +enum i40e_vsi_type { + I40E_VSI_MAIN = 0, + I40E_VSI_VMDQ1, + I40E_VSI_VMDQ2, + I40E_VSI_CTRL, + I40E_VSI_FCOE, + I40E_VSI_MIRROR, + I40E_VSI_SRIOV, + I40E_VSI_FDIR, + I40E_VSI_TYPE_UNKNOWN +}; + +enum i40e_queue_type { + I40E_QUEUE_TYPE_RX = 0, + I40E_QUEUE_TYPE_TX, + I40E_QUEUE_TYPE_PE_CEQ, + I40E_QUEUE_TYPE_UNKNOWN +}; + +struct i40e_link_status { + enum i40e_aq_phy_type phy_type; + enum i40e_aq_link_speed link_speed; + u8 link_info; + u8 an_info; + u8 ext_info; + u8 loopback; + bool an_enabled; + /* is Link Status Event notification to SW enabled */ + bool lse_enable; + u16 max_frame_size; + bool crc_enable; + u8 pacing; +}; + +struct i40e_phy_info { + struct i40e_link_status link_info; + struct i40e_link_status link_info_old; + u32 autoneg_advertised; + u32 phy_id; + u32 module_type; + bool get_link_info; + enum i40e_media_type media_type; +}; + +#define I40E_HW_CAP_MAX_GPIO 30 +#define I40E_HW_CAP_MDIO_PORT_MODE_MDIO 0 +#define I40E_HW_CAP_MDIO_PORT_MODE_I2C 1 + +/* Capabilities of a PF or a VF or the whole device */ +struct i40e_hw_capabilities { + u32 switch_mode; +#define I40E_NVM_IMAGE_TYPE_EVB 0x0 +#define I40E_NVM_IMAGE_TYPE_CLOUD 0x2 +#define I40E_NVM_IMAGE_TYPE_UDP_CLOUD 0x3 + + u32 management_mode; + u32 npar_enable; + u32 os2bmc; + u32 valid_functions; + bool sr_iov_1_1; + bool vmdq; + bool evb_802_1_qbg; /* Edge Virtual Bridging */ + bool evb_802_1_qbh; /* Bridge Port Extension */ + bool dcb; + bool fcoe; + bool mfp_mode_1; + bool mgmt_cem; + bool ieee_1588; + bool iwarp; + bool fd; + u32 fd_filters_guaranteed; + u32 fd_filters_best_effort; + bool rss; + u32 rss_table_size; + u32 rss_table_entry_width; + bool led[I40E_HW_CAP_MAX_GPIO]; + bool sdp[I40E_HW_CAP_MAX_GPIO]; + u32 nvm_image_type; + u32 num_flow_director_filters; + u32 num_vfs; + u32 vf_base_id; + u32 num_vsis; + u32 num_rx_qp; + u32 num_tx_qp; + u32 base_queue; + u32 num_msix_vectors; + u32 num_msix_vectors_vf; + u32 led_pin_num; + u32 sdp_pin_num; + u32 mdio_port_num; + u32 mdio_port_mode; + u8 rx_buf_chain_len; + u32 enabled_tcmap; + u32 maxtc; +}; + +struct i40e_mac_info { + enum i40e_mac_type type; + u8 addr[I40E_ETH_LENGTH_OF_ADDRESS]; + u8 perm_addr[I40E_ETH_LENGTH_OF_ADDRESS]; + u8 san_addr[I40E_ETH_LENGTH_OF_ADDRESS]; + u8 port_addr[I40E_ETH_LENGTH_OF_ADDRESS]; + u16 max_fcoeq; +}; + +enum i40e_aq_resources_ids { + I40E_NVM_RESOURCE_ID = 1 +}; + +enum i40e_aq_resource_access_type { + I40E_RESOURCE_READ = 1, + I40E_RESOURCE_WRITE +}; + +struct i40e_nvm_info { + u64 hw_semaphore_timeout; /* 2usec global time (GTIME resolution) */ + u64 hw_semaphore_wait; /* - || - */ + u32 timeout; /* [ms] */ + u16 sr_size; /* Shadow RAM size in words */ + bool blank_nvm_mode; /* is NVM empty (no FW present)*/ + u16 version; /* NVM package version */ + u32 eetrack; /* NVM data version */ +}; + +/* definitions used in NVM update support */ + +enum i40e_nvmupd_cmd 
{ + I40E_NVMUPD_INVALID, + I40E_NVMUPD_READ_CON, + I40E_NVMUPD_READ_SNT, + I40E_NVMUPD_READ_LCB, + I40E_NVMUPD_READ_SA, + I40E_NVMUPD_WRITE_ERA, + I40E_NVMUPD_WRITE_CON, + I40E_NVMUPD_WRITE_SNT, + I40E_NVMUPD_WRITE_LCB, + I40E_NVMUPD_WRITE_SA, + I40E_NVMUPD_CSUM_CON, + I40E_NVMUPD_CSUM_SA, + I40E_NVMUPD_CSUM_LCB, +}; + +enum i40e_nvmupd_state { + I40E_NVMUPD_STATE_INIT, + I40E_NVMUPD_STATE_READING, + I40E_NVMUPD_STATE_WRITING +}; + +/* nvm_access definition and its masks/shifts need to be accessible to + * application, core driver, and shared code. Where is the right file? + */ +#define I40E_NVM_READ 0xB +#define I40E_NVM_WRITE 0xC + +#define I40E_NVM_MOD_PNT_MASK 0xFF + +#define I40E_NVM_TRANS_SHIFT 8 +#define I40E_NVM_TRANS_MASK (0xf << I40E_NVM_TRANS_SHIFT) +#define I40E_NVM_CON 0x0 +#define I40E_NVM_SNT 0x1 +#define I40E_NVM_LCB 0x2 +#define I40E_NVM_SA (I40E_NVM_SNT | I40E_NVM_LCB) +#define I40E_NVM_ERA 0x4 +#define I40E_NVM_CSUM 0x8 + +#define I40E_NVM_ADAPT_SHIFT 16 +#define I40E_NVM_ADAPT_MASK (0xffffULL << I40E_NVM_ADAPT_SHIFT) + +#define I40E_NVMUPD_MAX_DATA 4096 +#define I40E_NVMUPD_IFACE_TIMEOUT 2 /* seconds */ + +struct i40e_nvm_access { + u32 command; + u32 config; + u32 offset; /* in bytes */ + u32 data_size; /* in bytes */ + u8 data[1]; +}; + +/* PCI bus types */ +enum i40e_bus_type { + i40e_bus_type_unknown = 0, + i40e_bus_type_pci, + i40e_bus_type_pcix, + i40e_bus_type_pci_express, + i40e_bus_type_reserved +}; + +/* PCI bus speeds */ +enum i40e_bus_speed { + i40e_bus_speed_unknown = 0, + i40e_bus_speed_33 = 33, + i40e_bus_speed_66 = 66, + i40e_bus_speed_100 = 100, + i40e_bus_speed_120 = 120, + i40e_bus_speed_133 = 133, + i40e_bus_speed_2500 = 2500, + i40e_bus_speed_5000 = 5000, + i40e_bus_speed_8000 = 8000, + i40e_bus_speed_reserved +}; + +/* PCI bus widths */ +enum i40e_bus_width { + i40e_bus_width_unknown = 0, + i40e_bus_width_pcie_x1 = 1, + i40e_bus_width_pcie_x2 = 2, + i40e_bus_width_pcie_x4 = 4, + i40e_bus_width_pcie_x8 = 8, + i40e_bus_width_32 = 32, + i40e_bus_width_64 = 64, + i40e_bus_width_reserved +}; + +/* Bus parameters */ +struct i40e_bus_info { + enum i40e_bus_speed speed; + enum i40e_bus_width width; + enum i40e_bus_type type; + + u16 func; + u16 device; + u16 lan_id; +}; + +/* Flow control (FC) parameters */ +struct i40e_fc_info { + enum i40e_fc_mode current_mode; /* FC mode in effect */ + enum i40e_fc_mode requested_mode; /* FC mode requested by caller */ +}; + +#define I40E_MAX_TRAFFIC_CLASS 8 +#define I40E_MAX_USER_PRIORITY 8 +#define I40E_DCBX_MAX_APPS 32 +#define I40E_LLDPDU_SIZE 1500 + +/* IEEE 802.1Qaz ETS Configuration data */ +struct i40e_ieee_ets_config { + u8 willing; + u8 cbs; + u8 maxtcs; + u8 prioritytable[I40E_MAX_TRAFFIC_CLASS]; + u8 tcbwtable[I40E_MAX_TRAFFIC_CLASS]; + u8 tsatable[I40E_MAX_TRAFFIC_CLASS]; +}; + +/* IEEE 802.1Qaz ETS Recommendation data */ +struct i40e_ieee_ets_recommend { + u8 prioritytable[I40E_MAX_TRAFFIC_CLASS]; + u8 tcbwtable[I40E_MAX_TRAFFIC_CLASS]; + u8 tsatable[I40E_MAX_TRAFFIC_CLASS]; +}; + +/* IEEE 802.1Qaz PFC Configuration data */ +struct i40e_ieee_pfc_config { + u8 willing; + u8 mbc; + u8 pfccap; + u8 pfcenable; +}; + +/* IEEE 802.1Qaz Application Priority data */ +struct i40e_ieee_app_priority_table { + u8 priority; + u8 selector; + u16 protocolid; +}; + +struct i40e_dcbx_config { + u32 numapps; + struct i40e_ieee_ets_config etscfg; + struct i40e_ieee_ets_recommend etsrec; + struct i40e_ieee_pfc_config pfc; + struct i40e_ieee_app_priority_table app[I40E_DCBX_MAX_APPS]; +}; + +/* Port hardware description */ 
+struct i40e_hw { + u8 *hw_addr; + void *back; + + /* function pointer structs */ + struct i40e_phy_info phy; + struct i40e_mac_info mac; + struct i40e_bus_info bus; + struct i40e_nvm_info nvm; + struct i40e_fc_info fc; + + /* pci info */ + u16 device_id; + u16 vendor_id; + u16 subsystem_device_id; + u16 subsystem_vendor_id; + u8 revision_id; + u8 port; + bool adapter_stopped; + + /* capabilities for entire device and PCI func */ + struct i40e_hw_capabilities dev_caps; + struct i40e_hw_capabilities func_caps; + + /* Flow Director shared filter space */ + u16 fdir_shared_filter_count; + + /* device profile info */ + u8 pf_id; + u16 main_vsi_seid; + + /* Closest numa node to the device */ + u16 numa_node; + + /* Admin Queue info */ + struct i40e_adminq_info aq; + + /* state of nvm update process */ + enum i40e_nvmupd_state nvmupd_state; + + /* HMC info */ + struct i40e_hmc_info hmc; /* HMC info struct */ + + /* LLDP/DCBX Status */ + u16 dcbx_status; + + /* DCBX info */ + struct i40e_dcbx_config local_dcbx_config; + struct i40e_dcbx_config remote_dcbx_config; + + /* debug mask */ + u32 debug_mask; +}; + +struct i40e_driver_version { + u8 major_version; + u8 minor_version; + u8 build_version; + u8 subbuild_version; + u8 driver_string[32]; +}; + +/* RX Descriptors */ +union i40e_16byte_rx_desc { + struct { + __le64 pkt_addr; /* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ + } read; + struct { + struct { + struct { + union { + __le16 mirroring_status; + __le16 fcoe_ctx_id; + } mirr_fcoe; + __le16 l2tag1; + } lo_dword; + union { + __le32 rss; /* RSS Hash */ + __le32 fd_id; /* Flow director filter id */ + __le32 fcoe_param; /* FCoE DDP Context id */ + } hi_dword; + } qword0; + struct { + /* ext status/error/pktype/length */ + __le64 status_error_len; + } qword1; + } wb; /* writeback */ +}; + +union i40e_32byte_rx_desc { + struct { + __le64 pkt_addr; /* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ + /* bit 0 of hdr_buffer_addr is DD bit */ + __le64 rsvd1; + __le64 rsvd2; + } read; + struct { + struct { + struct { + union { + __le16 mirroring_status; + __le16 fcoe_ctx_id; + } mirr_fcoe; + __le16 l2tag1; + } lo_dword; + union { + __le32 rss; /* RSS Hash */ + __le32 fcoe_param; /* FCoE DDP Context id */ + /* Flow director filter id in case of + * Programming status desc WB + */ + __le32 fd_id; + } hi_dword; + } qword0; + struct { + /* status/error/pktype/length */ + __le64 status_error_len; + } qword1; + struct { + __le16 ext_status; /* extended status */ + __le16 rsvd; + __le16 l2tag2_1; + __le16 l2tag2_2; + } qword2; + struct { + union { + __le32 flex_bytes_lo; + __le32 pe_status; + } lo_dword; + union { + __le32 flex_bytes_hi; + __le32 fd_id; + } hi_dword; + } qword3; + } wb; /* writeback */ +}; + +#define I40E_RXD_QW0_MIRROR_STATUS_SHIFT 8 +#define I40E_RXD_QW0_MIRROR_STATUS_MASK (0x3FUL << \ + I40E_RXD_QW0_MIRROR_STATUS_SHIFT) +#define I40E_RXD_QW0_FCOEINDX_SHIFT 0 +#define I40E_RXD_QW0_FCOEINDX_MASK (0xFFFUL << \ + I40E_RXD_QW0_FCOEINDX_SHIFT) + +enum i40e_rx_desc_status_bits { + /* Note: These are predefined bit offsets */ + I40E_RX_DESC_STATUS_DD_SHIFT = 0, + I40E_RX_DESC_STATUS_EOF_SHIFT = 1, + I40E_RX_DESC_STATUS_L2TAG1P_SHIFT = 2, + I40E_RX_DESC_STATUS_L3L4P_SHIFT = 3, + I40E_RX_DESC_STATUS_CRCP_SHIFT = 4, + I40E_RX_DESC_STATUS_TSYNINDX_SHIFT = 5, /* 2 BITS */ + I40E_RX_DESC_STATUS_TSYNVALID_SHIFT = 7, + I40E_RX_DESC_STATUS_PIF_SHIFT = 8, + I40E_RX_DESC_STATUS_UMBCAST_SHIFT = 9, /* 2 BITS */ + I40E_RX_DESC_STATUS_FLM_SHIFT = 11, + 
I40E_RX_DESC_STATUS_FLTSTAT_SHIFT = 12, /* 2 BITS */ + I40E_RX_DESC_STATUS_LPBK_SHIFT = 14, + I40E_RX_DESC_STATUS_IPV6EXADD_SHIFT = 15, + I40E_RX_DESC_STATUS_RESERVED_SHIFT = 16, /* 2 BITS */ + I40E_RX_DESC_STATUS_UDP_0_SHIFT = 18, + I40E_RX_DESC_STATUS_LAST /* this entry must be last!!! */ +}; + +#define I40E_RXD_QW1_STATUS_SHIFT 0 +#define I40E_RXD_QW1_STATUS_MASK (((1 << I40E_RX_DESC_STATUS_LAST) - 1) << \ + I40E_RXD_QW1_STATUS_SHIFT) + +#define I40E_RXD_QW1_STATUS_TSYNINDX_SHIFT I40E_RX_DESC_STATUS_TSYNINDX_SHIFT +#define I40E_RXD_QW1_STATUS_TSYNINDX_MASK (0x3UL << \ + I40E_RXD_QW1_STATUS_TSYNINDX_SHIFT) + +#define I40E_RXD_QW1_STATUS_TSYNVALID_SHIFT I40E_RX_DESC_STATUS_TSYNVALID_SHIFT +#define I40E_RXD_QW1_STATUS_TSYNVALID_MASK (0x1UL << \ + I40E_RXD_QW1_STATUS_TSYNVALID_SHIFT) + +#define I40E_RXD_QW1_STATUS_UMBCAST_SHIFT I40E_RX_DESC_STATUS_UMBCAST +#define I40E_RXD_QW1_STATUS_UMBCAST_MASK (0x3UL << \ + I40E_RXD_QW1_STATUS_UMBCAST_SHIFT) + +enum i40e_rx_desc_fltstat_values { + I40E_RX_DESC_FLTSTAT_NO_DATA = 0, + I40E_RX_DESC_FLTSTAT_RSV_FD_ID = 1, /* 16byte desc? FD_ID : RSV */ + I40E_RX_DESC_FLTSTAT_RSV = 2, + I40E_RX_DESC_FLTSTAT_RSS_HASH = 3, +}; + +#define I40E_RXD_PACKET_TYPE_UNICAST 0 +#define I40E_RXD_PACKET_TYPE_MULTICAST 1 +#define I40E_RXD_PACKET_TYPE_BROADCAST 2 +#define I40E_RXD_PACKET_TYPE_MIRRORED 3 + +#define I40E_RXD_QW1_ERROR_SHIFT 19 +#define I40E_RXD_QW1_ERROR_MASK (0xFFUL << I40E_RXD_QW1_ERROR_SHIFT) + +enum i40e_rx_desc_error_bits { + /* Note: These are predefined bit offsets */ + I40E_RX_DESC_ERROR_RXE_SHIFT = 0, + I40E_RX_DESC_ERROR_RECIPE_SHIFT = 1, + I40E_RX_DESC_ERROR_HBO_SHIFT = 2, + I40E_RX_DESC_ERROR_L3L4E_SHIFT = 3, /* 3 BITS */ + I40E_RX_DESC_ERROR_IPE_SHIFT = 3, + I40E_RX_DESC_ERROR_L4E_SHIFT = 4, + I40E_RX_DESC_ERROR_EIPE_SHIFT = 5, + I40E_RX_DESC_ERROR_OVERSIZE_SHIFT = 6, + I40E_RX_DESC_ERROR_PPRS_SHIFT = 7 +}; + +enum i40e_rx_desc_error_l3l4e_fcoe_masks { + I40E_RX_DESC_ERROR_L3L4E_NONE = 0, + I40E_RX_DESC_ERROR_L3L4E_PROT = 1, + I40E_RX_DESC_ERROR_L3L4E_FC = 2, + I40E_RX_DESC_ERROR_L3L4E_DMAC_ERR = 3, + I40E_RX_DESC_ERROR_L3L4E_DMAC_WARN = 4 +}; + +#define I40E_RXD_QW1_PTYPE_SHIFT 30 +#define I40E_RXD_QW1_PTYPE_MASK (0xFFULL << I40E_RXD_QW1_PTYPE_SHIFT) + +/* Packet type non-ip values */ +enum i40e_rx_l2_ptype { + I40E_RX_PTYPE_L2_RESERVED = 0, + I40E_RX_PTYPE_L2_MAC_PAY2 = 1, + I40E_RX_PTYPE_L2_TIMESYNC_PAY2 = 2, + I40E_RX_PTYPE_L2_FIP_PAY2 = 3, + I40E_RX_PTYPE_L2_OUI_PAY2 = 4, + I40E_RX_PTYPE_L2_MACCNTRL_PAY2 = 5, + I40E_RX_PTYPE_L2_LLDP_PAY2 = 6, + I40E_RX_PTYPE_L2_ECP_PAY2 = 7, + I40E_RX_PTYPE_L2_EVB_PAY2 = 8, + I40E_RX_PTYPE_L2_QCN_PAY2 = 9, + I40E_RX_PTYPE_L2_EAPOL_PAY2 = 10, + I40E_RX_PTYPE_L2_ARP = 11, + I40E_RX_PTYPE_L2_FCOE_PAY3 = 12, + I40E_RX_PTYPE_L2_FCOE_FCDATA_PAY3 = 13, + I40E_RX_PTYPE_L2_FCOE_FCRDY_PAY3 = 14, + I40E_RX_PTYPE_L2_FCOE_FCRSP_PAY3 = 15, + I40E_RX_PTYPE_L2_FCOE_FCOTHER_PA = 16, + I40E_RX_PTYPE_L2_FCOE_VFT_PAY3 = 17, + I40E_RX_PTYPE_L2_FCOE_VFT_FCDATA = 18, + I40E_RX_PTYPE_L2_FCOE_VFT_FCRDY = 19, + I40E_RX_PTYPE_L2_FCOE_VFT_FCRSP = 20, + I40E_RX_PTYPE_L2_FCOE_VFT_FCOTHER = 21, + I40E_RX_PTYPE_GRENAT4_MAC_PAY3 = 58, + I40E_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4 = 87, + I40E_RX_PTYPE_GRENAT6_MAC_PAY3 = 124, + I40E_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4 = 153 +}; + +struct i40e_rx_ptype_decoded { + u32 ptype:8; + u32 known:1; + u32 outer_ip:1; + u32 outer_ip_ver:1; + u32 outer_frag:1; + u32 tunnel_type:3; + u32 tunnel_end_prot:2; + u32 tunnel_end_frag:1; + u32 inner_prot:4; + u32 payload_layer:3; +}; + +enum 
i40e_rx_ptype_outer_ip { + I40E_RX_PTYPE_OUTER_L2 = 0, + I40E_RX_PTYPE_OUTER_IP = 1 +}; + +enum i40e_rx_ptype_outer_ip_ver { + I40E_RX_PTYPE_OUTER_NONE = 0, + I40E_RX_PTYPE_OUTER_IPV4 = 0, + I40E_RX_PTYPE_OUTER_IPV6 = 1 +}; + +enum i40e_rx_ptype_outer_fragmented { + I40E_RX_PTYPE_NOT_FRAG = 0, + I40E_RX_PTYPE_FRAG = 1 +}; + +enum i40e_rx_ptype_tunnel_type { + I40E_RX_PTYPE_TUNNEL_NONE = 0, + I40E_RX_PTYPE_TUNNEL_IP_IP = 1, + I40E_RX_PTYPE_TUNNEL_IP_GRENAT = 2, + I40E_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3, + I40E_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4, +}; + +enum i40e_rx_ptype_tunnel_end_prot { + I40E_RX_PTYPE_TUNNEL_END_NONE = 0, + I40E_RX_PTYPE_TUNNEL_END_IPV4 = 1, + I40E_RX_PTYPE_TUNNEL_END_IPV6 = 2, +}; + +enum i40e_rx_ptype_inner_prot { + I40E_RX_PTYPE_INNER_PROT_NONE = 0, + I40E_RX_PTYPE_INNER_PROT_UDP = 1, + I40E_RX_PTYPE_INNER_PROT_TCP = 2, + I40E_RX_PTYPE_INNER_PROT_SCTP = 3, + I40E_RX_PTYPE_INNER_PROT_ICMP = 4, + I40E_RX_PTYPE_INNER_PROT_TIMESYNC = 5 +}; + +enum i40e_rx_ptype_payload_layer { + I40E_RX_PTYPE_PAYLOAD_LAYER_NONE = 0, + I40E_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1, + I40E_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2, + I40E_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3, +}; + +#define I40E_RX_PTYPE_BIT_MASK 0x0FFFFFFF +#define I40E_RX_PTYPE_SHIFT 56 + +#define I40E_RXD_QW1_LENGTH_PBUF_SHIFT 38 +#define I40E_RXD_QW1_LENGTH_PBUF_MASK (0x3FFFULL << \ + I40E_RXD_QW1_LENGTH_PBUF_SHIFT) + +#define I40E_RXD_QW1_LENGTH_HBUF_SHIFT 52 +#define I40E_RXD_QW1_LENGTH_HBUF_MASK (0x7FFULL << \ + I40E_RXD_QW1_LENGTH_HBUF_SHIFT) + +#define I40E_RXD_QW1_LENGTH_SPH_SHIFT 63 +#define I40E_RXD_QW1_LENGTH_SPH_MASK (0x1ULL << \ + I40E_RXD_QW1_LENGTH_SPH_SHIFT) + +#define I40E_RXD_QW1_NEXTP_SHIFT 38 +#define I40E_RXD_QW1_NEXTP_MASK (0x1FFFULL << I40E_RXD_QW1_NEXTP_SHIFT) + +#define I40E_RXD_QW2_EXT_STATUS_SHIFT 0 +#define I40E_RXD_QW2_EXT_STATUS_MASK (0xFFFFFUL << \ + I40E_RXD_QW2_EXT_STATUS_SHIFT) + +enum i40e_rx_desc_ext_status_bits { + /* Note: These are predefined bit offsets */ + I40E_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 0, + I40E_RX_DESC_EXT_STATUS_L2TAG3P_SHIFT = 1, + I40E_RX_DESC_EXT_STATUS_FLEXBL_SHIFT = 2, /* 2 BITS */ + I40E_RX_DESC_EXT_STATUS_FLEXBH_SHIFT = 4, /* 2 BITS */ + I40E_RX_DESC_EXT_STATUS_FDLONGB_SHIFT = 9, + I40E_RX_DESC_EXT_STATUS_FCOELONGB_SHIFT = 10, + I40E_RX_DESC_EXT_STATUS_PELONGB_SHIFT = 11, +}; + +#define I40E_RXD_QW2_L2TAG2_SHIFT 0 +#define I40E_RXD_QW2_L2TAG2_MASK (0xFFFFUL << I40E_RXD_QW2_L2TAG2_SHIFT) + +#define I40E_RXD_QW2_L2TAG3_SHIFT 16 +#define I40E_RXD_QW2_L2TAG3_MASK (0xFFFFUL << I40E_RXD_QW2_L2TAG3_SHIFT) + +enum i40e_rx_desc_pe_status_bits { + /* Note: These are predefined bit offsets */ + I40E_RX_DESC_PE_STATUS_QPID_SHIFT = 0, /* 18 BITS */ + I40E_RX_DESC_PE_STATUS_L4PORT_SHIFT = 0, /* 16 BITS */ + I40E_RX_DESC_PE_STATUS_IPINDEX_SHIFT = 16, /* 8 BITS */ + I40E_RX_DESC_PE_STATUS_QPIDHIT_SHIFT = 24, + I40E_RX_DESC_PE_STATUS_APBVTHIT_SHIFT = 25, + I40E_RX_DESC_PE_STATUS_PORTV_SHIFT = 26, + I40E_RX_DESC_PE_STATUS_URG_SHIFT = 27, + I40E_RX_DESC_PE_STATUS_IPFRAG_SHIFT = 28, + I40E_RX_DESC_PE_STATUS_IPOPT_SHIFT = 29 +}; + +#define I40E_RX_PROG_STATUS_DESC_LENGTH_SHIFT 38 +#define I40E_RX_PROG_STATUS_DESC_LENGTH 0x2000000 + +#define I40E_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT 2 +#define I40E_RX_PROG_STATUS_DESC_QW1_PROGID_MASK (0x7UL << \ + I40E_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT) + +#define I40E_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT 0 +#define I40E_RX_PROG_STATUS_DESC_QW1_STATUS_MASK (0x7FFFUL << \ + I40E_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT) + +#define I40E_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT 
19 +#define I40E_RX_PROG_STATUS_DESC_QW1_ERROR_MASK (0x3FUL << \ + I40E_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT) + +enum i40e_rx_prog_status_desc_status_bits { + /* Note: These are predefined bit offsets */ + I40E_RX_PROG_STATUS_DESC_DD_SHIFT = 0, + I40E_RX_PROG_STATUS_DESC_PROG_ID_SHIFT = 2 /* 3 BITS */ +}; + +enum i40e_rx_prog_status_desc_prog_id_masks { + I40E_RX_PROG_STATUS_DESC_FD_FILTER_STATUS = 1, + I40E_RX_PROG_STATUS_DESC_FCOE_CTXT_PROG_STATUS = 2, + I40E_RX_PROG_STATUS_DESC_FCOE_CTXT_INVL_STATUS = 4, +}; + +enum i40e_rx_prog_status_desc_error_bits { + /* Note: These are predefined bit offsets */ + I40E_RX_PROG_STATUS_DESC_FD_TBL_FULL_SHIFT = 0, + I40E_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT = 1, + I40E_RX_PROG_STATUS_DESC_FCOE_TBL_FULL_SHIFT = 2, + I40E_RX_PROG_STATUS_DESC_FCOE_CONFLICT_SHIFT = 3 +}; + +#define I40E_TWO_BIT_MASK 0x3 +#define I40E_THREE_BIT_MASK 0x7 +#define I40E_FOUR_BIT_MASK 0xF +#define I40E_EIGHTEEN_BIT_MASK 0x3FFFF + +/* TX Descriptor */ +struct i40e_tx_desc { + __le64 buffer_addr; /* Address of descriptor's data buf */ + __le64 cmd_type_offset_bsz; +}; + +#define I40E_TXD_QW1_DTYPE_SHIFT 0 +#define I40E_TXD_QW1_DTYPE_MASK (0xFUL << I40E_TXD_QW1_DTYPE_SHIFT) + +enum i40e_tx_desc_dtype_value { + I40E_TX_DESC_DTYPE_DATA = 0x0, + I40E_TX_DESC_DTYPE_NOP = 0x1, /* same as Context desc */ + I40E_TX_DESC_DTYPE_CONTEXT = 0x1, + I40E_TX_DESC_DTYPE_FCOE_CTX = 0x2, + I40E_TX_DESC_DTYPE_FILTER_PROG = 0x8, + I40E_TX_DESC_DTYPE_DDP_CTX = 0x9, + I40E_TX_DESC_DTYPE_FLEX_DATA = 0xB, + I40E_TX_DESC_DTYPE_FLEX_CTX_1 = 0xC, + I40E_TX_DESC_DTYPE_FLEX_CTX_2 = 0xD, + I40E_TX_DESC_DTYPE_DESC_DONE = 0xF +}; + +#define I40E_TXD_QW1_CMD_SHIFT 4 +#define I40E_TXD_QW1_CMD_MASK (0x3FFUL << I40E_TXD_QW1_CMD_SHIFT) + +enum i40e_tx_desc_cmd_bits { + I40E_TX_DESC_CMD_EOP = 0x0001, + I40E_TX_DESC_CMD_RS = 0x0002, + I40E_TX_DESC_CMD_ICRC = 0x0004, + I40E_TX_DESC_CMD_IL2TAG1 = 0x0008, + I40E_TX_DESC_CMD_DUMMY = 0x0010, + I40E_TX_DESC_CMD_IIPT_NONIP = 0x0000, /* 2 BITS */ + I40E_TX_DESC_CMD_IIPT_IPV6 = 0x0020, /* 2 BITS */ + I40E_TX_DESC_CMD_IIPT_IPV4 = 0x0040, /* 2 BITS */ + I40E_TX_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, /* 2 BITS */ + I40E_TX_DESC_CMD_FCOET = 0x0080, + I40E_TX_DESC_CMD_L4T_EOFT_UNK = 0x0000, /* 2 BITS */ + I40E_TX_DESC_CMD_L4T_EOFT_TCP = 0x0100, /* 2 BITS */ + I40E_TX_DESC_CMD_L4T_EOFT_SCTP = 0x0200, /* 2 BITS */ + I40E_TX_DESC_CMD_L4T_EOFT_UDP = 0x0300, /* 2 BITS */ + I40E_TX_DESC_CMD_L4T_EOFT_EOF_N = 0x0000, /* 2 BITS */ + I40E_TX_DESC_CMD_L4T_EOFT_EOF_T = 0x0100, /* 2 BITS */ + I40E_TX_DESC_CMD_L4T_EOFT_EOF_NI = 0x0200, /* 2 BITS */ + I40E_TX_DESC_CMD_L4T_EOFT_EOF_A = 0x0300, /* 2 BITS */ +}; + +#define I40E_TXD_QW1_OFFSET_SHIFT 16 +#define I40E_TXD_QW1_OFFSET_MASK (0x3FFFFULL << \ + I40E_TXD_QW1_OFFSET_SHIFT) + +enum i40e_tx_desc_length_fields { + /* Note: These are predefined bit offsets */ + I40E_TX_DESC_LENGTH_MACLEN_SHIFT = 0, /* 7 BITS */ + I40E_TX_DESC_LENGTH_IPLEN_SHIFT = 7, /* 7 BITS */ + I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT = 14 /* 4 BITS */ +}; + +#define I40E_TXD_QW1_MACLEN_MASK (0x7FUL << I40E_TX_DESC_LENGTH_MACLEN_SHIFT) +#define I40E_TXD_QW1_IPLEN_MASK (0x7FUL << I40E_TX_DESC_LENGTH_IPLEN_SHIFT) +#define I40E_TXD_QW1_L4LEN_MASK (0xFUL << I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) +#define I40E_TXD_QW1_FCLEN_MASK (0xFUL << I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +#define I40E_TXD_QW1_TX_BUF_SZ_SHIFT 34 +#define I40E_TXD_QW1_TX_BUF_SZ_MASK (0x3FFFULL << \ + I40E_TXD_QW1_TX_BUF_SZ_SHIFT) + +#define I40E_TXD_QW1_L2TAG1_SHIFT 48 +#define I40E_TXD_QW1_L2TAG1_MASK (0xFFFFULL << 
I40E_TXD_QW1_L2TAG1_SHIFT) + +/* Context descriptors */ +struct i40e_tx_context_desc { + __le32 tunneling_params; + __le16 l2tag2; + __le16 rsvd; + __le64 type_cmd_tso_mss; +}; + +#define I40E_TXD_CTX_QW1_DTYPE_SHIFT 0 +#define I40E_TXD_CTX_QW1_DTYPE_MASK (0xFUL << I40E_TXD_CTX_QW1_DTYPE_SHIFT) + +#define I40E_TXD_CTX_QW1_CMD_SHIFT 4 +#define I40E_TXD_CTX_QW1_CMD_MASK (0xFFFFUL << I40E_TXD_CTX_QW1_CMD_SHIFT) + +enum i40e_tx_ctx_desc_cmd_bits { + I40E_TX_CTX_DESC_TSO = 0x01, + I40E_TX_CTX_DESC_TSYN = 0x02, + I40E_TX_CTX_DESC_IL2TAG2 = 0x04, + I40E_TX_CTX_DESC_IL2TAG2_IL2H = 0x08, + I40E_TX_CTX_DESC_SWTCH_NOTAG = 0x00, + I40E_TX_CTX_DESC_SWTCH_UPLINK = 0x10, + I40E_TX_CTX_DESC_SWTCH_LOCAL = 0x20, + I40E_TX_CTX_DESC_SWTCH_VSI = 0x30, + I40E_TX_CTX_DESC_SWPE = 0x40 +}; + +#define I40E_TXD_CTX_QW1_TSO_LEN_SHIFT 30 +#define I40E_TXD_CTX_QW1_TSO_LEN_MASK (0x3FFFFULL << \ + I40E_TXD_CTX_QW1_TSO_LEN_SHIFT) + +#define I40E_TXD_CTX_QW1_MSS_SHIFT 50 +#define I40E_TXD_CTX_QW1_MSS_MASK (0x3FFFULL << \ + I40E_TXD_CTX_QW1_MSS_SHIFT) + +#define I40E_TXD_CTX_QW1_VSI_SHIFT 50 +#define I40E_TXD_CTX_QW1_VSI_MASK (0x1FFULL << I40E_TXD_CTX_QW1_VSI_SHIFT) + +#define I40E_TXD_CTX_QW0_EXT_IP_SHIFT 0 +#define I40E_TXD_CTX_QW0_EXT_IP_MASK (0x3ULL << \ + I40E_TXD_CTX_QW0_EXT_IP_SHIFT) + +enum i40e_tx_ctx_desc_eipt_offload { + I40E_TX_CTX_EXT_IP_NONE = 0x0, + I40E_TX_CTX_EXT_IP_IPV6 = 0x1, + I40E_TX_CTX_EXT_IP_IPV4_NO_CSUM = 0x2, + I40E_TX_CTX_EXT_IP_IPV4 = 0x3 +}; + +#define I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT 2 +#define I40E_TXD_CTX_QW0_EXT_IPLEN_MASK (0x3FULL << \ + I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT) + +#define I40E_TXD_CTX_QW0_NATT_SHIFT 9 +#define I40E_TXD_CTX_QW0_NATT_MASK (0x3ULL << I40E_TXD_CTX_QW0_NATT_SHIFT) + +#define I40E_TXD_CTX_UDP_TUNNELING (0x1ULL << I40E_TXD_CTX_QW0_NATT_SHIFT) +#define I40E_TXD_CTX_GRE_TUNNELING (0x2ULL << I40E_TXD_CTX_QW0_NATT_SHIFT) + +#define I40E_TXD_CTX_QW0_EIP_NOINC_SHIFT 11 +#define I40E_TXD_CTX_QW0_EIP_NOINC_MASK (0x1ULL << \ + I40E_TXD_CTX_QW0_EIP_NOINC_SHIFT) + +#define I40E_TXD_CTX_EIP_NOINC_IPID_CONST I40E_TXD_CTX_QW0_EIP_NOINC_MASK + +#define I40E_TXD_CTX_QW0_NATLEN_SHIFT 12 +#define I40E_TXD_CTX_QW0_NATLEN_MASK (0X7FULL << \ + I40E_TXD_CTX_QW0_NATLEN_SHIFT) + +#define I40E_TXD_CTX_QW0_DECTTL_SHIFT 19 +#define I40E_TXD_CTX_QW0_DECTTL_MASK (0xFULL << \ + I40E_TXD_CTX_QW0_DECTTL_SHIFT) + +struct i40e_nop_desc { + __le64 rsvd; + __le64 dtype_cmd; +}; + +#define I40E_TXD_NOP_QW1_DTYPE_SHIFT 0 +#define I40E_TXD_NOP_QW1_DTYPE_MASK (0xFUL << I40E_TXD_NOP_QW1_DTYPE_SHIFT) + +#define I40E_TXD_NOP_QW1_CMD_SHIFT 4 +#define I40E_TXD_NOP_QW1_CMD_MASK (0x7FUL << I40E_TXD_NOP_QW1_CMD_SHIFT) + +enum i40e_tx_nop_desc_cmd_bits { + /* Note: These are predefined bit offsets */ + I40E_TX_NOP_DESC_EOP_SHIFT = 0, + I40E_TX_NOP_DESC_RS_SHIFT = 1, + I40E_TX_NOP_DESC_RSV_SHIFT = 2 /* 5 bits */ +}; + +struct i40e_filter_program_desc { + __le32 qindex_flex_ptype_vsi; + __le32 rsvd; + __le32 dtype_cmd_cntindex; + __le32 fd_id; +}; +#define I40E_TXD_FLTR_QW0_QINDEX_SHIFT 0 +#define I40E_TXD_FLTR_QW0_QINDEX_MASK (0x7FFUL << \ + I40E_TXD_FLTR_QW0_QINDEX_SHIFT) +#define I40E_TXD_FLTR_QW0_FLEXOFF_SHIFT 11 +#define I40E_TXD_FLTR_QW0_FLEXOFF_MASK (0x7UL << \ + I40E_TXD_FLTR_QW0_FLEXOFF_SHIFT) +#define I40E_TXD_FLTR_QW0_PCTYPE_SHIFT 17 +#define I40E_TXD_FLTR_QW0_PCTYPE_MASK (0x3FUL << \ + I40E_TXD_FLTR_QW0_PCTYPE_SHIFT) + +/* Packet Classifier Types for filters */ +enum i40e_filter_pctype { + /* Note: Values 0-30 are reserved for future use */ + I40E_FILTER_PCTYPE_NONF_IPV4_UDP = 31, + /* Note: Value 32 is 
reserved for future use */ + I40E_FILTER_PCTYPE_NONF_IPV4_TCP = 33, + I40E_FILTER_PCTYPE_NONF_IPV4_SCTP = 34, + I40E_FILTER_PCTYPE_NONF_IPV4_OTHER = 35, + I40E_FILTER_PCTYPE_FRAG_IPV4 = 36, + /* Note: Values 37-40 are reserved for future use */ + I40E_FILTER_PCTYPE_NONF_IPV6_UDP = 41, + I40E_FILTER_PCTYPE_NONF_IPV6_TCP = 43, + I40E_FILTER_PCTYPE_NONF_IPV6_SCTP = 44, + I40E_FILTER_PCTYPE_NONF_IPV6_OTHER = 45, + I40E_FILTER_PCTYPE_FRAG_IPV6 = 46, + /* Note: Value 47 is reserved for future use */ + I40E_FILTER_PCTYPE_FCOE_OX = 48, + I40E_FILTER_PCTYPE_FCOE_RX = 49, + I40E_FILTER_PCTYPE_FCOE_OTHER = 50, + /* Note: Values 51-62 are reserved for future use */ + I40E_FILTER_PCTYPE_L2_PAYLOAD = 63, +}; + +enum i40e_filter_program_desc_dest { + I40E_FILTER_PROGRAM_DESC_DEST_DROP_PACKET = 0x0, + I40E_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_QINDEX = 0x1, + I40E_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_OTHER = 0x2, +}; + +enum i40e_filter_program_desc_fd_status { + I40E_FILTER_PROGRAM_DESC_FD_STATUS_NONE = 0x0, + I40E_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID = 0x1, + I40E_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID_4FLEX_BYTES = 0x2, + I40E_FILTER_PROGRAM_DESC_FD_STATUS_8FLEX_BYTES = 0x3, +}; + +#define I40E_TXD_FLTR_QW0_DEST_VSI_SHIFT 23 +#define I40E_TXD_FLTR_QW0_DEST_VSI_MASK (0x1FFUL << \ + I40E_TXD_FLTR_QW0_DEST_VSI_SHIFT) + +#define I40E_TXD_FLTR_QW1_DTYPE_SHIFT 0 +#define I40E_TXD_FLTR_QW1_DTYPE_MASK (0xFUL << I40E_TXD_FLTR_QW1_DTYPE_SHIFT) + +#define I40E_TXD_FLTR_QW1_CMD_SHIFT 4 +#define I40E_TXD_FLTR_QW1_CMD_MASK (0xFFFFULL << \ + I40E_TXD_FLTR_QW1_CMD_SHIFT) + +#define I40E_TXD_FLTR_QW1_PCMD_SHIFT (0x0ULL + I40E_TXD_FLTR_QW1_CMD_SHIFT) +#define I40E_TXD_FLTR_QW1_PCMD_MASK (0x7ULL << I40E_TXD_FLTR_QW1_PCMD_SHIFT) + +enum i40e_filter_program_desc_pcmd { + I40E_FILTER_PROGRAM_DESC_PCMD_ADD_UPDATE = 0x1, + I40E_FILTER_PROGRAM_DESC_PCMD_REMOVE = 0x2, +}; + +#define I40E_TXD_FLTR_QW1_DEST_SHIFT (0x3ULL + I40E_TXD_FLTR_QW1_CMD_SHIFT) +#define I40E_TXD_FLTR_QW1_DEST_MASK (0x3ULL << I40E_TXD_FLTR_QW1_DEST_SHIFT) + +#define I40E_TXD_FLTR_QW1_CNT_ENA_SHIFT (0x7ULL + I40E_TXD_FLTR_QW1_CMD_SHIFT) +#define I40E_TXD_FLTR_QW1_CNT_ENA_MASK (0x1ULL << \ + I40E_TXD_FLTR_QW1_CNT_ENA_SHIFT) + +#define I40E_TXD_FLTR_QW1_FD_STATUS_SHIFT (0x9ULL + \ + I40E_TXD_FLTR_QW1_CMD_SHIFT) +#define I40E_TXD_FLTR_QW1_FD_STATUS_MASK (0x3ULL << \ + I40E_TXD_FLTR_QW1_FD_STATUS_SHIFT) + +#define I40E_TXD_FLTR_QW1_CNTINDEX_SHIFT 20 +#define I40E_TXD_FLTR_QW1_CNTINDEX_MASK (0x1FFUL << \ + I40E_TXD_FLTR_QW1_CNTINDEX_SHIFT) + +enum i40e_filter_type { + I40E_FLOW_DIRECTOR_FLTR = 0, + I40E_PE_QUAD_HASH_FLTR = 1, + I40E_ETHERTYPE_FLTR, + I40E_FCOE_CTX_FLTR, + I40E_MAC_VLAN_FLTR, + I40E_HASH_FLTR +}; + +struct i40e_vsi_context { + u16 seid; + u16 uplink_seid; + u16 vsi_number; + u16 vsis_allocated; + u16 vsis_unallocated; + u16 flags; + u8 pf_num; + u8 vf_num; + u8 connection_type; + struct i40e_aqc_vsi_properties_data info; +}; + +struct i40e_veb_context { + u16 seid; + u16 uplink_seid; + u16 veb_number; + u16 vebs_allocated; + u16 vebs_unallocated; + u16 flags; + struct i40e_aqc_get_veb_parameters_completion info; +}; + +/* Statistics collected by each port, VSI, VEB, and S-channel */ +struct i40e_eth_stats { + u64 rx_bytes; /* gorc */ + u64 rx_unicast; /* uprc */ + u64 rx_multicast; /* mprc */ + u64 rx_broadcast; /* bprc */ + u64 rx_discards; /* rdpc */ + u64 rx_unknown_protocol; /* rupp */ + u64 tx_bytes; /* gotc */ + u64 tx_unicast; /* uptc */ + u64 tx_multicast; /* mptc */ + u64 tx_broadcast; /* bptc */ + u64 tx_discards; /* tdpc */ + u64 
tx_errors; /* tepc */ +}; + +/* Statistics collected by the MAC */ +struct i40e_hw_port_stats { + /* eth stats collected by the port */ + struct i40e_eth_stats eth; + + /* additional port specific stats */ + u64 tx_dropped_link_down; /* tdold */ + u64 crc_errors; /* crcerrs */ + u64 illegal_bytes; /* illerrc */ + u64 error_bytes; /* errbc */ + u64 mac_local_faults; /* mlfc */ + u64 mac_remote_faults; /* mrfc */ + u64 rx_length_errors; /* rlec */ + u64 link_xon_rx; /* lxonrxc */ + u64 link_xoff_rx; /* lxoffrxc */ + u64 priority_xon_rx[8]; /* pxonrxc[8] */ + u64 priority_xoff_rx[8]; /* pxoffrxc[8] */ + u64 link_xon_tx; /* lxontxc */ + u64 link_xoff_tx; /* lxofftxc */ + u64 priority_xon_tx[8]; /* pxontxc[8] */ + u64 priority_xoff_tx[8]; /* pxofftxc[8] */ + u64 priority_xon_2_xoff[8]; /* pxon2offc[8] */ + u64 rx_size_64; /* prc64 */ + u64 rx_size_127; /* prc127 */ + u64 rx_size_255; /* prc255 */ + u64 rx_size_511; /* prc511 */ + u64 rx_size_1023; /* prc1023 */ + u64 rx_size_1522; /* prc1522 */ + u64 rx_size_big; /* prc9522 */ + u64 rx_undersize; /* ruc */ + u64 rx_fragments; /* rfc */ + u64 rx_oversize; /* roc */ + u64 rx_jabber; /* rjc */ + u64 tx_size_64; /* ptc64 */ + u64 tx_size_127; /* ptc127 */ + u64 tx_size_255; /* ptc255 */ + u64 tx_size_511; /* ptc511 */ + u64 tx_size_1023; /* ptc1023 */ + u64 tx_size_1522; /* ptc1522 */ + u64 tx_size_big; /* ptc9522 */ + u64 mac_short_packet_dropped; /* mspdc */ + u64 checksum_error; /* xec */ + /* flow director stats */ + u64 fd_atr_match; + u64 fd_sb_match; + /* EEE LPI */ + u32 tx_lpi_status; + u32 rx_lpi_status; + u64 tx_lpi_count; /* etlpic */ + u64 rx_lpi_count; /* erlpic */ +}; + +/* Checksum and Shadow RAM pointers */ +#define I40E_SR_NVM_CONTROL_WORD 0x00 +#define I40E_SR_PCIE_ANALOG_CONFIG_PTR 0x03 +#define I40E_SR_PHY_ANALOG_CONFIG_PTR 0x04 +#define I40E_SR_OPTION_ROM_PTR 0x05 +#define I40E_SR_RO_PCIR_REGS_AUTO_LOAD_PTR 0x06 +#define I40E_SR_AUTO_GENERATED_POINTERS_PTR 0x07 +#define I40E_SR_PCIR_REGS_AUTO_LOAD_PTR 0x08 +#define I40E_SR_EMP_GLOBAL_MODULE_PTR 0x09 +#define I40E_SR_RO_PCIE_LCB_PTR 0x0A +#define I40E_SR_EMP_IMAGE_PTR 0x0B +#define I40E_SR_PE_IMAGE_PTR 0x0C +#define I40E_SR_CSR_PROTECTED_LIST_PTR 0x0D +#define I40E_SR_MNG_CONFIG_PTR 0x0E +#define I40E_SR_EMP_MODULE_PTR 0x0F +#define I40E_SR_PBA_BLOCK_PTR 0x16 +#define I40E_SR_BOOT_CONFIG_PTR 0x17 +#define I40E_SR_NVM_IMAGE_VERSION 0x18 +#define I40E_SR_NVM_WAKE_ON_LAN 0x19 +#define I40E_SR_ALTERNATE_SAN_MAC_ADDRESS_PTR 0x27 +#define I40E_SR_PERMANENT_SAN_MAC_ADDRESS_PTR 0x28 +#define I40E_SR_NVM_EETRACK_LO 0x2D +#define I40E_SR_NVM_EETRACK_HI 0x2E +#define I40E_SR_VPD_PTR 0x2F +#define I40E_SR_PXE_SETUP_PTR 0x30 +#define I40E_SR_PXE_CONFIG_CUST_OPTIONS_PTR 0x31 +#define I40E_SR_SW_ETHERNET_MAC_ADDRESS_PTR 0x37 +#define I40E_SR_POR_REGS_AUTO_LOAD_PTR 0x38 +#define I40E_SR_EMPR_REGS_AUTO_LOAD_PTR 0x3A +#define I40E_SR_GLOBR_REGS_AUTO_LOAD_PTR 0x3B +#define I40E_SR_CORER_REGS_AUTO_LOAD_PTR 0x3C +#define I40E_SR_PCIE_ALT_AUTO_LOAD_PTR 0x3E +#define I40E_SR_SW_CHECKSUM_WORD 0x3F +#define I40E_SR_1ST_FREE_PROVISION_AREA_PTR 0x40 +#define I40E_SR_4TH_FREE_PROVISION_AREA_PTR 0x42 +#define I40E_SR_3RD_FREE_PROVISION_AREA_PTR 0x44 +#define I40E_SR_2ND_FREE_PROVISION_AREA_PTR 0x46 +#define I40E_SR_EMP_SR_SETTINGS_PTR 0x48 + +/* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */ +#define I40E_SR_VPD_MODULE_MAX_SIZE 1024 +#define I40E_SR_PCIE_ALT_MODULE_MAX_SIZE 1024 +#define I40E_SR_CONTROL_WORD_1_SHIFT 0x06 +#define I40E_SR_CONTROL_WORD_1_MASK (0x03 << 
I40E_SR_CONTROL_WORD_1_SHIFT) + +/* Shadow RAM related */ +#define I40E_SR_SECTOR_SIZE_IN_WORDS 0x800 +#define I40E_SR_BUF_ALIGNMENT 4096 +#define I40E_SR_WORDS_IN_1KB 512 +/* Checksum should be calculated such that after adding all the words, + * including the checksum word itself, the sum should be 0xBABA. + */ +#define I40E_SR_SW_CHECKSUM_BASE 0xBABA + +#define I40E_SRRD_SRCTL_ATTEMPTS 100000 + +enum i40e_switch_element_types { + I40E_SWITCH_ELEMENT_TYPE_MAC = 1, + I40E_SWITCH_ELEMENT_TYPE_PF = 2, + I40E_SWITCH_ELEMENT_TYPE_VF = 3, + I40E_SWITCH_ELEMENT_TYPE_EMP = 4, + I40E_SWITCH_ELEMENT_TYPE_BMC = 6, + I40E_SWITCH_ELEMENT_TYPE_PE = 16, + I40E_SWITCH_ELEMENT_TYPE_VEB = 17, + I40E_SWITCH_ELEMENT_TYPE_PA = 18, + I40E_SWITCH_ELEMENT_TYPE_VSI = 19, +}; + +/* Supported EtherType filters */ +enum i40e_ether_type_index { + I40E_ETHER_TYPE_1588 = 0, + I40E_ETHER_TYPE_FIP = 1, + I40E_ETHER_TYPE_OUI_EXTENDED = 2, + I40E_ETHER_TYPE_MAC_CONTROL = 3, + I40E_ETHER_TYPE_LLDP = 4, + I40E_ETHER_TYPE_EVB_PROTOCOL1 = 5, + I40E_ETHER_TYPE_EVB_PROTOCOL2 = 6, + I40E_ETHER_TYPE_QCN_CNM = 7, + I40E_ETHER_TYPE_8021X = 8, + I40E_ETHER_TYPE_ARP = 9, + I40E_ETHER_TYPE_RSV1 = 10, + I40E_ETHER_TYPE_RSV2 = 11, +}; + +/* Filter context base size is 1K */ +#define I40E_HASH_FILTER_BASE_SIZE 1024 +/* Supported Hash filter values */ +enum i40e_hash_filter_size { + I40E_HASH_FILTER_SIZE_1K = 0, + I40E_HASH_FILTER_SIZE_2K = 1, + I40E_HASH_FILTER_SIZE_4K = 2, + I40E_HASH_FILTER_SIZE_8K = 3, + I40E_HASH_FILTER_SIZE_16K = 4, + I40E_HASH_FILTER_SIZE_32K = 5, + I40E_HASH_FILTER_SIZE_64K = 6, + I40E_HASH_FILTER_SIZE_128K = 7, + I40E_HASH_FILTER_SIZE_256K = 8, + I40E_HASH_FILTER_SIZE_512K = 9, + I40E_HASH_FILTER_SIZE_1M = 10, +}; + +/* DMA context base size is 0.5K */ +#define I40E_DMA_CNTX_BASE_SIZE 512 +/* Supported DMA context values */ +enum i40e_dma_cntx_size { + I40E_DMA_CNTX_SIZE_512 = 0, + I40E_DMA_CNTX_SIZE_1K = 1, + I40E_DMA_CNTX_SIZE_2K = 2, + I40E_DMA_CNTX_SIZE_4K = 3, + I40E_DMA_CNTX_SIZE_8K = 4, + I40E_DMA_CNTX_SIZE_16K = 5, + I40E_DMA_CNTX_SIZE_32K = 6, + I40E_DMA_CNTX_SIZE_64K = 7, + I40E_DMA_CNTX_SIZE_128K = 8, + I40E_DMA_CNTX_SIZE_256K = 9, +}; + +/* Supported Hash look up table (LUT) sizes */ +enum i40e_hash_lut_size { + I40E_HASH_LUT_SIZE_128 = 0, + I40E_HASH_LUT_SIZE_512 = 1, +}; + +/* Structure to hold a per PF filter control settings */ +struct i40e_filter_control_settings { + /* number of PE Quad Hash filter buckets */ + enum i40e_hash_filter_size pe_filt_num; + /* number of PE Quad Hash contexts */ + enum i40e_dma_cntx_size pe_cntx_num; + /* number of FCoE filter buckets */ + enum i40e_hash_filter_size fcoe_filt_num; + /* number of FCoE DDP contexts */ + enum i40e_dma_cntx_size fcoe_cntx_num; + /* size of the Hash LUT */ + enum i40e_hash_lut_size hash_lut_size; + /* enable FDIR filters for PF and its VFs */ + bool enable_fdir; + /* enable Ethertype filters for PF and its VFs */ + bool enable_ethtype; + /* enable MAC/VLAN filters for PF and its VFs */ + bool enable_macvlan; +}; + +/* Structure to hold device level control filter counts */ +struct i40e_control_filter_stats { + u16 mac_etype_used; /* Used perfect match MAC/EtherType filters */ + u16 etype_used; /* Used perfect EtherType filters */ + u16 mac_etype_free; /* Un-used perfect match MAC/EtherType filters */ + u16 etype_free; /* Un-used perfect EtherType filters */ +}; + +enum i40e_reset_type { + I40E_RESET_POR = 0, + I40E_RESET_CORER = 1, + I40E_RESET_GLOBR = 2, + I40E_RESET_EMPR = 3, +}; +#ifdef I40E_DCB_SW + +/* EMP Settings Module Header 
Section */ +struct i40e_emp_settings_module { + u16 length; + u16 fw_params; + u16 reserved; + u16 features; + u16 oem_cfg; + u16 pfalloc_ptr; + u16 eee_variables; + u16 phy_cap_lan0_ptr; + u16 phy_cap_lan1_ptr; + u16 phy_cap_lan2_ptr; + u16 phy_cap_lan3_ptr; + u16 phy_map_lan0_ptr; + u16 phy_map_lan1_ptr; + u16 phy_map_lan2_ptr; + u16 phy_map_lan3_ptr; + u16 lldp_cfg_ptr; + u16 ltr_max_snoop; + u16 ltr_max_no_snoop; + u16 ltr_delta; + u16 ltr_grade_value; + u16 lldp_tlv_ptr; + u16 crc8; +}; + +/* IEEE 802.1AB LLDP Agent Variables from NVM */ +#define I40E_NVM_LLDP_CFG_PTR 0xF +struct i40e_lldp_variables { + u16 length; + u16 adminstatus; + u16 msgfasttx; + u16 msgtxinterval; + u16 txparams; + u16 timers; + u16 crc8; +}; +#endif /* I40E_DCB_SW */ + +/* Offsets into Alternate Ram */ +#define I40E_ALT_STRUCT_FIRST_PF_OFFSET 0 /* in dwords */ +#define I40E_ALT_STRUCT_DWORDS_PER_PF 64 /* in dwords */ +#define I40E_ALT_STRUCT_OUTER_VLAN_TAG_OFFSET 0xD /* in dwords */ +#define I40E_ALT_STRUCT_USER_PRIORITY_OFFSET 0xC /* in dwords */ +#define I40E_ALT_STRUCT_MIN_BW_OFFSET 0xE /* in dwords */ +#define I40E_ALT_STRUCT_MAX_BW_OFFSET 0xF /* in dwords */ + +/* Alternate Ram Bandwidth Masks */ +#define I40E_ALT_BW_VALUE_MASK 0xFF +#define I40E_ALT_BW_RELATIVE_MASK 0x40000000 +#define I40E_ALT_BW_VALID_MASK 0x80000000 + +/* RSS Hash Table Size */ +#define I40E_PFQF_CTL_0_HASHLUTSIZE_512 0x00010000 +#endif /* _I40E_TYPE_H_ */ diff --git a/lib/librte_pmd_i40e/i40e/i40e_virtchnl.h b/lib/librte_pmd_i40e/i40e/i40e_virtchnl.h new file mode 100644 index 0000000..5257b5d --- /dev/null +++ b/lib/librte_pmd_i40e/i40e/i40e_virtchnl.h @@ -0,0 +1,373 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _I40E_VIRTCHNL_H_ +#define _I40E_VIRTCHNL_H_ + +#include "i40e_type.h" + +/* Description: + * This header file describes the VF-PF communication protocol used + * by the various i40e drivers. 
+ * + * Admin queue buffer usage: + * desc->opcode is always i40e_aqc_opc_send_msg_to_pf + * flags, retval, datalen, and data addr are all used normally. + * Firmware copies the cookie fields when sending messages between the PF and + * VF, but uses all other fields internally. Due to this limitation, we + * must send all messages as "indirect", i.e. using an external buffer. + * + * All the vsi indexes are relative to the VF. Each VF can have maximum of + * three VSIs. All the queue indexes are relative to the VSI. Each VF can + * have a maximum of sixteen queues for all of its VSIs. + * + * The PF is required to return a status code in v_retval for all messages + * except RESET_VF, which does not require any response. The return value is of + * i40e_status_code type, defined in the i40e_type.h. + * + * In general, VF driver initialization should roughly follow the order of these + * opcodes. The VF driver must first validate the API version of the PF driver, + * then request a reset, then get resources, then configure queues and + * interrupts. After these operations are complete, the VF driver may start + * its queues, optionally add MAC and VLAN filters, and process traffic. + */ + +/* Opcodes for VF-PF communication. These are placed in the v_opcode field + * of the virtchnl_msg structure. + */ +enum i40e_virtchnl_ops { +/* VF sends req. to pf for the following + * ops. + */ + I40E_VIRTCHNL_OP_UNKNOWN = 0, + I40E_VIRTCHNL_OP_VERSION = 1, /* must ALWAYS be 1 */ + I40E_VIRTCHNL_OP_RESET_VF, + I40E_VIRTCHNL_OP_GET_VF_RESOURCES, + I40E_VIRTCHNL_OP_CONFIG_TX_QUEUE, + I40E_VIRTCHNL_OP_CONFIG_RX_QUEUE, + I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES, + I40E_VIRTCHNL_OP_CONFIG_IRQ_MAP, + I40E_VIRTCHNL_OP_ENABLE_QUEUES, + I40E_VIRTCHNL_OP_DISABLE_QUEUES, + I40E_VIRTCHNL_OP_ADD_ETHER_ADDRESS, + I40E_VIRTCHNL_OP_DEL_ETHER_ADDRESS, + I40E_VIRTCHNL_OP_ADD_VLAN, + I40E_VIRTCHNL_OP_DEL_VLAN, + I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE, + I40E_VIRTCHNL_OP_GET_STATS, + I40E_VIRTCHNL_OP_FCOE, +/* PF sends status change events to vfs using + * the following op. + */ + I40E_VIRTCHNL_OP_EVENT, +}; + +/* Virtual channel message descriptor. This overlays the admin queue + * descriptor. All other data is passed in external buffers. + */ + +struct i40e_virtchnl_msg { + u8 pad[8]; /* AQ flags/opcode/len/retval fields */ + enum i40e_virtchnl_ops v_opcode; /* avoid confusion with desc->opcode */ + enum i40e_status_code v_retval; /* ditto for desc->retval */ + u32 vfid; /* used by PF when sending to VF */ +}; + +/* Message descriptions and data structures.*/ + +/* I40E_VIRTCHNL_OP_VERSION + * VF posts its version number to the PF. PF responds with its version number + * in the same format, along with a return code. + * Reply from PF has its major/minor versions also in param0 and param1. + * If there is a major version mismatch, then the VF cannot operate. + * If there is a minor version mismatch, then the VF can operate but should + * add a warning to the system log. + * + * This enum element MUST always be specified as == 1, regardless of other + * changes in the API. The PF must always respond to this message without + * error regardless of version mismatch. + */ +#define I40E_VIRTCHNL_VERSION_MAJOR 1 +#define I40E_VIRTCHNL_VERSION_MINOR 0 +struct i40e_virtchnl_version_info { + u32 major; + u32 minor; +}; + +/* I40E_VIRTCHNL_OP_RESET_VF + * VF sends this request to PF with no parameters + * PF does NOT respond! VF driver must delay then poll VFGEN_RSTAT register + * until reset completion is indicated. 
+ * after this operation.
+ *
+ * When reset is complete, PF must ensure that all queues in all VSIs associated
+ * with the VF are stopped, all queue configurations in the HMC are set to 0,
+ * and all MAC and VLAN filters (except the default MAC address) on all VSIs
+ * are cleared.
+ */
+
+/* I40E_VIRTCHNL_OP_GET_VF_RESOURCES
+ * VF sends this request to PF with no parameters
+ * PF responds with an indirect message containing
+ * i40e_virtchnl_vf_resource and one or more
+ * i40e_virtchnl_vsi_resource structures.
+ */
+
+struct i40e_virtchnl_vsi_resource {
+ u16 vsi_id;
+ u16 num_queue_pairs;
+ enum i40e_vsi_type vsi_type;
+ u16 qset_handle;
+ u8 default_mac_addr[I40E_ETH_LENGTH_OF_ADDRESS];
+};
+/* VF offload flags */
+#define I40E_VIRTCHNL_VF_OFFLOAD_L2 0x00000001
+#define I40E_VIRTCHNL_VF_OFFLOAD_IWARP 0x00000002
+#define I40E_VIRTCHNL_VF_OFFLOAD_FCOE 0x00000004
+#define I40E_VIRTCHNL_VF_OFFLOAD_VLAN 0x00010000
+
+struct i40e_virtchnl_vf_resource {
+ u16 num_vsis;
+ u16 num_queue_pairs;
+ u16 max_vectors;
+ u16 max_mtu;
+
+ u32 vf_offload_flags;
+ u32 max_fcoe_contexts;
+ u32 max_fcoe_filters;
+
+ struct i40e_virtchnl_vsi_resource vsi_res[1];
+};
+
+/* I40E_VIRTCHNL_OP_CONFIG_TX_QUEUE
+ * VF sends this message to set up parameters for one TX queue.
+ * External data buffer contains one instance of i40e_virtchnl_txq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Tx queue config info */
+struct i40e_virtchnl_txq_info {
+ u16 vsi_id;
+ u16 queue_id;
+ u16 ring_len; /* number of descriptors, multiple of 8 */
+ u16 headwb_enabled;
+ u64 dma_ring_addr;
+ u64 dma_headwb_addr;
+};
+
+/* I40E_VIRTCHNL_OP_CONFIG_RX_QUEUE
+ * VF sends this message to set up parameters for one RX queue.
+ * External data buffer contains one instance of i40e_virtchnl_rxq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Rx queue config info */
+struct i40e_virtchnl_rxq_info {
+ u16 vsi_id;
+ u16 queue_id;
+ u32 ring_len; /* number of descriptors, multiple of 32 */
+ u16 hdr_size;
+ u16 splithdr_enabled;
+ u32 databuffer_size;
+ u32 max_pkt_size;
+ u64 dma_ring_addr;
+ enum i40e_hmc_obj_rx_hsplit_0 rx_split_pos;
+};
+
+/* I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES
+ * VF sends this message to set parameters for all active TX and RX queues
+ * associated with the specified VSI.
+ * PF configures queues and returns status.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the VSI, an error is returned and no queues are configured.
+ */
+struct i40e_virtchnl_queue_pair_info {
+ /* NOTE: vsi_id and queue_id should be identical for both queues. */
+ struct i40e_virtchnl_txq_info txq;
+ struct i40e_virtchnl_rxq_info rxq;
+};
+
+struct i40e_virtchnl_vsi_queue_config_info {
+ u16 vsi_id;
+ u16 num_queue_pairs;
+ struct i40e_virtchnl_queue_pair_info qpair[1];
+};
+
+/* I40E_VIRTCHNL_OP_CONFIG_IRQ_MAP
+ * VF uses this message to map vectors to queues.
+ * The rxq_map and txq_map fields are bitmaps used to indicate which queues
+ * are to be associated with the specified vector.
+ * The "other" causes are always mapped to vector 0.
+ * PF configures interrupt mapping and returns status.
+ */
+struct i40e_virtchnl_vector_map {
+ u16 vsi_id;
+ u16 vector_id;
+ u16 rxq_map;
+ u16 txq_map;
+ u16 rxitr_idx;
+ u16 txitr_idx;
+};
+
+struct i40e_virtchnl_irq_map_info {
+ u16 num_vectors;
+ struct i40e_virtchnl_vector_map vecmap[1];
+};
+
+/* I40E_VIRTCHNL_OP_ENABLE_QUEUES
+ * I40E_VIRTCHNL_OP_DISABLE_QUEUES
+ * VF sends these message to enable or disable TX/RX queue pairs.
+ * The queues fields are bitmaps indicating which queues to act upon.
+ * (Currently, we only support 16 queues per VF, but we make the field
+ * u32 to allow for expansion.)
+ * PF performs requested action and returns status.
+ */
+struct i40e_virtchnl_queue_select {
+ u16 vsi_id;
+ u16 pad;
+ u32 rx_queues;
+ u32 tx_queues;
+};
+
+/* I40E_VIRTCHNL_OP_ADD_ETHER_ADDRESS
+ * VF sends this message in order to add one or more unicast or multicast
+ * address filters for the specified VSI.
+ * PF adds the filters and returns status.
+ */
+
+/* I40E_VIRTCHNL_OP_DEL_ETHER_ADDRESS
+ * VF sends this message in order to remove one or more unicast or multicast
+ * filters for the specified VSI.
+ * PF removes the filters and returns status.
+ */
+
+struct i40e_virtchnl_ether_addr {
+ u8 addr[I40E_ETH_LENGTH_OF_ADDRESS];
+ u8 pad[2];
+};
+
+struct i40e_virtchnl_ether_addr_list {
+ u16 vsi_id;
+ u16 num_elements;
+ struct i40e_virtchnl_ether_addr list[1];
+};
+
+/* I40E_VIRTCHNL_OP_ADD_VLAN
+ * VF sends this message to add one or more VLAN tag filters for receives.
+ * PF adds the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+/* I40E_VIRTCHNL_OP_DEL_VLAN
+ * VF sends this message to remove one or more VLAN tag filters for receives.
+ * PF removes the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+struct i40e_virtchnl_vlan_filter_list {
+ u16 vsi_id;
+ u16 num_elements;
+ u16 vlan_id[1];
+};
+
+/* I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE
+ * VF sends VSI id and flags.
+ * PF returns status code in retval.
+ * Note: we assume that broadcast accept mode is always enabled.
+ */
+struct i40e_virtchnl_promisc_info {
+ u16 vsi_id;
+ u16 flags;
+};
+
+#define I40E_FLAG_VF_UNICAST_PROMISC 0x00000001
+#define I40E_FLAG_VF_MULTICAST_PROMISC 0x00000002
+
+/* I40E_VIRTCHNL_OP_GET_STATS
+ * VF sends this message to request stats for the selected VSI. VF uses
+ * the i40e_virtchnl_queue_select struct to specify the VSI. The queue_id
+ * field is ignored by the PF.
+ *
+ * PF replies with struct i40e_eth_stats in an external buffer.
+ */
+
+/* I40E_VIRTCHNL_OP_EVENT
+ * PF sends this message to inform the VF driver of events that may affect it.
+ * No direct response is expected from the VF, though it may generate other
+ * messages in response to this one.
+ */
+enum i40e_virtchnl_event_codes {
+ I40E_VIRTCHNL_EVENT_UNKNOWN = 0,
+ I40E_VIRTCHNL_EVENT_LINK_CHANGE,
+ I40E_VIRTCHNL_EVENT_RESET_IMPENDING,
+ I40E_VIRTCHNL_EVENT_PF_DRIVER_CLOSE,
+};
+#define I40E_PF_EVENT_SEVERITY_INFO 0
+#define I40E_PF_EVENT_SEVERITY_ATTENTION 1
+#define I40E_PF_EVENT_SEVERITY_ACTION_REQUIRED 2
+#define I40E_PF_EVENT_SEVERITY_CERTAIN_DOOM 255
+
+struct i40e_virtchnl_pf_event {
+ enum i40e_virtchnl_event_codes event;
+ union {
+ struct {
+ enum i40e_aq_link_speed link_speed;
+ bool link_status;
+ } link_event;
+ } event_data;
+
+ int severity;
+};
+
+/* VF reset states - these are written into the RSTAT register:
+ * I40E_VFGEN_RSTAT1 on the PF
+ * I40E_VFGEN_RSTAT on the VF
+ * When the PF initiates a reset, it writes 0
+ * When the reset is complete, it writes 1
+ * When the PF detects that the VF has recovered, it writes 2
+ * VF checks this register periodically to determine if a reset has occurred,
+ * then polls it to know when the reset is complete.
+ * If either the PF or VF reads the register while the hardware
+ * is in a reset state, it will return DEADBEEF, which, when masked
+ * will result in 3.
+ */
+enum i40e_vfr_states {
+ I40E_VFR_INPROGRESS = 0,
+ I40E_VFR_COMPLETED,
+ I40E_VFR_VFACTIVE,
+ I40E_VFR_UNKNOWN,
+};
+
+#endif /* _I40E_VIRTCHNL_H_ */
--
1.8.1.4
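
The description block at the top of i40e_virtchnl.h lays out the expected VF bring-up order (validate the API version, request a reset, get resources, configure queues and interrupts, then enable queues). A minimal sketch of the first and last steps of that handshake, assuming hypothetical admin-queue helpers vf_send_to_pf() and vf_recv_from_pf() that are not part of this patch:

/* Sketch only: vf_send_to_pf()/vf_recv_from_pf() are assumed transport
 * helpers carrying one virtchnl message in an external ("indirect") buffer.
 */
#include "i40e_virtchnl.h"

int vf_send_to_pf(enum i40e_virtchnl_ops op, void *msg, u16 msglen);
int vf_recv_from_pf(enum i40e_virtchnl_ops *op, void *buf, u16 buflen);

static int vf_check_api_version(void)
{
	struct i40e_virtchnl_version_info ver = {
		.major = I40E_VIRTCHNL_VERSION_MAJOR,
		.minor = I40E_VIRTCHNL_VERSION_MINOR,
	};
	enum i40e_virtchnl_ops op;
	int err;

	/* I40E_VIRTCHNL_OP_VERSION: VF posts its version, PF replies in kind */
	err = vf_send_to_pf(I40E_VIRTCHNL_OP_VERSION, &ver, sizeof(ver));
	if (!err)
		err = vf_recv_from_pf(&op, &ver, sizeof(ver));
	if (err)
		return err;
	/* a major version mismatch means the VF cannot operate */
	return (ver.major == I40E_VIRTCHNL_VERSION_MAJOR) ? 0 : -1;
}

static int vf_enable_queue_pair0(u16 vsi_id)
{
	/* rx_queues/tx_queues are bitmaps relative to the VSI; bit 0 = queue 0 */
	struct i40e_virtchnl_queue_select qsel = {
		.vsi_id = vsi_id,
		.rx_queues = 0x1,
		.tx_queues = 0x1,
	};

	return vf_send_to_pf(I40E_VIRTCHNL_OP_ENABLE_QUEUES, &qsel, sizeof(qsel));
}

The same queue_select structure also serves I40E_VIRTCHNL_OP_DISABLE_QUEUES and, with queue bits ignored, I40E_VIRTCHNL_OP_GET_STATS.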
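
The i40e_vfr_states comment implies a polling loop on the RSTAT register after sending I40E_VIRTCHNL_OP_RESET_VF. A rough sketch, assuming a hypothetical register accessor vf_read_rstat(), a delay helper, and a two-bit state field (0xDEADBEEF masked down gives 3, matching the comment):

/* Sketch only: vf_read_rstat()/vf_delay_ms() are assumed platform helpers,
 * and the 0x3 mask is an assumption drawn from the header comment.
 */
#include "i40e_virtchnl.h"

u32 vf_read_rstat(void);
void vf_delay_ms(unsigned int ms);

static int vf_wait_for_reset_done(unsigned int timeout_ms)
{
	unsigned int waited;

	for (waited = 0; waited < timeout_ms; waited += 10) {
		enum i40e_vfr_states state = vf_read_rstat() & 0x3;

		/* COMPLETED (1) or VFACTIVE (2) means the reset has finished;
		 * a read during hardware reset masks to I40E_VFR_UNKNOWN (3).
		 */
		if (state == I40E_VFR_COMPLETED || state == I40E_VFR_VFACTIVE)
			return 0;
		vf_delay_ms(10);
	}
	return -1; /* timed out */
}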
0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 From thomas.monjalon@6wind.com Mon Feb 2 22:50:34 2015 Return-Path: <thomas.monjalon@6wind.com> Received: from mail-wi0-f171.google.com (mail-wi0-f171.google.com [209.85.212.171]) by dpdk.org (Postfix) with ESMTP id 19BAF37B7 for <dev@dpdk.org>; Mon, 2 Feb 2015 22:50:34 +0100 (CET) Received: by mail-wi0-f171.google.com with SMTP id l15so20017153wiw.4 for <dev@dpdk.org>; Mon, 02 Feb 2015 13:50:34 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references; bh=Zh8Zestxc/BUnRqnBdFXDDx3j41PpqFmvQno0BHw1tg=; b=QDxlj//v1RJTVAnzWbTisif2a5P2VTOXP22iswq2btoOeOfFSoxaSIQNkEwBWDvpQS T2DKEyMYHMxmzi8XpBnir8Bs9oJM2xbh6VnExybmq1rBkmLVD5SvsOUwUfF2SpuNWbLa C0BNFCT/hHWmPy/FisiuA6HVNTm96lKxXFgO87EgkdyL6c3js5+5R5ixplMjY0HG3jy3 m9AmWXNiFA0kKLpQIRj4EkN3v5fq1dQqLIpJBpvXykCiNcs1mxaw/zVs1ALB+rk+KFbM 8AeLFp6O+8N+DSepotoAVdbMDTZk7ZeznAYYi7W60KnhRMORZEon6g2vjm/HrgK5UTkO HDqg== X-Gm-Message-State: ALoCoQmmopOdkGLUIderwCj43MEzuXStIDNrD0EWdfCoVuMdwMRUqfl4daatcbir7V2zEJcNIb8O X-Received: by 10.194.81.193 with SMTP id c1mr44978549wjy.11.1422899129942; Mon, 02 Feb 2015 09:45:29 -0800 (PST) Received: from localhost.localdomain (136-92-190-109.dsl.ovh.fr. 
From: Thomas Monjalon <thomas.monjalon@6wind.com>
To: dev@dpdk.org
Date: Mon, 2 Feb 2015 18:44:53 +0100
Message-Id: <1422899093-20207-3-git-send-email-thomas.monjalon@6wind.com>
X-Mailer: git-send-email 2.2.2
In-Reply-To: <1422899093-20207-1-git-send-email-thomas.monjalon@6wind.com>
References: <1422554832-30093-1-git-send-email-thomas.monjalon@6wind.com> <1422899093-20207-1-git-send-email-thomas.monjalon@6wind.com>
Subject: [dpdk-dev] [PATCH v2 2/2] eal: add help option

Help is printed with -h or --help.
Help is also printed for an unknown option.
This was broken since the rework of options.

Fixes: 489a9d6c9f77 ("merge bsd and linux common options parsing")

Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
 lib/librte_eal/bsdapp/eal/eal.c            | 7 ++++++-
 lib/librte_eal/common/eal_common_options.c | 3 +++
 lib/librte_eal/common/eal_options.h        | 2 ++
 lib/librte_eal/linuxapp/eal/eal.c          | 8 +++++++-
 4 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/lib/librte_eal/bsdapp/eal/eal.c b/lib/librte_eal/bsdapp/eal/eal.c
index 69f3c03..ca2f445 100644
--- a/lib/librte_eal/bsdapp/eal/eal.c
+++ b/lib/librte_eal/bsdapp/eal/eal.c
@@ -326,8 +326,10 @@ eal_parse_args(int argc, char **argv)
 		int ret;
 
 		/* getopt is not happy, stop right now */
-		if (opt == '?')
+		if (opt == '?') {
+			eal_usage(prgname);
 			return -1;
+		}
 
 		ret = eal_parse_common_option(opt, optarg, &internal_config);
 		/* common parser is not happy */
@@ -340,6 +342,9 @@ eal_parse_args(int argc, char **argv)
 			continue;
 
 		switch (opt) {
+		case 'h':
+			eal_usage(prgname);
+			exit(EXIT_SUCCESS);
 		default:
 			if (opt < OPT_LONG_MIN_NUM && isprint(opt)) {
 				RTE_LOG(ERR, EAL, "Option %c is not supported "
diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c
index 4890e78..05d80bd 100644
--- a/lib/librte_eal/common/eal_common_options.c
+++ b/lib/librte_eal/common/eal_common_options.c
@@ -57,6 +57,7 @@ eal_short_options[] =
 	"b:" /* pci-blacklist */
 	"c:" /* coremask */
 	"d:" /* driver */
+	"h"  /* help */
 	"l:" /* corelist */
 	"m:" /* memory size */
 	"n:" /* memory channels */
@@ -70,6 +71,7 @@ eal_long_options[] = {
 	{OPT_BASE_VIRTADDR, 1, NULL, OPT_BASE_VIRTADDR_NUM },
 	{OPT_CREATE_UIO_DEV, 1, NULL, OPT_CREATE_UIO_DEV_NUM },
 	{OPT_FILE_PREFIX, 1, NULL, OPT_FILE_PREFIX_NUM },
+	{OPT_HELP, 0, NULL, OPT_HELP_NUM },
 	{OPT_HUGE_DIR, 1, NULL, OPT_HUGE_DIR_NUM },
 	{OPT_LOG_LEVEL, 1, NULL, OPT_LOG_LEVEL_NUM },
 	{OPT_MASTER_LCORE, 1, NULL, OPT_MASTER_LCORE_NUM },
@@ -605,6 +607,7 @@ eal_common_usage(void)
 	       " --"OPT_SYSLOG" Set syslog facility\n"
 	       " --"OPT_LOG_LEVEL" Set default log level\n"
 	       " -v Display version information on startup\n"
+	       " -h, --help This help\n"
 	       "\nEAL options for DEBUG use only:\n"
 	       " --"OPT_NO_HUGE" Use malloc instead of hugetlbfs\n"
 	       " --"OPT_NO_PCI" Disable PCI\n"
diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h
index fe1c85d..d199a00 100644
--- a/lib/librte_eal/common/eal_options.h
+++ b/lib/librte_eal/common/eal_options.h
@@ -35,6 +35,8 @@ enum {
 	/* long options mapped to a short option */
+#define OPT_HELP "help"
+	OPT_HELP_NUM = 'h',
 #define OPT_PCI_BLACKLIST "pci-blacklist"
 	OPT_PCI_BLACKLIST_NUM = 'b',
 #define OPT_PCI_WHITELIST "pci-whitelist"
diff --git a/lib/librte_eal/linuxapp/eal/eal.c b/lib/librte_eal/linuxapp/eal/eal.c
index e3955e7..d8c0628 100644
--- a/lib/librte_eal/linuxapp/eal/eal.c
+++ b/lib/librte_eal/linuxapp/eal/eal.c
@@ -520,8 +520,10 @@ eal_parse_args(int argc, char **argv)
 		int ret;
 
 		/* getopt is not happy, stop right now */
-		if (opt == '?')
+		if (opt == '?') {
+			eal_usage(prgname);
 			return -1;
+		}
 
 		ret = eal_parse_common_option(opt, optarg, &internal_config);
 		/* common parser is not happy */
@@ -534,6 +536,10 @@ eal_parse_args(int argc, char **argv)
 			continue;
 
 		switch (opt) {
+		case 'h':
+			eal_usage(prgname);
+			exit(EXIT_SUCCESS);
+
 		/* force loading of external driver */
 		case 'd':
 			solib = malloc(sizeof(*solib));
-- 
2.2.2

^ permalink raw reply	[flat|nested] 147+ messages in thread
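For readers outside the DPDK tree, the change boils down to ordinary getopt_long(3) handling: a short 'h' option plus a long "--help" alias mapped to the same value, with the usage text printed both when help is requested (exit with success) and when an unknown option is seen (return an error). The standalone sketch below illustrates that same pattern under the assumption of a plain C program; usage(), the '-c' option and main() are illustrative stand-ins, not the actual EAL code.

#include <getopt.h>
#include <stdio.h>
#include <stdlib.h>

static void
usage(const char *prgname)
{
	printf("Usage: %s [-h|--help] [-c COREMASK]\n", prgname);
}

int
main(int argc, char **argv)
{
	/* "--help" is a long alias that getopt_long() reports as 'h' */
	static const struct option long_opts[] = {
		{ "help", no_argument, NULL, 'h' },
		{ NULL, 0, NULL, 0 },
	};
	int opt;

	while ((opt = getopt_long(argc, argv, "hc:", long_opts, NULL)) != -1) {
		switch (opt) {
		case 'h':
			/* help explicitly requested: print usage, exit 0 */
			usage(argv[0]);
			exit(EXIT_SUCCESS);
		case 'c':
			printf("coremask: %s\n", optarg);
			break;
		default:
			/* '?' (unknown option): print usage, report failure */
			usage(argv[0]);
			return -1;
		}
	}
	return 0;
}

Running such a program with -h, --help or an unrecognized option prints the usage text, which is the behaviour the patch restores for the EAL.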
end of thread, other threads:[~2015-05-07 13:39 UTC | newest]

Thread overview: 147+ messages

2015-01-30 5:07 [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark)
2015-01-30 5:07 ` [dpdk-dev] [PATCH 01/18] fm10k: add base driver Chen Jing D(Mark)
2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark)
2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 01/15] fm10k: add base driver Chen Jing D(Mark)
2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark)
2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 01/15] fm10k: add base driver Chen Jing D(Mark)
2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark)
2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 01/15] fm10k: add base driver Chen Jing D(Mark)
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark)
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 01/17] fm10k: add base driver Chen Jing D(Mark)
2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Chen Jing D(Mark)
2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 01/16] fm10k: add base driver Chen Jing D(Mark)
2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 02/16] eal: add fm10k device id Chen Jing D(Mark)
2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 03/16] fm10k: register fm10k pmd PF driver Chen Jing D(Mark)
2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 04/16] config: change config files to add fm10k into compile Chen Jing D(Mark)
2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 05/16] fm10k: add reta update/requery functions Chen Jing D(Mark)
2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 06/16] fm10k: add Rx queue setup/release function Chen Jing D(Mark)
2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 07/16] fm10k: add Tx " Chen Jing D(Mark)
2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 08/16] fm10k: add Rx/Tx single queue start/stop function Chen Jing D(Mark)
2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 09/16] fm10k: add dev start/stop functions Chen Jing D(Mark)
2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 10/16] fm10k: add receive and tranmit function Chen Jing D(Mark)
2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 11/16] fm10k: add PF RSS support Chen Jing D(Mark)
2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 12/16] fm10k: add scatter receive function Chen Jing D(Mark)
2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 13/16] fm10k: add function to set vlan Chen Jing D(Mark)
2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 14/16] fm10k: add SRIOV-VF support Chen Jing D(Mark)
2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 15/16] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark)
2015-02-17 14:18 ` [dpdk-dev] [PATCH v6 16/16] maintainers: claim for fm10k review Chen Jing D(Mark)
2015-02-18 0:13 ` [dpdk-dev] [PATCH v6 00/16] lib/librte_pmd_fm10k : fm10k pmd driver Thomas Monjalon
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 02/17] eal: add fm10k device id Chen Jing D(Mark)
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 03/17] fm10k: register fm10k pmd PF driver Chen Jing D(Mark)
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 04/17] Change config files to add fm10k into compile Chen Jing D(Mark)
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 05/17] fm10k: add reta update/requery functions Chen Jing D(Mark)
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 06/17] fm10k: add rx_queue_setup/release function Chen Jing D(Mark)
2015-02-13 11:08 ` David Marchand
2015-02-17 13:01 ` Chen, Jing D
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 07/17] fm10k: add tx_queue_setup/release function Chen Jing D(Mark)
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 08/17] fm10k: add RX/TX single queue start/stop function Chen Jing D(Mark)
2015-02-13 11:31 ` David Marchand
2015-02-13 16:45 ` Jeff Shaw
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 09/17] fm10k: add dev start/stop functions Chen Jing D(Mark)
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 10/17] fm10k: add receive and tranmit function Chen Jing D(Mark)
2015-02-13 11:42 ` David Marchand
2015-02-17 13:07 ` Chen, Jing D
2015-02-13 11:53 ` David Marchand
2015-02-17 13:10 ` Chen, Jing D
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 11/17] fm10k: add PF RSS support Chen Jing D(Mark)
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 12/17] fm10k: Add scatter receive function Chen Jing D(Mark)
2015-02-13 11:55 ` David Marchand
2015-02-17 13:11 ` Chen, Jing D
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 13/17] fm10k: add function to set vlan Chen Jing D(Mark)
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 14/17] fm10k: Add SRIOV-VF support Chen Jing D(Mark)
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 15/17] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark)
2015-02-13 11:42 ` David Marchand
2015-02-17 13:12 ` Chen, Jing D
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 16/17] maintainers: claim for fm10k review Chen Jing D(Mark)
2015-02-13 8:19 ` [dpdk-dev] [PATCH v5 17/17] fm10k: Add ABI version of librte_pmd_fm10k Chen Jing D(Mark)
2015-02-13 8:37 ` [dpdk-dev] [PATCH v5 00/17] lib/librte_pmd_fm10k : fm10k pmd driver Zhang, Helin
2015-02-15 5:07 ` Qiu, Michael
2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 02/15] eal: add fm10k device id Chen Jing D(Mark)
2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 03/15] fm10k: register fm10k pmd PF driver Chen Jing D(Mark)
2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 04/15] Change config files to add fm10k into compile Chen Jing D(Mark)
2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 05/15] fm10k: add reta update/requery functions Chen Jing D(Mark)
2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 06/15] fm10k: add rx_queue_setup/release function Chen Jing D(Mark)
2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 07/15] fm10k: add tx_queue_setup/release function Chen Jing D(Mark)
2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 08/15] fm10k: add RX/TX single queue start/stop function Chen Jing D(Mark)
2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 09/15] fm10k: add dev start/stop functions Chen Jing D(Mark)
2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 10/15] fm10k: add receive and tranmit function Chen Jing D(Mark)
2015-02-11 17:28 ` Jeff Shaw
2015-02-12 4:04 ` Chen, Jing D
2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 11/15] fm10k: add PF RSS support Chen Jing D(Mark)
2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 12/15] fm10k: Add scatter receive function Chen Jing D(Mark)
2015-02-11 17:32 ` Jeff Shaw
2015-02-12 4:04 ` Chen, Jing D
2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 13/15] fm10k: add function to set vlan Chen Jing D(Mark)
2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 14/15] fm10k: Add SRIOV-VF support Chen Jing D(Mark)
2015-02-11 1:31 ` [dpdk-dev] [PATCH v4 15/15] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark)
2015-02-11 1:50 ` [dpdk-dev] [PATCH v4 00/15] lib/librte_pmd_fm10k : fm10k pmd driver Qiu, Michael
2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 02/15] eal: add fm10k device id Chen Jing D(Mark)
2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 03/15] fm10k: register fm10k pmd PF driver Chen Jing D(Mark)
2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 04/15] Change config files to add fm10k into compile Chen Jing D(Mark)
2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 05/15] fm10k: add reta update/requery functions Chen Jing D(Mark)
2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 06/15] fm10k: add rx_queue_setup/release function Chen Jing D(Mark)
2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 07/15] fm10k: add tx_queue_setup/release function Chen Jing D(Mark)
2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 08/15] fm10k: add RX/TX single queue start/stop function Chen Jing D(Mark)
2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 09/15] fm10k: add dev start/stop functions Chen Jing D(Mark)
2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 10/15] fm10k: add receive and tranmit function Chen Jing D(Mark)
2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 11/15] fm10k: add PF RSS support Chen Jing D(Mark)
2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 12/15] fm10k: Add scatter receive function Chen Jing D(Mark)
2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 13/15] fm10k: add function to set vlan Chen Jing D(Mark)
2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 14/15] fm10k: Add SRIOV-VF support Chen Jing D(Mark)
2015-02-10 7:02 ` [dpdk-dev] [PATCH v3 15/15] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark)
2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 02/15] fm10k: add fm10k device id Chen Jing D(Mark)
2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 03/15] fm10k: register fm10k pmd PF driver Chen Jing D(Mark)
2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 04/15] Change config files to add fm10k into compile Chen Jing D(Mark)
2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 05/15] fm10k: add reta update/requery functions Chen Jing D(Mark)
2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 06/15] fm10k: add rx_queue_setup/release function Chen Jing D(Mark)
2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 07/15] fm10k: add tx_queue_setup/release function Chen Jing D(Mark)
2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 08/15] fm10k: add RX/TX single queue start/stop function Chen Jing D(Mark)
2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 09/15] fm10k: add dev start/stop functions Chen Jing D(Mark)
2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 10/15] fm10k: add receive and tranmit function Chen Jing D(Mark)
2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 11/15] fm10k: add PF RSS support Chen Jing D(Mark)
2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 12/15] fm10k: Add scatter receive function Chen Jing D(Mark)
2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 13/15] fm10k: add function to set vlan Chen Jing D(Mark)
2015-02-04 10:40 ` [dpdk-dev] [PATCH v2 14/15] fm10k: Add SRIOV-VF support Chen Jing D(Mark)
2015-02-04 10:41 ` [dpdk-dev] [PATCH v2 15/15] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark)
2015-01-30 5:07 ` [dpdk-dev] [PATCH 02/18] Change config/ files to add macros for fm10k Chen Jing D(Mark)
2015-01-30 5:07 ` [dpdk-dev] [PATCH 03/18] fm10k: Add empty fm10k files Chen Jing D(Mark)
2015-01-31 14:02 ` Neil Horman
2015-02-02 5:34 ` Chen, Jing D
[not found] ` <20150202133848.GA21700@hmsreliant.think-freely.org>
2015-02-03 6:47 ` Chen, Jing D
2015-02-01 13:01 ` David Marchand
2015-01-30 5:07 ` [dpdk-dev] [PATCH 04/18] fm10k: add fm10k device id Chen Jing D(Mark)
2015-01-31 14:19 ` Neil Horman
2015-01-31 16:07 ` David Marchand
2015-01-31 16:32 ` Neil Horman
2015-01-31 16:55 ` David Marchand
2015-01-31 18:35 ` Neil Horman
2015-05-07 11:06 ` David Marchand
2015-05-07 13:36 ` Neil Horman
2015-05-07 13:39 ` David Marchand
2015-02-02 7:54 ` Chen, Jing D
2015-01-30 5:07 ` [dpdk-dev] [PATCH 05/18] fm10k: Add code to register fm10k pmd PF driver Chen Jing D(Mark)
2015-01-30 5:07 ` [dpdk-dev] [PATCH 06/18] fm10k: add reta update/requery functions Chen Jing D(Mark)
2015-01-30 5:07 ` [dpdk-dev] [PATCH 07/18] fm10k: add rx_queue_setup/release function Chen Jing D(Mark)
2015-01-30 5:07 ` [dpdk-dev] [PATCH 08/18] fm10k: add tx_queue_setup/release function Chen Jing D(Mark)
2015-01-30 5:07 ` [dpdk-dev] [PATCH 09/18] fm10k: add RX/TX single queue start/stop function Chen Jing D(Mark)
2015-01-30 5:07 ` [dpdk-dev] [PATCH 10/18] fm10k: add dev start/stop functions Chen Jing D(Mark)
2015-02-04 2:36 ` Qiu, Michael
2015-02-04 9:55 ` Chen, Jing D
2015-01-30 5:07 ` [dpdk-dev] [PATCH 11/18] fm10k: add receive and tranmit function Chen Jing D(Mark)
2015-01-30 5:07 ` [dpdk-dev] [PATCH 12/18] fm10k: add PF RSS support Chen Jing D(Mark)
2015-02-01 0:38 ` Neil Horman
2015-01-30 5:07 ` [dpdk-dev] [PATCH 13/18] fm10k: Add scatter receive function Chen Jing D(Mark)
2015-01-30 5:07 ` [dpdk-dev] [PATCH 14/18] fm10k: add function to set vlan Chen Jing D(Mark)
2015-01-30 5:07 ` [dpdk-dev] [PATCH 15/18] fm10k: Add SRIOV-VF support Chen Jing D(Mark)
2015-01-30 5:07 ` [dpdk-dev] [PATCH 16/18] fm10k: add PF and VF interrupt handling function Chen Jing D(Mark)
2015-02-01 0:42 ` Neil Horman
2015-02-02 7:59 ` Chen, Jing D
2015-01-30 5:07 ` [dpdk-dev] [PATCH 17/18] Change lib/Makefile to add fm10k driver into compile list Chen Jing D(Mark)
2015-01-30 5:07 ` [dpdk-dev] [PATCH 18/18] Change mk/rte.app.mk to add fm10k lib into link Chen Jing D(Mark)
2015-02-01 0:50 ` Neil Horman
2015-02-02 8:10 ` Chen, Jing D
2015-01-30 21:26 ` [dpdk-dev] [PATCH 00/18] lib/librte_pmd_fm10k : fm10k pmd driver Neil Horman
2015-01-30 21:46 ` Jeff Shaw
2015-01-30 22:19 ` Thomas Monjalon
2015-02-02 2:59 ` Chen, Jing D
2015-02-02 8:19 ` Thomas Monjalon