DPDK patches and discussions
* [dpdk-dev] [RFC 0/9] add new avf PMD
@ 2017-10-20  8:26 Jingjing Wu
  2017-10-20  8:26 ` [dpdk-dev] [RFC 1/9] net/avf/base: add base code for " Jingjing Wu
                   ` (10 more replies)
  0 siblings, 11 replies; 151+ messages in thread
From: Jingjing Wu @ 2017-10-20  8:26 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Adaptive Virtual Function (AVF) is a VF driver that supports all future
Intel devices without requiring a VM update. It provides basic
high-speed connectivity, and because the driver is adaptive, every new
drop of the VF driver can enable additional advanced features in the VM
whenever the underlying HW device supports them. Most importantly, it
does so in a device-agnostic way, without ever compromising the base
functionality. Every AVF interface needs to follow the AVF
specification; an AVF-compliant interface is supported starting with
the Intel® Ethernet Controller 710 Series.
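
To make the "adaptive" behaviour concrete, the small sketch below is
illustrative only: the capability bits and the vf_resources structure
are hypothetical stand-ins for the flags the VF actually negotiates
with the PF through virtchnl in this series. The pattern is that the VF
queries the PF for its resources once and then enables each advanced
feature only if the corresponding capability bit is advertised, so the
same driver keeps its base functionality on older hardware.

  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical capability bits; the real ones come from virtchnl. */
  #define VF_CAP_BASE_L2  0x0001  /* basic Rx/Tx, always required   */
  #define VF_CAP_RSS      0x0002  /* RSS configuration offload      */
  #define VF_CAP_VLAN     0x0004  /* VLAN offload                   */

  struct vf_resources {   /* stand-in for the PF's resource reply */
          uint32_t cap_flags;
  };

  static void configure_vf(const struct vf_resources *res)
  {
          /* Base functionality is unconditional for an AVF device. */
          printf("basic Rx/Tx: %s\n",
                 (res->cap_flags & VF_CAP_BASE_L2) ? "on" : "missing");

          /* Advanced features only when the PF/HW advertises them. */
          if (res->cap_flags & VF_CAP_RSS)
                  printf("RSS offload enabled\n");
          if (res->cap_flags & VF_CAP_VLAN)
                  printf("VLAN offload enabled\n");
  }

  int main(void)
  {
          struct vf_resources old_hw = { .cap_flags = VF_CAP_BASE_L2 };
          struct vf_resources new_hw = { .cap_flags = VF_CAP_BASE_L2 |
                                                      VF_CAP_RSS |
                                                      VF_CAP_VLAN };

          configure_vf(&old_hw);  /* same binary, base features only */
          configure_vf(&new_hw);  /* advanced features switched on   */
          return 0;
  }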

This patch set adds the AVF PMD, supporting the features below; a short
usage sketch follows the list.
 - Device initialization
 - Queue setup and device start
 - Basic Rx and Tx
 - MAC address offload
 - VLAN offload
 - RSS offload
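
None of these features require an AVF-specific API; they are all
exposed through the existing ethdev ops. The fragment below is a
minimal, illustrative sketch, not part of the series: it assumes a
single AVF VF bound to a userspace driver and probed as port 0, and
uses only generic ethdev calls to exercise the RSS, VLAN and
MAC-address pieces once this PMD is in place.

  #include <rte_eal.h>
  #include <rte_lcore.h>
  #include <rte_ethdev.h>
  #include <rte_mbuf.h>
  #include <rte_ether.h>

  int main(int argc, char **argv)
  {
          struct rte_mempool *mp;
          struct ether_addr mac;
          struct rte_eth_conf conf = {
                  .rxmode = {
                          .mq_mode = ETH_MQ_RX_RSS, /* RSS offload  */
                          .hw_vlan_strip = 1,       /* VLAN offload */
                  },
                  .rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP,
          };

          if (rte_eal_init(argc, argv) < 0)
                  return -1;

          mp = rte_pktmbuf_pool_create("mbufs", 8192, 256, 0,
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       rte_socket_id());
          if (mp == NULL)
                  return -1;

          /* port 0 is assumed to be the VF probed by the AVF PMD */
          if (rte_eth_dev_configure(0, 1, 1, &conf) < 0 ||
              rte_eth_rx_queue_setup(0, 0, 512, rte_socket_id(),
                                     NULL, mp) < 0 ||
              rte_eth_tx_queue_setup(0, 0, 512, rte_socket_id(),
                                     NULL) < 0 ||
              rte_eth_dev_start(0) < 0)
                  return -1;

          rte_eth_macaddr_get(0, &mac);  /* MAC address handling */

          /* rte_eth_rx_burst()/rte_eth_tx_burst() loop goes here */
          return 0;
  }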

Still to be done in later versions:
 - Vectorized Rx and Tx functions
 - Rx interrupt support
 - Statistics query
 - Performance tuning

Jingjing Wu (9):
  net/avf/base: add base code for avf PMD
  net/avf: initialization of avf PMD
  net/avf: enable queue and device
  net/avf: enable basic Rx Tx func
  net/avf: enable link status update
  net/avf: enable ops for MAC VLAN offload
  net/avf: enable ops for rss setting
  net/avf: enable ops to check queue info and status
  net/i40e: support AVF basic interface

 config/common_base                      |    8 +
 doc/guides/nics/intel_vf.rst            |   16 +-
 drivers/net/Makefile                    |    2 +
 drivers/net/avf/Makefile                |   93 +
 drivers/net/avf/avf.h                   |  244 +++
 drivers/net/avf/avf_ethdev.c            | 1287 ++++++++++++++
 drivers/net/avf/avf_log.h               |   66 +
 drivers/net/avf/avf_rxtx.c              | 1532 +++++++++++++++++
 drivers/net/avf/avf_rxtx.h              |  257 +++
 drivers/net/avf/avf_vchnl.c             |  817 +++++++++
 drivers/net/avf/base/avf_adminq.c       | 1002 +++++++++++
 drivers/net/avf/base/avf_adminq.h       |  169 ++
 drivers/net/avf/base/avf_adminq_cmd.h   | 2807 +++++++++++++++++++++++++++++++
 drivers/net/avf/base/avf_alloc.h        |   65 +
 drivers/net/avf/base/avf_common.c       | 1843 ++++++++++++++++++++
 drivers/net/avf/base/avf_devids.h       |   43 +
 drivers/net/avf/base/avf_hmc.h          |  245 +++
 drivers/net/avf/base/avf_lan_hmc.h      |  200 +++
 drivers/net/avf/base/avf_osdep.h        |  192 +++
 drivers/net/avf/base/avf_prototype.h    |  206 +++
 drivers/net/avf/base/avf_register.h     |  346 ++++
 drivers/net/avf/base/avf_status.h       |  107 ++
 drivers/net/avf/base/avf_type.h         | 1990 ++++++++++++++++++++++
 drivers/net/avf/base/virtchnl.h         |  772 +++++++++
 drivers/net/avf/rte_pmd_avf_version.map |    4 +
 drivers/net/i40e/i40e_ethdev.c          |   64 +-
 drivers/net/i40e/i40e_ethdev.h          |    4 +
 drivers/net/i40e/i40e_pf.c              |  136 +-
 drivers/net/i40e/i40e_pf.h              |    6 +
 mk/rte.app.mk                           |    1 +
 30 files changed, 14499 insertions(+), 25 deletions(-)
 create mode 100644 drivers/net/avf/Makefile
 create mode 100644 drivers/net/avf/avf.h
 create mode 100644 drivers/net/avf/avf_ethdev.c
 create mode 100644 drivers/net/avf/avf_log.h
 create mode 100644 drivers/net/avf/avf_rxtx.c
 create mode 100644 drivers/net/avf/avf_rxtx.h
 create mode 100644 drivers/net/avf/avf_vchnl.c
 create mode 100644 drivers/net/avf/base/avf_adminq.c
 create mode 100644 drivers/net/avf/base/avf_adminq.h
 create mode 100644 drivers/net/avf/base/avf_adminq_cmd.h
 create mode 100644 drivers/net/avf/base/avf_alloc.h
 create mode 100644 drivers/net/avf/base/avf_common.c
 create mode 100644 drivers/net/avf/base/avf_devids.h
 create mode 100644 drivers/net/avf/base/avf_hmc.h
 create mode 100644 drivers/net/avf/base/avf_lan_hmc.h
 create mode 100644 drivers/net/avf/base/avf_osdep.h
 create mode 100644 drivers/net/avf/base/avf_prototype.h
 create mode 100644 drivers/net/avf/base/avf_register.h
 create mode 100644 drivers/net/avf/base/avf_status.h
 create mode 100644 drivers/net/avf/base/avf_type.h
 create mode 100644 drivers/net/avf/base/virtchnl.h
 create mode 100644 drivers/net/avf/rte_pmd_avf_version.map

-- 
2.4.11


* [dpdk-dev] [RFC 1/9] net/avf/base: add base code for avf PMD
  2017-10-20  8:26 [dpdk-dev] [RFC 0/9] add new avf PMD Jingjing Wu
@ 2017-10-20  8:26 ` Jingjing Wu
  2017-10-20  8:26 ` [dpdk-dev] [RFC 2/9] net/avf: initialization of " Jingjing Wu
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 151+ messages in thread
From: Jingjing Wu @ 2017-10-20  8:26 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/avf/avf_log.h             |   52 +
 drivers/net/avf/base/avf_adminq.c     | 1002 ++++++++++++
 drivers/net/avf/base/avf_adminq.h     |  169 ++
 drivers/net/avf/base/avf_adminq_cmd.h | 2807 +++++++++++++++++++++++++++++++++
 drivers/net/avf/base/avf_alloc.h      |   65 +
 drivers/net/avf/base/avf_common.c     | 1843 ++++++++++++++++++++++
 drivers/net/avf/base/avf_devids.h     |   43 +
 drivers/net/avf/base/avf_hmc.h        |  245 +++
 drivers/net/avf/base/avf_lan_hmc.h    |  200 +++
 drivers/net/avf/base/avf_osdep.h      |  192 +++
 drivers/net/avf/base/avf_prototype.h  |  206 +++
 drivers/net/avf/base/avf_register.h   |  346 ++++
 drivers/net/avf/base/avf_status.h     |  107 ++
 drivers/net/avf/base/avf_type.h       | 1990 +++++++++++++++++++++++
 drivers/net/avf/base/virtchnl.h       |  772 +++++++++
 15 files changed, 10039 insertions(+)
 create mode 100644 drivers/net/avf/avf_log.h
 create mode 100644 drivers/net/avf/base/avf_adminq.c
 create mode 100644 drivers/net/avf/base/avf_adminq.h
 create mode 100644 drivers/net/avf/base/avf_adminq_cmd.h
 create mode 100644 drivers/net/avf/base/avf_alloc.h
 create mode 100644 drivers/net/avf/base/avf_common.c
 create mode 100644 drivers/net/avf/base/avf_devids.h
 create mode 100644 drivers/net/avf/base/avf_hmc.h
 create mode 100644 drivers/net/avf/base/avf_lan_hmc.h
 create mode 100644 drivers/net/avf/base/avf_osdep.h
 create mode 100644 drivers/net/avf/base/avf_prototype.h
 create mode 100644 drivers/net/avf/base/avf_register.h
 create mode 100644 drivers/net/avf/base/avf_status.h
 create mode 100644 drivers/net/avf/base/avf_type.h
 create mode 100644 drivers/net/avf/base/virtchnl.h

diff --git a/drivers/net/avf/avf_log.h b/drivers/net/avf/avf_log.h
new file mode 100644
index 0000000..431f0f3
--- /dev/null
+++ b/drivers/net/avf/avf_log.h
@@ -0,0 +1,52 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AVF_LOGS_H_
+#define _AVF_LOGS_H_
+
+extern int avf_logtype_init;
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, avf_logtype_init, "%s(): " fmt "\n", \
+		__func__, ##args)
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+extern int avf_logtype_driver;
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, avf_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
+
+#endif /* _AVF_LOGS_H_ */
diff --git a/drivers/net/avf/base/avf_adminq.c b/drivers/net/avf/base/avf_adminq.c
new file mode 100644
index 0000000..1e3aedc
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq.c
@@ -0,0 +1,1002 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "avf_status.h"
+#include "avf_type.h"
+#include "avf_register.h"
+#include "avf_adminq.h"
+#include "avf_prototype.h"
+
+/**
+ *  avf_adminq_init_regs - Initialize AdminQ registers
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the alloc_asq and alloc_arq functions have already been called
+ **/
+STATIC void avf_adminq_init_regs(struct avf_hw *hw)
+{
+	/* set head and tail registers in our local struct */
+	if (avf_is_vf(hw)) {
+		hw->aq.asq.tail = AVF_ATQT1;
+		hw->aq.asq.head = AVF_ATQH1;
+		hw->aq.asq.len  = AVF_ATQLEN1;
+		hw->aq.asq.bal  = AVF_ATQBAL1;
+		hw->aq.asq.bah  = AVF_ATQBAH1;
+		hw->aq.arq.tail = AVF_ARQT1;
+		hw->aq.arq.head = AVF_ARQH1;
+		hw->aq.arq.len  = AVF_ARQLEN1;
+		hw->aq.arq.bal  = AVF_ARQBAL1;
+		hw->aq.arq.bah  = AVF_ARQBAH1;
+	}
+}
+
+/**
+ *  avf_alloc_adminq_asq_ring - Allocate Admin Queue send rings
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_alloc_adminq_asq_ring(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	ret_code = avf_allocate_dma_mem(hw, &hw->aq.asq.desc_buf,
+					 avf_mem_atq_ring,
+					 (hw->aq.num_asq_entries *
+					 sizeof(struct avf_aq_desc)),
+					 AVF_ADMINQ_DESC_ALIGNMENT);
+	if (ret_code)
+		return ret_code;
+
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.asq.cmd_buf,
+					  (hw->aq.num_asq_entries *
+					  sizeof(struct avf_asq_cmd_details)));
+	if (ret_code) {
+		avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+		return ret_code;
+	}
+
+	return ret_code;
+}
+
+/**
+ *  avf_alloc_adminq_arq_ring - Allocate Admin Queue receive rings
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_alloc_adminq_arq_ring(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	ret_code = avf_allocate_dma_mem(hw, &hw->aq.arq.desc_buf,
+					 avf_mem_arq_ring,
+					 (hw->aq.num_arq_entries *
+					 sizeof(struct avf_aq_desc)),
+					 AVF_ADMINQ_DESC_ALIGNMENT);
+
+	return ret_code;
+}
+
+/**
+ *  avf_free_adminq_asq - Free Admin Queue send rings
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the posted send buffers have already been cleaned
+ *  and de-allocated
+ **/
+void avf_free_adminq_asq(struct avf_hw *hw)
+{
+	avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+}
+
+/**
+ *  avf_free_adminq_arq - Free Admin Queue receive rings
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the posted receive buffers have already been cleaned
+ *  and de-allocated
+ **/
+void avf_free_adminq_arq(struct avf_hw *hw)
+{
+	avf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+}
+
+/**
+ *  avf_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC enum avf_status_code avf_alloc_arq_bufs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+	struct avf_aq_desc *desc;
+	struct avf_dma_mem *bi;
+	int i;
+
+	/* We'll be allocating the buffer info memory first, then we can
+	 * allocate the mapped buffers for the event processing
+	 */
+
+	/* buffer_info structures do not need alignment */
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.arq.dma_head,
+		(hw->aq.num_arq_entries * sizeof(struct avf_dma_mem)));
+	if (ret_code)
+		goto alloc_arq_bufs;
+	hw->aq.arq.r.arq_bi = (struct avf_dma_mem *)hw->aq.arq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_arq_entries; i++) {
+		bi = &hw->aq.arq.r.arq_bi[i];
+		ret_code = avf_allocate_dma_mem(hw, bi,
+						 avf_mem_arq_buf,
+						 hw->aq.arq_buf_size,
+						 AVF_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_arq_bufs;
+
+		/* now configure the descriptors for use */
+		desc = AVF_ADMINQ_DESC(hw->aq.arq, i);
+
+		desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_BUF);
+		if (hw->aq.arq_buf_size > AVF_AQ_LARGE_BUF)
+			desc->flags |= CPU_TO_LE16(AVF_AQ_FLAG_LB);
+		desc->opcode = 0;
+		/* This is in accordance with Admin queue design, there is no
+		 * register for buffer size configuration
+		 */
+		desc->datalen = CPU_TO_LE16((u16)bi->size);
+		desc->retval = 0;
+		desc->cookie_high = 0;
+		desc->cookie_low = 0;
+		desc->params.external.addr_high =
+			CPU_TO_LE32(AVF_HI_DWORD(bi->pa));
+		desc->params.external.addr_low =
+			CPU_TO_LE32(AVF_LO_DWORD(bi->pa));
+		desc->params.external.param0 = 0;
+		desc->params.external.param1 = 0;
+	}
+
+alloc_arq_bufs:
+	return ret_code;
+
+unwind_alloc_arq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		avf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+	avf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ *  avf_alloc_asq_bufs - Allocate empty buffer structs for the send queue
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC enum avf_status_code avf_alloc_asq_bufs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+	struct avf_dma_mem *bi;
+	int i;
+
+	/* No mapped memory needed yet, just the buffer info structures */
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.asq.dma_head,
+		(hw->aq.num_asq_entries * sizeof(struct avf_dma_mem)));
+	if (ret_code)
+		goto alloc_asq_bufs;
+	hw->aq.asq.r.asq_bi = (struct avf_dma_mem *)hw->aq.asq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_asq_entries; i++) {
+		bi = &hw->aq.asq.r.asq_bi[i];
+		ret_code = avf_allocate_dma_mem(hw, bi,
+						 avf_mem_asq_buf,
+						 hw->aq.asq_buf_size,
+						 AVF_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_asq_bufs;
+	}
+alloc_asq_bufs:
+	return ret_code;
+
+unwind_alloc_asq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		avf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+	avf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ *  avf_free_arq_bufs - Free receive queue buffer info elements
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC void avf_free_arq_bufs(struct avf_hw *hw)
+{
+	int i;
+
+	/* free descriptors */
+	for (i = 0; i < hw->aq.num_arq_entries; i++)
+		avf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+
+	/* free the descriptor memory */
+	avf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+
+	/* free the dma header */
+	avf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+}
+
+/**
+ *  avf_free_asq_bufs - Free send queue buffer info elements
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC void avf_free_asq_bufs(struct avf_hw *hw)
+{
+	int i;
+
+	/* only unmap if the address is non-NULL */
+	for (i = 0; i < hw->aq.num_asq_entries; i++)
+		if (hw->aq.asq.r.asq_bi[i].pa)
+			avf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+
+	/* free the buffer info list */
+	avf_free_virt_mem(hw, &hw->aq.asq.cmd_buf);
+
+	/* free the descriptor memory */
+	avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+
+	/* free the dma header */
+	avf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+}
+
+/**
+ *  avf_config_asq_regs - configure ASQ registers
+ *  @hw: pointer to the hardware structure
+ *
+ *  Configure base address and length registers for the transmit queue
+ **/
+STATIC enum avf_status_code avf_config_asq_regs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+
+	/* set starting point */
+#ifdef INTEGRATED_VF
+	if (avf_is_vf(hw))
+		wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
+					  AVF_ATQLEN1_ATQENABLE_MASK));
+#else
+	wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
+				  AVF_ATQLEN1_ATQENABLE_MASK));
+#endif /* INTEGRATED_VF */
+	wr32(hw, hw->aq.asq.bal, AVF_LO_DWORD(hw->aq.asq.desc_buf.pa));
+	wr32(hw, hw->aq.asq.bah, AVF_HI_DWORD(hw->aq.asq.desc_buf.pa));
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.asq.bal);
+	if (reg != AVF_LO_DWORD(hw->aq.asq.desc_buf.pa))
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ *  avf_config_arq_regs - ARQ register configuration
+ *  @hw: pointer to the hardware structure
+ *
+ * Configure base address and length registers for the receive (event queue)
+ **/
+STATIC enum avf_status_code avf_config_arq_regs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+
+	/* set starting point */
+#ifdef INTEGRATED_VF
+	if (avf_is_vf(hw))
+		wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
+					  AVF_ARQLEN1_ARQENABLE_MASK));
+#else
+	wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
+				  AVF_ARQLEN1_ARQENABLE_MASK));
+#endif /* INTEGRATED_VF */
+	wr32(hw, hw->aq.arq.bal, AVF_LO_DWORD(hw->aq.arq.desc_buf.pa));
+	wr32(hw, hw->aq.arq.bah, AVF_HI_DWORD(hw->aq.arq.desc_buf.pa));
+
+	/* Update tail in the HW to post pre-allocated buffers */
+	wr32(hw, hw->aq.arq.tail, hw->aq.num_arq_entries - 1);
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.arq.bal);
+	if (reg != AVF_LO_DWORD(hw->aq.arq.desc_buf.pa))
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ *  avf_init_asq - main initialization routine for ASQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  This is the main initialization routine for the Admin Send Queue
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.asq_buf_size
+ *
+ *  Do *NOT* hold the lock when calling this as the memory allocation routines
+ *  called are not going to be atomic context safe
+ **/
+enum avf_status_code avf_init_asq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (hw->aq.asq.count > 0) {
+		/* queue already initialized */
+		ret_code = AVF_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.asq.next_to_use = 0;
+	hw->aq.asq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = avf_alloc_adminq_asq_ring(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = avf_alloc_asq_bufs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = avf_config_asq_regs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* success! */
+	hw->aq.asq.count = hw->aq.num_asq_entries;
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	avf_free_adminq_asq(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_init_arq - initialize ARQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main initialization routine for the Admin Receive (Event) Queue.
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_arq_entries
+ *     - hw->aq.arq_buf_size
+ *
+ *  Do *NOT* hold the lock when calling this as the memory allocation routines
+ *  called are not going to be atomic context safe
+ **/
+enum avf_status_code avf_init_arq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (hw->aq.arq.count > 0) {
+		/* queue already initialized */
+		ret_code = AVF_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.arq.next_to_use = 0;
+	hw->aq.arq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = avf_alloc_adminq_arq_ring(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = avf_alloc_arq_bufs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = avf_config_arq_regs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* success! */
+	hw->aq.arq.count = hw->aq.num_arq_entries;
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	avf_free_adminq_arq(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_asq - shutdown the ASQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main shutdown routine for the Admin Send Queue
+ **/
+enum avf_status_code avf_shutdown_asq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	avf_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	if (hw->aq.asq.count == 0) {
+		ret_code = AVF_ERR_NOT_READY;
+		goto shutdown_asq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+	wr32(hw, hw->aq.asq.len, 0);
+	wr32(hw, hw->aq.asq.bal, 0);
+	wr32(hw, hw->aq.asq.bah, 0);
+
+	hw->aq.asq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	avf_free_asq_bufs(hw);
+
+shutdown_asq_out:
+	avf_release_spinlock(&hw->aq.asq_spinlock);
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_arq - shutdown ARQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main shutdown routine for the Admin Receive Queue
+ **/
+enum avf_status_code avf_shutdown_arq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	avf_acquire_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->aq.arq.count == 0) {
+		ret_code = AVF_ERR_NOT_READY;
+		goto shutdown_arq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+	wr32(hw, hw->aq.arq.len, 0);
+	wr32(hw, hw->aq.arq.bal, 0);
+	wr32(hw, hw->aq.arq.bah, 0);
+
+	hw->aq.arq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	avf_free_arq_bufs(hw);
+
+shutdown_arq_out:
+	avf_release_spinlock(&hw->aq.arq_spinlock);
+	return ret_code;
+}
+
+/**
+ *  avf_init_adminq - main initialization routine for Admin Queue
+ *  @hw: pointer to the hardware structure
+ *
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.num_arq_entries
+ *     - hw->aq.arq_buf_size
+ *     - hw->aq.asq_buf_size
+ **/
+enum avf_status_code avf_init_adminq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+	avf_init_spinlock(&hw->aq.asq_spinlock);
+	avf_init_spinlock(&hw->aq.arq_spinlock);
+
+	/* Set up register offsets */
+	avf_adminq_init_regs(hw);
+
+	/* setup ASQ command write back timeout */
+	hw->aq.asq_cmd_timeout = AVF_ASQ_CMD_TIMEOUT;
+
+	/* allocate the ASQ */
+	ret_code = avf_init_asq(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_destroy_spinlocks;
+
+	/* allocate the ARQ */
+	ret_code = avf_init_arq(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_asq;
+
+	ret_code = AVF_SUCCESS;
+
+	/* success! */
+	goto init_adminq_exit;
+
+init_adminq_free_asq:
+	avf_shutdown_asq(hw);
+init_adminq_destroy_spinlocks:
+	avf_destroy_spinlock(&hw->aq.asq_spinlock);
+	avf_destroy_spinlock(&hw->aq.arq_spinlock);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_adminq - shutdown routine for the Admin Queue
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_shutdown_adminq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (avf_check_asq_alive(hw))
+		avf_aq_queue_shutdown(hw, true);
+
+	avf_shutdown_asq(hw);
+	avf_shutdown_arq(hw);
+	avf_destroy_spinlock(&hw->aq.asq_spinlock);
+	avf_destroy_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->nvm_buff.va)
+		avf_free_virt_mem(hw, &hw->nvm_buff);
+
+	return ret_code;
+}
+
+/**
+ *  avf_clean_asq - cleans Admin send queue
+ *  @hw: pointer to the hardware structure
+ *
+ *  returns the number of free desc
+ **/
+u16 avf_clean_asq(struct avf_hw *hw)
+{
+	struct avf_adminq_ring *asq = &(hw->aq.asq);
+	struct avf_asq_cmd_details *details;
+	u16 ntc = asq->next_to_clean;
+	struct avf_aq_desc desc_cb;
+	struct avf_aq_desc *desc;
+
+	desc = AVF_ADMINQ_DESC(*asq, ntc);
+	details = AVF_ADMINQ_DETAILS(*asq, ntc);
+	while (rd32(hw, hw->aq.asq.head) != ntc) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "ntc %d head %d.\n", ntc, rd32(hw, hw->aq.asq.head));
+
+		if (details->callback) {
+			AVF_ADMINQ_CALLBACK cb_func =
+					(AVF_ADMINQ_CALLBACK)details->callback;
+			avf_memcpy(&desc_cb, desc, sizeof(struct avf_aq_desc),
+				    AVF_DMA_TO_DMA);
+			cb_func(hw, &desc_cb);
+		}
+		avf_memset(desc, 0, sizeof(*desc), AVF_DMA_MEM);
+		avf_memset(details, 0, sizeof(*details), AVF_NONDMA_MEM);
+		ntc++;
+		if (ntc == asq->count)
+			ntc = 0;
+		desc = AVF_ADMINQ_DESC(*asq, ntc);
+		details = AVF_ADMINQ_DETAILS(*asq, ntc);
+	}
+
+	asq->next_to_clean = ntc;
+
+	return AVF_DESC_UNUSED(asq);
+}
+
+/**
+ *  avf_asq_done - check if FW has processed the Admin Send Queue
+ *  @hw: pointer to the hw struct
+ *
+ *  Returns true if the firmware has processed all descriptors on the
+ *  admin send queue. Returns false if there are still requests pending.
+ **/
+bool avf_asq_done(struct avf_hw *hw)
+{
+	/* AQ designers suggest use of head for better
+	 * timing reliability than DD bit
+	 */
+	return rd32(hw, hw->aq.asq.head) == hw->aq.asq.next_to_use;
+
+}
+
+/**
+ *  avf_asq_send_command - send command to Admin Queue
+ *  @hw: pointer to the hw struct
+ *  @desc: prefilled descriptor describing the command (non DMA mem)
+ *  @buff: buffer to use for indirect commands
+ *  @buff_size: size of buffer for indirect commands
+ *  @cmd_details: pointer to command details structure
+ *
+ *  This is the main send command driver routine for the Admin Queue send
+ *  queue.  It runs the queue, cleans the queue, etc
+ **/
+enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
+				struct avf_aq_desc *desc,
+				void *buff, /* can be NULL */
+				u16  buff_size,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_dma_mem *dma_buff = NULL;
+	struct avf_asq_cmd_details *details;
+	struct avf_aq_desc *desc_on_ring;
+	bool cmd_completed = false;
+	u16  retval = 0;
+	u32  val = 0;
+
+	avf_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	hw->aq.asq_last_status = AVF_AQ_RC_OK;
+
+	if (hw->aq.asq.count == 0) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Admin queue not initialized.\n");
+		status = AVF_ERR_QUEUE_EMPTY;
+		goto asq_send_command_error;
+	}
+
+	val = rd32(hw, hw->aq.asq.head);
+	if (val >= hw->aq.num_asq_entries) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: head overrun at %d\n", val);
+		status = AVF_ERR_QUEUE_EMPTY;
+		goto asq_send_command_error;
+	}
+
+	details = AVF_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use);
+	if (cmd_details) {
+		avf_memcpy(details,
+			    cmd_details,
+			    sizeof(struct avf_asq_cmd_details),
+			    AVF_NONDMA_TO_NONDMA);
+
+		/* If the cmd_details are defined copy the cookie.  The
+		 * CPU_TO_LE32 is not needed here because the data is ignored
+		 * by the FW, only used by the driver
+		 */
+		if (details->cookie) {
+			desc->cookie_high =
+				CPU_TO_LE32(AVF_HI_DWORD(details->cookie));
+			desc->cookie_low =
+				CPU_TO_LE32(AVF_LO_DWORD(details->cookie));
+		}
+	} else {
+		avf_memset(details, 0,
+			    sizeof(struct avf_asq_cmd_details),
+			    AVF_NONDMA_MEM);
+	}
+
+	/* clear requested flags and then set additional flags if defined */
+	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
+	desc->flags |= CPU_TO_LE16(details->flags_ena);
+
+	if (buff_size > hw->aq.asq_buf_size) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Invalid buffer size: %d.\n",
+			   buff_size);
+		status = AVF_ERR_INVALID_SIZE;
+		goto asq_send_command_error;
+	}
+
+	if (details->postpone && !details->async) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Async flag not set along with postpone flag");
+		status = AVF_ERR_PARAM;
+		goto asq_send_command_error;
+	}
+
+	/* call clean and check queue available function to reclaim the
+	 * descriptors that were processed by FW, the function returns the
+	 * number of desc available
+	 */
+	/* the clean function called here could be called in a separate thread
+	 * in case of asynchronous completions
+	 */
+	if (avf_clean_asq(hw) == 0) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Error queue is full.\n");
+		status = AVF_ERR_ADMIN_QUEUE_FULL;
+		goto asq_send_command_error;
+	}
+
+	/* initialize the temp desc pointer with the right desc */
+	desc_on_ring = AVF_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use);
+
+	/* if the desc is available copy the temp desc to the right place */
+	avf_memcpy(desc_on_ring, desc, sizeof(struct avf_aq_desc),
+		    AVF_NONDMA_TO_DMA);
+
+	/* if buff is not NULL assume indirect command */
+	if (buff != NULL) {
+		dma_buff = &(hw->aq.asq.r.asq_bi[hw->aq.asq.next_to_use]);
+		/* copy the user buff into the respective DMA buff */
+		avf_memcpy(dma_buff->va, buff, buff_size,
+			    AVF_NONDMA_TO_DMA);
+		desc_on_ring->datalen = CPU_TO_LE16(buff_size);
+
+		/* Update the address values in the desc with the pa value
+		 * for respective buffer
+		 */
+		desc_on_ring->params.external.addr_high =
+				CPU_TO_LE32(AVF_HI_DWORD(dma_buff->pa));
+		desc_on_ring->params.external.addr_low =
+				CPU_TO_LE32(AVF_LO_DWORD(dma_buff->pa));
+	}
+
+	/* bump the tail */
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc_on_ring,
+		      buff, buff_size);
+	(hw->aq.asq.next_to_use)++;
+	if (hw->aq.asq.next_to_use == hw->aq.asq.count)
+		hw->aq.asq.next_to_use = 0;
+	if (!details->postpone)
+		wr32(hw, hw->aq.asq.tail, hw->aq.asq.next_to_use);
+
+	/* if cmd_details are not defined or async flag is not set,
+	 * we need to wait for desc write back
+	 */
+	if (!details->async && !details->postpone) {
+		u32 total_delay = 0;
+
+		do {
+			/* AQ designers suggest use of head for better
+			 * timing reliability than DD bit
+			 */
+			if (avf_asq_done(hw))
+				break;
+			avf_usec_delay(50);
+			total_delay += 50;
+		} while (total_delay < hw->aq.asq_cmd_timeout);
+	}
+
+	/* if ready, copy the desc back to temp */
+	if (avf_asq_done(hw)) {
+		avf_memcpy(desc, desc_on_ring, sizeof(struct avf_aq_desc),
+			    AVF_DMA_TO_NONDMA);
+		if (buff != NULL)
+			avf_memcpy(buff, dma_buff->va, buff_size,
+				    AVF_DMA_TO_NONDMA);
+		retval = LE16_TO_CPU(desc->retval);
+		if (retval != 0) {
+			avf_debug(hw,
+				   AVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: Command completed with error 0x%X.\n",
+				   retval);
+
+			/* strip off FW internal code */
+			retval &= 0xff;
+		}
+		cmd_completed = true;
+		if ((enum avf_admin_queue_err)retval == AVF_AQ_RC_OK)
+			status = AVF_SUCCESS;
+		else
+			status = AVF_ERR_ADMIN_QUEUE_ERROR;
+		hw->aq.asq_last_status = (enum avf_admin_queue_err)retval;
+	}
+
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+		   "AQTX: desc and buffer writeback:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc, buff, buff_size);
+
+	/* save writeback aq if requested */
+	if (details->wb_desc)
+		avf_memcpy(details->wb_desc, desc_on_ring,
+			    sizeof(struct avf_aq_desc), AVF_DMA_TO_NONDMA);
+
+	/* update the error if time out occurred */
+	if ((!cmd_completed) &&
+	    (!details->async && !details->postpone)) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Writeback timeout.\n");
+		status = AVF_ERR_ADMIN_QUEUE_TIMEOUT;
+	}
+
+asq_send_command_error:
+	avf_release_spinlock(&hw->aq.asq_spinlock);
+	return status;
+}
+
+/**
+ *  avf_fill_default_direct_cmd_desc - AQ descriptor helper function
+ *  @desc:     pointer to the temp descriptor (non DMA mem)
+ *  @opcode:   the opcode can be used to decide which flags to turn off or on
+ *
+ *  Fill the desc with default values
+ **/
+void avf_fill_default_direct_cmd_desc(struct avf_aq_desc *desc,
+				       u16 opcode)
+{
+	/* zero out the desc */
+	avf_memset((void *)desc, 0, sizeof(struct avf_aq_desc),
+		    AVF_NONDMA_MEM);
+	desc->opcode = CPU_TO_LE16(opcode);
+	desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_SI);
+}
+
+/**
+ *  avf_clean_arq_element
+ *  @hw: pointer to the hw struct
+ *  @e: event info from the receive descriptor, includes any buffers
+ *  @pending: number of events that could be left to process
+ *
+ *  This function cleans one Admin Receive Queue element and returns
+ *  the contents through e.  It can also return how many events are
+ *  left to process through 'pending'
+ **/
+enum avf_status_code avf_clean_arq_element(struct avf_hw *hw,
+					     struct avf_arq_event_info *e,
+					     u16 *pending)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u16 ntc = hw->aq.arq.next_to_clean;
+	struct avf_aq_desc *desc;
+	struct avf_dma_mem *bi;
+	u16 desc_idx;
+	u16 datalen;
+	u16 flags;
+	u16 ntu;
+
+	/* pre-clean the event info */
+	avf_memset(&e->desc, 0, sizeof(e->desc), AVF_NONDMA_MEM);
+
+	/* take the lock before we start messing with the ring */
+	avf_acquire_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->aq.arq.count == 0) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQRX: Admin queue not initialized.\n");
+		ret_code = AVF_ERR_QUEUE_EMPTY;
+		goto clean_arq_element_err;
+	}
+
+	/* set next_to_use to head */
+#ifdef INTEGRATED_VF
+	if (avf_is_vf(hw))
+		ntu = (rd32(hw, hw->aq.arq.head) & AVF_ARQH1_ARQH_MASK);
+#else
+	ntu = (rd32(hw, hw->aq.arq.head) & AVF_ARQH1_ARQH_MASK);
+#endif /* INTEGRATED_VF */
+	if (ntu == ntc) {
+		/* nothing to do - shouldn't need to update ring's values */
+		ret_code = AVF_ERR_ADMIN_QUEUE_NO_WORK;
+		goto clean_arq_element_out;
+	}
+
+	/* now clean the next descriptor */
+	desc = AVF_ADMINQ_DESC(hw->aq.arq, ntc);
+	desc_idx = ntc;
+
+	hw->aq.arq_last_status =
+		(enum avf_admin_queue_err)LE16_TO_CPU(desc->retval);
+	flags = LE16_TO_CPU(desc->flags);
+	if (flags & AVF_AQ_FLAG_ERR) {
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQRX: Event received with error 0x%X.\n",
+			   hw->aq.arq_last_status);
+	}
+
+	avf_memcpy(&e->desc, desc, sizeof(struct avf_aq_desc),
+		    AVF_DMA_TO_NONDMA);
+	datalen = LE16_TO_CPU(desc->datalen);
+	e->msg_len = min(datalen, e->buf_len);
+	if (e->msg_buf != NULL && (e->msg_len != 0))
+		avf_memcpy(e->msg_buf,
+			    hw->aq.arq.r.arq_bi[desc_idx].va,
+			    e->msg_len, AVF_DMA_TO_NONDMA);
+
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, "AQRX: desc and buffer:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf,
+		      hw->aq.arq_buf_size);
+
+	/* Restore the original datalen and buffer address in the desc,
+	 * FW updates datalen to indicate the event message
+	 * size
+	 */
+	bi = &hw->aq.arq.r.arq_bi[ntc];
+	avf_memset((void *)desc, 0, sizeof(struct avf_aq_desc), AVF_DMA_MEM);
+
+	desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_BUF);
+	if (hw->aq.arq_buf_size > AVF_AQ_LARGE_BUF)
+		desc->flags |= CPU_TO_LE16(AVF_AQ_FLAG_LB);
+	desc->datalen = CPU_TO_LE16((u16)bi->size);
+	desc->params.external.addr_high = CPU_TO_LE32(AVF_HI_DWORD(bi->pa));
+	desc->params.external.addr_low = CPU_TO_LE32(AVF_LO_DWORD(bi->pa));
+
+	/* set tail = the last cleaned desc index. */
+	wr32(hw, hw->aq.arq.tail, ntc);
+	/* ntc is updated to tail + 1 */
+	ntc++;
+	if (ntc == hw->aq.num_arq_entries)
+		ntc = 0;
+	hw->aq.arq.next_to_clean = ntc;
+	hw->aq.arq.next_to_use = ntu;
+
+clean_arq_element_out:
+	/* Set pending if needed, unlock and return */
+	if (pending != NULL)
+		*pending = (ntc > ntu ? hw->aq.arq.count : 0) + (ntu - ntc);
+clean_arq_element_err:
+	avf_release_spinlock(&hw->aq.arq_spinlock);
+
+	return ret_code;
+}
diff --git a/drivers/net/avf/base/avf_adminq.h b/drivers/net/avf/base/avf_adminq.h
new file mode 100644
index 0000000..be32fb2
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq.h
@@ -0,0 +1,169 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ADMINQ_H_
+#define _AVF_ADMINQ_H_
+
+#include "avf_osdep.h"
+#include "avf_status.h"
+#include "avf_adminq_cmd.h"
+
+#define AVF_ADMINQ_DESC(R, i)   \
+	(&(((struct avf_aq_desc *)((R).desc_buf.va))[i]))
+
+#define AVF_ADMINQ_DESC_ALIGNMENT 4096
+
+struct avf_adminq_ring {
+	struct avf_virt_mem dma_head;	/* space for dma structures */
+	struct avf_dma_mem desc_buf;	/* descriptor ring memory */
+	struct avf_virt_mem cmd_buf;	/* command buffer memory */
+
+	union {
+		struct avf_dma_mem *asq_bi;
+		struct avf_dma_mem *arq_bi;
+	} r;
+
+	u16 count;		/* Number of descriptors */
+	u16 rx_buf_len;		/* Admin Receive Queue buffer length */
+
+	/* used for interrupt processing */
+	u16 next_to_use;
+	u16 next_to_clean;
+
+	/* used for queue tracking */
+	u32 head;
+	u32 tail;
+	u32 len;
+	u32 bah;
+	u32 bal;
+};
+
+/* ASQ transaction details */
+struct avf_asq_cmd_details {
+	void *callback; /* cast from type AVF_ADMINQ_CALLBACK */
+	u64 cookie;
+	u16 flags_ena;
+	u16 flags_dis;
+	bool async;
+	bool postpone;
+	struct avf_aq_desc *wb_desc;
+};
+
+#define AVF_ADMINQ_DETAILS(R, i)   \
+	(&(((struct avf_asq_cmd_details *)((R).cmd_buf.va))[i]))
+
+/* ARQ event information */
+struct avf_arq_event_info {
+	struct avf_aq_desc desc;
+	u16 msg_len;
+	u16 buf_len;
+	u8 *msg_buf;
+};
+
+/* Admin Queue information */
+struct avf_adminq_info {
+	struct avf_adminq_ring arq;    /* receive queue */
+	struct avf_adminq_ring asq;    /* send queue */
+	u32 asq_cmd_timeout;            /* send queue cmd write back timeout*/
+	u16 num_arq_entries;            /* receive queue depth */
+	u16 num_asq_entries;            /* send queue depth */
+	u16 arq_buf_size;               /* receive queue buffer size */
+	u16 asq_buf_size;               /* send queue buffer size */
+	u16 fw_maj_ver;                 /* firmware major version */
+	u16 fw_min_ver;                 /* firmware minor version */
+	u32 fw_build;                   /* firmware build number */
+	u16 api_maj_ver;                /* api major version */
+	u16 api_min_ver;                /* api minor version */
+
+	struct avf_spinlock asq_spinlock; /* Send queue spinlock */
+	struct avf_spinlock arq_spinlock; /* Receive queue spinlock */
+
+	/* last status values on send and receive queues */
+	enum avf_admin_queue_err asq_last_status;
+	enum avf_admin_queue_err arq_last_status;
+};
+
+/**
+ * avf_aq_rc_to_posix - convert errors to user-land codes
+ * aq_ret: AdminQ handler error code can override aq_rc
+ * aq_rc: AdminQ firmware error code to convert
+ **/
+STATIC INLINE int avf_aq_rc_to_posix(int aq_ret, int aq_rc)
+{
+	int aq_to_posix[] = {
+		0,           /* AVF_AQ_RC_OK */
+		-EPERM,      /* AVF_AQ_RC_EPERM */
+		-ENOENT,     /* AVF_AQ_RC_ENOENT */
+		-ESRCH,      /* AVF_AQ_RC_ESRCH */
+		-EINTR,      /* AVF_AQ_RC_EINTR */
+		-EIO,        /* AVF_AQ_RC_EIO */
+		-ENXIO,      /* AVF_AQ_RC_ENXIO */
+		-E2BIG,      /* AVF_AQ_RC_E2BIG */
+		-EAGAIN,     /* AVF_AQ_RC_EAGAIN */
+		-ENOMEM,     /* AVF_AQ_RC_ENOMEM */
+		-EACCES,     /* AVF_AQ_RC_EACCES */
+		-EFAULT,     /* AVF_AQ_RC_EFAULT */
+		-EBUSY,      /* AVF_AQ_RC_EBUSY */
+		-EEXIST,     /* AVF_AQ_RC_EEXIST */
+		-EINVAL,     /* AVF_AQ_RC_EINVAL */
+		-ENOTTY,     /* AVF_AQ_RC_ENOTTY */
+		-ENOSPC,     /* AVF_AQ_RC_ENOSPC */
+		-ENOSYS,     /* AVF_AQ_RC_ENOSYS */
+		-ERANGE,     /* AVF_AQ_RC_ERANGE */
+		-EPIPE,      /* AVF_AQ_RC_EFLUSHED */
+		-ESPIPE,     /* AVF_AQ_RC_BAD_ADDR */
+		-EROFS,      /* AVF_AQ_RC_EMODE */
+		-EFBIG,      /* AVF_AQ_RC_EFBIG */
+	};
+
+	/* aq_rc is invalid if AQ timed out */
+	if (aq_ret == AVF_ERR_ADMIN_QUEUE_TIMEOUT)
+		return -EAGAIN;
+
+	if (!((u32)aq_rc < (sizeof(aq_to_posix) / sizeof((aq_to_posix)[0]))))
+		return -ERANGE;
+
+	return aq_to_posix[aq_rc];
+}
+
+/* general information */
+#define AVF_AQ_LARGE_BUF	512
+#define AVF_ASQ_CMD_TIMEOUT	250000  /* usecs */
+#ifdef AVF_ESS_SUPPORT
+#define AVF_ASQ_CMD_TIMEOUT_ESS	50000000  /* usecs */
+#endif
+
+void avf_fill_default_direct_cmd_desc(struct avf_aq_desc *desc,
+				       u16 opcode);
+
+#endif /* _AVF_ADMINQ_H_ */
diff --git a/drivers/net/avf/base/avf_adminq_cmd.h b/drivers/net/avf/base/avf_adminq_cmd.h
new file mode 100644
index 0000000..5b9ed38
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq_cmd.h
@@ -0,0 +1,2807 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ADMINQ_CMD_H_
+#define _AVF_ADMINQ_CMD_H_
+
+/* This header file defines the avf Admin Queue commands and is shared between
+ * avf Firmware and Software.
+ *
+ * This file needs to comply with the Linux Kernel coding style.
+ */
+
+
+#define AVF_FW_API_VERSION_MAJOR	0x0001
+#define AVF_FW_API_VERSION_MINOR_X722	0x0005
+#define AVF_FW_API_VERSION_MINOR_X710	0x0007
+
+#define AVF_FW_MINOR_VERSION(_h) ((_h)->mac.type == AVF_MAC_XL710 ? \
+					AVF_FW_API_VERSION_MINOR_X710 : \
+					AVF_FW_API_VERSION_MINOR_X722)
+
+/* API version 1.7 implements additional link and PHY-specific APIs  */
+#define AVF_MINOR_VER_GET_LINK_INFO_XL710 0x0007
+
+struct avf_aq_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 retval;
+	__le32 cookie_high;
+	__le32 cookie_low;
+	union {
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 param2;
+			__le32 param3;
+		} internal;
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 addr_high;
+			__le32 addr_low;
+		} external;
+		u8 raw[16];
+	} params;
+};
+
+/* Flags sub-structure
+ * |0  |1  |2  |3  |4  |5  |6  |7  |8  |9  |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR|VFE| * *  RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
+ */
+
+/* command flags and offsets*/
+#define AVF_AQ_FLAG_DD_SHIFT	0
+#define AVF_AQ_FLAG_CMP_SHIFT	1
+#define AVF_AQ_FLAG_ERR_SHIFT	2
+#define AVF_AQ_FLAG_VFE_SHIFT	3
+#define AVF_AQ_FLAG_LB_SHIFT	9
+#define AVF_AQ_FLAG_RD_SHIFT	10
+#define AVF_AQ_FLAG_VFC_SHIFT	11
+#define AVF_AQ_FLAG_BUF_SHIFT	12
+#define AVF_AQ_FLAG_SI_SHIFT	13
+#define AVF_AQ_FLAG_EI_SHIFT	14
+#define AVF_AQ_FLAG_FE_SHIFT	15
+
+#define AVF_AQ_FLAG_DD		(1 << AVF_AQ_FLAG_DD_SHIFT)  /* 0x1    */
+#define AVF_AQ_FLAG_CMP	(1 << AVF_AQ_FLAG_CMP_SHIFT) /* 0x2    */
+#define AVF_AQ_FLAG_ERR	(1 << AVF_AQ_FLAG_ERR_SHIFT) /* 0x4    */
+#define AVF_AQ_FLAG_VFE	(1 << AVF_AQ_FLAG_VFE_SHIFT) /* 0x8    */
+#define AVF_AQ_FLAG_LB		(1 << AVF_AQ_FLAG_LB_SHIFT)  /* 0x200  */
+#define AVF_AQ_FLAG_RD		(1 << AVF_AQ_FLAG_RD_SHIFT)  /* 0x400  */
+#define AVF_AQ_FLAG_VFC	(1 << AVF_AQ_FLAG_VFC_SHIFT) /* 0x800  */
+#define AVF_AQ_FLAG_BUF	(1 << AVF_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
+#define AVF_AQ_FLAG_SI		(1 << AVF_AQ_FLAG_SI_SHIFT)  /* 0x2000 */
+#define AVF_AQ_FLAG_EI		(1 << AVF_AQ_FLAG_EI_SHIFT)  /* 0x4000 */
+#define AVF_AQ_FLAG_FE		(1 << AVF_AQ_FLAG_FE_SHIFT)  /* 0x8000 */
+
+/* error codes */
+enum avf_admin_queue_err {
+	AVF_AQ_RC_OK		= 0,  /* success */
+	AVF_AQ_RC_EPERM	= 1,  /* Operation not permitted */
+	AVF_AQ_RC_ENOENT	= 2,  /* No such element */
+	AVF_AQ_RC_ESRCH	= 3,  /* Bad opcode */
+	AVF_AQ_RC_EINTR	= 4,  /* operation interrupted */
+	AVF_AQ_RC_EIO		= 5,  /* I/O error */
+	AVF_AQ_RC_ENXIO	= 6,  /* No such resource */
+	AVF_AQ_RC_E2BIG	= 7,  /* Arg too long */
+	AVF_AQ_RC_EAGAIN	= 8,  /* Try again */
+	AVF_AQ_RC_ENOMEM	= 9,  /* Out of memory */
+	AVF_AQ_RC_EACCES	= 10, /* Permission denied */
+	AVF_AQ_RC_EFAULT	= 11, /* Bad address */
+	AVF_AQ_RC_EBUSY	= 12, /* Device or resource busy */
+	AVF_AQ_RC_EEXIST	= 13, /* object already exists */
+	AVF_AQ_RC_EINVAL	= 14, /* Invalid argument */
+	AVF_AQ_RC_ENOTTY	= 15, /* Not a typewriter */
+	AVF_AQ_RC_ENOSPC	= 16, /* No space left or alloc failure */
+	AVF_AQ_RC_ENOSYS	= 17, /* Function not implemented */
+	AVF_AQ_RC_ERANGE	= 18, /* Parameter out of range */
+	AVF_AQ_RC_EFLUSHED	= 19, /* Cmd flushed due to prev cmd error */
+	AVF_AQ_RC_BAD_ADDR	= 20, /* Descriptor contains a bad pointer */
+	AVF_AQ_RC_EMODE	= 21, /* Op not allowed in current dev mode */
+	AVF_AQ_RC_EFBIG	= 22, /* File too large */
+};
+
+/* Admin Queue command opcodes */
+enum avf_admin_queue_opc {
+	/* aq commands */
+	avf_aqc_opc_get_version	= 0x0001,
+	avf_aqc_opc_driver_version	= 0x0002,
+	avf_aqc_opc_queue_shutdown	= 0x0003,
+	avf_aqc_opc_set_pf_context	= 0x0004,
+
+	/* resource ownership */
+	avf_aqc_opc_request_resource	= 0x0008,
+	avf_aqc_opc_release_resource	= 0x0009,
+
+	avf_aqc_opc_list_func_capabilities	= 0x000A,
+	avf_aqc_opc_list_dev_capabilities	= 0x000B,
+
+	/* Proxy commands */
+	avf_aqc_opc_set_proxy_config		= 0x0104,
+	avf_aqc_opc_set_ns_proxy_table_entry	= 0x0105,
+
+	/* LAA */
+	avf_aqc_opc_mac_address_read	= 0x0107,
+	avf_aqc_opc_mac_address_write	= 0x0108,
+
+	/* PXE */
+	avf_aqc_opc_clear_pxe_mode	= 0x0110,
+
+	/* WoL commands */
+	avf_aqc_opc_set_wol_filter	= 0x0120,
+	avf_aqc_opc_get_wake_reason	= 0x0121,
+	avf_aqc_opc_clear_all_wol_filters = 0x025E,
+
+	/* internal switch commands */
+	avf_aqc_opc_get_switch_config		= 0x0200,
+	avf_aqc_opc_add_statistics		= 0x0201,
+	avf_aqc_opc_remove_statistics		= 0x0202,
+	avf_aqc_opc_set_port_parameters	= 0x0203,
+	avf_aqc_opc_get_switch_resource_alloc	= 0x0204,
+	avf_aqc_opc_set_switch_config		= 0x0205,
+	avf_aqc_opc_rx_ctl_reg_read		= 0x0206,
+	avf_aqc_opc_rx_ctl_reg_write		= 0x0207,
+
+	avf_aqc_opc_add_vsi			= 0x0210,
+	avf_aqc_opc_update_vsi_parameters	= 0x0211,
+	avf_aqc_opc_get_vsi_parameters		= 0x0212,
+
+	avf_aqc_opc_add_pv			= 0x0220,
+	avf_aqc_opc_update_pv_parameters	= 0x0221,
+	avf_aqc_opc_get_pv_parameters		= 0x0222,
+
+	avf_aqc_opc_add_veb			= 0x0230,
+	avf_aqc_opc_update_veb_parameters	= 0x0231,
+	avf_aqc_opc_get_veb_parameters		= 0x0232,
+
+	avf_aqc_opc_delete_element		= 0x0243,
+
+	avf_aqc_opc_add_macvlan		= 0x0250,
+	avf_aqc_opc_remove_macvlan		= 0x0251,
+	avf_aqc_opc_add_vlan			= 0x0252,
+	avf_aqc_opc_remove_vlan		= 0x0253,
+	avf_aqc_opc_set_vsi_promiscuous_modes	= 0x0254,
+	avf_aqc_opc_add_tag			= 0x0255,
+	avf_aqc_opc_remove_tag			= 0x0256,
+	avf_aqc_opc_add_multicast_etag		= 0x0257,
+	avf_aqc_opc_remove_multicast_etag	= 0x0258,
+	avf_aqc_opc_update_tag			= 0x0259,
+	avf_aqc_opc_add_control_packet_filter	= 0x025A,
+	avf_aqc_opc_remove_control_packet_filter	= 0x025B,
+	avf_aqc_opc_add_cloud_filters		= 0x025C,
+	avf_aqc_opc_remove_cloud_filters	= 0x025D,
+	avf_aqc_opc_clear_wol_switch_filters	= 0x025E,
+	avf_aqc_opc_replace_cloud_filters	= 0x025F,
+
+	avf_aqc_opc_add_mirror_rule	= 0x0260,
+	avf_aqc_opc_delete_mirror_rule	= 0x0261,
+
+	/* Dynamic Device Personalization */
+	avf_aqc_opc_write_personalization_profile	= 0x0270,
+	avf_aqc_opc_get_personalization_profile_list	= 0x0271,
+
+	/* DCB commands */
+	avf_aqc_opc_dcb_ignore_pfc	= 0x0301,
+	avf_aqc_opc_dcb_updated	= 0x0302,
+
+	/* TX scheduler */
+	avf_aqc_opc_configure_vsi_bw_limit		= 0x0400,
+	avf_aqc_opc_configure_vsi_ets_sla_bw_limit	= 0x0406,
+	avf_aqc_opc_configure_vsi_tc_bw		= 0x0407,
+	avf_aqc_opc_query_vsi_bw_config		= 0x0408,
+	avf_aqc_opc_query_vsi_ets_sla_config		= 0x040A,
+	avf_aqc_opc_configure_switching_comp_bw_limit	= 0x0410,
+
+	avf_aqc_opc_enable_switching_comp_ets			= 0x0413,
+	avf_aqc_opc_modify_switching_comp_ets			= 0x0414,
+	avf_aqc_opc_disable_switching_comp_ets			= 0x0415,
+	avf_aqc_opc_configure_switching_comp_ets_bw_limit	= 0x0416,
+	avf_aqc_opc_configure_switching_comp_bw_config		= 0x0417,
+	avf_aqc_opc_query_switching_comp_ets_config		= 0x0418,
+	avf_aqc_opc_query_port_ets_config			= 0x0419,
+	avf_aqc_opc_query_switching_comp_bw_config		= 0x041A,
+	avf_aqc_opc_suspend_port_tx				= 0x041B,
+	avf_aqc_opc_resume_port_tx				= 0x041C,
+	avf_aqc_opc_configure_partition_bw			= 0x041D,
+	/* hmc */
+	avf_aqc_opc_query_hmc_resource_profile	= 0x0500,
+	avf_aqc_opc_set_hmc_resource_profile	= 0x0501,
+
+	/* phy commands*/
+	avf_aqc_opc_get_phy_abilities		= 0x0600,
+	avf_aqc_opc_set_phy_config		= 0x0601,
+	avf_aqc_opc_set_mac_config		= 0x0603,
+	avf_aqc_opc_set_link_restart_an	= 0x0605,
+	avf_aqc_opc_get_link_status		= 0x0607,
+	avf_aqc_opc_set_phy_int_mask		= 0x0613,
+	avf_aqc_opc_get_local_advt_reg		= 0x0614,
+	avf_aqc_opc_set_local_advt_reg		= 0x0615,
+	avf_aqc_opc_get_partner_advt		= 0x0616,
+	avf_aqc_opc_set_lb_modes		= 0x0618,
+	avf_aqc_opc_get_phy_wol_caps		= 0x0621,
+	avf_aqc_opc_set_phy_debug		= 0x0622,
+	avf_aqc_opc_upload_ext_phy_fm		= 0x0625,
+	avf_aqc_opc_run_phy_activity		= 0x0626,
+	avf_aqc_opc_set_phy_register		= 0x0628,
+	avf_aqc_opc_get_phy_register		= 0x0629,
+
+	/* NVM commands */
+	avf_aqc_opc_nvm_read			= 0x0701,
+	avf_aqc_opc_nvm_erase			= 0x0702,
+	avf_aqc_opc_nvm_update			= 0x0703,
+	avf_aqc_opc_nvm_config_read		= 0x0704,
+	avf_aqc_opc_nvm_config_write		= 0x0705,
+	avf_aqc_opc_oem_post_update		= 0x0720,
+	avf_aqc_opc_thermal_sensor		= 0x0721,
+
+	/* virtualization commands */
+	avf_aqc_opc_send_msg_to_pf		= 0x0801,
+	avf_aqc_opc_send_msg_to_vf		= 0x0802,
+	avf_aqc_opc_send_msg_to_peer		= 0x0803,
+
+	/* alternate structure */
+	avf_aqc_opc_alternate_write		= 0x0900,
+	avf_aqc_opc_alternate_write_indirect	= 0x0901,
+	avf_aqc_opc_alternate_read		= 0x0902,
+	avf_aqc_opc_alternate_read_indirect	= 0x0903,
+	avf_aqc_opc_alternate_write_done	= 0x0904,
+	avf_aqc_opc_alternate_set_mode		= 0x0905,
+	avf_aqc_opc_alternate_clear_port	= 0x0906,
+
+	/* LLDP commands */
+	avf_aqc_opc_lldp_get_mib	= 0x0A00,
+	avf_aqc_opc_lldp_update_mib	= 0x0A01,
+	avf_aqc_opc_lldp_add_tlv	= 0x0A02,
+	avf_aqc_opc_lldp_update_tlv	= 0x0A03,
+	avf_aqc_opc_lldp_delete_tlv	= 0x0A04,
+	avf_aqc_opc_lldp_stop		= 0x0A05,
+	avf_aqc_opc_lldp_start		= 0x0A06,
+	avf_aqc_opc_get_cee_dcb_cfg	= 0x0A07,
+	avf_aqc_opc_lldp_set_local_mib	= 0x0A08,
+	avf_aqc_opc_lldp_stop_start_spec_agent	= 0x0A09,
+
+	/* Tunnel commands */
+	avf_aqc_opc_add_udp_tunnel	= 0x0B00,
+	avf_aqc_opc_del_udp_tunnel	= 0x0B01,
+	avf_aqc_opc_set_rss_key	= 0x0B02,
+	avf_aqc_opc_set_rss_lut	= 0x0B03,
+	avf_aqc_opc_get_rss_key	= 0x0B04,
+	avf_aqc_opc_get_rss_lut	= 0x0B05,
+
+	/* Async Events */
+	avf_aqc_opc_event_lan_overflow		= 0x1001,
+
+	/* OEM commands */
+	avf_aqc_opc_oem_parameter_change	= 0xFE00,
+	avf_aqc_opc_oem_device_status_change	= 0xFE01,
+	avf_aqc_opc_oem_ocsd_initialize	= 0xFE02,
+	avf_aqc_opc_oem_ocbb_initialize	= 0xFE03,
+
+	/* debug commands */
+	avf_aqc_opc_debug_read_reg		= 0xFF03,
+	avf_aqc_opc_debug_write_reg		= 0xFF04,
+	avf_aqc_opc_debug_modify_reg		= 0xFF07,
+	avf_aqc_opc_debug_dump_internals	= 0xFF08,
+};
+
+/* command structures and indirect data structures */
+
+/* Structure naming conventions:
+ * - no suffix for direct command descriptor structures
+ * - _data for indirect sent data
+ * - _resp for indirect return data (data used in both directions uses _data)
+ * - _completion for direct return data
+ * - _element_ for repeated elements (may also be _data or _resp)
+ *
+ * Command structures are expected to overlay the params.raw member of the basic
+ * descriptor, and as such cannot exceed 16 bytes in length.
+ */
+
+/* This macro is used to generate a compilation error if a structure
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure is not of the correct size, otherwise it creates an enum that is
+ * never used.
+ */
+#define AVF_CHECK_STRUCT_LEN(n, X) enum avf_static_assert_enum_##X \
+	{ avf_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
+
+/* This macro is used extensively to ensure that command structures are 16
+ * bytes in length as they have to map to the raw array of that size.
+ */
+#define AVF_CHECK_CMD_LENGTH(X)	AVF_CHECK_STRUCT_LEN(16, X)
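+
+/* Usage sketch (illustrative note, not part of the admin queue spec): a
+ * command structure that is not exactly 16 bytes fails to build, because the
+ * ternary above makes the divisor zero.  The structure below is hypothetical
+ * and exists only to show the mechanism:
+ *
+ *	struct avf_demo_bad_cmd {
+ *		__le32	val;			(only 4 of the 16 bytes)
+ *	};
+ *	AVF_CHECK_CMD_LENGTH(avf_demo_bad_cmd);	(fails to build)
+ */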
+
+/* internal (0x00XX) commands */
+
+/* Get version (direct 0x0001) */
+struct avf_aqc_get_version {
+	__le32 rom_ver;
+	__le32 fw_build;
+	__le16 fw_major;
+	__le16 fw_minor;
+	__le16 api_major;
+	__le16 api_minor;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_version);
+
+/* Send driver version (indirect 0x0002) */
+struct avf_aqc_driver_version {
+	u8	driver_major_ver;
+	u8	driver_minor_ver;
+	u8	driver_build_ver;
+	u8	driver_subbuild_ver;
+	u8	reserved[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_driver_version);
+
+/* Queue Shutdown (direct 0x0003) */
+struct avf_aqc_queue_shutdown {
+	__le32	driver_unloading;
+#define AVF_AQ_DRIVER_UNLOADING	0x1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_queue_shutdown);
+
+/* Set PF context (0x0004, direct) */
+struct avf_aqc_set_pf_context {
+	u8	pf_id;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_pf_context);
+
+/* Request resource ownership (direct 0x0008)
+ * Release resource ownership (direct 0x0009)
+ */
+#define AVF_AQ_RESOURCE_NVM			1
+#define AVF_AQ_RESOURCE_SDP			2
+#define AVF_AQ_RESOURCE_ACCESS_READ		1
+#define AVF_AQ_RESOURCE_ACCESS_WRITE		2
+#define AVF_AQ_RESOURCE_NVM_READ_TIMEOUT	3000
+#define AVF_AQ_RESOURCE_NVM_WRITE_TIMEOUT	180000
+
+struct avf_aqc_request_resource {
+	__le16	resource_id;
+	__le16	access_type;
+	__le32	timeout;
+	__le32	resource_number;
+	u8	reserved[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_request_resource);
+
+/* Get function capabilities (indirect 0x000A)
+ * Get device capabilities (indirect 0x000B)
+ */
+struct avf_aqc_list_capabilites {
+	u8 command_flags;
+#define AVF_AQ_LIST_CAP_PF_INDEX_EN	1
+	u8 pf_index;
+	u8 reserved[2];
+	__le32 count;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_list_capabilites);
+
+struct avf_aqc_list_capabilities_element_resp {
+	__le16	id;
+	u8	major_rev;
+	u8	minor_rev;
+	__le32	number;
+	__le32	logical_id;
+	__le32	phys_id;
+	u8	reserved[16];
+};
+
+/* list of caps */
+
+#define AVF_AQ_CAP_ID_SWITCH_MODE	0x0001
+#define AVF_AQ_CAP_ID_MNG_MODE		0x0002
+#define AVF_AQ_CAP_ID_NPAR_ACTIVE	0x0003
+#define AVF_AQ_CAP_ID_OS2BMC_CAP	0x0004
+#define AVF_AQ_CAP_ID_FUNCTIONS_VALID	0x0005
+#define AVF_AQ_CAP_ID_ALTERNATE_RAM	0x0006
+#define AVF_AQ_CAP_ID_WOL_AND_PROXY	0x0008
+#define AVF_AQ_CAP_ID_SRIOV		0x0012
+#define AVF_AQ_CAP_ID_VF		0x0013
+#define AVF_AQ_CAP_ID_VMDQ		0x0014
+#define AVF_AQ_CAP_ID_8021QBG		0x0015
+#define AVF_AQ_CAP_ID_8021QBR		0x0016
+#define AVF_AQ_CAP_ID_VSI		0x0017
+#define AVF_AQ_CAP_ID_DCB		0x0018
+#define AVF_AQ_CAP_ID_FCOE		0x0021
+#define AVF_AQ_CAP_ID_ISCSI		0x0022
+#define AVF_AQ_CAP_ID_RSS		0x0040
+#define AVF_AQ_CAP_ID_RXQ		0x0041
+#define AVF_AQ_CAP_ID_TXQ		0x0042
+#define AVF_AQ_CAP_ID_MSIX		0x0043
+#define AVF_AQ_CAP_ID_VF_MSIX		0x0044
+#define AVF_AQ_CAP_ID_FLOW_DIRECTOR	0x0045
+#define AVF_AQ_CAP_ID_1588		0x0046
+#define AVF_AQ_CAP_ID_IWARP		0x0051
+#define AVF_AQ_CAP_ID_LED		0x0061
+#define AVF_AQ_CAP_ID_SDP		0x0062
+#define AVF_AQ_CAP_ID_MDIO		0x0063
+#define AVF_AQ_CAP_ID_WSR_PROT		0x0064
+#define AVF_AQ_CAP_ID_NVM_MGMT		0x0080
+#define AVF_AQ_CAP_ID_FLEX10		0x00F1
+#define AVF_AQ_CAP_ID_CEM		0x00F2
+
+/* Set CPPM Configuration (direct 0x0103) */
+struct avf_aqc_cppm_configuration {
+	__le16	command_flags;
+#define AVF_AQ_CPPM_EN_LTRC	0x0800
+#define AVF_AQ_CPPM_EN_DMCTH	0x1000
+#define AVF_AQ_CPPM_EN_DMCTLX	0x2000
+#define AVF_AQ_CPPM_EN_HPTC	0x4000
+#define AVF_AQ_CPPM_EN_DMARC	0x8000
+	__le16	ttlx;
+	__le32	dmacr;
+	__le16	dmcth;
+	u8	hptc;
+	u8	reserved;
+	__le32	pfltrc;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_cppm_configuration);
+
+/* Set ARP Proxy command / response (indirect 0x0104) */
+struct avf_aqc_arp_proxy_data {
+	__le16	command_flags;
+#define AVF_AQ_ARP_INIT_IPV4	0x0800
+#define AVF_AQ_ARP_UNSUP_CTL	0x1000
+#define AVF_AQ_ARP_ENA		0x2000
+#define AVF_AQ_ARP_ADD_IPV4	0x4000
+#define AVF_AQ_ARP_DEL_IPV4	0x8000
+	__le16	table_id;
+	__le32	enabled_offloads;
+#define AVF_AQ_ARP_DIRECTED_OFFLOAD_ENABLE	0x00000020
+#define AVF_AQ_ARP_OFFLOAD_ENABLE		0x00000800
+	__le32	ip_addr;
+	u8	mac_addr[6];
+	u8	reserved[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x14, avf_aqc_arp_proxy_data);
+
+/* Set NS Proxy Table Entry Command (indirect 0x0105) */
+struct avf_aqc_ns_proxy_data {
+	__le16	table_idx_mac_addr_0;
+	__le16	table_idx_mac_addr_1;
+	__le16	table_idx_ipv6_0;
+	__le16	table_idx_ipv6_1;
+	__le16	control;
+#define AVF_AQ_NS_PROXY_ADD_0		0x0001
+#define AVF_AQ_NS_PROXY_DEL_0		0x0002
+#define AVF_AQ_NS_PROXY_ADD_1		0x0004
+#define AVF_AQ_NS_PROXY_DEL_1		0x0008
+#define AVF_AQ_NS_PROXY_ADD_IPV6_0	0x0010
+#define AVF_AQ_NS_PROXY_DEL_IPV6_0	0x0020
+#define AVF_AQ_NS_PROXY_ADD_IPV6_1	0x0040
+#define AVF_AQ_NS_PROXY_DEL_IPV6_1	0x0080
+#define AVF_AQ_NS_PROXY_COMMAND_SEQ	0x0100
+#define AVF_AQ_NS_PROXY_INIT_IPV6_TBL	0x0200
+#define AVF_AQ_NS_PROXY_INIT_MAC_TBL	0x0400
+#define AVF_AQ_NS_PROXY_OFFLOAD_ENABLE	0x0800
+#define AVF_AQ_NS_PROXY_DIRECTED_OFFLOAD_ENABLE	0x1000
+	u8	mac_addr_0[6];
+	u8	mac_addr_1[6];
+	u8	local_mac_addr[6];
+	u8	ipv6_addr_0[16]; /* Warning! spec specifies BE byte order */
+	u8	ipv6_addr_1[16];
+};
+
+AVF_CHECK_STRUCT_LEN(0x3c, avf_aqc_ns_proxy_data);
+
+/* Manage LAA Command (0x0106) - obsolete */
+struct avf_aqc_mng_laa {
+	__le16	command_flags;
+#define AVF_AQ_LAA_FLAG_WR	0x8000
+	u8	reserved[2];
+	__le32	sal;
+	__le16	sah;
+	u8	reserved2[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mng_laa);
+
+/* Manage MAC Address Read Command (indirect 0x0107) */
+struct avf_aqc_mac_address_read {
+	__le16	command_flags;
+#define AVF_AQC_LAN_ADDR_VALID		0x10
+#define AVF_AQC_SAN_ADDR_VALID		0x20
+#define AVF_AQC_PORT_ADDR_VALID	0x40
+#define AVF_AQC_WOL_ADDR_VALID		0x80
+#define AVF_AQC_MC_MAG_EN_VALID	0x100
+#define AVF_AQC_WOL_PRESERVE_STATUS	0x200
+#define AVF_AQC_ADDR_VALID_MASK	0x3F0
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mac_address_read);
+
+struct avf_aqc_mac_address_read_data {
+	u8 pf_lan_mac[6];
+	u8 pf_san_mac[6];
+	u8 port_mac[6];
+	u8 pf_wol_mac[6];
+};
+
+AVF_CHECK_STRUCT_LEN(24, avf_aqc_mac_address_read_data);
+
+/* Manage MAC Address Write Command (0x0108) */
+struct avf_aqc_mac_address_write {
+	__le16	command_flags;
+#define AVF_AQC_MC_MAG_EN		0x0100
+#define AVF_AQC_WOL_PRESERVE_ON_PFR	0x0200
+#define AVF_AQC_WRITE_TYPE_LAA_ONLY	0x0000
+#define AVF_AQC_WRITE_TYPE_LAA_WOL	0x4000
+#define AVF_AQC_WRITE_TYPE_PORT	0x8000
+#define AVF_AQC_WRITE_TYPE_UPDATE_MC_MAG	0xC000
+#define AVF_AQC_WRITE_TYPE_MASK	0xC000
+
+	__le16	mac_sah;
+	__le32	mac_sal;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mac_address_write);
+
+/* PXE commands (0x011x) */
+
+/* Clear PXE Command and response  (direct 0x0110) */
+struct avf_aqc_clear_pxe {
+	u8	rx_cnt;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_clear_pxe);
+
+/* Set WoL Filter (0x0120) */
+
+struct avf_aqc_set_wol_filter {
+	__le16 filter_index;
+#define AVF_AQC_MAX_NUM_WOL_FILTERS	8
+#define AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT	15
+#define AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_MASK	(0x1 << \
+		AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT)
+
+#define AVF_AQC_SET_WOL_FILTER_INDEX_SHIFT		0
+#define AVF_AQC_SET_WOL_FILTER_INDEX_MASK	(0x7 << \
+		AVF_AQC_SET_WOL_FILTER_INDEX_SHIFT)
+	__le16 cmd_flags;
+#define AVF_AQC_SET_WOL_FILTER				0x8000
+#define AVF_AQC_SET_WOL_FILTER_NO_TCO_WOL		0x4000
+#define AVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR	0x2000
+#define AVF_AQC_SET_WOL_FILTER_ACTION_CLEAR		0
+#define AVF_AQC_SET_WOL_FILTER_ACTION_SET		1
+	__le16 valid_flags;
+#define AVF_AQC_SET_WOL_FILTER_ACTION_VALID		0x8000
+#define AVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID	0x4000
+	u8 reserved[2];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_wol_filter);
+
+struct avf_aqc_set_wol_filter_data {
+	u8 filter[128];
+	u8 mask[16];
+};
+
+AVF_CHECK_STRUCT_LEN(0x90, avf_aqc_set_wol_filter_data);
+
+/* Get Wake Reason (0x0121) */
+
+struct avf_aqc_get_wake_reason_completion {
+	u8 reserved_1[2];
+	__le16 wake_reason;
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT	0
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_MASK (0xFF << \
+		AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT)
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT	8
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_MASK	(0xFF << \
+		AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT)
+	u8 reserved_2[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_wake_reason_completion);
+
+/* Switch configuration commands (0x02xx) */
+
+/* Used by many indirect commands that only pass an SEID and a buffer in the
+ * command
+ */
+struct avf_aqc_switch_seid {
+	__le16	seid;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_switch_seid);
+
+/* Get Switch Configuration command (indirect 0x0200)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+struct avf_aqc_get_switch_config_header_resp {
+	__le16	num_reported;
+	__le16	num_total;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_switch_config_header_resp);
+
+struct avf_aqc_switch_config_element_resp {
+	u8	element_type;
+#define AVF_AQ_SW_ELEM_TYPE_MAC	1
+#define AVF_AQ_SW_ELEM_TYPE_PF		2
+#define AVF_AQ_SW_ELEM_TYPE_VF		3
+#define AVF_AQ_SW_ELEM_TYPE_EMP	4
+#define AVF_AQ_SW_ELEM_TYPE_BMC	5
+#define AVF_AQ_SW_ELEM_TYPE_PV		16
+#define AVF_AQ_SW_ELEM_TYPE_VEB	17
+#define AVF_AQ_SW_ELEM_TYPE_PA		18
+#define AVF_AQ_SW_ELEM_TYPE_VSI	19
+	u8	revision;
+#define AVF_AQ_SW_ELEM_REV_1		1
+	__le16	seid;
+	__le16	uplink_seid;
+	__le16	downlink_seid;
+	u8	reserved[3];
+	u8	connection_type;
+#define AVF_AQ_CONN_TYPE_REGULAR	0x1
+#define AVF_AQ_CONN_TYPE_DEFAULT	0x2
+#define AVF_AQ_CONN_TYPE_CASCADED	0x3
+	__le16	scheduler_id;
+	__le16	element_info;
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_switch_config_element_resp);
+
+/* Get Switch Configuration (indirect 0x0200)
+ *    an array of elements is returned in the response buffer;
+ *    the first entry in the array is the header, the remainder are elements
+ */
+struct avf_aqc_get_switch_config_resp {
+	struct avf_aqc_get_switch_config_header_resp	header;
+	struct avf_aqc_switch_config_element_resp	element[1];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_get_switch_config_resp);
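+
+/* Parsing sketch (illustrative note): the indirect response buffer begins
+ * with the header followed by num_reported elements, so a caller could walk
+ * it roughly as below.  'buf' is a hypothetical pointer to the response
+ * buffer and LE16_TO_CPU() is assumed from avf_osdep.h:
+ *
+ *	struct avf_aqc_get_switch_config_resp *resp = buf;
+ *	u16 i, n = LE16_TO_CPU(resp->header.num_reported);
+ *
+ *	for (i = 0; i < n; i++) {
+ *		struct avf_aqc_switch_config_element_resp *el =
+ *							&resp->element[i];
+ *		(inspect el->element_type, el->seid, el->uplink_seid, ...)
+ *	}
+ */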
+
+/* Add Statistics (direct 0x0201)
+ * Remove Statistics (direct 0x0202)
+ */
+struct avf_aqc_add_remove_statistics {
+	__le16	seid;
+	__le16	vlan;
+	__le16	stat_index;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_statistics);
+
+/* Set Port Parameters command (direct 0x0203) */
+struct avf_aqc_set_port_parameters {
+	__le16	command_flags;
+#define AVF_AQ_SET_P_PARAMS_SAVE_BAD_PACKETS	1
+#define AVF_AQ_SET_P_PARAMS_PAD_SHORT_PACKETS	2 /* must set! */
+#define AVF_AQ_SET_P_PARAMS_DOUBLE_VLAN_ENA	4
+	__le16	bad_frame_vsi;
+#define AVF_AQ_SET_P_PARAMS_BFRAME_SEID_SHIFT	0x0
+#define AVF_AQ_SET_P_PARAMS_BFRAME_SEID_MASK	0x3FF
+	__le16	default_seid;        /* reserved for command */
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_port_parameters);
+
+/* Get Switch Resource Allocation (indirect 0x0204) */
+struct avf_aqc_get_switch_resource_alloc {
+	u8	num_entries;         /* reserved for command */
+	u8	reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_switch_resource_alloc);
+
+/* expect an array of these structs in the response buffer */
+struct avf_aqc_switch_resource_alloc_element_resp {
+	u8	resource_type;
+#define AVF_AQ_RESOURCE_TYPE_VEB		0x0
+#define AVF_AQ_RESOURCE_TYPE_VSI		0x1
+#define AVF_AQ_RESOURCE_TYPE_MACADDR		0x2
+#define AVF_AQ_RESOURCE_TYPE_STAG		0x3
+#define AVF_AQ_RESOURCE_TYPE_ETAG		0x4
+#define AVF_AQ_RESOURCE_TYPE_MULTICAST_HASH	0x5
+#define AVF_AQ_RESOURCE_TYPE_UNICAST_HASH	0x6
+#define AVF_AQ_RESOURCE_TYPE_VLAN		0x7
+#define AVF_AQ_RESOURCE_TYPE_VSI_LIST_ENTRY	0x8
+#define AVF_AQ_RESOURCE_TYPE_ETAG_LIST_ENTRY	0x9
+#define AVF_AQ_RESOURCE_TYPE_VLAN_STAT_POOL	0xA
+#define AVF_AQ_RESOURCE_TYPE_MIRROR_RULE	0xB
+#define AVF_AQ_RESOURCE_TYPE_QUEUE_SETS	0xC
+#define AVF_AQ_RESOURCE_TYPE_VLAN_FILTERS	0xD
+#define AVF_AQ_RESOURCE_TYPE_INNER_MAC_FILTERS	0xF
+#define AVF_AQ_RESOURCE_TYPE_IP_FILTERS	0x10
+#define AVF_AQ_RESOURCE_TYPE_GRE_VN_KEYS	0x11
+#define AVF_AQ_RESOURCE_TYPE_VN2_KEYS		0x12
+#define AVF_AQ_RESOURCE_TYPE_TUNNEL_PORTS	0x13
+	u8	reserved1;
+	__le16	guaranteed;
+	__le16	total;
+	__le16	used;
+	__le16	total_unalloced;
+	u8	reserved2[6];
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_switch_resource_alloc_element_resp);
+
+/* Set Switch Configuration (direct 0x0205) */
+struct avf_aqc_set_switch_config {
+	__le16	flags;
+/* flags used for both fields below */
+#define AVF_AQ_SET_SWITCH_CFG_PROMISC		0x0001
+#define AVF_AQ_SET_SWITCH_CFG_L2_FILTER	0x0002
+#define AVF_AQ_SET_SWITCH_CFG_HW_ATR_EVICT	0x0004
+	__le16	valid_flags;
+	/* The ethertype in switch_tag is dropped on ingress and used
+	 * internally by the switch. Set this to zero for the default
+	 * of 0x88a8 (802.1ad). Should be zero for firmware API
+	 * versions lower than 1.7.
+	 */
+	__le16	switch_tag;
+	/* The ethertypes in first_tag and second_tag are used to
+	 * match the outer and inner VLAN tags (respectively) when HW
+	 * double VLAN tagging is enabled via the set port parameters
+	 * AQ command. Otherwise these are both ignored. Set them to
+	 * zero for their defaults of 0x8100 (802.1Q). Should be zero
+	 * for firmware API versions lower than 1.7.
+	 */
+	__le16	first_tag;
+	__le16	second_tag;
+	u8	reserved[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_switch_config);
+
+/* Read Receive control registers  (direct 0x0206)
+ * Write Receive control registers (direct 0x0207)
+ *     used for accessing Rx control registers that can be
+ *     slow and need special handling when under high Rx load
+ */
+struct avf_aqc_rx_ctl_reg_read_write {
+	__le32 reserved1;
+	__le32 address;
+	__le32 reserved2;
+	__le32 value;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_rx_ctl_reg_read_write);
+
+/* Add VSI (indirect 0x0210)
+ *    this indirect command uses struct avf_aqc_vsi_properties_data
+ *    as the indirect buffer (128 bytes)
+ *
+ * Update VSI (indirect 0x211)
+ *     uses the same data structure as Add VSI
+ *
+ * Get VSI (indirect 0x0212)
+ *     uses the same completion and data structure as Add VSI
+ */
+struct avf_aqc_add_get_update_vsi {
+	__le16	uplink_seid;
+	u8	connection_type;
+#define AVF_AQ_VSI_CONN_TYPE_NORMAL	0x1
+#define AVF_AQ_VSI_CONN_TYPE_DEFAULT	0x2
+#define AVF_AQ_VSI_CONN_TYPE_CASCADED	0x3
+	u8	reserved1;
+	u8	vf_id;
+	u8	reserved2;
+	__le16	vsi_flags;
+#define AVF_AQ_VSI_TYPE_SHIFT		0x0
+#define AVF_AQ_VSI_TYPE_MASK		(0x3 << AVF_AQ_VSI_TYPE_SHIFT)
+#define AVF_AQ_VSI_TYPE_VF		0x0
+#define AVF_AQ_VSI_TYPE_VMDQ2		0x1
+#define AVF_AQ_VSI_TYPE_PF		0x2
+#define AVF_AQ_VSI_TYPE_EMP_MNG	0x3
+#define AVF_AQ_VSI_FLAG_CASCADED_PV	0x4
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_get_update_vsi);
+
+struct avf_aqc_add_get_update_vsi_completion {
+	__le16 seid;
+	__le16 vsi_number;
+	__le16 vsi_used;
+	__le16 vsi_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_get_update_vsi_completion);
+
+struct avf_aqc_vsi_properties_data {
+	/* first 96 bytes are written by SW */
+	__le16	valid_sections;
+#define AVF_AQ_VSI_PROP_SWITCH_VALID		0x0001
+#define AVF_AQ_VSI_PROP_SECURITY_VALID		0x0002
+#define AVF_AQ_VSI_PROP_VLAN_VALID		0x0004
+#define AVF_AQ_VSI_PROP_CAS_PV_VALID		0x0008
+#define AVF_AQ_VSI_PROP_INGRESS_UP_VALID	0x0010
+#define AVF_AQ_VSI_PROP_EGRESS_UP_VALID	0x0020
+#define AVF_AQ_VSI_PROP_QUEUE_MAP_VALID	0x0040
+#define AVF_AQ_VSI_PROP_QUEUE_OPT_VALID	0x0080
+#define AVF_AQ_VSI_PROP_OUTER_UP_VALID		0x0100
+#define AVF_AQ_VSI_PROP_SCHED_VALID		0x0200
+	/* switch section */
+	__le16	switch_id; /* 12bit id combined with flags below */
+#define AVF_AQ_VSI_SW_ID_SHIFT		0x0000
+#define AVF_AQ_VSI_SW_ID_MASK		(0xFFF << AVF_AQ_VSI_SW_ID_SHIFT)
+#define AVF_AQ_VSI_SW_ID_FLAG_NOT_STAG	0x1000
+#define AVF_AQ_VSI_SW_ID_FLAG_ALLOW_LB	0x2000
+#define AVF_AQ_VSI_SW_ID_FLAG_LOCAL_LB	0x4000
+	u8	sw_reserved[2];
+	/* security section */
+	u8	sec_flags;
+#define AVF_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD	0x01
+#define AVF_AQ_VSI_SEC_FLAG_ENABLE_VLAN_CHK	0x02
+#define AVF_AQ_VSI_SEC_FLAG_ENABLE_MAC_CHK	0x04
+	u8	sec_reserved;
+	/* VLAN section */
+	__le16	pvid; /* VLANS include priority bits */
+	__le16	fcoe_pvid;
+	u8	port_vlan_flags;
+#define AVF_AQ_VSI_PVLAN_MODE_SHIFT	0x00
+#define AVF_AQ_VSI_PVLAN_MODE_MASK	(0x03 << \
+					 AVF_AQ_VSI_PVLAN_MODE_SHIFT)
+#define AVF_AQ_VSI_PVLAN_MODE_TAGGED	0x01
+#define AVF_AQ_VSI_PVLAN_MODE_UNTAGGED	0x02
+#define AVF_AQ_VSI_PVLAN_MODE_ALL	0x03
+#define AVF_AQ_VSI_PVLAN_INSERT_PVID	0x04
+#define AVF_AQ_VSI_PVLAN_EMOD_SHIFT	0x03
+#define AVF_AQ_VSI_PVLAN_EMOD_MASK	(0x3 << \
+					 AVF_AQ_VSI_PVLAN_EMOD_SHIFT)
+#define AVF_AQ_VSI_PVLAN_EMOD_STR_BOTH	0x0
+#define AVF_AQ_VSI_PVLAN_EMOD_STR_UP	0x08
+#define AVF_AQ_VSI_PVLAN_EMOD_STR	0x10
+#define AVF_AQ_VSI_PVLAN_EMOD_NOTHING	0x18
+	u8	pvlan_reserved[3];
+	/* ingress egress up sections */
+	__le32	ingress_table; /* bitmap, 3 bits per up */
+#define AVF_AQ_VSI_UP_TABLE_UP0_SHIFT	0
+#define AVF_AQ_VSI_UP_TABLE_UP0_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP0_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP1_SHIFT	3
+#define AVF_AQ_VSI_UP_TABLE_UP1_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP1_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP2_SHIFT	6
+#define AVF_AQ_VSI_UP_TABLE_UP2_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP2_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP3_SHIFT	9
+#define AVF_AQ_VSI_UP_TABLE_UP3_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP3_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP4_SHIFT	12
+#define AVF_AQ_VSI_UP_TABLE_UP4_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP4_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP5_SHIFT	15
+#define AVF_AQ_VSI_UP_TABLE_UP5_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP5_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP6_SHIFT	18
+#define AVF_AQ_VSI_UP_TABLE_UP6_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP6_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP7_SHIFT	21
+#define AVF_AQ_VSI_UP_TABLE_UP7_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP7_SHIFT)
+	__le32	egress_table;   /* same defines as for ingress table */
+	/* cascaded PV section */
+	__le16	cas_pv_tag;
+	u8	cas_pv_flags;
+#define AVF_AQ_VSI_CAS_PV_TAGX_SHIFT		0x00
+#define AVF_AQ_VSI_CAS_PV_TAGX_MASK		(0x03 << \
+						 AVF_AQ_VSI_CAS_PV_TAGX_SHIFT)
+#define AVF_AQ_VSI_CAS_PV_TAGX_LEAVE		0x00
+#define AVF_AQ_VSI_CAS_PV_TAGX_REMOVE		0x01
+#define AVF_AQ_VSI_CAS_PV_TAGX_COPY		0x02
+#define AVF_AQ_VSI_CAS_PV_INSERT_TAG		0x10
+#define AVF_AQ_VSI_CAS_PV_ETAG_PRUNE		0x20
+#define AVF_AQ_VSI_CAS_PV_ACCEPT_HOST_TAG	0x40
+	u8	cas_pv_reserved;
+	/* queue mapping section */
+	__le16	mapping_flags;
+#define AVF_AQ_VSI_QUE_MAP_CONTIG	0x0
+#define AVF_AQ_VSI_QUE_MAP_NONCONTIG	0x1
+	__le16	queue_mapping[16];
+#define AVF_AQ_VSI_QUEUE_SHIFT		0x0
+#define AVF_AQ_VSI_QUEUE_MASK		(0x7FF << AVF_AQ_VSI_QUEUE_SHIFT)
+	__le16	tc_mapping[8];
+#define AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT	0
+#define AVF_AQ_VSI_TC_QUE_OFFSET_MASK	(0x1FF << \
+					 AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT)
+#define AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT	9
+#define AVF_AQ_VSI_TC_QUE_NUMBER_MASK	(0x7 << \
+					 AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT)
+	/* queueing option section */
+	u8	queueing_opt_flags;
+#define AVF_AQ_VSI_QUE_OPT_MULTICAST_UDP_ENA	0x04
+#define AVF_AQ_VSI_QUE_OPT_UNICAST_UDP_ENA	0x08
+#define AVF_AQ_VSI_QUE_OPT_TCP_ENA	0x10
+#define AVF_AQ_VSI_QUE_OPT_FCOE_ENA	0x20
+#define AVF_AQ_VSI_QUE_OPT_RSS_LUT_PF	0x00
+#define AVF_AQ_VSI_QUE_OPT_RSS_LUT_VSI	0x40
+	u8	queueing_opt_reserved[3];
+	/* scheduler section */
+	u8	up_enable_bits;
+	u8	sched_reserved;
+	/* outer up section */
+	__le32	outer_up_table; /* same structure and defines as ingress tbl */
+	u8	cmd_reserved[8];
+	/* last 32 bytes are written by FW */
+	__le16	qs_handle[8];
+#define AVF_AQ_VSI_QS_HANDLE_INVALID	0xFFFF
+	__le16	stat_counter_idx;
+	__le16	sched_id;
+	u8	resp_reserved[12];
+};
+
+AVF_CHECK_STRUCT_LEN(128, avf_aqc_vsi_properties_data);
+
+/* Add Port Virtualizer (direct 0x0220)
+ * also used for update PV (direct 0x0221) but only flags are used
+ * (IS_CTRL_PORT only works on add PV)
+ */
+struct avf_aqc_add_update_pv {
+	__le16	command_flags;
+#define AVF_AQC_PV_FLAG_PV_TYPE		0x1
+#define AVF_AQC_PV_FLAG_FWD_UNKNOWN_STAG_EN	0x2
+#define AVF_AQC_PV_FLAG_FWD_UNKNOWN_ETAG_EN	0x4
+#define AVF_AQC_PV_FLAG_IS_CTRL_PORT		0x8
+	__le16	uplink_seid;
+	__le16	connected_seid;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_update_pv);
+
+struct avf_aqc_add_update_pv_completion {
+	/* reserved for update; for add also encodes error if rc == ENOSPC */
+	__le16	pv_seid;
+#define AVF_AQC_PV_ERR_FLAG_NO_PV	0x1
+#define AVF_AQC_PV_ERR_FLAG_NO_SCHED	0x2
+#define AVF_AQC_PV_ERR_FLAG_NO_COUNTER	0x4
+#define AVF_AQC_PV_ERR_FLAG_NO_ENTRY	0x8
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_update_pv_completion);
+
+/* Get PV Params (direct 0x0222)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+
+struct avf_aqc_get_pv_params_completion {
+	__le16	seid;
+	__le16	default_stag;
+	__le16	pv_flags; /* same flags as add_pv */
+#define AVF_AQC_GET_PV_PV_TYPE			0x1
+#define AVF_AQC_GET_PV_FRWD_UNKNOWN_STAG	0x2
+#define AVF_AQC_GET_PV_FRWD_UNKNOWN_ETAG	0x4
+	u8	reserved[8];
+	__le16	default_port_seid;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_pv_params_completion);
+
+/* Add VEB (direct 0x0230) */
+struct avf_aqc_add_veb {
+	__le16	uplink_seid;
+	__le16	downlink_seid;
+	__le16	veb_flags;
+#define AVF_AQC_ADD_VEB_FLOATING		0x1
+#define AVF_AQC_ADD_VEB_PORT_TYPE_SHIFT	1
+#define AVF_AQC_ADD_VEB_PORT_TYPE_MASK		(0x3 << \
+					AVF_AQC_ADD_VEB_PORT_TYPE_SHIFT)
+#define AVF_AQC_ADD_VEB_PORT_TYPE_DEFAULT	0x2
+#define AVF_AQC_ADD_VEB_PORT_TYPE_DATA		0x4
+#define AVF_AQC_ADD_VEB_ENABLE_L2_FILTER	0x8     /* deprecated */
+#define AVF_AQC_ADD_VEB_ENABLE_DISABLE_STATS	0x10
+	u8	enable_tcs;
+	u8	reserved[9];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_veb);
+
+struct avf_aqc_add_veb_completion {
+	u8	reserved[6];
+	__le16	switch_seid;
+	/* also encodes error if rc == ENOSPC; codes are the same as add_pv */
+	__le16	veb_seid;
+#define AVF_AQC_VEB_ERR_FLAG_NO_VEB		0x1
+#define AVF_AQC_VEB_ERR_FLAG_NO_SCHED		0x2
+#define AVF_AQC_VEB_ERR_FLAG_NO_COUNTER	0x4
+#define AVF_AQC_VEB_ERR_FLAG_NO_ENTRY		0x8
+	__le16	statistic_index;
+	__le16	vebs_used;
+	__le16	vebs_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_veb_completion);
+
+/* Get VEB Parameters (direct 0x0232)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+struct avf_aqc_get_veb_parameters_completion {
+	__le16	seid;
+	__le16	switch_id;
+	__le16	veb_flags; /* only the first/last flags from 0x0230 are valid */
+	__le16	statistic_index;
+	__le16	vebs_used;
+	__le16	vebs_free;
+	u8	reserved[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_veb_parameters_completion);
+
+/* Delete Element (direct 0x0243)
+ * uses the generic avf_aqc_switch_seid
+ */
+
+/* Add MAC-VLAN (indirect 0x0250) */
+
+/* used as the command descriptor for most VLAN commands */
+struct avf_aqc_macvlan {
+	__le16	num_addresses;
+	__le16	seid[3];
+#define AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_MACVLAN_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT)
+#define AVF_AQC_MACVLAN_CMD_SEID_VALID		0x8000
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_macvlan);
+
+/* indirect data for command and response */
+struct avf_aqc_add_macvlan_element_data {
+	u8	mac_addr[6];
+	__le16	vlan_tag;
+	__le16	flags;
+#define AVF_AQC_MACVLAN_ADD_PERFECT_MATCH	0x0001
+#define AVF_AQC_MACVLAN_ADD_HASH_MATCH		0x0002
+#define AVF_AQC_MACVLAN_ADD_IGNORE_VLAN	0x0004
+#define AVF_AQC_MACVLAN_ADD_TO_QUEUE		0x0008
+#define AVF_AQC_MACVLAN_ADD_USE_SHARED_MAC	0x0010
+	__le16	queue_number;
+#define AVF_AQC_MACVLAN_CMD_QUEUE_SHIFT	0
+#define AVF_AQC_MACVLAN_CMD_QUEUE_MASK		(0x7FF << \
+					AVF_AQC_MACVLAN_CMD_QUEUE_SHIFT)
+	/* response section */
+	u8	match_method;
+#define AVF_AQC_MM_PERFECT_MATCH	0x01
+#define AVF_AQC_MM_HASH_MATCH		0x02
+#define AVF_AQC_MM_ERR_NO_RES		0xFF
+	u8	reserved1[3];
+};
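+
+/* Fill-in sketch (illustrative note): adding one perfect-match MAC filter
+ * that ignores the VLAN tag.  'el' and 'mac' are hypothetical variables and
+ * CPU_TO_LE16() is assumed from avf_osdep.h:
+ *
+ *	struct avf_aqc_add_macvlan_element_data el = {0};
+ *
+ *	memcpy(el.mac_addr, mac, sizeof(el.mac_addr));
+ *	el.flags = CPU_TO_LE16(AVF_AQC_MACVLAN_ADD_PERFECT_MATCH |
+ *			       AVF_AQC_MACVLAN_ADD_IGNORE_VLAN);
+ */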
+
+struct avf_aqc_add_remove_macvlan_completion {
+	__le16 perfect_mac_used;
+	__le16 perfect_mac_free;
+	__le16 unicast_hash_free;
+	__le16 multicast_hash_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_macvlan_completion);
+
+/* Remove MAC-VLAN (indirect 0x0251)
+ * uses avf_aqc_macvlan for the descriptor
+ * data points to an array of num_addresses of elements
+ */
+
+struct avf_aqc_remove_macvlan_element_data {
+	u8	mac_addr[6];
+	__le16	vlan_tag;
+	u8	flags;
+#define AVF_AQC_MACVLAN_DEL_PERFECT_MATCH	0x01
+#define AVF_AQC_MACVLAN_DEL_HASH_MATCH		0x02
+#define AVF_AQC_MACVLAN_DEL_IGNORE_VLAN	0x08
+#define AVF_AQC_MACVLAN_DEL_ALL_VSIS		0x10
+	u8	reserved[3];
+	/* reply section */
+	u8	error_code;
+#define AVF_AQC_REMOVE_MACVLAN_SUCCESS		0x0
+#define AVF_AQC_REMOVE_MACVLAN_FAIL		0xFF
+	u8	reply_reserved[3];
+};
+
+/* Add VLAN (indirect 0x0252)
+ * Remove VLAN (indirect 0x0253)
+ * use the generic avf_aqc_macvlan for the command
+ */
+struct avf_aqc_add_remove_vlan_element_data {
+	__le16	vlan_tag;
+	u8	vlan_flags;
+/* flags for add VLAN */
+#define AVF_AQC_ADD_VLAN_LOCAL			0x1
+#define AVF_AQC_ADD_PVLAN_TYPE_SHIFT		1
+#define AVF_AQC_ADD_PVLAN_TYPE_MASK	(0x3 << AVF_AQC_ADD_PVLAN_TYPE_SHIFT)
+#define AVF_AQC_ADD_PVLAN_TYPE_REGULAR		0x0
+#define AVF_AQC_ADD_PVLAN_TYPE_PRIMARY		0x2
+#define AVF_AQC_ADD_PVLAN_TYPE_SECONDARY	0x4
+#define AVF_AQC_VLAN_PTYPE_SHIFT		3
+#define AVF_AQC_VLAN_PTYPE_MASK	(0x3 << AVF_AQC_VLAN_PTYPE_SHIFT)
+#define AVF_AQC_VLAN_PTYPE_REGULAR_VSI		0x0
+#define AVF_AQC_VLAN_PTYPE_PROMISC_VSI		0x8
+#define AVF_AQC_VLAN_PTYPE_COMMUNITY_VSI	0x10
+#define AVF_AQC_VLAN_PTYPE_ISOLATED_VSI	0x18
+/* flags for remove VLAN */
+#define AVF_AQC_REMOVE_VLAN_ALL	0x1
+	u8	reserved;
+	u8	result;
+/* flags for add VLAN */
+#define AVF_AQC_ADD_VLAN_SUCCESS	0x0
+#define AVF_AQC_ADD_VLAN_FAIL_REQUEST	0xFE
+#define AVF_AQC_ADD_VLAN_FAIL_RESOURCE	0xFF
+/* flags for remove VLAN */
+#define AVF_AQC_REMOVE_VLAN_SUCCESS	0x0
+#define AVF_AQC_REMOVE_VLAN_FAIL	0xFF
+	u8	reserved1[3];
+};
+
+struct avf_aqc_add_remove_vlan_completion {
+	u8	reserved[4];
+	__le16	vlans_used;
+	__le16	vlans_free;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+/* Set VSI Promiscuous Modes (direct 0x0254) */
+struct avf_aqc_set_vsi_promiscuous_modes {
+	__le16	promiscuous_flags;
+	__le16	valid_flags;
+/* flags used for both fields above */
+#define AVF_AQC_SET_VSI_PROMISC_UNICAST	0x01
+#define AVF_AQC_SET_VSI_PROMISC_MULTICAST	0x02
+#define AVF_AQC_SET_VSI_PROMISC_BROADCAST	0x04
+#define AVF_AQC_SET_VSI_DEFAULT		0x08
+#define AVF_AQC_SET_VSI_PROMISC_VLAN		0x10
+#define AVF_AQC_SET_VSI_PROMISC_TX		0x8000
+	__le16	seid;
+#define AVF_AQC_VSI_PROM_CMD_SEID_MASK		0x3FF
+	__le16	vlan_tag;
+#define AVF_AQC_SET_VSI_VLAN_MASK		0x0FFF
+#define AVF_AQC_SET_VSI_VLAN_VALID		0x8000
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_vsi_promiscuous_modes);
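+
+/* Fill-in sketch (illustrative note), on the assumption that valid_flags
+ * selects which modes the command touches and promiscuous_flags gives their
+ * new values.  To enable unicast promiscuous mode only, a caller could do
+ * roughly the following ('cmd' and 'vsi_seid' are hypothetical, CPU_TO_LE16()
+ * assumed from avf_osdep.h):
+ *
+ *	cmd->promiscuous_flags = CPU_TO_LE16(AVF_AQC_SET_VSI_PROMISC_UNICAST);
+ *	cmd->valid_flags = CPU_TO_LE16(AVF_AQC_SET_VSI_PROMISC_UNICAST);
+ *	cmd->seid = CPU_TO_LE16(vsi_seid & AVF_AQC_VSI_PROM_CMD_SEID_MASK);
+ */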
+
+/* Add S/E-tag command (direct 0x0255)
+ * Uses generic avf_aqc_add_remove_tag_completion for completion
+ */
+struct avf_aqc_add_tag {
+	__le16	flags;
+#define AVF_AQC_ADD_TAG_FLAG_TO_QUEUE		0x0001
+	__le16	seid;
+#define AVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	tag;
+	__le16	queue_number;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_tag);
+
+struct avf_aqc_add_remove_tag_completion {
+	u8	reserved[12];
+	__le16	tags_used;
+	__le16	tags_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_tag_completion);
+
+/* Remove S/E-tag command (direct 0x0256)
+ * Uses generic avf_aqc_add_remove_tag_completion for completion
+ */
+struct avf_aqc_remove_tag {
+	__le16	seid;
+#define AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	tag;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_tag);
+
+/* Add multicast E-Tag (direct 0x0257)
+ * del multicast E-Tag (direct 0x0258) only uses pv_seid and etag fields
+ * and no external data
+ */
+struct avf_aqc_add_remove_mcast_etag {
+	__le16	pv_seid;
+	__le16	etag;
+	u8	num_unicast_etags;
+	u8	reserved[3];
+	__le32	addr_high;          /* address of array of 2-byte s-tags */
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_mcast_etag);
+
+struct avf_aqc_add_remove_mcast_etag_completion {
+	u8	reserved[4];
+	__le16	mcast_etags_used;
+	__le16	mcast_etags_free;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_mcast_etag_completion);
+
+/* Update S/E-Tag (direct 0x0259) */
+struct avf_aqc_update_tag {
+	__le16	seid;
+#define AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	old_tag;
+	__le16	new_tag;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_update_tag);
+
+struct avf_aqc_update_tag_completion {
+	u8	reserved[12];
+	__le16	tags_used;
+	__le16	tags_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_update_tag_completion);
+
+/* Add Control Packet filter (direct 0x025A)
+ * Remove Control Packet filter (direct 0x025B)
+ * uses the avf_aqc_add_oveb_cloud,
+ * and the generic direct completion structure
+ */
+struct avf_aqc_add_remove_control_packet_filter {
+	u8	mac[6];
+	__le16	etype;
+	__le16	flags;
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC	0x0001
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_DROP		0x0002
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_TO_QUEUE	0x0004
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_TX		0x0008
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_RX		0x0000
+	__le16	seid;
+#define AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_MASK	(0x3FF << \
+				AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT)
+	__le16	queue;
+	u8	reserved[2];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_control_packet_filter);
+
+struct avf_aqc_add_remove_control_packet_filter_completion {
+	__le16	mac_etype_used;
+	__le16	etype_used;
+	__le16	mac_etype_free;
+	__le16	etype_free;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_control_packet_filter_completion);
+
+/* Add Cloud filters (indirect 0x025C)
+ * Remove Cloud filters (indirect 0x025D)
+ * uses the avf_aqc_add_remove_cloud_filters,
+ * and the generic indirect completion structure
+ */
+struct avf_aqc_add_remove_cloud_filters {
+	u8	num_filters;
+	u8	reserved;
+	__le16	seid;
+#define AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT)
+	u8	big_buffer_flag;
+#define AVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER	1
+	u8	reserved2[3];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_cloud_filters);
+
+struct avf_aqc_add_remove_cloud_filters_element_data {
+	u8	outer_mac[6];
+	u8	inner_mac[6];
+	__le16	inner_vlan;
+	union {
+		struct {
+			u8 reserved[12];
+			u8 data[4];
+		} v4;
+		struct {
+			u8 data[16];
+		} v6;
+	} ipaddr;
+	__le16	flags;
+#define AVF_AQC_ADD_CLOUD_FILTER_SHIFT			0
+#define AVF_AQC_ADD_CLOUD_FILTER_MASK	(0x3F << \
+					AVF_AQC_ADD_CLOUD_FILTER_SHIFT)
+/* 0x0000 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_OIP			0x0001
+/* 0x0002 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN		0x0003
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID	0x0004
+/* 0x0005 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID		0x0006
+/* 0x0007 reserved */
+/* 0x0008 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_OMAC			0x0009
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC			0x000A
+#define AVF_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC	0x000B
+#define AVF_AQC_ADD_CLOUD_FILTER_IIP			0x000C
+/* 0x0010 to 0x0017 is for custom filters */
+
+#define AVF_AQC_ADD_CLOUD_FLAGS_TO_QUEUE		0x0080
+#define AVF_AQC_ADD_CLOUD_VNK_SHIFT			6
+#define AVF_AQC_ADD_CLOUD_VNK_MASK			0x00C0
+#define AVF_AQC_ADD_CLOUD_FLAGS_IPV4			0
+#define AVF_AQC_ADD_CLOUD_FLAGS_IPV6			0x0100
+
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_SHIFT		9
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_MASK		0x1E00
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN		0
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_NVGRE_OMAC		1
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_GENEVE		2
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_IP			3
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_RESERVED		4
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN_GPE		5
+
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_MAC	0x2000
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_INNER_MAC	0x4000
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_IP	0x8000
+
+	__le32	tenant_id;
+	u8	reserved[4];
+	__le16	queue_number;
+#define AVF_AQC_ADD_CLOUD_QUEUE_SHIFT		0
+#define AVF_AQC_ADD_CLOUD_QUEUE_MASK		(0x7FF << \
+						 AVF_AQC_ADD_CLOUD_QUEUE_SHIFT)
+	u8	reserved2[14];
+	/* response section */
+	u8	allocation_result;
+#define AVF_AQC_ADD_CLOUD_FILTER_SUCCESS	0x0
+#define AVF_AQC_ADD_CLOUD_FILTER_FAIL		0xFF
+	u8	response_reserved[7];
+};
+
+/* avf_aqc_add_rm_cloud_filt_elem_ext is used when the
+ * AVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER flag is set. Refer to
+ * DCR288.
+ */
+struct avf_aqc_add_rm_cloud_filt_elem_ext {
+	struct avf_aqc_add_remove_cloud_filters_element_data element;
+	u16     general_fields[32];
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0	0
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1	1
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2	2
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0	3
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1	4
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2	5
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0	6
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1	7
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2	8
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0	9
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1	10
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2	11
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD0	12
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD1	13
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD2	14
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0	15
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD1	16
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD2	17
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD3	18
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD4	19
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD5	20
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD6	21
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD7	22
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD0	23
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD1	24
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD2	25
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD3	26
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD4	27
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD5	28
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD6	29
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD7	30
+};
+
+struct avf_aqc_remove_cloud_filters_completion {
+	__le16 perfect_ovlan_used;
+	__le16 perfect_ovlan_free;
+	__le16 vlan_used;
+	__le16 vlan_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_cloud_filters_completion);
+
+/* Replace filter Command 0x025F
+ * uses the avf_aqc_replace_cloud_filters,
+ * and the generic indirect completion structure
+ */
+struct avf_filter_data {
+	u8 filter_type;
+	u8 input[3];
+};
+
+struct avf_aqc_replace_cloud_filters_cmd {
+	u8	valid_flags;
+#define AVF_AQC_REPLACE_L1_FILTER		0x0
+#define AVF_AQC_REPLACE_CLOUD_FILTER		0x1
+#define AVF_AQC_GET_CLOUD_FILTERS		0x2
+#define AVF_AQC_MIRROR_CLOUD_FILTER		0x4
+#define AVF_AQC_HIGH_PRIORITY_CLOUD_FILTER	0x8
+	u8	old_filter_type;
+	u8	new_filter_type;
+	u8	tr_bit;
+	u8	reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct avf_aqc_replace_cloud_filters_cmd_buf {
+	u8	data[32];
+/* Filter type INPUT codes */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_ENTRIES_MAX	3
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_VALIDATED	(1 << 7UL)
+
+/* Field Vector offsets */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_MAC_DA		0
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_ETH		6
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG		7
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_VLAN		8
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_OVLAN		9
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_IVLAN		10
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_TUNNLE_KEY		11
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IMAC		12
+/* big FLU */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IP_DA		14
+/* big FLU */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_OIP_DA		15
+
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_INNER_VLAN		37
+	struct avf_filter_data	filters[8];
+};
+
+/* Add Mirror Rule (indirect or direct 0x0260)
+ * Delete Mirror Rule (indirect or direct 0x0261)
+ * note: some rule types (4,5) do not use an external buffer.
+ *       take care to set the flags correctly.
+ */
+struct avf_aqc_add_delete_mirror_rule {
+	__le16 seid;
+	__le16 rule_type;
+#define AVF_AQC_MIRROR_RULE_TYPE_SHIFT		0
+#define AVF_AQC_MIRROR_RULE_TYPE_MASK		(0x7 << \
+						AVF_AQC_MIRROR_RULE_TYPE_SHIFT)
+#define AVF_AQC_MIRROR_RULE_TYPE_VPORT_INGRESS	1
+#define AVF_AQC_MIRROR_RULE_TYPE_VPORT_EGRESS	2
+#define AVF_AQC_MIRROR_RULE_TYPE_VLAN		3
+#define AVF_AQC_MIRROR_RULE_TYPE_ALL_INGRESS	4
+#define AVF_AQC_MIRROR_RULE_TYPE_ALL_EGRESS	5
+	__le16 num_entries;
+	__le16 destination;  /* VSI for add, rule id for delete */
+	__le32 addr_high;    /* address of array of 2-byte VSI or VLAN ids */
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_delete_mirror_rule);
+
+struct avf_aqc_add_delete_mirror_rule_completion {
+	u8	reserved[2];
+	__le16	rule_id;  /* only used on add */
+	__le16	mirror_rules_used;
+	__le16	mirror_rules_free;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_delete_mirror_rule_completion);
+
+/* Dynamic Device Personalization */
+struct avf_aqc_write_personalization_profile {
+	u8      flags;
+	u8      reserved[3];
+	__le32  profile_track_id;
+	__le32  addr_high;
+	__le32  addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_write_personalization_profile);
+
+struct avf_aqc_write_ddp_resp {
+	__le32 error_offset;
+	__le32 error_info;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct avf_aqc_get_applied_profiles {
+	u8      flags;
+#define AVF_AQC_GET_DDP_GET_CONF	0x1
+#define AVF_AQC_GET_DDP_GET_RDPU_CONF	0x2
+	u8      rsv[3];
+	__le32  reserved;
+	__le32  addr_high;
+	__le32  addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_applied_profiles);
+
+/* DCB 0x03xx */
+
+/* PFC Ignore (direct 0x0301)
+ *    the command and response use the same descriptor structure
+ */
+struct avf_aqc_pfc_ignore {
+	u8	tc_bitmap;
+	u8	command_flags; /* unused on response */
+#define AVF_AQC_PFC_IGNORE_SET		0x80
+#define AVF_AQC_PFC_IGNORE_CLEAR	0x0
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_pfc_ignore);
+
+/* DCB Update (direct 0x0302) uses the avf_aq_desc structure
+ * with no parameters
+ */
+
+/* TX scheduler 0x04xx */
+
+/* Almost all the indirect commands use
+ * this generic struct to pass the SEID in param0
+ */
+struct avf_aqc_tx_sched_ind {
+	__le16	vsi_seid;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_tx_sched_ind);
+
+/* Several commands respond with a set of queue set handles */
+struct avf_aqc_qs_handles_resp {
+	__le16 qs_handles[8];
+};
+
+/* Configure VSI BW limits (direct 0x0400) */
+struct avf_aqc_configure_vsi_bw_limit {
+	__le16	vsi_seid;
+	u8	reserved[2];
+	__le16	credit;
+	u8	reserved1[2];
+	u8	max_credit; /* 0-3, limit = 2^max */
+	u8	reserved2[7];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_configure_vsi_bw_limit);
+
+/* Configure VSI Bandwidth Limit per Traffic Type (indirect 0x0406)
+ *    responds with avf_aqc_qs_handles_resp
+ */
+struct avf_aqc_configure_vsi_ets_sla_bw_data {
+	u8	tc_valid_bits;
+	u8	reserved[15];
+	__le16	tc_bw_credits[8]; /* FW writes back QS handles here */
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved1[28];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_configure_vsi_ets_sla_bw_data);
+
+/* Configure VSI Bandwidth Allocation per Traffic Type (indirect 0x0407)
+ *    responds with avf_aqc_qs_handles_resp
+ */
+struct avf_aqc_configure_vsi_tc_bw_data {
+	u8	tc_valid_bits;
+	u8	reserved[3];
+	u8	tc_bw_credits[8];
+	u8	reserved1[4];
+	__le16	qs_handles[8];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_configure_vsi_tc_bw_data);
+
+/* Query vsi bw configuration (indirect 0x0408) */
+struct avf_aqc_query_vsi_bw_config_resp {
+	u8	tc_valid_bits;
+	u8	tc_suspended_bits;
+	u8	reserved[14];
+	__le16	qs_handles[8];
+	u8	reserved1[4];
+	__le16	port_bw_limit;
+	u8	reserved2[2];
+	u8	max_bw; /* 0-3, limit = 2^max */
+	u8	reserved3[23];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_query_vsi_bw_config_resp);
+
+/* Query VSI Bandwidth Allocation per Traffic Type (indirect 0x040A) */
+struct avf_aqc_query_vsi_ets_sla_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[3];
+	u8	share_credits[8];
+	__le16	credits[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_query_vsi_ets_sla_config_resp);
+
+/* Configure Switching Component Bandwidth Limit (direct 0x0410) */
+struct avf_aqc_configure_switching_comp_bw_limit {
+	__le16	seid;
+	u8	reserved[2];
+	__le16	credit;
+	u8	reserved1[2];
+	u8	max_bw; /* 0-3, limit = 2^max */
+	u8	reserved2[7];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_configure_switching_comp_bw_limit);
+
+/* Enable  Physical Port ETS (indirect 0x0413)
+ * Modify  Physical Port ETS (indirect 0x0414)
+ * Disable Physical Port ETS (indirect 0x0415)
+ */
+struct avf_aqc_configure_switching_comp_ets_data {
+	u8	reserved[4];
+	u8	tc_valid_bits;
+	u8	seepage;
+#define AVF_AQ_ETS_SEEPAGE_EN_MASK	0x1
+	u8	tc_strict_priority_flags;
+	u8	reserved1[17];
+	u8	tc_bw_share_credits[8];
+	u8	reserved2[96];
+};
+
+AVF_CHECK_STRUCT_LEN(0x80, avf_aqc_configure_switching_comp_ets_data);
+
+/* Configure Switching Component Bandwidth Limits per Tc (indirect 0x0416) */
+struct avf_aqc_configure_switching_comp_ets_bw_limit_data {
+	u8	tc_valid_bits;
+	u8	reserved[15];
+	__le16	tc_bw_credit[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved1[28];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40,
+		      avf_aqc_configure_switching_comp_ets_bw_limit_data);
+
+/* Configure Switching Component Bandwidth Allocation per Tc
+ * (indirect 0x0417)
+ */
+struct avf_aqc_configure_switching_comp_bw_config_data {
+	u8	tc_valid_bits;
+	u8	reserved[2];
+	u8	absolute_credits; /* bool */
+	u8	tc_bw_share_credits[8];
+	u8	reserved1[20];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_configure_switching_comp_bw_config_data);
+
+/* Query Switching Component Configuration (indirect 0x0418) */
+struct avf_aqc_query_switching_comp_ets_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[35];
+	__le16	port_bw_limit;
+	u8	reserved1[2];
+	u8	tc_bw_max; /* 0-3, limit = 2^max */
+	u8	reserved2[23];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_query_switching_comp_ets_config_resp);
+
+/* Query PhysicalPort ETS Configuration (indirect 0x0419) */
+struct avf_aqc_query_port_ets_config_resp {
+	u8	reserved[4];
+	u8	tc_valid_bits;
+	u8	reserved1;
+	u8	tc_strict_priority_bits;
+	u8	reserved2;
+	u8	tc_bw_share_credits[8];
+	__le16	tc_bw_limits[8];
+
+	/* 4 bits per tc 0-7, 4th bit reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved3[32];
+};
+
+AVF_CHECK_STRUCT_LEN(0x44, avf_aqc_query_port_ets_config_resp);
+
+/* Query Switching Component Bandwidth Allocation per Traffic Type
+ * (indirect 0x041A)
+ */
+struct avf_aqc_query_switching_comp_bw_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[2];
+	u8	absolute_credits_enable; /* bool */
+	u8	tc_bw_share_credits[8];
+	__le16	tc_bw_limits[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_query_switching_comp_bw_config_resp);
+
+/* Suspend/resume port TX traffic
+ * (direct 0x041B and 0x041C) uses the generic SEID struct
+ */
+
+/* Configure partition BW
+ * (indirect 0x041D)
+ */
+struct avf_aqc_configure_partition_bw_data {
+	__le16	pf_valid_bits;
+	u8	min_bw[16];      /* guaranteed bandwidth */
+	u8	max_bw[16];      /* bandwidth limit */
+};
+
+AVF_CHECK_STRUCT_LEN(0x22, avf_aqc_configure_partition_bw_data);
+
+/* Get and set the active HMC resource profile and status.
+ * (direct 0x0500) and (direct 0x0501)
+ */
+struct avf_aq_get_set_hmc_resource_profile {
+	u8	pm_profile;
+	u8	pe_vf_enabled;
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_get_set_hmc_resource_profile);
+
+enum avf_aq_hmc_profile {
+	/* AVF_HMC_PROFILE_NO_CHANGE	= 0, reserved */
+	AVF_HMC_PROFILE_DEFAULT	= 1,
+	AVF_HMC_PROFILE_FAVOR_VF	= 2,
+	AVF_HMC_PROFILE_EQUAL		= 3,
+};
+
+/* Get PHY Abilities (indirect 0x0600) uses the generic indirect struct */
+
+/* set in param0 for get phy abilities to report qualified modules */
+#define AVF_AQ_PHY_REPORT_QUALIFIED_MODULES	0x0001
+#define AVF_AQ_PHY_REPORT_INITIAL_VALUES	0x0002
+
+enum avf_aq_phy_type {
+	AVF_PHY_TYPE_SGMII			= 0x0,
+	AVF_PHY_TYPE_1000BASE_KX		= 0x1,
+	AVF_PHY_TYPE_10GBASE_KX4		= 0x2,
+	AVF_PHY_TYPE_10GBASE_KR		= 0x3,
+	AVF_PHY_TYPE_40GBASE_KR4		= 0x4,
+	AVF_PHY_TYPE_XAUI			= 0x5,
+	AVF_PHY_TYPE_XFI			= 0x6,
+	AVF_PHY_TYPE_SFI			= 0x7,
+	AVF_PHY_TYPE_XLAUI			= 0x8,
+	AVF_PHY_TYPE_XLPPI			= 0x9,
+	AVF_PHY_TYPE_40GBASE_CR4_CU		= 0xA,
+	AVF_PHY_TYPE_10GBASE_CR1_CU		= 0xB,
+	AVF_PHY_TYPE_10GBASE_AOC		= 0xC,
+	AVF_PHY_TYPE_40GBASE_AOC		= 0xD,
+	AVF_PHY_TYPE_UNRECOGNIZED		= 0xE,
+	AVF_PHY_TYPE_UNSUPPORTED		= 0xF,
+	AVF_PHY_TYPE_100BASE_TX		= 0x11,
+	AVF_PHY_TYPE_1000BASE_T		= 0x12,
+	AVF_PHY_TYPE_10GBASE_T			= 0x13,
+	AVF_PHY_TYPE_10GBASE_SR		= 0x14,
+	AVF_PHY_TYPE_10GBASE_LR		= 0x15,
+	AVF_PHY_TYPE_10GBASE_SFPP_CU		= 0x16,
+	AVF_PHY_TYPE_10GBASE_CR1		= 0x17,
+	AVF_PHY_TYPE_40GBASE_CR4		= 0x18,
+	AVF_PHY_TYPE_40GBASE_SR4		= 0x19,
+	AVF_PHY_TYPE_40GBASE_LR4		= 0x1A,
+	AVF_PHY_TYPE_1000BASE_SX		= 0x1B,
+	AVF_PHY_TYPE_1000BASE_LX		= 0x1C,
+	AVF_PHY_TYPE_1000BASE_T_OPTICAL	= 0x1D,
+	AVF_PHY_TYPE_20GBASE_KR2		= 0x1E,
+	AVF_PHY_TYPE_25GBASE_KR		= 0x1F,
+	AVF_PHY_TYPE_25GBASE_CR		= 0x20,
+	AVF_PHY_TYPE_25GBASE_SR		= 0x21,
+	AVF_PHY_TYPE_25GBASE_LR		= 0x22,
+	AVF_PHY_TYPE_25GBASE_AOC		= 0x23,
+	AVF_PHY_TYPE_25GBASE_ACC		= 0x24,
+	AVF_PHY_TYPE_MAX,
+	AVF_PHY_TYPE_EMPTY			= 0xFE,
+	AVF_PHY_TYPE_DEFAULT			= 0xFF,
+};
+
+#define AVF_LINK_SPEED_100MB_SHIFT	0x1
+#define AVF_LINK_SPEED_1000MB_SHIFT	0x2
+#define AVF_LINK_SPEED_10GB_SHIFT	0x3
+#define AVF_LINK_SPEED_40GB_SHIFT	0x4
+#define AVF_LINK_SPEED_20GB_SHIFT	0x5
+#define AVF_LINK_SPEED_25GB_SHIFT	0x6
+
+enum avf_aq_link_speed {
+	AVF_LINK_SPEED_UNKNOWN	= 0,
+	AVF_LINK_SPEED_100MB	= (1 << AVF_LINK_SPEED_100MB_SHIFT),
+	AVF_LINK_SPEED_1GB	= (1 << AVF_LINK_SPEED_1000MB_SHIFT),
+	AVF_LINK_SPEED_10GB	= (1 << AVF_LINK_SPEED_10GB_SHIFT),
+	AVF_LINK_SPEED_40GB	= (1 << AVF_LINK_SPEED_40GB_SHIFT),
+	AVF_LINK_SPEED_20GB	= (1 << AVF_LINK_SPEED_20GB_SHIFT),
+	AVF_LINK_SPEED_25GB	= (1 << AVF_LINK_SPEED_25GB_SHIFT),
+};
+
+struct avf_aqc_module_desc {
+	u8 oui[3];
+	u8 reserved1;
+	u8 part_number[16];
+	u8 revision[4];
+	u8 reserved2[8];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_module_desc);
+
+struct avf_aq_get_phy_abilities_resp {
+	__le32	phy_type;       /* bitmap using the above enum for offsets */
+	u8	link_speed;     /* bitmap using the above enum bit patterns */
+	u8	abilities;
+#define AVF_AQ_PHY_FLAG_PAUSE_TX	0x01
+#define AVF_AQ_PHY_FLAG_PAUSE_RX	0x02
+#define AVF_AQ_PHY_FLAG_LOW_POWER	0x04
+#define AVF_AQ_PHY_LINK_ENABLED	0x08
+#define AVF_AQ_PHY_AN_ENABLED		0x10
+#define AVF_AQ_PHY_FLAG_MODULE_QUAL	0x20
+#define AVF_AQ_PHY_FEC_ABILITY_KR	0x40
+#define AVF_AQ_PHY_FEC_ABILITY_RS	0x80
+	__le16	eee_capability;
+#define AVF_AQ_EEE_100BASE_TX		0x0002
+#define AVF_AQ_EEE_1000BASE_T		0x0004
+#define AVF_AQ_EEE_10GBASE_T		0x0008
+#define AVF_AQ_EEE_1000BASE_KX		0x0010
+#define AVF_AQ_EEE_10GBASE_KX4		0x0020
+#define AVF_AQ_EEE_10GBASE_KR		0x0040
+	__le32	eeer_val;
+	u8	d3_lpan;
+#define AVF_AQ_SET_PHY_D3_LPAN_ENA	0x01
+	u8	phy_type_ext;
+#define AVF_AQ_PHY_TYPE_EXT_25G_KR	0x01
+#define AVF_AQ_PHY_TYPE_EXT_25G_CR	0x02
+#define AVF_AQ_PHY_TYPE_EXT_25G_SR	0x04
+#define AVF_AQ_PHY_TYPE_EXT_25G_LR	0x08
+#define AVF_AQ_PHY_TYPE_EXT_25G_AOC	0x10
+#define AVF_AQ_PHY_TYPE_EXT_25G_ACC	0x20
+	u8	fec_cfg_curr_mod_ext_info;
+#define AVF_AQ_ENABLE_FEC_KR		0x01
+#define AVF_AQ_ENABLE_FEC_RS		0x02
+#define AVF_AQ_REQUEST_FEC_KR		0x04
+#define AVF_AQ_REQUEST_FEC_RS		0x08
+#define AVF_AQ_ENABLE_FEC_AUTO		0x10
+#define AVF_AQ_FEC
+#define AVF_AQ_MODULE_TYPE_EXT_MASK	0xE0
+#define AVF_AQ_MODULE_TYPE_EXT_SHIFT	5
+
+	u8	ext_comp_code;
+	u8	phy_id[4];
+	u8	module_type[3];
+	u8	qualified_module_count;
+#define AVF_AQ_PHY_MAX_QMS		16
+	struct avf_aqc_module_desc	qualified_module[AVF_AQ_PHY_MAX_QMS];
+};
+
+AVF_CHECK_STRUCT_LEN(0x218, avf_aq_get_phy_abilities_resp);
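+
+/* Usage sketch (illustrative note): phy_type is a bitmap whose bit positions
+ * are the enum avf_aq_phy_type values, so a caller could test for 10GBASE-SR
+ * support roughly as below ('abilities' is a hypothetical response pointer,
+ * LE32_TO_CPU() and BIT() assumed from avf_osdep.h):
+ *
+ *	if (LE32_TO_CPU(abilities->phy_type) &
+ *	    BIT(AVF_PHY_TYPE_10GBASE_SR))
+ *		(10GBASE-SR is supported)
+ */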
+
+/* Set PHY Config (direct 0x0601) */
+struct avf_aq_set_phy_config { /* same bits as above in all */
+	__le32	phy_type;
+	u8	link_speed;
+	u8	abilities;
+/* bits 0-2 use the values from get_phy_abilities_resp */
+#define AVF_AQ_PHY_ENABLE_LINK		0x08
+#define AVF_AQ_PHY_ENABLE_AN		0x10
+#define AVF_AQ_PHY_ENABLE_ATOMIC_LINK	0x20
+	__le16	eee_capability;
+	__le32	eeer;
+	u8	low_power_ctrl;
+	u8	phy_type_ext;
+	u8	fec_config;
+#define AVF_AQ_SET_FEC_ABILITY_KR	BIT(0)
+#define AVF_AQ_SET_FEC_ABILITY_RS	BIT(1)
+#define AVF_AQ_SET_FEC_REQUEST_KR	BIT(2)
+#define AVF_AQ_SET_FEC_REQUEST_RS	BIT(3)
+#define AVF_AQ_SET_FEC_AUTO		BIT(4)
+#define AVF_AQ_PHY_FEC_CONFIG_SHIFT	0x0
+#define AVF_AQ_PHY_FEC_CONFIG_MASK	(0x1F << AVF_AQ_PHY_FEC_CONFIG_SHIFT)
+	u8	reserved;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_set_phy_config);
+
+/* Set MAC Config command data structure (direct 0x0603) */
+struct avf_aq_set_mac_config {
+	__le16	max_frame_size;
+	u8	params;
+#define AVF_AQ_SET_MAC_CONFIG_CRC_EN		0x04
+#define AVF_AQ_SET_MAC_CONFIG_PACING_MASK	0x78
+#define AVF_AQ_SET_MAC_CONFIG_PACING_SHIFT	3
+#define AVF_AQ_SET_MAC_CONFIG_PACING_NONE	0x0
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1B_13TX	0xF
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_9TX	0x9
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_4TX	0x8
+#define AVF_AQ_SET_MAC_CONFIG_PACING_3DW_7TX	0x7
+#define AVF_AQ_SET_MAC_CONFIG_PACING_2DW_3TX	0x6
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_1TX	0x5
+#define AVF_AQ_SET_MAC_CONFIG_PACING_3DW_2TX	0x4
+#define AVF_AQ_SET_MAC_CONFIG_PACING_7DW_3TX	0x3
+#define AVF_AQ_SET_MAC_CONFIG_PACING_4DW_1TX	0x2
+#define AVF_AQ_SET_MAC_CONFIG_PACING_9DW_1TX	0x1
+	u8	tx_timer_priority; /* bitmap */
+	__le16	tx_timer_value;
+	__le16	fc_refresh_threshold;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_set_mac_config);
+
+/* Restart Auto-Negotiation (direct 0x605) */
+struct avf_aqc_set_link_restart_an {
+	u8	command;
+#define AVF_AQ_PHY_RESTART_AN	0x02
+#define AVF_AQ_PHY_LINK_ENABLE	0x04
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_link_restart_an);
+
+/* Get Link Status cmd & response data structure (direct 0x0607) */
+struct avf_aqc_get_link_status {
+	__le16	command_flags; /* only field set on command */
+#define AVF_AQ_LSE_MASK		0x3
+#define AVF_AQ_LSE_NOP			0x0
+#define AVF_AQ_LSE_DISABLE		0x2
+#define AVF_AQ_LSE_ENABLE		0x3
+/* only response uses this flag */
+#define AVF_AQ_LSE_IS_ENABLED		0x1
+	u8	phy_type;    /* avf_aq_phy_type   */
+	u8	link_speed;  /* avf_aq_link_speed */
+	u8	link_info;
+#define AVF_AQ_LINK_UP			0x01    /* obsolete */
+#define AVF_AQ_LINK_UP_FUNCTION	0x01
+#define AVF_AQ_LINK_FAULT		0x02
+#define AVF_AQ_LINK_FAULT_TX		0x04
+#define AVF_AQ_LINK_FAULT_RX		0x08
+#define AVF_AQ_LINK_FAULT_REMOTE	0x10
+#define AVF_AQ_LINK_UP_PORT		0x20
+#define AVF_AQ_MEDIA_AVAILABLE		0x40
+#define AVF_AQ_SIGNAL_DETECT		0x80
+	u8	an_info;
+#define AVF_AQ_AN_COMPLETED		0x01
+#define AVF_AQ_LP_AN_ABILITY		0x02
+#define AVF_AQ_PD_FAULT		0x04
+#define AVF_AQ_FEC_EN			0x08
+#define AVF_AQ_PHY_LOW_POWER		0x10
+#define AVF_AQ_LINK_PAUSE_TX		0x20
+#define AVF_AQ_LINK_PAUSE_RX		0x40
+#define AVF_AQ_QUALIFIED_MODULE	0x80
+	u8	ext_info;
+#define AVF_AQ_LINK_PHY_TEMP_ALARM	0x01
+#define AVF_AQ_LINK_XCESSIVE_ERRORS	0x02
+#define AVF_AQ_LINK_TX_SHIFT		0x02
+#define AVF_AQ_LINK_TX_MASK		(0x03 << AVF_AQ_LINK_TX_SHIFT)
+#define AVF_AQ_LINK_TX_ACTIVE		0x00
+#define AVF_AQ_LINK_TX_DRAINED		0x01
+#define AVF_AQ_LINK_TX_FLUSHED		0x03
+#define AVF_AQ_LINK_FORCED_40G		0x10
+/* 25G Error Codes */
+#define AVF_AQ_25G_NO_ERR		0X00
+#define AVF_AQ_25G_NOT_PRESENT		0X01
+#define AVF_AQ_25G_NVM_CRC_ERR		0X02
+#define AVF_AQ_25G_SBUS_UCODE_ERR	0X03
+#define AVF_AQ_25G_SERDES_UCODE_ERR	0X04
+#define AVF_AQ_25G_NIMB_UCODE_ERR	0X05
+	u8	loopback; /* use defines from avf_aqc_set_lb_mode */
+/* Since firmware API 1.7, the loopback field also carries power class info */
+#define AVF_AQ_LOOPBACK_MASK		0x07
+#define AVF_AQ_PWR_CLASS_SHIFT_LB	6
+#define AVF_AQ_PWR_CLASS_MASK_LB	(0x03 << AVF_AQ_PWR_CLASS_SHIFT_LB)
+	__le16	max_frame_size;
+	u8	config;
+#define AVF_AQ_CONFIG_FEC_KR_ENA	0x01
+#define AVF_AQ_CONFIG_FEC_RS_ENA	0x02
+#define AVF_AQ_CONFIG_CRC_ENA		0x04
+#define AVF_AQ_CONFIG_PACING_MASK	0x78
+	union {
+		struct {
+			u8	power_desc;
+#define AVF_AQ_LINK_POWER_CLASS_1	0x00
+#define AVF_AQ_LINK_POWER_CLASS_2	0x01
+#define AVF_AQ_LINK_POWER_CLASS_3	0x02
+#define AVF_AQ_LINK_POWER_CLASS_4	0x03
+#define AVF_AQ_PWR_CLASS_MASK		0x03
+			u8	reserved[4];
+		};
+		struct {
+			u8	link_type[4];
+			u8	link_type_ext;
+		};
+	};
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_link_status);
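+
+/* Decode sketch (illustrative note): with firmware API 1.7 or newer the
+ * loopback byte also carries the module power class, which a caller could
+ * extract roughly as below ('ls' is a hypothetical response pointer):
+ *
+ *	u8 lb = ls->loopback & AVF_AQ_LOOPBACK_MASK;
+ *	u8 pwr_class = (ls->loopback & AVF_AQ_PWR_CLASS_MASK_LB) >>
+ *			AVF_AQ_PWR_CLASS_SHIFT_LB;
+ */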
+
+/* Set event mask command (direct 0x613) */
+struct avf_aqc_set_phy_int_mask {
+	u8	reserved[8];
+	__le16	event_mask;
+#define AVF_AQ_EVENT_LINK_UPDOWN	0x0002
+#define AVF_AQ_EVENT_MEDIA_NA		0x0004
+#define AVF_AQ_EVENT_LINK_FAULT	0x0008
+#define AVF_AQ_EVENT_PHY_TEMP_ALARM	0x0010
+#define AVF_AQ_EVENT_EXCESSIVE_ERRORS	0x0020
+#define AVF_AQ_EVENT_SIGNAL_DETECT	0x0040
+#define AVF_AQ_EVENT_AN_COMPLETED	0x0080
+#define AVF_AQ_EVENT_MODULE_QUAL_FAIL	0x0100
+#define AVF_AQ_EVENT_PORT_TX_SUSPENDED	0x0200
+	u8	reserved1[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_phy_int_mask);
+
+/* Get Local AN advt register (direct 0x0614)
+ * Set Local AN advt register (direct 0x0615)
+ * Get Link Partner AN advt register (direct 0x0616)
+ */
+struct avf_aqc_an_advt_reg {
+	__le32	local_an_reg0;
+	__le16	local_an_reg1;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_an_advt_reg);
+
+/* Set Loopback mode (0x0618) */
+struct avf_aqc_set_lb_mode {
+	__le16	lb_mode;
+#define AVF_AQ_LB_PHY_LOCAL	0x01
+#define AVF_AQ_LB_PHY_REMOTE	0x02
+#define AVF_AQ_LB_MAC_LOCAL	0x04
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_lb_mode);
+
+/* Set PHY Debug command (0x0622) */
+struct avf_aqc_set_phy_debug {
+	u8	command_flags;
+#define AVF_AQ_PHY_DEBUG_RESET_INTERNAL	0x02
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT	2
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_MASK	(0x03 << \
+					AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT)
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_NONE	0x00
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_HARD	0x01
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SOFT	0x02
+/* Disable link manageability on a single port */
+#define AVF_AQ_PHY_DEBUG_DISABLE_LINK_FW	0x10
+/* Disabling link manageability on all ports requires both bits 4 and 5 */
+#define AVF_AQ_PHY_DEBUG_DISABLE_ALL_LINK_FW	0x20
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_phy_debug);
+
+enum avf_aq_phy_reg_type {
+	AVF_AQC_PHY_REG_INTERNAL	= 0x1,
+	AVF_AQC_PHY_REG_EXERNAL_BASET	= 0x2,
+	AVF_AQC_PHY_REG_EXERNAL_MODULE	= 0x3
+};
+
+/* Run PHY Activity (0x0626) */
+struct avf_aqc_run_phy_activity {
+	__le16  activity_id;
+	u8      flags;
+	u8      reserved1;
+	__le32  control;
+	__le32  data;
+	u8      reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_run_phy_activity);
+
+/* Set PHY Register command (0x0628) */
+/* Get PHY Register command (0x0629) */
+struct avf_aqc_phy_register_access {
+	u8	phy_interface;
+#define AVF_AQ_PHY_REG_ACCESS_INTERNAL	0
+#define AVF_AQ_PHY_REG_ACCESS_EXTERNAL	1
+#define AVF_AQ_PHY_REG_ACCESS_EXTERNAL_MODULE	2
+	u8	dev_addres;
+	u8	reserved1[2];
+	u32	reg_address;
+	u32	reg_value;
+	u8	reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_phy_register_access);
+
+/* NVM Read command (indirect 0x0701)
+ * NVM Erase commands (direct 0x0702)
+ * NVM Update commands (indirect 0x0703)
+ */
+struct avf_aqc_nvm_update {
+	u8	command_flags;
+#define AVF_AQ_NVM_LAST_CMD	0x01
+#define AVF_AQ_NVM_FLASH_ONLY	0x80
+	u8	module_pointer;
+	__le16	length;
+	__le32	offset;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_update);
+
+/* NVM Config Read (indirect 0x0704) */
+struct avf_aqc_nvm_config_read {
+	__le16	cmd_flags;
+#define AVF_AQ_ANVM_SINGLE_OR_MULTIPLE_FEATURES_MASK	1
+#define AVF_AQ_ANVM_READ_SINGLE_FEATURE		0
+#define AVF_AQ_ANVM_READ_MULTIPLE_FEATURES		1
+	__le16	element_count;
+	__le16	element_id;	/* Feature/field ID */
+	__le16	element_id_msw;	/* MSWord of field ID */
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_config_read);
+
+/* NVM Config Write (indirect 0x0705) */
+struct avf_aqc_nvm_config_write {
+	__le16	cmd_flags;
+	__le16	element_count;
+	u8	reserved[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_config_write);
+
+/* Used for 0x0704 as well as for 0x0705 commands */
+#define AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT		1
+#define AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_MASK \
+				(1 << AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+#define AVF_AQ_ANVM_FEATURE		0
+#define AVF_AQ_ANVM_IMMEDIATE_FIELD	(1 << AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+struct avf_aqc_nvm_config_data_feature {
+	__le16 feature_id;
+#define AVF_AQ_ANVM_FEATURE_OPTION_OEM_ONLY		0x01
+#define AVF_AQ_ANVM_FEATURE_OPTION_DWORD_MAP		0x08
+#define AVF_AQ_ANVM_FEATURE_OPTION_POR_CSR		0x10
+	__le16 feature_options;
+	__le16 feature_selection;
+};
+
+AVF_CHECK_STRUCT_LEN(0x6, avf_aqc_nvm_config_data_feature);
+
+struct avf_aqc_nvm_config_data_immediate_field {
+	__le32 field_id;
+	__le32 field_value;
+	__le16 field_options;
+	__le16 reserved;
+};
+
+AVF_CHECK_STRUCT_LEN(0xc, avf_aqc_nvm_config_data_immediate_field);
+
+/* OEM Post Update (indirect 0x0720)
+ * no command data struct used
+ */
+struct avf_aqc_nvm_oem_post_update {
+#define AVF_AQ_NVM_OEM_POST_UPDATE_EXTERNAL_DATA	0x01
+	u8 sel_data;
+	u8 reserved[7];
+};
+
+AVF_CHECK_STRUCT_LEN(0x8, avf_aqc_nvm_oem_post_update);
+
+struct avf_aqc_nvm_oem_post_update_buffer {
+	u8 str_len;
+	u8 dev_addr;
+	__le16 eeprom_addr;
+	u8 data[36];
+};
+
+AVF_CHECK_STRUCT_LEN(0x28, avf_aqc_nvm_oem_post_update_buffer);
+
+/* Thermal Sensor (indirect 0x0721)
+ *     read or set thermal sensor configs and values
+ *     takes a sensor and command specific data buffer, not detailed here
+ */
+struct avf_aqc_thermal_sensor {
+	u8 sensor_action;
+#define AVF_AQ_THERMAL_SENSOR_READ_CONFIG	0
+#define AVF_AQ_THERMAL_SENSOR_SET_CONFIG	1
+#define AVF_AQ_THERMAL_SENSOR_READ_TEMP	2
+	u8 reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_thermal_sensor);
+
+/* Send to PF command (indirect 0x0801) id is only used by PF
+ * Send to VF command (indirect 0x0802) id is only used by PF
+ * Send to Peer PF command (indirect 0x0803)
+ */
+struct avf_aqc_pf_vf_message {
+	__le32	id;
+	u8	reserved[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_pf_vf_message);
+
+/* Alternate structure */
+
+/* Direct write (direct 0x0900)
+ * Direct read (direct 0x0902)
+ */
+struct avf_aqc_alternate_write {
+	__le32 address0;
+	__le32 data0;
+	__le32 address1;
+	__le32 data1;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_write);
+
+/* Indirect write (indirect 0x0901)
+ * Indirect read (indirect 0x0903)
+ */
+
+struct avf_aqc_alternate_ind_write {
+	__le32 address;
+	__le32 length;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_ind_write);
+
+/* Done alternate write (direct 0x0904)
+ * uses avf_aq_desc
+ */
+struct avf_aqc_alternate_write_done {
+	__le16	cmd_flags;
+#define AVF_AQ_ALTERNATE_MODE_BIOS_MASK	1
+#define AVF_AQ_ALTERNATE_MODE_BIOS_LEGACY	0
+#define AVF_AQ_ALTERNATE_MODE_BIOS_UEFI	1
+#define AVF_AQ_ALTERNATE_RESET_NEEDED		2
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_write_done);
+
+/* Set OEM mode (direct 0x0905) */
+struct avf_aqc_alternate_set_mode {
+	__le32	mode;
+#define AVF_AQ_ALTERNATE_MODE_NONE	0
+#define AVF_AQ_ALTERNATE_MODE_OEM	1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_set_mode);
+
+/* Clear port Alternate RAM (direct 0x0906) uses avf_aq_desc */
+
+/* async events 0x10xx */
+
+/* Lan Queue Overflow Event (direct, 0x1001) */
+struct avf_aqc_lan_overflow {
+	__le32	prtdcb_rupto;
+	__le32	otx_ctl;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lan_overflow);
+
+/* Get LLDP MIB (indirect 0x0A00) */
+struct avf_aqc_lldp_get_mib {
+	u8	type;
+	u8	reserved1;
+#define AVF_AQ_LLDP_MIB_TYPE_MASK		0x3
+#define AVF_AQ_LLDP_MIB_LOCAL			0x0
+#define AVF_AQ_LLDP_MIB_REMOTE			0x1
+#define AVF_AQ_LLDP_MIB_LOCAL_AND_REMOTE	0x2
+#define AVF_AQ_LLDP_BRIDGE_TYPE_MASK		0xC
+#define AVF_AQ_LLDP_BRIDGE_TYPE_SHIFT		0x2
+#define AVF_AQ_LLDP_BRIDGE_TYPE_NEAREST_BRIDGE	0x0
+#define AVF_AQ_LLDP_BRIDGE_TYPE_NON_TPMR	0x1
+#define AVF_AQ_LLDP_TX_SHIFT			0x4
+#define AVF_AQ_LLDP_TX_MASK			(0x03 << AVF_AQ_LLDP_TX_SHIFT)
+/* TX pause flags use AVF_AQ_LINK_TX_* above */
+	__le16	local_len;
+	__le16	remote_len;
+	u8	reserved2[2];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_get_mib);
+
+/* Configure LLDP MIB Change Event (direct 0x0A01)
+ * also used for the event (with type in the command field)
+ */
+struct avf_aqc_lldp_update_mib {
+	u8	command;
+#define AVF_AQ_LLDP_MIB_UPDATE_ENABLE	0x0
+#define AVF_AQ_LLDP_MIB_UPDATE_DISABLE	0x1
+	u8	reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_update_mib);
+
+/* Add LLDP TLV (indirect 0x0A02)
+ * Delete LLDP TLV (indirect 0x0A04)
+ */
+struct avf_aqc_lldp_add_tlv {
+	u8	type; /* only nearest bridge and non-TPMR from 0x0A00 */
+	u8	reserved1[1];
+	__le16	len;
+	u8	reserved2[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_add_tlv);
+
+/* Update LLDP TLV (indirect 0x0A03) */
+struct avf_aqc_lldp_update_tlv {
+	u8	type; /* only nearest bridge and non-TPMR from 0x0A00 */
+	u8	reserved;
+	__le16	old_len;
+	__le16	new_offset;
+	__le16	new_len;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_update_tlv);
+
+/* Stop LLDP (direct 0x0A05) */
+struct avf_aqc_lldp_stop {
+	u8	command;
+#define AVF_AQ_LLDP_AGENT_STOP		0x0
+#define AVF_AQ_LLDP_AGENT_SHUTDOWN	0x1
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_stop);
+
+/* Start LLDP (direct 0x0A06) */
+
+struct avf_aqc_lldp_start {
+	u8	command;
+#define AVF_AQ_LLDP_AGENT_START	0x1
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_start);
+
+/* Get CEE DCBX Oper Config (0x0A07)
+ * uses the generic descriptor struct
+ * returns below as indirect response
+ */
+
+#define AVF_AQC_CEE_APP_FCOE_SHIFT	0x0
+#define AVF_AQC_CEE_APP_FCOE_MASK	(0x7 << AVF_AQC_CEE_APP_FCOE_SHIFT)
+#define AVF_AQC_CEE_APP_ISCSI_SHIFT	0x3
+#define AVF_AQC_CEE_APP_ISCSI_MASK	(0x7 << AVF_AQC_CEE_APP_ISCSI_SHIFT)
+#define AVF_AQC_CEE_APP_FIP_SHIFT	0x8
+#define AVF_AQC_CEE_APP_FIP_MASK	(0x7 << AVF_AQC_CEE_APP_FIP_SHIFT)
+
+#define AVF_AQC_CEE_PG_STATUS_SHIFT	0x0
+#define AVF_AQC_CEE_PG_STATUS_MASK	(0x7 << AVF_AQC_CEE_PG_STATUS_SHIFT)
+#define AVF_AQC_CEE_PFC_STATUS_SHIFT	0x3
+#define AVF_AQC_CEE_PFC_STATUS_MASK	(0x7 << AVF_AQC_CEE_PFC_STATUS_SHIFT)
+#define AVF_AQC_CEE_APP_STATUS_SHIFT	0x8
+#define AVF_AQC_CEE_APP_STATUS_MASK	(0x7 << AVF_AQC_CEE_APP_STATUS_SHIFT)
+#define AVF_AQC_CEE_FCOE_STATUS_SHIFT	0x8
+#define AVF_AQC_CEE_FCOE_STATUS_MASK	(0x7 << AVF_AQC_CEE_FCOE_STATUS_SHIFT)
+#define AVF_AQC_CEE_ISCSI_STATUS_SHIFT	0xB
+#define AVF_AQC_CEE_ISCSI_STATUS_MASK	(0x7 << AVF_AQC_CEE_ISCSI_STATUS_SHIFT)
+#define AVF_AQC_CEE_FIP_STATUS_SHIFT	0x10
+#define AVF_AQC_CEE_FIP_STATUS_MASK	(0x7 << AVF_AQC_CEE_FIP_STATUS_SHIFT)
+
+/* struct avf_aqc_get_cee_dcb_cfg_v1_resp was originally defined with
+ * word boundary layout issues, which the Linux compilers silently deal
+ * with by adding padding, making the actual struct larger than designed.
+ * However, the FW compiler for the NIC is less lenient and complains
+ * about the struct.  Hence, the struct defined here has an extra byte in
+ * fields reserved3 and reserved4 to directly acknowledge that padding,
+ * and the new length is used in the length check macro.
+ */
+struct avf_aqc_get_cee_dcb_cfg_v1_resp {
+	u8	reserved1;
+	u8	oper_num_tc;
+	u8	oper_prio_tc[4];
+	u8	reserved2;
+	u8	oper_tc_bw[8];
+	u8	oper_pfc_en;
+	u8	reserved3[2];
+	__le16	oper_app_prio;
+	u8	reserved4[2];
+	__le16	tlv_status;
+};
+
+AVF_CHECK_STRUCT_LEN(0x18, avf_aqc_get_cee_dcb_cfg_v1_resp);
+
+struct avf_aqc_get_cee_dcb_cfg_resp {
+	u8	oper_num_tc;
+	u8	oper_prio_tc[4];
+	u8	oper_tc_bw[8];
+	u8	oper_pfc_en;
+	__le16	oper_app_prio;
+	__le32	tlv_status;
+	u8	reserved[12];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_get_cee_dcb_cfg_resp);
+
+/*	Set Local LLDP MIB (indirect 0x0A08)
+ *	Used to replace the local MIB of a given LLDP agent. e.g. DCBx
+ */
+struct avf_aqc_lldp_set_local_mib {
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT	0
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_MASK	(1 << \
+					SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_LOCAL_MIB	0x0
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT	(1)
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_MASK	(1 << \
+				SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS		0x1
+	u8	type;
+	u8	reserved0;
+	__le16	length;
+	u8	reserved1[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_set_local_mib);
+
+struct avf_aqc_lldp_set_local_mib_resp {
+#define SET_LOCAL_MIB_RESP_EVENT_TRIGGERED_MASK      0x01
+	u8  status;
+	u8  reserved[15];
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_lldp_set_local_mib_resp);
+
+/*	Stop/Start LLDP Agent (direct 0x0A09)
+ *	Used for stopping/starting specific LLDP agent. e.g. DCBx
+ */
+struct avf_aqc_lldp_stop_start_specific_agent {
+#define AVF_AQC_START_SPECIFIC_AGENT_SHIFT	0
+#define AVF_AQC_START_SPECIFIC_AGENT_MASK \
+				(1 << AVF_AQC_START_SPECIFIC_AGENT_SHIFT)
+	u8	command;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_stop_start_specific_agent);
+
+/* Add Udp Tunnel command and completion (direct 0x0B00) */
+struct avf_aqc_add_udp_tunnel {
+	__le16	udp_port;
+	u8	reserved0[3];
+	u8	protocol_type;
+#define AVF_AQC_TUNNEL_TYPE_VXLAN	0x00
+#define AVF_AQC_TUNNEL_TYPE_NGE	0x01
+#define AVF_AQC_TUNNEL_TYPE_TEREDO	0x10
+#define AVF_AQC_TUNNEL_TYPE_VXLAN_GPE	0x11
+	u8	reserved1[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_udp_tunnel);
+
+struct avf_aqc_add_udp_tunnel_completion {
+	__le16	udp_port;
+	u8	filter_entry_index;
+	u8	multiple_pfs;
+#define AVF_AQC_SINGLE_PF		0x0
+#define AVF_AQC_MULTIPLE_PFS		0x1
+	u8	total_filters;
+	u8	reserved[11];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_udp_tunnel_completion);
+
+/* remove UDP Tunnel command (0x0B01) */
+struct avf_aqc_remove_udp_tunnel {
+	u8	reserved[2];
+	u8	index; /* 0 to 15 */
+	u8	reserved2[13];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_udp_tunnel);
+
+struct avf_aqc_del_udp_tunnel_completion {
+	__le16	udp_port;
+	u8	index; /* 0 to 15 */
+	u8	multiple_pfs;
+	u8	total_filters_used;
+	u8	reserved1[11];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_del_udp_tunnel_completion);
+
+struct avf_aqc_get_set_rss_key {
+#define AVF_AQC_SET_RSS_KEY_VSI_VALID		(0x1 << 15)
+#define AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT	0
+#define AVF_AQC_SET_RSS_KEY_VSI_ID_MASK	(0x3FF << \
+					AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT)
+	__le16	vsi_id;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_set_rss_key);
+
+struct avf_aqc_get_set_rss_key_data {
+	u8 standard_rss_key[0x28];
+	u8 extended_hash_key[0xc];
+};
+
+AVF_CHECK_STRUCT_LEN(0x34, avf_aqc_get_set_rss_key_data);
+
+struct  avf_aqc_get_set_rss_lut {
+#define AVF_AQC_SET_RSS_LUT_VSI_VALID		(0x1 << 15)
+#define AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT	0
+#define AVF_AQC_SET_RSS_LUT_VSI_ID_MASK	(0x3FF << \
+					AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT)
+	__le16	vsi_id;
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT	0
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK	(0x1 << \
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
+
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI	0
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF	1
+	__le16	flags;
+	u8	reserved[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_set_rss_lut);
+
+/* tunnel key structure 0x0B10 */
+
+struct avf_aqc_tunnel_key_structure {
+	u8	key1_off;
+	u8	key2_off;
+	u8	key1_len;  /* 0 to 15 */
+	u8	key2_len;  /* 0 to 15 */
+	u8	flags;
+#define AVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDE	0x01
+/* response flags */
+#define AVF_AQC_TUNNEL_KEY_STRUCT_SUCCESS	0x01
+#define AVF_AQC_TUNNEL_KEY_STRUCT_MODIFIED	0x02
+#define AVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDDEN	0x03
+	u8	network_key_index;
+#define AVF_AQC_NETWORK_KEY_INDEX_VXLAN		0x0
+#define AVF_AQC_NETWORK_KEY_INDEX_NGE			0x1
+#define AVF_AQC_NETWORK_KEY_INDEX_FLEX_MAC_IN_UDP	0x2
+#define AVF_AQC_NETWORK_KEY_INDEX_GRE			0x3
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_tunnel_key_structure);
+
+/* OEM mode commands (direct 0xFE0x) */
+struct avf_aqc_oem_param_change {
+	__le32	param_type;
+#define AVF_AQ_OEM_PARAM_TYPE_PF_CTL	0
+#define AVF_AQ_OEM_PARAM_TYPE_BW_CTL	1
+#define AVF_AQ_OEM_PARAM_MAC		2
+	__le32	param_value1;
+	__le16	param_value2;
+	u8	reserved[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_oem_param_change);
+
+struct avf_aqc_oem_state_change {
+	__le32	state;
+#define AVF_AQ_OEM_STATE_LINK_DOWN	0x0
+#define AVF_AQ_OEM_STATE_LINK_UP	0x1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_oem_state_change);
+
+/* Initialize OCSD (0xFE02, direct) */
+struct avf_aqc_opc_oem_ocsd_initialize {
+	u8 type_status;
+	u8 reserved1[3];
+	__le32 ocsd_memory_block_addr_high;
+	__le32 ocsd_memory_block_addr_low;
+	__le32 requested_update_interval;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_opc_oem_ocsd_initialize);
+
+/* Initialize OCBB  (0xFE03, direct) */
+struct avf_aqc_opc_oem_ocbb_initialize {
+	u8 type_status;
+	u8 reserved1[3];
+	__le32 ocbb_memory_block_addr_high;
+	__le32 ocbb_memory_block_addr_low;
+	u8 reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_opc_oem_ocbb_initialize);
+
+/* debug commands */
+
+/* get device id (0xFF00) uses the generic structure */
+
+/* set test mode (0xFF01, internal) */
+
+struct avf_acq_set_test_mode {
+	u8	mode;
+#define AVF_AQ_TEST_PARTIAL	0
+#define AVF_AQ_TEST_FULL	1
+#define AVF_AQ_TEST_NVM	2
+	u8	reserved[3];
+	u8	command;
+#define AVF_AQ_TEST_OPEN	0
+#define AVF_AQ_TEST_CLOSE	1
+#define AVF_AQ_TEST_INC	2
+	u8	reserved2[3];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_acq_set_test_mode);
+
+/* Debug Read Register command (0xFF03)
+ * Debug Write Register command (0xFF04)
+ */
+struct avf_aqc_debug_reg_read_write {
+	__le32 reserved;
+	__le32 address;
+	__le32 value_high;
+	__le32 value_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_reg_read_write);
+
+/* Scatter/gather Reg Read  (indirect 0xFF05)
+ * Scatter/gather Reg Write (indirect 0xFF06)
+ */
+
+/* avf_aq_desc is used for the command */
+struct avf_aqc_debug_reg_sg_element_data {
+	__le32 address;
+	__le32 value;
+};
+
+/* Debug Modify register (direct 0xFF07) */
+struct avf_aqc_debug_modify_reg {
+	__le32 address;
+	__le32 value;
+	__le32 clear_mask;
+	__le32 set_mask;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_modify_reg);
+
+/* dump internal data (0xFF08, indirect) */
+
+#define AVF_AQ_CLUSTER_ID_AUX		0
+#define AVF_AQ_CLUSTER_ID_SWITCH_FLU	1
+#define AVF_AQ_CLUSTER_ID_TXSCHED	2
+#define AVF_AQ_CLUSTER_ID_HMC		3
+#define AVF_AQ_CLUSTER_ID_MAC0		4
+#define AVF_AQ_CLUSTER_ID_MAC1		5
+#define AVF_AQ_CLUSTER_ID_MAC2		6
+#define AVF_AQ_CLUSTER_ID_MAC3		7
+#define AVF_AQ_CLUSTER_ID_DCB		8
+#define AVF_AQ_CLUSTER_ID_EMP_MEM	9
+#define AVF_AQ_CLUSTER_ID_PKT_BUF	10
+#define AVF_AQ_CLUSTER_ID_ALTRAM	11
+
+struct avf_aqc_debug_dump_internals {
+	u8	cluster_id;
+	u8	table_id;
+	__le16	data_size;
+	__le32	idx;
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_dump_internals);
+
+struct avf_aqc_debug_modify_internals {
+	u8	cluster_id;
+	u8	cluster_specific_params[7];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_modify_internals);
+
+#endif /* _AVF_ADMINQ_CMD_H_ */
diff --git a/drivers/net/avf/base/avf_alloc.h b/drivers/net/avf/base/avf_alloc.h
new file mode 100644
index 0000000..21e29bd
--- /dev/null
+++ b/drivers/net/avf/base/avf_alloc.h
@@ -0,0 +1,65 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ALLOC_H_
+#define _AVF_ALLOC_H_
+
+struct avf_hw;
+
+/* Memory allocation types */
+enum avf_memory_type {
+	avf_mem_arq_buf = 0,		/* ARQ indirect command buffer */
+	avf_mem_asq_buf = 1,
+	avf_mem_atq_buf = 2,		/* ATQ indirect command buffer */
+	avf_mem_arq_ring = 3,		/* ARQ descriptor ring */
+	avf_mem_atq_ring = 4,		/* ATQ descriptor ring */
+	avf_mem_pd = 5,		/* Page Descriptor */
+	avf_mem_bp = 6,		/* Backing Page - 4KB */
+	avf_mem_bp_jumbo = 7,		/* Backing Page - > 4KB */
+	avf_mem_reserved
+};
+
+/* prototype for functions used for dynamic memory allocation */
+enum avf_status_code avf_allocate_dma_mem(struct avf_hw *hw,
+					    struct avf_dma_mem *mem,
+					    enum avf_memory_type type,
+					    u64 size, u32 alignment);
+enum avf_status_code avf_free_dma_mem(struct avf_hw *hw,
+					struct avf_dma_mem *mem);
+enum avf_status_code avf_allocate_virt_mem(struct avf_hw *hw,
+					     struct avf_virt_mem *mem,
+					     u32 size);
+enum avf_status_code avf_free_virt_mem(struct avf_hw *hw,
+					 struct avf_virt_mem *mem);
+
+#endif /* _AVF_ALLOC_H_ */
diff --git a/drivers/net/avf/base/avf_common.c b/drivers/net/avf/base/avf_common.c
new file mode 100644
index 0000000..d67297b
--- /dev/null
+++ b/drivers/net/avf/base/avf_common.c
@@ -0,0 +1,1843 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "avf_type.h"
+#include "avf_adminq.h"
+#include "avf_prototype.h"
+#include "virtchnl.h"
+
+
+/**
+ * avf_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the mac type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ **/
+enum avf_status_code avf_set_mac_type(struct avf_hw *hw)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+
+	DEBUGFUNC("avf_set_mac_type\n");
+
+	if (hw->vendor_id == AVF_INTEL_VENDOR_ID) {
+		switch (hw->device_id) {
+	/* TODO: remove undefined device IDs for now; need to think about
+	 * how to remove them from the shared code
+	 */
+		case AVF_DEV_ID_ADAPTIVE_VF:
+			hw->mac.type = AVF_MAC_VF;
+			break;
+		default:
+			hw->mac.type = AVF_MAC_GENERIC;
+			break;
+		}
+	} else {
+		status = AVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	DEBUGOUT2("avf_set_mac_type found mac: %d, returns: %d\n",
+		  hw->mac.type, status);
+	return status;
+}
+
+/**
+ * avf_aq_str - convert AQ err code to a string
+ * @hw: pointer to the HW structure
+ * @aq_err: the AQ error code to convert
+ **/
+const char *avf_aq_str(struct avf_hw *hw, enum avf_admin_queue_err aq_err)
+{
+	switch (aq_err) {
+	case AVF_AQ_RC_OK:
+		return "OK";
+	case AVF_AQ_RC_EPERM:
+		return "AVF_AQ_RC_EPERM";
+	case AVF_AQ_RC_ENOENT:
+		return "AVF_AQ_RC_ENOENT";
+	case AVF_AQ_RC_ESRCH:
+		return "AVF_AQ_RC_ESRCH";
+	case AVF_AQ_RC_EINTR:
+		return "AVF_AQ_RC_EINTR";
+	case AVF_AQ_RC_EIO:
+		return "AVF_AQ_RC_EIO";
+	case AVF_AQ_RC_ENXIO:
+		return "AVF_AQ_RC_ENXIO";
+	case AVF_AQ_RC_E2BIG:
+		return "AVF_AQ_RC_E2BIG";
+	case AVF_AQ_RC_EAGAIN:
+		return "AVF_AQ_RC_EAGAIN";
+	case AVF_AQ_RC_ENOMEM:
+		return "AVF_AQ_RC_ENOMEM";
+	case AVF_AQ_RC_EACCES:
+		return "AVF_AQ_RC_EACCES";
+	case AVF_AQ_RC_EFAULT:
+		return "AVF_AQ_RC_EFAULT";
+	case AVF_AQ_RC_EBUSY:
+		return "AVF_AQ_RC_EBUSY";
+	case AVF_AQ_RC_EEXIST:
+		return "AVF_AQ_RC_EEXIST";
+	case AVF_AQ_RC_EINVAL:
+		return "AVF_AQ_RC_EINVAL";
+	case AVF_AQ_RC_ENOTTY:
+		return "AVF_AQ_RC_ENOTTY";
+	case AVF_AQ_RC_ENOSPC:
+		return "AVF_AQ_RC_ENOSPC";
+	case AVF_AQ_RC_ENOSYS:
+		return "AVF_AQ_RC_ENOSYS";
+	case AVF_AQ_RC_ERANGE:
+		return "AVF_AQ_RC_ERANGE";
+	case AVF_AQ_RC_EFLUSHED:
+		return "AVF_AQ_RC_EFLUSHED";
+	case AVF_AQ_RC_BAD_ADDR:
+		return "AVF_AQ_RC_BAD_ADDR";
+	case AVF_AQ_RC_EMODE:
+		return "AVF_AQ_RC_EMODE";
+	case AVF_AQ_RC_EFBIG:
+		return "AVF_AQ_RC_EFBIG";
+	}
+
+	snprintf(hw->err_str, sizeof(hw->err_str), "%d", aq_err);
+	return hw->err_str;
+}
+
+/**
+ * avf_stat_str - convert status err code to a string
+ * @hw: pointer to the HW structure
+ * @stat_err: the status error code to convert
+ **/
+const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err)
+{
+	switch (stat_err) {
+	case AVF_SUCCESS:
+		return "OK";
+	case AVF_ERR_NVM:
+		return "AVF_ERR_NVM";
+	case AVF_ERR_NVM_CHECKSUM:
+		return "AVF_ERR_NVM_CHECKSUM";
+	case AVF_ERR_PHY:
+		return "AVF_ERR_PHY";
+	case AVF_ERR_CONFIG:
+		return "AVF_ERR_CONFIG";
+	case AVF_ERR_PARAM:
+		return "AVF_ERR_PARAM";
+	case AVF_ERR_MAC_TYPE:
+		return "AVF_ERR_MAC_TYPE";
+	case AVF_ERR_UNKNOWN_PHY:
+		return "AVF_ERR_UNKNOWN_PHY";
+	case AVF_ERR_LINK_SETUP:
+		return "AVF_ERR_LINK_SETUP";
+	case AVF_ERR_ADAPTER_STOPPED:
+		return "AVF_ERR_ADAPTER_STOPPED";
+	case AVF_ERR_INVALID_MAC_ADDR:
+		return "AVF_ERR_INVALID_MAC_ADDR";
+	case AVF_ERR_DEVICE_NOT_SUPPORTED:
+		return "AVF_ERR_DEVICE_NOT_SUPPORTED";
+	case AVF_ERR_MASTER_REQUESTS_PENDING:
+		return "AVF_ERR_MASTER_REQUESTS_PENDING";
+	case AVF_ERR_INVALID_LINK_SETTINGS:
+		return "AVF_ERR_INVALID_LINK_SETTINGS";
+	case AVF_ERR_AUTONEG_NOT_COMPLETE:
+		return "AVF_ERR_AUTONEG_NOT_COMPLETE";
+	case AVF_ERR_RESET_FAILED:
+		return "AVF_ERR_RESET_FAILED";
+	case AVF_ERR_SWFW_SYNC:
+		return "AVF_ERR_SWFW_SYNC";
+	case AVF_ERR_NO_AVAILABLE_VSI:
+		return "AVF_ERR_NO_AVAILABLE_VSI";
+	case AVF_ERR_NO_MEMORY:
+		return "AVF_ERR_NO_MEMORY";
+	case AVF_ERR_BAD_PTR:
+		return "AVF_ERR_BAD_PTR";
+	case AVF_ERR_RING_FULL:
+		return "AVF_ERR_RING_FULL";
+	case AVF_ERR_INVALID_PD_ID:
+		return "AVF_ERR_INVALID_PD_ID";
+	case AVF_ERR_INVALID_QP_ID:
+		return "AVF_ERR_INVALID_QP_ID";
+	case AVF_ERR_INVALID_CQ_ID:
+		return "AVF_ERR_INVALID_CQ_ID";
+	case AVF_ERR_INVALID_CEQ_ID:
+		return "AVF_ERR_INVALID_CEQ_ID";
+	case AVF_ERR_INVALID_AEQ_ID:
+		return "AVF_ERR_INVALID_AEQ_ID";
+	case AVF_ERR_INVALID_SIZE:
+		return "AVF_ERR_INVALID_SIZE";
+	case AVF_ERR_INVALID_ARP_INDEX:
+		return "AVF_ERR_INVALID_ARP_INDEX";
+	case AVF_ERR_INVALID_FPM_FUNC_ID:
+		return "AVF_ERR_INVALID_FPM_FUNC_ID";
+	case AVF_ERR_QP_INVALID_MSG_SIZE:
+		return "AVF_ERR_QP_INVALID_MSG_SIZE";
+	case AVF_ERR_QP_TOOMANY_WRS_POSTED:
+		return "AVF_ERR_QP_TOOMANY_WRS_POSTED";
+	case AVF_ERR_INVALID_FRAG_COUNT:
+		return "AVF_ERR_INVALID_FRAG_COUNT";
+	case AVF_ERR_QUEUE_EMPTY:
+		return "AVF_ERR_QUEUE_EMPTY";
+	case AVF_ERR_INVALID_ALIGNMENT:
+		return "AVF_ERR_INVALID_ALIGNMENT";
+	case AVF_ERR_FLUSHED_QUEUE:
+		return "AVF_ERR_FLUSHED_QUEUE";
+	case AVF_ERR_INVALID_PUSH_PAGE_INDEX:
+		return "AVF_ERR_INVALID_PUSH_PAGE_INDEX";
+	case AVF_ERR_INVALID_IMM_DATA_SIZE:
+		return "AVF_ERR_INVALID_IMM_DATA_SIZE";
+	case AVF_ERR_TIMEOUT:
+		return "AVF_ERR_TIMEOUT";
+	case AVF_ERR_OPCODE_MISMATCH:
+		return "AVF_ERR_OPCODE_MISMATCH";
+	case AVF_ERR_CQP_COMPL_ERROR:
+		return "AVF_ERR_CQP_COMPL_ERROR";
+	case AVF_ERR_INVALID_VF_ID:
+		return "AVF_ERR_INVALID_VF_ID";
+	case AVF_ERR_INVALID_HMCFN_ID:
+		return "AVF_ERR_INVALID_HMCFN_ID";
+	case AVF_ERR_BACKING_PAGE_ERROR:
+		return "AVF_ERR_BACKING_PAGE_ERROR";
+	case AVF_ERR_NO_PBLCHUNKS_AVAILABLE:
+		return "AVF_ERR_NO_PBLCHUNKS_AVAILABLE";
+	case AVF_ERR_INVALID_PBLE_INDEX:
+		return "AVF_ERR_INVALID_PBLE_INDEX";
+	case AVF_ERR_INVALID_SD_INDEX:
+		return "AVF_ERR_INVALID_SD_INDEX";
+	case AVF_ERR_INVALID_PAGE_DESC_INDEX:
+		return "AVF_ERR_INVALID_PAGE_DESC_INDEX";
+	case AVF_ERR_INVALID_SD_TYPE:
+		return "AVF_ERR_INVALID_SD_TYPE";
+	case AVF_ERR_MEMCPY_FAILED:
+		return "AVF_ERR_MEMCPY_FAILED";
+	case AVF_ERR_INVALID_HMC_OBJ_INDEX:
+		return "AVF_ERR_INVALID_HMC_OBJ_INDEX";
+	case AVF_ERR_INVALID_HMC_OBJ_COUNT:
+		return "AVF_ERR_INVALID_HMC_OBJ_COUNT";
+	case AVF_ERR_INVALID_SRQ_ARM_LIMIT:
+		return "AVF_ERR_INVALID_SRQ_ARM_LIMIT";
+	case AVF_ERR_SRQ_ENABLED:
+		return "AVF_ERR_SRQ_ENABLED";
+	case AVF_ERR_ADMIN_QUEUE_ERROR:
+		return "AVF_ERR_ADMIN_QUEUE_ERROR";
+	case AVF_ERR_ADMIN_QUEUE_TIMEOUT:
+		return "AVF_ERR_ADMIN_QUEUE_TIMEOUT";
+	case AVF_ERR_BUF_TOO_SHORT:
+		return "AVF_ERR_BUF_TOO_SHORT";
+	case AVF_ERR_ADMIN_QUEUE_FULL:
+		return "AVF_ERR_ADMIN_QUEUE_FULL";
+	case AVF_ERR_ADMIN_QUEUE_NO_WORK:
+		return "AVF_ERR_ADMIN_QUEUE_NO_WORK";
+	case AVF_ERR_BAD_IWARP_CQE:
+		return "AVF_ERR_BAD_IWARP_CQE";
+	case AVF_ERR_NVM_BLANK_MODE:
+		return "AVF_ERR_NVM_BLANK_MODE";
+	case AVF_ERR_NOT_IMPLEMENTED:
+		return "AVF_ERR_NOT_IMPLEMENTED";
+	case AVF_ERR_PE_DOORBELL_NOT_ENABLED:
+		return "AVF_ERR_PE_DOORBELL_NOT_ENABLED";
+	case AVF_ERR_DIAG_TEST_FAILED:
+		return "AVF_ERR_DIAG_TEST_FAILED";
+	case AVF_ERR_NOT_READY:
+		return "AVF_ERR_NOT_READY";
+	case AVF_NOT_SUPPORTED:
+		return "AVF_NOT_SUPPORTED";
+	case AVF_ERR_FIRMWARE_API_VERSION:
+		return "AVF_ERR_FIRMWARE_API_VERSION";
+	}
+
+	snprintf(hw->err_str, sizeof(hw->err_str), "%d", stat_err);
+	return hw->err_str;
+}
+
+/**
+ * avf_debug_aq
+ * @hw: pointer to the hw struct
+ * @mask: debug mask
+ * @desc: pointer to admin queue descriptor
+ * @buffer: pointer to command buffer
+ * @buf_len: max length of buffer
+ *
+ * Dumps debug log about adminq command with descriptor contents.
+ **/
+void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask, void *desc,
+		   void *buffer, u16 buf_len)
+{
+	struct avf_aq_desc *aq_desc = (struct avf_aq_desc *)desc;
+	u8 *buf = (u8 *)buffer;
+	u16 len;
+	u16 i = 0;
+
+	if ((!(mask & hw->debug_mask)) || (desc == NULL))
+		return;
+
+	len = LE16_TO_CPU(aq_desc->datalen);
+
+	avf_debug(hw, mask,
+		   "AQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+		   LE16_TO_CPU(aq_desc->opcode),
+		   LE16_TO_CPU(aq_desc->flags),
+		   LE16_TO_CPU(aq_desc->datalen),
+		   LE16_TO_CPU(aq_desc->retval));
+	avf_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->cookie_high),
+		   LE32_TO_CPU(aq_desc->cookie_low));
+	avf_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->params.internal.param0),
+		   LE32_TO_CPU(aq_desc->params.internal.param1));
+	avf_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->params.external.addr_high),
+		   LE32_TO_CPU(aq_desc->params.external.addr_low));
+
+	if ((buffer != NULL) && (aq_desc->datalen != 0)) {
+		avf_debug(hw, mask, "AQ CMD Buffer:\n");
+		if (buf_len < len)
+			len = buf_len;
+		/* write the full 16-byte chunks */
+		for (i = 0; i < (len - 16); i += 16)
+			avf_debug(hw, mask,
+				   "\t0x%04X  %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
+				   i, buf[i], buf[i+1], buf[i+2], buf[i+3],
+				   buf[i+4], buf[i+5], buf[i+6], buf[i+7],
+				   buf[i+8], buf[i+9], buf[i+10], buf[i+11],
+				   buf[i+12], buf[i+13], buf[i+14], buf[i+15]);
+		/* the most we could have left is 16 bytes, pad with zeros */
+		if (i < len) {
+			char d_buf[16];
+			int j, i_sav;
+
+			i_sav = i;
+			memset(d_buf, 0, sizeof(d_buf));
+			for (j = 0; i < len; j++, i++)
+				d_buf[j] = buf[i];
+			avf_debug(hw, mask,
+				   "\t0x%04X  %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
+				   i_sav, d_buf[0], d_buf[1], d_buf[2], d_buf[3],
+				   d_buf[4], d_buf[5], d_buf[6], d_buf[7],
+				   d_buf[8], d_buf[9], d_buf[10], d_buf[11],
+				   d_buf[12], d_buf[13], d_buf[14], d_buf[15]);
+		}
+	}
+}
+
+/**
+ * avf_check_asq_alive
+ * @hw: pointer to the hw struct
+ *
+ * Returns true if the queue is enabled, else false.
+ **/
+bool avf_check_asq_alive(struct avf_hw *hw)
+{
+	if (hw->aq.asq.len)
+#ifdef INTEGRATED_VF
+		if (avf_is_vf(hw))
+			return !!(rd32(hw, hw->aq.asq.len) &
+				AVF_ATQLEN1_ATQENABLE_MASK);
+#else
+		return !!(rd32(hw, hw->aq.asq.len) &
+			AVF_ATQLEN1_ATQENABLE_MASK);
+#endif /* INTEGRATED_VF */
+	return false;
+}
+
+/**
+ * avf_aq_queue_shutdown
+ * @hw: pointer to the hw struct
+ * @unloading: is the driver unloading itself
+ *
+ * Tell the Firmware that we're shutting down the AdminQ and whether
+ * or not the driver is unloading as well.
+ **/
+enum avf_status_code avf_aq_queue_shutdown(struct avf_hw *hw,
+					     bool unloading)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_queue_shutdown *cmd =
+		(struct avf_aqc_queue_shutdown *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_queue_shutdown);
+
+	if (unloading)
+		cmd->driver_unloading = CPU_TO_LE32(AVF_AQ_DRIVER_UNLOADING);
+	status = avf_asq_send_command(hw, &desc, NULL, 0, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get or set RSS look up table
+ **/
+STATIC enum avf_status_code avf_aq_get_set_rss_lut(struct avf_hw *hw,
+						     u16 vsi_id, bool pf_lut,
+						     u8 *lut, u16 lut_size,
+						     bool set)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_set_rss_lut *cmd_resp =
+		   (struct avf_aqc_get_set_rss_lut *)&desc.params.raw;
+
+	if (set)
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_set_rss_lut);
+	else
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_get_rss_lut);
+
+	/* Indirect command */
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd_resp->vsi_id =
+			CPU_TO_LE16((u16)((vsi_id <<
+					  AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT) &
+					  AVF_AQC_SET_RSS_LUT_VSI_ID_MASK));
+	cmd_resp->vsi_id |= CPU_TO_LE16((u16)AVF_AQC_SET_RSS_LUT_VSI_VALID);
+
+	if (pf_lut)
+		cmd_resp->flags |= CPU_TO_LE16((u16)
+					((AVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF <<
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
+	else
+		cmd_resp->flags |= CPU_TO_LE16((u16)
+					((AVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI <<
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
+
+	status = avf_asq_send_command(hw, &desc, lut, lut_size, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * get the RSS lookup table, PF or VSI type
+ **/
+enum avf_status_code avf_aq_get_rss_lut(struct avf_hw *hw, u16 vsi_id,
+					  bool pf_lut, u8 *lut, u16 lut_size)
+{
+	return avf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size,
+				       false);
+}
+
+/**
+ * avf_aq_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * set the RSS lookup table, PF or VSI type
+ **/
+enum avf_status_code avf_aq_set_rss_lut(struct avf_hw *hw, u16 vsi_id,
+					  bool pf_lut, u8 *lut, u16 lut_size)
+{
+	return avf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
+}
+
+/**
+ * avf_aq_get_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
+ *
+ * Internal function to get or set the RSS key per VSI
+ **/
+STATIC enum avf_status_code avf_aq_get_set_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key,
+				      bool set)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_set_rss_key *cmd_resp =
+			(struct avf_aqc_get_set_rss_key *)&desc.params.raw;
+	u16 key_size = sizeof(struct avf_aqc_get_set_rss_key_data);
+
+	if (set)
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_set_rss_key);
+	else
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_get_rss_key);
+
+	/* Indirect command */
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd_resp->vsi_id =
+			CPU_TO_LE16((u16)((vsi_id <<
+					  AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT) &
+					  AVF_AQC_SET_RSS_KEY_VSI_ID_MASK));
+	cmd_resp->vsi_id |= CPU_TO_LE16((u16)AVF_AQC_SET_RSS_KEY_VSI_VALID);
+
+	status = avf_asq_send_command(hw, &desc, key, key_size, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * get the RSS key per VSI
+ **/
+enum avf_status_code avf_aq_get_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key)
+{
+	return avf_aq_get_set_rss_key(hw, vsi_id, key, false);
+}
+
+/**
+ * avf_aq_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * set the RSS key per VSI
+ **/
+enum avf_status_code avf_aq_set_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key)
+{
+	return avf_aq_get_set_rss_key(hw, vsi_id, key, true);
+}
+
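+/* Usage sketch (illustrative only; this helper and its error handling are
+ * not part of the base code): program a caller-supplied 40-byte standard
+ * RSS key for a VSI through the admin queue wrapper above.
+ */
+static enum avf_status_code
+avf_example_program_rss_key(struct avf_hw *hw, u16 vsi_id, u8 *seed)
+{
+	struct avf_aqc_get_set_rss_key_data key;
+
+	avf_memset(&key, 0, sizeof(key), AVF_NONDMA_MEM);
+	avf_memcpy(key.standard_rss_key, seed,
+		    sizeof(key.standard_rss_key), AVF_NONDMA_TO_NONDMA);
+
+	return avf_aq_set_rss_key(hw, vsi_id, &key);
+}
+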
+/* The avf_ptype_lookup table is used to convert from the 8-bit ptype in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ *
+ * Typical work flow:
+ *
+ * IF NOT avf_ptype_lookup[ptype].known
+ * THEN
+ *      Packet is unknown
+ * ELSE IF avf_ptype_lookup[ptype].outer_ip == AVF_RX_PTYPE_OUTER_IP
+ *      Use the rest of the fields to look at the tunnels, inner protocols, etc
+ * ELSE
+ *      Use the enum avf_rx_l2_ptype to decode the packet type
+ * ENDIF
+ *
+ * A decode sketch based on this workflow follows the lookup table below.
+ */
+
+/* macro to make the table lines short */
+#define AVF_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
+	{	PTYPE, \
+		1, \
+		AVF_RX_PTYPE_OUTER_##OUTER_IP, \
+		AVF_RX_PTYPE_OUTER_##OUTER_IP_VER, \
+		AVF_RX_PTYPE_##OUTER_FRAG, \
+		AVF_RX_PTYPE_TUNNEL_##T, \
+		AVF_RX_PTYPE_TUNNEL_END_##TE, \
+		AVF_RX_PTYPE_##TEF, \
+		AVF_RX_PTYPE_INNER_PROT_##I, \
+		AVF_RX_PTYPE_PAYLOAD_LAYER_##PL }
+
+#define AVF_PTT_UNUSED_ENTRY(PTYPE) \
+		{ PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
+
+/* shorter macros make the table fit but are terse */
+#define AVF_RX_PTYPE_NOF		AVF_RX_PTYPE_NOT_FRAG
+#define AVF_RX_PTYPE_FRG		AVF_RX_PTYPE_FRAG
+#define AVF_RX_PTYPE_INNER_PROT_TS	AVF_RX_PTYPE_INNER_PROT_TIMESYNC
+
+/* Lookup table mapping the HW PTYPE to the bit field for decoding */
+struct avf_rx_ptype_decoded avf_ptype_lookup[] = {
+	/* L2 Packet types */
+	AVF_PTT_UNUSED_ENTRY(0),
+	AVF_PTT(1,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(2,  L2, NONE, NOF, NONE, NONE, NOF, TS,   PAY2),
+	AVF_PTT(3,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT_UNUSED_ENTRY(4),
+	AVF_PTT_UNUSED_ENTRY(5),
+	AVF_PTT(6,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(7,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT_UNUSED_ENTRY(8),
+	AVF_PTT_UNUSED_ENTRY(9),
+	AVF_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	AVF_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+
+	/* Non Tunneled IPv4 */
+	AVF_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(25),
+	AVF_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	AVF_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	AVF_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv4 */
+	AVF_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(32),
+	AVF_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv6 */
+	AVF_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(39),
+	AVF_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT */
+	AVF_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> IPv4 */
+	AVF_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(47),
+	AVF_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> IPv6 */
+	AVF_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(54),
+	AVF_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC */
+	AVF_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> MAC --> IPv4 */
+	AVF_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(62),
+	AVF_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT -> MAC --> IPv6 */
+	AVF_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(69),
+	AVF_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN */
+	AVF_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */
+	AVF_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(77),
+	AVF_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */
+	AVF_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(84),
+	AVF_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* Non Tunneled IPv6 */
+	AVF_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(91),
+	AVF_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	AVF_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	AVF_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv4 */
+	AVF_PTT(95,  IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(96,  IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(97,  IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(98),
+	AVF_PTT(99,  IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv6 */
+	AVF_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(105),
+	AVF_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT */
+	AVF_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> IPv4 */
+	AVF_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(113),
+	AVF_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> IPv6 */
+	AVF_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(120),
+	AVF_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC */
+	AVF_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv4 */
+	AVF_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(128),
+	AVF_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv6 */
+	AVF_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(135),
+	AVF_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN */
+	AVF_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */
+	AVF_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(143),
+	AVF_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */
+	AVF_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(150),
+	AVF_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* unused entries */
+	AVF_PTT_UNUSED_ENTRY(154),
+	AVF_PTT_UNUSED_ENTRY(155),
+	AVF_PTT_UNUSED_ENTRY(156),
+	AVF_PTT_UNUSED_ENTRY(157),
+	AVF_PTT_UNUSED_ENTRY(158),
+	AVF_PTT_UNUSED_ENTRY(159),
+
+	AVF_PTT_UNUSED_ENTRY(160),
+	AVF_PTT_UNUSED_ENTRY(161),
+	AVF_PTT_UNUSED_ENTRY(162),
+	AVF_PTT_UNUSED_ENTRY(163),
+	AVF_PTT_UNUSED_ENTRY(164),
+	AVF_PTT_UNUSED_ENTRY(165),
+	AVF_PTT_UNUSED_ENTRY(166),
+	AVF_PTT_UNUSED_ENTRY(167),
+	AVF_PTT_UNUSED_ENTRY(168),
+	AVF_PTT_UNUSED_ENTRY(169),
+
+	AVF_PTT_UNUSED_ENTRY(170),
+	AVF_PTT_UNUSED_ENTRY(171),
+	AVF_PTT_UNUSED_ENTRY(172),
+	AVF_PTT_UNUSED_ENTRY(173),
+	AVF_PTT_UNUSED_ENTRY(174),
+	AVF_PTT_UNUSED_ENTRY(175),
+	AVF_PTT_UNUSED_ENTRY(176),
+	AVF_PTT_UNUSED_ENTRY(177),
+	AVF_PTT_UNUSED_ENTRY(178),
+	AVF_PTT_UNUSED_ENTRY(179),
+
+	AVF_PTT_UNUSED_ENTRY(180),
+	AVF_PTT_UNUSED_ENTRY(181),
+	AVF_PTT_UNUSED_ENTRY(182),
+	AVF_PTT_UNUSED_ENTRY(183),
+	AVF_PTT_UNUSED_ENTRY(184),
+	AVF_PTT_UNUSED_ENTRY(185),
+	AVF_PTT_UNUSED_ENTRY(186),
+	AVF_PTT_UNUSED_ENTRY(187),
+	AVF_PTT_UNUSED_ENTRY(188),
+	AVF_PTT_UNUSED_ENTRY(189),
+
+	AVF_PTT_UNUSED_ENTRY(190),
+	AVF_PTT_UNUSED_ENTRY(191),
+	AVF_PTT_UNUSED_ENTRY(192),
+	AVF_PTT_UNUSED_ENTRY(193),
+	AVF_PTT_UNUSED_ENTRY(194),
+	AVF_PTT_UNUSED_ENTRY(195),
+	AVF_PTT_UNUSED_ENTRY(196),
+	AVF_PTT_UNUSED_ENTRY(197),
+	AVF_PTT_UNUSED_ENTRY(198),
+	AVF_PTT_UNUSED_ENTRY(199),
+
+	AVF_PTT_UNUSED_ENTRY(200),
+	AVF_PTT_UNUSED_ENTRY(201),
+	AVF_PTT_UNUSED_ENTRY(202),
+	AVF_PTT_UNUSED_ENTRY(203),
+	AVF_PTT_UNUSED_ENTRY(204),
+	AVF_PTT_UNUSED_ENTRY(205),
+	AVF_PTT_UNUSED_ENTRY(206),
+	AVF_PTT_UNUSED_ENTRY(207),
+	AVF_PTT_UNUSED_ENTRY(208),
+	AVF_PTT_UNUSED_ENTRY(209),
+
+	AVF_PTT_UNUSED_ENTRY(210),
+	AVF_PTT_UNUSED_ENTRY(211),
+	AVF_PTT_UNUSED_ENTRY(212),
+	AVF_PTT_UNUSED_ENTRY(213),
+	AVF_PTT_UNUSED_ENTRY(214),
+	AVF_PTT_UNUSED_ENTRY(215),
+	AVF_PTT_UNUSED_ENTRY(216),
+	AVF_PTT_UNUSED_ENTRY(217),
+	AVF_PTT_UNUSED_ENTRY(218),
+	AVF_PTT_UNUSED_ENTRY(219),
+
+	AVF_PTT_UNUSED_ENTRY(220),
+	AVF_PTT_UNUSED_ENTRY(221),
+	AVF_PTT_UNUSED_ENTRY(222),
+	AVF_PTT_UNUSED_ENTRY(223),
+	AVF_PTT_UNUSED_ENTRY(224),
+	AVF_PTT_UNUSED_ENTRY(225),
+	AVF_PTT_UNUSED_ENTRY(226),
+	AVF_PTT_UNUSED_ENTRY(227),
+	AVF_PTT_UNUSED_ENTRY(228),
+	AVF_PTT_UNUSED_ENTRY(229),
+
+	AVF_PTT_UNUSED_ENTRY(230),
+	AVF_PTT_UNUSED_ENTRY(231),
+	AVF_PTT_UNUSED_ENTRY(232),
+	AVF_PTT_UNUSED_ENTRY(233),
+	AVF_PTT_UNUSED_ENTRY(234),
+	AVF_PTT_UNUSED_ENTRY(235),
+	AVF_PTT_UNUSED_ENTRY(236),
+	AVF_PTT_UNUSED_ENTRY(237),
+	AVF_PTT_UNUSED_ENTRY(238),
+	AVF_PTT_UNUSED_ENTRY(239),
+
+	AVF_PTT_UNUSED_ENTRY(240),
+	AVF_PTT_UNUSED_ENTRY(241),
+	AVF_PTT_UNUSED_ENTRY(242),
+	AVF_PTT_UNUSED_ENTRY(243),
+	AVF_PTT_UNUSED_ENTRY(244),
+	AVF_PTT_UNUSED_ENTRY(245),
+	AVF_PTT_UNUSED_ENTRY(246),
+	AVF_PTT_UNUSED_ENTRY(247),
+	AVF_PTT_UNUSED_ENTRY(248),
+	AVF_PTT_UNUSED_ENTRY(249),
+
+	AVF_PTT_UNUSED_ENTRY(250),
+	AVF_PTT_UNUSED_ENTRY(251),
+	AVF_PTT_UNUSED_ENTRY(252),
+	AVF_PTT_UNUSED_ENTRY(253),
+	AVF_PTT_UNUSED_ENTRY(254),
+	AVF_PTT_UNUSED_ENTRY(255)
+};
+
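+/* Decode sketch following the workflow described above the lookup table.
+ * The helper name is hypothetical; the avf_rx_ptype_decoded field names
+ * (known, outer_ip, tunnel_type) are assumed from avf_type.h.
+ */
+static inline bool
+avf_example_ptype_is_tunneled_ip(u8 ptype)
+{
+	struct avf_rx_ptype_decoded decoded = avf_ptype_lookup[ptype];
+
+	if (!decoded.known)
+		return false;	/* unknown packet type */
+	if (decoded.outer_ip != AVF_RX_PTYPE_OUTER_IP)
+		return false;	/* plain L2 frame, no outer IP header */
+
+	return decoded.tunnel_type != AVF_RX_PTYPE_TUNNEL_NONE;
+}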
+
+/**
+ * avf_validate_mac_addr - Validate unicast MAC address
+ * @mac_addr: pointer to MAC address
+ *
+ * Tests a MAC address to ensure it is a valid Individual Address
+ **/
+enum avf_status_code avf_validate_mac_addr(u8 *mac_addr)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+
+	DEBUGFUNC("avf_validate_mac_addr");
+
+	/* Broadcast addresses ARE multicast addresses
+	 * Make sure it is not a multicast address
+	 * Reject the zero address
+	 */
+	if (AVF_IS_MULTICAST(mac_addr) ||
+	    (mac_addr[0] == 0 && mac_addr[1] == 0 && mac_addr[2] == 0 &&
+	      mac_addr[3] == 0 && mac_addr[4] == 0 && mac_addr[5] == 0))
+		status = AVF_ERR_INVALID_MAC_ADDR;
+
+	return status;
+}
+
+/**
+ * avf_aq_rx_ctl_read_register - use FW to read from an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: ptr to register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Use the firmware to read the Rx control register,
+ * especially useful if the Rx unit is under heavy pressure
+ **/
+enum avf_status_code avf_aq_rx_ctl_read_register(struct avf_hw *hw,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_rx_ctl_reg_read_write *cmd_resp =
+		(struct avf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
+	enum avf_status_code status;
+
+	if (reg_val == NULL)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_rx_ctl_reg_read);
+
+	cmd_resp->address = CPU_TO_LE32(reg_addr);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	if (status == AVF_SUCCESS)
+		*reg_val = LE32_TO_CPU(cmd_resp->value);
+
+	return status;
+}
+
+/**
+ * avf_read_rx_ctl - read from an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ **/
+u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	bool use_register;
+	int retry = 5;
+	u32 val = 0;
+
+	use_register = (((hw->aq.api_maj_ver == 1) &&
+			(hw->aq.api_min_ver < 5)) ||
+			(hw->mac.type == AVF_MAC_X722));
+	if (!use_register) {
+do_retry:
+		status = avf_aq_rx_ctl_read_register(hw, reg_addr, &val, NULL);
+		if (hw->aq.asq_last_status == AVF_AQ_RC_EAGAIN && retry) {
+			avf_msec_delay(1);
+			retry--;
+			goto do_retry;
+		}
+	}
+
+	/* if the AQ access failed, try the old-fashioned way */
+	if (status || use_register)
+		val = rd32(hw, reg_addr);
+
+	return val;
+}
+
+/**
+ * avf_aq_rx_ctl_write_register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Use the firmware to write to an Rx control register,
+ * especially useful if the Rx unit is under heavy pressure
+ **/
+enum avf_status_code avf_aq_rx_ctl_write_register(struct avf_hw *hw,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_rx_ctl_reg_read_write *cmd =
+		(struct avf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_rx_ctl_reg_write);
+
+	cmd->address = CPU_TO_LE32(reg_addr);
+	cmd->value = CPU_TO_LE32(reg_val);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_write_rx_ctl - write to an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: register value
+ **/
+void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	bool use_register;
+	int retry = 5;
+
+	use_register = (((hw->aq.api_maj_ver == 1) &&
+			(hw->aq.api_min_ver < 5)) ||
+			(hw->mac.type == AVF_MAC_X722));
+	if (!use_register) {
+do_retry:
+		status = avf_aq_rx_ctl_write_register(hw, reg_addr,
+						       reg_val, NULL);
+		if (hw->aq.asq_last_status == AVF_AQ_RC_EAGAIN && retry) {
+			avf_msec_delay(1);
+			retry--;
+			goto do_retry;
+		}
+	}
+
+	/* if the AQ access failed, try the old-fashioned way */
+	if (status || use_register)
+		wr32(hw, reg_addr, reg_val);
+}
+
+/**
+ * avf_aq_set_phy_register
+ * @hw: pointer to the hw struct
+ * @phy_select: select which phy should be accessed
+ * @dev_addr: PHY device address
+ * @reg_addr: PHY register address
+ * @reg_val: new register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Write the external PHY register.
+ **/
+enum avf_status_code avf_aq_set_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_phy_register_access *cmd =
+		(struct avf_aqc_phy_register_access *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_set_phy_register);
+
+	cmd->phy_interface = phy_select;
+	cmd->dev_addres = dev_addr;
+	cmd->reg_address = reg_addr;
+	cmd->reg_value = reg_val;
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_phy_register
+ * @hw: pointer to the hw struct
+ * @phy_select: select which phy should be accessed
+ * @dev_addr: PHY device address
+ * @reg_addr: PHY register address
+ * @reg_val: read register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Read the external PHY register.
+ **/
+enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_phy_register_access *cmd =
+		(struct avf_aqc_phy_register_access *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_get_phy_register);
+
+	cmd->phy_interface = phy_select;
+	cmd->dev_addres = dev_addr;
+	cmd->reg_address = reg_addr;
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+	if (!status)
+		*reg_val = cmd->reg_value;
+
+	return status;
+}
+
+
+/**
+ * avf_aq_send_msg_to_pf
+ * @hw: pointer to the hardware structure
+ * @v_opcode: opcodes for VF-PF communication
+ * @v_retval: return error code
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ * @cmd_details: pointer to command details
+ *
+ * Send message to PF driver using admin queue. By default, this message
+ * is sent asynchronously, i.e. avf_asq_send_command() does not wait for
+ * completion before returning.
+ **/
+enum avf_status_code avf_aq_send_msg_to_pf(struct avf_hw *hw,
+				enum virtchnl_ops v_opcode,
+				enum avf_status_code v_retval,
+				u8 *msg, u16 msglen,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_asq_cmd_details details;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_send_msg_to_pf);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_SI);
+	desc.cookie_high = CPU_TO_LE32(v_opcode);
+	desc.cookie_low = CPU_TO_LE32(v_retval);
+	if (msglen) {
+		desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF
+						| AVF_AQ_FLAG_RD));
+		if (msglen > AVF_AQ_LARGE_BUF)
+			desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+		desc.datalen = CPU_TO_LE16(msglen);
+	}
+	if (!cmd_details) {
+		avf_memset(&details, 0, sizeof(details), AVF_NONDMA_MEM);
+		details.async = true;
+		cmd_details = &details;
+	}
+	status = avf_asq_send_command(hw, (struct avf_aq_desc *)&desc, msg,
+				       msglen, cmd_details);
+	return status;
+}
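+
+/*
+ * Usage sketch (illustrative only, error handling omitted): a VF driver
+ * typically wraps its virtchnl requests around this helper, e.g. the
+ * initial API version negotiation using the definitions from virtchnl.h:
+ *
+ *	struct virtchnl_version_info ver = {
+ *		.major = VIRTCHNL_VERSION_MAJOR,
+ *		.minor = VIRTCHNL_VERSION_MINOR,
+ *	};
+ *
+ *	avf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_VERSION, AVF_SUCCESS,
+ *			       (u8 *)&ver, sizeof(ver), NULL);
+ */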
+
+/**
+ * avf_parse_hw_config
+ * @hw: pointer to the hardware structure
+ * @msg: pointer to the virtual channel VF resource structure
+ *
+ * Given a VF resource message from the PF, populate the hw struct
+ * with appropriate information.
+ **/
+void avf_parse_hw_config(struct avf_hw *hw,
+			     struct virtchnl_vf_resource *msg)
+{
+	struct virtchnl_vsi_resource *vsi_res;
+	int i;
+
+	vsi_res = &msg->vsi_res[0];
+
+	hw->dev_caps.num_vsis = msg->num_vsis;
+	hw->dev_caps.num_rx_qp = msg->num_queue_pairs;
+	hw->dev_caps.num_tx_qp = msg->num_queue_pairs;
+	hw->dev_caps.num_msix_vectors_vf = msg->max_vectors;
+	hw->dev_caps.dcb = msg->vf_offload_flags &
+			   VIRTCHNL_VF_OFFLOAD_L2;
+	hw->dev_caps.iwarp = (msg->vf_offload_flags &
+			      VIRTCHNL_VF_OFFLOAD_IWARP) ? 1 : 0;
+	for (i = 0; i < msg->num_vsis; i++) {
+		if (vsi_res->vsi_type == VIRTCHNL_VSI_SRIOV) {
+			avf_memcpy(hw->mac.perm_addr,
+				    vsi_res->default_mac_addr,
+				    ETH_ALEN,
+				    AVF_NONDMA_TO_NONDMA);
+			avf_memcpy(hw->mac.addr, vsi_res->default_mac_addr,
+				    ETH_ALEN,
+				    AVF_NONDMA_TO_NONDMA);
+		}
+		vsi_res++;
+	}
+}
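+
+/*
+ * Sketch of the expected flow (illustrative only; "buf" stands for the
+ * message buffer received from the PF): once the PF has answered a
+ * VIRTCHNL_OP_GET_VF_RESOURCES request, the reply is handed to this
+ * function so that hw->dev_caps and hw->mac reflect the assigned
+ * resources:
+ *
+ *	avf_parse_hw_config(hw, (struct virtchnl_vf_resource *)buf);
+ */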
+
+/**
+ * avf_reset
+ * @hw: pointer to the hardware structure
+ *
+ * Send a VF_RESET message to the PF. Does not wait for response from PF
+ * as none will be forthcoming. Immediately after calling this function,
+ * the admin queue should be shut down and (optionally) reinitialized.
+ **/
+enum avf_status_code avf_reset(struct avf_hw *hw)
+{
+	return avf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_RESET_VF,
+				      AVF_SUCCESS, NULL, 0, NULL);
+}
+
+/**
+ * avf_aq_set_arp_proxy_config
+ * @hw: pointer to the HW structure
+ * @proxy_config: pointer to proxy config command table struct
+ * @cmd_details: pointer to command details
+ *
+ * Set ARP offload parameters from pre-populated
+ * avf_aqc_arp_proxy_data struct
+ **/
+enum avf_status_code avf_aq_set_arp_proxy_config(struct avf_hw *hw,
+				struct avf_aqc_arp_proxy_data *proxy_config,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	if (!proxy_config)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_set_proxy_config);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+	desc.params.external.addr_high =
+				  CPU_TO_LE32(AVF_HI_DWORD((u64)proxy_config));
+	desc.params.external.addr_low =
+				  CPU_TO_LE32(AVF_LO_DWORD((u64)proxy_config));
+	desc.datalen = CPU_TO_LE16(sizeof(struct avf_aqc_arp_proxy_data));
+
+	status = avf_asq_send_command(hw, &desc, proxy_config,
+				       sizeof(struct avf_aqc_arp_proxy_data),
+				       cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_set_ns_proxy_table_entry
+ * @hw: pointer to the HW structure
+ * @ns_proxy_table_entry: pointer to NS table entry command struct
+ * @cmd_details: pointer to command details
+ *
+ * Set IPv6 Neighbor Solicitation (NS) protocol offload parameters
+ * from pre-populated avf_aqc_ns_proxy_data struct
+ **/
+enum avf_status_code avf_aq_set_ns_proxy_table_entry(struct avf_hw *hw,
+			struct avf_aqc_ns_proxy_data *ns_proxy_table_entry,
+			struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	if (!ns_proxy_table_entry)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+				avf_aqc_opc_set_ns_proxy_table_entry);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+	desc.params.external.addr_high =
+		CPU_TO_LE32(AVF_HI_DWORD((u64)ns_proxy_table_entry));
+	desc.params.external.addr_low =
+		CPU_TO_LE32(AVF_LO_DWORD((u64)ns_proxy_table_entry));
+	desc.datalen = CPU_TO_LE16(sizeof(struct avf_aqc_ns_proxy_data));
+
+	status = avf_asq_send_command(hw, &desc, ns_proxy_table_entry,
+				       sizeof(struct avf_aqc_ns_proxy_data),
+				       cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_set_clear_wol_filter
+ * @hw: pointer to the hw struct
+ * @filter_index: index of filter to modify (0-7)
+ * @filter: buffer containing filter to be set
+ * @set_filter: true to set filter, false to clear filter
+ * @no_wol_tco: if true, pass through packets cannot cause wake-up
+ *		if false, pass through packets may cause wake-up
+ * @filter_valid: true if filter action is valid
+ * @no_wol_tco_valid: true if no WoL in TCO traffic action valid
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Set or clear WoL filter for port attached to the PF
+ **/
+enum avf_status_code avf_aq_set_clear_wol_filter(struct avf_hw *hw,
+				u8 filter_index,
+				struct avf_aqc_set_wol_filter_data *filter,
+				bool set_filter, bool no_wol_tco,
+				bool filter_valid, bool no_wol_tco_valid,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_set_wol_filter *cmd =
+		(struct avf_aqc_set_wol_filter *)&desc.params.raw;
+	enum avf_status_code status;
+	u16 cmd_flags = 0;
+	u16 valid_flags = 0;
+	u16 buff_len = 0;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_set_wol_filter);
+
+	if (filter_index >= AVF_AQC_MAX_NUM_WOL_FILTERS)
+		return  AVF_ERR_PARAM;
+	cmd->filter_index = CPU_TO_LE16(filter_index);
+
+	if (set_filter) {
+		if (!filter)
+			return  AVF_ERR_PARAM;
+
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER;
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR;
+	}
+
+	if (no_wol_tco)
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER_NO_TCO_WOL;
+	cmd->cmd_flags = CPU_TO_LE16(cmd_flags);
+
+	if (filter_valid)
+		valid_flags |= AVF_AQC_SET_WOL_FILTER_ACTION_VALID;
+	if (no_wol_tco_valid)
+		valid_flags |= AVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID;
+	cmd->valid_flags = CPU_TO_LE16(valid_flags);
+
+	buff_len = sizeof(*filter);
+	desc.datalen = CPU_TO_LE16(buff_len);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd->address_high = CPU_TO_LE32(AVF_HI_DWORD((u64)filter));
+	cmd->address_low = CPU_TO_LE32(AVF_LO_DWORD((u64)filter));
+
+	status = avf_asq_send_command(hw, &desc, filter,
+				       buff_len, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_wake_event_reason
+ * @hw: pointer to the hw struct
+ * @wake_reason: return value, index of matching filter
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Get information for the reason of a Wake Up event
+ **/
+enum avf_status_code avf_aq_get_wake_event_reason(struct avf_hw *hw,
+				u16 *wake_reason,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_wake_reason_completion *resp =
+		(struct avf_aqc_get_wake_reason_completion *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_get_wake_reason);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	if (status == AVF_SUCCESS)
+		*wake_reason = LE16_TO_CPU(resp->wake_reason);
+
+	return status;
+}
+
+/**
+ * avf_aq_clear_all_wol_filters
+ * @hw: pointer to the hw struct
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Clear all WoL filters for the port attached to the PF
+ **/
+enum avf_status_code avf_aq_clear_all_wol_filters(struct avf_hw *hw,
+	struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_clear_all_wol_filters);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+
+/**
+ * avf_aq_write_ddp - Write dynamic device personalization (ddp)
+ * @hw: pointer to the hw struct
+ * @buff: command buffer (size in bytes = buff_size)
+ * @buff_size: buffer size in bytes
+ * @track_id: package tracking id
+ * @error_offset: returns error offset
+ * @error_info: returns error information
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum
+avf_status_code avf_aq_write_ddp(struct avf_hw *hw, void *buff,
+				   u16 buff_size, u32 track_id,
+				   u32 *error_offset, u32 *error_info,
+				   struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_write_personalization_profile *cmd =
+		(struct avf_aqc_write_personalization_profile *)
+		&desc.params.raw;
+	struct avf_aqc_write_ddp_resp *resp;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+				  avf_aqc_opc_write_personalization_profile);
+
+	desc.flags |= CPU_TO_LE16(AVF_AQ_FLAG_BUF | AVF_AQ_FLAG_RD);
+	if (buff_size > AVF_AQ_LARGE_BUF)
+		desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+
+	desc.datalen = CPU_TO_LE16(buff_size);
+
+	cmd->profile_track_id = CPU_TO_LE32(track_id);
+
+	status = avf_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
+	if (!status) {
+		resp = (struct avf_aqc_write_ddp_resp *)&desc.params.raw;
+		if (error_offset)
+			*error_offset = LE32_TO_CPU(resp->error_offset);
+		if (error_info)
+			*error_info = LE32_TO_CPU(resp->error_info);
+	}
+
+	return status;
+}
+
+/**
+ * avf_aq_get_ddp_list - Read dynamic device personalization (ddp)
+ * @hw: pointer to the hw struct
+ * @buff: command buffer (size in bytes = buff_size)
+ * @buff_size: buffer size in bytes
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum
+avf_status_code avf_aq_get_ddp_list(struct avf_hw *hw, void *buff,
+				      u16 buff_size, u8 flags,
+				      struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_applied_profiles *cmd =
+		(struct avf_aqc_get_applied_profiles *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+			  avf_aqc_opc_get_personalization_profile_list);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	if (buff_size > AVF_AQ_LARGE_BUF)
+		desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+	desc.datalen = CPU_TO_LE16(buff_size);
+
+	cmd->flags = flags;
+
+	status = avf_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_find_segment_in_package
+ * @segment_type: the segment type to search for (e.g., SEGMENT_TYPE_AVF)
+ * @pkg_hdr: pointer to the package header to be searched
+ *
+ * This function searches a package file for a particular segment type. On
+ * success it returns a pointer to the segment header, otherwise it will
+ * return NULL.
+ **/
+struct avf_generic_seg_header *
+avf_find_segment_in_package(u32 segment_type,
+			     struct avf_package_header *pkg_hdr)
+{
+	struct avf_generic_seg_header *segment;
+	u32 i;
+
+	/* Search all package segments for the requested segment type */
+	for (i = 0; i < pkg_hdr->segment_count; i++) {
+		segment =
+			(struct avf_generic_seg_header *)((u8 *)pkg_hdr +
+			 pkg_hdr->segment_offset[i]);
+
+		if (segment->type == segment_type)
+			return segment;
+	}
+
+	return NULL;
+}
+
+/* Get section table in profile */
+#define AVF_SECTION_TABLE(profile, sec_tbl)				\
+	do {								\
+		struct avf_profile_segment *p = (profile);		\
+		u32 count;						\
+		u32 *nvm;						\
+		count = p->device_table_count;				\
+		nvm = (u32 *)&p->device_table[count];			\
+		sec_tbl = (struct avf_section_table *)&nvm[nvm[0] + 1]; \
+	} while (0)
+
+/* Get section header in profile */
+#define AVF_SECTION_HEADER(profile, offset)				\
+	(struct avf_profile_section_header *)((u8 *)(profile) + (offset))
+
+/**
+ * avf_find_section_in_profile
+ * @section_type: the section type to search for (e.g., SECTION_TYPE_NOTE)
+ * @profile: pointer to the avf segment header to be searched
+ *
+ * This function searches an avf profile segment for a particular section
+ * type. On success it returns a pointer to the section header, otherwise it
+ * will return NULL.
+ **/
+struct avf_profile_section_header *
+avf_find_section_in_profile(u32 section_type,
+			     struct avf_profile_segment *profile)
+{
+	struct avf_profile_section_header *sec;
+	struct avf_section_table *sec_tbl;
+	u32 sec_off;
+	u32 i;
+
+	if (profile->header.type != SEGMENT_TYPE_AVF)
+		return NULL;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		if (sec->section.type == section_type)
+			return sec;
+	}
+
+	return NULL;
+}
+
+/**
+ * avf_ddp_exec_aq_section - Execute generic AQ for DDP
+ * @hw: pointer to the hw struct
+ * @aq: command buffer containing all data to execute AQ
+ **/
+STATIC enum
+avf_status_code avf_ddp_exec_aq_section(struct avf_hw *hw,
+					  struct avf_profile_aq_section *aq)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	u8 *msg = NULL;
+	u16 msglen;
+
+	avf_fill_default_direct_cmd_desc(&desc, aq->opcode);
+	desc.flags |= CPU_TO_LE16(aq->flags);
+	avf_memcpy(desc.params.raw, aq->param, sizeof(desc.params.raw),
+		    AVF_NONDMA_TO_NONDMA);
+
+	msglen = aq->datalen;
+	if (msglen) {
+		desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF |
+						AVF_AQ_FLAG_RD));
+		if (msglen > AVF_AQ_LARGE_BUF)
+			desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+		desc.datalen = CPU_TO_LE16(msglen);
+		msg = &aq->data[0];
+	}
+
+	status = avf_asq_send_command(hw, &desc, msg, msglen, NULL);
+
+	if (status != AVF_SUCCESS) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE,
+			   "unable to exec DDP AQ opcode %u, error %d\n",
+			   aq->opcode, status);
+		return status;
+	}
+
+	/* copy returned desc to aq_buf */
+	avf_memcpy(aq->param, desc.params.raw, sizeof(desc.params.raw),
+		    AVF_NONDMA_TO_NONDMA);
+
+	return AVF_SUCCESS;
+}
+
+/**
+ * avf_validate_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be validated
+ * @track_id: package tracking id
+ * @rollback: flag if the profile is for rollback.
+ *
+ * Validates supported devices and profile's sections.
+ */
+STATIC enum avf_status_code
+avf_validate_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		      u32 track_id, bool rollback)
+{
+	struct avf_profile_section_header *sec = NULL;
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	u32 vendor_dev_id;
+	u32 dev_cnt;
+	u32 sec_off;
+	u32 i;
+
+	if (track_id == AVF_DDP_TRACKID_INVALID) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE, "Invalid track_id\n");
+		return AVF_NOT_SUPPORTED;
+	}
+
+	dev_cnt = profile->device_table_count;
+	for (i = 0; i < dev_cnt; i++) {
+		vendor_dev_id = profile->device_table[i].vendor_dev_id;
+		if ((vendor_dev_id >> 16) == AVF_INTEL_VENDOR_ID &&
+		    hw->device_id == (vendor_dev_id & 0xFFFF))
+			break;
+	}
+	if (dev_cnt && (i == dev_cnt)) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE,
+			   "Device doesn't support DDP\n");
+		return AVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	/* Validate section types */
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		if (rollback) {
+			if (sec->section.type == SECTION_TYPE_MMIO ||
+			    sec->section.type == SECTION_TYPE_AQ ||
+			    sec->section.type == SECTION_TYPE_RB_AQ) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Not a roll-back package\n");
+				return AVF_NOT_SUPPORTED;
+			}
+		} else {
+			if (sec->section.type == SECTION_TYPE_RB_AQ ||
+			    sec->section.type == SECTION_TYPE_RB_MMIO) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Not an original package\n");
+				return AVF_NOT_SUPPORTED;
+			}
+		}
+	}
+
+	return status;
+}
+
+/**
+ * avf_write_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be downloaded
+ * @track_id: package tracking id
+ *
+ * Handles the download of a complete package.
+ */
+enum avf_status_code
+avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		   u32 track_id)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	struct avf_profile_section_header *sec = NULL;
+	struct avf_profile_aq_section *ddp_aq;
+	u32 section_size = 0;
+	u32 offset = 0, info = 0;
+	u32 sec_off;
+	u32 i;
+
+	status = avf_validate_profile(hw, profile, track_id, false);
+	if (status)
+		return status;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		/* Process generic admin command */
+		if (sec->section.type == SECTION_TYPE_AQ) {
+			ddp_aq = (struct avf_profile_aq_section *)&sec[1];
+			status = avf_ddp_exec_aq_section(hw, ddp_aq);
+			if (status) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Failed to execute aq: section %d, opcode %u\n",
+					   i, ddp_aq->opcode);
+				break;
+			}
+			sec->section.type = SECTION_TYPE_RB_AQ;
+		}
+
+		/* Skip any non-mmio sections */
+		if (sec->section.type != SECTION_TYPE_MMIO)
+			continue;
+
+		section_size = sec->section.size +
+			sizeof(struct avf_profile_section_header);
+
+		/* Write MMIO section */
+		status = avf_aq_write_ddp(hw, (void *)sec, (u16)section_size,
+					   track_id, &offset, &info, NULL);
+		if (status) {
+			avf_debug(hw, AVF_DEBUG_PACKAGE,
+				   "Failed to write profile: section %d, offset %d, info %d\n",
+				   i, offset, info);
+			break;
+		}
+	}
+	return status;
+}
+
+/**
+ * avf_rollback_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be removed
+ * @track_id: package tracking id
+ *
+ * Rolls back previously loaded package.
+ */
+enum avf_status_code
+avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		      u32 track_id)
+{
+	struct avf_profile_section_header *sec = NULL;
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	u32 offset = 0, info = 0;
+	u32 section_size = 0;
+	u32 sec_off;
+	int i;
+
+	status = avf_validate_profile(hw, profile, track_id, true);
+	if (status)
+		return status;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	/* For rollback write sections in reverse */
+	for (i = sec_tbl->section_count - 1; i >= 0; i--) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+
+		/* Skip any non-rollback sections */
+		if (sec->section.type != SECTION_TYPE_RB_MMIO)
+			continue;
+
+		section_size = sec->section.size +
+			sizeof(struct avf_profile_section_header);
+
+		/* Write roll-back MMIO section */
+		status = avf_aq_write_ddp(hw, (void *)sec, (u16)section_size,
+					   track_id, &offset, &info, NULL);
+		if (status) {
+			avf_debug(hw, AVF_DEBUG_PACKAGE,
+				   "Failed to write profile: section %d, offset %d, info %d\n",
+				   i, offset, info);
+			break;
+		}
+	}
+	return status;
+}
+
+/**
+ * avf_add_pinfo_to_list
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package
+ * @profile_info_sec: buffer for information section
+ * @track_id: package tracking id
+ *
+ * Register a profile to the list of loaded profiles.
+ */
+enum avf_status_code
+avf_add_pinfo_to_list(struct avf_hw *hw,
+		       struct avf_profile_segment *profile,
+		       u8 *profile_info_sec, u32 track_id)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_profile_section_header *sec = NULL;
+	struct avf_profile_info *pinfo;
+	u32 offset = 0, info = 0;
+
+	sec = (struct avf_profile_section_header *)profile_info_sec;
+	sec->tbl_size = 1;
+	sec->data_end = sizeof(struct avf_profile_section_header) +
+			sizeof(struct avf_profile_info);
+	sec->section.type = SECTION_TYPE_INFO;
+	sec->section.offset = sizeof(struct avf_profile_section_header);
+	sec->section.size = sizeof(struct avf_profile_info);
+	pinfo = (struct avf_profile_info *)(profile_info_sec +
+					     sec->section.offset);
+	pinfo->track_id = track_id;
+	pinfo->version = profile->version;
+	pinfo->op = AVF_DDP_ADD_TRACKID;
+	avf_memcpy(pinfo->name, profile->name, AVF_DDP_NAME_SIZE,
+		    AVF_NONDMA_TO_NONDMA);
+
+	status = avf_aq_write_ddp(hw, (void *)sec, sec->data_end,
+				   track_id, &offset, &info, NULL);
+	return status;
+}
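+
+/*
+ * Typical DDP download flow (illustrative sketch; error handling and the
+ * loading of the package file into "pkg_hdr"/"track_id" are left to the
+ * caller):
+ *
+ *	struct avf_profile_segment *seg;
+ *	u8 pinfo_sec[sizeof(struct avf_profile_section_header) +
+ *		     sizeof(struct avf_profile_info)];
+ *
+ *	seg = (struct avf_profile_segment *)
+ *	      avf_find_segment_in_package(SEGMENT_TYPE_AVF, pkg_hdr);
+ *	if (seg && !avf_write_profile(hw, seg, track_id))
+ *		avf_add_pinfo_to_list(hw, seg, pinfo_sec, track_id);
+ */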
diff --git a/drivers/net/avf/base/avf_devids.h b/drivers/net/avf/base/avf_devids.h
new file mode 100644
index 0000000..7d9fed2
--- /dev/null
+++ b/drivers/net/avf/base/avf_devids.h
@@ -0,0 +1,43 @@
+/*******************************************************************************
+
+Copyright (c) 2017, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_DEVIDS_H_
+#define _AVF_DEVIDS_H_
+
+/* Vendor ID */
+#define AVF_INTEL_VENDOR_ID		0x8086
+
+/* Device IDs */
+#define AVF_DEV_ID_ADAPTIVE_VF		0x1889
+
+#endif /* _AVF_DEVIDS_H_ */
diff --git a/drivers/net/avf/base/avf_hmc.h b/drivers/net/avf/base/avf_hmc.h
new file mode 100644
index 0000000..b9b7b5b
--- /dev/null
+++ b/drivers/net/avf/base/avf_hmc.h
@@ -0,0 +1,245 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_HMC_H_
+#define _AVF_HMC_H_
+
+#define AVF_HMC_MAX_BP_COUNT 512
+
+/* forward-declare the HW struct for the compiler */
+struct avf_hw;
+
+#define AVF_HMC_INFO_SIGNATURE		0x484D5347 /* HMSG */
+#define AVF_HMC_PD_CNT_IN_SD		512
+#define AVF_HMC_DIRECT_BP_SIZE		0x200000 /* 2M */
+#define AVF_HMC_PAGED_BP_SIZE		4096
+#define AVF_HMC_PD_BP_BUF_ALIGNMENT	4096
+#define AVF_FIRST_VF_FPM_ID		16
+
+struct avf_hmc_obj_info {
+	u64 base;	/* base addr in FPM */
+	u32 max_cnt;	/* max count available for this hmc func */
+	u32 cnt;	/* count of objects driver actually wants to create */
+	u64 size;	/* size in bytes of one object */
+};
+
+enum avf_sd_entry_type {
+	AVF_SD_TYPE_INVALID = 0,
+	AVF_SD_TYPE_PAGED   = 1,
+	AVF_SD_TYPE_DIRECT  = 2
+};
+
+struct avf_hmc_bp {
+	enum avf_sd_entry_type entry_type;
+	struct avf_dma_mem addr; /* populate to be used by hw */
+	u32 sd_pd_index;
+	u32 ref_cnt;
+};
+
+struct avf_hmc_pd_entry {
+	struct avf_hmc_bp bp;
+	u32 sd_index;
+	bool rsrc_pg;
+	bool valid;
+};
+
+struct avf_hmc_pd_table {
+	struct avf_dma_mem pd_page_addr; /* populate to be used by hw */
+	struct avf_hmc_pd_entry  *pd_entry; /* [512] for sw book keeping */
+	struct avf_virt_mem pd_entry_virt_mem; /* virt mem for pd_entry */
+
+	u32 ref_cnt;
+	u32 sd_index;
+};
+
+struct avf_hmc_sd_entry {
+	enum avf_sd_entry_type entry_type;
+	bool valid;
+
+	union {
+		struct avf_hmc_pd_table pd_table;
+		struct avf_hmc_bp bp;
+	} u;
+};
+
+struct avf_hmc_sd_table {
+	struct avf_virt_mem addr; /* used to track sd_entry allocations */
+	u32 sd_cnt;
+	u32 ref_cnt;
+	struct avf_hmc_sd_entry *sd_entry; /* (sd_cnt*512) entries max */
+};
+
+struct avf_hmc_info {
+	u32 signature;
+	/* equals to pci func num for PF and dynamically allocated for VFs */
+	u8 hmc_fn_id;
+	u16 first_sd_index; /* index of the first available SD */
+
+	/* hmc objects */
+	struct avf_hmc_obj_info *hmc_obj;
+	struct avf_virt_mem hmc_obj_virt_mem;
+	struct avf_hmc_sd_table sd_table;
+};
+
+#define AVF_INC_SD_REFCNT(sd_table)	((sd_table)->ref_cnt++)
+#define AVF_INC_PD_REFCNT(pd_table)	((pd_table)->ref_cnt++)
+#define AVF_INC_BP_REFCNT(bp)		((bp)->ref_cnt++)
+
+#define AVF_DEC_SD_REFCNT(sd_table)	((sd_table)->ref_cnt--)
+#define AVF_DEC_PD_REFCNT(pd_table)	((pd_table)->ref_cnt--)
+#define AVF_DEC_BP_REFCNT(bp)		((bp)->ref_cnt--)
+
+/**
+ * AVF_SET_PF_SD_ENTRY - marks the sd entry as valid in the hardware
+ * @hw: pointer to our hw struct
+ * @pa: pointer to physical address
+ * @sd_index: segment descriptor index
+ * @type: if sd entry is direct or paged
+ **/
+#define AVF_SET_PF_SD_ENTRY(hw, pa, sd_index, type)			\
+{									\
+	u32 val1, val2, val3;						\
+	val1 = (u32)(AVF_HI_DWORD(pa));				\
+	val2 = (u32)(pa) | (AVF_HMC_MAX_BP_COUNT <<			\
+		 AVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) |		\
+		((((type) == AVF_SD_TYPE_PAGED) ? 0 : 1) <<		\
+		AVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT) |			\
+		BIT(AVF_PFHMC_SDDATALOW_PMSDVALID_SHIFT);		\
+	val3 = (sd_index) | BIT_ULL(AVF_PFHMC_SDCMD_PMSDWR_SHIFT);	\
+	wr32((hw), AVF_PFHMC_SDDATAHIGH, val1);			\
+	wr32((hw), AVF_PFHMC_SDDATALOW, val2);				\
+	wr32((hw), AVF_PFHMC_SDCMD, val3);				\
+}
+
+/**
+ * AVF_CLEAR_PF_SD_ENTRY - marks the sd entry as invalid in the hardware
+ * @hw: pointer to our hw struct
+ * @sd_index: segment descriptor index
+ * @type: if sd entry is direct or paged
+ **/
+#define AVF_CLEAR_PF_SD_ENTRY(hw, sd_index, type)			\
+{									\
+	u32 val2, val3;							\
+	val2 = (AVF_HMC_MAX_BP_COUNT <<				\
+		AVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) |		\
+		((((type) == AVF_SD_TYPE_PAGED) ? 0 : 1) <<		\
+		AVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT);			\
+	val3 = (sd_index) | BIT_ULL(AVF_PFHMC_SDCMD_PMSDWR_SHIFT);	\
+	wr32((hw), AVF_PFHMC_SDDATAHIGH, 0);				\
+	wr32((hw), AVF_PFHMC_SDDATALOW, val2);				\
+	wr32((hw), AVF_PFHMC_SDCMD, val3);				\
+}
+
+/**
+ * AVF_INVALIDATE_PF_HMC_PD - Invalidates the pd cache in the hardware
+ * @hw: pointer to our hw struct
+ * @sd_idx: segment descriptor index
+ * @pd_idx: page descriptor index
+ **/
+#define AVF_INVALIDATE_PF_HMC_PD(hw, sd_idx, pd_idx)			\
+	wr32((hw), AVF_PFHMC_PDINV,					\
+	    (((sd_idx) << AVF_PFHMC_PDINV_PMSDIDX_SHIFT) |		\
+	     ((pd_idx) << AVF_PFHMC_PDINV_PMPDIDX_SHIFT)))
+
+/**
+ * AVF_FIND_SD_INDEX_LIMIT - finds segment descriptor index limit
+ * @hmc_info: pointer to the HMC configuration information structure
+ * @type: type of HMC resources we're searching
+ * @index: starting index for the object
+ * @cnt: number of objects we're trying to create
+ * @sd_idx: pointer to return index of the segment descriptor in question
+ * @sd_limit: pointer to return the maximum number of segment descriptors
+ *
+ * This function calculates the segment descriptor index and index limit
+ * for the resource defined by avf_hmc_rsrc_type.
+ **/
+#define AVF_FIND_SD_INDEX_LIMIT(hmc_info, type, index, cnt, sd_idx, sd_limit)\
+{									\
+	u64 fpm_addr, fpm_limit;					\
+	fpm_addr = (hmc_info)->hmc_obj[(type)].base +			\
+		   (hmc_info)->hmc_obj[(type)].size * (index);		\
+	fpm_limit = fpm_addr + (hmc_info)->hmc_obj[(type)].size * (cnt);\
+	*(sd_idx) = (u32)(fpm_addr / AVF_HMC_DIRECT_BP_SIZE);		\
+	*(sd_limit) = (u32)((fpm_limit - 1) / AVF_HMC_DIRECT_BP_SIZE);	\
+	/* add one more to the limit to correct our range */		\
+	*(sd_limit) += 1;						\
+}
+
+/**
+ * AVF_FIND_PD_INDEX_LIMIT - finds page descriptor index limit
+ * @hmc_info: pointer to the HMC configuration information struct
+ * @type: HMC resource type we're examining
+ * @idx: starting index for the object
+ * @cnt: number of objects we're trying to create
+ * @pd_index: pointer to return page descriptor index
+ * @pd_limit: pointer to return page descriptor index limit
+ *
+ * Calculates the page descriptor index and index limit for the resource
+ * defined by avf_hmc_rsrc_type.
+ **/
+#define AVF_FIND_PD_INDEX_LIMIT(hmc_info, type, idx, cnt, pd_index, pd_limit)\
+{									\
+	u64 fpm_adr, fpm_limit;						\
+	fpm_adr = (hmc_info)->hmc_obj[(type)].base +			\
+		  (hmc_info)->hmc_obj[(type)].size * (idx);		\
+	fpm_limit = fpm_adr + (hmc_info)->hmc_obj[(type)].size * (cnt);	\
+	*(pd_index) = (u32)(fpm_adr / AVF_HMC_PAGED_BP_SIZE);		\
+	*(pd_limit) = (u32)((fpm_limit - 1) / AVF_HMC_PAGED_BP_SIZE);	\
+	/* add one more to the limit to correct our range */		\
+	*(pd_limit) += 1;						\
+}
+enum avf_status_code avf_add_sd_table_entry(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 sd_index,
+					      enum avf_sd_entry_type type,
+					      u64 direct_mode_sz);
+
+enum avf_status_code avf_add_pd_table_entry(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 pd_index,
+					      struct avf_dma_mem *rsrc_pg);
+enum avf_status_code avf_remove_pd_bp(struct avf_hw *hw,
+					struct avf_hmc_info *hmc_info,
+					u32 idx);
+enum avf_status_code avf_prep_remove_sd_bp(struct avf_hmc_info *hmc_info,
+					     u32 idx);
+enum avf_status_code avf_remove_sd_bp_new(struct avf_hw *hw,
+					    struct avf_hmc_info *hmc_info,
+					    u32 idx, bool is_pf);
+enum avf_status_code avf_prep_remove_pd_page(struct avf_hmc_info *hmc_info,
+					       u32 idx);
+enum avf_status_code avf_remove_pd_page_new(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 idx, bool is_pf);
+
+#endif /* _AVF_HMC_H_ */
diff --git a/drivers/net/avf/base/avf_lan_hmc.h b/drivers/net/avf/base/avf_lan_hmc.h
new file mode 100644
index 0000000..48805d8
--- /dev/null
+++ b/drivers/net/avf/base/avf_lan_hmc.h
@@ -0,0 +1,200 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_LAN_HMC_H_
+#define _AVF_LAN_HMC_H_
+
+/* forward-declare the HW struct for the compiler */
+struct avf_hw;
+
+/* HMC element context information */
+
+/* Rx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct avf_hmc_obj_rxq {
+	u16 head;
+	u16 cpuid; /* bigger than needed, see above for reason */
+	u64 base;
+	u16 qlen;
+#define AVF_RXQ_CTX_DBUFF_SHIFT 7
+	u16 dbuff; /* bigger than needed, see above for reason */
+#define AVF_RXQ_CTX_HBUFF_SHIFT 6
+	u16 hbuff; /* bigger than needed, see above for reason */
+	u8  dtype;
+	u8  dsize;
+	u8  crcstrip;
+	u8  fc_ena;
+	u8  l2tsel;
+	u8  hsplit_0;
+	u8  hsplit_1;
+	u8  showiv;
+	u32 rxmax; /* bigger than needed, see above for reason */
+	u8  tphrdesc_ena;
+	u8  tphwdesc_ena;
+	u8  tphdata_ena;
+	u8  tphhead_ena;
+	u16 lrxqthresh; /* bigger than needed, see above for reason */
+	u8  prefena;	/* NOTE: normally must be set to 1 at init */
+};
+
+/* Tx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when
+ * the variable is at the top of a byte and crosses over into the next byte.
+ */
+struct avf_hmc_obj_txq {
+	u16 head;
+	u8  new_context;
+	u64 base;
+	u8  fc_ena;
+	u8  timesync_ena;
+	u8  fd_ena;
+	u8  alt_vlan_ena;
+	u16 thead_wb;
+	u8  cpuid;
+	u8  head_wb_ena;
+	u16 qlen;
+	u8  tphrdesc_ena;
+	u8  tphrpacket_ena;
+	u8  tphwdesc_ena;
+	u64 head_wb_addr;
+	u32 crc;
+	u16 rdylist;
+	u8  rdylist_act;
+};
+
+/* for hsplit_0 field of Rx HMC context */
+enum avf_hmc_obj_rx_hsplit_0 {
+	AVF_HMC_OBJ_RX_HSPLIT_0_NO_SPLIT      = 0,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_L2      = 1,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_IP      = 2,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_TCP_UDP = 4,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_SCTP    = 8,
+};
+
+/* fcoe_cntx and fcoe_filt are for debugging purpose only */
+struct avf_hmc_obj_fcoe_cntx {
+	u32 rsv[32];
+};
+
+struct avf_hmc_obj_fcoe_filt {
+	u32 rsv[8];
+};
+
+/* Context sizes for LAN objects */
+enum avf_hmc_lan_object_size {
+	AVF_HMC_LAN_OBJ_SZ_8   = 0x3,
+	AVF_HMC_LAN_OBJ_SZ_16  = 0x4,
+	AVF_HMC_LAN_OBJ_SZ_32  = 0x5,
+	AVF_HMC_LAN_OBJ_SZ_64  = 0x6,
+	AVF_HMC_LAN_OBJ_SZ_128 = 0x7,
+	AVF_HMC_LAN_OBJ_SZ_256 = 0x8,
+	AVF_HMC_LAN_OBJ_SZ_512 = 0x9,
+};
+
+#define AVF_HMC_L2OBJ_BASE_ALIGNMENT 512
+#define AVF_HMC_OBJ_SIZE_TXQ         128
+#define AVF_HMC_OBJ_SIZE_RXQ         32
+#define AVF_HMC_OBJ_SIZE_FCOE_CNTX   64
+#define AVF_HMC_OBJ_SIZE_FCOE_FILT   64
+
+enum avf_hmc_lan_rsrc_type {
+	AVF_HMC_LAN_FULL  = 0,
+	AVF_HMC_LAN_TX    = 1,
+	AVF_HMC_LAN_RX    = 2,
+	AVF_HMC_FCOE_CTX  = 3,
+	AVF_HMC_FCOE_FILT = 4,
+	AVF_HMC_LAN_MAX   = 5
+};
+
+enum avf_hmc_model {
+	AVF_HMC_MODEL_DIRECT_PREFERRED = 0,
+	AVF_HMC_MODEL_DIRECT_ONLY      = 1,
+	AVF_HMC_MODEL_PAGED_ONLY       = 2,
+	AVF_HMC_MODEL_UNKNOWN,
+};
+
+struct avf_hmc_lan_create_obj_info {
+	struct avf_hmc_info *hmc_info;
+	u32 rsrc_type;
+	u32 start_idx;
+	u32 count;
+	enum avf_sd_entry_type entry_type;
+	u64 direct_mode_sz;
+};
+
+struct avf_hmc_lan_delete_obj_info {
+	struct avf_hmc_info *hmc_info;
+	u32 rsrc_type;
+	u32 start_idx;
+	u32 count;
+};
+
+enum avf_status_code avf_init_lan_hmc(struct avf_hw *hw, u32 txq_num,
+					u32 rxq_num, u32 fcoe_cntx_num,
+					u32 fcoe_filt_num);
+enum avf_status_code avf_configure_lan_hmc(struct avf_hw *hw,
+					     enum avf_hmc_model model);
+enum avf_status_code avf_shutdown_lan_hmc(struct avf_hw *hw);
+
+u64 avf_calculate_l2fpm_size(u32 txq_num, u32 rxq_num,
+			      u32 fcoe_cntx_num, u32 fcoe_filt_num);
+enum avf_status_code avf_get_lan_tx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_txq *s);
+enum avf_status_code avf_clear_lan_tx_queue_context(struct avf_hw *hw,
+						      u16 queue);
+enum avf_status_code avf_set_lan_tx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_txq *s);
+enum avf_status_code avf_get_lan_rx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_rxq *s);
+enum avf_status_code avf_clear_lan_rx_queue_context(struct avf_hw *hw,
+						      u16 queue);
+enum avf_status_code avf_set_lan_rx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_rxq *s);
+enum avf_status_code avf_create_lan_hmc_object(struct avf_hw *hw,
+				struct avf_hmc_lan_create_obj_info *info);
+enum avf_status_code avf_delete_lan_hmc_object(struct avf_hw *hw,
+				struct avf_hmc_lan_delete_obj_info *info);
+
+#endif /* _AVF_LAN_HMC_H_ */
diff --git a/drivers/net/avf/base/avf_osdep.h b/drivers/net/avf/base/avf_osdep.h
new file mode 100644
index 0000000..268f97a
--- /dev/null
+++ b/drivers/net/avf/base/avf_osdep.h
@@ -0,0 +1,192 @@
+/******************************************************************************
+
+  Copyright (c) 2001-2015, Intel Corporation
+  All rights reserved.
+
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions are met:
+
+   1. Redistributions of source code must retain the above copyright notice,
+      this list of conditions and the following disclaimer.
+
+   2. Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in the
+      documentation and/or other materials provided with the distribution.
+
+   3. Neither the name of the Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived from
+      this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+  AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+  IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+  ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+  LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+  CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+  SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+  INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+  CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+  ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+  POSSIBILITY OF SUCH DAMAGE.
+******************************************************************************/
+
+#ifndef _AVF_OSDEP_H_
+#define _AVF_OSDEP_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdarg.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_io.h>
+
+#include "../avf_log.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t         u8;
+typedef int8_t          s8;
+typedef uint16_t        u16;
+typedef uint32_t        u32;
+typedef int32_t         s32;
+typedef uint64_t        u64;
+
+#define __iomem
+#define hw_dbg(hw, S, A...) do {} while (0)
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN                  6
+#endif
+
+#ifndef __le16
+#define __le16          uint16_t
+#endif
+#ifndef __le32
+#define __le32          uint32_t
+#endif
+#ifndef __le64
+#define __le64          uint64_t
+#endif
+#ifndef __be16
+#define __be16          uint16_t
+#endif
+#ifndef __be32
+#define __be32          uint32_t
+#endif
+#ifndef __be64
+#define __be64          uint64_t
+#endif
+
+#define FALSE           0
+#define TRUE            1
+#define false           0
+#define true            1
+
+#define min(a,b) RTE_MIN(a,b)
+#define max(a,b) RTE_MAX(a,b)
+
+#define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))
+#define ASSERT(x) if (!(x)) rte_panic("AVF: %s\n", #x)
+
+#define DEBUGOUT(S)             PMD_DRV_LOG_RAW(DEBUG, S)
+#define DEBUGOUT2(S, A...)      PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F)            DEBUGOUT(F "\n")
+
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define cpu_to_le16(o) rte_cpu_to_le_16(o)
+#define cpu_to_le32(s) rte_cpu_to_le_32(s)
+#define cpu_to_le64(h) rte_cpu_to_le_64(h)
+#define le16_to_cpu(a) rte_le_to_cpu_16(a)
+#define le32_to_cpu(c) rte_le_to_cpu_32(c)
+#define le64_to_cpu(k) rte_le_to_cpu_64(k)
+
+#define avf_memset(a, b, c, d) memset((a), (b), (c))
+#define avf_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
+
+#define avf_usec_delay(x) rte_delay_us(x)
+#define avf_msec_delay(x) rte_delay_us(1000*(x))
+
+#define AVF_PCI_REG(reg)		rte_read32(reg)
+#define AVF_PCI_REG_ADDR(a, reg) \
+	((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+
+#define AVF_PCI_REG_WRITE(reg, value)		\
+	rte_write32((rte_cpu_to_le_32(value)), reg)
+#define AVF_PCI_REG_WRITE_RELAXED(reg, value)	\
+	rte_write32_relaxed((rte_cpu_to_le_32(value)), reg)
+static inline
+uint32_t avf_read_addr(volatile void *addr)
+{
+	return rte_le_to_cpu_32(AVF_PCI_REG(addr));
+}
+
+#define AVF_READ_REG(hw, reg) \
+	avf_read_addr(AVF_PCI_REG_ADDR((hw), (reg)))
+#define AVF_WRITE_REG(hw, reg, value) \
+	AVF_PCI_REG_WRITE(AVF_PCI_REG_ADDR((hw), (reg)), (value))
+#define AVF_WRITE_FLUSH(a) \
+	AVF_READ_REG(a, AVFGEN_RSTAT)
+
+#define rd32(a, reg) avf_read_addr(AVF_PCI_REG_ADDR((a), (reg)))
+#define wr32(a, reg, value) \
+	AVF_PCI_REG_WRITE(AVF_PCI_REG_ADDR((a), (reg)), (value))
+
+#define ARRAY_SIZE(arr) (sizeof(arr)/sizeof(arr[0]))
+
+#define avf_debug(h, m, s, ...)                                \
+do {                                                            \
+	if (((m) & (h)->debug_mask))                            \
+		PMD_DRV_LOG_RAW(DEBUG, "avf %02x.%x " s,       \
+			(h)->bus.device, (h)->bus.func,         \
+					##__VA_ARGS__);         \
+} while (0)
+
+/* memory allocation tracking */
+struct avf_dma_mem {
+	void *va;
+	u64 pa;
+	u32 size;
+	const void *zone;
+} __attribute__((packed));
+
+struct avf_virt_mem {
+	void *va;
+	u32 size;
+} __attribute__((packed));
+
+/* SW spinlock */
+struct avf_spinlock {
+	rte_spinlock_t spinlock;
+};
+
+#define avf_allocate_dma_mem(h, m, unused, s, a) \
+			avf_allocate_dma_mem_d(h, m, s, a)
+#define avf_free_dma_mem(h, m) avf_free_dma_mem_d(h, m)
+
+#define avf_allocate_virt_mem(h, m, s) avf_allocate_virt_mem_d(h, m, s)
+#define avf_free_virt_mem(h, m) avf_free_virt_mem_d(h, m)
+
+#define avf_init_spinlock(_sp) avf_init_spinlock_d(_sp)
+#define avf_acquire_spinlock(_sp) avf_acquire_spinlock_d(_sp)
+#define avf_release_spinlock(_sp) avf_release_spinlock_d(_sp)
+#define avf_destroy_spinlock(_sp) avf_destroy_spinlock_d(_sp)
+
+#endif /* _AVF_OSDEP_H_ */
diff --git a/drivers/net/avf/base/avf_prototype.h b/drivers/net/avf/base/avf_prototype.h
new file mode 100644
index 0000000..de031dc
--- /dev/null
+++ b/drivers/net/avf/base/avf_prototype.h
@@ -0,0 +1,206 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_PROTOTYPE_H_
+#define _AVF_PROTOTYPE_H_
+
+#include "avf_type.h"
+#include "avf_alloc.h"
+#include "virtchnl.h"
+
+/* Prototypes for shared code functions that are not in
+ * the standard function pointer structures.  These are
+ * mostly because they are needed even before the init
+ * has happened and will assist in the early SW and FW
+ * setup.
+ */
+
+/* adminq functions */
+enum avf_status_code avf_init_adminq(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_adminq(struct avf_hw *hw);
+enum avf_status_code avf_init_asq(struct avf_hw *hw);
+enum avf_status_code avf_init_arq(struct avf_hw *hw);
+enum avf_status_code avf_alloc_adminq_asq_ring(struct avf_hw *hw);
+enum avf_status_code avf_alloc_adminq_arq_ring(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_asq(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_arq(struct avf_hw *hw);
+u16 avf_clean_asq(struct avf_hw *hw);
+void avf_free_adminq_asq(struct avf_hw *hw);
+void avf_free_adminq_arq(struct avf_hw *hw);
+enum avf_status_code avf_validate_mac_addr(u8 *mac_addr);
+void avf_adminq_init_ring_data(struct avf_hw *hw);
+enum avf_status_code avf_clean_arq_element(struct avf_hw *hw,
+					     struct avf_arq_event_info *e,
+					     u16 *events_pending);
+enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
+				struct avf_aq_desc *desc,
+				void *buff, /* can be NULL */
+				u16  buff_size,
+				struct avf_asq_cmd_details *cmd_details);
+bool avf_asq_done(struct avf_hw *hw);
+
+/* debug function for adminq */
+void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask,
+		   void *desc, void *buffer, u16 buf_len);
+
+void avf_idle_aq(struct avf_hw *hw);
+bool avf_check_asq_alive(struct avf_hw *hw);
+enum avf_status_code avf_aq_queue_shutdown(struct avf_hw *hw, bool unloading);
+
+enum avf_status_code avf_aq_get_rss_lut(struct avf_hw *hw, u16 seid,
+					  bool pf_lut, u8 *lut, u16 lut_size);
+enum avf_status_code avf_aq_set_rss_lut(struct avf_hw *hw, u16 seid,
+					  bool pf_lut, u8 *lut, u16 lut_size);
+enum avf_status_code avf_aq_get_rss_key(struct avf_hw *hw,
+				     u16 seid,
+				     struct avf_aqc_get_set_rss_key_data *key);
+enum avf_status_code avf_aq_set_rss_key(struct avf_hw *hw,
+				     u16 seid,
+				     struct avf_aqc_get_set_rss_key_data *key);
+const char *avf_aq_str(struct avf_hw *hw, enum avf_admin_queue_err aq_err);
+const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err);
+
+
+enum avf_status_code avf_set_mac_type(struct avf_hw *hw);
+
+extern struct avf_rx_ptype_decoded avf_ptype_lookup[];
+
+STATIC INLINE struct avf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
+{
+	return avf_ptype_lookup[ptype];
+}
+
+/* prototype for functions used for SW spinlocks */
+void avf_init_spinlock(struct avf_spinlock *sp);
+void avf_acquire_spinlock(struct avf_spinlock *sp);
+void avf_release_spinlock(struct avf_spinlock *sp);
+void avf_destroy_spinlock(struct avf_spinlock *sp);
+
+/* avf_common for VF drivers*/
+void avf_parse_hw_config(struct avf_hw *hw,
+			     struct virtchnl_vf_resource *msg);
+enum avf_status_code avf_reset(struct avf_hw *hw);
+enum avf_status_code avf_aq_send_msg_to_pf(struct avf_hw *hw,
+				enum virtchnl_ops v_opcode,
+				enum avf_status_code v_retval,
+				u8 *msg, u16 msglen,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_set_filter_control(struct avf_hw *hw,
+				struct avf_filter_control_settings *settings);
+enum avf_status_code avf_aq_add_rem_control_packet_filter(struct avf_hw *hw,
+				u8 *mac_addr, u16 ethtype, u16 flags,
+				u16 vsi_seid, u16 queue, bool is_add,
+				struct avf_control_filter_stats *stats,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_debug_dump(struct avf_hw *hw, u8 cluster_id,
+				u8 table_id, u32 start_index, u16 buff_size,
+				void *buff, u16 *ret_buff_size,
+				u8 *ret_next_table, u32 *ret_next_index,
+				struct avf_asq_cmd_details *cmd_details);
+void avf_add_filter_to_drop_tx_flow_control_frames(struct avf_hw *hw,
+						    u16 vsi_seid);
+enum avf_status_code avf_aq_rx_ctl_read_register(struct avf_hw *hw,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr);
+enum avf_status_code avf_aq_rx_ctl_write_register(struct avf_hw *hw,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val);
+enum avf_status_code avf_aq_set_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+
+enum avf_status_code avf_aq_set_arp_proxy_config(struct avf_hw *hw,
+			struct avf_aqc_arp_proxy_data *proxy_config,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_set_ns_proxy_table_entry(struct avf_hw *hw,
+			struct avf_aqc_ns_proxy_data *ns_proxy_table_entry,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_set_clear_wol_filter(struct avf_hw *hw,
+			u8 filter_index,
+			struct avf_aqc_set_wol_filter_data *filter,
+			bool set_filter, bool no_wol_tco,
+			bool filter_valid, bool no_wol_tco_valid,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_get_wake_event_reason(struct avf_hw *hw,
+			u16 *wake_reason,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_clear_all_wol_filters(struct avf_hw *hw,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_read_phy_register_clause22(struct avf_hw *hw,
+					u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register_clause22(struct avf_hw *hw,
+					u16 reg, u8 phy_addr, u16 value);
+enum avf_status_code avf_read_phy_register_clause45(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register_clause45(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 value);
+enum avf_status_code avf_read_phy_register(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 value);
+u8 avf_get_phy_address(struct avf_hw *hw, u8 dev_num);
+enum avf_status_code avf_blink_phy_link_led(struct avf_hw *hw,
+					      u32 time, u32 interval);
+enum avf_status_code avf_aq_write_ddp(struct avf_hw *hw, void *buff,
+					u16 buff_size, u32 track_id,
+					u32 *error_offset, u32 *error_info,
+					struct avf_asq_cmd_details *
+					cmd_details);
+enum avf_status_code avf_aq_get_ddp_list(struct avf_hw *hw, void *buff,
+					   u16 buff_size, u8 flags,
+					   struct avf_asq_cmd_details *
+					   cmd_details);
+struct avf_generic_seg_header *
+avf_find_segment_in_package(u32 segment_type,
+			     struct avf_package_header *pkg_header);
+struct avf_profile_section_header *
+avf_find_section_in_profile(u32 section_type,
+			     struct avf_profile_segment *profile);
+enum avf_status_code
+avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *avf_seg,
+		   u32 track_id);
+enum avf_status_code
+avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *avf_seg,
+		      u32 track_id);
+enum avf_status_code
+avf_add_pinfo_to_list(struct avf_hw *hw,
+		       struct avf_profile_segment *profile,
+		       u8 *profile_info_sec, u32 track_id);
+#endif /* _AVF_PROTOTYPE_H_ */
diff --git a/drivers/net/avf/base/avf_register.h b/drivers/net/avf/base/avf_register.h
new file mode 100644
index 0000000..ba5a9f3
--- /dev/null
+++ b/drivers/net/avf/base/avf_register.h
@@ -0,0 +1,346 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_REGISTER_H_
+#define _AVF_REGISTER_H_
+
+
+#define AVFMSIX_PBA1(_i)          (0x00002000 + ((_i) * 4)) /* _i=0...19 */ /* Reset: VFLR */
+#define AVFMSIX_PBA1_MAX_INDEX    19
+#define AVFMSIX_PBA1_PENBIT_SHIFT 0
+#define AVFMSIX_PBA1_PENBIT_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_PBA1_PENBIT_SHIFT)
+#define AVFMSIX_TADD1(_i)              (0x00002100 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TADD1_MAX_INDEX        639
+#define AVFMSIX_TADD1_MSIXTADD10_SHIFT 0
+#define AVFMSIX_TADD1_MSIXTADD10_MASK  AVF_MASK(0x3, AVFMSIX_TADD1_MSIXTADD10_SHIFT)
+#define AVFMSIX_TADD1_MSIXTADD_SHIFT   2
+#define AVFMSIX_TADD1_MSIXTADD_MASK    AVF_MASK(0x3FFFFFFF, AVFMSIX_TADD1_MSIXTADD_SHIFT)
+#define AVFMSIX_TMSG1(_i)            (0x00002108 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TMSG1_MAX_INDEX      639
+#define AVFMSIX_TMSG1_MSIXTMSG_SHIFT 0
+#define AVFMSIX_TMSG1_MSIXTMSG_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TMSG1_MSIXTMSG_SHIFT)
+#define AVFMSIX_TUADD1(_i)             (0x00002104 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TUADD1_MAX_INDEX       639
+#define AVFMSIX_TUADD1_MSIXTUADD_SHIFT 0
+#define AVFMSIX_TUADD1_MSIXTUADD_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TUADD1_MSIXTUADD_SHIFT)
+#define AVFMSIX_TVCTRL1(_i)        (0x0000210C + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TVCTRL1_MAX_INDEX  639
+#define AVFMSIX_TVCTRL1_MASK_SHIFT 0
+#define AVFMSIX_TVCTRL1_MASK_MASK  AVF_MASK(0x1, AVFMSIX_TVCTRL1_MASK_SHIFT)
+#define AVF_ARQBAH1              0x00006000 /* Reset: EMPR */
+#define AVF_ARQBAH1_ARQBAH_SHIFT 0
+#define AVF_ARQBAH1_ARQBAH_MASK  AVF_MASK(0xFFFFFFFF, AVF_ARQBAH1_ARQBAH_SHIFT)
+#define AVF_ARQBAL1              0x00006C00 /* Reset: EMPR */
+#define AVF_ARQBAL1_ARQBAL_SHIFT 0
+#define AVF_ARQBAL1_ARQBAL_MASK  AVF_MASK(0xFFFFFFFF, AVF_ARQBAL1_ARQBAL_SHIFT)
+#define AVF_ARQH1            0x00007400 /* Reset: EMPR */
+#define AVF_ARQH1_ARQH_SHIFT 0
+#define AVF_ARQH1_ARQH_MASK  AVF_MASK(0x3FF, AVF_ARQH1_ARQH_SHIFT)
+#define AVF_ARQLEN1                 0x00008000 /* Reset: EMPR */
+#define AVF_ARQLEN1_ARQLEN_SHIFT    0
+#define AVF_ARQLEN1_ARQLEN_MASK     AVF_MASK(0x3FF, AVF_ARQLEN1_ARQLEN_SHIFT)
+#define AVF_ARQLEN1_ARQVFE_SHIFT    28
+#define AVF_ARQLEN1_ARQVFE_MASK     AVF_MASK(0x1, AVF_ARQLEN1_ARQVFE_SHIFT)
+#define AVF_ARQLEN1_ARQOVFL_SHIFT   29
+#define AVF_ARQLEN1_ARQOVFL_MASK    AVF_MASK(0x1, AVF_ARQLEN1_ARQOVFL_SHIFT)
+#define AVF_ARQLEN1_ARQCRIT_SHIFT   30
+#define AVF_ARQLEN1_ARQCRIT_MASK    AVF_MASK(0x1, AVF_ARQLEN1_ARQCRIT_SHIFT)
+#define AVF_ARQLEN1_ARQENABLE_SHIFT 31
+#define AVF_ARQLEN1_ARQENABLE_MASK  AVF_MASK(0x1, AVF_ARQLEN1_ARQENABLE_SHIFT)
+#define AVF_ARQT1            0x00007000 /* Reset: EMPR */
+#define AVF_ARQT1_ARQT_SHIFT 0
+#define AVF_ARQT1_ARQT_MASK  AVF_MASK(0x3FF, AVF_ARQT1_ARQT_SHIFT)
+#define AVF_ATQBAH1              0x00007800 /* Reset: EMPR */
+#define AVF_ATQBAH1_ATQBAH_SHIFT 0
+#define AVF_ATQBAH1_ATQBAH_MASK  AVF_MASK(0xFFFFFFFF, AVF_ATQBAH1_ATQBAH_SHIFT)
+#define AVF_ATQBAL1              0x00007C00 /* Reset: EMPR */
+#define AVF_ATQBAL1_ATQBAL_SHIFT 0
+#define AVF_ATQBAL1_ATQBAL_MASK  AVF_MASK(0xFFFFFFFF, AVF_ATQBAL1_ATQBAL_SHIFT)
+#define AVF_ATQH1            0x00006400 /* Reset: EMPR */
+#define AVF_ATQH1_ATQH_SHIFT 0
+#define AVF_ATQH1_ATQH_MASK  AVF_MASK(0x3FF, AVF_ATQH1_ATQH_SHIFT)
+#define AVF_ATQLEN1                 0x00006800 /* Reset: EMPR */
+#define AVF_ATQLEN1_ATQLEN_SHIFT    0
+#define AVF_ATQLEN1_ATQLEN_MASK     AVF_MASK(0x3FF, AVF_ATQLEN1_ATQLEN_SHIFT)
+#define AVF_ATQLEN1_ATQVFE_SHIFT    28
+#define AVF_ATQLEN1_ATQVFE_MASK     AVF_MASK(0x1, AVF_ATQLEN1_ATQVFE_SHIFT)
+#define AVF_ATQLEN1_ATQOVFL_SHIFT   29
+#define AVF_ATQLEN1_ATQOVFL_MASK    AVF_MASK(0x1, AVF_ATQLEN1_ATQOVFL_SHIFT)
+#define AVF_ATQLEN1_ATQCRIT_SHIFT   30
+#define AVF_ATQLEN1_ATQCRIT_MASK    AVF_MASK(0x1, AVF_ATQLEN1_ATQCRIT_SHIFT)
+#define AVF_ATQLEN1_ATQENABLE_SHIFT 31
+#define AVF_ATQLEN1_ATQENABLE_MASK  AVF_MASK(0x1, AVF_ATQLEN1_ATQENABLE_SHIFT)
+#define AVF_ATQT1            0x00008400 /* Reset: EMPR */
+#define AVF_ATQT1_ATQT_SHIFT 0
+#define AVF_ATQT1_ATQT_MASK  AVF_MASK(0x3FF, AVF_ATQT1_ATQT_SHIFT)
+#define AVFGEN_RSTAT                 0x00008800 /* Reset: VFR */
+#define AVFGEN_RSTAT_VFR_STATE_SHIFT 0
+#define AVFGEN_RSTAT_VFR_STATE_MASK  AVF_MASK(0x3, AVFGEN_RSTAT_VFR_STATE_SHIFT)
+#define AVFINT_DYN_CTL01                       0x00005C00 /* Reset: VFR */
+#define AVFINT_DYN_CTL01_INTENA_SHIFT          0
+#define AVFINT_DYN_CTL01_INTENA_MASK           AVF_MASK(0x1, AVFINT_DYN_CTL01_INTENA_SHIFT)
+#define AVFINT_DYN_CTL01_CLEARPBA_SHIFT        1
+#define AVFINT_DYN_CTL01_CLEARPBA_MASK         AVF_MASK(0x1, AVFINT_DYN_CTL01_CLEARPBA_SHIFT)
+#define AVFINT_DYN_CTL01_SWINT_TRIG_SHIFT      2
+#define AVFINT_DYN_CTL01_SWINT_TRIG_MASK       AVF_MASK(0x1, AVFINT_DYN_CTL01_SWINT_TRIG_SHIFT)
+#define AVFINT_DYN_CTL01_ITR_INDX_SHIFT        3
+#define AVFINT_DYN_CTL01_ITR_INDX_MASK         AVF_MASK(0x3, AVFINT_DYN_CTL01_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTL01_INTERVAL_SHIFT        5
+#define AVFINT_DYN_CTL01_INTERVAL_MASK         AVF_MASK(0xFFF, AVFINT_DYN_CTL01_INTERVAL_SHIFT)
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT 24
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_MASK  AVF_MASK(0x1, AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT)
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT     25
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_MASK      AVF_MASK(0x3, AVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTL01_INTENA_MSK_SHIFT      31
+#define AVFINT_DYN_CTL01_INTENA_MSK_MASK       AVF_MASK(0x1, AVFINT_DYN_CTL01_INTENA_MSK_SHIFT)
+#define AVFINT_DYN_CTLN1(_INTVF)               (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */
+#define AVFINT_DYN_CTLN1_MAX_INDEX             15
+#define AVFINT_DYN_CTLN1_INTENA_SHIFT          0
+#define AVFINT_DYN_CTLN1_INTENA_MASK           AVF_MASK(0x1, AVFINT_DYN_CTLN1_INTENA_SHIFT)
+#define AVFINT_DYN_CTLN1_CLEARPBA_SHIFT        1
+#define AVFINT_DYN_CTLN1_CLEARPBA_MASK         AVF_MASK(0x1, AVFINT_DYN_CTLN1_CLEARPBA_SHIFT)
+#define AVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT      2
+#define AVFINT_DYN_CTLN1_SWINT_TRIG_MASK       AVF_MASK(0x1, AVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT)
+#define AVFINT_DYN_CTLN1_ITR_INDX_SHIFT        3
+#define AVFINT_DYN_CTLN1_ITR_INDX_MASK         AVF_MASK(0x3, AVFINT_DYN_CTLN1_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTLN1_INTERVAL_SHIFT        5
+#define AVFINT_DYN_CTLN1_INTERVAL_MASK         AVF_MASK(0xFFF, AVFINT_DYN_CTLN1_INTERVAL_SHIFT)
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT 24
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_MASK  AVF_MASK(0x1, AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT)
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT     25
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_MASK      AVF_MASK(0x3, AVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTLN1_INTENA_MSK_SHIFT      31
+#define AVFINT_DYN_CTLN1_INTENA_MSK_MASK       AVF_MASK(0x1, AVFINT_DYN_CTLN1_INTENA_MSK_SHIFT)
+#define AVFINT_ICR0_ENA1                        0x00005000 /* Reset: CORER */
+#define AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT 25
+#define AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_MASK  AVF_MASK(0x1, AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT)
+#define AVFINT_ICR0_ENA1_ADMINQ_SHIFT           30
+#define AVFINT_ICR0_ENA1_ADMINQ_MASK            AVF_MASK(0x1, AVFINT_ICR0_ENA1_ADMINQ_SHIFT)
+#define AVFINT_ICR0_ENA1_RSVD_SHIFT             31
+#define AVFINT_ICR0_ENA1_RSVD_MASK              AVF_MASK(0x1, AVFINT_ICR0_ENA1_RSVD_SHIFT)
+#define AVFINT_ICR01                        0x00004800 /* Reset: CORER */
+#define AVFINT_ICR01_INTEVENT_SHIFT         0
+#define AVFINT_ICR01_INTEVENT_MASK          AVF_MASK(0x1, AVFINT_ICR01_INTEVENT_SHIFT)
+#define AVFINT_ICR01_QUEUE_0_SHIFT          1
+#define AVFINT_ICR01_QUEUE_0_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_0_SHIFT)
+#define AVFINT_ICR01_QUEUE_1_SHIFT          2
+#define AVFINT_ICR01_QUEUE_1_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_1_SHIFT)
+#define AVFINT_ICR01_QUEUE_2_SHIFT          3
+#define AVFINT_ICR01_QUEUE_2_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_2_SHIFT)
+#define AVFINT_ICR01_QUEUE_3_SHIFT          4
+#define AVFINT_ICR01_QUEUE_3_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_3_SHIFT)
+#define AVFINT_ICR01_LINK_STAT_CHANGE_SHIFT 25
+#define AVFINT_ICR01_LINK_STAT_CHANGE_MASK  AVF_MASK(0x1, AVFINT_ICR01_LINK_STAT_CHANGE_SHIFT)
+#define AVFINT_ICR01_ADMINQ_SHIFT           30
+#define AVFINT_ICR01_ADMINQ_MASK            AVF_MASK(0x1, AVFINT_ICR01_ADMINQ_SHIFT)
+#define AVFINT_ICR01_SWINT_SHIFT            31
+#define AVFINT_ICR01_SWINT_MASK             AVF_MASK(0x1, AVFINT_ICR01_SWINT_SHIFT)
+#define AVFINT_ITR01(_i)            (0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset: VFR */
+#define AVFINT_ITR01_MAX_INDEX      2
+#define AVFINT_ITR01_INTERVAL_SHIFT 0
+#define AVFINT_ITR01_INTERVAL_MASK  AVF_MASK(0xFFF, AVFINT_ITR01_INTERVAL_SHIFT)
+#define AVFINT_ITRN1(_i, _INTVF)     (0x00002800 + ((_i) * 64 + (_INTVF) * 4)) /* _i=0...2, _INTVF=0...15 */ /* Reset: VFR */
+#define AVFINT_ITRN1_MAX_INDEX      2
+#define AVFINT_ITRN1_INTERVAL_SHIFT 0
+#define AVFINT_ITRN1_INTERVAL_MASK  AVF_MASK(0xFFF, AVFINT_ITRN1_INTERVAL_SHIFT)
+#define AVFINT_STAT_CTL01                      0x00005400 /* Reset: CORER */
+#define AVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT 2
+#define AVFINT_STAT_CTL01_OTHER_ITR_INDX_MASK  AVF_MASK(0x3, AVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT)
+#define AVF_QRX_TAIL1(_Q)        (0x00002000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define AVF_QRX_TAIL1_MAX_INDEX  15
+#define AVF_QRX_TAIL1_TAIL_SHIFT 0
+#define AVF_QRX_TAIL1_TAIL_MASK  AVF_MASK(0x1FFF, AVF_QRX_TAIL1_TAIL_SHIFT)
+#define AVF_QTX_TAIL1(_Q)        (0x00000000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: PFR */
+#define AVF_QTX_TAIL1_MAX_INDEX  15
+#define AVF_QTX_TAIL1_TAIL_SHIFT 0
+#define AVF_QTX_TAIL1_TAIL_MASK  AVF_MASK(0x1FFF, AVF_QTX_TAIL1_TAIL_SHIFT)
+#define AVFMSIX_PBA              0x00002000 /* Reset: VFLR */
+#define AVFMSIX_PBA_PENBIT_SHIFT 0
+#define AVFMSIX_PBA_PENBIT_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_PBA_PENBIT_SHIFT)
+#define AVFMSIX_TADD(_i)              (0x00000000 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TADD_MAX_INDEX        16
+#define AVFMSIX_TADD_MSIXTADD10_SHIFT 0
+#define AVFMSIX_TADD_MSIXTADD10_MASK  AVF_MASK(0x3, AVFMSIX_TADD_MSIXTADD10_SHIFT)
+#define AVFMSIX_TADD_MSIXTADD_SHIFT   2
+#define AVFMSIX_TADD_MSIXTADD_MASK    AVF_MASK(0x3FFFFFFF, AVFMSIX_TADD_MSIXTADD_SHIFT)
+#define AVFMSIX_TMSG(_i)            (0x00000008 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TMSG_MAX_INDEX      16
+#define AVFMSIX_TMSG_MSIXTMSG_SHIFT 0
+#define AVFMSIX_TMSG_MSIXTMSG_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TMSG_MSIXTMSG_SHIFT)
+#define AVFMSIX_TUADD(_i)             (0x00000004 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TUADD_MAX_INDEX       16
+#define AVFMSIX_TUADD_MSIXTUADD_SHIFT 0
+#define AVFMSIX_TUADD_MSIXTUADD_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TUADD_MSIXTUADD_SHIFT)
+#define AVFMSIX_TVCTRL(_i)        (0x0000000C + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TVCTRL_MAX_INDEX  16
+#define AVFMSIX_TVCTRL_MASK_SHIFT 0
+#define AVFMSIX_TVCTRL_MASK_MASK  AVF_MASK(0x1, AVFMSIX_TVCTRL_MASK_SHIFT)
+#define AVFCM_PE_ERRDATA                  0x0000DC00 /* Reset: VFR */
+#define AVFCM_PE_ERRDATA_ERROR_CODE_SHIFT 0
+#define AVFCM_PE_ERRDATA_ERROR_CODE_MASK  AVF_MASK(0xF, AVFCM_PE_ERRDATA_ERROR_CODE_SHIFT)
+#define AVFCM_PE_ERRDATA_Q_TYPE_SHIFT     4
+#define AVFCM_PE_ERRDATA_Q_TYPE_MASK      AVF_MASK(0x7, AVFCM_PE_ERRDATA_Q_TYPE_SHIFT)
+#define AVFCM_PE_ERRDATA_Q_NUM_SHIFT      8
+#define AVFCM_PE_ERRDATA_Q_NUM_MASK       AVF_MASK(0x3FFFF, AVFCM_PE_ERRDATA_Q_NUM_SHIFT)
+#define AVFCM_PE_ERRINFO                     0x0000D800 /* Reset: VFR */
+#define AVFCM_PE_ERRINFO_ERROR_VALID_SHIFT   0
+#define AVFCM_PE_ERRINFO_ERROR_VALID_MASK    AVF_MASK(0x1, AVFCM_PE_ERRINFO_ERROR_VALID_SHIFT)
+#define AVFCM_PE_ERRINFO_ERROR_INST_SHIFT    4
+#define AVFCM_PE_ERRINFO_ERROR_INST_MASK     AVF_MASK(0x7, AVFCM_PE_ERRINFO_ERROR_INST_SHIFT)
+#define AVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT 8
+#define AVFCM_PE_ERRINFO_DBL_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT)
+#define AVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT 16
+#define AVFCM_PE_ERRINFO_RLU_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT)
+#define AVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT 24
+#define AVFCM_PE_ERRINFO_RLS_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT)
+#define AVFQF_HENA(_i)             (0x0000C400 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */
+#define AVFQF_HENA_MAX_INDEX       1
+#define AVFQF_HENA_PTYPE_ENA_SHIFT 0
+#define AVFQF_HENA_PTYPE_ENA_MASK  AVF_MASK(0xFFFFFFFF, AVFQF_HENA_PTYPE_ENA_SHIFT)
+#define AVFQF_HKEY(_i)         (0x0000CC00 + ((_i) * 4)) /* _i=0...12 */ /* Reset: CORER */
+#define AVFQF_HKEY_MAX_INDEX   12
+#define AVFQF_HKEY_KEY_0_SHIFT 0
+#define AVFQF_HKEY_KEY_0_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_0_SHIFT)
+#define AVFQF_HKEY_KEY_1_SHIFT 8
+#define AVFQF_HKEY_KEY_1_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_1_SHIFT)
+#define AVFQF_HKEY_KEY_2_SHIFT 16
+#define AVFQF_HKEY_KEY_2_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_2_SHIFT)
+#define AVFQF_HKEY_KEY_3_SHIFT 24
+#define AVFQF_HKEY_KEY_3_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_3_SHIFT)
+#define AVFQF_HLUT(_i)        (0x0000D000 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define AVFQF_HLUT_MAX_INDEX  15
+#define AVFQF_HLUT_LUT0_SHIFT 0
+#define AVFQF_HLUT_LUT0_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT0_SHIFT)
+#define AVFQF_HLUT_LUT1_SHIFT 8
+#define AVFQF_HLUT_LUT1_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT1_SHIFT)
+#define AVFQF_HLUT_LUT2_SHIFT 16
+#define AVFQF_HLUT_LUT2_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT2_SHIFT)
+#define AVFQF_HLUT_LUT3_SHIFT 24
+#define AVFQF_HLUT_LUT3_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT3_SHIFT)
+#define AVFQF_HREGION(_i)                  (0x0000D400 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */
+#define AVFQF_HREGION_MAX_INDEX            7
+#define AVFQF_HREGION_OVERRIDE_ENA_0_SHIFT 0
+#define AVFQF_HREGION_OVERRIDE_ENA_0_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_0_SHIFT)
+#define AVFQF_HREGION_REGION_0_SHIFT       1
+#define AVFQF_HREGION_REGION_0_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_0_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_1_SHIFT 4
+#define AVFQF_HREGION_OVERRIDE_ENA_1_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_1_SHIFT)
+#define AVFQF_HREGION_REGION_1_SHIFT       5
+#define AVFQF_HREGION_REGION_1_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_1_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_2_SHIFT 8
+#define AVFQF_HREGION_OVERRIDE_ENA_2_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_2_SHIFT)
+#define AVFQF_HREGION_REGION_2_SHIFT       9
+#define AVFQF_HREGION_REGION_2_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_2_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_3_SHIFT 12
+#define AVFQF_HREGION_OVERRIDE_ENA_3_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_3_SHIFT)
+#define AVFQF_HREGION_REGION_3_SHIFT       13
+#define AVFQF_HREGION_REGION_3_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_3_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_4_SHIFT 16
+#define AVFQF_HREGION_OVERRIDE_ENA_4_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_4_SHIFT)
+#define AVFQF_HREGION_REGION_4_SHIFT       17
+#define AVFQF_HREGION_REGION_4_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_4_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_5_SHIFT 20
+#define AVFQF_HREGION_OVERRIDE_ENA_5_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_5_SHIFT)
+#define AVFQF_HREGION_REGION_5_SHIFT       21
+#define AVFQF_HREGION_REGION_5_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_5_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_6_SHIFT 24
+#define AVFQF_HREGION_OVERRIDE_ENA_6_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_6_SHIFT)
+#define AVFQF_HREGION_REGION_6_SHIFT       25
+#define AVFQF_HREGION_REGION_6_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_6_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_7_SHIFT 28
+#define AVFQF_HREGION_OVERRIDE_ENA_7_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_7_SHIFT)
+#define AVFQF_HREGION_REGION_7_SHIFT       29
+#define AVFQF_HREGION_REGION_7_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_7_SHIFT)
+
+#define AVFINT_DYN_CTL01_WB_ON_ITR_SHIFT       30
+#define AVFINT_DYN_CTL01_WB_ON_ITR_MASK        AVF_MASK(0x1, AVFINT_DYN_CTL01_WB_ON_ITR_SHIFT)
+#define AVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT       30
+#define AVFINT_DYN_CTLN1_WB_ON_ITR_MASK        AVF_MASK(0x1, AVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT)
+#define AVFPE_AEQALLOC1               0x0000A400 /* Reset: VFR */
+#define AVFPE_AEQALLOC1_AECOUNT_SHIFT 0
+#define AVFPE_AEQALLOC1_AECOUNT_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_AEQALLOC1_AECOUNT_SHIFT)
+#define AVFPE_CCQPHIGH1                  0x00009800 /* Reset: VFR */
+#define AVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT 0
+#define AVFPE_CCQPHIGH1_PECCQPHIGH_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT)
+#define AVFPE_CCQPLOW1                 0x0000AC00 /* Reset: VFR */
+#define AVFPE_CCQPLOW1_PECCQPLOW_SHIFT 0
+#define AVFPE_CCQPLOW1_PECCQPLOW_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_CCQPLOW1_PECCQPLOW_SHIFT)
+#define AVFPE_CCQPSTATUS1                   0x0000B800 /* Reset: VFR */
+#define AVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT   0
+#define AVFPE_CCQPSTATUS1_CCQP_DONE_MASK    AVF_MASK(0x1, AVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT)
+#define AVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT 4
+#define AVFPE_CCQPSTATUS1_HMC_PROFILE_MASK  AVF_MASK(0x7, AVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT)
+#define AVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT 16
+#define AVFPE_CCQPSTATUS1_RDMA_EN_VFS_MASK  AVF_MASK(0x3F, AVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT)
+#define AVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT    31
+#define AVFPE_CCQPSTATUS1_CCQP_ERR_MASK     AVF_MASK(0x1, AVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT)
+#define AVFPE_CQACK1              0x0000B000 /* Reset: VFR */
+#define AVFPE_CQACK1_PECQID_SHIFT 0
+#define AVFPE_CQACK1_PECQID_MASK  AVF_MASK(0x1FFFF, AVFPE_CQACK1_PECQID_SHIFT)
+#define AVFPE_CQARM1              0x0000B400 /* Reset: VFR */
+#define AVFPE_CQARM1_PECQID_SHIFT 0
+#define AVFPE_CQARM1_PECQID_MASK  AVF_MASK(0x1FFFF, AVFPE_CQARM1_PECQID_SHIFT)
+#define AVFPE_CQPDB1              0x0000BC00 /* Reset: VFR */
+#define AVFPE_CQPDB1_WQHEAD_SHIFT 0
+#define AVFPE_CQPDB1_WQHEAD_MASK  AVF_MASK(0x7FF, AVFPE_CQPDB1_WQHEAD_SHIFT)
+#define AVFPE_CQPERRCODES1                      0x00009C00 /* Reset: VFR */
+#define AVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT 0
+#define AVFPE_CQPERRCODES1_CQP_MINOR_CODE_MASK  AVF_MASK(0xFFFF, AVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT)
+#define AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT 16
+#define AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_MASK  AVF_MASK(0xFFFF, AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT)
+#define AVFPE_CQPTAIL1                  0x0000A000 /* Reset: VFR */
+#define AVFPE_CQPTAIL1_WQTAIL_SHIFT     0
+#define AVFPE_CQPTAIL1_WQTAIL_MASK      AVF_MASK(0x7FF, AVFPE_CQPTAIL1_WQTAIL_SHIFT)
+#define AVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT 31
+#define AVFPE_CQPTAIL1_CQP_OP_ERR_MASK  AVF_MASK(0x1, AVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT)
+#define AVFPE_IPCONFIG01                        0x00008C00 /* Reset: VFR */
+#define AVFPE_IPCONFIG01_PEIPID_SHIFT           0
+#define AVFPE_IPCONFIG01_PEIPID_MASK            AVF_MASK(0xFFFF, AVFPE_IPCONFIG01_PEIPID_SHIFT)
+#define AVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT 16
+#define AVFPE_IPCONFIG01_USEENTIREIDRANGE_MASK  AVF_MASK(0x1, AVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT)
+#define AVFPE_MRTEIDXMASK1                       0x00009000 /* Reset: VFR */
+#define AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT 0
+#define AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_MASK  AVF_MASK(0x1F, AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT)
+#define AVFPE_RCVUNEXPECTEDERROR1                        0x00009400 /* Reset: VFR */
+#define AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT 0
+#define AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_MASK  AVF_MASK(0xFFFFFF, AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT)
+#define AVFPE_TCPNOWTIMER1               0x0000A800 /* Reset: VFR */
+#define AVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT 0
+#define AVFPE_TCPNOWTIMER1_TCP_NOW_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT)
+#define AVFPE_WQEALLOC1                      0x0000C000 /* Reset: VFR */
+#define AVFPE_WQEALLOC1_PEQPID_SHIFT         0
+#define AVFPE_WQEALLOC1_PEQPID_MASK          AVF_MASK(0x3FFFF, AVFPE_WQEALLOC1_PEQPID_SHIFT)
+#define AVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT 20
+#define AVFPE_WQEALLOC1_WQE_DESC_INDEX_MASK  AVF_MASK(0xFFF, AVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT)
+
+#endif /* _AVF_REGISTER_H_ */
diff --git a/drivers/net/avf/base/avf_status.h b/drivers/net/avf/base/avf_status.h
new file mode 100644
index 0000000..644c16d
--- /dev/null
+++ b/drivers/net/avf/base/avf_status.h
@@ -0,0 +1,107 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_STATUS_H_
+#define _AVF_STATUS_H_
+
+/* Error Codes */
+enum avf_status_code {
+	AVF_SUCCESS				= 0,
+	AVF_ERR_NVM				= -1,
+	AVF_ERR_NVM_CHECKSUM			= -2,
+	AVF_ERR_PHY				= -3,
+	AVF_ERR_CONFIG				= -4,
+	AVF_ERR_PARAM				= -5,
+	AVF_ERR_MAC_TYPE			= -6,
+	AVF_ERR_UNKNOWN_PHY			= -7,
+	AVF_ERR_LINK_SETUP			= -8,
+	AVF_ERR_ADAPTER_STOPPED		= -9,
+	AVF_ERR_INVALID_MAC_ADDR		= -10,
+	AVF_ERR_DEVICE_NOT_SUPPORTED		= -11,
+	AVF_ERR_MASTER_REQUESTS_PENDING	= -12,
+	AVF_ERR_INVALID_LINK_SETTINGS		= -13,
+	AVF_ERR_AUTONEG_NOT_COMPLETE		= -14,
+	AVF_ERR_RESET_FAILED			= -15,
+	AVF_ERR_SWFW_SYNC			= -16,
+	AVF_ERR_NO_AVAILABLE_VSI		= -17,
+	AVF_ERR_NO_MEMORY			= -18,
+	AVF_ERR_BAD_PTR			= -19,
+	AVF_ERR_RING_FULL			= -20,
+	AVF_ERR_INVALID_PD_ID			= -21,
+	AVF_ERR_INVALID_QP_ID			= -22,
+	AVF_ERR_INVALID_CQ_ID			= -23,
+	AVF_ERR_INVALID_CEQ_ID			= -24,
+	AVF_ERR_INVALID_AEQ_ID			= -25,
+	AVF_ERR_INVALID_SIZE			= -26,
+	AVF_ERR_INVALID_ARP_INDEX		= -27,
+	AVF_ERR_INVALID_FPM_FUNC_ID		= -28,
+	AVF_ERR_QP_INVALID_MSG_SIZE		= -29,
+	AVF_ERR_QP_TOOMANY_WRS_POSTED		= -30,
+	AVF_ERR_INVALID_FRAG_COUNT		= -31,
+	AVF_ERR_QUEUE_EMPTY			= -32,
+	AVF_ERR_INVALID_ALIGNMENT		= -33,
+	AVF_ERR_FLUSHED_QUEUE			= -34,
+	AVF_ERR_INVALID_PUSH_PAGE_INDEX	= -35,
+	AVF_ERR_INVALID_IMM_DATA_SIZE		= -36,
+	AVF_ERR_TIMEOUT			= -37,
+	AVF_ERR_OPCODE_MISMATCH		= -38,
+	AVF_ERR_CQP_COMPL_ERROR		= -39,
+	AVF_ERR_INVALID_VF_ID			= -40,
+	AVF_ERR_INVALID_HMCFN_ID		= -41,
+	AVF_ERR_BACKING_PAGE_ERROR		= -42,
+	AVF_ERR_NO_PBLCHUNKS_AVAILABLE		= -43,
+	AVF_ERR_INVALID_PBLE_INDEX		= -44,
+	AVF_ERR_INVALID_SD_INDEX		= -45,
+	AVF_ERR_INVALID_PAGE_DESC_INDEX	= -46,
+	AVF_ERR_INVALID_SD_TYPE		= -47,
+	AVF_ERR_MEMCPY_FAILED			= -48,
+	AVF_ERR_INVALID_HMC_OBJ_INDEX		= -49,
+	AVF_ERR_INVALID_HMC_OBJ_COUNT		= -50,
+	AVF_ERR_INVALID_SRQ_ARM_LIMIT		= -51,
+	AVF_ERR_SRQ_ENABLED			= -52,
+	AVF_ERR_ADMIN_QUEUE_ERROR		= -53,
+	AVF_ERR_ADMIN_QUEUE_TIMEOUT		= -54,
+	AVF_ERR_BUF_TOO_SHORT			= -55,
+	AVF_ERR_ADMIN_QUEUE_FULL		= -56,
+	AVF_ERR_ADMIN_QUEUE_NO_WORK		= -57,
+	AVF_ERR_BAD_IWARP_CQE			= -58,
+	AVF_ERR_NVM_BLANK_MODE			= -59,
+	AVF_ERR_NOT_IMPLEMENTED		= -60,
+	AVF_ERR_PE_DOORBELL_NOT_ENABLED	= -61,
+	AVF_ERR_DIAG_TEST_FAILED		= -62,
+	AVF_ERR_NOT_READY			= -63,
+	AVF_NOT_SUPPORTED			= -64,
+	AVF_ERR_FIRMWARE_API_VERSION		= -65,
+};
+
+#endif /* _AVF_STATUS_H_ */
diff --git a/drivers/net/avf/base/avf_type.h b/drivers/net/avf/base/avf_type.h
new file mode 100644
index 0000000..36ad76d
--- /dev/null
+++ b/drivers/net/avf/base/avf_type.h
@@ -0,0 +1,1990 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_TYPE_H_
+#define _AVF_TYPE_H_
+
+#include "avf_status.h"
+#include "avf_osdep.h"
+#include "avf_register.h"
+#include "avf_adminq.h"
+#include "avf_hmc.h"
+#include "avf_lan_hmc.h"
+#include "avf_devids.h"
+
+#define UNREFERENCED_XPARAMETER
+#define UNREFERENCED_1PARAMETER(_p) (_p);
+#define UNREFERENCED_2PARAMETER(_p, _q) (_p); (_q);
+#define UNREFERENCED_3PARAMETER(_p, _q, _r) (_p); (_q); (_r);
+#define UNREFERENCED_4PARAMETER(_p, _q, _r, _s) (_p); (_q); (_r); (_s);
+#define UNREFERENCED_5PARAMETER(_p, _q, _r, _s, _t) (_p); (_q); (_r); (_s); (_t);
+
+#ifndef LINUX_MACROS
+#ifndef BIT
+#define BIT(a) (1UL << (a))
+#endif /* BIT */
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif /* BIT_ULL */
+#endif /* LINUX_MACROS */
+
+#ifndef AVF_MASK
+/* AVF_MASK is a macro used on 32 bit registers */
+#define AVF_MASK(mask, shift) (mask << shift)
+#endif
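+
+/*
+ * The register definitions in avf_register.h pair every field with a _SHIFT
+ * and a _MASK built from AVF_MASK, so a field is typically extracted from a
+ * 32-bit register value along these lines (illustrative sketch only; 'val'
+ * stands for a value obtained through the osdep register read accessor):
+ *
+ *	u32 len = (val & AVF_ARQLEN1_ARQLEN_MASK) >> AVF_ARQLEN1_ARQLEN_SHIFT;
+ *	bool enabled = !!(val & AVF_ARQLEN1_ARQENABLE_MASK);
+ */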
+
+#define AVF_MAX_PF			16
+#define AVF_MAX_PF_VSI			64
+#define AVF_MAX_PF_QP			128
+#define AVF_MAX_VSI_QP			16
+#define AVF_MAX_VF_VSI			3
+#define AVF_MAX_CHAINED_RX_BUFFERS	5
+#define AVF_MAX_PF_UDP_OFFLOAD_PORTS	16
+
+/* something less than 1 minute */
+#define AVF_HEARTBEAT_TIMEOUT		(HZ * 50)
+
+/* Max default timeout in ms */
+#define AVF_MAX_NVM_TIMEOUT		18000
+
+/* Check whether address is multicast. */
+#define AVF_IS_MULTICAST(address) (bool)(((u8 *)(address))[0] & ((u8)0x01))
+
+/* Check whether an address is broadcast. */
+#define AVF_IS_BROADCAST(address)	\
+	((((u8 *)(address))[0] == ((u8)0xff)) && \
+	(((u8 *)(address))[1] == ((u8)0xff)))
+
+/* Switch from ms to the 1usec global time (this is the GTIME resolution) */
+#define AVF_MS_TO_GTIME(time)		((time) * 1000)
+
+/* forward declaration */
+struct avf_hw;
+typedef void (*AVF_ADMINQ_CALLBACK)(struct avf_hw *, struct avf_aq_desc *);
+
+#ifndef ETH_ALEN
+#define ETH_ALEN	6
+#endif
+/* Data type manipulation macros. */
+#define AVF_HI_DWORD(x)	((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define AVF_LO_DWORD(x)	((u32)((x) & 0xFFFFFFFF))
+
+#define AVF_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
+#define AVF_LO_WORD(x)		((u16)((x) & 0xFFFF))
+
+#define AVF_HI_BYTE(x)		((u8)(((x) >> 8) & 0xFF))
+#define AVF_LO_BYTE(x)		((u8)((x) & 0xFF))
+
+/* Number of Transmit Descriptors must be a multiple of 8. */
+#define AVF_REQ_TX_DESCRIPTOR_MULTIPLE	8
+/* Number of Receive Descriptors must be a multiple of 32 if
+ * the number of descriptors is greater than 32.
+ */
+#define AVF_REQ_RX_DESCRIPTOR_MULTIPLE	32
+
+#define AVF_DESC_UNUSED(R)	\
+	((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
+	(R)->next_to_clean - (R)->next_to_use - 1)
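+
+/*
+ * AVF_DESC_UNUSED counts the free descriptors in a ring, keeping one slot in
+ * reserve between producer and consumer. For example, with (R)->count = 16,
+ * next_to_clean = 3 and next_to_use = 10 it yields 16 + 3 - 10 - 1 = 8 unused
+ * descriptors; with next_to_clean = 10 and next_to_use = 3 it yields
+ * 0 + 10 - 3 - 1 = 6.
+ */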
+
+/* bitfields for Tx queue mapping in QTX_CTL */
+#define AVF_QTX_CTL_VF_QUEUE	0x0
+#define AVF_QTX_CTL_VM_QUEUE	0x1
+#define AVF_QTX_CTL_PF_QUEUE	0x2
+
+/* debug masks - set these bits in hw->debug_mask to control output */
+enum avf_debug_mask {
+	AVF_DEBUG_INIT			= 0x00000001,
+	AVF_DEBUG_RELEASE		= 0x00000002,
+
+	AVF_DEBUG_LINK			= 0x00000010,
+	AVF_DEBUG_PHY			= 0x00000020,
+	AVF_DEBUG_HMC			= 0x00000040,
+	AVF_DEBUG_NVM			= 0x00000080,
+	AVF_DEBUG_LAN			= 0x00000100,
+	AVF_DEBUG_FLOW			= 0x00000200,
+	AVF_DEBUG_DCB			= 0x00000400,
+	AVF_DEBUG_DIAG			= 0x00000800,
+	AVF_DEBUG_FD			= 0x00001000,
+	AVF_DEBUG_PACKAGE		= 0x00002000,
+
+	AVF_DEBUG_AQ_MESSAGE		= 0x01000000,
+	AVF_DEBUG_AQ_DESCRIPTOR	= 0x02000000,
+	AVF_DEBUG_AQ_DESC_BUFFER	= 0x04000000,
+	AVF_DEBUG_AQ_COMMAND		= 0x06000000,
+	AVF_DEBUG_AQ			= 0x0F000000,
+
+	AVF_DEBUG_USER			= 0xF0000000,
+
+	AVF_DEBUG_ALL			= 0xFFFFFFFF
+};
+
+/* PCI Bus Info */
+#define AVF_PCI_LINK_STATUS		0xB2
+#define AVF_PCI_LINK_WIDTH		0x3F0
+#define AVF_PCI_LINK_WIDTH_1		0x10
+#define AVF_PCI_LINK_WIDTH_2		0x20
+#define AVF_PCI_LINK_WIDTH_4		0x40
+#define AVF_PCI_LINK_WIDTH_8		0x80
+#define AVF_PCI_LINK_SPEED		0xF
+#define AVF_PCI_LINK_SPEED_2500	0x1
+#define AVF_PCI_LINK_SPEED_5000	0x2
+#define AVF_PCI_LINK_SPEED_8000	0x3
+
+#define AVF_MDIO_CLAUSE22_STCODE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_STCODE_SHIFT)
+#define AVF_MDIO_CLAUSE22_OPCODE_WRITE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE22_OPCODE_READ_MASK	AVF_MASK(2, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+
+#define AVF_MDIO_CLAUSE45_STCODE_MASK	AVF_MASK(0, \
+						  AVF_GLGEN_MSCA_STCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_ADDRESS_MASK	AVF_MASK(0, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_WRITE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_READ_INC_ADDR_MASK	AVF_MASK(2, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_READ_MASK	AVF_MASK(3, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+
+#define AVF_PHY_COM_REG_PAGE			0x1E
+#define AVF_PHY_LED_LINK_MODE_MASK		0xF0
+#define AVF_PHY_LED_MANUAL_ON			0x100
+#define AVF_PHY_LED_PROV_REG_1			0xC430
+#define AVF_PHY_LED_MODE_MASK			0xFFFF
+#define AVF_PHY_LED_MODE_ORIG			0x80000000
+
+/* Memory types */
+enum avf_memset_type {
+	AVF_NONDMA_MEM = 0,
+	AVF_DMA_MEM
+};
+
+/* Memcpy types */
+enum avf_memcpy_type {
+	AVF_NONDMA_TO_NONDMA = 0,
+	AVF_NONDMA_TO_DMA,
+	AVF_DMA_TO_DMA,
+	AVF_DMA_TO_NONDMA
+};
+
+/* These are structs for managing the hardware information and the operations.
+ * The structures of function pointers are filled out at init time when we
+ * know for sure exactly which hardware we're working with.  This gives us the
+ * flexibility of using the same main driver code but adapting to slightly
+ * different hardware needs as new parts are developed.  For this architecture,
+ * the Firmware and AdminQ are intended to insulate the driver from most of the
+ * future changes, but these structures will also do part of the job.
+ */
+enum avf_mac_type {
+	AVF_MAC_UNKNOWN = 0,
+	AVF_MAC_XL710,
+	AVF_MAC_VF,
+	AVF_MAC_X722,
+	AVF_MAC_X722_VF,
+	AVF_MAC_GENERIC,
+};
+
+enum avf_media_type {
+	AVF_MEDIA_TYPE_UNKNOWN = 0,
+	AVF_MEDIA_TYPE_FIBER,
+	AVF_MEDIA_TYPE_BASET,
+	AVF_MEDIA_TYPE_BACKPLANE,
+	AVF_MEDIA_TYPE_CX4,
+	AVF_MEDIA_TYPE_DA,
+	AVF_MEDIA_TYPE_VIRTUAL
+};
+
+enum avf_fc_mode {
+	AVF_FC_NONE = 0,
+	AVF_FC_RX_PAUSE,
+	AVF_FC_TX_PAUSE,
+	AVF_FC_FULL,
+	AVF_FC_PFC,
+	AVF_FC_DEFAULT
+};
+
+enum avf_set_fc_aq_failures {
+	AVF_SET_FC_AQ_FAIL_NONE = 0,
+	AVF_SET_FC_AQ_FAIL_GET = 1,
+	AVF_SET_FC_AQ_FAIL_SET = 2,
+	AVF_SET_FC_AQ_FAIL_UPDATE = 4,
+	AVF_SET_FC_AQ_FAIL_SET_UPDATE = 6
+};
+
+enum avf_vsi_type {
+	AVF_VSI_MAIN	= 0,
+	AVF_VSI_VMDQ1	= 1,
+	AVF_VSI_VMDQ2	= 2,
+	AVF_VSI_CTRL	= 3,
+	AVF_VSI_FCOE	= 4,
+	AVF_VSI_MIRROR	= 5,
+	AVF_VSI_SRIOV	= 6,
+	AVF_VSI_FDIR	= 7,
+	AVF_VSI_TYPE_UNKNOWN
+};
+
+enum avf_queue_type {
+	AVF_QUEUE_TYPE_RX = 0,
+	AVF_QUEUE_TYPE_TX,
+	AVF_QUEUE_TYPE_PE_CEQ,
+	AVF_QUEUE_TYPE_UNKNOWN
+};
+
+struct avf_link_status {
+	enum avf_aq_phy_type phy_type;
+	enum avf_aq_link_speed link_speed;
+	u8 link_info;
+	u8 an_info;
+	u8 req_fec_info;
+	u8 fec_info;
+	u8 ext_info;
+	u8 loopback;
+	/* is Link Status Event notification to SW enabled */
+	bool lse_enable;
+	u16 max_frame_size;
+	bool crc_enable;
+	u8 pacing;
+	u8 requested_speeds;
+	u8 module_type[3];
+	/* 1st byte: module identifier */
+#define AVF_MODULE_TYPE_SFP		0x03
+#define AVF_MODULE_TYPE_QSFP		0x0D
+	/* 2nd byte: ethernet compliance codes for 10/40G */
+#define AVF_MODULE_TYPE_40G_ACTIVE	0x01
+#define AVF_MODULE_TYPE_40G_LR4	0x02
+#define AVF_MODULE_TYPE_40G_SR4	0x04
+#define AVF_MODULE_TYPE_40G_CR4	0x08
+#define AVF_MODULE_TYPE_10G_BASE_SR	0x10
+#define AVF_MODULE_TYPE_10G_BASE_LR	0x20
+#define AVF_MODULE_TYPE_10G_BASE_LRM	0x40
+#define AVF_MODULE_TYPE_10G_BASE_ER	0x80
+	/* 3rd byte: ethernet compliance codes for 1G */
+#define AVF_MODULE_TYPE_1000BASE_SX	0x01
+#define AVF_MODULE_TYPE_1000BASE_LX	0x02
+#define AVF_MODULE_TYPE_1000BASE_CX	0x04
+#define AVF_MODULE_TYPE_1000BASE_T	0x08
+};
+
+struct avf_phy_info {
+	struct avf_link_status link_info;
+	struct avf_link_status link_info_old;
+	bool get_link_info;
+	enum avf_media_type media_type;
+	/* all the phy types the NVM is capable of */
+	u64 phy_types;
+};
+
+#define AVF_CAP_PHY_TYPE_SGMII BIT_ULL(AVF_PHY_TYPE_SGMII)
+#define AVF_CAP_PHY_TYPE_1000BASE_KX BIT_ULL(AVF_PHY_TYPE_1000BASE_KX)
+#define AVF_CAP_PHY_TYPE_10GBASE_KX4 BIT_ULL(AVF_PHY_TYPE_10GBASE_KX4)
+#define AVF_CAP_PHY_TYPE_10GBASE_KR BIT_ULL(AVF_PHY_TYPE_10GBASE_KR)
+#define AVF_CAP_PHY_TYPE_40GBASE_KR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_KR4)
+#define AVF_CAP_PHY_TYPE_XAUI BIT_ULL(AVF_PHY_TYPE_XAUI)
+#define AVF_CAP_PHY_TYPE_XFI BIT_ULL(AVF_PHY_TYPE_XFI)
+#define AVF_CAP_PHY_TYPE_SFI BIT_ULL(AVF_PHY_TYPE_SFI)
+#define AVF_CAP_PHY_TYPE_XLAUI BIT_ULL(AVF_PHY_TYPE_XLAUI)
+#define AVF_CAP_PHY_TYPE_XLPPI BIT_ULL(AVF_PHY_TYPE_XLPPI)
+#define AVF_CAP_PHY_TYPE_40GBASE_CR4_CU BIT_ULL(AVF_PHY_TYPE_40GBASE_CR4_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_CR1_CU BIT_ULL(AVF_PHY_TYPE_10GBASE_CR1_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_AOC BIT_ULL(AVF_PHY_TYPE_10GBASE_AOC)
+#define AVF_CAP_PHY_TYPE_40GBASE_AOC BIT_ULL(AVF_PHY_TYPE_40GBASE_AOC)
+#define AVF_CAP_PHY_TYPE_100BASE_TX BIT_ULL(AVF_PHY_TYPE_100BASE_TX)
+#define AVF_CAP_PHY_TYPE_1000BASE_T BIT_ULL(AVF_PHY_TYPE_1000BASE_T)
+#define AVF_CAP_PHY_TYPE_10GBASE_T BIT_ULL(AVF_PHY_TYPE_10GBASE_T)
+#define AVF_CAP_PHY_TYPE_10GBASE_SR BIT_ULL(AVF_PHY_TYPE_10GBASE_SR)
+#define AVF_CAP_PHY_TYPE_10GBASE_LR BIT_ULL(AVF_PHY_TYPE_10GBASE_LR)
+#define AVF_CAP_PHY_TYPE_10GBASE_SFPP_CU BIT_ULL(AVF_PHY_TYPE_10GBASE_SFPP_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_CR1 BIT_ULL(AVF_PHY_TYPE_10GBASE_CR1)
+#define AVF_CAP_PHY_TYPE_40GBASE_CR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_CR4)
+#define AVF_CAP_PHY_TYPE_40GBASE_SR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_SR4)
+#define AVF_CAP_PHY_TYPE_40GBASE_LR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_LR4)
+#define AVF_CAP_PHY_TYPE_1000BASE_SX BIT_ULL(AVF_PHY_TYPE_1000BASE_SX)
+#define AVF_CAP_PHY_TYPE_1000BASE_LX BIT_ULL(AVF_PHY_TYPE_1000BASE_LX)
+#define AVF_CAP_PHY_TYPE_1000BASE_T_OPTICAL \
+				BIT_ULL(AVF_PHY_TYPE_1000BASE_T_OPTICAL)
+#define AVF_CAP_PHY_TYPE_20GBASE_KR2 BIT_ULL(AVF_PHY_TYPE_20GBASE_KR2)
+/*
+ * Defining the macro AVF_PHY_TYPE_OFFSET to implement a bit shift for some
+ * PHY types. There is an unused bit (31) in the AVF_CAP_PHY_TYPE_* bit
+ * fields but no corresponding gap in the avf_aq_phy_type enumeration. So,
+ * a shift is needed to adjust for this with values larger than 31. The
+ * only affected values are AVF_PHY_TYPE_25GBASE_*.
+ */
+#define AVF_PHY_TYPE_OFFSET 1
+#define AVF_CAP_PHY_TYPE_25GBASE_KR BIT_ULL(AVF_PHY_TYPE_25GBASE_KR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_CR BIT_ULL(AVF_PHY_TYPE_25GBASE_CR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_SR BIT_ULL(AVF_PHY_TYPE_25GBASE_SR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_LR BIT_ULL(AVF_PHY_TYPE_25GBASE_LR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_HW_CAP_MAX_GPIO			30
+#define AVF_HW_CAP_MDIO_PORT_MODE_MDIO		0
+#define AVF_HW_CAP_MDIO_PORT_MODE_I2C		1
+
+enum avf_acpi_programming_method {
+	AVF_ACPI_PROGRAMMING_METHOD_HW_FVL = 0,
+	AVF_ACPI_PROGRAMMING_METHOD_AQC_FPK = 1
+};
+
+#define AVF_WOL_SUPPORT_MASK			0x1
+#define AVF_ACPI_PROGRAMMING_METHOD_MASK	0x2
+#define AVF_PROXY_SUPPORT_MASK			0x4
+
+/* Capabilities of a PF or a VF or the whole device */
+struct avf_hw_capabilities {
+	u32  switch_mode;
+#define AVF_NVM_IMAGE_TYPE_EVB		0x0
+#define AVF_NVM_IMAGE_TYPE_CLOUD	0x2
+#define AVF_NVM_IMAGE_TYPE_UDP_CLOUD	0x3
+
+	u32  management_mode;
+	u32  mng_protocols_over_mctp;
+#define AVF_MNG_PROTOCOL_PLDM		0x2
+#define AVF_MNG_PROTOCOL_OEM_COMMANDS	0x4
+#define AVF_MNG_PROTOCOL_NCSI		0x8
+	u32  npar_enable;
+	u32  os2bmc;
+	u32  valid_functions;
+	bool sr_iov_1_1;
+	bool vmdq;
+	bool evb_802_1_qbg; /* Edge Virtual Bridging */
+	bool evb_802_1_qbh; /* Bridge Port Extension */
+	bool dcb;
+	bool fcoe;
+	bool iscsi; /* Indicates iSCSI enabled */
+	bool flex10_enable;
+	bool flex10_capable;
+	u32  flex10_mode;
+#define AVF_FLEX10_MODE_UNKNOWN	0x0
+#define AVF_FLEX10_MODE_DCC		0x1
+#define AVF_FLEX10_MODE_DCI		0x2
+
+	u32 flex10_status;
+#define AVF_FLEX10_STATUS_DCC_ERROR	0x1
+#define AVF_FLEX10_STATUS_VC_MODE	0x2
+
+	bool sec_rev_disabled;
+	bool update_disabled;
+#define AVF_NVM_MGMT_SEC_REV_DISABLED	0x1
+#define AVF_NVM_MGMT_UPDATE_DISABLED	0x2
+
+	bool mgmt_cem;
+	bool ieee_1588;
+	bool iwarp;
+	bool fd;
+	u32 fd_filters_guaranteed;
+	u32 fd_filters_best_effort;
+	bool rss;
+	u32 rss_table_size;
+	u32 rss_table_entry_width;
+	bool led[AVF_HW_CAP_MAX_GPIO];
+	bool sdp[AVF_HW_CAP_MAX_GPIO];
+	u32 nvm_image_type;
+	u32 num_flow_director_filters;
+	u32 num_vfs;
+	u32 vf_base_id;
+	u32 num_vsis;
+	u32 num_rx_qp;
+	u32 num_tx_qp;
+	u32 base_queue;
+	u32 num_msix_vectors;
+	u32 num_msix_vectors_vf;
+	u32 led_pin_num;
+	u32 sdp_pin_num;
+	u32 mdio_port_num;
+	u32 mdio_port_mode;
+	u8 rx_buf_chain_len;
+	u32 enabled_tcmap;
+	u32 maxtc;
+	u64 wr_csr_prot;
+	bool apm_wol_support;
+	enum avf_acpi_programming_method acpi_prog_method;
+	bool proxy_support;
+};
+
+struct avf_mac_info {
+	enum avf_mac_type type;
+	u8 addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+	u8 san_addr[ETH_ALEN];
+	u8 port_addr[ETH_ALEN];
+	u16 max_fcoeq;
+};
+
+enum avf_aq_resources_ids {
+	AVF_NVM_RESOURCE_ID = 1
+};
+
+enum avf_aq_resource_access_type {
+	AVF_RESOURCE_READ = 1,
+	AVF_RESOURCE_WRITE
+};
+
+struct avf_nvm_info {
+	u64 hw_semaphore_timeout; /* usec global time (GTIME resolution) */
+	u32 timeout;              /* [ms] */
+	u16 sr_size;              /* Shadow RAM size in words */
+	bool blank_nvm_mode;      /* is NVM empty (no FW present) */
+	u16 version;              /* NVM package version */
+	u32 eetrack;              /* NVM data version */
+	u32 oem_ver;              /* OEM version info */
+};
+
+/* definitions used in NVM update support */
+
+enum avf_nvmupd_cmd {
+	AVF_NVMUPD_INVALID,
+	AVF_NVMUPD_READ_CON,
+	AVF_NVMUPD_READ_SNT,
+	AVF_NVMUPD_READ_LCB,
+	AVF_NVMUPD_READ_SA,
+	AVF_NVMUPD_WRITE_ERA,
+	AVF_NVMUPD_WRITE_CON,
+	AVF_NVMUPD_WRITE_SNT,
+	AVF_NVMUPD_WRITE_LCB,
+	AVF_NVMUPD_WRITE_SA,
+	AVF_NVMUPD_CSUM_CON,
+	AVF_NVMUPD_CSUM_SA,
+	AVF_NVMUPD_CSUM_LCB,
+	AVF_NVMUPD_STATUS,
+	AVF_NVMUPD_EXEC_AQ,
+	AVF_NVMUPD_GET_AQ_RESULT,
+};
+
+enum avf_nvmupd_state {
+	AVF_NVMUPD_STATE_INIT,
+	AVF_NVMUPD_STATE_READING,
+	AVF_NVMUPD_STATE_WRITING,
+	AVF_NVMUPD_STATE_INIT_WAIT,
+	AVF_NVMUPD_STATE_WRITE_WAIT,
+	AVF_NVMUPD_STATE_ERROR
+};
+
+/* nvm_access definition and its masks/shifts need to be accessible to
+ * application, core driver, and shared code.  Where is the right file?
+ */
+#define AVF_NVM_READ	0xB
+#define AVF_NVM_WRITE	0xC
+
+#define AVF_NVM_MOD_PNT_MASK 0xFF
+
+#define AVF_NVM_TRANS_SHIFT	8
+#define AVF_NVM_TRANS_MASK	(0xf << AVF_NVM_TRANS_SHIFT)
+#define AVF_NVM_CON		0x0
+#define AVF_NVM_SNT		0x1
+#define AVF_NVM_LCB		0x2
+#define AVF_NVM_SA		(AVF_NVM_SNT | AVF_NVM_LCB)
+#define AVF_NVM_ERA		0x4
+#define AVF_NVM_CSUM		0x8
+#define AVF_NVM_EXEC		0xf
+
+#define AVF_NVM_ADAPT_SHIFT	16
+#define AVF_NVM_ADAPT_MASK	(0xffffULL << AVF_NVM_ADAPT_SHIFT)
+
+#define AVF_NVMUPD_MAX_DATA	4096
+#define AVF_NVMUPD_IFACE_TIMEOUT 2 /* seconds */
+
+struct avf_nvm_access {
+	u32 command;
+	u32 config;
+	u32 offset;	/* in bytes */
+	u32 data_size;	/* in bytes */
+	u8 data[1];
+};
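+
+/*
+ * A sketch of how the fields above combine (illustrative only, based on the
+ * masks/shifts defined above): for a ShadowRAM read request, 'command' would
+ * carry AVF_NVM_READ while 'config' packs the module pointer in its low byte
+ * and the transaction type in bits 8..11, e.g.
+ *
+ *	config = (module_pointer & AVF_NVM_MOD_PNT_MASK) |
+ *		 (AVF_NVM_SA << AVF_NVM_TRANS_SHIFT);
+ *
+ * The receiver recovers the transaction type with
+ * (config & AVF_NVM_TRANS_MASK) >> AVF_NVM_TRANS_SHIFT.
+ */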
+
+/* PCI bus types */
+enum avf_bus_type {
+	avf_bus_type_unknown = 0,
+	avf_bus_type_pci,
+	avf_bus_type_pcix,
+	avf_bus_type_pci_express,
+	avf_bus_type_reserved
+};
+
+/* PCI bus speeds */
+enum avf_bus_speed {
+	avf_bus_speed_unknown	= 0,
+	avf_bus_speed_33	= 33,
+	avf_bus_speed_66	= 66,
+	avf_bus_speed_100	= 100,
+	avf_bus_speed_120	= 120,
+	avf_bus_speed_133	= 133,
+	avf_bus_speed_2500	= 2500,
+	avf_bus_speed_5000	= 5000,
+	avf_bus_speed_8000	= 8000,
+	avf_bus_speed_reserved
+};
+
+/* PCI bus widths */
+enum avf_bus_width {
+	avf_bus_width_unknown	= 0,
+	avf_bus_width_pcie_x1	= 1,
+	avf_bus_width_pcie_x2	= 2,
+	avf_bus_width_pcie_x4	= 4,
+	avf_bus_width_pcie_x8	= 8,
+	avf_bus_width_32	= 32,
+	avf_bus_width_64	= 64,
+	avf_bus_width_reserved
+};
+
+/* Bus parameters */
+struct avf_bus_info {
+	enum avf_bus_speed speed;
+	enum avf_bus_width width;
+	enum avf_bus_type type;
+
+	u16 func;
+	u16 device;
+	u16 lan_id;
+	u16 bus_id;
+};
+
+/* Flow control (FC) parameters */
+struct avf_fc_info {
+	enum avf_fc_mode current_mode; /* FC mode in effect */
+	enum avf_fc_mode requested_mode; /* FC mode requested by caller */
+};
+
+#define AVF_MAX_TRAFFIC_CLASS		8
+#define AVF_MAX_USER_PRIORITY		8
+#define AVF_DCBX_MAX_APPS		32
+#define AVF_LLDPDU_SIZE		1500
+#define AVF_TLV_STATUS_OPER		0x1
+#define AVF_TLV_STATUS_SYNC		0x2
+#define AVF_TLV_STATUS_ERR		0x4
+#define AVF_CEE_OPER_MAX_APPS		3
+#define AVF_APP_PROTOID_FCOE		0x8906
+#define AVF_APP_PROTOID_ISCSI		0x0cbc
+#define AVF_APP_PROTOID_FIP		0x8914
+#define AVF_APP_SEL_ETHTYPE		0x1
+#define AVF_APP_SEL_TCPIP		0x2
+#define AVF_CEE_APP_SEL_ETHTYPE	0x0
+#define AVF_CEE_APP_SEL_TCPIP		0x1
+
+/* CEE or IEEE 802.1Qaz ETS Configuration data */
+struct avf_dcb_ets_config {
+	u8 willing;
+	u8 cbs;
+	u8 maxtcs;
+	u8 prioritytable[AVF_MAX_TRAFFIC_CLASS];
+	u8 tcbwtable[AVF_MAX_TRAFFIC_CLASS];
+	u8 tsatable[AVF_MAX_TRAFFIC_CLASS];
+};
+
+/* CEE or IEEE 802.1Qaz PFC Configuration data */
+struct avf_dcb_pfc_config {
+	u8 willing;
+	u8 mbc;
+	u8 pfccap;
+	u8 pfcenable;
+};
+
+/* CEE or IEEE 802.1Qaz Application Priority data */
+struct avf_dcb_app_priority_table {
+	u8  priority;
+	u8  selector;
+	u16 protocolid;
+};
+
+struct avf_dcbx_config {
+	u8  dcbx_mode;
+#define AVF_DCBX_MODE_CEE	0x1
+#define AVF_DCBX_MODE_IEEE	0x2
+	u8  app_mode;
+#define AVF_DCBX_APPS_NON_WILLING	0x1
+	u32 numapps;
+	u32 tlv_status; /* CEE mode TLV status */
+	struct avf_dcb_ets_config etscfg;
+	struct avf_dcb_ets_config etsrec;
+	struct avf_dcb_pfc_config pfc;
+	struct avf_dcb_app_priority_table app[AVF_DCBX_MAX_APPS];
+};
+
+/* Port hardware description */
+struct avf_hw {
+	u8 *hw_addr;
+	void *back;
+
+	/* subsystem structs */
+	struct avf_phy_info phy;
+	struct avf_mac_info mac;
+	struct avf_bus_info bus;
+	struct avf_nvm_info nvm;
+	struct avf_fc_info fc;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+	u8 port;
+	bool adapter_stopped;
+
+	/* capabilities for entire device and PCI func */
+	struct avf_hw_capabilities dev_caps;
+	struct avf_hw_capabilities func_caps;
+
+	/* Flow Director shared filter space */
+	u16 fdir_shared_filter_count;
+
+	/* device profile info */
+	u8  pf_id;
+	u16 main_vsi_seid;
+
+	/* for multi-function MACs */
+	u16 partition_id;
+	u16 num_partitions;
+	u16 num_ports;
+
+	/* Closest numa node to the device */
+	u16 numa_node;
+
+	/* Admin Queue info */
+	struct avf_adminq_info aq;
+
+	/* state of nvm update process */
+	enum avf_nvmupd_state nvmupd_state;
+	struct avf_aq_desc nvm_wb_desc;
+	struct avf_virt_mem nvm_buff;
+	bool nvm_release_on_done;
+	u16 nvm_wait_opcode;
+
+	/* HMC info */
+	struct avf_hmc_info hmc; /* HMC info struct */
+
+	/* LLDP/DCBX Status */
+	u16 dcbx_status;
+
+	/* DCBX info */
+	struct avf_dcbx_config local_dcbx_config; /* Oper/Local Cfg */
+	struct avf_dcbx_config remote_dcbx_config; /* Peer Cfg */
+	struct avf_dcbx_config desired_dcbx_config; /* CEE Desired Cfg */
+
+	/* WoL and proxy support */
+	u16 num_wol_proxy_filters;
+	u16 wol_proxy_vsi_seid;
+
+#define AVF_HW_FLAG_AQ_SRCTL_ACCESS_ENABLE BIT_ULL(0)
+#define AVF_HW_FLAG_802_1AD_CAPABLE        BIT_ULL(1)
+#define AVF_HW_FLAG_AQ_PHY_ACCESS_CAPABLE  BIT_ULL(2)
+	u64 flags;
+
+	/* Used in set switch config AQ command */
+	u16 switch_tag;
+	u16 first_tag;
+	u16 second_tag;
+
+	/* debug mask */
+	u32 debug_mask;
+	char err_str[16];
+};
+
+STATIC INLINE bool avf_is_vf(struct avf_hw *hw)
+{
+	return (hw->mac.type == AVF_MAC_VF ||
+		hw->mac.type == AVF_MAC_X722_VF);
+}
+
+struct avf_driver_version {
+	u8 major_version;
+	u8 minor_version;
+	u8 build_version;
+	u8 subbuild_version;
+	u8 driver_string[32];
+};
+
+/* RX Descriptors */
+union avf_16byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+	} read;
+	struct {
+		struct {
+			struct {
+				union {
+					__le16 mirroring_status;
+					__le16 fcoe_ctx_id;
+				} mirr_fcoe;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow director filter id */
+				__le32 fcoe_param; /* FCoE DDP Context id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* ext status/error/pktype/length */
+			__le64 status_error_len;
+		} qword1;
+	} wb;  /* writeback */
+};
+
+union avf_32byte_rx_desc {
+	struct {
+		__le64  pkt_addr; /* Packet buffer address */
+		__le64  hdr_addr; /* Header buffer address */
+			/* bit 0 of hdr_buffer_addr is DD bit */
+		__le64  rsvd1;
+		__le64  rsvd2;
+	} read;
+	struct {
+		struct {
+			struct {
+				union {
+					__le16 mirroring_status;
+					__le16 fcoe_ctx_id;
+				} mirr_fcoe;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fcoe_param; /* FCoE DDP Context id */
+				/* Flow director filter id in case of
+				 * Programming status desc WB
+				 */
+				__le32 fd_id;
+			} hi_dword;
+		} qword0;
+		struct {
+			/* status/error/pktype/length */
+			__le64 status_error_len;
+		} qword1;
+		struct {
+			__le16 ext_status; /* extended status */
+			__le16 rsvd;
+			__le16 l2tag2_1;
+			__le16 l2tag2_2;
+		} qword2;
+		struct {
+			union {
+				__le32 flex_bytes_lo;
+				__le32 pe_status;
+			} lo_dword;
+			union {
+				__le32 flex_bytes_hi;
+				__le32 fd_id;
+			} hi_dword;
+		} qword3;
+	} wb;  /* writeback */
+};
+
+#define AVF_RXD_QW0_MIRROR_STATUS_SHIFT	8
+#define AVF_RXD_QW0_MIRROR_STATUS_MASK	(0x3FUL << \
+					 AVF_RXD_QW0_MIRROR_STATUS_SHIFT)
+#define AVF_RXD_QW0_FCOEINDX_SHIFT	0
+#define AVF_RXD_QW0_FCOEINDX_MASK	(0xFFFUL << \
+					 AVF_RXD_QW0_FCOEINDX_SHIFT)
+
+enum avf_rx_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_STATUS_DD_SHIFT		= 0,
+	AVF_RX_DESC_STATUS_EOF_SHIFT		= 1,
+	AVF_RX_DESC_STATUS_L2TAG1P_SHIFT	= 2,
+	AVF_RX_DESC_STATUS_L3L4P_SHIFT		= 3,
+	AVF_RX_DESC_STATUS_CRCP_SHIFT		= 4,
+	AVF_RX_DESC_STATUS_TSYNINDX_SHIFT	= 5, /* 2 BITS */
+	AVF_RX_DESC_STATUS_TSYNVALID_SHIFT	= 7,
+	AVF_RX_DESC_STATUS_EXT_UDP_0_SHIFT	= 8,
+
+	AVF_RX_DESC_STATUS_UMBCAST_SHIFT	= 9, /* 2 BITS */
+	AVF_RX_DESC_STATUS_FLM_SHIFT		= 11,
+	AVF_RX_DESC_STATUS_FLTSTAT_SHIFT	= 12, /* 2 BITS */
+	AVF_RX_DESC_STATUS_LPBK_SHIFT		= 14,
+	AVF_RX_DESC_STATUS_IPV6EXADD_SHIFT	= 15,
+	AVF_RX_DESC_STATUS_RESERVED2_SHIFT	= 16, /* 2 BITS */
+	AVF_RX_DESC_STATUS_INT_UDP_0_SHIFT	= 18,
+	AVF_RX_DESC_STATUS_LAST /* this entry must be last!!! */
+};
+
+#define AVF_RXD_QW1_STATUS_SHIFT	0
+#define AVF_RXD_QW1_STATUS_MASK	((BIT(AVF_RX_DESC_STATUS_LAST) - 1) << \
+					 AVF_RXD_QW1_STATUS_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_TSYNINDX_SHIFT   AVF_RX_DESC_STATUS_TSYNINDX_SHIFT
+#define AVF_RXD_QW1_STATUS_TSYNINDX_MASK	(0x3UL << \
+					     AVF_RXD_QW1_STATUS_TSYNINDX_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_TSYNVALID_SHIFT  AVF_RX_DESC_STATUS_TSYNVALID_SHIFT
+#define AVF_RXD_QW1_STATUS_TSYNVALID_MASK   BIT_ULL(AVF_RXD_QW1_STATUS_TSYNVALID_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_UMBCAST_SHIFT	AVF_RX_DESC_STATUS_UMBCAST_SHIFT
+#define AVF_RXD_QW1_STATUS_UMBCAST_MASK	(0x3UL << \
+					 AVF_RXD_QW1_STATUS_UMBCAST_SHIFT)
+
+enum avf_rx_desc_fltstat_values {
+	AVF_RX_DESC_FLTSTAT_NO_DATA	= 0,
+	AVF_RX_DESC_FLTSTAT_RSV_FD_ID	= 1, /* 16byte desc? FD_ID : RSV */
+	AVF_RX_DESC_FLTSTAT_RSV	= 2,
+	AVF_RX_DESC_FLTSTAT_RSS_HASH	= 3,
+};
+
+#define AVF_RXD_PACKET_TYPE_UNICAST	0
+#define AVF_RXD_PACKET_TYPE_MULTICAST	1
+#define AVF_RXD_PACKET_TYPE_BROADCAST	2
+#define AVF_RXD_PACKET_TYPE_MIRRORED	3
+
+#define AVF_RXD_QW1_ERROR_SHIFT	19
+#define AVF_RXD_QW1_ERROR_MASK		(0xFFUL << AVF_RXD_QW1_ERROR_SHIFT)
+
+enum avf_rx_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_ERROR_RXE_SHIFT		= 0,
+	AVF_RX_DESC_ERROR_RECIPE_SHIFT		= 1,
+	AVF_RX_DESC_ERROR_HBO_SHIFT		= 2,
+	AVF_RX_DESC_ERROR_L3L4E_SHIFT		= 3, /* 3 BITS */
+	AVF_RX_DESC_ERROR_IPE_SHIFT		= 3,
+	AVF_RX_DESC_ERROR_L4E_SHIFT		= 4,
+	AVF_RX_DESC_ERROR_EIPE_SHIFT		= 5,
+	AVF_RX_DESC_ERROR_OVERSIZE_SHIFT	= 6,
+	AVF_RX_DESC_ERROR_PPRS_SHIFT		= 7
+};
+
+enum avf_rx_desc_error_l3l4e_fcoe_masks {
+	AVF_RX_DESC_ERROR_L3L4E_NONE		= 0,
+	AVF_RX_DESC_ERROR_L3L4E_PROT		= 1,
+	AVF_RX_DESC_ERROR_L3L4E_FC		= 2,
+	AVF_RX_DESC_ERROR_L3L4E_DMAC_ERR	= 3,
+	AVF_RX_DESC_ERROR_L3L4E_DMAC_WARN	= 4
+};
+
+#define AVF_RXD_QW1_PTYPE_SHIFT	30
+#define AVF_RXD_QW1_PTYPE_MASK		(0xFFULL << AVF_RXD_QW1_PTYPE_SHIFT)
+
+/* Packet type non-ip values */
+enum avf_rx_l2_ptype {
+	AVF_RX_PTYPE_L2_RESERVED			= 0,
+	AVF_RX_PTYPE_L2_MAC_PAY2			= 1,
+	AVF_RX_PTYPE_L2_TIMESYNC_PAY2			= 2,
+	AVF_RX_PTYPE_L2_FIP_PAY2			= 3,
+	AVF_RX_PTYPE_L2_OUI_PAY2			= 4,
+	AVF_RX_PTYPE_L2_MACCNTRL_PAY2			= 5,
+	AVF_RX_PTYPE_L2_LLDP_PAY2			= 6,
+	AVF_RX_PTYPE_L2_ECP_PAY2			= 7,
+	AVF_RX_PTYPE_L2_EVB_PAY2			= 8,
+	AVF_RX_PTYPE_L2_QCN_PAY2			= 9,
+	AVF_RX_PTYPE_L2_EAPOL_PAY2			= 10,
+	AVF_RX_PTYPE_L2_ARP				= 11,
+	AVF_RX_PTYPE_L2_FCOE_PAY3			= 12,
+	AVF_RX_PTYPE_L2_FCOE_FCDATA_PAY3		= 13,
+	AVF_RX_PTYPE_L2_FCOE_FCRDY_PAY3		= 14,
+	AVF_RX_PTYPE_L2_FCOE_FCRSP_PAY3		= 15,
+	AVF_RX_PTYPE_L2_FCOE_FCOTHER_PA		= 16,
+	AVF_RX_PTYPE_L2_FCOE_VFT_PAY3			= 17,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCDATA		= 18,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCRDY			= 19,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCRSP			= 20,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCOTHER		= 21,
+	AVF_RX_PTYPE_GRENAT4_MAC_PAY3			= 58,
+	AVF_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4	= 87,
+	AVF_RX_PTYPE_GRENAT6_MAC_PAY3			= 124,
+	AVF_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4	= 153
+};
+
+struct avf_rx_ptype_decoded {
+	u32 ptype:8;
+	u32 known:1;
+	u32 outer_ip:1;
+	u32 outer_ip_ver:1;
+	u32 outer_frag:1;
+	u32 tunnel_type:3;
+	u32 tunnel_end_prot:2;
+	u32 tunnel_end_frag:1;
+	u32 inner_prot:4;
+	u32 payload_layer:3;
+};
+
+enum avf_rx_ptype_outer_ip {
+	AVF_RX_PTYPE_OUTER_L2	= 0,
+	AVF_RX_PTYPE_OUTER_IP	= 1
+};
+
+enum avf_rx_ptype_outer_ip_ver {
+	AVF_RX_PTYPE_OUTER_NONE	= 0,
+	AVF_RX_PTYPE_OUTER_IPV4	= 0,
+	AVF_RX_PTYPE_OUTER_IPV6	= 1
+};
+
+enum avf_rx_ptype_outer_fragmented {
+	AVF_RX_PTYPE_NOT_FRAG	= 0,
+	AVF_RX_PTYPE_FRAG	= 1
+};
+
+enum avf_rx_ptype_tunnel_type {
+	AVF_RX_PTYPE_TUNNEL_NONE		= 0,
+	AVF_RX_PTYPE_TUNNEL_IP_IP		= 1,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT		= 2,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC	= 3,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN	= 4,
+};
+
+enum avf_rx_ptype_tunnel_end_prot {
+	AVF_RX_PTYPE_TUNNEL_END_NONE	= 0,
+	AVF_RX_PTYPE_TUNNEL_END_IPV4	= 1,
+	AVF_RX_PTYPE_TUNNEL_END_IPV6	= 2,
+};
+
+enum avf_rx_ptype_inner_prot {
+	AVF_RX_PTYPE_INNER_PROT_NONE		= 0,
+	AVF_RX_PTYPE_INNER_PROT_UDP		= 1,
+	AVF_RX_PTYPE_INNER_PROT_TCP		= 2,
+	AVF_RX_PTYPE_INNER_PROT_SCTP		= 3,
+	AVF_RX_PTYPE_INNER_PROT_ICMP		= 4,
+	AVF_RX_PTYPE_INNER_PROT_TIMESYNC	= 5
+};
+
+enum avf_rx_ptype_payload_layer {
+	AVF_RX_PTYPE_PAYLOAD_LAYER_NONE	= 0,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY2	= 1,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY3	= 2,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY4	= 3,
+};
+
+#define AVF_RX_PTYPE_BIT_MASK		0x0FFFFFFF
+#define AVF_RX_PTYPE_SHIFT		56
+
+#define AVF_RXD_QW1_LENGTH_PBUF_SHIFT	38
+#define AVF_RXD_QW1_LENGTH_PBUF_MASK	(0x3FFFULL << \
+					 AVF_RXD_QW1_LENGTH_PBUF_SHIFT)
+
+#define AVF_RXD_QW1_LENGTH_HBUF_SHIFT	52
+#define AVF_RXD_QW1_LENGTH_HBUF_MASK	(0x7FFULL << \
+					 AVF_RXD_QW1_LENGTH_HBUF_SHIFT)
+
+#define AVF_RXD_QW1_LENGTH_SPH_SHIFT	63
+#define AVF_RXD_QW1_LENGTH_SPH_MASK	BIT_ULL(AVF_RXD_QW1_LENGTH_SPH_SHIFT)
+
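+/*
+ * Putting the QW1 layout together (illustrative sketch; 'qword1' stands for
+ * the CPU-order value of wb.qword1.status_error_len from the Rx descriptor
+ * unions above):
+ *
+ *	u32 status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+ *		     AVF_RXD_QW1_STATUS_SHIFT;
+ *	u32 error  = (qword1 & AVF_RXD_QW1_ERROR_MASK) >>
+ *		     AVF_RXD_QW1_ERROR_SHIFT;
+ *	u32 ptype  = (qword1 & AVF_RXD_QW1_PTYPE_MASK) >>
+ *		     AVF_RXD_QW1_PTYPE_SHIFT;
+ *	u16 len    = (qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+ *		     AVF_RXD_QW1_LENGTH_PBUF_SHIFT;
+ *	bool done  = !!(status & BIT(AVF_RX_DESC_STATUS_DD_SHIFT));
+ */
+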
+#define AVF_RXD_QW1_NEXTP_SHIFT	38
+#define AVF_RXD_QW1_NEXTP_MASK		(0x1FFFULL << AVF_RXD_QW1_NEXTP_SHIFT)
+
+#define AVF_RXD_QW2_EXT_STATUS_SHIFT	0
+#define AVF_RXD_QW2_EXT_STATUS_MASK	(0xFFFFFUL << \
+					 AVF_RXD_QW2_EXT_STATUS_SHIFT)
+
+enum avf_rx_desc_ext_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT	= 0,
+	AVF_RX_DESC_EXT_STATUS_L2TAG3P_SHIFT	= 1,
+	AVF_RX_DESC_EXT_STATUS_FLEXBL_SHIFT	= 2, /* 2 BITS */
+	AVF_RX_DESC_EXT_STATUS_FLEXBH_SHIFT	= 4, /* 2 BITS */
+	AVF_RX_DESC_EXT_STATUS_FDLONGB_SHIFT	= 9,
+	AVF_RX_DESC_EXT_STATUS_FCOELONGB_SHIFT	= 10,
+	AVF_RX_DESC_EXT_STATUS_PELONGB_SHIFT	= 11,
+};
+
+#define AVF_RXD_QW2_L2TAG2_SHIFT	0
+#define AVF_RXD_QW2_L2TAG2_MASK	(0xFFFFUL << AVF_RXD_QW2_L2TAG2_SHIFT)
+
+#define AVF_RXD_QW2_L2TAG3_SHIFT	16
+#define AVF_RXD_QW2_L2TAG3_MASK	(0xFFFFUL << AVF_RXD_QW2_L2TAG3_SHIFT)
+
+enum avf_rx_desc_pe_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_PE_STATUS_QPID_SHIFT	= 0, /* 18 BITS */
+	AVF_RX_DESC_PE_STATUS_L4PORT_SHIFT	= 0, /* 16 BITS */
+	AVF_RX_DESC_PE_STATUS_IPINDEX_SHIFT	= 16, /* 8 BITS */
+	AVF_RX_DESC_PE_STATUS_QPIDHIT_SHIFT	= 24,
+	AVF_RX_DESC_PE_STATUS_APBVTHIT_SHIFT	= 25,
+	AVF_RX_DESC_PE_STATUS_PORTV_SHIFT	= 26,
+	AVF_RX_DESC_PE_STATUS_URG_SHIFT	= 27,
+	AVF_RX_DESC_PE_STATUS_IPFRAG_SHIFT	= 28,
+	AVF_RX_DESC_PE_STATUS_IPOPT_SHIFT	= 29
+};
+
+#define AVF_RX_PROG_STATUS_DESC_LENGTH_SHIFT		38
+#define AVF_RX_PROG_STATUS_DESC_LENGTH			0x2000000
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT	2
+#define AVF_RX_PROG_STATUS_DESC_QW1_PROGID_MASK	(0x7UL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT)
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT	0
+#define AVF_RX_PROG_STATUS_DESC_QW1_STATUS_MASK	(0x7FFFUL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT)
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT	19
+#define AVF_RX_PROG_STATUS_DESC_QW1_ERROR_MASK		(0x3FUL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT)
+
+enum avf_rx_prog_status_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_PROG_STATUS_DESC_DD_SHIFT	= 0,
+	AVF_RX_PROG_STATUS_DESC_PROG_ID_SHIFT	= 2 /* 3 BITS */
+};
+
+enum avf_rx_prog_status_desc_prog_id_masks {
+	AVF_RX_PROG_STATUS_DESC_FD_FILTER_STATUS	= 1,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CTXT_PROG_STATUS	= 2,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CTXT_INVL_STATUS	= 4,
+};
+
+enum avf_rx_prog_status_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_PROG_STATUS_DESC_FD_TBL_FULL_SHIFT	= 0,
+	AVF_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT	= 1,
+	AVF_RX_PROG_STATUS_DESC_FCOE_TBL_FULL_SHIFT	= 2,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CONFLICT_SHIFT	= 3
+};
+
+#define AVF_TWO_BIT_MASK	0x3
+#define AVF_THREE_BIT_MASK	0x7
+#define AVF_FOUR_BIT_MASK	0xF
+#define AVF_EIGHTEEN_BIT_MASK	0x3FFFF
+
+/* TX Descriptor */
+struct avf_tx_desc {
+	__le64 buffer_addr; /* Address of descriptor's data buf */
+	__le64 cmd_type_offset_bsz;
+};
+
+#define AVF_TXD_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_QW1_DTYPE_MASK		(0xFUL << AVF_TXD_QW1_DTYPE_SHIFT)
+
+enum avf_tx_desc_dtype_value {
+	AVF_TX_DESC_DTYPE_DATA		= 0x0,
+	AVF_TX_DESC_DTYPE_NOP		= 0x1, /* same as Context desc */
+	AVF_TX_DESC_DTYPE_CONTEXT	= 0x1,
+	AVF_TX_DESC_DTYPE_FCOE_CTX	= 0x2,
+	AVF_TX_DESC_DTYPE_FILTER_PROG	= 0x8,
+	AVF_TX_DESC_DTYPE_DDP_CTX	= 0x9,
+	AVF_TX_DESC_DTYPE_FLEX_DATA	= 0xB,
+	AVF_TX_DESC_DTYPE_FLEX_CTX_1	= 0xC,
+	AVF_TX_DESC_DTYPE_FLEX_CTX_2	= 0xD,
+	AVF_TX_DESC_DTYPE_DESC_DONE	= 0xF
+};
+
+#define AVF_TXD_QW1_CMD_SHIFT	4
+#define AVF_TXD_QW1_CMD_MASK	(0x3FFUL << AVF_TXD_QW1_CMD_SHIFT)
+
+enum avf_tx_desc_cmd_bits {
+	AVF_TX_DESC_CMD_EOP			= 0x0001,
+	AVF_TX_DESC_CMD_RS			= 0x0002,
+	AVF_TX_DESC_CMD_ICRC			= 0x0004,
+	AVF_TX_DESC_CMD_IL2TAG1		= 0x0008,
+	AVF_TX_DESC_CMD_DUMMY			= 0x0010,
+	AVF_TX_DESC_CMD_IIPT_NONIP		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV6		= 0x0020, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV4		= 0x0040, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV4_CSUM		= 0x0060, /* 2 BITS */
+	AVF_TX_DESC_CMD_FCOET			= 0x0080,
+	AVF_TX_DESC_CMD_L4T_EOFT_UNK		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_TCP		= 0x0100, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_SCTP		= 0x0200, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_UDP		= 0x0300, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_N		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_T		= 0x0100, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_NI	= 0x0200, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_A		= 0x0300, /* 2 BITS */
+};
+
+#define AVF_TXD_QW1_OFFSET_SHIFT	16
+#define AVF_TXD_QW1_OFFSET_MASK	(0x3FFFFULL << \
+					 AVF_TXD_QW1_OFFSET_SHIFT)
+
+enum avf_tx_desc_length_fields {
+	/* Note: These are predefined bit offsets */
+	AVF_TX_DESC_LENGTH_MACLEN_SHIFT	= 0, /* 7 BITS */
+	AVF_TX_DESC_LENGTH_IPLEN_SHIFT		= 7, /* 7 BITS */
+	AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT	= 14 /* 4 BITS */
+};
+
+#define AVF_TXD_QW1_MACLEN_MASK (0x7FUL << AVF_TX_DESC_LENGTH_MACLEN_SHIFT)
+#define AVF_TXD_QW1_IPLEN_MASK  (0x7FUL << AVF_TX_DESC_LENGTH_IPLEN_SHIFT)
+#define AVF_TXD_QW1_L4LEN_MASK  (0xFUL << AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+#define AVF_TXD_QW1_FCLEN_MASK  (0xFUL << AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+
+#define AVF_TXD_QW1_TX_BUF_SZ_SHIFT	34
+#define AVF_TXD_QW1_TX_BUF_SZ_MASK	(0x3FFFULL << \
+					 AVF_TXD_QW1_TX_BUF_SZ_SHIFT)
+
+#define AVF_TXD_QW1_L2TAG1_SHIFT	48
+#define AVF_TXD_QW1_L2TAG1_MASK	(0xFFFFULL << AVF_TXD_QW1_L2TAG1_SHIFT)
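+
+/* Usage sketch (td_cmd, td_offset, buf_size and l2tag1 are illustrative
+ * variables; CPU_TO_LE64 is assumed to come from the osdep layer): the data
+ * descriptor's second quadword is built by OR-ing the fields above into
+ * place:
+ *
+ *	txd->cmd_type_offset_bsz = CPU_TO_LE64(
+ *		AVF_TX_DESC_DTYPE_DATA |
+ *		((u64)td_cmd << AVF_TXD_QW1_CMD_SHIFT) |
+ *		((u64)td_offset << AVF_TXD_QW1_OFFSET_SHIFT) |
+ *		((u64)buf_size << AVF_TXD_QW1_TX_BUF_SZ_SHIFT) |
+ *		((u64)l2tag1 << AVF_TXD_QW1_L2TAG1_SHIFT));
+ */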
+
+/* Context descriptors */
+struct avf_tx_context_desc {
+	__le32 tunneling_params;
+	__le16 l2tag2;
+	__le16 rsvd;
+	__le64 type_cmd_tso_mss;
+};
+
+#define AVF_TXD_CTX_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_CTX_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_CTX_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_CTX_QW1_CMD_SHIFT	4
+#define AVF_TXD_CTX_QW1_CMD_MASK	(0xFFFFUL << AVF_TXD_CTX_QW1_CMD_SHIFT)
+
+enum avf_tx_ctx_desc_cmd_bits {
+	AVF_TX_CTX_DESC_TSO		= 0x01,
+	AVF_TX_CTX_DESC_TSYN		= 0x02,
+	AVF_TX_CTX_DESC_IL2TAG2	= 0x04,
+	AVF_TX_CTX_DESC_IL2TAG2_IL2H	= 0x08,
+	AVF_TX_CTX_DESC_SWTCH_NOTAG	= 0x00,
+	AVF_TX_CTX_DESC_SWTCH_UPLINK	= 0x10,
+	AVF_TX_CTX_DESC_SWTCH_LOCAL	= 0x20,
+	AVF_TX_CTX_DESC_SWTCH_VSI	= 0x30,
+	AVF_TX_CTX_DESC_SWPE		= 0x40
+};
+
+#define AVF_TXD_CTX_QW1_TSO_LEN_SHIFT	30
+#define AVF_TXD_CTX_QW1_TSO_LEN_MASK	(0x3FFFFULL << \
+					 AVF_TXD_CTX_QW1_TSO_LEN_SHIFT)
+
+#define AVF_TXD_CTX_QW1_MSS_SHIFT	50
+#define AVF_TXD_CTX_QW1_MSS_MASK	(0x3FFFULL << \
+					 AVF_TXD_CTX_QW1_MSS_SHIFT)
+
+#define AVF_TXD_CTX_QW1_VSI_SHIFT	50
+#define AVF_TXD_CTX_QW1_VSI_MASK	(0x1FFULL << AVF_TXD_CTX_QW1_VSI_SHIFT)
+
+#define AVF_TXD_CTX_QW0_EXT_IP_SHIFT	0
+#define AVF_TXD_CTX_QW0_EXT_IP_MASK	(0x3ULL << \
+					 AVF_TXD_CTX_QW0_EXT_IP_SHIFT)
+
+enum avf_tx_ctx_desc_eipt_offload {
+	AVF_TX_CTX_EXT_IP_NONE		= 0x0,
+	AVF_TX_CTX_EXT_IP_IPV6		= 0x1,
+	AVF_TX_CTX_EXT_IP_IPV4_NO_CSUM	= 0x2,
+	AVF_TX_CTX_EXT_IP_IPV4		= 0x3
+};
+
+#define AVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT	2
+#define AVF_TXD_CTX_QW0_EXT_IPLEN_MASK	(0x3FULL << \
+					 AVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT)
+
+#define AVF_TXD_CTX_QW0_NATT_SHIFT	9
+#define AVF_TXD_CTX_QW0_NATT_MASK	(0x3ULL << AVF_TXD_CTX_QW0_NATT_SHIFT)
+
+#define AVF_TXD_CTX_UDP_TUNNELING	BIT_ULL(AVF_TXD_CTX_QW0_NATT_SHIFT)
+#define AVF_TXD_CTX_GRE_TUNNELING	(0x2ULL << AVF_TXD_CTX_QW0_NATT_SHIFT)
+
+#define AVF_TXD_CTX_QW0_EIP_NOINC_SHIFT	11
+#define AVF_TXD_CTX_QW0_EIP_NOINC_MASK	BIT_ULL(AVF_TXD_CTX_QW0_EIP_NOINC_SHIFT)
+
+#define AVF_TXD_CTX_EIP_NOINC_IPID_CONST	AVF_TXD_CTX_QW0_EIP_NOINC_MASK
+
+#define AVF_TXD_CTX_QW0_NATLEN_SHIFT	12
+#define AVF_TXD_CTX_QW0_NATLEN_MASK	(0X7FULL << \
+					 AVF_TXD_CTX_QW0_NATLEN_SHIFT)
+
+#define AVF_TXD_CTX_QW0_DECTTL_SHIFT	19
+#define AVF_TXD_CTX_QW0_DECTTL_MASK	(0xFULL << \
+					 AVF_TXD_CTX_QW0_DECTTL_SHIFT)
+
+#define AVF_TXD_CTX_QW0_L4T_CS_SHIFT	23
+#define AVF_TXD_CTX_QW0_L4T_CS_MASK	BIT_ULL(AVF_TXD_CTX_QW0_L4T_CS_SHIFT)
+struct avf_nop_desc {
+	__le64 rsvd;
+	__le64 dtype_cmd;
+};
+
+#define AVF_TXD_NOP_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_NOP_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_NOP_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_NOP_QW1_CMD_SHIFT	4
+#define AVF_TXD_NOP_QW1_CMD_MASK	(0x7FUL << AVF_TXD_NOP_QW1_CMD_SHIFT)
+
+enum avf_tx_nop_desc_cmd_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_TX_NOP_DESC_EOP_SHIFT	= 0,
+	AVF_TX_NOP_DESC_RS_SHIFT	= 1,
+	AVF_TX_NOP_DESC_RSV_SHIFT	= 2 /* 5 bits */
+};
+
+struct avf_filter_program_desc {
+	__le32 qindex_flex_ptype_vsi;
+	__le32 rsvd;
+	__le32 dtype_cmd_cntindex;
+	__le32 fd_id;
+};
+#define AVF_TXD_FLTR_QW0_QINDEX_SHIFT	0
+#define AVF_TXD_FLTR_QW0_QINDEX_MASK	(0x7FFUL << \
+					 AVF_TXD_FLTR_QW0_QINDEX_SHIFT)
+#define AVF_TXD_FLTR_QW0_FLEXOFF_SHIFT	11
+#define AVF_TXD_FLTR_QW0_FLEXOFF_MASK	(0x7UL << \
+					 AVF_TXD_FLTR_QW0_FLEXOFF_SHIFT)
+#define AVF_TXD_FLTR_QW0_PCTYPE_SHIFT	17
+#define AVF_TXD_FLTR_QW0_PCTYPE_MASK	(0x3FUL << \
+					 AVF_TXD_FLTR_QW0_PCTYPE_SHIFT)
+
+/* Packet Classifier Types for filters */
+enum avf_filter_pctype {
+	/* Note: Values 0-28 are reserved for future use.
+	 * Values 29, 30, and 32 are not supported on XL710 and X710.
+	 */
+	AVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP	= 29,
+	AVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP	= 30,
+	AVF_FILTER_PCTYPE_NONF_IPV4_UDP		= 31,
+	AVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK	= 32,
+	AVF_FILTER_PCTYPE_NONF_IPV4_TCP		= 33,
+	AVF_FILTER_PCTYPE_NONF_IPV4_SCTP		= 34,
+	AVF_FILTER_PCTYPE_NONF_IPV4_OTHER		= 35,
+	AVF_FILTER_PCTYPE_FRAG_IPV4			= 36,
+	/* Note: Values 37-38 are reserved for future use.
+	 * Values 39, 40, and 42 are not supported on XL710 and X710.
+	 */
+	AVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP	= 39,
+	AVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP	= 40,
+	AVF_FILTER_PCTYPE_NONF_IPV6_UDP		= 41,
+	AVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK	= 42,
+	AVF_FILTER_PCTYPE_NONF_IPV6_TCP		= 43,
+	AVF_FILTER_PCTYPE_NONF_IPV6_SCTP		= 44,
+	AVF_FILTER_PCTYPE_NONF_IPV6_OTHER		= 45,
+	AVF_FILTER_PCTYPE_FRAG_IPV6			= 46,
+	/* Note: Value 47 is reserved for future use */
+	AVF_FILTER_PCTYPE_FCOE_OX			= 48,
+	AVF_FILTER_PCTYPE_FCOE_RX			= 49,
+	AVF_FILTER_PCTYPE_FCOE_OTHER			= 50,
+	/* Note: Values 51-62 are reserved for future use */
+	AVF_FILTER_PCTYPE_L2_PAYLOAD			= 63,
+};
+
+enum avf_filter_program_desc_dest {
+	AVF_FILTER_PROGRAM_DESC_DEST_DROP_PACKET		= 0x0,
+	AVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_QINDEX	= 0x1,
+	AVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_OTHER	= 0x2,
+};
+
+enum avf_filter_program_desc_fd_status {
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_NONE			= 0x0,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID		= 0x1,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID_4FLEX_BYTES	= 0x2,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_8FLEX_BYTES		= 0x3,
+};
+
+#define AVF_TXD_FLTR_QW0_DEST_VSI_SHIFT	23
+#define AVF_TXD_FLTR_QW0_DEST_VSI_MASK	(0x1FFUL << \
+					 AVF_TXD_FLTR_QW0_DEST_VSI_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_FLTR_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_FLTR_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CMD_SHIFT	4
+#define AVF_TXD_FLTR_QW1_CMD_MASK	(0xFFFFULL << \
+					 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_PCMD_SHIFT	(0x0ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_PCMD_MASK	(0x7ULL << AVF_TXD_FLTR_QW1_PCMD_SHIFT)
+
+enum avf_filter_program_desc_pcmd {
+	AVF_FILTER_PROGRAM_DESC_PCMD_ADD_UPDATE	= 0x1,
+	AVF_FILTER_PROGRAM_DESC_PCMD_REMOVE		= 0x2,
+};
+
+#define AVF_TXD_FLTR_QW1_DEST_SHIFT	(0x3ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_DEST_MASK	(0x3ULL << AVF_TXD_FLTR_QW1_DEST_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CNT_ENA_SHIFT	(0x7ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_CNT_ENA_MASK	BIT_ULL(AVF_TXD_FLTR_QW1_CNT_ENA_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_FD_STATUS_SHIFT	(0x9ULL + \
+						 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_FD_STATUS_MASK (0x3ULL << \
+					  AVF_TXD_FLTR_QW1_FD_STATUS_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_ATR_SHIFT	(0xEULL + \
+					 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_ATR_MASK	BIT_ULL(AVF_TXD_FLTR_QW1_ATR_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CNTINDEX_SHIFT 20
+#define AVF_TXD_FLTR_QW1_CNTINDEX_MASK	(0x1FFUL << \
+					 AVF_TXD_FLTR_QW1_CNTINDEX_SHIFT)
+
+enum avf_filter_type {
+	AVF_FLOW_DIRECTOR_FLTR = 0,
+	AVF_PE_QUAD_HASH_FLTR = 1,
+	AVF_ETHERTYPE_FLTR,
+	AVF_FCOE_CTX_FLTR,
+	AVF_MAC_VLAN_FLTR,
+	AVF_HASH_FLTR
+};
+
+struct avf_vsi_context {
+	u16 seid;
+	u16 uplink_seid;
+	u16 vsi_number;
+	u16 vsis_allocated;
+	u16 vsis_unallocated;
+	u16 flags;
+	u8 pf_num;
+	u8 vf_num;
+	u8 connection_type;
+	struct avf_aqc_vsi_properties_data info;
+};
+
+struct avf_veb_context {
+	u16 seid;
+	u16 uplink_seid;
+	u16 veb_number;
+	u16 vebs_allocated;
+	u16 vebs_unallocated;
+	u16 flags;
+	struct avf_aqc_get_veb_parameters_completion info;
+};
+
+/* Statistics collected by each port, VSI, VEB, and S-channel */
+struct avf_eth_stats {
+	u64 rx_bytes;			/* gorc */
+	u64 rx_unicast;			/* uprc */
+	u64 rx_multicast;		/* mprc */
+	u64 rx_broadcast;		/* bprc */
+	u64 rx_discards;		/* rdpc */
+	u64 rx_unknown_protocol;	/* rupp */
+	u64 tx_bytes;			/* gotc */
+	u64 tx_unicast;			/* uptc */
+	u64 tx_multicast;		/* mptc */
+	u64 tx_broadcast;		/* bptc */
+	u64 tx_discards;		/* tdpc */
+	u64 tx_errors;			/* tepc */
+};
+
+/* Statistics collected per VEB per TC */
+struct avf_veb_tc_stats {
+	u64 tc_rx_packets[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_rx_bytes[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_tx_packets[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_tx_bytes[AVF_MAX_TRAFFIC_CLASS];
+};
+
+/* Statistics collected per function for FCoE */
+struct avf_fcoe_stats {
+	u64 rx_fcoe_packets;		/* fcoeprc */
+	u64 rx_fcoe_dwords;		/* fcoedwrc */
+	u64 rx_fcoe_dropped;		/* fcoerpdc */
+	u64 tx_fcoe_packets;		/* fcoeptc */
+	u64 tx_fcoe_dwords;		/* fcoedwtc */
+	u64 fcoe_bad_fccrc;		/* fcoecrc */
+	u64 fcoe_last_error;		/* fcoelast */
+	u64 fcoe_ddp_count;		/* fcoeddpc */
+};
+
+/* offset to per function FCoE statistics block */
+#define AVF_FCOE_VF_STAT_OFFSET	0
+#define AVF_FCOE_PF_STAT_OFFSET	128
+#define AVF_FCOE_STAT_MAX		(AVF_FCOE_PF_STAT_OFFSET + AVF_MAX_PF)
+
+/* Statistics collected by the MAC */
+struct avf_hw_port_stats {
+	/* eth stats collected by the port */
+	struct avf_eth_stats eth;
+
+	/* additional port specific stats */
+	u64 tx_dropped_link_down;	/* tdold */
+	u64 crc_errors;			/* crcerrs */
+	u64 illegal_bytes;		/* illerrc */
+	u64 error_bytes;		/* errbc */
+	u64 mac_local_faults;		/* mlfc */
+	u64 mac_remote_faults;		/* mrfc */
+	u64 rx_length_errors;		/* rlec */
+	u64 link_xon_rx;		/* lxonrxc */
+	u64 link_xoff_rx;		/* lxoffrxc */
+	u64 priority_xon_rx[8];		/* pxonrxc[8] */
+	u64 priority_xoff_rx[8];	/* pxoffrxc[8] */
+	u64 link_xon_tx;		/* lxontxc */
+	u64 link_xoff_tx;		/* lxofftxc */
+	u64 priority_xon_tx[8];		/* pxontxc[8] */
+	u64 priority_xoff_tx[8];	/* pxofftxc[8] */
+	u64 priority_xon_2_xoff[8];	/* pxon2offc[8] */
+	u64 rx_size_64;			/* prc64 */
+	u64 rx_size_127;		/* prc127 */
+	u64 rx_size_255;		/* prc255 */
+	u64 rx_size_511;		/* prc511 */
+	u64 rx_size_1023;		/* prc1023 */
+	u64 rx_size_1522;		/* prc1522 */
+	u64 rx_size_big;		/* prc9522 */
+	u64 rx_undersize;		/* ruc */
+	u64 rx_fragments;		/* rfc */
+	u64 rx_oversize;		/* roc */
+	u64 rx_jabber;			/* rjc */
+	u64 tx_size_64;			/* ptc64 */
+	u64 tx_size_127;		/* ptc127 */
+	u64 tx_size_255;		/* ptc255 */
+	u64 tx_size_511;		/* ptc511 */
+	u64 tx_size_1023;		/* ptc1023 */
+	u64 tx_size_1522;		/* ptc1522 */
+	u64 tx_size_big;		/* ptc9522 */
+	u64 mac_short_packet_dropped;	/* mspdc */
+	u64 checksum_error;		/* xec */
+	/* flow director stats */
+	u64 fd_atr_match;
+	u64 fd_sb_match;
+	u64 fd_atr_tunnel_match;
+	u32 fd_atr_status;
+	u32 fd_sb_status;
+	/* EEE LPI */
+	u32 tx_lpi_status;
+	u32 rx_lpi_status;
+	u64 tx_lpi_count;		/* etlpic */
+	u64 rx_lpi_count;		/* erlpic */
+};
+
+/* Checksum and Shadow RAM pointers */
+#define AVF_SR_NVM_CONTROL_WORD		0x00
+#define AVF_SR_PCIE_ANALOG_CONFIG_PTR		0x03
+#define AVF_SR_PHY_ANALOG_CONFIG_PTR		0x04
+#define AVF_SR_OPTION_ROM_PTR			0x05
+#define AVF_SR_RO_PCIR_REGS_AUTO_LOAD_PTR	0x06
+#define AVF_SR_AUTO_GENERATED_POINTERS_PTR	0x07
+#define AVF_SR_PCIR_REGS_AUTO_LOAD_PTR		0x08
+#define AVF_SR_EMP_GLOBAL_MODULE_PTR		0x09
+#define AVF_SR_RO_PCIE_LCB_PTR			0x0A
+#define AVF_SR_EMP_IMAGE_PTR			0x0B
+#define AVF_SR_PE_IMAGE_PTR			0x0C
+#define AVF_SR_CSR_PROTECTED_LIST_PTR		0x0D
+#define AVF_SR_MNG_CONFIG_PTR			0x0E
+#define AVF_SR_EMP_MODULE_PTR			0x0F
+#define AVF_SR_PBA_FLAGS			0x15
+#define AVF_SR_PBA_BLOCK_PTR			0x16
+#define AVF_SR_BOOT_CONFIG_PTR			0x17
+#define AVF_NVM_OEM_VER_OFF			0x83
+#define AVF_SR_NVM_DEV_STARTER_VERSION		0x18
+#define AVF_SR_NVM_WAKE_ON_LAN			0x19
+#define AVF_SR_ALTERNATE_SAN_MAC_ADDRESS_PTR	0x27
+#define AVF_SR_PERMANENT_SAN_MAC_ADDRESS_PTR	0x28
+#define AVF_SR_NVM_MAP_VERSION			0x29
+#define AVF_SR_NVM_IMAGE_VERSION		0x2A
+#define AVF_SR_NVM_STRUCTURE_VERSION		0x2B
+#define AVF_SR_NVM_EETRACK_LO			0x2D
+#define AVF_SR_NVM_EETRACK_HI			0x2E
+#define AVF_SR_VPD_PTR				0x2F
+#define AVF_SR_PXE_SETUP_PTR			0x30
+#define AVF_SR_PXE_CONFIG_CUST_OPTIONS_PTR	0x31
+#define AVF_SR_NVM_ORIGINAL_EETRACK_LO		0x34
+#define AVF_SR_NVM_ORIGINAL_EETRACK_HI		0x35
+#define AVF_SR_SW_ETHERNET_MAC_ADDRESS_PTR	0x37
+#define AVF_SR_POR_REGS_AUTO_LOAD_PTR		0x38
+#define AVF_SR_EMPR_REGS_AUTO_LOAD_PTR		0x3A
+#define AVF_SR_GLOBR_REGS_AUTO_LOAD_PTR	0x3B
+#define AVF_SR_CORER_REGS_AUTO_LOAD_PTR	0x3C
+#define AVF_SR_PHY_ACTIVITY_LIST_PTR		0x3D
+#define AVF_SR_PCIE_ALT_AUTO_LOAD_PTR		0x3E
+#define AVF_SR_SW_CHECKSUM_WORD		0x3F
+#define AVF_SR_1ST_FREE_PROVISION_AREA_PTR	0x40
+#define AVF_SR_4TH_FREE_PROVISION_AREA_PTR	0x42
+#define AVF_SR_3RD_FREE_PROVISION_AREA_PTR	0x44
+#define AVF_SR_2ND_FREE_PROVISION_AREA_PTR	0x46
+#define AVF_SR_EMP_SR_SETTINGS_PTR		0x48
+#define AVF_SR_FEATURE_CONFIGURATION_PTR	0x49
+#define AVF_SR_CONFIGURATION_METADATA_PTR	0x4D
+#define AVF_SR_IMMEDIATE_VALUES_PTR		0x4E
+
+/* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
+#define AVF_SR_VPD_MODULE_MAX_SIZE		1024
+#define AVF_SR_PCIE_ALT_MODULE_MAX_SIZE	1024
+#define AVF_SR_CONTROL_WORD_1_SHIFT		0x06
+#define AVF_SR_CONTROL_WORD_1_MASK	(0x03 << AVF_SR_CONTROL_WORD_1_SHIFT)
+
+/* Shadow RAM related */
+#define AVF_SR_SECTOR_SIZE_IN_WORDS	0x800
+#define AVF_SR_BUF_ALIGNMENT		4096
+#define AVF_SR_WORDS_IN_1KB		512
+/* Checksum should be calculated such that after adding all the words,
+ * including the checksum word itself, the sum should be 0xBABA.
+ */
+#define AVF_SR_SW_CHECKSUM_BASE	0xBABA
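+
+/* Example (sketch, variable names illustrative): the software checksum word
+ * is therefore chosen as the value that makes the overall 16-bit sum equal
+ * the base constant:
+ *
+ *	u16 sum = 0;
+ *	for (i = 0; i < nvm_words; i++)
+ *		if (i != AVF_SR_SW_CHECKSUM_WORD)
+ *			sum += word[i];
+ *	checksum = (u16)(AVF_SR_SW_CHECKSUM_BASE - sum);
+ */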
+
+#define AVF_SRRD_SRCTL_ATTEMPTS	100000
+
+/* FCoE Tx context descriptor - Use the avf_tx_context_desc struct */
+
+enum avf_fcoe_tx_ctx_desc_cmd_bits {
+	AVF_FCOE_TX_CTX_DESC_OPCODE_SINGLE_SEND	= 0x00, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS2	= 0x01, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS3	= 0x05, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS2	= 0x02, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS3	= 0x06, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS2	= 0x03, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS3	= 0x07, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DDP_CTX_INVL	= 0x08, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_CTX_INVL	= 0x09, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_RELOFF			= 0x10,
+	AVF_FCOE_TX_CTX_DESC_CLRSEQ			= 0x20,
+	AVF_FCOE_TX_CTX_DESC_DIFENA			= 0x40,
+	AVF_FCOE_TX_CTX_DESC_IL2TAG2			= 0x80
+};
+
+/* FCoE DIF/DIX Context descriptor */
+struct avf_fcoe_difdix_context_desc {
+	__le64 flags_buff0_buff1_ref;
+	__le64 difapp_msk_bias;
+};
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT	0
+#define AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_MASK	(0xFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT)
+
+enum avf_fcoe_difdix_ctx_desc_flags_bits {
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_RSVD				= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGCHK		= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGNOTCHK		= 0x0004,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_OPAQUE			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY		= 0x0008,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPTAG	= 0x0010,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPREFTAG	= 0x0018,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_CNST			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_INC1BLK		= 0x0020,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_APPTAG		= 0x0040,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_RSVD			= 0x0060,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_XSUM			= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_CRC			= 0x0080,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_UNTAG			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_BUF			= 0x0100,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_RSVD			= 0x0200,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_EMBDTAGS		= 0x0300,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_UNTAG			= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_TAG			= 0x0400,
+	/* 1 BIT */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_512B			= 0x0000,
+	/* 1 BIT */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_4K			= 0x0800
+};
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT	12
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_MASK	(0x3FFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT	22
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_MASK	(0x3FFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT	32
+#define AVF_FCOE_DIFDIX_CTX_QW0_REF_MASK	(0xFFFFFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT	0
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MASK	(0xFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT	16
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_MASK	(0xFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT	32
+#define AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_MASK	(0xFFFFFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT)
+
+/* FCoE DIF/DIX Buffers descriptor */
+struct avf_fcoe_difdix_buffers_desc {
+	__le64 buff_addr0;
+	__le64 buff_addr1;
+};
+
+/* FCoE DDP Context descriptor */
+struct avf_fcoe_ddp_context_desc {
+	__le64 rsvd;
+	__le64 type_cmd_foff_lsize;
+};
+
+#define AVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT	0
+#define AVF_FCOE_DDP_CTX_QW1_DTYPE_MASK	(0xFULL << \
+					AVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT)
+
+#define AVF_FCOE_DDP_CTX_QW1_CMD_SHIFT	4
+#define AVF_FCOE_DDP_CTX_QW1_CMD_MASK	(0xFULL << \
+					 AVF_FCOE_DDP_CTX_QW1_CMD_SHIFT)
+
+enum avf_fcoe_ddp_ctx_desc_cmd_bits {
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_512B	= 0x00, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_4K		= 0x01, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_8K		= 0x02, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_16K	= 0x03, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_DIFENA		= 0x04, /* 1 BIT  */
+	AVF_FCOE_DDP_CTX_DESC_LASTSEQH		= 0x08, /* 1 BIT  */
+};
+
+#define AVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT	16
+#define AVF_FCOE_DDP_CTX_QW1_FOFF_MASK	(0x3FFFULL << \
+					 AVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT)
+
+#define AVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT	32
+#define AVF_FCOE_DDP_CTX_QW1_LSIZE_MASK	(0x3FFFULL << \
+					AVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT)
+
+/* FCoE DDP/DWO Queue Context descriptor */
+struct avf_fcoe_queue_context_desc {
+	__le64 dmaindx_fbase;           /* 0:11 DMAINDX, 12:63 FBASE */
+	__le64 flen_tph;                /* 0:12 FLEN, 13:15 TPH */
+};
+
+#define AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT	0
+#define AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_MASK	(0xFFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT	12
+#define AVF_FCOE_QUEUE_CTX_QW0_FBASE_MASK	(0xFFFFFFFFFFFFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT	0
+#define AVF_FCOE_QUEUE_CTX_QW1_FLEN_MASK	(0x1FFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW1_TPH_SHIFT	13
+#define AVF_FCOE_QUEUE_CTX_QW1_TPH_MASK	(0x7ULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_TPH_SHIFT)
+
+enum avf_fcoe_queue_ctx_desc_tph_bits {
+	AVF_FCOE_QUEUE_CTX_DESC_TPHRDESC	= 0x1,
+	AVF_FCOE_QUEUE_CTX_DESC_TPHDATA	= 0x2
+};
+
+#define AVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT	30
+#define AVF_FCOE_QUEUE_CTX_QW1_RECIPE_MASK	(0x3ULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT)
+
+/* FCoE DDP/DWO Filter Context descriptor */
+struct avf_fcoe_filter_context_desc {
+	__le32 param;
+	__le16 seqn;
+
+	/* 48:51(0:3) RSVD, 52:63(4:15) DMAINDX */
+	__le16 rsvd_dmaindx;
+
+	/* 0:7 FLAGS, 8:52 RSVD, 53:63 LANQ */
+	__le64 flags_rsvd_lanq;
+};
+
+#define AVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT	4
+#define AVF_FCOE_FILTER_CTX_QW0_DMAINDX_MASK	(0xFFF << \
+					AVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT)
+
+enum avf_fcoe_filter_ctx_desc_flags_bits {
+	AVF_FCOE_FILTER_CTX_DESC_CTYP_DDP	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_CTYP_DWO	= 0x01,
+	AVF_FCOE_FILTER_CTX_DESC_ENODE_INIT	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_ENODE_RSP	= 0x02,
+	AVF_FCOE_FILTER_CTX_DESC_FC_CLASS2	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_FC_CLASS3	= 0x04
+};
+
+#define AVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT	0
+#define AVF_FCOE_FILTER_CTX_QW1_FLAGS_MASK	(0xFFULL << \
+					AVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT)
+
+#define AVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT     8
+#define AVF_FCOE_FILTER_CTX_QW1_PCTYPE_MASK      (0x3FULL << \
+			AVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT)
+
+#define AVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT     53
+#define AVF_FCOE_FILTER_CTX_QW1_LANQINDX_MASK      (0x7FFULL << \
+			AVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT)
+
+enum avf_switch_element_types {
+	AVF_SWITCH_ELEMENT_TYPE_MAC	= 1,
+	AVF_SWITCH_ELEMENT_TYPE_PF	= 2,
+	AVF_SWITCH_ELEMENT_TYPE_VF	= 3,
+	AVF_SWITCH_ELEMENT_TYPE_EMP	= 4,
+	AVF_SWITCH_ELEMENT_TYPE_BMC	= 6,
+	AVF_SWITCH_ELEMENT_TYPE_PE	= 16,
+	AVF_SWITCH_ELEMENT_TYPE_VEB	= 17,
+	AVF_SWITCH_ELEMENT_TYPE_PA	= 18,
+	AVF_SWITCH_ELEMENT_TYPE_VSI	= 19,
+};
+
+/* Supported EtherType filters */
+enum avf_ether_type_index {
+	AVF_ETHER_TYPE_1588		= 0,
+	AVF_ETHER_TYPE_FIP		= 1,
+	AVF_ETHER_TYPE_OUI_EXTENDED	= 2,
+	AVF_ETHER_TYPE_MAC_CONTROL	= 3,
+	AVF_ETHER_TYPE_LLDP		= 4,
+	AVF_ETHER_TYPE_EVB_PROTOCOL1	= 5,
+	AVF_ETHER_TYPE_EVB_PROTOCOL2	= 6,
+	AVF_ETHER_TYPE_QCN_CNM		= 7,
+	AVF_ETHER_TYPE_8021X		= 8,
+	AVF_ETHER_TYPE_ARP		= 9,
+	AVF_ETHER_TYPE_RSV1		= 10,
+	AVF_ETHER_TYPE_RSV2		= 11,
+};
+
+/* Filter context base size is 1K */
+#define AVF_HASH_FILTER_BASE_SIZE	1024
+/* Supported Hash filter values */
+enum avf_hash_filter_size {
+	AVF_HASH_FILTER_SIZE_1K	= 0,
+	AVF_HASH_FILTER_SIZE_2K	= 1,
+	AVF_HASH_FILTER_SIZE_4K	= 2,
+	AVF_HASH_FILTER_SIZE_8K	= 3,
+	AVF_HASH_FILTER_SIZE_16K	= 4,
+	AVF_HASH_FILTER_SIZE_32K	= 5,
+	AVF_HASH_FILTER_SIZE_64K	= 6,
+	AVF_HASH_FILTER_SIZE_128K	= 7,
+	AVF_HASH_FILTER_SIZE_256K	= 8,
+	AVF_HASH_FILTER_SIZE_512K	= 9,
+	AVF_HASH_FILTER_SIZE_1M	= 10,
+};
+
+/* DMA context base size is 0.5K */
+#define AVF_DMA_CNTX_BASE_SIZE		512
+/* Supported DMA context values */
+enum avf_dma_cntx_size {
+	AVF_DMA_CNTX_SIZE_512		= 0,
+	AVF_DMA_CNTX_SIZE_1K		= 1,
+	AVF_DMA_CNTX_SIZE_2K		= 2,
+	AVF_DMA_CNTX_SIZE_4K		= 3,
+	AVF_DMA_CNTX_SIZE_8K		= 4,
+	AVF_DMA_CNTX_SIZE_16K		= 5,
+	AVF_DMA_CNTX_SIZE_32K		= 6,
+	AVF_DMA_CNTX_SIZE_64K		= 7,
+	AVF_DMA_CNTX_SIZE_128K		= 8,
+	AVF_DMA_CNTX_SIZE_256K		= 9,
+};
+
+/* Supported Hash look up table (LUT) sizes */
+enum avf_hash_lut_size {
+	AVF_HASH_LUT_SIZE_128		= 0,
+	AVF_HASH_LUT_SIZE_512		= 1,
+};
+
+/* Structure to hold a per PF filter control settings */
+struct avf_filter_control_settings {
+	/* number of PE Quad Hash filter buckets */
+	enum avf_hash_filter_size pe_filt_num;
+	/* number of PE Quad Hash contexts */
+	enum avf_dma_cntx_size pe_cntx_num;
+	/* number of FCoE filter buckets */
+	enum avf_hash_filter_size fcoe_filt_num;
+	/* number of FCoE DDP contexts */
+	enum avf_dma_cntx_size fcoe_cntx_num;
+	/* size of the Hash LUT */
+	enum avf_hash_lut_size	hash_lut_size;
+	/* enable FDIR filters for PF and its VFs */
+	bool enable_fdir;
+	/* enable Ethertype filters for PF and its VFs */
+	bool enable_ethtype;
+	/* enable MAC/VLAN filters for PF and its VFs */
+	bool enable_macvlan;
+};
+
+/* Structure to hold device level control filter counts */
+struct avf_control_filter_stats {
+	u16 mac_etype_used;   /* Used perfect match MAC/EtherType filters */
+	u16 etype_used;       /* Used perfect EtherType filters */
+	u16 mac_etype_free;   /* Un-used perfect match MAC/EtherType filters */
+	u16 etype_free;       /* Un-used perfect EtherType filters */
+};
+
+enum avf_reset_type {
+	AVF_RESET_POR		= 0,
+	AVF_RESET_CORER	= 1,
+	AVF_RESET_GLOBR	= 2,
+	AVF_RESET_EMPR		= 3,
+};
+
+/* IEEE 802.1AB LLDP Agent Variables from NVM */
+#define AVF_NVM_LLDP_CFG_PTR		0xD
+struct avf_lldp_variables {
+	u16 length;
+	u16 adminstatus;
+	u16 msgfasttx;
+	u16 msgtxinterval;
+	u16 txparams;
+	u16 timers;
+	u16 crc8;
+};
+
+/* Offsets into Alternate RAM */
+#define AVF_ALT_STRUCT_FIRST_PF_OFFSET		0   /* in dwords */
+#define AVF_ALT_STRUCT_DWORDS_PER_PF		64   /* in dwords */
+#define AVF_ALT_STRUCT_OUTER_VLAN_TAG_OFFSET	0xD  /* in dwords */
+#define AVF_ALT_STRUCT_USER_PRIORITY_OFFSET	0xC  /* in dwords */
+#define AVF_ALT_STRUCT_MIN_BW_OFFSET		0xE  /* in dwords */
+#define AVF_ALT_STRUCT_MAX_BW_OFFSET		0xF  /* in dwords */
+
+/* Alternate RAM Bandwidth Masks */
+#define AVF_ALT_BW_VALUE_MASK		0xFF
+#define AVF_ALT_BW_RELATIVE_MASK	0x40000000
+#define AVF_ALT_BW_VALID_MASK		0x80000000
+
+/* RSS Hash Table Size */
+#define AVF_PFQF_CTL_0_HASHLUTSIZE_512	0x00010000
+
+/* INPUT SET MASK for RSS, flow director, and flexible payload */
+#define AVF_L3_SRC_SHIFT		47
+#define AVF_L3_SRC_MASK		(0x3ULL << AVF_L3_SRC_SHIFT)
+#define AVF_L3_V6_SRC_SHIFT		43
+#define AVF_L3_V6_SRC_MASK		(0xFFULL << AVF_L3_V6_SRC_SHIFT)
+#define AVF_L3_DST_SHIFT		35
+#define AVF_L3_DST_MASK		(0x3ULL << AVF_L3_DST_SHIFT)
+#define AVF_L3_V6_DST_SHIFT		35
+#define AVF_L3_V6_DST_MASK		(0xFFULL << AVF_L3_V6_DST_SHIFT)
+#define AVF_L4_SRC_SHIFT		34
+#define AVF_L4_SRC_MASK		(0x1ULL << AVF_L4_SRC_SHIFT)
+#define AVF_L4_DST_SHIFT		33
+#define AVF_L4_DST_MASK		(0x1ULL << AVF_L4_DST_SHIFT)
+#define AVF_VERIFY_TAG_SHIFT		31
+#define AVF_VERIFY_TAG_MASK		(0x3ULL << AVF_VERIFY_TAG_SHIFT)
+
+#define AVF_FLEX_50_SHIFT		13
+#define AVF_FLEX_50_MASK		(0x1ULL << AVF_FLEX_50_SHIFT)
+#define AVF_FLEX_51_SHIFT		12
+#define AVF_FLEX_51_MASK		(0x1ULL << AVF_FLEX_51_SHIFT)
+#define AVF_FLEX_52_SHIFT		11
+#define AVF_FLEX_52_MASK		(0x1ULL << AVF_FLEX_52_SHIFT)
+#define AVF_FLEX_53_SHIFT		10
+#define AVF_FLEX_53_MASK		(0x1ULL << AVF_FLEX_53_SHIFT)
+#define AVF_FLEX_54_SHIFT		9
+#define AVF_FLEX_54_MASK		(0x1ULL << AVF_FLEX_54_SHIFT)
+#define AVF_FLEX_55_SHIFT		8
+#define AVF_FLEX_55_MASK		(0x1ULL << AVF_FLEX_55_SHIFT)
+#define AVF_FLEX_56_SHIFT		7
+#define AVF_FLEX_56_MASK		(0x1ULL << AVF_FLEX_56_SHIFT)
+#define AVF_FLEX_57_SHIFT		6
+#define AVF_FLEX_57_MASK		(0x1ULL << AVF_FLEX_57_SHIFT)
+
+/* Version format for Dynamic Device Personalization(DDP) */
+struct avf_ddp_version {
+	u8 major;
+	u8 minor;
+	u8 update;
+	u8 draft;
+};
+
+#define AVF_DDP_NAME_SIZE	32
+
+/* Package header */
+struct avf_package_header {
+	struct avf_ddp_version version;
+	u32 segment_count;
+	u32 segment_offset[1];
+};
+
+/* Generic segment header */
+struct avf_generic_seg_header {
+#define SEGMENT_TYPE_METADATA	0x00000001
+#define SEGMENT_TYPE_NOTES	0x00000002
+#define SEGMENT_TYPE_AVF	0x00000011
+#define SEGMENT_TYPE_X722	0x00000012
+	u32 type;
+	struct avf_ddp_version version;
+	u32 size;
+	char name[AVF_DDP_NAME_SIZE];
+};
+
+struct avf_metadata_segment {
+	struct avf_generic_seg_header header;
+	struct avf_ddp_version version;
+#define AVF_DDP_TRACKID_RDONLY		0
+#define AVF_DDP_TRACKID_INVALID	0xFFFFFFFF
+	u32 track_id;
+	char name[AVF_DDP_NAME_SIZE];
+};
+
+struct avf_device_id_entry {
+	u32 vendor_dev_id;
+	u32 sub_vendor_dev_id;
+};
+
+struct avf_profile_segment {
+	struct avf_generic_seg_header header;
+	struct avf_ddp_version version;
+	char name[AVF_DDP_NAME_SIZE];
+	u32 device_table_count;
+	struct avf_device_id_entry device_table[1];
+};
+
+struct avf_section_table {
+	u32 section_count;
+	u32 section_offset[1];
+};
+
+struct avf_profile_section_header {
+	u16 tbl_size;
+	u16 data_end;
+	struct {
+#define SECTION_TYPE_INFO	0x00000010
+#define SECTION_TYPE_MMIO	0x00000800
+#define SECTION_TYPE_RB_MMIO	0x00001800
+#define SECTION_TYPE_AQ		0x00000801
+#define SECTION_TYPE_RB_AQ	0x00001801
+#define SECTION_TYPE_NOTE	0x80000000
+#define SECTION_TYPE_NAME	0x80000001
+#define SECTION_TYPE_PROTO	0x80000002
+#define SECTION_TYPE_PCTYPE	0x80000003
+#define SECTION_TYPE_PTYPE	0x80000004
+		u32 type;
+		u32 offset;
+		u32 size;
+	} section;
+};
+
+struct avf_profile_tlv_section_record {
+	u8 rtype;
+	u8 type;
+	u16 len;
+	u8 data[12];
+};
+
+/* Generic AQ section in profile */
+struct avf_profile_aq_section {
+	u16 opcode;
+	u16 flags;
+	u8  param[16];
+	u16 datalen;
+	u8  data[1];
+};
+
+struct avf_profile_info {
+	u32 track_id;
+	struct avf_ddp_version version;
+	u8 op;
+#define AVF_DDP_ADD_TRACKID		0x01
+#define AVF_DDP_REMOVE_TRACKID	0x02
+	u8 reserved[7];
+	u8 name[AVF_DDP_NAME_SIZE];
+};
+#endif /* _AVF_TYPE_H_ */
diff --git a/drivers/net/avf/base/virtchnl.h b/drivers/net/avf/base/virtchnl.h
new file mode 100644
index 0000000..f00dd36
--- /dev/null
+++ b/drivers/net/avf/base/virtchnl.h
@@ -0,0 +1,772 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _VIRTCHNL_H_
+#define _VIRTCHNL_H_
+
+/* Description:
+ * This header file describes the VF-PF communication protocol used
+ * by the drivers for all devices starting from our 40G product line
+ *
+ * Admin queue buffer usage:
+ * desc->opcode is always aqc_opc_send_msg_to_pf
+ * flags, retval, datalen, and data addr are all used normally.
+ * The Firmware copies the cookie fields when sending messages between the
+ * PF and VF, but uses all other fields internally. Due to this limitation,
+ * we must send all messages as "indirect", i.e. using an external buffer.
+ *
+ * All the VSI indexes are relative to the VF. Each VF can have a maximum of
+ * three VSIs. All the queue indexes are relative to the VSI.  Each VF can
+ * have a maximum of sixteen queues for all of its VSIs.
+ *
+ * The PF is required to return a status code in v_retval for all messages
+ * except RESET_VF, which does not require any response. The return value
+ * is of status_code type, defined in the shared type.h.
+ *
+ * In general, VF driver initialization should roughly follow the order of
+ * these opcodes. The VF driver must first validate the API version of the
+ * PF driver, then request a reset, then get resources, then configure
+ * queues and interrupts. After these operations are complete, the VF
+ * driver may start its queues, optionally add MAC and VLAN filters, and
+ * process traffic.
+ */
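+
+/* Initialization sketch (illustrative only): a VF driver following the
+ * order described above would issue roughly this opcode sequence:
+ *
+ *	VIRTCHNL_OP_VERSION           negotiate the API version
+ *	VIRTCHNL_OP_RESET_VF          then poll VFGEN_RSTAT until complete
+ *	VIRTCHNL_OP_GET_VF_RESOURCES  learn VSIs, queues and capabilities
+ *	VIRTCHNL_OP_CONFIG_VSI_QUEUES and VIRTCHNL_OP_CONFIG_IRQ_MAP
+ *	VIRTCHNL_OP_ENABLE_QUEUES     then add MAC/VLAN filters as needed
+ */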
+
+/* START GENERIC DEFINES
+ * Need to ensure the following enums and defines hold the same meaning and
+ * value in current and future projects
+ */
+
+/* Error Codes */
+enum virtchnl_status_code {
+	VIRTCHNL_STATUS_SUCCESS				= 0,
+	VIRTCHNL_ERR_PARAM				= -5,
+	VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH		= -38,
+	VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR		= -39,
+	VIRTCHNL_STATUS_ERR_INVALID_VF_ID		= -40,
+	VIRTCHNL_STATUS_NOT_SUPPORTED			= -64,
+};
+
+#define VIRTCHNL_LINK_SPEED_100MB_SHIFT		0x1
+#define VIRTCHNL_LINK_SPEED_1000MB_SHIFT	0x2
+#define VIRTCHNL_LINK_SPEED_10GB_SHIFT		0x3
+#define VIRTCHNL_LINK_SPEED_40GB_SHIFT		0x4
+#define VIRTCHNL_LINK_SPEED_20GB_SHIFT		0x5
+#define VIRTCHNL_LINK_SPEED_25GB_SHIFT		0x6
+
+enum virtchnl_link_speed {
+	VIRTCHNL_LINK_SPEED_UNKNOWN	= 0,
+	VIRTCHNL_LINK_SPEED_100MB	= BIT(VIRTCHNL_LINK_SPEED_100MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_1GB		= BIT(VIRTCHNL_LINK_SPEED_1000MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_10GB	= BIT(VIRTCHNL_LINK_SPEED_10GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_40GB	= BIT(VIRTCHNL_LINK_SPEED_40GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_20GB	= BIT(VIRTCHNL_LINK_SPEED_20GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_25GB	= BIT(VIRTCHNL_LINK_SPEED_25GB_SHIFT),
+};
+
+/* for hsplit_0 field of Rx HMC context */
+/* deprecated with AVF 1.0 */
+enum virtchnl_rx_hsplit {
+	VIRTCHNL_RX_HSPLIT_NO_SPLIT      = 0,
+	VIRTCHNL_RX_HSPLIT_SPLIT_L2      = 1,
+	VIRTCHNL_RX_HSPLIT_SPLIT_IP      = 2,
+	VIRTCHNL_RX_HSPLIT_SPLIT_TCP_UDP = 4,
+	VIRTCHNL_RX_HSPLIT_SPLIT_SCTP    = 8,
+};
+
+#define VIRTCHNL_ETH_LENGTH_OF_ADDRESS	6
+/* END GENERIC DEFINES */
+
+/* Opcodes for VF-PF communication. These are placed in the v_opcode field
+ * of the virtchnl_msg structure.
+ */
+enum virtchnl_ops {
+/* The PF sends status change events to VFs using
+ * the VIRTCHNL_OP_EVENT opcode.
+ * VFs send requests to the PF using the other ops.
+ * Use of "advanced opcode" features must be negotiated as part of capabilities
+ * exchange and is not considered part of the base mode feature set.
+ */
+	VIRTCHNL_OP_UNKNOWN = 0,
+	VIRTCHNL_OP_VERSION = 1, /* must ALWAYS be 1 */
+	VIRTCHNL_OP_RESET_VF = 2,
+	VIRTCHNL_OP_GET_VF_RESOURCES = 3,
+	VIRTCHNL_OP_CONFIG_TX_QUEUE = 4,
+	VIRTCHNL_OP_CONFIG_RX_QUEUE = 5,
+	VIRTCHNL_OP_CONFIG_VSI_QUEUES = 6,
+	VIRTCHNL_OP_CONFIG_IRQ_MAP = 7,
+	VIRTCHNL_OP_ENABLE_QUEUES = 8,
+	VIRTCHNL_OP_DISABLE_QUEUES = 9,
+	VIRTCHNL_OP_ADD_ETH_ADDR = 10,
+	VIRTCHNL_OP_DEL_ETH_ADDR = 11,
+	VIRTCHNL_OP_ADD_VLAN = 12,
+	VIRTCHNL_OP_DEL_VLAN = 13,
+	VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE = 14,
+	VIRTCHNL_OP_GET_STATS = 15,
+	VIRTCHNL_OP_RSVD = 16,
+	VIRTCHNL_OP_EVENT = 17, /* must ALWAYS be 17 */
+#ifdef VIRTCHNL_SOL_VF_SUPPORT
+	VIRTCHNL_OP_GET_ADDNL_SOL_CONFIG = 19,
+#endif
+#ifdef VIRTCHNL_IWARP
+	VIRTCHNL_OP_IWARP = 20, /* advanced opcode */
+	VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP = 21, /* advanced opcode */
+	VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP = 22, /* advanced opcode */
+#endif
+	VIRTCHNL_OP_CONFIG_RSS_KEY = 23,
+	VIRTCHNL_OP_CONFIG_RSS_LUT = 24,
+	VIRTCHNL_OP_GET_RSS_HENA_CAPS = 25,
+	VIRTCHNL_OP_SET_RSS_HENA = 26,
+	VIRTCHNL_OP_ENABLE_VLAN_STRIPPING = 27,
+	VIRTCHNL_OP_DISABLE_VLAN_STRIPPING = 28,
+	VIRTCHNL_OP_REQUEST_QUEUES = 29,
+
+};
+
+/* This macro is used to generate a compilation error if a structure
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure is not of the correct size, otherwise it creates an enum that is
+ * never used.
+ */
+#define VIRTCHNL_CHECK_STRUCT_LEN(n, X) enum virtchnl_static_assert_enum_##X \
+	{virtchnl_static_assert_##X = (n) / ((sizeof(struct X) == (n)) ? 1 : 0)}
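+
+/* For example, VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_version_info) declares
+ * an enumerator whose value is
+ * 8 / ((sizeof(struct virtchnl_version_info) == 8) ? 1 : 0), i.e. a
+ * compile-time division by zero whenever the structure size drifts from the
+ * expected 8 bytes.
+ */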
+
+/* Virtual channel message descriptor. This overlays the admin queue
+ * descriptor. All other data is passed in external buffers.
+ */
+
+struct virtchnl_msg {
+	u8 pad[8];			 /* AQ flags/opcode/len/retval fields */
+	enum virtchnl_ops v_opcode; /* avoid confusion with desc->opcode */
+	enum virtchnl_status_code v_retval;  /* ditto for desc->retval */
+	u32 vfid;			 /* used by PF when sending to VF */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg);
+
+/* Message descriptions and data structures.*/
+
+/* VIRTCHNL_OP_VERSION
+ * VF posts its version number to the PF. PF responds with its version number
+ * in the same format, along with a return code.
+ * Reply from PF has its major/minor versions also in param0 and param1.
+ * If there is a major version mismatch, then the VF cannot operate.
+ * If there is a minor version mismatch, then the VF can operate but should
+ * add a warning to the system log.
+ *
+ * This enum element MUST always be specified as == 1, regardless of other
+ * changes in the API. The PF must always respond to this message without
+ * error regardless of version mismatch.
+ */
+#define VIRTCHNL_VERSION_MAJOR		1
+#define VIRTCHNL_VERSION_MINOR		1
+#define VIRTCHNL_VERSION_MINOR_NO_VF_CAPS	0
+
+struct virtchnl_version_info {
+	u32 major;
+	u32 minor;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_version_info);
+
+#define VF_IS_V10(_v) (((_v)->major == 1) && ((_v)->minor == 0))
+#define VF_IS_V11(_ver) (((_ver)->major == 1) && ((_ver)->minor == 1))
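+
+/* Example (sketch; pf_ver is an illustrative variable holding the PF's
+ * reply):
+ *
+ *	struct virtchnl_version_info ver = {
+ *		.major = VIRTCHNL_VERSION_MAJOR,
+ *		.minor = VIRTCHNL_VERSION_MINOR,
+ *	};
+ *
+ * The VF sends &ver with VIRTCHNL_OP_VERSION. On reply, a differing
+ * pf_ver.major means the VF cannot operate, while VF_IS_V10(&pf_ver) means
+ * it must fall back to 1.0 behaviour (no capability negotiation).
+ */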
+
+/* VIRTCHNL_OP_RESET_VF
+ * VF sends this request to PF with no parameters
+ * PF does NOT respond! VF driver must delay then poll VFGEN_RSTAT register
+ * until reset completion is indicated. The admin queue must be reinitialized
+ * after this operation.
+ *
+ * When reset is complete, PF must ensure that all queues in all VSIs associated
+ * with the VF are stopped, all queue configurations in the HMC are set to 0,
+ * and all MAC and VLAN filters (except the default MAC address) on all VSIs
+ * are cleared.
+ */
+
+/* VSI types that use VIRTCHNL interface for VF-PF communication. VSI_SRIOV
+ * vsi_type should always be 6 for backward compatibility. Add other fields
+ * as needed.
+ */
+enum virtchnl_vsi_type {
+	VIRTCHNL_VSI_TYPE_INVALID = 0,
+	VIRTCHNL_VSI_SRIOV = 6,
+};
+
+/* VIRTCHNL_OP_GET_VF_RESOURCES
+ * Version 1.0 VF sends this request to PF with no parameters
+ * Version 1.1 VF sends this request to PF with u32 bitmap of its capabilities
+ * PF responds with an indirect message containing
+ * virtchnl_vf_resource and one or more
+ * virtchnl_vsi_resource structures.
+ */
+
+struct virtchnl_vsi_resource {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	enum virtchnl_vsi_type vsi_type;
+	u16 qset_handle;
+	u8 default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
+
+/* VF offload flags
+ * VIRTCHNL_VF_OFFLOAD_L2 flag is inclusive of base mode L2 offloads including
+ * TX/RX Checksum offloading and TSO for non-tunnelled packets.
+ */
+#define VIRTCHNL_VF_OFFLOAD_L2			0x00000001
+#define VIRTCHNL_VF_OFFLOAD_IWARP		0x00000002
+#define VIRTCHNL_VF_OFFLOAD_RSVD		0x00000004
+#define VIRTCHNL_VF_OFFLOAD_RSS_AQ		0x00000008
+#define VIRTCHNL_VF_OFFLOAD_RSS_REG		0x00000010
+#define VIRTCHNL_VF_OFFLOAD_WB_ON_ITR		0x00000020
+#define VIRTCHNL_VF_OFFLOAD_REQ_QUEUES		0x00000040
+#define VIRTCHNL_VF_OFFLOAD_VLAN		0x00010000
+#define VIRTCHNL_VF_OFFLOAD_RX_POLLING		0x00020000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2	0x00040000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PF		0X00080000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP		0X00100000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM		0X00200000
+#define VIRTCHNL_VF_OFFLOAD_RX_ENCAP_CSUM	0X00400000
+
+#define VF_BASE_MODE_OFFLOADS (VIRTCHNL_VF_OFFLOAD_L2 | \
+			       VIRTCHNL_VF_OFFLOAD_VLAN | \
+			       VIRTCHNL_VF_OFFLOAD_RSS_PF)
+
+struct virtchnl_vf_resource {
+	u16 num_vsis;
+	u16 num_queue_pairs;
+	u16 max_vectors;
+	u16 max_mtu;
+
+	u32 vf_offload_flags;
+	u32 rss_key_size;
+	u32 rss_lut_size;
+
+	struct virtchnl_vsi_resource vsi_res[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(36, virtchnl_vf_resource);
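+
+/* Example (sketch): a 1.1 VF passes the capabilities it wants as the u32
+ * payload of VIRTCHNL_OP_GET_VF_RESOURCES, e.g. the base mode set plus
+ * write-back on ITR:
+ *
+ *	u32 caps = VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_WB_ON_ITR;
+ *
+ * The PF replies with a virtchnl_vf_resource whose vf_offload_flags field
+ * holds the subset of those capabilities it actually granted.
+ */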
+
+/* VIRTCHNL_OP_CONFIG_TX_QUEUE
+ * VF sends this message to set up parameters for one TX queue.
+ * External data buffer contains one instance of virtchnl_txq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Tx queue config info */
+struct virtchnl_txq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u16 ring_len;		/* number of descriptors, multiple of 8 */
+	u16 headwb_enabled; /* deprecated with AVF 1.0 */
+	u64 dma_ring_addr;
+	u64 dma_headwb_addr; /* deprecated with AVF 1.0 */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_txq_info);
+
+/* VIRTCHNL_OP_CONFIG_RX_QUEUE
+ * VF sends this message to set up parameters for one RX queue.
+ * External data buffer contains one instance of virtchnl_rxq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Rx queue config info */
+struct virtchnl_rxq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u32 ring_len;		/* number of descriptors, multiple of 32 */
+	u16 hdr_size;
+	u16 splithdr_enabled; /* deprecated with AVF 1.0 */
+	u32 databuffer_size;
+	u32 max_pkt_size;
+	u32 pad1;
+	u64 dma_ring_addr;
+	enum virtchnl_rx_hsplit rx_split_pos; /* deprecated with AVF 1.0 */
+	u32 pad2;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_rxq_info);
+
+/* VIRTCHNL_OP_CONFIG_VSI_QUEUES
+ * VF sends this message to set parameters for all active TX and RX queues
+ * associated with the specified VSI.
+ * PF configures queues and returns status.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the VSI, an error is returned and no queues are configured.
+ */
+struct virtchnl_queue_pair_info {
+	/* NOTE: vsi_id and queue_id should be identical for both queues. */
+	struct virtchnl_txq_info txq;
+	struct virtchnl_rxq_info rxq;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(64, virtchnl_queue_pair_info);
+
+struct virtchnl_vsi_queue_config_info {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	u32 pad;
+	struct virtchnl_queue_pair_info qpair[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_vsi_queue_config_info);
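+
+/* Example (sketch): qpair[] is a variable-length tail, so the VF sizes the
+ * message buffer for the real queue pair count, matching what the PF-side
+ * validator at the end of this file expects:
+ *
+ *	len = sizeof(struct virtchnl_vsi_queue_config_info) +
+ *	      num_queue_pairs * sizeof(struct virtchnl_queue_pair_info);
+ */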
+
+/* VIRTCHNL_OP_REQUEST_QUEUES
+ * VF sends this message to request the PF to allocate additional queues to
+ * this VF.  Each VF gets a guaranteed number of queues on init but asking for
+ * additional queues must be negotiated.  This is a best effort request as it
+ * is possible the PF does not have enough queues left to support the request.
+ * If the PF cannot support the number requested it will respond with the
+ * maximum number it is able to support; otherwise it will respond with the
+ * number requested.
+ */
+
+/* VF resource request */
+struct virtchnl_vf_res_request {
+	u16 num_queue_pairs;
+};
+
+/* VIRTCHNL_OP_CONFIG_IRQ_MAP
+ * VF uses this message to map vectors to queues.
+ * The rxq_map and txq_map fields are bitmaps used to indicate which queues
+ * are to be associated with the specified vector.
+ * The "other" causes are always mapped to vector 0.
+ * PF configures interrupt mapping and returns status.
+ */
+struct virtchnl_vector_map {
+	u16 vsi_id;
+	u16 vector_id;
+	u16 rxq_map;
+	u16 txq_map;
+	u16 rxitr_idx;
+	u16 txitr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_vector_map);
+
+struct virtchnl_irq_map_info {
+	u16 num_vectors;
+	struct virtchnl_vector_map vecmap[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(14, virtchnl_irq_map_info);
+
+/* VIRTCHNL_OP_ENABLE_QUEUES
+ * VIRTCHNL_OP_DISABLE_QUEUES
+ * VF sends these messages to enable or disable TX/RX queue pairs.
+ * The queues fields are bitmaps indicating which queues to act upon.
+ * (Currently, we only support 16 queues per VF, but we make the field
+ * u32 to allow for expansion.)
+ * PF performs requested action and returns status.
+ */
+struct virtchnl_queue_select {
+	u16 vsi_id;
+	u16 pad;
+	u32 rx_queues;
+	u32 tx_queues;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_select);
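+
+/* Example (sketch; vsi_id and num_queue_pairs are illustrative, BIT() is
+ * assumed from the including environment as used for the link speeds
+ * above): enabling the first num_queue_pairs RX and TX queues of a VSI,
+ * for fewer than 32 queue pairs:
+ *
+ *	struct virtchnl_queue_select qs = { .vsi_id = vsi_id };
+ *	qs.rx_queues = BIT(num_queue_pairs) - 1;
+ *	qs.tx_queues = qs.rx_queues;
+ */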
+
+/* VIRTCHNL_OP_ADD_ETH_ADDR
+ * VF sends this message in order to add one or more unicast or multicast
+ * address filters for the specified VSI.
+ * PF adds the filters and returns status.
+ */
+
+/* VIRTCHNL_OP_DEL_ETH_ADDR
+ * VF sends this message in order to remove one or more unicast or multicast
+ * filters for the specified VSI.
+ * PF removes the filters and returns status.
+ */
+
+struct virtchnl_ether_addr {
+	u8 addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+	u8 pad[2];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_ether_addr);
+
+struct virtchnl_ether_addr_list {
+	u16 vsi_id;
+	u16 num_elements;
+	struct virtchnl_ether_addr list[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_ether_addr_list);
+
+#ifdef VIRTCHNL_SOL_VF_SUPPORT
+/* VIRTCHNL_OP_GET_ADDNL_SOL_CONFIG
+ * VF sends this message to get the default MTU and list of additional ethernet
+ * addresses it is allowed to use.
+ * PF responds with an indirect message containing
+ * virtchnl_addnl_solaris_config with zero or more
+ * virtchnl_ether_addr structures.
+ *
+ * It is expected that this operation will only ever be needed for Solaris VFs
+ * running under a Solaris PF.
+ */
+struct virtchnl_addnl_solaris_config {
+	u16 default_mtu;
+	struct virtchnl_ether_addr_list al;
+};
+
+#endif
+/* VIRTCHNL_OP_ADD_VLAN
+ * VF sends this message to add one or more VLAN tag filters for receives.
+ * PF adds the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+/* VIRTCHNL_OP_DEL_VLAN
+ * VF sends this message to remove one or more VLAN tag filters for receives.
+ * PF removes the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+struct virtchnl_vlan_filter_list {
+	u16 vsi_id;
+	u16 num_elements;
+	u16 vlan_id[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vlan_filter_list);
+
+/* VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE
+ * VF sends VSI id and flags.
+ * PF returns status code in retval.
+ * Note: we assume that broadcast accept mode is always enabled.
+ */
+struct virtchnl_promisc_info {
+	u16 vsi_id;
+	u16 flags;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_promisc_info);
+
+#define FLAG_VF_UNICAST_PROMISC	0x00000001
+#define FLAG_VF_MULTICAST_PROMISC	0x00000002
+
+/* VIRTCHNL_OP_GET_STATS
+ * VF sends this message to request stats for the selected VSI. VF uses
+ * the virtchnl_queue_select struct to specify the VSI. The queue_id
+ * field is ignored by the PF.
+ *
+ * PF replies with struct eth_stats in an external buffer.
+ */
+
+/* VIRTCHNL_OP_CONFIG_RSS_KEY
+ * VIRTCHNL_OP_CONFIG_RSS_LUT
+ * VF sends these messages to configure RSS. Only supported if both PF
+ * and VF drivers set the VIRTCHNL_VF_OFFLOAD_RSS_PF bit during
+ * configuration negotiation. If this is the case, then the RSS fields in
+ * the VF resource struct are valid.
+ * Both the key and LUT are initialized to 0 by the PF, meaning that
+ * RSS is effectively disabled until set up by the VF.
+ */
+struct virtchnl_rss_key {
+	u16 vsi_id;
+	u16 key_len;
+	u8 key[1];         /* RSS hash key, packed bytes */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key);
+
+struct virtchnl_rss_lut {
+	u16 vsi_id;
+	u16 lut_entries;
+	u8 lut[1];        /* RSS lookup table*/
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
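+
+/* Example (sketch): key[] and lut[] are variable-length, so the buffers are
+ * sized as in the validator below, e.g. for the key:
+ *
+ *	len = sizeof(struct virtchnl_rss_key) + key_len - 1;
+ *
+ * where key_len comes from the rss_key_size field returned by
+ * VIRTCHNL_OP_GET_VF_RESOURCES.
+ */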
+
+/* VIRTCHNL_OP_GET_RSS_HENA_CAPS
+ * VIRTCHNL_OP_SET_RSS_HENA
+ * VF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the PF sets these to all possible traffic types that the
+ * hardware supports. The VF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ */
+struct virtchnl_rss_hena {
+	u64 hena;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_rss_hena);
+
+/* VIRTCHNL_OP_EVENT
+ * PF sends this message to inform the VF driver of events that may affect it.
+ * No direct response is expected from the VF, though it may generate other
+ * messages in response to this one.
+ */
+enum virtchnl_event_codes {
+	VIRTCHNL_EVENT_UNKNOWN = 0,
+	VIRTCHNL_EVENT_LINK_CHANGE,
+	VIRTCHNL_EVENT_RESET_IMPENDING,
+	VIRTCHNL_EVENT_PF_DRIVER_CLOSE,
+};
+
+#define PF_EVENT_SEVERITY_INFO		0
+#define PF_EVENT_SEVERITY_ATTENTION	1
+#define PF_EVENT_SEVERITY_ACTION_REQUIRED	2
+#define PF_EVENT_SEVERITY_CERTAIN_DOOM	255
+
+struct virtchnl_pf_event {
+	enum virtchnl_event_codes event;
+	union {
+		struct {
+			enum virtchnl_link_speed link_speed;
+			bool link_status;
+		} link_event;
+	} event_data;
+
+	int severity;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_pf_event);
+
+#ifdef VIRTCHNL_IWARP
+
+/* VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP
+ * VF uses this message to request PF to map IWARP vectors to IWARP queues.
+ * The request for this originates from the VF IWARP driver through
+ * a client interface between VF LAN and VF IWARP driver.
+ * A vector could have an AEQ and CEQ attached to it although
+ * there is a single AEQ per VF IWARP instance in which case
+ * most vectors will have an INVALID_IDX for aeq and valid idx for ceq.
+ * There will never be a case where there will be multiple CEQs attached
+ * to a single vector.
+ * PF configures interrupt mapping and returns status.
+ */
+
+/* HW does not define a type value for AEQ; only for RX/TX and CEQ.
+ * In order for us to keep the interface simple, SW will define a
+ * unique type value for AEQ.
+ */
+#define QUEUE_TYPE_PE_AEQ  0x80
+#define QUEUE_INVALID_IDX  0xFFFF
+
+struct virtchnl_iwarp_qv_info {
+	u32 v_idx; /* msix_vector */
+	u16 ceq_idx;
+	u16 aeq_idx;
+	u8 itr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_iwarp_qv_info);
+
+struct virtchnl_iwarp_qvlist_info {
+	u32 num_vectors;
+	struct virtchnl_iwarp_qv_info qv_info[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_iwarp_qvlist_info);
+
+#endif
+
+/* VF reset states - these are written into the RSTAT register:
+ * VFGEN_RSTAT on the VF
+ * When the PF initiates a reset, it writes 0
+ * When the reset is complete, it writes 1
+ * When the PF detects that the VF has recovered, it writes 2
+ * VF checks this register periodically to determine if a reset has occurred,
+ * then polls it to know when the reset is complete.
+ * If either the PF or VF reads the register while the hardware
+ * is in a reset state, it will return DEADBEEF, which, when masked
+ * will result in 3.
+ */
+enum virtchnl_vfr_states {
+	VIRTCHNL_VFR_INPROGRESS = 0,
+	VIRTCHNL_VFR_COMPLETED,
+	VIRTCHNL_VFR_VFACTIVE,
+};
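+
+/* Example (sketch; the register read and mask macro names are assumptions
+ * living in the device register header rather than here):
+ *
+ *	rstat = AVF_READ_REG(hw, AVFGEN_RSTAT) &
+ *		AVFGEN_RSTAT_VFR_STATE_MASK;
+ *	if (rstat == VIRTCHNL_VFR_COMPLETED ||
+ *	    rstat == VIRTCHNL_VFR_VFACTIVE)
+ *		reset_done = true;
+ */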
+
+/**
+ * virtchnl_vc_validate_vf_msg
+ * @ver: Virtchnl version info
+ * @v_opcode: Opcode for the message
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * validate msg format against struct for each opcode
+ */
+static inline int
+virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
+			    u8 *msg, u16 msglen)
+{
+	bool err_msg_format = false;
+	int valid_len = 0;
+
+	/* Validate message length. */
+	switch (v_opcode) {
+	case VIRTCHNL_OP_VERSION:
+		valid_len = sizeof(struct virtchnl_version_info);
+		break;
+	case VIRTCHNL_OP_RESET_VF:
+		break;
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		if (VF_IS_V11(ver))
+			valid_len = sizeof(u32);
+		break;
+	case VIRTCHNL_OP_CONFIG_TX_QUEUE:
+		valid_len = sizeof(struct virtchnl_txq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_RX_QUEUE:
+		valid_len = sizeof(struct virtchnl_rxq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+		valid_len = sizeof(struct virtchnl_vsi_queue_config_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_vsi_queue_config_info *vqc =
+			    (struct virtchnl_vsi_queue_config_info *)msg;
+			valid_len += (vqc->num_queue_pairs *
+				      sizeof(struct
+					     virtchnl_queue_pair_info));
+			if (vqc->num_queue_pairs == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+		valid_len = sizeof(struct virtchnl_irq_map_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_irq_map_info *vimi =
+			    (struct virtchnl_irq_map_info *)msg;
+			valid_len += (vimi->num_vectors *
+				      sizeof(struct virtchnl_vector_map));
+			if (vimi->num_vectors == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ENABLE_QUEUES:
+	case VIRTCHNL_OP_DISABLE_QUEUES:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+	case VIRTCHNL_OP_ADD_ETH_ADDR:
+	case VIRTCHNL_OP_DEL_ETH_ADDR:
+		valid_len = sizeof(struct virtchnl_ether_addr_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_ether_addr_list *veal =
+			    (struct virtchnl_ether_addr_list *)msg;
+			valid_len += veal->num_elements *
+			    sizeof(struct virtchnl_ether_addr);
+			if (veal->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ADD_VLAN:
+	case VIRTCHNL_OP_DEL_VLAN:
+		valid_len = sizeof(struct virtchnl_vlan_filter_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_vlan_filter_list *vfl =
+			    (struct virtchnl_vlan_filter_list *)msg;
+			valid_len += vfl->num_elements * sizeof(u16);
+			if (vfl->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+		valid_len = sizeof(struct virtchnl_promisc_info);
+		break;
+	case VIRTCHNL_OP_GET_STATS:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+#ifdef VIRTCHNL_IWARP
+	case VIRTCHNL_OP_IWARP:
+		/* These messages are opaque to us and will be validated in
+		 * the RDMA client code. We just need to check for nonzero
+		 * length. The firmware will enforce max length restrictions.
+		 */
+		if (msglen)
+			valid_len = msglen;
+		else
+			err_msg_format = true;
+		break;
+	case VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP:
+		break;
+	case VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP:
+		valid_len = sizeof(struct virtchnl_iwarp_qvlist_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_iwarp_qvlist_info *qv =
+				(struct virtchnl_iwarp_qvlist_info *)msg;
+			if (qv->num_vectors == 0) {
+				err_msg_format = true;
+				break;
+			}
+			valid_len += ((qv->num_vectors - 1) *
+				sizeof(struct virtchnl_iwarp_qv_info));
+		}
+		break;
+#endif
+	case VIRTCHNL_OP_CONFIG_RSS_KEY:
+		valid_len = sizeof(struct virtchnl_rss_key);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_key *vrk =
+				(struct virtchnl_rss_key *)msg;
+			valid_len += vrk->key_len - 1;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_LUT:
+		valid_len = sizeof(struct virtchnl_rss_lut);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_lut *vrl =
+				(struct virtchnl_rss_lut *)msg;
+			valid_len += vrl->lut_entries - 1;
+		}
+		break;
+	case VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+		break;
+	case VIRTCHNL_OP_SET_RSS_HENA:
+		valid_len = sizeof(struct virtchnl_rss_hena);
+		break;
+	case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+	case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+		break;
+	case VIRTCHNL_OP_REQUEST_QUEUES:
+		valid_len = sizeof(struct virtchnl_vf_res_request);
+		break;
+	/* These are always errors coming from the VF. */
+	case VIRTCHNL_OP_EVENT:
+	case VIRTCHNL_OP_UNKNOWN:
+	default:
+		return VIRTCHNL_ERR_PARAM;
+	}
+	/* few more checks */
+	if ((valid_len != msglen) || (err_msg_format))
+		return VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH;
+
+	return 0;
+}
+#endif /* _VIRTCHNL_H_ */
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread
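
The length rule enforced by virtchnl_vc_validate_vf_msg() above is easiest to see with a concrete message. A minimal sketch in C, assuming the standard virtchnl layout of struct virtchnl_vlan_filter_list (vsi_id, num_elements, vlan_id[]); it is illustrative only and not part of the patch:

static int validate_add_vlan_example(void)
{
	struct virtchnl_version_info ver = {
		.major = VIRTCHNL_VERSION_MAJOR,
		.minor = VIRTCHNL_VERSION_MINOR,
	};
	/* Header plus two u16 VLAN IDs: exactly the valid_len the
	 * validator recomputes for VIRTCHNL_OP_ADD_VLAN.
	 */
	uint8_t buf[sizeof(struct virtchnl_vlan_filter_list) +
		    2 * sizeof(uint16_t)] = {0};
	struct virtchnl_vlan_filter_list *vfl =
		(struct virtchnl_vlan_filter_list *)buf;

	vfl->num_elements = 2;	/* drives the recomputed valid_len */
	vfl->vlan_id[0] = 100;
	vfl->vlan_id[1] = 200;

	/* Returns 0 on success; a shorter or longer buffer yields
	 * VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH.
	 */
	return virtchnl_vc_validate_vf_msg(&ver, VIRTCHNL_OP_ADD_VLAN,
					   buf, sizeof(buf));
}

The same header-plus-elements pattern applies to the MAC address, VSI queue config and IRQ map opcodes handled above.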

* [dpdk-dev] [RFC 2/9] net/avf: initilization of avf PMD
  2017-10-20  8:26 [dpdk-dev] [RFC 0/9] add new avf PMD Jingjing Wu
  2017-10-20  8:26 ` [dpdk-dev] [RFC 1/9] net/avf/base: add base code for " Jingjing Wu
@ 2017-10-20  8:26 ` Jingjing Wu
  2017-11-22  0:02   ` Ferruh Yigit
  2017-10-20  8:26 ` [dpdk-dev] [RFC 3/9] net/avf: enable queue and device Jingjing Wu
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 151+ messages in thread
From: Jingjing Wu @ 2017-10-20  8:26 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 config/common_base                      |   5 +
 drivers/net/Makefile                    |   2 +
 drivers/net/avf/Makefile                |  92 +++++++
 drivers/net/avf/avf.h                   | 221 +++++++++++++++
 drivers/net/avf/avf_ethdev.c            | 475 ++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c             | 335 ++++++++++++++++++++++
 drivers/net/avf/rte_pmd_avf_version.map |   4 +
 mk/rte.app.mk                           |   1 +
 8 files changed, 1135 insertions(+)
 create mode 100644 drivers/net/avf/Makefile
 create mode 100644 drivers/net/avf/avf.h
 create mode 100644 drivers/net/avf/avf_ethdev.c
 create mode 100644 drivers/net/avf/avf_vchnl.c
 create mode 100644 drivers/net/avf/rte_pmd_avf_version.map

diff --git a/config/common_base b/config/common_base
index d9471e8..e5f96ee 100644
--- a/config/common_base
+++ b/config/common_base
@@ -211,6 +211,11 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
 CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 
 #
+# Compile burst-oriented AVF PMD driver
+#
+CONFIG_RTE_LIBRTE_AVF_PMD=y
+
+#
 # Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD
 #
 CONFIG_RTE_LIBRTE_MLX4_PMD=n
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 5d2ad2f..e108424 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -69,6 +69,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e
 DEPDIRS-i40e = $(core-libs) librte_hash
 DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe
 DEPDIRS-ixgbe = $(core-libs) librte_hash
+DIRS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf
+DEPDIRS-avf = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_LIO_PMD) += liquidio
 DEPDIRS-liquidio = $(core-libs)
 DIRS-$(CONFIG_RTE_LIBRTE_MLX4_PMD) += mlx4
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
new file mode 100644
index 0000000..27e2eec
--- /dev/null
+++ b/drivers/net/avf/Makefile
@@ -0,0 +1,92 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2017 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_avf.a
+
+CFLAGS += -O3
+CFLAGS += -DX722_A0_SUPPORT
+
+EXPORT_MAP := rte_pmd_avf_version.map
+
+LIBABIVER := 1
+
+#
+# Add extra flags for base driver files (also known as shared code)
+# to disable warnings
+#
+ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
+CFLAGS_BASE_DRIVER = -wd593 -wd188
+else ifeq ($(CONFIG_RTE_TOOLCHAIN_CLANG),y)
+CFLAGS_BASE_DRIVER += -Wno-sign-compare
+CFLAGS_BASE_DRIVER += -Wno-unused-value
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-strict-aliasing
+CFLAGS_BASE_DRIVER += -Wno-format
+CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
+CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast
+CFLAGS_BASE_DRIVER += -Wno-format-nonliteral
+CFLAGS_BASE_DRIVER += -Wno-unused-variable
+else
+CFLAGS_BASE_DRIVER  = -Wno-sign-compare
+CFLAGS_BASE_DRIVER += -Wno-unused-value
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-strict-aliasing
+CFLAGS_BASE_DRIVER += -Wno-format
+CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
+CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast
+CFLAGS_BASE_DRIVER += -Wno-format-nonliteral
+CFLAGS_BASE_DRIVER += -Wno-format-security
+CFLAGS_BASE_DRIVER += -Wno-unused-variable
+
+ifeq ($(shell test $(GCC_VERSION) -ge 44 && echo 1), 1)
+CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
+endif
+
+endif
+OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_adminq.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
new file mode 100644
index 0000000..10bffe4
--- /dev/null
+++ b/drivers/net/avf/avf.h
@@ -0,0 +1,221 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AVF_ETHDEV_H_
+#define _AVF_ETHDEV_H_
+
+#include <rte_kvargs.h>
+
+#define AVF_AQ_LEN               32
+#define AVF_AQ_BUF_SZ            4096
+#define AVF_RESET_WAIT_CNT       50
+#define AVF_BUF_SIZE_MIN         1024
+#define AVF_FRAME_SIZE_MAX       9728
+#define AVF_QUEUE_BASE_ADDR_UNIT 128
+
+#define AVF_MAX_NUM_QUEUES       16
+/* Vlan table size */
+#define AVF_VLAN_TB_SIZE               (4096 / (CHAR_BIT * sizeof(uint32_t)))
+
+#define AVF_NUM_MACADDR_MAX      64
+
+#define AVF_DEFAULT_RX_PTHRESH      8
+#define AVF_DEFAULT_RX_HTHRESH      8
+#define AVF_DEFAULT_RX_WTHRESH      0
+
+#define AVF_DEFAULT_RX_FREE_THRESH  32
+
+
+#define AVF_DEFAULT_TX_PTHRESH      32
+#define AVF_DEFAULT_TX_HTHRESH      0
+#define AVF_DEFAULT_TX_WTHRESH      0
+
+#define AVF_DEFAULT_TX_FREE_THRESH  32
+#define AVF_DEFAULT_TX_RS_THRESH 32
+
+#define AVF_BASIC_OFFLOAD_CAPS  ( \
+	VIRTCHNL_VF_OFFLOAD_L2 |        \
+	VIRTCHNL_VF_OFFLOAD_RSS_PF |    \
+	VIRTCHNL_VF_OFFLOAD_VLAN |      \
+	VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | \
+	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
+
+#define AVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
+#define AVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
+
+/* Default queue interrupt throttling time in microseconds */
+#define AVF_ITR_INDEX_DEFAULT          0
+#define AVF_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */
+#define AVF_QUEUE_ITR_INTERVAL_MAX     8160 /* 8160 us */
+
+/* The overhead from MTU to max frame size.
+ * Considering QinQ packet, the VLAN tag needs to be counted twice.
+ */
+#define AVF_VLAN_TAG_SIZE               4
+#define AVF_ETH_OVERHEAD \
+	(ETHER_HDR_LEN + ETHER_CRC_LEN + AVF_VLAN_TAG_SIZE * 2)
+
+struct avf_adapter;
+struct avf_rx_queue;
+struct avf_tx_queue;
+
+/* Structure that defines a VSI, associated with an adapter. */
+struct avf_vsi {
+	struct avf_adapter *adapter; /* Backreference to associated adapter */
+	uint16_t vsi_id;
+	uint16_t nb_qps;         /* Number of queue pairs VSI can occupy */
+	uint16_t nb_used_qps;    /* Number of queue pairs VSI uses */
+	uint16_t max_macaddrs;   /* Maximum number of MAC addresses */
+	uint16_t base_vector;
+	uint16_t msix_intr;      /* The MSIX interrupt binds to VSI */
+};
+
+/* TODO: is it correct to assume the max number to be 16? */
+#define AVF_MAX_MSIX_VECTORS   16
+
+/* Structure to store private data specific to a VF instance. */
+struct avf_info {
+	uint16_t num_queue_pairs;
+	uint16_t max_pkt_len; /* Maximum packet length */
+	uint16_t mac_num;     /* Number of MAC addresses */
+	uint32_t vlan[AVF_VLAN_TB_SIZE]; /* VLAN bit map */
+	bool promisc_unicast_enabled;
+	bool promisc_multicast_enabled;
+
+	struct virtchnl_version_info virtchnl_version;
+	struct virtchnl_vf_resource *vf_res; /* VF resource */
+	struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
+	volatile enum virtchnl_ops pend_cmd; /* pending command not finished yet */
+	uint32_t cmd_retval; /* return value of the cmd response from PF */
+	uint8_t *aq_resp; /* buffer to store the adminq response from PF */
+
+	/* Event from pf */
+	bool dev_closed;
+	bool link_up;
+	enum virtchnl_link_speed link_speed;
+
+	struct avf_vsi vsi;
+	bool vf_reset;
+	uint64_t flags;
+
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	uint16_t nb_msix;   /* number of MSI-X interrupts on Rx */
+	uint16_t rxq_map[AVF_MAX_MSIX_VECTORS];  /* queue bit mask for each vector */
+};
+
+#define AVF_MAX_PKT_TYPE 256
+
+/* Structure to store private data for each VF instance. */
+struct avf_adapter {
+	struct avf_hw hw;
+	struct rte_eth_dev *eth_dev;
+	struct avf_info vf;
+
+	/* For vector PMD */
+	bool rx_vec_allowed;
+	bool tx_simple_allowed;
+	bool tx_vec_allowed;
+};
+
+/* AVF_DEV_PRIVATE_TO */
+#define AVF_DEV_PRIVATE_TO_ADAPTER(adapter) \
+	((struct avf_adapter *)adapter)
+#define AVF_DEV_PRIVATE_TO_VF(adapter) \
+	(&((struct avf_adapter *)adapter)->vf)
+#define AVF_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct avf_adapter *)adapter)->hw)
+
+/* AVF_VSI_TO */
+#define AVF_VSI_TO_HW(vsi) \
+	(&(((struct avf_vsi *)vsi)->adapter->hw))
+#define AVF_VSI_TO_VF(vsi) \
+	(&(((struct avf_vsi *)vsi)->adapter->vf))
+#define AVF_VSI_TO_ETH_DEV(vsi) \
+	(((struct avf_vsi *)vsi)->adapter->eth_dev)
+
+static inline void
+avf_init_adminq_parameter(struct avf_hw *hw)
+{
+	hw->aq.num_arq_entries = AVF_AQ_LEN;
+	hw->aq.num_asq_entries = AVF_AQ_LEN;
+	hw->aq.arq_buf_size = AVF_AQ_BUF_SZ;
+	hw->aq.asq_buf_size = AVF_AQ_BUF_SZ;
+}
+
+static inline uint16_t
+avf_calc_itr_interval(int16_t interval)
+{
+	if (interval < 0 || interval > AVF_QUEUE_ITR_INTERVAL_MAX)
+		interval = AVF_QUEUE_ITR_INTERVAL_DEFAULT;
+
+	/* Convert to hardware count; each unit written represents 2 us */
+	return interval / 2;
+}
+
+/* structure used for sending and checking response of virtchnl ops */
+struct avf_cmd_info {
+	enum virtchnl_ops ops;
+	uint8_t *in_args;       /* buffer for sending */
+	uint32_t in_args_size;  /* buffer size for sending */
+	uint8_t *out_buffer;    /* buffer for response */
+	uint32_t out_size;      /* buffer size for response */
+};
+
+/* Clear the current command. Only call this after
+ * _atomic_set_cmd() has succeeded.
+ */
+static inline void
+_clear_cmd(struct avf_info *vf)
+{
+	rte_wmb();
+	vf->pend_cmd = VIRTCHNL_OP_UNKNOWN;
+	vf->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
+}
+
+/* Check whether a command is pending; if none, set the new command. */
+static inline int
+_atomic_set_cmd(struct avf_info *vf, enum virtchnl_ops ops)
+{
+	int ret = rte_atomic32_cmpset(&vf->pend_cmd, VIRTCHNL_OP_UNKNOWN, ops);
+
+	if (!ret)
+		PMD_DRV_LOG(ERR, "There is incomplete cmd %d", vf->pend_cmd);
+
+	return !ret;
+}
+
+int avf_check_api_version(struct avf_adapter *adapter);
+int avf_get_vf_resource(struct avf_adapter *adapter);
+void avf_handle_virtchnl_msg(struct rte_eth_dev *dev);
+#endif /* _AVF_ETHDEV_H_ */
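
The two inline helpers above are easier to read with concrete numbers. A minimal sketch with illustrative values only, not part of the patch:

static void avf_helpers_example(void)
{
	/* A 32 us throttling request is written as 16: each register unit
	 * represents 2 us, and out-of-range requests fall back to
	 * AVF_QUEUE_ITR_INTERVAL_DEFAULT before the conversion.
	 */
	uint16_t itr_reg = avf_calc_itr_interval(32);

	/* A 1500-byte MTU gives 1500 + 14 (Ethernet header) + 4 (CRC) +
	 * 2 * 4 (outer and inner VLAN tag) = 1526 bytes of frame length.
	 */
	uint16_t max_pkt_len = 1500 + AVF_ETH_OVERHEAD;

	(void)itr_reg;
	(void)max_pkt_len;
}
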
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
new file mode 100644
index 0000000..08853c9
--- /dev/null
+++ b/drivers/net/avf/avf_ethdev.c
@@ -0,0 +1,475 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/queue.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+
+#include <rte_interrupts.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_ethdev_pci.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_adminq_cmd.h"
+#include "base/avf_type.h"
+
+#include "avf.h"
+
+int avf_logtype_init;
+int avf_logtype_driver;
+static const struct rte_pci_id pci_id_avf_map[] = {
+	{ RTE_PCI_DEVICE(AVF_INTEL_VENDOR_ID, AVF_DEV_ID_ADAPTIVE_VF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static const struct eth_dev_ops avf_eth_dev_ops = {
+};
+
+static int
+avf_check_vf_reset_done(struct avf_hw *hw)
+{
+	int i, reset;
+
+	for (i = 0; i < AVF_RESET_WAIT_CNT; i++) {
+		reset = AVF_READ_REG(hw, AVFGEN_RSTAT) &
+			AVFGEN_RSTAT_VFR_STATE_MASK;
+		reset = reset >> AVFGEN_RSTAT_VFR_STATE_SHIFT;
+		if (reset == VIRTCHNL_VFR_VFACTIVE ||
+		    reset == VIRTCHNL_VFR_COMPLETED)
+			break;
+		rte_delay_ms(20);
+	}
+
+	if (i >= AVF_RESET_WAIT_CNT)
+		return -1;
+
+	return 0;
+}
+
+static int
+avf_init_vf(struct rte_eth_dev *dev)
+{
+	int i, err, bufsz;
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	uint16_t interval =
+		avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
+
+	err = avf_set_mac_type(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
+		goto err;
+	}
+
+	err = avf_check_vf_reset_done(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "VF is still resetting");
+		goto err;
+	}
+
+	avf_init_adminq_parameter(hw);
+	err = avf_init_adminq(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "init_adminq failed: %d", err);
+		goto err;
+	}
+
+	vf->aq_resp = rte_zmalloc("vf_aq_resp", AVF_AQ_BUF_SZ, 0);
+	if (!vf->aq_resp) {
+		PMD_INIT_LOG(ERR, "unable to allocate vf_aq_resp memory");
+		goto err_aq;
+	}
+	if (avf_check_api_version(adapter) != 0) {
+		PMD_INIT_LOG(ERR, "check_api_version failed");
+		goto err_api;
+	}
+
+	bufsz = sizeof(struct virtchnl_vf_resource) +
+		(AVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource));
+	vf->vf_res = rte_zmalloc("vf_res", bufsz, 0);
+	if (!vf->vf_res) {
+		PMD_INIT_LOG(ERR, "unable to allocate vf_res memory");
+		goto err_api;
+	}
+	if (avf_get_vf_resource(adapter) != 0) {
+		PMD_INIT_LOG(ERR, "avf_get_vf_resource failed");
+		goto err_alloc;
+	}
+	/* Allocate memory for RSS info */
+	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		vf->rss_key = rte_zmalloc("rss_key",
+					  vf->vf_res->rss_key_size, 0);
+		if (!vf->rss_key) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_key memory");
+			goto err_rss;
+		}
+		vf->rss_lut = rte_zmalloc("rss_lut",
+					  vf->vf_res->rss_lut_size, 0);
+		if (!vf->rss_lut) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_lut memory");
+			goto err_rss;
+		}
+	}
+	return 0;
+err_rss:
+	rte_free(vf->rss_key);
+	rte_free(vf->rss_lut);
+err_alloc:
+	rte_free(vf->vf_res);
+	vf->vsi_res = NULL;
+err_api:
+	rte_free(vf->aq_resp);
+err_aq:
+	avf_shutdown_adminq(hw);
+err:
+	return -1;
+}
+
+/* Enable default admin queue interrupt setting */
+static inline void
+avf_enable_irq0(struct avf_hw *hw)
+{
+	/* Enable admin queue interrupt trigger */
+	AVF_WRITE_REG(hw, AVFINT_ICR0_ENA1, AVFINT_ICR0_ENA1_ADMINQ_MASK);
+
+	AVF_WRITE_REG(hw, AVFINT_DYN_CTL01, AVFINT_DYN_CTL01_INTENA_MASK |
+					    AVFINT_DYN_CTL01_CLEARPBA_MASK |
+					    AVFINT_DYN_CTL01_ITR_INDX_MASK);
+
+	AVF_WRITE_FLUSH(hw);
+}
+
+static inline void
+avf_disable_irq0(struct avf_hw *hw)
+{
+	/* Disable all interrupt types */
+	AVF_WRITE_REG(hw, AVFINT_ICR0_ENA1, 0);
+	AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+		       AVFINT_DYN_CTL01_ITR_INDX_MASK);
+	AVF_WRITE_FLUSH(hw);
+}
+
+static void
+avf_dev_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t icr0;
+
+	avf_disable_irq0(hw);
+
+	/* read out interrupt causes */
+	icr0 = AVF_READ_REG(hw, AVFINT_ICR01);
+
+	/* No interrupt event indicated */
+	if (!(icr0 & AVFINT_ICR01_INTEVENT_MASK)) {
+		PMD_DRV_LOG(DEBUG, "No interrupt event, nothing to do");
+		goto done;
+	}
+
+	if (icr0 & AVFINT_ICR01_ADMINQ_MASK) {
+		PMD_DRV_LOG(DEBUG, "ICR01_ADMINQ is reported");
+		avf_handle_virtchnl_msg(dev);
+	}
+
+done:
+	avf_enable_irq0(hw);
+	rte_intr_enable(dev->intr_handle);
+}
+
+
+static int
+avf_dev_init(struct rte_eth_dev *eth_dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* assign ops func pointer */
+	eth_dev->dev_ops = &avf_eth_dev_ops;
+
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE |
+				    RTE_ETH_DEV_INTR_LSC;
+
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	hw->bus.bus_id = pci_dev->addr.bus;
+	hw->bus.device = pci_dev->addr.devid;
+	hw->bus.func = pci_dev->addr.function;
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->back = AVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+	adapter->eth_dev = eth_dev;
+
+	if (avf_init_vf(eth_dev) != 0) {
+		PMD_INIT_LOG(ERR, "Init vf failed");
+		return -1;
+	}
+
+	/* copy mac addr */
+	eth_dev->data->mac_addrs = rte_zmalloc(
+					"avf_mac",
+					ETHER_ADDR_LEN * AVF_NUM_MACADDR_MAX,
+					0);
+	if (!eth_dev->data->mac_addrs) {
+		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to"
+			     " store MAC addresses",
+			     ETHER_ADDR_LEN * AVF_NUM_MACADDR_MAX);
+		return -ENOMEM;
+	}
+	/* If the MAC address is not configured by host, generate a random one */
+	if (!is_valid_assigned_ether_addr((struct ether_addr *)hw->mac.addr))
+		eth_random_addr(hw->mac.addr);
+	ether_addr_copy((struct ether_addr *)hw->mac.addr,
+			&eth_dev->data->mac_addrs[0]);
+
+	/* register callback func to eal lib */
+	rte_intr_callback_register(&pci_dev->intr_handle,
+				   avf_dev_interrupt_handler,
+				   (void *)eth_dev);
+
+	/* enable uio intr after callback register */
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	/* configure and enable device interrupt */
+	avf_enable_irq0(hw);
+
+	return 0;
+}
+
+static void
+avf_dev_close(struct rte_eth_dev *dev)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+
+	avf_shutdown_adminq(hw);
+	/* disable uio intr before callback unregister */
+	rte_intr_disable(intr_handle);
+
+	/* unregister callback func from eal lib */
+	rte_intr_callback_unregister(intr_handle,
+				     avf_dev_interrupt_handler, dev);
+	avf_disable_irq0(hw);
+}
+
+static int
+avf_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+	if (hw->adapter_stopped == 0)
+		avf_dev_close(dev);
+
+	rte_free(vf->vf_res);
+	vf->vsi_res = NULL;
+	vf->vf_res = NULL;
+
+	rte_free(vf->aq_resp);
+	vf->aq_resp = NULL;
+
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
+	if (vf->rss_lut) {
+		rte_free(vf->rss_lut);
+		vf->rss_lut = NULL;
+	}
+	if (vf->rss_key) {
+		rte_free(vf->rss_key);
+		vf->rss_key = NULL;
+	}
+
+	return 0;
+}
+
+static int eth_avf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+	struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+		sizeof(struct avf_adapter), avf_dev_init);
+}
+
+static int eth_avf_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, avf_dev_uninit);
+}
+
+/*
+ * virtual function driver struct
+ */
+static struct rte_pci_driver rte_avf_pmd = {
+	.id_table = pci_id_avf_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+	.probe = eth_avf_pci_probe,
+	.remove = eth_avf_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_avf, rte_avf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_avf, pci_id_avf_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_avf, "* igb_uio | vfio-pci");
+
+/* memory func for base code */
+enum avf_status_code
+avf_allocate_dma_mem_d(__rte_unused struct avf_hw *hw,
+		       struct avf_dma_mem *mem,
+		       u64 size,
+		       u32 alignment)
+{
+	const struct rte_memzone *mz = NULL;
+	char z_name[RTE_MEMZONE_NAMESIZE];
+
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	snprintf(z_name, sizeof(z_name), "avf_dma_%"PRIu64, rte_rand());
+	mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY, 0,
+					 alignment, RTE_PGSIZE_2M);
+	if (!mz)
+		return AVF_ERR_NO_MEMORY;
+
+	mem->size = size;
+	mem->va = mz->addr;
+	mem->pa = mz->phys_addr;
+	mem->zone = (const void *)mz;
+	PMD_DRV_LOG(DEBUG,
+		"memzone %s allocated with physical address: %"PRIu64,
+		mz->name, mem->pa);
+
+	return AVF_SUCCESS;
+}
+
+enum avf_status_code
+avf_free_dma_mem_d(__rte_unused struct avf_hw *hw,
+		   struct avf_dma_mem *mem)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	PMD_DRV_LOG(DEBUG,
+		"memzone %s to be freed with physical address: %"PRIu64,
+		((const struct rte_memzone *)mem->zone)->name, mem->pa);
+	rte_memzone_free((const struct rte_memzone *)mem->zone);
+	mem->zone = NULL;
+	mem->va = NULL;
+	mem->pa = (u64)0;
+
+	return AVF_SUCCESS;
+}
+
+enum avf_status_code
+avf_allocate_virt_mem_d(__rte_unused struct avf_hw *hw,
+			struct avf_virt_mem *mem,
+			u32 size)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	mem->size = size;
+	mem->va = rte_zmalloc("avf", size, 0);
+
+	if (mem->va)
+		return AVF_SUCCESS;
+	else
+		return AVF_ERR_NO_MEMORY;
+}
+
+enum avf_status_code
+avf_free_virt_mem_d(__rte_unused struct avf_hw *hw,
+		    struct avf_virt_mem *mem)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	rte_free(mem->va);
+	mem->va = NULL;
+
+	return AVF_SUCCESS;
+}
+
+/* spinlock func for base code */
+void
+avf_init_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_init(&sp->spinlock);
+}
+
+void
+avf_acquire_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_lock(&sp->spinlock);
+}
+
+void
+avf_release_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_unlock(&sp->spinlock);
+}
+
+void
+avf_destroy_spinlock_d(__rte_unused struct avf_spinlock *sp)
+{
+	return;
+}
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
new file mode 100644
index 0000000..afbb6a0
--- /dev/null
+++ b/drivers/net/avf/avf_vchnl.c
@@ -0,0 +1,335 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+
+#include <rte_debug.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_dev.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_adminq_cmd.h"
+#include "base/avf_type.h"
+
+#include "avf.h"
+
+#define MAX_TRY_TIMES 200
+#define ASQ_DELAY_MS  10
+
+
+/* Read data in admin queue to get msg from pf driver */
+static enum avf_status_code
+avf_read_msg_from_pf(struct avf_adapter *adapter, uint16_t buf_len,
+		     uint8_t *buf)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_arq_event_info event;
+	enum virtchnl_ops opcode;
+	int ret;
+
+	event.buf_len = buf_len;
+	event.msg_buf = buf;
+	ret = avf_clean_arq_element(hw, &event, NULL);
+	/* Can't read any msg from adminQ */
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Can't read msg from AQ");
+		return ret;
+	}
+
+	opcode = (enum virtchnl_ops)rte_le_to_cpu_32(event.desc.cookie_high);
+	vf->cmd_retval = (enum virtchnl_status_code)rte_le_to_cpu_32(
+			event.desc.cookie_low);
+
+	PMD_DRV_LOG(DEBUG, "AQ from pf carries opcode %u, retval %d",
+		    opcode, vf->cmd_retval);
+
+	if (opcode != vf->pend_cmd)
+		PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
+			    vf->pend_cmd, opcode);
+
+	return AVF_SUCCESS;
+}
+
+static int
+avf_execute_vf_cmd(struct avf_adapter *adapter, struct avf_cmd_info *args)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_arq_event_info event_info;
+	enum avf_status_code ret;
+	int err = 0;
+	int i = 0;
+
+	if (_atomic_set_cmd(vf, args->ops))
+		return -1;
+
+	ret = avf_aq_send_msg_to_pf(hw, args->ops, AVF_SUCCESS,
+				    args->in_args, args->in_args_size, NULL);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops);
+		_clear_cmd(vf);
+		return ret;
+	}
+
+	switch (args->ops) {
+	case VIRTCHNL_OP_RESET_VF:
+		/* No need to wait for the response */
+		_clear_cmd(vf);
+		break;
+	case VIRTCHNL_OP_VERSION:
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		/* for init virtchnl ops, need to poll the response */
+		do {
+			ret = avf_read_msg_from_pf(adapter, args->out_size,
+						   args->out_buffer);
+			if (ret == AVF_SUCCESS)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+		} while (i++ < MAX_TRY_TIMES);
+		if (i >= MAX_TRY_TIMES ||
+		    vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -1;
+			PMD_DRV_LOG(ERR, "No response or return failure (%d)"
+				    " for cmd %d", vf->cmd_retval, args->ops);
+		}
+		_clear_cmd(vf);
+		break;
+
+	default:
+		/* for other ops in runtime, wait for the cmd done flag */
+		do {
+			if (vf->pend_cmd == VIRTCHNL_OP_UNKNOWN)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+			/* The flag is cleared by the interrupt handler on response */
+		} while (i++ < MAX_TRY_TIMES);
+		/* If no response was received, clear the pending command */
+		if (i >= MAX_TRY_TIMES  ||
+		    vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -1;
+			PMD_DRV_LOG(ERR, "No response or return failure (%d)"
+				    " for cmd %d", vf->cmd_retval, args->ops);
+			_clear_cmd(vf);
+		}
+		break;
+	}
+
+	return err;
+}
+
+void
+avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_arq_event_info info;
+	uint16_t pending, aq_opc;
+	enum virtchnl_ops msg_opc;
+	enum avf_status_code msg_ret;
+	int ret;
+
+	info.buf_len = AVF_AQ_BUF_SZ;
+	if (!vf->aq_resp) {
+		PMD_DRV_LOG(ERR, "Buffer for adminq resp should not be NULL");
+		return;
+	}
+	info.msg_buf = vf->aq_resp;
+
+	pending = 1;
+	while (pending) {
+		ret = avf_clean_arq_element(hw, &info, &pending);
+
+		if (ret != AVF_SUCCESS) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from AdminQ, "
+				    "ret: %d", ret);
+			break;
+		}
+		aq_opc = rte_le_to_cpu_16(info.desc.opcode);
+		/* For a message sent from PF to VF, the opcode is stored in
+		 * cookie_high of struct avf_aq_desc, while the return error
+		 * code is stored in cookie_low; both are filled by the PF.
+		 */
+		msg_opc = (enum virtchnl_ops)rte_le_to_cpu_32(
+						  info.desc.cookie_high);
+		msg_ret = (enum avf_status_code)rte_le_to_cpu_32(
+						  info.desc.cookie_low);
+		switch (aq_opc) {
+		case avf_aqc_opc_send_msg_to_vf:
+			if (msg_opc == VIRTCHNL_OP_EVENT) {
+				/* TODO */
+			} else {
+				/* the message read back is the expected one */
+				if (msg_opc == vf->pend_cmd) {
+					vf->cmd_retval = msg_ret;
+					/* prevent compiler reordering */
+					rte_compiler_barrier();
+					_clear_cmd(vf);
+				} else
+					PMD_DRV_LOG(ERR, "command mismatch, "
+						"expect %u, get %u",
+						vf->pend_cmd, msg_opc);
+				PMD_DRV_LOG(DEBUG, "adminq response is received,"
+					     " opcode = %d", msg_opc);
+			}
+			break;
+		default:
+			PMD_DRV_LOG(ERR, "Request %u is not supported yet",
+				    aq_opc);
+			break;
+		}
+	}
+}
+
+#define VIRTCHNL_VERSION_MAJOR_START 1
+#define VIRTCHNL_VERSION_MINOR_START 1
+
+/**
+ * avf_check_api_version
+ * @adapter: pointer to the adapter structure
+ *
+ * Check API version, waiting until it is read or the admin queue request fails
+ */
+int
+avf_check_api_version(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_version_info version, *pver;
+	struct avf_cmd_info args;
+	int err;
+
+	version.major = VIRTCHNL_VERSION_MAJOR;
+	version.minor = VIRTCHNL_VERSION_MINOR;
+
+	args.ops = VIRTCHNL_OP_VERSION;
+	args.in_args = (uint8_t *)&version;
+	args.in_args_size = sizeof(version);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Fail to execute command of OP_VERSION");
+		return err;
+	}
+
+	pver = (struct virtchnl_version_info *)args.out_buffer;
+	vf->virtchnl_version = *pver;
+
+	if ((vf->virtchnl_version.major < VIRTCHNL_VERSION_MAJOR_START) ||
+	    ((vf->virtchnl_version.major == VIRTCHNL_VERSION_MAJOR_START) &&
+	     (vf->virtchnl_version.minor < VIRTCHNL_VERSION_MINOR_START))) {
+		PMD_INIT_LOG(ERR, "VIRTCHNL API version should not be lower"
+			     " than (%u.%u) to support Adaptive VF",
+			     VIRTCHNL_VERSION_MAJOR_START,
+			     VIRTCHNL_VERSION_MINOR_START);
+		return -1;
+	} else if ((vf->virtchnl_version.major > VIRTCHNL_VERSION_MAJOR) ||
+		   ((vf->virtchnl_version.major == VIRTCHNL_VERSION_MAJOR) &&
+		    (vf->virtchnl_version.minor > VIRTCHNL_VERSION_MINOR))) {
+		PMD_INIT_LOG(ERR, "PF/VF API version mismatch:(%u.%u)-(%u.%u)",
+			     vf->virtchnl_version.major,
+			     vf->virtchnl_version.minor,
+			     VIRTCHNL_VERSION_MAJOR,
+			     VIRTCHNL_VERSION_MINOR);
+		return -1;
+	} else
+		PMD_DRV_LOG(DEBUG, "Peer is a supported PF host");
+
+	return 0;
+}
+
+int
+avf_get_vf_resource(struct avf_adapter *adapter)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	uint32_t caps, len;
+	int err, i;
+
+	args.ops = VIRTCHNL_OP_GET_VF_RESOURCES;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	/* TODO: basic offload capabilities, need to
+	 * add advanced/optional offload capabilities
+	 */
+
+	caps = AVF_BASIC_OFFLOAD_CAPS;
+
+	args.in_args = (uint8_t *)&caps;
+	args.in_args_size = sizeof(caps);
+
+	err = avf_execute_vf_cmd(adapter, &args);
+
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+				 "OP_GET_VF_RESOURCE");
+		return -1;
+	}
+
+	len = sizeof(struct virtchnl_vf_resource) +
+		      AVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource);
+
+	rte_memcpy(vf->vf_res, args.out_buffer,
+		   RTE_MIN(args.out_size, len));
+	/* parse VF config message back from PF */
+	avf_parse_hw_config(hw, vf->vf_res);
+	for (i = 0; i < vf->vf_res->num_vsis; i++) {
+		if (vf->vf_res->vsi_res[i].vsi_type == VIRTCHNL_VSI_SRIOV)
+			vf->vsi_res = &vf->vf_res->vsi_res[i];
+	}
+
+	if (!vf->vsi_res) {
+		PMD_INIT_LOG(ERR, "no LAN VSI found");
+		return -1;
+	}
+
+	vf->vsi.vsi_id = vf->vsi_res->vsi_id;
+	vf->vsi.nb_qps = vf->vsi_res->num_queue_pairs;
+	vf->vsi.adapter = adapter;
+
+	return 0;
+}
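
avf_check_api_version() and avf_get_vf_resource() above both follow the same avf_cmd_info pattern, which later virtchnl ops can reuse. A minimal sketch, assuming it sits in avf_vchnl.c next to the static avf_execute_vf_cmd() and the standard virtchnl_queue_select layout; a stats query is listed as future work in the cover letter and is used here only to show the calling convention:

static int avf_query_stats_example(struct avf_adapter *adapter)
{
	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
	struct virtchnl_queue_select q_select;
	struct avf_cmd_info args;

	memset(&q_select, 0, sizeof(q_select));
	q_select.vsi_id = vf->vsi.vsi_id;

	args.ops = VIRTCHNL_OP_GET_STATS;
	args.in_args = (uint8_t *)&q_select;
	args.in_args_size = sizeof(q_select);
	args.out_buffer = vf->aq_resp;	/* PF reply lands here */
	args.out_size = AVF_AQ_BUF_SZ;

	/* Sends the request on the admin queue, then waits until the
	 * interrupt path (avf_handle_virtchnl_msg) clears the pending
	 * command; the stats payload would be parsed from vf->aq_resp.
	 */
	return avf_execute_vf_cmd(adapter, &args);
}
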
diff --git a/drivers/net/avf/rte_pmd_avf_version.map b/drivers/net/avf/rte_pmd_avf_version.map
new file mode 100644
index 0000000..a70bd19
--- /dev/null
+++ b/drivers/net/avf/rte_pmd_avf_version.map
@@ -0,0 +1,4 @@
+DPDK_17.11 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 8192b98..aa83c65 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -114,6 +114,7 @@ _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK)  += -lrte_mempool_stack
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ARK_PMD)        += -lrte_pmd_ark
 _LDLIBS-$(CONFIG_RTE_LIBRTE_AVP_PMD)        += -lrte_pmd_avp
+_LDLIBS-$(CONFIG_RTE_LIBRTE_AVF_PMD)        += -lrte_pmd_avf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [RFC 3/9] net/avf: enable queue and device
  2017-10-20  8:26 [dpdk-dev] [RFC 0/9] add new avf PMD Jingjing Wu
  2017-10-20  8:26 ` [dpdk-dev] [RFC 1/9] net/avf/base: add base code for " Jingjing Wu
  2017-10-20  8:26 ` [dpdk-dev] [RFC 2/9] net/avf: initilization of " Jingjing Wu
@ 2017-10-20  8:26 ` Jingjing Wu
  2017-11-22  0:04   ` Ferruh Yigit
  2017-10-20  8:26 ` [dpdk-dev] [RFC 4/9] net/avf: enable basic Rx Tx func Jingjing Wu
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 151+ messages in thread
From: Jingjing Wu @ 2017-10-20  8:26 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

enable device and queue setup ops like:

 - dev_configure
 - dev_start
 - dev_stop
 - dev_close
 - dev_infos_get
 - rx_queue_start
 - rx_queue_stop
 - tx_queue_start
 - tx_queue_stop
 - rx_queue_setup
 - rx_queue_release
 - tx_queue_setup
 - tx_queue_release
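
From the application side these ops are reached through the usual ethdev calls. A minimal bring-up sketch for a single queue pair, with an existing mbuf mempool assumed; illustrative only, not part of the patch:

#include <string.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>

static int start_avf_port_example(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf conf;
	int ret;

	memset(&conf, 0, sizeof(conf));			/* default settings */
	conf.rxmode.max_rx_pkt_len = ETHER_MAX_LEN;	/* standard frame size */

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf); /* -> avf_dev_configure */
	if (ret < 0)
		return ret;

	ret = rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
				     NULL, mb_pool);	/* -> avf_dev_rx_queue_setup */
	if (ret < 0)
		return ret;

	ret = rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(),
				     NULL);		/* -> avf_dev_tx_queue_setup */
	if (ret < 0)
		return ret;

	return rte_eth_dev_start(port_id);		/* -> avf_dev_start */
}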

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/avf/Makefile     |   1 +
 drivers/net/avf/avf.h        |  18 ++
 drivers/net/avf/avf_ethdev.c | 355 ++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.c   | 647 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h   | 203 ++++++++++++++
 drivers/net/avf/avf_vchnl.c  | 360 ++++++++++++++++++++++++
 6 files changed, 1584 insertions(+)
 create mode 100644 drivers/net/avf/avf_rxtx.c
 create mode 100644 drivers/net/avf/avf_rxtx.h

diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index 27e2eec..a8dd8b7 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -88,5 +88,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_rxtx.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 10bffe4..8255f55 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -70,6 +70,13 @@
 	VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | \
 	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
 
+#define AVF_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 |         \
+	ETH_RSS_NONFRAG_IPV4_TCP |  \
+	ETH_RSS_NONFRAG_IPV4_UDP |  \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_NONFRAG_IPV4_OTHER)
+
 #define AVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
 #define AVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
 
@@ -218,4 +225,15 @@ _atomic_set_cmd(struct avf_info *vf, enum virtchnl_ops ops)
 int avf_check_api_version(struct avf_adapter *adapter);
 int avf_get_vf_resource(struct avf_adapter *adapter);
 void avf_handle_virtchnl_msg(struct rte_eth_dev *dev);
+int avf_enable_vlan_strip(struct avf_adapter *adapter);
+int avf_disable_vlan_strip(struct avf_adapter *adapter);
+int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
+		     bool rx, bool on);
+int avf_enable_queues(struct avf_adapter *adapter);
+int avf_disable_queues(struct avf_adapter *adapter);
+int avf_configure_rss_lut(struct avf_adapter *adapter);
+int avf_configure_rss_key(struct avf_adapter *adapter);
+int avf_configure_queues(struct avf_adapter *adapter);
+int avf_config_irq_map(struct avf_adapter *adapter);
+void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 08853c9..f968314 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -60,6 +60,14 @@
 #include "base/avf_type.h"
 
 #include "avf.h"
+#include "avf_rxtx.h"
+
+static int avf_dev_configure(struct rte_eth_dev *dev);
+static int avf_dev_start(struct rte_eth_dev *dev);
+static void avf_dev_stop(struct rte_eth_dev *dev);
+static void avf_dev_close(struct rte_eth_dev *dev);
+static void avf_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -69,9 +77,355 @@ static const struct rte_pci_id pci_id_avf_map[] = {
 };
 
 static const struct eth_dev_ops avf_eth_dev_ops = {
+	.dev_configure              = avf_dev_configure,
+	.dev_start                  = avf_dev_start,
+	.dev_stop                   = avf_dev_stop,
+	.dev_close                  = avf_dev_close,
+	.dev_infos_get              = avf_dev_info_get,
+	.rx_queue_start             = avf_dev_rx_queue_start,
+	.rx_queue_stop              = avf_dev_rx_queue_stop,
+	.tx_queue_start             = avf_dev_tx_queue_start,
+	.tx_queue_stop              = avf_dev_tx_queue_stop,
+	.rx_queue_setup             = avf_dev_rx_queue_setup,
+	.rx_queue_release           = avf_dev_rx_queue_release,
+	.tx_queue_setup             = avf_dev_tx_queue_setup,
+	.tx_queue_release           = avf_dev_tx_queue_release,
 };
 
 static int
+avf_dev_configure(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+
+	/* Initialize to TRUE. If any Rx queue doesn't meet the bulk
+	 * allocation or vector Rx preconditions, it will be reset.
+	 */
+	ad->rx_vec_allowed = true;
+	ad->tx_simple_allowed = true;
+	ad->tx_vec_allowed = true;
+
+	/* Vlan stripping setting */
+	if (dev_conf->rxmode.hw_vlan_strip)
+		avf_enable_vlan_strip(ad);
+	else
+		avf_disable_vlan_strip(ad);
+	return 0;
+}
+
+
+static int
+avf_init_rss(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_eth_rss_conf *rss_conf;
+	uint8_t i, j, nb_q;
+	int ret;
+
+	rss_conf = &adapter->eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = RTE_MIN(adapter->eth_dev->data->nb_rx_queues,
+		       AVF_MAX_NUM_QUEUES);
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+	if (adapter->eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
+		return 0;
+	}
+
+	/* In AVF, RSS enablement is set by the PF driver; it cannot be
+	 * selected per hash type via rss_conf->rss_hf. */
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key) {
+		/* Calculate the default hash key */
+		for (i = 0; i < vf->vf_res->rss_key_size; i++)
+			vf->rss_key[i] = (uint8_t)rte_rand();
+	} else
+		rte_memcpy(vf->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   vf->vf_res->rss_key_size));
+
+	/* init RSS LUT table */
+	for (i = 0, j = 0; i < vf->vf_res->rss_lut_size; i++, j++) {
+		if (j >= nb_q)
+			j = 0;
+		vf->rss_lut[i] = j;
+	}
+	/* send virtchnl ops to configure RSS */
+	ret = avf_configure_rss_lut(adapter);
+	if (ret)
+		return ret;
+	ret = avf_configure_rss_key(adapter);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int
+avf_init_rxq(struct rte_eth_dev *dev, struct avf_rx_queue *rxq)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_dev_data *dev_data = dev->data;
+	uint16_t buf_size, max_pkt_len, len;
+
+	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+
+	/* Calculate the maximum packet length allowed */
+	len = rxq->rx_buf_len * AVF_MAX_CHAINED_RX_BUFFERS;
+	max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	/* Check if the jumbo frame and maximum packet length are set correctly */
+	if (dev->data->dev_conf.rxmode.jumbo_frame == 1) {
+		if (max_pkt_len <= ETHER_MAX_LEN ||
+		    max_pkt_len > AVF_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				"larger than %u and smaller than %u, as jumbo "
+				"frame is enabled", (uint32_t)ETHER_MAX_LEN,
+					(uint32_t)AVF_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (max_pkt_len < ETHER_MIN_LEN ||
+		    max_pkt_len > ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				"larger than %u and smaller than %u, as jumbo "
+				"frame is disabled", (uint32_t)ETHER_MIN_LEN,
+						(uint32_t)ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	rxq->max_pkt_len = max_pkt_len;
+	if (dev_data->dev_conf.rxmode.enable_scatter ||
+	    (rxq->max_pkt_len + 2 * AVF_VLAN_TAG_SIZE) > buf_size) {
+		dev_data->scattered_rx = 1;
+	}
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	AVF_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
+static int
+avf_init_queues(struct rte_eth_dev *dev)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_rx_queue **rxq =
+		(struct avf_rx_queue **)dev->data->rx_queues;
+	struct avf_tx_queue **txq =
+		(struct avf_tx_queue **)dev->data->tx_queues;
+	int i, ret = AVF_SUCCESS;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!rxq[i] || !rxq[i]->q_set)
+			continue;
+		ret = avf_init_rxq(dev, rxq[i]);
+		if (ret != AVF_SUCCESS)
+			break;
+	}
+	/* TODO: set rx/tx function to vector/scatter/single-segment
+	 * according to parameters
+	 */
+	return ret;
+}
+
+static int
+avf_start_queues(struct rte_eth_dev *dev)
+{
+	struct avf_rx_queue *rxq;
+	struct avf_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq->tx_deferred_start)
+			continue;
+		if (avf_dev_tx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Failed to start Tx queue %u", i);
+			return -1;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq->rx_deferred_start)
+			continue;
+		if (avf_dev_rx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Failed to start Rx queue %u", i);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+avf_dev_start(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
+	uint16_t interval;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	hw->adapter_stopped = 0;
+
+	vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+	vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
+				      dev->data->nb_tx_queues);
+
+	/* TODO: Rx interrupt */
+
+	if (avf_init_queues(dev) != 0) {
+		PMD_DRV_LOG(ERR, "failed to do RX init");
+		return -1;
+	}
+
+	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		if (avf_init_rss(adapter) != 0) {
+			PMD_DRV_LOG(ERR, "configure rss failed");
+			goto err_rss;
+		}
+	}
+
+	if (avf_configure_queues(adapter) != 0) {
+		PMD_DRV_LOG(ERR, "configure queues failed");
+		goto err_queue;
+	}
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR)) {
+		/* Without the WB_ON_ITR capability, an interrupt is needed
+		 * to trigger descriptor write-back. */
+		vf->nb_msix = 1;
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			vf->rxq_map[0] |= 1 << i;
+
+		/* set ITR to max */
+		interval = avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+			      AVFINT_DYN_CTL01_INTENA_MASK |
+			      AVFINT_DYN_CTL01_CLEARPBA_MASK |
+			      (AVF_ITR_INDEX_DEFAULT <<
+			       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+			      (interval << AVFINT_DYN_CTL01_INTERVAL_SHIFT));
+		AVF_WRITE_FLUSH(hw);
+
+		if (avf_config_irq_map(adapter)) {
+			PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+			goto err_queue;
+		}
+	}
+
+	/* Set all mac addrs */
+	avf_add_del_all_mac_addr(adapter, TRUE);
+
+	if (avf_start_queues(dev) != 0) {
+		PMD_DRV_LOG(ERR, "enable queues failed");
+		goto err_mac;
+	}
+
+	/* TODO: enable interrupt for RX interrupt */
+	return 0;
+
+err_mac:
+	avf_add_del_all_mac_addr(adapter, FALSE);
+err_queue:
+err_rss:
+	return -1;
+}
+
+static void
+avf_dev_stop(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (hw->adapter_stopped == 1)
+		return;
+
+	avf_stop_queues(dev);
+
+	/* TODO: Disable the interrupt for Rx */
+
+	/* TODO: Rx interrupt vector mapping free */
+
+	/* remove all mac addrs */
+	avf_add_del_all_mac_addr(adapter, FALSE);
+	hw->adapter_stopped = 1;
+
+}
+
+static void
+avf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	memset(dev_info, 0, sizeof(*dev_info));
+	dev_info->pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	dev_info->max_rx_queues = vf->vsi_res->num_queue_pairs;
+	dev_info->max_tx_queues = vf->vsi_res->num_queue_pairs;
+	dev_info->min_rx_bufsize = AVF_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = AVF_FRAME_SIZE_MAX;
+	dev_info->hash_key_size = vf->vf_res->rss_key_size;
+	dev_info->reta_size = vf->vf_res->rss_lut_size;
+	dev_info->flow_type_rss_offloads = AVF_RSS_OFFLOAD_ALL;
+	dev_info->max_mac_addrs = AVF_NUM_MACADDR_MAX;
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = AVF_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = AVF_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = AVF_DEFAULT_TX_RS_THRESH,
+		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
+				ETH_TXQ_FLAGS_NOOFFLOADS,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = AVF_MAX_RING_DESC,
+		.nb_min = AVF_MIN_RING_DESC,
+		.nb_align = AVF_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = AVF_MAX_RING_DESC,
+		.nb_min = AVF_MIN_RING_DESC,
+		.nb_align = AVF_ALIGN_RING_DESC,
+	};
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
@@ -297,6 +651,7 @@ avf_dev_close(struct rte_eth_dev *dev)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
+	avf_dev_stop(dev);
 	avf_shutdown_adminq(hw);
 	/* disable uio intr before callback unregister */
 	rte_intr_disable(intr_handle);
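
The default LUT built by avf_init_rss() above is a plain round-robin spread of the enabled queues over the table. A small sketch of the same fill with illustrative sizes; the helper name is invented for the example and is not part of the patch:

/* With lut_size = 8 and nb_q = 3 the table becomes
 * { 0, 1, 2, 0, 1, 2, 0, 1 }, spreading RSS hash buckets evenly
 * across the configured Rx queues.
 */
static void fill_default_lut(uint8_t *lut, uint16_t lut_size, uint16_t nb_q)
{
	uint16_t i, j;

	for (i = 0, j = 0; i < lut_size; i++, j++) {
		if (j >= nb_q)
			j = 0;
		lut[i] = (uint8_t)j;
	}
}
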
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
new file mode 100644
index 0000000..28f0b5e
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx.c
@@ -0,0 +1,647 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+
+#include <rte_string_fns.h>
+#include <rte_memzone.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_tcp.h>
+#include <rte_sctp.h>
+#include <rte_udp.h>
+#include <rte_ip.h>
+#include <rte_net.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_type.h"
+#include "avf.h"
+#include "avf_rxtx.h"
+
+static inline int
+check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+{
+	/* The following constraints must be satisfied:
+	 *   thresh >= AVF_RX_MAX_BURST
+	 *   thresh < rxq->nb_rx_desc
+	 *   (rxq->nb_rx_desc % thresh) == 0
+	 */
+	if (thresh < AVF_RX_MAX_BURST ||
+	    thresh >= nb_desc ||
+	    (nb_desc % thresh != 0)) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u, "
+			     "greater than or equal to %u, "
+			     "and a divisor of %u",
+			     thresh, nb_desc, AVF_RX_MAX_BURST, nb_desc);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+
+static inline int
+check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+		uint16_t tx_free_thresh)
+{
+	/* TX descriptors will have their RS bit set after tx_rs_thresh
+	 * descriptors have been used. The TX descriptor ring will be cleaned
+	 * after tx_free_thresh descriptors are used or if the number of
+	 * descriptors required to transmit a packet is greater than the
+	 * number of free TX descriptors.
+	 *
+	 * The following constraints must be satisfied:
+	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
+	 *  - tx_free_thresh must be less than the size of the ring minus 3.
+	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+	 *  - tx_rs_thresh must be a divisor of the ring size.
+	 *
+	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+	 * race condition, hence the maximum threshold constraints. When set
+	 * to zero use default values.
+	 */
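+	/* For example, the defaults (tx_rs_thresh = tx_free_thresh = 32) are
+	 * valid for a 512-entry ring: 32 < 510, 32 < 509, 32 <= 32 and
+	 * 512 % 32 == 0.
+	 */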
+	if (tx_rs_thresh >= (nb_desc - 2)) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
+			     "number of TX descriptors (%u) minus 2",
+			     tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be less than the "
+			     "number of TX descriptors (%u) minus 3.",
+			     tx_free_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > tx_free_thresh) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
+			     "equal to tx_free_thresh (%u).",
+			     tx_rs_thresh, tx_free_thresh);
+		return -EINVAL;
+	}
+	if ((nb_desc % tx_rs_thresh) != 0) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
+			     "number of TX descriptors (%u).",
+			     tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline void
+reset_rx_queue(struct avf_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (!rxq)
+		return;
+
+	len = rxq->nb_rx_desc + AVF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(union avf_rx_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
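+	/* Point the extra AVF_RX_MAX_BURST software ring entries at the dummy
+	 * mbuf so that a bulk receive reading past the real ring end never
+	 * dereferences a NULL entry.
+	 */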
+	for (i = 0; i < AVF_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+
+static inline void
+reset_tx_queue(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *txe;
+	uint32_t i, size;
+	uint16_t prev;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct avf_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
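+	/* Initialize every descriptor as "done" so the first cleanup pass
+	 * sees the whole ring as completed, and chain the software entries
+	 * into a circular list via next_id.
+	 */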
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i].cmd_type_offset_bsz =
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_used = 0;
+
+	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+	txq->nb_free = txq->nb_tx_desc - 1;
+
+	txq->next_dd = txq->rs_thresh - 1;
+	txq->next_rs = txq->rs_thresh - 1;
+
+}
+
+static int
+alloc_rxq_mbufs(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_dma_addr_default(mbuf));
+
+		rxd = &rxq->rx_ring[i];
+		rxd->read.pkt_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
+#ifndef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+		rxd->read.rsvd1 = 0;
+		rxd->read.rsvd2 = 0;
+#endif
+
+		rxq->sw_ring[i] = mbuf;
+	}
+
+	return 0;
+}
+
+static inline void
+release_rxq_mbufs(struct avf_rx_queue *rxq)
+{
+	struct rte_mbuf *mbuf;
+	uint16_t i;
+
+	if (!rxq->sw_ring)
+		return;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i]) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+			rxq->sw_ring[i] = NULL;
+		}
+	}
+}
+
+static inline void
+release_txq_mbufs(struct avf_tx_queue *txq)
+{
+	uint16_t i;
+
+	if (!txq || !txq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		if (txq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+	}
+}
+
+int
+avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		       uint16_t nb_desc, unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *mp)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+	uint16_t len;
+	uint16_t rx_free_thresh;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (nb_desc % AVF_ALIGN_RING_DESC != 0 ||
+	    (nb_desc > AVF_MAX_RING_DESC) ||
+	    (nb_desc < AVF_MIN_RING_DESC)) {
+		PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/* Check free threshold */
+	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+			 AVF_DEFAULT_RX_FREE_THRESH :
+			 rx_conf->rx_free_thresh;
+	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx]) {
+		avf_dev_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket("avf rxq",
+				 sizeof(struct avf_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!rxq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "rx queue data structure");
+		return -ENOMEM;
+	}
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->queue_id = queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->crc_len = 0; /* crc stripping by default */
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->rx_hdr_len = 0;
+
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = RTE_ALIGN(len, (1 << AVF_RXQ_CTX_DBUFF_SHIFT));
+
+	/* Allocate the software ring. */
+	len = nb_desc + AVF_RX_MAX_BURST;
+	rxq->sw_ring =
+		rte_zmalloc_socket("avf rx sw ring",
+				   sizeof(struct rte_mbuf) * len,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!rxq->sw_ring) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+
+	/* Allocate the maximum number of RX ring hardware descriptors with
+	 * a little extra to support bulk allocation.
+	 */
+	len = AVF_MAX_RING_DESC + AVF_RX_MAX_BURST;
+	ring_size = RTE_ALIGN(len * sizeof(union avf_rx_desc),
+			      AVF_DMA_MEM_ALIGN);
+	mz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
+					   ring_size, AVF_RING_BASE_ALIGN,
+					   socket_id);
+	if (!mz) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
+		rte_free(rxq->sw_ring);
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
+	rxq->rx_ring_phys_addr = mz->phys_addr;
+	rxq->rx_ring = (union avf_rx_desc *)mz->addr;
+
+	rxq->mz = mz;
+	reset_rx_queue(rxq);
+	rxq->q_set = TRUE;
+	dev->data->rx_queues[queue_idx] = rxq;
+	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
+
+	return 0;
+}
+
+
+int
+avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (nb_desc % AVF_ALIGN_RING_DESC != 0 ||
+	    (nb_desc > AVF_MAX_RING_DESC) ||
+	    (nb_desc < AVF_MIN_RING_DESC)) {
+		PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is "
+			    "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh) ?
+		tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH);
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+		tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
+	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx]) {
+		avf_dev_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("avf txq",
+				  sizeof(struct avf_tx_queue),
+				  RTE_CACHE_LINE_SIZE,
+				  socket_id);
+	if (!txq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			    "tx queue structure");
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->rs_thresh = tx_rs_thresh;
+	txq->free_thresh = tx_free_thresh;
+	txq->queue_id = queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->txq_flags = tx_conf->txq_flags;
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	/* Allocate software ring */
+	txq->sw_ring =
+		rte_zmalloc_socket("avf tx sw ring",
+				   sizeof(struct avf_tx_entry) * nb_desc,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!txq->sw_ring) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		rte_free(txq);
+		return -ENOMEM;
+	}
+
+	/* Allocate TX hardware ring descriptors. */
+	ring_size = sizeof(struct avf_tx_desc) * AVF_MAX_RING_DESC;
+	ring_size = RTE_ALIGN(ring_size, AVF_DMA_MEM_ALIGN);
+	mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+					   ring_size, AVF_RING_BASE_ALIGN,
+					   socket_id);
+	if (!mz) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+		rte_free(txq->sw_ring);
+		rte_free(txq);
+		return -ENOMEM;
+	}
+	txq->tx_ring_phys_addr = mz->phys_addr;
+	txq->tx_ring = (struct avf_tx_desc *)mz->addr;
+
+	txq->mz = mz;
+	reset_tx_queue(txq);
+	txq->q_set = TRUE;
+	dev->data->tx_queues[queue_idx] = txq;
+	txq->qtx_tail = hw->hw_addr + AVF_QTX_TAIL1(queue_idx);
+
+	return 0;
+}
+
+int
+avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int err = 0;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	err = alloc_rxq_mbufs(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+		return err;
+	}
+
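+	/* Ensure the buffer addresses written to the descriptors are globally
+	 * visible before the tail update exposes them to the hardware.
+	 */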
+	rte_wmb();
+
+	/* Init the RX tail register. */
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	AVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = avf_switch_queue(adapter, rx_queue_id, TRUE, TRUE);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	else
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+int
+avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int err = 0;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
+	AVF_PCI_REG_WRITE(txq->qtx_tail, 0);
+	AVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = avf_switch_queue(adapter, tx_queue_id, FALSE, TRUE);
+
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	else
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+int
+avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int err;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = avf_switch_queue(adapter, rx_queue_id, TRUE, FALSE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	release_rxq_mbufs(rxq);
+	reset_rx_queue(rxq);
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int err;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = avf_switch_queue(adapter, tx_queue_id, FALSE, FALSE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	release_txq_mbufs(txq);
+	reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+void
+avf_dev_rx_queue_release(void *rxq)
+{
+	struct avf_rx_queue *q = (struct avf_rx_queue *)rxq;
+
+	if (!q)
+		return;
+
+	release_rxq_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+avf_dev_tx_queue_release(void *txq)
+{
+	struct avf_tx_queue *q = (struct avf_tx_queue *)txq;
+
+	if (!q)
+		return;
+
+	release_txq_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+avf_stop_queues(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	struct avf_tx_queue *txq;
+	int ret, i;
+
+	/* Stop All queues */
+	ret = avf_disable_queues(adapter);
+	if (ret)
+		PMD_DRV_LOG(WARNING, "Fail to stop queues");
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		release_txq_mbufs(txq);
+		reset_tx_queue(txq);
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		release_rxq_mbufs(rxq);
+		reset_rx_queue(rxq);
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+}
\ No newline at end of file
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
new file mode 100644
index 0000000..9bdceb7
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx.h
@@ -0,0 +1,203 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AVF_RXTX_H_
+#define _AVF_RXTX_H_
+
+/* Ring size (QLEN) must be a whole multiple of 32 descriptors. */
+#define AVF_ALIGN_RING_DESC      32
+#define AVF_MIN_RING_DESC        64
+#define AVF_MAX_RING_DESC        4096
+#define AVF_DMA_MEM_ALIGN        4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define AVF_RING_BASE_ALIGN      128
+
+/* used for Rx Bulk Allocate */
+#define AVF_RX_MAX_BURST         32
+
+#define DEFAULT_TX_RS_THRESH     32
+#define DEFAULT_TX_FREE_THRESH   32
+
+/* HW desc structure, both 16-byte and 32-byte types are supported */
+#ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+#define avf_rx_desc avf_16byte_rx_desc
+#else
+#define avf_rx_desc avf_32byte_rx_desc
+#endif
+
+
+/* Structure associated with each Rx queue. */
+struct avf_rx_queue {
+	struct rte_mempool *mp;       /* mbuf pool to populate Rx ring */
+	const struct rte_memzone *mz; /* memzone for Rx ring */
+	volatile union avf_rx_desc *rx_ring; /* Rx ring virtual address */
+	uint64_t rx_ring_phys_addr;   /* Rx ring DMA address */
+	struct rte_mbuf **sw_ring;     /* address of SW ring */
+	uint16_t nb_rx_desc;          /* ring length */
+	uint16_t rx_tail;             /* current value of tail */
+	volatile uint8_t *qrx_tail;   /* register address of tail */
+	uint16_t rx_free_thresh;      /* max free RX desc to hold */
+	uint16_t nb_rx_hold;          /* number of held free RX desc */
+	struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
+	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
+	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
+
+	uint8_t port_id;        /* device port ID */
+	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
+	uint16_t queue_id;      /* Rx queue index */
+	uint16_t rx_buf_len;    /* The packet buffer size */
+	uint16_t rx_hdr_len;    /* The header buffer size */
+	uint16_t max_pkt_len;   /* Maximum packet length */
+
+	bool q_set;             /* if rx queue has been configured */
+	bool rx_deferred_start; /* don't start this queue in dev start */
+};
+
+struct avf_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint16_t next_id;
+	uint16_t last_id;
+};
+
+/* Structure associated with each TX queue. */
+struct avf_tx_queue {
+	const struct rte_memzone *mz;  /* memzone for Tx ring */
+	volatile struct avf_tx_desc *tx_ring; /* Tx ring virtual address */
+	uint64_t tx_ring_phys_addr;    /* Tx ring DMA address */
+	struct avf_tx_entry *sw_ring;  /* address array of SW ring */
+	uint16_t nb_tx_desc;           /* ring length */
+	uint16_t tx_tail;              /* current value of tail */
+	volatile uint8_t *qtx_tail;    /* register address of tail */
+	uint16_t nb_used;              /* number of used desc since RS bit set */
+	uint16_t nb_free;
+	uint16_t last_desc_cleaned;    /* last desc that has been cleaned */
+	uint16_t free_thresh;
+	uint16_t rs_thresh;
+
+	uint8_t port_id;
+	uint16_t queue_id;
+	uint32_t txq_flags;
+	uint16_t next_dd;              /* next desc to check DD, for VPMD */
+	uint16_t next_rs;              /* next desc to set RS, for VPMD */
+
+	bool q_set;                    /* if tx queue has been configured */
+	bool tx_deferred_start;        /* don't start this queue in dev start */
+};
+
+int avf_dev_rx_queue_setup(struct rte_eth_dev *dev,
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_rxconf *rx_conf,
+			   struct rte_mempool *mp);
+
+int avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void avf_dev_rx_queue_release(void *rxq);
+
+int avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_txconf *tx_conf);
+int avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void avf_dev_tx_queue_release(void *txq);
+void avf_stop_queues(struct rte_eth_dev *dev);
+
+static inline
+void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
+			    const void *desc,
+			    uint16_t rx_id)
+{
+#ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+	const union avf_16byte_rx_desc *rx_desc = desc;
+
+	printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64"\n",
+	       rxq->queue_id, rx_id, rx_desc->read.pkt_addr,
+	       rx_desc->read.hdr_addr);
+#else
+	const union avf_32byte_rx_desc *rx_desc = desc;
+
+	printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64
+	       " QW2: 0x%016"PRIx64" QW3: 0x%016"PRIx64"\n", rxq->queue_id,
+	       rx_id, rx_desc->read.pkt_addr, rx_desc->read.hdr_addr,
+	       rx_desc->read.rsvd1, rx_desc->read.rsvd2);
+#endif
+}
+
+/* All the TX descriptors are 16 bytes, so just use one of them
+ * to print the qwords.
+ */
+static inline
+void avf_dump_tx_descriptor(const struct avf_tx_queue *txq,
+			    const void *desc, uint16_t tx_id)
+{
+	const char *name;
+	const struct avf_tx_desc *tx_desc = desc;
+	enum avf_tx_desc_dtype_value type;
+
+	type = (enum avf_tx_desc_dtype_value)rte_le_to_cpu_64(
+		tx_desc->cmd_type_offset_bsz &
+		rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK));
+	switch (type) {
+	case AVF_TX_DESC_DTYPE_DATA:
+		name = "Tx_data_desc";
+		break;
+	case AVF_TX_DESC_DTYPE_CONTEXT:
+		name = "Tx_context_desc";
+		break;
+	default:
+		name = "unknown_desc";
+		break;
+	}
+
+	printf("Queue %d %s %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64"\n",
+	       txq->queue_id, name, tx_id, tx_desc->buffer_addr,
+	       tx_desc->cmd_type_offset_bsz);
+}
+
+#ifdef RTE_LIBRTE_AVF_RX_DUMP
+#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) \
+	avf_dump_rx_descriptor(rxq, desc, rx_id);
+#else
+#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVF_TX_DUMP
+#define AVF_DUMP_TX_DESC(txq, desc, tx_id) \
+	avf_dump_tx_descriptor(txq, desc, tx_id);
+#else
+#define AVF_DUMP_TX_DESC(txq, desc, tx_id) do { } while (0)
+#endif
+
+#endif /* _AVF_RXTX_H_ */
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index afbb6a0..90c77fb 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -53,6 +53,7 @@
 #include "base/avf_type.h"
 
 #include "avf.h"
+#include "avf_rxtx.h"
 
 #define MAX_TRY_TIMES 200
 #define ASQ_DELAY_MS  10
@@ -222,6 +223,48 @@ avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
 	}
 }
 
+int
+avf_enable_vlan_strip(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	int ret;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_ENABLE_VLAN_STRIPPING;
+	args.in_args = NULL;
+	args.in_args_size = 0;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	ret = avf_execute_vf_cmd(adapter, &args);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+			    "OP_ENABLE_VLAN_STRIPPING");
+
+	return ret;
+}
+
+int
+avf_disable_vlan_strip(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	int ret;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_DISABLE_VLAN_STRIPPING;
+	args.in_args = NULL;
+	args.in_args_size = 0;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	ret = avf_execute_vf_cmd(adapter, &args);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+			    "OP_DISABLE_VLAN_STRIPPING");
+
+	return ret;
+}
+
 #define VIRTCHNL_VERSION_MAJOR_START 1
 #define VIRTCHNL_VERSION_MINOR_START 1
 
@@ -333,3 +376,320 @@ avf_get_vf_resource(struct avf_adapter *adapter)
 
 	return 0;
 }
+
+int
+avf_enable_queues(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+
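+	/* Select every configured queue: BIT(n) - 1 sets bits 0..n-1 in the
+	 * Rx/Tx queue bitmaps.
+	 */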
+	queue_select.rx_queues = BIT(adapter->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(adapter->eth_dev->data->nb_tx_queues) - 1;
+
+	args.ops = VIRTCHNL_OP_ENABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+				 "OP_ENABLE_QUEUES");
+		return -ENOSYS;
+	}
+	return 0;
+}
+
+int
+avf_disable_queues(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+
+	queue_select.rx_queues = BIT(adapter->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(adapter->eth_dev->data->nb_tx_queues) - 1;
+
+	args.ops = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+				 "OP_DISABLE_QUEUES");
+		return -ENOSYS;
+	}
+	return 0;
+}
+
+int
+avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
+		 bool rx, bool on)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+	if (rx)
+		queue_select.rx_queues |= 1 << qid;
+	else
+		queue_select.tx_queues |= 1 << qid;
+
+	if (on)
+		args.ops = VIRTCHNL_OP_ENABLE_QUEUES;
+	else
+		args.ops = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+			    "%s", on ? "OP_ENABLE_QUEUES" : "OP_DISABLE_QUEUES");
+	return err;
+}
+
+int
+avf_configure_rss_lut(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_rss_lut *rss_lut;
+	struct avf_cmd_info args;
+	int len, err = 0;
+
+	len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (!rss_lut)
+		return -ENOMEM;
+
+	rss_lut->vsi_id = vf->vsi_res->vsi_id;
+	rss_lut->lut_entries = vf->vf_res->rss_lut_size;
+	rte_memcpy(rss_lut->lut, vf->rss_lut, vf->vf_res->rss_lut_size);
+
+
+	args.ops = VIRTCHNL_OP_CONFIG_RSS_LUT;
+	args.in_args = (u8 *)rss_lut;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+				 "OP_CONFIG_RSS_LUT");
+		err = -ENOSYS;
+	}
+	rte_free(rss_lut);
+	return err;
+}
+
+
+int
+avf_configure_rss_key(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_rss_key *rss_key;
+	struct avf_cmd_info args;
+	int len, err = 0;
+
+	len = sizeof(*rss_key) + vf->vf_res->rss_key_size - 1;
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (!rss_key)
+		return -ENOMEM;
+
+	rss_key->vsi_id = vf->vsi_res->vsi_id;
+	rss_key->key_len = vf->vf_res->rss_key_size;
+	rte_memcpy(rss_key->key, vf->rss_key, vf->vf_res->rss_key_size);
+
+	args.ops = VIRTCHNL_OP_CONFIG_RSS_KEY;
+	args.in_args = (u8 *)rss_key;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+				 "OP_CONFIG_RSS_KEY");
+		err = -ENOSYS;
+	}
+	rte_free(rss_key);
+	return err;
+}
+
+int
+avf_configure_queues(struct avf_adapter *adapter)
+{
+	struct avf_rx_queue **rxq =
+		(struct avf_rx_queue **)adapter->eth_dev->data->rx_queues;
+	struct avf_tx_queue **txq =
+		(struct avf_tx_queue **)adapter->eth_dev->data->tx_queues;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_vsi_queue_config_info *vc_config;
+	struct virtchnl_queue_pair_info *vc_qp;
+	struct avf_cmd_info args;
+	uint16_t i, size;
+	int err;
+
+	size = sizeof(*vc_config) +
+	       sizeof(vc_config->qpair[0]) * vf->num_queue_pairs;
+	vc_config = rte_zmalloc("cfg_queue", size, 0);
+	if (!vc_config)
+		return -ENOMEM;
+
+	vc_config->vsi_id = vf->vsi_res->vsi_id;
+	vc_config->num_queue_pairs = vf->num_queue_pairs;
+
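+	/* One entry is filled per queue pair; ring information beyond the
+	 * number of actually configured Rx/Tx queues is left zeroed.
+	 */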
+	for (i = 0, vc_qp = vc_config->qpair;
+	     i < vf->num_queue_pairs;
+	     i++, vc_qp++) {
+		vc_qp->txq.vsi_id = vf->vsi_res->vsi_id;
+		vc_qp->txq.queue_id = i;
+		/* Virtchnl configures queues in pairs */
+		if (i < adapter->eth_dev->data->nb_tx_queues) {
+			vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
+			vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_phys_addr;
+		}
+		vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
+		vc_qp->rxq.queue_id = i;
+		vc_qp->rxq.max_pkt_size = vf->max_pkt_len;
+		/* Virtchnl configures queues in pairs */
+		if (i < adapter->eth_dev->data->nb_rx_queues) {
+			vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
+			vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_phys_addr;
+			vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
+		}
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
+	args.in_args = (uint8_t *)vc_config;
+	args.in_args_size = size;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+				 "VIRTCHNL_OP_CONFIG_VSI_QUEUES");
+		err = -ENOSYS;
+	}
+	rte_free(vc_config);
+	return err;
+}
+
+int
+avf_config_irq_map(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_irq_map_info *map_info;
+	struct virtchnl_vector_map *vecmap;
+	struct avf_cmd_info args;
+	uint32_t vector_id;
+	int len, i, err;
+
+	len = sizeof(struct virtchnl_irq_map_info) +
+	      sizeof(struct virtchnl_vector_map) * vf->nb_msix;
+
+	map_info = rte_zmalloc("map_info", len, 0);
+	if (!map_info)
+		return -ENOMEM;
+
+	/* TODO: how to map the MSI-X interrupts, considering Rx interrupt
+	 * support and the FVL write-back limitation.
+	 */
+	map_info->num_vectors = vf->nb_msix;
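+	/* Only Rx queues are bound to vectors here; txq_map is left 0, so no
+	 * Tx queue is mapped to an interrupt vector.
+	 */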
+	for (i = 0; i < vf->nb_msix; i++) {
+		vecmap = &map_info->vecmap[i];
+		vecmap->vsi_id = vf->vsi_res->vsi_id;
+		vecmap->rxitr_idx = AVF_ITR_INDEX_DEFAULT;
+		vecmap->vector_id = i;
+		vecmap->txq_map = 0;
+		vecmap->rxq_map = vf->rxq_map[i];
+	}
+
+	args.ops = VIRTCHNL_OP_CONFIG_IRQ_MAP;
+	args.in_args = (u8 *)map_info;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
+
+	rte_free(map_info);
+	return err;
+}
+
+void
+avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct ether_addr *addr;
+	struct avf_cmd_info args;
+	int len, err, i, j;
+	int next_begin = 0;
+	int begin = 0;
+
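+	/* The full address list may not fit in one admin queue buffer, so
+	 * send it in chunks no larger than AVF_AQ_BUF_SZ.
+	 */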
+	do {
+		j = 0;
+		len = sizeof(struct virtchnl_ether_addr_list);
+		for (i = begin; i < AVF_NUM_MACADDR_MAX; i++, next_begin++) {
+			addr = &adapter->eth_dev->data->mac_addrs[i];
+			if (is_zero_ether_addr(addr))
+				continue;
+			len += sizeof(struct virtchnl_ether_addr);
+			if (len >= AVF_AQ_BUF_SZ) {
+				next_begin = i + 1;
+				break;
+			}
+		}
+
+		list = rte_zmalloc("avf_del_mac_buffer", len, 0);
+		if (!list) {
+			PMD_DRV_LOG(ERR, "fail to allocate memory");
+			return;
+		}
+
+		for (i = begin; i < next_begin; i++) {
+			addr = &adapter->eth_dev->data->mac_addrs[i];
+			if (is_zero_ether_addr(addr))
+				continue;
+			rte_memcpy(list->list[j].addr, addr->addr_bytes,
+				   sizeof(addr->addr_bytes));
+			PMD_DRV_LOG(DEBUG, "add/rm mac:%x:%x:%x:%x:%x:%x",
+				    addr->addr_bytes[0], addr->addr_bytes[1],
+				    addr->addr_bytes[2], addr->addr_bytes[3],
+				    addr->addr_bytes[4], addr->addr_bytes[5]);
+			j++;
+		}
+		list->vsi_id = vf->vsi_res->vsi_id;
+		list->num_elements = j;
+		args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+			   VIRTCHNL_OP_DEL_ETH_ADDR;
+		args.in_args = (uint8_t *)list;
+		args.in_args_size = len;
+		args.out_buffer = vf->aq_resp;
+		args.out_size = AVF_AQ_BUF_SZ;
+		err = avf_execute_vf_cmd(adapter, &args);
+		if (err)
+			PMD_DRV_LOG(ERR, "fail to execute command %s",
+				    add ? "OP_ADD_ETHER_ADDRESS" :
+				    "OP_DEL_ETHER_ADDRESS");
+		rte_free(list);
+		begin = next_begin;
+	} while (begin < AVF_NUM_MACADDR_MAX);
+}
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [RFC 4/9] net/avf: enable basic Rx Tx func
  2017-10-20  8:26 [dpdk-dev] [RFC 0/9] add new avf PMD Jingjing Wu
                   ` (2 preceding siblings ...)
  2017-10-20  8:26 ` [dpdk-dev] [RFC 3/9] net/avf: enable queue and device Jingjing Wu
@ 2017-10-20  8:26 ` Jingjing Wu
  2017-11-22  0:06   ` Ferruh Yigit
  2017-10-20  8:26 ` [dpdk-dev] [RFC 5/9] net/avf: enable link status update Jingjing Wu
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 151+ messages in thread
From: Jingjing Wu @ 2017-10-20  8:26 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 config/common_base           |   3 +
 drivers/net/avf/avf_ethdev.c |  17 +-
 drivers/net/avf/avf_log.h    |  14 +
 drivers/net/avf/avf_rxtx.c   | 740 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h   |  46 +++
 5 files changed, 819 insertions(+), 1 deletion(-)

diff --git a/config/common_base b/config/common_base
index e5f96ee..dac4cd3 100644
--- a/config/common_base
+++ b/config/common_base
@@ -214,6 +214,9 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 # Compile burst-oriented AVF PMD driver
 #
 CONFIG_RTE_LIBRTE_AVF_PMD=y
+CONFIG_RTE_LIBRTE_AVF_RX_DUMP=n
+CONFIG_RTE_LIBRTE_AVF_TX_DUMP=n
+CONFIG_RTE_LIBRTE_AVF_16BYTE_RX_DESC=n
 
 #
 # Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index f968314..b4d3153 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -229,9 +229,12 @@ avf_init_queues(struct rte_eth_dev *dev)
 		if (ret != AVF_SUCCESS)
 			break;
 	}
-	/* TODO: set rx/tx function to vector/scatter/single-segment
+	/* set rx/tx function to vector/scatter/single-segment
 	 * accoding to parameters
 	 */
+	avf_set_rx_function(dev);
+	avf_set_tx_function(dev);
+
 	return ret;
 }
 
@@ -592,7 +595,19 @@ avf_dev_init(struct rte_eth_dev *eth_dev)
 
 	/* assign ops func pointer */
 	eth_dev->dev_ops = &avf_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &avf_recv_pkts;
+	eth_dev->tx_pkt_burst = &avf_xmit_pkts;
+	eth_dev->tx_pkt_prepare = &avf_prep_pkts;
 
+	/* For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check if we need a different RX
+	 * and TX function.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		avf_set_rx_function(eth_dev);
+		avf_set_tx_function(eth_dev);
+		return 0;
+	}
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE |
 				    RTE_ETH_DEV_INTR_LSC;
diff --git a/drivers/net/avf/avf_log.h b/drivers/net/avf/avf_log.h
index 431f0f3..782a6e7 100644
--- a/drivers/net/avf/avf_log.h
+++ b/drivers/net/avf/avf_log.h
@@ -49,4 +49,18 @@ extern int avf_logtype_driver;
 	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
 #define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
 
+#ifdef RTE_LIBRTE_AVF_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVF_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
 #endif /* _AVF_LOGS_H_ */
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index 28f0b5e..95992fc 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -644,4 +644,744 @@ avf_stop_queues(struct rte_eth_dev *dev)
 		reset_rx_queue(rxq);
 		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
+}
+
+static inline void
+avf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union avf_rx_desc *rxdp)
+{
+	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		(1 << AVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
+		mb->ol_flags |= PKT_RX_VLAN_PKT | PKT_RX_VLAN_STRIPPED;
+		mb->vlan_tci =
+			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+	} else {
+		mb->vlan_tci = 0;
+	}
+}
+
+/* Translate the rx descriptor status and error fields to pkt flags */
+static inline uint64_t
+avf_rxd_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags;
+	uint64_t error_bits = (qword >> AVF_RXD_QW1_ERROR_SHIFT);
+
+#define AVF_RX_ERR_BITS 0x3f
+
+	/* Check if RSS_HASH */
+	flags = (((qword >> AVF_RX_DESC_STATUS_FLTSTAT_SHIFT) &
+					AVF_RX_DESC_FLTSTAT_RSS_HASH) ==
+			AVF_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+
+	if (likely((error_bits & AVF_RX_ERR_BITS) == 0)) {
+		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely(error_bits & (1 << AVF_RX_DESC_ERROR_IPE_SHIFT)))
+		flags |= PKT_RX_IP_CKSUM_BAD;
+	else
+		flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << AVF_RX_DESC_ERROR_L4E_SHIFT)))
+		flags |= PKT_RX_L4_CKSUM_BAD;
+	else
+		flags |= PKT_RX_L4_CKSUM_GOOD;
+
+	/* TODO: Oversize error bit is not processed here */
+
+	return flags;
+}
+
+/* Receive a burst of packets on the single-segment Rx path. */
+uint16_t
+avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	volatile union avf_rx_desc *rx_ring;
+	volatile union avf_rx_desc *rxdp;
+	struct avf_rx_queue *rxq;
+	union avf_rx_desc rxd;
+	struct rte_mbuf *rxe;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	uint16_t nb_rx;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint16_t rx_packet_len;
+	uint16_t rx_id, nb_hold;
+	uint64_t dma_addr;
+	uint64_t pkt_flags;
+
+	nb_rx = 0;
+	nb_hold = 0;
+	rxq = rx_queue;
+	rx_id = rxq->rx_tail;
+	rx_ring = rxq->rx_ring;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			    AVF_RXD_QW1_STATUS_SHIFT;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+			break;
+		AVF_DUMP_RX_DESC(rxq, rxdp, rx_id);
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			/* TODO: count rx_mbuf_alloc_failed */
+			break;
+		}
+
+		rxd = *rxdp;
+		nb_hold++;
+		rxe = rxq->sw_ring[rx_id];
+		rxq->sw_ring[rx_id] = nmb;
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		rxm = rxe;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_dma_addr_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+
+		rx_packet_len = ((qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				AVF_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
+
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
+		avf_rxd_to_vlan_tci(rxm, &rxd);
+		pkt_flags = avf_rxd_to_pkt_flags(qword1);
+		/* TODO: support rxm->packet_type here */
+
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			rxm->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		rxm->ol_flags |= pkt_flags;
+
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+
+	/* If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the receive tail register of queue.
+	 * Update that register with the value of the last processed RX
+	 * descriptor minus 1.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		rx_id = (uint16_t)((rx_id == 0) ?
+			(rxq->nb_rx_desc - 1) : (rx_id - 1));
+		AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	return nb_rx;
+}
+
+/* Receive a burst of packets on the scattered Rx path. */
+uint16_t
+avf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	union avf_rx_desc rxd;
+	struct rte_mbuf *rxe;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *nmb, *rxm;
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0, nb_hold = 0, rx_packet_len;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags;
+
+	volatile union avf_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union avf_rx_desc *rxdp;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			    AVF_RXD_QW1_STATUS_SHIFT;
+
+		/* Check the DD bit */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+			break;
+		AVF_DUMP_RX_DESC(rxq, rxdp, rx_id);
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			/* TODO: support count rx_mbuf_alloc_failed */
+			break;
+		}
+
+		rxd = *rxdp;
+		nb_hold++;
+		rxe = rxq->sw_ring[rx_id];
+		rxq->sw_ring[rx_id] = nmb;
+		rx_id++;
+		if (rx_id == rxq->nb_rx_desc)
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+
+		rxm = rxe;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_dma_addr_default(nmb));
+
+		/* Set data buffer address and data length of the mbuf */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				 AVF_RXD_QW1_LENGTH_PBUF_SHIFT;
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/* If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+						rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		/* If this is not the last buffer of the received packet,
+		 * update the pointer to the last mbuf of the current scattered
+		 * packet and continue to parse the RX ring.
+		 */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_EOF_SHIFT))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		/* This is the last buffer of the received packet. If the CRC
+		 * is not stripped by the hardware:
+		 *  - Subtract the CRC length from the total packet length.
+		 *  - If the last buffer only contains the whole CRC or a part
+		 *  of it, free the mbuf associated to the last buffer. If part
+		 *  of the CRC is also contained in the previous mbuf, subtract
+		 *  the length of that CRC part from the data length of the
+		 *  previous mbuf.
+		 */
+		rxm->next = NULL;
+		if (unlikely(rxq->crc_len > 0)) {
+			first_seg->pkt_len -= ETHER_CRC_LEN;
+			if (rx_packet_len <= ETHER_CRC_LEN) {
+				rte_pktmbuf_free_seg(rxm);
+				first_seg->nb_segs--;
+				last_seg->data_len =
+					(uint16_t)(last_seg->data_len -
+					(ETHER_CRC_LEN - rx_packet_len));
+				last_seg->next = NULL;
+			} else
+				rxm->data_len = (uint16_t)(rx_packet_len -
+								ETHER_CRC_LEN);
+		}
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+		avf_rxd_to_vlan_tci(first_seg, &rxd);
+		pkt_flags = avf_rxd_to_pkt_flags(qword1);
+		/* TODO: support first_seg->packet_type here */
+
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			first_seg->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		first_seg->ol_flags |= pkt_flags;
+
+		/* Prefetch data of first segment, if configured to do so. */
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+
+	/* Record index of the next RX descriptor to probe. */
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	/* If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register. Update the RDT with the value of the last processed RX
+	 * descriptor minus 1, to guarantee that the RDT register is never
+	 * equal to the RDH register, which creates a "full" ring situation
+	 * from the hardware point of view.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		rx_id = (uint16_t)(rx_id == 0 ?
+			(rxq->nb_rx_desc - 1) : (rx_id - 1));
+		AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	return nb_rx;
+}
+
+static inline int
+avf_xmit_cleanup(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *sw_ring = txq->sw_ring;
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+
+	volatile struct avf_tx_desc *txd = txq->tx_ring;
+
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
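+	/* desc_to_clean_to now points at the last descriptor of the packet
+	 * that ends this batch; the batch can only be recycled once hardware
+	 * has written back its DD (descriptor done) status.
+	 */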
+	if ((txd[desc_to_clean_to].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK)) !=
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE)) {
+		PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
+			"(port=%d queue=%d)", desc_to_clean_to,
+				txq->port_id, txq->queue_id);
+		return -1;
+	}
+
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+							desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+					last_desc_cleaned);
+
+	txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
+
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+
+	return 0;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+avf_calc_context_desc(uint64_t flags)
+{
+	static uint64_t mask = PKT_TX_TCP_SEG;
+
+	return (flags & mask) ? 1 : 0;
+}
+
+static inline void
+avf_txd_enable_checksum(uint64_t ol_flags,
+			uint32_t *td_cmd,
+			uint32_t *td_offset,
+			union avf_tx_offload tx_offload)
+{
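+	/* MACLEN is expressed in 2-byte words and the L3/L4 header lengths in
+	 * 4-byte words, hence the shifts applied below.
+	 */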
+	/* Set MACLEN */
+	*td_offset |= (tx_offload.l2_len >> 1) <<
+		      AVF_TX_DESC_LENGTH_MACLEN_SHIFT;
+
+	/* Enable L3 checksum offloads */
+	if (ol_flags & PKT_TX_IP_CKSUM) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV4_CSUM;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	} else if (ol_flags & PKT_TX_IPV4) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV4;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	} else if (ol_flags & PKT_TX_IPV6) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV6;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	}
+
+	if (ol_flags & PKT_TX_TCP_SEG) {
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (tx_offload.l4_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		return;
+	}
+
+	/* Enable L4 checksum offloads */
+	switch (ol_flags & PKT_TX_L4_MASK) {
+	case PKT_TX_TCP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (sizeof(struct tcp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	case PKT_TX_SCTP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_SCTP;
+		*td_offset |= (sizeof(struct sctp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	case PKT_TX_UDP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_UDP;
+		*td_offset |= (sizeof(struct udp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	default:
+		break;
+	}
+}
+
+/* set TSO context descriptor
+ * support IP -> L4 and IP -> IP -> L4
+ */
+static inline uint64_t
+avf_set_tso_ctx(struct rte_mbuf *mbuf, union avf_tx_offload tx_offload)
+{
+	uint64_t ctx_desc = 0;
+	uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+	if (!tx_offload.l4_len) {
+		PMD_DRV_LOG(DEBUG, "L4 length set to 0");
+		return ctx_desc;
+	}
+
+	/* in case of non tunneling packet, the outer_l2_len and
+	 * outer_l3_len must be 0.
+	 */
+	hdr_len = tx_offload.outer_l2_len +
+		  tx_offload.outer_l3_len +
+		  tx_offload.l2_len +
+		  tx_offload.l3_len +
+		  tx_offload.l4_len;
+
+	cd_cmd = AVF_TX_CTX_DESC_TSO;
+	cd_tso_len = mbuf->pkt_len - hdr_len;
+	ctx_desc |= ((uint64_t)cd_cmd << AVF_TXD_CTX_QW1_CMD_SHIFT) |
+		     ((uint64_t)cd_tso_len << AVF_TXD_CTX_QW1_TSO_LEN_SHIFT) |
+		     ((uint64_t)mbuf->tso_segsz << AVF_TXD_CTX_QW1_MSS_SHIFT);
+
+	return ctx_desc;
+}
+
+/* Build the TX data descriptor cmd_type_offset_bsz qword */
+static inline uint64_t
+avf_build_ctob(uint32_t td_cmd, uint32_t td_offset, unsigned int size,
+	       uint32_t td_tag)
+{
+	return rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd  << AVF_TXD_QW1_CMD_SHIFT) |
+				((uint64_t)td_offset <<
+				 AVF_TXD_QW1_OFFSET_SHIFT) |
+				((uint64_t)size  <<
+				 AVF_TXD_QW1_TX_BUF_SZ_SHIFT) |
+				((uint64_t)td_tag  <<
+				 AVF_TXD_QW1_L2TAG1_SHIFT));
+}
+
+/* TX function */
+uint16_t
+avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	volatile struct avf_tx_desc *txd;
+	volatile struct avf_tx_desc *txr;
+	struct avf_tx_queue *txq;
+	struct avf_tx_entry *sw_ring;
+	struct avf_tx_entry *txe, *txn;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	uint16_t tx_id;
+	uint16_t nb_tx;
+	uint32_t td_cmd;
+	uint32_t td_offset;
+	uint32_t td_tag;
+	uint64_t ol_flags;
+	uint16_t nb_used;
+	uint16_t nb_ctx;
+	uint16_t tx_last;
+	uint16_t slen;
+	uint64_t buf_dma_addr;
+	union avf_tx_offload tx_offload = {0};
+
+	txq = tx_queue;
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+	tx_id = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Check if the descriptor ring needs to be cleaned. */
+	if (txq->nb_free < txq->free_thresh)
+		avf_xmit_cleanup(txq);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		td_cmd = 0;
+		td_tag = 0;
+		td_offset = 0;
+
+		tx_pkt = *tx_pkts++;
+		RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
+
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
+		tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = avf_calc_context_desc(ol_flags);
+
+		/* The number of descriptors that must be allocated for
+		 * a packet equals to the number of the segments of that
+		 * packet plus 1 context descriptor if needed.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
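+		/* Not enough free descriptors: reclaim completed ones first;
+		 * a packet needing more than rs_thresh descriptors may take
+		 * several cleanup rounds before enough are free.
+		 */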
+		if (nb_used > txq->nb_free) {
+			if (avf_xmit_cleanup(txq)) {
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+			if (unlikely(nb_used > txq->rs_thresh)) {
+				while (nb_used > txq->nb_free) {
+					if (avf_xmit_cleanup(txq)) {
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		/* Descriptor based VLAN insertion */
+		if (ol_flags & PKT_TX_VLAN_PKT) {
+			td_cmd |= AVF_TX_DESC_CMD_IL2TAG1;
+			td_tag = tx_pkt->vlan_tci;
+		}
+
+		/* Always enable CRC offload insertion */
+		td_cmd |= AVF_TX_DESC_CMD_ICRC;
+
+		/* Enable checksum offloading */
+		if (ol_flags & AVF_TX_CKSUM_OFFLOAD_MASK)
+			avf_txd_enable_checksum(ol_flags, &td_cmd,
+						&td_offset, tx_offload);
+
+		if (nb_ctx) {
+			/* Setup TX context descriptor if required */
+			volatile struct avf_tx_context_desc *ctx_txd =
+				(volatile struct avf_tx_context_desc *)
+					&txr[tx_id];
+			uint16_t cd_l2tag2 = 0;
+			uint64_t cd_type_cmd_tso_mss =
+				AVF_TX_DESC_DTYPE_CONTEXT;
+
+			txn = &sw_ring[txe->next_id];
+			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+			if (txe->mbuf) {
+				rte_pktmbuf_free_seg(txe->mbuf);
+				txe->mbuf = NULL;
+			}
+
+			/* TSO enabled */
+			if (ol_flags & PKT_TX_TCP_SEG)
+				cd_type_cmd_tso_mss |=
+					avf_set_tso_ctx(tx_pkt, tx_offload);
+
+			ctx_txd->type_cmd_tso_mss =
+				rte_cpu_to_le_64(cd_type_cmd_tso_mss);
+
+			PMD_TX_LOG(DEBUG, "mbuf: %p, TCD[%u]:\n"
+				"type_cmd_tso_mss: %#"PRIx64";\n",
+				tx_pkt, tx_id,
+				ctx_txd->type_cmd_tso_mss);
+
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+		}
+
+		m_seg = tx_pkt;
+		do {
+			txd = &txr[tx_id];
+			txn = &sw_ring[txe->next_id];
+
+			if (txe->mbuf)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/* Setup TX Descriptor */
+			slen = m_seg->data_len;
+			buf_dma_addr = rte_mbuf_data_dma_addr(m_seg);
+
+			PMD_TX_LOG(DEBUG, "mbuf: %p, TDD[%u]:\n"
+				"buf_dma_addr: %#"PRIx64";\n"
+				"td_cmd: %#x;\n"
+				"td_offset: %#x;\n"
+				"td_len: %u;\n"
+				"td_tag: %#x;\n",
+				tx_pkt, tx_id, buf_dma_addr,
+				td_cmd, td_offset, slen, td_tag);
+
+			txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
+			txd->cmd_type_offset_bsz = avf_build_ctob(td_cmd,
+								  td_offset,
+								  slen,
+								  td_tag);
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+			AVF_DUMP_TX_DESC(txq, txd, tx_id);
+		} while (m_seg);
+
+		/* The last packet data descriptor needs End Of Packet (EOP) */
+		td_cmd |= AVF_TX_DESC_CMD_EOP;
+		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+
+		if (txq->nb_used >= txq->rs_thresh) {
+			PMD_TX_FREE_LOG(DEBUG,
+					"Setting RS bit on TXD id="
+					"%4u (port=%d queue=%d)",
+					tx_last, txq->port_id, txq->queue_id);
+
+			td_cmd |= AVF_TX_DESC_CMD_RS;
+
+			/* Update txq RS bit counters */
+			txq->nb_used = 0;
+		}
+
+		txd->cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)td_cmd) <<
+					 AVF_TXD_QW1_CMD_SHIFT);
+		AVF_DUMP_TX_DESC(txq, txd, tx_id);
+	}
+
+end_of_tx:
+	rte_wmb();
+
+	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+		   (unsigned)txq->port_id, (unsigned)txq->queue_id,
+		   (unsigned)tx_id, (unsigned)nb_tx);
+
+	AVF_PCI_REG_WRITE_RELAXED(txq->qtx_tail, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
+/* TX prep function */
+uint16_t
+avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	      uint16_t nb_pkts)
+{
+	int i, ret;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		/* m->nb_segs is uint8_t, so nb_segs is always less than
+		 * AVF_TX_MAX_SEG.
+		 * We check only a condition for nb_segs > AVF_TX_MAX_MTU_SEG.
+		 */
+		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+			if (m->nb_segs > AVF_TX_MAX_MTU_SEG) {
+				rte_errno = -EINVAL;
+				return i;
+			}
+		} else if ((m->tso_segsz < AVF_MIN_TSO_MSS) ||
+				(m->tso_segsz > AVF_MAX_TSO_MSS)) {
+			/* MSS outside the range are considered malicious */
+			rte_errno = -EINVAL;
+			return i;
+		}
+
+		if (ol_flags & AVF_TX_OFFLOAD_NOTSUP_MASK) {
+			rte_errno = -ENOTSUP;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			return i;
+		}
+#endif
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			return i;
+		}
+	}
+
+	return i;
+}
+
+/* Choose Rx function */
+void
+avf_set_rx_function(struct rte_eth_dev *dev)
+{
+	if (dev->data->scattered_rx)
+		dev->rx_pkt_burst = avf_recv_scattered_pkts;
+	else
+		dev->rx_pkt_burst = avf_recv_pkts;
+}
+
+/* Choose Tx function */
+void
+avf_set_tx_function(struct rte_eth_dev *dev)
+{
+	dev->tx_pkt_burst = avf_xmit_pkts;
+	dev->tx_pkt_prepare = avf_prep_pkts;
 }
\ No newline at end of file
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index 9bdceb7..de98ce3 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -48,6 +48,28 @@
 #define DEFAULT_TX_RS_THRESH     32
 #define DEFAULT_TX_FREE_THRESH   32
 
+#define AVF_MIN_TSO_MSS          256
+#define AVF_MAX_TSO_MSS          9668
+#define AVF_TSO_MAX_SEG          UINT8_MAX
+#define AVF_TX_MAX_MTU_SEG       8
+
+#define AVF_TD_CMD (AVF_TX_DESC_CMD_ICRC |\
+		    AVF_TX_DESC_CMD_EOP)
+
+#define AVF_TX_CKSUM_OFFLOAD_MASK (		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG)
+
+#define AVF_TX_OFFLOAD_MASK (  \
+		PKT_TX_VLAN_PKT |		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG)
+
+#define AVF_TX_OFFLOAD_NOTSUP_MASK \
+		(PKT_TX_OFFLOAD_MASK ^ AVF_TX_OFFLOAD_MASK)
+
 /* HW desc structure, both 16-byte and 32-byte types are supported */
 #ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
 #define avf_rx_desc avf_16byte_rx_desc
@@ -114,6 +136,19 @@ struct avf_tx_queue {
 	bool tx_deferred_start;        /* don't start this queue in dev start */
 };
 
+/** Offload features */
+union avf_tx_offload {
+	uint64_t data;
+	struct {
+		uint64_t l2_len:7; /**< L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /**< L3 (IP) Header Length. */
+		uint64_t l4_len:8; /**< L4 Header Length. */
+		uint64_t tso_segsz:16; /**< TCP TSO segment size */
+		uint64_t outer_l2_len:8; /**< outer L2 Header Length */
+		uint64_t outer_l3_len:16; /**< outer L3 Header Length */
+	};
+};
+
 int avf_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			   uint16_t queue_idx,
 			   uint16_t nb_desc,
@@ -134,6 +169,17 @@ int avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void avf_dev_tx_queue_release(void *txq);
 void avf_stop_queues(struct rte_eth_dev *dev);
+uint16_t avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts);
+uint16_t avf_recv_scattered_pkts(void *rx_queue,
+				 struct rte_mbuf **rx_pkts,
+				 uint16_t nb_pkts);
+uint16_t avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+uint16_t avf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void avf_set_rx_function(struct rte_eth_dev *dev);
+void avf_set_tx_function(struct rte_eth_dev *dev);
 
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [RFC 5/9] net/avf: enable link status update
  2017-10-20  8:26 [dpdk-dev] [RFC 0/9] add new avf PMD Jingjing Wu
                   ` (3 preceding siblings ...)
  2017-10-20  8:26 ` [dpdk-dev] [RFC 4/9] net/avf: enable basic Rx Tx func Jingjing Wu
@ 2017-10-20  8:26 ` Jingjing Wu
  2017-10-20  8:26 ` [dpdk-dev] [RFC 6/9] net/avf: enable ops for MAC VLAN offload Jingjing Wu
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 151+ messages in thread
From: Jingjing Wu @ 2017-10-20  8:26 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/avf/avf_ethdev.c | 50 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c  | 35 ++++++++++++++++++++++++++++++-
 2 files changed, 84 insertions(+), 1 deletion(-)

diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index b4d3153..0b9f39a 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -68,6 +68,8 @@ static void avf_dev_stop(struct rte_eth_dev *dev);
 static void avf_dev_close(struct rte_eth_dev *dev);
 static void avf_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
+static int avf_dev_link_update(struct rte_eth_dev *dev,
+			       __rte_unused int wait_to_complete);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -82,6 +84,7 @@ static const struct eth_dev_ops avf_eth_dev_ops = {
 	.dev_stop                   = avf_dev_stop,
 	.dev_close                  = avf_dev_close,
 	.dev_infos_get              = avf_dev_info_get,
+	.link_update                = avf_dev_link_update,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -429,6 +432,53 @@ avf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 }
 
 static int
+avf_dev_link_update(struct rte_eth_dev *dev,
+		    __rte_unused int wait_to_complete)
+{
+	struct rte_eth_link new_link;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	/* Only read the link status info stored in the VF; it is updated
+	 * when a LINK_CHANGE event is received from the PF via virtchnl.
+	 */
+	switch (vf->link_speed) {
+	case VIRTCHNL_LINK_SPEED_100MB:
+		new_link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+	case VIRTCHNL_LINK_SPEED_1GB:
+		new_link.link_speed = ETH_SPEED_NUM_1G;
+		break;
+	case VIRTCHNL_LINK_SPEED_10GB:
+		new_link.link_speed = ETH_SPEED_NUM_10G;
+		break;
+	case VIRTCHNL_LINK_SPEED_20GB:
+		new_link.link_speed = ETH_SPEED_NUM_20G;
+		break;
+	case VIRTCHNL_LINK_SPEED_25GB:
+		new_link.link_speed = ETH_SPEED_NUM_25G;
+		break;
+	case VIRTCHNL_LINK_SPEED_40GB:
+		new_link.link_speed = ETH_SPEED_NUM_40G;
+		break;
+	default:
+		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		break;
+	}
+
+	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vf->link_up ? ETH_LINK_UP :
+					     ETH_LINK_DOWN;
+	new_link.link_autoneg = !!(dev->data->dev_conf.link_speeds & 
+				ETH_LINK_SPEED_FIXED);
+
+	rte_atomic64_cmpset((uint64_t *)&(dev->data->dev_link),
+			    *(uint64_t *)&(dev->data->dev_link),
+			    *(uint64_t *)&new_link);
+
+	return 0;
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index 90c77fb..1df33ac 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -160,6 +160,38 @@ avf_execute_vf_cmd(struct avf_adapter *adapter, struct avf_cmd_info *args)
 	return err;
 }
 
+static void
+avf_handle_pf_event_msg(struct rte_eth_dev *dev, uint8_t *msg,
+			uint16_t msglen)
+{
+	struct virtchnl_pf_event *pf_msg =
+			(struct virtchnl_pf_event *)msg;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	if (msglen < sizeof(struct virtchnl_pf_event)) {
+		PMD_DRV_LOG(DEBUG, "Error event");
+		return;
+	}
+	switch (pf_msg->event) {
+	case VIRTCHNL_EVENT_RESET_IMPENDING:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_RESET_IMPENDING event");
+		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET,
+					      NULL, NULL);
+		break;
+	case VIRTCHNL_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_LINK_CHANGE event");
+		vf->link_up = pf_msg->event_data.link_event.link_status;
+		vf->link_speed = pf_msg->event_data.link_event.link_speed;
+		break;
+	case VIRTCHNL_EVENT_PF_DRIVER_CLOSE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_PF_DRIVER_CLOSE event");
+		break;
+	default:
+		PMD_DRV_LOG(ERR, " unknown event received %u", pf_msg->event);
+		break;
+	}
+}
+
 void
 avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
 {
@@ -199,7 +231,8 @@ avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
 		switch (aq_opc) {
 		case avf_aqc_opc_send_msg_to_vf:
 			if (msg_opc == VIRTCHNL_OP_EVENT) {
-				/* TODO */
+				avf_handle_pf_event_msg(dev, info.msg_buf,
+							info.msg_len);
 			} else {
 				/* read message and it's expected one */
 				if (msg_opc == vf->pend_cmd) {
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [RFC 6/9] net/avf: enable ops for MAC VLAN offload
  2017-10-20  8:26 [dpdk-dev] [RFC 0/9] add new avf PMD Jingjing Wu
                   ` (4 preceding siblings ...)
  2017-10-20  8:26 ` [dpdk-dev] [RFC 5/9] net/avf: enable link status update Jingjing Wu
@ 2017-10-20  8:26 ` Jingjing Wu
  2017-11-22  0:07   ` Ferruh Yigit
  2017-10-20  8:26 ` [dpdk-dev] [RFC 7/9] net/avf: enable ops for rss setting Jingjing Wu
                   ` (4 subsequent siblings)
  10 siblings, 1 reply; 151+ messages in thread
From: Jingjing Wu @ 2017-10-20  8:26 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

 - promiscuous_enable
 - promiscuous_disable
 - allmulticast_enable
 - allmulticast_disable
 - mac_addr_add
 - mac_addr_remove
 - mac_addr_set

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/avf/avf.h        |   5 +
 drivers/net/avf/avf_ethdev.c | 211 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c  |  89 ++++++++++++++++++
 3 files changed, 305 insertions(+)

diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 8255f55..24ca120 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -236,4 +236,9 @@ int avf_configure_rss_key(struct avf_adapter *adapter);
 int avf_configure_queues(struct avf_adapter *adapter);
 int avf_config_irq_map(struct avf_adapter *adapter);
 void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
+int avf_config_promisc(struct avf_adapter *adapter, bool enable_unicast,
+		       bool enable_multicast);
+int avf_add_del_eth_addr(struct avf_adapter *adapter,
+			 struct ether_addr *addr, bool add);
+int avf_add_del_vlan(struct avf_adapter *adapter, uint16_t vlanid, bool add);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 0b9f39a..a9cea86 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -70,6 +70,21 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static int avf_dev_link_update(struct rte_eth_dev *dev,
 			       __rte_unused int wait_to_complete);
+static void avf_dev_promiscuous_enable(struct rte_eth_dev *dev);
+static void avf_dev_promiscuous_disable(struct rte_eth_dev *dev);
+static void avf_dev_allmulticast_enable(struct rte_eth_dev *dev);
+static void avf_dev_allmulticast_disable(struct rte_eth_dev *dev);
+static int avf_dev_add_mac_addr(struct rte_eth_dev *dev,
+			    struct ether_addr *addr,
+			    uint32_t index,
+			    uint32_t pool);
+static void avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index);
+static int avf_dev_vlan_filter_set(struct rte_eth_dev *dev,
+				   uint16_t vlan_id, int on);
+static void avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+					 struct ether_addr *mac_addr);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -85,6 +100,14 @@ static const struct eth_dev_ops avf_eth_dev_ops = {
 	.dev_close                  = avf_dev_close,
 	.dev_infos_get              = avf_dev_info_get,
 	.link_update                = avf_dev_link_update,
+	.promiscuous_enable         = avf_dev_promiscuous_enable,
+	.promiscuous_disable        = avf_dev_promiscuous_disable,
+	.allmulticast_enable        = avf_dev_allmulticast_enable,
+	.allmulticast_disable       = avf_dev_allmulticast_disable,
+	.mac_addr_add               = avf_dev_add_mac_addr,
+	.mac_addr_remove            = avf_dev_del_mac_addr,
+	.vlan_filter_set            = avf_dev_vlan_filter_set,
+	.vlan_offload_set           = avf_dev_vlan_offload_set,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -93,6 +116,7 @@ static const struct eth_dev_ops avf_eth_dev_ops = {
 	.rx_queue_release           = avf_dev_rx_queue_release,
 	.tx_queue_setup             = avf_dev_tx_queue_setup,
 	.tx_queue_release           = avf_dev_tx_queue_release,
+	.mac_addr_set               = avf_dev_set_default_mac_addr,
 };
 
 static int
@@ -478,6 +502,193 @@ avf_dev_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void
+avf_dev_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (vf->promisc_unicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, TRUE, vf->promisc_multicast_enabled);
+	if (!ret)
+		vf->promisc_unicast_enabled = TRUE;
+}
+
+static void
+avf_dev_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (!vf->promisc_unicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, FALSE, vf->promisc_multicast_enabled);
+	if (!ret)
+		vf->promisc_unicast_enabled = FALSE;
+}
+
+static void
+avf_dev_allmulticast_enable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (vf->promisc_multicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, vf->promisc_unicast_enabled, TRUE);
+	if (!ret)
+		vf->promisc_multicast_enabled = TRUE;
+}
+
+static void
+avf_dev_allmulticast_disable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (!vf->promisc_multicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, vf->promisc_unicast_enabled, FALSE);
+	if (!ret)
+		vf->promisc_multicast_enabled = FALSE;
+}
+
+static int
+avf_dev_add_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr,
+		     __rte_unused uint32_t index,
+		     __rte_unused uint32_t pool)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int err;
+
+	if (is_zero_ether_addr(addr)) {
+		PMD_DRV_LOG(ERR, "Invalid Ethernet Address");
+		return -EINVAL;
+	}
+
+	err = avf_add_del_eth_addr(adapter, addr, TRUE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to add MAC address");
+		return -EIO;
+	}
+
+	vf->mac_num++;
+
+	return 0;
+}
+
+static void
+avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct ether_addr *addr;
+	int err;
+
+	addr = &dev->data->mac_addrs[index];
+
+	err = avf_add_del_eth_addr(adapter, addr, FALSE);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to delete MAC address");
+
+	vf->mac_num--;
+}
+
+static int
+avf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+		return -ENOTSUP;
+
+	ret = avf_add_del_vlan(adapter, vlan_id, on);
+	return ret;
+}
+
+static void
+avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+		return;
+
+	/* Vlan stripping setting */
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		/* Enable or disable VLAN stripping */
+		if (dev_conf->rxmode.hw_vlan_strip)
+			avf_enable_vlan_strip(adapter);
+		else
+			avf_disable_vlan_strip(adapter);
+	}
+}
+
+static void
+avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+			     struct ether_addr *mac_addr)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct ether_addr *perm_addr, *old_addr;
+	int ret;
+
+	old_addr = (struct ether_addr *)hw->mac.addr;
+	perm_addr = (struct ether_addr *)hw->mac.perm_addr;
+
+	if (is_same_ether_addr(mac_addr, old_addr))
+		return;
+
+	/* If the MAC address is configured by host, skip the setting */
+	if (is_valid_assigned_ether_addr(perm_addr))
+		return;
+
+	ret = avf_add_del_eth_addr(adapter, old_addr, FALSE);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to delete old MAC: %02X:%02X:%02X:%02X:%02X:%02X",
+			    old_addr->addr_bytes[0],
+			    old_addr->addr_bytes[1],
+			    old_addr->addr_bytes[2],
+			    old_addr->addr_bytes[3],
+			    old_addr->addr_bytes[4],
+			    old_addr->addr_bytes[5]);
+
+	ret = avf_add_del_eth_addr(adapter, mac_addr, TRUE);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to add new MAC: %02X:%02X:%02X:%02X:%02X:%02X",
+			    mac_addr->addr_bytes[0],
+			    mac_addr->addr_bytes[1],
+			    mac_addr->addr_bytes[2],
+			    mac_addr->addr_bytes[3],
+			    mac_addr->addr_bytes[4],
+			    mac_addr->addr_bytes[5]);
+
+	ether_addr_copy(mac_addr, (struct ether_addr *)hw->mac.addr);
+}
+
 static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index 1df33ac..c31803d 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -726,3 +726,92 @@ avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add)
 		begin = next_begin;
 	} while (begin < AVF_NUM_MACADDR_MAX);
 }
+
+int
+avf_config_promisc(struct avf_adapter *adapter,
+		      bool enable_unicast,
+		      bool enable_multicast)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_promisc_info promisc;
+	struct avf_cmd_info args;
+	int err;
+
+	promisc.flags = 0;
+	promisc.vsi_id = vf->vsi_res->vsi_id;
+
+	if (enable_unicast)
+		promisc.flags |= FLAG_VF_UNICAST_PROMISC;
+
+	if (enable_multicast)
+		promisc.flags |= FLAG_VF_MULTICAST_PROMISC;
+
+	args.ops = VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE;
+	args.in_args = (uint8_t *)&promisc;
+	args.in_args_size = sizeof(promisc);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command CONFIG_PROMISCUOUS_MODE");
+	return err;
+}
+
+int
+avf_add_del_eth_addr(struct avf_adapter *adapter, struct ether_addr *addr,
+		     bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t cmd_buffer[sizeof(struct virtchnl_ether_addr_list) + \
+			   sizeof(struct virtchnl_ether_addr)];
+	struct avf_cmd_info args;
+	int err;
+
+	list = (struct virtchnl_ether_addr_list *)cmd_buffer;
+	list->vsi_id = vf->vsi_res->vsi_id;
+	list->num_elements = 1;
+	rte_memcpy(list->list[0].addr, addr->addr_bytes,
+		   sizeof(addr->addr_bytes));
+
+	args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR : VIRTCHNL_OP_DEL_ETH_ADDR;
+	args.in_args = cmd_buffer;
+	args.in_args_size = sizeof(cmd_buffer);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_ETH_ADDR" :  "OP_DEL_ETH_ADDR");
+	return err;
+}
+
+int
+avf_add_del_vlan(struct avf_adapter *adapter, uint16_t vlanid, bool add)
+{
+	struct virtchnl_vlan_filter_list *vlan_list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t cmd_buffer[sizeof(struct virtchnl_vlan_filter_list) +
+							sizeof(uint16_t)];
+	struct avf_cmd_info args;
+	int err;
+
+	vlan_list = (struct virtchnl_vlan_filter_list *)cmd_buffer;
+	vlan_list->vsi_id = vf->vsi_res->vsi_id;
+	vlan_list->num_elements = 1;
+	vlan_list->vlan_id[0] = vlanid;
+
+	args.ops = add ? VIRTCHNL_OP_ADD_VLAN : VIRTCHNL_OP_DEL_VLAN;
+	args.in_args = cmd_buffer;
+	args.in_args_size = sizeof(cmd_buffer);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_VLAN" :  "OP_DEL_VLAN");
+
+	return err;
+}
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [RFC 7/9] net/avf: enable ops for rss setting
  2017-10-20  8:26 [dpdk-dev] [RFC 0/9] add new avf PMD Jingjing Wu
                   ` (5 preceding siblings ...)
  2017-10-20  8:26 ` [dpdk-dev] [RFC 6/9] net/avf: enable ops for MAC VLAN offload Jingjing Wu
@ 2017-10-20  8:26 ` Jingjing Wu
  2017-11-22  0:07   ` Ferruh Yigit
  2017-10-20  8:26 ` [dpdk-dev] [RFC 8/9] net/avf: enable ops to check queue info and status Jingjing Wu
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 151+ messages in thread
From: Jingjing Wu @ 2017-10-20  8:26 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/avf/avf_ethdev.c | 173 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 173 insertions(+)

diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index a9cea86..d3946d6 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -82,6 +82,17 @@ static void avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index);
 static int avf_dev_vlan_filter_set(struct rte_eth_dev *dev,
 				   uint16_t vlan_id, int on);
 static void avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int avf_dev_rss_reta_update(struct rte_eth_dev *dev,
+				   struct rte_eth_rss_reta_entry64 *reta_conf,
+				   uint16_t reta_size);
+static int avf_dev_rss_reta_query(struct rte_eth_dev *dev,
+				  struct rte_eth_rss_reta_entry64 *reta_conf,
+				  uint16_t reta_size);
+static int avf_dev_rss_hash_update(struct rte_eth_dev *dev,
+				   struct rte_eth_rss_conf *rss_conf);
+static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+				     struct rte_eth_rss_conf *rss_conf);
+static int avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
@@ -117,6 +128,11 @@ static const struct eth_dev_ops avf_eth_dev_ops = {
 	.tx_queue_setup             = avf_dev_tx_queue_setup,
 	.tx_queue_release           = avf_dev_tx_queue_release,
 	.mac_addr_set               = avf_dev_set_default_mac_addr,
+	.reta_update                = avf_dev_rss_reta_update,
+	.reta_query                 = avf_dev_rss_reta_query,
+	.rss_hash_update            = avf_dev_rss_hash_update,
+	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
+	.mtu_set                    = avf_dev_mtu_set,
 };
 
 static int
@@ -646,6 +662,163 @@ avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	}
 }
 
+static int
+avf_dev_rss_reta_update(struct rte_eth_dev *dev,
+			struct rte_eth_rss_reta_entry64 *reta_conf,
+			uint16_t reta_size)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t *lut;
+	uint16_t i, idx, shift;
+	int ret;
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	if (reta_size != vf->vf_res->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number of hardware can "
+			"support (%d)", reta_size, vf->vf_res->rss_lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc("rss_lut", reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	/* store the old lut table temporarily */
+	rte_memcpy(lut, vf->rss_lut, reta_size);
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	rte_memcpy(vf->rss_lut, lut, reta_size);
+	/* Send virtchnl ops to configure RSS */
+	ret = avf_configure_rss_lut(adapter);
+	if (ret) /* revert back */
+		rte_memcpy(vf->rss_lut, lut, reta_size);
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+avf_dev_rss_reta_query(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_reta_entry64 *reta_conf,
+		       uint16_t reta_size)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint16_t i, idx, shift;
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	if (reta_size != vf->vf_res->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number of hardware can "
+			"support (%d)", reta_size, vf->vf_res->rss_lut_size);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vf->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+avf_dev_rss_hash_update(struct rte_eth_dev *dev,
+			struct rte_eth_rss_conf *rss_conf)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	/* TODO: HENA setting, is it enabled by default? */
+
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		return 0;
+	} else if (rss_conf->rss_key_len != vf->vf_res->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+			"(%d) doesn't match the size of hardware can "
+			"support (%d)", rss_conf->rss_key_len,
+			vf->vf_res->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vf->rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+
+	return avf_configure_rss_key(adapter);
+}
+
+static int
+avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+			  struct rte_eth_rss_conf *rss_conf)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	/* TODO: HENA? What is the flow type mapping for AVF?
+	 * Just set it to default value now.
+	 */
+	rss_conf->rss_hf = AVF_RSS_OFFLOAD_ALL;
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	rss_conf->rss_key_len = vf->vf_res->rss_key_size;
+	rte_memcpy(rss_conf->rss_key, vf->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
+static int
+avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	uint32_t frame_size = mtu + AVF_ETH_OVERHEAD;
+	int ret = 0;
+
+	if ((mtu < ETHER_MIN_MTU) || (frame_size > AVF_FRAME_SIZE_MAX))
+		return -EINVAL;
+
+	/* MTU setting is forbidden if the port has been started */
+	if (dev->data->dev_started) {
+		PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+		return -EBUSY;
+	}
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev->data->dev_conf.rxmode.jumbo_frame = 1;
+	else
+		dev->data->dev_conf.rxmode.jumbo_frame = 0;
+
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+	return ret;
+}
+
 static void
 avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *mac_addr)
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [RFC 8/9] net/avf: enable ops to check queue info and status
  2017-10-20  8:26 [dpdk-dev] [RFC 0/9] add new avf PMD Jingjing Wu
                   ` (6 preceding siblings ...)
  2017-10-20  8:26 ` [dpdk-dev] [RFC 7/9] net/avf: enable ops for rss setting Jingjing Wu
@ 2017-10-20  8:26 ` Jingjing Wu
  2017-11-22  0:09   ` Ferruh Yigit
  2017-10-20  8:26 ` [dpdk-dev] [RFC 9/9] net/i40e: support AVF basic interface Jingjing Wu
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 151+ messages in thread
From: Jingjing Wu @ 2017-10-20  8:26 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

 - rxq_info_get
 - txq_info_get
 - rx_queue_count
 - rx_descriptor_done
 - rx_descriptor_status
 - tx_descriptor_status

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/intel_vf.rst |  16 ++++-
 drivers/net/avf/avf_ethdev.c |   8 +++
 drivers/net/avf/avf_rxtx.c   | 145 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h   |   8 +++
 4 files changed, 175 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 1e83bf6..3adb684 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -28,8 +28,8 @@
     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-I40E/IXGBE/IGB Virtual Function Driver
-======================================
+Intel Virtual Function Driver
+=============================
 
 Supported Intel® Ethernet Controllers (see the *DPDK Release Notes* for details)
 support the following modes of operation in a virtualized environment:
@@ -93,6 +93,18 @@ and the Physical Function operates on the global resources on behalf of the Virt
 For this out-of-band communication, an SR-IOV enabled NIC provides a memory buffer for each Virtual Function,
 which is called a "Mailbox".
 
+Intel® Ethernet Adaptive Virtual Function
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Adaptive Virtual Function (AVF) is an SR-IOV Virtual Function with the same device id (8086:1889) on different Intel Ethernet Controllers.
+The AVF driver is a VF driver that supports all future Intel devices without requiring a VM update. Because it is an adaptive VF driver,
+every new drop of the VF driver can enable more and more advanced features in the VM, provided the underlying HW device supports those
+advanced features, in a device-agnostic way and without ever compromising the base functionality. AVF provides a generic hardware
+interface, and the interface between an AVF driver and a compliant PF driver is specified.
+
+Intel products starting from the Ethernet Controller 710 Series support the Adaptive Virtual Function.
+
+Virtual Functions are generated in the usual way, and the resources assigned to a VF depend on the NIC infrastructure.
+
 The PCIE host-interface of Intel Ethernet Switch FM10000 Series VF infrastructure
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index d3946d6..550bead 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -132,7 +132,15 @@ static const struct eth_dev_ops avf_eth_dev_ops = {
 	.reta_query                 = avf_dev_rss_reta_query,
 	.rss_hash_update            = avf_dev_rss_hash_update,
 	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
+	.rxq_info_get               = avf_dev_rxq_info_get,
+	.txq_info_get               = avf_dev_txq_info_get,
+	.rx_queue_count             = avf_dev_rxq_count,
+	.rx_descriptor_done         = avf_dev_rx_desc_done,
+	.rx_descriptor_status       = avf_dev_rx_desc_status,
+	.tx_descriptor_status       = avf_dev_tx_desc_status,
 	.mtu_set                    = avf_dev_mtu_set,
+	/* TODO: Get statistics/xstatistics */
+	/* TODO: Rx interrupt */
 };
 
 static int
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index 95992fc..b3fe550 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -1384,4 +1384,149 @@ avf_set_tx_function(struct rte_eth_dev *dev)
 {
 	dev->tx_pkt_burst = avf_xmit_pkts;
 	dev->tx_pkt_prepare = avf_prep_pkts;
+}
+void
+avf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		 struct rte_eth_rxq_info *qinfo)
+{
+	struct avf_rx_queue *rxq;
+
+	rxq = dev->data->rx_queues[queue_id];
+
+	qinfo->mp = rxq->mp;
+	qinfo->scattered_rx = dev->data->scattered_rx;
+	qinfo->nb_desc = rxq->nb_rx_desc;
+
+	qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+	qinfo->conf.rx_drop_en = TRUE;
+	qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		 struct rte_eth_txq_info *qinfo)
+{
+	struct avf_tx_queue *txq;
+
+	txq = dev->data->tx_queues[queue_id];
+
+	qinfo->nb_desc = txq->nb_tx_desc;
+
+	qinfo->conf.tx_free_thresh = txq->free_thresh;
+	qinfo->conf.tx_rs_thresh = txq->rs_thresh;
+	qinfo->conf.txq_flags = txq->txq_flags;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
+/* Get the number of used descriptors of a rx queue */
+uint32_t
+avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+#define AVF_RXQ_SCAN_INTERVAL 4
+	volatile union avf_rx_desc *rxdp;
+	struct avf_rx_queue *rxq;
+	uint16_t desc = 0;
+
+	rxq = dev->data->rx_queues[queue_id];
+	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
+	while ((desc < rxq->nb_rx_desc) &&
+		((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		AVF_RXD_QW1_STATUS_MASK) >> AVF_RXD_QW1_STATUS_SHIFT) &
+				(1 << AVF_RX_DESC_STATUS_DD_SHIFT)) {
+		/* Check the DD bit of a rx descriptor of each 4 in a group,
+		 * to avoid checking too frequently and downgrading performance
+		 * too much.
+		 */
+		desc += AVF_RXQ_SCAN_INTERVAL;
+		rxdp += AVF_RXQ_SCAN_INTERVAL;
+		if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+			rxdp = &(rxq->rx_ring[rxq->rx_tail +
+					desc - rxq->nb_rx_desc]);
+	}
+
+	return desc;
+}
+
+int
+avf_dev_rx_desc_done(void *rx_queue, uint16_t offset)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct avf_rx_queue *rxq = rx_queue;
+	uint16_t desc;
+	int ret;
+
+	if (unlikely(offset >= rxq->nb_rx_desc)) {
+		PMD_DRV_LOG(ERR, "Invalid RX descriptor id %u", offset);
+		return 0;
+	}
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	rxdp = &(rxq->rx_ring[desc]);
+
+	ret = !!(((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		  AVF_RXD_QW1_STATUS_MASK) >> AVF_RXD_QW1_STATUS_SHIFT) &
+		 (1 << AVF_RX_DESC_STATUS_DD_SHIFT));
+
+	return ret;
+}
+
+int
+avf_dev_rx_desc_status(void *rx_queue, uint16_t offset)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	volatile uint64_t *status;
+	uint64_t mask;
+	uint32_t desc;
+
+	if (unlikely(offset >= rxq->nb_rx_desc))
+		return -EINVAL;
+
+	if (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)
+		return RTE_ETH_RX_DESC_UNAVAIL;
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	status = &rxq->rx_ring[desc].wb.qword1.status_error_len;
+	mask = rte_le_to_cpu_64((1ULL << AVF_RX_DESC_STATUS_DD_SHIFT)
+		<< AVF_RXD_QW1_STATUS_SHIFT);
+	if (*status & mask)
+		return RTE_ETH_RX_DESC_DONE;
+
+	return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+avf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
+{
+	struct avf_tx_queue *txq = tx_queue;
+	volatile uint64_t *status;
+	uint64_t mask, expect;
+	uint32_t desc;
+
+	if (unlikely(offset >= txq->nb_tx_desc))
+		return -EINVAL;
+
+	desc = txq->tx_tail + offset;
+	/* go to next desc that has the RS bit */
+	desc = ((desc + txq->rs_thresh - 1) / txq->rs_thresh) *
+		txq->rs_thresh;
+	if (desc >= txq->nb_tx_desc) {
+		desc -= txq->nb_tx_desc;
+		if (desc >= txq->nb_tx_desc)
+			desc -= txq->nb_tx_desc;
+	}
+
+	status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+	mask = rte_le_to_cpu_64(AVF_TXD_QW1_DTYPE_MASK);
+	expect = rte_cpu_to_le_64(
+		 AVF_TX_DESC_DTYPE_DESC_DONE << AVF_TXD_QW1_DTYPE_SHIFT);
+	if ((*status & mask) == expect)
+		return RTE_ETH_TX_DESC_DONE;
+
+	return RTE_ETH_TX_DESC_FULL;
 }
\ No newline at end of file
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index de98ce3..c52bd5f 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -180,6 +180,14 @@ uint16_t avf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		       uint16_t nb_pkts);
 void avf_set_rx_function(struct rte_eth_dev *dev);
 void avf_set_tx_function(struct rte_eth_dev *dev);
+void avf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			  struct rte_eth_rxq_info *qinfo);
+void avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			  struct rte_eth_txq_info *qinfo);
+uint32_t avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+int avf_dev_rx_desc_done(void *rx_queue, uint16_t offset);
+int avf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
+int avf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [RFC 9/9] net/i40e: support AVF basic interface
  2017-10-20  8:26 [dpdk-dev] [RFC 0/9] add new avf PMD Jingjing Wu
                   ` (7 preceding siblings ...)
  2017-10-20  8:26 ` [dpdk-dev] [RFC 8/9] net/avf: enable ops to check queue info and status Jingjing Wu
@ 2017-10-20  8:26 ` Jingjing Wu
  2017-11-21 23:58 ` [dpdk-dev] [RFC 0/9] add new avf PMD Ferruh Yigit
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
  10 siblings, 0 replies; 151+ messages in thread
From: Jingjing Wu @ 2017-10-20  8:26 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Enable Virtchnl offload Caps negotiation and RSS_PF offload
to support AVF basic interface.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c |  64 +++++++++++++++----
 drivers/net/i40e/i40e_ethdev.h |   4 ++
 drivers/net/i40e/i40e_pf.c     | 136 +++++++++++++++++++++++++++++++++++++----
 drivers/net/i40e/i40e_pf.h     |   6 ++
 4 files changed, 187 insertions(+), 23 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index f40c463..a032952 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3676,6 +3676,7 @@ i40e_get_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint32_t reg;
 	int ret;
 
 	if (!lut)
@@ -3692,14 +3693,21 @@ i40e_get_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 		uint32_t *lut_dw = (uint32_t *)lut;
 		uint16_t i, lut_size_dw = lut_size / 4;
 
-		for (i = 0; i < lut_size_dw; i++)
-			lut_dw[i] = I40E_READ_REG(hw, I40E_PFQF_HLUT(i));
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i < lut_size_dw; i++) {
+				reg = I40E_VFQF_HLUT1(i, vsi->user_param);
+				lut_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+		} else {
+			for (i = 0; i < lut_size_dw; i++)
+				lut_dw[i] = I40E_READ_REG(hw,
+							  I40E_PFQF_HLUT(i));
+		}
 	}
 
 	return 0;
 }
 
-static int
+int
 i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 {
 	struct i40e_pf *pf;
@@ -3723,8 +3731,16 @@ i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 		uint32_t *lut_dw = (uint32_t *)lut;
 		uint16_t i, lut_size_dw = lut_size / 4;
 
-		for (i = 0; i < lut_size_dw; i++)
-			I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i), lut_dw[i]);
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i < lut_size_dw; i++)
+				I40E_WRITE_REG(
+					hw,
+					I40E_VFQF_HLUT1(i, vsi->user_param),
+					lut_dw[i]);
+		} else {
+			for (i = 0; i < lut_size_dw; i++)
+				I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i), lut_dw[i]);
+		}
 		I40E_WRITE_FLUSH(hw);
 	}
 
@@ -6692,17 +6708,20 @@ i40e_pf_disable_rss(struct i40e_pf *pf)
 	I40E_WRITE_FLUSH(hw);
 }
 
-static int
+int
 i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len)
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint16_t key_idx = (vsi->type == I40E_VSI_SRIOV) ?
+			   I40E_VFQF_HKEY_MAX_INDEX :
+			   I40E_PFQF_HKEY_MAX_INDEX;
 	int ret = 0;
 
 	if (!key || key_len == 0) {
 		PMD_DRV_LOG(DEBUG, "No key to be configured");
 		return 0;
-	} else if (key_len != (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+	} else if (key_len != (key_idx + 1) *
 		sizeof(uint32_t)) {
 		PMD_DRV_LOG(ERR, "Invalid key length %u", key_len);
 		return -EINVAL;
@@ -6719,8 +6738,18 @@ i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len)
 		uint32_t *hash_key = (uint32_t *)key;
 		uint16_t i;
 
-		for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
-			i40e_write_rx_ctl(hw, I40E_PFQF_HKEY(i), hash_key[i]);
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++)
+				I40E_WRITE_REG(
+					hw,
+					I40E_VFQF_HKEY1(i, vsi->user_param),
+					hash_key[i]);
+
+		} else {
+			for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
+				I40E_WRITE_REG(hw, I40E_PFQF_HKEY(i),
+					       hash_key[i]);
+		}
 		I40E_WRITE_FLUSH(hw);
 	}
 
@@ -6732,6 +6761,7 @@ i40e_get_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t *key_len)
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint32_t reg;
 	int ret;
 
 	if (!key || !key_len)
@@ -6748,8 +6778,20 @@ i40e_get_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t *key_len)
 		uint32_t *key_dw = (uint32_t *)key;
 		uint16_t i;
 
-		for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
-			key_dw[i] = i40e_read_rx_ctl(hw, I40E_PFQF_HKEY(i));
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++) {
+				reg = I40E_VFQF_HKEY1(i, vsi->user_param);
+				key_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+			*key_len = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
+				   sizeof(uint32_t);
+		} else {
+			for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
+				key_dw[i] = i40e_read_rx_ctl(hw,
+							     I40E_PFQF_HKEY(i));
+			*key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+				   sizeof(uint32_t);
+		}
 	}
 	*key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
 
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 2f1905e..46dc1a4 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -426,6 +426,8 @@ struct i40e_pf_vf {
 	uint16_t lan_nb_qps; /* Actual queues allocated */
 	uint16_t reset_cnt; /* Total vf reset times */
 	struct ether_addr mac_addr;  /* Default MAC address */
+	struct virtchnl_version_info version; /* version of the virtchnl from VF */
+	uint32_t request_caps; /* offload caps requested from VF */
 };
 
 /*
@@ -1189,6 +1191,8 @@ int i40e_dcb_init_configure(struct rte_eth_dev *dev, bool sw_dcb);
 int i40e_flush_queue_region_all_conf(struct rte_eth_dev *dev,
 		struct i40e_hw *hw, struct i40e_pf *pf, uint16_t on);
 void i40e_init_queue_region_conf(struct rte_eth_dev *dev);
+int i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len);
+int i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size);
 
 #define I40E_DEV_TO_PCI(eth_dev) \
 	RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index eef7243..9d41a7b 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -274,19 +274,23 @@ i40e_pf_host_send_msg_to_vf(struct i40e_pf_vf *vf,
 }
 
 static void
-i40e_pf_host_process_cmd_version(struct i40e_pf_vf *vf, bool b_op)
+i40e_pf_host_process_cmd_version(struct i40e_pf_vf *vf, uint8_t *msg,
+				 bool b_op)
 {
 	struct virtchnl_version_info info;
 
-	/* Respond like a Linux PF host in order to support both DPDK VF and
-	 * Linux VF driver. The expense is original DPDK host specific feature
+	/* VF and PF drivers need to follow the virtchnl definition, no matter
+	 * it's DPDK or other kernel drivers.
+	 * The original DPDK host specific feature
 	 * like CFG_VLAN_PVID and CONFIG_VSI_QUEUES_EXT will not available.
-	 *
-	 * DPDK VF also can't identify host driver by version number returned.
-	 * It always assume talking with Linux PF.
 	 */
+
 	info.major = VIRTCHNL_VERSION_MAJOR;
-	info.minor = VIRTCHNL_VERSION_MINOR_NO_VF_CAPS;
+	vf->version = *(struct virtchnl_version_info *)msg;
+	if (VF_IS_V10(&vf->version))
+		info.minor = VIRTCHNL_VERSION_MINOR_NO_VF_CAPS;
+	else
+		info.minor = VIRTCHNL_VERSION_MINOR;
 
 	if (b_op)
 		i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_VERSION,
@@ -310,11 +314,13 @@ i40e_pf_host_process_cmd_reset_vf(struct i40e_pf_vf *vf)
 }
 
 static int
-i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, bool b_op)
+i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
+					 bool b_op)
 {
 	struct virtchnl_vf_resource *vf_res = NULL;
 	struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
 	uint32_t len = 0;
+	uint64_t default_hena = I40E_RSS_HENA_ALL;
 	int ret = I40E_SUCCESS;
 
 	if (!b_op) {
@@ -338,11 +344,31 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, bool b_op)
 		goto send_msg;
 	}
 
-	vf_res->vf_offload_flags = VIRTCHNL_VF_OFFLOAD_L2 |
-				VIRTCHNL_VF_OFFLOAD_VLAN;
+	if (VF_IS_V10(&vf->version)) /* doesn't support offload negotiate */
+		vf->request_caps = VIRTCHNL_VF_OFFLOAD_L2 |
+				   VIRTCHNL_VF_OFFLOAD_VLAN;
+	else
+		vf->request_caps = *(uint32_t *)msg;
+
+	/* Enable all RSS by default; hena setting via virtchnl is not supported yet. */
+	if (vf->request_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		I40E_WRITE_REG(hw, I40E_VFQF_HENA1(0, vf->vf_idx),
+			       (uint32_t)default_hena);
+		I40E_WRITE_REG(hw, I40E_VFQF_HENA1(1, vf->vf_idx),
+			       (uint32_t)(default_hena >> 32));
+		I40E_WRITE_FLUSH(hw);
+	}
+
+	vf_res->vf_offload_flags = vf->request_caps &
+				   I40E_VIRTCHNL_OFFLOAD_CAPS;
+	/* X722 supports write-back on ITR without binding queue to interrupt vector. */
+	if (hw->mac.type == I40E_MAC_X722)
+		vf_res->vf_offload_flags |= VIRTCHNL_VF_OFFLOAD_WB_ON_ITR;
 	vf_res->max_vectors = hw->func_caps.num_msix_vectors_vf;
 	vf_res->num_queue_pairs = vf->vsi->nb_qps;
 	vf_res->num_vsis = I40E_DEFAULT_VF_VSI_NUM;
+	vf_res->rss_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) * 4;
+	vf_res->rss_lut_size = (I40E_VFQF_HLUT1_MAX_INDEX + 1) * 4;
 
 	/* Change below setting if PF host can support more VSIs for VF */
 	vf_res->vsi_res[0].vsi_type = VIRTCHNL_VSI_SRIOV;
@@ -1091,6 +1117,85 @@ i40e_pf_host_process_cmd_disable_vlan_strip(struct i40e_pf_vf *vf, bool b_op)
 	return ret;
 }
 
+static int
+i40e_pf_host_process_cmd_set_rss_lut(struct i40e_pf_vf *vf,
+					uint8_t *msg,
+					uint16_t msglen,
+					bool b_op)
+{
+	struct virtchnl_rss_lut *rss_lut = (struct virtchnl_rss_lut *)msg;
+	uint16_t valid_len;
+	int ret = I40E_SUCCESS;
+
+	if (!b_op) {
+		i40e_pf_host_send_msg_to_vf(
+			vf,
+			VIRTCHNL_OP_CONFIG_RSS_LUT,
+			I40E_NOT_SUPPORTED, NULL, 0);
+		return ret;
+	}
+
+	if (msg == NULL || msglen <= sizeof(struct virtchnl_rss_lut)) {
+		PMD_DRV_LOG(ERR, "set_rss_lut argument too short");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+	valid_len = sizeof(struct virtchnl_rss_lut) + rss_lut->lut_entries - 1;
+	if (msglen < valid_len) {
+		PMD_DRV_LOG(ERR, "set_rss_lut length mismatch");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+	ret = i40e_set_rss_lut(vf->vsi, rss_lut->lut, rss_lut->lut_entries);
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_LUT,
+				    ret, NULL, 0);
+
+	return ret;
+}
+
+static int
+i40e_pf_host_process_cmd_set_rss_key(struct i40e_pf_vf *vf,
+					uint8_t *msg,
+					uint16_t msglen,
+					bool b_op)
+{
+	struct virtchnl_rss_key *rss_key = (struct virtchnl_rss_key *)msg;
+	uint16_t valid_len;
+	int ret = I40E_SUCCESS;
+
+	if (!b_op) {
+		i40e_pf_host_send_msg_to_vf(
+			vf,
+			VIRTCHNL_OP_CONFIG_RSS_KEY,
+			I40E_NOT_SUPPORTED, NULL, 0);
+		return ret;
+	}
+
+	if (msg == NULL || msglen <= sizeof(struct virtchnl_rss_key)) {
+		PMD_DRV_LOG(ERR, "set_rss_key argument too short");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+	valid_len = sizeof(struct virtchnl_rss_key) + rss_key->key_len - 1;
+	if (msglen < valid_len) {
+		PMD_DRV_LOG(ERR, "set_rss_key length mismatch");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+
+	ret = i40e_set_rss_key(vf->vsi, rss_key->key, rss_key->key_len);
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_KEY,
+				    ret, NULL, 0);
+
+	return ret;
+}
+
 void
 i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
 {
@@ -1197,7 +1302,7 @@ i40e_pf_host_handle_vf_msg(struct rte_eth_dev *dev,
 	switch (opcode) {
 	case VIRTCHNL_OP_VERSION:
 		PMD_DRV_LOG(INFO, "OP_VERSION received");
-		i40e_pf_host_process_cmd_version(vf, b_op);
+		i40e_pf_host_process_cmd_version(vf, msg, b_op);
 		break;
 	case VIRTCHNL_OP_RESET_VF:
 		PMD_DRV_LOG(INFO, "OP_RESET_VF received");
@@ -1205,7 +1310,7 @@ i40e_pf_host_handle_vf_msg(struct rte_eth_dev *dev,
 		break;
 	case VIRTCHNL_OP_GET_VF_RESOURCES:
 		PMD_DRV_LOG(INFO, "OP_GET_VF_RESOURCES received");
-		i40e_pf_host_process_cmd_get_vf_resource(vf, b_op);
+		i40e_pf_host_process_cmd_get_vf_resource(vf, msg, b_op);
 		break;
 	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
 		PMD_DRV_LOG(INFO, "OP_CONFIG_VSI_QUEUES received");
@@ -1265,6 +1370,13 @@ i40e_pf_host_handle_vf_msg(struct rte_eth_dev *dev,
 	case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
 		PMD_DRV_LOG(INFO, "OP_DISABLE_VLAN_STRIPPING received");
 		i40e_pf_host_process_cmd_disable_vlan_strip(vf, b_op);
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_LUT:
+		PMD_DRV_LOG(INFO, "OP_CONFIG_RSS_LUT received");
+		i40e_pf_host_process_cmd_set_rss_lut(vf, msg, msglen, b_op);
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_KEY:
+		PMD_DRV_LOG(INFO, "OP_CONFIG_RSS_KEY received");
+		i40e_pf_host_process_cmd_set_rss_key(vf, msg, msglen, b_op);
 		break;
 	/* Don't add command supported below, which will
 	 * return an error code.
diff --git a/drivers/net/i40e/i40e_pf.h b/drivers/net/i40e/i40e_pf.h
index 0411663..196d71e 100644
--- a/drivers/net/i40e/i40e_pf.h
+++ b/drivers/net/i40e/i40e_pf.h
@@ -37,6 +37,12 @@
 /* Default setting on number of VSIs that VF can contain */
 #define I40E_DEFAULT_VF_VSI_NUM 1
 
+#define I40E_VIRTCHNL_OFFLOAD_CAPS ( \
+	VIRTCHNL_VF_OFFLOAD_L2 | \
+	VIRTCHNL_VF_OFFLOAD_VLAN | \
+	VIRTCHNL_VF_OFFLOAD_RSS_PF | \
+	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
+
 struct virtchnl_vlan_offload_info {
 	uint16_t vsi_id;
 	uint8_t enable_vlan_strip;
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [RFC 0/9] add new avf PMD
  2017-10-20  8:26 [dpdk-dev] [RFC 0/9] add new avf PMD Jingjing Wu
                   ` (8 preceding siblings ...)
  2017-10-20  8:26 ` [dpdk-dev] [RFC 9/9] net/i40e: support AVF basic interface Jingjing Wu
@ 2017-11-21 23:58 ` Ferruh Yigit
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
  10 siblings, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2017-11-21 23:58 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu

On 10/20/2017 1:26 AM, Jingjing Wu wrote:
> Adaptive Virtual Function (AVF) Driver is VF driver which supports
> for all future Intel devices without requiring a VM update.
> It promises the basic high speed connectivity. And since this happens
> to be an adaptive VF driver, every new drop of the VF driver would
> add more and more advanced features that can be turned on in the VM
> if the underlying HW device supports those advanced features. Most
> importantly in a device agnostic way without ever compromising on the
> base functionality. All the AVF's interface need to follow AVF spec,
> and AVF compliant interface is supported start from the
> Intel® Ethernet Controller 710 Series.

This looks like a good idea.

Still there will be device specific drivers, right?
AVF will cover only basic features of all future Intel NICs.

> 
> This patch set adds AVF PMD supporting.
>  - Device initialization 
>  - Queue setup and Device start
>  - Basic Rx and Tx.
>  - MAC address offload feature
>  - Vlan offload feature
>  - RSS offload feature
> 
> Which need to be done in later version
>  - Vectored Rx and Tx func
>  - Rx interrupt support
>  - Statistics query
>  - performance tuning
> 
> Jingjing Wu (9):
>   net/avf/base: add base code for avf PMD
>   net/avf: initilization of avf PMD
>   net/avf: enable queue and device
>   net/avf: enable basic Rx Tx func
>   net/avf: enable link status update
>   net/avf: enable ops for MAC VLAN offload
>   net/avf: enable ops for rss setting
>   net/avf: enable ops to check queue info and status
>   net/i40e: support AVF basic interface

Overall comments on the whole patchset:

- Some documentation is missing:
  driver documentation, describing the config options as well;
  the .ini file, please update it in each patch that adds a feature;
  a release notes update to announce the new PMD.

- There are some checkpatch warnings, even excluding the base files.

- Commit logs and patch titles are missing details and don't cover all
the modifications in the patch.

<...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [RFC 2/9] net/avf: initilization of avf PMD
  2017-10-20  8:26 ` [dpdk-dev] [RFC 2/9] net/avf: initilization of " Jingjing Wu
@ 2017-11-22  0:02   ` Ferruh Yigit
  0 siblings, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2017-11-22  0:02 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu

On 10/20/2017 1:26 AM, Jingjing Wu wrote:
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

<...>

>  #
> +# Compile burst-oriented AVF PMD driver
> +#
> +CONFIG_RTE_LIBRTE_AVF_PMD=y

Let's start with the PMD disabled and enable it once it becomes functional.

If you need to run a git bisect on this commit in the future and you have an
AVF-supported device, the device will be probed; but since this patch is missing
avf_eth_dev_ops, I am not sure how the app behaves, it may fail or crash, and you
couldn't complete the git bisect run because of this patch.

<...>

> @@ -69,6 +69,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e
>  DEPDIRS-i40e = $(core-libs) librte_hash
>  DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe
>  DEPDIRS-ixgbe = $(core-libs) librte_hash
> +DIRS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf

Can you please add this in alphabetical order?

> +DEPDIRS-avf = $(core-libs)

This changed in the previous release: DEPDIRS was removed, and the library
dependency part moved into the individual driver Makefiles as the LDLIBS variable.

<...>

> +#
> +# Add extra flags for base driver files (also known as shared code)
> +# to disable warnings
> +#
> +ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
> +CFLAGS_BASE_DRIVER = -wd593 -wd188
> +else ifeq ($(CONFIG_RTE_TOOLCHAIN_CLANG),y)
> +CFLAGS_BASE_DRIVER += -Wno-sign-compare
> +CFLAGS_BASE_DRIVER += -Wno-unused-value
> +CFLAGS_BASE_DRIVER += -Wno-unused-parameter
> +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing
> +CFLAGS_BASE_DRIVER += -Wno-format
> +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
> +CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast
> +CFLAGS_BASE_DRIVER += -Wno-format-nonliteral
> +CFLAGS_BASE_DRIVER += -Wno-unused-variable

Lots of these are common to clang and gcc; can the duplication be removed?

> +else
> +CFLAGS_BASE_DRIVER  = -Wno-sign-compare
> +CFLAGS_BASE_DRIVER += -Wno-unused-value
> +CFLAGS_BASE_DRIVER += -Wno-unused-parameter
> +CFLAGS_BASE_DRIVER += -Wno-strict-aliasing
> +CFLAGS_BASE_DRIVER += -Wno-format
> +CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
> +CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast
> +CFLAGS_BASE_DRIVER += -Wno-format-nonliteral
> +CFLAGS_BASE_DRIVER += -Wno-format-security
> +CFLAGS_BASE_DRIVER += -Wno-unused-variable

Are these options to disable warnings specific to this driver? It looks like
copy-paste from an older driver.

I believe we should reduce the number of disabled compiler warnings as much as
possible; what do you think about removing them all and adding back only what is
needed? This may cause a few compile-fix patches, but if they are sent before the
integration deadline they can be squashed in next-net.

<...>

> +static int
> +avf_init_vf(struct rte_eth_dev *dev)
> +{
> +	int i, err, bufsz;
> +	struct avf_adapter *adapter =
> +		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> +	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
> +	uint16_t interval =
> +		avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
> +
> +	err = avf_set_mac_type(hw);
> +	if (err) {
> +		PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);

You may want to dynamically register the log types and levels (rte_log_register,
rte_log_set_level) for avf_logtype_init and avf_logtype_driver before you start
using the logging functions.
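
For example, something along these lines should work (an untested sketch; the
log names are only illustrative):

	RTE_INIT(avf_init_log);
	static void
	avf_init_log(void)
	{
		avf_logtype_init = rte_log_register("pmd.avf.init");
		if (avf_logtype_init >= 0)
			rte_log_set_level(avf_logtype_init, RTE_LOG_NOTICE);
		avf_logtype_driver = rte_log_register("pmd.avf.driver");
		if (avf_logtype_driver >= 0)
			rte_log_set_level(avf_logtype_driver, RTE_LOG_NOTICE);
	}

Then the log level of each component can be tuned at runtime without a rebuild.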

<...>

> +/*
> + * virtual function driver struct
> + */
> +static struct rte_pci_driver rte_avf_pmd = {
> +	.id_table = pci_id_avf_map,
> +	.drv_flags = RTE_PCI_DRV_NEED_MAPPING,

RTE_PCI_DRV_IOVA_AS_VA? It has already been added to the i40e VF driver.
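
I.e. something like (sketch):

	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA,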

<...>

> diff --git a/drivers/net/avf/rte_pmd_avf_version.map b/drivers/net/avf/rte_pmd_avf_version.map
> new file mode 100644
> index 0000000..a70bd19
> --- /dev/null
> +++ b/drivers/net/avf/rte_pmd_avf_version.map
> @@ -0,0 +1,4 @@
> +DPDK_17.11 {

Needs to be changed for new release.

<...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [RFC 3/9] net/avf: enable queue and device
  2017-10-20  8:26 ` [dpdk-dev] [RFC 3/9] net/avf: enable queue and device Jingjing Wu
@ 2017-11-22  0:04   ` Ferruh Yigit
  0 siblings, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2017-11-22  0:04 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu, Shahaf Shuler

On 10/20/2017 1:26 AM, Jingjing Wu wrote:
> enable device and queue setup ops like:
> 
>  - dev_configure
>  - dev_start
>  - dev_stop
>  - dev_close
>  - dev_infos_get
>  - rx_queue_start
>  - rx_queue_stop
>  - tx_queue_start
>  - tx_queue_stop
>  - rx_queue_setup
>  - rx_queue_release
>  - tx_queue_setup
>  - tx_queue_release
> 
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

<...>

>  static int
> +avf_dev_configure(struct rte_eth_dev *dev)
> +{
> +	struct avf_adapter *ad =
> +		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> +	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
> +
> +	/* Initialize to TRUE. If any of Rx queues doesn't meet the bulk
> +	 * allocation or vector Rx preconditions we will reset it.
> +	 */
> +	ad->rx_vec_allowed = true;
> +	ad->tx_simple_allowed = true;
> +	ad->tx_vec_allowed = true;
> +
> +	/* Vlan stripping setting */
> +	if (dev_conf->rxmode.hw_vlan_strip)

What about using the new method for offload configuration:
ce17eddefc20 ("ethdev: introduce Rx queue offloads API")
cba7f53b717d ("ethdev: introduce Tx queue offloads API")

cc'ed Shahaf in case support is needed.
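
A minimal sketch, assuming the new per-port offload flags, of how the check in
dev_configure could be expressed; the helper name is made up:

#include <stdbool.h>
#include <rte_ethdev.h>

/* true when the application requested VLAN stripping via the new
 * port-level offload flags instead of rxmode.hw_vlan_strip */
static bool
avf_vlan_strip_requested(struct rte_eth_dev *dev)
{
	struct rte_eth_conf *conf = &dev->data->dev_conf;

	return (conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP) != 0;
}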

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [RFC 4/9] net/avf: enable basic Rx Tx func
  2017-10-20  8:26 ` [dpdk-dev] [RFC 4/9] net/avf: enable basic Rx Tx func Jingjing Wu
@ 2017-11-22  0:06   ` Ferruh Yigit
  2017-11-22  0:57     ` Stephen Hemminger
  2017-11-22  7:55     ` Wu, Jingjing
  0 siblings, 2 replies; 151+ messages in thread
From: Ferruh Yigit @ 2017-11-22  0:06 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu

On 10/20/2017 1:26 AM, Jingjing Wu wrote:
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>

<...>

> @@ -214,6 +214,9 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
>  # Compile burst-oriented AVF PMD driver
>  #
>  CONFIG_RTE_LIBRTE_AVF_PMD=y
> +CONFIG_RTE_LIBRTE_AVF_RX_DUMP=n
> +CONFIG_RTE_LIBRTE_AVF_TX_DUMP=n

Are these config options used?

<...>

> @@ -49,4 +49,18 @@ extern int avf_logtype_driver;
>  	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
>  #define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
>  
> +#ifdef RTE_LIBRTE_AVF_DEBUG_TX

Is this defined anywhere?

> +#define PMD_TX_LOG(level, fmt, args...) \
> +	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)

Should RTE_LOG_DP be used instead?
And since the other macros use dynamic log functions, why use the static
method here? What do you think about using the new method for the data path
logs as well?
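
A rough sketch of what an RTE_LOG_DP based variant could look like, still
gated by the compile-time option (illustrative only):

#include <rte_log.h>

#ifdef RTE_LIBRTE_AVF_DEBUG_TX
#define PMD_TX_LOG(level, fmt, args...) \
	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
#else
#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
#endif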

<...>

> +static inline void
> +avf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union avf_rx_desc *rxdp)
> +{
> +	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
> +		(1 << AVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
> +		mb->ol_flags |= PKT_RX_VLAN_PKT | PKT_RX_VLAN_STRIPPED;

Please use the new flag instead of PKT_RX_VLAN_PKT, and please be sure the
flag is correctly used with its new meaning.

<...>

> +/* TX prep functions */
> +uint16_t
> +avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
> +	      uint16_t nb_pkts)
> +{
> +	int i, ret;
> +	uint64_t ol_flags;
> +	struct rte_mbuf *m;
> +
> +	for (i = 0; i < nb_pkts; i++) {
> +		m = tx_pkts[i];
> +		ol_flags = m->ol_flags;
> +
> +		/* m->nb_segs is uint8_t, so nb_segs is always less than
> +		 * AVF_TX_MAX_SEG.
> +		 * We check only a condition for nb_segs > AVF_TX_MAX_MTU_SEG.
> +		 */

This is wrong; nb_segs is 16 bits now, and this check has already been updated
in i40e.
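
A hedged sketch of an explicit check; the limit values and the TSO/non-TSO
split below are illustrative assumptions, not the driver's real numbers:

#include <errno.h>
#include <rte_errno.h>
#include <rte_mbuf.h>

/* stand-in limits, for illustration only */
#define AVF_TX_MAX_SEG     UINT8_MAX
#define AVF_TX_MAX_MTU_SEG 8

/* nb_segs is 16-bit now, so validate it explicitly: TSO packets may use up
 * to AVF_TX_MAX_SEG segments, non-TSO packets up to AVF_TX_MAX_MTU_SEG */
static inline int
avf_check_nb_segs(const struct rte_mbuf *m)
{
	uint16_t limit = (m->ol_flags & PKT_TX_TCP_SEG) ?
			 AVF_TX_MAX_SEG : AVF_TX_MAX_MTU_SEG;

	if (m->nb_segs > limit) {
		rte_errno = EINVAL;
		return -1;
	}
	return 0;
}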

<...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [RFC 6/9] net/avf: enable ops for MAC VLAN offload
  2017-10-20  8:26 ` [dpdk-dev] [RFC 6/9] net/avf: enable ops for MAC VLAN offload Jingjing Wu
@ 2017-11-22  0:07   ` Ferruh Yigit
  0 siblings, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2017-11-22  0:07 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu

On 10/20/2017 1:26 AM, Jingjing Wu wrote:
>  - promiscuous_enable
>  - promiscuous_disable
>  - allmulticast_enable
>  - allmulticast_disable
>  - mac_addr_add
>  - mac_addr_remove
>  - mac_addr_set

+	.vlan_filter_set            = avf_dev_vlan_filter_set,
+	.vlan_offload_set           = avf_dev_vlan_offload_set,


And it does more than what the patch title says.

<...>

> +static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
> +					 struct ether_addr *mac_addr);
> +>>>>>>> 50b4111... net/avf: enable ops for MAC VLAN offload

This looks like a merge artifact.

<...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [RFC 7/9] net/avf: enable ops for rss setting
  2017-10-20  8:26 ` [dpdk-dev] [RFC 7/9] net/avf: enable ops for rss setting Jingjing Wu
@ 2017-11-22  0:07   ` Ferruh Yigit
  0 siblings, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2017-11-22  0:07 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu

On 10/20/2017 1:26 AM, Jingjing Wu wrote:

+ mtu_set, which neither the commit log nor the patch title mentions.

> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

<...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [RFC 8/9] net/avf: enable ops to check queue info and status
  2017-10-20  8:26 ` [dpdk-dev] [RFC 8/9] net/avf: enable ops to check queue info and status Jingjing Wu
@ 2017-11-22  0:09   ` Ferruh Yigit
  2017-11-22  8:23     ` Wu, Jingjing
  0 siblings, 1 reply; 151+ messages in thread
From: Ferruh Yigit @ 2017-11-22  0:09 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu, Olivier MATZ

On 10/20/2017 1:26 AM, Jingjing Wu wrote:
>  - rxq_info_get
>  - txq_info_get
>  - rx_queue_count
>  - rx_descriptor_done
>  - rx_descriptor_status
>  - tx_descriptor_status

+ some documentation.

<...>

> @@ -28,8 +28,8 @@
>      (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
>      OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
>  
> -I40E/IXGBE/IGB Virtual Function Driver

AVF doesn't cover the ixgbe/igb virtual functions, right? Perhaps it would be
good to keep the information for them.

<...>

> @@ -132,7 +132,15 @@ static const struct eth_dev_ops avf_eth_dev_ops = {
>  	.reta_query                 = avf_dev_rss_reta_query,
>  	.rss_hash_update            = avf_dev_rss_hash_update,
>  	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
> +	.rxq_info_get               = avf_dev_rxq_info_get,
> +	.txq_info_get               = avf_dev_txq_info_get,
> +	.rx_queue_count             = avf_dev_rxq_count,
> +	.rx_descriptor_done         = avf_dev_rx_desc_done,

If you implement "rx_descriptor_status" there is no need to implement
"rx_descriptor_done"; one covers the other.

cc'ed Olivier if I am missing anything.

<...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [RFC 4/9] net/avf: enable basic Rx Tx func
  2017-11-22  0:06   ` Ferruh Yigit
@ 2017-11-22  0:57     ` Stephen Hemminger
  2017-11-22 23:15       ` Ferruh Yigit
  2017-11-22  7:55     ` Wu, Jingjing
  1 sibling, 1 reply; 151+ messages in thread
From: Stephen Hemminger @ 2017-11-22  0:57 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Jingjing Wu, dev, wenzhuo.lu

On Tue, 21 Nov 2017 16:06:24 -0800
Ferruh Yigit <ferruh.yigit@intel.com> wrote:

> On 10/20/2017 1:26 AM, Jingjing Wu wrote:
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>  
> 
> <...>
> 
> > @@ -214,6 +214,9 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
> >  # Compile burst-oriented AVF PMD driver
> >  #
> >  CONFIG_RTE_LIBRTE_AVF_PMD=y
> > +CONFIG_RTE_LIBRTE_AVF_RX_DUMP=n
> > +CONFIG_RTE_LIBRTE_AVF_TX_DUMP=n  
> 
> Are these config options used?
> 
> <...>
> 
> > @@ -49,4 +49,18 @@ extern int avf_logtype_driver;
> >  	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
> >  #define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
> >  
> > +#ifdef RTE_LIBRTE_AVF_DEBUG_TX  
> 
> Is this defined anywhere?
> 
> > +#define PMD_TX_LOG(level, fmt, args...) \
> > +	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)  
> 
> Instead should RTE_LOG_DP used?
> And since other macros uses dynamic log functions, why here use static method,
> what do you think using new method for data path logs as well?
> 
> <...>
> 
> > +static inline void
> > +avf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union avf_rx_desc *rxdp)
> > +{
> > +	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
> > +		(1 << AVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
> > +		mb->ol_flags |= PKT_RX_VLAN_PKT | PKT_RX_VLAN_STRIPPED;  
> 
> Please new flag instead of PKT_RX_VLAN_PKT and please be sure flag is correctly
> used with its new meaning.
> 
> <...>
> 
> > +/* TX prep functions */
> > +uint16_t
> > +avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
> > +	      uint16_t nb_pkts)
> > +{
> > +	int i, ret;
> > +	uint64_t ol_flags;
> > +	struct rte_mbuf *m;
> > +
> > +	for (i = 0; i < nb_pkts; i++) {
> > +		m = tx_pkts[i];
> > +		ol_flags = m->ol_flags;
> > +
> > +		/* m->nb_segs is uint8_t, so nb_segs is always less than
> > +		 * AVF_TX_MAX_SEG.
> > +		 * We check only a condition for nb_segs > AVF_TX_MAX_MTU_SEG.
> > +		 */  
> 
> This is wrong, nb_segs is 16bits now, this check has been updated in i40e already.
> 
> <...>

Most drivers base their code on one of the legacy Intel drivers.
Why not fix ixgbe (or similar) to be a "follow this model" reference?

It is unreasonable to expect new drivers to follow a different pattern.

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [RFC 4/9] net/avf: enable basic Rx Tx func
  2017-11-22  0:06   ` Ferruh Yigit
  2017-11-22  0:57     ` Stephen Hemminger
@ 2017-11-22  7:55     ` Wu, Jingjing
  2017-11-22 22:38       ` Ferruh Yigit
  1 sibling, 1 reply; 151+ messages in thread
From: Wu, Jingjing @ 2017-11-22  7:55 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Lu, Wenzhuo



> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Wednesday, November 22, 2017 8:06 AM
> To: Wu, Jingjing <jingjing.wu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [RFC 4/9] net/avf: enable basic Rx Tx func
> 
> On 10/20/2017 1:26 AM, Jingjing Wu wrote:
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> 
> <...>
> 
> > @@ -214,6 +214,9 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
> >  # Compile burst-oriented AVF PMD driver  #
> > CONFIG_RTE_LIBRTE_AVF_PMD=y
> > +CONFIG_RTE_LIBRTE_AVF_RX_DUMP=n
> > +CONFIG_RTE_LIBRTE_AVF_TX_DUMP=n
> 
> Are these config options used?
> 
Yes, some macros are defined in avf_rxtx.h to dump descriptors. Will merge them with AVF_DEBUG_TX/RX.

> <...>
> 
> > @@ -49,4 +49,18 @@ extern int avf_logtype_driver;
> >  	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)  #define
> > PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
> >
> > +#ifdef RTE_LIBRTE_AVF_DEBUG_TX
> 
> Is this defined anywhere?
Will merge it with AVF_TX_DUMP.

> 
> > +#define PMD_TX_LOG(level, fmt, args...) \
> > +	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
> 
> Instead should RTE_LOG_DP used?
> And since other macros uses dynamic log functions, why here use static method,
> what do you think using new method for data path logs as well?
> 
This is used for fast-path debugging, so a static macro will benefit performance.

> <...>
> 
> > +static inline void
> > +avf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union avf_rx_desc
> > +*rxdp) {
> > +	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
> > +		(1 << AVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
> > +		mb->ol_flags |= PKT_RX_VLAN_PKT | PKT_RX_VLAN_STRIPPED;
> 
> Please new flag instead of PKT_RX_VLAN_PKT and please be sure flag is
> correctly used with its new meaning.
> 
> <...>
> 
> > +/* TX prep functions */
> > +uint16_t
> > +avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
> > +	      uint16_t nb_pkts)
> > +{
> > +	int i, ret;
> > +	uint64_t ol_flags;
> > +	struct rte_mbuf *m;
> > +
> > +	for (i = 0; i < nb_pkts; i++) {
> > +		m = tx_pkts[i];
> > +		ol_flags = m->ol_flags;
> > +
> > +		/* m->nb_segs is uint8_t, so nb_segs is always less than
> > +		 * AVF_TX_MAX_SEG.
> > +		 * We check only a condition for nb_segs >
> AVF_TX_MAX_MTU_SEG.
> > +		 */
> 
> This is wrong, nb_segs is 16bits now, this check has been updated in i40e
> already.
> 
Will change, Thanks

> <...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [RFC 8/9] net/avf: enable ops to check queue info and status
  2017-11-22  0:09   ` Ferruh Yigit
@ 2017-11-22  8:23     ` Wu, Jingjing
  0 siblings, 0 replies; 151+ messages in thread
From: Wu, Jingjing @ 2017-11-22  8:23 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Lu, Wenzhuo, Olivier MATZ



> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Wednesday, November 22, 2017 8:09 AM
> To: Wu, Jingjing <jingjing.wu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Olivier MATZ
> <olivier.matz@6wind.com>
> Subject: Re: [dpdk-dev] [RFC 8/9] net/avf: enable ops to check queue info and
> status
> 
> On 10/20/2017 1:26 AM, Jingjing Wu wrote:
> >  - rxq_info_get
> >  - txq_info_get
> >  - rx_queue_count
> >  - rx_descriptor_done
> >  - rx_descriptor_status
> >  - tx_descriptor_status
> 
> + some documentation.
> 
> <...>
> 
> > @@ -28,8 +28,8 @@
> >      (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
> THE USE
> >      OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> DAMAGE.
> >
> > -I40E/IXGBE/IGB Virtual Function Driver
> 
> AVF doesn't cover ixgbe/igb virtual functions right? Perhaps it can be good to
> keep information for them.
> 
The patch just renames "I40E/IXGBE/IGB Virtual Function Driver" to "Intel Virtual Function Driver", which doesn't indicate AVF.

And the i40e, ixgbe and igb NICs are described below in this doc.

> <...>
> 
> > @@ -132,7 +132,15 @@ static const struct eth_dev_ops avf_eth_dev_ops = {
> >  	.reta_query                 = avf_dev_rss_reta_query,
> >  	.rss_hash_update            = avf_dev_rss_hash_update,
> >  	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
> > +	.rxq_info_get               = avf_dev_rxq_info_get,
> > +	.txq_info_get               = avf_dev_txq_info_get,
> > +	.rx_queue_count             = avf_dev_rxq_count,
> > +	.rx_descriptor_done         = avf_dev_rx_desc_done,
> 
> If you implemented "rx_descriptor_status" no need to implement
> "rx_descriptor_done", one covers other.
> 

Thanks, will change.

> cc'ed Olivier if I am missing anything.
> 
> <...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [RFC 4/9] net/avf: enable basic Rx Tx func
  2017-11-22  7:55     ` Wu, Jingjing
@ 2017-11-22 22:38       ` Ferruh Yigit
  2017-11-23  1:17         ` Wu, Jingjing
  0 siblings, 1 reply; 151+ messages in thread
From: Ferruh Yigit @ 2017-11-22 22:38 UTC (permalink / raw)
  To: Wu, Jingjing, dev; +Cc: Lu, Wenzhuo

On 11/21/2017 11:55 PM, Wu, Jingjing wrote:
> 
> 
>> -----Original Message-----
>> From: Yigit, Ferruh
>> Sent: Wednesday, November 22, 2017 8:06 AM
>> To: Wu, Jingjing <jingjing.wu@intel.com>; dev@dpdk.org
>> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
>> Subject: Re: [dpdk-dev] [RFC 4/9] net/avf: enable basic Rx Tx func
>>
>> On 10/20/2017 1:26 AM, Jingjing Wu wrote:
>>> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
>>
>> <...>
>>
>>> @@ -214,6 +214,9 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
>>>  # Compile burst-oriented AVF PMD driver  #
>>> CONFIG_RTE_LIBRTE_AVF_PMD=y
>>> +CONFIG_RTE_LIBRTE_AVF_RX_DUMP=n
>>> +CONFIG_RTE_LIBRTE_AVF_TX_DUMP=n
>>
>> Are these config options used?
>>
> Yes, some macros are defined in avf_rxtx.h for dump descriptors. Will merge them with AVF_DEBUG_TX/RX.
> 
>> <...>
>>
>>> @@ -49,4 +49,18 @@ extern int avf_logtype_driver;
>>>  	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)  #define
>>> PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
>>>
>>> +#ifdef RTE_LIBRTE_AVF_DEBUG_TX
>>
>> Is this defined anywhere?
> Will merge it with AVF_TX_DUMP.
> 
>>
>>> +#define PMD_TX_LOG(level, fmt, args...) \
>>> +	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
>>
>> Instead should RTE_LOG_DP used?
>> And since other macros uses dynamic log functions, why here use static method,
>> what do you think using new method for data path logs as well?
>>
> This is used for fast path debug, so static macro will benefit performance.

How will it benefit?

The PMD_TX_LOG macro is controlled by a specific compile-time option,
RTE_LIBRTE_AVF_DEBUG_TX. If this config is disabled, the logging won't be part
of the binary at all.

When that config option is enabled, what is the difference between the macro
and a dynamic debug call? Eventually both are rte_log calls; only the macro
has a dependency on the static RTE_LOGTYPE_xxx definitions.

> 
>> <...>
>>
>>> +static inline void
>>> +avf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union avf_rx_desc
>>> +*rxdp) {
>>> +	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
>>> +		(1 << AVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
>>> +		mb->ol_flags |= PKT_RX_VLAN_PKT | PKT_RX_VLAN_STRIPPED;
>>
>> Please new flag instead of PKT_RX_VLAN_PKT and please be sure flag is
>> correctly used with its new meaning.

Just a reminder on this one: the new flag is "PKT_RX_VLAN", which means the
mbuf contains VLAN information.
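
A tiny sketch of the intended usage (the helper name is made up):

#include <rte_mbuf.h>

/* set PKT_RX_VLAN whenever vlan_tci is filled in, plus
 * PKT_RX_VLAN_STRIPPED when the tag was stripped by the hardware */
static inline void
avf_mbuf_set_vlan(struct rte_mbuf *mb, uint16_t tci)
{
	mb->vlan_tci = tci;
	mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
}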

>>
>> <...>
>>
>>> +/* TX prep functions */
>>> +uint16_t
>>> +avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
>>> +	      uint16_t nb_pkts)
>>> +{
>>> +	int i, ret;
>>> +	uint64_t ol_flags;
>>> +	struct rte_mbuf *m;
>>> +
>>> +	for (i = 0; i < nb_pkts; i++) {
>>> +		m = tx_pkts[i];
>>> +		ol_flags = m->ol_flags;
>>> +
>>> +		/* m->nb_segs is uint8_t, so nb_segs is always less than
>>> +		 * AVF_TX_MAX_SEG.
>>> +		 * We check only a condition for nb_segs >
>> AVF_TX_MAX_MTU_SEG.
>>> +		 */
>>
>> This is wrong, nb_segs is 16bits now, this check has been updated in i40e
>> already.
>>
> Will change, Thanks
> 
>> <...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [RFC 4/9] net/avf: enable basic Rx Tx func
  2017-11-22  0:57     ` Stephen Hemminger
@ 2017-11-22 23:15       ` Ferruh Yigit
  0 siblings, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2017-11-22 23:15 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Jingjing Wu, dev, wenzhuo.lu

On 11/21/2017 4:57 PM, Stephen Hemminger wrote:
> On Tue, 21 Nov 2017 16:06:24 -0800
> Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> 
>> On 10/20/2017 1:26 AM, Jingjing Wu wrote:
>>> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>  
>>
>> <...>
>>
>>> @@ -214,6 +214,9 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
>>>  # Compile burst-oriented AVF PMD driver
>>>  #
>>>  CONFIG_RTE_LIBRTE_AVF_PMD=y
>>> +CONFIG_RTE_LIBRTE_AVF_RX_DUMP=n
>>> +CONFIG_RTE_LIBRTE_AVF_TX_DUMP=n  
>>
>> Are these config options used?
>>
>> <...>
>>
>>> @@ -49,4 +49,18 @@ extern int avf_logtype_driver;
>>>  	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
>>>  #define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
>>>  
>>> +#ifdef RTE_LIBRTE_AVF_DEBUG_TX  
>>
>> Is this defined anywhere?
>>
>>> +#define PMD_TX_LOG(level, fmt, args...) \
>>> +	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)  
>>
>> Instead should RTE_LOG_DP used?
>> And since other macros uses dynamic log functions, why here use static method,
>> what do you think using new method for data path logs as well?
>>
>> <...>
>>
>>> +static inline void
>>> +avf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union avf_rx_desc *rxdp)
>>> +{
>>> +	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
>>> +		(1 << AVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
>>> +		mb->ol_flags |= PKT_RX_VLAN_PKT | PKT_RX_VLAN_STRIPPED;  
>>
>> Please new flag instead of PKT_RX_VLAN_PKT and please be sure flag is correctly
>> used with its new meaning.
>>
>> <...>
>>
>>> +/* TX prep functions */
>>> +uint16_t
>>> +avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
>>> +	      uint16_t nb_pkts)
>>> +{
>>> +	int i, ret;
>>> +	uint64_t ol_flags;
>>> +	struct rte_mbuf *m;
>>> +
>>> +	for (i = 0; i < nb_pkts; i++) {
>>> +		m = tx_pkts[i];
>>> +		ol_flags = m->ol_flags;
>>> +
>>> +		/* m->nb_segs is uint8_t, so nb_segs is always less than
>>> +		 * AVF_TX_MAX_SEG.
>>> +		 * We check only a condition for nb_segs > AVF_TX_MAX_MTU_SEG.
>>> +		 */  
>>
>> This is wrong, nb_segs is 16bits now, this check has been updated in i40e already.
>>
>> <...>
> 
> Most drivers base code of one of the legacy Intel drivers.
> Why not fix ixgbe (or similar) to be a "follow this model" reference?
> 
> It is unreasonable to expect new drivers to follow a different pattern.

You are right, updating the existing drivers will increase the chance of new
drivers being correct the first time.

That said, it is harder to get community-driven updates for existing drivers,
but easier to ask new drivers to comply with the latest libraries, since there
is already a resource working on developing the driver.

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [RFC 4/9] net/avf: enable basic Rx Tx func
  2017-11-22 22:38       ` Ferruh Yigit
@ 2017-11-23  1:17         ` Wu, Jingjing
  0 siblings, 0 replies; 151+ messages in thread
From: Wu, Jingjing @ 2017-11-23  1:17 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Lu, Wenzhuo



> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Thursday, November 23, 2017 6:39 AM
> To: Wu, Jingjing <jingjing.wu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [RFC 4/9] net/avf: enable basic Rx Tx func
> 
> On 11/21/2017 11:55 PM, Wu, Jingjing wrote:
> >
> >
> >> -----Original Message-----
> >> From: Yigit, Ferruh
> >> Sent: Wednesday, November 22, 2017 8:06 AM
> >> To: Wu, Jingjing <jingjing.wu@intel.com>; dev@dpdk.org
> >> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> >> Subject: Re: [dpdk-dev] [RFC 4/9] net/avf: enable basic Rx Tx func
> >>
> >> On 10/20/2017 1:26 AM, Jingjing Wu wrote:
> >>> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >>
> >> <...>
> >>
> >>> @@ -214,6 +214,9 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
> >>>  # Compile burst-oriented AVF PMD driver  #
> >>> CONFIG_RTE_LIBRTE_AVF_PMD=y
> >>> +CONFIG_RTE_LIBRTE_AVF_RX_DUMP=n
> >>> +CONFIG_RTE_LIBRTE_AVF_TX_DUMP=n
> >>
> >> Are these config options used?
> >>
> > Yes, some macros are defined in avf_rxtx.h for dump descriptors. Will merge
> them with AVF_DEBUG_TX/RX.
> >
> >> <...>
> >>
> >>> @@ -49,4 +49,18 @@ extern int avf_logtype_driver;
> >>>  	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)  #define
> >>> PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
> >>>
> >>> +#ifdef RTE_LIBRTE_AVF_DEBUG_TX
> >>
> >> Is this defined anywhere?
> > Will merge it with AVF_TX_DUMP.
> >
> >>
> >>> +#define PMD_TX_LOG(level, fmt, args...) \
> >>> +	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
> >>
> >> Instead should RTE_LOG_DP used?
> >> And since other macros uses dynamic log functions, why here use
> >> static method, what do you think using new method for data path logs as
> well?
> >>
> > This is used for fast path debug, so static macro will benefit performance.
> 
> How it will benefit?
> 
> The PMD_TX_LOG macro controlled by a specific compile time option,
> RTE_LIBRTE_AVF_DEBUG_TX. If this config is disabled the logging won't be part
> of all binary at all.
> 
> When that config option enabled, what is the difference with macro and
> dynamic debug call? Eventually both are rte_log calls. Only macro has
> dependency to RTE_LOGTYPE_xxx static definitions.
> 
I was thinking about RTE_LOG; with RTE_LOG_DP it is fine for performance, but
it cannot distinguish which driver we are going to debug, because there is
only one check against RTE_LOG_DP_LEVEL. Think about the case where there is
more than one driver but we just want to debug one of them.
 
Thanks
Jingjing

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v2 00/14] add new avf PMD
  2017-10-20  8:26 [dpdk-dev] [RFC 0/9] add new avf PMD Jingjing Wu
                   ` (9 preceding siblings ...)
  2017-11-21 23:58 ` [dpdk-dev] [RFC 0/9] add new avf PMD Ferruh Yigit
@ 2017-11-24  6:33 ` Jingjing Wu
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 01/14] net/avf/base: add base code for " Jingjing Wu
                     ` (16 more replies)
  10 siblings, 17 replies; 151+ messages in thread
From: Jingjing Wu @ 2017-11-24  6:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Adaptive Virtual Function (AVF) Driver is VF driver which supports
for all future Intel devices without requiring a VM update.
It promises the basic high speed connectivity. And since this happens
to be an adaptive VF driver, every new drop of the VF driver would
add more and more advanced features that can be turned on in the VM
if the underlying HW device supports those advanced features. Most
importantly in a device agnostic way without ever compromising on the
base functionality. All the AVF's interface need to follow AVF spec,
and AVF compliant interface is supported start from the
Intel® Ethernet Controller 710 Series.

This patch set adds AVF PMD supporting.
 - Device initialization 
 - Queue setup and Device start
 - Basic Rx and Tx.
 - MAC address offload feature
 - Vlan offload feature
 - RSS offload feature
 - Vectored Rx and Tx func
 - Bulk allocate Rx func
 - Rx interrupt support
 - Statistics query

v2 changes:
 - rebase to 17.11
 - add vectored Rx and Tx func
 - add bulk allocate Rx func
 - add Rx interrupt support
 - add statistics query
 - fix coding style issue
 - remove extra compile flags in Makefile
 - add doc to list avf PMD features
 - fix lut setting when rss is disabled
 - fix log init missing
 - remove rx_descriptor_done

Jingjing Wu (13):
  net/avf/base: add base code for avf PMD
  net/avf: initilization of avf PMD
  net/avf: enable queue and device
  net/avf: enable basic Rx Tx func
  net/avf: enable link status update
  net/avf: enable ops to get stats
  net/avf: enable ops for MAC VLAN offload
  net/avf: enable ops for RSS setting
  net/avf: enable ops for MTU setting
  net/avf: enable ops to check queue info and status
  net/i40e: support AVF basic interface
  net/avf: enable sse vector Rx Tx func
  net/avf: enable Rx interrupt support

Wenzhuo Lu (1):
  net/avf: enable bulk allocate Rx func

 MAINTAINERS                             |    6 +
 config/common_base                      |   10 +
 doc/guides/nics/features/avf.ini        |   38 +
 doc/guides/nics/features/avf_vec.ini    |   38 +
 doc/guides/nics/intel_vf.rst            |   16 +-
 drivers/net/Makefile                    |    1 +
 drivers/net/avf/Makefile                |   63 +
 drivers/net/avf/avf.h                   |  246 +++
 drivers/net/avf/avf_ethdev.c            | 1479 ++++++++++++++++
 drivers/net/avf/avf_log.h               |   73 +
 drivers/net/avf/avf_rxtx.c              | 1991 ++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h              |  287 ++++
 drivers/net/avf/avf_rxtx_vec_common.h   |  238 +++
 drivers/net/avf/avf_rxtx_vec_sse.c      |  680 ++++++++
 drivers/net/avf/avf_vchnl.c             |  843 ++++++++++
 drivers/net/avf/base/avf_adminq.c       | 1002 +++++++++++
 drivers/net/avf/base/avf_adminq.h       |  169 ++
 drivers/net/avf/base/avf_adminq_cmd.h   | 2807 +++++++++++++++++++++++++++++++
 drivers/net/avf/base/avf_alloc.h        |   65 +
 drivers/net/avf/base/avf_common.c       | 1843 ++++++++++++++++++++
 drivers/net/avf/base/avf_devids.h       |   43 +
 drivers/net/avf/base/avf_hmc.h          |  245 +++
 drivers/net/avf/base/avf_lan_hmc.h      |  200 +++
 drivers/net/avf/base/avf_osdep.h        |  192 +++
 drivers/net/avf/base/avf_prototype.h    |  206 +++
 drivers/net/avf/base/avf_register.h     |  346 ++++
 drivers/net/avf/base/avf_status.h       |  107 ++
 drivers/net/avf/base/avf_type.h         | 1990 ++++++++++++++++++++++
 drivers/net/avf/base/virtchnl.h         |  786 +++++++++
 drivers/net/avf/rte_pmd_avf_version.map |    4 +
 drivers/net/i40e/i40e_ethdev.c          |   64 +-
 drivers/net/i40e/i40e_ethdev.h          |    4 +
 drivers/net/i40e/i40e_pf.c              |  137 +-
 drivers/net/i40e/i40e_pf.h              |    6 +
 mk/rte.app.mk                           |    1 +
 35 files changed, 16201 insertions(+), 25 deletions(-)
 create mode 100644 doc/guides/nics/features/avf.ini
 create mode 100644 doc/guides/nics/features/avf_vec.ini
 create mode 100644 drivers/net/avf/Makefile
 create mode 100644 drivers/net/avf/avf.h
 create mode 100644 drivers/net/avf/avf_ethdev.c
 create mode 100644 drivers/net/avf/avf_log.h
 create mode 100644 drivers/net/avf/avf_rxtx.c
 create mode 100644 drivers/net/avf/avf_rxtx.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_common.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_sse.c
 create mode 100644 drivers/net/avf/avf_vchnl.c
 create mode 100644 drivers/net/avf/base/avf_adminq.c
 create mode 100644 drivers/net/avf/base/avf_adminq.h
 create mode 100644 drivers/net/avf/base/avf_adminq_cmd.h
 create mode 100644 drivers/net/avf/base/avf_alloc.h
 create mode 100644 drivers/net/avf/base/avf_common.c
 create mode 100644 drivers/net/avf/base/avf_devids.h
 create mode 100644 drivers/net/avf/base/avf_hmc.h
 create mode 100644 drivers/net/avf/base/avf_lan_hmc.h
 create mode 100644 drivers/net/avf/base/avf_osdep.h
 create mode 100644 drivers/net/avf/base/avf_prototype.h
 create mode 100644 drivers/net/avf/base/avf_register.h
 create mode 100644 drivers/net/avf/base/avf_status.h
 create mode 100644 drivers/net/avf/base/avf_type.h
 create mode 100644 drivers/net/avf/base/virtchnl.h
 create mode 100644 drivers/net/avf/rte_pmd_avf_version.map

-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v2 01/14] net/avf/base: add base code for avf PMD
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
@ 2017-11-24  6:33   ` Jingjing Wu
  2017-12-04 19:50     ` Ferruh Yigit
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 02/14] net/avf: initilization of " Jingjing Wu
                     ` (15 subsequent siblings)
  16 siblings, 1 reply; 151+ messages in thread
From: Jingjing Wu @ 2017-11-24  6:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/avf/avf_log.h             |   52 +
 drivers/net/avf/base/avf_adminq.c     | 1002 ++++++++++++
 drivers/net/avf/base/avf_adminq.h     |  169 ++
 drivers/net/avf/base/avf_adminq_cmd.h | 2807 +++++++++++++++++++++++++++++++++
 drivers/net/avf/base/avf_alloc.h      |   65 +
 drivers/net/avf/base/avf_common.c     | 1843 ++++++++++++++++++++++
 drivers/net/avf/base/avf_devids.h     |   43 +
 drivers/net/avf/base/avf_hmc.h        |  245 +++
 drivers/net/avf/base/avf_lan_hmc.h    |  200 +++
 drivers/net/avf/base/avf_osdep.h      |  192 +++
 drivers/net/avf/base/avf_prototype.h  |  206 +++
 drivers/net/avf/base/avf_register.h   |  346 ++++
 drivers/net/avf/base/avf_status.h     |  107 ++
 drivers/net/avf/base/avf_type.h       | 1990 +++++++++++++++++++++++
 drivers/net/avf/base/virtchnl.h       |  786 +++++++++
 15 files changed, 10053 insertions(+)
 create mode 100644 drivers/net/avf/avf_log.h
 create mode 100644 drivers/net/avf/base/avf_adminq.c
 create mode 100644 drivers/net/avf/base/avf_adminq.h
 create mode 100644 drivers/net/avf/base/avf_adminq_cmd.h
 create mode 100644 drivers/net/avf/base/avf_alloc.h
 create mode 100644 drivers/net/avf/base/avf_common.c
 create mode 100644 drivers/net/avf/base/avf_devids.h
 create mode 100644 drivers/net/avf/base/avf_hmc.h
 create mode 100644 drivers/net/avf/base/avf_lan_hmc.h
 create mode 100644 drivers/net/avf/base/avf_osdep.h
 create mode 100644 drivers/net/avf/base/avf_prototype.h
 create mode 100644 drivers/net/avf/base/avf_register.h
 create mode 100644 drivers/net/avf/base/avf_status.h
 create mode 100644 drivers/net/avf/base/avf_type.h
 create mode 100644 drivers/net/avf/base/virtchnl.h

diff --git a/drivers/net/avf/avf_log.h b/drivers/net/avf/avf_log.h
new file mode 100644
index 0000000..431f0f3
--- /dev/null
+++ b/drivers/net/avf/avf_log.h
@@ -0,0 +1,52 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AVF_LOGS_H_
+#define _AVF_LOGS_H_
+
+extern int avf_logtype_init;
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, avf_logtype_init, "%s(): " fmt "\n", \
+		__func__, ##args)
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+extern int avf_logtype_driver;
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, avf_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
+
+#endif /* _AVF_LOGS_H_ */
diff --git a/drivers/net/avf/base/avf_adminq.c b/drivers/net/avf/base/avf_adminq.c
new file mode 100644
index 0000000..1e3aedc
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq.c
@@ -0,0 +1,1002 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "avf_status.h"
+#include "avf_type.h"
+#include "avf_register.h"
+#include "avf_adminq.h"
+#include "avf_prototype.h"
+
+/**
+ *  avf_adminq_init_regs - Initialize AdminQ registers
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the alloc_asq and alloc_arq functions have already been called
+ **/
+STATIC void avf_adminq_init_regs(struct avf_hw *hw)
+{
+	/* set head and tail registers in our local struct */
+	if (avf_is_vf(hw)) {
+		hw->aq.asq.tail = AVF_ATQT1;
+		hw->aq.asq.head = AVF_ATQH1;
+		hw->aq.asq.len  = AVF_ATQLEN1;
+		hw->aq.asq.bal  = AVF_ATQBAL1;
+		hw->aq.asq.bah  = AVF_ATQBAH1;
+		hw->aq.arq.tail = AVF_ARQT1;
+		hw->aq.arq.head = AVF_ARQH1;
+		hw->aq.arq.len  = AVF_ARQLEN1;
+		hw->aq.arq.bal  = AVF_ARQBAL1;
+		hw->aq.arq.bah  = AVF_ARQBAH1;
+	}
+}
+
+/**
+ *  avf_alloc_adminq_asq_ring - Allocate Admin Queue send rings
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_alloc_adminq_asq_ring(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	ret_code = avf_allocate_dma_mem(hw, &hw->aq.asq.desc_buf,
+					 avf_mem_atq_ring,
+					 (hw->aq.num_asq_entries *
+					 sizeof(struct avf_aq_desc)),
+					 AVF_ADMINQ_DESC_ALIGNMENT);
+	if (ret_code)
+		return ret_code;
+
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.asq.cmd_buf,
+					  (hw->aq.num_asq_entries *
+					  sizeof(struct avf_asq_cmd_details)));
+	if (ret_code) {
+		avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+		return ret_code;
+	}
+
+	return ret_code;
+}
+
+/**
+ *  avf_alloc_adminq_arq_ring - Allocate Admin Queue receive rings
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_alloc_adminq_arq_ring(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	ret_code = avf_allocate_dma_mem(hw, &hw->aq.arq.desc_buf,
+					 avf_mem_arq_ring,
+					 (hw->aq.num_arq_entries *
+					 sizeof(struct avf_aq_desc)),
+					 AVF_ADMINQ_DESC_ALIGNMENT);
+
+	return ret_code;
+}
+
+/**
+ *  avf_free_adminq_asq - Free Admin Queue send rings
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the posted send buffers have already been cleaned
+ *  and de-allocated
+ **/
+void avf_free_adminq_asq(struct avf_hw *hw)
+{
+	avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+}
+
+/**
+ *  avf_free_adminq_arq - Free Admin Queue receive rings
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the posted receive buffers have already been cleaned
+ *  and de-allocated
+ **/
+void avf_free_adminq_arq(struct avf_hw *hw)
+{
+	avf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+}
+
+/**
+ *  avf_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC enum avf_status_code avf_alloc_arq_bufs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+	struct avf_aq_desc *desc;
+	struct avf_dma_mem *bi;
+	int i;
+
+	/* We'll be allocating the buffer info memory first, then we can
+	 * allocate the mapped buffers for the event processing
+	 */
+
+	/* buffer_info structures do not need alignment */
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.arq.dma_head,
+		(hw->aq.num_arq_entries * sizeof(struct avf_dma_mem)));
+	if (ret_code)
+		goto alloc_arq_bufs;
+	hw->aq.arq.r.arq_bi = (struct avf_dma_mem *)hw->aq.arq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_arq_entries; i++) {
+		bi = &hw->aq.arq.r.arq_bi[i];
+		ret_code = avf_allocate_dma_mem(hw, bi,
+						 avf_mem_arq_buf,
+						 hw->aq.arq_buf_size,
+						 AVF_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_arq_bufs;
+
+		/* now configure the descriptors for use */
+		desc = AVF_ADMINQ_DESC(hw->aq.arq, i);
+
+		desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_BUF);
+		if (hw->aq.arq_buf_size > AVF_AQ_LARGE_BUF)
+			desc->flags |= CPU_TO_LE16(AVF_AQ_FLAG_LB);
+		desc->opcode = 0;
+		/* This is in accordance with Admin queue design, there is no
+		 * register for buffer size configuration
+		 */
+		desc->datalen = CPU_TO_LE16((u16)bi->size);
+		desc->retval = 0;
+		desc->cookie_high = 0;
+		desc->cookie_low = 0;
+		desc->params.external.addr_high =
+			CPU_TO_LE32(AVF_HI_DWORD(bi->pa));
+		desc->params.external.addr_low =
+			CPU_TO_LE32(AVF_LO_DWORD(bi->pa));
+		desc->params.external.param0 = 0;
+		desc->params.external.param1 = 0;
+	}
+
+alloc_arq_bufs:
+	return ret_code;
+
+unwind_alloc_arq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		avf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+	avf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ *  avf_alloc_asq_bufs - Allocate empty buffer structs for the send queue
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC enum avf_status_code avf_alloc_asq_bufs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+	struct avf_dma_mem *bi;
+	int i;
+
+	/* No mapped memory needed yet, just the buffer info structures */
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.asq.dma_head,
+		(hw->aq.num_asq_entries * sizeof(struct avf_dma_mem)));
+	if (ret_code)
+		goto alloc_asq_bufs;
+	hw->aq.asq.r.asq_bi = (struct avf_dma_mem *)hw->aq.asq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_asq_entries; i++) {
+		bi = &hw->aq.asq.r.asq_bi[i];
+		ret_code = avf_allocate_dma_mem(hw, bi,
+						 avf_mem_asq_buf,
+						 hw->aq.asq_buf_size,
+						 AVF_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_asq_bufs;
+	}
+alloc_asq_bufs:
+	return ret_code;
+
+unwind_alloc_asq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		avf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+	avf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ *  avf_free_arq_bufs - Free receive queue buffer info elements
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC void avf_free_arq_bufs(struct avf_hw *hw)
+{
+	int i;
+
+	/* free descriptors */
+	for (i = 0; i < hw->aq.num_arq_entries; i++)
+		avf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+
+	/* free the descriptor memory */
+	avf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+
+	/* free the dma header */
+	avf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+}
+
+/**
+ *  avf_free_asq_bufs - Free send queue buffer info elements
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC void avf_free_asq_bufs(struct avf_hw *hw)
+{
+	int i;
+
+	/* only unmap if the address is non-NULL */
+	for (i = 0; i < hw->aq.num_asq_entries; i++)
+		if (hw->aq.asq.r.asq_bi[i].pa)
+			avf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+
+	/* free the buffer info list */
+	avf_free_virt_mem(hw, &hw->aq.asq.cmd_buf);
+
+	/* free the descriptor memory */
+	avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+
+	/* free the dma header */
+	avf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+}
+
+/**
+ *  avf_config_asq_regs - configure ASQ registers
+ *  @hw: pointer to the hardware structure
+ *
+ *  Configure base address and length registers for the transmit queue
+ **/
+STATIC enum avf_status_code avf_config_asq_regs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+
+	/* set starting point */
+#ifdef INTEGRATED_VF
+	if (avf_is_vf(hw))
+		wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
+					  AVF_ATQLEN1_ATQENABLE_MASK));
+#else
+	wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
+				  AVF_ATQLEN1_ATQENABLE_MASK));
+#endif /* INTEGRATED_VF */
+	wr32(hw, hw->aq.asq.bal, AVF_LO_DWORD(hw->aq.asq.desc_buf.pa));
+	wr32(hw, hw->aq.asq.bah, AVF_HI_DWORD(hw->aq.asq.desc_buf.pa));
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.asq.bal);
+	if (reg != AVF_LO_DWORD(hw->aq.asq.desc_buf.pa))
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ *  avf_config_arq_regs - ARQ register configuration
+ *  @hw: pointer to the hardware structure
+ *
+ * Configure base address and length registers for the receive (event queue)
+ **/
+STATIC enum avf_status_code avf_config_arq_regs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+
+	/* set starting point */
+#ifdef INTEGRATED_VF
+	if (avf_is_vf(hw))
+		wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
+					  AVF_ARQLEN1_ARQENABLE_MASK));
+#else
+	wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
+				  AVF_ARQLEN1_ARQENABLE_MASK));
+#endif /* INTEGRATED_VF */
+	wr32(hw, hw->aq.arq.bal, AVF_LO_DWORD(hw->aq.arq.desc_buf.pa));
+	wr32(hw, hw->aq.arq.bah, AVF_HI_DWORD(hw->aq.arq.desc_buf.pa));
+
+	/* Update tail in the HW to post pre-allocated buffers */
+	wr32(hw, hw->aq.arq.tail, hw->aq.num_arq_entries - 1);
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.arq.bal);
+	if (reg != AVF_LO_DWORD(hw->aq.arq.desc_buf.pa))
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ *  avf_init_asq - main initialization routine for ASQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  This is the main initialization routine for the Admin Send Queue
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.arq_buf_size
+ *
+ *  Do *NOT* hold the lock when calling this as the memory allocation routines
+ *  called are not going to be atomic context safe
+ **/
+enum avf_status_code avf_init_asq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (hw->aq.asq.count > 0) {
+		/* queue already initialized */
+		ret_code = AVF_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.asq.next_to_use = 0;
+	hw->aq.asq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = avf_alloc_adminq_asq_ring(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = avf_alloc_asq_bufs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = avf_config_asq_regs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* success! */
+	hw->aq.asq.count = hw->aq.num_asq_entries;
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	avf_free_adminq_asq(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_init_arq - initialize ARQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main initialization routine for the Admin Receive (Event) Queue.
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.arq_buf_size
+ *
+ *  Do *NOT* hold the lock when calling this as the memory allocation routines
+ *  called are not going to be atomic context safe
+ **/
+enum avf_status_code avf_init_arq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (hw->aq.arq.count > 0) {
+		/* queue already initialized */
+		ret_code = AVF_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.arq.next_to_use = 0;
+	hw->aq.arq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = avf_alloc_adminq_arq_ring(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = avf_alloc_arq_bufs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = avf_config_arq_regs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* success! */
+	hw->aq.arq.count = hw->aq.num_arq_entries;
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	avf_free_adminq_arq(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_asq - shutdown the ASQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main shutdown routine for the Admin Send Queue
+ **/
+enum avf_status_code avf_shutdown_asq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	avf_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	if (hw->aq.asq.count == 0) {
+		ret_code = AVF_ERR_NOT_READY;
+		goto shutdown_asq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+	wr32(hw, hw->aq.asq.len, 0);
+	wr32(hw, hw->aq.asq.bal, 0);
+	wr32(hw, hw->aq.asq.bah, 0);
+
+	hw->aq.asq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	avf_free_asq_bufs(hw);
+
+shutdown_asq_out:
+	avf_release_spinlock(&hw->aq.asq_spinlock);
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_arq - shutdown ARQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main shutdown routine for the Admin Receive Queue
+ **/
+enum avf_status_code avf_shutdown_arq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	avf_acquire_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->aq.arq.count == 0) {
+		ret_code = AVF_ERR_NOT_READY;
+		goto shutdown_arq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+	wr32(hw, hw->aq.arq.len, 0);
+	wr32(hw, hw->aq.arq.bal, 0);
+	wr32(hw, hw->aq.arq.bah, 0);
+
+	hw->aq.arq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	avf_free_arq_bufs(hw);
+
+shutdown_arq_out:
+	avf_release_spinlock(&hw->aq.arq_spinlock);
+	return ret_code;
+}
+
+/**
+ *  avf_init_adminq - main initialization routine for Admin Queue
+ *  @hw: pointer to the hardware structure
+ *
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.num_arq_entries
+ *     - hw->aq.arq_buf_size
+ *     - hw->aq.asq_buf_size
+ **/
+enum avf_status_code avf_init_adminq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+	avf_init_spinlock(&hw->aq.asq_spinlock);
+	avf_init_spinlock(&hw->aq.arq_spinlock);
+
+	/* Set up register offsets */
+	avf_adminq_init_regs(hw);
+
+	/* setup ASQ command write back timeout */
+	hw->aq.asq_cmd_timeout = AVF_ASQ_CMD_TIMEOUT;
+
+	/* allocate the ASQ */
+	ret_code = avf_init_asq(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_destroy_spinlocks;
+
+	/* allocate the ARQ */
+	ret_code = avf_init_arq(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_asq;
+
+	ret_code = AVF_SUCCESS;
+
+	/* success! */
+	goto init_adminq_exit;
+
+init_adminq_free_asq:
+	avf_shutdown_asq(hw);
+init_adminq_destroy_spinlocks:
+	avf_destroy_spinlock(&hw->aq.asq_spinlock);
+	avf_destroy_spinlock(&hw->aq.arq_spinlock);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_adminq - shutdown routine for the Admin Queue
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_shutdown_adminq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (avf_check_asq_alive(hw))
+		avf_aq_queue_shutdown(hw, true);
+
+	avf_shutdown_asq(hw);
+	avf_shutdown_arq(hw);
+	avf_destroy_spinlock(&hw->aq.asq_spinlock);
+	avf_destroy_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->nvm_buff.va)
+		avf_free_virt_mem(hw, &hw->nvm_buff);
+
+	return ret_code;
+}
+
+/**
+ *  avf_clean_asq - cleans Admin send queue
+ *  @hw: pointer to the hardware structure
+ *
+ *  returns the number of free desc
+ **/
+u16 avf_clean_asq(struct avf_hw *hw)
+{
+	struct avf_adminq_ring *asq = &(hw->aq.asq);
+	struct avf_asq_cmd_details *details;
+	u16 ntc = asq->next_to_clean;
+	struct avf_aq_desc desc_cb;
+	struct avf_aq_desc *desc;
+
+	desc = AVF_ADMINQ_DESC(*asq, ntc);
+	details = AVF_ADMINQ_DETAILS(*asq, ntc);
+	while (rd32(hw, hw->aq.asq.head) != ntc) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "ntc %d head %d.\n", ntc, rd32(hw, hw->aq.asq.head));
+
+		if (details->callback) {
+			AVF_ADMINQ_CALLBACK cb_func =
+					(AVF_ADMINQ_CALLBACK)details->callback;
+			avf_memcpy(&desc_cb, desc, sizeof(struct avf_aq_desc),
+				    AVF_DMA_TO_DMA);
+			cb_func(hw, &desc_cb);
+		}
+		avf_memset(desc, 0, sizeof(*desc), AVF_DMA_MEM);
+		avf_memset(details, 0, sizeof(*details), AVF_NONDMA_MEM);
+		ntc++;
+		if (ntc == asq->count)
+			ntc = 0;
+		desc = AVF_ADMINQ_DESC(*asq, ntc);
+		details = AVF_ADMINQ_DETAILS(*asq, ntc);
+	}
+
+	asq->next_to_clean = ntc;
+
+	return AVF_DESC_UNUSED(asq);
+}
+
+/**
+ *  avf_asq_done - check if FW has processed the Admin Send Queue
+ *  @hw: pointer to the hw struct
+ *
+ *  Returns true if the firmware has processed all descriptors on the
+ *  admin send queue. Returns false if there are still requests pending.
+ **/
+bool avf_asq_done(struct avf_hw *hw)
+{
+	/* AQ designers suggest use of head for better
+	 * timing reliability than DD bit
+	 */
+	return rd32(hw, hw->aq.asq.head) == hw->aq.asq.next_to_use;
+
+}
+
+/**
+ *  avf_asq_send_command - send command to Admin Queue
+ *  @hw: pointer to the hw struct
+ *  @desc: prefilled descriptor describing the command (non DMA mem)
+ *  @buff: buffer to use for indirect commands
+ *  @buff_size: size of buffer for indirect commands
+ *  @cmd_details: pointer to command details structure
+ *
+ *  This is the main send command driver routine for the Admin Queue send
+ *  queue.  It runs the queue, cleans the queue, etc
+ **/
+enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
+				struct avf_aq_desc *desc,
+				void *buff, /* can be NULL */
+				u16  buff_size,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_dma_mem *dma_buff = NULL;
+	struct avf_asq_cmd_details *details;
+	struct avf_aq_desc *desc_on_ring;
+	bool cmd_completed = false;
+	u16  retval = 0;
+	u32  val = 0;
+
+	avf_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	hw->aq.asq_last_status = AVF_AQ_RC_OK;
+
+	if (hw->aq.asq.count == 0) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Admin queue not initialized.\n");
+		status = AVF_ERR_QUEUE_EMPTY;
+		goto asq_send_command_error;
+	}
+
+	val = rd32(hw, hw->aq.asq.head);
+	if (val >= hw->aq.num_asq_entries) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: head overrun at %d\n", val);
+		status = AVF_ERR_QUEUE_EMPTY;
+		goto asq_send_command_error;
+	}
+
+	details = AVF_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use);
+	if (cmd_details) {
+		avf_memcpy(details,
+			    cmd_details,
+			    sizeof(struct avf_asq_cmd_details),
+			    AVF_NONDMA_TO_NONDMA);
+
+		/* If the cmd_details are defined copy the cookie.  The
+		 * CPU_TO_LE32 is not needed here because the data is ignored
+		 * by the FW, only used by the driver
+		 */
+		if (details->cookie) {
+			desc->cookie_high =
+				CPU_TO_LE32(AVF_HI_DWORD(details->cookie));
+			desc->cookie_low =
+				CPU_TO_LE32(AVF_LO_DWORD(details->cookie));
+		}
+	} else {
+		avf_memset(details, 0,
+			    sizeof(struct avf_asq_cmd_details),
+			    AVF_NONDMA_MEM);
+	}
+
+	/* clear requested flags and then set additional flags if defined */
+	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
+	desc->flags |= CPU_TO_LE16(details->flags_ena);
+
+	if (buff_size > hw->aq.asq_buf_size) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Invalid buffer size: %d.\n",
+			   buff_size);
+		status = AVF_ERR_INVALID_SIZE;
+		goto asq_send_command_error;
+	}
+
+	if (details->postpone && !details->async) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Async flag not set along with postpone flag");
+		status = AVF_ERR_PARAM;
+		goto asq_send_command_error;
+	}
+
+	/* call clean and check queue available function to reclaim the
+	 * descriptors that were processed by FW, the function returns the
+	 * number of desc available
+	 */
+	/* the clean function called here could be called in a separate thread
+	 * in case of asynchronous completions
+	 */
+	if (avf_clean_asq(hw) == 0) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Error queue is full.\n");
+		status = AVF_ERR_ADMIN_QUEUE_FULL;
+		goto asq_send_command_error;
+	}
+
+	/* initialize the temp desc pointer with the right desc */
+	desc_on_ring = AVF_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use);
+
+	/* if the desc is available copy the temp desc to the right place */
+	avf_memcpy(desc_on_ring, desc, sizeof(struct avf_aq_desc),
+		    AVF_NONDMA_TO_DMA);
+
+	/* if buff is not NULL assume indirect command */
+	if (buff != NULL) {
+		dma_buff = &(hw->aq.asq.r.asq_bi[hw->aq.asq.next_to_use]);
+		/* copy the user buff into the respective DMA buff */
+		avf_memcpy(dma_buff->va, buff, buff_size,
+			    AVF_NONDMA_TO_DMA);
+		desc_on_ring->datalen = CPU_TO_LE16(buff_size);
+
+		/* Update the address values in the desc with the pa value
+		 * for respective buffer
+		 */
+		desc_on_ring->params.external.addr_high =
+				CPU_TO_LE32(AVF_HI_DWORD(dma_buff->pa));
+		desc_on_ring->params.external.addr_low =
+				CPU_TO_LE32(AVF_LO_DWORD(dma_buff->pa));
+	}
+
+	/* bump the tail */
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc_on_ring,
+		      buff, buff_size);
+	(hw->aq.asq.next_to_use)++;
+	if (hw->aq.asq.next_to_use == hw->aq.asq.count)
+		hw->aq.asq.next_to_use = 0;
+	if (!details->postpone)
+		wr32(hw, hw->aq.asq.tail, hw->aq.asq.next_to_use);
+
+	/* if cmd_details are not defined or async flag is not set,
+	 * we need to wait for desc write back
+	 */
+	if (!details->async && !details->postpone) {
+		u32 total_delay = 0;
+
+		do {
+			/* AQ designers suggest use of head for better
+			 * timing reliability than DD bit
+			 */
+			if (avf_asq_done(hw))
+				break;
+			avf_usec_delay(50);
+			total_delay += 50;
+		} while (total_delay < hw->aq.asq_cmd_timeout);
+	}
+
+	/* if ready, copy the desc back to temp */
+	if (avf_asq_done(hw)) {
+		avf_memcpy(desc, desc_on_ring, sizeof(struct avf_aq_desc),
+			    AVF_DMA_TO_NONDMA);
+		if (buff != NULL)
+			avf_memcpy(buff, dma_buff->va, buff_size,
+				    AVF_DMA_TO_NONDMA);
+		retval = LE16_TO_CPU(desc->retval);
+		if (retval != 0) {
+			avf_debug(hw,
+				   AVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: Command completed with error 0x%X.\n",
+				   retval);
+
+			/* strip off FW internal code */
+			retval &= 0xff;
+		}
+		cmd_completed = true;
+		if ((enum avf_admin_queue_err)retval == AVF_AQ_RC_OK)
+			status = AVF_SUCCESS;
+		else
+			status = AVF_ERR_ADMIN_QUEUE_ERROR;
+		hw->aq.asq_last_status = (enum avf_admin_queue_err)retval;
+	}
+
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+		   "AQTX: desc and buffer writeback:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc, buff, buff_size);
+
+	/* save writeback aq if requested */
+	if (details->wb_desc)
+		avf_memcpy(details->wb_desc, desc_on_ring,
+			    sizeof(struct avf_aq_desc), AVF_DMA_TO_NONDMA);
+
+	/* update the error if time out occurred */
+	if ((!cmd_completed) &&
+	    (!details->async && !details->postpone)) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Writeback timeout.\n");
+		status = AVF_ERR_ADMIN_QUEUE_TIMEOUT;
+	}
+
+asq_send_command_error:
+	avf_release_spinlock(&hw->aq.asq_spinlock);
+	return status;
+}
+
+/**
+ *  avf_fill_default_direct_cmd_desc - AQ descriptor helper function
+ *  @desc:     pointer to the temp descriptor (non DMA mem)
+ *  @opcode:   the opcode can be used to decide which flags to turn off or on
+ *
+ *  Fill the desc with default values
+ **/
+void avf_fill_default_direct_cmd_desc(struct avf_aq_desc *desc,
+				       u16 opcode)
+{
+	/* zero out the desc */
+	avf_memset((void *)desc, 0, sizeof(struct avf_aq_desc),
+		    AVF_NONDMA_MEM);
+	desc->opcode = CPU_TO_LE16(opcode);
+	desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_SI);
+}
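+
+/* Illustrative caller pattern (sketch only, not an API requirement): a direct
+ * command is built on the stack, filled with defaults by this helper, and then
+ * handed to avf_asq_send_command(); hw and status below are the caller's:
+ *
+ *	struct avf_aq_desc desc;
+ *
+ *	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_get_version);
+ *	status = avf_asq_send_command(hw, &desc, NULL, 0, NULL);
+ */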
+
+/**
+ *  avf_clean_arq_element
+ *  @hw: pointer to the hw struct
+ *  @e: event info from the receive descriptor, includes any buffers
+ *  @pending: number of events that could be left to process
+ *
+ *  This function cleans one Admin Receive Queue element and returns
+ *  the contents through e.  It can also return how many events are
+ *  left to process through 'pending'
+ **/
+enum avf_status_code avf_clean_arq_element(struct avf_hw *hw,
+					     struct avf_arq_event_info *e,
+					     u16 *pending)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u16 ntc = hw->aq.arq.next_to_clean;
+	struct avf_aq_desc *desc;
+	struct avf_dma_mem *bi;
+	u16 desc_idx;
+	u16 datalen;
+	u16 flags;
+	u16 ntu;
+
+	/* pre-clean the event info */
+	avf_memset(&e->desc, 0, sizeof(e->desc), AVF_NONDMA_MEM);
+
+	/* take the lock before we start messing with the ring */
+	avf_acquire_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->aq.arq.count == 0) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQRX: Admin queue not initialized.\n");
+		ret_code = AVF_ERR_QUEUE_EMPTY;
+		goto clean_arq_element_err;
+	}
+
+	/* set next_to_use to head */
+#ifdef INTEGRATED_VF
+	if (avf_is_vf(hw))
+		ntu = (rd32(hw, hw->aq.arq.head) & AVF_ARQH1_ARQH_MASK);
+#else
+	ntu = (rd32(hw, hw->aq.arq.head) & AVF_ARQH1_ARQH_MASK);
+#endif /* INTEGRATED_VF */
+	if (ntu == ntc) {
+		/* nothing to do - shouldn't need to update ring's values */
+		ret_code = AVF_ERR_ADMIN_QUEUE_NO_WORK;
+		goto clean_arq_element_out;
+	}
+
+	/* now clean the next descriptor */
+	desc = AVF_ADMINQ_DESC(hw->aq.arq, ntc);
+	desc_idx = ntc;
+
+	hw->aq.arq_last_status =
+		(enum avf_admin_queue_err)LE16_TO_CPU(desc->retval);
+	flags = LE16_TO_CPU(desc->flags);
+	if (flags & AVF_AQ_FLAG_ERR) {
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQRX: Event received with error 0x%X.\n",
+			   hw->aq.arq_last_status);
+	}
+
+	avf_memcpy(&e->desc, desc, sizeof(struct avf_aq_desc),
+		    AVF_DMA_TO_NONDMA);
+	datalen = LE16_TO_CPU(desc->datalen);
+	e->msg_len = min(datalen, e->buf_len);
+	if (e->msg_buf != NULL && (e->msg_len != 0))
+		avf_memcpy(e->msg_buf,
+			    hw->aq.arq.r.arq_bi[desc_idx].va,
+			    e->msg_len, AVF_DMA_TO_NONDMA);
+
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, "AQRX: desc and buffer:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf,
+		      hw->aq.arq_buf_size);
+
+	/* Restore the original datalen and buffer address in the desc;
+	 * FW updates datalen to indicate the event message
+	 * size
+	 */
+	bi = &hw->aq.arq.r.arq_bi[ntc];
+	avf_memset((void *)desc, 0, sizeof(struct avf_aq_desc), AVF_DMA_MEM);
+
+	desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_BUF);
+	if (hw->aq.arq_buf_size > AVF_AQ_LARGE_BUF)
+		desc->flags |= CPU_TO_LE16(AVF_AQ_FLAG_LB);
+	desc->datalen = CPU_TO_LE16((u16)bi->size);
+	desc->params.external.addr_high = CPU_TO_LE32(AVF_HI_DWORD(bi->pa));
+	desc->params.external.addr_low = CPU_TO_LE32(AVF_LO_DWORD(bi->pa));
+
+	/* set tail = the last cleaned desc index. */
+	wr32(hw, hw->aq.arq.tail, ntc);
+	/* ntc is updated to tail + 1 */
+	ntc++;
+	if (ntc == hw->aq.num_arq_entries)
+		ntc = 0;
+	hw->aq.arq.next_to_clean = ntc;
+	hw->aq.arq.next_to_use = ntu;
+
+clean_arq_element_out:
+	/* Set pending if needed, unlock and return */
+	if (pending != NULL)
+		*pending = (ntc > ntu ? hw->aq.arq.count : 0) + (ntu - ntc);
+clean_arq_element_err:
+	avf_release_spinlock(&hw->aq.arq_spinlock);
+
+	return ret_code;
+}
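+
+/* Illustrative service-loop usage (sketch only): the caller provides the
+ * receive buffer via e->msg_buf/e->buf_len (buf, buf_size and ret below are
+ * the caller's) and drains the queue until 'pending' reaches zero:
+ *
+ *	struct avf_arq_event_info event;
+ *	u16 pending;
+ *
+ *	event.buf_len = buf_size;
+ *	event.msg_buf = buf;
+ *	do {
+ *		ret = avf_clean_arq_element(hw, &event, &pending);
+ *	} while (ret == AVF_SUCCESS && pending);
+ */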
diff --git a/drivers/net/avf/base/avf_adminq.h b/drivers/net/avf/base/avf_adminq.h
new file mode 100644
index 0000000..be32fb2
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq.h
@@ -0,0 +1,169 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ADMINQ_H_
+#define _AVF_ADMINQ_H_
+
+#include "avf_osdep.h"
+#include "avf_status.h"
+#include "avf_adminq_cmd.h"
+
+#define AVF_ADMINQ_DESC(R, i)   \
+	(&(((struct avf_aq_desc *)((R).desc_buf.va))[i]))
+
+#define AVF_ADMINQ_DESC_ALIGNMENT 4096
+
+struct avf_adminq_ring {
+	struct avf_virt_mem dma_head;	/* space for dma structures */
+	struct avf_dma_mem desc_buf;	/* descriptor ring memory */
+	struct avf_virt_mem cmd_buf;	/* command buffer memory */
+
+	union {
+		struct avf_dma_mem *asq_bi;
+		struct avf_dma_mem *arq_bi;
+	} r;
+
+	u16 count;		/* Number of descriptors */
+	u16 rx_buf_len;		/* Admin Receive Queue buffer length */
+
+	/* used for interrupt processing */
+	u16 next_to_use;
+	u16 next_to_clean;
+
+	/* used for queue tracking */
+	u32 head;
+	u32 tail;
+	u32 len;
+	u32 bah;
+	u32 bal;
+};
+
+/* ASQ transaction details */
+struct avf_asq_cmd_details {
+	void *callback; /* cast from type AVF_ADMINQ_CALLBACK */
+	u64 cookie;
+	u16 flags_ena;
+	u16 flags_dis;
+	bool async;
+	bool postpone;
+	struct avf_aq_desc *wb_desc;
+};
+
+#define AVF_ADMINQ_DETAILS(R, i)   \
+	(&(((struct avf_asq_cmd_details *)((R).cmd_buf.va))[i]))
+
+/* ARQ event information */
+struct avf_arq_event_info {
+	struct avf_aq_desc desc;
+	u16 msg_len;
+	u16 buf_len;
+	u8 *msg_buf;
+};
+
+/* Admin Queue information */
+struct avf_adminq_info {
+	struct avf_adminq_ring arq;    /* receive queue */
+	struct avf_adminq_ring asq;    /* send queue */
+	u32 asq_cmd_timeout;            /* send queue cmd write back timeout*/
+	u16 num_arq_entries;            /* receive queue depth */
+	u16 num_asq_entries;            /* send queue depth */
+	u16 arq_buf_size;               /* receive queue buffer size */
+	u16 asq_buf_size;               /* send queue buffer size */
+	u16 fw_maj_ver;                 /* firmware major version */
+	u16 fw_min_ver;                 /* firmware minor version */
+	u32 fw_build;                   /* firmware build number */
+	u16 api_maj_ver;                /* api major version */
+	u16 api_min_ver;                /* api minor version */
+
+	struct avf_spinlock asq_spinlock; /* Send queue spinlock */
+	struct avf_spinlock arq_spinlock; /* Receive queue spinlock */
+
+	/* last status values on send and receive queues */
+	enum avf_admin_queue_err asq_last_status;
+	enum avf_admin_queue_err arq_last_status;
+};
+
+/**
+ * avf_aq_rc_to_posix - convert errors to user-land codes
+ * @aq_ret: AdminQ handler error code, can override aq_rc
+ * @aq_rc: AdminQ firmware error code to convert
+ **/
+STATIC INLINE int avf_aq_rc_to_posix(int aq_ret, int aq_rc)
+{
+	int aq_to_posix[] = {
+		0,           /* AVF_AQ_RC_OK */
+		-EPERM,      /* AVF_AQ_RC_EPERM */
+		-ENOENT,     /* AVF_AQ_RC_ENOENT */
+		-ESRCH,      /* AVF_AQ_RC_ESRCH */
+		-EINTR,      /* AVF_AQ_RC_EINTR */
+		-EIO,        /* AVF_AQ_RC_EIO */
+		-ENXIO,      /* AVF_AQ_RC_ENXIO */
+		-E2BIG,      /* AVF_AQ_RC_E2BIG */
+		-EAGAIN,     /* AVF_AQ_RC_EAGAIN */
+		-ENOMEM,     /* AVF_AQ_RC_ENOMEM */
+		-EACCES,     /* AVF_AQ_RC_EACCES */
+		-EFAULT,     /* AVF_AQ_RC_EFAULT */
+		-EBUSY,      /* AVF_AQ_RC_EBUSY */
+		-EEXIST,     /* AVF_AQ_RC_EEXIST */
+		-EINVAL,     /* AVF_AQ_RC_EINVAL */
+		-ENOTTY,     /* AVF_AQ_RC_ENOTTY */
+		-ENOSPC,     /* AVF_AQ_RC_ENOSPC */
+		-ENOSYS,     /* AVF_AQ_RC_ENOSYS */
+		-ERANGE,     /* AVF_AQ_RC_ERANGE */
+		-EPIPE,      /* AVF_AQ_RC_EFLUSHED */
+		-ESPIPE,     /* AVF_AQ_RC_BAD_ADDR */
+		-EROFS,      /* AVF_AQ_RC_EMODE */
+		-EFBIG,      /* AVF_AQ_RC_EFBIG */
+	};
+
+	/* aq_rc is invalid if AQ timed out */
+	if (aq_ret == AVF_ERR_ADMIN_QUEUE_TIMEOUT)
+		return -EAGAIN;
+
+	if (!((u32)aq_rc < (sizeof(aq_to_posix) / sizeof((aq_to_posix)[0]))))
+		return -ERANGE;
+
+	return aq_to_posix[aq_rc];
+}
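+
+/* Illustrative use (sketch only): after a send, translate the driver status
+ * and the firmware return code latched in asq_last_status into a POSIX error
+ * for upper layers (status and err are the caller's locals):
+ *
+ *	err = avf_aq_rc_to_posix(status, hw->aq.asq_last_status);
+ */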
+
+/* general information */
+#define AVF_AQ_LARGE_BUF	512
+#define AVF_ASQ_CMD_TIMEOUT	250000  /* usecs */
+#ifdef AVF_ESS_SUPPORT
+#define AVF_ASQ_CMD_TIMEOUT_ESS	50000000  /* usecs */
+#endif
+
+void avf_fill_default_direct_cmd_desc(struct avf_aq_desc *desc,
+				       u16 opcode);
+
+#endif /* _AVF_ADMINQ_H_ */
diff --git a/drivers/net/avf/base/avf_adminq_cmd.h b/drivers/net/avf/base/avf_adminq_cmd.h
new file mode 100644
index 0000000..5b9ed38
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq_cmd.h
@@ -0,0 +1,2807 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ADMINQ_CMD_H_
+#define _AVF_ADMINQ_CMD_H_
+
+/* This header file defines the avf Admin Queue commands and is shared between
+ * avf Firmware and Software.
+ *
+ * This file needs to comply with the Linux Kernel coding style.
+ */
+
+
+#define AVF_FW_API_VERSION_MAJOR	0x0001
+#define AVF_FW_API_VERSION_MINOR_X722	0x0005
+#define AVF_FW_API_VERSION_MINOR_X710	0x0007
+
+#define AVF_FW_MINOR_VERSION(_h) ((_h)->mac.type == AVF_MAC_XL710 ? \
+					AVF_FW_API_VERSION_MINOR_X710 : \
+					AVF_FW_API_VERSION_MINOR_X722)
+
+/* API version 1.7 implements additional link and PHY-specific APIs  */
+#define AVF_MINOR_VER_GET_LINK_INFO_XL710 0x0007
+
+struct avf_aq_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 retval;
+	__le32 cookie_high;
+	__le32 cookie_low;
+	union {
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 param2;
+			__le32 param3;
+		} internal;
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 addr_high;
+			__le32 addr_low;
+		} external;
+		u8 raw[16];
+	} params;
+};
+
+/* Flags sub-structure
+ * |0  |1  |2  |3  |4  |5  |6  |7  |8  |9  |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR|VFE| * *  RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
+ */
+
+/* command flags and offsets */
+#define AVF_AQ_FLAG_DD_SHIFT	0
+#define AVF_AQ_FLAG_CMP_SHIFT	1
+#define AVF_AQ_FLAG_ERR_SHIFT	2
+#define AVF_AQ_FLAG_VFE_SHIFT	3
+#define AVF_AQ_FLAG_LB_SHIFT	9
+#define AVF_AQ_FLAG_RD_SHIFT	10
+#define AVF_AQ_FLAG_VFC_SHIFT	11
+#define AVF_AQ_FLAG_BUF_SHIFT	12
+#define AVF_AQ_FLAG_SI_SHIFT	13
+#define AVF_AQ_FLAG_EI_SHIFT	14
+#define AVF_AQ_FLAG_FE_SHIFT	15
+
+#define AVF_AQ_FLAG_DD		(1 << AVF_AQ_FLAG_DD_SHIFT)  /* 0x1    */
+#define AVF_AQ_FLAG_CMP	(1 << AVF_AQ_FLAG_CMP_SHIFT) /* 0x2    */
+#define AVF_AQ_FLAG_ERR	(1 << AVF_AQ_FLAG_ERR_SHIFT) /* 0x4    */
+#define AVF_AQ_FLAG_VFE	(1 << AVF_AQ_FLAG_VFE_SHIFT) /* 0x8    */
+#define AVF_AQ_FLAG_LB		(1 << AVF_AQ_FLAG_LB_SHIFT)  /* 0x200  */
+#define AVF_AQ_FLAG_RD		(1 << AVF_AQ_FLAG_RD_SHIFT)  /* 0x400  */
+#define AVF_AQ_FLAG_VFC	(1 << AVF_AQ_FLAG_VFC_SHIFT) /* 0x800  */
+#define AVF_AQ_FLAG_BUF	(1 << AVF_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
+#define AVF_AQ_FLAG_SI		(1 << AVF_AQ_FLAG_SI_SHIFT)  /* 0x2000 */
+#define AVF_AQ_FLAG_EI		(1 << AVF_AQ_FLAG_EI_SHIFT)  /* 0x4000 */
+#define AVF_AQ_FLAG_FE		(1 << AVF_AQ_FLAG_FE_SHIFT)  /* 0x8000 */
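+
+/* Usage note (informational): indirect commands typically set AVF_AQ_FLAG_BUF,
+ * add AVF_AQ_FLAG_RD when the buffer carries data for the firmware to read,
+ * and add AVF_AQ_FLAG_LB when the buffer exceeds AVF_AQ_LARGE_BUF; DD, CMP and
+ * ERR are completion flags written back by the firmware.
+ */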
+
+/* error codes */
+enum avf_admin_queue_err {
+	AVF_AQ_RC_OK		= 0,  /* success */
+	AVF_AQ_RC_EPERM	= 1,  /* Operation not permitted */
+	AVF_AQ_RC_ENOENT	= 2,  /* No such element */
+	AVF_AQ_RC_ESRCH	= 3,  /* Bad opcode */
+	AVF_AQ_RC_EINTR	= 4,  /* operation interrupted */
+	AVF_AQ_RC_EIO		= 5,  /* I/O error */
+	AVF_AQ_RC_ENXIO	= 6,  /* No such resource */
+	AVF_AQ_RC_E2BIG	= 7,  /* Arg too long */
+	AVF_AQ_RC_EAGAIN	= 8,  /* Try again */
+	AVF_AQ_RC_ENOMEM	= 9,  /* Out of memory */
+	AVF_AQ_RC_EACCES	= 10, /* Permission denied */
+	AVF_AQ_RC_EFAULT	= 11, /* Bad address */
+	AVF_AQ_RC_EBUSY	= 12, /* Device or resource busy */
+	AVF_AQ_RC_EEXIST	= 13, /* object already exists */
+	AVF_AQ_RC_EINVAL	= 14, /* Invalid argument */
+	AVF_AQ_RC_ENOTTY	= 15, /* Not a typewriter */
+	AVF_AQ_RC_ENOSPC	= 16, /* No space left or alloc failure */
+	AVF_AQ_RC_ENOSYS	= 17, /* Function not implemented */
+	AVF_AQ_RC_ERANGE	= 18, /* Parameter out of range */
+	AVF_AQ_RC_EFLUSHED	= 19, /* Cmd flushed due to prev cmd error */
+	AVF_AQ_RC_BAD_ADDR	= 20, /* Descriptor contains a bad pointer */
+	AVF_AQ_RC_EMODE	= 21, /* Op not allowed in current dev mode */
+	AVF_AQ_RC_EFBIG	= 22, /* File too large */
+};
+
+/* Admin Queue command opcodes */
+enum avf_admin_queue_opc {
+	/* aq commands */
+	avf_aqc_opc_get_version	= 0x0001,
+	avf_aqc_opc_driver_version	= 0x0002,
+	avf_aqc_opc_queue_shutdown	= 0x0003,
+	avf_aqc_opc_set_pf_context	= 0x0004,
+
+	/* resource ownership */
+	avf_aqc_opc_request_resource	= 0x0008,
+	avf_aqc_opc_release_resource	= 0x0009,
+
+	avf_aqc_opc_list_func_capabilities	= 0x000A,
+	avf_aqc_opc_list_dev_capabilities	= 0x000B,
+
+	/* Proxy commands */
+	avf_aqc_opc_set_proxy_config		= 0x0104,
+	avf_aqc_opc_set_ns_proxy_table_entry	= 0x0105,
+
+	/* LAA */
+	avf_aqc_opc_mac_address_read	= 0x0107,
+	avf_aqc_opc_mac_address_write	= 0x0108,
+
+	/* PXE */
+	avf_aqc_opc_clear_pxe_mode	= 0x0110,
+
+	/* WoL commands */
+	avf_aqc_opc_set_wol_filter	= 0x0120,
+	avf_aqc_opc_get_wake_reason	= 0x0121,
+	avf_aqc_opc_clear_all_wol_filters = 0x025E,
+
+	/* internal switch commands */
+	avf_aqc_opc_get_switch_config		= 0x0200,
+	avf_aqc_opc_add_statistics		= 0x0201,
+	avf_aqc_opc_remove_statistics		= 0x0202,
+	avf_aqc_opc_set_port_parameters	= 0x0203,
+	avf_aqc_opc_get_switch_resource_alloc	= 0x0204,
+	avf_aqc_opc_set_switch_config		= 0x0205,
+	avf_aqc_opc_rx_ctl_reg_read		= 0x0206,
+	avf_aqc_opc_rx_ctl_reg_write		= 0x0207,
+
+	avf_aqc_opc_add_vsi			= 0x0210,
+	avf_aqc_opc_update_vsi_parameters	= 0x0211,
+	avf_aqc_opc_get_vsi_parameters		= 0x0212,
+
+	avf_aqc_opc_add_pv			= 0x0220,
+	avf_aqc_opc_update_pv_parameters	= 0x0221,
+	avf_aqc_opc_get_pv_parameters		= 0x0222,
+
+	avf_aqc_opc_add_veb			= 0x0230,
+	avf_aqc_opc_update_veb_parameters	= 0x0231,
+	avf_aqc_opc_get_veb_parameters		= 0x0232,
+
+	avf_aqc_opc_delete_element		= 0x0243,
+
+	avf_aqc_opc_add_macvlan		= 0x0250,
+	avf_aqc_opc_remove_macvlan		= 0x0251,
+	avf_aqc_opc_add_vlan			= 0x0252,
+	avf_aqc_opc_remove_vlan		= 0x0253,
+	avf_aqc_opc_set_vsi_promiscuous_modes	= 0x0254,
+	avf_aqc_opc_add_tag			= 0x0255,
+	avf_aqc_opc_remove_tag			= 0x0256,
+	avf_aqc_opc_add_multicast_etag		= 0x0257,
+	avf_aqc_opc_remove_multicast_etag	= 0x0258,
+	avf_aqc_opc_update_tag			= 0x0259,
+	avf_aqc_opc_add_control_packet_filter	= 0x025A,
+	avf_aqc_opc_remove_control_packet_filter	= 0x025B,
+	avf_aqc_opc_add_cloud_filters		= 0x025C,
+	avf_aqc_opc_remove_cloud_filters	= 0x025D,
+	avf_aqc_opc_clear_wol_switch_filters	= 0x025E,
+	avf_aqc_opc_replace_cloud_filters	= 0x025F,
+
+	avf_aqc_opc_add_mirror_rule	= 0x0260,
+	avf_aqc_opc_delete_mirror_rule	= 0x0261,
+
+	/* Dynamic Device Personalization */
+	avf_aqc_opc_write_personalization_profile	= 0x0270,
+	avf_aqc_opc_get_personalization_profile_list	= 0x0271,
+
+	/* DCB commands */
+	avf_aqc_opc_dcb_ignore_pfc	= 0x0301,
+	avf_aqc_opc_dcb_updated	= 0x0302,
+
+	/* TX scheduler */
+	avf_aqc_opc_configure_vsi_bw_limit		= 0x0400,
+	avf_aqc_opc_configure_vsi_ets_sla_bw_limit	= 0x0406,
+	avf_aqc_opc_configure_vsi_tc_bw		= 0x0407,
+	avf_aqc_opc_query_vsi_bw_config		= 0x0408,
+	avf_aqc_opc_query_vsi_ets_sla_config		= 0x040A,
+	avf_aqc_opc_configure_switching_comp_bw_limit	= 0x0410,
+
+	avf_aqc_opc_enable_switching_comp_ets			= 0x0413,
+	avf_aqc_opc_modify_switching_comp_ets			= 0x0414,
+	avf_aqc_opc_disable_switching_comp_ets			= 0x0415,
+	avf_aqc_opc_configure_switching_comp_ets_bw_limit	= 0x0416,
+	avf_aqc_opc_configure_switching_comp_bw_config		= 0x0417,
+	avf_aqc_opc_query_switching_comp_ets_config		= 0x0418,
+	avf_aqc_opc_query_port_ets_config			= 0x0419,
+	avf_aqc_opc_query_switching_comp_bw_config		= 0x041A,
+	avf_aqc_opc_suspend_port_tx				= 0x041B,
+	avf_aqc_opc_resume_port_tx				= 0x041C,
+	avf_aqc_opc_configure_partition_bw			= 0x041D,
+	/* hmc */
+	avf_aqc_opc_query_hmc_resource_profile	= 0x0500,
+	avf_aqc_opc_set_hmc_resource_profile	= 0x0501,
+
+	/* phy commands */
+	avf_aqc_opc_get_phy_abilities		= 0x0600,
+	avf_aqc_opc_set_phy_config		= 0x0601,
+	avf_aqc_opc_set_mac_config		= 0x0603,
+	avf_aqc_opc_set_link_restart_an	= 0x0605,
+	avf_aqc_opc_get_link_status		= 0x0607,
+	avf_aqc_opc_set_phy_int_mask		= 0x0613,
+	avf_aqc_opc_get_local_advt_reg		= 0x0614,
+	avf_aqc_opc_set_local_advt_reg		= 0x0615,
+	avf_aqc_opc_get_partner_advt		= 0x0616,
+	avf_aqc_opc_set_lb_modes		= 0x0618,
+	avf_aqc_opc_get_phy_wol_caps		= 0x0621,
+	avf_aqc_opc_set_phy_debug		= 0x0622,
+	avf_aqc_opc_upload_ext_phy_fm		= 0x0625,
+	avf_aqc_opc_run_phy_activity		= 0x0626,
+	avf_aqc_opc_set_phy_register		= 0x0628,
+	avf_aqc_opc_get_phy_register		= 0x0629,
+
+	/* NVM commands */
+	avf_aqc_opc_nvm_read			= 0x0701,
+	avf_aqc_opc_nvm_erase			= 0x0702,
+	avf_aqc_opc_nvm_update			= 0x0703,
+	avf_aqc_opc_nvm_config_read		= 0x0704,
+	avf_aqc_opc_nvm_config_write		= 0x0705,
+	avf_aqc_opc_oem_post_update		= 0x0720,
+	avf_aqc_opc_thermal_sensor		= 0x0721,
+
+	/* virtualization commands */
+	avf_aqc_opc_send_msg_to_pf		= 0x0801,
+	avf_aqc_opc_send_msg_to_vf		= 0x0802,
+	avf_aqc_opc_send_msg_to_peer		= 0x0803,
+
+	/* alternate structure */
+	avf_aqc_opc_alternate_write		= 0x0900,
+	avf_aqc_opc_alternate_write_indirect	= 0x0901,
+	avf_aqc_opc_alternate_read		= 0x0902,
+	avf_aqc_opc_alternate_read_indirect	= 0x0903,
+	avf_aqc_opc_alternate_write_done	= 0x0904,
+	avf_aqc_opc_alternate_set_mode		= 0x0905,
+	avf_aqc_opc_alternate_clear_port	= 0x0906,
+
+	/* LLDP commands */
+	avf_aqc_opc_lldp_get_mib	= 0x0A00,
+	avf_aqc_opc_lldp_update_mib	= 0x0A01,
+	avf_aqc_opc_lldp_add_tlv	= 0x0A02,
+	avf_aqc_opc_lldp_update_tlv	= 0x0A03,
+	avf_aqc_opc_lldp_delete_tlv	= 0x0A04,
+	avf_aqc_opc_lldp_stop		= 0x0A05,
+	avf_aqc_opc_lldp_start		= 0x0A06,
+	avf_aqc_opc_get_cee_dcb_cfg	= 0x0A07,
+	avf_aqc_opc_lldp_set_local_mib	= 0x0A08,
+	avf_aqc_opc_lldp_stop_start_spec_agent	= 0x0A09,
+
+	/* Tunnel commands */
+	avf_aqc_opc_add_udp_tunnel	= 0x0B00,
+	avf_aqc_opc_del_udp_tunnel	= 0x0B01,
+	avf_aqc_opc_set_rss_key	= 0x0B02,
+	avf_aqc_opc_set_rss_lut	= 0x0B03,
+	avf_aqc_opc_get_rss_key	= 0x0B04,
+	avf_aqc_opc_get_rss_lut	= 0x0B05,
+
+	/* Async Events */
+	avf_aqc_opc_event_lan_overflow		= 0x1001,
+
+	/* OEM commands */
+	avf_aqc_opc_oem_parameter_change	= 0xFE00,
+	avf_aqc_opc_oem_device_status_change	= 0xFE01,
+	avf_aqc_opc_oem_ocsd_initialize	= 0xFE02,
+	avf_aqc_opc_oem_ocbb_initialize	= 0xFE03,
+
+	/* debug commands */
+	avf_aqc_opc_debug_read_reg		= 0xFF03,
+	avf_aqc_opc_debug_write_reg		= 0xFF04,
+	avf_aqc_opc_debug_modify_reg		= 0xFF07,
+	avf_aqc_opc_debug_dump_internals	= 0xFF08,
+};
+
+/* command structures and indirect data structures */
+
+/* Structure naming conventions:
+ * - no suffix for direct command descriptor structures
+ * - _data for indirect sent data
+ * - _resp for indirect return data (data which is both will use _data)
+ * - _completion for direct return data
+ * - _element_ for repeated elements (may also be _data or _resp)
+ *
+ * Command structures are expected to overlay the params.raw member of the basic
+ * descriptor, and as such cannot exceed 16 bytes in length.
+ */
+
+/* This macro is used to generate a compilation error if a structure
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure is not of the correct size, otherwise it creates an enum that is
+ * never used.
+ */
+#define AVF_CHECK_STRUCT_LEN(n, X) enum avf_static_assert_enum_##X \
+	{ avf_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
+
+/* This macro is used extensively to ensure that command structures are 16
+ * bytes in length as they have to map to the raw array of that size.
+ */
+#define AVF_CHECK_CMD_LENGTH(X)	AVF_CHECK_STRUCT_LEN(16, X)
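+
+/* For example, AVF_CHECK_CMD_LENGTH(avf_aqc_get_version) below expands to an
+ * enum whose initializer divides by zero unless
+ * sizeof(struct avf_aqc_get_version) is exactly 16 bytes, so a mis-sized
+ * command structure is caught at compile time.
+ */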
+
+/* internal (0x00XX) commands */
+
+/* Get version (direct 0x0001) */
+struct avf_aqc_get_version {
+	__le32 rom_ver;
+	__le32 fw_build;
+	__le16 fw_major;
+	__le16 fw_minor;
+	__le16 api_major;
+	__le16 api_minor;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_version);
+
+/* Send driver version (indirect 0x0002) */
+struct avf_aqc_driver_version {
+	u8	driver_major_ver;
+	u8	driver_minor_ver;
+	u8	driver_build_ver;
+	u8	driver_subbuild_ver;
+	u8	reserved[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_driver_version);
+
+/* Queue Shutdown (direct 0x0003) */
+struct avf_aqc_queue_shutdown {
+	__le32	driver_unloading;
+#define AVF_AQ_DRIVER_UNLOADING	0x1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_queue_shutdown);
+
+/* Set PF context (0x0004, direct) */
+struct avf_aqc_set_pf_context {
+	u8	pf_id;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_pf_context);
+
+/* Request resource ownership (direct 0x0008)
+ * Release resource ownership (direct 0x0009)
+ */
+#define AVF_AQ_RESOURCE_NVM			1
+#define AVF_AQ_RESOURCE_SDP			2
+#define AVF_AQ_RESOURCE_ACCESS_READ		1
+#define AVF_AQ_RESOURCE_ACCESS_WRITE		2
+#define AVF_AQ_RESOURCE_NVM_READ_TIMEOUT	3000
+#define AVF_AQ_RESOURCE_NVM_WRITE_TIMEOUT	180000
+
+struct avf_aqc_request_resource {
+	__le16	resource_id;
+	__le16	access_type;
+	__le32	timeout;
+	__le32	resource_number;
+	u8	reserved[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_request_resource);
+
+/* Get function capabilities (indirect 0x000A)
+ * Get device capabilities (indirect 0x000B)
+ */
+struct avf_aqc_list_capabilites {
+	u8 command_flags;
+#define AVF_AQ_LIST_CAP_PF_INDEX_EN	1
+	u8 pf_index;
+	u8 reserved[2];
+	__le32 count;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_list_capabilites);
+
+struct avf_aqc_list_capabilities_element_resp {
+	__le16	id;
+	u8	major_rev;
+	u8	minor_rev;
+	__le32	number;
+	__le32	logical_id;
+	__le32	phys_id;
+	u8	reserved[16];
+};
+
+/* list of caps */
+
+#define AVF_AQ_CAP_ID_SWITCH_MODE	0x0001
+#define AVF_AQ_CAP_ID_MNG_MODE		0x0002
+#define AVF_AQ_CAP_ID_NPAR_ACTIVE	0x0003
+#define AVF_AQ_CAP_ID_OS2BMC_CAP	0x0004
+#define AVF_AQ_CAP_ID_FUNCTIONS_VALID	0x0005
+#define AVF_AQ_CAP_ID_ALTERNATE_RAM	0x0006
+#define AVF_AQ_CAP_ID_WOL_AND_PROXY	0x0008
+#define AVF_AQ_CAP_ID_SRIOV		0x0012
+#define AVF_AQ_CAP_ID_VF		0x0013
+#define AVF_AQ_CAP_ID_VMDQ		0x0014
+#define AVF_AQ_CAP_ID_8021QBG		0x0015
+#define AVF_AQ_CAP_ID_8021QBR		0x0016
+#define AVF_AQ_CAP_ID_VSI		0x0017
+#define AVF_AQ_CAP_ID_DCB		0x0018
+#define AVF_AQ_CAP_ID_FCOE		0x0021
+#define AVF_AQ_CAP_ID_ISCSI		0x0022
+#define AVF_AQ_CAP_ID_RSS		0x0040
+#define AVF_AQ_CAP_ID_RXQ		0x0041
+#define AVF_AQ_CAP_ID_TXQ		0x0042
+#define AVF_AQ_CAP_ID_MSIX		0x0043
+#define AVF_AQ_CAP_ID_VF_MSIX		0x0044
+#define AVF_AQ_CAP_ID_FLOW_DIRECTOR	0x0045
+#define AVF_AQ_CAP_ID_1588		0x0046
+#define AVF_AQ_CAP_ID_IWARP		0x0051
+#define AVF_AQ_CAP_ID_LED		0x0061
+#define AVF_AQ_CAP_ID_SDP		0x0062
+#define AVF_AQ_CAP_ID_MDIO		0x0063
+#define AVF_AQ_CAP_ID_WSR_PROT		0x0064
+#define AVF_AQ_CAP_ID_NVM_MGMT		0x0080
+#define AVF_AQ_CAP_ID_FLEX10		0x00F1
+#define AVF_AQ_CAP_ID_CEM		0x00F2
+
+/* Set CPPM Configuration (direct 0x0103) */
+struct avf_aqc_cppm_configuration {
+	__le16	command_flags;
+#define AVF_AQ_CPPM_EN_LTRC	0x0800
+#define AVF_AQ_CPPM_EN_DMCTH	0x1000
+#define AVF_AQ_CPPM_EN_DMCTLX	0x2000
+#define AVF_AQ_CPPM_EN_HPTC	0x4000
+#define AVF_AQ_CPPM_EN_DMARC	0x8000
+	__le16	ttlx;
+	__le32	dmacr;
+	__le16	dmcth;
+	u8	hptc;
+	u8	reserved;
+	__le32	pfltrc;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_cppm_configuration);
+
+/* Set ARP Proxy command / response (indirect 0x0104) */
+struct avf_aqc_arp_proxy_data {
+	__le16	command_flags;
+#define AVF_AQ_ARP_INIT_IPV4	0x0800
+#define AVF_AQ_ARP_UNSUP_CTL	0x1000
+#define AVF_AQ_ARP_ENA		0x2000
+#define AVF_AQ_ARP_ADD_IPV4	0x4000
+#define AVF_AQ_ARP_DEL_IPV4	0x8000
+	__le16	table_id;
+	__le32	enabled_offloads;
+#define AVF_AQ_ARP_DIRECTED_OFFLOAD_ENABLE	0x00000020
+#define AVF_AQ_ARP_OFFLOAD_ENABLE		0x00000800
+	__le32	ip_addr;
+	u8	mac_addr[6];
+	u8	reserved[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x14, avf_aqc_arp_proxy_data);
+
+/* Set NS Proxy Table Entry Command (indirect 0x0105) */
+struct avf_aqc_ns_proxy_data {
+	__le16	table_idx_mac_addr_0;
+	__le16	table_idx_mac_addr_1;
+	__le16	table_idx_ipv6_0;
+	__le16	table_idx_ipv6_1;
+	__le16	control;
+#define AVF_AQ_NS_PROXY_ADD_0		0x0001
+#define AVF_AQ_NS_PROXY_DEL_0		0x0002
+#define AVF_AQ_NS_PROXY_ADD_1		0x0004
+#define AVF_AQ_NS_PROXY_DEL_1		0x0008
+#define AVF_AQ_NS_PROXY_ADD_IPV6_0	0x0010
+#define AVF_AQ_NS_PROXY_DEL_IPV6_0	0x0020
+#define AVF_AQ_NS_PROXY_ADD_IPV6_1	0x0040
+#define AVF_AQ_NS_PROXY_DEL_IPV6_1	0x0080
+#define AVF_AQ_NS_PROXY_COMMAND_SEQ	0x0100
+#define AVF_AQ_NS_PROXY_INIT_IPV6_TBL	0x0200
+#define AVF_AQ_NS_PROXY_INIT_MAC_TBL	0x0400
+#define AVF_AQ_NS_PROXY_OFFLOAD_ENABLE	0x0800
+#define AVF_AQ_NS_PROXY_DIRECTED_OFFLOAD_ENABLE	0x1000
+	u8	mac_addr_0[6];
+	u8	mac_addr_1[6];
+	u8	local_mac_addr[6];
+	u8	ipv6_addr_0[16]; /* Warning! spec specifies BE byte order */
+	u8	ipv6_addr_1[16];
+};
+
+AVF_CHECK_STRUCT_LEN(0x3c, avf_aqc_ns_proxy_data);
+
+/* Manage LAA Command (0x0106) - obsolete */
+struct avf_aqc_mng_laa {
+	__le16	command_flags;
+#define AVF_AQ_LAA_FLAG_WR	0x8000
+	u8	reserved[2];
+	__le32	sal;
+	__le16	sah;
+	u8	reserved2[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mng_laa);
+
+/* Manage MAC Address Read Command (indirect 0x0107) */
+struct avf_aqc_mac_address_read {
+	__le16	command_flags;
+#define AVF_AQC_LAN_ADDR_VALID		0x10
+#define AVF_AQC_SAN_ADDR_VALID		0x20
+#define AVF_AQC_PORT_ADDR_VALID	0x40
+#define AVF_AQC_WOL_ADDR_VALID		0x80
+#define AVF_AQC_MC_MAG_EN_VALID	0x100
+#define AVF_AQC_WOL_PRESERVE_STATUS	0x200
+#define AVF_AQC_ADDR_VALID_MASK	0x3F0
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mac_address_read);
+
+struct avf_aqc_mac_address_read_data {
+	u8 pf_lan_mac[6];
+	u8 pf_san_mac[6];
+	u8 port_mac[6];
+	u8 pf_wol_mac[6];
+};
+
+AVF_CHECK_STRUCT_LEN(24, avf_aqc_mac_address_read_data);
+
+/* Manage MAC Address Write Command (0x0108) */
+struct avf_aqc_mac_address_write {
+	__le16	command_flags;
+#define AVF_AQC_MC_MAG_EN		0x0100
+#define AVF_AQC_WOL_PRESERVE_ON_PFR	0x0200
+#define AVF_AQC_WRITE_TYPE_LAA_ONLY	0x0000
+#define AVF_AQC_WRITE_TYPE_LAA_WOL	0x4000
+#define AVF_AQC_WRITE_TYPE_PORT	0x8000
+#define AVF_AQC_WRITE_TYPE_UPDATE_MC_MAG	0xC000
+#define AVF_AQC_WRITE_TYPE_MASK	0xC000
+
+	__le16	mac_sah;
+	__le32	mac_sal;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mac_address_write);
+
+/* PXE commands (0x011x) */
+
+/* Clear PXE Command and response  (direct 0x0110) */
+struct avf_aqc_clear_pxe {
+	u8	rx_cnt;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_clear_pxe);
+
+/* Set WoL Filter (0x0120) */
+
+struct avf_aqc_set_wol_filter {
+	__le16 filter_index;
+#define AVF_AQC_MAX_NUM_WOL_FILTERS	8
+#define AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT	15
+#define AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_MASK	(0x1 << \
+		AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT)
+
+#define AVF_AQC_SET_WOL_FILTER_INDEX_SHIFT		0
+#define AVF_AQC_SET_WOL_FILTER_INDEX_MASK	(0x7 << \
+		AVF_AQC_SET_WOL_FILTER_INDEX_SHIFT)
+	__le16 cmd_flags;
+#define AVF_AQC_SET_WOL_FILTER				0x8000
+#define AVF_AQC_SET_WOL_FILTER_NO_TCO_WOL		0x4000
+#define AVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR	0x2000
+#define AVF_AQC_SET_WOL_FILTER_ACTION_CLEAR		0
+#define AVF_AQC_SET_WOL_FILTER_ACTION_SET		1
+	__le16 valid_flags;
+#define AVF_AQC_SET_WOL_FILTER_ACTION_VALID		0x8000
+#define AVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID	0x4000
+	u8 reserved[2];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_wol_filter);
+
+struct avf_aqc_set_wol_filter_data {
+	u8 filter[128];
+	u8 mask[16];
+};
+
+AVF_CHECK_STRUCT_LEN(0x90, avf_aqc_set_wol_filter_data);
+
+/* Get Wake Reason (0x0121) */
+
+struct avf_aqc_get_wake_reason_completion {
+	u8 reserved_1[2];
+	__le16 wake_reason;
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT	0
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_MASK (0xFF << \
+		AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT)
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT	8
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_MASK	(0xFF << \
+		AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT)
+	u8 reserved_2[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_wake_reason_completion);
+
+/* Switch configuration commands (0x02xx) */
+
+/* Used by many indirect commands that only pass a SEID and a buffer in the
+ * command
+ */
+struct avf_aqc_switch_seid {
+	__le16	seid;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_switch_seid);
+
+/* Get Switch Configuration command (indirect 0x0200)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+struct avf_aqc_get_switch_config_header_resp {
+	__le16	num_reported;
+	__le16	num_total;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_switch_config_header_resp);
+
+struct avf_aqc_switch_config_element_resp {
+	u8	element_type;
+#define AVF_AQ_SW_ELEM_TYPE_MAC	1
+#define AVF_AQ_SW_ELEM_TYPE_PF		2
+#define AVF_AQ_SW_ELEM_TYPE_VF		3
+#define AVF_AQ_SW_ELEM_TYPE_EMP	4
+#define AVF_AQ_SW_ELEM_TYPE_BMC	5
+#define AVF_AQ_SW_ELEM_TYPE_PV		16
+#define AVF_AQ_SW_ELEM_TYPE_VEB	17
+#define AVF_AQ_SW_ELEM_TYPE_PA		18
+#define AVF_AQ_SW_ELEM_TYPE_VSI	19
+	u8	revision;
+#define AVF_AQ_SW_ELEM_REV_1		1
+	__le16	seid;
+	__le16	uplink_seid;
+	__le16	downlink_seid;
+	u8	reserved[3];
+	u8	connection_type;
+#define AVF_AQ_CONN_TYPE_REGULAR	0x1
+#define AVF_AQ_CONN_TYPE_DEFAULT	0x2
+#define AVF_AQ_CONN_TYPE_CASCADED	0x3
+	__le16	scheduler_id;
+	__le16	element_info;
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_switch_config_element_resp);
+
+/* Get Switch Configuration (indirect 0x0200)
+ *    an array of elements is returned in the response buffer;
+ *    the first in the array is the header, the remainder are elements
+ */
+struct avf_aqc_get_switch_config_resp {
+	struct avf_aqc_get_switch_config_header_resp	header;
+	struct avf_aqc_switch_config_element_resp	element[1];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_get_switch_config_resp);
+
+/* Add Statistics (direct 0x0201)
+ * Remove Statistics (direct 0x0202)
+ */
+struct avf_aqc_add_remove_statistics {
+	__le16	seid;
+	__le16	vlan;
+	__le16	stat_index;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_statistics);
+
+/* Set Port Parameters command (direct 0x0203) */
+struct avf_aqc_set_port_parameters {
+	__le16	command_flags;
+#define AVF_AQ_SET_P_PARAMS_SAVE_BAD_PACKETS	1
+#define AVF_AQ_SET_P_PARAMS_PAD_SHORT_PACKETS	2 /* must set! */
+#define AVF_AQ_SET_P_PARAMS_DOUBLE_VLAN_ENA	4
+	__le16	bad_frame_vsi;
+#define AVF_AQ_SET_P_PARAMS_BFRAME_SEID_SHIFT	0x0
+#define AVF_AQ_SET_P_PARAMS_BFRAME_SEID_MASK	0x3FF
+	__le16	default_seid;        /* reserved for command */
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_port_parameters);
+
+/* Get Switch Resource Allocation (indirect 0x0204) */
+struct avf_aqc_get_switch_resource_alloc {
+	u8	num_entries;         /* reserved for command */
+	u8	reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_switch_resource_alloc);
+
+/* expect an array of these structs in the response buffer */
+struct avf_aqc_switch_resource_alloc_element_resp {
+	u8	resource_type;
+#define AVF_AQ_RESOURCE_TYPE_VEB		0x0
+#define AVF_AQ_RESOURCE_TYPE_VSI		0x1
+#define AVF_AQ_RESOURCE_TYPE_MACADDR		0x2
+#define AVF_AQ_RESOURCE_TYPE_STAG		0x3
+#define AVF_AQ_RESOURCE_TYPE_ETAG		0x4
+#define AVF_AQ_RESOURCE_TYPE_MULTICAST_HASH	0x5
+#define AVF_AQ_RESOURCE_TYPE_UNICAST_HASH	0x6
+#define AVF_AQ_RESOURCE_TYPE_VLAN		0x7
+#define AVF_AQ_RESOURCE_TYPE_VSI_LIST_ENTRY	0x8
+#define AVF_AQ_RESOURCE_TYPE_ETAG_LIST_ENTRY	0x9
+#define AVF_AQ_RESOURCE_TYPE_VLAN_STAT_POOL	0xA
+#define AVF_AQ_RESOURCE_TYPE_MIRROR_RULE	0xB
+#define AVF_AQ_RESOURCE_TYPE_QUEUE_SETS	0xC
+#define AVF_AQ_RESOURCE_TYPE_VLAN_FILTERS	0xD
+#define AVF_AQ_RESOURCE_TYPE_INNER_MAC_FILTERS	0xF
+#define AVF_AQ_RESOURCE_TYPE_IP_FILTERS	0x10
+#define AVF_AQ_RESOURCE_TYPE_GRE_VN_KEYS	0x11
+#define AVF_AQ_RESOURCE_TYPE_VN2_KEYS		0x12
+#define AVF_AQ_RESOURCE_TYPE_TUNNEL_PORTS	0x13
+	u8	reserved1;
+	__le16	guaranteed;
+	__le16	total;
+	__le16	used;
+	__le16	total_unalloced;
+	u8	reserved2[6];
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_switch_resource_alloc_element_resp);
+
+/* Set Switch Configuration (direct 0x0205) */
+struct avf_aqc_set_switch_config {
+	__le16	flags;
+/* flags used for both fields below */
+#define AVF_AQ_SET_SWITCH_CFG_PROMISC		0x0001
+#define AVF_AQ_SET_SWITCH_CFG_L2_FILTER	0x0002
+#define AVF_AQ_SET_SWITCH_CFG_HW_ATR_EVICT	0x0004
+	__le16	valid_flags;
+	/* The ethertype in switch_tag is dropped on ingress and used
+	 * internally by the switch. Set this to zero for the default
+	 * of 0x88a8 (802.1ad). Should be zero for firmware API
+	 * versions lower than 1.7.
+	 */
+	__le16	switch_tag;
+	/* The ethertypes in first_tag and second_tag are used to
+	 * match the outer and inner VLAN tags (respectively) when HW
+	 * double VLAN tagging is enabled via the set port parameters
+	 * AQ command. Otherwise these are both ignored. Set them to
+	 * zero for their defaults of 0x8100 (802.1Q). Should be zero
+	 * for firmware API versions lower than 1.7.
+	 */
+	__le16	first_tag;
+	__le16	second_tag;
+	u8	reserved[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_switch_config);
+
+/* Read Receive control registers  (direct 0x0206)
+ * Write Receive control registers (direct 0x0207)
+ *     used for accessing Rx control registers that can be
+ *     slow and need special handling when under high Rx load
+ */
+struct avf_aqc_rx_ctl_reg_read_write {
+	__le32 reserved1;
+	__le32 address;
+	__le32 reserved2;
+	__le32 value;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_rx_ctl_reg_read_write);
+
+/* Add VSI (indirect 0x0210)
+ *    this indirect command uses struct avf_aqc_vsi_properties_data
+ *    as the indirect buffer (128 bytes)
+ *
+ * Update VSI (indirect 0x211)
+ *     uses the same data structure as Add VSI
+ *
+ * Get VSI (indirect 0x0212)
+ *     uses the same completion and data structure as Add VSI
+ */
+struct avf_aqc_add_get_update_vsi {
+	__le16	uplink_seid;
+	u8	connection_type;
+#define AVF_AQ_VSI_CONN_TYPE_NORMAL	0x1
+#define AVF_AQ_VSI_CONN_TYPE_DEFAULT	0x2
+#define AVF_AQ_VSI_CONN_TYPE_CASCADED	0x3
+	u8	reserved1;
+	u8	vf_id;
+	u8	reserved2;
+	__le16	vsi_flags;
+#define AVF_AQ_VSI_TYPE_SHIFT		0x0
+#define AVF_AQ_VSI_TYPE_MASK		(0x3 << AVF_AQ_VSI_TYPE_SHIFT)
+#define AVF_AQ_VSI_TYPE_VF		0x0
+#define AVF_AQ_VSI_TYPE_VMDQ2		0x1
+#define AVF_AQ_VSI_TYPE_PF		0x2
+#define AVF_AQ_VSI_TYPE_EMP_MNG	0x3
+#define AVF_AQ_VSI_FLAG_CASCADED_PV	0x4
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_get_update_vsi);
+
+struct avf_aqc_add_get_update_vsi_completion {
+	__le16 seid;
+	__le16 vsi_number;
+	__le16 vsi_used;
+	__le16 vsi_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_get_update_vsi_completion);
+
+struct avf_aqc_vsi_properties_data {
+	/* first 96 bytes are written by SW */
+	__le16	valid_sections;
+#define AVF_AQ_VSI_PROP_SWITCH_VALID		0x0001
+#define AVF_AQ_VSI_PROP_SECURITY_VALID		0x0002
+#define AVF_AQ_VSI_PROP_VLAN_VALID		0x0004
+#define AVF_AQ_VSI_PROP_CAS_PV_VALID		0x0008
+#define AVF_AQ_VSI_PROP_INGRESS_UP_VALID	0x0010
+#define AVF_AQ_VSI_PROP_EGRESS_UP_VALID	0x0020
+#define AVF_AQ_VSI_PROP_QUEUE_MAP_VALID	0x0040
+#define AVF_AQ_VSI_PROP_QUEUE_OPT_VALID	0x0080
+#define AVF_AQ_VSI_PROP_OUTER_UP_VALID		0x0100
+#define AVF_AQ_VSI_PROP_SCHED_VALID		0x0200
+	/* switch section */
+	__le16	switch_id; /* 12bit id combined with flags below */
+#define AVF_AQ_VSI_SW_ID_SHIFT		0x0000
+#define AVF_AQ_VSI_SW_ID_MASK		(0xFFF << AVF_AQ_VSI_SW_ID_SHIFT)
+#define AVF_AQ_VSI_SW_ID_FLAG_NOT_STAG	0x1000
+#define AVF_AQ_VSI_SW_ID_FLAG_ALLOW_LB	0x2000
+#define AVF_AQ_VSI_SW_ID_FLAG_LOCAL_LB	0x4000
+	u8	sw_reserved[2];
+	/* security section */
+	u8	sec_flags;
+#define AVF_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD	0x01
+#define AVF_AQ_VSI_SEC_FLAG_ENABLE_VLAN_CHK	0x02
+#define AVF_AQ_VSI_SEC_FLAG_ENABLE_MAC_CHK	0x04
+	u8	sec_reserved;
+	/* VLAN section */
+	__le16	pvid; /* VLANS include priority bits */
+	__le16	fcoe_pvid;
+	u8	port_vlan_flags;
+#define AVF_AQ_VSI_PVLAN_MODE_SHIFT	0x00
+#define AVF_AQ_VSI_PVLAN_MODE_MASK	(0x03 << \
+					 AVF_AQ_VSI_PVLAN_MODE_SHIFT)
+#define AVF_AQ_VSI_PVLAN_MODE_TAGGED	0x01
+#define AVF_AQ_VSI_PVLAN_MODE_UNTAGGED	0x02
+#define AVF_AQ_VSI_PVLAN_MODE_ALL	0x03
+#define AVF_AQ_VSI_PVLAN_INSERT_PVID	0x04
+#define AVF_AQ_VSI_PVLAN_EMOD_SHIFT	0x03
+#define AVF_AQ_VSI_PVLAN_EMOD_MASK	(0x3 << \
+					 AVF_AQ_VSI_PVLAN_EMOD_SHIFT)
+#define AVF_AQ_VSI_PVLAN_EMOD_STR_BOTH	0x0
+#define AVF_AQ_VSI_PVLAN_EMOD_STR_UP	0x08
+#define AVF_AQ_VSI_PVLAN_EMOD_STR	0x10
+#define AVF_AQ_VSI_PVLAN_EMOD_NOTHING	0x18
+	u8	pvlan_reserved[3];
+	/* ingress egress up sections */
+	__le32	ingress_table; /* bitmap, 3 bits per up */
+#define AVF_AQ_VSI_UP_TABLE_UP0_SHIFT	0
+#define AVF_AQ_VSI_UP_TABLE_UP0_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP0_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP1_SHIFT	3
+#define AVF_AQ_VSI_UP_TABLE_UP1_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP1_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP2_SHIFT	6
+#define AVF_AQ_VSI_UP_TABLE_UP2_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP2_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP3_SHIFT	9
+#define AVF_AQ_VSI_UP_TABLE_UP3_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP3_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP4_SHIFT	12
+#define AVF_AQ_VSI_UP_TABLE_UP4_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP4_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP5_SHIFT	15
+#define AVF_AQ_VSI_UP_TABLE_UP5_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP5_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP6_SHIFT	18
+#define AVF_AQ_VSI_UP_TABLE_UP6_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP6_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP7_SHIFT	21
+#define AVF_AQ_VSI_UP_TABLE_UP7_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP7_SHIFT)
+	__le32	egress_table;   /* same defines as for ingress table */
+	/* cascaded PV section */
+	__le16	cas_pv_tag;
+	u8	cas_pv_flags;
+#define AVF_AQ_VSI_CAS_PV_TAGX_SHIFT		0x00
+#define AVF_AQ_VSI_CAS_PV_TAGX_MASK		(0x03 << \
+						 AVF_AQ_VSI_CAS_PV_TAGX_SHIFT)
+#define AVF_AQ_VSI_CAS_PV_TAGX_LEAVE		0x00
+#define AVF_AQ_VSI_CAS_PV_TAGX_REMOVE		0x01
+#define AVF_AQ_VSI_CAS_PV_TAGX_COPY		0x02
+#define AVF_AQ_VSI_CAS_PV_INSERT_TAG		0x10
+#define AVF_AQ_VSI_CAS_PV_ETAG_PRUNE		0x20
+#define AVF_AQ_VSI_CAS_PV_ACCEPT_HOST_TAG	0x40
+	u8	cas_pv_reserved;
+	/* queue mapping section */
+	__le16	mapping_flags;
+#define AVF_AQ_VSI_QUE_MAP_CONTIG	0x0
+#define AVF_AQ_VSI_QUE_MAP_NONCONTIG	0x1
+	__le16	queue_mapping[16];
+#define AVF_AQ_VSI_QUEUE_SHIFT		0x0
+#define AVF_AQ_VSI_QUEUE_MASK		(0x7FF << AVF_AQ_VSI_QUEUE_SHIFT)
+	__le16	tc_mapping[8];
+#define AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT	0
+#define AVF_AQ_VSI_TC_QUE_OFFSET_MASK	(0x1FF << \
+					 AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT)
+#define AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT	9
+#define AVF_AQ_VSI_TC_QUE_NUMBER_MASK	(0x7 << \
+					 AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT)
+	/* queueing option section */
+	u8	queueing_opt_flags;
+#define AVF_AQ_VSI_QUE_OPT_MULTICAST_UDP_ENA	0x04
+#define AVF_AQ_VSI_QUE_OPT_UNICAST_UDP_ENA	0x08
+#define AVF_AQ_VSI_QUE_OPT_TCP_ENA	0x10
+#define AVF_AQ_VSI_QUE_OPT_FCOE_ENA	0x20
+#define AVF_AQ_VSI_QUE_OPT_RSS_LUT_PF	0x00
+#define AVF_AQ_VSI_QUE_OPT_RSS_LUT_VSI	0x40
+	u8	queueing_opt_reserved[3];
+	/* scheduler section */
+	u8	up_enable_bits;
+	u8	sched_reserved;
+	/* outer up section */
+	__le32	outer_up_table; /* same structure and defines as ingress tbl */
+	u8	cmd_reserved[8];
+	/* last 32 bytes are written by FW */
+	__le16	qs_handle[8];
+#define AVF_AQ_VSI_QS_HANDLE_INVALID	0xFFFF
+	__le16	stat_counter_idx;
+	__le16	sched_id;
+	u8	resp_reserved[12];
+};
+
+AVF_CHECK_STRUCT_LEN(128, avf_aqc_vsi_properties_data);
+
+/* Add Port Virtualizer (direct 0x0220)
+ * also used for update PV (direct 0x0221) but only flags are used
+ * (IS_CTRL_PORT only works on add PV)
+ */
+struct avf_aqc_add_update_pv {
+	__le16	command_flags;
+#define AVF_AQC_PV_FLAG_PV_TYPE		0x1
+#define AVF_AQC_PV_FLAG_FWD_UNKNOWN_STAG_EN	0x2
+#define AVF_AQC_PV_FLAG_FWD_UNKNOWN_ETAG_EN	0x4
+#define AVF_AQC_PV_FLAG_IS_CTRL_PORT		0x8
+	__le16	uplink_seid;
+	__le16	connected_seid;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_update_pv);
+
+struct avf_aqc_add_update_pv_completion {
+	/* reserved for update; for add also encodes error if rc == ENOSPC */
+	__le16	pv_seid;
+#define AVF_AQC_PV_ERR_FLAG_NO_PV	0x1
+#define AVF_AQC_PV_ERR_FLAG_NO_SCHED	0x2
+#define AVF_AQC_PV_ERR_FLAG_NO_COUNTER	0x4
+#define AVF_AQC_PV_ERR_FLAG_NO_ENTRY	0x8
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_update_pv_completion);
+
+/* Get PV Params (direct 0x0222)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+
+struct avf_aqc_get_pv_params_completion {
+	__le16	seid;
+	__le16	default_stag;
+	__le16	pv_flags; /* same flags as add_pv */
+#define AVF_AQC_GET_PV_PV_TYPE			0x1
+#define AVF_AQC_GET_PV_FRWD_UNKNOWN_STAG	0x2
+#define AVF_AQC_GET_PV_FRWD_UNKNOWN_ETAG	0x4
+	u8	reserved[8];
+	__le16	default_port_seid;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_pv_params_completion);
+
+/* Add VEB (direct 0x0230) */
+struct avf_aqc_add_veb {
+	__le16	uplink_seid;
+	__le16	downlink_seid;
+	__le16	veb_flags;
+#define AVF_AQC_ADD_VEB_FLOATING		0x1
+#define AVF_AQC_ADD_VEB_PORT_TYPE_SHIFT	1
+#define AVF_AQC_ADD_VEB_PORT_TYPE_MASK		(0x3 << \
+					AVF_AQC_ADD_VEB_PORT_TYPE_SHIFT)
+#define AVF_AQC_ADD_VEB_PORT_TYPE_DEFAULT	0x2
+#define AVF_AQC_ADD_VEB_PORT_TYPE_DATA		0x4
+#define AVF_AQC_ADD_VEB_ENABLE_L2_FILTER	0x8     /* deprecated */
+#define AVF_AQC_ADD_VEB_ENABLE_DISABLE_STATS	0x10
+	u8	enable_tcs;
+	u8	reserved[9];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_veb);
+
+struct avf_aqc_add_veb_completion {
+	u8	reserved[6];
+	__le16	switch_seid;
+	/* also encodes error if rc == ENOSPC; codes are the same as add_pv */
+	__le16	veb_seid;
+#define AVF_AQC_VEB_ERR_FLAG_NO_VEB		0x1
+#define AVF_AQC_VEB_ERR_FLAG_NO_SCHED		0x2
+#define AVF_AQC_VEB_ERR_FLAG_NO_COUNTER	0x4
+#define AVF_AQC_VEB_ERR_FLAG_NO_ENTRY		0x8
+	__le16	statistic_index;
+	__le16	vebs_used;
+	__le16	vebs_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_veb_completion);
+
+/* Get VEB Parameters (direct 0x0232)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+struct avf_aqc_get_veb_parameters_completion {
+	__le16	seid;
+	__le16	switch_id;
+	__le16	veb_flags; /* only the first/last flags from 0x0230 are valid */
+	__le16	statistic_index;
+	__le16	vebs_used;
+	__le16	vebs_free;
+	u8	reserved[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_veb_parameters_completion);
+
+/* Delete Element (direct 0x0243)
+ * uses the generic avf_aqc_switch_seid
+ */
+
+/* Add MAC-VLAN (indirect 0x0250) */
+
+/* used as the command descriptor for most VLAN commands */
+struct avf_aqc_macvlan {
+	__le16	num_addresses;
+	__le16	seid[3];
+#define AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_MACVLAN_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT)
+#define AVF_AQC_MACVLAN_CMD_SEID_VALID		0x8000
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_macvlan);
+
+/* indirect data for command and response */
+struct avf_aqc_add_macvlan_element_data {
+	u8	mac_addr[6];
+	__le16	vlan_tag;
+	__le16	flags;
+#define AVF_AQC_MACVLAN_ADD_PERFECT_MATCH	0x0001
+#define AVF_AQC_MACVLAN_ADD_HASH_MATCH		0x0002
+#define AVF_AQC_MACVLAN_ADD_IGNORE_VLAN	0x0004
+#define AVF_AQC_MACVLAN_ADD_TO_QUEUE		0x0008
+#define AVF_AQC_MACVLAN_ADD_USE_SHARED_MAC	0x0010
+	__le16	queue_number;
+#define AVF_AQC_MACVLAN_CMD_QUEUE_SHIFT	0
+#define AVF_AQC_MACVLAN_CMD_QUEUE_MASK		(0x7FF << \
+					AVF_AQC_MACVLAN_CMD_QUEUE_SHIFT)
+	/* response section */
+	u8	match_method;
+#define AVF_AQC_MM_PERFECT_MATCH	0x01
+#define AVF_AQC_MM_HASH_MATCH		0x02
+#define AVF_AQC_MM_ERR_NO_RES		0xFF
+	u8	reserved1[3];
+};
+
+struct avf_aqc_add_remove_macvlan_completion {
+	__le16 perfect_mac_used;
+	__le16 perfect_mac_free;
+	__le16 unicast_hash_free;
+	__le16 multicast_hash_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_macvlan_completion);
+
+/* Remove MAC-VLAN (indirect 0x0251)
+ * uses avf_aqc_macvlan for the descriptor
+ * data points to an array of num_addresses of elements
+ */
+
+struct avf_aqc_remove_macvlan_element_data {
+	u8	mac_addr[6];
+	__le16	vlan_tag;
+	u8	flags;
+#define AVF_AQC_MACVLAN_DEL_PERFECT_MATCH	0x01
+#define AVF_AQC_MACVLAN_DEL_HASH_MATCH		0x02
+#define AVF_AQC_MACVLAN_DEL_IGNORE_VLAN	0x08
+#define AVF_AQC_MACVLAN_DEL_ALL_VSIS		0x10
+	u8	reserved[3];
+	/* reply section */
+	u8	error_code;
+#define AVF_AQC_REMOVE_MACVLAN_SUCCESS		0x0
+#define AVF_AQC_REMOVE_MACVLAN_FAIL		0xFF
+	u8	reply_reserved[3];
+};
+
+/* Add VLAN (indirect 0x0252)
+ * Remove VLAN (indirect 0x0253)
+ * use the generic avf_aqc_macvlan for the command
+ */
+struct avf_aqc_add_remove_vlan_element_data {
+	__le16	vlan_tag;
+	u8	vlan_flags;
+/* flags for add VLAN */
+#define AVF_AQC_ADD_VLAN_LOCAL			0x1
+#define AVF_AQC_ADD_PVLAN_TYPE_SHIFT		1
+#define AVF_AQC_ADD_PVLAN_TYPE_MASK	(0x3 << AVF_AQC_ADD_PVLAN_TYPE_SHIFT)
+#define AVF_AQC_ADD_PVLAN_TYPE_REGULAR		0x0
+#define AVF_AQC_ADD_PVLAN_TYPE_PRIMARY		0x2
+#define AVF_AQC_ADD_PVLAN_TYPE_SECONDARY	0x4
+#define AVF_AQC_VLAN_PTYPE_SHIFT		3
+#define AVF_AQC_VLAN_PTYPE_MASK	(0x3 << AVF_AQC_VLAN_PTYPE_SHIFT)
+#define AVF_AQC_VLAN_PTYPE_REGULAR_VSI		0x0
+#define AVF_AQC_VLAN_PTYPE_PROMISC_VSI		0x8
+#define AVF_AQC_VLAN_PTYPE_COMMUNITY_VSI	0x10
+#define AVF_AQC_VLAN_PTYPE_ISOLATED_VSI	0x18
+/* flags for remove VLAN */
+#define AVF_AQC_REMOVE_VLAN_ALL	0x1
+	u8	reserved;
+	u8	result;
+/* flags for add VLAN */
+#define AVF_AQC_ADD_VLAN_SUCCESS	0x0
+#define AVF_AQC_ADD_VLAN_FAIL_REQUEST	0xFE
+#define AVF_AQC_ADD_VLAN_FAIL_RESOURCE	0xFF
+/* flags for remove VLAN */
+#define AVF_AQC_REMOVE_VLAN_SUCCESS	0x0
+#define AVF_AQC_REMOVE_VLAN_FAIL	0xFF
+	u8	reserved1[3];
+};
+
+struct avf_aqc_add_remove_vlan_completion {
+	u8	reserved[4];
+	__le16	vlans_used;
+	__le16	vlans_free;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+/* Set VSI Promiscuous Modes (direct 0x0254) */
+struct avf_aqc_set_vsi_promiscuous_modes {
+	__le16	promiscuous_flags;
+	__le16	valid_flags;
+/* flags used for both fields above */
+#define AVF_AQC_SET_VSI_PROMISC_UNICAST	0x01
+#define AVF_AQC_SET_VSI_PROMISC_MULTICAST	0x02
+#define AVF_AQC_SET_VSI_PROMISC_BROADCAST	0x04
+#define AVF_AQC_SET_VSI_DEFAULT		0x08
+#define AVF_AQC_SET_VSI_PROMISC_VLAN		0x10
+#define AVF_AQC_SET_VSI_PROMISC_TX		0x8000
+	__le16	seid;
+#define AVF_AQC_VSI_PROM_CMD_SEID_MASK		0x3FF
+	__le16	vlan_tag;
+#define AVF_AQC_SET_VSI_VLAN_MASK		0x0FFF
+#define AVF_AQC_SET_VSI_VLAN_VALID		0x8000
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_vsi_promiscuous_modes);
+
+/* Add S/E-tag command (direct 0x0255)
+ * Uses generic avf_aqc_add_remove_tag_completion for completion
+ */
+struct avf_aqc_add_tag {
+	__le16	flags;
+#define AVF_AQC_ADD_TAG_FLAG_TO_QUEUE		0x0001
+	__le16	seid;
+#define AVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	tag;
+	__le16	queue_number;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_tag);
+
+struct avf_aqc_add_remove_tag_completion {
+	u8	reserved[12];
+	__le16	tags_used;
+	__le16	tags_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_tag_completion);
+
+/* Remove S/E-tag command (direct 0x0256)
+ * Uses generic avf_aqc_add_remove_tag_completion for completion
+ */
+struct avf_aqc_remove_tag {
+	__le16	seid;
+#define AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	tag;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_tag);
+
+/* Add multicast E-Tag (direct 0x0257)
+ * Delete multicast E-Tag (direct 0x0258) only uses pv_seid and etag fields
+ * and no external data
+ */
+struct avf_aqc_add_remove_mcast_etag {
+	__le16	pv_seid;
+	__le16	etag;
+	u8	num_unicast_etags;
+	u8	reserved[3];
+	__le32	addr_high;          /* address of array of 2-byte s-tags */
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_mcast_etag);
+
+struct avf_aqc_add_remove_mcast_etag_completion {
+	u8	reserved[4];
+	__le16	mcast_etags_used;
+	__le16	mcast_etags_free;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_mcast_etag_completion);
+
+/* Update S/E-Tag (direct 0x0259) */
+struct avf_aqc_update_tag {
+	__le16	seid;
+#define AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	old_tag;
+	__le16	new_tag;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_update_tag);
+
+struct avf_aqc_update_tag_completion {
+	u8	reserved[12];
+	__le16	tags_used;
+	__le16	tags_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_update_tag_completion);
+
+/* Add Control Packet filter (direct 0x025A)
+ * Remove Control Packet filter (direct 0x025B)
+ * uses the avf_aqc_add_oveb_cloud,
+ * and the generic direct completion structure
+ */
+struct avf_aqc_add_remove_control_packet_filter {
+	u8	mac[6];
+	__le16	etype;
+	__le16	flags;
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC	0x0001
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_DROP		0x0002
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_TO_QUEUE	0x0004
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_TX		0x0008
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_RX		0x0000
+	__le16	seid;
+#define AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_MASK	(0x3FF << \
+				AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT)
+	__le16	queue;
+	u8	reserved[2];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_control_packet_filter);
+
+struct avf_aqc_add_remove_control_packet_filter_completion {
+	__le16	mac_etype_used;
+	__le16	etype_used;
+	__le16	mac_etype_free;
+	__le16	etype_free;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_control_packet_filter_completion);
+
+/* Add Cloud filters (indirect 0x025C)
+ * Remove Cloud filters (indirect 0x025D)
+ * uses the avf_aqc_add_remove_cloud_filters,
+ * and the generic indirect completion structure
+ */
+struct avf_aqc_add_remove_cloud_filters {
+	u8	num_filters;
+	u8	reserved;
+	__le16	seid;
+#define AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT)
+	u8	big_buffer_flag;
+#define AVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER	1
+	u8	reserved2[3];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_cloud_filters);
+
+struct avf_aqc_add_remove_cloud_filters_element_data {
+	u8	outer_mac[6];
+	u8	inner_mac[6];
+	__le16	inner_vlan;
+	union {
+		struct {
+			u8 reserved[12];
+			u8 data[4];
+		} v4;
+		struct {
+			u8 data[16];
+		} v6;
+	} ipaddr;
+	__le16	flags;
+#define AVF_AQC_ADD_CLOUD_FILTER_SHIFT			0
+#define AVF_AQC_ADD_CLOUD_FILTER_MASK	(0x3F << \
+					AVF_AQC_ADD_CLOUD_FILTER_SHIFT)
+/* 0x0000 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_OIP			0x0001
+/* 0x0002 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN		0x0003
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID	0x0004
+/* 0x0005 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID		0x0006
+/* 0x0007 reserved */
+/* 0x0008 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_OMAC			0x0009
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC			0x000A
+#define AVF_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC	0x000B
+#define AVF_AQC_ADD_CLOUD_FILTER_IIP			0x000C
+/* 0x0010 to 0x0017 is for custom filters */
+
+#define AVF_AQC_ADD_CLOUD_FLAGS_TO_QUEUE		0x0080
+#define AVF_AQC_ADD_CLOUD_VNK_SHIFT			6
+#define AVF_AQC_ADD_CLOUD_VNK_MASK			0x00C0
+#define AVF_AQC_ADD_CLOUD_FLAGS_IPV4			0
+#define AVF_AQC_ADD_CLOUD_FLAGS_IPV6			0x0100
+
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_SHIFT		9
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_MASK		0x1E00
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN		0
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_NVGRE_OMAC		1
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_GENEVE		2
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_IP			3
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_RESERVED		4
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN_GPE		5
+
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_MAC	0x2000
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_INNER_MAC	0x4000
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_IP	0x8000
+
+	__le32	tenant_id;
+	u8	reserved[4];
+	__le16	queue_number;
+#define AVF_AQC_ADD_CLOUD_QUEUE_SHIFT		0
+#define AVF_AQC_ADD_CLOUD_QUEUE_MASK		(0x7FF << \
+						 AVF_AQC_ADD_CLOUD_QUEUE_SHIFT)
+	u8	reserved2[14];
+	/* response section */
+	u8	allocation_result;
+#define AVF_AQC_ADD_CLOUD_FILTER_SUCCESS	0x0
+#define AVF_AQC_ADD_CLOUD_FILTER_FAIL		0xFF
+	u8	response_reserved[7];
+};
+
+/* avf_aqc_add_rm_cloud_filt_elem_ext is used when the
+ * AVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER flag is set; refer to DCR288.
+ */
+struct avf_aqc_add_rm_cloud_filt_elem_ext {
+	struct avf_aqc_add_remove_cloud_filters_element_data element;
+	u16     general_fields[32];
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0	0
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1	1
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2	2
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0	3
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1	4
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2	5
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0	6
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1	7
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2	8
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0	9
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1	10
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2	11
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD0	12
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD1	13
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD2	14
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0	15
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD1	16
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD2	17
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD3	18
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD4	19
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD5	20
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD6	21
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD7	22
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD0	23
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD1	24
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD2	25
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD3	26
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD4	27
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD5	28
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD6	29
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD7	30
+};
+
+struct avf_aqc_remove_cloud_filters_completion {
+	__le16 perfect_ovlan_used;
+	__le16 perfect_ovlan_free;
+	__le16 vlan_used;
+	__le16 vlan_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_cloud_filters_completion);
+
+/* Replace filter Command 0x025F
+ * uses the avf_aqc_replace_cloud_filters,
+ * and the generic indirect completion structure
+ */
+struct avf_filter_data {
+	u8 filter_type;
+	u8 input[3];
+};
+
+struct avf_aqc_replace_cloud_filters_cmd {
+	u8	valid_flags;
+#define AVF_AQC_REPLACE_L1_FILTER		0x0
+#define AVF_AQC_REPLACE_CLOUD_FILTER		0x1
+#define AVF_AQC_GET_CLOUD_FILTERS		0x2
+#define AVF_AQC_MIRROR_CLOUD_FILTER		0x4
+#define AVF_AQC_HIGH_PRIORITY_CLOUD_FILTER	0x8
+	u8	old_filter_type;
+	u8	new_filter_type;
+	u8	tr_bit;
+	u8	reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct avf_aqc_replace_cloud_filters_cmd_buf {
+	u8	data[32];
+/* Filter type INPUT codes */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_ENTRIES_MAX	3
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_VALIDATED	(1 << 7UL)
+
+/* Field Vector offsets */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_MAC_DA		0
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_ETH		6
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG		7
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_VLAN		8
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_OVLAN		9
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_IVLAN		10
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_TUNNLE_KEY		11
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IMAC		12
+/* big FLU */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IP_DA		14
+/* big FLU */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_OIP_DA		15
+
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_INNER_VLAN		37
+	struct avf_filter_data	filters[8];
+};
+
+/* Add Mirror Rule (indirect or direct 0x0260)
+ * Delete Mirror Rule (indirect or direct 0x0261)
+ * note: some rule types (4,5) do not use an external buffer.
+ *       take care to set the flags correctly.
+ */
+struct avf_aqc_add_delete_mirror_rule {
+	__le16 seid;
+	__le16 rule_type;
+#define AVF_AQC_MIRROR_RULE_TYPE_SHIFT		0
+#define AVF_AQC_MIRROR_RULE_TYPE_MASK		(0x7 << \
+						AVF_AQC_MIRROR_RULE_TYPE_SHIFT)
+#define AVF_AQC_MIRROR_RULE_TYPE_VPORT_INGRESS	1
+#define AVF_AQC_MIRROR_RULE_TYPE_VPORT_EGRESS	2
+#define AVF_AQC_MIRROR_RULE_TYPE_VLAN		3
+#define AVF_AQC_MIRROR_RULE_TYPE_ALL_INGRESS	4
+#define AVF_AQC_MIRROR_RULE_TYPE_ALL_EGRESS	5
+	__le16 num_entries;
+	__le16 destination;  /* VSI for add, rule id for delete */
+	__le32 addr_high;    /* address of array of 2-byte VSI or VLAN ids */
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_delete_mirror_rule);
+
+struct avf_aqc_add_delete_mirror_rule_completion {
+	u8	reserved[2];
+	__le16	rule_id;  /* only used on add */
+	__le16	mirror_rules_used;
+	__le16	mirror_rules_free;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_delete_mirror_rule_completion);
+
+/* Dynamic Device Personalization */
+struct avf_aqc_write_personalization_profile {
+	u8      flags;
+	u8      reserved[3];
+	__le32  profile_track_id;
+	__le32  addr_high;
+	__le32  addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_write_personalization_profile);
+
+struct avf_aqc_write_ddp_resp {
+	__le32 error_offset;
+	__le32 error_info;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct avf_aqc_get_applied_profiles {
+	u8      flags;
+#define AVF_AQC_GET_DDP_GET_CONF	0x1
+#define AVF_AQC_GET_DDP_GET_RDPU_CONF	0x2
+	u8      rsv[3];
+	__le32  reserved;
+	__le32  addr_high;
+	__le32  addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_applied_profiles);
+
+/* DCB 0x03xx*/
+
+/* PFC Ignore (direct 0x0301)
+ *    the command and response use the same descriptor structure
+ */
+struct avf_aqc_pfc_ignore {
+	u8	tc_bitmap;
+	u8	command_flags; /* unused on response */
+#define AVF_AQC_PFC_IGNORE_SET		0x80
+#define AVF_AQC_PFC_IGNORE_CLEAR	0x0
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_pfc_ignore);
+
+/* DCB Update (direct 0x0302) uses the avf_aq_desc structure
+ * with no parameters
+ */
+
+/* TX scheduler 0x04xx */
+
+/* Almost all the indirect commands use
+ * this generic struct to pass the SEID in param0
+ */
+struct avf_aqc_tx_sched_ind {
+	__le16	vsi_seid;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_tx_sched_ind);
+
+/* Several commands respond with a set of queue set handles */
+struct avf_aqc_qs_handles_resp {
+	__le16 qs_handles[8];
+};
+
+/* Configure VSI BW limits (direct 0x0400) */
+struct avf_aqc_configure_vsi_bw_limit {
+	__le16	vsi_seid;
+	u8	reserved[2];
+	__le16	credit;
+	u8	reserved1[2];
+	u8	max_credit; /* 0-3, limit = 2^max */
+	u8	reserved2[7];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_configure_vsi_bw_limit);
+
+/* Configure VSI Bandwidth Limit per Traffic Type (indirect 0x0406)
+ *    responds with avf_aqc_qs_handles_resp
+ */
+struct avf_aqc_configure_vsi_ets_sla_bw_data {
+	u8	tc_valid_bits;
+	u8	reserved[15];
+	__le16	tc_bw_credits[8]; /* FW writes back QS handles here */
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved1[28];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_configure_vsi_ets_sla_bw_data);
+
+/* Configure VSI Bandwidth Allocation per Traffic Type (indirect 0x0407)
+ *    responds with avf_aqc_qs_handles_resp
+ */
+struct avf_aqc_configure_vsi_tc_bw_data {
+	u8	tc_valid_bits;
+	u8	reserved[3];
+	u8	tc_bw_credits[8];
+	u8	reserved1[4];
+	__le16	qs_handles[8];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_configure_vsi_tc_bw_data);
+
+/* Query vsi bw configuration (indirect 0x0408) */
+struct avf_aqc_query_vsi_bw_config_resp {
+	u8	tc_valid_bits;
+	u8	tc_suspended_bits;
+	u8	reserved[14];
+	__le16	qs_handles[8];
+	u8	reserved1[4];
+	__le16	port_bw_limit;
+	u8	reserved2[2];
+	u8	max_bw; /* 0-3, limit = 2^max */
+	u8	reserved3[23];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_query_vsi_bw_config_resp);
+
+/* Query VSI Bandwidth Allocation per Traffic Type (indirect 0x040A) */
+struct avf_aqc_query_vsi_ets_sla_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[3];
+	u8	share_credits[8];
+	__le16	credits[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_query_vsi_ets_sla_config_resp);
+
+/* Configure Switching Component Bandwidth Limit (direct 0x0410) */
+struct avf_aqc_configure_switching_comp_bw_limit {
+	__le16	seid;
+	u8	reserved[2];
+	__le16	credit;
+	u8	reserved1[2];
+	u8	max_bw; /* 0-3, limit = 2^max */
+	u8	reserved2[7];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_configure_switching_comp_bw_limit);
+
+/* Enable  Physical Port ETS (indirect 0x0413)
+ * Modify  Physical Port ETS (indirect 0x0414)
+ * Disable Physical Port ETS (indirect 0x0415)
+ */
+struct avf_aqc_configure_switching_comp_ets_data {
+	u8	reserved[4];
+	u8	tc_valid_bits;
+	u8	seepage;
+#define AVF_AQ_ETS_SEEPAGE_EN_MASK	0x1
+	u8	tc_strict_priority_flags;
+	u8	reserved1[17];
+	u8	tc_bw_share_credits[8];
+	u8	reserved2[96];
+};
+
+AVF_CHECK_STRUCT_LEN(0x80, avf_aqc_configure_switching_comp_ets_data);
+
+/* Configure Switching Component Bandwidth Limits per Tc (indirect 0x0416) */
+struct avf_aqc_configure_switching_comp_ets_bw_limit_data {
+	u8	tc_valid_bits;
+	u8	reserved[15];
+	__le16	tc_bw_credit[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved1[28];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40,
+		      avf_aqc_configure_switching_comp_ets_bw_limit_data);
+
+/* Configure Switching Component Bandwidth Allocation per Tc
+ * (indirect 0x0417)
+ */
+struct avf_aqc_configure_switching_comp_bw_config_data {
+	u8	tc_valid_bits;
+	u8	reserved[2];
+	u8	absolute_credits; /* bool */
+	u8	tc_bw_share_credits[8];
+	u8	reserved1[20];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_configure_switching_comp_bw_config_data);
+
+/* Query Switching Component Configuration (indirect 0x0418) */
+struct avf_aqc_query_switching_comp_ets_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[35];
+	__le16	port_bw_limit;
+	u8	reserved1[2];
+	u8	tc_bw_max; /* 0-3, limit = 2^max */
+	u8	reserved2[23];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_query_switching_comp_ets_config_resp);
+
+/* Query PhysicalPort ETS Configuration (indirect 0x0419) */
+struct avf_aqc_query_port_ets_config_resp {
+	u8	reserved[4];
+	u8	tc_valid_bits;
+	u8	reserved1;
+	u8	tc_strict_priority_bits;
+	u8	reserved2;
+	u8	tc_bw_share_credits[8];
+	__le16	tc_bw_limits[8];
+
+	/* 4 bits per tc 0-7, 4th bit reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved3[32];
+};
+
+AVF_CHECK_STRUCT_LEN(0x44, avf_aqc_query_port_ets_config_resp);
+
+/* Query Switching Component Bandwidth Allocation per Traffic Type
+ * (indirect 0x041A)
+ */
+struct avf_aqc_query_switching_comp_bw_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[2];
+	u8	absolute_credits_enable; /* bool */
+	u8	tc_bw_share_credits[8];
+	__le16	tc_bw_limits[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_query_switching_comp_bw_config_resp);
+
+/* Suspend/resume port TX traffic
+ * (direct 0x041B and 0x041C) uses the generic SEID struct
+ */
+
+/* Configure partition BW
+ * (indirect 0x041D)
+ */
+struct avf_aqc_configure_partition_bw_data {
+	__le16	pf_valid_bits;
+	u8	min_bw[16];      /* guaranteed bandwidth */
+	u8	max_bw[16];      /* bandwidth limit */
+};
+
+AVF_CHECK_STRUCT_LEN(0x22, avf_aqc_configure_partition_bw_data);
+
+/* Get and set the active HMC resource profile and status.
+ * (direct 0x0500) and (direct 0x0501)
+ */
+struct avf_aq_get_set_hmc_resource_profile {
+	u8	pm_profile;
+	u8	pe_vf_enabled;
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_get_set_hmc_resource_profile);
+
+enum avf_aq_hmc_profile {
+	/* AVF_HMC_PROFILE_NO_CHANGE	= 0, reserved */
+	AVF_HMC_PROFILE_DEFAULT	= 1,
+	AVF_HMC_PROFILE_FAVOR_VF	= 2,
+	AVF_HMC_PROFILE_EQUAL		= 3,
+};
+
+/* Get PHY Abilities (indirect 0x0600) uses the generic indirect struct */
+
+/* set in param0 for get phy abilities to report qualified modules */
+#define AVF_AQ_PHY_REPORT_QUALIFIED_MODULES	0x0001
+#define AVF_AQ_PHY_REPORT_INITIAL_VALUES	0x0002
+
+enum avf_aq_phy_type {
+	AVF_PHY_TYPE_SGMII			= 0x0,
+	AVF_PHY_TYPE_1000BASE_KX		= 0x1,
+	AVF_PHY_TYPE_10GBASE_KX4		= 0x2,
+	AVF_PHY_TYPE_10GBASE_KR		= 0x3,
+	AVF_PHY_TYPE_40GBASE_KR4		= 0x4,
+	AVF_PHY_TYPE_XAUI			= 0x5,
+	AVF_PHY_TYPE_XFI			= 0x6,
+	AVF_PHY_TYPE_SFI			= 0x7,
+	AVF_PHY_TYPE_XLAUI			= 0x8,
+	AVF_PHY_TYPE_XLPPI			= 0x9,
+	AVF_PHY_TYPE_40GBASE_CR4_CU		= 0xA,
+	AVF_PHY_TYPE_10GBASE_CR1_CU		= 0xB,
+	AVF_PHY_TYPE_10GBASE_AOC		= 0xC,
+	AVF_PHY_TYPE_40GBASE_AOC		= 0xD,
+	AVF_PHY_TYPE_UNRECOGNIZED		= 0xE,
+	AVF_PHY_TYPE_UNSUPPORTED		= 0xF,
+	AVF_PHY_TYPE_100BASE_TX		= 0x11,
+	AVF_PHY_TYPE_1000BASE_T		= 0x12,
+	AVF_PHY_TYPE_10GBASE_T			= 0x13,
+	AVF_PHY_TYPE_10GBASE_SR		= 0x14,
+	AVF_PHY_TYPE_10GBASE_LR		= 0x15,
+	AVF_PHY_TYPE_10GBASE_SFPP_CU		= 0x16,
+	AVF_PHY_TYPE_10GBASE_CR1		= 0x17,
+	AVF_PHY_TYPE_40GBASE_CR4		= 0x18,
+	AVF_PHY_TYPE_40GBASE_SR4		= 0x19,
+	AVF_PHY_TYPE_40GBASE_LR4		= 0x1A,
+	AVF_PHY_TYPE_1000BASE_SX		= 0x1B,
+	AVF_PHY_TYPE_1000BASE_LX		= 0x1C,
+	AVF_PHY_TYPE_1000BASE_T_OPTICAL	= 0x1D,
+	AVF_PHY_TYPE_20GBASE_KR2		= 0x1E,
+	AVF_PHY_TYPE_25GBASE_KR		= 0x1F,
+	AVF_PHY_TYPE_25GBASE_CR		= 0x20,
+	AVF_PHY_TYPE_25GBASE_SR		= 0x21,
+	AVF_PHY_TYPE_25GBASE_LR		= 0x22,
+	AVF_PHY_TYPE_25GBASE_AOC		= 0x23,
+	AVF_PHY_TYPE_25GBASE_ACC		= 0x24,
+	AVF_PHY_TYPE_MAX,
+	AVF_PHY_TYPE_EMPTY			= 0xFE,
+	AVF_PHY_TYPE_DEFAULT			= 0xFF,
+};
+
+#define AVF_LINK_SPEED_100MB_SHIFT	0x1
+#define AVF_LINK_SPEED_1000MB_SHIFT	0x2
+#define AVF_LINK_SPEED_10GB_SHIFT	0x3
+#define AVF_LINK_SPEED_40GB_SHIFT	0x4
+#define AVF_LINK_SPEED_20GB_SHIFT	0x5
+#define AVF_LINK_SPEED_25GB_SHIFT	0x6
+
+enum avf_aq_link_speed {
+	AVF_LINK_SPEED_UNKNOWN	= 0,
+	AVF_LINK_SPEED_100MB	= (1 << AVF_LINK_SPEED_100MB_SHIFT),
+	AVF_LINK_SPEED_1GB	= (1 << AVF_LINK_SPEED_1000MB_SHIFT),
+	AVF_LINK_SPEED_10GB	= (1 << AVF_LINK_SPEED_10GB_SHIFT),
+	AVF_LINK_SPEED_40GB	= (1 << AVF_LINK_SPEED_40GB_SHIFT),
+	AVF_LINK_SPEED_20GB	= (1 << AVF_LINK_SPEED_20GB_SHIFT),
+	AVF_LINK_SPEED_25GB	= (1 << AVF_LINK_SPEED_25GB_SHIFT),
+};
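+
+/* Illustrative helper only (not part of the AVF base code): the link_speed
+ * bytes used in the AQ structures below are bitmaps built from the shift
+ * values above, so a driver typically converts them to a plain Mb/s value
+ * roughly like this. The helper name is an example, not an established API.
+ */
+#if 0
+static inline u32 avf_aq_link_speed_to_mbps(u8 link_speed)
+{
+	if (link_speed & AVF_LINK_SPEED_40GB)
+		return 40000;
+	if (link_speed & AVF_LINK_SPEED_25GB)
+		return 25000;
+	if (link_speed & AVF_LINK_SPEED_20GB)
+		return 20000;
+	if (link_speed & AVF_LINK_SPEED_10GB)
+		return 10000;
+	if (link_speed & AVF_LINK_SPEED_1GB)
+		return 1000;
+	if (link_speed & AVF_LINK_SPEED_100MB)
+		return 100;
+	return 0;
+}
+#endif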
+
+struct avf_aqc_module_desc {
+	u8 oui[3];
+	u8 reserved1;
+	u8 part_number[16];
+	u8 revision[4];
+	u8 reserved2[8];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_module_desc);
+
+struct avf_aq_get_phy_abilities_resp {
+	__le32	phy_type;       /* bitmap using the above enum for offsets */
+	u8	link_speed;     /* bitmap using the above enum bit patterns */
+	u8	abilities;
+#define AVF_AQ_PHY_FLAG_PAUSE_TX	0x01
+#define AVF_AQ_PHY_FLAG_PAUSE_RX	0x02
+#define AVF_AQ_PHY_FLAG_LOW_POWER	0x04
+#define AVF_AQ_PHY_LINK_ENABLED	0x08
+#define AVF_AQ_PHY_AN_ENABLED		0x10
+#define AVF_AQ_PHY_FLAG_MODULE_QUAL	0x20
+#define AVF_AQ_PHY_FEC_ABILITY_KR	0x40
+#define AVF_AQ_PHY_FEC_ABILITY_RS	0x80
+	__le16	eee_capability;
+#define AVF_AQ_EEE_100BASE_TX		0x0002
+#define AVF_AQ_EEE_1000BASE_T		0x0004
+#define AVF_AQ_EEE_10GBASE_T		0x0008
+#define AVF_AQ_EEE_1000BASE_KX		0x0010
+#define AVF_AQ_EEE_10GBASE_KX4		0x0020
+#define AVF_AQ_EEE_10GBASE_KR		0x0040
+	__le32	eeer_val;
+	u8	d3_lpan;
+#define AVF_AQ_SET_PHY_D3_LPAN_ENA	0x01
+	u8	phy_type_ext;
+#define AVF_AQ_PHY_TYPE_EXT_25G_KR	0x01
+#define AVF_AQ_PHY_TYPE_EXT_25G_CR	0x02
+#define AVF_AQ_PHY_TYPE_EXT_25G_SR	0x04
+#define AVF_AQ_PHY_TYPE_EXT_25G_LR	0x08
+#define AVF_AQ_PHY_TYPE_EXT_25G_AOC	0x10
+#define AVF_AQ_PHY_TYPE_EXT_25G_ACC	0x20
+	u8	fec_cfg_curr_mod_ext_info;
+#define AVF_AQ_ENABLE_FEC_KR		0x01
+#define AVF_AQ_ENABLE_FEC_RS		0x02
+#define AVF_AQ_REQUEST_FEC_KR		0x04
+#define AVF_AQ_REQUEST_FEC_RS		0x08
+#define AVF_AQ_ENABLE_FEC_AUTO		0x10
+#define AVF_AQ_FEC
+#define AVF_AQ_MODULE_TYPE_EXT_MASK	0xE0
+#define AVF_AQ_MODULE_TYPE_EXT_SHIFT	5
+
+	u8	ext_comp_code;
+	u8	phy_id[4];
+	u8	module_type[3];
+	u8	qualified_module_count;
+#define AVF_AQ_PHY_MAX_QMS		16
+	struct avf_aqc_module_desc	qualified_module[AVF_AQ_PHY_MAX_QMS];
+};
+
+AVF_CHECK_STRUCT_LEN(0x218, avf_aq_get_phy_abilities_resp);
+
+/* Set PHY Config (direct 0x0601) */
+struct avf_aq_set_phy_config { /* same bits as above in all */
+	__le32	phy_type;
+	u8	link_speed;
+	u8	abilities;
+/* bits 0-2 use the values from get_phy_abilities_resp */
+#define AVF_AQ_PHY_ENABLE_LINK		0x08
+#define AVF_AQ_PHY_ENABLE_AN		0x10
+#define AVF_AQ_PHY_ENABLE_ATOMIC_LINK	0x20
+	__le16	eee_capability;
+	__le32	eeer;
+	u8	low_power_ctrl;
+	u8	phy_type_ext;
+	u8	fec_config;
+#define AVF_AQ_SET_FEC_ABILITY_KR	BIT(0)
+#define AVF_AQ_SET_FEC_ABILITY_RS	BIT(1)
+#define AVF_AQ_SET_FEC_REQUEST_KR	BIT(2)
+#define AVF_AQ_SET_FEC_REQUEST_RS	BIT(3)
+#define AVF_AQ_SET_FEC_AUTO		BIT(4)
+#define AVF_AQ_PHY_FEC_CONFIG_SHIFT	0x0
+#define AVF_AQ_PHY_FEC_CONFIG_MASK	(0x1F << AVF_AQ_PHY_FEC_CONFIG_SHIFT)
+	u8	reserved;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_set_phy_config);
+
+/* Set MAC Config command data structure (direct 0x0603) */
+struct avf_aq_set_mac_config {
+	__le16	max_frame_size;
+	u8	params;
+#define AVF_AQ_SET_MAC_CONFIG_CRC_EN		0x04
+#define AVF_AQ_SET_MAC_CONFIG_PACING_MASK	0x78
+#define AVF_AQ_SET_MAC_CONFIG_PACING_SHIFT	3
+#define AVF_AQ_SET_MAC_CONFIG_PACING_NONE	0x0
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1B_13TX	0xF
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_9TX	0x9
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_4TX	0x8
+#define AVF_AQ_SET_MAC_CONFIG_PACING_3DW_7TX	0x7
+#define AVF_AQ_SET_MAC_CONFIG_PACING_2DW_3TX	0x6
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_1TX	0x5
+#define AVF_AQ_SET_MAC_CONFIG_PACING_3DW_2TX	0x4
+#define AVF_AQ_SET_MAC_CONFIG_PACING_7DW_3TX	0x3
+#define AVF_AQ_SET_MAC_CONFIG_PACING_4DW_1TX	0x2
+#define AVF_AQ_SET_MAC_CONFIG_PACING_9DW_1TX	0x1
+	u8	tx_timer_priority; /* bitmap */
+	__le16	tx_timer_value;
+	__le16	fc_refresh_threshold;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_set_mac_config);
+
+/* Restart Auto-Negotiation (direct 0x605) */
+struct avf_aqc_set_link_restart_an {
+	u8	command;
+#define AVF_AQ_PHY_RESTART_AN	0x02
+#define AVF_AQ_PHY_LINK_ENABLE	0x04
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_link_restart_an);
+
+/* Get Link Status cmd & response data structure (direct 0x0607) */
+struct avf_aqc_get_link_status {
+	__le16	command_flags; /* only field set on command */
+#define AVF_AQ_LSE_MASK		0x3
+#define AVF_AQ_LSE_NOP			0x0
+#define AVF_AQ_LSE_DISABLE		0x2
+#define AVF_AQ_LSE_ENABLE		0x3
+/* only response uses this flag */
+#define AVF_AQ_LSE_IS_ENABLED		0x1
+	u8	phy_type;    /* avf_aq_phy_type   */
+	u8	link_speed;  /* avf_aq_link_speed */
+	u8	link_info;
+#define AVF_AQ_LINK_UP			0x01    /* obsolete */
+#define AVF_AQ_LINK_UP_FUNCTION	0x01
+#define AVF_AQ_LINK_FAULT		0x02
+#define AVF_AQ_LINK_FAULT_TX		0x04
+#define AVF_AQ_LINK_FAULT_RX		0x08
+#define AVF_AQ_LINK_FAULT_REMOTE	0x10
+#define AVF_AQ_LINK_UP_PORT		0x20
+#define AVF_AQ_MEDIA_AVAILABLE		0x40
+#define AVF_AQ_SIGNAL_DETECT		0x80
+	u8	an_info;
+#define AVF_AQ_AN_COMPLETED		0x01
+#define AVF_AQ_LP_AN_ABILITY		0x02
+#define AVF_AQ_PD_FAULT		0x04
+#define AVF_AQ_FEC_EN			0x08
+#define AVF_AQ_PHY_LOW_POWER		0x10
+#define AVF_AQ_LINK_PAUSE_TX		0x20
+#define AVF_AQ_LINK_PAUSE_RX		0x40
+#define AVF_AQ_QUALIFIED_MODULE	0x80
+	u8	ext_info;
+#define AVF_AQ_LINK_PHY_TEMP_ALARM	0x01
+#define AVF_AQ_LINK_XCESSIVE_ERRORS	0x02
+#define AVF_AQ_LINK_TX_SHIFT		0x02
+#define AVF_AQ_LINK_TX_MASK		(0x03 << AVF_AQ_LINK_TX_SHIFT)
+#define AVF_AQ_LINK_TX_ACTIVE		0x00
+#define AVF_AQ_LINK_TX_DRAINED		0x01
+#define AVF_AQ_LINK_TX_FLUSHED		0x03
+#define AVF_AQ_LINK_FORCED_40G		0x10
+/* 25G Error Codes */
+#define AVF_AQ_25G_NO_ERR		0X00
+#define AVF_AQ_25G_NOT_PRESENT		0X01
+#define AVF_AQ_25G_NVM_CRC_ERR		0X02
+#define AVF_AQ_25G_SBUS_UCODE_ERR	0X03
+#define AVF_AQ_25G_SERDES_UCODE_ERR	0X04
+#define AVF_AQ_25G_NIMB_UCODE_ERR	0X05
+	u8	loopback; /* use defines from avf_aqc_set_lb_mode */
+/* Since firmware API 1.7 the loopback field also carries power class info
+ * (see the example sketch after this structure).
+ */
+#define AVF_AQ_LOOPBACK_MASK		0x07
+#define AVF_AQ_PWR_CLASS_SHIFT_LB	6
+#define AVF_AQ_PWR_CLASS_MASK_LB	(0x03 << AVF_AQ_PWR_CLASS_SHIFT_LB)
+	__le16	max_frame_size;
+	u8	config;
+#define AVF_AQ_CONFIG_FEC_KR_ENA	0x01
+#define AVF_AQ_CONFIG_FEC_RS_ENA	0x02
+#define AVF_AQ_CONFIG_CRC_ENA		0x04
+#define AVF_AQ_CONFIG_PACING_MASK	0x78
+	union {
+		struct {
+			u8	power_desc;
+#define AVF_AQ_LINK_POWER_CLASS_1	0x00
+#define AVF_AQ_LINK_POWER_CLASS_2	0x01
+#define AVF_AQ_LINK_POWER_CLASS_3	0x02
+#define AVF_AQ_LINK_POWER_CLASS_4	0x03
+#define AVF_AQ_PWR_CLASS_MASK		0x03
+			u8	reserved[4];
+		};
+		struct {
+			u8	link_type[4];
+			u8	link_type_ext;
+		};
+	};
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_link_status);
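+
+/* Illustrative sketch only: with firmware API 1.7 and later the loopback
+ * byte of avf_aqc_get_link_status carries both the loopback mode and the
+ * module power class, which a caller would split with the masks defined in
+ * the structure above. The helper name is an example, not an existing API.
+ */
+#if 0
+static inline void
+avf_parse_link_status_lb(u8 loopback, u8 *lb_mode, u8 *pwr_class)
+{
+	*lb_mode = loopback & AVF_AQ_LOOPBACK_MASK;
+	*pwr_class = (loopback & AVF_AQ_PWR_CLASS_MASK_LB) >>
+		     AVF_AQ_PWR_CLASS_SHIFT_LB;
+}
+#endif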
+
+/* Set event mask command (direct 0x613) */
+struct avf_aqc_set_phy_int_mask {
+	u8	reserved[8];
+	__le16	event_mask;
+#define AVF_AQ_EVENT_LINK_UPDOWN	0x0002
+#define AVF_AQ_EVENT_MEDIA_NA		0x0004
+#define AVF_AQ_EVENT_LINK_FAULT	0x0008
+#define AVF_AQ_EVENT_PHY_TEMP_ALARM	0x0010
+#define AVF_AQ_EVENT_EXCESSIVE_ERRORS	0x0020
+#define AVF_AQ_EVENT_SIGNAL_DETECT	0x0040
+#define AVF_AQ_EVENT_AN_COMPLETED	0x0080
+#define AVF_AQ_EVENT_MODULE_QUAL_FAIL	0x0100
+#define AVF_AQ_EVENT_PORT_TX_SUSPENDED	0x0200
+	u8	reserved1[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_phy_int_mask);
+
+/* Get Local AN advt register (direct 0x0614)
+ * Set Local AN advt register (direct 0x0615)
+ * Get Link Partner AN advt register (direct 0x0616)
+ */
+struct avf_aqc_an_advt_reg {
+	__le32	local_an_reg0;
+	__le16	local_an_reg1;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_an_advt_reg);
+
+/* Set Loopback mode (0x0618) */
+struct avf_aqc_set_lb_mode {
+	__le16	lb_mode;
+#define AVF_AQ_LB_PHY_LOCAL	0x01
+#define AVF_AQ_LB_PHY_REMOTE	0x02
+#define AVF_AQ_LB_MAC_LOCAL	0x04
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_lb_mode);
+
+/* Set PHY Debug command (0x0622) */
+struct avf_aqc_set_phy_debug {
+	u8	command_flags;
+#define AVF_AQ_PHY_DEBUG_RESET_INTERNAL	0x02
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT	2
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_MASK	(0x03 << \
+					AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT)
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_NONE	0x00
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_HARD	0x01
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SOFT	0x02
+/* Disable link manageability on a single port */
+#define AVF_AQ_PHY_DEBUG_DISABLE_LINK_FW	0x10
+/* Disable link manageability on all ports needs both bits 4 and 5 */
+#define AVF_AQ_PHY_DEBUG_DISABLE_ALL_LINK_FW	0x20
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_phy_debug);
+
+enum avf_aq_phy_reg_type {
+	AVF_AQC_PHY_REG_INTERNAL	= 0x1,
+	AVF_AQC_PHY_REG_EXERNAL_BASET	= 0x2,
+	AVF_AQC_PHY_REG_EXERNAL_MODULE	= 0x3
+};
+
+/* Run PHY Activity (0x0626) */
+struct avf_aqc_run_phy_activity {
+	__le16  activity_id;
+	u8      flags;
+	u8      reserved1;
+	__le32  control;
+	__le32  data;
+	u8      reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_run_phy_activity);
+
+/* Set PHY Register command (0x0628) */
+/* Get PHY Register command (0x0629) */
+struct avf_aqc_phy_register_access {
+	u8	phy_interface;
+#define AVF_AQ_PHY_REG_ACCESS_INTERNAL	0
+#define AVF_AQ_PHY_REG_ACCESS_EXTERNAL	1
+#define AVF_AQ_PHY_REG_ACCESS_EXTERNAL_MODULE	2
+	u8	dev_addres;
+	u8	reserved1[2];
+	u32	reg_address;
+	u32	reg_value;
+	u8	reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_phy_register_access);
+
+/* NVM Read command (indirect 0x0701)
+ * NVM Erase commands (direct 0x0702)
+ * NVM Update commands (indirect 0x0703)
+ */
+struct avf_aqc_nvm_update {
+	u8	command_flags;
+#define AVF_AQ_NVM_LAST_CMD	0x01
+#define AVF_AQ_NVM_FLASH_ONLY	0x80
+	u8	module_pointer;
+	__le16	length;
+	__le32	offset;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_update);
+
+/* NVM Config Read (indirect 0x0704) */
+struct avf_aqc_nvm_config_read {
+	__le16	cmd_flags;
+#define AVF_AQ_ANVM_SINGLE_OR_MULTIPLE_FEATURES_MASK	1
+#define AVF_AQ_ANVM_READ_SINGLE_FEATURE		0
+#define AVF_AQ_ANVM_READ_MULTIPLE_FEATURES		1
+	__le16	element_count;
+	__le16	element_id;	/* Feature/field ID */
+	__le16	element_id_msw;	/* MSWord of field ID */
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_config_read);
+
+/* NVM Config Write (indirect 0x0705) */
+struct avf_aqc_nvm_config_write {
+	__le16	cmd_flags;
+	__le16	element_count;
+	u8	reserved[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_config_write);
+
+/* Used for 0x0704 as well as for 0x0705 commands */
+#define AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT		1
+#define AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_MASK \
+				(1 << AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+#define AVF_AQ_ANVM_FEATURE		0
+#define AVF_AQ_ANVM_IMMEDIATE_FIELD	\
+				(1 << AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+struct avf_aqc_nvm_config_data_feature {
+	__le16 feature_id;
+#define AVF_AQ_ANVM_FEATURE_OPTION_OEM_ONLY		0x01
+#define AVF_AQ_ANVM_FEATURE_OPTION_DWORD_MAP		0x08
+#define AVF_AQ_ANVM_FEATURE_OPTION_POR_CSR		0x10
+	__le16 feature_options;
+	__le16 feature_selection;
+};
+
+AVF_CHECK_STRUCT_LEN(0x6, avf_aqc_nvm_config_data_feature);
+
+struct avf_aqc_nvm_config_data_immediate_field {
+	__le32 field_id;
+	__le32 field_value;
+	__le16 field_options;
+	__le16 reserved;
+};
+
+AVF_CHECK_STRUCT_LEN(0xc, avf_aqc_nvm_config_data_immediate_field);
+
+/* OEM Post Update (indirect 0x0720)
+ * no command data struct used
+ */
+struct avf_aqc_nvm_oem_post_update {
+#define AVF_AQ_NVM_OEM_POST_UPDATE_EXTERNAL_DATA	0x01
+	u8 sel_data;
+	u8 reserved[7];
+};
+
+AVF_CHECK_STRUCT_LEN(0x8, avf_aqc_nvm_oem_post_update);
+
+struct avf_aqc_nvm_oem_post_update_buffer {
+	u8 str_len;
+	u8 dev_addr;
+	__le16 eeprom_addr;
+	u8 data[36];
+};
+
+AVF_CHECK_STRUCT_LEN(0x28, avf_aqc_nvm_oem_post_update_buffer);
+
+/* Thermal Sensor (indirect 0x0721)
+ *     read or set thermal sensor configs and values
+ *     takes a sensor and command specific data buffer, not detailed here
+ */
+struct avf_aqc_thermal_sensor {
+	u8 sensor_action;
+#define AVF_AQ_THERMAL_SENSOR_READ_CONFIG	0
+#define AVF_AQ_THERMAL_SENSOR_SET_CONFIG	1
+#define AVF_AQ_THERMAL_SENSOR_READ_TEMP	2
+	u8 reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_thermal_sensor);
+
+/* Send to PF command (indirect 0x0801) id is only used by PF
+ * Send to VF command (indirect 0x0802) id is only used by PF
+ * Send to Peer PF command (indirect 0x0803)
+ */
+struct avf_aqc_pf_vf_message {
+	__le32	id;
+	u8	reserved[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_pf_vf_message);
+
+/* Alternate structure */
+
+/* Direct write (direct 0x0900)
+ * Direct read (direct 0x0902)
+ */
+struct avf_aqc_alternate_write {
+	__le32 address0;
+	__le32 data0;
+	__le32 address1;
+	__le32 data1;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_write);
+
+/* Indirect write (indirect 0x0901)
+ * Indirect read (indirect 0x0903)
+ */
+
+struct avf_aqc_alternate_ind_write {
+	__le32 address;
+	__le32 length;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_ind_write);
+
+/* Done alternate write (direct 0x0904)
+ * uses avf_aq_desc
+ */
+struct avf_aqc_alternate_write_done {
+	__le16	cmd_flags;
+#define AVF_AQ_ALTERNATE_MODE_BIOS_MASK	1
+#define AVF_AQ_ALTERNATE_MODE_BIOS_LEGACY	0
+#define AVF_AQ_ALTERNATE_MODE_BIOS_UEFI	1
+#define AVF_AQ_ALTERNATE_RESET_NEEDED		2
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_write_done);
+
+/* Set OEM mode (direct 0x0905) */
+struct avf_aqc_alternate_set_mode {
+	__le32	mode;
+#define AVF_AQ_ALTERNATE_MODE_NONE	0
+#define AVF_AQ_ALTERNATE_MODE_OEM	1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_set_mode);
+
+/* Clear port Alternate RAM (direct 0x0906) uses avf_aq_desc */
+
+/* async events 0x10xx */
+
+/* Lan Queue Overflow Event (direct, 0x1001) */
+struct avf_aqc_lan_overflow {
+	__le32	prtdcb_rupto;
+	__le32	otx_ctl;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lan_overflow);
+
+/* Get LLDP MIB (indirect 0x0A00) */
+struct avf_aqc_lldp_get_mib {
+	u8	type;
+	u8	reserved1;
+#define AVF_AQ_LLDP_MIB_TYPE_MASK		0x3
+#define AVF_AQ_LLDP_MIB_LOCAL			0x0
+#define AVF_AQ_LLDP_MIB_REMOTE			0x1
+#define AVF_AQ_LLDP_MIB_LOCAL_AND_REMOTE	0x2
+#define AVF_AQ_LLDP_BRIDGE_TYPE_MASK		0xC
+#define AVF_AQ_LLDP_BRIDGE_TYPE_SHIFT		0x2
+#define AVF_AQ_LLDP_BRIDGE_TYPE_NEAREST_BRIDGE	0x0
+#define AVF_AQ_LLDP_BRIDGE_TYPE_NON_TPMR	0x1
+#define AVF_AQ_LLDP_TX_SHIFT			0x4
+#define AVF_AQ_LLDP_TX_MASK			(0x03 << AVF_AQ_LLDP_TX_SHIFT)
+/* TX pause flags use AVF_AQ_LINK_TX_* above */
+	__le16	local_len;
+	__le16	remote_len;
+	u8	reserved2[2];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_get_mib);
+
+/* Configure LLDP MIB Change Event (direct 0x0A01)
+ * also used for the event (with type in the command field)
+ */
+struct avf_aqc_lldp_update_mib {
+	u8	command;
+#define AVF_AQ_LLDP_MIB_UPDATE_ENABLE	0x0
+#define AVF_AQ_LLDP_MIB_UPDATE_DISABLE	0x1
+	u8	reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_update_mib);
+
+/* Add LLDP TLV (indirect 0x0A02)
+ * Delete LLDP TLV (indirect 0x0A04)
+ */
+struct avf_aqc_lldp_add_tlv {
+	u8	type; /* only nearest bridge and non-TPMR from 0x0A00 */
+	u8	reserved1[1];
+	__le16	len;
+	u8	reserved2[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_add_tlv);
+
+/* Update LLDP TLV (indirect 0x0A03) */
+struct avf_aqc_lldp_update_tlv {
+	u8	type; /* only nearest bridge and non-TPMR from 0x0A00 */
+	u8	reserved;
+	__le16	old_len;
+	__le16	new_offset;
+	__le16	new_len;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_update_tlv);
+
+/* Stop LLDP (direct 0x0A05) */
+struct avf_aqc_lldp_stop {
+	u8	command;
+#define AVF_AQ_LLDP_AGENT_STOP		0x0
+#define AVF_AQ_LLDP_AGENT_SHUTDOWN	0x1
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_stop);
+
+/* Start LLDP (direct 0x0A06) */
+
+struct avf_aqc_lldp_start {
+	u8	command;
+#define AVF_AQ_LLDP_AGENT_START	0x1
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_start);
+
+/* Get CEE DCBX Oper Config (0x0A07)
+ * uses the generic descriptor struct
+ * returns below as indirect response
+ */
+
+#define AVF_AQC_CEE_APP_FCOE_SHIFT	0x0
+#define AVF_AQC_CEE_APP_FCOE_MASK	(0x7 << AVF_AQC_CEE_APP_FCOE_SHIFT)
+#define AVF_AQC_CEE_APP_ISCSI_SHIFT	0x3
+#define AVF_AQC_CEE_APP_ISCSI_MASK	(0x7 << AVF_AQC_CEE_APP_ISCSI_SHIFT)
+#define AVF_AQC_CEE_APP_FIP_SHIFT	0x8
+#define AVF_AQC_CEE_APP_FIP_MASK	(0x7 << AVF_AQC_CEE_APP_FIP_SHIFT)
+
+#define AVF_AQC_CEE_PG_STATUS_SHIFT	0x0
+#define AVF_AQC_CEE_PG_STATUS_MASK	(0x7 << AVF_AQC_CEE_PG_STATUS_SHIFT)
+#define AVF_AQC_CEE_PFC_STATUS_SHIFT	0x3
+#define AVF_AQC_CEE_PFC_STATUS_MASK	(0x7 << AVF_AQC_CEE_PFC_STATUS_SHIFT)
+#define AVF_AQC_CEE_APP_STATUS_SHIFT	0x8
+#define AVF_AQC_CEE_APP_STATUS_MASK	(0x7 << AVF_AQC_CEE_APP_STATUS_SHIFT)
+#define AVF_AQC_CEE_FCOE_STATUS_SHIFT	0x8
+#define AVF_AQC_CEE_FCOE_STATUS_MASK	(0x7 << AVF_AQC_CEE_FCOE_STATUS_SHIFT)
+#define AVF_AQC_CEE_ISCSI_STATUS_SHIFT	0xB
+#define AVF_AQC_CEE_ISCSI_STATUS_MASK	(0x7 << AVF_AQC_CEE_ISCSI_STATUS_SHIFT)
+#define AVF_AQC_CEE_FIP_STATUS_SHIFT	0x10
+#define AVF_AQC_CEE_FIP_STATUS_MASK	(0x7 << AVF_AQC_CEE_FIP_STATUS_SHIFT)
+
+/* struct avf_aqc_get_cee_dcb_cfg_v1_resp was originally defined with
+ * word boundary layout issues, which the Linux compilers silently deal
+ * with by adding padding, making the actual struct larger than designed.
+ * However, the FW compiler for the NIC is less lenient and complains
+ * about the struct.  Hence, the struct defined here has an extra byte in
+ * fields reserved3 and reserved4 to directly acknowledge that padding,
+ * and the new length is used in the length check macro.
+ */
+struct avf_aqc_get_cee_dcb_cfg_v1_resp {
+	u8	reserved1;
+	u8	oper_num_tc;
+	u8	oper_prio_tc[4];
+	u8	reserved2;
+	u8	oper_tc_bw[8];
+	u8	oper_pfc_en;
+	u8	reserved3[2];
+	__le16	oper_app_prio;
+	u8	reserved4[2];
+	__le16	tlv_status;
+};
+
+AVF_CHECK_STRUCT_LEN(0x18, avf_aqc_get_cee_dcb_cfg_v1_resp);
+
+struct avf_aqc_get_cee_dcb_cfg_resp {
+	u8	oper_num_tc;
+	u8	oper_prio_tc[4];
+	u8	oper_tc_bw[8];
+	u8	oper_pfc_en;
+	__le16	oper_app_prio;
+	__le32	tlv_status;
+	u8	reserved[12];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_get_cee_dcb_cfg_resp);
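+
+/* Example only: the per-feature CEE status fields defined above (0x0A07)
+ * are packed into tlv_status, so a consumer would typically pull them out
+ * with the shift/mask pairs once tlv_status is converted to CPU order, e.g.
+ * for the PFC status. The helper below is an illustration, not base code.
+ */
+#if 0
+static inline u8 avf_cee_pfc_status(u32 tlv_status)
+{
+	return (u8)((tlv_status & AVF_AQC_CEE_PFC_STATUS_MASK) >>
+		    AVF_AQC_CEE_PFC_STATUS_SHIFT);
+}
+#endif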
+
+/*	Set Local LLDP MIB (indirect 0x0A08)
+ *	Used to replace the local MIB of a given LLDP agent, e.g. DCBX.
+ */
+struct avf_aqc_lldp_set_local_mib {
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT	0
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_MASK	(1 << \
+					SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_LOCAL_MIB	0x0
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT	(1)
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_MASK	(1 << \
+				SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS		0x1
+	u8	type;
+	u8	reserved0;
+	__le16	length;
+	u8	reserved1[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_set_local_mib);
+
+struct avf_aqc_lldp_set_local_mib_resp {
+#define SET_LOCAL_MIB_RESP_EVENT_TRIGGERED_MASK      0x01
+	u8  status;
+	u8  reserved[15];
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_lldp_set_local_mib_resp);
+
+/*	Stop/Start LLDP Agent (direct 0x0A09)
+ *	Used for stopping/starting a specific LLDP agent, e.g. DCBX.
+ */
+struct avf_aqc_lldp_stop_start_specific_agent {
+#define AVF_AQC_START_SPECIFIC_AGENT_SHIFT	0
+#define AVF_AQC_START_SPECIFIC_AGENT_MASK \
+				(1 << AVF_AQC_START_SPECIFIC_AGENT_SHIFT)
+	u8	command;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_stop_start_specific_agent);
+
+/* Add Udp Tunnel command and completion (direct 0x0B00) */
+struct avf_aqc_add_udp_tunnel {
+	__le16	udp_port;
+	u8	reserved0[3];
+	u8	protocol_type;
+#define AVF_AQC_TUNNEL_TYPE_VXLAN	0x00
+#define AVF_AQC_TUNNEL_TYPE_NGE	0x01
+#define AVF_AQC_TUNNEL_TYPE_TEREDO	0x10
+#define AVF_AQC_TUNNEL_TYPE_VXLAN_GPE	0x11
+	u8	reserved1[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_udp_tunnel);
+
+struct avf_aqc_add_udp_tunnel_completion {
+	__le16	udp_port;
+	u8	filter_entry_index;
+	u8	multiple_pfs;
+#define AVF_AQC_SINGLE_PF		0x0
+#define AVF_AQC_MULTIPLE_PFS		0x1
+	u8	total_filters;
+	u8	reserved[11];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_udp_tunnel_completion);
+
+/* remove UDP Tunnel command (0x0B01) */
+struct avf_aqc_remove_udp_tunnel {
+	u8	reserved[2];
+	u8	index; /* 0 to 15 */
+	u8	reserved2[13];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_udp_tunnel);
+
+struct avf_aqc_del_udp_tunnel_completion {
+	__le16	udp_port;
+	u8	index; /* 0 to 15 */
+	u8	multiple_pfs;
+	u8	total_filters_used;
+	u8	reserved1[11];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_del_udp_tunnel_completion);
+
+struct avf_aqc_get_set_rss_key {
+#define AVF_AQC_SET_RSS_KEY_VSI_VALID		(0x1 << 15)
+#define AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT	0
+#define AVF_AQC_SET_RSS_KEY_VSI_ID_MASK	(0x3FF << \
+					AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT)
+	__le16	vsi_id;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_set_rss_key);
+
+struct avf_aqc_get_set_rss_key_data {
+	u8 standard_rss_key[0x28];
+	u8 extended_hash_key[0xc];
+};
+
+AVF_CHECK_STRUCT_LEN(0x34, avf_aqc_get_set_rss_key_data);
+
+struct  avf_aqc_get_set_rss_lut {
+#define AVF_AQC_SET_RSS_LUT_VSI_VALID		(0x1 << 15)
+#define AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT	0
+#define AVF_AQC_SET_RSS_LUT_VSI_ID_MASK	(0x3FF << \
+					AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT)
+	__le16	vsi_id;
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT	0
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK	(0x1 << \
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
+
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI	0
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF	1
+	__le16	flags;
+	u8	reserved[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_set_rss_lut);
+
+/* tunnel key structure 0x0B10 */
+
+struct avf_aqc_tunnel_key_structure {
+	u8	key1_off;
+	u8	key2_off;
+	u8	key1_len;  /* 0 to 15 */
+	u8	key2_len;  /* 0 to 15 */
+	u8	flags;
+#define AVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDE	0x01
+/* response flags */
+#define AVF_AQC_TUNNEL_KEY_STRUCT_SUCCESS	0x01
+#define AVF_AQC_TUNNEL_KEY_STRUCT_MODIFIED	0x02
+#define AVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDDEN	0x03
+	u8	network_key_index;
+#define AVF_AQC_NETWORK_KEY_INDEX_VXLAN		0x0
+#define AVF_AQC_NETWORK_KEY_INDEX_NGE			0x1
+#define AVF_AQC_NETWORK_KEY_INDEX_FLEX_MAC_IN_UDP	0x2
+#define AVF_AQC_NETWORK_KEY_INDEX_GRE			0x3
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_tunnel_key_structure);
+
+/* OEM mode commands (direct 0xFE0x) */
+struct avf_aqc_oem_param_change {
+	__le32	param_type;
+#define AVF_AQ_OEM_PARAM_TYPE_PF_CTL	0
+#define AVF_AQ_OEM_PARAM_TYPE_BW_CTL	1
+#define AVF_AQ_OEM_PARAM_MAC		2
+	__le32	param_value1;
+	__le16	param_value2;
+	u8	reserved[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_oem_param_change);
+
+struct avf_aqc_oem_state_change {
+	__le32	state;
+#define AVF_AQ_OEM_STATE_LINK_DOWN	0x0
+#define AVF_AQ_OEM_STATE_LINK_UP	0x1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_oem_state_change);
+
+/* Initialize OCSD (0xFE02, direct) */
+struct avf_aqc_opc_oem_ocsd_initialize {
+	u8 type_status;
+	u8 reserved1[3];
+	__le32 ocsd_memory_block_addr_high;
+	__le32 ocsd_memory_block_addr_low;
+	__le32 requested_update_interval;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_opc_oem_ocsd_initialize);
+
+/* Initialize OCBB  (0xFE03, direct) */
+struct avf_aqc_opc_oem_ocbb_initialize {
+	u8 type_status;
+	u8 reserved1[3];
+	__le32 ocbb_memory_block_addr_high;
+	__le32 ocbb_memory_block_addr_low;
+	u8 reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_opc_oem_ocbb_initialize);
+
+/* debug commands */
+
+/* get device id (0xFF00) uses the generic structure */
+
+/* set test mode (0xFF01, internal) */
+
+struct avf_acq_set_test_mode {
+	u8	mode;
+#define AVF_AQ_TEST_PARTIAL	0
+#define AVF_AQ_TEST_FULL	1
+#define AVF_AQ_TEST_NVM	2
+	u8	reserved[3];
+	u8	command;
+#define AVF_AQ_TEST_OPEN	0
+#define AVF_AQ_TEST_CLOSE	1
+#define AVF_AQ_TEST_INC	2
+	u8	reserved2[3];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_acq_set_test_mode);
+
+/* Debug Read Register command (0xFF03)
+ * Debug Write Register command (0xFF04)
+ */
+struct avf_aqc_debug_reg_read_write {
+	__le32 reserved;
+	__le32 address;
+	__le32 value_high;
+	__le32 value_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_reg_read_write);
+
+/* Scatter/gather Reg Read  (indirect 0xFF05)
+ * Scatter/gather Reg Write (indirect 0xFF06)
+ */
+
+/* avf_aq_desc is used for the command */
+struct avf_aqc_debug_reg_sg_element_data {
+	__le32 address;
+	__le32 value;
+};
+
+/* Debug Modify register (direct 0xFF07) */
+struct avf_aqc_debug_modify_reg {
+	__le32 address;
+	__le32 value;
+	__le32 clear_mask;
+	__le32 set_mask;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_modify_reg);
+
+/* dump internal data (0xFF08, indirect) */
+
+#define AVF_AQ_CLUSTER_ID_AUX		0
+#define AVF_AQ_CLUSTER_ID_SWITCH_FLU	1
+#define AVF_AQ_CLUSTER_ID_TXSCHED	2
+#define AVF_AQ_CLUSTER_ID_HMC		3
+#define AVF_AQ_CLUSTER_ID_MAC0		4
+#define AVF_AQ_CLUSTER_ID_MAC1		5
+#define AVF_AQ_CLUSTER_ID_MAC2		6
+#define AVF_AQ_CLUSTER_ID_MAC3		7
+#define AVF_AQ_CLUSTER_ID_DCB		8
+#define AVF_AQ_CLUSTER_ID_EMP_MEM	9
+#define AVF_AQ_CLUSTER_ID_PKT_BUF	10
+#define AVF_AQ_CLUSTER_ID_ALTRAM	11
+
+struct avf_aqc_debug_dump_internals {
+	u8	cluster_id;
+	u8	table_id;
+	__le16	data_size;
+	__le32	idx;
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_dump_internals);
+
+struct avf_aqc_debug_modify_internals {
+	u8	cluster_id;
+	u8	cluster_specific_params[7];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_modify_internals);
+
+#endif /* _AVF_ADMINQ_CMD_H_ */
diff --git a/drivers/net/avf/base/avf_alloc.h b/drivers/net/avf/base/avf_alloc.h
new file mode 100644
index 0000000..21e29bd
--- /dev/null
+++ b/drivers/net/avf/base/avf_alloc.h
@@ -0,0 +1,65 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ALLOC_H_
+#define _AVF_ALLOC_H_
+
+struct avf_hw;
+
+/* Memory allocation types */
+enum avf_memory_type {
+	avf_mem_arq_buf = 0,		/* ARQ indirect command buffer */
+	avf_mem_asq_buf = 1,
+	avf_mem_atq_buf = 2,		/* ATQ indirect command buffer */
+	avf_mem_arq_ring = 3,		/* ARQ descriptor ring */
+	avf_mem_atq_ring = 4,		/* ATQ descriptor ring */
+	avf_mem_pd = 5,		/* Page Descriptor */
+	avf_mem_bp = 6,		/* Backing Page - 4KB */
+	avf_mem_bp_jumbo = 7,		/* Backing Page - > 4KB */
+	avf_mem_reserved
+};
+
+/* prototype for functions used for dynamic memory allocation */
+enum avf_status_code avf_allocate_dma_mem(struct avf_hw *hw,
+					    struct avf_dma_mem *mem,
+					    enum avf_memory_type type,
+					    u64 size, u32 alignment);
+enum avf_status_code avf_free_dma_mem(struct avf_hw *hw,
+					struct avf_dma_mem *mem);
+enum avf_status_code avf_allocate_virt_mem(struct avf_hw *hw,
+					     struct avf_virt_mem *mem,
+					     u32 size);
+enum avf_status_code avf_free_virt_mem(struct avf_hw *hw,
+					 struct avf_virt_mem *mem);
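+
+/* The prototypes above are implemented by the host/OS support layer rather
+ * than by the base code. A DPDK-based osdep layer would typically back the
+ * DMA allocation with a memzone, roughly as sketched below. This is an
+ * illustration only: the avf_dma_mem field names (va/pa/size/zone) are
+ * assumed from the osdep definition and may differ, and the memzone field
+ * holding the bus address (iova vs. phys_addr) depends on the DPDK version.
+ */
+#if 0
+enum avf_status_code
+avf_allocate_dma_mem(struct avf_hw *hw, struct avf_dma_mem *mem,
+		     enum avf_memory_type type, u64 size, u32 alignment)
+{
+	const struct rte_memzone *mz;
+	char name[RTE_MEMZONE_NAMESIZE];
+
+	/* unique zone name; naming scheme is illustrative */
+	snprintf(name, sizeof(name), "avf_dma_%" PRIu64, rte_rand());
+	mz = rte_memzone_reserve_aligned(name, size, SOCKET_ID_ANY, 0,
+					 alignment);
+	if (!mz)
+		return AVF_ERR_NO_MEMORY;
+
+	mem->size = size;
+	mem->va = mz->addr;	/* CPU virtual address */
+	mem->pa = mz->iova;	/* bus address programmed into the device */
+	mem->zone = (const void *)mz;
+	return AVF_SUCCESS;
+}
+#endif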
+
+#endif /* _AVF_ALLOC_H_ */
diff --git a/drivers/net/avf/base/avf_common.c b/drivers/net/avf/base/avf_common.c
new file mode 100644
index 0000000..d67297b
--- /dev/null
+++ b/drivers/net/avf/base/avf_common.c
@@ -0,0 +1,1843 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "avf_type.h"
+#include "avf_adminq.h"
+#include "avf_prototype.h"
+#include "virtchnl.h"
+
+
+/**
+ * avf_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the mac type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ **/
+enum avf_status_code avf_set_mac_type(struct avf_hw *hw)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+
+	DEBUGFUNC("avf_set_mac_type\n");
+
+	if (hw->vendor_id == AVF_INTEL_VENDOR_ID) {
+		switch (hw->device_id) {
+		/* TODO: undefined device IDs are removed here for now; need to
+		 * think about how to remove them from the shared code as well.
+		 */
+		case AVF_DEV_ID_ADAPTIVE_VF:
+			hw->mac.type = AVF_MAC_VF;
+			break;
+		default:
+			hw->mac.type = AVF_MAC_GENERIC;
+			break;
+		}
+	} else {
+		status = AVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	DEBUGOUT2("avf_set_mac_type found mac: %d, returns: %d\n",
+		  hw->mac.type, status);
+	return status;
+}
+
+/**
+ * avf_aq_str - convert AQ err code to a string
+ * @hw: pointer to the HW structure
+ * @aq_err: the AQ error code to convert
+ **/
+const char *avf_aq_str(struct avf_hw *hw, enum avf_admin_queue_err aq_err)
+{
+	switch (aq_err) {
+	case AVF_AQ_RC_OK:
+		return "OK";
+	case AVF_AQ_RC_EPERM:
+		return "AVF_AQ_RC_EPERM";
+	case AVF_AQ_RC_ENOENT:
+		return "AVF_AQ_RC_ENOENT";
+	case AVF_AQ_RC_ESRCH:
+		return "AVF_AQ_RC_ESRCH";
+	case AVF_AQ_RC_EINTR:
+		return "AVF_AQ_RC_EINTR";
+	case AVF_AQ_RC_EIO:
+		return "AVF_AQ_RC_EIO";
+	case AVF_AQ_RC_ENXIO:
+		return "AVF_AQ_RC_ENXIO";
+	case AVF_AQ_RC_E2BIG:
+		return "AVF_AQ_RC_E2BIG";
+	case AVF_AQ_RC_EAGAIN:
+		return "AVF_AQ_RC_EAGAIN";
+	case AVF_AQ_RC_ENOMEM:
+		return "AVF_AQ_RC_ENOMEM";
+	case AVF_AQ_RC_EACCES:
+		return "AVF_AQ_RC_EACCES";
+	case AVF_AQ_RC_EFAULT:
+		return "AVF_AQ_RC_EFAULT";
+	case AVF_AQ_RC_EBUSY:
+		return "AVF_AQ_RC_EBUSY";
+	case AVF_AQ_RC_EEXIST:
+		return "AVF_AQ_RC_EEXIST";
+	case AVF_AQ_RC_EINVAL:
+		return "AVF_AQ_RC_EINVAL";
+	case AVF_AQ_RC_ENOTTY:
+		return "AVF_AQ_RC_ENOTTY";
+	case AVF_AQ_RC_ENOSPC:
+		return "AVF_AQ_RC_ENOSPC";
+	case AVF_AQ_RC_ENOSYS:
+		return "AVF_AQ_RC_ENOSYS";
+	case AVF_AQ_RC_ERANGE:
+		return "AVF_AQ_RC_ERANGE";
+	case AVF_AQ_RC_EFLUSHED:
+		return "AVF_AQ_RC_EFLUSHED";
+	case AVF_AQ_RC_BAD_ADDR:
+		return "AVF_AQ_RC_BAD_ADDR";
+	case AVF_AQ_RC_EMODE:
+		return "AVF_AQ_RC_EMODE";
+	case AVF_AQ_RC_EFBIG:
+		return "AVF_AQ_RC_EFBIG";
+	}
+
+	snprintf(hw->err_str, sizeof(hw->err_str), "%d", aq_err);
+	return hw->err_str;
+}
+
+/**
+ * avf_stat_str - convert status err code to a string
+ * @hw: pointer to the HW structure
+ * @stat_err: the status error code to convert
+ **/
+const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err)
+{
+	switch (stat_err) {
+	case AVF_SUCCESS:
+		return "OK";
+	case AVF_ERR_NVM:
+		return "AVF_ERR_NVM";
+	case AVF_ERR_NVM_CHECKSUM:
+		return "AVF_ERR_NVM_CHECKSUM";
+	case AVF_ERR_PHY:
+		return "AVF_ERR_PHY";
+	case AVF_ERR_CONFIG:
+		return "AVF_ERR_CONFIG";
+	case AVF_ERR_PARAM:
+		return "AVF_ERR_PARAM";
+	case AVF_ERR_MAC_TYPE:
+		return "AVF_ERR_MAC_TYPE";
+	case AVF_ERR_UNKNOWN_PHY:
+		return "AVF_ERR_UNKNOWN_PHY";
+	case AVF_ERR_LINK_SETUP:
+		return "AVF_ERR_LINK_SETUP";
+	case AVF_ERR_ADAPTER_STOPPED:
+		return "AVF_ERR_ADAPTER_STOPPED";
+	case AVF_ERR_INVALID_MAC_ADDR:
+		return "AVF_ERR_INVALID_MAC_ADDR";
+	case AVF_ERR_DEVICE_NOT_SUPPORTED:
+		return "AVF_ERR_DEVICE_NOT_SUPPORTED";
+	case AVF_ERR_MASTER_REQUESTS_PENDING:
+		return "AVF_ERR_MASTER_REQUESTS_PENDING";
+	case AVF_ERR_INVALID_LINK_SETTINGS:
+		return "AVF_ERR_INVALID_LINK_SETTINGS";
+	case AVF_ERR_AUTONEG_NOT_COMPLETE:
+		return "AVF_ERR_AUTONEG_NOT_COMPLETE";
+	case AVF_ERR_RESET_FAILED:
+		return "AVF_ERR_RESET_FAILED";
+	case AVF_ERR_SWFW_SYNC:
+		return "AVF_ERR_SWFW_SYNC";
+	case AVF_ERR_NO_AVAILABLE_VSI:
+		return "AVF_ERR_NO_AVAILABLE_VSI";
+	case AVF_ERR_NO_MEMORY:
+		return "AVF_ERR_NO_MEMORY";
+	case AVF_ERR_BAD_PTR:
+		return "AVF_ERR_BAD_PTR";
+	case AVF_ERR_RING_FULL:
+		return "AVF_ERR_RING_FULL";
+	case AVF_ERR_INVALID_PD_ID:
+		return "AVF_ERR_INVALID_PD_ID";
+	case AVF_ERR_INVALID_QP_ID:
+		return "AVF_ERR_INVALID_QP_ID";
+	case AVF_ERR_INVALID_CQ_ID:
+		return "AVF_ERR_INVALID_CQ_ID";
+	case AVF_ERR_INVALID_CEQ_ID:
+		return "AVF_ERR_INVALID_CEQ_ID";
+	case AVF_ERR_INVALID_AEQ_ID:
+		return "AVF_ERR_INVALID_AEQ_ID";
+	case AVF_ERR_INVALID_SIZE:
+		return "AVF_ERR_INVALID_SIZE";
+	case AVF_ERR_INVALID_ARP_INDEX:
+		return "AVF_ERR_INVALID_ARP_INDEX";
+	case AVF_ERR_INVALID_FPM_FUNC_ID:
+		return "AVF_ERR_INVALID_FPM_FUNC_ID";
+	case AVF_ERR_QP_INVALID_MSG_SIZE:
+		return "AVF_ERR_QP_INVALID_MSG_SIZE";
+	case AVF_ERR_QP_TOOMANY_WRS_POSTED:
+		return "AVF_ERR_QP_TOOMANY_WRS_POSTED";
+	case AVF_ERR_INVALID_FRAG_COUNT:
+		return "AVF_ERR_INVALID_FRAG_COUNT";
+	case AVF_ERR_QUEUE_EMPTY:
+		return "AVF_ERR_QUEUE_EMPTY";
+	case AVF_ERR_INVALID_ALIGNMENT:
+		return "AVF_ERR_INVALID_ALIGNMENT";
+	case AVF_ERR_FLUSHED_QUEUE:
+		return "AVF_ERR_FLUSHED_QUEUE";
+	case AVF_ERR_INVALID_PUSH_PAGE_INDEX:
+		return "AVF_ERR_INVALID_PUSH_PAGE_INDEX";
+	case AVF_ERR_INVALID_IMM_DATA_SIZE:
+		return "AVF_ERR_INVALID_IMM_DATA_SIZE";
+	case AVF_ERR_TIMEOUT:
+		return "AVF_ERR_TIMEOUT";
+	case AVF_ERR_OPCODE_MISMATCH:
+		return "AVF_ERR_OPCODE_MISMATCH";
+	case AVF_ERR_CQP_COMPL_ERROR:
+		return "AVF_ERR_CQP_COMPL_ERROR";
+	case AVF_ERR_INVALID_VF_ID:
+		return "AVF_ERR_INVALID_VF_ID";
+	case AVF_ERR_INVALID_HMCFN_ID:
+		return "AVF_ERR_INVALID_HMCFN_ID";
+	case AVF_ERR_BACKING_PAGE_ERROR:
+		return "AVF_ERR_BACKING_PAGE_ERROR";
+	case AVF_ERR_NO_PBLCHUNKS_AVAILABLE:
+		return "AVF_ERR_NO_PBLCHUNKS_AVAILABLE";
+	case AVF_ERR_INVALID_PBLE_INDEX:
+		return "AVF_ERR_INVALID_PBLE_INDEX";
+	case AVF_ERR_INVALID_SD_INDEX:
+		return "AVF_ERR_INVALID_SD_INDEX";
+	case AVF_ERR_INVALID_PAGE_DESC_INDEX:
+		return "AVF_ERR_INVALID_PAGE_DESC_INDEX";
+	case AVF_ERR_INVALID_SD_TYPE:
+		return "AVF_ERR_INVALID_SD_TYPE";
+	case AVF_ERR_MEMCPY_FAILED:
+		return "AVF_ERR_MEMCPY_FAILED";
+	case AVF_ERR_INVALID_HMC_OBJ_INDEX:
+		return "AVF_ERR_INVALID_HMC_OBJ_INDEX";
+	case AVF_ERR_INVALID_HMC_OBJ_COUNT:
+		return "AVF_ERR_INVALID_HMC_OBJ_COUNT";
+	case AVF_ERR_INVALID_SRQ_ARM_LIMIT:
+		return "AVF_ERR_INVALID_SRQ_ARM_LIMIT";
+	case AVF_ERR_SRQ_ENABLED:
+		return "AVF_ERR_SRQ_ENABLED";
+	case AVF_ERR_ADMIN_QUEUE_ERROR:
+		return "AVF_ERR_ADMIN_QUEUE_ERROR";
+	case AVF_ERR_ADMIN_QUEUE_TIMEOUT:
+		return "AVF_ERR_ADMIN_QUEUE_TIMEOUT";
+	case AVF_ERR_BUF_TOO_SHORT:
+		return "AVF_ERR_BUF_TOO_SHORT";
+	case AVF_ERR_ADMIN_QUEUE_FULL:
+		return "AVF_ERR_ADMIN_QUEUE_FULL";
+	case AVF_ERR_ADMIN_QUEUE_NO_WORK:
+		return "AVF_ERR_ADMIN_QUEUE_NO_WORK";
+	case AVF_ERR_BAD_IWARP_CQE:
+		return "AVF_ERR_BAD_IWARP_CQE";
+	case AVF_ERR_NVM_BLANK_MODE:
+		return "AVF_ERR_NVM_BLANK_MODE";
+	case AVF_ERR_NOT_IMPLEMENTED:
+		return "AVF_ERR_NOT_IMPLEMENTED";
+	case AVF_ERR_PE_DOORBELL_NOT_ENABLED:
+		return "AVF_ERR_PE_DOORBELL_NOT_ENABLED";
+	case AVF_ERR_DIAG_TEST_FAILED:
+		return "AVF_ERR_DIAG_TEST_FAILED";
+	case AVF_ERR_NOT_READY:
+		return "AVF_ERR_NOT_READY";
+	case AVF_NOT_SUPPORTED:
+		return "AVF_NOT_SUPPORTED";
+	case AVF_ERR_FIRMWARE_API_VERSION:
+		return "AVF_ERR_FIRMWARE_API_VERSION";
+	}
+
+	snprintf(hw->err_str, sizeof(hw->err_str), "%d", stat_err);
+	return hw->err_str;
+}
+
+/**
+ * avf_debug_aq
+ * @hw: pointer to the hw struct
+ * @mask: debug mask
+ * @desc: pointer to admin queue descriptor
+ * @buffer: pointer to command buffer
+ * @buf_len: max length of buffer
+ *
+ * Dumps debug log about adminq command with descriptor contents.
+ **/
+void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask, void *desc,
+		   void *buffer, u16 buf_len)
+{
+	struct avf_aq_desc *aq_desc = (struct avf_aq_desc *)desc;
+	u8 *buf = (u8 *)buffer;
+	u16 len;
+	u16 i = 0;
+
+	if ((!(mask & hw->debug_mask)) || (desc == NULL))
+		return;
+
+	len = LE16_TO_CPU(aq_desc->datalen);
+
+	avf_debug(hw, mask,
+		   "AQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+		   LE16_TO_CPU(aq_desc->opcode),
+		   LE16_TO_CPU(aq_desc->flags),
+		   LE16_TO_CPU(aq_desc->datalen),
+		   LE16_TO_CPU(aq_desc->retval));
+	avf_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->cookie_high),
+		   LE32_TO_CPU(aq_desc->cookie_low));
+	avf_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->params.internal.param0),
+		   LE32_TO_CPU(aq_desc->params.internal.param1));
+	avf_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->params.external.addr_high),
+		   LE32_TO_CPU(aq_desc->params.external.addr_low));
+
+	if ((buffer != NULL) && (aq_desc->datalen != 0)) {
+		avf_debug(hw, mask, "AQ CMD Buffer:\n");
+		if (buf_len < len)
+			len = buf_len;
+		/* write the full 16-byte chunks */
+		for (i = 0; i < (len - 16); i += 16)
+			avf_debug(hw, mask,
+				   "\t0x%04X  %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
+				   i, buf[i], buf[i+1], buf[i+2], buf[i+3],
+				   buf[i+4], buf[i+5], buf[i+6], buf[i+7],
+				   buf[i+8], buf[i+9], buf[i+10], buf[i+11],
+				   buf[i+12], buf[i+13], buf[i+14], buf[i+15]);
+		/* the most we could have left is 16 bytes, pad with zeros */
+		if (i < len) {
+			char d_buf[16];
+			int j, i_sav;
+
+			i_sav = i;
+			memset(d_buf, 0, sizeof(d_buf));
+			for (j = 0; i < len; j++, i++)
+				d_buf[j] = buf[i];
+			avf_debug(hw, mask,
+				   "\t0x%04X  %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
+				   i_sav, d_buf[0], d_buf[1], d_buf[2], d_buf[3],
+				   d_buf[4], d_buf[5], d_buf[6], d_buf[7],
+				   d_buf[8], d_buf[9], d_buf[10], d_buf[11],
+				   d_buf[12], d_buf[13], d_buf[14], d_buf[15]);
+		}
+	}
+}
+
+/**
+ * avf_check_asq_alive
+ * @hw: pointer to the hw struct
+ *
+ * Returns true if Queue is enabled else false.
+ **/
+bool avf_check_asq_alive(struct avf_hw *hw)
+{
+	if (hw->aq.asq.len)
+#ifdef INTEGRATED_VF
+		if (avf_is_vf(hw))
+			return !!(rd32(hw, hw->aq.asq.len) &
+				AVF_ATQLEN1_ATQENABLE_MASK);
+#else
+		return !!(rd32(hw, hw->aq.asq.len) &
+			AVF_ATQLEN1_ATQENABLE_MASK);
+#endif /* INTEGRATED_VF */
+	return false;
+}
+
+/**
+ * avf_aq_queue_shutdown
+ * @hw: pointer to the hw struct
+ * @unloading: is the driver unloading itself
+ *
+ * Tell the Firmware that we're shutting down the AdminQ and whether
+ * or not the driver is unloading as well.
+ **/
+enum avf_status_code avf_aq_queue_shutdown(struct avf_hw *hw,
+					     bool unloading)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_queue_shutdown *cmd =
+		(struct avf_aqc_queue_shutdown *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_queue_shutdown);
+
+	if (unloading)
+		cmd->driver_unloading = CPU_TO_LE32(AVF_AQ_DRIVER_UNLOADING);
+	status = avf_asq_send_command(hw, &desc, NULL, 0, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get or set RSS look up table
+ **/
+STATIC enum avf_status_code avf_aq_get_set_rss_lut(struct avf_hw *hw,
+						     u16 vsi_id, bool pf_lut,
+						     u8 *lut, u16 lut_size,
+						     bool set)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_set_rss_lut *cmd_resp =
+		   (struct avf_aqc_get_set_rss_lut *)&desc.params.raw;
+
+	if (set)
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_set_rss_lut);
+	else
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_get_rss_lut);
+
+	/* Indirect command */
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd_resp->vsi_id =
+			CPU_TO_LE16((u16)((vsi_id <<
+					  AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT) &
+					  AVF_AQC_SET_RSS_LUT_VSI_ID_MASK));
+	cmd_resp->vsi_id |= CPU_TO_LE16((u16)AVF_AQC_SET_RSS_LUT_VSI_VALID);
+
+	if (pf_lut)
+		cmd_resp->flags |= CPU_TO_LE16((u16)
+					((AVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF <<
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
+	else
+		cmd_resp->flags |= CPU_TO_LE16((u16)
+					((AVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI <<
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
+
+	status = avf_asq_send_command(hw, &desc, lut, lut_size, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * get the RSS lookup table, PF or VSI type
+ **/
+enum avf_status_code avf_aq_get_rss_lut(struct avf_hw *hw, u16 vsi_id,
+					  bool pf_lut, u8 *lut, u16 lut_size)
+{
+	return avf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size,
+				       false);
+}
+
+/**
+ * avf_aq_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * set the RSS lookup table, PF or VSI type
+ **/
+enum avf_status_code avf_aq_set_rss_lut(struct avf_hw *hw, u16 vsi_id,
+					  bool pf_lut, u8 *lut, u16 lut_size)
+{
+	return avf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
+}
+
+/**
+ * avf_aq_get_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
+ *
+ * Internal function to get or set the RSS key per VSI
+ **/
+STATIC enum avf_status_code avf_aq_get_set_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key,
+				      bool set)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_set_rss_key *cmd_resp =
+			(struct avf_aqc_get_set_rss_key *)&desc.params.raw;
+	u16 key_size = sizeof(struct avf_aqc_get_set_rss_key_data);
+
+	if (set)
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_set_rss_key);
+	else
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_get_rss_key);
+
+	/* Indirect command */
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd_resp->vsi_id =
+			CPU_TO_LE16((u16)((vsi_id <<
+					  AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT) &
+					  AVF_AQC_SET_RSS_KEY_VSI_ID_MASK));
+	cmd_resp->vsi_id |= CPU_TO_LE16((u16)AVF_AQC_SET_RSS_KEY_VSI_VALID);
+
+	status = avf_asq_send_command(hw, &desc, key, key_size, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * get the RSS key per VSI
+ **/
+enum avf_status_code avf_aq_get_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key)
+{
+	return avf_aq_get_set_rss_key(hw, vsi_id, key, false);
+}
+
+/**
+ * avf_aq_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * set the RSS key per VSI
+ **/
+enum avf_status_code avf_aq_set_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key)
+{
+	return avf_aq_get_set_rss_key(hw, vsi_id, key, true);
+}
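+
+/* Illustrative usage sketch (not part of the driver logic in this patch):
+ * programming the RSS key and lookup table for a VSI with the wrappers
+ * above.  The vsi_id, LUT size, queue count and key contents are
+ * assumptions made only for this example.
+ */
+#if 0
+static enum avf_status_code
+example_configure_rss(struct avf_hw *hw, u16 vsi_id)
+{
+	struct avf_aqc_get_set_rss_key_data key;
+	u8 lut[64];
+	u16 i;
+	enum avf_status_code status;
+
+	/* spread flows over 4 queues through the indirection table */
+	for (i = 0; i < sizeof(lut); i++)
+		lut[i] = i % 4;
+
+	/* a real caller fills the key with random bytes; zero is only a
+	 * placeholder for the sketch
+	 */
+	avf_memset(&key, 0, sizeof(key), AVF_NONDMA_MEM);
+
+	status = avf_aq_set_rss_key(hw, vsi_id, &key);
+	if (status)
+		return status;
+
+	/* false selects the per-VSI table rather than the PF-wide table */
+	return avf_aq_set_rss_lut(hw, vsi_id, false, lut, sizeof(lut));
+}
+#endif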
+
+/* The avf_ptype_lookup table is used to convert from the 8-bit ptype in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ *
+ * Typical workflow:
+ *
+ * IF NOT avf_ptype_lookup[ptype].known
+ * THEN
+ *      Packet is unknown
+ * ELSE IF avf_ptype_lookup[ptype].outer_ip == AVF_RX_PTYPE_OUTER_IP
+ *      Use the rest of the fields to look at the tunnels, inner protocols, etc
+ * ELSE
+ *      Use the enum avf_rx_l2_ptype to decode the packet type
+ * ENDIF
+ */
+
+/* macro to make the table lines short */
+#define AVF_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
+	{	PTYPE, \
+		1, \
+		AVF_RX_PTYPE_OUTER_##OUTER_IP, \
+		AVF_RX_PTYPE_OUTER_##OUTER_IP_VER, \
+		AVF_RX_PTYPE_##OUTER_FRAG, \
+		AVF_RX_PTYPE_TUNNEL_##T, \
+		AVF_RX_PTYPE_TUNNEL_END_##TE, \
+		AVF_RX_PTYPE_##TEF, \
+		AVF_RX_PTYPE_INNER_PROT_##I, \
+		AVF_RX_PTYPE_PAYLOAD_LAYER_##PL }
+
+#define AVF_PTT_UNUSED_ENTRY(PTYPE) \
+		{ PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
+
+/* shorter macros make the table fit but are terse */
+#define AVF_RX_PTYPE_NOF		AVF_RX_PTYPE_NOT_FRAG
+#define AVF_RX_PTYPE_FRG		AVF_RX_PTYPE_FRAG
+#define AVF_RX_PTYPE_INNER_PROT_TS	AVF_RX_PTYPE_INNER_PROT_TIMESYNC
+
+/* Lookup table mapping the HW PTYPE to the bit field for decoding */
+struct avf_rx_ptype_decoded avf_ptype_lookup[] = {
+	/* L2 Packet types */
+	AVF_PTT_UNUSED_ENTRY(0),
+	AVF_PTT(1,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(2,  L2, NONE, NOF, NONE, NONE, NOF, TS,   PAY2),
+	AVF_PTT(3,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT_UNUSED_ENTRY(4),
+	AVF_PTT_UNUSED_ENTRY(5),
+	AVF_PTT(6,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(7,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT_UNUSED_ENTRY(8),
+	AVF_PTT_UNUSED_ENTRY(9),
+	AVF_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	AVF_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+
+	/* Non Tunneled IPv4 */
+	AVF_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(25),
+	AVF_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	AVF_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	AVF_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv4 */
+	AVF_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(32),
+	AVF_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv6 */
+	AVF_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(39),
+	AVF_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT */
+	AVF_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> IPv4 */
+	AVF_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(47),
+	AVF_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> IPv6 */
+	AVF_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(54),
+	AVF_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC */
+	AVF_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> MAC --> IPv4 */
+	AVF_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(62),
+	AVF_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT -> MAC --> IPv6 */
+	AVF_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(69),
+	AVF_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN */
+	AVF_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */
+	AVF_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(77),
+	AVF_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */
+	AVF_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(84),
+	AVF_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* Non Tunneled IPv6 */
+	AVF_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(91),
+	AVF_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	AVF_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	AVF_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv4 */
+	AVF_PTT(95,  IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(96,  IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(97,  IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(98),
+	AVF_PTT(99,  IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv6 */
+	AVF_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(105),
+	AVF_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT */
+	AVF_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> IPv4 */
+	AVF_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(113),
+	AVF_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> IPv6 */
+	AVF_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(120),
+	AVF_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC */
+	AVF_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv4 */
+	AVF_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(128),
+	AVF_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv6 */
+	AVF_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(135),
+	AVF_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN */
+	AVF_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */
+	AVF_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(143),
+	AVF_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */
+	AVF_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(150),
+	AVF_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* unused entries */
+	AVF_PTT_UNUSED_ENTRY(154),
+	AVF_PTT_UNUSED_ENTRY(155),
+	AVF_PTT_UNUSED_ENTRY(156),
+	AVF_PTT_UNUSED_ENTRY(157),
+	AVF_PTT_UNUSED_ENTRY(158),
+	AVF_PTT_UNUSED_ENTRY(159),
+
+	AVF_PTT_UNUSED_ENTRY(160),
+	AVF_PTT_UNUSED_ENTRY(161),
+	AVF_PTT_UNUSED_ENTRY(162),
+	AVF_PTT_UNUSED_ENTRY(163),
+	AVF_PTT_UNUSED_ENTRY(164),
+	AVF_PTT_UNUSED_ENTRY(165),
+	AVF_PTT_UNUSED_ENTRY(166),
+	AVF_PTT_UNUSED_ENTRY(167),
+	AVF_PTT_UNUSED_ENTRY(168),
+	AVF_PTT_UNUSED_ENTRY(169),
+
+	AVF_PTT_UNUSED_ENTRY(170),
+	AVF_PTT_UNUSED_ENTRY(171),
+	AVF_PTT_UNUSED_ENTRY(172),
+	AVF_PTT_UNUSED_ENTRY(173),
+	AVF_PTT_UNUSED_ENTRY(174),
+	AVF_PTT_UNUSED_ENTRY(175),
+	AVF_PTT_UNUSED_ENTRY(176),
+	AVF_PTT_UNUSED_ENTRY(177),
+	AVF_PTT_UNUSED_ENTRY(178),
+	AVF_PTT_UNUSED_ENTRY(179),
+
+	AVF_PTT_UNUSED_ENTRY(180),
+	AVF_PTT_UNUSED_ENTRY(181),
+	AVF_PTT_UNUSED_ENTRY(182),
+	AVF_PTT_UNUSED_ENTRY(183),
+	AVF_PTT_UNUSED_ENTRY(184),
+	AVF_PTT_UNUSED_ENTRY(185),
+	AVF_PTT_UNUSED_ENTRY(186),
+	AVF_PTT_UNUSED_ENTRY(187),
+	AVF_PTT_UNUSED_ENTRY(188),
+	AVF_PTT_UNUSED_ENTRY(189),
+
+	AVF_PTT_UNUSED_ENTRY(190),
+	AVF_PTT_UNUSED_ENTRY(191),
+	AVF_PTT_UNUSED_ENTRY(192),
+	AVF_PTT_UNUSED_ENTRY(193),
+	AVF_PTT_UNUSED_ENTRY(194),
+	AVF_PTT_UNUSED_ENTRY(195),
+	AVF_PTT_UNUSED_ENTRY(196),
+	AVF_PTT_UNUSED_ENTRY(197),
+	AVF_PTT_UNUSED_ENTRY(198),
+	AVF_PTT_UNUSED_ENTRY(199),
+
+	AVF_PTT_UNUSED_ENTRY(200),
+	AVF_PTT_UNUSED_ENTRY(201),
+	AVF_PTT_UNUSED_ENTRY(202),
+	AVF_PTT_UNUSED_ENTRY(203),
+	AVF_PTT_UNUSED_ENTRY(204),
+	AVF_PTT_UNUSED_ENTRY(205),
+	AVF_PTT_UNUSED_ENTRY(206),
+	AVF_PTT_UNUSED_ENTRY(207),
+	AVF_PTT_UNUSED_ENTRY(208),
+	AVF_PTT_UNUSED_ENTRY(209),
+
+	AVF_PTT_UNUSED_ENTRY(210),
+	AVF_PTT_UNUSED_ENTRY(211),
+	AVF_PTT_UNUSED_ENTRY(212),
+	AVF_PTT_UNUSED_ENTRY(213),
+	AVF_PTT_UNUSED_ENTRY(214),
+	AVF_PTT_UNUSED_ENTRY(215),
+	AVF_PTT_UNUSED_ENTRY(216),
+	AVF_PTT_UNUSED_ENTRY(217),
+	AVF_PTT_UNUSED_ENTRY(218),
+	AVF_PTT_UNUSED_ENTRY(219),
+
+	AVF_PTT_UNUSED_ENTRY(220),
+	AVF_PTT_UNUSED_ENTRY(221),
+	AVF_PTT_UNUSED_ENTRY(222),
+	AVF_PTT_UNUSED_ENTRY(223),
+	AVF_PTT_UNUSED_ENTRY(224),
+	AVF_PTT_UNUSED_ENTRY(225),
+	AVF_PTT_UNUSED_ENTRY(226),
+	AVF_PTT_UNUSED_ENTRY(227),
+	AVF_PTT_UNUSED_ENTRY(228),
+	AVF_PTT_UNUSED_ENTRY(229),
+
+	AVF_PTT_UNUSED_ENTRY(230),
+	AVF_PTT_UNUSED_ENTRY(231),
+	AVF_PTT_UNUSED_ENTRY(232),
+	AVF_PTT_UNUSED_ENTRY(233),
+	AVF_PTT_UNUSED_ENTRY(234),
+	AVF_PTT_UNUSED_ENTRY(235),
+	AVF_PTT_UNUSED_ENTRY(236),
+	AVF_PTT_UNUSED_ENTRY(237),
+	AVF_PTT_UNUSED_ENTRY(238),
+	AVF_PTT_UNUSED_ENTRY(239),
+
+	AVF_PTT_UNUSED_ENTRY(240),
+	AVF_PTT_UNUSED_ENTRY(241),
+	AVF_PTT_UNUSED_ENTRY(242),
+	AVF_PTT_UNUSED_ENTRY(243),
+	AVF_PTT_UNUSED_ENTRY(244),
+	AVF_PTT_UNUSED_ENTRY(245),
+	AVF_PTT_UNUSED_ENTRY(246),
+	AVF_PTT_UNUSED_ENTRY(247),
+	AVF_PTT_UNUSED_ENTRY(248),
+	AVF_PTT_UNUSED_ENTRY(249),
+
+	AVF_PTT_UNUSED_ENTRY(250),
+	AVF_PTT_UNUSED_ENTRY(251),
+	AVF_PTT_UNUSED_ENTRY(252),
+	AVF_PTT_UNUSED_ENTRY(253),
+	AVF_PTT_UNUSED_ENTRY(254),
+	AVF_PTT_UNUSED_ENTRY(255)
+};
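+
+/* Illustrative decode sketch following the workflow comment above the
+ * table (not part of this patch's Rx path).  The decoded field names and
+ * the AVF_RX_PTYPE_OUTER_IPV4 value are assumed to match avf_type.h.
+ */
+#if 0
+static bool example_ptype_is_ipv4(u8 ptype)
+{
+	struct avf_rx_ptype_decoded decoded = avf_ptype_lookup[ptype];
+
+	if (!decoded.known)
+		return false;	/* packet type not recognized by HW */
+
+	return decoded.outer_ip == AVF_RX_PTYPE_OUTER_IP &&
+	       decoded.outer_ip_ver == AVF_RX_PTYPE_OUTER_IPV4;
+}
+#endif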
+
+
+/**
+ * avf_validate_mac_addr - Validate unicast MAC address
+ * @mac_addr: pointer to MAC address
+ *
+ * Tests a MAC address to ensure it is a valid Individual Address
+ **/
+enum avf_status_code avf_validate_mac_addr(u8 *mac_addr)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+
+	DEBUGFUNC("avf_validate_mac_addr");
+
+	/* Broadcast addresses ARE multicast addresses
+	 * Make sure it is not a multicast address
+	 * Reject the zero address
+	 */
+	if (AVF_IS_MULTICAST(mac_addr) ||
+	    (mac_addr[0] == 0 && mac_addr[1] == 0 && mac_addr[2] == 0 &&
+	      mac_addr[3] == 0 && mac_addr[4] == 0 && mac_addr[5] == 0))
+		status = AVF_ERR_INVALID_MAC_ADDR;
+
+	return status;
+}
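+
+/* Minimal usage sketch, not wired into this patch: check a default MAC
+ * handed out by the PF before programming it.
+ */
+#if 0
+static bool example_mac_usable(u8 *mac_addr)
+{
+	/* fails for multicast/broadcast and for the all-zero address */
+	return avf_validate_mac_addr(mac_addr) == AVF_SUCCESS;
+}
+#endif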
+
+/**
+ * avf_aq_rx_ctl_read_register - use FW to read from an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: ptr to register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Use the firmware to read the Rx control register,
+ * especially useful if the Rx unit is under heavy pressure
+ **/
+enum avf_status_code avf_aq_rx_ctl_read_register(struct avf_hw *hw,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_rx_ctl_reg_read_write *cmd_resp =
+		(struct avf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
+	enum avf_status_code status;
+
+	if (reg_val == NULL)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_rx_ctl_reg_read);
+
+	cmd_resp->address = CPU_TO_LE32(reg_addr);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	if (status == AVF_SUCCESS)
+		*reg_val = LE32_TO_CPU(cmd_resp->value);
+
+	return status;
+}
+
+/**
+ * avf_read_rx_ctl - read from an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ **/
+u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	bool use_register;
+	int retry = 5;
+	u32 val = 0;
+
+	use_register = (((hw->aq.api_maj_ver == 1) &&
+			(hw->aq.api_min_ver < 5)) ||
+			(hw->mac.type == AVF_MAC_X722));
+	if (!use_register) {
+do_retry:
+		status = avf_aq_rx_ctl_read_register(hw, reg_addr, &val, NULL);
+		if (hw->aq.asq_last_status == AVF_AQ_RC_EAGAIN && retry) {
+			avf_msec_delay(1);
+			retry--;
+			goto do_retry;
+		}
+	}
+
+	/* if the AQ access failed, try the old-fashioned way */
+	if (status || use_register)
+		val = rd32(hw, reg_addr);
+
+	return val;
+}
+
+/**
+ * avf_aq_rx_ctl_write_register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Use the firmware to write to an Rx control register,
+ * especially useful if the Rx unit is under heavy pressure
+ **/
+enum avf_status_code avf_aq_rx_ctl_write_register(struct avf_hw *hw,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_rx_ctl_reg_read_write *cmd =
+		(struct avf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_rx_ctl_reg_write);
+
+	cmd->address = CPU_TO_LE32(reg_addr);
+	cmd->value = CPU_TO_LE32(reg_val);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_write_rx_ctl - write to an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: register value
+ **/
+void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	bool use_register;
+	int retry = 5;
+
+	use_register = (((hw->aq.api_maj_ver == 1) &&
+			(hw->aq.api_min_ver < 5)) ||
+			(hw->mac.type == AVF_MAC_X722));
+	if (!use_register) {
+do_retry:
+		status = avf_aq_rx_ctl_write_register(hw, reg_addr,
+						       reg_val, NULL);
+		if (hw->aq.asq_last_status == AVF_AQ_RC_EAGAIN && retry) {
+			avf_msec_delay(1);
+			retry--;
+			goto do_retry;
+		}
+	}
+
+	/* if the AQ access failed, try the old-fashioned way */
+	if (status || use_register)
+		wr32(hw, reg_addr, reg_val);
+}
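+
+/* Illustrative sketch (not used by this patch): read-modify-write of an Rx
+ * control register through the AQ-with-MMIO-fallback helpers above.
+ */
+#if 0
+static void example_set_rx_ctl_bits(struct avf_hw *hw, u32 reg_addr, u32 mask)
+{
+	u32 val = avf_read_rx_ctl(hw, reg_addr);
+
+	avf_write_rx_ctl(hw, reg_addr, val | mask);
+}
+#endif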
+
+/**
+ * avf_aq_set_phy_register
+ * @hw: pointer to the hw struct
+ * @phy_select: select which phy should be accessed
+ * @dev_addr: PHY device address
+ * @reg_addr: PHY register address
+ * @reg_val: new register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Write the external PHY register.
+ **/
+enum avf_status_code avf_aq_set_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_phy_register_access *cmd =
+		(struct avf_aqc_phy_register_access *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_set_phy_register);
+
+	cmd->phy_interface = phy_select;
+	cmd->dev_addres = dev_addr;
+	cmd->reg_address = reg_addr;
+	cmd->reg_value = reg_val;
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_phy_register
+ * @hw: pointer to the hw struct
+ * @phy_select: select which phy should be accessed
+ * @dev_addr: PHY device address
+ * @reg_addr: PHY register address
+ * @reg_val: read register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Read the external PHY register.
+ **/
+enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_phy_register_access *cmd =
+		(struct avf_aqc_phy_register_access *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_get_phy_register);
+
+	cmd->phy_interface = phy_select;
+	cmd->dev_addres = dev_addr;
+	cmd->reg_address = reg_addr;
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+	if (!status)
+		*reg_val = cmd->reg_value;
+
+	return status;
+}
+
+
+/**
+ * avf_aq_send_msg_to_pf
+ * @hw: pointer to the hardware structure
+ * @v_opcode: opcodes for VF-PF communication
+ * @v_retval: return error code
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ * @cmd_details: pointer to command details
+ *
+ * Send message to PF driver using admin queue. By default, this message
+ * is sent asynchronously, i.e. avf_asq_send_command() does not wait for
+ * completion before returning.
+ **/
+enum avf_status_code avf_aq_send_msg_to_pf(struct avf_hw *hw,
+				enum virtchnl_ops v_opcode,
+				enum avf_status_code v_retval,
+				u8 *msg, u16 msglen,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_asq_cmd_details details;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_send_msg_to_pf);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_SI);
+	desc.cookie_high = CPU_TO_LE32(v_opcode);
+	desc.cookie_low = CPU_TO_LE32(v_retval);
+	if (msglen) {
+		desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF
+						| AVF_AQ_FLAG_RD));
+		if (msglen > AVF_AQ_LARGE_BUF)
+			desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+		desc.datalen = CPU_TO_LE16(msglen);
+	}
+	if (!cmd_details) {
+		avf_memset(&details, 0, sizeof(details), AVF_NONDMA_MEM);
+		details.async = true;
+		cmd_details = &details;
+	}
+	status = avf_asq_send_command(hw, (struct avf_aq_desc *)&desc, msg,
+				       msglen, cmd_details);
+	return status;
+}
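+
+/* Illustrative sketch of the first virtchnl exchange a VF driver makes with
+ * the helper above: announcing its API version to the PF.  The reply is
+ * read from the ARQ later; error handling is omitted here.
+ */
+#if 0
+static enum avf_status_code example_send_api_version(struct avf_hw *hw)
+{
+	struct virtchnl_version_info ver = {
+		.major = VIRTCHNL_VERSION_MAJOR,
+		.minor = VIRTCHNL_VERSION_MINOR,
+	};
+
+	/* sent asynchronously; avf_asq_send_command() does not wait */
+	return avf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_VERSION, AVF_SUCCESS,
+				      (u8 *)&ver, sizeof(ver), NULL);
+}
+#endif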
+
+/**
+ * avf_parse_hw_config
+ * @hw: pointer to the hardware structure
+ * @msg: pointer to the virtual channel VF resource structure
+ *
+ * Given a VF resource message from the PF, populate the hw struct
+ * with appropriate information.
+ **/
+void avf_parse_hw_config(struct avf_hw *hw,
+			     struct virtchnl_vf_resource *msg)
+{
+	struct virtchnl_vsi_resource *vsi_res;
+	int i;
+
+	vsi_res = &msg->vsi_res[0];
+
+	hw->dev_caps.num_vsis = msg->num_vsis;
+	hw->dev_caps.num_rx_qp = msg->num_queue_pairs;
+	hw->dev_caps.num_tx_qp = msg->num_queue_pairs;
+	hw->dev_caps.num_msix_vectors_vf = msg->max_vectors;
+	hw->dev_caps.dcb = msg->vf_offload_flags &
+			   VIRTCHNL_VF_OFFLOAD_L2;
+	hw->dev_caps.iwarp = (msg->vf_offload_flags &
+			      VIRTCHNL_VF_OFFLOAD_IWARP) ? 1 : 0;
+	for (i = 0; i < msg->num_vsis; i++) {
+		if (vsi_res->vsi_type == VIRTCHNL_VSI_SRIOV) {
+			avf_memcpy(hw->mac.perm_addr,
+				    vsi_res->default_mac_addr,
+				    ETH_ALEN,
+				    AVF_NONDMA_TO_NONDMA);
+			avf_memcpy(hw->mac.addr, vsi_res->default_mac_addr,
+				    ETH_ALEN,
+				    AVF_NONDMA_TO_NONDMA);
+		}
+		vsi_res++;
+	}
+}
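+
+/* Illustrative sketch: once the VIRTCHNL_OP_GET_VF_RESOURCES reply has been
+ * copied off the ARQ into a buffer, it can be applied to the hw struct.
+ */
+#if 0
+static void example_apply_vf_resources(struct avf_hw *hw, u8 *arq_msg)
+{
+	avf_parse_hw_config(hw, (struct virtchnl_vf_resource *)arq_msg);
+}
+#endif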
+
+/**
+ * avf_reset
+ * @hw: pointer to the hardware structure
+ *
+ * Send a VF_RESET message to the PF. Does not wait for response from PF
+ * as none will be forthcoming. Immediately after calling this function,
+ * the admin queue should be shut down and (optionally) reinitialized.
+ **/
+enum avf_status_code avf_reset(struct avf_hw *hw)
+{
+	return avf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_RESET_VF,
+				      AVF_SUCCESS, NULL, 0, NULL);
+}
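+
+/* Illustrative reset sequence per the note above, assuming the
+ * avf_shutdown_adminq()/avf_init_adminq() helpers from avf_adminq.c.
+ */
+#if 0
+static enum avf_status_code example_request_vf_reset(struct avf_hw *hw)
+{
+	enum avf_status_code status;
+
+	status = avf_reset(hw);	/* no reply will be sent back */
+	if (status)
+		return status;
+
+	avf_shutdown_adminq(hw);
+	/* ... wait for the PF to complete the reset, then ... */
+	return avf_init_adminq(hw);
+}
+#endif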
+
+/**
+ * avf_aq_set_arp_proxy_config
+ * @hw: pointer to the HW structure
+ * @proxy_config: pointer to proxy config command table struct
+ * @cmd_details: pointer to command details
+ *
+ * Set ARP offload parameters from pre-populated
+ * avf_aqc_arp_proxy_data struct
+ **/
+enum avf_status_code avf_aq_set_arp_proxy_config(struct avf_hw *hw,
+				struct avf_aqc_arp_proxy_data *proxy_config,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	if (!proxy_config)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_set_proxy_config);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+	desc.params.external.addr_high =
+				  CPU_TO_LE32(AVF_HI_DWORD((u64)proxy_config));
+	desc.params.external.addr_low =
+				  CPU_TO_LE32(AVF_LO_DWORD((u64)proxy_config));
+	desc.datalen = CPU_TO_LE16(sizeof(struct avf_aqc_arp_proxy_data));
+
+	status = avf_asq_send_command(hw, &desc, proxy_config,
+				       sizeof(struct avf_aqc_arp_proxy_data),
+				       cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_set_ns_proxy_table_entry
+ * @hw: pointer to the HW structure
+ * @ns_proxy_table_entry: pointer to NS table entry command struct
+ * @cmd_details: pointer to command details
+ *
+ * Set IPv6 Neighbor Solicitation (NS) protocol offload parameters
+ * from pre-populated avf_aqc_ns_proxy_data struct
+ **/
+enum avf_status_code avf_aq_set_ns_proxy_table_entry(struct avf_hw *hw,
+			struct avf_aqc_ns_proxy_data *ns_proxy_table_entry,
+			struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	if (!ns_proxy_table_entry)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+				avf_aqc_opc_set_ns_proxy_table_entry);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+	desc.params.external.addr_high =
+		CPU_TO_LE32(AVF_HI_DWORD((u64)ns_proxy_table_entry));
+	desc.params.external.addr_low =
+		CPU_TO_LE32(AVF_LO_DWORD((u64)ns_proxy_table_entry));
+	desc.datalen = CPU_TO_LE16(sizeof(struct avf_aqc_ns_proxy_data));
+
+	status = avf_asq_send_command(hw, &desc, ns_proxy_table_entry,
+				       sizeof(struct avf_aqc_ns_proxy_data),
+				       cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_set_clear_wol_filter
+ * @hw: pointer to the hw struct
+ * @filter_index: index of filter to modify (0-7)
+ * @filter: buffer containing filter to be set
+ * @set_filter: true to set filter, false to clear filter
+ * @no_wol_tco: if true, pass through packets cannot cause wake-up
+ *		if false, pass through packets may cause wake-up
+ * @filter_valid: true if filter action is valid
+ * @no_wol_tco_valid: true if no WoL in TCO traffic action valid
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Set or clear WoL filter for port attached to the PF
+ **/
+enum avf_status_code avf_aq_set_clear_wol_filter(struct avf_hw *hw,
+				u8 filter_index,
+				struct avf_aqc_set_wol_filter_data *filter,
+				bool set_filter, bool no_wol_tco,
+				bool filter_valid, bool no_wol_tco_valid,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_set_wol_filter *cmd =
+		(struct avf_aqc_set_wol_filter *)&desc.params.raw;
+	enum avf_status_code status;
+	u16 cmd_flags = 0;
+	u16 valid_flags = 0;
+	u16 buff_len = 0;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_set_wol_filter);
+
+	if (filter_index >= AVF_AQC_MAX_NUM_WOL_FILTERS)
+		return  AVF_ERR_PARAM;
+	cmd->filter_index = CPU_TO_LE16(filter_index);
+
+	if (set_filter) {
+		if (!filter)
+			return  AVF_ERR_PARAM;
+
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER;
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR;
+	}
+
+	if (no_wol_tco)
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER_NO_TCO_WOL;
+	cmd->cmd_flags = CPU_TO_LE16(cmd_flags);
+
+	if (filter_valid)
+		valid_flags |= AVF_AQC_SET_WOL_FILTER_ACTION_VALID;
+	if (no_wol_tco_valid)
+		valid_flags |= AVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID;
+	cmd->valid_flags = CPU_TO_LE16(valid_flags);
+
+	buff_len = sizeof(*filter);
+	desc.datalen = CPU_TO_LE16(buff_len);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd->address_high = CPU_TO_LE32(AVF_HI_DWORD((u64)filter));
+	cmd->address_low = CPU_TO_LE32(AVF_LO_DWORD((u64)filter));
+
+	status = avf_asq_send_command(hw, &desc, filter,
+				       buff_len, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_wake_event_reason
+ * @hw: pointer to the hw struct
+ * @wake_reason: return value, index of matching filter
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Get information for the reason of a Wake Up event
+ **/
+enum avf_status_code avf_aq_get_wake_event_reason(struct avf_hw *hw,
+				u16 *wake_reason,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_wake_reason_completion *resp =
+		(struct avf_aqc_get_wake_reason_completion *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_get_wake_reason);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	if (status == AVF_SUCCESS)
+		*wake_reason = LE16_TO_CPU(resp->wake_reason);
+
+	return status;
+}
+
+/**
+ * avf_aq_clear_all_wol_filters
+ * @hw: pointer to the hw struct
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Clear all WoL filters for the port attached to the PF
+ **/
+enum avf_status_code avf_aq_clear_all_wol_filters(struct avf_hw *hw,
+	struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_clear_all_wol_filters);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+
+/**
+ * avf_aq_write_ddp - Write dynamic device personalization (ddp)
+ * @hw: pointer to the hw struct
+ * @buff: command buffer (size in bytes = buff_size)
+ * @buff_size: buffer size in bytes
+ * @track_id: package tracking id
+ * @error_offset: returns error offset
+ * @error_info: returns error information
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum
+avf_status_code avf_aq_write_ddp(struct avf_hw *hw, void *buff,
+				   u16 buff_size, u32 track_id,
+				   u32 *error_offset, u32 *error_info,
+				   struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_write_personalization_profile *cmd =
+		(struct avf_aqc_write_personalization_profile *)
+		&desc.params.raw;
+	struct avf_aqc_write_ddp_resp *resp;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+				  avf_aqc_opc_write_personalization_profile);
+
+	desc.flags |= CPU_TO_LE16(AVF_AQ_FLAG_BUF | AVF_AQ_FLAG_RD);
+	if (buff_size > AVF_AQ_LARGE_BUF)
+		desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+
+	desc.datalen = CPU_TO_LE16(buff_size);
+
+	cmd->profile_track_id = CPU_TO_LE32(track_id);
+
+	status = avf_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
+	if (!status) {
+		resp = (struct avf_aqc_write_ddp_resp *)&desc.params.raw;
+		if (error_offset)
+			*error_offset = LE32_TO_CPU(resp->error_offset);
+		if (error_info)
+			*error_info = LE32_TO_CPU(resp->error_info);
+	}
+
+	return status;
+}
+
+/**
+ * avf_aq_get_ddp_list - Read dynamic device personalization (ddp)
+ * @hw: pointer to the hw struct
+ * @buff: command buffer (size in bytes = buff_size)
+ * @buff_size: buffer size in bytes
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum
+avf_status_code avf_aq_get_ddp_list(struct avf_hw *hw, void *buff,
+				      u16 buff_size, u8 flags,
+				      struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_applied_profiles *cmd =
+		(struct avf_aqc_get_applied_profiles *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+			  avf_aqc_opc_get_personalization_profile_list);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	if (buff_size > AVF_AQ_LARGE_BUF)
+		desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+	desc.datalen = CPU_TO_LE16(buff_size);
+
+	cmd->flags = flags;
+
+	status = avf_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_find_segment_in_package
+ * @segment_type: the segment type to search for (e.g., SEGMENT_TYPE_AVF)
+ * @pkg_hdr: pointer to the package header to be searched
+ *
+ * This function searches a package file for a particular segment type. On
+ * success it returns a pointer to the segment header, otherwise it will
+ * return NULL.
+ **/
+struct avf_generic_seg_header *
+avf_find_segment_in_package(u32 segment_type,
+			     struct avf_package_header *pkg_hdr)
+{
+	struct avf_generic_seg_header *segment;
+	u32 i;
+
+	/* Search all package segments for the requested segment type */
+	for (i = 0; i < pkg_hdr->segment_count; i++) {
+		segment =
+			(struct avf_generic_seg_header *)((u8 *)pkg_hdr +
+			 pkg_hdr->segment_offset[i]);
+
+		if (segment->type == segment_type)
+			return segment;
+	}
+
+	return NULL;
+}
+
+/* Get section table in profile */
+#define AVF_SECTION_TABLE(profile, sec_tbl)				\
+	do {								\
+		struct avf_profile_segment *p = (profile);		\
+		u32 count;						\
+		u32 *nvm;						\
+		count = p->device_table_count;				\
+		nvm = (u32 *)&p->device_table[count];			\
+		sec_tbl = (struct avf_section_table *)&nvm[nvm[0] + 1]; \
+	} while (0)
+
+/* Get section header in profile */
+#define AVF_SECTION_HEADER(profile, offset)				\
+	(struct avf_profile_section_header *)((u8 *)(profile) + (offset))
+
+/**
+ * avf_find_section_in_profile
+ * @section_type: the section type to search for (e.g., SECTION_TYPE_NOTE)
+ * @profile: pointer to the avf profile segment header to be searched
+ *
+ * This function searches the avf profile segment for a particular section
+ * type. On success it returns a pointer to the section header, otherwise
+ * it will return NULL.
+ **/
+struct avf_profile_section_header *
+avf_find_section_in_profile(u32 section_type,
+			     struct avf_profile_segment *profile)
+{
+	struct avf_profile_section_header *sec;
+	struct avf_section_table *sec_tbl;
+	u32 sec_off;
+	u32 i;
+
+	if (profile->header.type != SEGMENT_TYPE_AVF)
+		return NULL;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		if (sec->section.type == section_type)
+			return sec;
+	}
+
+	return NULL;
+}
+
+/**
+ * avf_ddp_exec_aq_section - Execute generic AQ for DDP
+ * @hw: pointer to the hw struct
+ * @aq: command buffer containing all data to execute AQ
+ **/
+STATIC enum
+avf_status_code avf_ddp_exec_aq_section(struct avf_hw *hw,
+					  struct avf_profile_aq_section *aq)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	u8 *msg = NULL;
+	u16 msglen;
+
+	avf_fill_default_direct_cmd_desc(&desc, aq->opcode);
+	desc.flags |= CPU_TO_LE16(aq->flags);
+	avf_memcpy(desc.params.raw, aq->param, sizeof(desc.params.raw),
+		    AVF_NONDMA_TO_NONDMA);
+
+	msglen = aq->datalen;
+	if (msglen) {
+		desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF |
+						AVF_AQ_FLAG_RD));
+		if (msglen > AVF_AQ_LARGE_BUF)
+			desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+		desc.datalen = CPU_TO_LE16(msglen);
+		msg = &aq->data[0];
+	}
+
+	status = avf_asq_send_command(hw, &desc, msg, msglen, NULL);
+
+	if (status != AVF_SUCCESS) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE,
+			   "unable to exec DDP AQ opcode %u, error %d\n",
+			   aq->opcode, status);
+		return status;
+	}
+
+	/* copy returned desc to aq_buf */
+	avf_memcpy(aq->param, desc.params.raw, sizeof(desc.params.raw),
+		    AVF_NONDMA_TO_NONDMA);
+
+	return AVF_SUCCESS;
+}
+
+/**
+ * avf_validate_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be validated
+ * @track_id: package tracking id
+ * @rollback: flag indicating whether the profile is for rollback.
+ *
+ * Validates supported devices and profile's sections.
+ */
+STATIC enum avf_status_code
+avf_validate_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		      u32 track_id, bool rollback)
+{
+	struct avf_profile_section_header *sec = NULL;
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	u32 vendor_dev_id;
+	u32 dev_cnt;
+	u32 sec_off;
+	u32 i;
+
+	if (track_id == AVF_DDP_TRACKID_INVALID) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE, "Invalid track_id\n");
+		return AVF_NOT_SUPPORTED;
+	}
+
+	dev_cnt = profile->device_table_count;
+	for (i = 0; i < dev_cnt; i++) {
+		vendor_dev_id = profile->device_table[i].vendor_dev_id;
+		if ((vendor_dev_id >> 16) == AVF_INTEL_VENDOR_ID &&
+		    hw->device_id == (vendor_dev_id & 0xFFFF))
+			break;
+	}
+	if (dev_cnt && (i == dev_cnt)) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE,
+			   "Device doesn't support DDP\n");
+		return AVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	/* Validate sections types */
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		if (rollback) {
+			if (sec->section.type == SECTION_TYPE_MMIO ||
+			    sec->section.type == SECTION_TYPE_AQ ||
+			    sec->section.type == SECTION_TYPE_RB_AQ) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Not a roll-back package\n");
+				return AVF_NOT_SUPPORTED;
+			}
+		} else {
+			if (sec->section.type == SECTION_TYPE_RB_AQ ||
+			    sec->section.type == SECTION_TYPE_RB_MMIO) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Not an original package\n");
+				return AVF_NOT_SUPPORTED;
+			}
+		}
+	}
+
+	return status;
+}
+
+/**
+ * avf_write_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be downloaded
+ * @track_id: package tracking id
+ *
+ * Handles the download of a complete package.
+ */
+enum avf_status_code
+avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		   u32 track_id)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	struct avf_profile_section_header *sec = NULL;
+	struct avf_profile_aq_section *ddp_aq;
+	u32 section_size = 0;
+	u32 offset = 0, info = 0;
+	u32 sec_off;
+	u32 i;
+
+	status = avf_validate_profile(hw, profile, track_id, false);
+	if (status)
+		return status;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		/* Process generic admin command */
+		if (sec->section.type == SECTION_TYPE_AQ) {
+			ddp_aq = (struct avf_profile_aq_section *)&sec[1];
+			status = avf_ddp_exec_aq_section(hw, ddp_aq);
+			if (status) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Failed to execute aq: section %d, opcode %u\n",
+					   i, ddp_aq->opcode);
+				break;
+			}
+			sec->section.type = SECTION_TYPE_RB_AQ;
+		}
+
+		/* Skip any non-mmio sections */
+		if (sec->section.type != SECTION_TYPE_MMIO)
+			continue;
+
+		section_size = sec->section.size +
+			sizeof(struct avf_profile_section_header);
+
+		/* Write MMIO section */
+		status = avf_aq_write_ddp(hw, (void *)sec, (u16)section_size,
+					   track_id, &offset, &info, NULL);
+		if (status) {
+			avf_debug(hw, AVF_DEBUG_PACKAGE,
+				   "Failed to write profile: section %d, offset %d, info %d\n",
+				   i, offset, info);
+			break;
+		}
+	}
+	return status;
+}
+
+/**
+ * avf_rollback_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be removed
+ * @track_id: package tracking id
+ *
+ * Rolls back previously loaded package.
+ */
+enum avf_status_code
+avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		      u32 track_id)
+{
+	struct avf_profile_section_header *sec = NULL;
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	u32 offset = 0, info = 0;
+	u32 section_size = 0;
+	u32 sec_off;
+	int i;
+
+	status = avf_validate_profile(hw, profile, track_id, true);
+	if (status)
+		return status;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	/* For rollback write sections in reverse */
+	for (i = sec_tbl->section_count - 1; i >= 0; i--) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+
+		/* Skip any non-rollback sections */
+		if (sec->section.type != SECTION_TYPE_RB_MMIO)
+			continue;
+
+		section_size = sec->section.size +
+			sizeof(struct avf_profile_section_header);
+
+		/* Write roll-back MMIO section */
+		status = avf_aq_write_ddp(hw, (void *)sec, (u16)section_size,
+					   track_id, &offset, &info, NULL);
+		if (status) {
+			avf_debug(hw, AVF_DEBUG_PACKAGE,
+				   "Failed to write profile: section %d, offset %d, info %d\n",
+				   i, offset, info);
+			break;
+		}
+	}
+	return status;
+}
+
+/**
+ * avf_add_pinfo_to_list
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package
+ * @profile_info_sec: buffer for information section
+ * @track_id: package tracking id
+ *
+ * Register a profile to the list of loaded profiles.
+ */
+enum avf_status_code
+avf_add_pinfo_to_list(struct avf_hw *hw,
+		       struct avf_profile_segment *profile,
+		       u8 *profile_info_sec, u32 track_id)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_profile_section_header *sec = NULL;
+	struct avf_profile_info *pinfo;
+	u32 offset = 0, info = 0;
+
+	sec = (struct avf_profile_section_header *)profile_info_sec;
+	sec->tbl_size = 1;
+	sec->data_end = sizeof(struct avf_profile_section_header) +
+			sizeof(struct avf_profile_info);
+	sec->section.type = SECTION_TYPE_INFO;
+	sec->section.offset = sizeof(struct avf_profile_section_header);
+	sec->section.size = sizeof(struct avf_profile_info);
+	pinfo = (struct avf_profile_info *)(profile_info_sec +
+					     sec->section.offset);
+	pinfo->track_id = track_id;
+	pinfo->version = profile->version;
+	pinfo->op = AVF_DDP_ADD_TRACKID;
+	avf_memcpy(pinfo->name, profile->name, AVF_DDP_NAME_SIZE,
+		    AVF_NONDMA_TO_NONDMA);
+
+	status = avf_aq_write_ddp(hw, (void *)sec, sec->data_end,
+				   track_id, &offset, &info, NULL);
+	return status;
+}
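+
+/* Illustrative end-to-end DDP flow using the helpers above (not invoked by
+ * this patch set): locate the AVF profile segment in a package image, write
+ * it, then record it in the applied-profile list.
+ */
+#if 0
+static enum avf_status_code
+example_load_ddp_package(struct avf_hw *hw, struct avf_package_header *pkg,
+			  u32 track_id)
+{
+	u8 pinfo_sec[sizeof(struct avf_profile_section_header) +
+		     sizeof(struct avf_profile_info)];
+	struct avf_profile_segment *profile;
+	enum avf_status_code status;
+
+	profile = (struct avf_profile_segment *)
+		avf_find_segment_in_package(SEGMENT_TYPE_AVF, pkg);
+	if (!profile)
+		return AVF_ERR_PARAM;
+
+	status = avf_write_profile(hw, profile, track_id);
+	if (status)
+		return status;
+
+	return avf_add_pinfo_to_list(hw, profile, pinfo_sec, track_id);
+}
+#endif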
diff --git a/drivers/net/avf/base/avf_devids.h b/drivers/net/avf/base/avf_devids.h
new file mode 100644
index 0000000..7d9fed2
--- /dev/null
+++ b/drivers/net/avf/base/avf_devids.h
@@ -0,0 +1,43 @@
+/*******************************************************************************
+
+Copyright (c) 2017, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_DEVIDS_H_
+#define _AVF_DEVIDS_H_
+
+/* Vendor ID */
+#define AVF_INTEL_VENDOR_ID		0x8086
+
+/* Device IDs */
+#define AVF_DEV_ID_ADAPTIVE_VF		0x1889
+
+#endif /* _AVF_DEVIDS_H_ */
diff --git a/drivers/net/avf/base/avf_hmc.h b/drivers/net/avf/base/avf_hmc.h
new file mode 100644
index 0000000..b9b7b5b
--- /dev/null
+++ b/drivers/net/avf/base/avf_hmc.h
@@ -0,0 +1,245 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_HMC_H_
+#define _AVF_HMC_H_
+
+#define AVF_HMC_MAX_BP_COUNT 512
+
+/* forward-declare the HW struct for the compiler */
+struct avf_hw;
+
+#define AVF_HMC_INFO_SIGNATURE		0x484D5347 /* HMSG */
+#define AVF_HMC_PD_CNT_IN_SD		512
+#define AVF_HMC_DIRECT_BP_SIZE		0x200000 /* 2M */
+#define AVF_HMC_PAGED_BP_SIZE		4096
+#define AVF_HMC_PD_BP_BUF_ALIGNMENT	4096
+#define AVF_FIRST_VF_FPM_ID		16
+
+struct avf_hmc_obj_info {
+	u64 base;	/* base addr in FPM */
+	u32 max_cnt;	/* max count available for this hmc func */
+	u32 cnt;	/* count of objects driver actually wants to create */
+	u64 size;	/* size in bytes of one object */
+};
+
+enum avf_sd_entry_type {
+	AVF_SD_TYPE_INVALID = 0,
+	AVF_SD_TYPE_PAGED   = 1,
+	AVF_SD_TYPE_DIRECT  = 2
+};
+
+struct avf_hmc_bp {
+	enum avf_sd_entry_type entry_type;
+	struct avf_dma_mem addr; /* populate to be used by hw */
+	u32 sd_pd_index;
+	u32 ref_cnt;
+};
+
+struct avf_hmc_pd_entry {
+	struct avf_hmc_bp bp;
+	u32 sd_index;
+	bool rsrc_pg;
+	bool valid;
+};
+
+struct avf_hmc_pd_table {
+	struct avf_dma_mem pd_page_addr; /* populate to be used by hw */
+	struct avf_hmc_pd_entry  *pd_entry; /* [512] for sw bookkeeping */
+	struct avf_virt_mem pd_entry_virt_mem; /* virt mem for pd_entry */
+
+	u32 ref_cnt;
+	u32 sd_index;
+};
+
+struct avf_hmc_sd_entry {
+	enum avf_sd_entry_type entry_type;
+	bool valid;
+
+	union {
+		struct avf_hmc_pd_table pd_table;
+		struct avf_hmc_bp bp;
+	} u;
+};
+
+struct avf_hmc_sd_table {
+	struct avf_virt_mem addr; /* used to track sd_entry allocations */
+	u32 sd_cnt;
+	u32 ref_cnt;
+	struct avf_hmc_sd_entry *sd_entry; /* (sd_cnt*512) entries max */
+};
+
+struct avf_hmc_info {
+	u32 signature;
+	/* equals the PCI func num for the PF; dynamically allocated for VFs */
+	u8 hmc_fn_id;
+	u16 first_sd_index; /* index of the first available SD */
+
+	/* hmc objects */
+	struct avf_hmc_obj_info *hmc_obj;
+	struct avf_virt_mem hmc_obj_virt_mem;
+	struct avf_hmc_sd_table sd_table;
+};
+
+#define AVF_INC_SD_REFCNT(sd_table)	((sd_table)->ref_cnt++)
+#define AVF_INC_PD_REFCNT(pd_table)	((pd_table)->ref_cnt++)
+#define AVF_INC_BP_REFCNT(bp)		((bp)->ref_cnt++)
+
+#define AVF_DEC_SD_REFCNT(sd_table)	((sd_table)->ref_cnt--)
+#define AVF_DEC_PD_REFCNT(pd_table)	((pd_table)->ref_cnt--)
+#define AVF_DEC_BP_REFCNT(bp)		((bp)->ref_cnt--)
+
+/**
+ * AVF_SET_PF_SD_ENTRY - marks the sd entry as valid in the hardware
+ * @hw: pointer to our hw struct
+ * @pa: pointer to physical address
+ * @sd_index: segment descriptor index
+ * @type: if sd entry is direct or paged
+ **/
+#define AVF_SET_PF_SD_ENTRY(hw, pa, sd_index, type)			\
+{									\
+	u32 val1, val2, val3;						\
+	val1 = (u32)(AVF_HI_DWORD(pa));				\
+	val2 = (u32)(pa) | (AVF_HMC_MAX_BP_COUNT <<			\
+		 AVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) |		\
+		((((type) == AVF_SD_TYPE_PAGED) ? 0 : 1) <<		\
+		AVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT) |			\
+		BIT(AVF_PFHMC_SDDATALOW_PMSDVALID_SHIFT);		\
+	val3 = (sd_index) | BIT_ULL(AVF_PFHMC_SDCMD_PMSDWR_SHIFT);	\
+	wr32((hw), AVF_PFHMC_SDDATAHIGH, val1);			\
+	wr32((hw), AVF_PFHMC_SDDATALOW, val2);				\
+	wr32((hw), AVF_PFHMC_SDCMD, val3);				\
+}
+
+/**
+ * AVF_CLEAR_PF_SD_ENTRY - marks the sd entry as invalid in the hardware
+ * @hw: pointer to our hw struct
+ * @sd_index: segment descriptor index
+ * @type: if sd entry is direct or paged
+ **/
+#define AVF_CLEAR_PF_SD_ENTRY(hw, sd_index, type)			\
+{									\
+	u32 val2, val3;							\
+	val2 = (AVF_HMC_MAX_BP_COUNT <<				\
+		AVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) |		\
+		((((type) == AVF_SD_TYPE_PAGED) ? 0 : 1) <<		\
+		AVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT);			\
+	val3 = (sd_index) | BIT_ULL(AVF_PFHMC_SDCMD_PMSDWR_SHIFT);	\
+	wr32((hw), AVF_PFHMC_SDDATAHIGH, 0);				\
+	wr32((hw), AVF_PFHMC_SDDATALOW, val2);				\
+	wr32((hw), AVF_PFHMC_SDCMD, val3);				\
+}
+
+/**
+ * AVF_INVALIDATE_PF_HMC_PD - Invalidates the pd cache in the hardware
+ * @hw: pointer to our hw struct
+ * @sd_idx: segment descriptor index
+ * @pd_idx: page descriptor index
+ **/
+#define AVF_INVALIDATE_PF_HMC_PD(hw, sd_idx, pd_idx)			\
+	wr32((hw), AVF_PFHMC_PDINV,					\
+	    (((sd_idx) << AVF_PFHMC_PDINV_PMSDIDX_SHIFT) |		\
+	     ((pd_idx) << AVF_PFHMC_PDINV_PMPDIDX_SHIFT)))
+
+/**
+ * AVF_FIND_SD_INDEX_LIMIT - finds segment descriptor index limit
+ * @hmc_info: pointer to the HMC configuration information structure
+ * @type: type of HMC resources we're searching
+ * @index: starting index for the object
+ * @cnt: number of objects we're trying to create
+ * @sd_idx: pointer to return index of the segment descriptor in question
+ * @sd_limit: pointer to return the maximum number of segment descriptors
+ *
+ * This function calculates the segment descriptor index and index limit
+ * for the resource defined by avf_hmc_rsrc_type.
+ **/
+#define AVF_FIND_SD_INDEX_LIMIT(hmc_info, type, index, cnt, sd_idx, sd_limit)\
+{									\
+	u64 fpm_addr, fpm_limit;					\
+	fpm_addr = (hmc_info)->hmc_obj[(type)].base +			\
+		   (hmc_info)->hmc_obj[(type)].size * (index);		\
+	fpm_limit = fpm_addr + (hmc_info)->hmc_obj[(type)].size * (cnt);\
+	*(sd_idx) = (u32)(fpm_addr / AVF_HMC_DIRECT_BP_SIZE);		\
+	*(sd_limit) = (u32)((fpm_limit - 1) / AVF_HMC_DIRECT_BP_SIZE);	\
+	/* add one more to the limit to correct our range */		\
+	*(sd_limit) += 1;						\
+}
+
+/**
+ * AVF_FIND_PD_INDEX_LIMIT - finds page descriptor index limit
+ * @hmc_info: pointer to the HMC configuration information struct
+ * @type: HMC resource type we're examining
+ * @idx: starting index for the object
+ * @cnt: number of objects we're trying to create
+ * @pd_index: pointer to return page descriptor index
+ * @pd_limit: pointer to return page descriptor index limit
+ *
+ * Calculates the page descriptor index and index limit for the resource
+ * defined by avf_hmc_rsrc_type.
+ **/
+#define AVF_FIND_PD_INDEX_LIMIT(hmc_info, type, idx, cnt, pd_index, pd_limit)\
+{									\
+	u64 fpm_adr, fpm_limit;						\
+	fpm_adr = (hmc_info)->hmc_obj[(type)].base +			\
+		  (hmc_info)->hmc_obj[(type)].size * (idx);		\
+	fpm_limit = fpm_adr + (hmc_info)->hmc_obj[(type)].size * (cnt);	\
+	*(pd_index) = (u32)(fpm_adr / AVF_HMC_PAGED_BP_SIZE);		\
+	*(pd_limit) = (u32)((fpm_limit - 1) / AVF_HMC_PAGED_BP_SIZE);	\
+	/* add one more to the limit to correct our range */		\
+	*(pd_limit) += 1;						\
+}
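
The two index-limit macros above turn an HMC object range (resource type, starting object index, object count) into the range of segment/page descriptors that back it, using the object's base offset and per-object size in function-private memory (FPM); the returned limit is exclusive. A rough usage sketch follows; it is not taken from this patch and assumes the AVF_SD_TYPE_DIRECT enumerator from earlier in this header, the AVF_HMC_LAN_TX resource type from avf_lan_hmc.h below, and local hw/hmc_info/start_idx/cnt variables.

/* illustrative sketch only -- loosely modelled on the equivalent
 * i40e shared code
 */
u32 sd_idx = 0, sd_limit = 0, j;
enum avf_status_code ret = AVF_SUCCESS;

AVF_FIND_SD_INDEX_LIMIT(hmc_info, AVF_HMC_LAN_TX, start_idx, cnt,
			&sd_idx, &sd_limit);
for (j = sd_idx; j < sd_limit; j++) {
	ret = avf_add_sd_table_entry(hw, hmc_info, j,
				     AVF_SD_TYPE_DIRECT,
				     AVF_HMC_DIRECT_BP_SIZE);
	if (ret != AVF_SUCCESS)
		break;
}
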
+enum avf_status_code avf_add_sd_table_entry(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 sd_index,
+					      enum avf_sd_entry_type type,
+					      u64 direct_mode_sz);
+
+enum avf_status_code avf_add_pd_table_entry(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 pd_index,
+					      struct avf_dma_mem *rsrc_pg);
+enum avf_status_code avf_remove_pd_bp(struct avf_hw *hw,
+					struct avf_hmc_info *hmc_info,
+					u32 idx);
+enum avf_status_code avf_prep_remove_sd_bp(struct avf_hmc_info *hmc_info,
+					     u32 idx);
+enum avf_status_code avf_remove_sd_bp_new(struct avf_hw *hw,
+					    struct avf_hmc_info *hmc_info,
+					    u32 idx, bool is_pf);
+enum avf_status_code avf_prep_remove_pd_page(struct avf_hmc_info *hmc_info,
+					       u32 idx);
+enum avf_status_code avf_remove_pd_page_new(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 idx, bool is_pf);
+
+#endif /* _AVF_HMC_H_ */
diff --git a/drivers/net/avf/base/avf_lan_hmc.h b/drivers/net/avf/base/avf_lan_hmc.h
new file mode 100644
index 0000000..48805d8
--- /dev/null
+++ b/drivers/net/avf/base/avf_lan_hmc.h
@@ -0,0 +1,200 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_LAN_HMC_H_
+#define _AVF_LAN_HMC_H_
+
+/* forward-declare the HW struct for the compiler */
+struct avf_hw;
+
+/* HMC element context information */
+
+/* Rx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct avf_hmc_obj_rxq {
+	u16 head;
+	u16 cpuid; /* bigger than needed, see above for reason */
+	u64 base;
+	u16 qlen;
+#define AVF_RXQ_CTX_DBUFF_SHIFT 7
+	u16 dbuff; /* bigger than needed, see above for reason */
+#define AVF_RXQ_CTX_HBUFF_SHIFT 6
+	u16 hbuff; /* bigger than needed, see above for reason */
+	u8  dtype;
+	u8  dsize;
+	u8  crcstrip;
+	u8  fc_ena;
+	u8  l2tsel;
+	u8  hsplit_0;
+	u8  hsplit_1;
+	u8  showiv;
+	u32 rxmax; /* bigger than needed, see above for reason */
+	u8  tphrdesc_ena;
+	u8  tphwdesc_ena;
+	u8  tphdata_ena;
+	u8  tphhead_ena;
+	u16 lrxqthresh; /* bigger than needed, see above for reason */
+	u8  prefena;	/* NOTE: normally must be set to 1 at init */
+};
+
+/* Tx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct avf_hmc_obj_txq {
+	u16 head;
+	u8  new_context;
+	u64 base;
+	u8  fc_ena;
+	u8  timesync_ena;
+	u8  fd_ena;
+	u8  alt_vlan_ena;
+	u16 thead_wb;
+	u8  cpuid;
+	u8  head_wb_ena;
+	u16 qlen;
+	u8  tphrdesc_ena;
+	u8  tphrpacket_ena;
+	u8  tphwdesc_ena;
+	u64 head_wb_addr;
+	u32 crc;
+	u16 rdylist;
+	u8  rdylist_act;
+};
+
+/* for hsplit_0 field of Rx HMC context */
+enum avf_hmc_obj_rx_hsplit_0 {
+	AVF_HMC_OBJ_RX_HSPLIT_0_NO_SPLIT      = 0,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_L2      = 1,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_IP      = 2,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_TCP_UDP = 4,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_SCTP    = 8,
+};
+
+/* fcoe_cntx and fcoe_filt are for debugging purposes only */
+struct avf_hmc_obj_fcoe_cntx {
+	u32 rsv[32];
+};
+
+struct avf_hmc_obj_fcoe_filt {
+	u32 rsv[8];
+};
+
+/* Context sizes for LAN objects */
+enum avf_hmc_lan_object_size {
+	AVF_HMC_LAN_OBJ_SZ_8   = 0x3,
+	AVF_HMC_LAN_OBJ_SZ_16  = 0x4,
+	AVF_HMC_LAN_OBJ_SZ_32  = 0x5,
+	AVF_HMC_LAN_OBJ_SZ_64  = 0x6,
+	AVF_HMC_LAN_OBJ_SZ_128 = 0x7,
+	AVF_HMC_LAN_OBJ_SZ_256 = 0x8,
+	AVF_HMC_LAN_OBJ_SZ_512 = 0x9,
+};
+
+#define AVF_HMC_L2OBJ_BASE_ALIGNMENT 512
+#define AVF_HMC_OBJ_SIZE_TXQ         128
+#define AVF_HMC_OBJ_SIZE_RXQ         32
+#define AVF_HMC_OBJ_SIZE_FCOE_CNTX   64
+#define AVF_HMC_OBJ_SIZE_FCOE_FILT   64
+
+enum avf_hmc_lan_rsrc_type {
+	AVF_HMC_LAN_FULL  = 0,
+	AVF_HMC_LAN_TX    = 1,
+	AVF_HMC_LAN_RX    = 2,
+	AVF_HMC_FCOE_CTX  = 3,
+	AVF_HMC_FCOE_FILT = 4,
+	AVF_HMC_LAN_MAX   = 5
+};
+
+enum avf_hmc_model {
+	AVF_HMC_MODEL_DIRECT_PREFERRED = 0,
+	AVF_HMC_MODEL_DIRECT_ONLY      = 1,
+	AVF_HMC_MODEL_PAGED_ONLY       = 2,
+	AVF_HMC_MODEL_UNKNOWN,
+};
+
+struct avf_hmc_lan_create_obj_info {
+	struct avf_hmc_info *hmc_info;
+	u32 rsrc_type;
+	u32 start_idx;
+	u32 count;
+	enum avf_sd_entry_type entry_type;
+	u64 direct_mode_sz;
+};
+
+struct avf_hmc_lan_delete_obj_info {
+	struct avf_hmc_info *hmc_info;
+	u32 rsrc_type;
+	u32 start_idx;
+	u32 count;
+};
+
+enum avf_status_code avf_init_lan_hmc(struct avf_hw *hw, u32 txq_num,
+					u32 rxq_num, u32 fcoe_cntx_num,
+					u32 fcoe_filt_num);
+enum avf_status_code avf_configure_lan_hmc(struct avf_hw *hw,
+					     enum avf_hmc_model model);
+enum avf_status_code avf_shutdown_lan_hmc(struct avf_hw *hw);
+
+u64 avf_calculate_l2fpm_size(u32 txq_num, u32 rxq_num,
+			      u32 fcoe_cntx_num, u32 fcoe_filt_num);
+enum avf_status_code avf_get_lan_tx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_txq *s);
+enum avf_status_code avf_clear_lan_tx_queue_context(struct avf_hw *hw,
+						      u16 queue);
+enum avf_status_code avf_set_lan_tx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_txq *s);
+enum avf_status_code avf_get_lan_rx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_rxq *s);
+enum avf_status_code avf_clear_lan_rx_queue_context(struct avf_hw *hw,
+						      u16 queue);
+enum avf_status_code avf_set_lan_rx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_rxq *s);
+enum avf_status_code avf_create_lan_hmc_object(struct avf_hw *hw,
+				struct avf_hmc_lan_create_obj_info *info);
+enum avf_status_code avf_delete_lan_hmc_object(struct avf_hw *hw,
+				struct avf_hmc_lan_delete_obj_info *info);
+
+#endif /* _AVF_LAN_HMC_H_ */
diff --git a/drivers/net/avf/base/avf_osdep.h b/drivers/net/avf/base/avf_osdep.h
new file mode 100644
index 0000000..268f97a
--- /dev/null
+++ b/drivers/net/avf/base/avf_osdep.h
@@ -0,0 +1,192 @@
+/******************************************************************************
+
+  Copyright (c) 2001-2015, Intel Corporation
+  All rights reserved.
+
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions are met:
+
+   1. Redistributions of source code must retain the above copyright notice,
+      this list of conditions and the following disclaimer.
+
+   2. Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in the
+      documentation and/or other materials provided with the distribution.
+
+   3. Neither the name of the Intel Corporation nor the names of its
+      contributors may be used to endorse or promote products derived from
+      this software without specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+  AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+  IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+  ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+  LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+  CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+  SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+  INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+  CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+  ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+  POSSIBILITY OF SUCH DAMAGE.
+******************************************************************************/
+
+#ifndef _AVF_OSDEP_H_
+#define _AVF_OSDEP_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdarg.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_io.h>
+
+#include "../avf_log.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t         u8;
+typedef int8_t          s8;
+typedef uint16_t        u16;
+typedef uint32_t        u32;
+typedef int32_t         s32;
+typedef uint64_t        u64;
+
+#define __iomem
+#define hw_dbg(hw, S, A...) do {} while (0)
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN                  6
+#endif
+
+#ifndef __le16
+#define __le16          uint16_t
+#endif
+#ifndef __le32
+#define __le32          uint32_t
+#endif
+#ifndef __le64
+#define __le64          uint64_t
+#endif
+#ifndef __be16
+#define __be16          uint16_t
+#endif
+#ifndef __be32
+#define __be32          uint32_t
+#endif
+#ifndef __be64
+#define __be64          uint64_t
+#endif
+
+#define FALSE           0
+#define TRUE            1
+#define false           0
+#define true            1
+
+#define min(a,b) RTE_MIN(a,b)
+#define max(a,b) RTE_MAX(a,b)
+
+#define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))
+#define ASSERT(x) do { if (!(x)) rte_panic("AVF: %s\n", #x); } while (0)
+
+#define DEBUGOUT(S)             PMD_DRV_LOG_RAW(DEBUG, S)
+#define DEBUGOUT2(S, A...)      PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F)            DEBUGOUT(F "\n")
+
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define cpu_to_le16(o) rte_cpu_to_le_16(o)
+#define cpu_to_le32(s) rte_cpu_to_le_32(s)
+#define cpu_to_le64(h) rte_cpu_to_le_64(h)
+#define le16_to_cpu(a) rte_le_to_cpu_16(a)
+#define le32_to_cpu(c) rte_le_to_cpu_32(c)
+#define le64_to_cpu(k) rte_le_to_cpu_64(k)
+
+#define avf_memset(a, b, c, d) memset((a), (b), (c))
+#define avf_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
+
+#define avf_usec_delay(x) rte_delay_us(x)
+#define avf_msec_delay(x) rte_delay_us(1000*(x))
+
+#define AVF_PCI_REG(reg)		rte_read32(reg)
+#define AVF_PCI_REG_ADDR(a, reg) \
+	((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+
+#define AVF_PCI_REG_WRITE(reg, value)		\
+	rte_write32((rte_cpu_to_le_32(value)), reg)
+#define AVF_PCI_REG_WRITE_RELAXED(reg, value)	\
+	rte_write32_relaxed((rte_cpu_to_le_32(value)), reg)
+
+static inline uint32_t
+avf_read_addr(volatile void *addr)
+{
+	return rte_le_to_cpu_32(AVF_PCI_REG(addr));
+}
+
+#define AVF_READ_REG(hw, reg) \
+	avf_read_addr(AVF_PCI_REG_ADDR((hw), (reg)))
+#define AVF_WRITE_REG(hw, reg, value) \
+	AVF_PCI_REG_WRITE(AVF_PCI_REG_ADDR((hw), (reg)), (value))
+#define AVF_WRITE_FLUSH(a) \
+	AVF_READ_REG(a, AVFGEN_RSTAT)
+
+#define rd32(a, reg) avf_read_addr(AVF_PCI_REG_ADDR((a), (reg)))
+#define wr32(a, reg, value) \
+	AVF_PCI_REG_WRITE(AVF_PCI_REG_ADDR((a), (reg)), (value))
+
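The rd32()/wr32() and AVF_READ_REG()/AVF_WRITE_REG() helpers above resolve to the rte_io 32-bit MMIO accessors, with the register offset applied relative to hw->hw_addr and values converted to and from little-endian. A minimal usage sketch, not taken from this patch (the VIRTCHNL_VFR_VFACTIVE state is assumed from virtchnl.h, and hw is a local struct avf_hw pointer):

/* illustrative sketch: poll the VF reset state register until the PF
 * reports the VF active again
 */
u32 i, rstat = 0;

for (i = 0; i < 100; i++) {
	rstat = rd32(hw, AVFGEN_RSTAT) & AVFGEN_RSTAT_VFR_STATE_MASK;
	if (rstat == VIRTCHNL_VFR_VFACTIVE)
		break;
	avf_msec_delay(10);
}
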
+#define ARRAY_SIZE(arr) (sizeof(arr)/sizeof(arr[0]))
+
+#define avf_debug(h, m, s, ...)                                \
+do {                                                            \
+	if (((m) & (h)->debug_mask))                            \
+		PMD_DRV_LOG_RAW(DEBUG, "avf %02x.%x " s,       \
+			(h)->bus.device, (h)->bus.func,         \
+					##__VA_ARGS__);         \
+} while (0)
+
+/* memory allocation tracking */
+struct avf_dma_mem {
+	void *va;
+	u64 pa;
+	u32 size;
+	const void *zone;
+} __attribute__((packed));
+
+struct avf_virt_mem {
+	void *va;
+	u32 size;
+} __attribute__((packed));
+
+/* SW spinlock */
+struct avf_spinlock {
+	rte_spinlock_t spinlock;
+};
+
+#define avf_allocate_dma_mem(h, m, unused, s, a) \
+			avf_allocate_dma_mem_d(h, m, s, a)
+#define avf_free_dma_mem(h, m) avf_free_dma_mem_d(h, m)
+
+#define avf_allocate_virt_mem(h, m, s) avf_allocate_virt_mem_d(h, m, s)
+#define avf_free_virt_mem(h, m) avf_free_virt_mem_d(h, m)
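
The avf_allocate_dma_mem()/avf_free_dma_mem() wrappers (and the virt_mem variants) forward to *_d() hooks that the PMD proper is expected to supply; the dropped third argument of the DMA variant is a memory-type hint the DPDK port does not need. A memzone-backed hook could look roughly like the sketch below. This is illustrative only: the driver's real definitions live outside this header, and mz->phys_addr assumes the DPDK 17.x memzone layout.

/* illustrative sketch only -- one possible definition of the
 * avf_allocate_dma_mem_d() hook, not taken from this patch
 */
enum avf_status_code
avf_allocate_dma_mem_d(struct avf_hw *hw, struct avf_dma_mem *mem,
		       u64 size, u32 alignment)
{
	const struct rte_memzone *mz;
	static unsigned int id;
	char z_name[RTE_MEMZONE_NAMESIZE];

	RTE_SET_USED(hw);
	if (!mem)
		return AVF_ERR_PARAM;

	snprintf(z_name, sizeof(z_name), "avf_dma_%u", id++);
	mz = rte_memzone_reserve_aligned(z_name, size, SOCKET_ID_ANY,
					 0, alignment);
	if (!mz)
		return AVF_ERR_NO_MEMORY;

	mem->size = size;
	mem->va = mz->addr;
	mem->pa = mz->phys_addr;	/* iova on newer DPDK releases */
	mem->zone = (const void *)mz;
	return AVF_SUCCESS;
}
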
+
+#define avf_init_spinlock(_sp) avf_init_spinlock_d(_sp)
+#define avf_acquire_spinlock(_sp) avf_acquire_spinlock_d(_sp)
+#define avf_release_spinlock(_sp) avf_release_spinlock_d(_sp)
+#define avf_destroy_spinlock(_sp) avf_destroy_spinlock_d(_sp)
+
+#endif /* _AVF_OSDEP_H_ */
diff --git a/drivers/net/avf/base/avf_prototype.h b/drivers/net/avf/base/avf_prototype.h
new file mode 100644
index 0000000..de031dc
--- /dev/null
+++ b/drivers/net/avf/base/avf_prototype.h
@@ -0,0 +1,206 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_PROTOTYPE_H_
+#define _AVF_PROTOTYPE_H_
+
+#include "avf_type.h"
+#include "avf_alloc.h"
+#include "virtchnl.h"
+
+/* Prototypes for shared code functions that are not in
+ * the standard function pointer structures.  Most of these
+ * are needed even before init has completed and assist in
+ * the early SW and FW setup.
+ */
+
+/* adminq functions */
+enum avf_status_code avf_init_adminq(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_adminq(struct avf_hw *hw);
+enum avf_status_code avf_init_asq(struct avf_hw *hw);
+enum avf_status_code avf_init_arq(struct avf_hw *hw);
+enum avf_status_code avf_alloc_adminq_asq_ring(struct avf_hw *hw);
+enum avf_status_code avf_alloc_adminq_arq_ring(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_asq(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_arq(struct avf_hw *hw);
+u16 avf_clean_asq(struct avf_hw *hw);
+void avf_free_adminq_asq(struct avf_hw *hw);
+void avf_free_adminq_arq(struct avf_hw *hw);
+enum avf_status_code avf_validate_mac_addr(u8 *mac_addr);
+void avf_adminq_init_ring_data(struct avf_hw *hw);
+enum avf_status_code avf_clean_arq_element(struct avf_hw *hw,
+					     struct avf_arq_event_info *e,
+					     u16 *events_pending);
+enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
+				struct avf_aq_desc *desc,
+				void *buff, /* can be NULL */
+				u16  buff_size,
+				struct avf_asq_cmd_details *cmd_details);
+bool avf_asq_done(struct avf_hw *hw);
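
The adminq functions above implement the VF side of the PF/VF mailbox: avf_init_adminq() brings up the send (ASQ) and receive (ARQ) rings, avf_asq_send_command() posts descriptors on the ASQ (avf_aq_send_msg_to_pf() below wraps it for virtchnl messages), and avf_clean_arq_element() pops PF replies off the ARQ. A minimal polling sketch, illustrative only; the struct avf_arq_event_info field names (buf_len, msg_buf) are assumed from avf_adminq.h.

/* illustrative sketch, not taken from this patch: busy-poll the ARQ
 * for the next message from the PF
 */
static enum avf_status_code
example_poll_pf_reply(struct avf_hw *hw, u8 *buf, u16 buf_size)
{
	struct avf_arq_event_info event;
	enum avf_status_code err;
	u16 pending = 0;
	int retries = 100;

	event.buf_len = buf_size;
	event.msg_buf = buf;

	do {
		err = avf_clean_arq_element(hw, &event, &pending);
		if (err != AVF_ERR_ADMIN_QUEUE_NO_WORK)
			break;	/* got a reply or a hard error */
		avf_msec_delay(2);
	} while (--retries);

	return err;
}
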
+
+/* debug function for adminq */
+void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask,
+		   void *desc, void *buffer, u16 buf_len);
+
+void avf_idle_aq(struct avf_hw *hw);
+bool avf_check_asq_alive(struct avf_hw *hw);
+enum avf_status_code avf_aq_queue_shutdown(struct avf_hw *hw, bool unloading);
+
+enum avf_status_code avf_aq_get_rss_lut(struct avf_hw *hw, u16 seid,
+					  bool pf_lut, u8 *lut, u16 lut_size);
+enum avf_status_code avf_aq_set_rss_lut(struct avf_hw *hw, u16 seid,
+					  bool pf_lut, u8 *lut, u16 lut_size);
+enum avf_status_code avf_aq_get_rss_key(struct avf_hw *hw,
+				     u16 seid,
+				     struct avf_aqc_get_set_rss_key_data *key);
+enum avf_status_code avf_aq_set_rss_key(struct avf_hw *hw,
+				     u16 seid,
+				     struct avf_aqc_get_set_rss_key_data *key);
+const char *avf_aq_str(struct avf_hw *hw, enum avf_admin_queue_err aq_err);
+const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err);
+
+enum avf_status_code avf_set_mac_type(struct avf_hw *hw);
+
+extern struct avf_rx_ptype_decoded avf_ptype_lookup[];
+
+STATIC INLINE struct avf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
+{
+	return avf_ptype_lookup[ptype];
+}
+
+/* prototype for functions used for SW spinlocks */
+void avf_init_spinlock(struct avf_spinlock *sp);
+void avf_acquire_spinlock(struct avf_spinlock *sp);
+void avf_release_spinlock(struct avf_spinlock *sp);
+void avf_destroy_spinlock(struct avf_spinlock *sp);
+
+/* avf_common for VF drivers */
+void avf_parse_hw_config(struct avf_hw *hw,
+			     struct virtchnl_vf_resource *msg);
+enum avf_status_code avf_reset(struct avf_hw *hw);
+enum avf_status_code avf_aq_send_msg_to_pf(struct avf_hw *hw,
+				enum virtchnl_ops v_opcode,
+				enum avf_status_code v_retval,
+				u8 *msg, u16 msglen,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_set_filter_control(struct avf_hw *hw,
+				struct avf_filter_control_settings *settings);
+enum avf_status_code avf_aq_add_rem_control_packet_filter(struct avf_hw *hw,
+				u8 *mac_addr, u16 ethtype, u16 flags,
+				u16 vsi_seid, u16 queue, bool is_add,
+				struct avf_control_filter_stats *stats,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_debug_dump(struct avf_hw *hw, u8 cluster_id,
+				u8 table_id, u32 start_index, u16 buff_size,
+				void *buff, u16 *ret_buff_size,
+				u8 *ret_next_table, u32 *ret_next_index,
+				struct avf_asq_cmd_details *cmd_details);
+void avf_add_filter_to_drop_tx_flow_control_frames(struct avf_hw *hw,
+						    u16 vsi_seid);
+enum avf_status_code avf_aq_rx_ctl_read_register(struct avf_hw *hw,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr);
+enum avf_status_code avf_aq_rx_ctl_write_register(struct avf_hw *hw,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val);
+enum avf_status_code avf_aq_set_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+
+enum avf_status_code avf_aq_set_arp_proxy_config(struct avf_hw *hw,
+			struct avf_aqc_arp_proxy_data *proxy_config,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_set_ns_proxy_table_entry(struct avf_hw *hw,
+			struct avf_aqc_ns_proxy_data *ns_proxy_table_entry,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_set_clear_wol_filter(struct avf_hw *hw,
+			u8 filter_index,
+			struct avf_aqc_set_wol_filter_data *filter,
+			bool set_filter, bool no_wol_tco,
+			bool filter_valid, bool no_wol_tco_valid,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_get_wake_event_reason(struct avf_hw *hw,
+			u16 *wake_reason,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_clear_all_wol_filters(struct avf_hw *hw,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_read_phy_register_clause22(struct avf_hw *hw,
+					u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register_clause22(struct avf_hw *hw,
+					u16 reg, u8 phy_addr, u16 value);
+enum avf_status_code avf_read_phy_register_clause45(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register_clause45(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 value);
+enum avf_status_code avf_read_phy_register(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 value);
+u8 avf_get_phy_address(struct avf_hw *hw, u8 dev_num);
+enum avf_status_code avf_blink_phy_link_led(struct avf_hw *hw,
+					      u32 time, u32 interval);
+enum avf_status_code avf_aq_write_ddp(struct avf_hw *hw, void *buff,
+					u16 buff_size, u32 track_id,
+					u32 *error_offset, u32 *error_info,
+					struct avf_asq_cmd_details *
+					cmd_details);
+enum avf_status_code avf_aq_get_ddp_list(struct avf_hw *hw, void *buff,
+					   u16 buff_size, u8 flags,
+					   struct avf_asq_cmd_details *
+					   cmd_details);
+struct avf_generic_seg_header *
+avf_find_segment_in_package(u32 segment_type,
+			     struct avf_package_header *pkg_header);
+struct avf_profile_section_header *
+avf_find_section_in_profile(u32 section_type,
+			     struct avf_profile_segment *profile);
+enum avf_status_code
+avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *avf_seg,
+		   u32 track_id);
+enum avf_status_code
+avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *avf_seg,
+		      u32 track_id);
+enum avf_status_code
+avf_add_pinfo_to_list(struct avf_hw *hw,
+		       struct avf_profile_segment *profile,
+		       u8 *profile_info_sec, u32 track_id);
+#endif /* _AVF_PROTOTYPE_H_ */
diff --git a/drivers/net/avf/base/avf_register.h b/drivers/net/avf/base/avf_register.h
new file mode 100644
index 0000000..ba5a9f3
--- /dev/null
+++ b/drivers/net/avf/base/avf_register.h
@@ -0,0 +1,346 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_REGISTER_H_
+#define _AVF_REGISTER_H_
+
+
+#define AVFMSIX_PBA1(_i)          (0x00002000 + ((_i) * 4)) /* _i=0...19 */ /* Reset: VFLR */
+#define AVFMSIX_PBA1_MAX_INDEX    19
+#define AVFMSIX_PBA1_PENBIT_SHIFT 0
+#define AVFMSIX_PBA1_PENBIT_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_PBA1_PENBIT_SHIFT)
+#define AVFMSIX_TADD1(_i)              (0x00002100 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TADD1_MAX_INDEX        639
+#define AVFMSIX_TADD1_MSIXTADD10_SHIFT 0
+#define AVFMSIX_TADD1_MSIXTADD10_MASK  AVF_MASK(0x3, AVFMSIX_TADD1_MSIXTADD10_SHIFT)
+#define AVFMSIX_TADD1_MSIXTADD_SHIFT   2
+#define AVFMSIX_TADD1_MSIXTADD_MASK    AVF_MASK(0x3FFFFFFF, AVFMSIX_TADD1_MSIXTADD_SHIFT)
+#define AVFMSIX_TMSG1(_i)            (0x00002108 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TMSG1_MAX_INDEX      639
+#define AVFMSIX_TMSG1_MSIXTMSG_SHIFT 0
+#define AVFMSIX_TMSG1_MSIXTMSG_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TMSG1_MSIXTMSG_SHIFT)
+#define AVFMSIX_TUADD1(_i)             (0x00002104 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TUADD1_MAX_INDEX       639
+#define AVFMSIX_TUADD1_MSIXTUADD_SHIFT 0
+#define AVFMSIX_TUADD1_MSIXTUADD_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TUADD1_MSIXTUADD_SHIFT)
+#define AVFMSIX_TVCTRL1(_i)        (0x0000210C + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TVCTRL1_MAX_INDEX  639
+#define AVFMSIX_TVCTRL1_MASK_SHIFT 0
+#define AVFMSIX_TVCTRL1_MASK_MASK  AVF_MASK(0x1, AVFMSIX_TVCTRL1_MASK_SHIFT)
+#define AVF_ARQBAH1              0x00006000 /* Reset: EMPR */
+#define AVF_ARQBAH1_ARQBAH_SHIFT 0
+#define AVF_ARQBAH1_ARQBAH_MASK  AVF_MASK(0xFFFFFFFF, AVF_ARQBAH1_ARQBAH_SHIFT)
+#define AVF_ARQBAL1              0x00006C00 /* Reset: EMPR */
+#define AVF_ARQBAL1_ARQBAL_SHIFT 0
+#define AVF_ARQBAL1_ARQBAL_MASK  AVF_MASK(0xFFFFFFFF, AVF_ARQBAL1_ARQBAL_SHIFT)
+#define AVF_ARQH1            0x00007400 /* Reset: EMPR */
+#define AVF_ARQH1_ARQH_SHIFT 0
+#define AVF_ARQH1_ARQH_MASK  AVF_MASK(0x3FF, AVF_ARQH1_ARQH_SHIFT)
+#define AVF_ARQLEN1                 0x00008000 /* Reset: EMPR */
+#define AVF_ARQLEN1_ARQLEN_SHIFT    0
+#define AVF_ARQLEN1_ARQLEN_MASK     AVF_MASK(0x3FF, AVF_ARQLEN1_ARQLEN_SHIFT)
+#define AVF_ARQLEN1_ARQVFE_SHIFT    28
+#define AVF_ARQLEN1_ARQVFE_MASK     AVF_MASK(0x1, AVF_ARQLEN1_ARQVFE_SHIFT)
+#define AVF_ARQLEN1_ARQOVFL_SHIFT   29
+#define AVF_ARQLEN1_ARQOVFL_MASK    AVF_MASK(0x1, AVF_ARQLEN1_ARQOVFL_SHIFT)
+#define AVF_ARQLEN1_ARQCRIT_SHIFT   30
+#define AVF_ARQLEN1_ARQCRIT_MASK    AVF_MASK(0x1, AVF_ARQLEN1_ARQCRIT_SHIFT)
+#define AVF_ARQLEN1_ARQENABLE_SHIFT 31
+#define AVF_ARQLEN1_ARQENABLE_MASK  AVF_MASK(0x1, AVF_ARQLEN1_ARQENABLE_SHIFT)
+#define AVF_ARQT1            0x00007000 /* Reset: EMPR */
+#define AVF_ARQT1_ARQT_SHIFT 0
+#define AVF_ARQT1_ARQT_MASK  AVF_MASK(0x3FF, AVF_ARQT1_ARQT_SHIFT)
+#define AVF_ATQBAH1              0x00007800 /* Reset: EMPR */
+#define AVF_ATQBAH1_ATQBAH_SHIFT 0
+#define AVF_ATQBAH1_ATQBAH_MASK  AVF_MASK(0xFFFFFFFF, AVF_ATQBAH1_ATQBAH_SHIFT)
+#define AVF_ATQBAL1              0x00007C00 /* Reset: EMPR */
+#define AVF_ATQBAL1_ATQBAL_SHIFT 0
+#define AVF_ATQBAL1_ATQBAL_MASK  AVF_MASK(0xFFFFFFFF, AVF_ATQBAL1_ATQBAL_SHIFT)
+#define AVF_ATQH1            0x00006400 /* Reset: EMPR */
+#define AVF_ATQH1_ATQH_SHIFT 0
+#define AVF_ATQH1_ATQH_MASK  AVF_MASK(0x3FF, AVF_ATQH1_ATQH_SHIFT)
+#define AVF_ATQLEN1                 0x00006800 /* Reset: EMPR */
+#define AVF_ATQLEN1_ATQLEN_SHIFT    0
+#define AVF_ATQLEN1_ATQLEN_MASK     AVF_MASK(0x3FF, AVF_ATQLEN1_ATQLEN_SHIFT)
+#define AVF_ATQLEN1_ATQVFE_SHIFT    28
+#define AVF_ATQLEN1_ATQVFE_MASK     AVF_MASK(0x1, AVF_ATQLEN1_ATQVFE_SHIFT)
+#define AVF_ATQLEN1_ATQOVFL_SHIFT   29
+#define AVF_ATQLEN1_ATQOVFL_MASK    AVF_MASK(0x1, AVF_ATQLEN1_ATQOVFL_SHIFT)
+#define AVF_ATQLEN1_ATQCRIT_SHIFT   30
+#define AVF_ATQLEN1_ATQCRIT_MASK    AVF_MASK(0x1, AVF_ATQLEN1_ATQCRIT_SHIFT)
+#define AVF_ATQLEN1_ATQENABLE_SHIFT 31
+#define AVF_ATQLEN1_ATQENABLE_MASK  AVF_MASK(0x1, AVF_ATQLEN1_ATQENABLE_SHIFT)
+#define AVF_ATQT1            0x00008400 /* Reset: EMPR */
+#define AVF_ATQT1_ATQT_SHIFT 0
+#define AVF_ATQT1_ATQT_MASK  AVF_MASK(0x3FF, AVF_ATQT1_ATQT_SHIFT)
+#define AVFGEN_RSTAT                 0x00008800 /* Reset: VFR */
+#define AVFGEN_RSTAT_VFR_STATE_SHIFT 0
+#define AVFGEN_RSTAT_VFR_STATE_MASK  AVF_MASK(0x3, AVFGEN_RSTAT_VFR_STATE_SHIFT)
+#define AVFINT_DYN_CTL01                       0x00005C00 /* Reset: VFR */
+#define AVFINT_DYN_CTL01_INTENA_SHIFT          0
+#define AVFINT_DYN_CTL01_INTENA_MASK           AVF_MASK(0x1, AVFINT_DYN_CTL01_INTENA_SHIFT)
+#define AVFINT_DYN_CTL01_CLEARPBA_SHIFT        1
+#define AVFINT_DYN_CTL01_CLEARPBA_MASK         AVF_MASK(0x1, AVFINT_DYN_CTL01_CLEARPBA_SHIFT)
+#define AVFINT_DYN_CTL01_SWINT_TRIG_SHIFT      2
+#define AVFINT_DYN_CTL01_SWINT_TRIG_MASK       AVF_MASK(0x1, AVFINT_DYN_CTL01_SWINT_TRIG_SHIFT)
+#define AVFINT_DYN_CTL01_ITR_INDX_SHIFT        3
+#define AVFINT_DYN_CTL01_ITR_INDX_MASK         AVF_MASK(0x3, AVFINT_DYN_CTL01_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTL01_INTERVAL_SHIFT        5
+#define AVFINT_DYN_CTL01_INTERVAL_MASK         AVF_MASK(0xFFF, AVFINT_DYN_CTL01_INTERVAL_SHIFT)
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT 24
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_MASK  AVF_MASK(0x1, AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT)
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT     25
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_MASK      AVF_MASK(0x3, AVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTL01_INTENA_MSK_SHIFT      31
+#define AVFINT_DYN_CTL01_INTENA_MSK_MASK       AVF_MASK(0x1, AVFINT_DYN_CTL01_INTENA_MSK_SHIFT)
+#define AVFINT_DYN_CTLN1(_INTVF)               (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */
+#define AVFINT_DYN_CTLN1_MAX_INDEX             15
+#define AVFINT_DYN_CTLN1_INTENA_SHIFT          0
+#define AVFINT_DYN_CTLN1_INTENA_MASK           AVF_MASK(0x1, AVFINT_DYN_CTLN1_INTENA_SHIFT)
+#define AVFINT_DYN_CTLN1_CLEARPBA_SHIFT        1
+#define AVFINT_DYN_CTLN1_CLEARPBA_MASK         AVF_MASK(0x1, AVFINT_DYN_CTLN1_CLEARPBA_SHIFT)
+#define AVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT      2
+#define AVFINT_DYN_CTLN1_SWINT_TRIG_MASK       AVF_MASK(0x1, AVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT)
+#define AVFINT_DYN_CTLN1_ITR_INDX_SHIFT        3
+#define AVFINT_DYN_CTLN1_ITR_INDX_MASK         AVF_MASK(0x3, AVFINT_DYN_CTLN1_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTLN1_INTERVAL_SHIFT        5
+#define AVFINT_DYN_CTLN1_INTERVAL_MASK         AVF_MASK(0xFFF, AVFINT_DYN_CTLN1_INTERVAL_SHIFT)
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT 24
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_MASK  AVF_MASK(0x1, AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT)
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT     25
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_MASK      AVF_MASK(0x3, AVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTLN1_INTENA_MSK_SHIFT      31
+#define AVFINT_DYN_CTLN1_INTENA_MSK_MASK       AVF_MASK(0x1, AVFINT_DYN_CTLN1_INTENA_MSK_SHIFT)
+#define AVFINT_ICR0_ENA1                        0x00005000 /* Reset: CORER */
+#define AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT 25
+#define AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_MASK  AVF_MASK(0x1, AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT)
+#define AVFINT_ICR0_ENA1_ADMINQ_SHIFT           30
+#define AVFINT_ICR0_ENA1_ADMINQ_MASK            AVF_MASK(0x1, AVFINT_ICR0_ENA1_ADMINQ_SHIFT)
+#define AVFINT_ICR0_ENA1_RSVD_SHIFT             31
+#define AVFINT_ICR0_ENA1_RSVD_MASK              AVF_MASK(0x1, AVFINT_ICR0_ENA1_RSVD_SHIFT)
+#define AVFINT_ICR01                        0x00004800 /* Reset: CORER */
+#define AVFINT_ICR01_INTEVENT_SHIFT         0
+#define AVFINT_ICR01_INTEVENT_MASK          AVF_MASK(0x1, AVFINT_ICR01_INTEVENT_SHIFT)
+#define AVFINT_ICR01_QUEUE_0_SHIFT          1
+#define AVFINT_ICR01_QUEUE_0_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_0_SHIFT)
+#define AVFINT_ICR01_QUEUE_1_SHIFT          2
+#define AVFINT_ICR01_QUEUE_1_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_1_SHIFT)
+#define AVFINT_ICR01_QUEUE_2_SHIFT          3
+#define AVFINT_ICR01_QUEUE_2_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_2_SHIFT)
+#define AVFINT_ICR01_QUEUE_3_SHIFT          4
+#define AVFINT_ICR01_QUEUE_3_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_3_SHIFT)
+#define AVFINT_ICR01_LINK_STAT_CHANGE_SHIFT 25
+#define AVFINT_ICR01_LINK_STAT_CHANGE_MASK  AVF_MASK(0x1, AVFINT_ICR01_LINK_STAT_CHANGE_SHIFT)
+#define AVFINT_ICR01_ADMINQ_SHIFT           30
+#define AVFINT_ICR01_ADMINQ_MASK            AVF_MASK(0x1, AVFINT_ICR01_ADMINQ_SHIFT)
+#define AVFINT_ICR01_SWINT_SHIFT            31
+#define AVFINT_ICR01_SWINT_MASK             AVF_MASK(0x1, AVFINT_ICR01_SWINT_SHIFT)
+#define AVFINT_ITR01(_i)            (0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset: VFR */
+#define AVFINT_ITR01_MAX_INDEX      2
+#define AVFINT_ITR01_INTERVAL_SHIFT 0
+#define AVFINT_ITR01_INTERVAL_MASK  AVF_MASK(0xFFF, AVFINT_ITR01_INTERVAL_SHIFT)
+#define AVFINT_ITRN1(_i, _INTVF)     (0x00002800 + ((_i) * 64 + (_INTVF) * 4)) /* _i=0...2, _INTVF=0...15 */ /* Reset: VFR */
+#define AVFINT_ITRN1_MAX_INDEX      2
+#define AVFINT_ITRN1_INTERVAL_SHIFT 0
+#define AVFINT_ITRN1_INTERVAL_MASK  AVF_MASK(0xFFF, AVFINT_ITRN1_INTERVAL_SHIFT)
+#define AVFINT_STAT_CTL01                      0x00005400 /* Reset: CORER */
+#define AVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT 2
+#define AVFINT_STAT_CTL01_OTHER_ITR_INDX_MASK  AVF_MASK(0x3, AVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT)
+#define AVF_QRX_TAIL1(_Q)        (0x00002000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define AVF_QRX_TAIL1_MAX_INDEX  15
+#define AVF_QRX_TAIL1_TAIL_SHIFT 0
+#define AVF_QRX_TAIL1_TAIL_MASK  AVF_MASK(0x1FFF, AVF_QRX_TAIL1_TAIL_SHIFT)
+#define AVF_QTX_TAIL1(_Q)        (0x00000000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: PFR */
+#define AVF_QTX_TAIL1_MAX_INDEX  15
+#define AVF_QTX_TAIL1_TAIL_SHIFT 0
+#define AVF_QTX_TAIL1_TAIL_MASK  AVF_MASK(0x1FFF, AVF_QTX_TAIL1_TAIL_SHIFT)
+#define AVFMSIX_PBA              0x00002000 /* Reset: VFLR */
+#define AVFMSIX_PBA_PENBIT_SHIFT 0
+#define AVFMSIX_PBA_PENBIT_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_PBA_PENBIT_SHIFT)
+#define AVFMSIX_TADD(_i)              (0x00000000 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TADD_MAX_INDEX        16
+#define AVFMSIX_TADD_MSIXTADD10_SHIFT 0
+#define AVFMSIX_TADD_MSIXTADD10_MASK  AVF_MASK(0x3, AVFMSIX_TADD_MSIXTADD10_SHIFT)
+#define AVFMSIX_TADD_MSIXTADD_SHIFT   2
+#define AVFMSIX_TADD_MSIXTADD_MASK    AVF_MASK(0x3FFFFFFF, AVFMSIX_TADD_MSIXTADD_SHIFT)
+#define AVFMSIX_TMSG(_i)            (0x00000008 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TMSG_MAX_INDEX      16
+#define AVFMSIX_TMSG_MSIXTMSG_SHIFT 0
+#define AVFMSIX_TMSG_MSIXTMSG_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TMSG_MSIXTMSG_SHIFT)
+#define AVFMSIX_TUADD(_i)             (0x00000004 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TUADD_MAX_INDEX       16
+#define AVFMSIX_TUADD_MSIXTUADD_SHIFT 0
+#define AVFMSIX_TUADD_MSIXTUADD_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TUADD_MSIXTUADD_SHIFT)
+#define AVFMSIX_TVCTRL(_i)        (0x0000000C + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TVCTRL_MAX_INDEX  16
+#define AVFMSIX_TVCTRL_MASK_SHIFT 0
+#define AVFMSIX_TVCTRL_MASK_MASK  AVF_MASK(0x1, AVFMSIX_TVCTRL_MASK_SHIFT)
+#define AVFCM_PE_ERRDATA                  0x0000DC00 /* Reset: VFR */
+#define AVFCM_PE_ERRDATA_ERROR_CODE_SHIFT 0
+#define AVFCM_PE_ERRDATA_ERROR_CODE_MASK  AVF_MASK(0xF, AVFCM_PE_ERRDATA_ERROR_CODE_SHIFT)
+#define AVFCM_PE_ERRDATA_Q_TYPE_SHIFT     4
+#define AVFCM_PE_ERRDATA_Q_TYPE_MASK      AVF_MASK(0x7, AVFCM_PE_ERRDATA_Q_TYPE_SHIFT)
+#define AVFCM_PE_ERRDATA_Q_NUM_SHIFT      8
+#define AVFCM_PE_ERRDATA_Q_NUM_MASK       AVF_MASK(0x3FFFF, AVFCM_PE_ERRDATA_Q_NUM_SHIFT)
+#define AVFCM_PE_ERRINFO                     0x0000D800 /* Reset: VFR */
+#define AVFCM_PE_ERRINFO_ERROR_VALID_SHIFT   0
+#define AVFCM_PE_ERRINFO_ERROR_VALID_MASK    AVF_MASK(0x1, AVFCM_PE_ERRINFO_ERROR_VALID_SHIFT)
+#define AVFCM_PE_ERRINFO_ERROR_INST_SHIFT    4
+#define AVFCM_PE_ERRINFO_ERROR_INST_MASK     AVF_MASK(0x7, AVFCM_PE_ERRINFO_ERROR_INST_SHIFT)
+#define AVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT 8
+#define AVFCM_PE_ERRINFO_DBL_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT)
+#define AVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT 16
+#define AVFCM_PE_ERRINFO_RLU_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT)
+#define AVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT 24
+#define AVFCM_PE_ERRINFO_RLS_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT)
+#define AVFQF_HENA(_i)             (0x0000C400 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */
+#define AVFQF_HENA_MAX_INDEX       1
+#define AVFQF_HENA_PTYPE_ENA_SHIFT 0
+#define AVFQF_HENA_PTYPE_ENA_MASK  AVF_MASK(0xFFFFFFFF, AVFQF_HENA_PTYPE_ENA_SHIFT)
+#define AVFQF_HKEY(_i)         (0x0000CC00 + ((_i) * 4)) /* _i=0...12 */ /* Reset: CORER */
+#define AVFQF_HKEY_MAX_INDEX   12
+#define AVFQF_HKEY_KEY_0_SHIFT 0
+#define AVFQF_HKEY_KEY_0_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_0_SHIFT)
+#define AVFQF_HKEY_KEY_1_SHIFT 8
+#define AVFQF_HKEY_KEY_1_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_1_SHIFT)
+#define AVFQF_HKEY_KEY_2_SHIFT 16
+#define AVFQF_HKEY_KEY_2_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_2_SHIFT)
+#define AVFQF_HKEY_KEY_3_SHIFT 24
+#define AVFQF_HKEY_KEY_3_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_3_SHIFT)
+#define AVFQF_HLUT(_i)        (0x0000D000 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define AVFQF_HLUT_MAX_INDEX  15
+#define AVFQF_HLUT_LUT0_SHIFT 0
+#define AVFQF_HLUT_LUT0_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT0_SHIFT)
+#define AVFQF_HLUT_LUT1_SHIFT 8
+#define AVFQF_HLUT_LUT1_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT1_SHIFT)
+#define AVFQF_HLUT_LUT2_SHIFT 16
+#define AVFQF_HLUT_LUT2_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT2_SHIFT)
+#define AVFQF_HLUT_LUT3_SHIFT 24
+#define AVFQF_HLUT_LUT3_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT3_SHIFT)
+#define AVFQF_HREGION(_i)                  (0x0000D400 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */
+#define AVFQF_HREGION_MAX_INDEX            7
+#define AVFQF_HREGION_OVERRIDE_ENA_0_SHIFT 0
+#define AVFQF_HREGION_OVERRIDE_ENA_0_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_0_SHIFT)
+#define AVFQF_HREGION_REGION_0_SHIFT       1
+#define AVFQF_HREGION_REGION_0_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_0_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_1_SHIFT 4
+#define AVFQF_HREGION_OVERRIDE_ENA_1_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_1_SHIFT)
+#define AVFQF_HREGION_REGION_1_SHIFT       5
+#define AVFQF_HREGION_REGION_1_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_1_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_2_SHIFT 8
+#define AVFQF_HREGION_OVERRIDE_ENA_2_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_2_SHIFT)
+#define AVFQF_HREGION_REGION_2_SHIFT       9
+#define AVFQF_HREGION_REGION_2_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_2_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_3_SHIFT 12
+#define AVFQF_HREGION_OVERRIDE_ENA_3_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_3_SHIFT)
+#define AVFQF_HREGION_REGION_3_SHIFT       13
+#define AVFQF_HREGION_REGION_3_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_3_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_4_SHIFT 16
+#define AVFQF_HREGION_OVERRIDE_ENA_4_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_4_SHIFT)
+#define AVFQF_HREGION_REGION_4_SHIFT       17
+#define AVFQF_HREGION_REGION_4_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_4_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_5_SHIFT 20
+#define AVFQF_HREGION_OVERRIDE_ENA_5_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_5_SHIFT)
+#define AVFQF_HREGION_REGION_5_SHIFT       21
+#define AVFQF_HREGION_REGION_5_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_5_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_6_SHIFT 24
+#define AVFQF_HREGION_OVERRIDE_ENA_6_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_6_SHIFT)
+#define AVFQF_HREGION_REGION_6_SHIFT       25
+#define AVFQF_HREGION_REGION_6_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_6_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_7_SHIFT 28
+#define AVFQF_HREGION_OVERRIDE_ENA_7_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_7_SHIFT)
+#define AVFQF_HREGION_REGION_7_SHIFT       29
+#define AVFQF_HREGION_REGION_7_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_7_SHIFT)
+
+#define AVFINT_DYN_CTL01_WB_ON_ITR_SHIFT       30
+#define AVFINT_DYN_CTL01_WB_ON_ITR_MASK        AVF_MASK(0x1, AVFINT_DYN_CTL01_WB_ON_ITR_SHIFT)
+#define AVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT       30
+#define AVFINT_DYN_CTLN1_WB_ON_ITR_MASK        AVF_MASK(0x1, AVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT)
+#define AVFPE_AEQALLOC1               0x0000A400 /* Reset: VFR */
+#define AVFPE_AEQALLOC1_AECOUNT_SHIFT 0
+#define AVFPE_AEQALLOC1_AECOUNT_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_AEQALLOC1_AECOUNT_SHIFT)
+#define AVFPE_CCQPHIGH1                  0x00009800 /* Reset: VFR */
+#define AVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT 0
+#define AVFPE_CCQPHIGH1_PECCQPHIGH_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT)
+#define AVFPE_CCQPLOW1                 0x0000AC00 /* Reset: VFR */
+#define AVFPE_CCQPLOW1_PECCQPLOW_SHIFT 0
+#define AVFPE_CCQPLOW1_PECCQPLOW_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_CCQPLOW1_PECCQPLOW_SHIFT)
+#define AVFPE_CCQPSTATUS1                   0x0000B800 /* Reset: VFR */
+#define AVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT   0
+#define AVFPE_CCQPSTATUS1_CCQP_DONE_MASK    AVF_MASK(0x1, AVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT)
+#define AVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT 4
+#define AVFPE_CCQPSTATUS1_HMC_PROFILE_MASK  AVF_MASK(0x7, AVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT)
+#define AVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT 16
+#define AVFPE_CCQPSTATUS1_RDMA_EN_VFS_MASK  AVF_MASK(0x3F, AVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT)
+#define AVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT    31
+#define AVFPE_CCQPSTATUS1_CCQP_ERR_MASK     AVF_MASK(0x1, AVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT)
+#define AVFPE_CQACK1              0x0000B000 /* Reset: VFR */
+#define AVFPE_CQACK1_PECQID_SHIFT 0
+#define AVFPE_CQACK1_PECQID_MASK  AVF_MASK(0x1FFFF, AVFPE_CQACK1_PECQID_SHIFT)
+#define AVFPE_CQARM1              0x0000B400 /* Reset: VFR */
+#define AVFPE_CQARM1_PECQID_SHIFT 0
+#define AVFPE_CQARM1_PECQID_MASK  AVF_MASK(0x1FFFF, AVFPE_CQARM1_PECQID_SHIFT)
+#define AVFPE_CQPDB1              0x0000BC00 /* Reset: VFR */
+#define AVFPE_CQPDB1_WQHEAD_SHIFT 0
+#define AVFPE_CQPDB1_WQHEAD_MASK  AVF_MASK(0x7FF, AVFPE_CQPDB1_WQHEAD_SHIFT)
+#define AVFPE_CQPERRCODES1                      0x00009C00 /* Reset: VFR */
+#define AVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT 0
+#define AVFPE_CQPERRCODES1_CQP_MINOR_CODE_MASK  AVF_MASK(0xFFFF, AVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT)
+#define AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT 16
+#define AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_MASK  AVF_MASK(0xFFFF, AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT)
+#define AVFPE_CQPTAIL1                  0x0000A000 /* Reset: VFR */
+#define AVFPE_CQPTAIL1_WQTAIL_SHIFT     0
+#define AVFPE_CQPTAIL1_WQTAIL_MASK      AVF_MASK(0x7FF, AVFPE_CQPTAIL1_WQTAIL_SHIFT)
+#define AVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT 31
+#define AVFPE_CQPTAIL1_CQP_OP_ERR_MASK  AVF_MASK(0x1, AVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT)
+#define AVFPE_IPCONFIG01                        0x00008C00 /* Reset: VFR */
+#define AVFPE_IPCONFIG01_PEIPID_SHIFT           0
+#define AVFPE_IPCONFIG01_PEIPID_MASK            AVF_MASK(0xFFFF, AVFPE_IPCONFIG01_PEIPID_SHIFT)
+#define AVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT 16
+#define AVFPE_IPCONFIG01_USEENTIREIDRANGE_MASK  AVF_MASK(0x1, AVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT)
+#define AVFPE_MRTEIDXMASK1                       0x00009000 /* Reset: VFR */
+#define AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT 0
+#define AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_MASK  AVF_MASK(0x1F, AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT)
+#define AVFPE_RCVUNEXPECTEDERROR1                        0x00009400 /* Reset: VFR */
+#define AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT 0
+#define AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_MASK  AVF_MASK(0xFFFFFF, AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT)
+#define AVFPE_TCPNOWTIMER1               0x0000A800 /* Reset: VFR */
+#define AVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT 0
+#define AVFPE_TCPNOWTIMER1_TCP_NOW_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT)
+#define AVFPE_WQEALLOC1                      0x0000C000 /* Reset: VFR */
+#define AVFPE_WQEALLOC1_PEQPID_SHIFT         0
+#define AVFPE_WQEALLOC1_PEQPID_MASK          AVF_MASK(0x3FFFF, AVFPE_WQEALLOC1_PEQPID_SHIFT)
+#define AVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT 20
+#define AVFPE_WQEALLOC1_WQE_DESC_INDEX_MASK  AVF_MASK(0xFFF, AVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT)
+
+#endif /* _AVF_REGISTER_H_ */
diff --git a/drivers/net/avf/base/avf_status.h b/drivers/net/avf/base/avf_status.h
new file mode 100644
index 0000000..644c16d
--- /dev/null
+++ b/drivers/net/avf/base/avf_status.h
@@ -0,0 +1,107 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_STATUS_H_
+#define _AVF_STATUS_H_
+
+/* Error Codes */
+enum avf_status_code {
+	AVF_SUCCESS				= 0,
+	AVF_ERR_NVM				= -1,
+	AVF_ERR_NVM_CHECKSUM			= -2,
+	AVF_ERR_PHY				= -3,
+	AVF_ERR_CONFIG				= -4,
+	AVF_ERR_PARAM				= -5,
+	AVF_ERR_MAC_TYPE			= -6,
+	AVF_ERR_UNKNOWN_PHY			= -7,
+	AVF_ERR_LINK_SETUP			= -8,
+	AVF_ERR_ADAPTER_STOPPED		= -9,
+	AVF_ERR_INVALID_MAC_ADDR		= -10,
+	AVF_ERR_DEVICE_NOT_SUPPORTED		= -11,
+	AVF_ERR_MASTER_REQUESTS_PENDING	= -12,
+	AVF_ERR_INVALID_LINK_SETTINGS		= -13,
+	AVF_ERR_AUTONEG_NOT_COMPLETE		= -14,
+	AVF_ERR_RESET_FAILED			= -15,
+	AVF_ERR_SWFW_SYNC			= -16,
+	AVF_ERR_NO_AVAILABLE_VSI		= -17,
+	AVF_ERR_NO_MEMORY			= -18,
+	AVF_ERR_BAD_PTR			= -19,
+	AVF_ERR_RING_FULL			= -20,
+	AVF_ERR_INVALID_PD_ID			= -21,
+	AVF_ERR_INVALID_QP_ID			= -22,
+	AVF_ERR_INVALID_CQ_ID			= -23,
+	AVF_ERR_INVALID_CEQ_ID			= -24,
+	AVF_ERR_INVALID_AEQ_ID			= -25,
+	AVF_ERR_INVALID_SIZE			= -26,
+	AVF_ERR_INVALID_ARP_INDEX		= -27,
+	AVF_ERR_INVALID_FPM_FUNC_ID		= -28,
+	AVF_ERR_QP_INVALID_MSG_SIZE		= -29,
+	AVF_ERR_QP_TOOMANY_WRS_POSTED		= -30,
+	AVF_ERR_INVALID_FRAG_COUNT		= -31,
+	AVF_ERR_QUEUE_EMPTY			= -32,
+	AVF_ERR_INVALID_ALIGNMENT		= -33,
+	AVF_ERR_FLUSHED_QUEUE			= -34,
+	AVF_ERR_INVALID_PUSH_PAGE_INDEX	= -35,
+	AVF_ERR_INVALID_IMM_DATA_SIZE		= -36,
+	AVF_ERR_TIMEOUT			= -37,
+	AVF_ERR_OPCODE_MISMATCH		= -38,
+	AVF_ERR_CQP_COMPL_ERROR		= -39,
+	AVF_ERR_INVALID_VF_ID			= -40,
+	AVF_ERR_INVALID_HMCFN_ID		= -41,
+	AVF_ERR_BACKING_PAGE_ERROR		= -42,
+	AVF_ERR_NO_PBLCHUNKS_AVAILABLE		= -43,
+	AVF_ERR_INVALID_PBLE_INDEX		= -44,
+	AVF_ERR_INVALID_SD_INDEX		= -45,
+	AVF_ERR_INVALID_PAGE_DESC_INDEX	= -46,
+	AVF_ERR_INVALID_SD_TYPE		= -47,
+	AVF_ERR_MEMCPY_FAILED			= -48,
+	AVF_ERR_INVALID_HMC_OBJ_INDEX		= -49,
+	AVF_ERR_INVALID_HMC_OBJ_COUNT		= -50,
+	AVF_ERR_INVALID_SRQ_ARM_LIMIT		= -51,
+	AVF_ERR_SRQ_ENABLED			= -52,
+	AVF_ERR_ADMIN_QUEUE_ERROR		= -53,
+	AVF_ERR_ADMIN_QUEUE_TIMEOUT		= -54,
+	AVF_ERR_BUF_TOO_SHORT			= -55,
+	AVF_ERR_ADMIN_QUEUE_FULL		= -56,
+	AVF_ERR_ADMIN_QUEUE_NO_WORK		= -57,
+	AVF_ERR_BAD_IWARP_CQE			= -58,
+	AVF_ERR_NVM_BLANK_MODE			= -59,
+	AVF_ERR_NOT_IMPLEMENTED		= -60,
+	AVF_ERR_PE_DOORBELL_NOT_ENABLED	= -61,
+	AVF_ERR_DIAG_TEST_FAILED		= -62,
+	AVF_ERR_NOT_READY			= -63,
+	AVF_NOT_SUPPORTED			= -64,
+	AVF_ERR_FIRMWARE_API_VERSION		= -65,
+};
+
+#endif /* _AVF_STATUS_H_ */
diff --git a/drivers/net/avf/base/avf_type.h b/drivers/net/avf/base/avf_type.h
new file mode 100644
index 0000000..36ad76d
--- /dev/null
+++ b/drivers/net/avf/base/avf_type.h
@@ -0,0 +1,1990 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_TYPE_H_
+#define _AVF_TYPE_H_
+
+#include "avf_status.h"
+#include "avf_osdep.h"
+#include "avf_register.h"
+#include "avf_adminq.h"
+#include "avf_hmc.h"
+#include "avf_lan_hmc.h"
+#include "avf_devids.h"
+
+#define UNREFERENCED_XPARAMETER
+#define UNREFERENCED_1PARAMETER(_p) (_p);
+#define UNREFERENCED_2PARAMETER(_p, _q) (_p); (_q);
+#define UNREFERENCED_3PARAMETER(_p, _q, _r) (_p); (_q); (_r);
+#define UNREFERENCED_4PARAMETER(_p, _q, _r, _s) (_p); (_q); (_r); (_s);
+#define UNREFERENCED_5PARAMETER(_p, _q, _r, _s, _t) (_p); (_q); (_r); (_s); (_t);
+
+#ifndef LINUX_MACROS
+#ifndef BIT
+#define BIT(a) (1UL << (a))
+#endif /* BIT */
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif /* BIT_ULL */
+#endif /* LINUX_MACROS */
+
+#ifndef AVF_MASK
+/* AVF_MASK is a macro used on 32 bit registers */
+#define AVF_MASK(mask, shift) (mask << shift)
+#endif
+
+#define AVF_MAX_PF			16
+#define AVF_MAX_PF_VSI			64
+#define AVF_MAX_PF_QP			128
+#define AVF_MAX_VSI_QP			16
+#define AVF_MAX_VF_VSI			3
+#define AVF_MAX_CHAINED_RX_BUFFERS	5
+#define AVF_MAX_PF_UDP_OFFLOAD_PORTS	16
+
+/* something less than 1 minute */
+#define AVF_HEARTBEAT_TIMEOUT		(HZ * 50)
+
+/* Max default timeout in ms */
+#define AVF_MAX_NVM_TIMEOUT		18000
+
+/* Check whether address is multicast. */
+#define AVF_IS_MULTICAST(address) (bool)(((u8 *)(address))[0] & ((u8)0x01))
+
+/* Check whether an address is broadcast. */
+#define AVF_IS_BROADCAST(address)	\
+	((((u8 *)(address))[0] == ((u8)0xff)) && \
+	(((u8 *)(address))[1] == ((u8)0xff)))
+
+/* Switch from ms to the 1usec global time (this is the GTIME resolution) */
+#define AVF_MS_TO_GTIME(time)		((time) * 1000)
+
+/* forward declaration */
+struct avf_hw;
+typedef void (*AVF_ADMINQ_CALLBACK)(struct avf_hw *, struct avf_aq_desc *);
+
+#ifndef ETH_ALEN
+#define ETH_ALEN	6
+#endif
+/* Data type manipulation macros. */
+#define AVF_HI_DWORD(x)	((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define AVF_LO_DWORD(x)	((u32)((x) & 0xFFFFFFFF))
+
+#define AVF_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
+#define AVF_LO_WORD(x)		((u16)((x) & 0xFFFF))
+
+#define AVF_HI_BYTE(x)		((u8)(((x) >> 8) & 0xFF))
+#define AVF_LO_BYTE(x)		((u8)((x) & 0xFF))
+
+/* Number of Transmit Descriptors must be a multiple of 8. */
+#define AVF_REQ_TX_DESCRIPTOR_MULTIPLE	8
+/* Number of Receive Descriptors must be a multiple of 32 if
+ * the number of descriptors is greater than 32.
+ */
+#define AVF_REQ_RX_DESCRIPTOR_MULTIPLE	32
+
+#define AVF_DESC_UNUSED(R)	\
+	((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
+	(R)->next_to_clean - (R)->next_to_use - 1)
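
For example, on a 512-entry ring with next_to_use = 10 and next_to_clean = 5 the macro yields 512 + 5 - 10 - 1 = 506 unused descriptors; with next_to_clean = 20 and next_to_use = 10 it yields 20 - 10 - 1 = 9.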
+
+/* bitfields for Tx queue mapping in QTX_CTL */
+#define AVF_QTX_CTL_VF_QUEUE	0x0
+#define AVF_QTX_CTL_VM_QUEUE	0x1
+#define AVF_QTX_CTL_PF_QUEUE	0x2
+
+/* debug masks - set these bits in hw->debug_mask to control output */
+enum avf_debug_mask {
+	AVF_DEBUG_INIT			= 0x00000001,
+	AVF_DEBUG_RELEASE		= 0x00000002,
+
+	AVF_DEBUG_LINK			= 0x00000010,
+	AVF_DEBUG_PHY			= 0x00000020,
+	AVF_DEBUG_HMC			= 0x00000040,
+	AVF_DEBUG_NVM			= 0x00000080,
+	AVF_DEBUG_LAN			= 0x00000100,
+	AVF_DEBUG_FLOW			= 0x00000200,
+	AVF_DEBUG_DCB			= 0x00000400,
+	AVF_DEBUG_DIAG			= 0x00000800,
+	AVF_DEBUG_FD			= 0x00001000,
+	AVF_DEBUG_PACKAGE		= 0x00002000,
+
+	AVF_DEBUG_AQ_MESSAGE		= 0x01000000,
+	AVF_DEBUG_AQ_DESCRIPTOR	= 0x02000000,
+	AVF_DEBUG_AQ_DESC_BUFFER	= 0x04000000,
+	AVF_DEBUG_AQ_COMMAND		= 0x06000000,
+	AVF_DEBUG_AQ			= 0x0F000000,
+
+	AVF_DEBUG_USER			= 0xF0000000,
+
+	AVF_DEBUG_ALL			= 0xFFFFFFFF
+};
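+
+/* Usage sketch (illustrative only): initialization and admin queue traces
+ * could be enabled with, e.g.
+ *	hw->debug_mask = AVF_DEBUG_INIT | AVF_DEBUG_AQ;
+ */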
+
+/* PCI Bus Info */
+#define AVF_PCI_LINK_STATUS		0xB2
+#define AVF_PCI_LINK_WIDTH		0x3F0
+#define AVF_PCI_LINK_WIDTH_1		0x10
+#define AVF_PCI_LINK_WIDTH_2		0x20
+#define AVF_PCI_LINK_WIDTH_4		0x40
+#define AVF_PCI_LINK_WIDTH_8		0x80
+#define AVF_PCI_LINK_SPEED		0xF
+#define AVF_PCI_LINK_SPEED_2500	0x1
+#define AVF_PCI_LINK_SPEED_5000	0x2
+#define AVF_PCI_LINK_SPEED_8000	0x3
+
+#define AVF_MDIO_CLAUSE22_STCODE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_STCODE_SHIFT)
+#define AVF_MDIO_CLAUSE22_OPCODE_WRITE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE22_OPCODE_READ_MASK	AVF_MASK(2, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+
+#define AVF_MDIO_CLAUSE45_STCODE_MASK	AVF_MASK(0, \
+						  AVF_GLGEN_MSCA_STCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_ADDRESS_MASK	AVF_MASK(0, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_WRITE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_READ_INC_ADDR_MASK	AVF_MASK(2, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_READ_MASK	AVF_MASK(3, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+
+#define AVF_PHY_COM_REG_PAGE			0x1E
+#define AVF_PHY_LED_LINK_MODE_MASK		0xF0
+#define AVF_PHY_LED_MANUAL_ON			0x100
+#define AVF_PHY_LED_PROV_REG_1			0xC430
+#define AVF_PHY_LED_MODE_MASK			0xFFFF
+#define AVF_PHY_LED_MODE_ORIG			0x80000000
+
+/* Memory types */
+enum avf_memset_type {
+	AVF_NONDMA_MEM = 0,
+	AVF_DMA_MEM
+};
+
+/* Memcpy types */
+enum avf_memcpy_type {
+	AVF_NONDMA_TO_NONDMA = 0,
+	AVF_NONDMA_TO_DMA,
+	AVF_DMA_TO_DMA,
+	AVF_DMA_TO_NONDMA
+};
+
+/* These are structs for managing the hardware information and the operations.
+ * The structures of function pointers are filled out at init time when we
+ * know for sure exactly which hardware we're working with.  This gives us the
+ * flexibility of using the same main driver code but adapting to slightly
+ * different hardware needs as new parts are developed.  For this architecture,
+ * the Firmware and AdminQ are intended to insulate the driver from most of the
+ * future changes, but these structures will also do part of the job.
+ */
+enum avf_mac_type {
+	AVF_MAC_UNKNOWN = 0,
+	AVF_MAC_XL710,
+	AVF_MAC_VF,
+	AVF_MAC_X722,
+	AVF_MAC_X722_VF,
+	AVF_MAC_GENERIC,
+};
+
+enum avf_media_type {
+	AVF_MEDIA_TYPE_UNKNOWN = 0,
+	AVF_MEDIA_TYPE_FIBER,
+	AVF_MEDIA_TYPE_BASET,
+	AVF_MEDIA_TYPE_BACKPLANE,
+	AVF_MEDIA_TYPE_CX4,
+	AVF_MEDIA_TYPE_DA,
+	AVF_MEDIA_TYPE_VIRTUAL
+};
+
+enum avf_fc_mode {
+	AVF_FC_NONE = 0,
+	AVF_FC_RX_PAUSE,
+	AVF_FC_TX_PAUSE,
+	AVF_FC_FULL,
+	AVF_FC_PFC,
+	AVF_FC_DEFAULT
+};
+
+enum avf_set_fc_aq_failures {
+	AVF_SET_FC_AQ_FAIL_NONE = 0,
+	AVF_SET_FC_AQ_FAIL_GET = 1,
+	AVF_SET_FC_AQ_FAIL_SET = 2,
+	AVF_SET_FC_AQ_FAIL_UPDATE = 4,
+	AVF_SET_FC_AQ_FAIL_SET_UPDATE = 6
+};
+
+enum avf_vsi_type {
+	AVF_VSI_MAIN	= 0,
+	AVF_VSI_VMDQ1	= 1,
+	AVF_VSI_VMDQ2	= 2,
+	AVF_VSI_CTRL	= 3,
+	AVF_VSI_FCOE	= 4,
+	AVF_VSI_MIRROR	= 5,
+	AVF_VSI_SRIOV	= 6,
+	AVF_VSI_FDIR	= 7,
+	AVF_VSI_TYPE_UNKNOWN
+};
+
+enum avf_queue_type {
+	AVF_QUEUE_TYPE_RX = 0,
+	AVF_QUEUE_TYPE_TX,
+	AVF_QUEUE_TYPE_PE_CEQ,
+	AVF_QUEUE_TYPE_UNKNOWN
+};
+
+struct avf_link_status {
+	enum avf_aq_phy_type phy_type;
+	enum avf_aq_link_speed link_speed;
+	u8 link_info;
+	u8 an_info;
+	u8 req_fec_info;
+	u8 fec_info;
+	u8 ext_info;
+	u8 loopback;
+	/* is Link Status Event notification to SW enabled */
+	bool lse_enable;
+	u16 max_frame_size;
+	bool crc_enable;
+	u8 pacing;
+	u8 requested_speeds;
+	u8 module_type[3];
+	/* 1st byte: module identifier */
+#define AVF_MODULE_TYPE_SFP		0x03
+#define AVF_MODULE_TYPE_QSFP		0x0D
+	/* 2nd byte: ethernet compliance codes for 10/40G */
+#define AVF_MODULE_TYPE_40G_ACTIVE	0x01
+#define AVF_MODULE_TYPE_40G_LR4	0x02
+#define AVF_MODULE_TYPE_40G_SR4	0x04
+#define AVF_MODULE_TYPE_40G_CR4	0x08
+#define AVF_MODULE_TYPE_10G_BASE_SR	0x10
+#define AVF_MODULE_TYPE_10G_BASE_LR	0x20
+#define AVF_MODULE_TYPE_10G_BASE_LRM	0x40
+#define AVF_MODULE_TYPE_10G_BASE_ER	0x80
+	/* 3rd byte: ethernet compliance codes for 1G */
+#define AVF_MODULE_TYPE_1000BASE_SX	0x01
+#define AVF_MODULE_TYPE_1000BASE_LX	0x02
+#define AVF_MODULE_TYPE_1000BASE_CX	0x04
+#define AVF_MODULE_TYPE_1000BASE_T	0x08
+};
+
+struct avf_phy_info {
+	struct avf_link_status link_info;
+	struct avf_link_status link_info_old;
+	bool get_link_info;
+	enum avf_media_type media_type;
+	/* all the phy types the NVM is capable of */
+	u64 phy_types;
+};
+
+#define AVF_CAP_PHY_TYPE_SGMII BIT_ULL(AVF_PHY_TYPE_SGMII)
+#define AVF_CAP_PHY_TYPE_1000BASE_KX BIT_ULL(AVF_PHY_TYPE_1000BASE_KX)
+#define AVF_CAP_PHY_TYPE_10GBASE_KX4 BIT_ULL(AVF_PHY_TYPE_10GBASE_KX4)
+#define AVF_CAP_PHY_TYPE_10GBASE_KR BIT_ULL(AVF_PHY_TYPE_10GBASE_KR)
+#define AVF_CAP_PHY_TYPE_40GBASE_KR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_KR4)
+#define AVF_CAP_PHY_TYPE_XAUI BIT_ULL(AVF_PHY_TYPE_XAUI)
+#define AVF_CAP_PHY_TYPE_XFI BIT_ULL(AVF_PHY_TYPE_XFI)
+#define AVF_CAP_PHY_TYPE_SFI BIT_ULL(AVF_PHY_TYPE_SFI)
+#define AVF_CAP_PHY_TYPE_XLAUI BIT_ULL(AVF_PHY_TYPE_XLAUI)
+#define AVF_CAP_PHY_TYPE_XLPPI BIT_ULL(AVF_PHY_TYPE_XLPPI)
+#define AVF_CAP_PHY_TYPE_40GBASE_CR4_CU BIT_ULL(AVF_PHY_TYPE_40GBASE_CR4_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_CR1_CU BIT_ULL(AVF_PHY_TYPE_10GBASE_CR1_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_AOC BIT_ULL(AVF_PHY_TYPE_10GBASE_AOC)
+#define AVF_CAP_PHY_TYPE_40GBASE_AOC BIT_ULL(AVF_PHY_TYPE_40GBASE_AOC)
+#define AVF_CAP_PHY_TYPE_100BASE_TX BIT_ULL(AVF_PHY_TYPE_100BASE_TX)
+#define AVF_CAP_PHY_TYPE_1000BASE_T BIT_ULL(AVF_PHY_TYPE_1000BASE_T)
+#define AVF_CAP_PHY_TYPE_10GBASE_T BIT_ULL(AVF_PHY_TYPE_10GBASE_T)
+#define AVF_CAP_PHY_TYPE_10GBASE_SR BIT_ULL(AVF_PHY_TYPE_10GBASE_SR)
+#define AVF_CAP_PHY_TYPE_10GBASE_LR BIT_ULL(AVF_PHY_TYPE_10GBASE_LR)
+#define AVF_CAP_PHY_TYPE_10GBASE_SFPP_CU BIT_ULL(AVF_PHY_TYPE_10GBASE_SFPP_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_CR1 BIT_ULL(AVF_PHY_TYPE_10GBASE_CR1)
+#define AVF_CAP_PHY_TYPE_40GBASE_CR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_CR4)
+#define AVF_CAP_PHY_TYPE_40GBASE_SR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_SR4)
+#define AVF_CAP_PHY_TYPE_40GBASE_LR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_LR4)
+#define AVF_CAP_PHY_TYPE_1000BASE_SX BIT_ULL(AVF_PHY_TYPE_1000BASE_SX)
+#define AVF_CAP_PHY_TYPE_1000BASE_LX BIT_ULL(AVF_PHY_TYPE_1000BASE_LX)
+#define AVF_CAP_PHY_TYPE_1000BASE_T_OPTICAL \
+				BIT_ULL(AVF_PHY_TYPE_1000BASE_T_OPTICAL)
+#define AVF_CAP_PHY_TYPE_20GBASE_KR2 BIT_ULL(AVF_PHY_TYPE_20GBASE_KR2)
+/*
+ * Defining the macro AVF_PHY_TYPE_OFFSET to implement a bit shift for some
+ * PHY types. There is an unused bit (31) in the AVF_CAP_PHY_TYPE_* bit
+ * fields but no corresponding gap in the avf_aq_phy_type enumeration. So,
+ * a shift is needed to adjust for this with values larger than 31. The
+ * only affected values are AVF_PHY_TYPE_25GBASE_*.
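+ *
+ * For example (illustrative only): with the offset applied,
+ * AVF_CAP_PHY_TYPE_25GBASE_KR below expands to
+ * BIT_ULL(AVF_PHY_TYPE_25GBASE_KR + 1), one bit above the raw PHY type
+ * value, while a non-25G type such as AVF_CAP_PHY_TYPE_10GBASE_SR is
+ * simply BIT_ULL(AVF_PHY_TYPE_10GBASE_SR).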
+ */
+#define AVF_PHY_TYPE_OFFSET 1
+#define AVF_CAP_PHY_TYPE_25GBASE_KR BIT_ULL(AVF_PHY_TYPE_25GBASE_KR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_CR BIT_ULL(AVF_PHY_TYPE_25GBASE_CR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_SR BIT_ULL(AVF_PHY_TYPE_25GBASE_SR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_LR BIT_ULL(AVF_PHY_TYPE_25GBASE_LR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_HW_CAP_MAX_GPIO			30
+#define AVF_HW_CAP_MDIO_PORT_MODE_MDIO		0
+#define AVF_HW_CAP_MDIO_PORT_MODE_I2C		1
+
+enum avf_acpi_programming_method {
+	AVF_ACPI_PROGRAMMING_METHOD_HW_FVL = 0,
+	AVF_ACPI_PROGRAMMING_METHOD_AQC_FPK = 1
+};
+
+#define AVF_WOL_SUPPORT_MASK			0x1
+#define AVF_ACPI_PROGRAMMING_METHOD_MASK	0x2
+#define AVF_PROXY_SUPPORT_MASK			0x4
+
+/* Capabilities of a PF or a VF or the whole device */
+struct avf_hw_capabilities {
+	u32  switch_mode;
+#define AVF_NVM_IMAGE_TYPE_EVB		0x0
+#define AVF_NVM_IMAGE_TYPE_CLOUD	0x2
+#define AVF_NVM_IMAGE_TYPE_UDP_CLOUD	0x3
+
+	u32  management_mode;
+	u32  mng_protocols_over_mctp;
+#define AVF_MNG_PROTOCOL_PLDM		0x2
+#define AVF_MNG_PROTOCOL_OEM_COMMANDS	0x4
+#define AVF_MNG_PROTOCOL_NCSI		0x8
+	u32  npar_enable;
+	u32  os2bmc;
+	u32  valid_functions;
+	bool sr_iov_1_1;
+	bool vmdq;
+	bool evb_802_1_qbg; /* Edge Virtual Bridging */
+	bool evb_802_1_qbh; /* Bridge Port Extension */
+	bool dcb;
+	bool fcoe;
+	bool iscsi; /* Indicates iSCSI enabled */
+	bool flex10_enable;
+	bool flex10_capable;
+	u32  flex10_mode;
+#define AVF_FLEX10_MODE_UNKNOWN	0x0
+#define AVF_FLEX10_MODE_DCC		0x1
+#define AVF_FLEX10_MODE_DCI		0x2
+
+	u32 flex10_status;
+#define AVF_FLEX10_STATUS_DCC_ERROR	0x1
+#define AVF_FLEX10_STATUS_VC_MODE	0x2
+
+	bool sec_rev_disabled;
+	bool update_disabled;
+#define AVF_NVM_MGMT_SEC_REV_DISABLED	0x1
+#define AVF_NVM_MGMT_UPDATE_DISABLED	0x2
+
+	bool mgmt_cem;
+	bool ieee_1588;
+	bool iwarp;
+	bool fd;
+	u32 fd_filters_guaranteed;
+	u32 fd_filters_best_effort;
+	bool rss;
+	u32 rss_table_size;
+	u32 rss_table_entry_width;
+	bool led[AVF_HW_CAP_MAX_GPIO];
+	bool sdp[AVF_HW_CAP_MAX_GPIO];
+	u32 nvm_image_type;
+	u32 num_flow_director_filters;
+	u32 num_vfs;
+	u32 vf_base_id;
+	u32 num_vsis;
+	u32 num_rx_qp;
+	u32 num_tx_qp;
+	u32 base_queue;
+	u32 num_msix_vectors;
+	u32 num_msix_vectors_vf;
+	u32 led_pin_num;
+	u32 sdp_pin_num;
+	u32 mdio_port_num;
+	u32 mdio_port_mode;
+	u8 rx_buf_chain_len;
+	u32 enabled_tcmap;
+	u32 maxtc;
+	u64 wr_csr_prot;
+	bool apm_wol_support;
+	enum avf_acpi_programming_method acpi_prog_method;
+	bool proxy_support;
+};
+
+struct avf_mac_info {
+	enum avf_mac_type type;
+	u8 addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+	u8 san_addr[ETH_ALEN];
+	u8 port_addr[ETH_ALEN];
+	u16 max_fcoeq;
+};
+
+enum avf_aq_resources_ids {
+	AVF_NVM_RESOURCE_ID = 1
+};
+
+enum avf_aq_resource_access_type {
+	AVF_RESOURCE_READ = 1,
+	AVF_RESOURCE_WRITE
+};
+
+struct avf_nvm_info {
+	u64 hw_semaphore_timeout; /* usec global time (GTIME resolution) */
+	u32 timeout;              /* [ms] */
+	u16 sr_size;              /* Shadow RAM size in words */
+	bool blank_nvm_mode;      /* is NVM empty (no FW present)*/
+	u16 version;              /* NVM package version */
+	u32 eetrack;              /* NVM data version */
+	u32 oem_ver;              /* OEM version info */
+};
+
+/* definitions used in NVM update support */
+
+enum avf_nvmupd_cmd {
+	AVF_NVMUPD_INVALID,
+	AVF_NVMUPD_READ_CON,
+	AVF_NVMUPD_READ_SNT,
+	AVF_NVMUPD_READ_LCB,
+	AVF_NVMUPD_READ_SA,
+	AVF_NVMUPD_WRITE_ERA,
+	AVF_NVMUPD_WRITE_CON,
+	AVF_NVMUPD_WRITE_SNT,
+	AVF_NVMUPD_WRITE_LCB,
+	AVF_NVMUPD_WRITE_SA,
+	AVF_NVMUPD_CSUM_CON,
+	AVF_NVMUPD_CSUM_SA,
+	AVF_NVMUPD_CSUM_LCB,
+	AVF_NVMUPD_STATUS,
+	AVF_NVMUPD_EXEC_AQ,
+	AVF_NVMUPD_GET_AQ_RESULT,
+};
+
+enum avf_nvmupd_state {
+	AVF_NVMUPD_STATE_INIT,
+	AVF_NVMUPD_STATE_READING,
+	AVF_NVMUPD_STATE_WRITING,
+	AVF_NVMUPD_STATE_INIT_WAIT,
+	AVF_NVMUPD_STATE_WRITE_WAIT,
+	AVF_NVMUPD_STATE_ERROR
+};
+
+/* nvm_access definition and its masks/shifts need to be accessible to
+ * application, core driver, and shared code.  Where is the right file?
+ */
+#define AVF_NVM_READ	0xB
+#define AVF_NVM_WRITE	0xC
+
+#define AVF_NVM_MOD_PNT_MASK 0xFF
+
+#define AVF_NVM_TRANS_SHIFT	8
+#define AVF_NVM_TRANS_MASK	(0xf << AVF_NVM_TRANS_SHIFT)
+#define AVF_NVM_CON		0x0
+#define AVF_NVM_SNT		0x1
+#define AVF_NVM_LCB		0x2
+#define AVF_NVM_SA		(AVF_NVM_SNT | AVF_NVM_LCB)
+#define AVF_NVM_ERA		0x4
+#define AVF_NVM_CSUM		0x8
+#define AVF_NVM_EXEC		0xf
+
+#define AVF_NVM_ADAPT_SHIFT	16
+#define AVF_NVM_ADAPT_MASK	(0xffffULL << AVF_NVM_ADAPT_SHIFT)
+
+#define AVF_NVMUPD_MAX_DATA	4096
+#define AVF_NVMUPD_IFACE_TIMEOUT 2 /* seconds */
+
+struct avf_nvm_access {
+	u32 command;
+	u32 config;
+	u32 offset;	/* in bytes */
+	u32 data_size;	/* in bytes */
+	u8 data[1];
+};
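+
+/* Illustrative decode of the config word above (a sketch, not part of the
+ * shared code; variable names are placeholders):
+ *	module = config & AVF_NVM_MOD_PNT_MASK;
+ *	trans = (config & AVF_NVM_TRANS_MASK) >> AVF_NVM_TRANS_SHIFT;
+ * where trans then holds one of AVF_NVM_CON, AVF_NVM_SNT, AVF_NVM_LCB, etc.
+ */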
+
+/* PCI bus types */
+enum avf_bus_type {
+	avf_bus_type_unknown = 0,
+	avf_bus_type_pci,
+	avf_bus_type_pcix,
+	avf_bus_type_pci_express,
+	avf_bus_type_reserved
+};
+
+/* PCI bus speeds */
+enum avf_bus_speed {
+	avf_bus_speed_unknown	= 0,
+	avf_bus_speed_33	= 33,
+	avf_bus_speed_66	= 66,
+	avf_bus_speed_100	= 100,
+	avf_bus_speed_120	= 120,
+	avf_bus_speed_133	= 133,
+	avf_bus_speed_2500	= 2500,
+	avf_bus_speed_5000	= 5000,
+	avf_bus_speed_8000	= 8000,
+	avf_bus_speed_reserved
+};
+
+/* PCI bus widths */
+enum avf_bus_width {
+	avf_bus_width_unknown	= 0,
+	avf_bus_width_pcie_x1	= 1,
+	avf_bus_width_pcie_x2	= 2,
+	avf_bus_width_pcie_x4	= 4,
+	avf_bus_width_pcie_x8	= 8,
+	avf_bus_width_32	= 32,
+	avf_bus_width_64	= 64,
+	avf_bus_width_reserved
+};
+
+/* Bus parameters */
+struct avf_bus_info {
+	enum avf_bus_speed speed;
+	enum avf_bus_width width;
+	enum avf_bus_type type;
+
+	u16 func;
+	u16 device;
+	u16 lan_id;
+	u16 bus_id;
+};
+
+/* Flow control (FC) parameters */
+struct avf_fc_info {
+	enum avf_fc_mode current_mode; /* FC mode in effect */
+	enum avf_fc_mode requested_mode; /* FC mode requested by caller */
+};
+
+#define AVF_MAX_TRAFFIC_CLASS		8
+#define AVF_MAX_USER_PRIORITY		8
+#define AVF_DCBX_MAX_APPS		32
+#define AVF_LLDPDU_SIZE		1500
+#define AVF_TLV_STATUS_OPER		0x1
+#define AVF_TLV_STATUS_SYNC		0x2
+#define AVF_TLV_STATUS_ERR		0x4
+#define AVF_CEE_OPER_MAX_APPS		3
+#define AVF_APP_PROTOID_FCOE		0x8906
+#define AVF_APP_PROTOID_ISCSI		0x0cbc
+#define AVF_APP_PROTOID_FIP		0x8914
+#define AVF_APP_SEL_ETHTYPE		0x1
+#define AVF_APP_SEL_TCPIP		0x2
+#define AVF_CEE_APP_SEL_ETHTYPE	0x0
+#define AVF_CEE_APP_SEL_TCPIP		0x1
+
+/* CEE or IEEE 802.1Qaz ETS Configuration data */
+struct avf_dcb_ets_config {
+	u8 willing;
+	u8 cbs;
+	u8 maxtcs;
+	u8 prioritytable[AVF_MAX_TRAFFIC_CLASS];
+	u8 tcbwtable[AVF_MAX_TRAFFIC_CLASS];
+	u8 tsatable[AVF_MAX_TRAFFIC_CLASS];
+};
+
+/* CEE or IEEE 802.1Qaz PFC Configuration data */
+struct avf_dcb_pfc_config {
+	u8 willing;
+	u8 mbc;
+	u8 pfccap;
+	u8 pfcenable;
+};
+
+/* CEE or IEEE 802.1Qaz Application Priority data */
+struct avf_dcb_app_priority_table {
+	u8  priority;
+	u8  selector;
+	u16 protocolid;
+};
+
+struct avf_dcbx_config {
+	u8  dcbx_mode;
+#define AVF_DCBX_MODE_CEE	0x1
+#define AVF_DCBX_MODE_IEEE	0x2
+	u8  app_mode;
+#define AVF_DCBX_APPS_NON_WILLING	0x1
+	u32 numapps;
+	u32 tlv_status; /* CEE mode TLV status */
+	struct avf_dcb_ets_config etscfg;
+	struct avf_dcb_ets_config etsrec;
+	struct avf_dcb_pfc_config pfc;
+	struct avf_dcb_app_priority_table app[AVF_DCBX_MAX_APPS];
+};
+
+/* Port hardware description */
+struct avf_hw {
+	u8 *hw_addr;
+	void *back;
+
+	/* subsystem structs */
+	struct avf_phy_info phy;
+	struct avf_mac_info mac;
+	struct avf_bus_info bus;
+	struct avf_nvm_info nvm;
+	struct avf_fc_info fc;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+	u8 port;
+	bool adapter_stopped;
+
+	/* capabilities for entire device and PCI func */
+	struct avf_hw_capabilities dev_caps;
+	struct avf_hw_capabilities func_caps;
+
+	/* Flow Director shared filter space */
+	u16 fdir_shared_filter_count;
+
+	/* device profile info */
+	u8  pf_id;
+	u16 main_vsi_seid;
+
+	/* for multi-function MACs */
+	u16 partition_id;
+	u16 num_partitions;
+	u16 num_ports;
+
+	/* Closest numa node to the device */
+	u16 numa_node;
+
+	/* Admin Queue info */
+	struct avf_adminq_info aq;
+
+	/* state of nvm update process */
+	enum avf_nvmupd_state nvmupd_state;
+	struct avf_aq_desc nvm_wb_desc;
+	struct avf_virt_mem nvm_buff;
+	bool nvm_release_on_done;
+	u16 nvm_wait_opcode;
+
+	/* HMC info */
+	struct avf_hmc_info hmc; /* HMC info struct */
+
+	/* LLDP/DCBX Status */
+	u16 dcbx_status;
+
+	/* DCBX info */
+	struct avf_dcbx_config local_dcbx_config; /* Oper/Local Cfg */
+	struct avf_dcbx_config remote_dcbx_config; /* Peer Cfg */
+	struct avf_dcbx_config desired_dcbx_config; /* CEE Desired Cfg */
+
+	/* WoL and proxy support */
+	u16 num_wol_proxy_filters;
+	u16 wol_proxy_vsi_seid;
+
+#define AVF_HW_FLAG_AQ_SRCTL_ACCESS_ENABLE BIT_ULL(0)
+#define AVF_HW_FLAG_802_1AD_CAPABLE        BIT_ULL(1)
+#define AVF_HW_FLAG_AQ_PHY_ACCESS_CAPABLE  BIT_ULL(2)
+	u64 flags;
+
+	/* Used in set switch config AQ command */
+	u16 switch_tag;
+	u16 first_tag;
+	u16 second_tag;
+
+	/* debug mask */
+	u32 debug_mask;
+	char err_str[16];
+};
+
+STATIC INLINE bool avf_is_vf(struct avf_hw *hw)
+{
+	return (hw->mac.type == AVF_MAC_VF ||
+		hw->mac.type == AVF_MAC_X722_VF);
+}
+
+struct avf_driver_version {
+	u8 major_version;
+	u8 minor_version;
+	u8 build_version;
+	u8 subbuild_version;
+	u8 driver_string[32];
+};
+
+/* RX Descriptors */
+union avf_16byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+	} read;
+	struct {
+		struct {
+			struct {
+				union {
+					__le16 mirroring_status;
+					__le16 fcoe_ctx_id;
+				} mirr_fcoe;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow director filter id */
+				__le32 fcoe_param; /* FCoE DDP Context id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* ext status/error/pktype/length */
+			__le64 status_error_len;
+		} qword1;
+	} wb;  /* writeback */
+};
+
+union avf_32byte_rx_desc {
+	struct {
+		__le64  pkt_addr; /* Packet buffer address */
+		__le64  hdr_addr; /* Header buffer address */
+			/* bit 0 of hdr_buffer_addr is DD bit */
+		__le64  rsvd1;
+		__le64  rsvd2;
+	} read;
+	struct {
+		struct {
+			struct {
+				union {
+					__le16 mirroring_status;
+					__le16 fcoe_ctx_id;
+				} mirr_fcoe;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fcoe_param; /* FCoE DDP Context id */
+				/* Flow director filter id in case of
+				 * Programming status desc WB
+				 */
+				__le32 fd_id;
+			} hi_dword;
+		} qword0;
+		struct {
+			/* status/error/pktype/length */
+			__le64 status_error_len;
+		} qword1;
+		struct {
+			__le16 ext_status; /* extended status */
+			__le16 rsvd;
+			__le16 l2tag2_1;
+			__le16 l2tag2_2;
+		} qword2;
+		struct {
+			union {
+				__le32 flex_bytes_lo;
+				__le32 pe_status;
+			} lo_dword;
+			union {
+				__le32 flex_bytes_hi;
+				__le32 fd_id;
+			} hi_dword;
+		} qword3;
+	} wb;  /* writeback */
+};
+
+#define AVF_RXD_QW0_MIRROR_STATUS_SHIFT	8
+#define AVF_RXD_QW0_MIRROR_STATUS_MASK	(0x3FUL << \
+					 AVF_RXD_QW0_MIRROR_STATUS_SHIFT)
+#define AVF_RXD_QW0_FCOEINDX_SHIFT	0
+#define AVF_RXD_QW0_FCOEINDX_MASK	(0xFFFUL << \
+					 AVF_RXD_QW0_FCOEINDX_SHIFT)
+
+enum avf_rx_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_STATUS_DD_SHIFT		= 0,
+	AVF_RX_DESC_STATUS_EOF_SHIFT		= 1,
+	AVF_RX_DESC_STATUS_L2TAG1P_SHIFT	= 2,
+	AVF_RX_DESC_STATUS_L3L4P_SHIFT		= 3,
+	AVF_RX_DESC_STATUS_CRCP_SHIFT		= 4,
+	AVF_RX_DESC_STATUS_TSYNINDX_SHIFT	= 5, /* 2 BITS */
+	AVF_RX_DESC_STATUS_TSYNVALID_SHIFT	= 7,
+	AVF_RX_DESC_STATUS_EXT_UDP_0_SHIFT	= 8,
+
+	AVF_RX_DESC_STATUS_UMBCAST_SHIFT	= 9, /* 2 BITS */
+	AVF_RX_DESC_STATUS_FLM_SHIFT		= 11,
+	AVF_RX_DESC_STATUS_FLTSTAT_SHIFT	= 12, /* 2 BITS */
+	AVF_RX_DESC_STATUS_LPBK_SHIFT		= 14,
+	AVF_RX_DESC_STATUS_IPV6EXADD_SHIFT	= 15,
+	AVF_RX_DESC_STATUS_RESERVED2_SHIFT	= 16, /* 2 BITS */
+	AVF_RX_DESC_STATUS_INT_UDP_0_SHIFT	= 18,
+	AVF_RX_DESC_STATUS_LAST /* this entry must be last!!! */
+};
+
+#define AVF_RXD_QW1_STATUS_SHIFT	0
+#define AVF_RXD_QW1_STATUS_MASK	((BIT(AVF_RX_DESC_STATUS_LAST) - 1) << \
+					 AVF_RXD_QW1_STATUS_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_TSYNINDX_SHIFT   AVF_RX_DESC_STATUS_TSYNINDX_SHIFT
+#define AVF_RXD_QW1_STATUS_TSYNINDX_MASK	(0x3UL << \
+					     AVF_RXD_QW1_STATUS_TSYNINDX_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_TSYNVALID_SHIFT  AVF_RX_DESC_STATUS_TSYNVALID_SHIFT
+#define AVF_RXD_QW1_STATUS_TSYNVALID_MASK   BIT_ULL(AVF_RXD_QW1_STATUS_TSYNVALID_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_UMBCAST_SHIFT	AVF_RX_DESC_STATUS_UMBCAST_SHIFT
+#define AVF_RXD_QW1_STATUS_UMBCAST_MASK	(0x3UL << \
+					 AVF_RXD_QW1_STATUS_UMBCAST_SHIFT)
+
+enum avf_rx_desc_fltstat_values {
+	AVF_RX_DESC_FLTSTAT_NO_DATA	= 0,
+	AVF_RX_DESC_FLTSTAT_RSV_FD_ID	= 1, /* 16byte desc? FD_ID : RSV */
+	AVF_RX_DESC_FLTSTAT_RSV	= 2,
+	AVF_RX_DESC_FLTSTAT_RSS_HASH	= 3,
+};
+
+#define AVF_RXD_PACKET_TYPE_UNICAST	0
+#define AVF_RXD_PACKET_TYPE_MULTICAST	1
+#define AVF_RXD_PACKET_TYPE_BROADCAST	2
+#define AVF_RXD_PACKET_TYPE_MIRRORED	3
+
+#define AVF_RXD_QW1_ERROR_SHIFT	19
+#define AVF_RXD_QW1_ERROR_MASK		(0xFFUL << AVF_RXD_QW1_ERROR_SHIFT)
+
+enum avf_rx_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_ERROR_RXE_SHIFT		= 0,
+	AVF_RX_DESC_ERROR_RECIPE_SHIFT		= 1,
+	AVF_RX_DESC_ERROR_HBO_SHIFT		= 2,
+	AVF_RX_DESC_ERROR_L3L4E_SHIFT		= 3, /* 3 BITS */
+	AVF_RX_DESC_ERROR_IPE_SHIFT		= 3,
+	AVF_RX_DESC_ERROR_L4E_SHIFT		= 4,
+	AVF_RX_DESC_ERROR_EIPE_SHIFT		= 5,
+	AVF_RX_DESC_ERROR_OVERSIZE_SHIFT	= 6,
+	AVF_RX_DESC_ERROR_PPRS_SHIFT		= 7
+};
+
+enum avf_rx_desc_error_l3l4e_fcoe_masks {
+	AVF_RX_DESC_ERROR_L3L4E_NONE		= 0,
+	AVF_RX_DESC_ERROR_L3L4E_PROT		= 1,
+	AVF_RX_DESC_ERROR_L3L4E_FC		= 2,
+	AVF_RX_DESC_ERROR_L3L4E_DMAC_ERR	= 3,
+	AVF_RX_DESC_ERROR_L3L4E_DMAC_WARN	= 4
+};
+
+#define AVF_RXD_QW1_PTYPE_SHIFT	30
+#define AVF_RXD_QW1_PTYPE_MASK		(0xFFULL << AVF_RXD_QW1_PTYPE_SHIFT)
+
+/* Packet type non-ip values */
+enum avf_rx_l2_ptype {
+	AVF_RX_PTYPE_L2_RESERVED			= 0,
+	AVF_RX_PTYPE_L2_MAC_PAY2			= 1,
+	AVF_RX_PTYPE_L2_TIMESYNC_PAY2			= 2,
+	AVF_RX_PTYPE_L2_FIP_PAY2			= 3,
+	AVF_RX_PTYPE_L2_OUI_PAY2			= 4,
+	AVF_RX_PTYPE_L2_MACCNTRL_PAY2			= 5,
+	AVF_RX_PTYPE_L2_LLDP_PAY2			= 6,
+	AVF_RX_PTYPE_L2_ECP_PAY2			= 7,
+	AVF_RX_PTYPE_L2_EVB_PAY2			= 8,
+	AVF_RX_PTYPE_L2_QCN_PAY2			= 9,
+	AVF_RX_PTYPE_L2_EAPOL_PAY2			= 10,
+	AVF_RX_PTYPE_L2_ARP				= 11,
+	AVF_RX_PTYPE_L2_FCOE_PAY3			= 12,
+	AVF_RX_PTYPE_L2_FCOE_FCDATA_PAY3		= 13,
+	AVF_RX_PTYPE_L2_FCOE_FCRDY_PAY3		= 14,
+	AVF_RX_PTYPE_L2_FCOE_FCRSP_PAY3		= 15,
+	AVF_RX_PTYPE_L2_FCOE_FCOTHER_PA		= 16,
+	AVF_RX_PTYPE_L2_FCOE_VFT_PAY3			= 17,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCDATA		= 18,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCRDY			= 19,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCRSP			= 20,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCOTHER		= 21,
+	AVF_RX_PTYPE_GRENAT4_MAC_PAY3			= 58,
+	AVF_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4	= 87,
+	AVF_RX_PTYPE_GRENAT6_MAC_PAY3			= 124,
+	AVF_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4	= 153
+};
+
+struct avf_rx_ptype_decoded {
+	u32 ptype:8;
+	u32 known:1;
+	u32 outer_ip:1;
+	u32 outer_ip_ver:1;
+	u32 outer_frag:1;
+	u32 tunnel_type:3;
+	u32 tunnel_end_prot:2;
+	u32 tunnel_end_frag:1;
+	u32 inner_prot:4;
+	u32 payload_layer:3;
+};
+
+enum avf_rx_ptype_outer_ip {
+	AVF_RX_PTYPE_OUTER_L2	= 0,
+	AVF_RX_PTYPE_OUTER_IP	= 1
+};
+
+enum avf_rx_ptype_outer_ip_ver {
+	AVF_RX_PTYPE_OUTER_NONE	= 0,
+	AVF_RX_PTYPE_OUTER_IPV4	= 0,
+	AVF_RX_PTYPE_OUTER_IPV6	= 1
+};
+
+enum avf_rx_ptype_outer_fragmented {
+	AVF_RX_PTYPE_NOT_FRAG	= 0,
+	AVF_RX_PTYPE_FRAG	= 1
+};
+
+enum avf_rx_ptype_tunnel_type {
+	AVF_RX_PTYPE_TUNNEL_NONE		= 0,
+	AVF_RX_PTYPE_TUNNEL_IP_IP		= 1,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT		= 2,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC	= 3,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN	= 4,
+};
+
+enum avf_rx_ptype_tunnel_end_prot {
+	AVF_RX_PTYPE_TUNNEL_END_NONE	= 0,
+	AVF_RX_PTYPE_TUNNEL_END_IPV4	= 1,
+	AVF_RX_PTYPE_TUNNEL_END_IPV6	= 2,
+};
+
+enum avf_rx_ptype_inner_prot {
+	AVF_RX_PTYPE_INNER_PROT_NONE		= 0,
+	AVF_RX_PTYPE_INNER_PROT_UDP		= 1,
+	AVF_RX_PTYPE_INNER_PROT_TCP		= 2,
+	AVF_RX_PTYPE_INNER_PROT_SCTP		= 3,
+	AVF_RX_PTYPE_INNER_PROT_ICMP		= 4,
+	AVF_RX_PTYPE_INNER_PROT_TIMESYNC	= 5
+};
+
+enum avf_rx_ptype_payload_layer {
+	AVF_RX_PTYPE_PAYLOAD_LAYER_NONE	= 0,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY2	= 1,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY3	= 2,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY4	= 3,
+};
+
+#define AVF_RX_PTYPE_BIT_MASK		0x0FFFFFFF
+#define AVF_RX_PTYPE_SHIFT		56
+
+#define AVF_RXD_QW1_LENGTH_PBUF_SHIFT	38
+#define AVF_RXD_QW1_LENGTH_PBUF_MASK	(0x3FFFULL << \
+					 AVF_RXD_QW1_LENGTH_PBUF_SHIFT)
+
+#define AVF_RXD_QW1_LENGTH_HBUF_SHIFT	52
+#define AVF_RXD_QW1_LENGTH_HBUF_MASK	(0x7FFULL << \
+					 AVF_RXD_QW1_LENGTH_HBUF_SHIFT)
+
+#define AVF_RXD_QW1_LENGTH_SPH_SHIFT	63
+#define AVF_RXD_QW1_LENGTH_SPH_MASK	BIT_ULL(AVF_RXD_QW1_LENGTH_SPH_SHIFT)
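+
+/* Illustrative parse of Rx descriptor qword1 (a sketch; qw1 is assumed to
+ * be a host-order copy of wb.qword1.status_error_len and the variable
+ * names are placeholders):
+ *	dd    = qw1 & BIT(AVF_RX_DESC_STATUS_DD_SHIFT);
+ *	err   = (qw1 & AVF_RXD_QW1_ERROR_MASK) >> AVF_RXD_QW1_ERROR_SHIFT;
+ *	ptype = (qw1 & AVF_RXD_QW1_PTYPE_MASK) >> AVF_RXD_QW1_PTYPE_SHIFT;
+ *	len   = (qw1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+ *		AVF_RXD_QW1_LENGTH_PBUF_SHIFT;
+ */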
+
+#define AVF_RXD_QW1_NEXTP_SHIFT	38
+#define AVF_RXD_QW1_NEXTP_MASK		(0x1FFFULL << AVF_RXD_QW1_NEXTP_SHIFT)
+
+#define AVF_RXD_QW2_EXT_STATUS_SHIFT	0
+#define AVF_RXD_QW2_EXT_STATUS_MASK	(0xFFFFFUL << \
+					 AVF_RXD_QW2_EXT_STATUS_SHIFT)
+
+enum avf_rx_desc_ext_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT	= 0,
+	AVF_RX_DESC_EXT_STATUS_L2TAG3P_SHIFT	= 1,
+	AVF_RX_DESC_EXT_STATUS_FLEXBL_SHIFT	= 2, /* 2 BITS */
+	AVF_RX_DESC_EXT_STATUS_FLEXBH_SHIFT	= 4, /* 2 BITS */
+	AVF_RX_DESC_EXT_STATUS_FDLONGB_SHIFT	= 9,
+	AVF_RX_DESC_EXT_STATUS_FCOELONGB_SHIFT	= 10,
+	AVF_RX_DESC_EXT_STATUS_PELONGB_SHIFT	= 11,
+};
+
+#define AVF_RXD_QW2_L2TAG2_SHIFT	0
+#define AVF_RXD_QW2_L2TAG2_MASK	(0xFFFFUL << AVF_RXD_QW2_L2TAG2_SHIFT)
+
+#define AVF_RXD_QW2_L2TAG3_SHIFT	16
+#define AVF_RXD_QW2_L2TAG3_MASK	(0xFFFFUL << AVF_RXD_QW2_L2TAG3_SHIFT)
+
+enum avf_rx_desc_pe_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_PE_STATUS_QPID_SHIFT	= 0, /* 18 BITS */
+	AVF_RX_DESC_PE_STATUS_L4PORT_SHIFT	= 0, /* 16 BITS */
+	AVF_RX_DESC_PE_STATUS_IPINDEX_SHIFT	= 16, /* 8 BITS */
+	AVF_RX_DESC_PE_STATUS_QPIDHIT_SHIFT	= 24,
+	AVF_RX_DESC_PE_STATUS_APBVTHIT_SHIFT	= 25,
+	AVF_RX_DESC_PE_STATUS_PORTV_SHIFT	= 26,
+	AVF_RX_DESC_PE_STATUS_URG_SHIFT	= 27,
+	AVF_RX_DESC_PE_STATUS_IPFRAG_SHIFT	= 28,
+	AVF_RX_DESC_PE_STATUS_IPOPT_SHIFT	= 29
+};
+
+#define AVF_RX_PROG_STATUS_DESC_LENGTH_SHIFT		38
+#define AVF_RX_PROG_STATUS_DESC_LENGTH			0x2000000
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT	2
+#define AVF_RX_PROG_STATUS_DESC_QW1_PROGID_MASK	(0x7UL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT)
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT	0
+#define AVF_RX_PROG_STATUS_DESC_QW1_STATUS_MASK	(0x7FFFUL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT)
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT	19
+#define AVF_RX_PROG_STATUS_DESC_QW1_ERROR_MASK		(0x3FUL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT)
+
+enum avf_rx_prog_status_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_PROG_STATUS_DESC_DD_SHIFT	= 0,
+	AVF_RX_PROG_STATUS_DESC_PROG_ID_SHIFT	= 2 /* 3 BITS */
+};
+
+enum avf_rx_prog_status_desc_prog_id_masks {
+	AVF_RX_PROG_STATUS_DESC_FD_FILTER_STATUS	= 1,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CTXT_PROG_STATUS	= 2,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CTXT_INVL_STATUS	= 4,
+};
+
+enum avf_rx_prog_status_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_PROG_STATUS_DESC_FD_TBL_FULL_SHIFT	= 0,
+	AVF_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT	= 1,
+	AVF_RX_PROG_STATUS_DESC_FCOE_TBL_FULL_SHIFT	= 2,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CONFLICT_SHIFT	= 3
+};
+
+#define AVF_TWO_BIT_MASK	0x3
+#define AVF_THREE_BIT_MASK	0x7
+#define AVF_FOUR_BIT_MASK	0xF
+#define AVF_EIGHTEEN_BIT_MASK	0x3FFFF
+
+/* TX Descriptor */
+struct avf_tx_desc {
+	__le64 buffer_addr; /* Address of descriptor's data buf */
+	__le64 cmd_type_offset_bsz;
+};
+
+#define AVF_TXD_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_QW1_DTYPE_MASK		(0xFUL << AVF_TXD_QW1_DTYPE_SHIFT)
+
+enum avf_tx_desc_dtype_value {
+	AVF_TX_DESC_DTYPE_DATA		= 0x0,
+	AVF_TX_DESC_DTYPE_NOP		= 0x1, /* same as Context desc */
+	AVF_TX_DESC_DTYPE_CONTEXT	= 0x1,
+	AVF_TX_DESC_DTYPE_FCOE_CTX	= 0x2,
+	AVF_TX_DESC_DTYPE_FILTER_PROG	= 0x8,
+	AVF_TX_DESC_DTYPE_DDP_CTX	= 0x9,
+	AVF_TX_DESC_DTYPE_FLEX_DATA	= 0xB,
+	AVF_TX_DESC_DTYPE_FLEX_CTX_1	= 0xC,
+	AVF_TX_DESC_DTYPE_FLEX_CTX_2	= 0xD,
+	AVF_TX_DESC_DTYPE_DESC_DONE	= 0xF
+};
+
+#define AVF_TXD_QW1_CMD_SHIFT	4
+#define AVF_TXD_QW1_CMD_MASK	(0x3FFUL << AVF_TXD_QW1_CMD_SHIFT)
+
+enum avf_tx_desc_cmd_bits {
+	AVF_TX_DESC_CMD_EOP			= 0x0001,
+	AVF_TX_DESC_CMD_RS			= 0x0002,
+	AVF_TX_DESC_CMD_ICRC			= 0x0004,
+	AVF_TX_DESC_CMD_IL2TAG1		= 0x0008,
+	AVF_TX_DESC_CMD_DUMMY			= 0x0010,
+	AVF_TX_DESC_CMD_IIPT_NONIP		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV6		= 0x0020, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV4		= 0x0040, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV4_CSUM		= 0x0060, /* 2 BITS */
+	AVF_TX_DESC_CMD_FCOET			= 0x0080,
+	AVF_TX_DESC_CMD_L4T_EOFT_UNK		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_TCP		= 0x0100, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_SCTP		= 0x0200, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_UDP		= 0x0300, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_N		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_T		= 0x0100, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_NI	= 0x0200, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_A		= 0x0300, /* 2 BITS */
+};
+
+#define AVF_TXD_QW1_OFFSET_SHIFT	16
+#define AVF_TXD_QW1_OFFSET_MASK	(0x3FFFFULL << \
+					 AVF_TXD_QW1_OFFSET_SHIFT)
+
+enum avf_tx_desc_length_fields {
+	/* Note: These are predefined bit offsets */
+	AVF_TX_DESC_LENGTH_MACLEN_SHIFT	= 0, /* 7 BITS */
+	AVF_TX_DESC_LENGTH_IPLEN_SHIFT		= 7, /* 7 BITS */
+	AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT	= 14 /* 4 BITS */
+};
+
+#define AVF_TXD_QW1_MACLEN_MASK (0x7FUL << AVF_TX_DESC_LENGTH_MACLEN_SHIFT)
+#define AVF_TXD_QW1_IPLEN_MASK  (0x7FUL << AVF_TX_DESC_LENGTH_IPLEN_SHIFT)
+#define AVF_TXD_QW1_L4LEN_MASK  (0xFUL << AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+#define AVF_TXD_QW1_FCLEN_MASK  (0xFUL << AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+
+#define AVF_TXD_QW1_TX_BUF_SZ_SHIFT	34
+#define AVF_TXD_QW1_TX_BUF_SZ_MASK	(0x3FFFULL << \
+					 AVF_TXD_QW1_TX_BUF_SZ_SHIFT)
+
+#define AVF_TXD_QW1_L2TAG1_SHIFT	48
+#define AVF_TXD_QW1_L2TAG1_MASK	(0xFFFFULL << AVF_TXD_QW1_L2TAG1_SHIFT)
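+
+/* Illustrative build of a data descriptor qword1 (a sketch; pkt_len is a
+ * placeholder, the MAC/IP/L4 length offsets are omitted, and the result
+ * would still need byte swapping before being written to
+ * cmd_type_offset_bsz):
+ *	qw1 = AVF_TX_DESC_DTYPE_DATA |
+ *	      ((u64)(AVF_TX_DESC_CMD_EOP | AVF_TX_DESC_CMD_RS |
+ *		     AVF_TX_DESC_CMD_ICRC) << AVF_TXD_QW1_CMD_SHIFT) |
+ *	      ((u64)pkt_len << AVF_TXD_QW1_TX_BUF_SZ_SHIFT);
+ */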
+
+/* Context descriptors */
+struct avf_tx_context_desc {
+	__le32 tunneling_params;
+	__le16 l2tag2;
+	__le16 rsvd;
+	__le64 type_cmd_tso_mss;
+};
+
+#define AVF_TXD_CTX_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_CTX_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_CTX_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_CTX_QW1_CMD_SHIFT	4
+#define AVF_TXD_CTX_QW1_CMD_MASK	(0xFFFFUL << AVF_TXD_CTX_QW1_CMD_SHIFT)
+
+enum avf_tx_ctx_desc_cmd_bits {
+	AVF_TX_CTX_DESC_TSO		= 0x01,
+	AVF_TX_CTX_DESC_TSYN		= 0x02,
+	AVF_TX_CTX_DESC_IL2TAG2	= 0x04,
+	AVF_TX_CTX_DESC_IL2TAG2_IL2H	= 0x08,
+	AVF_TX_CTX_DESC_SWTCH_NOTAG	= 0x00,
+	AVF_TX_CTX_DESC_SWTCH_UPLINK	= 0x10,
+	AVF_TX_CTX_DESC_SWTCH_LOCAL	= 0x20,
+	AVF_TX_CTX_DESC_SWTCH_VSI	= 0x30,
+	AVF_TX_CTX_DESC_SWPE		= 0x40
+};
+
+#define AVF_TXD_CTX_QW1_TSO_LEN_SHIFT	30
+#define AVF_TXD_CTX_QW1_TSO_LEN_MASK	(0x3FFFFULL << \
+					 AVF_TXD_CTX_QW1_TSO_LEN_SHIFT)
+
+#define AVF_TXD_CTX_QW1_MSS_SHIFT	50
+#define AVF_TXD_CTX_QW1_MSS_MASK	(0x3FFFULL << \
+					 AVF_TXD_CTX_QW1_MSS_SHIFT)
+
+#define AVF_TXD_CTX_QW1_VSI_SHIFT	50
+#define AVF_TXD_CTX_QW1_VSI_MASK	(0x1FFULL << AVF_TXD_CTX_QW1_VSI_SHIFT)
+
+#define AVF_TXD_CTX_QW0_EXT_IP_SHIFT	0
+#define AVF_TXD_CTX_QW0_EXT_IP_MASK	(0x3ULL << \
+					 AVF_TXD_CTX_QW0_EXT_IP_SHIFT)
+
+enum avf_tx_ctx_desc_eipt_offload {
+	AVF_TX_CTX_EXT_IP_NONE		= 0x0,
+	AVF_TX_CTX_EXT_IP_IPV6		= 0x1,
+	AVF_TX_CTX_EXT_IP_IPV4_NO_CSUM	= 0x2,
+	AVF_TX_CTX_EXT_IP_IPV4		= 0x3
+};
+
+#define AVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT	2
+#define AVF_TXD_CTX_QW0_EXT_IPLEN_MASK	(0x3FULL << \
+					 AVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT)
+
+#define AVF_TXD_CTX_QW0_NATT_SHIFT	9
+#define AVF_TXD_CTX_QW0_NATT_MASK	(0x3ULL << AVF_TXD_CTX_QW0_NATT_SHIFT)
+
+#define AVF_TXD_CTX_UDP_TUNNELING	BIT_ULL(AVF_TXD_CTX_QW0_NATT_SHIFT)
+#define AVF_TXD_CTX_GRE_TUNNELING	(0x2ULL << AVF_TXD_CTX_QW0_NATT_SHIFT)
+
+#define AVF_TXD_CTX_QW0_EIP_NOINC_SHIFT	11
+#define AVF_TXD_CTX_QW0_EIP_NOINC_MASK	BIT_ULL(AVF_TXD_CTX_QW0_EIP_NOINC_SHIFT)
+
+#define AVF_TXD_CTX_EIP_NOINC_IPID_CONST	AVF_TXD_CTX_QW0_EIP_NOINC_MASK
+
+#define AVF_TXD_CTX_QW0_NATLEN_SHIFT	12
+#define AVF_TXD_CTX_QW0_NATLEN_MASK	(0X7FULL << \
+					 AVF_TXD_CTX_QW0_NATLEN_SHIFT)
+
+#define AVF_TXD_CTX_QW0_DECTTL_SHIFT	19
+#define AVF_TXD_CTX_QW0_DECTTL_MASK	(0xFULL << \
+					 AVF_TXD_CTX_QW0_DECTTL_SHIFT)
+
+#define AVF_TXD_CTX_QW0_L4T_CS_SHIFT	23
+#define AVF_TXD_CTX_QW0_L4T_CS_MASK	BIT_ULL(AVF_TXD_CTX_QW0_L4T_CS_SHIFT)
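+
+/* Illustrative TSO context descriptor qword1 (a sketch; tso_len and mss are
+ * placeholders and the value would be byte swapped into type_cmd_tso_mss):
+ *	ctx_qw1 = AVF_TX_DESC_DTYPE_CONTEXT |
+ *		  ((u64)AVF_TX_CTX_DESC_TSO << AVF_TXD_CTX_QW1_CMD_SHIFT) |
+ *		  ((u64)tso_len << AVF_TXD_CTX_QW1_TSO_LEN_SHIFT) |
+ *		  ((u64)mss << AVF_TXD_CTX_QW1_MSS_SHIFT);
+ */
+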
+struct avf_nop_desc {
+	__le64 rsvd;
+	__le64 dtype_cmd;
+};
+
+#define AVF_TXD_NOP_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_NOP_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_NOP_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_NOP_QW1_CMD_SHIFT	4
+#define AVF_TXD_NOP_QW1_CMD_MASK	(0x7FUL << AVF_TXD_NOP_QW1_CMD_SHIFT)
+
+enum avf_tx_nop_desc_cmd_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_TX_NOP_DESC_EOP_SHIFT	= 0,
+	AVF_TX_NOP_DESC_RS_SHIFT	= 1,
+	AVF_TX_NOP_DESC_RSV_SHIFT	= 2 /* 5 bits */
+};
+
+struct avf_filter_program_desc {
+	__le32 qindex_flex_ptype_vsi;
+	__le32 rsvd;
+	__le32 dtype_cmd_cntindex;
+	__le32 fd_id;
+};
+#define AVF_TXD_FLTR_QW0_QINDEX_SHIFT	0
+#define AVF_TXD_FLTR_QW0_QINDEX_MASK	(0x7FFUL << \
+					 AVF_TXD_FLTR_QW0_QINDEX_SHIFT)
+#define AVF_TXD_FLTR_QW0_FLEXOFF_SHIFT	11
+#define AVF_TXD_FLTR_QW0_FLEXOFF_MASK	(0x7UL << \
+					 AVF_TXD_FLTR_QW0_FLEXOFF_SHIFT)
+#define AVF_TXD_FLTR_QW0_PCTYPE_SHIFT	17
+#define AVF_TXD_FLTR_QW0_PCTYPE_MASK	(0x3FUL << \
+					 AVF_TXD_FLTR_QW0_PCTYPE_SHIFT)
+
+/* Packet Classifier Types for filters */
+enum avf_filter_pctype {
+	/* Note: Values 0-28 are reserved for future use.
+	 * Values 29, 30, and 32 are not supported on XL710 and X710.
+	 */
+	AVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP	= 29,
+	AVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP	= 30,
+	AVF_FILTER_PCTYPE_NONF_IPV4_UDP		= 31,
+	AVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK	= 32,
+	AVF_FILTER_PCTYPE_NONF_IPV4_TCP		= 33,
+	AVF_FILTER_PCTYPE_NONF_IPV4_SCTP		= 34,
+	AVF_FILTER_PCTYPE_NONF_IPV4_OTHER		= 35,
+	AVF_FILTER_PCTYPE_FRAG_IPV4			= 36,
+	/* Note: Values 37-38 are reserved for future use.
+	 * Values 39, 40, and 42 are not supported on XL710 and X710.
+	 */
+	AVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP	= 39,
+	AVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP	= 40,
+	AVF_FILTER_PCTYPE_NONF_IPV6_UDP		= 41,
+	AVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK	= 42,
+	AVF_FILTER_PCTYPE_NONF_IPV6_TCP		= 43,
+	AVF_FILTER_PCTYPE_NONF_IPV6_SCTP		= 44,
+	AVF_FILTER_PCTYPE_NONF_IPV6_OTHER		= 45,
+	AVF_FILTER_PCTYPE_FRAG_IPV6			= 46,
+	/* Note: Value 47 is reserved for future use */
+	AVF_FILTER_PCTYPE_FCOE_OX			= 48,
+	AVF_FILTER_PCTYPE_FCOE_RX			= 49,
+	AVF_FILTER_PCTYPE_FCOE_OTHER			= 50,
+	/* Note: Values 51-62 are reserved for future use */
+	AVF_FILTER_PCTYPE_L2_PAYLOAD			= 63,
+};
+
+enum avf_filter_program_desc_dest {
+	AVF_FILTER_PROGRAM_DESC_DEST_DROP_PACKET		= 0x0,
+	AVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_QINDEX	= 0x1,
+	AVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_OTHER	= 0x2,
+};
+
+enum avf_filter_program_desc_fd_status {
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_NONE			= 0x0,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID		= 0x1,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID_4FLEX_BYTES	= 0x2,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_8FLEX_BYTES		= 0x3,
+};
+
+#define AVF_TXD_FLTR_QW0_DEST_VSI_SHIFT	23
+#define AVF_TXD_FLTR_QW0_DEST_VSI_MASK	(0x1FFUL << \
+					 AVF_TXD_FLTR_QW0_DEST_VSI_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_FLTR_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_FLTR_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CMD_SHIFT	4
+#define AVF_TXD_FLTR_QW1_CMD_MASK	(0xFFFFULL << \
+					 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_PCMD_SHIFT	(0x0ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_PCMD_MASK	(0x7ULL << AVF_TXD_FLTR_QW1_PCMD_SHIFT)
+
+enum avf_filter_program_desc_pcmd {
+	AVF_FILTER_PROGRAM_DESC_PCMD_ADD_UPDATE	= 0x1,
+	AVF_FILTER_PROGRAM_DESC_PCMD_REMOVE		= 0x2,
+};
+
+#define AVF_TXD_FLTR_QW1_DEST_SHIFT	(0x3ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_DEST_MASK	(0x3ULL << AVF_TXD_FLTR_QW1_DEST_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CNT_ENA_SHIFT	(0x7ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_CNT_ENA_MASK	BIT_ULL(AVF_TXD_FLTR_QW1_CNT_ENA_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_FD_STATUS_SHIFT	(0x9ULL + \
+						 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_FD_STATUS_MASK (0x3ULL << \
+					  AVF_TXD_FLTR_QW1_FD_STATUS_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_ATR_SHIFT	(0xEULL + \
+					 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_ATR_MASK	BIT_ULL(AVF_TXD_FLTR_QW1_ATR_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CNTINDEX_SHIFT 20
+#define AVF_TXD_FLTR_QW1_CNTINDEX_MASK	(0x1FFUL << \
+					 AVF_TXD_FLTR_QW1_CNTINDEX_SHIFT)
+
+enum avf_filter_type {
+	AVF_FLOW_DIRECTOR_FLTR = 0,
+	AVF_PE_QUAD_HASH_FLTR = 1,
+	AVF_ETHERTYPE_FLTR,
+	AVF_FCOE_CTX_FLTR,
+	AVF_MAC_VLAN_FLTR,
+	AVF_HASH_FLTR
+};
+
+struct avf_vsi_context {
+	u16 seid;
+	u16 uplink_seid;
+	u16 vsi_number;
+	u16 vsis_allocated;
+	u16 vsis_unallocated;
+	u16 flags;
+	u8 pf_num;
+	u8 vf_num;
+	u8 connection_type;
+	struct avf_aqc_vsi_properties_data info;
+};
+
+struct avf_veb_context {
+	u16 seid;
+	u16 uplink_seid;
+	u16 veb_number;
+	u16 vebs_allocated;
+	u16 vebs_unallocated;
+	u16 flags;
+	struct avf_aqc_get_veb_parameters_completion info;
+};
+
+/* Statistics collected by each port, VSI, VEB, and S-channel */
+struct avf_eth_stats {
+	u64 rx_bytes;			/* gorc */
+	u64 rx_unicast;			/* uprc */
+	u64 rx_multicast;		/* mprc */
+	u64 rx_broadcast;		/* bprc */
+	u64 rx_discards;		/* rdpc */
+	u64 rx_unknown_protocol;	/* rupp */
+	u64 tx_bytes;			/* gotc */
+	u64 tx_unicast;			/* uptc */
+	u64 tx_multicast;		/* mptc */
+	u64 tx_broadcast;		/* bptc */
+	u64 tx_discards;		/* tdpc */
+	u64 tx_errors;			/* tepc */
+};
+
+/* Statistics collected per VEB per TC */
+struct avf_veb_tc_stats {
+	u64 tc_rx_packets[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_rx_bytes[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_tx_packets[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_tx_bytes[AVF_MAX_TRAFFIC_CLASS];
+};
+
+/* Statistics collected per function for FCoE */
+struct avf_fcoe_stats {
+	u64 rx_fcoe_packets;		/* fcoeprc */
+	u64 rx_fcoe_dwords;		/* focedwrc */
+	u64 rx_fcoe_dropped;		/* fcoerpdc */
+	u64 tx_fcoe_packets;		/* fcoeptc */
+	u64 tx_fcoe_dwords;		/* focedwtc */
+	u64 fcoe_bad_fccrc;		/* fcoecrc */
+	u64 fcoe_last_error;		/* fcoelast */
+	u64 fcoe_ddp_count;		/* fcoeddpc */
+};
+
+/* offset to per function FCoE statistics block */
+#define AVF_FCOE_VF_STAT_OFFSET	0
+#define AVF_FCOE_PF_STAT_OFFSET	128
+#define AVF_FCOE_STAT_MAX		(AVF_FCOE_PF_STAT_OFFSET + AVF_MAX_PF)
+
+/* Statistics collected by the MAC */
+struct avf_hw_port_stats {
+	/* eth stats collected by the port */
+	struct avf_eth_stats eth;
+
+	/* additional port specific stats */
+	u64 tx_dropped_link_down;	/* tdold */
+	u64 crc_errors;			/* crcerrs */
+	u64 illegal_bytes;		/* illerrc */
+	u64 error_bytes;		/* errbc */
+	u64 mac_local_faults;		/* mlfc */
+	u64 mac_remote_faults;		/* mrfc */
+	u64 rx_length_errors;		/* rlec */
+	u64 link_xon_rx;		/* lxonrxc */
+	u64 link_xoff_rx;		/* lxoffrxc */
+	u64 priority_xon_rx[8];		/* pxonrxc[8] */
+	u64 priority_xoff_rx[8];	/* pxoffrxc[8] */
+	u64 link_xon_tx;		/* lxontxc */
+	u64 link_xoff_tx;		/* lxofftxc */
+	u64 priority_xon_tx[8];		/* pxontxc[8] */
+	u64 priority_xoff_tx[8];	/* pxofftxc[8] */
+	u64 priority_xon_2_xoff[8];	/* pxon2offc[8] */
+	u64 rx_size_64;			/* prc64 */
+	u64 rx_size_127;		/* prc127 */
+	u64 rx_size_255;		/* prc255 */
+	u64 rx_size_511;		/* prc511 */
+	u64 rx_size_1023;		/* prc1023 */
+	u64 rx_size_1522;		/* prc1522 */
+	u64 rx_size_big;		/* prc9522 */
+	u64 rx_undersize;		/* ruc */
+	u64 rx_fragments;		/* rfc */
+	u64 rx_oversize;		/* roc */
+	u64 rx_jabber;			/* rjc */
+	u64 tx_size_64;			/* ptc64 */
+	u64 tx_size_127;		/* ptc127 */
+	u64 tx_size_255;		/* ptc255 */
+	u64 tx_size_511;		/* ptc511 */
+	u64 tx_size_1023;		/* ptc1023 */
+	u64 tx_size_1522;		/* ptc1522 */
+	u64 tx_size_big;		/* ptc9522 */
+	u64 mac_short_packet_dropped;	/* mspdc */
+	u64 checksum_error;		/* xec */
+	/* flow director stats */
+	u64 fd_atr_match;
+	u64 fd_sb_match;
+	u64 fd_atr_tunnel_match;
+	u32 fd_atr_status;
+	u32 fd_sb_status;
+	/* EEE LPI */
+	u32 tx_lpi_status;
+	u32 rx_lpi_status;
+	u64 tx_lpi_count;		/* etlpic */
+	u64 rx_lpi_count;		/* erlpic */
+};
+
+/* Checksum and Shadow RAM pointers */
+#define AVF_SR_NVM_CONTROL_WORD		0x00
+#define AVF_SR_PCIE_ANALOG_CONFIG_PTR		0x03
+#define AVF_SR_PHY_ANALOG_CONFIG_PTR		0x04
+#define AVF_SR_OPTION_ROM_PTR			0x05
+#define AVF_SR_RO_PCIR_REGS_AUTO_LOAD_PTR	0x06
+#define AVF_SR_AUTO_GENERATED_POINTERS_PTR	0x07
+#define AVF_SR_PCIR_REGS_AUTO_LOAD_PTR		0x08
+#define AVF_SR_EMP_GLOBAL_MODULE_PTR		0x09
+#define AVF_SR_RO_PCIE_LCB_PTR			0x0A
+#define AVF_SR_EMP_IMAGE_PTR			0x0B
+#define AVF_SR_PE_IMAGE_PTR			0x0C
+#define AVF_SR_CSR_PROTECTED_LIST_PTR		0x0D
+#define AVF_SR_MNG_CONFIG_PTR			0x0E
+#define AVF_SR_EMP_MODULE_PTR			0x0F
+#define AVF_SR_PBA_FLAGS			0x15
+#define AVF_SR_PBA_BLOCK_PTR			0x16
+#define AVF_SR_BOOT_CONFIG_PTR			0x17
+#define AVF_NVM_OEM_VER_OFF			0x83
+#define AVF_SR_NVM_DEV_STARTER_VERSION		0x18
+#define AVF_SR_NVM_WAKE_ON_LAN			0x19
+#define AVF_SR_ALTERNATE_SAN_MAC_ADDRESS_PTR	0x27
+#define AVF_SR_PERMANENT_SAN_MAC_ADDRESS_PTR	0x28
+#define AVF_SR_NVM_MAP_VERSION			0x29
+#define AVF_SR_NVM_IMAGE_VERSION		0x2A
+#define AVF_SR_NVM_STRUCTURE_VERSION		0x2B
+#define AVF_SR_NVM_EETRACK_LO			0x2D
+#define AVF_SR_NVM_EETRACK_HI			0x2E
+#define AVF_SR_VPD_PTR				0x2F
+#define AVF_SR_PXE_SETUP_PTR			0x30
+#define AVF_SR_PXE_CONFIG_CUST_OPTIONS_PTR	0x31
+#define AVF_SR_NVM_ORIGINAL_EETRACK_LO		0x34
+#define AVF_SR_NVM_ORIGINAL_EETRACK_HI		0x35
+#define AVF_SR_SW_ETHERNET_MAC_ADDRESS_PTR	0x37
+#define AVF_SR_POR_REGS_AUTO_LOAD_PTR		0x38
+#define AVF_SR_EMPR_REGS_AUTO_LOAD_PTR		0x3A
+#define AVF_SR_GLOBR_REGS_AUTO_LOAD_PTR	0x3B
+#define AVF_SR_CORER_REGS_AUTO_LOAD_PTR	0x3C
+#define AVF_SR_PHY_ACTIVITY_LIST_PTR		0x3D
+#define AVF_SR_PCIE_ALT_AUTO_LOAD_PTR		0x3E
+#define AVF_SR_SW_CHECKSUM_WORD		0x3F
+#define AVF_SR_1ST_FREE_PROVISION_AREA_PTR	0x40
+#define AVF_SR_4TH_FREE_PROVISION_AREA_PTR	0x42
+#define AVF_SR_3RD_FREE_PROVISION_AREA_PTR	0x44
+#define AVF_SR_2ND_FREE_PROVISION_AREA_PTR	0x46
+#define AVF_SR_EMP_SR_SETTINGS_PTR		0x48
+#define AVF_SR_FEATURE_CONFIGURATION_PTR	0x49
+#define AVF_SR_CONFIGURATION_METADATA_PTR	0x4D
+#define AVF_SR_IMMEDIATE_VALUES_PTR		0x4E
+
+/* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
+#define AVF_SR_VPD_MODULE_MAX_SIZE		1024
+#define AVF_SR_PCIE_ALT_MODULE_MAX_SIZE	1024
+#define AVF_SR_CONTROL_WORD_1_SHIFT		0x06
+#define AVF_SR_CONTROL_WORD_1_MASK	(0x03 << AVF_SR_CONTROL_WORD_1_SHIFT)
+
+/* Shadow RAM related */
+#define AVF_SR_SECTOR_SIZE_IN_WORDS	0x800
+#define AVF_SR_BUF_ALIGNMENT		4096
+#define AVF_SR_WORDS_IN_1KB		512
+/* Checksum should be calculated such that after adding all the words,
+ * including the checksum word itself, the sum should be 0xBABA.
+ */
+#define AVF_SR_SW_CHECKSUM_BASE	0xBABA
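+
+/* Illustrative computation (a sketch, not the shared-code implementation):
+ * the checksum word would be chosen as
+ *	checksum = AVF_SR_SW_CHECKSUM_BASE - (u16)(sum of all other words);
+ * so that adding it back into the 16-bit sum yields 0xBABA.
+ */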
+
+#define AVF_SRRD_SRCTL_ATTEMPTS	100000
+
+/* FCoE Tx context descriptor - Use the avf_tx_context_desc struct */
+
+enum avf_fcoe_tx_ctx_desc_cmd_bits {
+	AVF_FCOE_TX_CTX_DESC_OPCODE_SINGLE_SEND	= 0x00, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS2	= 0x01, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS3	= 0x05, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS2	= 0x02, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS3	= 0x06, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS2	= 0x03, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS3	= 0x07, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DDP_CTX_INVL	= 0x08, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_CTX_INVL	= 0x09, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_RELOFF			= 0x10,
+	AVF_FCOE_TX_CTX_DESC_CLRSEQ			= 0x20,
+	AVF_FCOE_TX_CTX_DESC_DIFENA			= 0x40,
+	AVF_FCOE_TX_CTX_DESC_IL2TAG2			= 0x80
+};
+
+/* FCoE DIF/DIX Context descriptor */
+struct avf_fcoe_difdix_context_desc {
+	__le64 flags_buff0_buff1_ref;
+	__le64 difapp_msk_bias;
+};
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT	0
+#define AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_MASK	(0xFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT)
+
+enum avf_fcoe_difdix_ctx_desc_flags_bits {
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_RSVD				= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGCHK		= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGNOTCHK		= 0x0004,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_OPAQUE			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY		= 0x0008,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPTAG	= 0x0010,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPREFTAG	= 0x0018,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_CNST			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_INC1BLK		= 0x0020,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_APPTAG		= 0x0040,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_RSVD			= 0x0060,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_XSUM			= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_CRC			= 0x0080,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_UNTAG			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_BUF			= 0x0100,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_RSVD			= 0x0200,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_EMBDTAGS		= 0x0300,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_UNTAG			= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_TAG			= 0x0400,
+	/* 1 BIT */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_512B			= 0x0000,
+	/* 1 BIT */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_4K			= 0x0800
+};
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT	12
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_MASK	(0x3FFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT	22
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_MASK	(0x3FFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT	32
+#define AVF_FCOE_DIFDIX_CTX_QW0_REF_MASK	(0xFFFFFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT	0
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MASK	(0xFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT	16
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_MASK	(0xFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT	32
+#define AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_MASK	(0xFFFFFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT)
+
+/* FCoE DIF/DIX Buffers descriptor */
+struct avf_fcoe_difdix_buffers_desc {
+	__le64 buff_addr0;
+	__le64 buff_addr1;
+};
+
+/* FCoE DDP Context descriptor */
+struct avf_fcoe_ddp_context_desc {
+	__le64 rsvd;
+	__le64 type_cmd_foff_lsize;
+};
+
+#define AVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT	0
+#define AVF_FCOE_DDP_CTX_QW1_DTYPE_MASK	(0xFULL << \
+					AVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT)
+
+#define AVF_FCOE_DDP_CTX_QW1_CMD_SHIFT	4
+#define AVF_FCOE_DDP_CTX_QW1_CMD_MASK	(0xFULL << \
+					 AVF_FCOE_DDP_CTX_QW1_CMD_SHIFT)
+
+enum avf_fcoe_ddp_ctx_desc_cmd_bits {
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_512B	= 0x00, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_4K		= 0x01, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_8K		= 0x02, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_16K	= 0x03, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_DIFENA		= 0x04, /* 1 BIT  */
+	AVF_FCOE_DDP_CTX_DESC_LASTSEQH		= 0x08, /* 1 BIT  */
+};
+
+#define AVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT	16
+#define AVF_FCOE_DDP_CTX_QW1_FOFF_MASK	(0x3FFFULL << \
+					 AVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT)
+
+#define AVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT	32
+#define AVF_FCOE_DDP_CTX_QW1_LSIZE_MASK	(0x3FFFULL << \
+					AVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT)
+
+/* FCoE DDP/DWO Queue Context descriptor */
+struct avf_fcoe_queue_context_desc {
+	__le64 dmaindx_fbase;           /* 0:11 DMAINDX, 12:63 FBASE */
+	__le64 flen_tph;                /* 0:12 FLEN, 13:15 TPH */
+};
+
+#define AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT	0
+#define AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_MASK	(0xFFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT	12
+#define AVF_FCOE_QUEUE_CTX_QW0_FBASE_MASK	(0xFFFFFFFFFFFFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT	0
+#define AVF_FCOE_QUEUE_CTX_QW1_FLEN_MASK	(0x1FFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW1_TPH_SHIFT	13
+#define AVF_FCOE_QUEUE_CTX_QW1_TPH_MASK	(0x7ULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_TPH_SHIFT)
+
+enum avf_fcoe_queue_ctx_desc_tph_bits {
+	AVF_FCOE_QUEUE_CTX_DESC_TPHRDESC	= 0x1,
+	AVF_FCOE_QUEUE_CTX_DESC_TPHDATA	= 0x2
+};
+
+#define AVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT	30
+#define AVF_FCOE_QUEUE_CTX_QW1_RECIPE_MASK	(0x3ULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT)
+
+/* FCoE DDP/DWO Filter Context descriptor */
+struct avf_fcoe_filter_context_desc {
+	__le32 param;
+	__le16 seqn;
+
+	/* 48:51(0:3) RSVD, 52:63(4:15) DMAINDX */
+	__le16 rsvd_dmaindx;
+
+	/* 0:7 FLAGS, 8:52 RSVD, 53:63 LANQ */
+	__le64 flags_rsvd_lanq;
+};
+
+#define AVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT	4
+#define AVF_FCOE_FILTER_CTX_QW0_DMAINDX_MASK	(0xFFF << \
+					AVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT)
+
+enum avf_fcoe_filter_ctx_desc_flags_bits {
+	AVF_FCOE_FILTER_CTX_DESC_CTYP_DDP	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_CTYP_DWO	= 0x01,
+	AVF_FCOE_FILTER_CTX_DESC_ENODE_INIT	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_ENODE_RSP	= 0x02,
+	AVF_FCOE_FILTER_CTX_DESC_FC_CLASS2	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_FC_CLASS3	= 0x04
+};
+
+#define AVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT	0
+#define AVF_FCOE_FILTER_CTX_QW1_FLAGS_MASK	(0xFFULL << \
+					AVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT)
+
+#define AVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT     8
+#define AVF_FCOE_FILTER_CTX_QW1_PCTYPE_MASK      (0x3FULL << \
+			AVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT)
+
+#define AVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT     53
+#define AVF_FCOE_FILTER_CTX_QW1_LANQINDX_MASK      (0x7FFULL << \
+			AVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT)
+
+enum avf_switch_element_types {
+	AVF_SWITCH_ELEMENT_TYPE_MAC	= 1,
+	AVF_SWITCH_ELEMENT_TYPE_PF	= 2,
+	AVF_SWITCH_ELEMENT_TYPE_VF	= 3,
+	AVF_SWITCH_ELEMENT_TYPE_EMP	= 4,
+	AVF_SWITCH_ELEMENT_TYPE_BMC	= 6,
+	AVF_SWITCH_ELEMENT_TYPE_PE	= 16,
+	AVF_SWITCH_ELEMENT_TYPE_VEB	= 17,
+	AVF_SWITCH_ELEMENT_TYPE_PA	= 18,
+	AVF_SWITCH_ELEMENT_TYPE_VSI	= 19,
+};
+
+/* Supported EtherType filters */
+enum avf_ether_type_index {
+	AVF_ETHER_TYPE_1588		= 0,
+	AVF_ETHER_TYPE_FIP		= 1,
+	AVF_ETHER_TYPE_OUI_EXTENDED	= 2,
+	AVF_ETHER_TYPE_MAC_CONTROL	= 3,
+	AVF_ETHER_TYPE_LLDP		= 4,
+	AVF_ETHER_TYPE_EVB_PROTOCOL1	= 5,
+	AVF_ETHER_TYPE_EVB_PROTOCOL2	= 6,
+	AVF_ETHER_TYPE_QCN_CNM		= 7,
+	AVF_ETHER_TYPE_8021X		= 8,
+	AVF_ETHER_TYPE_ARP		= 9,
+	AVF_ETHER_TYPE_RSV1		= 10,
+	AVF_ETHER_TYPE_RSV2		= 11,
+};
+
+/* Filter context base size is 1K */
+#define AVF_HASH_FILTER_BASE_SIZE	1024
+/* Supported Hash filter values */
+enum avf_hash_filter_size {
+	AVF_HASH_FILTER_SIZE_1K	= 0,
+	AVF_HASH_FILTER_SIZE_2K	= 1,
+	AVF_HASH_FILTER_SIZE_4K	= 2,
+	AVF_HASH_FILTER_SIZE_8K	= 3,
+	AVF_HASH_FILTER_SIZE_16K	= 4,
+	AVF_HASH_FILTER_SIZE_32K	= 5,
+	AVF_HASH_FILTER_SIZE_64K	= 6,
+	AVF_HASH_FILTER_SIZE_128K	= 7,
+	AVF_HASH_FILTER_SIZE_256K	= 8,
+	AVF_HASH_FILTER_SIZE_512K	= 9,
+	AVF_HASH_FILTER_SIZE_1M	= 10,
+};
+
+/* DMA context base size is 0.5K */
+#define AVF_DMA_CNTX_BASE_SIZE		512
+/* Supported DMA context values */
+enum avf_dma_cntx_size {
+	AVF_DMA_CNTX_SIZE_512		= 0,
+	AVF_DMA_CNTX_SIZE_1K		= 1,
+	AVF_DMA_CNTX_SIZE_2K		= 2,
+	AVF_DMA_CNTX_SIZE_4K		= 3,
+	AVF_DMA_CNTX_SIZE_8K		= 4,
+	AVF_DMA_CNTX_SIZE_16K		= 5,
+	AVF_DMA_CNTX_SIZE_32K		= 6,
+	AVF_DMA_CNTX_SIZE_64K		= 7,
+	AVF_DMA_CNTX_SIZE_128K		= 8,
+	AVF_DMA_CNTX_SIZE_256K		= 9,
+};
+
+/* Supported Hash look up table (LUT) sizes */
+enum avf_hash_lut_size {
+	AVF_HASH_LUT_SIZE_128		= 0,
+	AVF_HASH_LUT_SIZE_512		= 1,
+};
+
+/* Structure to hold a per PF filter control settings */
+struct avf_filter_control_settings {
+	/* number of PE Quad Hash filter buckets */
+	enum avf_hash_filter_size pe_filt_num;
+	/* number of PE Quad Hash contexts */
+	enum avf_dma_cntx_size pe_cntx_num;
+	/* number of FCoE filter buckets */
+	enum avf_hash_filter_size fcoe_filt_num;
+	/* number of FCoE DDP contexts */
+	enum avf_dma_cntx_size fcoe_cntx_num;
+	/* size of the Hash LUT */
+	enum avf_hash_lut_size	hash_lut_size;
+	/* enable FDIR filters for PF and its VFs */
+	bool enable_fdir;
+	/* enable Ethertype filters for PF and its VFs */
+	bool enable_ethtype;
+	/* enable MAC/VLAN filters for PF and its VFs */
+	bool enable_macvlan;
+};
+
+/* Structure to hold device level control filter counts */
+struct avf_control_filter_stats {
+	u16 mac_etype_used;   /* Used perfect match MAC/EtherType filters */
+	u16 etype_used;       /* Used perfect EtherType filters */
+	u16 mac_etype_free;   /* Un-used perfect match MAC/EtherType filters */
+	u16 etype_free;       /* Un-used perfect EtherType filters */
+};
+
+enum avf_reset_type {
+	AVF_RESET_POR		= 0,
+	AVF_RESET_CORER	= 1,
+	AVF_RESET_GLOBR	= 2,
+	AVF_RESET_EMPR		= 3,
+};
+
+/* IEEE 802.1AB LLDP Agent Variables from NVM */
+#define AVF_NVM_LLDP_CFG_PTR		0xD
+struct avf_lldp_variables {
+	u16 length;
+	u16 adminstatus;
+	u16 msgfasttx;
+	u16 msgtxinterval;
+	u16 txparams;
+	u16 timers;
+	u16 crc8;
+};
+
+/* Offsets into Alternate Ram */
+#define AVF_ALT_STRUCT_FIRST_PF_OFFSET		0   /* in dwords */
+#define AVF_ALT_STRUCT_DWORDS_PER_PF		64   /* in dwords */
+#define AVF_ALT_STRUCT_OUTER_VLAN_TAG_OFFSET	0xD  /* in dwords */
+#define AVF_ALT_STRUCT_USER_PRIORITY_OFFSET	0xC  /* in dwords */
+#define AVF_ALT_STRUCT_MIN_BW_OFFSET		0xE  /* in dwords */
+#define AVF_ALT_STRUCT_MAX_BW_OFFSET		0xF  /* in dwords */
+
+/* Alternate Ram Bandwidth Masks */
+#define AVF_ALT_BW_VALUE_MASK		0xFF
+#define AVF_ALT_BW_RELATIVE_MASK	0x40000000
+#define AVF_ALT_BW_VALID_MASK		0x80000000
+
+/* RSS Hash Table Size */
+#define AVF_PFQF_CTL_0_HASHLUTSIZE_512	0x00010000
+
+/* INPUT SET MASK for RSS, flow director, and flexible payload */
+#define AVF_L3_SRC_SHIFT		47
+#define AVF_L3_SRC_MASK		(0x3ULL << AVF_L3_SRC_SHIFT)
+#define AVF_L3_V6_SRC_SHIFT		43
+#define AVF_L3_V6_SRC_MASK		(0xFFULL << AVF_L3_V6_SRC_SHIFT)
+#define AVF_L3_DST_SHIFT		35
+#define AVF_L3_DST_MASK		(0x3ULL << AVF_L3_DST_SHIFT)
+#define AVF_L3_V6_DST_SHIFT		35
+#define AVF_L3_V6_DST_MASK		(0xFFULL << AVF_L3_V6_DST_SHIFT)
+#define AVF_L4_SRC_SHIFT		34
+#define AVF_L4_SRC_MASK		(0x1ULL << AVF_L4_SRC_SHIFT)
+#define AVF_L4_DST_SHIFT		33
+#define AVF_L4_DST_MASK		(0x1ULL << AVF_L4_DST_SHIFT)
+#define AVF_VERIFY_TAG_SHIFT		31
+#define AVF_VERIFY_TAG_MASK		(0x3ULL << AVF_VERIFY_TAG_SHIFT)
+
+#define AVF_FLEX_50_SHIFT		13
+#define AVF_FLEX_50_MASK		(0x1ULL << AVF_FLEX_50_SHIFT)
+#define AVF_FLEX_51_SHIFT		12
+#define AVF_FLEX_51_MASK		(0x1ULL << AVF_FLEX_51_SHIFT)
+#define AVF_FLEX_52_SHIFT		11
+#define AVF_FLEX_52_MASK		(0x1ULL << AVF_FLEX_52_SHIFT)
+#define AVF_FLEX_53_SHIFT		10
+#define AVF_FLEX_53_MASK		(0x1ULL << AVF_FLEX_53_SHIFT)
+#define AVF_FLEX_54_SHIFT		9
+#define AVF_FLEX_54_MASK		(0x1ULL << AVF_FLEX_54_SHIFT)
+#define AVF_FLEX_55_SHIFT		8
+#define AVF_FLEX_55_MASK		(0x1ULL << AVF_FLEX_55_SHIFT)
+#define AVF_FLEX_56_SHIFT		7
+#define AVF_FLEX_56_MASK		(0x1ULL << AVF_FLEX_56_SHIFT)
+#define AVF_FLEX_57_SHIFT		6
+#define AVF_FLEX_57_MASK		(0x1ULL << AVF_FLEX_57_SHIFT)
+
+/* Version format for Dynamic Device Personalization (DDP) */
+struct avf_ddp_version {
+	u8 major;
+	u8 minor;
+	u8 update;
+	u8 draft;
+};
+
+#define AVF_DDP_NAME_SIZE	32
+
+/* Package header */
+struct avf_package_header {
+	struct avf_ddp_version version;
+	u32 segment_count;
+	u32 segment_offset[1];
+};
+
+/* Generic segment header */
+struct avf_generic_seg_header {
+#define SEGMENT_TYPE_METADATA	0x00000001
+#define SEGMENT_TYPE_NOTES	0x00000002
+#define SEGMENT_TYPE_AVF	0x00000011
+#define SEGMENT_TYPE_X722	0x00000012
+	u32 type;
+	struct avf_ddp_version version;
+	u32 size;
+	char name[AVF_DDP_NAME_SIZE];
+};
+
+struct avf_metadata_segment {
+	struct avf_generic_seg_header header;
+	struct avf_ddp_version version;
+#define AVF_DDP_TRACKID_RDONLY		0
+#define AVF_DDP_TRACKID_INVALID	0xFFFFFFFF
+	u32 track_id;
+	char name[AVF_DDP_NAME_SIZE];
+};
+
+struct avf_device_id_entry {
+	u32 vendor_dev_id;
+	u32 sub_vendor_dev_id;
+};
+
+struct avf_profile_segment {
+	struct avf_generic_seg_header header;
+	struct avf_ddp_version version;
+	char name[AVF_DDP_NAME_SIZE];
+	u32 device_table_count;
+	struct avf_device_id_entry device_table[1];
+};
+
+struct avf_section_table {
+	u32 section_count;
+	u32 section_offset[1];
+};
+
+struct avf_profile_section_header {
+	u16 tbl_size;
+	u16 data_end;
+	struct {
+#define SECTION_TYPE_INFO	0x00000010
+#define SECTION_TYPE_MMIO	0x00000800
+#define SECTION_TYPE_RB_MMIO	0x00001800
+#define SECTION_TYPE_AQ		0x00000801
+#define SECTION_TYPE_RB_AQ	0x00001801
+#define SECTION_TYPE_NOTE	0x80000000
+#define SECTION_TYPE_NAME	0x80000001
+#define SECTION_TYPE_PROTO	0x80000002
+#define SECTION_TYPE_PCTYPE	0x80000003
+#define SECTION_TYPE_PTYPE	0x80000004
+		u32 type;
+		u32 offset;
+		u32 size;
+	} section;
+};
+
+struct avf_profile_tlv_section_record {
+	u8 rtype;
+	u8 type;
+	u16 len;
+	u8 data[12];
+};
+
+/* Generic AQ section in profile */
+struct avf_profile_aq_section {
+	u16 opcode;
+	u16 flags;
+	u8  param[16];
+	u16 datalen;
+	u8  data[1];
+};
+
+struct avf_profile_info {
+	u32 track_id;
+	struct avf_ddp_version version;
+	u8 op;
+#define AVF_DDP_ADD_TRACKID		0x01
+#define AVF_DDP_REMOVE_TRACKID	0x02
+	u8 reserved[7];
+	u8 name[AVF_DDP_NAME_SIZE];
+};
+#endif /* _AVF_TYPE_H_ */
diff --git a/drivers/net/avf/base/virtchnl.h b/drivers/net/avf/base/virtchnl.h
new file mode 100644
index 0000000..7524e09
--- /dev/null
+++ b/drivers/net/avf/base/virtchnl.h
@@ -0,0 +1,786 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _VIRTCHNL_H_
+#define _VIRTCHNL_H_
+
+/* Description:
+ * This header file describes the VF-PF communication protocol used
+ * by the drivers for all devices starting from our 40G product line
+ *
+ * Admin queue buffer usage:
+ * desc->opcode is always aqc_opc_send_msg_to_pf
+ * flags, retval, datalen, and data addr are all used normally.
+ * The Firmware copies the cookie fields when sending messages between the
+ * PF and VF, but uses all other fields internally. Due to this limitation,
+ * we must send all messages as "indirect", i.e. using an external buffer.
+ *
+ * All the VSI indexes are relative to the VF. Each VF can have maximum of
+ * three VSIs. All the queue indexes are relative to the VSI.  Each VF can
+ * have a maximum of sixteen queues for all of its VSIs.
+ *
+ * The PF is required to return a status code in v_retval for all messages
+ * except RESET_VF, which does not require any response. The return value
+ * is of status_code type, defined in the shared type.h.
+ *
+ * In general, VF driver initialization should roughly follow the order of
+ * these opcodes. The VF driver must first validate the API version of the
+ * PF driver, then request a reset, then get resources, then configure
+ * queues and interrupts. After these operations are complete, the VF
+ * driver may start its queues, optionally add MAC and VLAN filters, and
+ * process traffic.
+ */
+
+/* START GENERIC DEFINES
+ * Need to ensure the following enums and defines hold the same meaning and
+ * value in current and future projects
+ */
+
+/* Error Codes */
+enum virtchnl_status_code {
+	VIRTCHNL_STATUS_SUCCESS				= 0,
+	VIRTCHNL_ERR_PARAM				= -5,
+	VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH		= -38,
+	VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR		= -39,
+	VIRTCHNL_STATUS_ERR_INVALID_VF_ID		= -40,
+	VIRTCHNL_STATUS_NOT_SUPPORTED			= -64,
+};
+
+#define VIRTCHNL_LINK_SPEED_100MB_SHIFT		0x1
+#define VIRTCHNL_LINK_SPEED_1000MB_SHIFT	0x2
+#define VIRTCHNL_LINK_SPEED_10GB_SHIFT		0x3
+#define VIRTCHNL_LINK_SPEED_40GB_SHIFT		0x4
+#define VIRTCHNL_LINK_SPEED_20GB_SHIFT		0x5
+#define VIRTCHNL_LINK_SPEED_25GB_SHIFT		0x6
+
+enum virtchnl_link_speed {
+	VIRTCHNL_LINK_SPEED_UNKNOWN	= 0,
+	VIRTCHNL_LINK_SPEED_100MB	= BIT(VIRTCHNL_LINK_SPEED_100MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_1GB		= BIT(VIRTCHNL_LINK_SPEED_1000MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_10GB	= BIT(VIRTCHNL_LINK_SPEED_10GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_40GB	= BIT(VIRTCHNL_LINK_SPEED_40GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_20GB	= BIT(VIRTCHNL_LINK_SPEED_20GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_25GB	= BIT(VIRTCHNL_LINK_SPEED_25GB_SHIFT),
+};
+
+/* for hsplit_0 field of Rx HMC context */
+/* deprecated with AVF 1.0 */
+enum virtchnl_rx_hsplit {
+	VIRTCHNL_RX_HSPLIT_NO_SPLIT      = 0,
+	VIRTCHNL_RX_HSPLIT_SPLIT_L2      = 1,
+	VIRTCHNL_RX_HSPLIT_SPLIT_IP      = 2,
+	VIRTCHNL_RX_HSPLIT_SPLIT_TCP_UDP = 4,
+	VIRTCHNL_RX_HSPLIT_SPLIT_SCTP    = 8,
+};
+
+#define VIRTCHNL_ETH_LENGTH_OF_ADDRESS	6
+/* END GENERIC DEFINES */
+
+/* Opcodes for VF-PF communication. These are placed in the v_opcode field
+ * of the virtchnl_msg structure.
+ */
+enum virtchnl_ops {
+/* The PF sends status change events to VFs using
+ * the VIRTCHNL_OP_EVENT opcode.
+ * VFs send requests to the PF using the other ops.
+ * Use of "advanced opcode" features must be negotiated as part of capabilities
+ * exchange and is not considered part of the base mode feature set.
+ */
+	VIRTCHNL_OP_UNKNOWN = 0,
+	VIRTCHNL_OP_VERSION = 1, /* must ALWAYS be 1 */
+	VIRTCHNL_OP_RESET_VF = 2,
+	VIRTCHNL_OP_GET_VF_RESOURCES = 3,
+	VIRTCHNL_OP_CONFIG_TX_QUEUE = 4,
+	VIRTCHNL_OP_CONFIG_RX_QUEUE = 5,
+	VIRTCHNL_OP_CONFIG_VSI_QUEUES = 6,
+	VIRTCHNL_OP_CONFIG_IRQ_MAP = 7,
+	VIRTCHNL_OP_ENABLE_QUEUES = 8,
+	VIRTCHNL_OP_DISABLE_QUEUES = 9,
+	VIRTCHNL_OP_ADD_ETH_ADDR = 10,
+	VIRTCHNL_OP_DEL_ETH_ADDR = 11,
+	VIRTCHNL_OP_ADD_VLAN = 12,
+	VIRTCHNL_OP_DEL_VLAN = 13,
+	VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE = 14,
+	VIRTCHNL_OP_GET_STATS = 15,
+	VIRTCHNL_OP_RSVD = 16,
+	VIRTCHNL_OP_EVENT = 17, /* must ALWAYS be 17 */
+#ifdef VIRTCHNL_SOL_VF_SUPPORT
+	VIRTCHNL_OP_GET_ADDNL_SOL_CONFIG = 19,
+#endif
+#ifdef VIRTCHNL_IWARP
+	VIRTCHNL_OP_IWARP = 20, /* advanced opcode */
+	VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP = 21, /* advanced opcode */
+	VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP = 22, /* advanced opcode */
+#endif
+	VIRTCHNL_OP_CONFIG_RSS_KEY = 23,
+	VIRTCHNL_OP_CONFIG_RSS_LUT = 24,
+	VIRTCHNL_OP_GET_RSS_HENA_CAPS = 25,
+	VIRTCHNL_OP_SET_RSS_HENA = 26,
+	VIRTCHNL_OP_ENABLE_VLAN_STRIPPING = 27,
+	VIRTCHNL_OP_DISABLE_VLAN_STRIPPING = 28,
+	VIRTCHNL_OP_REQUEST_QUEUES = 29,
+
+};
+
+/* This macro is used to generate a compilation error if a structure
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure is not of the correct size, otherwise it creates an enum that is
+ * never used.
+ */
+#define VIRTCHNL_CHECK_STRUCT_LEN(n, X) enum virtchnl_static_assert_enum_##X \
+	{virtchnl_static_assert_##X = (n) / ((sizeof(struct X) == (n)) ? 1 : 0)}
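
As an aside for readers (illustrative only, not part of the patch): because the enum value is a constant expression, any size mismatch turns into a division by zero at compile time. A self-contained sketch of the same trick, using hypothetical names:

#include <stdint.h>

#define CHECK_STRUCT_LEN(n, X) enum static_assert_enum_##X \
	{ static_assert_##X = (n) / ((sizeof(struct X) == (n)) ? 1 : 0) }

struct example_msg {
	uint8_t  pad[8];
	uint32_t opcode;
	uint32_t retval;
	uint32_t vfid;
};

CHECK_STRUCT_LEN(20, example_msg);   /* compiles: the struct is 20 bytes */
/* CHECK_STRUCT_LEN(24, example_msg);   would fail: division by zero */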
+
+/* Virtual channel message descriptor. This overlays the admin queue
+ * descriptor. All other data is passed in external buffers.
+ */
+
+struct virtchnl_msg {
+	u8 pad[8];			 /* AQ flags/opcode/len/retval fields */
+	enum virtchnl_ops v_opcode; /* avoid confusion with desc->opcode */
+	enum virtchnl_status_code v_retval;  /* ditto for desc->retval */
+	u32 vfid;			 /* used by PF when sending to VF */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg);
+
+/* Message descriptions and data structures.*/
+
+/* VIRTCHNL_OP_VERSION
+ * VF posts its version number to the PF. PF responds with its version number
+ * in the same format, along with a return code.
+ * Reply from PF has its major/minor versions also in param0 and param1.
+ * If there is a major version mismatch, then the VF cannot operate.
+ * If there is a minor version mismatch, then the VF can operate but should
+ * add a warning to the system log.
+ *
+ * This enum element MUST always be specified as == 1, regardless of other
+ * changes in the API. The PF must always respond to this message without
+ * error regardless of version mismatch.
+ */
+#define VIRTCHNL_VERSION_MAJOR		1
+#define VIRTCHNL_VERSION_MINOR		1
+#define VIRTCHNL_VERSION_MINOR_NO_VF_CAPS	0
+
+struct virtchnl_version_info {
+	u32 major;
+	u32 minor;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_version_info);
+
+#define VF_IS_V10(_v) (((_v)->major == 1) && ((_v)->minor == 0))
+#define VF_IS_V11(_ver) (((_ver)->major == 1) && ((_ver)->minor == 1))
+
+/* VIRTCHNL_OP_RESET_VF
+ * VF sends this request to PF with no parameters
+ * PF does NOT respond! VF driver must delay then poll VFGEN_RSTAT register
+ * until reset completion is indicated. The admin queue must be reinitialized
+ * after this operation.
+ *
+ * When reset is complete, PF must ensure that all queues in all VSIs associated
+ * with the VF are stopped, all queue configurations in the HMC are set to 0,
+ * and all MAC and VLAN filters (except the default MAC address) on all VSIs
+ * are cleared.
+ */
+
+/* VSI types that use VIRTCHNL interface for VF-PF communication. VSI_SRIOV
+ * vsi_type should always be 6 for backward compatibility. Add other fields
+ * as needed.
+ */
+enum virtchnl_vsi_type {
+	VIRTCHNL_VSI_TYPE_INVALID = 0,
+	VIRTCHNL_VSI_SRIOV = 6,
+};
+
+/* VIRTCHNL_OP_GET_VF_RESOURCES
+ * Version 1.0 VF sends this request to PF with no parameters
+ * Version 1.1 VF sends this request to PF with u32 bitmap of its capabilities
+ * PF responds with an indirect message containing
+ * virtchnl_vf_resource and one or more
+ * virtchnl_vsi_resource structures.
+ */
+
+struct virtchnl_vsi_resource {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	enum virtchnl_vsi_type vsi_type;
+	u16 qset_handle;
+	u8 default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
+
+/* VF offload flags
+ * VIRTCHNL_VF_OFFLOAD_L2 flag is inclusive of base mode L2 offloads including
+ * TX/RX Checksum offloading and TSO for non-tunnelled packets.
+ */
+#define VIRTCHNL_VF_OFFLOAD_L2			0x00000001
+#define VIRTCHNL_VF_OFFLOAD_IWARP		0x00000002
+#define VIRTCHNL_VF_OFFLOAD_RSVD		0x00000004
+#define VIRTCHNL_VF_OFFLOAD_RSS_AQ		0x00000008
+#define VIRTCHNL_VF_OFFLOAD_RSS_REG		0x00000010
+#define VIRTCHNL_VF_OFFLOAD_WB_ON_ITR		0x00000020
+#define VIRTCHNL_VF_OFFLOAD_REQ_QUEUES		0x00000040
+#define VIRTCHNL_VF_OFFLOAD_VLAN		0x00010000
+#define VIRTCHNL_VF_OFFLOAD_RX_POLLING		0x00020000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2	0x00040000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PF		0X00080000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP		0X00100000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM		0X00200000
+#define VIRTCHNL_VF_OFFLOAD_RX_ENCAP_CSUM	0X00400000
+
+#define VF_BASE_MODE_OFFLOADS (VIRTCHNL_VF_OFFLOAD_L2 | \
+			       VIRTCHNL_VF_OFFLOAD_VLAN | \
+			       VIRTCHNL_VF_OFFLOAD_RSS_PF)
+
+struct virtchnl_vf_resource {
+	u16 num_vsis;
+	u16 num_queue_pairs;
+	u16 max_vectors;
+	u16 max_mtu;
+
+	u32 vf_offload_flags;
+	u32 rss_key_size;
+	u32 rss_lut_size;
+
+	struct virtchnl_vsi_resource vsi_res[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(36, virtchnl_vf_resource);
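
For illustration (not part of the patch), a version 1.1 VF could request its capabilities with a u32 bitmap such as the one below; the PMD added later in this series asks for a similar set through AVF_BASIC_OFFLOAD_CAPS.

	/* Hypothetical capability request sent with VIRTCHNL_OP_GET_VF_RESOURCES;
	 * the message payload is just this u32.
	 */
	u32 caps = VIRTCHNL_VF_OFFLOAD_L2 |
		   VIRTCHNL_VF_OFFLOAD_VLAN |
		   VIRTCHNL_VF_OFFLOAD_RSS_PF |
		   VIRTCHNL_VF_OFFLOAD_RX_POLLING;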
+
+/* VIRTCHNL_OP_CONFIG_TX_QUEUE
+ * VF sends this message to set up parameters for one TX queue.
+ * External data buffer contains one instance of virtchnl_txq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Tx queue config info */
+struct virtchnl_txq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u16 ring_len;		/* number of descriptors, multiple of 8 */
+	u16 headwb_enabled; /* deprecated with AVF 1.0 */
+	u64 dma_ring_addr;
+	u64 dma_headwb_addr; /* deprecated with AVF 1.0 */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_txq_info);
+
+/* VIRTCHNL_OP_CONFIG_RX_QUEUE
+ * VF sends this message to set up parameters for one RX queue.
+ * External data buffer contains one instance of virtchnl_rxq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Rx queue config info */
+struct virtchnl_rxq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u32 ring_len;		/* number of descriptors, multiple of 32 */
+	u16 hdr_size;
+	u16 splithdr_enabled; /* deprecated with AVF 1.0 */
+	u32 databuffer_size;
+	u32 max_pkt_size;
+	u32 pad1;
+	u64 dma_ring_addr;
+	enum virtchnl_rx_hsplit rx_split_pos; /* deprecated with AVF 1.0 */
+	u32 pad2;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_rxq_info);
+
+/* VIRTCHNL_OP_CONFIG_VSI_QUEUES
+ * VF sends this message to set parameters for all active TX and RX queues
+ * associated with the specified VSI.
+ * PF configures queues and returns status.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the VSI, an error is returned and no queues are configured.
+ */
+struct virtchnl_queue_pair_info {
+	/* NOTE: vsi_id and queue_id should be identical for both queues. */
+	struct virtchnl_txq_info txq;
+	struct virtchnl_rxq_info rxq;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(64, virtchnl_queue_pair_info);
+
+struct virtchnl_vsi_queue_config_info {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	u32 pad;
+	struct virtchnl_queue_pair_info qpair[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_vsi_queue_config_info);
+
+/* VIRTCHNL_OP_REQUEST_QUEUES
+ * VF sends this message to request the PF to allocate additional queues to
+ * this VF.  Each VF gets a guaranteed number of queues on init but asking for
+ * additional queues must be negotiated.  This is a best effort request as it
+ * is possible the PF does not have enough queues left to support the request.
+ * If the PF cannot support the number requested it will respond with the
+ * maximum number it is able to support; otherwise it will respond with the
+ * number requested.
+ */
+
+/* VF resource request */
+struct virtchnl_vf_res_request {
+	u16 num_queue_pairs;
+};
+
+/* VIRTCHNL_OP_CONFIG_IRQ_MAP
+ * VF uses this message to map vectors to queues.
+ * The rxq_map and txq_map fields are bitmaps used to indicate which queues
+ * are to be associated with the specified vector.
+ * The "other" causes are always mapped to vector 0.
+ * PF configures interrupt mapping and returns status.
+ */
+struct virtchnl_vector_map {
+	u16 vsi_id;
+	u16 vector_id;
+	u16 rxq_map;
+	u16 txq_map;
+	u16 rxitr_idx;
+	u16 txitr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_vector_map);
+
+struct virtchnl_irq_map_info {
+	u16 num_vectors;
+	struct virtchnl_vector_map vecmap[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(14, virtchnl_irq_map_info);
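
A worked example of the bitmap encoding (illustrative, placeholder values): to attach a VSI's Rx queues 0-1 and Tx queue 0 to MSI-X vector 1, a VF would fill one map entry like this.

	struct virtchnl_vector_map vecmap = {
		.vsi_id    = 3,        /* placeholder VSI id */
		.vector_id = 1,
		.rxq_map   = 0x0003,   /* bit i selects Rx queue i: queues 0 and 1 */
		.txq_map   = 0x0001,   /* Tx queue 0 */
		.rxitr_idx = 0,
		.txitr_idx = 0,
	};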
+
+/* VIRTCHNL_OP_ENABLE_QUEUES
+ * VIRTCHNL_OP_DISABLE_QUEUES
+ * VF sends these messages to enable or disable TX/RX queue pairs.
+ * The queues fields are bitmaps indicating which queues to act upon.
+ * (Currently, we only support 16 queues per VF, but we make the field
+ * u32 to allow for expansion.)
+ * PF performs requested action and returns status.
+ */
+struct virtchnl_queue_select {
+	u16 vsi_id;
+	u16 pad;
+	u32 rx_queues;
+	u32 tx_queues;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_select);
+
+/* VIRTCHNL_OP_ADD_ETH_ADDR
+ * VF sends this message in order to add one or more unicast or multicast
+ * address filters for the specified VSI.
+ * PF adds the filters and returns status.
+ */
+
+/* VIRTCHNL_OP_DEL_ETH_ADDR
+ * VF sends this message in order to remove one or more unicast or multicast
+ * filters for the specified VSI.
+ * PF removes the filters and returns status.
+ */
+
+struct virtchnl_ether_addr {
+	u8 addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+	u8 pad[2];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_ether_addr);
+
+struct virtchnl_ether_addr_list {
+	u16 vsi_id;
+	u16 num_elements;
+	struct virtchnl_ether_addr list[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_ether_addr_list);
+
+#ifdef VIRTCHNL_SOL_VF_SUPPORT
+/* VIRTCHNL_OP_GET_ADDNL_SOL_CONFIG
+ * VF sends this message to get the default MTU and list of additional ethernet
+ * addresses it is allowed to use.
+ * PF responds with an indirect message containing
+ * virtchnl_addnl_solaris_config with zero or more
+ * virtchnl_ether_addr structures.
+ *
+ * It is expected that this operation will only ever be needed for Solaris VFs
+ * running under a Solaris PF.
+ */
+struct virtchnl_addnl_solaris_config {
+	u16 default_mtu;
+	struct virtchnl_ether_addr_list al;
+};
+
+#endif
+/* VIRTCHNL_OP_ADD_VLAN
+ * VF sends this message to add one or more VLAN tag filters for receives.
+ * PF adds the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+/* VIRTCHNL_OP_DEL_VLAN
+ * VF sends this message to remove one or more VLAN tag filters for receives.
+ * PF removes the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+struct virtchnl_vlan_filter_list {
+	u16 vsi_id;
+	u16 num_elements;
+	u16 vlan_id[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vlan_filter_list);
+
+/* VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE
+ * VF sends VSI id and flags.
+ * PF returns status code in retval.
+ * Note: we assume that broadcast accept mode is always enabled.
+ */
+struct virtchnl_promisc_info {
+	u16 vsi_id;
+	u16 flags;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_promisc_info);
+
+#define FLAG_VF_UNICAST_PROMISC	0x00000001
+#define FLAG_VF_MULTICAST_PROMISC	0x00000002
+
+/* VIRTCHNL_OP_GET_STATS
+ * VF sends this message to request stats for the selected VSI. VF uses
+ * the virtchnl_queue_select struct to specify the VSI. The queue_id
+ * field is ignored by the PF.
+ *
+ * PF replies with struct virtchnl_eth_stats in an external buffer.
+ */
+struct virtchnl_eth_stats {
+	u64 rx_bytes;			/* received bytes */
+	u64 rx_unicast;			/* received unicast pkts */
+	u64 rx_multicast;		/* received multicast pkts */
+	u64 rx_broadcast;		/* received broadcast pkts */
+	u64 rx_discards;
+	u64 rx_unknown_protocol;
+	u64 tx_bytes;			/* transmitted bytes*/
+	u64 tx_unicast;			/* transmitted unicast pkts */
+	u64 tx_multicast;		/* transmitted multicast pkts */
+	u64 tx_broadcast;		/* transmitted broadcast pkts */
+	u64 tx_discards;
+	u64 tx_errors;
+};
+
+/* VIRTCHNL_OP_CONFIG_RSS_KEY
+ * VIRTCHNL_OP_CONFIG_RSS_LUT
+ * VF sends these messages to configure RSS. Only supported if both PF
+ * and VF drivers set the VIRTCHNL_VF_OFFLOAD_RSS_PF bit during
+ * configuration negotiation. If this is the case, then the RSS fields in
+ * the VF resource struct are valid.
+ * Both the key and LUT are initialized to 0 by the PF, meaning that
+ * RSS is effectively disabled until set up by the VF.
+ */
+struct virtchnl_rss_key {
+	u16 vsi_id;
+	u16 key_len;
+	u8 key[1];         /* RSS hash key, packed bytes */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key);
+
+struct virtchnl_rss_lut {
+	u16 vsi_id;
+	u16 lut_entries;
+	u8 lut[1];        /* RSS lookup table */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
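
Since key[] and lut[] are declared with a single element, a sender sizes the message as the struct plus (len - 1) extra bytes, which matches the "valid_len += ... - 1" checks in virtchnl_vc_validate_vf_msg() further down. A rough sketch, assuming a 52-byte key (the size used by i40e-class hardware) and placeholder names:

	u16 key_len = 52;   /* assumption: i40e-class RSS key size */
	u16 msg_len = sizeof(struct virtchnl_rss_key) + key_len - 1;
	struct virtchnl_rss_key *vrk = calloc(1, msg_len);

	vrk->vsi_id  = vsi_id;                  /* placeholder */
	vrk->key_len = key_len;
	memcpy(vrk->key, key_bytes, key_len);   /* placeholder key bytes */
	/* send msg_len bytes with VIRTCHNL_OP_CONFIG_RSS_KEY */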
+
+/* VIRTCHNL_OP_GET_RSS_HENA_CAPS
+ * VIRTCHNL_OP_SET_RSS_HENA
+ * VF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the PF sets these to all possible traffic types that the
+ * hardware supports. The VF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ */
+struct virtchnl_rss_hena {
+	u64 hena;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_rss_hena);
+
+/* VIRTCHNL_OP_EVENT
+ * PF sends this message to inform the VF driver of events that may affect it.
+ * No direct response is expected from the VF, though it may generate other
+ * messages in response to this one.
+ */
+enum virtchnl_event_codes {
+	VIRTCHNL_EVENT_UNKNOWN = 0,
+	VIRTCHNL_EVENT_LINK_CHANGE,
+	VIRTCHNL_EVENT_RESET_IMPENDING,
+	VIRTCHNL_EVENT_PF_DRIVER_CLOSE,
+};
+
+#define PF_EVENT_SEVERITY_INFO		0
+#define PF_EVENT_SEVERITY_ATTENTION	1
+#define PF_EVENT_SEVERITY_ACTION_REQUIRED	2
+#define PF_EVENT_SEVERITY_CERTAIN_DOOM	255
+
+struct virtchnl_pf_event {
+	enum virtchnl_event_codes event;
+	union {
+		struct {
+			enum virtchnl_link_speed link_speed;
+			bool link_status;
+		} link_event;
+	} event_data;
+
+	int severity;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_pf_event);
+
+#ifdef VIRTCHNL_IWARP
+
+/* VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP
+ * VF uses this message to request PF to map IWARP vectors to IWARP queues.
+ * The request for this originates from the VF IWARP driver through
+ * a client interface between VF LAN and VF IWARP driver.
+ * A vector could have an AEQ and CEQ attached to it although
+ * there is a single AEQ per VF IWARP instance in which case
+ * most vectors will have an INVALID_IDX for aeq and valid idx for ceq.
+ * There will never be a case where there will be multiple CEQs attached
+ * to a single vector.
+ * PF configures interrupt mapping and returns status.
+ */
+
+/* HW does not define a type value for AEQ; only for RX/TX and CEQ.
+ * In order for us to keep the interface simple, SW will define a
+ * unique type value for AEQ.
+ */
+#define QUEUE_TYPE_PE_AEQ  0x80
+#define QUEUE_INVALID_IDX  0xFFFF
+
+struct virtchnl_iwarp_qv_info {
+	u32 v_idx; /* msix_vector */
+	u16 ceq_idx;
+	u16 aeq_idx;
+	u8 itr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_iwarp_qv_info);
+
+struct virtchnl_iwarp_qvlist_info {
+	u32 num_vectors;
+	struct virtchnl_iwarp_qv_info qv_info[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_iwarp_qvlist_info);
+
+#endif
+
+/* VF reset states - these are written into the RSTAT register:
+ * VFGEN_RSTAT on the VF
+ * When the PF initiates a reset, it writes 0
+ * When the reset is complete, it writes 1
+ * When the PF detects that the VF has recovered, it writes 2
+ * VF checks this register periodically to determine if a reset has occurred,
+ * then polls it to know when the reset is complete.
+ * If either the PF or VF reads the register while the hardware
+ * is in a reset state, it will return DEADBEEF, which, when masked,
+ * will result in 3.
+ */
+enum virtchnl_vfr_states {
+	VIRTCHNL_VFR_INPROGRESS = 0,
+	VIRTCHNL_VFR_COMPLETED,
+	VIRTCHNL_VFR_VFACTIVE,
+};
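
A worked note (assuming the usual 2-bit VFR_STATE field): 0xDEADBEEF masked down to that field yields 3, a value outside the enum above, so a polling driver such as avf_check_vf_reset_done() later in this series simply keeps waiting until it reads COMPLETED or VFACTIVE.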
+
+/**
+ * virtchnl_vc_validate_vf_msg
+ * @ver: Virtchnl version info
+ * @v_opcode: Opcode for the message
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * validate msg format against struct for each opcode
+ */
+static inline int
+virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
+			    u8 *msg, u16 msglen)
+{
+	bool err_msg_format = false;
+	int valid_len = 0;
+
+	/* Validate message length. */
+	switch (v_opcode) {
+	case VIRTCHNL_OP_VERSION:
+		valid_len = sizeof(struct virtchnl_version_info);
+		break;
+	case VIRTCHNL_OP_RESET_VF:
+		break;
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		if (VF_IS_V11(ver))
+			valid_len = sizeof(u32);
+		break;
+	case VIRTCHNL_OP_CONFIG_TX_QUEUE:
+		valid_len = sizeof(struct virtchnl_txq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_RX_QUEUE:
+		valid_len = sizeof(struct virtchnl_rxq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+		valid_len = sizeof(struct virtchnl_vsi_queue_config_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_vsi_queue_config_info *vqc =
+			    (struct virtchnl_vsi_queue_config_info *)msg;
+			valid_len += (vqc->num_queue_pairs *
+				      sizeof(struct
+					     virtchnl_queue_pair_info));
+			if (vqc->num_queue_pairs == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+		valid_len = sizeof(struct virtchnl_irq_map_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_irq_map_info *vimi =
+			    (struct virtchnl_irq_map_info *)msg;
+			valid_len += (vimi->num_vectors *
+				      sizeof(struct virtchnl_vector_map));
+			if (vimi->num_vectors == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ENABLE_QUEUES:
+	case VIRTCHNL_OP_DISABLE_QUEUES:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+	case VIRTCHNL_OP_ADD_ETH_ADDR:
+	case VIRTCHNL_OP_DEL_ETH_ADDR:
+		valid_len = sizeof(struct virtchnl_ether_addr_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_ether_addr_list *veal =
+			    (struct virtchnl_ether_addr_list *)msg;
+			valid_len += veal->num_elements *
+			    sizeof(struct virtchnl_ether_addr);
+			if (veal->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ADD_VLAN:
+	case VIRTCHNL_OP_DEL_VLAN:
+		valid_len = sizeof(struct virtchnl_vlan_filter_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_vlan_filter_list *vfl =
+			    (struct virtchnl_vlan_filter_list *)msg;
+			valid_len += vfl->num_elements * sizeof(u16);
+			if (vfl->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+		valid_len = sizeof(struct virtchnl_promisc_info);
+		break;
+	case VIRTCHNL_OP_GET_STATS:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+#ifdef VIRTCHNL_IWARP
+	case VIRTCHNL_OP_IWARP:
+		/* These messages are opaque to us and will be validated in
+		 * the RDMA client code. We just need to check for nonzero
+		 * length. The firmware will enforce max length restrictions.
+		 */
+		if (msglen)
+			valid_len = msglen;
+		else
+			err_msg_format = true;
+		break;
+	case VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP:
+		break;
+	case VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP:
+		valid_len = sizeof(struct virtchnl_iwarp_qvlist_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_iwarp_qvlist_info *qv =
+				(struct virtchnl_iwarp_qvlist_info *)msg;
+			if (qv->num_vectors == 0) {
+				err_msg_format = true;
+				break;
+			}
+			valid_len += ((qv->num_vectors - 1) *
+				sizeof(struct virtchnl_iwarp_qv_info));
+		}
+		break;
+#endif
+	case VIRTCHNL_OP_CONFIG_RSS_KEY:
+		valid_len = sizeof(struct virtchnl_rss_key);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_key *vrk =
+				(struct virtchnl_rss_key *)msg;
+			valid_len += vrk->key_len - 1;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_LUT:
+		valid_len = sizeof(struct virtchnl_rss_lut);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_lut *vrl =
+				(struct virtchnl_rss_lut *)msg;
+			valid_len += vrl->lut_entries - 1;
+		}
+		break;
+	case VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+		break;
+	case VIRTCHNL_OP_SET_RSS_HENA:
+		valid_len = sizeof(struct virtchnl_rss_hena);
+		break;
+	case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+	case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+		break;
+	case VIRTCHNL_OP_REQUEST_QUEUES:
+		valid_len = sizeof(struct virtchnl_vf_res_request);
+		break;
+	/* These are always errors coming from the VF. */
+	case VIRTCHNL_OP_EVENT:
+	case VIRTCHNL_OP_UNKNOWN:
+	default:
+		return VIRTCHNL_ERR_PARAM;
+	}
+	/* few more checks */
+	if ((valid_len != msglen) || (err_msg_format))
+		return VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH;
+
+	return 0;
+}
+#endif /* _VIRTCHNL_H_ */
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v2 02/14] net/avf: initialization of avf PMD
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 01/14] net/avf/base: add base code for " Jingjing Wu
@ 2017-11-24  6:33   ` Jingjing Wu
  2017-12-04 19:52     ` Ferruh Yigit
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 03/14] net/avf: enable queue and device Jingjing Wu
                     ` (14 subsequent siblings)
  16 siblings, 1 reply; 151+ messages in thread
From: Jingjing Wu @ 2017-11-24  6:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 config/common_base                      |   5 +
 drivers/net/Makefile                    |   1 +
 drivers/net/avf/Makefile                |  58 ++++
 drivers/net/avf/avf.h                   | 214 ++++++++++++++
 drivers/net/avf/avf_ethdev.c            | 482 ++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_log.h               |   2 +-
 drivers/net/avf/avf_vchnl.c             | 336 ++++++++++++++++++++++
 drivers/net/avf/rte_pmd_avf_version.map |   4 +
 mk/rte.app.mk                           |   1 +
 9 files changed, 1102 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/avf/Makefile
 create mode 100644 drivers/net/avf/avf.h
 create mode 100644 drivers/net/avf/avf_ethdev.c
 create mode 100644 drivers/net/avf/avf_vchnl.c
 create mode 100644 drivers/net/avf/rte_pmd_avf_version.map

diff --git a/config/common_base b/config/common_base
index e74febe..ce4d9bb 100644
--- a/config/common_base
+++ b/config/common_base
@@ -226,6 +226,11 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
 CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 
 #
+# Compile burst-oriented AVF PMD driver
+#
+CONFIG_RTE_LIBRTE_AVF_PMD=n
+
+#
 # Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD
 #
 CONFIG_RTE_LIBRTE_MLX4_PMD=n
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index ef09b4e..688b8ee 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -37,6 +37,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD),d)
 endif
 
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += af_packet
+DIRS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf
 DIRS-$(CONFIG_RTE_LIBRTE_ARK_PMD) += ark
 DIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp
 DIRS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD) += bnx2x
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
new file mode 100644
index 0000000..40d0a0f
--- /dev/null
+++ b/drivers/net/avf/Makefile
@@ -0,0 +1,58 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2017 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_avf.a
+
+CFLAGS += -O3
+
+EXPORT_MAP := rte_pmd_avf_version.map
+
+LIBABIVER := 1
+
+OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_adminq.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
new file mode 100644
index 0000000..3d3e0dc
--- /dev/null
+++ b/drivers/net/avf/avf.h
@@ -0,0 +1,214 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AVF_ETHDEV_H_
+#define _AVF_ETHDEV_H_
+
+#include <rte_kvargs.h>
+
+#define AVF_AQ_LEN               32
+#define AVF_AQ_BUF_SZ            4096
+#define AVF_RESET_WAIT_CNT       50
+#define AVF_BUF_SIZE_MIN         1024
+#define AVF_FRAME_SIZE_MAX       9728
+#define AVF_QUEUE_BASE_ADDR_UNIT 128
+
+#define AVF_MAX_NUM_QUEUES       16
+/* Vlan table size */
+#define AVF_VLAN_TB_SIZE               (4096 / (CHAR_BIT * sizeof(uint32_t)))
+
+#define AVF_NUM_MACADDR_MAX      64
+
+#define AVF_DEFAULT_RX_PTHRESH      8
+#define AVF_DEFAULT_RX_HTHRESH      8
+#define AVF_DEFAULT_RX_WTHRESH      0
+
+#define AVF_DEFAULT_RX_FREE_THRESH  32
+
+#define AVF_DEFAULT_TX_PTHRESH      32
+#define AVF_DEFAULT_TX_HTHRESH      0
+#define AVF_DEFAULT_TX_WTHRESH      0
+
+#define AVF_DEFAULT_TX_FREE_THRESH  32
+#define AVF_DEFAULT_TX_RS_THRESH 32
+
+#define AVF_BASIC_OFFLOAD_CAPS  ( \
+	VF_BASE_MODE_OFFLOADS | \
+	VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | \
+	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
+
+#define AVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
+#define AVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
+
+/* Default queue interrupt throttling time in microseconds */
+#define AVF_ITR_INDEX_DEFAULT          0
+#define AVF_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */
+#define AVF_QUEUE_ITR_INTERVAL_MAX     8160 /* 8160 us */
+
+/* The overhead from MTU to max frame size.
+ * Considering QinQ packet, the VLAN tag needs to be counted twice.
+ */
+#define AVF_VLAN_TAG_SIZE               4
+#define AVF_ETH_OVERHEAD \
+	(ETHER_HDR_LEN + ETHER_CRC_LEN + AVF_VLAN_TAG_SIZE * 2)
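
As a quick worked example: with a standard 1500-byte MTU, the resulting maximum frame size is 1500 + 14 (Ethernet header) + 4 (CRC) + 2 * 4 (outer and inner VLAN tags) = 1526 bytes.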
+
+struct avf_adapter;
+struct avf_rx_queue;
+struct avf_tx_queue;
+
+/* Structure that defines a VSI, associated with an adapter. */
+struct avf_vsi {
+	struct avf_adapter *adapter; /* Backreference to associated adapter */
+	uint16_t vsi_id;
+	uint16_t nb_qps;         /* Number of queue pairs VSI can occupy */
+	uint16_t nb_used_qps;    /* Number of queue pairs VSI uses */
+	uint16_t max_macaddrs;   /* Maximum number of MAC addresses */
+	uint16_t base_vector;
+	uint16_t msix_intr;      /* The MSIX interrupt binds to VSI */
+};
+
+/* TODO: is it correct to assume the max number is 16? */
+#define AVF_MAX_MSIX_VECTORS   16
+
+/* Structure to store private data specific for VF instance. */
+struct avf_info {
+	uint16_t num_queue_pairs;
+	uint16_t max_pkt_len; /* Maximum packet length */
+	uint16_t mac_num;     /* Number of MAC addresses */
+	uint32_t vlan[AVF_VLAN_TB_SIZE]; /* VLAN bit map */
+	bool promisc_unicast_enabled;
+	bool promisc_multicast_enabled;
+
+	struct virtchnl_version_info virtchnl_version;
+	struct virtchnl_vf_resource *vf_res; /* VF resource */
+	struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
+	volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
+	uint32_t cmd_retval; /* return value of the cmd response from PF */
+	uint8_t *aq_resp; /* buffer to store the adminq response from PF */
+
+	/* Event from pf */
+	bool dev_closed;
+	bool link_up;
+	enum virtchnl_link_speed link_speed;
+
+	struct avf_vsi vsi;
+	bool vf_reset;
+	uint64_t flags;
+
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	uint16_t nb_msix;   /* number of MSI-X interrupts on Rx */
+	uint16_t msix_base; /* MSI-X vector base */
+	uint16_t rxq_map[AVF_MAX_MSIX_VECTORS];  /* queue bitmask for each vector */
+};
+
+#define AVF_MAX_PKT_TYPE 256
+
+/* Structure to store private data for each VF instance. */
+struct avf_adapter {
+	struct avf_hw hw;
+	struct rte_eth_dev *eth_dev;
+	struct avf_info vf;
+};
+
+/* AVF_DEV_PRIVATE_TO */
+#define AVF_DEV_PRIVATE_TO_ADAPTER(adapter) \
+	((struct avf_adapter *)adapter)
+#define AVF_DEV_PRIVATE_TO_VF(adapter) \
+	(&((struct avf_adapter *)adapter)->vf)
+#define AVF_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct avf_adapter *)adapter)->hw)
+
+/* AVF_VSI_TO */
+#define AVF_VSI_TO_HW(vsi) \
+	(&(((struct avf_vsi *)vsi)->adapter->hw))
+#define AVF_VSI_TO_VF(vsi) \
+	(&(((struct avf_vsi *)vsi)->adapter->vf))
+#define AVF_VSI_TO_ETH_DEV(vsi) \
+	(((struct avf_vsi *)vsi)->adapter->eth_dev)
+
+static inline void
+avf_init_adminq_parameter(struct avf_hw *hw)
+{
+	hw->aq.num_arq_entries = AVF_AQ_LEN;
+	hw->aq.num_asq_entries = AVF_AQ_LEN;
+	hw->aq.arq_buf_size = AVF_AQ_BUF_SZ;
+	hw->aq.asq_buf_size = AVF_AQ_BUF_SZ;
+}
+
+static inline uint16_t
+avf_calc_itr_interval(int16_t interval)
+{
+	if (interval < 0 || interval > AVF_QUEUE_ITR_INTERVAL_MAX)
+		interval = AVF_QUEUE_ITR_INTERVAL_DEFAULT;
+
+	/* Convert to hardware count, as writing each 1 represents 2 us */
+	return interval / 2;
+}
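
For example, the default 32 us interval is written to hardware as a count of 16 and the 8160 us maximum as 4080; an out-of-range request falls back to the 32 us default.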
+
+/* structure used for sending and checking response of virtchnl ops */
+struct avf_cmd_info {
+	enum virtchnl_ops ops;
+	uint8_t *in_args;       /* buffer for sending */
+	uint32_t in_args_size;  /* buffer size for sending */
+	uint8_t *out_buffer;    /* buffer for response */
+	uint32_t out_size;      /* buffer size for response */
+};
+
+/* Clear the current command. Only call this after a prior
+ * _atomic_set_cmd() has succeeded.
+ */
+static inline void
+_clear_cmd(struct avf_info *vf)
+{
+	rte_wmb();
+	vf->pend_cmd = VIRTCHNL_OP_UNKNOWN;
+	vf->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
+}
+
+/* Check whether a cmd is pending; if none, set the new command. */
+static inline int
+_atomic_set_cmd(struct avf_info *vf, enum virtchnl_ops ops)
+{
+	int ret = rte_atomic32_cmpset(&vf->pend_cmd, VIRTCHNL_OP_UNKNOWN, ops);
+
+	if (!ret)
+		PMD_DRV_LOG(ERR, "There is incomplete cmd %d", vf->pend_cmd);
+
+	return !ret;
+}
+
+int avf_check_api_version(struct avf_adapter *adapter);
+int avf_get_vf_resource(struct avf_adapter *adapter);
+void avf_handle_virtchnl_msg(struct rte_eth_dev *dev);
+#endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
new file mode 100644
index 0000000..ba31b47
--- /dev/null
+++ b/drivers/net/avf/avf_ethdev.c
@@ -0,0 +1,482 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/queue.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+
+#include <rte_interrupts.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_ethdev_pci.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_adminq_cmd.h"
+#include "base/avf_type.h"
+
+#include "avf.h"
+
+int avf_logtype_init;
+int avf_logtype_driver;
+static const struct rte_pci_id pci_id_avf_map[] = {
+	{ RTE_PCI_DEVICE(AVF_INTEL_VENDOR_ID, AVF_DEV_ID_ADAPTIVE_VF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static const struct eth_dev_ops avf_eth_dev_ops = {
+};
+
+static int
+avf_check_vf_reset_done(struct avf_hw *hw)
+{
+	int i, reset;
+
+	for (i = 0; i < AVF_RESET_WAIT_CNT; i++) {
+		reset = AVF_READ_REG(hw, AVFGEN_RSTAT) &
+			AVFGEN_RSTAT_VFR_STATE_MASK;
+		reset = reset >> AVFGEN_RSTAT_VFR_STATE_SHIFT;
+		if (reset == VIRTCHNL_VFR_VFACTIVE ||
+		    reset == VIRTCHNL_VFR_COMPLETED)
+			break;
+		rte_delay_ms(20);
+	}
+
+	if (i >= AVF_RESET_WAIT_CNT)
+		return -1;
+
+	return 0;
+}
+
+static int
+avf_init_vf(struct rte_eth_dev *dev)
+{
+	int i, err, bufsz;
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	uint16_t interval =
+		avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
+
+	err = avf_set_mac_type(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
+		goto err;
+	}
+
+	err = avf_check_vf_reset_done(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "VF is still resetting");
+		goto err;
+	}
+
+	avf_init_adminq_parameter(hw);
+	err = avf_init_adminq(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "init_adminq failed: %d", err);
+		goto err;
+	}
+
+	vf->aq_resp = rte_zmalloc("vf_aq_resp", AVF_AQ_BUF_SZ, 0);
+	if (!vf->aq_resp) {
+		PMD_INIT_LOG(ERR, "unable to allocate vf_aq_resp memory");
+		goto err_aq;
+	}
+	if (avf_check_api_version(adapter) != 0) {
+		PMD_INIT_LOG(ERR, "check_api version failed");
+		goto err_api;
+	}
+
+	bufsz = sizeof(struct virtchnl_vf_resource) +
+		(AVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource));
+	vf->vf_res = rte_zmalloc("vf_res", bufsz, 0);
+	if (!vf->vf_res) {
+		PMD_INIT_LOG(ERR, "unable to allocate vf_res memory");
+		goto err_api;
+	}
+	if (avf_get_vf_resource(adapter) != 0) {
+		PMD_INIT_LOG(ERR, "avf_get_vf_config failed");
+		goto err_alloc;
+	}
+	/* Allocate memory for RSS info */
+	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		vf->rss_key = rte_zmalloc("rss_key",
+					  vf->vf_res->rss_key_size, 0);
+		if (!vf->rss_key) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_key memory");
+			goto err_rss;
+		}
+		vf->rss_lut = rte_zmalloc("rss_lut",
+					  vf->vf_res->rss_lut_size, 0);
+		if (!vf->rss_lut) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_lut memory");
+			goto err_rss;
+		}
+	}
+	return 0;
+err_rss:
+	rte_free(vf->rss_key);
+	rte_free(vf->rss_lut);
+err_alloc:
+	rte_free(vf->vf_res);
+	vf->vsi_res = NULL;
+err_api:
+	rte_free(vf->aq_resp);
+err_aq:
+	avf_shutdown_adminq(hw);
+err:
+	return -1;
+}
+
+/* Enable default admin queue interrupt setting */
+static inline void
+avf_enable_irq0(struct avf_hw *hw)
+{
+	/* Enable admin queue interrupt trigger */
+	AVF_WRITE_REG(hw, AVFINT_ICR0_ENA1, AVFINT_ICR0_ENA1_ADMINQ_MASK);
+
+	AVF_WRITE_REG(hw, AVFINT_DYN_CTL01, AVFINT_DYN_CTL01_INTENA_MASK |
+					    AVFINT_DYN_CTL01_CLEARPBA_MASK |
+					    AVFINT_DYN_CTL01_ITR_INDX_MASK);
+
+	AVF_WRITE_FLUSH(hw);
+}
+
+static inline void
+avf_disable_irq0(struct avf_hw *hw)
+{
+	/* Disable all interrupt types */
+	AVF_WRITE_REG(hw, AVFINT_ICR0_ENA1, 0);
+	AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+		      AVFINT_DYN_CTL01_ITR_INDX_MASK);
+	AVF_WRITE_FLUSH(hw);
+}
+
+static void
+avf_dev_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t icr0;
+
+	avf_disable_irq0(hw);
+
+	/* read out interrupt causes */
+	icr0 = AVF_READ_REG(hw, AVFINT_ICR01);
+
+	/* No interrupt event indicated */
+	if (!(icr0 & AVFINT_ICR01_INTEVENT_MASK)) {
+		PMD_DRV_LOG(DEBUG, "No interrupt event, nothing to do");
+		goto done;
+	}
+
+	if (icr0 & AVFINT_ICR01_ADMINQ_MASK) {
+		PMD_DRV_LOG(DEBUG, "ICR01_ADMINQ is reported");
+		avf_handle_virtchnl_msg(dev);
+	}
+
+done:
+	avf_enable_irq0(hw);
+}
+
+static int
+avf_dev_init(struct rte_eth_dev *eth_dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* assign ops func pointer */
+	eth_dev->dev_ops = &avf_eth_dev_ops;
+
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	hw->bus.bus_id = pci_dev->addr.bus;
+	hw->bus.device = pci_dev->addr.devid;
+	hw->bus.func = pci_dev->addr.function;
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->back = AVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+	adapter->eth_dev = eth_dev;
+
+	if (avf_init_vf(eth_dev) != 0) {
+		PMD_INIT_LOG(ERR, "Init vf failed");
+		return -1;
+	}
+
+	/* copy mac addr */
+	eth_dev->data->mac_addrs = rte_zmalloc(
+					"avf_mac",
+					ETHER_ADDR_LEN * AVF_NUM_MACADDR_MAX,
+					0);
+	if (!eth_dev->data->mac_addrs) {
+		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to"
+			     " store MAC addresses",
+			     ETHER_ADDR_LEN * AVF_NUM_MACADDR_MAX);
+		return -ENOMEM;
+	}
+	/* If the MAC address is not configured by host,
+	 * generate a random one.
+	 */
+	if (!is_valid_assigned_ether_addr((struct ether_addr *)hw->mac.addr))
+		eth_random_addr(hw->mac.addr);
+	ether_addr_copy((struct ether_addr *)hw->mac.addr,
+			&eth_dev->data->mac_addrs[0]);
+
+	/* register callback func to eal lib */
+	rte_intr_callback_register(&pci_dev->intr_handle,
+				   avf_dev_interrupt_handler,
+				   (void *)eth_dev);
+
+	/* enable uio intr after callback register */
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	/* configure and enable device interrupt */
+	avf_enable_irq0(hw);
+
+	return 0;
+}
+
+static void
+avf_dev_close(struct rte_eth_dev *dev)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+
+	avf_shutdown_adminq(hw);
+	/* disable uio intr before callback unregister */
+	rte_intr_disable(intr_handle);
+
+	/* unregister callback func from eal lib */
+	rte_intr_callback_unregister(intr_handle,
+				     avf_dev_interrupt_handler, dev);
+	avf_disable_irq0(hw);
+}
+
+static int
+avf_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+	if (hw->adapter_stopped == 0)
+		avf_dev_close(dev);
+
+	rte_free(vf->vf_res);
+	vf->vsi_res = NULL;
+	vf->vf_res = NULL;
+
+	rte_free(vf->aq_resp);
+	vf->aq_resp = NULL;
+
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
+	if (vf->rss_lut) {
+		rte_free(vf->rss_lut);
+		vf->rss_lut = NULL;
+	}
+	if (vf->rss_key) {
+		rte_free(vf->rss_key);
+		vf->rss_key = NULL;
+	}
+
+	return 0;
+}
+
+static int eth_avf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+			     struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+		sizeof(struct avf_adapter), avf_dev_init);
+}
+
+static int eth_avf_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, avf_dev_uninit);
+}
+
+/* Adaptive virtual function driver struct */
+static struct rte_pci_driver rte_avf_pmd = {
+	.id_table = pci_id_avf_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC |
+		     RTE_PCI_DRV_IOVA_AS_VA,
+	.probe = eth_avf_pci_probe,
+	.remove = eth_avf_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_avf, rte_avf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_avf, pci_id_avf_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_avf, "* igb_uio | vfio-pci");
+RTE_INIT(avf_init_log);
+static void
+avf_init_log(void)
+{
+	avf_logtype_init = rte_log_register("pmd.avf.init");
+	if (avf_logtype_init >= 0)
+		rte_log_set_level(avf_logtype_init, RTE_LOG_NOTICE);
+	avf_logtype_driver = rte_log_register("pmd.avf.driver");
+	if (avf_logtype_driver >= 0)
+		rte_log_set_level(avf_logtype_driver, RTE_LOG_NOTICE);
+}
+
+/* memory func for base code */
+enum avf_status_code
+avf_allocate_dma_mem_d(__rte_unused struct avf_hw *hw,
+		       struct avf_dma_mem *mem,
+		       u64 size,
+		       u32 alignment)
+{
+	const struct rte_memzone *mz = NULL;
+	char z_name[RTE_MEMZONE_NAMESIZE];
+
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	snprintf(z_name, sizeof(z_name), "avf_dma_%"PRIu64, rte_rand());
+	mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY, 0,
+					 alignment, RTE_PGSIZE_2M);
+	if (!mz)
+		return AVF_ERR_NO_MEMORY;
+
+	mem->size = size;
+	mem->va = mz->addr;
+	mem->pa = mz->phys_addr;
+	mem->zone = (const void *)mz;
+	PMD_DRV_LOG(DEBUG,
+		    "memzone %s allocated with physical address: %"PRIu64,
+		    mz->name, mem->pa);
+
+	return AVF_SUCCESS;
+}
+
+enum avf_status_code
+avf_free_dma_mem_d(__rte_unused struct avf_hw *hw,
+		   struct avf_dma_mem *mem)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	PMD_DRV_LOG(DEBUG,
+		    "memzone %s to be freed with physical address: %"PRIu64,
+		    ((const struct rte_memzone *)mem->zone)->name, mem->pa);
+	rte_memzone_free((const struct rte_memzone *)mem->zone);
+	mem->zone = NULL;
+	mem->va = NULL;
+	mem->pa = (u64)0;
+
+	return AVF_SUCCESS;
+}
+
+enum avf_status_code
+avf_allocate_virt_mem_d(__rte_unused struct avf_hw *hw,
+			struct avf_virt_mem *mem,
+			u32 size)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	mem->size = size;
+	mem->va = rte_zmalloc("avf", size, 0);
+
+	if (mem->va)
+		return AVF_SUCCESS;
+	else
+		return AVF_ERR_NO_MEMORY;
+}
+
+enum avf_status_code
+avf_free_virt_mem_d(__rte_unused struct avf_hw *hw,
+		    struct avf_virt_mem *mem)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	rte_free(mem->va);
+	mem->va = NULL;
+
+	return AVF_SUCCESS;
+}
+
+/* spinlock func for base code */
+void
+avf_init_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_init(&sp->spinlock);
+}
+
+void
+avf_acquire_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_lock(&sp->spinlock);
+}
+
+void
+avf_release_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_unlock(&sp->spinlock);
+}
+
+void
+avf_destroy_spinlock_d(__rte_unused struct avf_spinlock *sp)
+{
+	return;
+}
diff --git a/drivers/net/avf/avf_log.h b/drivers/net/avf/avf_log.h
index 431f0f3..25e853b 100644
--- a/drivers/net/avf/avf_log.h
+++ b/drivers/net/avf/avf_log.h
@@ -37,7 +37,7 @@
 extern int avf_logtype_init;
 #define PMD_INIT_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, avf_logtype_init, "%s(): " fmt "\n", \
-		__func__, ##args)
+		__func__, ## args)
 #define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
 
 extern int avf_logtype_driver;
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
new file mode 100644
index 0000000..214ddf9
--- /dev/null
+++ b/drivers/net/avf/avf_vchnl.c
@@ -0,0 +1,336 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+
+#include <rte_debug.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_dev.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_adminq_cmd.h"
+#include "base/avf_type.h"
+
+#include "avf.h"
+
+#define MAX_TRY_TIMES 200
+#define ASQ_DELAY_MS  10
+
+/* Read data from the admin queue to get a msg from the PF driver */
+static enum avf_status_code
+avf_read_msg_from_pf(struct avf_adapter *adapter, uint16_t buf_len,
+		     uint8_t *buf)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_arq_event_info event;
+	enum virtchnl_ops opcode;
+	int ret;
+
+	event.buf_len = buf_len;
+	event.msg_buf = buf;
+	ret = avf_clean_arq_element(hw, &event, NULL);
+	/* Can't read any msg from adminQ */
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Can't read msg from AQ");
+		return ret;
+	}
+
+	opcode = (enum virtchnl_ops)rte_le_to_cpu_32(event.desc.cookie_high);
+	vf->cmd_retval = (enum virtchnl_status_code)rte_le_to_cpu_32(
+			event.desc.cookie_low);
+
+	PMD_DRV_LOG(DEBUG, "AQ from pf carries opcode %u, retval %d",
+		    opcode, vf->cmd_retval);
+
+	if (opcode != vf->pend_cmd)
+		PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
+			    vf->pend_cmd, opcode);
+
+	return AVF_SUCCESS;
+}
+
+static int
+avf_execute_vf_cmd(struct avf_adapter *adapter, struct avf_cmd_info *args)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_arq_event_info event_info;
+	enum avf_status_code ret;
+	int err = 0;
+	int i = 0;
+
+	if (_atomic_set_cmd(vf, args->ops))
+		return -1;
+
+	ret = avf_aq_send_msg_to_pf(hw, args->ops, AVF_SUCCESS,
+				    args->in_args, args->in_args_size, NULL);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops);
+		_clear_cmd(vf);
+		return err;
+	}
+
+	switch (args->ops) {
+	case VIRTCHNL_OP_RESET_VF:
+		/* No need to wait for a response */
+		_clear_cmd(vf);
+		break;
+	case VIRTCHNL_OP_VERSION:
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		/* for init virtchnl ops, need to poll the response */
+		do {
+			ret = avf_read_msg_from_pf(adapter, args->out_size,
+						   args->out_buffer);
+			if (ret == AVF_SUCCESS)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+		} while (i++ < MAX_TRY_TIMES);
+		if (i >= MAX_TRY_TIMES ||
+		    vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -1;
+			PMD_DRV_LOG(ERR, "No response or failure (%d) returned"
+				    " for cmd %d", vf->cmd_retval, args->ops);
+		}
+		_clear_cmd(vf);
+		break;
+
+	default:
+		/* For other virtchnl ops at runtime,
+		 * wait for the cmd done flag.
+		 */
+		do {
+			if (vf->pend_cmd == VIRTCHNL_OP_UNKNOWN)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+			/* Keep polling if no msg or only a sys event is read */
+		} while (i++ < MAX_TRY_TIMES);
+		/* If no response is received, clear the pending command */
+		if (i >= MAX_TRY_TIMES  ||
+		    vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -1;
+			PMD_DRV_LOG(ERR, "No response or failure (%d) returned"
+				    " for cmd %d", vf->cmd_retval, args->ops);
+			_clear_cmd(vf);
+		}
+		break;
+	}
+
+	return err;
+}
+
+void
+avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_arq_event_info info;
+	uint16_t pending, aq_opc;
+	enum virtchnl_ops msg_opc;
+	enum avf_status_code msg_ret;
+	int ret;
+
+	info.buf_len = AVF_AQ_BUF_SZ;
+	if (!vf->aq_resp) {
+		PMD_DRV_LOG(ERR, "Buffer for adminq resp should not be NULL");
+		return;
+	}
+	info.msg_buf = vf->aq_resp;
+
+	pending = 1;
+	while (pending) {
+		ret = avf_clean_arq_element(hw, &info, &pending);
+
+		if (ret != AVF_SUCCESS) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from AdminQ, "
+				    "ret: %d", ret);
+			break;
+		}
+		aq_opc = rte_le_to_cpu_16(info.desc.opcode);
+		/* For a message sent from PF to VF, the opcode is stored in
+		 * cookie_high of struct avf_aq_desc, while the return error
+		 * code is stored in cookie_low; both are set by the PF driver.
+		 */
+		msg_opc = (enum virtchnl_ops)rte_le_to_cpu_32(
+						  info.desc.cookie_high);
+		msg_ret = (enum avf_status_code)rte_le_to_cpu_32(
+						  info.desc.cookie_low);
+		switch (aq_opc) {
+		case avf_aqc_opc_send_msg_to_vf:
+			if (msg_opc == VIRTCHNL_OP_EVENT) {
+				/* TODO */
+			} else {
+				/* read message and check it is the expected one */
+				if (msg_opc == vf->pend_cmd) {
+					vf->cmd_retval = msg_ret;
+					/* prevent compiler reordering */
+					rte_compiler_barrier();
+					_clear_cmd(vf);
+				} else
+					PMD_DRV_LOG(ERR, "command mismatch, "
+						    "expect %u, get %u",
+						    vf->pend_cmd, msg_opc);
+				PMD_DRV_LOG(DEBUG, "adminq response is received,"
+					     " opcode = %d", msg_opc);
+			}
+			break;
+		default:
+			PMD_DRV_LOG(ERR, "Request %u is not supported yet",
+				    aq_opc);
+			break;
+		}
+	}
+}
+
+#define VIRTCHNL_VERSION_MAJOR_START 1
+#define VIRTCHNL_VERSION_MINOR_START 1
+
+/**
+ * avf_check_api_version
+ * @adapter: pointer to the adapter private structure
+ *
+ * Check the virtchnl API version, waiting synchronously until the version is
+ * read back from the admin queue or the request fails.
+ */
+int
+avf_check_api_version(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_version_info version, *pver;
+	struct avf_cmd_info args;
+	int err;
+
+	version.major = VIRTCHNL_VERSION_MAJOR;
+	version.minor = VIRTCHNL_VERSION_MINOR;
+
+	args.ops = VIRTCHNL_OP_VERSION;
+	args.in_args = (uint8_t *)&version;
+	args.in_args_size = sizeof(version);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Fail to execute command of OP_VERSION");
+		return err;
+	}
+
+	pver = (struct virtchnl_version_info *)args.out_buffer;
+	vf->virtchnl_version = *pver;
+
+	if ((vf->virtchnl_version.major < VIRTCHNL_VERSION_MAJOR_START) ||
+	    ((vf->virtchnl_version.major == VIRTCHNL_VERSION_MAJOR_START) &&
+	     (vf->virtchnl_version.minor < VIRTCHNL_VERSION_MINOR_START))) {
+		PMD_INIT_LOG(ERR, "VIRTCHNL API version should not be lower"
+			     " than (%u.%u) to support Adaptive VF",
+			     VIRTCHNL_VERSION_MAJOR_START,
+			     VIRTCHNL_VERSION_MINOR_START);
+		return -1;
+	} else if ((vf->virtchnl_version.major > VIRTCHNL_VERSION_MAJOR) ||
+		   ((vf->virtchnl_version.major == VIRTCHNL_VERSION_MAJOR) &&
+		    (vf->virtchnl_version.minor > VIRTCHNL_VERSION_MINOR))) {
+		PMD_INIT_LOG(ERR, "PF/VF API version mismatch:(%u.%u)-(%u.%u)",
+			     vf->virtchnl_version.major,
+			     vf->virtchnl_version.minor,
+			     VIRTCHNL_VERSION_MAJOR,
+			     VIRTCHNL_VERSION_MINOR);
+		return -1;
+	}
+
+	PMD_DRV_LOG(DEBUG, "Peer is a supported PF host");
+	return 0;
+}
+
+int
+avf_get_vf_resource(struct avf_adapter *adapter)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	uint32_t caps, len;
+	int err, i;
+
+	args.ops = VIRTCHNL_OP_GET_VF_RESOURCES;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	/* TODO: basic offload capabilities, need to
+	 * add advanced/optional offload capabilities
+	 */
+
+	caps = AVF_BASIC_OFFLOAD_CAPS;
+
+	args.in_args = (uint8_t *)&caps;
+	args.in_args_size = sizeof(caps);
+
+	err = avf_execute_vf_cmd(adapter, &args);
+
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+				 "OP_GET_VF_RESOURCE");
+		return -1;
+	}
+
+	len = sizeof(struct virtchnl_vf_resource) +
+		      AVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource);
+
+	rte_memcpy(vf->vf_res, args.out_buffer,
+		   RTE_MIN(args.out_size, len));
+	/* parse VF config message back from PF */
+	avf_parse_hw_config(hw, vf->vf_res);
+	for (i = 0; i < vf->vf_res->num_vsis; i++) {
+		if (vf->vf_res->vsi_res[i].vsi_type == VIRTCHNL_VSI_SRIOV)
+			vf->vsi_res = &vf->vf_res->vsi_res[i];
+	}
+
+	if (!vf->vsi_res) {
+		PMD_INIT_LOG(ERR, "no LAN VSI found");
+		return -1;
+	}
+
+	vf->vsi.vsi_id = vf->vsi_res->vsi_id;
+	vf->vsi.nb_qps = vf->vsi_res->num_queue_pairs;
+	vf->vsi.adapter = adapter;
+
+	return 0;
+}
diff --git a/drivers/net/avf/rte_pmd_avf_version.map b/drivers/net/avf/rte_pmd_avf_version.map
new file mode 100644
index 0000000..179140f
--- /dev/null
+++ b/drivers/net/avf/rte_pmd_avf_version.map
@@ -0,0 +1,4 @@
+DPDK_18.02 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 6a6a745..584c168 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -120,6 +120,7 @@ _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK)  += -lrte_mempool_stack
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ARK_PMD)        += -lrte_pmd_ark
 _LDLIBS-$(CONFIG_RTE_LIBRTE_AVP_PMD)        += -lrte_pmd_avp
+_LDLIBS-$(CONFIG_RTE_LIBRTE_AVF_PMD)        += -lrte_pmd_avf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BOND)       += -lrte_pmd_bond
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v2 03/14] net/avf: enable queue and device
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 01/14] net/avf/base: add base code for " Jingjing Wu
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 02/14] net/avf: initilization of " Jingjing Wu
@ 2017-11-24  6:33   ` Jingjing Wu
  2017-12-04  8:45     ` Xing, Beilei
  2017-12-04 19:56     ` Ferruh Yigit
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 04/14] net/avf: enable basic Rx Tx func Jingjing Wu
                     ` (13 subsequent siblings)
  16 siblings, 2 replies; 151+ messages in thread
From: Jingjing Wu @ 2017-11-24  6:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Enable device and queue setup ops (a short usage sketch follows the list):

 - dev_configure
 - dev_start
 - dev_stop
 - dev_close
 - dev_infos_get
 - rx_queue_start
 - rx_queue_stop
 - tx_queue_start
 - tx_queue_stop
 - rx_queue_setup
 - rx_queue_release
 - tx_queue_setup
 - tx_queue_release
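
As a usage sketch (illustrative only, not part of this patch; the port id,
descriptor count and the pre-created mempool "mb_pool" are assumptions), an
application drives these ops through the generic ethdev API:

/* minimal bring-up of one Rx/Tx queue pair on an AVF port */
#include <string.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

static int
app_port_init(uint16_t port, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf conf;
	int ret;

	memset(&conf, 0, sizeof(conf));
	/* invokes avf_dev_configure */
	ret = rte_eth_dev_configure(port, 1, 1, &conf);
	if (ret < 0)
		return ret;
	/* invokes avf_dev_rx_queue_setup / avf_dev_tx_queue_setup */
	ret = rte_eth_rx_queue_setup(port, 0, 512,
				     rte_eth_dev_socket_id(port),
				     NULL, mb_pool);
	if (ret < 0)
		return ret;
	ret = rte_eth_tx_queue_setup(port, 0, 512,
				     rte_eth_dev_socket_id(port),
				     NULL);
	if (ret < 0)
		return ret;
	/* invokes avf_dev_start, which configures queues and the IRQ map
	 * over virtchnl and then enables the queues
	 */
	return rte_eth_dev_start(port);
}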

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/avf/Makefile     |   1 +
 drivers/net/avf/avf.h        |  18 ++
 drivers/net/avf/avf_ethdev.c | 356 ++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.c   | 644 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h   | 202 ++++++++++++++
 drivers/net/avf/avf_vchnl.c  | 355 ++++++++++++++++++++++++
 6 files changed, 1576 insertions(+)
 create mode 100644 drivers/net/avf/avf_rxtx.c
 create mode 100644 drivers/net/avf/avf_rxtx.h

diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index 40d0a0f..1662c76 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -54,5 +54,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_rxtx.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 3d3e0dc..1a62ad7 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -67,6 +67,13 @@
 	VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | \
 	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
 
+#define AVF_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 |         \
+	ETH_RSS_NONFRAG_IPV4_TCP |  \
+	ETH_RSS_NONFRAG_IPV4_UDP |  \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_NONFRAG_IPV4_OTHER)
+
 #define AVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
 #define AVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
 
@@ -211,4 +218,15 @@ _atomic_set_cmd(struct avf_info *vf, enum virtchnl_ops ops)
 int avf_check_api_version(struct avf_adapter *adapter);
 int avf_get_vf_resource(struct avf_adapter *adapter);
 void avf_handle_virtchnl_msg(struct rte_eth_dev *dev);
+int avf_enable_vlan_strip(struct avf_adapter *adapter);
+int avf_disable_vlan_strip(struct avf_adapter *adapter);
+int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
+		     bool rx, bool on);
+int avf_enable_queues(struct avf_adapter *adapter);
+int avf_disable_queues(struct avf_adapter *adapter);
+int avf_configure_rss_lut(struct avf_adapter *adapter);
+int avf_configure_rss_key(struct avf_adapter *adapter);
+int avf_configure_queues(struct avf_adapter *adapter);
+int avf_config_irq_map(struct avf_adapter *adapter);
+void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index ba31b47..355e70b 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -60,6 +60,14 @@
 #include "base/avf_type.h"
 
 #include "avf.h"
+#include "avf_rxtx.h"
+
+static int avf_dev_configure(struct rte_eth_dev *dev);
+static int avf_dev_start(struct rte_eth_dev *dev);
+static void avf_dev_stop(struct rte_eth_dev *dev);
+static void avf_dev_close(struct rte_eth_dev *dev);
+static void avf_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -69,9 +77,356 @@ static const struct rte_pci_id pci_id_avf_map[] = {
 };
 
 static const struct eth_dev_ops avf_eth_dev_ops = {
+	.dev_configure              = avf_dev_configure,
+	.dev_start                  = avf_dev_start,
+	.dev_stop                   = avf_dev_stop,
+	.dev_close                  = avf_dev_close,
+	.dev_infos_get              = avf_dev_info_get,
+	.rx_queue_start             = avf_dev_rx_queue_start,
+	.rx_queue_stop              = avf_dev_rx_queue_stop,
+	.tx_queue_start             = avf_dev_tx_queue_start,
+	.tx_queue_stop              = avf_dev_tx_queue_stop,
+	.rx_queue_setup             = avf_dev_rx_queue_setup,
+	.rx_queue_release           = avf_dev_rx_queue_release,
+	.tx_queue_setup             = avf_dev_tx_queue_setup,
+	.tx_queue_release           = avf_dev_tx_queue_release,
 };
 
 static int
+avf_dev_configure(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+
+	/* VLAN stripping setting */
+	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		avf_enable_vlan_strip(ad);
+	else
+		avf_disable_vlan_strip(ad);
+	return 0;
+}
+
+static int
+avf_init_rss(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_eth_rss_conf *rss_conf;
+	uint8_t i, j, nb_q;
+	int ret;
+
+	rss_conf = &adapter->eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = RTE_MIN(adapter->eth_dev->data->nb_rx_queues,
+		       AVF_MAX_NUM_QUEUES);
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+	if (adapter->eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
+		/* set all lut items to default queue */
+		for (i = 0; i < vf->vf_res->rss_lut_size; i++)
+			vf->rss_lut[i] = 0;
+		ret = avf_configure_rss_lut(adapter);
+		return ret;
+	}
+
+	/* In AVF, RSS enablement is controlled by the PF driver. It cannot
+	 * be selected based on rss_conf->rss_hf.
+	 */
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key) {
+		/* Generate a random default hash key */
+		for (i = 0; i < vf->vf_res->rss_key_size; i++)
+			vf->rss_key[i] = (uint8_t)rte_rand();
+	} else
+		rte_memcpy(vf->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   vf->vf_res->rss_key_size));
+
+	/* init RSS LUT table: map entries round-robin over the nb_q queues */
+	for (i = 0, j = 0; i < vf->vf_res->rss_lut_size; i++, j++) {
+		if (j >= nb_q)
+			j = 0;
+		vf->rss_lut[i] = j;
+	}
+	/* send virtchnl ops to configure RSS */
+	ret = avf_configure_rss_lut(adapter);
+	if (ret)
+		return ret;
+	ret = avf_configure_rss_key(adapter);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int
+avf_init_rxq(struct rte_eth_dev *dev, struct avf_rx_queue *rxq)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_dev_data *dev_data = dev->data;
+	uint16_t buf_size, max_pkt_len, len;
+
+	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+
+	/* Calculate the maximum packet length allowed */
+	len = rxq->rx_buf_len * AVF_MAX_CHAINED_RX_BUFFERS;
+	max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	/* Check if the jumbo frame and maximum packet length are set correctly */
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (max_pkt_len <= ETHER_MAX_LEN ||
+		    max_pkt_len > AVF_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is enabled",
+				    (uint32_t)ETHER_MAX_LEN,
+				    (uint32_t)AVF_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (max_pkt_len < ETHER_MIN_LEN ||
+		    max_pkt_len > ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)ETHER_MIN_LEN,
+				    (uint32_t)ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	    (rxq->max_pkt_len + 2 * AVF_VLAN_TAG_SIZE) > buf_size) {
+		dev_data->scattered_rx = 1;
+	}
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	AVF_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
+static int
+avf_init_queues(struct rte_eth_dev *dev)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_rx_queue **rxq =
+		(struct avf_rx_queue **)dev->data->rx_queues;
+	struct avf_tx_queue **txq =
+		(struct avf_tx_queue **)dev->data->tx_queues;
+	int i, ret = AVF_SUCCESS;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!rxq[i] || !rxq[i]->q_set)
+			continue;
+		ret = avf_init_rxq(dev, rxq[i]);
+		if (ret != AVF_SUCCESS)
+			break;
+	}
+	/* TODO: set rx/tx function to vector/scatter/single-segment
+	 * according to parameters
+	 */
+	return ret;
+}
+
+static int
+avf_start_queues(struct rte_eth_dev *dev)
+{
+	struct avf_rx_queue *rxq;
+	struct avf_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq->tx_deferred_start)
+			continue;
+		if (avf_dev_tx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
+			return -1;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq->rx_deferred_start)
+			continue;
+		if (avf_dev_rx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+avf_dev_start(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
+	uint16_t interval;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	hw->adapter_stopped = 0;
+
+	vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+	vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
+				      dev->data->nb_tx_queues);
+
+	/* TODO: Rx interrupt */
+
+	if (avf_init_queues(dev) != 0) {
+		PMD_DRV_LOG(ERR, "failed to do Queue init");
+		return -1;
+	}
+
+	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		if (avf_init_rss(adapter) != 0) {
+			PMD_DRV_LOG(ERR, "configure rss failed");
+			goto err_rss;
+		}
+	}
+
+	if (avf_configure_queues(adapter) != 0) {
+		PMD_DRV_LOG(ERR, "configure queues failed");
+		goto err_queue;
+	}
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR)) {
+		/* If no WB_ON_ITR offload flags, need to set interrupt for
+		 * descriptor write back.
+		 */
+		vf->nb_msix = 1;
+		vf->msix_base = AVF_MISC_VEC_ID;
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			vf->rxq_map[0] |= 1 << i;
+
+		/* set ITR to max */
+		interval = avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+			      AVFINT_DYN_CTL01_INTENA_MASK |
+			      AVFINT_DYN_CTL01_CLEARPBA_MASK |
+			      (AVF_ITR_INDEX_DEFAULT <<
+			       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+			      (interval << AVFINT_DYN_CTL01_INTERVAL_SHIFT));
+		AVF_WRITE_FLUSH(hw);
+
+		if (avf_config_irq_map(adapter)) {
+			PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+			goto err_queue;
+		}
+	}
+
+	/* Set all mac addrs */
+	avf_add_del_all_mac_addr(adapter, TRUE);
+
+	if (avf_start_queues(dev) != 0) {
+		PMD_DRV_LOG(ERR, "enable queues failed");
+		goto err_mac;
+	}
+
+	/* TODO: enable interrupt for RX interrupt */
+	return 0;
+
+err_mac:
+	avf_add_del_all_mac_addr(adapter, FALSE);
+err_queue:
+err_rss:
+	return -1;
+}
+
+static void
+avf_dev_stop(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (hw->adapter_stopped == 1)
+		return;
+
+	avf_stop_queues(dev);
+
+	/*TODO: Disable the interrupt for Rx*/
+
+	/* TODO: Rx interrupt vector mapping free */
+
+	/* remove all mac addrs */
+	avf_add_del_all_mac_addr(adapter, FALSE);
+	hw->adapter_stopped = 1;
+}
+
+static void
+avf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	memset(dev_info, 0, sizeof(*dev_info));
+	dev_info->pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	dev_info->max_rx_queues = vf->vsi_res->num_queue_pairs;
+	dev_info->max_tx_queues = vf->vsi_res->num_queue_pairs;
+	dev_info->min_rx_bufsize = AVF_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = AVF_FRAME_SIZE_MAX;
+	dev_info->hash_key_size = vf->vf_res->rss_key_size;
+	dev_info->reta_size = vf->vf_res->rss_lut_size;
+	dev_info->flow_type_rss_offloads = AVF_RSS_OFFLOAD_ALL;
+	dev_info->max_mac_addrs = AVF_NUM_MACADDR_MAX;
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = AVF_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = AVF_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = AVF_DEFAULT_TX_RS_THRESH,
+		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
+				ETH_TXQ_FLAGS_NOOFFLOADS,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = AVF_MAX_RING_DESC,
+		.nb_min = AVF_MIN_RING_DESC,
+		.nb_align = AVF_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = AVF_MAX_RING_DESC,
+		.nb_min = AVF_MIN_RING_DESC,
+		.nb_align = AVF_ALIGN_RING_DESC,
+	};
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
@@ -295,6 +650,7 @@ avf_dev_close(struct rte_eth_dev *dev)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
+	avf_dev_stop(dev);
 	avf_shutdown_adminq(hw);
 	/* disable uio intr before callback unregister */
 	rte_intr_disable(intr_handle);
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
new file mode 100644
index 0000000..2edd455
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx.c
@@ -0,0 +1,644 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+
+#include <rte_string_fns.h>
+#include <rte_memzone.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_tcp.h>
+#include <rte_sctp.h>
+#include <rte_udp.h>
+#include <rte_ip.h>
+#include <rte_net.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_type.h"
+#include "avf.h"
+#include "avf_rxtx.h"
+
+static inline int
+check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+{
+	/* The following constraints must be satisfied:
+	 *   thresh >= AVF_RX_MAX_BURST
+	 *   thresh < rxq->nb_rx_desc
+	 *   (rxq->nb_rx_desc % thresh) == 0
+	 */
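+	/* For example (values assumed, not from this patch): nb_desc = 512
+	 * with thresh = 32 satisfies all three conditions.
+	 */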
+	if (thresh < AVF_RX_MAX_BURST ||
+	    thresh >= nb_desc ||
+	    (nb_desc % thresh != 0)) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u, "
+			     "greater than or equal to %u, "
+			     "and a divisor of %u",
+			     thresh, nb_desc, AVF_RX_MAX_BURST, nb_desc);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static inline int
+check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+		uint16_t tx_free_thresh)
+{
+	/* TX descriptors will have their RS bit set after tx_rs_thresh
+	 * descriptors have been used. The TX descriptor ring will be cleaned
+	 * after tx_free_thresh descriptors are used or if the number of
+	 * descriptors required to transmit a packet is greater than the
+	 * number of free TX descriptors.
+	 *
+	 * The following constraints must be satisfied:
+	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
+	 *  - tx_free_thresh must be less than the size of the ring minus 3.
+	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+	 *  - tx_rs_thresh must be a divisor of the ring size.
+	 *
+	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+	 * race condition, hence the maximum threshold constraints. When set
+	 * to zero use default values.
+	 */
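+	/* Worked example (illustrative values, not from this patch): with
+	 * nb_desc = 512, tx_rs_thresh = 32 and tx_free_thresh = 32, the RS
+	 * bit is set every 32 descriptors, the ring is cleaned once 32
+	 * descriptors are used, and 512 % 32 == 0, so all checks below pass.
+	 */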
+	if (tx_rs_thresh >= (nb_desc - 2)) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
+			     "number of TX descriptors (%u) minus 2",
+			     tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be less than the "
+			     "number of TX descriptors (%u) minus 3.",
+			     tx_free_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > tx_free_thresh) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
+			     "equal to tx_free_thresh (%u).",
+			     tx_rs_thresh, tx_free_thresh);
+		return -EINVAL;
+	}
+	if ((nb_desc % tx_rs_thresh) != 0) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
+			     "number of TX descriptors (%u).",
+			     tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline void
+reset_rx_queue(struct avf_rx_queue *rxq)
+{
+	uint16_t len, i;
+
+	if (!rxq)
+		return;
+
+	len = rxq->nb_rx_desc + AVF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(union avf_rx_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < AVF_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+static inline void
+reset_tx_queue(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *txe;
+	uint16_t i, prev, size;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct avf_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i].cmd_type_offset_bsz =
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_used = 0;
+
+	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+	txq->nb_free = txq->nb_tx_desc - 1;
+
+	txq->next_dd = txq->rs_thresh - 1;
+	txq->next_rs = txq->rs_thresh - 1;
+}
+
+static int
+alloc_rxq_mbufs(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &rxq->rx_ring[i];
+		rxd->read.pkt_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
+#ifndef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+		rxd->read.rsvd1 = 0;
+		rxd->read.rsvd2 = 0;
+#endif
+
+		rxq->sw_ring[i] = mbuf;
+	}
+
+	return 0;
+}
+
+static inline void
+release_rxq_mbufs(struct avf_rx_queue *rxq)
+{
+	struct rte_mbuf *mbuf;
+	uint16_t i;
+
+	if (!rxq->sw_ring)
+		return;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i]) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+			rxq->sw_ring[i] = NULL;
+		}
+	}
+}
+
+static inline void
+release_txq_mbufs(struct avf_tx_queue *txq)
+{
+	uint16_t i;
+
+	if (!txq || !txq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		if (txq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+	}
+}
+
+int
+avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		       uint16_t nb_desc, unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *mp)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+	uint16_t len, i;
+	uint16_t rx_free_thresh;
+	uint16_t base, bsf, tc_mapping;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (nb_desc % AVF_ALIGN_RING_DESC != 0 ||
+	    (nb_desc > AVF_MAX_RING_DESC) ||
+	    (nb_desc < AVF_MIN_RING_DESC)) {
+		PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/* Check free threshold */
+	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+			 AVF_DEFAULT_RX_FREE_THRESH :
+			 rx_conf->rx_free_thresh;
+	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx]) {
+		avf_dev_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket("avf rxq",
+				 sizeof(struct avf_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!rxq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "rx queue data structure");
+		return -ENOMEM;
+	}
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->queue_id = queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->crc_len = 0; /* crc stripping by default */
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->rx_hdr_len = 0;
+
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = RTE_ALIGN(len, (1 << AVF_RXQ_CTX_DBUFF_SHIFT));
+
+	/* Allocate the software ring. */
+	len = nb_desc + AVF_RX_MAX_BURST;
+	rxq->sw_ring =
+		rte_zmalloc_socket("avf rx sw ring",
+				   sizeof(struct rte_mbuf *) * len,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!rxq->sw_ring) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+
+	/* Allocate the maximum number of RX ring hardware descriptors with
+	 * a little extra to support bulk allocation.
+	 */
+	len = AVF_MAX_RING_DESC + AVF_RX_MAX_BURST;
+	ring_size = RTE_ALIGN(len * sizeof(union avf_rx_desc),
+			      AVF_DMA_MEM_ALIGN);
+	mz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
+					   ring_size, AVF_RING_BASE_ALIGN,
+					   socket_id);
+	if (!mz) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
+		rte_free(rxq->sw_ring);
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+	memset(mz->addr, 0, ring_size); /* Zero all the descriptors in the ring. */
+	rxq->rx_ring_phys_addr = mz->iova;
+	rxq->rx_ring = (union avf_rx_desc *)mz->addr;
+
+	rxq->mz = mz;
+	reset_rx_queue(rxq);
+	rxq->q_set = TRUE;
+	dev->data->rx_queues[queue_idx] = rxq;
+	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
+
+	return 0;
+}
+
+int
+avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	uint16_t i, base, bsf, tc_mapping;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (nb_desc % AVF_ALIGN_RING_DESC != 0 ||
+	    (nb_desc > AVF_MAX_RING_DESC) ||
+	    (nb_desc < AVF_MIN_RING_DESC)) {
+		PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is "
+			    "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh) ?
+		tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH);
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+		tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
+	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx]) {
+		avf_dev_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("avf txq",
+				 sizeof(struct avf_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!txq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "tx queue structure");
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->rs_thresh = tx_rs_thresh;
+	txq->free_thresh = tx_free_thresh;
+	txq->queue_id = queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->txq_flags = tx_conf->txq_flags;
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	/* Allocate software ring */
+	txq->sw_ring =
+		rte_zmalloc_socket("avf tx sw ring",
+				   sizeof(struct avf_tx_entry) * nb_desc,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!txq->sw_ring) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		rte_free(txq);
+		return -ENOMEM;
+	}
+
+	/* Allocate TX hardware ring descriptors. */
+	ring_size = sizeof(struct avf_tx_desc) * AVF_MAX_RING_DESC;
+	ring_size = RTE_ALIGN(ring_size, AVF_DMA_MEM_ALIGN);
+	mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+				      ring_size, AVF_RING_BASE_ALIGN,
+				      socket_id);
+	if (!mz) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+		rte_free(txq->sw_ring);
+		rte_free(txq);
+		return -ENOMEM;
+	}
+	txq->tx_ring_phys_addr = mz->iova;
+	txq->tx_ring = (struct avf_tx_desc *)mz->addr;
+
+	txq->mz = mz;
+	reset_tx_queue(txq);
+	txq->q_set = TRUE;
+	dev->data->tx_queues[queue_idx] = txq;
+	txq->qtx_tail = hw->hw_addr + AVF_QTX_TAIL1(queue_idx);
+
+	return 0;
+}
+
+int
+avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int err = 0;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	err = alloc_rxq_mbufs(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+		return err;
+	}
+
+	rte_wmb();
+
+	/* Init the RX tail register. */
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	AVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = avf_switch_queue(adapter, rx_queue_id, TRUE, TRUE);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	else
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+int
+avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int err = 0;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
+	AVF_PCI_REG_WRITE(txq->qtx_tail, 0);
+	AVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = avf_switch_queue(adapter, tx_queue_id, FALSE, TRUE);
+
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	else
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+int
+avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int err;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = avf_switch_queue(adapter, rx_queue_id, TRUE, FALSE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	release_rxq_mbufs(rxq);
+	reset_rx_queue(rxq);
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int err;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = avf_switch_queue(adapter, tx_queue_id, FALSE, FALSE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	release_txq_mbufs(txq);
+	reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+void
+avf_dev_rx_queue_release(void *rxq)
+{
+	struct avf_rx_queue *q = (struct avf_rx_queue *)rxq;
+
+	if (!q)
+		return;
+
+	release_rxq_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+avf_dev_tx_queue_release(void *txq)
+{
+	struct avf_tx_queue *q = (struct avf_tx_queue *)txq;
+
+	if (!q)
+		return;
+
+	release_txq_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+avf_stop_queues(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	struct avf_tx_queue *txq;
+	int ret, i;
+
+	/* Stop All queues */
+	ret = avf_disable_queues(adapter);
+	if (ret)
+		PMD_DRV_LOG(WARNING, "Fail to stop queues");
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		release_txq_mbufs(txq);
+		reset_tx_queue(txq);
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		release_rxq_mbufs(rxq);
+		reset_rx_queue(rxq);
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
new file mode 100644
index 0000000..0247339
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx.h
@@ -0,0 +1,202 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AVF_RXTX_H_
+#define _AVF_RXTX_H_
+
+/* Ring length (QLEN) must be a whole multiple of 32 descriptors. */
+#define AVF_ALIGN_RING_DESC      32
+#define AVF_MIN_RING_DESC        64
+#define AVF_MAX_RING_DESC        4096
+#define AVF_DMA_MEM_ALIGN        4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define AVF_RING_BASE_ALIGN      128
+
+/* used for Rx Bulk Allocate */
+#define AVF_RX_MAX_BURST         32
+
+#define DEFAULT_TX_RS_THRESH     32
+#define DEFAULT_TX_FREE_THRESH   32
+
+/* HW desc structure, both 16-byte and 32-byte types are supported */
+#ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+#define avf_rx_desc avf_16byte_rx_desc
+#else
+#define avf_rx_desc avf_32byte_rx_desc
+#endif
+
+/* Structure associated with each Rx queue. */
+struct avf_rx_queue {
+	struct rte_mempool *mp;       /* mbuf pool to populate Rx ring */
+	const struct rte_memzone *mz; /* memzone for Rx ring */
+	volatile union avf_rx_desc *rx_ring; /* Rx ring virtual address */
+	uint64_t rx_ring_phys_addr;   /* Rx ring DMA address */
+	struct rte_mbuf **sw_ring;     /* address of SW ring */
+	uint16_t nb_rx_desc;          /* ring length */
+	uint16_t rx_tail;             /* current value of tail */
+	volatile uint8_t *qrx_tail;   /* register address of tail */
+	uint16_t rx_free_thresh;      /* max free RX desc to hold */
+	uint16_t nb_rx_hold;          /* number of held free RX desc */
+	struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
+	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
+	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
+
+	uint8_t port_id;        /* device port ID */
+	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
+	uint16_t queue_id;      /* Rx queue index */
+	uint16_t rx_buf_len;    /* The packet buffer size */
+	uint16_t rx_hdr_len;    /* The header buffer size */
+	uint16_t max_pkt_len;   /* Maximum packet length */
+
+	bool q_set;             /* if rx queue has been configured */
+	bool rx_deferred_start; /* don't start this queue in dev start */
+};
+
+struct avf_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint16_t next_id;
+	uint16_t last_id;
+};
+
+/* Structure associated with each TX queue. */
+struct avf_tx_queue {
+	const struct rte_memzone *mz;  /* memzone for Tx ring */
+	volatile struct avf_tx_desc *tx_ring; /* Tx ring virtual address */
+	uint64_t tx_ring_phys_addr;    /* Tx ring DMA address */
+	struct avf_tx_entry *sw_ring;  /* address array of SW ring */
+	uint16_t nb_tx_desc;           /* ring length */
+	uint16_t tx_tail;              /* current value of tail */
+	volatile uint8_t *qtx_tail;    /* register address of tail */
+	uint16_t nb_used;              /* number of used desc since RS bit set */
+	uint16_t nb_free;
+	uint16_t last_desc_cleaned;    /* last desc that has been cleaned */
+	uint16_t free_thresh;
+	uint16_t rs_thresh;
+
+	uint8_t port_id;
+	uint16_t queue_id;
+	uint32_t txq_flags;
+	uint16_t next_dd;              /* next desc to check DD bit, for VPMD */
+	uint16_t next_rs;              /* next desc to set RS bit, for VPMD */
+
+	bool q_set;                    /* if tx queue has been configured */
+	bool tx_deferred_start;        /* don't start this queue in dev start */
+};
+
+int avf_dev_rx_queue_setup(struct rte_eth_dev *dev,
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_rxconf *rx_conf,
+			   struct rte_mempool *mp);
+
+int avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void avf_dev_rx_queue_release(void *rxq);
+
+int avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_txconf *tx_conf);
+int avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void avf_dev_tx_queue_release(void *txq);
+void avf_stop_queues(struct rte_eth_dev *dev);
+
+static inline
+void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
+			    const void *desc,
+			    uint16_t rx_id)
+{
+#ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+	const union avf_16byte_rx_desc *rx_desc = desc;
+
+	printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64"\n",
+	       rxq->queue_id, rx_id, rx_desc->read.pkt_addr,
+	       rx_desc->read.hdr_addr);
+#else
+	const union avf_32byte_rx_desc *rx_desc = desc;
+
+	printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64
+	       " QW2: 0x%016"PRIx64" QW3: 0x%016"PRIx64"\n", rxq->queue_id,
+	       rx_id, rx_desc->read.pkt_addr, rx_desc->read.hdr_addr,
+	       rx_desc->read.rsvd1, rx_desc->read.rsvd2);
+#endif
+}
+
+/* All the descriptors are 16 bytes, so just use one of them
+ * to print the qwords.
+ */
+static inline
+void avf_dump_tx_descriptor(const struct avf_tx_queue *txq,
+			    const void *desc, uint16_t tx_id)
+{
+	char *name;
+	const struct avf_tx_desc *tx_desc = desc;
+	enum avf_tx_desc_dtype_value type;
+
+	type = (enum avf_tx_desc_dtype_value)rte_le_to_cpu_64(
+		tx_desc->cmd_type_offset_bsz &
+		rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK));
+	switch (type) {
+	case AVF_TX_DESC_DTYPE_DATA:
+		name = "Tx_data_desc";
+		break;
+	case AVF_TX_DESC_DTYPE_CONTEXT:
+		name = "Tx_context_desc";
+		break;
+	default:
+		name = "unknown_desc";
+		break;
+	}
+
+	printf("Queue %d %s %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64"\n",
+	       txq->queue_id, name, tx_id, tx_desc->buffer_addr,
+	       tx_desc->cmd_type_offset_bsz);
+}
+
+#ifdef RTE_LIBRTE_AVF_RX_DUMP
+#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) \
+	avf_dump_rx_descriptor(rxq, desc, rx_id);
+#else
+#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVF_TX_DUMP
+#define AVF_DUMP_TX_DESC(txq, desc, tx_id) \
+	avf_dump_tx_descriptor(txq, desc, tx_id);
+#else
+#define AVF_DUMP_TX_DESC(txq, desc, tx_id) do { } while (0)
+#endif
+
+#endif /* _AVF_RXTX_H_ */
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index 214ddf9..857b263 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -53,6 +53,7 @@
 #include "base/avf_type.h"
 
 #include "avf.h"
+#include "avf_rxtx.h"
 
 #define MAX_TRY_TIMES 200
 #define ASQ_DELAY_MS  10
@@ -223,6 +224,48 @@ avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
 	}
 }
 
+int
+avf_enable_vlan_strip(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	int ret;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_ENABLE_VLAN_STRIPPING;
+	args.in_args = NULL;
+	args.in_args_size = 0;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	ret = avf_execute_vf_cmd(adapter, &args);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+			     "OP_ENABLE_VLAN_STRIPPING");
+
+	return ret;
+}
+
+int
+avf_disable_vlan_strip(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	int ret;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_DISABLE_VLAN_STRIPPING;
+	args.in_args = NULL;
+	args.in_args_size = 0;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	ret = avf_execute_vf_cmd(adapter, &args);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+			     "OP_DISABLE_VLAN_STRIPPING");
+
+	return ret;
+}
+
 #define VIRTCHNL_VERSION_MAJOR_START 1
 #define VIRTCHNL_VERSION_MINOR_START 1
 
@@ -334,3 +377,315 @@ avf_get_vf_resource(struct avf_adapter *adapter)
 
 	return 0;
 }
+
+int
+avf_enable_queues(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+
+	queue_select.rx_queues = BIT(adapter->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(adapter->eth_dev->data->nb_tx_queues) - 1;
+
+	args.ops = VIRTCHNL_OP_ENABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+				 "OP_ENABLE_QUEUES");
+		return -ENOSYS;
+	}
+	return 0;
+}
+
+int
+avf_disable_queues(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+
+	queue_select.rx_queues = BIT(adapter->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(adapter->eth_dev->data->nb_tx_queues) - 1;
+
+	args.ops = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+				 "OP_DISABLE_QUEUES");
+		return -ENOSYS;
+	}
+	return 0;
+}
+
+int
+avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
+		 bool rx, bool on)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+	if (rx)
+		queue_select.rx_queues |= 1 << qid;
+	else
+		queue_select.tx_queues |= 1 << qid;
+
+	if (on)
+		args.ops = VIRTCHNL_OP_ENABLE_QUEUES;
+	else
+		args.ops = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+			    "%s", on ? "OP_ENABLE_QUEUES" : "OP_DISABLE_QUEUES");
+	return err;
+}
+
+int
+avf_configure_rss_lut(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_rss_lut *rss_lut;
+	struct avf_cmd_info args;
+	int len, err = 0;
+
+	len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (!rss_lut)
+		return -ENOMEM;
+
+	rss_lut->vsi_id = vf->vsi_res->vsi_id;
+	rss_lut->lut_entries = vf->vf_res->rss_lut_size;
+	rte_memcpy(rss_lut->lut, vf->rss_lut, vf->vf_res->rss_lut_size);
+
+	args.ops = VIRTCHNL_OP_CONFIG_RSS_LUT;
+	args.in_args = (u8 *)rss_lut;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+				 "OP_CONFIG_RSS_LUT");
+		err = -ENOSYS;
+	}
+	rte_free(rss_lut);
+	return err;
+}
+
+int
+avf_configure_rss_key(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_rss_key *rss_key;
+	struct avf_cmd_info args;
+	int len, err = 0;
+
+	len = sizeof(*rss_key) + vf->vf_res->rss_key_size - 1;
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (!rss_key)
+		return -ENOMEM;
+
+	rss_key->vsi_id = vf->vsi_res->vsi_id;
+	rss_key->key_len = vf->vf_res->rss_key_size;
+	rte_memcpy(rss_key->key, vf->rss_key, vf->vf_res->rss_key_size);
+
+	args.ops = VIRTCHNL_OP_CONFIG_RSS_KEY;
+	args.in_args = (u8 *)rss_key;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+				 "OP_CONFIG_RSS_KEY");
+		err = -ENOSYS;
+	}
+	rte_free(rss_key);
+	return err;
+}
+
+int
+avf_configure_queues(struct avf_adapter *adapter)
+{
+	struct avf_rx_queue **rxq =
+		(struct avf_rx_queue **)adapter->eth_dev->data->rx_queues;
+	struct avf_tx_queue **txq =
+		(struct avf_tx_queue **)adapter->eth_dev->data->tx_queues;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_vsi_queue_config_info *vc_config;
+	struct virtchnl_queue_pair_info *vc_qp;
+	struct avf_cmd_info args;
+	uint16_t i, size;
+	int err;
+
+	size = sizeof(*vc_config) +
+	       sizeof(vc_config->qpair[0]) * vf->num_queue_pairs;
+	vc_config = rte_zmalloc("cfg_queue", size, 0);
+	if (!vc_config)
+		return -ENOMEM;
+
+	vc_config->vsi_id = vf->vsi_res->vsi_id;
+	vc_config->num_queue_pairs = vf->num_queue_pairs;
+
+	for (i = 0, vc_qp = vc_config->qpair;
+	     i < vf->num_queue_pairs;
+	     i++, vc_qp++) {
+		vc_qp->txq.vsi_id = vf->vsi_res->vsi_id;
+		vc_qp->txq.queue_id = i;
+		/* Virtchnl configures queues in pairs */
+		if (i < adapter->eth_dev->data->nb_tx_queues) {
+			vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
+			vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_phys_addr;
+		}
+		vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
+		vc_qp->rxq.queue_id = i;
+		vc_qp->rxq.max_pkt_size = vf->max_pkt_len;
+		/* Virtchnl configures queues in pairs */
+		if (i < adapter->eth_dev->data->nb_rx_queues) {
+			vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
+			vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_phys_addr;
+			vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
+		}
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
+	args.in_args = (uint8_t *)vc_config;
+	args.in_args_size = size;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+				 "VIRTCHNL_OP_CONFIG_VSI_QUEUES");
+		err = -ENOSYS;
+	}
+	rte_free(vc_config);
+	return err;
+}
+
+int
+avf_config_irq_map(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_irq_map_info *map_info;
+	struct virtchnl_vector_map *vecmap;
+	struct avf_cmd_info args;
+	uint32_t vector_id;
+	int len, i, err;
+
+	len = sizeof(struct virtchnl_irq_map_info) +
+	      sizeof(struct virtchnl_vector_map) * vf->nb_msix;
+
+	map_info = rte_zmalloc("map_info", len, 0);
+	if (!map_info)
+		return -ENOMEM;
+
+	map_info->num_vectors = vf->nb_msix;
+	for (i = 0; i < vf->nb_msix; i++) {
+		vecmap = &map_info->vecmap[i];
+		vecmap->vsi_id = vf->vsi_res->vsi_id;
+		vecmap->rxitr_idx = AVF_ITR_INDEX_DEFAULT;
+		vecmap->vector_id = vf->msix_base + i;
+		vecmap->txq_map = 0;
+		vecmap->rxq_map = vf->rxq_map[vf->msix_base + i];
+	}
+
+	args.ops = VIRTCHNL_OP_CONFIG_IRQ_MAP;
+	args.in_args = (u8 *)map_info;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
+
+	rte_free(map_info);
+	return err;
+}
+
+void
+avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct ether_addr *addr;
+	struct avf_cmd_info args;
+	int len, err, i, j;
+	int next_begin = 0;
+	int begin = 0;
+
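+	/* MAC addresses are sent in batches: each virtchnl message is capped
+	 * at AVF_AQ_BUF_SZ, so the address list is split and the loop below
+	 * resumes from next_begin until all AVF_NUM_MACADDR_MAX slots are
+	 * scanned.
+	 */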
+	do {
+		j = 0;
+		len = sizeof(struct virtchnl_ether_addr_list);
+		for (i = begin; i < AVF_NUM_MACADDR_MAX; i++, next_begin++) {
+			addr = &adapter->eth_dev->data->mac_addrs[i];
+			if (is_zero_ether_addr(addr))
+				continue;
+			len += sizeof(struct virtchnl_ether_addr);
+			if (len >= AVF_AQ_BUF_SZ) {
+				next_begin = i + 1;
+				break;
+			}
+		}
+
+		list = rte_zmalloc("avf_del_mac_buffer", len, 0);
+		if (!list) {
+			PMD_DRV_LOG(ERR, "fail to allocate memory");
+			return;
+		}
+
+		for (i = begin; i < next_begin; i++) {
+			addr = &adapter->eth_dev->data->mac_addrs[i];
+			if (is_zero_ether_addr(addr))
+				continue;
+			rte_memcpy(list->list[j].addr, addr->addr_bytes,
+				   sizeof(addr->addr_bytes));
+			PMD_DRV_LOG(DEBUG, "add/rm mac:%x:%x:%x:%x:%x:%x",
+				    addr->addr_bytes[0], addr->addr_bytes[1],
+				    addr->addr_bytes[2], addr->addr_bytes[3],
+				    addr->addr_bytes[4], addr->addr_bytes[5]);
+			j++;
+		}
+		list->vsi_id = vf->vsi_res->vsi_id;
+		list->num_elements = j;
+		args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+			   VIRTCHNL_OP_DEL_ETH_ADDR;
+		args.in_args = (uint8_t *)list;
+		args.in_args_size = len;
+		args.out_buffer = vf->aq_resp;
+		args.out_size = AVF_AQ_BUF_SZ;
+		err = avf_execute_vf_cmd(adapter, &args);
+		if (err)
+			PMD_DRV_LOG(ERR, "fail to execute command %s",
+				    add ? "OP_ADD_ETHER_ADDRESS" :
+				    "OP_DEL_ETHER_ADDRESS");
+		rte_free(list);
+		begin = next_begin;
+	} while (begin < AVF_NUM_MACADDR_MAX);
+}
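
The do/while loop above splits the MAC table into as many admin-queue messages as needed so no single message exceeds AVF_AQ_BUF_SZ. A stand-alone sketch of that chunking idea, with made-up sizes chosen so several messages are produced:

#include <stdio.h>

#define BUF_SZ    128	/* stand-in for AVF_AQ_BUF_SZ */
#define HDR_SZ    8	/* stand-in for the list header */
#define ELEM_SZ   8	/* stand-in for one address element */
#define NUM_ADDR  64	/* stand-in for AVF_NUM_MACADDR_MAX */

int main(void)
{
	int begin = 0;

	while (begin < NUM_ADDR) {
		int len = HDR_SZ, count = 0, i;

		for (i = begin; i < NUM_ADDR; i++) {
			if (len + ELEM_SZ > BUF_SZ)
				break;
			len += ELEM_SZ;
			count++;
		}
		printf("send one message: %d addresses, %d bytes, resume at %d\n",
		       count, len, i);
		begin = i;
	}
	return 0;
}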
-- 
2.4.11


* [dpdk-dev] [PATCH v2 04/14] net/avf: enable basic Rx Tx func
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
                     ` (2 preceding siblings ...)
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 03/14] net/avf: enable queue and device Jingjing Wu
@ 2017-11-24  6:33   ` Jingjing Wu
  2017-12-04 19:57     ` Ferruh Yigit
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 05/14] net/avf: enable link status update Jingjing Wu
                     ` (12 subsequent siblings)
  16 siblings, 1 reply; 151+ messages in thread
From: Jingjing Wu @ 2017-11-24  6:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 config/common_base           |   4 +
 drivers/net/avf/Makefile     |   3 +
 drivers/net/avf/avf_ethdev.c |  36 +-
 drivers/net/avf/avf_log.h    |  27 +-
 drivers/net/avf/avf_rxtx.c   | 789 ++++++++++++++++++++++++++++++++++++++++++-
 drivers/net/avf/avf_rxtx.h   |  50 ++-
 6 files changed, 890 insertions(+), 19 deletions(-)

diff --git a/config/common_base b/config/common_base
index ce4d9bb..5a70485 100644
--- a/config/common_base
+++ b/config/common_base
@@ -229,6 +229,10 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 # Compile burst-oriented AVF PMD driver
 #
 CONFIG_RTE_LIBRTE_AVF_PMD=n
+CONFIG_RTE_LIBRTE_AVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_AVF_DEBUG_TX_FREE=n
+CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_AVF_16BYTE_RX_DESC=n
 
 #
 # Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index 1662c76..6193fa9 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -37,6 +37,9 @@ LIB = librte_pmd_avf.a
 
 CFLAGS += -O3
 
+# used to dump HW descriptor for debugging
+# CFLAGS += -DDEBUG_DUMP_DESC
+
 EXPORT_MAP := rte_pmd_avf_version.map
 
 LIBABIVER := 1
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 355e70b..eae5b65 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -68,6 +68,7 @@ static void avf_dev_stop(struct rte_eth_dev *dev);
 static void avf_dev_close(struct rte_eth_dev *dev);
 static void avf_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
+static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -82,6 +83,7 @@ static const struct eth_dev_ops avf_eth_dev_ops = {
 	.dev_stop                   = avf_dev_stop,
 	.dev_close                  = avf_dev_close,
 	.dev_infos_get              = avf_dev_info_get,
+	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -229,9 +231,12 @@ avf_init_queues(struct rte_eth_dev *dev)
 		if (ret != AVF_SUCCESS)
 			break;
 	}
-	/* TODO: set rx/tx function to vector/scatter/single-segment
+	/* set rx/tx function to vector/scatter/single-segment
 	 * according to parameters
 	 */
+	avf_set_rx_function(dev);
+	avf_set_tx_function(dev);
+
 	return ret;
 }
 
@@ -426,6 +431,23 @@ avf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	};
 }
 
+static const uint32_t *
+avf_dev_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_L4_NONFRAG,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_UNKNOWN
+	};
+	return ptypes;
+}
+
 static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
@@ -591,7 +613,19 @@ avf_dev_init(struct rte_eth_dev *eth_dev)
 
 	/* assign ops func pointer */
 	eth_dev->dev_ops = &avf_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &avf_recv_pkts;
+	eth_dev->tx_pkt_burst = &avf_xmit_pkts;
+	eth_dev->tx_pkt_prepare = &avf_prep_pkts;
 
+	/* For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check if we need a different RX
+	 * and TX function.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		avf_set_rx_function(eth_dev);
+		avf_set_tx_function(eth_dev);
+		return 0;
+	}
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
 	hw->vendor_id = pci_dev->id.vendor_id;
diff --git a/drivers/net/avf/avf_log.h b/drivers/net/avf/avf_log.h
index 25e853b..1948966 100644
--- a/drivers/net/avf/avf_log.h
+++ b/drivers/net/avf/avf_log.h
@@ -31,8 +31,8 @@
  *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
-#ifndef _AVF_LOGS_H_
-#define _AVF_LOGS_H_
+#ifndef _AVF_LOG_H_
+#define _AVF_LOG_H_
 
 extern int avf_logtype_init;
 #define PMD_INIT_LOG(level, fmt, args...) \
@@ -49,4 +49,25 @@ extern int avf_logtype_driver;
 	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
 #define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
 
-#endif /* _AVF_LOGS_H_ */
+#ifdef RTE_LIBRTE_AVF_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVF_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVF_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#endif /* _AVF_LOG_H_ */
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index 2edd455..7d48d38 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -63,17 +63,11 @@ static inline int
 check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
 {
 	/* The following constraints must be satisfied:
-	 *   thresh >= AVF_RX_MAX_BURST
 	 *   thresh < rxq->nb_rx_desc
-	 *   (rxq->nb_rx_desc % thresh) == 0
 	 */
-	if (thresh < AVF_RX_MAX_BURST ||
-	    thresh >= nb_desc ||
-	    (nb_desc % thresh != 0)) {
-		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u, "
-			     "greater than or equal to %u, "
-			     "and a divisor of %u",
-			     thresh, nb_desc, AVF_RX_MAX_BURST, nb_desc);
+	if (thresh >= nb_desc) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
+			     thresh, nb_desc);
 		return -EINVAL;
 	}
 	return 0;
@@ -642,3 +636,780 @@ avf_stop_queues(struct rte_eth_dev *dev)
 		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 }
+
+static inline void
+avf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union avf_rx_desc *rxdp)
+{
+	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		(1 << AVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
+		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->vlan_tci =
+			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+	} else {
+		mb->vlan_tci = 0;
+	}
+}
+
+/* Translate the rx descriptor status and error fields to pkt flags */
+static inline uint64_t
+avf_rxd_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags;
+	uint64_t error_bits = (qword >> AVF_RXD_QW1_ERROR_SHIFT);
+
+#define AVF_RX_ERR_BITS 0x3f
+
+	/* Check if RSS_HASH */
+	flags = (((qword >> AVF_RX_DESC_STATUS_FLTSTAT_SHIFT) &
+					AVF_RX_DESC_FLTSTAT_RSS_HASH) ==
+			AVF_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+
+	if (likely((error_bits & AVF_RX_ERR_BITS) == 0)) {
+		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely(error_bits & (1 << AVF_RX_DESC_ERROR_IPE_SHIFT)))
+		flags |= PKT_RX_IP_CKSUM_BAD;
+	else
+		flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << AVF_RX_DESC_ERROR_L4E_SHIFT)))
+		flags |= PKT_RX_L4_CKSUM_BAD;
+	else
+		flags |= PKT_RX_L4_CKSUM_GOOD;
+
+	/* TODO: Oversize error bit is not processed here */
+
+	return flags;
+}
+
+/* implement recv_pkts */
+uint16_t
+avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	volatile union avf_rx_desc *rx_ring;
+	volatile union avf_rx_desc *rxdp;
+	struct avf_rx_queue *rxq;
+	union avf_rx_desc rxd;
+	struct rte_mbuf *rxe;
+	struct rte_eth_dev *dev;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	uint16_t nb_rx;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint16_t rx_packet_len;
+	uint16_t rx_id, nb_hold;
+	uint64_t dma_addr;
+	uint64_t pkt_flags;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	nb_rx = 0;
+	nb_hold = 0;
+	rxq = rx_queue;
+	rx_id = rxq->rx_tail;
+	rx_ring = rxq->rx_ring;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			    AVF_RXD_QW1_STATUS_SHIFT;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+			break;
+		AVF_DUMP_RX_DESC(rxq, rxdp, rx_id);
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = &rte_eth_devices[rxq->port_id];
+			dev->data->rx_mbuf_alloc_failed++;
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+				   "queue_id=%u", rxq->port_id, rxq->queue_id);
+			break;
+		}
+
+		rxd = *rxdp;
+		nb_hold++;
+		rxe = rxq->sw_ring[rx_id];
+		/* Store the new mbuf in the SW ring; the old one (rxe)
+		 * is handed to the application below.
+		 */
+		rxq->sw_ring[rx_id] = nmb;
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		rxm = rxe;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+
+		rx_packet_len = ((qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				AVF_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
+
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
+		avf_rxd_to_vlan_tci(rxm, &rxd);
+		pkt_flags = avf_rxd_to_pkt_flags(qword1);
+		rxm->packet_type =
+			ptype_tbl[(uint8_t)((qword1 &
+			AVF_RXD_QW1_PTYPE_MASK) >> AVF_RXD_QW1_PTYPE_SHIFT)];
+
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			rxm->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		rxm->ol_flags |= pkt_flags;
+
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+
+	/* If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the receive tail register of queue.
+	 * Update that register with the value of the last processed RX
+	 * descriptor minus 1.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+			   "nb_hold=%u nb_rx=%u",
+			   rxq->port_id, rxq->queue_id,
+			   rx_id, nb_hold, nb_rx);
+		rx_id = (uint16_t)((rx_id == 0) ?
+			(rxq->nb_rx_desc - 1) : (rx_id - 1));
+		AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	return nb_rx;
+}
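
The tail update above writes the index of the last descriptor returned to hardware, i.e. one position before the next descriptor to be polled, wrapping at the ring size. A tiny self-contained sketch of that arithmetic:

#include <stdio.h>
#include <stdint.h>

/* Index one before 'next' on a ring of nb_desc descriptors. */
static uint16_t prev_index(uint16_t next, uint16_t nb_desc)
{
	return (uint16_t)(next == 0 ? nb_desc - 1 : next - 1);
}

int main(void)
{
	uint16_t nb_desc = 512;

	printf("%u\n", prev_index(10, nb_desc));	/* 9 */
	printf("%u\n", prev_index(0, nb_desc));		/* 511: wrapped */
	return 0;
}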
+
+/* implement recv_scattered_pkts  */
+uint16_t
+avf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	union avf_rx_desc rxd;
+	struct rte_mbuf *rxe;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *nmb, *rxm;
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0, nb_hold = 0, rx_packet_len;
+	struct rte_eth_dev *dev;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags;
+
+	volatile union avf_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union avf_rx_desc *rxdp;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			    AVF_RXD_QW1_STATUS_SHIFT;
+
+		/* Check the DD bit */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+			break;
+		AVF_DUMP_RX_DESC(rxq, rxdp, rx_id);
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+				   "queue_id=%u", rxq->port_id, rxq->queue_id);
+			dev = &rte_eth_devices[rxq->port_id];
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+
+		rxd = *rxdp;
+		nb_hold++;
+		rxe = rxq->sw_ring[rx_id];
+		/* Store the new mbuf in the SW ring; the old one (rxe)
+		 * is handed to the application below.
+		 */
+		rxq->sw_ring[rx_id] = nmb;
+		rx_id++;
+		if (rx_id == rxq->nb_rx_desc)
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+
+		rxm = rxe;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/* Set data buffer address and data length of the mbuf */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				 AVF_RXD_QW1_LENGTH_PBUF_SHIFT;
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/* If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+						rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		/* If this is not the last buffer of the received packet,
+		 * update the pointer to the last mbuf of the current scattered
+		 * packet and continue to parse the RX ring.
+		 */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_EOF_SHIFT))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		/* This is the last buffer of the received packet. If the CRC
+		 * is not stripped by the hardware:
+		 *  - Subtract the CRC length from the total packet length.
+		 *  - If the last buffer only contains the whole CRC or a part
+		 *  of it, free the mbuf associated to the last buffer. If part
+		 *  of the CRC is also contained in the previous mbuf, subtract
+		 *  the length of that CRC part from the data length of the
+		 *  previous mbuf.
+		 */
+		rxm->next = NULL;
+		if (unlikely(rxq->crc_len > 0)) {
+			first_seg->pkt_len -= ETHER_CRC_LEN;
+			if (rx_packet_len <= ETHER_CRC_LEN) {
+				rte_pktmbuf_free_seg(rxm);
+				first_seg->nb_segs--;
+				last_seg->data_len =
+					(uint16_t)(last_seg->data_len -
+					(ETHER_CRC_LEN - rx_packet_len));
+				last_seg->next = NULL;
+			} else
+				rxm->data_len = (uint16_t)(rx_packet_len -
+								ETHER_CRC_LEN);
+		}
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+		avf_rxd_to_vlan_tci(first_seg, &rxd);
+		pkt_flags = avf_rxd_to_pkt_flags(qword1);
+		first_seg->packet_type =
+			ptype_tbl[(uint8_t)((qword1 &
+			AVF_RXD_QW1_PTYPE_MASK) >> AVF_RXD_QW1_PTYPE_SHIFT)];
+
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			first_seg->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		first_seg->ol_flags |= pkt_flags;
+
+		/* Prefetch data of first segment, if configured to do so. */
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+
+	/* Record index of the next RX descriptor to probe. */
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	/* If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register. Update the RDT with the value of the last processed RX
+	 * descriptor minus 1, to guarantee that the RDT register is never
+	 * equal to the RDH register, which creates a "full" ring situation
+	 * from the hardware point of view.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+			   "nb_hold=%u nb_rx=%u",
+			   rxq->port_id, rxq->queue_id,
+			   rx_id, nb_hold, nb_rx);
+		rx_id = (uint16_t)(rx_id == 0 ?
+			(rxq->nb_rx_desc - 1) : (rx_id - 1));
+		AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	return nb_rx;
+}
+
+static inline int
+avf_xmit_cleanup(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *sw_ring = txq->sw_ring;
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+
+	volatile struct avf_tx_desc *txd = txq->tx_ring;
+
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	if ((txd[desc_to_clean_to].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK)) !=
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE)) {
+		PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
+				"(port=%d queue=%d)", desc_to_clean_to,
+				txq->port_id, txq->queue_id);
+		return -1;
+	}
+
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+							desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+					last_desc_cleaned);
+
+	txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
+
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+
+	return 0;
+}
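
avf_xmit_cleanup() advances in rs_thresh-sized steps, and because both indices live on a circular ring the count of cleaned descriptors must handle wrap-around. A minimal sketch of that computation:

#include <stdio.h>
#include <stdint.h>

/* Number of descriptors between last_cleaned (exclusive) and clean_to
 * (inclusive) on a ring of nb_desc entries.
 */
static uint16_t ring_distance(uint16_t last_cleaned, uint16_t clean_to,
			      uint16_t nb_desc)
{
	if (last_cleaned > clean_to)
		return (uint16_t)((nb_desc - last_cleaned) + clean_to);
	return (uint16_t)(clean_to - last_cleaned);
}

int main(void)
{
	printf("%u\n", ring_distance(100, 131, 512));	/* 31 */
	printf("%u\n", ring_distance(500, 19, 512));	/* 31, wrapped */
	return 0;
}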
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+avf_calc_context_desc(uint64_t flags)
+{
+	static uint64_t mask = PKT_TX_TCP_SEG;
+
+	return (flags & mask) ? 1 : 0;
+}
+
+static inline void
+avf_txd_enable_checksum(uint64_t ol_flags,
+			uint32_t *td_cmd,
+			uint32_t *td_offset,
+			union avf_tx_offload tx_offload)
+{
+	/* Set MACLEN */
+	*td_offset |= (tx_offload.l2_len >> 1) <<
+		      AVF_TX_DESC_LENGTH_MACLEN_SHIFT;
+
+	/* Enable L3 checksum offloads */
+	if (ol_flags & PKT_TX_IP_CKSUM) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV4_CSUM;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	} else if (ol_flags & PKT_TX_IPV4) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV4;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	} else if (ol_flags & PKT_TX_IPV6) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV6;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	}
+
+	if (ol_flags & PKT_TX_TCP_SEG) {
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (tx_offload.l4_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		return;
+	}
+
+	/* Enable L4 checksum offloads */
+	switch (ol_flags & PKT_TX_L4_MASK) {
+	case PKT_TX_TCP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (sizeof(struct tcp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	case PKT_TX_SCTP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_SCTP;
+		*td_offset |= (sizeof(struct sctp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	case PKT_TX_UDP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_UDP;
+		*td_offset |= (sizeof(struct udp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	default:
+		break;
+	}
+}
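
td_offset encodes header lengths in hardware units: the MAC length in 2-byte words (l2_len >> 1) and the IP/L4 lengths in 4-byte words (len >> 2). A self-contained sketch of that packing for an Ethernet/IPv4/TCP frame; the shift values are stand-ins chosen only for illustration, not the real AVF_TX_DESC_LENGTH_* constants:

#include <stdio.h>
#include <stdint.h>

/* Stand-in field positions, not the real AVF descriptor layout. */
#define MACLEN_SHIFT	0
#define IPLEN_SHIFT	7
#define L4LEN_SHIFT	14

int main(void)
{
	uint32_t l2_len = 14, l3_len = 20, l4_len = 20;	/* Ether/IPv4/TCP */
	uint32_t td_offset = 0;

	td_offset |= (l2_len >> 1) << MACLEN_SHIFT;	/* 7 two-byte words */
	td_offset |= (l3_len >> 2) << IPLEN_SHIFT;	/* 5 four-byte words */
	td_offset |= (l4_len >> 2) << L4LEN_SHIFT;	/* 5 four-byte words */

	printf("td_offset = 0x%05x\n", td_offset);
	return 0;
}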
+
+/* set TSO context descriptor
+ * support IP -> L4 and IP -> IP -> L4
+ */
+static inline uint64_t
+avf_set_tso_ctx(struct rte_mbuf *mbuf, union avf_tx_offload tx_offload)
+{
+	uint64_t ctx_desc = 0;
+	uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+	if (!tx_offload.l4_len) {
+		PMD_TX_LOG(DEBUG, "L4 length set to 0");
+		return ctx_desc;
+	}
+
+	/* In case of a non-tunneling packet, the outer_l2_len and
+	 * outer_l3_len must be 0.
+	 */
+	hdr_len = tx_offload.l2_len +
+		  tx_offload.l3_len +
+		  tx_offload.l4_len;
+
+	cd_cmd = AVF_TX_CTX_DESC_TSO;
+	cd_tso_len = mbuf->pkt_len - hdr_len;
+	ctx_desc |= ((uint64_t)cd_cmd << AVF_TXD_CTX_QW1_CMD_SHIFT) |
+		     ((uint64_t)cd_tso_len << AVF_TXD_CTX_QW1_TSO_LEN_SHIFT) |
+		     ((uint64_t)mbuf->tso_segsz << AVF_TXD_CTX_QW1_MSS_SHIFT);
+
+	return ctx_desc;
+}
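
For TSO the context descriptor carries the total payload length (packet length minus all headers) and the MSS, and the hardware segments the payload on its own. A short worked sketch of the lengths computed above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Example: ~64 KB TCP send, Ether(14) + IPv4(20) + TCP(20) headers */
	uint32_t pkt_len = 65535, l2_len = 14, l3_len = 20, l4_len = 20;
	uint32_t mss = 1460;

	uint32_t hdr_len = l2_len + l3_len + l4_len;
	uint32_t tso_len = pkt_len - hdr_len;	/* payload handed to HW */

	printf("hdr %u bytes, TSO payload %u bytes, ~%u segments at MSS %u\n",
	       hdr_len, tso_len, (tso_len + mss - 1) / mss, mss);
	return 0;
}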
+
+/* Construct the tx flags */
+static inline uint64_t
+avf_build_ctob(uint32_t td_cmd, uint32_t td_offset, unsigned int size,
+	       uint32_t td_tag)
+{
+	return rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd  << AVF_TXD_QW1_CMD_SHIFT) |
+				((uint64_t)td_offset <<
+				 AVF_TXD_QW1_OFFSET_SHIFT) |
+				((uint64_t)size  <<
+				 AVF_TXD_QW1_TX_BUF_SZ_SHIFT) |
+				((uint64_t)td_tag  <<
+				 AVF_TXD_QW1_L2TAG1_SHIFT));
+}
+
+/* TX function */
+uint16_t
+avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	volatile struct avf_tx_desc *txd;
+	volatile struct avf_tx_desc *txr;
+	struct avf_tx_queue *txq;
+	struct avf_tx_entry *sw_ring;
+	struct avf_tx_entry *txe, *txn;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	uint16_t tx_id;
+	uint16_t nb_tx;
+	uint32_t td_cmd;
+	uint32_t td_offset;
+	uint32_t td_tag;
+	uint64_t ol_flags;
+	uint16_t nb_used;
+	uint16_t nb_ctx;
+	uint16_t tx_last;
+	uint16_t slen;
+	uint64_t buf_dma_addr;
+	union avf_tx_offload tx_offload = {0};
+
+	txq = tx_queue;
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+	tx_id = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Check if the descriptor ring needs to be cleaned. */
+	if (txq->nb_free < txq->free_thresh)
+		avf_xmit_cleanup(txq);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		td_cmd = 0;
+		td_tag = 0;
+		td_offset = 0;
+
+		tx_pkt = *tx_pkts++;
+		RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
+
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = avf_calc_context_desc(ol_flags);
+
+		/* The number of descriptors that must be allocated for
+		 * a packet equals to the number of the segments of that
+		 * packet plus 1 context descriptor if needed.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+		PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u"
+			   " tx_first=%u tx_last=%u",
+			   txq->port_id, txq->queue_id, tx_id, tx_last);
+
+		if (nb_used > txq->nb_free) {
+			if (avf_xmit_cleanup(txq)) {
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+			if (unlikely(nb_used > txq->rs_thresh)) {
+				while (nb_used > txq->nb_free) {
+					if (avf_xmit_cleanup(txq)) {
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		/* Descriptor based VLAN insertion */
+		if (ol_flags & PKT_TX_VLAN_PKT) {
+			td_cmd |= AVF_TX_DESC_CMD_IL2TAG1;
+			td_tag = tx_pkt->vlan_tci;
+		}
+
+		/* According to the datasheet, bit 2 is reserved and must be
+		 * set to 1.
+		 */
+		td_cmd |= 0x04;
+
+		/* Enable checksum offloading */
+		if (ol_flags & AVF_TX_CKSUM_OFFLOAD_MASK)
+			avf_txd_enable_checksum(ol_flags, &td_cmd,
+						&td_offset, tx_offload);
+
+		if (nb_ctx) {
+			/* Setup TX context descriptor if required */
+			volatile struct avf_tx_context_desc *ctx_txd =
+				(volatile struct avf_tx_context_desc *)
+					&txr[tx_id];
+			uint16_t cd_l2tag2 = 0;
+			uint64_t cd_type_cmd_tso_mss =
+				AVF_TX_DESC_DTYPE_CONTEXT;
+
+			txn = &sw_ring[txe->next_id];
+			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+			if (txe->mbuf) {
+				rte_pktmbuf_free_seg(txe->mbuf);
+				txe->mbuf = NULL;
+			}
+
+			/* TSO enabled */
+			if (ol_flags & PKT_TX_TCP_SEG)
+				cd_type_cmd_tso_mss |=
+					avf_set_tso_ctx(tx_pkt, tx_offload);
+
+			AVF_DUMP_TX_DESC(txq, ctx_txd, tx_id);
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+		}
+
+		m_seg = tx_pkt;
+		do {
+			txd = &txr[tx_id];
+			txn = &sw_ring[txe->next_id];
+
+			if (txe->mbuf)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/* Setup TX Descriptor */
+			slen = m_seg->data_len;
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
+			txd->cmd_type_offset_bsz = avf_build_ctob(td_cmd,
+								  td_offset,
+								  slen,
+								  td_tag);
+
+			AVF_DUMP_TX_DESC(txq, txd, tx_id);
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg);
+
+		/* The last packet data descriptor needs End Of Packet (EOP) */
+		td_cmd |= AVF_TX_DESC_CMD_EOP;
+		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+
+		if (txq->nb_used >= txq->rs_thresh) {
+			PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
+				   "%4u (port=%d queue=%d)",
+				   tx_last, txq->port_id, txq->queue_id);
+
+			td_cmd |= AVF_TX_DESC_CMD_RS;
+
+			/* Update txq RS bit counters */
+			txq->nb_used = 0;
+		}
+
+		txd->cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)td_cmd) <<
+					 AVF_TXD_QW1_CMD_SHIFT);
+		AVF_DUMP_TX_DESC(txq, txd, tx_id);
+	}
+
+end_of_tx:
+	rte_wmb();
+
+	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+		   txq->port_id, txq->queue_id, tx_id, nb_tx);
+
+	AVF_PCI_REG_WRITE_RELAXED(txq->qtx_tail, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
+/* TX prep functions */
+uint16_t
+avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	      uint16_t nb_pkts)
+{
+	int i, ret;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		/* Check condition for nb_segs > AVF_TX_MAX_MTU_SEG. */
+		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+			if (m->nb_segs > AVF_TX_MAX_MTU_SEG) {
+				rte_errno = -EINVAL;
+				return i;
+			}
+		} else if ((m->tso_segsz < AVF_MIN_TSO_MSS) ||
+			   (m->tso_segsz > AVF_MAX_TSO_MSS)) {
+			/* MSS outside the range are considered malicious */
+			rte_errno = -EINVAL;
+			return i;
+		}
+
+		if (ol_flags & AVF_TX_OFFLOAD_NOTSUP_MASK) {
+			rte_errno = -ENOTSUP;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			return i;
+		}
+#endif
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			return i;
+		}
+	}
+
+	return i;
+}
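
With tx_pkt_prepare wired up, an application using TSO or checksum offloads is expected to run its burst through rte_eth_tx_prepare() before rte_eth_tx_burst(). A hedged usage sketch, assuming the 17.11+ ethdev API with 16-bit port IDs; the helper name is made up:

#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_ethdev.h>
#include <rte_errno.h>

/* Sketch only: send a prepared burst on (port_id, queue_id). The caller is
 * assumed to have set l2/l3/l4 lengths and offload flags on each mbuf.
 */
static uint16_t
send_burst(uint16_t port_id, uint16_t queue_id,
	   struct rte_mbuf **pkts, uint16_t nb)
{
	uint16_t nb_prep = rte_eth_tx_prepare(port_id, queue_id, pkts, nb);

	if (nb_prep < nb)
		/* rte_errno tells why pkts[nb_prep] was rejected. */
		printf("tx_prepare rejected packet %u: rte_errno %d\n",
		       nb_prep, rte_errno);

	return rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);
}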
+
+/* Choose the Rx burst function */
+void
+avf_set_rx_function(struct rte_eth_dev *dev)
+{
+	if (dev->data->scattered_rx)
+		dev->rx_pkt_burst = avf_recv_scattered_pkts;
+	else
+		dev->rx_pkt_burst = avf_recv_pkts;
+}
+
+/* Choose the Tx burst and prepare functions */
+void
+avf_set_tx_function(struct rte_eth_dev *dev)
+{
+	dev->tx_pkt_burst = avf_xmit_pkts;
+	dev->tx_pkt_prepare = avf_prep_pkts;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index 0247339..342b577 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -48,6 +48,25 @@
 #define DEFAULT_TX_RS_THRESH     32
 #define DEFAULT_TX_FREE_THRESH   32
 
+#define AVF_MIN_TSO_MSS          256
+#define AVF_MAX_TSO_MSS          9668
+#define AVF_TSO_MAX_SEG          UINT8_MAX
+#define AVF_TX_MAX_MTU_SEG       8
+
+#define AVF_TX_CKSUM_OFFLOAD_MASK (		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG)
+
+#define AVF_TX_OFFLOAD_MASK (  \
+		PKT_TX_VLAN_PKT |		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG)
+
+#define AVF_TX_OFFLOAD_NOTSUP_MASK \
+		(PKT_TX_OFFLOAD_MASK ^ AVF_TX_OFFLOAD_MASK)
+
 /* HW desc structure, both 16-byte and 32-byte types are supported */
 #ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
 #define avf_rx_desc avf_16byte_rx_desc
@@ -113,6 +132,18 @@ struct avf_tx_queue {
 	bool tx_deferred_start;        /* don't start this queue in dev start */
 };
 
+/* Offload features */
+union avf_tx_offload {
+	uint64_t data;
+	struct {
+		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /* L3 (IP) Header Length. */
+		uint64_t l4_len:8; /* L4 Header Length. */
+		uint64_t tso_segsz:16; /* TCP TSO segment size */
+		/* uint64_t unused : 24; */
+	};
+};
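
The union above packs all Tx offload lengths into one 64-bit word so they can be copied and compared cheaply. A self-contained sketch of the same bit-field trick, with the field widths copied from the definition above:

#include <stdio.h>
#include <stdint.h>

union tx_offload_stub {
	uint64_t data;
	struct {
		uint64_t l2_len:7;	/* L2 (MAC) header length */
		uint64_t l3_len:9;	/* L3 (IP) header length */
		uint64_t l4_len:8;	/* L4 header length */
		uint64_t tso_segsz:16;	/* TCP TSO segment size */
	};
};

int main(void)
{
	union tx_offload_stub off = { .data = 0 };

	off.l2_len = 14;
	off.l3_len = 20;
	off.l4_len = 20;
	off.tso_segsz = 1460;

	/* A single 64-bit copy or compare now covers all four fields. */
	printf("packed offload word: 0x%016llx\n",
	       (unsigned long long)off.data);
	return 0;
}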
+
 int avf_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			   uint16_t queue_idx,
 			   uint16_t nb_desc,
@@ -133,6 +164,17 @@ int avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void avf_dev_tx_queue_release(void *txq);
 void avf_stop_queues(struct rte_eth_dev *dev);
+uint16_t avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts);
+uint16_t avf_recv_scattered_pkts(void *rx_queue,
+				 struct rte_mbuf **rx_pkts,
+				 uint16_t nb_pkts);
+uint16_t avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+uint16_t avf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void avf_set_rx_function(struct rte_eth_dev *dev);
+void avf_set_tx_function(struct rte_eth_dev *dev);
 
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
@@ -185,17 +227,13 @@ void avf_dump_tx_descriptor(const struct avf_tx_queue *txq,
 	       tx_desc->cmd_type_offset_bsz);
 }
 
-#ifdef RTE_LIBRTE_AVF_RX_DUMP
+#ifdef DEBUG_DUMP_DESC
 #define AVF_DUMP_RX_DESC(rxq, desc, rx_id) \
 	avf_dump_rx_descriptor(rxq, desc, rx_id);
-#else
-#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
-#endif
-
-#ifdef RTE_LIBRTE_AVF_TX_DUMP
 #define AVF_DUMP_TX_DESC(txq, desc, tx_id) \
 	avf_dump_tx_descriptor(txq, desc, tx_id);
 #else
+#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
 #define AVF_DUMP_TX_DESC(txq, desc, tx_id) do { } while (0)
 #endif
 
-- 
2.4.11


* [dpdk-dev] [PATCH v2 05/14] net/avf: enable link status update
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
                     ` (3 preceding siblings ...)
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 04/14] net/avf: enable basic Rx Tx func Jingjing Wu
@ 2017-11-24  6:33   ` Jingjing Wu
  2017-12-04 19:58     ` Ferruh Yigit
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 06/14] net/avf: enable ops to get stats Jingjing Wu
                     ` (11 subsequent siblings)
  16 siblings, 1 reply; 151+ messages in thread
From: Jingjing Wu @ 2017-11-24  6:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/avf/avf.h        |  2 ++
 drivers/net/avf/avf_ethdev.c | 48 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c  | 38 ++++++++++++++++++++++++++++++++++-
 3 files changed, 87 insertions(+), 1 deletion(-)

diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 1a62ad7..cbc6108 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -229,4 +229,6 @@ int avf_configure_rss_key(struct avf_adapter *adapter);
 int avf_configure_queues(struct avf_adapter *adapter);
 int avf_config_irq_map(struct avf_adapter *adapter);
 void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
+int avf_dev_link_update(struct rte_eth_dev *dev,
+			__rte_unused int wait_to_complete);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index eae5b65..d02fc18 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -84,6 +84,7 @@ static const struct eth_dev_ops avf_eth_dev_ops = {
 	.dev_close                  = avf_dev_close,
 	.dev_infos_get              = avf_dev_info_get,
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
+	.link_update                = avf_dev_link_update,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -448,6 +449,53 @@ avf_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 	return ptypes;
 }
 
+int
+avf_dev_link_update(struct rte_eth_dev *dev,
+		    __rte_unused int wait_to_complete)
+{
+	struct rte_eth_link new_link;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	/* Only read the link status cached in the VF; it is updated
+	 * when a LINK_CHANGE event is received from the PF over virtchnl.
+	 */
+	switch (vf->link_speed) {
+	case VIRTCHNL_LINK_SPEED_100MB:
+		new_link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+	case VIRTCHNL_LINK_SPEED_1GB:
+		new_link.link_speed = ETH_SPEED_NUM_1G;
+		break;
+	case VIRTCHNL_LINK_SPEED_10GB:
+		new_link.link_speed = ETH_SPEED_NUM_10G;
+		break;
+	case VIRTCHNL_LINK_SPEED_20GB:
+		new_link.link_speed = ETH_SPEED_NUM_20G;
+		break;
+	case VIRTCHNL_LINK_SPEED_25GB:
+		new_link.link_speed = ETH_SPEED_NUM_25G;
+		break;
+	case VIRTCHNL_LINK_SPEED_40GB:
+		new_link.link_speed = ETH_SPEED_NUM_40G;
+		break;
+	default:
+		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		break;
+	}
+
+	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vf->link_up ? ETH_LINK_UP :
+					     ETH_LINK_DOWN;
+	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+				  ETH_LINK_SPEED_FIXED);
+
+	rte_atomic64_cmpset((uint64_t *)&(dev->data->dev_link),
+			    *(uint64_t *)&(dev->data->dev_link),
+			    *(uint64_t *)&new_link);
+
+	return 0;
+}
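
Since the VF only caches what the PF last reported, applications normally read this state through the generic ethdev call rather than any register. A hedged usage sketch, assuming the ethdev API of this period where rte_eth_link_get_nowait() returns void; the helper name is made up:

#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch only: print the cached link state of 'port_id'. */
static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	rte_eth_link_get_nowait(port_id, &link);	/* non-blocking query */
	printf("port %u: %s, %u Mbps, %s-duplex\n", port_id,
	       link.link_status ? "up" : "down",
	       link.link_speed,
	       link.link_duplex == ETH_LINK_FULL_DUPLEX ? "full" : "half");
}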
+
 static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index 857b263..e6cc18e 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -161,6 +161,41 @@ avf_execute_vf_cmd(struct avf_adapter *adapter, struct avf_cmd_info *args)
 	return err;
 }
 
+static void
+avf_handle_pf_event_msg(struct rte_eth_dev *dev, uint8_t *msg,
+			uint16_t msglen)
+{
+	struct virtchnl_pf_event *pf_msg =
+			(struct virtchnl_pf_event *)msg;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	if (msglen < sizeof(struct virtchnl_pf_event)) {
+		PMD_DRV_LOG(DEBUG, "Error event");
+		return;
+	}
+	switch (pf_msg->event) {
+	case VIRTCHNL_EVENT_RESET_IMPENDING:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_RESET_IMPENDING event");
+		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET,
+					      NULL, NULL);
+		break;
+	case VIRTCHNL_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_LINK_CHANGE event");
+		vf->link_up = pf_msg->event_data.link_event.link_status;
+		vf->link_speed = pf_msg->event_data.link_event.link_speed;
+		avf_dev_link_update(dev, 0);
+		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC,
+					      NULL, NULL);
+		break;
+	case VIRTCHNL_EVENT_PF_DRIVER_CLOSE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_PF_DRIVER_CLOSE event");
+		break;
+	default:
+		PMD_DRV_LOG(ERR, " unknown event received %u", pf_msg->event);
+		break;
+	}
+}
+
 void
 avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
 {
@@ -200,7 +235,8 @@ avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
 		switch (aq_opc) {
 		case avf_aqc_opc_send_msg_to_vf:
 			if (msg_opc == VIRTCHNL_OP_EVENT) {
-				/* TODO */
+				avf_handle_pf_event_msg(dev, info.msg_buf,
+							info.msg_len);
 			} else {
 				/* read message and it's expected one */
 				if (msg_opc == vf->pend_cmd) {
-- 
2.4.11


* [dpdk-dev] [PATCH v2 06/14] net/avf: enable ops to get stats
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
                     ` (4 preceding siblings ...)
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 05/14] net/avf: enable link status update Jingjing Wu
@ 2017-11-24  6:33   ` Jingjing Wu
  2017-12-04 19:58     ` Ferruh Yigit
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 07/14] net/avf: enable ops for MAC VLAN offload Jingjing Wu
                     ` (10 subsequent siblings)
  16 siblings, 1 reply; 151+ messages in thread
From: Jingjing Wu @ 2017-11-24  6:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/avf/avf.h        |  2 ++
 drivers/net/avf/avf_ethdev.c | 27 +++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c  | 26 ++++++++++++++++++++++++++
 3 files changed, 55 insertions(+)

diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index cbc6108..88858ed 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -231,4 +231,6 @@ int avf_config_irq_map(struct avf_adapter *adapter);
 void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
 int avf_dev_link_update(struct rte_eth_dev *dev,
 			__rte_unused int wait_to_complete);
+int avf_query_stats(struct avf_adapter *adapter,
+		    struct virtchnl_eth_stats **pstats);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index d02fc18..9087854 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -69,6 +69,8 @@ static void avf_dev_close(struct rte_eth_dev *dev);
 static void avf_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+static int avf_dev_stats_get(struct rte_eth_dev *dev,
+			     struct rte_eth_stats *stats);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -85,6 +87,7 @@ static const struct eth_dev_ops avf_eth_dev_ops = {
 	.dev_infos_get              = avf_dev_info_get,
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.link_update                = avf_dev_link_update,
+	.stats_get                  = avf_dev_stats_get,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -497,6 +500,30 @@ avf_dev_link_update(struct rte_eth_dev *dev,
 }
 
 static int
+avf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct virtchnl_eth_stats *pstats = NULL;
+	int ret;
+
+	ret = avf_query_stats(adapter, &pstats);
+	if (ret == 0) {
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+						pstats->rx_broadcast;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->obytes = pstats->tx_bytes;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return -EIO;
+	}
+
+	return 0;
+}
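
From the application side the counters filled in above surface through the generic stats API. A hedged usage sketch; the helper name is made up:

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Sketch only: dump the basic counters of 'port_id'. */
static void
print_basic_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) != 0) {
		printf("port %u: stats query failed\n", port_id);
		return;
	}
	printf("port %u: rx %" PRIu64 " pkts / %" PRIu64 " bytes, "
	       "tx %" PRIu64 " pkts, missed %" PRIu64 "\n",
	       port_id, stats.ipackets, stats.ibytes,
	       stats.opackets, stats.imissed);
}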
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index e6cc18e..eca33e7 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -725,3 +725,29 @@ avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add)
 		begin = next_begin;
 	} while (begin < AVF_NUM_MACADDR_MAX);
 }
+int
+avf_query_stats(struct avf_adapter *adapter,
+		struct virtchnl_eth_stats **pstats)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select q_stats;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&q_stats, 0, sizeof(q_stats));
+	q_stats.vsi_id = vf->vsi_res->vsi_id;
+	args.ops = VIRTCHNL_OP_GET_STATS;
+	args.in_args = (uint8_t *)&q_stats;
+	args.in_args_size = sizeof(q_stats);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to execute command OP_GET_STATS");
+		*pstats = NULL;
+		return err;
+	}
+	*pstats = (struct virtchnl_eth_stats *)args.out_buffer;
+	return 0;
+}
-- 
2.4.11


* [dpdk-dev] [PATCH v2 07/14] net/avf: enable ops for MAC VLAN offload
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
                     ` (5 preceding siblings ...)
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 06/14] net/avf: enable ops to get stats Jingjing Wu
@ 2017-11-24  6:33   ` Jingjing Wu
  2017-12-04 19:59     ` Ferruh Yigit
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 08/14] net/avf: enable ops for RSS setting Jingjing Wu
                     ` (9 subsequent siblings)
  16 siblings, 1 reply; 151+ messages in thread
From: Jingjing Wu @ 2017-11-24  6:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

 - promiscuous_enable
 - promiscuous_disable
 - allmulticast_enable
 - allmulticast_disable
 - mac_addr_add
 - mac_addr_remove
 - mac_addr_set
 - vlan_filter_set
 - vlan_offload_set

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/avf/avf.h        |   5 +
 drivers/net/avf/avf_ethdev.c | 215 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c  |  90 ++++++++++++++++++
 3 files changed, 310 insertions(+)

diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 88858ed..f39bebc 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -233,4 +233,9 @@ int avf_dev_link_update(struct rte_eth_dev *dev,
 			__rte_unused int wait_to_complete);
 int avf_query_stats(struct avf_adapter *adapter,
 		    struct virtchnl_eth_stats **pstats);
+int avf_config_promisc(struct avf_adapter *adapter, bool enable_unicast,
+		       bool enable_multicast);
+int avf_add_del_eth_addr(struct avf_adapter *adapter,
+			 struct ether_addr *addr, bool add);
+int avf_add_del_vlan(struct avf_adapter *adapter, uint16_t vlanid, bool add);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 9087854..9cf1cfd 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -71,6 +71,20 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 static int avf_dev_stats_get(struct rte_eth_dev *dev,
 			     struct rte_eth_stats *stats);
+static void avf_dev_promiscuous_enable(struct rte_eth_dev *dev);
+static void avf_dev_promiscuous_disable(struct rte_eth_dev *dev);
+static void avf_dev_allmulticast_enable(struct rte_eth_dev *dev);
+static void avf_dev_allmulticast_disable(struct rte_eth_dev *dev);
+static int avf_dev_add_mac_addr(struct rte_eth_dev *dev,
+			    struct ether_addr *addr,
+			    uint32_t index,
+			    uint32_t pool);
+static void avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index);
+static int avf_dev_vlan_filter_set(struct rte_eth_dev *dev,
+				   uint16_t vlan_id, int on);
+static int avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+					 struct ether_addr *mac_addr);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -88,6 +102,14 @@ static const struct eth_dev_ops avf_eth_dev_ops = {
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.link_update                = avf_dev_link_update,
 	.stats_get                  = avf_dev_stats_get,
+	.promiscuous_enable         = avf_dev_promiscuous_enable,
+	.promiscuous_disable        = avf_dev_promiscuous_disable,
+	.allmulticast_enable        = avf_dev_allmulticast_enable,
+	.allmulticast_disable       = avf_dev_allmulticast_disable,
+	.mac_addr_add               = avf_dev_add_mac_addr,
+	.mac_addr_remove            = avf_dev_del_mac_addr,
+	.vlan_filter_set            = avf_dev_vlan_filter_set,
+	.vlan_offload_set           = avf_dev_vlan_offload_set,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -96,6 +118,7 @@ static const struct eth_dev_ops avf_eth_dev_ops = {
 	.rx_queue_release           = avf_dev_rx_queue_release,
 	.tx_queue_setup             = avf_dev_tx_queue_setup,
 	.tx_queue_release           = avf_dev_tx_queue_release,
+	.mac_addr_set               = avf_dev_set_default_mac_addr,
 };
 
 static int
@@ -499,6 +522,198 @@ avf_dev_link_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void
+avf_dev_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (vf->promisc_unicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, TRUE, vf->promisc_multicast_enabled);
+	if (!ret)
+		vf->promisc_unicast_enabled = TRUE;
+}
+
+static void
+avf_dev_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (!vf->promisc_unicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, FALSE, vf->promisc_multicast_enabled);
+	if (!ret)
+		vf->promisc_unicast_enabled = FALSE;
+}
+
+static void
+avf_dev_allmulticast_enable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (vf->promisc_multicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, vf->promisc_unicast_enabled, TRUE);
+	if (!ret)
+		vf->promisc_multicast_enabled = TRUE;
+}
+
+static void
+avf_dev_allmulticast_disable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (!vf->promisc_multicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, vf->promisc_unicast_enabled, FALSE);
+	if (!ret)
+		vf->promisc_multicast_enabled = FALSE;
+}
+
+static int
+avf_dev_add_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr,
+		     __rte_unused uint32_t index,
+		     __rte_unused uint32_t pool)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int err;
+
+	if (is_zero_ether_addr(addr)) {
+		PMD_DRV_LOG(ERR, "Invalid Ethernet Address");
+		return -EINVAL;
+	}
+
+	err = avf_add_del_eth_addr(adapter, addr, TRUE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to add MAC address");
+		return -EIO;
+	}
+
+	vf->mac_num++;
+
+	return 0;
+}
+
+static void
+avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct ether_addr *addr;
+	int err;
+
+	addr = &dev->data->mac_addrs[index];
+
+	err = avf_add_del_eth_addr(adapter, addr, FALSE);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to delete MAC address");
+
+	vf->mac_num--;
+}
+
+static int
+avf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int err;
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+		return -ENOTSUP;
+
+	err = avf_add_del_vlan(adapter, vlan_id, on);
+	if (err)
+		return -EIO;
+
+	return 0;
+}
+
+static int
+avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+	int err = 0;
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+		return -ENOTSUP;
+
+	/* Vlan stripping setting */
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		/* Enable or disable VLAN stripping */
+		if (dev_conf->rxmode.hw_vlan_strip)
+			err = avf_enable_vlan_strip(adapter);
+		else
+			err = avf_disable_vlan_strip(adapter);
+	}
+
+	if (err)
+		return -EIO;
+
+	return 0;
+}
+
+static void
+avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+			     struct ether_addr *mac_addr)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct ether_addr *perm_addr, *old_addr;
+	int ret;
+
+	old_addr = (struct ether_addr *)hw->mac.addr;
+	perm_addr = (struct ether_addr *)hw->mac.perm_addr;
+
+	if (is_same_ether_addr(mac_addr, old_addr))
+		return;
+
+	/* If the MAC address is configured by host, skip the setting */
+	if (is_valid_assigned_ether_addr(perm_addr))
+		return;
+
+	ret = avf_add_del_eth_addr(adapter, old_addr, FALSE);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to delete old MAC: %02X:%02X:%02X:%02X:%02X:%02X",
+			    old_addr->addr_bytes[0],
+			    old_addr->addr_bytes[1],
+			    old_addr->addr_bytes[2],
+			    old_addr->addr_bytes[3],
+			    old_addr->addr_bytes[4],
+			    old_addr->addr_bytes[5]);
+
+	ret = avf_add_del_eth_addr(adapter, mac_addr, TRUE);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to add new MAC: %02X:%02X:%02X:%02X:%02X:%02X",
+			    mac_addr->addr_bytes[0],
+			    mac_addr->addr_bytes[1],
+			    mac_addr->addr_bytes[2],
+			    mac_addr->addr_bytes[3],
+			    mac_addr->addr_bytes[4],
+			    mac_addr->addr_bytes[5]);
+
+	ether_addr_copy(mac_addr, (struct ether_addr *)hw->mac.addr);
+}
+
 static int
 avf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index eca33e7..07b7322 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -725,6 +725,7 @@ avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add)
 		begin = next_begin;
 	} while (begin < AVF_NUM_MACADDR_MAX);
 }
+
 int
 avf_query_stats(struct avf_adapter *adapter,
 		struct virtchnl_eth_stats **pstats)
@@ -751,3 +752,92 @@ avf_query_stats(struct avf_adapter *adapter,
 	*pstats = (struct virtchnl_eth_stats *)args.out_buffer;
 	return 0;
 }
+
+int
+avf_config_promisc(struct avf_adapter *adapter,
+		      bool enable_unicast,
+		      bool enable_multicast)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_promisc_info promisc;
+	struct avf_cmd_info args;
+	int err;
+
+	promisc.flags = 0;
+	promisc.vsi_id = vf->vsi_res->vsi_id;
+
+	if (enable_unicast)
+		promisc.flags |= FLAG_VF_UNICAST_PROMISC;
+
+	if (enable_multicast)
+		promisc.flags |= FLAG_VF_MULTICAST_PROMISC;
+
+	args.ops = VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE;
+	args.in_args = (uint8_t *)&promisc;
+	args.in_args_size = sizeof(promisc);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command CONFIG_PROMISCUOUS_MODE");
+	return err;
+}
+
+int
+avf_add_del_eth_addr(struct avf_adapter *adapter, struct ether_addr *addr,
+		     bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t cmd_buffer[sizeof(struct virtchnl_ether_addr_list) + \
+			   sizeof(struct virtchnl_ether_addr)];
+	struct avf_cmd_info args;
+	int err;
+
+	list = (struct virtchnl_ether_addr_list *)cmd_buffer;
+	list->vsi_id = vf->vsi_res->vsi_id;
+	list->num_elements = 1;
+	rte_memcpy(list->list[0].addr, addr->addr_bytes,
+		   sizeof(addr->addr_bytes));
+
+	args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR : VIRTCHNL_OP_DEL_ETH_ADDR;
+	args.in_args = cmd_buffer;
+	args.in_args_size = sizeof(cmd_buffer);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_ETH_ADDR" :  "OP_DEL_ETH_ADDR");
+	return err;
+}
+
+int
+avf_add_del_vlan(struct avf_adapter *adapter, uint16_t vlanid, bool add)
+{
+	struct virtchnl_vlan_filter_list *vlan_list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t cmd_buffer[sizeof(struct virtchnl_vlan_filter_list) +
+							sizeof(uint16_t)];
+	struct avf_cmd_info args;
+	int err;
+
+	vlan_list = (struct virtchnl_vlan_filter_list *)cmd_buffer;
+	vlan_list->vsi_id = vf->vsi_res->vsi_id;
+	vlan_list->num_elements = 1;
+	vlan_list->vlan_id[0] = vlanid;
+
+	args.ops = add ? VIRTCHNL_OP_ADD_VLAN : VIRTCHNL_OP_DEL_VLAN;
+	args.in_args = cmd_buffer;
+	args.in_args_size = sizeof(cmd_buffer);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_VLAN" :  "OP_DEL_VLAN");
+
+	return err;
+}
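
At the ethdev level the helper above is reached through rte_eth_dev_vlan_filter(). A hedged usage sketch that allows a VLAN ID and later removes it again; the helper name and VLAN ID are arbitrary:

#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch only: allow VLAN 100 on 'port_id', then drop it again. */
static void
toggle_vlan_100(uint16_t port_id)
{
	if (rte_eth_dev_vlan_filter(port_id, 100, 1) != 0)
		printf("port %u: adding VLAN 100 failed\n", port_id);

	/* ... traffic runs ... */

	if (rte_eth_dev_vlan_filter(port_id, 100, 0) != 0)
		printf("port %u: removing VLAN 100 failed\n", port_id);
}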
-- 
2.4.11


* [dpdk-dev] [PATCH v2 08/14] net/avf: enable ops for RSS setting
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
                     ` (6 preceding siblings ...)
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 07/14] net/avf: enable ops for MAC VLAN offload Jingjing Wu
@ 2017-11-24  6:33   ` Jingjing Wu
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 09/14] net/avf: enable ops for MTU setting Jingjing Wu
                     ` (8 subsequent siblings)
  16 siblings, 0 replies; 151+ messages in thread
From: Jingjing Wu @ 2017-11-24  6:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/avf/avf_ethdev.c | 142 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 142 insertions(+)

diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 9cf1cfd..170317d 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -83,6 +83,16 @@ static void avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index);
 static int avf_dev_vlan_filter_set(struct rte_eth_dev *dev,
 				   uint16_t vlan_id, int on);
 static int avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int avf_dev_rss_reta_update(struct rte_eth_dev *dev,
+				   struct rte_eth_rss_reta_entry64 *reta_conf,
+				   uint16_t reta_size);
+static int avf_dev_rss_reta_query(struct rte_eth_dev *dev,
+				  struct rte_eth_rss_reta_entry64 *reta_conf,
+				  uint16_t reta_size);
+static int avf_dev_rss_hash_update(struct rte_eth_dev *dev,
+				   struct rte_eth_rss_conf *rss_conf);
+static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+				     struct rte_eth_rss_conf *rss_conf);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
 
@@ -119,6 +129,10 @@ static const struct eth_dev_ops avf_eth_dev_ops = {
 	.tx_queue_setup             = avf_dev_tx_queue_setup,
 	.tx_queue_release           = avf_dev_tx_queue_release,
 	.mac_addr_set               = avf_dev_set_default_mac_addr,
+	.reta_update                = avf_dev_rss_reta_update,
+	.reta_query                 = avf_dev_rss_reta_query,
+	.rss_hash_update            = avf_dev_rss_hash_update,
+	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
 };
 
 static int
@@ -671,6 +685,134 @@ avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		return -EIO;
 }
 
+static int
+avf_dev_rss_reta_update(struct rte_eth_dev *dev,
+			struct rte_eth_rss_reta_entry64 *reta_conf,
+			uint16_t reta_size)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t *lut;
+	uint16_t i, idx, shift;
+	int ret;
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	if (reta_size != vf->vf_res->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number hardware can "
+			"support (%d)", reta_size, vf->vf_res->rss_lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc("rss_lut", reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	/* store the old lut table temporarily */
+	rte_memcpy(lut, vf->rss_lut, reta_size);
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			vf->rss_lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	/* send virtchnl op to configure RSS LUT */
+	ret = avf_configure_rss_lut(adapter);
+	if (ret) /* restore the old lut table on failure */
+		rte_memcpy(vf->rss_lut, lut, reta_size);
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+avf_dev_rss_reta_query(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_reta_entry64 *reta_conf,
+		       uint16_t reta_size)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint16_t i, idx, shift;
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	if (reta_size != vf->vf_res->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number hardware can "
+			"support (%d)", reta_size, vf->vf_res->rss_lut_size);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vf->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+avf_dev_rss_hash_update(struct rte_eth_dev *dev,
+			struct rte_eth_rss_conf *rss_conf)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	/* HENA setting is enabled by default, no change needed */
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		return 0;
+	} else if (rss_conf->rss_key_len != vf->vf_res->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+			"(%d) doesn't match the size hardware can "
+			"support (%d)", rss_conf->rss_key_len,
+			vf->vf_res->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vf->rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+
+	return avf_configure_rss_key(adapter);
+}
+
+static int
+avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+			  struct rte_eth_rss_conf *rss_conf)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	/* Just set it to the default value for now. */
+	rss_conf->rss_hf = AVF_RSS_OFFLOAD_ALL;
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	rss_conf->rss_key_len = vf->vf_res->rss_key_size;
+	rte_memcpy(rss_conf->rss_key, vf->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static void
 avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *mac_addr)
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v2 09/14] net/avf: enable ops for MTU setting
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
                     ` (7 preceding siblings ...)
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 08/14] net/avf: enable ops for RSS setting Jingjing Wu
@ 2017-11-24  6:33   ` Jingjing Wu
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 10/14] net/avf: enable ops to check queue info and status Jingjing Wu
                     ` (7 subsequent siblings)
  16 siblings, 0 replies; 151+ messages in thread
From: Jingjing Wu @ 2017-11-24  6:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
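The new op is reached through rte_eth_dev_set_mtu() and, per the dev_started
check below, only succeeds while the port is stopped. A small usage sketch
(the function name and restart policy are illustrative assumptions):

#include <rte_ethdev.h>

/* Change the MTU of a running port: stop, set, restart. */
static int
example_change_mtu(uint16_t port_id, uint16_t new_mtu)
{
	int ret;

	rte_eth_dev_stop(port_id);	/* mtu_set requires dev_started == 0 */
	ret = rte_eth_dev_set_mtu(port_id, new_mtu);
	if (ret != 0)
		return ret;
	return rte_eth_dev_start(port_id);
}
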
 drivers/net/avf/avf_ethdev.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 170317d..d257d2a 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -93,6 +93,7 @@ static int avf_dev_rss_hash_update(struct rte_eth_dev *dev,
 				   struct rte_eth_rss_conf *rss_conf);
 static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 				     struct rte_eth_rss_conf *rss_conf);
+static int avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
 
@@ -133,6 +134,7 @@ static const struct eth_dev_ops avf_eth_dev_ops = {
 	.reta_query                 = avf_dev_rss_reta_query,
 	.rss_hash_update            = avf_dev_rss_hash_update,
 	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
+	.mtu_set                    = avf_dev_mtu_set,
 };
 
 static int
@@ -813,6 +815,34 @@ avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	uint32_t frame_size = mtu + AVF_ETH_OVERHEAD;
+	int ret = 0;
+
+	if ((mtu < ETHER_MIN_MTU) || (frame_size > AVF_FRAME_SIZE_MAX))
+		return -EINVAL;
+
+	/* MTU setting is forbidden if the port is started */
+	if (dev->data->dev_started) {
+		PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+		return -EBUSY;
+	}
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
+		dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+	return ret;
+}
+
 static void
 avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *mac_addr)
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v2 10/14] net/avf: enable ops to check queue info and status
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
                     ` (8 preceding siblings ...)
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 09/14] net/avf: enable ops for MTU setting Jingjing Wu
@ 2017-11-24  6:33   ` Jingjing Wu
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 11/14] net/i40e: support AVF basic interface Jingjing Wu
                     ` (6 subsequent siblings)
  16 siblings, 0 replies; 151+ messages in thread
From: Jingjing Wu @ 2017-11-24  6:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

 - rxq_info_get
 - txq_info_get
 - rx_queue_count
 - rx_descriptor_status
 - tx_descriptor_status

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
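These ops back rte_eth_rx_queue_info_get(), rte_eth_tx_queue_info_get(),
rte_eth_rx_queue_count() and the rx/tx descriptor status calls. A short
sketch of how an application might poll them (the function name, queue id,
offset and threshold are illustrative assumptions):

#include <rte_ethdev.h>

/* Report whether an Rx queue has built up a backlog worth draining. */
static int
example_rx_backlogged(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_rxq_info qinfo;
	int used, done;

	/* static queue configuration: ring size, thresholds, mempool */
	if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) != 0)
		return -1;

	/* number of descriptors the NIC has already filled */
	used = rte_eth_rx_queue_count(port_id, queue_id);

	/* is the descriptor 64 slots past the current tail completed? */
	done = rte_eth_rx_descriptor_status(port_id, queue_id, 64);

	return used > qinfo.nb_desc / 2 || done == RTE_ETH_RX_DESC_DONE;
}
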
 drivers/net/avf/avf_ethdev.c |   5 ++
 drivers/net/avf/avf_rxtx.c   | 120 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h   |   7 +++
 3 files changed, 132 insertions(+)

diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index d257d2a..8f382ff 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -134,6 +134,11 @@ static const struct eth_dev_ops avf_eth_dev_ops = {
 	.reta_query                 = avf_dev_rss_reta_query,
 	.rss_hash_update            = avf_dev_rss_hash_update,
 	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
+	.rxq_info_get               = avf_dev_rxq_info_get,
+	.txq_info_get               = avf_dev_txq_info_get,
+	.rx_queue_count             = avf_dev_rxq_count,
+	.rx_descriptor_status       = avf_dev_rx_desc_status,
+	.tx_descriptor_status       = avf_dev_tx_desc_status,
 	.mtu_set                    = avf_dev_mtu_set,
 };
 
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index 7d48d38..8e79efd 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -1413,3 +1413,123 @@ avf_set_tx_function(struct rte_eth_dev *dev)
 	dev->tx_pkt_burst = avf_xmit_pkts;
 	dev->tx_pkt_prepare = avf_prep_pkts;
 }
+
+void
+avf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		 struct rte_eth_rxq_info *qinfo)
+{
+	struct avf_rx_queue *rxq;
+
+	rxq = dev->data->rx_queues[queue_id];
+
+	qinfo->mp = rxq->mp;
+	qinfo->scattered_rx = dev->data->scattered_rx;
+	qinfo->nb_desc = rxq->nb_rx_desc;
+
+	qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+	qinfo->conf.rx_drop_en = TRUE;
+	qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		 struct rte_eth_txq_info *qinfo)
+{
+	struct avf_tx_queue *txq;
+
+	txq = dev->data->tx_queues[queue_id];
+
+	qinfo->nb_desc = txq->nb_tx_desc;
+
+	qinfo->conf.tx_free_thresh = txq->free_thresh;
+	qinfo->conf.tx_rs_thresh = txq->rs_thresh;
+	qinfo->conf.txq_flags = txq->txq_flags;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
+/* Get the number of used descriptors of a rx queue */
+uint32_t
+avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+#define AVF_RXQ_SCAN_INTERVAL 4
+	volatile union avf_rx_desc *rxdp;
+	struct avf_rx_queue *rxq;
+	uint16_t desc = 0;
+
+	rxq = dev->data->rx_queues[queue_id];
+	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
+	while ((desc < rxq->nb_rx_desc) &&
+		((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		AVF_RXD_QW1_STATUS_MASK) >> AVF_RXD_QW1_STATUS_SHIFT) &
+				(1 << AVF_RX_DESC_STATUS_DD_SHIFT)) {
+		/* Check the DD bit of one Rx descriptor in each group of 4,
+		 * to avoid checking too frequently and degrading performance
+		 * too much.
+		 */
+		desc += AVF_RXQ_SCAN_INTERVAL;
+		rxdp += AVF_RXQ_SCAN_INTERVAL;
+		if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+			rxdp = &(rxq->rx_ring[rxq->rx_tail +
+					desc - rxq->nb_rx_desc]);
+	}
+
+	return desc;
+}
+
+int
+avf_dev_rx_desc_status(void *rx_queue, uint16_t offset)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	volatile uint64_t *status;
+	uint64_t mask;
+	uint32_t desc;
+
+	if (unlikely(offset >= rxq->nb_rx_desc))
+		return -EINVAL;
+
+	if (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)
+		return RTE_ETH_RX_DESC_UNAVAIL;
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	status = &rxq->rx_ring[desc].wb.qword1.status_error_len;
+	mask = rte_le_to_cpu_64((1ULL << AVF_RX_DESC_STATUS_DD_SHIFT)
+		<< AVF_RXD_QW1_STATUS_SHIFT);
+	if (*status & mask)
+		return RTE_ETH_RX_DESC_DONE;
+
+	return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+avf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
+{
+	struct avf_tx_queue *txq = tx_queue;
+	volatile uint64_t *status;
+	uint64_t mask, expect;
+	uint32_t desc;
+
+	if (unlikely(offset >= txq->nb_tx_desc))
+		return -EINVAL;
+
+	desc = txq->tx_tail + offset;
+	/* go to next desc that has the RS bit */
+	desc = ((desc + txq->rs_thresh - 1) / txq->rs_thresh) *
+		txq->rs_thresh;
+	if (desc >= txq->nb_tx_desc) {
+		desc -= txq->nb_tx_desc;
+		if (desc >= txq->nb_tx_desc)
+			desc -= txq->nb_tx_desc;
+	}
+
+	status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+	mask = rte_le_to_cpu_64(AVF_TXD_QW1_DTYPE_MASK);
+	expect = rte_cpu_to_le_64(
+		 AVF_TX_DESC_DTYPE_DESC_DONE << AVF_TXD_QW1_DTYPE_SHIFT);
+	if ((*status & mask) == expect)
+		return RTE_ETH_TX_DESC_DONE;
+
+	return RTE_ETH_TX_DESC_FULL;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index 342b577..8e1025c 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -175,6 +175,13 @@ uint16_t avf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		       uint16_t nb_pkts);
 void avf_set_rx_function(struct rte_eth_dev *dev);
 void avf_set_tx_function(struct rte_eth_dev *dev);
+void avf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			  struct rte_eth_rxq_info *qinfo);
+void avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			  struct rte_eth_txq_info *qinfo);
+uint32_t avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+int avf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
+int avf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v2 11/14] net/i40e: support AVF basic interface
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
                     ` (9 preceding siblings ...)
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 10/14] net/avf: enable ops to check queue info and status Jingjing Wu
@ 2017-11-24  6:33   ` Jingjing Wu
  2017-12-04 20:04     ` Ferruh Yigit
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 12/14] net/avf: enable sse vector Rx Tx func Jingjing Wu
                     ` (5 subsequent siblings)
  16 siblings, 1 reply; 151+ messages in thread
From: Jingjing Wu @ 2017-11-24  6:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Enable Virtchnl offload Caps negotiation and RSS_PF offload
to support AVF basic interface.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
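Condensed, the capability negotiation added below works like this (a sketch
only; the function name is illustrative, while the types and macros are the
ones used in the hunks and assume i40e_pf.h/virtchnl.h are included):

/* Derive the offload flags the PF grants from what the VF requested. */
static uint32_t
example_negotiate_caps(struct i40e_pf_vf *vf, uint8_t *msg)
{
	uint32_t requested;

	if (VF_IS_V10(&vf->version))	/* 1.0 VFs cannot negotiate */
		requested = VIRTCHNL_VF_OFFLOAD_L2 | VIRTCHNL_VF_OFFLOAD_VLAN;
	else	/* 1.1 VFs pass their wanted caps in GET_VF_RESOURCES */
		requested = *(uint32_t *)msg;

	/* grant only what the PF host implements: L2, VLAN, RSS_PF, RX_POLLING */
	return requested & I40E_VIRTCHNL_OFFLOAD_CAPS;
}
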
 config/common_base             |   2 +-
 drivers/net/i40e/i40e_ethdev.c |  64 +++++++++++++++----
 drivers/net/i40e/i40e_ethdev.h |   4 ++
 drivers/net/i40e/i40e_pf.c     | 137 +++++++++++++++++++++++++++++++++++++----
 drivers/net/i40e/i40e_pf.h     |   6 ++
 5 files changed, 189 insertions(+), 24 deletions(-)

diff --git a/config/common_base b/config/common_base
index 5a70485..b1f1c1c 100644
--- a/config/common_base
+++ b/config/common_base
@@ -228,7 +228,7 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 #
 # Compile burst-oriented AVF PMD driver
 #
-CONFIG_RTE_LIBRTE_AVF_PMD=n
+CONFIG_RTE_LIBRTE_AVF_PMD=y
 CONFIG_RTE_LIBRTE_AVF_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_AVF_DEBUG_TX_FREE=n
 CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=n
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 811cc9f..d64bfc1 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3678,6 +3678,7 @@ i40e_get_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint32_t reg;
 	int ret;
 
 	if (!lut)
@@ -3694,14 +3695,21 @@ i40e_get_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 		uint32_t *lut_dw = (uint32_t *)lut;
 		uint16_t i, lut_size_dw = lut_size / 4;
 
-		for (i = 0; i < lut_size_dw; i++)
-			lut_dw[i] = I40E_READ_REG(hw, I40E_PFQF_HLUT(i));
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i < lut_size_dw; i++) {
+				reg = I40E_VFQF_HLUT1(i, vsi->user_param);
+				lut_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+		} else {
+		} else {
+			for (i = 0; i < lut_size_dw; i++)
+				lut_dw[i] = I40E_READ_REG(hw,
+							  I40E_PFQF_HLUT(i));
+		}
 	}
 
 	return 0;
 }
 
-static int
+int
 i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 {
 	struct i40e_pf *pf;
@@ -3725,8 +3733,16 @@ i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 		uint32_t *lut_dw = (uint32_t *)lut;
 		uint16_t i, lut_size_dw = lut_size / 4;
 
-		for (i = 0; i < lut_size_dw; i++)
-			I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i), lut_dw[i]);
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i < lut_size_dw; i++)
+				I40E_WRITE_REG(
+					hw,
+					I40E_VFQF_HLUT1(i, vsi->user_param),
+					lut_dw[i]);
+		} else {
+			for (i = 0; i < lut_size_dw; i++)
+				I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i), lut_dw[i]);
+		}
 		I40E_WRITE_FLUSH(hw);
 	}
 
@@ -6698,17 +6714,20 @@ i40e_pf_disable_rss(struct i40e_pf *pf)
 	I40E_WRITE_FLUSH(hw);
 }
 
-static int
+int
 i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len)
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint16_t key_idx = (vsi->type == I40E_VSI_SRIOV) ?
+			   I40E_VFQF_HKEY_MAX_INDEX :
+			   I40E_PFQF_HKEY_MAX_INDEX;
 	int ret = 0;
 
 	if (!key || key_len == 0) {
 		PMD_DRV_LOG(DEBUG, "No key to be configured");
 		return 0;
-	} else if (key_len != (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+	} else if (key_len != (key_idx + 1) *
 		sizeof(uint32_t)) {
 		PMD_DRV_LOG(ERR, "Invalid key length %u", key_len);
 		return -EINVAL;
@@ -6725,8 +6744,18 @@ i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len)
 		uint32_t *hash_key = (uint32_t *)key;
 		uint16_t i;
 
-		for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
-			i40e_write_rx_ctl(hw, I40E_PFQF_HKEY(i), hash_key[i]);
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++)
+				I40E_WRITE_REG(
+					hw,
+					I40E_VFQF_HKEY1(i, vsi->user_param),
+					hash_key[i]);
+
+		} else {
+			for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
+				I40E_WRITE_REG(hw, I40E_PFQF_HKEY(i),
+					       hash_key[i]);
+		}
 		I40E_WRITE_FLUSH(hw);
 	}
 
@@ -6738,6 +6767,7 @@ i40e_get_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t *key_len)
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint32_t reg;
 	int ret;
 
 	if (!key || !key_len)
@@ -6754,8 +6784,20 @@ i40e_get_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t *key_len)
 		uint32_t *key_dw = (uint32_t *)key;
 		uint16_t i;
 
-		for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
-			key_dw[i] = i40e_read_rx_ctl(hw, I40E_PFQF_HKEY(i));
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++) {
+				reg = I40E_VFQF_HKEY1(i, vsi->user_param);
+				key_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+			*key_len = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
+				   sizeof(uint32_t);
+		} else {
+			for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++) {
+				reg = I40E_PFQF_HKEY(i);
+				key_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+			*key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+				   sizeof(uint32_t);
+		}
 	}
 	*key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
 
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index cd67453..79a8fc4 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -426,6 +426,8 @@ struct i40e_pf_vf {
 	uint16_t lan_nb_qps; /* Actual queues allocated */
 	uint16_t reset_cnt; /* Total vf reset times */
 	struct ether_addr mac_addr;  /* Default MAC address */
+	struct virtchnl_version_info version; /* version of the virtchnl from VF */
+	uint32_t request_caps; /* offload caps requested from VF */
 };
 
 /*
@@ -1198,6 +1200,8 @@ int i40e_dcb_init_configure(struct rte_eth_dev *dev, bool sw_dcb);
 int i40e_flush_queue_region_all_conf(struct rte_eth_dev *dev,
 		struct i40e_hw *hw, struct i40e_pf *pf, uint16_t on);
 void i40e_init_queue_region_conf(struct rte_eth_dev *dev);
+int i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len);
+int i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size);
 
 #define I40E_DEV_TO_PCI(eth_dev) \
 	RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index 94bb0cf..faab276 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -273,19 +273,23 @@ i40e_pf_host_send_msg_to_vf(struct i40e_pf_vf *vf,
 }
 
 static void
-i40e_pf_host_process_cmd_version(struct i40e_pf_vf *vf, bool b_op)
+i40e_pf_host_process_cmd_version(struct i40e_pf_vf *vf, uint8_t *msg,
+				 bool b_op)
 {
 	struct virtchnl_version_info info;
 
-	/* Respond like a Linux PF host in order to support both DPDK VF and
-	 * Linux VF driver. The expense is original DPDK host specific feature
+	/* Both VF and PF drivers need to follow the virtchnl definition,
+	 * no matter whether they are DPDK or kernel drivers.
+	 * The original DPDK host specific features
+	 * like CFG_VLAN_PVID and CONFIG_VSI_QUEUES_EXT will not be available.
-	 *
-	 * DPDK VF also can't identify host driver by version number returned.
-	 * It always assume talking with Linux PF.
 	 */
+
 	info.major = VIRTCHNL_VERSION_MAJOR;
-	info.minor = VIRTCHNL_VERSION_MINOR_NO_VF_CAPS;
+	vf->version = *(struct virtchnl_version_info *)msg;
+	if (VF_IS_V10(&vf->version))
+		info.minor = VIRTCHNL_VERSION_MINOR_NO_VF_CAPS;
+	else
+		info.minor = VIRTCHNL_VERSION_MINOR;
 
 	if (b_op)
 		i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_VERSION,
@@ -309,11 +313,13 @@ i40e_pf_host_process_cmd_reset_vf(struct i40e_pf_vf *vf)
 }
 
 static int
-i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, bool b_op)
+i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
+					 bool b_op)
 {
 	struct virtchnl_vf_resource *vf_res = NULL;
 	struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
 	uint32_t len = 0;
+	uint64_t default_hena = I40E_RSS_HENA_ALL;
 	int ret = I40E_SUCCESS;
 
 	if (!b_op) {
@@ -337,11 +343,31 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, bool b_op)
 		goto send_msg;
 	}
 
-	vf_res->vf_offload_flags = VIRTCHNL_VF_OFFLOAD_L2 |
-				VIRTCHNL_VF_OFFLOAD_VLAN;
+	if (VF_IS_V10(&vf->version)) /* doesn't support offload negotiation */
+		vf->request_caps = VIRTCHNL_VF_OFFLOAD_L2 |
+				   VIRTCHNL_VF_OFFLOAD_VLAN;
+	else
+		vf->request_caps = *(uint32_t *)msg;
+
+	/* Enable all RSS by default; HENA setting via virtchnl is not supported yet. */
+	if (vf->request_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		I40E_WRITE_REG(hw, I40E_VFQF_HENA1(0, vf->vf_idx),
+			       (uint32_t)default_hena);
+		I40E_WRITE_REG(hw, I40E_VFQF_HENA1(1, vf->vf_idx),
+			       (uint32_t)(default_hena >> 32));
+		I40E_WRITE_FLUSH(hw);
+	}
+
+	vf_res->vf_offload_flags = vf->request_caps &
+				   I40E_VIRTCHNL_OFFLOAD_CAPS;
+	/* X722 supports write-back on ITR without binding a queue to an interrupt vector. */
+	if (hw->mac.type == I40E_MAC_X722)
+		vf_res->vf_offload_flags |= VIRTCHNL_VF_OFFLOAD_WB_ON_ITR;
 	vf_res->max_vectors = hw->func_caps.num_msix_vectors_vf;
 	vf_res->num_queue_pairs = vf->vsi->nb_qps;
 	vf_res->num_vsis = I40E_DEFAULT_VF_VSI_NUM;
+	vf_res->rss_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) * 4;
+	vf_res->rss_lut_size = (I40E_VFQF_HLUT1_MAX_INDEX + 1) * 4;
 
 	/* Change below setting if PF host can support more VSIs for VF */
 	vf_res->vsi_res[0].vsi_type = VIRTCHNL_VSI_SRIOV;
@@ -1090,6 +1116,85 @@ i40e_pf_host_process_cmd_disable_vlan_strip(struct i40e_pf_vf *vf, bool b_op)
 	return ret;
 }
 
+static int
+i40e_pf_host_process_cmd_set_rss_lut(struct i40e_pf_vf *vf,
+					uint8_t *msg,
+					uint16_t msglen,
+					bool b_op)
+{
+	struct virtchnl_rss_lut *rss_lut = (struct virtchnl_rss_lut *)msg;
+	uint16_t valid_len;
+	int ret = I40E_SUCCESS;
+
+	if (!b_op) {
+		i40e_pf_host_send_msg_to_vf(
+			vf,
+			VIRTCHNL_OP_CONFIG_RSS_LUT,
+			I40E_NOT_SUPPORTED, NULL, 0);
+		return ret;
+	}
+
+	if (msg == NULL || msglen <= sizeof(struct virtchnl_rss_lut)) {
+		PMD_DRV_LOG(ERR, "set_rss_lut argument too short");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
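+	/* virtchnl_rss_lut ends in a one-byte flexible tail (lut[1]), so a
+	 * message carrying lut_entries bytes of LUT data must be at least
+	 * sizeof(struct virtchnl_rss_lut) + lut_entries - 1 bytes long.
+	 */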
+	valid_len = sizeof(struct virtchnl_rss_lut) + rss_lut->lut_entries - 1;
+	if (msglen < valid_len) {
+		PMD_DRV_LOG(ERR, "set_rss_lut length mismatch");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+	ret = i40e_set_rss_lut(vf->vsi, rss_lut->lut, rss_lut->lut_entries);
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_LUT,
+				    ret, NULL, 0);
+
+	return ret;
+}
+
+static int
+i40e_pf_host_process_cmd_set_rss_key(struct i40e_pf_vf *vf,
+					uint8_t *msg,
+					uint16_t msglen,
+					bool b_op)
+{
+	struct virtchnl_rss_key *rss_key = (struct virtchnl_rss_key *)msg;
+	uint16_t valid_len;
+	int ret = I40E_SUCCESS;
+
+	if (!b_op) {
+		i40e_pf_host_send_msg_to_vf(
+			vf,
+			VIRTCHNL_OP_CONFIG_RSS_KEY,
+			I40E_NOT_SUPPORTED, NULL, 0);
+		return ret;
+	}
+
+	if (msg == NULL || msglen <= sizeof(struct virtchnl_rss_key)) {
+		PMD_DRV_LOG(ERR, "set_rss_key argument too short");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+	valid_len = sizeof(struct virtchnl_rss_key) + rss_key->key_len - 1;
+	if (msglen < valid_len) {
+		PMD_DRV_LOG(ERR, "set_rss_key length mismatch");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+	ret = i40e_set_rss_key(vf->vsi, rss_key->key, rss_key->key_len);
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_KEY,
+				    ret, NULL, 0);
+
+	return ret;
+}
+
 void
 i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
 {
@@ -1196,7 +1301,7 @@ i40e_pf_host_handle_vf_msg(struct rte_eth_dev *dev,
 	switch (opcode) {
 	case VIRTCHNL_OP_VERSION:
 		PMD_DRV_LOG(INFO, "OP_VERSION received");
-		i40e_pf_host_process_cmd_version(vf, b_op);
+		i40e_pf_host_process_cmd_version(vf, msg, b_op);
 		break;
 	case VIRTCHNL_OP_RESET_VF:
 		PMD_DRV_LOG(INFO, "OP_RESET_VF received");
@@ -1204,7 +1309,7 @@ i40e_pf_host_handle_vf_msg(struct rte_eth_dev *dev,
 		break;
 	case VIRTCHNL_OP_GET_VF_RESOURCES:
 		PMD_DRV_LOG(INFO, "OP_GET_VF_RESOURCES received");
-		i40e_pf_host_process_cmd_get_vf_resource(vf, b_op);
+		i40e_pf_host_process_cmd_get_vf_resource(vf, msg, b_op);
 		break;
 	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
 		PMD_DRV_LOG(INFO, "OP_CONFIG_VSI_QUEUES received");
@@ -1265,6 +1370,14 @@ i40e_pf_host_handle_vf_msg(struct rte_eth_dev *dev,
 		PMD_DRV_LOG(INFO, "OP_DISABLE_VLAN_STRIPPING received");
 		i40e_pf_host_process_cmd_disable_vlan_strip(vf, b_op);
 		break;
+	case VIRTCHNL_OP_CONFIG_RSS_LUT:
+		PMD_DRV_LOG(INFO, "OP_CONFIG_RSS_LUT received");
+		i40e_pf_host_process_cmd_set_rss_lut(vf, msg, msglen, b_op);
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_KEY:
+		PMD_DRV_LOG(INFO, "OP_CONFIG_RSS_KEY received");
+		i40e_pf_host_process_cmd_set_rss_key(vf, msg, msglen, b_op);
+		break;
 	/* Don't add command supported below, which will
 	 * return an error code.
 	 */
diff --git a/drivers/net/i40e/i40e_pf.h b/drivers/net/i40e/i40e_pf.h
index 0411663..196d71e 100644
--- a/drivers/net/i40e/i40e_pf.h
+++ b/drivers/net/i40e/i40e_pf.h
@@ -37,6 +37,12 @@
 /* Default setting on number of VSIs that VF can contain */
 #define I40E_DEFAULT_VF_VSI_NUM 1
 
+#define I40E_VIRTCHNL_OFFLOAD_CAPS ( \
+	VIRTCHNL_VF_OFFLOAD_L2 | \
+	VIRTCHNL_VF_OFFLOAD_VLAN | \
+	VIRTCHNL_VF_OFFLOAD_RSS_PF | \
+	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
+
 struct virtchnl_vlan_offload_info {
 	uint16_t vsi_id;
 	uint8_t enable_vlan_strip;
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v2 12/14] net/avf: enable sse vector Rx Tx func
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
                     ` (10 preceding siblings ...)
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 11/14] net/i40e: support AVF basic interface Jingjing Wu
@ 2017-11-24  6:33   ` Jingjing Wu
  2017-12-04 20:01     ` Ferruh Yigit
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 13/14] net/avf: enable bulk allocate Rx func Jingjing Wu
                     ` (4 subsequent siblings)
  16 siblings, 1 reply; 151+ messages in thread
From: Jingjing Wu @ 2017-11-24  6:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
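The vector paths stay selected only if every queue satisfies
check_rx_vec_allow()/check_tx_vec_allow() below. A queue-setup sketch that
meets them (the variable names, ring size and threshold values are
illustrative assumptions):

#include <rte_ethdev.h>

/* 32 >= AVF_VPMD_RX_MAX_BURST and 512 % 32 == 0, so with 512-entry Rx
 * rings this configuration keeps the Rx vector path enabled.
 */
static const struct rte_eth_rxconf vec_rx_conf = {
	.rx_free_thresh = 32,
};

/* Simple Tx flags plus rs_thresh within [AVF_VPMD_TX_MAX_BURST,
 * AVF_VPMD_TX_MAX_FREE_BUF] keep the Tx vector path enabled.
 */
static const struct rte_eth_txconf vec_tx_conf = {
	.tx_rs_thresh = 32,
	.tx_free_thresh = 32,
	.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS | ETH_TXQ_FLAGS_NOOFFLOADS,
};

Both would be passed to rte_eth_rx_queue_setup()/rte_eth_tx_queue_setup()
together with 512-descriptor rings; otherwise the driver falls back to the
scalar paths.
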
 config/common_base                    |   1 +
 drivers/net/avf/Makefile              |   1 +
 drivers/net/avf/avf.h                 |   4 +
 drivers/net/avf/avf_ethdev.c          |  11 +
 drivers/net/avf/avf_rxtx.c            | 178 ++++++++-
 drivers/net/avf/avf_rxtx.h            |  34 ++
 drivers/net/avf/avf_rxtx_vec_common.h | 238 ++++++++++++
 drivers/net/avf/avf_rxtx_vec_sse.c    | 680 ++++++++++++++++++++++++++++++++++
 8 files changed, 1136 insertions(+), 11 deletions(-)
 create mode 100644 drivers/net/avf/avf_rxtx_vec_common.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_sse.c

diff --git a/config/common_base b/config/common_base
index b1f1c1c..cdb8735 100644
--- a/config/common_base
+++ b/config/common_base
@@ -233,6 +233,7 @@ CONFIG_RTE_LIBRTE_AVF_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_AVF_DEBUG_TX_FREE=n
 CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_AVF_16BYTE_RX_DESC=n
+CONFIG_RTE_LIBRTE_AVF_INC_VECTOR=y
 
 #
 # Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index 6193fa9..ff9d523 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -58,5 +58,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_INC_VECTOR) += avf_rxtx_vec_sse.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index f39bebc..d4c275e 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -146,6 +146,10 @@ struct avf_adapter {
 	struct avf_hw hw;
 	struct rte_eth_dev *eth_dev;
 	struct avf_info vf;
+
+	/* For vector PMD */
+	bool rx_vec_allowed;
+	bool tx_vec_allowed;
 };
 
 /* AVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 8f382ff..1e8d9c0 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -149,6 +149,17 @@ avf_dev_configure(struct rte_eth_dev *dev)
 		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	/* Initialize to TRUE. If any Rx/Tx queue doesn't meet the
+	 * vector preconditions, it will be reset.
+	 */
+	ad->rx_vec_allowed = true;
+	ad->tx_vec_allowed = true;
+#else
+	ad->rx_vec_allowed = false;
+	ad->tx_vec_allowed = false;
+#endif
+
 	/* Vlan stripping setting */
 	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
 		avf_enable_vlan_strip(ad);
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index 8e79efd..079d49b 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -121,6 +121,38 @@ check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
 	return 0;
 }
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+static inline bool
+check_rx_vec_allow(struct avf_rx_queue *rxq)
+{
+	if (rxq->rx_free_thresh >= AVF_VPMD_RX_MAX_BURST &&
+	    rxq->nb_rx_desc % rxq->rx_free_thresh == 0) {
+		PMD_INIT_LOG(DEBUG, "Vector Rx"
+				    " can be enabled on this rxq.");
+		return TRUE;
+	}
+
+	PMD_INIT_LOG(DEBUG, "Vector Rx"
+			    " cannot be enabled on this rxq.");
+	return FALSE;
+}
+
+static inline bool
+check_tx_vec_allow(struct avf_tx_queue *txq)
+{
+	if (((txq->txq_flags & AVF_SIMPLE_FLAGS) == AVF_SIMPLE_FLAGS) &&
+	    (txq->rs_thresh >= AVF_VPMD_TX_MAX_BURST) &&
+	    (txq->rs_thresh <= AVF_VPMD_TX_MAX_FREE_BUF)) {
+		PMD_INIT_LOG(DEBUG, "Vector Tx"
+			     " can be enabled on this txq.");
+		return TRUE;
+	}
+	PMD_INIT_LOG(DEBUG, "Vector Tx"
+			    " cannot be enabled on this txq.");
+	return FALSE;
+}
+#endif
+
 static inline void
 reset_rx_queue(struct avf_rx_queue *rxq)
 {
@@ -254,6 +286,14 @@ release_txq_mbufs(struct avf_tx_queue *txq)
 	}
 }
 
+static const struct avf_rxq_ops def_rxq_ops = {
+	.release_mbufs = release_rxq_mbufs,
+};
+
+static const struct avf_txq_ops def_txq_ops = {
+	.release_mbufs = release_txq_mbufs,
+};
+
 int
 avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		       uint16_t nb_desc, unsigned int socket_id,
@@ -353,7 +393,12 @@ avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rxq->q_set = TRUE;
 	dev->data->rx_queues[queue_idx] = rxq;
 	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
+	rxq->ops = &def_rxq_ops;
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	if (check_rx_vec_allow(rxq) == FALSE)
+		ad->rx_vec_allowed = false;
+#endif
 	return 0;
 }
 
@@ -365,6 +410,8 @@ avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		       const struct rte_eth_txconf *tx_conf)
 {
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct avf_tx_queue *txq;
 	const struct rte_memzone *mz;
 	uint32_t ring_size;
@@ -444,6 +491,12 @@ avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->q_set = TRUE;
 	dev->data->tx_queues[queue_idx] = txq;
 	txq->qtx_tail = hw->hw_addr + AVF_QTX_TAIL1(queue_idx);
+	txq->ops = &def_txq_ops;
+
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	if (check_tx_vec_allow(txq) == FALSE)
+		ad->tx_vec_allowed = false;
+#endif
 
 	return 0;
 }
@@ -542,7 +595,7 @@ avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	rxq = dev->data->rx_queues[rx_queue_id];
-	release_rxq_mbufs(rxq);
+	rxq->ops->release_mbufs(rxq);
 	reset_rx_queue(rxq);
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -570,7 +623,7 @@ avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	txq = dev->data->tx_queues[tx_queue_id];
-	release_txq_mbufs(txq);
+	txq->ops->release_mbufs(txq);
 	reset_tx_queue(txq);
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -585,7 +638,7 @@ avf_dev_rx_queue_release(void *rxq)
 	if (!q)
 		return;
 
-	release_rxq_mbufs(q);
+	q->ops->release_mbufs(q);
 	rte_free(q->sw_ring);
 	rte_memzone_free(q->mz);
 	rte_free(q);
@@ -599,7 +652,7 @@ avf_dev_tx_queue_release(void *txq)
 	if (!q)
 		return;
 
-	release_txq_mbufs(q);
+	q->ops->release_mbufs(q);
 	rte_free(q->sw_ring);
 	rte_memzone_free(q->mz);
 	rte_free(q);
@@ -623,7 +676,7 @@ avf_stop_queues(struct rte_eth_dev *dev)
 		txq = dev->data->tx_queues[i];
 		if (!txq)
 			continue;
-		release_txq_mbufs(txq);
+		txq->ops->release_mbufs(txq);
 		reset_tx_queue(txq);
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
@@ -631,7 +684,7 @@ avf_stop_queues(struct rte_eth_dev *dev)
 		rxq = dev->data->rx_queues[i];
 		if (!rxq)
 			continue;
-		release_rxq_mbufs(rxq);
+		rxq->ops->release_mbufs(rxq);
 		reset_rx_queue(rxq);
 		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
@@ -1348,6 +1401,28 @@ avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	return nb_tx;
 }
 
+static uint16_t
+avf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+		  uint16_t nb_pkts)
+{
+	uint16_t nb_tx = 0;
+	struct avf_tx_queue *txq = (struct avf_tx_queue *)tx_queue;
+
+	while (nb_pkts) {
+		uint16_t ret, num;
+
+		num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+		ret = avf_xmit_fixed_burst_vec(tx_queue, &tx_pkts[nb_tx],
+						num);
+		nb_tx += ret;
+		nb_pkts -= ret;
+		if (ret < num)
+			break;
+	}
+
+	return nb_tx;
+}
+
 /* TX prep functions */
 uint16_t
 avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
@@ -1400,18 +1475,64 @@ avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 void
 avf_set_rx_function(struct rte_eth_dev *dev)
 {
-	if (dev->data->scattered_rx)
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int i;
+
+	if (adapter->rx_vec_allowed) {
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(DEBUG, "Using Vector Scattered Rx callback"
+				    " (port=%d).", dev->data->port_id);
+			dev->rx_pkt_burst = avf_recv_scattered_pkts_vec;
+		} else {
+			PMD_DRV_LOG(DEBUG, "Using Vector Rx callback"
+				    " (port=%d).", dev->data->port_id);
+			dev->rx_pkt_burst = avf_recv_pkts_vec;
+		}
+		for (i = 0; i < dev->data->nb_rx_queues; i++) {
+			rxq = dev->data->rx_queues[i];
+			if (!rxq)
+				continue;
+			avf_rxq_vec_setup(rxq);
+		}
+	} else if (dev->data->scattered_rx) {
+		PMD_DRV_LOG(DEBUG, "Using a Scattered Rx callback (port=%d).",
+			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_scattered_pkts;
-	else
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using Basic Rx callback (port=%d).",
+			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_pkts;
+	}
 }
 
-/* choose rx function*/
+/* choose Tx function */
 void
 avf_set_tx_function(struct rte_eth_dev *dev)
 {
-	dev->tx_pkt_burst = avf_xmit_pkts;
-	dev->tx_pkt_prepare = avf_prep_pkts;
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int i;
+
+	if (adapter->tx_vec_allowed) {
+		PMD_DRV_LOG(DEBUG, "Using Vector Tx callback (port=%d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = avf_xmit_pkts_vec;
+		dev->tx_pkt_prepare = NULL;
+		for (i = 0; i < dev->data->nb_tx_queues; i++) {
+			txq = dev->data->tx_queues[i];
+			if (!txq)
+				continue;
+			avf_txq_vec_setup(txq);
+		}
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using Basic Tx callback (port=%d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = avf_xmit_pkts;
+		dev->tx_pkt_prepare = avf_prep_pkts;
+	}
 }
 
 void
@@ -1533,3 +1654,38 @@ avf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
 
 	return RTE_ETH_TX_DESC_FULL;
 }
+
+uint16_t __attribute__((weak))
+avf_recv_pkts_vec(void __rte_unused *rx_queue,
+		  struct rte_mbuf __rte_unused **rx_pkts,
+		  uint16_t __rte_unused nb_pkts)
+{
+	return 0;
+}
+
+uint16_t __attribute__((weak))
+avf_recv_scattered_pkts_vec(void __rte_unused *rx_queue,
+	struct rte_mbuf __rte_unused **rx_pkts,
+	uint16_t __rte_unused nb_pkts)
+{
+	return 0;
+}
+
+uint16_t __attribute__((weak))
+avf_xmit_fixed_burst_vec(void __rte_unused *tx_queue,
+			  struct rte_mbuf __rte_unused **tx_pkts,
+			  uint16_t __rte_unused nb_pkts)
+{
+	return 0;
+}
+
+int __attribute__((weak))
+avf_rxq_vec_setup(struct avf_rx_queue __rte_unused *rxq)
+{
+	return -1;
+}
+
+int __attribute__((weak))
+avf_txq_vec_setup(struct avf_tx_queue __rte_unused *txq)
+{
+	return -1;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index 8e1025c..0246a73 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -45,6 +45,15 @@
 /* used for Rx Bulk Allocate */
 #define AVF_RX_MAX_BURST         32
 
+/* used for Vector PMD */
+#define AVF_VPMD_RX_MAX_BURST    32
+#define AVF_VPMD_TX_MAX_BURST    32
+#define AVF_VPMD_DESCS_PER_LOOP  4
+#define AVF_VPMD_TX_MAX_FREE_BUF 64
+
+#define AVF_SIMPLE_FLAGS ((uint32_t)ETH_TXQ_FLAGS_NOMULTSEGS | \
+			  ETH_TXQ_FLAGS_NOOFFLOADS)
+
 #define DEFAULT_TX_RS_THRESH     32
 #define DEFAULT_TX_FREE_THRESH   32
 
@@ -74,6 +83,14 @@
 #define avf_rx_desc avf_32byte_rx_desc
 #endif
 
+struct avf_rxq_ops {
+	void (*release_mbufs)(struct avf_rx_queue *rxq);
+};
+
+struct avf_txq_ops {
+	void (*release_mbufs)(struct avf_tx_queue *txq);
+};
+
 /* Structure associated with each Rx queue. */
 struct avf_rx_queue {
 	struct rte_mempool *mp;       /* mbuf pool to populate Rx ring */
@@ -90,6 +107,11 @@ struct avf_rx_queue {
 	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
 	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
 
+	/* used for VPMD */
+	uint16_t rxrearm_nb;       /* number of remaining to be re-armed */
+	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
+	uint64_t mbuf_initializer; /* value to init mbufs */
+
 	uint8_t port_id;        /* device port ID */
 	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
 	uint16_t queue_id;      /* Rx queue index */
@@ -99,6 +121,7 @@ struct avf_rx_queue {
 
 	bool q_set;             /* if rx queue has been configured */
 	bool rx_deferred_start; /* don't start this queue in dev start */
+	const struct avf_rxq_ops *ops;
 };
 
 struct avf_tx_entry {
@@ -130,6 +153,7 @@ struct avf_tx_queue {
 
 	bool q_set;                    /* if rx queue has been configured */
 	bool tx_deferred_start;        /* don't start this queue in dev start */
+	const struct avf_txq_ops *ops;
 };
 
 /* Offload features */
@@ -183,6 +207,16 @@ uint32_t avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
 int avf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
 int avf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
+uint16_t avf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			   uint16_t nb_pkts);
+uint16_t avf_recv_scattered_pkts_vec(void *rx_queue,
+				     struct rte_mbuf **rx_pkts,
+				     uint16_t nb_pkts);
+uint16_t avf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+				  uint16_t nb_pkts);
+int avf_rxq_vec_setup(struct avf_rx_queue *rxq);
+int avf_txq_vec_setup(struct avf_tx_queue *txq);
+
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
 			    const void *desc,
diff --git a/drivers/net/avf/avf_rxtx_vec_common.h b/drivers/net/avf/avf_rxtx_vec_common.h
new file mode 100644
index 0000000..726b68e
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx_vec_common.h
@@ -0,0 +1,238 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AVF_RXTX_VEC_COMMON_H_
+#define _AVF_RXTX_VEC_COMMON_H_
+#include <stdint.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+
+#include "avf.h"
+#include "avf_rxtx.h"
+
+static inline uint16_t
+reassemble_packets(struct avf_rx_queue *rxq, struct rte_mbuf **rx_bufs,
+		   uint16_t nb_bufs, uint8_t *split_flags)
+{
+	struct rte_mbuf *pkts[AVF_VPMD_RX_MAX_BURST];
+	struct rte_mbuf *start = rxq->pkt_first_seg;
+	struct rte_mbuf *end =  rxq->pkt_last_seg;
+	unsigned pkt_idx, buf_idx;
+
+	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
+		if (end != NULL) {
+			/* processing a split packet */
+			end->next = rx_bufs[buf_idx];
+			rx_bufs[buf_idx]->data_len += rxq->crc_len;
+
+			start->nb_segs++;
+			start->pkt_len += rx_bufs[buf_idx]->data_len;
+			end = end->next;
+
+			if (!split_flags[buf_idx]) {
+				/* it's the last packet of the set */
+				start->hash = end->hash;
+				start->ol_flags = end->ol_flags;
+				/* we need to strip crc for the whole packet */
+				start->pkt_len -= rxq->crc_len;
+				if (end->data_len > rxq->crc_len)
+					end->data_len -= rxq->crc_len;
+				else {
+					/* free up last mbuf */
+					struct rte_mbuf *secondlast = start;
+
+					start->nb_segs--;
+					while (secondlast->next != end)
+						secondlast = secondlast->next;
+					secondlast->data_len -= (rxq->crc_len -
+							end->data_len);
+					secondlast->next = NULL;
+					rte_pktmbuf_free_seg(end);
+				}
+				pkts[pkt_idx++] = start;
+				start = end = NULL;
+			}
+		} else {
+			/* not processing a split packet */
+			if (!split_flags[buf_idx]) {
+				/* not a split packet, save and skip */
+				pkts[pkt_idx++] = rx_bufs[buf_idx];
+				continue;
+			}
+			end = start = rx_bufs[buf_idx];
+			rx_bufs[buf_idx]->data_len += rxq->crc_len;
+			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
+		}
+	}
+
+	/* save the partial packet for next time */
+	rxq->pkt_first_seg = start;
+	rxq->pkt_last_seg = end;
+	memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
+	return pkt_idx;
+}
+
+static __rte_always_inline int
+avf_tx_free_bufs(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *txep;
+	uint32_t n;
+	uint32_t i;
+	int nb_free = 0;
+	struct rte_mbuf *m, *free[AVF_VPMD_TX_MAX_FREE_BUF];
+
+	/* check DD bits on threshold descriptor */
+	if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK)) !=
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	n = txq->rs_thresh;
+
+	 /* first buffer to free from S/W ring is at index
+	  * tx_next_dd - (tx_rs_thresh-1)
+	  */
+	txep = &txq->sw_ring[txq->next_dd - (n - 1)];
+	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
+	if (likely(m != NULL)) {
+		free[0] = m;
+		nb_free = 1;
+		for (i = 1; i < n; i++) {
+			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+			if (likely(m != NULL)) {
+				if (likely(m->pool == free[0]->pool)) {
+					free[nb_free++] = m;
+				} else {
+					rte_mempool_put_bulk(free[0]->pool,
+							     (void *)free,
+							     nb_free);
+					free[0] = m;
+					nb_free = 1;
+				}
+			}
+		}
+		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+	} else {
+		for (i = 1; i < n; i++) {
+			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+			if (m != NULL)
+				rte_mempool_put(m->pool, m);
+		}
+	}
+
+	/* buffers were freed, update counters */
+	txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh);
+	txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh);
+	if (txq->next_dd >= txq->nb_tx_desc)
+		txq->next_dd = (uint16_t)(txq->rs_thresh - 1);
+
+	return txq->rs_thresh;
+}
+
+static __rte_always_inline void
+tx_backlog_entry(struct avf_tx_entry *txep,
+		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	int i;
+
+	for (i = 0; i < (int)nb_pkts; ++i)
+		txep[i].mbuf = tx_pkts[i];
+}
+
+static inline void
+_avf_rx_queue_release_mbufs_vec(struct avf_rx_queue *rxq)
+{
+	const unsigned mask = rxq->nb_rx_desc - 1;
+	unsigned i;
+
+	if (!rxq->sw_ring || rxq->rxrearm_nb >= rxq->nb_rx_desc)
+		return;
+
+	/* free all mbufs that are valid in the ring */
+	if (rxq->rxrearm_nb == 0) {
+		for (i = 0; i < rxq->nb_rx_desc; i++) {
+			if (rxq->sw_ring[i] != NULL)
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	} else {
+		for (i = rxq->rx_tail;
+		     i != rxq->rxrearm_start;
+		     i = (i + 1) & mask) {
+			if (rxq->sw_ring[i] != NULL)
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	}
+
+	rxq->rxrearm_nb = rxq->nb_rx_desc;
+
+	/* set all entries to NULL */
+	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
+}
+
+static inline void
+_avf_tx_queue_release_mbufs_vec(struct avf_tx_queue *txq)
+{
+	unsigned i;
+	const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
+
+	if (!txq->sw_ring || txq->nb_free == max_desc)
+		return;
+
+	i = txq->next_dd - txq->rs_thresh + 1;
+	if (txq->tx_tail < i) {
+		for (; i < txq->nb_tx_desc; i++) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+		i = 0;
+	}
+}
+
+static inline int
+avf_rxq_vec_setup_default(struct avf_rx_queue *rxq)
+{
+	uintptr_t p;
+	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
+
+	mb_def.nb_segs = 1;
+	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
+	mb_def.port = rxq->port_id;
+	rte_mbuf_refcnt_set(&mb_def, 1);
+
+	/* prevent compiler reordering: rearm_data covers previous fields */
+	rte_compiler_barrier();
+	p = (uintptr_t)&mb_def.rearm_data;
+	rxq->mbuf_initializer = *(uint64_t *)p;
+	return 0;
+}
+#endif
diff --git a/drivers/net/avf/avf_rxtx_vec_sse.c b/drivers/net/avf/avf_rxtx_vec_sse.c
new file mode 100644
index 0000000..dce55a1
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx_vec_sse.c
@@ -0,0 +1,680 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+
+#include "base/avf_prototype.h"
+#include "base/avf_type.h"
+#include "avf.h"
+#include "avf_rxtx.h"
+#include "avf_rxtx_vec_common.h"
+
+#include <tmmintrin.h>
+
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#endif
+
+static inline void
+avf_rxq_rearm(struct avf_rx_queue *rxq)
+{
+	int i;
+	uint16_t rx_id;
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxp = &rxq->sw_ring[rxq->rxrearm_start];
+	struct rte_mbuf *mb0, *mb1;
+	__m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM,
+			RTE_PKTMBUF_HEADROOM);
+	__m128i dma_addr0, dma_addr1;
+
+	rxdp = rxq->rx_ring + rxq->rxrearm_start;
+
+	/* Pull 'n' more MBUFs into the software ring */
+	if (rte_mempool_get_bulk(rxq->mp, (void *)rxp,
+				 rxq->rx_free_thresh) < 0) {
+		if (rxq->rxrearm_nb + rxq->rx_free_thresh >= rxq->nb_rx_desc) {
+			dma_addr0 = _mm_setzero_si128();
+			for (i = 0; i < AVF_VPMD_DESCS_PER_LOOP; i++) {
+				rxp[i] = &rxq->fake_mbuf;
+				_mm_store_si128((__m128i *)&rxdp[i].read,
+						dma_addr0);
+			}
+		}
+		rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed +=
+			rxq->rx_free_thresh;
+		return;
+	}
+
+	/* Initialize the mbufs in vector, process 2 mbufs in one loop */
+	for (i = 0; i < rxq->rx_free_thresh; i += 2, rxp += 2) {
+		__m128i vaddr0, vaddr1;
+
+		mb0 = rxp[0];
+		mb1 = rxp[1];
+
+		/* load buf_addr(lo 64bit) and buf_iova(hi 64bit) */
+		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) !=
+				offsetof(struct rte_mbuf, buf_addr) + 8);
+		vaddr0 = _mm_loadu_si128((__m128i *)&mb0->buf_addr);
+		vaddr1 = _mm_loadu_si128((__m128i *)&mb1->buf_addr);
+
+		/* convert pa to dma_addr hdr/data */
+		dma_addr0 = _mm_unpackhi_epi64(vaddr0, vaddr0);
+		dma_addr1 = _mm_unpackhi_epi64(vaddr1, vaddr1);
+
+		/* add headroom to pa values */
+		dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room);
+		dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room);
+
+		/* flush desc with pa dma_addr */
+		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr0);
+		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr1);
+	}
+
+	rxq->rxrearm_start += rxq->rx_free_thresh;
+	if (rxq->rxrearm_start >= rxq->nb_rx_desc)
+		rxq->rxrearm_start = 0;
+
+	rxq->rxrearm_nb -= rxq->rx_free_thresh;
+
+	rx_id = (uint16_t)((rxq->rxrearm_start == 0) ?
+			   (rxq->nb_rx_desc - 1) : (rxq->rxrearm_start - 1));
+
+	PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+		   "rearm_start=%u rearm_nb=%u",
+		   rxq->port_id, rxq->queue_id,
+		   rx_id, rxq->rxrearm_start, rxq->rxrearm_nb);
+
+	/* Update the tail pointer on the NIC */
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+}
+
+static inline void
+desc_to_olflags_v(struct avf_rx_queue *rxq, __m128i descs[4],
+		  struct rte_mbuf **rx_pkts)
+{
+	const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_initializer);
+	__m128i rearm0, rearm1, rearm2, rearm3;
+
+	__m128i vlan0, vlan1, rss, l3_l4e;
+
+	/* mask everything except RSS, flow director and VLAN flags
+	 * bit2 is for VLAN tag, bit11 for flow director indication
+	 * bit13:12 for RSS indication.
+	 */
+	const __m128i rss_vlan_msk = _mm_set_epi32(
+			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804);
+
+	const __m128i cksum_mask = _mm_set_epi32(
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD);
+
+	/* map rss and vlan type to rss hash and vlan flag */
+	const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0,
+			0, 0, 0, 0,
+			0, 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			0, 0, 0, 0);
+
+	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
+			0, 0, 0, 0,
+			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
+			0, 0, PKT_RX_FDIR, 0);
+
+	const __m128i l3_l4e_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
+			/* shift right 1 bit to make sure it does not exceed 255 */
+			(PKT_RX_EIP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
+			 PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_EIP_CKSUM_BAD |
+			 PKT_RX_L4_CKSUM_BAD) >> 1,
+			(PKT_RX_EIP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_EIP_CKSUM_BAD) >> 1,
+			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
+			PKT_RX_IP_CKSUM_BAD >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+
+	vlan0 = _mm_unpackhi_epi32(descs[0], descs[1]);
+	vlan1 = _mm_unpackhi_epi32(descs[2], descs[3]);
+	vlan0 = _mm_unpacklo_epi64(vlan0, vlan1);
+
+	vlan1 = _mm_and_si128(vlan0, rss_vlan_msk);
+	vlan0 = _mm_shuffle_epi8(vlan_flags, vlan1);
+
+	rss = _mm_srli_epi32(vlan1, 11);
+	rss = _mm_shuffle_epi8(rss_flags, rss);
+
+	l3_l4e = _mm_srli_epi32(vlan1, 22);
+	l3_l4e = _mm_shuffle_epi8(l3_l4e_flags, l3_l4e);
+	/* then we shift left 1 bit */
+	l3_l4e = _mm_slli_epi32(l3_l4e, 1);
+	/* we need to mask out the redundant bits */
+	l3_l4e = _mm_and_si128(l3_l4e, cksum_mask);
+
+	vlan0 = _mm_or_si128(vlan0, rss);
+	vlan0 = _mm_or_si128(vlan0, l3_l4e);
+
+	/* At this point, we have the 4 sets of flags in the low 16-bits
+	 * of each 32-bit value in vlan0.
+	 * We want to extract these, and merge them with the mbuf init data
+	 * so we can do a single 16-byte write to the mbuf to set the flags
+	 * and all the other initialization fields. Extracting the
+	 * appropriate flags means that we have to do a shift and blend for
+	 * each mbuf before we do the write.
+	 */
+	rearm0 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vlan0, 8), 0x10);
+	rearm1 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vlan0, 4), 0x10);
+	rearm2 = _mm_blend_epi16(mbuf_init, vlan0, 0x10);
+	rearm3 = _mm_blend_epi16(mbuf_init, _mm_srli_si128(vlan0, 4), 0x10);
+
+	/* write the rearm data and the olflags in one write */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
+			offsetof(struct rte_mbuf, rearm_data) + 8);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) !=
+			RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16));
+	_mm_store_si128((__m128i *)&rx_pkts[0]->rearm_data, rearm0);
+	_mm_store_si128((__m128i *)&rx_pkts[1]->rearm_data, rearm1);
+	_mm_store_si128((__m128i *)&rx_pkts[2]->rearm_data, rearm2);
+	_mm_store_si128((__m128i *)&rx_pkts[3]->rearm_data, rearm3);
+}
+
+#define PKTLEN_SHIFT     10
+
+static inline void
+desc_to_ptype_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
+{
+	__m128i ptype0 = _mm_unpackhi_epi64(descs[0], descs[1]);
+	__m128i ptype1 = _mm_unpackhi_epi64(descs[2], descs[3]);
+	static const uint32_t type_table[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	ptype0 = _mm_srli_epi64(ptype0, 30);
+	ptype1 = _mm_srli_epi64(ptype1, 30);
+
+	rx_pkts[0]->packet_type = type_table[_mm_extract_epi8(ptype0, 0)];
+	rx_pkts[1]->packet_type = type_table[_mm_extract_epi8(ptype0, 8)];
+	rx_pkts[2]->packet_type = type_table[_mm_extract_epi8(ptype1, 0)];
+	rx_pkts[3]->packet_type = type_table[_mm_extract_epi8(ptype1, 8)];
+}
+
+/* Notice:
+ * - nb_pkts < AVF_VPMD_DESCS_PER_LOOP, just return no packet
+ * - nb_pkts > AVF_VPMD_RX_MAX_BURST, only scan AVF_VPMD_RX_MAX_BURST
+ *   numbers of DD bits
+ */
+static inline uint16_t
+_recv_raw_pkts_vec(struct avf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
+		   uint16_t nb_pkts, uint8_t *split_packet)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **sw_ring;
+	uint16_t nb_pkts_recd;
+	int pos;
+	uint64_t var;
+	__m128i shuf_msk;
+
+	__m128i crc_adjust = _mm_set_epi16(
+				0, 0, 0,    /* ignore non-length fields */
+				-rxq->crc_len, /* sub crc on data_len */
+				0,          /* ignore high-16bits of pkt_len */
+				-rxq->crc_len, /* sub crc on pkt_len */
+				0, 0            /* ignore pkt_type field */
+			);
+	/* compile-time check the above crc_adjust layout is correct.
+	 * NOTE: the first field (lowest address) is given last in set_epi16
+	 * call above.
+	 */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
+	__m128i dd_check, eop_check;
+
+	/* nb_pkts has to be less than or equal to AVF_VPMD_RX_MAX_BURST */
+	nb_pkts = RTE_MIN(nb_pkts, AVF_VPMD_RX_MAX_BURST);
+
+	/* nb_pkts has to be floor-aligned to AVF_VPMD_DESCS_PER_LOOP */
+	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, AVF_VPMD_DESCS_PER_LOOP);
+
+	/* Just the act of getting into the function from the application is
+	 * going to cost about 7 cycles
+	 */
+	rxdp = rxq->rx_ring + rxq->rx_tail;
+
+	rte_prefetch0(rxdp);
+
+	/* See if we need to rearm the RX queue - gives the prefetch a bit
+	 * of time to act
+	 */
+	if (rxq->rxrearm_nb > rxq->rx_free_thresh)
+		avf_rxq_rearm(rxq);
+
+	/* Before we start moving massive data around, check to see if
+	 * there is actually a packet available
+	 */
+	if (!(rxdp->wb.qword1.status_error_len &
+	      rte_cpu_to_le_32(1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+		return 0;
+
+	/* 4 packets DD mask */
+	dd_check = _mm_set_epi64x(0x0000000100000001LL, 0x0000000100000001LL);
+
+	/* 4 packets EOP mask */
+	eop_check = _mm_set_epi64x(0x0000000200000002LL, 0x0000000200000002LL);
+
+	/* mask to shuffle from desc. to mbuf */
+	shuf_msk = _mm_set_epi8(
+		7, 6, 5, 4,  /* octet 4~7, 32bits rss */
+		3, 2,        /* octet 2~3, low 16 bits vlan_macip */
+		15, 14,      /* octet 15~14, 16 bits data_len */
+		0xFF, 0xFF,  /* skip high 16 bits pkt_len, zero out */
+		15, 14,      /* octet 15~14, low 16 bits pkt_len */
+		0xFF, 0xFF, 0xFF, 0xFF /* pkt_type set as unknown */
+		);
+	/* Compile-time verify the shuffle mask
+	 * NOTE: some field positions already verified above, but duplicated
+	 * here for completeness in case of future modifications.
+	 */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 10);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12);
+
+	/* Cache is empty -> need to scan the buffer rings, but first move
+	 * the next 'n' mbufs into the cache
+	 */
+	sw_ring = &rxq->sw_ring[rxq->rx_tail];
+
+	/* A. load 4 packets in one loop
+	 * [A*. mask out 4 unused dirty fields in desc]
+	 * B. copy 4 mbuf pointers from sw_ring to rx_pkts
+	 * C. calc the number of DD bits among the 4 packets
+	 * [C*. extract the end-of-packet bit, if requested]
+	 * D. fill info. from desc to mbuf
+	 */
+
+	for (pos = 0, nb_pkts_recd = 0; pos < nb_pkts;
+	     pos += AVF_VPMD_DESCS_PER_LOOP,
+	     rxdp += AVF_VPMD_DESCS_PER_LOOP) {
+		__m128i descs[AVF_VPMD_DESCS_PER_LOOP];
+		__m128i pkt_mb1, pkt_mb2, pkt_mb3, pkt_mb4;
+		__m128i zero, staterr, sterr_tmp1, sterr_tmp2;
+		/* 2 64 bit or 4 32 bit mbuf pointers in one XMM reg. */
+		__m128i mbp1;
+#if defined(RTE_ARCH_X86_64)
+		__m128i mbp2;
+#endif
+
+		/* B.1 load 2 (64 bit) or 4 (32 bit) mbuf pointers */
+		mbp1 = _mm_loadu_si128((__m128i *)&sw_ring[pos]);
+		/* Read desc statuses backwards to avoid race condition */
+		/* A.1 load 4 pkts desc */
+		descs[3] = _mm_loadu_si128((__m128i *)(rxdp + 3));
+		rte_compiler_barrier();
+
+		/* B.2 copy 2 64 bit or 4 32 bit mbuf pointers into rx_pkts */
+		_mm_storeu_si128((__m128i *)&rx_pkts[pos], mbp1);
+
+#if defined(RTE_ARCH_X86_64)
+		/* B.1 load 2 64 bit mbuf pointers */
+		mbp2 = _mm_loadu_si128((__m128i *)&sw_ring[pos + 2]);
+#endif
+
+		descs[2] = _mm_loadu_si128((__m128i *)(rxdp + 2));
+		rte_compiler_barrier();
+		/* A.1 load desc[1] */
+		descs[1] = _mm_loadu_si128((__m128i *)(rxdp + 1));
+		rte_compiler_barrier();
+		descs[0] = _mm_loadu_si128((__m128i *)(rxdp));
+
+#if defined(RTE_ARCH_X86_64)
+		/* B.2 copy 2 mbuf pointers into rx_pkts */
+		_mm_storeu_si128((__m128i *)&rx_pkts[pos + 2], mbp2);
+#endif
+
+		if (split_packet) {
+			rte_mbuf_prefetch_part2(rx_pkts[pos]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 1]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 2]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 3]);
+		}
+
+		/* avoid compiler reorder optimization */
+		rte_compiler_barrier();
+
+		/* pkt 3,4 shift the pktlen field to be 16-bit aligned*/
+		const __m128i len3 = _mm_slli_epi32(descs[3], PKTLEN_SHIFT);
+		const __m128i len2 = _mm_slli_epi32(descs[2], PKTLEN_SHIFT);
+
+		/* merge the now-aligned packet length fields back in */
+		descs[3] = _mm_blend_epi16(descs[3], len3, 0x80);
+		descs[2] = _mm_blend_epi16(descs[2], len2, 0x80);
+
+		/* D.1 pkt 3,4 convert format from desc to pktmbuf */
+		pkt_mb4 = _mm_shuffle_epi8(descs[3], shuf_msk);
+		pkt_mb3 = _mm_shuffle_epi8(descs[2], shuf_msk);
+
+		/* C.1 4=>2 status err info only */
+		sterr_tmp2 = _mm_unpackhi_epi32(descs[3], descs[2]);
+		sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
+
+		desc_to_olflags_v(rxq, descs, &rx_pkts[pos]);
+
+		/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
+		pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
+		pkt_mb3 = _mm_add_epi16(pkt_mb3, crc_adjust);
+
+		/* pkt 1,2 shift the pktlen field to be 16-bit aligned*/
+		const __m128i len1 = _mm_slli_epi32(descs[1], PKTLEN_SHIFT);
+		const __m128i len0 = _mm_slli_epi32(descs[0], PKTLEN_SHIFT);
+
+		/* merge the now-aligned packet length fields back in */
+		descs[1] = _mm_blend_epi16(descs[1], len1, 0x80);
+		descs[0] = _mm_blend_epi16(descs[0], len0, 0x80);
+
+		/* D.1 pkt 1,2 convert format from desc to pktmbuf */
+		pkt_mb2 = _mm_shuffle_epi8(descs[1], shuf_msk);
+		pkt_mb1 = _mm_shuffle_epi8(descs[0], shuf_msk);
+
+		/* C.2 get 4 pkts status err value  */
+		zero = _mm_xor_si128(dd_check, dd_check);
+		staterr = _mm_unpacklo_epi32(sterr_tmp1, sterr_tmp2);
+
+		/* D.3 copy final 3,4 data to rx_pkts */
+		_mm_storeu_si128((void *)&rx_pkts[pos + 3]->rx_descriptor_fields1,
+				 pkt_mb4);
+		_mm_storeu_si128((void *)&rx_pkts[pos + 2]->rx_descriptor_fields1,
+				 pkt_mb3);
+
+		/* D.2 pkt 1,2 remove crc */
+		pkt_mb2 = _mm_add_epi16(pkt_mb2, crc_adjust);
+		pkt_mb1 = _mm_add_epi16(pkt_mb1, crc_adjust);
+
+		/* C* extract and record EOP bit */
+		if (split_packet) {
+			__m128i eop_shuf_mask = _mm_set_epi8(
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0x04, 0x0C, 0x00, 0x08
+					);
+
+			/* and with mask to extract bits, flipping 1-0 */
+			__m128i eop_bits = _mm_andnot_si128(staterr, eop_check);
+			/* the staterr values are not in order, as the count
+			 * of DD bits doesn't care. However, for end of
+			 * packet tracking, we do care, so shuffle. This also
+			 * compresses the 32-bit values to 8-bit
+			 */
+			eop_bits = _mm_shuffle_epi8(eop_bits, eop_shuf_mask);
+			/* store the resulting 32-bit value */
+			*(int *)split_packet = _mm_cvtsi128_si32(eop_bits);
+			split_packet += AVF_VPMD_DESCS_PER_LOOP;
+		}
+
+		/* C.3 calc available number of desc */
+		staterr = _mm_and_si128(staterr, dd_check);
+		staterr = _mm_packs_epi32(staterr, zero);
+
+		/* D.3 copy final 1,2 data to rx_pkts */
+		_mm_storeu_si128((void *)&rx_pkts[pos+1]->rx_descriptor_fields1,
+				 pkt_mb2);
+		_mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1,
+				 pkt_mb1);
+		desc_to_ptype_v(descs, &rx_pkts[pos]);
+		/* C.4 calc available number of desc */
+		var = __builtin_popcountll(_mm_cvtsi128_si64(staterr));
+		nb_pkts_recd += var;
+		if (likely(var != AVF_VPMD_DESCS_PER_LOOP))
+			break;
+	}
+
+	/* Update our internal tail pointer */
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_pkts_recd);
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail & (rxq->nb_rx_desc - 1));
+	rxq->rxrearm_nb = (uint16_t)(rxq->rxrearm_nb + nb_pkts_recd);
+
+	return nb_pkts_recd;
+}
+
+/* Notice:
+ * - nb_pkts < AVF_VPMD_DESCS_PER_LOOP, just return no packet
+ * - nb_pkts > AVF_VPMD_RX_MAX_BURST, only scan AVF_VPMD_RX_MAX_BURST
+ *   numbers of DD bits
+ */
+uint16_t
+avf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+		  uint16_t nb_pkts)
+{
+	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
+}
+
+/* vPMD receive routine that reassembles scattered packets
+ * Notice:
+ * - nb_pkts < AVF_VPMD_DESCS_PER_LOOP, just return no packet
+ * - nb_pkts > AVF_VPMD_RX_MAX_BURST, only scan AVF_VPMD_RX_MAX_BURST
+ *   numbers of DD bits
+ */
+uint16_t
+avf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			    uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	uint8_t split_flags[AVF_VPMD_RX_MAX_BURST] = {0};
+
+	/* get some new buffers */
+	uint16_t nb_bufs = _recv_raw_pkts_vec(rxq, rx_pkts, nb_pkts,
+					      split_flags);
+	if (nb_bufs == 0)
+		return 0;
+
+	/* happy day case, full burst + no packets to be joined */
+	const uint64_t *split_fl64 = (uint64_t *)split_flags;
+
+	if (rxq->pkt_first_seg == NULL &&
+	    split_fl64[0] == 0 && split_fl64[1] == 0 &&
+	    split_fl64[2] == 0 && split_fl64[3] == 0)
+		return nb_bufs;
+
+	/* reassemble any packets that need reassembly*/
+	unsigned i = 0;
+
+	if (rxq->pkt_first_seg == NULL) {
+		/* find the first split flag, and only reassemble then*/
+		while (i < nb_bufs && !split_flags[i])
+			i++;
+		if (i == nb_bufs)
+			return nb_bufs;
+	}
+	return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
+		&split_flags[i]);
+}
+
+static inline void
+vtx1(volatile struct avf_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)
+{
+	uint64_t high_qw =
+			(AVF_TX_DESC_DTYPE_DATA |
+			((uint64_t)flags  << AVF_TXD_QW1_CMD_SHIFT) |
+			((uint64_t)pkt->data_len << AVF_TXD_QW1_TX_BUF_SZ_SHIFT));
+
+	__m128i descriptor = _mm_set_epi64x(high_qw,
+					    pkt->buf_iova + pkt->data_off);
+	_mm_store_si128((__m128i *)txdp, descriptor);
+}
+
+static inline void
+vtx(volatile struct avf_tx_desc *txdp, struct rte_mbuf **pkt,
+    uint16_t nb_pkts,  uint64_t flags)
+{
+	int i;
+
+	for (i = 0; i < nb_pkts; ++i, ++txdp, ++pkt)
+		vtx1(txdp, *pkt, flags);
+}
+
+uint16_t
+avf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+			 uint16_t nb_pkts)
+{
+	struct avf_tx_queue *txq = (struct avf_tx_queue *)tx_queue;
+	volatile struct avf_tx_desc *txdp;
+	struct avf_tx_entry *txep;
+	uint16_t n, nb_commit, tx_id;
+	uint64_t flags = AVF_TX_DESC_CMD_EOP | 0x04;  /* bit 2 must be set */
+	uint64_t rs = AVF_TX_DESC_CMD_RS | flags;
+	int i;
+
+	/* crossing rs_thresh boundary is not allowed */
+	nb_pkts = RTE_MIN(nb_pkts, txq->rs_thresh);
+
+	if (txq->nb_free < txq->free_thresh)
+		avf_tx_free_bufs(txq);
+
+	nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+	if (unlikely(nb_pkts == 0))
+		return 0;
+
+	tx_id = txq->tx_tail;
+	txdp = &txq->tx_ring[tx_id];
+	txep = &txq->sw_ring[tx_id];
+
+	txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+
+	n = (uint16_t)(txq->nb_tx_desc - tx_id);
+	if (nb_commit >= n) {
+		tx_backlog_entry(txep, tx_pkts, n);
+
+		for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
+			vtx1(txdp, *tx_pkts, flags);
+
+		vtx1(txdp, *tx_pkts++, rs);
+
+		nb_commit = (uint16_t)(nb_commit - n);
+
+		tx_id = 0;
+		txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+
+		/* avoid reaching the end of the ring */
+		txdp = &txq->tx_ring[tx_id];
+		txep = &txq->sw_ring[tx_id];
+	}
+
+	tx_backlog_entry(txep, tx_pkts, nb_commit);
+
+	vtx(txdp, tx_pkts, nb_commit, flags);
+
+	tx_id = (uint16_t)(tx_id + nb_commit);
+	if (tx_id > txq->next_rs) {
+		txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)AVF_TX_DESC_CMD_RS) <<
+					 AVF_TXD_QW1_CMD_SHIFT);
+		txq->next_rs =
+			(uint16_t)(txq->next_rs + txq->rs_thresh);
+	}
+
+	txq->tx_tail = tx_id;
+
+	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_pkts=%u",
+		   txq->port_id, txq->queue_id, tx_id, nb_pkts);
+
+	AVF_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
+
+	return nb_pkts;
+}
+
+void __attribute__((cold))
+avf_rx_queue_release_mbufs_sse(struct avf_rx_queue *rxq)
+{
+	_avf_rx_queue_release_mbufs_vec(rxq);
+}
+
+static void __attribute__((cold))
+avf_tx_queue_release_mbufs_sse(struct avf_tx_queue *txq)
+{
+	_avf_tx_queue_release_mbufs_vec(txq);
+}
+
+static const struct avf_rxq_ops sse_vec_rxq_ops = {
+	.release_mbufs = avf_rx_queue_release_mbufs_sse,
+};
+
+static const struct avf_txq_ops sse_vec_txq_ops = {
+	.release_mbufs = avf_tx_queue_release_mbufs_sse,
+};
+
+int __attribute__((cold))
+avf_txq_vec_setup(struct avf_tx_queue *txq)
+{
+	txq->ops = &sse_vec_txq_ops;
+	return 0;
+}
+
+int __attribute__((cold))
+avf_rxq_vec_setup(struct avf_rx_queue *rxq)
+{
+	rxq->ops = &sse_vec_rxq_ops;
+	return avf_rxq_vec_setup_default(rxq);
+}
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v2 13/14] net/avf: enable bulk allocate Rx func
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
                     ` (11 preceding siblings ...)
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 12/14] net/avf: enable sse vector Rx Tx func Jingjing Wu
@ 2017-11-24  6:33   ` Jingjing Wu
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 14/14] net/avf: enable Rx interrupt support Jingjing Wu
                     ` (3 subsequent siblings)
  16 siblings, 0 replies; 151+ messages in thread
From: Jingjing Wu @ 2017-11-24  6:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/avf/avf.h        |   1 +
 drivers/net/avf/avf_ethdev.c |   1 +
 drivers/net/avf/avf_rxtx.c   | 300 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h   |   6 +
 4 files changed, 308 insertions(+)

diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index d4c275e..3885e4a 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -147,6 +147,7 @@ struct avf_adapter {
 	struct rte_eth_dev *eth_dev;
 	struct avf_info vf;
 
+	bool rx_bulk_alloc_allowed;
 	/* For vector PMD */
 	bool rx_vec_allowed;
 	bool tx_vec_allowed;
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 1e8d9c0..36ffa1a 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -149,6 +149,7 @@ avf_dev_configure(struct rte_eth_dev *dev)
 		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
 
+	ad->rx_bulk_alloc_allowed = true;
 #ifdef RTE_LIBRTE_AVF_INC_VECTOR
 	/* Initialize to TRUE. If any of Rx queues doesn't meet the
 	 * vector Rx/Tx preconditions, it will be reset.
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index 079d49b..038d85f 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -153,6 +153,27 @@ check_tx_vec_allow(struct avf_tx_queue *txq)
 }
 #endif
 
+static inline bool
+check_rx_bulk_allow(struct avf_rx_queue *rxq)
+{
+	int ret = TRUE;
+
+	if (!(rxq->rx_free_thresh >= AVF_RX_MAX_BURST)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "AVF_RX_MAX_BURST=%d",
+			     rxq->rx_free_thresh, AVF_RX_MAX_BURST);
+		ret = FALSE;
+	} else if (rxq->nb_rx_desc % rxq->rx_free_thresh != 0) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->nb_rx_desc=%d, "
+			     "rxq->rx_free_thresh=%d",
+			     rxq->nb_rx_desc, rxq->rx_free_thresh);
+		ret = FALSE;
+	}
+	return ret;
+}
+
 static inline void
 reset_rx_queue(struct avf_rx_queue *rxq)
 {
@@ -171,6 +192,11 @@ reset_rx_queue(struct avf_rx_queue *rxq)
 	for (i = 0; i < AVF_RX_MAX_BURST; i++)
 		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
 
+	/* for rx bulk */
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
 	rxq->rx_tail = 0;
 	rxq->nb_rx_hold = 0;
 	rxq->pkt_first_seg = NULL;
@@ -266,6 +292,17 @@ release_rxq_mbufs(struct avf_rx_queue *rxq)
 			rxq->sw_ring[i] = NULL;
 		}
 	}
+
+	/* for rx bulk */
+	if (rxq->rx_nb_avail == 0)
+		return;
+	for (i = 0; i < rxq->rx_nb_avail; i++) {
+		struct rte_mbuf *mbuf;
+
+		mbuf = rxq->rx_stage[rxq->rx_next_avail + i];
+		rte_pktmbuf_free_seg(mbuf);
+	}
+	rxq->rx_nb_avail = 0;
 }
 
 static inline void
@@ -395,6 +432,19 @@ avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
 	rxq->ops = &def_rxq_ops;
 
+	if (check_rx_bulk_allow(rxq) == TRUE)
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function will be "
+			     "used on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+	else {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "not satisfied, Scattered Rx is requested "
+			     "on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+		ad->rx_bulk_alloc_allowed = false;
+	}
+
 #ifdef RTE_LIBRTE_AVF_INC_VECTOR
 	if (check_rx_vec_allow(rxq) == FALSE)
 		ad->rx_vec_allowed = false;
@@ -1068,6 +1118,252 @@ avf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return nb_rx;
 }
 
+#define AVF_LOOK_AHEAD 8
+static inline int
+avf_rx_scan_hw_ring(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxep;
+	struct rte_mbuf *mb;
+	uint16_t pkt_len;
+	uint64_t qword1;
+	uint32_t rx_status;
+	int32_t s[AVF_LOOK_AHEAD], nb_dd;
+	int32_t i, j, nb_rx = 0;
+	uint64_t pkt_flags;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	rxep = &rxq->sw_ring[rxq->rx_tail];
+
+	qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+	rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+		    AVF_RXD_QW1_STATUS_SHIFT;
+
+	/* Make sure there is at least 1 packet to receive */
+	if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+		return 0;
+
+	/* Scan LOOK_AHEAD descriptors at a time to determine which
+	 * descriptors reference packets that are ready to be received.
+	 */
+	for (i = 0; i < AVF_RX_MAX_BURST; i += AVF_LOOK_AHEAD,
+	     rxdp += AVF_LOOK_AHEAD, rxep += AVF_LOOK_AHEAD) {
+		/* Read desc statuses backwards to avoid race condition */
+		for (j = AVF_LOOK_AHEAD - 1; j >= 0; j--) {
+			qword1 = rte_le_to_cpu_64(
+				rxdp[j].wb.qword1.status_error_len);
+			s[j] = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			       AVF_RXD_QW1_STATUS_SHIFT;
+		}
+
+		rte_smp_rmb();
+
+		/* Compute how many status bits were set */
+		for (j = 0, nb_dd = 0; j < AVF_LOOK_AHEAD; j++)
+			nb_dd += s[j] & (1 << AVF_RX_DESC_STATUS_DD_SHIFT);
+
+		nb_rx += nb_dd;
+
+		/* Translate descriptor info to mbuf parameters */
+		for (j = 0; j < nb_dd; j++) {
+			AVF_DUMP_RX_DESC(rxq, &rxdp[j],
+					 rxq->rx_tail + i * AVF_LOOK_AHEAD + j);
+
+			mb = rxep[j];
+			qword1 = rte_le_to_cpu_64
+					(rxdp[j].wb.qword1.status_error_len);
+			pkt_len = ((qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				  AVF_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
+			mb->data_len = pkt_len;
+			mb->pkt_len = pkt_len;
+			mb->ol_flags = 0;
+			avf_rxd_to_vlan_tci(mb, &rxdp[j]);
+			pkt_flags = avf_rxd_to_pkt_flags(qword1);
+			mb->packet_type =
+				ptype_tbl[(uint8_t)((qword1 &
+				AVF_RXD_QW1_PTYPE_MASK) >>
+				AVF_RXD_QW1_PTYPE_SHIFT)];
+
+			if (pkt_flags & PKT_RX_RSS_HASH)
+				mb->hash.rss = rte_le_to_cpu_32(
+					rxdp[j].wb.qword0.hi_dword.rss);
+
+			mb->ol_flags |= pkt_flags;
+		}
+
+		for (j = 0; j < AVF_LOOK_AHEAD; j++)
+			rxq->rx_stage[i + j] = rxep[j];
+
+		if (nb_dd != AVF_LOOK_AHEAD)
+			break;
+	}
+
+	/* Clear software ring entries */
+	for (i = 0; i < nb_rx; i++)
+		rxq->sw_ring[rxq->rx_tail + i] = NULL;
+
+	return nb_rx;
+}
+
+static inline uint16_t
+avf_rx_fill_from_stage(struct avf_rx_queue *rxq,
+		       struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts)
+{
+	uint16_t i;
+	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
+
+	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
+
+	for (i = 0; i < nb_pkts; i++)
+		rx_pkts[i] = stage[i];
+
+	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
+	rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+
+	return nb_pkts;
+}
+
+static inline int
+avf_rx_alloc_bufs(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxep;
+	struct rte_mbuf *mb;
+	uint16_t alloc_idx, i;
+	uint64_t dma_addr;
+	int diag;
+
+	/* Allocate buffers in bulk */
+	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
+				(rxq->rx_free_thresh - 1));
+	rxep = &rxq->sw_ring[alloc_idx];
+	diag = rte_mempool_get_bulk(rxq->mp, (void *)rxep,
+				    rxq->rx_free_thresh);
+	if (unlikely(diag != 0)) {
+		PMD_RX_LOG(ERR, "Failed to get mbufs in bulk");
+		return -ENOMEM;
+	}
+
+	rxdp = &rxq->rx_ring[alloc_idx];
+	for (i = 0; i < rxq->rx_free_thresh; i++) {
+		if (likely(i < (rxq->rx_free_thresh - 1)))
+			/* Prefetch next mbuf */
+			rte_prefetch0(rxep[i + 1]);
+
+		mb = rxep[i];
+		rte_mbuf_refcnt_set(mb, 1);
+		mb->next = NULL;
+		mb->data_off = RTE_PKTMBUF_HEADROOM;
+		mb->nb_segs = 1;
+		mb->port = rxq->port_id;
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
+		rxdp[i].read.hdr_addr = 0;
+		rxdp[i].read.pkt_addr = dma_addr;
+	}
+
+	/* Update Rx tail register */
+	rte_wmb();
+	AVF_PCI_REG_WRITE_RELAXED(rxq->qrx_tail, rxq->rx_free_trigger);
+
+	rxq->rx_free_trigger =
+		(uint16_t)(rxq->rx_free_trigger + rxq->rx_free_thresh);
+	if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
+		rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	return 0;
+}
+
+static inline uint16_t
+rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = (struct avf_rx_queue *)rx_queue;
+	struct rte_eth_dev *dev;
+	uint16_t nb_rx = 0;
+
+	if (!nb_pkts)
+		return 0;
+
+	if (rxq->rx_nb_avail)
+		return avf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	nb_rx = (uint16_t)avf_rx_scan_hw_ring(rxq);
+	rxq->rx_next_avail = 0;
+	rxq->rx_nb_avail = nb_rx;
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+
+	if (rxq->rx_tail > rxq->rx_free_trigger) {
+		if (avf_rx_alloc_bufs(rxq) != 0) {
+			uint16_t i, j;
+
+			/* TODO: count rx_mbuf_alloc_failed here */
+
+			rxq->rx_nb_avail = 0;
+			rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
+			for (i = 0, j = rxq->rx_tail; i < nb_rx; i++, j++)
+				rxq->sw_ring[j] = rxq->rx_stage[i];
+
+			return 0;
+		}
+	}
+
+	if (rxq->rx_tail >= rxq->nb_rx_desc)
+		rxq->rx_tail = 0;
+
+	PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u, nb_rx=%u",
+		   rxq->port_id, rxq->queue_id,
+		   rxq->rx_tail, nb_rx);
+
+	if (rxq->rx_nb_avail)
+		return avf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	return 0;
+}
+
+static uint16_t
+avf_recv_pkts_bulk_alloc(void *rx_queue,
+			 struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	uint16_t nb_rx = 0, n, count;
+
+	if (unlikely(nb_pkts == 0))
+		return 0;
+
+	if (likely(nb_pkts <= AVF_RX_MAX_BURST))
+		return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		n = RTE_MIN(nb_pkts, AVF_RX_MAX_BURST);
+		count = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+		nb_rx = (uint16_t)(nb_rx + count);
+		nb_pkts = (uint16_t)(nb_pkts - count);
+		if (count < n)
+			break;
+	}
+
+	return nb_rx;
+}
+
 static inline int
 avf_xmit_cleanup(struct avf_tx_queue *txq)
 {
@@ -1500,6 +1796,10 @@ avf_set_rx_function(struct rte_eth_dev *dev)
 		PMD_DRV_LOG(DEBUG, "Using a Scattered Rx callback (port=%d).",
 			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_scattered_pkts;
+	} else if (adapter->rx_bulk_alloc_allowed) {
+		PMD_DRV_LOG(DEBUG, "Using bulk Rx callback (port=%d).",
+			     dev->data->port_id);
+		dev->rx_pkt_burst = avf_recv_pkts_bulk_alloc;
 	} else {
 		PMD_DRV_LOG(DEBUG, "Using Basic Rx callback (port=%d).",
 			    dev->data->port_id);
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index 0246a73..0878286 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -112,6 +112,12 @@ struct avf_rx_queue {
 	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
 	uint64_t mbuf_initializer; /* value to init mbufs */
 
+	/* for rx bulk */
+	uint16_t rx_nb_avail;      /* number of staged packets ready */
+	uint16_t rx_next_avail;    /* index of next staged packets */
+	uint16_t rx_free_trigger;  /* triggers rx buffer allocation */
+	struct rte_mbuf *rx_stage[AVF_RX_MAX_BURST * 2]; /* store mbuf */
+
 	uint8_t port_id;        /* device port ID */
 	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
 	uint16_t queue_id;      /* Rx queue index */
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v2 14/14] net/avf: enable Rx interrupt support
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
                     ` (12 preceding siblings ...)
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 13/14] net/avf: enable bulk allocate Rx func Jingjing Wu
@ 2017-11-24  6:33   ` Jingjing Wu
  2017-12-04 20:02     ` Ferruh Yigit
  2017-12-04 19:48   ` [dpdk-dev] [PATCH v2 00/14] add new avf PMD Ferruh Yigit
                     ` (2 subsequent siblings)
  16 siblings, 1 reply; 151+ messages in thread
From: Jingjing Wu @ 2017-11-24  6:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, wenzhuo.lu

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 MAINTAINERS                          |   6 ++
 doc/guides/nics/features/avf.ini     |  38 +++++++
 doc/guides/nics/features/avf_vec.ini |  38 +++++++
 doc/guides/nics/intel_vf.rst         |  16 ++-
 drivers/net/avf/avf_ethdev.c         | 190 +++++++++++++++++++++++++++++------
 5 files changed, 255 insertions(+), 33 deletions(-)
 create mode 100644 doc/guides/nics/features/avf.ini
 create mode 100644 doc/guides/nics/features/avf_vec.ini

diff --git a/MAINTAINERS b/MAINTAINERS
index f0baeb4..6692f2a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -427,6 +427,12 @@ M: Xiao Wang <xiao.w.wang@intel.com>
 F: drivers/net/fm10k/
 F: doc/guides/nics/features/fm10k*.ini
 
+Intel avf
+M: Jingjing Wu <jingjing.wu@intel.com>
+M: Wenzhuo Lu <wenzhuo.lu@intel.com>
+F: drivers/net/avf/
+F: doc/guides/nics/features/avf*.ini
+
 Mellanox mlx4
 M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
 F: drivers/net/mlx4/
diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
new file mode 100644
index 0000000..88d5873
--- /dev/null
+++ b/doc/guides/nics/features/avf.ini
@@ -0,0 +1,38 @@
+;
+; Supported features of the 'avf' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
+Rx interrupt         = Y
+Queue start/stop     = Y
+MTU update           = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+TSO                  = Y
+Promiscuous mode     = Y
+Allmulticast mode    = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
+RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
+VLAN filter          = Y
+Hash filter          = Y
+CRC offload          = Y
+VLAN offload         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
+Basic stats          = Y
+Multiprocess aware   = Y
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/nics/features/avf_vec.ini b/doc/guides/nics/features/avf_vec.ini
new file mode 100644
index 0000000..74656ff
--- /dev/null
+++ b/doc/guides/nics/features/avf_vec.ini
@@ -0,0 +1,38 @@
+;
+; Supported features of the 'avf_vec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
+Rx interrupt         = Y
+Queue start/stop     = Y
+MTU update           = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+TSO                  = Y
+Promiscuous mode     = Y
+Allmulticast mode    = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
+RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
+VLAN filter          = Y
+Hash filter          = Y
+CRC offload          = Y
+VLAN offload         = P
+L3 checksum offload  = P
+L4 checksum offload  = P
+Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
+Basic stats          = Y
+Multiprocess aware   = Y
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 1e83bf6..3adb684 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -28,8 +28,8 @@
     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-I40E/IXGBE/IGB Virtual Function Driver
-======================================
+Intel Virtual Function Driver
+=============================
 
 Supported Intel® Ethernet Controllers (see the *DPDK Release Notes* for details)
 support the following modes of operation in a virtualized environment:
@@ -93,6 +93,18 @@ and the Physical Function operates on the global resources on behalf of the Virt
 For this out-of-band communication, an SR-IOV enabled NIC provides a memory buffer for each Virtual Function,
 which is called a "Mailbox".
 
+Intel® Ethernet Adaptive Virtual Function
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Adaptive Virtual Function (AVF) is an SR-IOV Virtual Function with the same device id (8086:1889) across different Intel Ethernet Controllers.
+The AVF driver is a VF driver that supports all future Intel devices without requiring a VM update. Since it is an adaptive VF driver,
+every new drop of the VF driver can enable more and more advanced features in the VM, provided the underlying HW device supports those
+advanced features, in a device-agnostic way and without ever compromising the base functionality. AVF provides a generic hardware interface, and
+the interface between the AVF driver and a compliant PF driver is specified.
+
+Intel products starting from the Ethernet Controller 710 Series support the Adaptive Virtual Function.
+
+Virtual Functions are generated in the usual way, and the resources assigned to a VF depend on the NIC infrastructure.
+
 The PCIE host-interface of Intel Ethernet Switch FM10000 Series VF infrastructure
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 36ffa1a..7c6a1c4 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -96,9 +96,14 @@ static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 static int avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
+static int avf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev,
+					uint16_t queue_id);
+static int avf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev,
+					 uint16_t queue_id);
 
 int avf_logtype_init;
 int avf_logtype_driver;
+
 static const struct rte_pci_id pci_id_avf_map[] = {
 	{ RTE_PCI_DEVICE(AVF_INTEL_VENDOR_ID, AVF_DEV_ID_ADAPTIVE_VF) },
 	{ .vendor_id = 0, /* sentinel */ },
@@ -140,6 +145,8 @@ static const struct eth_dev_ops avf_eth_dev_ops = {
 	.rx_descriptor_status       = avf_dev_rx_desc_status,
 	.tx_descriptor_status       = avf_dev_tx_desc_status,
 	.mtu_set                    = avf_dev_mtu_set,
+	.rx_queue_intr_enable       = avf_dev_rx_queue_intr_enable,
+	.rx_queue_intr_disable      = avf_dev_rx_queue_intr_disable,
 };
 
 static int
@@ -300,6 +307,88 @@ avf_init_queues(struct rte_eth_dev *dev)
 	return ret;
 }
 
+static int avf_config_rx_queues_irqs(struct rte_eth_dev *dev,
+					   struct rte_intr_handle *intr_handle)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	uint16_t interval, i;
+	int vec;
+
+	if (dev->data->dev_conf.intr_conf.rxq != 0) {
+		if (rte_intr_efd_enable(intr_handle, dev->data->nb_rx_queues))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+			rte_zmalloc("intr_vec",
+				    dev->data->nb_rx_queues * sizeof(int), 0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
+				    dev->data->nb_rx_queues);
+			return -1;
+		}
+	}
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) &&
+	    dev->data->dev_conf.intr_conf.rxq == 0) {
+		/* If no WB_ON_ITR offload flags, need to set interrupt for descriptor
+		 * write back.
+		 */
+		vf->nb_msix = 1;
+		vf->msix_base = AVF_MISC_VEC_ID;
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			vf->rxq_map[0] |= 1 << i;
+
+		PMD_DRV_LOG(DEBUG, "vector 0 is mapped to all Rx queues");
+
+		/* set ITR to max */
+		interval = avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+			      AVFINT_DYN_CTL01_INTENA_MASK |
+			      AVFINT_DYN_CTL01_CLEARPBA_MASK |
+			      (AVF_ITR_INDEX_DEFAULT <<
+			       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+			      (interval << AVFINT_DYN_CTL01_INTERVAL_SHIFT));
+		AVF_WRITE_FLUSH(hw);
+	} else if (dev->data->dev_conf.intr_conf.rxq) {
+		if (!rte_intr_allow_others(intr_handle)) {
+			vf->nb_msix = 1;
+			vf->msix_base = AVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				vf->rxq_map[0] |= 1 << i;
+				intr_handle->intr_vec[i] = AVF_MISC_VEC_ID;
+			}
+			PMD_DRV_LOG(DEBUG, "vector 0 is mapped to all Rx queues");
+		} else {
+			/* If Rx interrupt is required, and we can use
+			 * multiple interrupt vectors, the vectors start from 1
+			 */
+			vf->nb_msix = RTE_MIN(vf->vf_res->max_vectors,
+					      intr_handle->nb_efd);
+			vf->msix_base = AVF_RX_VEC_START;
+			vec = AVF_RX_VEC_START;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				vf->rxq_map[vec] |= 1 << i;
+				intr_handle->intr_vec[i] = vec++;
+				if (vec >= vf->nb_msix)
+					vec = AVF_RX_VEC_START;
+			}
+			PMD_DRV_LOG(DEBUG, "%u vectors are mapped to %u Rx queues",
+				    vf->nb_msix, dev->data->nb_rx_queues);
+		}
+	}
+
+	if (avf_config_irq_map(adapter)) {
+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+		return -1;
+	}
+	return 0;
+}
+
 static int
 avf_start_queues(struct rte_eth_dev *dev)
 {
@@ -339,8 +428,6 @@ avf_dev_start(struct rte_eth_dev *dev)
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = dev->intr_handle;
-	uint16_t interval;
-	int i;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -350,8 +437,6 @@ avf_dev_start(struct rte_eth_dev *dev)
 	vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
 				      dev->data->nb_tx_queues);
 
-	/* TODO: Rx interrupt */
-
 	if (avf_init_queues(dev) != 0) {
 		PMD_DRV_LOG(ERR, "failed to do Queue init");
 		return -1;
@@ -369,29 +454,14 @@ avf_dev_start(struct rte_eth_dev *dev)
 		goto err_queue;
 	}
 
-	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR)) {
-		/* If no WB_ON_ITR offload flags, need to set interrupt for
-		 * descriptor write back.
-		 */
-		vf->nb_msix = 1;
-		vf->msix_base = AVF_MISC_VEC_ID;
-		for (i = 0; i < dev->data->nb_rx_queues; i++)
-			vf->rxq_map[0] |= 1 << i;
-
-		/* set ITR to max */
-		interval = avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
-		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
-			      AVFINT_DYN_CTL01_INTENA_MASK |
-			      AVFINT_DYN_CTL01_CLEARPBA_MASK |
-			      (AVF_ITR_INDEX_DEFAULT <<
-			       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
-			      (interval << AVFINT_DYN_CTL01_INTERVAL_SHIFT));
-		AVF_WRITE_FLUSH(hw);
-
-		if (avf_config_irq_map(adapter)) {
-			PMD_DRV_LOG(ERR, "config interrupt mapping failed");
-			goto err_queue;
-		}
+	if (avf_config_rx_queues_irqs(dev, intr_handle) != 0) {
+		PMD_DRV_LOG(ERR, "configure irq failed");
+		goto err_queue;
+	}
+	/* re-enable intr again, because efd assign may change */
+	if (dev->data->dev_conf.intr_conf.rxq != 0) {
+		rte_intr_disable(intr_handle);
+		rte_intr_enable(intr_handle);
 	}
 
 	/* Set all mac addrs */
@@ -402,7 +472,6 @@ avf_dev_start(struct rte_eth_dev *dev)
 		goto err_mac;
 	}
 
-	/* TODO: enable interrupt for RX interrupt */
 	return 0;
 
 err_mac:
@@ -418,6 +487,8 @@ avf_dev_stop(struct rte_eth_dev *dev)
 	struct avf_adapter *adapter =
 		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
 	int ret, i;
 
 	PMD_INIT_FUNC_TRACE();
@@ -427,9 +498,13 @@ avf_dev_stop(struct rte_eth_dev *dev)
 
 	avf_stop_queues(dev);
 
-	/*TODO: Disable the interrupt for Rx*/
-
-	/* TODO: Rx interrupt vector mapping free */
+	/* Disable the interrupt for Rx */
+	rte_intr_efd_disable(intr_handle);
+	/* Rx interrupt vector mapping free */
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
 
 	/* remove all mac addrs */
 	avf_add_del_all_mac_addr(adapter, FALSE);
@@ -928,6 +1003,59 @@ avf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 }
 
 static int
+avf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	uint16_t msix_intr;
+
+	msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+	if (msix_intr == AVF_MISC_VEC_ID) {
+		PMD_DRV_LOG(INFO, "MISC is also enabled for control");
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+			      AVFINT_DYN_CTL01_INTENA_MASK |
+			      AVFINT_DYN_CTL01_CLEARPBA_MASK |
+			      AVFINT_DYN_CTL01_ITR_INDX_MASK);
+	} else
+		AVF_WRITE_REG(hw,
+			      AVFINT_DYN_CTLN1(msix_intr - AVF_RX_VEC_START),
+			      AVFINT_DYN_CTLN1_INTENA_MASK |
+			      AVFINT_DYN_CTLN1_CLEARPBA_MASK |
+			      AVFINT_DYN_CTLN1_ITR_INDX_MASK);
+
+	AVF_WRITE_FLUSH(hw);
+
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	return 0;
+}
+
+static int
+avf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t msix_intr;
+
+	msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+	if (msix_intr == AVF_MISC_VEC_ID) {
+		PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it");
+		return -EIO;
+	}
+
+	AVF_WRITE_REG(hw,
+		      AVFINT_DYN_CTLN1(msix_intr - AVF_RX_VEC_START),
+		      0);
+
+	AVF_WRITE_FLUSH(hw);
+	return 0;
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
-- 
2.4.11

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v2 03/14] net/avf: enable queue and device
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 03/14] net/avf: enable queue and device Jingjing Wu
@ 2017-12-04  8:45     ` Xing, Beilei
  2017-12-04 19:56     ` Ferruh Yigit
  1 sibling, 0 replies; 151+ messages in thread
From: Xing, Beilei @ 2017-12-04  8:45 UTC (permalink / raw)
  To: Wu, Jingjing, dev; +Cc: Wu, Jingjing, Lu, Wenzhuo



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jingjing Wu
> Sent: Friday, November 24, 2017 2:33 PM
> To: dev@dpdk.org
> Cc: Wu, Jingjing <jingjing.wu@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>
> Subject: [dpdk-dev] [PATCH v2 03/14] net/avf: enable queue and device
> 
> enable device and queue setup ops like:
> 
>  - dev_configure
>  - dev_start
>  - dev_stop
>  - dev_close
>  - dev_infos_get
>  - rx_queue_start
>  - rx_queue_stop
>  - tx_queue_start
>  - tx_queue_stop
>  - rx_queue_setup
>  - rx_queue_release
>  - tx_queue_setup
>  - tx_queue_release
> 
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---
>  drivers/net/avf/Makefile     |   1 +
>  drivers/net/avf/avf.h        |  18 ++
>  drivers/net/avf/avf_ethdev.c | 356 ++++++++++++++++++++++++
>  drivers/net/avf/avf_rxtx.c   | 644
> +++++++++++++++++++++++++++++++++++++++++++
>  drivers/net/avf/avf_rxtx.h   | 202 ++++++++++++++
>  drivers/net/avf/avf_vchnl.c  | 355 ++++++++++++++++++++++++
>  6 files changed, 1576 insertions(+)
>  create mode 100644 drivers/net/avf/avf_rxtx.c
>  create mode 100644 drivers/net/avf/avf_rxtx.h
> 



>  static const struct eth_dev_ops avf_eth_dev_ops = {
> +	.dev_configure              = avf_dev_configure,
> +	.dev_start                  = avf_dev_start,
> +	.dev_stop                   = avf_dev_stop,
> +	.dev_close                  = avf_dev_close,
> +	.dev_infos_get              = avf_dev_info_get,
> +	.rx_queue_start             = avf_dev_rx_queue_start,
> +	.rx_queue_stop              = avf_dev_rx_queue_stop,
> +	.tx_queue_start             = avf_dev_tx_queue_start,
> +	.tx_queue_stop              = avf_dev_tx_queue_stop,
> +	.rx_queue_setup             = avf_dev_rx_queue_setup,
> +	.rx_queue_release           = avf_dev_rx_queue_release,
> +	.tx_queue_setup             = avf_dev_tx_queue_setup,
> +	.tx_queue_release           = avf_dev_tx_queue_release,
>  };
> 
>  static int
> +avf_dev_configure(struct rte_eth_dev *dev)
> +{
> +	struct avf_adapter *ad =
> +		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> +	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
> +
> +	/* Vlan stripping setting */
> +	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
> +		avf_enable_vlan_strip(ad);
> +	else
> +		avf_disable_vlan_strip(ad);

Better to check PF capability first before setting VLAN offload.
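Something like the sketch below is what I have in mind (just a sketch; it
assumes the VIRTCHNL_VF_OFFLOAD_VLAN flag and the AVF_DEV_PRIVATE_TO_VF
helper used elsewhere in this series):

	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(ad);

	/* Only change VLAN stripping if the PF advertises the capability */
	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_VLAN) {
		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
			avf_enable_vlan_strip(ad);
		else
			avf_disable_vlan_strip(ad);
	}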

> +	return 0;
> +}
> +
> +

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/14] add new avf PMD
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
                     ` (13 preceding siblings ...)
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 14/14] net/avf: enable Rx interrupt support Jingjing Wu
@ 2017-12-04 19:48   ` Ferruh Yigit
  2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
  16 siblings, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2017-12-04 19:48 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu

On 11/23/2017 10:33 PM, Jingjing Wu wrote:
> Adaptive Virtual Function (AVF) Driver is VF driver which supports
> for all future Intel devices without requiring a VM update.
> It promises the basic high speed connectivity. And since this happens
> to be an adaptive VF driver, every new drop of the VF driver would
> add more and more advanced features that can be turned on in the VM
> if the underlying HW device supports those advanced features. Most
> importantly in a device agnostic way without ever compromising on the
> base functionality. All the AVF's interface need to follow AVF spec,
> and AVF compliant interface is supported start from the
> Intel® Ethernet Controller 710 Series.
> 
> This patch set adds AVF PMD supporting.
>  - Device initialization 
>  - Queue setup and Device start
>  - Basic Rx and Tx.
>  - MAC address offload feature
>  - Vlan offload feature
>  - RSS offload feature
>  - Vectored Rx and Tx func
>  - Bulk allocate Rx func
>  - Rx interrupt support
>  - Statistics query
> 
> v2 changes:
>  - rebase to 17.11
>  - add vectored Rx and Tx func
>  - add bulk allocate Rx func
>  - add Rx interrupt support
>  - add statistics query
>  - fix coding style issue
>  - remove extra compile flags in Makefile
>  - add doc to list avf PMD features
>  - fix lut setting when rss is disabled
>  - fix log init missing
>  - remove rx_descriptor_done
> 
> Jingjing Wu (13):
>   net/avf/base: add base code for avf PMD
>   net/avf: initilization of avf PMD
>   net/avf: enable queue and device
>   net/avf: enable basic Rx Tx func
>   net/avf: enable link status update
>   net/avf: enable ops to get stats
>   net/avf: enable ops for MAC VLAN offload
>   net/avf: enable ops for RSS setting
>   net/avf: enable ops for MTU setting
>   net/avf: enable ops to check queue info and status
>   net/i40e: support AVF basic interface
>   net/avf: enable sse vector Rx Tx func
>   net/avf: enable Rx interrupt support
> 
> Wenzhuo Lu (1):
>   net/avf: enable bulk allocate Rx func

Overall, there are shared build errors and checkpatch warnings; can you
please check them?

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v2 01/14] net/avf/base: add base code for avf PMD
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 01/14] net/avf/base: add base code for " Jingjing Wu
@ 2017-12-04 19:50     ` Ferruh Yigit
  0 siblings, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2017-12-04 19:50 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu

On 11/23/2017 10:33 PM, Jingjing Wu wrote:
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

<...>

> --- /dev/null
> +++ b/drivers/net/avf/avf_log.h
> @@ -0,0 +1,52 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2017 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */

Can you please switch to SPDX license identifiers [1], for all files in
the patchset?

[1]
http://dpdk.org/dev/patchwork/patch/31857/
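
For example, the whole boilerplate block above would collapse to something
like the following (a sketch only; please take the exact tag and format
from the guidelines in [1]):

/* SPDX-License-Identifier: BSD-3-Clause
 * Copyright(c) 2017 Intel Corporation
 */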

<...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v2 02/14] net/avf: initilization of avf PMD
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 02/14] net/avf: initilization of " Jingjing Wu
@ 2017-12-04 19:52     ` Ferruh Yigit
  0 siblings, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2017-12-04 19:52 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu

On 11/23/2017 10:33 PM, Jingjing Wu wrote:
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

<...>

> @@ -37,6 +37,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD),d)
>  endif
>  
>  DIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += af_packet
> +DIRS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf

Can you please add this alphabetically?

<...>

> +OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
> +$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))

There is no CFLAGS_BASE_DRIVER yet; you can add these lines when some
CFLAGS_BASE_DRIVER flags are actually added.

<...>

> @@ -37,7 +37,7 @@
>  extern int avf_logtype_init;
>  #define PMD_INIT_LOG(level, fmt, args...) \
>  	rte_log(RTE_LOG_ ## level, avf_logtype_init, "%s(): " fmt "\n", \
> -		__func__, ##args)
> +		__func__, ## args)

Can you please squash this one to previous patch?

<...>

> +/**
> + * avf_check_api_version
> + * @dev: pointer to eth device

There is no "dev" parameter to the API.

<...>

> @@ -120,6 +120,7 @@ _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK)  += -lrte_mempool_stack
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_ARK_PMD)        += -lrte_pmd_ark
>  _LDLIBS-$(CONFIG_RTE_LIBRTE_AVP_PMD)        += -lrte_pmd_avp
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_AVF_PMD)        += -lrte_pmd_avf

Can you please add this alphabetically?

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v2 03/14] net/avf: enable queue and device
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 03/14] net/avf: enable queue and device Jingjing Wu
  2017-12-04  8:45     ` Xing, Beilei
@ 2017-12-04 19:56     ` Ferruh Yigit
  1 sibling, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2017-12-04 19:56 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu

On 11/23/2017 10:33 PM, Jingjing Wu wrote:
> enable device and queue setup ops like:
> 
>  - dev_configure
>  - dev_start
>  - dev_stop
>  - dev_close
>  - dev_infos_get
>  - rx_queue_start
>  - rx_queue_stop
>  - tx_queue_start
>  - tx_queue_stop
>  - rx_queue_setup
>  - rx_queue_release
>  - tx_queue_setup
>  - tx_queue_release
> 
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

<...>

> +/* HW desc structure, both 16-byte and 32-byte types are supported */
> +#ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC

Do you want to add this config option in this patch?

<...>

> +/* Structure associated with each Rx queue. */
> +struct avf_rx_queue {
> +	struct rte_mempool *mp;       /* mbuf pool to populate Rx ring */
> +	const struct rte_memzone *mz; /* memzone for Rx ring */
> +	volatile union avf_rx_desc *rx_ring; /* Rx ring virtual address */
> +	uint64_t rx_ring_phys_addr;   /* Rx ring DMA address */
> +	struct rte_mbuf **sw_ring;     /* address of SW ring */
> +	uint16_t nb_rx_desc;          /* ring length */
> +	uint16_t rx_tail;             /* current value of tail */
> +	volatile uint8_t *qrx_tail;   /* register address of tail */
> +	uint16_t rx_free_thresh;      /* max free RX desc to hold */
> +	uint16_t nb_rx_hold;          /* number of held free RX desc */
> +	struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
> +	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
> +	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
> +
> +	uint8_t port_id;        /* device port ID */

If this is ethdev port_id, this needs to be 16bits now.
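I.e., since ethdev port ids are 16-bit now, something like:

	uint16_t port_id;        /* device port ID */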

<...>

> +/* Structure associated with each TX queue. */
> +struct avf_tx_queue {
> +	const struct rte_memzone *mz;  /* memzone for Tx ring */
> +	volatile struct avf_tx_desc *tx_ring; /* Tx ring virtual address */
> +	uint64_t tx_ring_phys_addr;    /* Tx ring DMA address */
> +	struct avf_tx_entry *sw_ring;  /* address array of SW ring */
> +	uint16_t nb_tx_desc;           /* ring length */
> +	uint16_t tx_tail;              /* current value of tail */
> +	volatile uint8_t *qtx_tail;    /* register address of tail */
> +	uint16_t nb_used;              /* number of used desc since RS bit set */
> +	uint16_t nb_free;
> +	uint16_t last_desc_cleaned;    /* last desc have been cleaned*/
> +	uint16_t free_thresh;
> +	uint16_t rs_thresh;
> +
> +	uint8_t port_id;

Same here.

<...>

> +
> +#ifdef RTE_LIBRTE_AVF_RX_DUMP
> +#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) \
> +	avf_dump_rx_descriptor(rxq, desc, rx_id);
> +#else
> +#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
> +#endif
> +
> +#ifdef RTE_LIBRTE_AVF_TX_DUMP
> +#define AVF_DUMP_TX_DESC(txq, desc, tx_id) \
> +	avf_dump_tx_descriptor(txq, desc, tx_id);
> +#else
> +#define AVF_DUMP_TX_DESC(txq, desc, tx_id) do { } while (0)
> +#endif

These are not defined anywhere and will be replaced in the next patch, so
why not remove them completely in this patch and add the correct ones in
the next patch?

<...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v2 04/14] net/avf: enable basic Rx Tx func
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 04/14] net/avf: enable basic Rx Tx func Jingjing Wu
@ 2017-12-04 19:57     ` Ferruh Yigit
  2017-12-27  3:07       ` Wu, Jingjing
  0 siblings, 1 reply; 151+ messages in thread
From: Ferruh Yigit @ 2017-12-04 19:57 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu

On 11/23/2017 10:33 PM, Jingjing Wu wrote:
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>

<...>

> @@ -31,8 +31,8 @@
>   *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
>   */
>  
> -#ifndef _AVF_LOGS_H_
> -#define _AVF_LOGS_H_
> +#ifndef _AVF_LOG_H_
> +#define _AVF_LOG_H_

Can you please squash this one with patch 1/14 ?

<...>

> @@ -185,17 +227,13 @@ void avf_dump_tx_descriptor(const struct avf_tx_queue *txq,
>  	       tx_desc->cmd_type_offset_bsz);
>  }
>  
> -#ifdef RTE_LIBRTE_AVF_RX_DUMP
> +#ifdef DEBUG_DUMP_DESC
>  #define AVF_DUMP_RX_DESC(rxq, desc, rx_id) \
>  	avf_dump_rx_descriptor(rxq, desc, rx_id);
> -#else
> -#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
> -#endif
> -
> -#ifdef RTE_LIBRTE_AVF_TX_DUMP
>  #define AVF_DUMP_TX_DESC(txq, desc, tx_id) \
>  	avf_dump_tx_descriptor(txq, desc, tx_id);
>  #else
> +#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
>  #define AVF_DUMP_TX_DESC(txq, desc, tx_id) do { } while (0)
>  #endif

Although we are trying to add as few compile-time config options as possible,
having this controlled by a define in the Makefile and compiled out by default
also does not seem like a good option. What do you think about converting it
into another config option?

<...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v2 05/14] net/avf: enable link status update
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 05/14] net/avf: enable link status update Jingjing Wu
@ 2017-12-04 19:58     ` Ferruh Yigit
  2017-12-27  3:07       ` Wu, Jingjing
  0 siblings, 1 reply; 151+ messages in thread
From: Ferruh Yigit @ 2017-12-04 19:58 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu

On 11/23/2017 10:33 PM, Jingjing Wu wrote:
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---
>  drivers/net/avf/avf.h        |  2 ++
>  drivers/net/avf/avf_ethdev.c | 48 ++++++++++++++++++++++++++++++++++++++++++++
>  drivers/net/avf/avf_vchnl.c  | 38 ++++++++++++++++++++++++++++++++++-

Can you please update doc/guides/nics/features/avf.ini in each patch per feature
added?

Same is valid for all further feature patches.

<...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v2 06/14] net/avf: enable ops to get stats
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 06/14] net/avf: enable ops to get stats Jingjing Wu
@ 2017-12-04 19:58     ` Ferruh Yigit
  0 siblings, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2017-12-04 19:58 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu

On 11/23/2017 10:33 PM, Jingjing Wu wrote:

Title can be "net/avf: support stats"?

> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

<...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v2 07/14] net/avf: enable ops for MAC VLAN offload
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 07/14] net/avf: enable ops for MAC VLAN offload Jingjing Wu
@ 2017-12-04 19:59     ` Ferruh Yigit
  0 siblings, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2017-12-04 19:59 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu

On 11/23/2017 10:33 PM, Jingjing Wu wrote:
>  - promiscuous_enable
>  - promiscuous_disable
>  - allmulticast_enable
>  - allmulticast_disable
>  - mac_addr_add
>  - mac_addr_remove
>  - mac_addr_set
>  - vlan_filter_set
>  - vlan_offload_set

Patch title is misleading.

> 
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

<....>

> +static int
> +avf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
> +{
> +	struct avf_adapter *adapter =
> +		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> +	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
> +	int err;
> +
> +	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
> +		return -ENOTSUP;
> +
> +	err = avf_add_del_vlan(adapter, vlan_id, on);
> +	if (err)
> +		return -EIO;

Compiler complains about missing return.

> +}
> +
> +static int
> +avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
> +{
> +	struct avf_adapter *adapter =
> +		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> +	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
> +	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
> +	int err;
> +
> +	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
> +		return -ENOTSUP;
> +
> +	/* Vlan stripping setting */
> +	if (mask & ETH_VLAN_STRIP_MASK) {
> +		/* Enable or disable VLAN stripping */
> +		if (dev_conf->rxmode.hw_vlan_strip)
> +			err = avf_enable_vlan_strip(adapter);
> +		else
> +			err = avf_disable_vlan_strip(adapter);
> +	}
> +
> +	if (err)
> +		return -EIO;

Same here, missing return.
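
For reference, a minimal sketch of the filter handler with the missing return
added, reusing the declarations from the quoted hunk (returning 0 on success is
an assumption). The offload handler needs the same final return, and its err
should also get an initial value there, since the strip branch may not be taken:

static int
avf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
{
	struct avf_adapter *adapter =
		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
	int err;

	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
		return -ENOTSUP;

	err = avf_add_del_vlan(adapter, vlan_id, on);
	if (err)
		return -EIO;

	/* report success to the ethdev layer */
	return 0;
}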

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v2 12/14] net/avf: enable sse vector Rx Tx func
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 12/14] net/avf: enable sse vector Rx Tx func Jingjing Wu
@ 2017-12-04 20:01     ` Ferruh Yigit
  0 siblings, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2017-12-04 20:01 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu

On 11/23/2017 10:33 PM, Jingjing Wu wrote:
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

<...>

> @@ -233,6 +233,7 @@ CONFIG_RTE_LIBRTE_AVF_DEBUG_TX=n
>  CONFIG_RTE_LIBRTE_AVF_DEBUG_TX_FREE=n
>  CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=n
>  CONFIG_RTE_LIBRTE_AVF_16BYTE_RX_DESC=n
> +CONFIG_RTE_LIBRTE_AVF_INC_VECTOR=y

Can you please move this just below CONFIG_RTE_LIBRTE_AVF_PMD, since enabling
or disabling the vector PMD is more important than the debug configs.

<...>

> +#ifdef RTE_LIBRTE_AVF_INC_VECTOR
> +static inline bool
> +check_rx_vec_allow(struct avf_rx_queue *rxq)
> +{
> +	if (rxq->rx_free_thresh >= AVF_VPMD_RX_MAX_BURST &&
> +	    rxq->nb_rx_desc % rxq->rx_free_thresh == 0) {
> +		PMD_INIT_LOG(DEBUG, "Vector Rx"
> +				    " can be enabled on this rxq.");
> +		return TRUE;
> +	}
> +
> +	PMD_INIT_LOG(DEBUG, "Vector Rx"
> +			    " cannot be enabled on this rxq.");

Can merge these two lines.
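
i.e. a sketch of the same check with each message on a single line (the final
return is assumed, as it is cut off in the quote):

static inline bool
check_rx_vec_allow(struct avf_rx_queue *rxq)
{
	if (rxq->rx_free_thresh >= AVF_VPMD_RX_MAX_BURST &&
	    rxq->nb_rx_desc % rxq->rx_free_thresh == 0) {
		PMD_INIT_LOG(DEBUG, "Vector Rx can be enabled on this rxq.");
		return TRUE;
	}

	PMD_INIT_LOG(DEBUG, "Vector Rx cannot be enabled on this rxq.");
	return FALSE;
}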

<...>

>  
> -/* choose rx function*/
> +/* choose tx function*/

Can you please fix this in patch 4/14, where it is added?

<...>

> +					rte_mempool_put_bulk(free[0]->pool,
> +							     (void *)free,

Is void * cast required?

<...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v2 14/14] net/avf: enable Rx interrupt support
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 14/14] net/avf: enable Rx interrupt support Jingjing Wu
@ 2017-12-04 20:02     ` Ferruh Yigit
  0 siblings, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2017-12-04 20:02 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu

On 11/23/2017 10:33 PM, Jingjing Wu wrote:
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---
>  MAINTAINERS                          |   6 ++

The MAINTAINERS file update can be part of the first patch.

>  doc/guides/nics/features/avf.ini     |  38 +++++++
>  doc/guides/nics/features/avf_vec.ini |  38 +++++++
>  doc/guides/nics/intel_vf.rst         |  16 ++-

The patch title is misleading given the documentation updates; can you please
separate the doc updates into another patch, and distribute the .ini file
updates to the patches that actually add the features?

Also, can you please add a release notes update to announce the PMD?

<...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v2 11/14] net/i40e: support AVF basic interface
  2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 11/14] net/i40e: support AVF basic interface Jingjing Wu
@ 2017-12-04 20:04     ` Ferruh Yigit
  0 siblings, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2017-12-04 20:04 UTC (permalink / raw)
  To: Jingjing Wu, dev; +Cc: wenzhuo.lu

On 11/23/2017 10:33 PM, Jingjing Wu wrote:
> Enable Virtchnl offload Caps negotiation and RSS_PF offload
> to support AVF basic interface.
> 
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---
>  config/common_base             |   2 +-
>  drivers/net/i40e/i40e_ethdev.c |  64 +++++++++++++++----
>  drivers/net/i40e/i40e_ethdev.h |   4 ++
>  drivers/net/i40e/i40e_pf.c     | 137 +++++++++++++++++++++++++++++++++++++----
>  drivers/net/i40e/i40e_pf.h     |   6 ++

This is an i40e patch that enables AVF support, right?

I believe it would be better to separate the config change that enables the AVF
PMD by default from the i40e changes.

<...>

> @@ -3694,14 +3695,21 @@ i40e_get_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size)
>  		uint32_t *lut_dw = (uint32_t *)lut;
>  		uint16_t i, lut_size_dw = lut_size / 4;
>  
> -		for (i = 0; i < lut_size_dw; i++)
> -			lut_dw[i] = I40E_READ_REG(hw, I40E_PFQF_HLUT(i));
> +		if (vsi->type == I40E_VSI_SRIOV) {
> +			for (i = 0; i <= lut_size_dw; i++)
> +				reg = I40E_VFQF_HLUT1(i, vsi->user_param);
> +				lut_dw[i] = i40e_read_rx_ctl(hw, reg);

This assignment is outside the "for" loop.

> +		} else {
> +			for (i = 0; i < lut_size_dw; i++)
> +				lut_dw[i] = I40E_READ_REG(hw,
> +							  I40E_PFQF_HLUT(i));
> +		}
>  	}
>  
>  	return 0;
>  }
>  

<...>

> @@ -6754,8 +6784,20 @@ i40e_get_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t *key_len)
>  		uint32_t *key_dw = (uint32_t *)key;
>  		uint16_t i;
>  
> -		for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
> -			key_dw[i] = i40e_read_rx_ctl(hw, I40E_PFQF_HKEY(i));
> +		if (vsi->type == I40E_VSI_SRIOV) {
> +			for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++)
> +				reg = I40E_VFQF_HKEY1(i, vsi->user_param);
> +				key_dw[i] = i40e_read_rx_ctl(hw, reg);

This line is not part of the for loop, although that seems to be the intention;
all credit goes to the compiler for figuring this out.

> +				*key_len = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
> +					   sizeof(uint32_t);
> +
> +		} else {
> +			for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
> +				reg = I40E_PFQF_HKEY(i);
> +				key_dw[i] = i40e_read_rx_ctl(hw, reg);

Same problem here, the "for" loop scope is wrong.
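
A sketch of what the braced version could look like, based on the quoted hunk
(the same bracing applies to the LUT and key hunks above):

	if (vsi->type == I40E_VSI_SRIOV) {
		for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++) {
			reg = I40E_VFQF_HKEY1(i, vsi->user_param);
			key_dw[i] = i40e_read_rx_ctl(hw, reg);
		}
		*key_len = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
			   sizeof(uint32_t);
	} else {
		for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++) {
			reg = I40E_PFQF_HKEY(i);
			key_dw[i] = i40e_read_rx_ctl(hw, reg);
		}
	}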

<...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v2 04/14] net/avf: enable basic Rx Tx func
  2017-12-04 19:57     ` Ferruh Yigit
@ 2017-12-27  3:07       ` Wu, Jingjing
  0 siblings, 0 replies; 151+ messages in thread
From: Wu, Jingjing @ 2017-12-27  3:07 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Lu, Wenzhuo



> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Tuesday, December 5, 2017 3:58 AM
> To: Wu, Jingjing <jingjing.wu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 04/14] net/avf: enable basic Rx Tx func
> 
> On 11/23/2017 10:33 PM, Jingjing Wu wrote:
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> 
> <...>
> 
> > @@ -31,8 +31,8 @@
> >   *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> >   */
> >
> > -#ifndef _AVF_LOGS_H_
> > -#define _AVF_LOGS_H_
> > +#ifndef _AVF_LOG_H_
> > +#define _AVF_LOG_H_
> 
> Can you please squash this one with patch 1/14 ?
> 
Sure, will reorganize the patch set.

> <...>
> 
> > @@ -185,17 +227,13 @@ void avf_dump_tx_descriptor(const struct avf_tx_queue
> *txq,
> >  	       tx_desc->cmd_type_offset_bsz);
> >  }
> >
> > -#ifdef RTE_LIBRTE_AVF_RX_DUMP
> > +#ifdef DEBUG_DUMP_DESC
> >  #define AVF_DUMP_RX_DESC(rxq, desc, rx_id) \
> >  	avf_dump_rx_descriptor(rxq, desc, rx_id);
> > -#else
> > -#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
> > -#endif
> > -
> > -#ifdef RTE_LIBRTE_AVF_TX_DUMP
> >  #define AVF_DUMP_TX_DESC(txq, desc, tx_id) \
> >  	avf_dump_tx_descriptor(txq, desc, tx_id);
> >  #else
> > +#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
> >  #define AVF_DUMP_TX_DESC(txq, desc, tx_id) do { } while (0)
> >  #endif
> 
> Although we are trying to add as few compile-time config options as possible,
> having this controlled by a define in the Makefile and compiled out by default
> also does not seem like a good option. What do you think about converting it
> into another config option?
> 

The macros are defined for internal debugging and don't use rte_log as the log
option, so the define is just added in the Makefile.
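
If it were turned into a config option instead, a minimal sketch could look like
the following; RTE_LIBRTE_AVF_DEBUG_DUMP_DESC is an assumed name here, not
something from the patch, mirroring the other AVF debug options:

/* Sketch: assumes a hypothetical CONFIG_RTE_LIBRTE_AVF_DEBUG_DUMP_DESC=n entry
 * in config/common_base, which would define RTE_LIBRTE_AVF_DEBUG_DUMP_DESC at
 * build time when set to 'y'.
 */
#ifdef RTE_LIBRTE_AVF_DEBUG_DUMP_DESC
#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) \
	avf_dump_rx_descriptor(rxq, desc, rx_id);
#define AVF_DUMP_TX_DESC(txq, desc, tx_id) \
	avf_dump_tx_descriptor(txq, desc, tx_id);
#else
#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
#define AVF_DUMP_TX_DESC(txq, desc, tx_id) do { } while (0)
#endif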

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v2 05/14] net/avf: enable link status update
  2017-12-04 19:58     ` Ferruh Yigit
@ 2017-12-27  3:07       ` Wu, Jingjing
  0 siblings, 0 replies; 151+ messages in thread
From: Wu, Jingjing @ 2017-12-27  3:07 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Lu, Wenzhuo



> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Tuesday, December 5, 2017 3:58 AM
> To: Wu, Jingjing <jingjing.wu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 05/14] net/avf: enable link status update
> 
> On 11/23/2017 10:33 PM, Jingjing Wu wrote:
> > Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> > ---
> >  drivers/net/avf/avf.h        |  2 ++
> >  drivers/net/avf/avf_ethdev.c | 48
> ++++++++++++++++++++++++++++++++++++++++++++
> >  drivers/net/avf/avf_vchnl.c  | 38 ++++++++++++++++++++++++++++++++++-
> 
> Can you please update doc/guides/nics/features/avf.ini in each patch per feature
> added?
> 
> Same is valid for all further feature patches.
> 
OK, will do

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v3 00/15] add new avf PMD
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
                     ` (14 preceding siblings ...)
  2017-12-04 19:48   ` [dpdk-dev] [PATCH v2 00/14] add new avf PMD Ferruh Yigit
@ 2018-01-04  5:27   ` Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 01/15] net/avf/base: add base code for " Wenzhuo Lu
                       ` (14 more replies)
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
  16 siblings, 15 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-04  5:27 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

The Adaptive Virtual Function (AVF) driver is a VF driver
which supports all future Intel devices without requiring
a VM update.
It provides basic high-speed connectivity. And since this
happens to be an adaptive VF driver, every new drop of the
VF driver would add more and more advanced features that
can be turned on in the VM if the underlying HW device
supports them. Most importantly, this happens in a
device-agnostic way, without ever compromising the base
functionality. All of the AVF's interfaces need to follow
the AVF spec, and the AVF-compliant interface is supported
starting from the Intel® Ethernet Controller 710 Series.

This patch set adds AVF PMD support:
 - Device initialization
 - Queue setup and Device start
 - Basic Rx and Tx.
 - MAC address offload feature
 - Vlan offload feature
 - RSS offload feature
 - Vectored Rx and Tx func
 - Bulk allocate Rx func
 - Rx interrupt support
 - Statistics query

v3:
 - change the license announcement.
 - update the related document.
 - resolve checkpatch errors, warnings and checks.
 - handle the comments from the community.

v2:
 - rebase to 17.11
 - add vectored Rx and Tx func
 - add bulk allocate Rx func
 - add Rx interrupt support
 - add statistics query
 - fix coding style issue
 - remove extra compile flags in Makefile
 - add doc to list avf PMD features
 - fix lut setting when rss is disabled
 - fix log init missing
 - remove rx_descriptor_done

Jingjing Wu (13):
  net/avf/base: add base code for avf PMD
  net/avf: initialization of avf PMD
  net/avf: enable queue and device
  net/avf: enable link status update
  net/avf: support stats
  net/avf: enable ops for MAC VLAN offload
  net/avf: enable ops for RSS setting
  net/avf: enable ops for MTU setting
  net/avf: enable ops to check queue info and status
  net/i40e: support AVF basic interface
  net/avf: enable sse vector Rx Tx func
  net/avf: enable Rx interrupt support
  doc: update doc for avf driver

Wenzhuo Lu (2):
  net/avf: enable basic Rx Tx func
  net/avf: enable bulk allocate Rx func

 MAINTAINERS                             |    6 +
 config/common_base                      |   10 +
 doc/guides/nics/features/avf.ini        |   37 +
 doc/guides/nics/features/avf_vec.ini    |   37 +
 doc/guides/nics/intel_vf.rst            |   16 +-
 doc/guides/rel_notes/release_18_02.rst  |   16 +
 drivers/net/Makefile                    |    1 +
 drivers/net/avf/Makefile                |   36 +
 drivers/net/avf/avf.h                   |  219 +++
 drivers/net/avf/avf_ethdev.c            | 1451 ++++++++++++++++
 drivers/net/avf/avf_log.h               |   44 +
 drivers/net/avf/avf_rxtx.c              | 1959 +++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h              |  260 +++
 drivers/net/avf/avf_rxtx_vec_common.h   |  210 +++
 drivers/net/avf/avf_rxtx_vec_sse.c      |  656 ++++++++
 drivers/net/avf/avf_vchnl.c             |  812 +++++++++
 drivers/net/avf/base/avf_adminq.c       | 1002 +++++++++++
 drivers/net/avf/base/avf_adminq.h       |  169 ++
 drivers/net/avf/base/avf_adminq_cmd.h   | 2807 +++++++++++++++++++++++++++++++
 drivers/net/avf/base/avf_alloc.h        |   65 +
 drivers/net/avf/base/avf_common.c       | 1843 ++++++++++++++++++++
 drivers/net/avf/base/avf_devids.h       |   43 +
 drivers/net/avf/base/avf_hmc.h          |  245 +++
 drivers/net/avf/base/avf_lan_hmc.h      |  200 +++
 drivers/net/avf/base/avf_osdep.h        |  164 ++
 drivers/net/avf/base/avf_prototype.h    |  206 +++
 drivers/net/avf/base/avf_register.h     |  346 ++++
 drivers/net/avf/base/avf_status.h       |  107 ++
 drivers/net/avf/base/avf_type.h         | 1990 ++++++++++++++++++++++
 drivers/net/avf/base/virtchnl.h         |  786 +++++++++
 drivers/net/avf/rte_pmd_avf_version.map |    4 +
 drivers/net/i40e/i40e_ethdev.c          |   69 +-
 drivers/net/i40e/i40e_ethdev.h          |    5 +
 drivers/net/i40e/i40e_pf.c              |  140 +-
 drivers/net/i40e/i40e_pf.h              |    6 +
 mk/rte.app.mk                           |    1 +
 36 files changed, 15941 insertions(+), 27 deletions(-)
 create mode 100644 doc/guides/nics/features/avf.ini
 create mode 100644 doc/guides/nics/features/avf_vec.ini
 create mode 100644 drivers/net/avf/Makefile
 create mode 100644 drivers/net/avf/avf.h
 create mode 100644 drivers/net/avf/avf_ethdev.c
 create mode 100644 drivers/net/avf/avf_log.h
 create mode 100644 drivers/net/avf/avf_rxtx.c
 create mode 100644 drivers/net/avf/avf_rxtx.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_common.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_sse.c
 create mode 100644 drivers/net/avf/avf_vchnl.c
 create mode 100644 drivers/net/avf/base/avf_adminq.c
 create mode 100644 drivers/net/avf/base/avf_adminq.h
 create mode 100644 drivers/net/avf/base/avf_adminq_cmd.h
 create mode 100644 drivers/net/avf/base/avf_alloc.h
 create mode 100644 drivers/net/avf/base/avf_common.c
 create mode 100644 drivers/net/avf/base/avf_devids.h
 create mode 100644 drivers/net/avf/base/avf_hmc.h
 create mode 100644 drivers/net/avf/base/avf_lan_hmc.h
 create mode 100644 drivers/net/avf/base/avf_osdep.h
 create mode 100644 drivers/net/avf/base/avf_prototype.h
 create mode 100644 drivers/net/avf/base/avf_register.h
 create mode 100644 drivers/net/avf/base/avf_status.h
 create mode 100644 drivers/net/avf/base/avf_type.h
 create mode 100644 drivers/net/avf/base/virtchnl.h
 create mode 100644 drivers/net/avf/rte_pmd_avf_version.map

-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v3 01/15] net/avf/base: add base code for avf PMD
  2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
@ 2018-01-04  5:27     ` Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 02/15] net/avf: initialization of " Wenzhuo Lu
                       ` (13 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-04  5:27 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 MAINTAINERS                           |    5 +
 drivers/net/avf/avf_log.h             |   23 +
 drivers/net/avf/base/avf_adminq.c     | 1002 ++++++++++++
 drivers/net/avf/base/avf_adminq.h     |  169 ++
 drivers/net/avf/base/avf_adminq_cmd.h | 2807 +++++++++++++++++++++++++++++++++
 drivers/net/avf/base/avf_alloc.h      |   65 +
 drivers/net/avf/base/avf_common.c     | 1843 ++++++++++++++++++++++
 drivers/net/avf/base/avf_devids.h     |   43 +
 drivers/net/avf/base/avf_hmc.h        |  245 +++
 drivers/net/avf/base/avf_lan_hmc.h    |  200 +++
 drivers/net/avf/base/avf_osdep.h      |  164 ++
 drivers/net/avf/base/avf_prototype.h  |  206 +++
 drivers/net/avf/base/avf_register.h   |  346 ++++
 drivers/net/avf/base/avf_status.h     |  107 ++
 drivers/net/avf/base/avf_type.h       | 1990 +++++++++++++++++++++++
 drivers/net/avf/base/virtchnl.h       |  786 +++++++++
 16 files changed, 10001 insertions(+)
 create mode 100644 drivers/net/avf/avf_log.h
 create mode 100644 drivers/net/avf/base/avf_adminq.c
 create mode 100644 drivers/net/avf/base/avf_adminq.h
 create mode 100644 drivers/net/avf/base/avf_adminq_cmd.h
 create mode 100644 drivers/net/avf/base/avf_alloc.h
 create mode 100644 drivers/net/avf/base/avf_common.c
 create mode 100644 drivers/net/avf/base/avf_devids.h
 create mode 100644 drivers/net/avf/base/avf_hmc.h
 create mode 100644 drivers/net/avf/base/avf_lan_hmc.h
 create mode 100644 drivers/net/avf/base/avf_osdep.h
 create mode 100644 drivers/net/avf/base/avf_prototype.h
 create mode 100644 drivers/net/avf/base/avf_register.h
 create mode 100644 drivers/net/avf/base/avf_status.h
 create mode 100644 drivers/net/avf/base/avf_type.h
 create mode 100644 drivers/net/avf/base/virtchnl.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 9a2c2fb..b8b5e61 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -425,6 +425,11 @@ M: Xiao Wang <xiao.w.wang@intel.com>
 F: drivers/net/fm10k/
 F: doc/guides/nics/features/fm10k*.ini
 
+Intel avf
+M: Jingjing Wu <jingjing.wu@intel.com>
+M: Wenzhuo Lu <wenzhuo.lu@intel.com>
+F: drivers/net/avf/
+
 Mellanox mlx4
 M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
 F: drivers/net/mlx4/
diff --git a/drivers/net/avf/avf_log.h b/drivers/net/avf/avf_log.h
new file mode 100644
index 0000000..e3f106b
--- /dev/null
+++ b/drivers/net/avf/avf_log.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_LOG_H_
+#define _AVF_LOG_H_
+
+extern int avf_logtype_init;
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, avf_logtype_init, "%s(): " fmt "\n", \
+		__func__, ## args)
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+extern int avf_logtype_driver;
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, avf_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
+
+#endif /* _AVF_LOG_H_ */
diff --git a/drivers/net/avf/base/avf_adminq.c b/drivers/net/avf/base/avf_adminq.c
new file mode 100644
index 0000000..1e3aedc
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq.c
@@ -0,0 +1,1002 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "avf_status.h"
+#include "avf_type.h"
+#include "avf_register.h"
+#include "avf_adminq.h"
+#include "avf_prototype.h"
+
+/**
+ *  avf_adminq_init_regs - Initialize AdminQ registers
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the alloc_asq and alloc_arq functions have already been called
+ **/
+STATIC void avf_adminq_init_regs(struct avf_hw *hw)
+{
+	/* set head and tail registers in our local struct */
+	if (avf_is_vf(hw)) {
+		hw->aq.asq.tail = AVF_ATQT1;
+		hw->aq.asq.head = AVF_ATQH1;
+		hw->aq.asq.len  = AVF_ATQLEN1;
+		hw->aq.asq.bal  = AVF_ATQBAL1;
+		hw->aq.asq.bah  = AVF_ATQBAH1;
+		hw->aq.arq.tail = AVF_ARQT1;
+		hw->aq.arq.head = AVF_ARQH1;
+		hw->aq.arq.len  = AVF_ARQLEN1;
+		hw->aq.arq.bal  = AVF_ARQBAL1;
+		hw->aq.arq.bah  = AVF_ARQBAH1;
+	}
+}
+
+/**
+ *  avf_alloc_adminq_asq_ring - Allocate Admin Queue send rings
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_alloc_adminq_asq_ring(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	ret_code = avf_allocate_dma_mem(hw, &hw->aq.asq.desc_buf,
+					 avf_mem_atq_ring,
+					 (hw->aq.num_asq_entries *
+					 sizeof(struct avf_aq_desc)),
+					 AVF_ADMINQ_DESC_ALIGNMENT);
+	if (ret_code)
+		return ret_code;
+
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.asq.cmd_buf,
+					  (hw->aq.num_asq_entries *
+					  sizeof(struct avf_asq_cmd_details)));
+	if (ret_code) {
+		avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+		return ret_code;
+	}
+
+	return ret_code;
+}
+
+/**
+ *  avf_alloc_adminq_arq_ring - Allocate Admin Queue receive rings
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_alloc_adminq_arq_ring(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	ret_code = avf_allocate_dma_mem(hw, &hw->aq.arq.desc_buf,
+					 avf_mem_arq_ring,
+					 (hw->aq.num_arq_entries *
+					 sizeof(struct avf_aq_desc)),
+					 AVF_ADMINQ_DESC_ALIGNMENT);
+
+	return ret_code;
+}
+
+/**
+ *  avf_free_adminq_asq - Free Admin Queue send rings
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the posted send buffers have already been cleaned
+ *  and de-allocated
+ **/
+void avf_free_adminq_asq(struct avf_hw *hw)
+{
+	avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+}
+
+/**
+ *  avf_free_adminq_arq - Free Admin Queue receive rings
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the posted receive buffers have already been cleaned
+ *  and de-allocated
+ **/
+void avf_free_adminq_arq(struct avf_hw *hw)
+{
+	avf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+}
+
+/**
+ *  avf_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC enum avf_status_code avf_alloc_arq_bufs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+	struct avf_aq_desc *desc;
+	struct avf_dma_mem *bi;
+	int i;
+
+	/* We'll be allocating the buffer info memory first, then we can
+	 * allocate the mapped buffers for the event processing
+	 */
+
+	/* buffer_info structures do not need alignment */
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.arq.dma_head,
+		(hw->aq.num_arq_entries * sizeof(struct avf_dma_mem)));
+	if (ret_code)
+		goto alloc_arq_bufs;
+	hw->aq.arq.r.arq_bi = (struct avf_dma_mem *)hw->aq.arq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_arq_entries; i++) {
+		bi = &hw->aq.arq.r.arq_bi[i];
+		ret_code = avf_allocate_dma_mem(hw, bi,
+						 avf_mem_arq_buf,
+						 hw->aq.arq_buf_size,
+						 AVF_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_arq_bufs;
+
+		/* now configure the descriptors for use */
+		desc = AVF_ADMINQ_DESC(hw->aq.arq, i);
+
+		desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_BUF);
+		if (hw->aq.arq_buf_size > AVF_AQ_LARGE_BUF)
+			desc->flags |= CPU_TO_LE16(AVF_AQ_FLAG_LB);
+		desc->opcode = 0;
+		/* This is in accordance with Admin queue design, there is no
+		 * register for buffer size configuration
+		 */
+		desc->datalen = CPU_TO_LE16((u16)bi->size);
+		desc->retval = 0;
+		desc->cookie_high = 0;
+		desc->cookie_low = 0;
+		desc->params.external.addr_high =
+			CPU_TO_LE32(AVF_HI_DWORD(bi->pa));
+		desc->params.external.addr_low =
+			CPU_TO_LE32(AVF_LO_DWORD(bi->pa));
+		desc->params.external.param0 = 0;
+		desc->params.external.param1 = 0;
+	}
+
+alloc_arq_bufs:
+	return ret_code;
+
+unwind_alloc_arq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		avf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+	avf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ *  avf_alloc_asq_bufs - Allocate empty buffer structs for the send queue
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC enum avf_status_code avf_alloc_asq_bufs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+	struct avf_dma_mem *bi;
+	int i;
+
+	/* No mapped memory needed yet, just the buffer info structures */
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.asq.dma_head,
+		(hw->aq.num_asq_entries * sizeof(struct avf_dma_mem)));
+	if (ret_code)
+		goto alloc_asq_bufs;
+	hw->aq.asq.r.asq_bi = (struct avf_dma_mem *)hw->aq.asq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_asq_entries; i++) {
+		bi = &hw->aq.asq.r.asq_bi[i];
+		ret_code = avf_allocate_dma_mem(hw, bi,
+						 avf_mem_asq_buf,
+						 hw->aq.asq_buf_size,
+						 AVF_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_asq_bufs;
+	}
+alloc_asq_bufs:
+	return ret_code;
+
+unwind_alloc_asq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		avf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+	avf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ *  avf_free_arq_bufs - Free receive queue buffer info elements
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC void avf_free_arq_bufs(struct avf_hw *hw)
+{
+	int i;
+
+	/* free descriptors */
+	for (i = 0; i < hw->aq.num_arq_entries; i++)
+		avf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+
+	/* free the descriptor memory */
+	avf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+
+	/* free the dma header */
+	avf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+}
+
+/**
+ *  avf_free_asq_bufs - Free send queue buffer info elements
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC void avf_free_asq_bufs(struct avf_hw *hw)
+{
+	int i;
+
+	/* only unmap if the address is non-NULL */
+	for (i = 0; i < hw->aq.num_asq_entries; i++)
+		if (hw->aq.asq.r.asq_bi[i].pa)
+			avf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+
+	/* free the buffer info list */
+	avf_free_virt_mem(hw, &hw->aq.asq.cmd_buf);
+
+	/* free the descriptor memory */
+	avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+
+	/* free the dma header */
+	avf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+}
+
+/**
+ *  avf_config_asq_regs - configure ASQ registers
+ *  @hw: pointer to the hardware structure
+ *
+ *  Configure base address and length registers for the transmit queue
+ **/
+STATIC enum avf_status_code avf_config_asq_regs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+
+	/* set starting point */
+#ifdef INTEGRATED_VF
+	if (avf_is_vf(hw))
+		wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
+					  AVF_ATQLEN1_ATQENABLE_MASK));
+#else
+	wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
+				  AVF_ATQLEN1_ATQENABLE_MASK));
+#endif /* INTEGRATED_VF */
+	wr32(hw, hw->aq.asq.bal, AVF_LO_DWORD(hw->aq.asq.desc_buf.pa));
+	wr32(hw, hw->aq.asq.bah, AVF_HI_DWORD(hw->aq.asq.desc_buf.pa));
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.asq.bal);
+	if (reg != AVF_LO_DWORD(hw->aq.asq.desc_buf.pa))
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ *  avf_config_arq_regs - ARQ register configuration
+ *  @hw: pointer to the hardware structure
+ *
+ * Configure base address and length registers for the receive (event queue)
+ **/
+STATIC enum avf_status_code avf_config_arq_regs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+
+	/* set starting point */
+#ifdef INTEGRATED_VF
+	if (avf_is_vf(hw))
+		wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
+					  AVF_ARQLEN1_ARQENABLE_MASK));
+#else
+	wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
+				  AVF_ARQLEN1_ARQENABLE_MASK));
+#endif /* INTEGRATED_VF */
+	wr32(hw, hw->aq.arq.bal, AVF_LO_DWORD(hw->aq.arq.desc_buf.pa));
+	wr32(hw, hw->aq.arq.bah, AVF_HI_DWORD(hw->aq.arq.desc_buf.pa));
+
+	/* Update tail in the HW to post pre-allocated buffers */
+	wr32(hw, hw->aq.arq.tail, hw->aq.num_arq_entries - 1);
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.arq.bal);
+	if (reg != AVF_LO_DWORD(hw->aq.arq.desc_buf.pa))
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ *  avf_init_asq - main initialization routine for ASQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  This is the main initialization routine for the Admin Send Queue
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.arq_buf_size
+ *
+ *  Do *NOT* hold the lock when calling this as the memory allocation routines
+ *  called are not going to be atomic context safe
+ **/
+enum avf_status_code avf_init_asq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (hw->aq.asq.count > 0) {
+		/* queue already initialized */
+		ret_code = AVF_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.asq.next_to_use = 0;
+	hw->aq.asq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = avf_alloc_adminq_asq_ring(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = avf_alloc_asq_bufs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = avf_config_asq_regs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* success! */
+	hw->aq.asq.count = hw->aq.num_asq_entries;
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	avf_free_adminq_asq(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_init_arq - initialize ARQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main initialization routine for the Admin Receive (Event) Queue.
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.arq_buf_size
+ *
+ *  Do *NOT* hold the lock when calling this as the memory allocation routines
+ *  called are not going to be atomic context safe
+ **/
+enum avf_status_code avf_init_arq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (hw->aq.arq.count > 0) {
+		/* queue already initialized */
+		ret_code = AVF_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.arq.next_to_use = 0;
+	hw->aq.arq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = avf_alloc_adminq_arq_ring(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = avf_alloc_arq_bufs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = avf_config_arq_regs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* success! */
+	hw->aq.arq.count = hw->aq.num_arq_entries;
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	avf_free_adminq_arq(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_asq - shutdown the ASQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main shutdown routine for the Admin Send Queue
+ **/
+enum avf_status_code avf_shutdown_asq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	avf_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	if (hw->aq.asq.count == 0) {
+		ret_code = AVF_ERR_NOT_READY;
+		goto shutdown_asq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+	wr32(hw, hw->aq.asq.len, 0);
+	wr32(hw, hw->aq.asq.bal, 0);
+	wr32(hw, hw->aq.asq.bah, 0);
+
+	hw->aq.asq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	avf_free_asq_bufs(hw);
+
+shutdown_asq_out:
+	avf_release_spinlock(&hw->aq.asq_spinlock);
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_arq - shutdown ARQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main shutdown routine for the Admin Receive Queue
+ **/
+enum avf_status_code avf_shutdown_arq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	avf_acquire_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->aq.arq.count == 0) {
+		ret_code = AVF_ERR_NOT_READY;
+		goto shutdown_arq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+	wr32(hw, hw->aq.arq.len, 0);
+	wr32(hw, hw->aq.arq.bal, 0);
+	wr32(hw, hw->aq.arq.bah, 0);
+
+	hw->aq.arq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	avf_free_arq_bufs(hw);
+
+shutdown_arq_out:
+	avf_release_spinlock(&hw->aq.arq_spinlock);
+	return ret_code;
+}
+
+/**
+ *  avf_init_adminq - main initialization routine for Admin Queue
+ *  @hw: pointer to the hardware structure
+ *
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.num_arq_entries
+ *     - hw->aq.arq_buf_size
+ *     - hw->aq.asq_buf_size
+ **/
+enum avf_status_code avf_init_adminq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+	avf_init_spinlock(&hw->aq.asq_spinlock);
+	avf_init_spinlock(&hw->aq.arq_spinlock);
+
+	/* Set up register offsets */
+	avf_adminq_init_regs(hw);
+
+	/* setup ASQ command write back timeout */
+	hw->aq.asq_cmd_timeout = AVF_ASQ_CMD_TIMEOUT;
+
+	/* allocate the ASQ */
+	ret_code = avf_init_asq(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_destroy_spinlocks;
+
+	/* allocate the ARQ */
+	ret_code = avf_init_arq(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_asq;
+
+	ret_code = AVF_SUCCESS;
+
+	/* success! */
+	goto init_adminq_exit;
+
+init_adminq_free_asq:
+	avf_shutdown_asq(hw);
+init_adminq_destroy_spinlocks:
+	avf_destroy_spinlock(&hw->aq.asq_spinlock);
+	avf_destroy_spinlock(&hw->aq.arq_spinlock);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_adminq - shutdown routine for the Admin Queue
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_shutdown_adminq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (avf_check_asq_alive(hw))
+		avf_aq_queue_shutdown(hw, true);
+
+	avf_shutdown_asq(hw);
+	avf_shutdown_arq(hw);
+	avf_destroy_spinlock(&hw->aq.asq_spinlock);
+	avf_destroy_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->nvm_buff.va)
+		avf_free_virt_mem(hw, &hw->nvm_buff);
+
+	return ret_code;
+}
+
+/**
+ *  avf_clean_asq - cleans Admin send queue
+ *  @hw: pointer to the hardware structure
+ *
+ *  returns the number of free desc
+ **/
+u16 avf_clean_asq(struct avf_hw *hw)
+{
+	struct avf_adminq_ring *asq = &(hw->aq.asq);
+	struct avf_asq_cmd_details *details;
+	u16 ntc = asq->next_to_clean;
+	struct avf_aq_desc desc_cb;
+	struct avf_aq_desc *desc;
+
+	desc = AVF_ADMINQ_DESC(*asq, ntc);
+	details = AVF_ADMINQ_DETAILS(*asq, ntc);
+	while (rd32(hw, hw->aq.asq.head) != ntc) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "ntc %d head %d.\n", ntc, rd32(hw, hw->aq.asq.head));
+
+		if (details->callback) {
+			AVF_ADMINQ_CALLBACK cb_func =
+					(AVF_ADMINQ_CALLBACK)details->callback;
+			avf_memcpy(&desc_cb, desc, sizeof(struct avf_aq_desc),
+				    AVF_DMA_TO_DMA);
+			cb_func(hw, &desc_cb);
+		}
+		avf_memset(desc, 0, sizeof(*desc), AVF_DMA_MEM);
+		avf_memset(details, 0, sizeof(*details), AVF_NONDMA_MEM);
+		ntc++;
+		if (ntc == asq->count)
+			ntc = 0;
+		desc = AVF_ADMINQ_DESC(*asq, ntc);
+		details = AVF_ADMINQ_DETAILS(*asq, ntc);
+	}
+
+	asq->next_to_clean = ntc;
+
+	return AVF_DESC_UNUSED(asq);
+}
+
+/**
+ *  avf_asq_done - check if FW has processed the Admin Send Queue
+ *  @hw: pointer to the hw struct
+ *
+ *  Returns true if the firmware has processed all descriptors on the
+ *  admin send queue. Returns false if there are still requests pending.
+ **/
+bool avf_asq_done(struct avf_hw *hw)
+{
+	/* AQ designers suggest use of head for better
+	 * timing reliability than DD bit
+	 */
+	return rd32(hw, hw->aq.asq.head) == hw->aq.asq.next_to_use;
+
+}
+
+/**
+ *  avf_asq_send_command - send command to Admin Queue
+ *  @hw: pointer to the hw struct
+ *  @desc: prefilled descriptor describing the command (non DMA mem)
+ *  @buff: buffer to use for indirect commands
+ *  @buff_size: size of buffer for indirect commands
+ *  @cmd_details: pointer to command details structure
+ *
+ *  This is the main send command driver routine for the Admin Queue send
+ *  queue.  It runs the queue, cleans the queue, etc
+ **/
+enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
+				struct avf_aq_desc *desc,
+				void *buff, /* can be NULL */
+				u16  buff_size,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_dma_mem *dma_buff = NULL;
+	struct avf_asq_cmd_details *details;
+	struct avf_aq_desc *desc_on_ring;
+	bool cmd_completed = false;
+	u16  retval = 0;
+	u32  val = 0;
+
+	avf_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	hw->aq.asq_last_status = AVF_AQ_RC_OK;
+
+	if (hw->aq.asq.count == 0) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Admin queue not initialized.\n");
+		status = AVF_ERR_QUEUE_EMPTY;
+		goto asq_send_command_error;
+	}
+
+	val = rd32(hw, hw->aq.asq.head);
+	if (val >= hw->aq.num_asq_entries) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: head overrun at %d\n", val);
+		status = AVF_ERR_QUEUE_EMPTY;
+		goto asq_send_command_error;
+	}
+
+	details = AVF_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use);
+	if (cmd_details) {
+		avf_memcpy(details,
+			    cmd_details,
+			    sizeof(struct avf_asq_cmd_details),
+			    AVF_NONDMA_TO_NONDMA);
+
+		/* If the cmd_details are defined copy the cookie.  The
+		 * CPU_TO_LE32 is not needed here because the data is ignored
+		 * by the FW, only used by the driver
+		 */
+		if (details->cookie) {
+			desc->cookie_high =
+				CPU_TO_LE32(AVF_HI_DWORD(details->cookie));
+			desc->cookie_low =
+				CPU_TO_LE32(AVF_LO_DWORD(details->cookie));
+		}
+	} else {
+		avf_memset(details, 0,
+			    sizeof(struct avf_asq_cmd_details),
+			    AVF_NONDMA_MEM);
+	}
+
+	/* clear requested flags and then set additional flags if defined */
+	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
+	desc->flags |= CPU_TO_LE16(details->flags_ena);
+
+	if (buff_size > hw->aq.asq_buf_size) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Invalid buffer size: %d.\n",
+			   buff_size);
+		status = AVF_ERR_INVALID_SIZE;
+		goto asq_send_command_error;
+	}
+
+	if (details->postpone && !details->async) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Async flag not set along with postpone flag");
+		status = AVF_ERR_PARAM;
+		goto asq_send_command_error;
+	}
+
+	/* call clean and check queue available function to reclaim the
+	 * descriptors that were processed by FW, the function returns the
+	 * number of desc available
+	 */
+	/* the clean function called here could be called in a separate thread
+	 * in case of asynchronous completions
+	 */
+	if (avf_clean_asq(hw) == 0) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Error queue is full.\n");
+		status = AVF_ERR_ADMIN_QUEUE_FULL;
+		goto asq_send_command_error;
+	}
+
+	/* initialize the temp desc pointer with the right desc */
+	desc_on_ring = AVF_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use);
+
+	/* if the desc is available copy the temp desc to the right place */
+	avf_memcpy(desc_on_ring, desc, sizeof(struct avf_aq_desc),
+		    AVF_NONDMA_TO_DMA);
+
+	/* if buff is not NULL assume indirect command */
+	if (buff != NULL) {
+		dma_buff = &(hw->aq.asq.r.asq_bi[hw->aq.asq.next_to_use]);
+		/* copy the user buff into the respective DMA buff */
+		avf_memcpy(dma_buff->va, buff, buff_size,
+			    AVF_NONDMA_TO_DMA);
+		desc_on_ring->datalen = CPU_TO_LE16(buff_size);
+
+		/* Update the address values in the desc with the pa value
+		 * for respective buffer
+		 */
+		desc_on_ring->params.external.addr_high =
+				CPU_TO_LE32(AVF_HI_DWORD(dma_buff->pa));
+		desc_on_ring->params.external.addr_low =
+				CPU_TO_LE32(AVF_LO_DWORD(dma_buff->pa));
+	}
+
+	/* bump the tail */
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc_on_ring,
+		      buff, buff_size);
+	(hw->aq.asq.next_to_use)++;
+	if (hw->aq.asq.next_to_use == hw->aq.asq.count)
+		hw->aq.asq.next_to_use = 0;
+	if (!details->postpone)
+		wr32(hw, hw->aq.asq.tail, hw->aq.asq.next_to_use);
+
+	/* if cmd_details are not defined or async flag is not set,
+	 * we need to wait for desc write back
+	 */
+	if (!details->async && !details->postpone) {
+		u32 total_delay = 0;
+
+		do {
+			/* AQ designers suggest use of head for better
+			 * timing reliability than DD bit
+			 */
+			if (avf_asq_done(hw))
+				break;
+			avf_usec_delay(50);
+			total_delay += 50;
+		} while (total_delay < hw->aq.asq_cmd_timeout);
+	}
+
+	/* if ready, copy the desc back to temp */
+	if (avf_asq_done(hw)) {
+		avf_memcpy(desc, desc_on_ring, sizeof(struct avf_aq_desc),
+			    AVF_DMA_TO_NONDMA);
+		if (buff != NULL)
+			avf_memcpy(buff, dma_buff->va, buff_size,
+				    AVF_DMA_TO_NONDMA);
+		retval = LE16_TO_CPU(desc->retval);
+		if (retval != 0) {
+			avf_debug(hw,
+				   AVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: Command completed with error 0x%X.\n",
+				   retval);
+
+			/* strip off FW internal code */
+			retval &= 0xff;
+		}
+		cmd_completed = true;
+		if ((enum avf_admin_queue_err)retval == AVF_AQ_RC_OK)
+			status = AVF_SUCCESS;
+		else
+			status = AVF_ERR_ADMIN_QUEUE_ERROR;
+		hw->aq.asq_last_status = (enum avf_admin_queue_err)retval;
+	}
+
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+		   "AQTX: desc and buffer writeback:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc, buff, buff_size);
+
+	/* save writeback aq if requested */
+	if (details->wb_desc)
+		avf_memcpy(details->wb_desc, desc_on_ring,
+			    sizeof(struct avf_aq_desc), AVF_DMA_TO_NONDMA);
+
+	/* update the error if time out occurred */
+	if ((!cmd_completed) &&
+	    (!details->async && !details->postpone)) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Writeback timeout.\n");
+		status = AVF_ERR_ADMIN_QUEUE_TIMEOUT;
+	}
+
+asq_send_command_error:
+	avf_release_spinlock(&hw->aq.asq_spinlock);
+	return status;
+}
+
+/**
+ *  avf_fill_default_direct_cmd_desc - AQ descriptor helper function
+ *  @desc:     pointer to the temp descriptor (non DMA mem)
+ *  @opcode:   the opcode can be used to decide which flags to turn off or on
+ *
+ *  Fill the desc with default values
+ **/
+void avf_fill_default_direct_cmd_desc(struct avf_aq_desc *desc,
+				       u16 opcode)
+{
+	/* zero out the desc */
+	avf_memset((void *)desc, 0, sizeof(struct avf_aq_desc),
+		    AVF_NONDMA_MEM);
+	desc->opcode = CPU_TO_LE16(opcode);
+	desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_SI);
+}
+
+/**
+ *  avf_clean_arq_element
+ *  @hw: pointer to the hw struct
+ *  @e: event info from the receive descriptor, includes any buffers
+ *  @pending: number of events that could be left to process
+ *
+ *  This function cleans one Admin Receive Queue element and returns
+ *  the contents through e.  It can also return how many events are
+ *  left to process through 'pending'
+ **/
+enum avf_status_code avf_clean_arq_element(struct avf_hw *hw,
+					     struct avf_arq_event_info *e,
+					     u16 *pending)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u16 ntc = hw->aq.arq.next_to_clean;
+	struct avf_aq_desc *desc;
+	struct avf_dma_mem *bi;
+	u16 desc_idx;
+	u16 datalen;
+	u16 flags;
+	u16 ntu;
+
+	/* pre-clean the event info */
+	avf_memset(&e->desc, 0, sizeof(e->desc), AVF_NONDMA_MEM);
+
+	/* take the lock before we start messing with the ring */
+	avf_acquire_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->aq.arq.count == 0) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQRX: Admin queue not initialized.\n");
+		ret_code = AVF_ERR_QUEUE_EMPTY;
+		goto clean_arq_element_err;
+	}
+
+	/* set next_to_use to head */
+#ifdef INTEGRATED_VF
+	if (avf_is_vf(hw))
+		ntu = (rd32(hw, hw->aq.arq.head) & AVF_ARQH1_ARQH_MASK);
+#else
+	ntu = (rd32(hw, hw->aq.arq.head) & AVF_ARQH1_ARQH_MASK);
+#endif /* INTEGRATED_VF */
+	if (ntu == ntc) {
+		/* nothing to do - shouldn't need to update ring's values */
+		ret_code = AVF_ERR_ADMIN_QUEUE_NO_WORK;
+		goto clean_arq_element_out;
+	}
+
+	/* now clean the next descriptor */
+	desc = AVF_ADMINQ_DESC(hw->aq.arq, ntc);
+	desc_idx = ntc;
+
+	hw->aq.arq_last_status =
+		(enum avf_admin_queue_err)LE16_TO_CPU(desc->retval);
+	flags = LE16_TO_CPU(desc->flags);
+	if (flags & AVF_AQ_FLAG_ERR) {
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQRX: Event received with error 0x%X.\n",
+			   hw->aq.arq_last_status);
+	}
+
+	avf_memcpy(&e->desc, desc, sizeof(struct avf_aq_desc),
+		    AVF_DMA_TO_NONDMA);
+	datalen = LE16_TO_CPU(desc->datalen);
+	e->msg_len = min(datalen, e->buf_len);
+	if (e->msg_buf != NULL && (e->msg_len != 0))
+		avf_memcpy(e->msg_buf,
+			    hw->aq.arq.r.arq_bi[desc_idx].va,
+			    e->msg_len, AVF_DMA_TO_NONDMA);
+
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, "AQRX: desc and buffer:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf,
+		      hw->aq.arq_buf_size);
+
+	/* Restore the original datalen and buffer address in the desc,
+	 * FW updates datalen to indicate the event message
+	 * size
+	 */
+	bi = &hw->aq.arq.r.arq_bi[ntc];
+	avf_memset((void *)desc, 0, sizeof(struct avf_aq_desc), AVF_DMA_MEM);
+
+	desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_BUF);
+	if (hw->aq.arq_buf_size > AVF_AQ_LARGE_BUF)
+		desc->flags |= CPU_TO_LE16(AVF_AQ_FLAG_LB);
+	desc->datalen = CPU_TO_LE16((u16)bi->size);
+	desc->params.external.addr_high = CPU_TO_LE32(AVF_HI_DWORD(bi->pa));
+	desc->params.external.addr_low = CPU_TO_LE32(AVF_LO_DWORD(bi->pa));
+
+	/* set tail = the last cleaned desc index. */
+	wr32(hw, hw->aq.arq.tail, ntc);
+	/* ntc is updated to tail + 1 */
+	ntc++;
+	if (ntc == hw->aq.num_arq_entries)
+		ntc = 0;
+	hw->aq.arq.next_to_clean = ntc;
+	hw->aq.arq.next_to_use = ntu;
+
+clean_arq_element_out:
+	/* Set pending if needed, unlock and return */
+	if (pending != NULL)
+		*pending = (ntc > ntu ? hw->aq.arq.count : 0) + (ntu - ntc);
+clean_arq_element_err:
+	avf_release_spinlock(&hw->aq.arq_spinlock);
+
+	return ret_code;
+}
diff --git a/drivers/net/avf/base/avf_adminq.h b/drivers/net/avf/base/avf_adminq.h
new file mode 100644
index 0000000..be32fb2
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq.h
@@ -0,0 +1,169 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ADMINQ_H_
+#define _AVF_ADMINQ_H_
+
+#include "avf_osdep.h"
+#include "avf_status.h"
+#include "avf_adminq_cmd.h"
+
+#define AVF_ADMINQ_DESC(R, i)   \
+	(&(((struct avf_aq_desc *)((R).desc_buf.va))[i]))
+
+#define AVF_ADMINQ_DESC_ALIGNMENT 4096
+
+struct avf_adminq_ring {
+	struct avf_virt_mem dma_head;	/* space for dma structures */
+	struct avf_dma_mem desc_buf;	/* descriptor ring memory */
+	struct avf_virt_mem cmd_buf;	/* command buffer memory */
+
+	union {
+		struct avf_dma_mem *asq_bi;
+		struct avf_dma_mem *arq_bi;
+	} r;
+
+	u16 count;		/* Number of descriptors */
+	u16 rx_buf_len;		/* Admin Receive Queue buffer length */
+
+	/* used for interrupt processing */
+	u16 next_to_use;
+	u16 next_to_clean;
+
+	/* used for queue tracking */
+	u32 head;
+	u32 tail;
+	u32 len;
+	u32 bah;
+	u32 bal;
+};
+
+/* ASQ transaction details */
+struct avf_asq_cmd_details {
+	void *callback; /* cast from type AVF_ADMINQ_CALLBACK */
+	u64 cookie;
+	u16 flags_ena;
+	u16 flags_dis;
+	bool async;
+	bool postpone;
+	struct avf_aq_desc *wb_desc;
+};
+
+#define AVF_ADMINQ_DETAILS(R, i)   \
+	(&(((struct avf_asq_cmd_details *)((R).cmd_buf.va))[i]))
+
+/* ARQ event information */
+struct avf_arq_event_info {
+	struct avf_aq_desc desc;
+	u16 msg_len;
+	u16 buf_len;
+	u8 *msg_buf;
+};
+
+/* Admin Queue information */
+struct avf_adminq_info {
+	struct avf_adminq_ring arq;    /* receive queue */
+	struct avf_adminq_ring asq;    /* send queue */
+	u32 asq_cmd_timeout;            /* send queue cmd write back timeout*/
+	u16 num_arq_entries;            /* receive queue depth */
+	u16 num_asq_entries;            /* send queue depth */
+	u16 arq_buf_size;               /* receive queue buffer size */
+	u16 asq_buf_size;               /* send queue buffer size */
+	u16 fw_maj_ver;                 /* firmware major version */
+	u16 fw_min_ver;                 /* firmware minor version */
+	u32 fw_build;                   /* firmware build number */
+	u16 api_maj_ver;                /* api major version */
+	u16 api_min_ver;                /* api minor version */
+
+	struct avf_spinlock asq_spinlock; /* Send queue spinlock */
+	struct avf_spinlock arq_spinlock; /* Receive queue spinlock */
+
+	/* last status values on send and receive queues */
+	enum avf_admin_queue_err asq_last_status;
+	enum avf_admin_queue_err arq_last_status;
+};
+
+/**
+ * avf_aq_rc_to_posix - convert errors to user-land codes
+ * aq_ret: AdminQ handler error code can override aq_rc
+ * aq_rc: AdminQ firmware error code to convert
+ **/
+STATIC INLINE int avf_aq_rc_to_posix(int aq_ret, int aq_rc)
+{
+	int aq_to_posix[] = {
+		0,           /* AVF_AQ_RC_OK */
+		-EPERM,      /* AVF_AQ_RC_EPERM */
+		-ENOENT,     /* AVF_AQ_RC_ENOENT */
+		-ESRCH,      /* AVF_AQ_RC_ESRCH */
+		-EINTR,      /* AVF_AQ_RC_EINTR */
+		-EIO,        /* AVF_AQ_RC_EIO */
+		-ENXIO,      /* AVF_AQ_RC_ENXIO */
+		-E2BIG,      /* AVF_AQ_RC_E2BIG */
+		-EAGAIN,     /* AVF_AQ_RC_EAGAIN */
+		-ENOMEM,     /* AVF_AQ_RC_ENOMEM */
+		-EACCES,     /* AVF_AQ_RC_EACCES */
+		-EFAULT,     /* AVF_AQ_RC_EFAULT */
+		-EBUSY,      /* AVF_AQ_RC_EBUSY */
+		-EEXIST,     /* AVF_AQ_RC_EEXIST */
+		-EINVAL,     /* AVF_AQ_RC_EINVAL */
+		-ENOTTY,     /* AVF_AQ_RC_ENOTTY */
+		-ENOSPC,     /* AVF_AQ_RC_ENOSPC */
+		-ENOSYS,     /* AVF_AQ_RC_ENOSYS */
+		-ERANGE,     /* AVF_AQ_RC_ERANGE */
+		-EPIPE,      /* AVF_AQ_RC_EFLUSHED */
+		-ESPIPE,     /* AVF_AQ_RC_BAD_ADDR */
+		-EROFS,      /* AVF_AQ_RC_EMODE */
+		-EFBIG,      /* AVF_AQ_RC_EFBIG */
+	};
+
+	/* aq_rc is invalid if AQ timed out */
+	if (aq_ret == AVF_ERR_ADMIN_QUEUE_TIMEOUT)
+		return -EAGAIN;
+
+	if (!((u32)aq_rc < (sizeof(aq_to_posix) / sizeof((aq_to_posix)[0]))))
+		return -ERANGE;
+
+	return aq_to_posix[aq_rc];
+}
+
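+/*
+ * Illustrative use (not part of this patch): after an AdminQ send fails,
+ * a caller can translate the firmware return code into a POSIX errno, e.g.
+ *
+ *	if (status)
+ *		err = avf_aq_rc_to_posix(status, hw->aq.asq_last_status);
+ *
+ * assuming 'status' is the status code returned by the AdminQ send and
+ * 'hw->aq' is the struct avf_adminq_info declared above.
+ */
+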
+/* general information */
+#define AVF_AQ_LARGE_BUF	512
+#define AVF_ASQ_CMD_TIMEOUT	250000  /* usecs */
+#ifdef AVF_ESS_SUPPORT
+#define AVF_ASQ_CMD_TIMEOUT_ESS	50000000  /* usecs */
+#endif
+
+void avf_fill_default_direct_cmd_desc(struct avf_aq_desc *desc,
+				       u16 opcode);
+
+#endif /* _AVF_ADMINQ_H_ */
diff --git a/drivers/net/avf/base/avf_adminq_cmd.h b/drivers/net/avf/base/avf_adminq_cmd.h
new file mode 100644
index 0000000..5b9ed38
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq_cmd.h
@@ -0,0 +1,2807 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ADMINQ_CMD_H_
+#define _AVF_ADMINQ_CMD_H_
+
+/* This header file defines the avf Admin Queue commands and is shared between
+ * avf Firmware and Software.
+ *
+ * This file needs to comply with the Linux Kernel coding style.
+ */
+
+
+#define AVF_FW_API_VERSION_MAJOR	0x0001
+#define AVF_FW_API_VERSION_MINOR_X722	0x0005
+#define AVF_FW_API_VERSION_MINOR_X710	0x0007
+
+#define AVF_FW_MINOR_VERSION(_h) ((_h)->mac.type == AVF_MAC_XL710 ? \
+					AVF_FW_API_VERSION_MINOR_X710 : \
+					AVF_FW_API_VERSION_MINOR_X722)
+
+/* API version 1.7 implements additional link and PHY-specific APIs  */
+#define AVF_MINOR_VER_GET_LINK_INFO_XL710 0x0007
+
+struct avf_aq_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 retval;
+	__le32 cookie_high;
+	__le32 cookie_low;
+	union {
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 param2;
+			__le32 param3;
+		} internal;
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 addr_high;
+			__le32 addr_low;
+		} external;
+		u8 raw[16];
+	} params;
+};
+
+/* Flags sub-structure
+ * |0  |1  |2  |3  |4  |5  |6  |7  |8  |9  |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR|VFE| * *  RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
+ */
+
+/* command flags and offsets*/
+#define AVF_AQ_FLAG_DD_SHIFT	0
+#define AVF_AQ_FLAG_CMP_SHIFT	1
+#define AVF_AQ_FLAG_ERR_SHIFT	2
+#define AVF_AQ_FLAG_VFE_SHIFT	3
+#define AVF_AQ_FLAG_LB_SHIFT	9
+#define AVF_AQ_FLAG_RD_SHIFT	10
+#define AVF_AQ_FLAG_VFC_SHIFT	11
+#define AVF_AQ_FLAG_BUF_SHIFT	12
+#define AVF_AQ_FLAG_SI_SHIFT	13
+#define AVF_AQ_FLAG_EI_SHIFT	14
+#define AVF_AQ_FLAG_FE_SHIFT	15
+
+#define AVF_AQ_FLAG_DD		(1 << AVF_AQ_FLAG_DD_SHIFT)  /* 0x1    */
+#define AVF_AQ_FLAG_CMP	(1 << AVF_AQ_FLAG_CMP_SHIFT) /* 0x2    */
+#define AVF_AQ_FLAG_ERR	(1 << AVF_AQ_FLAG_ERR_SHIFT) /* 0x4    */
+#define AVF_AQ_FLAG_VFE	(1 << AVF_AQ_FLAG_VFE_SHIFT) /* 0x8    */
+#define AVF_AQ_FLAG_LB		(1 << AVF_AQ_FLAG_LB_SHIFT)  /* 0x200  */
+#define AVF_AQ_FLAG_RD		(1 << AVF_AQ_FLAG_RD_SHIFT)  /* 0x400  */
+#define AVF_AQ_FLAG_VFC	(1 << AVF_AQ_FLAG_VFC_SHIFT) /* 0x800  */
+#define AVF_AQ_FLAG_BUF	(1 << AVF_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
+#define AVF_AQ_FLAG_SI		(1 << AVF_AQ_FLAG_SI_SHIFT)  /* 0x2000 */
+#define AVF_AQ_FLAG_EI		(1 << AVF_AQ_FLAG_EI_SHIFT)  /* 0x4000 */
+#define AVF_AQ_FLAG_FE		(1 << AVF_AQ_FLAG_FE_SHIFT)  /* 0x8000 */
+
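+/*
+ * Illustrative use (not part of this patch): a command that attaches an
+ * external buffer marks the descriptor with the BUF flag, adding RD when
+ * the buffer carries data for the device to read, e.g.
+ *
+ *	desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF | AVF_AQ_FLAG_RD));
+ *
+ * where CPU_TO_LE16 is assumed to be the byte-order helper provided by
+ * avf_osdep.h.
+ */
+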
+/* error codes */
+enum avf_admin_queue_err {
+	AVF_AQ_RC_OK		= 0,  /* success */
+	AVF_AQ_RC_EPERM	= 1,  /* Operation not permitted */
+	AVF_AQ_RC_ENOENT	= 2,  /* No such element */
+	AVF_AQ_RC_ESRCH	= 3,  /* Bad opcode */
+	AVF_AQ_RC_EINTR	= 4,  /* operation interrupted */
+	AVF_AQ_RC_EIO		= 5,  /* I/O error */
+	AVF_AQ_RC_ENXIO	= 6,  /* No such resource */
+	AVF_AQ_RC_E2BIG	= 7,  /* Arg too long */
+	AVF_AQ_RC_EAGAIN	= 8,  /* Try again */
+	AVF_AQ_RC_ENOMEM	= 9,  /* Out of memory */
+	AVF_AQ_RC_EACCES	= 10, /* Permission denied */
+	AVF_AQ_RC_EFAULT	= 11, /* Bad address */
+	AVF_AQ_RC_EBUSY	= 12, /* Device or resource busy */
+	AVF_AQ_RC_EEXIST	= 13, /* object already exists */
+	AVF_AQ_RC_EINVAL	= 14, /* Invalid argument */
+	AVF_AQ_RC_ENOTTY	= 15, /* Not a typewriter */
+	AVF_AQ_RC_ENOSPC	= 16, /* No space left or alloc failure */
+	AVF_AQ_RC_ENOSYS	= 17, /* Function not implemented */
+	AVF_AQ_RC_ERANGE	= 18, /* Parameter out of range */
+	AVF_AQ_RC_EFLUSHED	= 19, /* Cmd flushed due to prev cmd error */
+	AVF_AQ_RC_BAD_ADDR	= 20, /* Descriptor contains a bad pointer */
+	AVF_AQ_RC_EMODE	= 21, /* Op not allowed in current dev mode */
+	AVF_AQ_RC_EFBIG	= 22, /* File too large */
+};
+
+/* Admin Queue command opcodes */
+enum avf_admin_queue_opc {
+	/* aq commands */
+	avf_aqc_opc_get_version	= 0x0001,
+	avf_aqc_opc_driver_version	= 0x0002,
+	avf_aqc_opc_queue_shutdown	= 0x0003,
+	avf_aqc_opc_set_pf_context	= 0x0004,
+
+	/* resource ownership */
+	avf_aqc_opc_request_resource	= 0x0008,
+	avf_aqc_opc_release_resource	= 0x0009,
+
+	avf_aqc_opc_list_func_capabilities	= 0x000A,
+	avf_aqc_opc_list_dev_capabilities	= 0x000B,
+
+	/* Proxy commands */
+	avf_aqc_opc_set_proxy_config		= 0x0104,
+	avf_aqc_opc_set_ns_proxy_table_entry	= 0x0105,
+
+	/* LAA */
+	avf_aqc_opc_mac_address_read	= 0x0107,
+	avf_aqc_opc_mac_address_write	= 0x0108,
+
+	/* PXE */
+	avf_aqc_opc_clear_pxe_mode	= 0x0110,
+
+	/* WoL commands */
+	avf_aqc_opc_set_wol_filter	= 0x0120,
+	avf_aqc_opc_get_wake_reason	= 0x0121,
+	avf_aqc_opc_clear_all_wol_filters = 0x025E,
+
+	/* internal switch commands */
+	avf_aqc_opc_get_switch_config		= 0x0200,
+	avf_aqc_opc_add_statistics		= 0x0201,
+	avf_aqc_opc_remove_statistics		= 0x0202,
+	avf_aqc_opc_set_port_parameters	= 0x0203,
+	avf_aqc_opc_get_switch_resource_alloc	= 0x0204,
+	avf_aqc_opc_set_switch_config		= 0x0205,
+	avf_aqc_opc_rx_ctl_reg_read		= 0x0206,
+	avf_aqc_opc_rx_ctl_reg_write		= 0x0207,
+
+	avf_aqc_opc_add_vsi			= 0x0210,
+	avf_aqc_opc_update_vsi_parameters	= 0x0211,
+	avf_aqc_opc_get_vsi_parameters		= 0x0212,
+
+	avf_aqc_opc_add_pv			= 0x0220,
+	avf_aqc_opc_update_pv_parameters	= 0x0221,
+	avf_aqc_opc_get_pv_parameters		= 0x0222,
+
+	avf_aqc_opc_add_veb			= 0x0230,
+	avf_aqc_opc_update_veb_parameters	= 0x0231,
+	avf_aqc_opc_get_veb_parameters		= 0x0232,
+
+	avf_aqc_opc_delete_element		= 0x0243,
+
+	avf_aqc_opc_add_macvlan		= 0x0250,
+	avf_aqc_opc_remove_macvlan		= 0x0251,
+	avf_aqc_opc_add_vlan			= 0x0252,
+	avf_aqc_opc_remove_vlan		= 0x0253,
+	avf_aqc_opc_set_vsi_promiscuous_modes	= 0x0254,
+	avf_aqc_opc_add_tag			= 0x0255,
+	avf_aqc_opc_remove_tag			= 0x0256,
+	avf_aqc_opc_add_multicast_etag		= 0x0257,
+	avf_aqc_opc_remove_multicast_etag	= 0x0258,
+	avf_aqc_opc_update_tag			= 0x0259,
+	avf_aqc_opc_add_control_packet_filter	= 0x025A,
+	avf_aqc_opc_remove_control_packet_filter	= 0x025B,
+	avf_aqc_opc_add_cloud_filters		= 0x025C,
+	avf_aqc_opc_remove_cloud_filters	= 0x025D,
+	avf_aqc_opc_clear_wol_switch_filters	= 0x025E,
+	avf_aqc_opc_replace_cloud_filters	= 0x025F,
+
+	avf_aqc_opc_add_mirror_rule	= 0x0260,
+	avf_aqc_opc_delete_mirror_rule	= 0x0261,
+
+	/* Dynamic Device Personalization */
+	avf_aqc_opc_write_personalization_profile	= 0x0270,
+	avf_aqc_opc_get_personalization_profile_list	= 0x0271,
+
+	/* DCB commands */
+	avf_aqc_opc_dcb_ignore_pfc	= 0x0301,
+	avf_aqc_opc_dcb_updated	= 0x0302,
+
+	/* TX scheduler */
+	avf_aqc_opc_configure_vsi_bw_limit		= 0x0400,
+	avf_aqc_opc_configure_vsi_ets_sla_bw_limit	= 0x0406,
+	avf_aqc_opc_configure_vsi_tc_bw		= 0x0407,
+	avf_aqc_opc_query_vsi_bw_config		= 0x0408,
+	avf_aqc_opc_query_vsi_ets_sla_config		= 0x040A,
+	avf_aqc_opc_configure_switching_comp_bw_limit	= 0x0410,
+
+	avf_aqc_opc_enable_switching_comp_ets			= 0x0413,
+	avf_aqc_opc_modify_switching_comp_ets			= 0x0414,
+	avf_aqc_opc_disable_switching_comp_ets			= 0x0415,
+	avf_aqc_opc_configure_switching_comp_ets_bw_limit	= 0x0416,
+	avf_aqc_opc_configure_switching_comp_bw_config		= 0x0417,
+	avf_aqc_opc_query_switching_comp_ets_config		= 0x0418,
+	avf_aqc_opc_query_port_ets_config			= 0x0419,
+	avf_aqc_opc_query_switching_comp_bw_config		= 0x041A,
+	avf_aqc_opc_suspend_port_tx				= 0x041B,
+	avf_aqc_opc_resume_port_tx				= 0x041C,
+	avf_aqc_opc_configure_partition_bw			= 0x041D,
+	/* hmc */
+	avf_aqc_opc_query_hmc_resource_profile	= 0x0500,
+	avf_aqc_opc_set_hmc_resource_profile	= 0x0501,
+
+	/* phy commands */
+	avf_aqc_opc_get_phy_abilities		= 0x0600,
+	avf_aqc_opc_set_phy_config		= 0x0601,
+	avf_aqc_opc_set_mac_config		= 0x0603,
+	avf_aqc_opc_set_link_restart_an	= 0x0605,
+	avf_aqc_opc_get_link_status		= 0x0607,
+	avf_aqc_opc_set_phy_int_mask		= 0x0613,
+	avf_aqc_opc_get_local_advt_reg		= 0x0614,
+	avf_aqc_opc_set_local_advt_reg		= 0x0615,
+	avf_aqc_opc_get_partner_advt		= 0x0616,
+	avf_aqc_opc_set_lb_modes		= 0x0618,
+	avf_aqc_opc_get_phy_wol_caps		= 0x0621,
+	avf_aqc_opc_set_phy_debug		= 0x0622,
+	avf_aqc_opc_upload_ext_phy_fm		= 0x0625,
+	avf_aqc_opc_run_phy_activity		= 0x0626,
+	avf_aqc_opc_set_phy_register		= 0x0628,
+	avf_aqc_opc_get_phy_register		= 0x0629,
+
+	/* NVM commands */
+	avf_aqc_opc_nvm_read			= 0x0701,
+	avf_aqc_opc_nvm_erase			= 0x0702,
+	avf_aqc_opc_nvm_update			= 0x0703,
+	avf_aqc_opc_nvm_config_read		= 0x0704,
+	avf_aqc_opc_nvm_config_write		= 0x0705,
+	avf_aqc_opc_oem_post_update		= 0x0720,
+	avf_aqc_opc_thermal_sensor		= 0x0721,
+
+	/* virtualization commands */
+	avf_aqc_opc_send_msg_to_pf		= 0x0801,
+	avf_aqc_opc_send_msg_to_vf		= 0x0802,
+	avf_aqc_opc_send_msg_to_peer		= 0x0803,
+
+	/* alternate structure */
+	avf_aqc_opc_alternate_write		= 0x0900,
+	avf_aqc_opc_alternate_write_indirect	= 0x0901,
+	avf_aqc_opc_alternate_read		= 0x0902,
+	avf_aqc_opc_alternate_read_indirect	= 0x0903,
+	avf_aqc_opc_alternate_write_done	= 0x0904,
+	avf_aqc_opc_alternate_set_mode		= 0x0905,
+	avf_aqc_opc_alternate_clear_port	= 0x0906,
+
+	/* LLDP commands */
+	avf_aqc_opc_lldp_get_mib	= 0x0A00,
+	avf_aqc_opc_lldp_update_mib	= 0x0A01,
+	avf_aqc_opc_lldp_add_tlv	= 0x0A02,
+	avf_aqc_opc_lldp_update_tlv	= 0x0A03,
+	avf_aqc_opc_lldp_delete_tlv	= 0x0A04,
+	avf_aqc_opc_lldp_stop		= 0x0A05,
+	avf_aqc_opc_lldp_start		= 0x0A06,
+	avf_aqc_opc_get_cee_dcb_cfg	= 0x0A07,
+	avf_aqc_opc_lldp_set_local_mib	= 0x0A08,
+	avf_aqc_opc_lldp_stop_start_spec_agent	= 0x0A09,
+
+	/* Tunnel commands */
+	avf_aqc_opc_add_udp_tunnel	= 0x0B00,
+	avf_aqc_opc_del_udp_tunnel	= 0x0B01,
+	avf_aqc_opc_set_rss_key	= 0x0B02,
+	avf_aqc_opc_set_rss_lut	= 0x0B03,
+	avf_aqc_opc_get_rss_key	= 0x0B04,
+	avf_aqc_opc_get_rss_lut	= 0x0B05,
+
+	/* Async Events */
+	avf_aqc_opc_event_lan_overflow		= 0x1001,
+
+	/* OEM commands */
+	avf_aqc_opc_oem_parameter_change	= 0xFE00,
+	avf_aqc_opc_oem_device_status_change	= 0xFE01,
+	avf_aqc_opc_oem_ocsd_initialize	= 0xFE02,
+	avf_aqc_opc_oem_ocbb_initialize	= 0xFE03,
+
+	/* debug commands */
+	avf_aqc_opc_debug_read_reg		= 0xFF03,
+	avf_aqc_opc_debug_write_reg		= 0xFF04,
+	avf_aqc_opc_debug_modify_reg		= 0xFF07,
+	avf_aqc_opc_debug_dump_internals	= 0xFF08,
+};
+
+/* command structures and indirect data structures */
+
+/* Structure naming conventions:
+ * - no suffix for direct command descriptor structures
+ * - _data for indirect sent data
+ * - _resp for indirect return data (data which is both sent and returned
+ *   will use _data)
+ * - _completion for direct return data
+ * - _element_ for repeated elements (may also be _data or _resp)
+ *
+ * Command structures are expected to overlay the params.raw member of the basic
+ * descriptor, and as such cannot exceed 16 bytes in length.
+ */
+
+/* This macro is used to generate a compilation error if a structure
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure is not of the correct size, otherwise it creates an enum that is
+ * never used.
+ */
+#define AVF_CHECK_STRUCT_LEN(n, X) enum avf_static_assert_enum_##X \
+	{ avf_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
+
+/* This macro is used extensively to ensure that command structures are 16
+ * bytes in length as they have to map to the raw array of that size.
+ */
+#define AVF_CHECK_CMD_LENGTH(X)	AVF_CHECK_STRUCT_LEN(16, X)
+
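+/*
+ * Illustrative expansion (not part of the generated code): for a 16-byte
+ * command such as struct avf_aqc_get_version below,
+ * AVF_CHECK_CMD_LENGTH(avf_aqc_get_version) expands to
+ *
+ *	enum avf_static_assert_enum_avf_aqc_get_version {
+ *		avf_static_assert_avf_aqc_get_version =
+ *			16 / ((sizeof(struct avf_aqc_get_version) == 16) ? 1 : 0)
+ *	};
+ *
+ * so any size mismatch produces a divide-by-zero error at compile time.
+ */
+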
+/* internal (0x00XX) commands */
+
+/* Get version (direct 0x0001) */
+struct avf_aqc_get_version {
+	__le32 rom_ver;
+	__le32 fw_build;
+	__le16 fw_major;
+	__le16 fw_minor;
+	__le16 api_major;
+	__le16 api_minor;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_version);
+
+/* Send driver version (indirect 0x0002) */
+struct avf_aqc_driver_version {
+	u8	driver_major_ver;
+	u8	driver_minor_ver;
+	u8	driver_build_ver;
+	u8	driver_subbuild_ver;
+	u8	reserved[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_driver_version);
+
+/* Queue Shutdown (direct 0x0003) */
+struct avf_aqc_queue_shutdown {
+	__le32	driver_unloading;
+#define AVF_AQ_DRIVER_UNLOADING	0x1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_queue_shutdown);
+
+/* Set PF context (0x0004, direct) */
+struct avf_aqc_set_pf_context {
+	u8	pf_id;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_pf_context);
+
+/* Request resource ownership (direct 0x0008)
+ * Release resource ownership (direct 0x0009)
+ */
+#define AVF_AQ_RESOURCE_NVM			1
+#define AVF_AQ_RESOURCE_SDP			2
+#define AVF_AQ_RESOURCE_ACCESS_READ		1
+#define AVF_AQ_RESOURCE_ACCESS_WRITE		2
+#define AVF_AQ_RESOURCE_NVM_READ_TIMEOUT	3000
+#define AVF_AQ_RESOURCE_NVM_WRITE_TIMEOUT	180000
+
+struct avf_aqc_request_resource {
+	__le16	resource_id;
+	__le16	access_type;
+	__le32	timeout;
+	__le32	resource_number;
+	u8	reserved[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_request_resource);
+
+/* Get function capabilities (indirect 0x000A)
+ * Get device capabilities (indirect 0x000B)
+ */
+struct avf_aqc_list_capabilites {
+	u8 command_flags;
+#define AVF_AQ_LIST_CAP_PF_INDEX_EN	1
+	u8 pf_index;
+	u8 reserved[2];
+	__le32 count;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_list_capabilites);
+
+struct avf_aqc_list_capabilities_element_resp {
+	__le16	id;
+	u8	major_rev;
+	u8	minor_rev;
+	__le32	number;
+	__le32	logical_id;
+	__le32	phys_id;
+	u8	reserved[16];
+};
+
+/* list of caps */
+
+#define AVF_AQ_CAP_ID_SWITCH_MODE	0x0001
+#define AVF_AQ_CAP_ID_MNG_MODE		0x0002
+#define AVF_AQ_CAP_ID_NPAR_ACTIVE	0x0003
+#define AVF_AQ_CAP_ID_OS2BMC_CAP	0x0004
+#define AVF_AQ_CAP_ID_FUNCTIONS_VALID	0x0005
+#define AVF_AQ_CAP_ID_ALTERNATE_RAM	0x0006
+#define AVF_AQ_CAP_ID_WOL_AND_PROXY	0x0008
+#define AVF_AQ_CAP_ID_SRIOV		0x0012
+#define AVF_AQ_CAP_ID_VF		0x0013
+#define AVF_AQ_CAP_ID_VMDQ		0x0014
+#define AVF_AQ_CAP_ID_8021QBG		0x0015
+#define AVF_AQ_CAP_ID_8021QBR		0x0016
+#define AVF_AQ_CAP_ID_VSI		0x0017
+#define AVF_AQ_CAP_ID_DCB		0x0018
+#define AVF_AQ_CAP_ID_FCOE		0x0021
+#define AVF_AQ_CAP_ID_ISCSI		0x0022
+#define AVF_AQ_CAP_ID_RSS		0x0040
+#define AVF_AQ_CAP_ID_RXQ		0x0041
+#define AVF_AQ_CAP_ID_TXQ		0x0042
+#define AVF_AQ_CAP_ID_MSIX		0x0043
+#define AVF_AQ_CAP_ID_VF_MSIX		0x0044
+#define AVF_AQ_CAP_ID_FLOW_DIRECTOR	0x0045
+#define AVF_AQ_CAP_ID_1588		0x0046
+#define AVF_AQ_CAP_ID_IWARP		0x0051
+#define AVF_AQ_CAP_ID_LED		0x0061
+#define AVF_AQ_CAP_ID_SDP		0x0062
+#define AVF_AQ_CAP_ID_MDIO		0x0063
+#define AVF_AQ_CAP_ID_WSR_PROT		0x0064
+#define AVF_AQ_CAP_ID_NVM_MGMT		0x0080
+#define AVF_AQ_CAP_ID_FLEX10		0x00F1
+#define AVF_AQ_CAP_ID_CEM		0x00F2
+
+/* Set CPPM Configuration (direct 0x0103) */
+struct avf_aqc_cppm_configuration {
+	__le16	command_flags;
+#define AVF_AQ_CPPM_EN_LTRC	0x0800
+#define AVF_AQ_CPPM_EN_DMCTH	0x1000
+#define AVF_AQ_CPPM_EN_DMCTLX	0x2000
+#define AVF_AQ_CPPM_EN_HPTC	0x4000
+#define AVF_AQ_CPPM_EN_DMARC	0x8000
+	__le16	ttlx;
+	__le32	dmacr;
+	__le16	dmcth;
+	u8	hptc;
+	u8	reserved;
+	__le32	pfltrc;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_cppm_configuration);
+
+/* Set ARP Proxy command / response (indirect 0x0104) */
+struct avf_aqc_arp_proxy_data {
+	__le16	command_flags;
+#define AVF_AQ_ARP_INIT_IPV4	0x0800
+#define AVF_AQ_ARP_UNSUP_CTL	0x1000
+#define AVF_AQ_ARP_ENA		0x2000
+#define AVF_AQ_ARP_ADD_IPV4	0x4000
+#define AVF_AQ_ARP_DEL_IPV4	0x8000
+	__le16	table_id;
+	__le32	enabled_offloads;
+#define AVF_AQ_ARP_DIRECTED_OFFLOAD_ENABLE	0x00000020
+#define AVF_AQ_ARP_OFFLOAD_ENABLE		0x00000800
+	__le32	ip_addr;
+	u8	mac_addr[6];
+	u8	reserved[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x14, avf_aqc_arp_proxy_data);
+
+/* Set NS Proxy Table Entry Command (indirect 0x0105) */
+struct avf_aqc_ns_proxy_data {
+	__le16	table_idx_mac_addr_0;
+	__le16	table_idx_mac_addr_1;
+	__le16	table_idx_ipv6_0;
+	__le16	table_idx_ipv6_1;
+	__le16	control;
+#define AVF_AQ_NS_PROXY_ADD_0		0x0001
+#define AVF_AQ_NS_PROXY_DEL_0		0x0002
+#define AVF_AQ_NS_PROXY_ADD_1		0x0004
+#define AVF_AQ_NS_PROXY_DEL_1		0x0008
+#define AVF_AQ_NS_PROXY_ADD_IPV6_0	0x0010
+#define AVF_AQ_NS_PROXY_DEL_IPV6_0	0x0020
+#define AVF_AQ_NS_PROXY_ADD_IPV6_1	0x0040
+#define AVF_AQ_NS_PROXY_DEL_IPV6_1	0x0080
+#define AVF_AQ_NS_PROXY_COMMAND_SEQ	0x0100
+#define AVF_AQ_NS_PROXY_INIT_IPV6_TBL	0x0200
+#define AVF_AQ_NS_PROXY_INIT_MAC_TBL	0x0400
+#define AVF_AQ_NS_PROXY_OFFLOAD_ENABLE	0x0800
+#define AVF_AQ_NS_PROXY_DIRECTED_OFFLOAD_ENABLE	0x1000
+	u8	mac_addr_0[6];
+	u8	mac_addr_1[6];
+	u8	local_mac_addr[6];
+	u8	ipv6_addr_0[16]; /* Warning! spec specifies BE byte order */
+	u8	ipv6_addr_1[16];
+};
+
+AVF_CHECK_STRUCT_LEN(0x3c, avf_aqc_ns_proxy_data);
+
+/* Manage LAA Command (0x0106) - obsolete */
+struct avf_aqc_mng_laa {
+	__le16	command_flags;
+#define AVF_AQ_LAA_FLAG_WR	0x8000
+	u8	reserved[2];
+	__le32	sal;
+	__le16	sah;
+	u8	reserved2[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mng_laa);
+
+/* Manage MAC Address Read Command (indirect 0x0107) */
+struct avf_aqc_mac_address_read {
+	__le16	command_flags;
+#define AVF_AQC_LAN_ADDR_VALID		0x10
+#define AVF_AQC_SAN_ADDR_VALID		0x20
+#define AVF_AQC_PORT_ADDR_VALID	0x40
+#define AVF_AQC_WOL_ADDR_VALID		0x80
+#define AVF_AQC_MC_MAG_EN_VALID	0x100
+#define AVF_AQC_WOL_PRESERVE_STATUS	0x200
+#define AVF_AQC_ADDR_VALID_MASK	0x3F0
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mac_address_read);
+
+struct avf_aqc_mac_address_read_data {
+	u8 pf_lan_mac[6];
+	u8 pf_san_mac[6];
+	u8 port_mac[6];
+	u8 pf_wol_mac[6];
+};
+
+AVF_CHECK_STRUCT_LEN(24, avf_aqc_mac_address_read_data);
+
+/* Manage MAC Address Write Command (0x0108) */
+struct avf_aqc_mac_address_write {
+	__le16	command_flags;
+#define AVF_AQC_MC_MAG_EN		0x0100
+#define AVF_AQC_WOL_PRESERVE_ON_PFR	0x0200
+#define AVF_AQC_WRITE_TYPE_LAA_ONLY	0x0000
+#define AVF_AQC_WRITE_TYPE_LAA_WOL	0x4000
+#define AVF_AQC_WRITE_TYPE_PORT	0x8000
+#define AVF_AQC_WRITE_TYPE_UPDATE_MC_MAG	0xC000
+#define AVF_AQC_WRITE_TYPE_MASK	0xC000
+
+	__le16	mac_sah;
+	__le32	mac_sal;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mac_address_write);
+
+/* PXE commands (0x011x) */
+
+/* Clear PXE Command and response  (direct 0x0110) */
+struct avf_aqc_clear_pxe {
+	u8	rx_cnt;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_clear_pxe);
+
+/* Set WoL Filter (0x0120) */
+
+struct avf_aqc_set_wol_filter {
+	__le16 filter_index;
+#define AVF_AQC_MAX_NUM_WOL_FILTERS	8
+#define AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT	15
+#define AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_MASK	(0x1 << \
+		AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT)
+
+#define AVF_AQC_SET_WOL_FILTER_INDEX_SHIFT		0
+#define AVF_AQC_SET_WOL_FILTER_INDEX_MASK	(0x7 << \
+		AVF_AQC_SET_WOL_FILTER_INDEX_SHIFT)
+	__le16 cmd_flags;
+#define AVF_AQC_SET_WOL_FILTER				0x8000
+#define AVF_AQC_SET_WOL_FILTER_NO_TCO_WOL		0x4000
+#define AVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR	0x2000
+#define AVF_AQC_SET_WOL_FILTER_ACTION_CLEAR		0
+#define AVF_AQC_SET_WOL_FILTER_ACTION_SET		1
+	__le16 valid_flags;
+#define AVF_AQC_SET_WOL_FILTER_ACTION_VALID		0x8000
+#define AVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID	0x4000
+	u8 reserved[2];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_wol_filter);
+
+struct avf_aqc_set_wol_filter_data {
+	u8 filter[128];
+	u8 mask[16];
+};
+
+AVF_CHECK_STRUCT_LEN(0x90, avf_aqc_set_wol_filter_data);
+
+/* Get Wake Reason (0x0121) */
+
+struct avf_aqc_get_wake_reason_completion {
+	u8 reserved_1[2];
+	__le16 wake_reason;
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT	0
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_MASK (0xFF << \
+		AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT)
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT	8
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_MASK	(0xFF << \
+		AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT)
+	u8 reserved_2[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_wake_reason_completion);
+
+/* Switch configuration commands (0x02xx) */
+
+/* Used by many indirect commands that only pass an seid and a buffer in the
+ * command
+ */
+struct avf_aqc_switch_seid {
+	__le16	seid;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_switch_seid);
+
+/* Get Switch Configuration command (indirect 0x0200)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+struct avf_aqc_get_switch_config_header_resp {
+	__le16	num_reported;
+	__le16	num_total;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_switch_config_header_resp);
+
+struct avf_aqc_switch_config_element_resp {
+	u8	element_type;
+#define AVF_AQ_SW_ELEM_TYPE_MAC	1
+#define AVF_AQ_SW_ELEM_TYPE_PF		2
+#define AVF_AQ_SW_ELEM_TYPE_VF		3
+#define AVF_AQ_SW_ELEM_TYPE_EMP	4
+#define AVF_AQ_SW_ELEM_TYPE_BMC	5
+#define AVF_AQ_SW_ELEM_TYPE_PV		16
+#define AVF_AQ_SW_ELEM_TYPE_VEB	17
+#define AVF_AQ_SW_ELEM_TYPE_PA		18
+#define AVF_AQ_SW_ELEM_TYPE_VSI	19
+	u8	revision;
+#define AVF_AQ_SW_ELEM_REV_1		1
+	__le16	seid;
+	__le16	uplink_seid;
+	__le16	downlink_seid;
+	u8	reserved[3];
+	u8	connection_type;
+#define AVF_AQ_CONN_TYPE_REGULAR	0x1
+#define AVF_AQ_CONN_TYPE_DEFAULT	0x2
+#define AVF_AQ_CONN_TYPE_CASCADED	0x3
+	__le16	scheduler_id;
+	__le16	element_info;
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_switch_config_element_resp);
+
+/* Get Switch Configuration (indirect 0x0200)
+ *    an array of elements is returned in the response buffer;
+ *    the first in the array is the header, the remainder are elements
+ */
+struct avf_aqc_get_switch_config_resp {
+	struct avf_aqc_get_switch_config_header_resp	header;
+	struct avf_aqc_switch_config_element_resp	element[1];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_get_switch_config_resp);
+
+/* Add Statistics (direct 0x0201)
+ * Remove Statistics (direct 0x0202)
+ */
+struct avf_aqc_add_remove_statistics {
+	__le16	seid;
+	__le16	vlan;
+	__le16	stat_index;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_statistics);
+
+/* Set Port Parameters command (direct 0x0203) */
+struct avf_aqc_set_port_parameters {
+	__le16	command_flags;
+#define AVF_AQ_SET_P_PARAMS_SAVE_BAD_PACKETS	1
+#define AVF_AQ_SET_P_PARAMS_PAD_SHORT_PACKETS	2 /* must set! */
+#define AVF_AQ_SET_P_PARAMS_DOUBLE_VLAN_ENA	4
+	__le16	bad_frame_vsi;
+#define AVF_AQ_SET_P_PARAMS_BFRAME_SEID_SHIFT	0x0
+#define AVF_AQ_SET_P_PARAMS_BFRAME_SEID_MASK	0x3FF
+	__le16	default_seid;        /* reserved for command */
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_port_parameters);
+
+/* Get Switch Resource Allocation (indirect 0x0204) */
+struct avf_aqc_get_switch_resource_alloc {
+	u8	num_entries;         /* reserved for command */
+	u8	reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_switch_resource_alloc);
+
+/* expect an array of these structs in the response buffer */
+struct avf_aqc_switch_resource_alloc_element_resp {
+	u8	resource_type;
+#define AVF_AQ_RESOURCE_TYPE_VEB		0x0
+#define AVF_AQ_RESOURCE_TYPE_VSI		0x1
+#define AVF_AQ_RESOURCE_TYPE_MACADDR		0x2
+#define AVF_AQ_RESOURCE_TYPE_STAG		0x3
+#define AVF_AQ_RESOURCE_TYPE_ETAG		0x4
+#define AVF_AQ_RESOURCE_TYPE_MULTICAST_HASH	0x5
+#define AVF_AQ_RESOURCE_TYPE_UNICAST_HASH	0x6
+#define AVF_AQ_RESOURCE_TYPE_VLAN		0x7
+#define AVF_AQ_RESOURCE_TYPE_VSI_LIST_ENTRY	0x8
+#define AVF_AQ_RESOURCE_TYPE_ETAG_LIST_ENTRY	0x9
+#define AVF_AQ_RESOURCE_TYPE_VLAN_STAT_POOL	0xA
+#define AVF_AQ_RESOURCE_TYPE_MIRROR_RULE	0xB
+#define AVF_AQ_RESOURCE_TYPE_QUEUE_SETS	0xC
+#define AVF_AQ_RESOURCE_TYPE_VLAN_FILTERS	0xD
+#define AVF_AQ_RESOURCE_TYPE_INNER_MAC_FILTERS	0xF
+#define AVF_AQ_RESOURCE_TYPE_IP_FILTERS	0x10
+#define AVF_AQ_RESOURCE_TYPE_GRE_VN_KEYS	0x11
+#define AVF_AQ_RESOURCE_TYPE_VN2_KEYS		0x12
+#define AVF_AQ_RESOURCE_TYPE_TUNNEL_PORTS	0x13
+	u8	reserved1;
+	__le16	guaranteed;
+	__le16	total;
+	__le16	used;
+	__le16	total_unalloced;
+	u8	reserved2[6];
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_switch_resource_alloc_element_resp);
+
+/* Set Switch Configuration (direct 0x0205) */
+struct avf_aqc_set_switch_config {
+	__le16	flags;
+/* flags used for both fields below */
+#define AVF_AQ_SET_SWITCH_CFG_PROMISC		0x0001
+#define AVF_AQ_SET_SWITCH_CFG_L2_FILTER	0x0002
+#define AVF_AQ_SET_SWITCH_CFG_HW_ATR_EVICT	0x0004
+	__le16	valid_flags;
+	/* The ethertype in switch_tag is dropped on ingress and used
+	 * internally by the switch. Set this to zero for the default
+	 * of 0x88a8 (802.1ad). Should be zero for firmware API
+	 * versions lower than 1.7.
+	 */
+	__le16	switch_tag;
+	/* The ethertypes in first_tag and second_tag are used to
+	 * match the outer and inner VLAN tags (respectively) when HW
+	 * double VLAN tagging is enabled via the set port parameters
+	 * AQ command. Otherwise these are both ignored. Set them to
+	 * zero for their defaults of 0x8100 (802.1Q). Should be zero
+	 * for firmware API versions lower than 1.7.
+	 */
+	__le16	first_tag;
+	__le16	second_tag;
+	u8	reserved[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_switch_config);
+
+/* Read Receive control registers  (direct 0x0206)
+ * Write Receive control registers (direct 0x0207)
+ *     used for accessing Rx control registers that can be
+ *     slow and need special handling when under high Rx load
+ */
+struct avf_aqc_rx_ctl_reg_read_write {
+	__le32 reserved1;
+	__le32 address;
+	__le32 reserved2;
+	__le32 value;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_rx_ctl_reg_read_write);
+
+/* Add VSI (indirect 0x0210)
+ *    this indirect command uses struct avf_aqc_vsi_properties_data
+ *    as the indirect buffer (128 bytes)
+ *
+ * Update VSI (indirect 0x211)
+ *     uses the same data structure as Add VSI
+ *
+ * Get VSI (indirect 0x0212)
+ *     uses the same completion and data structure as Add VSI
+ */
+struct avf_aqc_add_get_update_vsi {
+	__le16	uplink_seid;
+	u8	connection_type;
+#define AVF_AQ_VSI_CONN_TYPE_NORMAL	0x1
+#define AVF_AQ_VSI_CONN_TYPE_DEFAULT	0x2
+#define AVF_AQ_VSI_CONN_TYPE_CASCADED	0x3
+	u8	reserved1;
+	u8	vf_id;
+	u8	reserved2;
+	__le16	vsi_flags;
+#define AVF_AQ_VSI_TYPE_SHIFT		0x0
+#define AVF_AQ_VSI_TYPE_MASK		(0x3 << AVF_AQ_VSI_TYPE_SHIFT)
+#define AVF_AQ_VSI_TYPE_VF		0x0
+#define AVF_AQ_VSI_TYPE_VMDQ2		0x1
+#define AVF_AQ_VSI_TYPE_PF		0x2
+#define AVF_AQ_VSI_TYPE_EMP_MNG	0x3
+#define AVF_AQ_VSI_FLAG_CASCADED_PV	0x4
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_get_update_vsi);
+
+struct avf_aqc_add_get_update_vsi_completion {
+	__le16 seid;
+	__le16 vsi_number;
+	__le16 vsi_used;
+	__le16 vsi_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_get_update_vsi_completion);
+
+struct avf_aqc_vsi_properties_data {
+	/* first 96 bytes are written by SW */
+	__le16	valid_sections;
+#define AVF_AQ_VSI_PROP_SWITCH_VALID		0x0001
+#define AVF_AQ_VSI_PROP_SECURITY_VALID		0x0002
+#define AVF_AQ_VSI_PROP_VLAN_VALID		0x0004
+#define AVF_AQ_VSI_PROP_CAS_PV_VALID		0x0008
+#define AVF_AQ_VSI_PROP_INGRESS_UP_VALID	0x0010
+#define AVF_AQ_VSI_PROP_EGRESS_UP_VALID	0x0020
+#define AVF_AQ_VSI_PROP_QUEUE_MAP_VALID	0x0040
+#define AVF_AQ_VSI_PROP_QUEUE_OPT_VALID	0x0080
+#define AVF_AQ_VSI_PROP_OUTER_UP_VALID		0x0100
+#define AVF_AQ_VSI_PROP_SCHED_VALID		0x0200
+	/* switch section */
+	__le16	switch_id; /* 12bit id combined with flags below */
+#define AVF_AQ_VSI_SW_ID_SHIFT		0x0000
+#define AVF_AQ_VSI_SW_ID_MASK		(0xFFF << AVF_AQ_VSI_SW_ID_SHIFT)
+#define AVF_AQ_VSI_SW_ID_FLAG_NOT_STAG	0x1000
+#define AVF_AQ_VSI_SW_ID_FLAG_ALLOW_LB	0x2000
+#define AVF_AQ_VSI_SW_ID_FLAG_LOCAL_LB	0x4000
+	u8	sw_reserved[2];
+	/* security section */
+	u8	sec_flags;
+#define AVF_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD	0x01
+#define AVF_AQ_VSI_SEC_FLAG_ENABLE_VLAN_CHK	0x02
+#define AVF_AQ_VSI_SEC_FLAG_ENABLE_MAC_CHK	0x04
+	u8	sec_reserved;
+	/* VLAN section */
+	__le16	pvid; /* VLANS include priority bits */
+	__le16	fcoe_pvid;
+	u8	port_vlan_flags;
+#define AVF_AQ_VSI_PVLAN_MODE_SHIFT	0x00
+#define AVF_AQ_VSI_PVLAN_MODE_MASK	(0x03 << \
+					 AVF_AQ_VSI_PVLAN_MODE_SHIFT)
+#define AVF_AQ_VSI_PVLAN_MODE_TAGGED	0x01
+#define AVF_AQ_VSI_PVLAN_MODE_UNTAGGED	0x02
+#define AVF_AQ_VSI_PVLAN_MODE_ALL	0x03
+#define AVF_AQ_VSI_PVLAN_INSERT_PVID	0x04
+#define AVF_AQ_VSI_PVLAN_EMOD_SHIFT	0x03
+#define AVF_AQ_VSI_PVLAN_EMOD_MASK	(0x3 << \
+					 AVF_AQ_VSI_PVLAN_EMOD_SHIFT)
+#define AVF_AQ_VSI_PVLAN_EMOD_STR_BOTH	0x0
+#define AVF_AQ_VSI_PVLAN_EMOD_STR_UP	0x08
+#define AVF_AQ_VSI_PVLAN_EMOD_STR	0x10
+#define AVF_AQ_VSI_PVLAN_EMOD_NOTHING	0x18
+	u8	pvlan_reserved[3];
+	/* ingress egress up sections */
+	__le32	ingress_table; /* bitmap, 3 bits per up */
+#define AVF_AQ_VSI_UP_TABLE_UP0_SHIFT	0
+#define AVF_AQ_VSI_UP_TABLE_UP0_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP0_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP1_SHIFT	3
+#define AVF_AQ_VSI_UP_TABLE_UP1_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP1_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP2_SHIFT	6
+#define AVF_AQ_VSI_UP_TABLE_UP2_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP2_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP3_SHIFT	9
+#define AVF_AQ_VSI_UP_TABLE_UP3_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP3_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP4_SHIFT	12
+#define AVF_AQ_VSI_UP_TABLE_UP4_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP4_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP5_SHIFT	15
+#define AVF_AQ_VSI_UP_TABLE_UP5_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP5_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP6_SHIFT	18
+#define AVF_AQ_VSI_UP_TABLE_UP6_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP6_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP7_SHIFT	21
+#define AVF_AQ_VSI_UP_TABLE_UP7_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP7_SHIFT)
+	__le32	egress_table;   /* same defines as for ingress table */
+	/* cascaded PV section */
+	__le16	cas_pv_tag;
+	u8	cas_pv_flags;
+#define AVF_AQ_VSI_CAS_PV_TAGX_SHIFT		0x00
+#define AVF_AQ_VSI_CAS_PV_TAGX_MASK		(0x03 << \
+						 AVF_AQ_VSI_CAS_PV_TAGX_SHIFT)
+#define AVF_AQ_VSI_CAS_PV_TAGX_LEAVE		0x00
+#define AVF_AQ_VSI_CAS_PV_TAGX_REMOVE		0x01
+#define AVF_AQ_VSI_CAS_PV_TAGX_COPY		0x02
+#define AVF_AQ_VSI_CAS_PV_INSERT_TAG		0x10
+#define AVF_AQ_VSI_CAS_PV_ETAG_PRUNE		0x20
+#define AVF_AQ_VSI_CAS_PV_ACCEPT_HOST_TAG	0x40
+	u8	cas_pv_reserved;
+	/* queue mapping section */
+	__le16	mapping_flags;
+#define AVF_AQ_VSI_QUE_MAP_CONTIG	0x0
+#define AVF_AQ_VSI_QUE_MAP_NONCONTIG	0x1
+	__le16	queue_mapping[16];
+#define AVF_AQ_VSI_QUEUE_SHIFT		0x0
+#define AVF_AQ_VSI_QUEUE_MASK		(0x7FF << AVF_AQ_VSI_QUEUE_SHIFT)
+	__le16	tc_mapping[8];
+#define AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT	0
+#define AVF_AQ_VSI_TC_QUE_OFFSET_MASK	(0x1FF << \
+					 AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT)
+#define AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT	9
+#define AVF_AQ_VSI_TC_QUE_NUMBER_MASK	(0x7 << \
+					 AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT)
+	/* queueing option section */
+	u8	queueing_opt_flags;
+#define AVF_AQ_VSI_QUE_OPT_MULTICAST_UDP_ENA	0x04
+#define AVF_AQ_VSI_QUE_OPT_UNICAST_UDP_ENA	0x08
+#define AVF_AQ_VSI_QUE_OPT_TCP_ENA	0x10
+#define AVF_AQ_VSI_QUE_OPT_FCOE_ENA	0x20
+#define AVF_AQ_VSI_QUE_OPT_RSS_LUT_PF	0x00
+#define AVF_AQ_VSI_QUE_OPT_RSS_LUT_VSI	0x40
+	u8	queueing_opt_reserved[3];
+	/* scheduler section */
+	u8	up_enable_bits;
+	u8	sched_reserved;
+	/* outer up section */
+	__le32	outer_up_table; /* same structure and defines as ingress tbl */
+	u8	cmd_reserved[8];
+	/* last 32 bytes are written by FW */
+	__le16	qs_handle[8];
+#define AVF_AQ_VSI_QS_HANDLE_INVALID	0xFFFF
+	__le16	stat_counter_idx;
+	__le16	sched_id;
+	u8	resp_reserved[12];
+};
+
+AVF_CHECK_STRUCT_LEN(128, avf_aqc_vsi_properties_data);
+
+/* Add Port Virtualizer (direct 0x0220)
+ * also used for update PV (direct 0x0221) but only flags are used
+ * (IS_CTRL_PORT only works on add PV)
+ */
+struct avf_aqc_add_update_pv {
+	__le16	command_flags;
+#define AVF_AQC_PV_FLAG_PV_TYPE		0x1
+#define AVF_AQC_PV_FLAG_FWD_UNKNOWN_STAG_EN	0x2
+#define AVF_AQC_PV_FLAG_FWD_UNKNOWN_ETAG_EN	0x4
+#define AVF_AQC_PV_FLAG_IS_CTRL_PORT		0x8
+	__le16	uplink_seid;
+	__le16	connected_seid;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_update_pv);
+
+struct avf_aqc_add_update_pv_completion {
+	/* reserved for update; for add also encodes error if rc == ENOSPC */
+	__le16	pv_seid;
+#define AVF_AQC_PV_ERR_FLAG_NO_PV	0x1
+#define AVF_AQC_PV_ERR_FLAG_NO_SCHED	0x2
+#define AVF_AQC_PV_ERR_FLAG_NO_COUNTER	0x4
+#define AVF_AQC_PV_ERR_FLAG_NO_ENTRY	0x8
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_update_pv_completion);
+
+/* Get PV Params (direct 0x0222)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+
+struct avf_aqc_get_pv_params_completion {
+	__le16	seid;
+	__le16	default_stag;
+	__le16	pv_flags; /* same flags as add_pv */
+#define AVF_AQC_GET_PV_PV_TYPE			0x1
+#define AVF_AQC_GET_PV_FRWD_UNKNOWN_STAG	0x2
+#define AVF_AQC_GET_PV_FRWD_UNKNOWN_ETAG	0x4
+	u8	reserved[8];
+	__le16	default_port_seid;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_pv_params_completion);
+
+/* Add VEB (direct 0x0230) */
+struct avf_aqc_add_veb {
+	__le16	uplink_seid;
+	__le16	downlink_seid;
+	__le16	veb_flags;
+#define AVF_AQC_ADD_VEB_FLOATING		0x1
+#define AVF_AQC_ADD_VEB_PORT_TYPE_SHIFT	1
+#define AVF_AQC_ADD_VEB_PORT_TYPE_MASK		(0x3 << \
+					AVF_AQC_ADD_VEB_PORT_TYPE_SHIFT)
+#define AVF_AQC_ADD_VEB_PORT_TYPE_DEFAULT	0x2
+#define AVF_AQC_ADD_VEB_PORT_TYPE_DATA		0x4
+#define AVF_AQC_ADD_VEB_ENABLE_L2_FILTER	0x8     /* deprecated */
+#define AVF_AQC_ADD_VEB_ENABLE_DISABLE_STATS	0x10
+	u8	enable_tcs;
+	u8	reserved[9];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_veb);
+
+struct avf_aqc_add_veb_completion {
+	u8	reserved[6];
+	__le16	switch_seid;
+	/* also encodes error if rc == ENOSPC; codes are the same as add_pv */
+	__le16	veb_seid;
+#define AVF_AQC_VEB_ERR_FLAG_NO_VEB		0x1
+#define AVF_AQC_VEB_ERR_FLAG_NO_SCHED		0x2
+#define AVF_AQC_VEB_ERR_FLAG_NO_COUNTER	0x4
+#define AVF_AQC_VEB_ERR_FLAG_NO_ENTRY		0x8
+	__le16	statistic_index;
+	__le16	vebs_used;
+	__le16	vebs_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_veb_completion);
+
+/* Get VEB Parameters (direct 0x0232)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+struct avf_aqc_get_veb_parameters_completion {
+	__le16	seid;
+	__le16	switch_id;
+	__le16	veb_flags; /* only the first/last flags from 0x0230 are valid */
+	__le16	statistic_index;
+	__le16	vebs_used;
+	__le16	vebs_free;
+	u8	reserved[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_veb_parameters_completion);
+
+/* Delete Element (direct 0x0243)
+ * uses the generic avf_aqc_switch_seid
+ */
+
+/* Add MAC-VLAN (indirect 0x0250) */
+
+/* used for the command for most vlan commands */
+struct avf_aqc_macvlan {
+	__le16	num_addresses;
+	__le16	seid[3];
+#define AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_MACVLAN_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT)
+#define AVF_AQC_MACVLAN_CMD_SEID_VALID		0x8000
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_macvlan);
+
+/* indirect data for command and response */
+struct avf_aqc_add_macvlan_element_data {
+	u8	mac_addr[6];
+	__le16	vlan_tag;
+	__le16	flags;
+#define AVF_AQC_MACVLAN_ADD_PERFECT_MATCH	0x0001
+#define AVF_AQC_MACVLAN_ADD_HASH_MATCH		0x0002
+#define AVF_AQC_MACVLAN_ADD_IGNORE_VLAN	0x0004
+#define AVF_AQC_MACVLAN_ADD_TO_QUEUE		0x0008
+#define AVF_AQC_MACVLAN_ADD_USE_SHARED_MAC	0x0010
+	__le16	queue_number;
+#define AVF_AQC_MACVLAN_CMD_QUEUE_SHIFT	0
+#define AVF_AQC_MACVLAN_CMD_QUEUE_MASK		(0x7FF << \
+					AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT)
+	/* response section */
+	u8	match_method;
+#define AVF_AQC_MM_PERFECT_MATCH	0x01
+#define AVF_AQC_MM_HASH_MATCH		0x02
+#define AVF_AQC_MM_ERR_NO_RES		0xFF
+	u8	reserved1[3];
+};
+
+struct avf_aqc_add_remove_macvlan_completion {
+	__le16 perfect_mac_used;
+	__le16 perfect_mac_free;
+	__le16 unicast_hash_free;
+	__le16 multicast_hash_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_macvlan_completion);
+
+/* Remove MAC-VLAN (indirect 0x0251)
+ * uses avf_aqc_macvlan for the descriptor
+ * data points to an array of num_addresses of elements
+ */
+
+struct avf_aqc_remove_macvlan_element_data {
+	u8	mac_addr[6];
+	__le16	vlan_tag;
+	u8	flags;
+#define AVF_AQC_MACVLAN_DEL_PERFECT_MATCH	0x01
+#define AVF_AQC_MACVLAN_DEL_HASH_MATCH		0x02
+#define AVF_AQC_MACVLAN_DEL_IGNORE_VLAN	0x08
+#define AVF_AQC_MACVLAN_DEL_ALL_VSIS		0x10
+	u8	reserved[3];
+	/* reply section */
+	u8	error_code;
+#define AVF_AQC_REMOVE_MACVLAN_SUCCESS		0x0
+#define AVF_AQC_REMOVE_MACVLAN_FAIL		0xFF
+	u8	reply_reserved[3];
+};
+
+/* Add VLAN (indirect 0x0252)
+ * Remove VLAN (indirect 0x0253)
+ * use the generic avf_aqc_macvlan for the command
+ */
+struct avf_aqc_add_remove_vlan_element_data {
+	__le16	vlan_tag;
+	u8	vlan_flags;
+/* flags for add VLAN */
+#define AVF_AQC_ADD_VLAN_LOCAL			0x1
+#define AVF_AQC_ADD_PVLAN_TYPE_SHIFT		1
+#define AVF_AQC_ADD_PVLAN_TYPE_MASK	(0x3 << AVF_AQC_ADD_PVLAN_TYPE_SHIFT)
+#define AVF_AQC_ADD_PVLAN_TYPE_REGULAR		0x0
+#define AVF_AQC_ADD_PVLAN_TYPE_PRIMARY		0x2
+#define AVF_AQC_ADD_PVLAN_TYPE_SECONDARY	0x4
+#define AVF_AQC_VLAN_PTYPE_SHIFT		3
+#define AVF_AQC_VLAN_PTYPE_MASK	(0x3 << AVF_AQC_VLAN_PTYPE_SHIFT)
+#define AVF_AQC_VLAN_PTYPE_REGULAR_VSI		0x0
+#define AVF_AQC_VLAN_PTYPE_PROMISC_VSI		0x8
+#define AVF_AQC_VLAN_PTYPE_COMMUNITY_VSI	0x10
+#define AVF_AQC_VLAN_PTYPE_ISOLATED_VSI	0x18
+/* flags for remove VLAN */
+#define AVF_AQC_REMOVE_VLAN_ALL	0x1
+	u8	reserved;
+	u8	result;
+/* flags for add VLAN */
+#define AVF_AQC_ADD_VLAN_SUCCESS	0x0
+#define AVF_AQC_ADD_VLAN_FAIL_REQUEST	0xFE
+#define AVF_AQC_ADD_VLAN_FAIL_RESOURCE	0xFF
+/* flags for remove VLAN */
+#define AVF_AQC_REMOVE_VLAN_SUCCESS	0x0
+#define AVF_AQC_REMOVE_VLAN_FAIL	0xFF
+	u8	reserved1[3];
+};
+
+struct avf_aqc_add_remove_vlan_completion {
+	u8	reserved[4];
+	__le16	vlans_used;
+	__le16	vlans_free;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+/* Set VSI Promiscuous Modes (direct 0x0254) */
+struct avf_aqc_set_vsi_promiscuous_modes {
+	__le16	promiscuous_flags;
+	__le16	valid_flags;
+/* flags used for both fields above */
+#define AVF_AQC_SET_VSI_PROMISC_UNICAST	0x01
+#define AVF_AQC_SET_VSI_PROMISC_MULTICAST	0x02
+#define AVF_AQC_SET_VSI_PROMISC_BROADCAST	0x04
+#define AVF_AQC_SET_VSI_DEFAULT		0x08
+#define AVF_AQC_SET_VSI_PROMISC_VLAN		0x10
+#define AVF_AQC_SET_VSI_PROMISC_TX		0x8000
+	__le16	seid;
+#define AVF_AQC_VSI_PROM_CMD_SEID_MASK		0x3FF
+	__le16	vlan_tag;
+#define AVF_AQC_SET_VSI_VLAN_MASK		0x0FFF
+#define AVF_AQC_SET_VSI_VLAN_VALID		0x8000
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_vsi_promiscuous_modes);
+
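+/*
+ * Illustrative use (not part of this patch): promiscuous_flags and
+ * valid_flags share the bit layout above, so enabling only unicast
+ * promiscuous mode while leaving the other modes untouched typically sets
+ * the same bit in both fields, e.g.
+ *
+ *	cmd->promiscuous_flags = CPU_TO_LE16(AVF_AQC_SET_VSI_PROMISC_UNICAST);
+ *	cmd->valid_flags = CPU_TO_LE16(AVF_AQC_SET_VSI_PROMISC_UNICAST);
+ *	cmd->seid = CPU_TO_LE16(seid & AVF_AQC_VSI_PROM_CMD_SEID_MASK);
+ *
+ * with CPU_TO_LE16 assumed to come from avf_osdep.h.
+ */
+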
+/* Add S/E-tag command (direct 0x0255)
+ * Uses generic avf_aqc_add_remove_tag_completion for completion
+ */
+struct avf_aqc_add_tag {
+	__le16	flags;
+#define AVF_AQC_ADD_TAG_FLAG_TO_QUEUE		0x0001
+	__le16	seid;
+#define AVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	tag;
+	__le16	queue_number;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_tag);
+
+struct avf_aqc_add_remove_tag_completion {
+	u8	reserved[12];
+	__le16	tags_used;
+	__le16	tags_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_tag_completion);
+
+/* Remove S/E-tag command (direct 0x0256)
+ * Uses generic avf_aqc_add_remove_tag_completion for completion
+ */
+struct avf_aqc_remove_tag {
+	__le16	seid;
+#define AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	tag;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_tag);
+
+/* Add multicast E-Tag (direct 0x0257)
+ * del multicast E-Tag (direct 0x0258) only uses pv_seid and etag fields
+ * and no external data
+ */
+struct avf_aqc_add_remove_mcast_etag {
+	__le16	pv_seid;
+	__le16	etag;
+	u8	num_unicast_etags;
+	u8	reserved[3];
+	__le32	addr_high;          /* address of array of 2-byte s-tags */
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_mcast_etag);
+
+struct avf_aqc_add_remove_mcast_etag_completion {
+	u8	reserved[4];
+	__le16	mcast_etags_used;
+	__le16	mcast_etags_free;
+	__le32	addr_high;
+	__le32	addr_low;
+
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_mcast_etag_completion);
+
+/* Update S/E-Tag (direct 0x0259) */
+struct avf_aqc_update_tag {
+	__le16	seid;
+#define AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	old_tag;
+	__le16	new_tag;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_update_tag);
+
+struct avf_aqc_update_tag_completion {
+	u8	reserved[12];
+	__le16	tags_used;
+	__le16	tags_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_update_tag_completion);
+
+/* Add Control Packet filter (direct 0x025A)
+ * Remove Control Packet filter (direct 0x025B)
+ * uses the avf_aqc_add_oveb_cloud,
+ * and the generic direct completion structure
+ */
+struct avf_aqc_add_remove_control_packet_filter {
+	u8	mac[6];
+	__le16	etype;
+	__le16	flags;
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC	0x0001
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_DROP		0x0002
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_TO_QUEUE	0x0004
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_TX		0x0008
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_RX		0x0000
+	__le16	seid;
+#define AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_MASK	(0x3FF << \
+				AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT)
+	__le16	queue;
+	u8	reserved[2];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_control_packet_filter);
+
+struct avf_aqc_add_remove_control_packet_filter_completion {
+	__le16	mac_etype_used;
+	__le16	etype_used;
+	__le16	mac_etype_free;
+	__le16	etype_free;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_control_packet_filter_completion);
+
+/* Add Cloud filters (indirect 0x025C)
+ * Remove Cloud filters (indirect 0x025D)
+ * uses the avf_aqc_add_remove_cloud_filters,
+ * and the generic indirect completion structure
+ */
+struct avf_aqc_add_remove_cloud_filters {
+	u8	num_filters;
+	u8	reserved;
+	__le16	seid;
+#define AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT)
+	u8	big_buffer_flag;
+#define AVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER	1
+	u8	reserved2[3];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_cloud_filters);
+
+struct avf_aqc_add_remove_cloud_filters_element_data {
+	u8	outer_mac[6];
+	u8	inner_mac[6];
+	__le16	inner_vlan;
+	union {
+		struct {
+			u8 reserved[12];
+			u8 data[4];
+		} v4;
+		struct {
+			u8 data[16];
+		} v6;
+	} ipaddr;
+	__le16	flags;
+#define AVF_AQC_ADD_CLOUD_FILTER_SHIFT			0
+#define AVF_AQC_ADD_CLOUD_FILTER_MASK	(0x3F << \
+					AVF_AQC_ADD_CLOUD_FILTER_SHIFT)
+/* 0x0000 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_OIP			0x0001
+/* 0x0002 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN		0x0003
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID	0x0004
+/* 0x0005 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID		0x0006
+/* 0x0007 reserved */
+/* 0x0008 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_OMAC			0x0009
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC			0x000A
+#define AVF_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC	0x000B
+#define AVF_AQC_ADD_CLOUD_FILTER_IIP			0x000C
+/* 0x0010 to 0x0017 is for custom filters */
+
+#define AVF_AQC_ADD_CLOUD_FLAGS_TO_QUEUE		0x0080
+#define AVF_AQC_ADD_CLOUD_VNK_SHIFT			6
+#define AVF_AQC_ADD_CLOUD_VNK_MASK			0x00C0
+#define AVF_AQC_ADD_CLOUD_FLAGS_IPV4			0
+#define AVF_AQC_ADD_CLOUD_FLAGS_IPV6			0x0100
+
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_SHIFT		9
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_MASK		0x1E00
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN		0
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_NVGRE_OMAC		1
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_GENEVE		2
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_IP			3
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_RESERVED		4
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN_GPE		5
+
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_MAC	0x2000
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_INNER_MAC	0x4000
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_IP	0x8000
+
+	__le32	tenant_id;
+	u8	reserved[4];
+	__le16	queue_number;
+#define AVF_AQC_ADD_CLOUD_QUEUE_SHIFT		0
+#define AVF_AQC_ADD_CLOUD_QUEUE_MASK		(0x7FF << \
+						 AVF_AQC_ADD_CLOUD_QUEUE_SHIFT)
+	u8	reserved2[14];
+	/* response section */
+	u8	allocation_result;
+#define AVF_AQC_ADD_CLOUD_FILTER_SUCCESS	0x0
+#define AVF_AQC_ADD_CLOUD_FILTER_FAIL		0xFF
+	u8	response_reserved[7];
+};
+
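+/*
+ * Illustrative flag composition (not part of this patch): a VXLAN filter
+ * matching inner MAC + inner VLAN + tenant ID on an IPv4 tunnel would
+ * typically be encoded as
+ *
+ *	filter->flags = CPU_TO_LE16(AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID |
+ *			AVF_AQC_ADD_CLOUD_FLAGS_IPV4 |
+ *			(AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN <<
+ *			 AVF_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
+ *
+ * again assuming the CPU_TO_LE16 helper from avf_osdep.h.
+ */
+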
+/* avf_aqc_add_rm_cloud_filt_elem_ext is used when
+ * AVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER flag is set. refer to
+ * DCR288
+ */
+struct avf_aqc_add_rm_cloud_filt_elem_ext {
+	struct avf_aqc_add_remove_cloud_filters_element_data element;
+	u16     general_fields[32];
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0	0
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1	1
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2	2
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0	3
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1	4
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2	5
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0	6
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1	7
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2	8
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0	9
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1	10
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2	11
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD0	12
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD1	13
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD2	14
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0	15
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD1	16
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD2	17
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD3	18
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD4	19
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD5	20
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD6	21
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD7	22
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD0	23
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD1	24
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD2	25
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD3	26
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD4	27
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD5	28
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD6	29
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD7	30
+};
+
+struct avf_aqc_remove_cloud_filters_completion {
+	__le16 perfect_ovlan_used;
+	__le16 perfect_ovlan_free;
+	__le16 vlan_used;
+	__le16 vlan_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_cloud_filters_completion);
+
+/* Replace filter Command 0x025F
+ * uses the avf_aqc_replace_cloud_filters,
+ * and the generic indirect completion structure
+ */
+struct avf_filter_data {
+	u8 filter_type;
+	u8 input[3];
+};
+
+struct avf_aqc_replace_cloud_filters_cmd {
+	u8	valid_flags;
+#define AVF_AQC_REPLACE_L1_FILTER		0x0
+#define AVF_AQC_REPLACE_CLOUD_FILTER		0x1
+#define AVF_AQC_GET_CLOUD_FILTERS		0x2
+#define AVF_AQC_MIRROR_CLOUD_FILTER		0x4
+#define AVF_AQC_HIGH_PRIORITY_CLOUD_FILTER	0x8
+	u8	old_filter_type;
+	u8	new_filter_type;
+	u8	tr_bit;
+	u8	reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct avf_aqc_replace_cloud_filters_cmd_buf {
+	u8	data[32];
+/* Filter type INPUT codes*/
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_ENTRIES_MAX	3
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_VALIDATED	(1 << 7UL)
+
+/* Field Vector offsets */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_MAC_DA		0
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_ETH		6
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG		7
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_VLAN		8
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_OVLAN		9
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_IVLAN		10
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_TUNNLE_KEY		11
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IMAC		12
+/* big FLU */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IP_DA		14
+/* big FLU */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_OIP_DA		15
+
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_INNER_VLAN		37
+	struct avf_filter_data	filters[8];
+};
+
+/* Add Mirror Rule (indirect or direct 0x0260)
+ * Delete Mirror Rule (indirect or direct 0x0261)
+ * note: some rule types (4,5) do not use an external buffer.
+ *       take care to set the flags correctly.
+ */
+struct avf_aqc_add_delete_mirror_rule {
+	__le16 seid;
+	__le16 rule_type;
+#define AVF_AQC_MIRROR_RULE_TYPE_SHIFT		0
+#define AVF_AQC_MIRROR_RULE_TYPE_MASK		(0x7 << \
+						AVF_AQC_MIRROR_RULE_TYPE_SHIFT)
+#define AVF_AQC_MIRROR_RULE_TYPE_VPORT_INGRESS	1
+#define AVF_AQC_MIRROR_RULE_TYPE_VPORT_EGRESS	2
+#define AVF_AQC_MIRROR_RULE_TYPE_VLAN		3
+#define AVF_AQC_MIRROR_RULE_TYPE_ALL_INGRESS	4
+#define AVF_AQC_MIRROR_RULE_TYPE_ALL_EGRESS	5
+	__le16 num_entries;
+	__le16 destination;  /* VSI for add, rule id for delete */
+	__le32 addr_high;    /* address of array of 2-byte VSI or VLAN ids */
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_delete_mirror_rule);
+
+struct avf_aqc_add_delete_mirror_rule_completion {
+	u8	reserved[2];
+	__le16	rule_id;  /* only used on add */
+	__le16	mirror_rules_used;
+	__le16	mirror_rules_free;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_delete_mirror_rule_completion);
+
+/* Dynamic Device Personalization */
+struct avf_aqc_write_personalization_profile {
+	u8      flags;
+	u8      reserved[3];
+	__le32  profile_track_id;
+	__le32  addr_high;
+	__le32  addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_write_personalization_profile);
+
+struct avf_aqc_write_ddp_resp {
+	__le32 error_offset;
+	__le32 error_info;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct avf_aqc_get_applied_profiles {
+	u8      flags;
+#define AVF_AQC_GET_DDP_GET_CONF	0x1
+#define AVF_AQC_GET_DDP_GET_RDPU_CONF	0x2
+	u8      rsv[3];
+	__le32  reserved;
+	__le32  addr_high;
+	__le32  addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_applied_profiles);
+
+/* DCB 0x03xx*/
+
+/* PFC Ignore (direct 0x0301)
+ *    the command and response use the same descriptor structure
+ */
+struct avf_aqc_pfc_ignore {
+	u8	tc_bitmap;
+	u8	command_flags; /* unused on response */
+#define AVF_AQC_PFC_IGNORE_SET		0x80
+#define AVF_AQC_PFC_IGNORE_CLEAR	0x0
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_pfc_ignore);
+
+/* DCB Update (direct 0x0302) uses the avf_aq_desc structure
+ * with no parameters
+ */
+
+/* TX scheduler 0x04xx */
+
+/* Almost all the indirect commands use
+ * this generic struct to pass the SEID in param0
+ */
+struct avf_aqc_tx_sched_ind {
+	__le16	vsi_seid;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_tx_sched_ind);
+
+/* Several commands respond with a set of queue set handles */
+struct avf_aqc_qs_handles_resp {
+	__le16 qs_handles[8];
+};
+
+/* Configure VSI BW limits (direct 0x0400) */
+struct avf_aqc_configure_vsi_bw_limit {
+	__le16	vsi_seid;
+	u8	reserved[2];
+	__le16	credit;
+	u8	reserved1[2];
+	u8	max_credit; /* 0-3, limit = 2^max */
+	u8	reserved2[7];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_configure_vsi_bw_limit);
+
+/* Configure VSI Bandwidth Limit per Traffic Type (indirect 0x0406)
+ *    responds with avf_aqc_qs_handles_resp
+ */
+struct avf_aqc_configure_vsi_ets_sla_bw_data {
+	u8	tc_valid_bits;
+	u8	reserved[15];
+	__le16	tc_bw_credits[8]; /* FW writes back QS handles here */
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved1[28];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_configure_vsi_ets_sla_bw_data);
+
+/* Configure VSI Bandwidth Allocation per Traffic Type (indirect 0x0407)
+ *    responds with avf_aqc_qs_handles_resp
+ */
+struct avf_aqc_configure_vsi_tc_bw_data {
+	u8	tc_valid_bits;
+	u8	reserved[3];
+	u8	tc_bw_credits[8];
+	u8	reserved1[4];
+	__le16	qs_handles[8];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_configure_vsi_tc_bw_data);
+
+/* Query vsi bw configuration (indirect 0x0408) */
+struct avf_aqc_query_vsi_bw_config_resp {
+	u8	tc_valid_bits;
+	u8	tc_suspended_bits;
+	u8	reserved[14];
+	__le16	qs_handles[8];
+	u8	reserved1[4];
+	__le16	port_bw_limit;
+	u8	reserved2[2];
+	u8	max_bw; /* 0-3, limit = 2^max */
+	u8	reserved3[23];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_query_vsi_bw_config_resp);
+
+/* Query VSI Bandwidth Allocation per Traffic Type (indirect 0x040A) */
+struct avf_aqc_query_vsi_ets_sla_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[3];
+	u8	share_credits[8];
+	__le16	credits[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_query_vsi_ets_sla_config_resp);
+
+/* Configure Switching Component Bandwidth Limit (direct 0x0410) */
+struct avf_aqc_configure_switching_comp_bw_limit {
+	__le16	seid;
+	u8	reserved[2];
+	__le16	credit;
+	u8	reserved1[2];
+	u8	max_bw; /* 0-3, limit = 2^max */
+	u8	reserved2[7];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_configure_switching_comp_bw_limit);
+
+/* Enable  Physical Port ETS (indirect 0x0413)
+ * Modify  Physical Port ETS (indirect 0x0414)
+ * Disable Physical Port ETS (indirect 0x0415)
+ */
+struct avf_aqc_configure_switching_comp_ets_data {
+	u8	reserved[4];
+	u8	tc_valid_bits;
+	u8	seepage;
+#define AVF_AQ_ETS_SEEPAGE_EN_MASK	0x1
+	u8	tc_strict_priority_flags;
+	u8	reserved1[17];
+	u8	tc_bw_share_credits[8];
+	u8	reserved2[96];
+};
+
+AVF_CHECK_STRUCT_LEN(0x80, avf_aqc_configure_switching_comp_ets_data);
+
+/* Configure Switching Component Bandwidth Limits per Tc (indirect 0x0416) */
+struct avf_aqc_configure_switching_comp_ets_bw_limit_data {
+	u8	tc_valid_bits;
+	u8	reserved[15];
+	__le16	tc_bw_credit[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved1[28];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40,
+		      avf_aqc_configure_switching_comp_ets_bw_limit_data);
+
+/* Configure Switching Component Bandwidth Allocation per Tc
+ * (indirect 0x0417)
+ */
+struct avf_aqc_configure_switching_comp_bw_config_data {
+	u8	tc_valid_bits;
+	u8	reserved[2];
+	u8	absolute_credits; /* bool */
+	u8	tc_bw_share_credits[8];
+	u8	reserved1[20];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_configure_switching_comp_bw_config_data);
+
+/* Query Switching Component Configuration (indirect 0x0418) */
+struct avf_aqc_query_switching_comp_ets_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[35];
+	__le16	port_bw_limit;
+	u8	reserved1[2];
+	u8	tc_bw_max; /* 0-3, limit = 2^max */
+	u8	reserved2[23];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_query_switching_comp_ets_config_resp);
+
+/* Query PhysicalPort ETS Configuration (indirect 0x0419) */
+struct avf_aqc_query_port_ets_config_resp {
+	u8	reserved[4];
+	u8	tc_valid_bits;
+	u8	reserved1;
+	u8	tc_strict_priority_bits;
+	u8	reserved2;
+	u8	tc_bw_share_credits[8];
+	__le16	tc_bw_limits[8];
+
+	/* 4 bits per tc 0-7, 4th bit reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved3[32];
+};
+
+AVF_CHECK_STRUCT_LEN(0x44, avf_aqc_query_port_ets_config_resp);
+
+/* Query Switching Component Bandwidth Allocation per Traffic Type
+ * (indirect 0x041A)
+ */
+struct avf_aqc_query_switching_comp_bw_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[2];
+	u8	absolute_credits_enable; /* bool */
+	u8	tc_bw_share_credits[8];
+	__le16	tc_bw_limits[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_query_switching_comp_bw_config_resp);
+
+/* Suspend/resume port TX traffic
+ * (direct 0x041B and 0x041C) uses the generic SEID struct
+ */
+
+/* Configure partition BW
+ * (indirect 0x041D)
+ */
+struct avf_aqc_configure_partition_bw_data {
+	__le16	pf_valid_bits;
+	u8	min_bw[16];      /* guaranteed bandwidth */
+	u8	max_bw[16];      /* bandwidth limit */
+};
+
+AVF_CHECK_STRUCT_LEN(0x22, avf_aqc_configure_partition_bw_data);
+
+/* Get and set the active HMC resource profile and status.
+ * (direct 0x0500) and (direct 0x0501)
+ */
+struct avf_aq_get_set_hmc_resource_profile {
+	u8	pm_profile;
+	u8	pe_vf_enabled;
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_get_set_hmc_resource_profile);
+
+enum avf_aq_hmc_profile {
+	/* AVF_HMC_PROFILE_NO_CHANGE	= 0, reserved */
+	AVF_HMC_PROFILE_DEFAULT	= 1,
+	AVF_HMC_PROFILE_FAVOR_VF	= 2,
+	AVF_HMC_PROFILE_EQUAL		= 3,
+};
+
+/* Get PHY Abilities (indirect 0x0600) uses the generic indirect struct */
+
+/* set in param0 for get phy abilities to report qualified modules */
+#define AVF_AQ_PHY_REPORT_QUALIFIED_MODULES	0x0001
+#define AVF_AQ_PHY_REPORT_INITIAL_VALUES	0x0002
+
+enum avf_aq_phy_type {
+	AVF_PHY_TYPE_SGMII			= 0x0,
+	AVF_PHY_TYPE_1000BASE_KX		= 0x1,
+	AVF_PHY_TYPE_10GBASE_KX4		= 0x2,
+	AVF_PHY_TYPE_10GBASE_KR		= 0x3,
+	AVF_PHY_TYPE_40GBASE_KR4		= 0x4,
+	AVF_PHY_TYPE_XAUI			= 0x5,
+	AVF_PHY_TYPE_XFI			= 0x6,
+	AVF_PHY_TYPE_SFI			= 0x7,
+	AVF_PHY_TYPE_XLAUI			= 0x8,
+	AVF_PHY_TYPE_XLPPI			= 0x9,
+	AVF_PHY_TYPE_40GBASE_CR4_CU		= 0xA,
+	AVF_PHY_TYPE_10GBASE_CR1_CU		= 0xB,
+	AVF_PHY_TYPE_10GBASE_AOC		= 0xC,
+	AVF_PHY_TYPE_40GBASE_AOC		= 0xD,
+	AVF_PHY_TYPE_UNRECOGNIZED		= 0xE,
+	AVF_PHY_TYPE_UNSUPPORTED		= 0xF,
+	AVF_PHY_TYPE_100BASE_TX		= 0x11,
+	AVF_PHY_TYPE_1000BASE_T		= 0x12,
+	AVF_PHY_TYPE_10GBASE_T			= 0x13,
+	AVF_PHY_TYPE_10GBASE_SR		= 0x14,
+	AVF_PHY_TYPE_10GBASE_LR		= 0x15,
+	AVF_PHY_TYPE_10GBASE_SFPP_CU		= 0x16,
+	AVF_PHY_TYPE_10GBASE_CR1		= 0x17,
+	AVF_PHY_TYPE_40GBASE_CR4		= 0x18,
+	AVF_PHY_TYPE_40GBASE_SR4		= 0x19,
+	AVF_PHY_TYPE_40GBASE_LR4		= 0x1A,
+	AVF_PHY_TYPE_1000BASE_SX		= 0x1B,
+	AVF_PHY_TYPE_1000BASE_LX		= 0x1C,
+	AVF_PHY_TYPE_1000BASE_T_OPTICAL	= 0x1D,
+	AVF_PHY_TYPE_20GBASE_KR2		= 0x1E,
+	AVF_PHY_TYPE_25GBASE_KR		= 0x1F,
+	AVF_PHY_TYPE_25GBASE_CR		= 0x20,
+	AVF_PHY_TYPE_25GBASE_SR		= 0x21,
+	AVF_PHY_TYPE_25GBASE_LR		= 0x22,
+	AVF_PHY_TYPE_25GBASE_AOC		= 0x23,
+	AVF_PHY_TYPE_25GBASE_ACC		= 0x24,
+	AVF_PHY_TYPE_MAX,
+	AVF_PHY_TYPE_EMPTY			= 0xFE,
+	AVF_PHY_TYPE_DEFAULT			= 0xFF,
+};
+
+#define AVF_LINK_SPEED_100MB_SHIFT	0x1
+#define AVF_LINK_SPEED_1000MB_SHIFT	0x2
+#define AVF_LINK_SPEED_10GB_SHIFT	0x3
+#define AVF_LINK_SPEED_40GB_SHIFT	0x4
+#define AVF_LINK_SPEED_20GB_SHIFT	0x5
+#define AVF_LINK_SPEED_25GB_SHIFT	0x6
+
+enum avf_aq_link_speed {
+	AVF_LINK_SPEED_UNKNOWN	= 0,
+	AVF_LINK_SPEED_100MB	= (1 << AVF_LINK_SPEED_100MB_SHIFT),
+	AVF_LINK_SPEED_1GB	= (1 << AVF_LINK_SPEED_1000MB_SHIFT),
+	AVF_LINK_SPEED_10GB	= (1 << AVF_LINK_SPEED_10GB_SHIFT),
+	AVF_LINK_SPEED_40GB	= (1 << AVF_LINK_SPEED_40GB_SHIFT),
+	AVF_LINK_SPEED_20GB	= (1 << AVF_LINK_SPEED_20GB_SHIFT),
+	AVF_LINK_SPEED_25GB	= (1 << AVF_LINK_SPEED_25GB_SHIFT),
+};
+
+struct avf_aqc_module_desc {
+	u8 oui[3];
+	u8 reserved1;
+	u8 part_number[16];
+	u8 revision[4];
+	u8 reserved2[8];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_module_desc);
+
+struct avf_aq_get_phy_abilities_resp {
+	__le32	phy_type;       /* bitmap using the above enum for offsets */
+	u8	link_speed;     /* bitmap using the above enum bit patterns */
+	u8	abilities;
+#define AVF_AQ_PHY_FLAG_PAUSE_TX	0x01
+#define AVF_AQ_PHY_FLAG_PAUSE_RX	0x02
+#define AVF_AQ_PHY_FLAG_LOW_POWER	0x04
+#define AVF_AQ_PHY_LINK_ENABLED	0x08
+#define AVF_AQ_PHY_AN_ENABLED		0x10
+#define AVF_AQ_PHY_FLAG_MODULE_QUAL	0x20
+#define AVF_AQ_PHY_FEC_ABILITY_KR	0x40
+#define AVF_AQ_PHY_FEC_ABILITY_RS	0x80
+	__le16	eee_capability;
+#define AVF_AQ_EEE_100BASE_TX		0x0002
+#define AVF_AQ_EEE_1000BASE_T		0x0004
+#define AVF_AQ_EEE_10GBASE_T		0x0008
+#define AVF_AQ_EEE_1000BASE_KX		0x0010
+#define AVF_AQ_EEE_10GBASE_KX4		0x0020
+#define AVF_AQ_EEE_10GBASE_KR		0x0040
+	__le32	eeer_val;
+	u8	d3_lpan;
+#define AVF_AQ_SET_PHY_D3_LPAN_ENA	0x01
+	u8	phy_type_ext;
+#define AVF_AQ_PHY_TYPE_EXT_25G_KR	0x01
+#define AVF_AQ_PHY_TYPE_EXT_25G_CR	0x02
+#define AVF_AQ_PHY_TYPE_EXT_25G_SR	0x04
+#define AVF_AQ_PHY_TYPE_EXT_25G_LR	0x08
+#define AVF_AQ_PHY_TYPE_EXT_25G_AOC	0x10
+#define AVF_AQ_PHY_TYPE_EXT_25G_ACC	0x20
+	u8	fec_cfg_curr_mod_ext_info;
+#define AVF_AQ_ENABLE_FEC_KR		0x01
+#define AVF_AQ_ENABLE_FEC_RS		0x02
+#define AVF_AQ_REQUEST_FEC_KR		0x04
+#define AVF_AQ_REQUEST_FEC_RS		0x08
+#define AVF_AQ_ENABLE_FEC_AUTO		0x10
+#define AVF_AQ_FEC
+#define AVF_AQ_MODULE_TYPE_EXT_MASK	0xE0
+#define AVF_AQ_MODULE_TYPE_EXT_SHIFT	5
+
+	u8	ext_comp_code;
+	u8	phy_id[4];
+	u8	module_type[3];
+	u8	qualified_module_count;
+#define AVF_AQ_PHY_MAX_QMS		16
+	struct avf_aqc_module_desc	qualified_module[AVF_AQ_PHY_MAX_QMS];
+};
+
+AVF_CHECK_STRUCT_LEN(0x218, avf_aq_get_phy_abilities_resp);
+
+/* Set PHY Config (direct 0x0601) */
+struct avf_aq_set_phy_config { /* same bits as above in all */
+	__le32	phy_type;
+	u8	link_speed;
+	u8	abilities;
+/* bits 0-2 use the values from get_phy_abilities_resp */
+#define AVF_AQ_PHY_ENABLE_LINK		0x08
+#define AVF_AQ_PHY_ENABLE_AN		0x10
+#define AVF_AQ_PHY_ENABLE_ATOMIC_LINK	0x20
+	__le16	eee_capability;
+	__le32	eeer;
+	u8	low_power_ctrl;
+	u8	phy_type_ext;
+	u8	fec_config;
+#define AVF_AQ_SET_FEC_ABILITY_KR	BIT(0)
+#define AVF_AQ_SET_FEC_ABILITY_RS	BIT(1)
+#define AVF_AQ_SET_FEC_REQUEST_KR	BIT(2)
+#define AVF_AQ_SET_FEC_REQUEST_RS	BIT(3)
+#define AVF_AQ_SET_FEC_AUTO		BIT(4)
+#define AVF_AQ_PHY_FEC_CONFIG_SHIFT	0x0
+#define AVF_AQ_PHY_FEC_CONFIG_MASK	(0x1F << AVF_AQ_PHY_FEC_CONFIG_SHIFT)
+	u8	reserved;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_set_phy_config);
+
+/* Set MAC Config command data structure (direct 0x0603) */
+struct avf_aq_set_mac_config {
+	__le16	max_frame_size;
+	u8	params;
+#define AVF_AQ_SET_MAC_CONFIG_CRC_EN		0x04
+#define AVF_AQ_SET_MAC_CONFIG_PACING_MASK	0x78
+#define AVF_AQ_SET_MAC_CONFIG_PACING_SHIFT	3
+#define AVF_AQ_SET_MAC_CONFIG_PACING_NONE	0x0
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1B_13TX	0xF
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_9TX	0x9
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_4TX	0x8
+#define AVF_AQ_SET_MAC_CONFIG_PACING_3DW_7TX	0x7
+#define AVF_AQ_SET_MAC_CONFIG_PACING_2DW_3TX	0x6
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_1TX	0x5
+#define AVF_AQ_SET_MAC_CONFIG_PACING_3DW_2TX	0x4
+#define AVF_AQ_SET_MAC_CONFIG_PACING_7DW_3TX	0x3
+#define AVF_AQ_SET_MAC_CONFIG_PACING_4DW_1TX	0x2
+#define AVF_AQ_SET_MAC_CONFIG_PACING_9DW_1TX	0x1
+	u8	tx_timer_priority; /* bitmap */
+	__le16	tx_timer_value;
+	__le16	fc_refresh_threshold;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_set_mac_config);
+
+/* Restart Auto-Negotiation (direct 0x605) */
+struct avf_aqc_set_link_restart_an {
+	u8	command;
+#define AVF_AQ_PHY_RESTART_AN	0x02
+#define AVF_AQ_PHY_LINK_ENABLE	0x04
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_link_restart_an);
+
+/* Get Link Status cmd & response data structure (direct 0x0607) */
+struct avf_aqc_get_link_status {
+	__le16	command_flags; /* only field set on command */
+#define AVF_AQ_LSE_MASK		0x3
+#define AVF_AQ_LSE_NOP			0x0
+#define AVF_AQ_LSE_DISABLE		0x2
+#define AVF_AQ_LSE_ENABLE		0x3
+/* only response uses this flag */
+#define AVF_AQ_LSE_IS_ENABLED		0x1
+	u8	phy_type;    /* avf_aq_phy_type   */
+	u8	link_speed;  /* avf_aq_link_speed */
+	u8	link_info;
+#define AVF_AQ_LINK_UP			0x01    /* obsolete */
+#define AVF_AQ_LINK_UP_FUNCTION	0x01
+#define AVF_AQ_LINK_FAULT		0x02
+#define AVF_AQ_LINK_FAULT_TX		0x04
+#define AVF_AQ_LINK_FAULT_RX		0x08
+#define AVF_AQ_LINK_FAULT_REMOTE	0x10
+#define AVF_AQ_LINK_UP_PORT		0x20
+#define AVF_AQ_MEDIA_AVAILABLE		0x40
+#define AVF_AQ_SIGNAL_DETECT		0x80
+	u8	an_info;
+#define AVF_AQ_AN_COMPLETED		0x01
+#define AVF_AQ_LP_AN_ABILITY		0x02
+#define AVF_AQ_PD_FAULT		0x04
+#define AVF_AQ_FEC_EN			0x08
+#define AVF_AQ_PHY_LOW_POWER		0x10
+#define AVF_AQ_LINK_PAUSE_TX		0x20
+#define AVF_AQ_LINK_PAUSE_RX		0x40
+#define AVF_AQ_QUALIFIED_MODULE	0x80
+	u8	ext_info;
+#define AVF_AQ_LINK_PHY_TEMP_ALARM	0x01
+#define AVF_AQ_LINK_XCESSIVE_ERRORS	0x02
+#define AVF_AQ_LINK_TX_SHIFT		0x02
+#define AVF_AQ_LINK_TX_MASK		(0x03 << AVF_AQ_LINK_TX_SHIFT)
+#define AVF_AQ_LINK_TX_ACTIVE		0x00
+#define AVF_AQ_LINK_TX_DRAINED		0x01
+#define AVF_AQ_LINK_TX_FLUSHED		0x03
+#define AVF_AQ_LINK_FORCED_40G		0x10
+/* 25G Error Codes */
+#define AVF_AQ_25G_NO_ERR		0X00
+#define AVF_AQ_25G_NOT_PRESENT		0X01
+#define AVF_AQ_25G_NVM_CRC_ERR		0X02
+#define AVF_AQ_25G_SBUS_UCODE_ERR	0X03
+#define AVF_AQ_25G_SERDES_UCODE_ERR	0X04
+#define AVF_AQ_25G_NIMB_UCODE_ERR	0X05
+	u8	loopback; /* use defines from avf_aqc_set_lb_mode */
+/* Since firmware API 1.7 loopback field keeps power class info as well */
+#define AVF_AQ_LOOPBACK_MASK		0x07
+#define AVF_AQ_PWR_CLASS_SHIFT_LB	6
+#define AVF_AQ_PWR_CLASS_MASK_LB	(0x03 << AVF_AQ_PWR_CLASS_SHIFT_LB)
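+/* illustrative decode of the power class packed into the loopback field
+ * (sketch only, not used by the driver):
+ *	pwr_class = (loopback & AVF_AQ_PWR_CLASS_MASK_LB) >>
+ *		    AVF_AQ_PWR_CLASS_SHIFT_LB
+ */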
+	__le16	max_frame_size;
+	u8	config;
+#define AVF_AQ_CONFIG_FEC_KR_ENA	0x01
+#define AVF_AQ_CONFIG_FEC_RS_ENA	0x02
+#define AVF_AQ_CONFIG_CRC_ENA		0x04
+#define AVF_AQ_CONFIG_PACING_MASK	0x78
+	union {
+		struct {
+			u8	power_desc;
+#define AVF_AQ_LINK_POWER_CLASS_1	0x00
+#define AVF_AQ_LINK_POWER_CLASS_2	0x01
+#define AVF_AQ_LINK_POWER_CLASS_3	0x02
+#define AVF_AQ_LINK_POWER_CLASS_4	0x03
+#define AVF_AQ_PWR_CLASS_MASK		0x03
+			u8	reserved[4];
+		};
+		struct {
+			u8	link_type[4];
+			u8	link_type_ext;
+		};
+	};
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_link_status);
+
+/* Set event mask command (direct 0x613) */
+struct avf_aqc_set_phy_int_mask {
+	u8	reserved[8];
+	__le16	event_mask;
+#define AVF_AQ_EVENT_LINK_UPDOWN	0x0002
+#define AVF_AQ_EVENT_MEDIA_NA		0x0004
+#define AVF_AQ_EVENT_LINK_FAULT	0x0008
+#define AVF_AQ_EVENT_PHY_TEMP_ALARM	0x0010
+#define AVF_AQ_EVENT_EXCESSIVE_ERRORS	0x0020
+#define AVF_AQ_EVENT_SIGNAL_DETECT	0x0040
+#define AVF_AQ_EVENT_AN_COMPLETED	0x0080
+#define AVF_AQ_EVENT_MODULE_QUAL_FAIL	0x0100
+#define AVF_AQ_EVENT_PORT_TX_SUSPENDED	0x0200
+	u8	reserved1[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_phy_int_mask);
+
+/* Get Local AN advt register (direct 0x0614)
+ * Set Local AN advt register (direct 0x0615)
+ * Get Link Partner AN advt register (direct 0x0616)
+ */
+struct avf_aqc_an_advt_reg {
+	__le32	local_an_reg0;
+	__le16	local_an_reg1;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_an_advt_reg);
+
+/* Set Loopback mode (0x0618) */
+struct avf_aqc_set_lb_mode {
+	__le16	lb_mode;
+#define AVF_AQ_LB_PHY_LOCAL	0x01
+#define AVF_AQ_LB_PHY_REMOTE	0x02
+#define AVF_AQ_LB_MAC_LOCAL	0x04
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_lb_mode);
+
+/* Set PHY Debug command (0x0622) */
+struct avf_aqc_set_phy_debug {
+	u8	command_flags;
+#define AVF_AQ_PHY_DEBUG_RESET_INTERNAL	0x02
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT	2
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_MASK	(0x03 << \
+					AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT)
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_NONE	0x00
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_HARD	0x01
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SOFT	0x02
+/* Disable link manageability on a single port */
+#define AVF_AQ_PHY_DEBUG_DISABLE_LINK_FW	0x10
+/* Disable link manageability on all ports needs both bits 4 and 5 */
+#define AVF_AQ_PHY_DEBUG_DISABLE_ALL_LINK_FW	0x20
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_phy_debug);
+
+enum avf_aq_phy_reg_type {
+	AVF_AQC_PHY_REG_INTERNAL	= 0x1,
+	AVF_AQC_PHY_REG_EXERNAL_BASET	= 0x2,
+	AVF_AQC_PHY_REG_EXERNAL_MODULE	= 0x3
+};
+
+/* Run PHY Activity (0x0626) */
+struct avf_aqc_run_phy_activity {
+	__le16  activity_id;
+	u8      flags;
+	u8      reserved1;
+	__le32  control;
+	__le32  data;
+	u8      reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_run_phy_activity);
+
+/* Set PHY Register command (0x0628) */
+/* Get PHY Register command (0x0629) */
+struct avf_aqc_phy_register_access {
+	u8	phy_interface;
+#define AVF_AQ_PHY_REG_ACCESS_INTERNAL	0
+#define AVF_AQ_PHY_REG_ACCESS_EXTERNAL	1
+#define AVF_AQ_PHY_REG_ACCESS_EXTERNAL_MODULE	2
+	u8	dev_addres;
+	u8	reserved1[2];
+	u32	reg_address;
+	u32	reg_value;
+	u8	reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_phy_register_access);
+
+/* NVM Read command (indirect 0x0701)
+ * NVM Erase commands (direct 0x0702)
+ * NVM Update commands (indirect 0x0703)
+ */
+struct avf_aqc_nvm_update {
+	u8	command_flags;
+#define AVF_AQ_NVM_LAST_CMD	0x01
+#define AVF_AQ_NVM_FLASH_ONLY	0x80
+	u8	module_pointer;
+	__le16	length;
+	__le32	offset;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_update);
+
+/* NVM Config Read (indirect 0x0704) */
+struct avf_aqc_nvm_config_read {
+	__le16	cmd_flags;
+#define AVF_AQ_ANVM_SINGLE_OR_MULTIPLE_FEATURES_MASK	1
+#define AVF_AQ_ANVM_READ_SINGLE_FEATURE		0
+#define AVF_AQ_ANVM_READ_MULTIPLE_FEATURES		1
+	__le16	element_count;
+	__le16	element_id;	/* Feature/field ID */
+	__le16	element_id_msw;	/* MSWord of field ID */
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_config_read);
+
+/* NVM Config Write (indirect 0x0705) */
+struct avf_aqc_nvm_config_write {
+	__le16	cmd_flags;
+	__le16	element_count;
+	u8	reserved[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_config_write);
+
+/* Used for 0x0704 as well as for 0x0705 commands */
+#define AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT		1
+#define AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_MASK \
+				(1 << AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+#define AVF_AQ_ANVM_FEATURE		0
+#define AVF_AQ_ANVM_IMMEDIATE_FIELD	\
+				(1 << AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+struct avf_aqc_nvm_config_data_feature {
+	__le16 feature_id;
+#define AVF_AQ_ANVM_FEATURE_OPTION_OEM_ONLY		0x01
+#define AVF_AQ_ANVM_FEATURE_OPTION_DWORD_MAP		0x08
+#define AVF_AQ_ANVM_FEATURE_OPTION_POR_CSR		0x10
+	__le16 feature_options;
+	__le16 feature_selection;
+};
+
+AVF_CHECK_STRUCT_LEN(0x6, avf_aqc_nvm_config_data_feature);
+
+struct avf_aqc_nvm_config_data_immediate_field {
+	__le32 field_id;
+	__le32 field_value;
+	__le16 field_options;
+	__le16 reserved;
+};
+
+AVF_CHECK_STRUCT_LEN(0xc, avf_aqc_nvm_config_data_immediate_field);
+
+/* OEM Post Update (indirect 0x0720)
+ * no command data struct used
+ */
+struct avf_aqc_nvm_oem_post_update {
+#define AVF_AQ_NVM_OEM_POST_UPDATE_EXTERNAL_DATA	0x01
+	u8 sel_data;
+	u8 reserved[7];
+};
+
+AVF_CHECK_STRUCT_LEN(0x8, avf_aqc_nvm_oem_post_update);
+
+struct avf_aqc_nvm_oem_post_update_buffer {
+	u8 str_len;
+	u8 dev_addr;
+	__le16 eeprom_addr;
+	u8 data[36];
+};
+
+AVF_CHECK_STRUCT_LEN(0x28, avf_aqc_nvm_oem_post_update_buffer);
+
+/* Thermal Sensor (indirect 0x0721)
+ *     read or set thermal sensor configs and values
+ *     takes a sensor and command specific data buffer, not detailed here
+ */
+struct avf_aqc_thermal_sensor {
+	u8 sensor_action;
+#define AVF_AQ_THERMAL_SENSOR_READ_CONFIG	0
+#define AVF_AQ_THERMAL_SENSOR_SET_CONFIG	1
+#define AVF_AQ_THERMAL_SENSOR_READ_TEMP	2
+	u8 reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_thermal_sensor);
+
+/* Send to PF command (indirect 0x0801) id is only used by PF
+ * Send to VF command (indirect 0x0802) id is only used by PF
+ * Send to Peer PF command (indirect 0x0803)
+ */
+struct avf_aqc_pf_vf_message {
+	__le32	id;
+	u8	reserved[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_pf_vf_message);
+
+/* Alternate structure */
+
+/* Direct write (direct 0x0900)
+ * Direct read (direct 0x0902)
+ */
+struct avf_aqc_alternate_write {
+	__le32 address0;
+	__le32 data0;
+	__le32 address1;
+	__le32 data1;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_write);
+
+/* Indirect write (indirect 0x0901)
+ * Indirect read (indirect 0x0903)
+ */
+
+struct avf_aqc_alternate_ind_write {
+	__le32 address;
+	__le32 length;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_ind_write);
+
+/* Done alternate write (direct 0x0904)
+ * uses avf_aq_desc
+ */
+struct avf_aqc_alternate_write_done {
+	__le16	cmd_flags;
+#define AVF_AQ_ALTERNATE_MODE_BIOS_MASK	1
+#define AVF_AQ_ALTERNATE_MODE_BIOS_LEGACY	0
+#define AVF_AQ_ALTERNATE_MODE_BIOS_UEFI	1
+#define AVF_AQ_ALTERNATE_RESET_NEEDED		2
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_write_done);
+
+/* Set OEM mode (direct 0x0905) */
+struct avf_aqc_alternate_set_mode {
+	__le32	mode;
+#define AVF_AQ_ALTERNATE_MODE_NONE	0
+#define AVF_AQ_ALTERNATE_MODE_OEM	1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_set_mode);
+
+/* Clear port Alternate RAM (direct 0x0906) uses avf_aq_desc */
+
+/* async events 0x10xx */
+
+/* Lan Queue Overflow Event (direct, 0x1001) */
+struct avf_aqc_lan_overflow {
+	__le32	prtdcb_rupto;
+	__le32	otx_ctl;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lan_overflow);
+
+/* Get LLDP MIB (indirect 0x0A00) */
+struct avf_aqc_lldp_get_mib {
+	u8	type;
+	u8	reserved1;
+#define AVF_AQ_LLDP_MIB_TYPE_MASK		0x3
+#define AVF_AQ_LLDP_MIB_LOCAL			0x0
+#define AVF_AQ_LLDP_MIB_REMOTE			0x1
+#define AVF_AQ_LLDP_MIB_LOCAL_AND_REMOTE	0x2
+#define AVF_AQ_LLDP_BRIDGE_TYPE_MASK		0xC
+#define AVF_AQ_LLDP_BRIDGE_TYPE_SHIFT		0x2
+#define AVF_AQ_LLDP_BRIDGE_TYPE_NEAREST_BRIDGE	0x0
+#define AVF_AQ_LLDP_BRIDGE_TYPE_NON_TPMR	0x1
+#define AVF_AQ_LLDP_TX_SHIFT			0x4
+#define AVF_AQ_LLDP_TX_MASK			(0x03 << AVF_AQ_LLDP_TX_SHIFT)
+/* TX pause flags use AVF_AQ_LINK_TX_* above */
+	__le16	local_len;
+	__le16	remote_len;
+	u8	reserved2[2];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_get_mib);
+
+/* Configure LLDP MIB Change Event (direct 0x0A01)
+ * also used for the event (with type in the command field)
+ */
+struct avf_aqc_lldp_update_mib {
+	u8	command;
+#define AVF_AQ_LLDP_MIB_UPDATE_ENABLE	0x0
+#define AVF_AQ_LLDP_MIB_UPDATE_DISABLE	0x1
+	u8	reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_update_mib);
+
+/* Add LLDP TLV (indirect 0x0A02)
+ * Delete LLDP TLV (indirect 0x0A04)
+ */
+struct avf_aqc_lldp_add_tlv {
+	u8	type; /* only nearest bridge and non-TPMR from 0x0A00 */
+	u8	reserved1[1];
+	__le16	len;
+	u8	reserved2[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_add_tlv);
+
+/* Update LLDP TLV (indirect 0x0A03) */
+struct avf_aqc_lldp_update_tlv {
+	u8	type; /* only nearest bridge and non-TPMR from 0x0A00 */
+	u8	reserved;
+	__le16	old_len;
+	__le16	new_offset;
+	__le16	new_len;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_update_tlv);
+
+/* Stop LLDP (direct 0x0A05) */
+struct avf_aqc_lldp_stop {
+	u8	command;
+#define AVF_AQ_LLDP_AGENT_STOP		0x0
+#define AVF_AQ_LLDP_AGENT_SHUTDOWN	0x1
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_stop);
+
+/* Start LLDP (direct 0x0A06) */
+
+struct avf_aqc_lldp_start {
+	u8	command;
+#define AVF_AQ_LLDP_AGENT_START	0x1
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_start);
+
+/* Get CEE DCBX Oper Config (0x0A07)
+ * uses the generic descriptor struct
+ * returns below as indirect response
+ */
+
+#define AVF_AQC_CEE_APP_FCOE_SHIFT	0x0
+#define AVF_AQC_CEE_APP_FCOE_MASK	(0x7 << AVF_AQC_CEE_APP_FCOE_SHIFT)
+#define AVF_AQC_CEE_APP_ISCSI_SHIFT	0x3
+#define AVF_AQC_CEE_APP_ISCSI_MASK	(0x7 << AVF_AQC_CEE_APP_ISCSI_SHIFT)
+#define AVF_AQC_CEE_APP_FIP_SHIFT	0x8
+#define AVF_AQC_CEE_APP_FIP_MASK	(0x7 << AVF_AQC_CEE_APP_FIP_SHIFT)
+
+#define AVF_AQC_CEE_PG_STATUS_SHIFT	0x0
+#define AVF_AQC_CEE_PG_STATUS_MASK	(0x7 << AVF_AQC_CEE_PG_STATUS_SHIFT)
+#define AVF_AQC_CEE_PFC_STATUS_SHIFT	0x3
+#define AVF_AQC_CEE_PFC_STATUS_MASK	(0x7 << AVF_AQC_CEE_PFC_STATUS_SHIFT)
+#define AVF_AQC_CEE_APP_STATUS_SHIFT	0x8
+#define AVF_AQC_CEE_APP_STATUS_MASK	(0x7 << AVF_AQC_CEE_APP_STATUS_SHIFT)
+#define AVF_AQC_CEE_FCOE_STATUS_SHIFT	0x8
+#define AVF_AQC_CEE_FCOE_STATUS_MASK	(0x7 << AVF_AQC_CEE_FCOE_STATUS_SHIFT)
+#define AVF_AQC_CEE_ISCSI_STATUS_SHIFT	0xB
+#define AVF_AQC_CEE_ISCSI_STATUS_MASK	(0x7 << AVF_AQC_CEE_ISCSI_STATUS_SHIFT)
+#define AVF_AQC_CEE_FIP_STATUS_SHIFT	0x10
+#define AVF_AQC_CEE_FIP_STATUS_MASK	(0x7 << AVF_AQC_CEE_FIP_STATUS_SHIFT)
+
+/* struct avf_aqc_get_cee_dcb_cfg_v1_resp was originally defined with
+ * word boundary layout issues, which the Linux compilers silently deal
+ * with by adding padding, making the actual struct larger than designed.
+ * However, the FW compiler for the NIC is less lenient and complains
+ * about the struct.  Hence, the struct defined here has an extra byte in
+ * fields reserved3 and reserved4 to directly acknowledge that padding,
+ * and the new length is used in the length check macro.
+ */
+struct avf_aqc_get_cee_dcb_cfg_v1_resp {
+	u8	reserved1;
+	u8	oper_num_tc;
+	u8	oper_prio_tc[4];
+	u8	reserved2;
+	u8	oper_tc_bw[8];
+	u8	oper_pfc_en;
+	u8	reserved3[2];
+	__le16	oper_app_prio;
+	u8	reserved4[2];
+	__le16	tlv_status;
+};
+
+AVF_CHECK_STRUCT_LEN(0x18, avf_aqc_get_cee_dcb_cfg_v1_resp);
+
+struct avf_aqc_get_cee_dcb_cfg_resp {
+	u8	oper_num_tc;
+	u8	oper_prio_tc[4];
+	u8	oper_tc_bw[8];
+	u8	oper_pfc_en;
+	__le16	oper_app_prio;
+	__le32	tlv_status;
+	u8	reserved[12];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_get_cee_dcb_cfg_resp);
+
+/*	Set Local LLDP MIB (indirect 0x0A08)
+ *	Used to replace the local MIB of a given LLDP agent. e.g. DCBx
+ */
+struct avf_aqc_lldp_set_local_mib {
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT	0
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_MASK	(1 << \
+					SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_LOCAL_MIB	0x0
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT	(1)
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_MASK	(1 << \
+				SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS		0x1
+	u8	type;
+	u8	reserved0;
+	__le16	length;
+	u8	reserved1[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_set_local_mib);
+
+struct avf_aqc_lldp_set_local_mib_resp {
+#define SET_LOCAL_MIB_RESP_EVENT_TRIGGERED_MASK      0x01
+	u8  status;
+	u8  reserved[15];
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_lldp_set_local_mib_resp);
+
+/*	Stop/Start LLDP Agent (direct 0x0A09)
+ *	Used for stopping/starting specific LLDP agent. e.g. DCBx
+ */
+struct avf_aqc_lldp_stop_start_specific_agent {
+#define AVF_AQC_START_SPECIFIC_AGENT_SHIFT	0
+#define AVF_AQC_START_SPECIFIC_AGENT_MASK \
+				(1 << AVF_AQC_START_SPECIFIC_AGENT_SHIFT)
+	u8	command;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_stop_start_specific_agent);
+
+/* Add Udp Tunnel command and completion (direct 0x0B00) */
+struct avf_aqc_add_udp_tunnel {
+	__le16	udp_port;
+	u8	reserved0[3];
+	u8	protocol_type;
+#define AVF_AQC_TUNNEL_TYPE_VXLAN	0x00
+#define AVF_AQC_TUNNEL_TYPE_NGE	0x01
+#define AVF_AQC_TUNNEL_TYPE_TEREDO	0x10
+#define AVF_AQC_TUNNEL_TYPE_VXLAN_GPE	0x11
+	u8	reserved1[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_udp_tunnel);
+
+struct avf_aqc_add_udp_tunnel_completion {
+	__le16	udp_port;
+	u8	filter_entry_index;
+	u8	multiple_pfs;
+#define AVF_AQC_SINGLE_PF		0x0
+#define AVF_AQC_MULTIPLE_PFS		0x1
+	u8	total_filters;
+	u8	reserved[11];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_udp_tunnel_completion);
+
+/* remove UDP Tunnel command (0x0B01) */
+struct avf_aqc_remove_udp_tunnel {
+	u8	reserved[2];
+	u8	index; /* 0 to 15 */
+	u8	reserved2[13];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_udp_tunnel);
+
+struct avf_aqc_del_udp_tunnel_completion {
+	__le16	udp_port;
+	u8	index; /* 0 to 15 */
+	u8	multiple_pfs;
+	u8	total_filters_used;
+	u8	reserved1[11];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_del_udp_tunnel_completion);
+
+struct avf_aqc_get_set_rss_key {
+#define AVF_AQC_SET_RSS_KEY_VSI_VALID		(0x1 << 15)
+#define AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT	0
+#define AVF_AQC_SET_RSS_KEY_VSI_ID_MASK	(0x3FF << \
+					AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT)
+	__le16	vsi_id;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_set_rss_key);
+
+struct avf_aqc_get_set_rss_key_data {
+	u8 standard_rss_key[0x28];
+	u8 extended_hash_key[0xc];
+};
+
+AVF_CHECK_STRUCT_LEN(0x34, avf_aqc_get_set_rss_key_data);
+
+struct  avf_aqc_get_set_rss_lut {
+#define AVF_AQC_SET_RSS_LUT_VSI_VALID		(0x1 << 15)
+#define AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT	0
+#define AVF_AQC_SET_RSS_LUT_VSI_ID_MASK	(0x3FF << \
+					AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT)
+	__le16	vsi_id;
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT	0
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK	(0x1 << \
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
+
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI	0
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF	1
+	__le16	flags;
+	u8	reserved[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_set_rss_lut);
+
+/* tunnel key structure 0x0B10 */
+
+struct avf_aqc_tunnel_key_structure {
+	u8	key1_off;
+	u8	key2_off;
+	u8	key1_len;  /* 0 to 15 */
+	u8	key2_len;  /* 0 to 15 */
+	u8	flags;
+#define AVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDE	0x01
+/* response flags */
+#define AVF_AQC_TUNNEL_KEY_STRUCT_SUCCESS	0x01
+#define AVF_AQC_TUNNEL_KEY_STRUCT_MODIFIED	0x02
+#define AVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDDEN	0x03
+	u8	network_key_index;
+#define AVF_AQC_NETWORK_KEY_INDEX_VXLAN		0x0
+#define AVF_AQC_NETWORK_KEY_INDEX_NGE			0x1
+#define AVF_AQC_NETWORK_KEY_INDEX_FLEX_MAC_IN_UDP	0x2
+#define AVF_AQC_NETWORK_KEY_INDEX_GRE			0x3
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_tunnel_key_structure);
+
+/* OEM mode commands (direct 0xFE0x) */
+struct avf_aqc_oem_param_change {
+	__le32	param_type;
+#define AVF_AQ_OEM_PARAM_TYPE_PF_CTL	0
+#define AVF_AQ_OEM_PARAM_TYPE_BW_CTL	1
+#define AVF_AQ_OEM_PARAM_MAC		2
+	__le32	param_value1;
+	__le16	param_value2;
+	u8	reserved[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_oem_param_change);
+
+struct avf_aqc_oem_state_change {
+	__le32	state;
+#define AVF_AQ_OEM_STATE_LINK_DOWN	0x0
+#define AVF_AQ_OEM_STATE_LINK_UP	0x1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_oem_state_change);
+
+/* Initialize OCSD (0xFE02, direct) */
+struct avf_aqc_opc_oem_ocsd_initialize {
+	u8 type_status;
+	u8 reserved1[3];
+	__le32 ocsd_memory_block_addr_high;
+	__le32 ocsd_memory_block_addr_low;
+	__le32 requested_update_interval;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_opc_oem_ocsd_initialize);
+
+/* Initialize OCBB  (0xFE03, direct) */
+struct avf_aqc_opc_oem_ocbb_initialize {
+	u8 type_status;
+	u8 reserved1[3];
+	__le32 ocbb_memory_block_addr_high;
+	__le32 ocbb_memory_block_addr_low;
+	u8 reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_opc_oem_ocbb_initialize);
+
+/* debug commands */
+
+/* get device id (0xFF00) uses the generic structure */
+
+/* set test mode (0xFF01, internal) */
+
+struct avf_acq_set_test_mode {
+	u8	mode;
+#define AVF_AQ_TEST_PARTIAL	0
+#define AVF_AQ_TEST_FULL	1
+#define AVF_AQ_TEST_NVM	2
+	u8	reserved[3];
+	u8	command;
+#define AVF_AQ_TEST_OPEN	0
+#define AVF_AQ_TEST_CLOSE	1
+#define AVF_AQ_TEST_INC	2
+	u8	reserved2[3];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_acq_set_test_mode);
+
+/* Debug Read Register command (0xFF03)
+ * Debug Write Register command (0xFF04)
+ */
+struct avf_aqc_debug_reg_read_write {
+	__le32 reserved;
+	__le32 address;
+	__le32 value_high;
+	__le32 value_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_reg_read_write);
+
+/* Scatter/gather Reg Read  (indirect 0xFF05)
+ * Scatter/gather Reg Write (indirect 0xFF06)
+ */
+
+/* avf_aq_desc is used for the command */
+struct avf_aqc_debug_reg_sg_element_data {
+	__le32 address;
+	__le32 value;
+};
+
+/* Debug Modify register (direct 0xFF07) */
+struct avf_aqc_debug_modify_reg {
+	__le32 address;
+	__le32 value;
+	__le32 clear_mask;
+	__le32 set_mask;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_modify_reg);
+
+/* dump internal data (0xFF08, indirect) */
+
+#define AVF_AQ_CLUSTER_ID_AUX		0
+#define AVF_AQ_CLUSTER_ID_SWITCH_FLU	1
+#define AVF_AQ_CLUSTER_ID_TXSCHED	2
+#define AVF_AQ_CLUSTER_ID_HMC		3
+#define AVF_AQ_CLUSTER_ID_MAC0		4
+#define AVF_AQ_CLUSTER_ID_MAC1		5
+#define AVF_AQ_CLUSTER_ID_MAC2		6
+#define AVF_AQ_CLUSTER_ID_MAC3		7
+#define AVF_AQ_CLUSTER_ID_DCB		8
+#define AVF_AQ_CLUSTER_ID_EMP_MEM	9
+#define AVF_AQ_CLUSTER_ID_PKT_BUF	10
+#define AVF_AQ_CLUSTER_ID_ALTRAM	11
+
+struct avf_aqc_debug_dump_internals {
+	u8	cluster_id;
+	u8	table_id;
+	__le16	data_size;
+	__le32	idx;
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_dump_internals);
+
+struct avf_aqc_debug_modify_internals {
+	u8	cluster_id;
+	u8	cluster_specific_params[7];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_modify_internals);
+
+#endif /* _AVF_ADMINQ_CMD_H_ */
diff --git a/drivers/net/avf/base/avf_alloc.h b/drivers/net/avf/base/avf_alloc.h
new file mode 100644
index 0000000..21e29bd
--- /dev/null
+++ b/drivers/net/avf/base/avf_alloc.h
@@ -0,0 +1,65 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ALLOC_H_
+#define _AVF_ALLOC_H_
+
+struct avf_hw;
+
+/* Memory allocation types */
+enum avf_memory_type {
+	avf_mem_arq_buf = 0,		/* ARQ indirect command buffer */
+	avf_mem_asq_buf = 1,
+	avf_mem_atq_buf = 2,		/* ATQ indirect command buffer */
+	avf_mem_arq_ring = 3,		/* ARQ descriptor ring */
+	avf_mem_atq_ring = 4,		/* ATQ descriptor ring */
+	avf_mem_pd = 5,		/* Page Descriptor */
+	avf_mem_bp = 6,		/* Backing Page - 4KB */
+	avf_mem_bp_jumbo = 7,		/* Backing Page - > 4KB */
+	avf_mem_reserved
+};
+
+/* prototypes for functions used for dynamic memory allocation */
+enum avf_status_code avf_allocate_dma_mem(struct avf_hw *hw,
+					    struct avf_dma_mem *mem,
+					    enum avf_memory_type type,
+					    u64 size, u32 alignment);
+enum avf_status_code avf_free_dma_mem(struct avf_hw *hw,
+					struct avf_dma_mem *mem);
+enum avf_status_code avf_allocate_virt_mem(struct avf_hw *hw,
+					     struct avf_virt_mem *mem,
+					     u32 size);
+enum avf_status_code avf_free_virt_mem(struct avf_hw *hw,
+					 struct avf_virt_mem *mem);
+
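+/* Typical call sequence (illustrative sketch only; error handling and the
+ * actual use of the mapped buffer are omitted):
+ *
+ *	struct avf_dma_mem mem;
+ *
+ *	if (avf_allocate_dma_mem(hw, &mem, avf_mem_arq_buf, 4096, 4096))
+ *		return AVF_ERR_NO_MEMORY;
+ *	...
+ *	avf_free_dma_mem(hw, &mem);
+ */
+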
+#endif /* _AVF_ALLOC_H_ */
diff --git a/drivers/net/avf/base/avf_common.c b/drivers/net/avf/base/avf_common.c
new file mode 100644
index 0000000..d67297b
--- /dev/null
+++ b/drivers/net/avf/base/avf_common.c
@@ -0,0 +1,1843 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "avf_type.h"
+#include "avf_adminq.h"
+#include "avf_prototype.h"
+#include "virtchnl.h"
+
+
+/**
+ * avf_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the mac type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ **/
+enum avf_status_code avf_set_mac_type(struct avf_hw *hw)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+
+	DEBUGFUNC("avf_set_mac_type\n");
+
+	if (hw->vendor_id == AVF_INTEL_VENDOR_ID) {
+		switch (hw->device_id) {
+	/* TODO: undefined device IDs are removed for now; need to work
+	 * out how to remove them from the shared code as well.
+	 */
+		case AVF_DEV_ID_ADAPTIVE_VF:
+			hw->mac.type = AVF_MAC_VF;
+			break;
+		default:
+			hw->mac.type = AVF_MAC_GENERIC;
+			break;
+		}
+	} else {
+		status = AVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	DEBUGOUT2("avf_set_mac_type found mac: %d, returns: %d\n",
+		  hw->mac.type, status);
+	return status;
+}
+
+/**
+ * avf_aq_str - convert AQ err code to a string
+ * @hw: pointer to the HW structure
+ * @aq_err: the AQ error code to convert
+ **/
+const char *avf_aq_str(struct avf_hw *hw, enum avf_admin_queue_err aq_err)
+{
+	switch (aq_err) {
+	case AVF_AQ_RC_OK:
+		return "OK";
+	case AVF_AQ_RC_EPERM:
+		return "AVF_AQ_RC_EPERM";
+	case AVF_AQ_RC_ENOENT:
+		return "AVF_AQ_RC_ENOENT";
+	case AVF_AQ_RC_ESRCH:
+		return "AVF_AQ_RC_ESRCH";
+	case AVF_AQ_RC_EINTR:
+		return "AVF_AQ_RC_EINTR";
+	case AVF_AQ_RC_EIO:
+		return "AVF_AQ_RC_EIO";
+	case AVF_AQ_RC_ENXIO:
+		return "AVF_AQ_RC_ENXIO";
+	case AVF_AQ_RC_E2BIG:
+		return "AVF_AQ_RC_E2BIG";
+	case AVF_AQ_RC_EAGAIN:
+		return "AVF_AQ_RC_EAGAIN";
+	case AVF_AQ_RC_ENOMEM:
+		return "AVF_AQ_RC_ENOMEM";
+	case AVF_AQ_RC_EACCES:
+		return "AVF_AQ_RC_EACCES";
+	case AVF_AQ_RC_EFAULT:
+		return "AVF_AQ_RC_EFAULT";
+	case AVF_AQ_RC_EBUSY:
+		return "AVF_AQ_RC_EBUSY";
+	case AVF_AQ_RC_EEXIST:
+		return "AVF_AQ_RC_EEXIST";
+	case AVF_AQ_RC_EINVAL:
+		return "AVF_AQ_RC_EINVAL";
+	case AVF_AQ_RC_ENOTTY:
+		return "AVF_AQ_RC_ENOTTY";
+	case AVF_AQ_RC_ENOSPC:
+		return "AVF_AQ_RC_ENOSPC";
+	case AVF_AQ_RC_ENOSYS:
+		return "AVF_AQ_RC_ENOSYS";
+	case AVF_AQ_RC_ERANGE:
+		return "AVF_AQ_RC_ERANGE";
+	case AVF_AQ_RC_EFLUSHED:
+		return "AVF_AQ_RC_EFLUSHED";
+	case AVF_AQ_RC_BAD_ADDR:
+		return "AVF_AQ_RC_BAD_ADDR";
+	case AVF_AQ_RC_EMODE:
+		return "AVF_AQ_RC_EMODE";
+	case AVF_AQ_RC_EFBIG:
+		return "AVF_AQ_RC_EFBIG";
+	}
+
+	snprintf(hw->err_str, sizeof(hw->err_str), "%d", aq_err);
+	return hw->err_str;
+}
+
+/**
+ * avf_stat_str - convert status err code to a string
+ * @hw: pointer to the HW structure
+ * @stat_err: the status error code to convert
+ **/
+const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err)
+{
+	switch (stat_err) {
+	case AVF_SUCCESS:
+		return "OK";
+	case AVF_ERR_NVM:
+		return "AVF_ERR_NVM";
+	case AVF_ERR_NVM_CHECKSUM:
+		return "AVF_ERR_NVM_CHECKSUM";
+	case AVF_ERR_PHY:
+		return "AVF_ERR_PHY";
+	case AVF_ERR_CONFIG:
+		return "AVF_ERR_CONFIG";
+	case AVF_ERR_PARAM:
+		return "AVF_ERR_PARAM";
+	case AVF_ERR_MAC_TYPE:
+		return "AVF_ERR_MAC_TYPE";
+	case AVF_ERR_UNKNOWN_PHY:
+		return "AVF_ERR_UNKNOWN_PHY";
+	case AVF_ERR_LINK_SETUP:
+		return "AVF_ERR_LINK_SETUP";
+	case AVF_ERR_ADAPTER_STOPPED:
+		return "AVF_ERR_ADAPTER_STOPPED";
+	case AVF_ERR_INVALID_MAC_ADDR:
+		return "AVF_ERR_INVALID_MAC_ADDR";
+	case AVF_ERR_DEVICE_NOT_SUPPORTED:
+		return "AVF_ERR_DEVICE_NOT_SUPPORTED";
+	case AVF_ERR_MASTER_REQUESTS_PENDING:
+		return "AVF_ERR_MASTER_REQUESTS_PENDING";
+	case AVF_ERR_INVALID_LINK_SETTINGS:
+		return "AVF_ERR_INVALID_LINK_SETTINGS";
+	case AVF_ERR_AUTONEG_NOT_COMPLETE:
+		return "AVF_ERR_AUTONEG_NOT_COMPLETE";
+	case AVF_ERR_RESET_FAILED:
+		return "AVF_ERR_RESET_FAILED";
+	case AVF_ERR_SWFW_SYNC:
+		return "AVF_ERR_SWFW_SYNC";
+	case AVF_ERR_NO_AVAILABLE_VSI:
+		return "AVF_ERR_NO_AVAILABLE_VSI";
+	case AVF_ERR_NO_MEMORY:
+		return "AVF_ERR_NO_MEMORY";
+	case AVF_ERR_BAD_PTR:
+		return "AVF_ERR_BAD_PTR";
+	case AVF_ERR_RING_FULL:
+		return "AVF_ERR_RING_FULL";
+	case AVF_ERR_INVALID_PD_ID:
+		return "AVF_ERR_INVALID_PD_ID";
+	case AVF_ERR_INVALID_QP_ID:
+		return "AVF_ERR_INVALID_QP_ID";
+	case AVF_ERR_INVALID_CQ_ID:
+		return "AVF_ERR_INVALID_CQ_ID";
+	case AVF_ERR_INVALID_CEQ_ID:
+		return "AVF_ERR_INVALID_CEQ_ID";
+	case AVF_ERR_INVALID_AEQ_ID:
+		return "AVF_ERR_INVALID_AEQ_ID";
+	case AVF_ERR_INVALID_SIZE:
+		return "AVF_ERR_INVALID_SIZE";
+	case AVF_ERR_INVALID_ARP_INDEX:
+		return "AVF_ERR_INVALID_ARP_INDEX";
+	case AVF_ERR_INVALID_FPM_FUNC_ID:
+		return "AVF_ERR_INVALID_FPM_FUNC_ID";
+	case AVF_ERR_QP_INVALID_MSG_SIZE:
+		return "AVF_ERR_QP_INVALID_MSG_SIZE";
+	case AVF_ERR_QP_TOOMANY_WRS_POSTED:
+		return "AVF_ERR_QP_TOOMANY_WRS_POSTED";
+	case AVF_ERR_INVALID_FRAG_COUNT:
+		return "AVF_ERR_INVALID_FRAG_COUNT";
+	case AVF_ERR_QUEUE_EMPTY:
+		return "AVF_ERR_QUEUE_EMPTY";
+	case AVF_ERR_INVALID_ALIGNMENT:
+		return "AVF_ERR_INVALID_ALIGNMENT";
+	case AVF_ERR_FLUSHED_QUEUE:
+		return "AVF_ERR_FLUSHED_QUEUE";
+	case AVF_ERR_INVALID_PUSH_PAGE_INDEX:
+		return "AVF_ERR_INVALID_PUSH_PAGE_INDEX";
+	case AVF_ERR_INVALID_IMM_DATA_SIZE:
+		return "AVF_ERR_INVALID_IMM_DATA_SIZE";
+	case AVF_ERR_TIMEOUT:
+		return "AVF_ERR_TIMEOUT";
+	case AVF_ERR_OPCODE_MISMATCH:
+		return "AVF_ERR_OPCODE_MISMATCH";
+	case AVF_ERR_CQP_COMPL_ERROR:
+		return "AVF_ERR_CQP_COMPL_ERROR";
+	case AVF_ERR_INVALID_VF_ID:
+		return "AVF_ERR_INVALID_VF_ID";
+	case AVF_ERR_INVALID_HMCFN_ID:
+		return "AVF_ERR_INVALID_HMCFN_ID";
+	case AVF_ERR_BACKING_PAGE_ERROR:
+		return "AVF_ERR_BACKING_PAGE_ERROR";
+	case AVF_ERR_NO_PBLCHUNKS_AVAILABLE:
+		return "AVF_ERR_NO_PBLCHUNKS_AVAILABLE";
+	case AVF_ERR_INVALID_PBLE_INDEX:
+		return "AVF_ERR_INVALID_PBLE_INDEX";
+	case AVF_ERR_INVALID_SD_INDEX:
+		return "AVF_ERR_INVALID_SD_INDEX";
+	case AVF_ERR_INVALID_PAGE_DESC_INDEX:
+		return "AVF_ERR_INVALID_PAGE_DESC_INDEX";
+	case AVF_ERR_INVALID_SD_TYPE:
+		return "AVF_ERR_INVALID_SD_TYPE";
+	case AVF_ERR_MEMCPY_FAILED:
+		return "AVF_ERR_MEMCPY_FAILED";
+	case AVF_ERR_INVALID_HMC_OBJ_INDEX:
+		return "AVF_ERR_INVALID_HMC_OBJ_INDEX";
+	case AVF_ERR_INVALID_HMC_OBJ_COUNT:
+		return "AVF_ERR_INVALID_HMC_OBJ_COUNT";
+	case AVF_ERR_INVALID_SRQ_ARM_LIMIT:
+		return "AVF_ERR_INVALID_SRQ_ARM_LIMIT";
+	case AVF_ERR_SRQ_ENABLED:
+		return "AVF_ERR_SRQ_ENABLED";
+	case AVF_ERR_ADMIN_QUEUE_ERROR:
+		return "AVF_ERR_ADMIN_QUEUE_ERROR";
+	case AVF_ERR_ADMIN_QUEUE_TIMEOUT:
+		return "AVF_ERR_ADMIN_QUEUE_TIMEOUT";
+	case AVF_ERR_BUF_TOO_SHORT:
+		return "AVF_ERR_BUF_TOO_SHORT";
+	case AVF_ERR_ADMIN_QUEUE_FULL:
+		return "AVF_ERR_ADMIN_QUEUE_FULL";
+	case AVF_ERR_ADMIN_QUEUE_NO_WORK:
+		return "AVF_ERR_ADMIN_QUEUE_NO_WORK";
+	case AVF_ERR_BAD_IWARP_CQE:
+		return "AVF_ERR_BAD_IWARP_CQE";
+	case AVF_ERR_NVM_BLANK_MODE:
+		return "AVF_ERR_NVM_BLANK_MODE";
+	case AVF_ERR_NOT_IMPLEMENTED:
+		return "AVF_ERR_NOT_IMPLEMENTED";
+	case AVF_ERR_PE_DOORBELL_NOT_ENABLED:
+		return "AVF_ERR_PE_DOORBELL_NOT_ENABLED";
+	case AVF_ERR_DIAG_TEST_FAILED:
+		return "AVF_ERR_DIAG_TEST_FAILED";
+	case AVF_ERR_NOT_READY:
+		return "AVF_ERR_NOT_READY";
+	case AVF_NOT_SUPPORTED:
+		return "AVF_NOT_SUPPORTED";
+	case AVF_ERR_FIRMWARE_API_VERSION:
+		return "AVF_ERR_FIRMWARE_API_VERSION";
+	}
+
+	snprintf(hw->err_str, sizeof(hw->err_str), "%d", stat_err);
+	return hw->err_str;
+}
+
+/**
+ * avf_debug_aq
+ * @hw: pointer to the hw struct
+ * @mask: debug mask
+ * @desc: pointer to admin queue descriptor
+ * @buffer: pointer to command buffer
+ * @buf_len: max length of buffer
+ *
+ * Dumps debug log about adminq command with descriptor contents.
+ **/
+void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask, void *desc,
+		   void *buffer, u16 buf_len)
+{
+	struct avf_aq_desc *aq_desc = (struct avf_aq_desc *)desc;
+	u8 *buf = (u8 *)buffer;
+	u16 len;
+	u16 i = 0;
+
+	if ((!(mask & hw->debug_mask)) || (desc == NULL))
+		return;
+
+	len = LE16_TO_CPU(aq_desc->datalen);
+
+	avf_debug(hw, mask,
+		   "AQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+		   LE16_TO_CPU(aq_desc->opcode),
+		   LE16_TO_CPU(aq_desc->flags),
+		   LE16_TO_CPU(aq_desc->datalen),
+		   LE16_TO_CPU(aq_desc->retval));
+	avf_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->cookie_high),
+		   LE32_TO_CPU(aq_desc->cookie_low));
+	avf_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->params.internal.param0),
+		   LE32_TO_CPU(aq_desc->params.internal.param1));
+	avf_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->params.external.addr_high),
+		   LE32_TO_CPU(aq_desc->params.external.addr_low));
+
+	if ((buffer != NULL) && (aq_desc->datalen != 0)) {
+		avf_debug(hw, mask, "AQ CMD Buffer:\n");
+		if (buf_len < len)
+			len = buf_len;
+		/* write the full 16-byte chunks */
+		for (i = 0; i < (len - 16); i += 16)
+			avf_debug(hw, mask,
+				   "\t0x%04X  %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
+				   i, buf[i], buf[i+1], buf[i+2], buf[i+3],
+				   buf[i+4], buf[i+5], buf[i+6], buf[i+7],
+				   buf[i+8], buf[i+9], buf[i+10], buf[i+11],
+				   buf[i+12], buf[i+13], buf[i+14], buf[i+15]);
+		/* the most we could have left is 16 bytes, pad with zeros */
+		if (i < len) {
+			char d_buf[16];
+			int j, i_sav;
+
+			i_sav = i;
+			memset(d_buf, 0, sizeof(d_buf));
+			for (j = 0; i < len; j++, i++)
+				d_buf[j] = buf[i];
+			avf_debug(hw, mask,
+				   "\t0x%04X  %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
+				   i_sav, d_buf[0], d_buf[1], d_buf[2], d_buf[3],
+				   d_buf[4], d_buf[5], d_buf[6], d_buf[7],
+				   d_buf[8], d_buf[9], d_buf[10], d_buf[11],
+				   d_buf[12], d_buf[13], d_buf[14], d_buf[15]);
+		}
+	}
+}
+
+/**
+ * avf_check_asq_alive
+ * @hw: pointer to the hw struct
+ *
+ * Returns true if the queue is enabled, else false.
+ **/
+bool avf_check_asq_alive(struct avf_hw *hw)
+{
+	if (hw->aq.asq.len)
+#ifdef INTEGRATED_VF
+		if (avf_is_vf(hw))
+			return !!(rd32(hw, hw->aq.asq.len) &
+				AVF_ATQLEN1_ATQENABLE_MASK);
+#else
+		return !!(rd32(hw, hw->aq.asq.len) &
+			AVF_ATQLEN1_ATQENABLE_MASK);
+#endif /* INTEGRATED_VF */
+	return false;
+}
+
+/**
+ * avf_aq_queue_shutdown
+ * @hw: pointer to the hw struct
+ * @unloading: is the driver unloading itself
+ *
+ * Tell the Firmware that we're shutting down the AdminQ and whether
+ * or not the driver is unloading as well.
+ **/
+enum avf_status_code avf_aq_queue_shutdown(struct avf_hw *hw,
+					     bool unloading)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_queue_shutdown *cmd =
+		(struct avf_aqc_queue_shutdown *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_queue_shutdown);
+
+	if (unloading)
+		cmd->driver_unloading = CPU_TO_LE32(AVF_AQ_DRIVER_UNLOADING);
+	status = avf_asq_send_command(hw, &desc, NULL, 0, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get or set the RSS lookup table
+ **/
+STATIC enum avf_status_code avf_aq_get_set_rss_lut(struct avf_hw *hw,
+						     u16 vsi_id, bool pf_lut,
+						     u8 *lut, u16 lut_size,
+						     bool set)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_set_rss_lut *cmd_resp =
+		   (struct avf_aqc_get_set_rss_lut *)&desc.params.raw;
+
+	if (set)
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_set_rss_lut);
+	else
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_get_rss_lut);
+
+	/* Indirect command */
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd_resp->vsi_id =
+			CPU_TO_LE16((u16)((vsi_id <<
+					  AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT) &
+					  AVF_AQC_SET_RSS_LUT_VSI_ID_MASK));
+	cmd_resp->vsi_id |= CPU_TO_LE16((u16)AVF_AQC_SET_RSS_LUT_VSI_VALID);
+
+	if (pf_lut)
+		cmd_resp->flags |= CPU_TO_LE16((u16)
+					((AVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF <<
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
+	else
+		cmd_resp->flags |= CPU_TO_LE16((u16)
+					((AVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI <<
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
+
+	status = avf_asq_send_command(hw, &desc, lut, lut_size, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * get the RSS lookup table, PF or VSI type
+ **/
+enum avf_status_code avf_aq_get_rss_lut(struct avf_hw *hw, u16 vsi_id,
+					  bool pf_lut, u8 *lut, u16 lut_size)
+{
+	return avf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size,
+				       false);
+}
+
+/**
+ * avf_aq_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * set the RSS lookup table, PF or VSI type
+ **/
+enum avf_status_code avf_aq_set_rss_lut(struct avf_hw *hw, u16 vsi_id,
+					  bool pf_lut, u8 *lut, u16 lut_size)
+{
+	return avf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
+}
+
+/**
+ * avf_aq_get_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
+ *
+ * get or set the RSS key per VSI
+ **/
+STATIC enum avf_status_code avf_aq_get_set_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key,
+				      bool set)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_set_rss_key *cmd_resp =
+			(struct avf_aqc_get_set_rss_key *)&desc.params.raw;
+	u16 key_size = sizeof(struct avf_aqc_get_set_rss_key_data);
+
+	if (set)
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_set_rss_key);
+	else
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_get_rss_key);
+
+	/* Indirect command */
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd_resp->vsi_id =
+			CPU_TO_LE16((u16)((vsi_id <<
+					  AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT) &
+					  AVF_AQC_SET_RSS_KEY_VSI_ID_MASK));
+	cmd_resp->vsi_id |= CPU_TO_LE16((u16)AVF_AQC_SET_RSS_KEY_VSI_VALID);
+
+	status = avf_asq_send_command(hw, &desc, key, key_size, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ **/
+enum avf_status_code avf_aq_get_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key)
+{
+	return avf_aq_get_set_rss_key(hw, vsi_id, key, false);
+}
+
+/**
+ * avf_aq_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * set the RSS key per VSI
+ **/
+enum avf_status_code avf_aq_set_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key)
+{
+	return avf_aq_get_set_rss_key(hw, vsi_id, key, true);
+}
+
+/* The avf_ptype_lookup table is used to convert from the 8-bit ptype in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ *
+ * Typical work flow:
+ *
+ * IF NOT avf_ptype_lookup[ptype].known
+ * THEN
+ *      Packet is unknown
+ * ELSE IF avf_ptype_lookup[ptype].outer_ip == AVF_RX_PTYPE_OUTER_IP
+ *      Use the rest of the fields to look at the tunnels, inner protocols, etc
+ * ELSE
+ *      Use the enum avf_rx_l2_ptype to decode the packet type
+ * ENDIF
+ */
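+
+/* A minimal decode sketch following the flow above (illustrative only;
+ * decode_rx_desc_ptype() and the decoded field names are assumed from the
+ * avf_type.h definitions initialised by the AVF_PTT macro below, and use()
+ * is just a placeholder for driver-specific handling):
+ *
+ *	struct avf_rx_ptype_decoded decoded = decode_rx_desc_ptype(ptype);
+ *
+ *	if (!decoded.known) {
+ *		// packet type is unknown
+ *	} else if (decoded.outer_ip == AVF_RX_PTYPE_OUTER_IP) {
+ *		// look at tunnels, inner protocols, etc.
+ *		use(decoded.tunnel_type, decoded.inner_prot);
+ *	} else {
+ *		// use enum avf_rx_l2_ptype to decode
+ *		use(decoded.outer_ip_ver);
+ *	}
+ */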
+
+/* macro to make the table lines short */
+#define AVF_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
+	{	PTYPE, \
+		1, \
+		AVF_RX_PTYPE_OUTER_##OUTER_IP, \
+		AVF_RX_PTYPE_OUTER_##OUTER_IP_VER, \
+		AVF_RX_PTYPE_##OUTER_FRAG, \
+		AVF_RX_PTYPE_TUNNEL_##T, \
+		AVF_RX_PTYPE_TUNNEL_END_##TE, \
+		AVF_RX_PTYPE_##TEF, \
+		AVF_RX_PTYPE_INNER_PROT_##I, \
+		AVF_RX_PTYPE_PAYLOAD_LAYER_##PL }
+
+#define AVF_PTT_UNUSED_ENTRY(PTYPE) \
+		{ PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
+
+/* shorter macros make the table fit but are terse */
+#define AVF_RX_PTYPE_NOF		AVF_RX_PTYPE_NOT_FRAG
+#define AVF_RX_PTYPE_FRG		AVF_RX_PTYPE_FRAG
+#define AVF_RX_PTYPE_INNER_PROT_TS	AVF_RX_PTYPE_INNER_PROT_TIMESYNC
+
+/* Lookup table mapping the HW PTYPE to the bit field for decoding */
+struct avf_rx_ptype_decoded avf_ptype_lookup[] = {
+	/* L2 Packet types */
+	AVF_PTT_UNUSED_ENTRY(0),
+	AVF_PTT(1,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(2,  L2, NONE, NOF, NONE, NONE, NOF, TS,   PAY2),
+	AVF_PTT(3,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT_UNUSED_ENTRY(4),
+	AVF_PTT_UNUSED_ENTRY(5),
+	AVF_PTT(6,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(7,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT_UNUSED_ENTRY(8),
+	AVF_PTT_UNUSED_ENTRY(9),
+	AVF_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	AVF_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+
+	/* Non Tunneled IPv4 */
+	AVF_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(25),
+	AVF_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	AVF_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	AVF_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv4 */
+	AVF_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(32),
+	AVF_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv6 */
+	AVF_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(39),
+	AVF_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT */
+	AVF_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> IPv4 */
+	AVF_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(47),
+	AVF_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> IPv6 */
+	AVF_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(54),
+	AVF_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC */
+	AVF_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> MAC --> IPv4 */
+	AVF_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(62),
+	AVF_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT -> MAC --> IPv6 */
+	AVF_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(69),
+	AVF_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN */
+	AVF_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */
+	AVF_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(77),
+	AVF_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */
+	AVF_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(84),
+	AVF_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* Non Tunneled IPv6 */
+	AVF_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(91),
+	AVF_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	AVF_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	AVF_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv4 */
+	AVF_PTT(95,  IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(96,  IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(97,  IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(98),
+	AVF_PTT(99,  IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv6 */
+	AVF_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(105),
+	AVF_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT */
+	AVF_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> IPv4 */
+	AVF_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(113),
+	AVF_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> IPv6 */
+	AVF_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(120),
+	AVF_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC */
+	AVF_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv4 */
+	AVF_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(128),
+	AVF_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv6 */
+	AVF_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(135),
+	AVF_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN */
+	AVF_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */
+	AVF_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(143),
+	AVF_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */
+	AVF_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(150),
+	AVF_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* unused entries */
+	AVF_PTT_UNUSED_ENTRY(154),
+	AVF_PTT_UNUSED_ENTRY(155),
+	AVF_PTT_UNUSED_ENTRY(156),
+	AVF_PTT_UNUSED_ENTRY(157),
+	AVF_PTT_UNUSED_ENTRY(158),
+	AVF_PTT_UNUSED_ENTRY(159),
+
+	AVF_PTT_UNUSED_ENTRY(160),
+	AVF_PTT_UNUSED_ENTRY(161),
+	AVF_PTT_UNUSED_ENTRY(162),
+	AVF_PTT_UNUSED_ENTRY(163),
+	AVF_PTT_UNUSED_ENTRY(164),
+	AVF_PTT_UNUSED_ENTRY(165),
+	AVF_PTT_UNUSED_ENTRY(166),
+	AVF_PTT_UNUSED_ENTRY(167),
+	AVF_PTT_UNUSED_ENTRY(168),
+	AVF_PTT_UNUSED_ENTRY(169),
+
+	AVF_PTT_UNUSED_ENTRY(170),
+	AVF_PTT_UNUSED_ENTRY(171),
+	AVF_PTT_UNUSED_ENTRY(172),
+	AVF_PTT_UNUSED_ENTRY(173),
+	AVF_PTT_UNUSED_ENTRY(174),
+	AVF_PTT_UNUSED_ENTRY(175),
+	AVF_PTT_UNUSED_ENTRY(176),
+	AVF_PTT_UNUSED_ENTRY(177),
+	AVF_PTT_UNUSED_ENTRY(178),
+	AVF_PTT_UNUSED_ENTRY(179),
+
+	AVF_PTT_UNUSED_ENTRY(180),
+	AVF_PTT_UNUSED_ENTRY(181),
+	AVF_PTT_UNUSED_ENTRY(182),
+	AVF_PTT_UNUSED_ENTRY(183),
+	AVF_PTT_UNUSED_ENTRY(184),
+	AVF_PTT_UNUSED_ENTRY(185),
+	AVF_PTT_UNUSED_ENTRY(186),
+	AVF_PTT_UNUSED_ENTRY(187),
+	AVF_PTT_UNUSED_ENTRY(188),
+	AVF_PTT_UNUSED_ENTRY(189),
+
+	AVF_PTT_UNUSED_ENTRY(190),
+	AVF_PTT_UNUSED_ENTRY(191),
+	AVF_PTT_UNUSED_ENTRY(192),
+	AVF_PTT_UNUSED_ENTRY(193),
+	AVF_PTT_UNUSED_ENTRY(194),
+	AVF_PTT_UNUSED_ENTRY(195),
+	AVF_PTT_UNUSED_ENTRY(196),
+	AVF_PTT_UNUSED_ENTRY(197),
+	AVF_PTT_UNUSED_ENTRY(198),
+	AVF_PTT_UNUSED_ENTRY(199),
+
+	AVF_PTT_UNUSED_ENTRY(200),
+	AVF_PTT_UNUSED_ENTRY(201),
+	AVF_PTT_UNUSED_ENTRY(202),
+	AVF_PTT_UNUSED_ENTRY(203),
+	AVF_PTT_UNUSED_ENTRY(204),
+	AVF_PTT_UNUSED_ENTRY(205),
+	AVF_PTT_UNUSED_ENTRY(206),
+	AVF_PTT_UNUSED_ENTRY(207),
+	AVF_PTT_UNUSED_ENTRY(208),
+	AVF_PTT_UNUSED_ENTRY(209),
+
+	AVF_PTT_UNUSED_ENTRY(210),
+	AVF_PTT_UNUSED_ENTRY(211),
+	AVF_PTT_UNUSED_ENTRY(212),
+	AVF_PTT_UNUSED_ENTRY(213),
+	AVF_PTT_UNUSED_ENTRY(214),
+	AVF_PTT_UNUSED_ENTRY(215),
+	AVF_PTT_UNUSED_ENTRY(216),
+	AVF_PTT_UNUSED_ENTRY(217),
+	AVF_PTT_UNUSED_ENTRY(218),
+	AVF_PTT_UNUSED_ENTRY(219),
+
+	AVF_PTT_UNUSED_ENTRY(220),
+	AVF_PTT_UNUSED_ENTRY(221),
+	AVF_PTT_UNUSED_ENTRY(222),
+	AVF_PTT_UNUSED_ENTRY(223),
+	AVF_PTT_UNUSED_ENTRY(224),
+	AVF_PTT_UNUSED_ENTRY(225),
+	AVF_PTT_UNUSED_ENTRY(226),
+	AVF_PTT_UNUSED_ENTRY(227),
+	AVF_PTT_UNUSED_ENTRY(228),
+	AVF_PTT_UNUSED_ENTRY(229),
+
+	AVF_PTT_UNUSED_ENTRY(230),
+	AVF_PTT_UNUSED_ENTRY(231),
+	AVF_PTT_UNUSED_ENTRY(232),
+	AVF_PTT_UNUSED_ENTRY(233),
+	AVF_PTT_UNUSED_ENTRY(234),
+	AVF_PTT_UNUSED_ENTRY(235),
+	AVF_PTT_UNUSED_ENTRY(236),
+	AVF_PTT_UNUSED_ENTRY(237),
+	AVF_PTT_UNUSED_ENTRY(238),
+	AVF_PTT_UNUSED_ENTRY(239),
+
+	AVF_PTT_UNUSED_ENTRY(240),
+	AVF_PTT_UNUSED_ENTRY(241),
+	AVF_PTT_UNUSED_ENTRY(242),
+	AVF_PTT_UNUSED_ENTRY(243),
+	AVF_PTT_UNUSED_ENTRY(244),
+	AVF_PTT_UNUSED_ENTRY(245),
+	AVF_PTT_UNUSED_ENTRY(246),
+	AVF_PTT_UNUSED_ENTRY(247),
+	AVF_PTT_UNUSED_ENTRY(248),
+	AVF_PTT_UNUSED_ENTRY(249),
+
+	AVF_PTT_UNUSED_ENTRY(250),
+	AVF_PTT_UNUSED_ENTRY(251),
+	AVF_PTT_UNUSED_ENTRY(252),
+	AVF_PTT_UNUSED_ENTRY(253),
+	AVF_PTT_UNUSED_ENTRY(254),
+	AVF_PTT_UNUSED_ENTRY(255)
+};
+
+
+/**
+ * avf_validate_mac_addr - Validate unicast MAC address
+ * @mac_addr: pointer to MAC address
+ *
+ * Tests a MAC address to ensure it is a valid Individual Address
+ **/
+enum avf_status_code avf_validate_mac_addr(u8 *mac_addr)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+
+	DEBUGFUNC("avf_validate_mac_addr");
+
+	/* Broadcast addresses ARE multicast addresses
+	 * Make sure it is not a multicast address
+	 * Reject the zero address
+	 */
+	if (AVF_IS_MULTICAST(mac_addr) ||
+	    (mac_addr[0] == 0 && mac_addr[1] == 0 && mac_addr[2] == 0 &&
+	      mac_addr[3] == 0 && mac_addr[4] == 0 && mac_addr[5] == 0))
+		status = AVF_ERR_INVALID_MAC_ADDR;
+
+	return status;
+}
+
+/**
+ * avf_aq_rx_ctl_read_register - use FW to read from an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: ptr to register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Use the firmware to read the Rx control register,
+ * especially useful if the Rx unit is under heavy pressure
+ **/
+enum avf_status_code avf_aq_rx_ctl_read_register(struct avf_hw *hw,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_rx_ctl_reg_read_write *cmd_resp =
+		(struct avf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
+	enum avf_status_code status;
+
+	if (reg_val == NULL)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_rx_ctl_reg_read);
+
+	cmd_resp->address = CPU_TO_LE32(reg_addr);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	if (status == AVF_SUCCESS)
+		*reg_val = LE32_TO_CPU(cmd_resp->value);
+
+	return status;
+}
+
+/**
+ * avf_read_rx_ctl - read from an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ **/
+u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	bool use_register;
+	int retry = 5;
+	u32 val = 0;
+
+	use_register = (((hw->aq.api_maj_ver == 1) &&
+			(hw->aq.api_min_ver < 5)) ||
+			(hw->mac.type == AVF_MAC_X722));
+	if (!use_register) {
+do_retry:
+		status = avf_aq_rx_ctl_read_register(hw, reg_addr, &val, NULL);
+		if (hw->aq.asq_last_status == AVF_AQ_RC_EAGAIN && retry) {
+			avf_msec_delay(1);
+			retry--;
+			goto do_retry;
+		}
+	}
+
+	/* if the AQ access failed, try the old-fashioned way */
+	if (status || use_register)
+		val = rd32(hw, reg_addr);
+
+	return val;
+}
+
+/**
+ * avf_aq_rx_ctl_write_register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Use the firmware to write to an Rx control register,
+ * especially useful if the Rx unit is under heavy pressure
+ **/
+enum avf_status_code avf_aq_rx_ctl_write_register(struct avf_hw *hw,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_rx_ctl_reg_read_write *cmd =
+		(struct avf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_rx_ctl_reg_write);
+
+	cmd->address = CPU_TO_LE32(reg_addr);
+	cmd->value = CPU_TO_LE32(reg_val);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_write_rx_ctl - write to an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: register value
+ **/
+void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	bool use_register;
+	int retry = 5;
+
+	use_register = (((hw->aq.api_maj_ver == 1) &&
+			(hw->aq.api_min_ver < 5)) ||
+			(hw->mac.type == AVF_MAC_X722));
+	if (!use_register) {
+do_retry:
+		status = avf_aq_rx_ctl_write_register(hw, reg_addr,
+						       reg_val, NULL);
+		if (hw->aq.asq_last_status == AVF_AQ_RC_EAGAIN && retry) {
+			avf_msec_delay(1);
+			retry--;
+			goto do_retry;
+		}
+	}
+
+	/* if the AQ access failed, try the old-fashioned way */
+	if (status || use_register)
+		wr32(hw, reg_addr, reg_val);
+}
+
+/**
+ * avf_aq_set_phy_register
+ * @hw: pointer to the hw struct
+ * @phy_select: select which phy should be accessed
+ * @dev_addr: PHY device address
+ * @reg_addr: PHY register address
+ * @reg_val: new register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Write the external PHY register.
+ **/
+enum avf_status_code avf_aq_set_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_phy_register_access *cmd =
+		(struct avf_aqc_phy_register_access *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_set_phy_register);
+
+	cmd->phy_interface = phy_select;
+	cmd->dev_addres = dev_addr;
+	cmd->reg_address = reg_addr;
+	cmd->reg_value = reg_val;
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_phy_register
+ * @hw: pointer to the hw struct
+ * @phy_select: select which phy should be accessed
+ * @dev_addr: PHY device address
+ * @reg_addr: PHY register address
+ * @reg_val: read register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Read the external PHY register.
+ **/
+enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_phy_register_access *cmd =
+		(struct avf_aqc_phy_register_access *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_get_phy_register);
+
+	cmd->phy_interface = phy_select;
+	cmd->dev_addres = dev_addr;
+	cmd->reg_address = reg_addr;
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+	if (!status)
+		*reg_val = cmd->reg_value;
+
+	return status;
+}
+
+
+/**
+ * avf_aq_send_msg_to_pf
+ * @hw: pointer to the hardware structure
+ * @v_opcode: opcodes for VF-PF communication
+ * @v_retval: return error code
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ * @cmd_details: pointer to command details
+ *
+ * Send message to PF driver using admin queue. By default, this message
+ * is sent asynchronously, i.e. avf_asq_send_command() does not wait for
+ * completion before returning.
+ **/
+enum avf_status_code avf_aq_send_msg_to_pf(struct avf_hw *hw,
+				enum virtchnl_ops v_opcode,
+				enum avf_status_code v_retval,
+				u8 *msg, u16 msglen,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_asq_cmd_details details;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_send_msg_to_pf);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_SI);
+	desc.cookie_high = CPU_TO_LE32(v_opcode);
+	desc.cookie_low = CPU_TO_LE32(v_retval);
+	if (msglen) {
+		desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF
+						| AVF_AQ_FLAG_RD));
+		if (msglen > AVF_AQ_LARGE_BUF)
+			desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+		desc.datalen = CPU_TO_LE16(msglen);
+	}
+	if (!cmd_details) {
+		avf_memset(&details, 0, sizeof(details), AVF_NONDMA_MEM);
+		details.async = true;
+		cmd_details = &details;
+	}
+	status = avf_asq_send_command(hw, (struct avf_aq_desc *)&desc, msg,
+				       msglen, cmd_details);
+	return status;
+}
+
+/**
+ * avf_parse_hw_config
+ * @hw: pointer to the hardware structure
+ * @msg: pointer to the virtual channel VF resource structure
+ *
+ * Given a VF resource message from the PF, populate the hw struct
+ * with appropriate information.
+ **/
+void avf_parse_hw_config(struct avf_hw *hw,
+			     struct virtchnl_vf_resource *msg)
+{
+	struct virtchnl_vsi_resource *vsi_res;
+	int i;
+
+	vsi_res = &msg->vsi_res[0];
+
+	hw->dev_caps.num_vsis = msg->num_vsis;
+	hw->dev_caps.num_rx_qp = msg->num_queue_pairs;
+	hw->dev_caps.num_tx_qp = msg->num_queue_pairs;
+	hw->dev_caps.num_msix_vectors_vf = msg->max_vectors;
+	hw->dev_caps.dcb = msg->vf_offload_flags &
+			   VIRTCHNL_VF_OFFLOAD_L2;
+	hw->dev_caps.iwarp = (msg->vf_offload_flags &
+			      VIRTCHNL_VF_OFFLOAD_IWARP) ? 1 : 0;
+	for (i = 0; i < msg->num_vsis; i++) {
+		if (vsi_res->vsi_type == VIRTCHNL_VSI_SRIOV) {
+			avf_memcpy(hw->mac.perm_addr,
+				    vsi_res->default_mac_addr,
+				    ETH_ALEN,
+				    AVF_NONDMA_TO_NONDMA);
+			avf_memcpy(hw->mac.addr, vsi_res->default_mac_addr,
+				    ETH_ALEN,
+				    AVF_NONDMA_TO_NONDMA);
+		}
+		vsi_res++;
+	}
+}
+
+/**
+ * avf_reset
+ * @hw: pointer to the hardware structure
+ *
+ * Send a VF_RESET message to the PF. Does not wait for response from PF
+ * as none will be forthcoming. Immediately after calling this function,
+ * the admin queue should be shut down and (optionally) reinitialized.
+ **/
+enum avf_status_code avf_reset(struct avf_hw *hw)
+{
+	return avf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_RESET_VF,
+				      AVF_SUCCESS, NULL, 0, NULL);
+}
+
+/**
+ * avf_aq_set_arp_proxy_config
+ * @hw: pointer to the HW structure
+ * @proxy_config: pointer to proxy config command table struct
+ * @cmd_details: pointer to command details
+ *
+ * Set ARP offload parameters from pre-populated
+ * avf_aqc_arp_proxy_data struct
+ **/
+enum avf_status_code avf_aq_set_arp_proxy_config(struct avf_hw *hw,
+				struct avf_aqc_arp_proxy_data *proxy_config,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	if (!proxy_config)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_set_proxy_config);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+	desc.params.external.addr_high =
+				  CPU_TO_LE32(AVF_HI_DWORD((u64)proxy_config));
+	desc.params.external.addr_low =
+				  CPU_TO_LE32(AVF_LO_DWORD((u64)proxy_config));
+	desc.datalen = CPU_TO_LE16(sizeof(struct avf_aqc_arp_proxy_data));
+
+	status = avf_asq_send_command(hw, &desc, proxy_config,
+				       sizeof(struct avf_aqc_arp_proxy_data),
+				       cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_set_ns_proxy_table_entry
+ * @hw: pointer to the HW structure
+ * @ns_proxy_table_entry: pointer to NS table entry command struct
+ * @cmd_details: pointer to command details
+ *
+ * Set IPv6 Neighbor Solicitation (NS) protocol offload parameters
+ * from pre-populated avf_aqc_ns_proxy_data struct
+ **/
+enum avf_status_code avf_aq_set_ns_proxy_table_entry(struct avf_hw *hw,
+			struct avf_aqc_ns_proxy_data *ns_proxy_table_entry,
+			struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	if (!ns_proxy_table_entry)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+				avf_aqc_opc_set_ns_proxy_table_entry);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+	desc.params.external.addr_high =
+		CPU_TO_LE32(AVF_HI_DWORD((u64)ns_proxy_table_entry));
+	desc.params.external.addr_low =
+		CPU_TO_LE32(AVF_LO_DWORD((u64)ns_proxy_table_entry));
+	desc.datalen = CPU_TO_LE16(sizeof(struct avf_aqc_ns_proxy_data));
+
+	status = avf_asq_send_command(hw, &desc, ns_proxy_table_entry,
+				       sizeof(struct avf_aqc_ns_proxy_data),
+				       cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_set_clear_wol_filter
+ * @hw: pointer to the hw struct
+ * @filter_index: index of filter to modify (0-7)
+ * @filter: buffer containing filter to be set
+ * @set_filter: true to set filter, false to clear filter
+ * @no_wol_tco: if true, pass through packets cannot cause wake-up
+ *		if false, pass through packets may cause wake-up
+ * @filter_valid: true if filter action is valid
+ * @no_wol_tco_valid: true if no WoL in TCO traffic action valid
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Set or clear WoL filter for port attached to the PF
+ **/
+enum avf_status_code avf_aq_set_clear_wol_filter(struct avf_hw *hw,
+				u8 filter_index,
+				struct avf_aqc_set_wol_filter_data *filter,
+				bool set_filter, bool no_wol_tco,
+				bool filter_valid, bool no_wol_tco_valid,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_set_wol_filter *cmd =
+		(struct avf_aqc_set_wol_filter *)&desc.params.raw;
+	enum avf_status_code status;
+	u16 cmd_flags = 0;
+	u16 valid_flags = 0;
+	u16 buff_len = 0;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_set_wol_filter);
+
+	if (filter_index >= AVF_AQC_MAX_NUM_WOL_FILTERS)
+		return  AVF_ERR_PARAM;
+	cmd->filter_index = CPU_TO_LE16(filter_index);
+
+	if (set_filter) {
+		if (!filter)
+			return  AVF_ERR_PARAM;
+
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER;
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR;
+	}
+
+	if (no_wol_tco)
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER_NO_TCO_WOL;
+	cmd->cmd_flags = CPU_TO_LE16(cmd_flags);
+
+	if (filter_valid)
+		valid_flags |= AVF_AQC_SET_WOL_FILTER_ACTION_VALID;
+	if (no_wol_tco_valid)
+		valid_flags |= AVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID;
+	cmd->valid_flags = CPU_TO_LE16(valid_flags);
+
+	buff_len = sizeof(*filter);
+	desc.datalen = CPU_TO_LE16(buff_len);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd->address_high = CPU_TO_LE32(AVF_HI_DWORD((u64)filter));
+	cmd->address_low = CPU_TO_LE32(AVF_LO_DWORD((u64)filter));
+
+	status = avf_asq_send_command(hw, &desc, filter,
+				       buff_len, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_wake_event_reason
+ * @hw: pointer to the hw struct
+ * @wake_reason: return value, index of matching filter
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Get information for the reason of a Wake Up event
+ **/
+enum avf_status_code avf_aq_get_wake_event_reason(struct avf_hw *hw,
+				u16 *wake_reason,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_wake_reason_completion *resp =
+		(struct avf_aqc_get_wake_reason_completion *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_get_wake_reason);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	if (status == AVF_SUCCESS)
+		*wake_reason = LE16_TO_CPU(resp->wake_reason);
+
+	return status;
+}
+
+/**
+ * avf_aq_clear_all_wol_filters
+ * @hw: pointer to the hw struct
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Clear all Wake on LAN (WoL) filters for the port attached to the PF
+ **/
+enum avf_status_code avf_aq_clear_all_wol_filters(struct avf_hw *hw,
+	struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_clear_all_wol_filters);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+
+/**
+ * avf_aq_write_ddp - Write dynamic device personalization (ddp)
+ * @hw: pointer to the hw struct
+ * @buff: command buffer (size in bytes = buff_size)
+ * @buff_size: buffer size in bytes
+ * @track_id: package tracking id
+ * @error_offset: returns error offset
+ * @error_info: returns error information
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum
+avf_status_code avf_aq_write_ddp(struct avf_hw *hw, void *buff,
+				   u16 buff_size, u32 track_id,
+				   u32 *error_offset, u32 *error_info,
+				   struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_write_personalization_profile *cmd =
+		(struct avf_aqc_write_personalization_profile *)
+		&desc.params.raw;
+	struct avf_aqc_write_ddp_resp *resp;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+				  avf_aqc_opc_write_personalization_profile);
+
+	desc.flags |= CPU_TO_LE16(AVF_AQ_FLAG_BUF | AVF_AQ_FLAG_RD);
+	if (buff_size > AVF_AQ_LARGE_BUF)
+		desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+
+	desc.datalen = CPU_TO_LE16(buff_size);
+
+	cmd->profile_track_id = CPU_TO_LE32(track_id);
+
+	status = avf_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
+	if (!status) {
+		resp = (struct avf_aqc_write_ddp_resp *)&desc.params.raw;
+		if (error_offset)
+			*error_offset = LE32_TO_CPU(resp->error_offset);
+		if (error_info)
+			*error_info = LE32_TO_CPU(resp->error_info);
+	}
+
+	return status;
+}
+
+/**
+ * avf_aq_get_ddp_list - Read dynamic device personalization (ddp)
+ * @hw: pointer to the hw struct
+ * @buff: command buffer (size in bytes = buff_size)
+ * @buff_size: buffer size in bytes
+ * @flags: flags copied into the command's flags field
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum
+avf_status_code avf_aq_get_ddp_list(struct avf_hw *hw, void *buff,
+				      u16 buff_size, u8 flags,
+				      struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_applied_profiles *cmd =
+		(struct avf_aqc_get_applied_profiles *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+			  avf_aqc_opc_get_personalization_profile_list);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	if (buff_size > AVF_AQ_LARGE_BUF)
+		desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+	desc.datalen = CPU_TO_LE16(buff_size);
+
+	cmd->flags = flags;
+
+	status = avf_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_find_segment_in_package
+ * @segment_type: the segment type to search for (e.g., SEGMENT_TYPE_AVF)
+ * @pkg_hdr: pointer to the package header to be searched
+ *
+ * This function searches a package file for a particular segment type. On
+ * success it returns a pointer to the segment header, otherwise it will
+ * return NULL.
+ **/
+struct avf_generic_seg_header *
+avf_find_segment_in_package(u32 segment_type,
+			     struct avf_package_header *pkg_hdr)
+{
+	struct avf_generic_seg_header *segment;
+	u32 i;
+
+	/* Search all package segments for the requested segment type */
+	for (i = 0; i < pkg_hdr->segment_count; i++) {
+		segment =
+			(struct avf_generic_seg_header *)((u8 *)pkg_hdr +
+			 pkg_hdr->segment_offset[i]);
+
+		if (segment->type == segment_type)
+			return segment;
+	}
+
+	return NULL;
+}
+
+/* Get section table in profile */
+#define AVF_SECTION_TABLE(profile, sec_tbl)				\
+	do {								\
+		struct avf_profile_segment *p = (profile);		\
+		u32 count;						\
+		u32 *nvm;						\
+		count = p->device_table_count;				\
+		nvm = (u32 *)&p->device_table[count];			\
+		sec_tbl = (struct avf_section_table *)&nvm[nvm[0] + 1]; \
+	} while (0)
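+
+/* nvm points just past the device table; the section table sits another
+ * nvm[0] + 1 u32 words further on (a length word plus the entries it
+ * counts).
+ */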
+
+/* Get section header in profile */
+#define AVF_SECTION_HEADER(profile, offset)				\
+	(struct avf_profile_section_header *)((u8 *)(profile) + (offset))
+
+/**
+ * avf_find_section_in_profile
+ * @section_type: the section type to search for (e.g., SECTION_TYPE_NOTE)
+ * @profile: pointer to the avf segment header to be searched
+ *
+ * This function searches the avf segment for a particular section type. On
+ * success it returns a pointer to the section header, otherwise it will
+ * return NULL.
+ **/
+struct avf_profile_section_header *
+avf_find_section_in_profile(u32 section_type,
+			     struct avf_profile_segment *profile)
+{
+	struct avf_profile_section_header *sec;
+	struct avf_section_table *sec_tbl;
+	u32 sec_off;
+	u32 i;
+
+	if (profile->header.type != SEGMENT_TYPE_AVF)
+		return NULL;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		if (sec->section.type == section_type)
+			return sec;
+	}
+
+	return NULL;
+}
+
+/**
+ * avf_ddp_exec_aq_section - Execute generic AQ for DDP
+ * @hw: pointer to the hw struct
+ * @aq: command buffer containing all data to execute AQ
+ **/
+STATIC enum
+avf_status_code avf_ddp_exec_aq_section(struct avf_hw *hw,
+					  struct avf_profile_aq_section *aq)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	u8 *msg = NULL;
+	u16 msglen;
+
+	avf_fill_default_direct_cmd_desc(&desc, aq->opcode);
+	desc.flags |= CPU_TO_LE16(aq->flags);
+	avf_memcpy(desc.params.raw, aq->param, sizeof(desc.params.raw),
+		    AVF_NONDMA_TO_NONDMA);
+
+	msglen = aq->datalen;
+	if (msglen) {
+		desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF |
+						AVF_AQ_FLAG_RD));
+		if (msglen > AVF_AQ_LARGE_BUF)
+			desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+		desc.datalen = CPU_TO_LE16(msglen);
+		msg = &aq->data[0];
+	}
+
+	status = avf_asq_send_command(hw, &desc, msg, msglen, NULL);
+
+	if (status != AVF_SUCCESS) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE,
+			   "unable to exec DDP AQ opcode %u, error %d\n",
+			   aq->opcode, status);
+		return status;
+	}
+
+	/* copy the returned descriptor params back into the AQ section */
+	avf_memcpy(aq->param, desc.params.raw, sizeof(desc.params.raw),
+		    AVF_NONDMA_TO_NONDMA);
+
+	return AVF_SUCCESS;
+}
+
+/**
+ * avf_validate_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be validated
+ * @track_id: package tracking id
+ * @rollback: true if the profile is being validated for rollback
+ *
+ * Validates that the device is supported and checks the profile's
+ * section types.
+ */
+STATIC enum avf_status_code
+avf_validate_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		      u32 track_id, bool rollback)
+{
+	struct avf_profile_section_header *sec = NULL;
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	u32 vendor_dev_id;
+	u32 dev_cnt;
+	u32 sec_off;
+	u32 i;
+
+	if (track_id == AVF_DDP_TRACKID_INVALID) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE, "Invalid track_id\n");
+		return AVF_NOT_SUPPORTED;
+	}
+
+	dev_cnt = profile->device_table_count;
+	for (i = 0; i < dev_cnt; i++) {
+		vendor_dev_id = profile->device_table[i].vendor_dev_id;
+		if ((vendor_dev_id >> 16) == AVF_INTEL_VENDOR_ID &&
+		    hw->device_id == (vendor_dev_id & 0xFFFF))
+			break;
+	}
+	if (dev_cnt && (i == dev_cnt)) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE,
+			   "Device doesn't support DDP\n");
+		return AVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	/* Validate section types */
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		if (rollback) {
+			if (sec->section.type == SECTION_TYPE_MMIO ||
+			    sec->section.type == SECTION_TYPE_AQ ||
+			    sec->section.type == SECTION_TYPE_RB_AQ) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Not a roll-back package\n");
+				return AVF_NOT_SUPPORTED;
+			}
+		} else {
+			if (sec->section.type == SECTION_TYPE_RB_AQ ||
+			    sec->section.type == SECTION_TYPE_RB_MMIO) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Not an original package\n");
+				return AVF_NOT_SUPPORTED;
+			}
+		}
+	}
+
+	return status;
+}
+
+/**
+ * avf_write_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be downloaded
+ * @track_id: package tracking id
+ *
+ * Handles the download of a complete package.
+ */
+enum avf_status_code
+avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		   u32 track_id)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	struct avf_profile_section_header *sec = NULL;
+	struct avf_profile_aq_section *ddp_aq;
+	u32 section_size = 0;
+	u32 offset = 0, info = 0;
+	u32 sec_off;
+	u32 i;
+
+	status = avf_validate_profile(hw, profile, track_id, false);
+	if (status)
+		return status;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		/* Process generic admin command */
+		if (sec->section.type == SECTION_TYPE_AQ) {
+			ddp_aq = (struct avf_profile_aq_section *)&sec[1];
+			status = avf_ddp_exec_aq_section(hw, ddp_aq);
+			if (status) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Failed to execute aq: section %d, opcode %u\n",
+					   i, ddp_aq->opcode);
+				break;
+			}
+			sec->section.type = SECTION_TYPE_RB_AQ;
+		}
+
+		/* Skip any non-mmio sections */
+		if (sec->section.type != SECTION_TYPE_MMIO)
+			continue;
+
+		section_size = sec->section.size +
+			sizeof(struct avf_profile_section_header);
+
+		/* Write MMIO section */
+		status = avf_aq_write_ddp(hw, (void *)sec, (u16)section_size,
+					   track_id, &offset, &info, NULL);
+		if (status) {
+			avf_debug(hw, AVF_DEBUG_PACKAGE,
+				   "Failed to write profile: section %d, offset %d, info %d\n",
+				   i, offset, info);
+			break;
+		}
+	}
+	return status;
+}
+
+/**
+ * avf_rollback_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be removed
+ * @track_id: package tracking id
+ *
+ * Rolls back previously loaded package.
+ */
+enum avf_status_code
+avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		      u32 track_id)
+{
+	struct avf_profile_section_header *sec = NULL;
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	u32 offset = 0, info = 0;
+	u32 section_size = 0;
+	u32 sec_off;
+	int i;
+
+	status = avf_validate_profile(hw, profile, track_id, true);
+	if (status)
+		return status;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	/* For rollback write sections in reverse */
+	for (i = sec_tbl->section_count - 1; i >= 0; i--) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+
+		/* Skip any non-rollback sections */
+		if (sec->section.type != SECTION_TYPE_RB_MMIO)
+			continue;
+
+		section_size = sec->section.size +
+			sizeof(struct avf_profile_section_header);
+
+		/* Write roll-back MMIO section */
+		status = avf_aq_write_ddp(hw, (void *)sec, (u16)section_size,
+					   track_id, &offset, &info, NULL);
+		if (status) {
+			avf_debug(hw, AVF_DEBUG_PACKAGE,
+				   "Failed to write profile: section %d, offset %d, info %d\n",
+				   i, offset, info);
+			break;
+		}
+	}
+	return status;
+}
+
+/**
+ * avf_add_pinfo_to_list
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package
+ * @profile_info_sec: buffer for information section
+ * @track_id: package tracking id
+ *
+ * Register a profile to the list of loaded profiles.
+ */
+enum avf_status_code
+avf_add_pinfo_to_list(struct avf_hw *hw,
+		       struct avf_profile_segment *profile,
+		       u8 *profile_info_sec, u32 track_id)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_profile_section_header *sec = NULL;
+	struct avf_profile_info *pinfo;
+	u32 offset = 0, info = 0;
+
+	sec = (struct avf_profile_section_header *)profile_info_sec;
+	sec->tbl_size = 1;
+	sec->data_end = sizeof(struct avf_profile_section_header) +
+			sizeof(struct avf_profile_info);
+	sec->section.type = SECTION_TYPE_INFO;
+	sec->section.offset = sizeof(struct avf_profile_section_header);
+	sec->section.size = sizeof(struct avf_profile_info);
+	pinfo = (struct avf_profile_info *)(profile_info_sec +
+					     sec->section.offset);
+	pinfo->track_id = track_id;
+	pinfo->version = profile->version;
+	pinfo->op = AVF_DDP_ADD_TRACKID;
+	avf_memcpy(pinfo->name, profile->name, AVF_DDP_NAME_SIZE,
+		    AVF_NONDMA_TO_NONDMA);
+
+	status = avf_aq_write_ddp(hw, (void *)sec, sec->data_end,
+				   track_id, &offset, &info, NULL);
+	return status;
+}
diff --git a/drivers/net/avf/base/avf_devids.h b/drivers/net/avf/base/avf_devids.h
new file mode 100644
index 0000000..7d9fed2
--- /dev/null
+++ b/drivers/net/avf/base/avf_devids.h
@@ -0,0 +1,43 @@
+/*******************************************************************************
+
+Copyright (c) 2017, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_DEVIDS_H_
+#define _AVF_DEVIDS_H_
+
+/* Vendor ID */
+#define AVF_INTEL_VENDOR_ID		0x8086
+
+/* Device IDs */
+#define AVF_DEV_ID_ADAPTIVE_VF		0x1889
+
+#endif /* _AVF_DEVIDS_H_ */
diff --git a/drivers/net/avf/base/avf_hmc.h b/drivers/net/avf/base/avf_hmc.h
new file mode 100644
index 0000000..b9b7b5b
--- /dev/null
+++ b/drivers/net/avf/base/avf_hmc.h
@@ -0,0 +1,245 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_HMC_H_
+#define _AVF_HMC_H_
+
+#define AVF_HMC_MAX_BP_COUNT 512
+
+/* forward-declare the HW struct for the compiler */
+struct avf_hw;
+
+#define AVF_HMC_INFO_SIGNATURE		0x484D5347 /* HMSG */
+#define AVF_HMC_PD_CNT_IN_SD		512
+#define AVF_HMC_DIRECT_BP_SIZE		0x200000 /* 2M */
+#define AVF_HMC_PAGED_BP_SIZE		4096
+#define AVF_HMC_PD_BP_BUF_ALIGNMENT	4096
+#define AVF_FIRST_VF_FPM_ID		16
+
+struct avf_hmc_obj_info {
+	u64 base;	/* base addr in FPM */
+	u32 max_cnt;	/* max count available for this hmc func */
+	u32 cnt;	/* count of objects driver actually wants to create */
+	u64 size;	/* size in bytes of one object */
+};
+
+enum avf_sd_entry_type {
+	AVF_SD_TYPE_INVALID = 0,
+	AVF_SD_TYPE_PAGED   = 1,
+	AVF_SD_TYPE_DIRECT  = 2
+};
+
+struct avf_hmc_bp {
+	enum avf_sd_entry_type entry_type;
+	struct avf_dma_mem addr; /* populate to be used by hw */
+	u32 sd_pd_index;
+	u32 ref_cnt;
+};
+
+struct avf_hmc_pd_entry {
+	struct avf_hmc_bp bp;
+	u32 sd_index;
+	bool rsrc_pg;
+	bool valid;
+};
+
+struct avf_hmc_pd_table {
+	struct avf_dma_mem pd_page_addr; /* populate to be used by hw */
+	struct avf_hmc_pd_entry  *pd_entry; /* [512] for sw book keeping */
+	struct avf_virt_mem pd_entry_virt_mem; /* virt mem for pd_entry */
+
+	u32 ref_cnt;
+	u32 sd_index;
+};
+
+struct avf_hmc_sd_entry {
+	enum avf_sd_entry_type entry_type;
+	bool valid;
+
+	union {
+		struct avf_hmc_pd_table pd_table;
+		struct avf_hmc_bp bp;
+	} u;
+};
+
+struct avf_hmc_sd_table {
+	struct avf_virt_mem addr; /* used to track sd_entry allocations */
+	u32 sd_cnt;
+	u32 ref_cnt;
+	struct avf_hmc_sd_entry *sd_entry; /* (sd_cnt*512) entries max */
+};
+
+struct avf_hmc_info {
+	u32 signature;
+	/* equals the PCI func num for the PF; dynamically allocated for VFs */
+	u8 hmc_fn_id;
+	u16 first_sd_index; /* index of the first available SD */
+
+	/* hmc objects */
+	struct avf_hmc_obj_info *hmc_obj;
+	struct avf_virt_mem hmc_obj_virt_mem;
+	struct avf_hmc_sd_table sd_table;
+};
+
+#define AVF_INC_SD_REFCNT(sd_table)	((sd_table)->ref_cnt++)
+#define AVF_INC_PD_REFCNT(pd_table)	((pd_table)->ref_cnt++)
+#define AVF_INC_BP_REFCNT(bp)		((bp)->ref_cnt++)
+
+#define AVF_DEC_SD_REFCNT(sd_table)	((sd_table)->ref_cnt--)
+#define AVF_DEC_PD_REFCNT(pd_table)	((pd_table)->ref_cnt--)
+#define AVF_DEC_BP_REFCNT(bp)		((bp)->ref_cnt--)
+
+/**
+ * AVF_SET_PF_SD_ENTRY - marks the sd entry as valid in the hardware
+ * @hw: pointer to our hw struct
+ * @pa: pointer to physical address
+ * @sd_index: segment descriptor index
+ * @type: if sd entry is direct or paged
+ **/
+#define AVF_SET_PF_SD_ENTRY(hw, pa, sd_index, type)			\
+{									\
+	u32 val1, val2, val3;						\
+	val1 = (u32)(AVF_HI_DWORD(pa));				\
+	val2 = (u32)(pa) | (AVF_HMC_MAX_BP_COUNT <<			\
+		 AVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) |		\
+		((((type) == AVF_SD_TYPE_PAGED) ? 0 : 1) <<		\
+		AVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT) |			\
+		BIT(AVF_PFHMC_SDDATALOW_PMSDVALID_SHIFT);		\
+	val3 = (sd_index) | BIT_ULL(AVF_PFHMC_SDCMD_PMSDWR_SHIFT);	\
+	wr32((hw), AVF_PFHMC_SDDATAHIGH, val1);			\
+	wr32((hw), AVF_PFHMC_SDDATALOW, val2);				\
+	wr32((hw), AVF_PFHMC_SDCMD, val3);				\
+}
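+
+/* Writing AVF_PFHMC_SDCMD last, with the PMSDWR bit set, issues the SD
+ * update that was programmed into SDDATAHIGH/SDDATALOW above.
+ */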
+
+/**
+ * AVF_CLEAR_PF_SD_ENTRY - marks the sd entry as invalid in the hardware
+ * @hw: pointer to our hw struct
+ * @sd_index: segment descriptor index
+ * @type: if sd entry is direct or paged
+ **/
+#define AVF_CLEAR_PF_SD_ENTRY(hw, sd_index, type)			\
+{									\
+	u32 val2, val3;							\
+	val2 = (AVF_HMC_MAX_BP_COUNT <<				\
+		AVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) |		\
+		((((type) == AVF_SD_TYPE_PAGED) ? 0 : 1) <<		\
+		AVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT);			\
+	val3 = (sd_index) | BIT_ULL(AVF_PFHMC_SDCMD_PMSDWR_SHIFT);	\
+	wr32((hw), AVF_PFHMC_SDDATAHIGH, 0);				\
+	wr32((hw), AVF_PFHMC_SDDATALOW, val2);				\
+	wr32((hw), AVF_PFHMC_SDCMD, val3);				\
+}
+
+/**
+ * AVF_INVALIDATE_PF_HMC_PD - Invalidates the pd cache in the hardware
+ * @hw: pointer to our hw struct
+ * @sd_idx: segment descriptor index
+ * @pd_idx: page descriptor index
+ **/
+#define AVF_INVALIDATE_PF_HMC_PD(hw, sd_idx, pd_idx)			\
+	wr32((hw), AVF_PFHMC_PDINV,					\
+	    (((sd_idx) << AVF_PFHMC_PDINV_PMSDIDX_SHIFT) |		\
+	     ((pd_idx) << AVF_PFHMC_PDINV_PMPDIDX_SHIFT)))
+
+/**
+ * AVF_FIND_SD_INDEX_LIMIT - finds segment descriptor index limit
+ * @hmc_info: pointer to the HMC configuration information structure
+ * @type: type of HMC resources we're searching
+ * @index: starting index for the object
+ * @cnt: number of objects we're trying to create
+ * @sd_idx: pointer to return index of the segment descriptor in question
+ * @sd_limit: pointer to return the maximum number of segment descriptors
+ *
+ * This function calculates the segment descriptor index and index limit
+ * for the resource defined by avf_hmc_rsrc_type.
+ **/
+#define AVF_FIND_SD_INDEX_LIMIT(hmc_info, type, index, cnt, sd_idx, sd_limit)\
+{									\
+	u64 fpm_addr, fpm_limit;					\
+	fpm_addr = (hmc_info)->hmc_obj[(type)].base +			\
+		   (hmc_info)->hmc_obj[(type)].size * (index);		\
+	fpm_limit = fpm_addr + (hmc_info)->hmc_obj[(type)].size * (cnt);\
+	*(sd_idx) = (u32)(fpm_addr / AVF_HMC_DIRECT_BP_SIZE);		\
+	*(sd_limit) = (u32)((fpm_limit - 1) / AVF_HMC_DIRECT_BP_SIZE);	\
+	/* add one more to the limit to correct our range */		\
+	*(sd_limit) += 1;						\
+}
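+
+/* For illustration, with assumed values hmc_obj[type].base = 0,
+ * .size = 128, index = 16384 and cnt = 1: fpm_addr works out to 2 MB,
+ * so *sd_idx = 1 and *sd_limit = 2 (the +1 makes the limit exclusive).
+ */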
+
+/**
+ * AVF_FIND_PD_INDEX_LIMIT - finds page descriptor index limit
+ * @hmc_info: pointer to the HMC configuration information struct
+ * @type: HMC resource type we're examining
+ * @idx: starting index for the object
+ * @cnt: number of objects we're trying to create
+ * @pd_index: pointer to return page descriptor index
+ * @pd_limit: pointer to return page descriptor index limit
+ *
+ * Calculates the page descriptor index and index limit for the resource
+ * defined by avf_hmc_rsrc_type.
+ **/
+#define AVF_FIND_PD_INDEX_LIMIT(hmc_info, type, idx, cnt, pd_index, pd_limit)\
+{									\
+	u64 fpm_adr, fpm_limit;						\
+	fpm_adr = (hmc_info)->hmc_obj[(type)].base +			\
+		  (hmc_info)->hmc_obj[(type)].size * (idx);		\
+	fpm_limit = fpm_adr + (hmc_info)->hmc_obj[(type)].size * (cnt);	\
+	*(pd_index) = (u32)(fpm_adr / AVF_HMC_PAGED_BP_SIZE);		\
+	*(pd_limit) = (u32)((fpm_limit - 1) / AVF_HMC_PAGED_BP_SIZE);	\
+	/* add one more to the limit to correct our range */		\
+	*(pd_limit) += 1;						\
+}
+enum avf_status_code avf_add_sd_table_entry(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 sd_index,
+					      enum avf_sd_entry_type type,
+					      u64 direct_mode_sz);
+
+enum avf_status_code avf_add_pd_table_entry(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 pd_index,
+					      struct avf_dma_mem *rsrc_pg);
+enum avf_status_code avf_remove_pd_bp(struct avf_hw *hw,
+					struct avf_hmc_info *hmc_info,
+					u32 idx);
+enum avf_status_code avf_prep_remove_sd_bp(struct avf_hmc_info *hmc_info,
+					     u32 idx);
+enum avf_status_code avf_remove_sd_bp_new(struct avf_hw *hw,
+					    struct avf_hmc_info *hmc_info,
+					    u32 idx, bool is_pf);
+enum avf_status_code avf_prep_remove_pd_page(struct avf_hmc_info *hmc_info,
+					       u32 idx);
+enum avf_status_code avf_remove_pd_page_new(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 idx, bool is_pf);
+
+#endif /* _AVF_HMC_H_ */
diff --git a/drivers/net/avf/base/avf_lan_hmc.h b/drivers/net/avf/base/avf_lan_hmc.h
new file mode 100644
index 0000000..48805d8
--- /dev/null
+++ b/drivers/net/avf/base/avf_lan_hmc.h
@@ -0,0 +1,200 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_LAN_HMC_H_
+#define _AVF_LAN_HMC_H_
+
+/* forward-declare the HW struct for the compiler */
+struct avf_hw;
+
+/* HMC element context information */
+
+/* Rx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct avf_hmc_obj_rxq {
+	u16 head;
+	u16 cpuid; /* bigger than needed, see above for reason */
+	u64 base;
+	u16 qlen;
+#define AVF_RXQ_CTX_DBUFF_SHIFT 7
+	u16 dbuff; /* bigger than needed, see above for reason */
+#define AVF_RXQ_CTX_HBUFF_SHIFT 6
+	u16 hbuff; /* bigger than needed, see above for reason */
+	u8  dtype;
+	u8  dsize;
+	u8  crcstrip;
+	u8  fc_ena;
+	u8  l2tsel;
+	u8  hsplit_0;
+	u8  hsplit_1;
+	u8  showiv;
+	u32 rxmax; /* bigger than needed, see above for reason */
+	u8  tphrdesc_ena;
+	u8  tphwdesc_ena;
+	u8  tphdata_ena;
+	u8  tphhead_ena;
+	u16 lrxqthresh; /* bigger than needed, see above for reason */
+	u8  prefena;	/* NOTE: normally must be set to 1 at init */
+};
+
+/* Tx queue context data
+*
+* The sizes of the variables may be larger than needed due to crossing byte
+* boundaries. If we do not have the width of the variable set to the correct
+* size then we could end up shifting bits off the top of the variable when the
+* variable is at the top of a byte and crosses over into the next byte.
+*/
+struct avf_hmc_obj_txq {
+	u16 head;
+	u8  new_context;
+	u64 base;
+	u8  fc_ena;
+	u8  timesync_ena;
+	u8  fd_ena;
+	u8  alt_vlan_ena;
+	u16 thead_wb;
+	u8  cpuid;
+	u8  head_wb_ena;
+	u16 qlen;
+	u8  tphrdesc_ena;
+	u8  tphrpacket_ena;
+	u8  tphwdesc_ena;
+	u64 head_wb_addr;
+	u32 crc;
+	u16 rdylist;
+	u8  rdylist_act;
+};
+
+/* for hsplit_0 field of Rx HMC context */
+enum avf_hmc_obj_rx_hsplit_0 {
+	AVF_HMC_OBJ_RX_HSPLIT_0_NO_SPLIT      = 0,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_L2      = 1,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_IP      = 2,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_TCP_UDP = 4,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_SCTP    = 8,
+};
+
+/* fcoe_cntx and fcoe_filt are for debugging purposes only */
+struct avf_hmc_obj_fcoe_cntx {
+	u32 rsv[32];
+};
+
+struct avf_hmc_obj_fcoe_filt {
+	u32 rsv[8];
+};
+
+/* Context sizes for LAN objects */
+enum avf_hmc_lan_object_size {
+	AVF_HMC_LAN_OBJ_SZ_8   = 0x3,
+	AVF_HMC_LAN_OBJ_SZ_16  = 0x4,
+	AVF_HMC_LAN_OBJ_SZ_32  = 0x5,
+	AVF_HMC_LAN_OBJ_SZ_64  = 0x6,
+	AVF_HMC_LAN_OBJ_SZ_128 = 0x7,
+	AVF_HMC_LAN_OBJ_SZ_256 = 0x8,
+	AVF_HMC_LAN_OBJ_SZ_512 = 0x9,
+};
+
+#define AVF_HMC_L2OBJ_BASE_ALIGNMENT 512
+#define AVF_HMC_OBJ_SIZE_TXQ         128
+#define AVF_HMC_OBJ_SIZE_RXQ         32
+#define AVF_HMC_OBJ_SIZE_FCOE_CNTX   64
+#define AVF_HMC_OBJ_SIZE_FCOE_FILT   64
+
+enum avf_hmc_lan_rsrc_type {
+	AVF_HMC_LAN_FULL  = 0,
+	AVF_HMC_LAN_TX    = 1,
+	AVF_HMC_LAN_RX    = 2,
+	AVF_HMC_FCOE_CTX  = 3,
+	AVF_HMC_FCOE_FILT = 4,
+	AVF_HMC_LAN_MAX   = 5
+};
+
+enum avf_hmc_model {
+	AVF_HMC_MODEL_DIRECT_PREFERRED = 0,
+	AVF_HMC_MODEL_DIRECT_ONLY      = 1,
+	AVF_HMC_MODEL_PAGED_ONLY       = 2,
+	AVF_HMC_MODEL_UNKNOWN,
+};
+
+struct avf_hmc_lan_create_obj_info {
+	struct avf_hmc_info *hmc_info;
+	u32 rsrc_type;
+	u32 start_idx;
+	u32 count;
+	enum avf_sd_entry_type entry_type;
+	u64 direct_mode_sz;
+};
+
+struct avf_hmc_lan_delete_obj_info {
+	struct avf_hmc_info *hmc_info;
+	u32 rsrc_type;
+	u32 start_idx;
+	u32 count;
+};
+
+enum avf_status_code avf_init_lan_hmc(struct avf_hw *hw, u32 txq_num,
+					u32 rxq_num, u32 fcoe_cntx_num,
+					u32 fcoe_filt_num);
+enum avf_status_code avf_configure_lan_hmc(struct avf_hw *hw,
+					     enum avf_hmc_model model);
+enum avf_status_code avf_shutdown_lan_hmc(struct avf_hw *hw);
+
+u64 avf_calculate_l2fpm_size(u32 txq_num, u32 rxq_num,
+			      u32 fcoe_cntx_num, u32 fcoe_filt_num);
+enum avf_status_code avf_get_lan_tx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_txq *s);
+enum avf_status_code avf_clear_lan_tx_queue_context(struct avf_hw *hw,
+						      u16 queue);
+enum avf_status_code avf_set_lan_tx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_txq *s);
+enum avf_status_code avf_get_lan_rx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_rxq *s);
+enum avf_status_code avf_clear_lan_rx_queue_context(struct avf_hw *hw,
+						      u16 queue);
+enum avf_status_code avf_set_lan_rx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_rxq *s);
+enum avf_status_code avf_create_lan_hmc_object(struct avf_hw *hw,
+				struct avf_hmc_lan_create_obj_info *info);
+enum avf_status_code avf_delete_lan_hmc_object(struct avf_hw *hw,
+				struct avf_hmc_lan_delete_obj_info *info);
+
+#endif /* _AVF_LAN_HMC_H_ */
diff --git a/drivers/net/avf/base/avf_osdep.h b/drivers/net/avf/base/avf_osdep.h
new file mode 100644
index 0000000..2f46bb2
--- /dev/null
+++ b/drivers/net/avf/base/avf_osdep.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_OSDEP_H_
+#define _AVF_OSDEP_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdarg.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_io.h>
+
+#include "../avf_log.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t         u8;
+typedef int8_t          s8;
+typedef uint16_t        u16;
+typedef uint32_t        u32;
+typedef int32_t         s32;
+typedef uint64_t        u64;
+
+#define __iomem
+#define hw_dbg(hw, S, A...) do {} while (0)
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN                  6
+#endif
+
+#ifndef __le16
+#define __le16          uint16_t
+#endif
+#ifndef __le32
+#define __le32          uint32_t
+#endif
+#ifndef __le64
+#define __le64          uint64_t
+#endif
+#ifndef __be16
+#define __be16          uint16_t
+#endif
+#ifndef __be32
+#define __be32          uint32_t
+#endif
+#ifndef __be64
+#define __be64          uint64_t
+#endif
+
+#define FALSE           0
+#define TRUE            1
+#define false           0
+#define true            1
+
+#define min(a,b) RTE_MIN(a,b)
+#define max(a,b) RTE_MAX(a,b)
+
+#define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))
+#define ASSERT(x) do { if (!(x)) rte_panic("AVF: %s", #x); } while (0)
+
+#define DEBUGOUT(S)             PMD_DRV_LOG_RAW(DEBUG, S)
+#define DEBUGOUT2(S, A...)      PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F)            DEBUGOUT(F "\n")
+
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define cpu_to_le16(o) rte_cpu_to_le_16(o)
+#define cpu_to_le32(s) rte_cpu_to_le_32(s)
+#define cpu_to_le64(h) rte_cpu_to_le_64(h)
+#define le16_to_cpu(a) rte_le_to_cpu_16(a)
+#define le32_to_cpu(c) rte_le_to_cpu_32(c)
+#define le64_to_cpu(k) rte_le_to_cpu_64(k)
+
+#define avf_memset(a, b, c, d) memset((a), (b), (c))
+#define avf_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
+
+#define avf_usec_delay(x) rte_delay_us(x)
+#define avf_msec_delay(x) rte_delay_us(1000*(x))
+
+#define AVF_PCI_REG(reg)		rte_read32(reg)
+#define AVF_PCI_REG_ADDR(a, reg) \
+	((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+
+#define AVF_PCI_REG_WRITE(reg, value)		\
+	rte_write32((rte_cpu_to_le_32(value)), reg)
+#define AVF_PCI_REG_WRITE_RELAXED(reg, value)	\
+	rte_write32_relaxed((rte_cpu_to_le_32(value)), reg)
+static inline
+uint32_t avf_read_addr(volatile void *addr)
+{
+	return rte_le_to_cpu_32(AVF_PCI_REG(addr));
+}
+
+#define AVF_READ_REG(hw, reg) \
+	avf_read_addr(AVF_PCI_REG_ADDR((hw), (reg)))
+#define AVF_WRITE_REG(hw, reg, value) \
+	AVF_PCI_REG_WRITE(AVF_PCI_REG_ADDR((hw), (reg)), (value))
+#define AVF_WRITE_FLUSH(a) \
+	AVF_READ_REG(a, AVFGEN_RSTAT)
+
+#define rd32(a, reg) avf_read_addr(AVF_PCI_REG_ADDR((a), (reg)))
+#define wr32(a, reg, value) \
+	AVF_PCI_REG_WRITE(AVF_PCI_REG_ADDR((a), (reg)), (value))
+
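+/* Illustrative use of the accessors above (a sketch, not shared code):
+ * registers are addressed by the byte offsets from avf_register.h, e.g.
+ * bumping a Tx queue tail and flushing posted writes:
+ *
+ *	wr32(hw, AVF_QTX_TAIL1(txq_id), tail);
+ *	AVF_WRITE_FLUSH(hw);
+ */
+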
+#define ARRAY_SIZE(arr) (sizeof(arr)/sizeof(arr[0]))
+
+#define avf_debug(h, m, s, ...)                                \
+do {                                                            \
+	if (((m) & (h)->debug_mask))                            \
+		PMD_DRV_LOG_RAW(DEBUG, "avf %02x.%x " s,       \
+			(h)->bus.device, (h)->bus.func,         \
+					##__VA_ARGS__);         \
+} while (0)
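+
+/* Example: avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, "ARQ: opcode %04x\n", opc)
+ * only emits output when AVF_DEBUG_AQ_MESSAGE is set in hw->debug_mask
+ * (the mask values are defined in avf_type.h).
+ */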
+
+/* memory allocation tracking */
+struct avf_dma_mem {
+	void *va;
+	u64 pa;
+	u32 size;
+	const void *zone;
+} __attribute__((packed));
+
+struct avf_virt_mem {
+	void *va;
+	u32 size;
+} __attribute__((packed));
+
+/* SW spinlock */
+struct avf_spinlock {
+	rte_spinlock_t spinlock;
+};
+
+#define avf_allocate_dma_mem(h, m, unused, s, a) \
+			avf_allocate_dma_mem_d(h, m, s, a)
+#define avf_free_dma_mem(h, m) avf_free_dma_mem_d(h, m)
+
+#define avf_allocate_virt_mem(h, m, s) avf_allocate_virt_mem_d(h, m, s)
+#define avf_free_virt_mem(h, m) avf_free_virt_mem_d(h, m)
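+
+/* The *_d helpers above are not shared code; the PMD is expected to
+ * provide them, typically on top of the rte_memzone allocator.  A rough
+ * sketch of the contract (the third argument of avf_allocate_dma_mem()
+ * is a memory type that this port ignores):
+ *
+ *	struct avf_dma_mem ring;
+ *	if (avf_allocate_dma_mem(hw, &ring, 0, size, 4096) != AVF_SUCCESS)
+ *		return AVF_ERR_NO_MEMORY;
+ *	...
+ *	avf_free_dma_mem(hw, &ring);
+ */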
+
+#define avf_init_spinlock(_sp) avf_init_spinlock_d(_sp)
+#define avf_acquire_spinlock(_sp) avf_acquire_spinlock_d(_sp)
+#define avf_release_spinlock(_sp) avf_release_spinlock_d(_sp)
+#define avf_destroy_spinlock(_sp) avf_destroy_spinlock_d(_sp)
+
+#endif /* _AVF_OSDEP_H_ */
diff --git a/drivers/net/avf/base/avf_prototype.h b/drivers/net/avf/base/avf_prototype.h
new file mode 100644
index 0000000..de031dc
--- /dev/null
+++ b/drivers/net/avf/base/avf_prototype.h
@@ -0,0 +1,206 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_PROTOTYPE_H_
+#define _AVF_PROTOTYPE_H_
+
+#include "avf_type.h"
+#include "avf_alloc.h"
+#include "virtchnl.h"
+
+/* Prototypes for shared code functions that are not in
+ * the standard function pointer structures.  These exist
+ * mostly because they are needed even before init has
+ * happened and they assist in the early SW and FW setup.
+ */
+
+/* adminq functions */
+enum avf_status_code avf_init_adminq(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_adminq(struct avf_hw *hw);
+enum avf_status_code avf_init_asq(struct avf_hw *hw);
+enum avf_status_code avf_init_arq(struct avf_hw *hw);
+enum avf_status_code avf_alloc_adminq_asq_ring(struct avf_hw *hw);
+enum avf_status_code avf_alloc_adminq_arq_ring(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_asq(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_arq(struct avf_hw *hw);
+u16 avf_clean_asq(struct avf_hw *hw);
+void avf_free_adminq_asq(struct avf_hw *hw);
+void avf_free_adminq_arq(struct avf_hw *hw);
+enum avf_status_code avf_validate_mac_addr(u8 *mac_addr);
+void avf_adminq_init_ring_data(struct avf_hw *hw);
+enum avf_status_code avf_clean_arq_element(struct avf_hw *hw,
+					     struct avf_arq_event_info *e,
+					     u16 *events_pending);
+enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
+				struct avf_aq_desc *desc,
+				void *buff, /* can be NULL */
+				u16  buff_size,
+				struct avf_asq_cmd_details *cmd_details);
+bool avf_asq_done(struct avf_hw *hw);
+
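+/* Illustrative adminq life cycle as driven by the PMD (sketch only; the
+ * real flow lives in the PMD, outside this shared code):
+ *
+ *	avf_init_adminq(hw);
+ *	avf_asq_send_command(hw, &desc, buf, buf_size, NULL);
+ *	avf_clean_arq_element(hw, &event, &pending);
+ *	avf_shutdown_adminq(hw);
+ */
+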
+/* debug function for adminq */
+void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask,
+		   void *desc, void *buffer, u16 buf_len);
+
+void avf_idle_aq(struct avf_hw *hw);
+bool avf_check_asq_alive(struct avf_hw *hw);
+enum avf_status_code avf_aq_queue_shutdown(struct avf_hw *hw, bool unloading);
+
+enum avf_status_code avf_aq_get_rss_lut(struct avf_hw *hw, u16 seid,
+					  bool pf_lut, u8 *lut, u16 lut_size);
+enum avf_status_code avf_aq_set_rss_lut(struct avf_hw *hw, u16 seid,
+					  bool pf_lut, u8 *lut, u16 lut_size);
+enum avf_status_code avf_aq_get_rss_key(struct avf_hw *hw,
+				     u16 seid,
+				     struct avf_aqc_get_set_rss_key_data *key);
+enum avf_status_code avf_aq_set_rss_key(struct avf_hw *hw,
+				     u16 seid,
+				     struct avf_aqc_get_set_rss_key_data *key);
+const char *avf_aq_str(struct avf_hw *hw, enum avf_admin_queue_err aq_err);
+const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err);
+
+
+enum avf_status_code avf_set_mac_type(struct avf_hw *hw);
+
+extern struct avf_rx_ptype_decoded avf_ptype_lookup[];
+
+STATIC INLINE struct avf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
+{
+	return avf_ptype_lookup[ptype];
+}
+
+/* prototype for functions used for SW spinlocks */
+void avf_init_spinlock(struct avf_spinlock *sp);
+void avf_acquire_spinlock(struct avf_spinlock *sp);
+void avf_release_spinlock(struct avf_spinlock *sp);
+void avf_destroy_spinlock(struct avf_spinlock *sp);
+
+/* avf_common for VF drivers */
+void avf_parse_hw_config(struct avf_hw *hw,
+			     struct virtchnl_vf_resource *msg);
+enum avf_status_code avf_reset(struct avf_hw *hw);
+enum avf_status_code avf_aq_send_msg_to_pf(struct avf_hw *hw,
+				enum virtchnl_ops v_opcode,
+				enum avf_status_code v_retval,
+				u8 *msg, u16 msglen,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_set_filter_control(struct avf_hw *hw,
+				struct avf_filter_control_settings *settings);
+enum avf_status_code avf_aq_add_rem_control_packet_filter(struct avf_hw *hw,
+				u8 *mac_addr, u16 ethtype, u16 flags,
+				u16 vsi_seid, u16 queue, bool is_add,
+				struct avf_control_filter_stats *stats,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_debug_dump(struct avf_hw *hw, u8 cluster_id,
+				u8 table_id, u32 start_index, u16 buff_size,
+				void *buff, u16 *ret_buff_size,
+				u8 *ret_next_table, u32 *ret_next_index,
+				struct avf_asq_cmd_details *cmd_details);
+void avf_add_filter_to_drop_tx_flow_control_frames(struct avf_hw *hw,
+						    u16 vsi_seid);
+enum avf_status_code avf_aq_rx_ctl_read_register(struct avf_hw *hw,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr);
+enum avf_status_code avf_aq_rx_ctl_write_register(struct avf_hw *hw,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val);
+enum avf_status_code avf_aq_set_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+
+enum avf_status_code avf_aq_set_arp_proxy_config(struct avf_hw *hw,
+			struct avf_aqc_arp_proxy_data *proxy_config,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_set_ns_proxy_table_entry(struct avf_hw *hw,
+			struct avf_aqc_ns_proxy_data *ns_proxy_table_entry,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_set_clear_wol_filter(struct avf_hw *hw,
+			u8 filter_index,
+			struct avf_aqc_set_wol_filter_data *filter,
+			bool set_filter, bool no_wol_tco,
+			bool filter_valid, bool no_wol_tco_valid,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_get_wake_event_reason(struct avf_hw *hw,
+			u16 *wake_reason,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_clear_all_wol_filters(struct avf_hw *hw,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_read_phy_register_clause22(struct avf_hw *hw,
+					u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register_clause22(struct avf_hw *hw,
+					u16 reg, u8 phy_addr, u16 value);
+enum avf_status_code avf_read_phy_register_clause45(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register_clause45(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 value);
+enum avf_status_code avf_read_phy_register(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 value);
+u8 avf_get_phy_address(struct avf_hw *hw, u8 dev_num);
+enum avf_status_code avf_blink_phy_link_led(struct avf_hw *hw,
+					      u32 time, u32 interval);
+enum avf_status_code avf_aq_write_ddp(struct avf_hw *hw, void *buff,
+					u16 buff_size, u32 track_id,
+					u32 *error_offset, u32 *error_info,
+					struct avf_asq_cmd_details *
+					cmd_details);
+enum avf_status_code avf_aq_get_ddp_list(struct avf_hw *hw, void *buff,
+					   u16 buff_size, u8 flags,
+					   struct avf_asq_cmd_details *
+					   cmd_details);
+struct avf_generic_seg_header *
+avf_find_segment_in_package(u32 segment_type,
+			     struct avf_package_header *pkg_header);
+struct avf_profile_section_header *
+avf_find_section_in_profile(u32 section_type,
+			     struct avf_profile_segment *profile);
+enum avf_status_code
+avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *avf_seg,
+		   u32 track_id);
+enum avf_status_code
+avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *avf_seg,
+		      u32 track_id);
+enum avf_status_code
+avf_add_pinfo_to_list(struct avf_hw *hw,
+		       struct avf_profile_segment *profile,
+		       u8 *profile_info_sec, u32 track_id);
+#endif /* _AVF_PROTOTYPE_H_ */
diff --git a/drivers/net/avf/base/avf_register.h b/drivers/net/avf/base/avf_register.h
new file mode 100644
index 0000000..ba5a9f3
--- /dev/null
+++ b/drivers/net/avf/base/avf_register.h
@@ -0,0 +1,346 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_REGISTER_H_
+#define _AVF_REGISTER_H_
+
+
+#define AVFMSIX_PBA1(_i)          (0x00002000 + ((_i) * 4)) /* _i=0...19 */ /* Reset: VFLR */
+#define AVFMSIX_PBA1_MAX_INDEX    19
+#define AVFMSIX_PBA1_PENBIT_SHIFT 0
+#define AVFMSIX_PBA1_PENBIT_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_PBA1_PENBIT_SHIFT)
+#define AVFMSIX_TADD1(_i)              (0x00002100 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TADD1_MAX_INDEX        639
+#define AVFMSIX_TADD1_MSIXTADD10_SHIFT 0
+#define AVFMSIX_TADD1_MSIXTADD10_MASK  AVF_MASK(0x3, AVFMSIX_TADD1_MSIXTADD10_SHIFT)
+#define AVFMSIX_TADD1_MSIXTADD_SHIFT   2
+#define AVFMSIX_TADD1_MSIXTADD_MASK    AVF_MASK(0x3FFFFFFF, AVFMSIX_TADD1_MSIXTADD_SHIFT)
+#define AVFMSIX_TMSG1(_i)            (0x00002108 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TMSG1_MAX_INDEX      639
+#define AVFMSIX_TMSG1_MSIXTMSG_SHIFT 0
+#define AVFMSIX_TMSG1_MSIXTMSG_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TMSG1_MSIXTMSG_SHIFT)
+#define AVFMSIX_TUADD1(_i)             (0x00002104 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TUADD1_MAX_INDEX       639
+#define AVFMSIX_TUADD1_MSIXTUADD_SHIFT 0
+#define AVFMSIX_TUADD1_MSIXTUADD_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TUADD1_MSIXTUADD_SHIFT)
+#define AVFMSIX_TVCTRL1(_i)        (0x0000210C + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TVCTRL1_MAX_INDEX  639
+#define AVFMSIX_TVCTRL1_MASK_SHIFT 0
+#define AVFMSIX_TVCTRL1_MASK_MASK  AVF_MASK(0x1, AVFMSIX_TVCTRL1_MASK_SHIFT)
+#define AVF_ARQBAH1              0x00006000 /* Reset: EMPR */
+#define AVF_ARQBAH1_ARQBAH_SHIFT 0
+#define AVF_ARQBAH1_ARQBAH_MASK  AVF_MASK(0xFFFFFFFF, AVF_ARQBAH1_ARQBAH_SHIFT)
+#define AVF_ARQBAL1              0x00006C00 /* Reset: EMPR */
+#define AVF_ARQBAL1_ARQBAL_SHIFT 0
+#define AVF_ARQBAL1_ARQBAL_MASK  AVF_MASK(0xFFFFFFFF, AVF_ARQBAL1_ARQBAL_SHIFT)
+#define AVF_ARQH1            0x00007400 /* Reset: EMPR */
+#define AVF_ARQH1_ARQH_SHIFT 0
+#define AVF_ARQH1_ARQH_MASK  AVF_MASK(0x3FF, AVF_ARQH1_ARQH_SHIFT)
+#define AVF_ARQLEN1                 0x00008000 /* Reset: EMPR */
+#define AVF_ARQLEN1_ARQLEN_SHIFT    0
+#define AVF_ARQLEN1_ARQLEN_MASK     AVF_MASK(0x3FF, AVF_ARQLEN1_ARQLEN_SHIFT)
+#define AVF_ARQLEN1_ARQVFE_SHIFT    28
+#define AVF_ARQLEN1_ARQVFE_MASK     AVF_MASK(0x1, AVF_ARQLEN1_ARQVFE_SHIFT)
+#define AVF_ARQLEN1_ARQOVFL_SHIFT   29
+#define AVF_ARQLEN1_ARQOVFL_MASK    AVF_MASK(0x1, AVF_ARQLEN1_ARQOVFL_SHIFT)
+#define AVF_ARQLEN1_ARQCRIT_SHIFT   30
+#define AVF_ARQLEN1_ARQCRIT_MASK    AVF_MASK(0x1, AVF_ARQLEN1_ARQCRIT_SHIFT)
+#define AVF_ARQLEN1_ARQENABLE_SHIFT 31
+#define AVF_ARQLEN1_ARQENABLE_MASK  AVF_MASK(0x1, AVF_ARQLEN1_ARQENABLE_SHIFT)
+#define AVF_ARQT1            0x00007000 /* Reset: EMPR */
+#define AVF_ARQT1_ARQT_SHIFT 0
+#define AVF_ARQT1_ARQT_MASK  AVF_MASK(0x3FF, AVF_ARQT1_ARQT_SHIFT)
+#define AVF_ATQBAH1              0x00007800 /* Reset: EMPR */
+#define AVF_ATQBAH1_ATQBAH_SHIFT 0
+#define AVF_ATQBAH1_ATQBAH_MASK  AVF_MASK(0xFFFFFFFF, AVF_ATQBAH1_ATQBAH_SHIFT)
+#define AVF_ATQBAL1              0x00007C00 /* Reset: EMPR */
+#define AVF_ATQBAL1_ATQBAL_SHIFT 0
+#define AVF_ATQBAL1_ATQBAL_MASK  AVF_MASK(0xFFFFFFFF, AVF_ATQBAL1_ATQBAL_SHIFT)
+#define AVF_ATQH1            0x00006400 /* Reset: EMPR */
+#define AVF_ATQH1_ATQH_SHIFT 0
+#define AVF_ATQH1_ATQH_MASK  AVF_MASK(0x3FF, AVF_ATQH1_ATQH_SHIFT)
+#define AVF_ATQLEN1                 0x00006800 /* Reset: EMPR */
+#define AVF_ATQLEN1_ATQLEN_SHIFT    0
+#define AVF_ATQLEN1_ATQLEN_MASK     AVF_MASK(0x3FF, AVF_ATQLEN1_ATQLEN_SHIFT)
+#define AVF_ATQLEN1_ATQVFE_SHIFT    28
+#define AVF_ATQLEN1_ATQVFE_MASK     AVF_MASK(0x1, AVF_ATQLEN1_ATQVFE_SHIFT)
+#define AVF_ATQLEN1_ATQOVFL_SHIFT   29
+#define AVF_ATQLEN1_ATQOVFL_MASK    AVF_MASK(0x1, AVF_ATQLEN1_ATQOVFL_SHIFT)
+#define AVF_ATQLEN1_ATQCRIT_SHIFT   30
+#define AVF_ATQLEN1_ATQCRIT_MASK    AVF_MASK(0x1, AVF_ATQLEN1_ATQCRIT_SHIFT)
+#define AVF_ATQLEN1_ATQENABLE_SHIFT 31
+#define AVF_ATQLEN1_ATQENABLE_MASK  AVF_MASK(0x1, AVF_ATQLEN1_ATQENABLE_SHIFT)
+#define AVF_ATQT1            0x00008400 /* Reset: EMPR */
+#define AVF_ATQT1_ATQT_SHIFT 0
+#define AVF_ATQT1_ATQT_MASK  AVF_MASK(0x3FF, AVF_ATQT1_ATQT_SHIFT)
+#define AVFGEN_RSTAT                 0x00008800 /* Reset: VFR */
+#define AVFGEN_RSTAT_VFR_STATE_SHIFT 0
+#define AVFGEN_RSTAT_VFR_STATE_MASK  AVF_MASK(0x3, AVFGEN_RSTAT_VFR_STATE_SHIFT)
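+/* Illustrative decode of the VF reset state (the result is compared
+ * against the VFR states defined in virtchnl.h):
+ *	rstat = (rd32(hw, AVFGEN_RSTAT) & AVFGEN_RSTAT_VFR_STATE_MASK) >>
+ *		AVFGEN_RSTAT_VFR_STATE_SHIFT;
+ */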
+#define AVFINT_DYN_CTL01                       0x00005C00 /* Reset: VFR */
+#define AVFINT_DYN_CTL01_INTENA_SHIFT          0
+#define AVFINT_DYN_CTL01_INTENA_MASK           AVF_MASK(0x1, AVFINT_DYN_CTL01_INTENA_SHIFT)
+#define AVFINT_DYN_CTL01_CLEARPBA_SHIFT        1
+#define AVFINT_DYN_CTL01_CLEARPBA_MASK         AVF_MASK(0x1, AVFINT_DYN_CTL01_CLEARPBA_SHIFT)
+#define AVFINT_DYN_CTL01_SWINT_TRIG_SHIFT      2
+#define AVFINT_DYN_CTL01_SWINT_TRIG_MASK       AVF_MASK(0x1, AVFINT_DYN_CTL01_SWINT_TRIG_SHIFT)
+#define AVFINT_DYN_CTL01_ITR_INDX_SHIFT        3
+#define AVFINT_DYN_CTL01_ITR_INDX_MASK         AVF_MASK(0x3, AVFINT_DYN_CTL01_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTL01_INTERVAL_SHIFT        5
+#define AVFINT_DYN_CTL01_INTERVAL_MASK         AVF_MASK(0xFFF, AVFINT_DYN_CTL01_INTERVAL_SHIFT)
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT 24
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_MASK  AVF_MASK(0x1, AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT)
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT     25
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_MASK      AVF_MASK(0x3, AVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTL01_INTENA_MSK_SHIFT      31
+#define AVFINT_DYN_CTL01_INTENA_MSK_MASK       AVF_MASK(0x1, AVFINT_DYN_CTL01_INTENA_MSK_SHIFT)
+#define AVFINT_DYN_CTLN1(_INTVF)               (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */
+#define AVFINT_DYN_CTLN1_MAX_INDEX             15
+#define AVFINT_DYN_CTLN1_INTENA_SHIFT          0
+#define AVFINT_DYN_CTLN1_INTENA_MASK           AVF_MASK(0x1, AVFINT_DYN_CTLN1_INTENA_SHIFT)
+#define AVFINT_DYN_CTLN1_CLEARPBA_SHIFT        1
+#define AVFINT_DYN_CTLN1_CLEARPBA_MASK         AVF_MASK(0x1, AVFINT_DYN_CTLN1_CLEARPBA_SHIFT)
+#define AVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT      2
+#define AVFINT_DYN_CTLN1_SWINT_TRIG_MASK       AVF_MASK(0x1, AVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT)
+#define AVFINT_DYN_CTLN1_ITR_INDX_SHIFT        3
+#define AVFINT_DYN_CTLN1_ITR_INDX_MASK         AVF_MASK(0x3, AVFINT_DYN_CTLN1_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTLN1_INTERVAL_SHIFT        5
+#define AVFINT_DYN_CTLN1_INTERVAL_MASK         AVF_MASK(0xFFF, AVFINT_DYN_CTLN1_INTERVAL_SHIFT)
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT 24
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_MASK  AVF_MASK(0x1, AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT)
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT     25
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_MASK      AVF_MASK(0x3, AVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTLN1_INTENA_MSK_SHIFT      31
+#define AVFINT_DYN_CTLN1_INTENA_MSK_MASK       AVF_MASK(0x1, AVFINT_DYN_CTLN1_INTENA_MSK_SHIFT)
+#define AVFINT_ICR0_ENA1                        0x00005000 /* Reset: CORER */
+#define AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT 25
+#define AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_MASK  AVF_MASK(0x1, AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT)
+#define AVFINT_ICR0_ENA1_ADMINQ_SHIFT           30
+#define AVFINT_ICR0_ENA1_ADMINQ_MASK            AVF_MASK(0x1, AVFINT_ICR0_ENA1_ADMINQ_SHIFT)
+#define AVFINT_ICR0_ENA1_RSVD_SHIFT             31
+#define AVFINT_ICR0_ENA1_RSVD_MASK              AVF_MASK(0x1, AVFINT_ICR0_ENA1_RSVD_SHIFT)
+#define AVFINT_ICR01                        0x00004800 /* Reset: CORER */
+#define AVFINT_ICR01_INTEVENT_SHIFT         0
+#define AVFINT_ICR01_INTEVENT_MASK          AVF_MASK(0x1, AVFINT_ICR01_INTEVENT_SHIFT)
+#define AVFINT_ICR01_QUEUE_0_SHIFT          1
+#define AVFINT_ICR01_QUEUE_0_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_0_SHIFT)
+#define AVFINT_ICR01_QUEUE_1_SHIFT          2
+#define AVFINT_ICR01_QUEUE_1_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_1_SHIFT)
+#define AVFINT_ICR01_QUEUE_2_SHIFT          3
+#define AVFINT_ICR01_QUEUE_2_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_2_SHIFT)
+#define AVFINT_ICR01_QUEUE_3_SHIFT          4
+#define AVFINT_ICR01_QUEUE_3_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_3_SHIFT)
+#define AVFINT_ICR01_LINK_STAT_CHANGE_SHIFT 25
+#define AVFINT_ICR01_LINK_STAT_CHANGE_MASK  AVF_MASK(0x1, AVFINT_ICR01_LINK_STAT_CHANGE_SHIFT)
+#define AVFINT_ICR01_ADMINQ_SHIFT           30
+#define AVFINT_ICR01_ADMINQ_MASK            AVF_MASK(0x1, AVFINT_ICR01_ADMINQ_SHIFT)
+#define AVFINT_ICR01_SWINT_SHIFT            31
+#define AVFINT_ICR01_SWINT_MASK             AVF_MASK(0x1, AVFINT_ICR01_SWINT_SHIFT)
+#define AVFINT_ITR01(_i)            (0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset: VFR */
+#define AVFINT_ITR01_MAX_INDEX      2
+#define AVFINT_ITR01_INTERVAL_SHIFT 0
+#define AVFINT_ITR01_INTERVAL_MASK  AVF_MASK(0xFFF, AVFINT_ITR01_INTERVAL_SHIFT)
+#define AVFINT_ITRN1(_i, _INTVF)     (0x00002800 + ((_i) * 64 + (_INTVF) * 4)) /* _i=0...2, _INTVF=0...15 */ /* Reset: VFR */
+#define AVFINT_ITRN1_MAX_INDEX      2
+#define AVFINT_ITRN1_INTERVAL_SHIFT 0
+#define AVFINT_ITRN1_INTERVAL_MASK  AVF_MASK(0xFFF, AVFINT_ITRN1_INTERVAL_SHIFT)
+#define AVFINT_STAT_CTL01                      0x00005400 /* Reset: CORER */
+#define AVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT 2
+#define AVFINT_STAT_CTL01_OTHER_ITR_INDX_MASK  AVF_MASK(0x3, AVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT)
+#define AVF_QRX_TAIL1(_Q)        (0x00002000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define AVF_QRX_TAIL1_MAX_INDEX  15
+#define AVF_QRX_TAIL1_TAIL_SHIFT 0
+#define AVF_QRX_TAIL1_TAIL_MASK  AVF_MASK(0x1FFF, AVF_QRX_TAIL1_TAIL_SHIFT)
+#define AVF_QTX_TAIL1(_Q)        (0x00000000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: PFR */
+#define AVF_QTX_TAIL1_MAX_INDEX  15
+#define AVF_QTX_TAIL1_TAIL_SHIFT 0
+#define AVF_QTX_TAIL1_TAIL_MASK  AVF_MASK(0x1FFF, AVF_QTX_TAIL1_TAIL_SHIFT)
+#define AVFMSIX_PBA              0x00002000 /* Reset: VFLR */
+#define AVFMSIX_PBA_PENBIT_SHIFT 0
+#define AVFMSIX_PBA_PENBIT_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_PBA_PENBIT_SHIFT)
+#define AVFMSIX_TADD(_i)              (0x00000000 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TADD_MAX_INDEX        16
+#define AVFMSIX_TADD_MSIXTADD10_SHIFT 0
+#define AVFMSIX_TADD_MSIXTADD10_MASK  AVF_MASK(0x3, AVFMSIX_TADD_MSIXTADD10_SHIFT)
+#define AVFMSIX_TADD_MSIXTADD_SHIFT   2
+#define AVFMSIX_TADD_MSIXTADD_MASK    AVF_MASK(0x3FFFFFFF, AVFMSIX_TADD_MSIXTADD_SHIFT)
+#define AVFMSIX_TMSG(_i)            (0x00000008 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TMSG_MAX_INDEX      16
+#define AVFMSIX_TMSG_MSIXTMSG_SHIFT 0
+#define AVFMSIX_TMSG_MSIXTMSG_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TMSG_MSIXTMSG_SHIFT)
+#define AVFMSIX_TUADD(_i)             (0x00000004 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TUADD_MAX_INDEX       16
+#define AVFMSIX_TUADD_MSIXTUADD_SHIFT 0
+#define AVFMSIX_TUADD_MSIXTUADD_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TUADD_MSIXTUADD_SHIFT)
+#define AVFMSIX_TVCTRL(_i)        (0x0000000C + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TVCTRL_MAX_INDEX  16
+#define AVFMSIX_TVCTRL_MASK_SHIFT 0
+#define AVFMSIX_TVCTRL_MASK_MASK  AVF_MASK(0x1, AVFMSIX_TVCTRL_MASK_SHIFT)
+#define AVFCM_PE_ERRDATA                  0x0000DC00 /* Reset: VFR */
+#define AVFCM_PE_ERRDATA_ERROR_CODE_SHIFT 0
+#define AVFCM_PE_ERRDATA_ERROR_CODE_MASK  AVF_MASK(0xF, AVFCM_PE_ERRDATA_ERROR_CODE_SHIFT)
+#define AVFCM_PE_ERRDATA_Q_TYPE_SHIFT     4
+#define AVFCM_PE_ERRDATA_Q_TYPE_MASK      AVF_MASK(0x7, AVFCM_PE_ERRDATA_Q_TYPE_SHIFT)
+#define AVFCM_PE_ERRDATA_Q_NUM_SHIFT      8
+#define AVFCM_PE_ERRDATA_Q_NUM_MASK       AVF_MASK(0x3FFFF, AVFCM_PE_ERRDATA_Q_NUM_SHIFT)
+#define AVFCM_PE_ERRINFO                     0x0000D800 /* Reset: VFR */
+#define AVFCM_PE_ERRINFO_ERROR_VALID_SHIFT   0
+#define AVFCM_PE_ERRINFO_ERROR_VALID_MASK    AVF_MASK(0x1, AVFCM_PE_ERRINFO_ERROR_VALID_SHIFT)
+#define AVFCM_PE_ERRINFO_ERROR_INST_SHIFT    4
+#define AVFCM_PE_ERRINFO_ERROR_INST_MASK     AVF_MASK(0x7, AVFCM_PE_ERRINFO_ERROR_INST_SHIFT)
+#define AVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT 8
+#define AVFCM_PE_ERRINFO_DBL_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT)
+#define AVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT 16
+#define AVFCM_PE_ERRINFO_RLU_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT)
+#define AVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT 24
+#define AVFCM_PE_ERRINFO_RLS_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT)
+#define AVFQF_HENA(_i)             (0x0000C400 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */
+#define AVFQF_HENA_MAX_INDEX       1
+#define AVFQF_HENA_PTYPE_ENA_SHIFT 0
+#define AVFQF_HENA_PTYPE_ENA_MASK  AVF_MASK(0xFFFFFFFF, AVFQF_HENA_PTYPE_ENA_SHIFT)
+#define AVFQF_HKEY(_i)         (0x0000CC00 + ((_i) * 4)) /* _i=0...12 */ /* Reset: CORER */
+#define AVFQF_HKEY_MAX_INDEX   12
+#define AVFQF_HKEY_KEY_0_SHIFT 0
+#define AVFQF_HKEY_KEY_0_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_0_SHIFT)
+#define AVFQF_HKEY_KEY_1_SHIFT 8
+#define AVFQF_HKEY_KEY_1_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_1_SHIFT)
+#define AVFQF_HKEY_KEY_2_SHIFT 16
+#define AVFQF_HKEY_KEY_2_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_2_SHIFT)
+#define AVFQF_HKEY_KEY_3_SHIFT 24
+#define AVFQF_HKEY_KEY_3_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_3_SHIFT)
+#define AVFQF_HLUT(_i)        (0x0000D000 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define AVFQF_HLUT_MAX_INDEX  15
+#define AVFQF_HLUT_LUT0_SHIFT 0
+#define AVFQF_HLUT_LUT0_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT0_SHIFT)
+#define AVFQF_HLUT_LUT1_SHIFT 8
+#define AVFQF_HLUT_LUT1_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT1_SHIFT)
+#define AVFQF_HLUT_LUT2_SHIFT 16
+#define AVFQF_HLUT_LUT2_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT2_SHIFT)
+#define AVFQF_HLUT_LUT3_SHIFT 24
+#define AVFQF_HLUT_LUT3_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT3_SHIFT)
+#define AVFQF_HREGION(_i)                  (0x0000D400 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */
+#define AVFQF_HREGION_MAX_INDEX            7
+#define AVFQF_HREGION_OVERRIDE_ENA_0_SHIFT 0
+#define AVFQF_HREGION_OVERRIDE_ENA_0_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_0_SHIFT)
+#define AVFQF_HREGION_REGION_0_SHIFT       1
+#define AVFQF_HREGION_REGION_0_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_0_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_1_SHIFT 4
+#define AVFQF_HREGION_OVERRIDE_ENA_1_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_1_SHIFT)
+#define AVFQF_HREGION_REGION_1_SHIFT       5
+#define AVFQF_HREGION_REGION_1_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_1_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_2_SHIFT 8
+#define AVFQF_HREGION_OVERRIDE_ENA_2_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_2_SHIFT)
+#define AVFQF_HREGION_REGION_2_SHIFT       9
+#define AVFQF_HREGION_REGION_2_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_2_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_3_SHIFT 12
+#define AVFQF_HREGION_OVERRIDE_ENA_3_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_3_SHIFT)
+#define AVFQF_HREGION_REGION_3_SHIFT       13
+#define AVFQF_HREGION_REGION_3_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_3_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_4_SHIFT 16
+#define AVFQF_HREGION_OVERRIDE_ENA_4_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_4_SHIFT)
+#define AVFQF_HREGION_REGION_4_SHIFT       17
+#define AVFQF_HREGION_REGION_4_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_4_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_5_SHIFT 20
+#define AVFQF_HREGION_OVERRIDE_ENA_5_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_5_SHIFT)
+#define AVFQF_HREGION_REGION_5_SHIFT       21
+#define AVFQF_HREGION_REGION_5_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_5_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_6_SHIFT 24
+#define AVFQF_HREGION_OVERRIDE_ENA_6_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_6_SHIFT)
+#define AVFQF_HREGION_REGION_6_SHIFT       25
+#define AVFQF_HREGION_REGION_6_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_6_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_7_SHIFT 28
+#define AVFQF_HREGION_OVERRIDE_ENA_7_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_7_SHIFT)
+#define AVFQF_HREGION_REGION_7_SHIFT       29
+#define AVFQF_HREGION_REGION_7_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_7_SHIFT)
+
+#define AVFINT_DYN_CTL01_WB_ON_ITR_SHIFT       30
+#define AVFINT_DYN_CTL01_WB_ON_ITR_MASK        AVF_MASK(0x1, AVFINT_DYN_CTL01_WB_ON_ITR_SHIFT)
+#define AVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT       30
+#define AVFINT_DYN_CTLN1_WB_ON_ITR_MASK        AVF_MASK(0x1, AVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT)
+#define AVFPE_AEQALLOC1               0x0000A400 /* Reset: VFR */
+#define AVFPE_AEQALLOC1_AECOUNT_SHIFT 0
+#define AVFPE_AEQALLOC1_AECOUNT_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_AEQALLOC1_AECOUNT_SHIFT)
+#define AVFPE_CCQPHIGH1                  0x00009800 /* Reset: VFR */
+#define AVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT 0
+#define AVFPE_CCQPHIGH1_PECCQPHIGH_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT)
+#define AVFPE_CCQPLOW1                 0x0000AC00 /* Reset: VFR */
+#define AVFPE_CCQPLOW1_PECCQPLOW_SHIFT 0
+#define AVFPE_CCQPLOW1_PECCQPLOW_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_CCQPLOW1_PECCQPLOW_SHIFT)
+#define AVFPE_CCQPSTATUS1                   0x0000B800 /* Reset: VFR */
+#define AVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT   0
+#define AVFPE_CCQPSTATUS1_CCQP_DONE_MASK    AVF_MASK(0x1, AVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT)
+#define AVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT 4
+#define AVFPE_CCQPSTATUS1_HMC_PROFILE_MASK  AVF_MASK(0x7, AVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT)
+#define AVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT 16
+#define AVFPE_CCQPSTATUS1_RDMA_EN_VFS_MASK  AVF_MASK(0x3F, AVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT)
+#define AVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT    31
+#define AVFPE_CCQPSTATUS1_CCQP_ERR_MASK     AVF_MASK(0x1, AVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT)
+#define AVFPE_CQACK1              0x0000B000 /* Reset: VFR */
+#define AVFPE_CQACK1_PECQID_SHIFT 0
+#define AVFPE_CQACK1_PECQID_MASK  AVF_MASK(0x1FFFF, AVFPE_CQACK1_PECQID_SHIFT)
+#define AVFPE_CQARM1              0x0000B400 /* Reset: VFR */
+#define AVFPE_CQARM1_PECQID_SHIFT 0
+#define AVFPE_CQARM1_PECQID_MASK  AVF_MASK(0x1FFFF, AVFPE_CQARM1_PECQID_SHIFT)
+#define AVFPE_CQPDB1              0x0000BC00 /* Reset: VFR */
+#define AVFPE_CQPDB1_WQHEAD_SHIFT 0
+#define AVFPE_CQPDB1_WQHEAD_MASK  AVF_MASK(0x7FF, AVFPE_CQPDB1_WQHEAD_SHIFT)
+#define AVFPE_CQPERRCODES1                      0x00009C00 /* Reset: VFR */
+#define AVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT 0
+#define AVFPE_CQPERRCODES1_CQP_MINOR_CODE_MASK  AVF_MASK(0xFFFF, AVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT)
+#define AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT 16
+#define AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_MASK  AVF_MASK(0xFFFF, AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT)
+#define AVFPE_CQPTAIL1                  0x0000A000 /* Reset: VFR */
+#define AVFPE_CQPTAIL1_WQTAIL_SHIFT     0
+#define AVFPE_CQPTAIL1_WQTAIL_MASK      AVF_MASK(0x7FF, AVFPE_CQPTAIL1_WQTAIL_SHIFT)
+#define AVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT 31
+#define AVFPE_CQPTAIL1_CQP_OP_ERR_MASK  AVF_MASK(0x1, AVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT)
+#define AVFPE_IPCONFIG01                        0x00008C00 /* Reset: VFR */
+#define AVFPE_IPCONFIG01_PEIPID_SHIFT           0
+#define AVFPE_IPCONFIG01_PEIPID_MASK            AVF_MASK(0xFFFF, AVFPE_IPCONFIG01_PEIPID_SHIFT)
+#define AVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT 16
+#define AVFPE_IPCONFIG01_USEENTIREIDRANGE_MASK  AVF_MASK(0x1, AVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT)
+#define AVFPE_MRTEIDXMASK1                       0x00009000 /* Reset: VFR */
+#define AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT 0
+#define AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_MASK  AVF_MASK(0x1F, AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT)
+#define AVFPE_RCVUNEXPECTEDERROR1                        0x00009400 /* Reset: VFR */
+#define AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT 0
+#define AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_MASK  AVF_MASK(0xFFFFFF, AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT)
+#define AVFPE_TCPNOWTIMER1               0x0000A800 /* Reset: VFR */
+#define AVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT 0
+#define AVFPE_TCPNOWTIMER1_TCP_NOW_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT)
+#define AVFPE_WQEALLOC1                      0x0000C000 /* Reset: VFR */
+#define AVFPE_WQEALLOC1_PEQPID_SHIFT         0
+#define AVFPE_WQEALLOC1_PEQPID_MASK          AVF_MASK(0x3FFFF, AVFPE_WQEALLOC1_PEQPID_SHIFT)
+#define AVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT 20
+#define AVFPE_WQEALLOC1_WQE_DESC_INDEX_MASK  AVF_MASK(0xFFF, AVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT)
+
+#endif /* _AVF_REGISTER_H_ */
diff --git a/drivers/net/avf/base/avf_status.h b/drivers/net/avf/base/avf_status.h
new file mode 100644
index 0000000..644c16d
--- /dev/null
+++ b/drivers/net/avf/base/avf_status.h
@@ -0,0 +1,107 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_STATUS_H_
+#define _AVF_STATUS_H_
+
+/* Error Codes */
+enum avf_status_code {
+	AVF_SUCCESS				= 0,
+	AVF_ERR_NVM				= -1,
+	AVF_ERR_NVM_CHECKSUM			= -2,
+	AVF_ERR_PHY				= -3,
+	AVF_ERR_CONFIG				= -4,
+	AVF_ERR_PARAM				= -5,
+	AVF_ERR_MAC_TYPE			= -6,
+	AVF_ERR_UNKNOWN_PHY			= -7,
+	AVF_ERR_LINK_SETUP			= -8,
+	AVF_ERR_ADAPTER_STOPPED		= -9,
+	AVF_ERR_INVALID_MAC_ADDR		= -10,
+	AVF_ERR_DEVICE_NOT_SUPPORTED		= -11,
+	AVF_ERR_MASTER_REQUESTS_PENDING	= -12,
+	AVF_ERR_INVALID_LINK_SETTINGS		= -13,
+	AVF_ERR_AUTONEG_NOT_COMPLETE		= -14,
+	AVF_ERR_RESET_FAILED			= -15,
+	AVF_ERR_SWFW_SYNC			= -16,
+	AVF_ERR_NO_AVAILABLE_VSI		= -17,
+	AVF_ERR_NO_MEMORY			= -18,
+	AVF_ERR_BAD_PTR			= -19,
+	AVF_ERR_RING_FULL			= -20,
+	AVF_ERR_INVALID_PD_ID			= -21,
+	AVF_ERR_INVALID_QP_ID			= -22,
+	AVF_ERR_INVALID_CQ_ID			= -23,
+	AVF_ERR_INVALID_CEQ_ID			= -24,
+	AVF_ERR_INVALID_AEQ_ID			= -25,
+	AVF_ERR_INVALID_SIZE			= -26,
+	AVF_ERR_INVALID_ARP_INDEX		= -27,
+	AVF_ERR_INVALID_FPM_FUNC_ID		= -28,
+	AVF_ERR_QP_INVALID_MSG_SIZE		= -29,
+	AVF_ERR_QP_TOOMANY_WRS_POSTED		= -30,
+	AVF_ERR_INVALID_FRAG_COUNT		= -31,
+	AVF_ERR_QUEUE_EMPTY			= -32,
+	AVF_ERR_INVALID_ALIGNMENT		= -33,
+	AVF_ERR_FLUSHED_QUEUE			= -34,
+	AVF_ERR_INVALID_PUSH_PAGE_INDEX	= -35,
+	AVF_ERR_INVALID_IMM_DATA_SIZE		= -36,
+	AVF_ERR_TIMEOUT			= -37,
+	AVF_ERR_OPCODE_MISMATCH		= -38,
+	AVF_ERR_CQP_COMPL_ERROR		= -39,
+	AVF_ERR_INVALID_VF_ID			= -40,
+	AVF_ERR_INVALID_HMCFN_ID		= -41,
+	AVF_ERR_BACKING_PAGE_ERROR		= -42,
+	AVF_ERR_NO_PBLCHUNKS_AVAILABLE		= -43,
+	AVF_ERR_INVALID_PBLE_INDEX		= -44,
+	AVF_ERR_INVALID_SD_INDEX		= -45,
+	AVF_ERR_INVALID_PAGE_DESC_INDEX	= -46,
+	AVF_ERR_INVALID_SD_TYPE		= -47,
+	AVF_ERR_MEMCPY_FAILED			= -48,
+	AVF_ERR_INVALID_HMC_OBJ_INDEX		= -49,
+	AVF_ERR_INVALID_HMC_OBJ_COUNT		= -50,
+	AVF_ERR_INVALID_SRQ_ARM_LIMIT		= -51,
+	AVF_ERR_SRQ_ENABLED			= -52,
+	AVF_ERR_ADMIN_QUEUE_ERROR		= -53,
+	AVF_ERR_ADMIN_QUEUE_TIMEOUT		= -54,
+	AVF_ERR_BUF_TOO_SHORT			= -55,
+	AVF_ERR_ADMIN_QUEUE_FULL		= -56,
+	AVF_ERR_ADMIN_QUEUE_NO_WORK		= -57,
+	AVF_ERR_BAD_IWARP_CQE			= -58,
+	AVF_ERR_NVM_BLANK_MODE			= -59,
+	AVF_ERR_NOT_IMPLEMENTED		= -60,
+	AVF_ERR_PE_DOORBELL_NOT_ENABLED	= -61,
+	AVF_ERR_DIAG_TEST_FAILED		= -62,
+	AVF_ERR_NOT_READY			= -63,
+	AVF_NOT_SUPPORTED			= -64,
+	AVF_ERR_FIRMWARE_API_VERSION		= -65,
+};
+
+#endif /* _AVF_STATUS_H_ */
diff --git a/drivers/net/avf/base/avf_type.h b/drivers/net/avf/base/avf_type.h
new file mode 100644
index 0000000..36ad76d
--- /dev/null
+++ b/drivers/net/avf/base/avf_type.h
@@ -0,0 +1,1990 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_TYPE_H_
+#define _AVF_TYPE_H_
+
+#include "avf_status.h"
+#include "avf_osdep.h"
+#include "avf_register.h"
+#include "avf_adminq.h"
+#include "avf_hmc.h"
+#include "avf_lan_hmc.h"
+#include "avf_devids.h"
+
+#define UNREFERENCED_XPARAMETER
+#define UNREFERENCED_1PARAMETER(_p) (_p);
+#define UNREFERENCED_2PARAMETER(_p, _q) (_p); (_q);
+#define UNREFERENCED_3PARAMETER(_p, _q, _r) (_p); (_q); (_r);
+#define UNREFERENCED_4PARAMETER(_p, _q, _r, _s) (_p); (_q); (_r); (_s);
+#define UNREFERENCED_5PARAMETER(_p, _q, _r, _s, _t) (_p); (_q); (_r); (_s); (_t);
+
+#ifndef LINUX_MACROS
+#ifndef BIT
+#define BIT(a) (1UL << (a))
+#endif /* BIT */
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif /* BIT_ULL */
+#endif /* LINUX_MACROS */
+
+#ifndef AVF_MASK
+/* AVF_MASK is a macro used on 32 bit registers */
+#define AVF_MASK(mask, shift) ((mask) << (shift))
+#endif
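+
+/* Example: AVF_MASK(0x3FF, AVF_ARQLEN1_ARQLEN_SHIFT) builds the 10-bit
+ * admin queue length field mask used throughout avf_register.h.
+ */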
+
+#define AVF_MAX_PF			16
+#define AVF_MAX_PF_VSI			64
+#define AVF_MAX_PF_QP			128
+#define AVF_MAX_VSI_QP			16
+#define AVF_MAX_VF_VSI			3
+#define AVF_MAX_CHAINED_RX_BUFFERS	5
+#define AVF_MAX_PF_UDP_OFFLOAD_PORTS	16
+
+/* something less than 1 minute */
+#define AVF_HEARTBEAT_TIMEOUT		(HZ * 50)
+
+/* Max default timeout in ms */
+#define AVF_MAX_NVM_TIMEOUT		18000
+
+/* Check whether address is multicast. */
+#define AVF_IS_MULTICAST(address) (bool)(((u8 *)(address))[0] & ((u8)0x01))
+
+/* Check whether an address is broadcast. */
+#define AVF_IS_BROADCAST(address)	\
+	((((u8 *)(address))[0] == ((u8)0xff)) && \
+	(((u8 *)(address))[1] == ((u8)0xff)))
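+
+/* Example: 01:00:5e:00:00:01 is multicast because the I/G bit (bit 0 of
+ * the first octet) is set; ff:ff:ff:ff:ff:ff satisfies the broadcast
+ * check, which only needs to inspect the first two octets.
+ */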
+
+/* Switch from ms to the 1usec global time (this is the GTIME resolution) */
+#define AVF_MS_TO_GTIME(time)		((time) * 1000)
+
+/* forward declaration */
+struct avf_hw;
+typedef void (*AVF_ADMINQ_CALLBACK)(struct avf_hw *, struct avf_aq_desc *);
+
+#ifndef ETH_ALEN
+#define ETH_ALEN	6
+#endif
+/* Data type manipulation macros. */
+#define AVF_HI_DWORD(x)	((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define AVF_LO_DWORD(x)	((u32)((x) & 0xFFFFFFFF))
+
+#define AVF_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
+#define AVF_LO_WORD(x)		((u16)((x) & 0xFFFF))
+
+#define AVF_HI_BYTE(x)		((u8)(((x) >> 8) & 0xFF))
+#define AVF_LO_BYTE(x)		((u8)((x) & 0xFF))
+
+/* Number of Transmit Descriptors must be a multiple of 8. */
+#define AVF_REQ_TX_DESCRIPTOR_MULTIPLE	8
+/* Number of Receive Descriptors must be a multiple of 32 if
+ * the number of descriptors is greater than 32.
+ */
+#define AVF_REQ_RX_DESCRIPTOR_MULTIPLE	32
+
+#define AVF_DESC_UNUSED(R)	\
+	((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
+	(R)->next_to_clean - (R)->next_to_use - 1)
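+
+/* Worked example for AVF_DESC_UNUSED(): with count = 512,
+ * next_to_clean = 10 and next_to_use = 500, the ring still has
+ * 512 + 10 - 500 - 1 = 21 descriptors available to post.
+ */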
+
+/* bitfields for Tx queue mapping in QTX_CTL */
+#define AVF_QTX_CTL_VF_QUEUE	0x0
+#define AVF_QTX_CTL_VM_QUEUE	0x1
+#define AVF_QTX_CTL_PF_QUEUE	0x2
+
+/* debug masks - set these bits in hw->debug_mask to control output */
+enum avf_debug_mask {
+	AVF_DEBUG_INIT			= 0x00000001,
+	AVF_DEBUG_RELEASE		= 0x00000002,
+
+	AVF_DEBUG_LINK			= 0x00000010,
+	AVF_DEBUG_PHY			= 0x00000020,
+	AVF_DEBUG_HMC			= 0x00000040,
+	AVF_DEBUG_NVM			= 0x00000080,
+	AVF_DEBUG_LAN			= 0x00000100,
+	AVF_DEBUG_FLOW			= 0x00000200,
+	AVF_DEBUG_DCB			= 0x00000400,
+	AVF_DEBUG_DIAG			= 0x00000800,
+	AVF_DEBUG_FD			= 0x00001000,
+	AVF_DEBUG_PACKAGE		= 0x00002000,
+
+	AVF_DEBUG_AQ_MESSAGE		= 0x01000000,
+	AVF_DEBUG_AQ_DESCRIPTOR	= 0x02000000,
+	AVF_DEBUG_AQ_DESC_BUFFER	= 0x04000000,
+	AVF_DEBUG_AQ_COMMAND		= 0x06000000,
+	AVF_DEBUG_AQ			= 0x0F000000,
+
+	AVF_DEBUG_USER			= 0xF0000000,
+
+	AVF_DEBUG_ALL			= 0xFFFFFFFF
+};
+
+/* PCI Bus Info */
+#define AVF_PCI_LINK_STATUS		0xB2
+#define AVF_PCI_LINK_WIDTH		0x3F0
+#define AVF_PCI_LINK_WIDTH_1		0x10
+#define AVF_PCI_LINK_WIDTH_2		0x20
+#define AVF_PCI_LINK_WIDTH_4		0x40
+#define AVF_PCI_LINK_WIDTH_8		0x80
+#define AVF_PCI_LINK_SPEED		0xF
+#define AVF_PCI_LINK_SPEED_2500	0x1
+#define AVF_PCI_LINK_SPEED_5000	0x2
+#define AVF_PCI_LINK_SPEED_8000	0x3
+
+#define AVF_MDIO_CLAUSE22_STCODE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_STCODE_SHIFT)
+#define AVF_MDIO_CLAUSE22_OPCODE_WRITE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE22_OPCODE_READ_MASK	AVF_MASK(2, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+
+#define AVF_MDIO_CLAUSE45_STCODE_MASK	AVF_MASK(0, \
+						  AVF_GLGEN_MSCA_STCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_ADDRESS_MASK	AVF_MASK(0, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_WRITE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_READ_INC_ADDR_MASK	AVF_MASK(2, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_READ_MASK	AVF_MASK(3, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+
+#define AVF_PHY_COM_REG_PAGE			0x1E
+#define AVF_PHY_LED_LINK_MODE_MASK		0xF0
+#define AVF_PHY_LED_MANUAL_ON			0x100
+#define AVF_PHY_LED_PROV_REG_1			0xC430
+#define AVF_PHY_LED_MODE_MASK			0xFFFF
+#define AVF_PHY_LED_MODE_ORIG			0x80000000
+
+/* Memory types */
+enum avf_memset_type {
+	AVF_NONDMA_MEM = 0,
+	AVF_DMA_MEM
+};
+
+/* Memcpy types */
+enum avf_memcpy_type {
+	AVF_NONDMA_TO_NONDMA = 0,
+	AVF_NONDMA_TO_DMA,
+	AVF_DMA_TO_DMA,
+	AVF_DMA_TO_NONDMA
+};
+
+/* These are structs for managing the hardware information and the operations.
+ * The structures of function pointers are filled out at init time when we
+ * know for sure exactly which hardware we're working with.  This gives us the
+ * flexibility of using the same main driver code but adapting to slightly
+ * different hardware needs as new parts are developed.  For this architecture,
+ * the Firmware and AdminQ are intended to insulate the driver from most of the
+ * future changes, but these structures will also do part of the job.
+ */
+enum avf_mac_type {
+	AVF_MAC_UNKNOWN = 0,
+	AVF_MAC_XL710,
+	AVF_MAC_VF,
+	AVF_MAC_X722,
+	AVF_MAC_X722_VF,
+	AVF_MAC_GENERIC,
+};
+
+enum avf_media_type {
+	AVF_MEDIA_TYPE_UNKNOWN = 0,
+	AVF_MEDIA_TYPE_FIBER,
+	AVF_MEDIA_TYPE_BASET,
+	AVF_MEDIA_TYPE_BACKPLANE,
+	AVF_MEDIA_TYPE_CX4,
+	AVF_MEDIA_TYPE_DA,
+	AVF_MEDIA_TYPE_VIRTUAL
+};
+
+enum avf_fc_mode {
+	AVF_FC_NONE = 0,
+	AVF_FC_RX_PAUSE,
+	AVF_FC_TX_PAUSE,
+	AVF_FC_FULL,
+	AVF_FC_PFC,
+	AVF_FC_DEFAULT
+};
+
+enum avf_set_fc_aq_failures {
+	AVF_SET_FC_AQ_FAIL_NONE = 0,
+	AVF_SET_FC_AQ_FAIL_GET = 1,
+	AVF_SET_FC_AQ_FAIL_SET = 2,
+	AVF_SET_FC_AQ_FAIL_UPDATE = 4,
+	AVF_SET_FC_AQ_FAIL_SET_UPDATE = 6
+};
+
+enum avf_vsi_type {
+	AVF_VSI_MAIN	= 0,
+	AVF_VSI_VMDQ1	= 1,
+	AVF_VSI_VMDQ2	= 2,
+	AVF_VSI_CTRL	= 3,
+	AVF_VSI_FCOE	= 4,
+	AVF_VSI_MIRROR	= 5,
+	AVF_VSI_SRIOV	= 6,
+	AVF_VSI_FDIR	= 7,
+	AVF_VSI_TYPE_UNKNOWN
+};
+
+enum avf_queue_type {
+	AVF_QUEUE_TYPE_RX = 0,
+	AVF_QUEUE_TYPE_TX,
+	AVF_QUEUE_TYPE_PE_CEQ,
+	AVF_QUEUE_TYPE_UNKNOWN
+};
+
+struct avf_link_status {
+	enum avf_aq_phy_type phy_type;
+	enum avf_aq_link_speed link_speed;
+	u8 link_info;
+	u8 an_info;
+	u8 req_fec_info;
+	u8 fec_info;
+	u8 ext_info;
+	u8 loopback;
+	/* is Link Status Event notification to SW enabled */
+	bool lse_enable;
+	u16 max_frame_size;
+	bool crc_enable;
+	u8 pacing;
+	u8 requested_speeds;
+	u8 module_type[3];
+	/* 1st byte: module identifier */
+#define AVF_MODULE_TYPE_SFP		0x03
+#define AVF_MODULE_TYPE_QSFP		0x0D
+	/* 2nd byte: ethernet compliance codes for 10/40G */
+#define AVF_MODULE_TYPE_40G_ACTIVE	0x01
+#define AVF_MODULE_TYPE_40G_LR4	0x02
+#define AVF_MODULE_TYPE_40G_SR4	0x04
+#define AVF_MODULE_TYPE_40G_CR4	0x08
+#define AVF_MODULE_TYPE_10G_BASE_SR	0x10
+#define AVF_MODULE_TYPE_10G_BASE_LR	0x20
+#define AVF_MODULE_TYPE_10G_BASE_LRM	0x40
+#define AVF_MODULE_TYPE_10G_BASE_ER	0x80
+	/* 3rd byte: ethernet compliance codes for 1G */
+#define AVF_MODULE_TYPE_1000BASE_SX	0x01
+#define AVF_MODULE_TYPE_1000BASE_LX	0x02
+#define AVF_MODULE_TYPE_1000BASE_CX	0x04
+#define AVF_MODULE_TYPE_1000BASE_T	0x08
+};
+
+struct avf_phy_info {
+	struct avf_link_status link_info;
+	struct avf_link_status link_info_old;
+	bool get_link_info;
+	enum avf_media_type media_type;
+	/* all the phy types the NVM is capable of */
+	u64 phy_types;
+};
+
+#define AVF_CAP_PHY_TYPE_SGMII BIT_ULL(AVF_PHY_TYPE_SGMII)
+#define AVF_CAP_PHY_TYPE_1000BASE_KX BIT_ULL(AVF_PHY_TYPE_1000BASE_KX)
+#define AVF_CAP_PHY_TYPE_10GBASE_KX4 BIT_ULL(AVF_PHY_TYPE_10GBASE_KX4)
+#define AVF_CAP_PHY_TYPE_10GBASE_KR BIT_ULL(AVF_PHY_TYPE_10GBASE_KR)
+#define AVF_CAP_PHY_TYPE_40GBASE_KR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_KR4)
+#define AVF_CAP_PHY_TYPE_XAUI BIT_ULL(AVF_PHY_TYPE_XAUI)
+#define AVF_CAP_PHY_TYPE_XFI BIT_ULL(AVF_PHY_TYPE_XFI)
+#define AVF_CAP_PHY_TYPE_SFI BIT_ULL(AVF_PHY_TYPE_SFI)
+#define AVF_CAP_PHY_TYPE_XLAUI BIT_ULL(AVF_PHY_TYPE_XLAUI)
+#define AVF_CAP_PHY_TYPE_XLPPI BIT_ULL(AVF_PHY_TYPE_XLPPI)
+#define AVF_CAP_PHY_TYPE_40GBASE_CR4_CU BIT_ULL(AVF_PHY_TYPE_40GBASE_CR4_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_CR1_CU BIT_ULL(AVF_PHY_TYPE_10GBASE_CR1_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_AOC BIT_ULL(AVF_PHY_TYPE_10GBASE_AOC)
+#define AVF_CAP_PHY_TYPE_40GBASE_AOC BIT_ULL(AVF_PHY_TYPE_40GBASE_AOC)
+#define AVF_CAP_PHY_TYPE_100BASE_TX BIT_ULL(AVF_PHY_TYPE_100BASE_TX)
+#define AVF_CAP_PHY_TYPE_1000BASE_T BIT_ULL(AVF_PHY_TYPE_1000BASE_T)
+#define AVF_CAP_PHY_TYPE_10GBASE_T BIT_ULL(AVF_PHY_TYPE_10GBASE_T)
+#define AVF_CAP_PHY_TYPE_10GBASE_SR BIT_ULL(AVF_PHY_TYPE_10GBASE_SR)
+#define AVF_CAP_PHY_TYPE_10GBASE_LR BIT_ULL(AVF_PHY_TYPE_10GBASE_LR)
+#define AVF_CAP_PHY_TYPE_10GBASE_SFPP_CU BIT_ULL(AVF_PHY_TYPE_10GBASE_SFPP_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_CR1 BIT_ULL(AVF_PHY_TYPE_10GBASE_CR1)
+#define AVF_CAP_PHY_TYPE_40GBASE_CR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_CR4)
+#define AVF_CAP_PHY_TYPE_40GBASE_SR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_SR4)
+#define AVF_CAP_PHY_TYPE_40GBASE_LR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_LR4)
+#define AVF_CAP_PHY_TYPE_1000BASE_SX BIT_ULL(AVF_PHY_TYPE_1000BASE_SX)
+#define AVF_CAP_PHY_TYPE_1000BASE_LX BIT_ULL(AVF_PHY_TYPE_1000BASE_LX)
+#define AVF_CAP_PHY_TYPE_1000BASE_T_OPTICAL \
+				BIT_ULL(AVF_PHY_TYPE_1000BASE_T_OPTICAL)
+#define AVF_CAP_PHY_TYPE_20GBASE_KR2 BIT_ULL(AVF_PHY_TYPE_20GBASE_KR2)
+/*
+ * The macro AVF_PHY_TYPE_OFFSET below implements a bit shift for some
+ * PHY types. There is an unused bit (31) in the AVF_CAP_PHY_TYPE_* bit
+ * fields but no corresponding gap in the avf_aq_phy_type enumeration, so
+ * a shift is needed to adjust for values larger than 31. The only
+ * affected values are the AVF_PHY_TYPE_25GBASE_* ones.
+ */
+#define AVF_PHY_TYPE_OFFSET 1
+#define AVF_CAP_PHY_TYPE_25GBASE_KR BIT_ULL(AVF_PHY_TYPE_25GBASE_KR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_CR BIT_ULL(AVF_PHY_TYPE_25GBASE_CR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_SR BIT_ULL(AVF_PHY_TYPE_25GBASE_SR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_LR BIT_ULL(AVF_PHY_TYPE_25GBASE_LR + \
+					     AVF_PHY_TYPE_OFFSET)
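+/* Example of the offset: with AVF_PHY_TYPE_OFFSET = 1, a 25G phy type
+ * enumerated as value N is reported in capability bit N + 1, which keeps
+ * bit 31 unused as described above.
+ */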
+#define AVF_HW_CAP_MAX_GPIO			30
+#define AVF_HW_CAP_MDIO_PORT_MODE_MDIO		0
+#define AVF_HW_CAP_MDIO_PORT_MODE_I2C		1
+
+enum avf_acpi_programming_method {
+	AVF_ACPI_PROGRAMMING_METHOD_HW_FVL = 0,
+	AVF_ACPI_PROGRAMMING_METHOD_AQC_FPK = 1
+};
+
+#define AVF_WOL_SUPPORT_MASK			0x1
+#define AVF_ACPI_PROGRAMMING_METHOD_MASK	0x2
+#define AVF_PROXY_SUPPORT_MASK			0x4
+
+/* Capabilities of a PF or a VF or the whole device */
+struct avf_hw_capabilities {
+	u32  switch_mode;
+#define AVF_NVM_IMAGE_TYPE_EVB		0x0
+#define AVF_NVM_IMAGE_TYPE_CLOUD	0x2
+#define AVF_NVM_IMAGE_TYPE_UDP_CLOUD	0x3
+
+	u32  management_mode;
+	u32  mng_protocols_over_mctp;
+#define AVF_MNG_PROTOCOL_PLDM		0x2
+#define AVF_MNG_PROTOCOL_OEM_COMMANDS	0x4
+#define AVF_MNG_PROTOCOL_NCSI		0x8
+	u32  npar_enable;
+	u32  os2bmc;
+	u32  valid_functions;
+	bool sr_iov_1_1;
+	bool vmdq;
+	bool evb_802_1_qbg; /* Edge Virtual Bridging */
+	bool evb_802_1_qbh; /* Bridge Port Extension */
+	bool dcb;
+	bool fcoe;
+	bool iscsi; /* Indicates iSCSI enabled */
+	bool flex10_enable;
+	bool flex10_capable;
+	u32  flex10_mode;
+#define AVF_FLEX10_MODE_UNKNOWN	0x0
+#define AVF_FLEX10_MODE_DCC		0x1
+#define AVF_FLEX10_MODE_DCI		0x2
+
+	u32 flex10_status;
+#define AVF_FLEX10_STATUS_DCC_ERROR	0x1
+#define AVF_FLEX10_STATUS_VC_MODE	0x2
+
+	bool sec_rev_disabled;
+	bool update_disabled;
+#define AVF_NVM_MGMT_SEC_REV_DISABLED	0x1
+#define AVF_NVM_MGMT_UPDATE_DISABLED	0x2
+
+	bool mgmt_cem;
+	bool ieee_1588;
+	bool iwarp;
+	bool fd;
+	u32 fd_filters_guaranteed;
+	u32 fd_filters_best_effort;
+	bool rss;
+	u32 rss_table_size;
+	u32 rss_table_entry_width;
+	bool led[AVF_HW_CAP_MAX_GPIO];
+	bool sdp[AVF_HW_CAP_MAX_GPIO];
+	u32 nvm_image_type;
+	u32 num_flow_director_filters;
+	u32 num_vfs;
+	u32 vf_base_id;
+	u32 num_vsis;
+	u32 num_rx_qp;
+	u32 num_tx_qp;
+	u32 base_queue;
+	u32 num_msix_vectors;
+	u32 num_msix_vectors_vf;
+	u32 led_pin_num;
+	u32 sdp_pin_num;
+	u32 mdio_port_num;
+	u32 mdio_port_mode;
+	u8 rx_buf_chain_len;
+	u32 enabled_tcmap;
+	u32 maxtc;
+	u64 wr_csr_prot;
+	bool apm_wol_support;
+	enum avf_acpi_programming_method acpi_prog_method;
+	bool proxy_support;
+};
+
+struct avf_mac_info {
+	enum avf_mac_type type;
+	u8 addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+	u8 san_addr[ETH_ALEN];
+	u8 port_addr[ETH_ALEN];
+	u16 max_fcoeq;
+};
+
+enum avf_aq_resources_ids {
+	AVF_NVM_RESOURCE_ID = 1
+};
+
+enum avf_aq_resource_access_type {
+	AVF_RESOURCE_READ = 1,
+	AVF_RESOURCE_WRITE
+};
+
+struct avf_nvm_info {
+	u64 hw_semaphore_timeout; /* usec global time (GTIME resolution) */
+	u32 timeout;              /* [ms] */
+	u16 sr_size;              /* Shadow RAM size in words */
+	bool blank_nvm_mode;      /* is NVM empty (no FW present)*/
+	u16 version;              /* NVM package version */
+	u32 eetrack;              /* NVM data version */
+	u32 oem_ver;              /* OEM version info */
+};
+
+/* definitions used in NVM update support */
+
+enum avf_nvmupd_cmd {
+	AVF_NVMUPD_INVALID,
+	AVF_NVMUPD_READ_CON,
+	AVF_NVMUPD_READ_SNT,
+	AVF_NVMUPD_READ_LCB,
+	AVF_NVMUPD_READ_SA,
+	AVF_NVMUPD_WRITE_ERA,
+	AVF_NVMUPD_WRITE_CON,
+	AVF_NVMUPD_WRITE_SNT,
+	AVF_NVMUPD_WRITE_LCB,
+	AVF_NVMUPD_WRITE_SA,
+	AVF_NVMUPD_CSUM_CON,
+	AVF_NVMUPD_CSUM_SA,
+	AVF_NVMUPD_CSUM_LCB,
+	AVF_NVMUPD_STATUS,
+	AVF_NVMUPD_EXEC_AQ,
+	AVF_NVMUPD_GET_AQ_RESULT,
+};
+
+enum avf_nvmupd_state {
+	AVF_NVMUPD_STATE_INIT,
+	AVF_NVMUPD_STATE_READING,
+	AVF_NVMUPD_STATE_WRITING,
+	AVF_NVMUPD_STATE_INIT_WAIT,
+	AVF_NVMUPD_STATE_WRITE_WAIT,
+	AVF_NVMUPD_STATE_ERROR
+};
+
+/* nvm_access definition and its masks/shifts need to be accessible to
+ * application, core driver, and shared code.  Where is the right file?
+ */
+#define AVF_NVM_READ	0xB
+#define AVF_NVM_WRITE	0xC
+
+#define AVF_NVM_MOD_PNT_MASK 0xFF
+
+#define AVF_NVM_TRANS_SHIFT	8
+#define AVF_NVM_TRANS_MASK	(0xf << AVF_NVM_TRANS_SHIFT)
+#define AVF_NVM_CON		0x0
+#define AVF_NVM_SNT		0x1
+#define AVF_NVM_LCB		0x2
+#define AVF_NVM_SA		(AVF_NVM_SNT | AVF_NVM_LCB)
+#define AVF_NVM_ERA		0x4
+#define AVF_NVM_CSUM		0x8
+#define AVF_NVM_EXEC		0xf
+
+#define AVF_NVM_ADAPT_SHIFT	16
+#define AVF_NVM_ADAPT_MASK	(0xffffULL << AVF_NVM_ADAPT_SHIFT)
+
+#define AVF_NVMUPD_MAX_DATA	4096
+#define AVF_NVMUPD_IFACE_TIMEOUT 2 /* seconds */
+
+struct avf_nvm_access {
+	u32 command;
+	u32 config;
+	u32 offset;	/* in bytes */
+	u32 data_size;	/* in bytes */
+	u8 data[1];
+};
+
+/* PCI bus types */
+enum avf_bus_type {
+	avf_bus_type_unknown = 0,
+	avf_bus_type_pci,
+	avf_bus_type_pcix,
+	avf_bus_type_pci_express,
+	avf_bus_type_reserved
+};
+
+/* PCI bus speeds */
+enum avf_bus_speed {
+	avf_bus_speed_unknown	= 0,
+	avf_bus_speed_33	= 33,
+	avf_bus_speed_66	= 66,
+	avf_bus_speed_100	= 100,
+	avf_bus_speed_120	= 120,
+	avf_bus_speed_133	= 133,
+	avf_bus_speed_2500	= 2500,
+	avf_bus_speed_5000	= 5000,
+	avf_bus_speed_8000	= 8000,
+	avf_bus_speed_reserved
+};
+
+/* PCI bus widths */
+enum avf_bus_width {
+	avf_bus_width_unknown	= 0,
+	avf_bus_width_pcie_x1	= 1,
+	avf_bus_width_pcie_x2	= 2,
+	avf_bus_width_pcie_x4	= 4,
+	avf_bus_width_pcie_x8	= 8,
+	avf_bus_width_32	= 32,
+	avf_bus_width_64	= 64,
+	avf_bus_width_reserved
+};
+
+/* Bus parameters */
+struct avf_bus_info {
+	enum avf_bus_speed speed;
+	enum avf_bus_width width;
+	enum avf_bus_type type;
+
+	u16 func;
+	u16 device;
+	u16 lan_id;
+	u16 bus_id;
+};
+
+/* Flow control (FC) parameters */
+struct avf_fc_info {
+	enum avf_fc_mode current_mode; /* FC mode in effect */
+	enum avf_fc_mode requested_mode; /* FC mode requested by caller */
+};
+
+#define AVF_MAX_TRAFFIC_CLASS		8
+#define AVF_MAX_USER_PRIORITY		8
+#define AVF_DCBX_MAX_APPS		32
+#define AVF_LLDPDU_SIZE		1500
+#define AVF_TLV_STATUS_OPER		0x1
+#define AVF_TLV_STATUS_SYNC		0x2
+#define AVF_TLV_STATUS_ERR		0x4
+#define AVF_CEE_OPER_MAX_APPS		3
+#define AVF_APP_PROTOID_FCOE		0x8906
+#define AVF_APP_PROTOID_ISCSI		0x0cbc
+#define AVF_APP_PROTOID_FIP		0x8914
+#define AVF_APP_SEL_ETHTYPE		0x1
+#define AVF_APP_SEL_TCPIP		0x2
+#define AVF_CEE_APP_SEL_ETHTYPE	0x0
+#define AVF_CEE_APP_SEL_TCPIP		0x1
+
+/* CEE or IEEE 802.1Qaz ETS Configuration data */
+struct avf_dcb_ets_config {
+	u8 willing;
+	u8 cbs;
+	u8 maxtcs;
+	u8 prioritytable[AVF_MAX_TRAFFIC_CLASS];
+	u8 tcbwtable[AVF_MAX_TRAFFIC_CLASS];
+	u8 tsatable[AVF_MAX_TRAFFIC_CLASS];
+};
+
+/* CEE or IEEE 802.1Qaz PFC Configuration data */
+struct avf_dcb_pfc_config {
+	u8 willing;
+	u8 mbc;
+	u8 pfccap;
+	u8 pfcenable;
+};
+
+/* CEE or IEEE 802.1Qaz Application Priority data */
+struct avf_dcb_app_priority_table {
+	u8  priority;
+	u8  selector;
+	u16 protocolid;
+};
+
+struct avf_dcbx_config {
+	u8  dcbx_mode;
+#define AVF_DCBX_MODE_CEE	0x1
+#define AVF_DCBX_MODE_IEEE	0x2
+	u8  app_mode;
+#define AVF_DCBX_APPS_NON_WILLING	0x1
+	u32 numapps;
+	u32 tlv_status; /* CEE mode TLV status */
+	struct avf_dcb_ets_config etscfg;
+	struct avf_dcb_ets_config etsrec;
+	struct avf_dcb_pfc_config pfc;
+	struct avf_dcb_app_priority_table app[AVF_DCBX_MAX_APPS];
+};
+
+/* Port hardware description */
+struct avf_hw {
+	u8 *hw_addr;
+	void *back;
+
+	/* subsystem structs */
+	struct avf_phy_info phy;
+	struct avf_mac_info mac;
+	struct avf_bus_info bus;
+	struct avf_nvm_info nvm;
+	struct avf_fc_info fc;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+	u8 port;
+	bool adapter_stopped;
+
+	/* capabilities for entire device and PCI func */
+	struct avf_hw_capabilities dev_caps;
+	struct avf_hw_capabilities func_caps;
+
+	/* Flow Director shared filter space */
+	u16 fdir_shared_filter_count;
+
+	/* device profile info */
+	u8  pf_id;
+	u16 main_vsi_seid;
+
+	/* for multi-function MACs */
+	u16 partition_id;
+	u16 num_partitions;
+	u16 num_ports;
+
+	/* Closest numa node to the device */
+	u16 numa_node;
+
+	/* Admin Queue info */
+	struct avf_adminq_info aq;
+
+	/* state of nvm update process */
+	enum avf_nvmupd_state nvmupd_state;
+	struct avf_aq_desc nvm_wb_desc;
+	struct avf_virt_mem nvm_buff;
+	bool nvm_release_on_done;
+	u16 nvm_wait_opcode;
+
+	/* HMC info */
+	struct avf_hmc_info hmc; /* HMC info struct */
+
+	/* LLDP/DCBX Status */
+	u16 dcbx_status;
+
+	/* DCBX info */
+	struct avf_dcbx_config local_dcbx_config; /* Oper/Local Cfg */
+	struct avf_dcbx_config remote_dcbx_config; /* Peer Cfg */
+	struct avf_dcbx_config desired_dcbx_config; /* CEE Desired Cfg */
+
+	/* WoL and proxy support */
+	u16 num_wol_proxy_filters;
+	u16 wol_proxy_vsi_seid;
+
+#define AVF_HW_FLAG_AQ_SRCTL_ACCESS_ENABLE BIT_ULL(0)
+#define AVF_HW_FLAG_802_1AD_CAPABLE        BIT_ULL(1)
+#define AVF_HW_FLAG_AQ_PHY_ACCESS_CAPABLE  BIT_ULL(2)
+	u64 flags;
+
+	/* Used in set switch config AQ command */
+	u16 switch_tag;
+	u16 first_tag;
+	u16 second_tag;
+
+	/* debug mask */
+	u32 debug_mask;
+	char err_str[16];
+};
+
+STATIC INLINE bool avf_is_vf(struct avf_hw *hw)
+{
+	return (hw->mac.type == AVF_MAC_VF ||
+		hw->mac.type == AVF_MAC_X722_VF);
+}
+
+struct avf_driver_version {
+	u8 major_version;
+	u8 minor_version;
+	u8 build_version;
+	u8 subbuild_version;
+	u8 driver_string[32];
+};
+
+/* RX Descriptors */
+union avf_16byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+	} read;
+	struct {
+		struct {
+			struct {
+				union {
+					__le16 mirroring_status;
+					__le16 fcoe_ctx_id;
+				} mirr_fcoe;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow director filter id */
+				__le32 fcoe_param; /* FCoE DDP Context id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* ext status/error/pktype/length */
+			__le64 status_error_len;
+		} qword1;
+	} wb;  /* writeback */
+};
+
+union avf_32byte_rx_desc {
+	struct {
+		__le64  pkt_addr; /* Packet buffer address */
+		__le64  hdr_addr; /* Header buffer address */
+			/* bit 0 of hdr_buffer_addr is DD bit */
+		__le64  rsvd1;
+		__le64  rsvd2;
+	} read;
+	struct {
+		struct {
+			struct {
+				union {
+					__le16 mirroring_status;
+					__le16 fcoe_ctx_id;
+				} mirr_fcoe;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fcoe_param; /* FCoE DDP Context id */
+				/* Flow director filter id in case of
+				 * Programming status desc WB
+				 */
+				__le32 fd_id;
+			} hi_dword;
+		} qword0;
+		struct {
+			/* status/error/pktype/length */
+			__le64 status_error_len;
+		} qword1;
+		struct {
+			__le16 ext_status; /* extended status */
+			__le16 rsvd;
+			__le16 l2tag2_1;
+			__le16 l2tag2_2;
+		} qword2;
+		struct {
+			union {
+				__le32 flex_bytes_lo;
+				__le32 pe_status;
+			} lo_dword;
+			union {
+				__le32 flex_bytes_hi;
+				__le32 fd_id;
+			} hi_dword;
+		} qword3;
+	} wb;  /* writeback */
+};
+
+#define AVF_RXD_QW0_MIRROR_STATUS_SHIFT	8
+#define AVF_RXD_QW0_MIRROR_STATUS_MASK	(0x3FUL << \
+					 AVF_RXD_QW0_MIRROR_STATUS_SHIFT)
+#define AVF_RXD_QW0_FCOEINDX_SHIFT	0
+#define AVF_RXD_QW0_FCOEINDX_MASK	(0xFFFUL << \
+					 AVF_RXD_QW0_FCOEINDX_SHIFT)
+
+enum avf_rx_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_STATUS_DD_SHIFT		= 0,
+	AVF_RX_DESC_STATUS_EOF_SHIFT		= 1,
+	AVF_RX_DESC_STATUS_L2TAG1P_SHIFT	= 2,
+	AVF_RX_DESC_STATUS_L3L4P_SHIFT		= 3,
+	AVF_RX_DESC_STATUS_CRCP_SHIFT		= 4,
+	AVF_RX_DESC_STATUS_TSYNINDX_SHIFT	= 5, /* 2 BITS */
+	AVF_RX_DESC_STATUS_TSYNVALID_SHIFT	= 7,
+	AVF_RX_DESC_STATUS_EXT_UDP_0_SHIFT	= 8,
+
+	AVF_RX_DESC_STATUS_UMBCAST_SHIFT	= 9, /* 2 BITS */
+	AVF_RX_DESC_STATUS_FLM_SHIFT		= 11,
+	AVF_RX_DESC_STATUS_FLTSTAT_SHIFT	= 12, /* 2 BITS */
+	AVF_RX_DESC_STATUS_LPBK_SHIFT		= 14,
+	AVF_RX_DESC_STATUS_IPV6EXADD_SHIFT	= 15,
+	AVF_RX_DESC_STATUS_RESERVED2_SHIFT	= 16, /* 2 BITS */
+	AVF_RX_DESC_STATUS_INT_UDP_0_SHIFT	= 18,
+	AVF_RX_DESC_STATUS_LAST /* this entry must be last!!! */
+};
+
+#define AVF_RXD_QW1_STATUS_SHIFT	0
+#define AVF_RXD_QW1_STATUS_MASK	((BIT(AVF_RX_DESC_STATUS_LAST) - 1) << \
+					 AVF_RXD_QW1_STATUS_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_TSYNINDX_SHIFT   AVF_RX_DESC_STATUS_TSYNINDX_SHIFT
+#define AVF_RXD_QW1_STATUS_TSYNINDX_MASK	(0x3UL << \
+					     AVF_RXD_QW1_STATUS_TSYNINDX_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_TSYNVALID_SHIFT  AVF_RX_DESC_STATUS_TSYNVALID_SHIFT
+#define AVF_RXD_QW1_STATUS_TSYNVALID_MASK   BIT_ULL(AVF_RXD_QW1_STATUS_TSYNVALID_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_UMBCAST_SHIFT	AVF_RX_DESC_STATUS_UMBCAST_SHIFT
+#define AVF_RXD_QW1_STATUS_UMBCAST_MASK	(0x3UL << \
+					 AVF_RXD_QW1_STATUS_UMBCAST_SHIFT)
+
+enum avf_rx_desc_fltstat_values {
+	AVF_RX_DESC_FLTSTAT_NO_DATA	= 0,
+	AVF_RX_DESC_FLTSTAT_RSV_FD_ID	= 1, /* 16byte desc? FD_ID : RSV */
+	AVF_RX_DESC_FLTSTAT_RSV	= 2,
+	AVF_RX_DESC_FLTSTAT_RSS_HASH	= 3,
+};
+
+#define AVF_RXD_PACKET_TYPE_UNICAST	0
+#define AVF_RXD_PACKET_TYPE_MULTICAST	1
+#define AVF_RXD_PACKET_TYPE_BROADCAST	2
+#define AVF_RXD_PACKET_TYPE_MIRRORED	3
+
+#define AVF_RXD_QW1_ERROR_SHIFT	19
+#define AVF_RXD_QW1_ERROR_MASK		(0xFFUL << AVF_RXD_QW1_ERROR_SHIFT)
+
+enum avf_rx_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_ERROR_RXE_SHIFT		= 0,
+	AVF_RX_DESC_ERROR_RECIPE_SHIFT		= 1,
+	AVF_RX_DESC_ERROR_HBO_SHIFT		= 2,
+	AVF_RX_DESC_ERROR_L3L4E_SHIFT		= 3, /* 3 BITS */
+	AVF_RX_DESC_ERROR_IPE_SHIFT		= 3,
+	AVF_RX_DESC_ERROR_L4E_SHIFT		= 4,
+	AVF_RX_DESC_ERROR_EIPE_SHIFT		= 5,
+	AVF_RX_DESC_ERROR_OVERSIZE_SHIFT	= 6,
+	AVF_RX_DESC_ERROR_PPRS_SHIFT		= 7
+};
+
+enum avf_rx_desc_error_l3l4e_fcoe_masks {
+	AVF_RX_DESC_ERROR_L3L4E_NONE		= 0,
+	AVF_RX_DESC_ERROR_L3L4E_PROT		= 1,
+	AVF_RX_DESC_ERROR_L3L4E_FC		= 2,
+	AVF_RX_DESC_ERROR_L3L4E_DMAC_ERR	= 3,
+	AVF_RX_DESC_ERROR_L3L4E_DMAC_WARN	= 4
+};
+
+#define AVF_RXD_QW1_PTYPE_SHIFT	30
+#define AVF_RXD_QW1_PTYPE_MASK		(0xFFULL << AVF_RXD_QW1_PTYPE_SHIFT)
+
+/* Packet type non-ip values */
+enum avf_rx_l2_ptype {
+	AVF_RX_PTYPE_L2_RESERVED			= 0,
+	AVF_RX_PTYPE_L2_MAC_PAY2			= 1,
+	AVF_RX_PTYPE_L2_TIMESYNC_PAY2			= 2,
+	AVF_RX_PTYPE_L2_FIP_PAY2			= 3,
+	AVF_RX_PTYPE_L2_OUI_PAY2			= 4,
+	AVF_RX_PTYPE_L2_MACCNTRL_PAY2			= 5,
+	AVF_RX_PTYPE_L2_LLDP_PAY2			= 6,
+	AVF_RX_PTYPE_L2_ECP_PAY2			= 7,
+	AVF_RX_PTYPE_L2_EVB_PAY2			= 8,
+	AVF_RX_PTYPE_L2_QCN_PAY2			= 9,
+	AVF_RX_PTYPE_L2_EAPOL_PAY2			= 10,
+	AVF_RX_PTYPE_L2_ARP				= 11,
+	AVF_RX_PTYPE_L2_FCOE_PAY3			= 12,
+	AVF_RX_PTYPE_L2_FCOE_FCDATA_PAY3		= 13,
+	AVF_RX_PTYPE_L2_FCOE_FCRDY_PAY3		= 14,
+	AVF_RX_PTYPE_L2_FCOE_FCRSP_PAY3		= 15,
+	AVF_RX_PTYPE_L2_FCOE_FCOTHER_PA		= 16,
+	AVF_RX_PTYPE_L2_FCOE_VFT_PAY3			= 17,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCDATA		= 18,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCRDY			= 19,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCRSP			= 20,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCOTHER		= 21,
+	AVF_RX_PTYPE_GRENAT4_MAC_PAY3			= 58,
+	AVF_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4	= 87,
+	AVF_RX_PTYPE_GRENAT6_MAC_PAY3			= 124,
+	AVF_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4	= 153
+};
+
+struct avf_rx_ptype_decoded {
+	u32 ptype:8;
+	u32 known:1;
+	u32 outer_ip:1;
+	u32 outer_ip_ver:1;
+	u32 outer_frag:1;
+	u32 tunnel_type:3;
+	u32 tunnel_end_prot:2;
+	u32 tunnel_end_frag:1;
+	u32 inner_prot:4;
+	u32 payload_layer:3;
+};
+
+enum avf_rx_ptype_outer_ip {
+	AVF_RX_PTYPE_OUTER_L2	= 0,
+	AVF_RX_PTYPE_OUTER_IP	= 1
+};
+
+enum avf_rx_ptype_outer_ip_ver {
+	AVF_RX_PTYPE_OUTER_NONE	= 0,
+	AVF_RX_PTYPE_OUTER_IPV4	= 0,
+	AVF_RX_PTYPE_OUTER_IPV6	= 1
+};
+
+enum avf_rx_ptype_outer_fragmented {
+	AVF_RX_PTYPE_NOT_FRAG	= 0,
+	AVF_RX_PTYPE_FRAG	= 1
+};
+
+enum avf_rx_ptype_tunnel_type {
+	AVF_RX_PTYPE_TUNNEL_NONE		= 0,
+	AVF_RX_PTYPE_TUNNEL_IP_IP		= 1,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT		= 2,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC	= 3,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN	= 4,
+};
+
+enum avf_rx_ptype_tunnel_end_prot {
+	AVF_RX_PTYPE_TUNNEL_END_NONE	= 0,
+	AVF_RX_PTYPE_TUNNEL_END_IPV4	= 1,
+	AVF_RX_PTYPE_TUNNEL_END_IPV6	= 2,
+};
+
+enum avf_rx_ptype_inner_prot {
+	AVF_RX_PTYPE_INNER_PROT_NONE		= 0,
+	AVF_RX_PTYPE_INNER_PROT_UDP		= 1,
+	AVF_RX_PTYPE_INNER_PROT_TCP		= 2,
+	AVF_RX_PTYPE_INNER_PROT_SCTP		= 3,
+	AVF_RX_PTYPE_INNER_PROT_ICMP		= 4,
+	AVF_RX_PTYPE_INNER_PROT_TIMESYNC	= 5
+};
+
+enum avf_rx_ptype_payload_layer {
+	AVF_RX_PTYPE_PAYLOAD_LAYER_NONE	= 0,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY2	= 1,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY3	= 2,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY4	= 3,
+};
+
+#define AVF_RX_PTYPE_BIT_MASK		0x0FFFFFFF
+#define AVF_RX_PTYPE_SHIFT		56
+
+#define AVF_RXD_QW1_LENGTH_PBUF_SHIFT	38
+#define AVF_RXD_QW1_LENGTH_PBUF_MASK	(0x3FFFULL << \
+					 AVF_RXD_QW1_LENGTH_PBUF_SHIFT)
+
+#define AVF_RXD_QW1_LENGTH_HBUF_SHIFT	52
+#define AVF_RXD_QW1_LENGTH_HBUF_MASK	(0x7FFULL << \
+					 AVF_RXD_QW1_LENGTH_HBUF_SHIFT)
+
+#define AVF_RXD_QW1_LENGTH_SPH_SHIFT	63
+#define AVF_RXD_QW1_LENGTH_SPH_MASK	BIT_ULL(AVF_RXD_QW1_LENGTH_SPH_SHIFT)
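
A hedged editorial sketch (not part of the patch) of how a driver would typically decode the writeback qword1 of an Rx descriptor with the shift/mask pairs defined above; the helper name is illustrative and the u8/u16/u64 typedefs follow the surrounding base code.

static inline void
avf_decode_rx_qw1(u64 qw1, u16 *status, u8 *error, u8 *ptype, u16 *pkt_len)
{
	*status  = (u16)((qw1 & AVF_RXD_QW1_STATUS_MASK) >>
			 AVF_RXD_QW1_STATUS_SHIFT);
	*error   = (u8)((qw1 & AVF_RXD_QW1_ERROR_MASK) >>
			AVF_RXD_QW1_ERROR_SHIFT);
	*ptype   = (u8)((qw1 & AVF_RXD_QW1_PTYPE_MASK) >>
			AVF_RXD_QW1_PTYPE_SHIFT);
	*pkt_len = (u16)((qw1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
			 AVF_RXD_QW1_LENGTH_PBUF_SHIFT);
}
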
+
+#define AVF_RXD_QW1_NEXTP_SHIFT	38
+#define AVF_RXD_QW1_NEXTP_MASK		(0x1FFFULL << AVF_RXD_QW1_NEXTP_SHIFT)
+
+#define AVF_RXD_QW2_EXT_STATUS_SHIFT	0
+#define AVF_RXD_QW2_EXT_STATUS_MASK	(0xFFFFFUL << \
+					 AVF_RXD_QW2_EXT_STATUS_SHIFT)
+
+enum avf_rx_desc_ext_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT	= 0,
+	AVF_RX_DESC_EXT_STATUS_L2TAG3P_SHIFT	= 1,
+	AVF_RX_DESC_EXT_STATUS_FLEXBL_SHIFT	= 2, /* 2 BITS */
+	AVF_RX_DESC_EXT_STATUS_FLEXBH_SHIFT	= 4, /* 2 BITS */
+	AVF_RX_DESC_EXT_STATUS_FDLONGB_SHIFT	= 9,
+	AVF_RX_DESC_EXT_STATUS_FCOELONGB_SHIFT	= 10,
+	AVF_RX_DESC_EXT_STATUS_PELONGB_SHIFT	= 11,
+};
+
+#define AVF_RXD_QW2_L2TAG2_SHIFT	0
+#define AVF_RXD_QW2_L2TAG2_MASK	(0xFFFFUL << AVF_RXD_QW2_L2TAG2_SHIFT)
+
+#define AVF_RXD_QW2_L2TAG3_SHIFT	16
+#define AVF_RXD_QW2_L2TAG3_MASK	(0xFFFFUL << AVF_RXD_QW2_L2TAG3_SHIFT)
+
+enum avf_rx_desc_pe_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_PE_STATUS_QPID_SHIFT	= 0, /* 18 BITS */
+	AVF_RX_DESC_PE_STATUS_L4PORT_SHIFT	= 0, /* 16 BITS */
+	AVF_RX_DESC_PE_STATUS_IPINDEX_SHIFT	= 16, /* 8 BITS */
+	AVF_RX_DESC_PE_STATUS_QPIDHIT_SHIFT	= 24,
+	AVF_RX_DESC_PE_STATUS_APBVTHIT_SHIFT	= 25,
+	AVF_RX_DESC_PE_STATUS_PORTV_SHIFT	= 26,
+	AVF_RX_DESC_PE_STATUS_URG_SHIFT	= 27,
+	AVF_RX_DESC_PE_STATUS_IPFRAG_SHIFT	= 28,
+	AVF_RX_DESC_PE_STATUS_IPOPT_SHIFT	= 29
+};
+
+#define AVF_RX_PROG_STATUS_DESC_LENGTH_SHIFT		38
+#define AVF_RX_PROG_STATUS_DESC_LENGTH			0x2000000
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT	2
+#define AVF_RX_PROG_STATUS_DESC_QW1_PROGID_MASK	(0x7UL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT)
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT	0
+#define AVF_RX_PROG_STATUS_DESC_QW1_STATUS_MASK	(0x7FFFUL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT)
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT	19
+#define AVF_RX_PROG_STATUS_DESC_QW1_ERROR_MASK		(0x3FUL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT)
+
+enum avf_rx_prog_status_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_PROG_STATUS_DESC_DD_SHIFT	= 0,
+	AVF_RX_PROG_STATUS_DESC_PROG_ID_SHIFT	= 2 /* 3 BITS */
+};
+
+enum avf_rx_prog_status_desc_prog_id_masks {
+	AVF_RX_PROG_STATUS_DESC_FD_FILTER_STATUS	= 1,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CTXT_PROG_STATUS	= 2,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CTXT_INVL_STATUS	= 4,
+};
+
+enum avf_rx_prog_status_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_PROG_STATUS_DESC_FD_TBL_FULL_SHIFT	= 0,
+	AVF_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT	= 1,
+	AVF_RX_PROG_STATUS_DESC_FCOE_TBL_FULL_SHIFT	= 2,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CONFLICT_SHIFT	= 3
+};
+
+#define AVF_TWO_BIT_MASK	0x3
+#define AVF_THREE_BIT_MASK	0x7
+#define AVF_FOUR_BIT_MASK	0xF
+#define AVF_EIGHTEEN_BIT_MASK	0x3FFFF
+
+/* TX Descriptor */
+struct avf_tx_desc {
+	__le64 buffer_addr; /* Address of descriptor's data buf */
+	__le64 cmd_type_offset_bsz;
+};
+
+#define AVF_TXD_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_QW1_DTYPE_MASK		(0xFUL << AVF_TXD_QW1_DTYPE_SHIFT)
+
+enum avf_tx_desc_dtype_value {
+	AVF_TX_DESC_DTYPE_DATA		= 0x0,
+	AVF_TX_DESC_DTYPE_NOP		= 0x1, /* same as Context desc */
+	AVF_TX_DESC_DTYPE_CONTEXT	= 0x1,
+	AVF_TX_DESC_DTYPE_FCOE_CTX	= 0x2,
+	AVF_TX_DESC_DTYPE_FILTER_PROG	= 0x8,
+	AVF_TX_DESC_DTYPE_DDP_CTX	= 0x9,
+	AVF_TX_DESC_DTYPE_FLEX_DATA	= 0xB,
+	AVF_TX_DESC_DTYPE_FLEX_CTX_1	= 0xC,
+	AVF_TX_DESC_DTYPE_FLEX_CTX_2	= 0xD,
+	AVF_TX_DESC_DTYPE_DESC_DONE	= 0xF
+};
+
+#define AVF_TXD_QW1_CMD_SHIFT	4
+#define AVF_TXD_QW1_CMD_MASK	(0x3FFUL << AVF_TXD_QW1_CMD_SHIFT)
+
+enum avf_tx_desc_cmd_bits {
+	AVF_TX_DESC_CMD_EOP			= 0x0001,
+	AVF_TX_DESC_CMD_RS			= 0x0002,
+	AVF_TX_DESC_CMD_ICRC			= 0x0004,
+	AVF_TX_DESC_CMD_IL2TAG1		= 0x0008,
+	AVF_TX_DESC_CMD_DUMMY			= 0x0010,
+	AVF_TX_DESC_CMD_IIPT_NONIP		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV6		= 0x0020, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV4		= 0x0040, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV4_CSUM		= 0x0060, /* 2 BITS */
+	AVF_TX_DESC_CMD_FCOET			= 0x0080,
+	AVF_TX_DESC_CMD_L4T_EOFT_UNK		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_TCP		= 0x0100, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_SCTP		= 0x0200, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_UDP		= 0x0300, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_N		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_T		= 0x0100, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_NI	= 0x0200, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_A		= 0x0300, /* 2 BITS */
+};
+
+#define AVF_TXD_QW1_OFFSET_SHIFT	16
+#define AVF_TXD_QW1_OFFSET_MASK	(0x3FFFFULL << \
+					 AVF_TXD_QW1_OFFSET_SHIFT)
+
+enum avf_tx_desc_length_fields {
+	/* Note: These are predefined bit offsets */
+	AVF_TX_DESC_LENGTH_MACLEN_SHIFT	= 0, /* 7 BITS */
+	AVF_TX_DESC_LENGTH_IPLEN_SHIFT		= 7, /* 7 BITS */
+	AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT	= 14 /* 4 BITS */
+};
+
+#define AVF_TXD_QW1_MACLEN_MASK (0x7FUL << AVF_TX_DESC_LENGTH_MACLEN_SHIFT)
+#define AVF_TXD_QW1_IPLEN_MASK  (0x7FUL << AVF_TX_DESC_LENGTH_IPLEN_SHIFT)
+#define AVF_TXD_QW1_L4LEN_MASK  (0xFUL << AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+#define AVF_TXD_QW1_FCLEN_MASK  (0xFUL << AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+
+#define AVF_TXD_QW1_TX_BUF_SZ_SHIFT	34
+#define AVF_TXD_QW1_TX_BUF_SZ_MASK	(0x3FFFULL << \
+					 AVF_TXD_QW1_TX_BUF_SZ_SHIFT)
+
+#define AVF_TXD_QW1_L2TAG1_SHIFT	48
+#define AVF_TXD_QW1_L2TAG1_MASK	(0xFFFFULL << AVF_TXD_QW1_L2TAG1_SHIFT)
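
An editorial sketch, not part of the patch: the cmd_type_offset_bsz qword of a data descriptor is built by OR-ing the fields above into their shifted positions, roughly as below (the helper name is hypothetical).

static inline u64
avf_build_tx_ctob(u32 td_cmd, u32 td_offset, u16 size, u32 td_tag)
{
	return AVF_TX_DESC_DTYPE_DATA |
	       ((u64)td_cmd << AVF_TXD_QW1_CMD_SHIFT) |
	       ((u64)td_offset << AVF_TXD_QW1_OFFSET_SHIFT) |
	       ((u64)size << AVF_TXD_QW1_TX_BUF_SZ_SHIFT) |
	       ((u64)td_tag << AVF_TXD_QW1_L2TAG1_SHIFT);
}
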
+
+/* Context descriptors */
+struct avf_tx_context_desc {
+	__le32 tunneling_params;
+	__le16 l2tag2;
+	__le16 rsvd;
+	__le64 type_cmd_tso_mss;
+};
+
+#define AVF_TXD_CTX_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_CTX_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_CTX_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_CTX_QW1_CMD_SHIFT	4
+#define AVF_TXD_CTX_QW1_CMD_MASK	(0xFFFFUL << AVF_TXD_CTX_QW1_CMD_SHIFT)
+
+enum avf_tx_ctx_desc_cmd_bits {
+	AVF_TX_CTX_DESC_TSO		= 0x01,
+	AVF_TX_CTX_DESC_TSYN		= 0x02,
+	AVF_TX_CTX_DESC_IL2TAG2	= 0x04,
+	AVF_TX_CTX_DESC_IL2TAG2_IL2H	= 0x08,
+	AVF_TX_CTX_DESC_SWTCH_NOTAG	= 0x00,
+	AVF_TX_CTX_DESC_SWTCH_UPLINK	= 0x10,
+	AVF_TX_CTX_DESC_SWTCH_LOCAL	= 0x20,
+	AVF_TX_CTX_DESC_SWTCH_VSI	= 0x30,
+	AVF_TX_CTX_DESC_SWPE		= 0x40
+};
+
+#define AVF_TXD_CTX_QW1_TSO_LEN_SHIFT	30
+#define AVF_TXD_CTX_QW1_TSO_LEN_MASK	(0x3FFFFULL << \
+					 AVF_TXD_CTX_QW1_TSO_LEN_SHIFT)
+
+#define AVF_TXD_CTX_QW1_MSS_SHIFT	50
+#define AVF_TXD_CTX_QW1_MSS_MASK	(0x3FFFULL << \
+					 AVF_TXD_CTX_QW1_MSS_SHIFT)
+
+#define AVF_TXD_CTX_QW1_VSI_SHIFT	50
+#define AVF_TXD_CTX_QW1_VSI_MASK	(0x1FFULL << AVF_TXD_CTX_QW1_VSI_SHIFT)
+
+#define AVF_TXD_CTX_QW0_EXT_IP_SHIFT	0
+#define AVF_TXD_CTX_QW0_EXT_IP_MASK	(0x3ULL << \
+					 AVF_TXD_CTX_QW0_EXT_IP_SHIFT)
+
+enum avf_tx_ctx_desc_eipt_offload {
+	AVF_TX_CTX_EXT_IP_NONE		= 0x0,
+	AVF_TX_CTX_EXT_IP_IPV6		= 0x1,
+	AVF_TX_CTX_EXT_IP_IPV4_NO_CSUM	= 0x2,
+	AVF_TX_CTX_EXT_IP_IPV4		= 0x3
+};
+
+#define AVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT	2
+#define AVF_TXD_CTX_QW0_EXT_IPLEN_MASK	(0x3FULL << \
+					 AVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT)
+
+#define AVF_TXD_CTX_QW0_NATT_SHIFT	9
+#define AVF_TXD_CTX_QW0_NATT_MASK	(0x3ULL << AVF_TXD_CTX_QW0_NATT_SHIFT)
+
+#define AVF_TXD_CTX_UDP_TUNNELING	BIT_ULL(AVF_TXD_CTX_QW0_NATT_SHIFT)
+#define AVF_TXD_CTX_GRE_TUNNELING	(0x2ULL << AVF_TXD_CTX_QW0_NATT_SHIFT)
+
+#define AVF_TXD_CTX_QW0_EIP_NOINC_SHIFT	11
+#define AVF_TXD_CTX_QW0_EIP_NOINC_MASK	BIT_ULL(AVF_TXD_CTX_QW0_EIP_NOINC_SHIFT)
+
+#define AVF_TXD_CTX_EIP_NOINC_IPID_CONST	AVF_TXD_CTX_QW0_EIP_NOINC_MASK
+
+#define AVF_TXD_CTX_QW0_NATLEN_SHIFT	12
+#define AVF_TXD_CTX_QW0_NATLEN_MASK	(0X7FULL << \
+					 AVF_TXD_CTX_QW0_NATLEN_SHIFT)
+
+#define AVF_TXD_CTX_QW0_DECTTL_SHIFT	19
+#define AVF_TXD_CTX_QW0_DECTTL_MASK	(0xFULL << \
+					 AVF_TXD_CTX_QW0_DECTTL_SHIFT)
+
+#define AVF_TXD_CTX_QW0_L4T_CS_SHIFT	23
+#define AVF_TXD_CTX_QW0_L4T_CS_MASK	BIT_ULL(AVF_TXD_CTX_QW0_L4T_CS_SHIFT)
+struct avf_nop_desc {
+	__le64 rsvd;
+	__le64 dtype_cmd;
+};
+
+#define AVF_TXD_NOP_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_NOP_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_NOP_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_NOP_QW1_CMD_SHIFT	4
+#define AVF_TXD_NOP_QW1_CMD_MASK	(0x7FUL << AVF_TXD_NOP_QW1_CMD_SHIFT)
+
+enum avf_tx_nop_desc_cmd_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_TX_NOP_DESC_EOP_SHIFT	= 0,
+	AVF_TX_NOP_DESC_RS_SHIFT	= 1,
+	AVF_TX_NOP_DESC_RSV_SHIFT	= 2 /* 5 bits */
+};
+
+struct avf_filter_program_desc {
+	__le32 qindex_flex_ptype_vsi;
+	__le32 rsvd;
+	__le32 dtype_cmd_cntindex;
+	__le32 fd_id;
+};
+#define AVF_TXD_FLTR_QW0_QINDEX_SHIFT	0
+#define AVF_TXD_FLTR_QW0_QINDEX_MASK	(0x7FFUL << \
+					 AVF_TXD_FLTR_QW0_QINDEX_SHIFT)
+#define AVF_TXD_FLTR_QW0_FLEXOFF_SHIFT	11
+#define AVF_TXD_FLTR_QW0_FLEXOFF_MASK	(0x7UL << \
+					 AVF_TXD_FLTR_QW0_FLEXOFF_SHIFT)
+#define AVF_TXD_FLTR_QW0_PCTYPE_SHIFT	17
+#define AVF_TXD_FLTR_QW0_PCTYPE_MASK	(0x3FUL << \
+					 AVF_TXD_FLTR_QW0_PCTYPE_SHIFT)
+
+/* Packet Classifier Types for filters */
+enum avf_filter_pctype {
+	/* Note: Values 0-28 are reserved for future use.
+	 * Values 29, 30, and 32 are not supported on XL710 and X710.
+	 */
+	AVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP	= 29,
+	AVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP	= 30,
+	AVF_FILTER_PCTYPE_NONF_IPV4_UDP		= 31,
+	AVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK	= 32,
+	AVF_FILTER_PCTYPE_NONF_IPV4_TCP		= 33,
+	AVF_FILTER_PCTYPE_NONF_IPV4_SCTP		= 34,
+	AVF_FILTER_PCTYPE_NONF_IPV4_OTHER		= 35,
+	AVF_FILTER_PCTYPE_FRAG_IPV4			= 36,
+	/* Note: Values 37-38 are reserved for future use.
+	 * Values 39, 40, and 42 are not supported on XL710 and X710.
+	 */
+	AVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP	= 39,
+	AVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP	= 40,
+	AVF_FILTER_PCTYPE_NONF_IPV6_UDP		= 41,
+	AVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK	= 42,
+	AVF_FILTER_PCTYPE_NONF_IPV6_TCP		= 43,
+	AVF_FILTER_PCTYPE_NONF_IPV6_SCTP		= 44,
+	AVF_FILTER_PCTYPE_NONF_IPV6_OTHER		= 45,
+	AVF_FILTER_PCTYPE_FRAG_IPV6			= 46,
+	/* Note: Value 47 is reserved for future use */
+	AVF_FILTER_PCTYPE_FCOE_OX			= 48,
+	AVF_FILTER_PCTYPE_FCOE_RX			= 49,
+	AVF_FILTER_PCTYPE_FCOE_OTHER			= 50,
+	/* Note: Values 51-62 are reserved for future use */
+	AVF_FILTER_PCTYPE_L2_PAYLOAD			= 63,
+};
+
+enum avf_filter_program_desc_dest {
+	AVF_FILTER_PROGRAM_DESC_DEST_DROP_PACKET		= 0x0,
+	AVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_QINDEX	= 0x1,
+	AVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_OTHER	= 0x2,
+};
+
+enum avf_filter_program_desc_fd_status {
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_NONE			= 0x0,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID		= 0x1,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID_4FLEX_BYTES	= 0x2,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_8FLEX_BYTES		= 0x3,
+};
+
+#define AVF_TXD_FLTR_QW0_DEST_VSI_SHIFT	23
+#define AVF_TXD_FLTR_QW0_DEST_VSI_MASK	(0x1FFUL << \
+					 AVF_TXD_FLTR_QW0_DEST_VSI_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_FLTR_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_FLTR_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CMD_SHIFT	4
+#define AVF_TXD_FLTR_QW1_CMD_MASK	(0xFFFFULL << \
+					 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_PCMD_SHIFT	(0x0ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_PCMD_MASK	(0x7ULL << AVF_TXD_FLTR_QW1_PCMD_SHIFT)
+
+enum avf_filter_program_desc_pcmd {
+	AVF_FILTER_PROGRAM_DESC_PCMD_ADD_UPDATE	= 0x1,
+	AVF_FILTER_PROGRAM_DESC_PCMD_REMOVE		= 0x2,
+};
+
+#define AVF_TXD_FLTR_QW1_DEST_SHIFT	(0x3ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_DEST_MASK	(0x3ULL << AVF_TXD_FLTR_QW1_DEST_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CNT_ENA_SHIFT	(0x7ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_CNT_ENA_MASK	BIT_ULL(AVF_TXD_FLTR_QW1_CNT_ENA_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_FD_STATUS_SHIFT	(0x9ULL + \
+						 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_FD_STATUS_MASK (0x3ULL << \
+					  AVF_TXD_FLTR_QW1_FD_STATUS_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_ATR_SHIFT	(0xEULL + \
+					 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_ATR_MASK	BIT_ULL(AVF_TXD_FLTR_QW1_ATR_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CNTINDEX_SHIFT 20
+#define AVF_TXD_FLTR_QW1_CNTINDEX_MASK	(0x1FFUL << \
+					 AVF_TXD_FLTR_QW1_CNTINDEX_SHIFT)
+
+enum avf_filter_type {
+	AVF_FLOW_DIRECTOR_FLTR = 0,
+	AVF_PE_QUAD_HASH_FLTR = 1,
+	AVF_ETHERTYPE_FLTR,
+	AVF_FCOE_CTX_FLTR,
+	AVF_MAC_VLAN_FLTR,
+	AVF_HASH_FLTR
+};
+
+struct avf_vsi_context {
+	u16 seid;
+	u16 uplink_seid;
+	u16 vsi_number;
+	u16 vsis_allocated;
+	u16 vsis_unallocated;
+	u16 flags;
+	u8 pf_num;
+	u8 vf_num;
+	u8 connection_type;
+	struct avf_aqc_vsi_properties_data info;
+};
+
+struct avf_veb_context {
+	u16 seid;
+	u16 uplink_seid;
+	u16 veb_number;
+	u16 vebs_allocated;
+	u16 vebs_unallocated;
+	u16 flags;
+	struct avf_aqc_get_veb_parameters_completion info;
+};
+
+/* Statistics collected by each port, VSI, VEB, and S-channel */
+struct avf_eth_stats {
+	u64 rx_bytes;			/* gorc */
+	u64 rx_unicast;			/* uprc */
+	u64 rx_multicast;		/* mprc */
+	u64 rx_broadcast;		/* bprc */
+	u64 rx_discards;		/* rdpc */
+	u64 rx_unknown_protocol;	/* rupp */
+	u64 tx_bytes;			/* gotc */
+	u64 tx_unicast;			/* uptc */
+	u64 tx_multicast;		/* mptc */
+	u64 tx_broadcast;		/* bptc */
+	u64 tx_discards;		/* tdpc */
+	u64 tx_errors;			/* tepc */
+};
+
+/* Statistics collected per VEB per TC */
+struct avf_veb_tc_stats {
+	u64 tc_rx_packets[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_rx_bytes[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_tx_packets[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_tx_bytes[AVF_MAX_TRAFFIC_CLASS];
+};
+
+/* Statistics collected per function for FCoE */
+struct avf_fcoe_stats {
+	u64 rx_fcoe_packets;		/* fcoeprc */
+	u64 rx_fcoe_dwords;		/* focedwrc */
+	u64 rx_fcoe_dropped;		/* fcoerpdc */
+	u64 tx_fcoe_packets;		/* fcoeptc */
+	u64 tx_fcoe_dwords;		/* focedwtc */
+	u64 fcoe_bad_fccrc;		/* fcoecrc */
+	u64 fcoe_last_error;		/* fcoelast */
+	u64 fcoe_ddp_count;		/* fcoeddpc */
+};
+
+/* offset to per function FCoE statistics block */
+#define AVF_FCOE_VF_STAT_OFFSET	0
+#define AVF_FCOE_PF_STAT_OFFSET	128
+#define AVF_FCOE_STAT_MAX		(AVF_FCOE_PF_STAT_OFFSET + AVF_MAX_PF)
+
+/* Statistics collected by the MAC */
+struct avf_hw_port_stats {
+	/* eth stats collected by the port */
+	struct avf_eth_stats eth;
+
+	/* additional port specific stats */
+	u64 tx_dropped_link_down;	/* tdold */
+	u64 crc_errors;			/* crcerrs */
+	u64 illegal_bytes;		/* illerrc */
+	u64 error_bytes;		/* errbc */
+	u64 mac_local_faults;		/* mlfc */
+	u64 mac_remote_faults;		/* mrfc */
+	u64 rx_length_errors;		/* rlec */
+	u64 link_xon_rx;		/* lxonrxc */
+	u64 link_xoff_rx;		/* lxoffrxc */
+	u64 priority_xon_rx[8];		/* pxonrxc[8] */
+	u64 priority_xoff_rx[8];	/* pxoffrxc[8] */
+	u64 link_xon_tx;		/* lxontxc */
+	u64 link_xoff_tx;		/* lxofftxc */
+	u64 priority_xon_tx[8];		/* pxontxc[8] */
+	u64 priority_xoff_tx[8];	/* pxofftxc[8] */
+	u64 priority_xon_2_xoff[8];	/* pxon2offc[8] */
+	u64 rx_size_64;			/* prc64 */
+	u64 rx_size_127;		/* prc127 */
+	u64 rx_size_255;		/* prc255 */
+	u64 rx_size_511;		/* prc511 */
+	u64 rx_size_1023;		/* prc1023 */
+	u64 rx_size_1522;		/* prc1522 */
+	u64 rx_size_big;		/* prc9522 */
+	u64 rx_undersize;		/* ruc */
+	u64 rx_fragments;		/* rfc */
+	u64 rx_oversize;		/* roc */
+	u64 rx_jabber;			/* rjc */
+	u64 tx_size_64;			/* ptc64 */
+	u64 tx_size_127;		/* ptc127 */
+	u64 tx_size_255;		/* ptc255 */
+	u64 tx_size_511;		/* ptc511 */
+	u64 tx_size_1023;		/* ptc1023 */
+	u64 tx_size_1522;		/* ptc1522 */
+	u64 tx_size_big;		/* ptc9522 */
+	u64 mac_short_packet_dropped;	/* mspdc */
+	u64 checksum_error;		/* xec */
+	/* flow director stats */
+	u64 fd_atr_match;
+	u64 fd_sb_match;
+	u64 fd_atr_tunnel_match;
+	u32 fd_atr_status;
+	u32 fd_sb_status;
+	/* EEE LPI */
+	u32 tx_lpi_status;
+	u32 rx_lpi_status;
+	u64 tx_lpi_count;		/* etlpic */
+	u64 rx_lpi_count;		/* erlpic */
+};
+
+/* Checksum and Shadow RAM pointers */
+#define AVF_SR_NVM_CONTROL_WORD		0x00
+#define AVF_SR_PCIE_ANALOG_CONFIG_PTR		0x03
+#define AVF_SR_PHY_ANALOG_CONFIG_PTR		0x04
+#define AVF_SR_OPTION_ROM_PTR			0x05
+#define AVF_SR_RO_PCIR_REGS_AUTO_LOAD_PTR	0x06
+#define AVF_SR_AUTO_GENERATED_POINTERS_PTR	0x07
+#define AVF_SR_PCIR_REGS_AUTO_LOAD_PTR		0x08
+#define AVF_SR_EMP_GLOBAL_MODULE_PTR		0x09
+#define AVF_SR_RO_PCIE_LCB_PTR			0x0A
+#define AVF_SR_EMP_IMAGE_PTR			0x0B
+#define AVF_SR_PE_IMAGE_PTR			0x0C
+#define AVF_SR_CSR_PROTECTED_LIST_PTR		0x0D
+#define AVF_SR_MNG_CONFIG_PTR			0x0E
+#define AVF_SR_EMP_MODULE_PTR			0x0F
+#define AVF_SR_PBA_FLAGS			0x15
+#define AVF_SR_PBA_BLOCK_PTR			0x16
+#define AVF_SR_BOOT_CONFIG_PTR			0x17
+#define AVF_NVM_OEM_VER_OFF			0x83
+#define AVF_SR_NVM_DEV_STARTER_VERSION		0x18
+#define AVF_SR_NVM_WAKE_ON_LAN			0x19
+#define AVF_SR_ALTERNATE_SAN_MAC_ADDRESS_PTR	0x27
+#define AVF_SR_PERMANENT_SAN_MAC_ADDRESS_PTR	0x28
+#define AVF_SR_NVM_MAP_VERSION			0x29
+#define AVF_SR_NVM_IMAGE_VERSION		0x2A
+#define AVF_SR_NVM_STRUCTURE_VERSION		0x2B
+#define AVF_SR_NVM_EETRACK_LO			0x2D
+#define AVF_SR_NVM_EETRACK_HI			0x2E
+#define AVF_SR_VPD_PTR				0x2F
+#define AVF_SR_PXE_SETUP_PTR			0x30
+#define AVF_SR_PXE_CONFIG_CUST_OPTIONS_PTR	0x31
+#define AVF_SR_NVM_ORIGINAL_EETRACK_LO		0x34
+#define AVF_SR_NVM_ORIGINAL_EETRACK_HI		0x35
+#define AVF_SR_SW_ETHERNET_MAC_ADDRESS_PTR	0x37
+#define AVF_SR_POR_REGS_AUTO_LOAD_PTR		0x38
+#define AVF_SR_EMPR_REGS_AUTO_LOAD_PTR		0x3A
+#define AVF_SR_GLOBR_REGS_AUTO_LOAD_PTR	0x3B
+#define AVF_SR_CORER_REGS_AUTO_LOAD_PTR	0x3C
+#define AVF_SR_PHY_ACTIVITY_LIST_PTR		0x3D
+#define AVF_SR_PCIE_ALT_AUTO_LOAD_PTR		0x3E
+#define AVF_SR_SW_CHECKSUM_WORD		0x3F
+#define AVF_SR_1ST_FREE_PROVISION_AREA_PTR	0x40
+#define AVF_SR_4TH_FREE_PROVISION_AREA_PTR	0x42
+#define AVF_SR_3RD_FREE_PROVISION_AREA_PTR	0x44
+#define AVF_SR_2ND_FREE_PROVISION_AREA_PTR	0x46
+#define AVF_SR_EMP_SR_SETTINGS_PTR		0x48
+#define AVF_SR_FEATURE_CONFIGURATION_PTR	0x49
+#define AVF_SR_CONFIGURATION_METADATA_PTR	0x4D
+#define AVF_SR_IMMEDIATE_VALUES_PTR		0x4E
+
+/* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
+#define AVF_SR_VPD_MODULE_MAX_SIZE		1024
+#define AVF_SR_PCIE_ALT_MODULE_MAX_SIZE	1024
+#define AVF_SR_CONTROL_WORD_1_SHIFT		0x06
+#define AVF_SR_CONTROL_WORD_1_MASK	(0x03 << AVF_SR_CONTROL_WORD_1_SHIFT)
+
+/* Shadow RAM related */
+#define AVF_SR_SECTOR_SIZE_IN_WORDS	0x800
+#define AVF_SR_BUF_ALIGNMENT		4096
+#define AVF_SR_WORDS_IN_1KB		512
+/* Checksum should be calculated such that after adding all the words,
+ * including the checksum word itself, the sum should be 0xBABA.
+ */
+#define AVF_SR_SW_CHECKSUM_BASE	0xBABA
+
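
Editorial sketch only, not part of the patch: given the 0xBABA invariant above, the checksum word is whatever value brings the 16-bit sum of all Shadow RAM words up to AVF_SR_SW_CHECKSUM_BASE. Assuming the caller sums every word except the checksum word itself, a hypothetical helper could compute it as follows.

static inline u16
avf_calc_sr_checksum(const u16 *words, u32 nwords)
{
	u16 sum = 0;
	u32 i;

	for (i = 0; i < nwords; i++)
		sum += words[i];

	/* checksum word + sum of the remaining words must equal 0xBABA */
	return (u16)(AVF_SR_SW_CHECKSUM_BASE - sum);
}
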
+#define AVF_SRRD_SRCTL_ATTEMPTS	100000
+
+/* FCoE Tx context descriptor - Use the avf_tx_context_desc struct */
+
+enum avf_fcoe_tx_ctx_desc_cmd_bits {
+	AVF_FCOE_TX_CTX_DESC_OPCODE_SINGLE_SEND	= 0x00, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS2	= 0x01, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS3	= 0x05, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS2	= 0x02, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS3	= 0x06, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS2	= 0x03, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS3	= 0x07, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DDP_CTX_INVL	= 0x08, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_CTX_INVL	= 0x09, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_RELOFF			= 0x10,
+	AVF_FCOE_TX_CTX_DESC_CLRSEQ			= 0x20,
+	AVF_FCOE_TX_CTX_DESC_DIFENA			= 0x40,
+	AVF_FCOE_TX_CTX_DESC_IL2TAG2			= 0x80
+};
+
+/* FCoE DIF/DIX Context descriptor */
+struct avf_fcoe_difdix_context_desc {
+	__le64 flags_buff0_buff1_ref;
+	__le64 difapp_msk_bias;
+};
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT	0
+#define AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_MASK	(0xFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT)
+
+enum avf_fcoe_difdix_ctx_desc_flags_bits {
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_RSVD				= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGCHK		= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGNOTCHK		= 0x0004,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_OPAQUE			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY		= 0x0008,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPTAG	= 0x0010,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPREFTAG	= 0x0018,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_CNST			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_INC1BLK		= 0x0020,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_APPTAG		= 0x0040,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_RSVD			= 0x0060,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_XSUM			= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_CRC			= 0x0080,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_UNTAG			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_BUF			= 0x0100,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_RSVD			= 0x0200,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_EMBDTAGS		= 0x0300,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_UNTAG			= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_TAG			= 0x0400,
+	/* 1 BIT */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_512B			= 0x0000,
+	/* 1 BIT */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_4K			= 0x0800
+};
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT	12
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_MASK	(0x3FFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT	22
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_MASK	(0x3FFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT	32
+#define AVF_FCOE_DIFDIX_CTX_QW0_REF_MASK	(0xFFFFFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT	0
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MASK	(0xFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT	16
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_MASK	(0xFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT	32
+#define AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_MASK	(0xFFFFFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT)
+
+/* FCoE DIF/DIX Buffers descriptor */
+struct avf_fcoe_difdix_buffers_desc {
+	__le64 buff_addr0;
+	__le64 buff_addr1;
+};
+
+/* FCoE DDP Context descriptor */
+struct avf_fcoe_ddp_context_desc {
+	__le64 rsvd;
+	__le64 type_cmd_foff_lsize;
+};
+
+#define AVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT	0
+#define AVF_FCOE_DDP_CTX_QW1_DTYPE_MASK	(0xFULL << \
+					AVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT)
+
+#define AVF_FCOE_DDP_CTX_QW1_CMD_SHIFT	4
+#define AVF_FCOE_DDP_CTX_QW1_CMD_MASK	(0xFULL << \
+					 AVF_FCOE_DDP_CTX_QW1_CMD_SHIFT)
+
+enum avf_fcoe_ddp_ctx_desc_cmd_bits {
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_512B	= 0x00, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_4K		= 0x01, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_8K		= 0x02, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_16K	= 0x03, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_DIFENA		= 0x04, /* 1 BIT  */
+	AVF_FCOE_DDP_CTX_DESC_LASTSEQH		= 0x08, /* 1 BIT  */
+};
+
+#define AVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT	16
+#define AVF_FCOE_DDP_CTX_QW1_FOFF_MASK	(0x3FFFULL << \
+					 AVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT)
+
+#define AVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT	32
+#define AVF_FCOE_DDP_CTX_QW1_LSIZE_MASK	(0x3FFFULL << \
+					AVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT)
+
+/* FCoE DDP/DWO Queue Context descriptor */
+struct avf_fcoe_queue_context_desc {
+	__le64 dmaindx_fbase;           /* 0:11 DMAINDX, 12:63 FBASE */
+	__le64 flen_tph;                /* 0:12 FLEN, 13:15 TPH */
+};
+
+#define AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT	0
+#define AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_MASK	(0xFFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT	12
+#define AVF_FCOE_QUEUE_CTX_QW0_FBASE_MASK	(0xFFFFFFFFFFFFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT	0
+#define AVF_FCOE_QUEUE_CTX_QW1_FLEN_MASK	(0x1FFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW1_TPH_SHIFT	13
+#define AVF_FCOE_QUEUE_CTX_QW1_TPH_MASK	(0x7ULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_TPH_SHIFT)
+
+enum avf_fcoe_queue_ctx_desc_tph_bits {
+	AVF_FCOE_QUEUE_CTX_DESC_TPHRDESC	= 0x1,
+	AVF_FCOE_QUEUE_CTX_DESC_TPHDATA	= 0x2
+};
+
+#define AVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT	30
+#define AVF_FCOE_QUEUE_CTX_QW1_RECIPE_MASK	(0x3ULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT)
+
+/* FCoE DDP/DWO Filter Context descriptor */
+struct avf_fcoe_filter_context_desc {
+	__le32 param;
+	__le16 seqn;
+
+	/* 48:51(0:3) RSVD, 52:63(4:15) DMAINDX */
+	__le16 rsvd_dmaindx;
+
+	/* 0:7 FLAGS, 8:52 RSVD, 53:63 LANQ */
+	__le64 flags_rsvd_lanq;
+};
+
+#define AVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT	4
+#define AVF_FCOE_FILTER_CTX_QW0_DMAINDX_MASK	(0xFFF << \
+					AVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT)
+
+enum avf_fcoe_filter_ctx_desc_flags_bits {
+	AVF_FCOE_FILTER_CTX_DESC_CTYP_DDP	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_CTYP_DWO	= 0x01,
+	AVF_FCOE_FILTER_CTX_DESC_ENODE_INIT	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_ENODE_RSP	= 0x02,
+	AVF_FCOE_FILTER_CTX_DESC_FC_CLASS2	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_FC_CLASS3	= 0x04
+};
+
+#define AVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT	0
+#define AVF_FCOE_FILTER_CTX_QW1_FLAGS_MASK	(0xFFULL << \
+					AVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT)
+
+#define AVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT     8
+#define AVF_FCOE_FILTER_CTX_QW1_PCTYPE_MASK      (0x3FULL << \
+			AVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT)
+
+#define AVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT     53
+#define AVF_FCOE_FILTER_CTX_QW1_LANQINDX_MASK      (0x7FFULL << \
+			AVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT)
+
+enum avf_switch_element_types {
+	AVF_SWITCH_ELEMENT_TYPE_MAC	= 1,
+	AVF_SWITCH_ELEMENT_TYPE_PF	= 2,
+	AVF_SWITCH_ELEMENT_TYPE_VF	= 3,
+	AVF_SWITCH_ELEMENT_TYPE_EMP	= 4,
+	AVF_SWITCH_ELEMENT_TYPE_BMC	= 6,
+	AVF_SWITCH_ELEMENT_TYPE_PE	= 16,
+	AVF_SWITCH_ELEMENT_TYPE_VEB	= 17,
+	AVF_SWITCH_ELEMENT_TYPE_PA	= 18,
+	AVF_SWITCH_ELEMENT_TYPE_VSI	= 19,
+};
+
+/* Supported EtherType filters */
+enum avf_ether_type_index {
+	AVF_ETHER_TYPE_1588		= 0,
+	AVF_ETHER_TYPE_FIP		= 1,
+	AVF_ETHER_TYPE_OUI_EXTENDED	= 2,
+	AVF_ETHER_TYPE_MAC_CONTROL	= 3,
+	AVF_ETHER_TYPE_LLDP		= 4,
+	AVF_ETHER_TYPE_EVB_PROTOCOL1	= 5,
+	AVF_ETHER_TYPE_EVB_PROTOCOL2	= 6,
+	AVF_ETHER_TYPE_QCN_CNM		= 7,
+	AVF_ETHER_TYPE_8021X		= 8,
+	AVF_ETHER_TYPE_ARP		= 9,
+	AVF_ETHER_TYPE_RSV1		= 10,
+	AVF_ETHER_TYPE_RSV2		= 11,
+};
+
+/* Filter context base size is 1K */
+#define AVF_HASH_FILTER_BASE_SIZE	1024
+/* Supported Hash filter values */
+enum avf_hash_filter_size {
+	AVF_HASH_FILTER_SIZE_1K	= 0,
+	AVF_HASH_FILTER_SIZE_2K	= 1,
+	AVF_HASH_FILTER_SIZE_4K	= 2,
+	AVF_HASH_FILTER_SIZE_8K	= 3,
+	AVF_HASH_FILTER_SIZE_16K	= 4,
+	AVF_HASH_FILTER_SIZE_32K	= 5,
+	AVF_HASH_FILTER_SIZE_64K	= 6,
+	AVF_HASH_FILTER_SIZE_128K	= 7,
+	AVF_HASH_FILTER_SIZE_256K	= 8,
+	AVF_HASH_FILTER_SIZE_512K	= 9,
+	AVF_HASH_FILTER_SIZE_1M	= 10,
+};
+
+/* DMA context base size is 0.5K */
+#define AVF_DMA_CNTX_BASE_SIZE		512
+/* Supported DMA context values */
+enum avf_dma_cntx_size {
+	AVF_DMA_CNTX_SIZE_512		= 0,
+	AVF_DMA_CNTX_SIZE_1K		= 1,
+	AVF_DMA_CNTX_SIZE_2K		= 2,
+	AVF_DMA_CNTX_SIZE_4K		= 3,
+	AVF_DMA_CNTX_SIZE_8K		= 4,
+	AVF_DMA_CNTX_SIZE_16K		= 5,
+	AVF_DMA_CNTX_SIZE_32K		= 6,
+	AVF_DMA_CNTX_SIZE_64K		= 7,
+	AVF_DMA_CNTX_SIZE_128K		= 8,
+	AVF_DMA_CNTX_SIZE_256K		= 9,
+};
+
+/* Supported Hash look up table (LUT) sizes */
+enum avf_hash_lut_size {
+	AVF_HASH_LUT_SIZE_128		= 0,
+	AVF_HASH_LUT_SIZE_512		= 1,
+};
+
+/* Structure to hold a per PF filter control settings */
+struct avf_filter_control_settings {
+	/* number of PE Quad Hash filter buckets */
+	enum avf_hash_filter_size pe_filt_num;
+	/* number of PE Quad Hash contexts */
+	enum avf_dma_cntx_size pe_cntx_num;
+	/* number of FCoE filter buckets */
+	enum avf_hash_filter_size fcoe_filt_num;
+	/* number of FCoE DDP contexts */
+	enum avf_dma_cntx_size fcoe_cntx_num;
+	/* size of the Hash LUT */
+	enum avf_hash_lut_size	hash_lut_size;
+	/* enable FDIR filters for PF and its VFs */
+	bool enable_fdir;
+	/* enable Ethertype filters for PF and its VFs */
+	bool enable_ethtype;
+	/* enable MAC/VLAN filters for PF and its VFs */
+	bool enable_macvlan;
+};
+
+/* Structure to hold device level control filter counts */
+struct avf_control_filter_stats {
+	u16 mac_etype_used;   /* Used perfect match MAC/EtherType filters */
+	u16 etype_used;       /* Used perfect EtherType filters */
+	u16 mac_etype_free;   /* Un-used perfect match MAC/EtherType filters */
+	u16 etype_free;       /* Un-used perfect EtherType filters */
+};
+
+enum avf_reset_type {
+	AVF_RESET_POR		= 0,
+	AVF_RESET_CORER	= 1,
+	AVF_RESET_GLOBR	= 2,
+	AVF_RESET_EMPR		= 3,
+};
+
+/* IEEE 802.1AB LLDP Agent Variables from NVM */
+#define AVF_NVM_LLDP_CFG_PTR		0xD
+struct avf_lldp_variables {
+	u16 length;
+	u16 adminstatus;
+	u16 msgfasttx;
+	u16 msgtxinterval;
+	u16 txparams;
+	u16 timers;
+	u16 crc8;
+};
+
+/* Offsets into Alternate RAM */
+#define AVF_ALT_STRUCT_FIRST_PF_OFFSET		0   /* in dwords */
+#define AVF_ALT_STRUCT_DWORDS_PER_PF		64   /* in dwords */
+#define AVF_ALT_STRUCT_OUTER_VLAN_TAG_OFFSET	0xD  /* in dwords */
+#define AVF_ALT_STRUCT_USER_PRIORITY_OFFSET	0xC  /* in dwords */
+#define AVF_ALT_STRUCT_MIN_BW_OFFSET		0xE  /* in dwords */
+#define AVF_ALT_STRUCT_MAX_BW_OFFSET		0xF  /* in dwords */
+
+/* Alternate RAM Bandwidth Masks */
+#define AVF_ALT_BW_VALUE_MASK		0xFF
+#define AVF_ALT_BW_RELATIVE_MASK	0x40000000
+#define AVF_ALT_BW_VALID_MASK		0x80000000
+
+/* RSS Hash Table Size */
+#define AVF_PFQF_CTL_0_HASHLUTSIZE_512	0x00010000
+
+/* INPUT SET MASK for RSS, flow director, and flexible payload */
+#define AVF_L3_SRC_SHIFT		47
+#define AVF_L3_SRC_MASK		(0x3ULL << AVF_L3_SRC_SHIFT)
+#define AVF_L3_V6_SRC_SHIFT		43
+#define AVF_L3_V6_SRC_MASK		(0xFFULL << AVF_L3_V6_SRC_SHIFT)
+#define AVF_L3_DST_SHIFT		35
+#define AVF_L3_DST_MASK		(0x3ULL << AVF_L3_DST_SHIFT)
+#define AVF_L3_V6_DST_SHIFT		35
+#define AVF_L3_V6_DST_MASK		(0xFFULL << AVF_L3_V6_DST_SHIFT)
+#define AVF_L4_SRC_SHIFT		34
+#define AVF_L4_SRC_MASK		(0x1ULL << AVF_L4_SRC_SHIFT)
+#define AVF_L4_DST_SHIFT		33
+#define AVF_L4_DST_MASK		(0x1ULL << AVF_L4_DST_SHIFT)
+#define AVF_VERIFY_TAG_SHIFT		31
+#define AVF_VERIFY_TAG_MASK		(0x3ULL << AVF_VERIFY_TAG_SHIFT)
+
+#define AVF_FLEX_50_SHIFT		13
+#define AVF_FLEX_50_MASK		(0x1ULL << AVF_FLEX_50_SHIFT)
+#define AVF_FLEX_51_SHIFT		12
+#define AVF_FLEX_51_MASK		(0x1ULL << AVF_FLEX_51_SHIFT)
+#define AVF_FLEX_52_SHIFT		11
+#define AVF_FLEX_52_MASK		(0x1ULL << AVF_FLEX_52_SHIFT)
+#define AVF_FLEX_53_SHIFT		10
+#define AVF_FLEX_53_MASK		(0x1ULL << AVF_FLEX_53_SHIFT)
+#define AVF_FLEX_54_SHIFT		9
+#define AVF_FLEX_54_MASK		(0x1ULL << AVF_FLEX_54_SHIFT)
+#define AVF_FLEX_55_SHIFT		8
+#define AVF_FLEX_55_MASK		(0x1ULL << AVF_FLEX_55_SHIFT)
+#define AVF_FLEX_56_SHIFT		7
+#define AVF_FLEX_56_MASK		(0x1ULL << AVF_FLEX_56_SHIFT)
+#define AVF_FLEX_57_SHIFT		6
+#define AVF_FLEX_57_MASK		(0x1ULL << AVF_FLEX_57_SHIFT)
+
+/* Version format for Dynamic Device Personalization (DDP) */
+struct avf_ddp_version {
+	u8 major;
+	u8 minor;
+	u8 update;
+	u8 draft;
+};
+
+#define AVF_DDP_NAME_SIZE	32
+
+/* Package header */
+struct avf_package_header {
+	struct avf_ddp_version version;
+	u32 segment_count;
+	u32 segment_offset[1];
+};
+
+/* Generic segment header */
+struct avf_generic_seg_header {
+#define SEGMENT_TYPE_METADATA	0x00000001
+#define SEGMENT_TYPE_NOTES	0x00000002
+#define SEGMENT_TYPE_AVF	0x00000011
+#define SEGMENT_TYPE_X722	0x00000012
+	u32 type;
+	struct avf_ddp_version version;
+	u32 size;
+	char name[AVF_DDP_NAME_SIZE];
+};
+
+struct avf_metadata_segment {
+	struct avf_generic_seg_header header;
+	struct avf_ddp_version version;
+#define AVF_DDP_TRACKID_RDONLY		0
+#define AVF_DDP_TRACKID_INVALID	0xFFFFFFFF
+	u32 track_id;
+	char name[AVF_DDP_NAME_SIZE];
+};
+
+struct avf_device_id_entry {
+	u32 vendor_dev_id;
+	u32 sub_vendor_dev_id;
+};
+
+struct avf_profile_segment {
+	struct avf_generic_seg_header header;
+	struct avf_ddp_version version;
+	char name[AVF_DDP_NAME_SIZE];
+	u32 device_table_count;
+	struct avf_device_id_entry device_table[1];
+};
+
+struct avf_section_table {
+	u32 section_count;
+	u32 section_offset[1];
+};
+
+struct avf_profile_section_header {
+	u16 tbl_size;
+	u16 data_end;
+	struct {
+#define SECTION_TYPE_INFO	0x00000010
+#define SECTION_TYPE_MMIO	0x00000800
+#define SECTION_TYPE_RB_MMIO	0x00001800
+#define SECTION_TYPE_AQ		0x00000801
+#define SECTION_TYPE_RB_AQ	0x00001801
+#define SECTION_TYPE_NOTE	0x80000000
+#define SECTION_TYPE_NAME	0x80000001
+#define SECTION_TYPE_PROTO	0x80000002
+#define SECTION_TYPE_PCTYPE	0x80000003
+#define SECTION_TYPE_PTYPE	0x80000004
+		u32 type;
+		u32 offset;
+		u32 size;
+	} section;
+};
+
+struct avf_profile_tlv_section_record {
+	u8 rtype;
+	u8 type;
+	u16 len;
+	u8 data[12];
+};
+
+/* Generic AQ section in profile */
+struct avf_profile_aq_section {
+	u16 opcode;
+	u16 flags;
+	u8  param[16];
+	u16 datalen;
+	u8  data[1];
+};
+
+struct avf_profile_info {
+	u32 track_id;
+	struct avf_ddp_version version;
+	u8 op;
+#define AVF_DDP_ADD_TRACKID		0x01
+#define AVF_DDP_REMOVE_TRACKID	0x02
+	u8 reserved[7];
+	u8 name[AVF_DDP_NAME_SIZE];
+};
+#endif /* _AVF_TYPE_H_ */
diff --git a/drivers/net/avf/base/virtchnl.h b/drivers/net/avf/base/virtchnl.h
new file mode 100644
index 0000000..7524e09
--- /dev/null
+++ b/drivers/net/avf/base/virtchnl.h
@@ -0,0 +1,786 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _VIRTCHNL_H_
+#define _VIRTCHNL_H_
+
+/* Description:
+ * This header file describes the VF-PF communication protocol used
+ * by the drivers for all devices starting from our 40G product line
+ *
+ * Admin queue buffer usage:
+ * desc->opcode is always aqc_opc_send_msg_to_pf
+ * flags, retval, datalen, and data addr are all used normally.
+ * The Firmware copies the cookie fields when sending messages between the
+ * PF and VF, but uses all other fields internally. Due to this limitation,
+ * we must send all messages as "indirect", i.e. using an external buffer.
+ *
+ * All the VSI indexes are relative to the VF. Each VF can have a maximum of
+ * three VSIs. All the queue indexes are relative to the VSI.  Each VF can
+ * have a maximum of sixteen queues for all of its VSIs.
+ *
+ * The PF is required to return a status code in v_retval for all messages
+ * except RESET_VF, which does not require any response. The return value
+ * is of status_code type, defined in the shared type.h.
+ *
+ * In general, VF driver initialization should roughly follow the order of
+ * these opcodes. The VF driver must first validate the API version of the
+ * PF driver, then request a reset, then get resources, then configure
+ * queues and interrupts. After these operations are complete, the VF
+ * driver may start its queues, optionally add MAC and VLAN filters, and
+ * process traffic.
+ */
+
+/* START GENERIC DEFINES
+ * Need to ensure the following enums and defines hold the same meaning and
+ * value in current and future projects
+ */
+
+/* Error Codes */
+enum virtchnl_status_code {
+	VIRTCHNL_STATUS_SUCCESS				= 0,
+	VIRTCHNL_ERR_PARAM				= -5,
+	VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH		= -38,
+	VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR		= -39,
+	VIRTCHNL_STATUS_ERR_INVALID_VF_ID		= -40,
+	VIRTCHNL_STATUS_NOT_SUPPORTED			= -64,
+};
+
+#define VIRTCHNL_LINK_SPEED_100MB_SHIFT		0x1
+#define VIRTCHNL_LINK_SPEED_1000MB_SHIFT	0x2
+#define VIRTCHNL_LINK_SPEED_10GB_SHIFT		0x3
+#define VIRTCHNL_LINK_SPEED_40GB_SHIFT		0x4
+#define VIRTCHNL_LINK_SPEED_20GB_SHIFT		0x5
+#define VIRTCHNL_LINK_SPEED_25GB_SHIFT		0x6
+
+enum virtchnl_link_speed {
+	VIRTCHNL_LINK_SPEED_UNKNOWN	= 0,
+	VIRTCHNL_LINK_SPEED_100MB	= BIT(VIRTCHNL_LINK_SPEED_100MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_1GB		= BIT(VIRTCHNL_LINK_SPEED_1000MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_10GB	= BIT(VIRTCHNL_LINK_SPEED_10GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_40GB	= BIT(VIRTCHNL_LINK_SPEED_40GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_20GB	= BIT(VIRTCHNL_LINK_SPEED_20GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_25GB	= BIT(VIRTCHNL_LINK_SPEED_25GB_SHIFT),
+};
+
+/* for hsplit_0 field of Rx HMC context */
+/* deprecated with AVF 1.0 */
+enum virtchnl_rx_hsplit {
+	VIRTCHNL_RX_HSPLIT_NO_SPLIT      = 0,
+	VIRTCHNL_RX_HSPLIT_SPLIT_L2      = 1,
+	VIRTCHNL_RX_HSPLIT_SPLIT_IP      = 2,
+	VIRTCHNL_RX_HSPLIT_SPLIT_TCP_UDP = 4,
+	VIRTCHNL_RX_HSPLIT_SPLIT_SCTP    = 8,
+};
+
+#define VIRTCHNL_ETH_LENGTH_OF_ADDRESS	6
+/* END GENERIC DEFINES */
+
+/* Opcodes for VF-PF communication. These are placed in the v_opcode field
+ * of the virtchnl_msg structure.
+ */
+enum virtchnl_ops {
+/* The PF sends status change events to VFs using
+ * the VIRTCHNL_OP_EVENT opcode.
+ * VFs send requests to the PF using the other ops.
+ * Use of "advanced opcode" features must be negotiated as part of capabilities
+ * exchange and is not considered part of the base mode feature set.
+ */
+	VIRTCHNL_OP_UNKNOWN = 0,
+	VIRTCHNL_OP_VERSION = 1, /* must ALWAYS be 1 */
+	VIRTCHNL_OP_RESET_VF = 2,
+	VIRTCHNL_OP_GET_VF_RESOURCES = 3,
+	VIRTCHNL_OP_CONFIG_TX_QUEUE = 4,
+	VIRTCHNL_OP_CONFIG_RX_QUEUE = 5,
+	VIRTCHNL_OP_CONFIG_VSI_QUEUES = 6,
+	VIRTCHNL_OP_CONFIG_IRQ_MAP = 7,
+	VIRTCHNL_OP_ENABLE_QUEUES = 8,
+	VIRTCHNL_OP_DISABLE_QUEUES = 9,
+	VIRTCHNL_OP_ADD_ETH_ADDR = 10,
+	VIRTCHNL_OP_DEL_ETH_ADDR = 11,
+	VIRTCHNL_OP_ADD_VLAN = 12,
+	VIRTCHNL_OP_DEL_VLAN = 13,
+	VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE = 14,
+	VIRTCHNL_OP_GET_STATS = 15,
+	VIRTCHNL_OP_RSVD = 16,
+	VIRTCHNL_OP_EVENT = 17, /* must ALWAYS be 17 */
+#ifdef VIRTCHNL_SOL_VF_SUPPORT
+	VIRTCHNL_OP_GET_ADDNL_SOL_CONFIG = 19,
+#endif
+#ifdef VIRTCHNL_IWARP
+	VIRTCHNL_OP_IWARP = 20, /* advanced opcode */
+	VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP = 21, /* advanced opcode */
+	VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP = 22, /* advanced opcode */
+#endif
+	VIRTCHNL_OP_CONFIG_RSS_KEY = 23,
+	VIRTCHNL_OP_CONFIG_RSS_LUT = 24,
+	VIRTCHNL_OP_GET_RSS_HENA_CAPS = 25,
+	VIRTCHNL_OP_SET_RSS_HENA = 26,
+	VIRTCHNL_OP_ENABLE_VLAN_STRIPPING = 27,
+	VIRTCHNL_OP_DISABLE_VLAN_STRIPPING = 28,
+	VIRTCHNL_OP_REQUEST_QUEUES = 29,
+
+};
+
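
For orientation only (an editorial note, not part of the patch): the initialization order described in the header comment above maps onto the opcodes roughly as listed below.

/* Editorial sketch: typical order of virtchnl ops during VF init. */
static const enum virtchnl_ops avf_vf_init_sequence[] = {
	VIRTCHNL_OP_VERSION,          /* validate the API version first */
	VIRTCHNL_OP_RESET_VF,         /* then request a reset */
	VIRTCHNL_OP_GET_VF_RESOURCES, /* query VSIs, queues, vectors */
	VIRTCHNL_OP_CONFIG_VSI_QUEUES,
	VIRTCHNL_OP_CONFIG_IRQ_MAP,
	VIRTCHNL_OP_ENABLE_QUEUES,    /* queues may start after this */
};
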
+/* This macro is used to generate a compilation error if a structure
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure is not of the correct size; otherwise it creates an enum that is
+ * never used.
+ */
+#define VIRTCHNL_CHECK_STRUCT_LEN(n, X) enum virtchnl_static_assert_enum_##X \
+	{virtchnl_static_assert_##X = (n) / ((sizeof(struct X) == (n)) ? 1 : 0)}
+
+/* Virtual channel message descriptor. This overlays the admin queue
+ * descriptor. All other data is passed in external buffers.
+ */
+
+struct virtchnl_msg {
+	u8 pad[8];			 /* AQ flags/opcode/len/retval fields */
+	enum virtchnl_ops v_opcode; /* avoid confusion with desc->opcode */
+	enum virtchnl_status_code v_retval;  /* ditto for desc->retval */
+	u32 vfid;			 /* used by PF when sending to VF */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg);
+
+/* Message descriptions and data structures.*/
+
+/* VIRTCHNL_OP_VERSION
+ * VF posts its version number to the PF. PF responds with its version number
+ * in the same format, along with a return code.
+ * Reply from PF has its major/minor versions also in param0 and param1.
+ * If there is a major version mismatch, then the VF cannot operate.
+ * If there is a minor version mismatch, then the VF can operate but should
+ * add a warning to the system log.
+ *
+ * This enum element MUST always be specified as == 1, regardless of other
+ * changes in the API. The PF must always respond to this message without
+ * error regardless of version mismatch.
+ */
+#define VIRTCHNL_VERSION_MAJOR		1
+#define VIRTCHNL_VERSION_MINOR		1
+#define VIRTCHNL_VERSION_MINOR_NO_VF_CAPS	0
+
+struct virtchnl_version_info {
+	u32 major;
+	u32 minor;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_version_info);
+
+#define VF_IS_V10(_v) (((_v)->major == 1) && ((_v)->minor == 0))
+#define VF_IS_V11(_ver) (((_ver)->major == 1) && ((_ver)->minor == 1))
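
A minimal editorial sketch of the compatibility rule stated above: a major version mismatch is fatal, while a minor mismatch only calls for a warning. The function is illustrative and not part of the patch.

static inline bool
avf_pf_version_compatible(const struct virtchnl_version_info *pf_ver)
{
	/* Major mismatch: the VF cannot operate.
	 * Minor mismatch: still usable; the caller should log a warning.
	 */
	return pf_ver->major == VIRTCHNL_VERSION_MAJOR;
}
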
+
+/* VIRTCHNL_OP_RESET_VF
+ * VF sends this request to PF with no parameters
+ * PF does NOT respond! VF driver must delay then poll VFGEN_RSTAT register
+ * until reset completion is indicated. The admin queue must be reinitialized
+ * after this operation.
+ *
+ * When reset is complete, PF must ensure that all queues in all VSIs associated
+ * with the VF are stopped, all queue configurations in the HMC are set to 0,
+ * and all MAC and VLAN filters (except the default MAC address) on all VSIs
+ * are cleared.
+ */
+
+/* VSI types that use VIRTCHNL interface for VF-PF communication. VSI_SRIOV
+ * vsi_type should always be 6 for backward compatibility. Add other fields
+ * as needed.
+ */
+enum virtchnl_vsi_type {
+	VIRTCHNL_VSI_TYPE_INVALID = 0,
+	VIRTCHNL_VSI_SRIOV = 6,
+};
+
+/* VIRTCHNL_OP_GET_VF_RESOURCES
+ * Version 1.0 VF sends this request to PF with no parameters
+ * Version 1.1 VF sends this request to PF with u32 bitmap of its capabilities
+ * PF responds with an indirect message containing
+ * virtchnl_vf_resource and one or more
+ * virtchnl_vsi_resource structures.
+ */
+
+struct virtchnl_vsi_resource {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	enum virtchnl_vsi_type vsi_type;
+	u16 qset_handle;
+	u8 default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
+
+/* VF offload flags
+ * VIRTCHNL_VF_OFFLOAD_L2 flag is inclusive of base mode L2 offloads including
+ * TX/RX Checksum offloading and TSO for non-tunnelled packets.
+ */
+#define VIRTCHNL_VF_OFFLOAD_L2			0x00000001
+#define VIRTCHNL_VF_OFFLOAD_IWARP		0x00000002
+#define VIRTCHNL_VF_OFFLOAD_RSVD		0x00000004
+#define VIRTCHNL_VF_OFFLOAD_RSS_AQ		0x00000008
+#define VIRTCHNL_VF_OFFLOAD_RSS_REG		0x00000010
+#define VIRTCHNL_VF_OFFLOAD_WB_ON_ITR		0x00000020
+#define VIRTCHNL_VF_OFFLOAD_REQ_QUEUES		0x00000040
+#define VIRTCHNL_VF_OFFLOAD_VLAN		0x00010000
+#define VIRTCHNL_VF_OFFLOAD_RX_POLLING		0x00020000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2	0x00040000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PF		0X00080000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP		0X00100000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM		0X00200000
+#define VIRTCHNL_VF_OFFLOAD_RX_ENCAP_CSUM	0X00400000
+
+#define VF_BASE_MODE_OFFLOADS (VIRTCHNL_VF_OFFLOAD_L2 | \
+			       VIRTCHNL_VF_OFFLOAD_VLAN | \
+			       VIRTCHNL_VF_OFFLOAD_RSS_PF)
+
+struct virtchnl_vf_resource {
+	u16 num_vsis;
+	u16 num_queue_pairs;
+	u16 max_vectors;
+	u16 max_mtu;
+
+	u32 vf_offload_flags;
+	u32 rss_key_size;
+	u32 rss_lut_size;
+
+	struct virtchnl_vsi_resource vsi_res[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(36, virtchnl_vf_resource);
+
+/* VIRTCHNL_OP_CONFIG_TX_QUEUE
+ * VF sends this message to set up parameters for one TX queue.
+ * External data buffer contains one instance of virtchnl_txq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Tx queue config info */
+struct virtchnl_txq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u16 ring_len;		/* number of descriptors, multiple of 8 */
+	u16 headwb_enabled; /* deprecated with AVF 1.0 */
+	u64 dma_ring_addr;
+	u64 dma_headwb_addr; /* deprecated with AVF 1.0 */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_txq_info);
+
+/* VIRTCHNL_OP_CONFIG_RX_QUEUE
+ * VF sends this message to set up parameters for one RX queue.
+ * External data buffer contains one instance of virtchnl_rxq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Rx queue config info */
+struct virtchnl_rxq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u32 ring_len;		/* number of descriptors, multiple of 32 */
+	u16 hdr_size;
+	u16 splithdr_enabled; /* deprecated with AVF 1.0 */
+	u32 databuffer_size;
+	u32 max_pkt_size;
+	u32 pad1;
+	u64 dma_ring_addr;
+	enum virtchnl_rx_hsplit rx_split_pos; /* deprecated with AVF 1.0 */
+	u32 pad2;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_rxq_info);
+
+/* VIRTCHNL_OP_CONFIG_VSI_QUEUES
+ * VF sends this message to set parameters for all active TX and RX queues
+ * associated with the specified VSI.
+ * PF configures queues and returns status.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the VSI, an error is returned and no queues are configured.
+ */
+struct virtchnl_queue_pair_info {
+	/* NOTE: vsi_id and queue_id should be identical for both queues. */
+	struct virtchnl_txq_info txq;
+	struct virtchnl_rxq_info rxq;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(64, virtchnl_queue_pair_info);
+
+struct virtchnl_vsi_queue_config_info {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	u32 pad;
+	struct virtchnl_queue_pair_info qpair[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_vsi_queue_config_info);
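
An editorial sketch under stated assumptions, not part of the patch: a single-queue-pair VIRTCHNL_OP_CONFIG_VSI_QUEUES buffer could be populated as below. The ring lengths, buffer sizes and the helper name are placeholders, and calloc() (from <stdlib.h>) stands in for whatever DMA-safe allocator a real driver would use.

static struct virtchnl_vsi_queue_config_info *
avf_build_vsi_queue_cfg(u16 vsi_id, u64 tx_ring_phys, u64 rx_ring_phys)
{
	struct virtchnl_vsi_queue_config_info *ci = calloc(1, sizeof(*ci));

	if (!ci)
		return NULL;
	ci->vsi_id = vsi_id;               /* from the GET_VF_RESOURCES reply */
	ci->num_queue_pairs = 1;           /* qpair[1] already holds one pair */
	ci->qpair[0].txq.vsi_id = vsi_id;
	ci->qpair[0].txq.queue_id = 0;
	ci->qpair[0].txq.ring_len = 512;   /* multiple of 8 */
	ci->qpair[0].txq.dma_ring_addr = tx_ring_phys;
	ci->qpair[0].rxq.vsi_id = vsi_id;
	ci->qpair[0].rxq.queue_id = 0;
	ci->qpair[0].rxq.ring_len = 512;   /* multiple of 32 */
	ci->qpair[0].rxq.databuffer_size = 2048;
	ci->qpair[0].rxq.max_pkt_size = 1518;
	ci->qpair[0].rxq.dma_ring_addr = rx_ring_phys;
	return ci;                         /* send as an indirect AQ message */
}
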
+
+/* VIRTCHNL_OP_REQUEST_QUEUES
+ * VF sends this message to request the PF to allocate additional queues to
+ * this VF.  Each VF gets a guaranteed number of queues on init but asking for
+ * additional queues must be negotiated.  This is a best effort request as it
+ * is possible the PF does not have enough queues left to support the request.
+ * If the PF cannot support the number requested it will respond with the
+ * maximum number it is able to support; otherwise it will respond with the
+ * number requested.
+ */
+
+/* VF resource request */
+struct virtchnl_vf_res_request {
+	u16 num_queue_pairs;
+};
+
+/* VIRTCHNL_OP_CONFIG_IRQ_MAP
+ * VF uses this message to map vectors to queues.
+ * The rxq_map and txq_map fields are bitmaps used to indicate which queues
+ * are to be associated with the specified vector.
+ * The "other" causes are always mapped to vector 0.
+ * PF configures interrupt mapping and returns status.
+ */
+struct virtchnl_vector_map {
+	u16 vsi_id;
+	u16 vector_id;
+	u16 rxq_map;
+	u16 txq_map;
+	u16 rxitr_idx;
+	u16 txitr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_vector_map);
+
+struct virtchnl_irq_map_info {
+	u16 num_vectors;
+	struct virtchnl_vector_map vecmap[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(14, virtchnl_irq_map_info);
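
/* Illustrative sketch (not part of this patch): per the comment above,
 * rxq_map/txq_map are per-queue bitmaps.  A VF that wants Rx queues 0 and 1
 * plus Tx queue 0 handled by MSI-X vector 1 of VSI 3 (values hypothetical)
 * would fill one map entry like this:
 */
static inline void
example_fill_vector_map(struct virtchnl_vector_map *map)
{
	map->vsi_id = 3;
	map->vector_id = 1;
	map->rxq_map = (1 << 0) | (1 << 1);	/* Rx queues 0 and 1 */
	map->txq_map = (1 << 0);		/* Tx queue 0 only */
	map->rxitr_idx = 0;			/* default Rx ITR index */
	map->txitr_idx = 0;			/* default Tx ITR index */
}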
+
+/* VIRTCHNL_OP_ENABLE_QUEUES
+ * VIRTCHNL_OP_DISABLE_QUEUES
+ * VF sends these messages to enable or disable TX/RX queue pairs.
+ * The queues fields are bitmaps indicating which queues to act upon.
+ * (Currently, we only support 16 queues per VF, but we make the field
+ * u32 to allow for expansion.)
+ * PF performs requested action and returns status.
+ */
+struct virtchnl_queue_select {
+	u16 vsi_id;
+	u16 pad;
+	u32 rx_queues;
+	u32 tx_queues;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_select);
+
+/* VIRTCHNL_OP_ADD_ETH_ADDR
+ * VF sends this message in order to add one or more unicast or multicast
+ * address filters for the specified VSI.
+ * PF adds the filters and returns status.
+ */
+
+/* VIRTCHNL_OP_DEL_ETH_ADDR
+ * VF sends this message in order to remove one or more unicast or multicast
+ * filters for the specified VSI.
+ * PF removes the filters and returns status.
+ */
+
+struct virtchnl_ether_addr {
+	u8 addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+	u8 pad[2];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_ether_addr);
+
+struct virtchnl_ether_addr_list {
+	u16 vsi_id;
+	u16 num_elements;
+	struct virtchnl_ether_addr list[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_ether_addr_list);
+
+#ifdef VIRTCHNL_SOL_VF_SUPPORT
+/* VIRTCHNL_OP_GET_ADDNL_SOL_CONFIG
+ * VF sends this message to get the default MTU and list of additional ethernet
+ * addresses it is allowed to use.
+ * PF responds with an indirect message containing
+ * virtchnl_addnl_solaris_config with zero or more
+ * virtchnl_ether_addr structures.
+ *
+ * It is expected that this operation will only ever be needed for Solaris VFs
+ * running under a Solaris PF.
+ */
+struct virtchnl_addnl_solaris_config {
+	u16 default_mtu;
+	struct virtchnl_ether_addr_list al;
+};
+
+#endif
+/* VIRTCHNL_OP_ADD_VLAN
+ * VF sends this message to add one or more VLAN tag filters for receives.
+ * PF adds the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+/* VIRTCHNL_OP_DEL_VLAN
+ * VF sends this message to remove one or more VLAN tag filters for receives.
+ * PF removes the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+struct virtchnl_vlan_filter_list {
+	u16 vsi_id;
+	u16 num_elements;
+	u16 vlan_id[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vlan_filter_list);
+
+/* VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE
+ * VF sends VSI id and flags.
+ * PF returns status code in retval.
+ * Note: we assume that broadcast accept mode is always enabled.
+ */
+struct virtchnl_promisc_info {
+	u16 vsi_id;
+	u16 flags;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_promisc_info);
+
+#define FLAG_VF_UNICAST_PROMISC	0x00000001
+#define FLAG_VF_MULTICAST_PROMISC	0x00000002
+
+/* VIRTCHNL_OP_GET_STATS
+ * VF sends this message to request stats for the selected VSI. VF uses
+ * the virtchnl_queue_select struct to specify the VSI. The queue_id
+ * field is ignored by the PF.
+ *
+ * PF replies with struct virtchnl_eth_stats in an external buffer.
+ */
+struct virtchnl_eth_stats {
+	u64 rx_bytes;			/* received bytes */
+	u64 rx_unicast;			/* received unicast pkts */
+	u64 rx_multicast;		/* received multicast pkts */
+	u64 rx_broadcast;		/* received broadcast pkts */
+	u64 rx_discards;
+	u64 rx_unknown_protocol;
+	u64 tx_bytes;			/* transmitted bytes*/
+	u64 tx_unicast;			/* transmitted unicast pkts */
+	u64 tx_multicast;		/* transmitted multicast pkts */
+	u64 tx_broadcast;		/* transmitted broadcast pkts */
+	u64 tx_discards;
+	u64 tx_errors;
+};
+
+/* VIRTCHNL_OP_CONFIG_RSS_KEY
+ * VIRTCHNL_OP_CONFIG_RSS_LUT
+ * VF sends these messages to configure RSS. Only supported if both PF
+ * and VF drivers set the VIRTCHNL_VF_OFFLOAD_RSS_PF bit during
+ * configuration negotiation. If this is the case, then the RSS fields in
+ * the VF resource struct are valid.
+ * Both the key and LUT are initialized to 0 by the PF, meaning that
+ * RSS is effectively disabled until set up by the VF.
+ */
+struct virtchnl_rss_key {
+	u16 vsi_id;
+	u16 key_len;
+	u8 key[1];         /* RSS hash key, packed bytes */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key);
+
+struct virtchnl_rss_lut {
+	u16 vsi_id;
+	u16 lut_entries;
+	u8 lut[1];        /* RSS lookup table*/
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
+
+/* VIRTCHNL_OP_GET_RSS_HENA_CAPS
+ * VIRTCHNL_OP_SET_RSS_HENA
+ * VF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the PF sets these to all possible traffic types that the
+ * hardware supports. The VF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ */
+struct virtchnl_rss_hena {
+	u64 hena;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_rss_hena);
+
+/* VIRTCHNL_OP_EVENT
+ * PF sends this message to inform the VF driver of events that may affect it.
+ * No direct response is expected from the VF, though it may generate other
+ * messages in response to this one.
+ */
+enum virtchnl_event_codes {
+	VIRTCHNL_EVENT_UNKNOWN = 0,
+	VIRTCHNL_EVENT_LINK_CHANGE,
+	VIRTCHNL_EVENT_RESET_IMPENDING,
+	VIRTCHNL_EVENT_PF_DRIVER_CLOSE,
+};
+
+#define PF_EVENT_SEVERITY_INFO		0
+#define PF_EVENT_SEVERITY_ATTENTION	1
+#define PF_EVENT_SEVERITY_ACTION_REQUIRED	2
+#define PF_EVENT_SEVERITY_CERTAIN_DOOM	255
+
+struct virtchnl_pf_event {
+	enum virtchnl_event_codes event;
+	union {
+		struct {
+			enum virtchnl_link_speed link_speed;
+			bool link_status;
+		} link_event;
+	} event_data;
+
+	int severity;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_pf_event);
+
+#ifdef VIRTCHNL_IWARP
+
+/* VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP
+ * VF uses this message to request PF to map IWARP vectors to IWARP queues.
+ * The request for this originates from the VF IWARP driver through
+ * a client interface between VF LAN and VF IWARP driver.
+ * A vector could have an AEQ and CEQ attached to it although
+ * there is a single AEQ per VF IWARP instance in which case
+ * most vectors will have an INVALID_IDX for aeq and valid idx for ceq.
+ * There will never be a case where there will be multiple CEQs attached
+ * to a single vector.
+ * PF configures interrupt mapping and returns status.
+ */
+
+/* HW does not define a type value for AEQ; only for RX/TX and CEQ.
+ * In order for us to keep the interface simple, SW will define a
+ * unique type value for AEQ.
+ */
+#define QUEUE_TYPE_PE_AEQ  0x80
+#define QUEUE_INVALID_IDX  0xFFFF
+
+struct virtchnl_iwarp_qv_info {
+	u32 v_idx; /* msix_vector */
+	u16 ceq_idx;
+	u16 aeq_idx;
+	u8 itr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_iwarp_qv_info);
+
+struct virtchnl_iwarp_qvlist_info {
+	u32 num_vectors;
+	struct virtchnl_iwarp_qv_info qv_info[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_iwarp_qvlist_info);
+
+#endif
+
+/* VF reset states - these are written into the RSTAT register:
+ * VFGEN_RSTAT on the VF
+ * When the PF initiates a reset, it writes 0
+ * When the reset is complete, it writes 1
+ * When the PF detects that the VF has recovered, it writes 2
+ * VF checks this register periodically to determine if a reset has occurred,
+ * then polls it to know when the reset is complete.
+ * If either the PF or VF reads the register while the hardware
+ * is in a reset state, it will return DEADBEEF, which, when masked
+ * will result in 3.
+ */
+enum virtchnl_vfr_states {
+	VIRTCHNL_VFR_INPROGRESS = 0,
+	VIRTCHNL_VFR_COMPLETED,
+	VIRTCHNL_VFR_VFACTIVE,
+};
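
/* Illustrative note (not part of this patch): the comment above implies a
 * 2-bit VFR state field, so a read taken during reset gives
 * 0xDEADBEEF & 0x3 == 3, which matches none of the states above and means
 * the hardware is still resetting.  A VF therefore just keeps polling, e.g.:
 *
 *	state = read_rstat_reg() & 0x3;		// hypothetical register read
 *	while (state != VIRTCHNL_VFR_COMPLETED &&
 *	       state != VIRTCHNL_VFR_VFACTIVE)
 *		state = read_rstat_reg() & 0x3;
 *
 * avf_check_vf_reset_done() later in this series follows the same pattern,
 * with a bounded loop and a delay between reads.
 */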
+
+/**
+ * virtchnl_vc_validate_vf_msg
+ * @ver: Virtchnl version info
+ * @v_opcode: Opcode for the message
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * validate msg format against struct for each opcode
+ */
+static inline int
+virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
+			    u8 *msg, u16 msglen)
+{
+	bool err_msg_format = false;
+	int valid_len = 0;
+
+	/* Validate message length. */
+	switch (v_opcode) {
+	case VIRTCHNL_OP_VERSION:
+		valid_len = sizeof(struct virtchnl_version_info);
+		break;
+	case VIRTCHNL_OP_RESET_VF:
+		break;
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		if (VF_IS_V11(ver))
+			valid_len = sizeof(u32);
+		break;
+	case VIRTCHNL_OP_CONFIG_TX_QUEUE:
+		valid_len = sizeof(struct virtchnl_txq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_RX_QUEUE:
+		valid_len = sizeof(struct virtchnl_rxq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+		valid_len = sizeof(struct virtchnl_vsi_queue_config_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_vsi_queue_config_info *vqc =
+			    (struct virtchnl_vsi_queue_config_info *)msg;
+			valid_len += (vqc->num_queue_pairs *
+				      sizeof(struct
+					     virtchnl_queue_pair_info));
+			if (vqc->num_queue_pairs == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+		valid_len = sizeof(struct virtchnl_irq_map_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_irq_map_info *vimi =
+			    (struct virtchnl_irq_map_info *)msg;
+			valid_len += (vimi->num_vectors *
+				      sizeof(struct virtchnl_vector_map));
+			if (vimi->num_vectors == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ENABLE_QUEUES:
+	case VIRTCHNL_OP_DISABLE_QUEUES:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+	case VIRTCHNL_OP_ADD_ETH_ADDR:
+	case VIRTCHNL_OP_DEL_ETH_ADDR:
+		valid_len = sizeof(struct virtchnl_ether_addr_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_ether_addr_list *veal =
+			    (struct virtchnl_ether_addr_list *)msg;
+			valid_len += veal->num_elements *
+			    sizeof(struct virtchnl_ether_addr);
+			if (veal->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ADD_VLAN:
+	case VIRTCHNL_OP_DEL_VLAN:
+		valid_len = sizeof(struct virtchnl_vlan_filter_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_vlan_filter_list *vfl =
+			    (struct virtchnl_vlan_filter_list *)msg;
+			valid_len += vfl->num_elements * sizeof(u16);
+			if (vfl->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+		valid_len = sizeof(struct virtchnl_promisc_info);
+		break;
+	case VIRTCHNL_OP_GET_STATS:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+#ifdef VIRTCHNL_IWARP
+	case VIRTCHNL_OP_IWARP:
+		/* These messages are opaque to us and will be validated in
+		 * the RDMA client code. We just need to check for nonzero
+		 * length. The firmware will enforce max length restrictions.
+		 */
+		if (msglen)
+			valid_len = msglen;
+		else
+			err_msg_format = true;
+		break;
+	case VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP:
+		break;
+	case VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP:
+		valid_len = sizeof(struct virtchnl_iwarp_qvlist_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_iwarp_qvlist_info *qv =
+				(struct virtchnl_iwarp_qvlist_info *)msg;
+			if (qv->num_vectors == 0) {
+				err_msg_format = true;
+				break;
+			}
+			valid_len += ((qv->num_vectors - 1) *
+				sizeof(struct virtchnl_iwarp_qv_info));
+		}
+		break;
+#endif
+	case VIRTCHNL_OP_CONFIG_RSS_KEY:
+		valid_len = sizeof(struct virtchnl_rss_key);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_key *vrk =
+				(struct virtchnl_rss_key *)msg;
+			valid_len += vrk->key_len - 1;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_LUT:
+		valid_len = sizeof(struct virtchnl_rss_lut);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_lut *vrl =
+				(struct virtchnl_rss_lut *)msg;
+			valid_len += vrl->lut_entries - 1;
+		}
+		break;
+	case VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+		break;
+	case VIRTCHNL_OP_SET_RSS_HENA:
+		valid_len = sizeof(struct virtchnl_rss_hena);
+		break;
+	case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+	case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+		break;
+	case VIRTCHNL_OP_REQUEST_QUEUES:
+		valid_len = sizeof(struct virtchnl_vf_res_request);
+		break;
+	/* These are always errors coming from the VF. */
+	case VIRTCHNL_OP_EVENT:
+	case VIRTCHNL_OP_UNKNOWN:
+	default:
+		return VIRTCHNL_ERR_PARAM;
+	}
+	/* few more checks */
+	if ((valid_len != msglen) || (err_msg_format))
+		return VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH;
+
+	return 0;
+}
+#endif /* _VIRTCHNL_H_ */
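
For illustration only (not part of the patch): a PF-side handler would
typically run every message received from a VF through the validation helper
above before touching the payload. A minimal sketch, with hypothetical
variable names for the admin-queue event fields:

	/* ver: negotiated virtchnl version of the sending VF;
	 * v_opcode, msg, msglen: taken from the received admin-queue event.
	 */
	int err = virtchnl_vc_validate_vf_msg(&ver, v_opcode, msg, msglen);
	if (err) {
		/* malformed request: reply with the error, do not parse msg */
		return err;
	}
	/* msg can now safely be cast to the struct matching v_opcode */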
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v3 02/15] net/avf: initialization of avf PMD
  2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 01/15] net/avf/base: add base code for " Wenzhuo Lu
@ 2018-01-04  5:27     ` Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 03/15] net/avf: enable queue and device Wenzhuo Lu
                       ` (12 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-04  5:27 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 config/common_base                      |   5 +
 drivers/net/Makefile                    |   1 +
 drivers/net/avf/Makefile                |  31 +++
 drivers/net/avf/avf.h                   | 187 ++++++++++++++
 drivers/net/avf/avf_ethdev.c            | 435 ++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c             | 304 ++++++++++++++++++++++
 drivers/net/avf/rte_pmd_avf_version.map |   4 +
 mk/rte.app.mk                           |   1 +
 8 files changed, 968 insertions(+)
 create mode 100644 drivers/net/avf/Makefile
 create mode 100644 drivers/net/avf/avf.h
 create mode 100644 drivers/net/avf/avf_ethdev.c
 create mode 100644 drivers/net/avf/avf_vchnl.c
 create mode 100644 drivers/net/avf/rte_pmd_avf_version.map

diff --git a/config/common_base b/config/common_base
index e74febe..ce4d9bb 100644
--- a/config/common_base
+++ b/config/common_base
@@ -226,6 +226,11 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
 CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 
 #
+# Compile burst-oriented AVF PMD driver
+#
+CONFIG_RTE_LIBRTE_AVF_PMD=n
+
+#
 # Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD
 #
 CONFIG_RTE_LIBRTE_MLX4_PMD=n
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index ef09b4e..7696b3f 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -38,6 +38,7 @@ endif
 
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += af_packet
 DIRS-$(CONFIG_RTE_LIBRTE_ARK_PMD) += ark
+DIRS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf
 DIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp
 DIRS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD) += bnx2x
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
new file mode 100644
index 0000000..fb520ea
--- /dev/null
+++ b/drivers/net/avf/Makefile
@@ -0,0 +1,31 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2017 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_avf.a
+
+CFLAGS += -O3
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
+LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_hash
+LDLIBS += -lrte_bus_pci
+
+EXPORT_MAP := rte_pmd_avf_version.map
+
+LIBABIVER := 1
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_adminq.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
new file mode 100644
index 0000000..4694cc5
--- /dev/null
+++ b/drivers/net/avf/avf.h
@@ -0,0 +1,187 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_ETHDEV_H_
+#define _AVF_ETHDEV_H_
+
+#include <rte_kvargs.h>
+
+#define AVF_AQ_LEN               32
+#define AVF_AQ_BUF_SZ            4096
+#define AVF_RESET_WAIT_CNT       50
+#define AVF_BUF_SIZE_MIN         1024
+#define AVF_FRAME_SIZE_MAX       9728
+#define AVF_QUEUE_BASE_ADDR_UNIT 128
+
+#define AVF_MAX_NUM_QUEUES       16
+/* Vlan table size */
+#define AVF_VLAN_TB_SIZE               (4096 / (CHAR_BIT * sizeof(uint32_t)))
+
+#define AVF_NUM_MACADDR_MAX      64
+
+#define AVF_DEFAULT_RX_PTHRESH      8
+#define AVF_DEFAULT_RX_HTHRESH      8
+#define AVF_DEFAULT_RX_WTHRESH      0
+
+#define AVF_DEFAULT_RX_FREE_THRESH  32
+
+#define AVF_DEFAULT_TX_PTHRESH      32
+#define AVF_DEFAULT_TX_HTHRESH      0
+#define AVF_DEFAULT_TX_WTHRESH      0
+
+#define AVF_DEFAULT_TX_FREE_THRESH  32
+#define AVF_DEFAULT_TX_RS_THRESH 32
+
+#define AVF_BASIC_OFFLOAD_CAPS  ( \
+	VF_BASE_MODE_OFFLOADS | \
+	VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | \
+	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
+
+#define AVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
+#define AVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
+
+/* Default queue interrupt throttling time in microseconds */
+#define AVF_ITR_INDEX_DEFAULT          0
+#define AVF_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */
+#define AVF_QUEUE_ITR_INTERVAL_MAX     8160 /* 8160 us */
+
+/* The overhead from MTU to max frame size.
+ * Considering QinQ packet, the VLAN tag needs to be counted twice.
+ */
+#define AVF_VLAN_TAG_SIZE               4
+#define AVF_ETH_OVERHEAD \
+	(ETHER_HDR_LEN + ETHER_CRC_LEN + AVF_VLAN_TAG_SIZE * 2)
+
+struct avf_adapter;
+struct avf_rx_queue;
+struct avf_tx_queue;
+
+/* Structure that defines a VSI, associated with an adapter. */
+struct avf_vsi {
+	struct avf_adapter *adapter; /* Backreference to associated adapter */
+	uint16_t vsi_id;
+	uint16_t nb_qps;         /* Number of queue pairs VSI can occupy */
+	uint16_t nb_used_qps;    /* Number of queue pairs VSI uses */
+	uint16_t max_macaddrs;   /* Maximum number of MAC addresses */
+	uint16_t base_vector;
+	uint16_t msix_intr;      /* The MSIX interrupt binds to VSI */
+};
+
+/* TODO: is it correct to assume the max number is 16? */
+#define AVF_MAX_MSIX_VECTORS   16
+
+/* Structure to store private data specific for VF instance. */
+struct avf_info {
+	uint16_t num_queue_pairs;
+	uint16_t max_pkt_len; /* Maximum packet length */
+	uint16_t mac_num;     /* Number of MAC addresses */
+	uint32_t vlan[AVF_VLAN_TB_SIZE]; /* VLAN bit map */
+	bool promisc_unicast_enabled;
+	bool promisc_multicast_enabled;
+
+	struct virtchnl_version_info virtchnl_version;
+	struct virtchnl_vf_resource *vf_res; /* VF resource */
+	struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
+
+	volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
+	uint32_t cmd_retval; /* return value of the cmd response from PF */
+	uint8_t *aq_resp; /* buffer to store the adminq response from PF */
+
+	/* Event from pf */
+	bool dev_closed;
+	bool link_up;
+	enum virtchnl_link_speed link_speed;
+
+	struct avf_vsi vsi;
+	bool vf_reset;
+	uint64_t flags;
+
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	uint16_t nb_msix;   /* number of MSI-X interrupts on Rx */
+	uint16_t msix_base; /* msix vector base */
+	/* queue bitmask for each vector */
+	uint16_t rxq_map[AVF_MAX_MSIX_VECTORS];
+};
+
+#define AVF_MAX_PKT_TYPE 256
+
+/* Structure to store private data for each VF instance. */
+struct avf_adapter {
+	struct avf_hw hw;
+	struct rte_eth_dev *eth_dev;
+	struct avf_info vf;
+};
+
+/* AVF_DEV_PRIVATE_TO */
+#define AVF_DEV_PRIVATE_TO_ADAPTER(adapter) \
+	((struct avf_adapter *)adapter)
+#define AVF_DEV_PRIVATE_TO_VF(adapter) \
+	(&((struct avf_adapter *)adapter)->vf)
+#define AVF_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct avf_adapter *)adapter)->hw)
+
+/* AVF_VSI_TO */
+#define AVF_VSI_TO_HW(vsi) \
+	(&(((struct avf_vsi *)vsi)->adapter->hw))
+#define AVF_VSI_TO_VF(vsi) \
+	(&(((struct avf_vsi *)vsi)->adapter->vf))
+#define AVF_VSI_TO_ETH_DEV(vsi) \
+	(((struct avf_vsi *)vsi)->adapter->eth_dev)
+
+static inline void
+avf_init_adminq_parameter(struct avf_hw *hw)
+{
+	hw->aq.num_arq_entries = AVF_AQ_LEN;
+	hw->aq.num_asq_entries = AVF_AQ_LEN;
+	hw->aq.arq_buf_size = AVF_AQ_BUF_SZ;
+	hw->aq.asq_buf_size = AVF_AQ_BUF_SZ;
+}
+
+static inline uint16_t
+avf_calc_itr_interval(int16_t interval)
+{
+	if (interval < 0 || interval > AVF_QUEUE_ITR_INTERVAL_MAX)
+		interval = AVF_QUEUE_ITR_INTERVAL_DEFAULT;
+
+	/* Convert to hardware count, as writing each 1 represents 2 us */
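+	/* e.g. the default 32 us interval becomes a register value of 16 */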
+	return interval / 2;
+}
+
+/* structure used for sending and checking response of virtchnl ops */
+struct avf_cmd_info {
+	enum virtchnl_ops ops;
+	uint8_t *in_args;       /* buffer for sending */
+	uint32_t in_args_size;  /* buffer size for sending */
+	uint8_t *out_buffer;    /* buffer for response */
+	uint32_t out_size;      /* buffer size for response */
+};
+
+/* Clear the current command. Only call this after
+ * _atomic_set_cmd() has succeeded.
+ */
+static inline void
+_clear_cmd(struct avf_info *vf)
+{
+	rte_wmb();
+	vf->pend_cmd = VIRTCHNL_OP_UNKNOWN;
+	vf->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
+}
+
+/* Check whether a command is already pending. If not, set the new command. */
+static inline int
+_atomic_set_cmd(struct avf_info *vf, enum virtchnl_ops ops)
+{
+	int ret = rte_atomic32_cmpset(&vf->pend_cmd, VIRTCHNL_OP_UNKNOWN, ops);
+
+	if (!ret)
+		PMD_DRV_LOG(ERR, "There is incomplete cmd %d", vf->pend_cmd);
+
+	return !ret;
+}
+
+int avf_check_api_version(struct avf_adapter *adapter);
+int avf_get_vf_resource(struct avf_adapter *adapter);
+void avf_handle_virtchnl_msg(struct rte_eth_dev *dev);
+#endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
new file mode 100644
index 0000000..3a64c88
--- /dev/null
+++ b/drivers/net/avf/avf_ethdev.c
@@ -0,0 +1,435 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <sys/queue.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+
+#include <rte_interrupts.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_ethdev_pci.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_adminq_cmd.h"
+#include "base/avf_type.h"
+
+#include "avf.h"
+
+int avf_logtype_init;
+int avf_logtype_driver;
+static const struct rte_pci_id pci_id_avf_map[] = {
+	{ RTE_PCI_DEVICE(AVF_INTEL_VENDOR_ID, AVF_DEV_ID_ADAPTIVE_VF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static const struct eth_dev_ops avf_eth_dev_ops = {
+};
+
+static int
+avf_check_vf_reset_done(struct avf_hw *hw)
+{
+	int i, reset;
+
+	for (i = 0; i < AVF_RESET_WAIT_CNT; i++) {
+		reset = AVF_READ_REG(hw, AVFGEN_RSTAT) &
+			AVFGEN_RSTAT_VFR_STATE_MASK;
+		reset = reset >> AVFGEN_RSTAT_VFR_STATE_SHIFT;
+		if (reset == VIRTCHNL_VFR_VFACTIVE ||
+		    reset == VIRTCHNL_VFR_COMPLETED)
+			break;
+		rte_delay_ms(20);
+	}
+
+	if (i >= AVF_RESET_WAIT_CNT)
+		return -1;
+
+	return 0;
+}
+
+static int
+avf_init_vf(struct rte_eth_dev *dev)
+{
+	int i, err, bufsz;
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	err = avf_set_mac_type(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
+		goto err;
+	}
+
+	err = avf_check_vf_reset_done(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "VF is still resetting");
+		goto err;
+	}
+
+	avf_init_adminq_parameter(hw);
+	err = avf_init_adminq(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "init_adminq failed: %d", err);
+		goto err;
+	}
+
+	vf->aq_resp = rte_zmalloc("vf_aq_resp", AVF_AQ_BUF_SZ, 0);
+	if (!vf->aq_resp) {
+		PMD_INIT_LOG(ERR, "unable to allocate vf_aq_resp memory");
+		goto err_aq;
+	}
+	if (avf_check_api_version(adapter) != 0) {
+		PMD_INIT_LOG(ERR, "check_api version failed");
+		goto err_api;
+	}
+
+	bufsz = sizeof(struct virtchnl_vf_resource) +
+		(AVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource));
+	vf->vf_res = rte_zmalloc("vf_res", bufsz, 0);
+	if (!vf->vf_res) {
+		PMD_INIT_LOG(ERR, "unable to allocate vf_res memory");
+		goto err_api;
+	}
+	if (avf_get_vf_resource(adapter) != 0) {
+		PMD_INIT_LOG(ERR, "avf_get_vf_config failed");
+		goto err_alloc;
+	}
+	/* Allocate memory for RSS info */
+	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		vf->rss_key = rte_zmalloc("rss_key",
+					  vf->vf_res->rss_key_size, 0);
+		if (!vf->rss_key) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_key memory");
+			goto err_rss;
+		}
+		vf->rss_lut = rte_zmalloc("rss_lut",
+					  vf->vf_res->rss_lut_size, 0);
+		if (!vf->rss_lut) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_lut memory");
+			goto err_rss;
+		}
+	}
+	return 0;
+err_rss:
+	rte_free(vf->rss_key);
+	rte_free(vf->rss_lut);
+err_alloc:
+	rte_free(vf->vf_res);
+	vf->vsi_res = NULL;
+err_api:
+	rte_free(vf->aq_resp);
+err_aq:
+	avf_shutdown_adminq(hw);
+err:
+	return -1;
+}
+
+/* Enable default admin queue interrupt setting */
+static inline void
+avf_enable_irq0(struct avf_hw *hw)
+{
+	/* Enable admin queue interrupt trigger */
+	AVF_WRITE_REG(hw, AVFINT_ICR0_ENA1, AVFINT_ICR0_ENA1_ADMINQ_MASK);
+
+	AVF_WRITE_REG(hw, AVFINT_DYN_CTL01, AVFINT_DYN_CTL01_INTENA_MASK |
+					    AVFINT_DYN_CTL01_ITR_INDX_MASK);
+
+	AVF_WRITE_FLUSH(hw);
+}
+
+static inline void
+avf_disable_irq0(struct avf_hw *hw)
+{
+	/* Disable all interrupt types */
+	AVF_WRITE_REG(hw, AVFINT_ICR0_ENA1, 0);
+	AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+		      AVFINT_DYN_CTL01_ITR_INDX_MASK);
+	AVF_WRITE_FLUSH(hw);
+}
+
+static void
+avf_dev_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	avf_disable_irq0(hw);
+
+	avf_handle_virtchnl_msg(dev);
+
+	avf_enable_irq0(hw);
+}
+
+static int
+avf_dev_init(struct rte_eth_dev *eth_dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* assign ops func pointer */
+	eth_dev->dev_ops = &avf_eth_dev_ops;
+
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	hw->bus.bus_id = pci_dev->addr.bus;
+	hw->bus.device = pci_dev->addr.devid;
+	hw->bus.func = pci_dev->addr.function;
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->back = AVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+	adapter->eth_dev = eth_dev;
+
+	if (avf_init_vf(eth_dev) != 0) {
+		PMD_INIT_LOG(ERR, "Init vf failed");
+		return -1;
+	}
+
+	/* copy mac addr */
+	eth_dev->data->mac_addrs = rte_zmalloc(
+					"avf_mac",
+					ETHER_ADDR_LEN * AVF_NUM_MACADDR_MAX,
+					0);
+	if (!eth_dev->data->mac_addrs) {
+		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to"
+			     " store MAC addresses",
+			     ETHER_ADDR_LEN * AVF_NUM_MACADDR_MAX);
+		return -ENOMEM;
+	}
+	/* If the MAC address is not configured by host,
+	 * generate a random one.
+	 */
+	if (!is_valid_assigned_ether_addr((struct ether_addr *)hw->mac.addr))
+		eth_random_addr(hw->mac.addr);
+	ether_addr_copy((struct ether_addr *)hw->mac.addr,
+			&eth_dev->data->mac_addrs[0]);
+
+	/* register callback func to eal lib */
+	rte_intr_callback_register(&pci_dev->intr_handle,
+				   avf_dev_interrupt_handler,
+				   (void *)eth_dev);
+
+	/* enable uio intr after callback register */
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	/* configure and enable device interrupt */
+	avf_enable_irq0(hw);
+
+	return 0;
+}
+
+static void
+avf_dev_close(struct rte_eth_dev *dev)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+
+	avf_shutdown_adminq(hw);
+	/* disable uio intr before callback unregister */
+	rte_intr_disable(intr_handle);
+
+	/* unregister callback func from eal lib */
+	rte_intr_callback_unregister(intr_handle,
+				     avf_dev_interrupt_handler, dev);
+	avf_disable_irq0(hw);
+}
+
+static int
+avf_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+	if (hw->adapter_stopped == 0)
+		avf_dev_close(dev);
+
+	rte_free(vf->vf_res);
+	vf->vsi_res = NULL;
+	vf->vf_res = NULL;
+
+	rte_free(vf->aq_resp);
+	vf->aq_resp = NULL;
+
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
+	if (vf->rss_lut) {
+		rte_free(vf->rss_lut);
+		vf->rss_lut = NULL;
+	}
+	if (vf->rss_key) {
+		rte_free(vf->rss_key);
+		vf->rss_key = NULL;
+	}
+
+	return 0;
+}
+
+static int eth_avf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+			     struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+		sizeof(struct avf_adapter), avf_dev_init);
+}
+
+static int eth_avf_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, avf_dev_uninit);
+}
+
+/* Adaptive virtual function driver struct */
+static struct rte_pci_driver rte_avf_pmd = {
+	.id_table = pci_id_avf_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA,
+	.probe = eth_avf_pci_probe,
+	.remove = eth_avf_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_avf, rte_avf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_avf, pci_id_avf_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_avf, "* igb_uio | vfio-pci");
+RTE_INIT(avf_init_log);
+static void
+avf_init_log(void)
+{
+	avf_logtype_init = rte_log_register("pmd.avf.init");
+	if (avf_logtype_init >= 0)
+		rte_log_set_level(avf_logtype_init, RTE_LOG_NOTICE);
+	avf_logtype_driver = rte_log_register("pmd.avf.driver");
+	if (avf_logtype_driver >= 0)
+		rte_log_set_level(avf_logtype_driver, RTE_LOG_NOTICE);
+}
+
+/* memory func for base code */
+enum avf_status_code
+avf_allocate_dma_mem_d(__rte_unused struct avf_hw *hw,
+		       struct avf_dma_mem *mem,
+		       u64 size,
+		       u32 alignment)
+{
+	const struct rte_memzone *mz = NULL;
+	char z_name[RTE_MEMZONE_NAMESIZE];
+
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	snprintf(z_name, sizeof(z_name), "avf_dma_%"PRIu64, rte_rand());
+	mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY, 0,
+					 alignment, RTE_PGSIZE_2M);
+	if (!mz)
+		return AVF_ERR_NO_MEMORY;
+
+	mem->size = size;
+	mem->va = mz->addr;
+	mem->pa = mz->phys_addr;
+	mem->zone = (const void *)mz;
+	PMD_DRV_LOG(DEBUG,
+		    "memzone %s allocated with physical address: %"PRIu64,
+		    mz->name, mem->pa);
+
+	return AVF_SUCCESS;
+}
+
+enum avf_status_code
+avf_free_dma_mem_d(__rte_unused struct avf_hw *hw,
+		   struct avf_dma_mem *mem)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	PMD_DRV_LOG(DEBUG,
+		    "memzone %s to be freed with physical address: %"PRIu64,
+		    ((const struct rte_memzone *)mem->zone)->name, mem->pa);
+	rte_memzone_free((const struct rte_memzone *)mem->zone);
+	mem->zone = NULL;
+	mem->va = NULL;
+	mem->pa = (u64)0;
+
+	return AVF_SUCCESS;
+}
+
+enum avf_status_code
+avf_allocate_virt_mem_d(__rte_unused struct avf_hw *hw,
+			struct avf_virt_mem *mem,
+			u32 size)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	mem->size = size;
+	mem->va = rte_zmalloc("avf", size, 0);
+
+	if (mem->va)
+		return AVF_SUCCESS;
+	else
+		return AVF_ERR_NO_MEMORY;
+}
+
+enum avf_status_code
+avf_free_virt_mem_d(__rte_unused struct avf_hw *hw,
+		    struct avf_virt_mem *mem)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	rte_free(mem->va);
+	mem->va = NULL;
+
+	return AVF_SUCCESS;
+}
+
+/* spinlock func for base code */
+void
+avf_init_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_init(&sp->spinlock);
+}
+
+void
+avf_acquire_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_lock(&sp->spinlock);
+}
+
+void
+avf_release_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_unlock(&sp->spinlock);
+}
+
+void
+avf_destroy_spinlock_d(__rte_unused struct avf_spinlock *sp)
+{
+}
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
new file mode 100644
index 0000000..ebbee31
--- /dev/null
+++ b/drivers/net/avf/avf_vchnl.c
@@ -0,0 +1,304 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+
+#include <rte_debug.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_dev.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_adminq_cmd.h"
+#include "base/avf_type.h"
+
+#include "avf.h"
+
+#define MAX_TRY_TIMES 200
+#define ASQ_DELAY_MS  10
+
+/* Read data in admin queue to get msg from pf driver */
+static enum avf_status_code
+avf_read_msg_from_pf(struct avf_adapter *adapter, uint16_t buf_len,
+		     uint8_t *buf)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_arq_event_info event;
+	enum virtchnl_ops opcode;
+	int ret;
+
+	event.buf_len = buf_len;
+	event.msg_buf = buf;
+	ret = avf_clean_arq_element(hw, &event, NULL);
+	/* Can't read any msg from adminQ */
+	if (ret) {
+		PMD_DRV_LOG(DEBUG, "Can't read msg from AQ");
+		return ret;
+	}
+
+	opcode = (enum virtchnl_ops)rte_le_to_cpu_32(event.desc.cookie_high);
+	vf->cmd_retval = (enum virtchnl_status_code)rte_le_to_cpu_32(
+			event.desc.cookie_low);
+
+	PMD_DRV_LOG(DEBUG, "AQ from pf carries opcode %u, retval %d",
+		    opcode, vf->cmd_retval);
+
+	if (opcode != vf->pend_cmd)
+		PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
+			    vf->pend_cmd, opcode);
+
+	return AVF_SUCCESS;
+}
+
+static int
+avf_execute_vf_cmd(struct avf_adapter *adapter, struct avf_cmd_info *args)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_arq_event_info event_info;
+	enum avf_status_code ret;
+	int err = 0;
+	int i = 0;
+
+	if (_atomic_set_cmd(vf, args->ops))
+		return -1;
+
+	ret = avf_aq_send_msg_to_pf(hw, args->ops, AVF_SUCCESS,
+				    args->in_args, args->in_args_size, NULL);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops);
+		_clear_cmd(vf);
+		return -1;
+	}
+
+	switch (args->ops) {
+	case VIRTCHNL_OP_RESET_VF:
+		/* no need to wait for response */
+		_clear_cmd(vf);
+		break;
+	case VIRTCHNL_OP_VERSION:
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		/* for init virtchnl ops, need to poll the response */
+		do {
+			ret = avf_read_msg_from_pf(adapter, args->out_size,
+						   args->out_buffer);
+			if (ret == AVF_SUCCESS)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+		} while (i++ < MAX_TRY_TIMES);
+		if (i >= MAX_TRY_TIMES ||
+		    vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -1;
+			PMD_DRV_LOG(ERR, "No response or return failure (%d)"
+				    " for cmd %d", vf->cmd_retval, args->ops);
+		}
+		_clear_cmd(vf);
+		break;
+
+	default:
+		/* For other virtchnl ops at runtime,
+		 * wait for the cmd done flag.
+		 */
+		do {
+			if (vf->pend_cmd == VIRTCHNL_OP_UNKNOWN)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+			/* If no msg is read, or a sys event is read, keep waiting */
+		} while (i++ < MAX_TRY_TIMES);
+		/* If no response is received, clear the command */
+		if (i >= MAX_TRY_TIMES  ||
+		    vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -1;
+			PMD_DRV_LOG(ERR, "No response or return failure (%d)"
+				    " for cmd %d", vf->cmd_retval, args->ops);
+			_clear_cmd(vf);
+		}
+		break;
+	}
+
+	return err;
+}
+
+void
+avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_arq_event_info info;
+	uint16_t pending, aq_opc;
+	enum virtchnl_ops msg_opc;
+	enum avf_status_code msg_ret;
+	int ret;
+
+	info.buf_len = AVF_AQ_BUF_SZ;
+	if (!vf->aq_resp) {
+		PMD_DRV_LOG(ERR, "Buffer for adminq resp should not be NULL");
+		return;
+	}
+	info.msg_buf = vf->aq_resp;
+
+	pending = 1;
+	while (pending) {
+		ret = avf_clean_arq_element(hw, &info, &pending);
+
+		if (ret != AVF_SUCCESS) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from AdminQ,"
+				    "ret: %d", ret);
+			break;
+		}
+		aq_opc = rte_le_to_cpu_16(info.desc.opcode);
+		/* For a message sent from PF to VF, the opcode is stored in
+		 * cookie_high of struct avf_aq_desc, while the return error
+		 * code is stored in cookie_low. Both are set by the PF driver.
+		 */
+		msg_opc = (enum virtchnl_ops)rte_le_to_cpu_32(
+						  info.desc.cookie_high);
+		msg_ret = (enum avf_status_code)rte_le_to_cpu_32(
+						  info.desc.cookie_low);
+		switch (aq_opc) {
+		case avf_aqc_opc_send_msg_to_vf:
+			if (msg_opc == VIRTCHNL_OP_EVENT) {
+				/* TODO */
+			} else {
+				/* the message read is the expected one */
+				if (msg_opc == vf->pend_cmd) {
+					vf->cmd_retval = msg_ret;
+					/* prevent compiler reordering */
+					rte_compiler_barrier();
+					_clear_cmd(vf);
+				} else
+					PMD_DRV_LOG(ERR, "command mismatch,"
+						    "expect %u, get %u",
+						    vf->pend_cmd, msg_opc);
+				PMD_DRV_LOG(DEBUG,
+					    "adminq response is received,"
+					    " opcode = %d", msg_opc);
+			}
+			break;
+		default:
+			PMD_DRV_LOG(ERR, "Request %u is not supported yet",
+				    aq_opc);
+			break;
+		}
+	}
+}
+
+#define VIRTCHNL_VERSION_MAJOR_START 1
+#define VIRTCHNL_VERSION_MINOR_START 1
+
+/* Check API version with sync wait until version read from admin queue */
+int
+avf_check_api_version(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_version_info version, *pver;
+	struct avf_cmd_info args;
+	int err;
+
+	version.major = VIRTCHNL_VERSION_MAJOR;
+	version.minor = VIRTCHNL_VERSION_MINOR;
+
+	args.ops = VIRTCHNL_OP_VERSION;
+	args.in_args = (uint8_t *)&version;
+	args.in_args_size = sizeof(version);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Fail to execute command of OP_VERSION");
+		return err;
+	}
+
+	pver = (struct virtchnl_version_info *)args.out_buffer;
+	vf->virtchnl_version = *pver;
+
+	if (vf->virtchnl_version.major < VIRTCHNL_VERSION_MAJOR_START ||
+	    (vf->virtchnl_version.major == VIRTCHNL_VERSION_MAJOR_START &&
+	     vf->virtchnl_version.minor < VIRTCHNL_VERSION_MINOR_START)) {
+		PMD_INIT_LOG(ERR, "VIRTCHNL API version should not be lower"
+			     " than (%u.%u) to support Adapative VF",
+			     VIRTCHNL_VERSION_MAJOR_START,
+			     VIRTCHNL_VERSION_MAJOR_START);
+		return -1;
+	} else if (vf->virtchnl_version.major > VIRTCHNL_VERSION_MAJOR ||
+		   (vf->virtchnl_version.major == VIRTCHNL_VERSION_MAJOR &&
+		    vf->virtchnl_version.minor > VIRTCHNL_VERSION_MINOR)) {
+		PMD_INIT_LOG(ERR, "PF/VF API version mismatch:(%u.%u)-(%u.%u)",
+			     vf->virtchnl_version.major,
+			     vf->virtchnl_version.minor,
+			     VIRTCHNL_VERSION_MAJOR,
+			     VIRTCHNL_VERSION_MINOR);
+		return -1;
+	}
+
+	PMD_DRV_LOG(DEBUG, "Peer is supported PF host");
+	return 0;
+}
+
+int
+avf_get_vf_resource(struct avf_adapter *adapter)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	uint32_t caps, len;
+	int err, i;
+
+	args.ops = VIRTCHNL_OP_GET_VF_RESOURCES;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	/* TODO: basic offload capabilities, need to
+	 * add advanced/optional offload capabilities
+	 */
+
+	caps = AVF_BASIC_OFFLOAD_CAPS;
+
+	args.in_args = (uint8_t *)&caps;
+	args.in_args_size = sizeof(caps);
+
+	err = avf_execute_vf_cmd(adapter, &args);
+
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+				 "OP_GET_VF_RESOURCE");
+		return -1;
+	}
+
+	len =  sizeof(struct virtchnl_vf_resource) +
+		      AVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource);
+
+	rte_memcpy(vf->vf_res, args.out_buffer,
+		   RTE_MIN(args.out_size, len));
+	/* parse VF config message back from PF */
+	avf_parse_hw_config(hw, vf->vf_res);
+	for (i = 0; i < vf->vf_res->num_vsis; i++) {
+		if (vf->vf_res->vsi_res[i].vsi_type == VIRTCHNL_VSI_SRIOV)
+			vf->vsi_res = &vf->vf_res->vsi_res[i];
+	}
+
+	if (!vf->vsi_res) {
+		PMD_INIT_LOG(ERR, "no LAN VSI found");
+		return -1;
+	}
+
+	vf->vsi.vsi_id = vf->vsi_res->vsi_id;
+	vf->vsi.nb_qps = vf->vsi_res->num_queue_pairs;
+	vf->vsi.adapter = adapter;
+
+	return 0;
+}
diff --git a/drivers/net/avf/rte_pmd_avf_version.map b/drivers/net/avf/rte_pmd_avf_version.map
new file mode 100644
index 0000000..179140f
--- /dev/null
+++ b/drivers/net/avf/rte_pmd_avf_version.map
@@ -0,0 +1,4 @@
+DPDK_18.02 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 6a6a745..78f23c5 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -119,6 +119,7 @@ _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK)  += -lrte_mempool_stack
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ARK_PMD)        += -lrte_pmd_ark
+_LDLIBS-$(CONFIG_RTE_LIBRTE_AVF_PMD)        += -lrte_pmd_avf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_AVP_PMD)        += -lrte_pmd_avp
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v3 03/15] net/avf: enable queue and device
  2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 01/15] net/avf/base: add base code for " Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 02/15] net/avf: initialization of " Wenzhuo Lu
@ 2018-01-04  5:27     ` Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 04/15] net/avf: enable basic Rx Tx func Wenzhuo Lu
                       ` (11 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-04  5:27 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Enable device and queue setup ops (a minimal usage sketch follows the list):

 - dev_configure
 - dev_start
 - dev_stop
 - dev_close
 - dev_infos_get
 - rx_queue_start
 - rx_queue_stop
 - tx_queue_start
 - tx_queue_stop
 - rx_queue_setup
 - rx_queue_release
 - tx_queue_setup
 - tx_queue_release
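
As referenced above, a minimal usage sketch (not part of this patch) of how
an application exercises these ops through the ethdev API; the port id,
descriptor counts and mbuf_pool are hypothetical and error handling is
omitted:

	static struct rte_eth_conf port_conf;	/* zero-initialized defaults */
	uint16_t port = 0;

	rte_eth_dev_configure(port, 1, 1, &port_conf);	/* -> dev_configure */
	rte_eth_rx_queue_setup(port, 0, 512, rte_socket_id(),
			       NULL, mbuf_pool);	/* -> rx_queue_setup */
	rte_eth_tx_queue_setup(port, 0, 512, rte_socket_id(),
			       NULL);			/* -> tx_queue_setup */
	rte_eth_dev_start(port);	/* -> dev_start, which starts the queues */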

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/avf/Makefile     |   1 +
 drivers/net/avf/avf.h        |  18 ++
 drivers/net/avf/avf_ethdev.c | 366 +++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.c   | 616 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h   | 160 +++++++++++
 drivers/net/avf/avf_vchnl.c  | 359 ++++++++++++++++++++++++-
 6 files changed, 1518 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/avf/avf_rxtx.c
 create mode 100644 drivers/net/avf/avf_rxtx.h

diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index fb520ea..f4f7414 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -27,5 +27,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_rxtx.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 4694cc5..22886d4 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -38,6 +38,13 @@
 	VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | \
 	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
 
+#define AVF_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 |         \
+	ETH_RSS_NONFRAG_IPV4_TCP |  \
+	ETH_RSS_NONFRAG_IPV4_UDP |  \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_NONFRAG_IPV4_OTHER)
+
 #define AVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
 #define AVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
 
@@ -184,4 +191,15 @@ struct avf_cmd_info {
 int avf_check_api_version(struct avf_adapter *adapter);
 int avf_get_vf_resource(struct avf_adapter *adapter);
 void avf_handle_virtchnl_msg(struct rte_eth_dev *dev);
+int avf_enable_vlan_strip(struct avf_adapter *adapter);
+int avf_disable_vlan_strip(struct avf_adapter *adapter);
+int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
+		     bool rx, bool on);
+int avf_enable_queues(struct avf_adapter *adapter);
+int avf_disable_queues(struct avf_adapter *adapter);
+int avf_configure_rss_lut(struct avf_adapter *adapter);
+int avf_configure_rss_key(struct avf_adapter *adapter);
+int avf_configure_queues(struct avf_adapter *adapter);
+int avf_config_irq_map(struct avf_adapter *adapter);
+void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 3a64c88..605c3c4 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -31,6 +31,14 @@
 #include "base/avf_type.h"
 
 #include "avf.h"
+#include "avf_rxtx.h"
+
+static int avf_dev_configure(struct rte_eth_dev *dev);
+static int avf_dev_start(struct rte_eth_dev *dev);
+static void avf_dev_stop(struct rte_eth_dev *dev);
+static void avf_dev_close(struct rte_eth_dev *dev);
+static void avf_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -40,9 +48,366 @@
 };
 
 static const struct eth_dev_ops avf_eth_dev_ops = {
+	.dev_configure              = avf_dev_configure,
+	.dev_start                  = avf_dev_start,
+	.dev_stop                   = avf_dev_stop,
+	.dev_close                  = avf_dev_close,
+	.dev_infos_get              = avf_dev_info_get,
+	.rx_queue_start             = avf_dev_rx_queue_start,
+	.rx_queue_stop              = avf_dev_rx_queue_stop,
+	.tx_queue_start             = avf_dev_tx_queue_start,
+	.tx_queue_stop              = avf_dev_tx_queue_stop,
+	.rx_queue_setup             = avf_dev_rx_queue_setup,
+	.rx_queue_release           = avf_dev_rx_queue_release,
+	.tx_queue_setup             = avf_dev_tx_queue_setup,
+	.tx_queue_release           = avf_dev_tx_queue_release,
 };
 
 static int
+avf_dev_configure(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(ad);
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+
+	/* Vlan stripping setting */
+	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_VLAN) {
+		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			avf_enable_vlan_strip(ad);
+		else
+			avf_disable_vlan_strip(ad);
+	}
+	return 0;
+}
+
+static int
+avf_init_rss(struct avf_adapter *adapter)
+{
+	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_eth_rss_conf *rss_conf;
+	uint8_t i, j, nb_q;
+	int ret;
+
+	rss_conf = &adapter->eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = RTE_MIN(adapter->eth_dev->data->nb_rx_queues,
+		       AVF_MAX_NUM_QUEUES);
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+	if (adapter->eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
+		/* set all lut items to default queue */
+		for (i = 0; i < vf->vf_res->rss_lut_size; i++)
+			vf->rss_lut[i] = 0;
+		ret = avf_configure_rss_lut(adapter);
+		return ret;
+	}
+
+	/* In AVF, RSS enablement is controlled by the PF driver. Setting it
+	 * based on rss_conf->rss_hf is not supported.
+	 */
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key) {
+		/* Calculate the default hash key */
+		for (i = 0; i <= vf->vf_res->rss_key_size; i++)
+			vf->rss_key[i] = (uint8_t)rte_rand();
+	} else
+		rte_memcpy(vf->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   vf->vf_res->rss_key_size));
+
+	/* init RSS LUT table */
+	for (i = 0; i < vf->vf_res->rss_lut_size; i++, j++) {
+		if (j >= nb_q)
+			j = 0;
+		vf->rss_lut[i] = j;
+	}
+	/* send virtchnnl ops to configure rss*/
+	ret = avf_configure_rss_lut(adapter);
+	if (ret)
+		return ret;
+	ret = avf_configure_rss_key(adapter);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int
+avf_init_rxq(struct rte_eth_dev *dev, struct avf_rx_queue *rxq)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_dev_data *dev_data = dev->data;
+	uint16_t buf_size, max_pkt_len, len;
+
+	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+
+	/* Calculate the maximum packet length allowed */
+	len = rxq->rx_buf_len * AVF_MAX_CHAINED_RX_BUFFERS;
+	max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	/* Check if the jumbo frame and maximum packet length are set
+	 * correctly.
+	 */
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (max_pkt_len <= ETHER_MAX_LEN ||
+		    max_pkt_len > AVF_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is enabled",
+				    (uint32_t)ETHER_MAX_LEN,
+				    (uint32_t)AVF_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (max_pkt_len < ETHER_MIN_LEN ||
+		    max_pkt_len > ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)ETHER_MIN_LEN,
+				    (uint32_t)ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	    (rxq->max_pkt_len + 2 * AVF_VLAN_TAG_SIZE) > buf_size) {
+		dev_data->scattered_rx = 1;
+	}
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	AVF_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
+static int
+avf_init_queues(struct rte_eth_dev *dev)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_rx_queue **rxq =
+		(struct avf_rx_queue **)dev->data->rx_queues;
+	struct avf_tx_queue **txq =
+		(struct avf_tx_queue **)dev->data->tx_queues;
+	int i, ret = AVF_SUCCESS;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!rxq[i] || !rxq[i]->q_set)
+			continue;
+		ret = avf_init_rxq(dev, rxq[i]);
+		if (ret != AVF_SUCCESS)
+			break;
+	}
+	/* TODO: set rx/tx function to vector/scatter/single-segment
+	 * according to parameters
+	 */
+	return ret;
+}
+
+static int
+avf_start_queues(struct rte_eth_dev *dev)
+{
+	struct avf_rx_queue *rxq;
+	struct avf_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq->tx_deferred_start)
+			continue;
+		if (avf_dev_tx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
+			return -1;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq->rx_deferred_start)
+			continue;
+		if (avf_dev_rx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+avf_dev_start(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
+	uint16_t interval;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	hw->adapter_stopped = 0;
+
+	vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+	vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
+				      dev->data->nb_tx_queues);
+
+	/* TODO: Rx interrupt */
+
+	if (avf_init_queues(dev) != 0) {
+		PMD_DRV_LOG(ERR, "failed to do Queue init");
+		return -1;
+	}
+
+	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		if (avf_init_rss(adapter) != 0) {
+			PMD_DRV_LOG(ERR, "configure rss failed");
+			goto err_rss;
+		}
+	}
+
+	if (avf_configure_queues(adapter) != 0) {
+		PMD_DRV_LOG(ERR, "configure queues failed");
+		goto err_queue;
+	}
+
+	/* Map interrupt for writeback */
+	vf->nb_msix = 1;
+	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+		/* If WB_ON_ITR is supported, enable it */
+		vf->msix_base = AVF_RX_VEC_START;
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
+			      AVFINT_DYN_CTLN1_ITR_INDX_MASK |
+			      AVFINT_DYN_CTLN1_WB_ON_ITR_MASK);
+	} else {
+		/* If the WB_ON_ITR offload flag is not set, an interrupt is
+		 * needed to trigger descriptor write-back.
+		 */
+		vf->msix_base = AVF_MISC_VEC_ID;
+
+		/* set ITR to max */
+		interval = avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+			      AVFINT_DYN_CTL01_INTENA_MASK |
+			      (AVF_ITR_INDEX_DEFAULT <<
+			       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+			      (interval << AVFINT_DYN_CTL01_INTERVAL_SHIFT));
+	}
+	AVF_WRITE_FLUSH(hw);
+	/* map all queues to the same interrupt */
+	for (i = 0; i < dev->data->nb_rx_queues; i++)
+		vf->rxq_map[0] |= 1 << i;
+	if (avf_config_irq_map(adapter)) {
+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+		goto err_queue;
+	}
+
+	/* Set all mac addrs */
+	avf_add_del_all_mac_addr(adapter, TRUE);
+
+	if (avf_start_queues(dev) != 0) {
+		PMD_DRV_LOG(ERR, "enable queues failed");
+		goto err_mac;
+	}
+
+	/* TODO: enable interrupt for RX interrupt */
+	return 0;
+
+err_mac:
+	avf_add_del_all_mac_addr(adapter, FALSE);
+err_queue:
+err_rss:
+	return -1;
+}
+
+static void
+avf_dev_stop(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (hw->adapter_stopped == 1)
+		return;
+
+	avf_stop_queues(dev);
+
+	/*TODO: Disable the interrupt for Rx*/
+
+	/* TODO: Rx interrupt vector mapping free */
+
+	/* remove all mac addrs */
+	avf_add_del_all_mac_addr(adapter, FALSE);
+	hw->adapter_stopped = 1;
+}
+
+static void
+avf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	memset(dev_info, 0, sizeof(*dev_info));
+	dev_info->pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	dev_info->max_rx_queues = vf->vsi_res->num_queue_pairs;
+	dev_info->max_tx_queues = vf->vsi_res->num_queue_pairs;
+	dev_info->min_rx_bufsize = AVF_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = AVF_FRAME_SIZE_MAX;
+	dev_info->hash_key_size = vf->vf_res->rss_key_size;
+	dev_info->reta_size = vf->vf_res->rss_lut_size;
+	dev_info->flow_type_rss_offloads = AVF_RSS_OFFLOAD_ALL;
+	dev_info->max_mac_addrs = AVF_NUM_MACADDR_MAX;
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = AVF_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = AVF_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = AVF_DEFAULT_TX_RS_THRESH,
+		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
+				ETH_TXQ_FLAGS_NOOFFLOADS,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = AVF_MAX_RING_DESC,
+		.nb_min = AVF_MIN_RING_DESC,
+		.nb_align = AVF_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = AVF_MAX_RING_DESC,
+		.nb_min = AVF_MIN_RING_DESC,
+		.nb_align = AVF_ALIGN_RING_DESC,
+	};
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
@@ -250,6 +615,7 @@
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
+	avf_dev_stop(dev);
 	avf_shutdown_adminq(hw);
 	/* disable uio intr before callback unregister */
 	rte_intr_disable(intr_handle);
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
new file mode 100644
index 0000000..2d4fb4c
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx.c
@@ -0,0 +1,616 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+
+#include <rte_string_fns.h>
+#include <rte_memzone.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_tcp.h>
+#include <rte_sctp.h>
+#include <rte_udp.h>
+#include <rte_ip.h>
+#include <rte_net.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_type.h"
+#include "avf.h"
+#include "avf_rxtx.h"
+
+static inline int
+check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+{
+	/* The following constraints must be satisfied:
+	 *   thresh >= AVF_RX_MAX_BURST
+	 *   thresh < rxq->nb_rx_desc
+	 *   (rxq->nb_rx_desc % thresh) == 0
+	 */
+	if (thresh < AVF_RX_MAX_BURST ||
+	    thresh >= nb_desc ||
+	    (nb_desc % thresh != 0)) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u, "
+			     "greater than or equal to %u, "
+			     "and a divisor of %u",
+			     thresh, nb_desc, AVF_RX_MAX_BURST, nb_desc);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static inline int
+check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+		uint16_t tx_free_thresh)
+{
+	/* TX descriptors will have their RS bit set after tx_rs_thresh
+	 * descriptors have been used. The TX descriptor ring will be cleaned
+	 * after tx_free_thresh descriptors are used or if the number of
+	 * descriptors required to transmit a packet is greater than the
+	 * number of free TX descriptors.
+	 *
+	 * The following constraints must be satisfied:
+	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
+	 *  - tx_free_thresh must be less than the size of the ring minus 3.
+	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+	 *  - tx_rs_thresh must be a divisor of the ring size.
+	 *
+	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+	 * race condition, hence the maximum threshold constraints. When set
+	 * to zero use default values.
+	 */
+	if (tx_rs_thresh >= (nb_desc - 2)) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
+			     "number of TX descriptors (%u) minus 2",
+			     tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be less than the "
+			     "number of TX descriptors (%u) minus 3.",
+			     tx_free_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > tx_free_thresh) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
+			     "equal to tx_free_thresh (%u).",
+			     tx_rs_thresh, tx_free_thresh);
+		return -EINVAL;
+	}
+	if ((nb_desc % tx_rs_thresh) != 0) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
+			     "number of TX descriptors (%u).",
+			     tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline void
+reset_rx_queue(struct avf_rx_queue *rxq)
+{
+	uint16_t len, i;
+
+	if (!rxq)
+		return;
+
+	len = rxq->nb_rx_desc + AVF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(union avf_rx_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
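+	/* The sw_ring entries beyond nb_rx_desc point at the dummy mbuf;
+	 * they are only read ahead by the bulk-allocation Rx path and are
+	 * never handed to the application.
+	 */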
+	for (i = 0; i < AVF_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+static inline void
+reset_tx_queue(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *txe;
+	uint16_t i, prev, size;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct avf_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
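+	/* Mark every descriptor as done and link the SW ring entries into
+	 * a circular list.
+	 */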
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i].cmd_type_offset_bsz =
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf =  NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_used = 0;
+
+	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+	txq->nb_free = txq->nb_tx_desc - 1;
+
+	txq->next_dd = txq->rs_thresh - 1;
+	txq->next_rs = txq->rs_thresh - 1;
+}
+
+static int
+alloc_rxq_mbufs(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
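+	/* Populate every descriptor in the ring with a freshly allocated
+	 * mbuf so the hardware can start receiving immediately.
+	 */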
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &rxq->rx_ring[i];
+		rxd->read.pkt_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
+#ifndef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+		rxd->read.rsvd1 = 0;
+		rxd->read.rsvd2 = 0;
+#endif
+
+		rxq->sw_ring[i] = mbuf;
+	}
+
+	return 0;
+}
+
+static inline void
+release_rxq_mbufs(struct avf_rx_queue *rxq)
+{
+	struct rte_mbuf *mbuf;
+	uint16_t i;
+
+	if (!rxq->sw_ring)
+		return;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i]) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+			rxq->sw_ring[i] = NULL;
+		}
+	}
+}
+
+static inline void
+release_txq_mbufs(struct avf_tx_queue *txq)
+{
+	uint16_t i;
+
+	if (!txq || !txq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		if (txq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+	}
+}
+
+int
+avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		       uint16_t nb_desc, unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *mp)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+	uint16_t len, i;
+	uint16_t rx_free_thresh;
+	uint16_t base, bsf, tc_mapping;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (nb_desc % AVF_ALIGN_RING_DESC != 0 ||
+	    nb_desc > AVF_MAX_RING_DESC ||
+	    nb_desc < AVF_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/* Check free threshold */
+	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+			 AVF_DEFAULT_RX_FREE_THRESH :
+			 rx_conf->rx_free_thresh;
+	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx]) {
+		avf_dev_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket("avf rxq",
+				 sizeof(struct avf_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!rxq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "rx queue data structure");
+		return -ENOMEM;
+	}
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->queue_id = queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->crc_len = 0; /* crc stripping by default */
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->rx_hdr_len = 0;
+
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = RTE_ALIGN(len, (1 << AVF_RXQ_CTX_DBUFF_SHIFT));
+
+	/* Allocate the software ring. */
+	len = nb_desc + AVF_RX_MAX_BURST;
+	rxq->sw_ring =
+		rte_zmalloc_socket("avf rx sw ring",
+				   sizeof(struct rte_mbuf *) * len,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!rxq->sw_ring) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+
+	/* Allocate the maximum number of RX ring hardware descriptors,
+	 * with a little extra to support bulk allocation.
+	 */
+	len = AVF_MAX_RING_DESC + AVF_RX_MAX_BURST;
+	ring_size = RTE_ALIGN(len * sizeof(union avf_rx_desc),
+			      AVF_DMA_MEM_ALIGN);
+	mz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
+				      ring_size, AVF_RING_BASE_ALIGN,
+				      socket_id);
+	if (!mz) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
+		rte_free(rxq->sw_ring);
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
+	rxq->rx_ring_phys_addr = mz->iova;
+	rxq->rx_ring = (union avf_rx_desc *)mz->addr;
+
+	rxq->mz = mz;
+	reset_rx_queue(rxq);
+	rxq->q_set = TRUE;
+	dev->data->rx_queues[queue_idx] = rxq;
+	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
+
+	return 0;
+}
+
+int
+avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	uint16_t i, base, bsf, tc_mapping;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (nb_desc % AVF_ALIGN_RING_DESC != 0 ||
+	    nb_desc > AVF_MAX_RING_DESC ||
+	    nb_desc < AVF_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is "
+			    "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh) ?
+		tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH);
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+		tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
+	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx]) {
+		avf_dev_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("avf txq",
+				 sizeof(struct avf_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!txq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "tx queue structure");
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->rs_thresh = tx_rs_thresh;
+	txq->free_thresh = tx_free_thresh;
+	txq->queue_id = queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->txq_flags = tx_conf->txq_flags;
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	/* Allocate software ring */
+	txq->sw_ring =
+		rte_zmalloc_socket("avf tx sw ring",
+				   sizeof(struct avf_tx_entry) * nb_desc,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!txq->sw_ring) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		rte_free(txq);
+		return -ENOMEM;
+	}
+
+	/* Allocate TX hardware ring descriptors. */
+	ring_size = sizeof(struct avf_tx_desc) * AVF_MAX_RING_DESC;
+	ring_size = RTE_ALIGN(ring_size, AVF_DMA_MEM_ALIGN);
+	mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+				      ring_size, AVF_RING_BASE_ALIGN,
+				      socket_id);
+	if (!mz) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+		rte_free(txq->sw_ring);
+		rte_free(txq);
+		return -ENOMEM;
+	}
+	txq->tx_ring_phys_addr = mz->iova;
+	txq->tx_ring = (struct avf_tx_desc *)mz->addr;
+
+	txq->mz = mz;
+	reset_tx_queue(txq);
+	txq->q_set = TRUE;
+	dev->data->tx_queues[queue_idx] = txq;
+	txq->qtx_tail = hw->hw_addr + AVF_QTX_TAIL1(queue_idx);
+
+	return 0;
+}
+
+int
+avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int err = 0;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	err = alloc_rxq_mbufs(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+		return err;
+	}
+
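+	/* Make sure all descriptor and mbuf writes are visible before
+	 * updating the tail register.
+	 */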
+	rte_wmb();
+
+	/* Init the RX tail register. */
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	AVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = avf_switch_queue(adapter, rx_queue_id, TRUE, TRUE);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	else
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+int
+avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int err = 0;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
+	AVF_PCI_REG_WRITE(txq->qtx_tail, 0);
+	AVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = avf_switch_queue(adapter, tx_queue_id, FALSE, TRUE);
+
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	else
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+int
+avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int err;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = avf_switch_queue(adapter, rx_queue_id, TRUE, FALSE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	release_rxq_mbufs(rxq);
+	reset_rx_queue(rxq);
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int err;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = avf_switch_queue(adapter, tx_queue_id, FALSE, FALSE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	release_txq_mbufs(txq);
+	reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+void
+avf_dev_rx_queue_release(void *rxq)
+{
+	struct avf_rx_queue *q = (struct avf_rx_queue *)rxq;
+
+	if (!q)
+		return;
+
+	release_rxq_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+avf_dev_tx_queue_release(void *txq)
+{
+	struct avf_tx_queue *q = (struct avf_tx_queue *)txq;
+
+	if (!q)
+		return;
+
+	release_txq_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+avf_stop_queues(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	struct avf_tx_queue *txq;
+	int ret, i;
+
+	/* Stop All queues */
+	ret = avf_disable_queues(adapter);
+	if (ret)
+		PMD_DRV_LOG(WARNING, "Fail to stop queues");
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		release_txq_mbufs(txq);
+		reset_tx_queue(txq);
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		release_rxq_mbufs(rxq);
+		reset_rx_queue(rxq);
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
new file mode 100644
index 0000000..e227cd1
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx.h
@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_RXTX_H_
+#define _AVF_RXTX_H_
+
+/* The ring length (QLEN) must be a whole multiple of 32 descriptors. */
+#define AVF_ALIGN_RING_DESC      32
+#define AVF_MIN_RING_DESC        64
+#define AVF_MAX_RING_DESC        4096
+#define AVF_DMA_MEM_ALIGN        4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define AVF_RING_BASE_ALIGN      128
+
+/* used for Rx Bulk Allocate */
+#define AVF_RX_MAX_BURST         32
+
+#define DEFAULT_TX_RS_THRESH     32
+#define DEFAULT_TX_FREE_THRESH   32
+
+/* HW desc structure, both 16-byte and 32-byte types are supported */
+#ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+#define avf_rx_desc avf_16byte_rx_desc
+#else
+#define avf_rx_desc avf_32byte_rx_desc
+#endif
+
+/* Structure associated with each Rx queue. */
+struct avf_rx_queue {
+	struct rte_mempool *mp;       /* mbuf pool to populate Rx ring */
+	const struct rte_memzone *mz; /* memzone for Rx ring */
+	volatile union avf_rx_desc *rx_ring; /* Rx ring virtual address */
+	uint64_t rx_ring_phys_addr;   /* Rx ring DMA address */
+	struct rte_mbuf **sw_ring;     /* address of SW ring */
+	uint16_t nb_rx_desc;          /* ring length */
+	uint16_t rx_tail;             /* current value of tail */
+	volatile uint8_t *qrx_tail;   /* register address of tail */
+	uint16_t rx_free_thresh;      /* max free RX desc to hold */
+	uint16_t nb_rx_hold;          /* number of held free RX desc */
+	struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
+	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
+	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
+
+	uint16_t port_id;       /* device port ID */
+	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
+	uint16_t queue_id;      /* Rx queue index */
+	uint16_t rx_buf_len;    /* The packet buffer size */
+	uint16_t rx_hdr_len;    /* The header buffer size */
+	uint16_t max_pkt_len;   /* Maximum packet length */
+
+	bool q_set;             /* if rx queue has been configured */
+	bool rx_deferred_start; /* don't start this queue in dev start */
+};
+
+struct avf_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint16_t next_id;
+	uint16_t last_id;
+};
+
+/* Structure associated with each TX queue. */
+struct avf_tx_queue {
+	const struct rte_memzone *mz;  /* memzone for Tx ring */
+	volatile struct avf_tx_desc *tx_ring; /* Tx ring virtual address */
+	uint64_t tx_ring_phys_addr;    /* Tx ring DMA address */
+	struct avf_tx_entry *sw_ring;  /* address array of SW ring */
+	uint16_t nb_tx_desc;           /* ring length */
+	uint16_t tx_tail;              /* current value of tail */
+	volatile uint8_t *qtx_tail;    /* register address of tail */
+	/* number of used desc since RS bit set */
+	uint16_t nb_used;
+	uint16_t nb_free;
+	uint16_t last_desc_cleaned;    /* last descriptor that has been cleaned */
+	uint16_t free_thresh;
+	uint16_t rs_thresh;
+
+	uint16_t port_id;
+	uint16_t queue_id;
+	uint32_t txq_flags;
+	uint16_t next_dd;              /* next descriptor to check DD, for VPMD */
+	uint16_t next_rs;              /* next descriptor to set RS, for VPMD */
+
+	bool q_set;                    /* if tx queue has been configured */
+	bool tx_deferred_start;        /* don't start this queue in dev start */
+};
+
+int avf_dev_rx_queue_setup(struct rte_eth_dev *dev,
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_rxconf *rx_conf,
+			   struct rte_mempool *mp);
+
+int avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void avf_dev_rx_queue_release(void *rxq);
+
+int avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_txconf *tx_conf);
+int avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void avf_dev_tx_queue_release(void *txq);
+void avf_stop_queues(struct rte_eth_dev *dev);
+
+static inline
+void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
+			    const void *desc,
+			    uint16_t rx_id)
+{
+#ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+	const union avf_16byte_rx_desc *rx_desc = desc;
+
+	printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64"\n",
+	       rxq->queue_id, rx_id, rx_desc->read.pkt_addr,
+	       rx_desc->read.hdr_addr);
+#else
+	const union avf_32byte_rx_desc *rx_desc = desc;
+
+	printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64
+	       " QW2: 0x%016"PRIx64" QW3: 0x%016"PRIx64"\n", rxq->queue_id,
+	       rx_id, rx_desc->read.pkt_addr, rx_desc->read.hdr_addr,
+	       rx_desc->read.rsvd1, rx_desc->read.rsvd2);
+#endif
+}
+
+/* All TX descriptor formats are 16 bytes, so any of them can be
+ * used to print the two qwords.
+ */
+static inline
+void avf_dump_tx_descriptor(const struct avf_tx_queue *txq,
+			    const void *desc, uint16_t tx_id)
+{
+	const char *name;
+	const struct avf_tx_desc *tx_desc = desc;
+	enum avf_tx_desc_dtype_value type;
+
+	type = (enum avf_tx_desc_dtype_value)rte_le_to_cpu_64(
+		tx_desc->cmd_type_offset_bsz &
+		rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK));
+	switch (type) {
+	case AVF_TX_DESC_DTYPE_DATA:
+		name = "Tx_data_desc";
+		break;
+	case AVF_TX_DESC_DTYPE_CONTEXT:
+		name = "Tx_context_desc";
+		break;
+	default:
+		name = "unknown_desc";
+		break;
+	}
+
+	printf("Queue %d %s %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64"\n",
+	       txq->queue_id, name, tx_id, tx_desc->buffer_addr,
+	       tx_desc->cmd_type_offset_bsz);
+}
+#endif /* _AVF_RXTX_H_ */
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index ebbee31..55a425a 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -25,6 +25,7 @@
 #include "base/avf_type.h"
 
 #include "avf.h"
+#include "avf_rxtx.h"
 
 #define MAX_TRY_TIMES 200
 #define ASQ_DELAY_MS  10
@@ -196,6 +197,48 @@
 	}
 }
 
+int
+avf_enable_vlan_strip(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	int ret;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_ENABLE_VLAN_STRIPPING;
+	args.in_args = NULL;
+	args.in_args_size = 0;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	ret = avf_execute_vf_cmd(adapter, &args);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " OP_ENABLE_VLAN_STRIPPING");
+
+	return ret;
+}
+
+int
+avf_disable_vlan_strip(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	int ret;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_DISABLE_VLAN_STRIPPING;
+	args.in_args = NULL;
+	args.in_args_size = 0;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	ret = avf_execute_vf_cmd(adapter, &args);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " OP_DISABLE_VLAN_STRIPPING");
+
+	return ret;
+}
+
 #define VIRTCHNL_VERSION_MAJOR_START 1
 #define VIRTCHNL_VERSION_MINOR_START 1
 
@@ -274,8 +317,8 @@
 	err = avf_execute_vf_cmd(adapter, &args);
 
 	if (err) {
-		PMD_DRV_LOG(ERR, "Failed to execute command of "
-				 "OP_GET_VF_RESOURCE");
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_GET_VF_RESOURCE");
 		return -1;
 	}
 
@@ -302,3 +345,315 @@
 
 	return 0;
 }
+
+int
+avf_enable_queues(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+
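+	/* BIT(n) - 1 builds a bitmask with the n lowest bits set, i.e. it
+	 * selects all configured Rx/Tx queues.
+	 */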
+	queue_select.rx_queues = BIT(adapter->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(adapter->eth_dev->data->nb_tx_queues) - 1;
+
+	args.ops = VIRTCHNL_OP_ENABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_ENABLE_QUEUES");
+		return err;
+	}
+	return 0;
+}
+
+int
+avf_disable_queues(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+
+	queue_select.rx_queues = BIT(adapter->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(adapter->eth_dev->data->nb_tx_queues) - 1;
+
+	args.ops = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_DISABLE_QUEUES");
+		return err;
+	}
+	return 0;
+}
+
+int
+avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
+		 bool rx, bool on)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+	if (rx)
+		queue_select.rx_queues |= 1 << qid;
+	else
+		queue_select.tx_queues |= 1 << qid;
+
+	if (on)
+		args.ops = VIRTCHNL_OP_ENABLE_QUEUES;
+	else
+		args.ops = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of %s",
+			    on ? "OP_ENABLE_QUEUES" : "OP_DISABLE_QUEUES");
+	return err;
+}
+
+int
+avf_configure_rss_lut(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_rss_lut *rss_lut;
+	struct avf_cmd_info args;
+	int len, err = 0;
+
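+	/* The message carries rss_lut_size LUT entries; subtract 1 because
+	 * virtchnl_rss_lut already reserves one LUT byte in its definition.
+	 */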
+	len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (!rss_lut)
+		return -ENOMEM;
+
+	rss_lut->vsi_id = vf->vsi_res->vsi_id;
+	rss_lut->lut_entries = vf->vf_res->rss_lut_size;
+	rte_memcpy(rss_lut->lut, vf->rss_lut, vf->vf_res->rss_lut_size);
+
+	args.ops = VIRTCHNL_OP_CONFIG_RSS_LUT;
+	args.in_args = (u8 *)rss_lut;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_CONFIG_RSS_LUT");
+
+	rte_free(rss_lut);
+	return err;
+}
+
+int
+avf_configure_rss_key(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_rss_key *rss_key;
+	struct avf_cmd_info args;
+	int len, err = 0;
+
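+	/* Same sizing scheme as the LUT message: virtchnl_rss_key already
+	 * reserves one key byte, hence the "- 1".
+	 */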
+	len = sizeof(*rss_key) + vf->vf_res->rss_key_size - 1;
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (!rss_key)
+		return -ENOMEM;
+
+	rss_key->vsi_id = vf->vsi_res->vsi_id;
+	rss_key->key_len = vf->vf_res->rss_key_size;
+	rte_memcpy(rss_key->key, vf->rss_key, vf->vf_res->rss_key_size);
+
+	args.ops = VIRTCHNL_OP_CONFIG_RSS_KEY;
+	args.in_args = (u8 *)rss_key;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_CONFIG_RSS_KEY");
+
+	rte_free(rss_key);
+	return err;
+}
+
+int
+avf_configure_queues(struct avf_adapter *adapter)
+{
+	struct avf_rx_queue **rxq =
+		(struct avf_rx_queue **)adapter->eth_dev->data->rx_queues;
+	struct avf_tx_queue **txq =
+		(struct avf_tx_queue **)adapter->eth_dev->data->tx_queues;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_vsi_queue_config_info *vc_config;
+	struct virtchnl_queue_pair_info *vc_qp;
+	struct avf_cmd_info args;
+	uint16_t i, size;
+	int err;
+
+	size = sizeof(*vc_config) +
+	       sizeof(vc_config->qpair[0]) * vf->num_queue_pairs;
+	vc_config = rte_zmalloc("cfg_queue", size, 0);
+	if (!vc_config)
+		return -ENOMEM;
+
+	vc_config->vsi_id = vf->vsi_res->vsi_id;
+	vc_config->num_queue_pairs = vf->num_queue_pairs;
+
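+	/* Queue pairs are always described up to num_queue_pairs; pairs
+	 * beyond the configured Rx/Tx queue counts keep zeroed ring
+	 * parameters.
+	 */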
+	for (i = 0, vc_qp = vc_config->qpair;
+	     i < vf->num_queue_pairs;
+	     i++, vc_qp++) {
+		vc_qp->txq.vsi_id = vf->vsi_res->vsi_id;
+		vc_qp->txq.queue_id = i;
+		/* Virtchnl configures queues in pairs */
+		if (i < adapter->eth_dev->data->nb_tx_queues) {
+			vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
+			vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_phys_addr;
+		}
+		vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
+		vc_qp->rxq.queue_id = i;
+		vc_qp->rxq.max_pkt_size = vf->max_pkt_len;
+		/* Virtchnl configures queues in pairs */
+		if (i < adapter->eth_dev->data->nb_rx_queues) {
+			vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
+			vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_phys_addr;
+			vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
+		}
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
+	args.in_args = (uint8_t *)vc_config;
+	args.in_args_size = size;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " VIRTCHNL_OP_CONFIG_VSI_QUEUES");
+
+	rte_free(vc_config);
+	return err;
+}
+
+int
+avf_config_irq_map(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_irq_map_info *map_info;
+	struct virtchnl_vector_map *vecmap;
+	struct avf_cmd_info args;
+	uint32_t vector_id;
+	int len, i, err;
+
+	len = sizeof(struct virtchnl_irq_map_info) +
+	      sizeof(struct virtchnl_vector_map) * vf->nb_msix;
+
+	map_info = rte_zmalloc("map_info", len, 0);
+	if (!map_info)
+		return -ENOMEM;
+
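+	/* One virtchnl_vector_map entry per MSI-X vector; rxq_map is the
+	 * bitmask of Rx queues bound to that vector.
+	 */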
+	map_info->num_vectors = vf->nb_msix;
+	for (i = 0; i < vf->nb_msix; i++) {
+		vecmap = &map_info->vecmap[i];
+		vecmap->vsi_id = vf->vsi_res->vsi_id;
+		vecmap->rxitr_idx = AVF_ITR_INDEX_DEFAULT;
+		vecmap->vector_id = vf->msix_base + i;
+		vecmap->txq_map = 0;
+		vecmap->rxq_map = vf->rxq_map[vf->msix_base + i];
+	}
+
+	args.ops = VIRTCHNL_OP_CONFIG_IRQ_MAP;
+	args.in_args = (u8 *)map_info;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
+
+	rte_free(map_info);
+	return err;
+}
+
+void
+avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct ether_addr *addr;
+	struct avf_cmd_info args;
+	int len, err, i, j;
+	int next_begin = 0;
+	int begin = 0;
+
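+	/* Program the MAC addresses in chunks so that every virtchnl
+	 * message stays within AVF_AQ_BUF_SZ.
+	 */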
+	do {
+		j = 0;
+		len = sizeof(struct virtchnl_ether_addr_list);
+		for (i = begin; i < AVF_NUM_MACADDR_MAX; i++, next_begin++) {
+			addr = &adapter->eth_dev->data->mac_addrs[i];
+			if (is_zero_ether_addr(addr))
+				continue;
+			len += sizeof(struct virtchnl_ether_addr);
+			if (len >= AVF_AQ_BUF_SZ) {
+				next_begin = i + 1;
+				break;
+			}
+		}
+
+		list = rte_zmalloc("avf_del_mac_buffer", len, 0);
+		if (!list) {
+			PMD_DRV_LOG(ERR, "fail to allocate memory");
+			return;
+		}
+
+		for (i = begin; i < next_begin; i++) {
+			addr = &adapter->eth_dev->data->mac_addrs[i];
+			if (is_zero_ether_addr(addr))
+				continue;
+			rte_memcpy(list->list[j].addr, addr->addr_bytes,
+				   sizeof(addr->addr_bytes));
+			PMD_DRV_LOG(DEBUG, "add/rm mac:%x:%x:%x:%x:%x:%x",
+				    addr->addr_bytes[0], addr->addr_bytes[1],
+				    addr->addr_bytes[2], addr->addr_bytes[3],
+				    addr->addr_bytes[4], addr->addr_bytes[5]);
+			j++;
+		}
+		list->vsi_id = vf->vsi_res->vsi_id;
+		list->num_elements = j;
+		args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+			   VIRTCHNL_OP_DEL_ETH_ADDR;
+		args.in_args = (uint8_t *)list;
+		args.in_args_size = len;
+		args.out_buffer = vf->aq_resp;
+		args.out_size = AVF_AQ_BUF_SZ;
+		err = avf_execute_vf_cmd(adapter, &args);
+		if (err)
+			PMD_DRV_LOG(ERR, "fail to execute command %s",
+				    add ? "OP_ADD_ETHER_ADDRESS" :
+				    "OP_DEL_ETHER_ADDRESS");
+		rte_free(list);
+		begin = next_begin;
+	} while (begin < AVF_NUM_MACADDR_MAX);
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v3 04/15] net/avf: enable basic Rx Tx func
  2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
                       ` (2 preceding siblings ...)
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 03/15] net/avf: enable queue and device Wenzhuo Lu
@ 2018-01-04  5:27     ` Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 05/15] net/avf: enable link status update Wenzhuo Lu
                       ` (10 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-04  5:27 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 MAINTAINERS                      |   1 +
 config/common_base               |   6 +-
 doc/guides/nics/features/avf.ini |  22 ++
 drivers/net/avf/Makefile         |   3 +
 drivers/net/avf/avf_ethdev.c     |  36 +-
 drivers/net/avf/avf_log.h        |  21 ++
 drivers/net/avf/avf_rxtx.c       | 789 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/avf/avf_rxtx.h       |  53 +++
 8 files changed, 920 insertions(+), 11 deletions(-)
 create mode 100644 doc/guides/nics/features/avf.ini

diff --git a/MAINTAINERS b/MAINTAINERS
index b8b5e61..1ba9c39 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -429,6 +429,7 @@ Intel avf
 M: Jingjing Wu <jingjing.wu@intel.com>
 M: Wenzhuo Lu <wenzhuo.lu@intel.com>
 F: drivers/net/avf/
+F: doc/guides/nics/features/avf*.ini
 
 Mellanox mlx4
 M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
diff --git a/config/common_base b/config/common_base
index ce4d9bb..b1f1c1c 100644
--- a/config/common_base
+++ b/config/common_base
@@ -228,7 +228,11 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 #
 # Compile burst-oriented AVF PMD driver
 #
-CONFIG_RTE_LIBRTE_AVF_PMD=n
+CONFIG_RTE_LIBRTE_AVF_PMD=y
+CONFIG_RTE_LIBRTE_AVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_AVF_DEBUG_TX_FREE=n
+CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_AVF_16BYTE_RX_DESC=n
 
 #
 # Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD
diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
new file mode 100644
index 0000000..8a294e9
--- /dev/null
+++ b/doc/guides/nics/features/avf.ini
@@ -0,0 +1,22 @@
+;
+; Supported features of the 'avf' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Queue start/stop     = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+TSO                  = Y
+RSS hash             = Y
+CRC offload          = Y
+VLAN offload         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Packet type parsing  = Y
+Multiprocess aware   = Y
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index f4f7414..1a673fa 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -13,6 +13,9 @@ LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
 LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_hash
 LDLIBS += -lrte_bus_pci
 
+# used to dump HW descriptor for debugging
+# CFLAGS += -DDEBUG_DUMP_DESC
+
 EXPORT_MAP := rte_pmd_avf_version.map
 
 LIBABIVER := 1
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 605c3c4..50e14fa 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -39,6 +39,7 @@
 static void avf_dev_close(struct rte_eth_dev *dev);
 static void avf_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
+static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -53,6 +54,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	.dev_stop                   = avf_dev_stop,
 	.dev_close                  = avf_dev_close,
 	.dev_infos_get              = avf_dev_info_get,
+	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -204,9 +206,12 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 		if (ret != AVF_SUCCESS)
 			break;
 	}
-	/* TODO: set rx/tx function to vector/scatter/single-segment
+	/* set rx/tx function to vector/scatter/single-segment
 	 * according to parameters
 	 */
+	avf_set_rx_function(dev);
+	avf_set_tx_function(dev);
+
 	return ret;
 }
 
@@ -407,6 +412,23 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	};
 }
 
+static const uint32_t *
+avf_dev_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_L4_NONFRAG,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_UNKNOWN
+	};
+	return ptypes;
+}
+
 static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
@@ -556,7 +578,19 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 
 	/* assign ops func pointer */
 	eth_dev->dev_ops = &avf_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &avf_recv_pkts;
+	eth_dev->tx_pkt_burst = &avf_xmit_pkts;
+	eth_dev->tx_pkt_prepare = &avf_prep_pkts;
 
+	/* For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check if we need a different RX
+	 * and TX function.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		avf_set_rx_function(eth_dev);
+		avf_set_tx_function(eth_dev);
+		return 0;
+	}
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
 	hw->vendor_id = pci_dev->id.vendor_id;
diff --git a/drivers/net/avf/avf_log.h b/drivers/net/avf/avf_log.h
index e3f106b..8d574d3 100644
--- a/drivers/net/avf/avf_log.h
+++ b/drivers/net/avf/avf_log.h
@@ -20,4 +20,25 @@
 	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
 #define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
 
+#ifdef RTE_LIBRTE_AVF_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVF_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVF_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
 #endif /* _AVF_LOG_H_ */
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index 2d4fb4c..baccec4 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -34,17 +34,11 @@
 check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
 {
 	/* The following constraints must be satisfied:
-	 *   thresh >= AVF_RX_MAX_BURST
 	 *   thresh < rxq->nb_rx_desc
-	 *   (rxq->nb_rx_desc % thresh) == 0
 	 */
-	if (thresh < AVF_RX_MAX_BURST ||
-	    thresh >= nb_desc ||
-	    (nb_desc % thresh != 0)) {
-		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u, "
-			     "greater than or equal to %u, "
-			     "and a divisor of %u",
-			     thresh, nb_desc, AVF_RX_MAX_BURST, nb_desc);
+	if (thresh >= nb_desc) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
+			     thresh, nb_desc);
 		return -EINVAL;
 	}
 	return 0;
@@ -614,3 +608,780 @@
 		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 }
+
+static inline void
+avf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union avf_rx_desc *rxdp)
+{
+	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		(1 << AVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
+		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->vlan_tci =
+			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+	} else {
+		mb->vlan_tci = 0;
+	}
+}
+
+/* Translate the rx descriptor status and error fields to pkt flags */
+static inline uint64_t
+avf_rxd_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags;
+	uint64_t error_bits = (qword >> AVF_RXD_QW1_ERROR_SHIFT);
+
+#define AVF_RX_ERR_BITS 0x3f
+
+	/* Check if RSS_HASH */
+	flags = (((qword >> AVF_RX_DESC_STATUS_FLTSTAT_SHIFT) &
+					AVF_RX_DESC_FLTSTAT_RSS_HASH) ==
+			AVF_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+
+	if (likely((error_bits & AVF_RX_ERR_BITS) == 0)) {
+		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely(error_bits & (1 << AVF_RX_DESC_ERROR_IPE_SHIFT)))
+		flags |= PKT_RX_IP_CKSUM_BAD;
+	else
+		flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << AVF_RX_DESC_ERROR_L4E_SHIFT)))
+		flags |= PKT_RX_L4_CKSUM_BAD;
+	else
+		flags |= PKT_RX_L4_CKSUM_GOOD;
+
+	/* TODO: Oversize error bit is not processed here */
+
+	return flags;
+}
+
+/* Single-segment receive function */
+uint16_t
+avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	volatile union avf_rx_desc *rx_ring;
+	volatile union avf_rx_desc *rxdp;
+	struct avf_rx_queue *rxq;
+	union avf_rx_desc rxd;
+	struct rte_mbuf *rxe;
+	struct rte_eth_dev *dev;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	uint16_t nb_rx;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint16_t rx_packet_len;
+	uint16_t rx_id, nb_hold;
+	uint64_t dma_addr;
+	uint64_t pkt_flags;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	nb_rx = 0;
+	nb_hold = 0;
+	rxq = rx_queue;
+	rx_id = rxq->rx_tail;
+	rx_ring = rxq->rx_ring;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			    AVF_RXD_QW1_STATUS_SHIFT;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+			break;
+		AVF_DUMP_RX_DESC(rxq, rxdp, rx_id);
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = &rte_eth_devices[rxq->port_id];
+			dev->data->rx_mbuf_alloc_failed++;
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+				   "queue_id=%u", rxq->port_id, rxq->queue_id);
+			break;
+		}
+
+		rxd = *rxdp;
+		nb_hold++;
+		rxe = rxq->sw_ring[rx_id];
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		rxm = rxe;
+		rxe = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+
+		rx_packet_len = ((qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				AVF_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
+
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
+		avf_rxd_to_vlan_tci(rxm, &rxd);
+		pkt_flags = avf_rxd_to_pkt_flags(qword1);
+		rxm->packet_type =
+			ptype_tbl[(uint8_t)((qword1 &
+			AVF_RXD_QW1_PTYPE_MASK) >> AVF_RXD_QW1_PTYPE_SHIFT)];
+
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			rxm->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		rxm->ol_flags |= pkt_flags;
+
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+
+	/* If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the receive tail register of queue.
+	 * Update that register with the value of the last processed RX
+	 * descriptor minus 1.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+			   "nb_hold=%u nb_rx=%u",
+			   rxq->port_id, rxq->queue_id,
+			   rx_id, nb_hold, nb_rx);
+		rx_id = (uint16_t)((rx_id == 0) ?
+			(rxq->nb_rx_desc - 1) : (rx_id - 1));
+		AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	return nb_rx;
+}
+
+/* Scattered (multi-segment) receive function */
+uint16_t
+avf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	union avf_rx_desc rxd;
+	struct rte_mbuf *rxe;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *nmb, *rxm;
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0, nb_hold = 0, rx_packet_len;
+	struct rte_eth_dev *dev;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags;
+
+	volatile union avf_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union avf_rx_desc *rxdp;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			    AVF_RXD_QW1_STATUS_SHIFT;
+
+		/* Check the DD bit */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+			break;
+		AVF_DUMP_RX_DESC(rxq, rxdp, rx_id);
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+				   "queue_id=%u", rxq->port_id, rxq->queue_id);
+			dev = &rte_eth_devices[rxq->port_id];
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+
+		rxd = *rxdp;
+		nb_hold++;
+		rxe = rxq->sw_ring[rx_id];
+		rx_id++;
+		if (rx_id == rxq->nb_rx_desc)
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+
+		rxm = rxe;
+		rxe = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/* Set data buffer address and data length of the mbuf */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				 AVF_RXD_QW1_LENGTH_PBUF_SHIFT;
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/* If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+						rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		/* If this is not the last buffer of the received packet,
+		 * update the pointer to the last mbuf of the current scattered
+		 * packet and continue to parse the RX ring.
+		 */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_EOF_SHIFT))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		/* This is the last buffer of the received packet. If the CRC
+		 * is not stripped by the hardware:
+		 *  - Subtract the CRC length from the total packet length.
+		 *  - If the last buffer only contains the whole CRC or a part
+		 *  of it, free the mbuf associated to the last buffer. If part
+		 *  of the CRC is also contained in the previous mbuf, subtract
+		 *  the length of that CRC part from the data length of the
+		 *  previous mbuf.
+		 */
+		rxm->next = NULL;
+		if (unlikely(rxq->crc_len > 0)) {
+			first_seg->pkt_len -= ETHER_CRC_LEN;
+			if (rx_packet_len <= ETHER_CRC_LEN) {
+				rte_pktmbuf_free_seg(rxm);
+				first_seg->nb_segs--;
+				last_seg->data_len =
+					(uint16_t)(last_seg->data_len -
+					(ETHER_CRC_LEN - rx_packet_len));
+				last_seg->next = NULL;
+			} else
+				rxm->data_len = (uint16_t)(rx_packet_len -
+								ETHER_CRC_LEN);
+		}
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+		avf_rxd_to_vlan_tci(first_seg, &rxd);
+		pkt_flags = avf_rxd_to_pkt_flags(qword1);
+		first_seg->packet_type =
+			ptype_tbl[(uint8_t)((qword1 &
+			AVF_RXD_QW1_PTYPE_MASK) >> AVF_RXD_QW1_PTYPE_SHIFT)];
+
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			first_seg->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		first_seg->ol_flags |= pkt_flags;
+
+		/* Prefetch data of first segment, if configured to do so. */
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+
+	/* Record index of the next RX descriptor to probe. */
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	/* If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register. Update the RDT with the value of the last processed RX
+	 * descriptor minus 1, to guarantee that the RDT register is never
+	 * equal to the RDH register, which creates a "full" ring situation
+	 * from the hardware point of view.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+			   "nb_hold=%u nb_rx=%u",
+			   rxq->port_id, rxq->queue_id,
+			   rx_id, nb_hold, nb_rx);
+		rx_id = (uint16_t)(rx_id == 0 ?
+			(rxq->nb_rx_desc - 1) : (rx_id - 1));
+		AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	return nb_rx;
+}
+
+static inline int
+avf_xmit_cleanup(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *sw_ring = txq->sw_ring;
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+
+	volatile struct avf_tx_desc *txd = txq->tx_ring;
+
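+	/* Try to clean a batch of rs_thresh descriptors; if the descriptor
+	 * carrying the RS bit has not been written back as done, that batch
+	 * is still in flight and nothing can be freed yet.
+	 */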
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	if ((txd[desc_to_clean_to].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK)) !=
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE)) {
+		PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
+				"(port=%d queue=%d)", desc_to_clean_to,
+				txq->port_id, txq->queue_id);
+		return -1;
+	}
+
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+							desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+					last_desc_cleaned);
+
+	txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
+
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+
+	return 0;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+avf_calc_context_desc(uint64_t flags)
+{
+	static uint64_t mask = PKT_TX_TCP_SEG;
+
+	return (flags & mask) ? 1 : 0;
+}
+
+static inline void
+avf_txd_enable_checksum(uint64_t ol_flags,
+			uint32_t *td_cmd,
+			uint32_t *td_offset,
+			union avf_tx_offload tx_offload)
+{
+	/* Set MACLEN */
+	*td_offset |= (tx_offload.l2_len >> 1) <<
+		      AVF_TX_DESC_LENGTH_MACLEN_SHIFT;
+
+	/* Enable L3 checksum offloads */
+	if (ol_flags & PKT_TX_IP_CKSUM) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV4_CSUM;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	} else if (ol_flags & PKT_TX_IPV4) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV4;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	} else if (ol_flags & PKT_TX_IPV6) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV6;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	}
+
+	if (ol_flags & PKT_TX_TCP_SEG) {
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (tx_offload.l4_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		return;
+	}
+
+	/* Enable L4 checksum offloads */
+	switch (ol_flags & PKT_TX_L4_MASK) {
+	case PKT_TX_TCP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (sizeof(struct tcp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	case PKT_TX_SCTP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_SCTP;
+		*td_offset |= (sizeof(struct sctp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	case PKT_TX_UDP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_UDP;
+		*td_offset |= (sizeof(struct udp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	default:
+		break;
+	}
+}
+
+/* set TSO context descriptor
+ * support IP -> L4 and IP -> IP -> L4
+ */
+static inline uint64_t
+avf_set_tso_ctx(struct rte_mbuf *mbuf, union avf_tx_offload tx_offload)
+{
+	uint64_t ctx_desc = 0;
+	uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+	if (!tx_offload.l4_len) {
+		PMD_TX_LOG(DEBUG, "L4 length set to 0");
+		return ctx_desc;
+	}
+
+	/* For a non-tunneled packet, outer_l2_len and outer_l3_len
+	 * must be 0.
+	 */
+	hdr_len = tx_offload.l2_len +
+		  tx_offload.l3_len +
+		  tx_offload.l4_len;
+
+	cd_cmd = AVF_TX_CTX_DESC_TSO;
+	cd_tso_len = mbuf->pkt_len - hdr_len;
+	ctx_desc |= ((uint64_t)cd_cmd << AVF_TXD_CTX_QW1_CMD_SHIFT) |
+		     ((uint64_t)cd_tso_len << AVF_TXD_CTX_QW1_TSO_LEN_SHIFT) |
+		     ((uint64_t)mbuf->tso_segsz << AVF_TXD_CTX_QW1_MSS_SHIFT);
+
+	return ctx_desc;
+}
+
+/* Construct the TX descriptor cmd/type/offset/buffer-size word */
+static inline uint64_t
+avf_build_ctob(uint32_t td_cmd, uint32_t td_offset, unsigned int size,
+	       uint32_t td_tag)
+{
+	return rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd  << AVF_TXD_QW1_CMD_SHIFT) |
+				((uint64_t)td_offset <<
+				 AVF_TXD_QW1_OFFSET_SHIFT) |
+				((uint64_t)size  <<
+				 AVF_TXD_QW1_TX_BUF_SZ_SHIFT) |
+				((uint64_t)td_tag  <<
+				 AVF_TXD_QW1_L2TAG1_SHIFT));
+}
+
+/* TX function */
+uint16_t
+avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	volatile struct avf_tx_desc *txd;
+	volatile struct avf_tx_desc *txr;
+	struct avf_tx_queue *txq;
+	struct avf_tx_entry *sw_ring;
+	struct avf_tx_entry *txe, *txn;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	uint16_t tx_id;
+	uint16_t nb_tx;
+	uint32_t td_cmd;
+	uint32_t td_offset;
+	uint32_t td_tag;
+	uint64_t ol_flags;
+	uint16_t nb_used;
+	uint16_t nb_ctx;
+	uint16_t tx_last;
+	uint16_t slen;
+	uint64_t buf_dma_addr;
+	union avf_tx_offload tx_offload = {0};
+
+	txq = tx_queue;
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+	tx_id = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Check if the descriptor ring needs to be cleaned. */
+	if (txq->nb_free < txq->free_thresh)
+		avf_xmit_cleanup(txq);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		td_cmd = 0;
+		td_tag = 0;
+		td_offset = 0;
+
+		tx_pkt = *tx_pkts++;
+		RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
+
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = avf_calc_context_desc(ol_flags);
+
+		/* The number of descriptors that must be allocated for
+		 * a packet equals the number of segments of that packet,
+		 * plus 1 context descriptor if needed.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+		PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u"
+			   " tx_first=%u tx_last=%u",
+			   txq->port_id, txq->queue_id, tx_id, tx_last);
+
+		if (nb_used > txq->nb_free) {
+			if (avf_xmit_cleanup(txq)) {
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+			if (unlikely(nb_used > txq->rs_thresh)) {
+				while (nb_used > txq->nb_free) {
+					if (avf_xmit_cleanup(txq)) {
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		/* Descriptor based VLAN insertion */
+		if (ol_flags & PKT_TX_VLAN_PKT) {
+			td_cmd |= AVF_TX_DESC_CMD_IL2TAG1;
+			td_tag = tx_pkt->vlan_tci;
+		}
+
+		/* According to the datasheet, bit 2 is reserved and must be
+		 * set to 1.
+		 */
+		td_cmd |= 0x04;
+
+		/* Enable checksum offloading */
+		if (ol_flags & AVF_TX_CKSUM_OFFLOAD_MASK)
+			avf_txd_enable_checksum(ol_flags, &td_cmd,
+						&td_offset, tx_offload);
+
+		if (nb_ctx) {
+			/* Setup TX context descriptor if required */
+			volatile struct avf_tx_context_desc *ctx_txd =
+				(volatile struct avf_tx_context_desc *)
+					&txr[tx_id];
+			uint16_t cd_l2tag2 = 0;
+			uint64_t cd_type_cmd_tso_mss =
+				AVF_TX_DESC_DTYPE_CONTEXT;
+
+			txn = &sw_ring[txe->next_id];
+			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+			if (txe->mbuf) {
+				rte_pktmbuf_free_seg(txe->mbuf);
+				txe->mbuf = NULL;
+			}
+
+			/* TSO enabled */
+			if (ol_flags & PKT_TX_TCP_SEG)
+				cd_type_cmd_tso_mss |=
+					avf_set_tso_ctx(tx_pkt, tx_offload);
+
+			AVF_DUMP_TX_DESC(txq, ctx_txd, tx_id);
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+		}
+
+		m_seg = tx_pkt;
+		do {
+			txd = &txr[tx_id];
+			txn = &sw_ring[txe->next_id];
+
+			if (txe->mbuf)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/* Setup TX Descriptor */
+			slen = m_seg->data_len;
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
+			txd->cmd_type_offset_bsz = avf_build_ctob(td_cmd,
+								  td_offset,
+								  slen,
+								  td_tag);
+
+			AVF_DUMP_TX_DESC(txq, txd, tx_id);
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg);
+
+		/* The last packet data descriptor needs End Of Packet (EOP) */
+		td_cmd |= AVF_TX_DESC_CMD_EOP;
+		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+
+		if (txq->nb_used >= txq->rs_thresh) {
+			PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
+				   "%4u (port=%d queue=%d)",
+				   tx_last, txq->port_id, txq->queue_id);
+
+			td_cmd |= AVF_TX_DESC_CMD_RS;
+
+			/* Update txq RS bit counters */
+			txq->nb_used = 0;
+		}
+
+		txd->cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)td_cmd) <<
+					 AVF_TXD_QW1_CMD_SHIFT);
+		AVF_DUMP_TX_DESC(txq, txd, tx_id);
+	}
+
+end_of_tx:
+	rte_wmb();
+
+	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+		   txq->port_id, txq->queue_id, tx_id, nb_tx);
+
+	AVF_PCI_REG_WRITE_RELAXED(txq->qtx_tail, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
+/* TX prep functions */
+uint16_t
+avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	      uint16_t nb_pkts)
+{
+	int i, ret;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		/* Check condition for nb_segs > AVF_TX_MAX_MTU_SEG. */
+		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+			if (m->nb_segs > AVF_TX_MAX_MTU_SEG) {
+				rte_errno = -EINVAL;
+				return i;
+			}
+		} else if ((m->tso_segsz < AVF_MIN_TSO_MSS) ||
+			   (m->tso_segsz > AVF_MAX_TSO_MSS)) {
+			/* An MSS outside this range is considered malicious */
+			rte_errno = -EINVAL;
+			return i;
+		}
+
+		if (ol_flags & AVF_TX_OFFLOAD_NOTSUP_MASK) {
+			rte_errno = -ENOTSUP;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			return i;
+		}
+#endif
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			return i;
+		}
+	}
+
+	return i;
+}
+
+/* Choose the Rx function */
+void
+avf_set_rx_function(struct rte_eth_dev *dev)
+{
+	if (dev->data->scattered_rx)
+		dev->rx_pkt_burst = avf_recv_scattered_pkts;
+	else
+		dev->rx_pkt_burst = avf_recv_pkts;
+}
+
+/* Choose the Tx function */
+void
+avf_set_tx_function(struct rte_eth_dev *dev)
+{
+	dev->tx_pkt_burst = avf_xmit_pkts;
+	dev->tx_pkt_prepare = avf_prep_pkts;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index e227cd1..cad240d 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -19,6 +19,25 @@
 #define DEFAULT_TX_RS_THRESH     32
 #define DEFAULT_TX_FREE_THRESH   32
 
+#define AVF_MIN_TSO_MSS          256
+#define AVF_MAX_TSO_MSS          9668
+#define AVF_TSO_MAX_SEG          UINT8_MAX
+#define AVF_TX_MAX_MTU_SEG       8
+
+#define AVF_TX_CKSUM_OFFLOAD_MASK (		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG)
+
+#define AVF_TX_OFFLOAD_MASK (  \
+		PKT_TX_VLAN_PKT |		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG)
+
+#define AVF_TX_OFFLOAD_NOTSUP_MASK \
+		(PKT_TX_OFFLOAD_MASK ^ AVF_TX_OFFLOAD_MASK)
+
 /* HW desc structure, both 16-byte and 32-byte types are supported */
 #ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
 #define avf_rx_desc avf_16byte_rx_desc
@@ -85,6 +104,18 @@ struct avf_tx_queue {
 	bool tx_deferred_start;        /* don't start this queue in dev start */
 };
 
+/* Offload features */
+union avf_tx_offload {
+	uint64_t data;
+	struct {
+		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /* L3 (IP) Header Length. */
+		uint64_t l4_len:8; /* L4 Header Length. */
+		uint64_t tso_segsz:16; /* TCP TSO segment size */
+		/* uint64_t unused : 24; */
+	};
+};
+
 int avf_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			   uint16_t queue_idx,
 			   uint16_t nb_desc,
@@ -105,6 +136,17 @@ int avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
 int avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void avf_dev_tx_queue_release(void *txq);
 void avf_stop_queues(struct rte_eth_dev *dev);
+uint16_t avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts);
+uint16_t avf_recv_scattered_pkts(void *rx_queue,
+				 struct rte_mbuf **rx_pkts,
+				 uint16_t nb_pkts);
+uint16_t avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+uint16_t avf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void avf_set_rx_function(struct rte_eth_dev *dev);
+void avf_set_tx_function(struct rte_eth_dev *dev);
 
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
@@ -157,4 +199,15 @@ void avf_dump_tx_descriptor(const struct avf_tx_queue *txq,
 	       txq->queue_id, name, tx_id, tx_desc->buffer_addr,
 	       tx_desc->cmd_type_offset_bsz);
 }
+
+#ifdef DEBUG_DUMP_DESC
+#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) \
+	avf_dump_rx_descriptor(rxq, desc, rx_id)
+#define AVF_DUMP_TX_DESC(txq, desc, tx_id) \
+	avf_dump_tx_descriptor(txq, desc, tx_id)
+#else
+#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
+#define AVF_DUMP_TX_DESC(txq, desc, tx_id) do { } while (0)
+#endif
+
 #endif /* _AVF_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
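
A minimal usage sketch, not part of the patch: how an application might request the Tx checksum offloads that avf_prep_pkts()/avf_xmit_pkts() above translate into descriptors. Port/queue ids and header layout are illustrative assumptions; the flag combination follows the generic mbuf API rather than anything AVF-specific.

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_tcp.h>

static uint16_t
send_tcp_with_cksum_offload(uint16_t port_id, uint16_t queue_id,
			    struct rte_mbuf *m)
{
	uint16_t nb;

	/* Describe the headers so the PMD can build the Tx descriptor. */
	m->l2_len = sizeof(struct ether_hdr);
	m->l3_len = sizeof(struct ipv4_hdr);
	m->l4_len = sizeof(struct tcp_hdr);
	m->ol_flags = PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;

	/* tx_prepare runs avf_prep_pkts(): it checks the nb_segs/MSS limits
	 * and fixes up the pseudo-header checksum where required.
	 */
	nb = rte_eth_tx_prepare(port_id, queue_id, &m, 1);
	if (nb < 1)
		return 0; /* rte_errno explains why the packet was rejected */

	return rte_eth_tx_burst(port_id, queue_id, &m, 1);
}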

* [dpdk-dev] [PATCH v3 05/15] net/avf: enable link status update
  2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
                       ` (3 preceding siblings ...)
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 04/15] net/avf: enable basic Rx Tx func Wenzhuo Lu
@ 2018-01-04  5:27     ` Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 06/15] net/avf: support stats Wenzhuo Lu
                       ` (9 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-04  5:27 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |  3 +++
 drivers/net/avf/avf.h            |  2 ++
 drivers/net/avf/avf_ethdev.c     | 51 +++++++++++++++++++++++++++++++++++++++-
 drivers/net/avf/avf_vchnl.c      | 38 +++++++++++++++++++++++++++++-
 4 files changed, 92 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 8a294e9..77e4f53 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -4,6 +4,9 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
 Queue start/stop     = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 22886d4..c97b2ee 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -202,4 +202,6 @@ int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
 int avf_configure_queues(struct avf_adapter *adapter);
 int avf_config_irq_map(struct avf_adapter *adapter);
 void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
+int avf_dev_link_update(struct rte_eth_dev *dev,
+			__rte_unused int wait_to_complete);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 50e14fa..42565f3 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -55,6 +55,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	.dev_close                  = avf_dev_close,
 	.dev_infos_get              = avf_dev_info_get,
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
+	.link_update                = avf_dev_link_update,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -429,6 +430,53 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	return ptypes;
 }
 
+int
+avf_dev_link_update(struct rte_eth_dev *dev,
+		    __rte_unused int wait_to_complete)
+{
+	struct rte_eth_link new_link;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	/* Only read the link status stored in the VF; it is updated when a
+	 * LINK_CHANGE event is received from the PF via virtchnl.
+	 */
+	switch (vf->link_speed) {
+	case VIRTCHNL_LINK_SPEED_100MB:
+		new_link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+	case VIRTCHNL_LINK_SPEED_1GB:
+		new_link.link_speed = ETH_SPEED_NUM_1G;
+		break;
+	case VIRTCHNL_LINK_SPEED_10GB:
+		new_link.link_speed = ETH_SPEED_NUM_10G;
+		break;
+	case VIRTCHNL_LINK_SPEED_20GB:
+		new_link.link_speed = ETH_SPEED_NUM_20G;
+		break;
+	case VIRTCHNL_LINK_SPEED_25GB:
+		new_link.link_speed = ETH_SPEED_NUM_25G;
+		break;
+	case VIRTCHNL_LINK_SPEED_40GB:
+		new_link.link_speed = ETH_SPEED_NUM_40G;
+		break;
+	default:
+		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		break;
+	}
+
+	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vf->link_up ? ETH_LINK_UP :
+					     ETH_LINK_DOWN;
+	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+				ETH_LINK_SPEED_FIXED);
+
+	rte_atomic64_cmpset((uint64_t *)&dev->data->dev_link,
+			    *(uint64_t *)&dev->data->dev_link,
+			    *(uint64_t *)&new_link);
+
+	return 0;
+}
+
 static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
@@ -712,7 +760,8 @@ static int eth_avf_pci_remove(struct rte_pci_device *pci_dev)
 /* Adaptive virtual function driver struct */
 static struct rte_pci_driver rte_avf_pmd = {
 	.id_table = pci_id_avf_map,
-	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC |
+		     RTE_PCI_DRV_IOVA_AS_VA,
 	.probe = eth_avf_pci_probe,
 	.remove = eth_avf_pci_remove,
 };
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index 55a425a..f5da601 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -133,6 +133,41 @@
 	return err;
 }
 
+static void
+avf_handle_pf_event_msg(struct rte_eth_dev *dev, uint8_t *msg,
+			uint16_t msglen)
+{
+	struct virtchnl_pf_event *pf_msg =
+			(struct virtchnl_pf_event *)msg;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	if (msglen < sizeof(struct virtchnl_pf_event)) {
+		PMD_DRV_LOG(DEBUG, "Error event");
+		return;
+	}
+	switch (pf_msg->event) {
+	case VIRTCHNL_EVENT_RESET_IMPENDING:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_RESET_IMPENDING event");
+		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET,
+					      NULL, NULL);
+		break;
+	case VIRTCHNL_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_LINK_CHANGE event");
+		vf->link_up = pf_msg->event_data.link_event.link_status;
+		vf->link_speed = pf_msg->event_data.link_event.link_speed;
+		avf_dev_link_update(dev, 0);
+		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC,
+					      NULL, NULL);
+		break;
+	case VIRTCHNL_EVENT_PF_DRIVER_CLOSE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_PF_DRIVER_CLOSE event");
+		break;
+	default:
+		PMD_DRV_LOG(ERR, " unknown event received %u", pf_msg->event);
+		break;
+	}
+}
+
 void
 avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
 {
@@ -172,7 +207,8 @@
 		switch (aq_opc) {
 		case avf_aqc_opc_send_msg_to_vf:
 			if (msg_opc == VIRTCHNL_OP_EVENT) {
-				/* TODO */
+				avf_handle_pf_event_msg(dev, info.msg_buf,
+							info.msg_len);
 			} else {
 				/* read message and it's expected one */
 				if (msg_opc == vf->pend_cmd) {
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
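
A minimal usage sketch, not part of the patch: with RTE_PCI_DRV_INTR_LSC set above, an application can register for link-state-change events and read back the link status that avf_dev_link_update() caches from the PF notification. The callback body is only an illustration.

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>

static int
lsc_event_cb(uint16_t port_id, enum rte_eth_event_type event,
	     void *cb_arg, void *ret_param)
{
	struct rte_eth_link link;

	RTE_SET_USED(event);
	RTE_SET_USED(cb_arg);
	RTE_SET_USED(ret_param);

	/* Non-blocking read of the link info cached in the VF. */
	rte_eth_link_get_nowait(port_id, &link);
	printf("port %u link %s, speed %u Mbps\n", port_id,
	       link.link_status ? "up" : "down", link.link_speed);
	return 0;
}

static void
watch_link(uint16_t port_id)
{
	rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
				      lsc_event_cb, NULL);
}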

* [dpdk-dev] [PATCH v3 06/15] net/avf: support stats
  2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
                       ` (4 preceding siblings ...)
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 05/15] net/avf: enable link status update Wenzhuo Lu
@ 2018-01-04  5:27     ` Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 07/15] net/avf: enable ops for MAC VLAN offload Wenzhuo Lu
                       ` (8 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-04  5:27 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |  1 +
 drivers/net/avf/avf.h            |  2 ++
 drivers/net/avf/avf_ethdev.c     | 27 +++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c      | 27 +++++++++++++++++++++++++++
 4 files changed, 57 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 77e4f53..af84599 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -17,6 +17,7 @@ VLAN offload         = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
+Basic stats          = Y
 Multiprocess aware   = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index c97b2ee..680b117 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -204,4 +204,6 @@ int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
 void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
 int avf_dev_link_update(struct rte_eth_dev *dev,
 			__rte_unused int wait_to_complete);
+int avf_query_stats(struct avf_adapter *adapter,
+		    struct virtchnl_eth_stats **pstats);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 42565f3..ac46043 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -40,6 +40,8 @@
 static void avf_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+static int avf_dev_stats_get(struct rte_eth_dev *dev,
+			     struct rte_eth_stats *stats);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -56,6 +58,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	.dev_infos_get              = avf_dev_info_get,
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.link_update                = avf_dev_link_update,
+	.stats_get                  = avf_dev_stats_get,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -478,6 +481,30 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 }
 
 static int
+avf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct virtchnl_eth_stats *pstats = NULL;
+	int ret;
+
+	ret = avf_query_stats(adapter, &pstats);
+	if (ret == 0) {
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+						pstats->rx_broadcast;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->obytes = pstats->tx_bytes;
+		return 0;
+	}
+	PMD_DRV_LOG(ERR, "Get statistics failed");
+	return -EIO;
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index f5da601..e26527f 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -693,3 +693,30 @@
 		begin = next_begin;
 	} while (begin < AVF_NUM_MACADDR_MAX);
 }
+
+int
+avf_query_stats(struct avf_adapter *adapter,
+		struct virtchnl_eth_stats **pstats)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select q_stats;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&q_stats, 0, sizeof(q_stats));
+	q_stats.vsi_id = vf->vsi_res->vsi_id;
+	args.ops = VIRTCHNL_OP_GET_STATS;
+	args.in_args = (uint8_t *)&q_stats;
+	args.in_args_size = sizeof(q_stats);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to execute command OP_GET_STATS");
+		*pstats = NULL;
+		return err;
+	}
+	*pstats = (struct virtchnl_eth_stats *)args.out_buffer;
+	return 0;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
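
A minimal usage sketch, not part of the patch: reading back the counters that avf_dev_stats_get() fills from the VIRTCHNL_OP_GET_STATS reply.

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void
print_basic_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) != 0)
		return;

	printf("port %u: rx %" PRIu64 " pkts / %" PRIu64 " bytes, "
	       "tx %" PRIu64 " pkts, missed %" PRIu64 ", tx errors %" PRIu64 "\n",
	       port_id, stats.ipackets, stats.ibytes,
	       stats.opackets, stats.imissed, stats.oerrors);
}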

* [dpdk-dev] [PATCH v3 07/15] net/avf: enable ops for MAC VLAN offload
  2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
                       ` (5 preceding siblings ...)
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 06/15] net/avf: support stats Wenzhuo Lu
@ 2018-01-04  5:27     ` Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 08/15] net/avf: enable ops for RSS setting Wenzhuo Lu
                       ` (7 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-04  5:27 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

 - promiscuous_enable
 - promiscuous_disable
 - allmulticast_enable
 - allmulticast_disable
 - mac_addr_add
 - mac_addr_remove
 - mac_addr_set
 - vlan_filter_set
 - vlan_offload_set

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |   5 +
 drivers/net/avf/avf.h            |   5 +
 drivers/net/avf/avf_ethdev.c     | 219 +++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c      |  90 ++++++++++++++++
 4 files changed, 319 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index af84599..1dd6114 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -11,7 +11,12 @@ Queue start/stop     = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
 TSO                  = Y
+Promiscuous mode     = Y
+Allmulticast mode    = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
 RSS hash             = Y
+VLAN filter          = Y
 CRC offload          = Y
 VLAN offload         = Y
 L3 checksum offload  = Y
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 680b117..ea48310 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -206,4 +206,9 @@ int avf_dev_link_update(struct rte_eth_dev *dev,
 			__rte_unused int wait_to_complete);
 int avf_query_stats(struct avf_adapter *adapter,
 		    struct virtchnl_eth_stats **pstats);
+int avf_config_promisc(struct avf_adapter *adapter, bool enable_unicast,
+		       bool enable_multicast);
+int avf_add_del_eth_addr(struct avf_adapter *adapter,
+			 struct ether_addr *addr, bool add);
+int avf_add_del_vlan(struct avf_adapter *adapter, uint16_t vlanid, bool add);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index ac46043..65499b1 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -42,6 +42,20 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 static int avf_dev_stats_get(struct rte_eth_dev *dev,
 			     struct rte_eth_stats *stats);
+static void avf_dev_promiscuous_enable(struct rte_eth_dev *dev);
+static void avf_dev_promiscuous_disable(struct rte_eth_dev *dev);
+static void avf_dev_allmulticast_enable(struct rte_eth_dev *dev);
+static void avf_dev_allmulticast_disable(struct rte_eth_dev *dev);
+static int avf_dev_add_mac_addr(struct rte_eth_dev *dev,
+				struct ether_addr *addr,
+				uint32_t index,
+				uint32_t pool);
+static void avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index);
+static int avf_dev_vlan_filter_set(struct rte_eth_dev *dev,
+				   uint16_t vlan_id, int on);
+static int avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+					 struct ether_addr *mac_addr);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -59,6 +73,14 @@ static int avf_dev_stats_get(struct rte_eth_dev *dev,
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.link_update                = avf_dev_link_update,
 	.stats_get                  = avf_dev_stats_get,
+	.promiscuous_enable         = avf_dev_promiscuous_enable,
+	.promiscuous_disable        = avf_dev_promiscuous_disable,
+	.allmulticast_enable        = avf_dev_allmulticast_enable,
+	.allmulticast_disable       = avf_dev_allmulticast_disable,
+	.mac_addr_add               = avf_dev_add_mac_addr,
+	.mac_addr_remove            = avf_dev_del_mac_addr,
+	.vlan_filter_set            = avf_dev_vlan_filter_set,
+	.vlan_offload_set           = avf_dev_vlan_offload_set,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -67,6 +89,7 @@ static int avf_dev_stats_get(struct rte_eth_dev *dev,
 	.rx_queue_release           = avf_dev_rx_queue_release,
 	.tx_queue_setup             = avf_dev_tx_queue_setup,
 	.tx_queue_release           = avf_dev_tx_queue_release,
+	.mac_addr_set               = avf_dev_set_default_mac_addr,
 };
 
 static int
@@ -480,6 +503,202 @@ static int avf_dev_stats_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void
+avf_dev_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (vf->promisc_unicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, TRUE, vf->promisc_multicast_enabled);
+	if (!ret)
+		vf->promisc_unicast_enabled = TRUE;
+}
+
+static void
+avf_dev_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (!vf->promisc_unicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, FALSE, vf->promisc_multicast_enabled);
+	if (!ret)
+		vf->promisc_unicast_enabled = FALSE;
+}
+
+static void
+avf_dev_allmulticast_enable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (vf->promisc_multicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, vf->promisc_unicast_enabled, TRUE);
+	if (!ret)
+		vf->promisc_multicast_enabled = TRUE;
+}
+
+static void
+avf_dev_allmulticast_disable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (!vf->promisc_multicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, vf->promisc_unicast_enabled, FALSE);
+	if (!ret)
+		vf->promisc_multicast_enabled = FALSE;
+}
+
+static int
+avf_dev_add_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr,
+		     __rte_unused uint32_t index,
+		     __rte_unused uint32_t pool)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int err;
+
+	if (is_zero_ether_addr(addr)) {
+		PMD_DRV_LOG(ERR, "Invalid Ethernet Address");
+		return -EINVAL;
+	}
+
+	err = avf_add_del_eth_addr(adapter, addr, TRUE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to add MAC address");
+		return -EIO;
+	}
+
+	vf->mac_num++;
+
+	return 0;
+}
+
+static void
+avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct ether_addr *addr;
+	int err;
+
+	addr = &dev->data->mac_addrs[index];
+
+	err = avf_add_del_eth_addr(adapter, addr, FALSE);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to delete MAC address");
+
+	vf->mac_num--;
+}
+
+static int
+avf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int err;
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+		return -ENOTSUP;
+
+	err = avf_add_del_vlan(adapter, vlan_id, on);
+	if (err)
+		return -EIO;
+	return 0;
+}
+
+static int
+avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+	int err = 0;
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+		return -ENOTSUP;
+
+	/* Vlan stripping setting */
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		/* Enable or disable VLAN stripping */
+		if (dev_conf->rxmode.hw_vlan_strip)
+			err = avf_enable_vlan_strip(adapter);
+		else
+			err = avf_disable_vlan_strip(adapter);
+	}
+
+	if (err)
+		return -EIO;
+	return 0;
+}
+
+static void
+avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+			     struct ether_addr *mac_addr)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct ether_addr *perm_addr, *old_addr;
+	int ret;
+
+	old_addr = (struct ether_addr *)hw->mac.addr;
+	perm_addr = (struct ether_addr *)hw->mac.perm_addr;
+
+	if (is_same_ether_addr(mac_addr, old_addr))
+		return;
+
+	/* If the MAC address is configured by host, skip the setting */
+	if (is_valid_assigned_ether_addr(perm_addr))
+		return;
+
+	ret = avf_add_del_eth_addr(adapter, old_addr, FALSE);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to delete old MAC:"
+			    " %02X:%02X:%02X:%02X:%02X:%02X",
+			    old_addr->addr_bytes[0],
+			    old_addr->addr_bytes[1],
+			    old_addr->addr_bytes[2],
+			    old_addr->addr_bytes[3],
+			    old_addr->addr_bytes[4],
+			    old_addr->addr_bytes[5]);
+
+	ret = avf_add_del_eth_addr(adapter, mac_addr, TRUE);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to add new MAC:"
+			    " %02X:%02X:%02X:%02X:%02X:%02X",
+			    mac_addr->addr_bytes[0],
+			    mac_addr->addr_bytes[1],
+			    mac_addr->addr_bytes[2],
+			    mac_addr->addr_bytes[3],
+			    mac_addr->addr_bytes[4],
+			    mac_addr->addr_bytes[5]);
+
+	ether_addr_copy(mac_addr, (struct ether_addr *)hw->mac.addr);
+}
+
 static int
 avf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index e26527f..3b652bf 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -720,3 +720,93 @@
 	*pstats = (struct virtchnl_eth_stats *)args.out_buffer;
 	return 0;
 }
+
+int
+avf_config_promisc(struct avf_adapter *adapter,
+		   bool enable_unicast,
+		   bool enable_multicast)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_promisc_info promisc;
+	struct avf_cmd_info args;
+	int err;
+
+	promisc.flags = 0;
+	promisc.vsi_id = vf->vsi_res->vsi_id;
+
+	if (enable_unicast)
+		promisc.flags |= FLAG_VF_UNICAST_PROMISC;
+
+	if (enable_multicast)
+		promisc.flags |= FLAG_VF_MULTICAST_PROMISC;
+
+	args.ops = VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE;
+	args.in_args = (uint8_t *)&promisc;
+	args.in_args_size = sizeof(promisc);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "fail to execute command CONFIG_PROMISCUOUS_MODE");
+	return err;
+}
+
+int
+avf_add_del_eth_addr(struct avf_adapter *adapter, struct ether_addr *addr,
+		     bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t cmd_buffer[sizeof(struct virtchnl_ether_addr_list) +
+			   sizeof(struct virtchnl_ether_addr)];
+	struct avf_cmd_info args;
+	int err;
+
+	list = (struct virtchnl_ether_addr_list *)cmd_buffer;
+	list->vsi_id = vf->vsi_res->vsi_id;
+	list->num_elements = 1;
+	rte_memcpy(list->list[0].addr, addr->addr_bytes,
+		   sizeof(addr->addr_bytes));
+
+	args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR : VIRTCHNL_OP_DEL_ETH_ADDR;
+	args.in_args = cmd_buffer;
+	args.in_args_size = sizeof(cmd_buffer);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_ETH_ADDR" :  "OP_DEL_ETH_ADDR");
+	return err;
+}
+
+int
+avf_add_del_vlan(struct avf_adapter *adapter, uint16_t vlanid, bool add)
+{
+	struct virtchnl_vlan_filter_list *vlan_list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t cmd_buffer[sizeof(struct virtchnl_vlan_filter_list) +
+							sizeof(uint16_t)];
+	struct avf_cmd_info args;
+	int err;
+
+	vlan_list = (struct virtchnl_vlan_filter_list *)cmd_buffer;
+	vlan_list->vsi_id = vf->vsi_res->vsi_id;
+	vlan_list->num_elements = 1;
+	vlan_list->vlan_id[0] = vlanid;
+
+	args.ops = add ? VIRTCHNL_OP_ADD_VLAN : VIRTCHNL_OP_DEL_VLAN;
+	args.in_args = cmd_buffer;
+	args.in_args_size = sizeof(cmd_buffer);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_VLAN" :  "OP_DEL_VLAN");
+
+	return err;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
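
A minimal usage sketch, not part of the patch: exercising the filtering ops wired up above through the generic ethdev API. The extra MAC address and VLAN id are illustrative values only.

#include <rte_ethdev.h>
#include <rte_ether.h>

static void
setup_filters(uint16_t port_id)
{
	struct ether_addr extra_mac = {
		.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 }
	};

	/* VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE via avf_config_promisc() */
	rte_eth_promiscuous_enable(port_id);

	/* VIRTCHNL_OP_ADD_ETH_ADDR via avf_add_del_eth_addr() */
	rte_eth_dev_mac_addr_add(port_id, &extra_mac, 0);

	/* VIRTCHNL_OP_ADD_VLAN via avf_add_del_vlan() */
	rte_eth_dev_vlan_filter(port_id, 100, 1);
}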

* [dpdk-dev] [PATCH v3 08/15] net/avf: enable ops for RSS setting
  2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
                       ` (6 preceding siblings ...)
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 07/15] net/avf: enable ops for MAC VLAN offload Wenzhuo Lu
@ 2018-01-04  5:27     ` Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 09/15] net/avf: enable ops for MTU setting Wenzhuo Lu
                       ` (6 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-04  5:27 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |   2 +
 drivers/net/avf/avf_ethdev.c     | 142 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 144 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 1dd6114..61527d7 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -16,6 +16,8 @@ Allmulticast mode    = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
 RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
 VLAN filter          = Y
 CRC offload          = Y
 VLAN offload         = Y
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 65499b1..4a3f262 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -54,6 +54,16 @@ static int avf_dev_add_mac_addr(struct rte_eth_dev *dev,
 static int avf_dev_vlan_filter_set(struct rte_eth_dev *dev,
 				   uint16_t vlan_id, int on);
 static int avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int avf_dev_rss_reta_update(struct rte_eth_dev *dev,
+				   struct rte_eth_rss_reta_entry64 *reta_conf,
+				   uint16_t reta_size);
+static int avf_dev_rss_reta_query(struct rte_eth_dev *dev,
+				  struct rte_eth_rss_reta_entry64 *reta_conf,
+				  uint16_t reta_size);
+static int avf_dev_rss_hash_update(struct rte_eth_dev *dev,
+				   struct rte_eth_rss_conf *rss_conf);
+static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+				     struct rte_eth_rss_conf *rss_conf);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
 
@@ -90,6 +100,10 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.tx_queue_setup             = avf_dev_tx_queue_setup,
 	.tx_queue_release           = avf_dev_tx_queue_release,
 	.mac_addr_set               = avf_dev_set_default_mac_addr,
+	.reta_update                = avf_dev_rss_reta_update,
+	.reta_query                 = avf_dev_rss_reta_query,
+	.rss_hash_update            = avf_dev_rss_hash_update,
+	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
 };
 
 static int
@@ -654,6 +668,134 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+avf_dev_rss_reta_update(struct rte_eth_dev *dev,
+			struct rte_eth_rss_reta_entry64 *reta_conf,
+			uint16_t reta_size)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t *lut;
+	uint16_t i, idx, shift;
+	int ret;
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	if (reta_size != vf->vf_res->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number hardware can "
+			"support (%d)", reta_size, vf->vf_res->rss_lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc("rss_lut", reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	/* store the old LUT temporarily so it can be restored if
+	 * the virtchnl command fails
+	 */
+	rte_memcpy(lut, vf->rss_lut, reta_size);
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			vf->rss_lut[i] = reta_conf[idx].reta[shift];
+	}
+	/* send virtchnl ops to configure RSS */
+	ret = avf_configure_rss_lut(adapter);
+	if (ret) /* revert back */
+		rte_memcpy(vf->rss_lut, lut, reta_size);
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+avf_dev_rss_reta_query(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_reta_entry64 *reta_conf,
+		       uint16_t reta_size)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint16_t i, idx, shift;
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	if (reta_size != vf->vf_res->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number hardware can "
+			"support (%d)", reta_size, vf->vf_res->rss_lut_size);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vf->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+avf_dev_rss_hash_update(struct rte_eth_dev *dev,
+			struct rte_eth_rss_conf *rss_conf)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	/* HENA setting is enabled by default; no change needed */
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		return 0;
+	} else if (rss_conf->rss_key_len != vf->vf_res->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+			"(%d) doesn't match the size hardware can "
+			"support (%d)", rss_conf->rss_key_len,
+			vf->vf_res->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vf->rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+
+	return avf_configure_rss_key(adapter);
+}
+
+static int
+avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+			  struct rte_eth_rss_conf *rss_conf)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	 /* Just set it to default value now. */
+	rss_conf->rss_hf = AVF_RSS_OFFLOAD_ALL;
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	rss_conf->rss_key_len = vf->vf_res->rss_key_size;
+	rte_memcpy(rss_conf->rss_key, vf->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static void
 avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *mac_addr)
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
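
A minimal usage sketch, not part of the patch: spreading traffic over the first two Rx queues through the RETA update op added above. The redirection table size must equal the rss_lut_size reported by the PF; the alternating queue pattern is just an example.

#include <errno.h>
#include <string.h>
#include <rte_common.h>
#include <rte_ethdev.h>

static int
spread_rss_over_two_queues(uint16_t port_id, uint16_t reta_size)
{
	struct rte_eth_rss_reta_entry64 reta_conf[8];
	uint16_t i;

	if (reta_size > RTE_DIM(reta_conf) * RTE_RETA_GROUP_SIZE)
		return -EINVAL;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < reta_size; i++) {
		reta_conf[i / RTE_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_RETA_GROUP_SIZE);
		reta_conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
			i % 2; /* alternate between queue 0 and queue 1 */
	}

	return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}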

* [dpdk-dev] [PATCH v3 09/15] net/avf: enable ops for MTU setting
  2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
                       ` (7 preceding siblings ...)
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 08/15] net/avf: enable ops for RSS setting Wenzhuo Lu
@ 2018-01-04  5:27     ` Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 10/15] net/avf: enable ops to check queue info and status Wenzhuo Lu
                       ` (5 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-04  5:27 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |  1 +
 drivers/net/avf/avf_ethdev.c     | 30 ++++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 61527d7..cf1b246 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -8,6 +8,7 @@ Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
 Queue start/stop     = Y
+MTU update           = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
 TSO                  = Y
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 4a3f262..12d52fe 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -64,6 +64,7 @@ static int avf_dev_rss_hash_update(struct rte_eth_dev *dev,
 				   struct rte_eth_rss_conf *rss_conf);
 static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 				     struct rte_eth_rss_conf *rss_conf);
+static int avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
 
@@ -104,6 +105,7 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.reta_query                 = avf_dev_rss_reta_query,
 	.rss_hash_update            = avf_dev_rss_hash_update,
 	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
+	.mtu_set                    = avf_dev_mtu_set,
 };
 
 static int
@@ -796,6 +798,34 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	uint32_t frame_size = mtu + AVF_ETH_OVERHEAD;
+	int ret = 0;
+
+	if (mtu < ETHER_MIN_MTU || frame_size > AVF_FRAME_SIZE_MAX)
+		return -EINVAL;
+
+	/* MTU setting is not allowed while the port is started */
+	if (dev->data->dev_started) {
+		PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+		return -EBUSY;
+	}
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
+		dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+	return ret;
+}
+
 static void
 avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *mac_addr)
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
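
A minimal usage sketch, not part of the patch: since the mtu_set op above refuses changes while the port is started, a typical sequence is stop, set the MTU, start again. The 9000-byte value is only an example.

#include <rte_ethdev.h>

static int
change_mtu(uint16_t port_id, uint16_t new_mtu)
{
	int ret;

	rte_eth_dev_stop(port_id);

	ret = rte_eth_dev_set_mtu(port_id, new_mtu);	/* e.g. 9000 */
	if (ret != 0)
		return ret;	/* -EINVAL, -EBUSY or -ENOTSUP */

	return rte_eth_dev_start(port_id);
}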

* [dpdk-dev] [PATCH v3 10/15] net/avf: enable ops to check queue info and status
  2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
                       ` (8 preceding siblings ...)
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 09/15] net/avf: enable ops for MTU setting Wenzhuo Lu
@ 2018-01-04  5:27     ` Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 11/15] net/i40e: support AVF basic interface Wenzhuo Lu
                       ` (4 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-04  5:27 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

 - rxq_info_get
 - txq_info_get
 - rx_queue_count
 - rx_descriptor_status
 - tx_descriptor_status

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |   2 +
 drivers/net/avf/avf_ethdev.c     |   5 ++
 drivers/net/avf/avf_rxtx.c       | 120 +++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h       |   7 +++
 4 files changed, 134 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index cf1b246..da4d81b 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -25,6 +25,8 @@ VLAN offload         = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
 Basic stats          = Y
 Multiprocess aware   = Y
 BSD nic_uio          = Y
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 12d52fe..680919a 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -105,6 +105,11 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.reta_query                 = avf_dev_rss_reta_query,
 	.rss_hash_update            = avf_dev_rss_hash_update,
 	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
+	.rxq_info_get               = avf_dev_rxq_info_get,
+	.txq_info_get               = avf_dev_txq_info_get,
+	.rx_queue_count             = avf_dev_rxq_count,
+	.rx_descriptor_status       = avf_dev_rx_desc_status,
+	.tx_descriptor_status       = avf_dev_tx_desc_status,
 	.mtu_set                    = avf_dev_mtu_set,
 };
 
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index baccec4..0fea8f9 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -1385,3 +1385,123 @@
 	dev->tx_pkt_burst = avf_xmit_pkts;
 	dev->tx_pkt_prepare = avf_prep_pkts;
 }
+
+void
+avf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		     struct rte_eth_rxq_info *qinfo)
+{
+	struct avf_rx_queue *rxq;
+
+	rxq = dev->data->rx_queues[queue_id];
+
+	qinfo->mp = rxq->mp;
+	qinfo->scattered_rx = dev->data->scattered_rx;
+	qinfo->nb_desc = rxq->nb_rx_desc;
+
+	qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+	qinfo->conf.rx_drop_en = TRUE;
+	qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		     struct rte_eth_txq_info *qinfo)
+{
+	struct avf_tx_queue *txq;
+
+	txq = dev->data->tx_queues[queue_id];
+
+	qinfo->nb_desc = txq->nb_tx_desc;
+
+	qinfo->conf.tx_free_thresh = txq->free_thresh;
+	qinfo->conf.tx_rs_thresh = txq->rs_thresh;
+	qinfo->conf.txq_flags = txq->txq_flags;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
+/* Get the number of used descriptors of a rx queue */
+uint32_t
+avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+#define AVF_RXQ_SCAN_INTERVAL 4
+	volatile union avf_rx_desc *rxdp;
+	struct avf_rx_queue *rxq;
+	uint16_t desc = 0;
+
+	rxq = dev->data->rx_queues[queue_id];
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	while ((desc < rxq->nb_rx_desc) &&
+	       ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		 AVF_RXD_QW1_STATUS_MASK) >> AVF_RXD_QW1_STATUS_SHIFT) &
+	       (1 << AVF_RX_DESC_STATUS_DD_SHIFT)) {
+		/* Check the DD bit of one Rx descriptor in every group of 4,
+		 * to avoid checking too frequently and degrading performance
+		 * too much.
+		 */
+		desc += AVF_RXQ_SCAN_INTERVAL;
+		rxdp += AVF_RXQ_SCAN_INTERVAL;
+		if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+			rxdp = &(rxq->rx_ring[rxq->rx_tail +
+					desc - rxq->nb_rx_desc]);
+	}
+
+	return desc;
+}
+
+int
+avf_dev_rx_desc_status(void *rx_queue, uint16_t offset)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	volatile uint64_t *status;
+	uint64_t mask;
+	uint32_t desc;
+
+	if (unlikely(offset >= rxq->nb_rx_desc))
+		return -EINVAL;
+
+	if (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)
+		return RTE_ETH_RX_DESC_UNAVAIL;
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	status = &rxq->rx_ring[desc].wb.qword1.status_error_len;
+	mask = rte_le_to_cpu_64((1ULL << AVF_RX_DESC_STATUS_DD_SHIFT)
+		<< AVF_RXD_QW1_STATUS_SHIFT);
+	if (*status & mask)
+		return RTE_ETH_RX_DESC_DONE;
+
+	return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+avf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
+{
+	struct avf_tx_queue *txq = tx_queue;
+	volatile uint64_t *status;
+	uint64_t mask, expect;
+	uint32_t desc;
+
+	if (unlikely(offset >= txq->nb_tx_desc))
+		return -EINVAL;
+
+	desc = txq->tx_tail + offset;
+	/* go to next desc that has the RS bit */
+	desc = ((desc + txq->rs_thresh - 1) / txq->rs_thresh) *
+		txq->rs_thresh;
+	if (desc >= txq->nb_tx_desc) {
+		desc -= txq->nb_tx_desc;
+		if (desc >= txq->nb_tx_desc)
+			desc -= txq->nb_tx_desc;
+	}
+
+	status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+	mask = rte_le_to_cpu_64(AVF_TXD_QW1_DTYPE_MASK);
+	expect = rte_cpu_to_le_64(
+		 AVF_TX_DESC_DTYPE_DESC_DONE << AVF_TXD_QW1_DTYPE_SHIFT);
+	if ((*status & mask) == expect)
+		return RTE_ETH_TX_DESC_DONE;
+
+	return RTE_ETH_TX_DESC_FULL;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index cad240d..e248f55 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -147,6 +147,13 @@ uint16_t avf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		       uint16_t nb_pkts);
 void avf_set_rx_function(struct rte_eth_dev *dev);
 void avf_set_tx_function(struct rte_eth_dev *dev);
+void avf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			  struct rte_eth_rxq_info *qinfo);
+void avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			  struct rte_eth_txq_info *qinfo);
+uint32_t avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+int avf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
+int avf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
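
A minimal usage sketch, not part of the patch: polling queue state through the new descriptor-status ops, for instance to estimate the Rx backlog before deciding how aggressively to poll.

#include <stdio.h>
#include <rte_ethdev.h>

static void
probe_rx_queue(uint16_t port_id, uint16_t queue_id)
{
	int used = rte_eth_rx_queue_count(port_id, queue_id);
	int st = rte_eth_rx_descriptor_status(port_id, queue_id, 0);

	printf("rxq %u: ~%d descriptors used, head is %s\n", queue_id, used,
	       st == RTE_ETH_RX_DESC_DONE ? "ready" :
	       st == RTE_ETH_RX_DESC_AVAIL ? "awaiting a packet" :
	       "unavailable");
}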

* [dpdk-dev] [PATCH v3 11/15] net/i40e: support AVF basic interface
  2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
                       ` (9 preceding siblings ...)
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 10/15] net/avf: enable ops to check queue info and status Wenzhuo Lu
@ 2018-01-04  5:27     ` Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 12/15] net/avf: enable sse vector Rx Tx func Wenzhuo Lu
                       ` (3 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-04  5:27 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Enable Virtchnl offload Caps negotiation and RSS_PF offload
to support AVF basic interface.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c |  69 ++++++++++++++++----
 drivers/net/i40e/i40e_ethdev.h |   5 ++
 drivers/net/i40e/i40e_pf.c     | 140 +++++++++++++++++++++++++++++++++++++----
 drivers/net/i40e/i40e_pf.h     |   6 ++
 4 files changed, 195 insertions(+), 25 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 811cc9f..696d015 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3678,6 +3678,7 @@ static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint32_t reg;
 	int ret;
 
 	if (!lut)
@@ -3694,14 +3695,22 @@ static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 		uint32_t *lut_dw = (uint32_t *)lut;
 		uint16_t i, lut_size_dw = lut_size / 4;
 
-		for (i = 0; i < lut_size_dw; i++)
-			lut_dw[i] = I40E_READ_REG(hw, I40E_PFQF_HLUT(i));
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i < lut_size_dw; i++) {
+				reg = I40E_VFQF_HLUT1(i, vsi->user_param);
+				lut_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+		} else {
+			for (i = 0; i < lut_size_dw; i++)
+				lut_dw[i] = I40E_READ_REG(hw,
+							  I40E_PFQF_HLUT(i));
+		}
 	}
 
 	return 0;
 }
 
-static int
+int
 i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 {
 	struct i40e_pf *pf;
@@ -3725,8 +3734,17 @@ static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 		uint32_t *lut_dw = (uint32_t *)lut;
 		uint16_t i, lut_size_dw = lut_size / 4;
 
-		for (i = 0; i < lut_size_dw; i++)
-			I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i), lut_dw[i]);
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i < lut_size_dw; i++)
+				I40E_WRITE_REG(
+					hw,
+					I40E_VFQF_HLUT1(i, vsi->user_param),
+					lut_dw[i]);
+		} else {
+			for (i = 0; i < lut_size_dw; i++)
+				I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i),
+					       lut_dw[i]);
+		}
 		I40E_WRITE_FLUSH(hw);
 	}
 
@@ -6698,17 +6716,20 @@ struct i40e_vsi *
 	I40E_WRITE_FLUSH(hw);
 }
 
-static int
+int
 i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len)
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint16_t key_idx = (vsi->type == I40E_VSI_SRIOV) ?
+			   I40E_VFQF_HKEY_MAX_INDEX :
+			   I40E_PFQF_HKEY_MAX_INDEX;
 	int ret = 0;
 
 	if (!key || key_len == 0) {
 		PMD_DRV_LOG(DEBUG, "No key to be configured");
 		return 0;
-	} else if (key_len != (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+	} else if (key_len != (key_idx + 1) *
 		sizeof(uint32_t)) {
 		PMD_DRV_LOG(ERR, "Invalid key length %u", key_len);
 		return -EINVAL;
@@ -6725,8 +6746,18 @@ struct i40e_vsi *
 		uint32_t *hash_key = (uint32_t *)key;
 		uint16_t i;
 
-		for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
-			i40e_write_rx_ctl(hw, I40E_PFQF_HKEY(i), hash_key[i]);
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++)
+				I40E_WRITE_REG(
+					hw,
+					I40E_VFQF_HKEY1(i, vsi->user_param),
+					hash_key[i]);
+
+		} else {
+			for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
+				I40E_WRITE_REG(hw, I40E_PFQF_HKEY(i),
+					       hash_key[i]);
+		}
 		I40E_WRITE_FLUSH(hw);
 	}
 
@@ -6738,6 +6769,7 @@ struct i40e_vsi *
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint32_t reg;
 	int ret;
 
 	if (!key || !key_len)
@@ -6754,11 +6786,22 @@ struct i40e_vsi *
 		uint32_t *key_dw = (uint32_t *)key;
 		uint16_t i;
 
-		for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
-			key_dw[i] = i40e_read_rx_ctl(hw, I40E_PFQF_HKEY(i));
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++) {
+				reg = I40E_VFQF_HKEY1(i, vsi->user_param);
+				key_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+			*key_len = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
+				   sizeof(uint32_t);
+		} else {
+			for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++) {
+				reg = I40E_PFQF_HKEY(i);
+				key_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+			*key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+				   sizeof(uint32_t);
+		}
 	}
-	*key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
-
 	return 0;
 }
 
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index cd67453..89dd611 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -426,6 +426,9 @@ struct i40e_pf_vf {
 	uint16_t lan_nb_qps; /* Actual queues allocated */
 	uint16_t reset_cnt; /* Total vf reset times */
 	struct ether_addr mac_addr;  /* Default MAC address */
+	/* version of the virtchnl from VF */
+	struct virtchnl_version_info version;
+	uint32_t request_caps; /* offload caps requested from VF */
 };
 
 /*
@@ -1198,6 +1201,8 @@ void i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
 int i40e_flush_queue_region_all_conf(struct rte_eth_dev *dev,
 		struct i40e_hw *hw, struct i40e_pf *pf, uint16_t on);
 void i40e_init_queue_region_conf(struct rte_eth_dev *dev);
+int i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len);
+int i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size);
 
 #define I40E_DEV_TO_PCI(eth_dev) \
 	RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index 94bb0cf..7317d19 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -273,19 +273,23 @@
 }
 
 static void
-i40e_pf_host_process_cmd_version(struct i40e_pf_vf *vf, bool b_op)
+i40e_pf_host_process_cmd_version(struct i40e_pf_vf *vf, uint8_t *msg,
+				 bool b_op)
 {
 	struct virtchnl_version_info info;
 
-	/* Respond like a Linux PF host in order to support both DPDK VF and
-	 * Linux VF driver. The expense is original DPDK host specific feature
+	/* VF and PF drivers need to follow the Virtchnl definition, No matter
+	 * it's DPDK or other kernel drivers.
+	/* VF and PF drivers need to follow the virtchnl definition, no matter
+	 * whether the VF runs DPDK or a kernel driver.
+	 * The original DPDK host specific feature
-	 * DPDK VF also can't identify host driver by version number returned.
-	 * It always assume talking with Linux PF.
 	 */
+
 	info.major = VIRTCHNL_VERSION_MAJOR;
-	info.minor = VIRTCHNL_VERSION_MINOR_NO_VF_CAPS;
+	vf->version = *(struct virtchnl_version_info *)msg;
+	if (VF_IS_V10(&vf->version))
+		info.minor = VIRTCHNL_VERSION_MINOR_NO_VF_CAPS;
+	else
+		info.minor = VIRTCHNL_VERSION_MINOR;
 
 	if (b_op)
 		i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_VERSION,
@@ -309,11 +313,13 @@
 }
 
 static int
-i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, bool b_op)
+i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
+					 bool b_op)
 {
 	struct virtchnl_vf_resource *vf_res = NULL;
 	struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
 	uint32_t len = 0;
+	uint64_t default_hena = I40E_RSS_HENA_ALL;
 	int ret = I40E_SUCCESS;
 
 	if (!b_op) {
@@ -337,11 +343,35 @@
 		goto send_msg;
 	}
 
-	vf_res->vf_offload_flags = VIRTCHNL_VF_OFFLOAD_L2 |
-				VIRTCHNL_VF_OFFLOAD_VLAN;
+	if (VF_IS_V10(&vf->version)) /* doesn't support offload negotiation */
+		vf->request_caps = VIRTCHNL_VF_OFFLOAD_L2 |
+				   VIRTCHNL_VF_OFFLOAD_VLAN;
+	else
+		vf->request_caps = *(uint32_t *)msg;
+
+	/* enable all RSS offloads by default;
+	 * setting hena through virtchnl is not supported yet.
+	 */
+	if (vf->request_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		I40E_WRITE_REG(hw, I40E_VFQF_HENA1(0, vf->vf_idx),
+			       (uint32_t)default_hena);
+		I40E_WRITE_REG(hw, I40E_VFQF_HENA1(1, vf->vf_idx),
+			       (uint32_t)(default_hena >> 32));
+		I40E_WRITE_FLUSH(hw);
+	}
+
+	vf_res->vf_offload_flags = vf->request_caps &
+				   I40E_VIRTCHNL_OFFLOAD_CAPS;
+	/* For X722, it supports write back on ITR
+	 * without binding queue to interrupt vector.
+	 */
+	if (hw->mac.type == I40E_MAC_X722)
+		vf_res->vf_offload_flags |= VIRTCHNL_VF_OFFLOAD_WB_ON_ITR;
 	vf_res->max_vectors = hw->func_caps.num_msix_vectors_vf;
 	vf_res->num_queue_pairs = vf->vsi->nb_qps;
 	vf_res->num_vsis = I40E_DEFAULT_VF_VSI_NUM;
+	vf_res->rss_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) * 4;
+	vf_res->rss_lut_size = (I40E_VFQF_HLUT1_MAX_INDEX + 1) * 4;
 
 	/* Change below setting if PF host can support more VSIs for VF */
 	vf_res->vsi_res[0].vsi_type = VIRTCHNL_VSI_SRIOV;
@@ -1090,6 +1120,84 @@
 	return ret;
 }
 
+static int
+i40e_pf_host_process_cmd_set_rss_lut(struct i40e_pf_vf *vf,
+				     uint8_t *msg,
+				     uint16_t msglen,
+				     bool b_op)
+{
+	struct virtchnl_rss_lut *rss_lut = (struct virtchnl_rss_lut *)msg;
+	uint16_t valid_len;
+	int ret = I40E_SUCCESS;
+
+	if (!b_op) {
+		i40e_pf_host_send_msg_to_vf(
+			vf,
+			VIRTCHNL_OP_CONFIG_RSS_LUT,
+			I40E_NOT_SUPPORTED, NULL, 0);
+		return ret;
+	}
+
+	if (!msg || msglen <= sizeof(struct virtchnl_rss_lut)) {
+		PMD_DRV_LOG(ERR, "set_rss_lut argument too short");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+	valid_len = sizeof(struct virtchnl_rss_lut) + rss_lut->lut_entries - 1;
+	if (msglen < valid_len) {
+		PMD_DRV_LOG(ERR, "set_rss_lut length mismatch");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+	ret = i40e_set_rss_lut(vf->vsi, rss_lut->lut, rss_lut->lut_entries);
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_LUT,
+				    ret, NULL, 0);
+
+	return ret;
+}
+
+static int
+i40e_pf_host_process_cmd_set_rss_key(struct i40e_pf_vf *vf,
+				     uint8_t *msg,
+				     uint16_t msglen,
+				     bool b_op)
+{
+	struct virtchnl_rss_key *rss_key = (struct virtchnl_rss_key *)msg;
+	uint16_t valid_len;
+	int ret = I40E_SUCCESS;
+
+	if (!b_op) {
+		i40e_pf_host_send_msg_to_vf(
+			vf,
+			VIRTCHNL_OP_CONFIG_RSS_KEY,
+			I40E_NOT_SUPPORTED, NULL, 0);
+		return ret;
+	}
+
+	if (!msg || msglen <= sizeof(struct virtchnl_rss_key)) {
+		PMD_DRV_LOG(ERR, "set_rss_key argument too short");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+	valid_len = sizeof(struct virtchnl_rss_key) + rss_key->key_len - 1;
+	if (msglen < valid_len) {
+		PMD_DRV_LOG(ERR, "set_rss_key length mismatch");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+	ret = i40e_set_rss_key(vf->vsi, rss_key->key, rss_key->key_len);
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_KEY,
+				    ret, NULL, 0);
+
+	return ret;
+}
+
 void
 i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
 {
@@ -1196,7 +1304,7 @@
 	switch (opcode) {
 	case VIRTCHNL_OP_VERSION:
 		PMD_DRV_LOG(INFO, "OP_VERSION received");
-		i40e_pf_host_process_cmd_version(vf, b_op);
+		i40e_pf_host_process_cmd_version(vf, msg, b_op);
 		break;
 	case VIRTCHNL_OP_RESET_VF:
 		PMD_DRV_LOG(INFO, "OP_RESET_VF received");
@@ -1204,7 +1312,7 @@
 		break;
 	case VIRTCHNL_OP_GET_VF_RESOURCES:
 		PMD_DRV_LOG(INFO, "OP_GET_VF_RESOURCES received");
-		i40e_pf_host_process_cmd_get_vf_resource(vf, b_op);
+		i40e_pf_host_process_cmd_get_vf_resource(vf, msg, b_op);
 		break;
 	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
 		PMD_DRV_LOG(INFO, "OP_CONFIG_VSI_QUEUES received");
@@ -1265,6 +1373,14 @@
 		PMD_DRV_LOG(INFO, "OP_DISABLE_VLAN_STRIPPING received");
 		i40e_pf_host_process_cmd_disable_vlan_strip(vf, b_op);
 		break;
+	case VIRTCHNL_OP_CONFIG_RSS_LUT:
+		PMD_DRV_LOG(INFO, "OP_CONFIG_RSS_LUT received");
+		i40e_pf_host_process_cmd_set_rss_lut(vf, msg, msglen, b_op);
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_KEY:
+		PMD_DRV_LOG(INFO, "OP_CONFIG_RSS_KEY received");
+		i40e_pf_host_process_cmd_set_rss_key(vf, msg, msglen, b_op);
+		break;
 	/* Don't add command supported below, which will
 	 * return an error code.
 	 */
diff --git a/drivers/net/i40e/i40e_pf.h b/drivers/net/i40e/i40e_pf.h
index 0411663..196d71e 100644
--- a/drivers/net/i40e/i40e_pf.h
+++ b/drivers/net/i40e/i40e_pf.h
@@ -37,6 +37,12 @@
 /* Default setting on number of VSIs that VF can contain */
 #define I40E_DEFAULT_VF_VSI_NUM 1
 
+#define I40E_VIRTCHNL_OFFLOAD_CAPS ( \
+	VIRTCHNL_VF_OFFLOAD_L2 | \
+	VIRTCHNL_VF_OFFLOAD_VLAN | \
+	VIRTCHNL_VF_OFFLOAD_RSS_PF | \
+	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
+
 struct virtchnl_vlan_offload_info {
 	uint16_t vsi_id;
 	uint8_t enable_vlan_strip;
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
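
For reference, a minimal sketch of what the VF side of VIRTCHNL_OP_CONFIG_RSS_KEY
could look like so that it satisfies the length check enforced by the PF handler
above. This is illustrative only and not part of the patch; avf_send_cmd() is a
hypothetical transport helper, and error handling is reduced to the essentials.

	#include <rte_malloc.h>
	#include <rte_memcpy.h>

	static int
	vf_config_rss_key(uint16_t vsi_id, const uint8_t *key, uint16_t key_len)
	{
		struct virtchnl_rss_key *rss_key;
		/* key[1] is already counted in sizeof(), hence the "- 1" that
		 * mirrors the valid_len computation on the PF side.
		 */
		uint16_t len = sizeof(*rss_key) + key_len - 1;
		int ret;

		rss_key = rte_zmalloc("vf_rss_key", len, 0);
		if (!rss_key)
			return -ENOMEM;

		rss_key->vsi_id = vsi_id;
		rss_key->key_len = key_len;
		rte_memcpy(rss_key->key, key, key_len);

		/* hypothetical helper: send opcode + payload over the adminq */
		ret = avf_send_cmd(VIRTCHNL_OP_CONFIG_RSS_KEY,
				   (uint8_t *)rss_key, len);
		rte_free(rss_key);
		return ret;
	}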

* [dpdk-dev] [PATCH v3 12/15] net/avf: enable sse vector Rx Tx func
  2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
                       ` (10 preceding siblings ...)
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 11/15] net/i40e: support AVF basic interface Wenzhuo Lu
@ 2018-01-04  5:27     ` Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 13/15] net/avf: enable bulk allocate Rx func Wenzhuo Lu
                       ` (2 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-04  5:27 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 config/common_base                    |   1 +
 doc/guides/nics/features/avf_vec.ini  |  36 ++
 drivers/net/avf/Makefile              |   1 +
 drivers/net/avf/avf.h                 |   4 +
 drivers/net/avf/avf_ethdev.c          |  11 +
 drivers/net/avf/avf_rxtx.c            | 172 ++++++++-
 drivers/net/avf/avf_rxtx.h            |  36 +-
 drivers/net/avf/avf_rxtx_vec_common.h | 210 +++++++++++
 drivers/net/avf/avf_rxtx_vec_sse.c    | 656 ++++++++++++++++++++++++++++++++++
 9 files changed, 1116 insertions(+), 11 deletions(-)
 create mode 100644 doc/guides/nics/features/avf_vec.ini
 create mode 100644 drivers/net/avf/avf_rxtx_vec_common.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_sse.c

diff --git a/config/common_base b/config/common_base
index b1f1c1c..f9363ff 100644
--- a/config/common_base
+++ b/config/common_base
@@ -229,6 +229,7 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 # Compile burst-oriented AVF PMD driver
 #
 CONFIG_RTE_LIBRTE_AVF_PMD=y
+CONFIG_RTE_LIBRTE_AVF_INC_VECTOR=y
 CONFIG_RTE_LIBRTE_AVF_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_AVF_DEBUG_TX_FREE=n
 CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=n
diff --git a/doc/guides/nics/features/avf_vec.ini b/doc/guides/nics/features/avf_vec.ini
new file mode 100644
index 0000000..45dd5e5
--- /dev/null
+++ b/doc/guides/nics/features/avf_vec.ini
@@ -0,0 +1,36 @@
+;
+; Supported features of the 'avf_vec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
+Queue start/stop     = Y
+MTU update           = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+TSO                  = Y
+Promiscuous mode     = Y
+Allmulticast mode    = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
+RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
+VLAN filter          = Y
+CRC offload          = Y
+VLAN offload         = P
+L3 checksum offload  = P
+L4 checksum offload  = P
+Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
+Basic stats          = Y
+Multiprocess aware   = Y
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index 1a673fa..14fa38a 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -31,5 +31,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_INC_VECTOR) += avf_rxtx_vec_sse.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index ea48310..b79bc5a 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -119,6 +119,10 @@ struct avf_adapter {
 	struct avf_hw hw;
 	struct rte_eth_dev *eth_dev;
 	struct avf_info vf;
+
+	/* For vector PMD */
+	bool rx_vec_allowed;
+	bool tx_vec_allowed;
 };
 
 /* AVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 680919a..692055f 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -121,6 +121,17 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(ad);
 	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	/* Initialize to TRUE. If any Rx/Tx queue doesn't meet the vector
+	 * Rx/Tx preconditions, the corresponding flag will be reset to false.
+	 */
+	ad->rx_vec_allowed = true;
+	ad->tx_vec_allowed = true;
+#else
+	ad->rx_vec_allowed = false;
+	ad->tx_vec_allowed = false;
+#endif
+
 	/* Vlan stripping setting */
 	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_VLAN) {
 		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index 0fea8f9..b542532 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -92,6 +92,34 @@
 	return 0;
 }
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+static inline bool
+check_rx_vec_allow(struct avf_rx_queue *rxq)
+{
+	if (rxq->rx_free_thresh >= AVF_VPMD_RX_MAX_BURST &&
+	    rxq->nb_rx_desc % rxq->rx_free_thresh == 0) {
+		PMD_INIT_LOG(DEBUG, "Vector Rx can be enabled on this rxq.");
+		return TRUE;
+	}
+
+	PMD_INIT_LOG(DEBUG, "Vector Rx cannot be enabled on this rxq.");
+	return FALSE;
+}
+
+static inline bool
+check_tx_vec_allow(struct avf_tx_queue *txq)
+{
+	if ((txq->txq_flags & AVF_SIMPLE_FLAGS) == AVF_SIMPLE_FLAGS &&
+	    txq->rs_thresh >= AVF_VPMD_TX_MAX_BURST &&
+	    txq->rs_thresh <= AVF_VPMD_TX_MAX_FREE_BUF) {
+		PMD_INIT_LOG(DEBUG, "Vector Tx can be enabled on this txq.");
+		return TRUE;
+	}
+	PMD_INIT_LOG(DEBUG, "Vector Tx cannot be enabled on this txq.");
+	return FALSE;
+}
+#endif
+
 static inline void
 reset_rx_queue(struct avf_rx_queue *rxq)
 {
@@ -225,6 +253,14 @@
 	}
 }
 
+static const struct avf_rxq_ops def_rxq_ops = {
+	.release_mbufs = release_rxq_mbufs,
+};
+
+static const struct avf_txq_ops def_txq_ops = {
+	.release_mbufs = release_txq_mbufs,
+};
+
 int
 avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		       uint16_t nb_desc, unsigned int socket_id,
@@ -325,7 +361,12 @@
 	rxq->q_set = TRUE;
 	dev->data->rx_queues[queue_idx] = rxq;
 	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
+	rxq->ops = &def_rxq_ops;
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	if (check_rx_vec_allow(rxq) == FALSE)
+		ad->rx_vec_allowed = false;
+#endif
 	return 0;
 }
 
@@ -337,6 +378,8 @@
 		       const struct rte_eth_txconf *tx_conf)
 {
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct avf_tx_queue *txq;
 	const struct rte_memzone *mz;
 	uint32_t ring_size;
@@ -416,6 +459,12 @@
 	txq->q_set = TRUE;
 	dev->data->tx_queues[queue_idx] = txq;
 	txq->qtx_tail = hw->hw_addr + AVF_QTX_TAIL1(queue_idx);
+	txq->ops = &def_txq_ops;
+
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	if (check_tx_vec_allow(txq) == FALSE)
+		ad->tx_vec_allowed = false;
+#endif
 
 	return 0;
 }
@@ -514,7 +563,7 @@
 	}
 
 	rxq = dev->data->rx_queues[rx_queue_id];
-	release_rxq_mbufs(rxq);
+	rxq->ops->release_mbufs(rxq);
 	reset_rx_queue(rxq);
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -542,7 +591,7 @@
 	}
 
 	txq = dev->data->tx_queues[tx_queue_id];
-	release_txq_mbufs(txq);
+	txq->ops->release_mbufs(txq);
 	reset_tx_queue(txq);
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -557,7 +606,7 @@
 	if (!q)
 		return;
 
-	release_rxq_mbufs(q);
+	q->ops->release_mbufs(q);
 	rte_free(q->sw_ring);
 	rte_memzone_free(q->mz);
 	rte_free(q);
@@ -571,7 +620,7 @@
 	if (!q)
 		return;
 
-	release_txq_mbufs(q);
+	q->ops->release_mbufs(q);
 	rte_free(q->sw_ring);
 	rte_memzone_free(q->mz);
 	rte_free(q);
@@ -595,7 +644,7 @@
 		txq = dev->data->tx_queues[i];
 		if (!txq)
 			continue;
-		release_txq_mbufs(txq);
+		txq->ops->release_mbufs(txq);
 		reset_tx_queue(txq);
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
@@ -603,7 +652,7 @@
 		rxq = dev->data->rx_queues[i];
 		if (!rxq)
 			continue;
-		release_rxq_mbufs(rxq);
+		rxq->ops->release_mbufs(rxq);
 		reset_rx_queue(rxq);
 		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
@@ -1320,6 +1369,27 @@
 	return nb_tx;
 }
 
+static uint16_t
+avf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+		  uint16_t nb_pkts)
+{
+	uint16_t nb_tx = 0;
+	struct avf_tx_queue *txq = (struct avf_tx_queue *)tx_queue;
+
+	while (nb_pkts) {
+		uint16_t ret, num;
+
+		num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+		ret = avf_xmit_fixed_burst_vec(tx_queue, &tx_pkts[nb_tx], num);
+		nb_tx += ret;
+		nb_pkts -= ret;
+		if (ret < num)
+			break;
+	}
+
+	return nb_tx;
+}
+
 /* TX prep functions */
 uint16_t
 avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
@@ -1372,18 +1442,64 @@
 void
 avf_set_rx_function(struct rte_eth_dev *dev)
 {
-	if (dev->data->scattered_rx)
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int i;
+
+	if (adapter->rx_vec_allowed) {
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(DEBUG, "Using Vector Scattered Rx callback"
+				    " (port=%d).", dev->data->port_id);
+			dev->rx_pkt_burst = avf_recv_scattered_pkts_vec;
+		} else {
+			PMD_DRV_LOG(DEBUG, "Using Vector Rx callback"
+				    " (port=%d).", dev->data->port_id);
+			dev->rx_pkt_burst = avf_recv_pkts_vec;
+		}
+		for (i = 0; i < dev->data->nb_rx_queues; i++) {
+			rxq = dev->data->rx_queues[i];
+			if (!rxq)
+				continue;
+			avf_rxq_vec_setup(rxq);
+		}
+	} else if (dev->data->scattered_rx) {
+		PMD_DRV_LOG(DEBUG, "Using a Scattered Rx callback (port=%d).",
+			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_scattered_pkts;
-	else
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using Basic Rx callback (port=%d).",
+			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_pkts;
+	}
 }
 
 /* choose tx function*/
 void
 avf_set_tx_function(struct rte_eth_dev *dev)
 {
-	dev->tx_pkt_burst = avf_xmit_pkts;
-	dev->tx_pkt_prepare = avf_prep_pkts;
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int i;
+
+	if (adapter->tx_vec_allowed) {
+		PMD_DRV_LOG(DEBUG, "Using Vector Tx callback (port=%d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = avf_xmit_pkts_vec;
+		dev->tx_pkt_prepare = NULL;
+		for (i = 0; i < dev->data->nb_tx_queues; i++) {
+			txq = dev->data->tx_queues[i];
+			if (!txq)
+				continue;
+			avf_txq_vec_setup(txq);
+		}
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using Basic Tx callback (port=%d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = avf_xmit_pkts;
+		dev->tx_pkt_prepare = avf_prep_pkts;
+	}
 }
 
 void
@@ -1505,3 +1621,39 @@
 
 	return RTE_ETH_TX_DESC_FULL;
 }
+
+uint16_t __attribute__((weak))
+avf_recv_pkts_vec(__rte_unused void *rx_queue,
+		  __rte_unused struct rte_mbuf **rx_pkts,
+		  __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+uint16_t __attribute__((weak))
+avf_recv_scattered_pkts_vec(__rte_unused void *rx_queue,
+			    __rte_unused struct rte_mbuf **rx_pkts,
+			    __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+uint16_t __attribute__((weak))
+avf_xmit_fixed_burst_vec(__rte_unused void *tx_queue,
+			 __rte_unused struct rte_mbuf **tx_pkts,
+			 __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+int __attribute__((weak))
+avf_rxq_vec_setup(__rte_unused struct avf_rx_queue *rxq)
+{
+	return -1;
+}
+
+int __attribute__((weak))
+avf_txq_vec_setup(__rte_unused struct avf_tx_queue *txq)
+{
+	return -1;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index e248f55..82fd801 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -16,6 +16,15 @@
 /* used for Rx Bulk Allocate */
 #define AVF_RX_MAX_BURST         32
 
+/* used for Vector PMD */
+#define AVF_VPMD_RX_MAX_BURST    32
+#define AVF_VPMD_TX_MAX_BURST    32
+#define AVF_VPMD_DESCS_PER_LOOP  4
+#define AVF_VPMD_TX_MAX_FREE_BUF 64
+
+#define AVF_SIMPLE_FLAGS ((uint32_t)ETH_TXQ_FLAGS_NOMULTSEGS | \
+			  ETH_TXQ_FLAGS_NOOFFLOADS)
+
 #define DEFAULT_TX_RS_THRESH     32
 #define DEFAULT_TX_FREE_THRESH   32
 
@@ -45,6 +54,14 @@
 #define avf_rx_desc avf_32byte_rx_desc
 #endif
 
+struct avf_rxq_ops {
+	void (*release_mbufs)(struct avf_rx_queue *rxq);
+};
+
+struct avf_txq_ops {
+	void (*release_mbufs)(struct avf_tx_queue *txq);
+};
+
 /* Structure associated with each Rx queue. */
 struct avf_rx_queue {
 	struct rte_mempool *mp;       /* mbuf pool to populate Rx ring */
@@ -61,7 +78,12 @@ struct avf_rx_queue {
 	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
 	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
 
-	uint16_t port_id;       /* device port ID */
+	/* used for VPMD */
+	uint16_t rxrearm_nb;       /* number of remaining to be re-armed */
+	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
+	uint64_t mbuf_initializer; /* value to init mbufs */
+
+	uint16_t port_id;        /* device port ID */
 	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
 	uint16_t queue_id;      /* Rx queue index */
 	uint16_t rx_buf_len;    /* The packet buffer size */
@@ -70,6 +92,7 @@ struct avf_rx_queue {
 
 	bool q_set;             /* if rx queue has been configured */
 	bool rx_deferred_start; /* don't start this queue in dev start */
+	const struct avf_rxq_ops *ops;
 };
 
 struct avf_tx_entry {
@@ -102,6 +125,7 @@ struct avf_tx_queue {
 
 	bool q_set;                    /* if rx queue has been configured */
 	bool tx_deferred_start;        /* don't start this queue in dev start */
+	const struct avf_txq_ops *ops;
 };
 
 /* Offload features */
@@ -155,6 +179,16 @@ void avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 int avf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
 int avf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
+uint16_t avf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			   uint16_t nb_pkts);
+uint16_t avf_recv_scattered_pkts_vec(void *rx_queue,
+				     struct rte_mbuf **rx_pkts,
+				     uint16_t nb_pkts);
+uint16_t avf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+				  uint16_t nb_pkts);
+int avf_rxq_vec_setup(struct avf_rx_queue *rxq);
+int avf_txq_vec_setup(struct avf_tx_queue *txq);
+
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
 			    const void *desc,
diff --git a/drivers/net/avf/avf_rxtx_vec_common.h b/drivers/net/avf/avf_rxtx_vec_common.h
new file mode 100644
index 0000000..56a23a7
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx_vec_common.h
@@ -0,0 +1,210 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_RXTX_VEC_COMMON_H_
+#define _AVF_RXTX_VEC_COMMON_H_
+#include <stdint.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+
+#include "avf.h"
+#include "avf_rxtx.h"
+
+static inline uint16_t
+reassemble_packets(struct avf_rx_queue *rxq, struct rte_mbuf **rx_bufs,
+		   uint16_t nb_bufs, uint8_t *split_flags)
+{
+	struct rte_mbuf *pkts[AVF_VPMD_RX_MAX_BURST];
+	struct rte_mbuf *start = rxq->pkt_first_seg;
+	struct rte_mbuf *end =  rxq->pkt_last_seg;
+	unsigned int pkt_idx, buf_idx;
+
+	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
+		if (end) {
+			/* processing a split packet */
+			end->next = rx_bufs[buf_idx];
+			rx_bufs[buf_idx]->data_len += rxq->crc_len;
+
+			start->nb_segs++;
+			start->pkt_len += rx_bufs[buf_idx]->data_len;
+			end = end->next;
+
+			if (!split_flags[buf_idx]) {
+				/* it's the last packet of the set */
+				start->hash = end->hash;
+				start->ol_flags = end->ol_flags;
+				/* we need to strip crc for the whole packet */
+				start->pkt_len -= rxq->crc_len;
+				if (end->data_len > rxq->crc_len) {
+					end->data_len -= rxq->crc_len;
+				} else {
+					/* free up last mbuf */
+					struct rte_mbuf *secondlast = start;
+
+					start->nb_segs--;
+					while (secondlast->next != end)
+						secondlast = secondlast->next;
+					secondlast->data_len -= (rxq->crc_len -
+							end->data_len);
+					secondlast->next = NULL;
+					rte_pktmbuf_free_seg(end);
+				}
+				pkts[pkt_idx++] = start;
+				start = NULL;
+				end = NULL;
+			}
+		} else {
+			/* not processing a split packet */
+			if (!split_flags[buf_idx]) {
+				/* not a split packet, save and skip */
+				pkts[pkt_idx++] = rx_bufs[buf_idx];
+				continue;
+			}
+			end = start = rx_bufs[buf_idx];
+			rx_bufs[buf_idx]->data_len += rxq->crc_len;
+			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
+		}
+	}
+
+	/* save the partial packet for next time */
+	rxq->pkt_first_seg = start;
+	rxq->pkt_last_seg = end;
+	memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
+	return pkt_idx;
+}
+
+static __rte_always_inline int
+avf_tx_free_bufs(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *txep;
+	uint32_t n;
+	uint32_t i;
+	int nb_free = 0;
+	struct rte_mbuf *m, *free[AVF_VPMD_TX_MAX_FREE_BUF];
+
+	/* check DD bits on threshold descriptor */
+	if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK)) !=
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	n = txq->rs_thresh;
+
+	/* first buffer to free from the S/W ring is at index
+	 * next_dd - (rs_thresh - 1)
+	 */
+	txep = &txq->sw_ring[txq->next_dd - (n - 1)];
+	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
+	if (likely(m != NULL)) {
+		free[0] = m;
+		nb_free = 1;
+		for (i = 1; i < n; i++) {
+			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+			if (likely(m != NULL)) {
+				if (likely(m->pool == free[0]->pool)) {
+					free[nb_free++] = m;
+				} else {
+					rte_mempool_put_bulk(free[0]->pool,
+							     (void *)free,
+							     nb_free);
+					free[0] = m;
+					nb_free = 1;
+				}
+			}
+		}
+		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+	} else {
+		for (i = 1; i < n; i++) {
+			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+			if (m)
+				rte_mempool_put(m->pool, m);
+		}
+	}
+
+	/* buffers were freed, update counters */
+	txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh);
+	txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh);
+	if (txq->next_dd >= txq->nb_tx_desc)
+		txq->next_dd = (uint16_t)(txq->rs_thresh - 1);
+
+	return txq->rs_thresh;
+}
+
+static __rte_always_inline void
+tx_backlog_entry(struct avf_tx_entry *txep,
+		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	int i;
+
+	for (i = 0; i < (int)nb_pkts; ++i)
+		txep[i].mbuf = tx_pkts[i];
+}
+
+static inline void
+_avf_rx_queue_release_mbufs_vec(struct avf_rx_queue *rxq)
+{
+	const unsigned int mask = rxq->nb_rx_desc - 1;
+	unsigned int i;
+
+	if (!rxq->sw_ring || rxq->rxrearm_nb >= rxq->nb_rx_desc)
+		return;
+
+	/* free all mbufs that are valid in the ring */
+	if (rxq->rxrearm_nb == 0) {
+		for (i = 0; i < rxq->nb_rx_desc; i++) {
+			if (rxq->sw_ring[i])
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	} else {
+		for (i = rxq->rx_tail;
+		     i != rxq->rxrearm_start;
+		     i = (i + 1) & mask) {
+			if (rxq->sw_ring[i])
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	}
+
+	rxq->rxrearm_nb = rxq->nb_rx_desc;
+
+	/* set all entries to NULL */
+	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
+}
+
+static inline void
+_avf_tx_queue_release_mbufs_vec(struct avf_tx_queue *txq)
+{
+	unsigned i;
+	const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
+
+	if (!txq->sw_ring || txq->nb_free == max_desc)
+		return;
+
+	i = txq->next_dd - txq->rs_thresh + 1;
+	if (txq->tx_tail < i) {
+		for (; i < txq->nb_tx_desc; i++) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+		i = 0;
+	}
+}
+
+static inline int
+avf_rxq_vec_setup_default(struct avf_rx_queue *rxq)
+{
+	uintptr_t p;
+	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
+
+	mb_def.nb_segs = 1;
+	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
+	mb_def.port = rxq->port_id;
+	rte_mbuf_refcnt_set(&mb_def, 1);
+
+	/* prevent compiler reordering: rearm_data covers previous fields */
+	rte_compiler_barrier();
+	p = (uintptr_t)&mb_def.rearm_data;
+	rxq->mbuf_initializer = *(uint64_t *)p;
+	return 0;
+}
+#endif
diff --git a/drivers/net/avf/avf_rxtx_vec_sse.c b/drivers/net/avf/avf_rxtx_vec_sse.c
new file mode 100644
index 0000000..8f389f3
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx_vec_sse.c
@@ -0,0 +1,656 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdint.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+
+#include "base/avf_prototype.h"
+#include "base/avf_type.h"
+#include "avf.h"
+#include "avf_rxtx.h"
+#include "avf_rxtx_vec_common.h"
+
+#include <tmmintrin.h>
+
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#endif
+
+static inline void
+avf_rxq_rearm(struct avf_rx_queue *rxq)
+{
+	int i;
+	uint16_t rx_id;
+
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxp = &rxq->sw_ring[rxq->rxrearm_start];
+	struct rte_mbuf *mb0, *mb1;
+	__m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM,
+			RTE_PKTMBUF_HEADROOM);
+	__m128i dma_addr0, dma_addr1;
+
+	rxdp = rxq->rx_ring + rxq->rxrearm_start;
+
+	/* Pull 'n' more MBUFs into the software ring */
+	if (rte_mempool_get_bulk(rxq->mp, (void *)rxp,
+				 rxq->rx_free_thresh) < 0) {
+		if (rxq->rxrearm_nb + rxq->rx_free_thresh >= rxq->nb_rx_desc) {
+			dma_addr0 = _mm_setzero_si128();
+			for (i = 0; i < AVF_VPMD_DESCS_PER_LOOP; i++) {
+				rxp[i] = &rxq->fake_mbuf;
+				_mm_store_si128((__m128i *)&rxdp[i].read,
+						dma_addr0);
+			}
+		}
+		rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed +=
+			rxq->rx_free_thresh;
+		return;
+	}
+
+	/* Initialize the mbufs in vector, process 2 mbufs in one loop */
+	for (i = 0; i < rxq->rx_free_thresh; i += 2, rxp += 2) {
+		__m128i vaddr0, vaddr1;
+
+		mb0 = rxp[0];
+		mb1 = rxp[1];
+
+		/* load buf_addr(lo 64bit) and buf_iova(hi 64bit) */
+		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) !=
+				offsetof(struct rte_mbuf, buf_addr) + 8);
+		vaddr0 = _mm_loadu_si128((__m128i *)&mb0->buf_addr);
+		vaddr1 = _mm_loadu_si128((__m128i *)&mb1->buf_addr);
+
+		/* convert pa to dma_addr hdr/data */
+		dma_addr0 = _mm_unpackhi_epi64(vaddr0, vaddr0);
+		dma_addr1 = _mm_unpackhi_epi64(vaddr1, vaddr1);
+
+		/* add headroom to pa values */
+		dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room);
+		dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room);
+
+		/* flush desc with pa dma_addr */
+		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr0);
+		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr1);
+	}
+
+	rxq->rxrearm_start += rxq->rx_free_thresh;
+	if (rxq->rxrearm_start >= rxq->nb_rx_desc)
+		rxq->rxrearm_start = 0;
+
+	rxq->rxrearm_nb -= rxq->rx_free_thresh;
+
+	rx_id = (uint16_t)((rxq->rxrearm_start == 0) ?
+			   (rxq->nb_rx_desc - 1) : (rxq->rxrearm_start - 1));
+
+	PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+		   "rearm_start=%u rearm_nb=%u",
+		   rxq->port_id, rxq->queue_id,
+		   rx_id, rxq->rxrearm_start, rxq->rxrearm_nb);
+
+	/* Update the tail pointer on the NIC */
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+}
+
+static inline void
+desc_to_olflags_v(struct avf_rx_queue *rxq, __m128i descs[4],
+		  struct rte_mbuf **rx_pkts)
+{
+	const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_initializer);
+	__m128i rearm0, rearm1, rearm2, rearm3;
+
+	__m128i vlan0, vlan1, rss, l3_l4e;
+
+	/* mask everything except RSS, flow director and VLAN flags
+	 * bit2 is for VLAN tag, bit11 for flow director indication
+	 * bit13:12 for RSS indication.
+	 */
+	const __m128i rss_vlan_msk = _mm_set_epi32(
+			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804);
+
+	const __m128i cksum_mask = _mm_set_epi32(
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD);
+
+	/* map rss and vlan type to rss hash and vlan flag */
+	const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0,
+			0, 0, 0, 0,
+			0, 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			0, 0, 0, 0);
+
+	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
+			0, 0, 0, 0,
+			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
+			0, 0, PKT_RX_FDIR, 0);
+
+	const __m128i l3_l4e_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
+			/* shift right 1 bit to make sure it does not exceed 255 */
+			(PKT_RX_EIP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
+			 PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_EIP_CKSUM_BAD |
+			 PKT_RX_L4_CKSUM_BAD) >> 1,
+			(PKT_RX_EIP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_EIP_CKSUM_BAD) >> 1,
+			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
+			PKT_RX_IP_CKSUM_BAD >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+
+	vlan0 = _mm_unpackhi_epi32(descs[0], descs[1]);
+	vlan1 = _mm_unpackhi_epi32(descs[2], descs[3]);
+	vlan0 = _mm_unpacklo_epi64(vlan0, vlan1);
+
+	vlan1 = _mm_and_si128(vlan0, rss_vlan_msk);
+	vlan0 = _mm_shuffle_epi8(vlan_flags, vlan1);
+
+	rss = _mm_srli_epi32(vlan1, 11);
+	rss = _mm_shuffle_epi8(rss_flags, rss);
+
+	l3_l4e = _mm_srli_epi32(vlan1, 22);
+	l3_l4e = _mm_shuffle_epi8(l3_l4e_flags, l3_l4e);
+	/* then we shift left 1 bit */
+	l3_l4e = _mm_slli_epi32(l3_l4e, 1);
+	/* we need to mask out the redundant bits */
+	l3_l4e = _mm_and_si128(l3_l4e, cksum_mask);
+
+	vlan0 = _mm_or_si128(vlan0, rss);
+	vlan0 = _mm_or_si128(vlan0, l3_l4e);
+
+	/* At this point, we have the 4 sets of flags in the low 16-bits
+	 * of each 32-bit value in vlan0.
+	 * We want to extract these, and merge them with the mbuf init data
+	 * so we can do a single 16-byte write to the mbuf to set the flags
+	 * and all the other initialization fields. Extracting the
+	 * appropriate flags means that we have to do a shift and blend for
+	 * each mbuf before we do the write.
+	 */
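+	/* Note: blend mask 0x10 selects 16-bit lane 4 (bytes 8-9), i.e. the
+	 * low word of ol_flags, since ol_flags sits 8 bytes after rearm_data;
+	 * the byte shifts below line each packet's flag dword up with that
+	 * lane.
+	 */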
+	rearm0 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vlan0, 8), 0x10);
+	rearm1 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vlan0, 4), 0x10);
+	rearm2 = _mm_blend_epi16(mbuf_init, vlan0, 0x10);
+	rearm3 = _mm_blend_epi16(mbuf_init, _mm_srli_si128(vlan0, 4), 0x10);
+
+	/* write the rearm data and the olflags in one write */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
+			offsetof(struct rte_mbuf, rearm_data) + 8);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) !=
+			RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16));
+	_mm_store_si128((__m128i *)&rx_pkts[0]->rearm_data, rearm0);
+	_mm_store_si128((__m128i *)&rx_pkts[1]->rearm_data, rearm1);
+	_mm_store_si128((__m128i *)&rx_pkts[2]->rearm_data, rearm2);
+	_mm_store_si128((__m128i *)&rx_pkts[3]->rearm_data, rearm3);
+}
+
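+/* The 14-bit packet length lives at bits 38..51 of the Rx descriptor's
+ * qword1; shifting the 32-bit lanes left by PKTLEN_SHIFT moves it to bits
+ * 48..63 of that qword, so a 16-bit blend and shuffle below can copy it
+ * straight into the mbuf pkt_len/data_len fields.
+ */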
+#define PKTLEN_SHIFT     10
+
+static inline void
+desc_to_ptype_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
+{
+	__m128i ptype0 = _mm_unpackhi_epi64(descs[0], descs[1]);
+	__m128i ptype1 = _mm_unpackhi_epi64(descs[2], descs[3]);
+	static const uint32_t type_table[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	ptype0 = _mm_srli_epi64(ptype0, 30);
+	ptype1 = _mm_srli_epi64(ptype1, 30);
+
+	rx_pkts[0]->packet_type = type_table[_mm_extract_epi8(ptype0, 0)];
+	rx_pkts[1]->packet_type = type_table[_mm_extract_epi8(ptype0, 8)];
+	rx_pkts[2]->packet_type = type_table[_mm_extract_epi8(ptype1, 0)];
+	rx_pkts[3]->packet_type = type_table[_mm_extract_epi8(ptype1, 8)];
+}
+
+/* Notice:
+ * - if nb_pkts < AVF_VPMD_DESCS_PER_LOOP, no packet is returned
+ * - if nb_pkts > AVF_VPMD_RX_MAX_BURST, only AVF_VPMD_RX_MAX_BURST
+ *   DD bits are scanned
+ */
+static inline uint16_t
+_recv_raw_pkts_vec(struct avf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
+		   uint16_t nb_pkts, uint8_t *split_packet)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **sw_ring;
+	uint16_t nb_pkts_recd;
+	int pos;
+	uint64_t var;
+	__m128i shuf_msk;
+
+	__m128i crc_adjust = _mm_set_epi16(
+				0, 0, 0,    /* ignore non-length fields */
+				-rxq->crc_len, /* sub crc on data_len */
+				0,          /* ignore high-16bits of pkt_len */
+				-rxq->crc_len, /* sub crc on pkt_len */
+				0, 0            /* ignore pkt_type field */
+			);
+	/* compile-time check the above crc_adjust layout is correct.
+	 * NOTE: the first field (lowest address) is given last in set_epi16
+	 * call above.
+	 */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
+	__m128i dd_check, eop_check;
+
+	/* nb_pkts shall be less than or equal to AVF_VPMD_RX_MAX_BURST */
+	nb_pkts = RTE_MIN(nb_pkts, AVF_VPMD_RX_MAX_BURST);
+
+	/* nb_pkts has to be floor-aligned to AVF_VPMD_DESCS_PER_LOOP */
+	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, AVF_VPMD_DESCS_PER_LOOP);
+
+	/* Just the act of getting into the function from the application is
+	 * going to cost about 7 cycles
+	 */
+	rxdp = rxq->rx_ring + rxq->rx_tail;
+
+	rte_prefetch0(rxdp);
+
+	/* See if we need to rearm the RX queue - gives the prefetch a bit
+	 * of time to act
+	 */
+	if (rxq->rxrearm_nb > rxq->rx_free_thresh)
+		avf_rxq_rearm(rxq);
+
+	/* Before we start moving massive data around, check to see if
+	 * there is actually a packet available
+	 */
+	if (!(rxdp->wb.qword1.status_error_len &
+	      rte_cpu_to_le_32(1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+		return 0;
+
+	/* 4 packets DD mask */
+	dd_check = _mm_set_epi64x(0x0000000100000001LL, 0x0000000100000001LL);
+
+	/* 4 packets EOP mask */
+	eop_check = _mm_set_epi64x(0x0000000200000002LL, 0x0000000200000002LL);
+
+	/* mask to shuffle from desc. to mbuf */
+	shuf_msk = _mm_set_epi8(
+		7, 6, 5, 4,  /* octet 4~7, 32bits rss */
+		3, 2,        /* octet 2~3, low 16 bits vlan_macip */
+		15, 14,      /* octet 15~14, 16 bits data_len */
+		0xFF, 0xFF,  /* skip high 16 bits pkt_len, zero out */
+		15, 14,      /* octet 15~14, low 16 bits pkt_len */
+		0xFF, 0xFF, 0xFF, 0xFF /* pkt_type set as unknown */
+		);
+	/* Compile-time verify the shuffle mask
+	 * NOTE: some field positions already verified above, but duplicated
+	 * here for completeness in case of future modifications.
+	 */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 10);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12);
+
+	/* Cache is empty -> need to scan the buffer rings, but first move
+	 * the next 'n' mbufs into the cache
+	 */
+	sw_ring = &rxq->sw_ring[rxq->rx_tail];
+
+	/* A. load 4 packet descriptors in one loop
+	 * [A*. mask out 4 unused dirty fields in desc]
+	 * B. copy 4 mbuf pointers from sw_ring to rx_pkts
+	 * C. calc the number of DD bits among the 4 packets
+	 * [C*. extract the end-of-packet bit, if requested]
+	 * D. fill info. from desc to mbuf
+	 */
+
+	for (pos = 0, nb_pkts_recd = 0; pos < nb_pkts;
+	     pos += AVF_VPMD_DESCS_PER_LOOP,
+	     rxdp += AVF_VPMD_DESCS_PER_LOOP) {
+		__m128i descs[AVF_VPMD_DESCS_PER_LOOP];
+		__m128i pkt_mb1, pkt_mb2, pkt_mb3, pkt_mb4;
+		__m128i zero, staterr, sterr_tmp1, sterr_tmp2;
+		/* 2 64 bit or 4 32 bit mbuf pointers in one XMM reg. */
+		__m128i mbp1;
+#if defined(RTE_ARCH_X86_64)
+		__m128i mbp2;
+#endif
+
+		/* B.1 load 2 (64 bit) or 4 (32 bit) mbuf points */
+		mbp1 = _mm_loadu_si128((__m128i *)&sw_ring[pos]);
+		/* Read desc statuses backwards to avoid race condition */
+		/* A.1 load 4 pkts desc */
+		descs[3] = _mm_loadu_si128((__m128i *)(rxdp + 3));
+		rte_compiler_barrier();
+
+		/* B.2 copy 2 64 bit or 4 32 bit mbuf point into rx_pkts */
+		_mm_storeu_si128((__m128i *)&rx_pkts[pos], mbp1);
+
+#if defined(RTE_ARCH_X86_64)
+		/* B.1 load 2 64 bit mbuf points */
+		mbp2 = _mm_loadu_si128((__m128i *)&sw_ring[pos + 2]);
+#endif
+
+		descs[2] = _mm_loadu_si128((__m128i *)(rxdp + 2));
+		rte_compiler_barrier();
+		/* B.1 load 2 mbuf point */
+		descs[1] = _mm_loadu_si128((__m128i *)(rxdp + 1));
+		rte_compiler_barrier();
+		descs[0] = _mm_loadu_si128((__m128i *)(rxdp));
+
+#if defined(RTE_ARCH_X86_64)
+		/* B.2 copy 2 mbuf point into rx_pkts  */
+		_mm_storeu_si128((__m128i *)&rx_pkts[pos + 2], mbp2);
+#endif
+
+		if (split_packet) {
+			rte_mbuf_prefetch_part2(rx_pkts[pos]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 1]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 2]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 3]);
+		}
+
+		/* avoid compiler reorder optimization */
+		rte_compiler_barrier();
+
+		/* pkt 3,4 shift the pktlen field to be 16-bit aligned */
+		const __m128i len3 = _mm_slli_epi32(descs[3], PKTLEN_SHIFT);
+		const __m128i len2 = _mm_slli_epi32(descs[2], PKTLEN_SHIFT);
+
+		/* merge the now-aligned packet length fields back in */
+		descs[3] = _mm_blend_epi16(descs[3], len3, 0x80);
+		descs[2] = _mm_blend_epi16(descs[2], len2, 0x80);
+
+		/* D.1 pkt 3,4 convert format from desc to pktmbuf */
+		pkt_mb4 = _mm_shuffle_epi8(descs[3], shuf_msk);
+		pkt_mb3 = _mm_shuffle_epi8(descs[2], shuf_msk);
+
+		/* C.1 4=>2 status err info only */
+		sterr_tmp2 = _mm_unpackhi_epi32(descs[3], descs[2]);
+		sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
+
+		desc_to_olflags_v(rxq, descs, &rx_pkts[pos]);
+
+		/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
+		pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
+		pkt_mb3 = _mm_add_epi16(pkt_mb3, crc_adjust);
+
+		/* pkt 1,2 shift the pktlen field to be 16-bit aligned */
+		const __m128i len1 = _mm_slli_epi32(descs[1], PKTLEN_SHIFT);
+		const __m128i len0 = _mm_slli_epi32(descs[0], PKTLEN_SHIFT);
+
+		/* merge the now-aligned packet length fields back in */
+		descs[1] = _mm_blend_epi16(descs[1], len1, 0x80);
+		descs[0] = _mm_blend_epi16(descs[0], len0, 0x80);
+
+		/* D.1 pkt 1,2 convert format from desc to pktmbuf */
+		pkt_mb2 = _mm_shuffle_epi8(descs[1], shuf_msk);
+		pkt_mb1 = _mm_shuffle_epi8(descs[0], shuf_msk);
+
+		/* C.2 get 4 pkts status err value  */
+		zero = _mm_xor_si128(dd_check, dd_check);
+		staterr = _mm_unpacklo_epi32(sterr_tmp1, sterr_tmp2);
+
+		/* D.3 copy final 3,4 data to rx_pkts */
+		_mm_storeu_si128(
+			(void *)&rx_pkts[pos + 3]->rx_descriptor_fields1,
+			pkt_mb4);
+		_mm_storeu_si128(
+			(void *)&rx_pkts[pos + 2]->rx_descriptor_fields1,
+			pkt_mb3);
+
+		/* D.2 pkt 1,2 remove crc */
+		pkt_mb2 = _mm_add_epi16(pkt_mb2, crc_adjust);
+		pkt_mb1 = _mm_add_epi16(pkt_mb1, crc_adjust);
+
+		/* C* extract and record EOP bit */
+		if (split_packet) {
+			__m128i eop_shuf_mask = _mm_set_epi8(
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0x04, 0x0C, 0x00, 0x08
+					);
+
+			/* and with mask to extract bits, flipping 1-0 */
+			__m128i eop_bits = _mm_andnot_si128(staterr, eop_check);
+			/* the staterr values are not in order, as the count
+			 * of DD bits doesn't care. However, for end of
+			 * packet tracking we do care, so shuffle. This also
+			 * compresses the 32-bit values to 8-bit.
+			 */
+			eop_bits = _mm_shuffle_epi8(eop_bits, eop_shuf_mask);
+			/* store the resulting 32-bit value */
+			*(int *)split_packet = _mm_cvtsi128_si32(eop_bits);
+			split_packet += AVF_VPMD_DESCS_PER_LOOP;
+		}
+
+		/* C.3 calc available number of desc */
+		staterr = _mm_and_si128(staterr, dd_check);
+		staterr = _mm_packs_epi32(staterr, zero);
+
+		/* D.3 copy final 1,2 data to rx_pkts */
+		_mm_storeu_si128(
+			(void *)&rx_pkts[pos + 1]->rx_descriptor_fields1,
+			pkt_mb2);
+		_mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1,
+				 pkt_mb1);
+		desc_to_ptype_v(descs, &rx_pkts[pos]);
+		/* C.4 calc available number of desc */
+		var = __builtin_popcountll(_mm_cvtsi128_si64(staterr));
+		nb_pkts_recd += var;
+		if (likely(var != AVF_VPMD_DESCS_PER_LOOP))
+			break;
+	}
+
+	/* Update our internal tail pointer */
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_pkts_recd);
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail & (rxq->nb_rx_desc - 1));
+	rxq->rxrearm_nb = (uint16_t)(rxq->rxrearm_nb + nb_pkts_recd);
+
+	return nb_pkts_recd;
+}
+
+/* Notice:
+ * - if nb_pkts < AVF_VPMD_DESCS_PER_LOOP, no packet is returned
+ * - if nb_pkts > AVF_VPMD_RX_MAX_BURST, only AVF_VPMD_RX_MAX_BURST
+ *   DD bits are scanned
+ */
+uint16_t
+avf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+		  uint16_t nb_pkts)
+{
+	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
+}
+
+/* vPMD receive routine that reassembles scattered packets
+ * Notice:
+ * - if nb_pkts < AVF_VPMD_DESCS_PER_LOOP, no packet is returned
+ * - if nb_pkts > AVF_VPMD_RX_MAX_BURST, only AVF_VPMD_RX_MAX_BURST
+ *   DD bits are scanned
+ */
+uint16_t
+avf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			    uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	uint8_t split_flags[AVF_VPMD_RX_MAX_BURST] = {0};
+	unsigned int i = 0;
+
+	/* get some new buffers */
+	uint16_t nb_bufs = _recv_raw_pkts_vec(rxq, rx_pkts, nb_pkts,
+					      split_flags);
+	if (nb_bufs == 0)
+		return 0;
+
+	/* happy day case, full burst + no packets to be joined */
+	const uint64_t *split_fl64 = (uint64_t *)split_flags;
+
+	if (!rxq->pkt_first_seg &&
+	    split_fl64[0] == 0 && split_fl64[1] == 0 &&
+	    split_fl64[2] == 0 && split_fl64[3] == 0)
+		return nb_bufs;
+
+	/* reassemble any packets that need reassembly */
+	if (!rxq->pkt_first_seg) {
+		/* find the first split flag, and only reassemble from there */
+		while (i < nb_bufs && !split_flags[i])
+			i++;
+		if (i == nb_bufs)
+			return nb_bufs;
+	}
+	return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
+		&split_flags[i]);
+}
+
+static inline void
+vtx1(volatile struct avf_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)
+{
+	uint64_t high_qw =
+			(AVF_TX_DESC_DTYPE_DATA |
+			 ((uint64_t)flags  << AVF_TXD_QW1_CMD_SHIFT) |
+			 ((uint64_t)pkt->data_len <<
+			  AVF_TXD_QW1_TX_BUF_SZ_SHIFT));
+
+	__m128i descriptor = _mm_set_epi64x(high_qw,
+					    pkt->buf_iova + pkt->data_off);
+	_mm_store_si128((__m128i *)txdp, descriptor);
+}
+
+static inline void
+avf_vtx(volatile struct avf_tx_desc *txdp, struct rte_mbuf **pkt,
+	uint16_t nb_pkts,  uint64_t flags)
+{
+	int i;
+
+	for (i = 0; i < nb_pkts; ++i, ++txdp, ++pkt)
+		vtx1(txdp, *pkt, flags);
+}
+
+uint16_t
+avf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+			 uint16_t nb_pkts)
+{
+	struct avf_tx_queue *txq = (struct avf_tx_queue *)tx_queue;
+	volatile struct avf_tx_desc *txdp;
+	struct avf_tx_entry *txep;
+	uint16_t n, nb_commit, tx_id;
+	uint64_t flags = AVF_TX_DESC_CMD_EOP | 0x04;  /* bit 2 must be set */
+	uint64_t rs = AVF_TX_DESC_CMD_RS | flags;
+	int i;
+
+	/* crossing the rs_thresh boundary is not allowed */
+	nb_pkts = RTE_MIN(nb_pkts, txq->rs_thresh);
+
+	if (txq->nb_free < txq->free_thresh)
+		avf_tx_free_bufs(txq);
+
+	nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+	if (unlikely(nb_pkts == 0))
+		return 0;
+	nb_commit = nb_pkts;
+
+	tx_id = txq->tx_tail;
+	txdp = &txq->tx_ring[tx_id];
+	txep = &txq->sw_ring[tx_id];
+
+	txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+
+	n = (uint16_t)(txq->nb_tx_desc - tx_id);
+	if (nb_commit >= n) {
+		tx_backlog_entry(txep, tx_pkts, n);
+
+		for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
+			vtx1(txdp, *tx_pkts, flags);
+
+		vtx1(txdp, *tx_pkts++, rs);
+
+		nb_commit = (uint16_t)(nb_commit - n);
+
+		tx_id = 0;
+		txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+
+		/* avoid reaching the end of the ring */
+		txdp = &txq->tx_ring[tx_id];
+		txep = &txq->sw_ring[tx_id];
+	}
+
+	tx_backlog_entry(txep, tx_pkts, nb_commit);
+
+	avf_vtx(txdp, tx_pkts, nb_commit, flags);
+
+	tx_id = (uint16_t)(tx_id + nb_commit);
+	if (tx_id > txq->next_rs) {
+		txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)AVF_TX_DESC_CMD_RS) <<
+					 AVF_TXD_QW1_CMD_SHIFT);
+		txq->next_rs =
+			(uint16_t)(txq->next_rs + txq->rs_thresh);
+	}
+
+	txq->tx_tail = tx_id;
+
+	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_pkts=%u",
+		   txq->port_id, txq->queue_id, tx_id, nb_pkts);
+
+	AVF_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
+
+	return nb_pkts;
+}
+
+void __attribute__((cold))
+avf_rx_queue_release_mbufs_sse(struct avf_rx_queue *rxq)
+{
+	_avf_rx_queue_release_mbufs_vec(rxq);
+}
+
+static void __attribute__((cold))
+avf_tx_queue_release_mbufs_sse(struct avf_tx_queue *txq)
+{
+	_avf_tx_queue_release_mbufs_vec(txq);
+}
+
+static const struct avf_rxq_ops sse_vec_rxq_ops = {
+	.release_mbufs = avf_rx_queue_release_mbufs_sse,
+};
+
+static const struct avf_txq_ops sse_vec_txq_ops = {
+	.release_mbufs = avf_tx_queue_release_mbufs_sse,
+};
+
+int __attribute__((cold))
+avf_txq_vec_setup(struct avf_tx_queue *txq)
+{
+	txq->ops = &sse_vec_txq_ops;
+	return 0;
+}
+
+int __attribute__((cold))
+avf_rxq_vec_setup(struct avf_rx_queue *rxq)
+{
+	rxq->ops = &sse_vec_rxq_ops;
+	return avf_rxq_vec_setup_default(rxq);
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
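
The __attribute__((weak)) stubs added to avf_rxtx.c above are what keep the
driver linking when CONFIG_RTE_LIBRTE_AVF_INC_VECTOR is disabled: if
avf_rxtx_vec_sse.c is not built, the weak definitions satisfy the references
from avf_set_rx_function()/avf_set_tx_function(), and since rx_vec_allowed and
tx_vec_allowed are forced to false those stubs are never actually used for I/O.
A minimal, self-contained illustration of the mechanism (names are made up for
the example, not taken from the driver):

	#include <stdio.h>
	#include <stdint.h>

	/* Weak fallback: used only when no other object file in the link
	 * provides a strong definition with the same name.
	 */
	uint16_t __attribute__((weak))
	burst_recv_vec(uint16_t nb_pkts)
	{
		(void)nb_pkts;
		return 0;	/* behaves as "vector path unavailable" */
	}

	int
	main(void)
	{
		/* With only this file linked, the weak stub runs and returns
		 * 0, which a driver would treat as "use the scalar path".
		 */
		printf("vector burst received %u packets\n", burst_recv_vec(32));
		return 0;
	}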

* [dpdk-dev] [PATCH v3 13/15] net/avf: enable bulk allocate Rx func
  2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
                       ` (11 preceding siblings ...)
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 12/15] net/avf: enable sse vector Rx Tx func Wenzhuo Lu
@ 2018-01-04  5:27     ` Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 14/15] net/avf: enable Rx interrupt support Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 15/15] doc: update doc for avf driver Wenzhuo Lu
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-04  5:27 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/avf/avf.h        |   1 +
 drivers/net/avf/avf_ethdev.c |   1 +
 drivers/net/avf/avf_rxtx.c   | 300 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h   |   6 +
 4 files changed, 308 insertions(+)

diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index b79bc5a..ea0f7d8 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -120,6 +120,7 @@ struct avf_adapter {
 	struct rte_eth_dev *eth_dev;
 	struct avf_info vf;
 
+	bool rx_bulk_alloc_allowed;
 	/* For vector PMD */
 	bool rx_vec_allowed;
 	bool tx_vec_allowed;
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 692055f..dba6ea8 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -121,6 +121,7 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(ad);
 	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
 
+	ad->rx_bulk_alloc_allowed = true;
 #ifdef RTE_LIBRTE_AVF_INC_VECTOR
 	/* Initialize to TRUE. If any of Rx queues doesn't meet the
 	 * vector Rx/Tx preconditions, it will be reset.
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index b542532..e0c4583 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -120,6 +120,27 @@
 }
 #endif
 
+static inline bool
+check_rx_bulk_allow(struct avf_rx_queue *rxq)
+{
+	int ret = TRUE;
+
+	if (!(rxq->rx_free_thresh >= AVF_RX_MAX_BURST)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "AVF_RX_MAX_BURST=%d",
+			     rxq->rx_free_thresh, AVF_RX_MAX_BURST);
+		ret = FALSE;
+	} else if (rxq->nb_rx_desc % rxq->rx_free_thresh != 0) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->nb_rx_desc=%d, "
+			     "rxq->rx_free_thresh=%d",
+			     rxq->nb_rx_desc, rxq->rx_free_thresh);
+		ret = FALSE;
+	}
+	return ret;
+}
+
 static inline void
 reset_rx_queue(struct avf_rx_queue *rxq)
 {
@@ -138,6 +159,11 @@
 	for (i = 0; i < AVF_RX_MAX_BURST; i++)
 		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
 
+	/* for rx bulk */
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
 	rxq->rx_tail = 0;
 	rxq->nb_rx_hold = 0;
 	rxq->pkt_first_seg = NULL;
@@ -233,6 +259,17 @@
 			rxq->sw_ring[i] = NULL;
 		}
 	}
+
+	/* for rx bulk */
+	if (rxq->rx_nb_avail == 0)
+		return;
+	for (i = 0; i < rxq->rx_nb_avail; i++) {
+		struct rte_mbuf *mbuf;
+
+		mbuf = rxq->rx_stage[rxq->rx_next_avail + i];
+		rte_pktmbuf_free_seg(mbuf);
+	}
+	rxq->rx_nb_avail = 0;
 }
 
 static inline void
@@ -363,6 +400,19 @@
 	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
 	rxq->ops = &def_rxq_ops;
 
+	if (check_rx_bulk_allow(rxq) == TRUE) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function will be "
+			     "used on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+	} else {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "not satisfied, the bulk Rx path will not be used "
+			     "on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+		ad->rx_bulk_alloc_allowed = false;
+	}
+
 #ifdef RTE_LIBRTE_AVF_INC_VECTOR
 	if (check_rx_vec_allow(rxq) == FALSE)
 		ad->rx_vec_allowed = false;
@@ -1036,6 +1086,252 @@
 	return nb_rx;
 }
 
+#define AVF_LOOK_AHEAD 8
+static inline int
+avf_rx_scan_hw_ring(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxep;
+	struct rte_mbuf *mb;
+	uint16_t pkt_len;
+	uint64_t qword1;
+	uint32_t rx_status;
+	int32_t s[AVF_LOOK_AHEAD], nb_dd;
+	int32_t i, j, nb_rx = 0;
+	uint64_t pkt_flags;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	rxep = &rxq->sw_ring[rxq->rx_tail];
+
+	qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+	rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+		    AVF_RXD_QW1_STATUS_SHIFT;
+
+	/* Make sure there is at least 1 packet to receive */
+	if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+		return 0;
+
+	/* Scan LOOK_AHEAD descriptors at a time to determine which
+	 * descriptors reference packets that are ready to be received.
+	 */
+	for (i = 0; i < AVF_RX_MAX_BURST; i += AVF_LOOK_AHEAD,
+	     rxdp += AVF_LOOK_AHEAD, rxep += AVF_LOOK_AHEAD) {
+		/* Read desc statuses backwards to avoid race condition */
+		for (j = AVF_LOOK_AHEAD - 1; j >= 0; j--) {
+			qword1 = rte_le_to_cpu_64(
+				rxdp[j].wb.qword1.status_error_len);
+			s[j] = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			       AVF_RXD_QW1_STATUS_SHIFT;
+		}
+
+		rte_smp_rmb();
+
+		/* Compute how many status bits were set */
+		for (j = 0, nb_dd = 0; j < AVF_LOOK_AHEAD; j++)
+			nb_dd += s[j] & (1 << AVF_RX_DESC_STATUS_DD_SHIFT);
+
+		nb_rx += nb_dd;
+
+		/* Translate descriptor info to mbuf parameters */
+		for (j = 0; j < nb_dd; j++) {
+			AVF_DUMP_RX_DESC(rxq, &rxdp[j],
+					 rxq->rx_tail + i * AVF_LOOK_AHEAD + j);
+
+			mb = rxep[j];
+			qword1 = rte_le_to_cpu_64
+					(rxdp[j].wb.qword1.status_error_len);
+			pkt_len = ((qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				  AVF_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
+			mb->data_len = pkt_len;
+			mb->pkt_len = pkt_len;
+			mb->ol_flags = 0;
+			avf_rxd_to_vlan_tci(mb, &rxdp[j]);
+			pkt_flags = avf_rxd_to_pkt_flags(qword1);
+			mb->packet_type =
+				ptype_tbl[(uint8_t)((qword1 &
+				AVF_RXD_QW1_PTYPE_MASK) >>
+				AVF_RXD_QW1_PTYPE_SHIFT)];
+
+			if (pkt_flags & PKT_RX_RSS_HASH)
+				mb->hash.rss = rte_le_to_cpu_32(
+					rxdp[j].wb.qword0.hi_dword.rss);
+
+			mb->ol_flags |= pkt_flags;
+		}
+
+		for (j = 0; j < AVF_LOOK_AHEAD; j++)
+			rxq->rx_stage[i + j] = rxep[j];
+
+		if (nb_dd != AVF_LOOK_AHEAD)
+			break;
+	}
+
+	/* Clear software ring entries */
+	for (i = 0; i < nb_rx; i++)
+		rxq->sw_ring[rxq->rx_tail + i] = NULL;
+
+	return nb_rx;
+}
+
+static inline uint16_t
+avf_rx_fill_from_stage(struct avf_rx_queue *rxq,
+		       struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts)
+{
+	uint16_t i;
+	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
+
+	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
+
+	for (i = 0; i < nb_pkts; i++)
+		rx_pkts[i] = stage[i];
+
+	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
+	rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+
+	return nb_pkts;
+}
+
+static inline int
+avf_rx_alloc_bufs(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxep;
+	struct rte_mbuf *mb;
+	uint16_t alloc_idx, i;
+	uint64_t dma_addr;
+	int diag;
+
+	/* Allocate buffers in bulk */
+	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
+				(rxq->rx_free_thresh - 1));
+	rxep = &rxq->sw_ring[alloc_idx];
+	diag = rte_mempool_get_bulk(rxq->mp, (void *)rxep,
+				    rxq->rx_free_thresh);
+	if (unlikely(diag != 0)) {
+		PMD_RX_LOG(ERR, "Failed to get mbufs in bulk");
+		return -ENOMEM;
+	}
+
+	rxdp = &rxq->rx_ring[alloc_idx];
+	for (i = 0; i < rxq->rx_free_thresh; i++) {
+		if (likely(i < (rxq->rx_free_thresh - 1)))
+			/* Prefetch next mbuf */
+			rte_prefetch0(rxep[i + 1]);
+
+		mb = rxep[i];
+		rte_mbuf_refcnt_set(mb, 1);
+		mb->next = NULL;
+		mb->data_off = RTE_PKTMBUF_HEADROOM;
+		mb->nb_segs = 1;
+		mb->port = rxq->port_id;
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
+		rxdp[i].read.hdr_addr = 0;
+		rxdp[i].read.pkt_addr = dma_addr;
+	}
+
+	/* Update rx tail register */
+	rte_wmb();
+	AVF_PCI_REG_WRITE_RELAXED(rxq->qrx_tail, rxq->rx_free_trigger);
+
+	rxq->rx_free_trigger =
+		(uint16_t)(rxq->rx_free_trigger + rxq->rx_free_thresh);
+	if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
+		rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	return 0;
+}
+
+static inline uint16_t
+rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = (struct avf_rx_queue *)rx_queue;
+	struct rte_eth_dev *dev;
+	uint16_t nb_rx = 0;
+
+	if (!nb_pkts)
+		return 0;
+
+	if (rxq->rx_nb_avail)
+		return avf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	nb_rx = (uint16_t)avf_rx_scan_hw_ring(rxq);
+	rxq->rx_next_avail = 0;
+	rxq->rx_nb_avail = nb_rx;
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+
+	if (rxq->rx_tail > rxq->rx_free_trigger) {
+		if (avf_rx_alloc_bufs(rxq) != 0) {
+			uint16_t i, j;
+
+			/* TODO: count rx_mbuf_alloc_failed here */
+
+			rxq->rx_nb_avail = 0;
+			rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
+			for (i = 0, j = rxq->rx_tail; i < nb_rx; i++, j++)
+				rxq->sw_ring[j] = rxq->rx_stage[i];
+
+			return 0;
+		}
+	}
+
+	if (rxq->rx_tail >= rxq->nb_rx_desc)
+		rxq->rx_tail = 0;
+
+	PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u, nb_rx=%u",
+		   rxq->port_id, rxq->queue_id,
+		   rxq->rx_tail, nb_rx);
+
+	if (rxq->rx_nb_avail)
+		return avf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	return 0;
+}
+
+static uint16_t
+avf_recv_pkts_bulk_alloc(void *rx_queue,
+			 struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	uint16_t nb_rx = 0, n, count;
+
+	if (unlikely(nb_pkts == 0))
+		return 0;
+
+	if (likely(nb_pkts <= AVF_RX_MAX_BURST))
+		return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		n = RTE_MIN(nb_pkts, AVF_RX_MAX_BURST);
+		count = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+		nb_rx = (uint16_t)(nb_rx + count);
+		nb_pkts = (uint16_t)(nb_pkts - count);
+		if (count < n)
+			break;
+	}
+
+	return nb_rx;
+}
+
 static inline int
 avf_xmit_cleanup(struct avf_tx_queue *txq)
 {
@@ -1467,6 +1763,10 @@
 		PMD_DRV_LOG(DEBUG, "Using a Scattered Rx callback (port=%d).",
 			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_scattered_pkts;
+	} else if (adapter->rx_bulk_alloc_allowed) {
+		PMD_DRV_LOG(DEBUG, "Using bulk Rx callback (port=%d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = avf_recv_pkts_bulk_alloc;
 	} else {
 		PMD_DRV_LOG(DEBUG, "Using Basic Rx callback (port=%d).",
 			    dev->data->port_id);
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index 82fd801..d1701cd 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -83,6 +83,12 @@ struct avf_rx_queue {
 	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
 	uint64_t mbuf_initializer; /* value to init mbufs */
 
+	/* for rx bulk */
+	uint16_t rx_nb_avail;      /* number of staged packets ready */
+	uint16_t rx_next_avail;    /* index of next staged packets */
+	uint16_t rx_free_trigger;  /* triggers rx buffer allocation */
+	struct rte_mbuf *rx_stage[AVF_RX_MAX_BURST * 2]; /* store mbuf */
+
 	uint16_t port_id;        /* device port ID */
 	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
 	uint16_t queue_id;      /* Rx queue index */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v3 14/15] net/avf: enable Rx interrupt support
  2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
                       ` (12 preceding siblings ...)
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 13/15] net/avf: enable bulk allocate Rx func Wenzhuo Lu
@ 2018-01-04  5:27     ` Wenzhuo Lu
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 15/15] doc: update doc for avf driver Wenzhuo Lu
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-04  5:27 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini     |   1 +
 doc/guides/nics/features/avf_vec.ini |   1 +
 drivers/net/avf/avf_ethdev.c         | 204 ++++++++++++++++++++++++++++-------
 3 files changed, 170 insertions(+), 36 deletions(-)
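
Below is a minimal sketch of how an application is expected to exercise the
new per-queue Rx interrupt ops through the generic ethdev/EAL API, much like
the existing l3fwd-power example; the port and queue ids are illustrative
and return values are left unchecked:

#include <rte_ethdev.h>
#include <rte_interrupts.h>

#define PORT_ID  0	/* hypothetical port id */
#define QUEUE_ID 0	/* hypothetical queue id */

static void
wait_for_rx(void)
{
	struct rte_epoll_event ev;

	/* Register the Rx queue interrupt with the per-thread epoll fd. */
	rte_eth_dev_rx_intr_ctl_q(PORT_ID, QUEUE_ID, RTE_EPOLL_PER_THREAD,
				  RTE_INTR_EVENT_ADD, NULL);

	/* Arm the queue interrupt (this reaches avf_dev_rx_queue_intr_enable),
	 * sleep until traffic arrives, then disarm and resume polling.
	 */
	rte_eth_dev_rx_intr_enable(PORT_ID, QUEUE_ID);
	rte_epoll_wait(RTE_EPOLL_PER_THREAD, &ev, 1, -1);
	rte_eth_dev_rx_intr_disable(PORT_ID, QUEUE_ID);
}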

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index da4d81b..ccb9edd 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
diff --git a/doc/guides/nics/features/avf_vec.ini b/doc/guides/nics/features/avf_vec.ini
index 45dd5e5..8924994 100644
--- a/doc/guides/nics/features/avf_vec.ini
+++ b/doc/guides/nics/features/avf_vec.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index dba6ea8..14bdac8 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -67,9 +67,14 @@ static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 static int avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
+static int avf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev,
+					uint16_t queue_id);
+static int avf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev,
+					 uint16_t queue_id);
 
 int avf_logtype_init;
 int avf_logtype_driver;
+
 static const struct rte_pci_id pci_id_avf_map[] = {
 	{ RTE_PCI_DEVICE(AVF_INTEL_VENDOR_ID, AVF_DEV_ID_ADAPTIVE_VF) },
 	{ .vendor_id = 0, /* sentinel */ },
@@ -111,6 +116,8 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.rx_descriptor_status       = avf_dev_rx_desc_status,
 	.tx_descriptor_status       = avf_dev_tx_desc_status,
 	.mtu_set                    = avf_dev_mtu_set,
+	.rx_queue_intr_enable       = avf_dev_rx_queue_intr_enable,
+	.rx_queue_intr_disable      = avf_dev_rx_queue_intr_disable,
 };
 
 static int
@@ -275,6 +282,99 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static int avf_config_rx_queues_irqs(struct rte_eth_dev *dev,
+				     struct rte_intr_handle *intr_handle)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	uint16_t interval, i;
+	int vec;
+
+	if (dev->data->dev_conf.intr_conf.rxq != 0) {
+		if (rte_intr_efd_enable(intr_handle, dev->data->nb_rx_queues))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+			rte_zmalloc("intr_vec",
+				    dev->data->nb_rx_queues * sizeof(int), 0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
+				    dev->data->nb_rx_queues);
+			return -1;
+		}
+	}
+
+	if (!dev->data->dev_conf.intr_conf.rxq) {
+		/* Rx interrupt disabled, map interrupt only for writeback */
+		vf->nb_msix = 1;
+		if (vf->vf_res->vf_offload_flags &
+		    VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+			/* If WB_ON_ITR is supported, enable it */
+			vf->msix_base = AVF_RX_VEC_START;
+			AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
+				      AVFINT_DYN_CTLN1_ITR_INDX_MASK |
+				      AVFINT_DYN_CTLN1_WB_ON_ITR_MASK);
+		} else {
+			/* If no WB_ON_ITR offload flags, need to set
+			 * interrupt for descriptor write back.
+			 */
+			vf->msix_base = AVF_MISC_VEC_ID;
+
+			/* set ITR to max */
+			interval = avf_calc_itr_interval(
+					AVF_QUEUE_ITR_INTERVAL_MAX);
+			AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+				      AVFINT_DYN_CTL01_INTENA_MASK |
+				      (AVF_ITR_INDEX_DEFAULT <<
+				       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+				      (interval <<
+				       AVFINT_DYN_CTL01_INTERVAL_SHIFT));
+		}
+		AVF_WRITE_FLUSH(hw);
+		/* map all queues to the same interrupt */
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			vf->rxq_map[0] |= 1 << i;
+	} else {
+		if (!rte_intr_allow_others(intr_handle)) {
+			vf->nb_msix = 1;
+			vf->msix_base = AVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				vf->rxq_map[0] |= 1 << i;
+				intr_handle->intr_vec[i] = AVF_MISC_VEC_ID;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "vector 0 is mapped to all Rx queues");
+		} else {
+			/* If Rx interrupts are required and multiple vectors
+			 * are available, queue vectors start from 1.
+			 */
+			vf->nb_msix = RTE_MIN(vf->vf_res->max_vectors,
+					      intr_handle->nb_efd);
+			vf->msix_base = AVF_RX_VEC_START;
+			vec = AVF_RX_VEC_START;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				vf->rxq_map[vec] |= 1 << i;
+				intr_handle->intr_vec[i] = vec++;
+				if (vec >= vf->nb_msix)
+					vec = AVF_RX_VEC_START;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "%u vectors are mapped to %u Rx queues",
+				    vf->nb_msix, dev->data->nb_rx_queues);
+		}
+	}
+
+	if (avf_config_irq_map(adapter)) {
+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+		return -1;
+	}
+	return 0;
+}
+
 static int
 avf_start_queues(struct rte_eth_dev *dev)
 {
@@ -314,8 +414,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = dev->intr_handle;
-	uint16_t interval;
-	int i;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -325,8 +423,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
 				      dev->data->nb_tx_queues);
 
-	/* TODO: Rx interrupt */
-
 	if (avf_init_queues(dev) != 0) {
 		PMD_DRV_LOG(ERR, "failed to do Queue init");
 		return -1;
@@ -344,36 +440,15 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 		goto err_queue;
 	}
 
-	/* Map interrupt for writeback */
-	vf->nb_msix = 1;
-	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
-		/* If WB_ON_ITR supports, enable it */
-		vf->msix_base = AVF_RX_VEC_START;
-		AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
-			      AVFINT_DYN_CTLN1_ITR_INDX_MASK |
-			      AVFINT_DYN_CTLN1_WB_ON_ITR_MASK);
-	} else {
-		/* If no WB_ON_ITR offload flags, need to set interrupt for
-		 * descriptor write back.
-		 */
-		vf->msix_base = AVF_MISC_VEC_ID;
-
-		/* set ITR to max */
-		interval = avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
-		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
-			      AVFINT_DYN_CTL01_INTENA_MASK |
-			      (AVF_ITR_INDEX_DEFAULT <<
-			       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
-			      (interval << AVFINT_DYN_CTL01_INTERVAL_SHIFT));
-	}
-	AVF_WRITE_FLUSH(hw);
-	/* map all queues to the same interrupt */
-	for (i = 0; i < dev->data->nb_rx_queues; i++)
-		vf->rxq_map[0] |= 1 << i;
-	if (avf_config_irq_map(adapter)) {
-		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+	if (avf_config_rx_queues_irqs(dev, intr_handle) != 0) {
+		PMD_DRV_LOG(ERR, "configure irq failed");
 		goto err_queue;
 	}
+	/* Re-enable interrupts, because the efd assignment may have changed */
+	if (dev->data->dev_conf.intr_conf.rxq != 0) {
+		rte_intr_disable(intr_handle);
+		rte_intr_enable(intr_handle);
+	}
 
 	/* Set all mac addrs */
 	avf_add_del_all_mac_addr(adapter, TRUE);
@@ -383,7 +458,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 		goto err_mac;
 	}
 
-	/* TODO: enable interrupt for RX interrupt */
 	return 0;
 
 err_mac:
@@ -399,6 +473,8 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_adapter *adapter =
 		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
 	int ret, i;
 
 	PMD_INIT_FUNC_TRACE();
@@ -408,9 +484,13 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 
 	avf_stop_queues(dev);
 
-	/*TODO: Disable the interrupt for Rx*/
-
-	/* TODO: Rx interrupt vector mapping free */
+	/* Disable the interrupt for Rx */
+	rte_intr_efd_disable(intr_handle);
+	/* Rx interrupt vector mapping free */
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
 
 	/* remove all mac addrs */
 	avf_add_del_all_mac_addr(adapter, FALSE);
@@ -913,6 +993,58 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 }
 
 static int
+avf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	uint16_t msix_intr;
+
+	msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+	if (msix_intr == AVF_MISC_VEC_ID) {
+		PMD_DRV_LOG(INFO, "MISC is also enabled for control");
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+			      AVFINT_DYN_CTL01_INTENA_MASK |
+			      AVFINT_DYN_CTL01_ITR_INDX_MASK);
+	} else {
+		AVF_WRITE_REG(hw,
+			      AVFINT_DYN_CTLN1(msix_intr - AVF_RX_VEC_START),
+			      AVFINT_DYN_CTLN1_INTENA_MASK |
+			      AVFINT_DYN_CTLN1_ITR_INDX_MASK);
+	}
+
+	AVF_WRITE_FLUSH(hw);
+
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	return 0;
+}
+
+static int
+avf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t msix_intr;
+
+	msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+	if (msix_intr == AVF_MISC_VEC_ID) {
+		PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it");
+		return -EIO;
+	}
+
+	AVF_WRITE_REG(hw,
+		      AVFINT_DYN_CTLN1(msix_intr - AVF_RX_VEC_START),
+		      0);
+
+	AVF_WRITE_FLUSH(hw);
+	return 0;
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v3 15/15] doc: update doc for avf driver
  2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
                       ` (13 preceding siblings ...)
  2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 14/15] net/avf: enable Rx interrupt support Wenzhuo Lu
@ 2018-01-04  5:27     ` Wenzhuo Lu
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-04  5:27 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/intel_vf.rst           | 16 ++++++++++++++--
 doc/guides/rel_notes/release_18_02.rst | 16 ++++++++++++++++
 2 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 1e83bf6..3adb684 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -28,8 +28,8 @@
     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-I40E/IXGBE/IGB Virtual Function Driver
-======================================
+Intel Virtual Function Driver
+=============================
 
 Supported Intel® Ethernet Controllers (see the *DPDK Release Notes* for details)
 support the following modes of operation in a virtualized environment:
@@ -93,6 +93,18 @@ and the Physical Function operates on the global resources on behalf of the Virt
 For this out-of-band communication, an SR-IOV enabled NIC provides a memory buffer for each Virtual Function,
 which is called a "Mailbox".
 
+Intel® Ethernet Adaptive Virtual Function
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Adaptive Virtual Function (AVF) is an SR-IOV Virtual Function with the same device id (8086:1889) on different Intel Ethernet Controllers.
+The AVF driver is a VF driver that supports all future Intel devices without requiring a VM update. Because it is an adaptive VF driver,
+every new drop of the VF driver can add more advanced features that are turned on in the VM if the underlying HW device supports those
+advanced features, in a device-agnostic way and without ever compromising the base functionality. AVF provides a generic hardware
+interface, and the interface between the AVF driver and a compliant PF driver is specified.
+
+Intel products starting from the Ethernet Controller 710 Series support the Adaptive Virtual Function.
+
+Virtual Functions are generated in the usual way, and the resources assigned to a VF depend on the NIC infrastructure.
+
 The PCIE host-interface of Intel Ethernet Switch FM10000 Series VF infrastructure
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/doc/guides/rel_notes/release_18_02.rst b/doc/guides/rel_notes/release_18_02.rst
index 24b67bb..0672b0e 100644
--- a/doc/guides/rel_notes/release_18_02.rst
+++ b/doc/guides/rel_notes/release_18_02.rst
@@ -41,6 +41,22 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+   * **Add AVF (Adaptive Virtual Function) net PMD.**
+
+     A new net PMD has been added, which supports Intel® Ethernet Adaptive
+     Virtual Function (AVF) with the following features:
+
+     * Basic Rx/Tx burst
+     * SSE vectorized Rx/Tx burst
+     * Promiscuous mode
+     * MAC/VLAN offload
+     * Checksum offload
+     * TSO offload
+     * Jumbo frame and MTU setting
+     * RSS configuration
+     * Statistics
+     * Rx/Tx descriptor status
+     * Link status update/event
 
 API Changes
 -----------
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v4 00/15] add new AVF PMD
  2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
                     ` (15 preceding siblings ...)
  2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
@ 2018-01-05  8:21   ` Wenzhuo Lu
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 01/15] net/avf/base: add base code for avf PMD Wenzhuo Lu
                       ` (15 more replies)
  16 siblings, 16 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-05  8:21 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

The Adaptive Virtual Function (AVF) driver is a VF driver that supports all future Intel devices without requiring a VM update.
It provides basic high speed connectivity. Because it is an adaptive VF driver, every new drop of the VF driver can add more advanced features that are turned on in the VM if the underlying HW device supports them, most importantly in a device-agnostic way and without ever compromising the base functionality. All AVF interfaces need to follow the AVF spec, and an AVF compliant interface is supported starting from the Intel® Ethernet Controller 710 Series.

This patch set adds the AVF PMD, supporting:
 - Device initialization
 - Queue setup and Device start
 - Basic Rx and Tx.
 - MAC address offload feature
 - Vlan offload feature
 - RSS offload feature
 - Vectored Rx and Tx func
 - Bulk allocate Rx func
 - Rx interrupt support
 - Statistics query
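
The features above are exercised through the generic ethdev API. As a usage
illustration only (hypothetical port id, single queue pair, error handling
omitted), a minimal forwarding loop on an AVF port could look like:

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define PORT_ID  0	/* hypothetical AVF port */
#define NB_DESC  512
#define BURST_SZ 32

static void
run_port(struct rte_mempool *mp)
{
	static const struct rte_eth_conf conf = { 0 };
	struct rte_mbuf *pkts[BURST_SZ];
	uint16_t nb;

	rte_eth_dev_configure(PORT_ID, 1, 1, &conf);
	rte_eth_rx_queue_setup(PORT_ID, 0, NB_DESC, rte_socket_id(), NULL, mp);
	rte_eth_tx_queue_setup(PORT_ID, 0, NB_DESC, rte_socket_id(), NULL);
	rte_eth_dev_start(PORT_ID);

	for (;;) {
		/* Rx lands in whichever avf Rx callback was selected at
		 * start time (basic, scattered, bulk-allocate or vector).
		 */
		nb = rte_eth_rx_burst(PORT_ID, 0, pkts, BURST_SZ);
		if (nb)
			rte_eth_tx_burst(PORT_ID, 0, pkts, nb);
	}
}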

v4:
 - update the base code to the newest.

v3:
 - change the license announcement.
 - update the related documentation.
 - resolve checkpatch errors, warnings and other checks.
 - address the comments from the community.

v2:
 - rebase to 17.11
 - add vectored Rx and Tx func
 - add bulk allocate Rx func
 - add Rx interrupt support
 - add statistics query
 - fix coding style issue
 - remove extra compile flags in Makefile
 - add doc to list avf PMD features
 - fix lut setting when rss is disabled
 - fix log init missing
 - remove rx_descriptor_done

Jingjing Wu (13):
  net/avf/base: add base code for avf PMD
  net/avf: initialization of avf PMD
  net/avf: enable queue and device
  net/avf: enable link status update
  net/avf: support stats
  net/avf: enable ops for MAC VLAN offload
  net/avf: enable ops for RSS setting
  net/avf: enable ops for MTU setting
  net/avf: enable ops to check queue info and status
  net/i40e: support AVF basic interface
  net/avf: enable sse vector Rx Tx func
  net/avf: enable Rx interrupt support
  doc: update doc for avf driver

Wenzhuo Lu (2):
  net/avf: enable basic Rx Tx func
  net/avf: enable bulk allocate Rx func

 MAINTAINERS                             |    6 +
 config/common_base                      |   10 +
 doc/guides/nics/features/avf.ini        |   37 +
 doc/guides/nics/features/avf_vec.ini    |   37 +
 doc/guides/nics/intel_vf.rst            |   16 +-
 doc/guides/rel_notes/release_18_02.rst  |   16 +
 drivers/net/Makefile                    |    1 +
 drivers/net/avf/Makefile                |   36 +
 drivers/net/avf/avf.h                   |  219 +++
 drivers/net/avf/avf_ethdev.c            | 1451 ++++++++++++++++
 drivers/net/avf/avf_log.h               |   44 +
 drivers/net/avf/avf_rxtx.c              | 1959 +++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h              |  260 +++
 drivers/net/avf/avf_rxtx_vec_common.h   |  210 +++
 drivers/net/avf/avf_rxtx_vec_sse.c      |  656 +++++++
 drivers/net/avf/avf_vchnl.c             |  812 +++++++++
 drivers/net/avf/base/README             |   19 +
 drivers/net/avf/base/avf_adminq.c       | 1010 +++++++++++
 drivers/net/avf/base/avf_adminq.h       |  166 ++
 drivers/net/avf/base/avf_adminq_cmd.h   | 2842 +++++++++++++++++++++++++++++++
 drivers/net/avf/base/avf_alloc.h        |   65 +
 drivers/net/avf/base/avf_common.c       | 1845 ++++++++++++++++++++
 drivers/net/avf/base/avf_devids.h       |   43 +
 drivers/net/avf/base/avf_hmc.h          |  245 +++
 drivers/net/avf/base/avf_lan_hmc.h      |  200 +++
 drivers/net/avf/base/avf_osdep.h        |  164 ++
 drivers/net/avf/base/avf_prototype.h    |  206 +++
 drivers/net/avf/base/avf_register.h     |  346 ++++
 drivers/net/avf/base/avf_status.h       |  108 ++
 drivers/net/avf/base/avf_type.h         | 2024 ++++++++++++++++++++++
 drivers/net/avf/base/virtchnl.h         |  787 +++++++++
 drivers/net/avf/rte_pmd_avf_version.map |    4 +
 drivers/net/i40e/i40e_ethdev.c          |   69 +-
 drivers/net/i40e/i40e_ethdev.h          |    5 +
 drivers/net/i40e/i40e_pf.c              |  140 +-
 drivers/net/i40e/i40e_pf.h              |    6 +
 mk/rte.app.mk                           |    1 +
 37 files changed, 16038 insertions(+), 27 deletions(-)
 create mode 100644 doc/guides/nics/features/avf.ini
 create mode 100644 doc/guides/nics/features/avf_vec.ini
 create mode 100644 drivers/net/avf/Makefile
 create mode 100644 drivers/net/avf/avf.h
 create mode 100644 drivers/net/avf/avf_ethdev.c
 create mode 100644 drivers/net/avf/avf_log.h
 create mode 100644 drivers/net/avf/avf_rxtx.c
 create mode 100644 drivers/net/avf/avf_rxtx.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_common.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_sse.c
 create mode 100644 drivers/net/avf/avf_vchnl.c
 create mode 100644 drivers/net/avf/base/README
 create mode 100644 drivers/net/avf/base/avf_adminq.c
 create mode 100644 drivers/net/avf/base/avf_adminq.h
 create mode 100644 drivers/net/avf/base/avf_adminq_cmd.h
 create mode 100644 drivers/net/avf/base/avf_alloc.h
 create mode 100644 drivers/net/avf/base/avf_common.c
 create mode 100644 drivers/net/avf/base/avf_devids.h
 create mode 100644 drivers/net/avf/base/avf_hmc.h
 create mode 100644 drivers/net/avf/base/avf_lan_hmc.h
 create mode 100644 drivers/net/avf/base/avf_osdep.h
 create mode 100644 drivers/net/avf/base/avf_prototype.h
 create mode 100644 drivers/net/avf/base/avf_register.h
 create mode 100644 drivers/net/avf/base/avf_status.h
 create mode 100644 drivers/net/avf/base/avf_type.h
 create mode 100644 drivers/net/avf/base/virtchnl.h
 create mode 100644 drivers/net/avf/rte_pmd_avf_version.map

-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v4 01/15] net/avf/base: add base code for avf PMD
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
@ 2018-01-05  8:21     ` Wenzhuo Lu
  2018-01-05 20:25       ` Stephen Hemminger
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 02/15] net/avf: initialization of " Wenzhuo Lu
                       ` (14 subsequent siblings)
  15 siblings, 1 reply; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-05  8:21 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu, Wenzhuo Lu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 MAINTAINERS                           |    5 +
 drivers/net/avf/avf_log.h             |   23 +
 drivers/net/avf/base/README           |   19 +
 drivers/net/avf/base/avf_adminq.c     | 1010 ++++++++++++
 drivers/net/avf/base/avf_adminq.h     |  166 ++
 drivers/net/avf/base/avf_adminq_cmd.h | 2842 +++++++++++++++++++++++++++++++++
 drivers/net/avf/base/avf_alloc.h      |   65 +
 drivers/net/avf/base/avf_common.c     | 1845 +++++++++++++++++++++
 drivers/net/avf/base/avf_devids.h     |   43 +
 drivers/net/avf/base/avf_hmc.h        |  245 +++
 drivers/net/avf/base/avf_lan_hmc.h    |  200 +++
 drivers/net/avf/base/avf_osdep.h      |  164 ++
 drivers/net/avf/base/avf_prototype.h  |  206 +++
 drivers/net/avf/base/avf_register.h   |  346 ++++
 drivers/net/avf/base/avf_status.h     |  108 ++
 drivers/net/avf/base/avf_type.h       | 2024 +++++++++++++++++++++++
 drivers/net/avf/base/virtchnl.h       |  787 +++++++++
 17 files changed, 10098 insertions(+)
 create mode 100644 drivers/net/avf/avf_log.h
 create mode 100644 drivers/net/avf/base/README
 create mode 100644 drivers/net/avf/base/avf_adminq.c
 create mode 100644 drivers/net/avf/base/avf_adminq.h
 create mode 100644 drivers/net/avf/base/avf_adminq_cmd.h
 create mode 100644 drivers/net/avf/base/avf_alloc.h
 create mode 100644 drivers/net/avf/base/avf_common.c
 create mode 100644 drivers/net/avf/base/avf_devids.h
 create mode 100644 drivers/net/avf/base/avf_hmc.h
 create mode 100644 drivers/net/avf/base/avf_lan_hmc.h
 create mode 100644 drivers/net/avf/base/avf_osdep.h
 create mode 100644 drivers/net/avf/base/avf_prototype.h
 create mode 100644 drivers/net/avf/base/avf_register.h
 create mode 100644 drivers/net/avf/base/avf_status.h
 create mode 100644 drivers/net/avf/base/avf_type.h
 create mode 100644 drivers/net/avf/base/virtchnl.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 9a2c2fb..b8b5e61 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -425,6 +425,11 @@ M: Xiao Wang <xiao.w.wang@intel.com>
 F: drivers/net/fm10k/
 F: doc/guides/nics/features/fm10k*.ini
 
+Intel avf
+M: Jingjing Wu <jingjing.wu@intel.com>
+M: Wenzhuo Lu <wenzhuo.lu@intel.com>
+F: drivers/net/avf/
+
 Mellanox mlx4
 M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
 F: drivers/net/mlx4/
diff --git a/drivers/net/avf/avf_log.h b/drivers/net/avf/avf_log.h
new file mode 100644
index 0000000..e3f106b
--- /dev/null
+++ b/drivers/net/avf/avf_log.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_LOG_H_
+#define _AVF_LOG_H_
+
+extern int avf_logtype_init;
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, avf_logtype_init, "%s(): " fmt "\n", \
+		__func__, ## args)
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+extern int avf_logtype_driver;
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, avf_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
+
+#endif /* _AVF_LOG_H_ */
diff --git a/drivers/net/avf/base/README b/drivers/net/avf/base/README
new file mode 100644
index 0000000..4710ae2
--- /dev/null
+++ b/drivers/net/avf/base/README
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+Intel® AVF driver
+=================
+
+This directory contains the source code of the FreeBSD AVF driver,
+version cid-avf.2018.01.02.tar.gz, released by the team that develops
+basic drivers for any AVF NIC. The base/ directory contains the
+original source package.
+
+Updating the driver
+===================
+
+NOTE: The source code in this directory should not be modified apart from
+the following file(s):
+
+    avf_osdep.h
diff --git a/drivers/net/avf/base/avf_adminq.c b/drivers/net/avf/base/avf_adminq.c
new file mode 100644
index 0000000..616e2a9
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq.c
@@ -0,0 +1,1010 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "avf_status.h"
+#include "avf_type.h"
+#include "avf_register.h"
+#include "avf_adminq.h"
+#include "avf_prototype.h"
+
+/**
+ *  avf_adminq_init_regs - Initialize AdminQ registers
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the alloc_asq and alloc_arq functions have already been called
+ **/
+STATIC void avf_adminq_init_regs(struct avf_hw *hw)
+{
+	/* set head and tail registers in our local struct */
+	if (avf_is_vf(hw)) {
+		hw->aq.asq.tail = AVF_ATQT1;
+		hw->aq.asq.head = AVF_ATQH1;
+		hw->aq.asq.len  = AVF_ATQLEN1;
+		hw->aq.asq.bal  = AVF_ATQBAL1;
+		hw->aq.asq.bah  = AVF_ATQBAH1;
+		hw->aq.arq.tail = AVF_ARQT1;
+		hw->aq.arq.head = AVF_ARQH1;
+		hw->aq.arq.len  = AVF_ARQLEN1;
+		hw->aq.arq.bal  = AVF_ARQBAL1;
+		hw->aq.arq.bah  = AVF_ARQBAH1;
+	}
+}
+
+/**
+ *  avf_alloc_adminq_asq_ring - Allocate Admin Queue send rings
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_alloc_adminq_asq_ring(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	ret_code = avf_allocate_dma_mem(hw, &hw->aq.asq.desc_buf,
+					 avf_mem_atq_ring,
+					 (hw->aq.num_asq_entries *
+					 sizeof(struct avf_aq_desc)),
+					 AVF_ADMINQ_DESC_ALIGNMENT);
+	if (ret_code)
+		return ret_code;
+
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.asq.cmd_buf,
+					  (hw->aq.num_asq_entries *
+					  sizeof(struct avf_asq_cmd_details)));
+	if (ret_code) {
+		avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+		return ret_code;
+	}
+
+	return ret_code;
+}
+
+/**
+ *  avf_alloc_adminq_arq_ring - Allocate Admin Queue receive rings
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_alloc_adminq_arq_ring(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	ret_code = avf_allocate_dma_mem(hw, &hw->aq.arq.desc_buf,
+					 avf_mem_arq_ring,
+					 (hw->aq.num_arq_entries *
+					 sizeof(struct avf_aq_desc)),
+					 AVF_ADMINQ_DESC_ALIGNMENT);
+
+	return ret_code;
+}
+
+/**
+ *  avf_free_adminq_asq - Free Admin Queue send rings
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the posted send buffers have already been cleaned
+ *  and de-allocated
+ **/
+void avf_free_adminq_asq(struct avf_hw *hw)
+{
+	avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+}
+
+/**
+ *  avf_free_adminq_arq - Free Admin Queue receive rings
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the posted receive buffers have already been cleaned
+ *  and de-allocated
+ **/
+void avf_free_adminq_arq(struct avf_hw *hw)
+{
+	avf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+}
+
+/**
+ *  avf_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC enum avf_status_code avf_alloc_arq_bufs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+	struct avf_aq_desc *desc;
+	struct avf_dma_mem *bi;
+	int i;
+
+	/* We'll be allocating the buffer info memory first, then we can
+	 * allocate the mapped buffers for the event processing
+	 */
+
+	/* buffer_info structures do not need alignment */
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.arq.dma_head,
+		(hw->aq.num_arq_entries * sizeof(struct avf_dma_mem)));
+	if (ret_code)
+		goto alloc_arq_bufs;
+	hw->aq.arq.r.arq_bi = (struct avf_dma_mem *)hw->aq.arq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_arq_entries; i++) {
+		bi = &hw->aq.arq.r.arq_bi[i];
+		ret_code = avf_allocate_dma_mem(hw, bi,
+						 avf_mem_arq_buf,
+						 hw->aq.arq_buf_size,
+						 AVF_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_arq_bufs;
+
+		/* now configure the descriptors for use */
+		desc = AVF_ADMINQ_DESC(hw->aq.arq, i);
+
+		desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_BUF);
+		if (hw->aq.arq_buf_size > AVF_AQ_LARGE_BUF)
+			desc->flags |= CPU_TO_LE16(AVF_AQ_FLAG_LB);
+		desc->opcode = 0;
+		/* This is in accordance with Admin queue design, there is no
+		 * register for buffer size configuration
+		 */
+		desc->datalen = CPU_TO_LE16((u16)bi->size);
+		desc->retval = 0;
+		desc->cookie_high = 0;
+		desc->cookie_low = 0;
+		desc->params.external.addr_high =
+			CPU_TO_LE32(AVF_HI_DWORD(bi->pa));
+		desc->params.external.addr_low =
+			CPU_TO_LE32(AVF_LO_DWORD(bi->pa));
+		desc->params.external.param0 = 0;
+		desc->params.external.param1 = 0;
+	}
+
+alloc_arq_bufs:
+	return ret_code;
+
+unwind_alloc_arq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		avf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+	avf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ *  avf_alloc_asq_bufs - Allocate empty buffer structs for the send queue
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC enum avf_status_code avf_alloc_asq_bufs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+	struct avf_dma_mem *bi;
+	int i;
+
+	/* No mapped memory needed yet, just the buffer info structures */
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.asq.dma_head,
+		(hw->aq.num_asq_entries * sizeof(struct avf_dma_mem)));
+	if (ret_code)
+		goto alloc_asq_bufs;
+	hw->aq.asq.r.asq_bi = (struct avf_dma_mem *)hw->aq.asq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_asq_entries; i++) {
+		bi = &hw->aq.asq.r.asq_bi[i];
+		ret_code = avf_allocate_dma_mem(hw, bi,
+						 avf_mem_asq_buf,
+						 hw->aq.asq_buf_size,
+						 AVF_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_asq_bufs;
+	}
+alloc_asq_bufs:
+	return ret_code;
+
+unwind_alloc_asq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		avf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+	avf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ *  avf_free_arq_bufs - Free receive queue buffer info elements
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC void avf_free_arq_bufs(struct avf_hw *hw)
+{
+	int i;
+
+	/* free descriptors */
+	for (i = 0; i < hw->aq.num_arq_entries; i++)
+		avf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+
+	/* free the descriptor memory */
+	avf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+
+	/* free the dma header */
+	avf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+}
+
+/**
+ *  avf_free_asq_bufs - Free send queue buffer info elements
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC void avf_free_asq_bufs(struct avf_hw *hw)
+{
+	int i;
+
+	/* only unmap if the address is non-NULL */
+	for (i = 0; i < hw->aq.num_asq_entries; i++)
+		if (hw->aq.asq.r.asq_bi[i].pa)
+			avf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+
+	/* free the buffer info list */
+	avf_free_virt_mem(hw, &hw->aq.asq.cmd_buf);
+
+	/* free the descriptor memory */
+	avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+
+	/* free the dma header */
+	avf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+}
+
+/**
+ *  avf_config_asq_regs - configure ASQ registers
+ *  @hw: pointer to the hardware structure
+ *
+ *  Configure base address and length registers for the transmit queue
+ **/
+STATIC enum avf_status_code avf_config_asq_regs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+
+	/* set starting point */
+#ifdef INTEGRATED_VF
+	if (avf_is_vf(hw))
+		wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
+					  AVF_ATQLEN1_ATQENABLE_MASK));
+#else
+	wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
+				  AVF_ATQLEN1_ATQENABLE_MASK));
+#endif /* INTEGRATED_VF */
+	wr32(hw, hw->aq.asq.bal, AVF_LO_DWORD(hw->aq.asq.desc_buf.pa));
+	wr32(hw, hw->aq.asq.bah, AVF_HI_DWORD(hw->aq.asq.desc_buf.pa));
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.asq.bal);
+	if (reg != AVF_LO_DWORD(hw->aq.asq.desc_buf.pa))
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ *  avf_config_arq_regs - ARQ register configuration
+ *  @hw: pointer to the hardware structure
+ *
+ * Configure base address and length registers for the receive (event queue)
+ **/
+STATIC enum avf_status_code avf_config_arq_regs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+
+	/* set starting point */
+#ifdef INTEGRATED_VF
+	if (avf_is_vf(hw))
+		wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
+					  AVF_ARQLEN1_ARQENABLE_MASK));
+#else
+	wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
+				  AVF_ARQLEN1_ARQENABLE_MASK));
+#endif /* INTEGRATED_VF */
+	wr32(hw, hw->aq.arq.bal, AVF_LO_DWORD(hw->aq.arq.desc_buf.pa));
+	wr32(hw, hw->aq.arq.bah, AVF_HI_DWORD(hw->aq.arq.desc_buf.pa));
+
+	/* Update tail in the HW to post pre-allocated buffers */
+	wr32(hw, hw->aq.arq.tail, hw->aq.num_arq_entries - 1);
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.arq.bal);
+	if (reg != AVF_LO_DWORD(hw->aq.arq.desc_buf.pa))
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ *  avf_init_asq - main initialization routine for ASQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  This is the main initialization routine for the Admin Send Queue
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.asq_buf_size
+ *
+ *  Do *NOT* hold the lock when calling this as the memory allocation routines
+ *  called are not going to be atomic context safe
+ **/
+enum avf_status_code avf_init_asq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (hw->aq.asq.count > 0) {
+		/* queue already initialized */
+		ret_code = AVF_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.asq.next_to_use = 0;
+	hw->aq.asq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = avf_alloc_adminq_asq_ring(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = avf_alloc_asq_bufs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = avf_config_asq_regs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* success! */
+	hw->aq.asq.count = hw->aq.num_asq_entries;
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	avf_free_adminq_asq(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_init_arq - initialize ARQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main initialization routine for the Admin Receive (Event) Queue.
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_arq_entries
+ *     - hw->aq.arq_buf_size
+ *
+ *  Do *NOT* hold the lock when calling this as the memory allocation routines
+ *  called are not going to be atomic context safe
+ **/
+enum avf_status_code avf_init_arq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (hw->aq.arq.count > 0) {
+		/* queue already initialized */
+		ret_code = AVF_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.arq.next_to_use = 0;
+	hw->aq.arq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = avf_alloc_adminq_arq_ring(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = avf_alloc_arq_bufs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = avf_config_arq_regs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* success! */
+	hw->aq.arq.count = hw->aq.num_arq_entries;
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	avf_free_adminq_arq(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_asq - shutdown the ASQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main shutdown routine for the Admin Send Queue
+ **/
+enum avf_status_code avf_shutdown_asq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	avf_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	if (hw->aq.asq.count == 0) {
+		ret_code = AVF_ERR_NOT_READY;
+		goto shutdown_asq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+	wr32(hw, hw->aq.asq.len, 0);
+	wr32(hw, hw->aq.asq.bal, 0);
+	wr32(hw, hw->aq.asq.bah, 0);
+
+	hw->aq.asq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	avf_free_asq_bufs(hw);
+
+shutdown_asq_out:
+	avf_release_spinlock(&hw->aq.asq_spinlock);
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_arq - shutdown ARQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main shutdown routine for the Admin Receive Queue
+ **/
+enum avf_status_code avf_shutdown_arq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	avf_acquire_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->aq.arq.count == 0) {
+		ret_code = AVF_ERR_NOT_READY;
+		goto shutdown_arq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+	wr32(hw, hw->aq.arq.len, 0);
+	wr32(hw, hw->aq.arq.bal, 0);
+	wr32(hw, hw->aq.arq.bah, 0);
+
+	hw->aq.arq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	avf_free_arq_bufs(hw);
+
+shutdown_arq_out:
+	avf_release_spinlock(&hw->aq.arq_spinlock);
+	return ret_code;
+}
+
+/**
+ *  avf_init_adminq - main initialization routine for Admin Queue
+ *  @hw: pointer to the hardware structure
+ *
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.num_arq_entries
+ *     - hw->aq.arq_buf_size
+ *     - hw->aq.asq_buf_size
+ **/
+enum avf_status_code avf_init_adminq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+	avf_init_spinlock(&hw->aq.asq_spinlock);
+	avf_init_spinlock(&hw->aq.arq_spinlock);
+
+	/* Set up register offsets */
+	avf_adminq_init_regs(hw);
+
+	/* setup ASQ command write back timeout */
+	hw->aq.asq_cmd_timeout = AVF_ASQ_CMD_TIMEOUT;
+
+	/* allocate the ASQ */
+	ret_code = avf_init_asq(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_destroy_spinlocks;
+
+	/* allocate the ARQ */
+	ret_code = avf_init_arq(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_asq;
+
+	ret_code = AVF_SUCCESS;
+
+	/* success! */
+	goto init_adminq_exit;
+
+init_adminq_free_asq:
+	avf_shutdown_asq(hw);
+init_adminq_destroy_spinlocks:
+	avf_destroy_spinlock(&hw->aq.asq_spinlock);
+	avf_destroy_spinlock(&hw->aq.arq_spinlock);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_adminq - shutdown routine for the Admin Queue
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_shutdown_adminq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (avf_check_asq_alive(hw))
+		avf_aq_queue_shutdown(hw, true);
+
+	avf_shutdown_asq(hw);
+	avf_shutdown_arq(hw);
+	avf_destroy_spinlock(&hw->aq.asq_spinlock);
+	avf_destroy_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->nvm_buff.va)
+		avf_free_virt_mem(hw, &hw->nvm_buff);
+
+	return ret_code;
+}
+
+/**
+ *  avf_clean_asq - cleans Admin send queue
+ *  @hw: pointer to the hardware structure
+ *
+ *  returns the number of free desc
+ **/
+u16 avf_clean_asq(struct avf_hw *hw)
+{
+	struct avf_adminq_ring *asq = &(hw->aq.asq);
+	struct avf_asq_cmd_details *details;
+	u16 ntc = asq->next_to_clean;
+	struct avf_aq_desc desc_cb;
+	struct avf_aq_desc *desc;
+
+	desc = AVF_ADMINQ_DESC(*asq, ntc);
+	details = AVF_ADMINQ_DETAILS(*asq, ntc);
+	while (rd32(hw, hw->aq.asq.head) != ntc) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "ntc %d head %d.\n", ntc, rd32(hw, hw->aq.asq.head));
+
+		if (details->callback) {
+			AVF_ADMINQ_CALLBACK cb_func =
+					(AVF_ADMINQ_CALLBACK)details->callback;
+			avf_memcpy(&desc_cb, desc, sizeof(struct avf_aq_desc),
+				    AVF_DMA_TO_DMA);
+			cb_func(hw, &desc_cb);
+		}
+		avf_memset(desc, 0, sizeof(*desc), AVF_DMA_MEM);
+		avf_memset(details, 0, sizeof(*details), AVF_NONDMA_MEM);
+		ntc++;
+		if (ntc == asq->count)
+			ntc = 0;
+		desc = AVF_ADMINQ_DESC(*asq, ntc);
+		details = AVF_ADMINQ_DETAILS(*asq, ntc);
+	}
+
+	asq->next_to_clean = ntc;
+
+	return AVF_DESC_UNUSED(asq);
+}
+
+/**
+ *  avf_asq_done - check if FW has processed the Admin Send Queue
+ *  @hw: pointer to the hw struct
+ *
+ *  Returns true if the firmware has processed all descriptors on the
+ *  admin send queue. Returns false if there are still requests pending.
+ **/
+bool avf_asq_done(struct avf_hw *hw)
+{
+	/* AQ designers suggest use of head for better
+	 * timing reliability than DD bit
+	 */
+	return rd32(hw, hw->aq.asq.head) == hw->aq.asq.next_to_use;
+
+}
+
+/**
+ *  avf_asq_send_command - send command to Admin Queue
+ *  @hw: pointer to the hw struct
+ *  @desc: prefilled descriptor describing the command (non DMA mem)
+ *  @buff: buffer to use for indirect commands
+ *  @buff_size: size of buffer for indirect commands
+ *  @cmd_details: pointer to command details structure
+ *
+ *  This is the main send command driver routine for the Admin Queue send
+ *  queue.  It runs the queue, cleans the queue, etc
+ **/
+enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
+				struct avf_aq_desc *desc,
+				void *buff, /* can be NULL */
+				u16  buff_size,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_dma_mem *dma_buff = NULL;
+	struct avf_asq_cmd_details *details;
+	struct avf_aq_desc *desc_on_ring;
+	bool cmd_completed = false;
+	u16  retval = 0;
+	u32  val = 0;
+
+	avf_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	hw->aq.asq_last_status = AVF_AQ_RC_OK;
+
+	if (hw->aq.asq.count == 0) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Admin queue not initialized.\n");
+		status = AVF_ERR_QUEUE_EMPTY;
+		goto asq_send_command_error;
+	}
+
+	val = rd32(hw, hw->aq.asq.head);
+	if (val >= hw->aq.num_asq_entries) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: head overrun at %d\n", val);
+		status = AVF_ERR_QUEUE_EMPTY;
+		goto asq_send_command_error;
+	}
+
+	details = AVF_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use);
+	if (cmd_details) {
+		avf_memcpy(details,
+			    cmd_details,
+			    sizeof(struct avf_asq_cmd_details),
+			    AVF_NONDMA_TO_NONDMA);
+
+		/* If the cmd_details are defined copy the cookie.  The
+		 * CPU_TO_LE32 is not needed here because the data is ignored
+		 * by the FW, only used by the driver
+		 */
+		if (details->cookie) {
+			desc->cookie_high =
+				CPU_TO_LE32(AVF_HI_DWORD(details->cookie));
+			desc->cookie_low =
+				CPU_TO_LE32(AVF_LO_DWORD(details->cookie));
+		}
+	} else {
+		avf_memset(details, 0,
+			    sizeof(struct avf_asq_cmd_details),
+			    AVF_NONDMA_MEM);
+	}
+
+	/* clear requested flags and then set additional flags if defined */
+	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
+	desc->flags |= CPU_TO_LE16(details->flags_ena);
+
+	if (buff_size > hw->aq.asq_buf_size) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Invalid buffer size: %d.\n",
+			   buff_size);
+		status = AVF_ERR_INVALID_SIZE;
+		goto asq_send_command_error;
+	}
+
+	if (details->postpone && !details->async) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Async flag not set along with postpone flag");
+		status = AVF_ERR_PARAM;
+		goto asq_send_command_error;
+	}
+
+	/* call clean and check queue available function to reclaim the
+	 * descriptors that were processed by FW, the function returns the
+	 * number of desc available
+	 */
+	/* the clean function called here could be called in a separate thread
+	 * in case of asynchronous completions
+	 */
+	if (avf_clean_asq(hw) == 0) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Error queue is full.\n");
+		status = AVF_ERR_ADMIN_QUEUE_FULL;
+		goto asq_send_command_error;
+	}
+
+	/* initialize the temp desc pointer with the right desc */
+	desc_on_ring = AVF_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use);
+
+	/* if the desc is available copy the temp desc to the right place */
+	avf_memcpy(desc_on_ring, desc, sizeof(struct avf_aq_desc),
+		    AVF_NONDMA_TO_DMA);
+
+	/* if buff is not NULL assume indirect command */
+	if (buff != NULL) {
+		dma_buff = &(hw->aq.asq.r.asq_bi[hw->aq.asq.next_to_use]);
+		/* copy the user buff into the respective DMA buff */
+		avf_memcpy(dma_buff->va, buff, buff_size,
+			    AVF_NONDMA_TO_DMA);
+		desc_on_ring->datalen = CPU_TO_LE16(buff_size);
+
+		/* Update the address values in the desc with the pa value
+		 * for respective buffer
+		 */
+		desc_on_ring->params.external.addr_high =
+				CPU_TO_LE32(AVF_HI_DWORD(dma_buff->pa));
+		desc_on_ring->params.external.addr_low =
+				CPU_TO_LE32(AVF_LO_DWORD(dma_buff->pa));
+	}
+
+	/* bump the tail */
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc_on_ring,
+		      buff, buff_size);
+	(hw->aq.asq.next_to_use)++;
+	if (hw->aq.asq.next_to_use == hw->aq.asq.count)
+		hw->aq.asq.next_to_use = 0;
+	if (!details->postpone)
+		wr32(hw, hw->aq.asq.tail, hw->aq.asq.next_to_use);
+
+	/* if cmd_details are not defined or async flag is not set,
+	 * we need to wait for desc write back
+	 */
+	if (!details->async && !details->postpone) {
+		u32 total_delay = 0;
+
+		do {
+			/* AQ designers suggest use of head for better
+			 * timing reliability than DD bit
+			 */
+			if (avf_asq_done(hw))
+				break;
+			avf_usec_delay(50);
+			total_delay += 50;
+		} while (total_delay < hw->aq.asq_cmd_timeout);
+	}
+
+	/* if ready, copy the desc back to temp */
+	if (avf_asq_done(hw)) {
+		avf_memcpy(desc, desc_on_ring, sizeof(struct avf_aq_desc),
+			    AVF_DMA_TO_NONDMA);
+		if (buff != NULL)
+			avf_memcpy(buff, dma_buff->va, buff_size,
+				    AVF_DMA_TO_NONDMA);
+		retval = LE16_TO_CPU(desc->retval);
+		if (retval != 0) {
+			avf_debug(hw,
+				   AVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: Command completed with error 0x%X.\n",
+				   retval);
+
+			/* strip off FW internal code */
+			retval &= 0xff;
+		}
+		cmd_completed = true;
+		if ((enum avf_admin_queue_err)retval == AVF_AQ_RC_OK)
+			status = AVF_SUCCESS;
+		else
+			status = AVF_ERR_ADMIN_QUEUE_ERROR;
+		hw->aq.asq_last_status = (enum avf_admin_queue_err)retval;
+	}
+
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+		   "AQTX: desc and buffer writeback:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc, buff, buff_size);
+
+	/* save writeback aq if requested */
+	if (details->wb_desc)
+		avf_memcpy(details->wb_desc, desc_on_ring,
+			    sizeof(struct avf_aq_desc), AVF_DMA_TO_NONDMA);
+
+	/* update the error if time out occurred */
+	if ((!cmd_completed) &&
+	    (!details->async && !details->postpone)) {
+		if (rd32(hw, hw->aq.asq.len) & AVF_ATQLEN1_ATQCRIT_MASK) {
+			avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: AQ Critical error.\n");
+			status = AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR;
+		} else {
+			avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: Writeback timeout.\n");
+			status = AVF_ERR_ADMIN_QUEUE_TIMEOUT;
+		}
+	}
+
+asq_send_command_error:
+	avf_release_spinlock(&hw->aq.asq_spinlock);
+	return status;
+}
+
+/**
+ *  avf_fill_default_direct_cmd_desc - AQ descriptor helper function
+ *  @desc:     pointer to the temp descriptor (non DMA mem)
+ *  @opcode:   the opcode can be used to decide which flags to turn off or on
+ *
+ *  Fill the desc with default values
+ **/
+void avf_fill_default_direct_cmd_desc(struct avf_aq_desc *desc,
+				       u16 opcode)
+{
+	/* zero out the desc */
+	avf_memset((void *)desc, 0, sizeof(struct avf_aq_desc),
+		    AVF_NONDMA_MEM);
+	desc->opcode = CPU_TO_LE16(opcode);
+	desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_SI);
+}
+
+/**
+ *  avf_clean_arq_element
+ *  @hw: pointer to the hw struct
+ *  @e: event info from the receive descriptor, includes any buffers
+ *  @pending: number of events that could be left to process
+ *
+ *  This function cleans one Admin Receive Queue element and returns
+ *  the contents through e.  It can also return how many events are
+ *  left to process through 'pending'
+ **/
+enum avf_status_code avf_clean_arq_element(struct avf_hw *hw,
+					     struct avf_arq_event_info *e,
+					     u16 *pending)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u16 ntc = hw->aq.arq.next_to_clean;
+	struct avf_aq_desc *desc;
+	struct avf_dma_mem *bi;
+	u16 desc_idx;
+	u16 datalen;
+	u16 flags;
+	u16 ntu;
+
+	/* pre-clean the event info */
+	avf_memset(&e->desc, 0, sizeof(e->desc), AVF_NONDMA_MEM);
+
+	/* take the lock before we start messing with the ring */
+	avf_acquire_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->aq.arq.count == 0) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQRX: Admin queue not initialized.\n");
+		ret_code = AVF_ERR_QUEUE_EMPTY;
+		goto clean_arq_element_err;
+	}
+
+	/* set next_to_use to head */
+#ifdef INTEGRATED_VF
+	if (!avf_is_vf(hw))
+		ntu = rd32(hw, hw->aq.arq.head) & AVF_PF_ARQH_ARQH_MASK;
+	else
+		ntu = rd32(hw, hw->aq.arq.head) & AVF_ARQH1_ARQH_MASK;
+#else
+	ntu = rd32(hw, hw->aq.arq.head) & AVF_ARQH1_ARQH_MASK;
+#endif /* INTEGRATED_VF */
+	if (ntu == ntc) {
+		/* nothing to do - shouldn't need to update ring's values */
+		ret_code = AVF_ERR_ADMIN_QUEUE_NO_WORK;
+		goto clean_arq_element_out;
+	}
+
+	/* now clean the next descriptor */
+	desc = AVF_ADMINQ_DESC(hw->aq.arq, ntc);
+	desc_idx = ntc;
+
+	hw->aq.arq_last_status =
+		(enum avf_admin_queue_err)LE16_TO_CPU(desc->retval);
+	flags = LE16_TO_CPU(desc->flags);
+	if (flags & AVF_AQ_FLAG_ERR) {
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQRX: Event received with error 0x%X.\n",
+			   hw->aq.arq_last_status);
+	}
+
+	avf_memcpy(&e->desc, desc, sizeof(struct avf_aq_desc),
+		    AVF_DMA_TO_NONDMA);
+	datalen = LE16_TO_CPU(desc->datalen);
+	e->msg_len = min(datalen, e->buf_len);
+	if (e->msg_buf != NULL && (e->msg_len != 0))
+		avf_memcpy(e->msg_buf,
+			    hw->aq.arq.r.arq_bi[desc_idx].va,
+			    e->msg_len, AVF_DMA_TO_NONDMA);
+
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, "AQRX: desc and buffer:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf,
+		      hw->aq.arq_buf_size);
+
+	/* Restore the original datalen and buffer address in the desc;
+	 * FW updates datalen to indicate the event message size.
+	 */
+	bi = &hw->aq.arq.r.arq_bi[ntc];
+	avf_memset((void *)desc, 0, sizeof(struct avf_aq_desc), AVF_DMA_MEM);
+
+	desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_BUF);
+	if (hw->aq.arq_buf_size > AVF_AQ_LARGE_BUF)
+		desc->flags |= CPU_TO_LE16(AVF_AQ_FLAG_LB);
+	desc->datalen = CPU_TO_LE16((u16)bi->size);
+	desc->params.external.addr_high = CPU_TO_LE32(AVF_HI_DWORD(bi->pa));
+	desc->params.external.addr_low = CPU_TO_LE32(AVF_LO_DWORD(bi->pa));
+
+	/* set tail = the last cleaned desc index. */
+	wr32(hw, hw->aq.arq.tail, ntc);
+	/* ntc is updated to tail + 1 */
+	ntc++;
+	if (ntc == hw->aq.num_arq_entries)
+		ntc = 0;
+	hw->aq.arq.next_to_clean = ntc;
+	hw->aq.arq.next_to_use = ntu;
+
+clean_arq_element_out:
+	/* Set pending to the number of events still to be processed,
+	 * accounting for ring wrap-around; unlock and return
+	 */
+	if (pending != NULL)
+		*pending = (ntc > ntu ? hw->aq.arq.count : 0) + (ntu - ntc);
+clean_arq_element_err:
+	avf_release_spinlock(&hw->aq.arq_spinlock);
+
+	return ret_code;
+}
+
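
A minimal usage sketch for the receive path above (illustrative only, not part
of the patch); the helper name and buffer size are assumptions, and callers
normally size the buffer to hw->aq.arq_buf_size:

	/* Illustrative: drain all pending admin events from the ARQ. */
	static void example_drain_arq(struct avf_hw *hw)
	{
		struct avf_arq_event_info event;
		u8 buf[4096];	/* hypothetical size; use hw->aq.arq_buf_size */
		u16 pending = 0;

		event.buf_len = sizeof(buf);
		event.msg_buf = buf;

		do {
			/* returns AVF_ERR_ADMIN_QUEUE_NO_WORK once empty */
			if (avf_clean_arq_element(hw, &event, &pending) !=
			    AVF_SUCCESS)
				break;
			/* event.desc and event.msg_buf describe one event */
		} while (pending);
	}
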
diff --git a/drivers/net/avf/base/avf_adminq.h b/drivers/net/avf/base/avf_adminq.h
new file mode 100644
index 0000000..d7d242a
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq.h
@@ -0,0 +1,166 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ADMINQ_H_
+#define _AVF_ADMINQ_H_
+
+#include "avf_osdep.h"
+#include "avf_status.h"
+#include "avf_adminq_cmd.h"
+
+#define AVF_ADMINQ_DESC(R, i)   \
+	(&(((struct avf_aq_desc *)((R).desc_buf.va))[i]))
+
+#define AVF_ADMINQ_DESC_ALIGNMENT 4096
+
+struct avf_adminq_ring {
+	struct avf_virt_mem dma_head;	/* space for dma structures */
+	struct avf_dma_mem desc_buf;	/* descriptor ring memory */
+	struct avf_virt_mem cmd_buf;	/* command buffer memory */
+
+	union {
+		struct avf_dma_mem *asq_bi;
+		struct avf_dma_mem *arq_bi;
+	} r;
+
+	u16 count;		/* Number of descriptors */
+	u16 rx_buf_len;		/* Admin Receive Queue buffer length */
+
+	/* used for interrupt processing */
+	u16 next_to_use;
+	u16 next_to_clean;
+
+	/* used for queue tracking */
+	u32 head;
+	u32 tail;
+	u32 len;
+	u32 bah;
+	u32 bal;
+};
+
+/* ASQ transaction details */
+struct avf_asq_cmd_details {
+	void *callback; /* cast from type AVF_ADMINQ_CALLBACK */
+	u64 cookie;
+	u16 flags_ena;
+	u16 flags_dis;
+	bool async;
+	bool postpone;
+	struct avf_aq_desc *wb_desc;
+};
+
+#define AVF_ADMINQ_DETAILS(R, i)   \
+	(&(((struct avf_asq_cmd_details *)((R).cmd_buf.va))[i]))
+
+/* ARQ event information */
+struct avf_arq_event_info {
+	struct avf_aq_desc desc;
+	u16 msg_len;
+	u16 buf_len;
+	u8 *msg_buf;
+};
+
+/* Admin Queue information */
+struct avf_adminq_info {
+	struct avf_adminq_ring arq;    /* receive queue */
+	struct avf_adminq_ring asq;    /* send queue */
+	u32 asq_cmd_timeout;            /* send queue cmd write back timeout*/
+	u16 num_arq_entries;            /* receive queue depth */
+	u16 num_asq_entries;            /* send queue depth */
+	u16 arq_buf_size;               /* receive queue buffer size */
+	u16 asq_buf_size;               /* send queue buffer size */
+	u16 fw_maj_ver;                 /* firmware major version */
+	u16 fw_min_ver;                 /* firmware minor version */
+	u32 fw_build;                   /* firmware build number */
+	u16 api_maj_ver;                /* api major version */
+	u16 api_min_ver;                /* api minor version */
+
+	struct avf_spinlock asq_spinlock; /* Send queue spinlock */
+	struct avf_spinlock arq_spinlock; /* Receive queue spinlock */
+
+	/* last status values on send and receive queues */
+	enum avf_admin_queue_err asq_last_status;
+	enum avf_admin_queue_err arq_last_status;
+};
+
+/**
+ * avf_aq_rc_to_posix - convert errors to user-land codes
+ * aq_ret: AdminQ handler error code can override aq_rc
+ * aq_rc: AdminQ firmware error code to convert
+ **/
+STATIC INLINE int avf_aq_rc_to_posix(int aq_ret, int aq_rc)
+{
+	int aq_to_posix[] = {
+		0,           /* AVF_AQ_RC_OK */
+		-EPERM,      /* AVF_AQ_RC_EPERM */
+		-ENOENT,     /* AVF_AQ_RC_ENOENT */
+		-ESRCH,      /* AVF_AQ_RC_ESRCH */
+		-EINTR,      /* AVF_AQ_RC_EINTR */
+		-EIO,        /* AVF_AQ_RC_EIO */
+		-ENXIO,      /* AVF_AQ_RC_ENXIO */
+		-E2BIG,      /* AVF_AQ_RC_E2BIG */
+		-EAGAIN,     /* AVF_AQ_RC_EAGAIN */
+		-ENOMEM,     /* AVF_AQ_RC_ENOMEM */
+		-EACCES,     /* AVF_AQ_RC_EACCES */
+		-EFAULT,     /* AVF_AQ_RC_EFAULT */
+		-EBUSY,      /* AVF_AQ_RC_EBUSY */
+		-EEXIST,     /* AVF_AQ_RC_EEXIST */
+		-EINVAL,     /* AVF_AQ_RC_EINVAL */
+		-ENOTTY,     /* AVF_AQ_RC_ENOTTY */
+		-ENOSPC,     /* AVF_AQ_RC_ENOSPC */
+		-ENOSYS,     /* AVF_AQ_RC_ENOSYS */
+		-ERANGE,     /* AVF_AQ_RC_ERANGE */
+		-EPIPE,      /* AVF_AQ_RC_EFLUSHED */
+		-ESPIPE,     /* AVF_AQ_RC_BAD_ADDR */
+		-EROFS,      /* AVF_AQ_RC_EMODE */
+		-EFBIG,      /* AVF_AQ_RC_EFBIG */
+	};
+
+	/* aq_rc is invalid if AQ timed out */
+	if (aq_ret == AVF_ERR_ADMIN_QUEUE_TIMEOUT)
+		return -EAGAIN;
+
+	if (!((u32)aq_rc < (sizeof(aq_to_posix) / sizeof((aq_to_posix)[0]))))
+		return -ERANGE;
+
+	return aq_to_posix[aq_rc];
+}
+
+/* general information */
+#define AVF_AQ_LARGE_BUF	512
+#define AVF_ASQ_CMD_TIMEOUT	250000  /* usecs */
+
+void avf_fill_default_direct_cmd_desc(struct avf_aq_desc *desc,
+				       u16 opcode);
+
+#endif /* _AVF_ADMINQ_H_ */
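
As a hedged illustration of how the helpers in this header combine (not part
of the patch), a caller might translate a failed send into an errno-style
value; the wrapper name is hypothetical, and avf_asq_send_command() is the
send routine added earlier in this patch (prototype assumed to match the
i40e equivalent):

	/* Hypothetical wrapper: send a direct command, return a POSIX error. */
	static int example_send(struct avf_hw *hw, struct avf_aq_desc *desc)
	{
		enum avf_status_code ret;

		ret = avf_asq_send_command(hw, desc, NULL, 0, NULL);
		if (ret != AVF_SUCCESS)
			return avf_aq_rc_to_posix(ret, hw->aq.asq_last_status);
		return 0;
	}
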
diff --git a/drivers/net/avf/base/avf_adminq_cmd.h b/drivers/net/avf/base/avf_adminq_cmd.h
new file mode 100644
index 0000000..1709f31
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq_cmd.h
@@ -0,0 +1,2842 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ADMINQ_CMD_H_
+#define _AVF_ADMINQ_CMD_H_
+
+/* This header file defines the avf Admin Queue commands and is shared between
+ * avf Firmware and Software.
+ *
+ * This file needs to comply with the Linux Kernel coding style.
+ */
+
+
+#define AVF_FW_API_VERSION_MAJOR	0x0001
+#define AVF_FW_API_VERSION_MINOR_X722	0x0005
+#define AVF_FW_API_VERSION_MINOR_X710	0x0007
+
+#define AVF_FW_MINOR_VERSION(_h) ((_h)->mac.type == AVF_MAC_XL710 ? \
+					AVF_FW_API_VERSION_MINOR_X710 : \
+					AVF_FW_API_VERSION_MINOR_X722)
+
+/* API version 1.7 implements additional link and PHY-specific APIs  */
+#define AVF_MINOR_VER_GET_LINK_INFO_XL710 0x0007
+
+struct avf_aq_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 retval;
+	__le32 cookie_high;
+	__le32 cookie_low;
+	union {
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 param2;
+			__le32 param3;
+		} internal;
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 addr_high;
+			__le32 addr_low;
+		} external;
+		u8 raw[16];
+	} params;
+};
+
+/* Flags sub-structure
+ * |0  |1  |2  |3  |4  |5  |6  |7  |8  |9  |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR|VFE| * *  RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
+ */
+
+/* command flags and offsets*/
+#define AVF_AQ_FLAG_DD_SHIFT	0
+#define AVF_AQ_FLAG_CMP_SHIFT	1
+#define AVF_AQ_FLAG_ERR_SHIFT	2
+#define AVF_AQ_FLAG_VFE_SHIFT	3
+#define AVF_AQ_FLAG_LB_SHIFT	9
+#define AVF_AQ_FLAG_RD_SHIFT	10
+#define AVF_AQ_FLAG_VFC_SHIFT	11
+#define AVF_AQ_FLAG_BUF_SHIFT	12
+#define AVF_AQ_FLAG_SI_SHIFT	13
+#define AVF_AQ_FLAG_EI_SHIFT	14
+#define AVF_AQ_FLAG_FE_SHIFT	15
+
+#define AVF_AQ_FLAG_DD		(1 << AVF_AQ_FLAG_DD_SHIFT)  /* 0x1    */
+#define AVF_AQ_FLAG_CMP	(1 << AVF_AQ_FLAG_CMP_SHIFT) /* 0x2    */
+#define AVF_AQ_FLAG_ERR	(1 << AVF_AQ_FLAG_ERR_SHIFT) /* 0x4    */
+#define AVF_AQ_FLAG_VFE	(1 << AVF_AQ_FLAG_VFE_SHIFT) /* 0x8    */
+#define AVF_AQ_FLAG_LB		(1 << AVF_AQ_FLAG_LB_SHIFT)  /* 0x200  */
+#define AVF_AQ_FLAG_RD		(1 << AVF_AQ_FLAG_RD_SHIFT)  /* 0x400  */
+#define AVF_AQ_FLAG_VFC	(1 << AVF_AQ_FLAG_VFC_SHIFT) /* 0x800  */
+#define AVF_AQ_FLAG_BUF	(1 << AVF_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
+#define AVF_AQ_FLAG_SI		(1 << AVF_AQ_FLAG_SI_SHIFT)  /* 0x2000 */
+#define AVF_AQ_FLAG_EI		(1 << AVF_AQ_FLAG_EI_SHIFT)  /* 0x4000 */
+#define AVF_AQ_FLAG_FE		(1 << AVF_AQ_FLAG_FE_SHIFT)  /* 0x8000 */
+
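To show how these bits are typically combined (an illustrative sketch, not
part of the patch): an indirect command that sends a buffer to firmware sets
BUF and RD, and adds LB when the buffer exceeds AVF_AQ_LARGE_BUF; desc and
buff_size below are caller-side names assumed for the example.

	/* Illustrative: mark a descriptor as indirect with a driver-to-FW buffer. */
	if (buff_size) {
		desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF | AVF_AQ_FLAG_RD));
		if (buff_size > AVF_AQ_LARGE_BUF)
			desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
		desc.datalen = CPU_TO_LE16(buff_size);
	}
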
+/* error codes */
+enum avf_admin_queue_err {
+	AVF_AQ_RC_OK		= 0,  /* success */
+	AVF_AQ_RC_EPERM	= 1,  /* Operation not permitted */
+	AVF_AQ_RC_ENOENT	= 2,  /* No such element */
+	AVF_AQ_RC_ESRCH	= 3,  /* Bad opcode */
+	AVF_AQ_RC_EINTR	= 4,  /* operation interrupted */
+	AVF_AQ_RC_EIO		= 5,  /* I/O error */
+	AVF_AQ_RC_ENXIO	= 6,  /* No such resource */
+	AVF_AQ_RC_E2BIG	= 7,  /* Arg too long */
+	AVF_AQ_RC_EAGAIN	= 8,  /* Try again */
+	AVF_AQ_RC_ENOMEM	= 9,  /* Out of memory */
+	AVF_AQ_RC_EACCES	= 10, /* Permission denied */
+	AVF_AQ_RC_EFAULT	= 11, /* Bad address */
+	AVF_AQ_RC_EBUSY	= 12, /* Device or resource busy */
+	AVF_AQ_RC_EEXIST	= 13, /* object already exists */
+	AVF_AQ_RC_EINVAL	= 14, /* Invalid argument */
+	AVF_AQ_RC_ENOTTY	= 15, /* Not a typewriter */
+	AVF_AQ_RC_ENOSPC	= 16, /* No space left or alloc failure */
+	AVF_AQ_RC_ENOSYS	= 17, /* Function not implemented */
+	AVF_AQ_RC_ERANGE	= 18, /* Parameter out of range */
+	AVF_AQ_RC_EFLUSHED	= 19, /* Cmd flushed due to prev cmd error */
+	AVF_AQ_RC_BAD_ADDR	= 20, /* Descriptor contains a bad pointer */
+	AVF_AQ_RC_EMODE	= 21, /* Op not allowed in current dev mode */
+	AVF_AQ_RC_EFBIG	= 22, /* File too large */
+};
+
+/* Admin Queue command opcodes */
+enum avf_admin_queue_opc {
+	/* aq commands */
+	avf_aqc_opc_get_version	= 0x0001,
+	avf_aqc_opc_driver_version	= 0x0002,
+	avf_aqc_opc_queue_shutdown	= 0x0003,
+	avf_aqc_opc_set_pf_context	= 0x0004,
+
+	/* resource ownership */
+	avf_aqc_opc_request_resource	= 0x0008,
+	avf_aqc_opc_release_resource	= 0x0009,
+
+	avf_aqc_opc_list_func_capabilities	= 0x000A,
+	avf_aqc_opc_list_dev_capabilities	= 0x000B,
+
+	/* Proxy commands */
+	avf_aqc_opc_set_proxy_config		= 0x0104,
+	avf_aqc_opc_set_ns_proxy_table_entry	= 0x0105,
+
+	/* LAA */
+	avf_aqc_opc_mac_address_read	= 0x0107,
+	avf_aqc_opc_mac_address_write	= 0x0108,
+
+	/* PXE */
+	avf_aqc_opc_clear_pxe_mode	= 0x0110,
+
+	/* WoL commands */
+	avf_aqc_opc_set_wol_filter	= 0x0120,
+	avf_aqc_opc_get_wake_reason	= 0x0121,
+	avf_aqc_opc_clear_all_wol_filters = 0x025E,
+
+	/* internal switch commands */
+	avf_aqc_opc_get_switch_config		= 0x0200,
+	avf_aqc_opc_add_statistics		= 0x0201,
+	avf_aqc_opc_remove_statistics		= 0x0202,
+	avf_aqc_opc_set_port_parameters	= 0x0203,
+	avf_aqc_opc_get_switch_resource_alloc	= 0x0204,
+	avf_aqc_opc_set_switch_config		= 0x0205,
+	avf_aqc_opc_rx_ctl_reg_read		= 0x0206,
+	avf_aqc_opc_rx_ctl_reg_write		= 0x0207,
+
+	avf_aqc_opc_add_vsi			= 0x0210,
+	avf_aqc_opc_update_vsi_parameters	= 0x0211,
+	avf_aqc_opc_get_vsi_parameters		= 0x0212,
+
+	avf_aqc_opc_add_pv			= 0x0220,
+	avf_aqc_opc_update_pv_parameters	= 0x0221,
+	avf_aqc_opc_get_pv_parameters		= 0x0222,
+
+	avf_aqc_opc_add_veb			= 0x0230,
+	avf_aqc_opc_update_veb_parameters	= 0x0231,
+	avf_aqc_opc_get_veb_parameters		= 0x0232,
+
+	avf_aqc_opc_delete_element		= 0x0243,
+
+	avf_aqc_opc_add_macvlan		= 0x0250,
+	avf_aqc_opc_remove_macvlan		= 0x0251,
+	avf_aqc_opc_add_vlan			= 0x0252,
+	avf_aqc_opc_remove_vlan		= 0x0253,
+	avf_aqc_opc_set_vsi_promiscuous_modes	= 0x0254,
+	avf_aqc_opc_add_tag			= 0x0255,
+	avf_aqc_opc_remove_tag			= 0x0256,
+	avf_aqc_opc_add_multicast_etag		= 0x0257,
+	avf_aqc_opc_remove_multicast_etag	= 0x0258,
+	avf_aqc_opc_update_tag			= 0x0259,
+	avf_aqc_opc_add_control_packet_filter	= 0x025A,
+	avf_aqc_opc_remove_control_packet_filter	= 0x025B,
+	avf_aqc_opc_add_cloud_filters		= 0x025C,
+	avf_aqc_opc_remove_cloud_filters	= 0x025D,
+	avf_aqc_opc_clear_wol_switch_filters	= 0x025E,
+	avf_aqc_opc_replace_cloud_filters	= 0x025F,
+
+	avf_aqc_opc_add_mirror_rule	= 0x0260,
+	avf_aqc_opc_delete_mirror_rule	= 0x0261,
+
+	/* Dynamic Device Personalization */
+	avf_aqc_opc_write_personalization_profile	= 0x0270,
+	avf_aqc_opc_get_personalization_profile_list	= 0x0271,
+
+	/* DCB commands */
+	avf_aqc_opc_dcb_ignore_pfc	= 0x0301,
+	avf_aqc_opc_dcb_updated	= 0x0302,
+	avf_aqc_opc_set_dcb_parameters = 0x0303,
+
+	/* TX scheduler */
+	avf_aqc_opc_configure_vsi_bw_limit		= 0x0400,
+	avf_aqc_opc_configure_vsi_ets_sla_bw_limit	= 0x0406,
+	avf_aqc_opc_configure_vsi_tc_bw		= 0x0407,
+	avf_aqc_opc_query_vsi_bw_config		= 0x0408,
+	avf_aqc_opc_query_vsi_ets_sla_config		= 0x040A,
+	avf_aqc_opc_configure_switching_comp_bw_limit	= 0x0410,
+
+	avf_aqc_opc_enable_switching_comp_ets			= 0x0413,
+	avf_aqc_opc_modify_switching_comp_ets			= 0x0414,
+	avf_aqc_opc_disable_switching_comp_ets			= 0x0415,
+	avf_aqc_opc_configure_switching_comp_ets_bw_limit	= 0x0416,
+	avf_aqc_opc_configure_switching_comp_bw_config		= 0x0417,
+	avf_aqc_opc_query_switching_comp_ets_config		= 0x0418,
+	avf_aqc_opc_query_port_ets_config			= 0x0419,
+	avf_aqc_opc_query_switching_comp_bw_config		= 0x041A,
+	avf_aqc_opc_suspend_port_tx				= 0x041B,
+	avf_aqc_opc_resume_port_tx				= 0x041C,
+	avf_aqc_opc_configure_partition_bw			= 0x041D,
+	/* hmc */
+	avf_aqc_opc_query_hmc_resource_profile	= 0x0500,
+	avf_aqc_opc_set_hmc_resource_profile	= 0x0501,
+
+	/* phy commands */
+	avf_aqc_opc_get_phy_abilities		= 0x0600,
+	avf_aqc_opc_set_phy_config		= 0x0601,
+	avf_aqc_opc_set_mac_config		= 0x0603,
+	avf_aqc_opc_set_link_restart_an	= 0x0605,
+	avf_aqc_opc_get_link_status		= 0x0607,
+	avf_aqc_opc_set_phy_int_mask		= 0x0613,
+	avf_aqc_opc_get_local_advt_reg		= 0x0614,
+	avf_aqc_opc_set_local_advt_reg		= 0x0615,
+	avf_aqc_opc_get_partner_advt		= 0x0616,
+	avf_aqc_opc_set_lb_modes		= 0x0618,
+	avf_aqc_opc_get_phy_wol_caps		= 0x0621,
+	avf_aqc_opc_set_phy_debug		= 0x0622,
+	avf_aqc_opc_upload_ext_phy_fm		= 0x0625,
+	avf_aqc_opc_run_phy_activity		= 0x0626,
+	avf_aqc_opc_set_phy_register		= 0x0628,
+	avf_aqc_opc_get_phy_register		= 0x0629,
+
+	/* NVM commands */
+	avf_aqc_opc_nvm_read			= 0x0701,
+	avf_aqc_opc_nvm_erase			= 0x0702,
+	avf_aqc_opc_nvm_update			= 0x0703,
+	avf_aqc_opc_nvm_config_read		= 0x0704,
+	avf_aqc_opc_nvm_config_write		= 0x0705,
+	avf_aqc_opc_nvm_progress		= 0x0706,
+	avf_aqc_opc_oem_post_update		= 0x0720,
+	avf_aqc_opc_thermal_sensor		= 0x0721,
+
+	/* virtualization commands */
+	avf_aqc_opc_send_msg_to_pf		= 0x0801,
+	avf_aqc_opc_send_msg_to_vf		= 0x0802,
+	avf_aqc_opc_send_msg_to_peer		= 0x0803,
+
+	/* alternate structure */
+	avf_aqc_opc_alternate_write		= 0x0900,
+	avf_aqc_opc_alternate_write_indirect	= 0x0901,
+	avf_aqc_opc_alternate_read		= 0x0902,
+	avf_aqc_opc_alternate_read_indirect	= 0x0903,
+	avf_aqc_opc_alternate_write_done	= 0x0904,
+	avf_aqc_opc_alternate_set_mode		= 0x0905,
+	avf_aqc_opc_alternate_clear_port	= 0x0906,
+
+	/* LLDP commands */
+	avf_aqc_opc_lldp_get_mib	= 0x0A00,
+	avf_aqc_opc_lldp_update_mib	= 0x0A01,
+	avf_aqc_opc_lldp_add_tlv	= 0x0A02,
+	avf_aqc_opc_lldp_update_tlv	= 0x0A03,
+	avf_aqc_opc_lldp_delete_tlv	= 0x0A04,
+	avf_aqc_opc_lldp_stop		= 0x0A05,
+	avf_aqc_opc_lldp_start		= 0x0A06,
+	avf_aqc_opc_get_cee_dcb_cfg	= 0x0A07,
+	avf_aqc_opc_lldp_set_local_mib	= 0x0A08,
+	avf_aqc_opc_lldp_stop_start_spec_agent	= 0x0A09,
+
+	/* Tunnel commands */
+	avf_aqc_opc_add_udp_tunnel	= 0x0B00,
+	avf_aqc_opc_del_udp_tunnel	= 0x0B01,
+	avf_aqc_opc_set_rss_key	= 0x0B02,
+	avf_aqc_opc_set_rss_lut	= 0x0B03,
+	avf_aqc_opc_get_rss_key	= 0x0B04,
+	avf_aqc_opc_get_rss_lut	= 0x0B05,
+
+	/* Async Events */
+	avf_aqc_opc_event_lan_overflow		= 0x1001,
+
+	/* OEM commands */
+	avf_aqc_opc_oem_parameter_change	= 0xFE00,
+	avf_aqc_opc_oem_device_status_change	= 0xFE01,
+	avf_aqc_opc_oem_ocsd_initialize	= 0xFE02,
+	avf_aqc_opc_oem_ocbb_initialize	= 0xFE03,
+
+	/* debug commands */
+	avf_aqc_opc_debug_read_reg		= 0xFF03,
+	avf_aqc_opc_debug_write_reg		= 0xFF04,
+	avf_aqc_opc_debug_modify_reg		= 0xFF07,
+	avf_aqc_opc_debug_dump_internals	= 0xFF08,
+};
+
+/* command structures and indirect data structures */
+
+/* Structure naming conventions:
+ * - no suffix for direct command descriptor structures
+ * - _data for indirect sent data
+ * - _resp for indirect return data (data which is both will use _data)
+ * - _completion for direct return data
+ * - _element_ for repeated elements (may also be _data or _resp)
+ *
+ * Command structures are expected to overlay the params.raw member of the basic
+ * descriptor, and as such cannot exceed 16 bytes in length.
+ */
+
+/* This macro is used to generate a compilation error if a structure
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure is not of the correct size, otherwise it creates an enum that is
+ * never used.
+ */
+#define AVF_CHECK_STRUCT_LEN(n, X) enum avf_static_assert_enum_##X \
+	{ avf_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
+
+/* This macro is used extensively to ensure that command structures are 16
+ * bytes in length as they have to map to the raw array of that size.
+ */
+#define AVF_CHECK_CMD_LENGTH(X)	AVF_CHECK_STRUCT_LEN(16, X)
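
A concrete (hypothetical) failure case may help: if a command structure is
not exactly 16 bytes, the division inside AVF_CHECK_STRUCT_LEN becomes a
divide-by-zero in a constant expression and the build fails at the check.

	/* Hypothetical 15-byte command struct: the check below fails to compile. */
	struct avf_aqc_example_bad {
		__le16	a;
		u8	b[13];
	};

	AVF_CHECK_CMD_LENGTH(avf_aqc_example_bad); /* divide-by-zero at compile time */
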
+
+/* internal (0x00XX) commands */
+
+/* Get version (direct 0x0001) */
+struct avf_aqc_get_version {
+	__le32 rom_ver;
+	__le32 fw_build;
+	__le16 fw_major;
+	__le16 fw_minor;
+	__le16 api_major;
+	__le16 api_minor;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_version);
+
+/* Send driver version (indirect 0x0002) */
+struct avf_aqc_driver_version {
+	u8	driver_major_ver;
+	u8	driver_minor_ver;
+	u8	driver_build_ver;
+	u8	driver_subbuild_ver;
+	u8	reserved[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_driver_version);
+
+/* Queue Shutdown (direct 0x0003) */
+struct avf_aqc_queue_shutdown {
+	__le32	driver_unloading;
+#define AVF_AQ_DRIVER_UNLOADING	0x1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_queue_shutdown);
+
+/* Set PF context (0x0004, direct) */
+struct avf_aqc_set_pf_context {
+	u8	pf_id;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_pf_context);
+
+/* Request resource ownership (direct 0x0008)
+ * Release resource ownership (direct 0x0009)
+ */
+#define AVF_AQ_RESOURCE_NVM			1
+#define AVF_AQ_RESOURCE_SDP			2
+#define AVF_AQ_RESOURCE_ACCESS_READ		1
+#define AVF_AQ_RESOURCE_ACCESS_WRITE		2
+#define AVF_AQ_RESOURCE_NVM_READ_TIMEOUT	3000
+#define AVF_AQ_RESOURCE_NVM_WRITE_TIMEOUT	180000
+
+struct avf_aqc_request_resource {
+	__le16	resource_id;
+	__le16	access_type;
+	__le32	timeout;
+	__le32	resource_number;
+	u8	reserved[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_request_resource);
+
+/* Get function capabilities (indirect 0x000A)
+ * Get device capabilities (indirect 0x000B)
+ */
+struct avf_aqc_list_capabilites {
+	u8 command_flags;
+#define AVF_AQ_LIST_CAP_PF_INDEX_EN	1
+	u8 pf_index;
+	u8 reserved[2];
+	__le32 count;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_list_capabilites);
+
+struct avf_aqc_list_capabilities_element_resp {
+	__le16	id;
+	u8	major_rev;
+	u8	minor_rev;
+	__le32	number;
+	__le32	logical_id;
+	__le32	phys_id;
+	u8	reserved[16];
+};
+
+/* list of caps */
+
+#define AVF_AQ_CAP_ID_SWITCH_MODE	0x0001
+#define AVF_AQ_CAP_ID_MNG_MODE		0x0002
+#define AVF_AQ_CAP_ID_NPAR_ACTIVE	0x0003
+#define AVF_AQ_CAP_ID_OS2BMC_CAP	0x0004
+#define AVF_AQ_CAP_ID_FUNCTIONS_VALID	0x0005
+#define AVF_AQ_CAP_ID_ALTERNATE_RAM	0x0006
+#define AVF_AQ_CAP_ID_WOL_AND_PROXY	0x0008
+#define AVF_AQ_CAP_ID_SRIOV		0x0012
+#define AVF_AQ_CAP_ID_VF		0x0013
+#define AVF_AQ_CAP_ID_VMDQ		0x0014
+#define AVF_AQ_CAP_ID_8021QBG		0x0015
+#define AVF_AQ_CAP_ID_8021QBR		0x0016
+#define AVF_AQ_CAP_ID_VSI		0x0017
+#define AVF_AQ_CAP_ID_DCB		0x0018
+#define AVF_AQ_CAP_ID_FCOE		0x0021
+#define AVF_AQ_CAP_ID_ISCSI		0x0022
+#define AVF_AQ_CAP_ID_RSS		0x0040
+#define AVF_AQ_CAP_ID_RXQ		0x0041
+#define AVF_AQ_CAP_ID_TXQ		0x0042
+#define AVF_AQ_CAP_ID_MSIX		0x0043
+#define AVF_AQ_CAP_ID_VF_MSIX		0x0044
+#define AVF_AQ_CAP_ID_FLOW_DIRECTOR	0x0045
+#define AVF_AQ_CAP_ID_1588		0x0046
+#define AVF_AQ_CAP_ID_IWARP		0x0051
+#define AVF_AQ_CAP_ID_LED		0x0061
+#define AVF_AQ_CAP_ID_SDP		0x0062
+#define AVF_AQ_CAP_ID_MDIO		0x0063
+#define AVF_AQ_CAP_ID_WSR_PROT		0x0064
+#define AVF_AQ_CAP_ID_NVM_MGMT		0x0080
+#define AVF_AQ_CAP_ID_FLEX10		0x00F1
+#define AVF_AQ_CAP_ID_CEM		0x00F2
+
+/* Set CPPM Configuration (direct 0x0103) */
+struct avf_aqc_cppm_configuration {
+	__le16	command_flags;
+#define AVF_AQ_CPPM_EN_LTRC	0x0800
+#define AVF_AQ_CPPM_EN_DMCTH	0x1000
+#define AVF_AQ_CPPM_EN_DMCTLX	0x2000
+#define AVF_AQ_CPPM_EN_HPTC	0x4000
+#define AVF_AQ_CPPM_EN_DMARC	0x8000
+	__le16	ttlx;
+	__le32	dmacr;
+	__le16	dmcth;
+	u8	hptc;
+	u8	reserved;
+	__le32	pfltrc;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_cppm_configuration);
+
+/* Set ARP Proxy command / response (indirect 0x0104) */
+struct avf_aqc_arp_proxy_data {
+	__le16	command_flags;
+#define AVF_AQ_ARP_INIT_IPV4	0x0800
+#define AVF_AQ_ARP_UNSUP_CTL	0x1000
+#define AVF_AQ_ARP_ENA		0x2000
+#define AVF_AQ_ARP_ADD_IPV4	0x4000
+#define AVF_AQ_ARP_DEL_IPV4	0x8000
+	__le16	table_id;
+	__le32	enabled_offloads;
+#define AVF_AQ_ARP_DIRECTED_OFFLOAD_ENABLE	0x00000020
+#define AVF_AQ_ARP_OFFLOAD_ENABLE		0x00000800
+	__le32	ip_addr;
+	u8	mac_addr[6];
+	u8	reserved[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x14, avf_aqc_arp_proxy_data);
+
+/* Set NS Proxy Table Entry Command (indirect 0x0105) */
+struct avf_aqc_ns_proxy_data {
+	__le16	table_idx_mac_addr_0;
+	__le16	table_idx_mac_addr_1;
+	__le16	table_idx_ipv6_0;
+	__le16	table_idx_ipv6_1;
+	__le16	control;
+#define AVF_AQ_NS_PROXY_ADD_0		0x0001
+#define AVF_AQ_NS_PROXY_DEL_0		0x0002
+#define AVF_AQ_NS_PROXY_ADD_1		0x0004
+#define AVF_AQ_NS_PROXY_DEL_1		0x0008
+#define AVF_AQ_NS_PROXY_ADD_IPV6_0	0x0010
+#define AVF_AQ_NS_PROXY_DEL_IPV6_0	0x0020
+#define AVF_AQ_NS_PROXY_ADD_IPV6_1	0x0040
+#define AVF_AQ_NS_PROXY_DEL_IPV6_1	0x0080
+#define AVF_AQ_NS_PROXY_COMMAND_SEQ	0x0100
+#define AVF_AQ_NS_PROXY_INIT_IPV6_TBL	0x0200
+#define AVF_AQ_NS_PROXY_INIT_MAC_TBL	0x0400
+#define AVF_AQ_NS_PROXY_OFFLOAD_ENABLE	0x0800
+#define AVF_AQ_NS_PROXY_DIRECTED_OFFLOAD_ENABLE	0x1000
+	u8	mac_addr_0[6];
+	u8	mac_addr_1[6];
+	u8	local_mac_addr[6];
+	u8	ipv6_addr_0[16]; /* Warning! spec specifies BE byte order */
+	u8	ipv6_addr_1[16];
+};
+
+AVF_CHECK_STRUCT_LEN(0x3c, avf_aqc_ns_proxy_data);
+
+/* Manage LAA Command (0x0106) - obsolete */
+struct avf_aqc_mng_laa {
+	__le16	command_flags;
+#define AVF_AQ_LAA_FLAG_WR	0x8000
+	u8	reserved[2];
+	__le32	sal;
+	__le16	sah;
+	u8	reserved2[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mng_laa);
+
+/* Manage MAC Address Read Command (indirect 0x0107) */
+struct avf_aqc_mac_address_read {
+	__le16	command_flags;
+#define AVF_AQC_LAN_ADDR_VALID		0x10
+#define AVF_AQC_SAN_ADDR_VALID		0x20
+#define AVF_AQC_PORT_ADDR_VALID	0x40
+#define AVF_AQC_WOL_ADDR_VALID		0x80
+#define AVF_AQC_MC_MAG_EN_VALID	0x100
+#define AVF_AQC_WOL_PRESERVE_STATUS	0x200
+#define AVF_AQC_ADDR_VALID_MASK	0x3F0
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mac_address_read);
+
+struct avf_aqc_mac_address_read_data {
+	u8 pf_lan_mac[6];
+	u8 pf_san_mac[6];
+	u8 port_mac[6];
+	u8 pf_wol_mac[6];
+};
+
+AVF_CHECK_STRUCT_LEN(24, avf_aqc_mac_address_read_data);
+
+/* Manage MAC Address Write Command (0x0108) */
+struct avf_aqc_mac_address_write {
+	__le16	command_flags;
+#define AVF_AQC_MC_MAG_EN		0x0100
+#define AVF_AQC_WOL_PRESERVE_ON_PFR	0x0200
+#define AVF_AQC_WRITE_TYPE_LAA_ONLY	0x0000
+#define AVF_AQC_WRITE_TYPE_LAA_WOL	0x4000
+#define AVF_AQC_WRITE_TYPE_PORT	0x8000
+#define AVF_AQC_WRITE_TYPE_UPDATE_MC_MAG	0xC000
+#define AVF_AQC_WRITE_TYPE_MASK	0xC000
+
+	__le16	mac_sah;
+	__le32	mac_sal;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mac_address_write);
+
+/* PXE commands (0x011x) */
+
+/* Clear PXE Command and response  (direct 0x0110) */
+struct avf_aqc_clear_pxe {
+	u8	rx_cnt;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_clear_pxe);
+
+/* Set WoL Filter (0x0120) */
+
+struct avf_aqc_set_wol_filter {
+	__le16 filter_index;
+#define AVF_AQC_MAX_NUM_WOL_FILTERS	8
+#define AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT	15
+#define AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_MASK	(0x1 << \
+		AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT)
+
+#define AVF_AQC_SET_WOL_FILTER_INDEX_SHIFT		0
+#define AVF_AQC_SET_WOL_FILTER_INDEX_MASK	(0x7 << \
+		AVF_AQC_SET_WOL_FILTER_INDEX_SHIFT)
+	__le16 cmd_flags;
+#define AVF_AQC_SET_WOL_FILTER				0x8000
+#define AVF_AQC_SET_WOL_FILTER_NO_TCO_WOL		0x4000
+#define AVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR	0x2000
+#define AVF_AQC_SET_WOL_FILTER_ACTION_CLEAR		0
+#define AVF_AQC_SET_WOL_FILTER_ACTION_SET		1
+	__le16 valid_flags;
+#define AVF_AQC_SET_WOL_FILTER_ACTION_VALID		0x8000
+#define AVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID	0x4000
+	u8 reserved[2];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_wol_filter);
+
+struct avf_aqc_set_wol_filter_data {
+	u8 filter[128];
+	u8 mask[16];
+};
+
+AVF_CHECK_STRUCT_LEN(0x90, avf_aqc_set_wol_filter_data);
+
+/* Get Wake Reason (0x0121) */
+
+struct avf_aqc_get_wake_reason_completion {
+	u8 reserved_1[2];
+	__le16 wake_reason;
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT	0
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_MASK (0xFF << \
+		AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT)
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT	8
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_MASK	(0xFF << \
+		AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT)
+	u8 reserved_2[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_wake_reason_completion);
+
+/* Switch configuration commands (0x02xx) */
+
+/* Used by many indirect commands that only pass an seid and a buffer in the
+ * command
+ */
+struct avf_aqc_switch_seid {
+	__le16	seid;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_switch_seid);
+
+/* Get Switch Configuration command (indirect 0x0200)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+struct avf_aqc_get_switch_config_header_resp {
+	__le16	num_reported;
+	__le16	num_total;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_switch_config_header_resp);
+
+struct avf_aqc_switch_config_element_resp {
+	u8	element_type;
+#define AVF_AQ_SW_ELEM_TYPE_MAC	1
+#define AVF_AQ_SW_ELEM_TYPE_PF		2
+#define AVF_AQ_SW_ELEM_TYPE_VF		3
+#define AVF_AQ_SW_ELEM_TYPE_EMP	4
+#define AVF_AQ_SW_ELEM_TYPE_BMC	5
+#define AVF_AQ_SW_ELEM_TYPE_PV		16
+#define AVF_AQ_SW_ELEM_TYPE_VEB	17
+#define AVF_AQ_SW_ELEM_TYPE_PA		18
+#define AVF_AQ_SW_ELEM_TYPE_VSI	19
+	u8	revision;
+#define AVF_AQ_SW_ELEM_REV_1		1
+	__le16	seid;
+	__le16	uplink_seid;
+	__le16	downlink_seid;
+	u8	reserved[3];
+	u8	connection_type;
+#define AVF_AQ_CONN_TYPE_REGULAR	0x1
+#define AVF_AQ_CONN_TYPE_DEFAULT	0x2
+#define AVF_AQ_CONN_TYPE_CASCADED	0x3
+	__le16	scheduler_id;
+	__le16	element_info;
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_switch_config_element_resp);
+
+/* Get Switch Configuration (indirect 0x0200)
+ *    an array of elements is returned in the response buffer;
+ *    the first in the array is the header, the remainder are elements
+ */
+struct avf_aqc_get_switch_config_resp {
+	struct avf_aqc_get_switch_config_header_resp	header;
+	struct avf_aqc_switch_config_element_resp	element[1];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_get_switch_config_resp);
+
+/* Add Statistics (direct 0x0201)
+ * Remove Statistics (direct 0x0202)
+ */
+struct avf_aqc_add_remove_statistics {
+	__le16	seid;
+	__le16	vlan;
+	__le16	stat_index;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_statistics);
+
+/* Set Port Parameters command (direct 0x0203) */
+struct avf_aqc_set_port_parameters {
+	__le16	command_flags;
+#define AVF_AQ_SET_P_PARAMS_SAVE_BAD_PACKETS	1
+#define AVF_AQ_SET_P_PARAMS_PAD_SHORT_PACKETS	2 /* must set! */
+#define AVF_AQ_SET_P_PARAMS_DOUBLE_VLAN_ENA	4
+	__le16	bad_frame_vsi;
+#define AVF_AQ_SET_P_PARAMS_BFRAME_SEID_SHIFT	0x0
+#define AVF_AQ_SET_P_PARAMS_BFRAME_SEID_MASK	0x3FF
+	__le16	default_seid;        /* reserved for command */
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_port_parameters);
+
+/* Get Switch Resource Allocation (indirect 0x0204) */
+struct avf_aqc_get_switch_resource_alloc {
+	u8	num_entries;         /* reserved for command */
+	u8	reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_switch_resource_alloc);
+
+/* expect an array of these structs in the response buffer */
+struct avf_aqc_switch_resource_alloc_element_resp {
+	u8	resource_type;
+#define AVF_AQ_RESOURCE_TYPE_VEB		0x0
+#define AVF_AQ_RESOURCE_TYPE_VSI		0x1
+#define AVF_AQ_RESOURCE_TYPE_MACADDR		0x2
+#define AVF_AQ_RESOURCE_TYPE_STAG		0x3
+#define AVF_AQ_RESOURCE_TYPE_ETAG		0x4
+#define AVF_AQ_RESOURCE_TYPE_MULTICAST_HASH	0x5
+#define AVF_AQ_RESOURCE_TYPE_UNICAST_HASH	0x6
+#define AVF_AQ_RESOURCE_TYPE_VLAN		0x7
+#define AVF_AQ_RESOURCE_TYPE_VSI_LIST_ENTRY	0x8
+#define AVF_AQ_RESOURCE_TYPE_ETAG_LIST_ENTRY	0x9
+#define AVF_AQ_RESOURCE_TYPE_VLAN_STAT_POOL	0xA
+#define AVF_AQ_RESOURCE_TYPE_MIRROR_RULE	0xB
+#define AVF_AQ_RESOURCE_TYPE_QUEUE_SETS	0xC
+#define AVF_AQ_RESOURCE_TYPE_VLAN_FILTERS	0xD
+#define AVF_AQ_RESOURCE_TYPE_INNER_MAC_FILTERS	0xF
+#define AVF_AQ_RESOURCE_TYPE_IP_FILTERS	0x10
+#define AVF_AQ_RESOURCE_TYPE_GRE_VN_KEYS	0x11
+#define AVF_AQ_RESOURCE_TYPE_VN2_KEYS		0x12
+#define AVF_AQ_RESOURCE_TYPE_TUNNEL_PORTS	0x13
+	u8	reserved1;
+	__le16	guaranteed;
+	__le16	total;
+	__le16	used;
+	__le16	total_unalloced;
+	u8	reserved2[6];
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_switch_resource_alloc_element_resp);
+
+/* Set Switch Configuration (direct 0x0205) */
+struct avf_aqc_set_switch_config {
+	__le16	flags;
+/* flags used for both fields below */
+#define AVF_AQ_SET_SWITCH_CFG_PROMISC		0x0001
+#define AVF_AQ_SET_SWITCH_CFG_L2_FILTER	0x0002
+#define AVF_AQ_SET_SWITCH_CFG_HW_ATR_EVICT	0x0004
+	__le16	valid_flags;
+	/* The ethertype in switch_tag is dropped on ingress and used
+	 * internally by the switch. Set this to zero for the default
+	 * of 0x88a8 (802.1ad). Should be zero for firmware API
+	 * versions lower than 1.7.
+	 */
+	__le16	switch_tag;
+	/* The ethertypes in first_tag and second_tag are used to
+	 * match the outer and inner VLAN tags (respectively) when HW
+	 * double VLAN tagging is enabled via the set port parameters
+	 * AQ command. Otherwise these are both ignored. Set them to
+	 * zero for their defaults of 0x8100 (802.1Q). Should be zero
+	 * for firmware API versions lower than 1.7.
+	 */
+	__le16	first_tag;
+	__le16	second_tag;
+	u8	reserved[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_switch_config);
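
A short sketch of the default-tag convention described in the comments above
(illustrative, not part of the patch); zero in each tag field asks firmware
for the spec defaults, and cmd is assumed to overlay desc.params.raw:

	/* Illustrative: request default switch and VLAN tag ethertypes. */
	struct avf_aqc_set_switch_config *cmd =
		(struct avf_aqc_set_switch_config *)&desc.params.raw;

	cmd->flags = 0;
	cmd->valid_flags = 0;
	cmd->switch_tag = CPU_TO_LE16(0);	/* default 0x88a8 (802.1ad) */
	cmd->first_tag = CPU_TO_LE16(0);	/* default 0x8100 (802.1Q) */
	cmd->second_tag = CPU_TO_LE16(0);	/* default 0x8100 (802.1Q) */
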
+
+/* Read Receive control registers  (direct 0x0206)
+ * Write Receive control registers (direct 0x0207)
+ *     used for accessing Rx control registers that can be
+ *     slow and need special handling when under high Rx load
+ */
+struct avf_aqc_rx_ctl_reg_read_write {
+	__le32 reserved1;
+	__le32 address;
+	__le32 reserved2;
+	__le32 value;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_rx_ctl_reg_read_write);
+
+/* Add VSI (indirect 0x0210)
+ *    this indirect command uses struct avf_aqc_vsi_properties_data
+ *    as the indirect buffer (128 bytes)
+ *
+ * Update VSI (indirect 0x211)
+ *     uses the same data structure as Add VSI
+ *
+ * Get VSI (indirect 0x0212)
+ *     uses the same completion and data structure as Add VSI
+ */
+struct avf_aqc_add_get_update_vsi {
+	__le16	uplink_seid;
+	u8	connection_type;
+#define AVF_AQ_VSI_CONN_TYPE_NORMAL	0x1
+#define AVF_AQ_VSI_CONN_TYPE_DEFAULT	0x2
+#define AVF_AQ_VSI_CONN_TYPE_CASCADED	0x3
+	u8	reserved1;
+	u8	vf_id;
+	u8	reserved2;
+	__le16	vsi_flags;
+#define AVF_AQ_VSI_TYPE_SHIFT		0x0
+#define AVF_AQ_VSI_TYPE_MASK		(0x3 << AVF_AQ_VSI_TYPE_SHIFT)
+#define AVF_AQ_VSI_TYPE_VF		0x0
+#define AVF_AQ_VSI_TYPE_VMDQ2		0x1
+#define AVF_AQ_VSI_TYPE_PF		0x2
+#define AVF_AQ_VSI_TYPE_EMP_MNG	0x3
+#define AVF_AQ_VSI_FLAG_CASCADED_PV	0x4
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_get_update_vsi);
+
+struct avf_aqc_add_get_update_vsi_completion {
+	__le16 seid;
+	__le16 vsi_number;
+	__le16 vsi_used;
+	__le16 vsi_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_get_update_vsi_completion);
+
+struct avf_aqc_vsi_properties_data {
+	/* first 96 bytes are written by SW */
+	__le16	valid_sections;
+#define AVF_AQ_VSI_PROP_SWITCH_VALID		0x0001
+#define AVF_AQ_VSI_PROP_SECURITY_VALID		0x0002
+#define AVF_AQ_VSI_PROP_VLAN_VALID		0x0004
+#define AVF_AQ_VSI_PROP_CAS_PV_VALID		0x0008
+#define AVF_AQ_VSI_PROP_INGRESS_UP_VALID	0x0010
+#define AVF_AQ_VSI_PROP_EGRESS_UP_VALID	0x0020
+#define AVF_AQ_VSI_PROP_QUEUE_MAP_VALID	0x0040
+#define AVF_AQ_VSI_PROP_QUEUE_OPT_VALID	0x0080
+#define AVF_AQ_VSI_PROP_OUTER_UP_VALID		0x0100
+#define AVF_AQ_VSI_PROP_SCHED_VALID		0x0200
+	/* switch section */
+	__le16	switch_id; /* 12bit id combined with flags below */
+#define AVF_AQ_VSI_SW_ID_SHIFT		0x0000
+#define AVF_AQ_VSI_SW_ID_MASK		(0xFFF << AVF_AQ_VSI_SW_ID_SHIFT)
+#define AVF_AQ_VSI_SW_ID_FLAG_NOT_STAG	0x1000
+#define AVF_AQ_VSI_SW_ID_FLAG_ALLOW_LB	0x2000
+#define AVF_AQ_VSI_SW_ID_FLAG_LOCAL_LB	0x4000
+	u8	sw_reserved[2];
+	/* security section */
+	u8	sec_flags;
+#define AVF_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD	0x01
+#define AVF_AQ_VSI_SEC_FLAG_ENABLE_VLAN_CHK	0x02
+#define AVF_AQ_VSI_SEC_FLAG_ENABLE_MAC_CHK	0x04
+	u8	sec_reserved;
+	/* VLAN section */
+	__le16	pvid; /* VLANS include priority bits */
+	__le16	fcoe_pvid;
+	u8	port_vlan_flags;
+#define AVF_AQ_VSI_PVLAN_MODE_SHIFT	0x00
+#define AVF_AQ_VSI_PVLAN_MODE_MASK	(0x03 << \
+					 AVF_AQ_VSI_PVLAN_MODE_SHIFT)
+#define AVF_AQ_VSI_PVLAN_MODE_TAGGED	0x01
+#define AVF_AQ_VSI_PVLAN_MODE_UNTAGGED	0x02
+#define AVF_AQ_VSI_PVLAN_MODE_ALL	0x03
+#define AVF_AQ_VSI_PVLAN_INSERT_PVID	0x04
+#define AVF_AQ_VSI_PVLAN_EMOD_SHIFT	0x03
+#define AVF_AQ_VSI_PVLAN_EMOD_MASK	(0x3 << \
+					 AVF_AQ_VSI_PVLAN_EMOD_SHIFT)
+#define AVF_AQ_VSI_PVLAN_EMOD_STR_BOTH	0x0
+#define AVF_AQ_VSI_PVLAN_EMOD_STR_UP	0x08
+#define AVF_AQ_VSI_PVLAN_EMOD_STR	0x10
+#define AVF_AQ_VSI_PVLAN_EMOD_NOTHING	0x18
+	u8	pvlan_reserved[3];
+	/* ingress egress up sections */
+	__le32	ingress_table; /* bitmap, 3 bits per up */
+#define AVF_AQ_VSI_UP_TABLE_UP0_SHIFT	0
+#define AVF_AQ_VSI_UP_TABLE_UP0_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP0_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP1_SHIFT	3
+#define AVF_AQ_VSI_UP_TABLE_UP1_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP1_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP2_SHIFT	6
+#define AVF_AQ_VSI_UP_TABLE_UP2_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP2_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP3_SHIFT	9
+#define AVF_AQ_VSI_UP_TABLE_UP3_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP3_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP4_SHIFT	12
+#define AVF_AQ_VSI_UP_TABLE_UP4_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP4_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP5_SHIFT	15
+#define AVF_AQ_VSI_UP_TABLE_UP5_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP5_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP6_SHIFT	18
+#define AVF_AQ_VSI_UP_TABLE_UP6_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP6_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP7_SHIFT	21
+#define AVF_AQ_VSI_UP_TABLE_UP7_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP7_SHIFT)
+	__le32	egress_table;   /* same defines as for ingress table */
+	/* cascaded PV section */
+	__le16	cas_pv_tag;
+	u8	cas_pv_flags;
+#define AVF_AQ_VSI_CAS_PV_TAGX_SHIFT		0x00
+#define AVF_AQ_VSI_CAS_PV_TAGX_MASK		(0x03 << \
+						 AVF_AQ_VSI_CAS_PV_TAGX_SHIFT)
+#define AVF_AQ_VSI_CAS_PV_TAGX_LEAVE		0x00
+#define AVF_AQ_VSI_CAS_PV_TAGX_REMOVE		0x01
+#define AVF_AQ_VSI_CAS_PV_TAGX_COPY		0x02
+#define AVF_AQ_VSI_CAS_PV_INSERT_TAG		0x10
+#define AVF_AQ_VSI_CAS_PV_ETAG_PRUNE		0x20
+#define AVF_AQ_VSI_CAS_PV_ACCEPT_HOST_TAG	0x40
+	u8	cas_pv_reserved;
+	/* queue mapping section */
+	__le16	mapping_flags;
+#define AVF_AQ_VSI_QUE_MAP_CONTIG	0x0
+#define AVF_AQ_VSI_QUE_MAP_NONCONTIG	0x1
+	__le16	queue_mapping[16];
+#define AVF_AQ_VSI_QUEUE_SHIFT		0x0
+#define AVF_AQ_VSI_QUEUE_MASK		(0x7FF << AVF_AQ_VSI_QUEUE_SHIFT)
+	__le16	tc_mapping[8];
+#define AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT	0
+#define AVF_AQ_VSI_TC_QUE_OFFSET_MASK	(0x1FF << \
+					 AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT)
+#define AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT	9
+#define AVF_AQ_VSI_TC_QUE_NUMBER_MASK	(0x7 << \
+					 AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT)
+	/* queueing option section */
+	u8	queueing_opt_flags;
+#define AVF_AQ_VSI_QUE_OPT_MULTICAST_UDP_ENA	0x04
+#define AVF_AQ_VSI_QUE_OPT_UNICAST_UDP_ENA	0x08
+#define AVF_AQ_VSI_QUE_OPT_TCP_ENA	0x10
+#define AVF_AQ_VSI_QUE_OPT_FCOE_ENA	0x20
+#define AVF_AQ_VSI_QUE_OPT_RSS_LUT_PF	0x00
+#define AVF_AQ_VSI_QUE_OPT_RSS_LUT_VSI	0x40
+	u8	queueing_opt_reserved[3];
+	/* scheduler section */
+	u8	up_enable_bits;
+	u8	sched_reserved;
+	/* outer up section */
+	__le32	outer_up_table; /* same structure and defines as ingress tbl */
+	u8	cmd_reserved[8];
+	/* last 32 bytes are written by FW */
+	__le16	qs_handle[8];
+#define AVF_AQ_VSI_QS_HANDLE_INVALID	0xFFFF
+	__le16	stat_counter_idx;
+	__le16	sched_id;
+	u8	resp_reserved[12];
+};
+
+AVF_CHECK_STRUCT_LEN(128, avf_aqc_vsi_properties_data);
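
For illustration only (not part of the patch), the queue-map fields are
bit-packed with the shift/mask defines above; a contiguous mapping of 16
queues starting at queue 0 in TC0 could be encoded roughly as below, assuming
(as in the equivalent i40e layout) that the "number" field holds the
power-of-two queue count, with buf and ctxt being hypothetical names:

	/* Illustrative: 2^4 = 16 queues at offset 0 for traffic class 0. */
	struct avf_aqc_vsi_properties_data *ctxt = buf; /* 128-byte indirect buffer */
	u16 qmap = (0 << AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT) |
		   (4 << AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT);

	ctxt->mapping_flags = CPU_TO_LE16(AVF_AQ_VSI_QUE_MAP_CONTIG);
	ctxt->queue_mapping[0] = CPU_TO_LE16(0);
	ctxt->tc_mapping[0] = CPU_TO_LE16(qmap);
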
+
+/* Add Port Virtualizer (direct 0x0220)
+ * also used for update PV (direct 0x0221) but only flags are used
+ * (IS_CTRL_PORT only works on add PV)
+ */
+struct avf_aqc_add_update_pv {
+	__le16	command_flags;
+#define AVF_AQC_PV_FLAG_PV_TYPE		0x1
+#define AVF_AQC_PV_FLAG_FWD_UNKNOWN_STAG_EN	0x2
+#define AVF_AQC_PV_FLAG_FWD_UNKNOWN_ETAG_EN	0x4
+#define AVF_AQC_PV_FLAG_IS_CTRL_PORT		0x8
+	__le16	uplink_seid;
+	__le16	connected_seid;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_update_pv);
+
+struct avf_aqc_add_update_pv_completion {
+	/* reserved for update; for add also encodes error if rc == ENOSPC */
+	__le16	pv_seid;
+#define AVF_AQC_PV_ERR_FLAG_NO_PV	0x1
+#define AVF_AQC_PV_ERR_FLAG_NO_SCHED	0x2
+#define AVF_AQC_PV_ERR_FLAG_NO_COUNTER	0x4
+#define AVF_AQC_PV_ERR_FLAG_NO_ENTRY	0x8
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_update_pv_completion);
+
+/* Get PV Params (direct 0x0222)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+
+struct avf_aqc_get_pv_params_completion {
+	__le16	seid;
+	__le16	default_stag;
+	__le16	pv_flags; /* same flags as add_pv */
+#define AVF_AQC_GET_PV_PV_TYPE			0x1
+#define AVF_AQC_GET_PV_FRWD_UNKNOWN_STAG	0x2
+#define AVF_AQC_GET_PV_FRWD_UNKNOWN_ETAG	0x4
+	u8	reserved[8];
+	__le16	default_port_seid;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_pv_params_completion);
+
+/* Add VEB (direct 0x0230) */
+struct avf_aqc_add_veb {
+	__le16	uplink_seid;
+	__le16	downlink_seid;
+	__le16	veb_flags;
+#define AVF_AQC_ADD_VEB_FLOATING		0x1
+#define AVF_AQC_ADD_VEB_PORT_TYPE_SHIFT	1
+#define AVF_AQC_ADD_VEB_PORT_TYPE_MASK		(0x3 << \
+					AVF_AQC_ADD_VEB_PORT_TYPE_SHIFT)
+#define AVF_AQC_ADD_VEB_PORT_TYPE_DEFAULT	0x2
+#define AVF_AQC_ADD_VEB_PORT_TYPE_DATA		0x4
+#define AVF_AQC_ADD_VEB_ENABLE_L2_FILTER	0x8     /* deprecated */
+#define AVF_AQC_ADD_VEB_ENABLE_DISABLE_STATS	0x10
+	u8	enable_tcs;
+	u8	reserved[9];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_veb);
+
+struct avf_aqc_add_veb_completion {
+	u8	reserved[6];
+	__le16	switch_seid;
+	/* also encodes error if rc == ENOSPC; codes are the same as add_pv */
+	__le16	veb_seid;
+#define AVF_AQC_VEB_ERR_FLAG_NO_VEB		0x1
+#define AVF_AQC_VEB_ERR_FLAG_NO_SCHED		0x2
+#define AVF_AQC_VEB_ERR_FLAG_NO_COUNTER	0x4
+#define AVF_AQC_VEB_ERR_FLAG_NO_ENTRY		0x8
+	__le16	statistic_index;
+	__le16	vebs_used;
+	__le16	vebs_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_veb_completion);
+
+/* Get VEB Parameters (direct 0x0232)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+struct avf_aqc_get_veb_parameters_completion {
+	__le16	seid;
+	__le16	switch_id;
+	__le16	veb_flags; /* only the first/last flags from 0x0230 are valid */
+	__le16	statistic_index;
+	__le16	vebs_used;
+	__le16	vebs_free;
+	u8	reserved[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_veb_parameters_completion);
+
+/* Delete Element (direct 0x0243)
+ * uses the generic avf_aqc_switch_seid
+ */
+
+/* Add MAC-VLAN (indirect 0x0250) */
+
+/* used for the command for most vlan commands */
+struct avf_aqc_macvlan {
+	__le16	num_addresses;
+	__le16	seid[3];
+#define AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_MACVLAN_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT)
+#define AVF_AQC_MACVLAN_CMD_SEID_VALID		0x8000
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_macvlan);
+
+/* indirect data for command and response */
+struct avf_aqc_add_macvlan_element_data {
+	u8	mac_addr[6];
+	__le16	vlan_tag;
+	__le16	flags;
+#define AVF_AQC_MACVLAN_ADD_PERFECT_MATCH	0x0001
+#define AVF_AQC_MACVLAN_ADD_HASH_MATCH		0x0002
+#define AVF_AQC_MACVLAN_ADD_IGNORE_VLAN	0x0004
+#define AVF_AQC_MACVLAN_ADD_TO_QUEUE		0x0008
+#define AVF_AQC_MACVLAN_ADD_USE_SHARED_MAC	0x0010
+	__le16	queue_number;
+#define AVF_AQC_MACVLAN_CMD_QUEUE_SHIFT	0
+#define AVF_AQC_MACVLAN_CMD_QUEUE_MASK		(0x7FF << \
+					AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT)
+	/* response section */
+	u8	match_method;
+#define AVF_AQC_MM_PERFECT_MATCH	0x01
+#define AVF_AQC_MM_HASH_MATCH		0x02
+#define AVF_AQC_MM_ERR_NO_RES		0xFF
+	u8	reserved1[3];
+};
+
+struct avf_aqc_add_remove_macvlan_completion {
+	__le16 perfect_mac_used;
+	__le16 perfect_mac_free;
+	__le16 unicast_hash_free;
+	__le16 multicast_hash_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_macvlan_completion);
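
A hedged sketch (not part of the patch) of how the 16-byte descriptor and the
indirect buffer pair up for Add MAC-VLAN (0x0250); buf, vsi_seid and the
chosen flags are assumptions for the example:

	/* Illustrative: add one perfect-match MAC filter, ignoring the VLAN tag. */
	struct avf_aqc_macvlan *cmd = (struct avf_aqc_macvlan *)&desc.params.raw;
	struct avf_aqc_add_macvlan_element_data *list = buf; /* indirect DMA buffer */

	cmd->num_addresses = CPU_TO_LE16(1);
	cmd->seid[0] = CPU_TO_LE16(AVF_AQC_MACVLAN_CMD_SEID_VALID | vsi_seid);

	/* list[0].mac_addr is filled with the caller's MAC address */
	list[0].vlan_tag = 0;
	list[0].flags = CPU_TO_LE16(AVF_AQC_MACVLAN_ADD_PERFECT_MATCH |
				    AVF_AQC_MACVLAN_ADD_IGNORE_VLAN);
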
+
+/* Remove MAC-VLAN (indirect 0x0251)
+ * uses avf_aqc_macvlan for the descriptor
+ * data points to an array of num_addresses of elements
+ */
+
+struct avf_aqc_remove_macvlan_element_data {
+	u8	mac_addr[6];
+	__le16	vlan_tag;
+	u8	flags;
+#define AVF_AQC_MACVLAN_DEL_PERFECT_MATCH	0x01
+#define AVF_AQC_MACVLAN_DEL_HASH_MATCH		0x02
+#define AVF_AQC_MACVLAN_DEL_IGNORE_VLAN	0x08
+#define AVF_AQC_MACVLAN_DEL_ALL_VSIS		0x10
+	u8	reserved[3];
+	/* reply section */
+	u8	error_code;
+#define AVF_AQC_REMOVE_MACVLAN_SUCCESS		0x0
+#define AVF_AQC_REMOVE_MACVLAN_FAIL		0xFF
+	u8	reply_reserved[3];
+};
+
+/* Add VLAN (indirect 0x0252)
+ * Remove VLAN (indirect 0x0253)
+ * use the generic avf_aqc_macvlan for the command
+ */
+struct avf_aqc_add_remove_vlan_element_data {
+	__le16	vlan_tag;
+	u8	vlan_flags;
+/* flags for add VLAN */
+#define AVF_AQC_ADD_VLAN_LOCAL			0x1
+#define AVF_AQC_ADD_PVLAN_TYPE_SHIFT		1
+#define AVF_AQC_ADD_PVLAN_TYPE_MASK	(0x3 << AVF_AQC_ADD_PVLAN_TYPE_SHIFT)
+#define AVF_AQC_ADD_PVLAN_TYPE_REGULAR		0x0
+#define AVF_AQC_ADD_PVLAN_TYPE_PRIMARY		0x2
+#define AVF_AQC_ADD_PVLAN_TYPE_SECONDARY	0x4
+#define AVF_AQC_VLAN_PTYPE_SHIFT		3
+#define AVF_AQC_VLAN_PTYPE_MASK	(0x3 << AVF_AQC_VLAN_PTYPE_SHIFT)
+#define AVF_AQC_VLAN_PTYPE_REGULAR_VSI		0x0
+#define AVF_AQC_VLAN_PTYPE_PROMISC_VSI		0x8
+#define AVF_AQC_VLAN_PTYPE_COMMUNITY_VSI	0x10
+#define AVF_AQC_VLAN_PTYPE_ISOLATED_VSI	0x18
+/* flags for remove VLAN */
+#define AVF_AQC_REMOVE_VLAN_ALL	0x1
+	u8	reserved;
+	u8	result;
+/* flags for add VLAN */
+#define AVF_AQC_ADD_VLAN_SUCCESS	0x0
+#define AVF_AQC_ADD_VLAN_FAIL_REQUEST	0xFE
+#define AVF_AQC_ADD_VLAN_FAIL_RESOURCE	0xFF
+/* flags for remove VLAN */
+#define AVF_AQC_REMOVE_VLAN_SUCCESS	0x0
+#define AVF_AQC_REMOVE_VLAN_FAIL	0xFF
+	u8	reserved1[3];
+};
+
+struct avf_aqc_add_remove_vlan_completion {
+	u8	reserved[4];
+	__le16	vlans_used;
+	__le16	vlans_free;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+/* Set VSI Promiscuous Modes (direct 0x0254) */
+struct avf_aqc_set_vsi_promiscuous_modes {
+	__le16	promiscuous_flags;
+	__le16	valid_flags;
+/* flags used for both fields above */
+#define AVF_AQC_SET_VSI_PROMISC_UNICAST	0x01
+#define AVF_AQC_SET_VSI_PROMISC_MULTICAST	0x02
+#define AVF_AQC_SET_VSI_PROMISC_BROADCAST	0x04
+#define AVF_AQC_SET_VSI_DEFAULT		0x08
+#define AVF_AQC_SET_VSI_PROMISC_VLAN		0x10
+#define AVF_AQC_SET_VSI_PROMISC_TX		0x8000
+	__le16	seid;
+#define AVF_AQC_VSI_PROM_CMD_SEID_MASK		0x3FF
+	__le16	vlan_tag;
+#define AVF_AQC_SET_VSI_VLAN_MASK		0x0FFF
+#define AVF_AQC_SET_VSI_VLAN_VALID		0x8000
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_vsi_promiscuous_modes);
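
For illustration (not part of the patch): promiscuous_flags carries the
requested state while valid_flags selects which bits firmware should act on,
so enabling only unicast promiscuous mode without disturbing the other modes
looks roughly like this (desc and seid are assumed caller-side names):

	/* Illustrative: enable unicast promiscuous only; other modes untouched. */
	struct avf_aqc_set_vsi_promiscuous_modes *cmd =
		(struct avf_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;

	cmd->promiscuous_flags = CPU_TO_LE16(AVF_AQC_SET_VSI_PROMISC_UNICAST);
	cmd->valid_flags = CPU_TO_LE16(AVF_AQC_SET_VSI_PROMISC_UNICAST);
	cmd->seid = CPU_TO_LE16(seid & AVF_AQC_VSI_PROM_CMD_SEID_MASK);
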
+
+/* Add S/E-tag command (direct 0x0255)
+ * Uses generic avf_aqc_add_remove_tag_completion for completion
+ */
+struct avf_aqc_add_tag {
+	__le16	flags;
+#define AVF_AQC_ADD_TAG_FLAG_TO_QUEUE		0x0001
+	__le16	seid;
+#define AVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	tag;
+	__le16	queue_number;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_tag);
+
+struct avf_aqc_add_remove_tag_completion {
+	u8	reserved[12];
+	__le16	tags_used;
+	__le16	tags_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_tag_completion);
+
+/* Remove S/E-tag command (direct 0x0256)
+ * Uses generic avf_aqc_add_remove_tag_completion for completion
+ */
+struct avf_aqc_remove_tag {
+	__le16	seid;
+#define AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	tag;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_tag);
+
+/* Add multicast E-Tag (direct 0x0257)
+ * del multicast E-Tag (direct 0x0258) only uses pv_seid and etag fields
+ * and no external data
+ */
+struct avf_aqc_add_remove_mcast_etag {
+	__le16	pv_seid;
+	__le16	etag;
+	u8	num_unicast_etags;
+	u8	reserved[3];
+	__le32	addr_high;          /* address of array of 2-byte s-tags */
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_mcast_etag);
+
+struct avf_aqc_add_remove_mcast_etag_completion {
+	u8	reserved[4];
+	__le16	mcast_etags_used;
+	__le16	mcast_etags_free;
+	__le32	addr_high;
+	__le32	addr_low;
+
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_mcast_etag_completion);
+
+/* Update S/E-Tag (direct 0x0259) */
+struct avf_aqc_update_tag {
+	__le16	seid;
+#define AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	old_tag;
+	__le16	new_tag;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_update_tag);
+
+struct avf_aqc_update_tag_completion {
+	u8	reserved[12];
+	__le16	tags_used;
+	__le16	tags_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_update_tag_completion);
+
+/* Add Control Packet filter (direct 0x025A)
+ * Remove Control Packet filter (direct 0x025B)
+ * uses the avf_aqc_add_oveb_cloud,
+ * and the generic direct completion structure
+ */
+struct avf_aqc_add_remove_control_packet_filter {
+	u8	mac[6];
+	__le16	etype;
+	__le16	flags;
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC	0x0001
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_DROP		0x0002
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_TO_QUEUE	0x0004
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_TX		0x0008
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_RX		0x0000
+	__le16	seid;
+#define AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_MASK	(0x3FF << \
+				AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT)
+	__le16	queue;
+	u8	reserved[2];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_control_packet_filter);
+
+struct avf_aqc_add_remove_control_packet_filter_completion {
+	__le16	mac_etype_used;
+	__le16	etype_used;
+	__le16	mac_etype_free;
+	__le16	etype_free;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_control_packet_filter_completion);
+
+/* Add Cloud filters (indirect 0x025C)
+ * Remove Cloud filters (indirect 0x025D)
+ * uses the avf_aqc_add_remove_cloud_filters,
+ * and the generic indirect completion structure
+ */
+struct avf_aqc_add_remove_cloud_filters {
+	u8	num_filters;
+	u8	reserved;
+	__le16	seid;
+#define AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT)
+	u8	big_buffer_flag;
+#define AVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER	1
+	u8	reserved2[3];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_cloud_filters);
+
+struct avf_aqc_add_remove_cloud_filters_element_data {
+	u8	outer_mac[6];
+	u8	inner_mac[6];
+	__le16	inner_vlan;
+	union {
+		struct {
+			u8 reserved[12];
+			u8 data[4];
+		} v4;
+		struct {
+			u8 data[16];
+		} v6;
+	} ipaddr;
+	__le16	flags;
+#define AVF_AQC_ADD_CLOUD_FILTER_SHIFT			0
+#define AVF_AQC_ADD_CLOUD_FILTER_MASK	(0x3F << \
+					AVF_AQC_ADD_CLOUD_FILTER_SHIFT)
+/* 0x0000 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_OIP			0x0001
+/* 0x0002 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN		0x0003
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID	0x0004
+/* 0x0005 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID		0x0006
+/* 0x0007 reserved */
+/* 0x0008 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_OMAC			0x0009
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC			0x000A
+#define AVF_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC	0x000B
+#define AVF_AQC_ADD_CLOUD_FILTER_IIP			0x000C
+/* 0x0010 to 0x0017 is for custom filters */
+
+#define AVF_AQC_ADD_CLOUD_FLAGS_TO_QUEUE		0x0080
+#define AVF_AQC_ADD_CLOUD_VNK_SHIFT			6
+#define AVF_AQC_ADD_CLOUD_VNK_MASK			0x00C0
+#define AVF_AQC_ADD_CLOUD_FLAGS_IPV4			0
+#define AVF_AQC_ADD_CLOUD_FLAGS_IPV6			0x0100
+
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_SHIFT		9
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_MASK		0x1E00
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN		0
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_NVGRE_OMAC		1
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_GENEVE		2
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_IP			3
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_RESERVED		4
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN_GPE		5
+
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_MAC	0x2000
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_INNER_MAC	0x4000
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_IP	0x8000
+
+	__le32	tenant_id;
+	u8	reserved[4];
+	__le16	queue_number;
+#define AVF_AQC_ADD_CLOUD_QUEUE_SHIFT		0
+#define AVF_AQC_ADD_CLOUD_QUEUE_MASK		(0x7FF << \
+						 AVF_AQC_ADD_CLOUD_QUEUE_SHIFT)
+	u8	reserved2[14];
+	/* response section */
+	u8	allocation_result;
+#define AVF_AQC_ADD_CLOUD_FILTER_SUCCESS	0x0
+#define AVF_AQC_ADD_CLOUD_FILTER_FAIL		0xFF
+	u8	response_reserved[7];
+};
+
+/* avf_aqc_add_rm_cloud_filt_elem_ext is used when
+ * AVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER flag is set. refer to
+ * DCR288
+ */
+struct avf_aqc_add_rm_cloud_filt_elem_ext {
+	struct avf_aqc_add_remove_cloud_filters_element_data element;
+	u16     general_fields[32];
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0	0
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1	1
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2	2
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0	3
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1	4
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2	5
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0	6
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1	7
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2	8
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0	9
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1	10
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2	11
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD0	12
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD1	13
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD2	14
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0	15
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD1	16
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD2	17
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD3	18
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD4	19
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD5	20
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD6	21
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD7	22
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD0	23
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD1	24
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD2	25
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD3	26
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD4	27
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD5	28
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD6	29
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD7	30
+};
+
+struct avf_aqc_remove_cloud_filters_completion {
+	__le16 perfect_ovlan_used;
+	__le16 perfect_ovlan_free;
+	__le16 vlan_used;
+	__le16 vlan_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_cloud_filters_completion);
+
+/* Replace filter Command 0x025F
+ * uses the avf_aqc_replace_cloud_filters,
+ * and the generic indirect completion structure
+ */
+struct avf_filter_data {
+	u8 filter_type;
+	u8 input[3];
+};
+
+struct avf_aqc_replace_cloud_filters_cmd {
+	u8	valid_flags;
+#define AVF_AQC_REPLACE_L1_FILTER		0x0
+#define AVF_AQC_REPLACE_CLOUD_FILTER		0x1
+#define AVF_AQC_GET_CLOUD_FILTERS		0x2
+#define AVF_AQC_MIRROR_CLOUD_FILTER		0x4
+#define AVF_AQC_HIGH_PRIORITY_CLOUD_FILTER	0x8
+	u8	old_filter_type;
+	u8	new_filter_type;
+	u8	tr_bit;
+	u8	reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct avf_aqc_replace_cloud_filters_cmd_buf {
+	u8	data[32];
+/* Filter type INPUT codes*/
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_ENTRIES_MAX	3
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_VALIDATED	(1 << 7UL)
+
+/* Field Vector offsets */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_MAC_DA		0
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_ETH		6
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG		7
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_VLAN		8
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_OVLAN		9
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_IVLAN		10
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_TUNNLE_KEY		11
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IMAC		12
+/* big FLU */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IP_DA		14
+/* big FLU */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_OIP_DA		15
+
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_INNER_VLAN		37
+	struct avf_filter_data	filters[8];
+};
+
+/* Add Mirror Rule (indirect or direct 0x0260)
+ * Delete Mirror Rule (indirect or direct 0x0261)
+ * note: some rule types (4,5) do not use an external buffer.
+ *       take care to set the flags correctly.
+ */
+struct avf_aqc_add_delete_mirror_rule {
+	__le16 seid;
+	__le16 rule_type;
+#define AVF_AQC_MIRROR_RULE_TYPE_SHIFT		0
+#define AVF_AQC_MIRROR_RULE_TYPE_MASK		(0x7 << \
+						AVF_AQC_MIRROR_RULE_TYPE_SHIFT)
+#define AVF_AQC_MIRROR_RULE_TYPE_VPORT_INGRESS	1
+#define AVF_AQC_MIRROR_RULE_TYPE_VPORT_EGRESS	2
+#define AVF_AQC_MIRROR_RULE_TYPE_VLAN		3
+#define AVF_AQC_MIRROR_RULE_TYPE_ALL_INGRESS	4
+#define AVF_AQC_MIRROR_RULE_TYPE_ALL_EGRESS	5
+	__le16 num_entries;
+	__le16 destination;  /* VSI for add, rule id for delete */
+	__le32 addr_high;    /* address of array of 2-byte VSI or VLAN ids */
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_delete_mirror_rule);
+
+struct avf_aqc_add_delete_mirror_rule_completion {
+	u8	reserved[2];
+	__le16	rule_id;  /* only used on add */
+	__le16	mirror_rules_used;
+	__le16	mirror_rules_free;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_delete_mirror_rule_completion);
+
+/* Dynamic Device Personalization */
+struct avf_aqc_write_personalization_profile {
+	u8      flags;
+	u8      reserved[3];
+	__le32  profile_track_id;
+	__le32  addr_high;
+	__le32  addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_write_personalization_profile);
+
+struct avf_aqc_write_ddp_resp {
+	__le32 error_offset;
+	__le32 error_info;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct avf_aqc_get_applied_profiles {
+	u8      flags;
+#define AVF_AQC_GET_DDP_GET_CONF	0x1
+#define AVF_AQC_GET_DDP_GET_RDPU_CONF	0x2
+	u8      rsv[3];
+	__le32  reserved;
+	__le32  addr_high;
+	__le32  addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_applied_profiles);
+
+/* DCB 0x03xx*/
+
+/* PFC Ignore (direct 0x0301)
+ *    the command and response use the same descriptor structure
+ */
+struct avf_aqc_pfc_ignore {
+	u8	tc_bitmap;
+	u8	command_flags; /* unused on response */
+#define AVF_AQC_PFC_IGNORE_SET		0x80
+#define AVF_AQC_PFC_IGNORE_CLEAR	0x0
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_pfc_ignore);
+
+/* DCB Update (direct 0x0302) uses the avf_aq_desc structure
+ * with no parameters
+ */
+
+/* TX scheduler 0x04xx */
+
+/* Almost all the indirect commands use
+ * this generic struct to pass the SEID in param0
+ */
+struct avf_aqc_tx_sched_ind {
+	__le16	vsi_seid;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_tx_sched_ind);
+
+/* Several commands respond with a set of queue set handles */
+struct avf_aqc_qs_handles_resp {
+	__le16 qs_handles[8];
+};
+
+/* Configure VSI BW limits (direct 0x0400) */
+struct avf_aqc_configure_vsi_bw_limit {
+	__le16	vsi_seid;
+	u8	reserved[2];
+	__le16	credit;
+	u8	reserved1[2];
+	u8	max_credit; /* 0-3, limit = 2^max */
+	u8	reserved2[7];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_configure_vsi_bw_limit);
+
+/* Configure VSI Bandwidth Limit per Traffic Type (indirect 0x0406)
+ *    responds with avf_aqc_qs_handles_resp
+ */
+struct avf_aqc_configure_vsi_ets_sla_bw_data {
+	u8	tc_valid_bits;
+	u8	reserved[15];
+	__le16	tc_bw_credits[8]; /* FW writes back QS handles here */
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved1[28];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_configure_vsi_ets_sla_bw_data);
+
+/* Configure VSI Bandwidth Allocation per Traffic Type (indirect 0x0407)
+ *    responds with avf_aqc_qs_handles_resp
+ */
+struct avf_aqc_configure_vsi_tc_bw_data {
+	u8	tc_valid_bits;
+	u8	reserved[3];
+	u8	tc_bw_credits[8];
+	u8	reserved1[4];
+	__le16	qs_handles[8];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_configure_vsi_tc_bw_data);
+
+/* Query vsi bw configuration (indirect 0x0408) */
+struct avf_aqc_query_vsi_bw_config_resp {
+	u8	tc_valid_bits;
+	u8	tc_suspended_bits;
+	u8	reserved[14];
+	__le16	qs_handles[8];
+	u8	reserved1[4];
+	__le16	port_bw_limit;
+	u8	reserved2[2];
+	u8	max_bw; /* 0-3, limit = 2^max */
+	u8	reserved3[23];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_query_vsi_bw_config_resp);
+
+/* Query VSI Bandwidth Allocation per Traffic Type (indirect 0x040A) */
+struct avf_aqc_query_vsi_ets_sla_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[3];
+	u8	share_credits[8];
+	__le16	credits[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_query_vsi_ets_sla_config_resp);
+
+/* Configure Switching Component Bandwidth Limit (direct 0x0410) */
+struct avf_aqc_configure_switching_comp_bw_limit {
+	__le16	seid;
+	u8	reserved[2];
+	__le16	credit;
+	u8	reserved1[2];
+	u8	max_bw; /* 0-3, limit = 2^max */
+	u8	reserved2[7];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_configure_switching_comp_bw_limit);
+
+/* Enable  Physical Port ETS (indirect 0x0413)
+ * Modify  Physical Port ETS (indirect 0x0414)
+ * Disable Physical Port ETS (indirect 0x0415)
+ */
+struct avf_aqc_configure_switching_comp_ets_data {
+	u8	reserved[4];
+	u8	tc_valid_bits;
+	u8	seepage;
+#define AVF_AQ_ETS_SEEPAGE_EN_MASK	0x1
+	u8	tc_strict_priority_flags;
+	u8	reserved1[17];
+	u8	tc_bw_share_credits[8];
+	u8	reserved2[96];
+};
+
+AVF_CHECK_STRUCT_LEN(0x80, avf_aqc_configure_switching_comp_ets_data);
+
+/* Configure Switching Component Bandwidth Limits per Tc (indirect 0x0416) */
+struct avf_aqc_configure_switching_comp_ets_bw_limit_data {
+	u8	tc_valid_bits;
+	u8	reserved[15];
+	__le16	tc_bw_credit[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved1[28];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40,
+		      avf_aqc_configure_switching_comp_ets_bw_limit_data);
+
+/* Configure Switching Component Bandwidth Allocation per Tc
+ * (indirect 0x0417)
+ */
+struct avf_aqc_configure_switching_comp_bw_config_data {
+	u8	tc_valid_bits;
+	u8	reserved[2];
+	u8	absolute_credits; /* bool */
+	u8	tc_bw_share_credits[8];
+	u8	reserved1[20];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_configure_switching_comp_bw_config_data);
+
+/* Query Switching Component Configuration (indirect 0x0418) */
+struct avf_aqc_query_switching_comp_ets_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[35];
+	__le16	port_bw_limit;
+	u8	reserved1[2];
+	u8	tc_bw_max; /* 0-3, limit = 2^max */
+	u8	reserved2[23];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_query_switching_comp_ets_config_resp);
+
+/* Query PhysicalPort ETS Configuration (indirect 0x0419) */
+struct avf_aqc_query_port_ets_config_resp {
+	u8	reserved[4];
+	u8	tc_valid_bits;
+	u8	reserved1;
+	u8	tc_strict_priority_bits;
+	u8	reserved2;
+	u8	tc_bw_share_credits[8];
+	__le16	tc_bw_limits[8];
+
+	/* 4 bits per tc 0-7, 4th bit reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved3[32];
+};
+
+AVF_CHECK_STRUCT_LEN(0x44, avf_aqc_query_port_ets_config_resp);
+
+/* Query Switching Component Bandwidth Allocation per Traffic Type
+ * (indirect 0x041A)
+ */
+struct avf_aqc_query_switching_comp_bw_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[2];
+	u8	absolute_credits_enable; /* bool */
+	u8	tc_bw_share_credits[8];
+	__le16	tc_bw_limits[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_query_switching_comp_bw_config_resp);
+
+/* Suspend/resume port TX traffic
+ * (direct 0x041B and 0x041C) uses the generic SEID struct
+ */
+
+/* Configure partition BW
+ * (indirect 0x041D)
+ */
+struct avf_aqc_configure_partition_bw_data {
+	__le16	pf_valid_bits;
+	u8	min_bw[16];      /* guaranteed bandwidth */
+	u8	max_bw[16];      /* bandwidth limit */
+};
+
+AVF_CHECK_STRUCT_LEN(0x22, avf_aqc_configure_partition_bw_data);
+
+/* Get and set the active HMC resource profile and status.
+ * (direct 0x0500) and (direct 0x0501)
+ */
+struct avf_aq_get_set_hmc_resource_profile {
+	u8	pm_profile;
+	u8	pe_vf_enabled;
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_get_set_hmc_resource_profile);
+
+enum avf_aq_hmc_profile {
+	/* AVF_HMC_PROFILE_NO_CHANGE	= 0, reserved */
+	AVF_HMC_PROFILE_DEFAULT	= 1,
+	AVF_HMC_PROFILE_FAVOR_VF	= 2,
+	AVF_HMC_PROFILE_EQUAL		= 3,
+};
+
+/* Get PHY Abilities (indirect 0x0600) uses the generic indirect struct */
+
+/* set in param0 for get phy abilities to report qualified modules */
+#define AVF_AQ_PHY_REPORT_QUALIFIED_MODULES	0x0001
+#define AVF_AQ_PHY_REPORT_INITIAL_VALUES	0x0002
+
+enum avf_aq_phy_type {
+	AVF_PHY_TYPE_SGMII			= 0x0,
+	AVF_PHY_TYPE_1000BASE_KX		= 0x1,
+	AVF_PHY_TYPE_10GBASE_KX4		= 0x2,
+	AVF_PHY_TYPE_10GBASE_KR		= 0x3,
+	AVF_PHY_TYPE_40GBASE_KR4		= 0x4,
+	AVF_PHY_TYPE_XAUI			= 0x5,
+	AVF_PHY_TYPE_XFI			= 0x6,
+	AVF_PHY_TYPE_SFI			= 0x7,
+	AVF_PHY_TYPE_XLAUI			= 0x8,
+	AVF_PHY_TYPE_XLPPI			= 0x9,
+	AVF_PHY_TYPE_40GBASE_CR4_CU		= 0xA,
+	AVF_PHY_TYPE_10GBASE_CR1_CU		= 0xB,
+	AVF_PHY_TYPE_10GBASE_AOC		= 0xC,
+	AVF_PHY_TYPE_40GBASE_AOC		= 0xD,
+	AVF_PHY_TYPE_UNRECOGNIZED		= 0xE,
+	AVF_PHY_TYPE_UNSUPPORTED		= 0xF,
+	AVF_PHY_TYPE_100BASE_TX		= 0x11,
+	AVF_PHY_TYPE_1000BASE_T		= 0x12,
+	AVF_PHY_TYPE_10GBASE_T			= 0x13,
+	AVF_PHY_TYPE_10GBASE_SR		= 0x14,
+	AVF_PHY_TYPE_10GBASE_LR		= 0x15,
+	AVF_PHY_TYPE_10GBASE_SFPP_CU		= 0x16,
+	AVF_PHY_TYPE_10GBASE_CR1		= 0x17,
+	AVF_PHY_TYPE_40GBASE_CR4		= 0x18,
+	AVF_PHY_TYPE_40GBASE_SR4		= 0x19,
+	AVF_PHY_TYPE_40GBASE_LR4		= 0x1A,
+	AVF_PHY_TYPE_1000BASE_SX		= 0x1B,
+	AVF_PHY_TYPE_1000BASE_LX		= 0x1C,
+	AVF_PHY_TYPE_1000BASE_T_OPTICAL	= 0x1D,
+	AVF_PHY_TYPE_20GBASE_KR2		= 0x1E,
+	AVF_PHY_TYPE_25GBASE_KR		= 0x1F,
+	AVF_PHY_TYPE_25GBASE_CR		= 0x20,
+	AVF_PHY_TYPE_25GBASE_SR		= 0x21,
+	AVF_PHY_TYPE_25GBASE_LR		= 0x22,
+	AVF_PHY_TYPE_25GBASE_AOC		= 0x23,
+	AVF_PHY_TYPE_25GBASE_ACC		= 0x24,
+	AVF_PHY_TYPE_MAX,
+	AVF_PHY_TYPE_NOT_SUPPORTED_HIGH_TEMP	= 0xFD,
+	AVF_PHY_TYPE_EMPTY			= 0xFE,
+	AVF_PHY_TYPE_DEFAULT			= 0xFF,
+};
+
+#define AVF_LINK_SPEED_100MB_SHIFT	0x1
+#define AVF_LINK_SPEED_1000MB_SHIFT	0x2
+#define AVF_LINK_SPEED_10GB_SHIFT	0x3
+#define AVF_LINK_SPEED_40GB_SHIFT	0x4
+#define AVF_LINK_SPEED_20GB_SHIFT	0x5
+#define AVF_LINK_SPEED_25GB_SHIFT	0x6
+
+enum avf_aq_link_speed {
+	AVF_LINK_SPEED_UNKNOWN	= 0,
+	AVF_LINK_SPEED_100MB	= (1 << AVF_LINK_SPEED_100MB_SHIFT),
+	AVF_LINK_SPEED_1GB	= (1 << AVF_LINK_SPEED_1000MB_SHIFT),
+	AVF_LINK_SPEED_10GB	= (1 << AVF_LINK_SPEED_10GB_SHIFT),
+	AVF_LINK_SPEED_40GB	= (1 << AVF_LINK_SPEED_40GB_SHIFT),
+	AVF_LINK_SPEED_20GB	= (1 << AVF_LINK_SPEED_20GB_SHIFT),
+	AVF_LINK_SPEED_25GB	= (1 << AVF_LINK_SPEED_25GB_SHIFT),
+};
+
+struct avf_aqc_module_desc {
+	u8 oui[3];
+	u8 reserved1;
+	u8 part_number[16];
+	u8 revision[4];
+	u8 reserved2[8];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_module_desc);
+
+struct avf_aq_get_phy_abilities_resp {
+	__le32	phy_type;       /* bitmap using the above enum for offsets */
+	u8	link_speed;     /* bitmap using the above enum bit patterns */
+	u8	abilities;
+#define AVF_AQ_PHY_FLAG_PAUSE_TX	0x01
+#define AVF_AQ_PHY_FLAG_PAUSE_RX	0x02
+#define AVF_AQ_PHY_FLAG_LOW_POWER	0x04
+#define AVF_AQ_PHY_LINK_ENABLED	0x08
+#define AVF_AQ_PHY_AN_ENABLED		0x10
+#define AVF_AQ_PHY_FLAG_MODULE_QUAL	0x20
+#define AVF_AQ_PHY_FEC_ABILITY_KR	0x40
+#define AVF_AQ_PHY_FEC_ABILITY_RS	0x80
+	__le16	eee_capability;
+#define AVF_AQ_EEE_100BASE_TX		0x0002
+#define AVF_AQ_EEE_1000BASE_T		0x0004
+#define AVF_AQ_EEE_10GBASE_T		0x0008
+#define AVF_AQ_EEE_1000BASE_KX		0x0010
+#define AVF_AQ_EEE_10GBASE_KX4		0x0020
+#define AVF_AQ_EEE_10GBASE_KR		0x0040
+	__le32	eeer_val;
+	u8	d3_lpan;
+#define AVF_AQ_SET_PHY_D3_LPAN_ENA	0x01
+	u8	phy_type_ext;
+#define AVF_AQ_PHY_TYPE_EXT_25G_KR	0x01
+#define AVF_AQ_PHY_TYPE_EXT_25G_CR	0x02
+#define AVF_AQ_PHY_TYPE_EXT_25G_SR	0x04
+#define AVF_AQ_PHY_TYPE_EXT_25G_LR	0x08
+#define AVF_AQ_PHY_TYPE_EXT_25G_AOC	0x10
+#define AVF_AQ_PHY_TYPE_EXT_25G_ACC	0x20
+	u8	fec_cfg_curr_mod_ext_info;
+#define AVF_AQ_ENABLE_FEC_KR		0x01
+#define AVF_AQ_ENABLE_FEC_RS		0x02
+#define AVF_AQ_REQUEST_FEC_KR		0x04
+#define AVF_AQ_REQUEST_FEC_RS		0x08
+#define AVF_AQ_ENABLE_FEC_AUTO		0x10
+#define AVF_AQ_FEC
+#define AVF_AQ_MODULE_TYPE_EXT_MASK	0xE0
+#define AVF_AQ_MODULE_TYPE_EXT_SHIFT	5
+
+	u8	ext_comp_code;
+	u8	phy_id[4];
+	u8	module_type[3];
+	u8	qualified_module_count;
+#define AVF_AQ_PHY_MAX_QMS		16
+	struct avf_aqc_module_desc	qualified_module[AVF_AQ_PHY_MAX_QMS];
+};
+
+AVF_CHECK_STRUCT_LEN(0x218, avf_aq_get_phy_abilities_resp);
+
+/* Set PHY Config (direct 0x0601) */
+struct avf_aq_set_phy_config { /* same bits as above in all */
+	__le32	phy_type;
+	u8	link_speed;
+	u8	abilities;
+/* bits 0-2 use the values from get_phy_abilities_resp */
+#define AVF_AQ_PHY_ENABLE_LINK		0x08
+#define AVF_AQ_PHY_ENABLE_AN		0x10
+#define AVF_AQ_PHY_ENABLE_ATOMIC_LINK	0x20
+	__le16	eee_capability;
+	__le32	eeer;
+	u8	low_power_ctrl;
+	u8	phy_type_ext;
+	u8	fec_config;
+#define AVF_AQ_SET_FEC_ABILITY_KR	BIT(0)
+#define AVF_AQ_SET_FEC_ABILITY_RS	BIT(1)
+#define AVF_AQ_SET_FEC_REQUEST_KR	BIT(2)
+#define AVF_AQ_SET_FEC_REQUEST_RS	BIT(3)
+#define AVF_AQ_SET_FEC_AUTO		BIT(4)
+#define AVF_AQ_PHY_FEC_CONFIG_SHIFT	0x0
+#define AVF_AQ_PHY_FEC_CONFIG_MASK	(0x1F << AVF_AQ_PHY_FEC_CONFIG_SHIFT)
+	u8	reserved;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_set_phy_config);
+
+/* Set MAC Config command data structure (direct 0x0603) */
+struct avf_aq_set_mac_config {
+	__le16	max_frame_size;
+	u8	params;
+#define AVF_AQ_SET_MAC_CONFIG_CRC_EN		0x04
+#define AVF_AQ_SET_MAC_CONFIG_PACING_MASK	0x78
+#define AVF_AQ_SET_MAC_CONFIG_PACING_SHIFT	3
+#define AVF_AQ_SET_MAC_CONFIG_PACING_NONE	0x0
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1B_13TX	0xF
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_9TX	0x9
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_4TX	0x8
+#define AVF_AQ_SET_MAC_CONFIG_PACING_3DW_7TX	0x7
+#define AVF_AQ_SET_MAC_CONFIG_PACING_2DW_3TX	0x6
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_1TX	0x5
+#define AVF_AQ_SET_MAC_CONFIG_PACING_3DW_2TX	0x4
+#define AVF_AQ_SET_MAC_CONFIG_PACING_7DW_3TX	0x3
+#define AVF_AQ_SET_MAC_CONFIG_PACING_4DW_1TX	0x2
+#define AVF_AQ_SET_MAC_CONFIG_PACING_9DW_1TX	0x1
+	u8	tx_timer_priority; /* bitmap */
+	__le16	tx_timer_value;
+	__le16	fc_refresh_threshold;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_set_mac_config);
+
+/* Restart Auto-Negotiation (direct 0x605) */
+struct avf_aqc_set_link_restart_an {
+	u8	command;
+#define AVF_AQ_PHY_RESTART_AN	0x02
+#define AVF_AQ_PHY_LINK_ENABLE	0x04
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_link_restart_an);
+
+/* Get Link Status cmd & response data structure (direct 0x0607) */
+struct avf_aqc_get_link_status {
+	__le16	command_flags; /* only field set on command */
+#define AVF_AQ_LSE_MASK		0x3
+#define AVF_AQ_LSE_NOP			0x0
+#define AVF_AQ_LSE_DISABLE		0x2
+#define AVF_AQ_LSE_ENABLE		0x3
+/* only response uses this flag */
+#define AVF_AQ_LSE_IS_ENABLED		0x1
+	u8	phy_type;    /* avf_aq_phy_type   */
+	u8	link_speed;  /* avf_aq_link_speed */
+	u8	link_info;
+#define AVF_AQ_LINK_UP			0x01    /* obsolete */
+#define AVF_AQ_LINK_UP_FUNCTION	0x01
+#define AVF_AQ_LINK_FAULT		0x02
+#define AVF_AQ_LINK_FAULT_TX		0x04
+#define AVF_AQ_LINK_FAULT_RX		0x08
+#define AVF_AQ_LINK_FAULT_REMOTE	0x10
+#define AVF_AQ_LINK_UP_PORT		0x20
+#define AVF_AQ_MEDIA_AVAILABLE		0x40
+#define AVF_AQ_SIGNAL_DETECT		0x80
+	u8	an_info;
+#define AVF_AQ_AN_COMPLETED		0x01
+#define AVF_AQ_LP_AN_ABILITY		0x02
+#define AVF_AQ_PD_FAULT		0x04
+#define AVF_AQ_FEC_EN			0x08
+#define AVF_AQ_PHY_LOW_POWER		0x10
+#define AVF_AQ_LINK_PAUSE_TX		0x20
+#define AVF_AQ_LINK_PAUSE_RX		0x40
+#define AVF_AQ_QUALIFIED_MODULE	0x80
+	u8	ext_info;
+#define AVF_AQ_LINK_PHY_TEMP_ALARM	0x01
+#define AVF_AQ_LINK_XCESSIVE_ERRORS	0x02
+#define AVF_AQ_LINK_TX_SHIFT		0x02
+#define AVF_AQ_LINK_TX_MASK		(0x03 << AVF_AQ_LINK_TX_SHIFT)
+#define AVF_AQ_LINK_TX_ACTIVE		0x00
+#define AVF_AQ_LINK_TX_DRAINED		0x01
+#define AVF_AQ_LINK_TX_FLUSHED		0x03
+#define AVF_AQ_LINK_FORCED_40G		0x10
+/* 25G Error Codes */
+#define AVF_AQ_25G_NO_ERR		0X00
+#define AVF_AQ_25G_NOT_PRESENT		0X01
+#define AVF_AQ_25G_NVM_CRC_ERR		0X02
+#define AVF_AQ_25G_SBUS_UCODE_ERR	0X03
+#define AVF_AQ_25G_SERDES_UCODE_ERR	0X04
+#define AVF_AQ_25G_NIMB_UCODE_ERR	0X05
+	u8	loopback; /* use defines from avf_aqc_set_lb_mode */
+/* Since firmware API 1.7, the loopback field also carries power class info */
+#define AVF_AQ_LOOPBACK_MASK		0x07
+#define AVF_AQ_PWR_CLASS_SHIFT_LB	6
+#define AVF_AQ_PWR_CLASS_MASK_LB	(0x03 << AVF_AQ_PWR_CLASS_SHIFT_LB)
+	__le16	max_frame_size;
+	u8	config;
+#define AVF_AQ_CONFIG_FEC_KR_ENA	0x01
+#define AVF_AQ_CONFIG_FEC_RS_ENA	0x02
+#define AVF_AQ_CONFIG_CRC_ENA		0x04
+#define AVF_AQ_CONFIG_PACING_MASK	0x78
+	union {
+		struct {
+			u8	power_desc;
+#define AVF_AQ_LINK_POWER_CLASS_1	0x00
+#define AVF_AQ_LINK_POWER_CLASS_2	0x01
+#define AVF_AQ_LINK_POWER_CLASS_3	0x02
+#define AVF_AQ_LINK_POWER_CLASS_4	0x03
+#define AVF_AQ_PWR_CLASS_MASK		0x03
+			u8	reserved[4];
+		};
+		struct {
+			u8	link_type[4];
+			u8	link_type_ext;
+		};
+	};
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_link_status);
+
+/* Set event mask command (direct 0x613) */
+struct avf_aqc_set_phy_int_mask {
+	u8	reserved[8];
+	__le16	event_mask;
+#define AVF_AQ_EVENT_LINK_UPDOWN	0x0002
+#define AVF_AQ_EVENT_MEDIA_NA		0x0004
+#define AVF_AQ_EVENT_LINK_FAULT	0x0008
+#define AVF_AQ_EVENT_PHY_TEMP_ALARM	0x0010
+#define AVF_AQ_EVENT_EXCESSIVE_ERRORS	0x0020
+#define AVF_AQ_EVENT_SIGNAL_DETECT	0x0040
+#define AVF_AQ_EVENT_AN_COMPLETED	0x0080
+#define AVF_AQ_EVENT_MODULE_QUAL_FAIL	0x0100
+#define AVF_AQ_EVENT_PORT_TX_SUSPENDED	0x0200
+	u8	reserved1[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_phy_int_mask);
+
+/* Get Local AN advt register (direct 0x0614)
+ * Set Local AN advt register (direct 0x0615)
+ * Get Link Partner AN advt register (direct 0x0616)
+ */
+struct avf_aqc_an_advt_reg {
+	__le32	local_an_reg0;
+	__le16	local_an_reg1;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_an_advt_reg);
+
+/* Set Loopback mode (0x0618) */
+struct avf_aqc_set_lb_mode {
+	u8	lb_level;
+#define AVF_AQ_LB_NONE	0
+#define AVF_AQ_LB_MAC	1
+#define AVF_AQ_LB_SERDES	2
+#define AVF_AQ_LB_PHY_INT	3
+#define AVF_AQ_LB_PHY_EXT	4
+#define AVF_AQ_LB_CPVL_PCS	5
+#define AVF_AQ_LB_CPVL_EXT	6
+#define AVF_AQ_LB_PHY_LOCAL	0x01
+#define AVF_AQ_LB_PHY_REMOTE	0x02
+#define AVF_AQ_LB_MAC_LOCAL	0x04
+	u8	lb_type;
+#define AVF_AQ_LB_LOCAL	0
+#define AVF_AQ_LB_FAR	0x01
+	u8	speed;
+#define AVF_AQ_LB_SPEED_NONE	0
+#define AVF_AQ_LB_SPEED_1G	1
+#define AVF_AQ_LB_SPEED_10G	2
+#define AVF_AQ_LB_SPEED_40G	3
+#define AVF_AQ_LB_SPEED_20G	4
+	u8	force_speed;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_lb_mode);
+
+/* Set PHY Debug command (0x0622) */
+struct avf_aqc_set_phy_debug {
+	u8	command_flags;
+#define AVF_AQ_PHY_DEBUG_RESET_INTERNAL	0x02
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT	2
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_MASK	(0x03 << \
+					AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT)
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_NONE	0x00
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_HARD	0x01
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SOFT	0x02
+/* Disable link manageability on a single port */
+#define AVF_AQ_PHY_DEBUG_DISABLE_LINK_FW	0x10
+/* Disable link manageability on all ports needs both bits 4 and 5 */
+#define AVF_AQ_PHY_DEBUG_DISABLE_ALL_LINK_FW	0x20
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_phy_debug);
+
+enum avf_aq_phy_reg_type {
+	AVF_AQC_PHY_REG_INTERNAL	= 0x1,
+	AVF_AQC_PHY_REG_EXERNAL_BASET	= 0x2,
+	AVF_AQC_PHY_REG_EXERNAL_MODULE	= 0x3
+};
+
+/* Run PHY Activity (0x0626) */
+struct avf_aqc_run_phy_activity {
+	__le16  activity_id;
+	u8      flags;
+	u8      reserved1;
+	__le32  control;
+	__le32  data;
+	u8      reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_run_phy_activity);
+
+/* Set PHY Register command (0x0628) */
+/* Get PHY Register command (0x0629) */
+struct avf_aqc_phy_register_access {
+	u8	phy_interface;
+#define AVF_AQ_PHY_REG_ACCESS_INTERNAL	0
+#define AVF_AQ_PHY_REG_ACCESS_EXTERNAL	1
+#define AVF_AQ_PHY_REG_ACCESS_EXTERNAL_MODULE	2
+	u8	dev_addres;
+	u8	reserved1[2];
+	__le32	reg_address;
+	__le32	reg_value;
+	u8	reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_phy_register_access);
+
+/* NVM Read command (indirect 0x0701)
+ * NVM Erase commands (direct 0x0702)
+ * NVM Update commands (indirect 0x0703)
+ */
+struct avf_aqc_nvm_update {
+	u8	command_flags;
+#define AVF_AQ_NVM_LAST_CMD			0x01
+#define AVF_AQ_NVM_FLASH_ONLY			0x80
+#define AVF_AQ_NVM_PRESERVATION_FLAGS_SHIFT	1
+#define AVF_AQ_NVM_PRESERVATION_FLAGS_MASK	0x03
+#define AVF_AQ_NVM_PRESERVATION_FLAGS_SELECTED	0x03
+#define AVF_AQ_NVM_PRESERVATION_FLAGS_ALL	0x01
+	u8	module_pointer;
+	__le16	length;
+	__le32	offset;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_update);
+
+/* NVM Config Read (indirect 0x0704) */
+struct avf_aqc_nvm_config_read {
+	__le16	cmd_flags;
+#define AVF_AQ_ANVM_SINGLE_OR_MULTIPLE_FEATURES_MASK	1
+#define AVF_AQ_ANVM_READ_SINGLE_FEATURE		0
+#define AVF_AQ_ANVM_READ_MULTIPLE_FEATURES		1
+	__le16	element_count;
+	__le16	element_id;	/* Feature/field ID */
+	__le16	element_id_msw;	/* MSWord of field ID */
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_config_read);
+
+/* NVM Config Write (indirect 0x0705) */
+struct avf_aqc_nvm_config_write {
+	__le16	cmd_flags;
+	__le16	element_count;
+	u8	reserved[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_config_write);
+
+/* Used for 0x0704 as well as for 0x0705 commands */
+#define AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT		1
+#define AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_MASK \
+				(1 << AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+#define AVF_AQ_ANVM_FEATURE		0
+#define AVF_AQ_ANVM_IMMEDIATE_FIELD \
+				(1 << AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+struct avf_aqc_nvm_config_data_feature {
+	__le16 feature_id;
+#define AVF_AQ_ANVM_FEATURE_OPTION_OEM_ONLY		0x01
+#define AVF_AQ_ANVM_FEATURE_OPTION_DWORD_MAP		0x08
+#define AVF_AQ_ANVM_FEATURE_OPTION_POR_CSR		0x10
+	__le16 feature_options;
+	__le16 feature_selection;
+};
+
+AVF_CHECK_STRUCT_LEN(0x6, avf_aqc_nvm_config_data_feature);
+
+struct avf_aqc_nvm_config_data_immediate_field {
+	__le32 field_id;
+	__le32 field_value;
+	__le16 field_options;
+	__le16 reserved;
+};
+
+AVF_CHECK_STRUCT_LEN(0xc, avf_aqc_nvm_config_data_immediate_field);
+
+/* OEM Post Update (indirect 0x0720)
+ * no command data struct used
+ */
+struct avf_aqc_nvm_oem_post_update {
+#define AVF_AQ_NVM_OEM_POST_UPDATE_EXTERNAL_DATA	0x01
+	u8 sel_data;
+	u8 reserved[7];
+};
+
+AVF_CHECK_STRUCT_LEN(0x8, avf_aqc_nvm_oem_post_update);
+
+struct avf_aqc_nvm_oem_post_update_buffer {
+	u8 str_len;
+	u8 dev_addr;
+	__le16 eeprom_addr;
+	u8 data[36];
+};
+
+AVF_CHECK_STRUCT_LEN(0x28, avf_aqc_nvm_oem_post_update_buffer);
+
+/* Thermal Sensor (indirect 0x0721)
+ *     read or set thermal sensor configs and values
+ *     takes a sensor and command specific data buffer, not detailed here
+ */
+struct avf_aqc_thermal_sensor {
+	u8 sensor_action;
+#define AVF_AQ_THERMAL_SENSOR_READ_CONFIG	0
+#define AVF_AQ_THERMAL_SENSOR_SET_CONFIG	1
+#define AVF_AQ_THERMAL_SENSOR_READ_TEMP	2
+	u8 reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_thermal_sensor);
+
+/* Send to PF command (indirect 0x0801) id is only used by PF
+ * Send to VF command (indirect 0x0802) id is only used by PF
+ * Send to Peer PF command (indirect 0x0803)
+ */
+struct avf_aqc_pf_vf_message {
+	__le32	id;
+	u8	reserved[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_pf_vf_message);
+
+/* Alternate structure */
+
+/* Direct write (direct 0x0900)
+ * Direct read (direct 0x0902)
+ */
+struct avf_aqc_alternate_write {
+	__le32 address0;
+	__le32 data0;
+	__le32 address1;
+	__le32 data1;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_write);
+
+/* Indirect write (indirect 0x0901)
+ * Indirect read (indirect 0x0903)
+ */
+
+struct avf_aqc_alternate_ind_write {
+	__le32 address;
+	__le32 length;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_ind_write);
+
+/* Done alternate write (direct 0x0904)
+ * uses avf_aq_desc
+ */
+struct avf_aqc_alternate_write_done {
+	__le16	cmd_flags;
+#define AVF_AQ_ALTERNATE_MODE_BIOS_MASK	1
+#define AVF_AQ_ALTERNATE_MODE_BIOS_LEGACY	0
+#define AVF_AQ_ALTERNATE_MODE_BIOS_UEFI	1
+#define AVF_AQ_ALTERNATE_RESET_NEEDED		2
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_write_done);
+
+/* Set OEM mode (direct 0x0905) */
+struct avf_aqc_alternate_set_mode {
+	__le32	mode;
+#define AVF_AQ_ALTERNATE_MODE_NONE	0
+#define AVF_AQ_ALTERNATE_MODE_OEM	1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_set_mode);
+
+/* Clear port Alternate RAM (direct 0x0906) uses avf_aq_desc */
+
+/* async events 0x10xx */
+
+/* Lan Queue Overflow Event (direct, 0x1001) */
+struct avf_aqc_lan_overflow {
+	__le32	prtdcb_rupto;
+	__le32	otx_ctl;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lan_overflow);
+
+/* Get LLDP MIB (indirect 0x0A00) */
+struct avf_aqc_lldp_get_mib {
+	u8	type;
+	u8	reserved1;
+#define AVF_AQ_LLDP_MIB_TYPE_MASK		0x3
+#define AVF_AQ_LLDP_MIB_LOCAL			0x0
+#define AVF_AQ_LLDP_MIB_REMOTE			0x1
+#define AVF_AQ_LLDP_MIB_LOCAL_AND_REMOTE	0x2
+#define AVF_AQ_LLDP_BRIDGE_TYPE_MASK		0xC
+#define AVF_AQ_LLDP_BRIDGE_TYPE_SHIFT		0x2
+#define AVF_AQ_LLDP_BRIDGE_TYPE_NEAREST_BRIDGE	0x0
+#define AVF_AQ_LLDP_BRIDGE_TYPE_NON_TPMR	0x1
+#define AVF_AQ_LLDP_TX_SHIFT			0x4
+#define AVF_AQ_LLDP_TX_MASK			(0x03 << AVF_AQ_LLDP_TX_SHIFT)
+/* TX pause flags use AVF_AQ_LINK_TX_* above */
+	__le16	local_len;
+	__le16	remote_len;
+	u8	reserved2[2];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_get_mib);
+
+/* Configure LLDP MIB Change Event (direct 0x0A01)
+ * also used for the event (with type in the command field)
+ */
+struct avf_aqc_lldp_update_mib {
+	u8	command;
+#define AVF_AQ_LLDP_MIB_UPDATE_ENABLE	0x0
+#define AVF_AQ_LLDP_MIB_UPDATE_DISABLE	0x1
+	u8	reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_update_mib);
+
+/* Add LLDP TLV (indirect 0x0A02)
+ * Delete LLDP TLV (indirect 0x0A04)
+ */
+struct avf_aqc_lldp_add_tlv {
+	u8	type; /* only nearest bridge and non-TPMR from 0x0A00 */
+	u8	reserved1[1];
+	__le16	len;
+	u8	reserved2[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_add_tlv);
+
+/* Update LLDP TLV (indirect 0x0A03) */
+struct avf_aqc_lldp_update_tlv {
+	u8	type; /* only nearest bridge and non-TPMR from 0x0A00 */
+	u8	reserved;
+	__le16	old_len;
+	__le16	new_offset;
+	__le16	new_len;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_update_tlv);
+
+/* Stop LLDP (direct 0x0A05) */
+struct avf_aqc_lldp_stop {
+	u8	command;
+#define AVF_AQ_LLDP_AGENT_STOP		0x0
+#define AVF_AQ_LLDP_AGENT_SHUTDOWN	0x1
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_stop);
+
+/* Start LLDP (direct 0x0A06) */
+
+struct avf_aqc_lldp_start {
+	u8	command;
+#define AVF_AQ_LLDP_AGENT_START	0x1
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_start);
+
+/* Set DCB (direct 0x0303) */
+struct avf_aqc_set_dcb_parameters {
+	u8 command;
+#define AVF_AQ_DCB_SET_AGENT	0x1
+#define AVF_DCB_VALID		0x1
+	u8 valid_flags;
+	u8 reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_dcb_parameters);
+
+/* Get CEE DCBX Oper Config (0x0A07)
+ * uses the generic descriptor struct
+ * returns below as indirect response
+ */
+
+#define AVF_AQC_CEE_APP_FCOE_SHIFT	0x0
+#define AVF_AQC_CEE_APP_FCOE_MASK	(0x7 << AVF_AQC_CEE_APP_FCOE_SHIFT)
+#define AVF_AQC_CEE_APP_ISCSI_SHIFT	0x3
+#define AVF_AQC_CEE_APP_ISCSI_MASK	(0x7 << AVF_AQC_CEE_APP_ISCSI_SHIFT)
+#define AVF_AQC_CEE_APP_FIP_SHIFT	0x8
+#define AVF_AQC_CEE_APP_FIP_MASK	(0x7 << AVF_AQC_CEE_APP_FIP_SHIFT)
+
+#define AVF_AQC_CEE_PG_STATUS_SHIFT	0x0
+#define AVF_AQC_CEE_PG_STATUS_MASK	(0x7 << AVF_AQC_CEE_PG_STATUS_SHIFT)
+#define AVF_AQC_CEE_PFC_STATUS_SHIFT	0x3
+#define AVF_AQC_CEE_PFC_STATUS_MASK	(0x7 << AVF_AQC_CEE_PFC_STATUS_SHIFT)
+#define AVF_AQC_CEE_APP_STATUS_SHIFT	0x8
+#define AVF_AQC_CEE_APP_STATUS_MASK	(0x7 << AVF_AQC_CEE_APP_STATUS_SHIFT)
+#define AVF_AQC_CEE_FCOE_STATUS_SHIFT	0x8
+#define AVF_AQC_CEE_FCOE_STATUS_MASK	(0x7 << AVF_AQC_CEE_FCOE_STATUS_SHIFT)
+#define AVF_AQC_CEE_ISCSI_STATUS_SHIFT	0xB
+#define AVF_AQC_CEE_ISCSI_STATUS_MASK	(0x7 << AVF_AQC_CEE_ISCSI_STATUS_SHIFT)
+#define AVF_AQC_CEE_FIP_STATUS_SHIFT	0x10
+#define AVF_AQC_CEE_FIP_STATUS_MASK	(0x7 << AVF_AQC_CEE_FIP_STATUS_SHIFT)
+
+/* struct avf_aqc_get_cee_dcb_cfg_v1_resp was originally defined with
+ * word boundary layout issues, which the Linux compilers silently deal
+ * with by adding padding, making the actual struct larger than designed.
+ * However, the FW compiler for the NIC is less lenient and complains
+ * about the struct.  Hence, the struct defined here has an extra byte in
+ * fields reserved3 and reserved4 to directly acknowledge that padding,
+ * and the new length is used in the length check macro.
+ */
+struct avf_aqc_get_cee_dcb_cfg_v1_resp {
+	u8	reserved1;
+	u8	oper_num_tc;
+	u8	oper_prio_tc[4];
+	u8	reserved2;
+	u8	oper_tc_bw[8];
+	u8	oper_pfc_en;
+	u8	reserved3[2];
+	__le16	oper_app_prio;
+	u8	reserved4[2];
+	__le16	tlv_status;
+};
+
+AVF_CHECK_STRUCT_LEN(0x18, avf_aqc_get_cee_dcb_cfg_v1_resp);
+
+struct avf_aqc_get_cee_dcb_cfg_resp {
+	u8	oper_num_tc;
+	u8	oper_prio_tc[4];
+	u8	oper_tc_bw[8];
+	u8	oper_pfc_en;
+	__le16	oper_app_prio;
+	__le32	tlv_status;
+	u8	reserved[12];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_get_cee_dcb_cfg_resp);
+
+/*	Set Local LLDP MIB (indirect 0x0A08)
+ *	Used to replace the local MIB of a given LLDP agent, e.g. DCBx
+ */
+struct avf_aqc_lldp_set_local_mib {
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT	0
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_MASK	(1 << \
+					SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_LOCAL_MIB	0x0
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT	(1)
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_MASK	(1 << \
+				SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS		0x1
+	u8	type;
+	u8	reserved0;
+	__le16	length;
+	u8	reserved1[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_set_local_mib);
+
+struct avf_aqc_lldp_set_local_mib_resp {
+#define SET_LOCAL_MIB_RESP_EVENT_TRIGGERED_MASK      0x01
+	u8  status;
+	u8  reserved[15];
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_lldp_set_local_mib_resp);
+
+/*	Stop/Start LLDP Agent (direct 0x0A09)
+ *	Used for stopping/starting a specific LLDP agent, e.g. DCBx
+ */
+struct avf_aqc_lldp_stop_start_specific_agent {
+#define AVF_AQC_START_SPECIFIC_AGENT_SHIFT	0
+#define AVF_AQC_START_SPECIFIC_AGENT_MASK \
+				(1 << AVF_AQC_START_SPECIFIC_AGENT_SHIFT)
+	u8	command;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_stop_start_specific_agent);
+
+/* Add Udp Tunnel command and completion (direct 0x0B00) */
+struct avf_aqc_add_udp_tunnel {
+	__le16	udp_port;
+	u8	reserved0[3];
+	u8	protocol_type;
+#define AVF_AQC_TUNNEL_TYPE_VXLAN	0x00
+#define AVF_AQC_TUNNEL_TYPE_NGE	0x01
+#define AVF_AQC_TUNNEL_TYPE_TEREDO	0x10
+#define AVF_AQC_TUNNEL_TYPE_VXLAN_GPE	0x11
+	u8	reserved1[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_udp_tunnel);
+
+struct avf_aqc_add_udp_tunnel_completion {
+	__le16	udp_port;
+	u8	filter_entry_index;
+	u8	multiple_pfs;
+#define AVF_AQC_SINGLE_PF		0x0
+#define AVF_AQC_MULTIPLE_PFS		0x1
+	u8	total_filters;
+	u8	reserved[11];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_udp_tunnel_completion);
+
+/* remove UDP Tunnel command (0x0B01) */
+struct avf_aqc_remove_udp_tunnel {
+	u8	reserved[2];
+	u8	index; /* 0 to 15 */
+	u8	reserved2[13];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_udp_tunnel);
+
+struct avf_aqc_del_udp_tunnel_completion {
+	__le16	udp_port;
+	u8	index; /* 0 to 15 */
+	u8	multiple_pfs;
+	u8	total_filters_used;
+	u8	reserved1[11];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_del_udp_tunnel_completion);
+
+struct avf_aqc_get_set_rss_key {
+#define AVF_AQC_SET_RSS_KEY_VSI_VALID		(0x1 << 15)
+#define AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT	0
+#define AVF_AQC_SET_RSS_KEY_VSI_ID_MASK	(0x3FF << \
+					AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT)
+	__le16	vsi_id;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_set_rss_key);
+
+struct avf_aqc_get_set_rss_key_data {
+	u8 standard_rss_key[0x28];
+	u8 extended_hash_key[0xc];
+};
+
+AVF_CHECK_STRUCT_LEN(0x34, avf_aqc_get_set_rss_key_data);
+
+struct  avf_aqc_get_set_rss_lut {
+#define AVF_AQC_SET_RSS_LUT_VSI_VALID		(0x1 << 15)
+#define AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT	0
+#define AVF_AQC_SET_RSS_LUT_VSI_ID_MASK	(0x3FF << \
+					AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT)
+	__le16	vsi_id;
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT	0
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK	(0x1 << \
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
+
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI	0
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF	1
+	__le16	flags;
+	u8	reserved[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_set_rss_lut);
+
+/* tunnel key structure 0x0B10 */
+
+struct avf_aqc_tunnel_key_structure {
+	u8	key1_off;
+	u8	key2_off;
+	u8	key1_len;  /* 0 to 15 */
+	u8	key2_len;  /* 0 to 15 */
+	u8	flags;
+#define AVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDE	0x01
+/* response flags */
+#define AVF_AQC_TUNNEL_KEY_STRUCT_SUCCESS	0x01
+#define AVF_AQC_TUNNEL_KEY_STRUCT_MODIFIED	0x02
+#define AVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDDEN	0x03
+	u8	network_key_index;
+#define AVF_AQC_NETWORK_KEY_INDEX_VXLAN		0x0
+#define AVF_AQC_NETWORK_KEY_INDEX_NGE			0x1
+#define AVF_AQC_NETWORK_KEY_INDEX_FLEX_MAC_IN_UDP	0x2
+#define AVF_AQC_NETWORK_KEY_INDEX_GRE			0x3
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_tunnel_key_structure);
+
+/* OEM mode commands (direct 0xFE0x) */
+struct avf_aqc_oem_param_change {
+	__le32	param_type;
+#define AVF_AQ_OEM_PARAM_TYPE_PF_CTL	0
+#define AVF_AQ_OEM_PARAM_TYPE_BW_CTL	1
+#define AVF_AQ_OEM_PARAM_MAC		2
+	__le32	param_value1;
+	__le16	param_value2;
+	u8	reserved[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_oem_param_change);
+
+struct avf_aqc_oem_state_change {
+	__le32	state;
+#define AVF_AQ_OEM_STATE_LINK_DOWN	0x0
+#define AVF_AQ_OEM_STATE_LINK_UP	0x1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_oem_state_change);
+
+/* Initialize OCSD (0xFE02, direct) */
+struct avf_aqc_opc_oem_ocsd_initialize {
+	u8 type_status;
+	u8 reserved1[3];
+	__le32 ocsd_memory_block_addr_high;
+	__le32 ocsd_memory_block_addr_low;
+	__le32 requested_update_interval;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_opc_oem_ocsd_initialize);
+
+/* Initialize OCBB  (0xFE03, direct) */
+struct avf_aqc_opc_oem_ocbb_initialize {
+	u8 type_status;
+	u8 reserved1[3];
+	__le32 ocbb_memory_block_addr_high;
+	__le32 ocbb_memory_block_addr_low;
+	u8 reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_opc_oem_ocbb_initialize);
+
+/* debug commands */
+
+/* get device id (0xFF00) uses the generic structure */
+
+/* set test mode (0xFF01, internal) */
+
+struct avf_acq_set_test_mode {
+	u8	mode;
+#define AVF_AQ_TEST_PARTIAL	0
+#define AVF_AQ_TEST_FULL	1
+#define AVF_AQ_TEST_NVM	2
+	u8	reserved[3];
+	u8	command;
+#define AVF_AQ_TEST_OPEN	0
+#define AVF_AQ_TEST_CLOSE	1
+#define AVF_AQ_TEST_INC	2
+	u8	reserved2[3];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_acq_set_test_mode);
+
+/* Debug Read Register command (0xFF03)
+ * Debug Write Register command (0xFF04)
+ */
+struct avf_aqc_debug_reg_read_write {
+	__le32 reserved;
+	__le32 address;
+	__le32 value_high;
+	__le32 value_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_reg_read_write);
+
+/* Scatter/gather Reg Read  (indirect 0xFF05)
+ * Scatter/gather Reg Write (indirect 0xFF06)
+ */
+
+/* avf_aq_desc is used for the command */
+struct avf_aqc_debug_reg_sg_element_data {
+	__le32 address;
+	__le32 value;
+};
+
+/* Debug Modify register (direct 0xFF07) */
+struct avf_aqc_debug_modify_reg {
+	__le32 address;
+	__le32 value;
+	__le32 clear_mask;
+	__le32 set_mask;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_modify_reg);
+
+/* dump internal data (0xFF08, indirect) */
+
+#define AVF_AQ_CLUSTER_ID_AUX		0
+#define AVF_AQ_CLUSTER_ID_SWITCH_FLU	1
+#define AVF_AQ_CLUSTER_ID_TXSCHED	2
+#define AVF_AQ_CLUSTER_ID_HMC		3
+#define AVF_AQ_CLUSTER_ID_MAC0		4
+#define AVF_AQ_CLUSTER_ID_MAC1		5
+#define AVF_AQ_CLUSTER_ID_MAC2		6
+#define AVF_AQ_CLUSTER_ID_MAC3		7
+#define AVF_AQ_CLUSTER_ID_DCB		8
+#define AVF_AQ_CLUSTER_ID_EMP_MEM	9
+#define AVF_AQ_CLUSTER_ID_PKT_BUF	10
+#define AVF_AQ_CLUSTER_ID_ALTRAM	11
+
+struct avf_aqc_debug_dump_internals {
+	u8	cluster_id;
+	u8	table_id;
+	__le16	data_size;
+	__le32	idx;
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_dump_internals);
+
+struct avf_aqc_debug_modify_internals {
+	u8	cluster_id;
+	u8	cluster_specific_params[7];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_modify_internals);
+
+#endif /* _AVF_ADMINQ_CMD_H_ */
diff --git a/drivers/net/avf/base/avf_alloc.h b/drivers/net/avf/base/avf_alloc.h
new file mode 100644
index 0000000..21e29bd
--- /dev/null
+++ b/drivers/net/avf/base/avf_alloc.h
@@ -0,0 +1,65 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ALLOC_H_
+#define _AVF_ALLOC_H_
+
+struct avf_hw;
+
+/* Memory allocation types */
+enum avf_memory_type {
+	avf_mem_arq_buf = 0,		/* ARQ indirect command buffer */
+	avf_mem_asq_buf = 1,
+	avf_mem_atq_buf = 2,		/* ATQ indirect command buffer */
+	avf_mem_arq_ring = 3,		/* ARQ descriptor ring */
+	avf_mem_atq_ring = 4,		/* ATQ descriptor ring */
+	avf_mem_pd = 5,		/* Page Descriptor */
+	avf_mem_bp = 6,		/* Backing Page - 4KB */
+	avf_mem_bp_jumbo = 7,		/* Backing Page - > 4KB */
+	avf_mem_reserved
+};
+
+/* prototype for functions used for dynamic memory allocation */
+enum avf_status_code avf_allocate_dma_mem(struct avf_hw *hw,
+					    struct avf_dma_mem *mem,
+					    enum avf_memory_type type,
+					    u64 size, u32 alignment);
+enum avf_status_code avf_free_dma_mem(struct avf_hw *hw,
+					struct avf_dma_mem *mem);
+enum avf_status_code avf_allocate_virt_mem(struct avf_hw *hw,
+					     struct avf_virt_mem *mem,
+					     u32 size);
+enum avf_status_code avf_free_virt_mem(struct avf_hw *hw,
+					 struct avf_virt_mem *mem);
+
+#endif /* _AVF_ALLOC_H_ */
diff --git a/drivers/net/avf/base/avf_common.c b/drivers/net/avf/base/avf_common.c
new file mode 100644
index 0000000..bbaadad
--- /dev/null
+++ b/drivers/net/avf/base/avf_common.c
@@ -0,0 +1,1845 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "avf_type.h"
+#include "avf_adminq.h"
+#include "avf_prototype.h"
+#include "virtchnl.h"
+
+
+/**
+ * avf_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the mac type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ **/
+enum avf_status_code avf_set_mac_type(struct avf_hw *hw)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+
+	DEBUGFUNC("avf_set_mac_type\n");
+
+	if (hw->vendor_id == AVF_INTEL_VENDOR_ID) {
+		switch (hw->device_id) {
+	/* TODO: remove undefined device IDs for now, need to think about
+	 * how to remove them in the shared code
+	 */
+		case AVF_DEV_ID_ADAPTIVE_VF:
+			hw->mac.type = AVF_MAC_VF;
+			break;
+		default:
+			hw->mac.type = AVF_MAC_GENERIC;
+			break;
+		}
+	} else {
+		status = AVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	DEBUGOUT2("avf_set_mac_type found mac: %d, returns: %d\n",
+		  hw->mac.type, status);
+	return status;
+}
+
+/**
+ * avf_aq_str - convert AQ err code to a string
+ * @hw: pointer to the HW structure
+ * @aq_err: the AQ error code to convert
+ **/
+const char *avf_aq_str(struct avf_hw *hw, enum avf_admin_queue_err aq_err)
+{
+	switch (aq_err) {
+	case AVF_AQ_RC_OK:
+		return "OK";
+	case AVF_AQ_RC_EPERM:
+		return "AVF_AQ_RC_EPERM";
+	case AVF_AQ_RC_ENOENT:
+		return "AVF_AQ_RC_ENOENT";
+	case AVF_AQ_RC_ESRCH:
+		return "AVF_AQ_RC_ESRCH";
+	case AVF_AQ_RC_EINTR:
+		return "AVF_AQ_RC_EINTR";
+	case AVF_AQ_RC_EIO:
+		return "AVF_AQ_RC_EIO";
+	case AVF_AQ_RC_ENXIO:
+		return "AVF_AQ_RC_ENXIO";
+	case AVF_AQ_RC_E2BIG:
+		return "AVF_AQ_RC_E2BIG";
+	case AVF_AQ_RC_EAGAIN:
+		return "AVF_AQ_RC_EAGAIN";
+	case AVF_AQ_RC_ENOMEM:
+		return "AVF_AQ_RC_ENOMEM";
+	case AVF_AQ_RC_EACCES:
+		return "AVF_AQ_RC_EACCES";
+	case AVF_AQ_RC_EFAULT:
+		return "AVF_AQ_RC_EFAULT";
+	case AVF_AQ_RC_EBUSY:
+		return "AVF_AQ_RC_EBUSY";
+	case AVF_AQ_RC_EEXIST:
+		return "AVF_AQ_RC_EEXIST";
+	case AVF_AQ_RC_EINVAL:
+		return "AVF_AQ_RC_EINVAL";
+	case AVF_AQ_RC_ENOTTY:
+		return "AVF_AQ_RC_ENOTTY";
+	case AVF_AQ_RC_ENOSPC:
+		return "AVF_AQ_RC_ENOSPC";
+	case AVF_AQ_RC_ENOSYS:
+		return "AVF_AQ_RC_ENOSYS";
+	case AVF_AQ_RC_ERANGE:
+		return "AVF_AQ_RC_ERANGE";
+	case AVF_AQ_RC_EFLUSHED:
+		return "AVF_AQ_RC_EFLUSHED";
+	case AVF_AQ_RC_BAD_ADDR:
+		return "AVF_AQ_RC_BAD_ADDR";
+	case AVF_AQ_RC_EMODE:
+		return "AVF_AQ_RC_EMODE";
+	case AVF_AQ_RC_EFBIG:
+		return "AVF_AQ_RC_EFBIG";
+	}
+
+	snprintf(hw->err_str, sizeof(hw->err_str), "%d", aq_err);
+	return hw->err_str;
+}
+
+/**
+ * avf_stat_str - convert status err code to a string
+ * @hw: pointer to the HW structure
+ * @stat_err: the status error code to convert
+ **/
+const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err)
+{
+	switch (stat_err) {
+	case AVF_SUCCESS:
+		return "OK";
+	case AVF_ERR_NVM:
+		return "AVF_ERR_NVM";
+	case AVF_ERR_NVM_CHECKSUM:
+		return "AVF_ERR_NVM_CHECKSUM";
+	case AVF_ERR_PHY:
+		return "AVF_ERR_PHY";
+	case AVF_ERR_CONFIG:
+		return "AVF_ERR_CONFIG";
+	case AVF_ERR_PARAM:
+		return "AVF_ERR_PARAM";
+	case AVF_ERR_MAC_TYPE:
+		return "AVF_ERR_MAC_TYPE";
+	case AVF_ERR_UNKNOWN_PHY:
+		return "AVF_ERR_UNKNOWN_PHY";
+	case AVF_ERR_LINK_SETUP:
+		return "AVF_ERR_LINK_SETUP";
+	case AVF_ERR_ADAPTER_STOPPED:
+		return "AVF_ERR_ADAPTER_STOPPED";
+	case AVF_ERR_INVALID_MAC_ADDR:
+		return "AVF_ERR_INVALID_MAC_ADDR";
+	case AVF_ERR_DEVICE_NOT_SUPPORTED:
+		return "AVF_ERR_DEVICE_NOT_SUPPORTED";
+	case AVF_ERR_MASTER_REQUESTS_PENDING:
+		return "AVF_ERR_MASTER_REQUESTS_PENDING";
+	case AVF_ERR_INVALID_LINK_SETTINGS:
+		return "AVF_ERR_INVALID_LINK_SETTINGS";
+	case AVF_ERR_AUTONEG_NOT_COMPLETE:
+		return "AVF_ERR_AUTONEG_NOT_COMPLETE";
+	case AVF_ERR_RESET_FAILED:
+		return "AVF_ERR_RESET_FAILED";
+	case AVF_ERR_SWFW_SYNC:
+		return "AVF_ERR_SWFW_SYNC";
+	case AVF_ERR_NO_AVAILABLE_VSI:
+		return "AVF_ERR_NO_AVAILABLE_VSI";
+	case AVF_ERR_NO_MEMORY:
+		return "AVF_ERR_NO_MEMORY";
+	case AVF_ERR_BAD_PTR:
+		return "AVF_ERR_BAD_PTR";
+	case AVF_ERR_RING_FULL:
+		return "AVF_ERR_RING_FULL";
+	case AVF_ERR_INVALID_PD_ID:
+		return "AVF_ERR_INVALID_PD_ID";
+	case AVF_ERR_INVALID_QP_ID:
+		return "AVF_ERR_INVALID_QP_ID";
+	case AVF_ERR_INVALID_CQ_ID:
+		return "AVF_ERR_INVALID_CQ_ID";
+	case AVF_ERR_INVALID_CEQ_ID:
+		return "AVF_ERR_INVALID_CEQ_ID";
+	case AVF_ERR_INVALID_AEQ_ID:
+		return "AVF_ERR_INVALID_AEQ_ID";
+	case AVF_ERR_INVALID_SIZE:
+		return "AVF_ERR_INVALID_SIZE";
+	case AVF_ERR_INVALID_ARP_INDEX:
+		return "AVF_ERR_INVALID_ARP_INDEX";
+	case AVF_ERR_INVALID_FPM_FUNC_ID:
+		return "AVF_ERR_INVALID_FPM_FUNC_ID";
+	case AVF_ERR_QP_INVALID_MSG_SIZE:
+		return "AVF_ERR_QP_INVALID_MSG_SIZE";
+	case AVF_ERR_QP_TOOMANY_WRS_POSTED:
+		return "AVF_ERR_QP_TOOMANY_WRS_POSTED";
+	case AVF_ERR_INVALID_FRAG_COUNT:
+		return "AVF_ERR_INVALID_FRAG_COUNT";
+	case AVF_ERR_QUEUE_EMPTY:
+		return "AVF_ERR_QUEUE_EMPTY";
+	case AVF_ERR_INVALID_ALIGNMENT:
+		return "AVF_ERR_INVALID_ALIGNMENT";
+	case AVF_ERR_FLUSHED_QUEUE:
+		return "AVF_ERR_FLUSHED_QUEUE";
+	case AVF_ERR_INVALID_PUSH_PAGE_INDEX:
+		return "AVF_ERR_INVALID_PUSH_PAGE_INDEX";
+	case AVF_ERR_INVALID_IMM_DATA_SIZE:
+		return "AVF_ERR_INVALID_IMM_DATA_SIZE";
+	case AVF_ERR_TIMEOUT:
+		return "AVF_ERR_TIMEOUT";
+	case AVF_ERR_OPCODE_MISMATCH:
+		return "AVF_ERR_OPCODE_MISMATCH";
+	case AVF_ERR_CQP_COMPL_ERROR:
+		return "AVF_ERR_CQP_COMPL_ERROR";
+	case AVF_ERR_INVALID_VF_ID:
+		return "AVF_ERR_INVALID_VF_ID";
+	case AVF_ERR_INVALID_HMCFN_ID:
+		return "AVF_ERR_INVALID_HMCFN_ID";
+	case AVF_ERR_BACKING_PAGE_ERROR:
+		return "AVF_ERR_BACKING_PAGE_ERROR";
+	case AVF_ERR_NO_PBLCHUNKS_AVAILABLE:
+		return "AVF_ERR_NO_PBLCHUNKS_AVAILABLE";
+	case AVF_ERR_INVALID_PBLE_INDEX:
+		return "AVF_ERR_INVALID_PBLE_INDEX";
+	case AVF_ERR_INVALID_SD_INDEX:
+		return "AVF_ERR_INVALID_SD_INDEX";
+	case AVF_ERR_INVALID_PAGE_DESC_INDEX:
+		return "AVF_ERR_INVALID_PAGE_DESC_INDEX";
+	case AVF_ERR_INVALID_SD_TYPE:
+		return "AVF_ERR_INVALID_SD_TYPE";
+	case AVF_ERR_MEMCPY_FAILED:
+		return "AVF_ERR_MEMCPY_FAILED";
+	case AVF_ERR_INVALID_HMC_OBJ_INDEX:
+		return "AVF_ERR_INVALID_HMC_OBJ_INDEX";
+	case AVF_ERR_INVALID_HMC_OBJ_COUNT:
+		return "AVF_ERR_INVALID_HMC_OBJ_COUNT";
+	case AVF_ERR_INVALID_SRQ_ARM_LIMIT:
+		return "AVF_ERR_INVALID_SRQ_ARM_LIMIT";
+	case AVF_ERR_SRQ_ENABLED:
+		return "AVF_ERR_SRQ_ENABLED";
+	case AVF_ERR_ADMIN_QUEUE_ERROR:
+		return "AVF_ERR_ADMIN_QUEUE_ERROR";
+	case AVF_ERR_ADMIN_QUEUE_TIMEOUT:
+		return "AVF_ERR_ADMIN_QUEUE_TIMEOUT";
+	case AVF_ERR_BUF_TOO_SHORT:
+		return "AVF_ERR_BUF_TOO_SHORT";
+	case AVF_ERR_ADMIN_QUEUE_FULL:
+		return "AVF_ERR_ADMIN_QUEUE_FULL";
+	case AVF_ERR_ADMIN_QUEUE_NO_WORK:
+		return "AVF_ERR_ADMIN_QUEUE_NO_WORK";
+	case AVF_ERR_BAD_IWARP_CQE:
+		return "AVF_ERR_BAD_IWARP_CQE";
+	case AVF_ERR_NVM_BLANK_MODE:
+		return "AVF_ERR_NVM_BLANK_MODE";
+	case AVF_ERR_NOT_IMPLEMENTED:
+		return "AVF_ERR_NOT_IMPLEMENTED";
+	case AVF_ERR_PE_DOORBELL_NOT_ENABLED:
+		return "AVF_ERR_PE_DOORBELL_NOT_ENABLED";
+	case AVF_ERR_DIAG_TEST_FAILED:
+		return "AVF_ERR_DIAG_TEST_FAILED";
+	case AVF_ERR_NOT_READY:
+		return "AVF_ERR_NOT_READY";
+	case AVF_NOT_SUPPORTED:
+		return "AVF_NOT_SUPPORTED";
+	case AVF_ERR_FIRMWARE_API_VERSION:
+		return "AVF_ERR_FIRMWARE_API_VERSION";
+	case AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR:
+		return "AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR";
+	}
+
+	snprintf(hw->err_str, sizeof(hw->err_str), "%d", stat_err);
+	return hw->err_str;
+}
+
+/**
+ * avf_debug_aq
+ * @hw: pointer to the hw struct
+ * @mask: debug mask
+ * @desc: pointer to admin queue descriptor
+ * @buffer: pointer to command buffer
+ * @buf_len: max length of buffer
+ *
+ * Dumps debug log about adminq command with descriptor contents.
+ **/
+void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask, void *desc,
+		   void *buffer, u16 buf_len)
+{
+	struct avf_aq_desc *aq_desc = (struct avf_aq_desc *)desc;
+	u8 *buf = (u8 *)buffer;
+	u16 len;
+	u16 i = 0;
+
+	if ((!(mask & hw->debug_mask)) || (desc == NULL))
+		return;
+
+	len = LE16_TO_CPU(aq_desc->datalen);
+
+	avf_debug(hw, mask,
+		   "AQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+		   LE16_TO_CPU(aq_desc->opcode),
+		   LE16_TO_CPU(aq_desc->flags),
+		   LE16_TO_CPU(aq_desc->datalen),
+		   LE16_TO_CPU(aq_desc->retval));
+	avf_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->cookie_high),
+		   LE32_TO_CPU(aq_desc->cookie_low));
+	avf_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->params.internal.param0),
+		   LE32_TO_CPU(aq_desc->params.internal.param1));
+	avf_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->params.external.addr_high),
+		   LE32_TO_CPU(aq_desc->params.external.addr_low));
+
+	if ((buffer != NULL) && (aq_desc->datalen != 0)) {
+		avf_debug(hw, mask, "AQ CMD Buffer:\n");
+		if (buf_len < len)
+			len = buf_len;
+		/* write the full 16-byte chunks */
+		for (i = 0; i < (len - 16); i += 16)
+			avf_debug(hw, mask,
+				   "\t0x%04X  %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
+				   i, buf[i], buf[i+1], buf[i+2], buf[i+3],
+				   buf[i+4], buf[i+5], buf[i+6], buf[i+7],
+				   buf[i+8], buf[i+9], buf[i+10], buf[i+11],
+				   buf[i+12], buf[i+13], buf[i+14], buf[i+15]);
+		/* the most we could have left is 16 bytes, pad with zeros */
+		if (i < len) {
+			char d_buf[16];
+			int j, i_sav;
+
+			i_sav = i;
+			memset(d_buf, 0, sizeof(d_buf));
+			for (j = 0; i < len; j++, i++)
+				d_buf[j] = buf[i];
+			avf_debug(hw, mask,
+				   "\t0x%04X  %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
+				   i_sav, d_buf[0], d_buf[1], d_buf[2], d_buf[3],
+				   d_buf[4], d_buf[5], d_buf[6], d_buf[7],
+				   d_buf[8], d_buf[9], d_buf[10], d_buf[11],
+				   d_buf[12], d_buf[13], d_buf[14], d_buf[15]);
+		}
+	}
+}
+
+/**
+ * avf_check_asq_alive
+ * @hw: pointer to the hw struct
+ *
+ * Returns true if Queue is enabled else false.
+ **/
+bool avf_check_asq_alive(struct avf_hw *hw)
+{
+	if (hw->aq.asq.len)
+#ifdef INTEGRATED_VF
+		if (avf_is_vf(hw))
+			return !!(rd32(hw, hw->aq.asq.len) &
+				AVF_ATQLEN1_ATQENABLE_MASK);
+#else
+		return !!(rd32(hw, hw->aq.asq.len) &
+			AVF_ATQLEN1_ATQENABLE_MASK);
+#endif /* INTEGRATED_VF */
+	return false;
+}
+
+/**
+ * avf_aq_queue_shutdown
+ * @hw: pointer to the hw struct
+ * @unloading: is the driver unloading itself
+ *
+ * Tell the Firmware that we're shutting down the AdminQ and whether
+ * or not the driver is unloading as well.
+ **/
+enum avf_status_code avf_aq_queue_shutdown(struct avf_hw *hw,
+					     bool unloading)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_queue_shutdown *cmd =
+		(struct avf_aqc_queue_shutdown *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_queue_shutdown);
+
+	if (unloading)
+		cmd->driver_unloading = CPU_TO_LE32(AVF_AQ_DRIVER_UNLOADING);
+	status = avf_asq_send_command(hw, &desc, NULL, 0, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get or set RSS look up table
+ **/
+STATIC enum avf_status_code avf_aq_get_set_rss_lut(struct avf_hw *hw,
+						     u16 vsi_id, bool pf_lut,
+						     u8 *lut, u16 lut_size,
+						     bool set)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_set_rss_lut *cmd_resp =
+		   (struct avf_aqc_get_set_rss_lut *)&desc.params.raw;
+
+	if (set)
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_set_rss_lut);
+	else
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_get_rss_lut);
+
+	/* Indirect command */
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd_resp->vsi_id =
+			CPU_TO_LE16((u16)((vsi_id <<
+					  AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT) &
+					  AVF_AQC_SET_RSS_LUT_VSI_ID_MASK));
+	cmd_resp->vsi_id |= CPU_TO_LE16((u16)AVF_AQC_SET_RSS_LUT_VSI_VALID);
+
+	if (pf_lut)
+		cmd_resp->flags |= CPU_TO_LE16((u16)
+					((AVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF <<
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
+	else
+		cmd_resp->flags |= CPU_TO_LE16((u16)
+					((AVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI <<
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
+
+	status = avf_asq_send_command(hw, &desc, lut, lut_size, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * get the RSS lookup table, PF or VSI type
+ **/
+enum avf_status_code avf_aq_get_rss_lut(struct avf_hw *hw, u16 vsi_id,
+					  bool pf_lut, u8 *lut, u16 lut_size)
+{
+	return avf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size,
+				       false);
+}
+
+/**
+ * avf_aq_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * set the RSS lookup table, PF or VSI type
+ **/
+enum avf_status_code avf_aq_set_rss_lut(struct avf_hw *hw, u16 vsi_id,
+					  bool pf_lut, u8 *lut, u16 lut_size)
+{
+	return avf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
+}
+
+/**
+ * avf_aq_get_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
+ *
+ * get the RSS key per VSI
+ **/
+STATIC enum avf_status_code avf_aq_get_set_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key,
+				      bool set)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_set_rss_key *cmd_resp =
+			(struct avf_aqc_get_set_rss_key *)&desc.params.raw;
+	u16 key_size = sizeof(struct avf_aqc_get_set_rss_key_data);
+
+	if (set)
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_set_rss_key);
+	else
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_get_rss_key);
+
+	/* Indirect command */
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd_resp->vsi_id =
+			CPU_TO_LE16((u16)((vsi_id <<
+					  AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT) &
+					  AVF_AQC_SET_RSS_KEY_VSI_ID_MASK));
+	cmd_resp->vsi_id |= CPU_TO_LE16((u16)AVF_AQC_SET_RSS_KEY_VSI_VALID);
+
+	status = avf_asq_send_command(hw, &desc, key, key_size, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * get the RSS key per VSI
+ **/
+enum avf_status_code avf_aq_get_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key)
+{
+	return avf_aq_get_set_rss_key(hw, vsi_id, key, false);
+}
+
+/**
+ * avf_aq_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * set the RSS key per VSI
+ **/
+enum avf_status_code avf_aq_set_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key)
+{
+	return avf_aq_get_set_rss_key(hw, vsi_id, key, true);
+}
+
+/* The avf_ptype_lookup table is used to convert from the 8-bit ptype in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ *
+ * Typical work flow:
+ *
+ * IF NOT avf_ptype_lookup[ptype].known
+ * THEN
+ *      Packet is unknown
+ * ELSE IF avf_ptype_lookup[ptype].outer_ip == AVF_RX_PTYPE_OUTER_IP
+ *      Use the rest of the fields to look at the tunnels, inner protocols, etc
+ * ELSE
+ *      Use the enum avf_rx_l2_ptype to decode the packet type
+ * ENDIF
+ */
+
+/* macro to make the table lines short */
+#define AVF_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
+	{	PTYPE, \
+		1, \
+		AVF_RX_PTYPE_OUTER_##OUTER_IP, \
+		AVF_RX_PTYPE_OUTER_##OUTER_IP_VER, \
+		AVF_RX_PTYPE_##OUTER_FRAG, \
+		AVF_RX_PTYPE_TUNNEL_##T, \
+		AVF_RX_PTYPE_TUNNEL_END_##TE, \
+		AVF_RX_PTYPE_##TEF, \
+		AVF_RX_PTYPE_INNER_PROT_##I, \
+		AVF_RX_PTYPE_PAYLOAD_LAYER_##PL }
+
+#define AVF_PTT_UNUSED_ENTRY(PTYPE) \
+		{ PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
+
+/* shorter macros makes the table fit but are terse */
+#define AVF_RX_PTYPE_NOF		AVF_RX_PTYPE_NOT_FRAG
+#define AVF_RX_PTYPE_FRG		AVF_RX_PTYPE_FRAG
+#define AVF_RX_PTYPE_INNER_PROT_TS	AVF_RX_PTYPE_INNER_PROT_TIMESYNC
+
+/* Lookup table mapping the HW PTYPE to the bit field for decoding */
+struct avf_rx_ptype_decoded avf_ptype_lookup[] = {
+	/* L2 Packet types */
+	AVF_PTT_UNUSED_ENTRY(0),
+	AVF_PTT(1,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(2,  L2, NONE, NOF, NONE, NONE, NOF, TS,   PAY2),
+	AVF_PTT(3,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT_UNUSED_ENTRY(4),
+	AVF_PTT_UNUSED_ENTRY(5),
+	AVF_PTT(6,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(7,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT_UNUSED_ENTRY(8),
+	AVF_PTT_UNUSED_ENTRY(9),
+	AVF_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	AVF_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+
+	/* Non Tunneled IPv4 */
+	AVF_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(25),
+	AVF_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	AVF_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	AVF_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv4 */
+	AVF_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(32),
+	AVF_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv6 */
+	AVF_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(39),
+	AVF_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT */
+	AVF_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> IPv4 */
+	AVF_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(47),
+	AVF_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> IPv6 */
+	AVF_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(54),
+	AVF_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC */
+	AVF_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> MAC --> IPv4 */
+	AVF_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(62),
+	AVF_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT -> MAC --> IPv6 */
+	AVF_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(69),
+	AVF_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN */
+	AVF_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */
+	AVF_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(77),
+	AVF_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */
+	AVF_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(84),
+	AVF_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* Non Tunneled IPv6 */
+	AVF_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(91),
+	AVF_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	AVF_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	AVF_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv4 */
+	AVF_PTT(95,  IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(96,  IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(97,  IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(98),
+	AVF_PTT(99,  IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv6 */
+	AVF_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(105),
+	AVF_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT */
+	AVF_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> IPv4 */
+	AVF_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(113),
+	AVF_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> IPv6 */
+	AVF_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(120),
+	AVF_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC */
+	AVF_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv4 */
+	AVF_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(128),
+	AVF_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv6 */
+	AVF_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(135),
+	AVF_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN */
+	AVF_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */
+	AVF_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(143),
+	AVF_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */
+	AVF_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(150),
+	AVF_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* unused entries */
+	AVF_PTT_UNUSED_ENTRY(154),
+	AVF_PTT_UNUSED_ENTRY(155),
+	AVF_PTT_UNUSED_ENTRY(156),
+	AVF_PTT_UNUSED_ENTRY(157),
+	AVF_PTT_UNUSED_ENTRY(158),
+	AVF_PTT_UNUSED_ENTRY(159),
+
+	AVF_PTT_UNUSED_ENTRY(160),
+	AVF_PTT_UNUSED_ENTRY(161),
+	AVF_PTT_UNUSED_ENTRY(162),
+	AVF_PTT_UNUSED_ENTRY(163),
+	AVF_PTT_UNUSED_ENTRY(164),
+	AVF_PTT_UNUSED_ENTRY(165),
+	AVF_PTT_UNUSED_ENTRY(166),
+	AVF_PTT_UNUSED_ENTRY(167),
+	AVF_PTT_UNUSED_ENTRY(168),
+	AVF_PTT_UNUSED_ENTRY(169),
+
+	AVF_PTT_UNUSED_ENTRY(170),
+	AVF_PTT_UNUSED_ENTRY(171),
+	AVF_PTT_UNUSED_ENTRY(172),
+	AVF_PTT_UNUSED_ENTRY(173),
+	AVF_PTT_UNUSED_ENTRY(174),
+	AVF_PTT_UNUSED_ENTRY(175),
+	AVF_PTT_UNUSED_ENTRY(176),
+	AVF_PTT_UNUSED_ENTRY(177),
+	AVF_PTT_UNUSED_ENTRY(178),
+	AVF_PTT_UNUSED_ENTRY(179),
+
+	AVF_PTT_UNUSED_ENTRY(180),
+	AVF_PTT_UNUSED_ENTRY(181),
+	AVF_PTT_UNUSED_ENTRY(182),
+	AVF_PTT_UNUSED_ENTRY(183),
+	AVF_PTT_UNUSED_ENTRY(184),
+	AVF_PTT_UNUSED_ENTRY(185),
+	AVF_PTT_UNUSED_ENTRY(186),
+	AVF_PTT_UNUSED_ENTRY(187),
+	AVF_PTT_UNUSED_ENTRY(188),
+	AVF_PTT_UNUSED_ENTRY(189),
+
+	AVF_PTT_UNUSED_ENTRY(190),
+	AVF_PTT_UNUSED_ENTRY(191),
+	AVF_PTT_UNUSED_ENTRY(192),
+	AVF_PTT_UNUSED_ENTRY(193),
+	AVF_PTT_UNUSED_ENTRY(194),
+	AVF_PTT_UNUSED_ENTRY(195),
+	AVF_PTT_UNUSED_ENTRY(196),
+	AVF_PTT_UNUSED_ENTRY(197),
+	AVF_PTT_UNUSED_ENTRY(198),
+	AVF_PTT_UNUSED_ENTRY(199),
+
+	AVF_PTT_UNUSED_ENTRY(200),
+	AVF_PTT_UNUSED_ENTRY(201),
+	AVF_PTT_UNUSED_ENTRY(202),
+	AVF_PTT_UNUSED_ENTRY(203),
+	AVF_PTT_UNUSED_ENTRY(204),
+	AVF_PTT_UNUSED_ENTRY(205),
+	AVF_PTT_UNUSED_ENTRY(206),
+	AVF_PTT_UNUSED_ENTRY(207),
+	AVF_PTT_UNUSED_ENTRY(208),
+	AVF_PTT_UNUSED_ENTRY(209),
+
+	AVF_PTT_UNUSED_ENTRY(210),
+	AVF_PTT_UNUSED_ENTRY(211),
+	AVF_PTT_UNUSED_ENTRY(212),
+	AVF_PTT_UNUSED_ENTRY(213),
+	AVF_PTT_UNUSED_ENTRY(214),
+	AVF_PTT_UNUSED_ENTRY(215),
+	AVF_PTT_UNUSED_ENTRY(216),
+	AVF_PTT_UNUSED_ENTRY(217),
+	AVF_PTT_UNUSED_ENTRY(218),
+	AVF_PTT_UNUSED_ENTRY(219),
+
+	AVF_PTT_UNUSED_ENTRY(220),
+	AVF_PTT_UNUSED_ENTRY(221),
+	AVF_PTT_UNUSED_ENTRY(222),
+	AVF_PTT_UNUSED_ENTRY(223),
+	AVF_PTT_UNUSED_ENTRY(224),
+	AVF_PTT_UNUSED_ENTRY(225),
+	AVF_PTT_UNUSED_ENTRY(226),
+	AVF_PTT_UNUSED_ENTRY(227),
+	AVF_PTT_UNUSED_ENTRY(228),
+	AVF_PTT_UNUSED_ENTRY(229),
+
+	AVF_PTT_UNUSED_ENTRY(230),
+	AVF_PTT_UNUSED_ENTRY(231),
+	AVF_PTT_UNUSED_ENTRY(232),
+	AVF_PTT_UNUSED_ENTRY(233),
+	AVF_PTT_UNUSED_ENTRY(234),
+	AVF_PTT_UNUSED_ENTRY(235),
+	AVF_PTT_UNUSED_ENTRY(236),
+	AVF_PTT_UNUSED_ENTRY(237),
+	AVF_PTT_UNUSED_ENTRY(238),
+	AVF_PTT_UNUSED_ENTRY(239),
+
+	AVF_PTT_UNUSED_ENTRY(240),
+	AVF_PTT_UNUSED_ENTRY(241),
+	AVF_PTT_UNUSED_ENTRY(242),
+	AVF_PTT_UNUSED_ENTRY(243),
+	AVF_PTT_UNUSED_ENTRY(244),
+	AVF_PTT_UNUSED_ENTRY(245),
+	AVF_PTT_UNUSED_ENTRY(246),
+	AVF_PTT_UNUSED_ENTRY(247),
+	AVF_PTT_UNUSED_ENTRY(248),
+	AVF_PTT_UNUSED_ENTRY(249),
+
+	AVF_PTT_UNUSED_ENTRY(250),
+	AVF_PTT_UNUSED_ENTRY(251),
+	AVF_PTT_UNUSED_ENTRY(252),
+	AVF_PTT_UNUSED_ENTRY(253),
+	AVF_PTT_UNUSED_ENTRY(254),
+	AVF_PTT_UNUSED_ENTRY(255)
+};
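
As a usage sketch (illustrative only, not part of this patch), an Rx routine would
consume the table along the lines of the workflow comment above. The decoded field
names below are assumed to mirror the i40e layout of struct avf_rx_ptype_decoded:

static inline bool example_ptype_is_ipv4_tcp(u8 ptype)
{
	/* assumed field names: known / outer_ip / outer_ip_ver / inner_prot */
	struct avf_rx_ptype_decoded decoded = avf_ptype_lookup[ptype];

	if (!decoded.known)
		return false;		/* unknown packet type */
	if (decoded.outer_ip != AVF_RX_PTYPE_OUTER_IP)
		return false;		/* L2-only ptype */

	return decoded.outer_ip_ver == AVF_RX_PTYPE_OUTER_IPV4 &&
	       decoded.inner_prot == AVF_RX_PTYPE_INNER_PROT_TCP;
}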
+
+
+/**
+ * avf_validate_mac_addr - Validate unicast MAC address
+ * @mac_addr: pointer to MAC address
+ *
+ * Tests a MAC address to ensure it is a valid Individual Address
+ **/
+enum avf_status_code avf_validate_mac_addr(u8 *mac_addr)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+
+	DEBUGFUNC("avf_validate_mac_addr");
+
+	/* Broadcast addresses ARE multicast addresses
+	 * Make sure it is not a multicast address
+	 * Reject the zero address
+	 */
+	if (AVF_IS_MULTICAST(mac_addr) ||
+	    (mac_addr[0] == 0 && mac_addr[1] == 0 && mac_addr[2] == 0 &&
+	      mac_addr[3] == 0 && mac_addr[4] == 0 && mac_addr[5] == 0))
+		status = AVF_ERR_INVALID_MAC_ADDR;
+
+	return status;
+}
+
+/**
+ * avf_aq_rx_ctl_read_register - use FW to read from an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: ptr to register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Use the firmware to read the Rx control register,
+ * especially useful if the Rx unit is under heavy pressure
+ **/
+enum avf_status_code avf_aq_rx_ctl_read_register(struct avf_hw *hw,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_rx_ctl_reg_read_write *cmd_resp =
+		(struct avf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
+	enum avf_status_code status;
+
+	if (reg_val == NULL)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_rx_ctl_reg_read);
+
+	cmd_resp->address = CPU_TO_LE32(reg_addr);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	if (status == AVF_SUCCESS)
+		*reg_val = LE32_TO_CPU(cmd_resp->value);
+
+	return status;
+}
+
+/**
+ * avf_read_rx_ctl - read from an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ **/
+u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	bool use_register;
+	int retry = 5;
+	u32 val = 0;
+
+	use_register = (((hw->aq.api_maj_ver == 1) &&
+			(hw->aq.api_min_ver < 5)) ||
+			(hw->mac.type == AVF_MAC_X722));
+	if (!use_register) {
+do_retry:
+		status = avf_aq_rx_ctl_read_register(hw, reg_addr, &val, NULL);
+		if (hw->aq.asq_last_status == AVF_AQ_RC_EAGAIN && retry) {
+			avf_msec_delay(1);
+			retry--;
+			goto do_retry;
+		}
+	}
+
+	/* if the AQ access failed, try the old-fashioned way */
+	if (status || use_register)
+		val = rd32(hw, reg_addr);
+
+	return val;
+}
+
+/**
+ * avf_aq_rx_ctl_write_register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Use the firmware to write to an Rx control register,
+ * especially useful if the Rx unit is under heavy pressure
+ **/
+enum avf_status_code avf_aq_rx_ctl_write_register(struct avf_hw *hw,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_rx_ctl_reg_read_write *cmd =
+		(struct avf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_rx_ctl_reg_write);
+
+	cmd->address = CPU_TO_LE32(reg_addr);
+	cmd->value = CPU_TO_LE32(reg_val);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_write_rx_ctl - write to an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: register value
+ **/
+void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	bool use_register;
+	int retry = 5;
+
+	use_register = (((hw->aq.api_maj_ver == 1) &&
+			(hw->aq.api_min_ver < 5)) ||
+			(hw->mac.type == AVF_MAC_X722));
+	if (!use_register) {
+do_retry:
+		status = avf_aq_rx_ctl_write_register(hw, reg_addr,
+						       reg_val, NULL);
+		if (hw->aq.asq_last_status == AVF_AQ_RC_EAGAIN && retry) {
+			avf_msec_delay(1);
+			retry--;
+			goto do_retry;
+		}
+	}
+
+	/* if the AQ access failed, try the old-fashioned way */
+	if (status || use_register)
+		wr32(hw, reg_addr, reg_val);
+}
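
For reference, a hedged usage sketch of the accessor pair above; the register
address and bit value are placeholders, not registers defined by this patch:

static void
example_rx_ctl_set_bit(struct avf_hw *hw, u32 reg_addr, u32 enable_bit)
{
	/* read-modify-write through the FW-mediated accessors when available */
	u32 val = avf_read_rx_ctl(hw, reg_addr);

	avf_write_rx_ctl(hw, reg_addr, val | enable_bit);
}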
+
+/**
+ * avf_aq_set_phy_register
+ * @hw: pointer to the hw struct
+ * @phy_select: select which phy should be accessed
+ * @dev_addr: PHY device address
+ * @reg_addr: PHY register address
+ * @reg_val: new register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Write the external PHY register.
+ **/
+enum avf_status_code avf_aq_set_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_phy_register_access *cmd =
+		(struct avf_aqc_phy_register_access *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_set_phy_register);
+
+	cmd->phy_interface = phy_select;
+	cmd->dev_addres = dev_addr;
+	cmd->reg_address = CPU_TO_LE32(reg_addr);
+	cmd->reg_value = CPU_TO_LE32(reg_val);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_phy_register
+ * @hw: pointer to the hw struct
+ * @phy_select: select which phy should be accessed
+ * @dev_addr: PHY device address
+ * @reg_addr: PHY register address
+ * @reg_val: read register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Read the external PHY register.
+ **/
+enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_phy_register_access *cmd =
+		(struct avf_aqc_phy_register_access *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_get_phy_register);
+
+	cmd->phy_interface = phy_select;
+	cmd->dev_addres = dev_addr;
+	cmd->reg_address = CPU_TO_LE32(reg_addr);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+	if (!status)
+		*reg_val = LE32_TO_CPU(cmd->reg_value);
+
+	return status;
+}
+
+
+/**
+ * avf_aq_send_msg_to_pf
+ * @hw: pointer to the hardware structure
+ * @v_opcode: opcodes for VF-PF communication
+ * @v_retval: return error code
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ * @cmd_details: pointer to command details
+ *
+ * Send message to PF driver using admin queue. By default, this message
+ * is sent asynchronously, i.e. avf_asq_send_command() does not wait for
+ * completion before returning.
+ **/
+enum avf_status_code avf_aq_send_msg_to_pf(struct avf_hw *hw,
+				enum virtchnl_ops v_opcode,
+				enum avf_status_code v_retval,
+				u8 *msg, u16 msglen,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_asq_cmd_details details;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_send_msg_to_pf);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_SI);
+	desc.cookie_high = CPU_TO_LE32(v_opcode);
+	desc.cookie_low = CPU_TO_LE32(v_retval);
+	if (msglen) {
+		desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF
+						| AVF_AQ_FLAG_RD));
+		if (msglen > AVF_AQ_LARGE_BUF)
+			desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+		desc.datalen = CPU_TO_LE16(msglen);
+	}
+	if (!cmd_details) {
+		avf_memset(&details, 0, sizeof(details), AVF_NONDMA_MEM);
+		details.async = true;
+		cmd_details = &details;
+	}
+	status = avf_asq_send_command(hw, (struct avf_aq_desc *)&desc, msg,
+				       msglen, cmd_details);
+	return status;
+}
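
A hedged caller sketch of the asynchronous send described above (the real
virtchnl request handling in this series lives in avf_vchnl.c; the helper
below is illustrative only):

static enum avf_status_code
example_request_vf_resources(struct avf_hw *hw)
{
	u32 caps = 0;	/* optionally OR in VIRTCHNL_VF_OFFLOAD_* flags */

	/* fire-and-forget: the PF reply arrives later as an ARQ event */
	return avf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
				     AVF_SUCCESS, (u8 *)&caps, sizeof(caps),
				     NULL);
}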
+
+/**
+ * avf_parse_hw_config
+ * @hw: pointer to the hardware structure
+ * @msg: pointer to the virtual channel VF resource structure
+ *
+ * Given a VF resource message from the PF, populate the hw struct
+ * with appropriate information.
+ **/
+void avf_parse_hw_config(struct avf_hw *hw,
+			     struct virtchnl_vf_resource *msg)
+{
+	struct virtchnl_vsi_resource *vsi_res;
+	int i;
+
+	vsi_res = &msg->vsi_res[0];
+
+	hw->dev_caps.num_vsis = msg->num_vsis;
+	hw->dev_caps.num_rx_qp = msg->num_queue_pairs;
+	hw->dev_caps.num_tx_qp = msg->num_queue_pairs;
+	hw->dev_caps.num_msix_vectors_vf = msg->max_vectors;
+	hw->dev_caps.dcb = msg->vf_cap_flags &
+			   VIRTCHNL_VF_OFFLOAD_L2;
+	hw->dev_caps.iwarp = (msg->vf_cap_flags &
+			      VIRTCHNL_VF_OFFLOAD_IWARP) ? 1 : 0;
+	for (i = 0; i < msg->num_vsis; i++) {
+		if (vsi_res->vsi_type == VIRTCHNL_VSI_SRIOV) {
+			avf_memcpy(hw->mac.perm_addr,
+				    vsi_res->default_mac_addr,
+				    ETH_ALEN,
+				    AVF_NONDMA_TO_NONDMA);
+			avf_memcpy(hw->mac.addr, vsi_res->default_mac_addr,
+				    ETH_ALEN,
+				    AVF_NONDMA_TO_NONDMA);
+		}
+		vsi_res++;
+	}
+}
+
+/**
+ * avf_reset
+ * @hw: pointer to the hardware structure
+ *
+ * Send a VF_RESET message to the PF. Does not wait for response from PF
+ * as none will be forthcoming. Immediately after calling this function,
+ * the admin queue should be shut down and (optionally) reinitialized.
+ **/
+enum avf_status_code avf_reset(struct avf_hw *hw)
+{
+	return avf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_RESET_VF,
+				      AVF_SUCCESS, NULL, 0, NULL);
+}
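
The sequence the comment above prescribes would look roughly as follows;
avf_shutdown_adminq()/avf_init_adminq() are assumed to be provided by
base/avf_adminq.c, mirroring their i40e counterparts:

static enum avf_status_code
example_vf_reset(struct avf_hw *hw)
{
	enum avf_status_code status = avf_reset(hw);

	if (status)
		return status;

	/* the admin queue must not be used across the reset */
	avf_shutdown_adminq(hw);
	return avf_init_adminq(hw);
}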
+
+/**
+ * avf_aq_set_arp_proxy_config
+ * @hw: pointer to the HW structure
+ * @proxy_config: pointer to proxy config command table struct
+ * @cmd_details: pointer to command details
+ *
+ * Set ARP offload parameters from pre-populated
+ * avf_aqc_arp_proxy_data struct
+ **/
+enum avf_status_code avf_aq_set_arp_proxy_config(struct avf_hw *hw,
+				struct avf_aqc_arp_proxy_data *proxy_config,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	if (!proxy_config)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_set_proxy_config);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+	desc.params.external.addr_high =
+				  CPU_TO_LE32(AVF_HI_DWORD((u64)proxy_config));
+	desc.params.external.addr_low =
+				  CPU_TO_LE32(AVF_LO_DWORD((u64)proxy_config));
+	desc.datalen = CPU_TO_LE16(sizeof(struct avf_aqc_arp_proxy_data));
+
+	status = avf_asq_send_command(hw, &desc, proxy_config,
+				       sizeof(struct avf_aqc_arp_proxy_data),
+				       cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_opc_set_ns_proxy_table_entry
+ * @hw: pointer to the HW structure
+ * @ns_proxy_table_entry: pointer to NS table entry command struct
+ * @cmd_details: pointer to command details
+ *
+ * Set IPv6 Neighbor Solicitation (NS) protocol offload parameters
+ * from pre-populated avf_aqc_ns_proxy_data struct
+ **/
+enum avf_status_code avf_aq_set_ns_proxy_table_entry(struct avf_hw *hw,
+			struct avf_aqc_ns_proxy_data *ns_proxy_table_entry,
+			struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	if (!ns_proxy_table_entry)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+				avf_aqc_opc_set_ns_proxy_table_entry);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+	desc.params.external.addr_high =
+		CPU_TO_LE32(AVF_HI_DWORD((u64)ns_proxy_table_entry));
+	desc.params.external.addr_low =
+		CPU_TO_LE32(AVF_LO_DWORD((u64)ns_proxy_table_entry));
+	desc.datalen = CPU_TO_LE16(sizeof(struct avf_aqc_ns_proxy_data));
+
+	status = avf_asq_send_command(hw, &desc, ns_proxy_table_entry,
+				       sizeof(struct avf_aqc_ns_proxy_data),
+				       cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_set_clear_wol_filter
+ * @hw: pointer to the hw struct
+ * @filter_index: index of filter to modify (0-7)
+ * @filter: buffer containing filter to be set
+ * @set_filter: true to set filter, false to clear filter
+ * @no_wol_tco: if true, pass through packets cannot cause wake-up
+ *		if false, pass through packets may cause wake-up
+ * @filter_valid: true if filter action is valid
+ * @no_wol_tco_valid: true if no WoL in TCO traffic action valid
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Set or clear WoL filter for port attached to the PF
+ **/
+enum avf_status_code avf_aq_set_clear_wol_filter(struct avf_hw *hw,
+				u8 filter_index,
+				struct avf_aqc_set_wol_filter_data *filter,
+				bool set_filter, bool no_wol_tco,
+				bool filter_valid, bool no_wol_tco_valid,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_set_wol_filter *cmd =
+		(struct avf_aqc_set_wol_filter *)&desc.params.raw;
+	enum avf_status_code status;
+	u16 cmd_flags = 0;
+	u16 valid_flags = 0;
+	u16 buff_len = 0;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_set_wol_filter);
+
+	if (filter_index >= AVF_AQC_MAX_NUM_WOL_FILTERS)
+		return  AVF_ERR_PARAM;
+	cmd->filter_index = CPU_TO_LE16(filter_index);
+
+	if (set_filter) {
+		if (!filter)
+			return  AVF_ERR_PARAM;
+
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER;
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR;
+	}
+
+	if (no_wol_tco)
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER_NO_TCO_WOL;
+	cmd->cmd_flags = CPU_TO_LE16(cmd_flags);
+
+	if (filter_valid)
+		valid_flags |= AVF_AQC_SET_WOL_FILTER_ACTION_VALID;
+	if (no_wol_tco_valid)
+		valid_flags |= AVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID;
+	cmd->valid_flags = CPU_TO_LE16(valid_flags);
+
+	buff_len = sizeof(*filter);
+	desc.datalen = CPU_TO_LE16(buff_len);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd->address_high = CPU_TO_LE32(AVF_HI_DWORD((u64)filter));
+	cmd->address_low = CPU_TO_LE32(AVF_LO_DWORD((u64)filter));
+
+	status = avf_asq_send_command(hw, &desc, filter,
+				       buff_len, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_wake_event_reason
+ * @hw: pointer to the hw struct
+ * @wake_reason: return value, index of matching filter
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Get information for the reason of a Wake Up event
+ **/
+enum avf_status_code avf_aq_get_wake_event_reason(struct avf_hw *hw,
+				u16 *wake_reason,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_wake_reason_completion *resp =
+		(struct avf_aqc_get_wake_reason_completion *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_get_wake_reason);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	if (status == AVF_SUCCESS)
+		*wake_reason = LE16_TO_CPU(resp->wake_reason);
+
+	return status;
+}
+
+/**
+ * avf_aq_clear_all_wol_filters
+ * @hw: pointer to the hw struct
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Clear all WoL filters for the port attached to the PF
+ **/
+enum avf_status_code avf_aq_clear_all_wol_filters(struct avf_hw *hw,
+	struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_clear_all_wol_filters);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_write_ddp - Write dynamic device personalization (ddp)
+ * @hw: pointer to the hw struct
+ * @buff: command buffer (size in bytes = buff_size)
+ * @buff_size: buffer size in bytes
+ * @track_id: package tracking id
+ * @error_offset: returns error offset
+ * @error_info: returns error information
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum avf_status_code
+avf_aq_write_ddp(struct avf_hw *hw, void *buff, u16 buff_size, u32 track_id,
+		 u32 *error_offset, u32 *error_info,
+		 struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_write_personalization_profile *cmd =
+		(struct avf_aqc_write_personalization_profile *)
+		&desc.params.raw;
+	struct avf_aqc_write_ddp_resp *resp;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+				  avf_aqc_opc_write_personalization_profile);
+
+	desc.flags |= CPU_TO_LE16(AVF_AQ_FLAG_BUF | AVF_AQ_FLAG_RD);
+	if (buff_size > AVF_AQ_LARGE_BUF)
+		desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+
+	desc.datalen = CPU_TO_LE16(buff_size);
+
+	cmd->profile_track_id = CPU_TO_LE32(track_id);
+
+	status = avf_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
+	if (!status) {
+		resp = (struct avf_aqc_write_ddp_resp *)&desc.params.raw;
+		if (error_offset)
+			*error_offset = LE32_TO_CPU(resp->error_offset);
+		if (error_info)
+			*error_info = LE32_TO_CPU(resp->error_info);
+	}
+
+	return status;
+}
+
+/**
+ * avf_aq_get_ddp_list - Read dynamic device personalization (ddp)
+ * @hw: pointer to the hw struct
+ * @buff: command buffer (size in bytes = buff_size)
+ * @buff_size: buffer size in bytes
+ * @flags: AdminQ command flags
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum avf_status_code
+avf_aq_get_ddp_list(struct avf_hw *hw, void *buff, u16 buff_size, u8 flags,
+		     struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_applied_profiles *cmd =
+		(struct avf_aqc_get_applied_profiles *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+			  avf_aqc_opc_get_personalization_profile_list);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	if (buff_size > AVF_AQ_LARGE_BUF)
+		desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+	desc.datalen = CPU_TO_LE16(buff_size);
+
+	cmd->flags = flags;
+
+	status = avf_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_find_segment_in_package
+ * @segment_type: the segment type to search for (i.e., SEGMENT_TYPE_AVF)
+ * @pkg_hdr: pointer to the package header to be searched
+ *
+ * This function searches a package file for a particular segment type. On
+ * success it returns a pointer to the segment header, otherwise it will
+ * return NULL.
+ **/
+struct avf_generic_seg_header *
+avf_find_segment_in_package(u32 segment_type,
+			     struct avf_package_header *pkg_hdr)
+{
+	struct avf_generic_seg_header *segment;
+	u32 i;
+
+	/* Search all package segments for the requested segment type */
+	for (i = 0; i < pkg_hdr->segment_count; i++) {
+		segment =
+			(struct avf_generic_seg_header *)((u8 *)pkg_hdr +
+			 pkg_hdr->segment_offset[i]);
+
+		if (segment->type == segment_type)
+			return segment;
+	}
+
+	return NULL;
+}
+
+/* Get section table in profile */
+#define AVF_SECTION_TABLE(profile, sec_tbl)				\
+	do {								\
+		struct avf_profile_segment *p = (profile);		\
+		u32 count;						\
+		u32 *nvm;						\
+		count = p->device_table_count;				\
+		nvm = (u32 *)&p->device_table[count];			\
+		sec_tbl = (struct avf_section_table *)&nvm[nvm[0] + 1]; \
+	} while (0)
+
+/* Get section header in profile */
+#define AVF_SECTION_HEADER(profile, offset)				\
+	(struct avf_profile_section_header *)((u8 *)(profile) + (offset))
+
+/**
+ * avf_find_section_in_profile
+ * @section_type: the section type to search for (i.e., SECTION_TYPE_NOTE)
+ * @profile: pointer to the avf segment header to be searched
+ *
+ * This function searches avf segment for a particular section type. On
+ * success it returns a pointer to the section header, otherwise it will
+ * return NULL.
+ **/
+struct avf_profile_section_header *
+avf_find_section_in_profile(u32 section_type,
+			     struct avf_profile_segment *profile)
+{
+	struct avf_profile_section_header *sec;
+	struct avf_section_table *sec_tbl;
+	u32 sec_off;
+	u32 i;
+
+	if (profile->header.type != SEGMENT_TYPE_AVF)
+		return NULL;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		if (sec->section.type == section_type)
+			return sec;
+	}
+
+	return NULL;
+}
+
+/**
+ * avf_ddp_exec_aq_section - Execute generic AQ for DDP
+ * @hw: pointer to the hw struct
+ * @aq: command buffer containing all data to execute AQ
+ **/
+STATIC enum avf_status_code
+avf_ddp_exec_aq_section(struct avf_hw *hw, struct avf_profile_aq_section *aq)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	u8 *msg = NULL;
+	u16 msglen;
+
+	avf_fill_default_direct_cmd_desc(&desc, aq->opcode);
+	desc.flags |= CPU_TO_LE16(aq->flags);
+	avf_memcpy(desc.params.raw, aq->param, sizeof(desc.params.raw),
+		    AVF_NONDMA_TO_NONDMA);
+
+	msglen = aq->datalen;
+	if (msglen) {
+		desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF |
+						AVF_AQ_FLAG_RD));
+		if (msglen > AVF_AQ_LARGE_BUF)
+			desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+		desc.datalen = CPU_TO_LE16(msglen);
+		msg = &aq->data[0];
+	}
+
+	status = avf_asq_send_command(hw, &desc, msg, msglen, NULL);
+
+	if (status != AVF_SUCCESS) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE,
+			   "unable to exec DDP AQ opcode %u, error %d\n",
+			   aq->opcode, status);
+		return status;
+	}
+
+	/* copy returned desc to aq_buf */
+	avf_memcpy(aq->param, desc.params.raw, sizeof(desc.params.raw),
+		    AVF_NONDMA_TO_NONDMA);
+
+	return AVF_SUCCESS;
+}
+
+/**
+ * avf_validate_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be validated
+ * @track_id: package tracking id
+ * @rollback: true if the profile is being validated for rollback
+ *
+ * Validates supported devices and profile's sections.
+ */
+STATIC enum avf_status_code
+avf_validate_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		      u32 track_id, bool rollback)
+{
+	struct avf_profile_section_header *sec = NULL;
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	u32 vendor_dev_id;
+	u32 dev_cnt;
+	u32 sec_off;
+	u32 i;
+
+	if (track_id == AVF_DDP_TRACKID_INVALID) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE, "Invalid track_id\n");
+		return AVF_NOT_SUPPORTED;
+	}
+
+	dev_cnt = profile->device_table_count;
+	for (i = 0; i < dev_cnt; i++) {
+		vendor_dev_id = profile->device_table[i].vendor_dev_id;
+		if ((vendor_dev_id >> 16) == AVF_INTEL_VENDOR_ID &&
+		    hw->device_id == (vendor_dev_id & 0xFFFF))
+			break;
+	}
+	if (dev_cnt && (i == dev_cnt)) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE,
+			   "Device doesn't support DDP\n");
+		return AVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	/* Validate sections types */
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		if (rollback) {
+			if (sec->section.type == SECTION_TYPE_MMIO ||
+			    sec->section.type == SECTION_TYPE_AQ ||
+			    sec->section.type == SECTION_TYPE_RB_AQ) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Not a roll-back package\n");
+				return AVF_NOT_SUPPORTED;
+			}
+		} else {
+			if (sec->section.type == SECTION_TYPE_RB_AQ ||
+			    sec->section.type == SECTION_TYPE_RB_MMIO) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Not an original package\n");
+				return AVF_NOT_SUPPORTED;
+			}
+		}
+	}
+
+	return status;
+}
+
+/**
+ * avf_write_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be downloaded
+ * @track_id: package tracking id
+ *
+ * Handles the download of a complete package.
+ */
+enum avf_status_code
+avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		   u32 track_id)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	struct avf_profile_section_header *sec = NULL;
+	struct avf_profile_aq_section *ddp_aq;
+	u32 section_size = 0;
+	u32 offset = 0, info = 0;
+	u32 sec_off;
+	u32 i;
+
+	status = avf_validate_profile(hw, profile, track_id, false);
+	if (status)
+		return status;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		/* Process generic admin command */
+		if (sec->section.type == SECTION_TYPE_AQ) {
+			ddp_aq = (struct avf_profile_aq_section *)&sec[1];
+			status = avf_ddp_exec_aq_section(hw, ddp_aq);
+			if (status) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Failed to execute aq: section %d, opcode %u\n",
+					   i, ddp_aq->opcode);
+				break;
+			}
+			sec->section.type = SECTION_TYPE_RB_AQ;
+		}
+
+		/* Skip any non-mmio sections */
+		if (sec->section.type != SECTION_TYPE_MMIO)
+			continue;
+
+		section_size = sec->section.size +
+			sizeof(struct avf_profile_section_header);
+
+		/* Write MMIO section */
+		status = avf_aq_write_ddp(hw, (void *)sec, (u16)section_size,
+					   track_id, &offset, &info, NULL);
+		if (status) {
+			avf_debug(hw, AVF_DEBUG_PACKAGE,
+				   "Failed to write profile: section %d, offset %d, info %d\n",
+				   i, offset, info);
+			break;
+		}
+	}
+	return status;
+}
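
Tying the DDP helpers together, a hedged sketch (not part of the patch) of a
caller that locates the AVF segment in a package image already read into
memory and downloads it:

static enum avf_status_code
example_load_ddp_package(struct avf_hw *hw, struct avf_package_header *pkg,
			  u32 track_id)
{
	struct avf_profile_segment *profile;

	profile = (struct avf_profile_segment *)
		avf_find_segment_in_package(SEGMENT_TYPE_AVF, pkg);
	if (!profile)
		return AVF_NOT_SUPPORTED;	/* no AVF segment in package */

	return avf_write_profile(hw, profile, track_id);
}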
+
+/**
+ * avf_rollback_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be removed
+ * @track_id: package tracking id
+ *
+ * Rolls back previously loaded package.
+ */
+enum avf_status_code
+avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		      u32 track_id)
+{
+	struct avf_profile_section_header *sec = NULL;
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	u32 offset = 0, info = 0;
+	u32 section_size = 0;
+	u32 sec_off;
+	int i;
+
+	status = avf_validate_profile(hw, profile, track_id, true);
+	if (status)
+		return status;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	/* For rollback write sections in reverse */
+	for (i = sec_tbl->section_count - 1; i >= 0; i--) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+
+		/* Skip any non-rollback sections */
+		if (sec->section.type != SECTION_TYPE_RB_MMIO)
+			continue;
+
+		section_size = sec->section.size +
+			sizeof(struct avf_profile_section_header);
+
+		/* Write roll-back MMIO section */
+		status = avf_aq_write_ddp(hw, (void *)sec, (u16)section_size,
+					   track_id, &offset, &info, NULL);
+		if (status) {
+			avf_debug(hw, AVF_DEBUG_PACKAGE,
+				   "Failed to write profile: section %d, offset %d, info %d\n",
+				   i, offset, info);
+			break;
+		}
+	}
+	return status;
+}
+
+/**
+ * avf_add_pinfo_to_list
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package
+ * @profile_info_sec: buffer for information section
+ * @track_id: package tracking id
+ *
+ * Register a profile to the list of loaded profiles.
+ */
+enum avf_status_code
+avf_add_pinfo_to_list(struct avf_hw *hw,
+		       struct avf_profile_segment *profile,
+		       u8 *profile_info_sec, u32 track_id)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_profile_section_header *sec = NULL;
+	struct avf_profile_info *pinfo;
+	u32 offset = 0, info = 0;
+
+	sec = (struct avf_profile_section_header *)profile_info_sec;
+	sec->tbl_size = 1;
+	sec->data_end = sizeof(struct avf_profile_section_header) +
+			sizeof(struct avf_profile_info);
+	sec->section.type = SECTION_TYPE_INFO;
+	sec->section.offset = sizeof(struct avf_profile_section_header);
+	sec->section.size = sizeof(struct avf_profile_info);
+	pinfo = (struct avf_profile_info *)(profile_info_sec +
+					     sec->section.offset);
+	pinfo->track_id = track_id;
+	pinfo->version = profile->version;
+	pinfo->op = AVF_DDP_ADD_TRACKID;
+	avf_memcpy(pinfo->name, profile->name, AVF_DDP_NAME_SIZE,
+		    AVF_NONDMA_TO_NONDMA);
+
+	status = avf_aq_write_ddp(hw, (void *)sec, sec->data_end,
+				   track_id, &offset, &info, NULL);
+	return status;
+}
diff --git a/drivers/net/avf/base/avf_devids.h b/drivers/net/avf/base/avf_devids.h
new file mode 100644
index 0000000..7d9fed2
--- /dev/null
+++ b/drivers/net/avf/base/avf_devids.h
@@ -0,0 +1,43 @@
+/*******************************************************************************
+
+Copyright (c) 2017, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_DEVIDS_H_
+#define _AVF_DEVIDS_H_
+
+/* Vendor ID */
+#define AVF_INTEL_VENDOR_ID		0x8086
+
+/* Device IDs */
+#define AVF_DEV_ID_ADAPTIVE_VF		0x1889
+
+#endif /* _AVF_DEVIDS_H_ */
diff --git a/drivers/net/avf/base/avf_hmc.h b/drivers/net/avf/base/avf_hmc.h
new file mode 100644
index 0000000..b9b7b5b
--- /dev/null
+++ b/drivers/net/avf/base/avf_hmc.h
@@ -0,0 +1,245 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_HMC_H_
+#define _AVF_HMC_H_
+
+#define AVF_HMC_MAX_BP_COUNT 512
+
+/* forward-declare the HW struct for the compiler */
+struct avf_hw;
+
+#define AVF_HMC_INFO_SIGNATURE		0x484D5347 /* HMSG */
+#define AVF_HMC_PD_CNT_IN_SD		512
+#define AVF_HMC_DIRECT_BP_SIZE		0x200000 /* 2M */
+#define AVF_HMC_PAGED_BP_SIZE		4096
+#define AVF_HMC_PD_BP_BUF_ALIGNMENT	4096
+#define AVF_FIRST_VF_FPM_ID		16
+
+struct avf_hmc_obj_info {
+	u64 base;	/* base addr in FPM */
+	u32 max_cnt;	/* max count available for this hmc func */
+	u32 cnt;	/* count of objects driver actually wants to create */
+	u64 size;	/* size in bytes of one object */
+};
+
+enum avf_sd_entry_type {
+	AVF_SD_TYPE_INVALID = 0,
+	AVF_SD_TYPE_PAGED   = 1,
+	AVF_SD_TYPE_DIRECT  = 2
+};
+
+struct avf_hmc_bp {
+	enum avf_sd_entry_type entry_type;
+	struct avf_dma_mem addr; /* populate to be used by hw */
+	u32 sd_pd_index;
+	u32 ref_cnt;
+};
+
+struct avf_hmc_pd_entry {
+	struct avf_hmc_bp bp;
+	u32 sd_index;
+	bool rsrc_pg;
+	bool valid;
+};
+
+struct avf_hmc_pd_table {
+	struct avf_dma_mem pd_page_addr; /* populate to be used by hw */
+	struct avf_hmc_pd_entry  *pd_entry; /* [512] for sw book keeping */
+	struct avf_virt_mem pd_entry_virt_mem; /* virt mem for pd_entry */
+
+	u32 ref_cnt;
+	u32 sd_index;
+};
+
+struct avf_hmc_sd_entry {
+	enum avf_sd_entry_type entry_type;
+	bool valid;
+
+	union {
+		struct avf_hmc_pd_table pd_table;
+		struct avf_hmc_bp bp;
+	} u;
+};
+
+struct avf_hmc_sd_table {
+	struct avf_virt_mem addr; /* used to track sd_entry allocations */
+	u32 sd_cnt;
+	u32 ref_cnt;
+	struct avf_hmc_sd_entry *sd_entry; /* (sd_cnt*512) entries max */
+};
+
+struct avf_hmc_info {
+	u32 signature;
+	/* equals to pci func num for PF and dynamically allocated for VFs */
+	u8 hmc_fn_id;
+	u16 first_sd_index; /* index of the first available SD */
+
+	/* hmc objects */
+	struct avf_hmc_obj_info *hmc_obj;
+	struct avf_virt_mem hmc_obj_virt_mem;
+	struct avf_hmc_sd_table sd_table;
+};
+
+#define AVF_INC_SD_REFCNT(sd_table)	((sd_table)->ref_cnt++)
+#define AVF_INC_PD_REFCNT(pd_table)	((pd_table)->ref_cnt++)
+#define AVF_INC_BP_REFCNT(bp)		((bp)->ref_cnt++)
+
+#define AVF_DEC_SD_REFCNT(sd_table)	((sd_table)->ref_cnt--)
+#define AVF_DEC_PD_REFCNT(pd_table)	((pd_table)->ref_cnt--)
+#define AVF_DEC_BP_REFCNT(bp)		((bp)->ref_cnt--)
+
+/**
+ * AVF_SET_PF_SD_ENTRY - marks the sd entry as valid in the hardware
+ * @hw: pointer to our hw struct
+ * @pa: pointer to physical address
+ * @sd_index: segment descriptor index
+ * @type: if sd entry is direct or paged
+ **/
+#define AVF_SET_PF_SD_ENTRY(hw, pa, sd_index, type)			\
+{									\
+	u32 val1, val2, val3;						\
+	val1 = (u32)(AVF_HI_DWORD(pa));				\
+	val2 = (u32)(pa) | (AVF_HMC_MAX_BP_COUNT <<			\
+		 AVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) |		\
+		((((type) == AVF_SD_TYPE_PAGED) ? 0 : 1) <<		\
+		AVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT) |			\
+		BIT(AVF_PFHMC_SDDATALOW_PMSDVALID_SHIFT);		\
+	val3 = (sd_index) | BIT_ULL(AVF_PFHMC_SDCMD_PMSDWR_SHIFT);	\
+	wr32((hw), AVF_PFHMC_SDDATAHIGH, val1);			\
+	wr32((hw), AVF_PFHMC_SDDATALOW, val2);				\
+	wr32((hw), AVF_PFHMC_SDCMD, val3);				\
+}
+
+/**
+ * AVF_CLEAR_PF_SD_ENTRY - marks the sd entry as invalid in the hardware
+ * @hw: pointer to our hw struct
+ * @sd_index: segment descriptor index
+ * @type: if sd entry is direct or paged
+ **/
+#define AVF_CLEAR_PF_SD_ENTRY(hw, sd_index, type)			\
+{									\
+	u32 val2, val3;							\
+	val2 = (AVF_HMC_MAX_BP_COUNT <<				\
+		AVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) |		\
+		((((type) == AVF_SD_TYPE_PAGED) ? 0 : 1) <<		\
+		AVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT);			\
+	val3 = (sd_index) | BIT_ULL(AVF_PFHMC_SDCMD_PMSDWR_SHIFT);	\
+	wr32((hw), AVF_PFHMC_SDDATAHIGH, 0);				\
+	wr32((hw), AVF_PFHMC_SDDATALOW, val2);				\
+	wr32((hw), AVF_PFHMC_SDCMD, val3);				\
+}
+
+/**
+ * AVF_INVALIDATE_PF_HMC_PD - Invalidates the pd cache in the hardware
+ * @hw: pointer to our hw struct
+ * @sd_idx: segment descriptor index
+ * @pd_idx: page descriptor index
+ **/
+#define AVF_INVALIDATE_PF_HMC_PD(hw, sd_idx, pd_idx)			\
+	wr32((hw), AVF_PFHMC_PDINV,					\
+	    (((sd_idx) << AVF_PFHMC_PDINV_PMSDIDX_SHIFT) |		\
+	     ((pd_idx) << AVF_PFHMC_PDINV_PMPDIDX_SHIFT)))
+
+/**
+ * AVF_FIND_SD_INDEX_LIMIT - finds segment descriptor index limit
+ * @hmc_info: pointer to the HMC configuration information structure
+ * @type: type of HMC resources we're searching
+ * @index: starting index for the object
+ * @cnt: number of objects we're trying to create
+ * @sd_idx: pointer to return index of the segment descriptor in question
+ * @sd_limit: pointer to return the maximum number of segment descriptors
+ *
+ * This function calculates the segment descriptor index and index limit
+ * for the resource defined by avf_hmc_rsrc_type.
+ **/
+#define AVF_FIND_SD_INDEX_LIMIT(hmc_info, type, index, cnt, sd_idx, sd_limit)\
+{									\
+	u64 fpm_addr, fpm_limit;					\
+	fpm_addr = (hmc_info)->hmc_obj[(type)].base +			\
+		   (hmc_info)->hmc_obj[(type)].size * (index);		\
+	fpm_limit = fpm_addr + (hmc_info)->hmc_obj[(type)].size * (cnt);\
+	*(sd_idx) = (u32)(fpm_addr / AVF_HMC_DIRECT_BP_SIZE);		\
+	*(sd_limit) = (u32)((fpm_limit - 1) / AVF_HMC_DIRECT_BP_SIZE);	\
+	/* add one more to the limit to correct our range */		\
+	*(sd_limit) += 1;						\
+}
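
A worked example of the arithmetic, with illustrative values:

/* For objects of size 0x1000 based at FPM address 0x1F0000, a request for
 * 32 objects starting at index 0 gives:
 *   fpm_addr  = 0x1F0000
 *   fpm_limit = 0x1F0000 + 0x1000 * 32        = 0x210000
 *   sd_idx    = 0x1F0000 / 0x200000           = 0
 *   sd_limit  = (0x210000 - 1) / 0x200000 + 1 = 2
 * so the range crosses one AVF_HMC_DIRECT_BP_SIZE (2MB) boundary and touches
 * segment descriptors 0 and 1.
 */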
+
+/**
+ * AVF_FIND_PD_INDEX_LIMIT - finds page descriptor index limit
+ * @hmc_info: pointer to the HMC configuration information struct
+ * @type: HMC resource type we're examining
+ * @idx: starting index for the object
+ * @cnt: number of objects we're trying to create
+ * @pd_index: pointer to return page descriptor index
+ * @pd_limit: pointer to return page descriptor index limit
+ *
+ * Calculates the page descriptor index and index limit for the resource
+ * defined by avf_hmc_rsrc_type.
+ **/
+#define AVF_FIND_PD_INDEX_LIMIT(hmc_info, type, idx, cnt, pd_index, pd_limit)\
+{									\
+	u64 fpm_adr, fpm_limit;						\
+	fpm_adr = (hmc_info)->hmc_obj[(type)].base +			\
+		  (hmc_info)->hmc_obj[(type)].size * (idx);		\
+	fpm_limit = fpm_adr + (hmc_info)->hmc_obj[(type)].size * (cnt);	\
+	*(pd_index) = (u32)(fpm_adr / AVF_HMC_PAGED_BP_SIZE);		\
+	*(pd_limit) = (u32)((fpm_limit - 1) / AVF_HMC_PAGED_BP_SIZE);	\
+	/* add one more to the limit to correct our range */		\
+	*(pd_limit) += 1;						\
+}
+
+enum avf_status_code avf_add_sd_table_entry(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 sd_index,
+					      enum avf_sd_entry_type type,
+					      u64 direct_mode_sz);
+
+enum avf_status_code avf_add_pd_table_entry(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 pd_index,
+					      struct avf_dma_mem *rsrc_pg);
+enum avf_status_code avf_remove_pd_bp(struct avf_hw *hw,
+					struct avf_hmc_info *hmc_info,
+					u32 idx);
+enum avf_status_code avf_prep_remove_sd_bp(struct avf_hmc_info *hmc_info,
+					     u32 idx);
+enum avf_status_code avf_remove_sd_bp_new(struct avf_hw *hw,
+					    struct avf_hmc_info *hmc_info,
+					    u32 idx, bool is_pf);
+enum avf_status_code avf_prep_remove_pd_page(struct avf_hmc_info *hmc_info,
+					       u32 idx);
+enum avf_status_code avf_remove_pd_page_new(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 idx, bool is_pf);
+
+#endif /* _AVF_HMC_H_ */
diff --git a/drivers/net/avf/base/avf_lan_hmc.h b/drivers/net/avf/base/avf_lan_hmc.h
new file mode 100644
index 0000000..48805d8
--- /dev/null
+++ b/drivers/net/avf/base/avf_lan_hmc.h
@@ -0,0 +1,200 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_LAN_HMC_H_
+#define _AVF_LAN_HMC_H_
+
+/* forward-declare the HW struct for the compiler */
+struct avf_hw;
+
+/* HMC element context information */
+
+/* Rx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct avf_hmc_obj_rxq {
+	u16 head;
+	u16 cpuid; /* bigger than needed, see above for reason */
+	u64 base;
+	u16 qlen;
+#define AVF_RXQ_CTX_DBUFF_SHIFT 7
+	u16 dbuff; /* bigger than needed, see above for reason */
+#define AVF_RXQ_CTX_HBUFF_SHIFT 6
+	u16 hbuff; /* bigger than needed, see above for reason */
+	u8  dtype;
+	u8  dsize;
+	u8  crcstrip;
+	u8  fc_ena;
+	u8  l2tsel;
+	u8  hsplit_0;
+	u8  hsplit_1;
+	u8  showiv;
+	u32 rxmax; /* bigger than needed, see above for reason */
+	u8  tphrdesc_ena;
+	u8  tphwdesc_ena;
+	u8  tphdata_ena;
+	u8  tphhead_ena;
+	u16 lrxqthresh; /* bigger than needed, see above for reason */
+	u8  prefena;	/* NOTE: normally must be set to 1 at init */
+};
+
+/* Tx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct avf_hmc_obj_txq {
+	u16 head;
+	u8  new_context;
+	u64 base;
+	u8  fc_ena;
+	u8  timesync_ena;
+	u8  fd_ena;
+	u8  alt_vlan_ena;
+	u16 thead_wb;
+	u8  cpuid;
+	u8  head_wb_ena;
+	u16 qlen;
+	u8  tphrdesc_ena;
+	u8  tphrpacket_ena;
+	u8  tphwdesc_ena;
+	u64 head_wb_addr;
+	u32 crc;
+	u16 rdylist;
+	u8  rdylist_act;
+};
+
+/* for hsplit_0 field of Rx HMC context */
+enum avf_hmc_obj_rx_hsplit_0 {
+	AVF_HMC_OBJ_RX_HSPLIT_0_NO_SPLIT      = 0,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_L2      = 1,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_IP      = 2,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_TCP_UDP = 4,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_SCTP    = 8,
+};
+
+/* fcoe_cntx and fcoe_filt are for debugging purposes only */
+struct avf_hmc_obj_fcoe_cntx {
+	u32 rsv[32];
+};
+
+struct avf_hmc_obj_fcoe_filt {
+	u32 rsv[8];
+};
+
+/* Context sizes for LAN objects */
+enum avf_hmc_lan_object_size {
+	AVF_HMC_LAN_OBJ_SZ_8   = 0x3,
+	AVF_HMC_LAN_OBJ_SZ_16  = 0x4,
+	AVF_HMC_LAN_OBJ_SZ_32  = 0x5,
+	AVF_HMC_LAN_OBJ_SZ_64  = 0x6,
+	AVF_HMC_LAN_OBJ_SZ_128 = 0x7,
+	AVF_HMC_LAN_OBJ_SZ_256 = 0x8,
+	AVF_HMC_LAN_OBJ_SZ_512 = 0x9,
+};
+
+#define AVF_HMC_L2OBJ_BASE_ALIGNMENT 512
+#define AVF_HMC_OBJ_SIZE_TXQ         128
+#define AVF_HMC_OBJ_SIZE_RXQ         32
+#define AVF_HMC_OBJ_SIZE_FCOE_CNTX   64
+#define AVF_HMC_OBJ_SIZE_FCOE_FILT   64
+
+enum avf_hmc_lan_rsrc_type {
+	AVF_HMC_LAN_FULL  = 0,
+	AVF_HMC_LAN_TX    = 1,
+	AVF_HMC_LAN_RX    = 2,
+	AVF_HMC_FCOE_CTX  = 3,
+	AVF_HMC_FCOE_FILT = 4,
+	AVF_HMC_LAN_MAX   = 5
+};
+
+enum avf_hmc_model {
+	AVF_HMC_MODEL_DIRECT_PREFERRED = 0,
+	AVF_HMC_MODEL_DIRECT_ONLY      = 1,
+	AVF_HMC_MODEL_PAGED_ONLY       = 2,
+	AVF_HMC_MODEL_UNKNOWN,
+};
+
+struct avf_hmc_lan_create_obj_info {
+	struct avf_hmc_info *hmc_info;
+	u32 rsrc_type;
+	u32 start_idx;
+	u32 count;
+	enum avf_sd_entry_type entry_type;
+	u64 direct_mode_sz;
+};
+
+struct avf_hmc_lan_delete_obj_info {
+	struct avf_hmc_info *hmc_info;
+	u32 rsrc_type;
+	u32 start_idx;
+	u32 count;
+};
+
+enum avf_status_code avf_init_lan_hmc(struct avf_hw *hw, u32 txq_num,
+					u32 rxq_num, u32 fcoe_cntx_num,
+					u32 fcoe_filt_num);
+enum avf_status_code avf_configure_lan_hmc(struct avf_hw *hw,
+					     enum avf_hmc_model model);
+enum avf_status_code avf_shutdown_lan_hmc(struct avf_hw *hw);
+
+u64 avf_calculate_l2fpm_size(u32 txq_num, u32 rxq_num,
+			      u32 fcoe_cntx_num, u32 fcoe_filt_num);
+enum avf_status_code avf_get_lan_tx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_txq *s);
+enum avf_status_code avf_clear_lan_tx_queue_context(struct avf_hw *hw,
+						      u16 queue);
+enum avf_status_code avf_set_lan_tx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_txq *s);
+enum avf_status_code avf_get_lan_rx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_rxq *s);
+enum avf_status_code avf_clear_lan_rx_queue_context(struct avf_hw *hw,
+						      u16 queue);
+enum avf_status_code avf_set_lan_rx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_rxq *s);
+enum avf_status_code avf_create_lan_hmc_object(struct avf_hw *hw,
+				struct avf_hmc_lan_create_obj_info *info);
+enum avf_status_code avf_delete_lan_hmc_object(struct avf_hw *hw,
+				struct avf_hmc_lan_delete_obj_info *info);
+
+#endif /* _AVF_LAN_HMC_H_ */
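The LAN HMC API above follows a fixed bring-up order: size the HMC backing store for the requested queue counts, program the context model, and shut the HMC down on teardown. A minimal sketch of that ordering, assuming a PF-side caller of the shared base code (an adaptive VF normally does not program the HMC itself; the prototypes are carried here for base-code completeness):

static enum avf_status_code
example_lan_hmc_bringup(struct avf_hw *hw, u32 num_txq, u32 num_rxq)
{
	enum avf_status_code err;

	/* Size the HMC for the requested queue counts
	 * (no FCoE contexts or filters in this sketch).
	 */
	err = avf_init_lan_hmc(hw, num_txq, num_rxq, 0, 0);
	if (err != AVF_SUCCESS)
		return err;

	/* Program the function's HMC context model; roll back on failure. */
	err = avf_configure_lan_hmc(hw, AVF_HMC_MODEL_DIRECT_ONLY);
	if (err != AVF_SUCCESS)
		avf_shutdown_lan_hmc(hw);

	return err;
}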
diff --git a/drivers/net/avf/base/avf_osdep.h b/drivers/net/avf/base/avf_osdep.h
new file mode 100644
index 0000000..2f46bb2
--- /dev/null
+++ b/drivers/net/avf/base/avf_osdep.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_OSDEP_H_
+#define _AVF_OSDEP_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdarg.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_io.h>
+
+#include "../avf_log.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t         u8;
+typedef int8_t          s8;
+typedef uint16_t        u16;
+typedef uint32_t        u32;
+typedef int32_t         s32;
+typedef uint64_t        u64;
+
+#define __iomem
+#define hw_dbg(hw, S, A...) do {} while (0)
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN                  6
+#endif
+
+#ifndef __le16
+#define __le16          uint16_t
+#endif
+#ifndef __le32
+#define __le32          uint32_t
+#endif
+#ifndef __le64
+#define __le64          uint64_t
+#endif
+#ifndef __be16
+#define __be16          uint16_t
+#endif
+#ifndef __be32
+#define __be32          uint32_t
+#endif
+#ifndef __be64
+#define __be64          uint64_t
+#endif
+
+#define FALSE           0
+#define TRUE            1
+#define false           0
+#define true            1
+
+#define min(a,b) RTE_MIN(a,b)
+#define max(a,b) RTE_MAX(a,b)
+
+#define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))
+#define ASSERT(x) do { if (!(x)) rte_panic("AVF: %s", #x); } while (0)
+
+#define DEBUGOUT(S)             PMD_DRV_LOG_RAW(DEBUG, S)
+#define DEBUGOUT2(S, A...)      PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F)            DEBUGOUT(F "\n")
+
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define cpu_to_le16(o) rte_cpu_to_le_16(o)
+#define cpu_to_le32(s) rte_cpu_to_le_32(s)
+#define cpu_to_le64(h) rte_cpu_to_le_64(h)
+#define le16_to_cpu(a) rte_le_to_cpu_16(a)
+#define le32_to_cpu(c) rte_le_to_cpu_32(c)
+#define le64_to_cpu(k) rte_le_to_cpu_64(k)
+
+#define avf_memset(a, b, c, d) memset((a), (b), (c))
+#define avf_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
+
+#define avf_usec_delay(x) rte_delay_us(x)
+#define avf_msec_delay(x) rte_delay_us(1000*(x))
+
+#define AVF_PCI_REG(reg)		rte_read32(reg)
+#define AVF_PCI_REG_ADDR(a, reg) \
+	((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+
+#define AVF_PCI_REG_WRITE(reg, value)		\
+	rte_write32((rte_cpu_to_le_32(value)), reg)
+#define AVF_PCI_REG_WRITE_RELAXED(reg, value)	\
+	rte_write32_relaxed((rte_cpu_to_le_32(value)), reg)
+static inline
+uint32_t avf_read_addr(volatile void *addr)
+{
+	return rte_le_to_cpu_32(AVF_PCI_REG(addr));
+}
+
+#define AVF_READ_REG(hw, reg) \
+	avf_read_addr(AVF_PCI_REG_ADDR((hw), (reg)))
+#define AVF_WRITE_REG(hw, reg, value) \
+	AVF_PCI_REG_WRITE(AVF_PCI_REG_ADDR((hw), (reg)), (value))
+#define AVF_WRITE_FLUSH(a) \
+	AVF_READ_REG(a, AVFGEN_RSTAT)
+
+#define rd32(a, reg) avf_read_addr(AVF_PCI_REG_ADDR((a), (reg)))
+#define wr32(a, reg, value) \
+	AVF_PCI_REG_WRITE(AVF_PCI_REG_ADDR((a), (reg)), (value))
+
+#define ARRAY_SIZE(arr) (sizeof(arr)/sizeof(arr[0]))
+
+#define avf_debug(h, m, s, ...)                                \
+do {                                                            \
+	if (((m) & (h)->debug_mask))                            \
+		PMD_DRV_LOG_RAW(DEBUG, "avf %02x.%x " s,       \
+			(h)->bus.device, (h)->bus.func,         \
+					##__VA_ARGS__);         \
+} while (0)
+
+/* memory allocation tracking */
+struct avf_dma_mem {
+	void *va;
+	u64 pa;
+	u32 size;
+	const void *zone;
+} __attribute__((packed));
+
+struct avf_virt_mem {
+	void *va;
+	u32 size;
+} __attribute__((packed));
+
+/* SW spinlock */
+struct avf_spinlock {
+	rte_spinlock_t spinlock;
+};
+
+#define avf_allocate_dma_mem(h, m, unused, s, a) \
+			avf_allocate_dma_mem_d(h, m, s, a)
+#define avf_free_dma_mem(h, m) avf_free_dma_mem_d(h, m)
+
+#define avf_allocate_virt_mem(h, m, s) avf_allocate_virt_mem_d(h, m, s)
+#define avf_free_virt_mem(h, m) avf_free_virt_mem_d(h, m)
+
+#define avf_init_spinlock(_sp) avf_init_spinlock_d(_sp)
+#define avf_acquire_spinlock(_sp) avf_acquire_spinlock_d(_sp)
+#define avf_release_spinlock(_sp) avf_release_spinlock_d(_sp)
+#define avf_destroy_spinlock(_sp) avf_destroy_spinlock_d(_sp)
+
+#endif /* _AVF_OSDEP_H_ */
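The osdep layer above is what lets the shared base code run unmodified on DPDK: rd32()/wr32() and AVF_READ_REG()/AVF_WRITE_REG() resolve to rte_read32()/rte_write32() on hw->hw_addr plus a register offset, with little-endian conversion applied on the way in and out. A minimal sketch of how the helpers compose, using a hypothetical caller that bumps an RX queue tail (the register name comes from avf_register.h later in this patch):

static void example_bump_rx_tail(struct avf_hw *hw, u16 queue_id, u16 tail)
{
	/* Expands to rte_write32(rte_cpu_to_le_32(tail),
	 * (volatile uint32_t *)(hw->hw_addr + AVF_QRX_TAIL1(queue_id))).
	 */
	wr32(hw, AVF_QRX_TAIL1(queue_id), tail);

	/* Posted-write flush: AVF_WRITE_FLUSH() reads AVFGEN_RSTAT back. */
	AVF_WRITE_FLUSH(hw);
}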
diff --git a/drivers/net/avf/base/avf_prototype.h b/drivers/net/avf/base/avf_prototype.h
new file mode 100644
index 0000000..de031dc
--- /dev/null
+++ b/drivers/net/avf/base/avf_prototype.h
@@ -0,0 +1,206 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_PROTOTYPE_H_
+#define _AVF_PROTOTYPE_H_
+
+#include "avf_type.h"
+#include "avf_alloc.h"
+#include "virtchnl.h"
+
+/* Prototypes for shared code functions that are not in
+ * the standard function pointer structures.  These live
+ * here mostly because they are needed even before init
+ * has happened, and they assist in the early SW and FW
+ * setup.
+ */
+
+/* adminq functions */
+enum avf_status_code avf_init_adminq(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_adminq(struct avf_hw *hw);
+enum avf_status_code avf_init_asq(struct avf_hw *hw);
+enum avf_status_code avf_init_arq(struct avf_hw *hw);
+enum avf_status_code avf_alloc_adminq_asq_ring(struct avf_hw *hw);
+enum avf_status_code avf_alloc_adminq_arq_ring(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_asq(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_arq(struct avf_hw *hw);
+u16 avf_clean_asq(struct avf_hw *hw);
+void avf_free_adminq_asq(struct avf_hw *hw);
+void avf_free_adminq_arq(struct avf_hw *hw);
+enum avf_status_code avf_validate_mac_addr(u8 *mac_addr);
+void avf_adminq_init_ring_data(struct avf_hw *hw);
+enum avf_status_code avf_clean_arq_element(struct avf_hw *hw,
+					     struct avf_arq_event_info *e,
+					     u16 *events_pending);
+enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
+				struct avf_aq_desc *desc,
+				void *buff, /* can be NULL */
+				u16  buff_size,
+				struct avf_asq_cmd_details *cmd_details);
+bool avf_asq_done(struct avf_hw *hw);
+
+/* debug function for adminq */
+void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask,
+		   void *desc, void *buffer, u16 buf_len);
+
+void avf_idle_aq(struct avf_hw *hw);
+bool avf_check_asq_alive(struct avf_hw *hw);
+enum avf_status_code avf_aq_queue_shutdown(struct avf_hw *hw, bool unloading);
+
+enum avf_status_code avf_aq_get_rss_lut(struct avf_hw *hw, u16 seid,
+					  bool pf_lut, u8 *lut, u16 lut_size);
+enum avf_status_code avf_aq_set_rss_lut(struct avf_hw *hw, u16 seid,
+					  bool pf_lut, u8 *lut, u16 lut_size);
+enum avf_status_code avf_aq_get_rss_key(struct avf_hw *hw,
+				     u16 seid,
+				     struct avf_aqc_get_set_rss_key_data *key);
+enum avf_status_code avf_aq_set_rss_key(struct avf_hw *hw,
+				     u16 seid,
+				     struct avf_aqc_get_set_rss_key_data *key);
+const char *avf_aq_str(struct avf_hw *hw, enum avf_admin_queue_err aq_err);
+const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err);
+
+
+enum avf_status_code avf_set_mac_type(struct avf_hw *hw);
+
+extern struct avf_rx_ptype_decoded avf_ptype_lookup[];
+
+STATIC INLINE struct avf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
+{
+	return avf_ptype_lookup[ptype];
+}
+
+/* prototype for functions used for SW spinlocks */
+void avf_init_spinlock(struct avf_spinlock *sp);
+void avf_acquire_spinlock(struct avf_spinlock *sp);
+void avf_release_spinlock(struct avf_spinlock *sp);
+void avf_destroy_spinlock(struct avf_spinlock *sp);
+
+/* avf_common for VF drivers */
+void avf_parse_hw_config(struct avf_hw *hw,
+			     struct virtchnl_vf_resource *msg);
+enum avf_status_code avf_reset(struct avf_hw *hw);
+enum avf_status_code avf_aq_send_msg_to_pf(struct avf_hw *hw,
+				enum virtchnl_ops v_opcode,
+				enum avf_status_code v_retval,
+				u8 *msg, u16 msglen,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_set_filter_control(struct avf_hw *hw,
+				struct avf_filter_control_settings *settings);
+enum avf_status_code avf_aq_add_rem_control_packet_filter(struct avf_hw *hw,
+				u8 *mac_addr, u16 ethtype, u16 flags,
+				u16 vsi_seid, u16 queue, bool is_add,
+				struct avf_control_filter_stats *stats,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_debug_dump(struct avf_hw *hw, u8 cluster_id,
+				u8 table_id, u32 start_index, u16 buff_size,
+				void *buff, u16 *ret_buff_size,
+				u8 *ret_next_table, u32 *ret_next_index,
+				struct avf_asq_cmd_details *cmd_details);
+void avf_add_filter_to_drop_tx_flow_control_frames(struct avf_hw *hw,
+						    u16 vsi_seid);
+enum avf_status_code avf_aq_rx_ctl_read_register(struct avf_hw *hw,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr);
+enum avf_status_code avf_aq_rx_ctl_write_register(struct avf_hw *hw,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val);
+enum avf_status_code avf_aq_set_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+
+enum avf_status_code avf_aq_set_arp_proxy_config(struct avf_hw *hw,
+			struct avf_aqc_arp_proxy_data *proxy_config,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_set_ns_proxy_table_entry(struct avf_hw *hw,
+			struct avf_aqc_ns_proxy_data *ns_proxy_table_entry,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_set_clear_wol_filter(struct avf_hw *hw,
+			u8 filter_index,
+			struct avf_aqc_set_wol_filter_data *filter,
+			bool set_filter, bool no_wol_tco,
+			bool filter_valid, bool no_wol_tco_valid,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_get_wake_event_reason(struct avf_hw *hw,
+			u16 *wake_reason,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_clear_all_wol_filters(struct avf_hw *hw,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_read_phy_register_clause22(struct avf_hw *hw,
+					u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register_clause22(struct avf_hw *hw,
+					u16 reg, u8 phy_addr, u16 value);
+enum avf_status_code avf_read_phy_register_clause45(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register_clause45(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 value);
+enum avf_status_code avf_read_phy_register(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 value);
+u8 avf_get_phy_address(struct avf_hw *hw, u8 dev_num);
+enum avf_status_code avf_blink_phy_link_led(struct avf_hw *hw,
+					      u32 time, u32 interval);
+enum avf_status_code avf_aq_write_ddp(struct avf_hw *hw, void *buff,
+					u16 buff_size, u32 track_id,
+					u32 *error_offset, u32 *error_info,
+					struct avf_asq_cmd_details *
+					cmd_details);
+enum avf_status_code avf_aq_get_ddp_list(struct avf_hw *hw, void *buff,
+					   u16 buff_size, u8 flags,
+					   struct avf_asq_cmd_details *
+					   cmd_details);
+struct avf_generic_seg_header *
+avf_find_segment_in_package(u32 segment_type,
+			     struct avf_package_header *pkg_header);
+struct avf_profile_section_header *
+avf_find_section_in_profile(u32 section_type,
+			     struct avf_profile_segment *profile);
+enum avf_status_code
+avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *avf_seg,
+		   u32 track_id);
+enum avf_status_code
+avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *avf_seg,
+		      u32 track_id);
+enum avf_status_code
+avf_add_pinfo_to_list(struct avf_hw *hw,
+		       struct avf_profile_segment *profile,
+		       u8 *profile_info_sec, u32 track_id);
+#endif /* _AVF_PROTOTYPE_H_ */
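These prototypes are the path the PMD uses to talk virtchnl to the PF: requests go out through avf_aq_send_msg_to_pf() on the admin send queue, and replies come back via avf_clean_arq_element() on the admin receive queue. A minimal sketch of that request/poll pattern, assuming the avf_arq_event_info field names (msg_buf, buf_len) from avf_adminq.h elsewhere in this patch; a real caller would also bound the polling loop with a timeout and check the reply opcode in the descriptor:

static enum avf_status_code
example_request_api_version(struct avf_hw *hw, u8 *resp_buf, u16 resp_len)
{
	struct virtchnl_version_info ver = {
		.major = VIRTCHNL_VERSION_MAJOR,
		.minor = VIRTCHNL_VERSION_MINOR,
	};
	struct avf_arq_event_info event;
	enum avf_status_code err;
	u16 pending;

	/* Post the request on the admin send queue. */
	err = avf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_VERSION, AVF_SUCCESS,
				    (u8 *)&ver, sizeof(ver), NULL);
	if (err != AVF_SUCCESS)
		return err;

	/* Poll the admin receive queue until the PF's reply arrives. */
	event.buf_len = resp_len;
	event.msg_buf = resp_buf;
	do {
		err = avf_clean_arq_element(hw, &event, &pending);
	} while (err == AVF_ERR_ADMIN_QUEUE_NO_WORK);

	return err;
}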
diff --git a/drivers/net/avf/base/avf_register.h b/drivers/net/avf/base/avf_register.h
new file mode 100644
index 0000000..ba5a9f3
--- /dev/null
+++ b/drivers/net/avf/base/avf_register.h
@@ -0,0 +1,346 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_REGISTER_H_
+#define _AVF_REGISTER_H_
+
+
+#define AVFMSIX_PBA1(_i)          (0x00002000 + ((_i) * 4)) /* _i=0...19 */ /* Reset: VFLR */
+#define AVFMSIX_PBA1_MAX_INDEX    19
+#define AVFMSIX_PBA1_PENBIT_SHIFT 0
+#define AVFMSIX_PBA1_PENBIT_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_PBA1_PENBIT_SHIFT)
+#define AVFMSIX_TADD1(_i)              (0x00002100 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TADD1_MAX_INDEX        639
+#define AVFMSIX_TADD1_MSIXTADD10_SHIFT 0
+#define AVFMSIX_TADD1_MSIXTADD10_MASK  AVF_MASK(0x3, AVFMSIX_TADD1_MSIXTADD10_SHIFT)
+#define AVFMSIX_TADD1_MSIXTADD_SHIFT   2
+#define AVFMSIX_TADD1_MSIXTADD_MASK    AVF_MASK(0x3FFFFFFF, AVFMSIX_TADD1_MSIXTADD_SHIFT)
+#define AVFMSIX_TMSG1(_i)            (0x00002108 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TMSG1_MAX_INDEX      639
+#define AVFMSIX_TMSG1_MSIXTMSG_SHIFT 0
+#define AVFMSIX_TMSG1_MSIXTMSG_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TMSG1_MSIXTMSG_SHIFT)
+#define AVFMSIX_TUADD1(_i)             (0x00002104 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TUADD1_MAX_INDEX       639
+#define AVFMSIX_TUADD1_MSIXTUADD_SHIFT 0
+#define AVFMSIX_TUADD1_MSIXTUADD_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TUADD1_MSIXTUADD_SHIFT)
+#define AVFMSIX_TVCTRL1(_i)        (0x0000210C + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TVCTRL1_MAX_INDEX  639
+#define AVFMSIX_TVCTRL1_MASK_SHIFT 0
+#define AVFMSIX_TVCTRL1_MASK_MASK  AVF_MASK(0x1, AVFMSIX_TVCTRL1_MASK_SHIFT)
+#define AVF_ARQBAH1              0x00006000 /* Reset: EMPR */
+#define AVF_ARQBAH1_ARQBAH_SHIFT 0
+#define AVF_ARQBAH1_ARQBAH_MASK  AVF_MASK(0xFFFFFFFF, AVF_ARQBAH1_ARQBAH_SHIFT)
+#define AVF_ARQBAL1              0x00006C00 /* Reset: EMPR */
+#define AVF_ARQBAL1_ARQBAL_SHIFT 0
+#define AVF_ARQBAL1_ARQBAL_MASK  AVF_MASK(0xFFFFFFFF, AVF_ARQBAL1_ARQBAL_SHIFT)
+#define AVF_ARQH1            0x00007400 /* Reset: EMPR */
+#define AVF_ARQH1_ARQH_SHIFT 0
+#define AVF_ARQH1_ARQH_MASK  AVF_MASK(0x3FF, AVF_ARQH1_ARQH_SHIFT)
+#define AVF_ARQLEN1                 0x00008000 /* Reset: EMPR */
+#define AVF_ARQLEN1_ARQLEN_SHIFT    0
+#define AVF_ARQLEN1_ARQLEN_MASK     AVF_MASK(0x3FF, AVF_ARQLEN1_ARQLEN_SHIFT)
+#define AVF_ARQLEN1_ARQVFE_SHIFT    28
+#define AVF_ARQLEN1_ARQVFE_MASK     AVF_MASK(0x1, AVF_ARQLEN1_ARQVFE_SHIFT)
+#define AVF_ARQLEN1_ARQOVFL_SHIFT   29
+#define AVF_ARQLEN1_ARQOVFL_MASK    AVF_MASK(0x1, AVF_ARQLEN1_ARQOVFL_SHIFT)
+#define AVF_ARQLEN1_ARQCRIT_SHIFT   30
+#define AVF_ARQLEN1_ARQCRIT_MASK    AVF_MASK(0x1, AVF_ARQLEN1_ARQCRIT_SHIFT)
+#define AVF_ARQLEN1_ARQENABLE_SHIFT 31
+#define AVF_ARQLEN1_ARQENABLE_MASK  AVF_MASK(0x1, AVF_ARQLEN1_ARQENABLE_SHIFT)
+#define AVF_ARQT1            0x00007000 /* Reset: EMPR */
+#define AVF_ARQT1_ARQT_SHIFT 0
+#define AVF_ARQT1_ARQT_MASK  AVF_MASK(0x3FF, AVF_ARQT1_ARQT_SHIFT)
+#define AVF_ATQBAH1              0x00007800 /* Reset: EMPR */
+#define AVF_ATQBAH1_ATQBAH_SHIFT 0
+#define AVF_ATQBAH1_ATQBAH_MASK  AVF_MASK(0xFFFFFFFF, AVF_ATQBAH1_ATQBAH_SHIFT)
+#define AVF_ATQBAL1              0x00007C00 /* Reset: EMPR */
+#define AVF_ATQBAL1_ATQBAL_SHIFT 0
+#define AVF_ATQBAL1_ATQBAL_MASK  AVF_MASK(0xFFFFFFFF, AVF_ATQBAL1_ATQBAL_SHIFT)
+#define AVF_ATQH1            0x00006400 /* Reset: EMPR */
+#define AVF_ATQH1_ATQH_SHIFT 0
+#define AVF_ATQH1_ATQH_MASK  AVF_MASK(0x3FF, AVF_ATQH1_ATQH_SHIFT)
+#define AVF_ATQLEN1                 0x00006800 /* Reset: EMPR */
+#define AVF_ATQLEN1_ATQLEN_SHIFT    0
+#define AVF_ATQLEN1_ATQLEN_MASK     AVF_MASK(0x3FF, AVF_ATQLEN1_ATQLEN_SHIFT)
+#define AVF_ATQLEN1_ATQVFE_SHIFT    28
+#define AVF_ATQLEN1_ATQVFE_MASK     AVF_MASK(0x1, AVF_ATQLEN1_ATQVFE_SHIFT)
+#define AVF_ATQLEN1_ATQOVFL_SHIFT   29
+#define AVF_ATQLEN1_ATQOVFL_MASK    AVF_MASK(0x1, AVF_ATQLEN1_ATQOVFL_SHIFT)
+#define AVF_ATQLEN1_ATQCRIT_SHIFT   30
+#define AVF_ATQLEN1_ATQCRIT_MASK    AVF_MASK(0x1, AVF_ATQLEN1_ATQCRIT_SHIFT)
+#define AVF_ATQLEN1_ATQENABLE_SHIFT 31
+#define AVF_ATQLEN1_ATQENABLE_MASK  AVF_MASK(0x1, AVF_ATQLEN1_ATQENABLE_SHIFT)
+#define AVF_ATQT1            0x00008400 /* Reset: EMPR */
+#define AVF_ATQT1_ATQT_SHIFT 0
+#define AVF_ATQT1_ATQT_MASK  AVF_MASK(0x3FF, AVF_ATQT1_ATQT_SHIFT)
+#define AVFGEN_RSTAT                 0x00008800 /* Reset: VFR */
+#define AVFGEN_RSTAT_VFR_STATE_SHIFT 0
+#define AVFGEN_RSTAT_VFR_STATE_MASK  AVF_MASK(0x3, AVFGEN_RSTAT_VFR_STATE_SHIFT)
+#define AVFINT_DYN_CTL01                       0x00005C00 /* Reset: VFR */
+#define AVFINT_DYN_CTL01_INTENA_SHIFT          0
+#define AVFINT_DYN_CTL01_INTENA_MASK           AVF_MASK(0x1, AVFINT_DYN_CTL01_INTENA_SHIFT)
+#define AVFINT_DYN_CTL01_CLEARPBA_SHIFT        1
+#define AVFINT_DYN_CTL01_CLEARPBA_MASK         AVF_MASK(0x1, AVFINT_DYN_CTL01_CLEARPBA_SHIFT)
+#define AVFINT_DYN_CTL01_SWINT_TRIG_SHIFT      2
+#define AVFINT_DYN_CTL01_SWINT_TRIG_MASK       AVF_MASK(0x1, AVFINT_DYN_CTL01_SWINT_TRIG_SHIFT)
+#define AVFINT_DYN_CTL01_ITR_INDX_SHIFT        3
+#define AVFINT_DYN_CTL01_ITR_INDX_MASK         AVF_MASK(0x3, AVFINT_DYN_CTL01_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTL01_INTERVAL_SHIFT        5
+#define AVFINT_DYN_CTL01_INTERVAL_MASK         AVF_MASK(0xFFF, AVFINT_DYN_CTL01_INTERVAL_SHIFT)
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT 24
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_MASK  AVF_MASK(0x1, AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT)
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT     25
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_MASK      AVF_MASK(0x3, AVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTL01_INTENA_MSK_SHIFT      31
+#define AVFINT_DYN_CTL01_INTENA_MSK_MASK       AVF_MASK(0x1, AVFINT_DYN_CTL01_INTENA_MSK_SHIFT)
+#define AVFINT_DYN_CTLN1(_INTVF)               (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */
+#define AVFINT_DYN_CTLN1_MAX_INDEX             15
+#define AVFINT_DYN_CTLN1_INTENA_SHIFT          0
+#define AVFINT_DYN_CTLN1_INTENA_MASK           AVF_MASK(0x1, AVFINT_DYN_CTLN1_INTENA_SHIFT)
+#define AVFINT_DYN_CTLN1_CLEARPBA_SHIFT        1
+#define AVFINT_DYN_CTLN1_CLEARPBA_MASK         AVF_MASK(0x1, AVFINT_DYN_CTLN1_CLEARPBA_SHIFT)
+#define AVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT      2
+#define AVFINT_DYN_CTLN1_SWINT_TRIG_MASK       AVF_MASK(0x1, AVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT)
+#define AVFINT_DYN_CTLN1_ITR_INDX_SHIFT        3
+#define AVFINT_DYN_CTLN1_ITR_INDX_MASK         AVF_MASK(0x3, AVFINT_DYN_CTLN1_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTLN1_INTERVAL_SHIFT        5
+#define AVFINT_DYN_CTLN1_INTERVAL_MASK         AVF_MASK(0xFFF, AVFINT_DYN_CTLN1_INTERVAL_SHIFT)
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT 24
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_MASK  AVF_MASK(0x1, AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT)
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT     25
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_MASK      AVF_MASK(0x3, AVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTLN1_INTENA_MSK_SHIFT      31
+#define AVFINT_DYN_CTLN1_INTENA_MSK_MASK       AVF_MASK(0x1, AVFINT_DYN_CTLN1_INTENA_MSK_SHIFT)
+#define AVFINT_ICR0_ENA1                        0x00005000 /* Reset: CORER */
+#define AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT 25
+#define AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_MASK  AVF_MASK(0x1, AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT)
+#define AVFINT_ICR0_ENA1_ADMINQ_SHIFT           30
+#define AVFINT_ICR0_ENA1_ADMINQ_MASK            AVF_MASK(0x1, AVFINT_ICR0_ENA1_ADMINQ_SHIFT)
+#define AVFINT_ICR0_ENA1_RSVD_SHIFT             31
+#define AVFINT_ICR0_ENA1_RSVD_MASK              AVF_MASK(0x1, AVFINT_ICR0_ENA1_RSVD_SHIFT)
+#define AVFINT_ICR01                        0x00004800 /* Reset: CORER */
+#define AVFINT_ICR01_INTEVENT_SHIFT         0
+#define AVFINT_ICR01_INTEVENT_MASK          AVF_MASK(0x1, AVFINT_ICR01_INTEVENT_SHIFT)
+#define AVFINT_ICR01_QUEUE_0_SHIFT          1
+#define AVFINT_ICR01_QUEUE_0_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_0_SHIFT)
+#define AVFINT_ICR01_QUEUE_1_SHIFT          2
+#define AVFINT_ICR01_QUEUE_1_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_1_SHIFT)
+#define AVFINT_ICR01_QUEUE_2_SHIFT          3
+#define AVFINT_ICR01_QUEUE_2_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_2_SHIFT)
+#define AVFINT_ICR01_QUEUE_3_SHIFT          4
+#define AVFINT_ICR01_QUEUE_3_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_3_SHIFT)
+#define AVFINT_ICR01_LINK_STAT_CHANGE_SHIFT 25
+#define AVFINT_ICR01_LINK_STAT_CHANGE_MASK  AVF_MASK(0x1, AVFINT_ICR01_LINK_STAT_CHANGE_SHIFT)
+#define AVFINT_ICR01_ADMINQ_SHIFT           30
+#define AVFINT_ICR01_ADMINQ_MASK            AVF_MASK(0x1, AVFINT_ICR01_ADMINQ_SHIFT)
+#define AVFINT_ICR01_SWINT_SHIFT            31
+#define AVFINT_ICR01_SWINT_MASK             AVF_MASK(0x1, AVFINT_ICR01_SWINT_SHIFT)
+#define AVFINT_ITR01(_i)            (0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset: VFR */
+#define AVFINT_ITR01_MAX_INDEX      2
+#define AVFINT_ITR01_INTERVAL_SHIFT 0
+#define AVFINT_ITR01_INTERVAL_MASK  AVF_MASK(0xFFF, AVFINT_ITR01_INTERVAL_SHIFT)
+#define AVFINT_ITRN1(_i, _INTVF)     (0x00002800 + ((_i) * 64 + (_INTVF) * 4)) /* _i=0...2, _INTVF=0...15 */ /* Reset: VFR */
+#define AVFINT_ITRN1_MAX_INDEX      2
+#define AVFINT_ITRN1_INTERVAL_SHIFT 0
+#define AVFINT_ITRN1_INTERVAL_MASK  AVF_MASK(0xFFF, AVFINT_ITRN1_INTERVAL_SHIFT)
+#define AVFINT_STAT_CTL01                      0x00005400 /* Reset: CORER */
+#define AVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT 2
+#define AVFINT_STAT_CTL01_OTHER_ITR_INDX_MASK  AVF_MASK(0x3, AVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT)
+#define AVF_QRX_TAIL1(_Q)        (0x00002000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define AVF_QRX_TAIL1_MAX_INDEX  15
+#define AVF_QRX_TAIL1_TAIL_SHIFT 0
+#define AVF_QRX_TAIL1_TAIL_MASK  AVF_MASK(0x1FFF, AVF_QRX_TAIL1_TAIL_SHIFT)
+#define AVF_QTX_TAIL1(_Q)        (0x00000000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: PFR */
+#define AVF_QTX_TAIL1_MAX_INDEX  15
+#define AVF_QTX_TAIL1_TAIL_SHIFT 0
+#define AVF_QTX_TAIL1_TAIL_MASK  AVF_MASK(0x1FFF, AVF_QTX_TAIL1_TAIL_SHIFT)
+#define AVFMSIX_PBA              0x00002000 /* Reset: VFLR */
+#define AVFMSIX_PBA_PENBIT_SHIFT 0
+#define AVFMSIX_PBA_PENBIT_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_PBA_PENBIT_SHIFT)
+#define AVFMSIX_TADD(_i)              (0x00000000 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TADD_MAX_INDEX        16
+#define AVFMSIX_TADD_MSIXTADD10_SHIFT 0
+#define AVFMSIX_TADD_MSIXTADD10_MASK  AVF_MASK(0x3, AVFMSIX_TADD_MSIXTADD10_SHIFT)
+#define AVFMSIX_TADD_MSIXTADD_SHIFT   2
+#define AVFMSIX_TADD_MSIXTADD_MASK    AVF_MASK(0x3FFFFFFF, AVFMSIX_TADD_MSIXTADD_SHIFT)
+#define AVFMSIX_TMSG(_i)            (0x00000008 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TMSG_MAX_INDEX      16
+#define AVFMSIX_TMSG_MSIXTMSG_SHIFT 0
+#define AVFMSIX_TMSG_MSIXTMSG_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TMSG_MSIXTMSG_SHIFT)
+#define AVFMSIX_TUADD(_i)             (0x00000004 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TUADD_MAX_INDEX       16
+#define AVFMSIX_TUADD_MSIXTUADD_SHIFT 0
+#define AVFMSIX_TUADD_MSIXTUADD_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TUADD_MSIXTUADD_SHIFT)
+#define AVFMSIX_TVCTRL(_i)        (0x0000000C + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TVCTRL_MAX_INDEX  16
+#define AVFMSIX_TVCTRL_MASK_SHIFT 0
+#define AVFMSIX_TVCTRL_MASK_MASK  AVF_MASK(0x1, AVFMSIX_TVCTRL_MASK_SHIFT)
+#define AVFCM_PE_ERRDATA                  0x0000DC00 /* Reset: VFR */
+#define AVFCM_PE_ERRDATA_ERROR_CODE_SHIFT 0
+#define AVFCM_PE_ERRDATA_ERROR_CODE_MASK  AVF_MASK(0xF, AVFCM_PE_ERRDATA_ERROR_CODE_SHIFT)
+#define AVFCM_PE_ERRDATA_Q_TYPE_SHIFT     4
+#define AVFCM_PE_ERRDATA_Q_TYPE_MASK      AVF_MASK(0x7, AVFCM_PE_ERRDATA_Q_TYPE_SHIFT)
+#define AVFCM_PE_ERRDATA_Q_NUM_SHIFT      8
+#define AVFCM_PE_ERRDATA_Q_NUM_MASK       AVF_MASK(0x3FFFF, AVFCM_PE_ERRDATA_Q_NUM_SHIFT)
+#define AVFCM_PE_ERRINFO                     0x0000D800 /* Reset: VFR */
+#define AVFCM_PE_ERRINFO_ERROR_VALID_SHIFT   0
+#define AVFCM_PE_ERRINFO_ERROR_VALID_MASK    AVF_MASK(0x1, AVFCM_PE_ERRINFO_ERROR_VALID_SHIFT)
+#define AVFCM_PE_ERRINFO_ERROR_INST_SHIFT    4
+#define AVFCM_PE_ERRINFO_ERROR_INST_MASK     AVF_MASK(0x7, AVFCM_PE_ERRINFO_ERROR_INST_SHIFT)
+#define AVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT 8
+#define AVFCM_PE_ERRINFO_DBL_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT)
+#define AVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT 16
+#define AVFCM_PE_ERRINFO_RLU_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT)
+#define AVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT 24
+#define AVFCM_PE_ERRINFO_RLS_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT)
+#define AVFQF_HENA(_i)             (0x0000C400 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */
+#define AVFQF_HENA_MAX_INDEX       1
+#define AVFQF_HENA_PTYPE_ENA_SHIFT 0
+#define AVFQF_HENA_PTYPE_ENA_MASK  AVF_MASK(0xFFFFFFFF, AVFQF_HENA_PTYPE_ENA_SHIFT)
+#define AVFQF_HKEY(_i)         (0x0000CC00 + ((_i) * 4)) /* _i=0...12 */ /* Reset: CORER */
+#define AVFQF_HKEY_MAX_INDEX   12
+#define AVFQF_HKEY_KEY_0_SHIFT 0
+#define AVFQF_HKEY_KEY_0_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_0_SHIFT)
+#define AVFQF_HKEY_KEY_1_SHIFT 8
+#define AVFQF_HKEY_KEY_1_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_1_SHIFT)
+#define AVFQF_HKEY_KEY_2_SHIFT 16
+#define AVFQF_HKEY_KEY_2_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_2_SHIFT)
+#define AVFQF_HKEY_KEY_3_SHIFT 24
+#define AVFQF_HKEY_KEY_3_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_3_SHIFT)
+#define AVFQF_HLUT(_i)        (0x0000D000 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define AVFQF_HLUT_MAX_INDEX  15
+#define AVFQF_HLUT_LUT0_SHIFT 0
+#define AVFQF_HLUT_LUT0_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT0_SHIFT)
+#define AVFQF_HLUT_LUT1_SHIFT 8
+#define AVFQF_HLUT_LUT1_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT1_SHIFT)
+#define AVFQF_HLUT_LUT2_SHIFT 16
+#define AVFQF_HLUT_LUT2_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT2_SHIFT)
+#define AVFQF_HLUT_LUT3_SHIFT 24
+#define AVFQF_HLUT_LUT3_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT3_SHIFT)
+#define AVFQF_HREGION(_i)                  (0x0000D400 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */
+#define AVFQF_HREGION_MAX_INDEX            7
+#define AVFQF_HREGION_OVERRIDE_ENA_0_SHIFT 0
+#define AVFQF_HREGION_OVERRIDE_ENA_0_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_0_SHIFT)
+#define AVFQF_HREGION_REGION_0_SHIFT       1
+#define AVFQF_HREGION_REGION_0_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_0_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_1_SHIFT 4
+#define AVFQF_HREGION_OVERRIDE_ENA_1_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_1_SHIFT)
+#define AVFQF_HREGION_REGION_1_SHIFT       5
+#define AVFQF_HREGION_REGION_1_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_1_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_2_SHIFT 8
+#define AVFQF_HREGION_OVERRIDE_ENA_2_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_2_SHIFT)
+#define AVFQF_HREGION_REGION_2_SHIFT       9
+#define AVFQF_HREGION_REGION_2_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_2_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_3_SHIFT 12
+#define AVFQF_HREGION_OVERRIDE_ENA_3_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_3_SHIFT)
+#define AVFQF_HREGION_REGION_3_SHIFT       13
+#define AVFQF_HREGION_REGION_3_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_3_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_4_SHIFT 16
+#define AVFQF_HREGION_OVERRIDE_ENA_4_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_4_SHIFT)
+#define AVFQF_HREGION_REGION_4_SHIFT       17
+#define AVFQF_HREGION_REGION_4_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_4_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_5_SHIFT 20
+#define AVFQF_HREGION_OVERRIDE_ENA_5_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_5_SHIFT)
+#define AVFQF_HREGION_REGION_5_SHIFT       21
+#define AVFQF_HREGION_REGION_5_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_5_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_6_SHIFT 24
+#define AVFQF_HREGION_OVERRIDE_ENA_6_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_6_SHIFT)
+#define AVFQF_HREGION_REGION_6_SHIFT       25
+#define AVFQF_HREGION_REGION_6_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_6_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_7_SHIFT 28
+#define AVFQF_HREGION_OVERRIDE_ENA_7_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_7_SHIFT)
+#define AVFQF_HREGION_REGION_7_SHIFT       29
+#define AVFQF_HREGION_REGION_7_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_7_SHIFT)
+
+#define AVFINT_DYN_CTL01_WB_ON_ITR_SHIFT       30
+#define AVFINT_DYN_CTL01_WB_ON_ITR_MASK        AVF_MASK(0x1, AVFINT_DYN_CTL01_WB_ON_ITR_SHIFT)
+#define AVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT       30
+#define AVFINT_DYN_CTLN1_WB_ON_ITR_MASK        AVF_MASK(0x1, AVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT)
+#define AVFPE_AEQALLOC1               0x0000A400 /* Reset: VFR */
+#define AVFPE_AEQALLOC1_AECOUNT_SHIFT 0
+#define AVFPE_AEQALLOC1_AECOUNT_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_AEQALLOC1_AECOUNT_SHIFT)
+#define AVFPE_CCQPHIGH1                  0x00009800 /* Reset: VFR */
+#define AVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT 0
+#define AVFPE_CCQPHIGH1_PECCQPHIGH_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT)
+#define AVFPE_CCQPLOW1                 0x0000AC00 /* Reset: VFR */
+#define AVFPE_CCQPLOW1_PECCQPLOW_SHIFT 0
+#define AVFPE_CCQPLOW1_PECCQPLOW_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_CCQPLOW1_PECCQPLOW_SHIFT)
+#define AVFPE_CCQPSTATUS1                   0x0000B800 /* Reset: VFR */
+#define AVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT   0
+#define AVFPE_CCQPSTATUS1_CCQP_DONE_MASK    AVF_MASK(0x1, AVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT)
+#define AVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT 4
+#define AVFPE_CCQPSTATUS1_HMC_PROFILE_MASK  AVF_MASK(0x7, AVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT)
+#define AVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT 16
+#define AVFPE_CCQPSTATUS1_RDMA_EN_VFS_MASK  AVF_MASK(0x3F, AVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT)
+#define AVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT    31
+#define AVFPE_CCQPSTATUS1_CCQP_ERR_MASK     AVF_MASK(0x1, AVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT)
+#define AVFPE_CQACK1              0x0000B000 /* Reset: VFR */
+#define AVFPE_CQACK1_PECQID_SHIFT 0
+#define AVFPE_CQACK1_PECQID_MASK  AVF_MASK(0x1FFFF, AVFPE_CQACK1_PECQID_SHIFT)
+#define AVFPE_CQARM1              0x0000B400 /* Reset: VFR */
+#define AVFPE_CQARM1_PECQID_SHIFT 0
+#define AVFPE_CQARM1_PECQID_MASK  AVF_MASK(0x1FFFF, AVFPE_CQARM1_PECQID_SHIFT)
+#define AVFPE_CQPDB1              0x0000BC00 /* Reset: VFR */
+#define AVFPE_CQPDB1_WQHEAD_SHIFT 0
+#define AVFPE_CQPDB1_WQHEAD_MASK  AVF_MASK(0x7FF, AVFPE_CQPDB1_WQHEAD_SHIFT)
+#define AVFPE_CQPERRCODES1                      0x00009C00 /* Reset: VFR */
+#define AVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT 0
+#define AVFPE_CQPERRCODES1_CQP_MINOR_CODE_MASK  AVF_MASK(0xFFFF, AVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT)
+#define AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT 16
+#define AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_MASK  AVF_MASK(0xFFFF, AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT)
+#define AVFPE_CQPTAIL1                  0x0000A000 /* Reset: VFR */
+#define AVFPE_CQPTAIL1_WQTAIL_SHIFT     0
+#define AVFPE_CQPTAIL1_WQTAIL_MASK      AVF_MASK(0x7FF, AVFPE_CQPTAIL1_WQTAIL_SHIFT)
+#define AVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT 31
+#define AVFPE_CQPTAIL1_CQP_OP_ERR_MASK  AVF_MASK(0x1, AVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT)
+#define AVFPE_IPCONFIG01                        0x00008C00 /* Reset: VFR */
+#define AVFPE_IPCONFIG01_PEIPID_SHIFT           0
+#define AVFPE_IPCONFIG01_PEIPID_MASK            AVF_MASK(0xFFFF, AVFPE_IPCONFIG01_PEIPID_SHIFT)
+#define AVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT 16
+#define AVFPE_IPCONFIG01_USEENTIREIDRANGE_MASK  AVF_MASK(0x1, AVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT)
+#define AVFPE_MRTEIDXMASK1                       0x00009000 /* Reset: VFR */
+#define AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT 0
+#define AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_MASK  AVF_MASK(0x1F, AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT)
+#define AVFPE_RCVUNEXPECTEDERROR1                        0x00009400 /* Reset: VFR */
+#define AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT 0
+#define AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_MASK  AVF_MASK(0xFFFFFF, AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT)
+#define AVFPE_TCPNOWTIMER1               0x0000A800 /* Reset: VFR */
+#define AVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT 0
+#define AVFPE_TCPNOWTIMER1_TCP_NOW_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT)
+#define AVFPE_WQEALLOC1                      0x0000C000 /* Reset: VFR */
+#define AVFPE_WQEALLOC1_PEQPID_SHIFT         0
+#define AVFPE_WQEALLOC1_PEQPID_MASK          AVF_MASK(0x3FFFF, AVFPE_WQEALLOC1_PEQPID_SHIFT)
+#define AVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT 20
+#define AVFPE_WQEALLOC1_WQE_DESC_INDEX_MASK  AVF_MASK(0xFFF, AVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT)
+
+#endif /* _AVF_REGISTER_H_ */
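All of the registers above follow the same *_SHIFT/*_MASK convention built on AVF_MASK(): a field is read by masking and shifting the raw 32-bit value. A minimal sketch, assuming the VIRTCHNL_VFR_* reset states from virtchnl.h elsewhere in this patch, of how the VF reset state would be extracted from AVFGEN_RSTAT:

static bool example_vf_reset_done(struct avf_hw *hw)
{
	u32 vfr_state = rd32(hw, AVFGEN_RSTAT);

	/* Isolate the 2-bit VFR_STATE field from the raw register value. */
	vfr_state = (vfr_state & AVFGEN_RSTAT_VFR_STATE_MASK) >>
		    AVFGEN_RSTAT_VFR_STATE_SHIFT;

	return vfr_state == VIRTCHNL_VFR_COMPLETED ||
	       vfr_state == VIRTCHNL_VFR_VFACTIVE;
}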
diff --git a/drivers/net/avf/base/avf_status.h b/drivers/net/avf/base/avf_status.h
new file mode 100644
index 0000000..e8a673b
--- /dev/null
+++ b/drivers/net/avf/base/avf_status.h
@@ -0,0 +1,108 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_STATUS_H_
+#define _AVF_STATUS_H_
+
+/* Error Codes */
+enum avf_status_code {
+	AVF_SUCCESS				= 0,
+	AVF_ERR_NVM				= -1,
+	AVF_ERR_NVM_CHECKSUM			= -2,
+	AVF_ERR_PHY				= -3,
+	AVF_ERR_CONFIG				= -4,
+	AVF_ERR_PARAM				= -5,
+	AVF_ERR_MAC_TYPE			= -6,
+	AVF_ERR_UNKNOWN_PHY			= -7,
+	AVF_ERR_LINK_SETUP			= -8,
+	AVF_ERR_ADAPTER_STOPPED		= -9,
+	AVF_ERR_INVALID_MAC_ADDR		= -10,
+	AVF_ERR_DEVICE_NOT_SUPPORTED		= -11,
+	AVF_ERR_MASTER_REQUESTS_PENDING	= -12,
+	AVF_ERR_INVALID_LINK_SETTINGS		= -13,
+	AVF_ERR_AUTONEG_NOT_COMPLETE		= -14,
+	AVF_ERR_RESET_FAILED			= -15,
+	AVF_ERR_SWFW_SYNC			= -16,
+	AVF_ERR_NO_AVAILABLE_VSI		= -17,
+	AVF_ERR_NO_MEMORY			= -18,
+	AVF_ERR_BAD_PTR			= -19,
+	AVF_ERR_RING_FULL			= -20,
+	AVF_ERR_INVALID_PD_ID			= -21,
+	AVF_ERR_INVALID_QP_ID			= -22,
+	AVF_ERR_INVALID_CQ_ID			= -23,
+	AVF_ERR_INVALID_CEQ_ID			= -24,
+	AVF_ERR_INVALID_AEQ_ID			= -25,
+	AVF_ERR_INVALID_SIZE			= -26,
+	AVF_ERR_INVALID_ARP_INDEX		= -27,
+	AVF_ERR_INVALID_FPM_FUNC_ID		= -28,
+	AVF_ERR_QP_INVALID_MSG_SIZE		= -29,
+	AVF_ERR_QP_TOOMANY_WRS_POSTED		= -30,
+	AVF_ERR_INVALID_FRAG_COUNT		= -31,
+	AVF_ERR_QUEUE_EMPTY			= -32,
+	AVF_ERR_INVALID_ALIGNMENT		= -33,
+	AVF_ERR_FLUSHED_QUEUE			= -34,
+	AVF_ERR_INVALID_PUSH_PAGE_INDEX	= -35,
+	AVF_ERR_INVALID_IMM_DATA_SIZE		= -36,
+	AVF_ERR_TIMEOUT			= -37,
+	AVF_ERR_OPCODE_MISMATCH		= -38,
+	AVF_ERR_CQP_COMPL_ERROR		= -39,
+	AVF_ERR_INVALID_VF_ID			= -40,
+	AVF_ERR_INVALID_HMCFN_ID		= -41,
+	AVF_ERR_BACKING_PAGE_ERROR		= -42,
+	AVF_ERR_NO_PBLCHUNKS_AVAILABLE		= -43,
+	AVF_ERR_INVALID_PBLE_INDEX		= -44,
+	AVF_ERR_INVALID_SD_INDEX		= -45,
+	AVF_ERR_INVALID_PAGE_DESC_INDEX	= -46,
+	AVF_ERR_INVALID_SD_TYPE		= -47,
+	AVF_ERR_MEMCPY_FAILED			= -48,
+	AVF_ERR_INVALID_HMC_OBJ_INDEX		= -49,
+	AVF_ERR_INVALID_HMC_OBJ_COUNT		= -50,
+	AVF_ERR_INVALID_SRQ_ARM_LIMIT		= -51,
+	AVF_ERR_SRQ_ENABLED			= -52,
+	AVF_ERR_ADMIN_QUEUE_ERROR		= -53,
+	AVF_ERR_ADMIN_QUEUE_TIMEOUT		= -54,
+	AVF_ERR_BUF_TOO_SHORT			= -55,
+	AVF_ERR_ADMIN_QUEUE_FULL		= -56,
+	AVF_ERR_ADMIN_QUEUE_NO_WORK		= -57,
+	AVF_ERR_BAD_IWARP_CQE			= -58,
+	AVF_ERR_NVM_BLANK_MODE			= -59,
+	AVF_ERR_NOT_IMPLEMENTED		= -60,
+	AVF_ERR_PE_DOORBELL_NOT_ENABLED	= -61,
+	AVF_ERR_DIAG_TEST_FAILED		= -62,
+	AVF_ERR_NOT_READY			= -63,
+	AVF_NOT_SUPPORTED			= -64,
+	AVF_ERR_FIRMWARE_API_VERSION		= -65,
+	AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR	= -66,
+};
+
+#endif /* _AVF_STATUS_H_ */
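Everything in the base code reports one of these negative codes, with AVF_SUCCESS (0) as the only success value. A minimal sketch of how a caller is expected to consume them, assuming the PMD_DRV_LOG helper from avf_log.h and the avf_stat_str() prototype declared in avf_prototype.h above:

static int example_check_aq_result(struct avf_hw *hw, enum avf_status_code err)
{
	if (err == AVF_SUCCESS)
		return 0;

	/* avf_stat_str() turns the negative status code into a printable
	 * name for logging; the PMD then reports a plain negative errno.
	 */
	PMD_DRV_LOG(ERR, "base code call failed: %s", avf_stat_str(hw, err));
	return -EIO;
}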
diff --git a/drivers/net/avf/base/avf_type.h b/drivers/net/avf/base/avf_type.h
new file mode 100644
index 0000000..546c6d2
--- /dev/null
+++ b/drivers/net/avf/base/avf_type.h
@@ -0,0 +1,2024 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_TYPE_H_
+#define _AVF_TYPE_H_
+
+#include "avf_status.h"
+#include "avf_osdep.h"
+#include "avf_register.h"
+#include "avf_adminq.h"
+#include "avf_hmc.h"
+#include "avf_lan_hmc.h"
+#include "avf_devids.h"
+
+#define UNREFERENCED_XPARAMETER
+#define UNREFERENCED_1PARAMETER(_p) (_p);
+#define UNREFERENCED_2PARAMETER(_p, _q) (_p); (_q);
+#define UNREFERENCED_3PARAMETER(_p, _q, _r) (_p); (_q); (_r);
+#define UNREFERENCED_4PARAMETER(_p, _q, _r, _s) (_p); (_q); (_r); (_s);
+#define UNREFERENCED_5PARAMETER(_p, _q, _r, _s, _t) (_p); (_q); (_r); (_s); (_t);
+
+#ifndef LINUX_MACROS
+#ifndef BIT
+#define BIT(a) (1UL << (a))
+#endif /* BIT */
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif /* BIT_ULL */
+#endif /* LINUX_MACROS */
+
+#ifndef AVF_MASK
+/* AVF_MASK is a macro used on 32 bit registers */
+#define AVF_MASK(mask, shift) (mask << shift)
+#endif
+
+#define AVF_MAX_PF			16
+#define AVF_MAX_PF_VSI			64
+#define AVF_MAX_PF_QP			128
+#define AVF_MAX_VSI_QP			16
+#define AVF_MAX_VF_VSI			3
+#define AVF_MAX_CHAINED_RX_BUFFERS	5
+#define AVF_MAX_PF_UDP_OFFLOAD_PORTS	16
+
+/* something less than 1 minute */
+#define AVF_HEARTBEAT_TIMEOUT		(HZ * 50)
+
+/* Max default timeout in ms */
+#define AVF_MAX_NVM_TIMEOUT		18000
+
+/* Max timeout in ms for the phy to respond */
+#define AVF_MAX_PHY_TIMEOUT		500
+
+/* Check whether address is multicast. */
+#define AVF_IS_MULTICAST(address) (bool)(((u8 *)(address))[0] & ((u8)0x01))
+
+/* Check whether an address is broadcast. */
+#define AVF_IS_BROADCAST(address)	\
+	((((u8 *)(address))[0] == ((u8)0xff)) && \
+	(((u8 *)(address))[1] == ((u8)0xff)))
+
+/* Switch from ms to the 1usec global time (this is the GTIME resolution) */
+#define AVF_MS_TO_GTIME(time)		((time) * 1000)
+
+/* forward declaration */
+struct avf_hw;
+typedef void (*AVF_ADMINQ_CALLBACK)(struct avf_hw *, struct avf_aq_desc *);
+
+#ifndef ETH_ALEN
+#define ETH_ALEN	6
+#endif
+/* Data type manipulation macros. */
+#define AVF_HI_DWORD(x)	((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define AVF_LO_DWORD(x)	((u32)((x) & 0xFFFFFFFF))
+
+#define AVF_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
+#define AVF_LO_WORD(x)		((u16)((x) & 0xFFFF))
+
+#define AVF_HI_BYTE(x)		((u8)(((x) >> 8) & 0xFF))
+#define AVF_LO_BYTE(x)		((u8)((x) & 0xFF))
+
+/* Number of Transmit Descriptors must be a multiple of 8. */
+#define AVF_REQ_TX_DESCRIPTOR_MULTIPLE	8
+/* Number of Receive Descriptors must be a multiple of 32 if
+ * the number of descriptors is greater than 32.
+ */
+#define AVF_REQ_RX_DESCRIPTOR_MULTIPLE	32
+
+#define AVF_DESC_UNUSED(R)	\
+	((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
+	(R)->next_to_clean - (R)->next_to_use - 1)
+
+/* bitfields for Tx queue mapping in QTX_CTL */
+#define AVF_QTX_CTL_VF_QUEUE	0x0
+#define AVF_QTX_CTL_VM_QUEUE	0x1
+#define AVF_QTX_CTL_PF_QUEUE	0x2
+
+/* debug masks - set these bits in hw->debug_mask to control output */
+enum avf_debug_mask {
+	AVF_DEBUG_INIT			= 0x00000001,
+	AVF_DEBUG_RELEASE		= 0x00000002,
+
+	AVF_DEBUG_LINK			= 0x00000010,
+	AVF_DEBUG_PHY			= 0x00000020,
+	AVF_DEBUG_HMC			= 0x00000040,
+	AVF_DEBUG_NVM			= 0x00000080,
+	AVF_DEBUG_LAN			= 0x00000100,
+	AVF_DEBUG_FLOW			= 0x00000200,
+	AVF_DEBUG_DCB			= 0x00000400,
+	AVF_DEBUG_DIAG			= 0x00000800,
+	AVF_DEBUG_FD			= 0x00001000,
+	AVF_DEBUG_PACKAGE		= 0x00002000,
+
+	AVF_DEBUG_AQ_MESSAGE		= 0x01000000,
+	AVF_DEBUG_AQ_DESCRIPTOR	= 0x02000000,
+	AVF_DEBUG_AQ_DESC_BUFFER	= 0x04000000,
+	AVF_DEBUG_AQ_COMMAND		= 0x06000000,
+	AVF_DEBUG_AQ			= 0x0F000000,
+
+	AVF_DEBUG_USER			= 0xF0000000,
+
+	AVF_DEBUG_ALL			= 0xFFFFFFFF
+};
+
+/* PCI Bus Info */
+#define AVF_PCI_LINK_STATUS		0xB2
+#define AVF_PCI_LINK_WIDTH		0x3F0
+#define AVF_PCI_LINK_WIDTH_1		0x10
+#define AVF_PCI_LINK_WIDTH_2		0x20
+#define AVF_PCI_LINK_WIDTH_4		0x40
+#define AVF_PCI_LINK_WIDTH_8		0x80
+#define AVF_PCI_LINK_SPEED		0xF
+#define AVF_PCI_LINK_SPEED_2500	0x1
+#define AVF_PCI_LINK_SPEED_5000	0x2
+#define AVF_PCI_LINK_SPEED_8000	0x3
+
+#define AVF_MDIO_CLAUSE22_STCODE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_STCODE_SHIFT)
+#define AVF_MDIO_CLAUSE22_OPCODE_WRITE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE22_OPCODE_READ_MASK	AVF_MASK(2, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+
+#define AVF_MDIO_CLAUSE45_STCODE_MASK	AVF_MASK(0, \
+						  AVF_GLGEN_MSCA_STCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_ADDRESS_MASK	AVF_MASK(0, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_WRITE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_READ_INC_ADDR_MASK	AVF_MASK(2, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_READ_MASK	AVF_MASK(3, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+
+#define AVF_PHY_COM_REG_PAGE			0x1E
+#define AVF_PHY_LED_LINK_MODE_MASK		0xF0
+#define AVF_PHY_LED_MANUAL_ON			0x100
+#define AVF_PHY_LED_PROV_REG_1			0xC430
+#define AVF_PHY_LED_MODE_MASK			0xFFFF
+#define AVF_PHY_LED_MODE_ORIG			0x80000000
+
+/* Memory types */
+enum avf_memset_type {
+	AVF_NONDMA_MEM = 0,
+	AVF_DMA_MEM
+};
+
+/* Memcpy types */
+enum avf_memcpy_type {
+	AVF_NONDMA_TO_NONDMA = 0,
+	AVF_NONDMA_TO_DMA,
+	AVF_DMA_TO_DMA,
+	AVF_DMA_TO_NONDMA
+};
+
+/* These are structs for managing the hardware information and the operations.
+ * The structures of function pointers are filled out at init time when we
+ * know for sure exactly which hardware we're working with.  This gives us the
+ * flexibility of using the same main driver code but adapting to slightly
+ * different hardware needs as new parts are developed.  For this architecture,
+ * the Firmware and AdminQ are intended to insulate the driver from most of the
+ * future changes, but these structures will also do part of the job.
+ */
+enum avf_mac_type {
+	AVF_MAC_UNKNOWN = 0,
+	AVF_MAC_XL710,
+	AVF_MAC_VF,
+	AVF_MAC_X722,
+	AVF_MAC_X722_VF,
+	AVF_MAC_GENERIC,
+};
+
+enum avf_media_type {
+	AVF_MEDIA_TYPE_UNKNOWN = 0,
+	AVF_MEDIA_TYPE_FIBER,
+	AVF_MEDIA_TYPE_BASET,
+	AVF_MEDIA_TYPE_BACKPLANE,
+	AVF_MEDIA_TYPE_CX4,
+	AVF_MEDIA_TYPE_DA,
+	AVF_MEDIA_TYPE_VIRTUAL
+};
+
+enum avf_fc_mode {
+	AVF_FC_NONE = 0,
+	AVF_FC_RX_PAUSE,
+	AVF_FC_TX_PAUSE,
+	AVF_FC_FULL,
+	AVF_FC_PFC,
+	AVF_FC_DEFAULT
+};
+
+enum avf_set_fc_aq_failures {
+	AVF_SET_FC_AQ_FAIL_NONE = 0,
+	AVF_SET_FC_AQ_FAIL_GET = 1,
+	AVF_SET_FC_AQ_FAIL_SET = 2,
+	AVF_SET_FC_AQ_FAIL_UPDATE = 4,
+	AVF_SET_FC_AQ_FAIL_SET_UPDATE = 6
+};
+
+enum avf_vsi_type {
+	AVF_VSI_MAIN	= 0,
+	AVF_VSI_VMDQ1	= 1,
+	AVF_VSI_VMDQ2	= 2,
+	AVF_VSI_CTRL	= 3,
+	AVF_VSI_FCOE	= 4,
+	AVF_VSI_MIRROR	= 5,
+	AVF_VSI_SRIOV	= 6,
+	AVF_VSI_FDIR	= 7,
+	AVF_VSI_TYPE_UNKNOWN
+};
+
+enum avf_queue_type {
+	AVF_QUEUE_TYPE_RX = 0,
+	AVF_QUEUE_TYPE_TX,
+	AVF_QUEUE_TYPE_PE_CEQ,
+	AVF_QUEUE_TYPE_UNKNOWN
+};
+
+struct avf_link_status {
+	enum avf_aq_phy_type phy_type;
+	enum avf_aq_link_speed link_speed;
+	u8 link_info;
+	u8 an_info;
+	u8 req_fec_info;
+	u8 fec_info;
+	u8 ext_info;
+	u8 loopback;
+	/* is Link Status Event notification to SW enabled */
+	bool lse_enable;
+	u16 max_frame_size;
+	bool crc_enable;
+	u8 pacing;
+	u8 requested_speeds;
+	u8 module_type[3];
+	/* 1st byte: module identifier */
+#define AVF_MODULE_TYPE_SFP		0x03
+#define AVF_MODULE_TYPE_QSFP		0x0D
+	/* 2nd byte: ethernet compliance codes for 10/40G */
+#define AVF_MODULE_TYPE_40G_ACTIVE	0x01
+#define AVF_MODULE_TYPE_40G_LR4	0x02
+#define AVF_MODULE_TYPE_40G_SR4	0x04
+#define AVF_MODULE_TYPE_40G_CR4	0x08
+#define AVF_MODULE_TYPE_10G_BASE_SR	0x10
+#define AVF_MODULE_TYPE_10G_BASE_LR	0x20
+#define AVF_MODULE_TYPE_10G_BASE_LRM	0x40
+#define AVF_MODULE_TYPE_10G_BASE_ER	0x80
+	/* 3rd byte: ethernet compliance codes for 1G */
+#define AVF_MODULE_TYPE_1000BASE_SX	0x01
+#define AVF_MODULE_TYPE_1000BASE_LX	0x02
+#define AVF_MODULE_TYPE_1000BASE_CX	0x04
+#define AVF_MODULE_TYPE_1000BASE_T	0x08
+};
+
+struct avf_phy_info {
+	struct avf_link_status link_info;
+	struct avf_link_status link_info_old;
+	bool get_link_info;
+	enum avf_media_type media_type;
+	/* all the phy types the NVM is capable of */
+	u64 phy_types;
+};
+
+#define AVF_CAP_PHY_TYPE_SGMII BIT_ULL(AVF_PHY_TYPE_SGMII)
+#define AVF_CAP_PHY_TYPE_1000BASE_KX BIT_ULL(AVF_PHY_TYPE_1000BASE_KX)
+#define AVF_CAP_PHY_TYPE_10GBASE_KX4 BIT_ULL(AVF_PHY_TYPE_10GBASE_KX4)
+#define AVF_CAP_PHY_TYPE_10GBASE_KR BIT_ULL(AVF_PHY_TYPE_10GBASE_KR)
+#define AVF_CAP_PHY_TYPE_40GBASE_KR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_KR4)
+#define AVF_CAP_PHY_TYPE_XAUI BIT_ULL(AVF_PHY_TYPE_XAUI)
+#define AVF_CAP_PHY_TYPE_XFI BIT_ULL(AVF_PHY_TYPE_XFI)
+#define AVF_CAP_PHY_TYPE_SFI BIT_ULL(AVF_PHY_TYPE_SFI)
+#define AVF_CAP_PHY_TYPE_XLAUI BIT_ULL(AVF_PHY_TYPE_XLAUI)
+#define AVF_CAP_PHY_TYPE_XLPPI BIT_ULL(AVF_PHY_TYPE_XLPPI)
+#define AVF_CAP_PHY_TYPE_40GBASE_CR4_CU BIT_ULL(AVF_PHY_TYPE_40GBASE_CR4_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_CR1_CU BIT_ULL(AVF_PHY_TYPE_10GBASE_CR1_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_AOC BIT_ULL(AVF_PHY_TYPE_10GBASE_AOC)
+#define AVF_CAP_PHY_TYPE_40GBASE_AOC BIT_ULL(AVF_PHY_TYPE_40GBASE_AOC)
+#define AVF_CAP_PHY_TYPE_100BASE_TX BIT_ULL(AVF_PHY_TYPE_100BASE_TX)
+#define AVF_CAP_PHY_TYPE_1000BASE_T BIT_ULL(AVF_PHY_TYPE_1000BASE_T)
+#define AVF_CAP_PHY_TYPE_10GBASE_T BIT_ULL(AVF_PHY_TYPE_10GBASE_T)
+#define AVF_CAP_PHY_TYPE_10GBASE_SR BIT_ULL(AVF_PHY_TYPE_10GBASE_SR)
+#define AVF_CAP_PHY_TYPE_10GBASE_LR BIT_ULL(AVF_PHY_TYPE_10GBASE_LR)
+#define AVF_CAP_PHY_TYPE_10GBASE_SFPP_CU BIT_ULL(AVF_PHY_TYPE_10GBASE_SFPP_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_CR1 BIT_ULL(AVF_PHY_TYPE_10GBASE_CR1)
+#define AVF_CAP_PHY_TYPE_40GBASE_CR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_CR4)
+#define AVF_CAP_PHY_TYPE_40GBASE_SR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_SR4)
+#define AVF_CAP_PHY_TYPE_40GBASE_LR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_LR4)
+#define AVF_CAP_PHY_TYPE_1000BASE_SX BIT_ULL(AVF_PHY_TYPE_1000BASE_SX)
+#define AVF_CAP_PHY_TYPE_1000BASE_LX BIT_ULL(AVF_PHY_TYPE_1000BASE_LX)
+#define AVF_CAP_PHY_TYPE_1000BASE_T_OPTICAL \
+				BIT_ULL(AVF_PHY_TYPE_1000BASE_T_OPTICAL)
+#define AVF_CAP_PHY_TYPE_20GBASE_KR2 BIT_ULL(AVF_PHY_TYPE_20GBASE_KR2)
+/*
+ * Defining the macro AVF_PHY_TYPE_OFFSET to implement a bit shift for some
+ * PHY types. There is an unused bit (31) in the AVF_CAP_PHY_TYPE_* bit
+ * fields but no corresponding gap in the avf_aq_phy_type enumeration. So,
+ * a shift is needed to adjust for this with values larger than 31. The
+ * only affected values are AVF_PHY_TYPE_25GBASE_*.
+ */
+#define AVF_PHY_TYPE_OFFSET 1
+#define AVF_CAP_PHY_TYPE_25GBASE_KR BIT_ULL(AVF_PHY_TYPE_25GBASE_KR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_CR BIT_ULL(AVF_PHY_TYPE_25GBASE_CR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_SR BIT_ULL(AVF_PHY_TYPE_25GBASE_SR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_LR BIT_ULL(AVF_PHY_TYPE_25GBASE_LR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_AOC BIT_ULL(AVF_PHY_TYPE_25GBASE_AOC + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_ACC BIT_ULL(AVF_PHY_TYPE_25GBASE_ACC + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_HW_CAP_MAX_GPIO			30
+#define AVF_HW_CAP_MDIO_PORT_MODE_MDIO		0
+#define AVF_HW_CAP_MDIO_PORT_MODE_I2C		1
+
+enum avf_acpi_programming_method {
+	AVF_ACPI_PROGRAMMING_METHOD_HW_FVL = 0,
+	AVF_ACPI_PROGRAMMING_METHOD_AQC_FPK = 1
+};
+
+#define AVF_WOL_SUPPORT_MASK			0x1
+#define AVF_ACPI_PROGRAMMING_METHOD_MASK	0x2
+#define AVF_PROXY_SUPPORT_MASK			0x4
+
+/* Capabilities of a PF or a VF or the whole device */
+struct avf_hw_capabilities {
+	u32  switch_mode;
+#define AVF_NVM_IMAGE_TYPE_EVB		0x0
+#define AVF_NVM_IMAGE_TYPE_CLOUD	0x2
+#define AVF_NVM_IMAGE_TYPE_UDP_CLOUD	0x3
+
+	u32  management_mode;
+	u32  mng_protocols_over_mctp;
+#define AVF_MNG_PROTOCOL_PLDM		0x2
+#define AVF_MNG_PROTOCOL_OEM_COMMANDS	0x4
+#define AVF_MNG_PROTOCOL_NCSI		0x8
+	u32  npar_enable;
+	u32  os2bmc;
+	u32  valid_functions;
+	bool sr_iov_1_1;
+	bool vmdq;
+	bool evb_802_1_qbg; /* Edge Virtual Bridging */
+	bool evb_802_1_qbh; /* Bridge Port Extension */
+	bool dcb;
+	bool fcoe;
+	bool iscsi; /* Indicates iSCSI enabled */
+	bool flex10_enable;
+	bool flex10_capable;
+	u32  flex10_mode;
+#define AVF_FLEX10_MODE_UNKNOWN	0x0
+#define AVF_FLEX10_MODE_DCC		0x1
+#define AVF_FLEX10_MODE_DCI		0x2
+
+	u32 flex10_status;
+#define AVF_FLEX10_STATUS_DCC_ERROR	0x1
+#define AVF_FLEX10_STATUS_VC_MODE	0x2
+
+	bool sec_rev_disabled;
+	bool update_disabled;
+#define AVF_NVM_MGMT_SEC_REV_DISABLED	0x1
+#define AVF_NVM_MGMT_UPDATE_DISABLED	0x2
+
+	bool mgmt_cem;
+	bool ieee_1588;
+	bool iwarp;
+	bool fd;
+	u32 fd_filters_guaranteed;
+	u32 fd_filters_best_effort;
+	bool rss;
+	u32 rss_table_size;
+	u32 rss_table_entry_width;
+	bool led[AVF_HW_CAP_MAX_GPIO];
+	bool sdp[AVF_HW_CAP_MAX_GPIO];
+	u32 nvm_image_type;
+	u32 num_flow_director_filters;
+	u32 num_vfs;
+	u32 vf_base_id;
+	u32 num_vsis;
+	u32 num_rx_qp;
+	u32 num_tx_qp;
+	u32 base_queue;
+	u32 num_msix_vectors;
+	u32 num_msix_vectors_vf;
+	u32 led_pin_num;
+	u32 sdp_pin_num;
+	u32 mdio_port_num;
+	u32 mdio_port_mode;
+	u8 rx_buf_chain_len;
+	u32 enabled_tcmap;
+	u32 maxtc;
+	u64 wr_csr_prot;
+	bool apm_wol_support;
+	enum avf_acpi_programming_method acpi_prog_method;
+	bool proxy_support;
+};
+
+struct avf_mac_info {
+	enum avf_mac_type type;
+	u8 addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+	u8 san_addr[ETH_ALEN];
+	u8 port_addr[ETH_ALEN];
+	u16 max_fcoeq;
+};
+
+enum avf_aq_resources_ids {
+	AVF_NVM_RESOURCE_ID = 1
+};
+
+enum avf_aq_resource_access_type {
+	AVF_RESOURCE_READ = 1,
+	AVF_RESOURCE_WRITE
+};
+
+struct avf_nvm_info {
+	u64 hw_semaphore_timeout; /* usec global time (GTIME resolution) */
+	u32 timeout;              /* [ms] */
+	u16 sr_size;              /* Shadow RAM size in words */
+	bool blank_nvm_mode;      /* is NVM empty (no FW present)*/
+	u16 version;              /* NVM package version */
+	u32 eetrack;              /* NVM data version */
+	u32 oem_ver;              /* OEM version info */
+};
+
+/* definitions used in NVM update support */
+
+enum avf_nvmupd_cmd {
+	AVF_NVMUPD_INVALID,
+	AVF_NVMUPD_READ_CON,
+	AVF_NVMUPD_READ_SNT,
+	AVF_NVMUPD_READ_LCB,
+	AVF_NVMUPD_READ_SA,
+	AVF_NVMUPD_WRITE_ERA,
+	AVF_NVMUPD_WRITE_CON,
+	AVF_NVMUPD_WRITE_SNT,
+	AVF_NVMUPD_WRITE_LCB,
+	AVF_NVMUPD_WRITE_SA,
+	AVF_NVMUPD_CSUM_CON,
+	AVF_NVMUPD_CSUM_SA,
+	AVF_NVMUPD_CSUM_LCB,
+	AVF_NVMUPD_STATUS,
+	AVF_NVMUPD_EXEC_AQ,
+	AVF_NVMUPD_GET_AQ_RESULT,
+	AVF_NVMUPD_GET_AQ_EVENT,
+};
+
+enum avf_nvmupd_state {
+	AVF_NVMUPD_STATE_INIT,
+	AVF_NVMUPD_STATE_READING,
+	AVF_NVMUPD_STATE_WRITING,
+	AVF_NVMUPD_STATE_INIT_WAIT,
+	AVF_NVMUPD_STATE_WRITE_WAIT,
+	AVF_NVMUPD_STATE_ERROR
+};
+
+/* nvm_access definition and its masks/shifts need to be accessible to
+ * application, core driver, and shared code.  Where is the right file?
+ */
+#define AVF_NVM_READ	0xB
+#define AVF_NVM_WRITE	0xC
+
+#define AVF_NVM_MOD_PNT_MASK 0xFF
+
+#define AVF_NVM_TRANS_SHIFT			8
+#define AVF_NVM_TRANS_MASK			(0xf << AVF_NVM_TRANS_SHIFT)
+#define AVF_NVM_PRESERVATION_FLAGS_SHIFT	12
+#define AVF_NVM_PRESERVATION_FLAGS_MASK \
+				(0x3 << AVF_NVM_PRESERVATION_FLAGS_SHIFT)
+#define AVF_NVM_PRESERVATION_FLAGS_SELECTED	0x01
+#define AVF_NVM_PRESERVATION_FLAGS_ALL		0x02
+#define AVF_NVM_CON				0x0
+#define AVF_NVM_SNT				0x1
+#define AVF_NVM_LCB				0x2
+#define AVF_NVM_SA				(AVF_NVM_SNT | AVF_NVM_LCB)
+#define AVF_NVM_ERA				0x4
+#define AVF_NVM_CSUM				0x8
+#define AVF_NVM_AQE				0xe
+#define AVF_NVM_EXEC				0xf
+
+#define AVF_NVM_ADAPT_SHIFT	16
+#define AVF_NVM_ADAPT_MASK	(0xffffULL << AVF_NVM_ADAPT_SHIFT)
+
+#define AVF_NVMUPD_MAX_DATA	4096
+#define AVF_NVMUPD_IFACE_TIMEOUT 2 /* seconds */
+
+struct avf_nvm_access {
+	u32 command;
+	u32 config;
+	u32 offset;	/* in bytes */
+	u32 data_size;	/* in bytes */
+	u8 data[1];
+};
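+
+/* Illustrative sketch only: one way a caller might decode the 'config'
+ * word of struct avf_nvm_access with the masks above. The helper names
+ * are examples for documentation purposes, not part of the interface
+ * itself.
+ */
+STATIC INLINE u8 avf_nvm_access_get_module(u32 config)
+{
+	return (u8)(config & AVF_NVM_MOD_PNT_MASK);
+}
+
+STATIC INLINE u8 avf_nvm_access_get_transaction(u32 config)
+{
+	return (u8)((config & AVF_NVM_TRANS_MASK) >> AVF_NVM_TRANS_SHIFT);
+}
+
+STATIC INLINE u8 avf_nvm_access_get_preservation_flags(u32 config)
+{
+	return (u8)((config & AVF_NVM_PRESERVATION_FLAGS_MASK) >>
+		    AVF_NVM_PRESERVATION_FLAGS_SHIFT);
+}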
+
+/* (Q)SFP module access definitions */
+#define AVF_I2C_EEPROM_DEV_ADDR	0xA0
+#define AVF_I2C_EEPROM_DEV_ADDR2	0xA2
+#define AVF_MODULE_TYPE_ADDR		0x00
+#define AVF_MODULE_REVISION_ADDR	0x01
+#define AVF_MODULE_SFF_8472_COMP	0x5E
+#define AVF_MODULE_SFF_8472_SWAP	0x5C
+#define AVF_MODULE_SFF_ADDR_MODE	0x04
+#define AVF_MODULE_SFF_DIAG_CAPAB	0x40
+#define AVF_MODULE_TYPE_QSFP_PLUS	0x0D
+#define AVF_MODULE_TYPE_QSFP28		0x11
+#define AVF_MODULE_QSFP_MAX_LEN	640
+
+/* PCI bus types */
+enum avf_bus_type {
+	avf_bus_type_unknown = 0,
+	avf_bus_type_pci,
+	avf_bus_type_pcix,
+	avf_bus_type_pci_express,
+	avf_bus_type_reserved
+};
+
+/* PCI bus speeds */
+enum avf_bus_speed {
+	avf_bus_speed_unknown	= 0,
+	avf_bus_speed_33	= 33,
+	avf_bus_speed_66	= 66,
+	avf_bus_speed_100	= 100,
+	avf_bus_speed_120	= 120,
+	avf_bus_speed_133	= 133,
+	avf_bus_speed_2500	= 2500,
+	avf_bus_speed_5000	= 5000,
+	avf_bus_speed_8000	= 8000,
+	avf_bus_speed_reserved
+};
+
+/* PCI bus widths */
+enum avf_bus_width {
+	avf_bus_width_unknown	= 0,
+	avf_bus_width_pcie_x1	= 1,
+	avf_bus_width_pcie_x2	= 2,
+	avf_bus_width_pcie_x4	= 4,
+	avf_bus_width_pcie_x8	= 8,
+	avf_bus_width_32	= 32,
+	avf_bus_width_64	= 64,
+	avf_bus_width_reserved
+};
+
+/* Bus parameters */
+struct avf_bus_info {
+	enum avf_bus_speed speed;
+	enum avf_bus_width width;
+	enum avf_bus_type type;
+
+	u16 func;
+	u16 device;
+	u16 lan_id;
+	u16 bus_id;
+};
+
+/* Flow control (FC) parameters */
+struct avf_fc_info {
+	enum avf_fc_mode current_mode; /* FC mode in effect */
+	enum avf_fc_mode requested_mode; /* FC mode requested by caller */
+};
+
+#define AVF_MAX_TRAFFIC_CLASS		8
+#define AVF_MAX_USER_PRIORITY		8
+#define AVF_DCBX_MAX_APPS		32
+#define AVF_LLDPDU_SIZE		1500
+#define AVF_TLV_STATUS_OPER		0x1
+#define AVF_TLV_STATUS_SYNC		0x2
+#define AVF_TLV_STATUS_ERR		0x4
+#define AVF_CEE_OPER_MAX_APPS		3
+#define AVF_APP_PROTOID_FCOE		0x8906
+#define AVF_APP_PROTOID_ISCSI		0x0cbc
+#define AVF_APP_PROTOID_FIP		0x8914
+#define AVF_APP_SEL_ETHTYPE		0x1
+#define AVF_APP_SEL_TCPIP		0x2
+#define AVF_CEE_APP_SEL_ETHTYPE	0x0
+#define AVF_CEE_APP_SEL_TCPIP		0x1
+
+/* CEE or IEEE 802.1Qaz ETS Configuration data */
+struct avf_dcb_ets_config {
+	u8 willing;
+	u8 cbs;
+	u8 maxtcs;
+	u8 prioritytable[AVF_MAX_TRAFFIC_CLASS];
+	u8 tcbwtable[AVF_MAX_TRAFFIC_CLASS];
+	u8 tsatable[AVF_MAX_TRAFFIC_CLASS];
+};
+
+/* CEE or IEEE 802.1Qaz PFC Configuration data */
+struct avf_dcb_pfc_config {
+	u8 willing;
+	u8 mbc;
+	u8 pfccap;
+	u8 pfcenable;
+};
+
+/* CEE or IEEE 802.1Qaz Application Priority data */
+struct avf_dcb_app_priority_table {
+	u8  priority;
+	u8  selector;
+	u16 protocolid;
+};
+
+struct avf_dcbx_config {
+	u8  dcbx_mode;
+#define AVF_DCBX_MODE_CEE	0x1
+#define AVF_DCBX_MODE_IEEE	0x2
+	u8  app_mode;
+#define AVF_DCBX_APPS_NON_WILLING	0x1
+	u32 numapps;
+	u32 tlv_status; /* CEE mode TLV status */
+	struct avf_dcb_ets_config etscfg;
+	struct avf_dcb_ets_config etsrec;
+	struct avf_dcb_pfc_config pfc;
+	struct avf_dcb_app_priority_table app[AVF_DCBX_MAX_APPS];
+};
+
+/* Port hardware description */
+struct avf_hw {
+	u8 *hw_addr;
+	void *back;
+
+	/* subsystem structs */
+	struct avf_phy_info phy;
+	struct avf_mac_info mac;
+	struct avf_bus_info bus;
+	struct avf_nvm_info nvm;
+	struct avf_fc_info fc;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+	u8 port;
+	bool adapter_stopped;
+
+	/* capabilities for entire device and PCI func */
+	struct avf_hw_capabilities dev_caps;
+	struct avf_hw_capabilities func_caps;
+
+	/* Flow Director shared filter space */
+	u16 fdir_shared_filter_count;
+
+	/* device profile info */
+	u8  pf_id;
+	u16 main_vsi_seid;
+
+	/* for multi-function MACs */
+	u16 partition_id;
+	u16 num_partitions;
+	u16 num_ports;
+
+	/* Closest numa node to the device */
+	u16 numa_node;
+
+	/* Admin Queue info */
+	struct avf_adminq_info aq;
+
+	/* state of nvm update process */
+	enum avf_nvmupd_state nvmupd_state;
+	struct avf_aq_desc nvm_wb_desc;
+	struct avf_aq_desc nvm_aq_event_desc;
+	struct avf_virt_mem nvm_buff;
+	bool nvm_release_on_done;
+	u16 nvm_wait_opcode;
+
+	/* HMC info */
+	struct avf_hmc_info hmc; /* HMC info struct */
+
+	/* LLDP/DCBX Status */
+	u16 dcbx_status;
+
+	/* DCBX info */
+	struct avf_dcbx_config local_dcbx_config; /* Oper/Local Cfg */
+	struct avf_dcbx_config remote_dcbx_config; /* Peer Cfg */
+	struct avf_dcbx_config desired_dcbx_config; /* CEE Desired Cfg */
+
+	/* WoL and proxy support */
+	u16 num_wol_proxy_filters;
+	u16 wol_proxy_vsi_seid;
+
+#define AVF_HW_FLAG_AQ_SRCTL_ACCESS_ENABLE BIT_ULL(0)
+#define AVF_HW_FLAG_802_1AD_CAPABLE        BIT_ULL(1)
+#define AVF_HW_FLAG_AQ_PHY_ACCESS_CAPABLE  BIT_ULL(2)
+#define AVF_HW_FLAG_NVM_READ_REQUIRES_LOCK BIT_ULL(3)
+	u64 flags;
+
+	/* Used in set switch config AQ command */
+	u16 switch_tag;
+	u16 first_tag;
+	u16 second_tag;
+
+	/* debug mask */
+	u32 debug_mask;
+	char err_str[16];
+};
+
+STATIC INLINE bool avf_is_vf(struct avf_hw *hw)
+{
+	return (hw->mac.type == AVF_MAC_VF ||
+		hw->mac.type == AVF_MAC_X722_VF);
+}
+
+struct avf_driver_version {
+	u8 major_version;
+	u8 minor_version;
+	u8 build_version;
+	u8 subbuild_version;
+	u8 driver_string[32];
+};
+
+/* RX Descriptors */
+union avf_16byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+	} read;
+	struct {
+		struct {
+			struct {
+				union {
+					__le16 mirroring_status;
+					__le16 fcoe_ctx_id;
+				} mirr_fcoe;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow director filter id */
+				__le32 fcoe_param; /* FCoE DDP Context id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* ext status/error/pktype/length */
+			__le64 status_error_len;
+		} qword1;
+	} wb;  /* writeback */
+};
+
+union avf_32byte_rx_desc {
+	struct {
+		__le64  pkt_addr; /* Packet buffer address */
+		__le64  hdr_addr; /* Header buffer address */
+			/* bit 0 of hdr_buffer_addr is DD bit */
+		__le64  rsvd1;
+		__le64  rsvd2;
+	} read;
+	struct {
+		struct {
+			struct {
+				union {
+					__le16 mirroring_status;
+					__le16 fcoe_ctx_id;
+				} mirr_fcoe;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fcoe_param; /* FCoE DDP Context id */
+				/* Flow director filter id in case of
+				 * Programming status desc WB
+				 */
+				__le32 fd_id;
+			} hi_dword;
+		} qword0;
+		struct {
+			/* status/error/pktype/length */
+			__le64 status_error_len;
+		} qword1;
+		struct {
+			__le16 ext_status; /* extended status */
+			__le16 rsvd;
+			__le16 l2tag2_1;
+			__le16 l2tag2_2;
+		} qword2;
+		struct {
+			union {
+				__le32 flex_bytes_lo;
+				__le32 pe_status;
+			} lo_dword;
+			union {
+				__le32 flex_bytes_hi;
+				__le32 fd_id;
+			} hi_dword;
+		} qword3;
+	} wb;  /* writeback */
+};
+
+#define AVF_RXD_QW0_MIRROR_STATUS_SHIFT	8
+#define AVF_RXD_QW0_MIRROR_STATUS_MASK	(0x3FUL << \
+					 AVF_RXD_QW0_MIRROR_STATUS_SHIFT)
+#define AVF_RXD_QW0_FCOEINDX_SHIFT	0
+#define AVF_RXD_QW0_FCOEINDX_MASK	(0xFFFUL << \
+					 AVF_RXD_QW0_FCOEINDX_SHIFT)
+
+enum avf_rx_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_STATUS_DD_SHIFT		= 0,
+	AVF_RX_DESC_STATUS_EOF_SHIFT		= 1,
+	AVF_RX_DESC_STATUS_L2TAG1P_SHIFT	= 2,
+	AVF_RX_DESC_STATUS_L3L4P_SHIFT		= 3,
+	AVF_RX_DESC_STATUS_CRCP_SHIFT		= 4,
+	AVF_RX_DESC_STATUS_TSYNINDX_SHIFT	= 5, /* 2 BITS */
+	AVF_RX_DESC_STATUS_TSYNVALID_SHIFT	= 7,
+	AVF_RX_DESC_STATUS_EXT_UDP_0_SHIFT	= 8,
+
+	AVF_RX_DESC_STATUS_UMBCAST_SHIFT	= 9, /* 2 BITS */
+	AVF_RX_DESC_STATUS_FLM_SHIFT		= 11,
+	AVF_RX_DESC_STATUS_FLTSTAT_SHIFT	= 12, /* 2 BITS */
+	AVF_RX_DESC_STATUS_LPBK_SHIFT		= 14,
+	AVF_RX_DESC_STATUS_IPV6EXADD_SHIFT	= 15,
+	AVF_RX_DESC_STATUS_RESERVED2_SHIFT	= 16, /* 2 BITS */
+	AVF_RX_DESC_STATUS_INT_UDP_0_SHIFT	= 18,
+	AVF_RX_DESC_STATUS_LAST /* this entry must be last!!! */
+};
+
+#define AVF_RXD_QW1_STATUS_SHIFT	0
+#define AVF_RXD_QW1_STATUS_MASK	((BIT(AVF_RX_DESC_STATUS_LAST) - 1) << \
+					 AVF_RXD_QW1_STATUS_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_TSYNINDX_SHIFT   AVF_RX_DESC_STATUS_TSYNINDX_SHIFT
+#define AVF_RXD_QW1_STATUS_TSYNINDX_MASK	(0x3UL << \
+					     AVF_RXD_QW1_STATUS_TSYNINDX_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_TSYNVALID_SHIFT  AVF_RX_DESC_STATUS_TSYNVALID_SHIFT
+#define AVF_RXD_QW1_STATUS_TSYNVALID_MASK   BIT_ULL(AVF_RXD_QW1_STATUS_TSYNVALID_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_UMBCAST_SHIFT	AVF_RX_DESC_STATUS_UMBCAST_SHIFT
+#define AVF_RXD_QW1_STATUS_UMBCAST_MASK	(0x3UL << \
+					 AVF_RXD_QW1_STATUS_UMBCAST_SHIFT)
+
+enum avf_rx_desc_fltstat_values {
+	AVF_RX_DESC_FLTSTAT_NO_DATA	= 0,
+	AVF_RX_DESC_FLTSTAT_RSV_FD_ID	= 1, /* 16byte desc? FD_ID : RSV */
+	AVF_RX_DESC_FLTSTAT_RSV	= 2,
+	AVF_RX_DESC_FLTSTAT_RSS_HASH	= 3,
+};
+
+#define AVF_RXD_PACKET_TYPE_UNICAST	0
+#define AVF_RXD_PACKET_TYPE_MULTICAST	1
+#define AVF_RXD_PACKET_TYPE_BROADCAST	2
+#define AVF_RXD_PACKET_TYPE_MIRRORED	3
+
+#define AVF_RXD_QW1_ERROR_SHIFT	19
+#define AVF_RXD_QW1_ERROR_MASK		(0xFFUL << AVF_RXD_QW1_ERROR_SHIFT)
+
+enum avf_rx_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_ERROR_RXE_SHIFT		= 0,
+	AVF_RX_DESC_ERROR_RECIPE_SHIFT		= 1,
+	AVF_RX_DESC_ERROR_HBO_SHIFT		= 2,
+	AVF_RX_DESC_ERROR_L3L4E_SHIFT		= 3, /* 3 BITS */
+	AVF_RX_DESC_ERROR_IPE_SHIFT		= 3,
+	AVF_RX_DESC_ERROR_L4E_SHIFT		= 4,
+	AVF_RX_DESC_ERROR_EIPE_SHIFT		= 5,
+	AVF_RX_DESC_ERROR_OVERSIZE_SHIFT	= 6,
+	AVF_RX_DESC_ERROR_PPRS_SHIFT		= 7
+};
+
+enum avf_rx_desc_error_l3l4e_fcoe_masks {
+	AVF_RX_DESC_ERROR_L3L4E_NONE		= 0,
+	AVF_RX_DESC_ERROR_L3L4E_PROT		= 1,
+	AVF_RX_DESC_ERROR_L3L4E_FC		= 2,
+	AVF_RX_DESC_ERROR_L3L4E_DMAC_ERR	= 3,
+	AVF_RX_DESC_ERROR_L3L4E_DMAC_WARN	= 4
+};
+
+#define AVF_RXD_QW1_PTYPE_SHIFT	30
+#define AVF_RXD_QW1_PTYPE_MASK		(0xFFULL << AVF_RXD_QW1_PTYPE_SHIFT)
+
+/* Packet type non-ip values */
+enum avf_rx_l2_ptype {
+	AVF_RX_PTYPE_L2_RESERVED			= 0,
+	AVF_RX_PTYPE_L2_MAC_PAY2			= 1,
+	AVF_RX_PTYPE_L2_TIMESYNC_PAY2			= 2,
+	AVF_RX_PTYPE_L2_FIP_PAY2			= 3,
+	AVF_RX_PTYPE_L2_OUI_PAY2			= 4,
+	AVF_RX_PTYPE_L2_MACCNTRL_PAY2			= 5,
+	AVF_RX_PTYPE_L2_LLDP_PAY2			= 6,
+	AVF_RX_PTYPE_L2_ECP_PAY2			= 7,
+	AVF_RX_PTYPE_L2_EVB_PAY2			= 8,
+	AVF_RX_PTYPE_L2_QCN_PAY2			= 9,
+	AVF_RX_PTYPE_L2_EAPOL_PAY2			= 10,
+	AVF_RX_PTYPE_L2_ARP				= 11,
+	AVF_RX_PTYPE_L2_FCOE_PAY3			= 12,
+	AVF_RX_PTYPE_L2_FCOE_FCDATA_PAY3		= 13,
+	AVF_RX_PTYPE_L2_FCOE_FCRDY_PAY3		= 14,
+	AVF_RX_PTYPE_L2_FCOE_FCRSP_PAY3		= 15,
+	AVF_RX_PTYPE_L2_FCOE_FCOTHER_PA		= 16,
+	AVF_RX_PTYPE_L2_FCOE_VFT_PAY3			= 17,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCDATA		= 18,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCRDY			= 19,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCRSP			= 20,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCOTHER		= 21,
+	AVF_RX_PTYPE_GRENAT4_MAC_PAY3			= 58,
+	AVF_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4	= 87,
+	AVF_RX_PTYPE_GRENAT6_MAC_PAY3			= 124,
+	AVF_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4	= 153
+};
+
+struct avf_rx_ptype_decoded {
+	u32 ptype:8;
+	u32 known:1;
+	u32 outer_ip:1;
+	u32 outer_ip_ver:1;
+	u32 outer_frag:1;
+	u32 tunnel_type:3;
+	u32 tunnel_end_prot:2;
+	u32 tunnel_end_frag:1;
+	u32 inner_prot:4;
+	u32 payload_layer:3;
+};
+
+enum avf_rx_ptype_outer_ip {
+	AVF_RX_PTYPE_OUTER_L2	= 0,
+	AVF_RX_PTYPE_OUTER_IP	= 1
+};
+
+enum avf_rx_ptype_outer_ip_ver {
+	AVF_RX_PTYPE_OUTER_NONE	= 0,
+	AVF_RX_PTYPE_OUTER_IPV4	= 0,
+	AVF_RX_PTYPE_OUTER_IPV6	= 1
+};
+
+enum avf_rx_ptype_outer_fragmented {
+	AVF_RX_PTYPE_NOT_FRAG	= 0,
+	AVF_RX_PTYPE_FRAG	= 1
+};
+
+enum avf_rx_ptype_tunnel_type {
+	AVF_RX_PTYPE_TUNNEL_NONE		= 0,
+	AVF_RX_PTYPE_TUNNEL_IP_IP		= 1,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT		= 2,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC	= 3,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN	= 4,
+};
+
+enum avf_rx_ptype_tunnel_end_prot {
+	AVF_RX_PTYPE_TUNNEL_END_NONE	= 0,
+	AVF_RX_PTYPE_TUNNEL_END_IPV4	= 1,
+	AVF_RX_PTYPE_TUNNEL_END_IPV6	= 2,
+};
+
+enum avf_rx_ptype_inner_prot {
+	AVF_RX_PTYPE_INNER_PROT_NONE		= 0,
+	AVF_RX_PTYPE_INNER_PROT_UDP		= 1,
+	AVF_RX_PTYPE_INNER_PROT_TCP		= 2,
+	AVF_RX_PTYPE_INNER_PROT_SCTP		= 3,
+	AVF_RX_PTYPE_INNER_PROT_ICMP		= 4,
+	AVF_RX_PTYPE_INNER_PROT_TIMESYNC	= 5
+};
+
+enum avf_rx_ptype_payload_layer {
+	AVF_RX_PTYPE_PAYLOAD_LAYER_NONE	= 0,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY2	= 1,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY3	= 2,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY4	= 3,
+};
+
+#define AVF_RX_PTYPE_BIT_MASK		0x0FFFFFFF
+#define AVF_RX_PTYPE_SHIFT		56
+
+#define AVF_RXD_QW1_LENGTH_PBUF_SHIFT	38
+#define AVF_RXD_QW1_LENGTH_PBUF_MASK	(0x3FFFULL << \
+					 AVF_RXD_QW1_LENGTH_PBUF_SHIFT)
+
+#define AVF_RXD_QW1_LENGTH_HBUF_SHIFT	52
+#define AVF_RXD_QW1_LENGTH_HBUF_MASK	(0x7FFULL << \
+					 AVF_RXD_QW1_LENGTH_HBUF_SHIFT)
+
+#define AVF_RXD_QW1_LENGTH_SPH_SHIFT	63
+#define AVF_RXD_QW1_LENGTH_SPH_MASK	BIT_ULL(AVF_RXD_QW1_LENGTH_SPH_SHIFT)
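+
+/* Illustrative sketch only: pulling the DD bit and the packet buffer
+ * length out of a completed Rx descriptor's qword1 (passed here as a
+ * CPU-order value) using the definitions above. The helper names are
+ * examples, not part of the hardware interface.
+ */
+STATIC INLINE bool avf_rx_desc_dd_is_set(u64 qword1)
+{
+	return !!(qword1 & BIT_ULL(AVF_RX_DESC_STATUS_DD_SHIFT));
+}
+
+STATIC INLINE u16 avf_rx_desc_pkt_len(u64 qword1)
+{
+	return (u16)((qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+		     AVF_RXD_QW1_LENGTH_PBUF_SHIFT);
+}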
+
+#define AVF_RXD_QW1_NEXTP_SHIFT	38
+#define AVF_RXD_QW1_NEXTP_MASK		(0x1FFFULL << AVF_RXD_QW1_NEXTP_SHIFT)
+
+#define AVF_RXD_QW2_EXT_STATUS_SHIFT	0
+#define AVF_RXD_QW2_EXT_STATUS_MASK	(0xFFFFFUL << \
+					 AVF_RXD_QW2_EXT_STATUS_SHIFT)
+
+enum avf_rx_desc_ext_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT	= 0,
+	AVF_RX_DESC_EXT_STATUS_L2TAG3P_SHIFT	= 1,
+	AVF_RX_DESC_EXT_STATUS_FLEXBL_SHIFT	= 2, /* 2 BITS */
+	AVF_RX_DESC_EXT_STATUS_FLEXBH_SHIFT	= 4, /* 2 BITS */
+	AVF_RX_DESC_EXT_STATUS_FDLONGB_SHIFT	= 9,
+	AVF_RX_DESC_EXT_STATUS_FCOELONGB_SHIFT	= 10,
+	AVF_RX_DESC_EXT_STATUS_PELONGB_SHIFT	= 11,
+};
+
+#define AVF_RXD_QW2_L2TAG2_SHIFT	0
+#define AVF_RXD_QW2_L2TAG2_MASK	(0xFFFFUL << AVF_RXD_QW2_L2TAG2_SHIFT)
+
+#define AVF_RXD_QW2_L2TAG3_SHIFT	16
+#define AVF_RXD_QW2_L2TAG3_MASK	(0xFFFFUL << AVF_RXD_QW2_L2TAG3_SHIFT)
+
+enum avf_rx_desc_pe_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_PE_STATUS_QPID_SHIFT	= 0, /* 18 BITS */
+	AVF_RX_DESC_PE_STATUS_L4PORT_SHIFT	= 0, /* 16 BITS */
+	AVF_RX_DESC_PE_STATUS_IPINDEX_SHIFT	= 16, /* 8 BITS */
+	AVF_RX_DESC_PE_STATUS_QPIDHIT_SHIFT	= 24,
+	AVF_RX_DESC_PE_STATUS_APBVTHIT_SHIFT	= 25,
+	AVF_RX_DESC_PE_STATUS_PORTV_SHIFT	= 26,
+	AVF_RX_DESC_PE_STATUS_URG_SHIFT	= 27,
+	AVF_RX_DESC_PE_STATUS_IPFRAG_SHIFT	= 28,
+	AVF_RX_DESC_PE_STATUS_IPOPT_SHIFT	= 29
+};
+
+#define AVF_RX_PROG_STATUS_DESC_LENGTH_SHIFT		38
+#define AVF_RX_PROG_STATUS_DESC_LENGTH			0x2000000
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT	2
+#define AVF_RX_PROG_STATUS_DESC_QW1_PROGID_MASK	(0x7UL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT)
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT	0
+#define AVF_RX_PROG_STATUS_DESC_QW1_STATUS_MASK	(0x7FFFUL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT)
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT	19
+#define AVF_RX_PROG_STATUS_DESC_QW1_ERROR_MASK		(0x3FUL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT)
+
+enum avf_rx_prog_status_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_PROG_STATUS_DESC_DD_SHIFT	= 0,
+	AVF_RX_PROG_STATUS_DESC_PROG_ID_SHIFT	= 2 /* 3 BITS */
+};
+
+enum avf_rx_prog_status_desc_prog_id_masks {
+	AVF_RX_PROG_STATUS_DESC_FD_FILTER_STATUS	= 1,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CTXT_PROG_STATUS	= 2,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CTXT_INVL_STATUS	= 4,
+};
+
+enum avf_rx_prog_status_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_PROG_STATUS_DESC_FD_TBL_FULL_SHIFT	= 0,
+	AVF_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT	= 1,
+	AVF_RX_PROG_STATUS_DESC_FCOE_TBL_FULL_SHIFT	= 2,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CONFLICT_SHIFT	= 3
+};
+
+#define AVF_TWO_BIT_MASK	0x3
+#define AVF_THREE_BIT_MASK	0x7
+#define AVF_FOUR_BIT_MASK	0xF
+#define AVF_EIGHTEEN_BIT_MASK	0x3FFFF
+
+/* TX Descriptor */
+struct avf_tx_desc {
+	__le64 buffer_addr; /* Address of descriptor's data buf */
+	__le64 cmd_type_offset_bsz;
+};
+
+#define AVF_TXD_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_QW1_DTYPE_MASK		(0xFUL << AVF_TXD_QW1_DTYPE_SHIFT)
+
+enum avf_tx_desc_dtype_value {
+	AVF_TX_DESC_DTYPE_DATA		= 0x0,
+	AVF_TX_DESC_DTYPE_NOP		= 0x1, /* same as Context desc */
+	AVF_TX_DESC_DTYPE_CONTEXT	= 0x1,
+	AVF_TX_DESC_DTYPE_FCOE_CTX	= 0x2,
+	AVF_TX_DESC_DTYPE_FILTER_PROG	= 0x8,
+	AVF_TX_DESC_DTYPE_DDP_CTX	= 0x9,
+	AVF_TX_DESC_DTYPE_FLEX_DATA	= 0xB,
+	AVF_TX_DESC_DTYPE_FLEX_CTX_1	= 0xC,
+	AVF_TX_DESC_DTYPE_FLEX_CTX_2	= 0xD,
+	AVF_TX_DESC_DTYPE_DESC_DONE	= 0xF
+};
+
+#define AVF_TXD_QW1_CMD_SHIFT	4
+#define AVF_TXD_QW1_CMD_MASK	(0x3FFUL << AVF_TXD_QW1_CMD_SHIFT)
+
+enum avf_tx_desc_cmd_bits {
+	AVF_TX_DESC_CMD_EOP			= 0x0001,
+	AVF_TX_DESC_CMD_RS			= 0x0002,
+	AVF_TX_DESC_CMD_ICRC			= 0x0004,
+	AVF_TX_DESC_CMD_IL2TAG1		= 0x0008,
+	AVF_TX_DESC_CMD_DUMMY			= 0x0010,
+	AVF_TX_DESC_CMD_IIPT_NONIP		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV6		= 0x0020, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV4		= 0x0040, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV4_CSUM		= 0x0060, /* 2 BITS */
+	AVF_TX_DESC_CMD_FCOET			= 0x0080,
+	AVF_TX_DESC_CMD_L4T_EOFT_UNK		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_TCP		= 0x0100, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_SCTP		= 0x0200, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_UDP		= 0x0300, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_N		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_T		= 0x0100, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_NI	= 0x0200, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_A		= 0x0300, /* 2 BITS */
+};
+
+#define AVF_TXD_QW1_OFFSET_SHIFT	16
+#define AVF_TXD_QW1_OFFSET_MASK	(0x3FFFFULL << \
+					 AVF_TXD_QW1_OFFSET_SHIFT)
+
+enum avf_tx_desc_length_fields {
+	/* Note: These are predefined bit offsets */
+	AVF_TX_DESC_LENGTH_MACLEN_SHIFT	= 0, /* 7 BITS */
+	AVF_TX_DESC_LENGTH_IPLEN_SHIFT		= 7, /* 7 BITS */
+	AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT	= 14 /* 4 BITS */
+};
+
+#define AVF_TXD_QW1_MACLEN_MASK (0x7FUL << AVF_TX_DESC_LENGTH_MACLEN_SHIFT)
+#define AVF_TXD_QW1_IPLEN_MASK  (0x7FUL << AVF_TX_DESC_LENGTH_IPLEN_SHIFT)
+#define AVF_TXD_QW1_L4LEN_MASK  (0xFUL << AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+#define AVF_TXD_QW1_FCLEN_MASK  (0xFUL << AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+
+#define AVF_TXD_QW1_TX_BUF_SZ_SHIFT	34
+#define AVF_TXD_QW1_TX_BUF_SZ_MASK	(0x3FFFULL << \
+					 AVF_TXD_QW1_TX_BUF_SZ_SHIFT)
+
+#define AVF_TXD_QW1_L2TAG1_SHIFT	48
+#define AVF_TXD_QW1_L2TAG1_MASK	(0xFFFFULL << AVF_TXD_QW1_L2TAG1_SHIFT)
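+
+/* Illustrative sketch only: composing the cmd_type_offset_bsz qword of a
+ * data descriptor from the definitions above. The helper name is an
+ * example, and conversion to little-endian is left to the caller.
+ */
+STATIC INLINE u64 avf_build_tx_desc_qw1(u32 td_cmd, u32 td_offset,
+					u16 size, u16 td_tag)
+{
+	return AVF_TX_DESC_DTYPE_DATA |
+	       ((u64)td_cmd << AVF_TXD_QW1_CMD_SHIFT) |
+	       ((u64)td_offset << AVF_TXD_QW1_OFFSET_SHIFT) |
+	       ((u64)size << AVF_TXD_QW1_TX_BUF_SZ_SHIFT) |
+	       ((u64)td_tag << AVF_TXD_QW1_L2TAG1_SHIFT);
+}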
+
+/* Context descriptors */
+struct avf_tx_context_desc {
+	__le32 tunneling_params;
+	__le16 l2tag2;
+	__le16 rsvd;
+	__le64 type_cmd_tso_mss;
+};
+
+#define AVF_TXD_CTX_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_CTX_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_CTX_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_CTX_QW1_CMD_SHIFT	4
+#define AVF_TXD_CTX_QW1_CMD_MASK	(0xFFFFUL << AVF_TXD_CTX_QW1_CMD_SHIFT)
+
+enum avf_tx_ctx_desc_cmd_bits {
+	AVF_TX_CTX_DESC_TSO		= 0x01,
+	AVF_TX_CTX_DESC_TSYN		= 0x02,
+	AVF_TX_CTX_DESC_IL2TAG2	= 0x04,
+	AVF_TX_CTX_DESC_IL2TAG2_IL2H	= 0x08,
+	AVF_TX_CTX_DESC_SWTCH_NOTAG	= 0x00,
+	AVF_TX_CTX_DESC_SWTCH_UPLINK	= 0x10,
+	AVF_TX_CTX_DESC_SWTCH_LOCAL	= 0x20,
+	AVF_TX_CTX_DESC_SWTCH_VSI	= 0x30,
+	AVF_TX_CTX_DESC_SWPE		= 0x40
+};
+
+#define AVF_TXD_CTX_QW1_TSO_LEN_SHIFT	30
+#define AVF_TXD_CTX_QW1_TSO_LEN_MASK	(0x3FFFFULL << \
+					 AVF_TXD_CTX_QW1_TSO_LEN_SHIFT)
+
+#define AVF_TXD_CTX_QW1_MSS_SHIFT	50
+#define AVF_TXD_CTX_QW1_MSS_MASK	(0x3FFFULL << \
+					 AVF_TXD_CTX_QW1_MSS_SHIFT)
+
+#define AVF_TXD_CTX_QW1_VSI_SHIFT	50
+#define AVF_TXD_CTX_QW1_VSI_MASK	(0x1FFULL << AVF_TXD_CTX_QW1_VSI_SHIFT)
+
+#define AVF_TXD_CTX_QW0_EXT_IP_SHIFT	0
+#define AVF_TXD_CTX_QW0_EXT_IP_MASK	(0x3ULL << \
+					 AVF_TXD_CTX_QW0_EXT_IP_SHIFT)
+
+enum avf_tx_ctx_desc_eipt_offload {
+	AVF_TX_CTX_EXT_IP_NONE		= 0x0,
+	AVF_TX_CTX_EXT_IP_IPV6		= 0x1,
+	AVF_TX_CTX_EXT_IP_IPV4_NO_CSUM	= 0x2,
+	AVF_TX_CTX_EXT_IP_IPV4		= 0x3
+};
+
+#define AVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT	2
+#define AVF_TXD_CTX_QW0_EXT_IPLEN_MASK	(0x3FULL << \
+					 AVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT)
+
+#define AVF_TXD_CTX_QW0_NATT_SHIFT	9
+#define AVF_TXD_CTX_QW0_NATT_MASK	(0x3ULL << AVF_TXD_CTX_QW0_NATT_SHIFT)
+
+#define AVF_TXD_CTX_UDP_TUNNELING	BIT_ULL(AVF_TXD_CTX_QW0_NATT_SHIFT)
+#define AVF_TXD_CTX_GRE_TUNNELING	(0x2ULL << AVF_TXD_CTX_QW0_NATT_SHIFT)
+
+#define AVF_TXD_CTX_QW0_EIP_NOINC_SHIFT	11
+#define AVF_TXD_CTX_QW0_EIP_NOINC_MASK	BIT_ULL(AVF_TXD_CTX_QW0_EIP_NOINC_SHIFT)
+
+#define AVF_TXD_CTX_EIP_NOINC_IPID_CONST	AVF_TXD_CTX_QW0_EIP_NOINC_MASK
+
+#define AVF_TXD_CTX_QW0_NATLEN_SHIFT	12
+#define AVF_TXD_CTX_QW0_NATLEN_MASK	(0X7FULL << \
+					 AVF_TXD_CTX_QW0_NATLEN_SHIFT)
+
+#define AVF_TXD_CTX_QW0_DECTTL_SHIFT	19
+#define AVF_TXD_CTX_QW0_DECTTL_MASK	(0xFULL << \
+					 AVF_TXD_CTX_QW0_DECTTL_SHIFT)
+
+#define AVF_TXD_CTX_QW0_L4T_CS_SHIFT	23
+#define AVF_TXD_CTX_QW0_L4T_CS_MASK	BIT_ULL(AVF_TXD_CTX_QW0_L4T_CS_SHIFT)
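+
+/* Illustrative sketch only: composing the type_cmd_tso_mss qword of a
+ * context descriptor for a TSO request from the definitions above. The
+ * helper name is an example; little-endian conversion is left to the
+ * caller.
+ */
+STATIC INLINE u64 avf_build_tx_ctx_qw1(u32 cmd, u32 tso_len, u32 mss)
+{
+	return AVF_TX_DESC_DTYPE_CONTEXT |
+	       ((u64)cmd << AVF_TXD_CTX_QW1_CMD_SHIFT) |
+	       ((u64)tso_len << AVF_TXD_CTX_QW1_TSO_LEN_SHIFT) |
+	       ((u64)mss << AVF_TXD_CTX_QW1_MSS_SHIFT);
+}
+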
+struct avf_nop_desc {
+	__le64 rsvd;
+	__le64 dtype_cmd;
+};
+
+#define AVF_TXD_NOP_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_NOP_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_NOP_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_NOP_QW1_CMD_SHIFT	4
+#define AVF_TXD_NOP_QW1_CMD_MASK	(0x7FUL << AVF_TXD_NOP_QW1_CMD_SHIFT)
+
+enum avf_tx_nop_desc_cmd_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_TX_NOP_DESC_EOP_SHIFT	= 0,
+	AVF_TX_NOP_DESC_RS_SHIFT	= 1,
+	AVF_TX_NOP_DESC_RSV_SHIFT	= 2 /* 5 bits */
+};
+
+struct avf_filter_program_desc {
+	__le32 qindex_flex_ptype_vsi;
+	__le32 rsvd;
+	__le32 dtype_cmd_cntindex;
+	__le32 fd_id;
+};
+#define AVF_TXD_FLTR_QW0_QINDEX_SHIFT	0
+#define AVF_TXD_FLTR_QW0_QINDEX_MASK	(0x7FFUL << \
+					 AVF_TXD_FLTR_QW0_QINDEX_SHIFT)
+#define AVF_TXD_FLTR_QW0_FLEXOFF_SHIFT	11
+#define AVF_TXD_FLTR_QW0_FLEXOFF_MASK	(0x7UL << \
+					 AVF_TXD_FLTR_QW0_FLEXOFF_SHIFT)
+#define AVF_TXD_FLTR_QW0_PCTYPE_SHIFT	17
+#define AVF_TXD_FLTR_QW0_PCTYPE_MASK	(0x3FUL << \
+					 AVF_TXD_FLTR_QW0_PCTYPE_SHIFT)
+
+/* Packet Classifier Types for filters */
+enum avf_filter_pctype {
+	/* Note: Values 0-28 are reserved for future use.
+	 * Values 29, 30, and 32 are not supported on XL710 and X710.
+	 */
+	AVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP	= 29,
+	AVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP	= 30,
+	AVF_FILTER_PCTYPE_NONF_IPV4_UDP		= 31,
+	AVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK	= 32,
+	AVF_FILTER_PCTYPE_NONF_IPV4_TCP		= 33,
+	AVF_FILTER_PCTYPE_NONF_IPV4_SCTP		= 34,
+	AVF_FILTER_PCTYPE_NONF_IPV4_OTHER		= 35,
+	AVF_FILTER_PCTYPE_FRAG_IPV4			= 36,
+	/* Note: Values 37-38 are reserved for future use.
+	 * Values 39, 40, and 42 are not supported on XL710 and X710.
+	 */
+	AVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP	= 39,
+	AVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP	= 40,
+	AVF_FILTER_PCTYPE_NONF_IPV6_UDP		= 41,
+	AVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK	= 42,
+	AVF_FILTER_PCTYPE_NONF_IPV6_TCP		= 43,
+	AVF_FILTER_PCTYPE_NONF_IPV6_SCTP		= 44,
+	AVF_FILTER_PCTYPE_NONF_IPV6_OTHER		= 45,
+	AVF_FILTER_PCTYPE_FRAG_IPV6			= 46,
+	/* Note: Value 47 is reserved for future use */
+	AVF_FILTER_PCTYPE_FCOE_OX			= 48,
+	AVF_FILTER_PCTYPE_FCOE_RX			= 49,
+	AVF_FILTER_PCTYPE_FCOE_OTHER			= 50,
+	/* Note: Values 51-62 are reserved for future use */
+	AVF_FILTER_PCTYPE_L2_PAYLOAD			= 63,
+};
+
+enum avf_filter_program_desc_dest {
+	AVF_FILTER_PROGRAM_DESC_DEST_DROP_PACKET		= 0x0,
+	AVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_QINDEX	= 0x1,
+	AVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_OTHER	= 0x2,
+};
+
+enum avf_filter_program_desc_fd_status {
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_NONE			= 0x0,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID		= 0x1,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID_4FLEX_BYTES	= 0x2,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_8FLEX_BYTES		= 0x3,
+};
+
+#define AVF_TXD_FLTR_QW0_DEST_VSI_SHIFT	23
+#define AVF_TXD_FLTR_QW0_DEST_VSI_MASK	(0x1FFUL << \
+					 AVF_TXD_FLTR_QW0_DEST_VSI_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_FLTR_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_FLTR_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CMD_SHIFT	4
+#define AVF_TXD_FLTR_QW1_CMD_MASK	(0xFFFFULL << \
+					 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_PCMD_SHIFT	(0x0ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_PCMD_MASK	(0x7ULL << AVF_TXD_FLTR_QW1_PCMD_SHIFT)
+
+enum avf_filter_program_desc_pcmd {
+	AVF_FILTER_PROGRAM_DESC_PCMD_ADD_UPDATE	= 0x1,
+	AVF_FILTER_PROGRAM_DESC_PCMD_REMOVE		= 0x2,
+};
+
+#define AVF_TXD_FLTR_QW1_DEST_SHIFT	(0x3ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_DEST_MASK	(0x3ULL << AVF_TXD_FLTR_QW1_DEST_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CNT_ENA_SHIFT	(0x7ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_CNT_ENA_MASK	BIT_ULL(AVF_TXD_FLTR_QW1_CNT_ENA_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_FD_STATUS_SHIFT	(0x9ULL + \
+						 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_FD_STATUS_MASK (0x3ULL << \
+					  AVF_TXD_FLTR_QW1_FD_STATUS_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_ATR_SHIFT	(0xEULL + \
+					 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_ATR_MASK	BIT_ULL(AVF_TXD_FLTR_QW1_ATR_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CNTINDEX_SHIFT 20
+#define AVF_TXD_FLTR_QW1_CNTINDEX_MASK	(0x1FFUL << \
+					 AVF_TXD_FLTR_QW1_CNTINDEX_SHIFT)
+
+enum avf_filter_type {
+	AVF_FLOW_DIRECTOR_FLTR = 0,
+	AVF_PE_QUAD_HASH_FLTR = 1,
+	AVF_ETHERTYPE_FLTR,
+	AVF_FCOE_CTX_FLTR,
+	AVF_MAC_VLAN_FLTR,
+	AVF_HASH_FLTR
+};
+
+struct avf_vsi_context {
+	u16 seid;
+	u16 uplink_seid;
+	u16 vsi_number;
+	u16 vsis_allocated;
+	u16 vsis_unallocated;
+	u16 flags;
+	u8 pf_num;
+	u8 vf_num;
+	u8 connection_type;
+	struct avf_aqc_vsi_properties_data info;
+};
+
+struct avf_veb_context {
+	u16 seid;
+	u16 uplink_seid;
+	u16 veb_number;
+	u16 vebs_allocated;
+	u16 vebs_unallocated;
+	u16 flags;
+	struct avf_aqc_get_veb_parameters_completion info;
+};
+
+/* Statistics collected by each port, VSI, VEB, and S-channel */
+struct avf_eth_stats {
+	u64 rx_bytes;			/* gorc */
+	u64 rx_unicast;			/* uprc */
+	u64 rx_multicast;		/* mprc */
+	u64 rx_broadcast;		/* bprc */
+	u64 rx_discards;		/* rdpc */
+	u64 rx_unknown_protocol;	/* rupp */
+	u64 tx_bytes;			/* gotc */
+	u64 tx_unicast;			/* uptc */
+	u64 tx_multicast;		/* mptc */
+	u64 tx_broadcast;		/* bptc */
+	u64 tx_discards;		/* tdpc */
+	u64 tx_errors;			/* tepc */
+};
+
+/* Statistics collected per VEB per TC */
+struct avf_veb_tc_stats {
+	u64 tc_rx_packets[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_rx_bytes[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_tx_packets[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_tx_bytes[AVF_MAX_TRAFFIC_CLASS];
+};
+
+/* Statistics collected per function for FCoE */
+struct avf_fcoe_stats {
+	u64 rx_fcoe_packets;		/* fcoeprc */
+	u64 rx_fcoe_dwords;		/* fcoedwrc */
+	u64 rx_fcoe_dropped;		/* fcoerpdc */
+	u64 tx_fcoe_packets;		/* fcoeptc */
+	u64 tx_fcoe_dwords;		/* fcoedwtc */
+	u64 fcoe_bad_fccrc;		/* fcoecrc */
+	u64 fcoe_last_error;		/* fcoelast */
+	u64 fcoe_ddp_count;		/* fcoeddpc */
+};
+
+/* offset to per function FCoE statistics block */
+#define AVF_FCOE_VF_STAT_OFFSET	0
+#define AVF_FCOE_PF_STAT_OFFSET	128
+#define AVF_FCOE_STAT_MAX		(AVF_FCOE_PF_STAT_OFFSET + AVF_MAX_PF)
+
+/* Statistics collected by the MAC */
+struct avf_hw_port_stats {
+	/* eth stats collected by the port */
+	struct avf_eth_stats eth;
+
+	/* additional port specific stats */
+	u64 tx_dropped_link_down;	/* tdold */
+	u64 crc_errors;			/* crcerrs */
+	u64 illegal_bytes;		/* illerrc */
+	u64 error_bytes;		/* errbc */
+	u64 mac_local_faults;		/* mlfc */
+	u64 mac_remote_faults;		/* mrfc */
+	u64 rx_length_errors;		/* rlec */
+	u64 link_xon_rx;		/* lxonrxc */
+	u64 link_xoff_rx;		/* lxoffrxc */
+	u64 priority_xon_rx[8];		/* pxonrxc[8] */
+	u64 priority_xoff_rx[8];	/* pxoffrxc[8] */
+	u64 link_xon_tx;		/* lxontxc */
+	u64 link_xoff_tx;		/* lxofftxc */
+	u64 priority_xon_tx[8];		/* pxontxc[8] */
+	u64 priority_xoff_tx[8];	/* pxofftxc[8] */
+	u64 priority_xon_2_xoff[8];	/* pxon2offc[8] */
+	u64 rx_size_64;			/* prc64 */
+	u64 rx_size_127;		/* prc127 */
+	u64 rx_size_255;		/* prc255 */
+	u64 rx_size_511;		/* prc511 */
+	u64 rx_size_1023;		/* prc1023 */
+	u64 rx_size_1522;		/* prc1522 */
+	u64 rx_size_big;		/* prc9522 */
+	u64 rx_undersize;		/* ruc */
+	u64 rx_fragments;		/* rfc */
+	u64 rx_oversize;		/* roc */
+	u64 rx_jabber;			/* rjc */
+	u64 tx_size_64;			/* ptc64 */
+	u64 tx_size_127;		/* ptc127 */
+	u64 tx_size_255;		/* ptc255 */
+	u64 tx_size_511;		/* ptc511 */
+	u64 tx_size_1023;		/* ptc1023 */
+	u64 tx_size_1522;		/* ptc1522 */
+	u64 tx_size_big;		/* ptc9522 */
+	u64 mac_short_packet_dropped;	/* mspdc */
+	u64 checksum_error;		/* xec */
+	/* flow director stats */
+	u64 fd_atr_match;
+	u64 fd_sb_match;
+	u64 fd_atr_tunnel_match;
+	u32 fd_atr_status;
+	u32 fd_sb_status;
+	/* EEE LPI */
+	u32 tx_lpi_status;
+	u32 rx_lpi_status;
+	u64 tx_lpi_count;		/* etlpic */
+	u64 rx_lpi_count;		/* erlpic */
+};
+
+/* Checksum and Shadow RAM pointers */
+#define AVF_SR_NVM_CONTROL_WORD		0x00
+#define AVF_SR_PCIE_ANALOG_CONFIG_PTR		0x03
+#define AVF_SR_PHY_ANALOG_CONFIG_PTR		0x04
+#define AVF_SR_OPTION_ROM_PTR			0x05
+#define AVF_SR_RO_PCIR_REGS_AUTO_LOAD_PTR	0x06
+#define AVF_SR_AUTO_GENERATED_POINTERS_PTR	0x07
+#define AVF_SR_PCIR_REGS_AUTO_LOAD_PTR		0x08
+#define AVF_SR_EMP_GLOBAL_MODULE_PTR		0x09
+#define AVF_SR_RO_PCIE_LCB_PTR			0x0A
+#define AVF_SR_EMP_IMAGE_PTR			0x0B
+#define AVF_SR_PE_IMAGE_PTR			0x0C
+#define AVF_SR_CSR_PROTECTED_LIST_PTR		0x0D
+#define AVF_SR_MNG_CONFIG_PTR			0x0E
+#define AVF_EMP_MODULE_PTR			0x0F
+#define AVF_SR_EMP_MODULE_PTR			0x48
+#define AVF_SR_PBA_FLAGS			0x15
+#define AVF_SR_PBA_BLOCK_PTR			0x16
+#define AVF_SR_BOOT_CONFIG_PTR			0x17
+#define AVF_NVM_OEM_VER_OFF			0x83
+#define AVF_SR_NVM_DEV_STARTER_VERSION		0x18
+#define AVF_SR_NVM_WAKE_ON_LAN			0x19
+#define AVF_SR_ALTERNATE_SAN_MAC_ADDRESS_PTR	0x27
+#define AVF_SR_PERMANENT_SAN_MAC_ADDRESS_PTR	0x28
+#define AVF_SR_NVM_MAP_VERSION			0x29
+#define AVF_SR_NVM_IMAGE_VERSION		0x2A
+#define AVF_SR_NVM_STRUCTURE_VERSION		0x2B
+#define AVF_SR_NVM_EETRACK_LO			0x2D
+#define AVF_SR_NVM_EETRACK_HI			0x2E
+#define AVF_SR_VPD_PTR				0x2F
+#define AVF_SR_PXE_SETUP_PTR			0x30
+#define AVF_SR_PXE_CONFIG_CUST_OPTIONS_PTR	0x31
+#define AVF_SR_NVM_ORIGINAL_EETRACK_LO		0x34
+#define AVF_SR_NVM_ORIGINAL_EETRACK_HI		0x35
+#define AVF_SR_SW_ETHERNET_MAC_ADDRESS_PTR	0x37
+#define AVF_SR_POR_REGS_AUTO_LOAD_PTR		0x38
+#define AVF_SR_EMPR_REGS_AUTO_LOAD_PTR		0x3A
+#define AVF_SR_GLOBR_REGS_AUTO_LOAD_PTR	0x3B
+#define AVF_SR_CORER_REGS_AUTO_LOAD_PTR	0x3C
+#define AVF_SR_PHY_ACTIVITY_LIST_PTR		0x3D
+#define AVF_SR_PCIE_ALT_AUTO_LOAD_PTR		0x3E
+#define AVF_SR_SW_CHECKSUM_WORD		0x3F
+#define AVF_SR_1ST_FREE_PROVISION_AREA_PTR	0x40
+#define AVF_SR_4TH_FREE_PROVISION_AREA_PTR	0x42
+#define AVF_SR_3RD_FREE_PROVISION_AREA_PTR	0x44
+#define AVF_SR_2ND_FREE_PROVISION_AREA_PTR	0x46
+#define AVF_SR_EMP_SR_SETTINGS_PTR		0x48
+#define AVF_SR_FEATURE_CONFIGURATION_PTR	0x49
+#define AVF_SR_CONFIGURATION_METADATA_PTR	0x4D
+#define AVF_SR_IMMEDIATE_VALUES_PTR		0x4E
+
+/* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
+#define AVF_SR_VPD_MODULE_MAX_SIZE		1024
+#define AVF_SR_PCIE_ALT_MODULE_MAX_SIZE	1024
+#define AVF_SR_CONTROL_WORD_1_SHIFT		0x06
+#define AVF_SR_CONTROL_WORD_1_MASK	(0x03 << AVF_SR_CONTROL_WORD_1_SHIFT)
+#define AVF_SR_CONTROL_WORD_1_NVM_BANK_VALID	BIT(5)
+#define AVF_SR_NVM_MAP_STRUCTURE_TYPE		BIT(12)
+#define AVF_PTR_TYPE                           BIT(15)
+
+/* Shadow RAM related */
+#define AVF_SR_SECTOR_SIZE_IN_WORDS	0x800
+#define AVF_SR_BUF_ALIGNMENT		4096
+#define AVF_SR_WORDS_IN_1KB		512
+/* Checksum should be calculated such that after adding all the words,
+ * including the checksum word itself, the sum should be 0xBABA.
+ */
+#define AVF_SR_SW_CHECKSUM_BASE	0xBABA
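+
+/* Illustrative sketch only: a minimal check of the rule above, assuming
+ * the caller has already read the relevant Shadow RAM words into a
+ * buffer. The production algorithm may additionally skip module regions
+ * (e.g. VPD) when summing; this only shows the 0xBABA arithmetic.
+ */
+STATIC INLINE bool avf_sr_checksum_is_valid(const u16 *sr_words, u32 nwords)
+{
+	u16 sum = 0;
+	u32 i;
+
+	/* add every word, including the stored checksum word itself */
+	for (i = 0; i < nwords; i++)
+		sum = (u16)(sum + sr_words[i]);
+
+	return sum == AVF_SR_SW_CHECKSUM_BASE;
+}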
+
+#define AVF_SRRD_SRCTL_ATTEMPTS	100000
+
+/* FCoE Tx context descriptor - Use the avf_tx_context_desc struct */
+
+enum avf_fcoe_tx_ctx_desc_cmd_bits {
+	AVF_FCOE_TX_CTX_DESC_OPCODE_SINGLE_SEND	= 0x00, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS2	= 0x01, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS3	= 0x05, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS2	= 0x02, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS3	= 0x06, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS2	= 0x03, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS3	= 0x07, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DDP_CTX_INVL	= 0x08, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_CTX_INVL	= 0x09, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_RELOFF			= 0x10,
+	AVF_FCOE_TX_CTX_DESC_CLRSEQ			= 0x20,
+	AVF_FCOE_TX_CTX_DESC_DIFENA			= 0x40,
+	AVF_FCOE_TX_CTX_DESC_IL2TAG2			= 0x80
+};
+
+/* FCoE DIF/DIX Context descriptor */
+struct avf_fcoe_difdix_context_desc {
+	__le64 flags_buff0_buff1_ref;
+	__le64 difapp_msk_bias;
+};
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT	0
+#define AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_MASK	(0xFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT)
+
+enum avf_fcoe_difdix_ctx_desc_flags_bits {
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_RSVD				= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGCHK		= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGNOTCHK		= 0x0004,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_OPAQUE			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY		= 0x0008,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPTAG	= 0x0010,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPREFTAG	= 0x0018,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_CNST			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_INC1BLK		= 0x0020,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_APPTAG		= 0x0040,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_RSVD			= 0x0060,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_XSUM			= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_CRC			= 0x0080,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_UNTAG			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_BUF			= 0x0100,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_RSVD			= 0x0200,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_EMBDTAGS		= 0x0300,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_UNTAG			= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_TAG			= 0x0400,
+	/* 1 BIT */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_512B			= 0x0000,
+	/* 1 BIT */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_4K			= 0x0800
+};
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT	12
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_MASK	(0x3FFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT	22
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_MASK	(0x3FFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT	32
+#define AVF_FCOE_DIFDIX_CTX_QW0_REF_MASK	(0xFFFFFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT	0
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MASK	(0xFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT	16
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_MASK	(0xFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT	32
+#define AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_MASK	(0xFFFFFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT)
+
+/* FCoE DIF/DIX Buffers descriptor */
+struct avf_fcoe_difdix_buffers_desc {
+	__le64 buff_addr0;
+	__le64 buff_addr1;
+};
+
+/* FCoE DDP Context descriptor */
+struct avf_fcoe_ddp_context_desc {
+	__le64 rsvd;
+	__le64 type_cmd_foff_lsize;
+};
+
+#define AVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT	0
+#define AVF_FCOE_DDP_CTX_QW1_DTYPE_MASK	(0xFULL << \
+					AVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT)
+
+#define AVF_FCOE_DDP_CTX_QW1_CMD_SHIFT	4
+#define AVF_FCOE_DDP_CTX_QW1_CMD_MASK	(0xFULL << \
+					 AVF_FCOE_DDP_CTX_QW1_CMD_SHIFT)
+
+enum avf_fcoe_ddp_ctx_desc_cmd_bits {
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_512B	= 0x00, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_4K		= 0x01, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_8K		= 0x02, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_16K	= 0x03, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_DIFENA		= 0x04, /* 1 BIT  */
+	AVF_FCOE_DDP_CTX_DESC_LASTSEQH		= 0x08, /* 1 BIT  */
+};
+
+#define AVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT	16
+#define AVF_FCOE_DDP_CTX_QW1_FOFF_MASK	(0x3FFFULL << \
+					 AVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT)
+
+#define AVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT	32
+#define AVF_FCOE_DDP_CTX_QW1_LSIZE_MASK	(0x3FFFULL << \
+					AVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT)
+
+/* FCoE DDP/DWO Queue Context descriptor */
+struct avf_fcoe_queue_context_desc {
+	__le64 dmaindx_fbase;           /* 0:11 DMAINDX, 12:63 FBASE */
+	__le64 flen_tph;                /* 0:12 FLEN, 13:15 TPH */
+};
+
+#define AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT	0
+#define AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_MASK	(0xFFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT	12
+#define AVF_FCOE_QUEUE_CTX_QW0_FBASE_MASK	(0xFFFFFFFFFFFFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT	0
+#define AVF_FCOE_QUEUE_CTX_QW1_FLEN_MASK	(0x1FFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW1_TPH_SHIFT	13
+#define AVF_FCOE_QUEUE_CTX_QW1_TPH_MASK	(0x7ULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_TPH_SHIFT)
+
+enum avf_fcoe_queue_ctx_desc_tph_bits {
+	AVF_FCOE_QUEUE_CTX_DESC_TPHRDESC	= 0x1,
+	AVF_FCOE_QUEUE_CTX_DESC_TPHDATA	= 0x2
+};
+
+#define AVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT	30
+#define AVF_FCOE_QUEUE_CTX_QW1_RECIPE_MASK	(0x3ULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT)
+
+/* FCoE DDP/DWO Filter Context descriptor */
+struct avf_fcoe_filter_context_desc {
+	__le32 param;
+	__le16 seqn;
+
+	/* 48:51(0:3) RSVD, 52:63(4:15) DMAINDX */
+	__le16 rsvd_dmaindx;
+
+	/* 0:7 FLAGS, 8:52 RSVD, 53:63 LANQ */
+	__le64 flags_rsvd_lanq;
+};
+
+#define AVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT	4
+#define AVF_FCOE_FILTER_CTX_QW0_DMAINDX_MASK	(0xFFF << \
+					AVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT)
+
+enum avf_fcoe_filter_ctx_desc_flags_bits {
+	AVF_FCOE_FILTER_CTX_DESC_CTYP_DDP	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_CTYP_DWO	= 0x01,
+	AVF_FCOE_FILTER_CTX_DESC_ENODE_INIT	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_ENODE_RSP	= 0x02,
+	AVF_FCOE_FILTER_CTX_DESC_FC_CLASS2	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_FC_CLASS3	= 0x04
+};
+
+#define AVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT	0
+#define AVF_FCOE_FILTER_CTX_QW1_FLAGS_MASK	(0xFFULL << \
+					AVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT)
+
+#define AVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT     8
+#define AVF_FCOE_FILTER_CTX_QW1_PCTYPE_MASK      (0x3FULL << \
+			AVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT)
+
+#define AVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT     53
+#define AVF_FCOE_FILTER_CTX_QW1_LANQINDX_MASK      (0x7FFULL << \
+			AVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT)
+
+enum avf_switch_element_types {
+	AVF_SWITCH_ELEMENT_TYPE_MAC	= 1,
+	AVF_SWITCH_ELEMENT_TYPE_PF	= 2,
+	AVF_SWITCH_ELEMENT_TYPE_VF	= 3,
+	AVF_SWITCH_ELEMENT_TYPE_EMP	= 4,
+	AVF_SWITCH_ELEMENT_TYPE_BMC	= 6,
+	AVF_SWITCH_ELEMENT_TYPE_PE	= 16,
+	AVF_SWITCH_ELEMENT_TYPE_VEB	= 17,
+	AVF_SWITCH_ELEMENT_TYPE_PA	= 18,
+	AVF_SWITCH_ELEMENT_TYPE_VSI	= 19,
+};
+
+/* Supported EtherType filters */
+enum avf_ether_type_index {
+	AVF_ETHER_TYPE_1588		= 0,
+	AVF_ETHER_TYPE_FIP		= 1,
+	AVF_ETHER_TYPE_OUI_EXTENDED	= 2,
+	AVF_ETHER_TYPE_MAC_CONTROL	= 3,
+	AVF_ETHER_TYPE_LLDP		= 4,
+	AVF_ETHER_TYPE_EVB_PROTOCOL1	= 5,
+	AVF_ETHER_TYPE_EVB_PROTOCOL2	= 6,
+	AVF_ETHER_TYPE_QCN_CNM		= 7,
+	AVF_ETHER_TYPE_8021X		= 8,
+	AVF_ETHER_TYPE_ARP		= 9,
+	AVF_ETHER_TYPE_RSV1		= 10,
+	AVF_ETHER_TYPE_RSV2		= 11,
+};
+
+/* Filter context base size is 1K */
+#define AVF_HASH_FILTER_BASE_SIZE	1024
+/* Supported Hash filter values */
+enum avf_hash_filter_size {
+	AVF_HASH_FILTER_SIZE_1K	= 0,
+	AVF_HASH_FILTER_SIZE_2K	= 1,
+	AVF_HASH_FILTER_SIZE_4K	= 2,
+	AVF_HASH_FILTER_SIZE_8K	= 3,
+	AVF_HASH_FILTER_SIZE_16K	= 4,
+	AVF_HASH_FILTER_SIZE_32K	= 5,
+	AVF_HASH_FILTER_SIZE_64K	= 6,
+	AVF_HASH_FILTER_SIZE_128K	= 7,
+	AVF_HASH_FILTER_SIZE_256K	= 8,
+	AVF_HASH_FILTER_SIZE_512K	= 9,
+	AVF_HASH_FILTER_SIZE_1M	= 10,
+};
+
+/* DMA context base size is 0.5K */
+#define AVF_DMA_CNTX_BASE_SIZE		512
+/* Supported DMA context values */
+enum avf_dma_cntx_size {
+	AVF_DMA_CNTX_SIZE_512		= 0,
+	AVF_DMA_CNTX_SIZE_1K		= 1,
+	AVF_DMA_CNTX_SIZE_2K		= 2,
+	AVF_DMA_CNTX_SIZE_4K		= 3,
+	AVF_DMA_CNTX_SIZE_8K		= 4,
+	AVF_DMA_CNTX_SIZE_16K		= 5,
+	AVF_DMA_CNTX_SIZE_32K		= 6,
+	AVF_DMA_CNTX_SIZE_64K		= 7,
+	AVF_DMA_CNTX_SIZE_128K		= 8,
+	AVF_DMA_CNTX_SIZE_256K		= 9,
+};
+
+/* Supported Hash look up table (LUT) sizes */
+enum avf_hash_lut_size {
+	AVF_HASH_LUT_SIZE_128		= 0,
+	AVF_HASH_LUT_SIZE_512		= 1,
+};
+
+/* Structure to hold a per PF filter control settings */
+struct avf_filter_control_settings {
+	/* number of PE Quad Hash filter buckets */
+	enum avf_hash_filter_size pe_filt_num;
+	/* number of PE Quad Hash contexts */
+	enum avf_dma_cntx_size pe_cntx_num;
+	/* number of FCoE filter buckets */
+	enum avf_hash_filter_size fcoe_filt_num;
+	/* number of FCoE DDP contexts */
+	enum avf_dma_cntx_size fcoe_cntx_num;
+	/* size of the Hash LUT */
+	enum avf_hash_lut_size	hash_lut_size;
+	/* enable FDIR filters for PF and its VFs */
+	bool enable_fdir;
+	/* enable Ethertype filters for PF and its VFs */
+	bool enable_ethtype;
+	/* enable MAC/VLAN filters for PF and its VFs */
+	bool enable_macvlan;
+};
+
+/* Structure to hold device level control filter counts */
+struct avf_control_filter_stats {
+	u16 mac_etype_used;   /* Used perfect match MAC/EtherType filters */
+	u16 etype_used;       /* Used perfect EtherType filters */
+	u16 mac_etype_free;   /* Un-used perfect match MAC/EtherType filters */
+	u16 etype_free;       /* Un-used perfect EtherType filters */
+};
+
+enum avf_reset_type {
+	AVF_RESET_POR		= 0,
+	AVF_RESET_CORER	= 1,
+	AVF_RESET_GLOBR	= 2,
+	AVF_RESET_EMPR		= 3,
+};
+
+/* IEEE 802.1AB LLDP Agent Variables from NVM */
+#define AVF_NVM_LLDP_CFG_PTR   0x06
+#define AVF_SR_LLDP_CFG_PTR    0x31
+struct avf_lldp_variables {
+	u16 length;
+	u16 adminstatus;
+	u16 msgfasttx;
+	u16 msgtxinterval;
+	u16 txparams;
+	u16 timers;
+	u16 crc8;
+};
+
+/* Offsets into Alternate Ram */
+#define AVF_ALT_STRUCT_FIRST_PF_OFFSET		0   /* in dwords */
+#define AVF_ALT_STRUCT_DWORDS_PER_PF		64   /* in dwords */
+#define AVF_ALT_STRUCT_OUTER_VLAN_TAG_OFFSET	0xD  /* in dwords */
+#define AVF_ALT_STRUCT_USER_PRIORITY_OFFSET	0xC  /* in dwords */
+#define AVF_ALT_STRUCT_MIN_BW_OFFSET		0xE  /* in dwords */
+#define AVF_ALT_STRUCT_MAX_BW_OFFSET		0xF  /* in dwords */
+
+/* Alternate Ram Bandwidth Masks */
+#define AVF_ALT_BW_VALUE_MASK		0xFF
+#define AVF_ALT_BW_RELATIVE_MASK	0x40000000
+#define AVF_ALT_BW_VALID_MASK		0x80000000
+
+/* RSS Hash Table Size */
+#define AVF_PFQF_CTL_0_HASHLUTSIZE_512	0x00010000
+
+/* INPUT SET MASK for RSS, flow director, and flexible payload */
+#define AVF_L3_SRC_SHIFT		47
+#define AVF_L3_SRC_MASK		(0x3ULL << AVF_L3_SRC_SHIFT)
+#define AVF_L3_V6_SRC_SHIFT		43
+#define AVF_L3_V6_SRC_MASK		(0xFFULL << AVF_L3_V6_SRC_SHIFT)
+#define AVF_L3_DST_SHIFT		35
+#define AVF_L3_DST_MASK		(0x3ULL << AVF_L3_DST_SHIFT)
+#define AVF_L3_V6_DST_SHIFT		35
+#define AVF_L3_V6_DST_MASK		(0xFFULL << AVF_L3_V6_DST_SHIFT)
+#define AVF_L4_SRC_SHIFT		34
+#define AVF_L4_SRC_MASK		(0x1ULL << AVF_L4_SRC_SHIFT)
+#define AVF_L4_DST_SHIFT		33
+#define AVF_L4_DST_MASK		(0x1ULL << AVF_L4_DST_SHIFT)
+#define AVF_VERIFY_TAG_SHIFT		31
+#define AVF_VERIFY_TAG_MASK		(0x3ULL << AVF_VERIFY_TAG_SHIFT)
+
+#define AVF_FLEX_50_SHIFT		13
+#define AVF_FLEX_50_MASK		(0x1ULL << AVF_FLEX_50_SHIFT)
+#define AVF_FLEX_51_SHIFT		12
+#define AVF_FLEX_51_MASK		(0x1ULL << AVF_FLEX_51_SHIFT)
+#define AVF_FLEX_52_SHIFT		11
+#define AVF_FLEX_52_MASK		(0x1ULL << AVF_FLEX_52_SHIFT)
+#define AVF_FLEX_53_SHIFT		10
+#define AVF_FLEX_53_MASK		(0x1ULL << AVF_FLEX_53_SHIFT)
+#define AVF_FLEX_54_SHIFT		9
+#define AVF_FLEX_54_MASK		(0x1ULL << AVF_FLEX_54_SHIFT)
+#define AVF_FLEX_55_SHIFT		8
+#define AVF_FLEX_55_MASK		(0x1ULL << AVF_FLEX_55_SHIFT)
+#define AVF_FLEX_56_SHIFT		7
+#define AVF_FLEX_56_MASK		(0x1ULL << AVF_FLEX_56_SHIFT)
+#define AVF_FLEX_57_SHIFT		6
+#define AVF_FLEX_57_MASK		(0x1ULL << AVF_FLEX_57_SHIFT)
+
+/* Version format for Dynamic Device Personalization (DDP) */
+struct avf_ddp_version {
+	u8 major;
+	u8 minor;
+	u8 update;
+	u8 draft;
+};
+
+#define AVF_DDP_NAME_SIZE	32
+
+/* Package header */
+struct avf_package_header {
+	struct avf_ddp_version version;
+	u32 segment_count;
+	u32 segment_offset[1];
+};
+
+/* Generic segment header */
+struct avf_generic_seg_header {
+#define SEGMENT_TYPE_METADATA	0x00000001
+#define SEGMENT_TYPE_NOTES	0x00000002
+#define SEGMENT_TYPE_AVF	0x00000011
+#define SEGMENT_TYPE_X722	0x00000012
+	u32 type;
+	struct avf_ddp_version version;
+	u32 size;
+	char name[AVF_DDP_NAME_SIZE];
+};
+
+struct avf_metadata_segment {
+	struct avf_generic_seg_header header;
+	struct avf_ddp_version version;
+#define AVF_DDP_TRACKID_RDONLY		0
+#define AVF_DDP_TRACKID_INVALID	0xFFFFFFFF
+	u32 track_id;
+	char name[AVF_DDP_NAME_SIZE];
+};
+
+struct avf_device_id_entry {
+	u32 vendor_dev_id;
+	u32 sub_vendor_dev_id;
+};
+
+struct avf_profile_segment {
+	struct avf_generic_seg_header header;
+	struct avf_ddp_version version;
+	char name[AVF_DDP_NAME_SIZE];
+	u32 device_table_count;
+	struct avf_device_id_entry device_table[1];
+};
+
+struct avf_section_table {
+	u32 section_count;
+	u32 section_offset[1];
+};
+
+struct avf_profile_section_header {
+	u16 tbl_size;
+	u16 data_end;
+	struct {
+#define SECTION_TYPE_INFO	0x00000010
+#define SECTION_TYPE_MMIO	0x00000800
+#define SECTION_TYPE_RB_MMIO	0x00001800
+#define SECTION_TYPE_AQ		0x00000801
+#define SECTION_TYPE_RB_AQ	0x00001801
+#define SECTION_TYPE_NOTE	0x80000000
+#define SECTION_TYPE_NAME	0x80000001
+#define SECTION_TYPE_PROTO	0x80000002
+#define SECTION_TYPE_PCTYPE	0x80000003
+#define SECTION_TYPE_PTYPE	0x80000004
+		u32 type;
+		u32 offset;
+		u32 size;
+	} section;
+};
+
+struct avf_profile_tlv_section_record {
+	u8 rtype;
+	u8 type;
+	u16 len;
+	u8 data[12];
+};
+
+/* Generic AQ section in profile */
+struct avf_profile_aq_section {
+	u16 opcode;
+	u16 flags;
+	u8  param[16];
+	u16 datalen;
+	u8  data[1];
+};
+
+struct avf_profile_info {
+	u32 track_id;
+	struct avf_ddp_version version;
+	u8 op;
+#define AVF_DDP_ADD_TRACKID		0x01
+#define AVF_DDP_REMOVE_TRACKID	0x02
+	u8 reserved[7];
+	u8 name[AVF_DDP_NAME_SIZE];
+};
+#endif /* _AVF_TYPE_H_ */
diff --git a/drivers/net/avf/base/virtchnl.h b/drivers/net/avf/base/virtchnl.h
new file mode 100644
index 0000000..167518f
--- /dev/null
+++ b/drivers/net/avf/base/virtchnl.h
@@ -0,0 +1,787 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _VIRTCHNL_H_
+#define _VIRTCHNL_H_
+
+/* Description:
+ * This header file describes the VF-PF communication protocol used
+ * by the drivers for all devices starting from our 40G product line
+ *
+ * Admin queue buffer usage:
+ * desc->opcode is always aqc_opc_send_msg_to_pf
+ * flags, retval, datalen, and data addr are all used normally.
+ * The Firmware copies the cookie fields when sending messages between the
+ * PF and VF, but uses all other fields internally. Due to this limitation,
+ * we must send all messages as "indirect", i.e. using an external buffer.
+ *
+ * All the VSI indexes are relative to the VF. Each VF can have a maximum
+ * of three VSIs. All the queue indexes are relative to the VSI. Each VF
+ * can have a maximum of sixteen queues for all of its VSIs.
+ *
+ * The PF is required to return a status code in v_retval for all messages
+ * except RESET_VF, which does not require any response. The return value
+ * is of status_code type, defined in the shared type.h.
+ *
+ * In general, VF driver initialization should roughly follow the order of
+ * these opcodes. The VF driver must first validate the API version of the
+ * PF driver, then request a reset, then get resources, then configure
+ * queues and interrupts. After these operations are complete, the VF
+ * driver may start its queues, optionally add MAC and VLAN filters, and
+ * process traffic.
+ */
+
+/* START GENERIC DEFINES
+ * Need to ensure the following enums and defines hold the same meaning and
+ * value in current and future projects
+ */
+
+/* Error Codes */
+enum virtchnl_status_code {
+	VIRTCHNL_STATUS_SUCCESS				= 0,
+	VIRTCHNL_ERR_PARAM				= -5,
+	VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH		= -38,
+	VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR		= -39,
+	VIRTCHNL_STATUS_ERR_INVALID_VF_ID		= -40,
+	VIRTCHNL_STATUS_NOT_SUPPORTED			= -64,
+};
+
+#define VIRTCHNL_LINK_SPEED_100MB_SHIFT		0x1
+#define VIRTCHNL_LINK_SPEED_1000MB_SHIFT	0x2
+#define VIRTCHNL_LINK_SPEED_10GB_SHIFT		0x3
+#define VIRTCHNL_LINK_SPEED_40GB_SHIFT		0x4
+#define VIRTCHNL_LINK_SPEED_20GB_SHIFT		0x5
+#define VIRTCHNL_LINK_SPEED_25GB_SHIFT		0x6
+
+enum virtchnl_link_speed {
+	VIRTCHNL_LINK_SPEED_UNKNOWN	= 0,
+	VIRTCHNL_LINK_SPEED_100MB	= BIT(VIRTCHNL_LINK_SPEED_100MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_1GB		= BIT(VIRTCHNL_LINK_SPEED_1000MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_10GB	= BIT(VIRTCHNL_LINK_SPEED_10GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_40GB	= BIT(VIRTCHNL_LINK_SPEED_40GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_20GB	= BIT(VIRTCHNL_LINK_SPEED_20GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_25GB	= BIT(VIRTCHNL_LINK_SPEED_25GB_SHIFT),
+};
+
+/* for hsplit_0 field of Rx HMC context */
+/* deprecated with AVF 1.0 */
+enum virtchnl_rx_hsplit {
+	VIRTCHNL_RX_HSPLIT_NO_SPLIT      = 0,
+	VIRTCHNL_RX_HSPLIT_SPLIT_L2      = 1,
+	VIRTCHNL_RX_HSPLIT_SPLIT_IP      = 2,
+	VIRTCHNL_RX_HSPLIT_SPLIT_TCP_UDP = 4,
+	VIRTCHNL_RX_HSPLIT_SPLIT_SCTP    = 8,
+};
+
+#define VIRTCHNL_ETH_LENGTH_OF_ADDRESS	6
+/* END GENERIC DEFINES */
+
+/* Opcodes for VF-PF communication. These are placed in the v_opcode field
+ * of the virtchnl_msg structure.
+ */
+enum virtchnl_ops {
+/* The PF sends status change events to VFs using
+ * the VIRTCHNL_OP_EVENT opcode.
+ * VFs send requests to the PF using the other ops.
+ * Use of "advanced opcode" features must be negotiated as part of capabilities
+ * exchange and are not considered part of base mode feature set.
+ */
+	VIRTCHNL_OP_UNKNOWN = 0,
+	VIRTCHNL_OP_VERSION = 1, /* must ALWAYS be 1 */
+	VIRTCHNL_OP_RESET_VF = 2,
+	VIRTCHNL_OP_GET_VF_RESOURCES = 3,
+	VIRTCHNL_OP_CONFIG_TX_QUEUE = 4,
+	VIRTCHNL_OP_CONFIG_RX_QUEUE = 5,
+	VIRTCHNL_OP_CONFIG_VSI_QUEUES = 6,
+	VIRTCHNL_OP_CONFIG_IRQ_MAP = 7,
+	VIRTCHNL_OP_ENABLE_QUEUES = 8,
+	VIRTCHNL_OP_DISABLE_QUEUES = 9,
+	VIRTCHNL_OP_ADD_ETH_ADDR = 10,
+	VIRTCHNL_OP_DEL_ETH_ADDR = 11,
+	VIRTCHNL_OP_ADD_VLAN = 12,
+	VIRTCHNL_OP_DEL_VLAN = 13,
+	VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE = 14,
+	VIRTCHNL_OP_GET_STATS = 15,
+	VIRTCHNL_OP_RSVD = 16,
+	VIRTCHNL_OP_EVENT = 17, /* must ALWAYS be 17 */
+#ifdef VIRTCHNL_SOL_VF_SUPPORT
+	VIRTCHNL_OP_GET_ADDNL_SOL_CONFIG = 19,
+#endif
+#ifdef VIRTCHNL_IWARP
+	VIRTCHNL_OP_IWARP = 20, /* advanced opcode */
+	VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP = 21, /* advanced opcode */
+	VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP = 22, /* advanced opcode */
+#endif
+	VIRTCHNL_OP_CONFIG_RSS_KEY = 23,
+	VIRTCHNL_OP_CONFIG_RSS_LUT = 24,
+	VIRTCHNL_OP_GET_RSS_HENA_CAPS = 25,
+	VIRTCHNL_OP_SET_RSS_HENA = 26,
+	VIRTCHNL_OP_ENABLE_VLAN_STRIPPING = 27,
+	VIRTCHNL_OP_DISABLE_VLAN_STRIPPING = 28,
+	VIRTCHNL_OP_REQUEST_QUEUES = 29,
+
+};
+
+/* This macro is used to generate a compilation error if a structure
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure is not of the correct size, otherwise it creates an enum that is
+ * never used.
+ */
+#define VIRTCHNL_CHECK_STRUCT_LEN(n, X) enum virtchnl_static_assert_enum_##X \
+	{virtchnl_static_assert_##X = (n) / ((sizeof(struct X) == (n)) ? 1 : 0)}
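+
+/* For example (illustrative only),
+ * VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_version_info) expands to an enum
+ * whose single value evaluates to (8) / 1 when sizeof(struct
+ * virtchnl_version_info) equals 8; on a size mismatch the divisor becomes 0
+ * and compilation fails.
+ */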
+
+/* Virtual channel message descriptor. This overlays the admin queue
+ * descriptor. All other data is passed in external buffers.
+ */
+
+struct virtchnl_msg {
+	u8 pad[8];			 /* AQ flags/opcode/len/retval fields */
+	enum virtchnl_ops v_opcode; /* avoid confusion with desc->opcode */
+	enum virtchnl_status_code v_retval;  /* ditto for desc->retval */
+	u32 vfid;			 /* used by PF when sending to VF */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg);
+
+/* Message descriptions and data structures.*/
+
+/* VIRTCHNL_OP_VERSION
+ * VF posts its version number to the PF. PF responds with its version number
+ * in the same format, along with a return code.
+ * Reply from PF has its major/minor versions also in param0 and param1.
+ * If there is a major version mismatch, then the VF cannot operate.
+ * If there is a minor version mismatch, then the VF can operate but should
+ * add a warning to the system log.
+ *
+ * This enum element MUST always be specified as == 1, regardless of other
+ * changes in the API. The PF must always respond to this message without
+ * error regardless of version mismatch.
+ */
+#define VIRTCHNL_VERSION_MAJOR		1
+#define VIRTCHNL_VERSION_MINOR		1
+#define VIRTCHNL_VERSION_MINOR_NO_VF_CAPS	0
+
+struct virtchnl_version_info {
+	u32 major;
+	u32 minor;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_version_info);
+
+#define VF_IS_V10(_v) (((_v)->major == 1) && ((_v)->minor == 0))
+#define VF_IS_V11(_ver) (((_ver)->major == 1) && ((_ver)->minor == 1))
+
+/* VIRTCHNL_OP_RESET_VF
+ * VF sends this request to PF with no parameters
+ * PF does NOT respond! VF driver must delay then poll VFGEN_RSTAT register
+ * until reset completion is indicated. The admin queue must be reinitialized
+ * after this operation.
+ *
+ * When reset is complete, PF must ensure that all queues in all VSIs associated
+ * with the VF are stopped, all queue configurations in the HMC are set to 0,
+ * and all MAC and VLAN filters (except the default MAC address) on all VSIs
+ * are cleared.
+ */
+
+/* VSI types that use VIRTCHNL interface for VF-PF communication. VSI_SRIOV
+ * vsi_type should always be 6 for backward compatibility. Add other fields
+ * as needed.
+ */
+enum virtchnl_vsi_type {
+	VIRTCHNL_VSI_TYPE_INVALID = 0,
+	VIRTCHNL_VSI_SRIOV = 6,
+};
+
+/* VIRTCHNL_OP_GET_VF_RESOURCES
+ * Version 1.0 VF sends this request to PF with no parameters
+ * Version 1.1 VF sends this request to PF with u32 bitmap of its capabilities
+ * PF responds with an indirect message containing
+ * virtchnl_vf_resource and one or more
+ * virtchnl_vsi_resource structures.
+ */
+
+struct virtchnl_vsi_resource {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	enum virtchnl_vsi_type vsi_type;
+	u16 qset_handle;
+	u8 default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
+
+/* VF capability flags
+ * VIRTCHNL_VF_OFFLOAD_L2 flag is inclusive of base mode L2 offloads including
+ * TX/RX Checksum offloading and TSO for non-tunnelled packets.
+ */
+#define VIRTCHNL_VF_OFFLOAD_L2			0x00000001
+#define VIRTCHNL_VF_OFFLOAD_IWARP		0x00000002
+#define VIRTCHNL_VF_OFFLOAD_RSVD		0x00000004
+#define VIRTCHNL_VF_OFFLOAD_RSS_AQ		0x00000008
+#define VIRTCHNL_VF_OFFLOAD_RSS_REG		0x00000010
+#define VIRTCHNL_VF_OFFLOAD_WB_ON_ITR		0x00000020
+#define VIRTCHNL_VF_OFFLOAD_REQ_QUEUES		0x00000040
+#define VIRTCHNL_VF_OFFLOAD_VLAN		0x00010000
+#define VIRTCHNL_VF_OFFLOAD_RX_POLLING		0x00020000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2	0x00040000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PF		0X00080000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP		0X00100000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM		0X00200000
+#define VIRTCHNL_VF_OFFLOAD_RX_ENCAP_CSUM	0X00400000
+
+#define VF_BASE_MODE_OFFLOADS (VIRTCHNL_VF_OFFLOAD_L2 | \
+			       VIRTCHNL_VF_OFFLOAD_VLAN | \
+			       VIRTCHNL_VF_OFFLOAD_RSS_PF)
+
+struct virtchnl_vf_resource {
+	u16 num_vsis;
+	u16 num_queue_pairs;
+	u16 max_vectors;
+	u16 max_mtu;
+
+	u32 vf_cap_flags;
+	u32 rss_key_size;
+	u32 rss_lut_size;
+
+	struct virtchnl_vsi_resource vsi_res[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(36, virtchnl_vf_resource);
+
+/* VIRTCHNL_OP_CONFIG_TX_QUEUE
+ * VF sends this message to set up parameters for one TX queue.
+ * External data buffer contains one instance of virtchnl_txq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Tx queue config info */
+struct virtchnl_txq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u16 ring_len;		/* number of descriptors, multiple of 8 */
+	u16 headwb_enabled; /* deprecated with AVF 1.0 */
+	u64 dma_ring_addr;
+	u64 dma_headwb_addr; /* deprecated with AVF 1.0 */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_txq_info);
+
+/* VIRTCHNL_OP_CONFIG_RX_QUEUE
+ * VF sends this message to set up parameters for one RX queue.
+ * External data buffer contains one instance of virtchnl_rxq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Rx queue config info */
+struct virtchnl_rxq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u32 ring_len;		/* number of descriptors, multiple of 32 */
+	u16 hdr_size;
+	u16 splithdr_enabled; /* deprecated with AVF 1.0 */
+	u32 databuffer_size;
+	u32 max_pkt_size;
+	u32 pad1;
+	u64 dma_ring_addr;
+	enum virtchnl_rx_hsplit rx_split_pos; /* deprecated with AVF 1.0 */
+	u32 pad2;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_rxq_info);
+
+/* VIRTCHNL_OP_CONFIG_VSI_QUEUES
+ * VF sends this message to set parameters for all active TX and RX queues
+ * associated with the specified VSI.
+ * PF configures queues and returns status.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the VSI, an error is returned and no queues are configured.
+ */
+struct virtchnl_queue_pair_info {
+	/* NOTE: vsi_id and queue_id should be identical for both queues. */
+	struct virtchnl_txq_info txq;
+	struct virtchnl_rxq_info rxq;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(64, virtchnl_queue_pair_info);
+
+struct virtchnl_vsi_queue_config_info {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	u32 pad;
+	struct virtchnl_queue_pair_info qpair[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_vsi_queue_config_info);
+
+/* VIRTCHNL_OP_REQUEST_QUEUES
+ * VF sends this message to request the PF to allocate additional queues to
+ * this VF.  Each VF gets a guaranteed number of queues on init but asking for
+ * additional queues must be negotiated.  This is a best effort request as it
+ * is possible the PF does not have enough queues left to support the request.
+ * If the PF cannot support the number requested it will respond with the
+ * maximum number it is able to support.  If the request is successful, PF will
+ * then reset the VF to institute required changes.
+ */
+
+/* VF resource request */
+struct virtchnl_vf_res_request {
+	u16 num_queue_pairs;
+};
+
+/* VIRTCHNL_OP_CONFIG_IRQ_MAP
+ * VF uses this message to map vectors to queues.
+ * The rxq_map and txq_map fields are bitmaps used to indicate which queues
+ * are to be associated with the specified vector.
+ * The "other" causes are always mapped to vector 0.
+ * PF configures interrupt mapping and returns status.
+ */
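+/* For example (illustrative only): vector_id = 1 with rxq_map = 0x1 asks the
+ * PF to route interrupts of Rx queue 0 on the given VSI to MSI-X vector 1.
+ */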
+struct virtchnl_vector_map {
+	u16 vsi_id;
+	u16 vector_id;
+	u16 rxq_map;
+	u16 txq_map;
+	u16 rxitr_idx;
+	u16 txitr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_vector_map);
+
+struct virtchnl_irq_map_info {
+	u16 num_vectors;
+	struct virtchnl_vector_map vecmap[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(14, virtchnl_irq_map_info);
+
+/* VIRTCHNL_OP_ENABLE_QUEUES
+ * VIRTCHNL_OP_DISABLE_QUEUES
+ * VF sends these messages to enable or disable TX/RX queue pairs.
+ * The queues fields are bitmaps indicating which queues to act upon.
+ * (Currently, we only support 16 queues per VF, but we make the field
+ * u32 to allow for expansion.)
+ * PF performs requested action and returns status.
+ */
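+/* For example (illustrative only): rx_queues = 0x3 and tx_queues = 0x3 act on
+ * Rx/Tx queue pairs 0 and 1 of the VSI selected by vsi_id.
+ */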
+struct virtchnl_queue_select {
+	u16 vsi_id;
+	u16 pad;
+	u32 rx_queues;
+	u32 tx_queues;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_select);
+
+/* VIRTCHNL_OP_ADD_ETH_ADDR
+ * VF sends this message in order to add one or more unicast or multicast
+ * address filters for the specified VSI.
+ * PF adds the filters and returns status.
+ */
+
+/* VIRTCHNL_OP_DEL_ETH_ADDR
+ * VF sends this message in order to remove one or more unicast or multicast
+ * filters for the specified VSI.
+ * PF removes the filters and returns status.
+ */
+
+struct virtchnl_ether_addr {
+	u8 addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+	u8 pad[2];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_ether_addr);
+
+struct virtchnl_ether_addr_list {
+	u16 vsi_id;
+	u16 num_elements;
+	struct virtchnl_ether_addr list[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_ether_addr_list);
+
+#ifdef VIRTCHNL_SOL_VF_SUPPORT
+/* VIRTCHNL_OP_GET_ADDNL_SOL_CONFIG
+ * VF sends this message to get the default MTU and list of additional ethernet
+ * addresses it is allowed to use.
+ * PF responds with an indirect message containing
+ * virtchnl_addnl_solaris_config with zero or more
+ * virtchnl_ether_addr structures.
+ *
+ * It is expected that this operation will only ever be needed for Solaris VFs
+ * running under a Solaris PF.
+ */
+struct virtchnl_addnl_solaris_config {
+	u16 default_mtu;
+	struct virtchnl_ether_addr_list al;
+};
+
+#endif
+/* VIRTCHNL_OP_ADD_VLAN
+ * VF sends this message to add one or more VLAN tag filters for receives.
+ * PF adds the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+/* VIRTCHNL_OP_DEL_VLAN
+ * VF sends this message to remove one or more VLAN tag filters for receives.
+ * PF removes the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+struct virtchnl_vlan_filter_list {
+	u16 vsi_id;
+	u16 num_elements;
+	u16 vlan_id[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vlan_filter_list);
+
+/* VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE
+ * VF sends VSI id and flags.
+ * PF returns status code in retval.
+ * Note: we assume that broadcast accept mode is always enabled.
+ */
+struct virtchnl_promisc_info {
+	u16 vsi_id;
+	u16 flags;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_promisc_info);
+
+#define FLAG_VF_UNICAST_PROMISC	0x00000001
+#define FLAG_VF_MULTICAST_PROMISC	0x00000002
+
+/* VIRTCHNL_OP_GET_STATS
+ * VF sends this message to request stats for the selected VSI. VF uses
+ * the virtchnl_queue_select struct to specify the VSI. The queue_id
+ * field is ignored by the PF.
+ *
+ * PF replies with struct virtchnl_eth_stats in an external buffer.
+ */
+
+struct virtchnl_eth_stats {
+	u64 rx_bytes;			/* received bytes */
+	u64 rx_unicast;			/* received unicast pkts */
+	u64 rx_multicast;		/* received multicast pkts */
+	u64 rx_broadcast;		/* received broadcast pkts */
+	u64 rx_discards;
+	u64 rx_unknown_protocol;
+	u64 tx_bytes;			/* transmitted bytes*/
+	u64 tx_unicast;			/* transmitted unicast pkts */
+	u64 tx_multicast;		/* transmitted multicast pkts */
+	u64 tx_broadcast;		/* transmitted broadcast pkts */
+	u64 tx_discards;
+	u64 tx_errors;
+};
+
+/* VIRTCHNL_OP_CONFIG_RSS_KEY
+ * VIRTCHNL_OP_CONFIG_RSS_LUT
+ * VF sends these messages to configure RSS. Only supported if both PF
+ * and VF drivers set the VIRTCHNL_VF_OFFLOAD_RSS_PF bit during
+ * configuration negotiation. If this is the case, then the RSS fields in
+ * the VF resource struct are valid.
+ * Both the key and LUT are initialized to 0 by the PF, meaning that
+ * RSS is effectively disabled until set up by the VF.
+ */
+struct virtchnl_rss_key {
+	u16 vsi_id;
+	u16 key_len;
+	u8 key[1];         /* RSS hash key, packed bytes */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key);
+
+struct virtchnl_rss_lut {
+	u16 vsi_id;
+	u16 lut_entries;
+	u8 lut[1];        /* RSS lookup table */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
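+
+/* Sizing example (illustrative only): a message carrying a 52-byte RSS key
+ * uses msglen = sizeof(struct virtchnl_rss_key) + 52 - 1, which matches the
+ * length check performed in virtchnl_vc_validate_vf_msg() below.
+ */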
+
+/* VIRTCHNL_OP_GET_RSS_HENA_CAPS
+ * VIRTCHNL_OP_SET_RSS_HENA
+ * VF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the PF sets these to all possible traffic types that the
+ * hardware supports. The VF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ */
+struct virtchnl_rss_hena {
+	u64 hena;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_rss_hena);
+
+/* VIRTCHNL_OP_EVENT
+ * PF sends this message to inform the VF driver of events that may affect it.
+ * No direct response is expected from the VF, though it may generate other
+ * messages in response to this one.
+ */
+enum virtchnl_event_codes {
+	VIRTCHNL_EVENT_UNKNOWN = 0,
+	VIRTCHNL_EVENT_LINK_CHANGE,
+	VIRTCHNL_EVENT_RESET_IMPENDING,
+	VIRTCHNL_EVENT_PF_DRIVER_CLOSE,
+};
+
+#define PF_EVENT_SEVERITY_INFO		0
+#define PF_EVENT_SEVERITY_ATTENTION	1
+#define PF_EVENT_SEVERITY_ACTION_REQUIRED	2
+#define PF_EVENT_SEVERITY_CERTAIN_DOOM	255
+
+struct virtchnl_pf_event {
+	enum virtchnl_event_codes event;
+	union {
+		struct {
+			enum virtchnl_link_speed link_speed;
+			bool link_status;
+		} link_event;
+	} event_data;
+
+	int severity;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_pf_event);
+
+#ifdef VIRTCHNL_IWARP
+
+/* VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP
+ * VF uses this message to request PF to map IWARP vectors to IWARP queues.
+ * The request for this originates from the VF IWARP driver through
+ * a client interface between VF LAN and VF IWARP driver.
+ * A vector could have an AEQ and CEQ attached to it although
+ * there is a single AEQ per VF IWARP instance in which case
+ * most vectors will have an INVALID_IDX for aeq and valid idx for ceq.
+ * There will never be a case where there will be multiple CEQs attached
+ * to a single vector.
+ * PF configures interrupt mapping and returns status.
+ */
+
+/* HW does not define a type value for AEQ; only for RX/TX and CEQ.
+ * In order for us to keep the interface simple, SW will define a
+ * unique type value for AEQ.
+ */
+#define QUEUE_TYPE_PE_AEQ  0x80
+#define QUEUE_INVALID_IDX  0xFFFF
+
+struct virtchnl_iwarp_qv_info {
+	u32 v_idx; /* msix_vector */
+	u16 ceq_idx;
+	u16 aeq_idx;
+	u8 itr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_iwarp_qv_info);
+
+struct virtchnl_iwarp_qvlist_info {
+	u32 num_vectors;
+	struct virtchnl_iwarp_qv_info qv_info[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_iwarp_qvlist_info);
+
+#endif
+
+/* VF reset states - these are written into the RSTAT register:
+ * VFGEN_RSTAT on the VF
+ * When the PF initiates a reset, it writes 0
+ * When the reset is complete, it writes 1
+ * When the PF detects that the VF has recovered, it writes 2
+ * VF checks this register periodically to determine if a reset has occurred,
+ * then polls it to know when the reset is complete.
+ * If either the PF or VF reads the register while the hardware
+ * is in a reset state, it will return DEADBEEF, which, when masked
+ * will result in 3.
+ */
+enum virtchnl_vfr_states {
+	VIRTCHNL_VFR_INPROGRESS = 0,
+	VIRTCHNL_VFR_COMPLETED,
+	VIRTCHNL_VFR_VFACTIVE,
+};
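+
+/* Illustrative check (names as used by the AVF PMD, an assumption here): the
+ * VF reads VFGEN_RSTAT, masks and shifts out the VFR_STATE field and compares
+ * it against the states above; the DEADBEEF pattern read during a hardware
+ * reset masks to 3, which matches none of them.
+ */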
+
+/**
+ * virtchnl_vc_validate_vf_msg
+ * @ver: Virtchnl version info
+ * @v_opcode: Opcode for the message
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * validate msg format against struct for each opcode
+ */
+static inline int
+virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
+			    u8 *msg, u16 msglen)
+{
+	bool err_msg_format = false;
+	int valid_len = 0;
+
+	/* Validate message length. */
+	switch (v_opcode) {
+	case VIRTCHNL_OP_VERSION:
+		valid_len = sizeof(struct virtchnl_version_info);
+		break;
+	case VIRTCHNL_OP_RESET_VF:
+		break;
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		if (VF_IS_V11(ver))
+			valid_len = sizeof(u32);
+		break;
+	case VIRTCHNL_OP_CONFIG_TX_QUEUE:
+		valid_len = sizeof(struct virtchnl_txq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_RX_QUEUE:
+		valid_len = sizeof(struct virtchnl_rxq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+		valid_len = sizeof(struct virtchnl_vsi_queue_config_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_vsi_queue_config_info *vqc =
+			    (struct virtchnl_vsi_queue_config_info *)msg;
+			valid_len += (vqc->num_queue_pairs *
+				      sizeof(struct
+					     virtchnl_queue_pair_info));
+			if (vqc->num_queue_pairs == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+		valid_len = sizeof(struct virtchnl_irq_map_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_irq_map_info *vimi =
+			    (struct virtchnl_irq_map_info *)msg;
+			valid_len += (vimi->num_vectors *
+				      sizeof(struct virtchnl_vector_map));
+			if (vimi->num_vectors == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ENABLE_QUEUES:
+	case VIRTCHNL_OP_DISABLE_QUEUES:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+	case VIRTCHNL_OP_ADD_ETH_ADDR:
+	case VIRTCHNL_OP_DEL_ETH_ADDR:
+		valid_len = sizeof(struct virtchnl_ether_addr_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_ether_addr_list *veal =
+			    (struct virtchnl_ether_addr_list *)msg;
+			valid_len += veal->num_elements *
+			    sizeof(struct virtchnl_ether_addr);
+			if (veal->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ADD_VLAN:
+	case VIRTCHNL_OP_DEL_VLAN:
+		valid_len = sizeof(struct virtchnl_vlan_filter_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_vlan_filter_list *vfl =
+			    (struct virtchnl_vlan_filter_list *)msg;
+			valid_len += vfl->num_elements * sizeof(u16);
+			if (vfl->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+		valid_len = sizeof(struct virtchnl_promisc_info);
+		break;
+	case VIRTCHNL_OP_GET_STATS:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+#ifdef VIRTCHNL_IWARP
+	case VIRTCHNL_OP_IWARP:
+		/* These messages are opaque to us and will be validated in
+		 * the RDMA client code. We just need to check for nonzero
+		 * length. The firmware will enforce max length restrictions.
+		 */
+		if (msglen)
+			valid_len = msglen;
+		else
+			err_msg_format = true;
+		break;
+	case VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP:
+		break;
+	case VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP:
+		valid_len = sizeof(struct virtchnl_iwarp_qvlist_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_iwarp_qvlist_info *qv =
+				(struct virtchnl_iwarp_qvlist_info *)msg;
+			if (qv->num_vectors == 0) {
+				err_msg_format = true;
+				break;
+			}
+			valid_len += ((qv->num_vectors - 1) *
+				sizeof(struct virtchnl_iwarp_qv_info));
+		}
+		break;
+#endif
+	case VIRTCHNL_OP_CONFIG_RSS_KEY:
+		valid_len = sizeof(struct virtchnl_rss_key);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_key *vrk =
+				(struct virtchnl_rss_key *)msg;
+			valid_len += vrk->key_len - 1;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_LUT:
+		valid_len = sizeof(struct virtchnl_rss_lut);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_lut *vrl =
+				(struct virtchnl_rss_lut *)msg;
+			valid_len += vrl->lut_entries - 1;
+		}
+		break;
+	case VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+		break;
+	case VIRTCHNL_OP_SET_RSS_HENA:
+		valid_len = sizeof(struct virtchnl_rss_hena);
+		break;
+	case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+	case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+		break;
+	case VIRTCHNL_OP_REQUEST_QUEUES:
+		valid_len = sizeof(struct virtchnl_vf_res_request);
+		break;
+	/* These are always errors coming from the VF. */
+	case VIRTCHNL_OP_EVENT:
+	case VIRTCHNL_OP_UNKNOWN:
+	default:
+		return VIRTCHNL_ERR_PARAM;
+	}
+	/* few more checks */
+	if (err_msg_format || valid_len != msglen)
+		return VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH;
+
+	return 0;
+}
+#endif /* _VIRTCHNL_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v4 02/15] net/avf: initialization of avf PMD
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 01/15] net/avf/base: add base code for avf PMD Wenzhuo Lu
@ 2018-01-05  8:21     ` Wenzhuo Lu
  2018-01-05 20:29       ` Stephen Hemminger
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 03/15] net/avf: enable queue and device Wenzhuo Lu
                       ` (13 subsequent siblings)
  15 siblings, 1 reply; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-05  8:21 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 config/common_base                      |   5 +
 drivers/net/Makefile                    |   1 +
 drivers/net/avf/Makefile                |  31 +++
 drivers/net/avf/avf.h                   | 187 ++++++++++++++
 drivers/net/avf/avf_ethdev.c            | 435 ++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c             | 304 ++++++++++++++++++++++
 drivers/net/avf/rte_pmd_avf_version.map |   4 +
 mk/rte.app.mk                           |   1 +
 8 files changed, 968 insertions(+)
 create mode 100644 drivers/net/avf/Makefile
 create mode 100644 drivers/net/avf/avf.h
 create mode 100644 drivers/net/avf/avf_ethdev.c
 create mode 100644 drivers/net/avf/avf_vchnl.c
 create mode 100644 drivers/net/avf/rte_pmd_avf_version.map

diff --git a/config/common_base b/config/common_base
index e74febe..ce4d9bb 100644
--- a/config/common_base
+++ b/config/common_base
@@ -226,6 +226,11 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
 CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 
 #
+# Compile burst-oriented AVF PMD driver
+#
+CONFIG_RTE_LIBRTE_AVF_PMD=n
+
+#
 # Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD
 #
 CONFIG_RTE_LIBRTE_MLX4_PMD=n
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index ef09b4e..7696b3f 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -38,6 +38,7 @@ endif
 
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += af_packet
 DIRS-$(CONFIG_RTE_LIBRTE_ARK_PMD) += ark
+DIRS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf
 DIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp
 DIRS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD) += bnx2x
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
new file mode 100644
index 0000000..fb520ea
--- /dev/null
+++ b/drivers/net/avf/Makefile
@@ -0,0 +1,31 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2017 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_avf.a
+
+CFLAGS += -O3
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
+LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_hash
+LDLIBS += -lrte_bus_pci
+
+EXPORT_MAP := rte_pmd_avf_version.map
+
+LIBABIVER := 1
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_adminq.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
new file mode 100644
index 0000000..4694cc5
--- /dev/null
+++ b/drivers/net/avf/avf.h
@@ -0,0 +1,187 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_ETHDEV_H_
+#define _AVF_ETHDEV_H_
+
+#include <rte_kvargs.h>
+
+#define AVF_AQ_LEN               32
+#define AVF_AQ_BUF_SZ            4096
+#define AVF_RESET_WAIT_CNT       50
+#define AVF_BUF_SIZE_MIN         1024
+#define AVF_FRAME_SIZE_MAX       9728
+#define AVF_QUEUE_BASE_ADDR_UNIT 128
+
+#define AVF_MAX_NUM_QUEUES       16
+/* Vlan table size */
+#define AVF_VLAN_TB_SIZE               (4096 / (CHAR_BIT * sizeof(uint32_t)))
+
+#define AVF_NUM_MACADDR_MAX      64
+
+#define AVF_DEFAULT_RX_PTHRESH      8
+#define AVF_DEFAULT_RX_HTHRESH      8
+#define AVF_DEFAULT_RX_WTHRESH      0
+
+#define AVF_DEFAULT_RX_FREE_THRESH  32
+
+#define AVF_DEFAULT_TX_PTHRESH      32
+#define AVF_DEFAULT_TX_HTHRESH      0
+#define AVF_DEFAULT_TX_WTHRESH      0
+
+#define AVF_DEFAULT_TX_FREE_THRESH  32
+#define AVF_DEFAULT_TX_RS_THRESH 32
+
+#define AVF_BASIC_OFFLOAD_CAPS  ( \
+	VF_BASE_MODE_OFFLOADS | \
+	VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | \
+	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
+
+#define AVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
+#define AVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
+
+/* Default queue interrupt throttling time in microseconds */
+#define AVF_ITR_INDEX_DEFAULT          0
+#define AVF_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */
+#define AVF_QUEUE_ITR_INTERVAL_MAX     8160 /* 8160 us */
+
+/* The overhead from MTU to max frame size.
+ * Considering QinQ packet, the VLAN tag needs to be counted twice.
+ */
+#define AVF_VLAN_TAG_SIZE               4
+#define AVF_ETH_OVERHEAD \
+	(ETHER_HDR_LEN + ETHER_CRC_LEN + AVF_VLAN_TAG_SIZE * 2)
+
+struct avf_adapter;
+struct avf_rx_queue;
+struct avf_tx_queue;
+
+/* Structure that defines a VSI, associated with an adapter. */
+struct avf_vsi {
+	struct avf_adapter *adapter; /* Backreference to associated adapter */
+	uint16_t vsi_id;
+	uint16_t nb_qps;         /* Number of queue pairs VSI can occupy */
+	uint16_t nb_used_qps;    /* Number of queue pairs VSI uses */
+	uint16_t max_macaddrs;   /* Maximum number of MAC addresses */
+	uint16_t base_vector;
+	uint16_t msix_intr;      /* The MSIX interrupt binds to VSI */
+};
+
+/* TODO: is it correct to assume the max number to be 16? */
+#define AVF_MAX_MSIX_VECTORS   16
+
+/* Structure to store private data specific for VF instance. */
+struct avf_info {
+	uint16_t num_queue_pairs;
+	uint16_t max_pkt_len; /* Maximum packet length */
+	uint16_t mac_num;     /* Number of MAC addresses */
+	uint32_t vlan[AVF_VLAN_TB_SIZE]; /* VLAN bit map */
+	bool promisc_unicast_enabled;
+	bool promisc_multicast_enabled;
+
+	struct virtchnl_version_info virtchnl_version;
+	struct virtchnl_vf_resource *vf_res; /* VF resource */
+	struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
+
+	volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
+	uint32_t cmd_retval; /* return value of the cmd response from PF */
+	uint8_t *aq_resp; /* buffer to store the adminq response from PF */
+
+	/* Event from pf */
+	bool dev_closed;
+	bool link_up;
+	enum virtchnl_link_speed link_speed;
+
+	struct avf_vsi vsi;
+	bool vf_reset;
+	uint64_t flags;
+
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	uint16_t nb_msix;   /* number of MSI-X interrupts on Rx */
+	uint16_t msix_base; /* msix vector base */
+	/* queue bitmask for each vector */
+	uint16_t rxq_map[AVF_MAX_MSIX_VECTORS];
+};
+
+#define AVF_MAX_PKT_TYPE 256
+
+/* Structure to store private data for each VF instance. */
+struct avf_adapter {
+	struct avf_hw hw;
+	struct rte_eth_dev *eth_dev;
+	struct avf_info vf;
+};
+
+/* AVF_DEV_PRIVATE_TO */
+#define AVF_DEV_PRIVATE_TO_ADAPTER(adapter) \
+	((struct avf_adapter *)adapter)
+#define AVF_DEV_PRIVATE_TO_VF(adapter) \
+	(&((struct avf_adapter *)adapter)->vf)
+#define AVF_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct avf_adapter *)adapter)->hw)
+
+/* AVF_VSI_TO */
+#define AVF_VSI_TO_HW(vsi) \
+	(&(((struct avf_vsi *)vsi)->adapter->hw))
+#define AVF_VSI_TO_VF(vsi) \
+	(&(((struct avf_vsi *)vsi)->adapter->vf))
+#define AVF_VSI_TO_ETH_DEV(vsi) \
+	(((struct avf_vsi *)vsi)->adapter->eth_dev)
+
+static inline void
+avf_init_adminq_parameter(struct avf_hw *hw)
+{
+	hw->aq.num_arq_entries = AVF_AQ_LEN;
+	hw->aq.num_asq_entries = AVF_AQ_LEN;
+	hw->aq.arq_buf_size = AVF_AQ_BUF_SZ;
+	hw->aq.asq_buf_size = AVF_AQ_BUF_SZ;
+}
+
+static inline uint16_t
+avf_calc_itr_interval(int16_t interval)
+{
+	if (interval < 0 || interval > AVF_QUEUE_ITR_INTERVAL_MAX)
+		interval = AVF_QUEUE_ITR_INTERVAL_DEFAULT;
+
+	/* Convert to hardware count, as writing each 1 represents 2 us */
+	return interval / 2;
+}
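+
+/* Example: avf_calc_itr_interval(32) returns 16 (the default 32 us interval
+ * expressed in 2 us hardware units); out-of-range requests fall back to the
+ * default before conversion.
+ */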
+
+/* structure used for sending and checking response of virtchnl ops */
+struct avf_cmd_info {
+	enum virtchnl_ops ops;
+	uint8_t *in_args;       /* buffer for sending */
+	uint32_t in_args_size;  /* buffer size for sending */
+	uint8_t *out_buffer;    /* buffer for response */
+	uint32_t out_size;      /* buffer size for response */
+};
+
+/* Clear the current command. Only call this after
+ * _atomic_set_cmd() has succeeded.
+ */
+static inline void
+_clear_cmd(struct avf_info *vf)
+{
+	rte_wmb();
+	vf->pend_cmd = VIRTCHNL_OP_UNKNOWN;
+	vf->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
+}
+
+/* Check whether a cmd is pending. If none, set the new command. */
+static inline int
+_atomic_set_cmd(struct avf_info *vf, enum virtchnl_ops ops)
+{
+	int ret = rte_atomic32_cmpset(&vf->pend_cmd, VIRTCHNL_OP_UNKNOWN, ops);
+
+	if (!ret)
+		PMD_DRV_LOG(ERR, "There is incomplete cmd %d", vf->pend_cmd);
+
+	return !ret;
+}
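+
+/* Illustrative usage (a sketch of the intended pattern, not an extra API):
+ * a caller first wins the pending-command slot with _atomic_set_cmd(), then
+ * sends the message via avf_aq_send_msg_to_pf() and polls until pend_cmd is
+ * back to VIRTCHNL_OP_UNKNOWN (reset by _clear_cmd()) or a timeout expires.
+ */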
+
+int avf_check_api_version(struct avf_adapter *adapter);
+int avf_get_vf_resource(struct avf_adapter *adapter);
+void avf_handle_virtchnl_msg(struct rte_eth_dev *dev);
+#endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
new file mode 100644
index 0000000..3a64c88
--- /dev/null
+++ b/drivers/net/avf/avf_ethdev.c
@@ -0,0 +1,435 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <sys/queue.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+
+#include <rte_interrupts.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_ethdev_pci.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_adminq_cmd.h"
+#include "base/avf_type.h"
+
+#include "avf.h"
+
+int avf_logtype_init;
+int avf_logtype_driver;
+static const struct rte_pci_id pci_id_avf_map[] = {
+	{ RTE_PCI_DEVICE(AVF_INTEL_VENDOR_ID, AVF_DEV_ID_ADAPTIVE_VF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static const struct eth_dev_ops avf_eth_dev_ops = {
+};
+
+static int
+avf_check_vf_reset_done(struct avf_hw *hw)
+{
+	int i, reset;
+
+	for (i = 0; i < AVF_RESET_WAIT_CNT; i++) {
+		reset = AVF_READ_REG(hw, AVFGEN_RSTAT) &
+			AVFGEN_RSTAT_VFR_STATE_MASK;
+		reset = reset >> AVFGEN_RSTAT_VFR_STATE_SHIFT;
+		if (reset == VIRTCHNL_VFR_VFACTIVE ||
+		    reset == VIRTCHNL_VFR_COMPLETED)
+			break;
+		rte_delay_ms(20);
+	}
+
+	if (i >= AVF_RESET_WAIT_CNT)
+		return -1;
+
+	return 0;
+}
+
+static int
+avf_init_vf(struct rte_eth_dev *dev)
+{
+	int i, err, bufsz;
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	err = avf_set_mac_type(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
+		goto err;
+	}
+
+	err = avf_check_vf_reset_done(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "VF is still resetting");
+		goto err;
+	}
+
+	avf_init_adminq_parameter(hw);
+	err = avf_init_adminq(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "init_adminq failed: %d", err);
+		goto err;
+	}
+
+	vf->aq_resp = rte_zmalloc("vf_aq_resp", AVF_AQ_BUF_SZ, 0);
+	if (!vf->aq_resp) {
+		PMD_INIT_LOG(ERR, "unable to allocate vf_aq_resp memory");
+		goto err_aq;
+	}
+	if (avf_check_api_version(adapter) != 0) {
+		PMD_INIT_LOG(ERR, "check_api version failed");
+		goto err_api;
+	}
+
+	bufsz = sizeof(struct virtchnl_vf_resource) +
+		(AVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource));
+	vf->vf_res = rte_zmalloc("vf_res", bufsz, 0);
+	if (!vf->vf_res) {
+		PMD_INIT_LOG(ERR, "unable to allocate vf_res memory");
+		goto err_api;
+	}
+	if (avf_get_vf_resource(adapter) != 0) {
+		PMD_INIT_LOG(ERR, "avf_get_vf_resource failed");
+		goto err_alloc;
+	}
+	/* Allocate memory for RSS info */
+	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		vf->rss_key = rte_zmalloc("rss_key",
+					  vf->vf_res->rss_key_size, 0);
+		if (!vf->rss_key) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_key memory");
+			goto err_rss;
+		}
+		vf->rss_lut = rte_zmalloc("rss_lut",
+					  vf->vf_res->rss_lut_size, 0);
+		if (!vf->rss_lut) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_lut memory");
+			goto err_rss;
+		}
+	}
+	return 0;
+err_rss:
+	rte_free(vf->rss_key);
+	rte_free(vf->rss_lut);
+err_alloc:
+	rte_free(vf->vf_res);
+	vf->vsi_res = NULL;
+err_api:
+	rte_free(vf->aq_resp);
+err_aq:
+	avf_shutdown_adminq(hw);
+err:
+	return -1;
+}
+
+/* Enable default admin queue interrupt setting */
+static inline void
+avf_enable_irq0(struct avf_hw *hw)
+{
+	/* Enable admin queue interrupt trigger */
+	AVF_WRITE_REG(hw, AVFINT_ICR0_ENA1, AVFINT_ICR0_ENA1_ADMINQ_MASK);
+
+	AVF_WRITE_REG(hw, AVFINT_DYN_CTL01, AVFINT_DYN_CTL01_INTENA_MASK |
+					    AVFINT_DYN_CTL01_ITR_INDX_MASK);
+
+	AVF_WRITE_FLUSH(hw);
+}
+
+static inline void
+avf_disable_irq0(struct avf_hw *hw)
+{
+	/* Disable all interrupt types */
+	AVF_WRITE_REG(hw, AVFINT_ICR0_ENA1, 0);
+	AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+		      AVFINT_DYN_CTL01_ITR_INDX_MASK);
+	AVF_WRITE_FLUSH(hw);
+}
+
+static void
+avf_dev_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	avf_disable_irq0(hw);
+
+	avf_handle_virtchnl_msg(dev);
+
+done:
+	avf_enable_irq0(hw);
+}
+
+static int
+avf_dev_init(struct rte_eth_dev *eth_dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* assign ops func pointer */
+	eth_dev->dev_ops = &avf_eth_dev_ops;
+
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	hw->bus.bus_id = pci_dev->addr.bus;
+	hw->bus.device = pci_dev->addr.devid;
+	hw->bus.func = pci_dev->addr.function;
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->back = AVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+	adapter->eth_dev = eth_dev;
+
+	if (avf_init_vf(eth_dev) != 0) {
+		PMD_INIT_LOG(ERR, "Init vf failed");
+		return -1;
+	}
+
+	/* copy mac addr */
+	eth_dev->data->mac_addrs = rte_zmalloc(
+					"avf_mac",
+					ETHER_ADDR_LEN * AVF_NUM_MACADDR_MAX,
+					0);
+	if (!eth_dev->data->mac_addrs) {
+		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to"
+			     " store MAC addresses",
+			     ETHER_ADDR_LEN * AVF_NUM_MACADDR_MAX);
+		return -ENOMEM;
+	}
+	/* If the MAC address is not configured by host,
+	 * generate a random one.
+	 */
+	if (!is_valid_assigned_ether_addr((struct ether_addr *)hw->mac.addr))
+		eth_random_addr(hw->mac.addr);
+	ether_addr_copy((struct ether_addr *)hw->mac.addr,
+			&eth_dev->data->mac_addrs[0]);
+
+	/* register callback func to eal lib */
+	rte_intr_callback_register(&pci_dev->intr_handle,
+				   avf_dev_interrupt_handler,
+				   (void *)eth_dev);
+
+	/* enable uio intr after callback register */
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	/* configure and enable device interrupt */
+	avf_enable_irq0(hw);
+
+	return 0;
+}
+
+static void
+avf_dev_close(struct rte_eth_dev *dev)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+
+	avf_shutdown_adminq(hw);
+	/* disable uio intr before callback unregister */
+	rte_intr_disable(intr_handle);
+
+	/* unregister callback func from eal lib */
+	rte_intr_callback_unregister(intr_handle,
+				     avf_dev_interrupt_handler, dev);
+	avf_disable_irq0(hw);
+}
+
+static int
+avf_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+	if (hw->adapter_stopped == 0)
+		avf_dev_close(dev);
+
+	rte_free(vf->vf_res);
+	vf->vsi_res = NULL;
+	vf->vf_res = NULL;
+
+	rte_free(vf->aq_resp);
+	vf->aq_resp = NULL;
+
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
+	if (vf->rss_lut) {
+		rte_free(vf->rss_lut);
+		vf->rss_lut = NULL;
+	}
+	if (vf->rss_key) {
+		rte_free(vf->rss_key);
+		vf->rss_key = NULL;
+	}
+
+	return 0;
+}
+
+static int eth_avf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+			     struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+		sizeof(struct avf_adapter), avf_dev_init);
+}
+
+static int eth_avf_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, avf_dev_uninit);
+}
+
+/* Adaptive virtual function driver struct */
+static struct rte_pci_driver rte_avf_pmd = {
+	.id_table = pci_id_avf_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA,
+	.probe = eth_avf_pci_probe,
+	.remove = eth_avf_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_avf, rte_avf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_avf, pci_id_avf_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_avf, "* igb_uio | vfio-pci");
+RTE_INIT(avf_init_log);
+static void
+avf_init_log(void)
+{
+	avf_logtype_init = rte_log_register("pmd.avf.init");
+	if (avf_logtype_init >= 0)
+		rte_log_set_level(avf_logtype_init, RTE_LOG_NOTICE);
+	avf_logtype_driver = rte_log_register("pmd.avf.driver");
+	if (avf_logtype_driver >= 0)
+		rte_log_set_level(avf_logtype_driver, RTE_LOG_NOTICE);
+}
+
+/* memory func for base code */
+enum avf_status_code
+avf_allocate_dma_mem_d(__rte_unused struct avf_hw *hw,
+		       struct avf_dma_mem *mem,
+		       u64 size,
+		       u32 alignment)
+{
+	const struct rte_memzone *mz = NULL;
+	char z_name[RTE_MEMZONE_NAMESIZE];
+
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	snprintf(z_name, sizeof(z_name), "avf_dma_%"PRIu64, rte_rand());
+	mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY, 0,
+					 alignment, RTE_PGSIZE_2M);
+	if (!mz)
+		return AVF_ERR_NO_MEMORY;
+
+	mem->size = size;
+	mem->va = mz->addr;
+	mem->pa = mz->phys_addr;
+	mem->zone = (const void *)mz;
+	PMD_DRV_LOG(DEBUG,
+		    "memzone %s allocated with physical address: %"PRIu64,
+		    mz->name, mem->pa);
+
+	return AVF_SUCCESS;
+}
+
+enum avf_status_code
+avf_free_dma_mem_d(__rte_unused struct avf_hw *hw,
+		   struct avf_dma_mem *mem)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	PMD_DRV_LOG(DEBUG,
+		    "memzone %s to be freed with physical address: %"PRIu64,
+		    ((const struct rte_memzone *)mem->zone)->name, mem->pa);
+	rte_memzone_free((const struct rte_memzone *)mem->zone);
+	mem->zone = NULL;
+	mem->va = NULL;
+	mem->pa = (u64)0;
+
+	return AVF_SUCCESS;
+}
+
+enum avf_status_code
+avf_allocate_virt_mem_d(__rte_unused struct avf_hw *hw,
+			struct avf_virt_mem *mem,
+			u32 size)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	mem->size = size;
+	mem->va = rte_zmalloc("avf", size, 0);
+
+	if (mem->va)
+		return AVF_SUCCESS;
+	else
+		return AVF_ERR_NO_MEMORY;
+}
+
+enum avf_status_code
+avf_free_virt_mem_d(__rte_unused struct avf_hw *hw,
+		    struct avf_virt_mem *mem)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	rte_free(mem->va);
+	mem->va = NULL;
+
+	return AVF_SUCCESS;
+}
+
+/* spinlock func for base code */
+void
+avf_init_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_init(&sp->spinlock);
+}
+
+void
+avf_acquire_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_lock(&sp->spinlock);
+}
+
+void
+avf_release_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_unlock(&sp->spinlock);
+}
+
+void
+avf_destroy_spinlock_d(__rte_unused struct avf_spinlock *sp)
+{
+}
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
new file mode 100644
index 0000000..ebbee31
--- /dev/null
+++ b/drivers/net/avf/avf_vchnl.c
@@ -0,0 +1,304 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+
+#include <rte_debug.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_dev.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_adminq_cmd.h"
+#include "base/avf_type.h"
+
+#include "avf.h"
+
+#define MAX_TRY_TIMES 200
+#define ASQ_DELAY_MS  10
+
+/* Read data in admin queue to get msg from pf driver */
+static enum avf_status_code
+avf_read_msg_from_pf(struct avf_adapter *adapter, uint16_t buf_len,
+		     uint8_t *buf)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_arq_event_info event;
+	enum virtchnl_ops opcode;
+	int ret;
+
+	event.buf_len = buf_len;
+	event.msg_buf = buf;
+	ret = avf_clean_arq_element(hw, &event, NULL);
+	/* Can't read any msg from adminQ */
+	if (ret) {
+		PMD_DRV_LOG(DEBUG, "Can't read msg from AQ");
+		return ret;
+	}
+
+	opcode = (enum virtchnl_ops)rte_le_to_cpu_32(event.desc.cookie_high);
+	vf->cmd_retval = (enum virtchnl_status_code)rte_le_to_cpu_32(
+			event.desc.cookie_low);
+
+	PMD_DRV_LOG(DEBUG, "AQ from pf carries opcode %u, retval %d",
+		    opcode, vf->cmd_retval);
+
+	if (opcode != vf->pend_cmd)
+		PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
+			    vf->pend_cmd, opcode);
+
+	return AVF_SUCCESS;
+}
+
+static int
+avf_execute_vf_cmd(struct avf_adapter *adapter, struct avf_cmd_info *args)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_arq_event_info event_info;
+	enum avf_status_code ret;
+	int err = 0;
+	int i = 0;
+
+	if (_atomic_set_cmd(vf, args->ops))
+		return -1;
+
+	ret = avf_aq_send_msg_to_pf(hw, args->ops, AVF_SUCCESS,
+				    args->in_args, args->in_args_size, NULL);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops);
+		_clear_cmd(vf);
+		return err;
+	}
+
+	switch (args->ops) {
+	case VIRTCHNL_OP_RESET_VF:
+		/* no need to wait for response */
+		_clear_cmd(vf);
+		break;
+	case VIRTCHNL_OP_VERSION:
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		/* for init virtchnl ops, need to poll the response */
+		do {
+			ret = avf_read_msg_from_pf(adapter, args->out_size,
+						   args->out_buffer);
+			if (ret == AVF_SUCCESS)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+		} while (i++ < MAX_TRY_TIMES);
+		if (i >= MAX_TRY_TIMES ||
+		    vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -1;
+			PMD_DRV_LOG(ERR, "No response or return failure (%d)"
+				    " for cmd %d", vf->cmd_retval, args->ops);
+		}
+		_clear_cmd(vf);
+		break;
+
+	default:
+		/* For other virtchnl ops at runtime,
+		 * wait for the cmd done flag.
+		 */
+		do {
+			if (vf->pend_cmd == VIRTCHNL_OP_UNKNOWN)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+			/* If no msg or only a sys event is read, keep polling */
+		} while (i++ < MAX_TRY_TIMES);
+		/* If no response is received, clear the command */
+		if (i >= MAX_TRY_TIMES  ||
+		    vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -1;
+			PMD_DRV_LOG(ERR, "No response or return failure (%d)"
+				    " for cmd %d", vf->cmd_retval, args->ops);
+			_clear_cmd(vf);
+		}
+		break;
+	}
+
+	return err;
+}
+
+void
+avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_arq_event_info info;
+	uint16_t pending, aq_opc;
+	enum virtchnl_ops msg_opc;
+	enum avf_status_code msg_ret;
+	int ret;
+
+	info.buf_len = AVF_AQ_BUF_SZ;
+	if (!vf->aq_resp) {
+		PMD_DRV_LOG(ERR, "Buffer for adminq resp should not be NULL");
+		return;
+	}
+	info.msg_buf = vf->aq_resp;
+
+	pending = 1;
+	while (pending) {
+		ret = avf_clean_arq_element(hw, &info, &pending);
+
+		if (ret != AVF_SUCCESS) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from AdminQ, "
+				    "ret: %d", ret);
+			break;
+		}
+		aq_opc = rte_le_to_cpu_16(info.desc.opcode);
+		/* For a message sent from PF to VF, the opcode is stored in
+		 * cookie_high of struct avf_aq_desc, while the return error
+		 * code is stored in cookie_low; both are filled in by the PF
+		 * driver.
+		 */
+		msg_opc = (enum virtchnl_ops)rte_le_to_cpu_32(
+						  info.desc.cookie_high);
+		msg_ret = (enum avf_status_code)rte_le_to_cpu_32(
+						  info.desc.cookie_low);
+		switch (aq_opc) {
+		case avf_aqc_opc_send_msg_to_vf:
+			if (msg_opc == VIRTCHNL_OP_EVENT) {
+				/* TODO */
+			} else {
+				/* the message read is the expected one */
+				if (msg_opc == vf->pend_cmd) {
+					vf->cmd_retval = msg_ret;
+					/* prevent compiler reordering */
+					rte_compiler_barrier();
+					_clear_cmd(vf);
+				} else
+					PMD_DRV_LOG(ERR, "command mismatch, "
+						    "expect %u, get %u",
+						    vf->pend_cmd, msg_opc);
+				PMD_DRV_LOG(DEBUG,
+					    "adminq response is received,"
+					    " opcode = %d", msg_opc);
+			}
+			break;
+		default:
+			PMD_DRV_LOG(ERR, "Request %u is not supported yet",
+				    aq_opc);
+			break;
+		}
+	}
+}
+
+#define VIRTCHNL_VERSION_MAJOR_START 1
+#define VIRTCHNL_VERSION_MINOR_START 1
+
+/* Check the API version, waiting synchronously until the reply is read from
+ * the admin queue.
+ */
+int
+avf_check_api_version(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_version_info version, *pver;
+	struct avf_cmd_info args;
+	int err;
+
+	version.major = VIRTCHNL_VERSION_MAJOR;
+	version.minor = VIRTCHNL_VERSION_MINOR;
+
+	args.ops = VIRTCHNL_OP_VERSION;
+	args.in_args = (uint8_t *)&version;
+	args.in_args_size = sizeof(version);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Fail to execute command of OP_VERSION");
+		return err;
+	}
+
+	pver = (struct virtchnl_version_info *)args.out_buffer;
+	vf->virtchnl_version = *pver;
+
+	if (vf->virtchnl_version.major < VIRTCHNL_VERSION_MAJOR_START ||
+	    (vf->virtchnl_version.major == VIRTCHNL_VERSION_MAJOR_START &&
+	     vf->virtchnl_version.minor < VIRTCHNL_VERSION_MINOR_START)) {
+		PMD_INIT_LOG(ERR, "VIRTCHNL API version should not be lower"
+			     " than (%u.%u) to support Adaptive VF",
+			     VIRTCHNL_VERSION_MAJOR_START,
+			     VIRTCHNL_VERSION_MINOR_START);
+		return -1;
+	} else if (vf->virtchnl_version.major > VIRTCHNL_VERSION_MAJOR ||
+		   (vf->virtchnl_version.major == VIRTCHNL_VERSION_MAJOR &&
+		    vf->virtchnl_version.minor > VIRTCHNL_VERSION_MINOR)) {
+		PMD_INIT_LOG(ERR, "PF/VF API version mismatch:(%u.%u)-(%u.%u)",
+			     vf->virtchnl_version.major,
+			     vf->virtchnl_version.minor,
+			     VIRTCHNL_VERSION_MAJOR,
+			     VIRTCHNL_VERSION_MINOR);
+		return -1;
+	}
+
+	PMD_DRV_LOG(DEBUG, "Peer is a supported PF host");
+	return 0;
+}
+
+int
+avf_get_vf_resource(struct avf_adapter *adapter)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	uint32_t caps, len;
+	int err, i;
+
+	args.ops = VIRTCHNL_OP_GET_VF_RESOURCES;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	/* TODO: basic offload capabilities, need to
+	 * add advanced/optional offload capabilities
+	 */
+
+	caps = AVF_BASIC_OFFLOAD_CAPS;
+
+	args.in_args = (uint8_t *)&caps;
+	args.in_args_size = sizeof(caps);
+
+	err = avf_execute_vf_cmd(adapter, &args);
+
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+				 "OP_GET_VF_RESOURCE");
+		return -1;
+	}
+
+	len =  sizeof(struct virtchnl_vf_resource) +
+		      AVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource);
+
+	rte_memcpy(vf->vf_res, args.out_buffer,
+		   RTE_MIN(args.out_size, len));
+	/* parse VF config message back from PF */
+	avf_parse_hw_config(hw, vf->vf_res);
+	for (i = 0; i < vf->vf_res->num_vsis; i++) {
+		if (vf->vf_res->vsi_res[i].vsi_type == VIRTCHNL_VSI_SRIOV)
+			vf->vsi_res = &vf->vf_res->vsi_res[i];
+	}
+
+	if (!vf->vsi_res) {
+		PMD_INIT_LOG(ERR, "no LAN VSI found");
+		return -1;
+	}
+
+	vf->vsi.vsi_id = vf->vsi_res->vsi_id;
+	vf->vsi.nb_qps = vf->vsi_res->num_queue_pairs;
+	vf->vsi.adapter = adapter;
+
+	return 0;
+}
diff --git a/drivers/net/avf/rte_pmd_avf_version.map b/drivers/net/avf/rte_pmd_avf_version.map
new file mode 100644
index 0000000..179140f
--- /dev/null
+++ b/drivers/net/avf/rte_pmd_avf_version.map
@@ -0,0 +1,4 @@
+DPDK_18.02 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 6a6a745..78f23c5 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -119,6 +119,7 @@ _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK)  += -lrte_mempool_stack
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ARK_PMD)        += -lrte_pmd_ark
+_LDLIBS-$(CONFIG_RTE_LIBRTE_AVF_PMD)        += -lrte_pmd_avf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_AVP_PMD)        += -lrte_pmd_avp
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v4 03/15] net/avf: enable queue and device
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 01/15] net/avf/base: add base code for avf PMD Wenzhuo Lu
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 02/15] net/avf: initialization of " Wenzhuo Lu
@ 2018-01-05  8:21     ` Wenzhuo Lu
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 04/15] net/avf: enable basic Rx Tx func Wenzhuo Lu
                       ` (12 subsequent siblings)
  15 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-05  8:21 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

enable device and queue setup ops like the following (an illustrative
usage sketch follows the list):

 - dev_configure
 - dev_start
 - dev_stop
 - dev_close
 - dev_infos_get
 - rx_queue_start
 - rx_queue_stop
 - tx_queue_start
 - tx_queue_stop
 - rx_queue_setup
 - rx_queue_release
 - tx_queue_setup
 - tx_queue_release
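
For reference, a minimal application-side sketch that exercises these ops
through the standard ethdev API (port id 0 and a pre-created mbuf pool
"mb_pool" are assumptions for illustration, not part of this patch):

	struct rte_eth_conf conf = {0};
	uint16_t port = 0;

	/* dev_configure */
	rte_eth_dev_configure(port, 1, 1, &conf);
	/* rx/tx_queue_setup */
	rte_eth_rx_queue_setup(port, 0, 512,
			       rte_eth_dev_socket_id(port), NULL, mb_pool);
	rte_eth_tx_queue_setup(port, 0, 512,
			       rte_eth_dev_socket_id(port), NULL);
	/* dev_start, then dev_stop/dev_close on teardown */
	rte_eth_dev_start(port);
	rte_eth_dev_stop(port);
	rte_eth_dev_close(port);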

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/avf/Makefile     |   1 +
 drivers/net/avf/avf.h        |  18 ++
 drivers/net/avf/avf_ethdev.c | 366 +++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.c   | 616 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h   | 160 +++++++++++
 drivers/net/avf/avf_vchnl.c  | 359 ++++++++++++++++++++++++-
 6 files changed, 1518 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/avf/avf_rxtx.c
 create mode 100644 drivers/net/avf/avf_rxtx.h

diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index fb520ea..f4f7414 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -27,5 +27,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_rxtx.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 4694cc5..22886d4 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -38,6 +38,13 @@
 	VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | \
 	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
 
+#define AVF_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 |         \
+	ETH_RSS_NONFRAG_IPV4_TCP |  \
+	ETH_RSS_NONFRAG_IPV4_UDP |  \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_NONFRAG_IPV4_OTHER)
+
 #define AVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
 #define AVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
 
@@ -184,4 +191,15 @@ struct avf_cmd_info {
 int avf_check_api_version(struct avf_adapter *adapter);
 int avf_get_vf_resource(struct avf_adapter *adapter);
 void avf_handle_virtchnl_msg(struct rte_eth_dev *dev);
+int avf_enable_vlan_strip(struct avf_adapter *adapter);
+int avf_disable_vlan_strip(struct avf_adapter *adapter);
+int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
+		     bool rx, bool on);
+int avf_enable_queues(struct avf_adapter *adapter);
+int avf_disable_queues(struct avf_adapter *adapter);
+int avf_configure_rss_lut(struct avf_adapter *adapter);
+int avf_configure_rss_key(struct avf_adapter *adapter);
+int avf_configure_queues(struct avf_adapter *adapter);
+int avf_config_irq_map(struct avf_adapter *adapter);
+void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 3a64c88..605c3c4 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -31,6 +31,14 @@
 #include "base/avf_type.h"
 
 #include "avf.h"
+#include "avf_rxtx.h"
+
+static int avf_dev_configure(struct rte_eth_dev *dev);
+static int avf_dev_start(struct rte_eth_dev *dev);
+static void avf_dev_stop(struct rte_eth_dev *dev);
+static void avf_dev_close(struct rte_eth_dev *dev);
+static void avf_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -40,9 +48,366 @@
 };
 
 static const struct eth_dev_ops avf_eth_dev_ops = {
+	.dev_configure              = avf_dev_configure,
+	.dev_start                  = avf_dev_start,
+	.dev_stop                   = avf_dev_stop,
+	.dev_close                  = avf_dev_close,
+	.dev_infos_get              = avf_dev_info_get,
+	.rx_queue_start             = avf_dev_rx_queue_start,
+	.rx_queue_stop              = avf_dev_rx_queue_stop,
+	.tx_queue_start             = avf_dev_tx_queue_start,
+	.tx_queue_stop              = avf_dev_tx_queue_stop,
+	.rx_queue_setup             = avf_dev_rx_queue_setup,
+	.rx_queue_release           = avf_dev_rx_queue_release,
+	.tx_queue_setup             = avf_dev_tx_queue_setup,
+	.tx_queue_release           = avf_dev_tx_queue_release,
 };
 
 static int
+avf_dev_configure(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(ad);
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+
+	/* Vlan stripping setting */
+	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_VLAN) {
+		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			avf_enable_vlan_strip(ad);
+		else
+			avf_disable_vlan_strip(ad);
+	}
+	return 0;
+}
+
+static int
+avf_init_rss(struct avf_adapter *adapter)
+{
+	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_eth_rss_conf *rss_conf;
+	uint8_t i, j, nb_q;
+	int ret;
+
+	rss_conf = &adapter->eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = RTE_MIN(adapter->eth_dev->data->nb_rx_queues,
+		       AVF_MAX_NUM_QUEUES);
+
+	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+	if (adapter->eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
+		/* set all lut items to default queue */
+		for (i = 0; i < vf->vf_res->rss_lut_size; i++)
+			vf->rss_lut[i] = 0;
+		ret = avf_configure_rss_lut(adapter);
+		return ret;
+	}
+
+	/* In AVF, RSS enablement is controlled by the PF driver. It cannot
+	 * be changed based on rss_conf->rss_hf.
+	 */
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key) {
+		/* Calculate the default hash key */
+		for (i = 0; i < vf->vf_res->rss_key_size; i++)
+			vf->rss_key[i] = (uint8_t)rte_rand();
+	} else {
+		rte_memcpy(vf->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   vf->vf_res->rss_key_size));
+	}
+
+	/* init RSS LUT table */
+	for (i = 0, j = 0; i < vf->vf_res->rss_lut_size; i++, j++) {
+		if (j >= nb_q)
+			j = 0;
+		vf->rss_lut[i] = j;
+	}
+	/* send virtchnl ops to configure RSS */
+	ret = avf_configure_rss_lut(adapter);
+	if (ret)
+		return ret;
+	ret = avf_configure_rss_key(adapter);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int
+avf_init_rxq(struct rte_eth_dev *dev, struct avf_rx_queue *rxq)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_dev_data *dev_data = dev->data;
+	uint16_t buf_size, max_pkt_len, len;
+
+	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+
+	/* Calculate the maximum packet length allowed */
+	len = rxq->rx_buf_len * AVF_MAX_CHAINED_RX_BUFFERS;
+	max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	/* Check if the jumbo frame and maximum packet length are set
+	 * correctly.
+	 */
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (max_pkt_len <= ETHER_MAX_LEN ||
+		    max_pkt_len > AVF_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is enabled",
+				    (uint32_t)ETHER_MAX_LEN,
+				    (uint32_t)AVF_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (max_pkt_len < ETHER_MIN_LEN ||
+		    max_pkt_len > ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)ETHER_MIN_LEN,
+				    (uint32_t)ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	    (rxq->max_pkt_len + 2 * AVF_VLAN_TAG_SIZE) > buf_size) {
+		dev_data->scattered_rx = 1;
+	}
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	AVF_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
+static int
+avf_init_queues(struct rte_eth_dev *dev)
+{
+	struct avf_rx_queue **rxq =
+		(struct avf_rx_queue **)dev->data->rx_queues;
+	int i, ret = AVF_SUCCESS;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!rxq[i] || !rxq[i]->q_set)
+			continue;
+		ret = avf_init_rxq(dev, rxq[i]);
+		if (ret != AVF_SUCCESS)
+			break;
+	}
+	/* TODO: set rx/tx function to vector/scatter/single-segment
+	 * according to parameters
+	 */
+	return ret;
+}
+
+static int
+avf_start_queues(struct rte_eth_dev *dev)
+{
+	struct avf_rx_queue *rxq;
+	struct avf_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq->tx_deferred_start)
+			continue;
+		if (avf_dev_tx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
+			return -1;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq->rx_deferred_start)
+			continue;
+		if (avf_dev_rx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+avf_dev_start(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
+	uint16_t interval;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	hw->adapter_stopped = 0;
+
+	vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+	vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
+				      dev->data->nb_tx_queues);
+
+	/* TODO: Rx interrupt */
+
+	if (avf_init_queues(dev) != 0) {
+		PMD_DRV_LOG(ERR, "failed to do Queue init");
+		return -1;
+	}
+
+	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		if (avf_init_rss(adapter) != 0) {
+			PMD_DRV_LOG(ERR, "configure rss failed");
+			goto err_rss;
+		}
+	}
+
+	if (avf_configure_queues(adapter) != 0) {
+		PMD_DRV_LOG(ERR, "configure queues failed");
+		goto err_queue;
+	}
+
+	/* Map interrupt for writeback */
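+	/* Rx interrupt is not supported yet, so a single vector is enough:
+	 * either WB_ON_ITR is used, or the ITR of the misc vector is set
+	 * so that descriptor write-back still happens.
+	 */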
+	vf->nb_msix = 1;
+	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+		/* If WB_ON_ITR is supported, enable it */
+		vf->msix_base = AVF_RX_VEC_START;
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
+			      AVFINT_DYN_CTLN1_ITR_INDX_MASK |
+			      AVFINT_DYN_CTLN1_WB_ON_ITR_MASK);
+	} else {
+		/* WB_ON_ITR is not supported, so an interrupt is needed to
+		 * trigger descriptor write-back.
+		 */
+		vf->msix_base = AVF_MISC_VEC_ID;
+
+		/* set ITR to max */
+		interval = avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+			      AVFINT_DYN_CTL01_INTENA_MASK |
+			      (AVF_ITR_INDEX_DEFAULT <<
+			       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+			      (interval << AVFINT_DYN_CTL01_INTERVAL_SHIFT));
+	}
+	AVF_WRITE_FLUSH(hw);
+	/* map all queues to the same interrupt */
+	for (i = 0; i < dev->data->nb_rx_queues; i++)
+		vf->rxq_map[0] |= 1 << i;
+	if (avf_config_irq_map(adapter)) {
+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+		goto err_queue;
+	}
+
+	/* Set all mac addrs */
+	avf_add_del_all_mac_addr(adapter, TRUE);
+
+	if (avf_start_queues(dev) != 0) {
+		PMD_DRV_LOG(ERR, "enable queues failed");
+		goto err_mac;
+	}
+
+	/* TODO: enable interrupt for RX interrupt */
+	return 0;
+
+err_mac:
+	avf_add_del_all_mac_addr(adapter, FALSE);
+err_queue:
+err_rss:
+	return -1;
+}
+
+static void
+avf_dev_stop(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (hw->adapter_stopped == 1)
+		return;
+
+	avf_stop_queues(dev);
+
+	/*TODO: Disable the interrupt for Rx*/
+
+	/* TODO: Rx interrupt vector mapping free */
+
+	/* remove all mac addrs */
+	avf_add_del_all_mac_addr(adapter, FALSE);
+	hw->adapter_stopped = 1;
+}
+
+static void
+avf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	memset(dev_info, 0, sizeof(*dev_info));
+	dev_info->pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	dev_info->max_rx_queues = vf->vsi_res->num_queue_pairs;
+	dev_info->max_tx_queues = vf->vsi_res->num_queue_pairs;
+	dev_info->min_rx_bufsize = AVF_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = AVF_FRAME_SIZE_MAX;
+	dev_info->hash_key_size = vf->vf_res->rss_key_size;
+	dev_info->reta_size = vf->vf_res->rss_lut_size;
+	dev_info->flow_type_rss_offloads = AVF_RSS_OFFLOAD_ALL;
+	dev_info->max_mac_addrs = AVF_NUM_MACADDR_MAX;
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = AVF_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = AVF_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = AVF_DEFAULT_TX_RS_THRESH,
+		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
+				ETH_TXQ_FLAGS_NOOFFLOADS,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = AVF_MAX_RING_DESC,
+		.nb_min = AVF_MIN_RING_DESC,
+		.nb_align = AVF_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = AVF_MAX_RING_DESC,
+		.nb_min = AVF_MIN_RING_DESC,
+		.nb_align = AVF_ALIGN_RING_DESC,
+	};
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
@@ -250,6 +615,7 @@
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
+	avf_dev_stop(dev);
 	avf_shutdown_adminq(hw);
 	/* disable uio intr before callback unregister */
 	rte_intr_disable(intr_handle);
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
new file mode 100644
index 0000000..2d4fb4c
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx.c
@@ -0,0 +1,616 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+
+#include <rte_string_fns.h>
+#include <rte_memzone.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_tcp.h>
+#include <rte_sctp.h>
+#include <rte_udp.h>
+#include <rte_ip.h>
+#include <rte_net.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_type.h"
+#include "avf.h"
+#include "avf_rxtx.h"
+
+static inline int
+check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+{
+	/* The following constraints must be satisfied:
+	 *   thresh >= AVF_RX_MAX_BURST
+	 *   thresh < rxq->nb_rx_desc
+	 *   (rxq->nb_rx_desc % thresh) == 0
+	 */
+	if (thresh < AVF_RX_MAX_BURST ||
+	    thresh >= nb_desc ||
+	    (nb_desc % thresh != 0)) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u, "
+			     "greater than or equal to %u, "
+			     "and a divisor of %u",
+			     thresh, nb_desc, AVF_RX_MAX_BURST, nb_desc);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static inline int
+check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+		uint16_t tx_free_thresh)
+{
+	/* TX descriptors will have their RS bit set after tx_rs_thresh
+	 * descriptors have been used. The TX descriptor ring will be cleaned
+	 * after tx_free_thresh descriptors are used or if the number of
+	 * descriptors required to transmit a packet is greater than the
+	 * number of free TX descriptors.
+	 *
+	 * The following constraints must be satisfied:
+	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
+	 *  - tx_free_thresh must be less than the size of the ring minus 3.
+	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+	 *  - tx_rs_thresh must be a divisor of the ring size.
+	 *
+	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+	 * race condition, hence the maximum threshold constraints. When set
+	 * to zero use default values.
+	 */
+	if (tx_rs_thresh >= (nb_desc - 2)) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
+			     "number of TX descriptors (%u) minus 2",
+			     tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be less than the "
+			     "number of TX descriptors (%u) minus 3.",
+			     tx_free_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > tx_free_thresh) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
+			     "equal to tx_free_thresh (%u).",
+			     tx_rs_thresh, tx_free_thresh);
+		return -EINVAL;
+	}
+	if ((nb_desc % tx_rs_thresh) != 0) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
+			     "number of TX descriptors (%u).",
+			     tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline void
+reset_rx_queue(struct avf_rx_queue *rxq)
+{
+	uint16_t len, i;
+
+	if (!rxq)
+		return;
+
+	len = rxq->nb_rx_desc + AVF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(union avf_rx_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < AVF_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+static inline void
+reset_tx_queue(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *txe;
+	uint16_t i, prev, size;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct avf_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
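+	/* Link the SW ring entries into a circular list and mark every
+	 * descriptor as done so that the first clean-up pass succeeds.
+	 */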
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i].cmd_type_offset_bsz =
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_used = 0;
+
+	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+	txq->nb_free = txq->nb_tx_desc - 1;
+
+	txq->next_dd = txq->rs_thresh - 1;
+	txq->next_rs = txq->rs_thresh - 1;
+}
+
+static int
+alloc_rxq_mbufs(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &rxq->rx_ring[i];
+		rxd->read.pkt_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
+#ifndef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+		rxd->read.rsvd1 = 0;
+		rxd->read.rsvd2 = 0;
+#endif
+
+		rxq->sw_ring[i] = mbuf;
+	}
+
+	return 0;
+}
+
+static inline void
+release_rxq_mbufs(struct avf_rx_queue *rxq)
+{
+	struct rte_mbuf *mbuf;
+	uint16_t i;
+
+	if (!rxq->sw_ring)
+		return;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i]) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+			rxq->sw_ring[i] = NULL;
+		}
+	}
+}
+
+static inline void
+release_txq_mbufs(struct avf_tx_queue *txq)
+{
+	uint16_t i;
+
+	if (!txq || !txq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		if (txq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+	}
+}
+
+int
+avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		       uint16_t nb_desc, unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *mp)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+	uint16_t len;
+	uint16_t rx_free_thresh;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (nb_desc % AVF_ALIGN_RING_DESC != 0 ||
+	    nb_desc > AVF_MAX_RING_DESC ||
+	    nb_desc < AVF_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/* Check free threshold */
+	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+			 AVF_DEFAULT_RX_FREE_THRESH :
+			 rx_conf->rx_free_thresh;
+	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx]) {
+		avf_dev_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket("avf rxq",
+				 sizeof(struct avf_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!rxq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "rx queue data structure");
+		return -ENOMEM;
+	}
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->queue_id = queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->crc_len = 0; /* crc stripping by default */
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->rx_hdr_len = 0;
+
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = RTE_ALIGN(len, (1 << AVF_RXQ_CTX_DBUFF_SHIFT));
+
+	/* Allocate the software ring. */
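+	/* AVF_RX_MAX_BURST extra entries are reserved; reset_rx_queue()
+	 * points them at the dummy mbuf to guard bulk allocation overruns.
+	 */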
+	len = nb_desc + AVF_RX_MAX_BURST;
+	rxq->sw_ring =
+		rte_zmalloc_socket("avf rx sw ring",
+				   sizeof(struct rte_mbuf *) * len,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!rxq->sw_ring) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+
+	/* Allocate the maximum number of RX ring hardware descriptors with
+	 * a little extra to support bulk allocation.
+	 */
+	len = AVF_MAX_RING_DESC + AVF_RX_MAX_BURST;
+	ring_size = RTE_ALIGN(len * sizeof(union avf_rx_desc),
+			      AVF_DMA_MEM_ALIGN);
+	mz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
+				      ring_size, AVF_RING_BASE_ALIGN,
+				      socket_id);
+	if (!mz) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
+		rte_free(rxq->sw_ring);
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
+	rxq->rx_ring_phys_addr = mz->iova;
+	rxq->rx_ring = (union avf_rx_desc *)mz->addr;
+
+	rxq->mz = mz;
+	reset_rx_queue(rxq);
+	rxq->q_set = TRUE;
+	dev->data->rx_queues[queue_idx] = rxq;
+	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
+
+	return 0;
+}
+
+int
+avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (nb_desc % AVF_ALIGN_RING_DESC != 0 ||
+	    nb_desc > AVF_MAX_RING_DESC ||
+	    nb_desc < AVF_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is "
+			    "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh) ?
+		tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH);
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+		tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
+	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx]) {
+		avf_dev_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("avf txq",
+				 sizeof(struct avf_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!txq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "tx queue structure");
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->rs_thresh = tx_rs_thresh;
+	txq->free_thresh = tx_free_thresh;
+	txq->queue_id = queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->txq_flags = tx_conf->txq_flags;
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	/* Allocate software ring */
+	txq->sw_ring =
+		rte_zmalloc_socket("avf tx sw ring",
+				   sizeof(struct avf_tx_entry) * nb_desc,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!txq->sw_ring) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		rte_free(txq);
+		return -ENOMEM;
+	}
+
+	/* Allocate TX hardware ring descriptors. */
+	ring_size = sizeof(struct avf_tx_desc) * AVF_MAX_RING_DESC;
+	ring_size = RTE_ALIGN(ring_size, AVF_DMA_MEM_ALIGN);
+	mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+				      ring_size, AVF_RING_BASE_ALIGN,
+				      socket_id);
+	if (!mz) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+		rte_free(txq->sw_ring);
+		rte_free(txq);
+		return -ENOMEM;
+	}
+	txq->tx_ring_phys_addr = mz->iova;
+	txq->tx_ring = (struct avf_tx_desc *)mz->addr;
+
+	txq->mz = mz;
+	reset_tx_queue(txq);
+	txq->q_set = TRUE;
+	dev->data->tx_queues[queue_idx] = txq;
+	txq->qtx_tail = hw->hw_addr + AVF_QTX_TAIL1(queue_idx);
+
+	return 0;
+}
+
+int
+avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int err = 0;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	err = alloc_rxq_mbufs(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+		return err;
+	}
+
+	rte_wmb();
+
+	/* Init the RX tail register. */
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	AVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = avf_switch_queue(adapter, rx_queue_id, TRUE, TRUE);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	else
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+int
+avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int err = 0;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
+	AVF_PCI_REG_WRITE(txq->qtx_tail, 0);
+	AVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = avf_switch_queue(adapter, tx_queue_id, FALSE, TRUE);
+
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	else
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+int
+avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int err;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = avf_switch_queue(adapter, rx_queue_id, TRUE, FALSE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	release_rxq_mbufs(rxq);
+	reset_rx_queue(rxq);
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int err;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = avf_switch_queue(adapter, tx_queue_id, FALSE, FALSE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	release_txq_mbufs(txq);
+	reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+void
+avf_dev_rx_queue_release(void *rxq)
+{
+	struct avf_rx_queue *q = (struct avf_rx_queue *)rxq;
+
+	if (!q)
+		return;
+
+	release_rxq_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+avf_dev_tx_queue_release(void *txq)
+{
+	struct avf_tx_queue *q = (struct avf_tx_queue *)txq;
+
+	if (!q)
+		return;
+
+	release_txq_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+avf_stop_queues(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	struct avf_tx_queue *txq;
+	int ret, i;
+
+	/* Stop All queues */
+	ret = avf_disable_queues(adapter);
+	if (ret)
+		PMD_DRV_LOG(WARNING, "Fail to stop queues");
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		release_txq_mbufs(txq);
+		reset_tx_queue(txq);
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		release_rxq_mbufs(rxq);
+		reset_rx_queue(rxq);
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
new file mode 100644
index 0000000..e227cd1
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx.h
@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_RXTX_H_
+#define _AVF_RXTX_H_
+
+/* The queue length (QLEN) must be a whole multiple of 32 descriptors. */
+#define AVF_ALIGN_RING_DESC      32
+#define AVF_MIN_RING_DESC        64
+#define AVF_MAX_RING_DESC        4096
+#define AVF_DMA_MEM_ALIGN        4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define AVF_RING_BASE_ALIGN      128
+
+/* used for Rx Bulk Allocate */
+#define AVF_RX_MAX_BURST         32
+
+#define DEFAULT_TX_RS_THRESH     32
+#define DEFAULT_TX_FREE_THRESH   32
+
+/* HW desc structure, both 16-byte and 32-byte types are supported */
+#ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+#define avf_rx_desc avf_16byte_rx_desc
+#else
+#define avf_rx_desc avf_32byte_rx_desc
+#endif
+
+/* Structure associated with each Rx queue. */
+struct avf_rx_queue {
+	struct rte_mempool *mp;       /* mbuf pool to populate Rx ring */
+	const struct rte_memzone *mz; /* memzone for Rx ring */
+	volatile union avf_rx_desc *rx_ring; /* Rx ring virtual address */
+	uint64_t rx_ring_phys_addr;   /* Rx ring DMA address */
+	struct rte_mbuf **sw_ring;     /* address of SW ring */
+	uint16_t nb_rx_desc;          /* ring length */
+	uint16_t rx_tail;             /* current value of tail */
+	volatile uint8_t *qrx_tail;   /* register address of tail */
+	uint16_t rx_free_thresh;      /* max free RX desc to hold */
+	uint16_t nb_rx_hold;          /* number of held free RX desc */
+	struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
+	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
+	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
+
+	uint16_t port_id;       /* device port ID */
+	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
+	uint16_t queue_id;      /* Rx queue index */
+	uint16_t rx_buf_len;    /* The packet buffer size */
+	uint16_t rx_hdr_len;    /* The header buffer size */
+	uint16_t max_pkt_len;   /* Maximum packet length */
+
+	bool q_set;             /* if rx queue has been configured */
+	bool rx_deferred_start; /* don't start this queue in dev start */
+};
+
+struct avf_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint16_t next_id;
+	uint16_t last_id;
+};
+
+/* Structure associated with each TX queue. */
+struct avf_tx_queue {
+	const struct rte_memzone *mz;  /* memzone for Tx ring */
+	volatile struct avf_tx_desc *tx_ring; /* Tx ring virtual address */
+	uint64_t tx_ring_phys_addr;    /* Tx ring DMA address */
+	struct avf_tx_entry *sw_ring;  /* address array of SW ring */
+	uint16_t nb_tx_desc;           /* ring length */
+	uint16_t tx_tail;              /* current value of tail */
+	volatile uint8_t *qtx_tail;    /* register address of tail */
+	/* number of used desc since RS bit set */
+	uint16_t nb_used;
+	uint16_t nb_free;
+	uint16_t last_desc_cleaned;    /* last desc that has been cleaned */
+	uint16_t free_thresh;
+	uint16_t rs_thresh;
+
+	uint16_t port_id;
+	uint16_t queue_id;
+	uint32_t txq_flags;
+	uint16_t next_dd;              /* next to check DD, for VPMD */
+	uint16_t next_rs;              /* next to set RS, for VPMD */
+
+	bool q_set;                    /* if tx queue has been configured */
+	bool tx_deferred_start;        /* don't start this queue in dev start */
+};
+
+int avf_dev_rx_queue_setup(struct rte_eth_dev *dev,
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_rxconf *rx_conf,
+			   struct rte_mempool *mp);
+
+int avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void avf_dev_rx_queue_release(void *rxq);
+
+int avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_txconf *tx_conf);
+int avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void avf_dev_tx_queue_release(void *txq);
+void avf_stop_queues(struct rte_eth_dev *dev);
+
+static inline
+void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
+			    const void *desc,
+			    uint16_t rx_id)
+{
+#ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+	const union avf_16byte_rx_desc *rx_desc = desc;
+
+	printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64"\n",
+	       rxq->queue_id, rx_id, rx_desc->read.pkt_addr,
+	       rx_desc->read.hdr_addr);
+#else
+	const union avf_32byte_rx_desc *rx_desc = desc;
+
+	printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64
+	       " QW2: 0x%016"PRIx64" QW3: 0x%016"PRIx64"\n", rxq->queue_id,
+	       rx_id, rx_desc->read.pkt_addr, rx_desc->read.hdr_addr,
+	       rx_desc->read.rsvd1, rx_desc->read.rsvd2);
+#endif
+}
+
+/* All the descriptors are 16 bytes, so just use one of them
+ * to print the qwords
+ */
+static inline
+void avf_dump_tx_descriptor(const struct avf_tx_queue *txq,
+			    const void *desc, uint16_t tx_id)
+{
+	const char *name;
+	const struct avf_tx_desc *tx_desc = desc;
+	enum avf_tx_desc_dtype_value type;
+
+	type = (enum avf_tx_desc_dtype_value)rte_le_to_cpu_64(
+		tx_desc->cmd_type_offset_bsz &
+		rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK));
+	switch (type) {
+	case AVF_TX_DESC_DTYPE_DATA:
+		name = "Tx_data_desc";
+		break;
+	case AVF_TX_DESC_DTYPE_CONTEXT:
+		name = "Tx_context_desc";
+		break;
+	default:
+		name = "unknown_desc";
+		break;
+	}
+
+	printf("Queue %d %s %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64"\n",
+	       txq->queue_id, name, tx_id, tx_desc->buffer_addr,
+	       tx_desc->cmd_type_offset_bsz);
+}
+#endif /* _AVF_RXTX_H_ */
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index ebbee31..55a425a 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -25,6 +25,7 @@
 #include "base/avf_type.h"
 
 #include "avf.h"
+#include "avf_rxtx.h"
 
 #define MAX_TRY_TIMES 200
 #define ASQ_DELAY_MS  10
@@ -196,6 +197,48 @@
 	}
 }
 
+int
+avf_enable_vlan_strip(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	int ret;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_ENABLE_VLAN_STRIPPING;
+	args.in_args = NULL;
+	args.in_args_size = 0;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	ret = avf_execute_vf_cmd(adapter, &args);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " OP_ENABLE_VLAN_STRIPPING");
+
+	return ret;
+}
+
+int
+avf_disable_vlan_strip(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	int ret;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_DISABLE_VLAN_STRIPPING;
+	args.in_args = NULL;
+	args.in_args_size = 0;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	ret = avf_execute_vf_cmd(adapter, &args);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " OP_DISABLE_VLAN_STRIPPING");
+
+	return ret;
+}
+
 #define VIRTCHNL_VERSION_MAJOR_START 1
 #define VIRTCHNL_VERSION_MINOR_START 1
 
@@ -274,8 +317,8 @@
 	err = avf_execute_vf_cmd(adapter, &args);
 
 	if (err) {
-		PMD_DRV_LOG(ERR, "Failed to execute command of "
-				 "OP_GET_VF_RESOURCE");
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_GET_VF_RESOURCE");
 		return -1;
 	}
 
@@ -302,3 +345,315 @@
 
 	return 0;
 }
+
+int
+avf_enable_queues(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+
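+	/* Build a bitmap with one bit set per configured Rx/Tx queue */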
+	queue_select.rx_queues = BIT(adapter->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(adapter->eth_dev->data->nb_tx_queues) - 1;
+
+	args.ops = VIRTCHNL_OP_ENABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_ENABLE_QUEUES");
+		return err;
+	}
+	return 0;
+}
+
+int
+avf_disable_queues(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+
+	queue_select.rx_queues = BIT(adapter->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(adapter->eth_dev->data->nb_tx_queues) - 1;
+
+	args.ops = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_DISABLE_QUEUES");
+		return err;
+	}
+	return 0;
+}
+
+int
+avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
+		 bool rx, bool on)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+	if (rx)
+		queue_select.rx_queues |= 1 << qid;
+	else
+		queue_select.tx_queues |= 1 << qid;
+
+	if (on)
+		args.ops = VIRTCHNL_OP_ENABLE_QUEUES;
+	else
+		args.ops = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of %s",
+			    on ? "OP_ENABLE_QUEUES" : "OP_DISABLE_QUEUES");
+	return err;
+}
+
+int
+avf_configure_rss_lut(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_rss_lut *rss_lut;
+	struct avf_cmd_info args;
+	int len, err = 0;
+
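+	/* virtchnl_rss_lut ends with a one-byte lut[1] placeholder, so only
+	 * rss_lut_size - 1 extra bytes are needed.
+	 */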
+	len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (!rss_lut)
+		return -ENOMEM;
+
+	rss_lut->vsi_id = vf->vsi_res->vsi_id;
+	rss_lut->lut_entries = vf->vf_res->rss_lut_size;
+	rte_memcpy(rss_lut->lut, vf->rss_lut, vf->vf_res->rss_lut_size);
+
+	args.ops = VIRTCHNL_OP_CONFIG_RSS_LUT;
+	args.in_args = (u8 *)rss_lut;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_CONFIG_RSS_LUT");
+
+	rte_free(rss_lut);
+	return err;
+}
+
+int
+avf_configure_rss_key(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_rss_key *rss_key;
+	struct avf_cmd_info args;
+	int len, err = 0;
+
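+	/* virtchnl_rss_key ends with a one-byte key[1] placeholder, so only
+	 * rss_key_size - 1 extra bytes are needed.
+	 */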
+	len = sizeof(*rss_key) + vf->vf_res->rss_key_size - 1;
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (!rss_key)
+		return -ENOMEM;
+
+	rss_key->vsi_id = vf->vsi_res->vsi_id;
+	rss_key->key_len = vf->vf_res->rss_key_size;
+	rte_memcpy(rss_key->key, vf->rss_key, vf->vf_res->rss_key_size);
+
+	args.ops = VIRTCHNL_OP_CONFIG_RSS_KEY;
+	args.in_args = (u8 *)rss_key;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_CONFIG_RSS_KEY");
+
+	rte_free(rss_key);
+	return err;
+}
+
+int
+avf_configure_queues(struct avf_adapter *adapter)
+{
+	struct avf_rx_queue **rxq =
+		(struct avf_rx_queue **)adapter->eth_dev->data->rx_queues;
+	struct avf_tx_queue **txq =
+		(struct avf_tx_queue **)adapter->eth_dev->data->tx_queues;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_vsi_queue_config_info *vc_config;
+	struct virtchnl_queue_pair_info *vc_qp;
+	struct avf_cmd_info args;
+	uint16_t i, size;
+	int err;
+
+	size = sizeof(*vc_config) +
+	       sizeof(vc_config->qpair[0]) * vf->num_queue_pairs;
+	vc_config = rte_zmalloc("cfg_queue", size, 0);
+	if (!vc_config)
+		return -ENOMEM;
+
+	vc_config->vsi_id = vf->vsi_res->vsi_id;
+	vc_config->num_queue_pairs = vf->num_queue_pairs;
+
+	for (i = 0, vc_qp = vc_config->qpair;
+	     i < vf->num_queue_pairs;
+	     i++, vc_qp++) {
+		vc_qp->txq.vsi_id = vf->vsi_res->vsi_id;
+		vc_qp->txq.queue_id = i;
+		/* Virtchnl configures queues in pairs */
+		if (i < adapter->eth_dev->data->nb_tx_queues) {
+			vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
+			vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_phys_addr;
+		}
+		vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
+		vc_qp->rxq.queue_id = i;
+		vc_qp->rxq.max_pkt_size = vf->max_pkt_len;
+		/* Virtchnl configures queues in pairs */
+		if (i < adapter->eth_dev->data->nb_rx_queues) {
+			vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
+			vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_phys_addr;
+			vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
+		}
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
+	args.in_args = (uint8_t *)vc_config;
+	args.in_args_size = size;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " VIRTCHNL_OP_CONFIG_VSI_QUEUES");
+
+	rte_free(vc_config);
+	return err;
+}
+
+int
+avf_config_irq_map(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_irq_map_info *map_info;
+	struct virtchnl_vector_map *vecmap;
+	struct avf_cmd_info args;
+	uint32_t vector_id;
+	int len, i, err;
+
+	len = sizeof(struct virtchnl_irq_map_info) +
+	      sizeof(struct virtchnl_vector_map) * vf->nb_msix;
+
+	map_info = rte_zmalloc("map_info", len, 0);
+	if (!map_info)
+		return -ENOMEM;
+
+	map_info->num_vectors = vf->nb_msix;
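+	/* One vector_map entry per MSI-X vector; each entry carries the
+	 * bitmap of Rx queues mapped to that vector.
+	 */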
+	for (i = 0; i < vf->nb_msix; i++) {
+		vecmap = &map_info->vecmap[i];
+		vecmap->vsi_id = vf->vsi_res->vsi_id;
+		vecmap->rxitr_idx = AVF_ITR_INDEX_DEFAULT;
+		vecmap->vector_id = vf->msix_base + i;
+		vecmap->txq_map = 0;
+		vecmap->rxq_map = vf->rxq_map[vf->msix_base + i];
+	}
+
+	args.ops = VIRTCHNL_OP_CONFIG_IRQ_MAP;
+	args.in_args = (u8 *)map_info;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
+
+	rte_free(map_info);
+	return err;
+}
+
+void
+avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct ether_addr *addr;
+	struct avf_cmd_info args;
+	int len, err, i, j;
+	int next_begin = 0;
+	int begin = 0;
+
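+	/* The list may exceed the admin queue buffer size, so the MAC
+	 * addresses are added/removed in chunks of at most AVF_AQ_BUF_SZ.
+	 */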
+	do {
+		j = 0;
+		len = sizeof(struct virtchnl_ether_addr_list);
+		for (i = begin; i < AVF_NUM_MACADDR_MAX; i++, next_begin++) {
+			addr = &adapter->eth_dev->data->mac_addrs[i];
+			if (is_zero_ether_addr(addr))
+				continue;
+			len += sizeof(struct virtchnl_ether_addr);
+			if (len >= AVF_AQ_BUF_SZ) {
+				next_begin = i + 1;
+				break;
+			}
+		}
+
+		list = rte_zmalloc("avf_del_mac_buffer", len, 0);
+		if (!list) {
+			PMD_DRV_LOG(ERR, "fail to allocate memory");
+			return;
+		}
+
+		for (i = begin; i < next_begin; i++) {
+			addr = &adapter->eth_dev->data->mac_addrs[i];
+			if (is_zero_ether_addr(addr))
+				continue;
+			rte_memcpy(list->list[j].addr, addr->addr_bytes,
+				   sizeof(addr->addr_bytes));
+			PMD_DRV_LOG(DEBUG, "add/rm mac:%x:%x:%x:%x:%x:%x",
+				    addr->addr_bytes[0], addr->addr_bytes[1],
+				    addr->addr_bytes[2], addr->addr_bytes[3],
+				    addr->addr_bytes[4], addr->addr_bytes[5]);
+			j++;
+		}
+		list->vsi_id = vf->vsi_res->vsi_id;
+		list->num_elements = j;
+		args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+			   VIRTCHNL_OP_DEL_ETH_ADDR;
+		args.in_args = (uint8_t *)list;
+		args.in_args_size = len;
+		args.out_buffer = vf->aq_resp;
+		args.out_size = AVF_AQ_BUF_SZ;
+		err = avf_execute_vf_cmd(adapter, &args);
+		if (err)
+			PMD_DRV_LOG(ERR, "fail to execute command %s",
+				    add ? "OP_ADD_ETHER_ADDRESS" :
+				    "OP_DEL_ETHER_ADDRESS");
+		rte_free(list);
+		begin = next_begin;
+	} while (begin < AVF_NUM_MACADDR_MAX);
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v4 04/15] net/avf: enable basic Rx Tx func
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
                       ` (2 preceding siblings ...)
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 03/15] net/avf: enable queue and device Wenzhuo Lu
@ 2018-01-05  8:21     ` Wenzhuo Lu
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 05/15] net/avf: enable link status update Wenzhuo Lu
                       ` (11 subsequent siblings)
  15 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-05  8:21 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 MAINTAINERS                      |   1 +
 config/common_base               |   6 +-
 doc/guides/nics/features/avf.ini |  22 ++
 drivers/net/avf/Makefile         |   3 +
 drivers/net/avf/avf_ethdev.c     |  46 ++-
 drivers/net/avf/avf_log.h        |  21 ++
 drivers/net/avf/avf_rxtx.c       | 789 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/avf/avf_rxtx.h       |  53 +++
 8 files changed, 925 insertions(+), 16 deletions(-)
 create mode 100644 doc/guides/nics/features/avf.ini

diff --git a/MAINTAINERS b/MAINTAINERS
index b8b5e61..1ba9c39 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -429,6 +429,7 @@ Intel avf
 M: Jingjing Wu <jingjing.wu@intel.com>
 M: Wenzhuo Lu <wenzhuo.lu@intel.com>
 F: drivers/net/avf/
+F: doc/guides/nics/features/avf*.ini
 
 Mellanox mlx4
 M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
diff --git a/config/common_base b/config/common_base
index ce4d9bb..b1f1c1c 100644
--- a/config/common_base
+++ b/config/common_base
@@ -228,7 +228,11 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 #
 # Compile burst-oriented AVF PMD driver
 #
-CONFIG_RTE_LIBRTE_AVF_PMD=n
+CONFIG_RTE_LIBRTE_AVF_PMD=y
+CONFIG_RTE_LIBRTE_AVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_AVF_DEBUG_TX_FREE=n
+CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_AVF_16BYTE_RX_DESC=n
 
 #
 # Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD
diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
new file mode 100644
index 0000000..8a294e9
--- /dev/null
+++ b/doc/guides/nics/features/avf.ini
@@ -0,0 +1,22 @@
+;
+; Supported features of the 'avf' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Queue start/stop     = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+TSO                  = Y
+RSS hash             = Y
+CRC offload          = Y
+VLAN offload         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Packet type parsing  = Y
+Multiprocess aware   = Y
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index f4f7414..1a673fa 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -13,6 +13,9 @@ LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
 LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_hash
 LDLIBS += -lrte_bus_pci
 
+# used to dump HW descriptor for debugging
+# CFLAGS += -DDEBUG_DUMP_DESC
+
 EXPORT_MAP := rte_pmd_avf_version.map
 
 LIBABIVER := 1
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 605c3c4..4480989 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -39,6 +39,7 @@
 static void avf_dev_close(struct rte_eth_dev *dev);
 static void avf_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
+static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -53,6 +54,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	.dev_stop                   = avf_dev_stop,
 	.dev_close                  = avf_dev_close,
 	.dev_infos_get              = avf_dev_info_get,
+	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -72,7 +74,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
 
 	/* Vlan stripping setting */
-	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_VLAN) {
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) {
 		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
 			avf_enable_vlan_strip(ad);
 		else
@@ -94,7 +96,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	nb_q = RTE_MIN(adapter->eth_dev->data->nb_rx_queues,
 		       AVF_MAX_NUM_QUEUES);
 
-	if (!(vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) {
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) {
 		PMD_DRV_LOG(DEBUG, "RSS is not supported");
 		return -ENOTSUP;
 	}
@@ -204,9 +206,12 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 		if (ret != AVF_SUCCESS)
 			break;
 	}
-	/* TODO: set rx/tx function to vector/scatter/single-segment
+	/* set rx/tx function to vector/scatter/single-segment
 	 * according to parameters
 	 */
+	avf_set_rx_function(dev);
+	avf_set_tx_function(dev);
+
 	return ret;
 }
 
@@ -267,7 +272,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 		return -1;
 	}
 
-	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
 		if (avf_init_rss(adapter) != 0) {
 			PMD_DRV_LOG(ERR, "configure rss failed");
 			goto err_rss;
@@ -281,7 +286,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 
 	/* Map interrupt for writeback */
 	vf->nb_msix = 1;
-	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
 		/* If WB_ON_ITR supports, enable it */
 		vf->msix_base = AVF_RX_VEC_START;
 		AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
@@ -407,6 +412,23 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	};
 }
 
+static const uint32_t *
+avf_dev_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_L4_NONFRAG,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_UNKNOWN
+	};
+	return ptypes;
+}
+
 static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
@@ -478,7 +500,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 		goto err_alloc;
 	}
 	/* Allocate memory for RSS info */
-	if (vf->vf_res->vf_offload_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
 		vf->rss_key = rte_zmalloc("rss_key",
 					  vf->vf_res->rss_key_size, 0);
 		if (!vf->rss_key) {
@@ -556,7 +578,19 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 
 	/* assign ops func pointer */
 	eth_dev->dev_ops = &avf_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &avf_recv_pkts;
+	eth_dev->tx_pkt_burst = &avf_xmit_pkts;
+	eth_dev->tx_pkt_prepare = &avf_prep_pkts;
 
+	/* For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check if we need a different RX
+	 * and TX function.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		avf_set_rx_function(eth_dev);
+		avf_set_tx_function(eth_dev);
+		return 0;
+	}
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
 	hw->vendor_id = pci_dev->id.vendor_id;
diff --git a/drivers/net/avf/avf_log.h b/drivers/net/avf/avf_log.h
index e3f106b..8d574d3 100644
--- a/drivers/net/avf/avf_log.h
+++ b/drivers/net/avf/avf_log.h
@@ -20,4 +20,25 @@
 	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
 #define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
 
+#ifdef RTE_LIBRTE_AVF_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVF_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVF_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
 #endif /* _AVF_LOG_H_ */
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index 2d4fb4c..baccec4 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -34,17 +34,11 @@
 check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
 {
 	/* The following constraints must be satisfied:
-	 *   thresh >= AVF_RX_MAX_BURST
 	 *   thresh < rxq->nb_rx_desc
-	 *   (rxq->nb_rx_desc % thresh) == 0
 	 */
-	if (thresh < AVF_RX_MAX_BURST ||
-	    thresh >= nb_desc ||
-	    (nb_desc % thresh != 0)) {
-		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u, "
-			     "greater than or equal to %u, "
-			     "and a divisor of %u",
-			     thresh, nb_desc, AVF_RX_MAX_BURST, nb_desc);
+	if (thresh >= nb_desc) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
+			     thresh, nb_desc);
 		return -EINVAL;
 	}
 	return 0;
@@ -614,3 +608,780 @@
 		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 }
+
+static inline void
+avf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union avf_rx_desc *rxdp)
+{
+	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		(1 << AVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
+		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->vlan_tci =
+			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+	} else {
+		mb->vlan_tci = 0;
+	}
+}
+
+/* Translate the rx descriptor status and error fields to pkt flags */
+static inline uint64_t
+avf_rxd_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags;
+	uint64_t error_bits = (qword >> AVF_RXD_QW1_ERROR_SHIFT);
+
+#define AVF_RX_ERR_BITS 0x3f
+
+	/* Check if RSS_HASH */
+	flags = (((qword >> AVF_RX_DESC_STATUS_FLTSTAT_SHIFT) &
+					AVF_RX_DESC_FLTSTAT_RSS_HASH) ==
+			AVF_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+
+	if (likely((error_bits & AVF_RX_ERR_BITS) == 0)) {
+		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely(error_bits & (1 << AVF_RX_DESC_ERROR_IPE_SHIFT)))
+		flags |= PKT_RX_IP_CKSUM_BAD;
+	else
+		flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << AVF_RX_DESC_ERROR_L4E_SHIFT)))
+		flags |= PKT_RX_L4_CKSUM_BAD;
+	else
+		flags |= PKT_RX_L4_CKSUM_GOOD;
+
+	/* TODO: Oversize error bit is not processed here */
+
+	return flags;
+}
+
+/* implement recv_pkts */
+uint16_t
+avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	volatile union avf_rx_desc *rx_ring;
+	volatile union avf_rx_desc *rxdp;
+	struct avf_rx_queue *rxq;
+	union avf_rx_desc rxd;
+	struct rte_mbuf *rxe;
+	struct rte_eth_dev *dev;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	uint16_t nb_rx;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint16_t rx_packet_len;
+	uint16_t rx_id, nb_hold;
+	uint64_t dma_addr;
+	uint64_t pkt_flags;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	nb_rx = 0;
+	nb_hold = 0;
+	rxq = rx_queue;
+	rx_id = rxq->rx_tail;
+	rx_ring = rxq->rx_ring;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			    AVF_RXD_QW1_STATUS_SHIFT;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+			break;
+		AVF_DUMP_RX_DESC(rxq, rxdp, rx_id);
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = &rte_eth_devices[rxq->port_id];
+			dev->data->rx_mbuf_alloc_failed++;
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+				   "queue_id=%u", rxq->port_id, rxq->queue_id);
+			break;
+		}
+
+		rxd = *rxdp;
+		nb_hold++;
+		rxe = rxq->sw_ring[rx_id];
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		rxm = rxe;
+		rxe = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+
+		rx_packet_len = ((qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				AVF_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
+
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
+		avf_rxd_to_vlan_tci(rxm, &rxd);
+		pkt_flags = avf_rxd_to_pkt_flags(qword1);
+		rxm->packet_type =
+			ptype_tbl[(uint8_t)((qword1 &
+			AVF_RXD_QW1_PTYPE_MASK) >> AVF_RXD_QW1_PTYPE_SHIFT)];
+
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			rxm->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		rxm->ol_flags |= pkt_flags;
+
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+
+	/* If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the receive tail register of queue.
+	 * Update that register with the value of the last processed RX
+	 * descriptor minus 1.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+			   "nb_hold=%u nb_rx=%u",
+			   rxq->port_id, rxq->queue_id,
+			   rx_id, nb_hold, nb_rx);
+		rx_id = (uint16_t)((rx_id == 0) ?
+			(rxq->nb_rx_desc - 1) : (rx_id - 1));
+		AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	return nb_rx;
+}
+
+/* implement recv_scattered_pkts  */
+uint16_t
+avf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	union avf_rx_desc rxd;
+	struct rte_mbuf *rxe;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *nmb, *rxm;
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0, nb_hold = 0, rx_packet_len;
+	struct rte_eth_dev *dev;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags;
+
+	volatile union avf_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union avf_rx_desc *rxdp;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			    AVF_RXD_QW1_STATUS_SHIFT;
+
+		/* Check the DD bit */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+			break;
+		AVF_DUMP_RX_DESC(rxq, rxdp, rx_id);
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+				   "queue_id=%u", rxq->port_id, rxq->queue_id);
+			dev = &rte_eth_devices[rxq->port_id];
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+
+		rxd = *rxdp;
+		nb_hold++;
+		rxe = rxq->sw_ring[rx_id];
+		rxq->sw_ring[rx_id] = nmb;
+		rx_id++;
+		if (rx_id == rxq->nb_rx_desc)
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+
+		rxm = rxe;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/* Set data buffer address and data length of the mbuf */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				 AVF_RXD_QW1_LENGTH_PBUF_SHIFT;
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/* If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+						rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		/* If this is not the last buffer of the received packet,
+		 * update the pointer to the last mbuf of the current scattered
+		 * packet and continue to parse the RX ring.
+		 */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_EOF_SHIFT))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		/* This is the last buffer of the received packet. If the CRC
+		 * is not stripped by the hardware:
+		 *  - Subtract the CRC length from the total packet length.
+		 *  - If the last buffer only contains the whole CRC or a part
+		 *  of it, free the mbuf associated to the last buffer. If part
+		 *  of the CRC is also contained in the previous mbuf, subtract
+		 *  the length of that CRC part from the data length of the
+		 *  previous mbuf.
+		 */
+		rxm->next = NULL;
+		if (unlikely(rxq->crc_len > 0)) {
+			first_seg->pkt_len -= ETHER_CRC_LEN;
+			if (rx_packet_len <= ETHER_CRC_LEN) {
+				rte_pktmbuf_free_seg(rxm);
+				first_seg->nb_segs--;
+				last_seg->data_len =
+					(uint16_t)(last_seg->data_len -
+					(ETHER_CRC_LEN - rx_packet_len));
+				last_seg->next = NULL;
+			} else
+				rxm->data_len = (uint16_t)(rx_packet_len -
+								ETHER_CRC_LEN);
+		}
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+		avf_rxd_to_vlan_tci(first_seg, &rxd);
+		pkt_flags = avf_rxd_to_pkt_flags(qword1);
+		first_seg->packet_type =
+			ptype_tbl[(uint8_t)((qword1 &
+			AVF_RXD_QW1_PTYPE_MASK) >> AVF_RXD_QW1_PTYPE_SHIFT)];
+
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			first_seg->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		first_seg->ol_flags |= pkt_flags;
+
+		/* Prefetch data of first segment, if configured to do so. */
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+
+	/* Record index of the next RX descriptor to probe. */
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	/* If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register. Update the RDT with the value of the last processed RX
+	 * descriptor minus 1, to guarantee that the RDT register is never
+		 * equal to the RDH register, which creates a "full" ring situation
+	 * from the hardware point of view.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+			   "nb_hold=%u nb_rx=%u",
+			   rxq->port_id, rxq->queue_id,
+			   rx_id, nb_hold, nb_rx);
+		rx_id = (uint16_t)(rx_id == 0 ?
+			(rxq->nb_rx_desc - 1) : (rx_id - 1));
+		AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	return nb_rx;
+}
+
+static inline int
+avf_xmit_cleanup(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *sw_ring = txq->sw_ring;
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+
+	volatile struct avf_tx_desc *txd = txq->tx_ring;
+
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	if ((txd[desc_to_clean_to].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK)) !=
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE)) {
+		PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
+				"(port=%d queue=%d)", desc_to_clean_to,
+				txq->port_id, txq->queue_id);
+		return -1;
+	}
+
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+							desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+					last_desc_cleaned);
+
+	txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
+
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+
+	return 0;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+avf_calc_context_desc(uint64_t flags)
+{
+	static uint64_t mask = PKT_TX_TCP_SEG;
+
+	return (flags & mask) ? 1 : 0;
+}
+
+static inline void
+avf_txd_enable_checksum(uint64_t ol_flags,
+			uint32_t *td_cmd,
+			uint32_t *td_offset,
+			union avf_tx_offload tx_offload)
+{
+	/* Set MACLEN */
+	*td_offset |= (tx_offload.l2_len >> 1) <<
+		      AVF_TX_DESC_LENGTH_MACLEN_SHIFT;
+
+	/* Enable L3 checksum offloads */
+	if (ol_flags & PKT_TX_IP_CKSUM) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV4_CSUM;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	} else if (ol_flags & PKT_TX_IPV4) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV4;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	} else if (ol_flags & PKT_TX_IPV6) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV6;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	}
+
+	if (ol_flags & PKT_TX_TCP_SEG) {
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (tx_offload.l4_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		return;
+	}
+
+	/* Enable L4 checksum offloads */
+	switch (ol_flags & PKT_TX_L4_MASK) {
+	case PKT_TX_TCP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (sizeof(struct tcp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	case PKT_TX_SCTP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_SCTP;
+		*td_offset |= (sizeof(struct sctp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	case PKT_TX_UDP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_UDP;
+		*td_offset |= (sizeof(struct udp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	default:
+		break;
+	}
+}
+
+/* set TSO context descriptor
+ * support IP -> L4 and IP -> IP -> L4
+ */
+static inline uint64_t
+avf_set_tso_ctx(struct rte_mbuf *mbuf, union avf_tx_offload tx_offload)
+{
+	uint64_t ctx_desc = 0;
+	uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+	if (!tx_offload.l4_len) {
+		PMD_TX_LOG(DEBUG, "L4 length set to 0");
+		return ctx_desc;
+	}
+
+	/* In case of a non-tunneled packet, outer_l2_len and
+	 * outer_l3_len must be 0.
+	 */
+	hdr_len = tx_offload.l2_len +
+		  tx_offload.l3_len +
+		  tx_offload.l4_len;
+
+	cd_cmd = AVF_TX_CTX_DESC_TSO;
+	cd_tso_len = mbuf->pkt_len - hdr_len;
+	ctx_desc |= ((uint64_t)cd_cmd << AVF_TXD_CTX_QW1_CMD_SHIFT) |
+		     ((uint64_t)cd_tso_len << AVF_TXD_CTX_QW1_TSO_LEN_SHIFT) |
+		     ((uint64_t)mbuf->tso_segsz << AVF_TXD_CTX_QW1_MSS_SHIFT);
+
+	return ctx_desc;
+}
+
+/* Construct the tx flags */
+static inline uint64_t
+avf_build_ctob(uint32_t td_cmd, uint32_t td_offset, unsigned int size,
+	       uint32_t td_tag)
+{
+	return rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd  << AVF_TXD_QW1_CMD_SHIFT) |
+				((uint64_t)td_offset <<
+				 AVF_TXD_QW1_OFFSET_SHIFT) |
+				((uint64_t)size  <<
+				 AVF_TXD_QW1_TX_BUF_SZ_SHIFT) |
+				((uint64_t)td_tag  <<
+				 AVF_TXD_QW1_L2TAG1_SHIFT));
+}
+
+/* TX function */
+uint16_t
+avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	volatile struct avf_tx_desc *txd;
+	volatile struct avf_tx_desc *txr;
+	struct avf_tx_queue *txq;
+	struct avf_tx_entry *sw_ring;
+	struct avf_tx_entry *txe, *txn;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	uint16_t tx_id;
+	uint16_t nb_tx;
+	uint32_t td_cmd;
+	uint32_t td_offset;
+	uint32_t td_tag;
+	uint64_t ol_flags;
+	uint16_t nb_used;
+	uint16_t nb_ctx;
+	uint16_t tx_last;
+	uint16_t slen;
+	uint64_t buf_dma_addr;
+	union avf_tx_offload tx_offload = {0};
+
+	txq = tx_queue;
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+	tx_id = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Check if the descriptor ring needs to be cleaned. */
+	if (txq->nb_free < txq->free_thresh)
+		avf_xmit_cleanup(txq);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		td_cmd = 0;
+		td_tag = 0;
+		td_offset = 0;
+
+		tx_pkt = *tx_pkts++;
+		RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
+
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = avf_calc_context_desc(ol_flags);
+
+		/* The number of descriptors that must be allocated for
+		 * a packet equals the number of segments of that packet,
+		 * plus one context descriptor if needed.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+		PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u"
+			   " tx_first=%u tx_last=%u",
+			   txq->port_id, txq->queue_id, tx_id, tx_last);
+
+		if (nb_used > txq->nb_free) {
+			if (avf_xmit_cleanup(txq)) {
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+			if (unlikely(nb_used > txq->rs_thresh)) {
+				while (nb_used > txq->nb_free) {
+					if (avf_xmit_cleanup(txq)) {
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		/* Descriptor based VLAN insertion */
+		if (ol_flags & PKT_TX_VLAN_PKT) {
+			td_cmd |= AVF_TX_DESC_CMD_IL2TAG1;
+			td_tag = tx_pkt->vlan_tci;
+		}
+
+		/* According to the datasheet, bit 2 is reserved and must be
+		 * set to 1.
+		 */
+		td_cmd |= 0x04;
+
+		/* Enable checksum offloading */
+		if (ol_flags & AVF_TX_CKSUM_OFFLOAD_MASK)
+			avf_txd_enable_checksum(ol_flags, &td_cmd,
+						&td_offset, tx_offload);
+
+		if (nb_ctx) {
+			/* Setup TX context descriptor if required */
+			volatile struct avf_tx_context_desc *ctx_txd =
+				(volatile struct avf_tx_context_desc *)
+					&txr[tx_id];
+			uint16_t cd_l2tag2 = 0;
+			uint64_t cd_type_cmd_tso_mss =
+				AVF_TX_DESC_DTYPE_CONTEXT;
+
+			txn = &sw_ring[txe->next_id];
+			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+			if (txe->mbuf) {
+				rte_pktmbuf_free_seg(txe->mbuf);
+				txe->mbuf = NULL;
+			}
+
+			/* TSO enabled */
+			if (ol_flags & PKT_TX_TCP_SEG)
+				cd_type_cmd_tso_mss |=
+					avf_set_tso_ctx(tx_pkt, tx_offload);
+
+			AVF_DUMP_TX_DESC(txq, ctx_txd, tx_id);
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+		}
+
+		m_seg = tx_pkt;
+		do {
+			txd = &txr[tx_id];
+			txn = &sw_ring[txe->next_id];
+
+			if (txe->mbuf)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/* Setup TX Descriptor */
+			slen = m_seg->data_len;
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
+			txd->cmd_type_offset_bsz = avf_build_ctob(td_cmd,
+								  td_offset,
+								  slen,
+								  td_tag);
+
+			AVF_DUMP_TX_DESC(txq, txd, tx_id);
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg);
+
+		/* The last packet data descriptor needs End Of Packet (EOP) */
+		td_cmd |= AVF_TX_DESC_CMD_EOP;
+		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+
+		if (txq->nb_used >= txq->rs_thresh) {
+			PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
+				   "%4u (port=%d queue=%d)",
+				   tx_last, txq->port_id, txq->queue_id);
+
+			td_cmd |= AVF_TX_DESC_CMD_RS;
+
+			/* Update txq RS bit counters */
+			txq->nb_used = 0;
+		}
+
+		txd->cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)td_cmd) <<
+					 AVF_TXD_QW1_CMD_SHIFT);
+		AVF_DUMP_TX_DESC(txq, txd, tx_id);
+	}
+
+end_of_tx:
+	rte_wmb();
+
+	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+		   txq->port_id, txq->queue_id, tx_id, nb_tx);
+
+	AVF_PCI_REG_WRITE_RELAXED(txq->qtx_tail, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
+/* TX prep functions */
+uint16_t
+avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	      uint16_t nb_pkts)
+{
+	int i, ret;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		/* Check condition for nb_segs > AVF_TX_MAX_MTU_SEG. */
+		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+			if (m->nb_segs > AVF_TX_MAX_MTU_SEG) {
+				rte_errno = EINVAL;
+				return i;
+			}
+		} else if ((m->tso_segsz < AVF_MIN_TSO_MSS) ||
+			   (m->tso_segsz > AVF_MAX_TSO_MSS)) {
+			/* An MSS outside the allowed range is considered malicious */
+			rte_errno = EINVAL;
+			return i;
+		}
+
+		if (ol_flags & AVF_TX_OFFLOAD_NOTSUP_MASK) {
+			rte_errno = ENOTSUP;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+#endif
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+	}
+
+	return i;
+}
+
+/* Choose Rx function */
+void
+avf_set_rx_function(struct rte_eth_dev *dev)
+{
+	if (dev->data->scattered_rx)
+		dev->rx_pkt_burst = avf_recv_scattered_pkts;
+	else
+		dev->rx_pkt_burst = avf_recv_pkts;
+}
+
+/* Choose Tx function */
+void
+avf_set_tx_function(struct rte_eth_dev *dev)
+{
+	dev->tx_pkt_burst = avf_xmit_pkts;
+	dev->tx_pkt_prepare = avf_prep_pkts;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index e227cd1..cad240d 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -19,6 +19,25 @@
 #define DEFAULT_TX_RS_THRESH     32
 #define DEFAULT_TX_FREE_THRESH   32
 
+#define AVF_MIN_TSO_MSS          256
+#define AVF_MAX_TSO_MSS          9668
+#define AVF_TSO_MAX_SEG          UINT8_MAX
+#define AVF_TX_MAX_MTU_SEG       8
+
+#define AVF_TX_CKSUM_OFFLOAD_MASK (		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG)
+
+#define AVF_TX_OFFLOAD_MASK (  \
+		PKT_TX_VLAN_PKT |		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG)
+
+#define AVF_TX_OFFLOAD_NOTSUP_MASK \
+		(PKT_TX_OFFLOAD_MASK ^ AVF_TX_OFFLOAD_MASK)
+
 /* HW desc structure, both 16-byte and 32-byte types are supported */
 #ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
 #define avf_rx_desc avf_16byte_rx_desc
@@ -85,6 +104,18 @@ struct avf_tx_queue {
 	bool tx_deferred_start;        /* don't start this queue in dev start */
 };
 
+/* Offload features */
+union avf_tx_offload {
+	uint64_t data;
+	struct {
+		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /* L3 (IP) Header Length. */
+		uint64_t l4_len:8; /* L4 Header Length. */
+		uint64_t tso_segsz:16; /* TCP TSO segment size */
+		/* uint64_t unused : 24; */
+	};
+};
+
 int avf_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			   uint16_t queue_idx,
 			   uint16_t nb_desc,
@@ -105,6 +136,17 @@ int avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
 int avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void avf_dev_tx_queue_release(void *txq);
 void avf_stop_queues(struct rte_eth_dev *dev);
+uint16_t avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts);
+uint16_t avf_recv_scattered_pkts(void *rx_queue,
+				 struct rte_mbuf **rx_pkts,
+				 uint16_t nb_pkts);
+uint16_t avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+uint16_t avf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void avf_set_rx_function(struct rte_eth_dev *dev);
+void avf_set_tx_function(struct rte_eth_dev *dev);
 
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
@@ -157,4 +199,15 @@ void avf_dump_tx_descriptor(const struct avf_tx_queue *txq,
 	       txq->queue_id, name, tx_id, tx_desc->buffer_addr,
 	       tx_desc->cmd_type_offset_bsz);
 }
+
+#ifdef DEBUG_DUMP_DESC
+#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) \
+	avf_dump_rx_descriptor(rxq, desc, rx_id)
+#define AVF_DUMP_TX_DESC(txq, desc, tx_id) \
+	avf_dump_tx_descriptor(txq, desc, tx_id)
+#else
+#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
+#define AVF_DUMP_TX_DESC(txq, desc, tx_id) do { } while (0)
+#endif
+
 #endif /* _AVF_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v4 05/15] net/avf: enable link status update
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
                       ` (3 preceding siblings ...)
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 04/15] net/avf: enable basic Rx Tx func Wenzhuo Lu
@ 2018-01-05  8:21     ` Wenzhuo Lu
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 06/15] net/avf: support stats Wenzhuo Lu
                       ` (10 subsequent siblings)
  15 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-05  8:21 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
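
For illustration, a minimal sketch of how an application could consume the
link status support added here, using the generic ethdev API (the callback
name, port_id and the printout are illustrative only, not part of this patch):

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>

/* LSC callback: re-read the link info cached by the PMD on a PF event. */
static int
lsc_event_cb(uint16_t port_id, enum rte_eth_event_type type,
	     void *param, void *ret_param)
{
	struct rte_eth_link link;

	RTE_SET_USED(type);
	RTE_SET_USED(param);
	RTE_SET_USED(ret_param);

	rte_eth_link_get_nowait(port_id, &link);
	printf("port %u link %s, speed %u Mbps\n", (unsigned int)port_id,
	       link.link_status ? "up" : "down", link.link_speed);
	return 0;
}

static void
register_lsc_cb(uint16_t port_id)
{
	/* Typically done right after rte_eth_dev_configure() */
	rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
				      lsc_event_cb, NULL);
}
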
---
 doc/guides/nics/features/avf.ini |  3 +++
 drivers/net/avf/avf.h            |  2 ++
 drivers/net/avf/avf_ethdev.c     | 51 +++++++++++++++++++++++++++++++++++++++-
 drivers/net/avf/avf_vchnl.c      | 38 +++++++++++++++++++++++++++++-
 4 files changed, 92 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 8a294e9..77e4f53 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -4,6 +4,9 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
 Queue start/stop     = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 22886d4..c97b2ee 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -202,4 +202,6 @@ int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
 int avf_configure_queues(struct avf_adapter *adapter);
 int avf_config_irq_map(struct avf_adapter *adapter);
 void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
+int avf_dev_link_update(struct rte_eth_dev *dev,
+			__rte_unused int wait_to_complete);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 4480989..7f7ddf9 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -55,6 +55,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	.dev_close                  = avf_dev_close,
 	.dev_infos_get              = avf_dev_info_get,
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
+	.link_update                = avf_dev_link_update,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -429,6 +430,53 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	return ptypes;
 }
 
+int
+avf_dev_link_update(struct rte_eth_dev *dev,
+		    __rte_unused int wait_to_complete)
+{
+	struct rte_eth_link new_link;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	/* Only read the link status info stored in the VF; it is updated
+	 * when a LINK_CHANGE event is received from the PF over virtchnl.
+	 */
+	switch (vf->link_speed) {
+	case VIRTCHNL_LINK_SPEED_100MB:
+		new_link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+	case VIRTCHNL_LINK_SPEED_1GB:
+		new_link.link_speed = ETH_SPEED_NUM_1G;
+		break;
+	case VIRTCHNL_LINK_SPEED_10GB:
+		new_link.link_speed = ETH_SPEED_NUM_10G;
+		break;
+	case VIRTCHNL_LINK_SPEED_20GB:
+		new_link.link_speed = ETH_SPEED_NUM_20G;
+		break;
+	case VIRTCHNL_LINK_SPEED_25GB:
+		new_link.link_speed = ETH_SPEED_NUM_25G;
+		break;
+	case VIRTCHNL_LINK_SPEED_40GB:
+		new_link.link_speed = ETH_SPEED_NUM_40G;
+		break;
+	default:
+		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		break;
+	}
+
+	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vf->link_up ? ETH_LINK_UP :
+					     ETH_LINK_DOWN;
+	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+				  ETH_LINK_SPEED_FIXED);
+
+	rte_atomic64_cmpset((uint64_t *)&dev->data->dev_link,
+			    *(uint64_t *)&dev->data->dev_link,
+			    *(uint64_t *)&new_link);
+
+	return 0;
+}
+
 static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
@@ -712,7 +760,8 @@ static int eth_avf_pci_remove(struct rte_pci_device *pci_dev)
 /* Adaptive virtual function driver struct */
 static struct rte_pci_driver rte_avf_pmd = {
 	.id_table = pci_id_avf_map,
-	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC |
+		     RTE_PCI_DRV_IOVA_AS_VA,
 	.probe = eth_avf_pci_probe,
 	.remove = eth_avf_pci_remove,
 };
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index 55a425a..f5da601 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -133,6 +133,41 @@
 	return err;
 }
 
+static void
+avf_handle_pf_event_msg(struct rte_eth_dev *dev, uint8_t *msg,
+			uint16_t msglen)
+{
+	struct virtchnl_pf_event *pf_msg =
+			(struct virtchnl_pf_event *)msg;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	if (msglen < sizeof(struct virtchnl_pf_event)) {
+		PMD_DRV_LOG(DEBUG, "Invalid PF event message length");
+		return;
+	}
+	switch (pf_msg->event) {
+	case VIRTCHNL_EVENT_RESET_IMPENDING:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_RESET_IMPENDING event");
+		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET,
+					      NULL, NULL);
+		break;
+	case VIRTCHNL_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_LINK_CHANGE event");
+		vf->link_up = pf_msg->event_data.link_event.link_status;
+		vf->link_speed = pf_msg->event_data.link_event.link_speed;
+		avf_dev_link_update(dev, 0);
+		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC,
+					      NULL, NULL);
+		break;
+	case VIRTCHNL_EVENT_PF_DRIVER_CLOSE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_PF_DRIVER_CLOSE event");
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Unknown event %u received", pf_msg->event);
+		break;
+	}
+}
+
 void
 avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
 {
@@ -172,7 +207,8 @@
 		switch (aq_opc) {
 		case avf_aqc_opc_send_msg_to_vf:
 			if (msg_opc == VIRTCHNL_OP_EVENT) {
-				/* TODO */
+				avf_handle_pf_event_msg(dev, info.msg_buf,
+							info.msg_len);
 			} else {
 				/* read message and it's expected one */
 				if (msg_opc == vf->pend_cmd) {
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v4 06/15] net/avf: support stats
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
                       ` (4 preceding siblings ...)
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 05/15] net/avf: enable link status update Wenzhuo Lu
@ 2018-01-05  8:21     ` Wenzhuo Lu
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 07/15] net/avf: enable ops for MAC VLAN offload Wenzhuo Lu
                       ` (9 subsequent siblings)
  15 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-05  8:21 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
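
For illustration, the counters retrieved here via VIRTCHNL_OP_GET_STATS are
reachable through the generic stats call; a minimal polling sketch (port_id
and the printout are illustrative only):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

static void
print_basic_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) != 0)
		return;

	printf("rx %" PRIu64 " pkts / %" PRIu64 " bytes, "
	       "tx %" PRIu64 " pkts / %" PRIu64 " bytes, "
	       "missed %" PRIu64 ", tx errors %" PRIu64 "\n",
	       stats.ipackets, stats.ibytes,
	       stats.opackets, stats.obytes,
	       stats.imissed, stats.oerrors);
}
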
---
 doc/guides/nics/features/avf.ini |  1 +
 drivers/net/avf/avf.h            |  2 ++
 drivers/net/avf/avf_ethdev.c     | 27 +++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c      | 27 +++++++++++++++++++++++++++
 4 files changed, 57 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 77e4f53..af84599 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -17,6 +17,7 @@ VLAN offload         = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
+Basic stats          = Y
 Multiprocess aware   = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index c97b2ee..680b117 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -204,4 +204,6 @@ int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
 void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
 int avf_dev_link_update(struct rte_eth_dev *dev,
 			__rte_unused int wait_to_complete);
+int avf_query_stats(struct avf_adapter *adapter,
+		    struct virtchnl_eth_stats **pstats);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 7f7ddf9..bf6251b 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -40,6 +40,8 @@
 static void avf_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+static int avf_dev_stats_get(struct rte_eth_dev *dev,
+			     struct rte_eth_stats *stats);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -56,6 +58,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	.dev_infos_get              = avf_dev_info_get,
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.link_update                = avf_dev_link_update,
+	.stats_get                  = avf_dev_stats_get,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -478,6 +481,30 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 }
 
 static int
+avf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct virtchnl_eth_stats *pstats = NULL;
+	int ret;
+
+	ret = avf_query_stats(adapter, &pstats);
+	if (ret == 0) {
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+						pstats->rx_broadcast;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->obytes = pstats->tx_bytes;
+		return 0;
+	}
+	PMD_DRV_LOG(ERR, "Get statistics failed");
+	return -EIO;
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index f5da601..e26527f 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -693,3 +693,30 @@
 		begin = next_begin;
 	} while (begin < AVF_NUM_MACADDR_MAX);
 }
+
+int
+avf_query_stats(struct avf_adapter *adapter,
+		struct virtchnl_eth_stats **pstats)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select q_stats;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&q_stats, 0, sizeof(q_stats));
+	q_stats.vsi_id = vf->vsi_res->vsi_id;
+	args.ops = VIRTCHNL_OP_GET_STATS;
+	args.in_args = (uint8_t *)&q_stats;
+	args.in_args_size = sizeof(q_stats);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to execute command OP_GET_STATS");
+		*pstats = NULL;
+		return err;
+	}
+	*pstats = (struct virtchnl_eth_stats *)args.out_buffer;
+	return 0;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v4 07/15] net/avf: enable ops for MAC VLAN offload
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
                       ` (5 preceding siblings ...)
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 06/15] net/avf: support stats Wenzhuo Lu
@ 2018-01-05  8:21     ` Wenzhuo Lu
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 08/15] net/avf: enable ops for RSS setting Wenzhuo Lu
                       ` (8 subsequent siblings)
  15 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-05  8:21 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

 - promiscuous_enable
 - promiscuous_disable
 - allmulticast_enable
 - allmulticast_disable
 - mac_addr_add
 - mac_addr_remove
 - mac_addr_set
 - vlan_filter_set
 - vlan_offload_set

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
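
For illustration, the ops above are exercised from an application through the
generic ethdev calls; a minimal sketch (port_id, the MAC address and the VLAN
id are illustrative only, not part of this patch):

#include <rte_ethdev.h>
#include <rte_ether.h>

static void
configure_filters(uint16_t port_id)
{
	struct ether_addr extra_mac = {
		.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 }
	};

	rte_eth_promiscuous_enable(port_id);     /* promiscuous_enable */
	rte_eth_allmulticast_enable(port_id);    /* allmulticast_enable */

	/* mac_addr_add: install an extra unicast filter (pool 0) */
	rte_eth_dev_mac_addr_add(port_id, &extra_mac, 0);

	/* vlan_filter_set: accept VLAN 100 on this VF */
	rte_eth_dev_vlan_filter(port_id, 100, 1);

	/* vlan_offload_set: turn on VLAN stripping */
	rte_eth_dev_set_vlan_offload(port_id, ETH_VLAN_STRIP_OFFLOAD);
}
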
---
 doc/guides/nics/features/avf.ini |   5 +
 drivers/net/avf/avf.h            |   5 +
 drivers/net/avf/avf_ethdev.c     | 219 +++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c      |  90 ++++++++++++++++
 4 files changed, 319 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index af84599..1dd6114 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -11,7 +11,12 @@ Queue start/stop     = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
 TSO                  = Y
+Promiscuous mode     = Y
+Allmulticast mode    = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
 RSS hash             = Y
+VLAN filter          = Y
 CRC offload          = Y
 VLAN offload         = Y
 L3 checksum offload  = Y
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 680b117..ea48310 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -206,4 +206,9 @@ int avf_dev_link_update(struct rte_eth_dev *dev,
 			__rte_unused int wait_to_complete);
 int avf_query_stats(struct avf_adapter *adapter,
 		    struct virtchnl_eth_stats **pstats);
+int avf_config_promisc(struct avf_adapter *adapter, bool enable_unicast,
+		       bool enable_multicast);
+int avf_add_del_eth_addr(struct avf_adapter *adapter,
+			 struct ether_addr *addr, bool add);
+int avf_add_del_vlan(struct avf_adapter *adapter, uint16_t vlanid, bool add);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index bf6251b..1ea6ec6 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -42,6 +42,20 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 static int avf_dev_stats_get(struct rte_eth_dev *dev,
 			     struct rte_eth_stats *stats);
+static void avf_dev_promiscuous_enable(struct rte_eth_dev *dev);
+static void avf_dev_promiscuous_disable(struct rte_eth_dev *dev);
+static void avf_dev_allmulticast_enable(struct rte_eth_dev *dev);
+static void avf_dev_allmulticast_disable(struct rte_eth_dev *dev);
+static int avf_dev_add_mac_addr(struct rte_eth_dev *dev,
+				struct ether_addr *addr,
+				uint32_t index,
+				uint32_t pool);
+static void avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index);
+static int avf_dev_vlan_filter_set(struct rte_eth_dev *dev,
+				   uint16_t vlan_id, int on);
+static int avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+					 struct ether_addr *mac_addr);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -59,6 +73,14 @@ static int avf_dev_stats_get(struct rte_eth_dev *dev,
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.link_update                = avf_dev_link_update,
 	.stats_get                  = avf_dev_stats_get,
+	.promiscuous_enable         = avf_dev_promiscuous_enable,
+	.promiscuous_disable        = avf_dev_promiscuous_disable,
+	.allmulticast_enable        = avf_dev_allmulticast_enable,
+	.allmulticast_disable       = avf_dev_allmulticast_disable,
+	.mac_addr_add               = avf_dev_add_mac_addr,
+	.mac_addr_remove            = avf_dev_del_mac_addr,
+	.vlan_filter_set            = avf_dev_vlan_filter_set,
+	.vlan_offload_set           = avf_dev_vlan_offload_set,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -67,6 +89,7 @@ static int avf_dev_stats_get(struct rte_eth_dev *dev,
 	.rx_queue_release           = avf_dev_rx_queue_release,
 	.tx_queue_setup             = avf_dev_tx_queue_setup,
 	.tx_queue_release           = avf_dev_tx_queue_release,
+	.mac_addr_set               = avf_dev_set_default_mac_addr,
 };
 
 static int
@@ -480,6 +503,202 @@ static int avf_dev_stats_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void
+avf_dev_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (vf->promisc_unicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, TRUE, vf->promisc_multicast_enabled);
+	if (!ret)
+		vf->promisc_unicast_enabled = TRUE;
+}
+
+static void
+avf_dev_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (!vf->promisc_unicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, FALSE, vf->promisc_multicast_enabled);
+	if (!ret)
+		vf->promisc_unicast_enabled = FALSE;
+}
+
+static void
+avf_dev_allmulticast_enable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (vf->promisc_multicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, vf->promisc_unicast_enabled, TRUE);
+	if (!ret)
+		vf->promisc_multicast_enabled = TRUE;
+}
+
+static void
+avf_dev_allmulticast_disable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (!vf->promisc_multicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, vf->promisc_unicast_enabled, FALSE);
+	if (!ret)
+		vf->promisc_multicast_enabled = FALSE;
+}
+
+static int
+avf_dev_add_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr,
+		     __rte_unused uint32_t index,
+		     __rte_unused uint32_t pool)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int err;
+
+	if (is_zero_ether_addr(addr)) {
+		PMD_DRV_LOG(ERR, "Invalid Ethernet Address");
+		return -EINVAL;
+	}
+
+	err = avf_add_del_eth_addr(adapter, addr, TRUE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to add MAC address");
+		return -EIO;
+	}
+
+	vf->mac_num++;
+
+	return 0;
+}
+
+static void
+avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct ether_addr *addr;
+	int err;
+
+	addr = &dev->data->mac_addrs[index];
+
+	err = avf_add_del_eth_addr(adapter, addr, FALSE);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to delete MAC address");
+
+	vf->mac_num--;
+}
+
+static int
+avf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int err;
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+		return -ENOTSUP;
+
+	err = avf_add_del_vlan(adapter, vlan_id, on);
+	if (err)
+		return -EIO;
+	return 0;
+}
+
+static int
+avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+	int err = 0;
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+		return -ENOTSUP;
+
+	/* Vlan stripping setting */
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		/* Enable or disable VLAN stripping */
+		if (dev_conf->rxmode.hw_vlan_strip)
+			err = avf_enable_vlan_strip(adapter);
+		else
+			err = avf_disable_vlan_strip(adapter);
+	}
+
+	if (err)
+		return -EIO;
+	return 0;
+}
+
+static void
+avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+			     struct ether_addr *mac_addr)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct ether_addr *perm_addr, *old_addr;
+	int ret;
+
+	old_addr = (struct ether_addr *)hw->mac.addr;
+	perm_addr = (struct ether_addr *)hw->mac.perm_addr;
+
+	if (is_same_ether_addr(mac_addr, old_addr))
+		return;
+
+	/* If the MAC address is configured by host, skip the setting */
+	if (is_valid_assigned_ether_addr(perm_addr))
+		return;
+
+	ret = avf_add_del_eth_addr(adapter, old_addr, FALSE);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to delete old MAC:"
+			    " %02X:%02X:%02X:%02X:%02X:%02X",
+			    old_addr->addr_bytes[0],
+			    old_addr->addr_bytes[1],
+			    old_addr->addr_bytes[2],
+			    old_addr->addr_bytes[3],
+			    old_addr->addr_bytes[4],
+			    old_addr->addr_bytes[5]);
+
+	ret = avf_add_del_eth_addr(adapter, mac_addr, TRUE);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to add new MAC:"
+			    " %02X:%02X:%02X:%02X:%02X:%02X",
+			    mac_addr->addr_bytes[0],
+			    mac_addr->addr_bytes[1],
+			    mac_addr->addr_bytes[2],
+			    mac_addr->addr_bytes[3],
+			    mac_addr->addr_bytes[4],
+			    mac_addr->addr_bytes[5]);
+
+	ether_addr_copy(mac_addr, (struct ether_addr *)hw->mac.addr);
+}
+
 static int
 avf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index e26527f..3b652bf 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -720,3 +720,93 @@
 	*pstats = (struct virtchnl_eth_stats *)args.out_buffer;
 	return 0;
 }
+
+int
+avf_config_promisc(struct avf_adapter *adapter,
+		   bool enable_unicast,
+		   bool enable_multicast)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_promisc_info promisc;
+	struct avf_cmd_info args;
+	int err;
+
+	promisc.flags = 0;
+	promisc.vsi_id = vf->vsi_res->vsi_id;
+
+	if (enable_unicast)
+		promisc.flags |= FLAG_VF_UNICAST_PROMISC;
+
+	if (enable_multicast)
+		promisc.flags |= FLAG_VF_MULTICAST_PROMISC;
+
+	args.ops = VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE;
+	args.in_args = (uint8_t *)&promisc;
+	args.in_args_size = sizeof(promisc);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "fail to execute command CONFIG_PROMISCUOUS_MODE");
+	return err;
+}
+
+int
+avf_add_del_eth_addr(struct avf_adapter *adapter, struct ether_addr *addr,
+		     bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t cmd_buffer[sizeof(struct virtchnl_ether_addr_list) +
+			   sizeof(struct virtchnl_ether_addr)];
+	struct avf_cmd_info args;
+	int err;
+
+	list = (struct virtchnl_ether_addr_list *)cmd_buffer;
+	list->vsi_id = vf->vsi_res->vsi_id;
+	list->num_elements = 1;
+	rte_memcpy(list->list[0].addr, addr->addr_bytes,
+		   sizeof(addr->addr_bytes));
+
+	args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR : VIRTCHNL_OP_DEL_ETH_ADDR;
+	args.in_args = cmd_buffer;
+	args.in_args_size = sizeof(cmd_buffer);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_ETH_ADDR" :  "OP_DEL_ETH_ADDR");
+	return err;
+}
+
+int
+avf_add_del_vlan(struct avf_adapter *adapter, uint16_t vlanid, bool add)
+{
+	struct virtchnl_vlan_filter_list *vlan_list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t cmd_buffer[sizeof(struct virtchnl_vlan_filter_list) +
+							sizeof(uint16_t)];
+	struct avf_cmd_info args;
+	int err;
+
+	vlan_list = (struct virtchnl_vlan_filter_list *)cmd_buffer;
+	vlan_list->vsi_id = vf->vsi_res->vsi_id;
+	vlan_list->num_elements = 1;
+	vlan_list->vlan_id[0] = vlanid;
+
+	args.ops = add ? VIRTCHNL_OP_ADD_VLAN : VIRTCHNL_OP_DEL_VLAN;
+	args.in_args = cmd_buffer;
+	args.in_args_size = sizeof(cmd_buffer);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_VLAN" :  "OP_DEL_VLAN");
+
+	return err;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v4 08/15] net/avf: enable ops for RSS setting
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
                       ` (6 preceding siblings ...)
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 07/15] net/avf: enable ops for MAC VLAN offload Wenzhuo Lu
@ 2018-01-05  8:21     ` Wenzhuo Lu
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 09/15] net/avf: enable ops for MTU setting Wenzhuo Lu
                       ` (7 subsequent siblings)
  15 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-05  8:21 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
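
For illustration, a minimal sketch of driving the new RSS key op through the
generic API (the key pattern and port_id are illustrative; the key length must
match the hash_key_size the VF reports in dev_info):

#include <string.h>
#include <rte_ethdev.h>

static int
update_rss_key(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rss_conf conf;
	uint8_t key[64];

	rte_eth_dev_info_get(port_id, &dev_info);
	if (dev_info.hash_key_size == 0 ||
	    dev_info.hash_key_size > sizeof(key))
		return -1;

	/* Illustrative key; real applications should use a random key. */
	memset(key, 0x6d, dev_info.hash_key_size);
	conf.rss_key = key;
	conf.rss_key_len = dev_info.hash_key_size;
	conf.rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP;

	return rte_eth_dev_rss_hash_update(port_id, &conf);
}
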
---
 doc/guides/nics/features/avf.ini |   2 +
 drivers/net/avf/avf_ethdev.c     | 142 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 144 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 1dd6114..61527d7 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -16,6 +16,8 @@ Allmulticast mode    = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
 RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
 VLAN filter          = Y
 CRC offload          = Y
 VLAN offload         = Y
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 1ea6ec6..5a800ff 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -54,6 +54,16 @@ static int avf_dev_add_mac_addr(struct rte_eth_dev *dev,
 static int avf_dev_vlan_filter_set(struct rte_eth_dev *dev,
 				   uint16_t vlan_id, int on);
 static int avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int avf_dev_rss_reta_update(struct rte_eth_dev *dev,
+				   struct rte_eth_rss_reta_entry64 *reta_conf,
+				   uint16_t reta_size);
+static int avf_dev_rss_reta_query(struct rte_eth_dev *dev,
+				  struct rte_eth_rss_reta_entry64 *reta_conf,
+				  uint16_t reta_size);
+static int avf_dev_rss_hash_update(struct rte_eth_dev *dev,
+				   struct rte_eth_rss_conf *rss_conf);
+static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+				     struct rte_eth_rss_conf *rss_conf);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
 
@@ -90,6 +100,10 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.tx_queue_setup             = avf_dev_tx_queue_setup,
 	.tx_queue_release           = avf_dev_tx_queue_release,
 	.mac_addr_set               = avf_dev_set_default_mac_addr,
+	.reta_update                = avf_dev_rss_reta_update,
+	.reta_query                 = avf_dev_rss_reta_query,
+	.rss_hash_update            = avf_dev_rss_hash_update,
+	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
 };
 
 static int
@@ -654,6 +668,134 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+avf_dev_rss_reta_update(struct rte_eth_dev *dev,
+			struct rte_eth_rss_reta_entry64 *reta_conf,
+			uint16_t reta_size)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t *lut;
+	uint16_t i, idx, shift;
+	int ret;
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	if (reta_size != vf->vf_res->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash lookup "
+			"table (%d) doesn't match the number the hardware "
+			"can support (%d)", reta_size, vf->vf_res->rss_lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc("rss_lut", reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	/* store the old lut table temporarily so it can be restored */
+	rte_memcpy(lut, vf->rss_lut, reta_size);
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			vf->rss_lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	/* send virtchnl ops to configure RSS */
+	ret = avf_configure_rss_lut(adapter);
+	if (ret) /* revert back to the old lut table */
+		rte_memcpy(vf->rss_lut, lut, reta_size);
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+avf_dev_rss_reta_query(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_reta_entry64 *reta_conf,
+		       uint16_t reta_size)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint16_t i, idx, shift;
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	if (reta_size != vf->vf_res->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash lookup "
+			"table (%d) doesn't match the number the hardware "
+			"can support (%d)", reta_size, vf->vf_res->rss_lut_size);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vf->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+avf_dev_rss_hash_update(struct rte_eth_dev *dev,
+			struct rte_eth_rss_conf *rss_conf)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	/* HENA is enabled by default, no change needed here */
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		return 0;
+	} else if (rss_conf->rss_key_len != vf->vf_res->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash key "
+			"(%d) doesn't match the size the hardware can "
+			"support (%d)", rss_conf->rss_key_len,
+			vf->vf_res->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vf->rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+
+	return avf_configure_rss_key(adapter);
+}
+
+static int
+avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+			  struct rte_eth_rss_conf *rss_conf)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	/* Just set it to the default value for now. */
+	rss_conf->rss_hf = AVF_RSS_OFFLOAD_ALL;
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	rss_conf->rss_key_len = vf->vf_res->rss_key_size;
+	rte_memcpy(rss_conf->rss_key, vf->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static void
 avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *mac_addr)
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v4 09/15] net/avf: enable ops for MTU setting
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
                       ` (7 preceding siblings ...)
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 08/15] net/avf: enable ops for RSS setting Wenzhuo Lu
@ 2018-01-05  8:21     ` Wenzhuo Lu
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 10/15] net/avf: enable ops to check queue info and status Wenzhuo Lu
                       ` (6 subsequent siblings)
  15 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-05  8:21 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
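
For illustration, the new op is reached via rte_eth_dev_set_mtu(); a minimal
sketch (port_id and the MTU value are illustrative), keeping in mind that this
patch only allows the change while the port is stopped:

#include <rte_ethdev.h>

static int
change_mtu(uint16_t port_id, uint16_t new_mtu)
{
	int ret;

	rte_eth_dev_stop(port_id);

	ret = rte_eth_dev_set_mtu(port_id, new_mtu);
	if (ret != 0)
		return ret;

	return rte_eth_dev_start(port_id);
}
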
---
 doc/guides/nics/features/avf.ini |  1 +
 drivers/net/avf/avf_ethdev.c     | 30 ++++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 61527d7..cf1b246 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -8,6 +8,7 @@ Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
 Queue start/stop     = Y
+MTU update           = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
 TSO                  = Y
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 5a800ff..e4a6f35 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -64,6 +64,7 @@ static int avf_dev_rss_hash_update(struct rte_eth_dev *dev,
 				   struct rte_eth_rss_conf *rss_conf);
 static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 				     struct rte_eth_rss_conf *rss_conf);
+static int avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
 
@@ -104,6 +105,7 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.reta_query                 = avf_dev_rss_reta_query,
 	.rss_hash_update            = avf_dev_rss_hash_update,
 	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
+	.mtu_set                    = avf_dev_mtu_set,
 };
 
 static int
@@ -796,6 +798,34 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	uint32_t frame_size = mtu + AVF_ETH_OVERHEAD;
+	int ret = 0;
+
+	if (mtu < ETHER_MIN_MTU || frame_size > AVF_FRAME_SIZE_MAX)
+		return -EINVAL;
+
+	/* MTU setting is not allowed while the port is started */
+	if (dev->data->dev_started) {
+		PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+		return -EBUSY;
+	}
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
+		dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+	return ret;
+}
+
 static void
 avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *mac_addr)
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v4 10/15] net/avf: enable ops to check queue info and status
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
                       ` (8 preceding siblings ...)
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 09/15] net/avf: enable ops for MTU setting Wenzhuo Lu
@ 2018-01-05  8:21     ` Wenzhuo Lu
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 11/15] net/i40e: support AVF basic interface Wenzhuo Lu
                       ` (5 subsequent siblings)
  15 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-05  8:21 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

 - rxq_info_get
 - txq_info_get
 - rx_queue_count
 - rx_descriptor_status
 - tx_descriptor_status

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
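
For illustration, the queue info and descriptor status ops above back the
following generic calls; a minimal sketch (port/queue ids and the probed
offset are illustrative only):

#include <stdio.h>
#include <rte_ethdev.h>

static void
inspect_rx_queue(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_rxq_info qinfo;
	int used, status;

	/* rxq_info_get */
	if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) == 0)
		printf("ring size: %u descriptors\n",
		       (unsigned int)qinfo.nb_desc);

	/* rx_queue_count: approximate, the PMD scans in steps of 4 */
	used = rte_eth_rx_queue_count(port_id, queue_id);
	printf("descriptors filled by HW: %d\n", used);

	/* rx_descriptor_status: probe 32 entries ahead of the current tail */
	status = rte_eth_rx_descriptor_status(port_id, queue_id, 32);
	if (status == RTE_ETH_RX_DESC_DONE)
		printf("descriptor at offset 32 is ready\n");
}
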
---
 doc/guides/nics/features/avf.ini |   2 +
 drivers/net/avf/avf_ethdev.c     |   5 ++
 drivers/net/avf/avf_rxtx.c       | 120 +++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h       |   7 +++
 4 files changed, 134 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index cf1b246..da4d81b 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -25,6 +25,8 @@ VLAN offload         = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
 Basic stats          = Y
 Multiprocess aware   = Y
 BSD nic_uio          = Y
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index e4a6f35..e00bb5d 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -105,6 +105,11 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.reta_query                 = avf_dev_rss_reta_query,
 	.rss_hash_update            = avf_dev_rss_hash_update,
 	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
+	.rxq_info_get               = avf_dev_rxq_info_get,
+	.txq_info_get               = avf_dev_txq_info_get,
+	.rx_queue_count             = avf_dev_rxq_count,
+	.rx_descriptor_status       = avf_dev_rx_desc_status,
+	.tx_descriptor_status       = avf_dev_tx_desc_status,
 	.mtu_set                    = avf_dev_mtu_set,
 };
 
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index baccec4..0fea8f9 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -1385,3 +1385,123 @@
 	dev->tx_pkt_burst = avf_xmit_pkts;
 	dev->tx_pkt_prepare = avf_prep_pkts;
 }
+
+void
+avf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		     struct rte_eth_rxq_info *qinfo)
+{
+	struct avf_rx_queue *rxq;
+
+	rxq = dev->data->rx_queues[queue_id];
+
+	qinfo->mp = rxq->mp;
+	qinfo->scattered_rx = dev->data->scattered_rx;
+	qinfo->nb_desc = rxq->nb_rx_desc;
+
+	qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+	qinfo->conf.rx_drop_en = TRUE;
+	qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		     struct rte_eth_txq_info *qinfo)
+{
+	struct avf_tx_queue *txq;
+
+	txq = dev->data->tx_queues[queue_id];
+
+	qinfo->nb_desc = txq->nb_tx_desc;
+
+	qinfo->conf.tx_free_thresh = txq->free_thresh;
+	qinfo->conf.tx_rs_thresh = txq->rs_thresh;
+	qinfo->conf.txq_flags = txq->txq_flags;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
+/* Get the number of used descriptors of a rx queue */
+uint32_t
+avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+#define AVF_RXQ_SCAN_INTERVAL 4
+	volatile union avf_rx_desc *rxdp;
+	struct avf_rx_queue *rxq;
+	uint16_t desc = 0;
+
+	rxq = dev->data->rx_queues[queue_id];
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	while ((desc < rxq->nb_rx_desc) &&
+	       ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		 AVF_RXD_QW1_STATUS_MASK) >> AVF_RXD_QW1_STATUS_SHIFT) &
+	       (1 << AVF_RX_DESC_STATUS_DD_SHIFT)) {
+		/* Check the DD bit of every fourth rx descriptor in a group,
+		 * to avoid checking too frequently and degrading performance
+		 * too much.
+		 */
+		desc += AVF_RXQ_SCAN_INTERVAL;
+		rxdp += AVF_RXQ_SCAN_INTERVAL;
+		if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+			rxdp = &(rxq->rx_ring[rxq->rx_tail +
+					desc - rxq->nb_rx_desc]);
+	}
+
+	return desc;
+}
+
+int
+avf_dev_rx_desc_status(void *rx_queue, uint16_t offset)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	volatile uint64_t *status;
+	uint64_t mask;
+	uint32_t desc;
+
+	if (unlikely(offset >= rxq->nb_rx_desc))
+		return -EINVAL;
+
+	if (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)
+		return RTE_ETH_RX_DESC_UNAVAIL;
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	status = &rxq->rx_ring[desc].wb.qword1.status_error_len;
+	mask = rte_le_to_cpu_64((1ULL << AVF_RX_DESC_STATUS_DD_SHIFT)
+		<< AVF_RXD_QW1_STATUS_SHIFT);
+	if (*status & mask)
+		return RTE_ETH_RX_DESC_DONE;
+
+	return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+avf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
+{
+	struct avf_tx_queue *txq = tx_queue;
+	volatile uint64_t *status;
+	uint64_t mask, expect;
+	uint32_t desc;
+
+	if (unlikely(offset >= txq->nb_tx_desc))
+		return -EINVAL;
+
+	desc = txq->tx_tail + offset;
+	/* go to next desc that has the RS bit */
+	desc = ((desc + txq->rs_thresh - 1) / txq->rs_thresh) *
+		txq->rs_thresh;
+	if (desc >= txq->nb_tx_desc) {
+		desc -= txq->nb_tx_desc;
+		if (desc >= txq->nb_tx_desc)
+			desc -= txq->nb_tx_desc;
+	}
+
+	status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+	mask = rte_le_to_cpu_64(AVF_TXD_QW1_DTYPE_MASK);
+	expect = rte_cpu_to_le_64(
+		 AVF_TX_DESC_DTYPE_DESC_DONE << AVF_TXD_QW1_DTYPE_SHIFT);
+	if ((*status & mask) == expect)
+		return RTE_ETH_TX_DESC_DONE;
+
+	return RTE_ETH_TX_DESC_FULL;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index cad240d..e248f55 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -147,6 +147,13 @@ uint16_t avf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		       uint16_t nb_pkts);
 void avf_set_rx_function(struct rte_eth_dev *dev);
 void avf_set_tx_function(struct rte_eth_dev *dev);
+void avf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			  struct rte_eth_rxq_info *qinfo);
+void avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			  struct rte_eth_txq_info *qinfo);
+uint32_t avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+int avf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
+int avf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v4 11/15] net/i40e: support AVF basic interface
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
                       ` (9 preceding siblings ...)
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 10/15] net/avf: enable ops to check queue info and status Wenzhuo Lu
@ 2018-01-05  8:21     ` Wenzhuo Lu
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 12/15] net/avf: enable sse vector Rx Tx func Wenzhuo Lu
                       ` (4 subsequent siblings)
  15 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-05  8:21 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Enable virtchnl offload capability negotiation and RSS_PF offload
to support the AVF basic interface.
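
A minimal sketch of the negotiation idea, with a hypothetical helper name
(the real handling is in i40e_pf_host_process_cmd_get_vf_resource() in this
patch): the PF grants only the intersection of the capabilities the VF
requested and the capabilities the host side supports.

	/* illustrative only; mirrors the I40E_VIRTCHNL_OFFLOAD_CAPS masking */
	static uint32_t
	negotiate_offload_caps(uint32_t vf_requested, uint32_t pf_supported)
	{
		/* never grant a capability the host cannot back */
		return vf_requested & pf_supported;
	}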

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c |  69 ++++++++++++++++----
 drivers/net/i40e/i40e_ethdev.h |   5 ++
 drivers/net/i40e/i40e_pf.c     | 140 +++++++++++++++++++++++++++++++++++++----
 drivers/net/i40e/i40e_pf.h     |   6 ++
 4 files changed, 195 insertions(+), 25 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 811cc9f..696d015 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3678,6 +3678,7 @@ static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint32_t reg;
 	int ret;
 
 	if (!lut)
@@ -3694,14 +3695,22 @@ static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 		uint32_t *lut_dw = (uint32_t *)lut;
 		uint16_t i, lut_size_dw = lut_size / 4;
 
-		for (i = 0; i < lut_size_dw; i++)
-			lut_dw[i] = I40E_READ_REG(hw, I40E_PFQF_HLUT(i));
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i < lut_size_dw; i++) {
+				reg = I40E_VFQF_HLUT1(i, vsi->user_param);
+				lut_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+		} else {
+			for (i = 0; i < lut_size_dw; i++)
+				lut_dw[i] = I40E_READ_REG(hw,
+							  I40E_PFQF_HLUT(i));
+		}
 	}
 
 	return 0;
 }
 
-static int
+int
 i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 {
 	struct i40e_pf *pf;
@@ -3725,8 +3734,17 @@ static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 		uint32_t *lut_dw = (uint32_t *)lut;
 		uint16_t i, lut_size_dw = lut_size / 4;
 
-		for (i = 0; i < lut_size_dw; i++)
-			I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i), lut_dw[i]);
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i < lut_size_dw; i++)
+				I40E_WRITE_REG(
+					hw,
+					I40E_VFQF_HLUT1(i, vsi->user_param),
+					lut_dw[i]);
+		} else {
+			for (i = 0; i < lut_size_dw; i++)
+				I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i),
+					       lut_dw[i]);
+		}
 		I40E_WRITE_FLUSH(hw);
 	}
 
@@ -6698,17 +6716,20 @@ struct i40e_vsi *
 	I40E_WRITE_FLUSH(hw);
 }
 
-static int
+int
 i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len)
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint16_t key_idx = (vsi->type == I40E_VSI_SRIOV) ?
+			   I40E_VFQF_HKEY_MAX_INDEX :
+			   I40E_PFQF_HKEY_MAX_INDEX;
 	int ret = 0;
 
 	if (!key || key_len == 0) {
 		PMD_DRV_LOG(DEBUG, "No key to be configured");
 		return 0;
-	} else if (key_len != (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+	} else if (key_len != (key_idx + 1) *
 		sizeof(uint32_t)) {
 		PMD_DRV_LOG(ERR, "Invalid key length %u", key_len);
 		return -EINVAL;
@@ -6725,8 +6746,18 @@ struct i40e_vsi *
 		uint32_t *hash_key = (uint32_t *)key;
 		uint16_t i;
 
-		for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
-			i40e_write_rx_ctl(hw, I40E_PFQF_HKEY(i), hash_key[i]);
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++)
+				I40E_WRITE_REG(
+					hw,
+					I40E_VFQF_HKEY1(i, vsi->user_param),
+					hash_key[i]);
+
+		} else {
+			for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
+				I40E_WRITE_REG(hw, I40E_PFQF_HKEY(i),
+					       hash_key[i]);
+		}
 		I40E_WRITE_FLUSH(hw);
 	}
 
@@ -6738,6 +6769,7 @@ struct i40e_vsi *
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint32_t reg;
 	int ret;
 
 	if (!key || !key_len)
@@ -6754,11 +6786,22 @@ struct i40e_vsi *
 		uint32_t *key_dw = (uint32_t *)key;
 		uint16_t i;
 
-		for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
-			key_dw[i] = i40e_read_rx_ctl(hw, I40E_PFQF_HKEY(i));
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++) {
+				reg = I40E_VFQF_HKEY1(i, vsi->user_param);
+				key_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+			*key_len = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
+				   sizeof(uint32_t);
+		} else {
+			for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++) {
+				reg = I40E_PFQF_HKEY(i);
+				key_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+			*key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+				   sizeof(uint32_t);
+		}
 	}
-	*key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
-
 	return 0;
 }
 
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index cd67453..89dd611 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -426,6 +426,9 @@ struct i40e_pf_vf {
 	uint16_t lan_nb_qps; /* Actual queues allocated */
 	uint16_t reset_cnt; /* Total vf reset times */
 	struct ether_addr mac_addr;  /* Default MAC address */
+	/* version of the virtchnl from VF */
+	struct virtchnl_version_info version;
+	uint32_t request_caps; /* offload caps requested from VF */
 };
 
 /*
@@ -1198,6 +1201,8 @@ void i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
 int i40e_flush_queue_region_all_conf(struct rte_eth_dev *dev,
 		struct i40e_hw *hw, struct i40e_pf *pf, uint16_t on);
 void i40e_init_queue_region_conf(struct rte_eth_dev *dev);
+int i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len);
+int i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size);
 
 #define I40E_DEV_TO_PCI(eth_dev) \
 	RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index 94bb0cf..7317d19 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -273,19 +273,23 @@
 }
 
 static void
-i40e_pf_host_process_cmd_version(struct i40e_pf_vf *vf, bool b_op)
+i40e_pf_host_process_cmd_version(struct i40e_pf_vf *vf, uint8_t *msg,
+				 bool b_op)
 {
 	struct virtchnl_version_info info;
 
-	/* Respond like a Linux PF host in order to support both DPDK VF and
-	 * Linux VF driver. The expense is original DPDK host specific feature
+	/* VF and PF drivers need to follow the Virtchnl definition, no matter
+	 * whether they are DPDK or kernel drivers.
+	 * The original DPDK host specific features
+	 * like CFG_VLAN_PVID and CONFIG_VSI_QUEUES_EXT will not be available.
-	 *
-	 * DPDK VF also can't identify host driver by version number returned.
-	 * It always assume talking with Linux PF.
 	 */
+
 	info.major = VIRTCHNL_VERSION_MAJOR;
-	info.minor = VIRTCHNL_VERSION_MINOR_NO_VF_CAPS;
+	vf->version = *(struct virtchnl_version_info *)msg;
+	if (VF_IS_V10(&vf->version))
+		info.minor = VIRTCHNL_VERSION_MINOR_NO_VF_CAPS;
+	else
+		info.minor = VIRTCHNL_VERSION_MINOR;
 
 	if (b_op)
 		i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_VERSION,
@@ -309,11 +313,13 @@
 }
 
 static int
-i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, bool b_op)
+i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
+					 bool b_op)
 {
 	struct virtchnl_vf_resource *vf_res = NULL;
 	struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
 	uint32_t len = 0;
+	uint64_t default_hena = I40E_RSS_HENA_ALL;
 	int ret = I40E_SUCCESS;
 
 	if (!b_op) {
@@ -337,11 +343,35 @@
 		goto send_msg;
 	}
 
-	vf_res->vf_offload_flags = VIRTCHNL_VF_OFFLOAD_L2 |
-				VIRTCHNL_VF_OFFLOAD_VLAN;
+	if (VF_IS_V10(&vf->version)) /* doesn't support offload negotiation */
+		vf->request_caps = VIRTCHNL_VF_OFFLOAD_L2 |
+				   VIRTCHNL_VF_OFFLOAD_VLAN;
+	else
+		vf->request_caps = *(uint32_t *)msg;
+
+	/* Enable all RSS by default,
+	 * hena setting by virtchnl is not supported yet.
+	 */
+	if (vf->request_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		I40E_WRITE_REG(hw, I40E_VFQF_HENA1(0, vf->vf_idx),
+			       (uint32_t)default_hena);
+		I40E_WRITE_REG(hw, I40E_VFQF_HENA1(1, vf->vf_idx),
+			       (uint32_t)(default_hena >> 32));
+		I40E_WRITE_FLUSH(hw);
+	}
+
+	vf_res->vf_offload_flags = vf->request_caps &
+				   I40E_VIRTCHNL_OFFLOAD_CAPS;
+	/* For X722, it supports write back on ITR
+	 * without binding queue to interrupt vector.
+	 */
+	if (hw->mac.type == I40E_MAC_X722)
+		vf_res->vf_offload_flags |= VIRTCHNL_VF_OFFLOAD_WB_ON_ITR;
 	vf_res->max_vectors = hw->func_caps.num_msix_vectors_vf;
 	vf_res->num_queue_pairs = vf->vsi->nb_qps;
 	vf_res->num_vsis = I40E_DEFAULT_VF_VSI_NUM;
+	vf_res->rss_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) * 4;
+	vf_res->rss_lut_size = (I40E_VFQF_HLUT1_MAX_INDEX + 1) * 4;
 
 	/* Change below setting if PF host can support more VSIs for VF */
 	vf_res->vsi_res[0].vsi_type = VIRTCHNL_VSI_SRIOV;
@@ -1090,6 +1120,84 @@
 	return ret;
 }
 
+static int
+i40e_pf_host_process_cmd_set_rss_lut(struct i40e_pf_vf *vf,
+				     uint8_t *msg,
+				     uint16_t msglen,
+				     bool b_op)
+{
+	struct virtchnl_rss_lut *rss_lut = (struct virtchnl_rss_lut *)msg;
+	uint16_t valid_len;
+	int ret = I40E_SUCCESS;
+
+	if (!b_op) {
+		i40e_pf_host_send_msg_to_vf(
+			vf,
+			VIRTCHNL_OP_CONFIG_RSS_LUT,
+			I40E_NOT_SUPPORTED, NULL, 0);
+		return ret;
+	}
+
+	if (!msg || msglen <= sizeof(struct virtchnl_rss_lut)) {
+		PMD_DRV_LOG(ERR, "set_rss_lut argument too short");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+	valid_len = sizeof(struct virtchnl_rss_lut) + rss_lut->lut_entries - 1;
+	if (msglen < valid_len) {
+		PMD_DRV_LOG(ERR, "set_rss_lut length mismatch");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+	ret = i40e_set_rss_lut(vf->vsi, rss_lut->lut, rss_lut->lut_entries);
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_LUT,
+				    ret, NULL, 0);
+
+	return ret;
+}
+
+static int
+i40e_pf_host_process_cmd_set_rss_key(struct i40e_pf_vf *vf,
+				     uint8_t *msg,
+				     uint16_t msglen,
+				     bool b_op)
+{
+	struct virtchnl_rss_key *rss_key = (struct virtchnl_rss_key *)msg;
+	uint16_t valid_len;
+	int ret = I40E_SUCCESS;
+
+	if (!b_op) {
+		i40e_pf_host_send_msg_to_vf(
+			vf,
+			VIRTCHNL_OP_CONFIG_RSS_KEY,
+			I40E_NOT_SUPPORTED, NULL, 0);
+		return ret;
+	}
+
+	if (!msg || msglen <= sizeof(struct virtchnl_rss_key)) {
+		PMD_DRV_LOG(ERR, "set_rss_key argument too short");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+	valid_len = sizeof(struct virtchnl_rss_key) + rss_key->key_len - 1;
+	if (msglen < valid_len) {
+		PMD_DRV_LOG(ERR, "set_rss_key length mismatch");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+	ret = i40e_set_rss_key(vf->vsi, rss_key->key, rss_key->key_len);
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_KEY,
+				    ret, NULL, 0);
+
+	return ret;
+}
+
 void
 i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
 {
@@ -1196,7 +1304,7 @@
 	switch (opcode) {
 	case VIRTCHNL_OP_VERSION:
 		PMD_DRV_LOG(INFO, "OP_VERSION received");
-		i40e_pf_host_process_cmd_version(vf, b_op);
+		i40e_pf_host_process_cmd_version(vf, msg, b_op);
 		break;
 	case VIRTCHNL_OP_RESET_VF:
 		PMD_DRV_LOG(INFO, "OP_RESET_VF received");
@@ -1204,7 +1312,7 @@
 		break;
 	case VIRTCHNL_OP_GET_VF_RESOURCES:
 		PMD_DRV_LOG(INFO, "OP_GET_VF_RESOURCES received");
-		i40e_pf_host_process_cmd_get_vf_resource(vf, b_op);
+		i40e_pf_host_process_cmd_get_vf_resource(vf, msg, b_op);
 		break;
 	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
 		PMD_DRV_LOG(INFO, "OP_CONFIG_VSI_QUEUES received");
@@ -1265,6 +1373,14 @@
 		PMD_DRV_LOG(INFO, "OP_DISABLE_VLAN_STRIPPING received");
 		i40e_pf_host_process_cmd_disable_vlan_strip(vf, b_op);
 		break;
+	case VIRTCHNL_OP_CONFIG_RSS_LUT:
+		PMD_DRV_LOG(INFO, "OP_CONFIG_RSS_LUT received");
+		i40e_pf_host_process_cmd_set_rss_lut(vf, msg, msglen, b_op);
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_KEY:
+		PMD_DRV_LOG(INFO, "OP_CONFIG_RSS_KEY received");
+		i40e_pf_host_process_cmd_set_rss_key(vf, msg, msglen, b_op);
+		break;
 	/* Don't add command supported below, which will
 	 * return an error code.
 	 */
diff --git a/drivers/net/i40e/i40e_pf.h b/drivers/net/i40e/i40e_pf.h
index 0411663..196d71e 100644
--- a/drivers/net/i40e/i40e_pf.h
+++ b/drivers/net/i40e/i40e_pf.h
@@ -37,6 +37,12 @@
 /* Default setting on number of VSIs that VF can contain */
 #define I40E_DEFAULT_VF_VSI_NUM 1
 
+#define I40E_VIRTCHNL_OFFLOAD_CAPS ( \
+	VIRTCHNL_VF_OFFLOAD_L2 | \
+	VIRTCHNL_VF_OFFLOAD_VLAN | \
+	VIRTCHNL_VF_OFFLOAD_RSS_PF | \
+	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
+
 struct virtchnl_vlan_offload_info {
 	uint16_t vsi_id;
 	uint8_t enable_vlan_strip;
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v4 12/15] net/avf: enable sse vector Rx Tx func
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
                       ` (10 preceding siblings ...)
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 11/15] net/i40e: support AVF basic interface Wenzhuo Lu
@ 2018-01-05  8:21     ` Wenzhuo Lu
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 13/15] net/avf: enable bulk allocate Rx func Wenzhuo Lu
                       ` (3 subsequent siblings)
  15 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-05  8:21 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 config/common_base                    |   1 +
 doc/guides/nics/features/avf_vec.ini  |  36 ++
 drivers/net/avf/Makefile              |   1 +
 drivers/net/avf/avf.h                 |   4 +
 drivers/net/avf/avf_ethdev.c          |  11 +
 drivers/net/avf/avf_rxtx.c            | 172 ++++++++-
 drivers/net/avf/avf_rxtx.h            |  36 +-
 drivers/net/avf/avf_rxtx_vec_common.h | 210 +++++++++++
 drivers/net/avf/avf_rxtx_vec_sse.c    | 656 ++++++++++++++++++++++++++++++++++
 9 files changed, 1116 insertions(+), 11 deletions(-)
 create mode 100644 doc/guides/nics/features/avf_vec.ini
 create mode 100644 drivers/net/avf/avf_rxtx_vec_common.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_sse.c

diff --git a/config/common_base b/config/common_base
index b1f1c1c..f9363ff 100644
--- a/config/common_base
+++ b/config/common_base
@@ -229,6 +229,7 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 # Compile burst-oriented AVF PMD driver
 #
 CONFIG_RTE_LIBRTE_AVF_PMD=y
+CONFIG_RTE_LIBRTE_AVF_INC_VECTOR=y
 CONFIG_RTE_LIBRTE_AVF_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_AVF_DEBUG_TX_FREE=n
 CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=n
diff --git a/doc/guides/nics/features/avf_vec.ini b/doc/guides/nics/features/avf_vec.ini
new file mode 100644
index 0000000..45dd5e5
--- /dev/null
+++ b/doc/guides/nics/features/avf_vec.ini
@@ -0,0 +1,36 @@
+;
+; Supported features of the 'avf_vec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
+Queue start/stop     = Y
+MTU update           = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+TSO                  = Y
+Promiscuous mode     = Y
+Allmulticast mode    = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
+RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
+VLAN filter          = Y
+CRC offload          = Y
+VLAN offload         = P
+L3 checksum offload  = P
+L4 checksum offload  = P
+Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
+Basic stats          = Y
+Multiprocess aware   = Y
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index 1a673fa..14fa38a 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -31,5 +31,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_INC_VECTOR) += avf_rxtx_vec_sse.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index ea48310..b79bc5a 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -119,6 +119,10 @@ struct avf_adapter {
 	struct avf_hw hw;
 	struct rte_eth_dev *eth_dev;
 	struct avf_info vf;
+
+	/* For vector PMD */
+	bool rx_vec_allowed;
+	bool tx_vec_allowed;
 };
 
 /* AVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index e00bb5d..127fdb5 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -121,6 +121,17 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(ad);
 	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	/* Initialize to TRUE. If any of the Rx/Tx queues doesn't meet the
+	 * vector Rx/Tx preconditions, the flag will be reset to FALSE.
+	 */
+	ad->rx_vec_allowed = true;
+	ad->tx_vec_allowed = true;
+#else
+	ad->rx_vec_allowed = false;
+	ad->tx_vec_allowed = false;
+#endif
+
 	/* Vlan stripping setting */
 	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) {
 		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index 0fea8f9..b542532 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -92,6 +92,34 @@
 	return 0;
 }
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+static inline bool
+check_rx_vec_allow(struct avf_rx_queue *rxq)
+{
+	if (rxq->rx_free_thresh >= AVF_VPMD_RX_MAX_BURST &&
+	    rxq->nb_rx_desc % rxq->rx_free_thresh == 0) {
+		PMD_INIT_LOG(DEBUG, "Vector Rx can be enabled on this rxq.");
+		return TRUE;
+	}
+
+	PMD_INIT_LOG(DEBUG, "Vector Rx cannot be enabled on this rxq.");
+	return FALSE;
+}
+
+static inline bool
+check_tx_vec_allow(struct avf_tx_queue *txq)
+{
+	if ((txq->txq_flags & AVF_SIMPLE_FLAGS) == AVF_SIMPLE_FLAGS &&
+	    txq->rs_thresh >= AVF_VPMD_TX_MAX_BURST &&
+	    txq->rs_thresh <= AVF_VPMD_TX_MAX_FREE_BUF) {
+		PMD_INIT_LOG(DEBUG, "Vector Tx can be enabled on this txq.");
+		return TRUE;
+	}
+	PMD_INIT_LOG(DEBUG, "Vector Tx cannot be enabled on this txq.");
+	return FALSE;
+}
+#endif
+
 static inline void
 reset_rx_queue(struct avf_rx_queue *rxq)
 {
@@ -225,6 +253,14 @@
 	}
 }
 
+static const struct avf_rxq_ops def_rxq_ops = {
+	.release_mbufs = release_rxq_mbufs,
+};
+
+static const struct avf_txq_ops def_txq_ops = {
+	.release_mbufs = release_txq_mbufs,
+};
+
 int
 avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		       uint16_t nb_desc, unsigned int socket_id,
@@ -325,7 +361,12 @@
 	rxq->q_set = TRUE;
 	dev->data->rx_queues[queue_idx] = rxq;
 	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
+	rxq->ops = &def_rxq_ops;
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	if (check_rx_vec_allow(rxq) == FALSE)
+		ad->rx_vec_allowed = false;
+#endif
 	return 0;
 }
 
@@ -337,6 +378,8 @@
 		       const struct rte_eth_txconf *tx_conf)
 {
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct avf_tx_queue *txq;
 	const struct rte_memzone *mz;
 	uint32_t ring_size;
@@ -416,6 +459,12 @@
 	txq->q_set = TRUE;
 	dev->data->tx_queues[queue_idx] = txq;
 	txq->qtx_tail = hw->hw_addr + AVF_QTX_TAIL1(queue_idx);
+	txq->ops = &def_txq_ops;
+
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	if (check_tx_vec_allow(txq) == FALSE)
+		ad->tx_vec_allowed = false;
+#endif
 
 	return 0;
 }
@@ -514,7 +563,7 @@
 	}
 
 	rxq = dev->data->rx_queues[rx_queue_id];
-	release_rxq_mbufs(rxq);
+	rxq->ops->release_mbufs(rxq);
 	reset_rx_queue(rxq);
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -542,7 +591,7 @@
 	}
 
 	txq = dev->data->tx_queues[tx_queue_id];
-	release_txq_mbufs(txq);
+	txq->ops->release_mbufs(txq);
 	reset_tx_queue(txq);
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -557,7 +606,7 @@
 	if (!q)
 		return;
 
-	release_rxq_mbufs(q);
+	q->ops->release_mbufs(q);
 	rte_free(q->sw_ring);
 	rte_memzone_free(q->mz);
 	rte_free(q);
@@ -571,7 +620,7 @@
 	if (!q)
 		return;
 
-	release_txq_mbufs(q);
+	q->ops->release_mbufs(q);
 	rte_free(q->sw_ring);
 	rte_memzone_free(q->mz);
 	rte_free(q);
@@ -595,7 +644,7 @@
 		txq = dev->data->tx_queues[i];
 		if (!txq)
 			continue;
-		release_txq_mbufs(txq);
+		txq->ops->release_mbufs(txq);
 		reset_tx_queue(txq);
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
@@ -603,7 +652,7 @@
 		rxq = dev->data->rx_queues[i];
 		if (!rxq)
 			continue;
-		release_rxq_mbufs(rxq);
+		rxq->ops->release_mbufs(rxq);
 		reset_rx_queue(rxq);
 		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
@@ -1320,6 +1369,27 @@
 	return nb_tx;
 }
 
+static uint16_t
+avf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+		  uint16_t nb_pkts)
+{
+	uint16_t nb_tx = 0;
+	struct avf_tx_queue *txq = (struct avf_tx_queue *)tx_queue;
+
+	while (nb_pkts) {
+		uint16_t ret, num;
+
+		num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+		ret = avf_xmit_fixed_burst_vec(tx_queue, &tx_pkts[nb_tx], num);
+		nb_tx += ret;
+		nb_pkts -= ret;
+		if (ret < num)
+			break;
+	}
+
+	return nb_tx;
+}
+
 /* TX prep functions */
 uint16_t
 avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
@@ -1372,18 +1442,64 @@
 void
 avf_set_rx_function(struct rte_eth_dev *dev)
 {
-	if (dev->data->scattered_rx)
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int i;
+
+	if (adapter->rx_vec_allowed) {
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(DEBUG, "Using Vector Scattered Rx callback"
+				    " (port=%d).", dev->data->port_id);
+			dev->rx_pkt_burst = avf_recv_scattered_pkts_vec;
+		} else {
+			PMD_DRV_LOG(DEBUG, "Using Vector Rx callback"
+				    " (port=%d).", dev->data->port_id);
+			dev->rx_pkt_burst = avf_recv_pkts_vec;
+		}
+		for (i = 0; i < dev->data->nb_rx_queues; i++) {
+			rxq = dev->data->rx_queues[i];
+			if (!rxq)
+				continue;
+			avf_rxq_vec_setup(rxq);
+		}
+	} else if (dev->data->scattered_rx) {
+		PMD_DRV_LOG(DEBUG, "Using a Scattered Rx callback (port=%d).",
+			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_scattered_pkts;
-	else
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using Basic Rx callback (port=%d).",
+			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_pkts;
+	}
 }
 
 /* choose tx function*/
 void
 avf_set_tx_function(struct rte_eth_dev *dev)
 {
-	dev->tx_pkt_burst = avf_xmit_pkts;
-	dev->tx_pkt_prepare = avf_prep_pkts;
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int i;
+
+	if (adapter->tx_vec_allowed) {
+		PMD_DRV_LOG(DEBUG, "Using Vector Tx callback (port=%d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = avf_xmit_pkts_vec;
+		dev->tx_pkt_prepare = NULL;
+		for (i = 0; i < dev->data->nb_tx_queues; i++) {
+			txq = dev->data->tx_queues[i];
+			if (!txq)
+				continue;
+			avf_txq_vec_setup(txq);
+		}
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using Basic Tx callback (port=%d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = avf_xmit_pkts;
+		dev->tx_pkt_prepare = avf_prep_pkts;
+	}
 }
 
 void
@@ -1505,3 +1621,39 @@
 
 	return RTE_ETH_TX_DESC_FULL;
 }
+
+uint16_t __attribute__((weak))
+avf_recv_pkts_vec(__rte_unused void *rx_queue,
+		  __rte_unused struct rte_mbuf **rx_pkts,
+		  __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+uint16_t __attribute__((weak))
+avf_recv_scattered_pkts_vec(__rte_unused void *rx_queue,
+			    __rte_unused struct rte_mbuf **rx_pkts,
+			    __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+uint16_t __attribute__((weak))
+avf_xmit_fixed_burst_vec(__rte_unused void *tx_queue,
+			 __rte_unused struct rte_mbuf **tx_pkts,
+			 __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+int __attribute__((weak))
+avf_rxq_vec_setup(__rte_unused struct avf_rx_queue *rxq)
+{
+	return -1;
+}
+
+int __attribute__((weak))
+avf_txq_vec_setup(__rte_unused struct avf_tx_queue *txq)
+{
+	return -1;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index e248f55..82fd801 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -16,6 +16,15 @@
 /* used for Rx Bulk Allocate */
 #define AVF_RX_MAX_BURST         32
 
+/* used for Vector PMD */
+#define AVF_VPMD_RX_MAX_BURST    32
+#define AVF_VPMD_TX_MAX_BURST    32
+#define AVF_VPMD_DESCS_PER_LOOP  4
+#define AVF_VPMD_TX_MAX_FREE_BUF 64
+
+#define AVF_SIMPLE_FLAGS ((uint32_t)ETH_TXQ_FLAGS_NOMULTSEGS | \
+			  ETH_TXQ_FLAGS_NOOFFLOADS)
+
 #define DEFAULT_TX_RS_THRESH     32
 #define DEFAULT_TX_FREE_THRESH   32
 
@@ -45,6 +54,14 @@
 #define avf_rx_desc avf_32byte_rx_desc
 #endif
 
+struct avf_rxq_ops {
+	void (*release_mbufs)(struct avf_rx_queue *rxq);
+};
+
+struct avf_txq_ops {
+	void (*release_mbufs)(struct avf_tx_queue *txq);
+};
+
 /* Structure associated with each Rx queue. */
 struct avf_rx_queue {
 	struct rte_mempool *mp;       /* mbuf pool to populate Rx ring */
@@ -61,7 +78,12 @@ struct avf_rx_queue {
 	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
 	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
 
-	uint16_t port_id;       /* device port ID */
+	/* used for VPMD */
+	uint16_t rxrearm_nb;       /* number of remaining to be re-armed */
+	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
+	uint64_t mbuf_initializer; /* value to init mbufs */
+
+	uint16_t port_id;        /* device port ID */
 	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
 	uint16_t queue_id;      /* Rx queue index */
 	uint16_t rx_buf_len;    /* The packet buffer size */
@@ -70,6 +92,7 @@ struct avf_rx_queue {
 
 	bool q_set;             /* if rx queue has been configured */
 	bool rx_deferred_start; /* don't start this queue in dev start */
+	const struct avf_rxq_ops *ops;
 };
 
 struct avf_tx_entry {
@@ -102,6 +125,7 @@ struct avf_tx_queue {
 
 	bool q_set;                    /* if rx queue has been configured */
 	bool tx_deferred_start;        /* don't start this queue in dev start */
+	const struct avf_txq_ops *ops;
 };
 
 /* Offload features */
@@ -155,6 +179,16 @@ void avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 int avf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
 int avf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
+uint16_t avf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			   uint16_t nb_pkts);
+uint16_t avf_recv_scattered_pkts_vec(void *rx_queue,
+				     struct rte_mbuf **rx_pkts,
+				     uint16_t nb_pkts);
+uint16_t avf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+				  uint16_t nb_pkts);
+int avf_rxq_vec_setup(struct avf_rx_queue *rxq);
+int avf_txq_vec_setup(struct avf_tx_queue *txq);
+
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
 			    const void *desc,
diff --git a/drivers/net/avf/avf_rxtx_vec_common.h b/drivers/net/avf/avf_rxtx_vec_common.h
new file mode 100644
index 0000000..56a23a7
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx_vec_common.h
@@ -0,0 +1,210 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_RXTX_VEC_COMMON_H_
+#define _AVF_RXTX_VEC_COMMON_H_
+#include <stdint.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+
+#include "avf.h"
+#include "avf_rxtx.h"
+
+static inline uint16_t
+reassemble_packets(struct avf_rx_queue *rxq, struct rte_mbuf **rx_bufs,
+		   uint16_t nb_bufs, uint8_t *split_flags)
+{
+	struct rte_mbuf *pkts[AVF_VPMD_RX_MAX_BURST];
+	struct rte_mbuf *start = rxq->pkt_first_seg;
+	struct rte_mbuf *end =  rxq->pkt_last_seg;
+	unsigned int pkt_idx, buf_idx;
+
+	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
+		if (end) {
+			/* processing a split packet */
+			end->next = rx_bufs[buf_idx];
+			rx_bufs[buf_idx]->data_len += rxq->crc_len;
+
+			start->nb_segs++;
+			start->pkt_len += rx_bufs[buf_idx]->data_len;
+			end = end->next;
+
+			if (!split_flags[buf_idx]) {
+				/* it's the last packet of the set */
+				start->hash = end->hash;
+				start->ol_flags = end->ol_flags;
+				/* we need to strip crc for the whole packet */
+				start->pkt_len -= rxq->crc_len;
+				if (end->data_len > rxq->crc_len) {
+					end->data_len -= rxq->crc_len;
+				} else {
+					/* free up last mbuf */
+					struct rte_mbuf *secondlast = start;
+
+					start->nb_segs--;
+					while (secondlast->next != end)
+						secondlast = secondlast->next;
+					secondlast->data_len -= (rxq->crc_len -
+							end->data_len);
+					secondlast->next = NULL;
+					rte_pktmbuf_free_seg(end);
+				}
+				pkts[pkt_idx++] = start;
+				start = NULL;
+				end = NULL;
+			}
+		} else {
+			/* not processing a split packet */
+			if (!split_flags[buf_idx]) {
+				/* not a split packet, save and skip */
+				pkts[pkt_idx++] = rx_bufs[buf_idx];
+				continue;
+			}
+			end = start = rx_bufs[buf_idx];
+			rx_bufs[buf_idx]->data_len += rxq->crc_len;
+			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
+		}
+	}
+
+	/* save the partial packet for next time */
+	rxq->pkt_first_seg = start;
+	rxq->pkt_last_seg = end;
+	memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
+	return pkt_idx;
+}
+
+static __rte_always_inline int
+avf_tx_free_bufs(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *txep;
+	uint32_t n;
+	uint32_t i;
+	int nb_free = 0;
+	struct rte_mbuf *m, *free[AVF_VPMD_TX_MAX_FREE_BUF];
+
+	/* check DD bits on threshold descriptor */
+	if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK)) !=
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	n = txq->rs_thresh;
+
+	 /* first buffer to free from S/W ring is at index
+	  * tx_next_dd - (tx_rs_thresh-1)
+	  */
+	txep = &txq->sw_ring[txq->next_dd - (n - 1)];
+	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
+	if (likely(m != NULL)) {
+		free[0] = m;
+		nb_free = 1;
+		for (i = 1; i < n; i++) {
+			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+			if (likely(m != NULL)) {
+				if (likely(m->pool == free[0]->pool)) {
+					free[nb_free++] = m;
+				} else {
+					rte_mempool_put_bulk(free[0]->pool,
+							     (void *)free,
+							     nb_free);
+					free[0] = m;
+					nb_free = 1;
+				}
+			}
+		}
+		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+	} else {
+		for (i = 1; i < n; i++) {
+			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+			if (m)
+				rte_mempool_put(m->pool, m);
+		}
+	}
+
+	/* buffers were freed, update counters */
+	txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh);
+	txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh);
+	if (txq->next_dd >= txq->nb_tx_desc)
+		txq->next_dd = (uint16_t)(txq->rs_thresh - 1);
+
+	return txq->rs_thresh;
+}
+
+static __rte_always_inline void
+tx_backlog_entry(struct avf_tx_entry *txep,
+		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	int i;
+
+	for (i = 0; i < (int)nb_pkts; ++i)
+		txep[i].mbuf = tx_pkts[i];
+}
+
+static inline void
+_avf_rx_queue_release_mbufs_vec(struct avf_rx_queue *rxq)
+{
+	const unsigned int mask = rxq->nb_rx_desc - 1;
+	unsigned int i;
+
+	if (!rxq->sw_ring || rxq->rxrearm_nb >= rxq->nb_rx_desc)
+		return;
+
+	/* free all mbufs that are valid in the ring */
+	if (rxq->rxrearm_nb == 0) {
+		for (i = 0; i < rxq->nb_rx_desc; i++) {
+			if (rxq->sw_ring[i])
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	} else {
+		for (i = rxq->rx_tail;
+		     i != rxq->rxrearm_start;
+		     i = (i + 1) & mask) {
+			if (rxq->sw_ring[i])
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	}
+
+	rxq->rxrearm_nb = rxq->nb_rx_desc;
+
+	/* set all entries to NULL */
+	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
+}
+
+static inline void
+_avf_tx_queue_release_mbufs_vec(struct avf_tx_queue *txq)
+{
+	unsigned int i;
+	const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
+
+	if (!txq->sw_ring || txq->nb_free == max_desc)
+		return;
+
+	i = txq->next_dd - txq->rs_thresh + 1;
+	if (txq->tx_tail < i) {
+		for (; i < txq->nb_tx_desc; i++) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+		i = 0;
+	}
+}
+
+static inline int
+avf_rxq_vec_setup_default(struct avf_rx_queue *rxq)
+{
+	uintptr_t p;
+	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
+
+	mb_def.nb_segs = 1;
+	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
+	mb_def.port = rxq->port_id;
+	rte_mbuf_refcnt_set(&mb_def, 1);
+
+	/* prevent compiler reordering: rearm_data covers previous fields */
+	rte_compiler_barrier();
+	p = (uintptr_t)&mb_def.rearm_data;
+	rxq->mbuf_initializer = *(uint64_t *)p;
+	return 0;
+}
+#endif
diff --git a/drivers/net/avf/avf_rxtx_vec_sse.c b/drivers/net/avf/avf_rxtx_vec_sse.c
new file mode 100644
index 0000000..8f389f3
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx_vec_sse.c
@@ -0,0 +1,656 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdint.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+
+#include "base/avf_prototype.h"
+#include "base/avf_type.h"
+#include "avf.h"
+#include "avf_rxtx.h"
+#include "avf_rxtx_vec_common.h"
+
+#include <tmmintrin.h>
+
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#endif
+
+static inline void
+avf_rxq_rearm(struct avf_rx_queue *rxq)
+{
+	int i;
+	uint16_t rx_id;
+
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxp = &rxq->sw_ring[rxq->rxrearm_start];
+	struct rte_mbuf *mb0, *mb1;
+	__m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM,
+			RTE_PKTMBUF_HEADROOM);
+	__m128i dma_addr0, dma_addr1;
+
+	rxdp = rxq->rx_ring + rxq->rxrearm_start;
+
+	/* Pull 'n' more MBUFs into the software ring */
+	if (rte_mempool_get_bulk(rxq->mp, (void *)rxp,
+				 rxq->rx_free_thresh) < 0) {
+		if (rxq->rxrearm_nb + rxq->rx_free_thresh >= rxq->nb_rx_desc) {
+			dma_addr0 = _mm_setzero_si128();
+			for (i = 0; i < AVF_VPMD_DESCS_PER_LOOP; i++) {
+				rxp[i] = &rxq->fake_mbuf;
+				_mm_store_si128((__m128i *)&rxdp[i].read,
+						dma_addr0);
+			}
+		}
+		rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed +=
+			rxq->rx_free_thresh;
+		return;
+	}
+
+	/* Initialize the mbufs in vector, process 2 mbufs in one loop */
+	for (i = 0; i < rxq->rx_free_thresh; i += 2, rxp += 2) {
+		__m128i vaddr0, vaddr1;
+
+		mb0 = rxp[0];
+		mb1 = rxp[1];
+
+		/* load buf_addr(lo 64bit) and buf_iova(hi 64bit) */
+		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) !=
+				offsetof(struct rte_mbuf, buf_addr) + 8);
+		vaddr0 = _mm_loadu_si128((__m128i *)&mb0->buf_addr);
+		vaddr1 = _mm_loadu_si128((__m128i *)&mb1->buf_addr);
+
+		/* convert pa to dma_addr hdr/data */
+		dma_addr0 = _mm_unpackhi_epi64(vaddr0, vaddr0);
+		dma_addr1 = _mm_unpackhi_epi64(vaddr1, vaddr1);
+
+		/* add headroom to pa values */
+		dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room);
+		dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room);
+
+		/* flush desc with pa dma_addr */
+		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr0);
+		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr1);
+	}
+
+	rxq->rxrearm_start += rxq->rx_free_thresh;
+	if (rxq->rxrearm_start >= rxq->nb_rx_desc)
+		rxq->rxrearm_start = 0;
+
+	rxq->rxrearm_nb -= rxq->rx_free_thresh;
+
+	rx_id = (uint16_t)((rxq->rxrearm_start == 0) ?
+			   (rxq->nb_rx_desc - 1) : (rxq->rxrearm_start - 1));
+
+	PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+		   "rearm_start=%u rearm_nb=%u",
+		   rxq->port_id, rxq->queue_id,
+		   rx_id, rxq->rxrearm_start, rxq->rxrearm_nb);
+
+	/* Update the tail pointer on the NIC */
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+}
+
+static inline void
+desc_to_olflags_v(struct avf_rx_queue *rxq, __m128i descs[4],
+		  struct rte_mbuf **rx_pkts)
+{
+	const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_initializer);
+	__m128i rearm0, rearm1, rearm2, rearm3;
+
+	__m128i vlan0, vlan1, rss, l3_l4e;
+
+	/* mask everything except RSS, flow director and VLAN flags
+	 * bit2 is for VLAN tag, bit11 for flow director indication
+	 * bit13:12 for RSS indication.
+	 */
+	const __m128i rss_vlan_msk = _mm_set_epi32(
+			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804);
+
+	const __m128i cksum_mask = _mm_set_epi32(
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD);
+
+	/* map rss and vlan type to rss hash and vlan flag */
+	const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0,
+			0, 0, 0, 0,
+			0, 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			0, 0, 0, 0);
+
+	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
+			0, 0, 0, 0,
+			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
+			0, 0, PKT_RX_FDIR, 0);
+
+	const __m128i l3_l4e_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
+			/* shift right 1 bit to make sure it does not exceed 255 */
+			(PKT_RX_EIP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
+			 PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_EIP_CKSUM_BAD |
+			 PKT_RX_L4_CKSUM_BAD) >> 1,
+			(PKT_RX_EIP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_EIP_CKSUM_BAD) >> 1,
+			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
+			PKT_RX_IP_CKSUM_BAD >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+
+	vlan0 = _mm_unpackhi_epi32(descs[0], descs[1]);
+	vlan1 = _mm_unpackhi_epi32(descs[2], descs[3]);
+	vlan0 = _mm_unpacklo_epi64(vlan0, vlan1);
+
+	vlan1 = _mm_and_si128(vlan0, rss_vlan_msk);
+	vlan0 = _mm_shuffle_epi8(vlan_flags, vlan1);
+
+	rss = _mm_srli_epi32(vlan1, 11);
+	rss = _mm_shuffle_epi8(rss_flags, rss);
+
+	l3_l4e = _mm_srli_epi32(vlan1, 22);
+	l3_l4e = _mm_shuffle_epi8(l3_l4e_flags, l3_l4e);
+	/* then we shift left 1 bit */
+	l3_l4e = _mm_slli_epi32(l3_l4e, 1);
+	/* we need to mask out the redundant bits */
+	l3_l4e = _mm_and_si128(l3_l4e, cksum_mask);
+
+	vlan0 = _mm_or_si128(vlan0, rss);
+	vlan0 = _mm_or_si128(vlan0, l3_l4e);
+
+	/* At this point, we have the 4 sets of flags in the low 16-bits
+	 * of each 32-bit value in vlan0.
+	 * We want to extract these, and merge them with the mbuf init data
+	 * so we can do a single 16-byte write to the mbuf to set the flags
+	 * and all the other initialization fields. Extracting the
+	 * appropriate flags means that we have to do a shift and blend for
+	 * each mbuf before we do the write.
+	 */
+	rearm0 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vlan0, 8), 0x10);
+	rearm1 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vlan0, 4), 0x10);
+	rearm2 = _mm_blend_epi16(mbuf_init, vlan0, 0x10);
+	rearm3 = _mm_blend_epi16(mbuf_init, _mm_srli_si128(vlan0, 4), 0x10);
+
+	/* write the rearm data and the olflags in one write */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
+			offsetof(struct rte_mbuf, rearm_data) + 8);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) !=
+			RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16));
+	_mm_store_si128((__m128i *)&rx_pkts[0]->rearm_data, rearm0);
+	_mm_store_si128((__m128i *)&rx_pkts[1]->rearm_data, rearm1);
+	_mm_store_si128((__m128i *)&rx_pkts[2]->rearm_data, rearm2);
+	_mm_store_si128((__m128i *)&rx_pkts[3]->rearm_data, rearm3);
+}
+
+#define PKTLEN_SHIFT     10
+
+static inline void
+desc_to_ptype_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
+{
+	__m128i ptype0 = _mm_unpackhi_epi64(descs[0], descs[1]);
+	__m128i ptype1 = _mm_unpackhi_epi64(descs[2], descs[3]);
+	static const uint32_t type_table[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	ptype0 = _mm_srli_epi64(ptype0, 30);
+	ptype1 = _mm_srli_epi64(ptype1, 30);
+
+	rx_pkts[0]->packet_type = type_table[_mm_extract_epi8(ptype0, 0)];
+	rx_pkts[1]->packet_type = type_table[_mm_extract_epi8(ptype0, 8)];
+	rx_pkts[2]->packet_type = type_table[_mm_extract_epi8(ptype1, 0)];
+	rx_pkts[3]->packet_type = type_table[_mm_extract_epi8(ptype1, 8)];
+}
+
+/* Notice:
+ * - if nb_pkts < AVF_VPMD_DESCS_PER_LOOP, no packets are returned
+ * - if nb_pkts > AVF_VPMD_RX_MAX_BURST, only AVF_VPMD_RX_MAX_BURST
+ *   DD bits are scanned
+ */
+static inline uint16_t
+_recv_raw_pkts_vec(struct avf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
+		   uint16_t nb_pkts, uint8_t *split_packet)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **sw_ring;
+	uint16_t nb_pkts_recd;
+	int pos;
+	uint64_t var;
+	__m128i shuf_msk;
+
+	__m128i crc_adjust = _mm_set_epi16(
+				0, 0, 0,    /* ignore non-length fields */
+				-rxq->crc_len, /* sub crc on data_len */
+				0,          /* ignore high-16bits of pkt_len */
+				-rxq->crc_len, /* sub crc on pkt_len */
+				0, 0            /* ignore pkt_type field */
+			);
+	/* compile-time check the above crc_adjust layout is correct.
+	 * NOTE: the first field (lowest address) is given last in set_epi16
+	 * call above.
+	 */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
+	__m128i dd_check, eop_check;
+
+	/* nb_pkts shall be less than or equal to AVF_VPMD_RX_MAX_BURST */
+	nb_pkts = RTE_MIN(nb_pkts, AVF_VPMD_RX_MAX_BURST);
+
+	/* nb_pkts has to be floor-aligned to AVF_VPMD_DESCS_PER_LOOP */
+	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, AVF_VPMD_DESCS_PER_LOOP);
+
+	/* Just the act of getting into the function from the application is
+	 * going to cost about 7 cycles
+	 */
+	rxdp = rxq->rx_ring + rxq->rx_tail;
+
+	rte_prefetch0(rxdp);
+
+	/* See if we need to rearm the RX queue - gives the prefetch a bit
+	 * of time to act
+	 */
+	if (rxq->rxrearm_nb > rxq->rx_free_thresh)
+		avf_rxq_rearm(rxq);
+
+	/* Before we start moving massive data around, check to see if
+	 * there is actually a packet available
+	 */
+	if (!(rxdp->wb.qword1.status_error_len &
+	      rte_cpu_to_le_32(1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+		return 0;
+
+	/* 4 packets DD mask */
+	dd_check = _mm_set_epi64x(0x0000000100000001LL, 0x0000000100000001LL);
+
+	/* 4 packets EOP mask */
+	eop_check = _mm_set_epi64x(0x0000000200000002LL, 0x0000000200000002LL);
+
+	/* mask to shuffle from desc. to mbuf */
+	shuf_msk = _mm_set_epi8(
+		7, 6, 5, 4,  /* octet 4~7, 32bits rss */
+		3, 2,        /* octet 2~3, low 16 bits vlan_macip */
+		15, 14,      /* octet 15~14, 16 bits data_len */
+		0xFF, 0xFF,  /* skip high 16 bits pkt_len, zero out */
+		15, 14,      /* octet 15~14, low 16 bits pkt_len */
+		0xFF, 0xFF, 0xFF, 0xFF /* pkt_type set as unknown */
+		);
+	/* Compile-time verify the shuffle mask
+	 * NOTE: some field positions already verified above, but duplicated
+	 * here for completeness in case of future modifications.
+	 */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 10);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12);
+
+	/* Cache is empty -> need to scan the buffer rings, but first move
+	 * the next 'n' mbufs into the cache
+	 */
+	sw_ring = &rxq->sw_ring[rxq->rx_tail];
+
+	/* A. load 4 packet in one loop
+	 * [A*. mask out 4 unused dirty field in desc]
+	 * B. copy 4 mbuf point from swring to rx_pkts
+	 * C. calc the number of DD bits among the 4 packets
+	 * [C*. extract the end-of-packet bit, if requested]
+	 * D. fill info. from desc to mbuf
+	 */
+
+	for (pos = 0, nb_pkts_recd = 0; pos < nb_pkts;
+	     pos += AVF_VPMD_DESCS_PER_LOOP,
+	     rxdp += AVF_VPMD_DESCS_PER_LOOP) {
+		__m128i descs[AVF_VPMD_DESCS_PER_LOOP];
+		__m128i pkt_mb1, pkt_mb2, pkt_mb3, pkt_mb4;
+		__m128i zero, staterr, sterr_tmp1, sterr_tmp2;
+		/* 2 64 bit or 4 32 bit mbuf pointers in one XMM reg. */
+		__m128i mbp1;
+#if defined(RTE_ARCH_X86_64)
+		__m128i mbp2;
+#endif
+
+		/* B.1 load 2 (64 bit) or 4 (32 bit) mbuf points */
+		mbp1 = _mm_loadu_si128((__m128i *)&sw_ring[pos]);
+		/* Read desc statuses backwards to avoid race condition */
+		/* A.1 load 4 pkts desc */
+		descs[3] = _mm_loadu_si128((__m128i *)(rxdp + 3));
+		rte_compiler_barrier();
+
+		/* B.2 copy 2 64 bit or 4 32 bit mbuf point into rx_pkts */
+		_mm_storeu_si128((__m128i *)&rx_pkts[pos], mbp1);
+
+#if defined(RTE_ARCH_X86_64)
+		/* B.1 load 2 64 bit mbuf points */
+		mbp2 = _mm_loadu_si128((__m128i *)&sw_ring[pos + 2]);
+#endif
+
+		descs[2] = _mm_loadu_si128((__m128i *)(rxdp + 2));
+		rte_compiler_barrier();
+		/* B.1 load 2 mbuf point */
+		descs[1] = _mm_loadu_si128((__m128i *)(rxdp + 1));
+		rte_compiler_barrier();
+		descs[0] = _mm_loadu_si128((__m128i *)(rxdp));
+
+#if defined(RTE_ARCH_X86_64)
+		/* B.2 copy 2 mbuf point into rx_pkts  */
+		_mm_storeu_si128((__m128i *)&rx_pkts[pos + 2], mbp2);
+#endif
+
+		if (split_packet) {
+			rte_mbuf_prefetch_part2(rx_pkts[pos]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 1]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 2]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 3]);
+		}
+
+		/* avoid compiler reorder optimization */
+		rte_compiler_barrier();
+
+		/* pkt 3,4 shift the pktlen field to be 16-bit aligned */
+		const __m128i len3 = _mm_slli_epi32(descs[3], PKTLEN_SHIFT);
+		const __m128i len2 = _mm_slli_epi32(descs[2], PKTLEN_SHIFT);
+
+		/* merge the now-aligned packet length fields back in */
+		descs[3] = _mm_blend_epi16(descs[3], len3, 0x80);
+		descs[2] = _mm_blend_epi16(descs[2], len2, 0x80);
+
+		/* D.1 pkt 3,4 convert format from desc to pktmbuf */
+		pkt_mb4 = _mm_shuffle_epi8(descs[3], shuf_msk);
+		pkt_mb3 = _mm_shuffle_epi8(descs[2], shuf_msk);
+
+		/* C.1 4=>2 status err info only */
+		sterr_tmp2 = _mm_unpackhi_epi32(descs[3], descs[2]);
+		sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
+
+		desc_to_olflags_v(rxq, descs, &rx_pkts[pos]);
+
+		/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
+		pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
+		pkt_mb3 = _mm_add_epi16(pkt_mb3, crc_adjust);
+
+		/* pkt 1,2 shift the pktlen field to be 16-bit aligned */
+		const __m128i len1 = _mm_slli_epi32(descs[1], PKTLEN_SHIFT);
+		const __m128i len0 = _mm_slli_epi32(descs[0], PKTLEN_SHIFT);
+
+		/* merge the now-aligned packet length fields back in */
+		descs[1] = _mm_blend_epi16(descs[1], len1, 0x80);
+		descs[0] = _mm_blend_epi16(descs[0], len0, 0x80);
+
+		/* D.1 pkt 1,2 convert format from desc to pktmbuf */
+		pkt_mb2 = _mm_shuffle_epi8(descs[1], shuf_msk);
+		pkt_mb1 = _mm_shuffle_epi8(descs[0], shuf_msk);
+
+		/* C.2 get 4 pkts status err value  */
+		zero = _mm_xor_si128(dd_check, dd_check);
+		staterr = _mm_unpacklo_epi32(sterr_tmp1, sterr_tmp2);
+
+		/* D.3 copy final 3,4 data to rx_pkts */
+		_mm_storeu_si128(
+			(void *)&rx_pkts[pos + 3]->rx_descriptor_fields1,
+			pkt_mb4);
+		_mm_storeu_si128(
+			(void *)&rx_pkts[pos + 2]->rx_descriptor_fields1,
+			pkt_mb3);
+
+		/* D.2 pkt 1,2 remove crc */
+		pkt_mb2 = _mm_add_epi16(pkt_mb2, crc_adjust);
+		pkt_mb1 = _mm_add_epi16(pkt_mb1, crc_adjust);
+
+		/* C* extract and record EOP bit */
+		if (split_packet) {
+			__m128i eop_shuf_mask = _mm_set_epi8(
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0x04, 0x0C, 0x00, 0x08
+					);
+
+			/* and with mask to extract bits, flipping 1-0 */
+			__m128i eop_bits = _mm_andnot_si128(staterr, eop_check);
+			/* the staterr values are not in order, as the count
+			 * of dd bits doesn't care. However, for end of
+			 * packet tracking, we do care, so shuffle. This also
+			 * compresses the 32-bit values to 8-bit
+			 */
+			eop_bits = _mm_shuffle_epi8(eop_bits, eop_shuf_mask);
+			/* store the resulting 32-bit value */
+			*(int *)split_packet = _mm_cvtsi128_si32(eop_bits);
+			split_packet += AVF_VPMD_DESCS_PER_LOOP;
+		}
+
+		/* C.3 calc available number of desc */
+		staterr = _mm_and_si128(staterr, dd_check);
+		staterr = _mm_packs_epi32(staterr, zero);
+
+		/* D.3 copy final 1,2 data to rx_pkts */
+		_mm_storeu_si128(
+			(void *)&rx_pkts[pos + 1]->rx_descriptor_fields1,
+			pkt_mb2);
+		_mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1,
+				 pkt_mb1);
+		desc_to_ptype_v(descs, &rx_pkts[pos]);
+		/* C.4 calc available number of desc */
+		var = __builtin_popcountll(_mm_cvtsi128_si64(staterr));
+		nb_pkts_recd += var;
+		if (likely(var != AVF_VPMD_DESCS_PER_LOOP))
+			break;
+	}
+
+	/* Update our internal tail pointer */
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_pkts_recd);
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail & (rxq->nb_rx_desc - 1));
+	rxq->rxrearm_nb = (uint16_t)(rxq->rxrearm_nb + nb_pkts_recd);
+
+	return nb_pkts_recd;
+}
+
+/* Notice:
+ * - if nb_pkts < AVF_VPMD_DESCS_PER_LOOP, no packets are returned
+ * - if nb_pkts > AVF_VPMD_RX_MAX_BURST, only AVF_VPMD_RX_MAX_BURST
+ *   DD bits are scanned
+ */
+uint16_t
+avf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+		  uint16_t nb_pkts)
+{
+	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
+}
+
+/* vPMD receive routine that reassembles scattered packets
+ * Notice:
+ * - if nb_pkts < AVF_VPMD_DESCS_PER_LOOP, no packets are returned
+ * - if nb_pkts > AVF_VPMD_RX_MAX_BURST, only AVF_VPMD_RX_MAX_BURST
+ *   DD bits are scanned
+ */
+ */
+uint16_t
+avf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			    uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	uint8_t split_flags[AVF_VPMD_RX_MAX_BURST] = {0};
+	unsigned int i = 0;
+
+	/* get some new buffers */
+	uint16_t nb_bufs = _recv_raw_pkts_vec(rxq, rx_pkts, nb_pkts,
+					      split_flags);
+	if (nb_bufs == 0)
+		return 0;
+
+	/* happy day case, full burst + no packets to be joined */
+	const uint64_t *split_fl64 = (uint64_t *)split_flags;
+
+	if (!rxq->pkt_first_seg &&
+	    split_fl64[0] == 0 && split_fl64[1] == 0 &&
+	    split_fl64[2] == 0 && split_fl64[3] == 0)
+		return nb_bufs;
+
+	/* reassemble any packets that need reassembly */
+	if (!rxq->pkt_first_seg) {
+		/* find the first split flag, and only reassemble from there */
+		while (i < nb_bufs && !split_flags[i])
+			i++;
+		if (i == nb_bufs)
+			return nb_bufs;
+	}
+	return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
+		&split_flags[i]);
+}
+
+static inline void
+vtx1(volatile struct avf_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)
+{
+	uint64_t high_qw =
+			(AVF_TX_DESC_DTYPE_DATA |
+			 ((uint64_t)flags  << AVF_TXD_QW1_CMD_SHIFT) |
+			 ((uint64_t)pkt->data_len <<
+			  AVF_TXD_QW1_TX_BUF_SZ_SHIFT));
+
+	__m128i descriptor = _mm_set_epi64x(high_qw,
+					    pkt->buf_iova + pkt->data_off);
+	_mm_store_si128((__m128i *)txdp, descriptor);
+}
+
+static inline void
+avf_vtx(volatile struct avf_tx_desc *txdp, struct rte_mbuf **pkt,
+	uint16_t nb_pkts,  uint64_t flags)
+{
+	int i;
+
+	for (i = 0; i < nb_pkts; ++i, ++txdp, ++pkt)
+		vtx1(txdp, *pkt, flags);
+}
+
+uint16_t
+avf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+			 uint16_t nb_pkts)
+{
+	struct avf_tx_queue *txq = (struct avf_tx_queue *)tx_queue;
+	volatile struct avf_tx_desc *txdp;
+	struct avf_tx_entry *txep;
+	uint16_t n, nb_commit, tx_id;
+	uint64_t flags = AVF_TX_DESC_CMD_EOP | 0x04;  /* bit 2 (ICRC) must be set */
+	uint64_t rs = AVF_TX_DESC_CMD_RS | flags;
+	int i;
+
+	/* crossing the rs_thresh boundary is not allowed */
+	nb_pkts = RTE_MIN(nb_pkts, txq->rs_thresh);
+
+	if (txq->nb_free < txq->free_thresh)
+		avf_tx_free_bufs(txq);
+
+	nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+	if (unlikely(nb_pkts == 0))
+		return 0;
+	nb_commit = nb_pkts;
+
+	tx_id = txq->tx_tail;
+	txdp = &txq->tx_ring[tx_id];
+	txep = &txq->sw_ring[tx_id];
+
+	txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+
+	n = (uint16_t)(txq->nb_tx_desc - tx_id);
+	if (nb_commit >= n) {
+		tx_backlog_entry(txep, tx_pkts, n);
+
+		for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
+			vtx1(txdp, *tx_pkts, flags);
+
+		vtx1(txdp, *tx_pkts++, rs);
+
+		nb_commit = (uint16_t)(nb_commit - n);
+
+		tx_id = 0;
+		txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+
+		/* avoid reaching the end of the ring */
+		txdp = &txq->tx_ring[tx_id];
+		txep = &txq->sw_ring[tx_id];
+	}
+
+	tx_backlog_entry(txep, tx_pkts, nb_commit);
+
+	avf_vtx(txdp, tx_pkts, nb_commit, flags);
+
+	tx_id = (uint16_t)(tx_id + nb_commit);
+	if (tx_id > txq->next_rs) {
+		txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)AVF_TX_DESC_CMD_RS) <<
+					 AVF_TXD_QW1_CMD_SHIFT);
+		txq->next_rs =
+			(uint16_t)(txq->next_rs + txq->rs_thresh);
+	}
+
+	txq->tx_tail = tx_id;
+
+	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_pkts=%u",
+		   txq->port_id, txq->queue_id, tx_id, nb_pkts);
+
+	AVF_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
+
+	return nb_pkts;
+}
+
+void __attribute__((cold))
+avf_rx_queue_release_mbufs_sse(struct avf_rx_queue *rxq)
+{
+	_avf_rx_queue_release_mbufs_vec(rxq);
+}
+
+static void __attribute__((cold))
+avf_tx_queue_release_mbufs_sse(struct avf_tx_queue *txq)
+{
+	_avf_tx_queue_release_mbufs_vec(txq);
+}
+
+static const struct avf_rxq_ops sse_vec_rxq_ops = {
+	.release_mbufs = avf_rx_queue_release_mbufs_sse,
+};
+
+static const struct avf_txq_ops sse_vec_txq_ops = {
+	.release_mbufs = avf_tx_queue_release_mbufs_sse,
+};
+
+int __attribute__((cold))
+avf_txq_vec_setup(struct avf_tx_queue *txq)
+{
+	txq->ops = &sse_vec_txq_ops;
+	return 0;
+}
+
+int __attribute__((cold))
+avf_rxq_vec_setup(struct avf_rx_queue *rxq)
+{
+	rxq->ops = &sse_vec_rxq_ops;
+	return avf_rxq_vec_setup_default(rxq);
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
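
For reference, the "happy day" test in avf_recv_scattered_pkts_vec() above reads
the 32-byte split_flags array as four 64-bit words. A scalar sketch of the same
check (illustrative only; the helper name is hypothetical and the 32-entry size
follows the AVF_VPMD_RX_MAX_BURST sized array used in this patch):

#include <stdint.h>

/* Scalar equivalent of the vector path's split_flags test: any non-zero
 * byte means at least one of the 32 scanned descriptors belongs to a
 * multi-segment packet and reassembly is required.
 */
static inline int
avf_any_split_flag_set(const uint8_t split_flags[32])
{
	const uint64_t *w = (const void *)split_flags;

	return (w[0] | w[1] | w[2] | w[3]) != 0;
}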

* [dpdk-dev] [PATCH v4 13/15] net/avf: enable bulk allocate Rx func
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
                       ` (11 preceding siblings ...)
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 12/15] net/avf: enable sse vector Rx Tx func Wenzhuo Lu
@ 2018-01-05  8:21     ` Wenzhuo Lu
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 14/15] net/avf: enable Rx interrupt support Wenzhuo Lu
                       ` (2 subsequent siblings)
  15 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-05  8:21 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/avf/avf.h        |   1 +
 drivers/net/avf/avf_ethdev.c |   1 +
 drivers/net/avf/avf_rxtx.c   | 300 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h   |   6 +
 4 files changed, 308 insertions(+)

diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index b79bc5a..ea0f7d8 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -120,6 +120,7 @@ struct avf_adapter {
 	struct rte_eth_dev *eth_dev;
 	struct avf_info vf;
 
+	bool rx_bulk_alloc_allowed;
 	/* For vector PMD */
 	bool rx_vec_allowed;
 	bool tx_vec_allowed;
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 127fdb5..d9f7cea 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -121,6 +121,7 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(ad);
 	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
 
+	ad->rx_bulk_alloc_allowed = true;
 #ifdef RTE_LIBRTE_AVF_INC_VECTOR
 	/* Initialize to TRUE. If any of Rx queues doesn't meet the
 	 * vector Rx/Tx preconditions, it will be reset.
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index b542532..e0c4583 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -120,6 +120,27 @@
 }
 #endif
 
+static inline bool
+check_rx_bulk_allow(struct avf_rx_queue *rxq)
+{
+	int ret = TRUE;
+
+	if (!(rxq->rx_free_thresh >= AVF_RX_MAX_BURST)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "AVF_RX_MAX_BURST=%d",
+			     rxq->rx_free_thresh, AVF_RX_MAX_BURST);
+		ret = FALSE;
+	} else if (rxq->nb_rx_desc % rxq->rx_free_thresh != 0) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->nb_rx_desc=%d, "
+			     "rxq->rx_free_thresh=%d",
+			     rxq->nb_rx_desc, rxq->rx_free_thresh);
+		ret = FALSE;
+	}
+	return ret;
+}
+
 static inline void
 reset_rx_queue(struct avf_rx_queue *rxq)
 {
@@ -138,6 +159,11 @@
 	for (i = 0; i < AVF_RX_MAX_BURST; i++)
 		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
 
+	/* for rx bulk */
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
 	rxq->rx_tail = 0;
 	rxq->nb_rx_hold = 0;
 	rxq->pkt_first_seg = NULL;
@@ -233,6 +259,17 @@
 			rxq->sw_ring[i] = NULL;
 		}
 	}
+
+	/* for rx bulk */
+	if (rxq->rx_nb_avail == 0)
+		return;
+	for (i = 0; i < rxq->rx_nb_avail; i++) {
+		struct rte_mbuf *mbuf;
+
+		mbuf = rxq->rx_stage[rxq->rx_next_avail + i];
+		rte_pktmbuf_free_seg(mbuf);
+	}
+	rxq->rx_nb_avail = 0;
 }
 
 static inline void
@@ -363,6 +400,19 @@
 	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
 	rxq->ops = &def_rxq_ops;
 
+	if (check_rx_bulk_allow(rxq) == TRUE) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function will be "
+			     "used on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+	} else {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "not satisfied, the bulk Rx path will not be "
+			     "used on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+		ad->rx_bulk_alloc_allowed = false;
+	}
+
 #ifdef RTE_LIBRTE_AVF_INC_VECTOR
 	if (check_rx_vec_allow(rxq) == FALSE)
 		ad->rx_vec_allowed = false;
@@ -1036,6 +1086,252 @@
 	return nb_rx;
 }
 
+#define AVF_LOOK_AHEAD 8
+static inline int
+avf_rx_scan_hw_ring(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxep;
+	struct rte_mbuf *mb;
+	uint16_t pkt_len;
+	uint64_t qword1;
+	uint32_t rx_status;
+	int32_t s[AVF_LOOK_AHEAD], nb_dd;
+	int32_t i, j, nb_rx = 0;
+	uint64_t pkt_flags;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	rxep = &rxq->sw_ring[rxq->rx_tail];
+
+	qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+	rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+		    AVF_RXD_QW1_STATUS_SHIFT;
+
+	/* Make sure there is at least 1 packet to receive */
+	if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+		return 0;
+
+	/* Scan LOOK_AHEAD descriptors at a time to determine which
+	 * descriptors reference packets that are ready to be received.
+	 */
+	for (i = 0; i < AVF_RX_MAX_BURST; i += AVF_LOOK_AHEAD,
+	     rxdp += AVF_LOOK_AHEAD, rxep += AVF_LOOK_AHEAD) {
+		/* Read desc statuses backwards to avoid race condition */
+		for (j = AVF_LOOK_AHEAD - 1; j >= 0; j--) {
+			qword1 = rte_le_to_cpu_64(
+				rxdp[j].wb.qword1.status_error_len);
+			s[j] = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			       AVF_RXD_QW1_STATUS_SHIFT;
+		}
+
+		rte_smp_rmb();
+
+		/* Compute how many status bits were set */
+		for (j = 0, nb_dd = 0; j < AVF_LOOK_AHEAD; j++)
+			nb_dd += s[j] & (1 << AVF_RX_DESC_STATUS_DD_SHIFT);
+
+		nb_rx += nb_dd;
+
+		/* Translate descriptor info to mbuf parameters */
+		for (j = 0; j < nb_dd; j++) {
+			AVF_DUMP_RX_DESC(rxq, &rxdp[j],
+					 rxq->rx_tail + i * AVF_LOOK_AHEAD + j);
+
+			mb = rxep[j];
+			qword1 = rte_le_to_cpu_64
+					(rxdp[j].wb.qword1.status_error_len);
+			pkt_len = ((qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				  AVF_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
+			mb->data_len = pkt_len;
+			mb->pkt_len = pkt_len;
+			mb->ol_flags = 0;
+			avf_rxd_to_vlan_tci(mb, &rxdp[j]);
+			pkt_flags = avf_rxd_to_pkt_flags(qword1);
+			mb->packet_type =
+				ptype_tbl[(uint8_t)((qword1 &
+				AVF_RXD_QW1_PTYPE_MASK) >>
+				AVF_RXD_QW1_PTYPE_SHIFT)];
+
+			if (pkt_flags & PKT_RX_RSS_HASH)
+				mb->hash.rss = rte_le_to_cpu_32(
+					rxdp[j].wb.qword0.hi_dword.rss);
+
+			mb->ol_flags |= pkt_flags;
+		}
+
+		for (j = 0; j < AVF_LOOK_AHEAD; j++)
+			rxq->rx_stage[i + j] = rxep[j];
+
+		if (nb_dd != AVF_LOOK_AHEAD)
+			break;
+	}
+
+	/* Clear software ring entries */
+	for (i = 0; i < nb_rx; i++)
+		rxq->sw_ring[rxq->rx_tail + i] = NULL;
+
+	return nb_rx;
+}
+
+static inline uint16_t
+avf_rx_fill_from_stage(struct avf_rx_queue *rxq,
+		       struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts)
+{
+	uint16_t i;
+	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
+
+	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
+
+	for (i = 0; i < nb_pkts; i++)
+		rx_pkts[i] = stage[i];
+
+	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
+	rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+
+	return nb_pkts;
+}
+
+static inline int
+avf_rx_alloc_bufs(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxep;
+	struct rte_mbuf *mb;
+	uint16_t alloc_idx, i;
+	uint64_t dma_addr;
+	int diag;
+
+	/* Allocate buffers in bulk */
+	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
+				(rxq->rx_free_thresh - 1));
+	rxep = &rxq->sw_ring[alloc_idx];
+	diag = rte_mempool_get_bulk(rxq->mp, (void *)rxep,
+				    rxq->rx_free_thresh);
+	if (unlikely(diag != 0)) {
+		PMD_RX_LOG(ERR, "Failed to get mbufs in bulk");
+		return -ENOMEM;
+	}
+
+	rxdp = &rxq->rx_ring[alloc_idx];
+	for (i = 0; i < rxq->rx_free_thresh; i++) {
+		if (likely(i < (rxq->rx_free_thresh - 1)))
+			/* Prefetch next mbuf */
+			rte_prefetch0(rxep[i + 1]);
+
+		mb = rxep[i];
+		rte_mbuf_refcnt_set(mb, 1);
+		mb->next = NULL;
+		mb->data_off = RTE_PKTMBUF_HEADROOM;
+		mb->nb_segs = 1;
+		mb->port = rxq->port_id;
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
+		rxdp[i].read.hdr_addr = 0;
+		rxdp[i].read.pkt_addr = dma_addr;
+	}
+
+	/* Update rx tail register */
+	rte_wmb();
+	AVF_PCI_REG_WRITE_RELAXED(rxq->qrx_tail, rxq->rx_free_trigger);
+
+	rxq->rx_free_trigger =
+		(uint16_t)(rxq->rx_free_trigger + rxq->rx_free_thresh);
+	if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
+		rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	return 0;
+}
+
+static inline uint16_t
+rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = (struct avf_rx_queue *)rx_queue;
+	struct rte_eth_dev *dev;
+	uint16_t nb_rx = 0;
+
+	if (!nb_pkts)
+		return 0;
+
+	if (rxq->rx_nb_avail)
+		return avf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	nb_rx = (uint16_t)avf_rx_scan_hw_ring(rxq);
+	rxq->rx_next_avail = 0;
+	rxq->rx_nb_avail = nb_rx;
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+
+	if (rxq->rx_tail > rxq->rx_free_trigger) {
+		if (avf_rx_alloc_bufs(rxq) != 0) {
+			uint16_t i, j;
+
+			/* TODO: count rx_mbuf_alloc_failed here */
+
+			rxq->rx_nb_avail = 0;
+			rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
+			for (i = 0, j = rxq->rx_tail; i < nb_rx; i++, j++)
+				rxq->sw_ring[j] = rxq->rx_stage[i];
+
+			return 0;
+		}
+	}
+
+	if (rxq->rx_tail >= rxq->nb_rx_desc)
+		rxq->rx_tail = 0;
+
+	PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u, nb_rx=%u",
+		   rxq->port_id, rxq->queue_id,
+		   rxq->rx_tail, nb_rx);
+
+	if (rxq->rx_nb_avail)
+		return avf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	return 0;
+}
+
+static uint16_t
+avf_recv_pkts_bulk_alloc(void *rx_queue,
+			 struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	uint16_t nb_rx = 0, n, count;
+
+	if (unlikely(nb_pkts == 0))
+		return 0;
+
+	if (likely(nb_pkts <= AVF_RX_MAX_BURST))
+		return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		n = RTE_MIN(nb_pkts, AVF_RX_MAX_BURST);
+		count = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+		nb_rx = (uint16_t)(nb_rx + count);
+		nb_pkts = (uint16_t)(nb_pkts - count);
+		if (count < n)
+			break;
+	}
+
+	return nb_rx;
+}
+
 static inline int
 avf_xmit_cleanup(struct avf_tx_queue *txq)
 {
@@ -1467,6 +1763,10 @@
 		PMD_DRV_LOG(DEBUG, "Using a Scattered Rx callback (port=%d).",
 			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_scattered_pkts;
+	} else if (adapter->rx_bulk_alloc_allowed) {
+		PMD_DRV_LOG(DEBUG, "Using bulk Rx callback (port=%d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = avf_recv_pkts_bulk_alloc;
 	} else {
 		PMD_DRV_LOG(DEBUG, "Using Basic Rx callback (port=%d).",
 			    dev->data->port_id);
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index 82fd801..d1701cd 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -83,6 +83,12 @@ struct avf_rx_queue {
 	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
 	uint64_t mbuf_initializer; /* value to init mbufs */
 
+	/* for rx bulk */
+	uint16_t rx_nb_avail;      /* number of staged packets ready */
+	uint16_t rx_next_avail;    /* index of next staged packets */
+	uint16_t rx_free_trigger;  /* triggers rx buffer allocation */
+	struct rte_mbuf *rx_stage[AVF_RX_MAX_BURST * 2]; /* store mbuf */
+
 	uint16_t port_id;        /* device port ID */
 	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
 	uint16_t queue_id;      /* Rx queue index */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
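
For context, check_rx_bulk_allow() above only enables the bulk-allocation path
when rx_free_thresh is at least AVF_RX_MAX_BURST and evenly divides nb_rx_desc.
A minimal application-side sketch that satisfies those preconditions at queue
setup time (port, queue, descriptor count and threshold values are placeholders,
assuming the usual AVF_RX_MAX_BURST value of 32):

#include <rte_ethdev.h>

/* Set up one Rx queue so the bulk Rx burst function can be selected:
 * 512 descriptors with rx_free_thresh = 32, i.e. 32 >= AVF_RX_MAX_BURST
 * and 512 % 32 == 0.
 */
static int
setup_bulk_rx_queue(uint16_t port_id, uint16_t queue_id,
		    struct rte_mempool *mb_pool)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxconf;

	rte_eth_dev_info_get(port_id, &dev_info);
	rxconf = dev_info.default_rxconf;
	rxconf.rx_free_thresh = 32;

	return rte_eth_rx_queue_setup(port_id, queue_id, 512,
				      rte_eth_dev_socket_id(port_id),
				      &rxconf, mb_pool);
}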

* [dpdk-dev] [PATCH v4 14/15] net/avf: enable Rx interrupt support
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
                       ` (12 preceding siblings ...)
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 13/15] net/avf: enable bulk allocate Rx func Wenzhuo Lu
@ 2018-01-05  8:21     ` Wenzhuo Lu
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 15/15] doc: update doc for avf driver Wenzhuo Lu
  2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
  15 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-05  8:21 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini     |   1 +
 doc/guides/nics/features/avf_vec.ini |   1 +
 drivers/net/avf/avf_ethdev.c         | 204 ++++++++++++++++++++++++++++-------
 3 files changed, 170 insertions(+), 36 deletions(-)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index da4d81b..ccb9edd 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
diff --git a/doc/guides/nics/features/avf_vec.ini b/doc/guides/nics/features/avf_vec.ini
index 45dd5e5..8924994 100644
--- a/doc/guides/nics/features/avf_vec.ini
+++ b/doc/guides/nics/features/avf_vec.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index d9f7cea..13f6329 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -67,9 +67,14 @@ static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 static int avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
+static int avf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev,
+					uint16_t queue_id);
+static int avf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev,
+					 uint16_t queue_id);
 
 int avf_logtype_init;
 int avf_logtype_driver;
+
 static const struct rte_pci_id pci_id_avf_map[] = {
 	{ RTE_PCI_DEVICE(AVF_INTEL_VENDOR_ID, AVF_DEV_ID_ADAPTIVE_VF) },
 	{ .vendor_id = 0, /* sentinel */ },
@@ -111,6 +116,8 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.rx_descriptor_status       = avf_dev_rx_desc_status,
 	.tx_descriptor_status       = avf_dev_tx_desc_status,
 	.mtu_set                    = avf_dev_mtu_set,
+	.rx_queue_intr_enable       = avf_dev_rx_queue_intr_enable,
+	.rx_queue_intr_disable      = avf_dev_rx_queue_intr_disable,
 };
 
 static int
@@ -275,6 +282,99 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static int avf_config_rx_queues_irqs(struct rte_eth_dev *dev,
+				     struct rte_intr_handle *intr_handle)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	uint16_t interval, i;
+	int vec;
+
+	if (dev->data->dev_conf.intr_conf.rxq != 0) {
+		if (rte_intr_efd_enable(intr_handle, dev->data->nb_rx_queues))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+			rte_zmalloc("intr_vec",
+				    dev->data->nb_rx_queues * sizeof(int), 0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
+				    dev->data->nb_rx_queues);
+			return -1;
+		}
+	}
+
+	if (!dev->data->dev_conf.intr_conf.rxq) {
+		/* Rx interrupt disabled, map interrupt only for writeback */
+		vf->nb_msix = 1;
+		if (vf->vf_res->vf_cap_flags &
+		    VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+			/* If WB_ON_ITR is supported, enable it */
+			vf->msix_base = AVF_RX_VEC_START;
+			AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
+				      AVFINT_DYN_CTLN1_ITR_INDX_MASK |
+				      AVFINT_DYN_CTLN1_WB_ON_ITR_MASK);
+		} else {
+			/* Without the WB_ON_ITR offload flag, an interrupt
+			 * is needed for descriptor write back.
+			 */
+			vf->msix_base = AVF_MISC_VEC_ID;
+
+			/* set ITR to max */
+			interval = avf_calc_itr_interval(
+					AVF_QUEUE_ITR_INTERVAL_MAX);
+			AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+				      AVFINT_DYN_CTL01_INTENA_MASK |
+				      (AVF_ITR_INDEX_DEFAULT <<
+				       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+				      (interval <<
+				       AVFINT_DYN_CTL01_INTERVAL_SHIFT));
+		}
+		AVF_WRITE_FLUSH(hw);
+		/* map all queues to the same interrupt */
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			vf->rxq_map[0] |= 1 << i;
+	} else {
+		if (!rte_intr_allow_others(intr_handle)) {
+			vf->nb_msix = 1;
+			vf->msix_base = AVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				vf->rxq_map[0] |= 1 << i;
+				intr_handle->intr_vec[i] = AVF_MISC_VEC_ID;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "vector 0 is mapped to all Rx queues");
+		} else {
+			/* If Rx interrupt is required, and we can use
+			 * multiple interrupts, then the vectors start from 1
+			 */
+			vf->nb_msix = RTE_MIN(vf->vf_res->max_vectors,
+					      intr_handle->nb_efd);
+			vf->msix_base = AVF_RX_VEC_START;
+			vec = AVF_RX_VEC_START;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				vf->rxq_map[vec] |= 1 << i;
+				intr_handle->intr_vec[i] = vec++;
+				if (vec >= vf->nb_msix)
+					vec = AVF_RX_VEC_START;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "%u vectors are mapped to %u Rx queues",
+				    vf->nb_msix, dev->data->nb_rx_queues);
+		}
+	}
+
+	if (avf_config_irq_map(adapter)) {
+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+		return -1;
+	}
+	return 0;
+}
+
 static int
 avf_start_queues(struct rte_eth_dev *dev)
 {
@@ -314,8 +414,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = dev->intr_handle;
-	uint16_t interval;
-	int i;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -325,8 +423,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
 				      dev->data->nb_tx_queues);
 
-	/* TODO: Rx interrupt */
-
 	if (avf_init_queues(dev) != 0) {
 		PMD_DRV_LOG(ERR, "failed to do Queue init");
 		return -1;
@@ -344,36 +440,15 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 		goto err_queue;
 	}
 
-	/* Map interrupt for writeback */
-	vf->nb_msix = 1;
-	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
-		/* If WB_ON_ITR supports, enable it */
-		vf->msix_base = AVF_RX_VEC_START;
-		AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
-			      AVFINT_DYN_CTLN1_ITR_INDX_MASK |
-			      AVFINT_DYN_CTLN1_WB_ON_ITR_MASK);
-	} else {
-		/* If no WB_ON_ITR offload flags, need to set interrupt for
-		 * descriptor write back.
-		 */
-		vf->msix_base = AVF_MISC_VEC_ID;
-
-		/* set ITR to max */
-		interval = avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
-		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
-			      AVFINT_DYN_CTL01_INTENA_MASK |
-			      (AVF_ITR_INDEX_DEFAULT <<
-			       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
-			      (interval << AVFINT_DYN_CTL01_INTERVAL_SHIFT));
-	}
-	AVF_WRITE_FLUSH(hw);
-	/* map all queues to the same interrupt */
-	for (i = 0; i < dev->data->nb_rx_queues; i++)
-		vf->rxq_map[0] |= 1 << i;
-	if (avf_config_irq_map(adapter)) {
-		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+	if (avf_config_rx_queues_irqs(dev, intr_handle) != 0) {
+		PMD_DRV_LOG(ERR, "configure irq failed");
 		goto err_queue;
 	}
+	/* Re-enable interrupt, because the efd assignment may have changed */
+	if (dev->data->dev_conf.intr_conf.rxq != 0) {
+		rte_intr_disable(intr_handle);
+		rte_intr_enable(intr_handle);
+	}
 
 	/* Set all mac addrs */
 	avf_add_del_all_mac_addr(adapter, TRUE);
@@ -383,7 +458,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 		goto err_mac;
 	}
 
-	/* TODO: enable interrupt for RX interrupt */
 	return 0;
 
 err_mac:
@@ -399,6 +473,8 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_adapter *adapter =
 		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
 	int ret, i;
 
 	PMD_INIT_FUNC_TRACE();
@@ -408,9 +484,13 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 
 	avf_stop_queues(dev);
 
-	/*TODO: Disable the interrupt for Rx*/
-
-	/* TODO: Rx interrupt vector mapping free */
+	/* Disable the interrupt for Rx */
+	rte_intr_efd_disable(intr_handle);
+	/* Rx interrupt vector mapping free */
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
 
 	/* remove all mac addrs */
 	avf_add_del_all_mac_addr(adapter, FALSE);
@@ -913,6 +993,58 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 }
 
 static int
+avf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	uint16_t msix_intr;
+
+	msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+	if (msix_intr == AVF_MISC_VEC_ID) {
+		PMD_DRV_LOG(INFO, "MISC is also enabled for control");
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+			      AVFINT_DYN_CTL01_INTENA_MASK |
+			      AVFINT_DYN_CTL01_ITR_INDX_MASK);
+	} else {
+		AVF_WRITE_REG(hw,
+			      AVFINT_DYN_CTLN1(msix_intr - AVF_RX_VEC_START),
+			      AVFINT_DYN_CTLN1_INTENA_MASK |
+			      AVFINT_DYN_CTLN1_ITR_INDX_MASK);
+	}
+
+	AVF_WRITE_FLUSH(hw);
+
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	return 0;
+}
+
+static int
+avf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t msix_intr;
+
+	msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+	if (msix_intr == AVF_MISC_VEC_ID) {
+		PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it");
+		return -EIO;
+	}
+
+	AVF_WRITE_REG(hw,
+		      AVFINT_DYN_CTLN1(msix_intr - AVF_RX_VEC_START),
+		      0);
+
+	AVF_WRITE_FLUSH(hw);
+	return 0;
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
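
For context, the mapping above ties Rx queues to MSI-X vectors inside the PMD;
on the application side the standard ethdev Rx-interrupt flow (sketched below
with placeholder port/queue IDs, using the generic ethdev API rather than
anything AVF-specific) is to set dev_conf.intr_conf.rxq = 1 before configuring
the port, register the queue with an epoll instance, and toggle the interrupt
around each wait:

#include <rte_ethdev.h>
#include <rte_interrupts.h>

/* Block until the given Rx queue signals an interrupt. */
static void
wait_for_rx(uint16_t port_id, uint16_t queue_id)
{
	struct rte_epoll_event event;

	/* Register the queue's interrupt with this thread's epoll instance. */
	rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
				  RTE_INTR_EVENT_ADD, NULL);

	rte_eth_dev_rx_intr_enable(port_id, queue_id);
	rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, -1);
	rte_eth_dev_rx_intr_disable(port_id, queue_id);
}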

* [dpdk-dev] [PATCH v4 15/15] doc: update doc for avf driver
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
                       ` (13 preceding siblings ...)
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 14/15] net/avf: enable Rx interrupt support Wenzhuo Lu
@ 2018-01-05  8:21     ` Wenzhuo Lu
  2018-01-07 15:09       ` Zhang, Helin
  2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
  15 siblings, 1 reply; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-05  8:21 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/intel_vf.rst           | 16 ++++++++++++++--
 doc/guides/rel_notes/release_18_02.rst | 16 ++++++++++++++++
 2 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 1e83bf6..3adb684 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -28,8 +28,8 @@
     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-I40E/IXGBE/IGB Virtual Function Driver
-======================================
+Intel Virtual Function Driver
+=============================
 
 Supported Intel® Ethernet Controllers (see the *DPDK Release Notes* for details)
 support the following modes of operation in a virtualized environment:
@@ -93,6 +93,18 @@ and the Physical Function operates on the global resources on behalf of the Virt
 For this out-of-band communication, an SR-IOV enabled NIC provides a memory buffer for each Virtual Function,
 which is called a "Mailbox".
 
+Intel® Ethernet Adaptive Virtual Function
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Adaptive Virtual Function (AVF) is an SR-IOV Virtual Function with the same device ID (8086:1889) across different Intel Ethernet Controllers.
+The AVF driver is a VF driver which supports all future Intel devices without requiring a VM update. Since it is an adaptive VF driver,
+every new drop of the VF driver can turn on more and more advanced features in the VM if the underlying HW device supports those
+advanced features, in a device agnostic way and without ever compromising the base functionality. AVF provides a generic hardware interface, and the
+interface between an AVF driver and a compliant PF driver is specified.
+
+Intel products starting with the Ethernet Controller 710 Series support the Adaptive Virtual Function.
+
+Virtual Functions are generated in the usual way, and the resources assigned to a VF depend on the NIC infrastructure.
+
 The PCIE host-interface of Intel Ethernet Switch FM10000 Series VF infrastructure
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/doc/guides/rel_notes/release_18_02.rst b/doc/guides/rel_notes/release_18_02.rst
index 24b67bb..0672b0e 100644
--- a/doc/guides/rel_notes/release_18_02.rst
+++ b/doc/guides/rel_notes/release_18_02.rst
@@ -41,6 +41,22 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+   * **Add AVF (Adaptive Virtual Function) net PMD.**
+
+     A new net PMD has been added, which supports Intel® Ethernet Adaptive
+     Virtual Function (AVF) with the feature list below:
+
+     * Basic Rx/Tx burst
+     * SSE vectorized Rx/Tx burst
+     * Promiscuous mode
+     * MAC/VLAN offload
+     * Checksum offload
+     * TSO offload
+     * Jumbo frame and MTU setting
+     * RSS configuration
+     * Statistics
+     * Rx/Tx descriptor status
+     * Link status update/event
 
 API Changes
 -----------
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
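
The device ID called out above (8086:1889) is the single ID the PMD matches on;
a minimal sketch of the corresponding PCI ID table (the same shape as the
pci_id_avf_map table added by the initialization patch in this series, with the
numeric values taken from the documentation text) is:

#include <rte_bus_pci.h>

#define AVF_INTEL_VENDOR_ID	0x8086
#define AVF_DEV_ID_ADAPTIVE_VF	0x1889

/* One adaptive device ID covers every controller that exposes an AVF. */
static const struct rte_pci_id pci_id_avf_map[] = {
	{ RTE_PCI_DEVICE(AVF_INTEL_VENDOR_ID, AVF_DEV_ID_ADAPTIVE_VF) },
	{ .vendor_id = 0, /* sentinel */ },
};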

* Re: [dpdk-dev] [PATCH v4 01/15] net/avf/base: add base code for avf PMD
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 01/15] net/avf/base: add base code for avf PMD Wenzhuo Lu
@ 2018-01-05 20:25       ` Stephen Hemminger
  2018-01-08  1:06         ` Lu, Wenzhuo
  0 siblings, 1 reply; 151+ messages in thread
From: Stephen Hemminger @ 2018-01-05 20:25 UTC (permalink / raw)
  To: Wenzhuo Lu; +Cc: dev, Jingjing Wu

> diff --git a/drivers/net/avf/base/avf_adminq.c b/drivers/net/avf/base/avf_adminq.c
> new file mode 100644
> index 0000000..616e2a9
> --- /dev/null
> +++ b/drivers/net/avf/base/avf_adminq.c
> @@ -0,0 +1,1010 @@
> +/*******************************************************************************
> +
> +Copyright (c) 2013 - 2015, Intel Corporation
> +All rights reserved.

SPDX instead of more boilerplate.
Copyright 2018?

> +STATIC void avf_adminq_init_regs(struct avf_hw *hw)

Why is there a STATIC macro??

...

> +/**
> + *  avf_config_asq_regs - configure ASQ registers
> + *  @hw: pointer to the hardware structure
> + *
> + *  Configure base address and length registers for the transmit queue
> + **/
> +STATIC enum avf_status_code avf_config_asq_regs(struct avf_hw *hw)
> +{
> +	enum avf_status_code ret_code = AVF_SUCCESS;
> +	u32 reg = 0;
> +
> +	/* Clear Head and Tail */
> +	wr32(hw, hw->aq.asq.head, 0);
> +	wr32(hw, hw->aq.asq.tail, 0);
> +
> +	/* set starting point */
> +#ifdef INTEGRATED_VF
> +	if (avf_is_vf(hw))
> +		wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
> +					  AVF_ATQLEN1_ATQENABLE_MASK));
> +#else
> +	wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
> +				  AVF_ATQLEN1_ATQENABLE_MASK));
> +#endif /* INTEGRATED_VF */

No ifdef please; do it in the header file if you have to.
as in:
#ifdef INTEGRATED_VF
#define avf_is_vf(hw)	(1)

...


> +/* internal (0x00XX) commands */
> +
> +/* Get version (direct 0x0001) */
> +struct avf_aqc_get_version {
> +	__le32 rom_ver;
> +	__le32 fw_build;
> +	__le16 fw_major;
> +	__le16 fw_minor;
> +	__le16 api_major;
> +	__le16 api_minor;
> +};

The use of __le16 and __le32 is a Linux kernel code
style, typically not used in DPDK userland.

Are you trying to share code here?

...

> +/**
> + * virtchnl_vc_validate_vf_msg
> + * @ver: Virtchnl version info
> + * @v_opcode: Opcode for the message
> + * @msg: pointer to the msg buffer
> + * @msglen: msg length
> + *
> + * validate msg format against struct for each opcode
> + */
> +static inline int
> +virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
> +			    u8 *msg, u16 msglen)
> +{
>

This function is way too big to be inline.

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v4 02/15] net/avf: initialization of avf PMD
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 02/15] net/avf: initialization of " Wenzhuo Lu
@ 2018-01-05 20:29       ` Stephen Hemminger
  2018-01-08  1:56         ` Lu, Wenzhuo
  0 siblings, 1 reply; 151+ messages in thread
From: Stephen Hemminger @ 2018-01-05 20:29 UTC (permalink / raw)
  To: Wenzhuo Lu; +Cc: dev, Jingjing Wu

On Fri,  5 Jan 2018 16:21:32 +0800
Wenzhuo Lu <wenzhuo.lu@intel.com> wrote:

> From: Jingjing Wu <jingjing.wu@intel.com>
> 
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---
>  config/common_base                      |   5 +
>  drivers/net/Makefile                    |   1 +
>  drivers/net/avf/Makefile                |  31 +++
>  drivers/net/avf/avf.h                   | 187 ++++++++++++++
>  drivers/net/avf/avf_ethdev.c            | 435 ++++++++++++++++++++++++++++++++
>  drivers/net/avf/avf_vchnl.c             | 304 ++++++++++++++++++++++
>  drivers/net/avf/rte_pmd_avf_version.map |   4 +
>  mk/rte.app.mk                           |   1 +
>  8 files changed, 968 insertions(+)
>  create mode 100644 drivers/net/avf/Makefile
>  create mode 100644 drivers/net/avf/avf.h
>  create mode 100644 drivers/net/avf/avf_ethdev.c
>  create mode 100644 drivers/net/avf/avf_vchnl.c
>  create mode 100644 drivers/net/avf/rte_pmd_avf_version.map
> 
> diff --git a/config/common_base b/config/common_base
> index e74febe..ce4d9bb 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -226,6 +226,11 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
>  CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
>  
>  #
> +# Compile burst-oriented AVF PMD driver
> +#
> +CONFIG_RTE_LIBRTE_AVF_PMD=n

Why is the default not 'y' ?
The default is 'y' for IXGBE, EM and I40E already.

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v4 15/15] doc: update doc for avf driver
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 15/15] doc: update doc for avf driver Wenzhuo Lu
@ 2018-01-07 15:09       ` Zhang, Helin
  2018-01-08  2:02         ` Lu, Wenzhuo
  0 siblings, 1 reply; 151+ messages in thread
From: Zhang, Helin @ 2018-01-07 15:09 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Wu, Jingjing

Is there any public spec? If yes, I'd suggest adding a link to it in the doc for reference.

/Helin

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wenzhuo Lu
> Sent: Friday, January 5, 2018 4:22 PM
> To: dev@dpdk.org
> Cc: Wu, Jingjing
> Subject: [dpdk-dev] [PATCH v4 15/15] doc: update doc for avf driver
> 
> From: Jingjing Wu <jingjing.wu@intel.com>
> 
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---
>  doc/guides/nics/intel_vf.rst           | 16 ++++++++++++++--
>  doc/guides/rel_notes/release_18_02.rst | 16 ++++++++++++++++
>  2 files changed, 30 insertions(+), 2 deletions(-)
> 
> diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst index
> 1e83bf6..3adb684 100644
> --- a/doc/guides/nics/intel_vf.rst
> +++ b/doc/guides/nics/intel_vf.rst
> @@ -28,8 +28,8 @@
>      (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
> THE USE
>      OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> DAMAGE.
> 
> -I40E/IXGBE/IGB Virtual Function Driver
> -======================================
> +Intel Virtual Function Driver
> +=============================
> 
>  Supported Intel® Ethernet Controllers (see the *DPDK Release Notes* for
> details)  support the following modes of operation in a virtualized
> environment:
> @@ -93,6 +93,18 @@ and the Physical Function operates on the global
> resources on behalf of the Virt  For this out-of-band communication, an SR-
> IOV enabled NIC provides a memory buffer for each Virtual Function,  which
> is called a "Mailbox".
> 
> +Intel® Ethernet Adaptive Virtual Function
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +Adaptive Virtual Function (AVF) is a SR-IOV Virtual Function with the same
> device id (8086:1889) on different Intel Ethernet Controller.
> +AVF Driver is VF driver which supports for all future Intel devices
> +without requiring a VM update. And since this happens to be an adaptive
> +VF driver, every new drop of the VF driver would add more and more
> +advanced features that can be turned on in the VM if the underlying HW
> device supports those advanced features based on a device agnostic way
> without ever compromising on the base functionality. AVF provides generic
> hardware interface and interface between AVF driver and a compliant PF
> driver is specified.
> +
> +Intel products starting Ethernet Controller 710 Series to support Adaptive
> Virtual Function.
> +
> +The way to generate Virtual Function is like normal, and the resource of VF
> assignment depends on the NIC Infrastructure.
> +
>  The PCIE host-interface of Intel Ethernet Switch FM10000 Series VF
> infrastructure
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> ^^^^^^^^^^^^^^^^^
> 
> diff --git a/doc/guides/rel_notes/release_18_02.rst
> b/doc/guides/rel_notes/release_18_02.rst
> index 24b67bb..0672b0e 100644
> --- a/doc/guides/rel_notes/release_18_02.rst
> +++ b/doc/guides/rel_notes/release_18_02.rst
> @@ -41,6 +41,22 @@ New Features
>       Also, make sure to start the actual text at the margin.
>       =========================================================
> 
> +   * **Add AVF (Adaptive Virtual Function) net PMD.**
> +
> +     A new net PMD has been added, which supports Intel® Ethernet
> Adaptive
> +     Virtual Function (AVF) with features list below:
> +
> +     * Basic Rx/Tx burst
> +     * SSE vectorized Rx/Tx burst
> +     * Promiscuous mode
> +     * MAC/VLAN offload
> +     * Checksum offload
> +     * TSO offload
> +     * Jumbo frame and MTU setting
> +     * RSS configuration
> +     * stats
> +     * Rx/Tx descriptor status
> +     * Link status update/event
> 
>  API Changes
>  -----------
> --
> 1.9.3


^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v4 01/15] net/avf/base: add base code for avf PMD
  2018-01-05 20:25       ` Stephen Hemminger
@ 2018-01-08  1:06         ` Lu, Wenzhuo
  2018-01-08 15:27           ` Stephen Hemminger
  0 siblings, 1 reply; 151+ messages in thread
From: Lu, Wenzhuo @ 2018-01-08  1:06 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, Wu, Jingjing

Hi Stephen,

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Saturday, January 6, 2018 4:26 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Cc: dev@dpdk.org; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v4 01/15] net/avf/base: add base code for avf
> PMD
> 
> O
> > diff --git a/drivers/net/avf/base/avf_adminq.c
> > b/drivers/net/avf/base/avf_adminq.c
> > new file mode 100644
> > index 0000000..616e2a9
> > --- /dev/null
> > +++ b/drivers/net/avf/base/avf_adminq.c
> > @@ -0,0 +1,1010 @@
> >
> +/**************************************************************
> ******
> > +***********
> > +
> > +Copyright (c) 2013 - 2015, Intel Corporation All rights reserved.
> 
> SPDX instead of more boilerplate.
> Copyright 2018?
> 
> > +STATIC void avf_adminq_init_regs(struct avf_hw *hw)
> 
> Why is there a STATIC macro??
> 
> ...
> 
> > +/**
> > + *  avf_config_asq_regs - configure ASQ registers
> > + *  @hw: pointer to the hardware structure
> > + *
> > + *  Configure base address and length registers for the transmit
> > +queue  **/ STATIC enum avf_status_code avf_config_asq_regs(struct
> > +avf_hw *hw) {
> > +	enum avf_status_code ret_code = AVF_SUCCESS;
> > +	u32 reg = 0;
> > +
> > +	/* Clear Head and Tail */
> > +	wr32(hw, hw->aq.asq.head, 0);
> > +	wr32(hw, hw->aq.asq.tail, 0);
> > +
> > +	/* set starting point */
> > +#ifdef INTEGRATED_VF
> > +	if (avf_is_vf(hw))
> > +		wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
> > +					  AVF_ATQLEN1_ATQENABLE_MASK));
> > +#else
> > +	wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
> > +				  AVF_ATQLEN1_ATQENABLE_MASK));
> > +#endif /* INTEGRATED_VF */
> 
> No ifdef please? do it in header file if you have to.
> as in:
> #ifdef INTEGRATED_VF
> #define avf_is_vf(hw)	(1)
> 
> ...
> 
> 
> > +/* internal (0x00XX) commands */
> > +
> > +/* Get version (direct 0x0001) */
> > +struct avf_aqc_get_version {
> > +	__le32 rom_ver;
> > +	__le32 fw_build;
> > +	__le16 fw_major;
> > +	__le16 fw_minor;
> > +	__le16 api_major;
> > +	__le16 api_minor;
> > +};
> 
> The use of __le16 and __le32 is a Linux kernel code style, typically not used
> in DPDK userland.
> 
> Are you trying to share code here?
> 
> ...
> 
> > +/**
> > + * virtchnl_vc_validate_vf_msg
> > + * @ver: Virtchnl version info
> > + * @v_opcode: Opcode for the message
> > + * @msg: pointer to the msg buffer
> > + * @msglen: msg length
> > + *
> > + * validate msg format against struct for each opcode  */ static
> > +inline int virtchnl_vc_validate_vf_msg(struct virtchnl_version_info
> > +*ver, u32 v_opcode,
> > +			    u8 *msg, u16 msglen)
> > +{
> >
> 
> This function is way too big to be inline.
Thanks for your comments. Let me explain. This is the base code, like what's in ixgbe, i40e ... We have to keep it as-is so it's much easier for us to update it next time. That's why the code style is a little different, and why some checkpatch issues are left unhandled.
We have had some discussion about the copyright license here and internally. Unfortunately we haven't reached a conclusion internally, so we have to keep the long license for now and may change it later.
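
As background on the convention being discussed: base-code idioms such as
STATIC, __le16/__le32 and the INTEGRATED_VF guard are usually absorbed by an
osdep shim so the shared sources compile unmodified in DPDK userland. A rough
sketch of what such a shim can contain (names illustrative, not copied from the
avf_osdep.h in this patch set):

#include <rte_byteorder.h>

/* Illustrative osdep-style mappings only. */
#define STATIC	static			/* keep base-code functions static */

typedef rte_le16_t __le16;		/* little-endian wire-format types */
typedef rte_le32_t __le32;
typedef rte_le64_t __le64;

#ifndef INTEGRATED_VF
#define avf_is_vf(hw)	(1)		/* standalone VF build is always a VF */
#endif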

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v4 02/15] net/avf: initialization of avf PMD
  2018-01-05 20:29       ` Stephen Hemminger
@ 2018-01-08  1:56         ` Lu, Wenzhuo
  0 siblings, 0 replies; 151+ messages in thread
From: Lu, Wenzhuo @ 2018-01-08  1:56 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, Wu, Jingjing

Hi Stephen,

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Saturday, January 6, 2018 4:30 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Cc: dev@dpdk.org; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v4 02/15] net/avf: initialization of avf PMD
> 
> On Fri,  5 Jan 2018 16:21:32 +0800
> Wenzhuo Lu <wenzhuo.lu@intel.com> wrote:
> 
> > From: Jingjing Wu <jingjing.wu@intel.com>
> >
> > Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> > ---
> >  config/common_base                      |   5 +
> >  drivers/net/Makefile                    |   1 +
> >  drivers/net/avf/Makefile                |  31 +++
> >  drivers/net/avf/avf.h                   | 187 ++++++++++++++
> >  drivers/net/avf/avf_ethdev.c            | 435
> ++++++++++++++++++++++++++++++++
> >  drivers/net/avf/avf_vchnl.c             | 304 ++++++++++++++++++++++
> >  drivers/net/avf/rte_pmd_avf_version.map |   4 +
> >  mk/rte.app.mk                           |   1 +
> >  8 files changed, 968 insertions(+)
> >  create mode 100644 drivers/net/avf/Makefile  create mode 100644
> > drivers/net/avf/avf.h  create mode 100644 drivers/net/avf/avf_ethdev.c
> > create mode 100644 drivers/net/avf/avf_vchnl.c  create mode 100644
> > drivers/net/avf/rte_pmd_avf_version.map
> >
> > diff --git a/config/common_base b/config/common_base index
> > e74febe..ce4d9bb 100644
> > --- a/config/common_base
> > +++ b/config/common_base
> > @@ -226,6 +226,11 @@
> CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
> >  CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
> >
> >  #
> > +# Compile burst-oriented AVF PMD driver #
> CONFIG_RTE_LIBRTE_AVF_PMD=n
> 
> Why is the default not 'y' ?
> The default is 'y' for IXGBE, EM and I40E already.
We change it to 'y' in patch 4. But you're right, we'd better set it to 'y' from the beginning. Will change it in v5.

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v4 15/15] doc: update doc for avf driver
  2018-01-07 15:09       ` Zhang, Helin
@ 2018-01-08  2:02         ` Lu, Wenzhuo
  0 siblings, 0 replies; 151+ messages in thread
From: Lu, Wenzhuo @ 2018-01-08  2:02 UTC (permalink / raw)
  To: Zhang, Helin, dev; +Cc: Wu, Jingjing

Hi Helin,

> -----Original Message-----
> From: Zhang, Helin
> Sent: Sunday, January 7, 2018 11:09 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Wu, Jingjing <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v4 15/15] doc: update doc for avf driver
> 
> Is there any public spec? If yes, I'd suggest to add the link to a doc for
> reference.
We do have a public spec, https://www.intel.com/content/dam/www/public/us/en/documents/product-specifications/ethernet-adaptive-virtual-function-hardware-spec.pdf.
I'll send a V5 for it.

> 
> /Helin


^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v5 00/14] add new AVF PMD
  2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
                       ` (14 preceding siblings ...)
  2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 15/15] doc: update doc for avf driver Wenzhuo Lu
@ 2018-01-08  5:13     ` Wenzhuo Lu
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 01/14] net/avf/base: add base code for avf PMD Wenzhuo Lu
                         ` (14 more replies)
  15 siblings, 15 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-08  5:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

The Adaptive Virtual Function (AVF) driver is a VF driver which supports all future Intel devices without requiring a VM update.
It provides basic high speed connectivity. Since it is an adaptive VF driver, every new drop of the VF driver can turn on more and more advanced features in the VM if the underlying HW device supports those advanced features. Most importantly, this happens in a device agnostic way without ever compromising the base functionality. All AVF interfaces need to follow the AVF spec, and an AVF compliant interface is supported starting from the Intel® Ethernet Controller 710 Series.

This patch set adds the AVF PMD, supporting:
 - Device initialization
 - Queue setup and Device start
 - Basic Rx and Tx.
 - MAC address offload feature
 - VLAN offload feature
 - RSS offload feature
 - Vectored Rx and Tx func
 - Bulk allocate Rx func
 - Rx interrupt support
 - Statistics query

v5:
 - some slight change for the comments.
 - merge the doc update patch.

v4:
 - update the base code to the newest.

v3:
 - change the license announcement.
 - update the related document.
 - resolve the checkpatch error, warning and some check.
 - handle the comments from the community.

v2:
 - rebase to 17.11
 - add vectored Rx and Tx func
 - add bulk allocate Rx func
 - add Rx interrupt support
 - add statistics query
 - fix coding style issue
 - remove extra compile flags in Makefile
 - add doc to list avf PMD features
 - fix lut setting when rss is disabled
 - fix log init missing
 - remove rx_descriptor_done

Jingjing Wu (12):
  net/avf/base: add base code for avf PMD
  net/avf: initialization of avf PMD
  net/avf: enable queue and device
  net/avf: enable link status update
  net/avf: support stats
  net/avf: enable ops for MAC VLAN offload
  net/avf: enable ops for RSS setting
  net/avf: enable ops for MTU setting
  net/avf: enable ops to check queue info and status
  net/i40e: support AVF basic interface
  net/avf: enable sse vector Rx Tx func
  net/avf: enable Rx interrupt support

Wenzhuo Lu (2):
  net/avf: enable basic Rx Tx func
  net/avf: enable bulk allocate Rx func

 MAINTAINERS                             |    6 +
 config/common_base                      |   10 +
 doc/guides/nics/features/avf.ini        |   37 +
 doc/guides/nics/features/avf_vec.ini    |   37 +
 doc/guides/nics/intel_vf.rst            |   20 +-
 doc/guides/rel_notes/release_18_02.rst  |   16 +
 drivers/net/Makefile                    |    1 +
 drivers/net/avf/Makefile                |   36 +
 drivers/net/avf/avf.h                   |  219 +++
 drivers/net/avf/avf_ethdev.c            | 1451 ++++++++++++++++
 drivers/net/avf/avf_log.h               |   44 +
 drivers/net/avf/avf_rxtx.c              | 1959 +++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h              |  260 +++
 drivers/net/avf/avf_rxtx_vec_common.h   |  210 +++
 drivers/net/avf/avf_rxtx_vec_sse.c      |  656 +++++++
 drivers/net/avf/avf_vchnl.c             |  812 +++++++++
 drivers/net/avf/base/README             |   19 +
 drivers/net/avf/base/avf_adminq.c       | 1010 +++++++++++
 drivers/net/avf/base/avf_adminq.h       |  166 ++
 drivers/net/avf/base/avf_adminq_cmd.h   | 2842 +++++++++++++++++++++++++++++++
 drivers/net/avf/base/avf_alloc.h        |   65 +
 drivers/net/avf/base/avf_common.c       | 1845 ++++++++++++++++++++
 drivers/net/avf/base/avf_devids.h       |   43 +
 drivers/net/avf/base/avf_hmc.h          |  245 +++
 drivers/net/avf/base/avf_lan_hmc.h      |  200 +++
 drivers/net/avf/base/avf_osdep.h        |  164 ++
 drivers/net/avf/base/avf_prototype.h    |  206 +++
 drivers/net/avf/base/avf_register.h     |  346 ++++
 drivers/net/avf/base/avf_status.h       |  108 ++
 drivers/net/avf/base/avf_type.h         | 2024 ++++++++++++++++++++++
 drivers/net/avf/base/virtchnl.h         |  787 +++++++++
 drivers/net/avf/rte_pmd_avf_version.map |    4 +
 drivers/net/i40e/i40e_ethdev.c          |   69 +-
 drivers/net/i40e/i40e_ethdev.h          |    5 +
 drivers/net/i40e/i40e_pf.c              |  140 +-
 drivers/net/i40e/i40e_pf.h              |    6 +
 mk/rte.app.mk                           |    1 +
 37 files changed, 16042 insertions(+), 27 deletions(-)
 create mode 100644 doc/guides/nics/features/avf.ini
 create mode 100644 doc/guides/nics/features/avf_vec.ini
 create mode 100644 drivers/net/avf/Makefile
 create mode 100644 drivers/net/avf/avf.h
 create mode 100644 drivers/net/avf/avf_ethdev.c
 create mode 100644 drivers/net/avf/avf_log.h
 create mode 100644 drivers/net/avf/avf_rxtx.c
 create mode 100644 drivers/net/avf/avf_rxtx.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_common.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_sse.c
 create mode 100644 drivers/net/avf/avf_vchnl.c
 create mode 100644 drivers/net/avf/base/README
 create mode 100644 drivers/net/avf/base/avf_adminq.c
 create mode 100644 drivers/net/avf/base/avf_adminq.h
 create mode 100644 drivers/net/avf/base/avf_adminq_cmd.h
 create mode 100644 drivers/net/avf/base/avf_alloc.h
 create mode 100644 drivers/net/avf/base/avf_common.c
 create mode 100644 drivers/net/avf/base/avf_devids.h
 create mode 100644 drivers/net/avf/base/avf_hmc.h
 create mode 100644 drivers/net/avf/base/avf_lan_hmc.h
 create mode 100644 drivers/net/avf/base/avf_osdep.h
 create mode 100644 drivers/net/avf/base/avf_prototype.h
 create mode 100644 drivers/net/avf/base/avf_register.h
 create mode 100644 drivers/net/avf/base/avf_status.h
 create mode 100644 drivers/net/avf/base/avf_type.h
 create mode 100644 drivers/net/avf/base/virtchnl.h
 create mode 100644 drivers/net/avf/rte_pmd_avf_version.map

-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v5 01/14] net/avf/base: add base code for avf PMD
  2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
@ 2018-01-08  5:13       ` Wenzhuo Lu
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 02/14] net/avf: initialization of " Wenzhuo Lu
                         ` (13 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-08  5:13 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu, Wenzhuo Lu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 MAINTAINERS                           |    5 +
 drivers/net/avf/avf_log.h             |   23 +
 drivers/net/avf/base/README           |   19 +
 drivers/net/avf/base/avf_adminq.c     | 1010 ++++++++++++
 drivers/net/avf/base/avf_adminq.h     |  166 ++
 drivers/net/avf/base/avf_adminq_cmd.h | 2842 +++++++++++++++++++++++++++++++++
 drivers/net/avf/base/avf_alloc.h      |   65 +
 drivers/net/avf/base/avf_common.c     | 1845 +++++++++++++++++++++
 drivers/net/avf/base/avf_devids.h     |   43 +
 drivers/net/avf/base/avf_hmc.h        |  245 +++
 drivers/net/avf/base/avf_lan_hmc.h    |  200 +++
 drivers/net/avf/base/avf_osdep.h      |  164 ++
 drivers/net/avf/base/avf_prototype.h  |  206 +++
 drivers/net/avf/base/avf_register.h   |  346 ++++
 drivers/net/avf/base/avf_status.h     |  108 ++
 drivers/net/avf/base/avf_type.h       | 2024 +++++++++++++++++++++++
 drivers/net/avf/base/virtchnl.h       |  787 +++++++++
 17 files changed, 10098 insertions(+)
 create mode 100644 drivers/net/avf/avf_log.h
 create mode 100644 drivers/net/avf/base/README
 create mode 100644 drivers/net/avf/base/avf_adminq.c
 create mode 100644 drivers/net/avf/base/avf_adminq.h
 create mode 100644 drivers/net/avf/base/avf_adminq_cmd.h
 create mode 100644 drivers/net/avf/base/avf_alloc.h
 create mode 100644 drivers/net/avf/base/avf_common.c
 create mode 100644 drivers/net/avf/base/avf_devids.h
 create mode 100644 drivers/net/avf/base/avf_hmc.h
 create mode 100644 drivers/net/avf/base/avf_lan_hmc.h
 create mode 100644 drivers/net/avf/base/avf_osdep.h
 create mode 100644 drivers/net/avf/base/avf_prototype.h
 create mode 100644 drivers/net/avf/base/avf_register.h
 create mode 100644 drivers/net/avf/base/avf_status.h
 create mode 100644 drivers/net/avf/base/avf_type.h
 create mode 100644 drivers/net/avf/base/virtchnl.h

diff --git a/MAINTAINERS b/MAINTAINERS
index e0199b1..17f15b6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -426,6 +426,11 @@ M: Xiao Wang <xiao.w.wang@intel.com>
 F: drivers/net/fm10k/
 F: doc/guides/nics/features/fm10k*.ini
 
+Intel avf
+M: Jingjing Wu <jingjing.wu@intel.com>
+M: Wenzhuo Lu <wenzhuo.lu@intel.com>
+F: drivers/net/avf/
+
 Mellanox mlx4
 M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
 F: drivers/net/mlx4/
diff --git a/drivers/net/avf/avf_log.h b/drivers/net/avf/avf_log.h
new file mode 100644
index 0000000..e3f106b
--- /dev/null
+++ b/drivers/net/avf/avf_log.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_LOG_H_
+#define _AVF_LOG_H_
+
+extern int avf_logtype_init;
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, avf_logtype_init, "%s(): " fmt "\n", \
+		__func__, ## args)
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+extern int avf_logtype_driver;
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, avf_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
+
+#endif /* _AVF_LOG_H_ */
diff --git a/drivers/net/avf/base/README b/drivers/net/avf/base/README
new file mode 100644
index 0000000..4710ae2
--- /dev/null
+++ b/drivers/net/avf/base/README
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+Intel® AVF driver
+=================
+
+This directory contains the source code of the FreeBSD AVF driver, version
+cid-avf.2018.01.02.tar.gz, released by the team which develops
+basic drivers for any AVF NIC. The base/ directory contains the
+original source package.
+
+Updating the driver
+===================
+
+NOTE: The source code in this directory should not be modified apart from
+the following file(s):
+
+    avf_osdep.h
diff --git a/drivers/net/avf/base/avf_adminq.c b/drivers/net/avf/base/avf_adminq.c
new file mode 100644
index 0000000..616e2a9
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq.c
@@ -0,0 +1,1010 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "avf_status.h"
+#include "avf_type.h"
+#include "avf_register.h"
+#include "avf_adminq.h"
+#include "avf_prototype.h"
+
+/**
+ *  avf_adminq_init_regs - Initialize AdminQ registers
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the alloc_asq and alloc_arq functions have already been called
+ **/
+STATIC void avf_adminq_init_regs(struct avf_hw *hw)
+{
+	/* set head and tail registers in our local struct */
+	if (avf_is_vf(hw)) {
+		hw->aq.asq.tail = AVF_ATQT1;
+		hw->aq.asq.head = AVF_ATQH1;
+		hw->aq.asq.len  = AVF_ATQLEN1;
+		hw->aq.asq.bal  = AVF_ATQBAL1;
+		hw->aq.asq.bah  = AVF_ATQBAH1;
+		hw->aq.arq.tail = AVF_ARQT1;
+		hw->aq.arq.head = AVF_ARQH1;
+		hw->aq.arq.len  = AVF_ARQLEN1;
+		hw->aq.arq.bal  = AVF_ARQBAL1;
+		hw->aq.arq.bah  = AVF_ARQBAH1;
+	}
+}
+
+/**
+ *  avf_alloc_adminq_asq_ring - Allocate Admin Queue send rings
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_alloc_adminq_asq_ring(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	ret_code = avf_allocate_dma_mem(hw, &hw->aq.asq.desc_buf,
+					 avf_mem_atq_ring,
+					 (hw->aq.num_asq_entries *
+					 sizeof(struct avf_aq_desc)),
+					 AVF_ADMINQ_DESC_ALIGNMENT);
+	if (ret_code)
+		return ret_code;
+
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.asq.cmd_buf,
+					  (hw->aq.num_asq_entries *
+					  sizeof(struct avf_asq_cmd_details)));
+	if (ret_code) {
+		avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+		return ret_code;
+	}
+
+	return ret_code;
+}
+
+/**
+ *  avf_alloc_adminq_arq_ring - Allocate Admin Queue receive rings
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_alloc_adminq_arq_ring(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	ret_code = avf_allocate_dma_mem(hw, &hw->aq.arq.desc_buf,
+					 avf_mem_arq_ring,
+					 (hw->aq.num_arq_entries *
+					 sizeof(struct avf_aq_desc)),
+					 AVF_ADMINQ_DESC_ALIGNMENT);
+
+	return ret_code;
+}
+
+/**
+ *  avf_free_adminq_asq - Free Admin Queue send rings
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the posted send buffers have already been cleaned
+ *  and de-allocated
+ **/
+void avf_free_adminq_asq(struct avf_hw *hw)
+{
+	avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+}
+
+/**
+ *  avf_free_adminq_arq - Free Admin Queue receive rings
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the posted receive buffers have already been cleaned
+ *  and de-allocated
+ **/
+void avf_free_adminq_arq(struct avf_hw *hw)
+{
+	avf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+}
+
+/**
+ *  avf_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC enum avf_status_code avf_alloc_arq_bufs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+	struct avf_aq_desc *desc;
+	struct avf_dma_mem *bi;
+	int i;
+
+	/* We'll be allocating the buffer info memory first, then we can
+	 * allocate the mapped buffers for the event processing
+	 */
+
+	/* buffer_info structures do not need alignment */
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.arq.dma_head,
+		(hw->aq.num_arq_entries * sizeof(struct avf_dma_mem)));
+	if (ret_code)
+		goto alloc_arq_bufs;
+	hw->aq.arq.r.arq_bi = (struct avf_dma_mem *)hw->aq.arq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_arq_entries; i++) {
+		bi = &hw->aq.arq.r.arq_bi[i];
+		ret_code = avf_allocate_dma_mem(hw, bi,
+						 avf_mem_arq_buf,
+						 hw->aq.arq_buf_size,
+						 AVF_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_arq_bufs;
+
+		/* now configure the descriptors for use */
+		desc = AVF_ADMINQ_DESC(hw->aq.arq, i);
+
+		desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_BUF);
+		if (hw->aq.arq_buf_size > AVF_AQ_LARGE_BUF)
+			desc->flags |= CPU_TO_LE16(AVF_AQ_FLAG_LB);
+		desc->opcode = 0;
+		/* This is in accordance with the Admin queue design; there is
+		 * no register for buffer size configuration
+		 */
+		desc->datalen = CPU_TO_LE16((u16)bi->size);
+		desc->retval = 0;
+		desc->cookie_high = 0;
+		desc->cookie_low = 0;
+		desc->params.external.addr_high =
+			CPU_TO_LE32(AVF_HI_DWORD(bi->pa));
+		desc->params.external.addr_low =
+			CPU_TO_LE32(AVF_LO_DWORD(bi->pa));
+		desc->params.external.param0 = 0;
+		desc->params.external.param1 = 0;
+	}
+
+alloc_arq_bufs:
+	return ret_code;
+
+unwind_alloc_arq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		avf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+	avf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ *  avf_alloc_asq_bufs - Allocate empty buffer structs for the send queue
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC enum avf_status_code avf_alloc_asq_bufs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+	struct avf_dma_mem *bi;
+	int i;
+
+	/* No mapped memory needed yet, just the buffer info structures */
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.asq.dma_head,
+		(hw->aq.num_asq_entries * sizeof(struct avf_dma_mem)));
+	if (ret_code)
+		goto alloc_asq_bufs;
+	hw->aq.asq.r.asq_bi = (struct avf_dma_mem *)hw->aq.asq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_asq_entries; i++) {
+		bi = &hw->aq.asq.r.asq_bi[i];
+		ret_code = avf_allocate_dma_mem(hw, bi,
+						 avf_mem_asq_buf,
+						 hw->aq.asq_buf_size,
+						 AVF_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_asq_bufs;
+	}
+alloc_asq_bufs:
+	return ret_code;
+
+unwind_alloc_asq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		avf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+	avf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ *  avf_free_arq_bufs - Free receive queue buffer info elements
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC void avf_free_arq_bufs(struct avf_hw *hw)
+{
+	int i;
+
+	/* free descriptors */
+	for (i = 0; i < hw->aq.num_arq_entries; i++)
+		avf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+
+	/* free the descriptor memory */
+	avf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+
+	/* free the dma header */
+	avf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+}
+
+/**
+ *  avf_free_asq_bufs - Free send queue buffer info elements
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC void avf_free_asq_bufs(struct avf_hw *hw)
+{
+	int i;
+
+	/* only unmap if the address is non-NULL */
+	for (i = 0; i < hw->aq.num_asq_entries; i++)
+		if (hw->aq.asq.r.asq_bi[i].pa)
+			avf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+
+	/* free the buffer info list */
+	avf_free_virt_mem(hw, &hw->aq.asq.cmd_buf);
+
+	/* free the descriptor memory */
+	avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+
+	/* free the dma header */
+	avf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+}
+
+/**
+ *  avf_config_asq_regs - configure ASQ registers
+ *  @hw: pointer to the hardware structure
+ *
+ *  Configure base address and length registers for the transmit queue
+ **/
+STATIC enum avf_status_code avf_config_asq_regs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+
+	/* set starting point */
+#ifdef INTEGRATED_VF
+	if (avf_is_vf(hw))
+		wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
+					  AVF_ATQLEN1_ATQENABLE_MASK));
+#else
+	wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
+				  AVF_ATQLEN1_ATQENABLE_MASK));
+#endif /* INTEGRATED_VF */
+	wr32(hw, hw->aq.asq.bal, AVF_LO_DWORD(hw->aq.asq.desc_buf.pa));
+	wr32(hw, hw->aq.asq.bah, AVF_HI_DWORD(hw->aq.asq.desc_buf.pa));
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.asq.bal);
+	if (reg != AVF_LO_DWORD(hw->aq.asq.desc_buf.pa))
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ *  avf_config_arq_regs - ARQ register configuration
+ *  @hw: pointer to the hardware structure
+ *
+ * Configure base address and length registers for the receive (event) queue
+ **/
+STATIC enum avf_status_code avf_config_arq_regs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+
+	/* set starting point */
+#ifdef INTEGRATED_VF
+	if (avf_is_vf(hw))
+		wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
+					  AVF_ARQLEN1_ARQENABLE_MASK));
+#else
+	wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
+				  AVF_ARQLEN1_ARQENABLE_MASK));
+#endif /* INTEGRATED_VF */
+	wr32(hw, hw->aq.arq.bal, AVF_LO_DWORD(hw->aq.arq.desc_buf.pa));
+	wr32(hw, hw->aq.arq.bah, AVF_HI_DWORD(hw->aq.arq.desc_buf.pa));
+
+	/* Update tail in the HW to post pre-allocated buffers */
+	wr32(hw, hw->aq.arq.tail, hw->aq.num_arq_entries - 1);
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.arq.bal);
+	if (reg != AVF_LO_DWORD(hw->aq.arq.desc_buf.pa))
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ *  avf_init_asq - main initialization routine for ASQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  This is the main initialization routine for the Admin Send Queue.
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.asq_buf_size
+ *
+ *  Do *NOT* hold the lock when calling this as the memory allocation routines
+ *  called are not going to be atomic context safe
+ **/
+enum avf_status_code avf_init_asq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (hw->aq.asq.count > 0) {
+		/* queue already initialized */
+		ret_code = AVF_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.asq.next_to_use = 0;
+	hw->aq.asq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = avf_alloc_adminq_asq_ring(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = avf_alloc_asq_bufs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = avf_config_asq_regs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* success! */
+	hw->aq.asq.count = hw->aq.num_asq_entries;
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	avf_free_adminq_asq(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_init_arq - initialize ARQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main initialization routine for the Admin Receive (Event) Queue.
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_arq_entries
+ *     - hw->aq.arq_buf_size
+ *
+ *  Do *NOT* hold the lock when calling this as the memory allocation routines
+ *  called are not going to be atomic context safe
+ **/
+enum avf_status_code avf_init_arq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (hw->aq.arq.count > 0) {
+		/* queue already initialized */
+		ret_code = AVF_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.arq.next_to_use = 0;
+	hw->aq.arq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = avf_alloc_adminq_arq_ring(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = avf_alloc_arq_bufs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = avf_config_arq_regs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* success! */
+	hw->aq.arq.count = hw->aq.num_arq_entries;
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	avf_free_adminq_arq(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_asq - shutdown the ASQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main shutdown routine for the Admin Send Queue
+ **/
+enum avf_status_code avf_shutdown_asq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	avf_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	if (hw->aq.asq.count == 0) {
+		ret_code = AVF_ERR_NOT_READY;
+		goto shutdown_asq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+	wr32(hw, hw->aq.asq.len, 0);
+	wr32(hw, hw->aq.asq.bal, 0);
+	wr32(hw, hw->aq.asq.bah, 0);
+
+	hw->aq.asq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	avf_free_asq_bufs(hw);
+
+shutdown_asq_out:
+	avf_release_spinlock(&hw->aq.asq_spinlock);
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_arq - shutdown ARQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main shutdown routine for the Admin Receive Queue
+ **/
+enum avf_status_code avf_shutdown_arq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	avf_acquire_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->aq.arq.count == 0) {
+		ret_code = AVF_ERR_NOT_READY;
+		goto shutdown_arq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+	wr32(hw, hw->aq.arq.len, 0);
+	wr32(hw, hw->aq.arq.bal, 0);
+	wr32(hw, hw->aq.arq.bah, 0);
+
+	hw->aq.arq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	avf_free_arq_bufs(hw);
+
+shutdown_arq_out:
+	avf_release_spinlock(&hw->aq.arq_spinlock);
+	return ret_code;
+}
+
+/**
+ *  avf_init_adminq - main initialization routine for Admin Queue
+ *  @hw: pointer to the hardware structure
+ *
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.num_arq_entries
+ *     - hw->aq.arq_buf_size
+ *     - hw->aq.asq_buf_size
+ **/
+enum avf_status_code avf_init_adminq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+	avf_init_spinlock(&hw->aq.asq_spinlock);
+	avf_init_spinlock(&hw->aq.arq_spinlock);
+
+	/* Set up register offsets */
+	avf_adminq_init_regs(hw);
+
+	/* setup ASQ command write back timeout */
+	hw->aq.asq_cmd_timeout = AVF_ASQ_CMD_TIMEOUT;
+
+	/* allocate the ASQ */
+	ret_code = avf_init_asq(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_destroy_spinlocks;
+
+	/* allocate the ARQ */
+	ret_code = avf_init_arq(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_asq;
+
+	ret_code = AVF_SUCCESS;
+
+	/* success! */
+	goto init_adminq_exit;
+
+init_adminq_free_asq:
+	avf_shutdown_asq(hw);
+init_adminq_destroy_spinlocks:
+	avf_destroy_spinlock(&hw->aq.asq_spinlock);
+	avf_destroy_spinlock(&hw->aq.arq_spinlock);
+
+init_adminq_exit:
+	return ret_code;
+}
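+
+/* A minimal usage sketch of the init contract above, assuming the caller
+ * picks its own Admin Queue sizing; the depths, buffer sizes and the helper
+ * name example_adminq_setup are illustrative only.
+ */
+static enum avf_status_code example_adminq_setup(struct avf_hw *hw)
+{
+	hw->aq.num_asq_entries = 128;	/* send queue depth (example) */
+	hw->aq.num_arq_entries = 128;	/* receive queue depth (example) */
+	hw->aq.asq_buf_size = 4096;	/* send buffer size (example) */
+	hw->aq.arq_buf_size = 4096;	/* receive buffer size (example) */
+
+	return avf_init_adminq(hw);
+}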
+
+/**
+ *  avf_shutdown_adminq - shutdown routine for the Admin Queue
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_shutdown_adminq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (avf_check_asq_alive(hw))
+		avf_aq_queue_shutdown(hw, true);
+
+	avf_shutdown_asq(hw);
+	avf_shutdown_arq(hw);
+	avf_destroy_spinlock(&hw->aq.asq_spinlock);
+	avf_destroy_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->nvm_buff.va)
+		avf_free_virt_mem(hw, &hw->nvm_buff);
+
+	return ret_code;
+}
+
+/**
+ *  avf_clean_asq - cleans Admin send queue
+ *  @hw: pointer to the hardware structure
+ *
+ *  Returns the number of free descriptors
+ **/
+u16 avf_clean_asq(struct avf_hw *hw)
+{
+	struct avf_adminq_ring *asq = &(hw->aq.asq);
+	struct avf_asq_cmd_details *details;
+	u16 ntc = asq->next_to_clean;
+	struct avf_aq_desc desc_cb;
+	struct avf_aq_desc *desc;
+
+	desc = AVF_ADMINQ_DESC(*asq, ntc);
+	details = AVF_ADMINQ_DETAILS(*asq, ntc);
+	while (rd32(hw, hw->aq.asq.head) != ntc) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "ntc %d head %d.\n", ntc, rd32(hw, hw->aq.asq.head));
+
+		if (details->callback) {
+			AVF_ADMINQ_CALLBACK cb_func =
+					(AVF_ADMINQ_CALLBACK)details->callback;
+			avf_memcpy(&desc_cb, desc, sizeof(struct avf_aq_desc),
+				    AVF_DMA_TO_DMA);
+			cb_func(hw, &desc_cb);
+		}
+		avf_memset(desc, 0, sizeof(*desc), AVF_DMA_MEM);
+		avf_memset(details, 0, sizeof(*details), AVF_NONDMA_MEM);
+		ntc++;
+		if (ntc == asq->count)
+			ntc = 0;
+		desc = AVF_ADMINQ_DESC(*asq, ntc);
+		details = AVF_ADMINQ_DETAILS(*asq, ntc);
+	}
+
+	asq->next_to_clean = ntc;
+
+	return AVF_DESC_UNUSED(asq);
+}
+
+/**
+ *  avf_asq_done - check if FW has processed the Admin Send Queue
+ *  @hw: pointer to the hw struct
+ *
+ *  Returns true if the firmware has processed all descriptors on the
+ *  admin send queue. Returns false if there are still requests pending.
+ **/
+bool avf_asq_done(struct avf_hw *hw)
+{
+	/* AQ designers suggest use of head for better
+	 * timing reliability than DD bit
+	 */
+	return rd32(hw, hw->aq.asq.head) == hw->aq.asq.next_to_use;
+
+}
+
+/**
+ *  avf_asq_send_command - send command to Admin Queue
+ *  @hw: pointer to the hw struct
+ *  @desc: prefilled descriptor describing the command (non DMA mem)
+ *  @buff: buffer to use for indirect commands
+ *  @buff_size: size of buffer for indirect commands
+ *  @cmd_details: pointer to command details structure
+ *
+ *  This is the main send command driver routine for the Admin Queue send
+ *  queue.  It runs the queue, cleans the queue, etc.
+ **/
+enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
+				struct avf_aq_desc *desc,
+				void *buff, /* can be NULL */
+				u16  buff_size,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_dma_mem *dma_buff = NULL;
+	struct avf_asq_cmd_details *details;
+	struct avf_aq_desc *desc_on_ring;
+	bool cmd_completed = false;
+	u16  retval = 0;
+	u32  val = 0;
+
+	avf_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	hw->aq.asq_last_status = AVF_AQ_RC_OK;
+
+	if (hw->aq.asq.count == 0) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Admin queue not initialized.\n");
+		status = AVF_ERR_QUEUE_EMPTY;
+		goto asq_send_command_error;
+	}
+
+	val = rd32(hw, hw->aq.asq.head);
+	if (val >= hw->aq.num_asq_entries) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: head overrun at %d\n", val);
+		status = AVF_ERR_QUEUE_EMPTY;
+		goto asq_send_command_error;
+	}
+
+	details = AVF_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use);
+	if (cmd_details) {
+		avf_memcpy(details,
+			    cmd_details,
+			    sizeof(struct avf_asq_cmd_details),
+			    AVF_NONDMA_TO_NONDMA);
+
+		/* If the cmd_details are defined copy the cookie.  The
+		 * CPU_TO_LE32 is not needed here because the data is ignored
+		 * by the FW, only used by the driver
+		 */
+		if (details->cookie) {
+			desc->cookie_high =
+				CPU_TO_LE32(AVF_HI_DWORD(details->cookie));
+			desc->cookie_low =
+				CPU_TO_LE32(AVF_LO_DWORD(details->cookie));
+		}
+	} else {
+		avf_memset(details, 0,
+			    sizeof(struct avf_asq_cmd_details),
+			    AVF_NONDMA_MEM);
+	}
+
+	/* clear requested flags and then set additional flags if defined */
+	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
+	desc->flags |= CPU_TO_LE16(details->flags_ena);
+
+	if (buff_size > hw->aq.asq_buf_size) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Invalid buffer size: %d.\n",
+			   buff_size);
+		status = AVF_ERR_INVALID_SIZE;
+		goto asq_send_command_error;
+	}
+
+	if (details->postpone && !details->async) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Async flag not set along with postpone flag");
+		status = AVF_ERR_PARAM;
+		goto asq_send_command_error;
+	}
+
+	/* call clean and check queue available function to reclaim the
+	 * descriptors that were processed by FW, the function returns the
+	 * number of desc available
+	 */
+	/* the clean function called here could be called in a separate thread
+	 * in case of asynchronous completions
+	 */
+	if (avf_clean_asq(hw) == 0) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Error queue is full.\n");
+		status = AVF_ERR_ADMIN_QUEUE_FULL;
+		goto asq_send_command_error;
+	}
+
+	/* initialize the temp desc pointer with the right desc */
+	desc_on_ring = AVF_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use);
+
+	/* if the desc is available copy the temp desc to the right place */
+	avf_memcpy(desc_on_ring, desc, sizeof(struct avf_aq_desc),
+		    AVF_NONDMA_TO_DMA);
+
+	/* if buff is not NULL assume indirect command */
+	if (buff != NULL) {
+		dma_buff = &(hw->aq.asq.r.asq_bi[hw->aq.asq.next_to_use]);
+		/* copy the user buff into the respective DMA buff */
+		avf_memcpy(dma_buff->va, buff, buff_size,
+			    AVF_NONDMA_TO_DMA);
+		desc_on_ring->datalen = CPU_TO_LE16(buff_size);
+
+		/* Update the address values in the desc with the pa value
+		 * for respective buffer
+		 */
+		desc_on_ring->params.external.addr_high =
+				CPU_TO_LE32(AVF_HI_DWORD(dma_buff->pa));
+		desc_on_ring->params.external.addr_low =
+				CPU_TO_LE32(AVF_LO_DWORD(dma_buff->pa));
+	}
+
+	/* bump the tail */
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc_on_ring,
+		      buff, buff_size);
+	(hw->aq.asq.next_to_use)++;
+	if (hw->aq.asq.next_to_use == hw->aq.asq.count)
+		hw->aq.asq.next_to_use = 0;
+	if (!details->postpone)
+		wr32(hw, hw->aq.asq.tail, hw->aq.asq.next_to_use);
+
+	/* if cmd_details are not defined or async flag is not set,
+	 * we need to wait for desc write back
+	 */
+	if (!details->async && !details->postpone) {
+		u32 total_delay = 0;
+
+		do {
+			/* AQ designers suggest use of head for better
+			 * timing reliability than DD bit
+			 */
+			if (avf_asq_done(hw))
+				break;
+			avf_usec_delay(50);
+			total_delay += 50;
+		} while (total_delay < hw->aq.asq_cmd_timeout);
+	}
+
+	/* if ready, copy the desc back to temp */
+	if (avf_asq_done(hw)) {
+		avf_memcpy(desc, desc_on_ring, sizeof(struct avf_aq_desc),
+			    AVF_DMA_TO_NONDMA);
+		if (buff != NULL)
+			avf_memcpy(buff, dma_buff->va, buff_size,
+				    AVF_DMA_TO_NONDMA);
+		retval = LE16_TO_CPU(desc->retval);
+		if (retval != 0) {
+			avf_debug(hw,
+				   AVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: Command completed with error 0x%X.\n",
+				   retval);
+
+			/* strip off FW internal code */
+			retval &= 0xff;
+		}
+		cmd_completed = true;
+		if ((enum avf_admin_queue_err)retval == AVF_AQ_RC_OK)
+			status = AVF_SUCCESS;
+		else
+			status = AVF_ERR_ADMIN_QUEUE_ERROR;
+		hw->aq.asq_last_status = (enum avf_admin_queue_err)retval;
+	}
+
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+		   "AQTX: desc and buffer writeback:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc, buff, buff_size);
+
+	/* save writeback aq if requested */
+	if (details->wb_desc)
+		avf_memcpy(details->wb_desc, desc_on_ring,
+			    sizeof(struct avf_aq_desc), AVF_DMA_TO_NONDMA);
+
+	/* update the error if a timeout occurred */
+	if ((!cmd_completed) &&
+	    (!details->async && !details->postpone)) {
+		if (rd32(hw, hw->aq.asq.len) & AVF_ATQLEN1_ATQCRIT_MASK) {
+			avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: AQ Critical error.\n");
+			status = AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR;
+		} else {
+			avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: Writeback timeout.\n");
+			status = AVF_ERR_ADMIN_QUEUE_TIMEOUT;
+		}
+	}
+
+asq_send_command_error:
+	avf_release_spinlock(&hw->aq.asq_spinlock);
+	return status;
+}
+
+/**
+ *  avf_fill_default_direct_cmd_desc - AQ descriptor helper function
+ *  @desc:     pointer to the temp descriptor (non DMA mem)
+ *  @opcode:   the opcode can be used to decide which flags to turn off or on
+ *
+ *  Fill the desc with default values
+ **/
+void avf_fill_default_direct_cmd_desc(struct avf_aq_desc *desc,
+				       u16 opcode)
+{
+	/* zero out the desc */
+	avf_memset((void *)desc, 0, sizeof(struct avf_aq_desc),
+		    AVF_NONDMA_MEM);
+	desc->opcode = CPU_TO_LE16(opcode);
+	desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_SI);
+}
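+
+/* A short sketch of how a direct command is typically issued with the two
+ * helpers above: the descriptor is pre-filled and handed to
+ * avf_asq_send_command() with no indirect buffer. The wrapper name
+ * example_send_get_version is illustrative only.
+ */
+static enum avf_status_code example_send_get_version(struct avf_hw *hw)
+{
+	struct avf_aq_desc desc;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_get_version);
+	/* NULL buffer and zero length mark this as a direct command */
+	return avf_asq_send_command(hw, &desc, NULL, 0, NULL);
+}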
+
+/**
+ *  avf_clean_arq_element
+ *  @hw: pointer to the hw struct
+ *  @e: event info from the receive descriptor, includes any buffers
+ *  @pending: number of events that could be left to process
+ *
+ *  This function cleans one Admin Receive Queue element and returns
+ *  the contents through e.  It can also return how many events are
+ *  left to process through 'pending'
+ **/
+enum avf_status_code avf_clean_arq_element(struct avf_hw *hw,
+					     struct avf_arq_event_info *e,
+					     u16 *pending)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u16 ntc = hw->aq.arq.next_to_clean;
+	struct avf_aq_desc *desc;
+	struct avf_dma_mem *bi;
+	u16 desc_idx;
+	u16 datalen;
+	u16 flags;
+	u16 ntu;
+
+	/* pre-clean the event info */
+	avf_memset(&e->desc, 0, sizeof(e->desc), AVF_NONDMA_MEM);
+
+	/* take the lock before we start messing with the ring */
+	avf_acquire_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->aq.arq.count == 0) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQRX: Admin queue not initialized.\n");
+		ret_code = AVF_ERR_QUEUE_EMPTY;
+		goto clean_arq_element_err;
+	}
+
+	/* set next_to_use to head */
+#ifdef INTEGRATED_VF
+	if (!avf_is_vf(hw))
+		ntu = rd32(hw, hw->aq.arq.head) & AVF_PF_ARQH_ARQH_MASK;
+	else
+		ntu = rd32(hw, hw->aq.arq.head) & AVF_ARQH1_ARQH_MASK;
+#else
+	ntu = rd32(hw, hw->aq.arq.head) & AVF_ARQH1_ARQH_MASK;
+#endif /* INTEGRATED_VF */
+	if (ntu == ntc) {
+		/* nothing to do - shouldn't need to update ring's values */
+		ret_code = AVF_ERR_ADMIN_QUEUE_NO_WORK;
+		goto clean_arq_element_out;
+	}
+
+	/* now clean the next descriptor */
+	desc = AVF_ADMINQ_DESC(hw->aq.arq, ntc);
+	desc_idx = ntc;
+
+	hw->aq.arq_last_status =
+		(enum avf_admin_queue_err)LE16_TO_CPU(desc->retval);
+	flags = LE16_TO_CPU(desc->flags);
+	if (flags & AVF_AQ_FLAG_ERR) {
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQRX: Event received with error 0x%X.\n",
+			   hw->aq.arq_last_status);
+	}
+
+	avf_memcpy(&e->desc, desc, sizeof(struct avf_aq_desc),
+		    AVF_DMA_TO_NONDMA);
+	datalen = LE16_TO_CPU(desc->datalen);
+	e->msg_len = min(datalen, e->buf_len);
+	if (e->msg_buf != NULL && (e->msg_len != 0))
+		avf_memcpy(e->msg_buf,
+			    hw->aq.arq.r.arq_bi[desc_idx].va,
+			    e->msg_len, AVF_DMA_TO_NONDMA);
+
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, "AQRX: desc and buffer:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf,
+		      hw->aq.arq_buf_size);
+
+	/* Restore the original datalen and buffer address in the desc,
+	 * FW updates datalen to indicate the event message
+	 * size
+	 */
+	bi = &hw->aq.arq.r.arq_bi[ntc];
+	avf_memset((void *)desc, 0, sizeof(struct avf_aq_desc), AVF_DMA_MEM);
+
+	desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_BUF);
+	if (hw->aq.arq_buf_size > AVF_AQ_LARGE_BUF)
+		desc->flags |= CPU_TO_LE16(AVF_AQ_FLAG_LB);
+	desc->datalen = CPU_TO_LE16((u16)bi->size);
+	desc->params.external.addr_high = CPU_TO_LE32(AVF_HI_DWORD(bi->pa));
+	desc->params.external.addr_low = CPU_TO_LE32(AVF_LO_DWORD(bi->pa));
+
+	/* set tail = the last cleaned desc index. */
+	wr32(hw, hw->aq.arq.tail, ntc);
+	/* ntc is updated to tail + 1 */
+	ntc++;
+	if (ntc == hw->aq.num_arq_entries)
+		ntc = 0;
+	hw->aq.arq.next_to_clean = ntc;
+	hw->aq.arq.next_to_use = ntu;
+
+clean_arq_element_out:
+	/* Set pending if needed, unlock and return */
+	if (pending != NULL)
+		*pending = (ntc > ntu ? hw->aq.arq.count : 0) + (ntu - ntc);
+clean_arq_element_err:
+	avf_release_spinlock(&hw->aq.arq_spinlock);
+
+	return ret_code;
+}
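+
+/* A sketch of the expected ARQ consumer pattern, assuming the caller owns a
+ * message buffer of arq_buf_size bytes: keep cleaning elements until no
+ * further work is reported. The helper name example_drain_arq is
+ * illustrative only.
+ */
+static void example_drain_arq(struct avf_hw *hw, u8 *buf, u16 buf_size)
+{
+	struct avf_arq_event_info event;
+	u16 pending;
+
+	event.buf_len = buf_size;
+	event.msg_buf = buf;
+
+	do {
+		/* returns AVF_ERR_ADMIN_QUEUE_NO_WORK when the queue is idle */
+		if (avf_clean_arq_element(hw, &event, &pending))
+			break;
+		/* event.desc and event.msg_buf now hold one AQ event */
+	} while (pending);
+}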
+
diff --git a/drivers/net/avf/base/avf_adminq.h b/drivers/net/avf/base/avf_adminq.h
new file mode 100644
index 0000000..d7d242a
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq.h
@@ -0,0 +1,166 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ADMINQ_H_
+#define _AVF_ADMINQ_H_
+
+#include "avf_osdep.h"
+#include "avf_status.h"
+#include "avf_adminq_cmd.h"
+
+#define AVF_ADMINQ_DESC(R, i)   \
+	(&(((struct avf_aq_desc *)((R).desc_buf.va))[i]))
+
+#define AVF_ADMINQ_DESC_ALIGNMENT 4096
+
+struct avf_adminq_ring {
+	struct avf_virt_mem dma_head;	/* space for dma structures */
+	struct avf_dma_mem desc_buf;	/* descriptor ring memory */
+	struct avf_virt_mem cmd_buf;	/* command buffer memory */
+
+	union {
+		struct avf_dma_mem *asq_bi;
+		struct avf_dma_mem *arq_bi;
+	} r;
+
+	u16 count;		/* Number of descriptors */
+	u16 rx_buf_len;		/* Admin Receive Queue buffer length */
+
+	/* used for interrupt processing */
+	u16 next_to_use;
+	u16 next_to_clean;
+
+	/* used for queue tracking */
+	u32 head;
+	u32 tail;
+	u32 len;
+	u32 bah;
+	u32 bal;
+};
+
+/* ASQ transaction details */
+struct avf_asq_cmd_details {
+	void *callback; /* cast from type AVF_ADMINQ_CALLBACK */
+	u64 cookie;
+	u16 flags_ena;
+	u16 flags_dis;
+	bool async;
+	bool postpone;
+	struct avf_aq_desc *wb_desc;
+};
+
+#define AVF_ADMINQ_DETAILS(R, i)   \
+	(&(((struct avf_asq_cmd_details *)((R).cmd_buf.va))[i]))
+
+/* ARQ event information */
+struct avf_arq_event_info {
+	struct avf_aq_desc desc;
+	u16 msg_len;
+	u16 buf_len;
+	u8 *msg_buf;
+};
+
+/* Admin Queue information */
+struct avf_adminq_info {
+	struct avf_adminq_ring arq;    /* receive queue */
+	struct avf_adminq_ring asq;    /* send queue */
+	u32 asq_cmd_timeout;            /* send queue cmd write back timeout */
+	u16 num_arq_entries;            /* receive queue depth */
+	u16 num_asq_entries;            /* send queue depth */
+	u16 arq_buf_size;               /* receive queue buffer size */
+	u16 asq_buf_size;               /* send queue buffer size */
+	u16 fw_maj_ver;                 /* firmware major version */
+	u16 fw_min_ver;                 /* firmware minor version */
+	u32 fw_build;                   /* firmware build number */
+	u16 api_maj_ver;                /* api major version */
+	u16 api_min_ver;                /* api minor version */
+
+	struct avf_spinlock asq_spinlock; /* Send queue spinlock */
+	struct avf_spinlock arq_spinlock; /* Receive queue spinlock */
+
+	/* last status values on send and receive queues */
+	enum avf_admin_queue_err asq_last_status;
+	enum avf_admin_queue_err arq_last_status;
+};
+
+/**
+ * avf_aq_rc_to_posix - convert errors to user-land codes
+ * aq_ret: AdminQ handler error code; can override aq_rc
+ * aq_rc: AdminQ firmware error code to convert
+ **/
+STATIC INLINE int avf_aq_rc_to_posix(int aq_ret, int aq_rc)
+{
+	int aq_to_posix[] = {
+		0,           /* AVF_AQ_RC_OK */
+		-EPERM,      /* AVF_AQ_RC_EPERM */
+		-ENOENT,     /* AVF_AQ_RC_ENOENT */
+		-ESRCH,      /* AVF_AQ_RC_ESRCH */
+		-EINTR,      /* AVF_AQ_RC_EINTR */
+		-EIO,        /* AVF_AQ_RC_EIO */
+		-ENXIO,      /* AVF_AQ_RC_ENXIO */
+		-E2BIG,      /* AVF_AQ_RC_E2BIG */
+		-EAGAIN,     /* AVF_AQ_RC_EAGAIN */
+		-ENOMEM,     /* AVF_AQ_RC_ENOMEM */
+		-EACCES,     /* AVF_AQ_RC_EACCES */
+		-EFAULT,     /* AVF_AQ_RC_EFAULT */
+		-EBUSY,      /* AVF_AQ_RC_EBUSY */
+		-EEXIST,     /* AVF_AQ_RC_EEXIST */
+		-EINVAL,     /* AVF_AQ_RC_EINVAL */
+		-ENOTTY,     /* AVF_AQ_RC_ENOTTY */
+		-ENOSPC,     /* AVF_AQ_RC_ENOSPC */
+		-ENOSYS,     /* AVF_AQ_RC_ENOSYS */
+		-ERANGE,     /* AVF_AQ_RC_ERANGE */
+		-EPIPE,      /* AVF_AQ_RC_EFLUSHED */
+		-ESPIPE,     /* AVF_AQ_RC_BAD_ADDR */
+		-EROFS,      /* AVF_AQ_RC_EMODE */
+		-EFBIG,      /* AVF_AQ_RC_EFBIG */
+	};
+
+	/* aq_rc is invalid if AQ timed out */
+	if (aq_ret == AVF_ERR_ADMIN_QUEUE_TIMEOUT)
+		return -EAGAIN;
+
+	if (!((u32)aq_rc < (sizeof(aq_to_posix) / sizeof((aq_to_posix)[0]))))
+		return -ERANGE;
+
+	return aq_to_posix[aq_rc];
+}
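+
+/* Typical use, as a sketch: pair the driver-level return code of a send with
+ * the firmware return code captured in hw->aq.asq_last_status, e.g.
+ *
+ *	err = avf_aq_rc_to_posix(status, hw->aq.asq_last_status);
+ *
+ * A timed-out command maps to -EAGAIN regardless of the firmware code.
+ */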
+
+/* general information */
+#define AVF_AQ_LARGE_BUF	512
+#define AVF_ASQ_CMD_TIMEOUT	250000  /* usecs */
+
+void avf_fill_default_direct_cmd_desc(struct avf_aq_desc *desc,
+				       u16 opcode);
+
+#endif /* _AVF_ADMINQ_H_ */
diff --git a/drivers/net/avf/base/avf_adminq_cmd.h b/drivers/net/avf/base/avf_adminq_cmd.h
new file mode 100644
index 0000000..1709f31
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq_cmd.h
@@ -0,0 +1,2842 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ADMINQ_CMD_H_
+#define _AVF_ADMINQ_CMD_H_
+
+/* This header file defines the avf Admin Queue commands and is shared between
+ * avf Firmware and Software.
+ *
+ * This file needs to comply with the Linux Kernel coding style.
+ */
+
+
+#define AVF_FW_API_VERSION_MAJOR	0x0001
+#define AVF_FW_API_VERSION_MINOR_X722	0x0005
+#define AVF_FW_API_VERSION_MINOR_X710	0x0007
+
+#define AVF_FW_MINOR_VERSION(_h) ((_h)->mac.type == AVF_MAC_XL710 ? \
+					AVF_FW_API_VERSION_MINOR_X710 : \
+					AVF_FW_API_VERSION_MINOR_X722)
+
+/* API version 1.7 implements additional link and PHY-specific APIs  */
+#define AVF_MINOR_VER_GET_LINK_INFO_XL710 0x0007
+
+struct avf_aq_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 retval;
+	__le32 cookie_high;
+	__le32 cookie_low;
+	union {
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 param2;
+			__le32 param3;
+		} internal;
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 addr_high;
+			__le32 addr_low;
+		} external;
+		u8 raw[16];
+	} params;
+};
+
+/* Flags sub-structure
+ * |0  |1  |2  |3  |4  |5  |6  |7  |8  |9  |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR|VFE| * *  RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
+ */
+
+/* command flags and offsets*/
+#define AVF_AQ_FLAG_DD_SHIFT	0
+#define AVF_AQ_FLAG_CMP_SHIFT	1
+#define AVF_AQ_FLAG_ERR_SHIFT	2
+#define AVF_AQ_FLAG_VFE_SHIFT	3
+#define AVF_AQ_FLAG_LB_SHIFT	9
+#define AVF_AQ_FLAG_RD_SHIFT	10
+#define AVF_AQ_FLAG_VFC_SHIFT	11
+#define AVF_AQ_FLAG_BUF_SHIFT	12
+#define AVF_AQ_FLAG_SI_SHIFT	13
+#define AVF_AQ_FLAG_EI_SHIFT	14
+#define AVF_AQ_FLAG_FE_SHIFT	15
+
+#define AVF_AQ_FLAG_DD		(1 << AVF_AQ_FLAG_DD_SHIFT)  /* 0x1    */
+#define AVF_AQ_FLAG_CMP	(1 << AVF_AQ_FLAG_CMP_SHIFT) /* 0x2    */
+#define AVF_AQ_FLAG_ERR	(1 << AVF_AQ_FLAG_ERR_SHIFT) /* 0x4    */
+#define AVF_AQ_FLAG_VFE	(1 << AVF_AQ_FLAG_VFE_SHIFT) /* 0x8    */
+#define AVF_AQ_FLAG_LB		(1 << AVF_AQ_FLAG_LB_SHIFT)  /* 0x200  */
+#define AVF_AQ_FLAG_RD		(1 << AVF_AQ_FLAG_RD_SHIFT)  /* 0x400  */
+#define AVF_AQ_FLAG_VFC	(1 << AVF_AQ_FLAG_VFC_SHIFT) /* 0x800  */
+#define AVF_AQ_FLAG_BUF	(1 << AVF_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
+#define AVF_AQ_FLAG_SI		(1 << AVF_AQ_FLAG_SI_SHIFT)  /* 0x2000 */
+#define AVF_AQ_FLAG_EI		(1 << AVF_AQ_FLAG_EI_SHIFT)  /* 0x4000 */
+#define AVF_AQ_FLAG_FE		(1 << AVF_AQ_FLAG_FE_SHIFT)  /* 0x8000 */
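+
+/* Flag usage sketch for an indirect command descriptor (values as defined
+ * above): BUF marks an attached buffer, RD marks that the buffer carries
+ * command data for the firmware to read, and LB is added for buffers larger
+ * than AVF_AQ_LARGE_BUF, e.g.
+ *
+ *	desc.flags |= CPU_TO_LE16(AVF_AQ_FLAG_BUF | AVF_AQ_FLAG_RD);
+ *	if (buff_size > AVF_AQ_LARGE_BUF)
+ *		desc.flags |= CPU_TO_LE16(AVF_AQ_FLAG_LB);
+ */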
+
+/* error codes */
+enum avf_admin_queue_err {
+	AVF_AQ_RC_OK		= 0,  /* success */
+	AVF_AQ_RC_EPERM	= 1,  /* Operation not permitted */
+	AVF_AQ_RC_ENOENT	= 2,  /* No such element */
+	AVF_AQ_RC_ESRCH	= 3,  /* Bad opcode */
+	AVF_AQ_RC_EINTR	= 4,  /* operation interrupted */
+	AVF_AQ_RC_EIO		= 5,  /* I/O error */
+	AVF_AQ_RC_ENXIO	= 6,  /* No such resource */
+	AVF_AQ_RC_E2BIG	= 7,  /* Arg too long */
+	AVF_AQ_RC_EAGAIN	= 8,  /* Try again */
+	AVF_AQ_RC_ENOMEM	= 9,  /* Out of memory */
+	AVF_AQ_RC_EACCES	= 10, /* Permission denied */
+	AVF_AQ_RC_EFAULT	= 11, /* Bad address */
+	AVF_AQ_RC_EBUSY	= 12, /* Device or resource busy */
+	AVF_AQ_RC_EEXIST	= 13, /* object already exists */
+	AVF_AQ_RC_EINVAL	= 14, /* Invalid argument */
+	AVF_AQ_RC_ENOTTY	= 15, /* Not a typewriter */
+	AVF_AQ_RC_ENOSPC	= 16, /* No space left or alloc failure */
+	AVF_AQ_RC_ENOSYS	= 17, /* Function not implemented */
+	AVF_AQ_RC_ERANGE	= 18, /* Parameter out of range */
+	AVF_AQ_RC_EFLUSHED	= 19, /* Cmd flushed due to prev cmd error */
+	AVF_AQ_RC_BAD_ADDR	= 20, /* Descriptor contains a bad pointer */
+	AVF_AQ_RC_EMODE	= 21, /* Op not allowed in current dev mode */
+	AVF_AQ_RC_EFBIG	= 22, /* File too large */
+};
+
+/* Admin Queue command opcodes */
+enum avf_admin_queue_opc {
+	/* aq commands */
+	avf_aqc_opc_get_version	= 0x0001,
+	avf_aqc_opc_driver_version	= 0x0002,
+	avf_aqc_opc_queue_shutdown	= 0x0003,
+	avf_aqc_opc_set_pf_context	= 0x0004,
+
+	/* resource ownership */
+	avf_aqc_opc_request_resource	= 0x0008,
+	avf_aqc_opc_release_resource	= 0x0009,
+
+	avf_aqc_opc_list_func_capabilities	= 0x000A,
+	avf_aqc_opc_list_dev_capabilities	= 0x000B,
+
+	/* Proxy commands */
+	avf_aqc_opc_set_proxy_config		= 0x0104,
+	avf_aqc_opc_set_ns_proxy_table_entry	= 0x0105,
+
+	/* LAA */
+	avf_aqc_opc_mac_address_read	= 0x0107,
+	avf_aqc_opc_mac_address_write	= 0x0108,
+
+	/* PXE */
+	avf_aqc_opc_clear_pxe_mode	= 0x0110,
+
+	/* WoL commands */
+	avf_aqc_opc_set_wol_filter	= 0x0120,
+	avf_aqc_opc_get_wake_reason	= 0x0121,
+	avf_aqc_opc_clear_all_wol_filters = 0x025E,
+
+	/* internal switch commands */
+	avf_aqc_opc_get_switch_config		= 0x0200,
+	avf_aqc_opc_add_statistics		= 0x0201,
+	avf_aqc_opc_remove_statistics		= 0x0202,
+	avf_aqc_opc_set_port_parameters	= 0x0203,
+	avf_aqc_opc_get_switch_resource_alloc	= 0x0204,
+	avf_aqc_opc_set_switch_config		= 0x0205,
+	avf_aqc_opc_rx_ctl_reg_read		= 0x0206,
+	avf_aqc_opc_rx_ctl_reg_write		= 0x0207,
+
+	avf_aqc_opc_add_vsi			= 0x0210,
+	avf_aqc_opc_update_vsi_parameters	= 0x0211,
+	avf_aqc_opc_get_vsi_parameters		= 0x0212,
+
+	avf_aqc_opc_add_pv			= 0x0220,
+	avf_aqc_opc_update_pv_parameters	= 0x0221,
+	avf_aqc_opc_get_pv_parameters		= 0x0222,
+
+	avf_aqc_opc_add_veb			= 0x0230,
+	avf_aqc_opc_update_veb_parameters	= 0x0231,
+	avf_aqc_opc_get_veb_parameters		= 0x0232,
+
+	avf_aqc_opc_delete_element		= 0x0243,
+
+	avf_aqc_opc_add_macvlan		= 0x0250,
+	avf_aqc_opc_remove_macvlan		= 0x0251,
+	avf_aqc_opc_add_vlan			= 0x0252,
+	avf_aqc_opc_remove_vlan		= 0x0253,
+	avf_aqc_opc_set_vsi_promiscuous_modes	= 0x0254,
+	avf_aqc_opc_add_tag			= 0x0255,
+	avf_aqc_opc_remove_tag			= 0x0256,
+	avf_aqc_opc_add_multicast_etag		= 0x0257,
+	avf_aqc_opc_remove_multicast_etag	= 0x0258,
+	avf_aqc_opc_update_tag			= 0x0259,
+	avf_aqc_opc_add_control_packet_filter	= 0x025A,
+	avf_aqc_opc_remove_control_packet_filter	= 0x025B,
+	avf_aqc_opc_add_cloud_filters		= 0x025C,
+	avf_aqc_opc_remove_cloud_filters	= 0x025D,
+	avf_aqc_opc_clear_wol_switch_filters	= 0x025E,
+	avf_aqc_opc_replace_cloud_filters	= 0x025F,
+
+	avf_aqc_opc_add_mirror_rule	= 0x0260,
+	avf_aqc_opc_delete_mirror_rule	= 0x0261,
+
+	/* Dynamic Device Personalization */
+	avf_aqc_opc_write_personalization_profile	= 0x0270,
+	avf_aqc_opc_get_personalization_profile_list	= 0x0271,
+
+	/* DCB commands */
+	avf_aqc_opc_dcb_ignore_pfc	= 0x0301,
+	avf_aqc_opc_dcb_updated	= 0x0302,
+	avf_aqc_opc_set_dcb_parameters = 0x0303,
+
+	/* TX scheduler */
+	avf_aqc_opc_configure_vsi_bw_limit		= 0x0400,
+	avf_aqc_opc_configure_vsi_ets_sla_bw_limit	= 0x0406,
+	avf_aqc_opc_configure_vsi_tc_bw		= 0x0407,
+	avf_aqc_opc_query_vsi_bw_config		= 0x0408,
+	avf_aqc_opc_query_vsi_ets_sla_config		= 0x040A,
+	avf_aqc_opc_configure_switching_comp_bw_limit	= 0x0410,
+
+	avf_aqc_opc_enable_switching_comp_ets			= 0x0413,
+	avf_aqc_opc_modify_switching_comp_ets			= 0x0414,
+	avf_aqc_opc_disable_switching_comp_ets			= 0x0415,
+	avf_aqc_opc_configure_switching_comp_ets_bw_limit	= 0x0416,
+	avf_aqc_opc_configure_switching_comp_bw_config		= 0x0417,
+	avf_aqc_opc_query_switching_comp_ets_config		= 0x0418,
+	avf_aqc_opc_query_port_ets_config			= 0x0419,
+	avf_aqc_opc_query_switching_comp_bw_config		= 0x041A,
+	avf_aqc_opc_suspend_port_tx				= 0x041B,
+	avf_aqc_opc_resume_port_tx				= 0x041C,
+	avf_aqc_opc_configure_partition_bw			= 0x041D,
+	/* hmc */
+	avf_aqc_opc_query_hmc_resource_profile	= 0x0500,
+	avf_aqc_opc_set_hmc_resource_profile	= 0x0501,
+
+	/* phy commands */
+	avf_aqc_opc_get_phy_abilities		= 0x0600,
+	avf_aqc_opc_set_phy_config		= 0x0601,
+	avf_aqc_opc_set_mac_config		= 0x0603,
+	avf_aqc_opc_set_link_restart_an	= 0x0605,
+	avf_aqc_opc_get_link_status		= 0x0607,
+	avf_aqc_opc_set_phy_int_mask		= 0x0613,
+	avf_aqc_opc_get_local_advt_reg		= 0x0614,
+	avf_aqc_opc_set_local_advt_reg		= 0x0615,
+	avf_aqc_opc_get_partner_advt		= 0x0616,
+	avf_aqc_opc_set_lb_modes		= 0x0618,
+	avf_aqc_opc_get_phy_wol_caps		= 0x0621,
+	avf_aqc_opc_set_phy_debug		= 0x0622,
+	avf_aqc_opc_upload_ext_phy_fm		= 0x0625,
+	avf_aqc_opc_run_phy_activity		= 0x0626,
+	avf_aqc_opc_set_phy_register		= 0x0628,
+	avf_aqc_opc_get_phy_register		= 0x0629,
+
+	/* NVM commands */
+	avf_aqc_opc_nvm_read			= 0x0701,
+	avf_aqc_opc_nvm_erase			= 0x0702,
+	avf_aqc_opc_nvm_update			= 0x0703,
+	avf_aqc_opc_nvm_config_read		= 0x0704,
+	avf_aqc_opc_nvm_config_write		= 0x0705,
+	avf_aqc_opc_nvm_progress		= 0x0706,
+	avf_aqc_opc_oem_post_update		= 0x0720,
+	avf_aqc_opc_thermal_sensor		= 0x0721,
+
+	/* virtualization commands */
+	avf_aqc_opc_send_msg_to_pf		= 0x0801,
+	avf_aqc_opc_send_msg_to_vf		= 0x0802,
+	avf_aqc_opc_send_msg_to_peer		= 0x0803,
+
+	/* alternate structure */
+	avf_aqc_opc_alternate_write		= 0x0900,
+	avf_aqc_opc_alternate_write_indirect	= 0x0901,
+	avf_aqc_opc_alternate_read		= 0x0902,
+	avf_aqc_opc_alternate_read_indirect	= 0x0903,
+	avf_aqc_opc_alternate_write_done	= 0x0904,
+	avf_aqc_opc_alternate_set_mode		= 0x0905,
+	avf_aqc_opc_alternate_clear_port	= 0x0906,
+
+	/* LLDP commands */
+	avf_aqc_opc_lldp_get_mib	= 0x0A00,
+	avf_aqc_opc_lldp_update_mib	= 0x0A01,
+	avf_aqc_opc_lldp_add_tlv	= 0x0A02,
+	avf_aqc_opc_lldp_update_tlv	= 0x0A03,
+	avf_aqc_opc_lldp_delete_tlv	= 0x0A04,
+	avf_aqc_opc_lldp_stop		= 0x0A05,
+	avf_aqc_opc_lldp_start		= 0x0A06,
+	avf_aqc_opc_get_cee_dcb_cfg	= 0x0A07,
+	avf_aqc_opc_lldp_set_local_mib	= 0x0A08,
+	avf_aqc_opc_lldp_stop_start_spec_agent	= 0x0A09,
+
+	/* Tunnel commands */
+	avf_aqc_opc_add_udp_tunnel	= 0x0B00,
+	avf_aqc_opc_del_udp_tunnel	= 0x0B01,
+	avf_aqc_opc_set_rss_key	= 0x0B02,
+	avf_aqc_opc_set_rss_lut	= 0x0B03,
+	avf_aqc_opc_get_rss_key	= 0x0B04,
+	avf_aqc_opc_get_rss_lut	= 0x0B05,
+
+	/* Async Events */
+	avf_aqc_opc_event_lan_overflow		= 0x1001,
+
+	/* OEM commands */
+	avf_aqc_opc_oem_parameter_change	= 0xFE00,
+	avf_aqc_opc_oem_device_status_change	= 0xFE01,
+	avf_aqc_opc_oem_ocsd_initialize	= 0xFE02,
+	avf_aqc_opc_oem_ocbb_initialize	= 0xFE03,
+
+	/* debug commands */
+	avf_aqc_opc_debug_read_reg		= 0xFF03,
+	avf_aqc_opc_debug_write_reg		= 0xFF04,
+	avf_aqc_opc_debug_modify_reg		= 0xFF07,
+	avf_aqc_opc_debug_dump_internals	= 0xFF08,
+};
+
+/* command structures and indirect data structures */
+
+/* Structure naming conventions:
+ * - no suffix for direct command descriptor structures
+ * - _data for indirect sent data
+ * - _resp for indirect return data (data which is both will use _data)
+ * - _completion for direct return data
+ * - _element_ for repeated elements (may also be _data or _resp)
+ *
+ * Command structures are expected to overlay the params.raw member of the basic
+ * descriptor, and as such cannot exceed 16 bytes in length.
+ */
+
+/* This macro is used to generate a compilation error if a structure
+ * is not exactly the correct length. It produces a divide-by-zero error
+ * when the structure size is wrong, and otherwise creates an enum that is
+ * never used.
+ */
+#define AVF_CHECK_STRUCT_LEN(n, X) enum avf_static_assert_enum_##X \
+	{ avf_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
+
+/* This macro is used extensively to ensure that command structures are 16
+ * bytes in length as they have to map to the raw array of that size.
+ */
+#define AVF_CHECK_CMD_LENGTH(X)	AVF_CHECK_STRUCT_LEN(16, X)
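+
+/* A sketch of how the size check behaves, using a hypothetical struct name:
+ *
+ *	struct example_cmd { u8 raw[16]; };
+ *	AVF_CHECK_CMD_LENGTH(example_cmd);	// compiles to an unused enum
+ *
+ * If example_cmd were any other size, (16)/((sizeof(...) == 16) ? 1 : 0)
+ * becomes a divide-by-zero constant expression and the build fails.
+ */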
+
+/* internal (0x00XX) commands */
+
+/* Get version (direct 0x0001) */
+struct avf_aqc_get_version {
+	__le32 rom_ver;
+	__le32 fw_build;
+	__le16 fw_major;
+	__le16 fw_minor;
+	__le16 api_major;
+	__le16 api_minor;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_version);
+
+/* Send driver version (indirect 0x0002) */
+struct avf_aqc_driver_version {
+	u8	driver_major_ver;
+	u8	driver_minor_ver;
+	u8	driver_build_ver;
+	u8	driver_subbuild_ver;
+	u8	reserved[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_driver_version);
+
+/* Queue Shutdown (direct 0x0003) */
+struct avf_aqc_queue_shutdown {
+	__le32	driver_unloading;
+#define AVF_AQ_DRIVER_UNLOADING	0x1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_queue_shutdown);
+
+/* Set PF context (0x0004, direct) */
+struct avf_aqc_set_pf_context {
+	u8	pf_id;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_pf_context);
+
+/* Request resource ownership (direct 0x0008)
+ * Release resource ownership (direct 0x0009)
+ */
+#define AVF_AQ_RESOURCE_NVM			1
+#define AVF_AQ_RESOURCE_SDP			2
+#define AVF_AQ_RESOURCE_ACCESS_READ		1
+#define AVF_AQ_RESOURCE_ACCESS_WRITE		2
+#define AVF_AQ_RESOURCE_NVM_READ_TIMEOUT	3000
+#define AVF_AQ_RESOURCE_NVM_WRITE_TIMEOUT	180000
+
+struct avf_aqc_request_resource {
+	__le16	resource_id;
+	__le16	access_type;
+	__le32	timeout;
+	__le32	resource_number;
+	u8	reserved[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_request_resource);
+
+/* Get function capabilities (indirect 0x000A)
+ * Get device capabilities (indirect 0x000B)
+ */
+struct avf_aqc_list_capabilites {
+	u8 command_flags;
+#define AVF_AQ_LIST_CAP_PF_INDEX_EN	1
+	u8 pf_index;
+	u8 reserved[2];
+	__le32 count;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_list_capabilites);
+
+struct avf_aqc_list_capabilities_element_resp {
+	__le16	id;
+	u8	major_rev;
+	u8	minor_rev;
+	__le32	number;
+	__le32	logical_id;
+	__le32	phys_id;
+	u8	reserved[16];
+};
+
+/* list of caps */
+
+#define AVF_AQ_CAP_ID_SWITCH_MODE	0x0001
+#define AVF_AQ_CAP_ID_MNG_MODE		0x0002
+#define AVF_AQ_CAP_ID_NPAR_ACTIVE	0x0003
+#define AVF_AQ_CAP_ID_OS2BMC_CAP	0x0004
+#define AVF_AQ_CAP_ID_FUNCTIONS_VALID	0x0005
+#define AVF_AQ_CAP_ID_ALTERNATE_RAM	0x0006
+#define AVF_AQ_CAP_ID_WOL_AND_PROXY	0x0008
+#define AVF_AQ_CAP_ID_SRIOV		0x0012
+#define AVF_AQ_CAP_ID_VF		0x0013
+#define AVF_AQ_CAP_ID_VMDQ		0x0014
+#define AVF_AQ_CAP_ID_8021QBG		0x0015
+#define AVF_AQ_CAP_ID_8021QBR		0x0016
+#define AVF_AQ_CAP_ID_VSI		0x0017
+#define AVF_AQ_CAP_ID_DCB		0x0018
+#define AVF_AQ_CAP_ID_FCOE		0x0021
+#define AVF_AQ_CAP_ID_ISCSI		0x0022
+#define AVF_AQ_CAP_ID_RSS		0x0040
+#define AVF_AQ_CAP_ID_RXQ		0x0041
+#define AVF_AQ_CAP_ID_TXQ		0x0042
+#define AVF_AQ_CAP_ID_MSIX		0x0043
+#define AVF_AQ_CAP_ID_VF_MSIX		0x0044
+#define AVF_AQ_CAP_ID_FLOW_DIRECTOR	0x0045
+#define AVF_AQ_CAP_ID_1588		0x0046
+#define AVF_AQ_CAP_ID_IWARP		0x0051
+#define AVF_AQ_CAP_ID_LED		0x0061
+#define AVF_AQ_CAP_ID_SDP		0x0062
+#define AVF_AQ_CAP_ID_MDIO		0x0063
+#define AVF_AQ_CAP_ID_WSR_PROT		0x0064
+#define AVF_AQ_CAP_ID_NVM_MGMT		0x0080
+#define AVF_AQ_CAP_ID_FLEX10		0x00F1
+#define AVF_AQ_CAP_ID_CEM		0x00F2
+
+/* Set CPPM Configuration (direct 0x0103) */
+struct avf_aqc_cppm_configuration {
+	__le16	command_flags;
+#define AVF_AQ_CPPM_EN_LTRC	0x0800
+#define AVF_AQ_CPPM_EN_DMCTH	0x1000
+#define AVF_AQ_CPPM_EN_DMCTLX	0x2000
+#define AVF_AQ_CPPM_EN_HPTC	0x4000
+#define AVF_AQ_CPPM_EN_DMARC	0x8000
+	__le16	ttlx;
+	__le32	dmacr;
+	__le16	dmcth;
+	u8	hptc;
+	u8	reserved;
+	__le32	pfltrc;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_cppm_configuration);
+
+/* Set ARP Proxy command / response (indirect 0x0104) */
+struct avf_aqc_arp_proxy_data {
+	__le16	command_flags;
+#define AVF_AQ_ARP_INIT_IPV4	0x0800
+#define AVF_AQ_ARP_UNSUP_CTL	0x1000
+#define AVF_AQ_ARP_ENA		0x2000
+#define AVF_AQ_ARP_ADD_IPV4	0x4000
+#define AVF_AQ_ARP_DEL_IPV4	0x8000
+	__le16	table_id;
+	__le32	enabled_offloads;
+#define AVF_AQ_ARP_DIRECTED_OFFLOAD_ENABLE	0x00000020
+#define AVF_AQ_ARP_OFFLOAD_ENABLE		0x00000800
+	__le32	ip_addr;
+	u8	mac_addr[6];
+	u8	reserved[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x14, avf_aqc_arp_proxy_data);
+
+/* Set NS Proxy Table Entry Command (indirect 0x0105) */
+struct avf_aqc_ns_proxy_data {
+	__le16	table_idx_mac_addr_0;
+	__le16	table_idx_mac_addr_1;
+	__le16	table_idx_ipv6_0;
+	__le16	table_idx_ipv6_1;
+	__le16	control;
+#define AVF_AQ_NS_PROXY_ADD_0		0x0001
+#define AVF_AQ_NS_PROXY_DEL_0		0x0002
+#define AVF_AQ_NS_PROXY_ADD_1		0x0004
+#define AVF_AQ_NS_PROXY_DEL_1		0x0008
+#define AVF_AQ_NS_PROXY_ADD_IPV6_0	0x0010
+#define AVF_AQ_NS_PROXY_DEL_IPV6_0	0x0020
+#define AVF_AQ_NS_PROXY_ADD_IPV6_1	0x0040
+#define AVF_AQ_NS_PROXY_DEL_IPV6_1	0x0080
+#define AVF_AQ_NS_PROXY_COMMAND_SEQ	0x0100
+#define AVF_AQ_NS_PROXY_INIT_IPV6_TBL	0x0200
+#define AVF_AQ_NS_PROXY_INIT_MAC_TBL	0x0400
+#define AVF_AQ_NS_PROXY_OFFLOAD_ENABLE	0x0800
+#define AVF_AQ_NS_PROXY_DIRECTED_OFFLOAD_ENABLE	0x1000
+	u8	mac_addr_0[6];
+	u8	mac_addr_1[6];
+	u8	local_mac_addr[6];
+	u8	ipv6_addr_0[16]; /* Warning! spec specifies BE byte order */
+	u8	ipv6_addr_1[16];
+};
+
+AVF_CHECK_STRUCT_LEN(0x3c, avf_aqc_ns_proxy_data);
+
+/* Manage LAA Command (0x0106) - obsolete */
+struct avf_aqc_mng_laa {
+	__le16	command_flags;
+#define AVF_AQ_LAA_FLAG_WR	0x8000
+	u8	reserved[2];
+	__le32	sal;
+	__le16	sah;
+	u8	reserved2[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mng_laa);
+
+/* Manage MAC Address Read Command (indirect 0x0107) */
+struct avf_aqc_mac_address_read {
+	__le16	command_flags;
+#define AVF_AQC_LAN_ADDR_VALID		0x10
+#define AVF_AQC_SAN_ADDR_VALID		0x20
+#define AVF_AQC_PORT_ADDR_VALID	0x40
+#define AVF_AQC_WOL_ADDR_VALID		0x80
+#define AVF_AQC_MC_MAG_EN_VALID	0x100
+#define AVF_AQC_WOL_PRESERVE_STATUS	0x200
+#define AVF_AQC_ADDR_VALID_MASK	0x3F0
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mac_address_read);
+
+struct avf_aqc_mac_address_read_data {
+	u8 pf_lan_mac[6];
+	u8 pf_san_mac[6];
+	u8 port_mac[6];
+	u8 pf_wol_mac[6];
+};
+
+AVF_CHECK_STRUCT_LEN(24, avf_aqc_mac_address_read_data);
+
+/* Manage MAC Address Write Command (0x0108) */
+struct avf_aqc_mac_address_write {
+	__le16	command_flags;
+#define AVF_AQC_MC_MAG_EN		0x0100
+#define AVF_AQC_WOL_PRESERVE_ON_PFR	0x0200
+#define AVF_AQC_WRITE_TYPE_LAA_ONLY	0x0000
+#define AVF_AQC_WRITE_TYPE_LAA_WOL	0x4000
+#define AVF_AQC_WRITE_TYPE_PORT	0x8000
+#define AVF_AQC_WRITE_TYPE_UPDATE_MC_MAG	0xC000
+#define AVF_AQC_WRITE_TYPE_MASK	0xC000
+
+	__le16	mac_sah;
+	__le32	mac_sal;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mac_address_write);
+
+/* PXE commands (0x011x) */
+
+/* Clear PXE Command and response  (direct 0x0110) */
+struct avf_aqc_clear_pxe {
+	u8	rx_cnt;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_clear_pxe);
+
+/* Set WoL Filter (0x0120) */
+
+struct avf_aqc_set_wol_filter {
+	__le16 filter_index;
+#define AVF_AQC_MAX_NUM_WOL_FILTERS	8
+#define AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT	15
+#define AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_MASK	(0x1 << \
+		AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT)
+
+#define AVF_AQC_SET_WOL_FILTER_INDEX_SHIFT		0
+#define AVF_AQC_SET_WOL_FILTER_INDEX_MASK	(0x7 << \
+		AVF_AQC_SET_WOL_FILTER_INDEX_SHIFT)
+	__le16 cmd_flags;
+#define AVF_AQC_SET_WOL_FILTER				0x8000
+#define AVF_AQC_SET_WOL_FILTER_NO_TCO_WOL		0x4000
+#define AVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR	0x2000
+#define AVF_AQC_SET_WOL_FILTER_ACTION_CLEAR		0
+#define AVF_AQC_SET_WOL_FILTER_ACTION_SET		1
+	__le16 valid_flags;
+#define AVF_AQC_SET_WOL_FILTER_ACTION_VALID		0x8000
+#define AVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID	0x4000
+	u8 reserved[2];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_wol_filter);
+
+struct avf_aqc_set_wol_filter_data {
+	u8 filter[128];
+	u8 mask[16];
+};
+
+AVF_CHECK_STRUCT_LEN(0x90, avf_aqc_set_wol_filter_data);
+
+/* Get Wake Reason (0x0121) */
+
+struct avf_aqc_get_wake_reason_completion {
+	u8 reserved_1[2];
+	__le16 wake_reason;
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT	0
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_MASK (0xFF << \
+		AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT)
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT	8
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_MASK	(0xFF << \
+		AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT)
+	u8 reserved_2[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_wake_reason_completion);
+
+/* Switch configuration commands (0x02xx) */
+
+/* Used by many indirect commands that only pass an SEID and a buffer in the
+ * command
+ */
+struct avf_aqc_switch_seid {
+	__le16	seid;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_switch_seid);
+
+/* Get Switch Configuration command (indirect 0x0200)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+struct avf_aqc_get_switch_config_header_resp {
+	__le16	num_reported;
+	__le16	num_total;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_switch_config_header_resp);
+
+struct avf_aqc_switch_config_element_resp {
+	u8	element_type;
+#define AVF_AQ_SW_ELEM_TYPE_MAC	1
+#define AVF_AQ_SW_ELEM_TYPE_PF		2
+#define AVF_AQ_SW_ELEM_TYPE_VF		3
+#define AVF_AQ_SW_ELEM_TYPE_EMP	4
+#define AVF_AQ_SW_ELEM_TYPE_BMC	5
+#define AVF_AQ_SW_ELEM_TYPE_PV		16
+#define AVF_AQ_SW_ELEM_TYPE_VEB	17
+#define AVF_AQ_SW_ELEM_TYPE_PA		18
+#define AVF_AQ_SW_ELEM_TYPE_VSI	19
+	u8	revision;
+#define AVF_AQ_SW_ELEM_REV_1		1
+	__le16	seid;
+	__le16	uplink_seid;
+	__le16	downlink_seid;
+	u8	reserved[3];
+	u8	connection_type;
+#define AVF_AQ_CONN_TYPE_REGULAR	0x1
+#define AVF_AQ_CONN_TYPE_DEFAULT	0x2
+#define AVF_AQ_CONN_TYPE_CASCADED	0x3
+	__le16	scheduler_id;
+	__le16	element_info;
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_switch_config_element_resp);
+
+/* Get Switch Configuration (indirect 0x0200)
+ *    an array of elements is returned in the response buffer;
+ *    the first in the array is the header, the remainder are elements
+ */
+struct avf_aqc_get_switch_config_resp {
+	struct avf_aqc_get_switch_config_header_resp	header;
+	struct avf_aqc_switch_config_element_resp	element[1];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_get_switch_config_resp);
+
+/* Add Statistics (direct 0x0201)
+ * Remove Statistics (direct 0x0202)
+ */
+struct avf_aqc_add_remove_statistics {
+	__le16	seid;
+	__le16	vlan;
+	__le16	stat_index;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_statistics);
+
+/* Set Port Parameters command (direct 0x0203) */
+struct avf_aqc_set_port_parameters {
+	__le16	command_flags;
+#define AVF_AQ_SET_P_PARAMS_SAVE_BAD_PACKETS	1
+#define AVF_AQ_SET_P_PARAMS_PAD_SHORT_PACKETS	2 /* must set! */
+#define AVF_AQ_SET_P_PARAMS_DOUBLE_VLAN_ENA	4
+	__le16	bad_frame_vsi;
+#define AVF_AQ_SET_P_PARAMS_BFRAME_SEID_SHIFT	0x0
+#define AVF_AQ_SET_P_PARAMS_BFRAME_SEID_MASK	0x3FF
+	__le16	default_seid;        /* reserved for command */
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_port_parameters);
+
+/* Get Switch Resource Allocation (indirect 0x0204) */
+struct avf_aqc_get_switch_resource_alloc {
+	u8	num_entries;         /* reserved for command */
+	u8	reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_switch_resource_alloc);
+
+/* expect an array of these structs in the response buffer */
+struct avf_aqc_switch_resource_alloc_element_resp {
+	u8	resource_type;
+#define AVF_AQ_RESOURCE_TYPE_VEB		0x0
+#define AVF_AQ_RESOURCE_TYPE_VSI		0x1
+#define AVF_AQ_RESOURCE_TYPE_MACADDR		0x2
+#define AVF_AQ_RESOURCE_TYPE_STAG		0x3
+#define AVF_AQ_RESOURCE_TYPE_ETAG		0x4
+#define AVF_AQ_RESOURCE_TYPE_MULTICAST_HASH	0x5
+#define AVF_AQ_RESOURCE_TYPE_UNICAST_HASH	0x6
+#define AVF_AQ_RESOURCE_TYPE_VLAN		0x7
+#define AVF_AQ_RESOURCE_TYPE_VSI_LIST_ENTRY	0x8
+#define AVF_AQ_RESOURCE_TYPE_ETAG_LIST_ENTRY	0x9
+#define AVF_AQ_RESOURCE_TYPE_VLAN_STAT_POOL	0xA
+#define AVF_AQ_RESOURCE_TYPE_MIRROR_RULE	0xB
+#define AVF_AQ_RESOURCE_TYPE_QUEUE_SETS	0xC
+#define AVF_AQ_RESOURCE_TYPE_VLAN_FILTERS	0xD
+#define AVF_AQ_RESOURCE_TYPE_INNER_MAC_FILTERS	0xF
+#define AVF_AQ_RESOURCE_TYPE_IP_FILTERS	0x10
+#define AVF_AQ_RESOURCE_TYPE_GRE_VN_KEYS	0x11
+#define AVF_AQ_RESOURCE_TYPE_VN2_KEYS		0x12
+#define AVF_AQ_RESOURCE_TYPE_TUNNEL_PORTS	0x13
+	u8	reserved1;
+	__le16	guaranteed;
+	__le16	total;
+	__le16	used;
+	__le16	total_unalloced;
+	u8	reserved2[6];
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_switch_resource_alloc_element_resp);
+
+/* Set Switch Configuration (direct 0x0205) */
+struct avf_aqc_set_switch_config {
+	__le16	flags;
+/* flags used for both fields below */
+#define AVF_AQ_SET_SWITCH_CFG_PROMISC		0x0001
+#define AVF_AQ_SET_SWITCH_CFG_L2_FILTER	0x0002
+#define AVF_AQ_SET_SWITCH_CFG_HW_ATR_EVICT	0x0004
+	__le16	valid_flags;
+	/* The ethertype in switch_tag is dropped on ingress and used
+	 * internally by the switch. Set this to zero for the default
+	 * of 0x88a8 (802.1ad). Should be zero for firmware API
+	 * versions lower than 1.7.
+	 */
+	__le16	switch_tag;
+	/* The ethertypes in first_tag and second_tag are used to
+	 * match the outer and inner VLAN tags (respectively) when HW
+	 * double VLAN tagging is enabled via the set port parameters
+	 * AQ command. Otherwise these are both ignored. Set them to
+	 * zero for their defaults of 0x8100 (802.1Q). Should be zero
+	 * for firmware API versions lower than 1.7.
+	 */
+	__le16	first_tag;
+	__le16	second_tag;
+	u8	reserved[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_switch_config);
+
+/* Read Receive control registers  (direct 0x0206)
+ * Write Receive control registers (direct 0x0207)
+ *     used for accessing Rx control registers that can be
+ *     slow and need special handling when under high Rx load
+ */
+struct avf_aqc_rx_ctl_reg_read_write {
+	__le32 reserved1;
+	__le32 address;
+	__le32 reserved2;
+	__le32 value;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_rx_ctl_reg_read_write);
+
+/* Add VSI (indirect 0x0210)
+ *    this indirect command uses struct avf_aqc_vsi_properties_data
+ *    as the indirect buffer (128 bytes)
+ *
+ * Update VSI (indirect 0x211)
+ *     uses the same data structure as Add VSI
+ *
+ * Get VSI (indirect 0x0212)
+ *     uses the same completion and data structure as Add VSI
+ */
+struct avf_aqc_add_get_update_vsi {
+	__le16	uplink_seid;
+	u8	connection_type;
+#define AVF_AQ_VSI_CONN_TYPE_NORMAL	0x1
+#define AVF_AQ_VSI_CONN_TYPE_DEFAULT	0x2
+#define AVF_AQ_VSI_CONN_TYPE_CASCADED	0x3
+	u8	reserved1;
+	u8	vf_id;
+	u8	reserved2;
+	__le16	vsi_flags;
+#define AVF_AQ_VSI_TYPE_SHIFT		0x0
+#define AVF_AQ_VSI_TYPE_MASK		(0x3 << AVF_AQ_VSI_TYPE_SHIFT)
+#define AVF_AQ_VSI_TYPE_VF		0x0
+#define AVF_AQ_VSI_TYPE_VMDQ2		0x1
+#define AVF_AQ_VSI_TYPE_PF		0x2
+#define AVF_AQ_VSI_TYPE_EMP_MNG	0x3
+#define AVF_AQ_VSI_FLAG_CASCADED_PV	0x4
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_get_update_vsi);
+
+struct avf_aqc_add_get_update_vsi_completion {
+	__le16 seid;
+	__le16 vsi_number;
+	__le16 vsi_used;
+	__le16 vsi_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_get_update_vsi_completion);
+
+struct avf_aqc_vsi_properties_data {
+	/* first 96 byte are written by SW */
+	__le16	valid_sections;
+#define AVF_AQ_VSI_PROP_SWITCH_VALID		0x0001
+#define AVF_AQ_VSI_PROP_SECURITY_VALID		0x0002
+#define AVF_AQ_VSI_PROP_VLAN_VALID		0x0004
+#define AVF_AQ_VSI_PROP_CAS_PV_VALID		0x0008
+#define AVF_AQ_VSI_PROP_INGRESS_UP_VALID	0x0010
+#define AVF_AQ_VSI_PROP_EGRESS_UP_VALID	0x0020
+#define AVF_AQ_VSI_PROP_QUEUE_MAP_VALID	0x0040
+#define AVF_AQ_VSI_PROP_QUEUE_OPT_VALID	0x0080
+#define AVF_AQ_VSI_PROP_OUTER_UP_VALID		0x0100
+#define AVF_AQ_VSI_PROP_SCHED_VALID		0x0200
+	/* switch section */
+	__le16	switch_id; /* 12bit id combined with flags below */
+#define AVF_AQ_VSI_SW_ID_SHIFT		0x0000
+#define AVF_AQ_VSI_SW_ID_MASK		(0xFFF << AVF_AQ_VSI_SW_ID_SHIFT)
+#define AVF_AQ_VSI_SW_ID_FLAG_NOT_STAG	0x1000
+#define AVF_AQ_VSI_SW_ID_FLAG_ALLOW_LB	0x2000
+#define AVF_AQ_VSI_SW_ID_FLAG_LOCAL_LB	0x4000
+	u8	sw_reserved[2];
+	/* security section */
+	u8	sec_flags;
+#define AVF_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD	0x01
+#define AVF_AQ_VSI_SEC_FLAG_ENABLE_VLAN_CHK	0x02
+#define AVF_AQ_VSI_SEC_FLAG_ENABLE_MAC_CHK	0x04
+	u8	sec_reserved;
+	/* VLAN section */
+	__le16	pvid; /* VLANS include priority bits */
+	__le16	fcoe_pvid;
+	u8	port_vlan_flags;
+#define AVF_AQ_VSI_PVLAN_MODE_SHIFT	0x00
+#define AVF_AQ_VSI_PVLAN_MODE_MASK	(0x03 << \
+					 AVF_AQ_VSI_PVLAN_MODE_SHIFT)
+#define AVF_AQ_VSI_PVLAN_MODE_TAGGED	0x01
+#define AVF_AQ_VSI_PVLAN_MODE_UNTAGGED	0x02
+#define AVF_AQ_VSI_PVLAN_MODE_ALL	0x03
+#define AVF_AQ_VSI_PVLAN_INSERT_PVID	0x04
+#define AVF_AQ_VSI_PVLAN_EMOD_SHIFT	0x03
+#define AVF_AQ_VSI_PVLAN_EMOD_MASK	(0x3 << \
+					 AVF_AQ_VSI_PVLAN_EMOD_SHIFT)
+#define AVF_AQ_VSI_PVLAN_EMOD_STR_BOTH	0x0
+#define AVF_AQ_VSI_PVLAN_EMOD_STR_UP	0x08
+#define AVF_AQ_VSI_PVLAN_EMOD_STR	0x10
+#define AVF_AQ_VSI_PVLAN_EMOD_NOTHING	0x18
+	u8	pvlan_reserved[3];
+	/* ingress egress up sections */
+	__le32	ingress_table; /* bitmap, 3 bits per up */
+#define AVF_AQ_VSI_UP_TABLE_UP0_SHIFT	0
+#define AVF_AQ_VSI_UP_TABLE_UP0_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP0_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP1_SHIFT	3
+#define AVF_AQ_VSI_UP_TABLE_UP1_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP1_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP2_SHIFT	6
+#define AVF_AQ_VSI_UP_TABLE_UP2_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP2_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP3_SHIFT	9
+#define AVF_AQ_VSI_UP_TABLE_UP3_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP3_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP4_SHIFT	12
+#define AVF_AQ_VSI_UP_TABLE_UP4_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP4_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP5_SHIFT	15
+#define AVF_AQ_VSI_UP_TABLE_UP5_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP5_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP6_SHIFT	18
+#define AVF_AQ_VSI_UP_TABLE_UP6_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP6_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP7_SHIFT	21
+#define AVF_AQ_VSI_UP_TABLE_UP7_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP7_SHIFT)
+	__le32	egress_table;   /* same defines as for ingress table */
+	/* cascaded PV section */
+	__le16	cas_pv_tag;
+	u8	cas_pv_flags;
+#define AVF_AQ_VSI_CAS_PV_TAGX_SHIFT		0x00
+#define AVF_AQ_VSI_CAS_PV_TAGX_MASK		(0x03 << \
+						 AVF_AQ_VSI_CAS_PV_TAGX_SHIFT)
+#define AVF_AQ_VSI_CAS_PV_TAGX_LEAVE		0x00
+#define AVF_AQ_VSI_CAS_PV_TAGX_REMOVE		0x01
+#define AVF_AQ_VSI_CAS_PV_TAGX_COPY		0x02
+#define AVF_AQ_VSI_CAS_PV_INSERT_TAG		0x10
+#define AVF_AQ_VSI_CAS_PV_ETAG_PRUNE		0x20
+#define AVF_AQ_VSI_CAS_PV_ACCEPT_HOST_TAG	0x40
+	u8	cas_pv_reserved;
+	/* queue mapping section */
+	__le16	mapping_flags;
+#define AVF_AQ_VSI_QUE_MAP_CONTIG	0x0
+#define AVF_AQ_VSI_QUE_MAP_NONCONTIG	0x1
+	__le16	queue_mapping[16];
+#define AVF_AQ_VSI_QUEUE_SHIFT		0x0
+#define AVF_AQ_VSI_QUEUE_MASK		(0x7FF << AVF_AQ_VSI_QUEUE_SHIFT)
+	__le16	tc_mapping[8];
+#define AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT	0
+#define AVF_AQ_VSI_TC_QUE_OFFSET_MASK	(0x1FF << \
+					 AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT)
+#define AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT	9
+#define AVF_AQ_VSI_TC_QUE_NUMBER_MASK	(0x7 << \
+					 AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT)
+	/* queueing option section */
+	u8	queueing_opt_flags;
+#define AVF_AQ_VSI_QUE_OPT_MULTICAST_UDP_ENA	0x04
+#define AVF_AQ_VSI_QUE_OPT_UNICAST_UDP_ENA	0x08
+#define AVF_AQ_VSI_QUE_OPT_TCP_ENA	0x10
+#define AVF_AQ_VSI_QUE_OPT_FCOE_ENA	0x20
+#define AVF_AQ_VSI_QUE_OPT_RSS_LUT_PF	0x00
+#define AVF_AQ_VSI_QUE_OPT_RSS_LUT_VSI	0x40
+	u8	queueing_opt_reserved[3];
+	/* scheduler section */
+	u8	up_enable_bits;
+	u8	sched_reserved;
+	/* outer up section */
+	__le32	outer_up_table; /* same structure and defines as ingress tbl */
+	u8	cmd_reserved[8];
+	/* last 32 bytes are written by FW */
+	__le16	qs_handle[8];
+#define AVF_AQ_VSI_QS_HANDLE_INVALID	0xFFFF
+	__le16	stat_counter_idx;
+	__le16	sched_id;
+	u8	resp_reserved[12];
+};
+
+AVF_CHECK_STRUCT_LEN(128, avf_aqc_vsi_properties_data);
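+
+/* Illustrative sketch, not part of the AVF base code or spec: shows how the
+ * queue-mapping section of the VSI properties buffer could be packed for a
+ * contiguous mapping on TC0.  The meaning of the TC queue-number field is
+ * device specific and not restated here; only the bit packing follows the
+ * defines above.  CPU_TO_LE16() and u16 are assumed from avf_osdep.h.
+ */
+static inline void
+avf_example_map_tc0_queues(struct avf_aqc_vsi_properties_data *info,
+			   u16 first_queue, u16 tc_queue_number_field)
+{
+	info->valid_sections |= CPU_TO_LE16(AVF_AQ_VSI_PROP_QUEUE_MAP_VALID);
+	info->mapping_flags = CPU_TO_LE16(AVF_AQ_VSI_QUE_MAP_CONTIG);
+	info->queue_mapping[0] =
+		CPU_TO_LE16(first_queue & AVF_AQ_VSI_QUEUE_MASK);
+	info->tc_mapping[0] = CPU_TO_LE16(
+		((first_queue << AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT) &
+		 AVF_AQ_VSI_TC_QUE_OFFSET_MASK) |
+		((tc_queue_number_field << AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT) &
+		 AVF_AQ_VSI_TC_QUE_NUMBER_MASK));
+}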
+
+/* Add Port Virtualizer (direct 0x0220)
+ * also used for update PV (direct 0x0221) but only flags are used
+ * (IS_CTRL_PORT only works on add PV)
+ */
+struct avf_aqc_add_update_pv {
+	__le16	command_flags;
+#define AVF_AQC_PV_FLAG_PV_TYPE		0x1
+#define AVF_AQC_PV_FLAG_FWD_UNKNOWN_STAG_EN	0x2
+#define AVF_AQC_PV_FLAG_FWD_UNKNOWN_ETAG_EN	0x4
+#define AVF_AQC_PV_FLAG_IS_CTRL_PORT		0x8
+	__le16	uplink_seid;
+	__le16	connected_seid;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_update_pv);
+
+struct avf_aqc_add_update_pv_completion {
+	/* reserved for update; for add also encodes error if rc == ENOSPC */
+	__le16	pv_seid;
+#define AVF_AQC_PV_ERR_FLAG_NO_PV	0x1
+#define AVF_AQC_PV_ERR_FLAG_NO_SCHED	0x2
+#define AVF_AQC_PV_ERR_FLAG_NO_COUNTER	0x4
+#define AVF_AQC_PV_ERR_FLAG_NO_ENTRY	0x8
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_update_pv_completion);
+
+/* Get PV Params (direct 0x0222)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+
+struct avf_aqc_get_pv_params_completion {
+	__le16	seid;
+	__le16	default_stag;
+	__le16	pv_flags; /* same flags as add_pv */
+#define AVF_AQC_GET_PV_PV_TYPE			0x1
+#define AVF_AQC_GET_PV_FRWD_UNKNOWN_STAG	0x2
+#define AVF_AQC_GET_PV_FRWD_UNKNOWN_ETAG	0x4
+	u8	reserved[8];
+	__le16	default_port_seid;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_pv_params_completion);
+
+/* Add VEB (direct 0x0230) */
+struct avf_aqc_add_veb {
+	__le16	uplink_seid;
+	__le16	downlink_seid;
+	__le16	veb_flags;
+#define AVF_AQC_ADD_VEB_FLOATING		0x1
+#define AVF_AQC_ADD_VEB_PORT_TYPE_SHIFT	1
+#define AVF_AQC_ADD_VEB_PORT_TYPE_MASK		(0x3 << \
+					AVF_AQC_ADD_VEB_PORT_TYPE_SHIFT)
+#define AVF_AQC_ADD_VEB_PORT_TYPE_DEFAULT	0x2
+#define AVF_AQC_ADD_VEB_PORT_TYPE_DATA		0x4
+#define AVF_AQC_ADD_VEB_ENABLE_L2_FILTER	0x8     /* deprecated */
+#define AVF_AQC_ADD_VEB_ENABLE_DISABLE_STATS	0x10
+	u8	enable_tcs;
+	u8	reserved[9];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_veb);
+
+struct avf_aqc_add_veb_completion {
+	u8	reserved[6];
+	__le16	switch_seid;
+	/* also encodes error if rc == ENOSPC; codes are the same as add_pv */
+	__le16	veb_seid;
+#define AVF_AQC_VEB_ERR_FLAG_NO_VEB		0x1
+#define AVF_AQC_VEB_ERR_FLAG_NO_SCHED		0x2
+#define AVF_AQC_VEB_ERR_FLAG_NO_COUNTER	0x4
+#define AVF_AQC_VEB_ERR_FLAG_NO_ENTRY		0x8
+	__le16	statistic_index;
+	__le16	vebs_used;
+	__le16	vebs_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_veb_completion);
+
+/* Get VEB Parameters (direct 0x0232)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+struct avf_aqc_get_veb_parameters_completion {
+	__le16	seid;
+	__le16	switch_id;
+	__le16	veb_flags; /* only the first/last flags from 0x0230 are valid */
+	__le16	statistic_index;
+	__le16	vebs_used;
+	__le16	vebs_free;
+	u8	reserved[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_veb_parameters_completion);
+
+/* Delete Element (direct 0x0243)
+ * uses the generic avf_aqc_switch_seid
+ */
+
+/* Add MAC-VLAN (indirect 0x0250) */
+
+/* used as the command descriptor for most VLAN commands */
+struct avf_aqc_macvlan {
+	__le16	num_addresses;
+	__le16	seid[3];
+#define AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_MACVLAN_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT)
+#define AVF_AQC_MACVLAN_CMD_SEID_VALID		0x8000
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_macvlan);
+
+/* indirect data for command and response */
+struct avf_aqc_add_macvlan_element_data {
+	u8	mac_addr[6];
+	__le16	vlan_tag;
+	__le16	flags;
+#define AVF_AQC_MACVLAN_ADD_PERFECT_MATCH	0x0001
+#define AVF_AQC_MACVLAN_ADD_HASH_MATCH		0x0002
+#define AVF_AQC_MACVLAN_ADD_IGNORE_VLAN	0x0004
+#define AVF_AQC_MACVLAN_ADD_TO_QUEUE		0x0008
+#define AVF_AQC_MACVLAN_ADD_USE_SHARED_MAC	0x0010
+	__le16	queue_number;
+#define AVF_AQC_MACVLAN_CMD_QUEUE_SHIFT	0
+#define AVF_AQC_MACVLAN_CMD_QUEUE_MASK		(0x7FF << \
+					AVF_AQC_MACVLAN_CMD_QUEUE_SHIFT)
+	/* response section */
+	u8	match_method;
+#define AVF_AQC_MM_PERFECT_MATCH	0x01
+#define AVF_AQC_MM_HASH_MATCH		0x02
+#define AVF_AQC_MM_ERR_NO_RES		0xFF
+	u8	reserved1[3];
+};
+
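+/* Illustrative sketch, not part of the AVF base code or spec: shows how one
+ * element of the Add MAC-VLAN indirect buffer could be filled to match on
+ * MAC address only (VLAN ignored).  The helper name is made up for the
+ * example; CPU_TO_LE16() is assumed to come from avf_osdep.h.
+ */
+static inline void
+avf_example_fill_macvlan_elem(struct avf_aqc_add_macvlan_element_data *e,
+			      const u8 *mac)
+{
+	int i;
+
+	for (i = 0; i < 6; i++)
+		e->mac_addr[i] = mac[i];
+	e->vlan_tag = 0;
+	e->flags = CPU_TO_LE16(AVF_AQC_MACVLAN_ADD_PERFECT_MATCH |
+			       AVF_AQC_MACVLAN_ADD_IGNORE_VLAN);
+	e->queue_number = 0;
+}
+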
+struct avf_aqc_add_remove_macvlan_completion {
+	__le16 perfect_mac_used;
+	__le16 perfect_mac_free;
+	__le16 unicast_hash_free;
+	__le16 multicast_hash_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_macvlan_completion);
+
+/* Remove MAC-VLAN (indirect 0x0251)
+ * uses avf_aqc_macvlan for the descriptor
+ * data points to an array of num_addresses elements
+ */
+
+struct avf_aqc_remove_macvlan_element_data {
+	u8	mac_addr[6];
+	__le16	vlan_tag;
+	u8	flags;
+#define AVF_AQC_MACVLAN_DEL_PERFECT_MATCH	0x01
+#define AVF_AQC_MACVLAN_DEL_HASH_MATCH		0x02
+#define AVF_AQC_MACVLAN_DEL_IGNORE_VLAN	0x08
+#define AVF_AQC_MACVLAN_DEL_ALL_VSIS		0x10
+	u8	reserved[3];
+	/* reply section */
+	u8	error_code;
+#define AVF_AQC_REMOVE_MACVLAN_SUCCESS		0x0
+#define AVF_AQC_REMOVE_MACVLAN_FAIL		0xFF
+	u8	reply_reserved[3];
+};
+
+/* Add VLAN (indirect 0x0252)
+ * Remove VLAN (indirect 0x0253)
+ * use the generic avf_aqc_macvlan for the command
+ */
+struct avf_aqc_add_remove_vlan_element_data {
+	__le16	vlan_tag;
+	u8	vlan_flags;
+/* flags for add VLAN */
+#define AVF_AQC_ADD_VLAN_LOCAL			0x1
+#define AVF_AQC_ADD_PVLAN_TYPE_SHIFT		1
+#define AVF_AQC_ADD_PVLAN_TYPE_MASK	(0x3 << AVF_AQC_ADD_PVLAN_TYPE_SHIFT)
+#define AVF_AQC_ADD_PVLAN_TYPE_REGULAR		0x0
+#define AVF_AQC_ADD_PVLAN_TYPE_PRIMARY		0x2
+#define AVF_AQC_ADD_PVLAN_TYPE_SECONDARY	0x4
+#define AVF_AQC_VLAN_PTYPE_SHIFT		3
+#define AVF_AQC_VLAN_PTYPE_MASK	(0x3 << AVF_AQC_VLAN_PTYPE_SHIFT)
+#define AVF_AQC_VLAN_PTYPE_REGULAR_VSI		0x0
+#define AVF_AQC_VLAN_PTYPE_PROMISC_VSI		0x8
+#define AVF_AQC_VLAN_PTYPE_COMMUNITY_VSI	0x10
+#define AVF_AQC_VLAN_PTYPE_ISOLATED_VSI	0x18
+/* flags for remove VLAN */
+#define AVF_AQC_REMOVE_VLAN_ALL	0x1
+	u8	reserved;
+	u8	result;
+/* flags for add VLAN */
+#define AVF_AQC_ADD_VLAN_SUCCESS	0x0
+#define AVF_AQC_ADD_VLAN_FAIL_REQUEST	0xFE
+#define AVF_AQC_ADD_VLAN_FAIL_RESOURCE	0xFF
+/* flags for remove VLAN */
+#define AVF_AQC_REMOVE_VLAN_SUCCESS	0x0
+#define AVF_AQC_REMOVE_VLAN_FAIL	0xFF
+	u8	reserved1[3];
+};
+
+struct avf_aqc_add_remove_vlan_completion {
+	u8	reserved[4];
+	__le16	vlans_used;
+	__le16	vlans_free;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+/* Set VSI Promiscuous Modes (direct 0x0254) */
+struct avf_aqc_set_vsi_promiscuous_modes {
+	__le16	promiscuous_flags;
+	__le16	valid_flags;
+/* flags used for both fields above */
+#define AVF_AQC_SET_VSI_PROMISC_UNICAST	0x01
+#define AVF_AQC_SET_VSI_PROMISC_MULTICAST	0x02
+#define AVF_AQC_SET_VSI_PROMISC_BROADCAST	0x04
+#define AVF_AQC_SET_VSI_DEFAULT		0x08
+#define AVF_AQC_SET_VSI_PROMISC_VLAN		0x10
+#define AVF_AQC_SET_VSI_PROMISC_TX		0x8000
+	__le16	seid;
+#define AVF_AQC_VSI_PROM_CMD_SEID_MASK		0x3FF
+	__le16	vlan_tag;
+#define AVF_AQC_SET_VSI_VLAN_MASK		0x0FFF
+#define AVF_AQC_SET_VSI_VLAN_VALID		0x8000
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_vsi_promiscuous_modes);
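+
+/* Illustrative sketch, not part of the AVF base code or spec: shows how the
+ * Set VSI Promiscuous Modes descriptor could be filled to enable unicast
+ * and multicast promiscuous mode.  The same flag values are used for both
+ * promiscuous_flags and valid_flags.  CPU_TO_LE16() and u16 are assumed
+ * from avf_osdep.h.
+ */
+static inline void
+avf_example_set_vsi_promisc(struct avf_aqc_set_vsi_promiscuous_modes *cmd,
+			    u16 seid)
+{
+	u16 flags = AVF_AQC_SET_VSI_PROMISC_UNICAST |
+		    AVF_AQC_SET_VSI_PROMISC_MULTICAST;
+
+	cmd->promiscuous_flags = CPU_TO_LE16(flags);
+	cmd->valid_flags = CPU_TO_LE16(flags);
+	cmd->seid = CPU_TO_LE16(seid & AVF_AQC_VSI_PROM_CMD_SEID_MASK);
+}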
+
+/* Add S/E-tag command (direct 0x0255)
+ * Uses generic avf_aqc_add_remove_tag_completion for completion
+ */
+struct avf_aqc_add_tag {
+	__le16	flags;
+#define AVF_AQC_ADD_TAG_FLAG_TO_QUEUE		0x0001
+	__le16	seid;
+#define AVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	tag;
+	__le16	queue_number;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_tag);
+
+struct avf_aqc_add_remove_tag_completion {
+	u8	reserved[12];
+	__le16	tags_used;
+	__le16	tags_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_tag_completion);
+
+/* Remove S/E-tag command (direct 0x0256)
+ * Uses generic avf_aqc_add_remove_tag_completion for completion
+ */
+struct avf_aqc_remove_tag {
+	__le16	seid;
+#define AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	tag;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_tag);
+
+/* Add multicast E-Tag (direct 0x0257)
+ * del multicast E-Tag (direct 0x0258) only uses pv_seid and etag fields
+ * and no external data
+ */
+struct avf_aqc_add_remove_mcast_etag {
+	__le16	pv_seid;
+	__le16	etag;
+	u8	num_unicast_etags;
+	u8	reserved[3];
+	__le32	addr_high;          /* address of array of 2-byte s-tags */
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_mcast_etag);
+
+struct avf_aqc_add_remove_mcast_etag_completion {
+	u8	reserved[4];
+	__le16	mcast_etags_used;
+	__le16	mcast_etags_free;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_mcast_etag_completion);
+
+/* Update S/E-Tag (direct 0x0259) */
+struct avf_aqc_update_tag {
+	__le16	seid;
+#define AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	old_tag;
+	__le16	new_tag;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_update_tag);
+
+struct avf_aqc_update_tag_completion {
+	u8	reserved[12];
+	__le16	tags_used;
+	__le16	tags_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_update_tag_completion);
+
+/* Add Control Packet filter (direct 0x025A)
+ * Remove Control Packet filter (direct 0x025B)
+ * uses the avf_aqc_add_oveb_cloud,
+ * and the generic direct completion structure
+ */
+struct avf_aqc_add_remove_control_packet_filter {
+	u8	mac[6];
+	__le16	etype;
+	__le16	flags;
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC	0x0001
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_DROP		0x0002
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_TO_QUEUE	0x0004
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_TX		0x0008
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_RX		0x0000
+	__le16	seid;
+#define AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_MASK	(0x3FF << \
+				AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT)
+	__le16	queue;
+	u8	reserved[2];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_control_packet_filter);
+
+struct avf_aqc_add_remove_control_packet_filter_completion {
+	__le16	mac_etype_used;
+	__le16	etype_used;
+	__le16	mac_etype_free;
+	__le16	etype_free;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_control_packet_filter_completion);
+
+/* Add Cloud filters (indirect 0x025C)
+ * Remove Cloud filters (indirect 0x025D)
+ * uses the avf_aqc_add_remove_cloud_filters,
+ * and the generic indirect completion structure
+ */
+struct avf_aqc_add_remove_cloud_filters {
+	u8	num_filters;
+	u8	reserved;
+	__le16	seid;
+#define AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT)
+	u8	big_buffer_flag;
+#define AVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER	1
+	u8	reserved2[3];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_cloud_filters);
+
+struct avf_aqc_add_remove_cloud_filters_element_data {
+	u8	outer_mac[6];
+	u8	inner_mac[6];
+	__le16	inner_vlan;
+	union {
+		struct {
+			u8 reserved[12];
+			u8 data[4];
+		} v4;
+		struct {
+			u8 data[16];
+		} v6;
+	} ipaddr;
+	__le16	flags;
+#define AVF_AQC_ADD_CLOUD_FILTER_SHIFT			0
+#define AVF_AQC_ADD_CLOUD_FILTER_MASK	(0x3F << \
+					AVF_AQC_ADD_CLOUD_FILTER_SHIFT)
+/* 0x0000 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_OIP			0x0001
+/* 0x0002 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN		0x0003
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID	0x0004
+/* 0x0005 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID		0x0006
+/* 0x0007 reserved */
+/* 0x0008 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_OMAC			0x0009
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC			0x000A
+#define AVF_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC	0x000B
+#define AVF_AQC_ADD_CLOUD_FILTER_IIP			0x000C
+/* 0x0010 to 0x0017 are reserved for custom filters */
+
+#define AVF_AQC_ADD_CLOUD_FLAGS_TO_QUEUE		0x0080
+#define AVF_AQC_ADD_CLOUD_VNK_SHIFT			6
+#define AVF_AQC_ADD_CLOUD_VNK_MASK			0x00C0
+#define AVF_AQC_ADD_CLOUD_FLAGS_IPV4			0
+#define AVF_AQC_ADD_CLOUD_FLAGS_IPV6			0x0100
+
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_SHIFT		9
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_MASK		0x1E00
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN		0
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_NVGRE_OMAC		1
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_GENEVE		2
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_IP			3
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_RESERVED		4
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN_GPE		5
+
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_MAC	0x2000
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_INNER_MAC	0x4000
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_IP	0x8000
+
+	__le32	tenant_id;
+	u8	reserved[4];
+	__le16	queue_number;
+#define AVF_AQC_ADD_CLOUD_QUEUE_SHIFT		0
+#define AVF_AQC_ADD_CLOUD_QUEUE_MASK		(0x7FF << \
+						 AVF_AQC_ADD_CLOUD_QUEUE_SHIFT)
+	u8	reserved2[14];
+	/* response section */
+	u8	allocation_result;
+#define AVF_AQC_ADD_CLOUD_FILTER_SUCCESS	0x0
+#define AVF_AQC_ADD_CLOUD_FILTER_FAIL		0xFF
+	u8	response_reserved[7];
+};
+
+/* avf_aqc_add_rm_cloud_filt_elem_ext is used when
+ * AVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER flag is set. refer to
+ * DCR288
+ */
+struct avf_aqc_add_rm_cloud_filt_elem_ext {
+	struct avf_aqc_add_remove_cloud_filters_element_data element;
+	u16     general_fields[32];
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0	0
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1	1
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2	2
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0	3
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1	4
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2	5
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0	6
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1	7
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2	8
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0	9
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1	10
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2	11
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD0	12
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD1	13
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD2	14
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0	15
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD1	16
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD2	17
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD3	18
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD4	19
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD5	20
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD6	21
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD7	22
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD0	23
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD1	24
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD2	25
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD3	26
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD4	27
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD5	28
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD6	29
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD7	30
+};
+
+struct avf_aqc_remove_cloud_filters_completion {
+	__le16 perfect_ovlan_used;
+	__le16 perfect_ovlan_free;
+	__le16 vlan_used;
+	__le16 vlan_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_cloud_filters_completion);
+
+/* Replace filter Command 0x025F
+ * uses the avf_aqc_replace_cloud_filters,
+ * and the generic indirect completion structure
+ */
+struct avf_filter_data {
+	u8 filter_type;
+	u8 input[3];
+};
+
+struct avf_aqc_replace_cloud_filters_cmd {
+	u8	valid_flags;
+#define AVF_AQC_REPLACE_L1_FILTER		0x0
+#define AVF_AQC_REPLACE_CLOUD_FILTER		0x1
+#define AVF_AQC_GET_CLOUD_FILTERS		0x2
+#define AVF_AQC_MIRROR_CLOUD_FILTER		0x4
+#define AVF_AQC_HIGH_PRIORITY_CLOUD_FILTER	0x8
+	u8	old_filter_type;
+	u8	new_filter_type;
+	u8	tr_bit;
+	u8	reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct avf_aqc_replace_cloud_filters_cmd_buf {
+	u8	data[32];
+/* Filter type INPUT codes*/
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_ENTRIES_MAX	3
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_VALIDATED	(1 << 7UL)
+
+/* Field Vector offsets */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_MAC_DA		0
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_ETH		6
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG		7
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_VLAN		8
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_OVLAN		9
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_IVLAN		10
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_TUNNLE_KEY		11
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IMAC		12
+/* big FLU */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IP_DA		14
+/* big FLU */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_OIP_DA		15
+
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_INNER_VLAN		37
+	struct avf_filter_data	filters[8];
+};
+
+/* Add Mirror Rule (indirect or direct 0x0260)
+ * Delete Mirror Rule (indirect or direct 0x0261)
+ * note: some rule types (4,5) do not use an external buffer.
+ *       take care to set the flags correctly.
+ */
+struct avf_aqc_add_delete_mirror_rule {
+	__le16 seid;
+	__le16 rule_type;
+#define AVF_AQC_MIRROR_RULE_TYPE_SHIFT		0
+#define AVF_AQC_MIRROR_RULE_TYPE_MASK		(0x7 << \
+						AVF_AQC_MIRROR_RULE_TYPE_SHIFT)
+#define AVF_AQC_MIRROR_RULE_TYPE_VPORT_INGRESS	1
+#define AVF_AQC_MIRROR_RULE_TYPE_VPORT_EGRESS	2
+#define AVF_AQC_MIRROR_RULE_TYPE_VLAN		3
+#define AVF_AQC_MIRROR_RULE_TYPE_ALL_INGRESS	4
+#define AVF_AQC_MIRROR_RULE_TYPE_ALL_EGRESS	5
+	__le16 num_entries;
+	__le16 destination;  /* VSI for add, rule id for delete */
+	__le32 addr_high;    /* address of array of 2-byte VSI or VLAN ids */
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_delete_mirror_rule);
+
+struct avf_aqc_add_delete_mirror_rule_completion {
+	u8	reserved[2];
+	__le16	rule_id;  /* only used on add */
+	__le16	mirror_rules_used;
+	__le16	mirror_rules_free;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_delete_mirror_rule_completion);
+
+/* Dynamic Device Personalization */
+struct avf_aqc_write_personalization_profile {
+	u8      flags;
+	u8      reserved[3];
+	__le32  profile_track_id;
+	__le32  addr_high;
+	__le32  addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_write_personalization_profile);
+
+struct avf_aqc_write_ddp_resp {
+	__le32 error_offset;
+	__le32 error_info;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct avf_aqc_get_applied_profiles {
+	u8      flags;
+#define AVF_AQC_GET_DDP_GET_CONF	0x1
+#define AVF_AQC_GET_DDP_GET_RDPU_CONF	0x2
+	u8      rsv[3];
+	__le32  reserved;
+	__le32  addr_high;
+	__le32  addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_applied_profiles);
+
+/* DCB 0x03xx*/
+
+/* PFC Ignore (direct 0x0301)
+ *    the command and response use the same descriptor structure
+ */
+struct avf_aqc_pfc_ignore {
+	u8	tc_bitmap;
+	u8	command_flags; /* unused on response */
+#define AVF_AQC_PFC_IGNORE_SET		0x80
+#define AVF_AQC_PFC_IGNORE_CLEAR	0x0
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_pfc_ignore);
+
+/* DCB Update (direct 0x0302) uses the avf_aq_desc structure
+ * with no parameters
+ */
+
+/* TX scheduler 0x04xx */
+
+/* Almost all the indirect commands use
+ * this generic struct to pass the SEID in param0
+ */
+struct avf_aqc_tx_sched_ind {
+	__le16	vsi_seid;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_tx_sched_ind);
+
+/* Several commands respond with a set of queue set handles */
+struct avf_aqc_qs_handles_resp {
+	__le16 qs_handles[8];
+};
+
+/* Configure VSI BW limits (direct 0x0400) */
+struct avf_aqc_configure_vsi_bw_limit {
+	__le16	vsi_seid;
+	u8	reserved[2];
+	__le16	credit;
+	u8	reserved1[2];
+	u8	max_credit; /* 0-3, limit = 2^max */
+	u8	reserved2[7];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_configure_vsi_bw_limit);
+
+/* Configure VSI Bandwidth Limit per Traffic Type (indirect 0x0406)
+ *    responds with avf_aqc_qs_handles_resp
+ */
+struct avf_aqc_configure_vsi_ets_sla_bw_data {
+	u8	tc_valid_bits;
+	u8	reserved[15];
+	__le16	tc_bw_credits[8]; /* FW writes back QS handles here */
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved1[28];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_configure_vsi_ets_sla_bw_data);
+
+/* Configure VSI Bandwidth Allocation per Traffic Type (indirect 0x0407)
+ *    responds with avf_aqc_qs_handles_resp
+ */
+struct avf_aqc_configure_vsi_tc_bw_data {
+	u8	tc_valid_bits;
+	u8	reserved[3];
+	u8	tc_bw_credits[8];
+	u8	reserved1[4];
+	__le16	qs_handles[8];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_configure_vsi_tc_bw_data);
+
+/* Query vsi bw configuration (indirect 0x0408) */
+struct avf_aqc_query_vsi_bw_config_resp {
+	u8	tc_valid_bits;
+	u8	tc_suspended_bits;
+	u8	reserved[14];
+	__le16	qs_handles[8];
+	u8	reserved1[4];
+	__le16	port_bw_limit;
+	u8	reserved2[2];
+	u8	max_bw; /* 0-3, limit = 2^max */
+	u8	reserved3[23];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_query_vsi_bw_config_resp);
+
+/* Query VSI Bandwidth Allocation per Traffic Type (indirect 0x040A) */
+struct avf_aqc_query_vsi_ets_sla_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[3];
+	u8	share_credits[8];
+	__le16	credits[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_query_vsi_ets_sla_config_resp);
+
+/* Configure Switching Component Bandwidth Limit (direct 0x0410) */
+struct avf_aqc_configure_switching_comp_bw_limit {
+	__le16	seid;
+	u8	reserved[2];
+	__le16	credit;
+	u8	reserved1[2];
+	u8	max_bw; /* 0-3, limit = 2^max */
+	u8	reserved2[7];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_configure_switching_comp_bw_limit);
+
+/* Enable  Physical Port ETS (indirect 0x0413)
+ * Modify  Physical Port ETS (indirect 0x0414)
+ * Disable Physical Port ETS (indirect 0x0415)
+ */
+struct avf_aqc_configure_switching_comp_ets_data {
+	u8	reserved[4];
+	u8	tc_valid_bits;
+	u8	seepage;
+#define AVF_AQ_ETS_SEEPAGE_EN_MASK	0x1
+	u8	tc_strict_priority_flags;
+	u8	reserved1[17];
+	u8	tc_bw_share_credits[8];
+	u8	reserved2[96];
+};
+
+AVF_CHECK_STRUCT_LEN(0x80, avf_aqc_configure_switching_comp_ets_data);
+
+/* Configure Switching Component Bandwidth Limits per Tc (indirect 0x0416) */
+struct avf_aqc_configure_switching_comp_ets_bw_limit_data {
+	u8	tc_valid_bits;
+	u8	reserved[15];
+	__le16	tc_bw_credit[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved1[28];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40,
+		      avf_aqc_configure_switching_comp_ets_bw_limit_data);
+
+/* Configure Switching Component Bandwidth Allocation per Tc
+ * (indirect 0x0417)
+ */
+struct avf_aqc_configure_switching_comp_bw_config_data {
+	u8	tc_valid_bits;
+	u8	reserved[2];
+	u8	absolute_credits; /* bool */
+	u8	tc_bw_share_credits[8];
+	u8	reserved1[20];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_configure_switching_comp_bw_config_data);
+
+/* Query Switching Component Configuration (indirect 0x0418) */
+struct avf_aqc_query_switching_comp_ets_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[35];
+	__le16	port_bw_limit;
+	u8	reserved1[2];
+	u8	tc_bw_max; /* 0-3, limit = 2^max */
+	u8	reserved2[23];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_query_switching_comp_ets_config_resp);
+
+/* Query PhysicalPort ETS Configuration (indirect 0x0419) */
+struct avf_aqc_query_port_ets_config_resp {
+	u8	reserved[4];
+	u8	tc_valid_bits;
+	u8	reserved1;
+	u8	tc_strict_priority_bits;
+	u8	reserved2;
+	u8	tc_bw_share_credits[8];
+	__le16	tc_bw_limits[8];
+
+	/* 4 bits per tc 0-7, 4th bit reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved3[32];
+};
+
+AVF_CHECK_STRUCT_LEN(0x44, avf_aqc_query_port_ets_config_resp);
+
+/* Query Switching Component Bandwidth Allocation per Traffic Type
+ * (indirect 0x041A)
+ */
+struct avf_aqc_query_switching_comp_bw_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[2];
+	u8	absolute_credits_enable; /* bool */
+	u8	tc_bw_share_credits[8];
+	__le16	tc_bw_limits[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_query_switching_comp_bw_config_resp);
+
+/* Suspend/resume port TX traffic
+ * (direct 0x041B and 0x041C) uses the generic SEID struct
+ */
+
+/* Configure partition BW
+ * (indirect 0x041D)
+ */
+struct avf_aqc_configure_partition_bw_data {
+	__le16	pf_valid_bits;
+	u8	min_bw[16];      /* guaranteed bandwidth */
+	u8	max_bw[16];      /* bandwidth limit */
+};
+
+AVF_CHECK_STRUCT_LEN(0x22, avf_aqc_configure_partition_bw_data);
+
+/* Get and set the active HMC resource profile and status.
+ * (direct 0x0500) and (direct 0x0501)
+ */
+struct avf_aq_get_set_hmc_resource_profile {
+	u8	pm_profile;
+	u8	pe_vf_enabled;
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_get_set_hmc_resource_profile);
+
+enum avf_aq_hmc_profile {
+	/* AVF_HMC_PROFILE_NO_CHANGE	= 0, reserved */
+	AVF_HMC_PROFILE_DEFAULT	= 1,
+	AVF_HMC_PROFILE_FAVOR_VF	= 2,
+	AVF_HMC_PROFILE_EQUAL		= 3,
+};
+
+/* Get PHY Abilities (indirect 0x0600) uses the generic indirect struct */
+
+/* set in param0 for get phy abilities to report qualified modules */
+#define AVF_AQ_PHY_REPORT_QUALIFIED_MODULES	0x0001
+#define AVF_AQ_PHY_REPORT_INITIAL_VALUES	0x0002
+
+enum avf_aq_phy_type {
+	AVF_PHY_TYPE_SGMII			= 0x0,
+	AVF_PHY_TYPE_1000BASE_KX		= 0x1,
+	AVF_PHY_TYPE_10GBASE_KX4		= 0x2,
+	AVF_PHY_TYPE_10GBASE_KR		= 0x3,
+	AVF_PHY_TYPE_40GBASE_KR4		= 0x4,
+	AVF_PHY_TYPE_XAUI			= 0x5,
+	AVF_PHY_TYPE_XFI			= 0x6,
+	AVF_PHY_TYPE_SFI			= 0x7,
+	AVF_PHY_TYPE_XLAUI			= 0x8,
+	AVF_PHY_TYPE_XLPPI			= 0x9,
+	AVF_PHY_TYPE_40GBASE_CR4_CU		= 0xA,
+	AVF_PHY_TYPE_10GBASE_CR1_CU		= 0xB,
+	AVF_PHY_TYPE_10GBASE_AOC		= 0xC,
+	AVF_PHY_TYPE_40GBASE_AOC		= 0xD,
+	AVF_PHY_TYPE_UNRECOGNIZED		= 0xE,
+	AVF_PHY_TYPE_UNSUPPORTED		= 0xF,
+	AVF_PHY_TYPE_100BASE_TX		= 0x11,
+	AVF_PHY_TYPE_1000BASE_T		= 0x12,
+	AVF_PHY_TYPE_10GBASE_T			= 0x13,
+	AVF_PHY_TYPE_10GBASE_SR		= 0x14,
+	AVF_PHY_TYPE_10GBASE_LR		= 0x15,
+	AVF_PHY_TYPE_10GBASE_SFPP_CU		= 0x16,
+	AVF_PHY_TYPE_10GBASE_CR1		= 0x17,
+	AVF_PHY_TYPE_40GBASE_CR4		= 0x18,
+	AVF_PHY_TYPE_40GBASE_SR4		= 0x19,
+	AVF_PHY_TYPE_40GBASE_LR4		= 0x1A,
+	AVF_PHY_TYPE_1000BASE_SX		= 0x1B,
+	AVF_PHY_TYPE_1000BASE_LX		= 0x1C,
+	AVF_PHY_TYPE_1000BASE_T_OPTICAL	= 0x1D,
+	AVF_PHY_TYPE_20GBASE_KR2		= 0x1E,
+	AVF_PHY_TYPE_25GBASE_KR		= 0x1F,
+	AVF_PHY_TYPE_25GBASE_CR		= 0x20,
+	AVF_PHY_TYPE_25GBASE_SR		= 0x21,
+	AVF_PHY_TYPE_25GBASE_LR		= 0x22,
+	AVF_PHY_TYPE_25GBASE_AOC		= 0x23,
+	AVF_PHY_TYPE_25GBASE_ACC		= 0x24,
+	AVF_PHY_TYPE_MAX,
+	AVF_PHY_TYPE_NOT_SUPPORTED_HIGH_TEMP	= 0xFD,
+	AVF_PHY_TYPE_EMPTY			= 0xFE,
+	AVF_PHY_TYPE_DEFAULT			= 0xFF,
+};
+
+#define AVF_LINK_SPEED_100MB_SHIFT	0x1
+#define AVF_LINK_SPEED_1000MB_SHIFT	0x2
+#define AVF_LINK_SPEED_10GB_SHIFT	0x3
+#define AVF_LINK_SPEED_40GB_SHIFT	0x4
+#define AVF_LINK_SPEED_20GB_SHIFT	0x5
+#define AVF_LINK_SPEED_25GB_SHIFT	0x6
+
+enum avf_aq_link_speed {
+	AVF_LINK_SPEED_UNKNOWN	= 0,
+	AVF_LINK_SPEED_100MB	= (1 << AVF_LINK_SPEED_100MB_SHIFT),
+	AVF_LINK_SPEED_1GB	= (1 << AVF_LINK_SPEED_1000MB_SHIFT),
+	AVF_LINK_SPEED_10GB	= (1 << AVF_LINK_SPEED_10GB_SHIFT),
+	AVF_LINK_SPEED_40GB	= (1 << AVF_LINK_SPEED_40GB_SHIFT),
+	AVF_LINK_SPEED_20GB	= (1 << AVF_LINK_SPEED_20GB_SHIFT),
+	AVF_LINK_SPEED_25GB	= (1 << AVF_LINK_SPEED_25GB_SHIFT),
+};
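+
+/* Illustrative sketch, not part of the AVF base code or spec: maps the
+ * avf_aq_link_speed bit reported by firmware to a speed in Mbps.  The u32
+ * type is assumed to come from avf_osdep.h.
+ */
+static inline u32
+avf_example_link_speed_to_mbps(enum avf_aq_link_speed speed)
+{
+	switch (speed) {
+	case AVF_LINK_SPEED_100MB:
+		return 100;
+	case AVF_LINK_SPEED_1GB:
+		return 1000;
+	case AVF_LINK_SPEED_10GB:
+		return 10000;
+	case AVF_LINK_SPEED_20GB:
+		return 20000;
+	case AVF_LINK_SPEED_25GB:
+		return 25000;
+	case AVF_LINK_SPEED_40GB:
+		return 40000;
+	default:
+		return 0;
+	}
+}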
+
+struct avf_aqc_module_desc {
+	u8 oui[3];
+	u8 reserved1;
+	u8 part_number[16];
+	u8 revision[4];
+	u8 reserved2[8];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_module_desc);
+
+struct avf_aq_get_phy_abilities_resp {
+	__le32	phy_type;       /* bitmap using the above enum for offsets */
+	u8	link_speed;     /* bitmap using the above enum bit patterns */
+	u8	abilities;
+#define AVF_AQ_PHY_FLAG_PAUSE_TX	0x01
+#define AVF_AQ_PHY_FLAG_PAUSE_RX	0x02
+#define AVF_AQ_PHY_FLAG_LOW_POWER	0x04
+#define AVF_AQ_PHY_LINK_ENABLED	0x08
+#define AVF_AQ_PHY_AN_ENABLED		0x10
+#define AVF_AQ_PHY_FLAG_MODULE_QUAL	0x20
+#define AVF_AQ_PHY_FEC_ABILITY_KR	0x40
+#define AVF_AQ_PHY_FEC_ABILITY_RS	0x80
+	__le16	eee_capability;
+#define AVF_AQ_EEE_100BASE_TX		0x0002
+#define AVF_AQ_EEE_1000BASE_T		0x0004
+#define AVF_AQ_EEE_10GBASE_T		0x0008
+#define AVF_AQ_EEE_1000BASE_KX		0x0010
+#define AVF_AQ_EEE_10GBASE_KX4		0x0020
+#define AVF_AQ_EEE_10GBASE_KR		0x0040
+	__le32	eeer_val;
+	u8	d3_lpan;
+#define AVF_AQ_SET_PHY_D3_LPAN_ENA	0x01
+	u8	phy_type_ext;
+#define AVF_AQ_PHY_TYPE_EXT_25G_KR	0x01
+#define AVF_AQ_PHY_TYPE_EXT_25G_CR	0x02
+#define AVF_AQ_PHY_TYPE_EXT_25G_SR	0x04
+#define AVF_AQ_PHY_TYPE_EXT_25G_LR	0x08
+#define AVF_AQ_PHY_TYPE_EXT_25G_AOC	0x10
+#define AVF_AQ_PHY_TYPE_EXT_25G_ACC	0x20
+	u8	fec_cfg_curr_mod_ext_info;
+#define AVF_AQ_ENABLE_FEC_KR		0x01
+#define AVF_AQ_ENABLE_FEC_RS		0x02
+#define AVF_AQ_REQUEST_FEC_KR		0x04
+#define AVF_AQ_REQUEST_FEC_RS		0x08
+#define AVF_AQ_ENABLE_FEC_AUTO		0x10
+#define AVF_AQ_MODULE_TYPE_EXT_MASK	0xE0
+#define AVF_AQ_MODULE_TYPE_EXT_SHIFT	5
+
+	u8	ext_comp_code;
+	u8	phy_id[4];
+	u8	module_type[3];
+	u8	qualified_module_count;
+#define AVF_AQ_PHY_MAX_QMS		16
+	struct avf_aqc_module_desc	qualified_module[AVF_AQ_PHY_MAX_QMS];
+};
+
+AVF_CHECK_STRUCT_LEN(0x218, avf_aq_get_phy_abilities_resp);
+
+/* Set PHY Config (direct 0x0601) */
+struct avf_aq_set_phy_config { /* same bits as above in all */
+	__le32	phy_type;
+	u8	link_speed;
+	u8	abilities;
+/* bits 0-2 use the values from get_phy_abilities_resp */
+#define AVF_AQ_PHY_ENABLE_LINK		0x08
+#define AVF_AQ_PHY_ENABLE_AN		0x10
+#define AVF_AQ_PHY_ENABLE_ATOMIC_LINK	0x20
+	__le16	eee_capability;
+	__le32	eeer;
+	u8	low_power_ctrl;
+	u8	phy_type_ext;
+	u8	fec_config;
+#define AVF_AQ_SET_FEC_ABILITY_KR	BIT(0)
+#define AVF_AQ_SET_FEC_ABILITY_RS	BIT(1)
+#define AVF_AQ_SET_FEC_REQUEST_KR	BIT(2)
+#define AVF_AQ_SET_FEC_REQUEST_RS	BIT(3)
+#define AVF_AQ_SET_FEC_AUTO		BIT(4)
+#define AVF_AQ_PHY_FEC_CONFIG_SHIFT	0x0
+#define AVF_AQ_PHY_FEC_CONFIG_MASK	(0x1F << AVF_AQ_PHY_FEC_CONFIG_SHIFT)
+	u8	reserved;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_set_phy_config);
+
+/* Set MAC Config command data structure (direct 0x0603) */
+struct avf_aq_set_mac_config {
+	__le16	max_frame_size;
+	u8	params;
+#define AVF_AQ_SET_MAC_CONFIG_CRC_EN		0x04
+#define AVF_AQ_SET_MAC_CONFIG_PACING_MASK	0x78
+#define AVF_AQ_SET_MAC_CONFIG_PACING_SHIFT	3
+#define AVF_AQ_SET_MAC_CONFIG_PACING_NONE	0x0
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1B_13TX	0xF
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_9TX	0x9
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_4TX	0x8
+#define AVF_AQ_SET_MAC_CONFIG_PACING_3DW_7TX	0x7
+#define AVF_AQ_SET_MAC_CONFIG_PACING_2DW_3TX	0x6
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_1TX	0x5
+#define AVF_AQ_SET_MAC_CONFIG_PACING_3DW_2TX	0x4
+#define AVF_AQ_SET_MAC_CONFIG_PACING_7DW_3TX	0x3
+#define AVF_AQ_SET_MAC_CONFIG_PACING_4DW_1TX	0x2
+#define AVF_AQ_SET_MAC_CONFIG_PACING_9DW_1TX	0x1
+	u8	tx_timer_priority; /* bitmap */
+	__le16	tx_timer_value;
+	__le16	fc_refresh_threshold;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_set_mac_config);
+
+/* Restart Auto-Negotiation (direct 0x605) */
+struct avf_aqc_set_link_restart_an {
+	u8	command;
+#define AVF_AQ_PHY_RESTART_AN	0x02
+#define AVF_AQ_PHY_LINK_ENABLE	0x04
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_link_restart_an);
+
+/* Get Link Status cmd & response data structure (direct 0x0607) */
+struct avf_aqc_get_link_status {
+	__le16	command_flags; /* only field set on command */
+#define AVF_AQ_LSE_MASK		0x3
+#define AVF_AQ_LSE_NOP			0x0
+#define AVF_AQ_LSE_DISABLE		0x2
+#define AVF_AQ_LSE_ENABLE		0x3
+/* only response uses this flag */
+#define AVF_AQ_LSE_IS_ENABLED		0x1
+	u8	phy_type;    /* avf_aq_phy_type   */
+	u8	link_speed;  /* avf_aq_link_speed */
+	u8	link_info;
+#define AVF_AQ_LINK_UP			0x01    /* obsolete */
+#define AVF_AQ_LINK_UP_FUNCTION	0x01
+#define AVF_AQ_LINK_FAULT		0x02
+#define AVF_AQ_LINK_FAULT_TX		0x04
+#define AVF_AQ_LINK_FAULT_RX		0x08
+#define AVF_AQ_LINK_FAULT_REMOTE	0x10
+#define AVF_AQ_LINK_UP_PORT		0x20
+#define AVF_AQ_MEDIA_AVAILABLE		0x40
+#define AVF_AQ_SIGNAL_DETECT		0x80
+	u8	an_info;
+#define AVF_AQ_AN_COMPLETED		0x01
+#define AVF_AQ_LP_AN_ABILITY		0x02
+#define AVF_AQ_PD_FAULT		0x04
+#define AVF_AQ_FEC_EN			0x08
+#define AVF_AQ_PHY_LOW_POWER		0x10
+#define AVF_AQ_LINK_PAUSE_TX		0x20
+#define AVF_AQ_LINK_PAUSE_RX		0x40
+#define AVF_AQ_QUALIFIED_MODULE	0x80
+	u8	ext_info;
+#define AVF_AQ_LINK_PHY_TEMP_ALARM	0x01
+#define AVF_AQ_LINK_XCESSIVE_ERRORS	0x02
+#define AVF_AQ_LINK_TX_SHIFT		0x02
+#define AVF_AQ_LINK_TX_MASK		(0x03 << AVF_AQ_LINK_TX_SHIFT)
+#define AVF_AQ_LINK_TX_ACTIVE		0x00
+#define AVF_AQ_LINK_TX_DRAINED		0x01
+#define AVF_AQ_LINK_TX_FLUSHED		0x03
+#define AVF_AQ_LINK_FORCED_40G		0x10
+/* 25G Error Codes */
+#define AVF_AQ_25G_NO_ERR		0X00
+#define AVF_AQ_25G_NOT_PRESENT		0X01
+#define AVF_AQ_25G_NVM_CRC_ERR		0X02
+#define AVF_AQ_25G_SBUS_UCODE_ERR	0X03
+#define AVF_AQ_25G_SERDES_UCODE_ERR	0X04
+#define AVF_AQ_25G_NIMB_UCODE_ERR	0X05
+	u8	loopback; /* use defines from avf_aqc_set_lb_mode */
+/* Since firmware API 1.7 the loopback field also carries power class info */
+#define AVF_AQ_LOOPBACK_MASK		0x07
+#define AVF_AQ_PWR_CLASS_SHIFT_LB	6
+#define AVF_AQ_PWR_CLASS_MASK_LB	(0x03 << AVF_AQ_PWR_CLASS_SHIFT_LB)
+	__le16	max_frame_size;
+	u8	config;
+#define AVF_AQ_CONFIG_FEC_KR_ENA	0x01
+#define AVF_AQ_CONFIG_FEC_RS_ENA	0x02
+#define AVF_AQ_CONFIG_CRC_ENA		0x04
+#define AVF_AQ_CONFIG_PACING_MASK	0x78
+	union {
+		struct {
+			u8	power_desc;
+#define AVF_AQ_LINK_POWER_CLASS_1	0x00
+#define AVF_AQ_LINK_POWER_CLASS_2	0x01
+#define AVF_AQ_LINK_POWER_CLASS_3	0x02
+#define AVF_AQ_LINK_POWER_CLASS_4	0x03
+#define AVF_AQ_PWR_CLASS_MASK		0x03
+			u8	reserved[4];
+		};
+		struct {
+			u8	link_type[4];
+			u8	link_type_ext;
+		};
+	};
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_link_status);
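+
+/* Illustrative sketch, not part of the AVF base code or spec: checks a few
+ * of the link_info/an_info bits of a Get Link Status response.  Returns
+ * non-zero when the function reports link up with no link fault and a
+ * qualified module.
+ */
+static inline int
+avf_example_link_is_up(struct avf_aqc_get_link_status *resp)
+{
+	if (!(resp->link_info & AVF_AQ_LINK_UP_FUNCTION))
+		return 0;
+	if (resp->link_info & AVF_AQ_LINK_FAULT)
+		return 0;
+	return (resp->an_info & AVF_AQ_QUALIFIED_MODULE) != 0;
+}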
+
+/* Set event mask command (direct 0x613) */
+struct avf_aqc_set_phy_int_mask {
+	u8	reserved[8];
+	__le16	event_mask;
+#define AVF_AQ_EVENT_LINK_UPDOWN	0x0002
+#define AVF_AQ_EVENT_MEDIA_NA		0x0004
+#define AVF_AQ_EVENT_LINK_FAULT	0x0008
+#define AVF_AQ_EVENT_PHY_TEMP_ALARM	0x0010
+#define AVF_AQ_EVENT_EXCESSIVE_ERRORS	0x0020
+#define AVF_AQ_EVENT_SIGNAL_DETECT	0x0040
+#define AVF_AQ_EVENT_AN_COMPLETED	0x0080
+#define AVF_AQ_EVENT_MODULE_QUAL_FAIL	0x0100
+#define AVF_AQ_EVENT_PORT_TX_SUSPENDED	0x0200
+	u8	reserved1[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_phy_int_mask);
+
+/* Get Local AN advt register (direct 0x0614)
+ * Set Local AN advt register (direct 0x0615)
+ * Get Link Partner AN advt register (direct 0x0616)
+ */
+struct avf_aqc_an_advt_reg {
+	__le32	local_an_reg0;
+	__le16	local_an_reg1;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_an_advt_reg);
+
+/* Set Loopback mode (0x0618) */
+struct avf_aqc_set_lb_mode {
+	u8	lb_level;
+#define AVF_AQ_LB_NONE	0
+#define AVF_AQ_LB_MAC	1
+#define AVF_AQ_LB_SERDES	2
+#define AVF_AQ_LB_PHY_INT	3
+#define AVF_AQ_LB_PHY_EXT	4
+#define AVF_AQ_LB_CPVL_PCS	5
+#define AVF_AQ_LB_CPVL_EXT	6
+#define AVF_AQ_LB_PHY_LOCAL	0x01
+#define AVF_AQ_LB_PHY_REMOTE	0x02
+#define AVF_AQ_LB_MAC_LOCAL	0x04
+	u8	lb_type;
+#define AVF_AQ_LB_LOCAL	0
+#define AVF_AQ_LB_FAR	0x01
+	u8	speed;
+#define AVF_AQ_LB_SPEED_NONE	0
+#define AVF_AQ_LB_SPEED_1G	1
+#define AVF_AQ_LB_SPEED_10G	2
+#define AVF_AQ_LB_SPEED_40G	3
+#define AVF_AQ_LB_SPEED_20G	4
+	u8	force_speed;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_lb_mode);
+
+/* Set PHY Debug command (0x0622) */
+struct avf_aqc_set_phy_debug {
+	u8	command_flags;
+#define AVF_AQ_PHY_DEBUG_RESET_INTERNAL	0x02
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT	2
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_MASK	(0x03 << \
+					AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT)
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_NONE	0x00
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_HARD	0x01
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SOFT	0x02
+/* Disable link manageability on a single port */
+#define AVF_AQ_PHY_DEBUG_DISABLE_LINK_FW	0x10
+/* Disable link manageability on all ports needs both bits 4 and 5 */
+#define AVF_AQ_PHY_DEBUG_DISABLE_ALL_LINK_FW	0x20
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_phy_debug);
+
+enum avf_aq_phy_reg_type {
+	AVF_AQC_PHY_REG_INTERNAL	= 0x1,
+	AVF_AQC_PHY_REG_EXERNAL_BASET	= 0x2,
+	AVF_AQC_PHY_REG_EXERNAL_MODULE	= 0x3
+};
+
+/* Run PHY Activity (0x0626) */
+struct avf_aqc_run_phy_activity {
+	__le16  activity_id;
+	u8      flags;
+	u8      reserved1;
+	__le32  control;
+	__le32  data;
+	u8      reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_run_phy_activity);
+
+/* Set PHY Register command (0x0628) */
+/* Get PHY Register command (0x0629) */
+struct avf_aqc_phy_register_access {
+	u8	phy_interface;
+#define AVF_AQ_PHY_REG_ACCESS_INTERNAL	0
+#define AVF_AQ_PHY_REG_ACCESS_EXTERNAL	1
+#define AVF_AQ_PHY_REG_ACCESS_EXTERNAL_MODULE	2
+	u8	dev_addres;
+	u8	reserved1[2];
+	__le32	reg_address;
+	__le32	reg_value;
+	u8	reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_phy_register_access);
+
+/* NVM Read command (indirect 0x0701)
+ * NVM Erase commands (direct 0x0702)
+ * NVM Update commands (indirect 0x0703)
+ */
+struct avf_aqc_nvm_update {
+	u8	command_flags;
+#define AVF_AQ_NVM_LAST_CMD			0x01
+#define AVF_AQ_NVM_FLASH_ONLY			0x80
+#define AVF_AQ_NVM_PRESERVATION_FLAGS_SHIFT	1
+#define AVF_AQ_NVM_PRESERVATION_FLAGS_MASK	0x03
+#define AVF_AQ_NVM_PRESERVATION_FLAGS_SELECTED	0x03
+#define AVF_AQ_NVM_PRESERVATION_FLAGS_ALL	0x01
+	u8	module_pointer;
+	__le16	length;
+	__le32	offset;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_update);
+
+/* NVM Config Read (indirect 0x0704) */
+struct avf_aqc_nvm_config_read {
+	__le16	cmd_flags;
+#define AVF_AQ_ANVM_SINGLE_OR_MULTIPLE_FEATURES_MASK	1
+#define AVF_AQ_ANVM_READ_SINGLE_FEATURE		0
+#define AVF_AQ_ANVM_READ_MULTIPLE_FEATURES		1
+	__le16	element_count;
+	__le16	element_id;	/* Feature/field ID */
+	__le16	element_id_msw;	/* MSWord of field ID */
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_config_read);
+
+/* NVM Config Write (indirect 0x0705) */
+struct avf_aqc_nvm_config_write {
+	__le16	cmd_flags;
+	__le16	element_count;
+	u8	reserved[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_config_write);
+
+/* Used for 0x0704 as well as for 0x0705 commands */
+#define AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT		1
+#define AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_MASK \
+				(1 << AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+#define AVF_AQ_ANVM_FEATURE		0
+#define AVF_AQ_ANVM_IMMEDIATE_FIELD	(1 << AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+struct avf_aqc_nvm_config_data_feature {
+	__le16 feature_id;
+#define AVF_AQ_ANVM_FEATURE_OPTION_OEM_ONLY		0x01
+#define AVF_AQ_ANVM_FEATURE_OPTION_DWORD_MAP		0x08
+#define AVF_AQ_ANVM_FEATURE_OPTION_POR_CSR		0x10
+	__le16 feature_options;
+	__le16 feature_selection;
+};
+
+AVF_CHECK_STRUCT_LEN(0x6, avf_aqc_nvm_config_data_feature);
+
+struct avf_aqc_nvm_config_data_immediate_field {
+	__le32 field_id;
+	__le32 field_value;
+	__le16 field_options;
+	__le16 reserved;
+};
+
+AVF_CHECK_STRUCT_LEN(0xc, avf_aqc_nvm_config_data_immediate_field);
+
+/* OEM Post Update (indirect 0x0720)
+ * no command data struct used
+ */
+struct avf_aqc_nvm_oem_post_update {
+#define AVF_AQ_NVM_OEM_POST_UPDATE_EXTERNAL_DATA	0x01
+	u8 sel_data;
+	u8 reserved[7];
+};
+
+AVF_CHECK_STRUCT_LEN(0x8, avf_aqc_nvm_oem_post_update);
+
+struct avf_aqc_nvm_oem_post_update_buffer {
+	u8 str_len;
+	u8 dev_addr;
+	__le16 eeprom_addr;
+	u8 data[36];
+};
+
+AVF_CHECK_STRUCT_LEN(0x28, avf_aqc_nvm_oem_post_update_buffer);
+
+/* Thermal Sensor (indirect 0x0721)
+ *     reads or sets thermal sensor configs and values
+ *     takes a sensor- and command-specific data buffer, not detailed here
+ */
+struct avf_aqc_thermal_sensor {
+	u8 sensor_action;
+#define AVF_AQ_THERMAL_SENSOR_READ_CONFIG	0
+#define AVF_AQ_THERMAL_SENSOR_SET_CONFIG	1
+#define AVF_AQ_THERMAL_SENSOR_READ_TEMP	2
+	u8 reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_thermal_sensor);
+
+/* Send to PF command (indirect 0x0801) id is only used by PF
+ * Send to VF command (indirect 0x0802) id is only used by PF
+ * Send to Peer PF command (indirect 0x0803)
+ */
+struct avf_aqc_pf_vf_message {
+	__le32	id;
+	u8	reserved[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_pf_vf_message);
+
+/* Alternate structure */
+
+/* Direct write (direct 0x0900)
+ * Direct read (direct 0x0902)
+ */
+struct avf_aqc_alternate_write {
+	__le32 address0;
+	__le32 data0;
+	__le32 address1;
+	__le32 data1;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_write);
+
+/* Indirect write (indirect 0x0901)
+ * Indirect read (indirect 0x0903)
+ */
+
+struct avf_aqc_alternate_ind_write {
+	__le32 address;
+	__le32 length;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_ind_write);
+
+/* Done alternate write (direct 0x0904)
+ * uses avf_aq_desc
+ */
+struct avf_aqc_alternate_write_done {
+	__le16	cmd_flags;
+#define AVF_AQ_ALTERNATE_MODE_BIOS_MASK	1
+#define AVF_AQ_ALTERNATE_MODE_BIOS_LEGACY	0
+#define AVF_AQ_ALTERNATE_MODE_BIOS_UEFI	1
+#define AVF_AQ_ALTERNATE_RESET_NEEDED		2
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_write_done);
+
+/* Set OEM mode (direct 0x0905) */
+struct avf_aqc_alternate_set_mode {
+	__le32	mode;
+#define AVF_AQ_ALTERNATE_MODE_NONE	0
+#define AVF_AQ_ALTERNATE_MODE_OEM	1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_set_mode);
+
+/* Clear port Alternate RAM (direct 0x0906) uses avf_aq_desc */
+
+/* async events 0x10xx */
+
+/* Lan Queue Overflow Event (direct, 0x1001) */
+struct avf_aqc_lan_overflow {
+	__le32	prtdcb_rupto;
+	__le32	otx_ctl;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lan_overflow);
+
+/* Get LLDP MIB (indirect 0x0A00) */
+struct avf_aqc_lldp_get_mib {
+	u8	type;
+	u8	reserved1;
+#define AVF_AQ_LLDP_MIB_TYPE_MASK		0x3
+#define AVF_AQ_LLDP_MIB_LOCAL			0x0
+#define AVF_AQ_LLDP_MIB_REMOTE			0x1
+#define AVF_AQ_LLDP_MIB_LOCAL_AND_REMOTE	0x2
+#define AVF_AQ_LLDP_BRIDGE_TYPE_MASK		0xC
+#define AVF_AQ_LLDP_BRIDGE_TYPE_SHIFT		0x2
+#define AVF_AQ_LLDP_BRIDGE_TYPE_NEAREST_BRIDGE	0x0
+#define AVF_AQ_LLDP_BRIDGE_TYPE_NON_TPMR	0x1
+#define AVF_AQ_LLDP_TX_SHIFT			0x4
+#define AVF_AQ_LLDP_TX_MASK			(0x03 << AVF_AQ_LLDP_TX_SHIFT)
+/* TX pause flags use AVF_AQ_LINK_TX_* above */
+	__le16	local_len;
+	__le16	remote_len;
+	u8	reserved2[2];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_get_mib);
+
+/* Configure LLDP MIB Change Event (direct 0x0A01)
+ * also used for the event (with type in the command field)
+ */
+struct avf_aqc_lldp_update_mib {
+	u8	command;
+#define AVF_AQ_LLDP_MIB_UPDATE_ENABLE	0x0
+#define AVF_AQ_LLDP_MIB_UPDATE_DISABLE	0x1
+	u8	reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_update_mib);
+
+/* Add LLDP TLV (indirect 0x0A02)
+ * Delete LLDP TLV (indirect 0x0A04)
+ */
+struct avf_aqc_lldp_add_tlv {
+	u8	type; /* only nearest bridge and non-TPMR from 0x0A00 */
+	u8	reserved1[1];
+	__le16	len;
+	u8	reserved2[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_add_tlv);
+
+/* Update LLDP TLV (indirect 0x0A03) */
+struct avf_aqc_lldp_update_tlv {
+	u8	type; /* only nearest bridge and non-TPMR from 0x0A00 */
+	u8	reserved;
+	__le16	old_len;
+	__le16	new_offset;
+	__le16	new_len;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_update_tlv);
+
+/* Stop LLDP (direct 0x0A05) */
+struct avf_aqc_lldp_stop {
+	u8	command;
+#define AVF_AQ_LLDP_AGENT_STOP		0x0
+#define AVF_AQ_LLDP_AGENT_SHUTDOWN	0x1
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_stop);
+
+/* Start LLDP (direct 0x0A06) */
+
+struct avf_aqc_lldp_start {
+	u8	command;
+#define AVF_AQ_LLDP_AGENT_START	0x1
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_start);
+
+/* Set DCB (direct 0x0303) */
+struct avf_aqc_set_dcb_parameters {
+	u8 command;
+#define AVF_AQ_DCB_SET_AGENT	0x1
+#define AVF_DCB_VALID		0x1
+	u8 valid_flags;
+	u8 reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_dcb_parameters);
+
+/* Get CEE DCBX Oper Config (0x0A07)
+ * uses the generic descriptor struct
+ * returns below as indirect response
+ */
+
+#define AVF_AQC_CEE_APP_FCOE_SHIFT	0x0
+#define AVF_AQC_CEE_APP_FCOE_MASK	(0x7 << AVF_AQC_CEE_APP_FCOE_SHIFT)
+#define AVF_AQC_CEE_APP_ISCSI_SHIFT	0x3
+#define AVF_AQC_CEE_APP_ISCSI_MASK	(0x7 << AVF_AQC_CEE_APP_ISCSI_SHIFT)
+#define AVF_AQC_CEE_APP_FIP_SHIFT	0x8
+#define AVF_AQC_CEE_APP_FIP_MASK	(0x7 << AVF_AQC_CEE_APP_FIP_SHIFT)
+
+#define AVF_AQC_CEE_PG_STATUS_SHIFT	0x0
+#define AVF_AQC_CEE_PG_STATUS_MASK	(0x7 << AVF_AQC_CEE_PG_STATUS_SHIFT)
+#define AVF_AQC_CEE_PFC_STATUS_SHIFT	0x3
+#define AVF_AQC_CEE_PFC_STATUS_MASK	(0x7 << AVF_AQC_CEE_PFC_STATUS_SHIFT)
+#define AVF_AQC_CEE_APP_STATUS_SHIFT	0x8
+#define AVF_AQC_CEE_APP_STATUS_MASK	(0x7 << AVF_AQC_CEE_APP_STATUS_SHIFT)
+#define AVF_AQC_CEE_FCOE_STATUS_SHIFT	0x8
+#define AVF_AQC_CEE_FCOE_STATUS_MASK	(0x7 << AVF_AQC_CEE_FCOE_STATUS_SHIFT)
+#define AVF_AQC_CEE_ISCSI_STATUS_SHIFT	0xB
+#define AVF_AQC_CEE_ISCSI_STATUS_MASK	(0x7 << AVF_AQC_CEE_ISCSI_STATUS_SHIFT)
+#define AVF_AQC_CEE_FIP_STATUS_SHIFT	0x10
+#define AVF_AQC_CEE_FIP_STATUS_MASK	(0x7 << AVF_AQC_CEE_FIP_STATUS_SHIFT)
+
+/* struct avf_aqc_get_cee_dcb_cfg_v1_resp was originally defined with
+ * word boundary layout issues, which the Linux compilers silently deal
+ * with by adding padding, making the actual struct larger than designed.
+ * However, the FW compiler for the NIC is less lenient and complains
+ * about the struct.  Hence, the struct defined here has an extra byte in
+ * fields reserved3 and reserved4 to directly acknowledge that padding,
+ * and the new length is used in the length check macro.
+ */
+struct avf_aqc_get_cee_dcb_cfg_v1_resp {
+	u8	reserved1;
+	u8	oper_num_tc;
+	u8	oper_prio_tc[4];
+	u8	reserved2;
+	u8	oper_tc_bw[8];
+	u8	oper_pfc_en;
+	u8	reserved3[2];
+	__le16	oper_app_prio;
+	u8	reserved4[2];
+	__le16	tlv_status;
+};
+
+AVF_CHECK_STRUCT_LEN(0x18, avf_aqc_get_cee_dcb_cfg_v1_resp);
+
+struct avf_aqc_get_cee_dcb_cfg_resp {
+	u8	oper_num_tc;
+	u8	oper_prio_tc[4];
+	u8	oper_tc_bw[8];
+	u8	oper_pfc_en;
+	__le16	oper_app_prio;
+	__le32	tlv_status;
+	u8	reserved[12];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_get_cee_dcb_cfg_resp);
+
+/*	Set Local LLDP MIB (indirect 0x0A08)
+ *	Used to replace the local MIB of a given LLDP agent. e.g. DCBx
+ */
+struct avf_aqc_lldp_set_local_mib {
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT	0
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_MASK	(1 << \
+					SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_LOCAL_MIB	0x0
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT	(1)
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_MASK	(1 << \
+				SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS		0x1
+	u8	type;
+	u8	reserved0;
+	__le16	length;
+	u8	reserved1[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_set_local_mib);
+
+struct avf_aqc_lldp_set_local_mib_resp {
+#define SET_LOCAL_MIB_RESP_EVENT_TRIGGERED_MASK      0x01
+	u8  status;
+	u8  reserved[15];
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_lldp_set_local_mib_resp);
+
+/*	Stop/Start LLDP Agent (direct 0x0A09)
+ *	Used for stopping/starting a specific LLDP agent, e.g. DCBX
+ */
+struct avf_aqc_lldp_stop_start_specific_agent {
+#define AVF_AQC_START_SPECIFIC_AGENT_SHIFT	0
+#define AVF_AQC_START_SPECIFIC_AGENT_MASK \
+				(1 << AVF_AQC_START_SPECIFIC_AGENT_SHIFT)
+	u8	command;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_stop_start_specific_agent);
+
+/* Add Udp Tunnel command and completion (direct 0x0B00) */
+struct avf_aqc_add_udp_tunnel {
+	__le16	udp_port;
+	u8	reserved0[3];
+	u8	protocol_type;
+#define AVF_AQC_TUNNEL_TYPE_VXLAN	0x00
+#define AVF_AQC_TUNNEL_TYPE_NGE	0x01
+#define AVF_AQC_TUNNEL_TYPE_TEREDO	0x10
+#define AVF_AQC_TUNNEL_TYPE_VXLAN_GPE	0x11
+	u8	reserved1[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_udp_tunnel);
+
+struct avf_aqc_add_udp_tunnel_completion {
+	__le16	udp_port;
+	u8	filter_entry_index;
+	u8	multiple_pfs;
+#define AVF_AQC_SINGLE_PF		0x0
+#define AVF_AQC_MULTIPLE_PFS		0x1
+	u8	total_filters;
+	u8	reserved[11];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_udp_tunnel_completion);
+
+/* remove UDP Tunnel command (0x0B01) */
+struct avf_aqc_remove_udp_tunnel {
+	u8	reserved[2];
+	u8	index; /* 0 to 15 */
+	u8	reserved2[13];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_udp_tunnel);
+
+struct avf_aqc_del_udp_tunnel_completion {
+	__le16	udp_port;
+	u8	index; /* 0 to 15 */
+	u8	multiple_pfs;
+	u8	total_filters_used;
+	u8	reserved1[11];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_del_udp_tunnel_completion);
+
+struct avf_aqc_get_set_rss_key {
+#define AVF_AQC_SET_RSS_KEY_VSI_VALID		(0x1 << 15)
+#define AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT	0
+#define AVF_AQC_SET_RSS_KEY_VSI_ID_MASK	(0x3FF << \
+					AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT)
+	__le16	vsi_id;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_set_rss_key);
+
+struct avf_aqc_get_set_rss_key_data {
+	u8 standard_rss_key[0x28];
+	u8 extended_hash_key[0xc];
+};
+
+AVF_CHECK_STRUCT_LEN(0x34, avf_aqc_get_set_rss_key_data);
+
+struct  avf_aqc_get_set_rss_lut {
+#define AVF_AQC_SET_RSS_LUT_VSI_VALID		(0x1 << 15)
+#define AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT	0
+#define AVF_AQC_SET_RSS_LUT_VSI_ID_MASK	(0x3FF << \
+					AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT)
+	__le16	vsi_id;
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT	0
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK	(0x1 << \
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
+
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI	0
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF	1
+	__le16	flags;
+	u8	reserved[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_set_rss_lut);
+
+/* tunnel key structure 0x0B10 */
+
+struct avf_aqc_tunnel_key_structure {
+	u8	key1_off;
+	u8	key2_off;
+	u8	key1_len;  /* 0 to 15 */
+	u8	key2_len;  /* 0 to 15 */
+	u8	flags;
+#define AVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDE	0x01
+/* response flags */
+#define AVF_AQC_TUNNEL_KEY_STRUCT_SUCCESS	0x01
+#define AVF_AQC_TUNNEL_KEY_STRUCT_MODIFIED	0x02
+#define AVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDDEN	0x03
+	u8	network_key_index;
+#define AVF_AQC_NETWORK_KEY_INDEX_VXLAN		0x0
+#define AVF_AQC_NETWORK_KEY_INDEX_NGE			0x1
+#define AVF_AQC_NETWORK_KEY_INDEX_FLEX_MAC_IN_UDP	0x2
+#define AVF_AQC_NETWORK_KEY_INDEX_GRE			0x3
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_tunnel_key_structure);
+
+/* OEM mode commands (direct 0xFE0x) */
+struct avf_aqc_oem_param_change {
+	__le32	param_type;
+#define AVF_AQ_OEM_PARAM_TYPE_PF_CTL	0
+#define AVF_AQ_OEM_PARAM_TYPE_BW_CTL	1
+#define AVF_AQ_OEM_PARAM_MAC		2
+	__le32	param_value1;
+	__le16	param_value2;
+	u8	reserved[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_oem_param_change);
+
+struct avf_aqc_oem_state_change {
+	__le32	state;
+#define AVF_AQ_OEM_STATE_LINK_DOWN	0x0
+#define AVF_AQ_OEM_STATE_LINK_UP	0x1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_oem_state_change);
+
+/* Initialize OCSD (0xFE02, direct) */
+struct avf_aqc_opc_oem_ocsd_initialize {
+	u8 type_status;
+	u8 reserved1[3];
+	__le32 ocsd_memory_block_addr_high;
+	__le32 ocsd_memory_block_addr_low;
+	__le32 requested_update_interval;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_opc_oem_ocsd_initialize);
+
+/* Initialize OCBB  (0xFE03, direct) */
+struct avf_aqc_opc_oem_ocbb_initialize {
+	u8 type_status;
+	u8 reserved1[3];
+	__le32 ocbb_memory_block_addr_high;
+	__le32 ocbb_memory_block_addr_low;
+	u8 reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_opc_oem_ocbb_initialize);
+
+/* debug commands */
+
+/* get device id (0xFF00) uses the generic structure */
+
+/* set test mode (0xFF01, internal) */
+
+struct avf_acq_set_test_mode {
+	u8	mode;
+#define AVF_AQ_TEST_PARTIAL	0
+#define AVF_AQ_TEST_FULL	1
+#define AVF_AQ_TEST_NVM	2
+	u8	reserved[3];
+	u8	command;
+#define AVF_AQ_TEST_OPEN	0
+#define AVF_AQ_TEST_CLOSE	1
+#define AVF_AQ_TEST_INC	2
+	u8	reserved2[3];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_acq_set_test_mode);
+
+/* Debug Read Register command (0xFF03)
+ * Debug Write Register command (0xFF04)
+ */
+struct avf_aqc_debug_reg_read_write {
+	__le32 reserved;
+	__le32 address;
+	__le32 value_high;
+	__le32 value_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_reg_read_write);
+
+/* Scatter/gather Reg Read  (indirect 0xFF05)
+ * Scatter/gather Reg Write (indirect 0xFF06)
+ */
+
+/* avf_aq_desc is used for the command */
+struct avf_aqc_debug_reg_sg_element_data {
+	__le32 address;
+	__le32 value;
+};
+
+/* Debug Modify register (direct 0xFF07) */
+struct avf_aqc_debug_modify_reg {
+	__le32 address;
+	__le32 value;
+	__le32 clear_mask;
+	__le32 set_mask;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_modify_reg);
+
+/* dump internal data (0xFF08, indirect) */
+
+#define AVF_AQ_CLUSTER_ID_AUX		0
+#define AVF_AQ_CLUSTER_ID_SWITCH_FLU	1
+#define AVF_AQ_CLUSTER_ID_TXSCHED	2
+#define AVF_AQ_CLUSTER_ID_HMC		3
+#define AVF_AQ_CLUSTER_ID_MAC0		4
+#define AVF_AQ_CLUSTER_ID_MAC1		5
+#define AVF_AQ_CLUSTER_ID_MAC2		6
+#define AVF_AQ_CLUSTER_ID_MAC3		7
+#define AVF_AQ_CLUSTER_ID_DCB		8
+#define AVF_AQ_CLUSTER_ID_EMP_MEM	9
+#define AVF_AQ_CLUSTER_ID_PKT_BUF	10
+#define AVF_AQ_CLUSTER_ID_ALTRAM	11
+
+struct avf_aqc_debug_dump_internals {
+	u8	cluster_id;
+	u8	table_id;
+	__le16	data_size;
+	__le32	idx;
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_dump_internals);
+
+struct avf_aqc_debug_modify_internals {
+	u8	cluster_id;
+	u8	cluster_specific_params[7];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_modify_internals);
+
+#endif /* _AVF_ADMINQ_CMD_H_ */
diff --git a/drivers/net/avf/base/avf_alloc.h b/drivers/net/avf/base/avf_alloc.h
new file mode 100644
index 0000000..21e29bd
--- /dev/null
+++ b/drivers/net/avf/base/avf_alloc.h
@@ -0,0 +1,65 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ALLOC_H_
+#define _AVF_ALLOC_H_
+
+struct avf_hw;
+
+/* Memory allocation types */
+enum avf_memory_type {
+	avf_mem_arq_buf = 0,		/* ARQ indirect command buffer */
+	avf_mem_asq_buf = 1,
+	avf_mem_atq_buf = 2,		/* ATQ indirect command buffer */
+	avf_mem_arq_ring = 3,		/* ARQ descriptor ring */
+	avf_mem_atq_ring = 4,		/* ATQ descriptor ring */
+	avf_mem_pd = 5,		/* Page Descriptor */
+	avf_mem_bp = 6,		/* Backing Page - 4KB */
+	avf_mem_bp_jumbo = 7,		/* Backing Page - > 4KB */
+	avf_mem_reserved
+};
+
+/* prototype for functions used for dynamic memory allocation */
+enum avf_status_code avf_allocate_dma_mem(struct avf_hw *hw,
+					    struct avf_dma_mem *mem,
+					    enum avf_memory_type type,
+					    u64 size, u32 alignment);
+enum avf_status_code avf_free_dma_mem(struct avf_hw *hw,
+					struct avf_dma_mem *mem);
+enum avf_status_code avf_allocate_virt_mem(struct avf_hw *hw,
+					     struct avf_virt_mem *mem,
+					     u32 size);
+enum avf_status_code avf_free_virt_mem(struct avf_hw *hw,
+					 struct avf_virt_mem *mem);
+
+#endif /* _AVF_ALLOC_H_ */
diff --git a/drivers/net/avf/base/avf_common.c b/drivers/net/avf/base/avf_common.c
new file mode 100644
index 0000000..bbaadad
--- /dev/null
+++ b/drivers/net/avf/base/avf_common.c
@@ -0,0 +1,1845 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "avf_type.h"
+#include "avf_adminq.h"
+#include "avf_prototype.h"
+#include "virtchnl.h"
+
+
+/**
+ * avf_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the mac type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ **/
+enum avf_status_code avf_set_mac_type(struct avf_hw *hw)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+
+	DEBUGFUNC("avf_set_mac_type\n");
+
+	if (hw->vendor_id == AVF_INTEL_VENDOR_ID) {
+		switch (hw->device_id) {
+	/* TODO: undefined device IDs are removed for now; need to work
+	 * out how to remove them from the shared code
+	 */
+		case AVF_DEV_ID_ADAPTIVE_VF:
+			hw->mac.type = AVF_MAC_VF;
+			break;
+		default:
+			hw->mac.type = AVF_MAC_GENERIC;
+			break;
+		}
+	} else {
+		status = AVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	DEBUGOUT2("avf_set_mac_type found mac: %d, returns: %d\n",
+		  hw->mac.type, status);
+	return status;
+}
+
+/**
+ * avf_aq_str - convert AQ err code to a string
+ * @hw: pointer to the HW structure
+ * @aq_err: the AQ error code to convert
+ **/
+const char *avf_aq_str(struct avf_hw *hw, enum avf_admin_queue_err aq_err)
+{
+	switch (aq_err) {
+	case AVF_AQ_RC_OK:
+		return "OK";
+	case AVF_AQ_RC_EPERM:
+		return "AVF_AQ_RC_EPERM";
+	case AVF_AQ_RC_ENOENT:
+		return "AVF_AQ_RC_ENOENT";
+	case AVF_AQ_RC_ESRCH:
+		return "AVF_AQ_RC_ESRCH";
+	case AVF_AQ_RC_EINTR:
+		return "AVF_AQ_RC_EINTR";
+	case AVF_AQ_RC_EIO:
+		return "AVF_AQ_RC_EIO";
+	case AVF_AQ_RC_ENXIO:
+		return "AVF_AQ_RC_ENXIO";
+	case AVF_AQ_RC_E2BIG:
+		return "AVF_AQ_RC_E2BIG";
+	case AVF_AQ_RC_EAGAIN:
+		return "AVF_AQ_RC_EAGAIN";
+	case AVF_AQ_RC_ENOMEM:
+		return "AVF_AQ_RC_ENOMEM";
+	case AVF_AQ_RC_EACCES:
+		return "AVF_AQ_RC_EACCES";
+	case AVF_AQ_RC_EFAULT:
+		return "AVF_AQ_RC_EFAULT";
+	case AVF_AQ_RC_EBUSY:
+		return "AVF_AQ_RC_EBUSY";
+	case AVF_AQ_RC_EEXIST:
+		return "AVF_AQ_RC_EEXIST";
+	case AVF_AQ_RC_EINVAL:
+		return "AVF_AQ_RC_EINVAL";
+	case AVF_AQ_RC_ENOTTY:
+		return "AVF_AQ_RC_ENOTTY";
+	case AVF_AQ_RC_ENOSPC:
+		return "AVF_AQ_RC_ENOSPC";
+	case AVF_AQ_RC_ENOSYS:
+		return "AVF_AQ_RC_ENOSYS";
+	case AVF_AQ_RC_ERANGE:
+		return "AVF_AQ_RC_ERANGE";
+	case AVF_AQ_RC_EFLUSHED:
+		return "AVF_AQ_RC_EFLUSHED";
+	case AVF_AQ_RC_BAD_ADDR:
+		return "AVF_AQ_RC_BAD_ADDR";
+	case AVF_AQ_RC_EMODE:
+		return "AVF_AQ_RC_EMODE";
+	case AVF_AQ_RC_EFBIG:
+		return "AVF_AQ_RC_EFBIG";
+	}
+
+	snprintf(hw->err_str, sizeof(hw->err_str), "%d", aq_err);
+	return hw->err_str;
+}
+
+/**
+ * avf_stat_str - convert status err code to a string
+ * @hw: pointer to the HW structure
+ * @stat_err: the status error code to convert
+ **/
+const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err)
+{
+	switch (stat_err) {
+	case AVF_SUCCESS:
+		return "OK";
+	case AVF_ERR_NVM:
+		return "AVF_ERR_NVM";
+	case AVF_ERR_NVM_CHECKSUM:
+		return "AVF_ERR_NVM_CHECKSUM";
+	case AVF_ERR_PHY:
+		return "AVF_ERR_PHY";
+	case AVF_ERR_CONFIG:
+		return "AVF_ERR_CONFIG";
+	case AVF_ERR_PARAM:
+		return "AVF_ERR_PARAM";
+	case AVF_ERR_MAC_TYPE:
+		return "AVF_ERR_MAC_TYPE";
+	case AVF_ERR_UNKNOWN_PHY:
+		return "AVF_ERR_UNKNOWN_PHY";
+	case AVF_ERR_LINK_SETUP:
+		return "AVF_ERR_LINK_SETUP";
+	case AVF_ERR_ADAPTER_STOPPED:
+		return "AVF_ERR_ADAPTER_STOPPED";
+	case AVF_ERR_INVALID_MAC_ADDR:
+		return "AVF_ERR_INVALID_MAC_ADDR";
+	case AVF_ERR_DEVICE_NOT_SUPPORTED:
+		return "AVF_ERR_DEVICE_NOT_SUPPORTED";
+	case AVF_ERR_MASTER_REQUESTS_PENDING:
+		return "AVF_ERR_MASTER_REQUESTS_PENDING";
+	case AVF_ERR_INVALID_LINK_SETTINGS:
+		return "AVF_ERR_INVALID_LINK_SETTINGS";
+	case AVF_ERR_AUTONEG_NOT_COMPLETE:
+		return "AVF_ERR_AUTONEG_NOT_COMPLETE";
+	case AVF_ERR_RESET_FAILED:
+		return "AVF_ERR_RESET_FAILED";
+	case AVF_ERR_SWFW_SYNC:
+		return "AVF_ERR_SWFW_SYNC";
+	case AVF_ERR_NO_AVAILABLE_VSI:
+		return "AVF_ERR_NO_AVAILABLE_VSI";
+	case AVF_ERR_NO_MEMORY:
+		return "AVF_ERR_NO_MEMORY";
+	case AVF_ERR_BAD_PTR:
+		return "AVF_ERR_BAD_PTR";
+	case AVF_ERR_RING_FULL:
+		return "AVF_ERR_RING_FULL";
+	case AVF_ERR_INVALID_PD_ID:
+		return "AVF_ERR_INVALID_PD_ID";
+	case AVF_ERR_INVALID_QP_ID:
+		return "AVF_ERR_INVALID_QP_ID";
+	case AVF_ERR_INVALID_CQ_ID:
+		return "AVF_ERR_INVALID_CQ_ID";
+	case AVF_ERR_INVALID_CEQ_ID:
+		return "AVF_ERR_INVALID_CEQ_ID";
+	case AVF_ERR_INVALID_AEQ_ID:
+		return "AVF_ERR_INVALID_AEQ_ID";
+	case AVF_ERR_INVALID_SIZE:
+		return "AVF_ERR_INVALID_SIZE";
+	case AVF_ERR_INVALID_ARP_INDEX:
+		return "AVF_ERR_INVALID_ARP_INDEX";
+	case AVF_ERR_INVALID_FPM_FUNC_ID:
+		return "AVF_ERR_INVALID_FPM_FUNC_ID";
+	case AVF_ERR_QP_INVALID_MSG_SIZE:
+		return "AVF_ERR_QP_INVALID_MSG_SIZE";
+	case AVF_ERR_QP_TOOMANY_WRS_POSTED:
+		return "AVF_ERR_QP_TOOMANY_WRS_POSTED";
+	case AVF_ERR_INVALID_FRAG_COUNT:
+		return "AVF_ERR_INVALID_FRAG_COUNT";
+	case AVF_ERR_QUEUE_EMPTY:
+		return "AVF_ERR_QUEUE_EMPTY";
+	case AVF_ERR_INVALID_ALIGNMENT:
+		return "AVF_ERR_INVALID_ALIGNMENT";
+	case AVF_ERR_FLUSHED_QUEUE:
+		return "AVF_ERR_FLUSHED_QUEUE";
+	case AVF_ERR_INVALID_PUSH_PAGE_INDEX:
+		return "AVF_ERR_INVALID_PUSH_PAGE_INDEX";
+	case AVF_ERR_INVALID_IMM_DATA_SIZE:
+		return "AVF_ERR_INVALID_IMM_DATA_SIZE";
+	case AVF_ERR_TIMEOUT:
+		return "AVF_ERR_TIMEOUT";
+	case AVF_ERR_OPCODE_MISMATCH:
+		return "AVF_ERR_OPCODE_MISMATCH";
+	case AVF_ERR_CQP_COMPL_ERROR:
+		return "AVF_ERR_CQP_COMPL_ERROR";
+	case AVF_ERR_INVALID_VF_ID:
+		return "AVF_ERR_INVALID_VF_ID";
+	case AVF_ERR_INVALID_HMCFN_ID:
+		return "AVF_ERR_INVALID_HMCFN_ID";
+	case AVF_ERR_BACKING_PAGE_ERROR:
+		return "AVF_ERR_BACKING_PAGE_ERROR";
+	case AVF_ERR_NO_PBLCHUNKS_AVAILABLE:
+		return "AVF_ERR_NO_PBLCHUNKS_AVAILABLE";
+	case AVF_ERR_INVALID_PBLE_INDEX:
+		return "AVF_ERR_INVALID_PBLE_INDEX";
+	case AVF_ERR_INVALID_SD_INDEX:
+		return "AVF_ERR_INVALID_SD_INDEX";
+	case AVF_ERR_INVALID_PAGE_DESC_INDEX:
+		return "AVF_ERR_INVALID_PAGE_DESC_INDEX";
+	case AVF_ERR_INVALID_SD_TYPE:
+		return "AVF_ERR_INVALID_SD_TYPE";
+	case AVF_ERR_MEMCPY_FAILED:
+		return "AVF_ERR_MEMCPY_FAILED";
+	case AVF_ERR_INVALID_HMC_OBJ_INDEX:
+		return "AVF_ERR_INVALID_HMC_OBJ_INDEX";
+	case AVF_ERR_INVALID_HMC_OBJ_COUNT:
+		return "AVF_ERR_INVALID_HMC_OBJ_COUNT";
+	case AVF_ERR_INVALID_SRQ_ARM_LIMIT:
+		return "AVF_ERR_INVALID_SRQ_ARM_LIMIT";
+	case AVF_ERR_SRQ_ENABLED:
+		return "AVF_ERR_SRQ_ENABLED";
+	case AVF_ERR_ADMIN_QUEUE_ERROR:
+		return "AVF_ERR_ADMIN_QUEUE_ERROR";
+	case AVF_ERR_ADMIN_QUEUE_TIMEOUT:
+		return "AVF_ERR_ADMIN_QUEUE_TIMEOUT";
+	case AVF_ERR_BUF_TOO_SHORT:
+		return "AVF_ERR_BUF_TOO_SHORT";
+	case AVF_ERR_ADMIN_QUEUE_FULL:
+		return "AVF_ERR_ADMIN_QUEUE_FULL";
+	case AVF_ERR_ADMIN_QUEUE_NO_WORK:
+		return "AVF_ERR_ADMIN_QUEUE_NO_WORK";
+	case AVF_ERR_BAD_IWARP_CQE:
+		return "AVF_ERR_BAD_IWARP_CQE";
+	case AVF_ERR_NVM_BLANK_MODE:
+		return "AVF_ERR_NVM_BLANK_MODE";
+	case AVF_ERR_NOT_IMPLEMENTED:
+		return "AVF_ERR_NOT_IMPLEMENTED";
+	case AVF_ERR_PE_DOORBELL_NOT_ENABLED:
+		return "AVF_ERR_PE_DOORBELL_NOT_ENABLED";
+	case AVF_ERR_DIAG_TEST_FAILED:
+		return "AVF_ERR_DIAG_TEST_FAILED";
+	case AVF_ERR_NOT_READY:
+		return "AVF_ERR_NOT_READY";
+	case AVF_NOT_SUPPORTED:
+		return "AVF_NOT_SUPPORTED";
+	case AVF_ERR_FIRMWARE_API_VERSION:
+		return "AVF_ERR_FIRMWARE_API_VERSION";
+	case AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR:
+		return "AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR";
+	}
+
+	snprintf(hw->err_str, sizeof(hw->err_str), "%d", stat_err);
+	return hw->err_str;
+}
+
+/**
+ * avf_debug_aq
+ * @hw: pointer to the hw struct
+ * @mask: debug mask
+ * @desc: pointer to admin queue descriptor
+ * @buffer: pointer to command buffer
+ * @buf_len: max length of buffer
+ *
+ * Dumps debug log about adminq command with descriptor contents.
+ **/
+void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask, void *desc,
+		   void *buffer, u16 buf_len)
+{
+	struct avf_aq_desc *aq_desc = (struct avf_aq_desc *)desc;
+	u8 *buf = (u8 *)buffer;
+	u16 len;
+	u16 i = 0;
+
+	if ((!(mask & hw->debug_mask)) || (desc == NULL))
+		return;
+
+	len = LE16_TO_CPU(aq_desc->datalen);
+
+	avf_debug(hw, mask,
+		   "AQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+		   LE16_TO_CPU(aq_desc->opcode),
+		   LE16_TO_CPU(aq_desc->flags),
+		   LE16_TO_CPU(aq_desc->datalen),
+		   LE16_TO_CPU(aq_desc->retval));
+	avf_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->cookie_high),
+		   LE32_TO_CPU(aq_desc->cookie_low));
+	avf_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->params.internal.param0),
+		   LE32_TO_CPU(aq_desc->params.internal.param1));
+	avf_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->params.external.addr_high),
+		   LE32_TO_CPU(aq_desc->params.external.addr_low));
+
+	if ((buffer != NULL) && (aq_desc->datalen != 0)) {
+		avf_debug(hw, mask, "AQ CMD Buffer:\n");
+		if (buf_len < len)
+			len = buf_len;
+		/* write the full 16-byte chunks */
+		for (i = 0; i < (len - 16); i += 16)
+			avf_debug(hw, mask,
+				   "\t0x%04X  %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
+				   i, buf[i], buf[i+1], buf[i+2], buf[i+3],
+				   buf[i+4], buf[i+5], buf[i+6], buf[i+7],
+				   buf[i+8], buf[i+9], buf[i+10], buf[i+11],
+				   buf[i+12], buf[i+13], buf[i+14], buf[i+15]);
+		/* the most we could have left is 16 bytes, pad with zeros */
+		if (i < len) {
+			char d_buf[16];
+			int j, i_sav;
+
+			i_sav = i;
+			memset(d_buf, 0, sizeof(d_buf));
+			for (j = 0; i < len; j++, i++)
+				d_buf[j] = buf[i];
+			avf_debug(hw, mask,
+				   "\t0x%04X  %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
+				   i_sav, d_buf[0], d_buf[1], d_buf[2], d_buf[3],
+				   d_buf[4], d_buf[5], d_buf[6], d_buf[7],
+				   d_buf[8], d_buf[9], d_buf[10], d_buf[11],
+				   d_buf[12], d_buf[13], d_buf[14], d_buf[15]);
+		}
+	}
+}
+
+/**
+ * avf_check_asq_alive
+ * @hw: pointer to the hw struct
+ *
+ * Returns true if the queue is enabled, else false.
+ **/
+bool avf_check_asq_alive(struct avf_hw *hw)
+{
+	if (hw->aq.asq.len)
+#ifdef INTEGRATED_VF
+		if (avf_is_vf(hw))
+			return !!(rd32(hw, hw->aq.asq.len) &
+				AVF_ATQLEN1_ATQENABLE_MASK);
+#else
+		return !!(rd32(hw, hw->aq.asq.len) &
+			AVF_ATQLEN1_ATQENABLE_MASK);
+#endif /* INTEGRATED_VF */
+	return false;
+}
+
+/**
+ * avf_aq_queue_shutdown
+ * @hw: pointer to the hw struct
+ * @unloading: is the driver unloading itself
+ *
+ * Tell the Firmware that we're shutting down the AdminQ and whether
+ * or not the driver is unloading as well.
+ **/
+enum avf_status_code avf_aq_queue_shutdown(struct avf_hw *hw,
+					     bool unloading)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_queue_shutdown *cmd =
+		(struct avf_aqc_queue_shutdown *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_queue_shutdown);
+
+	if (unloading)
+		cmd->driver_unloading = CPU_TO_LE32(AVF_AQ_DRIVER_UNLOADING);
+	status = avf_asq_send_command(hw, &desc, NULL, 0, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get or set RSS look up table
+ **/
+STATIC enum avf_status_code avf_aq_get_set_rss_lut(struct avf_hw *hw,
+						     u16 vsi_id, bool pf_lut,
+						     u8 *lut, u16 lut_size,
+						     bool set)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_set_rss_lut *cmd_resp =
+		   (struct avf_aqc_get_set_rss_lut *)&desc.params.raw;
+
+	if (set)
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_set_rss_lut);
+	else
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_get_rss_lut);
+
+	/* Indirect command */
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd_resp->vsi_id =
+			CPU_TO_LE16((u16)((vsi_id <<
+					  AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT) &
+					  AVF_AQC_SET_RSS_LUT_VSI_ID_MASK));
+	cmd_resp->vsi_id |= CPU_TO_LE16((u16)AVF_AQC_SET_RSS_LUT_VSI_VALID);
+
+	if (pf_lut)
+		cmd_resp->flags |= CPU_TO_LE16((u16)
+					((AVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF <<
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
+	else
+		cmd_resp->flags |= CPU_TO_LE16((u16)
+					((AVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI <<
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
+
+	status = avf_asq_send_command(hw, &desc, lut, lut_size, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * get the RSS lookup table, PF or VSI type
+ **/
+enum avf_status_code avf_aq_get_rss_lut(struct avf_hw *hw, u16 vsi_id,
+					  bool pf_lut, u8 *lut, u16 lut_size)
+{
+	return avf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size,
+				       false);
+}
+
+/**
+ * avf_aq_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * set the RSS lookup table, PF or VSI type
+ **/
+enum avf_status_code avf_aq_set_rss_lut(struct avf_hw *hw, u16 vsi_id,
+					  bool pf_lut, u8 *lut, u16 lut_size)
+{
+	return avf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
+}
+
+/**
+ * avf_aq_get_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
+ *
+ * Internal function to get or set the RSS key per VSI
+ **/
+STATIC enum avf_status_code avf_aq_get_set_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key,
+				      bool set)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_set_rss_key *cmd_resp =
+			(struct avf_aqc_get_set_rss_key *)&desc.params.raw;
+	u16 key_size = sizeof(struct avf_aqc_get_set_rss_key_data);
+
+	if (set)
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_set_rss_key);
+	else
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_get_rss_key);
+
+	/* Indirect command */
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd_resp->vsi_id =
+			CPU_TO_LE16((u16)((vsi_id <<
+					  AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT) &
+					  AVF_AQC_SET_RSS_KEY_VSI_ID_MASK));
+	cmd_resp->vsi_id |= CPU_TO_LE16((u16)AVF_AQC_SET_RSS_KEY_VSI_VALID);
+
+	status = avf_asq_send_command(hw, &desc, key, key_size, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * get the RSS key per VSI
+ **/
+enum avf_status_code avf_aq_get_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key)
+{
+	return avf_aq_get_set_rss_key(hw, vsi_id, key, false);
+}
+
+/**
+ * avf_aq_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * set the RSS key per VSI
+ **/
+enum avf_status_code avf_aq_set_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key)
+{
+	return avf_aq_get_set_rss_key(hw, vsi_id, key, true);
+}
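
For reference, a minimal caller sketch (illustrative only, not part of the patch)
of programming a per-VSI RSS key through the helper above; the
example_set_rss_key() wrapper and its parameters are hypothetical, while the
buffer layout follows avf_aqc_get_set_rss_key_data as defined earlier in this
patch.

static enum avf_status_code
example_set_rss_key(struct avf_hw *hw, u16 vsi_id, u8 *key, u16 key_len)
{
	struct avf_aqc_get_set_rss_key_data key_data;

	/* the standard key area is 0x28 (40) bytes wide */
	if (key_len > sizeof(key_data.standard_rss_key))
		return AVF_ERR_PARAM;

	avf_memset(&key_data, 0, sizeof(key_data), AVF_NONDMA_MEM);
	avf_memcpy(key_data.standard_rss_key, key, key_len,
		   AVF_NONDMA_TO_NONDMA);

	return avf_aq_set_rss_key(hw, vsi_id, &key_data);
}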
+
+/* The avf_ptype_lookup table is used to convert from the 8-bit ptype in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ *
+ * Typical work flow:
+ *
+ * IF NOT avf_ptype_lookup[ptype].known
+ * THEN
+ *      Packet is unknown
+ * ELSE IF avf_ptype_lookup[ptype].outer_ip == AVF_RX_PTYPE_OUTER_IP
+ *      Use the rest of the fields to look at the tunnels, inner protocols, etc
+ * ELSE
+ *      Use the enum avf_rx_l2_ptype to decode the packet type
+ * ENDIF
+ */
+
+/* macro to make the table lines short */
+#define AVF_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
+	{	PTYPE, \
+		1, \
+		AVF_RX_PTYPE_OUTER_##OUTER_IP, \
+		AVF_RX_PTYPE_OUTER_##OUTER_IP_VER, \
+		AVF_RX_PTYPE_##OUTER_FRAG, \
+		AVF_RX_PTYPE_TUNNEL_##T, \
+		AVF_RX_PTYPE_TUNNEL_END_##TE, \
+		AVF_RX_PTYPE_##TEF, \
+		AVF_RX_PTYPE_INNER_PROT_##I, \
+		AVF_RX_PTYPE_PAYLOAD_LAYER_##PL }
+
+#define AVF_PTT_UNUSED_ENTRY(PTYPE) \
+		{ PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
+
+/* shorter macros make the table fit but are terse */
+#define AVF_RX_PTYPE_NOF		AVF_RX_PTYPE_NOT_FRAG
+#define AVF_RX_PTYPE_FRG		AVF_RX_PTYPE_FRAG
+#define AVF_RX_PTYPE_INNER_PROT_TS	AVF_RX_PTYPE_INNER_PROT_TIMESYNC
+
+/* Lookup table mapping the HW PTYPE to the bit field for decoding */
+struct avf_rx_ptype_decoded avf_ptype_lookup[] = {
+	/* L2 Packet types */
+	AVF_PTT_UNUSED_ENTRY(0),
+	AVF_PTT(1,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(2,  L2, NONE, NOF, NONE, NONE, NOF, TS,   PAY2),
+	AVF_PTT(3,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT_UNUSED_ENTRY(4),
+	AVF_PTT_UNUSED_ENTRY(5),
+	AVF_PTT(6,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(7,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT_UNUSED_ENTRY(8),
+	AVF_PTT_UNUSED_ENTRY(9),
+	AVF_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	AVF_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+
+	/* Non Tunneled IPv4 */
+	AVF_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(25),
+	AVF_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	AVF_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	AVF_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv4 */
+	AVF_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(32),
+	AVF_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv6 */
+	AVF_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(39),
+	AVF_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT */
+	AVF_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> IPv4 */
+	AVF_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(47),
+	AVF_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> IPv6 */
+	AVF_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(54),
+	AVF_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC */
+	AVF_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> MAC --> IPv4 */
+	AVF_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(62),
+	AVF_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT -> MAC --> IPv6 */
+	AVF_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(69),
+	AVF_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN */
+	AVF_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */
+	AVF_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(77),
+	AVF_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */
+	AVF_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(84),
+	AVF_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* Non Tunneled IPv6 */
+	AVF_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(91),
+	AVF_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	AVF_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	AVF_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv4 */
+	AVF_PTT(95,  IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(96,  IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(97,  IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(98),
+	AVF_PTT(99,  IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv6 */
+	AVF_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(105),
+	AVF_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT */
+	AVF_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> IPv4 */
+	AVF_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(113),
+	AVF_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> IPv6 */
+	AVF_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(120),
+	AVF_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC */
+	AVF_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv4 */
+	AVF_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(128),
+	AVF_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv6 */
+	AVF_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(135),
+	AVF_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN */
+	AVF_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */
+	AVF_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(143),
+	AVF_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */
+	AVF_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(150),
+	AVF_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* unused entries */
+	AVF_PTT_UNUSED_ENTRY(154),
+	AVF_PTT_UNUSED_ENTRY(155),
+	AVF_PTT_UNUSED_ENTRY(156),
+	AVF_PTT_UNUSED_ENTRY(157),
+	AVF_PTT_UNUSED_ENTRY(158),
+	AVF_PTT_UNUSED_ENTRY(159),
+
+	AVF_PTT_UNUSED_ENTRY(160),
+	AVF_PTT_UNUSED_ENTRY(161),
+	AVF_PTT_UNUSED_ENTRY(162),
+	AVF_PTT_UNUSED_ENTRY(163),
+	AVF_PTT_UNUSED_ENTRY(164),
+	AVF_PTT_UNUSED_ENTRY(165),
+	AVF_PTT_UNUSED_ENTRY(166),
+	AVF_PTT_UNUSED_ENTRY(167),
+	AVF_PTT_UNUSED_ENTRY(168),
+	AVF_PTT_UNUSED_ENTRY(169),
+
+	AVF_PTT_UNUSED_ENTRY(170),
+	AVF_PTT_UNUSED_ENTRY(171),
+	AVF_PTT_UNUSED_ENTRY(172),
+	AVF_PTT_UNUSED_ENTRY(173),
+	AVF_PTT_UNUSED_ENTRY(174),
+	AVF_PTT_UNUSED_ENTRY(175),
+	AVF_PTT_UNUSED_ENTRY(176),
+	AVF_PTT_UNUSED_ENTRY(177),
+	AVF_PTT_UNUSED_ENTRY(178),
+	AVF_PTT_UNUSED_ENTRY(179),
+
+	AVF_PTT_UNUSED_ENTRY(180),
+	AVF_PTT_UNUSED_ENTRY(181),
+	AVF_PTT_UNUSED_ENTRY(182),
+	AVF_PTT_UNUSED_ENTRY(183),
+	AVF_PTT_UNUSED_ENTRY(184),
+	AVF_PTT_UNUSED_ENTRY(185),
+	AVF_PTT_UNUSED_ENTRY(186),
+	AVF_PTT_UNUSED_ENTRY(187),
+	AVF_PTT_UNUSED_ENTRY(188),
+	AVF_PTT_UNUSED_ENTRY(189),
+
+	AVF_PTT_UNUSED_ENTRY(190),
+	AVF_PTT_UNUSED_ENTRY(191),
+	AVF_PTT_UNUSED_ENTRY(192),
+	AVF_PTT_UNUSED_ENTRY(193),
+	AVF_PTT_UNUSED_ENTRY(194),
+	AVF_PTT_UNUSED_ENTRY(195),
+	AVF_PTT_UNUSED_ENTRY(196),
+	AVF_PTT_UNUSED_ENTRY(197),
+	AVF_PTT_UNUSED_ENTRY(198),
+	AVF_PTT_UNUSED_ENTRY(199),
+
+	AVF_PTT_UNUSED_ENTRY(200),
+	AVF_PTT_UNUSED_ENTRY(201),
+	AVF_PTT_UNUSED_ENTRY(202),
+	AVF_PTT_UNUSED_ENTRY(203),
+	AVF_PTT_UNUSED_ENTRY(204),
+	AVF_PTT_UNUSED_ENTRY(205),
+	AVF_PTT_UNUSED_ENTRY(206),
+	AVF_PTT_UNUSED_ENTRY(207),
+	AVF_PTT_UNUSED_ENTRY(208),
+	AVF_PTT_UNUSED_ENTRY(209),
+
+	AVF_PTT_UNUSED_ENTRY(210),
+	AVF_PTT_UNUSED_ENTRY(211),
+	AVF_PTT_UNUSED_ENTRY(212),
+	AVF_PTT_UNUSED_ENTRY(213),
+	AVF_PTT_UNUSED_ENTRY(214),
+	AVF_PTT_UNUSED_ENTRY(215),
+	AVF_PTT_UNUSED_ENTRY(216),
+	AVF_PTT_UNUSED_ENTRY(217),
+	AVF_PTT_UNUSED_ENTRY(218),
+	AVF_PTT_UNUSED_ENTRY(219),
+
+	AVF_PTT_UNUSED_ENTRY(220),
+	AVF_PTT_UNUSED_ENTRY(221),
+	AVF_PTT_UNUSED_ENTRY(222),
+	AVF_PTT_UNUSED_ENTRY(223),
+	AVF_PTT_UNUSED_ENTRY(224),
+	AVF_PTT_UNUSED_ENTRY(225),
+	AVF_PTT_UNUSED_ENTRY(226),
+	AVF_PTT_UNUSED_ENTRY(227),
+	AVF_PTT_UNUSED_ENTRY(228),
+	AVF_PTT_UNUSED_ENTRY(229),
+
+	AVF_PTT_UNUSED_ENTRY(230),
+	AVF_PTT_UNUSED_ENTRY(231),
+	AVF_PTT_UNUSED_ENTRY(232),
+	AVF_PTT_UNUSED_ENTRY(233),
+	AVF_PTT_UNUSED_ENTRY(234),
+	AVF_PTT_UNUSED_ENTRY(235),
+	AVF_PTT_UNUSED_ENTRY(236),
+	AVF_PTT_UNUSED_ENTRY(237),
+	AVF_PTT_UNUSED_ENTRY(238),
+	AVF_PTT_UNUSED_ENTRY(239),
+
+	AVF_PTT_UNUSED_ENTRY(240),
+	AVF_PTT_UNUSED_ENTRY(241),
+	AVF_PTT_UNUSED_ENTRY(242),
+	AVF_PTT_UNUSED_ENTRY(243),
+	AVF_PTT_UNUSED_ENTRY(244),
+	AVF_PTT_UNUSED_ENTRY(245),
+	AVF_PTT_UNUSED_ENTRY(246),
+	AVF_PTT_UNUSED_ENTRY(247),
+	AVF_PTT_UNUSED_ENTRY(248),
+	AVF_PTT_UNUSED_ENTRY(249),
+
+	AVF_PTT_UNUSED_ENTRY(250),
+	AVF_PTT_UNUSED_ENTRY(251),
+	AVF_PTT_UNUSED_ENTRY(252),
+	AVF_PTT_UNUSED_ENTRY(253),
+	AVF_PTT_UNUSED_ENTRY(254),
+	AVF_PTT_UNUSED_ENTRY(255)
+};
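
A minimal sketch of the decode flow described in the comment above the table
(illustrative only, not part of the patch); the field names are assumed to
match the avf_rx_ptype_decoded definition in avf_type.h, and
example_ptype_is_tunneled() is a hypothetical helper.

static inline bool example_ptype_is_tunneled(u8 ptype)
{
	struct avf_rx_ptype_decoded decoded = avf_ptype_lookup[ptype];

	if (!decoded.known)
		return false;	/* packet type not recognized by HW */

	if (decoded.outer_ip == AVF_RX_PTYPE_OUTER_IP)
		/* IP packet: tunnel/inner protocol fields are meaningful */
		return decoded.tunnel_type != AVF_RX_PTYPE_TUNNEL_NONE;

	/* non-IP: decode via enum avf_rx_l2_ptype instead */
	return false;
}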
+
+
+/**
+ * avf_validate_mac_addr - Validate unicast MAC address
+ * @mac_addr: pointer to MAC address
+ *
+ * Tests a MAC address to ensure it is a valid Individual Address
+ **/
+enum avf_status_code avf_validate_mac_addr(u8 *mac_addr)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+
+	DEBUGFUNC("avf_validate_mac_addr");
+
+	/* Broadcast addresses ARE multicast addresses
+	 * Make sure it is not a multicast address
+	 * Reject the zero address
+	 */
+	if (AVF_IS_MULTICAST(mac_addr) ||
+	    (mac_addr[0] == 0 && mac_addr[1] == 0 && mac_addr[2] == 0 &&
+	      mac_addr[3] == 0 && mac_addr[4] == 0 && mac_addr[5] == 0))
+		status = AVF_ERR_INVALID_MAC_ADDR;
+
+	return status;
+}
+
+/**
+ * avf_aq_rx_ctl_read_register - use FW to read from an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: ptr to register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Use the firmware to read the Rx control register,
+ * especially useful if the Rx unit is under heavy pressure
+ **/
+enum avf_status_code avf_aq_rx_ctl_read_register(struct avf_hw *hw,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_rx_ctl_reg_read_write *cmd_resp =
+		(struct avf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
+	enum avf_status_code status;
+
+	if (reg_val == NULL)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_rx_ctl_reg_read);
+
+	cmd_resp->address = CPU_TO_LE32(reg_addr);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	if (status == AVF_SUCCESS)
+		*reg_val = LE32_TO_CPU(cmd_resp->value);
+
+	return status;
+}
+
+/**
+ * avf_read_rx_ctl - read from an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ **/
+u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	bool use_register;
+	int retry = 5;
+	u32 val = 0;
+
+	use_register = (((hw->aq.api_maj_ver == 1) &&
+			(hw->aq.api_min_ver < 5)) ||
+			(hw->mac.type == AVF_MAC_X722));
+	if (!use_register) {
+do_retry:
+		status = avf_aq_rx_ctl_read_register(hw, reg_addr, &val, NULL);
+		if (hw->aq.asq_last_status == AVF_AQ_RC_EAGAIN && retry) {
+			avf_msec_delay(1);
+			retry--;
+			goto do_retry;
+		}
+	}
+
+	/* if the AQ access failed, try the old-fashioned way */
+	if (status || use_register)
+		val = rd32(hw, reg_addr);
+
+	return val;
+}
+
+/**
+ * avf_aq_rx_ctl_write_register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Use the firmware to write to an Rx control register,
+ * especially useful if the Rx unit is under heavy pressure
+ **/
+enum avf_status_code avf_aq_rx_ctl_write_register(struct avf_hw *hw,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_rx_ctl_reg_read_write *cmd =
+		(struct avf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_rx_ctl_reg_write);
+
+	cmd->address = CPU_TO_LE32(reg_addr);
+	cmd->value = CPU_TO_LE32(reg_val);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_write_rx_ctl - write to an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: register value
+ **/
+void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	bool use_register;
+	int retry = 5;
+
+	use_register = (((hw->aq.api_maj_ver == 1) &&
+			(hw->aq.api_min_ver < 5)) ||
+			(hw->mac.type == AVF_MAC_X722));
+	if (!use_register) {
+do_retry:
+		status = avf_aq_rx_ctl_write_register(hw, reg_addr,
+						       reg_val, NULL);
+		if (hw->aq.asq_last_status == AVF_AQ_RC_EAGAIN && retry) {
+			avf_msec_delay(1);
+			retry--;
+			goto do_retry;
+		}
+	}
+
+	/* if the AQ access failed, try the old-fashioned way */
+	if (status || use_register)
+		wr32(hw, reg_addr, reg_val);
+}
+
+/**
+ * avf_aq_set_phy_register
+ * @hw: pointer to the hw struct
+ * @phy_select: select which phy should be accessed
+ * @dev_addr: PHY device address
+ * @reg_addr: PHY register address
+ * @reg_val: new register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Write the external PHY register.
+ **/
+enum avf_status_code avf_aq_set_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_phy_register_access *cmd =
+		(struct avf_aqc_phy_register_access *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_set_phy_register);
+
+	cmd->phy_interface = phy_select;
+	cmd->dev_addres = dev_addr;
+	cmd->reg_address = CPU_TO_LE32(reg_addr);
+	cmd->reg_value = CPU_TO_LE32(reg_val);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_phy_register
+ * @hw: pointer to the hw struct
+ * @phy_select: select which phy should be accessed
+ * @dev_addr: PHY device address
+ * @reg_addr: PHY register address
+ * @reg_val: read register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Read the external PHY register.
+ **/
+enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_phy_register_access *cmd =
+		(struct avf_aqc_phy_register_access *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_get_phy_register);
+
+	cmd->phy_interface = phy_select;
+	cmd->dev_addres = dev_addr;
+	cmd->reg_address = CPU_TO_LE32(reg_addr);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+	if (!status)
+		*reg_val = LE32_TO_CPU(cmd->reg_value);
+
+	return status;
+}
+
+
+/**
+ * avf_aq_send_msg_to_pf
+ * @hw: pointer to the hardware structure
+ * @v_opcode: opcodes for VF-PF communication
+ * @v_retval: return error code
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ * @cmd_details: pointer to command details
+ *
+ * Send message to PF driver using admin queue. By default, this message
+ * is sent asynchronously, i.e. avf_asq_send_command() does not wait for
+ * completion before returning.
+ **/
+enum avf_status_code avf_aq_send_msg_to_pf(struct avf_hw *hw,
+				enum virtchnl_ops v_opcode,
+				enum avf_status_code v_retval,
+				u8 *msg, u16 msglen,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_asq_cmd_details details;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_send_msg_to_pf);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_SI);
+	desc.cookie_high = CPU_TO_LE32(v_opcode);
+	desc.cookie_low = CPU_TO_LE32(v_retval);
+	if (msglen) {
+		desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF
+						| AVF_AQ_FLAG_RD));
+		if (msglen > AVF_AQ_LARGE_BUF)
+			desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+		desc.datalen = CPU_TO_LE16(msglen);
+	}
+	if (!cmd_details) {
+		avf_memset(&details, 0, sizeof(details), AVF_NONDMA_MEM);
+		details.async = true;
+		cmd_details = &details;
+	}
+	status = avf_asq_send_command(hw, (struct avf_aq_desc *)&desc, msg,
+				       msglen, cmd_details);
+	return status;
+}
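
For reference, a minimal sketch (illustrative only, not part of the patch) of
overriding that default asynchronous behaviour: passing explicit command
details whose async flag is left false makes the send path wait for
completion. The opcode used here is only an example and the details layout is
assumed from avf_adminq.h.

static enum avf_status_code example_send_msg_sync(struct avf_hw *hw)
{
	struct avf_asq_cmd_details details;

	avf_memset(&details, 0, sizeof(details), AVF_NONDMA_MEM);
	/* details.async stays false, so avf_asq_send_command() polls
	 * for descriptor completion instead of returning immediately
	 */
	return avf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
				      AVF_SUCCESS, NULL, 0, &details);
}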
+
+/**
+ * avf_parse_hw_config
+ * @hw: pointer to the hardware structure
+ * @msg: pointer to the virtual channel VF resource structure
+ *
+ * Given a VF resource message from the PF, populate the hw struct
+ * with appropriate information.
+ **/
+void avf_parse_hw_config(struct avf_hw *hw,
+			     struct virtchnl_vf_resource *msg)
+{
+	struct virtchnl_vsi_resource *vsi_res;
+	int i;
+
+	vsi_res = &msg->vsi_res[0];
+
+	hw->dev_caps.num_vsis = msg->num_vsis;
+	hw->dev_caps.num_rx_qp = msg->num_queue_pairs;
+	hw->dev_caps.num_tx_qp = msg->num_queue_pairs;
+	hw->dev_caps.num_msix_vectors_vf = msg->max_vectors;
+	hw->dev_caps.dcb = msg->vf_cap_flags &
+			   VIRTCHNL_VF_OFFLOAD_L2;
+	hw->dev_caps.iwarp = (msg->vf_cap_flags &
+			      VIRTCHNL_VF_OFFLOAD_IWARP) ? 1 : 0;
+	for (i = 0; i < msg->num_vsis; i++) {
+		if (vsi_res->vsi_type == VIRTCHNL_VSI_SRIOV) {
+			avf_memcpy(hw->mac.perm_addr,
+				    vsi_res->default_mac_addr,
+				    ETH_ALEN,
+				    AVF_NONDMA_TO_NONDMA);
+			avf_memcpy(hw->mac.addr, vsi_res->default_mac_addr,
+				    ETH_ALEN,
+				    AVF_NONDMA_TO_NONDMA);
+		}
+		vsi_res++;
+	}
+}
+
+/**
+ * avf_reset
+ * @hw: pointer to the hardware structure
+ *
+ * Send a VF_RESET message to the PF. Does not wait for response from PF
+ * as none will be forthcoming. Immediately after calling this function,
+ * the admin queue should be shut down and (optionally) reinitialized.
+ **/
+enum avf_status_code avf_reset(struct avf_hw *hw)
+{
+	return avf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_RESET_VF,
+				      AVF_SUCCESS, NULL, 0, NULL);
+}
+
+/**
+ * avf_aq_set_arp_proxy_config
+ * @hw: pointer to the HW structure
+ * @proxy_config: pointer to proxy config command table struct
+ * @cmd_details: pointer to command details
+ *
+ * Set ARP offload parameters from pre-populated
+ * avf_aqc_arp_proxy_data struct
+ **/
+enum avf_status_code avf_aq_set_arp_proxy_config(struct avf_hw *hw,
+				struct avf_aqc_arp_proxy_data *proxy_config,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	if (!proxy_config)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_set_proxy_config);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+	desc.params.external.addr_high =
+				  CPU_TO_LE32(AVF_HI_DWORD((u64)proxy_config));
+	desc.params.external.addr_low =
+				  CPU_TO_LE32(AVF_LO_DWORD((u64)proxy_config));
+	desc.datalen = CPU_TO_LE16(sizeof(struct avf_aqc_arp_proxy_data));
+
+	status = avf_asq_send_command(hw, &desc, proxy_config,
+				       sizeof(struct avf_aqc_arp_proxy_data),
+				       cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_set_ns_proxy_table_entry
+ * @hw: pointer to the HW structure
+ * @ns_proxy_table_entry: pointer to NS table entry command struct
+ * @cmd_details: pointer to command details
+ *
+ * Set IPv6 Neighbor Solicitation (NS) protocol offload parameters
+ * from pre-populated avf_aqc_ns_proxy_data struct
+ **/
+enum avf_status_code avf_aq_set_ns_proxy_table_entry(struct avf_hw *hw,
+			struct avf_aqc_ns_proxy_data *ns_proxy_table_entry,
+			struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	if (!ns_proxy_table_entry)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+				avf_aqc_opc_set_ns_proxy_table_entry);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+	desc.params.external.addr_high =
+		CPU_TO_LE32(AVF_HI_DWORD((u64)ns_proxy_table_entry));
+	desc.params.external.addr_low =
+		CPU_TO_LE32(AVF_LO_DWORD((u64)ns_proxy_table_entry));
+	desc.datalen = CPU_TO_LE16(sizeof(struct avf_aqc_ns_proxy_data));
+
+	status = avf_asq_send_command(hw, &desc, ns_proxy_table_entry,
+				       sizeof(struct avf_aqc_ns_proxy_data),
+				       cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_set_clear_wol_filter
+ * @hw: pointer to the hw struct
+ * @filter_index: index of filter to modify (0-7)
+ * @filter: buffer containing filter to be set
+ * @set_filter: true to set filter, false to clear filter
+ * @no_wol_tco: if true, pass through packets cannot cause wake-up
+ *		if false, pass through packets may cause wake-up
+ * @filter_valid: true if filter action is valid
+ * @no_wol_tco_valid: true if no WoL in TCO traffic action valid
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Set or clear WoL filter for port attached to the PF
+ **/
+enum avf_status_code avf_aq_set_clear_wol_filter(struct avf_hw *hw,
+				u8 filter_index,
+				struct avf_aqc_set_wol_filter_data *filter,
+				bool set_filter, bool no_wol_tco,
+				bool filter_valid, bool no_wol_tco_valid,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_set_wol_filter *cmd =
+		(struct avf_aqc_set_wol_filter *)&desc.params.raw;
+	enum avf_status_code status;
+	u16 cmd_flags = 0;
+	u16 valid_flags = 0;
+	u16 buff_len = 0;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_set_wol_filter);
+
+	if (filter_index >= AVF_AQC_MAX_NUM_WOL_FILTERS)
+		return  AVF_ERR_PARAM;
+	cmd->filter_index = CPU_TO_LE16(filter_index);
+
+	if (set_filter) {
+		if (!filter)
+			return  AVF_ERR_PARAM;
+
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER;
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR;
+	}
+
+	if (no_wol_tco)
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER_NO_TCO_WOL;
+	cmd->cmd_flags = CPU_TO_LE16(cmd_flags);
+
+	if (filter_valid)
+		valid_flags |= AVF_AQC_SET_WOL_FILTER_ACTION_VALID;
+	if (no_wol_tco_valid)
+		valid_flags |= AVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID;
+	cmd->valid_flags = CPU_TO_LE16(valid_flags);
+
+	buff_len = sizeof(*filter);
+	desc.datalen = CPU_TO_LE16(buff_len);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd->address_high = CPU_TO_LE32(AVF_HI_DWORD((u64)filter));
+	cmd->address_low = CPU_TO_LE32(AVF_LO_DWORD((u64)filter));
+
+	status = avf_asq_send_command(hw, &desc, filter,
+				       buff_len, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_wake_event_reason
+ * @hw: pointer to the hw struct
+ * @wake_reason: return value, index of matching filter
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Get information for the reason of a Wake Up event
+ **/
+enum avf_status_code avf_aq_get_wake_event_reason(struct avf_hw *hw,
+				u16 *wake_reason,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_wake_reason_completion *resp =
+		(struct avf_aqc_get_wake_reason_completion *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_get_wake_reason);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	if (status == AVF_SUCCESS)
+		*wake_reason = LE16_TO_CPU(resp->wake_reason);
+
+	return status;
+}
+
+/**
+ * avf_aq_clear_all_wol_filters
+ * @hw: pointer to the hw struct
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Clear all WoL filters for the port attached to the PF
+ **/
+enum avf_status_code avf_aq_clear_all_wol_filters(struct avf_hw *hw,
+	struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_clear_all_wol_filters);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_write_ddp - Write dynamic device personalization (ddp)
+ * @hw: pointer to the hw struct
+ * @buff: command buffer (size in bytes = buff_size)
+ * @buff_size: buffer size in bytes
+ * @track_id: package tracking id
+ * @error_offset: returns error offset
+ * @error_info: returns error information
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum
+avf_status_code avf_aq_write_ddp(struct avf_hw *hw, void *buff,
+				   u16 buff_size, u32 track_id,
+				   u32 *error_offset, u32 *error_info,
+				   struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_write_personalization_profile *cmd =
+		(struct avf_aqc_write_personalization_profile *)
+		&desc.params.raw;
+	struct avf_aqc_write_ddp_resp *resp;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+				  avf_aqc_opc_write_personalization_profile);
+
+	desc.flags |= CPU_TO_LE16(AVF_AQ_FLAG_BUF | AVF_AQ_FLAG_RD);
+	if (buff_size > AVF_AQ_LARGE_BUF)
+		desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+
+	desc.datalen = CPU_TO_LE16(buff_size);
+
+	cmd->profile_track_id = CPU_TO_LE32(track_id);
+
+	status = avf_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
+	if (!status) {
+		resp = (struct avf_aqc_write_ddp_resp *)&desc.params.raw;
+		if (error_offset)
+			*error_offset = LE32_TO_CPU(resp->error_offset);
+		if (error_info)
+			*error_info = LE32_TO_CPU(resp->error_info);
+	}
+
+	return status;
+}
+
+/**
+ * avf_aq_get_ddp_list - Read dynamic device personalization (ddp)
+ * @hw: pointer to the hw struct
+ * @buff: command buffer (size in bytes = buff_size)
+ * @buff_size: buffer size in bytes
+ * @flags: AdminQ command flags
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum
+avf_status_code avf_aq_get_ddp_list(struct avf_hw *hw, void *buff,
+				      u16 buff_size, u8 flags,
+				      struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_applied_profiles *cmd =
+		(struct avf_aqc_get_applied_profiles *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+			  avf_aqc_opc_get_personalization_profile_list);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	if (buff_size > AVF_AQ_LARGE_BUF)
+		desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+	desc.datalen = CPU_TO_LE16(buff_size);
+
+	cmd->flags = flags;
+
+	status = avf_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_find_segment_in_package
+ * @segment_type: the segment type to search for (e.g., SEGMENT_TYPE_AVF)
+ * @pkg_hdr: pointer to the package header to be searched
+ *
+ * This function searches a package file for a particular segment type. On
+ * success it returns a pointer to the segment header, otherwise it will
+ * return NULL.
+ **/
+struct avf_generic_seg_header *
+avf_find_segment_in_package(u32 segment_type,
+			     struct avf_package_header *pkg_hdr)
+{
+	struct avf_generic_seg_header *segment;
+	u32 i;
+
+	/* Search all package segments for the requested segment type */
+	for (i = 0; i < pkg_hdr->segment_count; i++) {
+		segment =
+			(struct avf_generic_seg_header *)((u8 *)pkg_hdr +
+			 pkg_hdr->segment_offset[i]);
+
+		if (segment->type == segment_type)
+			return segment;
+	}
+
+	return NULL;
+}
+
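For orientation, a hedged sketch of how the lookup above is meant to be used: the caller has a DDP package image in memory, casts it to struct avf_package_header, and asks for the AVF profile segment (SEGMENT_TYPE_AVF comes from the base headers; the helper name here is illustrative only).

static struct avf_profile_segment *
example_get_profile_seg(void *package_image)
{
	struct avf_package_header *pkg_hdr = package_image;
	struct avf_generic_seg_header *seg;

	seg = avf_find_segment_in_package(SEGMENT_TYPE_AVF, pkg_hdr);
	if (!seg)
		return NULL;

	/* the profile segment begins with the generic segment header */
	return (struct avf_profile_segment *)seg;
}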
+/* Get section table in profile */
+#define AVF_SECTION_TABLE(profile, sec_tbl)				\
+	do {								\
+		struct avf_profile_segment *p = (profile);		\
+		u32 count;						\
+		u32 *nvm;						\
+		count = p->device_table_count;				\
+		nvm = (u32 *)&p->device_table[count];			\
+		sec_tbl = (struct avf_section_table *)&nvm[nvm[0] + 1]; \
+	} while (0)
+
+/* Get section header in profile */
+#define AVF_SECTION_HEADER(profile, offset)				\
+	(struct avf_profile_section_header *)((u8 *)(profile) + (offset))
+
+/**
+ * avf_find_section_in_profile
+ * @section_type: the section type to search for (e.g., SECTION_TYPE_NOTE)
+ * @profile: pointer to the avf segment header to be searched
+ *
+ * This function searches avf segment for a particular section type. On
+ * success it returns a pointer to the section header, otherwise it will
+ * return NULL.
+ **/
+struct avf_profile_section_header *
+avf_find_section_in_profile(u32 section_type,
+			     struct avf_profile_segment *profile)
+{
+	struct avf_profile_section_header *sec;
+	struct avf_section_table *sec_tbl;
+	u32 sec_off;
+	u32 i;
+
+	if (profile->header.type != SEGMENT_TYPE_AVF)
+		return NULL;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		if (sec->section.type == section_type)
+			return sec;
+	}
+
+	return NULL;
+}
+
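The double indexing in AVF_SECTION_TABLE is easier to read with the profile layout in mind: the segment header is followed by the device table, then a u32 count of NVM entries plus that many u32 values, and only then the section table, whose offsets are relative to the start of the profile segment (hence AVF_SECTION_HEADER adding them to the profile pointer). A minimal, hypothetical use of the lookup above:

/* Sketch only: check whether a profile carries an information section
 * before attempting to register it.
 */
static bool example_profile_has_info(struct avf_profile_segment *profile)
{
	return avf_find_section_in_profile(SECTION_TYPE_INFO, profile) != NULL;
}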
+/**
+ * avf_ddp_exec_aq_section - Execute generic AQ for DDP
+ * @hw: pointer to the hw struct
+ * @aq: command buffer containing all data to execute AQ
+ **/
+STATIC enum
+avf_status_code avf_ddp_exec_aq_section(struct avf_hw *hw,
+					  struct avf_profile_aq_section *aq)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	u8 *msg = NULL;
+	u16 msglen;
+
+	avf_fill_default_direct_cmd_desc(&desc, aq->opcode);
+	desc.flags |= CPU_TO_LE16(aq->flags);
+	avf_memcpy(desc.params.raw, aq->param, sizeof(desc.params.raw),
+		    AVF_NONDMA_TO_NONDMA);
+
+	msglen = aq->datalen;
+	if (msglen) {
+		desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF |
+						AVF_AQ_FLAG_RD));
+		if (msglen > AVF_AQ_LARGE_BUF)
+			desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+		desc.datalen = CPU_TO_LE16(msglen);
+		msg = &aq->data[0];
+	}
+
+	status = avf_asq_send_command(hw, &desc, msg, msglen, NULL);
+
+	if (status != AVF_SUCCESS) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE,
+			   "unable to exec DDP AQ opcode %u, error %d\n",
+			   aq->opcode, status);
+		return status;
+	}
+
+	/* copy returned desc to aq_buf */
+	avf_memcpy(aq->param, desc.params.raw, sizeof(desc.params.raw),
+		    AVF_NONDMA_TO_NONDMA);
+
+	return AVF_SUCCESS;
+}
+
+/**
+ * avf_validate_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be validated
+ * @track_id: package tracking id
+ * @rollback: flag if the profile is for rollback.
+ *
+ * Validates supported devices and profile's sections.
+ */
+STATIC enum avf_status_code
+avf_validate_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		      u32 track_id, bool rollback)
+{
+	struct avf_profile_section_header *sec = NULL;
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	u32 vendor_dev_id;
+	u32 dev_cnt;
+	u32 sec_off;
+	u32 i;
+
+	if (track_id == AVF_DDP_TRACKID_INVALID) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE, "Invalid track_id\n");
+		return AVF_NOT_SUPPORTED;
+	}
+
+	dev_cnt = profile->device_table_count;
+	for (i = 0; i < dev_cnt; i++) {
+		vendor_dev_id = profile->device_table[i].vendor_dev_id;
+		if ((vendor_dev_id >> 16) == AVF_INTEL_VENDOR_ID &&
+		    hw->device_id == (vendor_dev_id & 0xFFFF))
+			break;
+	}
+	if (dev_cnt && (i == dev_cnt)) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE,
+			   "Device doesn't support DDP\n");
+		return AVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	/* Validate sections types */
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		if (rollback) {
+			if (sec->section.type == SECTION_TYPE_MMIO ||
+			    sec->section.type == SECTION_TYPE_AQ ||
+			    sec->section.type == SECTION_TYPE_RB_AQ) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Not a roll-back package\n");
+				return AVF_NOT_SUPPORTED;
+			}
+		} else {
+			if (sec->section.type == SECTION_TYPE_RB_AQ ||
+			    sec->section.type == SECTION_TYPE_RB_MMIO) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Not an original package\n");
+				return AVF_NOT_SUPPORTED;
+			}
+		}
+	}
+
+	return status;
+}
+
+/**
+ * avf_write_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be downloaded
+ * @track_id: package tracking id
+ *
+ * Handles the download of a complete package.
+ */
+enum avf_status_code
+avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		   u32 track_id)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	struct avf_profile_section_header *sec = NULL;
+	struct avf_profile_aq_section *ddp_aq;
+	u32 section_size = 0;
+	u32 offset = 0, info = 0;
+	u32 sec_off;
+	u32 i;
+
+	status = avf_validate_profile(hw, profile, track_id, false);
+	if (status)
+		return status;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		/* Process generic admin command */
+		if (sec->section.type == SECTION_TYPE_AQ) {
+			ddp_aq = (struct avf_profile_aq_section *)&sec[1];
+			status = avf_ddp_exec_aq_section(hw, ddp_aq);
+			if (status) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Failed to execute aq: section %d, opcode %u\n",
+					   i, ddp_aq->opcode);
+				break;
+			}
+			sec->section.type = SECTION_TYPE_RB_AQ;
+		}
+
+		/* Skip any non-mmio sections */
+		if (sec->section.type != SECTION_TYPE_MMIO)
+			continue;
+
+		section_size = sec->section.size +
+			sizeof(struct avf_profile_section_header);
+
+		/* Write MMIO section */
+		status = avf_aq_write_ddp(hw, (void *)sec, (u16)section_size,
+					   track_id, &offset, &info, NULL);
+		if (status) {
+			avf_debug(hw, AVF_DEBUG_PACKAGE,
+				   "Failed to write profile: section %d, offset %d, info %d\n",
+				   i, offset, info);
+			break;
+		}
+	}
+	return status;
+}
+
+/**
+ * avf_rollback_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be removed
+ * @track_id: package tracking id
+ *
+ * Rolls back previously loaded package.
+ */
+enum avf_status_code
+avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		      u32 track_id)
+{
+	struct avf_profile_section_header *sec = NULL;
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	u32 offset = 0, info = 0;
+	u32 section_size = 0;
+	u32 sec_off;
+	int i;
+
+	status = avf_validate_profile(hw, profile, track_id, true);
+	if (status)
+		return status;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	/* For rollback write sections in reverse */
+	for (i = sec_tbl->section_count - 1; i >= 0; i--) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+
+		/* Skip any non-rollback sections */
+		if (sec->section.type != SECTION_TYPE_RB_MMIO)
+			continue;
+
+		section_size = sec->section.size +
+			sizeof(struct avf_profile_section_header);
+
+		/* Write roll-back MMIO section */
+		status = avf_aq_write_ddp(hw, (void *)sec, (u16)section_size,
+					   track_id, &offset, &info, NULL);
+		if (status) {
+			avf_debug(hw, AVF_DEBUG_PACKAGE,
+				   "Failed to write profile: section %d, offset %d, info %d\n",
+				   i, offset, info);
+			break;
+		}
+	}
+	return status;
+}
+
+/**
+ * avf_add_pinfo_to_list
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package
+ * @profile_info_sec: buffer for information section
+ * @track_id: package tracking id
+ *
+ * Register a profile to the list of loaded profiles.
+ */
+enum avf_status_code
+avf_add_pinfo_to_list(struct avf_hw *hw,
+		       struct avf_profile_segment *profile,
+		       u8 *profile_info_sec, u32 track_id)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_profile_section_header *sec = NULL;
+	struct avf_profile_info *pinfo;
+	u32 offset = 0, info = 0;
+
+	sec = (struct avf_profile_section_header *)profile_info_sec;
+	sec->tbl_size = 1;
+	sec->data_end = sizeof(struct avf_profile_section_header) +
+			sizeof(struct avf_profile_info);
+	sec->section.type = SECTION_TYPE_INFO;
+	sec->section.offset = sizeof(struct avf_profile_section_header);
+	sec->section.size = sizeof(struct avf_profile_info);
+	pinfo = (struct avf_profile_info *)(profile_info_sec +
+					     sec->section.offset);
+	pinfo->track_id = track_id;
+	pinfo->version = profile->version;
+	pinfo->op = AVF_DDP_ADD_TRACKID;
+	avf_memcpy(pinfo->name, profile->name, AVF_DDP_NAME_SIZE,
+		    AVF_NONDMA_TO_NONDMA);
+
+	status = avf_aq_write_ddp(hw, (void *)sec, sec->data_end,
+				   track_id, &offset, &info, NULL);
+	return status;
+}
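Taken together, the DDP helpers above compose into a short download sequence: locate the AVF profile segment in the package image, push it to the firmware, then record it in the applied-profile list. The sketch below is illustrative only (the function name is hypothetical, and in real use track_id would be taken from the package's own profile information):

static enum avf_status_code
example_load_ddp_package(struct avf_hw *hw, void *package_image, u32 track_id)
{
	u8 info_sec[sizeof(struct avf_profile_section_header) +
		    sizeof(struct avf_profile_info)];
	struct avf_profile_segment *profile;
	enum avf_status_code status;

	profile = (struct avf_profile_segment *)
		avf_find_segment_in_package(SEGMENT_TYPE_AVF,
			(struct avf_package_header *)package_image);
	if (!profile)
		return AVF_ERR_PARAM;

	status = avf_write_profile(hw, profile, track_id);
	if (status != AVF_SUCCESS)
		return status;

	/* info_sec is sized for one section header plus one profile info */
	return avf_add_pinfo_to_list(hw, profile, info_sec, track_id);
}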
diff --git a/drivers/net/avf/base/avf_devids.h b/drivers/net/avf/base/avf_devids.h
new file mode 100644
index 0000000..7d9fed2
--- /dev/null
+++ b/drivers/net/avf/base/avf_devids.h
@@ -0,0 +1,43 @@
+/*******************************************************************************
+
+Copyright (c) 2017, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_DEVIDS_H_
+#define _AVF_DEVIDS_H_
+
+/* Vendor ID */
+#define AVF_INTEL_VENDOR_ID		0x8086
+
+/* Device IDs */
+#define AVF_DEV_ID_ADAPTIVE_VF		0x1889
+
+#endif /* _AVF_DEVIDS_H_ */
diff --git a/drivers/net/avf/base/avf_hmc.h b/drivers/net/avf/base/avf_hmc.h
new file mode 100644
index 0000000..b9b7b5b
--- /dev/null
+++ b/drivers/net/avf/base/avf_hmc.h
@@ -0,0 +1,245 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_HMC_H_
+#define _AVF_HMC_H_
+
+#define AVF_HMC_MAX_BP_COUNT 512
+
+/* forward-declare the HW struct for the compiler */
+struct avf_hw;
+
+#define AVF_HMC_INFO_SIGNATURE		0x484D5347 /* HMSG */
+#define AVF_HMC_PD_CNT_IN_SD		512
+#define AVF_HMC_DIRECT_BP_SIZE		0x200000 /* 2M */
+#define AVF_HMC_PAGED_BP_SIZE		4096
+#define AVF_HMC_PD_BP_BUF_ALIGNMENT	4096
+#define AVF_FIRST_VF_FPM_ID		16
+
+struct avf_hmc_obj_info {
+	u64 base;	/* base addr in FPM */
+	u32 max_cnt;	/* max count available for this hmc func */
+	u32 cnt;	/* count of objects driver actually wants to create */
+	u64 size;	/* size in bytes of one object */
+};
+
+enum avf_sd_entry_type {
+	AVF_SD_TYPE_INVALID = 0,
+	AVF_SD_TYPE_PAGED   = 1,
+	AVF_SD_TYPE_DIRECT  = 2
+};
+
+struct avf_hmc_bp {
+	enum avf_sd_entry_type entry_type;
+	struct avf_dma_mem addr; /* populate to be used by hw */
+	u32 sd_pd_index;
+	u32 ref_cnt;
+};
+
+struct avf_hmc_pd_entry {
+	struct avf_hmc_bp bp;
+	u32 sd_index;
+	bool rsrc_pg;
+	bool valid;
+};
+
+struct avf_hmc_pd_table {
+	struct avf_dma_mem pd_page_addr; /* populate to be used by hw */
+	struct avf_hmc_pd_entry  *pd_entry; /* [512] for SW bookkeeping */
+	struct avf_virt_mem pd_entry_virt_mem; /* virt mem for pd_entry */
+
+	u32 ref_cnt;
+	u32 sd_index;
+};
+
+struct avf_hmc_sd_entry {
+	enum avf_sd_entry_type entry_type;
+	bool valid;
+
+	union {
+		struct avf_hmc_pd_table pd_table;
+		struct avf_hmc_bp bp;
+	} u;
+};
+
+struct avf_hmc_sd_table {
+	struct avf_virt_mem addr; /* used to track sd_entry allocations */
+	u32 sd_cnt;
+	u32 ref_cnt;
+	struct avf_hmc_sd_entry *sd_entry; /* (sd_cnt*512) entries max */
+};
+
+struct avf_hmc_info {
+	u32 signature;
+	/* equals the PCI func num for PF; dynamically allocated for VFs */
+	u8 hmc_fn_id;
+	u16 first_sd_index; /* index of the first available SD */
+
+	/* hmc objects */
+	struct avf_hmc_obj_info *hmc_obj;
+	struct avf_virt_mem hmc_obj_virt_mem;
+	struct avf_hmc_sd_table sd_table;
+};
+
+#define AVF_INC_SD_REFCNT(sd_table)	((sd_table)->ref_cnt++)
+#define AVF_INC_PD_REFCNT(pd_table)	((pd_table)->ref_cnt++)
+#define AVF_INC_BP_REFCNT(bp)		((bp)->ref_cnt++)
+
+#define AVF_DEC_SD_REFCNT(sd_table)	((sd_table)->ref_cnt--)
+#define AVF_DEC_PD_REFCNT(pd_table)	((pd_table)->ref_cnt--)
+#define AVF_DEC_BP_REFCNT(bp)		((bp)->ref_cnt--)
+
+/**
+ * AVF_SET_PF_SD_ENTRY - marks the sd entry as valid in the hardware
+ * @hw: pointer to our hw struct
+ * @pa: pointer to physical address
+ * @sd_index: segment descriptor index
+ * @type: if sd entry is direct or paged
+ **/
+#define AVF_SET_PF_SD_ENTRY(hw, pa, sd_index, type)			\
+{									\
+	u32 val1, val2, val3;						\
+	val1 = (u32)(AVF_HI_DWORD(pa));				\
+	val2 = (u32)(pa) | (AVF_HMC_MAX_BP_COUNT <<			\
+		 AVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) |		\
+		((((type) == AVF_SD_TYPE_PAGED) ? 0 : 1) <<		\
+		AVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT) |			\
+		BIT(AVF_PFHMC_SDDATALOW_PMSDVALID_SHIFT);		\
+	val3 = (sd_index) | BIT_ULL(AVF_PFHMC_SDCMD_PMSDWR_SHIFT);	\
+	wr32((hw), AVF_PFHMC_SDDATAHIGH, val1);			\
+	wr32((hw), AVF_PFHMC_SDDATALOW, val2);				\
+	wr32((hw), AVF_PFHMC_SDCMD, val3);				\
+}
+
+/**
+ * AVF_CLEAR_PF_SD_ENTRY - marks the sd entry as invalid in the hardware
+ * @hw: pointer to our hw struct
+ * @sd_index: segment descriptor index
+ * @type: if sd entry is direct or paged
+ **/
+#define AVF_CLEAR_PF_SD_ENTRY(hw, sd_index, type)			\
+{									\
+	u32 val2, val3;							\
+	val2 = (AVF_HMC_MAX_BP_COUNT <<				\
+		AVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) |		\
+		((((type) == AVF_SD_TYPE_PAGED) ? 0 : 1) <<		\
+		AVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT);			\
+	val3 = (sd_index) | BIT_ULL(AVF_PFHMC_SDCMD_PMSDWR_SHIFT);	\
+	wr32((hw), AVF_PFHMC_SDDATAHIGH, 0);				\
+	wr32((hw), AVF_PFHMC_SDDATALOW, val2);				\
+	wr32((hw), AVF_PFHMC_SDCMD, val3);				\
+}
+
+/**
+ * AVF_INVALIDATE_PF_HMC_PD - Invalidates the pd cache in the hardware
+ * @hw: pointer to our hw struct
+ * @sd_idx: segment descriptor index
+ * @pd_idx: page descriptor index
+ **/
+#define AVF_INVALIDATE_PF_HMC_PD(hw, sd_idx, pd_idx)			\
+	wr32((hw), AVF_PFHMC_PDINV,					\
+	    (((sd_idx) << AVF_PFHMC_PDINV_PMSDIDX_SHIFT) |		\
+	     ((pd_idx) << AVF_PFHMC_PDINV_PMPDIDX_SHIFT)))
+
+/**
+ * AVF_FIND_SD_INDEX_LIMIT - finds segment descriptor index limit
+ * @hmc_info: pointer to the HMC configuration information structure
+ * @type: type of HMC resources we're searching
+ * @index: starting index for the object
+ * @cnt: number of objects we're trying to create
+ * @sd_idx: pointer to return index of the segment descriptor in question
+ * @sd_limit: pointer to return the maximum number of segment descriptors
+ *
+ * This function calculates the segment descriptor index and index limit
+ * for the resource defined by avf_hmc_rsrc_type.
+ **/
+#define AVF_FIND_SD_INDEX_LIMIT(hmc_info, type, index, cnt, sd_idx, sd_limit)\
+{									\
+	u64 fpm_addr, fpm_limit;					\
+	fpm_addr = (hmc_info)->hmc_obj[(type)].base +			\
+		   (hmc_info)->hmc_obj[(type)].size * (index);		\
+	fpm_limit = fpm_addr + (hmc_info)->hmc_obj[(type)].size * (cnt);\
+	*(sd_idx) = (u32)(fpm_addr / AVF_HMC_DIRECT_BP_SIZE);		\
+	*(sd_limit) = (u32)((fpm_limit - 1) / AVF_HMC_DIRECT_BP_SIZE);	\
+	/* add one more to the limit to correct our range */		\
+	*(sd_limit) += 1;						\
+}
+
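These HMC helpers are PF-side constructs carried along with the shared code; the AVF PMD itself does not program segment descriptors. A purely illustrative worked example of the arithmetic in the macro above:

/* Illustrative only: with an object base of 0, an object size of 128 bytes
 * (the Tx queue context size) and a request for objects 0..1023,
 * fpm_addr = 0 and fpm_limit = 1024 * 128 = 0x20000, so sd_idx = 0 and
 * sd_limit = (0x20000 - 1) / 0x200000 + 1 = 1: one 2 MB direct backing
 * page covers the whole range.
 */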
+/**
+ * AVF_FIND_PD_INDEX_LIMIT - finds page descriptor index limit
+ * @hmc_info: pointer to the HMC configuration information struct
+ * @type: HMC resource type we're examining
+ * @idx: starting index for the object
+ * @cnt: number of objects we're trying to create
+ * @pd_index: pointer to return page descriptor index
+ * @pd_limit: pointer to return page descriptor index limit
+ *
+ * Calculates the page descriptor index and index limit for the resource
+ * defined by avf_hmc_rsrc_type.
+ **/
+#define AVF_FIND_PD_INDEX_LIMIT(hmc_info, type, idx, cnt, pd_index, pd_limit)\
+{									\
+	u64 fpm_adr, fpm_limit;						\
+	fpm_adr = (hmc_info)->hmc_obj[(type)].base +			\
+		  (hmc_info)->hmc_obj[(type)].size * (idx);		\
+	fpm_limit = fpm_adr + (hmc_info)->hmc_obj[(type)].size * (cnt);	\
+	*(pd_index) = (u32)(fpm_adr / AVF_HMC_PAGED_BP_SIZE);		\
+	*(pd_limit) = (u32)((fpm_limit - 1) / AVF_HMC_PAGED_BP_SIZE);	\
+	/* add one more to the limit to correct our range */		\
+	*(pd_limit) += 1;						\
+}
+enum avf_status_code avf_add_sd_table_entry(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 sd_index,
+					      enum avf_sd_entry_type type,
+					      u64 direct_mode_sz);
+
+enum avf_status_code avf_add_pd_table_entry(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 pd_index,
+					      struct avf_dma_mem *rsrc_pg);
+enum avf_status_code avf_remove_pd_bp(struct avf_hw *hw,
+					struct avf_hmc_info *hmc_info,
+					u32 idx);
+enum avf_status_code avf_prep_remove_sd_bp(struct avf_hmc_info *hmc_info,
+					     u32 idx);
+enum avf_status_code avf_remove_sd_bp_new(struct avf_hw *hw,
+					    struct avf_hmc_info *hmc_info,
+					    u32 idx, bool is_pf);
+enum avf_status_code avf_prep_remove_pd_page(struct avf_hmc_info *hmc_info,
+					       u32 idx);
+enum avf_status_code avf_remove_pd_page_new(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 idx, bool is_pf);
+
+#endif /* _AVF_HMC_H_ */
diff --git a/drivers/net/avf/base/avf_lan_hmc.h b/drivers/net/avf/base/avf_lan_hmc.h
new file mode 100644
index 0000000..48805d8
--- /dev/null
+++ b/drivers/net/avf/base/avf_lan_hmc.h
@@ -0,0 +1,200 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_LAN_HMC_H_
+#define _AVF_LAN_HMC_H_
+
+/* forward-declare the HW struct for the compiler */
+struct avf_hw;
+
+/* HMC element context information */
+
+/* Rx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct avf_hmc_obj_rxq {
+	u16 head;
+	u16 cpuid; /* bigger than needed, see above for reason */
+	u64 base;
+	u16 qlen;
+#define AVF_RXQ_CTX_DBUFF_SHIFT 7
+	u16 dbuff; /* bigger than needed, see above for reason */
+#define AVF_RXQ_CTX_HBUFF_SHIFT 6
+	u16 hbuff; /* bigger than needed, see above for reason */
+	u8  dtype;
+	u8  dsize;
+	u8  crcstrip;
+	u8  fc_ena;
+	u8  l2tsel;
+	u8  hsplit_0;
+	u8  hsplit_1;
+	u8  showiv;
+	u32 rxmax; /* bigger than needed, see above for reason */
+	u8  tphrdesc_ena;
+	u8  tphwdesc_ena;
+	u8  tphdata_ena;
+	u8  tphhead_ena;
+	u16 lrxqthresh; /* bigger than needed, see above for reason */
+	u8  prefena;	/* NOTE: normally must be set to 1 at init */
+};
+
+/* Tx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct avf_hmc_obj_txq {
+	u16 head;
+	u8  new_context;
+	u64 base;
+	u8  fc_ena;
+	u8  timesync_ena;
+	u8  fd_ena;
+	u8  alt_vlan_ena;
+	u16 thead_wb;
+	u8  cpuid;
+	u8  head_wb_ena;
+	u16 qlen;
+	u8  tphrdesc_ena;
+	u8  tphrpacket_ena;
+	u8  tphwdesc_ena;
+	u64 head_wb_addr;
+	u32 crc;
+	u16 rdylist;
+	u8  rdylist_act;
+};
+
+/* for hsplit_0 field of Rx HMC context */
+enum avf_hmc_obj_rx_hsplit_0 {
+	AVF_HMC_OBJ_RX_HSPLIT_0_NO_SPLIT      = 0,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_L2      = 1,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_IP      = 2,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_TCP_UDP = 4,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_SCTP    = 8,
+};
+
+/* fcoe_cntx and fcoe_filt are for debugging purposes only */
+struct avf_hmc_obj_fcoe_cntx {
+	u32 rsv[32];
+};
+
+struct avf_hmc_obj_fcoe_filt {
+	u32 rsv[8];
+};
+
+/* Context sizes for LAN objects */
+enum avf_hmc_lan_object_size {
+	AVF_HMC_LAN_OBJ_SZ_8   = 0x3,
+	AVF_HMC_LAN_OBJ_SZ_16  = 0x4,
+	AVF_HMC_LAN_OBJ_SZ_32  = 0x5,
+	AVF_HMC_LAN_OBJ_SZ_64  = 0x6,
+	AVF_HMC_LAN_OBJ_SZ_128 = 0x7,
+	AVF_HMC_LAN_OBJ_SZ_256 = 0x8,
+	AVF_HMC_LAN_OBJ_SZ_512 = 0x9,
+};
+
+#define AVF_HMC_L2OBJ_BASE_ALIGNMENT 512
+#define AVF_HMC_OBJ_SIZE_TXQ         128
+#define AVF_HMC_OBJ_SIZE_RXQ         32
+#define AVF_HMC_OBJ_SIZE_FCOE_CNTX   64
+#define AVF_HMC_OBJ_SIZE_FCOE_FILT   64
+
+enum avf_hmc_lan_rsrc_type {
+	AVF_HMC_LAN_FULL  = 0,
+	AVF_HMC_LAN_TX    = 1,
+	AVF_HMC_LAN_RX    = 2,
+	AVF_HMC_FCOE_CTX  = 3,
+	AVF_HMC_FCOE_FILT = 4,
+	AVF_HMC_LAN_MAX   = 5
+};
+
+enum avf_hmc_model {
+	AVF_HMC_MODEL_DIRECT_PREFERRED = 0,
+	AVF_HMC_MODEL_DIRECT_ONLY      = 1,
+	AVF_HMC_MODEL_PAGED_ONLY       = 2,
+	AVF_HMC_MODEL_UNKNOWN,
+};
+
+struct avf_hmc_lan_create_obj_info {
+	struct avf_hmc_info *hmc_info;
+	u32 rsrc_type;
+	u32 start_idx;
+	u32 count;
+	enum avf_sd_entry_type entry_type;
+	u64 direct_mode_sz;
+};
+
+struct avf_hmc_lan_delete_obj_info {
+	struct avf_hmc_info *hmc_info;
+	u32 rsrc_type;
+	u32 start_idx;
+	u32 count;
+};
+
+enum avf_status_code avf_init_lan_hmc(struct avf_hw *hw, u32 txq_num,
+					u32 rxq_num, u32 fcoe_cntx_num,
+					u32 fcoe_filt_num);
+enum avf_status_code avf_configure_lan_hmc(struct avf_hw *hw,
+					     enum avf_hmc_model model);
+enum avf_status_code avf_shutdown_lan_hmc(struct avf_hw *hw);
+
+u64 avf_calculate_l2fpm_size(u32 txq_num, u32 rxq_num,
+			      u32 fcoe_cntx_num, u32 fcoe_filt_num);
+enum avf_status_code avf_get_lan_tx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_txq *s);
+enum avf_status_code avf_clear_lan_tx_queue_context(struct avf_hw *hw,
+						      u16 queue);
+enum avf_status_code avf_set_lan_tx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_txq *s);
+enum avf_status_code avf_get_lan_rx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_rxq *s);
+enum avf_status_code avf_clear_lan_rx_queue_context(struct avf_hw *hw,
+						      u16 queue);
+enum avf_status_code avf_set_lan_rx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_rxq *s);
+enum avf_status_code avf_create_lan_hmc_object(struct avf_hw *hw,
+				struct avf_hmc_lan_create_obj_info *info);
+enum avf_status_code avf_delete_lan_hmc_object(struct avf_hw *hw,
+				struct avf_hmc_lan_delete_obj_info *info);
+
+#endif /* _AVF_LAN_HMC_H_ */
diff --git a/drivers/net/avf/base/avf_osdep.h b/drivers/net/avf/base/avf_osdep.h
new file mode 100644
index 0000000..2f46bb2
--- /dev/null
+++ b/drivers/net/avf/base/avf_osdep.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_OSDEP_H_
+#define _AVF_OSDEP_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdarg.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_io.h>
+
+#include "../avf_log.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t         u8;
+typedef int8_t          s8;
+typedef uint16_t        u16;
+typedef uint32_t        u32;
+typedef int32_t         s32;
+typedef uint64_t        u64;
+
+#define __iomem
+#define hw_dbg(hw, S, A...) do {} while (0)
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN                  6
+#endif
+
+#ifndef __le16
+#define __le16          uint16_t
+#endif
+#ifndef __le32
+#define __le32          uint32_t
+#endif
+#ifndef __le64
+#define __le64          uint64_t
+#endif
+#ifndef __be16
+#define __be16          uint16_t
+#endif
+#ifndef __be32
+#define __be32          uint32_t
+#endif
+#ifndef __be64
+#define __be64          uint64_t
+#endif
+
+#define FALSE           0
+#define TRUE            1
+#define false           0
+#define true            1
+
+#define min(a,b) RTE_MIN(a,b)
+#define max(a,b) RTE_MAX(a,b)
+
+#define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))
+#define ASSERT(x) do { if (!(x)) rte_panic("AVF: " #x); } while (0)
+
+#define DEBUGOUT(S)             PMD_DRV_LOG_RAW(DEBUG, S)
+#define DEBUGOUT2(S, A...)      PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F)            DEBUGOUT(F "\n")
+
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define cpu_to_le16(o) rte_cpu_to_le_16(o)
+#define cpu_to_le32(s) rte_cpu_to_le_32(s)
+#define cpu_to_le64(h) rte_cpu_to_le_64(h)
+#define le16_to_cpu(a) rte_le_to_cpu_16(a)
+#define le32_to_cpu(c) rte_le_to_cpu_32(c)
+#define le64_to_cpu(k) rte_le_to_cpu_64(k)
+
+#define avf_memset(a, b, c, d) memset((a), (b), (c))
+#define avf_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
+
+#define avf_usec_delay(x) rte_delay_us(x)
+#define avf_msec_delay(x) rte_delay_us(1000*(x))
+
+#define AVF_PCI_REG(reg)		rte_read32(reg)
+#define AVF_PCI_REG_ADDR(a, reg) \
+	((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+
+#define AVF_PCI_REG_WRITE(reg, value)		\
+	rte_write32((rte_cpu_to_le_32(value)), reg)
+#define AVF_PCI_REG_WRITE_RELAXED(reg, value)	\
+	rte_write32_relaxed((rte_cpu_to_le_32(value)), reg)
+static inline
+uint32_t avf_read_addr(volatile void *addr)
+{
+	return rte_le_to_cpu_32(AVF_PCI_REG(addr));
+}
+
+#define AVF_READ_REG(hw, reg) \
+	avf_read_addr(AVF_PCI_REG_ADDR((hw), (reg)))
+#define AVF_WRITE_REG(hw, reg, value) \
+	AVF_PCI_REG_WRITE(AVF_PCI_REG_ADDR((hw), (reg)), (value))
+#define AVF_WRITE_FLUSH(a) \
+	AVF_READ_REG(a, AVFGEN_RSTAT)
+
+#define rd32(a, reg) avf_read_addr(AVF_PCI_REG_ADDR((a), (reg)))
+#define wr32(a, reg, value) \
+	AVF_PCI_REG_WRITE(AVF_PCI_REG_ADDR((a), (reg)), (value))
+
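All register access in the shared code funnels through these wrappers, which resolve to rte_read32()/rte_write32() with little-endian conversion. A hedged sketch of how a driver path built on this layer would bump an admin queue tail register (AVF_ATQT1 comes from avf_register.h; the helper name and the next_to_use argument are illustrative):

static void example_bump_atq_tail(struct avf_hw *hw, u16 next_to_use)
{
	wr32(hw, AVF_ATQT1, next_to_use);
	/* a read of AVFGEN_RSTAT is used here only to post the write */
	AVF_WRITE_FLUSH(hw);
}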
+#define ARRAY_SIZE(arr) (sizeof(arr)/sizeof(arr[0]))
+
+#define avf_debug(h, m, s, ...)                                \
+do {                                                            \
+	if (((m) & (h)->debug_mask))                            \
+		PMD_DRV_LOG_RAW(DEBUG, "avf %02x.%x " s,       \
+			(h)->bus.device, (h)->bus.func,         \
+					##__VA_ARGS__);         \
+} while (0)
+
+/* memory allocation tracking */
+struct avf_dma_mem {
+	void *va;
+	u64 pa;
+	u32 size;
+	const void *zone;
+} __attribute__((packed));
+
+struct avf_virt_mem {
+	void *va;
+	u32 size;
+} __attribute__((packed));
+
+/* SW spinlock */
+struct avf_spinlock {
+	rte_spinlock_t spinlock;
+};
+
+#define avf_allocate_dma_mem(h, m, unused, s, a) \
+			avf_allocate_dma_mem_d(h, m, s, a)
+#define avf_free_dma_mem(h, m) avf_free_dma_mem_d(h, m)
+
+#define avf_allocate_virt_mem(h, m, s) avf_allocate_virt_mem_d(h, m, s)
+#define avf_free_virt_mem(h, m) avf_free_virt_mem_d(h, m)
+
+#define avf_init_spinlock(_sp) avf_init_spinlock_d(_sp)
+#define avf_acquire_spinlock(_sp) avf_acquire_spinlock_d(_sp)
+#define avf_release_spinlock(_sp) avf_release_spinlock_d(_sp)
+#define avf_destroy_spinlock(_sp) avf_destroy_spinlock_d(_sp)
+
+#endif /* _AVF_OSDEP_H_ */
diff --git a/drivers/net/avf/base/avf_prototype.h b/drivers/net/avf/base/avf_prototype.h
new file mode 100644
index 0000000..de031dc
--- /dev/null
+++ b/drivers/net/avf/base/avf_prototype.h
@@ -0,0 +1,206 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_PROTOTYPE_H_
+#define _AVF_PROTOTYPE_H_
+
+#include "avf_type.h"
+#include "avf_alloc.h"
+#include "virtchnl.h"
+
+/* Prototypes for shared code functions that are not in
+ * the standard function pointer structures.  These exist
+ * mostly because they are needed even before init has
+ * happened and assist in the early SW and FW setup.
+ */
+
+/* adminq functions */
+enum avf_status_code avf_init_adminq(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_adminq(struct avf_hw *hw);
+enum avf_status_code avf_init_asq(struct avf_hw *hw);
+enum avf_status_code avf_init_arq(struct avf_hw *hw);
+enum avf_status_code avf_alloc_adminq_asq_ring(struct avf_hw *hw);
+enum avf_status_code avf_alloc_adminq_arq_ring(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_asq(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_arq(struct avf_hw *hw);
+u16 avf_clean_asq(struct avf_hw *hw);
+void avf_free_adminq_asq(struct avf_hw *hw);
+void avf_free_adminq_arq(struct avf_hw *hw);
+enum avf_status_code avf_validate_mac_addr(u8 *mac_addr);
+void avf_adminq_init_ring_data(struct avf_hw *hw);
+enum avf_status_code avf_clean_arq_element(struct avf_hw *hw,
+					     struct avf_arq_event_info *e,
+					     u16 *events_pending);
+enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
+				struct avf_aq_desc *desc,
+				void *buff, /* can be NULL */
+				u16  buff_size,
+				struct avf_asq_cmd_details *cmd_details);
+bool avf_asq_done(struct avf_hw *hw);
+
+/* debug function for adminq */
+void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask,
+		   void *desc, void *buffer, u16 buf_len);
+
+void avf_idle_aq(struct avf_hw *hw);
+bool avf_check_asq_alive(struct avf_hw *hw);
+enum avf_status_code avf_aq_queue_shutdown(struct avf_hw *hw, bool unloading);
+
+enum avf_status_code avf_aq_get_rss_lut(struct avf_hw *hw, u16 seid,
+					  bool pf_lut, u8 *lut, u16 lut_size);
+enum avf_status_code avf_aq_set_rss_lut(struct avf_hw *hw, u16 seid,
+					  bool pf_lut, u8 *lut, u16 lut_size);
+enum avf_status_code avf_aq_get_rss_key(struct avf_hw *hw,
+				     u16 seid,
+				     struct avf_aqc_get_set_rss_key_data *key);
+enum avf_status_code avf_aq_set_rss_key(struct avf_hw *hw,
+				     u16 seid,
+				     struct avf_aqc_get_set_rss_key_data *key);
+const char *avf_aq_str(struct avf_hw *hw, enum avf_admin_queue_err aq_err);
+const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err);
+
+enum avf_status_code avf_set_mac_type(struct avf_hw *hw);
+
+extern struct avf_rx_ptype_decoded avf_ptype_lookup[];
+
+STATIC INLINE struct avf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
+{
+	return avf_ptype_lookup[ptype];
+}
+
+/* prototype for functions used for SW spinlocks */
+void avf_init_spinlock(struct avf_spinlock *sp);
+void avf_acquire_spinlock(struct avf_spinlock *sp);
+void avf_release_spinlock(struct avf_spinlock *sp);
+void avf_destroy_spinlock(struct avf_spinlock *sp);
+
+/* avf_common for VF drivers */
+void avf_parse_hw_config(struct avf_hw *hw,
+			     struct virtchnl_vf_resource *msg);
+enum avf_status_code avf_reset(struct avf_hw *hw);
+enum avf_status_code avf_aq_send_msg_to_pf(struct avf_hw *hw,
+				enum virtchnl_ops v_opcode,
+				enum avf_status_code v_retval,
+				u8 *msg, u16 msglen,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_set_filter_control(struct avf_hw *hw,
+				struct avf_filter_control_settings *settings);
+enum avf_status_code avf_aq_add_rem_control_packet_filter(struct avf_hw *hw,
+				u8 *mac_addr, u16 ethtype, u16 flags,
+				u16 vsi_seid, u16 queue, bool is_add,
+				struct avf_control_filter_stats *stats,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_debug_dump(struct avf_hw *hw, u8 cluster_id,
+				u8 table_id, u32 start_index, u16 buff_size,
+				void *buff, u16 *ret_buff_size,
+				u8 *ret_next_table, u32 *ret_next_index,
+				struct avf_asq_cmd_details *cmd_details);
+void avf_add_filter_to_drop_tx_flow_control_frames(struct avf_hw *hw,
+						    u16 vsi_seid);
+enum avf_status_code avf_aq_rx_ctl_read_register(struct avf_hw *hw,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr);
+enum avf_status_code avf_aq_rx_ctl_write_register(struct avf_hw *hw,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val);
+enum avf_status_code avf_aq_set_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+
+enum avf_status_code avf_aq_set_arp_proxy_config(struct avf_hw *hw,
+			struct avf_aqc_arp_proxy_data *proxy_config,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_set_ns_proxy_table_entry(struct avf_hw *hw,
+			struct avf_aqc_ns_proxy_data *ns_proxy_table_entry,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_set_clear_wol_filter(struct avf_hw *hw,
+			u8 filter_index,
+			struct avf_aqc_set_wol_filter_data *filter,
+			bool set_filter, bool no_wol_tco,
+			bool filter_valid, bool no_wol_tco_valid,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_get_wake_event_reason(struct avf_hw *hw,
+			u16 *wake_reason,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_clear_all_wol_filters(struct avf_hw *hw,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_read_phy_register_clause22(struct avf_hw *hw,
+					u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register_clause22(struct avf_hw *hw,
+					u16 reg, u8 phy_addr, u16 value);
+enum avf_status_code avf_read_phy_register_clause45(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register_clause45(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 value);
+enum avf_status_code avf_read_phy_register(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 value);
+u8 avf_get_phy_address(struct avf_hw *hw, u8 dev_num);
+enum avf_status_code avf_blink_phy_link_led(struct avf_hw *hw,
+					      u32 time, u32 interval);
+enum avf_status_code avf_aq_write_ddp(struct avf_hw *hw, void *buff,
+					u16 buff_size, u32 track_id,
+					u32 *error_offset, u32 *error_info,
+					struct avf_asq_cmd_details *
+					cmd_details);
+enum avf_status_code avf_aq_get_ddp_list(struct avf_hw *hw, void *buff,
+					   u16 buff_size, u8 flags,
+					   struct avf_asq_cmd_details *
+					   cmd_details);
+struct avf_generic_seg_header *
+avf_find_segment_in_package(u32 segment_type,
+			     struct avf_package_header *pkg_header);
+struct avf_profile_section_header *
+avf_find_section_in_profile(u32 section_type,
+			     struct avf_profile_segment *profile);
+enum avf_status_code
+avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *avf_seg,
+		   u32 track_id);
+enum avf_status_code
+avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *avf_seg,
+		      u32 track_id);
+enum avf_status_code
+avf_add_pinfo_to_list(struct avf_hw *hw,
+		       struct avf_profile_segment *profile,
+		       u8 *profile_info_sec, u32 track_id);
+#endif /* _AVF_PROTOTYPE_H_ */
diff --git a/drivers/net/avf/base/avf_register.h b/drivers/net/avf/base/avf_register.h
new file mode 100644
index 0000000..ba5a9f3
--- /dev/null
+++ b/drivers/net/avf/base/avf_register.h
@@ -0,0 +1,346 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_REGISTER_H_
+#define _AVF_REGISTER_H_
+
+
+#define AVFMSIX_PBA1(_i)          (0x00002000 + ((_i) * 4)) /* _i=0...19 */ /* Reset: VFLR */
+#define AVFMSIX_PBA1_MAX_INDEX    19
+#define AVFMSIX_PBA1_PENBIT_SHIFT 0
+#define AVFMSIX_PBA1_PENBIT_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_PBA1_PENBIT_SHIFT)
+#define AVFMSIX_TADD1(_i)              (0x00002100 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TADD1_MAX_INDEX        639
+#define AVFMSIX_TADD1_MSIXTADD10_SHIFT 0
+#define AVFMSIX_TADD1_MSIXTADD10_MASK  AVF_MASK(0x3, AVFMSIX_TADD1_MSIXTADD10_SHIFT)
+#define AVFMSIX_TADD1_MSIXTADD_SHIFT   2
+#define AVFMSIX_TADD1_MSIXTADD_MASK    AVF_MASK(0x3FFFFFFF, AVFMSIX_TADD1_MSIXTADD_SHIFT)
+#define AVFMSIX_TMSG1(_i)            (0x00002108 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TMSG1_MAX_INDEX      639
+#define AVFMSIX_TMSG1_MSIXTMSG_SHIFT 0
+#define AVFMSIX_TMSG1_MSIXTMSG_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TMSG1_MSIXTMSG_SHIFT)
+#define AVFMSIX_TUADD1(_i)             (0x00002104 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TUADD1_MAX_INDEX       639
+#define AVFMSIX_TUADD1_MSIXTUADD_SHIFT 0
+#define AVFMSIX_TUADD1_MSIXTUADD_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TUADD1_MSIXTUADD_SHIFT)
+#define AVFMSIX_TVCTRL1(_i)        (0x0000210C + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TVCTRL1_MAX_INDEX  639
+#define AVFMSIX_TVCTRL1_MASK_SHIFT 0
+#define AVFMSIX_TVCTRL1_MASK_MASK  AVF_MASK(0x1, AVFMSIX_TVCTRL1_MASK_SHIFT)
+#define AVF_ARQBAH1              0x00006000 /* Reset: EMPR */
+#define AVF_ARQBAH1_ARQBAH_SHIFT 0
+#define AVF_ARQBAH1_ARQBAH_MASK  AVF_MASK(0xFFFFFFFF, AVF_ARQBAH1_ARQBAH_SHIFT)
+#define AVF_ARQBAL1              0x00006C00 /* Reset: EMPR */
+#define AVF_ARQBAL1_ARQBAL_SHIFT 0
+#define AVF_ARQBAL1_ARQBAL_MASK  AVF_MASK(0xFFFFFFFF, AVF_ARQBAL1_ARQBAL_SHIFT)
+#define AVF_ARQH1            0x00007400 /* Reset: EMPR */
+#define AVF_ARQH1_ARQH_SHIFT 0
+#define AVF_ARQH1_ARQH_MASK  AVF_MASK(0x3FF, AVF_ARQH1_ARQH_SHIFT)
+#define AVF_ARQLEN1                 0x00008000 /* Reset: EMPR */
+#define AVF_ARQLEN1_ARQLEN_SHIFT    0
+#define AVF_ARQLEN1_ARQLEN_MASK     AVF_MASK(0x3FF, AVF_ARQLEN1_ARQLEN_SHIFT)
+#define AVF_ARQLEN1_ARQVFE_SHIFT    28
+#define AVF_ARQLEN1_ARQVFE_MASK     AVF_MASK(0x1, AVF_ARQLEN1_ARQVFE_SHIFT)
+#define AVF_ARQLEN1_ARQOVFL_SHIFT   29
+#define AVF_ARQLEN1_ARQOVFL_MASK    AVF_MASK(0x1, AVF_ARQLEN1_ARQOVFL_SHIFT)
+#define AVF_ARQLEN1_ARQCRIT_SHIFT   30
+#define AVF_ARQLEN1_ARQCRIT_MASK    AVF_MASK(0x1, AVF_ARQLEN1_ARQCRIT_SHIFT)
+#define AVF_ARQLEN1_ARQENABLE_SHIFT 31
+#define AVF_ARQLEN1_ARQENABLE_MASK  AVF_MASK(0x1, AVF_ARQLEN1_ARQENABLE_SHIFT)
+#define AVF_ARQT1            0x00007000 /* Reset: EMPR */
+#define AVF_ARQT1_ARQT_SHIFT 0
+#define AVF_ARQT1_ARQT_MASK  AVF_MASK(0x3FF, AVF_ARQT1_ARQT_SHIFT)
+#define AVF_ATQBAH1              0x00007800 /* Reset: EMPR */
+#define AVF_ATQBAH1_ATQBAH_SHIFT 0
+#define AVF_ATQBAH1_ATQBAH_MASK  AVF_MASK(0xFFFFFFFF, AVF_ATQBAH1_ATQBAH_SHIFT)
+#define AVF_ATQBAL1              0x00007C00 /* Reset: EMPR */
+#define AVF_ATQBAL1_ATQBAL_SHIFT 0
+#define AVF_ATQBAL1_ATQBAL_MASK  AVF_MASK(0xFFFFFFFF, AVF_ATQBAL1_ATQBAL_SHIFT)
+#define AVF_ATQH1            0x00006400 /* Reset: EMPR */
+#define AVF_ATQH1_ATQH_SHIFT 0
+#define AVF_ATQH1_ATQH_MASK  AVF_MASK(0x3FF, AVF_ATQH1_ATQH_SHIFT)
+#define AVF_ATQLEN1                 0x00006800 /* Reset: EMPR */
+#define AVF_ATQLEN1_ATQLEN_SHIFT    0
+#define AVF_ATQLEN1_ATQLEN_MASK     AVF_MASK(0x3FF, AVF_ATQLEN1_ATQLEN_SHIFT)
+#define AVF_ATQLEN1_ATQVFE_SHIFT    28
+#define AVF_ATQLEN1_ATQVFE_MASK     AVF_MASK(0x1, AVF_ATQLEN1_ATQVFE_SHIFT)
+#define AVF_ATQLEN1_ATQOVFL_SHIFT   29
+#define AVF_ATQLEN1_ATQOVFL_MASK    AVF_MASK(0x1, AVF_ATQLEN1_ATQOVFL_SHIFT)
+#define AVF_ATQLEN1_ATQCRIT_SHIFT   30
+#define AVF_ATQLEN1_ATQCRIT_MASK    AVF_MASK(0x1, AVF_ATQLEN1_ATQCRIT_SHIFT)
+#define AVF_ATQLEN1_ATQENABLE_SHIFT 31
+#define AVF_ATQLEN1_ATQENABLE_MASK  AVF_MASK(0x1, AVF_ATQLEN1_ATQENABLE_SHIFT)
+#define AVF_ATQT1            0x00008400 /* Reset: EMPR */
+#define AVF_ATQT1_ATQT_SHIFT 0
+#define AVF_ATQT1_ATQT_MASK  AVF_MASK(0x3FF, AVF_ATQT1_ATQT_SHIFT)
+#define AVFGEN_RSTAT                 0x00008800 /* Reset: VFR */
+#define AVFGEN_RSTAT_VFR_STATE_SHIFT 0
+#define AVFGEN_RSTAT_VFR_STATE_MASK  AVF_MASK(0x3, AVFGEN_RSTAT_VFR_STATE_SHIFT)
+#define AVFINT_DYN_CTL01                       0x00005C00 /* Reset: VFR */
+#define AVFINT_DYN_CTL01_INTENA_SHIFT          0
+#define AVFINT_DYN_CTL01_INTENA_MASK           AVF_MASK(0x1, AVFINT_DYN_CTL01_INTENA_SHIFT)
+#define AVFINT_DYN_CTL01_CLEARPBA_SHIFT        1
+#define AVFINT_DYN_CTL01_CLEARPBA_MASK         AVF_MASK(0x1, AVFINT_DYN_CTL01_CLEARPBA_SHIFT)
+#define AVFINT_DYN_CTL01_SWINT_TRIG_SHIFT      2
+#define AVFINT_DYN_CTL01_SWINT_TRIG_MASK       AVF_MASK(0x1, AVFINT_DYN_CTL01_SWINT_TRIG_SHIFT)
+#define AVFINT_DYN_CTL01_ITR_INDX_SHIFT        3
+#define AVFINT_DYN_CTL01_ITR_INDX_MASK         AVF_MASK(0x3, AVFINT_DYN_CTL01_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTL01_INTERVAL_SHIFT        5
+#define AVFINT_DYN_CTL01_INTERVAL_MASK         AVF_MASK(0xFFF, AVFINT_DYN_CTL01_INTERVAL_SHIFT)
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT 24
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_MASK  AVF_MASK(0x1, AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT)
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT     25
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_MASK      AVF_MASK(0x3, AVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTL01_INTENA_MSK_SHIFT      31
+#define AVFINT_DYN_CTL01_INTENA_MSK_MASK       AVF_MASK(0x1, AVFINT_DYN_CTL01_INTENA_MSK_SHIFT)
+#define AVFINT_DYN_CTLN1(_INTVF)               (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */
+#define AVFINT_DYN_CTLN1_MAX_INDEX             15
+#define AVFINT_DYN_CTLN1_INTENA_SHIFT          0
+#define AVFINT_DYN_CTLN1_INTENA_MASK           AVF_MASK(0x1, AVFINT_DYN_CTLN1_INTENA_SHIFT)
+#define AVFINT_DYN_CTLN1_CLEARPBA_SHIFT        1
+#define AVFINT_DYN_CTLN1_CLEARPBA_MASK         AVF_MASK(0x1, AVFINT_DYN_CTLN1_CLEARPBA_SHIFT)
+#define AVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT      2
+#define AVFINT_DYN_CTLN1_SWINT_TRIG_MASK       AVF_MASK(0x1, AVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT)
+#define AVFINT_DYN_CTLN1_ITR_INDX_SHIFT        3
+#define AVFINT_DYN_CTLN1_ITR_INDX_MASK         AVF_MASK(0x3, AVFINT_DYN_CTLN1_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTLN1_INTERVAL_SHIFT        5
+#define AVFINT_DYN_CTLN1_INTERVAL_MASK         AVF_MASK(0xFFF, AVFINT_DYN_CTLN1_INTERVAL_SHIFT)
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT 24
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_MASK  AVF_MASK(0x1, AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT)
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT     25
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_MASK      AVF_MASK(0x3, AVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTLN1_INTENA_MSK_SHIFT      31
+#define AVFINT_DYN_CTLN1_INTENA_MSK_MASK       AVF_MASK(0x1, AVFINT_DYN_CTLN1_INTENA_MSK_SHIFT)
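These shift/mask pairs are meant to be composed into single 32-bit writes through the osdep wr32() helper. A hedged sketch of re-arming a queue interrupt vector follows, assuming (as in similar Intel VF drivers) that vector 0 is reserved for the admin queue so queue vector N maps to AVFINT_DYN_CTLN1(N - 1), and that ITR index 3 means "do not update the interval"; both assumptions should be checked against the datasheet.

static inline void example_rearm_vector(struct avf_hw *hw, u16 vector)
{
	u32 val = AVFINT_DYN_CTLN1_INTENA_MASK |
		  AVFINT_DYN_CTLN1_CLEARPBA_MASK |
		  (3 << AVFINT_DYN_CTLN1_ITR_INDX_SHIFT);

	wr32(hw, AVFINT_DYN_CTLN1(vector - 1), val);
}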
+#define AVFINT_ICR0_ENA1                        0x00005000 /* Reset: CORER */
+#define AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT 25
+#define AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_MASK  AVF_MASK(0x1, AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT)
+#define AVFINT_ICR0_ENA1_ADMINQ_SHIFT           30
+#define AVFINT_ICR0_ENA1_ADMINQ_MASK            AVF_MASK(0x1, AVFINT_ICR0_ENA1_ADMINQ_SHIFT)
+#define AVFINT_ICR0_ENA1_RSVD_SHIFT             31
+#define AVFINT_ICR0_ENA1_RSVD_MASK              AVF_MASK(0x1, AVFINT_ICR0_ENA1_RSVD_SHIFT)
+#define AVFINT_ICR01                        0x00004800 /* Reset: CORER */
+#define AVFINT_ICR01_INTEVENT_SHIFT         0
+#define AVFINT_ICR01_INTEVENT_MASK          AVF_MASK(0x1, AVFINT_ICR01_INTEVENT_SHIFT)
+#define AVFINT_ICR01_QUEUE_0_SHIFT          1
+#define AVFINT_ICR01_QUEUE_0_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_0_SHIFT)
+#define AVFINT_ICR01_QUEUE_1_SHIFT          2
+#define AVFINT_ICR01_QUEUE_1_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_1_SHIFT)
+#define AVFINT_ICR01_QUEUE_2_SHIFT          3
+#define AVFINT_ICR01_QUEUE_2_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_2_SHIFT)
+#define AVFINT_ICR01_QUEUE_3_SHIFT          4
+#define AVFINT_ICR01_QUEUE_3_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_3_SHIFT)
+#define AVFINT_ICR01_LINK_STAT_CHANGE_SHIFT 25
+#define AVFINT_ICR01_LINK_STAT_CHANGE_MASK  AVF_MASK(0x1, AVFINT_ICR01_LINK_STAT_CHANGE_SHIFT)
+#define AVFINT_ICR01_ADMINQ_SHIFT           30
+#define AVFINT_ICR01_ADMINQ_MASK            AVF_MASK(0x1, AVFINT_ICR01_ADMINQ_SHIFT)
+#define AVFINT_ICR01_SWINT_SHIFT            31
+#define AVFINT_ICR01_SWINT_MASK             AVF_MASK(0x1, AVFINT_ICR01_SWINT_SHIFT)
+#define AVFINT_ITR01(_i)            (0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset: VFR */
+#define AVFINT_ITR01_MAX_INDEX      2
+#define AVFINT_ITR01_INTERVAL_SHIFT 0
+#define AVFINT_ITR01_INTERVAL_MASK  AVF_MASK(0xFFF, AVFINT_ITR01_INTERVAL_SHIFT)
+#define AVFINT_ITRN1(_i, _INTVF)     (0x00002800 + ((_i) * 64 + (_INTVF) * 4)) /* _i=0...2, _INTVF=0...15 */ /* Reset: VFR */
+#define AVFINT_ITRN1_MAX_INDEX      2
+#define AVFINT_ITRN1_INTERVAL_SHIFT 0
+#define AVFINT_ITRN1_INTERVAL_MASK  AVF_MASK(0xFFF, AVFINT_ITRN1_INTERVAL_SHIFT)
+#define AVFINT_STAT_CTL01                      0x00005400 /* Reset: CORER */
+#define AVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT 2
+#define AVFINT_STAT_CTL01_OTHER_ITR_INDX_MASK  AVF_MASK(0x3, AVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT)
+#define AVF_QRX_TAIL1(_Q)        (0x00002000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define AVF_QRX_TAIL1_MAX_INDEX  15
+#define AVF_QRX_TAIL1_TAIL_SHIFT 0
+#define AVF_QRX_TAIL1_TAIL_MASK  AVF_MASK(0x1FFF, AVF_QRX_TAIL1_TAIL_SHIFT)
+#define AVF_QTX_TAIL1(_Q)        (0x00000000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: PFR */
+#define AVF_QTX_TAIL1_MAX_INDEX  15
+#define AVF_QTX_TAIL1_TAIL_SHIFT 0
+#define AVF_QTX_TAIL1_TAIL_MASK  AVF_MASK(0x1FFF, AVF_QTX_TAIL1_TAIL_SHIFT)
+#define AVFMSIX_PBA              0x00002000 /* Reset: VFLR */
+#define AVFMSIX_PBA_PENBIT_SHIFT 0
+#define AVFMSIX_PBA_PENBIT_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_PBA_PENBIT_SHIFT)
+#define AVFMSIX_TADD(_i)              (0x00000000 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TADD_MAX_INDEX        16
+#define AVFMSIX_TADD_MSIXTADD10_SHIFT 0
+#define AVFMSIX_TADD_MSIXTADD10_MASK  AVF_MASK(0x3, AVFMSIX_TADD_MSIXTADD10_SHIFT)
+#define AVFMSIX_TADD_MSIXTADD_SHIFT   2
+#define AVFMSIX_TADD_MSIXTADD_MASK    AVF_MASK(0x3FFFFFFF, AVFMSIX_TADD_MSIXTADD_SHIFT)
+#define AVFMSIX_TMSG(_i)            (0x00000008 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TMSG_MAX_INDEX      16
+#define AVFMSIX_TMSG_MSIXTMSG_SHIFT 0
+#define AVFMSIX_TMSG_MSIXTMSG_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TMSG_MSIXTMSG_SHIFT)
+#define AVFMSIX_TUADD(_i)             (0x00000004 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TUADD_MAX_INDEX       16
+#define AVFMSIX_TUADD_MSIXTUADD_SHIFT 0
+#define AVFMSIX_TUADD_MSIXTUADD_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TUADD_MSIXTUADD_SHIFT)
+#define AVFMSIX_TVCTRL(_i)        (0x0000000C + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TVCTRL_MAX_INDEX  16
+#define AVFMSIX_TVCTRL_MASK_SHIFT 0
+#define AVFMSIX_TVCTRL_MASK_MASK  AVF_MASK(0x1, AVFMSIX_TVCTRL_MASK_SHIFT)
+#define AVFCM_PE_ERRDATA                  0x0000DC00 /* Reset: VFR */
+#define AVFCM_PE_ERRDATA_ERROR_CODE_SHIFT 0
+#define AVFCM_PE_ERRDATA_ERROR_CODE_MASK  AVF_MASK(0xF, AVFCM_PE_ERRDATA_ERROR_CODE_SHIFT)
+#define AVFCM_PE_ERRDATA_Q_TYPE_SHIFT     4
+#define AVFCM_PE_ERRDATA_Q_TYPE_MASK      AVF_MASK(0x7, AVFCM_PE_ERRDATA_Q_TYPE_SHIFT)
+#define AVFCM_PE_ERRDATA_Q_NUM_SHIFT      8
+#define AVFCM_PE_ERRDATA_Q_NUM_MASK       AVF_MASK(0x3FFFF, AVFCM_PE_ERRDATA_Q_NUM_SHIFT)
+#define AVFCM_PE_ERRINFO                     0x0000D800 /* Reset: VFR */
+#define AVFCM_PE_ERRINFO_ERROR_VALID_SHIFT   0
+#define AVFCM_PE_ERRINFO_ERROR_VALID_MASK    AVF_MASK(0x1, AVFCM_PE_ERRINFO_ERROR_VALID_SHIFT)
+#define AVFCM_PE_ERRINFO_ERROR_INST_SHIFT    4
+#define AVFCM_PE_ERRINFO_ERROR_INST_MASK     AVF_MASK(0x7, AVFCM_PE_ERRINFO_ERROR_INST_SHIFT)
+#define AVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT 8
+#define AVFCM_PE_ERRINFO_DBL_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT)
+#define AVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT 16
+#define AVFCM_PE_ERRINFO_RLU_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT)
+#define AVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT 24
+#define AVFCM_PE_ERRINFO_RLS_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT)
+#define AVFQF_HENA(_i)             (0x0000C400 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */
+#define AVFQF_HENA_MAX_INDEX       1
+#define AVFQF_HENA_PTYPE_ENA_SHIFT 0
+#define AVFQF_HENA_PTYPE_ENA_MASK  AVF_MASK(0xFFFFFFFF, AVFQF_HENA_PTYPE_ENA_SHIFT)
+#define AVFQF_HKEY(_i)         (0x0000CC00 + ((_i) * 4)) /* _i=0...12 */ /* Reset: CORER */
+#define AVFQF_HKEY_MAX_INDEX   12
+#define AVFQF_HKEY_KEY_0_SHIFT 0
+#define AVFQF_HKEY_KEY_0_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_0_SHIFT)
+#define AVFQF_HKEY_KEY_1_SHIFT 8
+#define AVFQF_HKEY_KEY_1_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_1_SHIFT)
+#define AVFQF_HKEY_KEY_2_SHIFT 16
+#define AVFQF_HKEY_KEY_2_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_2_SHIFT)
+#define AVFQF_HKEY_KEY_3_SHIFT 24
+#define AVFQF_HKEY_KEY_3_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_3_SHIFT)
+#define AVFQF_HLUT(_i)        (0x0000D000 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define AVFQF_HLUT_MAX_INDEX  15
+#define AVFQF_HLUT_LUT0_SHIFT 0
+#define AVFQF_HLUT_LUT0_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT0_SHIFT)
+#define AVFQF_HLUT_LUT1_SHIFT 8
+#define AVFQF_HLUT_LUT1_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT1_SHIFT)
+#define AVFQF_HLUT_LUT2_SHIFT 16
+#define AVFQF_HLUT_LUT2_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT2_SHIFT)
+#define AVFQF_HLUT_LUT3_SHIFT 24
+#define AVFQF_HLUT_LUT3_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT3_SHIFT)
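+/* Illustrative RSS table layout (a sketch only): each AVFQF_HLUT register
+ * holds four 4-bit queue indexes, so entry i of a software lookup table maps
+ * to byte (i % 4) of AVFQF_HLUT(i / 4); AVFQF_HKEY similarly holds the hash
+ * key four bytes per register.
+ */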
+#define AVFQF_HREGION(_i)                  (0x0000D400 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */
+#define AVFQF_HREGION_MAX_INDEX            7
+#define AVFQF_HREGION_OVERRIDE_ENA_0_SHIFT 0
+#define AVFQF_HREGION_OVERRIDE_ENA_0_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_0_SHIFT)
+#define AVFQF_HREGION_REGION_0_SHIFT       1
+#define AVFQF_HREGION_REGION_0_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_0_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_1_SHIFT 4
+#define AVFQF_HREGION_OVERRIDE_ENA_1_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_1_SHIFT)
+#define AVFQF_HREGION_REGION_1_SHIFT       5
+#define AVFQF_HREGION_REGION_1_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_1_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_2_SHIFT 8
+#define AVFQF_HREGION_OVERRIDE_ENA_2_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_2_SHIFT)
+#define AVFQF_HREGION_REGION_2_SHIFT       9
+#define AVFQF_HREGION_REGION_2_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_2_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_3_SHIFT 12
+#define AVFQF_HREGION_OVERRIDE_ENA_3_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_3_SHIFT)
+#define AVFQF_HREGION_REGION_3_SHIFT       13
+#define AVFQF_HREGION_REGION_3_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_3_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_4_SHIFT 16
+#define AVFQF_HREGION_OVERRIDE_ENA_4_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_4_SHIFT)
+#define AVFQF_HREGION_REGION_4_SHIFT       17
+#define AVFQF_HREGION_REGION_4_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_4_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_5_SHIFT 20
+#define AVFQF_HREGION_OVERRIDE_ENA_5_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_5_SHIFT)
+#define AVFQF_HREGION_REGION_5_SHIFT       21
+#define AVFQF_HREGION_REGION_5_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_5_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_6_SHIFT 24
+#define AVFQF_HREGION_OVERRIDE_ENA_6_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_6_SHIFT)
+#define AVFQF_HREGION_REGION_6_SHIFT       25
+#define AVFQF_HREGION_REGION_6_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_6_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_7_SHIFT 28
+#define AVFQF_HREGION_OVERRIDE_ENA_7_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_7_SHIFT)
+#define AVFQF_HREGION_REGION_7_SHIFT       29
+#define AVFQF_HREGION_REGION_7_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_7_SHIFT)
+
+#define AVFINT_DYN_CTL01_WB_ON_ITR_SHIFT       30
+#define AVFINT_DYN_CTL01_WB_ON_ITR_MASK        AVF_MASK(0x1, AVFINT_DYN_CTL01_WB_ON_ITR_SHIFT)
+#define AVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT       30
+#define AVFINT_DYN_CTLN1_WB_ON_ITR_MASK        AVF_MASK(0x1, AVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT)
+#define AVFPE_AEQALLOC1               0x0000A400 /* Reset: VFR */
+#define AVFPE_AEQALLOC1_AECOUNT_SHIFT 0
+#define AVFPE_AEQALLOC1_AECOUNT_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_AEQALLOC1_AECOUNT_SHIFT)
+#define AVFPE_CCQPHIGH1                  0x00009800 /* Reset: VFR */
+#define AVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT 0
+#define AVFPE_CCQPHIGH1_PECCQPHIGH_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT)
+#define AVFPE_CCQPLOW1                 0x0000AC00 /* Reset: VFR */
+#define AVFPE_CCQPLOW1_PECCQPLOW_SHIFT 0
+#define AVFPE_CCQPLOW1_PECCQPLOW_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_CCQPLOW1_PECCQPLOW_SHIFT)
+#define AVFPE_CCQPSTATUS1                   0x0000B800 /* Reset: VFR */
+#define AVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT   0
+#define AVFPE_CCQPSTATUS1_CCQP_DONE_MASK    AVF_MASK(0x1, AVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT)
+#define AVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT 4
+#define AVFPE_CCQPSTATUS1_HMC_PROFILE_MASK  AVF_MASK(0x7, AVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT)
+#define AVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT 16
+#define AVFPE_CCQPSTATUS1_RDMA_EN_VFS_MASK  AVF_MASK(0x3F, AVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT)
+#define AVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT    31
+#define AVFPE_CCQPSTATUS1_CCQP_ERR_MASK     AVF_MASK(0x1, AVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT)
+#define AVFPE_CQACK1              0x0000B000 /* Reset: VFR */
+#define AVFPE_CQACK1_PECQID_SHIFT 0
+#define AVFPE_CQACK1_PECQID_MASK  AVF_MASK(0x1FFFF, AVFPE_CQACK1_PECQID_SHIFT)
+#define AVFPE_CQARM1              0x0000B400 /* Reset: VFR */
+#define AVFPE_CQARM1_PECQID_SHIFT 0
+#define AVFPE_CQARM1_PECQID_MASK  AVF_MASK(0x1FFFF, AVFPE_CQARM1_PECQID_SHIFT)
+#define AVFPE_CQPDB1              0x0000BC00 /* Reset: VFR */
+#define AVFPE_CQPDB1_WQHEAD_SHIFT 0
+#define AVFPE_CQPDB1_WQHEAD_MASK  AVF_MASK(0x7FF, AVFPE_CQPDB1_WQHEAD_SHIFT)
+#define AVFPE_CQPERRCODES1                      0x00009C00 /* Reset: VFR */
+#define AVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT 0
+#define AVFPE_CQPERRCODES1_CQP_MINOR_CODE_MASK  AVF_MASK(0xFFFF, AVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT)
+#define AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT 16
+#define AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_MASK  AVF_MASK(0xFFFF, AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT)
+#define AVFPE_CQPTAIL1                  0x0000A000 /* Reset: VFR */
+#define AVFPE_CQPTAIL1_WQTAIL_SHIFT     0
+#define AVFPE_CQPTAIL1_WQTAIL_MASK      AVF_MASK(0x7FF, AVFPE_CQPTAIL1_WQTAIL_SHIFT)
+#define AVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT 31
+#define AVFPE_CQPTAIL1_CQP_OP_ERR_MASK  AVF_MASK(0x1, AVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT)
+#define AVFPE_IPCONFIG01                        0x00008C00 /* Reset: VFR */
+#define AVFPE_IPCONFIG01_PEIPID_SHIFT           0
+#define AVFPE_IPCONFIG01_PEIPID_MASK            AVF_MASK(0xFFFF, AVFPE_IPCONFIG01_PEIPID_SHIFT)
+#define AVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT 16
+#define AVFPE_IPCONFIG01_USEENTIREIDRANGE_MASK  AVF_MASK(0x1, AVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT)
+#define AVFPE_MRTEIDXMASK1                       0x00009000 /* Reset: VFR */
+#define AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT 0
+#define AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_MASK  AVF_MASK(0x1F, AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT)
+#define AVFPE_RCVUNEXPECTEDERROR1                        0x00009400 /* Reset: VFR */
+#define AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT 0
+#define AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_MASK  AVF_MASK(0xFFFFFF, AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT)
+#define AVFPE_TCPNOWTIMER1               0x0000A800 /* Reset: VFR */
+#define AVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT 0
+#define AVFPE_TCPNOWTIMER1_TCP_NOW_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT)
+#define AVFPE_WQEALLOC1                      0x0000C000 /* Reset: VFR */
+#define AVFPE_WQEALLOC1_PEQPID_SHIFT         0
+#define AVFPE_WQEALLOC1_PEQPID_MASK          AVF_MASK(0x3FFFF, AVFPE_WQEALLOC1_PEQPID_SHIFT)
+#define AVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT 20
+#define AVFPE_WQEALLOC1_WQE_DESC_INDEX_MASK  AVF_MASK(0xFFF, AVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT)
+
+#endif /* _AVF_REGISTER_H_ */
diff --git a/drivers/net/avf/base/avf_status.h b/drivers/net/avf/base/avf_status.h
new file mode 100644
index 0000000..e8a673b
--- /dev/null
+++ b/drivers/net/avf/base/avf_status.h
@@ -0,0 +1,108 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_STATUS_H_
+#define _AVF_STATUS_H_
+
+/* Error Codes */
+enum avf_status_code {
+	AVF_SUCCESS				= 0,
+	AVF_ERR_NVM				= -1,
+	AVF_ERR_NVM_CHECKSUM			= -2,
+	AVF_ERR_PHY				= -3,
+	AVF_ERR_CONFIG				= -4,
+	AVF_ERR_PARAM				= -5,
+	AVF_ERR_MAC_TYPE			= -6,
+	AVF_ERR_UNKNOWN_PHY			= -7,
+	AVF_ERR_LINK_SETUP			= -8,
+	AVF_ERR_ADAPTER_STOPPED		= -9,
+	AVF_ERR_INVALID_MAC_ADDR		= -10,
+	AVF_ERR_DEVICE_NOT_SUPPORTED		= -11,
+	AVF_ERR_MASTER_REQUESTS_PENDING	= -12,
+	AVF_ERR_INVALID_LINK_SETTINGS		= -13,
+	AVF_ERR_AUTONEG_NOT_COMPLETE		= -14,
+	AVF_ERR_RESET_FAILED			= -15,
+	AVF_ERR_SWFW_SYNC			= -16,
+	AVF_ERR_NO_AVAILABLE_VSI		= -17,
+	AVF_ERR_NO_MEMORY			= -18,
+	AVF_ERR_BAD_PTR			= -19,
+	AVF_ERR_RING_FULL			= -20,
+	AVF_ERR_INVALID_PD_ID			= -21,
+	AVF_ERR_INVALID_QP_ID			= -22,
+	AVF_ERR_INVALID_CQ_ID			= -23,
+	AVF_ERR_INVALID_CEQ_ID			= -24,
+	AVF_ERR_INVALID_AEQ_ID			= -25,
+	AVF_ERR_INVALID_SIZE			= -26,
+	AVF_ERR_INVALID_ARP_INDEX		= -27,
+	AVF_ERR_INVALID_FPM_FUNC_ID		= -28,
+	AVF_ERR_QP_INVALID_MSG_SIZE		= -29,
+	AVF_ERR_QP_TOOMANY_WRS_POSTED		= -30,
+	AVF_ERR_INVALID_FRAG_COUNT		= -31,
+	AVF_ERR_QUEUE_EMPTY			= -32,
+	AVF_ERR_INVALID_ALIGNMENT		= -33,
+	AVF_ERR_FLUSHED_QUEUE			= -34,
+	AVF_ERR_INVALID_PUSH_PAGE_INDEX	= -35,
+	AVF_ERR_INVALID_IMM_DATA_SIZE		= -36,
+	AVF_ERR_TIMEOUT			= -37,
+	AVF_ERR_OPCODE_MISMATCH		= -38,
+	AVF_ERR_CQP_COMPL_ERROR		= -39,
+	AVF_ERR_INVALID_VF_ID			= -40,
+	AVF_ERR_INVALID_HMCFN_ID		= -41,
+	AVF_ERR_BACKING_PAGE_ERROR		= -42,
+	AVF_ERR_NO_PBLCHUNKS_AVAILABLE		= -43,
+	AVF_ERR_INVALID_PBLE_INDEX		= -44,
+	AVF_ERR_INVALID_SD_INDEX		= -45,
+	AVF_ERR_INVALID_PAGE_DESC_INDEX	= -46,
+	AVF_ERR_INVALID_SD_TYPE		= -47,
+	AVF_ERR_MEMCPY_FAILED			= -48,
+	AVF_ERR_INVALID_HMC_OBJ_INDEX		= -49,
+	AVF_ERR_INVALID_HMC_OBJ_COUNT		= -50,
+	AVF_ERR_INVALID_SRQ_ARM_LIMIT		= -51,
+	AVF_ERR_SRQ_ENABLED			= -52,
+	AVF_ERR_ADMIN_QUEUE_ERROR		= -53,
+	AVF_ERR_ADMIN_QUEUE_TIMEOUT		= -54,
+	AVF_ERR_BUF_TOO_SHORT			= -55,
+	AVF_ERR_ADMIN_QUEUE_FULL		= -56,
+	AVF_ERR_ADMIN_QUEUE_NO_WORK		= -57,
+	AVF_ERR_BAD_IWARP_CQE			= -58,
+	AVF_ERR_NVM_BLANK_MODE			= -59,
+	AVF_ERR_NOT_IMPLEMENTED		= -60,
+	AVF_ERR_PE_DOORBELL_NOT_ENABLED	= -61,
+	AVF_ERR_DIAG_TEST_FAILED		= -62,
+	AVF_ERR_NOT_READY			= -63,
+	AVF_NOT_SUPPORTED			= -64,
+	AVF_ERR_FIRMWARE_API_VERSION		= -65,
+	AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR	= -66,
+};
+
+#endif /* _AVF_STATUS_H_ */
diff --git a/drivers/net/avf/base/avf_type.h b/drivers/net/avf/base/avf_type.h
new file mode 100644
index 0000000..546c6d2
--- /dev/null
+++ b/drivers/net/avf/base/avf_type.h
@@ -0,0 +1,2024 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_TYPE_H_
+#define _AVF_TYPE_H_
+
+#include "avf_status.h"
+#include "avf_osdep.h"
+#include "avf_register.h"
+#include "avf_adminq.h"
+#include "avf_hmc.h"
+#include "avf_lan_hmc.h"
+#include "avf_devids.h"
+
+#define UNREFERENCED_XPARAMETER
+#define UNREFERENCED_1PARAMETER(_p) (_p);
+#define UNREFERENCED_2PARAMETER(_p, _q) (_p); (_q);
+#define UNREFERENCED_3PARAMETER(_p, _q, _r) (_p); (_q); (_r);
+#define UNREFERENCED_4PARAMETER(_p, _q, _r, _s) (_p); (_q); (_r); (_s);
+#define UNREFERENCED_5PARAMETER(_p, _q, _r, _s, _t) (_p); (_q); (_r); (_s); (_t);
+
+#ifndef LINUX_MACROS
+#ifndef BIT
+#define BIT(a) (1UL << (a))
+#endif /* BIT */
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif /* BIT_ULL */
+#endif /* LINUX_MACROS */
+
+#ifndef AVF_MASK
+/* AVF_MASK is a macro used on 32 bit registers */
+#define AVF_MASK(mask, shift) (mask << shift)
+#endif
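+/* Illustrative use of the _SHIFT/_MASK pairs defined in this and the
+ * register header (a sketch only, with illustrative variable names):
+ * a field is read from a 32 bit register value with
+ *	field = (reg & FIELD_MASK) >> FIELD_SHIFT;
+ * and written back with
+ *	reg = (reg & ~FIELD_MASK) | ((val << FIELD_SHIFT) & FIELD_MASK);
+ */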
+
+#define AVF_MAX_PF			16
+#define AVF_MAX_PF_VSI			64
+#define AVF_MAX_PF_QP			128
+#define AVF_MAX_VSI_QP			16
+#define AVF_MAX_VF_VSI			3
+#define AVF_MAX_CHAINED_RX_BUFFERS	5
+#define AVF_MAX_PF_UDP_OFFLOAD_PORTS	16
+
+/* something less than 1 minute */
+#define AVF_HEARTBEAT_TIMEOUT		(HZ * 50)
+
+/* Max default timeout in ms */
+#define AVF_MAX_NVM_TIMEOUT		18000
+
+/* Max timeout in ms for the phy to respond */
+#define AVF_MAX_PHY_TIMEOUT		500
+
+/* Check whether an address is multicast. */
+#define AVF_IS_MULTICAST(address) (bool)(((u8 *)(address))[0] & ((u8)0x01))
+
+/* Check whether an address is broadcast. */
+#define AVF_IS_BROADCAST(address)	\
+	((((u8 *)(address))[0] == ((u8)0xff)) && \
+	(((u8 *)(address))[1] == ((u8)0xff)))
+
+/* Switch from ms to the 1usec global time (this is the GTIME resolution) */
+#define AVF_MS_TO_GTIME(time)		((time) * 1000)
+
+/* forward declaration */
+struct avf_hw;
+typedef void (*AVF_ADMINQ_CALLBACK)(struct avf_hw *, struct avf_aq_desc *);
+
+#ifndef ETH_ALEN
+#define ETH_ALEN	6
+#endif
+/* Data type manipulation macros. */
+#define AVF_HI_DWORD(x)	((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define AVF_LO_DWORD(x)	((u32)((x) & 0xFFFFFFFF))
+
+#define AVF_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
+#define AVF_LO_WORD(x)		((u16)((x) & 0xFFFF))
+
+#define AVF_HI_BYTE(x)		((u8)(((x) >> 8) & 0xFF))
+#define AVF_LO_BYTE(x)		((u8)((x) & 0xFF))
+
+/* Number of Transmit Descriptors must be a multiple of 8. */
+#define AVF_REQ_TX_DESCRIPTOR_MULTIPLE	8
+/* Number of Receive Descriptors must be a multiple of 32 if
+ * the number of descriptors is greater than 32.
+ */
+#define AVF_REQ_RX_DESCRIPTOR_MULTIPLE	32
+
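+/* Number of descriptors a ring still has available for hardware; the ring
+ * size is added when next_to_clean has not advanced past next_to_use so the
+ * subtraction wraps correctly.
+ */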
+#define AVF_DESC_UNUSED(R)	\
+	((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
+	(R)->next_to_clean - (R)->next_to_use - 1)
+
+/* bitfields for Tx queue mapping in QTX_CTL */
+#define AVF_QTX_CTL_VF_QUEUE	0x0
+#define AVF_QTX_CTL_VM_QUEUE	0x1
+#define AVF_QTX_CTL_PF_QUEUE	0x2
+
+/* debug masks - set these bits in hw->debug_mask to control output */
+enum avf_debug_mask {
+	AVF_DEBUG_INIT			= 0x00000001,
+	AVF_DEBUG_RELEASE		= 0x00000002,
+
+	AVF_DEBUG_LINK			= 0x00000010,
+	AVF_DEBUG_PHY			= 0x00000020,
+	AVF_DEBUG_HMC			= 0x00000040,
+	AVF_DEBUG_NVM			= 0x00000080,
+	AVF_DEBUG_LAN			= 0x00000100,
+	AVF_DEBUG_FLOW			= 0x00000200,
+	AVF_DEBUG_DCB			= 0x00000400,
+	AVF_DEBUG_DIAG			= 0x00000800,
+	AVF_DEBUG_FD			= 0x00001000,
+	AVF_DEBUG_PACKAGE		= 0x00002000,
+
+	AVF_DEBUG_AQ_MESSAGE		= 0x01000000,
+	AVF_DEBUG_AQ_DESCRIPTOR	= 0x02000000,
+	AVF_DEBUG_AQ_DESC_BUFFER	= 0x04000000,
+	AVF_DEBUG_AQ_COMMAND		= 0x06000000,
+	AVF_DEBUG_AQ			= 0x0F000000,
+
+	AVF_DEBUG_USER			= 0xF0000000,
+
+	AVF_DEBUG_ALL			= 0xFFFFFFFF
+};
+
+/* PCI Bus Info */
+#define AVF_PCI_LINK_STATUS		0xB2
+#define AVF_PCI_LINK_WIDTH		0x3F0
+#define AVF_PCI_LINK_WIDTH_1		0x10
+#define AVF_PCI_LINK_WIDTH_2		0x20
+#define AVF_PCI_LINK_WIDTH_4		0x40
+#define AVF_PCI_LINK_WIDTH_8		0x80
+#define AVF_PCI_LINK_SPEED		0xF
+#define AVF_PCI_LINK_SPEED_2500	0x1
+#define AVF_PCI_LINK_SPEED_5000	0x2
+#define AVF_PCI_LINK_SPEED_8000	0x3
+
+#define AVF_MDIO_CLAUSE22_STCODE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_STCODE_SHIFT)
+#define AVF_MDIO_CLAUSE22_OPCODE_WRITE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE22_OPCODE_READ_MASK	AVF_MASK(2, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+
+#define AVF_MDIO_CLAUSE45_STCODE_MASK	AVF_MASK(0, \
+						  AVF_GLGEN_MSCA_STCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_ADDRESS_MASK	AVF_MASK(0, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_WRITE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_READ_INC_ADDR_MASK	AVF_MASK(2, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_READ_MASK	AVF_MASK(3, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+
+#define AVF_PHY_COM_REG_PAGE			0x1E
+#define AVF_PHY_LED_LINK_MODE_MASK		0xF0
+#define AVF_PHY_LED_MANUAL_ON			0x100
+#define AVF_PHY_LED_PROV_REG_1			0xC430
+#define AVF_PHY_LED_MODE_MASK			0xFFFF
+#define AVF_PHY_LED_MODE_ORIG			0x80000000
+
+/* Memory types */
+enum avf_memset_type {
+	AVF_NONDMA_MEM = 0,
+	AVF_DMA_MEM
+};
+
+/* Memcpy types */
+enum avf_memcpy_type {
+	AVF_NONDMA_TO_NONDMA = 0,
+	AVF_NONDMA_TO_DMA,
+	AVF_DMA_TO_DMA,
+	AVF_DMA_TO_NONDMA
+};
+
+/* These are structs for managing the hardware information and the operations.
+ * The structures of function pointers are filled out at init time when we
+ * know for sure exactly which hardware we're working with.  This gives us the
+ * flexibility of using the same main driver code but adapting to slightly
+ * different hardware needs as new parts are developed.  For this architecture,
+ * the Firmware and AdminQ are intended to insulate the driver from most of the
+ * future changes, but these structures will also do part of the job.
+ */
+enum avf_mac_type {
+	AVF_MAC_UNKNOWN = 0,
+	AVF_MAC_XL710,
+	AVF_MAC_VF,
+	AVF_MAC_X722,
+	AVF_MAC_X722_VF,
+	AVF_MAC_GENERIC,
+};
+
+enum avf_media_type {
+	AVF_MEDIA_TYPE_UNKNOWN = 0,
+	AVF_MEDIA_TYPE_FIBER,
+	AVF_MEDIA_TYPE_BASET,
+	AVF_MEDIA_TYPE_BACKPLANE,
+	AVF_MEDIA_TYPE_CX4,
+	AVF_MEDIA_TYPE_DA,
+	AVF_MEDIA_TYPE_VIRTUAL
+};
+
+enum avf_fc_mode {
+	AVF_FC_NONE = 0,
+	AVF_FC_RX_PAUSE,
+	AVF_FC_TX_PAUSE,
+	AVF_FC_FULL,
+	AVF_FC_PFC,
+	AVF_FC_DEFAULT
+};
+
+enum avf_set_fc_aq_failures {
+	AVF_SET_FC_AQ_FAIL_NONE = 0,
+	AVF_SET_FC_AQ_FAIL_GET = 1,
+	AVF_SET_FC_AQ_FAIL_SET = 2,
+	AVF_SET_FC_AQ_FAIL_UPDATE = 4,
+	AVF_SET_FC_AQ_FAIL_SET_UPDATE = 6
+};
+
+enum avf_vsi_type {
+	AVF_VSI_MAIN	= 0,
+	AVF_VSI_VMDQ1	= 1,
+	AVF_VSI_VMDQ2	= 2,
+	AVF_VSI_CTRL	= 3,
+	AVF_VSI_FCOE	= 4,
+	AVF_VSI_MIRROR	= 5,
+	AVF_VSI_SRIOV	= 6,
+	AVF_VSI_FDIR	= 7,
+	AVF_VSI_TYPE_UNKNOWN
+};
+
+enum avf_queue_type {
+	AVF_QUEUE_TYPE_RX = 0,
+	AVF_QUEUE_TYPE_TX,
+	AVF_QUEUE_TYPE_PE_CEQ,
+	AVF_QUEUE_TYPE_UNKNOWN
+};
+
+struct avf_link_status {
+	enum avf_aq_phy_type phy_type;
+	enum avf_aq_link_speed link_speed;
+	u8 link_info;
+	u8 an_info;
+	u8 req_fec_info;
+	u8 fec_info;
+	u8 ext_info;
+	u8 loopback;
+	/* is Link Status Event notification to SW enabled */
+	bool lse_enable;
+	u16 max_frame_size;
+	bool crc_enable;
+	u8 pacing;
+	u8 requested_speeds;
+	u8 module_type[3];
+	/* 1st byte: module identifier */
+#define AVF_MODULE_TYPE_SFP		0x03
+#define AVF_MODULE_TYPE_QSFP		0x0D
+	/* 2nd byte: ethernet compliance codes for 10/40G */
+#define AVF_MODULE_TYPE_40G_ACTIVE	0x01
+#define AVF_MODULE_TYPE_40G_LR4	0x02
+#define AVF_MODULE_TYPE_40G_SR4	0x04
+#define AVF_MODULE_TYPE_40G_CR4	0x08
+#define AVF_MODULE_TYPE_10G_BASE_SR	0x10
+#define AVF_MODULE_TYPE_10G_BASE_LR	0x20
+#define AVF_MODULE_TYPE_10G_BASE_LRM	0x40
+#define AVF_MODULE_TYPE_10G_BASE_ER	0x80
+	/* 3rd byte: ethernet compliance codes for 1G */
+#define AVF_MODULE_TYPE_1000BASE_SX	0x01
+#define AVF_MODULE_TYPE_1000BASE_LX	0x02
+#define AVF_MODULE_TYPE_1000BASE_CX	0x04
+#define AVF_MODULE_TYPE_1000BASE_T	0x08
+};
+
+struct avf_phy_info {
+	struct avf_link_status link_info;
+	struct avf_link_status link_info_old;
+	bool get_link_info;
+	enum avf_media_type media_type;
+	/* all the phy types the NVM is capable of */
+	u64 phy_types;
+};
+
+#define AVF_CAP_PHY_TYPE_SGMII BIT_ULL(AVF_PHY_TYPE_SGMII)
+#define AVF_CAP_PHY_TYPE_1000BASE_KX BIT_ULL(AVF_PHY_TYPE_1000BASE_KX)
+#define AVF_CAP_PHY_TYPE_10GBASE_KX4 BIT_ULL(AVF_PHY_TYPE_10GBASE_KX4)
+#define AVF_CAP_PHY_TYPE_10GBASE_KR BIT_ULL(AVF_PHY_TYPE_10GBASE_KR)
+#define AVF_CAP_PHY_TYPE_40GBASE_KR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_KR4)
+#define AVF_CAP_PHY_TYPE_XAUI BIT_ULL(AVF_PHY_TYPE_XAUI)
+#define AVF_CAP_PHY_TYPE_XFI BIT_ULL(AVF_PHY_TYPE_XFI)
+#define AVF_CAP_PHY_TYPE_SFI BIT_ULL(AVF_PHY_TYPE_SFI)
+#define AVF_CAP_PHY_TYPE_XLAUI BIT_ULL(AVF_PHY_TYPE_XLAUI)
+#define AVF_CAP_PHY_TYPE_XLPPI BIT_ULL(AVF_PHY_TYPE_XLPPI)
+#define AVF_CAP_PHY_TYPE_40GBASE_CR4_CU BIT_ULL(AVF_PHY_TYPE_40GBASE_CR4_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_CR1_CU BIT_ULL(AVF_PHY_TYPE_10GBASE_CR1_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_AOC BIT_ULL(AVF_PHY_TYPE_10GBASE_AOC)
+#define AVF_CAP_PHY_TYPE_40GBASE_AOC BIT_ULL(AVF_PHY_TYPE_40GBASE_AOC)
+#define AVF_CAP_PHY_TYPE_100BASE_TX BIT_ULL(AVF_PHY_TYPE_100BASE_TX)
+#define AVF_CAP_PHY_TYPE_1000BASE_T BIT_ULL(AVF_PHY_TYPE_1000BASE_T)
+#define AVF_CAP_PHY_TYPE_10GBASE_T BIT_ULL(AVF_PHY_TYPE_10GBASE_T)
+#define AVF_CAP_PHY_TYPE_10GBASE_SR BIT_ULL(AVF_PHY_TYPE_10GBASE_SR)
+#define AVF_CAP_PHY_TYPE_10GBASE_LR BIT_ULL(AVF_PHY_TYPE_10GBASE_LR)
+#define AVF_CAP_PHY_TYPE_10GBASE_SFPP_CU BIT_ULL(AVF_PHY_TYPE_10GBASE_SFPP_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_CR1 BIT_ULL(AVF_PHY_TYPE_10GBASE_CR1)
+#define AVF_CAP_PHY_TYPE_40GBASE_CR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_CR4)
+#define AVF_CAP_PHY_TYPE_40GBASE_SR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_SR4)
+#define AVF_CAP_PHY_TYPE_40GBASE_LR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_LR4)
+#define AVF_CAP_PHY_TYPE_1000BASE_SX BIT_ULL(AVF_PHY_TYPE_1000BASE_SX)
+#define AVF_CAP_PHY_TYPE_1000BASE_LX BIT_ULL(AVF_PHY_TYPE_1000BASE_LX)
+#define AVF_CAP_PHY_TYPE_1000BASE_T_OPTICAL \
+				BIT_ULL(AVF_PHY_TYPE_1000BASE_T_OPTICAL)
+#define AVF_CAP_PHY_TYPE_20GBASE_KR2 BIT_ULL(AVF_PHY_TYPE_20GBASE_KR2)
+/*
+ * Defining the macro AVF_PHY_TYPE_OFFSET to implement a bit shift for some
+ * PHY types. There is an unused bit (31) in the AVF_CAP_PHY_TYPE_* bit
+ * fields but no corresponding gap in the avf_aq_phy_type enumeration. So,
+ * a shift is needed to adjust for this with values larger than 31. The
+ * only affected values are AVF_PHY_TYPE_25GBASE_*.
+ */
+#define AVF_PHY_TYPE_OFFSET 1
+#define AVF_CAP_PHY_TYPE_25GBASE_KR BIT_ULL(AVF_PHY_TYPE_25GBASE_KR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_CR BIT_ULL(AVF_PHY_TYPE_25GBASE_CR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_SR BIT_ULL(AVF_PHY_TYPE_25GBASE_SR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_LR BIT_ULL(AVF_PHY_TYPE_25GBASE_LR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_AOC BIT_ULL(AVF_PHY_TYPE_25GBASE_AOC + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_ACC BIT_ULL(AVF_PHY_TYPE_25GBASE_ACC + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_HW_CAP_MAX_GPIO			30
+#define AVF_HW_CAP_MDIO_PORT_MODE_MDIO		0
+#define AVF_HW_CAP_MDIO_PORT_MODE_I2C		1
+
+enum avf_acpi_programming_method {
+	AVF_ACPI_PROGRAMMING_METHOD_HW_FVL = 0,
+	AVF_ACPI_PROGRAMMING_METHOD_AQC_FPK = 1
+};
+
+#define AVF_WOL_SUPPORT_MASK			0x1
+#define AVF_ACPI_PROGRAMMING_METHOD_MASK	0x2
+#define AVF_PROXY_SUPPORT_MASK			0x4
+
+/* Capabilities of a PF or a VF or the whole device */
+struct avf_hw_capabilities {
+	u32  switch_mode;
+#define AVF_NVM_IMAGE_TYPE_EVB		0x0
+#define AVF_NVM_IMAGE_TYPE_CLOUD	0x2
+#define AVF_NVM_IMAGE_TYPE_UDP_CLOUD	0x3
+
+	u32  management_mode;
+	u32  mng_protocols_over_mctp;
+#define AVF_MNG_PROTOCOL_PLDM		0x2
+#define AVF_MNG_PROTOCOL_OEM_COMMANDS	0x4
+#define AVF_MNG_PROTOCOL_NCSI		0x8
+	u32  npar_enable;
+	u32  os2bmc;
+	u32  valid_functions;
+	bool sr_iov_1_1;
+	bool vmdq;
+	bool evb_802_1_qbg; /* Edge Virtual Bridging */
+	bool evb_802_1_qbh; /* Bridge Port Extension */
+	bool dcb;
+	bool fcoe;
+	bool iscsi; /* Indicates iSCSI enabled */
+	bool flex10_enable;
+	bool flex10_capable;
+	u32  flex10_mode;
+#define AVF_FLEX10_MODE_UNKNOWN	0x0
+#define AVF_FLEX10_MODE_DCC		0x1
+#define AVF_FLEX10_MODE_DCI		0x2
+
+	u32 flex10_status;
+#define AVF_FLEX10_STATUS_DCC_ERROR	0x1
+#define AVF_FLEX10_STATUS_VC_MODE	0x2
+
+	bool sec_rev_disabled;
+	bool update_disabled;
+#define AVF_NVM_MGMT_SEC_REV_DISABLED	0x1
+#define AVF_NVM_MGMT_UPDATE_DISABLED	0x2
+
+	bool mgmt_cem;
+	bool ieee_1588;
+	bool iwarp;
+	bool fd;
+	u32 fd_filters_guaranteed;
+	u32 fd_filters_best_effort;
+	bool rss;
+	u32 rss_table_size;
+	u32 rss_table_entry_width;
+	bool led[AVF_HW_CAP_MAX_GPIO];
+	bool sdp[AVF_HW_CAP_MAX_GPIO];
+	u32 nvm_image_type;
+	u32 num_flow_director_filters;
+	u32 num_vfs;
+	u32 vf_base_id;
+	u32 num_vsis;
+	u32 num_rx_qp;
+	u32 num_tx_qp;
+	u32 base_queue;
+	u32 num_msix_vectors;
+	u32 num_msix_vectors_vf;
+	u32 led_pin_num;
+	u32 sdp_pin_num;
+	u32 mdio_port_num;
+	u32 mdio_port_mode;
+	u8 rx_buf_chain_len;
+	u32 enabled_tcmap;
+	u32 maxtc;
+	u64 wr_csr_prot;
+	bool apm_wol_support;
+	enum avf_acpi_programming_method acpi_prog_method;
+	bool proxy_support;
+};
+
+struct avf_mac_info {
+	enum avf_mac_type type;
+	u8 addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+	u8 san_addr[ETH_ALEN];
+	u8 port_addr[ETH_ALEN];
+	u16 max_fcoeq;
+};
+
+enum avf_aq_resources_ids {
+	AVF_NVM_RESOURCE_ID = 1
+};
+
+enum avf_aq_resource_access_type {
+	AVF_RESOURCE_READ = 1,
+	AVF_RESOURCE_WRITE
+};
+
+struct avf_nvm_info {
+	u64 hw_semaphore_timeout; /* usec global time (GTIME resolution) */
+	u32 timeout;              /* [ms] */
+	u16 sr_size;              /* Shadow RAM size in words */
+	bool blank_nvm_mode;      /* is NVM empty (no FW present)*/
+	u16 version;              /* NVM package version */
+	u32 eetrack;              /* NVM data version */
+	u32 oem_ver;              /* OEM version info */
+};
+
+/* definitions used in NVM update support */
+
+enum avf_nvmupd_cmd {
+	AVF_NVMUPD_INVALID,
+	AVF_NVMUPD_READ_CON,
+	AVF_NVMUPD_READ_SNT,
+	AVF_NVMUPD_READ_LCB,
+	AVF_NVMUPD_READ_SA,
+	AVF_NVMUPD_WRITE_ERA,
+	AVF_NVMUPD_WRITE_CON,
+	AVF_NVMUPD_WRITE_SNT,
+	AVF_NVMUPD_WRITE_LCB,
+	AVF_NVMUPD_WRITE_SA,
+	AVF_NVMUPD_CSUM_CON,
+	AVF_NVMUPD_CSUM_SA,
+	AVF_NVMUPD_CSUM_LCB,
+	AVF_NVMUPD_STATUS,
+	AVF_NVMUPD_EXEC_AQ,
+	AVF_NVMUPD_GET_AQ_RESULT,
+	AVF_NVMUPD_GET_AQ_EVENT,
+};
+
+enum avf_nvmupd_state {
+	AVF_NVMUPD_STATE_INIT,
+	AVF_NVMUPD_STATE_READING,
+	AVF_NVMUPD_STATE_WRITING,
+	AVF_NVMUPD_STATE_INIT_WAIT,
+	AVF_NVMUPD_STATE_WRITE_WAIT,
+	AVF_NVMUPD_STATE_ERROR
+};
+
+/* nvm_access definition and its masks/shifts need to be accessible to
+ * application, core driver, and shared code.  Where is the right file?
+ */
+#define AVF_NVM_READ	0xB
+#define AVF_NVM_WRITE	0xC
+
+#define AVF_NVM_MOD_PNT_MASK 0xFF
+
+#define AVF_NVM_TRANS_SHIFT			8
+#define AVF_NVM_TRANS_MASK			(0xf << AVF_NVM_TRANS_SHIFT)
+#define AVF_NVM_PRESERVATION_FLAGS_SHIFT	12
+#define AVF_NVM_PRESERVATION_FLAGS_MASK \
+				(0x3 << AVF_NVM_PRESERVATION_FLAGS_SHIFT)
+#define AVF_NVM_PRESERVATION_FLAGS_SELECTED	0x01
+#define AVF_NVM_PRESERVATION_FLAGS_ALL		0x02
+#define AVF_NVM_CON				0x0
+#define AVF_NVM_SNT				0x1
+#define AVF_NVM_LCB				0x2
+#define AVF_NVM_SA				(AVF_NVM_SNT | AVF_NVM_LCB)
+#define AVF_NVM_ERA				0x4
+#define AVF_NVM_CSUM				0x8
+#define AVF_NVM_AQE				0xe
+#define AVF_NVM_EXEC				0xf
+
+#define AVF_NVM_ADAPT_SHIFT	16
+#define AVF_NVM_ADAPT_MASK	(0xffffULL << AVF_NVM_ADAPT_SHIFT)
+
+#define AVF_NVMUPD_MAX_DATA	4096
+#define AVF_NVMUPD_IFACE_TIMEOUT 2 /* seconds */
+
+struct avf_nvm_access {
+	u32 command;
+	u32 config;
+	u32 offset;	/* in bytes */
+	u32 data_size;	/* in bytes */
+	u8 data[1];
+};
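+/* Illustrative decode of the config word above (a sketch only, using the
+ * mask layout defined earlier in this file):
+ *	module = config & AVF_NVM_MOD_PNT_MASK;
+ *	trans  = (config & AVF_NVM_TRANS_MASK) >> AVF_NVM_TRANS_SHIFT;
+ */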
+
+/* (Q)SFP module access definitions */
+#define AVF_I2C_EEPROM_DEV_ADDR	0xA0
+#define AVF_I2C_EEPROM_DEV_ADDR2	0xA2
+#define AVF_MODULE_TYPE_ADDR		0x00
+#define AVF_MODULE_REVISION_ADDR	0x01
+#define AVF_MODULE_SFF_8472_COMP	0x5E
+#define AVF_MODULE_SFF_8472_SWAP	0x5C
+#define AVF_MODULE_SFF_ADDR_MODE	0x04
+#define AVF_MODULE_SFF_DIAG_CAPAB	0x40
+#define AVF_MODULE_TYPE_QSFP_PLUS	0x0D
+#define AVF_MODULE_TYPE_QSFP28		0x11
+#define AVF_MODULE_QSFP_MAX_LEN	640
+
+/* PCI bus types */
+enum avf_bus_type {
+	avf_bus_type_unknown = 0,
+	avf_bus_type_pci,
+	avf_bus_type_pcix,
+	avf_bus_type_pci_express,
+	avf_bus_type_reserved
+};
+
+/* PCI bus speeds */
+enum avf_bus_speed {
+	avf_bus_speed_unknown	= 0,
+	avf_bus_speed_33	= 33,
+	avf_bus_speed_66	= 66,
+	avf_bus_speed_100	= 100,
+	avf_bus_speed_120	= 120,
+	avf_bus_speed_133	= 133,
+	avf_bus_speed_2500	= 2500,
+	avf_bus_speed_5000	= 5000,
+	avf_bus_speed_8000	= 8000,
+	avf_bus_speed_reserved
+};
+
+/* PCI bus widths */
+enum avf_bus_width {
+	avf_bus_width_unknown	= 0,
+	avf_bus_width_pcie_x1	= 1,
+	avf_bus_width_pcie_x2	= 2,
+	avf_bus_width_pcie_x4	= 4,
+	avf_bus_width_pcie_x8	= 8,
+	avf_bus_width_32	= 32,
+	avf_bus_width_64	= 64,
+	avf_bus_width_reserved
+};
+
+/* Bus parameters */
+struct avf_bus_info {
+	enum avf_bus_speed speed;
+	enum avf_bus_width width;
+	enum avf_bus_type type;
+
+	u16 func;
+	u16 device;
+	u16 lan_id;
+	u16 bus_id;
+};
+
+/* Flow control (FC) parameters */
+struct avf_fc_info {
+	enum avf_fc_mode current_mode; /* FC mode in effect */
+	enum avf_fc_mode requested_mode; /* FC mode requested by caller */
+};
+
+#define AVF_MAX_TRAFFIC_CLASS		8
+#define AVF_MAX_USER_PRIORITY		8
+#define AVF_DCBX_MAX_APPS		32
+#define AVF_LLDPDU_SIZE		1500
+#define AVF_TLV_STATUS_OPER		0x1
+#define AVF_TLV_STATUS_SYNC		0x2
+#define AVF_TLV_STATUS_ERR		0x4
+#define AVF_CEE_OPER_MAX_APPS		3
+#define AVF_APP_PROTOID_FCOE		0x8906
+#define AVF_APP_PROTOID_ISCSI		0x0cbc
+#define AVF_APP_PROTOID_FIP		0x8914
+#define AVF_APP_SEL_ETHTYPE		0x1
+#define AVF_APP_SEL_TCPIP		0x2
+#define AVF_CEE_APP_SEL_ETHTYPE	0x0
+#define AVF_CEE_APP_SEL_TCPIP		0x1
+
+/* CEE or IEEE 802.1Qaz ETS Configuration data */
+struct avf_dcb_ets_config {
+	u8 willing;
+	u8 cbs;
+	u8 maxtcs;
+	u8 prioritytable[AVF_MAX_TRAFFIC_CLASS];
+	u8 tcbwtable[AVF_MAX_TRAFFIC_CLASS];
+	u8 tsatable[AVF_MAX_TRAFFIC_CLASS];
+};
+
+/* CEE or IEEE 802.1Qaz PFC Configuration data */
+struct avf_dcb_pfc_config {
+	u8 willing;
+	u8 mbc;
+	u8 pfccap;
+	u8 pfcenable;
+};
+
+/* CEE or IEEE 802.1Qaz Application Priority data */
+struct avf_dcb_app_priority_table {
+	u8  priority;
+	u8  selector;
+	u16 protocolid;
+};
+
+struct avf_dcbx_config {
+	u8  dcbx_mode;
+#define AVF_DCBX_MODE_CEE	0x1
+#define AVF_DCBX_MODE_IEEE	0x2
+	u8  app_mode;
+#define AVF_DCBX_APPS_NON_WILLING	0x1
+	u32 numapps;
+	u32 tlv_status; /* CEE mode TLV status */
+	struct avf_dcb_ets_config etscfg;
+	struct avf_dcb_ets_config etsrec;
+	struct avf_dcb_pfc_config pfc;
+	struct avf_dcb_app_priority_table app[AVF_DCBX_MAX_APPS];
+};
+
+/* Port hardware description */
+struct avf_hw {
+	u8 *hw_addr;
+	void *back;
+
+	/* subsystem structs */
+	struct avf_phy_info phy;
+	struct avf_mac_info mac;
+	struct avf_bus_info bus;
+	struct avf_nvm_info nvm;
+	struct avf_fc_info fc;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+	u8 port;
+	bool adapter_stopped;
+
+	/* capabilities for entire device and PCI func */
+	struct avf_hw_capabilities dev_caps;
+	struct avf_hw_capabilities func_caps;
+
+	/* Flow Director shared filter space */
+	u16 fdir_shared_filter_count;
+
+	/* device profile info */
+	u8  pf_id;
+	u16 main_vsi_seid;
+
+	/* for multi-function MACs */
+	u16 partition_id;
+	u16 num_partitions;
+	u16 num_ports;
+
+	/* Closest numa node to the device */
+	u16 numa_node;
+
+	/* Admin Queue info */
+	struct avf_adminq_info aq;
+
+	/* state of nvm update process */
+	enum avf_nvmupd_state nvmupd_state;
+	struct avf_aq_desc nvm_wb_desc;
+	struct avf_aq_desc nvm_aq_event_desc;
+	struct avf_virt_mem nvm_buff;
+	bool nvm_release_on_done;
+	u16 nvm_wait_opcode;
+
+	/* HMC info */
+	struct avf_hmc_info hmc; /* HMC info struct */
+
+	/* LLDP/DCBX Status */
+	u16 dcbx_status;
+
+	/* DCBX info */
+	struct avf_dcbx_config local_dcbx_config; /* Oper/Local Cfg */
+	struct avf_dcbx_config remote_dcbx_config; /* Peer Cfg */
+	struct avf_dcbx_config desired_dcbx_config; /* CEE Desired Cfg */
+
+	/* WoL and proxy support */
+	u16 num_wol_proxy_filters;
+	u16 wol_proxy_vsi_seid;
+
+#define AVF_HW_FLAG_AQ_SRCTL_ACCESS_ENABLE BIT_ULL(0)
+#define AVF_HW_FLAG_802_1AD_CAPABLE        BIT_ULL(1)
+#define AVF_HW_FLAG_AQ_PHY_ACCESS_CAPABLE  BIT_ULL(2)
+#define AVF_HW_FLAG_NVM_READ_REQUIRES_LOCK BIT_ULL(3)
+	u64 flags;
+
+	/* Used in set switch config AQ command */
+	u16 switch_tag;
+	u16 first_tag;
+	u16 second_tag;
+
+	/* debug mask */
+	u32 debug_mask;
+	char err_str[16];
+};
+
+STATIC INLINE bool avf_is_vf(struct avf_hw *hw)
+{
+	return (hw->mac.type == AVF_MAC_VF ||
+		hw->mac.type == AVF_MAC_X722_VF);
+}
+
+struct avf_driver_version {
+	u8 major_version;
+	u8 minor_version;
+	u8 build_version;
+	u8 subbuild_version;
+	u8 driver_string[32];
+};
+
+/* RX Descriptors */
+union avf_16byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+	} read;
+	struct {
+		struct {
+			struct {
+				union {
+					__le16 mirroring_status;
+					__le16 fcoe_ctx_id;
+				} mirr_fcoe;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow director filter id */
+				__le32 fcoe_param; /* FCoE DDP Context id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* ext status/error/pktype/length */
+			__le64 status_error_len;
+		} qword1;
+	} wb;  /* writeback */
+};
+
+union avf_32byte_rx_desc {
+	struct {
+		__le64  pkt_addr; /* Packet buffer address */
+		__le64  hdr_addr; /* Header buffer address */
+			/* bit 0 of hdr_buffer_addr is DD bit */
+		__le64  rsvd1;
+		__le64  rsvd2;
+	} read;
+	struct {
+		struct {
+			struct {
+				union {
+					__le16 mirroring_status;
+					__le16 fcoe_ctx_id;
+				} mirr_fcoe;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fcoe_param; /* FCoE DDP Context id */
+				/* Flow director filter id in case of
+				 * Programming status desc WB
+				 */
+				__le32 fd_id;
+			} hi_dword;
+		} qword0;
+		struct {
+			/* status/error/pktype/length */
+			__le64 status_error_len;
+		} qword1;
+		struct {
+			__le16 ext_status; /* extended status */
+			__le16 rsvd;
+			__le16 l2tag2_1;
+			__le16 l2tag2_2;
+		} qword2;
+		struct {
+			union {
+				__le32 flex_bytes_lo;
+				__le32 pe_status;
+			} lo_dword;
+			union {
+				__le32 flex_bytes_hi;
+				__le32 fd_id;
+			} hi_dword;
+		} qword3;
+	} wb;  /* writeback */
+};
+
+#define AVF_RXD_QW0_MIRROR_STATUS_SHIFT	8
+#define AVF_RXD_QW0_MIRROR_STATUS_MASK	(0x3FUL << \
+					 AVF_RXD_QW0_MIRROR_STATUS_SHIFT)
+#define AVF_RXD_QW0_FCOEINDX_SHIFT	0
+#define AVF_RXD_QW0_FCOEINDX_MASK	(0xFFFUL << \
+					 AVF_RXD_QW0_FCOEINDX_SHIFT)
+
+enum avf_rx_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_STATUS_DD_SHIFT		= 0,
+	AVF_RX_DESC_STATUS_EOF_SHIFT		= 1,
+	AVF_RX_DESC_STATUS_L2TAG1P_SHIFT	= 2,
+	AVF_RX_DESC_STATUS_L3L4P_SHIFT		= 3,
+	AVF_RX_DESC_STATUS_CRCP_SHIFT		= 4,
+	AVF_RX_DESC_STATUS_TSYNINDX_SHIFT	= 5, /* 2 BITS */
+	AVF_RX_DESC_STATUS_TSYNVALID_SHIFT	= 7,
+	AVF_RX_DESC_STATUS_EXT_UDP_0_SHIFT	= 8,
+
+	AVF_RX_DESC_STATUS_UMBCAST_SHIFT	= 9, /* 2 BITS */
+	AVF_RX_DESC_STATUS_FLM_SHIFT		= 11,
+	AVF_RX_DESC_STATUS_FLTSTAT_SHIFT	= 12, /* 2 BITS */
+	AVF_RX_DESC_STATUS_LPBK_SHIFT		= 14,
+	AVF_RX_DESC_STATUS_IPV6EXADD_SHIFT	= 15,
+	AVF_RX_DESC_STATUS_RESERVED2_SHIFT	= 16, /* 2 BITS */
+	AVF_RX_DESC_STATUS_INT_UDP_0_SHIFT	= 18,
+	AVF_RX_DESC_STATUS_LAST /* this entry must be last!!! */
+};
+
+#define AVF_RXD_QW1_STATUS_SHIFT	0
+#define AVF_RXD_QW1_STATUS_MASK	((BIT(AVF_RX_DESC_STATUS_LAST) - 1) << \
+					 AVF_RXD_QW1_STATUS_SHIFT)
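+/* Illustrative decode of a completed writeback descriptor (a sketch only,
+ * with illustrative variable names): given the little-endian value of
+ * wb.qword1.status_error_len,
+ *	status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >> AVF_RXD_QW1_STATUS_SHIFT;
+ * and the descriptor has been written back when the bit at
+ * AVF_RX_DESC_STATUS_DD_SHIFT within status is set.
+ */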
+
+#define AVF_RXD_QW1_STATUS_TSYNINDX_SHIFT   AVF_RX_DESC_STATUS_TSYNINDX_SHIFT
+#define AVF_RXD_QW1_STATUS_TSYNINDX_MASK	(0x3UL << \
+					     AVF_RXD_QW1_STATUS_TSYNINDX_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_TSYNVALID_SHIFT  AVF_RX_DESC_STATUS_TSYNVALID_SHIFT
+#define AVF_RXD_QW1_STATUS_TSYNVALID_MASK   BIT_ULL(AVF_RXD_QW1_STATUS_TSYNVALID_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_UMBCAST_SHIFT	AVF_RX_DESC_STATUS_UMBCAST_SHIFT
+#define AVF_RXD_QW1_STATUS_UMBCAST_MASK	(0x3UL << \
+					 AVF_RXD_QW1_STATUS_UMBCAST_SHIFT)
+
+enum avf_rx_desc_fltstat_values {
+	AVF_RX_DESC_FLTSTAT_NO_DATA	= 0,
+	AVF_RX_DESC_FLTSTAT_RSV_FD_ID	= 1, /* 16byte desc? FD_ID : RSV */
+	AVF_RX_DESC_FLTSTAT_RSV	= 2,
+	AVF_RX_DESC_FLTSTAT_RSS_HASH	= 3,
+};
+
+#define AVF_RXD_PACKET_TYPE_UNICAST	0
+#define AVF_RXD_PACKET_TYPE_MULTICAST	1
+#define AVF_RXD_PACKET_TYPE_BROADCAST	2
+#define AVF_RXD_PACKET_TYPE_MIRRORED	3
+
+#define AVF_RXD_QW1_ERROR_SHIFT	19
+#define AVF_RXD_QW1_ERROR_MASK		(0xFFUL << AVF_RXD_QW1_ERROR_SHIFT)
+
+enum avf_rx_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_ERROR_RXE_SHIFT		= 0,
+	AVF_RX_DESC_ERROR_RECIPE_SHIFT		= 1,
+	AVF_RX_DESC_ERROR_HBO_SHIFT		= 2,
+	AVF_RX_DESC_ERROR_L3L4E_SHIFT		= 3, /* 3 BITS */
+	AVF_RX_DESC_ERROR_IPE_SHIFT		= 3,
+	AVF_RX_DESC_ERROR_L4E_SHIFT		= 4,
+	AVF_RX_DESC_ERROR_EIPE_SHIFT		= 5,
+	AVF_RX_DESC_ERROR_OVERSIZE_SHIFT	= 6,
+	AVF_RX_DESC_ERROR_PPRS_SHIFT		= 7
+};
+
+enum avf_rx_desc_error_l3l4e_fcoe_masks {
+	AVF_RX_DESC_ERROR_L3L4E_NONE		= 0,
+	AVF_RX_DESC_ERROR_L3L4E_PROT		= 1,
+	AVF_RX_DESC_ERROR_L3L4E_FC		= 2,
+	AVF_RX_DESC_ERROR_L3L4E_DMAC_ERR	= 3,
+	AVF_RX_DESC_ERROR_L3L4E_DMAC_WARN	= 4
+};
+
+#define AVF_RXD_QW1_PTYPE_SHIFT	30
+#define AVF_RXD_QW1_PTYPE_MASK		(0xFFULL << AVF_RXD_QW1_PTYPE_SHIFT)
+
+/* Packet type non-ip values */
+enum avf_rx_l2_ptype {
+	AVF_RX_PTYPE_L2_RESERVED			= 0,
+	AVF_RX_PTYPE_L2_MAC_PAY2			= 1,
+	AVF_RX_PTYPE_L2_TIMESYNC_PAY2			= 2,
+	AVF_RX_PTYPE_L2_FIP_PAY2			= 3,
+	AVF_RX_PTYPE_L2_OUI_PAY2			= 4,
+	AVF_RX_PTYPE_L2_MACCNTRL_PAY2			= 5,
+	AVF_RX_PTYPE_L2_LLDP_PAY2			= 6,
+	AVF_RX_PTYPE_L2_ECP_PAY2			= 7,
+	AVF_RX_PTYPE_L2_EVB_PAY2			= 8,
+	AVF_RX_PTYPE_L2_QCN_PAY2			= 9,
+	AVF_RX_PTYPE_L2_EAPOL_PAY2			= 10,
+	AVF_RX_PTYPE_L2_ARP				= 11,
+	AVF_RX_PTYPE_L2_FCOE_PAY3			= 12,
+	AVF_RX_PTYPE_L2_FCOE_FCDATA_PAY3		= 13,
+	AVF_RX_PTYPE_L2_FCOE_FCRDY_PAY3		= 14,
+	AVF_RX_PTYPE_L2_FCOE_FCRSP_PAY3		= 15,
+	AVF_RX_PTYPE_L2_FCOE_FCOTHER_PA		= 16,
+	AVF_RX_PTYPE_L2_FCOE_VFT_PAY3			= 17,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCDATA		= 18,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCRDY			= 19,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCRSP			= 20,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCOTHER		= 21,
+	AVF_RX_PTYPE_GRENAT4_MAC_PAY3			= 58,
+	AVF_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4	= 87,
+	AVF_RX_PTYPE_GRENAT6_MAC_PAY3			= 124,
+	AVF_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4	= 153
+};
+
+struct avf_rx_ptype_decoded {
+	u32 ptype:8;
+	u32 known:1;
+	u32 outer_ip:1;
+	u32 outer_ip_ver:1;
+	u32 outer_frag:1;
+	u32 tunnel_type:3;
+	u32 tunnel_end_prot:2;
+	u32 tunnel_end_frag:1;
+	u32 inner_prot:4;
+	u32 payload_layer:3;
+};
+
+enum avf_rx_ptype_outer_ip {
+	AVF_RX_PTYPE_OUTER_L2	= 0,
+	AVF_RX_PTYPE_OUTER_IP	= 1
+};
+
+enum avf_rx_ptype_outer_ip_ver {
+	AVF_RX_PTYPE_OUTER_NONE	= 0,
+	AVF_RX_PTYPE_OUTER_IPV4	= 0,
+	AVF_RX_PTYPE_OUTER_IPV6	= 1
+};
+
+enum avf_rx_ptype_outer_fragmented {
+	AVF_RX_PTYPE_NOT_FRAG	= 0,
+	AVF_RX_PTYPE_FRAG	= 1
+};
+
+enum avf_rx_ptype_tunnel_type {
+	AVF_RX_PTYPE_TUNNEL_NONE		= 0,
+	AVF_RX_PTYPE_TUNNEL_IP_IP		= 1,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT		= 2,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC	= 3,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN	= 4,
+};
+
+enum avf_rx_ptype_tunnel_end_prot {
+	AVF_RX_PTYPE_TUNNEL_END_NONE	= 0,
+	AVF_RX_PTYPE_TUNNEL_END_IPV4	= 1,
+	AVF_RX_PTYPE_TUNNEL_END_IPV6	= 2,
+};
+
+enum avf_rx_ptype_inner_prot {
+	AVF_RX_PTYPE_INNER_PROT_NONE		= 0,
+	AVF_RX_PTYPE_INNER_PROT_UDP		= 1,
+	AVF_RX_PTYPE_INNER_PROT_TCP		= 2,
+	AVF_RX_PTYPE_INNER_PROT_SCTP		= 3,
+	AVF_RX_PTYPE_INNER_PROT_ICMP		= 4,
+	AVF_RX_PTYPE_INNER_PROT_TIMESYNC	= 5
+};
+
+enum avf_rx_ptype_payload_layer {
+	AVF_RX_PTYPE_PAYLOAD_LAYER_NONE	= 0,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY2	= 1,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY3	= 2,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY4	= 3,
+};
+
+#define AVF_RX_PTYPE_BIT_MASK		0x0FFFFFFF
+#define AVF_RX_PTYPE_SHIFT		56
+
+#define AVF_RXD_QW1_LENGTH_PBUF_SHIFT	38
+#define AVF_RXD_QW1_LENGTH_PBUF_MASK	(0x3FFFULL << \
+					 AVF_RXD_QW1_LENGTH_PBUF_SHIFT)
+
+#define AVF_RXD_QW1_LENGTH_HBUF_SHIFT	52
+#define AVF_RXD_QW1_LENGTH_HBUF_MASK	(0x7FFULL << \
+					 AVF_RXD_QW1_LENGTH_HBUF_SHIFT)
+
+#define AVF_RXD_QW1_LENGTH_SPH_SHIFT	63
+#define AVF_RXD_QW1_LENGTH_SPH_MASK	BIT_ULL(AVF_RXD_QW1_LENGTH_SPH_SHIFT)
+
+#define AVF_RXD_QW1_NEXTP_SHIFT	38
+#define AVF_RXD_QW1_NEXTP_MASK		(0x1FFFULL << AVF_RXD_QW1_NEXTP_SHIFT)
+
+#define AVF_RXD_QW2_EXT_STATUS_SHIFT	0
+#define AVF_RXD_QW2_EXT_STATUS_MASK	(0xFFFFFUL << \
+					 AVF_RXD_QW2_EXT_STATUS_SHIFT)
+
+enum avf_rx_desc_ext_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT	= 0,
+	AVF_RX_DESC_EXT_STATUS_L2TAG3P_SHIFT	= 1,
+	AVF_RX_DESC_EXT_STATUS_FLEXBL_SHIFT	= 2, /* 2 BITS */
+	AVF_RX_DESC_EXT_STATUS_FLEXBH_SHIFT	= 4, /* 2 BITS */
+	AVF_RX_DESC_EXT_STATUS_FDLONGB_SHIFT	= 9,
+	AVF_RX_DESC_EXT_STATUS_FCOELONGB_SHIFT	= 10,
+	AVF_RX_DESC_EXT_STATUS_PELONGB_SHIFT	= 11,
+};
+
+#define AVF_RXD_QW2_L2TAG2_SHIFT	0
+#define AVF_RXD_QW2_L2TAG2_MASK	(0xFFFFUL << AVF_RXD_QW2_L2TAG2_SHIFT)
+
+#define AVF_RXD_QW2_L2TAG3_SHIFT	16
+#define AVF_RXD_QW2_L2TAG3_MASK	(0xFFFFUL << AVF_RXD_QW2_L2TAG3_SHIFT)
+
+enum avf_rx_desc_pe_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_PE_STATUS_QPID_SHIFT	= 0, /* 18 BITS */
+	AVF_RX_DESC_PE_STATUS_L4PORT_SHIFT	= 0, /* 16 BITS */
+	AVF_RX_DESC_PE_STATUS_IPINDEX_SHIFT	= 16, /* 8 BITS */
+	AVF_RX_DESC_PE_STATUS_QPIDHIT_SHIFT	= 24,
+	AVF_RX_DESC_PE_STATUS_APBVTHIT_SHIFT	= 25,
+	AVF_RX_DESC_PE_STATUS_PORTV_SHIFT	= 26,
+	AVF_RX_DESC_PE_STATUS_URG_SHIFT	= 27,
+	AVF_RX_DESC_PE_STATUS_IPFRAG_SHIFT	= 28,
+	AVF_RX_DESC_PE_STATUS_IPOPT_SHIFT	= 29
+};
+
+#define AVF_RX_PROG_STATUS_DESC_LENGTH_SHIFT		38
+#define AVF_RX_PROG_STATUS_DESC_LENGTH			0x2000000
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT	2
+#define AVF_RX_PROG_STATUS_DESC_QW1_PROGID_MASK	(0x7UL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT)
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT	0
+#define AVF_RX_PROG_STATUS_DESC_QW1_STATUS_MASK	(0x7FFFUL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT)
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT	19
+#define AVF_RX_PROG_STATUS_DESC_QW1_ERROR_MASK		(0x3FUL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT)
+
+enum avf_rx_prog_status_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_PROG_STATUS_DESC_DD_SHIFT	= 0,
+	AVF_RX_PROG_STATUS_DESC_PROG_ID_SHIFT	= 2 /* 3 BITS */
+};
+
+enum avf_rx_prog_status_desc_prog_id_masks {
+	AVF_RX_PROG_STATUS_DESC_FD_FILTER_STATUS	= 1,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CTXT_PROG_STATUS	= 2,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CTXT_INVL_STATUS	= 4,
+};
+
+enum avf_rx_prog_status_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_PROG_STATUS_DESC_FD_TBL_FULL_SHIFT	= 0,
+	AVF_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT	= 1,
+	AVF_RX_PROG_STATUS_DESC_FCOE_TBL_FULL_SHIFT	= 2,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CONFLICT_SHIFT	= 3
+};
+
+#define AVF_TWO_BIT_MASK	0x3
+#define AVF_THREE_BIT_MASK	0x7
+#define AVF_FOUR_BIT_MASK	0xF
+#define AVF_EIGHTEEN_BIT_MASK	0x3FFFF
+
+/* TX Descriptor */
+struct avf_tx_desc {
+	__le64 buffer_addr; /* Address of descriptor's data buf */
+	__le64 cmd_type_offset_bsz;
+};
+
+#define AVF_TXD_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_QW1_DTYPE_MASK		(0xFUL << AVF_TXD_QW1_DTYPE_SHIFT)
+
+enum avf_tx_desc_dtype_value {
+	AVF_TX_DESC_DTYPE_DATA		= 0x0,
+	AVF_TX_DESC_DTYPE_NOP		= 0x1, /* same as Context desc */
+	AVF_TX_DESC_DTYPE_CONTEXT	= 0x1,
+	AVF_TX_DESC_DTYPE_FCOE_CTX	= 0x2,
+	AVF_TX_DESC_DTYPE_FILTER_PROG	= 0x8,
+	AVF_TX_DESC_DTYPE_DDP_CTX	= 0x9,
+	AVF_TX_DESC_DTYPE_FLEX_DATA	= 0xB,
+	AVF_TX_DESC_DTYPE_FLEX_CTX_1	= 0xC,
+	AVF_TX_DESC_DTYPE_FLEX_CTX_2	= 0xD,
+	AVF_TX_DESC_DTYPE_DESC_DONE	= 0xF
+};
+
+#define AVF_TXD_QW1_CMD_SHIFT	4
+#define AVF_TXD_QW1_CMD_MASK	(0x3FFUL << AVF_TXD_QW1_CMD_SHIFT)
+
+enum avf_tx_desc_cmd_bits {
+	AVF_TX_DESC_CMD_EOP			= 0x0001,
+	AVF_TX_DESC_CMD_RS			= 0x0002,
+	AVF_TX_DESC_CMD_ICRC			= 0x0004,
+	AVF_TX_DESC_CMD_IL2TAG1		= 0x0008,
+	AVF_TX_DESC_CMD_DUMMY			= 0x0010,
+	AVF_TX_DESC_CMD_IIPT_NONIP		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV6		= 0x0020, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV4		= 0x0040, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV4_CSUM		= 0x0060, /* 2 BITS */
+	AVF_TX_DESC_CMD_FCOET			= 0x0080,
+	AVF_TX_DESC_CMD_L4T_EOFT_UNK		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_TCP		= 0x0100, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_SCTP		= 0x0200, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_UDP		= 0x0300, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_N		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_T		= 0x0100, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_NI	= 0x0200, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_A		= 0x0300, /* 2 BITS */
+};
+
+#define AVF_TXD_QW1_OFFSET_SHIFT	16
+#define AVF_TXD_QW1_OFFSET_MASK	(0x3FFFFULL << \
+					 AVF_TXD_QW1_OFFSET_SHIFT)
+
+enum avf_tx_desc_length_fields {
+	/* Note: These are predefined bit offsets */
+	AVF_TX_DESC_LENGTH_MACLEN_SHIFT	= 0, /* 7 BITS */
+	AVF_TX_DESC_LENGTH_IPLEN_SHIFT		= 7, /* 7 BITS */
+	AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT	= 14 /* 4 BITS */
+};
+
+#define AVF_TXD_QW1_MACLEN_MASK (0x7FUL << AVF_TX_DESC_LENGTH_MACLEN_SHIFT)
+#define AVF_TXD_QW1_IPLEN_MASK  (0x7FUL << AVF_TX_DESC_LENGTH_IPLEN_SHIFT)
+#define AVF_TXD_QW1_L4LEN_MASK  (0xFUL << AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+#define AVF_TXD_QW1_FCLEN_MASK  (0xFUL << AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+
+#define AVF_TXD_QW1_TX_BUF_SZ_SHIFT	34
+#define AVF_TXD_QW1_TX_BUF_SZ_MASK	(0x3FFFULL << \
+					 AVF_TXD_QW1_TX_BUF_SZ_SHIFT)
+
+#define AVF_TXD_QW1_L2TAG1_SHIFT	48
+#define AVF_TXD_QW1_L2TAG1_MASK	(0xFFFFULL << AVF_TXD_QW1_L2TAG1_SHIFT)
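+/* Illustrative composition of a data descriptor's cmd_type_offset_bsz
+ * (a sketch only, with illustrative variable names):
+ *	qw1 = AVF_TX_DESC_DTYPE_DATA |
+ *	      ((u64)td_cmd << AVF_TXD_QW1_CMD_SHIFT) |
+ *	      ((u64)td_offset << AVF_TXD_QW1_OFFSET_SHIFT) |
+ *	      ((u64)buf_size << AVF_TXD_QW1_TX_BUF_SZ_SHIFT) |
+ *	      ((u64)l2tag1 << AVF_TXD_QW1_L2TAG1_SHIFT);
+ */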
+
+/* Context descriptors */
+struct avf_tx_context_desc {
+	__le32 tunneling_params;
+	__le16 l2tag2;
+	__le16 rsvd;
+	__le64 type_cmd_tso_mss;
+};
+
+#define AVF_TXD_CTX_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_CTX_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_CTX_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_CTX_QW1_CMD_SHIFT	4
+#define AVF_TXD_CTX_QW1_CMD_MASK	(0xFFFFUL << AVF_TXD_CTX_QW1_CMD_SHIFT)
+
+enum avf_tx_ctx_desc_cmd_bits {
+	AVF_TX_CTX_DESC_TSO		= 0x01,
+	AVF_TX_CTX_DESC_TSYN		= 0x02,
+	AVF_TX_CTX_DESC_IL2TAG2	= 0x04,
+	AVF_TX_CTX_DESC_IL2TAG2_IL2H	= 0x08,
+	AVF_TX_CTX_DESC_SWTCH_NOTAG	= 0x00,
+	AVF_TX_CTX_DESC_SWTCH_UPLINK	= 0x10,
+	AVF_TX_CTX_DESC_SWTCH_LOCAL	= 0x20,
+	AVF_TX_CTX_DESC_SWTCH_VSI	= 0x30,
+	AVF_TX_CTX_DESC_SWPE		= 0x40
+};
+
+#define AVF_TXD_CTX_QW1_TSO_LEN_SHIFT	30
+#define AVF_TXD_CTX_QW1_TSO_LEN_MASK	(0x3FFFFULL << \
+					 AVF_TXD_CTX_QW1_TSO_LEN_SHIFT)
+
+#define AVF_TXD_CTX_QW1_MSS_SHIFT	50
+#define AVF_TXD_CTX_QW1_MSS_MASK	(0x3FFFULL << \
+					 AVF_TXD_CTX_QW1_MSS_SHIFT)
+
+#define AVF_TXD_CTX_QW1_VSI_SHIFT	50
+#define AVF_TXD_CTX_QW1_VSI_MASK	(0x1FFULL << AVF_TXD_CTX_QW1_VSI_SHIFT)
+
+#define AVF_TXD_CTX_QW0_EXT_IP_SHIFT	0
+#define AVF_TXD_CTX_QW0_EXT_IP_MASK	(0x3ULL << \
+					 AVF_TXD_CTX_QW0_EXT_IP_SHIFT)
+
+enum avf_tx_ctx_desc_eipt_offload {
+	AVF_TX_CTX_EXT_IP_NONE		= 0x0,
+	AVF_TX_CTX_EXT_IP_IPV6		= 0x1,
+	AVF_TX_CTX_EXT_IP_IPV4_NO_CSUM	= 0x2,
+	AVF_TX_CTX_EXT_IP_IPV4		= 0x3
+};
+
+#define AVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT	2
+#define AVF_TXD_CTX_QW0_EXT_IPLEN_MASK	(0x3FULL << \
+					 AVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT)
+
+#define AVF_TXD_CTX_QW0_NATT_SHIFT	9
+#define AVF_TXD_CTX_QW0_NATT_MASK	(0x3ULL << AVF_TXD_CTX_QW0_NATT_SHIFT)
+
+#define AVF_TXD_CTX_UDP_TUNNELING	BIT_ULL(AVF_TXD_CTX_QW0_NATT_SHIFT)
+#define AVF_TXD_CTX_GRE_TUNNELING	(0x2ULL << AVF_TXD_CTX_QW0_NATT_SHIFT)
+
+#define AVF_TXD_CTX_QW0_EIP_NOINC_SHIFT	11
+#define AVF_TXD_CTX_QW0_EIP_NOINC_MASK	BIT_ULL(AVF_TXD_CTX_QW0_EIP_NOINC_SHIFT)
+
+#define AVF_TXD_CTX_EIP_NOINC_IPID_CONST	AVF_TXD_CTX_QW0_EIP_NOINC_MASK
+
+#define AVF_TXD_CTX_QW0_NATLEN_SHIFT	12
+#define AVF_TXD_CTX_QW0_NATLEN_MASK	(0x7FULL << \
+					 AVF_TXD_CTX_QW0_NATLEN_SHIFT)
+
+#define AVF_TXD_CTX_QW0_DECTTL_SHIFT	19
+#define AVF_TXD_CTX_QW0_DECTTL_MASK	(0xFULL << \
+					 AVF_TXD_CTX_QW0_DECTTL_SHIFT)
+
+#define AVF_TXD_CTX_QW0_L4T_CS_SHIFT	23
+#define AVF_TXD_CTX_QW0_L4T_CS_MASK	BIT_ULL(AVF_TXD_CTX_QW0_L4T_CS_SHIFT)
+struct avf_nop_desc {
+	__le64 rsvd;
+	__le64 dtype_cmd;
+};
+
+#define AVF_TXD_NOP_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_NOP_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_NOP_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_NOP_QW1_CMD_SHIFT	4
+#define AVF_TXD_NOP_QW1_CMD_MASK	(0x7FUL << AVF_TXD_NOP_QW1_CMD_SHIFT)
+
+enum avf_tx_nop_desc_cmd_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_TX_NOP_DESC_EOP_SHIFT	= 0,
+	AVF_TX_NOP_DESC_RS_SHIFT	= 1,
+	AVF_TX_NOP_DESC_RSV_SHIFT	= 2 /* 5 bits */
+};
+
+struct avf_filter_program_desc {
+	__le32 qindex_flex_ptype_vsi;
+	__le32 rsvd;
+	__le32 dtype_cmd_cntindex;
+	__le32 fd_id;
+};
+#define AVF_TXD_FLTR_QW0_QINDEX_SHIFT	0
+#define AVF_TXD_FLTR_QW0_QINDEX_MASK	(0x7FFUL << \
+					 AVF_TXD_FLTR_QW0_QINDEX_SHIFT)
+#define AVF_TXD_FLTR_QW0_FLEXOFF_SHIFT	11
+#define AVF_TXD_FLTR_QW0_FLEXOFF_MASK	(0x7UL << \
+					 AVF_TXD_FLTR_QW0_FLEXOFF_SHIFT)
+#define AVF_TXD_FLTR_QW0_PCTYPE_SHIFT	17
+#define AVF_TXD_FLTR_QW0_PCTYPE_MASK	(0x3FUL << \
+					 AVF_TXD_FLTR_QW0_PCTYPE_SHIFT)
+
+/* Packet Classifier Types for filters */
+enum avf_filter_pctype {
+	/* Note: Values 0-28 are reserved for future use.
+	 * Values 29, 30, and 32 are not supported on XL710 and X710.
+	 */
+	AVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP	= 29,
+	AVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP	= 30,
+	AVF_FILTER_PCTYPE_NONF_IPV4_UDP		= 31,
+	AVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK	= 32,
+	AVF_FILTER_PCTYPE_NONF_IPV4_TCP		= 33,
+	AVF_FILTER_PCTYPE_NONF_IPV4_SCTP		= 34,
+	AVF_FILTER_PCTYPE_NONF_IPV4_OTHER		= 35,
+	AVF_FILTER_PCTYPE_FRAG_IPV4			= 36,
+	/* Note: Values 37-38 are reserved for future use.
+	 * Values 39, 40, and 42 are not supported on XL710 and X710.
+	 */
+	AVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP	= 39,
+	AVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP	= 40,
+	AVF_FILTER_PCTYPE_NONF_IPV6_UDP		= 41,
+	AVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK	= 42,
+	AVF_FILTER_PCTYPE_NONF_IPV6_TCP		= 43,
+	AVF_FILTER_PCTYPE_NONF_IPV6_SCTP		= 44,
+	AVF_FILTER_PCTYPE_NONF_IPV6_OTHER		= 45,
+	AVF_FILTER_PCTYPE_FRAG_IPV6			= 46,
+	/* Note: Value 47 is reserved for future use */
+	AVF_FILTER_PCTYPE_FCOE_OX			= 48,
+	AVF_FILTER_PCTYPE_FCOE_RX			= 49,
+	AVF_FILTER_PCTYPE_FCOE_OTHER			= 50,
+	/* Note: Values 51-62 are reserved for future use */
+	AVF_FILTER_PCTYPE_L2_PAYLOAD			= 63,
+};
+
+enum avf_filter_program_desc_dest {
+	AVF_FILTER_PROGRAM_DESC_DEST_DROP_PACKET		= 0x0,
+	AVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_QINDEX	= 0x1,
+	AVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_OTHER	= 0x2,
+};
+
+enum avf_filter_program_desc_fd_status {
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_NONE			= 0x0,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID		= 0x1,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID_4FLEX_BYTES	= 0x2,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_8FLEX_BYTES		= 0x3,
+};
+
+#define AVF_TXD_FLTR_QW0_DEST_VSI_SHIFT	23
+#define AVF_TXD_FLTR_QW0_DEST_VSI_MASK	(0x1FFUL << \
+					 AVF_TXD_FLTR_QW0_DEST_VSI_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_FLTR_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_FLTR_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CMD_SHIFT	4
+#define AVF_TXD_FLTR_QW1_CMD_MASK	(0xFFFFULL << \
+					 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_PCMD_SHIFT	(0x0ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_PCMD_MASK	(0x7ULL << AVF_TXD_FLTR_QW1_PCMD_SHIFT)
+
+enum avf_filter_program_desc_pcmd {
+	AVF_FILTER_PROGRAM_DESC_PCMD_ADD_UPDATE	= 0x1,
+	AVF_FILTER_PROGRAM_DESC_PCMD_REMOVE		= 0x2,
+};
+
+#define AVF_TXD_FLTR_QW1_DEST_SHIFT	(0x3ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_DEST_MASK	(0x3ULL << AVF_TXD_FLTR_QW1_DEST_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CNT_ENA_SHIFT	(0x7ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_CNT_ENA_MASK	BIT_ULL(AVF_TXD_FLTR_QW1_CNT_ENA_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_FD_STATUS_SHIFT	(0x9ULL + \
+						 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_FD_STATUS_MASK (0x3ULL << \
+					  AVF_TXD_FLTR_QW1_FD_STATUS_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_ATR_SHIFT	(0xEULL + \
+					 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_ATR_MASK	BIT_ULL(AVF_TXD_FLTR_QW1_ATR_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CNTINDEX_SHIFT 20
+#define AVF_TXD_FLTR_QW1_CNTINDEX_MASK	(0x1FFUL << \
+					 AVF_TXD_FLTR_QW1_CNTINDEX_SHIFT)
+
+enum avf_filter_type {
+	AVF_FLOW_DIRECTOR_FLTR = 0,
+	AVF_PE_QUAD_HASH_FLTR = 1,
+	AVF_ETHERTYPE_FLTR,
+	AVF_FCOE_CTX_FLTR,
+	AVF_MAC_VLAN_FLTR,
+	AVF_HASH_FLTR
+};
+
+struct avf_vsi_context {
+	u16 seid;
+	u16 uplink_seid;
+	u16 vsi_number;
+	u16 vsis_allocated;
+	u16 vsis_unallocated;
+	u16 flags;
+	u8 pf_num;
+	u8 vf_num;
+	u8 connection_type;
+	struct avf_aqc_vsi_properties_data info;
+};
+
+struct avf_veb_context {
+	u16 seid;
+	u16 uplink_seid;
+	u16 veb_number;
+	u16 vebs_allocated;
+	u16 vebs_unallocated;
+	u16 flags;
+	struct avf_aqc_get_veb_parameters_completion info;
+};
+
+/* Statistics collected by each port, VSI, VEB, and S-channel */
+struct avf_eth_stats {
+	u64 rx_bytes;			/* gorc */
+	u64 rx_unicast;			/* uprc */
+	u64 rx_multicast;		/* mprc */
+	u64 rx_broadcast;		/* bprc */
+	u64 rx_discards;		/* rdpc */
+	u64 rx_unknown_protocol;	/* rupp */
+	u64 tx_bytes;			/* gotc */
+	u64 tx_unicast;			/* uptc */
+	u64 tx_multicast;		/* mptc */
+	u64 tx_broadcast;		/* bptc */
+	u64 tx_discards;		/* tdpc */
+	u64 tx_errors;			/* tepc */
+};
+
+/* Statistics collected per VEB per TC */
+struct avf_veb_tc_stats {
+	u64 tc_rx_packets[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_rx_bytes[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_tx_packets[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_tx_bytes[AVF_MAX_TRAFFIC_CLASS];
+};
+
+/* Statistics collected per function for FCoE */
+struct avf_fcoe_stats {
+	u64 rx_fcoe_packets;		/* fcoeprc */
+	u64 rx_fcoe_dwords;		/* focedwrc */
+	u64 rx_fcoe_dropped;		/* fcoerpdc */
+	u64 tx_fcoe_packets;		/* fcoeptc */
+	u64 tx_fcoe_dwords;		/* focedwtc */
+	u64 fcoe_bad_fccrc;		/* fcoecrc */
+	u64 fcoe_last_error;		/* fcoelast */
+	u64 fcoe_ddp_count;		/* fcoeddpc */
+};
+
+/* offset to per function FCoE statistics block */
+#define AVF_FCOE_VF_STAT_OFFSET	0
+#define AVF_FCOE_PF_STAT_OFFSET	128
+#define AVF_FCOE_STAT_MAX		(AVF_FCOE_PF_STAT_OFFSET + AVF_MAX_PF)
+
+/* Statistics collected by the MAC */
+struct avf_hw_port_stats {
+	/* eth stats collected by the port */
+	struct avf_eth_stats eth;
+
+	/* additional port specific stats */
+	u64 tx_dropped_link_down;	/* tdold */
+	u64 crc_errors;			/* crcerrs */
+	u64 illegal_bytes;		/* illerrc */
+	u64 error_bytes;		/* errbc */
+	u64 mac_local_faults;		/* mlfc */
+	u64 mac_remote_faults;		/* mrfc */
+	u64 rx_length_errors;		/* rlec */
+	u64 link_xon_rx;		/* lxonrxc */
+	u64 link_xoff_rx;		/* lxoffrxc */
+	u64 priority_xon_rx[8];		/* pxonrxc[8] */
+	u64 priority_xoff_rx[8];	/* pxoffrxc[8] */
+	u64 link_xon_tx;		/* lxontxc */
+	u64 link_xoff_tx;		/* lxofftxc */
+	u64 priority_xon_tx[8];		/* pxontxc[8] */
+	u64 priority_xoff_tx[8];	/* pxofftxc[8] */
+	u64 priority_xon_2_xoff[8];	/* pxon2offc[8] */
+	u64 rx_size_64;			/* prc64 */
+	u64 rx_size_127;		/* prc127 */
+	u64 rx_size_255;		/* prc255 */
+	u64 rx_size_511;		/* prc511 */
+	u64 rx_size_1023;		/* prc1023 */
+	u64 rx_size_1522;		/* prc1522 */
+	u64 rx_size_big;		/* prc9522 */
+	u64 rx_undersize;		/* ruc */
+	u64 rx_fragments;		/* rfc */
+	u64 rx_oversize;		/* roc */
+	u64 rx_jabber;			/* rjc */
+	u64 tx_size_64;			/* ptc64 */
+	u64 tx_size_127;		/* ptc127 */
+	u64 tx_size_255;		/* ptc255 */
+	u64 tx_size_511;		/* ptc511 */
+	u64 tx_size_1023;		/* ptc1023 */
+	u64 tx_size_1522;		/* ptc1522 */
+	u64 tx_size_big;		/* ptc9522 */
+	u64 mac_short_packet_dropped;	/* mspdc */
+	u64 checksum_error;		/* xec */
+	/* flow director stats */
+	u64 fd_atr_match;
+	u64 fd_sb_match;
+	u64 fd_atr_tunnel_match;
+	u32 fd_atr_status;
+	u32 fd_sb_status;
+	/* EEE LPI */
+	u32 tx_lpi_status;
+	u32 rx_lpi_status;
+	u64 tx_lpi_count;		/* etlpic */
+	u64 rx_lpi_count;		/* erlpic */
+};
+
+/* Checksum and Shadow RAM pointers */
+#define AVF_SR_NVM_CONTROL_WORD		0x00
+#define AVF_SR_PCIE_ANALOG_CONFIG_PTR		0x03
+#define AVF_SR_PHY_ANALOG_CONFIG_PTR		0x04
+#define AVF_SR_OPTION_ROM_PTR			0x05
+#define AVF_SR_RO_PCIR_REGS_AUTO_LOAD_PTR	0x06
+#define AVF_SR_AUTO_GENERATED_POINTERS_PTR	0x07
+#define AVF_SR_PCIR_REGS_AUTO_LOAD_PTR		0x08
+#define AVF_SR_EMP_GLOBAL_MODULE_PTR		0x09
+#define AVF_SR_RO_PCIE_LCB_PTR			0x0A
+#define AVF_SR_EMP_IMAGE_PTR			0x0B
+#define AVF_SR_PE_IMAGE_PTR			0x0C
+#define AVF_SR_CSR_PROTECTED_LIST_PTR		0x0D
+#define AVF_SR_MNG_CONFIG_PTR			0x0E
+#define AVF_EMP_MODULE_PTR			0x0F
+#define AVF_SR_EMP_MODULE_PTR			0x48
+#define AVF_SR_PBA_FLAGS			0x15
+#define AVF_SR_PBA_BLOCK_PTR			0x16
+#define AVF_SR_BOOT_CONFIG_PTR			0x17
+#define AVF_NVM_OEM_VER_OFF			0x83
+#define AVF_SR_NVM_DEV_STARTER_VERSION		0x18
+#define AVF_SR_NVM_WAKE_ON_LAN			0x19
+#define AVF_SR_ALTERNATE_SAN_MAC_ADDRESS_PTR	0x27
+#define AVF_SR_PERMANENT_SAN_MAC_ADDRESS_PTR	0x28
+#define AVF_SR_NVM_MAP_VERSION			0x29
+#define AVF_SR_NVM_IMAGE_VERSION		0x2A
+#define AVF_SR_NVM_STRUCTURE_VERSION		0x2B
+#define AVF_SR_NVM_EETRACK_LO			0x2D
+#define AVF_SR_NVM_EETRACK_HI			0x2E
+#define AVF_SR_VPD_PTR				0x2F
+#define AVF_SR_PXE_SETUP_PTR			0x30
+#define AVF_SR_PXE_CONFIG_CUST_OPTIONS_PTR	0x31
+#define AVF_SR_NVM_ORIGINAL_EETRACK_LO		0x34
+#define AVF_SR_NVM_ORIGINAL_EETRACK_HI		0x35
+#define AVF_SR_SW_ETHERNET_MAC_ADDRESS_PTR	0x37
+#define AVF_SR_POR_REGS_AUTO_LOAD_PTR		0x38
+#define AVF_SR_EMPR_REGS_AUTO_LOAD_PTR		0x3A
+#define AVF_SR_GLOBR_REGS_AUTO_LOAD_PTR	0x3B
+#define AVF_SR_CORER_REGS_AUTO_LOAD_PTR	0x3C
+#define AVF_SR_PHY_ACTIVITY_LIST_PTR		0x3D
+#define AVF_SR_PCIE_ALT_AUTO_LOAD_PTR		0x3E
+#define AVF_SR_SW_CHECKSUM_WORD		0x3F
+#define AVF_SR_1ST_FREE_PROVISION_AREA_PTR	0x40
+#define AVF_SR_4TH_FREE_PROVISION_AREA_PTR	0x42
+#define AVF_SR_3RD_FREE_PROVISION_AREA_PTR	0x44
+#define AVF_SR_2ND_FREE_PROVISION_AREA_PTR	0x46
+#define AVF_SR_EMP_SR_SETTINGS_PTR		0x48
+#define AVF_SR_FEATURE_CONFIGURATION_PTR	0x49
+#define AVF_SR_CONFIGURATION_METADATA_PTR	0x4D
+#define AVF_SR_IMMEDIATE_VALUES_PTR		0x4E
+
+/* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
+#define AVF_SR_VPD_MODULE_MAX_SIZE		1024
+#define AVF_SR_PCIE_ALT_MODULE_MAX_SIZE	1024
+#define AVF_SR_CONTROL_WORD_1_SHIFT		0x06
+#define AVF_SR_CONTROL_WORD_1_MASK	(0x03 << AVF_SR_CONTROL_WORD_1_SHIFT)
+#define AVF_SR_CONTROL_WORD_1_NVM_BANK_VALID	BIT(5)
+#define AVF_SR_NVM_MAP_STRUCTURE_TYPE		BIT(12)
+#define AVF_PTR_TYPE                           BIT(15)
+
+/* Shadow RAM related */
+#define AVF_SR_SECTOR_SIZE_IN_WORDS	0x800
+#define AVF_SR_BUF_ALIGNMENT		4096
+#define AVF_SR_WORDS_IN_1KB		512
+/* Checksum should be calculated such that after adding all the words,
+ * including the checksum word itself, the sum should be 0xBABA.
+ */
+#define AVF_SR_SW_CHECKSUM_BASE	0xBABA
+
+#define AVF_SRRD_SRCTL_ATTEMPTS	100000
+
+/* FCoE Tx context descriptor - Use the avf_tx_context_desc struct */
+
+enum avf_fcoe_tx_ctx_desc_cmd_bits {
+	AVF_FCOE_TX_CTX_DESC_OPCODE_SINGLE_SEND	= 0x00, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS2	= 0x01, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS3	= 0x05, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS2	= 0x02, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS3	= 0x06, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS2	= 0x03, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS3	= 0x07, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DDP_CTX_INVL	= 0x08, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_CTX_INVL	= 0x09, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_RELOFF			= 0x10,
+	AVF_FCOE_TX_CTX_DESC_CLRSEQ			= 0x20,
+	AVF_FCOE_TX_CTX_DESC_DIFENA			= 0x40,
+	AVF_FCOE_TX_CTX_DESC_IL2TAG2			= 0x80
+};
+
+/* FCoE DIF/DIX Context descriptor */
+struct avf_fcoe_difdix_context_desc {
+	__le64 flags_buff0_buff1_ref;
+	__le64 difapp_msk_bias;
+};
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT	0
+#define AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_MASK	(0xFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT)
+
+enum avf_fcoe_difdix_ctx_desc_flags_bits {
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_RSVD				= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGCHK		= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGNOTCHK		= 0x0004,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_OPAQUE			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY		= 0x0008,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPTAG	= 0x0010,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPREFTAG	= 0x0018,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_CNST			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_INC1BLK		= 0x0020,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_APPTAG		= 0x0040,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_RSVD			= 0x0060,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_XSUM			= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_CRC			= 0x0080,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_UNTAG			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_BUF			= 0x0100,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_RSVD			= 0x0200,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_EMBDTAGS		= 0x0300,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_UNTAG			= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_TAG			= 0x0400,
+	/* 1 BIT */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_512B			= 0x0000,
+	/* 1 BIT */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_4K			= 0x0800
+};
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT	12
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_MASK	(0x3FFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT	22
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_MASK	(0x3FFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT	32
+#define AVF_FCOE_DIFDIX_CTX_QW0_REF_MASK	(0xFFFFFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT	0
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MASK	(0xFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT	16
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_MASK	(0xFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT	32
+#define AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_MASK	(0xFFFFFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT)
+
+/* FCoE DIF/DIX Buffers descriptor */
+struct avf_fcoe_difdix_buffers_desc {
+	__le64 buff_addr0;
+	__le64 buff_addr1;
+};
+
+/* FCoE DDP Context descriptor */
+struct avf_fcoe_ddp_context_desc {
+	__le64 rsvd;
+	__le64 type_cmd_foff_lsize;
+};
+
+#define AVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT	0
+#define AVF_FCOE_DDP_CTX_QW1_DTYPE_MASK	(0xFULL << \
+					AVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT)
+
+#define AVF_FCOE_DDP_CTX_QW1_CMD_SHIFT	4
+#define AVF_FCOE_DDP_CTX_QW1_CMD_MASK	(0xFULL << \
+					 AVF_FCOE_DDP_CTX_QW1_CMD_SHIFT)
+
+enum avf_fcoe_ddp_ctx_desc_cmd_bits {
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_512B	= 0x00, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_4K		= 0x01, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_8K		= 0x02, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_16K	= 0x03, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_DIFENA		= 0x04, /* 1 BIT  */
+	AVF_FCOE_DDP_CTX_DESC_LASTSEQH		= 0x08, /* 1 BIT  */
+};
+
+#define AVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT	16
+#define AVF_FCOE_DDP_CTX_QW1_FOFF_MASK	(0x3FFFULL << \
+					 AVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT)
+
+#define AVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT	32
+#define AVF_FCOE_DDP_CTX_QW1_LSIZE_MASK	(0x3FFFULL << \
+					AVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT)
+
+/* FCoE DDP/DWO Queue Context descriptor */
+struct avf_fcoe_queue_context_desc {
+	__le64 dmaindx_fbase;           /* 0:11 DMAINDX, 12:63 FBASE */
+	__le64 flen_tph;                /* 0:12 FLEN, 13:15 TPH */
+};
+
+#define AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT	0
+#define AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_MASK	(0xFFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT	12
+#define AVF_FCOE_QUEUE_CTX_QW0_FBASE_MASK	(0xFFFFFFFFFFFFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT	0
+#define AVF_FCOE_QUEUE_CTX_QW1_FLEN_MASK	(0x1FFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW1_TPH_SHIFT	13
+#define AVF_FCOE_QUEUE_CTX_QW1_TPH_MASK	(0x7ULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_TPH_SHIFT)
+
+enum avf_fcoe_queue_ctx_desc_tph_bits {
+	AVF_FCOE_QUEUE_CTX_DESC_TPHRDESC	= 0x1,
+	AVF_FCOE_QUEUE_CTX_DESC_TPHDATA	= 0x2
+};
+
+#define AVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT	30
+#define AVF_FCOE_QUEUE_CTX_QW1_RECIPE_MASK	(0x3ULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT)
+
+/* FCoE DDP/DWO Filter Context descriptor */
+struct avf_fcoe_filter_context_desc {
+	__le32 param;
+	__le16 seqn;
+
+	/* 48:51(0:3) RSVD, 52:63(4:15) DMAINDX */
+	__le16 rsvd_dmaindx;
+
+	/* 0:7 FLAGS, 8:52 RSVD, 53:63 LANQ */
+	__le64 flags_rsvd_lanq;
+};
+
+#define AVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT	4
+#define AVF_FCOE_FILTER_CTX_QW0_DMAINDX_MASK	(0xFFF << \
+					AVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT)
+
+enum avf_fcoe_filter_ctx_desc_flags_bits {
+	AVF_FCOE_FILTER_CTX_DESC_CTYP_DDP	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_CTYP_DWO	= 0x01,
+	AVF_FCOE_FILTER_CTX_DESC_ENODE_INIT	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_ENODE_RSP	= 0x02,
+	AVF_FCOE_FILTER_CTX_DESC_FC_CLASS2	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_FC_CLASS3	= 0x04
+};
+
+#define AVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT	0
+#define AVF_FCOE_FILTER_CTX_QW1_FLAGS_MASK	(0xFFULL << \
+					AVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT)
+
+#define AVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT     8
+#define AVF_FCOE_FILTER_CTX_QW1_PCTYPE_MASK      (0x3FULL << \
+			AVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT)
+
+#define AVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT     53
+#define AVF_FCOE_FILTER_CTX_QW1_LANQINDX_MASK      (0x7FFULL << \
+			AVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT)
+
+enum avf_switch_element_types {
+	AVF_SWITCH_ELEMENT_TYPE_MAC	= 1,
+	AVF_SWITCH_ELEMENT_TYPE_PF	= 2,
+	AVF_SWITCH_ELEMENT_TYPE_VF	= 3,
+	AVF_SWITCH_ELEMENT_TYPE_EMP	= 4,
+	AVF_SWITCH_ELEMENT_TYPE_BMC	= 6,
+	AVF_SWITCH_ELEMENT_TYPE_PE	= 16,
+	AVF_SWITCH_ELEMENT_TYPE_VEB	= 17,
+	AVF_SWITCH_ELEMENT_TYPE_PA	= 18,
+	AVF_SWITCH_ELEMENT_TYPE_VSI	= 19,
+};
+
+/* Supported EtherType filters */
+enum avf_ether_type_index {
+	AVF_ETHER_TYPE_1588		= 0,
+	AVF_ETHER_TYPE_FIP		= 1,
+	AVF_ETHER_TYPE_OUI_EXTENDED	= 2,
+	AVF_ETHER_TYPE_MAC_CONTROL	= 3,
+	AVF_ETHER_TYPE_LLDP		= 4,
+	AVF_ETHER_TYPE_EVB_PROTOCOL1	= 5,
+	AVF_ETHER_TYPE_EVB_PROTOCOL2	= 6,
+	AVF_ETHER_TYPE_QCN_CNM		= 7,
+	AVF_ETHER_TYPE_8021X		= 8,
+	AVF_ETHER_TYPE_ARP		= 9,
+	AVF_ETHER_TYPE_RSV1		= 10,
+	AVF_ETHER_TYPE_RSV2		= 11,
+};
+
+/* Filter context base size is 1K */
+#define AVF_HASH_FILTER_BASE_SIZE	1024
+/* Supported Hash filter values */
+enum avf_hash_filter_size {
+	AVF_HASH_FILTER_SIZE_1K	= 0,
+	AVF_HASH_FILTER_SIZE_2K	= 1,
+	AVF_HASH_FILTER_SIZE_4K	= 2,
+	AVF_HASH_FILTER_SIZE_8K	= 3,
+	AVF_HASH_FILTER_SIZE_16K	= 4,
+	AVF_HASH_FILTER_SIZE_32K	= 5,
+	AVF_HASH_FILTER_SIZE_64K	= 6,
+	AVF_HASH_FILTER_SIZE_128K	= 7,
+	AVF_HASH_FILTER_SIZE_256K	= 8,
+	AVF_HASH_FILTER_SIZE_512K	= 9,
+	AVF_HASH_FILTER_SIZE_1M	= 10,
+};
+
+/* DMA context base size is 0.5K */
+#define AVF_DMA_CNTX_BASE_SIZE		512
+/* Supported DMA context values */
+enum avf_dma_cntx_size {
+	AVF_DMA_CNTX_SIZE_512		= 0,
+	AVF_DMA_CNTX_SIZE_1K		= 1,
+	AVF_DMA_CNTX_SIZE_2K		= 2,
+	AVF_DMA_CNTX_SIZE_4K		= 3,
+	AVF_DMA_CNTX_SIZE_8K		= 4,
+	AVF_DMA_CNTX_SIZE_16K		= 5,
+	AVF_DMA_CNTX_SIZE_32K		= 6,
+	AVF_DMA_CNTX_SIZE_64K		= 7,
+	AVF_DMA_CNTX_SIZE_128K		= 8,
+	AVF_DMA_CNTX_SIZE_256K		= 9,
+};
+
+/* Supported Hash look up table (LUT) sizes */
+enum avf_hash_lut_size {
+	AVF_HASH_LUT_SIZE_128		= 0,
+	AVF_HASH_LUT_SIZE_512		= 1,
+};
+
+/* Structure to hold a per PF filter control settings */
+struct avf_filter_control_settings {
+	/* number of PE Quad Hash filter buckets */
+	enum avf_hash_filter_size pe_filt_num;
+	/* number of PE Quad Hash contexts */
+	enum avf_dma_cntx_size pe_cntx_num;
+	/* number of FCoE filter buckets */
+	enum avf_hash_filter_size fcoe_filt_num;
+	/* number of FCoE DDP contexts */
+	enum avf_dma_cntx_size fcoe_cntx_num;
+	/* size of the Hash LUT */
+	enum avf_hash_lut_size	hash_lut_size;
+	/* enable FDIR filters for PF and its VFs */
+	bool enable_fdir;
+	/* enable Ethertype filters for PF and its VFs */
+	bool enable_ethtype;
+	/* enable MAC/VLAN filters for PF and its VFs */
+	bool enable_macvlan;
+};
+
+/* Structure to hold device level control filter counts */
+struct avf_control_filter_stats {
+	u16 mac_etype_used;   /* Used perfect match MAC/EtherType filters */
+	u16 etype_used;       /* Used perfect EtherType filters */
+	u16 mac_etype_free;   /* Un-used perfect match MAC/EtherType filters */
+	u16 etype_free;       /* Un-used perfect EtherType filters */
+};
+
+enum avf_reset_type {
+	AVF_RESET_POR		= 0,
+	AVF_RESET_CORER	= 1,
+	AVF_RESET_GLOBR	= 2,
+	AVF_RESET_EMPR		= 3,
+};
+
+/* IEEE 802.1AB LLDP Agent Variables from NVM */
+#define AVF_NVM_LLDP_CFG_PTR   0x06
+#define AVF_SR_LLDP_CFG_PTR    0x31
+struct avf_lldp_variables {
+	u16 length;
+	u16 adminstatus;
+	u16 msgfasttx;
+	u16 msgtxinterval;
+	u16 txparams;
+	u16 timers;
+	u16 crc8;
+};
+
+/* Offsets into Alternate Ram */
+#define AVF_ALT_STRUCT_FIRST_PF_OFFSET		0   /* in dwords */
+#define AVF_ALT_STRUCT_DWORDS_PER_PF		64   /* in dwords */
+#define AVF_ALT_STRUCT_OUTER_VLAN_TAG_OFFSET	0xD  /* in dwords */
+#define AVF_ALT_STRUCT_USER_PRIORITY_OFFSET	0xC  /* in dwords */
+#define AVF_ALT_STRUCT_MIN_BW_OFFSET		0xE  /* in dwords */
+#define AVF_ALT_STRUCT_MAX_BW_OFFSET		0xF  /* in dwords */
+
+/* Alternate Ram Bandwidth Masks */
+#define AVF_ALT_BW_VALUE_MASK		0xFF
+#define AVF_ALT_BW_RELATIVE_MASK	0x40000000
+#define AVF_ALT_BW_VALID_MASK		0x80000000
+
+/* RSS Hash Table Size */
+#define AVF_PFQF_CTL_0_HASHLUTSIZE_512	0x00010000
+
+/* INPUT SET MASK for RSS, flow director, and flexible payload */
+#define AVF_L3_SRC_SHIFT		47
+#define AVF_L3_SRC_MASK		(0x3ULL << AVF_L3_SRC_SHIFT)
+#define AVF_L3_V6_SRC_SHIFT		43
+#define AVF_L3_V6_SRC_MASK		(0xFFULL << AVF_L3_V6_SRC_SHIFT)
+#define AVF_L3_DST_SHIFT		35
+#define AVF_L3_DST_MASK		(0x3ULL << AVF_L3_DST_SHIFT)
+#define AVF_L3_V6_DST_SHIFT		35
+#define AVF_L3_V6_DST_MASK		(0xFFULL << AVF_L3_V6_DST_SHIFT)
+#define AVF_L4_SRC_SHIFT		34
+#define AVF_L4_SRC_MASK		(0x1ULL << AVF_L4_SRC_SHIFT)
+#define AVF_L4_DST_SHIFT		33
+#define AVF_L4_DST_MASK		(0x1ULL << AVF_L4_DST_SHIFT)
+#define AVF_VERIFY_TAG_SHIFT		31
+#define AVF_VERIFY_TAG_MASK		(0x3ULL << AVF_VERIFY_TAG_SHIFT)
+
+#define AVF_FLEX_50_SHIFT		13
+#define AVF_FLEX_50_MASK		(0x1ULL << AVF_FLEX_50_SHIFT)
+#define AVF_FLEX_51_SHIFT		12
+#define AVF_FLEX_51_MASK		(0x1ULL << AVF_FLEX_51_SHIFT)
+#define AVF_FLEX_52_SHIFT		11
+#define AVF_FLEX_52_MASK		(0x1ULL << AVF_FLEX_52_SHIFT)
+#define AVF_FLEX_53_SHIFT		10
+#define AVF_FLEX_53_MASK		(0x1ULL << AVF_FLEX_53_SHIFT)
+#define AVF_FLEX_54_SHIFT		9
+#define AVF_FLEX_54_MASK		(0x1ULL << AVF_FLEX_54_SHIFT)
+#define AVF_FLEX_55_SHIFT		8
+#define AVF_FLEX_55_MASK		(0x1ULL << AVF_FLEX_55_SHIFT)
+#define AVF_FLEX_56_SHIFT		7
+#define AVF_FLEX_56_MASK		(0x1ULL << AVF_FLEX_56_SHIFT)
+#define AVF_FLEX_57_SHIFT		6
+#define AVF_FLEX_57_MASK		(0x1ULL << AVF_FLEX_57_SHIFT)
+
+/* Version format for Dynamic Device Personalization(DDP) */
+struct avf_ddp_version {
+	u8 major;
+	u8 minor;
+	u8 update;
+	u8 draft;
+};
+
+#define AVF_DDP_NAME_SIZE	32
+
+/* Package header */
+struct avf_package_header {
+	struct avf_ddp_version version;
+	u32 segment_count;
+	u32 segment_offset[1];
+};
+
+/* Generic segment header */
+struct avf_generic_seg_header {
+#define SEGMENT_TYPE_METADATA	0x00000001
+#define SEGMENT_TYPE_NOTES	0x00000002
+#define SEGMENT_TYPE_AVF	0x00000011
+#define SEGMENT_TYPE_X722	0x00000012
+	u32 type;
+	struct avf_ddp_version version;
+	u32 size;
+	char name[AVF_DDP_NAME_SIZE];
+};
+
+struct avf_metadata_segment {
+	struct avf_generic_seg_header header;
+	struct avf_ddp_version version;
+#define AVF_DDP_TRACKID_RDONLY		0
+#define AVF_DDP_TRACKID_INVALID	0xFFFFFFFF
+	u32 track_id;
+	char name[AVF_DDP_NAME_SIZE];
+};
+
+struct avf_device_id_entry {
+	u32 vendor_dev_id;
+	u32 sub_vendor_dev_id;
+};
+
+struct avf_profile_segment {
+	struct avf_generic_seg_header header;
+	struct avf_ddp_version version;
+	char name[AVF_DDP_NAME_SIZE];
+	u32 device_table_count;
+	struct avf_device_id_entry device_table[1];
+};
+
+struct avf_section_table {
+	u32 section_count;
+	u32 section_offset[1];
+};
+
+struct avf_profile_section_header {
+	u16 tbl_size;
+	u16 data_end;
+	struct {
+#define SECTION_TYPE_INFO	0x00000010
+#define SECTION_TYPE_MMIO	0x00000800
+#define SECTION_TYPE_RB_MMIO	0x00001800
+#define SECTION_TYPE_AQ		0x00000801
+#define SECTION_TYPE_RB_AQ	0x00001801
+#define SECTION_TYPE_NOTE	0x80000000
+#define SECTION_TYPE_NAME	0x80000001
+#define SECTION_TYPE_PROTO	0x80000002
+#define SECTION_TYPE_PCTYPE	0x80000003
+#define SECTION_TYPE_PTYPE	0x80000004
+		u32 type;
+		u32 offset;
+		u32 size;
+	} section;
+};
+
+struct avf_profile_tlv_section_record {
+	u8 rtype;
+	u8 type;
+	u16 len;
+	u8 data[12];
+};
+
+/* Generic AQ section in profile */
+struct avf_profile_aq_section {
+	u16 opcode;
+	u16 flags;
+	u8  param[16];
+	u16 datalen;
+	u8  data[1];
+};
+
+struct avf_profile_info {
+	u32 track_id;
+	struct avf_ddp_version version;
+	u8 op;
+#define AVF_DDP_ADD_TRACKID		0x01
+#define AVF_DDP_REMOVE_TRACKID	0x02
+	u8 reserved[7];
+	u8 name[AVF_DDP_NAME_SIZE];
+};
+#endif /* _AVF_TYPE_H_ */
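
A minimal sketch of how the filter-program descriptor defined above would be
filled from its QW0/QW1 shift and mask macros. This is illustrative only and
not part of the patch: the helper name is made up, the u16/u32 typedefs are
assumed from the osdep header, rte_cpu_to_le_32() comes from rte_byteorder.h,
and the DTYPE field would additionally be set to the filter-program
descriptor type defined earlier in avf_type.h.

static inline void
avf_fill_fdir_prog_desc(struct avf_filter_program_desc *fdir,
			u16 qindex, enum avf_filter_pctype pctype,
			u16 dest_vsi)
{
	u32 qw0 = 0, qw1 = 0;

	/* QW0: receive queue index, packet classifier type, destination VSI */
	qw0 |= ((u32)qindex << AVF_TXD_FLTR_QW0_QINDEX_SHIFT) &
	       AVF_TXD_FLTR_QW0_QINDEX_MASK;
	qw0 |= ((u32)pctype << AVF_TXD_FLTR_QW0_PCTYPE_SHIFT) &
	       AVF_TXD_FLTR_QW0_PCTYPE_MASK;
	qw0 |= ((u32)dest_vsi << AVF_TXD_FLTR_QW0_DEST_VSI_SHIFT) &
	       AVF_TXD_FLTR_QW0_DEST_VSI_MASK;

	/* QW1: add/update command, direct to queue, report the FD ID */
	qw1 |= (AVF_FILTER_PROGRAM_DESC_PCMD_ADD_UPDATE <<
		AVF_TXD_FLTR_QW1_PCMD_SHIFT) & AVF_TXD_FLTR_QW1_PCMD_MASK;
	qw1 |= (AVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_QINDEX <<
		AVF_TXD_FLTR_QW1_DEST_SHIFT) & AVF_TXD_FLTR_QW1_DEST_MASK;
	qw1 |= (AVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID <<
		AVF_TXD_FLTR_QW1_FD_STATUS_SHIFT) &
	       AVF_TXD_FLTR_QW1_FD_STATUS_MASK;

	fdir->qindex_flex_ptype_vsi = rte_cpu_to_le_32(qw0);
	fdir->dtype_cmd_cntindex = rte_cpu_to_le_32(qw1);
	fdir->rsvd = 0;
	fdir->fd_id = 0;
}
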
diff --git a/drivers/net/avf/base/virtchnl.h b/drivers/net/avf/base/virtchnl.h
new file mode 100644
index 0000000..167518f
--- /dev/null
+++ b/drivers/net/avf/base/virtchnl.h
@@ -0,0 +1,787 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _VIRTCHNL_H_
+#define _VIRTCHNL_H_
+
+/* Description:
+ * This header file describes the VF-PF communication protocol used
+ * by the drivers for all devices starting from our 40G product line
+ *
+ * Admin queue buffer usage:
+ * desc->opcode is always aqc_opc_send_msg_to_pf
+ * flags, retval, datalen, and data addr are all used normally.
+ * The Firmware copies the cookie fields when sending messages between the
+ * PF and VF, but uses all other fields internally. Due to this limitation,
+ * we must send all messages as "indirect", i.e. using an external buffer.
+ *
+ * All the VSI indexes are relative to the VF. Each VF can have a maximum of
+ * three VSIs. All the queue indexes are relative to the VSI.  Each VF can
+ * have a maximum of sixteen queues for all of its VSIs.
+ *
+ * The PF is required to return a status code in v_retval for all messages
+ * except RESET_VF, which does not require any response. The return value
+ * is of status_code type, defined in the shared type.h.
+ *
+ * In general, VF driver initialization should roughly follow the order of
+ * these opcodes. The VF driver must first validate the API version of the
+ * PF driver, then request a reset, then get resources, then configure
+ * queues and interrupts. After these operations are complete, the VF
+ * driver may start its queues, optionally add MAC and VLAN filters, and
+ * process traffic.
+ */
+
+/* START GENERIC DEFINES
+ * Need to ensure the following enums and defines hold the same meaning and
+ * value in current and future projects
+ */
+
+/* Error Codes */
+enum virtchnl_status_code {
+	VIRTCHNL_STATUS_SUCCESS				= 0,
+	VIRTCHNL_ERR_PARAM				= -5,
+	VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH		= -38,
+	VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR		= -39,
+	VIRTCHNL_STATUS_ERR_INVALID_VF_ID		= -40,
+	VIRTCHNL_STATUS_NOT_SUPPORTED			= -64,
+};
+
+#define VIRTCHNL_LINK_SPEED_100MB_SHIFT		0x1
+#define VIRTCHNL_LINK_SPEED_1000MB_SHIFT	0x2
+#define VIRTCHNL_LINK_SPEED_10GB_SHIFT		0x3
+#define VIRTCHNL_LINK_SPEED_40GB_SHIFT		0x4
+#define VIRTCHNL_LINK_SPEED_20GB_SHIFT		0x5
+#define VIRTCHNL_LINK_SPEED_25GB_SHIFT		0x6
+
+enum virtchnl_link_speed {
+	VIRTCHNL_LINK_SPEED_UNKNOWN	= 0,
+	VIRTCHNL_LINK_SPEED_100MB	= BIT(VIRTCHNL_LINK_SPEED_100MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_1GB		= BIT(VIRTCHNL_LINK_SPEED_1000MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_10GB	= BIT(VIRTCHNL_LINK_SPEED_10GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_40GB	= BIT(VIRTCHNL_LINK_SPEED_40GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_20GB	= BIT(VIRTCHNL_LINK_SPEED_20GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_25GB	= BIT(VIRTCHNL_LINK_SPEED_25GB_SHIFT),
+};
+
+/* for hsplit_0 field of Rx HMC context */
+/* deprecated with AVF 1.0 */
+enum virtchnl_rx_hsplit {
+	VIRTCHNL_RX_HSPLIT_NO_SPLIT      = 0,
+	VIRTCHNL_RX_HSPLIT_SPLIT_L2      = 1,
+	VIRTCHNL_RX_HSPLIT_SPLIT_IP      = 2,
+	VIRTCHNL_RX_HSPLIT_SPLIT_TCP_UDP = 4,
+	VIRTCHNL_RX_HSPLIT_SPLIT_SCTP    = 8,
+};
+
+#define VIRTCHNL_ETH_LENGTH_OF_ADDRESS	6
+/* END GENERIC DEFINES */
+
+/* Opcodes for VF-PF communication. These are placed in the v_opcode field
+ * of the virtchnl_msg structure.
+ */
+enum virtchnl_ops {
+/* The PF sends status change events to VFs using
+ * the VIRTCHNL_OP_EVENT opcode.
+ * VFs send requests to the PF using the other ops.
+ * Use of "advanced opcode" features must be negotiated as part of the
+ * capabilities exchange and is not considered part of the base mode
+ * feature set.
+ */
+	VIRTCHNL_OP_UNKNOWN = 0,
+	VIRTCHNL_OP_VERSION = 1, /* must ALWAYS be 1 */
+	VIRTCHNL_OP_RESET_VF = 2,
+	VIRTCHNL_OP_GET_VF_RESOURCES = 3,
+	VIRTCHNL_OP_CONFIG_TX_QUEUE = 4,
+	VIRTCHNL_OP_CONFIG_RX_QUEUE = 5,
+	VIRTCHNL_OP_CONFIG_VSI_QUEUES = 6,
+	VIRTCHNL_OP_CONFIG_IRQ_MAP = 7,
+	VIRTCHNL_OP_ENABLE_QUEUES = 8,
+	VIRTCHNL_OP_DISABLE_QUEUES = 9,
+	VIRTCHNL_OP_ADD_ETH_ADDR = 10,
+	VIRTCHNL_OP_DEL_ETH_ADDR = 11,
+	VIRTCHNL_OP_ADD_VLAN = 12,
+	VIRTCHNL_OP_DEL_VLAN = 13,
+	VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE = 14,
+	VIRTCHNL_OP_GET_STATS = 15,
+	VIRTCHNL_OP_RSVD = 16,
+	VIRTCHNL_OP_EVENT = 17, /* must ALWAYS be 17 */
+#ifdef VIRTCHNL_SOL_VF_SUPPORT
+	VIRTCHNL_OP_GET_ADDNL_SOL_CONFIG = 19,
+#endif
+#ifdef VIRTCHNL_IWARP
+	VIRTCHNL_OP_IWARP = 20, /* advanced opcode */
+	VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP = 21, /* advanced opcode */
+	VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP = 22, /* advanced opcode */
+#endif
+	VIRTCHNL_OP_CONFIG_RSS_KEY = 23,
+	VIRTCHNL_OP_CONFIG_RSS_LUT = 24,
+	VIRTCHNL_OP_GET_RSS_HENA_CAPS = 25,
+	VIRTCHNL_OP_SET_RSS_HENA = 26,
+	VIRTCHNL_OP_ENABLE_VLAN_STRIPPING = 27,
+	VIRTCHNL_OP_DISABLE_VLAN_STRIPPING = 28,
+	VIRTCHNL_OP_REQUEST_QUEUES = 29,
+
+};
+
+/* This macro is used to generate a compilation error if a structure
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure is not of the correct size, otherwise it creates an enum that is
+ * never used.
+ */
+#define VIRTCHNL_CHECK_STRUCT_LEN(n, X) enum virtchnl_static_assert_enum_##X \
+	{virtchnl_static_assert_##X = (n) / ((sizeof(struct X) == (n)) ? 1 : 0)}
+
+/* Virtual channel message descriptor. This overlays the admin queue
+ * descriptor. All other data is passed in external buffers.
+ */
+
+struct virtchnl_msg {
+	u8 pad[8];			 /* AQ flags/opcode/len/retval fields */
+	enum virtchnl_ops v_opcode; /* avoid confusion with desc->opcode */
+	enum virtchnl_status_code v_retval;  /* ditto for desc->retval */
+	u32 vfid;			 /* used by PF when sending to VF */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg);
+
+/* Message descriptions and data structures.*/
+
+/* VIRTCHNL_OP_VERSION
+ * VF posts its version number to the PF. PF responds with its version number
+ * in the same format, along with a return code.
+ * Reply from PF has its major/minor versions also in param0 and param1.
+ * If there is a major version mismatch, then the VF cannot operate.
+ * If there is a minor version mismatch, then the VF can operate but should
+ * add a warning to the system log.
+ *
+ * This enum element MUST always be specified as == 1, regardless of other
+ * changes in the API. The PF must always respond to this message without
+ * error regardless of version mismatch.
+ */
+#define VIRTCHNL_VERSION_MAJOR		1
+#define VIRTCHNL_VERSION_MINOR		1
+#define VIRTCHNL_VERSION_MINOR_NO_VF_CAPS	0
+
+struct virtchnl_version_info {
+	u32 major;
+	u32 minor;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_version_info);
+
+#define VF_IS_V10(_v) (((_v)->major == 1) && ((_v)->minor == 0))
+#define VF_IS_V11(_ver) (((_ver)->major == 1) && ((_ver)->minor == 1))
+
+/* VIRTCHNL_OP_RESET_VF
+ * VF sends this request to PF with no parameters
+ * PF does NOT respond! VF driver must delay then poll VFGEN_RSTAT register
+ * until reset completion is indicated. The admin queue must be reinitialized
+ * after this operation.
+ *
+ * When reset is complete, PF must ensure that all queues in all VSIs associated
+ * with the VF are stopped, all queue configurations in the HMC are set to 0,
+ * and all MAC and VLAN filters (except the default MAC address) on all VSIs
+ * are cleared.
+ */
+
+/* VSI types that use VIRTCHNL interface for VF-PF communication. VSI_SRIOV
+ * vsi_type should always be 6 for backward compatibility. Add other fields
+ * as needed.
+ */
+enum virtchnl_vsi_type {
+	VIRTCHNL_VSI_TYPE_INVALID = 0,
+	VIRTCHNL_VSI_SRIOV = 6,
+};
+
+/* VIRTCHNL_OP_GET_VF_RESOURCES
+ * Version 1.0 VF sends this request to PF with no parameters
+ * Version 1.1 VF sends this request to PF with u32 bitmap of its capabilities
+ * PF responds with an indirect message containing
+ * virtchnl_vf_resource and one or more
+ * virtchnl_vsi_resource structures.
+ */
+
+struct virtchnl_vsi_resource {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	enum virtchnl_vsi_type vsi_type;
+	u16 qset_handle;
+	u8 default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
+
+/* VF capability flags
+ * VIRTCHNL_VF_OFFLOAD_L2 flag is inclusive of base mode L2 offloads including
+ * TX/RX Checksum offloading and TSO for non-tunnelled packets.
+ */
+#define VIRTCHNL_VF_OFFLOAD_L2			0x00000001
+#define VIRTCHNL_VF_OFFLOAD_IWARP		0x00000002
+#define VIRTCHNL_VF_OFFLOAD_RSVD		0x00000004
+#define VIRTCHNL_VF_OFFLOAD_RSS_AQ		0x00000008
+#define VIRTCHNL_VF_OFFLOAD_RSS_REG		0x00000010
+#define VIRTCHNL_VF_OFFLOAD_WB_ON_ITR		0x00000020
+#define VIRTCHNL_VF_OFFLOAD_REQ_QUEUES		0x00000040
+#define VIRTCHNL_VF_OFFLOAD_VLAN		0x00010000
+#define VIRTCHNL_VF_OFFLOAD_RX_POLLING		0x00020000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2	0x00040000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PF		0X00080000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP		0X00100000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM		0X00200000
+#define VIRTCHNL_VF_OFFLOAD_RX_ENCAP_CSUM	0X00400000
+
+#define VF_BASE_MODE_OFFLOADS (VIRTCHNL_VF_OFFLOAD_L2 | \
+			       VIRTCHNL_VF_OFFLOAD_VLAN | \
+			       VIRTCHNL_VF_OFFLOAD_RSS_PF)
+
+struct virtchnl_vf_resource {
+	u16 num_vsis;
+	u16 num_queue_pairs;
+	u16 max_vectors;
+	u16 max_mtu;
+
+	u32 vf_cap_flags;
+	u32 rss_key_size;
+	u32 rss_lut_size;
+
+	struct virtchnl_vsi_resource vsi_res[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(36, virtchnl_vf_resource);
+
+/* VIRTCHNL_OP_CONFIG_TX_QUEUE
+ * VF sends this message to set up parameters for one TX queue.
+ * External data buffer contains one instance of virtchnl_txq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Tx queue config info */
+struct virtchnl_txq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u16 ring_len;		/* number of descriptors, multiple of 8 */
+	u16 headwb_enabled; /* deprecated with AVF 1.0 */
+	u64 dma_ring_addr;
+	u64 dma_headwb_addr; /* deprecated with AVF 1.0 */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_txq_info);
+
+/* VIRTCHNL_OP_CONFIG_RX_QUEUE
+ * VF sends this message to set up parameters for one RX queue.
+ * External data buffer contains one instance of virtchnl_rxq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Rx queue config info */
+struct virtchnl_rxq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u32 ring_len;		/* number of descriptors, multiple of 32 */
+	u16 hdr_size;
+	u16 splithdr_enabled; /* deprecated with AVF 1.0 */
+	u32 databuffer_size;
+	u32 max_pkt_size;
+	u32 pad1;
+	u64 dma_ring_addr;
+	enum virtchnl_rx_hsplit rx_split_pos; /* deprecated with AVF 1.0 */
+	u32 pad2;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_rxq_info);
+
+/* VIRTCHNL_OP_CONFIG_VSI_QUEUES
+ * VF sends this message to set parameters for all active TX and RX queues
+ * associated with the specified VSI.
+ * PF configures queues and returns status.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the VSI, an error is returned and no queues are configured.
+ */
+struct virtchnl_queue_pair_info {
+	/* NOTE: vsi_id and queue_id should be identical for both queues. */
+	struct virtchnl_txq_info txq;
+	struct virtchnl_rxq_info rxq;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(64, virtchnl_queue_pair_info);
+
+struct virtchnl_vsi_queue_config_info {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	u32 pad;
+	struct virtchnl_queue_pair_info qpair[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_vsi_queue_config_info);
+
+/* VIRTCHNL_OP_REQUEST_QUEUES
+ * VF sends this message to request the PF to allocate additional queues to
+ * this VF.  Each VF gets a guaranteed number of queues on init but asking for
+ * additional queues must be negotiated.  This is a best effort request as it
+ * is possible the PF does not have enough queues left to support the request.
+ * If the PF cannot support the number requested it will respond with the
+ * maximum number it is able to support.  If the request is successful, PF will
+ * then reset the VF to institute required changes.
+ */
+
+/* VF resource request */
+struct virtchnl_vf_res_request {
+	u16 num_queue_pairs;
+};
+
+/* VIRTCHNL_OP_CONFIG_IRQ_MAP
+ * VF uses this message to map vectors to queues.
+ * The rxq_map and txq_map fields are bitmaps used to indicate which queues
+ * are to be associated with the specified vector.
+ * The "other" causes are always mapped to vector 0.
+ * PF configures interrupt mapping and returns status.
+ */
+struct virtchnl_vector_map {
+	u16 vsi_id;
+	u16 vector_id;
+	u16 rxq_map;
+	u16 txq_map;
+	u16 rxitr_idx;
+	u16 txitr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_vector_map);
+
+struct virtchnl_irq_map_info {
+	u16 num_vectors;
+	struct virtchnl_vector_map vecmap[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(14, virtchnl_irq_map_info);
+
+/* VIRTCHNL_OP_ENABLE_QUEUES
+ * VIRTCHNL_OP_DISABLE_QUEUES
+ * VF sends these messages to enable or disable TX/RX queue pairs.
+ * The queues fields are bitmaps indicating which queues to act upon.
+ * (Currently, we only support 16 queues per VF, but we make the field
+ * u32 to allow for expansion.)
+ * PF performs requested action and returns status.
+ */
+struct virtchnl_queue_select {
+	u16 vsi_id;
+	u16 pad;
+	u32 rx_queues;
+	u32 tx_queues;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_select);
+
+/* VIRTCHNL_OP_ADD_ETH_ADDR
+ * VF sends this message in order to add one or more unicast or multicast
+ * address filters for the specified VSI.
+ * PF adds the filters and returns status.
+ */
+
+/* VIRTCHNL_OP_DEL_ETH_ADDR
+ * VF sends this message in order to remove one or more unicast or multicast
+ * filters for the specified VSI.
+ * PF removes the filters and returns status.
+ */
+
+struct virtchnl_ether_addr {
+	u8 addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+	u8 pad[2];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_ether_addr);
+
+struct virtchnl_ether_addr_list {
+	u16 vsi_id;
+	u16 num_elements;
+	struct virtchnl_ether_addr list[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_ether_addr_list);
+
+#ifdef VIRTCHNL_SOL_VF_SUPPORT
+/* VIRTCHNL_OP_GET_ADDNL_SOL_CONFIG
+ * VF sends this message to get the default MTU and list of additional ethernet
+ * addresses it is allowed to use.
+ * PF responds with an indirect message containing
+ * virtchnl_addnl_solaris_config with zero or more
+ * virtchnl_ether_addr structures.
+ *
+ * It is expected that this operation will only ever be needed for Solaris VFs
+ * running under a Solaris PF.
+ */
+struct virtchnl_addnl_solaris_config {
+	u16 default_mtu;
+	struct virtchnl_ether_addr_list al;
+};
+
+#endif
+/* VIRTCHNL_OP_ADD_VLAN
+ * VF sends this message to add one or more VLAN tag filters for receives.
+ * PF adds the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+/* VIRTCHNL_OP_DEL_VLAN
+ * VF sends this message to remove one or more VLAN tag filters for receives.
+ * PF removes the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+struct virtchnl_vlan_filter_list {
+	u16 vsi_id;
+	u16 num_elements;
+	u16 vlan_id[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vlan_filter_list);
+
+/* VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE
+ * VF sends VSI id and flags.
+ * PF returns status code in retval.
+ * Note: we assume that broadcast accept mode is always enabled.
+ */
+struct virtchnl_promisc_info {
+	u16 vsi_id;
+	u16 flags;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_promisc_info);
+
+#define FLAG_VF_UNICAST_PROMISC	0x00000001
+#define FLAG_VF_MULTICAST_PROMISC	0x00000002
+
+/* VIRTCHNL_OP_GET_STATS
+ * VF sends this message to request stats for the selected VSI. VF uses
+ * the virtchnl_queue_select struct to specify the VSI. The queue_id
+ * field is ignored by the PF.
+ *
+ * PF replies with struct virtchnl_eth_stats in an external buffer.
+ */
+
+struct virtchnl_eth_stats {
+	u64 rx_bytes;			/* received bytes */
+	u64 rx_unicast;			/* received unicast pkts */
+	u64 rx_multicast;		/* received multicast pkts */
+	u64 rx_broadcast;		/* received broadcast pkts */
+	u64 rx_discards;
+	u64 rx_unknown_protocol;
+	u64 tx_bytes;			/* transmitted bytes*/
+	u64 tx_unicast;			/* transmitted unicast pkts */
+	u64 tx_multicast;		/* transmitted multicast pkts */
+	u64 tx_broadcast;		/* transmitted broadcast pkts */
+	u64 tx_discards;
+	u64 tx_errors;
+};
+
+/* VIRTCHNL_OP_CONFIG_RSS_KEY
+ * VIRTCHNL_OP_CONFIG_RSS_LUT
+ * VF sends these messages to configure RSS. Only supported if both PF
+ * and VF drivers set the VIRTCHNL_VF_OFFLOAD_RSS_PF bit during
+ * configuration negotiation. If this is the case, then the RSS fields in
+ * the VF resource struct are valid.
+ * Both the key and LUT are initialized to 0 by the PF, meaning that
+ * RSS is effectively disabled until set up by the VF.
+ */
+struct virtchnl_rss_key {
+	u16 vsi_id;
+	u16 key_len;
+	u8 key[1];         /* RSS hash key, packed bytes */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key);
+
+struct virtchnl_rss_lut {
+	u16 vsi_id;
+	u16 lut_entries;
+	u8 lut[1];        /* RSS lookup table */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
+
+/* VIRTCHNL_OP_GET_RSS_HENA_CAPS
+ * VIRTCHNL_OP_SET_RSS_HENA
+ * VF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the PF sets these to all possible traffic types that the
+ * hardware supports. The VF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ */
+struct virtchnl_rss_hena {
+	u64 hena;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_rss_hena);
+
+/* VIRTCHNL_OP_EVENT
+ * PF sends this message to inform the VF driver of events that may affect it.
+ * No direct response is expected from the VF, though it may generate other
+ * messages in response to this one.
+ */
+enum virtchnl_event_codes {
+	VIRTCHNL_EVENT_UNKNOWN = 0,
+	VIRTCHNL_EVENT_LINK_CHANGE,
+	VIRTCHNL_EVENT_RESET_IMPENDING,
+	VIRTCHNL_EVENT_PF_DRIVER_CLOSE,
+};
+
+#define PF_EVENT_SEVERITY_INFO		0
+#define PF_EVENT_SEVERITY_ATTENTION	1
+#define PF_EVENT_SEVERITY_ACTION_REQUIRED	2
+#define PF_EVENT_SEVERITY_CERTAIN_DOOM	255
+
+struct virtchnl_pf_event {
+	enum virtchnl_event_codes event;
+	union {
+		struct {
+			enum virtchnl_link_speed link_speed;
+			bool link_status;
+		} link_event;
+	} event_data;
+
+	int severity;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_pf_event);
+
+#ifdef VIRTCHNL_IWARP
+
+/* VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP
+ * VF uses this message to request PF to map IWARP vectors to IWARP queues.
+ * The request for this originates from the VF IWARP driver through
+ * a client interface between VF LAN and VF IWARP driver.
+ * A vector could have an AEQ and CEQ attached to it although
+ * there is a single AEQ per VF IWARP instance in which case
+ * most vectors will have an INVALID_IDX for aeq and valid idx for ceq.
+ * There will never be a case where there will be multiple CEQs attached
+ * to a single vector.
+ * PF configures interrupt mapping and returns status.
+ */
+
+/* HW does not define a type value for AEQ; only for RX/TX and CEQ.
+ * In order for us to keep the interface simple, SW will define a
+ * unique type value for AEQ.
+ */
+#define QUEUE_TYPE_PE_AEQ  0x80
+#define QUEUE_INVALID_IDX  0xFFFF
+
+struct virtchnl_iwarp_qv_info {
+	u32 v_idx; /* msix_vector */
+	u16 ceq_idx;
+	u16 aeq_idx;
+	u8 itr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_iwarp_qv_info);
+
+struct virtchnl_iwarp_qvlist_info {
+	u32 num_vectors;
+	struct virtchnl_iwarp_qv_info qv_info[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_iwarp_qvlist_info);
+
+#endif
+
+/* VF reset states - these are written into the RSTAT register:
+ * VFGEN_RSTAT on the VF
+ * When the PF initiates a reset, it writes 0
+ * When the reset is complete, it writes 1
+ * When the PF detects that the VF has recovered, it writes 2
+ * VF checks this register periodically to determine if a reset has occurred,
+ * then polls it to know when the reset is complete.
+ * If either the PF or VF reads the register while the hardware
+ * is in a reset state, it will return DEADBEEF, which, when masked
+ * will result in 3.
+ */
+enum virtchnl_vfr_states {
+	VIRTCHNL_VFR_INPROGRESS = 0,
+	VIRTCHNL_VFR_COMPLETED,
+	VIRTCHNL_VFR_VFACTIVE,
+};
+
+/**
+ * virtchnl_vc_validate_vf_msg
+ * @ver: Virtchnl version info
+ * @v_opcode: Opcode for the message
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * validate msg format against struct for each opcode
+ */
+static inline int
+virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
+			    u8 *msg, u16 msglen)
+{
+	bool err_msg_format = false;
+	int valid_len = 0;
+
+	/* Validate message length. */
+	switch (v_opcode) {
+	case VIRTCHNL_OP_VERSION:
+		valid_len = sizeof(struct virtchnl_version_info);
+		break;
+	case VIRTCHNL_OP_RESET_VF:
+		break;
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		if (VF_IS_V11(ver))
+			valid_len = sizeof(u32);
+		break;
+	case VIRTCHNL_OP_CONFIG_TX_QUEUE:
+		valid_len = sizeof(struct virtchnl_txq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_RX_QUEUE:
+		valid_len = sizeof(struct virtchnl_rxq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+		valid_len = sizeof(struct virtchnl_vsi_queue_config_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_vsi_queue_config_info *vqc =
+			    (struct virtchnl_vsi_queue_config_info *)msg;
+			valid_len += (vqc->num_queue_pairs *
+				      sizeof(struct
+					     virtchnl_queue_pair_info));
+			if (vqc->num_queue_pairs == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+		valid_len = sizeof(struct virtchnl_irq_map_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_irq_map_info *vimi =
+			    (struct virtchnl_irq_map_info *)msg;
+			valid_len += (vimi->num_vectors *
+				      sizeof(struct virtchnl_vector_map));
+			if (vimi->num_vectors == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ENABLE_QUEUES:
+	case VIRTCHNL_OP_DISABLE_QUEUES:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+	case VIRTCHNL_OP_ADD_ETH_ADDR:
+	case VIRTCHNL_OP_DEL_ETH_ADDR:
+		valid_len = sizeof(struct virtchnl_ether_addr_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_ether_addr_list *veal =
+			    (struct virtchnl_ether_addr_list *)msg;
+			valid_len += veal->num_elements *
+			    sizeof(struct virtchnl_ether_addr);
+			if (veal->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ADD_VLAN:
+	case VIRTCHNL_OP_DEL_VLAN:
+		valid_len = sizeof(struct virtchnl_vlan_filter_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_vlan_filter_list *vfl =
+			    (struct virtchnl_vlan_filter_list *)msg;
+			valid_len += vfl->num_elements * sizeof(u16);
+			if (vfl->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+		valid_len = sizeof(struct virtchnl_promisc_info);
+		break;
+	case VIRTCHNL_OP_GET_STATS:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+#ifdef VIRTCHNL_IWARP
+	case VIRTCHNL_OP_IWARP:
+		/* These messages are opaque to us and will be validated in
+		 * the RDMA client code. We just need to check for nonzero
+		 * length. The firmware will enforce max length restrictions.
+		 */
+		if (msglen)
+			valid_len = msglen;
+		else
+			err_msg_format = true;
+		break;
+	case VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP:
+		break;
+	case VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP:
+		valid_len = sizeof(struct virtchnl_iwarp_qvlist_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_iwarp_qvlist_info *qv =
+				(struct virtchnl_iwarp_qvlist_info *)msg;
+			if (qv->num_vectors == 0) {
+				err_msg_format = true;
+				break;
+			}
+			valid_len += ((qv->num_vectors - 1) *
+				sizeof(struct virtchnl_iwarp_qv_info));
+		}
+		break;
+#endif
+	case VIRTCHNL_OP_CONFIG_RSS_KEY:
+		valid_len = sizeof(struct virtchnl_rss_key);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_key *vrk =
+				(struct virtchnl_rss_key *)msg;
+			valid_len += vrk->key_len - 1;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_LUT:
+		valid_len = sizeof(struct virtchnl_rss_lut);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_lut *vrl =
+				(struct virtchnl_rss_lut *)msg;
+			valid_len += vrl->lut_entries - 1;
+		}
+		break;
+	case VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+		break;
+	case VIRTCHNL_OP_SET_RSS_HENA:
+		valid_len = sizeof(struct virtchnl_rss_hena);
+		break;
+	case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+	case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+		break;
+	case VIRTCHNL_OP_REQUEST_QUEUES:
+		valid_len = sizeof(struct virtchnl_vf_res_request);
+		break;
+	/* These are always errors coming from the VF. */
+	case VIRTCHNL_OP_EVENT:
+	case VIRTCHNL_OP_UNKNOWN:
+	default:
+		return VIRTCHNL_ERR_PARAM;
+	}
+	/* few more checks */
+	if (err_msg_format || valid_len != msglen)
+		return VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH;
+
+	return 0;
+}
+#endif /* _VIRTCHNL_H_ */
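
The length checks in virtchnl_vc_validate_vf_msg() expect variable-length
messages to be sized as the base structure plus one entry per element. A
short sketch of how a caller could size and validate an address-list message
(illustrative only, not part of the patch; the function name is made up and
the u8/u16 typedefs are assumed from the osdep headers):

static int
example_validate_add_mac(struct virtchnl_version_info *ver,
			 struct virtchnl_ether_addr_list *list)
{
	/* The validator adds num_elements * sizeof(struct virtchnl_ether_addr)
	 * on top of the base structure (which already embeds one entry), so
	 * the message buffer must be sized the same way.
	 */
	u16 msglen = sizeof(struct virtchnl_ether_addr_list) +
		     list->num_elements * sizeof(struct virtchnl_ether_addr);

	/* Returns 0 on success, VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH if the
	 * length does not match, VIRTCHNL_ERR_PARAM for unknown opcodes.
	 */
	return virtchnl_vc_validate_vf_msg(ver, VIRTCHNL_OP_ADD_ETH_ADDR,
					   (u8 *)list, msglen);
}
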
-- 
1.9.3
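
As an aside on the VIRTCHNL_CHECK_STRUCT_LEN() macro used throughout the
header above, a hypothetical structure (not from the patch) shows how the
divide-by-zero trick turns a size mismatch into a compile error:

struct example_msg {
	u16 a;
	u16 b;
	u32 c;
};
VIRTCHNL_CHECK_STRUCT_LEN(8, example_msg);	/* builds: sizeof() == 8 */
/* VIRTCHNL_CHECK_STRUCT_LEN(12, example_msg) would not build, because
 * (12) / ((sizeof(struct example_msg) == 12) ? 1 : 0) divides by zero.
 */
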

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v5 02/14] net/avf: initialization of avf PMD
  2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 01/14] net/avf/base: add base code for avf PMD Wenzhuo Lu
@ 2018-01-08  5:13       ` Wenzhuo Lu
  2018-01-09 17:58         ` Ferruh Yigit
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 03/14] net/avf: enable queue and device Wenzhuo Lu
                         ` (12 subsequent siblings)
  14 siblings, 1 reply; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-08  5:13 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 config/common_base                      |   5 +
 drivers/net/Makefile                    |   1 +
 drivers/net/avf/Makefile                |  31 +++
 drivers/net/avf/avf.h                   | 187 ++++++++++++++
 drivers/net/avf/avf_ethdev.c            | 435 ++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c             | 304 ++++++++++++++++++++++
 drivers/net/avf/rte_pmd_avf_version.map |   4 +
 mk/rte.app.mk                           |   1 +
 8 files changed, 968 insertions(+)
 create mode 100644 drivers/net/avf/Makefile
 create mode 100644 drivers/net/avf/avf.h
 create mode 100644 drivers/net/avf/avf_ethdev.c
 create mode 100644 drivers/net/avf/avf_vchnl.c
 create mode 100644 drivers/net/avf/rte_pmd_avf_version.map

diff --git a/config/common_base b/config/common_base
index e74febe..f333209 100644
--- a/config/common_base
+++ b/config/common_base
@@ -226,6 +226,11 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
 CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 
 #
+# Compile burst-oriented AVF PMD driver
+#
+CONFIG_RTE_LIBRTE_AVF_PMD=y
+
+#
 # Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD
 #
 CONFIG_RTE_LIBRTE_MLX4_PMD=n
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 84b137f..c2fd7f5 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -10,6 +10,7 @@ endif
 
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += af_packet
 DIRS-$(CONFIG_RTE_LIBRTE_ARK_PMD) += ark
+DIRS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf
 DIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp
 DIRS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD) += bnx2x
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
new file mode 100644
index 0000000..fb520ea
--- /dev/null
+++ b/drivers/net/avf/Makefile
@@ -0,0 +1,31 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2017 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_avf.a
+
+CFLAGS += -O3
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
+LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_hash
+LDLIBS += -lrte_bus_pci
+
+EXPORT_MAP := rte_pmd_avf_version.map
+
+LIBABIVER := 1
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_adminq.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
new file mode 100644
index 0000000..4694cc5
--- /dev/null
+++ b/drivers/net/avf/avf.h
@@ -0,0 +1,187 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_ETHDEV_H_
+#define _AVF_ETHDEV_H_
+
+#include <rte_kvargs.h>
+
+#define AVF_AQ_LEN               32
+#define AVF_AQ_BUF_SZ            4096
+#define AVF_RESET_WAIT_CNT       50
+#define AVF_BUF_SIZE_MIN         1024
+#define AVF_FRAME_SIZE_MAX       9728
+#define AVF_QUEUE_BASE_ADDR_UNIT 128
+
+#define AVF_MAX_NUM_QUEUES       16
+/* Vlan table size */
+#define AVF_VLAN_TB_SIZE               (4096 / (CHAR_BIT * sizeof(uint32_t)))
+
+#define AVF_NUM_MACADDR_MAX      64
+
+#define AVF_DEFAULT_RX_PTHRESH      8
+#define AVF_DEFAULT_RX_HTHRESH      8
+#define AVF_DEFAULT_RX_WTHRESH      0
+
+#define AVF_DEFAULT_RX_FREE_THRESH  32
+
+#define AVF_DEFAULT_TX_PTHRESH      32
+#define AVF_DEFAULT_TX_HTHRESH      0
+#define AVF_DEFAULT_TX_WTHRESH      0
+
+#define AVF_DEFAULT_TX_FREE_THRESH  32
+#define AVF_DEFAULT_TX_RS_THRESH 32
+
+#define AVF_BASIC_OFFLOAD_CAPS  ( \
+	VF_BASE_MODE_OFFLOADS | \
+	VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | \
+	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
+
+#define AVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
+#define AVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
+
+/* Default queue interrupt throttling time in microseconds */
+#define AVF_ITR_INDEX_DEFAULT          0
+#define AVF_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */
+#define AVF_QUEUE_ITR_INTERVAL_MAX     8160 /* 8160 us */
+
+/* The overhead from MTU to max frame size.
+ * Considering QinQ packets, the VLAN tag needs to be counted twice.
+ */
+#define AVF_VLAN_TAG_SIZE               4
+#define AVF_ETH_OVERHEAD \
+	(ETHER_HDR_LEN + ETHER_CRC_LEN + AVF_VLAN_TAG_SIZE * 2)
+
+struct avf_adapter;
+struct avf_rx_queue;
+struct avf_tx_queue;
+
+/* Structure that defines a VSI, associated with an adapter. */
+struct avf_vsi {
+	struct avf_adapter *adapter; /* Backreference to associated adapter */
+	uint16_t vsi_id;
+	uint16_t nb_qps;         /* Number of queue pairs VSI can occupy */
+	uint16_t nb_used_qps;    /* Number of queue pairs VSI uses */
+	uint16_t max_macaddrs;   /* Maximum number of MAC addresses */
+	uint16_t base_vector;
+	uint16_t msix_intr;      /* The MSIX interrupt binds to VSI */
+};
+
+/* TODO: is it correct to assume the max number is 16? */
+#define AVF_MAX_MSIX_VECTORS   16
+
+/* Structure to store private data specific for VF instance. */
+struct avf_info {
+	uint16_t num_queue_pairs;
+	uint16_t max_pkt_len; /* Maximum packet length */
+	uint16_t mac_num;     /* Number of MAC addresses */
+	uint32_t vlan[AVF_VLAN_TB_SIZE]; /* VLAN bit map */
+	bool promisc_unicast_enabled;
+	bool promisc_multicast_enabled;
+
+	struct virtchnl_version_info virtchnl_version;
+	struct virtchnl_vf_resource *vf_res; /* VF resource */
+	struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
+
+	volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
+	uint32_t cmd_retval; /* return value of the cmd response from PF */
+	uint8_t *aq_resp; /* buffer to store the adminq response from PF */
+
+	/* Event from pf */
+	bool dev_closed;
+	bool link_up;
+	enum virtchnl_link_speed link_speed;
+
+	struct avf_vsi vsi;
+	bool vf_reset;
+	uint64_t flags;
+
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	uint16_t nb_msix;   /* number of MSI-X interrupts on Rx */
+	uint16_t msix_base; /* MSI-X vector base */
+	/* queue bitmask for each vector */
+	uint16_t rxq_map[AVF_MAX_MSIX_VECTORS];
+};
+
+#define AVF_MAX_PKT_TYPE 256
+
+/* Structure to store private data for each VF instance. */
+struct avf_adapter {
+	struct avf_hw hw;
+	struct rte_eth_dev *eth_dev;
+	struct avf_info vf;
+};
+
+/* AVF_DEV_PRIVATE_TO */
+#define AVF_DEV_PRIVATE_TO_ADAPTER(adapter) \
+	((struct avf_adapter *)adapter)
+#define AVF_DEV_PRIVATE_TO_VF(adapter) \
+	(&((struct avf_adapter *)adapter)->vf)
+#define AVF_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct avf_adapter *)adapter)->hw)
+
+/* AVF_VSI_TO */
+#define AVF_VSI_TO_HW(vsi) \
+	(&(((struct avf_vsi *)vsi)->adapter->hw))
+#define AVF_VSI_TO_VF(vsi) \
+	(&(((struct avf_vsi *)vsi)->adapter->vf))
+#define AVF_VSI_TO_ETH_DEV(vsi) \
+	(((struct avf_vsi *)vsi)->adapter->eth_dev)
+
+static inline void
+avf_init_adminq_parameter(struct avf_hw *hw)
+{
+	hw->aq.num_arq_entries = AVF_AQ_LEN;
+	hw->aq.num_asq_entries = AVF_AQ_LEN;
+	hw->aq.arq_buf_size = AVF_AQ_BUF_SZ;
+	hw->aq.asq_buf_size = AVF_AQ_BUF_SZ;
+}
+
+static inline uint16_t
+avf_calc_itr_interval(int16_t interval)
+{
+	if (interval < 0 || interval > AVF_QUEUE_ITR_INTERVAL_MAX)
+		interval = AVF_QUEUE_ITR_INTERVAL_DEFAULT;
+
+	/* Convert to hardware count, as writing each 1 represents 2 us */
+	return interval / 2;
+}
+
+/* structure used for sending and checking response of virtchnl ops */
+struct avf_cmd_info {
+	enum virtchnl_ops ops;
+	uint8_t *in_args;       /* buffer for sending */
+	uint32_t in_args_size;  /* buffer size for sending */
+	uint8_t *out_buffer;    /* buffer for response */
+	uint32_t out_size;      /* buffer size for response */
+};
+
+/* Clear the current command. Only call this after _atomic_set_cmd()
+ * has completed successfully.
+ */
+static inline void
+_clear_cmd(struct avf_info *vf)
+{
+	rte_wmb();
+	vf->pend_cmd = VIRTCHNL_OP_UNKNOWN;
+	vf->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
+}
+
+/* Check whether a cmd is pending. If none, set the new command. */
+static inline int
+_atomic_set_cmd(struct avf_info *vf, enum virtchnl_ops ops)
+{
+	int ret = rte_atomic32_cmpset(&vf->pend_cmd, VIRTCHNL_OP_UNKNOWN, ops);
+
+	if (!ret)
+		PMD_DRV_LOG(ERR, "There is incomplete cmd %d", vf->pend_cmd);
+
+	return !ret;
+}
+
+int avf_check_api_version(struct avf_adapter *adapter);
+int avf_get_vf_resource(struct avf_adapter *adapter);
+void avf_handle_virtchnl_msg(struct rte_eth_dev *dev);
+#endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
new file mode 100644
index 0000000..0ed6e1c
--- /dev/null
+++ b/drivers/net/avf/avf_ethdev.c
@@ -0,0 +1,435 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <sys/queue.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+
+#include <rte_interrupts.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_ethdev_pci.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_adminq_cmd.h"
+#include "base/avf_type.h"
+
+#include "avf.h"
+
+int avf_logtype_init;
+int avf_logtype_driver;
+static const struct rte_pci_id pci_id_avf_map[] = {
+	{ RTE_PCI_DEVICE(AVF_INTEL_VENDOR_ID, AVF_DEV_ID_ADAPTIVE_VF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static const struct eth_dev_ops avf_eth_dev_ops = {
+};
+
+static int
+avf_check_vf_reset_done(struct avf_hw *hw)
+{
+	int i, reset;
+
+	for (i = 0; i < AVF_RESET_WAIT_CNT; i++) {
+		reset = AVF_READ_REG(hw, AVFGEN_RSTAT) &
+			AVFGEN_RSTAT_VFR_STATE_MASK;
+		reset = reset >> AVFGEN_RSTAT_VFR_STATE_SHIFT;
+		if (reset == VIRTCHNL_VFR_VFACTIVE ||
+		    reset == VIRTCHNL_VFR_COMPLETED)
+			break;
+		rte_delay_ms(20);
+	}
+
+	if (i >= AVF_RESET_WAIT_CNT)
+		return -1;
+
+	return 0;
+}
+
+static int
+avf_init_vf(struct rte_eth_dev *dev)
+{
+	int i, err, bufsz;
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	err = avf_set_mac_type(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
+		goto err;
+	}
+
+	err = avf_check_vf_reset_done(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "VF is still resetting");
+		goto err;
+	}
+
+	avf_init_adminq_parameter(hw);
+	err = avf_init_adminq(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "init_adminq failed: %d", err);
+		goto err;
+	}
+
+	vf->aq_resp = rte_zmalloc("vf_aq_resp", AVF_AQ_BUF_SZ, 0);
+	if (!vf->aq_resp) {
+		PMD_INIT_LOG(ERR, "unable to allocate vf_aq_resp memory");
+		goto err_aq;
+	}
+	if (avf_check_api_version(adapter) != 0) {
+		PMD_INIT_LOG(ERR, "check_api version failed");
+		goto err_api;
+	}
+
+	bufsz = sizeof(struct virtchnl_vf_resource) +
+		(AVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource));
+	vf->vf_res = rte_zmalloc("vf_res", bufsz, 0);
+	if (!vf->vf_res) {
+		PMD_INIT_LOG(ERR, "unable to allocate vf_res memory");
+		goto err_api;
+	}
+	if (avf_get_vf_resource(adapter) != 0) {
+		PMD_INIT_LOG(ERR, "avf_get_vf_config failed");
+		goto err_alloc;
+	}
+	/* Allocate memory for RSS info */
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		vf->rss_key = rte_zmalloc("rss_key",
+					  vf->vf_res->rss_key_size, 0);
+		if (!vf->rss_key) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_key memory");
+			goto err_rss;
+		}
+		vf->rss_lut = rte_zmalloc("rss_lut",
+					  vf->vf_res->rss_lut_size, 0);
+		if (!vf->rss_lut) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_lut memory");
+			goto err_rss;
+		}
+	}
+	return 0;
+err_rss:
+	rte_free(vf->rss_key);
+	rte_free(vf->rss_lut);
+err_alloc:
+	rte_free(vf->vf_res);
+	vf->vsi_res = NULL;
+err_api:
+	rte_free(vf->aq_resp);
+err_aq:
+	avf_shutdown_adminq(hw);
+err:
+	return -1;
+}
+
+/* Enable default admin queue interrupt setting */
+static inline void
+avf_enable_irq0(struct avf_hw *hw)
+{
+	/* Enable admin queue interrupt trigger */
+	AVF_WRITE_REG(hw, AVFINT_ICR0_ENA1, AVFINT_ICR0_ENA1_ADMINQ_MASK);
+
+	AVF_WRITE_REG(hw, AVFINT_DYN_CTL01, AVFINT_DYN_CTL01_INTENA_MASK |
+					    AVFINT_DYN_CTL01_ITR_INDX_MASK);
+
+	AVF_WRITE_FLUSH(hw);
+}
+
+static inline void
+avf_disable_irq0(struct avf_hw *hw)
+{
+	/* Disable all interrupt types */
+	AVF_WRITE_REG(hw, AVFINT_ICR0_ENA1, 0);
+	AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+		      AVFINT_DYN_CTL01_ITR_INDX_MASK);
+	AVF_WRITE_FLUSH(hw);
+}
+
+static void
+avf_dev_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	avf_disable_irq0(hw);
+
+	avf_handle_virtchnl_msg(dev);
+
+	avf_enable_irq0(hw);
+}
+
+static int
+avf_dev_init(struct rte_eth_dev *eth_dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* assign ops func pointer */
+	eth_dev->dev_ops = &avf_eth_dev_ops;
+
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	hw->bus.bus_id = pci_dev->addr.bus;
+	hw->bus.device = pci_dev->addr.devid;
+	hw->bus.func = pci_dev->addr.function;
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->back = AVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+	adapter->eth_dev = eth_dev;
+
+	if (avf_init_vf(eth_dev) != 0) {
+		PMD_INIT_LOG(ERR, "Init vf failed");
+		return -1;
+	}
+
+	/* copy mac addr */
+	eth_dev->data->mac_addrs = rte_zmalloc(
+					"avf_mac",
+					ETHER_ADDR_LEN * AVF_NUM_MACADDR_MAX,
+					0);
+	if (!eth_dev->data->mac_addrs) {
+		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to"
+			     " store MAC addresses",
+			     ETHER_ADDR_LEN * AVF_NUM_MACADDR_MAX);
+		return -ENOMEM;
+	}
+	/* If the MAC address is not configured by host,
+	 * generate a random one.
+	 */
+	if (!is_valid_assigned_ether_addr((struct ether_addr *)hw->mac.addr))
+		eth_random_addr(hw->mac.addr);
+	ether_addr_copy((struct ether_addr *)hw->mac.addr,
+			&eth_dev->data->mac_addrs[0]);
+
+	/* register callback func to eal lib */
+	rte_intr_callback_register(&pci_dev->intr_handle,
+				   avf_dev_interrupt_handler,
+				   (void *)eth_dev);
+
+	/* enable uio intr after callback register */
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	/* configure and enable device interrupt */
+	avf_enable_irq0(hw);
+
+	return 0;
+}
+
+static void
+avf_dev_close(struct rte_eth_dev *dev)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+
+	avf_shutdown_adminq(hw);
+	/* disable uio intr before callback unregister */
+	rte_intr_disable(intr_handle);
+
+	/* unregister callback func from eal lib */
+	rte_intr_callback_unregister(intr_handle,
+				     avf_dev_interrupt_handler, dev);
+	avf_disable_irq0(hw);
+}
+
+static int
+avf_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+	if (hw->adapter_stopped == 0)
+		avf_dev_close(dev);
+
+	rte_free(vf->vf_res);
+	vf->vsi_res = NULL;
+	vf->vf_res = NULL;
+
+	rte_free(vf->aq_resp);
+	vf->aq_resp = NULL;
+
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
+	if (vf->rss_lut) {
+		rte_free(vf->rss_lut);
+		vf->rss_lut = NULL;
+	}
+	if (vf->rss_key) {
+		rte_free(vf->rss_key);
+		vf->rss_key = NULL;
+	}
+
+	return 0;
+}
+
+static int eth_avf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+			     struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+		sizeof(struct avf_adapter), avf_dev_init);
+}
+
+static int eth_avf_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, avf_dev_uninit);
+}
+
+/* Adaptive virtual function driver struct */
+static struct rte_pci_driver rte_avf_pmd = {
+	.id_table = pci_id_avf_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA,
+	.probe = eth_avf_pci_probe,
+	.remove = eth_avf_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_avf, rte_avf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_avf, pci_id_avf_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_avf, "* igb_uio | vfio-pci");
+RTE_INIT(avf_init_log);
+static void
+avf_init_log(void)
+{
+	avf_logtype_init = rte_log_register("pmd.avf.init");
+	if (avf_logtype_init >= 0)
+		rte_log_set_level(avf_logtype_init, RTE_LOG_NOTICE);
+	avf_logtype_driver = rte_log_register("pmd.avf.driver");
+	if (avf_logtype_driver >= 0)
+		rte_log_set_level(avf_logtype_driver, RTE_LOG_NOTICE);
+}
+
+/* memory func for base code */
+enum avf_status_code
+avf_allocate_dma_mem_d(__rte_unused struct avf_hw *hw,
+		       struct avf_dma_mem *mem,
+		       u64 size,
+		       u32 alignment)
+{
+	const struct rte_memzone *mz = NULL;
+	char z_name[RTE_MEMZONE_NAMESIZE];
+
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	snprintf(z_name, sizeof(z_name), "avf_dma_%"PRIu64, rte_rand());
+	mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY, 0,
+					 alignment, RTE_PGSIZE_2M);
+	if (!mz)
+		return AVF_ERR_NO_MEMORY;
+
+	mem->size = size;
+	mem->va = mz->addr;
+	mem->pa = mz->phys_addr;
+	mem->zone = (const void *)mz;
+	PMD_DRV_LOG(DEBUG,
+		    "memzone %s allocated with physical address: %"PRIu64,
+		    mz->name, mem->pa);
+
+	return AVF_SUCCESS;
+}
+
+enum avf_status_code
+avf_free_dma_mem_d(__rte_unused struct avf_hw *hw,
+		   struct avf_dma_mem *mem)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	PMD_DRV_LOG(DEBUG,
+		    "memzone %s to be freed with physical address: %"PRIu64,
+		    ((const struct rte_memzone *)mem->zone)->name, mem->pa);
+	rte_memzone_free((const struct rte_memzone *)mem->zone);
+	mem->zone = NULL;
+	mem->va = NULL;
+	mem->pa = (u64)0;
+
+	return AVF_SUCCESS;
+}
+
+enum avf_status_code
+avf_allocate_virt_mem_d(__rte_unused struct avf_hw *hw,
+			struct avf_virt_mem *mem,
+			u32 size)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	mem->size = size;
+	mem->va = rte_zmalloc("avf", size, 0);
+
+	if (mem->va)
+		return AVF_SUCCESS;
+	else
+		return AVF_ERR_NO_MEMORY;
+}
+
+enum avf_status_code
+avf_free_virt_mem_d(__rte_unused struct avf_hw *hw,
+		    struct avf_virt_mem *mem)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	rte_free(mem->va);
+	mem->va = NULL;
+
+	return AVF_SUCCESS;
+}
+
+/* spinlock func for base code */
+void
+avf_init_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_init(&sp->spinlock);
+}
+
+void
+avf_acquire_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_lock(&sp->spinlock);
+}
+
+void
+avf_release_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_unlock(&sp->spinlock);
+}
+
+void
+avf_destroy_spinlock_d(__rte_unused struct avf_spinlock *sp)
+{
+}
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
new file mode 100644
index 0000000..ebbee31
--- /dev/null
+++ b/drivers/net/avf/avf_vchnl.c
@@ -0,0 +1,304 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+
+#include <rte_debug.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_dev.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_adminq_cmd.h"
+#include "base/avf_type.h"
+
+#include "avf.h"
+
+#define MAX_TRY_TIMES 200
+#define ASQ_DELAY_MS  10
+
+/* Read data from the admin queue to get a message from the PF driver */
+static enum avf_status_code
+avf_read_msg_from_pf(struct avf_adapter *adapter, uint16_t buf_len,
+		     uint8_t *buf)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_arq_event_info event;
+	enum virtchnl_ops opcode;
+	int ret;
+
+	event.buf_len = buf_len;
+	event.msg_buf = buf;
+	ret = avf_clean_arq_element(hw, &event, NULL);
+	/* Can't read any msg from adminQ */
+	if (ret) {
+		PMD_DRV_LOG(DEBUG, "Can't read msg from AQ");
+		return ret;
+	}
+
+	opcode = (enum virtchnl_ops)rte_le_to_cpu_32(event.desc.cookie_high);
+	vf->cmd_retval = (enum virtchnl_status_code)rte_le_to_cpu_32(
+			event.desc.cookie_low);
+
+	PMD_DRV_LOG(DEBUG, "AQ from pf carries opcode %u, retval %d",
+		    opcode, vf->cmd_retval);
+
+	if (opcode != vf->pend_cmd)
+		PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
+			    vf->pend_cmd, opcode);
+
+	return AVF_SUCCESS;
+}
+
+static int
+avf_execute_vf_cmd(struct avf_adapter *adapter, struct avf_cmd_info *args)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_arq_event_info event_info;
+	enum avf_status_code ret;
+	int err = 0;
+	int i = 0;
+
+	if (_atomic_set_cmd(vf, args->ops))
+		return -1;
+
+	ret = avf_aq_send_msg_to_pf(hw, args->ops, AVF_SUCCESS,
+				    args->in_args, args->in_args_size, NULL);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops);
+		_clear_cmd(vf);
+		return err;
+	}
+
+	switch (args->ops) {
+	case VIRTCHNL_OP_RESET_VF:
+		/* no need to wait for response */
+		_clear_cmd(vf);
+		break;
+	case VIRTCHNL_OP_VERSION:
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		/* for init virtchnl ops, need to poll the response */
+		do {
+			ret = avf_read_msg_from_pf(adapter, args->out_size,
+						   args->out_buffer);
+			if (ret == AVF_SUCCESS)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+		} while (i++ < MAX_TRY_TIMES);
+		if (i >= MAX_TRY_TIMES ||
+		    vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -1;
+			PMD_DRV_LOG(ERR, "No response or return failure (%d)"
+				    " for cmd %d", vf->cmd_retval, args->ops);
+		}
+		_clear_cmd(vf);
+		break;
+
+	default:
+		/* For other virtchnl ops at runtime,
+		 * wait for the command-done flag.
+		 */
+		do {
+			if (vf->pend_cmd == VIRTCHNL_OP_UNKNOWN)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+			/* If no msg was read or a system event was read, keep polling */
+		} while (i++ < MAX_TRY_TIMES);
+		/* If no response is received, clear the command */
+		if (i >= MAX_TRY_TIMES  ||
+		    vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -1;
+			PMD_DRV_LOG(ERR, "No response or return failure (%d)"
+				    " for cmd %d", vf->cmd_retval, args->ops);
+			_clear_cmd(vf);
+		}
+		break;
+	}
+
+	return err;
+}
+
+void
+avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_arq_event_info info;
+	uint16_t pending, aq_opc;
+	enum virtchnl_ops msg_opc;
+	enum avf_status_code msg_ret;
+	int ret;
+
+	info.buf_len = AVF_AQ_BUF_SZ;
+	if (!vf->aq_resp) {
+		PMD_DRV_LOG(ERR, "Buffer for adminq resp should not be NULL");
+		return;
+	}
+	info.msg_buf = vf->aq_resp;
+
+	pending = 1;
+	while (pending) {
+		ret = avf_clean_arq_element(hw, &info, &pending);
+
+		if (ret != AVF_SUCCESS) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from AdminQ,"
+				    "ret: %d", ret);
+			break;
+		}
+		aq_opc = rte_le_to_cpu_16(info.desc.opcode);
+		/* For a message sent from PF to VF, the opcode is stored in
+		 * cookie_high of struct avf_aq_desc, while the return error
+		 * code is stored in cookie_low. Both are filled in by the
+		 * PF driver.
+		 */
+		msg_opc = (enum virtchnl_ops)rte_le_to_cpu_32(
+						  info.desc.cookie_high);
+		msg_ret = (enum avf_status_code)rte_le_to_cpu_32(
+						  info.desc.cookie_low);
+		switch (aq_opc) {
+		case avf_aqc_opc_send_msg_to_vf:
+			if (msg_opc == VIRTCHNL_OP_EVENT) {
+				/* TODO */
+			} else {
+				/* the message read matches the pending command */
+				if (msg_opc == vf->pend_cmd) {
+					vf->cmd_retval = msg_ret;
+					/* prevent compiler reordering */
+					rte_compiler_barrier();
+					_clear_cmd(vf);
+				} else
+					PMD_DRV_LOG(ERR, "command mismatch,"
+						    "expect %u, get %u",
+						    vf->pend_cmd, msg_opc);
+				PMD_DRV_LOG(DEBUG,
+					    "adminq response is received,"
+					    " opcode = %d", msg_opc);
+			}
+			break;
+		default:
+			PMD_DRV_LOG(ERR, "Request %u is not supported yet",
+				    aq_opc);
+			break;
+		}
+	}
+}
+
+#define VIRTCHNL_VERSION_MAJOR_START 1
+#define VIRTCHNL_VERSION_MINOR_START 1
+
+/* Check API version with sync wait until version read from admin queue */
+int
+avf_check_api_version(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_version_info version, *pver;
+	struct avf_cmd_info args;
+	int err;
+
+	version.major = VIRTCHNL_VERSION_MAJOR;
+	version.minor = VIRTCHNL_VERSION_MINOR;
+
+	args.ops = VIRTCHNL_OP_VERSION;
+	args.in_args = (uint8_t *)&version;
+	args.in_args_size = sizeof(version);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Fail to execute command of OP_VERSION");
+		return err;
+	}
+
+	pver = (struct virtchnl_version_info *)args.out_buffer;
+	vf->virtchnl_version = *pver;
+
+	if (vf->virtchnl_version.major < VIRTCHNL_VERSION_MAJOR_START ||
+	    (vf->virtchnl_version.major == VIRTCHNL_VERSION_MAJOR_START &&
+	     vf->virtchnl_version.minor < VIRTCHNL_VERSION_MINOR_START)) {
+		PMD_INIT_LOG(ERR, "VIRTCHNL API version should not be lower"
+			     " than (%u.%u) to support Adapative VF",
+			     VIRTCHNL_VERSION_MAJOR_START,
+			     VIRTCHNL_VERSION_MAJOR_START);
+		return -1;
+	} else if (vf->virtchnl_version.major > VIRTCHNL_VERSION_MAJOR ||
+		   (vf->virtchnl_version.major == VIRTCHNL_VERSION_MAJOR &&
+		    vf->virtchnl_version.minor > VIRTCHNL_VERSION_MINOR)) {
+		PMD_INIT_LOG(ERR, "PF/VF API version mismatch:(%u.%u)-(%u.%u)",
+			     vf->virtchnl_version.major,
+			     vf->virtchnl_version.minor,
+			     VIRTCHNL_VERSION_MAJOR,
+			     VIRTCHNL_VERSION_MINOR);
+		return -1;
+	}
+
+	PMD_DRV_LOG(DEBUG, "Peer is supported PF host");
+	return 0;
+}
+
+int
+avf_get_vf_resource(struct avf_adapter *adapter)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	uint32_t caps, len;
+	int err, i;
+
+	args.ops = VIRTCHNL_OP_GET_VF_RESOURCES;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	/* TODO: basic offload capabilities, need to
+	 * add advanced/optional offload capabilities
+	 */
+
+	caps = AVF_BASIC_OFFLOAD_CAPS;
+
+	args.in_args = (uint8_t *)&caps;
+	args.in_args_size = sizeof(caps);
+
+	err = avf_execute_vf_cmd(adapter, &args);
+
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+				 "OP_GET_VF_RESOURCE");
+		return -1;
+	}
+
+	len = sizeof(struct virtchnl_vf_resource) +
+		      AVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource);
+
+	rte_memcpy(vf->vf_res, args.out_buffer,
+		   RTE_MIN(args.out_size, len));
+	/* parse VF config message back from PF */
+	avf_parse_hw_config(hw, vf->vf_res);
+	for (i = 0; i < vf->vf_res->num_vsis; i++) {
+		if (vf->vf_res->vsi_res[i].vsi_type == VIRTCHNL_VSI_SRIOV)
+			vf->vsi_res = &vf->vf_res->vsi_res[i];
+	}
+
+	if (!vf->vsi_res) {
+		PMD_INIT_LOG(ERR, "no LAN VSI found");
+		return -1;
+	}
+
+	vf->vsi.vsi_id = vf->vsi_res->vsi_id;
+	vf->vsi.nb_qps = vf->vsi_res->num_queue_pairs;
+	vf->vsi.adapter = adapter;
+
+	return 0;
+}
diff --git a/drivers/net/avf/rte_pmd_avf_version.map b/drivers/net/avf/rte_pmd_avf_version.map
new file mode 100644
index 0000000..179140f
--- /dev/null
+++ b/drivers/net/avf/rte_pmd_avf_version.map
@@ -0,0 +1,4 @@
+DPDK_18.02 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 6a6a745..78f23c5 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -119,6 +119,7 @@ _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK)  += -lrte_mempool_stack
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ARK_PMD)        += -lrte_pmd_ark
+_LDLIBS-$(CONFIG_RTE_LIBRTE_AVF_PMD)        += -lrte_pmd_avf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_AVP_PMD)        += -lrte_pmd_avp
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v5 03/14] net/avf: enable queue and device
  2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 01/14] net/avf/base: add base code for avf PMD Wenzhuo Lu
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 02/14] net/avf: initialization of " Wenzhuo Lu
@ 2018-01-08  5:13       ` Wenzhuo Lu
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 04/14] net/avf: enable basic Rx Tx func Wenzhuo Lu
                         ` (11 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-08  5:13 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

enable device and queue setup ops (a usage sketch follows the list):

 - dev_configure
 - dev_start
 - dev_stop
 - dev_close
 - dev_infos_get
 - rx_queue_start
 - rx_queue_stop
 - tx_queue_start
 - tx_queue_stop
 - rx_queue_setup
 - rx_queue_release
 - tx_queue_setup
 - tx_queue_release
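
A minimal usage sketch (not part of this patch) of how an application
might exercise these ops through the generic ethdev API; port id 0, one
queue pair, 512 descriptors and the "mbuf_pool" mempool are assumptions
for illustration only:

  #include <rte_ethdev.h>
  #include <rte_lcore.h>
  #include <rte_mempool.h>

  static int
  setup_port(uint16_t port_id, struct rte_mempool *mbuf_pool)
  {
          struct rte_eth_conf conf = {
                  .rxmode = { .mq_mode = ETH_MQ_RX_NONE },
          };
          int ret;

          /* lands in avf_dev_configure() */
          ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
          if (ret < 0)
                  return ret;

          /* lands in avf_dev_rx_queue_setup()/avf_dev_tx_queue_setup() */
          ret = rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
                                       NULL, mbuf_pool);
          if (ret < 0)
                  return ret;
          ret = rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(),
                                       NULL);
          if (ret < 0)
                  return ret;

          /* lands in avf_dev_start(), which starts all non-deferred queues */
          return rte_eth_dev_start(port_id);
  }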

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/avf/Makefile     |   1 +
 drivers/net/avf/avf.h        |  18 ++
 drivers/net/avf/avf_ethdev.c | 366 +++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.c   | 616 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h   | 160 +++++++++++
 drivers/net/avf/avf_vchnl.c  | 359 ++++++++++++++++++++++++-
 6 files changed, 1518 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/avf/avf_rxtx.c
 create mode 100644 drivers/net/avf/avf_rxtx.h

diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index fb520ea..f4f7414 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -27,5 +27,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_rxtx.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 4694cc5..22886d4 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -38,6 +38,13 @@
 	VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | \
 	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
 
+#define AVF_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 |         \
+	ETH_RSS_NONFRAG_IPV4_TCP |  \
+	ETH_RSS_NONFRAG_IPV4_UDP |  \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_NONFRAG_IPV4_OTHER)
+
 #define AVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
 #define AVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
 
@@ -184,4 +191,15 @@ struct avf_cmd_info {
 int avf_check_api_version(struct avf_adapter *adapter);
 int avf_get_vf_resource(struct avf_adapter *adapter);
 void avf_handle_virtchnl_msg(struct rte_eth_dev *dev);
+int avf_enable_vlan_strip(struct avf_adapter *adapter);
+int avf_disable_vlan_strip(struct avf_adapter *adapter);
+int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
+		     bool rx, bool on);
+int avf_enable_queues(struct avf_adapter *adapter);
+int avf_disable_queues(struct avf_adapter *adapter);
+int avf_configure_rss_lut(struct avf_adapter *adapter);
+int avf_configure_rss_key(struct avf_adapter *adapter);
+int avf_configure_queues(struct avf_adapter *adapter);
+int avf_config_irq_map(struct avf_adapter *adapter);
+void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 0ed6e1c..c53f00e 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -31,6 +31,14 @@
 #include "base/avf_type.h"
 
 #include "avf.h"
+#include "avf_rxtx.h"
+
+static int avf_dev_configure(struct rte_eth_dev *dev);
+static int avf_dev_start(struct rte_eth_dev *dev);
+static void avf_dev_stop(struct rte_eth_dev *dev);
+static void avf_dev_close(struct rte_eth_dev *dev);
+static void avf_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -40,9 +48,366 @@
 };
 
 static const struct eth_dev_ops avf_eth_dev_ops = {
+	.dev_configure              = avf_dev_configure,
+	.dev_start                  = avf_dev_start,
+	.dev_stop                   = avf_dev_stop,
+	.dev_close                  = avf_dev_close,
+	.dev_infos_get              = avf_dev_info_get,
+	.rx_queue_start             = avf_dev_rx_queue_start,
+	.rx_queue_stop              = avf_dev_rx_queue_stop,
+	.tx_queue_start             = avf_dev_tx_queue_start,
+	.tx_queue_stop              = avf_dev_tx_queue_stop,
+	.rx_queue_setup             = avf_dev_rx_queue_setup,
+	.rx_queue_release           = avf_dev_rx_queue_release,
+	.tx_queue_setup             = avf_dev_tx_queue_setup,
+	.tx_queue_release           = avf_dev_tx_queue_release,
 };
 
 static int
+avf_dev_configure(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(ad);
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+
+	/* Vlan stripping setting */
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) {
+		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			avf_enable_vlan_strip(ad);
+		else
+			avf_disable_vlan_strip(ad);
+	}
+	return 0;
+}
+
+static int
+avf_init_rss(struct avf_adapter *adapter)
+{
+	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_eth_rss_conf *rss_conf;
+	uint8_t i, j, nb_q;
+	int ret;
+
+	rss_conf = &adapter->eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = RTE_MIN(adapter->eth_dev->data->nb_rx_queues,
+		       AVF_MAX_NUM_QUEUES);
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+	if (adapter->eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
+		/* set all lut items to default queue */
+		for (i = 0; i < vf->vf_res->rss_lut_size; i++)
+			vf->rss_lut[i] = 0;
+		ret = avf_configure_rss_lut(adapter);
+		return ret;
+	}
+
+	/* In AVF, RSS enablement is controlled by the PF driver. It cannot
+	 * be set based on rss_conf->rss_hf.
+	 */
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key) {
+		/* Generate a random default hash key */
+		for (i = 0; i < vf->vf_res->rss_key_size; i++)
+			vf->rss_key[i] = (uint8_t)rte_rand();
+	} else
+		rte_memcpy(vf->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   vf->vf_res->rss_key_size));
+
+	/* init RSS LUT table */
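+	/* e.g. with nb_q = 4 and rss_lut_size = 64 (illustrative values)
+	 * the LUT is filled round-robin: 0, 1, 2, 3, 0, 1, 2, 3, ...
+	 */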
+	for (i = 0; i < vf->vf_res->rss_lut_size; i++, j++) {
+		if (j >= nb_q)
+			j = 0;
+		vf->rss_lut[i] = j;
+	}
+	/* send virtchnl ops to configure RSS */
+	ret = avf_configure_rss_lut(adapter);
+	if (ret)
+		return ret;
+	ret = avf_configure_rss_key(adapter);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int
+avf_init_rxq(struct rte_eth_dev *dev, struct avf_rx_queue *rxq)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_dev_data *dev_data = dev->data;
+	uint16_t buf_size, max_pkt_len, len;
+
+	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+
+	/* Calculate the maximum packet length allowed */
+	len = rxq->rx_buf_len * AVF_MAX_CHAINED_RX_BUFFERS;
+	max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	/* Check if the jumbo frame and maximum packet length are set
+	 * correctly.
+	 */
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (max_pkt_len <= ETHER_MAX_LEN ||
+		    max_pkt_len > AVF_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is enabled",
+				    (uint32_t)ETHER_MAX_LEN,
+				    (uint32_t)AVF_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (max_pkt_len < ETHER_MIN_LEN ||
+		    max_pkt_len > ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)ETHER_MIN_LEN,
+				    (uint32_t)ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	    (rxq->max_pkt_len + 2 * AVF_VLAN_TAG_SIZE) > buf_size) {
+		dev_data->scattered_rx = 1;
+	}
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	AVF_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
+static int
+avf_init_queues(struct rte_eth_dev *dev)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_rx_queue **rxq =
+		(struct avf_rx_queue **)dev->data->rx_queues;
+	struct avf_tx_queue **txq =
+		(struct avf_tx_queue **)dev->data->tx_queues;
+	int i, ret = AVF_SUCCESS;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!rxq[i] || !rxq[i]->q_set)
+			continue;
+		ret = avf_init_rxq(dev, rxq[i]);
+		if (ret != AVF_SUCCESS)
+			break;
+	}
+	/* TODO: set rx/tx function to vector/scatter/single-segment
+	 * according to parameters
+	 */
+	return ret;
+}
+
+static int
+avf_start_queues(struct rte_eth_dev *dev)
+{
+	struct avf_rx_queue *rxq;
+	struct avf_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq->tx_deferred_start)
+			continue;
+		if (avf_dev_tx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
+			return -1;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq->rx_deferred_start)
+			continue;
+		if (avf_dev_rx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+avf_dev_start(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
+	uint16_t interval;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	hw->adapter_stopped = 0;
+
+	vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+	vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
+				      dev->data->nb_tx_queues);
+
+	/* TODO: Rx interrupt */
+
+	if (avf_init_queues(dev) != 0) {
+		PMD_DRV_LOG(ERR, "failed to do Queue init");
+		return -1;
+	}
+
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		if (avf_init_rss(adapter) != 0) {
+			PMD_DRV_LOG(ERR, "configure rss failed");
+			goto err_rss;
+		}
+	}
+
+	if (avf_configure_queues(adapter) != 0) {
+		PMD_DRV_LOG(ERR, "configure queues failed");
+		goto err_queue;
+	}
+
+	/* Map interrupt for writeback */
+	vf->nb_msix = 1;
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+		/* If WB_ON_ITR is supported, enable it */
+		vf->msix_base = AVF_RX_VEC_START;
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
+			      AVFINT_DYN_CTLN1_ITR_INDX_MASK |
+			      AVFINT_DYN_CTLN1_WB_ON_ITR_MASK);
+	} else {
+		/* If the WB_ON_ITR offload flag is not set, an interrupt is
+		 * needed to trigger descriptor write-back.
+		 */
+		vf->msix_base = AVF_MISC_VEC_ID;
+
+		/* set ITR to max */
+		interval = avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+			      AVFINT_DYN_CTL01_INTENA_MASK |
+			      (AVF_ITR_INDEX_DEFAULT <<
+			       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+			      (interval << AVFINT_DYN_CTL01_INTERVAL_SHIFT));
+	}
+	AVF_WRITE_FLUSH(hw);
+	/* map all queues to the same interrupt */
+	for (i = 0; i < dev->data->nb_rx_queues; i++)
+		vf->rxq_map[0] |= 1 << i;
+	if (avf_config_irq_map(adapter)) {
+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+		goto err_queue;
+	}
+
+	/* Set all mac addrs */
+	avf_add_del_all_mac_addr(adapter, TRUE);
+
+	if (avf_start_queues(dev) != 0) {
+		PMD_DRV_LOG(ERR, "enable queues failed");
+		goto err_mac;
+	}
+
+	/* TODO: enable interrupt for RX interrupt */
+	return 0;
+
+err_mac:
+	avf_add_del_all_mac_addr(adapter, FALSE);
+err_queue:
+err_rss:
+	return -1;
+}
+
+static void
+avf_dev_stop(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (hw->adapter_stopped == 1)
+		return;
+
+	avf_stop_queues(dev);
+
+	/* TODO: Disable the interrupt for Rx */
+
+	/* TODO: Rx interrupt vector mapping free */
+
+	/* remove all mac addrs */
+	avf_add_del_all_mac_addr(adapter, FALSE);
+	hw->adapter_stopped = 1;
+}
+
+static void
+avf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	memset(dev_info, 0, sizeof(*dev_info));
+	dev_info->pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	dev_info->max_rx_queues = vf->vsi_res->num_queue_pairs;
+	dev_info->max_tx_queues = vf->vsi_res->num_queue_pairs;
+	dev_info->min_rx_bufsize = AVF_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = AVF_FRAME_SIZE_MAX;
+	dev_info->hash_key_size = vf->vf_res->rss_key_size;
+	dev_info->reta_size = vf->vf_res->rss_lut_size;
+	dev_info->flow_type_rss_offloads = AVF_RSS_OFFLOAD_ALL;
+	dev_info->max_mac_addrs = AVF_NUM_MACADDR_MAX;
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = AVF_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = AVF_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = AVF_DEFAULT_TX_RS_THRESH,
+		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
+				ETH_TXQ_FLAGS_NOOFFLOADS,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = AVF_MAX_RING_DESC,
+		.nb_min = AVF_MIN_RING_DESC,
+		.nb_align = AVF_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = AVF_MAX_RING_DESC,
+		.nb_min = AVF_MIN_RING_DESC,
+		.nb_align = AVF_ALIGN_RING_DESC,
+	};
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
@@ -250,6 +615,7 @@
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
+	avf_dev_stop(dev);
 	avf_shutdown_adminq(hw);
 	/* disable uio intr before callback unregister */
 	rte_intr_disable(intr_handle);
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
new file mode 100644
index 0000000..2d4fb4c
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx.c
@@ -0,0 +1,616 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+
+#include <rte_string_fns.h>
+#include <rte_memzone.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_tcp.h>
+#include <rte_sctp.h>
+#include <rte_udp.h>
+#include <rte_ip.h>
+#include <rte_net.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_type.h"
+#include "avf.h"
+#include "avf_rxtx.h"
+
+static inline int
+check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+{
+	/* The following constraints must be satisfied:
+	 *   thresh >= AVF_RX_MAX_BURST
+	 *   thresh < rxq->nb_rx_desc
+	 *   (rxq->nb_rx_desc % thresh) == 0
+	 */
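+	/* Worked example (illustrative values): nb_desc = 512 and
+	 * thresh = 32 satisfy all three constraints: 32 >= 32, 32 < 512
+	 * and 512 % 32 == 0.
+	 */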
+	if (thresh < AVF_RX_MAX_BURST ||
+	    thresh >= nb_desc ||
+	    (nb_desc % thresh != 0)) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u, "
+			     "greater than or equal to %u, "
+			     "and a divisor of %u",
+			     thresh, nb_desc, AVF_RX_MAX_BURST, nb_desc);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static inline int
+check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+		uint16_t tx_free_thresh)
+{
+	/* TX descriptors will have their RS bit set after tx_rs_thresh
+	 * descriptors have been used. The TX descriptor ring will be cleaned
+	 * after tx_free_thresh descriptors are used or if the number of
+	 * descriptors required to transmit a packet is greater than the
+	 * number of free TX descriptors.
+	 *
+	 * The following constraints must be satisfied:
+	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
+	 *  - tx_free_thresh must be less than the size of the ring minus 3.
+	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+	 *  - tx_rs_thresh must be a divisor of the ring size.
+	 *
+	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+	 * race condition, hence the maximum threshold constraints. When set
+	 * to zero use default values.
+	 */
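+	/* Worked example (illustrative values): nb_desc = 1024,
+	 * tx_rs_thresh = 32 and tx_free_thresh = 32 satisfy all four:
+	 * 32 < 1022, 32 < 1021, 32 <= 32 and 1024 % 32 == 0.
+	 */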
+	if (tx_rs_thresh >= (nb_desc - 2)) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
+			     "number of TX descriptors (%u) minus 2",
+			     tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be less than the "
+			     "number of TX descriptors (%u) minus 3.",
+			     tx_free_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > tx_free_thresh) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
+			     "equal to tx_free_thresh (%u).",
+			     tx_rs_thresh, tx_free_thresh);
+		return -EINVAL;
+	}
+	if ((nb_desc % tx_rs_thresh) != 0) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
+			     "number of TX descriptors (%u).",
+			     tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline void
+reset_rx_queue(struct avf_rx_queue *rxq)
+{
+	uint16_t len, i;
+
+	if (!rxq)
+		return;
+
+	len = rxq->nb_rx_desc + AVF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(union avf_rx_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < AVF_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+static inline void
+reset_tx_queue(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *txe;
+	uint16_t i, prev, size;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct avf_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i].cmd_type_offset_bsz =
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf =  NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_used = 0;
+
+	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+	txq->nb_free = txq->nb_tx_desc - 1;
+
+	txq->next_dd = txq->rs_thresh - 1;
+	txq->next_rs = txq->rs_thresh - 1;
+}
+
+static int
+alloc_rxq_mbufs(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &rxq->rx_ring[i];
+		rxd->read.pkt_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
+#ifndef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+		rxd->read.rsvd1 = 0;
+		rxd->read.rsvd2 = 0;
+#endif
+
+		rxq->sw_ring[i] = mbuf;
+	}
+
+	return 0;
+}
+
+static inline void
+release_rxq_mbufs(struct avf_rx_queue *rxq)
+{
+	struct rte_mbuf *mbuf;
+	uint16_t i;
+
+	if (!rxq->sw_ring)
+		return;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i]) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+			rxq->sw_ring[i] = NULL;
+		}
+	}
+}
+
+static inline void
+release_txq_mbufs(struct avf_tx_queue *txq)
+{
+	uint16_t i;
+
+	if (!txq || !txq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		if (txq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+	}
+}
+
+int
+avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		       uint16_t nb_desc, unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *mp)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+	uint16_t len, i;
+	uint16_t rx_free_thresh;
+	uint16_t base, bsf, tc_mapping;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (nb_desc % AVF_ALIGN_RING_DESC != 0 ||
+	    nb_desc > AVF_MAX_RING_DESC ||
+	    nb_desc < AVF_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/* Check free threshold */
+	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+			 AVF_DEFAULT_RX_FREE_THRESH :
+			 rx_conf->rx_free_thresh;
+	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx]) {
+		avf_dev_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket("avf rxq",
+				 sizeof(struct avf_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!rxq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "rx queue data structure");
+		return -ENOMEM;
+	}
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->queue_id = queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->crc_len = 0; /* crc stripping by default */
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->rx_hdr_len = 0;
+
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = RTE_ALIGN(len, (1 << AVF_RXQ_CTX_DBUFF_SHIFT));
+
+	/* Allocate the software ring. */
+	len = nb_desc + AVF_RX_MAX_BURST;
+	rxq->sw_ring =
+		rte_zmalloc_socket("avf rx sw ring",
+				   sizeof(struct rte_mbuf *) * len,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!rxq->sw_ring) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+
+	/* Allocate the maximum number of RX ring hardware descriptors, plus
+	 * a little more to support bulk allocation.
+	 */
+	len = AVF_MAX_RING_DESC + AVF_RX_MAX_BURST;
+	ring_size = RTE_ALIGN(len * sizeof(union avf_rx_desc),
+			      AVF_DMA_MEM_ALIGN);
+	mz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
+				      ring_size, AVF_RING_BASE_ALIGN,
+				      socket_id);
+	if (!mz) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
+		rte_free(rxq->sw_ring);
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
+	rxq->rx_ring_phys_addr = mz->iova;
+	rxq->rx_ring = (union avf_rx_desc *)mz->addr;
+
+	rxq->mz = mz;
+	reset_rx_queue(rxq);
+	rxq->q_set = TRUE;
+	dev->data->rx_queues[queue_idx] = rxq;
+	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
+
+	return 0;
+}
+
+int
+avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	uint16_t i, base, bsf, tc_mapping;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (nb_desc % AVF_ALIGN_RING_DESC != 0 ||
+	    nb_desc > AVF_MAX_RING_DESC ||
+	    nb_desc < AVF_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is "
+			    "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh) ?
+		tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH);
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+		tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
+	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx]) {
+		avf_dev_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("avf txq",
+				 sizeof(struct avf_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!txq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "tx queue structure");
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->rs_thresh = tx_rs_thresh;
+	txq->free_thresh = tx_free_thresh;
+	txq->queue_id = queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->txq_flags = tx_conf->txq_flags;
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	/* Allocate software ring */
+	txq->sw_ring =
+		rte_zmalloc_socket("avf tx sw ring",
+				   sizeof(struct avf_tx_entry) * nb_desc,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!txq->sw_ring) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		rte_free(txq);
+		return -ENOMEM;
+	}
+
+	/* Allocate TX hardware ring descriptors. */
+	ring_size = sizeof(struct avf_tx_desc) * AVF_MAX_RING_DESC;
+	ring_size = RTE_ALIGN(ring_size, AVF_DMA_MEM_ALIGN);
+	mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+				      ring_size, AVF_RING_BASE_ALIGN,
+				      socket_id);
+	if (!mz) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+		rte_free(txq->sw_ring);
+		rte_free(txq);
+		return -ENOMEM;
+	}
+	txq->tx_ring_phys_addr = mz->iova;
+	txq->tx_ring = (struct avf_tx_desc *)mz->addr;
+
+	txq->mz = mz;
+	reset_tx_queue(txq);
+	txq->q_set = TRUE;
+	dev->data->tx_queues[queue_idx] = txq;
+	txq->qtx_tail = hw->hw_addr + AVF_QTX_TAIL1(queue_idx);
+
+	return 0;
+}
+
+int
+avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int err = 0;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	err = alloc_rxq_mbufs(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+		return err;
+	}
+
+	rte_wmb();
+
+	/* Init the RX tail register. */
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	AVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = avf_switch_queue(adapter, rx_queue_id, TRUE, TRUE);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	else
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+int
+avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int err = 0;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
+	AVF_PCI_REG_WRITE(txq->qtx_tail, 0);
+	AVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = avf_switch_queue(adapter, tx_queue_id, FALSE, TRUE);
+
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	else
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+int
+avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int err;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = avf_switch_queue(adapter, rx_queue_id, TRUE, FALSE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	release_rxq_mbufs(rxq);
+	reset_rx_queue(rxq);
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int err;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = avf_switch_queue(adapter, tx_queue_id, FALSE, FALSE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	release_txq_mbufs(txq);
+	reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+void
+avf_dev_rx_queue_release(void *rxq)
+{
+	struct avf_rx_queue *q = (struct avf_rx_queue *)rxq;
+
+	if (!q)
+		return;
+
+	release_rxq_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+avf_dev_tx_queue_release(void *txq)
+{
+	struct avf_tx_queue *q = (struct avf_tx_queue *)txq;
+
+	if (!q)
+		return;
+
+	release_txq_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+avf_stop_queues(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	struct avf_tx_queue *txq;
+	int ret, i;
+
+	/* Stop All queues */
+	ret = avf_disable_queues(adapter);
+	if (ret)
+		PMD_DRV_LOG(WARNING, "Fail to stop queues");
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		release_txq_mbufs(txq);
+		reset_tx_queue(txq);
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		release_rxq_mbufs(rxq);
+		reset_rx_queue(rxq);
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
new file mode 100644
index 0000000..e227cd1
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx.h
@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_RXTX_H_
+#define _AVF_RXTX_H_
+
+/* The queue length (QLEN) must be a whole multiple of 32 descriptors. */
+#define AVF_ALIGN_RING_DESC      32
+#define AVF_MIN_RING_DESC        64
+#define AVF_MAX_RING_DESC        4096
+#define AVF_DMA_MEM_ALIGN        4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define AVF_RING_BASE_ALIGN      128
+
+/* used for Rx Bulk Allocate */
+#define AVF_RX_MAX_BURST         32
+
+#define DEFAULT_TX_RS_THRESH     32
+#define DEFAULT_TX_FREE_THRESH   32
+
+/* HW desc structure, both 16-byte and 32-byte types are supported */
+#ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+#define avf_rx_desc avf_16byte_rx_desc
+#else
+#define avf_rx_desc avf_32byte_rx_desc
+#endif
+
+/* Structure associated with each Rx queue. */
+struct avf_rx_queue {
+	struct rte_mempool *mp;       /* mbuf pool to populate Rx ring */
+	const struct rte_memzone *mz; /* memzone for Rx ring */
+	volatile union avf_rx_desc *rx_ring; /* Rx ring virtual address */
+	uint64_t rx_ring_phys_addr;   /* Rx ring DMA address */
+	struct rte_mbuf **sw_ring;     /* address of SW ring */
+	uint16_t nb_rx_desc;          /* ring length */
+	uint16_t rx_tail;             /* current value of tail */
+	volatile uint8_t *qrx_tail;   /* register address of tail */
+	uint16_t rx_free_thresh;      /* max free RX desc to hold */
+	uint16_t nb_rx_hold;          /* number of held free RX desc */
+	struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
+	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
+	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
+
+	uint16_t port_id;       /* device port ID */
+	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
+	uint16_t queue_id;      /* Rx queue index */
+	uint16_t rx_buf_len;    /* The packet buffer size */
+	uint16_t rx_hdr_len;    /* The header buffer size */
+	uint16_t max_pkt_len;   /* Maximum packet length */
+
+	bool q_set;             /* if rx queue has been configured */
+	bool rx_deferred_start; /* don't start this queue in dev start */
+};
+
+struct avf_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint16_t next_id;
+	uint16_t last_id;
+};
+
+/* Structure associated with each TX queue. */
+struct avf_tx_queue {
+	const struct rte_memzone *mz;  /* memzone for Tx ring */
+	volatile struct avf_tx_desc *tx_ring; /* Tx ring virtual address */
+	uint64_t tx_ring_phys_addr;    /* Tx ring DMA address */
+	struct avf_tx_entry *sw_ring;  /* address array of SW ring */
+	uint16_t nb_tx_desc;           /* ring length */
+	uint16_t tx_tail;              /* current value of tail */
+	volatile uint8_t *qtx_tail;    /* register address of tail */
+	/* number of used desc since RS bit set */
+	uint16_t nb_used;
+	uint16_t nb_free;
+	uint16_t last_desc_cleaned;    /* last desc that has been cleaned */
+	uint16_t free_thresh;
+	uint16_t rs_thresh;
+
+	uint16_t port_id;
+	uint16_t queue_id;
+	uint32_t txq_flags;
+	uint16_t next_dd;              /* next to check DD, for VPMD */
+	uint16_t next_rs;              /* next to set RS, for VPMD */
+
+	bool q_set;                    /* if tx queue has been configured */
+	bool tx_deferred_start;        /* don't start this queue in dev start */
+};
+
+int avf_dev_rx_queue_setup(struct rte_eth_dev *dev,
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_rxconf *rx_conf,
+			   struct rte_mempool *mp);
+
+int avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void avf_dev_rx_queue_release(void *rxq);
+
+int avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_txconf *tx_conf);
+int avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void avf_dev_tx_queue_release(void *txq);
+void avf_stop_queues(struct rte_eth_dev *dev);
+
+static inline
+void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
+			    const void *desc,
+			    uint16_t rx_id)
+{
+#ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+	const union avf_16byte_rx_desc *rx_desc = desc;
+
+	printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64"\n",
+	       rxq->queue_id, rx_id, rx_desc->read.pkt_addr,
+	       rx_desc->read.hdr_addr);
+#else
+	const union avf_32byte_rx_desc *rx_desc = desc;
+
+	printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64
+	       " QW2: 0x%016"PRIx64" QW3: 0x%016"PRIx64"\n", rxq->queue_id,
+	       rx_id, rx_desc->read.pkt_addr, rx_desc->read.hdr_addr,
+	       rx_desc->read.rsvd1, rx_desc->read.rsvd2);
+#endif
+}
+
+/* All Tx descriptor formats are 16 bytes wide, so use the data descriptor
+ * layout to print the qwords.
+ */
+static inline
+void avf_dump_tx_descriptor(const struct avf_tx_queue *txq,
+			    const void *desc, uint16_t tx_id)
+{
+	const char *name;
+	const struct avf_tx_desc *tx_desc = desc;
+	enum avf_tx_desc_dtype_value type;
+
+	type = (enum avf_tx_desc_dtype_value)rte_le_to_cpu_64(
+		tx_desc->cmd_type_offset_bsz &
+		rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK));
+	switch (type) {
+	case AVF_TX_DESC_DTYPE_DATA:
+		name = "Tx_data_desc";
+		break;
+	case AVF_TX_DESC_DTYPE_CONTEXT:
+		name = "Tx_context_desc";
+		break;
+	default:
+		name = "unknown_desc";
+		break;
+	}
+
+	printf("Queue %d %s %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64"\n",
+	       txq->queue_id, name, tx_id, tx_desc->buffer_addr,
+	       tx_desc->cmd_type_offset_bsz);
+}
+#endif /* _AVF_RXTX_H_ */
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index ebbee31..55a425a 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -25,6 +25,7 @@
 #include "base/avf_type.h"
 
 #include "avf.h"
+#include "avf_rxtx.h"
 
 #define MAX_TRY_TIMES 200
 #define ASQ_DELAY_MS  10
@@ -196,6 +197,48 @@
 	}
 }
 
+int
+avf_enable_vlan_strip(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	int ret;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_ENABLE_VLAN_STRIPPING;
+	args.in_args = NULL;
+	args.in_args_size = 0;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	ret = avf_execute_vf_cmd(adapter, &args);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " OP_ENABLE_VLAN_STRIPPING");
+
+	return ret;
+}
+
+int
+avf_disable_vlan_strip(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	int ret;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_DISABLE_VLAN_STRIPPING;
+	args.in_args = NULL;
+	args.in_args_size = 0;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	ret = avf_execute_vf_cmd(adapter, &args);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " OP_DISABLE_VLAN_STRIPPING");
+
+	return ret;
+}
+
 #define VIRTCHNL_VERSION_MAJOR_START 1
 #define VIRTCHNL_VERSION_MINOR_START 1
 
@@ -274,8 +317,8 @@
 	err = avf_execute_vf_cmd(adapter, &args);
 
 	if (err) {
-		PMD_DRV_LOG(ERR, "Failed to execute command of "
-				 "OP_GET_VF_RESOURCE");
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_GET_VF_RESOURCE");
 		return -1;
 	}
 
@@ -302,3 +345,315 @@
 
 	return 0;
 }
+
+int
+avf_enable_queues(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+
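+	/* BIT(n) - 1 builds a mask with the n least-significant bits set,
+	 * i.e. it selects all configured queues 0..n-1.
+	 */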
+	queue_select.rx_queues = BIT(adapter->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(adapter->eth_dev->data->nb_tx_queues) - 1;
+
+	args.ops = VIRTCHNL_OP_ENABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_ENABLE_QUEUES");
+		return err;
+	}
+	return 0;
+}
+
+int
+avf_disable_queues(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+
+	queue_select.rx_queues = BIT(adapter->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(adapter->eth_dev->data->nb_tx_queues) - 1;
+
+	args.ops = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_DISABLE_QUEUES");
+		return err;
+	}
+	return 0;
+}
+
+int
+avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
+		 bool rx, bool on)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+	if (rx)
+		queue_select.rx_queues |= 1 << qid;
+	else
+		queue_select.tx_queues |= 1 << qid;
+
+	if (on)
+		args.ops = VIRTCHNL_OP_ENABLE_QUEUES;
+	else
+		args.ops = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of %s",
+			    on ? "OP_ENABLE_QUEUES" : "OP_DISABLE_QUEUES");
+	return err;
+}
+
+int
+avf_configure_rss_lut(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_rss_lut *rss_lut;
+	struct avf_cmd_info args;
+	int len, err = 0;
+
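+	/* struct virtchnl_rss_lut already holds one LUT byte (lut[1]),
+	 * hence the -1 in the length calculation.
+	 */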
+	len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (!rss_lut)
+		return -ENOMEM;
+
+	rss_lut->vsi_id = vf->vsi_res->vsi_id;
+	rss_lut->lut_entries = vf->vf_res->rss_lut_size;
+	rte_memcpy(rss_lut->lut, vf->rss_lut, vf->vf_res->rss_lut_size);
+
+	args.ops = VIRTCHNL_OP_CONFIG_RSS_LUT;
+	args.in_args = (u8 *)rss_lut;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_CONFIG_RSS_LUT");
+
+	rte_free(rss_lut);
+	return err;
+}
+
+int
+avf_configure_rss_key(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_rss_key *rss_key;
+	struct avf_cmd_info args;
+	int len, err = 0;
+
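+	/* struct virtchnl_rss_key likewise holds one key byte (key[1]),
+	 * hence the -1 in the length calculation.
+	 */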
+	len = sizeof(*rss_key) + vf->vf_res->rss_key_size - 1;
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (!rss_key)
+		return -ENOMEM;
+
+	rss_key->vsi_id = vf->vsi_res->vsi_id;
+	rss_key->key_len = vf->vf_res->rss_key_size;
+	rte_memcpy(rss_key->key, vf->rss_key, vf->vf_res->rss_key_size);
+
+	args.ops = VIRTCHNL_OP_CONFIG_RSS_KEY;
+	args.in_args = (u8 *)rss_key;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_CONFIG_RSS_KEY");
+
+	rte_free(rss_key);
+	return err;
+}
+
+int
+avf_configure_queues(struct avf_adapter *adapter)
+{
+	struct avf_rx_queue **rxq =
+		(struct avf_rx_queue **)adapter->eth_dev->data->rx_queues;
+	struct avf_tx_queue **txq =
+		(struct avf_tx_queue **)adapter->eth_dev->data->tx_queues;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_vsi_queue_config_info *vc_config;
+	struct virtchnl_queue_pair_info *vc_qp;
+	struct avf_cmd_info args;
+	uint16_t i, size;
+	int err;
+
+	size = sizeof(*vc_config) +
+	       sizeof(vc_config->qpair[0]) * vf->num_queue_pairs;
+	vc_config = rte_zmalloc("cfg_queue", size, 0);
+	if (!vc_config)
+		return -ENOMEM;
+
+	vc_config->vsi_id = vf->vsi_res->vsi_id;
+	vc_config->num_queue_pairs = vf->num_queue_pairs;
+
+	for (i = 0, vc_qp = vc_config->qpair;
+	     i < vf->num_queue_pairs;
+	     i++, vc_qp++) {
+		vc_qp->txq.vsi_id = vf->vsi_res->vsi_id;
+		vc_qp->txq.queue_id = i;
+		/* Virtchnl configures queues in pairs */
+		if (i < adapter->eth_dev->data->nb_tx_queues) {
+			vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
+			vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_phys_addr;
+		}
+		vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
+		vc_qp->rxq.queue_id = i;
+		vc_qp->rxq.max_pkt_size = vf->max_pkt_len;
+		/* Virtchnl configures queues in pairs */
+		if (i < adapter->eth_dev->data->nb_rx_queues) {
+			vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
+			vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_phys_addr;
+			vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
+		}
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
+	args.in_args = (uint8_t *)vc_config;
+	args.in_args_size = size;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " VIRTCHNL_OP_CONFIG_VSI_QUEUES");
+
+	rte_free(vc_config);
+	return err;
+}
+
+int
+avf_config_irq_map(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_irq_map_info *map_info;
+	struct virtchnl_vector_map *vecmap;
+	struct avf_cmd_info args;
+	uint32_t vector_id;
+	int len, i, err;
+
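+	/* One virtchnl_vector_map entry is sent per MSI-X vector in use. */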
+	len = sizeof(struct virtchnl_irq_map_info) +
+	      sizeof(struct virtchnl_vector_map) * vf->nb_msix;
+
+	map_info = rte_zmalloc("map_info", len, 0);
+	if (!map_info)
+		return -ENOMEM;
+
+	map_info->num_vectors = vf->nb_msix;
+	for (i = 0; i < vf->nb_msix; i++) {
+		vecmap = &map_info->vecmap[i];
+		vecmap->vsi_id = vf->vsi_res->vsi_id;
+		vecmap->rxitr_idx = AVF_ITR_INDEX_DEFAULT;
+		vecmap->vector_id = vf->msix_base + i;
+		vecmap->txq_map = 0;
+		vecmap->rxq_map = vf->rxq_map[vf->msix_base + i];
+	}
+
+	args.ops = VIRTCHNL_OP_CONFIG_IRQ_MAP;
+	args.in_args = (u8 *)map_info;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
+
+	rte_free(map_info);
+	return err;
+}
+
+void
+avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct ether_addr *addr;
+	struct avf_cmd_info args;
+	int len, err, i, j;
+	int next_begin = 0;
+	int begin = 0;
+
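+	/* The MAC list may not fit in one admin queue buffer (AVF_AQ_BUF_SZ),
+	 * so send it in chunks; each iteration covers [begin, next_begin).
+	 */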
+	do {
+		j = 0;
+		len = sizeof(struct virtchnl_ether_addr_list);
+		for (i = begin; i < AVF_NUM_MACADDR_MAX; i++, next_begin++) {
+			addr = &adapter->eth_dev->data->mac_addrs[i];
+			if (is_zero_ether_addr(addr))
+				continue;
+			len += sizeof(struct virtchnl_ether_addr);
+			if (len >= AVF_AQ_BUF_SZ) {
+				next_begin = i + 1;
+				break;
+			}
+		}
+
+		list = rte_zmalloc("avf_del_mac_buffer", len, 0);
+		if (!list) {
+			PMD_DRV_LOG(ERR, "fail to allocate memory");
+			return;
+		}
+
+		for (i = begin; i < next_begin; i++) {
+			addr = &adapter->eth_dev->data->mac_addrs[i];
+			if (is_zero_ether_addr(addr))
+				continue;
+			rte_memcpy(list->list[j].addr, addr->addr_bytes,
+				   sizeof(addr->addr_bytes));
+			PMD_DRV_LOG(DEBUG, "add/rm mac:%x:%x:%x:%x:%x:%x",
+				    addr->addr_bytes[0], addr->addr_bytes[1],
+				    addr->addr_bytes[2], addr->addr_bytes[3],
+				    addr->addr_bytes[4], addr->addr_bytes[5]);
+			j++;
+		}
+		list->vsi_id = vf->vsi_res->vsi_id;
+		list->num_elements = j;
+		args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+			   VIRTCHNL_OP_DEL_ETH_ADDR;
+		args.in_args = (uint8_t *)list;
+		args.in_args_size = len;
+		args.out_buffer = vf->aq_resp;
+		args.out_size = AVF_AQ_BUF_SZ;
+		err = avf_execute_vf_cmd(adapter, &args);
+		if (err)
+			PMD_DRV_LOG(ERR, "fail to execute command %s",
+				    add ? "OP_ADD_ETHER_ADDRESS" :
+				    "OP_DEL_ETHER_ADDRESS");
+		rte_free(list);
+		begin = next_begin;
+	} while (begin < AVF_NUM_MACADDR_MAX);
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v5 04/14] net/avf: enable basic Rx Tx func
  2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
                         ` (2 preceding siblings ...)
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 03/14] net/avf: enable queue and device Wenzhuo Lu
@ 2018-01-08  5:13       ` Wenzhuo Lu
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 05/14] net/avf: enable link status update Wenzhuo Lu
                         ` (10 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-08  5:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 MAINTAINERS                      |   1 +
 config/common_base               |   4 +
 doc/guides/nics/features/avf.ini |  22 ++
 drivers/net/avf/Makefile         |   3 +
 drivers/net/avf/avf_ethdev.c     |  36 +-
 drivers/net/avf/avf_log.h        |  21 ++
 drivers/net/avf/avf_rxtx.c       | 789 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/avf/avf_rxtx.h       |  53 +++
 8 files changed, 919 insertions(+), 10 deletions(-)
 create mode 100644 doc/guides/nics/features/avf.ini

diff --git a/MAINTAINERS b/MAINTAINERS
index 17f15b6..17067df 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -430,6 +430,7 @@ Intel avf
 M: Jingjing Wu <jingjing.wu@intel.com>
 M: Wenzhuo Lu <wenzhuo.lu@intel.com>
 F: drivers/net/avf/
+F: doc/guides/nics/features/avf*.ini
 
 Mellanox mlx4
 M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
diff --git a/config/common_base b/config/common_base
index f333209..b1f1c1c 100644
--- a/config/common_base
+++ b/config/common_base
@@ -229,6 +229,10 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 # Compile burst-oriented AVF PMD driver
 #
 CONFIG_RTE_LIBRTE_AVF_PMD=y
+CONFIG_RTE_LIBRTE_AVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_AVF_DEBUG_TX_FREE=n
+CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_AVF_16BYTE_RX_DESC=n
 
 #
 # Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD
diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
new file mode 100644
index 0000000..8a294e9
--- /dev/null
+++ b/doc/guides/nics/features/avf.ini
@@ -0,0 +1,22 @@
+;
+; Supported features of the 'avf' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Queue start/stop     = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+TSO                  = Y
+RSS hash             = Y
+CRC offload          = Y
+VLAN offload         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Packet type parsing  = Y
+Multiprocess aware   = Y
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index f4f7414..1a673fa 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -13,6 +13,9 @@ LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
 LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_hash
 LDLIBS += -lrte_bus_pci
 
+# used to dump HW descriptor for debugging
+# CFLAGS += -DDEBUG_DUMP_DESC
+
 EXPORT_MAP := rte_pmd_avf_version.map
 
 LIBABIVER := 1
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index c53f00e..4480989 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -39,6 +39,7 @@
 static void avf_dev_close(struct rte_eth_dev *dev);
 static void avf_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
+static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -53,6 +54,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	.dev_stop                   = avf_dev_stop,
 	.dev_close                  = avf_dev_close,
 	.dev_infos_get              = avf_dev_info_get,
+	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -204,9 +206,12 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 		if (ret != AVF_SUCCESS)
 			break;
 	}
-	/* TODO: set rx/tx function to vector/scatter/single-segment
+	/* set rx/tx function to vector/scatter/single-segment
 	 * according to parameters
 	 */
+	avf_set_rx_function(dev);
+	avf_set_tx_function(dev);
+
 	return ret;
 }
 
@@ -407,6 +412,23 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	};
 }
 
+static const uint32_t *
+avf_dev_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_L4_NONFRAG,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_UNKNOWN
+	};
+	return ptypes;
+}
+
 static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
@@ -556,7 +578,19 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 
 	/* assign ops func pointer */
 	eth_dev->dev_ops = &avf_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &avf_recv_pkts;
+	eth_dev->tx_pkt_burst = &avf_xmit_pkts;
+	eth_dev->tx_pkt_prepare = &avf_prep_pkts;
 
+	/* For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check if we need a different RX
+	 * and TX function.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		avf_set_rx_function(eth_dev);
+		avf_set_tx_function(eth_dev);
+		return 0;
+	}
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
 	hw->vendor_id = pci_dev->id.vendor_id;
diff --git a/drivers/net/avf/avf_log.h b/drivers/net/avf/avf_log.h
index e3f106b..8d574d3 100644
--- a/drivers/net/avf/avf_log.h
+++ b/drivers/net/avf/avf_log.h
@@ -20,4 +20,25 @@
 	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
 #define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
 
+#ifdef RTE_LIBRTE_AVF_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVF_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVF_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
 #endif /* _AVF_LOG_H_ */
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index 2d4fb4c..baccec4 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -34,17 +34,11 @@
 check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
 {
 	/* The following constraints must be satisfied:
-	 *   thresh >= AVF_RX_MAX_BURST
 	 *   thresh < rxq->nb_rx_desc
-	 *   (rxq->nb_rx_desc % thresh) == 0
 	 */
-	if (thresh < AVF_RX_MAX_BURST ||
-	    thresh >= nb_desc ||
-	    (nb_desc % thresh != 0)) {
-		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u, "
-			     "greater than or equal to %u, "
-			     "and a divisor of %u",
-			     thresh, nb_desc, AVF_RX_MAX_BURST, nb_desc);
+	if (thresh >= nb_desc) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
+			     thresh, nb_desc);
 		return -EINVAL;
 	}
 	return 0;
@@ -614,3 +608,780 @@
 		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 }
+
+static inline void
+avf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union avf_rx_desc *rxdp)
+{
+	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		(1 << AVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
+		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->vlan_tci =
+			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+	} else {
+		mb->vlan_tci = 0;
+	}
+}
+
+/* Translate the rx descriptor status and error fields to pkt flags */
+static inline uint64_t
+avf_rxd_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags;
+	uint64_t error_bits = (qword >> AVF_RXD_QW1_ERROR_SHIFT);
+
+#define AVF_RX_ERR_BITS 0x3f
+
+	/* Check if RSS_HASH */
+	flags = (((qword >> AVF_RX_DESC_STATUS_FLTSTAT_SHIFT) &
+					AVF_RX_DESC_FLTSTAT_RSS_HASH) ==
+			AVF_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+
+	if (likely((error_bits & AVF_RX_ERR_BITS) == 0)) {
+		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely(error_bits & (1 << AVF_RX_DESC_ERROR_IPE_SHIFT)))
+		flags |= PKT_RX_IP_CKSUM_BAD;
+	else
+		flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << AVF_RX_DESC_ERROR_L4E_SHIFT)))
+		flags |= PKT_RX_L4_CKSUM_BAD;
+	else
+		flags |= PKT_RX_L4_CKSUM_GOOD;
+
+	/* TODO: Oversize error bit is not processed here */
+
+	return flags;
+}
+
+/* implement recv_pkts */
+uint16_t
+avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	volatile union avf_rx_desc *rx_ring;
+	volatile union avf_rx_desc *rxdp;
+	struct avf_rx_queue *rxq;
+	union avf_rx_desc rxd;
+	struct rte_mbuf *rxe;
+	struct rte_eth_dev *dev;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	uint16_t nb_rx;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint16_t rx_packet_len;
+	uint16_t rx_id, nb_hold;
+	uint64_t dma_addr;
+	uint64_t pkt_flags;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	nb_rx = 0;
+	nb_hold = 0;
+	rxq = rx_queue;
+	rx_id = rxq->rx_tail;
+	rx_ring = rxq->rx_ring;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			    AVF_RXD_QW1_STATUS_SHIFT;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+			break;
+		AVF_DUMP_RX_DESC(rxq, rxdp, rx_id);
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = &rte_eth_devices[rxq->port_id];
+			dev->data->rx_mbuf_alloc_failed++;
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+				   "queue_id=%u", rxq->port_id, rxq->queue_id);
+			break;
+		}
+
+		rxd = *rxdp;
+		nb_hold++;
+		rxe = rxq->sw_ring[rx_id];
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		rxm = rxe;
+		rxe = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+
+		rx_packet_len = ((qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				AVF_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
+
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
+		avf_rxd_to_vlan_tci(rxm, &rxd);
+		pkt_flags = avf_rxd_to_pkt_flags(qword1);
+		rxm->packet_type =
+			ptype_tbl[(uint8_t)((qword1 &
+			AVF_RXD_QW1_PTYPE_MASK) >> AVF_RXD_QW1_PTYPE_SHIFT)];
+
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			rxm->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		rxm->ol_flags |= pkt_flags;
+
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+
+	/* If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the receive tail register of queue.
+	 * Update that register with the value of the last processed RX
+	 * descriptor minus 1.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+			   "nb_hold=%u nb_rx=%u",
+			   rxq->port_id, rxq->queue_id,
+			   rx_id, nb_hold, nb_rx);
+		rx_id = (uint16_t)((rx_id == 0) ?
+			(rxq->nb_rx_desc - 1) : (rx_id - 1));
+		AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	return nb_rx;
+}
+
+/* implement recv_scattered_pkts  */
+uint16_t
+avf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	union avf_rx_desc rxd;
+	struct rte_mbuf *rxe;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *nmb, *rxm;
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0, nb_hold = 0, rx_packet_len;
+	struct rte_eth_dev *dev;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags;
+
+	volatile union avf_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union avf_rx_desc *rxdp;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			    AVF_RXD_QW1_STATUS_SHIFT;
+
+		/* Check the DD bit */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+			break;
+		AVF_DUMP_RX_DESC(rxq, rxdp, rx_id);
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+				   "queue_id=%u", rxq->port_id, rxq->queue_id);
+			dev = &rte_eth_devices[rxq->port_id];
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+
+		rxd = *rxdp;
+		nb_hold++;
+		rxe = rxq->sw_ring[rx_id];
+		rx_id++;
+		if (rx_id == rxq->nb_rx_desc)
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+
+		rxm = rxe;
+		rxe = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/* Set data buffer address and data length of the mbuf */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				 AVF_RXD_QW1_LENGTH_PBUF_SHIFT;
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/* If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+						rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		/* If this is not the last buffer of the received packet,
+		 * update the pointer to the last mbuf of the current scattered
+		 * packet and continue to parse the RX ring.
+		 */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_EOF_SHIFT))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		/* This is the last buffer of the received packet. If the CRC
+		 * is not stripped by the hardware:
+		 *  - Subtract the CRC length from the total packet length.
+		 *  - If the last buffer only contains the whole CRC or a part
+		 *  of it, free the mbuf associated to the last buffer. If part
+		 *  of the CRC is also contained in the previous mbuf, subtract
+		 *  the length of that CRC part from the data length of the
+		 *  previous mbuf.
+		 */
+		rxm->next = NULL;
+		if (unlikely(rxq->crc_len > 0)) {
+			first_seg->pkt_len -= ETHER_CRC_LEN;
+			if (rx_packet_len <= ETHER_CRC_LEN) {
+				rte_pktmbuf_free_seg(rxm);
+				first_seg->nb_segs--;
+				last_seg->data_len =
+					(uint16_t)(last_seg->data_len -
+					(ETHER_CRC_LEN - rx_packet_len));
+				last_seg->next = NULL;
+			} else
+				rxm->data_len = (uint16_t)(rx_packet_len -
+								ETHER_CRC_LEN);
+		}
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+		avf_rxd_to_vlan_tci(first_seg, &rxd);
+		pkt_flags = avf_rxd_to_pkt_flags(qword1);
+		first_seg->packet_type =
+			ptype_tbl[(uint8_t)((qword1 &
+			AVF_RXD_QW1_PTYPE_MASK) >> AVF_RXD_QW1_PTYPE_SHIFT)];
+
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			first_seg->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		first_seg->ol_flags |= pkt_flags;
+
+		/* Prefetch data of first segment, if configured to do so. */
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+
+	/* Record index of the next RX descriptor to probe. */
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	/* If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register. Update the RDT with the value of the last processed RX
+	 * descriptor minus 1, to guarantee that the RDT register is never
+	 * equal to the RDH register, which creates a "full" ring situation
+	 * from the hardware point of view.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+			   "nb_hold=%u nb_rx=%u",
+			   rxq->port_id, rxq->queue_id,
+			   rx_id, nb_hold, nb_rx);
+		rx_id = (uint16_t)(rx_id == 0 ?
+			(rxq->nb_rx_desc - 1) : (rx_id - 1));
+		AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	return nb_rx;
+}
+
+static inline int
+avf_xmit_cleanup(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *sw_ring = txq->sw_ring;
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+
+	volatile struct avf_tx_desc *txd = txq->tx_ring;
+
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	if ((txd[desc_to_clean_to].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK)) !=
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE)) {
+		PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
+				"(port=%d queue=%d)", desc_to_clean_to,
+				txq->port_id, txq->queue_id);
+		return -1;
+	}
+
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+							desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+					last_desc_cleaned);
+
+	txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
+
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+
+	return 0;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+avf_calc_context_desc(uint64_t flags)
+{
+	static uint64_t mask = PKT_TX_TCP_SEG;
+
+	return (flags & mask) ? 1 : 0;
+}
+
+static inline void
+avf_txd_enable_checksum(uint64_t ol_flags,
+			uint32_t *td_cmd,
+			uint32_t *td_offset,
+			union avf_tx_offload tx_offload)
+{
+	/* Set MACLEN */
+	*td_offset |= (tx_offload.l2_len >> 1) <<
+		      AVF_TX_DESC_LENGTH_MACLEN_SHIFT;
+
+	/* Enable L3 checksum offloads */
+	if (ol_flags & PKT_TX_IP_CKSUM) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV4_CSUM;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	} else if (ol_flags & PKT_TX_IPV4) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV4;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	} else if (ol_flags & PKT_TX_IPV6) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV6;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	}
+
+	if (ol_flags & PKT_TX_TCP_SEG) {
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (tx_offload.l4_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		return;
+	}
+
+	/* Enable L4 checksum offloads */
+	switch (ol_flags & PKT_TX_L4_MASK) {
+	case PKT_TX_TCP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (sizeof(struct tcp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	case PKT_TX_SCTP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_SCTP;
+		*td_offset |= (sizeof(struct sctp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	case PKT_TX_UDP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_UDP;
+		*td_offset |= (sizeof(struct udp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	default:
+		break;
+	}
+}
+
+/* set TSO context descriptor
+ * support IP -> L4 and IP -> IP -> L4
+ */
+static inline uint64_t
+avf_set_tso_ctx(struct rte_mbuf *mbuf, union avf_tx_offload tx_offload)
+{
+	uint64_t ctx_desc = 0;
+	uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+	if (!tx_offload.l4_len) {
+		PMD_TX_LOG(DEBUG, "L4 length set to 0");
+		return ctx_desc;
+	}
+
+	/* In case of a non-tunneling packet, the outer_l2_len and
+	 * outer_l3_len must be 0.
+	 */
+	hdr_len = tx_offload.l2_len +
+		  tx_offload.l3_len +
+		  tx_offload.l4_len;
+
+	cd_cmd = AVF_TX_CTX_DESC_TSO;
+	cd_tso_len = mbuf->pkt_len - hdr_len;
+	ctx_desc |= ((uint64_t)cd_cmd << AVF_TXD_CTX_QW1_CMD_SHIFT) |
+		     ((uint64_t)cd_tso_len << AVF_TXD_CTX_QW1_TSO_LEN_SHIFT) |
+		     ((uint64_t)mbuf->tso_segsz << AVF_TXD_CTX_QW1_MSS_SHIFT);
+
+	return ctx_desc;
+}
+
+/* Construct the tx flags */
+static inline uint64_t
+avf_build_ctob(uint32_t td_cmd, uint32_t td_offset, unsigned int size,
+	       uint32_t td_tag)
+{
+	return rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd  << AVF_TXD_QW1_CMD_SHIFT) |
+				((uint64_t)td_offset <<
+				 AVF_TXD_QW1_OFFSET_SHIFT) |
+				((uint64_t)size  <<
+				 AVF_TXD_QW1_TX_BUF_SZ_SHIFT) |
+				((uint64_t)td_tag  <<
+				 AVF_TXD_QW1_L2TAG1_SHIFT));
+}
+
+/* TX function */
+uint16_t
+avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	volatile struct avf_tx_desc *txd;
+	volatile struct avf_tx_desc *txr;
+	struct avf_tx_queue *txq;
+	struct avf_tx_entry *sw_ring;
+	struct avf_tx_entry *txe, *txn;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	uint16_t tx_id;
+	uint16_t nb_tx;
+	uint32_t td_cmd;
+	uint32_t td_offset;
+	uint32_t td_tag;
+	uint64_t ol_flags;
+	uint16_t nb_used;
+	uint16_t nb_ctx;
+	uint16_t tx_last;
+	uint16_t slen;
+	uint64_t buf_dma_addr;
+	union avf_tx_offload tx_offload = {0};
+
+	txq = tx_queue;
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+	tx_id = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Check if the descriptor ring needs to be cleaned. */
+	if (txq->nb_free < txq->free_thresh)
+		avf_xmit_cleanup(txq);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		td_cmd = 0;
+		td_tag = 0;
+		td_offset = 0;
+
+		tx_pkt = *tx_pkts++;
+		RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
+
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = avf_calc_context_desc(ol_flags);
+
+		/* The number of descriptors that must be allocated for
+		 * a packet equals to the number of the segments of that
+		 * packet plus 1 context descriptor if needed.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+		PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u"
+			   " tx_first=%u tx_last=%u",
+			   txq->port_id, txq->queue_id, tx_id, tx_last);
+
+		if (nb_used > txq->nb_free) {
+			if (avf_xmit_cleanup(txq)) {
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+			if (unlikely(nb_used > txq->rs_thresh)) {
+				while (nb_used > txq->nb_free) {
+					if (avf_xmit_cleanup(txq)) {
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		/* Descriptor based VLAN insertion */
+		if (ol_flags & PKT_TX_VLAN_PKT) {
+			td_cmd |= AVF_TX_DESC_CMD_IL2TAG1;
+			td_tag = tx_pkt->vlan_tci;
+		}
+
+		/* According to datasheet, the bit2 is reserved and must be
+		 * set to 1.
+		 */
+		td_cmd |= 0x04;
+
+		/* Enable checksum offloading */
+		if (ol_flags & AVF_TX_CKSUM_OFFLOAD_MASK)
+			avf_txd_enable_checksum(ol_flags, &td_cmd,
+						&td_offset, tx_offload);
+
+		if (nb_ctx) {
+			/* Setup TX context descriptor if required */
+			volatile struct avf_tx_context_desc *ctx_txd =
+				(volatile struct avf_tx_context_desc *)
+					&txr[tx_id];
+			uint16_t cd_l2tag2 = 0;
+			uint64_t cd_type_cmd_tso_mss =
+				AVF_TX_DESC_DTYPE_CONTEXT;
+
+			txn = &sw_ring[txe->next_id];
+			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+			if (txe->mbuf) {
+				rte_pktmbuf_free_seg(txe->mbuf);
+				txe->mbuf = NULL;
+			}
+
+			/* TSO enabled */
+			if (ol_flags & PKT_TX_TCP_SEG)
+				cd_type_cmd_tso_mss |=
+					avf_set_tso_ctx(tx_pkt, tx_offload);
+
+			AVF_DUMP_TX_DESC(txq, ctx_txd, tx_id);
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+		}
+
+		m_seg = tx_pkt;
+		do {
+			txd = &txr[tx_id];
+			txn = &sw_ring[txe->next_id];
+
+			if (txe->mbuf)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/* Setup TX Descriptor */
+			slen = m_seg->data_len;
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
+			txd->cmd_type_offset_bsz = avf_build_ctob(td_cmd,
+								  td_offset,
+								  slen,
+								  td_tag);
+
+			AVF_DUMP_TX_DESC(txq, txd, tx_id);
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg);
+
+		/* The last packet data descriptor needs End Of Packet (EOP) */
+		td_cmd |= AVF_TX_DESC_CMD_EOP;
+		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+
+		if (txq->nb_used >= txq->rs_thresh) {
+			PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
+				   "%4u (port=%d queue=%d)",
+				   tx_last, txq->port_id, txq->queue_id);
+
+			td_cmd |= AVF_TX_DESC_CMD_RS;
+
+			/* Update txq RS bit counters */
+			txq->nb_used = 0;
+		}
+
+		txd->cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)td_cmd) <<
+					 AVF_TXD_QW1_CMD_SHIFT);
+		AVF_DUMP_TX_DESC(txq, txd, tx_id);
+	}
+
+end_of_tx:
+	rte_wmb();
+
+	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+		   txq->port_id, txq->queue_id, tx_id, nb_tx);
+
+	AVF_PCI_REG_WRITE_RELAXED(txq->qtx_tail, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
+/* TX prep functions */
+uint16_t
+avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	      uint16_t nb_pkts)
+{
+	int i, ret;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		/* Non-TSO packet cannot exceed AVF_TX_MAX_MTU_SEG segments. */
+		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+			if (m->nb_segs > AVF_TX_MAX_MTU_SEG) {
+				rte_errno = -EINVAL;
+				return i;
+			}
+		} else if ((m->tso_segsz < AVF_MIN_TSO_MSS) ||
+			   (m->tso_segsz > AVF_MAX_TSO_MSS)) {
+			/* MSS outside this range is considered malicious */
+			rte_errno = -EINVAL;
+			return i;
+		}
+
+		if (ol_flags & AVF_TX_OFFLOAD_NOTSUP_MASK) {
+			rte_errno = -ENOTSUP;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			return i;
+		}
+#endif
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			return i;
+		}
+	}
+
+	return i;
+}
+
+/* Choose Rx function */
+void
+avf_set_rx_function(struct rte_eth_dev *dev)
+{
+	if (dev->data->scattered_rx)
+		dev->rx_pkt_burst = avf_recv_scattered_pkts;
+	else
+		dev->rx_pkt_burst = avf_recv_pkts;
+}
+
+/* Choose Tx function */
+void
+avf_set_tx_function(struct rte_eth_dev *dev)
+{
+	dev->tx_pkt_burst = avf_xmit_pkts;
+	dev->tx_pkt_prepare = avf_prep_pkts;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index e227cd1..cad240d 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -19,6 +19,25 @@
 #define DEFAULT_TX_RS_THRESH     32
 #define DEFAULT_TX_FREE_THRESH   32
 
+#define AVF_MIN_TSO_MSS          256
+#define AVF_MAX_TSO_MSS          9668
+#define AVF_TSO_MAX_SEG          UINT8_MAX
+#define AVF_TX_MAX_MTU_SEG       8
+
+#define AVF_TX_CKSUM_OFFLOAD_MASK (		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG)
+
+#define AVF_TX_OFFLOAD_MASK (  \
+		PKT_TX_VLAN_PKT |		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG)
+
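+/* Any Tx offload flag outside AVF_TX_OFFLOAD_MASK is unsupported and makes
+ * avf_prep_pkts() reject the packet.
+ */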
+#define AVF_TX_OFFLOAD_NOTSUP_MASK \
+		(PKT_TX_OFFLOAD_MASK ^ AVF_TX_OFFLOAD_MASK)
+
 /* HW desc structure, both 16-byte and 32-byte types are supported */
 #ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
 #define avf_rx_desc avf_16byte_rx_desc
@@ -85,6 +104,18 @@ struct avf_tx_queue {
 	bool tx_deferred_start;        /* don't start this queue in dev start */
 };
 
+/* Offload features */
+union avf_tx_offload {
+	uint64_t data;
+	struct {
+		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /* L3 (IP) Header Length. */
+		uint64_t l4_len:8; /* L4 Header Length. */
+		uint64_t tso_segsz:16; /* TCP TSO segment size */
+		/* uint64_t unused : 24; */
+	};
+};
+
 int avf_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			   uint16_t queue_idx,
 			   uint16_t nb_desc,
@@ -105,6 +136,17 @@ int avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
 int avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void avf_dev_tx_queue_release(void *txq);
 void avf_stop_queues(struct rte_eth_dev *dev);
+uint16_t avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts);
+uint16_t avf_recv_scattered_pkts(void *rx_queue,
+				 struct rte_mbuf **rx_pkts,
+				 uint16_t nb_pkts);
+uint16_t avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+uint16_t avf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void avf_set_rx_function(struct rte_eth_dev *dev);
+void avf_set_tx_function(struct rte_eth_dev *dev);
 
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
@@ -157,4 +199,15 @@ void avf_dump_tx_descriptor(const struct avf_tx_queue *txq,
 	       txq->queue_id, name, tx_id, tx_desc->buffer_addr,
 	       tx_desc->cmd_type_offset_bsz);
 }
+
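+/* Descriptor dumping is compiled in only when DEBUG_DUMP_DESC is defined
+ * (see the commented-out CFLAGS line in the Makefile); otherwise the
+ * macros expand to no-ops so the fast path is unaffected.
+ */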
+#ifdef DEBUG_DUMP_DESC
+#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) \
+	avf_dump_rx_descriptor(rxq, desc, rx_id)
+#define AVF_DUMP_TX_DESC(txq, desc, tx_id) \
+	avf_dump_tx_descriptor(txq, desc, tx_id)
+#else
+#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
+#define AVF_DUMP_TX_DESC(txq, desc, tx_id) do { } while (0)
+#endif
+
 #endif /* _AVF_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v5 05/14] net/avf: enable link status update
  2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
                         ` (3 preceding siblings ...)
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 04/14] net/avf: enable basic Rx Tx func Wenzhuo Lu
@ 2018-01-08  5:13       ` Wenzhuo Lu
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 06/14] net/avf: support stats Wenzhuo Lu
                         ` (9 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-08  5:13 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |  3 +++
 drivers/net/avf/avf.h            |  2 ++
 drivers/net/avf/avf_ethdev.c     | 51 +++++++++++++++++++++++++++++++++++++++-
 drivers/net/avf/avf_vchnl.c      | 38 +++++++++++++++++++++++++++++-
 4 files changed, 92 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 8a294e9..77e4f53 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -4,6 +4,9 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
 Queue start/stop     = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 22886d4..c97b2ee 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -202,4 +202,6 @@ int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
 int avf_configure_queues(struct avf_adapter *adapter);
 int avf_config_irq_map(struct avf_adapter *adapter);
 void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
+int avf_dev_link_update(struct rte_eth_dev *dev,
+			__rte_unused int wait_to_complete);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 4480989..7f7ddf9 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -55,6 +55,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	.dev_close                  = avf_dev_close,
 	.dev_infos_get              = avf_dev_info_get,
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
+	.link_update                = avf_dev_link_update,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -429,6 +430,53 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	return ptypes;
 }
 
+int
+avf_dev_link_update(struct rte_eth_dev *dev,
+		    __rte_unused int wait_to_complete)
+{
+	struct rte_eth_link new_link;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	/* Only read the link status stored in the VF; it is updated when a
+	 * LINK_CHANGE event is received from the PF over virtchnl.
+	 */
+	switch (vf->link_speed) {
+	case VIRTCHNL_LINK_SPEED_100MB:
+		new_link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+	case VIRTCHNL_LINK_SPEED_1GB:
+		new_link.link_speed = ETH_SPEED_NUM_1G;
+		break;
+	case VIRTCHNL_LINK_SPEED_10GB:
+		new_link.link_speed = ETH_SPEED_NUM_10G;
+		break;
+	case VIRTCHNL_LINK_SPEED_20GB:
+		new_link.link_speed = ETH_SPEED_NUM_20G;
+		break;
+	case VIRTCHNL_LINK_SPEED_25GB:
+		new_link.link_speed = ETH_SPEED_NUM_25G;
+		break;
+	case VIRTCHNL_LINK_SPEED_40GB:
+		new_link.link_speed = ETH_SPEED_NUM_40G;
+		break;
+	default:
+		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		break;
+	}
+
+	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vf->link_up ? ETH_LINK_UP :
+					     ETH_LINK_DOWN;
+	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+				  ETH_LINK_SPEED_FIXED);
+
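+	/* Write the new link information atomically so readers of
+	 * dev->data->dev_link never observe a half-updated value.
+	 */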
+	rte_atomic64_cmpset((uint64_t *)&dev->data->dev_link,
+			    *(uint64_t *)&dev->data->dev_link,
+			    *(uint64_t *)&new_link);
+
+	return 0;
+}
+
 static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
@@ -712,7 +760,8 @@ static int eth_avf_pci_remove(struct rte_pci_device *pci_dev)
 /* Adaptive virtual function driver struct */
 static struct rte_pci_driver rte_avf_pmd = {
 	.id_table = pci_id_avf_map,
-	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC |
+		     RTE_PCI_DRV_IOVA_AS_VA,
 	.probe = eth_avf_pci_probe,
 	.remove = eth_avf_pci_remove,
 };
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index 55a425a..f5da601 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -133,6 +133,41 @@
 	return err;
 }
 
+static void
+avf_handle_pf_event_msg(struct rte_eth_dev *dev, uint8_t *msg,
+			uint16_t msglen)
+{
+	struct virtchnl_pf_event *pf_msg =
+			(struct virtchnl_pf_event *)msg;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	if (msglen < sizeof(struct virtchnl_pf_event)) {
+		PMD_DRV_LOG(DEBUG, "Error event");
+		return;
+	}
+	switch (pf_msg->event) {
+	case VIRTCHNL_EVENT_RESET_IMPENDING:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_RESET_IMPENDING event");
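+		/* Forward the impending reset to the application as an
+		 * RTE_ETH_EVENT_INTR_RESET event so it can recover the port.
+		 */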
+		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET,
+					      NULL, NULL);
+		break;
+	case VIRTCHNL_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_LINK_CHANGE event");
+		vf->link_up = pf_msg->event_data.link_event.link_status;
+		vf->link_speed = pf_msg->event_data.link_event.link_speed;
+		avf_dev_link_update(dev, 0);
+		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC,
+					      NULL, NULL);
+		break;
+	case VIRTCHNL_EVENT_PF_DRIVER_CLOSE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_PF_DRIVER_CLOSE event");
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Unknown event %u received", pf_msg->event);
+		break;
+	}
+}
+
 void
 avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
 {
@@ -172,7 +207,8 @@
 		switch (aq_opc) {
 		case avf_aqc_opc_send_msg_to_vf:
 			if (msg_opc == VIRTCHNL_OP_EVENT) {
-				/* TODO */
+				avf_handle_pf_event_msg(dev, info.msg_buf,
+							info.msg_len);
 			} else {
 				/* read message and it's expected one */
 				if (msg_opc == vf->pend_cmd) {
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v5 06/14] net/avf: support stats
  2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
                         ` (4 preceding siblings ...)
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 05/14] net/avf: enable link status update Wenzhuo Lu
@ 2018-01-08  5:13       ` Wenzhuo Lu
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 07/14] net/avf: enable ops for MAC VLAN offload Wenzhuo Lu
                         ` (8 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-08  5:13 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |  1 +
 drivers/net/avf/avf.h            |  2 ++
 drivers/net/avf/avf_ethdev.c     | 27 +++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c      | 27 +++++++++++++++++++++++++++
 4 files changed, 57 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 77e4f53..af84599 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -17,6 +17,7 @@ VLAN offload         = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
+Basic stats          = Y
 Multiprocess aware   = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index c97b2ee..680b117 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -204,4 +204,6 @@ int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
 void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
 int avf_dev_link_update(struct rte_eth_dev *dev,
 			__rte_unused int wait_to_complete);
+int avf_query_stats(struct avf_adapter *adapter,
+		    struct virtchnl_eth_stats **pstats);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 7f7ddf9..bf6251b 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -40,6 +40,8 @@
 static void avf_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+static int avf_dev_stats_get(struct rte_eth_dev *dev,
+			     struct rte_eth_stats *stats);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -56,6 +58,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	.dev_infos_get              = avf_dev_info_get,
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.link_update                = avf_dev_link_update,
+	.stats_get                  = avf_dev_stats_get,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -478,6 +481,30 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 }
 
 static int
+avf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct virtchnl_eth_stats *pstats = NULL;
+	int ret;
+
+	ret = avf_query_stats(adapter, &pstats);
+	if (ret == 0) {
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+						pstats->rx_broadcast;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->obytes = pstats->tx_bytes;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index f5da601..e26527f 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -693,3 +693,30 @@
 		begin = next_begin;
 	} while (begin < AVF_NUM_MACADDR_MAX);
 }
+
+int
+avf_query_stats(struct avf_adapter *adapter,
+		struct virtchnl_eth_stats **pstats)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select q_stats;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&q_stats, 0, sizeof(q_stats));
+	q_stats.vsi_id = vf->vsi_res->vsi_id;
+	args.ops = VIRTCHNL_OP_GET_STATS;
+	args.in_args = (uint8_t *)&q_stats;
+	args.in_args_size = sizeof(q_stats);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to execute command OP_GET_STATS");
+		*pstats = NULL;
+		return err;
+	}
+	*pstats = (struct virtchnl_eth_stats *)args.out_buffer;
+	return 0;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v5 07/14] net/avf: enable ops for MAC VLAN offload
  2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
                         ` (5 preceding siblings ...)
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 06/14] net/avf: support stats Wenzhuo Lu
@ 2018-01-08  5:13       ` Wenzhuo Lu
  2018-01-09 17:58         ` Ferruh Yigit
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 08/14] net/avf: enable ops for RSS setting Wenzhuo Lu
                         ` (7 subsequent siblings)
  14 siblings, 1 reply; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-08  5:13 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

 - promiscuous_enable
 - promiscuous_disable
 - allmulticast_enable
 - allmulticast_disable
 - mac_addr_add
 - mac_addr_remove
 - mac_addr_set
 - vlan_filter_set
 - vlan_offload_set

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |   5 +
 drivers/net/avf/avf.h            |   5 +
 drivers/net/avf/avf_ethdev.c     | 219 +++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c      |  90 ++++++++++++++++
 4 files changed, 319 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index af84599..1dd6114 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -11,7 +11,12 @@ Queue start/stop     = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
 TSO                  = Y
+Promiscuous mode     = Y
+Allmulticast mode    = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
 RSS hash             = Y
+VLAN filter          = Y
 CRC offload          = Y
 VLAN offload         = Y
 L3 checksum offload  = Y
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 680b117..ea48310 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -206,4 +206,9 @@ int avf_dev_link_update(struct rte_eth_dev *dev,
 			__rte_unused int wait_to_complete);
 int avf_query_stats(struct avf_adapter *adapter,
 		    struct virtchnl_eth_stats **pstats);
+int avf_config_promisc(struct avf_adapter *adapter, bool enable_unicast,
+		       bool enable_multicast);
+int avf_add_del_eth_addr(struct avf_adapter *adapter,
+			 struct ether_addr *addr, bool add);
+int avf_add_del_vlan(struct avf_adapter *adapter, uint16_t vlanid, bool add);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index bf6251b..1ea6ec6 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -42,6 +42,20 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 static int avf_dev_stats_get(struct rte_eth_dev *dev,
 			     struct rte_eth_stats *stats);
+static void avf_dev_promiscuous_enable(struct rte_eth_dev *dev);
+static void avf_dev_promiscuous_disable(struct rte_eth_dev *dev);
+static void avf_dev_allmulticast_enable(struct rte_eth_dev *dev);
+static void avf_dev_allmulticast_disable(struct rte_eth_dev *dev);
+static int avf_dev_add_mac_addr(struct rte_eth_dev *dev,
+				struct ether_addr *addr,
+				uint32_t index,
+				uint32_t pool);
+static void avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index);
+static int avf_dev_vlan_filter_set(struct rte_eth_dev *dev,
+				   uint16_t vlan_id, int on);
+static int avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+					 struct ether_addr *mac_addr);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -59,6 +73,14 @@ static int avf_dev_stats_get(struct rte_eth_dev *dev,
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.link_update                = avf_dev_link_update,
 	.stats_get                  = avf_dev_stats_get,
+	.promiscuous_enable         = avf_dev_promiscuous_enable,
+	.promiscuous_disable        = avf_dev_promiscuous_disable,
+	.allmulticast_enable        = avf_dev_allmulticast_enable,
+	.allmulticast_disable       = avf_dev_allmulticast_disable,
+	.mac_addr_add               = avf_dev_add_mac_addr,
+	.mac_addr_remove            = avf_dev_del_mac_addr,
+	.vlan_filter_set            = avf_dev_vlan_filter_set,
+	.vlan_offload_set           = avf_dev_vlan_offload_set,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -67,6 +89,7 @@ static int avf_dev_stats_get(struct rte_eth_dev *dev,
 	.rx_queue_release           = avf_dev_rx_queue_release,
 	.tx_queue_setup             = avf_dev_tx_queue_setup,
 	.tx_queue_release           = avf_dev_tx_queue_release,
+	.mac_addr_set               = avf_dev_set_default_mac_addr,
 };
 
 static int
@@ -480,6 +503,202 @@ static int avf_dev_stats_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void
+avf_dev_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (vf->promisc_unicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, TRUE, vf->promisc_multicast_enabled);
+	if (!ret)
+		vf->promisc_unicast_enabled = TRUE;
+}
+
+static void
+avf_dev_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (!vf->promisc_unicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, FALSE, vf->promisc_multicast_enabled);
+	if (!ret)
+		vf->promisc_unicast_enabled = FALSE;
+}
+
+static void
+avf_dev_allmulticast_enable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (vf->promisc_multicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, vf->promisc_unicast_enabled, TRUE);
+	if (!ret)
+		vf->promisc_multicast_enabled = TRUE;
+}
+
+static void
+avf_dev_allmulticast_disable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (!vf->promisc_multicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, vf->promisc_unicast_enabled, FALSE);
+	if (!ret)
+		vf->promisc_multicast_enabled = FALSE;
+}
+
+static int
+avf_dev_add_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr,
+		     __rte_unused uint32_t index,
+		     __rte_unused uint32_t pool)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int err;
+
+	if (is_zero_ether_addr(addr)) {
+		PMD_DRV_LOG(ERR, "Invalid Ethernet Address");
+		return -EINVAL;
+	}
+
+	err = avf_add_del_eth_addr(adapter, addr, TRUE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to add MAC address");
+		return -EIO;
+	}
+
+	vf->mac_num++;
+
+	return 0;
+}
+
+static void
+avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct ether_addr *addr;
+	int err;
+
+	addr = &dev->data->mac_addrs[index];
+
+	err = avf_add_del_eth_addr(adapter, addr, FALSE);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to delete MAC address");
+
+	vf->mac_num--;
+}
+
+static int
+avf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int err;
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+		return -ENOTSUP;
+
+	err = avf_add_del_vlan(adapter, vlan_id, on);
+	if (err)
+		return -EIO;
+	return 0;
+}
+
+static int
+avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+	int err = 0;
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+		return -ENOTSUP;
+
+	/* Vlan stripping setting */
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		/* Enable or disable VLAN stripping */
+		if (dev_conf->rxmode.hw_vlan_strip)
+			err = avf_enable_vlan_strip(adapter);
+		else
+			err = avf_disable_vlan_strip(adapter);
+	}
+
+	if (err)
+		return -EIO;
+	return 0;
+}
+
+static void
+avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+			     struct ether_addr *mac_addr)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct ether_addr *perm_addr, *old_addr;
+	int ret;
+
+	old_addr = (struct ether_addr *)hw->mac.addr;
+	perm_addr = (struct ether_addr *)hw->mac.perm_addr;
+
+	if (is_same_ether_addr(mac_addr, old_addr))
+		return;
+
+	/* If the MAC address is configured by host, skip the setting */
+	if (is_valid_assigned_ether_addr(perm_addr))
+		return;
+
+	ret = avf_add_del_eth_addr(adapter, old_addr, FALSE);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to delete old MAC:"
+			    " %02X:%02X:%02X:%02X:%02X:%02X",
+			    old_addr->addr_bytes[0],
+			    old_addr->addr_bytes[1],
+			    old_addr->addr_bytes[2],
+			    old_addr->addr_bytes[3],
+			    old_addr->addr_bytes[4],
+			    old_addr->addr_bytes[5]);
+
+	ret = avf_add_del_eth_addr(adapter, mac_addr, TRUE);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to add new MAC:"
+			    " %02X:%02X:%02X:%02X:%02X:%02X",
+			    mac_addr->addr_bytes[0],
+			    mac_addr->addr_bytes[1],
+			    mac_addr->addr_bytes[2],
+			    mac_addr->addr_bytes[3],
+			    mac_addr->addr_bytes[4],
+			    mac_addr->addr_bytes[5]);
+
+	ether_addr_copy(mac_addr, (struct ether_addr *)hw->mac.addr);
+}
+
 static int
 avf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index e26527f..3b652bf 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -720,3 +720,93 @@
 	*pstats = (struct virtchnl_eth_stats *)args.out_buffer;
 	return 0;
 }
+
+int
+avf_config_promisc(struct avf_adapter *adapter,
+		   bool enable_unicast,
+		   bool enable_multicast)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_promisc_info promisc;
+	struct avf_cmd_info args;
+	int err;
+
+	promisc.flags = 0;
+	promisc.vsi_id = vf->vsi_res->vsi_id;
+
+	if (enable_unicast)
+		promisc.flags |= FLAG_VF_UNICAST_PROMISC;
+
+	if (enable_multicast)
+		promisc.flags |= FLAG_VF_MULTICAST_PROMISC;
+
+	args.ops = VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE;
+	args.in_args = (uint8_t *)&promisc;
+	args.in_args_size = sizeof(promisc);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "fail to execute command CONFIG_PROMISCUOUS_MODE");
+	return err;
+}
+
+int
+avf_add_del_eth_addr(struct avf_adapter *adapter, struct ether_addr *addr,
+		     bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t cmd_buffer[sizeof(struct virtchnl_ether_addr_list) +
+			   sizeof(struct virtchnl_ether_addr)];
+	struct avf_cmd_info args;
+	int err;
+
+	list = (struct virtchnl_ether_addr_list *)cmd_buffer;
+	list->vsi_id = vf->vsi_res->vsi_id;
+	list->num_elements = 1;
+	rte_memcpy(list->list[0].addr, addr->addr_bytes,
+		   sizeof(addr->addr_bytes));
+
+	args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR : VIRTCHNL_OP_DEL_ETH_ADDR;
+	args.in_args = cmd_buffer;
+	args.in_args_size = sizeof(cmd_buffer);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_ETH_ADDR" :  "OP_DEL_ETH_ADDR");
+	return err;
+}
+
+int
+avf_add_del_vlan(struct avf_adapter *adapter, uint16_t vlanid, bool add)
+{
+	struct virtchnl_vlan_filter_list *vlan_list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t cmd_buffer[sizeof(struct virtchnl_vlan_filter_list) +
+							sizeof(uint16_t)];
+	struct avf_cmd_info args;
+	int err;
+
+	vlan_list = (struct virtchnl_vlan_filter_list *)cmd_buffer;
+	vlan_list->vsi_id = vf->vsi_res->vsi_id;
+	vlan_list->num_elements = 1;
+	vlan_list->vlan_id[0] = vlanid;
+
+	args.ops = add ? VIRTCHNL_OP_ADD_VLAN : VIRTCHNL_OP_DEL_VLAN;
+	args.in_args = cmd_buffer;
+	args.in_args_size = sizeof(cmd_buffer);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_VLAN" :  "OP_DEL_VLAN");
+
+	return err;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v5 08/14] net/avf: enable ops for RSS setting
  2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
                         ` (6 preceding siblings ...)
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 07/14] net/avf: enable ops for MAC VLAN offload Wenzhuo Lu
@ 2018-01-08  5:13       ` Wenzhuo Lu
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 09/14] net/avf: enable ops for MTU setting Wenzhuo Lu
                         ` (6 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-08  5:13 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>
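
A hedged sketch of how an application drives the new reta/hash ops; the
queue spread is arbitrary and reta_size must be the value the device
reports (the VF's rss_lut_size):

#include <errno.h>
#include <string.h>
#include <rte_ethdev.h>

/* Illustrative only: spread the whole redirection table over nb_q queues. */
static int
spread_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_q)
{
	struct rte_eth_rss_reta_entry64 reta[8]; /* up to 512 entries */
	uint16_t i;

	if (nb_q == 0 || reta_size > RTE_DIM(reta) * RTE_RETA_GROUP_SIZE)
		return -EINVAL;

	memset(reta, 0, sizeof(reta));
	for (i = 0; i < reta_size; i++) {
		uint16_t idx = i / RTE_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_RETA_GROUP_SIZE;

		reta[idx].mask |= 1ULL << shift;
		reta[idx].reta[shift] = i % nb_q;
	}
	/* ends up in avf_dev_rss_reta_update() below */
	return rte_eth_dev_rss_reta_update(port_id, reta, reta_size);
}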

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |   2 +
 drivers/net/avf/avf_ethdev.c     | 142 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 144 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 1dd6114..61527d7 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -16,6 +16,8 @@ Allmulticast mode    = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
 RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
 VLAN filter          = Y
 CRC offload          = Y
 VLAN offload         = Y
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 1ea6ec6..5a800ff 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -54,6 +54,16 @@ static int avf_dev_add_mac_addr(struct rte_eth_dev *dev,
 static int avf_dev_vlan_filter_set(struct rte_eth_dev *dev,
 				   uint16_t vlan_id, int on);
 static int avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int avf_dev_rss_reta_update(struct rte_eth_dev *dev,
+				   struct rte_eth_rss_reta_entry64 *reta_conf,
+				   uint16_t reta_size);
+static int avf_dev_rss_reta_query(struct rte_eth_dev *dev,
+				  struct rte_eth_rss_reta_entry64 *reta_conf,
+				  uint16_t reta_size);
+static int avf_dev_rss_hash_update(struct rte_eth_dev *dev,
+				   struct rte_eth_rss_conf *rss_conf);
+static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+				     struct rte_eth_rss_conf *rss_conf);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
 
@@ -90,6 +100,10 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.tx_queue_setup             = avf_dev_tx_queue_setup,
 	.tx_queue_release           = avf_dev_tx_queue_release,
 	.mac_addr_set               = avf_dev_set_default_mac_addr,
+	.reta_update                = avf_dev_rss_reta_update,
+	.reta_query                 = avf_dev_rss_reta_query,
+	.rss_hash_update            = avf_dev_rss_hash_update,
+	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
 };
 
 static int
@@ -654,6 +668,134 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+avf_dev_rss_reta_update(struct rte_eth_dev *dev,
+			struct rte_eth_rss_reta_entry64 *reta_conf,
+			uint16_t reta_size)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t *lut;
+	uint16_t i, idx, shift;
+	int ret;
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	if (reta_size != vf->vf_res->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number of hardware can "
+			"support (%d)", reta_size, vf->vf_res->rss_lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc("rss_lut", reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	/* store the old lut table temporarily */
+	rte_memcpy(lut, vf->rss_lut, reta_size);
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			vf->rss_lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	/* send virtchnl ops to configure RSS */
+	ret = avf_configure_rss_lut(adapter);
+	if (ret) /* revert to the old lut on failure */
+		rte_memcpy(vf->rss_lut, lut, reta_size);
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+avf_dev_rss_reta_query(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_reta_entry64 *reta_conf,
+		       uint16_t reta_size)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint16_t i, idx, shift;
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	if (reta_size != vf->vf_res->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number of hardware can "
+			"support (%d)", reta_size, vf->vf_res->rss_lut_size);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vf->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+avf_dev_rss_hash_update(struct rte_eth_dev *dev,
+			struct rte_eth_rss_conf *rss_conf)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	/* HENA setting is enabled by default, no change here */
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		return 0;
+	} else if (rss_conf->rss_key_len != vf->vf_res->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+			"(%d) doesn't match the size of hardware can "
+			"support (%d)", rss_conf->rss_key_len,
+			vf->vf_res->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vf->rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+
+	return avf_configure_rss_key(adapter);
+}
+
+static int
+avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+			  struct rte_eth_rss_conf *rss_conf)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	/* Just set it to default value now. */
+	rss_conf->rss_hf = AVF_RSS_OFFLOAD_ALL;
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	rss_conf->rss_key_len = vf->vf_res->rss_key_size;
+	rte_memcpy(rss_conf->rss_key, vf->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static void
 avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *mac_addr)
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v5 09/14] net/avf: enable ops for MTU setting
  2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
                         ` (7 preceding siblings ...)
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 08/14] net/avf: enable ops for RSS setting Wenzhuo Lu
@ 2018-01-08  5:13       ` Wenzhuo Lu
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 10/14] net/avf: enable ops to check queue info and status Wenzhuo Lu
                         ` (5 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-08  5:13 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>
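
A short, hedged usage note: the op is reached via rte_eth_dev_set_mtu()
and is rejected while the port is running, so a typical (illustrative)
sequence is:

#include <rte_ethdev.h>

/* Illustrative only: the MTU can only be changed on a stopped port. */
static int
change_mtu(uint16_t port_id, uint16_t mtu)
{
	int ret;

	rte_eth_dev_stop(port_id);
	ret = rte_eth_dev_set_mtu(port_id, mtu); /* -> avf_dev_mtu_set() */
	if (rte_eth_dev_start(port_id) != 0)
		return -1; /* restart failed; error handling kept minimal */
	return ret;
}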

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |  1 +
 drivers/net/avf/avf_ethdev.c     | 30 ++++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 61527d7..cf1b246 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -8,6 +8,7 @@ Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
 Queue start/stop     = Y
+MTU update           = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
 TSO                  = Y
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 5a800ff..e4a6f35 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -64,6 +64,7 @@ static int avf_dev_rss_hash_update(struct rte_eth_dev *dev,
 				   struct rte_eth_rss_conf *rss_conf);
 static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 				     struct rte_eth_rss_conf *rss_conf);
+static int avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
 
@@ -104,6 +105,7 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.reta_query                 = avf_dev_rss_reta_query,
 	.rss_hash_update            = avf_dev_rss_hash_update,
 	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
+	.mtu_set                    = avf_dev_mtu_set,
 };
 
 static int
@@ -796,6 +798,34 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	uint32_t frame_size = mtu + AVF_ETH_OVERHEAD;
+	int ret = 0;
+
+	if (mtu < ETHER_MIN_MTU || frame_size > AVF_FRAME_SIZE_MAX)
+		return -EINVAL;
+
+	/* MTU setting is not allowed while the port is running */
+	if (dev->data->dev_started) {
+		PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+		return -EBUSY;
+	}
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
+		dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+	return ret;
+}
+
 static void
 avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *mac_addr)
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v5 10/14] net/avf: enable ops to check queue info and status
  2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
                         ` (8 preceding siblings ...)
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 09/14] net/avf: enable ops for MTU setting Wenzhuo Lu
@ 2018-01-08  5:13       ` Wenzhuo Lu
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 11/14] net/i40e: support AVF basic interface Wenzhuo Lu
                         ` (4 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-08  5:13 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

 - rxq_info_get
 - txq_info_get
 - rx_queue_count
 - rx_descriptor_status
 - tx_descriptor_status
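
As a hedged illustration, these map onto the standard ethdev queries,
e.g.:

#include <stdio.h>
#include <rte_ethdev.h>

/* Illustrative only: check Rx ring occupancy and the state of the next
 * descriptor without receiving anything.
 */
static void
inspect_rx_ring(uint16_t port_id, uint16_t queue_id)
{
	/* -> avf_dev_rxq_count() */
	int used = rte_eth_rx_queue_count(port_id, queue_id);
	/* -> avf_dev_rx_desc_status(); offset 0 is the next descriptor */
	int st = rte_eth_rx_descriptor_status(port_id, queue_id, 0);

	if (st == RTE_ETH_RX_DESC_DONE)
		printf("queue %u: %d descs used, next packet ready\n",
		       queue_id, used);
}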

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |   2 +
 drivers/net/avf/avf_ethdev.c     |   5 ++
 drivers/net/avf/avf_rxtx.c       | 120 +++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h       |   7 +++
 4 files changed, 134 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index cf1b246..da4d81b 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -25,6 +25,8 @@ VLAN offload         = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
 Basic stats          = Y
 Multiprocess aware   = Y
 BSD nic_uio          = Y
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index e4a6f35..e00bb5d 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -105,6 +105,11 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.reta_query                 = avf_dev_rss_reta_query,
 	.rss_hash_update            = avf_dev_rss_hash_update,
 	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
+	.rxq_info_get               = avf_dev_rxq_info_get,
+	.txq_info_get               = avf_dev_txq_info_get,
+	.rx_queue_count             = avf_dev_rxq_count,
+	.rx_descriptor_status       = avf_dev_rx_desc_status,
+	.tx_descriptor_status       = avf_dev_tx_desc_status,
 	.mtu_set                    = avf_dev_mtu_set,
 };
 
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index baccec4..0fea8f9 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -1385,3 +1385,123 @@
 	dev->tx_pkt_burst = avf_xmit_pkts;
 	dev->tx_pkt_prepare = avf_prep_pkts;
 }
+
+void
+avf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		     struct rte_eth_rxq_info *qinfo)
+{
+	struct avf_rx_queue *rxq;
+
+	rxq = dev->data->rx_queues[queue_id];
+
+	qinfo->mp = rxq->mp;
+	qinfo->scattered_rx = dev->data->scattered_rx;
+	qinfo->nb_desc = rxq->nb_rx_desc;
+
+	qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+	qinfo->conf.rx_drop_en = TRUE;
+	qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		     struct rte_eth_txq_info *qinfo)
+{
+	struct avf_tx_queue *txq;
+
+	txq = dev->data->tx_queues[queue_id];
+
+	qinfo->nb_desc = txq->nb_tx_desc;
+
+	qinfo->conf.tx_free_thresh = txq->free_thresh;
+	qinfo->conf.tx_rs_thresh = txq->rs_thresh;
+	qinfo->conf.txq_flags = txq->txq_flags;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
+/* Get the number of used descriptors of a rx queue */
+uint32_t
+avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+#define AVF_RXQ_SCAN_INTERVAL 4
+	volatile union avf_rx_desc *rxdp;
+	struct avf_rx_queue *rxq;
+	uint16_t desc = 0;
+
+	rxq = dev->data->rx_queues[queue_id];
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	while ((desc < rxq->nb_rx_desc) &&
+	       ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		 AVF_RXD_QW1_STATUS_MASK) >> AVF_RXD_QW1_STATUS_SHIFT) &
+	       (1 << AVF_RX_DESC_STATUS_DD_SHIFT)) {
+		/* Check the DD bit of a rx descriptor of each 4 in a group,
+		 * to avoid checking too frequently and downgrading performance
+		 * too much.
+		 */
+		desc += AVF_RXQ_SCAN_INTERVAL;
+		rxdp += AVF_RXQ_SCAN_INTERVAL;
+		if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+			rxdp = &(rxq->rx_ring[rxq->rx_tail +
+					desc - rxq->nb_rx_desc]);
+	}
+
+	return desc;
+}
+
+int
+avf_dev_rx_desc_status(void *rx_queue, uint16_t offset)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	volatile uint64_t *status;
+	uint64_t mask;
+	uint32_t desc;
+
+	if (unlikely(offset >= rxq->nb_rx_desc))
+		return -EINVAL;
+
+	if (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)
+		return RTE_ETH_RX_DESC_UNAVAIL;
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	status = &rxq->rx_ring[desc].wb.qword1.status_error_len;
+	mask = rte_le_to_cpu_64((1ULL << AVF_RX_DESC_STATUS_DD_SHIFT)
+		<< AVF_RXD_QW1_STATUS_SHIFT);
+	if (*status & mask)
+		return RTE_ETH_RX_DESC_DONE;
+
+	return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+avf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
+{
+	struct avf_tx_queue *txq = tx_queue;
+	volatile uint64_t *status;
+	uint64_t mask, expect;
+	uint32_t desc;
+
+	if (unlikely(offset >= txq->nb_tx_desc))
+		return -EINVAL;
+
+	desc = txq->tx_tail + offset;
+	/* go to next desc that has the RS bit */
+	desc = ((desc + txq->rs_thresh - 1) / txq->rs_thresh) *
+		txq->rs_thresh;
+	if (desc >= txq->nb_tx_desc) {
+		desc -= txq->nb_tx_desc;
+		if (desc >= txq->nb_tx_desc)
+			desc -= txq->nb_tx_desc;
+	}
+
+	status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+	mask = rte_le_to_cpu_64(AVF_TXD_QW1_DTYPE_MASK);
+	expect = rte_cpu_to_le_64(
+		 AVF_TX_DESC_DTYPE_DESC_DONE << AVF_TXD_QW1_DTYPE_SHIFT);
+	if ((*status & mask) == expect)
+		return RTE_ETH_TX_DESC_DONE;
+
+	return RTE_ETH_TX_DESC_FULL;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index cad240d..e248f55 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -147,6 +147,13 @@ uint16_t avf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		       uint16_t nb_pkts);
 void avf_set_rx_function(struct rte_eth_dev *dev);
 void avf_set_tx_function(struct rte_eth_dev *dev);
+void avf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			  struct rte_eth_rxq_info *qinfo);
+void avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			  struct rte_eth_txq_info *qinfo);
+uint32_t avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+int avf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
+int avf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v5 11/14] net/i40e: support AVF basic interface
  2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
                         ` (9 preceding siblings ...)
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 10/14] net/avf: enable ops to check queue info and status Wenzhuo Lu
@ 2018-01-08  5:13       ` Wenzhuo Lu
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 12/14] net/avf: enable sse vector Rx Tx func Wenzhuo Lu
                         ` (3 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-08  5:13 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Enable Virtchnl offload Caps negotiation and RSS_PF offload
to support AVF basic interface.
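
In effect, the PF host grants the intersection of what the VF requests
and what it supports. A simplified restatement with made-up flag values
(the real encodings live in virtchnl.h as VIRTCHNL_VF_OFFLOAD_*):

#include <stdint.h>

/* Placeholder capability bits for illustration only. */
#define CAP_L2       0x1u
#define CAP_VLAN     0x2u
#define CAP_RSS_PF   0x4u
#define CAP_POLLING  0x8u
#define HOST_CAPS    (CAP_L2 | CAP_VLAN | CAP_RSS_PF | CAP_POLLING)

/* Mirrors the negotiation rule added to
 * i40e_pf_host_process_cmd_get_vf_resource() below.
 */
static uint32_t
negotiate_caps(uint32_t vf_request_caps)
{
	return vf_request_caps & HOST_CAPS;
}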

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c |  69 ++++++++++++++++----
 drivers/net/i40e/i40e_ethdev.h |   5 ++
 drivers/net/i40e/i40e_pf.c     | 140 +++++++++++++++++++++++++++++++++++++----
 drivers/net/i40e/i40e_pf.h     |   6 ++
 4 files changed, 195 insertions(+), 25 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 285d92b..10bb4eb 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3649,6 +3649,7 @@ static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint32_t reg;
 	int ret;
 
 	if (!lut)
@@ -3665,14 +3666,22 @@ static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 		uint32_t *lut_dw = (uint32_t *)lut;
 		uint16_t i, lut_size_dw = lut_size / 4;
 
-		for (i = 0; i < lut_size_dw; i++)
-			lut_dw[i] = I40E_READ_REG(hw, I40E_PFQF_HLUT(i));
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i < lut_size_dw; i++) {
+				reg = I40E_VFQF_HLUT1(i, vsi->user_param);
+				lut_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+		} else {
+			for (i = 0; i < lut_size_dw; i++)
+				lut_dw[i] = I40E_READ_REG(hw,
+							  I40E_PFQF_HLUT(i));
+		}
 	}
 
 	return 0;
 }
 
-static int
+int
 i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 {
 	struct i40e_pf *pf;
@@ -3696,8 +3705,17 @@ static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 		uint32_t *lut_dw = (uint32_t *)lut;
 		uint16_t i, lut_size_dw = lut_size / 4;
 
-		for (i = 0; i < lut_size_dw; i++)
-			I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i), lut_dw[i]);
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i < lut_size_dw; i++)
+				I40E_WRITE_REG(
+					hw,
+					I40E_VFQF_HLUT1(i, vsi->user_param),
+					lut_dw[i]);
+		} else {
+			for (i = 0; i < lut_size_dw; i++)
+				I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i),
+					       lut_dw[i]);
+		}
 		I40E_WRITE_FLUSH(hw);
 	}
 
@@ -6669,17 +6687,20 @@ struct i40e_vsi *
 	I40E_WRITE_FLUSH(hw);
 }
 
-static int
+int
 i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len)
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint16_t key_idx = (vsi->type == I40E_VSI_SRIOV) ?
+			   I40E_VFQF_HKEY_MAX_INDEX :
+			   I40E_PFQF_HKEY_MAX_INDEX;
 	int ret = 0;
 
 	if (!key || key_len == 0) {
 		PMD_DRV_LOG(DEBUG, "No key to be configured");
 		return 0;
-	} else if (key_len != (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+	} else if (key_len != (key_idx + 1) *
 		sizeof(uint32_t)) {
 		PMD_DRV_LOG(ERR, "Invalid key length %u", key_len);
 		return -EINVAL;
@@ -6696,8 +6717,18 @@ struct i40e_vsi *
 		uint32_t *hash_key = (uint32_t *)key;
 		uint16_t i;
 
-		for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
-			i40e_write_rx_ctl(hw, I40E_PFQF_HKEY(i), hash_key[i]);
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++)
+				I40E_WRITE_REG(
+					hw,
+					I40E_VFQF_HKEY1(i, vsi->user_param),
+					hash_key[i]);
+
+		} else {
+			for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
+				I40E_WRITE_REG(hw, I40E_PFQF_HKEY(i),
+					       hash_key[i]);
+		}
 		I40E_WRITE_FLUSH(hw);
 	}
 
@@ -6709,6 +6740,7 @@ struct i40e_vsi *
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint32_t reg;
 	int ret;
 
 	if (!key || !key_len)
@@ -6725,11 +6757,22 @@ struct i40e_vsi *
 		uint32_t *key_dw = (uint32_t *)key;
 		uint16_t i;
 
-		for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
-			key_dw[i] = i40e_read_rx_ctl(hw, I40E_PFQF_HKEY(i));
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++) {
+				reg = I40E_VFQF_HKEY1(i, vsi->user_param);
+				key_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+			*key_len = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
+				   sizeof(uint32_t);
+		} else {
+			for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++) {
+				reg = I40E_PFQF_HKEY(i);
+				key_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+			*key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+				   sizeof(uint32_t);
+		}
 	}
-	*key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
-
 	return 0;
 }
 
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index f2b4b70..de2797e 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -397,6 +397,9 @@ struct i40e_pf_vf {
 	uint16_t lan_nb_qps; /* Actual queues allocated */
 	uint16_t reset_cnt; /* Total vf reset times */
 	struct ether_addr mac_addr;  /* Default MAC address */
+	/* version of the virtchnl from VF */
+	struct virtchnl_version_info version;
+	uint32_t request_caps; /* offload caps requested from VF */
 };
 
 /*
@@ -1169,6 +1172,8 @@ void i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
 int i40e_flush_queue_region_all_conf(struct rte_eth_dev *dev,
 		struct i40e_hw *hw, struct i40e_pf *pf, uint16_t on);
 void i40e_init_queue_region_conf(struct rte_eth_dev *dev);
+int i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len);
+int i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size);
 
 #define I40E_DEV_TO_PCI(eth_dev) \
 	RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index 1bca250..7508444 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -244,19 +244,23 @@
 }
 
 static void
-i40e_pf_host_process_cmd_version(struct i40e_pf_vf *vf, bool b_op)
+i40e_pf_host_process_cmd_version(struct i40e_pf_vf *vf, uint8_t *msg,
+				 bool b_op)
 {
 	struct virtchnl_version_info info;
 
-	/* Respond like a Linux PF host in order to support both DPDK VF and
-	 * Linux VF driver. The expense is original DPDK host specific feature
+	/* VF and PF drivers need to follow the virtchnl definition, no matter
+	 * whether it's a DPDK or a kernel driver.
+	 * The original DPDK host specific feature
 	 * like CFG_VLAN_PVID and CONFIG_VSI_QUEUES_EXT will not available.
-	 *
-	 * DPDK VF also can't identify host driver by version number returned.
-	 * It always assume talking with Linux PF.
 	 */
+
 	info.major = VIRTCHNL_VERSION_MAJOR;
-	info.minor = VIRTCHNL_VERSION_MINOR_NO_VF_CAPS;
+	vf->version = *(struct virtchnl_version_info *)msg;
+	if (VF_IS_V10(&vf->version))
+		info.minor = VIRTCHNL_VERSION_MINOR_NO_VF_CAPS;
+	else
+		info.minor = VIRTCHNL_VERSION_MINOR;
 
 	if (b_op)
 		i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_VERSION,
@@ -280,11 +284,13 @@
 }
 
 static int
-i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, bool b_op)
+i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
+					 bool b_op)
 {
 	struct virtchnl_vf_resource *vf_res = NULL;
 	struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
 	uint32_t len = 0;
+	uint64_t default_hena = I40E_RSS_HENA_ALL;
 	int ret = I40E_SUCCESS;
 
 	if (!b_op) {
@@ -308,11 +314,35 @@
 		goto send_msg;
 	}
 
-	vf_res->vf_offload_flags = VIRTCHNL_VF_OFFLOAD_L2 |
-				VIRTCHNL_VF_OFFLOAD_VLAN;
+	if (VF_IS_V10(&vf->version)) /* doesn't support offload negotiation */
+		vf->request_caps = VIRTCHNL_VF_OFFLOAD_L2 |
+				   VIRTCHNL_VF_OFFLOAD_VLAN;
+	else
+		vf->request_caps = *(uint32_t *)msg;
+
+	/* enable all RSS by default,
+	 * doesn't support hena setting by virtchnl yet.
+	 */
+	if (vf->request_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		I40E_WRITE_REG(hw, I40E_VFQF_HENA1(0, vf->vf_idx),
+			       (uint32_t)default_hena);
+		I40E_WRITE_REG(hw, I40E_VFQF_HENA1(1, vf->vf_idx),
+			       (uint32_t)(default_hena >> 32));
+		I40E_WRITE_FLUSH(hw);
+	}
+
+	vf_res->vf_offload_flags = vf->request_caps &
+				   I40E_VIRTCHNL_OFFLOAD_CAPS;
+	/* For X722, it supports write back on ITR
+	 * without binding queue to interrupt vector.
+	 */
+	if (hw->mac.type == I40E_MAC_X722)
+		vf_res->vf_offload_flags |= VIRTCHNL_VF_OFFLOAD_WB_ON_ITR;
 	vf_res->max_vectors = hw->func_caps.num_msix_vectors_vf;
 	vf_res->num_queue_pairs = vf->vsi->nb_qps;
 	vf_res->num_vsis = I40E_DEFAULT_VF_VSI_NUM;
+	vf_res->rss_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) * 4;
+	vf_res->rss_lut_size = (I40E_VFQF_HLUT1_MAX_INDEX + 1) * 4;
 
 	/* Change below setting if PF host can support more VSIs for VF */
 	vf_res->vsi_res[0].vsi_type = VIRTCHNL_VSI_SRIOV;
@@ -1061,6 +1091,84 @@
 	return ret;
 }
 
+static int
+i40e_pf_host_process_cmd_set_rss_lut(struct i40e_pf_vf *vf,
+				     uint8_t *msg,
+				     uint16_t msglen,
+				     bool b_op)
+{
+	struct virtchnl_rss_lut *rss_lut = (struct virtchnl_rss_lut *)msg;
+	uint16_t valid_len;
+	int ret = I40E_SUCCESS;
+
+	if (!b_op) {
+		i40e_pf_host_send_msg_to_vf(
+			vf,
+			VIRTCHNL_OP_CONFIG_RSS_LUT,
+			I40E_NOT_SUPPORTED, NULL, 0);
+		return ret;
+	}
+
+	if (!msg || msglen <= sizeof(struct virtchnl_rss_lut)) {
+		PMD_DRV_LOG(ERR, "set_rss_lut argument too short");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+	valid_len = sizeof(struct virtchnl_rss_lut) + rss_lut->lut_entries - 1;
+	if (msglen < valid_len) {
+		PMD_DRV_LOG(ERR, "set_rss_lut length mismatch");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+	ret = i40e_set_rss_lut(vf->vsi, rss_lut->lut, rss_lut->lut_entries);
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_LUT,
+				    ret, NULL, 0);
+
+	return ret;
+}
+
+static int
+i40e_pf_host_process_cmd_set_rss_key(struct i40e_pf_vf *vf,
+				     uint8_t *msg,
+				     uint16_t msglen,
+				     bool b_op)
+{
+	struct virtchnl_rss_key *rss_key = (struct virtchnl_rss_key *)msg;
+	uint16_t valid_len;
+	int ret = I40E_SUCCESS;
+
+	if (!b_op) {
+		i40e_pf_host_send_msg_to_vf(
+			vf,
+			VIRTCHNL_OP_CONFIG_RSS_KEY,
+			I40E_NOT_SUPPORTED, NULL, 0);
+		return ret;
+	}
+
+	if (!msg || msglen <= sizeof(struct virtchnl_rss_key)) {
+		PMD_DRV_LOG(ERR, "set_rss_key argument too short");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+	valid_len = sizeof(struct virtchnl_rss_key) + rss_key->key_len - 1;
+	if (msglen < valid_len) {
+		PMD_DRV_LOG(ERR, "set_rss_key length mismatch");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+	ret = i40e_set_rss_key(vf->vsi, rss_key->key, rss_key->key_len);
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_KEY,
+				    ret, NULL, 0);
+
+	return ret;
+}
+
 void
 i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
 {
@@ -1167,7 +1275,7 @@
 	switch (opcode) {
 	case VIRTCHNL_OP_VERSION:
 		PMD_DRV_LOG(INFO, "OP_VERSION received");
-		i40e_pf_host_process_cmd_version(vf, b_op);
+		i40e_pf_host_process_cmd_version(vf, msg, b_op);
 		break;
 	case VIRTCHNL_OP_RESET_VF:
 		PMD_DRV_LOG(INFO, "OP_RESET_VF received");
@@ -1175,7 +1283,7 @@
 		break;
 	case VIRTCHNL_OP_GET_VF_RESOURCES:
 		PMD_DRV_LOG(INFO, "OP_GET_VF_RESOURCES received");
-		i40e_pf_host_process_cmd_get_vf_resource(vf, b_op);
+		i40e_pf_host_process_cmd_get_vf_resource(vf, msg, b_op);
 		break;
 	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
 		PMD_DRV_LOG(INFO, "OP_CONFIG_VSI_QUEUES received");
@@ -1236,6 +1344,14 @@
 		PMD_DRV_LOG(INFO, "OP_DISABLE_VLAN_STRIPPING received");
 		i40e_pf_host_process_cmd_disable_vlan_strip(vf, b_op);
 		break;
+	case VIRTCHNL_OP_CONFIG_RSS_LUT:
+		PMD_DRV_LOG(INFO, "OP_CONFIG_RSS_LUT received");
+		i40e_pf_host_process_cmd_set_rss_lut(vf, msg, msglen, b_op);
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_KEY:
+		PMD_DRV_LOG(INFO, "OP_CONFIG_RSS_KEY received");
+		i40e_pf_host_process_cmd_set_rss_key(vf, msg, msglen, b_op);
+		break;
 	/* Don't add command supported below, which will
 	 * return an error code.
 	 */
diff --git a/drivers/net/i40e/i40e_pf.h b/drivers/net/i40e/i40e_pf.h
index 429f347..1809ba4 100644
--- a/drivers/net/i40e/i40e_pf.h
+++ b/drivers/net/i40e/i40e_pf.h
@@ -8,6 +8,12 @@
 /* Default setting on number of VSIs that VF can contain */
 #define I40E_DEFAULT_VF_VSI_NUM 1
 
+#define I40E_VIRTCHNL_OFFLOAD_CAPS ( \
+	VIRTCHNL_VF_OFFLOAD_L2 | \
+	VIRTCHNL_VF_OFFLOAD_VLAN | \
+	VIRTCHNL_VF_OFFLOAD_RSS_PF | \
+	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
+
 struct virtchnl_vlan_offload_info {
 	uint16_t vsi_id;
 	uint8_t enable_vlan_strip;
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v5 12/14] net/avf: enable sse vector Rx Tx func
  2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
                         ` (10 preceding siblings ...)
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 11/14] net/i40e: support AVF basic interface Wenzhuo Lu
@ 2018-01-08  5:13       ` Wenzhuo Lu
  2018-01-09 17:58         ` Ferruh Yigit
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 13/14] net/avf: enable bulk allocate Rx func Wenzhuo Lu
                         ` (2 subsequent siblings)
  14 siblings, 1 reply; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-08  5:13 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>
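
As a hedged note, the vector Rx/Tx paths are only selected when every
queue meets the preconditions checked in check_rx_vec_allow() and
check_tx_vec_allow() below (rx_free_thresh >= 32 and a ring size that is
a multiple of it; simple Tx flags with rs_thresh in [32, 64]). An
illustrative queue setup that qualifies:

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Illustrative only: configure queue 0 so the AVF vector paths are allowed. */
static int
setup_vec_friendly_queues(uint16_t port_id, struct rte_mempool *mp)
{
	struct rte_eth_rxconf rxconf = { .rx_free_thresh = 32 };
	struct rte_eth_txconf txconf = {
		.tx_rs_thresh = 32,
		.tx_free_thresh = 32,
		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS | ETH_TXQ_FLAGS_NOOFFLOADS,
	};
	int ret;

	/* 512 descriptors is a multiple of rx_free_thresh (32) */
	ret = rte_eth_rx_queue_setup(port_id, 0, 512, SOCKET_ID_ANY,
				     &rxconf, mp);
	if (ret < 0)
		return ret;
	return rte_eth_tx_queue_setup(port_id, 0, 512, SOCKET_ID_ANY, &txconf);
}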

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 config/common_base                    |   1 +
 doc/guides/nics/features/avf_vec.ini  |  36 ++
 drivers/net/avf/Makefile              |   1 +
 drivers/net/avf/avf.h                 |   4 +
 drivers/net/avf/avf_ethdev.c          |  11 +
 drivers/net/avf/avf_rxtx.c            | 172 ++++++++-
 drivers/net/avf/avf_rxtx.h            |  36 +-
 drivers/net/avf/avf_rxtx_vec_common.h | 210 +++++++++++
 drivers/net/avf/avf_rxtx_vec_sse.c    | 656 ++++++++++++++++++++++++++++++++++
 9 files changed, 1116 insertions(+), 11 deletions(-)
 create mode 100644 doc/guides/nics/features/avf_vec.ini
 create mode 100644 drivers/net/avf/avf_rxtx_vec_common.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_sse.c

diff --git a/config/common_base b/config/common_base
index b1f1c1c..f9363ff 100644
--- a/config/common_base
+++ b/config/common_base
@@ -229,6 +229,7 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 # Compile burst-oriented AVF PMD driver
 #
 CONFIG_RTE_LIBRTE_AVF_PMD=y
+CONFIG_RTE_LIBRTE_AVF_INC_VECTOR=y
 CONFIG_RTE_LIBRTE_AVF_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_AVF_DEBUG_TX_FREE=n
 CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=n
diff --git a/doc/guides/nics/features/avf_vec.ini b/doc/guides/nics/features/avf_vec.ini
new file mode 100644
index 0000000..45dd5e5
--- /dev/null
+++ b/doc/guides/nics/features/avf_vec.ini
@@ -0,0 +1,36 @@
+;
+; Supported features of the 'avf_vec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
+Queue start/stop     = Y
+MTU update           = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+TSO                  = Y
+Promiscuous mode     = Y
+Allmulticast mode    = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
+RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
+VLAN filter          = Y
+CRC offload          = Y
+VLAN offload         = P
+L3 checksum offload  = P
+L4 checksum offload  = P
+Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
+Basic stats          = Y
+Multiprocess aware   = Y
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index 1a673fa..14fa38a 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -31,5 +31,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_INC_VECTOR) += avf_rxtx_vec_sse.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index ea48310..b79bc5a 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -119,6 +119,10 @@ struct avf_adapter {
 	struct avf_hw hw;
 	struct rte_eth_dev *eth_dev;
 	struct avf_info vf;
+
+	/* For vector PMD */
+	bool rx_vec_allowed;
+	bool tx_vec_allowed;
 };
 
 /* AVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index e00bb5d..127fdb5 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -121,6 +121,17 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(ad);
 	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	/* Initialize to TRUE. If any of Rx queues doesn't meet the
+	 * vector Rx/Tx preconditions, it will be reset.
+	 */
+	ad->rx_vec_allowed = true;
+	ad->tx_vec_allowed = true;
+#else
+	ad->rx_vec_allowed = false;
+	ad->tx_vec_allowed = false;
+#endif
+
 	/* Vlan stripping setting */
 	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) {
 		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index 0fea8f9..b542532 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -92,6 +92,34 @@
 	return 0;
 }
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+static inline bool
+check_rx_vec_allow(struct avf_rx_queue *rxq)
+{
+	if (rxq->rx_free_thresh >= AVF_VPMD_RX_MAX_BURST &&
+	    rxq->nb_rx_desc % rxq->rx_free_thresh == 0) {
+		PMD_INIT_LOG(DEBUG, "Vector Rx can be enabled on this rxq.");
+		return TRUE;
+	}
+
+	PMD_INIT_LOG(DEBUG, "Vector Rx cannot be enabled on this rxq.");
+	return FALSE;
+}
+
+static inline bool
+check_tx_vec_allow(struct avf_tx_queue *txq)
+{
+	if ((txq->txq_flags & AVF_SIMPLE_FLAGS) == AVF_SIMPLE_FLAGS &&
+	    txq->rs_thresh >= AVF_VPMD_TX_MAX_BURST &&
+	    txq->rs_thresh <= AVF_VPMD_TX_MAX_FREE_BUF) {
+		PMD_INIT_LOG(DEBUG, "Vector tx can be enabled on this txq.");
+		return TRUE;
+	}
+	PMD_INIT_LOG(DEBUG, "Vector Tx cannot be enabled on this txq.");
+	return FALSE;
+}
+#endif
+
 static inline void
 reset_rx_queue(struct avf_rx_queue *rxq)
 {
@@ -225,6 +253,14 @@
 	}
 }
 
+static const struct avf_rxq_ops def_rxq_ops = {
+	.release_mbufs = release_rxq_mbufs,
+};
+
+static const struct avf_txq_ops def_txq_ops = {
+	.release_mbufs = release_txq_mbufs,
+};
+
 int
 avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		       uint16_t nb_desc, unsigned int socket_id,
@@ -325,7 +361,12 @@
 	rxq->q_set = TRUE;
 	dev->data->rx_queues[queue_idx] = rxq;
 	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
+	rxq->ops = &def_rxq_ops;
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	if (check_rx_vec_allow(rxq) == FALSE)
+		ad->rx_vec_allowed = false;
+#endif
 	return 0;
 }
 
@@ -337,6 +378,8 @@
 		       const struct rte_eth_txconf *tx_conf)
 {
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct avf_tx_queue *txq;
 	const struct rte_memzone *mz;
 	uint32_t ring_size;
@@ -416,6 +459,12 @@
 	txq->q_set = TRUE;
 	dev->data->tx_queues[queue_idx] = txq;
 	txq->qtx_tail = hw->hw_addr + AVF_QTX_TAIL1(queue_idx);
+	txq->ops = &def_txq_ops;
+
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	if (check_tx_vec_allow(txq) == FALSE)
+		ad->tx_vec_allowed = false;
+#endif
 
 	return 0;
 }
@@ -514,7 +563,7 @@
 	}
 
 	rxq = dev->data->rx_queues[rx_queue_id];
-	release_rxq_mbufs(rxq);
+	rxq->ops->release_mbufs(rxq);
 	reset_rx_queue(rxq);
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -542,7 +591,7 @@
 	}
 
 	txq = dev->data->tx_queues[tx_queue_id];
-	release_txq_mbufs(txq);
+	txq->ops->release_mbufs(txq);
 	reset_tx_queue(txq);
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -557,7 +606,7 @@
 	if (!q)
 		return;
 
-	release_rxq_mbufs(q);
+	q->ops->release_mbufs(q);
 	rte_free(q->sw_ring);
 	rte_memzone_free(q->mz);
 	rte_free(q);
@@ -571,7 +620,7 @@
 	if (!q)
 		return;
 
-	release_txq_mbufs(q);
+	q->ops->release_mbufs(q);
 	rte_free(q->sw_ring);
 	rte_memzone_free(q->mz);
 	rte_free(q);
@@ -595,7 +644,7 @@
 		txq = dev->data->tx_queues[i];
 		if (!txq)
 			continue;
-		release_txq_mbufs(txq);
+		txq->ops->release_mbufs(txq);
 		reset_tx_queue(txq);
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
@@ -603,7 +652,7 @@
 		rxq = dev->data->rx_queues[i];
 		if (!rxq)
 			continue;
-		release_rxq_mbufs(rxq);
+		rxq->ops->release_mbufs(rxq);
 		reset_rx_queue(rxq);
 		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
@@ -1320,6 +1369,27 @@
 	return nb_tx;
 }
 
+static uint16_t
+avf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+		  uint16_t nb_pkts)
+{
+	uint16_t nb_tx = 0;
+	struct avf_tx_queue *txq = (struct avf_tx_queue *)tx_queue;
+
+	while (nb_pkts) {
+		uint16_t ret, num;
+
+		num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+		ret = avf_xmit_fixed_burst_vec(tx_queue, &tx_pkts[nb_tx], num);
+		nb_tx += ret;
+		nb_pkts -= ret;
+		if (ret < num)
+			break;
+	}
+
+	return nb_tx;
+}
+
 /* TX prep functions */
 uint16_t
 avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
@@ -1372,18 +1442,64 @@
 void
 avf_set_rx_function(struct rte_eth_dev *dev)
 {
-	if (dev->data->scattered_rx)
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int i;
+
+	if (adapter->rx_vec_allowed) {
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(DEBUG, "Using Vector Scattered Rx callback"
+				    " (port=%d).", dev->data->port_id);
+			dev->rx_pkt_burst = avf_recv_scattered_pkts_vec;
+		} else {
+			PMD_DRV_LOG(DEBUG, "Using Vector Rx callback"
+				    " (port=%d).", dev->data->port_id);
+			dev->rx_pkt_burst = avf_recv_pkts_vec;
+		}
+		for (i = 0; i < dev->data->nb_rx_queues; i++) {
+			rxq = dev->data->rx_queues[i];
+			if (!rxq)
+				continue;
+			avf_rxq_vec_setup(rxq);
+		}
+	} else if (dev->data->scattered_rx) {
+		PMD_DRV_LOG(DEBUG, "Using a Scattered Rx callback (port=%d).",
+			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_scattered_pkts;
-	else
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using Basic Rx callback (port=%d).",
+			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_pkts;
+	}
 }
 
 /* choose tx function*/
 void
 avf_set_tx_function(struct rte_eth_dev *dev)
 {
-	dev->tx_pkt_burst = avf_xmit_pkts;
-	dev->tx_pkt_prepare = avf_prep_pkts;
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int i;
+
+	if (adapter->tx_vec_allowed) {
+		PMD_DRV_LOG(DEBUG, "Using Vector Tx callback (port=%d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = avf_xmit_pkts_vec;
+		dev->tx_pkt_prepare = NULL;
+		for (i = 0; i < dev->data->nb_tx_queues; i++) {
+			txq = dev->data->tx_queues[i];
+			if (!txq)
+				continue;
+			avf_txq_vec_setup(txq);
+		}
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using Basic Tx callback (port=%d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = avf_xmit_pkts;
+		dev->tx_pkt_prepare = avf_prep_pkts;
+	}
 }
 
 void
@@ -1505,3 +1621,39 @@
 
 	return RTE_ETH_TX_DESC_FULL;
 }
+
+uint16_t __attribute__((weak))
+avf_recv_pkts_vec(__rte_unused void *rx_queue,
+		  __rte_unused struct rte_mbuf **rx_pkts,
+		  __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+uint16_t __attribute__((weak))
+avf_recv_scattered_pkts_vec(__rte_unused void *rx_queue,
+			    __rte_unused struct rte_mbuf **rx_pkts,
+			    __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+uint16_t __attribute__((weak))
+avf_xmit_fixed_burst_vec(__rte_unused void *tx_queue,
+			 __rte_unused struct rte_mbuf **tx_pkts,
+			 __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+int __attribute__((weak))
+avf_rxq_vec_setup(__rte_unused struct avf_rx_queue *rxq)
+{
+	return -1;
+}
+
+int __attribute__((weak))
+avf_txq_vec_setup(__rte_unused struct avf_tx_queue *txq)
+{
+	return -1;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index e248f55..82fd801 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -16,6 +16,15 @@
 /* used for Rx Bulk Allocate */
 #define AVF_RX_MAX_BURST         32
 
+/* used for Vector PMD */
+#define AVF_VPMD_RX_MAX_BURST    32
+#define AVF_VPMD_TX_MAX_BURST    32
+#define AVF_VPMD_DESCS_PER_LOOP  4
+#define AVF_VPMD_TX_MAX_FREE_BUF 64
+
+#define AVF_SIMPLE_FLAGS ((uint32_t)ETH_TXQ_FLAGS_NOMULTSEGS | \
+			  ETH_TXQ_FLAGS_NOOFFLOADS)
+
 #define DEFAULT_TX_RS_THRESH     32
 #define DEFAULT_TX_FREE_THRESH   32
 
@@ -45,6 +54,14 @@
 #define avf_rx_desc avf_32byte_rx_desc
 #endif
 
+struct avf_rxq_ops {
+	void (*release_mbufs)(struct avf_rx_queue *rxq);
+};
+
+struct avf_txq_ops {
+	void (*release_mbufs)(struct avf_tx_queue *txq);
+};
+
 /* Structure associated with each Rx queue. */
 struct avf_rx_queue {
 	struct rte_mempool *mp;       /* mbuf pool to populate Rx ring */
@@ -61,7 +78,12 @@ struct avf_rx_queue {
 	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
 	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
 
-	uint16_t port_id;       /* device port ID */
+	/* used for VPMD */
+	uint16_t rxrearm_nb;       /* number of remaining to be re-armed */
+	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
+	uint64_t mbuf_initializer; /* value to init mbufs */
+
+	uint16_t port_id;        /* device port ID */
 	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
 	uint16_t queue_id;      /* Rx queue index */
 	uint16_t rx_buf_len;    /* The packet buffer size */
@@ -70,6 +92,7 @@ struct avf_rx_queue {
 
 	bool q_set;             /* if rx queue has been configured */
 	bool rx_deferred_start; /* don't start this queue in dev start */
+	const struct avf_rxq_ops *ops;
 };
 
 struct avf_tx_entry {
@@ -102,6 +125,7 @@ struct avf_tx_queue {
 
 	bool q_set;                    /* if rx queue has been configured */
 	bool tx_deferred_start;        /* don't start this queue in dev start */
+	const struct avf_txq_ops *ops;
 };
 
 /* Offload features */
@@ -155,6 +179,16 @@ void avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 int avf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
 int avf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
+uint16_t avf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			   uint16_t nb_pkts);
+uint16_t avf_recv_scattered_pkts_vec(void *rx_queue,
+				     struct rte_mbuf **rx_pkts,
+				     uint16_t nb_pkts);
+uint16_t avf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+				  uint16_t nb_pkts);
+int avf_rxq_vec_setup(struct avf_rx_queue *rxq);
+int avf_txq_vec_setup(struct avf_tx_queue *txq);
+
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
 			    const void *desc,
diff --git a/drivers/net/avf/avf_rxtx_vec_common.h b/drivers/net/avf/avf_rxtx_vec_common.h
new file mode 100644
index 0000000..56a23a7
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx_vec_common.h
@@ -0,0 +1,210 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_RXTX_VEC_COMMON_H_
+#define _AVF_RXTX_VEC_COMMON_H_
+#include <stdint.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+
+#include "avf.h"
+#include "avf_rxtx.h"
+
+static inline uint16_t
+reassemble_packets(struct avf_rx_queue *rxq, struct rte_mbuf **rx_bufs,
+		   uint16_t nb_bufs, uint8_t *split_flags)
+{
+	struct rte_mbuf *pkts[AVF_VPMD_RX_MAX_BURST];
+	struct rte_mbuf *start = rxq->pkt_first_seg;
+	struct rte_mbuf *end =  rxq->pkt_last_seg;
+	unsigned int pkt_idx, buf_idx;
+
+	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
+		if (end) {
+			/* processing a split packet */
+			end->next = rx_bufs[buf_idx];
+			rx_bufs[buf_idx]->data_len += rxq->crc_len;
+
+			start->nb_segs++;
+			start->pkt_len += rx_bufs[buf_idx]->data_len;
+			end = end->next;
+
+			if (!split_flags[buf_idx]) {
+				/* it's the last packet of the set */
+				start->hash = end->hash;
+				start->ol_flags = end->ol_flags;
+				/* we need to strip crc for the whole packet */
+				start->pkt_len -= rxq->crc_len;
+				if (end->data_len > rxq->crc_len) {
+					end->data_len -= rxq->crc_len;
+				} else {
+					/* free up last mbuf */
+					struct rte_mbuf *secondlast = start;
+
+					start->nb_segs--;
+					while (secondlast->next != end)
+						secondlast = secondlast->next;
+					secondlast->data_len -= (rxq->crc_len -
+							end->data_len);
+					secondlast->next = NULL;
+					rte_pktmbuf_free_seg(end);
+				}
+				pkts[pkt_idx++] = start;
+				start = NULL;
+				end = NULL;
+			}
+		} else {
+			/* not processing a split packet */
+			if (!split_flags[buf_idx]) {
+				/* not a split packet, save and skip */
+				pkts[pkt_idx++] = rx_bufs[buf_idx];
+				continue;
+			}
+			end = start = rx_bufs[buf_idx];
+			rx_bufs[buf_idx]->data_len += rxq->crc_len;
+			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
+		}
+	}
+
+	/* save the partial packet for next time */
+	rxq->pkt_first_seg = start;
+	rxq->pkt_last_seg = end;
+	memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
+	return pkt_idx;
+}
+
+static __rte_always_inline int
+avf_tx_free_bufs(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *txep;
+	uint32_t n;
+	uint32_t i;
+	int nb_free = 0;
+	struct rte_mbuf *m, *free[AVF_VPMD_TX_MAX_FREE_BUF];
+
+	/* check DD bits on threshold descriptor */
+	if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK)) !=
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	n = txq->rs_thresh;
+
+	 /* first buffer to free from S/W ring is at index
+	  * tx_next_dd - (tx_rs_thresh-1)
+	  */
+	txep = &txq->sw_ring[txq->next_dd - (n - 1)];
+	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
+	if (likely(m != NULL)) {
+		free[0] = m;
+		nb_free = 1;
+		for (i = 1; i < n; i++) {
+			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+			if (likely(m != NULL)) {
+				if (likely(m->pool == free[0]->pool)) {
+					free[nb_free++] = m;
+				} else {
+					rte_mempool_put_bulk(free[0]->pool,
+							     (void *)free,
+							     nb_free);
+					free[0] = m;
+					nb_free = 1;
+				}
+			}
+		}
+		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+	} else {
+		for (i = 1; i < n; i++) {
+			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+			if (m)
+				rte_mempool_put(m->pool, m);
+		}
+	}
+
+	/* buffers were freed, update counters */
+	txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh);
+	txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh);
+	if (txq->next_dd >= txq->nb_tx_desc)
+		txq->next_dd = (uint16_t)(txq->rs_thresh - 1);
+
+	return txq->rs_thresh;
+}
+
+static __rte_always_inline void
+tx_backlog_entry(struct avf_tx_entry *txep,
+		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	int i;
+
+	for (i = 0; i < (int)nb_pkts; ++i)
+		txep[i].mbuf = tx_pkts[i];
+}
+
+static inline void
+_avf_rx_queue_release_mbufs_vec(struct avf_rx_queue *rxq)
+{
+	const unsigned int mask = rxq->nb_rx_desc - 1;
+	unsigned int i;
+
+	if (!rxq->sw_ring || rxq->rxrearm_nb >= rxq->nb_rx_desc)
+		return;
+
+	/* free all mbufs that are valid in the ring */
+	if (rxq->rxrearm_nb == 0) {
+		for (i = 0; i < rxq->nb_rx_desc; i++) {
+			if (rxq->sw_ring[i])
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	} else {
+		for (i = rxq->rx_tail;
+		     i != rxq->rxrearm_start;
+		     i = (i + 1) & mask) {
+			if (rxq->sw_ring[i])
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	}
+
+	rxq->rxrearm_nb = rxq->nb_rx_desc;
+
+	/* set all entries to NULL */
+	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
+}
+
+static inline void
+_avf_tx_queue_release_mbufs_vec(struct avf_tx_queue *txq)
+{
+	unsigned i;
+	const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
+
+	if (!txq->sw_ring || txq->nb_free == max_desc)
+		return;
+
+	i = txq->next_dd - txq->rs_thresh + 1;
+	if (txq->tx_tail < i) {
+		for (; i < txq->nb_tx_desc; i++) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+		i = 0;
+	}
+}
+
+static inline int
+avf_rxq_vec_setup_default(struct avf_rx_queue *rxq)
+{
+	uintptr_t p;
+	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
+
+	mb_def.nb_segs = 1;
+	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
+	mb_def.port = rxq->port_id;
+	rte_mbuf_refcnt_set(&mb_def, 1);
+
+	/* prevent compiler reordering: rearm_data covers previous fields */
+	rte_compiler_barrier();
+	p = (uintptr_t)&mb_def.rearm_data;
+	rxq->mbuf_initializer = *(uint64_t *)p;
+	return 0;
+}
+#endif
diff --git a/drivers/net/avf/avf_rxtx_vec_sse.c b/drivers/net/avf/avf_rxtx_vec_sse.c
new file mode 100644
index 0000000..8f389f3
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx_vec_sse.c
@@ -0,0 +1,656 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdint.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+
+#include "base/avf_prototype.h"
+#include "base/avf_type.h"
+#include "avf.h"
+#include "avf_rxtx.h"
+#include "avf_rxtx_vec_common.h"
+
+#include <tmmintrin.h>
+
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#endif
+
+static inline void
+avf_rxq_rearm(struct avf_rx_queue *rxq)
+{
+	int i;
+	uint16_t rx_id;
+
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxp = &rxq->sw_ring[rxq->rxrearm_start];
+	struct rte_mbuf *mb0, *mb1;
+	__m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM,
+			RTE_PKTMBUF_HEADROOM);
+	__m128i dma_addr0, dma_addr1;
+
+	rxdp = rxq->rx_ring + rxq->rxrearm_start;
+
+	/* Pull 'n' more MBUFs into the software ring */
+	if (rte_mempool_get_bulk(rxq->mp, (void *)rxp,
+				 rxq->rx_free_thresh) < 0) {
+		if (rxq->rxrearm_nb + rxq->rx_free_thresh >= rxq->nb_rx_desc) {
+			dma_addr0 = _mm_setzero_si128();
+			for (i = 0; i < AVF_VPMD_DESCS_PER_LOOP; i++) {
+				rxp[i] = &rxq->fake_mbuf;
+				_mm_store_si128((__m128i *)&rxdp[i].read,
+						dma_addr0);
+			}
+		}
+		rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed +=
+			rxq->rx_free_thresh;
+		return;
+	}
+
+	/* Initialize the mbufs in vector, process 2 mbufs in one loop */
+	for (i = 0; i < rxq->rx_free_thresh; i += 2, rxp += 2) {
+		__m128i vaddr0, vaddr1;
+
+		mb0 = rxp[0];
+		mb1 = rxp[1];
+
+		/* load buf_addr(lo 64bit) and buf_iova(hi 64bit) */
+		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) !=
+				offsetof(struct rte_mbuf, buf_addr) + 8);
+		vaddr0 = _mm_loadu_si128((__m128i *)&mb0->buf_addr);
+		vaddr1 = _mm_loadu_si128((__m128i *)&mb1->buf_addr);
+
+		/* convert pa to dma_addr hdr/data */
+		dma_addr0 = _mm_unpackhi_epi64(vaddr0, vaddr0);
+		dma_addr1 = _mm_unpackhi_epi64(vaddr1, vaddr1);
+
+		/* add headroom to pa values */
+		dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room);
+		dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room);
+
+		/* flush desc with pa dma_addr */
+		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr0);
+		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr1);
+	}
+
+	rxq->rxrearm_start += rxq->rx_free_thresh;
+	if (rxq->rxrearm_start >= rxq->nb_rx_desc)
+		rxq->rxrearm_start = 0;
+
+	rxq->rxrearm_nb -= rxq->rx_free_thresh;
+
+	rx_id = (uint16_t)((rxq->rxrearm_start == 0) ?
+			   (rxq->nb_rx_desc - 1) : (rxq->rxrearm_start - 1));
+
+	PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+		   "rearm_start=%u rearm_nb=%u",
+		   rxq->port_id, rxq->queue_id,
+		   rx_id, rxq->rxrearm_start, rxq->rxrearm_nb);
+
+	/* Update the tail pointer on the NIC */
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+}
+
+static inline void
+desc_to_olflags_v(struct avf_rx_queue *rxq, __m128i descs[4],
+		  struct rte_mbuf **rx_pkts)
+{
+	const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_initializer);
+	__m128i rearm0, rearm1, rearm2, rearm3;
+
+	__m128i vlan0, vlan1, rss, l3_l4e;
+
+	/* mask everything except RSS, flow director and VLAN flags
+	 * bit2 is for VLAN tag, bit11 for flow director indication
+	 * bit13:12 for RSS indication.
+	 */
+	const __m128i rss_vlan_msk = _mm_set_epi32(
+			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804);
+
+	const __m128i cksum_mask = _mm_set_epi32(
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD);
+
+	/* map rss and vlan type to rss hash and vlan flag */
+	const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0,
+			0, 0, 0, 0,
+			0, 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			0, 0, 0, 0);
+
+	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
+			0, 0, 0, 0,
+			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
+			0, 0, PKT_RX_FDIR, 0);
+
+	const __m128i l3_l4e_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
+			/* shift right 1 bit to make sure it not exceed 255 */
+			(PKT_RX_EIP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
+			 PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_EIP_CKSUM_BAD |
+			 PKT_RX_L4_CKSUM_BAD) >> 1,
+			(PKT_RX_EIP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_EIP_CKSUM_BAD) >> 1,
+			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
+			PKT_RX_IP_CKSUM_BAD >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+
+	vlan0 = _mm_unpackhi_epi32(descs[0], descs[1]);
+	vlan1 = _mm_unpackhi_epi32(descs[2], descs[3]);
+	vlan0 = _mm_unpacklo_epi64(vlan0, vlan1);
+
+	vlan1 = _mm_and_si128(vlan0, rss_vlan_msk);
+	vlan0 = _mm_shuffle_epi8(vlan_flags, vlan1);
+
+	rss = _mm_srli_epi32(vlan1, 11);
+	rss = _mm_shuffle_epi8(rss_flags, rss);
+
+	l3_l4e = _mm_srli_epi32(vlan1, 22);
+	l3_l4e = _mm_shuffle_epi8(l3_l4e_flags, l3_l4e);
+	/* then we shift left 1 bit */
+	l3_l4e = _mm_slli_epi32(l3_l4e, 1);
+	/* we need to mask out the redundant bits */
+	l3_l4e = _mm_and_si128(l3_l4e, cksum_mask);
+
+	vlan0 = _mm_or_si128(vlan0, rss);
+	vlan0 = _mm_or_si128(vlan0, l3_l4e);
+
+	/* At this point, we have the 4 sets of flags in the low 16-bits
+	 * of each 32-bit value in vlan0.
+	 * We want to extract these, and merge them with the mbuf init data
+	 * so we can do a single 16-byte write to the mbuf to set the flags
+	 * and all the other initialization fields. Extracting the
+	 * appropriate flags means that we have to do a shift and blend for
+	 * each mbuf before we do the write.
+	 */
+	rearm0 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vlan0, 8), 0x10);
+	rearm1 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vlan0, 4), 0x10);
+	rearm2 = _mm_blend_epi16(mbuf_init, vlan0, 0x10);
+	rearm3 = _mm_blend_epi16(mbuf_init, _mm_srli_si128(vlan0, 4), 0x10);
+
+	/* write the rearm data and the olflags in one write */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
+			offsetof(struct rte_mbuf, rearm_data) + 8);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) !=
+			RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16));
+	_mm_store_si128((__m128i *)&rx_pkts[0]->rearm_data, rearm0);
+	_mm_store_si128((__m128i *)&rx_pkts[1]->rearm_data, rearm1);
+	_mm_store_si128((__m128i *)&rx_pkts[2]->rearm_data, rearm2);
+	_mm_store_si128((__m128i *)&rx_pkts[3]->rearm_data, rearm3);
+}
+
+#define PKTLEN_SHIFT     10
+
+static inline void
+desc_to_ptype_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
+{
+	__m128i ptype0 = _mm_unpackhi_epi64(descs[0], descs[1]);
+	__m128i ptype1 = _mm_unpackhi_epi64(descs[2], descs[3]);
+	static const uint32_t type_table[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	ptype0 = _mm_srli_epi64(ptype0, 30);
+	ptype1 = _mm_srli_epi64(ptype1, 30);
+
+	rx_pkts[0]->packet_type = type_table[_mm_extract_epi8(ptype0, 0)];
+	rx_pkts[1]->packet_type = type_table[_mm_extract_epi8(ptype0, 8)];
+	rx_pkts[2]->packet_type = type_table[_mm_extract_epi8(ptype1, 0)];
+	rx_pkts[3]->packet_type = type_table[_mm_extract_epi8(ptype1, 8)];
+}
+
+/* Notice:
+ * - nb_pkts < AVF_VPMD_DESCS_PER_LOOP, just return no packet
+ * - nb_pkts > AVF_VPMD_RX_MAX_BURST, only scan AVF_VPMD_RX_MAX_BURST
+ *   numbers of DD bits
+ */
+static inline uint16_t
+_recv_raw_pkts_vec(struct avf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
+		   uint16_t nb_pkts, uint8_t *split_packet)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **sw_ring;
+	uint16_t nb_pkts_recd;
+	int pos;
+	uint64_t var;
+	__m128i shuf_msk;
+
+	__m128i crc_adjust = _mm_set_epi16(
+				0, 0, 0,    /* ignore non-length fields */
+				-rxq->crc_len, /* sub crc on data_len */
+				0,          /* ignore high-16bits of pkt_len */
+				-rxq->crc_len, /* sub crc on pkt_len */
+				0, 0            /* ignore pkt_type field */
+			);
+	/* compile-time check the above crc_adjust layout is correct.
+	 * NOTE: the first field (lowest address) is given last in set_epi16
+	 * call above.
+	 */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
+	__m128i dd_check, eop_check;
+
+	/* nb_pkts shall be less than or equal to AVF_VPMD_RX_MAX_BURST */
+	nb_pkts = RTE_MIN(nb_pkts, AVF_VPMD_RX_MAX_BURST);
+
+	/* nb_pkts has to be floor-aligned to AVF_VPMD_DESCS_PER_LOOP */
+	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, AVF_VPMD_DESCS_PER_LOOP);
+
+	/* Just the act of getting into the function from the application is
+	 * going to cost about 7 cycles
+	 */
+	rxdp = rxq->rx_ring + rxq->rx_tail;
+
+	rte_prefetch0(rxdp);
+
+	/* See if we need to rearm the RX queue - gives the prefetch a bit
+	 * of time to act
+	 */
+	if (rxq->rxrearm_nb > rxq->rx_free_thresh)
+		avf_rxq_rearm(rxq);
+
+	/* Before we start moving massive data around, check to see if
+	 * there is actually a packet available
+	 */
+	if (!(rxdp->wb.qword1.status_error_len &
+	      rte_cpu_to_le_32(1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+		return 0;
+
+	/* 4 packets DD mask */
+	dd_check = _mm_set_epi64x(0x0000000100000001LL, 0x0000000100000001LL);
+
+	/* 4 packets EOP mask */
+	eop_check = _mm_set_epi64x(0x0000000200000002LL, 0x0000000200000002LL);
+
+	/* mask to shuffle from desc. to mbuf */
+	shuf_msk = _mm_set_epi8(
+		7, 6, 5, 4,  /* octet 4~7, 32bits rss */
+		3, 2,        /* octet 2~3, low 16 bits vlan_macip */
+		15, 14,      /* octet 15~14, 16 bits data_len */
+		0xFF, 0xFF,  /* skip high 16 bits pkt_len, zero out */
+		15, 14,      /* octet 15~14, low 16 bits pkt_len */
+		0xFF, 0xFF, 0xFF, 0xFF /* pkt_type set as unknown */
+		);
+	/* Compile-time verify the shuffle mask
+	 * NOTE: some field positions already verified above, but duplicated
+	 * here for completeness in case of future modifications.
+	 */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 10);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12);
+
+	/* Cache is empty -> need to scan the buffer rings, but first move
+	 * the next 'n' mbufs into the cache
+	 */
+	sw_ring = &rxq->sw_ring[rxq->rx_tail];
+
+	/* A. load 4 packet in one loop
+	 * [A*. mask out 4 unused dirty field in desc]
+	 * B. copy 4 mbuf point from swring to rx_pkts
+	 * C. calc the number of DD bits among the 4 packets
+	 * [C*. extract the end-of-packet bit, if requested]
+	 * D. fill info. from desc to mbuf
+	 */
+
+	for (pos = 0, nb_pkts_recd = 0; pos < nb_pkts;
+	     pos += AVF_VPMD_DESCS_PER_LOOP,
+	     rxdp += AVF_VPMD_DESCS_PER_LOOP) {
+		__m128i descs[AVF_VPMD_DESCS_PER_LOOP];
+		__m128i pkt_mb1, pkt_mb2, pkt_mb3, pkt_mb4;
+		__m128i zero, staterr, sterr_tmp1, sterr_tmp2;
+		/* 2 64 bit or 4 32 bit mbuf pointers in one XMM reg. */
+		__m128i mbp1;
+#if defined(RTE_ARCH_X86_64)
+		__m128i mbp2;
+#endif
+
+		/* B.1 load 2 (64 bit) or 4 (32 bit) mbuf points */
+		mbp1 = _mm_loadu_si128((__m128i *)&sw_ring[pos]);
+		/* Read desc statuses backwards to avoid race condition */
+		/* A.1 load 4 pkts desc */
+		descs[3] = _mm_loadu_si128((__m128i *)(rxdp + 3));
+		rte_compiler_barrier();
+
+		/* B.2 copy 2 64 bit or 4 32 bit mbuf point into rx_pkts */
+		_mm_storeu_si128((__m128i *)&rx_pkts[pos], mbp1);
+
+#if defined(RTE_ARCH_X86_64)
+		/* B.1 load 2 64 bit mbuf points */
+		mbp2 = _mm_loadu_si128((__m128i *)&sw_ring[pos + 2]);
+#endif
+
+		descs[2] = _mm_loadu_si128((__m128i *)(rxdp + 2));
+		rte_compiler_barrier();
+		/* B.1 load 2 mbuf point */
+		descs[1] = _mm_loadu_si128((__m128i *)(rxdp + 1));
+		rte_compiler_barrier();
+		descs[0] = _mm_loadu_si128((__m128i *)(rxdp));
+
+#if defined(RTE_ARCH_X86_64)
+		/* B.2 copy 2 mbuf point into rx_pkts  */
+		_mm_storeu_si128((__m128i *)&rx_pkts[pos + 2], mbp2);
+#endif
+
+		if (split_packet) {
+			rte_mbuf_prefetch_part2(rx_pkts[pos]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 1]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 2]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 3]);
+		}
+
+		/* avoid compiler reorder optimization */
+		rte_compiler_barrier();
+
+		/* pkt 3,4 shift the pktlen field to be 16-bit aligned */
+		const __m128i len3 = _mm_slli_epi32(descs[3], PKTLEN_SHIFT);
+		const __m128i len2 = _mm_slli_epi32(descs[2], PKTLEN_SHIFT);
+
+		/* merge the now-aligned packet length fields back in */
+		descs[3] = _mm_blend_epi16(descs[3], len3, 0x80);
+		descs[2] = _mm_blend_epi16(descs[2], len2, 0x80);
+
+		/* D.1 pkt 3,4 convert format from desc to pktmbuf */
+		pkt_mb4 = _mm_shuffle_epi8(descs[3], shuf_msk);
+		pkt_mb3 = _mm_shuffle_epi8(descs[2], shuf_msk);
+
+		/* C.1 4=>2 status err info only */
+		sterr_tmp2 = _mm_unpackhi_epi32(descs[3], descs[2]);
+		sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
+
+		desc_to_olflags_v(rxq, descs, &rx_pkts[pos]);
+
+		/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
+		pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
+		pkt_mb3 = _mm_add_epi16(pkt_mb3, crc_adjust);
+
+		/* pkt 1,2 shift the pktlen field to be 16-bit aligned */
+		const __m128i len1 = _mm_slli_epi32(descs[1], PKTLEN_SHIFT);
+		const __m128i len0 = _mm_slli_epi32(descs[0], PKTLEN_SHIFT);
+
+		/* merge the now-aligned packet length fields back in */
+		descs[1] = _mm_blend_epi16(descs[1], len1, 0x80);
+		descs[0] = _mm_blend_epi16(descs[0], len0, 0x80);
+
+		/* D.1 pkt 1,2 convert format from desc to pktmbuf */
+		pkt_mb2 = _mm_shuffle_epi8(descs[1], shuf_msk);
+		pkt_mb1 = _mm_shuffle_epi8(descs[0], shuf_msk);
+
+		/* C.2 get 4 pkts status err value  */
+		zero = _mm_xor_si128(dd_check, dd_check);
+		staterr = _mm_unpacklo_epi32(sterr_tmp1, sterr_tmp2);
+
+		/* D.3 copy final 3,4 data to rx_pkts */
+		_mm_storeu_si128(
+			(void *)&rx_pkts[pos + 3]->rx_descriptor_fields1,
+			pkt_mb4);
+		_mm_storeu_si128(
+			(void *)&rx_pkts[pos + 2]->rx_descriptor_fields1,
+			pkt_mb3);
+
+		/* D.2 pkt 1,2 remove crc */
+		pkt_mb2 = _mm_add_epi16(pkt_mb2, crc_adjust);
+		pkt_mb1 = _mm_add_epi16(pkt_mb1, crc_adjust);
+
+		/* C* extract and record EOP bit */
+		if (split_packet) {
+			__m128i eop_shuf_mask = _mm_set_epi8(
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0x04, 0x0C, 0x00, 0x08
+					);
+
+			/* and with mask to extract bits, flipping 1-0 */
+			__m128i eop_bits = _mm_andnot_si128(staterr, eop_check);
+			/* the staterr values are not in order, as the
+			 * count of dd bits doesn't care. However, for end of
+			 * packet tracking, we do care, so shuffle. This also
+			 * compresses the 32-bit values to 8-bit
+			 */
+			eop_bits = _mm_shuffle_epi8(eop_bits, eop_shuf_mask);
+			/* store the resulting 32-bit value */
+			*(int *)split_packet = _mm_cvtsi128_si32(eop_bits);
+			split_packet += AVF_VPMD_DESCS_PER_LOOP;
+		}
+
+		/* C.3 calc available number of desc */
+		staterr = _mm_and_si128(staterr, dd_check);
+		staterr = _mm_packs_epi32(staterr, zero);
+
+		/* D.3 copy final 1,2 data to rx_pkts */
+		_mm_storeu_si128(
+			(void *)&rx_pkts[pos + 1]->rx_descriptor_fields1,
+			pkt_mb2);
+		_mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1,
+				 pkt_mb1);
+		desc_to_ptype_v(descs, &rx_pkts[pos]);
+		/* C.4 calc available number of desc */
+		var = __builtin_popcountll(_mm_cvtsi128_si64(staterr));
+		nb_pkts_recd += var;
+		if (likely(var != AVF_VPMD_DESCS_PER_LOOP))
+			break;
+	}
+
+	/* Update our internal tail pointer */
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_pkts_recd);
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail & (rxq->nb_rx_desc - 1));
+	rxq->rxrearm_nb = (uint16_t)(rxq->rxrearm_nb + nb_pkts_recd);
+
+	return nb_pkts_recd;
+}
+
+/* Notice:
+ * - nb_pkts < AVF_VPMD_DESCS_PER_LOOP, just return no packet
+ * - nb_pkts > AVF_VPMD_RX_MAX_BURST, only scan AVF_VPMD_RX_MAX_BURST
+ *   numbers of DD bits
+ */
+uint16_t
+avf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+		  uint16_t nb_pkts)
+{
+	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
+}
+
+/* vPMD receive routine that reassembles scattered packets
+ * Notice:
+ * - nb_pkts < AVF_VPMD_DESCS_PER_LOOP, just return no packet
+ * - nb_pkts > AVF_VPMD_RX_MAX_BURST, only scan AVF_VPMD_RX_MAX_BURST
+ *   numbers of DD bits
+ */
+uint16_t
+avf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			    uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	uint8_t split_flags[AVF_VPMD_RX_MAX_BURST] = {0};
+	unsigned int i = 0;
+
+	/* get some new buffers */
+	uint16_t nb_bufs = _recv_raw_pkts_vec(rxq, rx_pkts, nb_pkts,
+					      split_flags);
+	if (nb_bufs == 0)
+		return 0;
+
+	/* happy day case, full burst + no packets to be joined */
+	const uint64_t *split_fl64 = (uint64_t *)split_flags;
+
+	if (!rxq->pkt_first_seg &&
+	    split_fl64[0] == 0 && split_fl64[1] == 0 &&
+	    split_fl64[2] == 0 && split_fl64[3] == 0)
+		return nb_bufs;
+
+	/* reassemble any packets that need reassembly */
+	if (!rxq->pkt_first_seg) {
+		/* find the first split flag, and only reassemble them */
+		while (i < nb_bufs && !split_flags[i])
+			i++;
+		if (i == nb_bufs)
+			return nb_bufs;
+	}
+	return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
+		&split_flags[i]);
+}
+
+static inline void
+vtx1(volatile struct avf_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)
+{
+	uint64_t high_qw =
+			(AVF_TX_DESC_DTYPE_DATA |
+			 ((uint64_t)flags  << AVF_TXD_QW1_CMD_SHIFT) |
+			 ((uint64_t)pkt->data_len <<
+			  AVF_TXD_QW1_TX_BUF_SZ_SHIFT));
+
+	__m128i descriptor = _mm_set_epi64x(high_qw,
+					    pkt->buf_iova + pkt->data_off);
+	_mm_store_si128((__m128i *)txdp, descriptor);
+}
+
+static inline void
+avf_vtx(volatile struct avf_tx_desc *txdp, struct rte_mbuf **pkt,
+	uint16_t nb_pkts,  uint64_t flags)
+{
+	int i;
+
+	for (i = 0; i < nb_pkts; ++i, ++txdp, ++pkt)
+		vtx1(txdp, *pkt, flags);
+}
+
+uint16_t
+avf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+			 uint16_t nb_pkts)
+{
+	struct avf_tx_queue *txq = (struct avf_tx_queue *)tx_queue;
+	volatile struct avf_tx_desc *txdp;
+	struct avf_tx_entry *txep;
+	uint16_t n, nb_commit, tx_id;
+	uint64_t flags = AVF_TX_DESC_CMD_EOP | 0x04;  /* bit 2 must be set */
+	uint64_t rs = AVF_TX_DESC_CMD_RS | flags;
+	int i;
+
+	/* crossing the rs_thresh boundary is not allowed */
+	nb_pkts = RTE_MIN(nb_pkts, txq->rs_thresh);
+
+	if (txq->nb_free < txq->free_thresh)
+		avf_tx_free_bufs(txq);
+
+	nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+	if (unlikely(nb_pkts == 0))
+		return 0;
+	nb_commit = nb_pkts;
+
+	tx_id = txq->tx_tail;
+	txdp = &txq->tx_ring[tx_id];
+	txep = &txq->sw_ring[tx_id];
+
+	txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+
+	n = (uint16_t)(txq->nb_tx_desc - tx_id);
+	if (nb_commit >= n) {
+		tx_backlog_entry(txep, tx_pkts, n);
+
+		for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
+			vtx1(txdp, *tx_pkts, flags);
+
+		vtx1(txdp, *tx_pkts++, rs);
+
+		nb_commit = (uint16_t)(nb_commit - n);
+
+		tx_id = 0;
+		txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+
+		/* avoid reaching the end of the ring */
+		txdp = &txq->tx_ring[tx_id];
+		txep = &txq->sw_ring[tx_id];
+	}
+
+	tx_backlog_entry(txep, tx_pkts, nb_commit);
+
+	avf_vtx(txdp, tx_pkts, nb_commit, flags);
+
+	tx_id = (uint16_t)(tx_id + nb_commit);
+	if (tx_id > txq->next_rs) {
+		txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)AVF_TX_DESC_CMD_RS) <<
+					 AVF_TXD_QW1_CMD_SHIFT);
+		txq->next_rs =
+			(uint16_t)(txq->next_rs + txq->rs_thresh);
+	}
+
+	txq->tx_tail = tx_id;
+
+	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_pkts=%u",
+		   txq->port_id, txq->queue_id, tx_id, nb_pkts);
+
+	AVF_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
+
+	return nb_pkts;
+}
+
+void __attribute__((cold))
+avf_rx_queue_release_mbufs_sse(struct avf_rx_queue *rxq)
+{
+	_avf_rx_queue_release_mbufs_vec(rxq);
+}
+
+static void __attribute__((cold))
+avf_tx_queue_release_mbufs_sse(struct avf_tx_queue *txq)
+{
+	_avf_tx_queue_release_mbufs_vec(txq);
+}
+
+static const struct avf_rxq_ops sse_vec_rxq_ops = {
+	.release_mbufs = avf_rx_queue_release_mbufs_sse,
+};
+
+static const struct avf_txq_ops sse_vec_txq_ops = {
+	.release_mbufs = avf_tx_queue_release_mbufs_sse,
+};
+
+int __attribute__((cold))
+avf_txq_vec_setup(struct avf_tx_queue *txq)
+{
+	txq->ops = &sse_vec_txq_ops;
+	return 0;
+}
+
+int __attribute__((cold))
+avf_rxq_vec_setup(struct avf_rx_queue *rxq)
+{
+	rxq->ops = &sse_vec_rxq_ops;
+	return avf_rxq_vec_setup_default(rxq);
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v5 13/14] net/avf: enable bulk allocate Rx func
  2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
                         ` (11 preceding siblings ...)
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 12/14] net/avf: enable sse vector Rx Tx func Wenzhuo Lu
@ 2018-01-08  5:13       ` Wenzhuo Lu
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 14/14] net/avf: enable Rx interrupt support Wenzhuo Lu
  2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] dd new AVF PMD Wenzhuo Lu
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-08  5:13 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/avf/avf.h        |   1 +
 drivers/net/avf/avf_ethdev.c |   1 +
 drivers/net/avf/avf_rxtx.c   | 300 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h   |   6 +
 4 files changed, 308 insertions(+)

diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index b79bc5a..ea0f7d8 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -120,6 +120,7 @@ struct avf_adapter {
 	struct rte_eth_dev *eth_dev;
 	struct avf_info vf;
 
+	bool rx_bulk_alloc_allowed;
 	/* For vector PMD */
 	bool rx_vec_allowed;
 	bool tx_vec_allowed;
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 127fdb5..d9f7cea 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -121,6 +121,7 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(ad);
 	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
 
+	ad->rx_bulk_alloc_allowed = true;
 #ifdef RTE_LIBRTE_AVF_INC_VECTOR
 	/* Initialize to TRUE. If any of Rx queues doesn't meet the
 	 * vector Rx/Tx preconditions, it will be reset.
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index b542532..e0c4583 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -120,6 +120,27 @@
 }
 #endif
 
+static inline bool
+check_rx_bulk_allow(struct avf_rx_queue *rxq)
+{
+	int ret = TRUE;
+
+	if (!(rxq->rx_free_thresh >= AVF_RX_MAX_BURST)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "AVF_RX_MAX_BURST=%d",
+			     rxq->rx_free_thresh, AVF_RX_MAX_BURST);
+		ret = FALSE;
+	} else if (rxq->nb_rx_desc % rxq->rx_free_thresh != 0) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->nb_rx_desc=%d, "
+			     "rxq->rx_free_thresh=%d",
+			     rxq->nb_rx_desc, rxq->rx_free_thresh);
+		ret = FALSE;
+	}
+	return ret;
+}
+
 static inline void
 reset_rx_queue(struct avf_rx_queue *rxq)
 {
@@ -138,6 +159,11 @@
 	for (i = 0; i < AVF_RX_MAX_BURST; i++)
 		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
 
+	/* for rx bulk */
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
 	rxq->rx_tail = 0;
 	rxq->nb_rx_hold = 0;
 	rxq->pkt_first_seg = NULL;
@@ -233,6 +259,17 @@
 			rxq->sw_ring[i] = NULL;
 		}
 	}
+
+	/* for rx bulk */
+	if (rxq->rx_nb_avail == 0)
+		return;
+	for (i = 0; i < rxq->rx_nb_avail; i++) {
+		struct rte_mbuf *mbuf;
+
+		mbuf = rxq->rx_stage[rxq->rx_next_avail + i];
+		rte_pktmbuf_free_seg(mbuf);
+	}
+	rxq->rx_nb_avail = 0;
 }
 
 static inline void
@@ -363,6 +400,19 @@
 	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
 	rxq->ops = &def_rxq_ops;
 
+	if (check_rx_bulk_allow(rxq) == TRUE) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function will be "
+			     "used on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+	} else {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "not satisfied, Rx Burst Bulk Alloc function "
+			     "will not be used on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+		ad->rx_bulk_alloc_allowed = false;
+	}
+
 #ifdef RTE_LIBRTE_AVF_INC_VECTOR
 	if (check_rx_vec_allow(rxq) == FALSE)
 		ad->rx_vec_allowed = false;
@@ -1036,6 +1086,252 @@
 	return nb_rx;
 }
 
+#define AVF_LOOK_AHEAD 8
+static inline int
+avf_rx_scan_hw_ring(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxep;
+	struct rte_mbuf *mb;
+	uint16_t pkt_len;
+	uint64_t qword1;
+	uint32_t rx_status;
+	int32_t s[AVF_LOOK_AHEAD], nb_dd;
+	int32_t i, j, nb_rx = 0;
+	uint64_t pkt_flags;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	rxep = &rxq->sw_ring[rxq->rx_tail];
+
+	qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+	rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+		    AVF_RXD_QW1_STATUS_SHIFT;
+
+	/* Make sure there is at least 1 packet to receive */
+	if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+		return 0;
+
+	/* Scan LOOK_AHEAD descriptors at a time to determine which
+	 * descriptors reference packets that are ready to be received.
+	 */
+	for (i = 0; i < AVF_RX_MAX_BURST; i += AVF_LOOK_AHEAD,
+	     rxdp += AVF_LOOK_AHEAD, rxep += AVF_LOOK_AHEAD) {
+		/* Read desc statuses backwards to avoid race condition */
+		for (j = AVF_LOOK_AHEAD - 1; j >= 0; j--) {
+			qword1 = rte_le_to_cpu_64(
+				rxdp[j].wb.qword1.status_error_len);
+			s[j] = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			       AVF_RXD_QW1_STATUS_SHIFT;
+		}
+
+		rte_smp_rmb();
+
+		/* Compute how many status bits were set */
+		for (j = 0, nb_dd = 0; j < AVF_LOOK_AHEAD; j++)
+			nb_dd += s[j] & (1 << AVF_RX_DESC_STATUS_DD_SHIFT);
+
+		nb_rx += nb_dd;
+
+		/* Translate descriptor info to mbuf parameters */
+		for (j = 0; j < nb_dd; j++) {
+			AVF_DUMP_RX_DESC(rxq, &rxdp[j],
+					 rxq->rx_tail + i * AVF_LOOK_AHEAD + j);
+
+			mb = rxep[j];
+			qword1 = rte_le_to_cpu_64
+					(rxdp[j].wb.qword1.status_error_len);
+			pkt_len = ((qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				  AVF_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
+			mb->data_len = pkt_len;
+			mb->pkt_len = pkt_len;
+			mb->ol_flags = 0;
+			avf_rxd_to_vlan_tci(mb, &rxdp[j]);
+			pkt_flags = avf_rxd_to_pkt_flags(qword1);
+			mb->packet_type =
+				ptype_tbl[(uint8_t)((qword1 &
+				AVF_RXD_QW1_PTYPE_MASK) >>
+				AVF_RXD_QW1_PTYPE_SHIFT)];
+
+			if (pkt_flags & PKT_RX_RSS_HASH)
+				mb->hash.rss = rte_le_to_cpu_32(
+					rxdp[j].wb.qword0.hi_dword.rss);
+
+			mb->ol_flags |= pkt_flags;
+		}
+
+		for (j = 0; j < AVF_LOOK_AHEAD; j++)
+			rxq->rx_stage[i + j] = rxep[j];
+
+		if (nb_dd != AVF_LOOK_AHEAD)
+			break;
+	}
+
+	/* Clear software ring entries */
+	for (i = 0; i < nb_rx; i++)
+		rxq->sw_ring[rxq->rx_tail + i] = NULL;
+
+	return nb_rx;
+}
+
+static inline uint16_t
+avf_rx_fill_from_stage(struct avf_rx_queue *rxq,
+		       struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts)
+{
+	uint16_t i;
+	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
+
+	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
+
+	for (i = 0; i < nb_pkts; i++)
+		rx_pkts[i] = stage[i];
+
+	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
+	rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+
+	return nb_pkts;
+}
+
+static inline int
+avf_rx_alloc_bufs(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxep;
+	struct rte_mbuf *mb;
+	uint16_t alloc_idx, i;
+	uint64_t dma_addr;
+	int diag;
+
+	/* Allocate buffers in bulk */
+	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
+				(rxq->rx_free_thresh - 1));
+	rxep = &rxq->sw_ring[alloc_idx];
+	diag = rte_mempool_get_bulk(rxq->mp, (void *)rxep,
+				    rxq->rx_free_thresh);
+	if (unlikely(diag != 0)) {
+		PMD_RX_LOG(ERR, "Failed to get mbufs in bulk");
+		return -ENOMEM;
+	}
+
+	rxdp = &rxq->rx_ring[alloc_idx];
+	for (i = 0; i < rxq->rx_free_thresh; i++) {
+		if (likely(i < (rxq->rx_free_thresh - 1)))
+			/* Prefetch next mbuf */
+			rte_prefetch0(rxep[i + 1]);
+
+		mb = rxep[i];
+		rte_mbuf_refcnt_set(mb, 1);
+		mb->next = NULL;
+		mb->data_off = RTE_PKTMBUF_HEADROOM;
+		mb->nb_segs = 1;
+		mb->port = rxq->port_id;
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
+		rxdp[i].read.hdr_addr = 0;
+		rxdp[i].read.pkt_addr = dma_addr;
+	}
+
+	/* Update rx tail register */
+	rte_wmb();
+	AVF_PCI_REG_WRITE_RELAXED(rxq->qrx_tail, rxq->rx_free_trigger);
+
+	rxq->rx_free_trigger =
+		(uint16_t)(rxq->rx_free_trigger + rxq->rx_free_thresh);
+	if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
+		rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	return 0;
+}
+
+static inline uint16_t
+rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = (struct avf_rx_queue *)rx_queue;
+	struct rte_eth_dev *dev;
+	uint16_t nb_rx = 0;
+
+	if (!nb_pkts)
+		return 0;
+
+	if (rxq->rx_nb_avail)
+		return avf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	nb_rx = (uint16_t)avf_rx_scan_hw_ring(rxq);
+	rxq->rx_next_avail = 0;
+	rxq->rx_nb_avail = nb_rx;
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+
+	if (rxq->rx_tail > rxq->rx_free_trigger) {
+		if (avf_rx_alloc_bufs(rxq) != 0) {
+			uint16_t i, j;
+
+			/* TODO: count rx_mbuf_alloc_failed here */
+
+			rxq->rx_nb_avail = 0;
+			rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
+			for (i = 0, j = rxq->rx_tail; i < nb_rx; i++, j++)
+				rxq->sw_ring[j] = rxq->rx_stage[i];
+
+			return 0;
+		}
+	}
+
+	if (rxq->rx_tail >= rxq->nb_rx_desc)
+		rxq->rx_tail = 0;
+
+	PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u, nb_rx=%u",
+		   rxq->port_id, rxq->queue_id,
+		   rxq->rx_tail, nb_rx);
+
+	if (rxq->rx_nb_avail)
+		return avf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	return 0;
+}
+
+static uint16_t
+avf_recv_pkts_bulk_alloc(void *rx_queue,
+			 struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	uint16_t nb_rx = 0, n, count;
+
+	if (unlikely(nb_pkts == 0))
+		return 0;
+
+	if (likely(nb_pkts <= AVF_RX_MAX_BURST))
+		return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		n = RTE_MIN(nb_pkts, AVF_RX_MAX_BURST);
+		count = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+		nb_rx = (uint16_t)(nb_rx + count);
+		nb_pkts = (uint16_t)(nb_pkts - count);
+		if (count < n)
+			break;
+	}
+
+	return nb_rx;
+}
+
 static inline int
 avf_xmit_cleanup(struct avf_tx_queue *txq)
 {
@@ -1467,6 +1763,10 @@
 		PMD_DRV_LOG(DEBUG, "Using a Scattered Rx callback (port=%d).",
 			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_scattered_pkts;
+	} else if (adapter->rx_bulk_alloc_allowed) {
+		PMD_DRV_LOG(DEBUG, "Using bulk Rx callback (port=%d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = avf_recv_pkts_bulk_alloc;
 	} else {
 		PMD_DRV_LOG(DEBUG, "Using Basic Rx callback (port=%d).",
 			    dev->data->port_id);
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index 82fd801..d1701cd 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -83,6 +83,12 @@ struct avf_rx_queue {
 	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
 	uint64_t mbuf_initializer; /* value to init mbufs */
 
+	/* for rx bulk */
+	uint16_t rx_nb_avail;      /* number of staged packets ready */
+	uint16_t rx_next_avail;    /* index of next staged packets */
+	uint16_t rx_free_trigger;  /* triggers rx buffer allocation */
+	struct rte_mbuf *rx_stage[AVF_RX_MAX_BURST * 2]; /* store mbuf */
+
 	uint16_t port_id;        /* device port ID */
 	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
 	uint16_t queue_id;      /* Rx queue index */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v5 14/14] net/avf: enable Rx interrupt support
  2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
                         ` (12 preceding siblings ...)
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 13/14] net/avf: enable bulk allocate Rx func Wenzhuo Lu
@ 2018-01-08  5:13       ` Wenzhuo Lu
  2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] dd new AVF PMD Wenzhuo Lu
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-08  5:13 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Also update the documentation for the AVF features.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini       |   1 +
 doc/guides/nics/features/avf_vec.ini   |   1 +
 doc/guides/nics/intel_vf.rst           |  20 +++-
 doc/guides/rel_notes/release_18_02.rst |  16 +++
 drivers/net/avf/avf_ethdev.c           | 204 +++++++++++++++++++++++++++------
 5 files changed, 204 insertions(+), 38 deletions(-)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index da4d81b..ccb9edd 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
diff --git a/doc/guides/nics/features/avf_vec.ini b/doc/guides/nics/features/avf_vec.ini
index 45dd5e5..8924994 100644
--- a/doc/guides/nics/features/avf_vec.ini
+++ b/doc/guides/nics/features/avf_vec.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 1e83bf6..66f90b1 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -28,8 +28,8 @@
     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-I40E/IXGBE/IGB Virtual Function Driver
-======================================
+Intel Virtual Function Driver
+=============================
 
 Supported Intel® Ethernet Controllers (see the *DPDK Release Notes* for details)
 support the following modes of operation in a virtualized environment:
@@ -93,6 +93,22 @@ and the Physical Function operates on the global resources on behalf of the Virt
 For this out-of-band communication, an SR-IOV enabled NIC provides a memory buffer for each Virtual Function,
 which is called a "Mailbox".
 
+Intel® Ethernet Adaptive Virtual Function
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Adaptive Virtual Function (AVF) is an SR-IOV Virtual Function with the same device ID (8086:1889) across different Intel Ethernet Controllers.
+The AVF driver is a VF driver that supports all future Intel devices without requiring a VM update. Because the driver is adaptive,
+each new release of the VF driver can enable additional advanced features in the VM when the underlying hardware supports them,
+in a device-agnostic way and without ever compromising the base functionality. AVF provides a generic hardware interface, and the
+interface between the AVF driver and a compliant PF driver is specified.
+
+Intel products starting from the Ethernet Controller 710 Series support Adaptive Virtual Function.
+
+Virtual Functions are created in the usual way, and the resources assigned to a VF depend on the NIC infrastructure.
+
+For more detail on SR-IOV, please refer to the following documents:
+
+*   `Intel® AVF HAS <https://www.intel.com/content/dam/www/public/us/en/documents/product-specifications/ethernet-adaptive-virtual-function-hardware-spec.pdf>`_
+
 The PCIE host-interface of Intel Ethernet Switch FM10000 Series VF infrastructure
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/doc/guides/rel_notes/release_18_02.rst b/doc/guides/rel_notes/release_18_02.rst
index 24b67bb..0672b0e 100644
--- a/doc/guides/rel_notes/release_18_02.rst
+++ b/doc/guides/rel_notes/release_18_02.rst
@@ -41,6 +41,22 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+   * **Add AVF (Adaptive Virtual Function) net PMD.**
+
+     A new net PMD has been added, which supports Intel® Ethernet Adaptive
+     Virtual Function (AVF) with features list below:
+
+     * Basic Rx/Tx burst
+     * SSE vectorized Rx/Tx burst
+     * Promiscuous mode
+     * MAC/VLAN offload
+     * Checksum offload
+     * TSO offload
+     * Jumbo frame and MTU setting
+     * RSS configuration
+     * stats
+     * Rx/Tx descriptor status
+     * Link status update/event
 
 API Changes
 -----------
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index d9f7cea..13f6329 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -67,9 +67,14 @@ static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 static int avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
+static int avf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev,
+					uint16_t queue_id);
+static int avf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev,
+					 uint16_t queue_id);
 
 int avf_logtype_init;
 int avf_logtype_driver;
+
 static const struct rte_pci_id pci_id_avf_map[] = {
 	{ RTE_PCI_DEVICE(AVF_INTEL_VENDOR_ID, AVF_DEV_ID_ADAPTIVE_VF) },
 	{ .vendor_id = 0, /* sentinel */ },
@@ -111,6 +116,8 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.rx_descriptor_status       = avf_dev_rx_desc_status,
 	.tx_descriptor_status       = avf_dev_tx_desc_status,
 	.mtu_set                    = avf_dev_mtu_set,
+	.rx_queue_intr_enable       = avf_dev_rx_queue_intr_enable,
+	.rx_queue_intr_disable      = avf_dev_rx_queue_intr_disable,
 };
 
 static int
@@ -275,6 +282,99 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static int avf_config_rx_queues_irqs(struct rte_eth_dev *dev,
+				     struct rte_intr_handle *intr_handle)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	uint16_t interval, i;
+	int vec;
+
+	if (dev->data->dev_conf.intr_conf.rxq != 0) {
+		if (rte_intr_efd_enable(intr_handle, dev->data->nb_rx_queues))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+			rte_zmalloc("intr_vec",
+				    dev->data->nb_rx_queues * sizeof(int), 0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
+				    dev->data->nb_rx_queues);
+			return -1;
+		}
+	}
+
+	if (!dev->data->dev_conf.intr_conf.rxq) {
+		/* Rx interrupt disabled, map an interrupt only for writeback */
+		vf->nb_msix = 1;
+		if (vf->vf_res->vf_cap_flags &
+		    VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+			/* If WB_ON_ITR is supported, enable it */
+			vf->msix_base = AVF_RX_VEC_START;
+			AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
+				      AVFINT_DYN_CTLN1_ITR_INDX_MASK |
+				      AVFINT_DYN_CTLN1_WB_ON_ITR_MASK);
+		} else {
+			/* If the WB_ON_ITR offload flag is not set, an
+			 * interrupt is needed for descriptor write-back.
+			 */
+			vf->msix_base = AVF_MISC_VEC_ID;
+
+			/* set ITR to max */
+			interval = avf_calc_itr_interval(
+					AVF_QUEUE_ITR_INTERVAL_MAX);
+			AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+				      AVFINT_DYN_CTL01_INTENA_MASK |
+				      (AVF_ITR_INDEX_DEFAULT <<
+				       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+				      (interval <<
+				       AVFINT_DYN_CTL01_INTERVAL_SHIFT));
+		}
+		AVF_WRITE_FLUSH(hw);
+		/* map all queues to the same interrupt */
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			vf->rxq_map[0] |= 1 << i;
+	} else {
+		if (!rte_intr_allow_others(intr_handle)) {
+			vf->nb_msix = 1;
+			vf->msix_base = AVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				vf->rxq_map[0] |= 1 << i;
+				intr_handle->intr_vec[i] = AVF_MISC_VEC_ID;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "vector 0 is mapped to all Rx queues");
+		} else {
+			/* If Rx interrupt is required and we can use
+			 * multiple interrupts, queue vectors start from 1.
+			 */
+			vf->nb_msix = RTE_MIN(vf->vf_res->max_vectors,
+					      intr_handle->nb_efd);
+			vf->msix_base = AVF_RX_VEC_START;
+			vec = AVF_RX_VEC_START;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				vf->rxq_map[vec] |= 1 << i;
+				intr_handle->intr_vec[i] = vec++;
+				if (vec >= vf->nb_msix)
+					vec = AVF_RX_VEC_START;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "%u vectors are mapped to %u Rx queues",
+				    vf->nb_msix, dev->data->nb_rx_queues);
+		}
+	}
+
+	if (avf_config_irq_map(adapter)) {
+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+		return -1;
+	}
+	return 0;
+}
+
 static int
 avf_start_queues(struct rte_eth_dev *dev)
 {
@@ -314,8 +414,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = dev->intr_handle;
-	uint16_t interval;
-	int i;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -325,8 +423,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
 				      dev->data->nb_tx_queues);
 
-	/* TODO: Rx interrupt */
-
 	if (avf_init_queues(dev) != 0) {
 		PMD_DRV_LOG(ERR, "failed to do Queue init");
 		return -1;
@@ -344,36 +440,15 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 		goto err_queue;
 	}
 
-	/* Map interrupt for writeback */
-	vf->nb_msix = 1;
-	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
-		/* If WB_ON_ITR supports, enable it */
-		vf->msix_base = AVF_RX_VEC_START;
-		AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
-			      AVFINT_DYN_CTLN1_ITR_INDX_MASK |
-			      AVFINT_DYN_CTLN1_WB_ON_ITR_MASK);
-	} else {
-		/* If no WB_ON_ITR offload flags, need to set interrupt for
-		 * descriptor write back.
-		 */
-		vf->msix_base = AVF_MISC_VEC_ID;
-
-		/* set ITR to max */
-		interval = avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
-		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
-			      AVFINT_DYN_CTL01_INTENA_MASK |
-			      (AVF_ITR_INDEX_DEFAULT <<
-			       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
-			      (interval << AVFINT_DYN_CTL01_INTERVAL_SHIFT));
-	}
-	AVF_WRITE_FLUSH(hw);
-	/* map all queues to the same interrupt */
-	for (i = 0; i < dev->data->nb_rx_queues; i++)
-		vf->rxq_map[0] |= 1 << i;
-	if (avf_config_irq_map(adapter)) {
-		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+	if (avf_config_rx_queues_irqs(dev, intr_handle) != 0) {
+		PMD_DRV_LOG(ERR, "configure irq failed");
 		goto err_queue;
 	}
+	/* re-enable intr again, because the efd assignment may have changed */
+	if (dev->data->dev_conf.intr_conf.rxq != 0) {
+		rte_intr_disable(intr_handle);
+		rte_intr_enable(intr_handle);
+	}
 
 	/* Set all mac addrs */
 	avf_add_del_all_mac_addr(adapter, TRUE);
@@ -383,7 +458,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 		goto err_mac;
 	}
 
-	/* TODO: enable interrupt for RX interrupt */
 	return 0;
 
 err_mac:
@@ -399,6 +473,8 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_adapter *adapter =
 		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
 	int ret, i;
 
 	PMD_INIT_FUNC_TRACE();
@@ -408,9 +484,13 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 
 	avf_stop_queues(dev);
 
-	/*TODO: Disable the interrupt for Rx*/
-
-	/* TODO: Rx interrupt vector mapping free */
+	/* Disable the interrupt for Rx */
+	rte_intr_efd_disable(intr_handle);
+	/* Rx interrupt vector mapping free */
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
 
 	/* remove all mac addrs */
 	avf_add_del_all_mac_addr(adapter, FALSE);
@@ -913,6 +993,58 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 }
 
 static int
+avf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	uint16_t msix_intr;
+
+	msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+	if (msix_intr == AVF_MISC_VEC_ID) {
+		PMD_DRV_LOG(INFO, "MISC is also enabled for control");
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+			      AVFINT_DYN_CTL01_INTENA_MASK |
+			      AVFINT_DYN_CTL01_ITR_INDX_MASK);
+	} else {
+		AVF_WRITE_REG(hw,
+			      AVFINT_DYN_CTLN1(msix_intr - AVF_RX_VEC_START),
+			      AVFINT_DYN_CTLN1_INTENA_MASK |
+			      AVFINT_DYN_CTLN1_ITR_INDX_MASK);
+	}
+
+	AVF_WRITE_FLUSH(hw);
+
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	return 0;
+}
+
+static int
+avf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t msix_intr;
+
+	msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+	if (msix_intr == AVF_MISC_VEC_ID) {
+		PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it");
+		return -EIO;
+	}
+
+	AVF_WRITE_REG(hw,
+		      AVFINT_DYN_CTLN1(msix_intr - AVF_RX_VEC_START),
+		      0);
+
+	AVF_WRITE_FLUSH(hw);
+	return 0;
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v4 01/15] net/avf/base: add base code for avf PMD
  2018-01-08  1:06         ` Lu, Wenzhuo
@ 2018-01-08 15:27           ` Stephen Hemminger
  2018-01-09  1:35             ` Lu, Wenzhuo
  0 siblings, 1 reply; 151+ messages in thread
From: Stephen Hemminger @ 2018-01-08 15:27 UTC (permalink / raw)
  To: Lu, Wenzhuo; +Cc: dev, Wu, Jingjing

On Mon, 8 Jan 2018 01:06:36 +0000
"Lu, Wenzhuo" <wenzhuo.lu@intel.com> wrote:

> > This function is way too big to be an inline.
> Thanks for your comments. Let me explain. This is the base code, like what's in ixgbe, i40e ... We have to let it be so it's much easier for us to update it the next time. That's why the code style is a little different. And also some checkpatch problem not handled. 
> We have had some discussion about the copyright license here and internally. But unfortunately we don't achieve a conclusion internally so we have to keep the long license now and may change it later.


If you have to have base code then put it in a subdirectory base/
that way reviewers and maintainers can separate DPDK part from ugly stuff.

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v4 01/15] net/avf/base: add base code for avf PMD
  2018-01-08 15:27           ` Stephen Hemminger
@ 2018-01-09  1:35             ` Lu, Wenzhuo
  0 siblings, 0 replies; 151+ messages in thread
From: Lu, Wenzhuo @ 2018-01-09  1:35 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, Wu, Jingjing

Hi Stephen,

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Monday, January 8, 2018 11:28 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Cc: dev@dpdk.org; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v4 01/15] net/avf/base: add base code for avf
> PMD
> 
> On Mon, 8 Jan 2018 01:06:36 +0000
> "Lu, Wenzhuo" <wenzhuo.lu@intel.com> wrote:
> 
> > > This function is way too big to be an inline.
> > Thanks for your comments. Let me explain. This is the base code, like
> what's in ixgbe, i40e ... We have to let it be so it's much easier for us to
> update it the next time. That's why the code style is a little different. And
> also some checkpatch problem not handled.
> > We have had some discussion about the copyright license here and
> internally. But unfortunately we don't achieve a conclusion internally so we
> have to keep the long license now and may change it later.
> 
> 
> If you have to have base code then put it in a subdirectory base/ that way
> reviewers and maintainers can separate DPDK part from ugly stuff.
Yes, that's what we've done. And we also use an individual patch for it :)

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v5 02/14] net/avf: initialization of avf PMD
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 02/14] net/avf: initialization of " Wenzhuo Lu
@ 2018-01-09 17:58         ` Ferruh Yigit
  2018-01-10  2:59           ` Lu, Wenzhuo
  0 siblings, 1 reply; 151+ messages in thread
From: Ferruh Yigit @ 2018-01-09 17:58 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Jingjing Wu

On 1/8/2018 5:13 AM, Wenzhuo Lu wrote:
> From: Jingjing Wu <jingjing.wu@intel.com>
> 
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

<...>

> @@ -226,6 +226,11 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
>  CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
>  
>  #
> +# Compile burst-oriented AVF PMD driver
> +#
> +CONFIG_RTE_LIBRTE_AVF_PMD=y

is 32bit supported? I am getting following errors [1]

.../drivers/net/avf/base/avf_common.c: In function ‘avf_aq_set_arp_proxy_config’:
.../drivers/net/avf/base/avf_common.c:1262:32: warning: cast from pointer to
integer of different size [-Wpointer-to-int-cast]
       CPU_TO_LE32(AVF_HI_DWORD((u64)proxy_config));
                                ^
.../i686-native-linuxapp-gcc/include/rte_byteorder.h:73:30: note: in definition
of macro ‘rte_cpu_to_le_32’
 #define rte_cpu_to_le_32(x) (x)
                              ^
.../drivers/net/avf/base/avf_common.c:1262:7: note: in expansion of macro
‘CPU_TO_LE32’
       CPU_TO_LE32(AVF_HI_DWORD((u64)proxy_config));
       ^~~~~~~~~~~
.../drivers/net/avf/base/avf_common.c:1262:19: note: in expansion of macro
‘AVF_HI_DWORD’
       CPU_TO_LE32(AVF_HI_DWORD((u64)proxy_config));
                   ^~~~~~~~~~~~
.../drivers/net/avf/base/avf_common.c:1264:32: warning: cast from pointer to
integer of different size [-Wpointer-to-int-cast]
       CPU_TO_LE32(AVF_LO_DWORD((u64)proxy_config));
                                ^
.../i686-native-linuxapp-gcc/include/rte_byteorder.h:73:30: note: in definition
of macro ‘rte_cpu_to_le_32’
 #define rte_cpu_to_le_32(x) (x)
                              ^
.../drivers/net/avf/base/avf_common.c:1264:7: note: in expansion of macro
‘CPU_TO_LE32’
       CPU_TO_LE32(AVF_LO_DWORD((u64)proxy_config));
       ^~~~~~~~~~~
.../drivers/net/avf/base/avf_common.c:1264:19: note: in expansion of macro
‘AVF_LO_DWORD’
       CPU_TO_LE32(AVF_LO_DWORD((u64)proxy_config));
                   ^~~~~~~~~~~~
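
For reference, a minimal standalone sketch of the pattern the warning points at; HI_DWORD/LO_DWORD below are simplified stand-ins for the AVF macros, not the driver code itself. On a 32-bit target a direct pointer-to-u64 cast widens the pointer and gcc emits -Wpointer-to-int-cast, while casting through uintptr_t first keeps the conversion explicit and quiet:

#include <stdint.h>
#include <stdio.h>

/* simplified stand-ins for AVF_HI_DWORD/AVF_LO_DWORD */
#define HI_DWORD(x) ((uint32_t)(((x) >> 16) >> 16))
#define LO_DWORD(x) ((uint32_t)(x))

int main(void)
{
	int dummy = 0;
	void *proxy_config = &dummy;	/* stands in for the real struct pointer */

	/* (uint64_t)proxy_config triggers "cast from pointer to integer of
	 * different size" on i686; going through uintptr_t does not.
	 */
	uint64_t addr = (uint64_t)(uintptr_t)proxy_config;

	printf("hi=0x%08x lo=0x%08x\n",
	       (unsigned)HI_DWORD(addr), (unsigned)LO_DWORD(addr));
	return 0;
}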

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v5 12/14] net/avf: enable sse vector Rx Tx func
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 12/14] net/avf: enable sse vector Rx Tx func Wenzhuo Lu
@ 2018-01-09 17:58         ` Ferruh Yigit
  2018-01-10  1:38           ` Lu, Wenzhuo
  0 siblings, 1 reply; 151+ messages in thread
From: Ferruh Yigit @ 2018-01-09 17:58 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Jingjing Wu

On 1/8/2018 5:13 AM, Wenzhuo Lu wrote:
> From: Jingjing Wu <jingjing.wu@intel.com>
> 
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

<...>

> @@ -31,5 +31,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
>  SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
>  SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
>  SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_rxtx.c
> +SRCS-$(CONFIG_RTE_LIBRTE_AVF_INC_VECTOR) += avf_rxtx_vec_sse.c

You may need to wrap this with an arch check to not break other architecture builds.

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v5 07/14] net/avf: enable ops for MAC VLAN offload
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 07/14] net/avf: enable ops for MAC VLAN offload Wenzhuo Lu
@ 2018-01-09 17:58         ` Ferruh Yigit
  2018-01-10  1:39           ` Lu, Wenzhuo
  0 siblings, 1 reply; 151+ messages in thread
From: Ferruh Yigit @ 2018-01-09 17:58 UTC (permalink / raw)
  To: Wenzhuo Lu, dev; +Cc: Jingjing Wu

On 1/8/2018 5:13 AM, Wenzhuo Lu wrote:
> From: Jingjing Wu <jingjing.wu@intel.com>
> 
>  - promiscuous_enable
>  - promiscuous_disable
>  - allmulticast_enable
>  - allmulticast_disable
>  - mac_addr_add
>  - mac_addr_remove
>  - mac_addr_set
>  - vlan_filter_set
>  - vlan_offload_set

Patch subject is not accurate, can you please update it?

> 
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

<...>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v5 12/14] net/avf: enable sse vector Rx Tx func
  2018-01-09 17:58         ` Ferruh Yigit
@ 2018-01-10  1:38           ` Lu, Wenzhuo
  2018-01-10  9:57             ` Ferruh Yigit
  0 siblings, 1 reply; 151+ messages in thread
From: Lu, Wenzhuo @ 2018-01-10  1:38 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Wu, Jingjing

Hi Ferruh,

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Wednesday, January 10, 2018 1:58 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v5 12/14] net/avf: enable sse vector Rx Tx
> func
> 
> On 1/8/2018 5:13 AM, Wenzhuo Lu wrote:
> > From: Jingjing Wu <jingjing.wu@intel.com>
> >
> > Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> 
> <...>
> 
> > @@ -31,5 +31,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) +=
> avf_common.c
> >  SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
> >  SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
> >  SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_rxtx.c
> > +SRCS-$(CONFIG_RTE_LIBRTE_AVF_INC_VECTOR) += avf_rxtx_vec_sse.c
> 
> You may need to wrap this with an arch check to not break other
> architecture builds.
I think we don't need to wrap it because, as I remember, SSE is supported by default.
For example, before ARM64 support was added by this patch,
'b20971b6cca0d01c41ff06e161581754810bfeb7 net/ixgbe: implement vector driver for ARM',
the ixgbe Makefile looked just like this.
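
For context, the arch guard being discussed could look like the sketch below. The CONFIG_RTE_ARCH_X86 conditional is only an assumption modeled on other Intel PMD Makefiles, not necessarily the form that finally landed; the SRCS line is the one from the patch:

# hypothetical guard so non-x86 builds skip the SSE vector source
ifeq ($(CONFIG_RTE_ARCH_X86),y)
SRCS-$(CONFIG_RTE_LIBRTE_AVF_INC_VECTOR) += avf_rxtx_vec_sse.c
endif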

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v5 07/14] net/avf: enable ops for MAC VLAN offload
  2018-01-09 17:58         ` Ferruh Yigit
@ 2018-01-10  1:39           ` Lu, Wenzhuo
  0 siblings, 0 replies; 151+ messages in thread
From: Lu, Wenzhuo @ 2018-01-10  1:39 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Wu, Jingjing

Hi Ferruh,

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Wednesday, January 10, 2018 1:58 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v5 07/14] net/avf: enable ops for MAC VLAN
> offload
> 
> On 1/8/2018 5:13 AM, Wenzhuo Lu wrote:
> > From: Jingjing Wu <jingjing.wu@intel.com>
> >
> >  - promiscuous_enable
> >  - promiscuous_disable
> >  - allmulticast_enable
> >  - allmulticast_disable
> >  - mac_addr_add
> >  - mac_addr_remove
> >  - mac_addr_set
> >  - vlan_filter_set
> >  - vlan_offload_set
> 
> Patch subject is not accurate, can you please update it?
Sure, will update it.

> 
> >
> > Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> 
> <...>


^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v5 02/14] net/avf: initialization of avf PMD
  2018-01-09 17:58         ` Ferruh Yigit
@ 2018-01-10  2:59           ` Lu, Wenzhuo
  0 siblings, 0 replies; 151+ messages in thread
From: Lu, Wenzhuo @ 2018-01-10  2:59 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Wu, Jingjing

Hi Ferruh,

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Wednesday, January 10, 2018 1:58 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v5 02/14] net/avf: initialization of avf PMD
> 
> On 1/8/2018 5:13 AM, Wenzhuo Lu wrote:
> > From: Jingjing Wu <jingjing.wu@intel.com>
> >
> > Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> 
> <...>
> 
> > @@ -226,6 +226,11 @@
> CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
> >  CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
> >
> >  #
> > +# Compile burst-oriented AVF PMD driver #
> CONFIG_RTE_LIBRTE_AVF_PMD=y
> 
> is 32bit supported? I am getting following errors [1]
Oh, it's a warning that can be ignored. That's why, when we sent the V2, the system said its compilation was successful.
http://dpdk.org/dev/patchwork/patch/31618/
We will resolve it in a V6.

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v6 00/14] dd new AVF PMD
  2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
                         ` (13 preceding siblings ...)
  2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 14/14] net/avf: enable Rx interrupt support Wenzhuo Lu
@ 2018-01-10  6:15       ` Wenzhuo Lu
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 01/14] net/avf/base: add base code for avf PMD Wenzhuo Lu
                           ` (14 more replies)
  14 siblings, 15 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10  6:15 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

The Adaptive Virtual Function (AVF) driver is a VF driver which supports all future Intel devices without requiring a VM update.
It promises basic high-speed connectivity. And since this happens to be an adaptive VF driver, every new drop of the VF driver would add more and more advanced features that can be turned on in the VM if the underlying HW device supports those advanced features, most importantly in a device-agnostic way without ever compromising on the base functionality. All the AVF interfaces need to follow the AVF spec, and an AVF-compliant interface is supported starting from the Intel® Ethernet Controller 710 Series.

This patch set adds the AVF PMD, supporting:
 - Device initialization
 - Queue setup and Device start
 - Basic Rx and Tx
 - MAC address offload feature
 - VLAN offload feature
 - RSS offload feature
 - Vectored Rx and Tx func
 - Bulk allocate Rx func
 - Rx interrupt support
 - Statistics query

v6:
 - handle the ICC compile warning on 32-bit machines.

v5:
 - some slight changes to address the comments.
 - merge the doc update patch.

v4:
 - update the base code to the newest version.

v3:
 - change the license announcement.
 - update the related documentation.
 - resolve the checkpatch errors, warnings and some checks.
 - handle the comments from the community.

v2:
 - rebase to 17.11
 - add vectored Rx and Tx func
 - add bulk allocate Rx func
 - add Rx interrupt support
 - add statistics query
 - fix coding style issue
 - remove extra compile flags in Makefile
 - add doc to list avf PMD features
 - fix lut setting when rss is disabled
 - fix log init missing
 - remove rx_descriptor_done

Jingjing Wu (12):
  net/avf/base: add base code for avf PMD
  net/avf: initialization of avf PMD
  net/avf: enable queue and device
  net/avf: enable link status update
  net/avf: support stats
  net/avf: enable MAC VLAN and promisc ops
  net/avf: enable ops for RSS setting
  net/avf: enable ops for MTU setting
  net/avf: enable ops to check queue info and status
  net/i40e: support AVF basic interface
  net/avf: enable sse vector Rx Tx func
  net/avf: enable Rx interrupt support

Wenzhuo Lu (2):
  net/avf: enable basic Rx Tx func
  net/avf: enable bulk allocate Rx func

 MAINTAINERS                             |    6 +
 config/common_base                      |   10 +
 doc/guides/nics/features/avf.ini        |   37 +
 doc/guides/nics/features/avf_vec.ini    |   37 +
 doc/guides/nics/intel_vf.rst            |   20 +-
 doc/guides/rel_notes/release_18_02.rst  |   16 +
 drivers/net/Makefile                    |    1 +
 drivers/net/avf/Makefile                |   52 +
 drivers/net/avf/avf.h                   |  219 +++
 drivers/net/avf/avf_ethdev.c            | 1451 ++++++++++++++++
 drivers/net/avf/avf_log.h               |   44 +
 drivers/net/avf/avf_rxtx.c              | 1959 +++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h              |  260 +++
 drivers/net/avf/avf_rxtx_vec_common.h   |  210 +++
 drivers/net/avf/avf_rxtx_vec_sse.c      |  656 +++++++
 drivers/net/avf/avf_vchnl.c             |  812 +++++++++
 drivers/net/avf/base/README             |   19 +
 drivers/net/avf/base/avf_adminq.c       | 1010 +++++++++++
 drivers/net/avf/base/avf_adminq.h       |  166 ++
 drivers/net/avf/base/avf_adminq_cmd.h   | 2842 +++++++++++++++++++++++++++++++
 drivers/net/avf/base/avf_alloc.h        |   65 +
 drivers/net/avf/base/avf_common.c       | 1845 ++++++++++++++++++++
 drivers/net/avf/base/avf_devids.h       |   43 +
 drivers/net/avf/base/avf_hmc.h          |  245 +++
 drivers/net/avf/base/avf_lan_hmc.h      |  200 +++
 drivers/net/avf/base/avf_osdep.h        |  164 ++
 drivers/net/avf/base/avf_prototype.h    |  206 +++
 drivers/net/avf/base/avf_register.h     |  346 ++++
 drivers/net/avf/base/avf_status.h       |  108 ++
 drivers/net/avf/base/avf_type.h         | 2024 ++++++++++++++++++++++
 drivers/net/avf/base/virtchnl.h         |  787 +++++++++
 drivers/net/avf/rte_pmd_avf_version.map |    4 +
 drivers/net/i40e/i40e_ethdev.c          |   69 +-
 drivers/net/i40e/i40e_ethdev.h          |    5 +
 drivers/net/i40e/i40e_pf.c              |  140 +-
 drivers/net/i40e/i40e_pf.h              |    6 +
 mk/rte.app.mk                           |    1 +
 37 files changed, 16058 insertions(+), 27 deletions(-)
 create mode 100644 doc/guides/nics/features/avf.ini
 create mode 100644 doc/guides/nics/features/avf_vec.ini
 create mode 100644 drivers/net/avf/Makefile
 create mode 100644 drivers/net/avf/avf.h
 create mode 100644 drivers/net/avf/avf_ethdev.c
 create mode 100644 drivers/net/avf/avf_log.h
 create mode 100644 drivers/net/avf/avf_rxtx.c
 create mode 100644 drivers/net/avf/avf_rxtx.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_common.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_sse.c
 create mode 100644 drivers/net/avf/avf_vchnl.c
 create mode 100644 drivers/net/avf/base/README
 create mode 100644 drivers/net/avf/base/avf_adminq.c
 create mode 100644 drivers/net/avf/base/avf_adminq.h
 create mode 100644 drivers/net/avf/base/avf_adminq_cmd.h
 create mode 100644 drivers/net/avf/base/avf_alloc.h
 create mode 100644 drivers/net/avf/base/avf_common.c
 create mode 100644 drivers/net/avf/base/avf_devids.h
 create mode 100644 drivers/net/avf/base/avf_hmc.h
 create mode 100644 drivers/net/avf/base/avf_lan_hmc.h
 create mode 100644 drivers/net/avf/base/avf_osdep.h
 create mode 100644 drivers/net/avf/base/avf_prototype.h
 create mode 100644 drivers/net/avf/base/avf_register.h
 create mode 100644 drivers/net/avf/base/avf_status.h
 create mode 100644 drivers/net/avf/base/avf_type.h
 create mode 100644 drivers/net/avf/base/virtchnl.h
 create mode 100644 drivers/net/avf/rte_pmd_avf_version.map

-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v6 01/14] net/avf/base: add base code for avf PMD
  2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] dd new AVF PMD Wenzhuo Lu
@ 2018-01-10  6:15         ` Wenzhuo Lu
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 02/14] net/avf: initialization of " Wenzhuo Lu
                           ` (13 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10  6:15 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu, Wenzhuo Lu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 MAINTAINERS                           |    5 +
 drivers/net/avf/avf_log.h             |   23 +
 drivers/net/avf/base/README           |   19 +
 drivers/net/avf/base/avf_adminq.c     | 1010 ++++++++++++
 drivers/net/avf/base/avf_adminq.h     |  166 ++
 drivers/net/avf/base/avf_adminq_cmd.h | 2842 +++++++++++++++++++++++++++++++++
 drivers/net/avf/base/avf_alloc.h      |   65 +
 drivers/net/avf/base/avf_common.c     | 1845 +++++++++++++++++++++
 drivers/net/avf/base/avf_devids.h     |   43 +
 drivers/net/avf/base/avf_hmc.h        |  245 +++
 drivers/net/avf/base/avf_lan_hmc.h    |  200 +++
 drivers/net/avf/base/avf_osdep.h      |  164 ++
 drivers/net/avf/base/avf_prototype.h  |  206 +++
 drivers/net/avf/base/avf_register.h   |  346 ++++
 drivers/net/avf/base/avf_status.h     |  108 ++
 drivers/net/avf/base/avf_type.h       | 2024 +++++++++++++++++++++++
 drivers/net/avf/base/virtchnl.h       |  787 +++++++++
 17 files changed, 10098 insertions(+)
 create mode 100644 drivers/net/avf/avf_log.h
 create mode 100644 drivers/net/avf/base/README
 create mode 100644 drivers/net/avf/base/avf_adminq.c
 create mode 100644 drivers/net/avf/base/avf_adminq.h
 create mode 100644 drivers/net/avf/base/avf_adminq_cmd.h
 create mode 100644 drivers/net/avf/base/avf_alloc.h
 create mode 100644 drivers/net/avf/base/avf_common.c
 create mode 100644 drivers/net/avf/base/avf_devids.h
 create mode 100644 drivers/net/avf/base/avf_hmc.h
 create mode 100644 drivers/net/avf/base/avf_lan_hmc.h
 create mode 100644 drivers/net/avf/base/avf_osdep.h
 create mode 100644 drivers/net/avf/base/avf_prototype.h
 create mode 100644 drivers/net/avf/base/avf_register.h
 create mode 100644 drivers/net/avf/base/avf_status.h
 create mode 100644 drivers/net/avf/base/avf_type.h
 create mode 100644 drivers/net/avf/base/virtchnl.h

diff --git a/MAINTAINERS b/MAINTAINERS
index e0199b1..17f15b6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -426,6 +426,11 @@ M: Xiao Wang <xiao.w.wang@intel.com>
 F: drivers/net/fm10k/
 F: doc/guides/nics/features/fm10k*.ini
 
+Intel avf
+M: Jingjing Wu <jingjing.wu@intel.com>
+M: Wenzhuo Lu <wenzhuo.lu@intel.com>
+F: drivers/net/avf/
+
 Mellanox mlx4
 M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
 F: drivers/net/mlx4/
diff --git a/drivers/net/avf/avf_log.h b/drivers/net/avf/avf_log.h
new file mode 100644
index 0000000..e3f106b
--- /dev/null
+++ b/drivers/net/avf/avf_log.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_LOG_H_
+#define _AVF_LOG_H_
+
+extern int avf_logtype_init;
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, avf_logtype_init, "%s(): " fmt "\n", \
+		__func__, ## args)
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+extern int avf_logtype_driver;
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, avf_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
+
+#endif /* _AVF_LOG_H_ */
diff --git a/drivers/net/avf/base/README b/drivers/net/avf/base/README
new file mode 100644
index 0000000..4710ae2
--- /dev/null
+++ b/drivers/net/avf/base/README
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+Intel® AVF driver
+=================
+
+This directory contains source code of FreeBSD AVF driver of version
+cid-avf.2018.01.02.tar.gz released by the team which develops
+basic drivers for any AVF NIC. The directory of base/ contains the
+original source package.
+
+Updating the driver
+===================
+
+NOTE: The source code in this directory should not be modified apart from
+the following file(s):
+
+    avf_osdep.h
diff --git a/drivers/net/avf/base/avf_adminq.c b/drivers/net/avf/base/avf_adminq.c
new file mode 100644
index 0000000..616e2a9
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq.c
@@ -0,0 +1,1010 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "avf_status.h"
+#include "avf_type.h"
+#include "avf_register.h"
+#include "avf_adminq.h"
+#include "avf_prototype.h"
+
+/**
+ *  avf_adminq_init_regs - Initialize AdminQ registers
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the alloc_asq and alloc_arq functions have already been called
+ **/
+STATIC void avf_adminq_init_regs(struct avf_hw *hw)
+{
+	/* set head and tail registers in our local struct */
+	if (avf_is_vf(hw)) {
+		hw->aq.asq.tail = AVF_ATQT1;
+		hw->aq.asq.head = AVF_ATQH1;
+		hw->aq.asq.len  = AVF_ATQLEN1;
+		hw->aq.asq.bal  = AVF_ATQBAL1;
+		hw->aq.asq.bah  = AVF_ATQBAH1;
+		hw->aq.arq.tail = AVF_ARQT1;
+		hw->aq.arq.head = AVF_ARQH1;
+		hw->aq.arq.len  = AVF_ARQLEN1;
+		hw->aq.arq.bal  = AVF_ARQBAL1;
+		hw->aq.arq.bah  = AVF_ARQBAH1;
+	}
+}
+
+/**
+ *  avf_alloc_adminq_asq_ring - Allocate Admin Queue send rings
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_alloc_adminq_asq_ring(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	ret_code = avf_allocate_dma_mem(hw, &hw->aq.asq.desc_buf,
+					 avf_mem_atq_ring,
+					 (hw->aq.num_asq_entries *
+					 sizeof(struct avf_aq_desc)),
+					 AVF_ADMINQ_DESC_ALIGNMENT);
+	if (ret_code)
+		return ret_code;
+
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.asq.cmd_buf,
+					  (hw->aq.num_asq_entries *
+					  sizeof(struct avf_asq_cmd_details)));
+	if (ret_code) {
+		avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+		return ret_code;
+	}
+
+	return ret_code;
+}
+
+/**
+ *  avf_alloc_adminq_arq_ring - Allocate Admin Queue receive rings
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_alloc_adminq_arq_ring(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	ret_code = avf_allocate_dma_mem(hw, &hw->aq.arq.desc_buf,
+					 avf_mem_arq_ring,
+					 (hw->aq.num_arq_entries *
+					 sizeof(struct avf_aq_desc)),
+					 AVF_ADMINQ_DESC_ALIGNMENT);
+
+	return ret_code;
+}
+
+/**
+ *  avf_free_adminq_asq - Free Admin Queue send rings
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the posted send buffers have already been cleaned
+ *  and de-allocated
+ **/
+void avf_free_adminq_asq(struct avf_hw *hw)
+{
+	avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+}
+
+/**
+ *  avf_free_adminq_arq - Free Admin Queue receive rings
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the posted receive buffers have already been cleaned
+ *  and de-allocated
+ **/
+void avf_free_adminq_arq(struct avf_hw *hw)
+{
+	avf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+}
+
+/**
+ *  avf_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC enum avf_status_code avf_alloc_arq_bufs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+	struct avf_aq_desc *desc;
+	struct avf_dma_mem *bi;
+	int i;
+
+	/* We'll be allocating the buffer info memory first, then we can
+	 * allocate the mapped buffers for the event processing
+	 */
+
+	/* buffer_info structures do not need alignment */
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.arq.dma_head,
+		(hw->aq.num_arq_entries * sizeof(struct avf_dma_mem)));
+	if (ret_code)
+		goto alloc_arq_bufs;
+	hw->aq.arq.r.arq_bi = (struct avf_dma_mem *)hw->aq.arq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_arq_entries; i++) {
+		bi = &hw->aq.arq.r.arq_bi[i];
+		ret_code = avf_allocate_dma_mem(hw, bi,
+						 avf_mem_arq_buf,
+						 hw->aq.arq_buf_size,
+						 AVF_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_arq_bufs;
+
+		/* now configure the descriptors for use */
+		desc = AVF_ADMINQ_DESC(hw->aq.arq, i);
+
+		desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_BUF);
+		if (hw->aq.arq_buf_size > AVF_AQ_LARGE_BUF)
+			desc->flags |= CPU_TO_LE16(AVF_AQ_FLAG_LB);
+		desc->opcode = 0;
+		/* This is in accordance with Admin queue design, there is no
+		 * register for buffer size configuration
+		 */
+		desc->datalen = CPU_TO_LE16((u16)bi->size);
+		desc->retval = 0;
+		desc->cookie_high = 0;
+		desc->cookie_low = 0;
+		desc->params.external.addr_high =
+			CPU_TO_LE32(AVF_HI_DWORD(bi->pa));
+		desc->params.external.addr_low =
+			CPU_TO_LE32(AVF_LO_DWORD(bi->pa));
+		desc->params.external.param0 = 0;
+		desc->params.external.param1 = 0;
+	}
+
+alloc_arq_bufs:
+	return ret_code;
+
+unwind_alloc_arq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		avf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+	avf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ *  avf_alloc_asq_bufs - Allocate empty buffer structs for the send queue
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC enum avf_status_code avf_alloc_asq_bufs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+	struct avf_dma_mem *bi;
+	int i;
+
+	/* No mapped memory needed yet, just the buffer info structures */
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.asq.dma_head,
+		(hw->aq.num_asq_entries * sizeof(struct avf_dma_mem)));
+	if (ret_code)
+		goto alloc_asq_bufs;
+	hw->aq.asq.r.asq_bi = (struct avf_dma_mem *)hw->aq.asq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_asq_entries; i++) {
+		bi = &hw->aq.asq.r.asq_bi[i];
+		ret_code = avf_allocate_dma_mem(hw, bi,
+						 avf_mem_asq_buf,
+						 hw->aq.asq_buf_size,
+						 AVF_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_asq_bufs;
+	}
+alloc_asq_bufs:
+	return ret_code;
+
+unwind_alloc_asq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		avf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+	avf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ *  avf_free_arq_bufs - Free receive queue buffer info elements
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC void avf_free_arq_bufs(struct avf_hw *hw)
+{
+	int i;
+
+	/* free descriptors */
+	for (i = 0; i < hw->aq.num_arq_entries; i++)
+		avf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+
+	/* free the descriptor memory */
+	avf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+
+	/* free the dma header */
+	avf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+}
+
+/**
+ *  avf_free_asq_bufs - Free send queue buffer info elements
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC void avf_free_asq_bufs(struct avf_hw *hw)
+{
+	int i;
+
+	/* only unmap if the address is non-NULL */
+	for (i = 0; i < hw->aq.num_asq_entries; i++)
+		if (hw->aq.asq.r.asq_bi[i].pa)
+			avf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+
+	/* free the buffer info list */
+	avf_free_virt_mem(hw, &hw->aq.asq.cmd_buf);
+
+	/* free the descriptor memory */
+	avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+
+	/* free the dma header */
+	avf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+}
+
+/**
+ *  avf_config_asq_regs - configure ASQ registers
+ *  @hw: pointer to the hardware structure
+ *
+ *  Configure base address and length registers for the transmit queue
+ **/
+STATIC enum avf_status_code avf_config_asq_regs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+
+	/* set starting point */
+#ifdef INTEGRATED_VF
+	if (avf_is_vf(hw))
+		wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
+					  AVF_ATQLEN1_ATQENABLE_MASK));
+#else
+	wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
+				  AVF_ATQLEN1_ATQENABLE_MASK));
+#endif /* INTEGRATED_VF */
+	wr32(hw, hw->aq.asq.bal, AVF_LO_DWORD(hw->aq.asq.desc_buf.pa));
+	wr32(hw, hw->aq.asq.bah, AVF_HI_DWORD(hw->aq.asq.desc_buf.pa));
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.asq.bal);
+	if (reg != AVF_LO_DWORD(hw->aq.asq.desc_buf.pa))
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ *  avf_config_arq_regs - ARQ register configuration
+ *  @hw: pointer to the hardware structure
+ *
+ * Configure base address and length registers for the receive (event queue)
+ **/
+STATIC enum avf_status_code avf_config_arq_regs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+
+	/* set starting point */
+#ifdef INTEGRATED_VF
+	if (avf_is_vf(hw))
+		wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
+					  AVF_ARQLEN1_ARQENABLE_MASK));
+#else
+	wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
+				  AVF_ARQLEN1_ARQENABLE_MASK));
+#endif /* INTEGRATED_VF */
+	wr32(hw, hw->aq.arq.bal, AVF_LO_DWORD(hw->aq.arq.desc_buf.pa));
+	wr32(hw, hw->aq.arq.bah, AVF_HI_DWORD(hw->aq.arq.desc_buf.pa));
+
+	/* Update tail in the HW to post pre-allocated buffers */
+	wr32(hw, hw->aq.arq.tail, hw->aq.num_arq_entries - 1);
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.arq.bal);
+	if (reg != AVF_LO_DWORD(hw->aq.arq.desc_buf.pa))
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ *  avf_init_asq - main initialization routine for ASQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  This is the main initialization routine for the Admin Send Queue
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.arq_buf_size
+ *
+ *  Do *NOT* hold the lock when calling this as the memory allocation routines
+ *  called are not going to be atomic context safe
+ **/
+enum avf_status_code avf_init_asq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (hw->aq.asq.count > 0) {
+		/* queue already initialized */
+		ret_code = AVF_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.asq.next_to_use = 0;
+	hw->aq.asq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = avf_alloc_adminq_asq_ring(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = avf_alloc_asq_bufs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = avf_config_asq_regs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* success! */
+	hw->aq.asq.count = hw->aq.num_asq_entries;
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	avf_free_adminq_asq(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_init_arq - initialize ARQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main initialization routine for the Admin Receive (Event) Queue.
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.arq_buf_size
+ *
+ *  Do *NOT* hold the lock when calling this as the memory allocation routines
+ *  called are not going to be atomic context safe
+ **/
+enum avf_status_code avf_init_arq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (hw->aq.arq.count > 0) {
+		/* queue already initialized */
+		ret_code = AVF_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.arq.next_to_use = 0;
+	hw->aq.arq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = avf_alloc_adminq_arq_ring(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = avf_alloc_arq_bufs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = avf_config_arq_regs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* success! */
+	hw->aq.arq.count = hw->aq.num_arq_entries;
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	avf_free_adminq_arq(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_asq - shutdown the ASQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main shutdown routine for the Admin Send Queue
+ **/
+enum avf_status_code avf_shutdown_asq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	avf_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	if (hw->aq.asq.count == 0) {
+		ret_code = AVF_ERR_NOT_READY;
+		goto shutdown_asq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+	wr32(hw, hw->aq.asq.len, 0);
+	wr32(hw, hw->aq.asq.bal, 0);
+	wr32(hw, hw->aq.asq.bah, 0);
+
+	hw->aq.asq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	avf_free_asq_bufs(hw);
+
+shutdown_asq_out:
+	avf_release_spinlock(&hw->aq.asq_spinlock);
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_arq - shutdown ARQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main shutdown routine for the Admin Receive Queue
+ **/
+enum avf_status_code avf_shutdown_arq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	avf_acquire_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->aq.arq.count == 0) {
+		ret_code = AVF_ERR_NOT_READY;
+		goto shutdown_arq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+	wr32(hw, hw->aq.arq.len, 0);
+	wr32(hw, hw->aq.arq.bal, 0);
+	wr32(hw, hw->aq.arq.bah, 0);
+
+	hw->aq.arq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	avf_free_arq_bufs(hw);
+
+shutdown_arq_out:
+	avf_release_spinlock(&hw->aq.arq_spinlock);
+	return ret_code;
+}
+
+/**
+ *  avf_init_adminq - main initialization routine for Admin Queue
+ *  @hw: pointer to the hardware structure
+ *
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.num_arq_entries
+ *     - hw->aq.arq_buf_size
+ *     - hw->aq.asq_buf_size
+ **/
+enum avf_status_code avf_init_adminq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+	avf_init_spinlock(&hw->aq.asq_spinlock);
+	avf_init_spinlock(&hw->aq.arq_spinlock);
+
+	/* Set up register offsets */
+	avf_adminq_init_regs(hw);
+
+	/* setup ASQ command write back timeout */
+	hw->aq.asq_cmd_timeout = AVF_ASQ_CMD_TIMEOUT;
+
+	/* allocate the ASQ */
+	ret_code = avf_init_asq(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_destroy_spinlocks;
+
+	/* allocate the ARQ */
+	ret_code = avf_init_arq(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_asq;
+
+	ret_code = AVF_SUCCESS;
+
+	/* success! */
+	goto init_adminq_exit;
+
+init_adminq_free_asq:
+	avf_shutdown_asq(hw);
+init_adminq_destroy_spinlocks:
+	avf_destroy_spinlock(&hw->aq.asq_spinlock);
+	avf_destroy_spinlock(&hw->aq.arq_spinlock);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_adminq - shutdown routine for the Admin Queue
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_shutdown_adminq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (avf_check_asq_alive(hw))
+		avf_aq_queue_shutdown(hw, true);
+
+	avf_shutdown_asq(hw);
+	avf_shutdown_arq(hw);
+	avf_destroy_spinlock(&hw->aq.asq_spinlock);
+	avf_destroy_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->nvm_buff.va)
+		avf_free_virt_mem(hw, &hw->nvm_buff);
+
+	return ret_code;
+}
+
+/**
+ *  avf_clean_asq - cleans Admin send queue
+ *  @hw: pointer to the hardware structure
+ *
+ *  returns the number of free desc
+ **/
+u16 avf_clean_asq(struct avf_hw *hw)
+{
+	struct avf_adminq_ring *asq = &(hw->aq.asq);
+	struct avf_asq_cmd_details *details;
+	u16 ntc = asq->next_to_clean;
+	struct avf_aq_desc desc_cb;
+	struct avf_aq_desc *desc;
+
+	desc = AVF_ADMINQ_DESC(*asq, ntc);
+	details = AVF_ADMINQ_DETAILS(*asq, ntc);
+	while (rd32(hw, hw->aq.asq.head) != ntc) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "ntc %d head %d.\n", ntc, rd32(hw, hw->aq.asq.head));
+
+		if (details->callback) {
+			AVF_ADMINQ_CALLBACK cb_func =
+					(AVF_ADMINQ_CALLBACK)details->callback;
+			avf_memcpy(&desc_cb, desc, sizeof(struct avf_aq_desc),
+				    AVF_DMA_TO_DMA);
+			cb_func(hw, &desc_cb);
+		}
+		avf_memset(desc, 0, sizeof(*desc), AVF_DMA_MEM);
+		avf_memset(details, 0, sizeof(*details), AVF_NONDMA_MEM);
+		ntc++;
+		if (ntc == asq->count)
+			ntc = 0;
+		desc = AVF_ADMINQ_DESC(*asq, ntc);
+		details = AVF_ADMINQ_DETAILS(*asq, ntc);
+	}
+
+	asq->next_to_clean = ntc;
+
+	return AVF_DESC_UNUSED(asq);
+}
+
+/**
+ *  avf_asq_done - check if FW has processed the Admin Send Queue
+ *  @hw: pointer to the hw struct
+ *
+ *  Returns true if the firmware has processed all descriptors on the
+ *  admin send queue. Returns false if there are still requests pending.
+ **/
+bool avf_asq_done(struct avf_hw *hw)
+{
+	/* AQ designers suggest use of head for better
+	 * timing reliability than DD bit
+	 */
+	return rd32(hw, hw->aq.asq.head) == hw->aq.asq.next_to_use;
+
+}
+
+/**
+ *  avf_asq_send_command - send command to Admin Queue
+ *  @hw: pointer to the hw struct
+ *  @desc: prefilled descriptor describing the command (non DMA mem)
+ *  @buff: buffer to use for indirect commands
+ *  @buff_size: size of buffer for indirect commands
+ *  @cmd_details: pointer to command details structure
+ *
+ *  This is the main send command driver routine for the Admin Queue send
+ *  queue.  It runs the queue, cleans the queue, etc
+ **/
+enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
+				struct avf_aq_desc *desc,
+				void *buff, /* can be NULL */
+				u16  buff_size,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_dma_mem *dma_buff = NULL;
+	struct avf_asq_cmd_details *details;
+	struct avf_aq_desc *desc_on_ring;
+	bool cmd_completed = false;
+	u16  retval = 0;
+	u32  val = 0;
+
+	avf_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	hw->aq.asq_last_status = AVF_AQ_RC_OK;
+
+	if (hw->aq.asq.count == 0) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Admin queue not initialized.\n");
+		status = AVF_ERR_QUEUE_EMPTY;
+		goto asq_send_command_error;
+	}
+
+	val = rd32(hw, hw->aq.asq.head);
+	if (val >= hw->aq.num_asq_entries) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: head overrun at %d\n", val);
+		status = AVF_ERR_QUEUE_EMPTY;
+		goto asq_send_command_error;
+	}
+
+	details = AVF_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use);
+	if (cmd_details) {
+		avf_memcpy(details,
+			    cmd_details,
+			    sizeof(struct avf_asq_cmd_details),
+			    AVF_NONDMA_TO_NONDMA);
+
+		/* If the cmd_details are defined copy the cookie.  The
+		 * CPU_TO_LE32 is not needed here because the data is ignored
+		 * by the FW, only used by the driver
+		 */
+		if (details->cookie) {
+			desc->cookie_high =
+				CPU_TO_LE32(AVF_HI_DWORD(details->cookie));
+			desc->cookie_low =
+				CPU_TO_LE32(AVF_LO_DWORD(details->cookie));
+		}
+	} else {
+		avf_memset(details, 0,
+			    sizeof(struct avf_asq_cmd_details),
+			    AVF_NONDMA_MEM);
+	}
+
+	/* clear requested flags and then set additional flags if defined */
+	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
+	desc->flags |= CPU_TO_LE16(details->flags_ena);
+
+	if (buff_size > hw->aq.asq_buf_size) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Invalid buffer size: %d.\n",
+			   buff_size);
+		status = AVF_ERR_INVALID_SIZE;
+		goto asq_send_command_error;
+	}
+
+	if (details->postpone && !details->async) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Async flag not set along with postpone flag");
+		status = AVF_ERR_PARAM;
+		goto asq_send_command_error;
+	}
+
+	/* call clean and check queue available function to reclaim the
+	 * descriptors that were processed by FW, the function returns the
+	 * number of desc available
+	 */
+	/* the clean function called here could be called in a separate thread
+	 * in case of asynchronous completions
+	 */
+	if (avf_clean_asq(hw) == 0) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Error queue is full.\n");
+		status = AVF_ERR_ADMIN_QUEUE_FULL;
+		goto asq_send_command_error;
+	}
+
+	/* initialize the temp desc pointer with the right desc */
+	desc_on_ring = AVF_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use);
+
+	/* if the desc is available copy the temp desc to the right place */
+	avf_memcpy(desc_on_ring, desc, sizeof(struct avf_aq_desc),
+		    AVF_NONDMA_TO_DMA);
+
+	/* if buff is not NULL assume indirect command */
+	if (buff != NULL) {
+		dma_buff = &(hw->aq.asq.r.asq_bi[hw->aq.asq.next_to_use]);
+		/* copy the user buff into the respective DMA buff */
+		avf_memcpy(dma_buff->va, buff, buff_size,
+			    AVF_NONDMA_TO_DMA);
+		desc_on_ring->datalen = CPU_TO_LE16(buff_size);
+
+		/* Update the address values in the desc with the pa value
+		 * for respective buffer
+		 */
+		desc_on_ring->params.external.addr_high =
+				CPU_TO_LE32(AVF_HI_DWORD(dma_buff->pa));
+		desc_on_ring->params.external.addr_low =
+				CPU_TO_LE32(AVF_LO_DWORD(dma_buff->pa));
+	}
+
+	/* bump the tail */
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc_on_ring,
+		      buff, buff_size);
+	(hw->aq.asq.next_to_use)++;
+	if (hw->aq.asq.next_to_use == hw->aq.asq.count)
+		hw->aq.asq.next_to_use = 0;
+	if (!details->postpone)
+		wr32(hw, hw->aq.asq.tail, hw->aq.asq.next_to_use);
+
+	/* if cmd_details are not defined or async flag is not set,
+	 * we need to wait for desc write back
+	 */
+	if (!details->async && !details->postpone) {
+		u32 total_delay = 0;
+
+		do {
+			/* AQ designers suggest use of head for better
+			 * timing reliability than DD bit
+			 */
+			if (avf_asq_done(hw))
+				break;
+			avf_usec_delay(50);
+			total_delay += 50;
+		} while (total_delay < hw->aq.asq_cmd_timeout);
+	}
+
+	/* if ready, copy the desc back to temp */
+	if (avf_asq_done(hw)) {
+		avf_memcpy(desc, desc_on_ring, sizeof(struct avf_aq_desc),
+			    AVF_DMA_TO_NONDMA);
+		if (buff != NULL)
+			avf_memcpy(buff, dma_buff->va, buff_size,
+				    AVF_DMA_TO_NONDMA);
+		retval = LE16_TO_CPU(desc->retval);
+		if (retval != 0) {
+			avf_debug(hw,
+				   AVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: Command completed with error 0x%X.\n",
+				   retval);
+
+			/* strip off FW internal code */
+			retval &= 0xff;
+		}
+		cmd_completed = true;
+		if ((enum avf_admin_queue_err)retval == AVF_AQ_RC_OK)
+			status = AVF_SUCCESS;
+		else
+			status = AVF_ERR_ADMIN_QUEUE_ERROR;
+		hw->aq.asq_last_status = (enum avf_admin_queue_err)retval;
+	}
+
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+		   "AQTX: desc and buffer writeback:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc, buff, buff_size);
+
+	/* save writeback aq if requested */
+	if (details->wb_desc)
+		avf_memcpy(details->wb_desc, desc_on_ring,
+			    sizeof(struct avf_aq_desc), AVF_DMA_TO_NONDMA);
+
+	/* update the error if time out occurred */
+	if ((!cmd_completed) &&
+	    (!details->async && !details->postpone)) {
+		if (rd32(hw, hw->aq.asq.len) & AVF_ATQLEN1_ATQCRIT_MASK) {
+			avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: AQ Critical error.\n");
+			status = AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR;
+		} else {
+			avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: Writeback timeout.\n");
+			status = AVF_ERR_ADMIN_QUEUE_TIMEOUT;
+		}
+	}
+
+asq_send_command_error:
+	avf_release_spinlock(&hw->aq.asq_spinlock);
+	return status;
+}
+
+/**
+ *  avf_fill_default_direct_cmd_desc - AQ descriptor helper function
+ *  @desc:     pointer to the temp descriptor (non DMA mem)
+ *  @opcode:   the opcode can be used to decide which flags to turn off or on
+ *
+ *  Fill the desc with default values
+ **/
+void avf_fill_default_direct_cmd_desc(struct avf_aq_desc *desc,
+				       u16 opcode)
+{
+	/* zero out the desc */
+	avf_memset((void *)desc, 0, sizeof(struct avf_aq_desc),
+		    AVF_NONDMA_MEM);
+	desc->opcode = CPU_TO_LE16(opcode);
+	desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_SI);
+}
+
+/**
+ *  avf_clean_arq_element
+ *  @hw: pointer to the hw struct
+ *  @e: event info from the receive descriptor, includes any buffers
+ *  @pending: number of events that could be left to process
+ *
+ *  This function cleans one Admin Receive Queue element and returns
+ *  the contents through e.  It can also return how many events are
+ *  left to process through 'pending'
+ **/
+enum avf_status_code avf_clean_arq_element(struct avf_hw *hw,
+					     struct avf_arq_event_info *e,
+					     u16 *pending)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u16 ntc = hw->aq.arq.next_to_clean;
+	struct avf_aq_desc *desc;
+	struct avf_dma_mem *bi;
+	u16 desc_idx;
+	u16 datalen;
+	u16 flags;
+	u16 ntu;
+
+	/* pre-clean the event info */
+	avf_memset(&e->desc, 0, sizeof(e->desc), AVF_NONDMA_MEM);
+
+	/* take the lock before we start messing with the ring */
+	avf_acquire_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->aq.arq.count == 0) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQRX: Admin queue not initialized.\n");
+		ret_code = AVF_ERR_QUEUE_EMPTY;
+		goto clean_arq_element_err;
+	}
+
+	/* set next_to_use to head */
+#ifdef INTEGRATED_VF
+	if (!avf_is_vf(hw))
+		ntu = rd32(hw, hw->aq.arq.head) & AVF_PF_ARQH_ARQH_MASK;
+	else
+		ntu = rd32(hw, hw->aq.arq.head) & AVF_ARQH1_ARQH_MASK;
+#else
+	ntu = rd32(hw, hw->aq.arq.head) & AVF_ARQH1_ARQH_MASK;
+#endif /* INTEGRATED_VF */
+	if (ntu == ntc) {
+		/* nothing to do - shouldn't need to update ring's values */
+		ret_code = AVF_ERR_ADMIN_QUEUE_NO_WORK;
+		goto clean_arq_element_out;
+	}
+
+	/* now clean the next descriptor */
+	desc = AVF_ADMINQ_DESC(hw->aq.arq, ntc);
+	desc_idx = ntc;
+
+	hw->aq.arq_last_status =
+		(enum avf_admin_queue_err)LE16_TO_CPU(desc->retval);
+	flags = LE16_TO_CPU(desc->flags);
+	if (flags & AVF_AQ_FLAG_ERR) {
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQRX: Event received with error 0x%X.\n",
+			   hw->aq.arq_last_status);
+	}
+
+	avf_memcpy(&e->desc, desc, sizeof(struct avf_aq_desc),
+		    AVF_DMA_TO_NONDMA);
+	datalen = LE16_TO_CPU(desc->datalen);
+	e->msg_len = min(datalen, e->buf_len);
+	if (e->msg_buf != NULL && (e->msg_len != 0))
+		avf_memcpy(e->msg_buf,
+			    hw->aq.arq.r.arq_bi[desc_idx].va,
+			    e->msg_len, AVF_DMA_TO_NONDMA);
+
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, "AQRX: desc and buffer:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf,
+		      hw->aq.arq_buf_size);
+
+	/* Restore the original datalen and buffer address in the desc,
+	 * FW updates datalen to indicate the event message
+	 * size
+	 */
+	bi = &hw->aq.arq.r.arq_bi[ntc];
+	avf_memset((void *)desc, 0, sizeof(struct avf_aq_desc), AVF_DMA_MEM);
+
+	desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_BUF);
+	if (hw->aq.arq_buf_size > AVF_AQ_LARGE_BUF)
+		desc->flags |= CPU_TO_LE16(AVF_AQ_FLAG_LB);
+	desc->datalen = CPU_TO_LE16((u16)bi->size);
+	desc->params.external.addr_high = CPU_TO_LE32(AVF_HI_DWORD(bi->pa));
+	desc->params.external.addr_low = CPU_TO_LE32(AVF_LO_DWORD(bi->pa));
+
+	/* set tail = the last cleaned desc index. */
+	wr32(hw, hw->aq.arq.tail, ntc);
+	/* ntc is updated to tail + 1 */
+	ntc++;
+	if (ntc == hw->aq.num_arq_entries)
+		ntc = 0;
+	hw->aq.arq.next_to_clean = ntc;
+	hw->aq.arq.next_to_use = ntu;
+
+clean_arq_element_out:
+	/* Set pending if needed, unlock and return */
+	if (pending != NULL)
+		*pending = (ntc > ntu ? hw->aq.arq.count : 0) + (ntu - ntc);
+clean_arq_element_err:
+	avf_release_spinlock(&hw->aq.arq_spinlock);
+
+	return ret_code;
+}
+
diff --git a/drivers/net/avf/base/avf_adminq.h b/drivers/net/avf/base/avf_adminq.h
new file mode 100644
index 0000000..d7d242a
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq.h
@@ -0,0 +1,166 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ADMINQ_H_
+#define _AVF_ADMINQ_H_
+
+#include "avf_osdep.h"
+#include "avf_status.h"
+#include "avf_adminq_cmd.h"
+
+#define AVF_ADMINQ_DESC(R, i)   \
+	(&(((struct avf_aq_desc *)((R).desc_buf.va))[i]))
+
+#define AVF_ADMINQ_DESC_ALIGNMENT 4096
+
+struct avf_adminq_ring {
+	struct avf_virt_mem dma_head;	/* space for dma structures */
+	struct avf_dma_mem desc_buf;	/* descriptor ring memory */
+	struct avf_virt_mem cmd_buf;	/* command buffer memory */
+
+	union {
+		struct avf_dma_mem *asq_bi;
+		struct avf_dma_mem *arq_bi;
+	} r;
+
+	u16 count;		/* Number of descriptors */
+	u16 rx_buf_len;		/* Admin Receive Queue buffer length */
+
+	/* used for interrupt processing */
+	u16 next_to_use;
+	u16 next_to_clean;
+
+	/* used for queue tracking */
+	u32 head;
+	u32 tail;
+	u32 len;
+	u32 bah;
+	u32 bal;
+};
+
+/* ASQ transaction details */
+struct avf_asq_cmd_details {
+	void *callback; /* cast from type AVF_ADMINQ_CALLBACK */
+	u64 cookie;
+	u16 flags_ena;
+	u16 flags_dis;
+	bool async;
+	bool postpone;
+	struct avf_aq_desc *wb_desc;
+};
+
+#define AVF_ADMINQ_DETAILS(R, i)   \
+	(&(((struct avf_asq_cmd_details *)((R).cmd_buf.va))[i]))
+
+/* ARQ event information */
+struct avf_arq_event_info {
+	struct avf_aq_desc desc;
+	u16 msg_len;
+	u16 buf_len;
+	u8 *msg_buf;
+};
+
+/* Admin Queue information */
+struct avf_adminq_info {
+	struct avf_adminq_ring arq;    /* receive queue */
+	struct avf_adminq_ring asq;    /* send queue */
+	u32 asq_cmd_timeout;            /* send queue cmd write back timeout*/
+	u16 num_arq_entries;            /* receive queue depth */
+	u16 num_asq_entries;            /* send queue depth */
+	u16 arq_buf_size;               /* receive queue buffer size */
+	u16 asq_buf_size;               /* send queue buffer size */
+	u16 fw_maj_ver;                 /* firmware major version */
+	u16 fw_min_ver;                 /* firmware minor version */
+	u32 fw_build;                   /* firmware build number */
+	u16 api_maj_ver;                /* api major version */
+	u16 api_min_ver;                /* api minor version */
+
+	struct avf_spinlock asq_spinlock; /* Send queue spinlock */
+	struct avf_spinlock arq_spinlock; /* Receive queue spinlock */
+
+	/* last status values on send and receive queues */
+	enum avf_admin_queue_err asq_last_status;
+	enum avf_admin_queue_err arq_last_status;
+};
+
+/**
+ * avf_aq_rc_to_posix - convert errors to user-land codes
+ * aq_ret: AdminQ handler error code can override aq_rc
+ * aq_rc: AdminQ firmware error code to convert
+ **/
+STATIC INLINE int avf_aq_rc_to_posix(int aq_ret, int aq_rc)
+{
+	int aq_to_posix[] = {
+		0,           /* AVF_AQ_RC_OK */
+		-EPERM,      /* AVF_AQ_RC_EPERM */
+		-ENOENT,     /* AVF_AQ_RC_ENOENT */
+		-ESRCH,      /* AVF_AQ_RC_ESRCH */
+		-EINTR,      /* AVF_AQ_RC_EINTR */
+		-EIO,        /* AVF_AQ_RC_EIO */
+		-ENXIO,      /* AVF_AQ_RC_ENXIO */
+		-E2BIG,      /* AVF_AQ_RC_E2BIG */
+		-EAGAIN,     /* AVF_AQ_RC_EAGAIN */
+		-ENOMEM,     /* AVF_AQ_RC_ENOMEM */
+		-EACCES,     /* AVF_AQ_RC_EACCES */
+		-EFAULT,     /* AVF_AQ_RC_EFAULT */
+		-EBUSY,      /* AVF_AQ_RC_EBUSY */
+		-EEXIST,     /* AVF_AQ_RC_EEXIST */
+		-EINVAL,     /* AVF_AQ_RC_EINVAL */
+		-ENOTTY,     /* AVF_AQ_RC_ENOTTY */
+		-ENOSPC,     /* AVF_AQ_RC_ENOSPC */
+		-ENOSYS,     /* AVF_AQ_RC_ENOSYS */
+		-ERANGE,     /* AVF_AQ_RC_ERANGE */
+		-EPIPE,      /* AVF_AQ_RC_EFLUSHED */
+		-ESPIPE,     /* AVF_AQ_RC_BAD_ADDR */
+		-EROFS,      /* AVF_AQ_RC_EMODE */
+		-EFBIG,      /* AVF_AQ_RC_EFBIG */
+	};
+
+	/* aq_rc is invalid if AQ timed out */
+	if (aq_ret == AVF_ERR_ADMIN_QUEUE_TIMEOUT)
+		return -EAGAIN;
+
+	if (!((u32)aq_rc < (sizeof(aq_to_posix) / sizeof((aq_to_posix)[0]))))
+		return -ERANGE;
+
+	return aq_to_posix[aq_rc];
+}
+
+/* general information */
+#define AVF_AQ_LARGE_BUF	512
+#define AVF_ASQ_CMD_TIMEOUT	250000  /* usecs */
+
+void avf_fill_default_direct_cmd_desc(struct avf_aq_desc *desc,
+				       u16 opcode);
+
+#endif /* _AVF_ADMINQ_H_ */
diff --git a/drivers/net/avf/base/avf_adminq_cmd.h b/drivers/net/avf/base/avf_adminq_cmd.h
new file mode 100644
index 0000000..1709f31
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq_cmd.h
@@ -0,0 +1,2842 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ADMINQ_CMD_H_
+#define _AVF_ADMINQ_CMD_H_
+
+/* This header file defines the avf Admin Queue commands and is shared between
+ * avf Firmware and Software.
+ *
+ * This file needs to comply with the Linux Kernel coding style.
+ */
+
+
+#define AVF_FW_API_VERSION_MAJOR	0x0001
+#define AVF_FW_API_VERSION_MINOR_X722	0x0005
+#define AVF_FW_API_VERSION_MINOR_X710	0x0007
+
+#define AVF_FW_MINOR_VERSION(_h) ((_h)->mac.type == AVF_MAC_XL710 ? \
+					AVF_FW_API_VERSION_MINOR_X710 : \
+					AVF_FW_API_VERSION_MINOR_X722)
+
+/* API version 1.7 implements additional link and PHY-specific APIs  */
+#define AVF_MINOR_VER_GET_LINK_INFO_XL710 0x0007
+
+struct avf_aq_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 retval;
+	__le32 cookie_high;
+	__le32 cookie_low;
+	union {
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 param2;
+			__le32 param3;
+		} internal;
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 addr_high;
+			__le32 addr_low;
+		} external;
+		u8 raw[16];
+	} params;
+};
+
+/* Flags sub-structure
+ * |0  |1  |2  |3  |4  |5  |6  |7  |8  |9  |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR|VFE| * *  RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
+ */
+
+/* command flags and offsets*/
+#define AVF_AQ_FLAG_DD_SHIFT	0
+#define AVF_AQ_FLAG_CMP_SHIFT	1
+#define AVF_AQ_FLAG_ERR_SHIFT	2
+#define AVF_AQ_FLAG_VFE_SHIFT	3
+#define AVF_AQ_FLAG_LB_SHIFT	9
+#define AVF_AQ_FLAG_RD_SHIFT	10
+#define AVF_AQ_FLAG_VFC_SHIFT	11
+#define AVF_AQ_FLAG_BUF_SHIFT	12
+#define AVF_AQ_FLAG_SI_SHIFT	13
+#define AVF_AQ_FLAG_EI_SHIFT	14
+#define AVF_AQ_FLAG_FE_SHIFT	15
+
+#define AVF_AQ_FLAG_DD		(1 << AVF_AQ_FLAG_DD_SHIFT)  /* 0x1    */
+#define AVF_AQ_FLAG_CMP	(1 << AVF_AQ_FLAG_CMP_SHIFT) /* 0x2    */
+#define AVF_AQ_FLAG_ERR	(1 << AVF_AQ_FLAG_ERR_SHIFT) /* 0x4    */
+#define AVF_AQ_FLAG_VFE	(1 << AVF_AQ_FLAG_VFE_SHIFT) /* 0x8    */
+#define AVF_AQ_FLAG_LB		(1 << AVF_AQ_FLAG_LB_SHIFT)  /* 0x200  */
+#define AVF_AQ_FLAG_RD		(1 << AVF_AQ_FLAG_RD_SHIFT)  /* 0x400  */
+#define AVF_AQ_FLAG_VFC	(1 << AVF_AQ_FLAG_VFC_SHIFT) /* 0x800  */
+#define AVF_AQ_FLAG_BUF	(1 << AVF_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
+#define AVF_AQ_FLAG_SI		(1 << AVF_AQ_FLAG_SI_SHIFT)  /* 0x2000 */
+#define AVF_AQ_FLAG_EI		(1 << AVF_AQ_FLAG_EI_SHIFT)  /* 0x4000 */
+#define AVF_AQ_FLAG_FE		(1 << AVF_AQ_FLAG_FE_SHIFT)  /* 0x8000 */
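+
+/* Usage sketch (illustrative only, not part of the on-wire definition):
+ * a caller issuing an indirect command with an attached data buffer
+ * typically ORs BUF (and RD when the buffer carries command data) into
+ * the descriptor flags, assuming the CPU_TO_LE16 helper from avf_osdep.h:
+ *
+ *	desc.flags |= CPU_TO_LE16(AVF_AQ_FLAG_BUF | AVF_AQ_FLAG_RD);
+ */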
+
+/* error codes */
+enum avf_admin_queue_err {
+	AVF_AQ_RC_OK		= 0,  /* success */
+	AVF_AQ_RC_EPERM	= 1,  /* Operation not permitted */
+	AVF_AQ_RC_ENOENT	= 2,  /* No such element */
+	AVF_AQ_RC_ESRCH	= 3,  /* Bad opcode */
+	AVF_AQ_RC_EINTR	= 4,  /* operation interrupted */
+	AVF_AQ_RC_EIO		= 5,  /* I/O error */
+	AVF_AQ_RC_ENXIO	= 6,  /* No such resource */
+	AVF_AQ_RC_E2BIG	= 7,  /* Arg too long */
+	AVF_AQ_RC_EAGAIN	= 8,  /* Try again */
+	AVF_AQ_RC_ENOMEM	= 9,  /* Out of memory */
+	AVF_AQ_RC_EACCES	= 10, /* Permission denied */
+	AVF_AQ_RC_EFAULT	= 11, /* Bad address */
+	AVF_AQ_RC_EBUSY	= 12, /* Device or resource busy */
+	AVF_AQ_RC_EEXIST	= 13, /* object already exists */
+	AVF_AQ_RC_EINVAL	= 14, /* Invalid argument */
+	AVF_AQ_RC_ENOTTY	= 15, /* Not a typewriter */
+	AVF_AQ_RC_ENOSPC	= 16, /* No space left or alloc failure */
+	AVF_AQ_RC_ENOSYS	= 17, /* Function not implemented */
+	AVF_AQ_RC_ERANGE	= 18, /* Parameter out of range */
+	AVF_AQ_RC_EFLUSHED	= 19, /* Cmd flushed due to prev cmd error */
+	AVF_AQ_RC_BAD_ADDR	= 20, /* Descriptor contains a bad pointer */
+	AVF_AQ_RC_EMODE	= 21, /* Op not allowed in current dev mode */
+	AVF_AQ_RC_EFBIG	= 22, /* File too large */
+};
+
+/* Admin Queue command opcodes */
+enum avf_admin_queue_opc {
+	/* aq commands */
+	avf_aqc_opc_get_version	= 0x0001,
+	avf_aqc_opc_driver_version	= 0x0002,
+	avf_aqc_opc_queue_shutdown	= 0x0003,
+	avf_aqc_opc_set_pf_context	= 0x0004,
+
+	/* resource ownership */
+	avf_aqc_opc_request_resource	= 0x0008,
+	avf_aqc_opc_release_resource	= 0x0009,
+
+	avf_aqc_opc_list_func_capabilities	= 0x000A,
+	avf_aqc_opc_list_dev_capabilities	= 0x000B,
+
+	/* Proxy commands */
+	avf_aqc_opc_set_proxy_config		= 0x0104,
+	avf_aqc_opc_set_ns_proxy_table_entry	= 0x0105,
+
+	/* LAA */
+	avf_aqc_opc_mac_address_read	= 0x0107,
+	avf_aqc_opc_mac_address_write	= 0x0108,
+
+	/* PXE */
+	avf_aqc_opc_clear_pxe_mode	= 0x0110,
+
+	/* WoL commands */
+	avf_aqc_opc_set_wol_filter	= 0x0120,
+	avf_aqc_opc_get_wake_reason	= 0x0121,
+	avf_aqc_opc_clear_all_wol_filters = 0x025E,
+
+	/* internal switch commands */
+	avf_aqc_opc_get_switch_config		= 0x0200,
+	avf_aqc_opc_add_statistics		= 0x0201,
+	avf_aqc_opc_remove_statistics		= 0x0202,
+	avf_aqc_opc_set_port_parameters	= 0x0203,
+	avf_aqc_opc_get_switch_resource_alloc	= 0x0204,
+	avf_aqc_opc_set_switch_config		= 0x0205,
+	avf_aqc_opc_rx_ctl_reg_read		= 0x0206,
+	avf_aqc_opc_rx_ctl_reg_write		= 0x0207,
+
+	avf_aqc_opc_add_vsi			= 0x0210,
+	avf_aqc_opc_update_vsi_parameters	= 0x0211,
+	avf_aqc_opc_get_vsi_parameters		= 0x0212,
+
+	avf_aqc_opc_add_pv			= 0x0220,
+	avf_aqc_opc_update_pv_parameters	= 0x0221,
+	avf_aqc_opc_get_pv_parameters		= 0x0222,
+
+	avf_aqc_opc_add_veb			= 0x0230,
+	avf_aqc_opc_update_veb_parameters	= 0x0231,
+	avf_aqc_opc_get_veb_parameters		= 0x0232,
+
+	avf_aqc_opc_delete_element		= 0x0243,
+
+	avf_aqc_opc_add_macvlan		= 0x0250,
+	avf_aqc_opc_remove_macvlan		= 0x0251,
+	avf_aqc_opc_add_vlan			= 0x0252,
+	avf_aqc_opc_remove_vlan		= 0x0253,
+	avf_aqc_opc_set_vsi_promiscuous_modes	= 0x0254,
+	avf_aqc_opc_add_tag			= 0x0255,
+	avf_aqc_opc_remove_tag			= 0x0256,
+	avf_aqc_opc_add_multicast_etag		= 0x0257,
+	avf_aqc_opc_remove_multicast_etag	= 0x0258,
+	avf_aqc_opc_update_tag			= 0x0259,
+	avf_aqc_opc_add_control_packet_filter	= 0x025A,
+	avf_aqc_opc_remove_control_packet_filter	= 0x025B,
+	avf_aqc_opc_add_cloud_filters		= 0x025C,
+	avf_aqc_opc_remove_cloud_filters	= 0x025D,
+	avf_aqc_opc_clear_wol_switch_filters	= 0x025E,
+	avf_aqc_opc_replace_cloud_filters	= 0x025F,
+
+	avf_aqc_opc_add_mirror_rule	= 0x0260,
+	avf_aqc_opc_delete_mirror_rule	= 0x0261,
+
+	/* Dynamic Device Personalization */
+	avf_aqc_opc_write_personalization_profile	= 0x0270,
+	avf_aqc_opc_get_personalization_profile_list	= 0x0271,
+
+	/* DCB commands */
+	avf_aqc_opc_dcb_ignore_pfc	= 0x0301,
+	avf_aqc_opc_dcb_updated	= 0x0302,
+	avf_aqc_opc_set_dcb_parameters = 0x0303,
+
+	/* TX scheduler */
+	avf_aqc_opc_configure_vsi_bw_limit		= 0x0400,
+	avf_aqc_opc_configure_vsi_ets_sla_bw_limit	= 0x0406,
+	avf_aqc_opc_configure_vsi_tc_bw		= 0x0407,
+	avf_aqc_opc_query_vsi_bw_config		= 0x0408,
+	avf_aqc_opc_query_vsi_ets_sla_config		= 0x040A,
+	avf_aqc_opc_configure_switching_comp_bw_limit	= 0x0410,
+
+	avf_aqc_opc_enable_switching_comp_ets			= 0x0413,
+	avf_aqc_opc_modify_switching_comp_ets			= 0x0414,
+	avf_aqc_opc_disable_switching_comp_ets			= 0x0415,
+	avf_aqc_opc_configure_switching_comp_ets_bw_limit	= 0x0416,
+	avf_aqc_opc_configure_switching_comp_bw_config		= 0x0417,
+	avf_aqc_opc_query_switching_comp_ets_config		= 0x0418,
+	avf_aqc_opc_query_port_ets_config			= 0x0419,
+	avf_aqc_opc_query_switching_comp_bw_config		= 0x041A,
+	avf_aqc_opc_suspend_port_tx				= 0x041B,
+	avf_aqc_opc_resume_port_tx				= 0x041C,
+	avf_aqc_opc_configure_partition_bw			= 0x041D,
+
+	/* hmc */
+	avf_aqc_opc_query_hmc_resource_profile	= 0x0500,
+	avf_aqc_opc_set_hmc_resource_profile	= 0x0501,
+
+	/* phy commands*/
+	avf_aqc_opc_get_phy_abilities		= 0x0600,
+	avf_aqc_opc_set_phy_config		= 0x0601,
+	avf_aqc_opc_set_mac_config		= 0x0603,
+	avf_aqc_opc_set_link_restart_an	= 0x0605,
+	avf_aqc_opc_get_link_status		= 0x0607,
+	avf_aqc_opc_set_phy_int_mask		= 0x0613,
+	avf_aqc_opc_get_local_advt_reg		= 0x0614,
+	avf_aqc_opc_set_local_advt_reg		= 0x0615,
+	avf_aqc_opc_get_partner_advt		= 0x0616,
+	avf_aqc_opc_set_lb_modes		= 0x0618,
+	avf_aqc_opc_get_phy_wol_caps		= 0x0621,
+	avf_aqc_opc_set_phy_debug		= 0x0622,
+	avf_aqc_opc_upload_ext_phy_fm		= 0x0625,
+	avf_aqc_opc_run_phy_activity		= 0x0626,
+	avf_aqc_opc_set_phy_register		= 0x0628,
+	avf_aqc_opc_get_phy_register		= 0x0629,
+
+	/* NVM commands */
+	avf_aqc_opc_nvm_read			= 0x0701,
+	avf_aqc_opc_nvm_erase			= 0x0702,
+	avf_aqc_opc_nvm_update			= 0x0703,
+	avf_aqc_opc_nvm_config_read		= 0x0704,
+	avf_aqc_opc_nvm_config_write		= 0x0705,
+	avf_aqc_opc_nvm_progress		= 0x0706,
+	avf_aqc_opc_oem_post_update		= 0x0720,
+	avf_aqc_opc_thermal_sensor		= 0x0721,
+
+	/* virtualization commands */
+	avf_aqc_opc_send_msg_to_pf		= 0x0801,
+	avf_aqc_opc_send_msg_to_vf		= 0x0802,
+	avf_aqc_opc_send_msg_to_peer		= 0x0803,
+
+	/* alternate structure */
+	avf_aqc_opc_alternate_write		= 0x0900,
+	avf_aqc_opc_alternate_write_indirect	= 0x0901,
+	avf_aqc_opc_alternate_read		= 0x0902,
+	avf_aqc_opc_alternate_read_indirect	= 0x0903,
+	avf_aqc_opc_alternate_write_done	= 0x0904,
+	avf_aqc_opc_alternate_set_mode		= 0x0905,
+	avf_aqc_opc_alternate_clear_port	= 0x0906,
+
+	/* LLDP commands */
+	avf_aqc_opc_lldp_get_mib	= 0x0A00,
+	avf_aqc_opc_lldp_update_mib	= 0x0A01,
+	avf_aqc_opc_lldp_add_tlv	= 0x0A02,
+	avf_aqc_opc_lldp_update_tlv	= 0x0A03,
+	avf_aqc_opc_lldp_delete_tlv	= 0x0A04,
+	avf_aqc_opc_lldp_stop		= 0x0A05,
+	avf_aqc_opc_lldp_start		= 0x0A06,
+	avf_aqc_opc_get_cee_dcb_cfg	= 0x0A07,
+	avf_aqc_opc_lldp_set_local_mib	= 0x0A08,
+	avf_aqc_opc_lldp_stop_start_spec_agent	= 0x0A09,
+
+	/* Tunnel commands */
+	avf_aqc_opc_add_udp_tunnel	= 0x0B00,
+	avf_aqc_opc_del_udp_tunnel	= 0x0B01,
+	avf_aqc_opc_set_rss_key	= 0x0B02,
+	avf_aqc_opc_set_rss_lut	= 0x0B03,
+	avf_aqc_opc_get_rss_key	= 0x0B04,
+	avf_aqc_opc_get_rss_lut	= 0x0B05,
+
+	/* Async Events */
+	avf_aqc_opc_event_lan_overflow		= 0x1001,
+
+	/* OEM commands */
+	avf_aqc_opc_oem_parameter_change	= 0xFE00,
+	avf_aqc_opc_oem_device_status_change	= 0xFE01,
+	avf_aqc_opc_oem_ocsd_initialize	= 0xFE02,
+	avf_aqc_opc_oem_ocbb_initialize	= 0xFE03,
+
+	/* debug commands */
+	avf_aqc_opc_debug_read_reg		= 0xFF03,
+	avf_aqc_opc_debug_write_reg		= 0xFF04,
+	avf_aqc_opc_debug_modify_reg		= 0xFF07,
+	avf_aqc_opc_debug_dump_internals	= 0xFF08,
+};
+
+/* command structures and indirect data structures */
+
+/* Structure naming conventions:
+ * - no suffix for direct command descriptor structures
+ * - _data for indirect sent data
+ * - _resp for indirect return data (data which is both will use _data)
+ * - _completion for direct return data
+ * - _element_ for repeated elements (may also be _data or _resp)
+ *
+ * Command structures are expected to overlay the params.raw member of the basic
+ * descriptor, and as such cannot exceed 16 bytes in length.
+ */
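+
+/* For example (illustrative sketch, not a definition in this file), a direct
+ * command is typically built by overlaying its command structure on the
+ * descriptor's raw parameter area, assuming a local struct avf_aq_desc desc
+ * and the CPU_TO_LE32 helper from avf_osdep.h:
+ *
+ *	struct avf_aqc_queue_shutdown *cmd =
+ *		(struct avf_aqc_queue_shutdown *)&desc.params.raw;
+ *
+ *	cmd->driver_unloading = CPU_TO_LE32(AVF_AQ_DRIVER_UNLOADING);
+ */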
+
+/* This macro is used to generate a compilation error if a structure
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure is not of the correct size, otherwise it creates an enum that is
+ * never used.
+ */
+#define AVF_CHECK_STRUCT_LEN(n, X) enum avf_static_assert_enum_##X \
+	{ avf_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
+
+/* This macro is used extensively to ensure that command structures are 16
+ * bytes in length as they have to map to the raw array of that size.
+ */
+#define AVF_CHECK_CMD_LENGTH(X)	AVF_CHECK_STRUCT_LEN(16, X)
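+
+/* For instance, AVF_CHECK_CMD_LENGTH(avf_aqc_get_version) divides 16 by 1
+ * when sizeof(struct avf_aqc_get_version) is exactly 16, and by 0 (a
+ * compile-time error) otherwise.
+ */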
+
+/* internal (0x00XX) commands */
+
+/* Get version (direct 0x0001) */
+struct avf_aqc_get_version {
+	__le32 rom_ver;
+	__le32 fw_build;
+	__le16 fw_major;
+	__le16 fw_minor;
+	__le16 api_major;
+	__le16 api_minor;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_version);
+
+/* Send driver version (indirect 0x0002) */
+struct avf_aqc_driver_version {
+	u8	driver_major_ver;
+	u8	driver_minor_ver;
+	u8	driver_build_ver;
+	u8	driver_subbuild_ver;
+	u8	reserved[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_driver_version);
+
+/* Queue Shutdown (direct 0x0003) */
+struct avf_aqc_queue_shutdown {
+	__le32	driver_unloading;
+#define AVF_AQ_DRIVER_UNLOADING	0x1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_queue_shutdown);
+
+/* Set PF context (0x0004, direct) */
+struct avf_aqc_set_pf_context {
+	u8	pf_id;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_pf_context);
+
+/* Request resource ownership (direct 0x0008)
+ * Release resource ownership (direct 0x0009)
+ */
+#define AVF_AQ_RESOURCE_NVM			1
+#define AVF_AQ_RESOURCE_SDP			2
+#define AVF_AQ_RESOURCE_ACCESS_READ		1
+#define AVF_AQ_RESOURCE_ACCESS_WRITE		2
+#define AVF_AQ_RESOURCE_NVM_READ_TIMEOUT	3000
+#define AVF_AQ_RESOURCE_NVM_WRITE_TIMEOUT	180000
+
+struct avf_aqc_request_resource {
+	__le16	resource_id;
+	__le16	access_type;
+	__le32	timeout;
+	__le32	resource_number;
+	u8	reserved[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_request_resource);
+
+/* Get function capabilities (indirect 0x000A)
+ * Get device capabilities (indirect 0x000B)
+ */
+struct avf_aqc_list_capabilites {
+	u8 command_flags;
+#define AVF_AQ_LIST_CAP_PF_INDEX_EN	1
+	u8 pf_index;
+	u8 reserved[2];
+	__le32 count;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_list_capabilites);
+
+struct avf_aqc_list_capabilities_element_resp {
+	__le16	id;
+	u8	major_rev;
+	u8	minor_rev;
+	__le32	number;
+	__le32	logical_id;
+	__le32	phys_id;
+	u8	reserved[16];
+};
+
+/* list of caps */
+
+#define AVF_AQ_CAP_ID_SWITCH_MODE	0x0001
+#define AVF_AQ_CAP_ID_MNG_MODE		0x0002
+#define AVF_AQ_CAP_ID_NPAR_ACTIVE	0x0003
+#define AVF_AQ_CAP_ID_OS2BMC_CAP	0x0004
+#define AVF_AQ_CAP_ID_FUNCTIONS_VALID	0x0005
+#define AVF_AQ_CAP_ID_ALTERNATE_RAM	0x0006
+#define AVF_AQ_CAP_ID_WOL_AND_PROXY	0x0008
+#define AVF_AQ_CAP_ID_SRIOV		0x0012
+#define AVF_AQ_CAP_ID_VF		0x0013
+#define AVF_AQ_CAP_ID_VMDQ		0x0014
+#define AVF_AQ_CAP_ID_8021QBG		0x0015
+#define AVF_AQ_CAP_ID_8021QBR		0x0016
+#define AVF_AQ_CAP_ID_VSI		0x0017
+#define AVF_AQ_CAP_ID_DCB		0x0018
+#define AVF_AQ_CAP_ID_FCOE		0x0021
+#define AVF_AQ_CAP_ID_ISCSI		0x0022
+#define AVF_AQ_CAP_ID_RSS		0x0040
+#define AVF_AQ_CAP_ID_RXQ		0x0041
+#define AVF_AQ_CAP_ID_TXQ		0x0042
+#define AVF_AQ_CAP_ID_MSIX		0x0043
+#define AVF_AQ_CAP_ID_VF_MSIX		0x0044
+#define AVF_AQ_CAP_ID_FLOW_DIRECTOR	0x0045
+#define AVF_AQ_CAP_ID_1588		0x0046
+#define AVF_AQ_CAP_ID_IWARP		0x0051
+#define AVF_AQ_CAP_ID_LED		0x0061
+#define AVF_AQ_CAP_ID_SDP		0x0062
+#define AVF_AQ_CAP_ID_MDIO		0x0063
+#define AVF_AQ_CAP_ID_WSR_PROT		0x0064
+#define AVF_AQ_CAP_ID_NVM_MGMT		0x0080
+#define AVF_AQ_CAP_ID_FLEX10		0x00F1
+#define AVF_AQ_CAP_ID_CEM		0x00F2
+
+/* Set CPPM Configuration (direct 0x0103) */
+struct avf_aqc_cppm_configuration {
+	__le16	command_flags;
+#define AVF_AQ_CPPM_EN_LTRC	0x0800
+#define AVF_AQ_CPPM_EN_DMCTH	0x1000
+#define AVF_AQ_CPPM_EN_DMCTLX	0x2000
+#define AVF_AQ_CPPM_EN_HPTC	0x4000
+#define AVF_AQ_CPPM_EN_DMARC	0x8000
+	__le16	ttlx;
+	__le32	dmacr;
+	__le16	dmcth;
+	u8	hptc;
+	u8	reserved;
+	__le32	pfltrc;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_cppm_configuration);
+
+/* Set ARP Proxy command / response (indirect 0x0104) */
+struct avf_aqc_arp_proxy_data {
+	__le16	command_flags;
+#define AVF_AQ_ARP_INIT_IPV4	0x0800
+#define AVF_AQ_ARP_UNSUP_CTL	0x1000
+#define AVF_AQ_ARP_ENA		0x2000
+#define AVF_AQ_ARP_ADD_IPV4	0x4000
+#define AVF_AQ_ARP_DEL_IPV4	0x8000
+	__le16	table_id;
+	__le32	enabled_offloads;
+#define AVF_AQ_ARP_DIRECTED_OFFLOAD_ENABLE	0x00000020
+#define AVF_AQ_ARP_OFFLOAD_ENABLE		0x00000800
+	__le32	ip_addr;
+	u8	mac_addr[6];
+	u8	reserved[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x14, avf_aqc_arp_proxy_data);
+
+/* Set NS Proxy Table Entry Command (indirect 0x0105) */
+struct avf_aqc_ns_proxy_data {
+	__le16	table_idx_mac_addr_0;
+	__le16	table_idx_mac_addr_1;
+	__le16	table_idx_ipv6_0;
+	__le16	table_idx_ipv6_1;
+	__le16	control;
+#define AVF_AQ_NS_PROXY_ADD_0		0x0001
+#define AVF_AQ_NS_PROXY_DEL_0		0x0002
+#define AVF_AQ_NS_PROXY_ADD_1		0x0004
+#define AVF_AQ_NS_PROXY_DEL_1		0x0008
+#define AVF_AQ_NS_PROXY_ADD_IPV6_0	0x0010
+#define AVF_AQ_NS_PROXY_DEL_IPV6_0	0x0020
+#define AVF_AQ_NS_PROXY_ADD_IPV6_1	0x0040
+#define AVF_AQ_NS_PROXY_DEL_IPV6_1	0x0080
+#define AVF_AQ_NS_PROXY_COMMAND_SEQ	0x0100
+#define AVF_AQ_NS_PROXY_INIT_IPV6_TBL	0x0200
+#define AVF_AQ_NS_PROXY_INIT_MAC_TBL	0x0400
+#define AVF_AQ_NS_PROXY_OFFLOAD_ENABLE	0x0800
+#define AVF_AQ_NS_PROXY_DIRECTED_OFFLOAD_ENABLE	0x1000
+	u8	mac_addr_0[6];
+	u8	mac_addr_1[6];
+	u8	local_mac_addr[6];
+	u8	ipv6_addr_0[16]; /* Warning! spec specifies BE byte order */
+	u8	ipv6_addr_1[16];
+};
+
+AVF_CHECK_STRUCT_LEN(0x3c, avf_aqc_ns_proxy_data);
+
+/* Manage LAA Command (0x0106) - obsolete */
+struct avf_aqc_mng_laa {
+	__le16	command_flags;
+#define AVF_AQ_LAA_FLAG_WR	0x8000
+	u8	reserved[2];
+	__le32	sal;
+	__le16	sah;
+	u8	reserved2[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mng_laa);
+
+/* Manage MAC Address Read Command (indirect 0x0107) */
+struct avf_aqc_mac_address_read {
+	__le16	command_flags;
+#define AVF_AQC_LAN_ADDR_VALID		0x10
+#define AVF_AQC_SAN_ADDR_VALID		0x20
+#define AVF_AQC_PORT_ADDR_VALID	0x40
+#define AVF_AQC_WOL_ADDR_VALID		0x80
+#define AVF_AQC_MC_MAG_EN_VALID	0x100
+#define AVF_AQC_WOL_PRESERVE_STATUS	0x200
+#define AVF_AQC_ADDR_VALID_MASK	0x3F0
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mac_address_read);
+
+struct avf_aqc_mac_address_read_data {
+	u8 pf_lan_mac[6];
+	u8 pf_san_mac[6];
+	u8 port_mac[6];
+	u8 pf_wol_mac[6];
+};
+
+AVF_CHECK_STRUCT_LEN(24, avf_aqc_mac_address_read_data);
+
+/* Manage MAC Address Write Command (0x0108) */
+struct avf_aqc_mac_address_write {
+	__le16	command_flags;
+#define AVF_AQC_MC_MAG_EN		0x0100
+#define AVF_AQC_WOL_PRESERVE_ON_PFR	0x0200
+#define AVF_AQC_WRITE_TYPE_LAA_ONLY	0x0000
+#define AVF_AQC_WRITE_TYPE_LAA_WOL	0x4000
+#define AVF_AQC_WRITE_TYPE_PORT	0x8000
+#define AVF_AQC_WRITE_TYPE_UPDATE_MC_MAG	0xC000
+#define AVF_AQC_WRITE_TYPE_MASK	0xC000
+
+	__le16	mac_sah;
+	__le32	mac_sal;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mac_address_write);
+
+/* PXE commands (0x011x) */
+
+/* Clear PXE Command and response  (direct 0x0110) */
+struct avf_aqc_clear_pxe {
+	u8	rx_cnt;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_clear_pxe);
+
+/* Set WoL Filter (0x0120) */
+
+struct avf_aqc_set_wol_filter {
+	__le16 filter_index;
+#define AVF_AQC_MAX_NUM_WOL_FILTERS	8
+#define AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT	15
+#define AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_MASK	(0x1 << \
+		AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT)
+
+#define AVF_AQC_SET_WOL_FILTER_INDEX_SHIFT		0
+#define AVF_AQC_SET_WOL_FILTER_INDEX_MASK	(0x7 << \
+		AVF_AQC_SET_WOL_FILTER_INDEX_SHIFT)
+	__le16 cmd_flags;
+#define AVF_AQC_SET_WOL_FILTER				0x8000
+#define AVF_AQC_SET_WOL_FILTER_NO_TCO_WOL		0x4000
+#define AVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR	0x2000
+#define AVF_AQC_SET_WOL_FILTER_ACTION_CLEAR		0
+#define AVF_AQC_SET_WOL_FILTER_ACTION_SET		1
+	__le16 valid_flags;
+#define AVF_AQC_SET_WOL_FILTER_ACTION_VALID		0x8000
+#define AVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID	0x4000
+	u8 reserved[2];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_wol_filter);
+
+struct avf_aqc_set_wol_filter_data {
+	u8 filter[128];
+	u8 mask[16];
+};
+
+AVF_CHECK_STRUCT_LEN(0x90, avf_aqc_set_wol_filter_data);
+
+/* Get Wake Reason (0x0121) */
+
+struct avf_aqc_get_wake_reason_completion {
+	u8 reserved_1[2];
+	__le16 wake_reason;
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT	0
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_MASK (0xFF << \
+		AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT)
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT	8
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_MASK	(0xFF << \
+		AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT)
+	u8 reserved_2[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_wake_reason_completion);
+
+/* Switch configuration commands (0x02xx) */
+
+/* Used by many indirect commands that only pass an seid and a buffer in the
+ * command
+ */
+struct avf_aqc_switch_seid {
+	__le16	seid;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_switch_seid);
+
+/* Get Switch Configuration command (indirect 0x0200)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+struct avf_aqc_get_switch_config_header_resp {
+	__le16	num_reported;
+	__le16	num_total;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_switch_config_header_resp);
+
+struct avf_aqc_switch_config_element_resp {
+	u8	element_type;
+#define AVF_AQ_SW_ELEM_TYPE_MAC	1
+#define AVF_AQ_SW_ELEM_TYPE_PF		2
+#define AVF_AQ_SW_ELEM_TYPE_VF		3
+#define AVF_AQ_SW_ELEM_TYPE_EMP	4
+#define AVF_AQ_SW_ELEM_TYPE_BMC	5
+#define AVF_AQ_SW_ELEM_TYPE_PV		16
+#define AVF_AQ_SW_ELEM_TYPE_VEB	17
+#define AVF_AQ_SW_ELEM_TYPE_PA		18
+#define AVF_AQ_SW_ELEM_TYPE_VSI	19
+	u8	revision;
+#define AVF_AQ_SW_ELEM_REV_1		1
+	__le16	seid;
+	__le16	uplink_seid;
+	__le16	downlink_seid;
+	u8	reserved[3];
+	u8	connection_type;
+#define AVF_AQ_CONN_TYPE_REGULAR	0x1
+#define AVF_AQ_CONN_TYPE_DEFAULT	0x2
+#define AVF_AQ_CONN_TYPE_CASCADED	0x3
+	__le16	scheduler_id;
+	__le16	element_info;
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_switch_config_element_resp);
+
+/* Get Switch Configuration (indirect 0x0200)
+ *    an array of elements is returned in the response buffer
+ *    the first in the array is the header, remainder are elements
+ */
+struct avf_aqc_get_switch_config_resp {
+	struct avf_aqc_get_switch_config_header_resp	header;
+	struct avf_aqc_switch_config_element_resp	element[1];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_get_switch_config_resp);
+
+/* Add Statistics (direct 0x0201)
+ * Remove Statistics (direct 0x0202)
+ */
+struct avf_aqc_add_remove_statistics {
+	__le16	seid;
+	__le16	vlan;
+	__le16	stat_index;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_statistics);
+
+/* Set Port Parameters command (direct 0x0203) */
+struct avf_aqc_set_port_parameters {
+	__le16	command_flags;
+#define AVF_AQ_SET_P_PARAMS_SAVE_BAD_PACKETS	1
+#define AVF_AQ_SET_P_PARAMS_PAD_SHORT_PACKETS	2 /* must set! */
+#define AVF_AQ_SET_P_PARAMS_DOUBLE_VLAN_ENA	4
+	__le16	bad_frame_vsi;
+#define AVF_AQ_SET_P_PARAMS_BFRAME_SEID_SHIFT	0x0
+#define AVF_AQ_SET_P_PARAMS_BFRAME_SEID_MASK	0x3FF
+	__le16	default_seid;        /* reserved for command */
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_port_parameters);
+
+/* Get Switch Resource Allocation (indirect 0x0204) */
+struct avf_aqc_get_switch_resource_alloc {
+	u8	num_entries;         /* reserved for command */
+	u8	reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_switch_resource_alloc);
+
+/* expect an array of these structs in the response buffer */
+struct avf_aqc_switch_resource_alloc_element_resp {
+	u8	resource_type;
+#define AVF_AQ_RESOURCE_TYPE_VEB		0x0
+#define AVF_AQ_RESOURCE_TYPE_VSI		0x1
+#define AVF_AQ_RESOURCE_TYPE_MACADDR		0x2
+#define AVF_AQ_RESOURCE_TYPE_STAG		0x3
+#define AVF_AQ_RESOURCE_TYPE_ETAG		0x4
+#define AVF_AQ_RESOURCE_TYPE_MULTICAST_HASH	0x5
+#define AVF_AQ_RESOURCE_TYPE_UNICAST_HASH	0x6
+#define AVF_AQ_RESOURCE_TYPE_VLAN		0x7
+#define AVF_AQ_RESOURCE_TYPE_VSI_LIST_ENTRY	0x8
+#define AVF_AQ_RESOURCE_TYPE_ETAG_LIST_ENTRY	0x9
+#define AVF_AQ_RESOURCE_TYPE_VLAN_STAT_POOL	0xA
+#define AVF_AQ_RESOURCE_TYPE_MIRROR_RULE	0xB
+#define AVF_AQ_RESOURCE_TYPE_QUEUE_SETS	0xC
+#define AVF_AQ_RESOURCE_TYPE_VLAN_FILTERS	0xD
+#define AVF_AQ_RESOURCE_TYPE_INNER_MAC_FILTERS	0xF
+#define AVF_AQ_RESOURCE_TYPE_IP_FILTERS	0x10
+#define AVF_AQ_RESOURCE_TYPE_GRE_VN_KEYS	0x11
+#define AVF_AQ_RESOURCE_TYPE_VN2_KEYS		0x12
+#define AVF_AQ_RESOURCE_TYPE_TUNNEL_PORTS	0x13
+	u8	reserved1;
+	__le16	guaranteed;
+	__le16	total;
+	__le16	used;
+	__le16	total_unalloced;
+	u8	reserved2[6];
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_switch_resource_alloc_element_resp);
+
+/* Set Switch Configuration (direct 0x0205) */
+struct avf_aqc_set_switch_config {
+	__le16	flags;
+/* flags used for both fields below */
+#define AVF_AQ_SET_SWITCH_CFG_PROMISC		0x0001
+#define AVF_AQ_SET_SWITCH_CFG_L2_FILTER	0x0002
+#define AVF_AQ_SET_SWITCH_CFG_HW_ATR_EVICT	0x0004
+	__le16	valid_flags;
+	/* The ethertype in switch_tag is dropped on ingress and used
+	 * internally by the switch. Set this to zero for the default
+	 * of 0x88a8 (802.1ad). Should be zero for firmware API
+	 * versions lower than 1.7.
+	 */
+	__le16	switch_tag;
+	/* The ethertypes in first_tag and second_tag are used to
+	 * match the outer and inner VLAN tags (respectively) when HW
+	 * double VLAN tagging is enabled via the set port parameters
+	 * AQ command. Otherwise these are both ignored. Set them to
+	 * zero for their defaults of 0x8100 (802.1Q). Should be zero
+	 * for firmware API versions lower than 1.7.
+	 */
+	__le16	first_tag;
+	__le16	second_tag;
+	u8	reserved[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_switch_config);
+
+/* Read Receive control registers  (direct 0x0206)
+ * Write Receive control registers (direct 0x0207)
+ *     used for accessing Rx control registers that can be
+ *     slow and need special handling when under high Rx load
+ */
+struct avf_aqc_rx_ctl_reg_read_write {
+	__le32 reserved1;
+	__le32 address;
+	__le32 reserved2;
+	__le32 value;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_rx_ctl_reg_read_write);
+
+/* Add VSI (indirect 0x0210)
+ *    this indirect command uses struct avf_aqc_vsi_properties_data
+ *    as the indirect buffer (128 bytes)
+ *
+ * Update VSI (indirect 0x211)
+ *     uses the same data structure as Add VSI
+ *
+ * Get VSI (indirect 0x0212)
+ *     uses the same completion and data structure as Add VSI
+ */
+struct avf_aqc_add_get_update_vsi {
+	__le16	uplink_seid;
+	u8	connection_type;
+#define AVF_AQ_VSI_CONN_TYPE_NORMAL	0x1
+#define AVF_AQ_VSI_CONN_TYPE_DEFAULT	0x2
+#define AVF_AQ_VSI_CONN_TYPE_CASCADED	0x3
+	u8	reserved1;
+	u8	vf_id;
+	u8	reserved2;
+	__le16	vsi_flags;
+#define AVF_AQ_VSI_TYPE_SHIFT		0x0
+#define AVF_AQ_VSI_TYPE_MASK		(0x3 << AVF_AQ_VSI_TYPE_SHIFT)
+#define AVF_AQ_VSI_TYPE_VF		0x0
+#define AVF_AQ_VSI_TYPE_VMDQ2		0x1
+#define AVF_AQ_VSI_TYPE_PF		0x2
+#define AVF_AQ_VSI_TYPE_EMP_MNG	0x3
+#define AVF_AQ_VSI_FLAG_CASCADED_PV	0x4
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_get_update_vsi);
+
+struct avf_aqc_add_get_update_vsi_completion {
+	__le16 seid;
+	__le16 vsi_number;
+	__le16 vsi_used;
+	__le16 vsi_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_get_update_vsi_completion);
+
+struct avf_aqc_vsi_properties_data {
+	/* first 96 bytes are written by SW */
+	__le16	valid_sections;
+#define AVF_AQ_VSI_PROP_SWITCH_VALID		0x0001
+#define AVF_AQ_VSI_PROP_SECURITY_VALID		0x0002
+#define AVF_AQ_VSI_PROP_VLAN_VALID		0x0004
+#define AVF_AQ_VSI_PROP_CAS_PV_VALID		0x0008
+#define AVF_AQ_VSI_PROP_INGRESS_UP_VALID	0x0010
+#define AVF_AQ_VSI_PROP_EGRESS_UP_VALID	0x0020
+#define AVF_AQ_VSI_PROP_QUEUE_MAP_VALID	0x0040
+#define AVF_AQ_VSI_PROP_QUEUE_OPT_VALID	0x0080
+#define AVF_AQ_VSI_PROP_OUTER_UP_VALID		0x0100
+#define AVF_AQ_VSI_PROP_SCHED_VALID		0x0200
+	/* switch section */
+	__le16	switch_id; /* 12bit id combined with flags below */
+#define AVF_AQ_VSI_SW_ID_SHIFT		0x0000
+#define AVF_AQ_VSI_SW_ID_MASK		(0xFFF << AVF_AQ_VSI_SW_ID_SHIFT)
+#define AVF_AQ_VSI_SW_ID_FLAG_NOT_STAG	0x1000
+#define AVF_AQ_VSI_SW_ID_FLAG_ALLOW_LB	0x2000
+#define AVF_AQ_VSI_SW_ID_FLAG_LOCAL_LB	0x4000
+	u8	sw_reserved[2];
+	/* security section */
+	u8	sec_flags;
+#define AVF_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD	0x01
+#define AVF_AQ_VSI_SEC_FLAG_ENABLE_VLAN_CHK	0x02
+#define AVF_AQ_VSI_SEC_FLAG_ENABLE_MAC_CHK	0x04
+	u8	sec_reserved;
+	/* VLAN section */
+	__le16	pvid; /* VLANS include priority bits */
+	__le16	fcoe_pvid;
+	u8	port_vlan_flags;
+#define AVF_AQ_VSI_PVLAN_MODE_SHIFT	0x00
+#define AVF_AQ_VSI_PVLAN_MODE_MASK	(0x03 << \
+					 AVF_AQ_VSI_PVLAN_MODE_SHIFT)
+#define AVF_AQ_VSI_PVLAN_MODE_TAGGED	0x01
+#define AVF_AQ_VSI_PVLAN_MODE_UNTAGGED	0x02
+#define AVF_AQ_VSI_PVLAN_MODE_ALL	0x03
+#define AVF_AQ_VSI_PVLAN_INSERT_PVID	0x04
+#define AVF_AQ_VSI_PVLAN_EMOD_SHIFT	0x03
+#define AVF_AQ_VSI_PVLAN_EMOD_MASK	(0x3 << \
+					 AVF_AQ_VSI_PVLAN_EMOD_SHIFT)
+#define AVF_AQ_VSI_PVLAN_EMOD_STR_BOTH	0x0
+#define AVF_AQ_VSI_PVLAN_EMOD_STR_UP	0x08
+#define AVF_AQ_VSI_PVLAN_EMOD_STR	0x10
+#define AVF_AQ_VSI_PVLAN_EMOD_NOTHING	0x18
+	u8	pvlan_reserved[3];
+	/* ingress egress up sections */
+	__le32	ingress_table; /* bitmap, 3 bits per up */
+#define AVF_AQ_VSI_UP_TABLE_UP0_SHIFT	0
+#define AVF_AQ_VSI_UP_TABLE_UP0_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP0_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP1_SHIFT	3
+#define AVF_AQ_VSI_UP_TABLE_UP1_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP1_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP2_SHIFT	6
+#define AVF_AQ_VSI_UP_TABLE_UP2_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP2_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP3_SHIFT	9
+#define AVF_AQ_VSI_UP_TABLE_UP3_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP3_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP4_SHIFT	12
+#define AVF_AQ_VSI_UP_TABLE_UP4_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP4_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP5_SHIFT	15
+#define AVF_AQ_VSI_UP_TABLE_UP5_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP5_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP6_SHIFT	18
+#define AVF_AQ_VSI_UP_TABLE_UP6_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP6_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP7_SHIFT	21
+#define AVF_AQ_VSI_UP_TABLE_UP7_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP7_SHIFT)
+	__le32	egress_table;   /* same defines as for ingress table */
+	/* cascaded PV section */
+	__le16	cas_pv_tag;
+	u8	cas_pv_flags;
+#define AVF_AQ_VSI_CAS_PV_TAGX_SHIFT		0x00
+#define AVF_AQ_VSI_CAS_PV_TAGX_MASK		(0x03 << \
+						 AVF_AQ_VSI_CAS_PV_TAGX_SHIFT)
+#define AVF_AQ_VSI_CAS_PV_TAGX_LEAVE		0x00
+#define AVF_AQ_VSI_CAS_PV_TAGX_REMOVE		0x01
+#define AVF_AQ_VSI_CAS_PV_TAGX_COPY		0x02
+#define AVF_AQ_VSI_CAS_PV_INSERT_TAG		0x10
+#define AVF_AQ_VSI_CAS_PV_ETAG_PRUNE		0x20
+#define AVF_AQ_VSI_CAS_PV_ACCEPT_HOST_TAG	0x40
+	u8	cas_pv_reserved;
+	/* queue mapping section */
+	__le16	mapping_flags;
+#define AVF_AQ_VSI_QUE_MAP_CONTIG	0x0
+#define AVF_AQ_VSI_QUE_MAP_NONCONTIG	0x1
+	__le16	queue_mapping[16];
+#define AVF_AQ_VSI_QUEUE_SHIFT		0x0
+#define AVF_AQ_VSI_QUEUE_MASK		(0x7FF << AVF_AQ_VSI_QUEUE_SHIFT)
+	__le16	tc_mapping[8];
+#define AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT	0
+#define AVF_AQ_VSI_TC_QUE_OFFSET_MASK	(0x1FF << \
+					 AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT)
+#define AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT	9
+#define AVF_AQ_VSI_TC_QUE_NUMBER_MASK	(0x7 << \
+					 AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT)
+	/* queueing option section */
+	u8	queueing_opt_flags;
+#define AVF_AQ_VSI_QUE_OPT_MULTICAST_UDP_ENA	0x04
+#define AVF_AQ_VSI_QUE_OPT_UNICAST_UDP_ENA	0x08
+#define AVF_AQ_VSI_QUE_OPT_TCP_ENA	0x10
+#define AVF_AQ_VSI_QUE_OPT_FCOE_ENA	0x20
+#define AVF_AQ_VSI_QUE_OPT_RSS_LUT_PF	0x00
+#define AVF_AQ_VSI_QUE_OPT_RSS_LUT_VSI	0x40
+	u8	queueing_opt_reserved[3];
+	/* scheduler section */
+	u8	up_enable_bits;
+	u8	sched_reserved;
+	/* outer up section */
+	__le32	outer_up_table; /* same structure and defines as ingress tbl */
+	u8	cmd_reserved[8];
+	/* last 32 bytes are written by FW */
+	__le16	qs_handle[8];
+#define AVF_AQ_VSI_QS_HANDLE_INVALID	0xFFFF
+	__le16	stat_counter_idx;
+	__le16	sched_id;
+	u8	resp_reserved[12];
+};
+
+AVF_CHECK_STRUCT_LEN(128, avf_aqc_vsi_properties_data);
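+
+/* Illustrative sketch: a contiguous queue map for TC 0 is typically encoded
+ * by packing the first queue index and log2(queue count) into tc_mapping[0];
+ * info, first_queue and log2_qcount below are hypothetical locals:
+ *
+ *	info->mapping_flags = CPU_TO_LE16(AVF_AQ_VSI_QUE_MAP_CONTIG);
+ *	info->tc_mapping[0] = CPU_TO_LE16(
+ *		(first_queue << AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT) |
+ *		(log2_qcount << AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT));
+ */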
+
+/* Add Port Virtualizer (direct 0x0220)
+ * also used for update PV (direct 0x0221) but only flags are used
+ * (IS_CTRL_PORT only works on add PV)
+ */
+struct avf_aqc_add_update_pv {
+	__le16	command_flags;
+#define AVF_AQC_PV_FLAG_PV_TYPE		0x1
+#define AVF_AQC_PV_FLAG_FWD_UNKNOWN_STAG_EN	0x2
+#define AVF_AQC_PV_FLAG_FWD_UNKNOWN_ETAG_EN	0x4
+#define AVF_AQC_PV_FLAG_IS_CTRL_PORT		0x8
+	__le16	uplink_seid;
+	__le16	connected_seid;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_update_pv);
+
+struct avf_aqc_add_update_pv_completion {
+	/* reserved for update; for add also encodes error if rc == ENOSPC */
+	__le16	pv_seid;
+#define AVF_AQC_PV_ERR_FLAG_NO_PV	0x1
+#define AVF_AQC_PV_ERR_FLAG_NO_SCHED	0x2
+#define AVF_AQC_PV_ERR_FLAG_NO_COUNTER	0x4
+#define AVF_AQC_PV_ERR_FLAG_NO_ENTRY	0x8
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_update_pv_completion);
+
+/* Get PV Params (direct 0x0222)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+
+struct avf_aqc_get_pv_params_completion {
+	__le16	seid;
+	__le16	default_stag;
+	__le16	pv_flags; /* same flags as add_pv */
+#define AVF_AQC_GET_PV_PV_TYPE			0x1
+#define AVF_AQC_GET_PV_FRWD_UNKNOWN_STAG	0x2
+#define AVF_AQC_GET_PV_FRWD_UNKNOWN_ETAG	0x4
+	u8	reserved[8];
+	__le16	default_port_seid;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_pv_params_completion);
+
+/* Add VEB (direct 0x0230) */
+struct avf_aqc_add_veb {
+	__le16	uplink_seid;
+	__le16	downlink_seid;
+	__le16	veb_flags;
+#define AVF_AQC_ADD_VEB_FLOATING		0x1
+#define AVF_AQC_ADD_VEB_PORT_TYPE_SHIFT	1
+#define AVF_AQC_ADD_VEB_PORT_TYPE_MASK		(0x3 << \
+					AVF_AQC_ADD_VEB_PORT_TYPE_SHIFT)
+#define AVF_AQC_ADD_VEB_PORT_TYPE_DEFAULT	0x2
+#define AVF_AQC_ADD_VEB_PORT_TYPE_DATA		0x4
+#define AVF_AQC_ADD_VEB_ENABLE_L2_FILTER	0x8     /* deprecated */
+#define AVF_AQC_ADD_VEB_ENABLE_DISABLE_STATS	0x10
+	u8	enable_tcs;
+	u8	reserved[9];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_veb);
+
+struct avf_aqc_add_veb_completion {
+	u8	reserved[6];
+	__le16	switch_seid;
+	/* also encodes error if rc == ENOSPC; codes are the same as add_pv */
+	__le16	veb_seid;
+#define AVF_AQC_VEB_ERR_FLAG_NO_VEB		0x1
+#define AVF_AQC_VEB_ERR_FLAG_NO_SCHED		0x2
+#define AVF_AQC_VEB_ERR_FLAG_NO_COUNTER	0x4
+#define AVF_AQC_VEB_ERR_FLAG_NO_ENTRY		0x8
+	__le16	statistic_index;
+	__le16	vebs_used;
+	__le16	vebs_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_veb_completion);
+
+/* Get VEB Parameters (direct 0x0232)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+struct avf_aqc_get_veb_parameters_completion {
+	__le16	seid;
+	__le16	switch_id;
+	__le16	veb_flags; /* only the first/last flags from 0x0230 are valid */
+	__le16	statistic_index;
+	__le16	vebs_used;
+	__le16	vebs_free;
+	u8	reserved[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_veb_parameters_completion);
+
+/* Delete Element (direct 0x0243)
+ * uses the generic avf_aqc_switch_seid
+ */
+
+/* Add MAC-VLAN (indirect 0x0250) */
+
+/* used for the command for most vlan commands */
+struct avf_aqc_macvlan {
+	__le16	num_addresses;
+	__le16	seid[3];
+#define AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_MACVLAN_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT)
+#define AVF_AQC_MACVLAN_CMD_SEID_VALID		0x8000
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_macvlan);
+
+/* indirect data for command and response */
+struct avf_aqc_add_macvlan_element_data {
+	u8	mac_addr[6];
+	__le16	vlan_tag;
+	__le16	flags;
+#define AVF_AQC_MACVLAN_ADD_PERFECT_MATCH	0x0001
+#define AVF_AQC_MACVLAN_ADD_HASH_MATCH		0x0002
+#define AVF_AQC_MACVLAN_ADD_IGNORE_VLAN	0x0004
+#define AVF_AQC_MACVLAN_ADD_TO_QUEUE		0x0008
+#define AVF_AQC_MACVLAN_ADD_USE_SHARED_MAC	0x0010
+	__le16	queue_number;
+#define AVF_AQC_MACVLAN_CMD_QUEUE_SHIFT	0
+#define AVF_AQC_MACVLAN_CMD_QUEUE_MASK		(0x7FF << \
+					AVF_AQC_MACVLAN_CMD_QUEUE_SHIFT)
+	/* response section */
+	u8	match_method;
+#define AVF_AQC_MM_PERFECT_MATCH	0x01
+#define AVF_AQC_MM_HASH_MATCH		0x02
+#define AVF_AQC_MM_ERR_NO_RES		0xFF
+	u8	reserved1[3];
+};
+
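+/* Illustrative sketch (cmd, elem, mac and vsi_seid are hypothetical locals):
+ * a unicast filter that matches the MAC regardless of VLAN is typically
+ * added by pointing the avf_aqc_macvlan descriptor at one such element:
+ *
+ *	cmd->num_addresses = CPU_TO_LE16(1);
+ *	cmd->seid[0] = CPU_TO_LE16(AVF_AQC_MACVLAN_CMD_SEID_VALID | vsi_seid);
+ *	memcpy(elem.mac_addr, mac, 6);
+ *	elem.flags = CPU_TO_LE16(AVF_AQC_MACVLAN_ADD_PERFECT_MATCH |
+ *				 AVF_AQC_MACVLAN_ADD_IGNORE_VLAN);
+ */
+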
+struct avf_aqc_add_remove_macvlan_completion {
+	__le16 perfect_mac_used;
+	__le16 perfect_mac_free;
+	__le16 unicast_hash_free;
+	__le16 multicast_hash_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_macvlan_completion);
+
+/* Remove MAC-VLAN (indirect 0x0251)
+ * uses avf_aqc_macvlan for the descriptor
+ * data points to an array of num_addresses of elements
+ */
+
+struct avf_aqc_remove_macvlan_element_data {
+	u8	mac_addr[6];
+	__le16	vlan_tag;
+	u8	flags;
+#define AVF_AQC_MACVLAN_DEL_PERFECT_MATCH	0x01
+#define AVF_AQC_MACVLAN_DEL_HASH_MATCH		0x02
+#define AVF_AQC_MACVLAN_DEL_IGNORE_VLAN	0x08
+#define AVF_AQC_MACVLAN_DEL_ALL_VSIS		0x10
+	u8	reserved[3];
+	/* reply section */
+	u8	error_code;
+#define AVF_AQC_REMOVE_MACVLAN_SUCCESS		0x0
+#define AVF_AQC_REMOVE_MACVLAN_FAIL		0xFF
+	u8	reply_reserved[3];
+};
+
+/* Add VLAN (indirect 0x0252)
+ * Remove VLAN (indirect 0x0253)
+ * use the generic avf_aqc_macvlan for the command
+ */
+struct avf_aqc_add_remove_vlan_element_data {
+	__le16	vlan_tag;
+	u8	vlan_flags;
+/* flags for add VLAN */
+#define AVF_AQC_ADD_VLAN_LOCAL			0x1
+#define AVF_AQC_ADD_PVLAN_TYPE_SHIFT		1
+#define AVF_AQC_ADD_PVLAN_TYPE_MASK	(0x3 << AVF_AQC_ADD_PVLAN_TYPE_SHIFT)
+#define AVF_AQC_ADD_PVLAN_TYPE_REGULAR		0x0
+#define AVF_AQC_ADD_PVLAN_TYPE_PRIMARY		0x2
+#define AVF_AQC_ADD_PVLAN_TYPE_SECONDARY	0x4
+#define AVF_AQC_VLAN_PTYPE_SHIFT		3
+#define AVF_AQC_VLAN_PTYPE_MASK	(0x3 << AVF_AQC_VLAN_PTYPE_SHIFT)
+#define AVF_AQC_VLAN_PTYPE_REGULAR_VSI		0x0
+#define AVF_AQC_VLAN_PTYPE_PROMISC_VSI		0x8
+#define AVF_AQC_VLAN_PTYPE_COMMUNITY_VSI	0x10
+#define AVF_AQC_VLAN_PTYPE_ISOLATED_VSI	0x18
+/* flags for remove VLAN */
+#define AVF_AQC_REMOVE_VLAN_ALL	0x1
+	u8	reserved;
+	u8	result;
+/* flags for add VLAN */
+#define AVF_AQC_ADD_VLAN_SUCCESS	0x0
+#define AVF_AQC_ADD_VLAN_FAIL_REQUEST	0xFE
+#define AVF_AQC_ADD_VLAN_FAIL_RESOURCE	0xFF
+/* flags for remove VLAN */
+#define AVF_AQC_REMOVE_VLAN_SUCCESS	0x0
+#define AVF_AQC_REMOVE_VLAN_FAIL	0xFF
+	u8	reserved1[3];
+};
+
+struct avf_aqc_add_remove_vlan_completion {
+	u8	reserved[4];
+	__le16	vlans_used;
+	__le16	vlans_free;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+/* Set VSI Promiscuous Modes (direct 0x0254) */
+struct avf_aqc_set_vsi_promiscuous_modes {
+	__le16	promiscuous_flags;
+	__le16	valid_flags;
+/* flags used for both fields above */
+#define AVF_AQC_SET_VSI_PROMISC_UNICAST	0x01
+#define AVF_AQC_SET_VSI_PROMISC_MULTICAST	0x02
+#define AVF_AQC_SET_VSI_PROMISC_BROADCAST	0x04
+#define AVF_AQC_SET_VSI_DEFAULT		0x08
+#define AVF_AQC_SET_VSI_PROMISC_VLAN		0x10
+#define AVF_AQC_SET_VSI_PROMISC_TX		0x8000
+	__le16	seid;
+#define AVF_AQC_VSI_PROM_CMD_SEID_MASK		0x3FF
+	__le16	vlan_tag;
+#define AVF_AQC_SET_VSI_VLAN_MASK		0x0FFF
+#define AVF_AQC_SET_VSI_VLAN_VALID		0x8000
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_vsi_promiscuous_modes);
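+
+/* Illustrative sketch: enabling unicast promiscuous mode on a VSI typically
+ * sets the same bit in promiscuous_flags and valid_flags so that only that
+ * mode is changed (cmd and seid are hypothetical locals):
+ *
+ *	cmd->promiscuous_flags = CPU_TO_LE16(AVF_AQC_SET_VSI_PROMISC_UNICAST);
+ *	cmd->valid_flags = CPU_TO_LE16(AVF_AQC_SET_VSI_PROMISC_UNICAST);
+ *	cmd->seid = CPU_TO_LE16(seid & AVF_AQC_VSI_PROM_CMD_SEID_MASK);
+ */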
+
+/* Add S/E-tag command (direct 0x0255)
+ * Uses generic avf_aqc_add_remove_tag_completion for completion
+ */
+struct avf_aqc_add_tag {
+	__le16	flags;
+#define AVF_AQC_ADD_TAG_FLAG_TO_QUEUE		0x0001
+	__le16	seid;
+#define AVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	tag;
+	__le16	queue_number;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_tag);
+
+struct avf_aqc_add_remove_tag_completion {
+	u8	reserved[12];
+	__le16	tags_used;
+	__le16	tags_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_tag_completion);
+
+/* Remove S/E-tag command (direct 0x0256)
+ * Uses generic avf_aqc_add_remove_tag_completion for completion
+ */
+struct avf_aqc_remove_tag {
+	__le16	seid;
+#define AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	tag;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_tag);
+
+/* Add multicast E-Tag (direct 0x0257)
+ * del multicast E-Tag (direct 0x0258) only uses pv_seid and etag fields
+ * and no external data
+ */
+struct avf_aqc_add_remove_mcast_etag {
+	__le16	pv_seid;
+	__le16	etag;
+	u8	num_unicast_etags;
+	u8	reserved[3];
+	__le32	addr_high;          /* address of array of 2-byte s-tags */
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_mcast_etag);
+
+struct avf_aqc_add_remove_mcast_etag_completion {
+	u8	reserved[4];
+	__le16	mcast_etags_used;
+	__le16	mcast_etags_free;
+	__le32	addr_high;
+	__le32	addr_low;
+
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_mcast_etag_completion);
+
+/* Update S/E-Tag (direct 0x0259) */
+struct avf_aqc_update_tag {
+	__le16	seid;
+#define AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	old_tag;
+	__le16	new_tag;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_update_tag);
+
+struct avf_aqc_update_tag_completion {
+	u8	reserved[12];
+	__le16	tags_used;
+	__le16	tags_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_update_tag_completion);
+
+/* Add Control Packet filter (direct 0x025A)
+ * Remove Control Packet filter (direct 0x025B)
+ * uses the avf_aqc_add_oveb_cloud,
+ * and the generic direct completion structure
+ */
+struct avf_aqc_add_remove_control_packet_filter {
+	u8	mac[6];
+	__le16	etype;
+	__le16	flags;
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC	0x0001
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_DROP		0x0002
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_TO_QUEUE	0x0004
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_TX		0x0008
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_RX		0x0000
+	__le16	seid;
+#define AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_MASK	(0x3FF << \
+				AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT)
+	__le16	queue;
+	u8	reserved[2];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_control_packet_filter);
+
+struct avf_aqc_add_remove_control_packet_filter_completion {
+	__le16	mac_etype_used;
+	__le16	etype_used;
+	__le16	mac_etype_free;
+	__le16	etype_free;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_control_packet_filter_completion);
+
+/* Add Cloud filters (indirect 0x025C)
+ * Remove Cloud filters (indirect 0x025D)
+ * uses the avf_aqc_add_remove_cloud_filters,
+ * and the generic indirect completion structure
+ */
+struct avf_aqc_add_remove_cloud_filters {
+	u8	num_filters;
+	u8	reserved;
+	__le16	seid;
+#define AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT)
+	u8	big_buffer_flag;
+#define AVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER	1
+	u8	reserved2[3];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_cloud_filters);
+
+struct avf_aqc_add_remove_cloud_filters_element_data {
+	u8	outer_mac[6];
+	u8	inner_mac[6];
+	__le16	inner_vlan;
+	union {
+		struct {
+			u8 reserved[12];
+			u8 data[4];
+		} v4;
+		struct {
+			u8 data[16];
+		} v6;
+	} ipaddr;
+	__le16	flags;
+#define AVF_AQC_ADD_CLOUD_FILTER_SHIFT			0
+#define AVF_AQC_ADD_CLOUD_FILTER_MASK	(0x3F << \
+					AVF_AQC_ADD_CLOUD_FILTER_SHIFT)
+/* 0x0000 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_OIP			0x0001
+/* 0x0002 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN		0x0003
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID	0x0004
+/* 0x0005 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID		0x0006
+/* 0x0007 reserved */
+/* 0x0008 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_OMAC			0x0009
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC			0x000A
+#define AVF_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC	0x000B
+#define AVF_AQC_ADD_CLOUD_FILTER_IIP			0x000C
+/* 0x0010 to 0x0017 is for custom filters */
+
+#define AVF_AQC_ADD_CLOUD_FLAGS_TO_QUEUE		0x0080
+#define AVF_AQC_ADD_CLOUD_VNK_SHIFT			6
+#define AVF_AQC_ADD_CLOUD_VNK_MASK			0x00C0
+#define AVF_AQC_ADD_CLOUD_FLAGS_IPV4			0
+#define AVF_AQC_ADD_CLOUD_FLAGS_IPV6			0x0100
+
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_SHIFT		9
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_MASK		0x1E00
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN		0
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_NVGRE_OMAC		1
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_GENEVE		2
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_IP			3
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_RESERVED		4
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN_GPE		5
+
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_MAC	0x2000
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_INNER_MAC	0x4000
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_IP	0x8000
+
+	__le32	tenant_id;
+	u8	reserved[4];
+	__le16	queue_number;
+#define AVF_AQC_ADD_CLOUD_QUEUE_SHIFT		0
+#define AVF_AQC_ADD_CLOUD_QUEUE_MASK		(0x7FF << \
+						 AVF_AQC_ADD_CLOUD_QUEUE_SHIFT)
+	u8	reserved2[14];
+	/* response section */
+	u8	allocation_result;
+#define AVF_AQC_ADD_CLOUD_FILTER_SUCCESS	0x0
+#define AVF_AQC_ADD_CLOUD_FILTER_FAIL		0xFF
+	u8	response_reserved[7];
+};
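+
+/* Illustrative sketch: a VXLAN tenant-ID filter is typically described by
+ * combining a filter type with the tunnel type and address family in flags
+ * (filter and tenant_id are hypothetical locals):
+ *
+ *	filter.flags = CPU_TO_LE16(AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID |
+ *		AVF_AQC_ADD_CLOUD_FLAGS_IPV4 |
+ *		(AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN <<
+ *		 AVF_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
+ *	filter.tenant_id = CPU_TO_LE32(tenant_id);
+ */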
+
+/* avf_aqc_add_rm_cloud_filt_elem_ext is used when
+ * AVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER flag is set. refer to
+ * DCR288
+ */
+struct avf_aqc_add_rm_cloud_filt_elem_ext {
+	struct avf_aqc_add_remove_cloud_filters_element_data element;
+	u16     general_fields[32];
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0	0
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1	1
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2	2
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0	3
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1	4
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2	5
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0	6
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1	7
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2	8
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0	9
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1	10
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2	11
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD0	12
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD1	13
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD2	14
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0	15
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD1	16
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD2	17
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD3	18
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD4	19
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD5	20
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD6	21
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD7	22
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD0	23
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD1	24
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD2	25
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD3	26
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD4	27
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD5	28
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD6	29
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD7	30
+};
+
+struct avf_aqc_remove_cloud_filters_completion {
+	__le16 perfect_ovlan_used;
+	__le16 perfect_ovlan_free;
+	__le16 vlan_used;
+	__le16 vlan_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_cloud_filters_completion);
+
+/* Replace filter Command 0x025F
+ * uses the avf_aqc_replace_cloud_filters,
+ * and the generic indirect completion structure
+ */
+struct avf_filter_data {
+	u8 filter_type;
+	u8 input[3];
+};
+
+struct avf_aqc_replace_cloud_filters_cmd {
+	u8	valid_flags;
+#define AVF_AQC_REPLACE_L1_FILTER		0x0
+#define AVF_AQC_REPLACE_CLOUD_FILTER		0x1
+#define AVF_AQC_GET_CLOUD_FILTERS		0x2
+#define AVF_AQC_MIRROR_CLOUD_FILTER		0x4
+#define AVF_AQC_HIGH_PRIORITY_CLOUD_FILTER	0x8
+	u8	old_filter_type;
+	u8	new_filter_type;
+	u8	tr_bit;
+	u8	reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct avf_aqc_replace_cloud_filters_cmd_buf {
+	u8	data[32];
+/* Filter type INPUT codes*/
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_ENTRIES_MAX	3
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_VALIDATED	(1 << 7UL)
+
+/* Field Vector offsets */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_MAC_DA		0
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_ETH		6
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG		7
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_VLAN		8
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_OVLAN		9
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_IVLAN		10
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_TUNNLE_KEY		11
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IMAC		12
+/* big FLU */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IP_DA		14
+/* big FLU */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_OIP_DA		15
+
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_INNER_VLAN		37
+	struct avf_filter_data	filters[8];
+};
+
+/* Add Mirror Rule (indirect or direct 0x0260)
+ * Delete Mirror Rule (indirect or direct 0x0261)
+ * note: some rule types (4,5) do not use an external buffer.
+ *       take care to set the flags correctly.
+ */
+struct avf_aqc_add_delete_mirror_rule {
+	__le16 seid;
+	__le16 rule_type;
+#define AVF_AQC_MIRROR_RULE_TYPE_SHIFT		0
+#define AVF_AQC_MIRROR_RULE_TYPE_MASK		(0x7 << \
+						AVF_AQC_MIRROR_RULE_TYPE_SHIFT)
+#define AVF_AQC_MIRROR_RULE_TYPE_VPORT_INGRESS	1
+#define AVF_AQC_MIRROR_RULE_TYPE_VPORT_EGRESS	2
+#define AVF_AQC_MIRROR_RULE_TYPE_VLAN		3
+#define AVF_AQC_MIRROR_RULE_TYPE_ALL_INGRESS	4
+#define AVF_AQC_MIRROR_RULE_TYPE_ALL_EGRESS	5
+	__le16 num_entries;
+	__le16 destination;  /* VSI for add, rule id for delete */
+	__le32 addr_high;    /* address of array of 2-byte VSI or VLAN ids */
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_delete_mirror_rule);
+
+struct avf_aqc_add_delete_mirror_rule_completion {
+	u8	reserved[2];
+	__le16	rule_id;  /* only used on add */
+	__le16	mirror_rules_used;
+	__le16	mirror_rules_free;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_delete_mirror_rule_completion);
+
+/* Dynamic Device Personalization */
+struct avf_aqc_write_personalization_profile {
+	u8      flags;
+	u8      reserved[3];
+	__le32  profile_track_id;
+	__le32  addr_high;
+	__le32  addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_write_personalization_profile);
+
+struct avf_aqc_write_ddp_resp {
+	__le32 error_offset;
+	__le32 error_info;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct avf_aqc_get_applied_profiles {
+	u8      flags;
+#define AVF_AQC_GET_DDP_GET_CONF	0x1
+#define AVF_AQC_GET_DDP_GET_RDPU_CONF	0x2
+	u8      rsv[3];
+	__le32  reserved;
+	__le32  addr_high;
+	__le32  addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_applied_profiles);
+
+/* DCB 0x03xx*/
+
+/* PFC Ignore (direct 0x0301)
+ *    the command and response use the same descriptor structure
+ */
+struct avf_aqc_pfc_ignore {
+	u8	tc_bitmap;
+	u8	command_flags; /* unused on response */
+#define AVF_AQC_PFC_IGNORE_SET		0x80
+#define AVF_AQC_PFC_IGNORE_CLEAR	0x0
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_pfc_ignore);
+
+/* DCB Update (direct 0x0302) uses the avf_aq_desc structure
+ * with no parameters
+ */
+
+/* TX scheduler 0x04xx */
+
+/* Almost all the indirect commands use
+ * this generic struct to pass the SEID in param0
+ */
+struct avf_aqc_tx_sched_ind {
+	__le16	vsi_seid;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_tx_sched_ind);
+
+/* Several commands respond with a set of queue set handles */
+struct avf_aqc_qs_handles_resp {
+	__le16 qs_handles[8];
+};
+
+/* Configure VSI BW limits (direct 0x0400) */
+struct avf_aqc_configure_vsi_bw_limit {
+	__le16	vsi_seid;
+	u8	reserved[2];
+	__le16	credit;
+	u8	reserved1[2];
+	u8	max_credit; /* 0-3, limit = 2^max */
+	u8	reserved2[7];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_configure_vsi_bw_limit);
+
+/* Configure VSI Bandwidth Limit per Traffic Type (indirect 0x0406)
+ *    responds with avf_aqc_qs_handles_resp
+ */
+struct avf_aqc_configure_vsi_ets_sla_bw_data {
+	u8	tc_valid_bits;
+	u8	reserved[15];
+	__le16	tc_bw_credits[8]; /* FW writes back QS handles here */
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved1[28];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_configure_vsi_ets_sla_bw_data);
+
+/* Configure VSI Bandwidth Allocation per Traffic Type (indirect 0x0407)
+ *    responds with avf_aqc_qs_handles_resp
+ */
+struct avf_aqc_configure_vsi_tc_bw_data {
+	u8	tc_valid_bits;
+	u8	reserved[3];
+	u8	tc_bw_credits[8];
+	u8	reserved1[4];
+	__le16	qs_handles[8];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_configure_vsi_tc_bw_data);
+
+/* Query vsi bw configuration (indirect 0x0408) */
+struct avf_aqc_query_vsi_bw_config_resp {
+	u8	tc_valid_bits;
+	u8	tc_suspended_bits;
+	u8	reserved[14];
+	__le16	qs_handles[8];
+	u8	reserved1[4];
+	__le16	port_bw_limit;
+	u8	reserved2[2];
+	u8	max_bw; /* 0-3, limit = 2^max */
+	u8	reserved3[23];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_query_vsi_bw_config_resp);
+
+/* Query VSI Bandwidth Allocation per Traffic Type (indirect 0x040A) */
+struct avf_aqc_query_vsi_ets_sla_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[3];
+	u8	share_credits[8];
+	__le16	credits[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_query_vsi_ets_sla_config_resp);
+
+/* Configure Switching Component Bandwidth Limit (direct 0x0410) */
+struct avf_aqc_configure_switching_comp_bw_limit {
+	__le16	seid;
+	u8	reserved[2];
+	__le16	credit;
+	u8	reserved1[2];
+	u8	max_bw; /* 0-3, limit = 2^max */
+	u8	reserved2[7];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_configure_switching_comp_bw_limit);
+
+/* Enable  Physical Port ETS (indirect 0x0413)
+ * Modify  Physical Port ETS (indirect 0x0414)
+ * Disable Physical Port ETS (indirect 0x0415)
+ */
+struct avf_aqc_configure_switching_comp_ets_data {
+	u8	reserved[4];
+	u8	tc_valid_bits;
+	u8	seepage;
+#define AVF_AQ_ETS_SEEPAGE_EN_MASK	0x1
+	u8	tc_strict_priority_flags;
+	u8	reserved1[17];
+	u8	tc_bw_share_credits[8];
+	u8	reserved2[96];
+};
+
+AVF_CHECK_STRUCT_LEN(0x80, avf_aqc_configure_switching_comp_ets_data);
+
+/* Configure Switching Component Bandwidth Limits per Tc (indirect 0x0416) */
+struct avf_aqc_configure_switching_comp_ets_bw_limit_data {
+	u8	tc_valid_bits;
+	u8	reserved[15];
+	__le16	tc_bw_credit[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved1[28];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40,
+		      avf_aqc_configure_switching_comp_ets_bw_limit_data);
+
+/* Configure Switching Component Bandwidth Allocation per Tc
+ * (indirect 0x0417)
+ */
+struct avf_aqc_configure_switching_comp_bw_config_data {
+	u8	tc_valid_bits;
+	u8	reserved[2];
+	u8	absolute_credits; /* bool */
+	u8	tc_bw_share_credits[8];
+	u8	reserved1[20];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_configure_switching_comp_bw_config_data);
+
+/* Query Switching Component Configuration (indirect 0x0418) */
+struct avf_aqc_query_switching_comp_ets_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[35];
+	__le16	port_bw_limit;
+	u8	reserved1[2];
+	u8	tc_bw_max; /* 0-3, limit = 2^max */
+	u8	reserved2[23];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_query_switching_comp_ets_config_resp);
+
+/* Query PhysicalPort ETS Configuration (indirect 0x0419) */
+struct avf_aqc_query_port_ets_config_resp {
+	u8	reserved[4];
+	u8	tc_valid_bits;
+	u8	reserved1;
+	u8	tc_strict_priority_bits;
+	u8	reserved2;
+	u8	tc_bw_share_credits[8];
+	__le16	tc_bw_limits[8];
+
+	/* 4 bits per tc 0-7, 4th bit reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved3[32];
+};
+
+AVF_CHECK_STRUCT_LEN(0x44, avf_aqc_query_port_ets_config_resp);
+
+/* Query Switching Component Bandwidth Allocation per Traffic Type
+ * (indirect 0x041A)
+ */
+struct avf_aqc_query_switching_comp_bw_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[2];
+	u8	absolute_credits_enable; /* bool */
+	u8	tc_bw_share_credits[8];
+	__le16	tc_bw_limits[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_query_switching_comp_bw_config_resp);
+
+/* Suspend/resume port TX traffic
+ * (direct 0x041B and 0x041C) uses the generic SEID struct
+ */
+
+/* Configure partition BW
+ * (indirect 0x041D)
+ */
+struct avf_aqc_configure_partition_bw_data {
+	__le16	pf_valid_bits;
+	u8	min_bw[16];      /* guaranteed bandwidth */
+	u8	max_bw[16];      /* bandwidth limit */
+};
+
+AVF_CHECK_STRUCT_LEN(0x22, avf_aqc_configure_partition_bw_data);
+
+/* Get and set the active HMC resource profile and status.
+ * (direct 0x0500) and (direct 0x0501)
+ */
+struct avf_aq_get_set_hmc_resource_profile {
+	u8	pm_profile;
+	u8	pe_vf_enabled;
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_get_set_hmc_resource_profile);
+
+enum avf_aq_hmc_profile {
+	/* AVF_HMC_PROFILE_NO_CHANGE	= 0, reserved */
+	AVF_HMC_PROFILE_DEFAULT	= 1,
+	AVF_HMC_PROFILE_FAVOR_VF	= 2,
+	AVF_HMC_PROFILE_EQUAL		= 3,
+};
+
+/* Get PHY Abilities (indirect 0x0600) uses the generic indirect struct */
+
+/* set in param0 for get phy abilities to report qualified modules */
+#define AVF_AQ_PHY_REPORT_QUALIFIED_MODULES	0x0001
+#define AVF_AQ_PHY_REPORT_INITIAL_VALUES	0x0002
+
+enum avf_aq_phy_type {
+	AVF_PHY_TYPE_SGMII			= 0x0,
+	AVF_PHY_TYPE_1000BASE_KX		= 0x1,
+	AVF_PHY_TYPE_10GBASE_KX4		= 0x2,
+	AVF_PHY_TYPE_10GBASE_KR		= 0x3,
+	AVF_PHY_TYPE_40GBASE_KR4		= 0x4,
+	AVF_PHY_TYPE_XAUI			= 0x5,
+	AVF_PHY_TYPE_XFI			= 0x6,
+	AVF_PHY_TYPE_SFI			= 0x7,
+	AVF_PHY_TYPE_XLAUI			= 0x8,
+	AVF_PHY_TYPE_XLPPI			= 0x9,
+	AVF_PHY_TYPE_40GBASE_CR4_CU		= 0xA,
+	AVF_PHY_TYPE_10GBASE_CR1_CU		= 0xB,
+	AVF_PHY_TYPE_10GBASE_AOC		= 0xC,
+	AVF_PHY_TYPE_40GBASE_AOC		= 0xD,
+	AVF_PHY_TYPE_UNRECOGNIZED		= 0xE,
+	AVF_PHY_TYPE_UNSUPPORTED		= 0xF,
+	AVF_PHY_TYPE_100BASE_TX		= 0x11,
+	AVF_PHY_TYPE_1000BASE_T		= 0x12,
+	AVF_PHY_TYPE_10GBASE_T			= 0x13,
+	AVF_PHY_TYPE_10GBASE_SR		= 0x14,
+	AVF_PHY_TYPE_10GBASE_LR		= 0x15,
+	AVF_PHY_TYPE_10GBASE_SFPP_CU		= 0x16,
+	AVF_PHY_TYPE_10GBASE_CR1		= 0x17,
+	AVF_PHY_TYPE_40GBASE_CR4		= 0x18,
+	AVF_PHY_TYPE_40GBASE_SR4		= 0x19,
+	AVF_PHY_TYPE_40GBASE_LR4		= 0x1A,
+	AVF_PHY_TYPE_1000BASE_SX		= 0x1B,
+	AVF_PHY_TYPE_1000BASE_LX		= 0x1C,
+	AVF_PHY_TYPE_1000BASE_T_OPTICAL	= 0x1D,
+	AVF_PHY_TYPE_20GBASE_KR2		= 0x1E,
+	AVF_PHY_TYPE_25GBASE_KR		= 0x1F,
+	AVF_PHY_TYPE_25GBASE_CR		= 0x20,
+	AVF_PHY_TYPE_25GBASE_SR		= 0x21,
+	AVF_PHY_TYPE_25GBASE_LR		= 0x22,
+	AVF_PHY_TYPE_25GBASE_AOC		= 0x23,
+	AVF_PHY_TYPE_25GBASE_ACC		= 0x24,
+	AVF_PHY_TYPE_MAX,
+	AVF_PHY_TYPE_NOT_SUPPORTED_HIGH_TEMP	= 0xFD,
+	AVF_PHY_TYPE_EMPTY			= 0xFE,
+	AVF_PHY_TYPE_DEFAULT			= 0xFF,
+};
+
+#define AVF_LINK_SPEED_100MB_SHIFT	0x1
+#define AVF_LINK_SPEED_1000MB_SHIFT	0x2
+#define AVF_LINK_SPEED_10GB_SHIFT	0x3
+#define AVF_LINK_SPEED_40GB_SHIFT	0x4
+#define AVF_LINK_SPEED_20GB_SHIFT	0x5
+#define AVF_LINK_SPEED_25GB_SHIFT	0x6
+
+enum avf_aq_link_speed {
+	AVF_LINK_SPEED_UNKNOWN	= 0,
+	AVF_LINK_SPEED_100MB	= (1 << AVF_LINK_SPEED_100MB_SHIFT),
+	AVF_LINK_SPEED_1GB	= (1 << AVF_LINK_SPEED_1000MB_SHIFT),
+	AVF_LINK_SPEED_10GB	= (1 << AVF_LINK_SPEED_10GB_SHIFT),
+	AVF_LINK_SPEED_40GB	= (1 << AVF_LINK_SPEED_40GB_SHIFT),
+	AVF_LINK_SPEED_20GB	= (1 << AVF_LINK_SPEED_20GB_SHIFT),
+	AVF_LINK_SPEED_25GB	= (1 << AVF_LINK_SPEED_25GB_SHIFT),
+};
+
+struct avf_aqc_module_desc {
+	u8 oui[3];
+	u8 reserved1;
+	u8 part_number[16];
+	u8 revision[4];
+	u8 reserved2[8];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_module_desc);
+
+struct avf_aq_get_phy_abilities_resp {
+	__le32	phy_type;       /* bitmap using the above enum for offsets */
+	u8	link_speed;     /* bitmap using the above enum bit patterns */
+	u8	abilities;
+#define AVF_AQ_PHY_FLAG_PAUSE_TX	0x01
+#define AVF_AQ_PHY_FLAG_PAUSE_RX	0x02
+#define AVF_AQ_PHY_FLAG_LOW_POWER	0x04
+#define AVF_AQ_PHY_LINK_ENABLED	0x08
+#define AVF_AQ_PHY_AN_ENABLED		0x10
+#define AVF_AQ_PHY_FLAG_MODULE_QUAL	0x20
+#define AVF_AQ_PHY_FEC_ABILITY_KR	0x40
+#define AVF_AQ_PHY_FEC_ABILITY_RS	0x80
+	__le16	eee_capability;
+#define AVF_AQ_EEE_100BASE_TX		0x0002
+#define AVF_AQ_EEE_1000BASE_T		0x0004
+#define AVF_AQ_EEE_10GBASE_T		0x0008
+#define AVF_AQ_EEE_1000BASE_KX		0x0010
+#define AVF_AQ_EEE_10GBASE_KX4		0x0020
+#define AVF_AQ_EEE_10GBASE_KR		0x0040
+	__le32	eeer_val;
+	u8	d3_lpan;
+#define AVF_AQ_SET_PHY_D3_LPAN_ENA	0x01
+	u8	phy_type_ext;
+#define AVF_AQ_PHY_TYPE_EXT_25G_KR	0x01
+#define AVF_AQ_PHY_TYPE_EXT_25G_CR	0x02
+#define AVF_AQ_PHY_TYPE_EXT_25G_SR	0x04
+#define AVF_AQ_PHY_TYPE_EXT_25G_LR	0x08
+#define AVF_AQ_PHY_TYPE_EXT_25G_AOC	0x10
+#define AVF_AQ_PHY_TYPE_EXT_25G_ACC	0x20
+	u8	fec_cfg_curr_mod_ext_info;
+#define AVF_AQ_ENABLE_FEC_KR		0x01
+#define AVF_AQ_ENABLE_FEC_RS		0x02
+#define AVF_AQ_REQUEST_FEC_KR		0x04
+#define AVF_AQ_REQUEST_FEC_RS		0x08
+#define AVF_AQ_ENABLE_FEC_AUTO		0x10
+#define AVF_AQ_FEC
+#define AVF_AQ_MODULE_TYPE_EXT_MASK	0xE0
+#define AVF_AQ_MODULE_TYPE_EXT_SHIFT	5
+
+	u8	ext_comp_code;
+	u8	phy_id[4];
+	u8	module_type[3];
+	u8	qualified_module_count;
+#define AVF_AQ_PHY_MAX_QMS		16
+	struct avf_aqc_module_desc	qualified_module[AVF_AQ_PHY_MAX_QMS];
+};
+
+AVF_CHECK_STRUCT_LEN(0x218, avf_aq_get_phy_abilities_resp);
+
+/* Set PHY Config (direct 0x0601) */
+struct avf_aq_set_phy_config { /* same bits as above in all */
+	__le32	phy_type;
+	u8	link_speed;
+	u8	abilities;
+/* bits 0-2 use the values from get_phy_abilities_resp */
+#define AVF_AQ_PHY_ENABLE_LINK		0x08
+#define AVF_AQ_PHY_ENABLE_AN		0x10
+#define AVF_AQ_PHY_ENABLE_ATOMIC_LINK	0x20
+	__le16	eee_capability;
+	__le32	eeer;
+	u8	low_power_ctrl;
+	u8	phy_type_ext;
+	u8	fec_config;
+#define AVF_AQ_SET_FEC_ABILITY_KR	BIT(0)
+#define AVF_AQ_SET_FEC_ABILITY_RS	BIT(1)
+#define AVF_AQ_SET_FEC_REQUEST_KR	BIT(2)
+#define AVF_AQ_SET_FEC_REQUEST_RS	BIT(3)
+#define AVF_AQ_SET_FEC_AUTO		BIT(4)
+#define AVF_AQ_PHY_FEC_CONFIG_SHIFT	0x0
+#define AVF_AQ_PHY_FEC_CONFIG_MASK	(0x1F << AVF_AQ_PHY_FEC_CONFIG_SHIFT)
+	u8	reserved;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_set_phy_config);
+
+/* Set MAC Config command data structure (direct 0x0603) */
+struct avf_aq_set_mac_config {
+	__le16	max_frame_size;
+	u8	params;
+#define AVF_AQ_SET_MAC_CONFIG_CRC_EN		0x04
+#define AVF_AQ_SET_MAC_CONFIG_PACING_MASK	0x78
+#define AVF_AQ_SET_MAC_CONFIG_PACING_SHIFT	3
+#define AVF_AQ_SET_MAC_CONFIG_PACING_NONE	0x0
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1B_13TX	0xF
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_9TX	0x9
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_4TX	0x8
+#define AVF_AQ_SET_MAC_CONFIG_PACING_3DW_7TX	0x7
+#define AVF_AQ_SET_MAC_CONFIG_PACING_2DW_3TX	0x6
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_1TX	0x5
+#define AVF_AQ_SET_MAC_CONFIG_PACING_3DW_2TX	0x4
+#define AVF_AQ_SET_MAC_CONFIG_PACING_7DW_3TX	0x3
+#define AVF_AQ_SET_MAC_CONFIG_PACING_4DW_1TX	0x2
+#define AVF_AQ_SET_MAC_CONFIG_PACING_9DW_1TX	0x1
+	u8	tx_timer_priority; /* bitmap */
+	__le16	tx_timer_value;
+	__le16	fc_refresh_threshold;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_set_mac_config);
+
+/* Restart Auto-Negotiation (direct 0x605) */
+struct avf_aqc_set_link_restart_an {
+	u8	command;
+#define AVF_AQ_PHY_RESTART_AN	0x02
+#define AVF_AQ_PHY_LINK_ENABLE	0x04
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_link_restart_an);
+
+/* Get Link Status cmd & response data structure (direct 0x0607) */
+struct avf_aqc_get_link_status {
+	__le16	command_flags; /* only field set on command */
+#define AVF_AQ_LSE_MASK		0x3
+#define AVF_AQ_LSE_NOP			0x0
+#define AVF_AQ_LSE_DISABLE		0x2
+#define AVF_AQ_LSE_ENABLE		0x3
+/* only response uses this flag */
+#define AVF_AQ_LSE_IS_ENABLED		0x1
+	u8	phy_type;    /* avf_aq_phy_type   */
+	u8	link_speed;  /* avf_aq_link_speed */
+	u8	link_info;
+#define AVF_AQ_LINK_UP			0x01    /* obsolete */
+#define AVF_AQ_LINK_UP_FUNCTION	0x01
+#define AVF_AQ_LINK_FAULT		0x02
+#define AVF_AQ_LINK_FAULT_TX		0x04
+#define AVF_AQ_LINK_FAULT_RX		0x08
+#define AVF_AQ_LINK_FAULT_REMOTE	0x10
+#define AVF_AQ_LINK_UP_PORT		0x20
+#define AVF_AQ_MEDIA_AVAILABLE		0x40
+#define AVF_AQ_SIGNAL_DETECT		0x80
+	u8	an_info;
+#define AVF_AQ_AN_COMPLETED		0x01
+#define AVF_AQ_LP_AN_ABILITY		0x02
+#define AVF_AQ_PD_FAULT		0x04
+#define AVF_AQ_FEC_EN			0x08
+#define AVF_AQ_PHY_LOW_POWER		0x10
+#define AVF_AQ_LINK_PAUSE_TX		0x20
+#define AVF_AQ_LINK_PAUSE_RX		0x40
+#define AVF_AQ_QUALIFIED_MODULE	0x80
+	u8	ext_info;
+#define AVF_AQ_LINK_PHY_TEMP_ALARM	0x01
+#define AVF_AQ_LINK_XCESSIVE_ERRORS	0x02
+#define AVF_AQ_LINK_TX_SHIFT		0x02
+#define AVF_AQ_LINK_TX_MASK		(0x03 << AVF_AQ_LINK_TX_SHIFT)
+#define AVF_AQ_LINK_TX_ACTIVE		0x00
+#define AVF_AQ_LINK_TX_DRAINED		0x01
+#define AVF_AQ_LINK_TX_FLUSHED		0x03
+#define AVF_AQ_LINK_FORCED_40G		0x10
+/* 25G Error Codes */
+#define AVF_AQ_25G_NO_ERR		0X00
+#define AVF_AQ_25G_NOT_PRESENT		0X01
+#define AVF_AQ_25G_NVM_CRC_ERR		0X02
+#define AVF_AQ_25G_SBUS_UCODE_ERR	0X03
+#define AVF_AQ_25G_SERDES_UCODE_ERR	0X04
+#define AVF_AQ_25G_NIMB_UCODE_ERR	0X05
+	u8	loopback; /* use defines from avf_aqc_set_lb_mode */
+/* Since firmware API 1.7 loopback field keeps power class info as well */
+#define AVF_AQ_LOOPBACK_MASK		0x07
+#define AVF_AQ_PWR_CLASS_SHIFT_LB	6
+#define AVF_AQ_PWR_CLASS_MASK_LB	(0x03 << AVF_AQ_PWR_CLASS_SHIFT_LB)
+	__le16	max_frame_size;
+	u8	config;
+#define AVF_AQ_CONFIG_FEC_KR_ENA	0x01
+#define AVF_AQ_CONFIG_FEC_RS_ENA	0x02
+#define AVF_AQ_CONFIG_CRC_ENA		0x04
+#define AVF_AQ_CONFIG_PACING_MASK	0x78
+	union {
+		struct {
+			u8	power_desc;
+#define AVF_AQ_LINK_POWER_CLASS_1	0x00
+#define AVF_AQ_LINK_POWER_CLASS_2	0x01
+#define AVF_AQ_LINK_POWER_CLASS_3	0x02
+#define AVF_AQ_LINK_POWER_CLASS_4	0x03
+#define AVF_AQ_PWR_CLASS_MASK		0x03
+			u8	reserved[4];
+		};
+		struct {
+			u8	link_type[4];
+			u8	link_type_ext;
+		};
+	};
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_link_status);
+
+/* Set event mask command (direct 0x613) */
+struct avf_aqc_set_phy_int_mask {
+	u8	reserved[8];
+	__le16	event_mask;
+#define AVF_AQ_EVENT_LINK_UPDOWN	0x0002
+#define AVF_AQ_EVENT_MEDIA_NA		0x0004
+#define AVF_AQ_EVENT_LINK_FAULT	0x0008
+#define AVF_AQ_EVENT_PHY_TEMP_ALARM	0x0010
+#define AVF_AQ_EVENT_EXCESSIVE_ERRORS	0x0020
+#define AVF_AQ_EVENT_SIGNAL_DETECT	0x0040
+#define AVF_AQ_EVENT_AN_COMPLETED	0x0080
+#define AVF_AQ_EVENT_MODULE_QUAL_FAIL	0x0100
+#define AVF_AQ_EVENT_PORT_TX_SUSPENDED	0x0200
+	u8	reserved1[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_phy_int_mask);
+
+/* Get Local AN advt register (direct 0x0614)
+ * Set Local AN advt register (direct 0x0615)
+ * Get Link Partner AN advt register (direct 0x0616)
+ */
+struct avf_aqc_an_advt_reg {
+	__le32	local_an_reg0;
+	__le16	local_an_reg1;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_an_advt_reg);
+
+/* Set Loopback mode (0x0618) */
+struct avf_aqc_set_lb_mode {
+	u8	lb_level;
+#define AVF_AQ_LB_NONE	0
+#define AVF_AQ_LB_MAC	1
+#define AVF_AQ_LB_SERDES	2
+#define AVF_AQ_LB_PHY_INT	3
+#define AVF_AQ_LB_PHY_EXT	4
+#define AVF_AQ_LB_CPVL_PCS	5
+#define AVF_AQ_LB_CPVL_EXT	6
+#define AVF_AQ_LB_PHY_LOCAL	0x01
+#define AVF_AQ_LB_PHY_REMOTE	0x02
+#define AVF_AQ_LB_MAC_LOCAL	0x04
+	u8	lb_type;
+#define AVF_AQ_LB_LOCAL	0
+#define AVF_AQ_LB_FAR	0x01
+	u8	speed;
+#define AVF_AQ_LB_SPEED_NONE	0
+#define AVF_AQ_LB_SPEED_1G	1
+#define AVF_AQ_LB_SPEED_10G	2
+#define AVF_AQ_LB_SPEED_40G	3
+#define AVF_AQ_LB_SPEED_20G	4
+	u8	force_speed;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_lb_mode);
+
+/* Set PHY Debug command (0x0622) */
+struct avf_aqc_set_phy_debug {
+	u8	command_flags;
+#define AVF_AQ_PHY_DEBUG_RESET_INTERNAL	0x02
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT	2
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_MASK	(0x03 << \
+					AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT)
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_NONE	0x00
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_HARD	0x01
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SOFT	0x02
+/* Disable link manageability on a single port */
+#define AVF_AQ_PHY_DEBUG_DISABLE_LINK_FW	0x10
+/* Disable link manageability on all ports needs both bits 4 and 5 */
+#define AVF_AQ_PHY_DEBUG_DISABLE_ALL_LINK_FW	0x20
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_phy_debug);
+
+enum avf_aq_phy_reg_type {
+	AVF_AQC_PHY_REG_INTERNAL	= 0x1,
+	AVF_AQC_PHY_REG_EXERNAL_BASET	= 0x2,
+	AVF_AQC_PHY_REG_EXERNAL_MODULE	= 0x3
+};
+
+/* Run PHY Activity (0x0626) */
+struct avf_aqc_run_phy_activity {
+	__le16  activity_id;
+	u8      flags;
+	u8      reserved1;
+	__le32  control;
+	__le32  data;
+	u8      reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_run_phy_activity);
+
+/* Set PHY Register command (0x0628) */
+/* Get PHY Register command (0x0629) */
+struct avf_aqc_phy_register_access {
+	u8	phy_interface;
+#define AVF_AQ_PHY_REG_ACCESS_INTERNAL	0
+#define AVF_AQ_PHY_REG_ACCESS_EXTERNAL	1
+#define AVF_AQ_PHY_REG_ACCESS_EXTERNAL_MODULE	2
+	u8	dev_addres;
+	u8	reserved1[2];
+	__le32	reg_address;
+	__le32	reg_value;
+	u8	reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_phy_register_access);
+
+/* NVM Read command (indirect 0x0701)
+ * NVM Erase commands (direct 0x0702)
+ * NVM Update commands (indirect 0x0703)
+ */
+struct avf_aqc_nvm_update {
+	u8	command_flags;
+#define AVF_AQ_NVM_LAST_CMD			0x01
+#define AVF_AQ_NVM_FLASH_ONLY			0x80
+#define AVF_AQ_NVM_PRESERVATION_FLAGS_SHIFT	1
+#define AVF_AQ_NVM_PRESERVATION_FLAGS_MASK	0x03
+#define AVF_AQ_NVM_PRESERVATION_FLAGS_SELECTED	0x03
+#define AVF_AQ_NVM_PRESERVATION_FLAGS_ALL	0x01
+	u8	module_pointer;
+	__le16	length;
+	__le32	offset;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_update);
+
+/* NVM Config Read (indirect 0x0704) */
+struct avf_aqc_nvm_config_read {
+	__le16	cmd_flags;
+#define AVF_AQ_ANVM_SINGLE_OR_MULTIPLE_FEATURES_MASK	1
+#define AVF_AQ_ANVM_READ_SINGLE_FEATURE		0
+#define AVF_AQ_ANVM_READ_MULTIPLE_FEATURES		1
+	__le16	element_count;
+	__le16	element_id;	/* Feature/field ID */
+	__le16	element_id_msw;	/* MSWord of field ID */
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_config_read);
+
+/* NVM Config Write (indirect 0x0705) */
+struct avf_aqc_nvm_config_write {
+	__le16	cmd_flags;
+	__le16	element_count;
+	u8	reserved[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_config_write);
+
+/* Used for 0x0704 as well as for 0x0705 commands */
+#define AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT		1
+#define AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_MASK \
+				(1 << AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+#define AVF_AQ_ANVM_FEATURE		0
+#define AVF_AQ_ANVM_IMMEDIATE_FIELD \
+				(1 << AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+struct avf_aqc_nvm_config_data_feature {
+	__le16 feature_id;
+#define AVF_AQ_ANVM_FEATURE_OPTION_OEM_ONLY		0x01
+#define AVF_AQ_ANVM_FEATURE_OPTION_DWORD_MAP		0x08
+#define AVF_AQ_ANVM_FEATURE_OPTION_POR_CSR		0x10
+	__le16 feature_options;
+	__le16 feature_selection;
+};
+
+AVF_CHECK_STRUCT_LEN(0x6, avf_aqc_nvm_config_data_feature);
+
+struct avf_aqc_nvm_config_data_immediate_field {
+	__le32 field_id;
+	__le32 field_value;
+	__le16 field_options;
+	__le16 reserved;
+};
+
+AVF_CHECK_STRUCT_LEN(0xc, avf_aqc_nvm_config_data_immediate_field);
+
+/* OEM Post Update (indirect 0x0720)
+ * no command data struct used
+ */
+struct avf_aqc_nvm_oem_post_update {
+#define AVF_AQ_NVM_OEM_POST_UPDATE_EXTERNAL_DATA	0x01
+	u8 sel_data;
+	u8 reserved[7];
+};
+
+AVF_CHECK_STRUCT_LEN(0x8, avf_aqc_nvm_oem_post_update);
+
+struct avf_aqc_nvm_oem_post_update_buffer {
+	u8 str_len;
+	u8 dev_addr;
+	__le16 eeprom_addr;
+	u8 data[36];
+};
+
+AVF_CHECK_STRUCT_LEN(0x28, avf_aqc_nvm_oem_post_update_buffer);
+
+/* Thermal Sensor (indirect 0x0721)
+ *     read or set thermal sensor configs and values
+ *     takes a sensor and command specific data buffer, not detailed here
+ */
+struct avf_aqc_thermal_sensor {
+	u8 sensor_action;
+#define AVF_AQ_THERMAL_SENSOR_READ_CONFIG	0
+#define AVF_AQ_THERMAL_SENSOR_SET_CONFIG	1
+#define AVF_AQ_THERMAL_SENSOR_READ_TEMP	2
+	u8 reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_thermal_sensor);
+
+/* Send to PF command (indirect 0x0801) id is only used by PF
+ * Send to VF command (indirect 0x0802) id is only used by PF
+ * Send to Peer PF command (indirect 0x0803)
+ */
+struct avf_aqc_pf_vf_message {
+	__le32	id;
+	u8	reserved[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_pf_vf_message);
+
+/* Alternate structure */
+
+/* Direct write (direct 0x0900)
+ * Direct read (direct 0x0902)
+ */
+struct avf_aqc_alternate_write {
+	__le32 address0;
+	__le32 data0;
+	__le32 address1;
+	__le32 data1;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_write);
+
+/* Indirect write (indirect 0x0901)
+ * Indirect read (indirect 0x0903)
+ */
+
+struct avf_aqc_alternate_ind_write {
+	__le32 address;
+	__le32 length;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_ind_write);
+
+/* Done alternate write (direct 0x0904)
+ * uses avf_aq_desc
+ */
+struct avf_aqc_alternate_write_done {
+	__le16	cmd_flags;
+#define AVF_AQ_ALTERNATE_MODE_BIOS_MASK	1
+#define AVF_AQ_ALTERNATE_MODE_BIOS_LEGACY	0
+#define AVF_AQ_ALTERNATE_MODE_BIOS_UEFI	1
+#define AVF_AQ_ALTERNATE_RESET_NEEDED		2
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_write_done);
+
+/* Set OEM mode (direct 0x0905) */
+struct avf_aqc_alternate_set_mode {
+	__le32	mode;
+#define AVF_AQ_ALTERNATE_MODE_NONE	0
+#define AVF_AQ_ALTERNATE_MODE_OEM	1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_set_mode);
+
+/* Clear port Alternate RAM (direct 0x0906) uses avf_aq_desc */
+
+/* async events 0x10xx */
+
+/* Lan Queue Overflow Event (direct, 0x1001) */
+struct avf_aqc_lan_overflow {
+	__le32	prtdcb_rupto;
+	__le32	otx_ctl;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lan_overflow);
+
+/* Get LLDP MIB (indirect 0x0A00) */
+struct avf_aqc_lldp_get_mib {
+	u8	type;
+	u8	reserved1;
+#define AVF_AQ_LLDP_MIB_TYPE_MASK		0x3
+#define AVF_AQ_LLDP_MIB_LOCAL			0x0
+#define AVF_AQ_LLDP_MIB_REMOTE			0x1
+#define AVF_AQ_LLDP_MIB_LOCAL_AND_REMOTE	0x2
+#define AVF_AQ_LLDP_BRIDGE_TYPE_MASK		0xC
+#define AVF_AQ_LLDP_BRIDGE_TYPE_SHIFT		0x2
+#define AVF_AQ_LLDP_BRIDGE_TYPE_NEAREST_BRIDGE	0x0
+#define AVF_AQ_LLDP_BRIDGE_TYPE_NON_TPMR	0x1
+#define AVF_AQ_LLDP_TX_SHIFT			0x4
+#define AVF_AQ_LLDP_TX_MASK			(0x03 << AVF_AQ_LLDP_TX_SHIFT)
+/* TX pause flags use AVF_AQ_LINK_TX_* above */
+	__le16	local_len;
+	__le16	remote_len;
+	u8	reserved2[2];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_get_mib);
+
+/* Configure LLDP MIB Change Event (direct 0x0A01)
+ * also used for the event (with type in the command field)
+ */
+struct avf_aqc_lldp_update_mib {
+	u8	command;
+#define AVF_AQ_LLDP_MIB_UPDATE_ENABLE	0x0
+#define AVF_AQ_LLDP_MIB_UPDATE_DISABLE	0x1
+	u8	reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_update_mib);
+
+/* Add LLDP TLV (indirect 0x0A02)
+ * Delete LLDP TLV (indirect 0x0A04)
+ */
+struct avf_aqc_lldp_add_tlv {
+	u8	type; /* only nearest bridge and non-TPMR from 0x0A00 */
+	u8	reserved1[1];
+	__le16	len;
+	u8	reserved2[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_add_tlv);
+
+/* Update LLDP TLV (indirect 0x0A03) */
+struct avf_aqc_lldp_update_tlv {
+	u8	type; /* only nearest bridge and non-TPMR from 0x0A00 */
+	u8	reserved;
+	__le16	old_len;
+	__le16	new_offset;
+	__le16	new_len;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_update_tlv);
+
+/* Stop LLDP (direct 0x0A05) */
+struct avf_aqc_lldp_stop {
+	u8	command;
+#define AVF_AQ_LLDP_AGENT_STOP		0x0
+#define AVF_AQ_LLDP_AGENT_SHUTDOWN	0x1
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_stop);
+
+/* Start LLDP (direct 0x0A06) */
+
+struct avf_aqc_lldp_start {
+	u8	command;
+#define AVF_AQ_LLDP_AGENT_START	0x1
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_start);
+
+/* Set DCB (direct 0x0303) */
+struct avf_aqc_set_dcb_parameters {
+	u8 command;
+#define AVF_AQ_DCB_SET_AGENT	0x1
+#define AVF_DCB_VALID		0x1
+	u8 valid_flags;
+	u8 reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_dcb_parameters);
+
+/* Get CEE DCBX Oper Config (0x0A07)
+ * uses the generic descriptor struct
+ * returns below as indirect response
+ */
+
+#define AVF_AQC_CEE_APP_FCOE_SHIFT	0x0
+#define AVF_AQC_CEE_APP_FCOE_MASK	(0x7 << AVF_AQC_CEE_APP_FCOE_SHIFT)
+#define AVF_AQC_CEE_APP_ISCSI_SHIFT	0x3
+#define AVF_AQC_CEE_APP_ISCSI_MASK	(0x7 << AVF_AQC_CEE_APP_ISCSI_SHIFT)
+#define AVF_AQC_CEE_APP_FIP_SHIFT	0x8
+#define AVF_AQC_CEE_APP_FIP_MASK	(0x7 << AVF_AQC_CEE_APP_FIP_SHIFT)
+
+#define AVF_AQC_CEE_PG_STATUS_SHIFT	0x0
+#define AVF_AQC_CEE_PG_STATUS_MASK	(0x7 << AVF_AQC_CEE_PG_STATUS_SHIFT)
+#define AVF_AQC_CEE_PFC_STATUS_SHIFT	0x3
+#define AVF_AQC_CEE_PFC_STATUS_MASK	(0x7 << AVF_AQC_CEE_PFC_STATUS_SHIFT)
+#define AVF_AQC_CEE_APP_STATUS_SHIFT	0x8
+#define AVF_AQC_CEE_APP_STATUS_MASK	(0x7 << AVF_AQC_CEE_APP_STATUS_SHIFT)
+#define AVF_AQC_CEE_FCOE_STATUS_SHIFT	0x8
+#define AVF_AQC_CEE_FCOE_STATUS_MASK	(0x7 << AVF_AQC_CEE_FCOE_STATUS_SHIFT)
+#define AVF_AQC_CEE_ISCSI_STATUS_SHIFT	0xB
+#define AVF_AQC_CEE_ISCSI_STATUS_MASK	(0x7 << AVF_AQC_CEE_ISCSI_STATUS_SHIFT)
+#define AVF_AQC_CEE_FIP_STATUS_SHIFT	0x10
+#define AVF_AQC_CEE_FIP_STATUS_MASK	(0x7 << AVF_AQC_CEE_FIP_STATUS_SHIFT)
+
+/* struct avf_aqc_get_cee_dcb_cfg_v1_resp was originally defined with
+ * word boundary layout issues, which the Linux compilers silently deal
+ * with by adding padding, making the actual struct larger than designed.
+ * However, the FW compiler for the NIC is less lenient and complains
+ * about the struct.  Hence, the struct defined here has an extra byte in
+ * fields reserved3 and reserved4 to directly acknowledge that padding,
+ * and the new length is used in the length check macro.
+ */
+struct avf_aqc_get_cee_dcb_cfg_v1_resp {
+	u8	reserved1;
+	u8	oper_num_tc;
+	u8	oper_prio_tc[4];
+	u8	reserved2;
+	u8	oper_tc_bw[8];
+	u8	oper_pfc_en;
+	u8	reserved3[2];
+	__le16	oper_app_prio;
+	u8	reserved4[2];
+	__le16	tlv_status;
+};
+
+AVF_CHECK_STRUCT_LEN(0x18, avf_aqc_get_cee_dcb_cfg_v1_resp);
+
+struct avf_aqc_get_cee_dcb_cfg_resp {
+	u8	oper_num_tc;
+	u8	oper_prio_tc[4];
+	u8	oper_tc_bw[8];
+	u8	oper_pfc_en;
+	__le16	oper_app_prio;
+	__le32	tlv_status;
+	u8	reserved[12];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_get_cee_dcb_cfg_resp);
+
+/*	Set Local LLDP MIB (indirect 0x0A08)
+ *	Used to replace the local MIB of a given LLDP agent. e.g. DCBx
+ */
+struct avf_aqc_lldp_set_local_mib {
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT	0
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_MASK	(1 << \
+					SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_LOCAL_MIB	0x0
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT	(1)
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_MASK	(1 << \
+				SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS		0x1
+	u8	type;
+	u8	reserved0;
+	__le16	length;
+	u8	reserved1[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_set_local_mib);
+
+struct avf_aqc_lldp_set_local_mib_resp {
+#define SET_LOCAL_MIB_RESP_EVENT_TRIGGERED_MASK      0x01
+	u8  status;
+	u8  reserved[15];
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_lldp_set_local_mib_resp);
+
+/*	Stop/Start LLDP Agent (direct 0x0A09)
+ *	Used for stopping/starting specific LLDP agent. e.g. DCBx
+ */
+struct avf_aqc_lldp_stop_start_specific_agent {
+#define AVF_AQC_START_SPECIFIC_AGENT_SHIFT	0
+#define AVF_AQC_START_SPECIFIC_AGENT_MASK \
+				(1 << AVF_AQC_START_SPECIFIC_AGENT_SHIFT)
+	u8	command;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_stop_start_specific_agent);
+
+/* Add Udp Tunnel command and completion (direct 0x0B00) */
+struct avf_aqc_add_udp_tunnel {
+	__le16	udp_port;
+	u8	reserved0[3];
+	u8	protocol_type;
+#define AVF_AQC_TUNNEL_TYPE_VXLAN	0x00
+#define AVF_AQC_TUNNEL_TYPE_NGE	0x01
+#define AVF_AQC_TUNNEL_TYPE_TEREDO	0x10
+#define AVF_AQC_TUNNEL_TYPE_VXLAN_GPE	0x11
+	u8	reserved1[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_udp_tunnel);
+
+struct avf_aqc_add_udp_tunnel_completion {
+	__le16	udp_port;
+	u8	filter_entry_index;
+	u8	multiple_pfs;
+#define AVF_AQC_SINGLE_PF		0x0
+#define AVF_AQC_MULTIPLE_PFS		0x1
+	u8	total_filters;
+	u8	reserved[11];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_udp_tunnel_completion);
+
+/* remove UDP Tunnel command (0x0B01) */
+struct avf_aqc_remove_udp_tunnel {
+	u8	reserved[2];
+	u8	index; /* 0 to 15 */
+	u8	reserved2[13];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_udp_tunnel);
+
+struct avf_aqc_del_udp_tunnel_completion {
+	__le16	udp_port;
+	u8	index; /* 0 to 15 */
+	u8	multiple_pfs;
+	u8	total_filters_used;
+	u8	reserved1[11];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_del_udp_tunnel_completion);
+
+struct avf_aqc_get_set_rss_key {
+#define AVF_AQC_SET_RSS_KEY_VSI_VALID		(0x1 << 15)
+#define AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT	0
+#define AVF_AQC_SET_RSS_KEY_VSI_ID_MASK	(0x3FF << \
+					AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT)
+	__le16	vsi_id;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_set_rss_key);
+
+struct avf_aqc_get_set_rss_key_data {
+	u8 standard_rss_key[0x28];
+	u8 extended_hash_key[0xc];
+};
+
+AVF_CHECK_STRUCT_LEN(0x34, avf_aqc_get_set_rss_key_data);
+
+struct  avf_aqc_get_set_rss_lut {
+#define AVF_AQC_SET_RSS_LUT_VSI_VALID		(0x1 << 15)
+#define AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT	0
+#define AVF_AQC_SET_RSS_LUT_VSI_ID_MASK	(0x3FF << \
+					AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT)
+	__le16	vsi_id;
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT	0
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK	(0x1 << \
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
+
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI	0
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF	1
+	__le16	flags;
+	u8	reserved[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_set_rss_lut);
+
+/* tunnel key structure 0x0B10 */
+
+struct avf_aqc_tunnel_key_structure {
+	u8	key1_off;
+	u8	key2_off;
+	u8	key1_len;  /* 0 to 15 */
+	u8	key2_len;  /* 0 to 15 */
+	u8	flags;
+#define AVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDE	0x01
+/* response flags */
+#define AVF_AQC_TUNNEL_KEY_STRUCT_SUCCESS	0x01
+#define AVF_AQC_TUNNEL_KEY_STRUCT_MODIFIED	0x02
+#define AVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDDEN	0x03
+	u8	network_key_index;
+#define AVF_AQC_NETWORK_KEY_INDEX_VXLAN		0x0
+#define AVF_AQC_NETWORK_KEY_INDEX_NGE			0x1
+#define AVF_AQC_NETWORK_KEY_INDEX_FLEX_MAC_IN_UDP	0x2
+#define AVF_AQC_NETWORK_KEY_INDEX_GRE			0x3
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_tunnel_key_structure);
+
+/* OEM mode commands (direct 0xFE0x) */
+struct avf_aqc_oem_param_change {
+	__le32	param_type;
+#define AVF_AQ_OEM_PARAM_TYPE_PF_CTL	0
+#define AVF_AQ_OEM_PARAM_TYPE_BW_CTL	1
+#define AVF_AQ_OEM_PARAM_MAC		2
+	__le32	param_value1;
+	__le16	param_value2;
+	u8	reserved[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_oem_param_change);
+
+struct avf_aqc_oem_state_change {
+	__le32	state;
+#define AVF_AQ_OEM_STATE_LINK_DOWN	0x0
+#define AVF_AQ_OEM_STATE_LINK_UP	0x1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_oem_state_change);
+
+/* Initialize OCSD (0xFE02, direct) */
+struct avf_aqc_opc_oem_ocsd_initialize {
+	u8 type_status;
+	u8 reserved1[3];
+	__le32 ocsd_memory_block_addr_high;
+	__le32 ocsd_memory_block_addr_low;
+	__le32 requested_update_interval;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_opc_oem_ocsd_initialize);
+
+/* Initialize OCBB  (0xFE03, direct) */
+struct avf_aqc_opc_oem_ocbb_initialize {
+	u8 type_status;
+	u8 reserved1[3];
+	__le32 ocbb_memory_block_addr_high;
+	__le32 ocbb_memory_block_addr_low;
+	u8 reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_opc_oem_ocbb_initialize);
+
+/* debug commands */
+
+/* get device id (0xFF00) uses the generic structure */
+
+/* set test mode (0xFF01, internal) */
+
+struct avf_acq_set_test_mode {
+	u8	mode;
+#define AVF_AQ_TEST_PARTIAL	0
+#define AVF_AQ_TEST_FULL	1
+#define AVF_AQ_TEST_NVM	2
+	u8	reserved[3];
+	u8	command;
+#define AVF_AQ_TEST_OPEN	0
+#define AVF_AQ_TEST_CLOSE	1
+#define AVF_AQ_TEST_INC	2
+	u8	reserved2[3];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_acq_set_test_mode);
+
+/* Debug Read Register command (0xFF03)
+ * Debug Write Register command (0xFF04)
+ */
+struct avf_aqc_debug_reg_read_write {
+	__le32 reserved;
+	__le32 address;
+	__le32 value_high;
+	__le32 value_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_reg_read_write);
+
+/* Scatter/gather Reg Read  (indirect 0xFF05)
+ * Scatter/gather Reg Write (indirect 0xFF06)
+ */
+
+/* avf_aq_desc is used for the command */
+struct avf_aqc_debug_reg_sg_element_data {
+	__le32 address;
+	__le32 value;
+};
+
+/* Debug Modify register (direct 0xFF07) */
+struct avf_aqc_debug_modify_reg {
+	__le32 address;
+	__le32 value;
+	__le32 clear_mask;
+	__le32 set_mask;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_modify_reg);
+
+/* dump internal data (0xFF08, indirect) */
+
+#define AVF_AQ_CLUSTER_ID_AUX		0
+#define AVF_AQ_CLUSTER_ID_SWITCH_FLU	1
+#define AVF_AQ_CLUSTER_ID_TXSCHED	2
+#define AVF_AQ_CLUSTER_ID_HMC		3
+#define AVF_AQ_CLUSTER_ID_MAC0		4
+#define AVF_AQ_CLUSTER_ID_MAC1		5
+#define AVF_AQ_CLUSTER_ID_MAC2		6
+#define AVF_AQ_CLUSTER_ID_MAC3		7
+#define AVF_AQ_CLUSTER_ID_DCB		8
+#define AVF_AQ_CLUSTER_ID_EMP_MEM	9
+#define AVF_AQ_CLUSTER_ID_PKT_BUF	10
+#define AVF_AQ_CLUSTER_ID_ALTRAM	11
+
+struct avf_aqc_debug_dump_internals {
+	u8	cluster_id;
+	u8	table_id;
+	__le16	data_size;
+	__le32	idx;
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_dump_internals);
+
+struct avf_aqc_debug_modify_internals {
+	u8	cluster_id;
+	u8	cluster_specific_params[7];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_modify_internals);
+
+#endif /* _AVF_ADMINQ_CMD_H_ */
diff --git a/drivers/net/avf/base/avf_alloc.h b/drivers/net/avf/base/avf_alloc.h
new file mode 100644
index 0000000..21e29bd
--- /dev/null
+++ b/drivers/net/avf/base/avf_alloc.h
@@ -0,0 +1,65 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ALLOC_H_
+#define _AVF_ALLOC_H_
+
+struct avf_hw;
+
+/* Memory allocation types */
+enum avf_memory_type {
+	avf_mem_arq_buf = 0,		/* ARQ indirect command buffer */
+	avf_mem_asq_buf = 1,
+	avf_mem_atq_buf = 2,		/* ATQ indirect command buffer */
+	avf_mem_arq_ring = 3,		/* ARQ descriptor ring */
+	avf_mem_atq_ring = 4,		/* ATQ descriptor ring */
+	avf_mem_pd = 5,		/* Page Descriptor */
+	avf_mem_bp = 6,		/* Backing Page - 4KB */
+	avf_mem_bp_jumbo = 7,		/* Backing Page - > 4KB */
+	avf_mem_reserved
+};
+
+/* prototype for functions used for dynamic memory allocation */
+enum avf_status_code avf_allocate_dma_mem(struct avf_hw *hw,
+					    struct avf_dma_mem *mem,
+					    enum avf_memory_type type,
+					    u64 size, u32 alignment);
+enum avf_status_code avf_free_dma_mem(struct avf_hw *hw,
+					struct avf_dma_mem *mem);
+enum avf_status_code avf_allocate_virt_mem(struct avf_hw *hw,
+					     struct avf_virt_mem *mem,
+					     u32 size);
+enum avf_status_code avf_free_virt_mem(struct avf_hw *hw,
+					 struct avf_virt_mem *mem);
+
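+/* These allocation hooks are implemented by the OS/DPDK osdep layer, not by
+ * the shared code itself.  An illustrative call, roughly the shape used when
+ * an admin queue descriptor ring is created (the exact buffer member and
+ * alignment macro are defined by the adminq code):
+ *
+ *	ret = avf_allocate_dma_mem(hw, &hw->aq.asq.desc_buf, avf_mem_atq_ring,
+ *				   hw->aq.num_asq_entries *
+ *				   sizeof(struct avf_aq_desc),
+ *				   AVF_ADMINQ_DESC_ALIGNMENT);
+ */
+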
+#endif /* _AVF_ALLOC_H_ */
diff --git a/drivers/net/avf/base/avf_common.c b/drivers/net/avf/base/avf_common.c
new file mode 100644
index 0000000..bbaadad
--- /dev/null
+++ b/drivers/net/avf/base/avf_common.c
@@ -0,0 +1,1845 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "avf_type.h"
+#include "avf_adminq.h"
+#include "avf_prototype.h"
+#include "virtchnl.h"
+
+
+/**
+ * avf_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the mac type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ **/
+enum avf_status_code avf_set_mac_type(struct avf_hw *hw)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+
+	DEBUGFUNC("avf_set_mac_type\n");
+
+	if (hw->vendor_id == AVF_INTEL_VENDOR_ID) {
+		switch (hw->device_id) {
+	/* TODO: remove undefined device IDs for now; need to think about
+	 * how to remove them from the shared code.
+	 */
+		case AVF_DEV_ID_ADAPTIVE_VF:
+			hw->mac.type = AVF_MAC_VF;
+			break;
+		default:
+			hw->mac.type = AVF_MAC_GENERIC;
+			break;
+		}
+	} else {
+		status = AVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	DEBUGOUT2("avf_set_mac_type found mac: %d, returns: %d\n",
+		  hw->mac.type, status);
+	return status;
+}
+
+/**
+ * avf_aq_str - convert AQ err code to a string
+ * @hw: pointer to the HW structure
+ * @aq_err: the AQ error code to convert
+ **/
+const char *avf_aq_str(struct avf_hw *hw, enum avf_admin_queue_err aq_err)
+{
+	switch (aq_err) {
+	case AVF_AQ_RC_OK:
+		return "OK";
+	case AVF_AQ_RC_EPERM:
+		return "AVF_AQ_RC_EPERM";
+	case AVF_AQ_RC_ENOENT:
+		return "AVF_AQ_RC_ENOENT";
+	case AVF_AQ_RC_ESRCH:
+		return "AVF_AQ_RC_ESRCH";
+	case AVF_AQ_RC_EINTR:
+		return "AVF_AQ_RC_EINTR";
+	case AVF_AQ_RC_EIO:
+		return "AVF_AQ_RC_EIO";
+	case AVF_AQ_RC_ENXIO:
+		return "AVF_AQ_RC_ENXIO";
+	case AVF_AQ_RC_E2BIG:
+		return "AVF_AQ_RC_E2BIG";
+	case AVF_AQ_RC_EAGAIN:
+		return "AVF_AQ_RC_EAGAIN";
+	case AVF_AQ_RC_ENOMEM:
+		return "AVF_AQ_RC_ENOMEM";
+	case AVF_AQ_RC_EACCES:
+		return "AVF_AQ_RC_EACCES";
+	case AVF_AQ_RC_EFAULT:
+		return "AVF_AQ_RC_EFAULT";
+	case AVF_AQ_RC_EBUSY:
+		return "AVF_AQ_RC_EBUSY";
+	case AVF_AQ_RC_EEXIST:
+		return "AVF_AQ_RC_EEXIST";
+	case AVF_AQ_RC_EINVAL:
+		return "AVF_AQ_RC_EINVAL";
+	case AVF_AQ_RC_ENOTTY:
+		return "AVF_AQ_RC_ENOTTY";
+	case AVF_AQ_RC_ENOSPC:
+		return "AVF_AQ_RC_ENOSPC";
+	case AVF_AQ_RC_ENOSYS:
+		return "AVF_AQ_RC_ENOSYS";
+	case AVF_AQ_RC_ERANGE:
+		return "AVF_AQ_RC_ERANGE";
+	case AVF_AQ_RC_EFLUSHED:
+		return "AVF_AQ_RC_EFLUSHED";
+	case AVF_AQ_RC_BAD_ADDR:
+		return "AVF_AQ_RC_BAD_ADDR";
+	case AVF_AQ_RC_EMODE:
+		return "AVF_AQ_RC_EMODE";
+	case AVF_AQ_RC_EFBIG:
+		return "AVF_AQ_RC_EFBIG";
+	}
+
+	snprintf(hw->err_str, sizeof(hw->err_str), "%d", aq_err);
+	return hw->err_str;
+}
+
+/**
+ * avf_stat_str - convert status err code to a string
+ * @hw: pointer to the HW structure
+ * @stat_err: the status error code to convert
+ **/
+const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err)
+{
+	switch (stat_err) {
+	case AVF_SUCCESS:
+		return "OK";
+	case AVF_ERR_NVM:
+		return "AVF_ERR_NVM";
+	case AVF_ERR_NVM_CHECKSUM:
+		return "AVF_ERR_NVM_CHECKSUM";
+	case AVF_ERR_PHY:
+		return "AVF_ERR_PHY";
+	case AVF_ERR_CONFIG:
+		return "AVF_ERR_CONFIG";
+	case AVF_ERR_PARAM:
+		return "AVF_ERR_PARAM";
+	case AVF_ERR_MAC_TYPE:
+		return "AVF_ERR_MAC_TYPE";
+	case AVF_ERR_UNKNOWN_PHY:
+		return "AVF_ERR_UNKNOWN_PHY";
+	case AVF_ERR_LINK_SETUP:
+		return "AVF_ERR_LINK_SETUP";
+	case AVF_ERR_ADAPTER_STOPPED:
+		return "AVF_ERR_ADAPTER_STOPPED";
+	case AVF_ERR_INVALID_MAC_ADDR:
+		return "AVF_ERR_INVALID_MAC_ADDR";
+	case AVF_ERR_DEVICE_NOT_SUPPORTED:
+		return "AVF_ERR_DEVICE_NOT_SUPPORTED";
+	case AVF_ERR_MASTER_REQUESTS_PENDING:
+		return "AVF_ERR_MASTER_REQUESTS_PENDING";
+	case AVF_ERR_INVALID_LINK_SETTINGS:
+		return "AVF_ERR_INVALID_LINK_SETTINGS";
+	case AVF_ERR_AUTONEG_NOT_COMPLETE:
+		return "AVF_ERR_AUTONEG_NOT_COMPLETE";
+	case AVF_ERR_RESET_FAILED:
+		return "AVF_ERR_RESET_FAILED";
+	case AVF_ERR_SWFW_SYNC:
+		return "AVF_ERR_SWFW_SYNC";
+	case AVF_ERR_NO_AVAILABLE_VSI:
+		return "AVF_ERR_NO_AVAILABLE_VSI";
+	case AVF_ERR_NO_MEMORY:
+		return "AVF_ERR_NO_MEMORY";
+	case AVF_ERR_BAD_PTR:
+		return "AVF_ERR_BAD_PTR";
+	case AVF_ERR_RING_FULL:
+		return "AVF_ERR_RING_FULL";
+	case AVF_ERR_INVALID_PD_ID:
+		return "AVF_ERR_INVALID_PD_ID";
+	case AVF_ERR_INVALID_QP_ID:
+		return "AVF_ERR_INVALID_QP_ID";
+	case AVF_ERR_INVALID_CQ_ID:
+		return "AVF_ERR_INVALID_CQ_ID";
+	case AVF_ERR_INVALID_CEQ_ID:
+		return "AVF_ERR_INVALID_CEQ_ID";
+	case AVF_ERR_INVALID_AEQ_ID:
+		return "AVF_ERR_INVALID_AEQ_ID";
+	case AVF_ERR_INVALID_SIZE:
+		return "AVF_ERR_INVALID_SIZE";
+	case AVF_ERR_INVALID_ARP_INDEX:
+		return "AVF_ERR_INVALID_ARP_INDEX";
+	case AVF_ERR_INVALID_FPM_FUNC_ID:
+		return "AVF_ERR_INVALID_FPM_FUNC_ID";
+	case AVF_ERR_QP_INVALID_MSG_SIZE:
+		return "AVF_ERR_QP_INVALID_MSG_SIZE";
+	case AVF_ERR_QP_TOOMANY_WRS_POSTED:
+		return "AVF_ERR_QP_TOOMANY_WRS_POSTED";
+	case AVF_ERR_INVALID_FRAG_COUNT:
+		return "AVF_ERR_INVALID_FRAG_COUNT";
+	case AVF_ERR_QUEUE_EMPTY:
+		return "AVF_ERR_QUEUE_EMPTY";
+	case AVF_ERR_INVALID_ALIGNMENT:
+		return "AVF_ERR_INVALID_ALIGNMENT";
+	case AVF_ERR_FLUSHED_QUEUE:
+		return "AVF_ERR_FLUSHED_QUEUE";
+	case AVF_ERR_INVALID_PUSH_PAGE_INDEX:
+		return "AVF_ERR_INVALID_PUSH_PAGE_INDEX";
+	case AVF_ERR_INVALID_IMM_DATA_SIZE:
+		return "AVF_ERR_INVALID_IMM_DATA_SIZE";
+	case AVF_ERR_TIMEOUT:
+		return "AVF_ERR_TIMEOUT";
+	case AVF_ERR_OPCODE_MISMATCH:
+		return "AVF_ERR_OPCODE_MISMATCH";
+	case AVF_ERR_CQP_COMPL_ERROR:
+		return "AVF_ERR_CQP_COMPL_ERROR";
+	case AVF_ERR_INVALID_VF_ID:
+		return "AVF_ERR_INVALID_VF_ID";
+	case AVF_ERR_INVALID_HMCFN_ID:
+		return "AVF_ERR_INVALID_HMCFN_ID";
+	case AVF_ERR_BACKING_PAGE_ERROR:
+		return "AVF_ERR_BACKING_PAGE_ERROR";
+	case AVF_ERR_NO_PBLCHUNKS_AVAILABLE:
+		return "AVF_ERR_NO_PBLCHUNKS_AVAILABLE";
+	case AVF_ERR_INVALID_PBLE_INDEX:
+		return "AVF_ERR_INVALID_PBLE_INDEX";
+	case AVF_ERR_INVALID_SD_INDEX:
+		return "AVF_ERR_INVALID_SD_INDEX";
+	case AVF_ERR_INVALID_PAGE_DESC_INDEX:
+		return "AVF_ERR_INVALID_PAGE_DESC_INDEX";
+	case AVF_ERR_INVALID_SD_TYPE:
+		return "AVF_ERR_INVALID_SD_TYPE";
+	case AVF_ERR_MEMCPY_FAILED:
+		return "AVF_ERR_MEMCPY_FAILED";
+	case AVF_ERR_INVALID_HMC_OBJ_INDEX:
+		return "AVF_ERR_INVALID_HMC_OBJ_INDEX";
+	case AVF_ERR_INVALID_HMC_OBJ_COUNT:
+		return "AVF_ERR_INVALID_HMC_OBJ_COUNT";
+	case AVF_ERR_INVALID_SRQ_ARM_LIMIT:
+		return "AVF_ERR_INVALID_SRQ_ARM_LIMIT";
+	case AVF_ERR_SRQ_ENABLED:
+		return "AVF_ERR_SRQ_ENABLED";
+	case AVF_ERR_ADMIN_QUEUE_ERROR:
+		return "AVF_ERR_ADMIN_QUEUE_ERROR";
+	case AVF_ERR_ADMIN_QUEUE_TIMEOUT:
+		return "AVF_ERR_ADMIN_QUEUE_TIMEOUT";
+	case AVF_ERR_BUF_TOO_SHORT:
+		return "AVF_ERR_BUF_TOO_SHORT";
+	case AVF_ERR_ADMIN_QUEUE_FULL:
+		return "AVF_ERR_ADMIN_QUEUE_FULL";
+	case AVF_ERR_ADMIN_QUEUE_NO_WORK:
+		return "AVF_ERR_ADMIN_QUEUE_NO_WORK";
+	case AVF_ERR_BAD_IWARP_CQE:
+		return "AVF_ERR_BAD_IWARP_CQE";
+	case AVF_ERR_NVM_BLANK_MODE:
+		return "AVF_ERR_NVM_BLANK_MODE";
+	case AVF_ERR_NOT_IMPLEMENTED:
+		return "AVF_ERR_NOT_IMPLEMENTED";
+	case AVF_ERR_PE_DOORBELL_NOT_ENABLED:
+		return "AVF_ERR_PE_DOORBELL_NOT_ENABLED";
+	case AVF_ERR_DIAG_TEST_FAILED:
+		return "AVF_ERR_DIAG_TEST_FAILED";
+	case AVF_ERR_NOT_READY:
+		return "AVF_ERR_NOT_READY";
+	case AVF_NOT_SUPPORTED:
+		return "AVF_NOT_SUPPORTED";
+	case AVF_ERR_FIRMWARE_API_VERSION:
+		return "AVF_ERR_FIRMWARE_API_VERSION";
+	case AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR:
+		return "AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR";
+	}
+
+	snprintf(hw->err_str, sizeof(hw->err_str), "%d", stat_err);
+	return hw->err_str;
+}
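+
+/* avf_aq_str() and avf_stat_str() only format codes for logging.  A typical
+ * (illustrative) pattern after an admin queue command fails:
+ *
+ *	status = avf_asq_send_command(hw, &desc, buff, buff_size, NULL);
+ *	if (status)
+ *		avf_debug(hw, AVF_DEBUG_DRIVER,
+ *			  "aq command failed, err %s, aq_err %s\n",
+ *			  avf_stat_str(hw, status),
+ *			  avf_aq_str(hw, hw->aq.asq_last_status));
+ */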
+
+/**
+ * avf_debug_aq
+ * @hw: pointer to the hw struct
+ * @mask: debug mask
+ * @desc: pointer to admin queue descriptor
+ * @buffer: pointer to command buffer
+ * @buf_len: max length of buffer
+ *
+ * Dumps debug log about adminq command with descriptor contents.
+ **/
+void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask, void *desc,
+		   void *buffer, u16 buf_len)
+{
+	struct avf_aq_desc *aq_desc = (struct avf_aq_desc *)desc;
+	u8 *buf = (u8 *)buffer;
+	u16 len;
+	u16 i = 0;
+
+	if ((!(mask & hw->debug_mask)) || (desc == NULL))
+		return;
+
+	len = LE16_TO_CPU(aq_desc->datalen);
+
+	avf_debug(hw, mask,
+		   "AQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+		   LE16_TO_CPU(aq_desc->opcode),
+		   LE16_TO_CPU(aq_desc->flags),
+		   LE16_TO_CPU(aq_desc->datalen),
+		   LE16_TO_CPU(aq_desc->retval));
+	avf_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->cookie_high),
+		   LE32_TO_CPU(aq_desc->cookie_low));
+	avf_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->params.internal.param0),
+		   LE32_TO_CPU(aq_desc->params.internal.param1));
+	avf_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->params.external.addr_high),
+		   LE32_TO_CPU(aq_desc->params.external.addr_low));
+
+	if ((buffer != NULL) && (aq_desc->datalen != 0)) {
+		avf_debug(hw, mask, "AQ CMD Buffer:\n");
+		if (buf_len < len)
+			len = buf_len;
+		/* write the full 16-byte chunks */
+		for (i = 0; i < (len - 16); i += 16)
+			avf_debug(hw, mask,
+				   "\t0x%04X  %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
+				   i, buf[i], buf[i+1], buf[i+2], buf[i+3],
+				   buf[i+4], buf[i+5], buf[i+6], buf[i+7],
+				   buf[i+8], buf[i+9], buf[i+10], buf[i+11],
+				   buf[i+12], buf[i+13], buf[i+14], buf[i+15]);
+		/* the most we could have left is 16 bytes, pad with zeros */
+		if (i < len) {
+			char d_buf[16];
+			int j, i_sav;
+
+			i_sav = i;
+			memset(d_buf, 0, sizeof(d_buf));
+			for (j = 0; i < len; j++, i++)
+				d_buf[j] = buf[i];
+			avf_debug(hw, mask,
+				   "\t0x%04X  %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
+				   i_sav, d_buf[0], d_buf[1], d_buf[2], d_buf[3],
+				   d_buf[4], d_buf[5], d_buf[6], d_buf[7],
+				   d_buf[8], d_buf[9], d_buf[10], d_buf[11],
+				   d_buf[12], d_buf[13], d_buf[14], d_buf[15]);
+		}
+	}
+}
+
+/**
+ * avf_check_asq_alive
+ * @hw: pointer to the hw struct
+ *
+ * Returns true if Queue is enabled else false.
+ **/
+bool avf_check_asq_alive(struct avf_hw *hw)
+{
+	if (hw->aq.asq.len)
+#ifdef INTEGRATED_VF
+		if (avf_is_vf(hw))
+			return !!(rd32(hw, hw->aq.asq.len) &
+				AVF_ATQLEN1_ATQENABLE_MASK);
+#else
+		return !!(rd32(hw, hw->aq.asq.len) &
+			AVF_ATQLEN1_ATQENABLE_MASK);
+#endif /* INTEGRATED_VF */
+	return false;
+}
+
+/**
+ * avf_aq_queue_shutdown
+ * @hw: pointer to the hw struct
+ * @unloading: is the driver unloading itself
+ *
+ * Tell the Firmware that we're shutting down the AdminQ and whether
+ * or not the driver is unloading as well.
+ **/
+enum avf_status_code avf_aq_queue_shutdown(struct avf_hw *hw,
+					     bool unloading)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_queue_shutdown *cmd =
+		(struct avf_aqc_queue_shutdown *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_queue_shutdown);
+
+	if (unloading)
+		cmd->driver_unloading = CPU_TO_LE32(AVF_AQ_DRIVER_UNLOADING);
+	status = avf_asq_send_command(hw, &desc, NULL, 0, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get or set RSS look up table
+ **/
+STATIC enum avf_status_code avf_aq_get_set_rss_lut(struct avf_hw *hw,
+						     u16 vsi_id, bool pf_lut,
+						     u8 *lut, u16 lut_size,
+						     bool set)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_set_rss_lut *cmd_resp =
+		   (struct avf_aqc_get_set_rss_lut *)&desc.params.raw;
+
+	if (set)
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_set_rss_lut);
+	else
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_get_rss_lut);
+
+	/* Indirect command */
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd_resp->vsi_id =
+			CPU_TO_LE16((u16)((vsi_id <<
+					  AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT) &
+					  AVF_AQC_SET_RSS_LUT_VSI_ID_MASK));
+	cmd_resp->vsi_id |= CPU_TO_LE16((u16)AVF_AQC_SET_RSS_LUT_VSI_VALID);
+
+	if (pf_lut)
+		cmd_resp->flags |= CPU_TO_LE16((u16)
+					((AVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF <<
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
+	else
+		cmd_resp->flags |= CPU_TO_LE16((u16)
+					((AVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI <<
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
+
+	status = avf_asq_send_command(hw, &desc, lut, lut_size, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * get the RSS lookup table, PF or VSI type
+ **/
+enum avf_status_code avf_aq_get_rss_lut(struct avf_hw *hw, u16 vsi_id,
+					  bool pf_lut, u8 *lut, u16 lut_size)
+{
+	return avf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size,
+				       false);
+}
+
+/**
+ * avf_aq_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * set the RSS lookup table, PF or VSI type
+ **/
+enum avf_status_code avf_aq_set_rss_lut(struct avf_hw *hw, u16 vsi_id,
+					  bool pf_lut, u8 *lut, u16 lut_size)
+{
+	return avf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
+}
+
+/**
+ * avf_aq_get_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
+ *
+ * get or set the RSS key per VSI
+ **/
+STATIC enum avf_status_code avf_aq_get_set_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key,
+				      bool set)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_set_rss_key *cmd_resp =
+			(struct avf_aqc_get_set_rss_key *)&desc.params.raw;
+	u16 key_size = sizeof(struct avf_aqc_get_set_rss_key_data);
+
+	if (set)
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_set_rss_key);
+	else
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_get_rss_key);
+
+	/* Indirect command */
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd_resp->vsi_id =
+			CPU_TO_LE16((u16)((vsi_id <<
+					  AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT) &
+					  AVF_AQC_SET_RSS_KEY_VSI_ID_MASK));
+	cmd_resp->vsi_id |= CPU_TO_LE16((u16)AVF_AQC_SET_RSS_KEY_VSI_VALID);
+
+	status = avf_asq_send_command(hw, &desc, key, key_size, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * get the RSS key per VSI
+ **/
+enum avf_status_code avf_aq_get_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key)
+{
+	return avf_aq_get_set_rss_key(hw, vsi_id, key, false);
+}
+
+/**
+ * avf_aq_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * set the RSS key per VSI
+ **/
+enum avf_status_code avf_aq_set_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key)
+{
+	return avf_aq_get_set_rss_key(hw, vsi_id, key, true);
+}
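+
+/* Illustrative use of the RSS helpers above.  The 64-entry LUT size and the
+ * rss_key source buffer are placeholders; a real caller takes the VSI id and
+ * LUT size from its VF resource information:
+ *
+ *	struct avf_aqc_get_set_rss_key_data key_data;
+ *	u8 lut[64];
+ *	enum avf_status_code status;
+ *
+ *	memset(&key_data, 0, sizeof(key_data));
+ *	memcpy(key_data.standard_rss_key, rss_key,
+ *	       sizeof(key_data.standard_rss_key));
+ *	status = avf_aq_set_rss_key(hw, vsi_id, &key_data);
+ *	if (!status)
+ *		status = avf_aq_set_rss_lut(hw, vsi_id, false, lut,
+ *					    sizeof(lut));
+ */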
+
+/* The avf_ptype_lookup table is used to convert from the 8-bit ptype in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ *
+ * Typical work flow:
+ *
+ * IF NOT avf_ptype_lookup[ptype].known
+ * THEN
+ *      Packet is unknown
+ * ELSE IF avf_ptype_lookup[ptype].outer_ip == AVF_RX_PTYPE_OUTER_IP
+ *      Use the rest of the fields to look at the tunnels, inner protocols, etc
+ * ELSE
+ *      Use the enum avf_rx_l2_ptype to decode the packet type
+ * ENDIF
+ */
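+
+/* A compact C rendering of the work flow above (illustrative only; field
+ * names follow the avf_rx_ptype_decoded bit-field layout):
+ *
+ *	struct avf_rx_ptype_decoded d = avf_ptype_lookup[ptype];
+ *
+ *	if (!d.known) {
+ *		// packet type unknown
+ *	} else if (d.outer_ip == AVF_RX_PTYPE_OUTER_IP) {
+ *		// inspect the tunnel and inner protocol fields of 'd'
+ *	} else {
+ *		// decode as an L2 packet type
+ *	}
+ */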
+
+/* macro to make the table lines short */
+#define AVF_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
+	{	PTYPE, \
+		1, \
+		AVF_RX_PTYPE_OUTER_##OUTER_IP, \
+		AVF_RX_PTYPE_OUTER_##OUTER_IP_VER, \
+		AVF_RX_PTYPE_##OUTER_FRAG, \
+		AVF_RX_PTYPE_TUNNEL_##T, \
+		AVF_RX_PTYPE_TUNNEL_END_##TE, \
+		AVF_RX_PTYPE_##TEF, \
+		AVF_RX_PTYPE_INNER_PROT_##I, \
+		AVF_RX_PTYPE_PAYLOAD_LAYER_##PL }
+
+#define AVF_PTT_UNUSED_ENTRY(PTYPE) \
+		{ PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
+
+/* shorter macros makes the table fit but are terse */
+#define AVF_RX_PTYPE_NOF		AVF_RX_PTYPE_NOT_FRAG
+#define AVF_RX_PTYPE_FRG		AVF_RX_PTYPE_FRAG
+#define AVF_RX_PTYPE_INNER_PROT_TS	AVF_RX_PTYPE_INNER_PROT_TIMESYNC
+
+/* Lookup table mapping the HW PTYPE to the bit field for decoding */
+struct avf_rx_ptype_decoded avf_ptype_lookup[] = {
+	/* L2 Packet types */
+	AVF_PTT_UNUSED_ENTRY(0),
+	AVF_PTT(1,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(2,  L2, NONE, NOF, NONE, NONE, NOF, TS,   PAY2),
+	AVF_PTT(3,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT_UNUSED_ENTRY(4),
+	AVF_PTT_UNUSED_ENTRY(5),
+	AVF_PTT(6,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(7,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT_UNUSED_ENTRY(8),
+	AVF_PTT_UNUSED_ENTRY(9),
+	AVF_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	AVF_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+
+	/* Non Tunneled IPv4 */
+	AVF_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(25),
+	AVF_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	AVF_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	AVF_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv4 */
+	AVF_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(32),
+	AVF_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv6 */
+	AVF_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(39),
+	AVF_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT */
+	AVF_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> IPv4 */
+	AVF_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(47),
+	AVF_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> IPv6 */
+	AVF_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(54),
+	AVF_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC */
+	AVF_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> MAC --> IPv4 */
+	AVF_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(62),
+	AVF_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT -> MAC --> IPv6 */
+	AVF_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(69),
+	AVF_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN */
+	AVF_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */
+	AVF_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(77),
+	AVF_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */
+	AVF_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(84),
+	AVF_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* Non Tunneled IPv6 */
+	AVF_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(91),
+	AVF_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	AVF_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	AVF_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv4 */
+	AVF_PTT(95,  IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(96,  IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(97,  IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(98),
+	AVF_PTT(99,  IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv6 */
+	AVF_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(105),
+	AVF_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT */
+	AVF_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> IPv4 */
+	AVF_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(113),
+	AVF_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> IPv6 */
+	AVF_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(120),
+	AVF_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC */
+	AVF_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv4 */
+	AVF_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(128),
+	AVF_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv6 */
+	AVF_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(135),
+	AVF_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN */
+	AVF_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */
+	AVF_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(143),
+	AVF_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */
+	AVF_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(150),
+	AVF_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* unused entries */
+	AVF_PTT_UNUSED_ENTRY(154),
+	AVF_PTT_UNUSED_ENTRY(155),
+	AVF_PTT_UNUSED_ENTRY(156),
+	AVF_PTT_UNUSED_ENTRY(157),
+	AVF_PTT_UNUSED_ENTRY(158),
+	AVF_PTT_UNUSED_ENTRY(159),
+
+	AVF_PTT_UNUSED_ENTRY(160),
+	AVF_PTT_UNUSED_ENTRY(161),
+	AVF_PTT_UNUSED_ENTRY(162),
+	AVF_PTT_UNUSED_ENTRY(163),
+	AVF_PTT_UNUSED_ENTRY(164),
+	AVF_PTT_UNUSED_ENTRY(165),
+	AVF_PTT_UNUSED_ENTRY(166),
+	AVF_PTT_UNUSED_ENTRY(167),
+	AVF_PTT_UNUSED_ENTRY(168),
+	AVF_PTT_UNUSED_ENTRY(169),
+
+	AVF_PTT_UNUSED_ENTRY(170),
+	AVF_PTT_UNUSED_ENTRY(171),
+	AVF_PTT_UNUSED_ENTRY(172),
+	AVF_PTT_UNUSED_ENTRY(173),
+	AVF_PTT_UNUSED_ENTRY(174),
+	AVF_PTT_UNUSED_ENTRY(175),
+	AVF_PTT_UNUSED_ENTRY(176),
+	AVF_PTT_UNUSED_ENTRY(177),
+	AVF_PTT_UNUSED_ENTRY(178),
+	AVF_PTT_UNUSED_ENTRY(179),
+
+	AVF_PTT_UNUSED_ENTRY(180),
+	AVF_PTT_UNUSED_ENTRY(181),
+	AVF_PTT_UNUSED_ENTRY(182),
+	AVF_PTT_UNUSED_ENTRY(183),
+	AVF_PTT_UNUSED_ENTRY(184),
+	AVF_PTT_UNUSED_ENTRY(185),
+	AVF_PTT_UNUSED_ENTRY(186),
+	AVF_PTT_UNUSED_ENTRY(187),
+	AVF_PTT_UNUSED_ENTRY(188),
+	AVF_PTT_UNUSED_ENTRY(189),
+
+	AVF_PTT_UNUSED_ENTRY(190),
+	AVF_PTT_UNUSED_ENTRY(191),
+	AVF_PTT_UNUSED_ENTRY(192),
+	AVF_PTT_UNUSED_ENTRY(193),
+	AVF_PTT_UNUSED_ENTRY(194),
+	AVF_PTT_UNUSED_ENTRY(195),
+	AVF_PTT_UNUSED_ENTRY(196),
+	AVF_PTT_UNUSED_ENTRY(197),
+	AVF_PTT_UNUSED_ENTRY(198),
+	AVF_PTT_UNUSED_ENTRY(199),
+
+	AVF_PTT_UNUSED_ENTRY(200),
+	AVF_PTT_UNUSED_ENTRY(201),
+	AVF_PTT_UNUSED_ENTRY(202),
+	AVF_PTT_UNUSED_ENTRY(203),
+	AVF_PTT_UNUSED_ENTRY(204),
+	AVF_PTT_UNUSED_ENTRY(205),
+	AVF_PTT_UNUSED_ENTRY(206),
+	AVF_PTT_UNUSED_ENTRY(207),
+	AVF_PTT_UNUSED_ENTRY(208),
+	AVF_PTT_UNUSED_ENTRY(209),
+
+	AVF_PTT_UNUSED_ENTRY(210),
+	AVF_PTT_UNUSED_ENTRY(211),
+	AVF_PTT_UNUSED_ENTRY(212),
+	AVF_PTT_UNUSED_ENTRY(213),
+	AVF_PTT_UNUSED_ENTRY(214),
+	AVF_PTT_UNUSED_ENTRY(215),
+	AVF_PTT_UNUSED_ENTRY(216),
+	AVF_PTT_UNUSED_ENTRY(217),
+	AVF_PTT_UNUSED_ENTRY(218),
+	AVF_PTT_UNUSED_ENTRY(219),
+
+	AVF_PTT_UNUSED_ENTRY(220),
+	AVF_PTT_UNUSED_ENTRY(221),
+	AVF_PTT_UNUSED_ENTRY(222),
+	AVF_PTT_UNUSED_ENTRY(223),
+	AVF_PTT_UNUSED_ENTRY(224),
+	AVF_PTT_UNUSED_ENTRY(225),
+	AVF_PTT_UNUSED_ENTRY(226),
+	AVF_PTT_UNUSED_ENTRY(227),
+	AVF_PTT_UNUSED_ENTRY(228),
+	AVF_PTT_UNUSED_ENTRY(229),
+
+	AVF_PTT_UNUSED_ENTRY(230),
+	AVF_PTT_UNUSED_ENTRY(231),
+	AVF_PTT_UNUSED_ENTRY(232),
+	AVF_PTT_UNUSED_ENTRY(233),
+	AVF_PTT_UNUSED_ENTRY(234),
+	AVF_PTT_UNUSED_ENTRY(235),
+	AVF_PTT_UNUSED_ENTRY(236),
+	AVF_PTT_UNUSED_ENTRY(237),
+	AVF_PTT_UNUSED_ENTRY(238),
+	AVF_PTT_UNUSED_ENTRY(239),
+
+	AVF_PTT_UNUSED_ENTRY(240),
+	AVF_PTT_UNUSED_ENTRY(241),
+	AVF_PTT_UNUSED_ENTRY(242),
+	AVF_PTT_UNUSED_ENTRY(243),
+	AVF_PTT_UNUSED_ENTRY(244),
+	AVF_PTT_UNUSED_ENTRY(245),
+	AVF_PTT_UNUSED_ENTRY(246),
+	AVF_PTT_UNUSED_ENTRY(247),
+	AVF_PTT_UNUSED_ENTRY(248),
+	AVF_PTT_UNUSED_ENTRY(249),
+
+	AVF_PTT_UNUSED_ENTRY(250),
+	AVF_PTT_UNUSED_ENTRY(251),
+	AVF_PTT_UNUSED_ENTRY(252),
+	AVF_PTT_UNUSED_ENTRY(253),
+	AVF_PTT_UNUSED_ENTRY(254),
+	AVF_PTT_UNUSED_ENTRY(255)
+};
+
+
+/**
+ * avf_validate_mac_addr - Validate unicast MAC address
+ * @mac_addr: pointer to MAC address
+ *
+ * Tests a MAC address to ensure it is a valid Individual Address
+ **/
+enum avf_status_code avf_validate_mac_addr(u8 *mac_addr)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+
+	DEBUGFUNC("avf_validate_mac_addr");
+
+	/* Broadcast addresses ARE multicast addresses
+	 * Make sure it is not a multicast address
+	 * Reject the zero address
+	 */
+	if (AVF_IS_MULTICAST(mac_addr) ||
+	    (mac_addr[0] == 0 && mac_addr[1] == 0 && mac_addr[2] == 0 &&
+	      mac_addr[3] == 0 && mac_addr[4] == 0 && mac_addr[5] == 0))
+		status = AVF_ERR_INVALID_MAC_ADDR;
+
+	return status;
+}
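+
+/* Illustrative usage sketch (not part of the shared code): callers typically
+ * validate the address reported by the PF before programming it, e.g.
+ *
+ *	if (avf_validate_mac_addr(vsi_res->default_mac_addr) != AVF_SUCCESS)
+ *		PMD_INIT_LOG(ERR, "Invalid default MAC address from PF");
+ *
+ * vsi_res stands for the virtchnl_vsi_resource entry delivered by the PF and
+ * PMD_INIT_LOG is assumed to come from the driver's avf_log.h.
+ */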
+
+/**
+ * avf_aq_rx_ctl_read_register - use FW to read from an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: ptr to register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Use the firmware to read the Rx control register,
+ * especially useful if the Rx unit is under heavy pressure
+ **/
+enum avf_status_code avf_aq_rx_ctl_read_register(struct avf_hw *hw,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_rx_ctl_reg_read_write *cmd_resp =
+		(struct avf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
+	enum avf_status_code status;
+
+	if (reg_val == NULL)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_rx_ctl_reg_read);
+
+	cmd_resp->address = CPU_TO_LE32(reg_addr);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	if (status == AVF_SUCCESS)
+		*reg_val = LE32_TO_CPU(cmd_resp->value);
+
+	return status;
+}
+
+/**
+ * avf_read_rx_ctl - read from an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ **/
+u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	bool use_register;
+	int retry = 5;
+	u32 val = 0;
+
+	use_register = (((hw->aq.api_maj_ver == 1) &&
+			(hw->aq.api_min_ver < 5)) ||
+			(hw->mac.type == AVF_MAC_X722));
+	if (!use_register) {
+do_retry:
+		status = avf_aq_rx_ctl_read_register(hw, reg_addr, &val, NULL);
+		if (hw->aq.asq_last_status == AVF_AQ_RC_EAGAIN && retry) {
+			avf_msec_delay(1);
+			retry--;
+			goto do_retry;
+		}
+	}
+
+	/* if the AQ access failed, try the old-fashioned way */
+	if (status || use_register)
+		val = rd32(hw, reg_addr);
+
+	return val;
+}
+
+/**
+ * avf_aq_rx_ctl_write_register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Use the firmware to write to an Rx control register,
+ * especially useful if the Rx unit is under heavy pressure
+ **/
+enum avf_status_code avf_aq_rx_ctl_write_register(struct avf_hw *hw,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_rx_ctl_reg_read_write *cmd =
+		(struct avf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_rx_ctl_reg_write);
+
+	cmd->address = CPU_TO_LE32(reg_addr);
+	cmd->value = CPU_TO_LE32(reg_val);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_write_rx_ctl - write to an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: register value
+ **/
+void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	bool use_register;
+	int retry = 5;
+
+	use_register = (((hw->aq.api_maj_ver == 1) &&
+			(hw->aq.api_min_ver < 5)) ||
+			(hw->mac.type == AVF_MAC_X722));
+	if (!use_register) {
+do_retry:
+		status = avf_aq_rx_ctl_write_register(hw, reg_addr,
+						       reg_val, NULL);
+		if (hw->aq.asq_last_status == AVF_AQ_RC_EAGAIN && retry) {
+			avf_msec_delay(1);
+			retry--;
+			goto do_retry;
+		}
+	}
+
+	/* if the AQ access failed, try the old-fashioned way */
+	if (status || use_register)
+		wr32(hw, reg_addr, reg_val);
+}
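+
+/* Illustrative usage sketch (not part of the shared code): these helpers hide
+ * whether the Rx control register is reached through the AdminQ or through
+ * direct MMIO, so a read-modify-write stays simple, e.g.
+ *
+ *	u32 val = avf_read_rx_ctl(hw, reg_addr);
+ *	avf_write_rx_ctl(hw, reg_addr, val | enable_bit);
+ *
+ * reg_addr and enable_bit are placeholders; real offsets and masks come from
+ * avf_register.h.
+ */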
+
+/**
+ * avf_aq_set_phy_register
+ * @hw: pointer to the hw struct
+ * @phy_select: select which phy should be accessed
+ * @dev_addr: PHY device address
+ * @reg_addr: PHY register address
+ * @reg_val: new register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Write the external PHY register.
+ **/
+enum avf_status_code avf_aq_set_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_phy_register_access *cmd =
+		(struct avf_aqc_phy_register_access *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_set_phy_register);
+
+	cmd->phy_interface = phy_select;
+	cmd->dev_addres = dev_addr;
+	cmd->reg_address = CPU_TO_LE32(reg_addr);
+	cmd->reg_value = CPU_TO_LE32(reg_val);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_phy_register
+ * @hw: pointer to the hw struct
+ * @phy_select: select which phy should be accessed
+ * @dev_addr: PHY device address
+ * @reg_addr: PHY register address
+ * @reg_val: read register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Read the external PHY register.
+ **/
+enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_phy_register_access *cmd =
+		(struct avf_aqc_phy_register_access *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_get_phy_register);
+
+	cmd->phy_interface = phy_select;
+	cmd->dev_addres = dev_addr;
+	cmd->reg_address = CPU_TO_LE32(reg_addr);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+	if (!status)
+		*reg_val = LE32_TO_CPU(cmd->reg_value);
+
+	return status;
+}
+
+
+/**
+ * avf_aq_send_msg_to_pf
+ * @hw: pointer to the hardware structure
+ * @v_opcode: opcodes for VF-PF communication
+ * @v_retval: return error code
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ * @cmd_details: pointer to command details
+ *
+ * Send message to PF driver using admin queue. By default, this message
+ * is sent asynchronously, i.e. avf_asq_send_command() does not wait for
+ * completion before returning.
+ **/
+enum avf_status_code avf_aq_send_msg_to_pf(struct avf_hw *hw,
+				enum virtchnl_ops v_opcode,
+				enum avf_status_code v_retval,
+				u8 *msg, u16 msglen,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_asq_cmd_details details;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_send_msg_to_pf);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_SI);
+	desc.cookie_high = CPU_TO_LE32(v_opcode);
+	desc.cookie_low = CPU_TO_LE32(v_retval);
+	if (msglen) {
+		desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF
+						| AVF_AQ_FLAG_RD));
+		if (msglen > AVF_AQ_LARGE_BUF)
+			desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+		desc.datalen = CPU_TO_LE16(msglen);
+	}
+	if (!cmd_details) {
+		avf_memset(&details, 0, sizeof(details), AVF_NONDMA_MEM);
+		details.async = true;
+		cmd_details = &details;
+	}
+	status = avf_asq_send_command(hw, (struct avf_aq_desc *)&desc, msg,
+				       msglen, cmd_details);
+	return status;
+}
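+
+/* Illustrative usage sketch (not part of the shared code): the virtchnl layer
+ * built on top of this helper would start version negotiation roughly as
+ * follows (the reply arrives later on the ARQ via avf_clean_arq_element()):
+ *
+ *	struct virtchnl_version_info ver = {
+ *		.major = VIRTCHNL_VERSION_MAJOR,
+ *		.minor = VIRTCHNL_VERSION_MINOR,
+ *	};
+ *	avf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_VERSION, AVF_SUCCESS,
+ *			       (u8 *)&ver, sizeof(ver), NULL);
+ */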
+
+/**
+ * avf_parse_hw_config
+ * @hw: pointer to the hardware structure
+ * @msg: pointer to the virtual channel VF resource structure
+ *
+ * Given a VF resource message from the PF, populate the hw struct
+ * with appropriate information.
+ **/
+void avf_parse_hw_config(struct avf_hw *hw,
+			     struct virtchnl_vf_resource *msg)
+{
+	struct virtchnl_vsi_resource *vsi_res;
+	int i;
+
+	vsi_res = &msg->vsi_res[0];
+
+	hw->dev_caps.num_vsis = msg->num_vsis;
+	hw->dev_caps.num_rx_qp = msg->num_queue_pairs;
+	hw->dev_caps.num_tx_qp = msg->num_queue_pairs;
+	hw->dev_caps.num_msix_vectors_vf = msg->max_vectors;
+	hw->dev_caps.dcb = msg->vf_cap_flags &
+			   VIRTCHNL_VF_OFFLOAD_L2;
+	hw->dev_caps.iwarp = (msg->vf_cap_flags &
+			      VIRTCHNL_VF_OFFLOAD_IWARP) ? 1 : 0;
+	for (i = 0; i < msg->num_vsis; i++) {
+		if (vsi_res->vsi_type == VIRTCHNL_VSI_SRIOV) {
+			avf_memcpy(hw->mac.perm_addr,
+				    vsi_res->default_mac_addr,
+				    ETH_ALEN,
+				    AVF_NONDMA_TO_NONDMA);
+			avf_memcpy(hw->mac.addr, vsi_res->default_mac_addr,
+				    ETH_ALEN,
+				    AVF_NONDMA_TO_NONDMA);
+		}
+		vsi_res++;
+	}
+}
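+
+/* Illustrative usage sketch (not part of the shared code): after the PF has
+ * answered VIRTCHNL_OP_GET_VF_RESOURCES, the reply buffer is handed straight
+ * to this helper:
+ *
+ *	struct virtchnl_vf_resource *res =
+ *		(struct virtchnl_vf_resource *)event.msg_buf;
+ *	avf_parse_hw_config(hw, res);
+ *
+ * event.msg_buf stands for whatever buffer the caller passed to
+ * avf_clean_arq_element(); from here on hw->dev_caps and hw->mac.addr are
+ * populated.
+ */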
+
+/**
+ * avf_reset
+ * @hw: pointer to the hardware structure
+ *
+ * Send a VF_RESET message to the PF. Does not wait for response from PF
+ * as none will be forthcoming. Immediately after calling this function,
+ * the admin queue should be shut down and (optionally) reinitialized.
+ **/
+enum avf_status_code avf_reset(struct avf_hw *hw)
+{
+	return avf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_RESET_VF,
+				      AVF_SUCCESS, NULL, 0, NULL);
+}
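+
+/* Illustrative flow sketch (not part of the shared code): a caller combines
+ * this with an AdminQ restart, roughly:
+ *
+ *	avf_reset(hw);
+ *	avf_shutdown_adminq(hw);
+ *	... poll AVFGEN_RSTAT until the PF reports VIRTCHNL_VFR_COMPLETED ...
+ *	avf_init_adminq(hw);
+ *
+ * The exact polling loop belongs to the ethdev layer; AVFGEN_RSTAT and
+ * VIRTCHNL_VFR_COMPLETED are named here only as the expected signals.
+ */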
+
+/**
+ * avf_aq_set_arp_proxy_config
+ * @hw: pointer to the HW structure
+ * @proxy_config: pointer to proxy config command table struct
+ * @cmd_details: pointer to command details
+ *
+ * Set ARP offload parameters from pre-populated
+ * avf_aqc_arp_proxy_data struct
+ **/
+enum avf_status_code avf_aq_set_arp_proxy_config(struct avf_hw *hw,
+				struct avf_aqc_arp_proxy_data *proxy_config,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	if (!proxy_config)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_set_proxy_config);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+	desc.params.external.addr_high =
+				  CPU_TO_LE32(AVF_HI_DWORD((u64)proxy_config));
+	desc.params.external.addr_low =
+				  CPU_TO_LE32(AVF_LO_DWORD((u64)proxy_config));
+	desc.datalen = CPU_TO_LE16(sizeof(struct avf_aqc_arp_proxy_data));
+
+	status = avf_asq_send_command(hw, &desc, proxy_config,
+				       sizeof(struct avf_aqc_arp_proxy_data),
+				       cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_opc_set_ns_proxy_table_entry
+ * @hw: pointer to the HW structure
+ * @ns_proxy_table_entry: pointer to NS table entry command struct
+ * @cmd_details: pointer to command details
+ *
+ * Set IPv6 Neighbor Solicitation (NS) protocol offload parameters
+ * from pre-populated avf_aqc_ns_proxy_data struct
+ **/
+enum avf_status_code avf_aq_set_ns_proxy_table_entry(struct avf_hw *hw,
+			struct avf_aqc_ns_proxy_data *ns_proxy_table_entry,
+			struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	if (!ns_proxy_table_entry)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+				avf_aqc_opc_set_ns_proxy_table_entry);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+	desc.params.external.addr_high =
+		CPU_TO_LE32(AVF_HI_DWORD((u64)ns_proxy_table_entry));
+	desc.params.external.addr_low =
+		CPU_TO_LE32(AVF_LO_DWORD((u64)ns_proxy_table_entry));
+	desc.datalen = CPU_TO_LE16(sizeof(struct avf_aqc_ns_proxy_data));
+
+	status = avf_asq_send_command(hw, &desc, ns_proxy_table_entry,
+				       sizeof(struct avf_aqc_ns_proxy_data),
+				       cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_set_clear_wol_filter
+ * @hw: pointer to the hw struct
+ * @filter_index: index of filter to modify (0-7)
+ * @filter: buffer containing filter to be set
+ * @set_filter: true to set filter, false to clear filter
+ * @no_wol_tco: if true, pass through packets cannot cause wake-up
+ *		if false, pass through packets may cause wake-up
+ * @filter_valid: true if filter action is valid
+ * @no_wol_tco_valid: true if no WoL in TCO traffic action valid
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Set or clear WoL filter for port attached to the PF
+ **/
+enum avf_status_code avf_aq_set_clear_wol_filter(struct avf_hw *hw,
+				u8 filter_index,
+				struct avf_aqc_set_wol_filter_data *filter,
+				bool set_filter, bool no_wol_tco,
+				bool filter_valid, bool no_wol_tco_valid,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_set_wol_filter *cmd =
+		(struct avf_aqc_set_wol_filter *)&desc.params.raw;
+	enum avf_status_code status;
+	u16 cmd_flags = 0;
+	u16 valid_flags = 0;
+	u16 buff_len = 0;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_set_wol_filter);
+
+	if (filter_index >= AVF_AQC_MAX_NUM_WOL_FILTERS)
+		return AVF_ERR_PARAM;
+	cmd->filter_index = CPU_TO_LE16(filter_index);
+
+	if (set_filter) {
+		if (!filter)
+			return AVF_ERR_PARAM;
+
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER;
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR;
+	}
+
+	if (no_wol_tco)
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER_NO_TCO_WOL;
+	cmd->cmd_flags = CPU_TO_LE16(cmd_flags);
+
+	if (filter_valid)
+		valid_flags |= AVF_AQC_SET_WOL_FILTER_ACTION_VALID;
+	if (no_wol_tco_valid)
+		valid_flags |= AVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID;
+	cmd->valid_flags = CPU_TO_LE16(valid_flags);
+
+	buff_len = sizeof(*filter);
+	desc.datalen = CPU_TO_LE16(buff_len);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd->address_high = CPU_TO_LE32(AVF_HI_DWORD((u64)filter));
+	cmd->address_low = CPU_TO_LE32(AVF_LO_DWORD((u64)filter));
+
+	status = avf_asq_send_command(hw, &desc, filter,
+				       buff_len, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_wake_event_reason
+ * @hw: pointer to the hw struct
+ * @wake_reason: return value, index of matching filter
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Get the reason for a Wake Up event
+ **/
+enum avf_status_code avf_aq_get_wake_event_reason(struct avf_hw *hw,
+				u16 *wake_reason,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_wake_reason_completion *resp =
+		(struct avf_aqc_get_wake_reason_completion *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_get_wake_reason);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	if (status == AVF_SUCCESS)
+		*wake_reason = LE16_TO_CPU(resp->wake_reason);
+
+	return status;
+}
+
+/**
+ * avf_aq_clear_all_wol_filters
+ * @hw: pointer to the hw struct
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Clear all WoL filters for the port attached to the PF
+ **/
+enum avf_status_code avf_aq_clear_all_wol_filters(struct avf_hw *hw,
+	struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_clear_all_wol_filters);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_write_ddp - Write dynamic device personalization (ddp)
+ * @hw: pointer to the hw struct
+ * @buff: command buffer (size in bytes = buff_size)
+ * @buff_size: buffer size in bytes
+ * @track_id: package tracking id
+ * @error_offset: returns error offset
+ * @error_info: returns error information
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum avf_status_code
+avf_aq_write_ddp(struct avf_hw *hw, void *buff,
+		  u16 buff_size, u32 track_id,
+		  u32 *error_offset, u32 *error_info,
+		  struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_write_personalization_profile *cmd =
+		(struct avf_aqc_write_personalization_profile *)
+		&desc.params.raw;
+	struct avf_aqc_write_ddp_resp *resp;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+				  avf_aqc_opc_write_personalization_profile);
+
+	desc.flags |= CPU_TO_LE16(AVF_AQ_FLAG_BUF | AVF_AQ_FLAG_RD);
+	if (buff_size > AVF_AQ_LARGE_BUF)
+		desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+
+	desc.datalen = CPU_TO_LE16(buff_size);
+
+	cmd->profile_track_id = CPU_TO_LE32(track_id);
+
+	status = avf_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
+	if (!status) {
+		resp = (struct avf_aqc_write_ddp_resp *)&desc.params.raw;
+		if (error_offset)
+			*error_offset = LE32_TO_CPU(resp->error_offset);
+		if (error_info)
+			*error_info = LE32_TO_CPU(resp->error_info);
+	}
+
+	return status;
+}
+
+/**
+ * avf_aq_get_ddp_list - Read dynamic device personalization (ddp)
+ * @hw: pointer to the hw struct
+ * @buff: command buffer (size in bytes = buff_size)
+ * @buff_size: buffer size in bytes
+ * @flags: AdminQ command flags
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum avf_status_code
+avf_aq_get_ddp_list(struct avf_hw *hw, void *buff,
+		     u16 buff_size, u8 flags,
+		     struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_applied_profiles *cmd =
+		(struct avf_aqc_get_applied_profiles *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+			  avf_aqc_opc_get_personalization_profile_list);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	if (buff_size > AVF_AQ_LARGE_BUF)
+		desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+	desc.datalen = CPU_TO_LE16(buff_size);
+
+	cmd->flags = flags;
+
+	status = avf_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_find_segment_in_package
+ * @segment_type: the segment type to search for (e.g., SEGMENT_TYPE_AVF)
+ * @pkg_hdr: pointer to the package header to be searched
+ *
+ * This function searches a package file for a particular segment type. On
+ * success it returns a pointer to the segment header, otherwise it will
+ * return NULL.
+ **/
+struct avf_generic_seg_header *
+avf_find_segment_in_package(u32 segment_type,
+			     struct avf_package_header *pkg_hdr)
+{
+	struct avf_generic_seg_header *segment;
+	u32 i;
+
+	/* Search all package segments for the requested segment type */
+	for (i = 0; i < pkg_hdr->segment_count; i++) {
+		segment =
+			(struct avf_generic_seg_header *)((u8 *)pkg_hdr +
+			 pkg_hdr->segment_offset[i]);
+
+		if (segment->type == segment_type)
+			return segment;
+	}
+
+	return NULL;
+}
+
+/* Get section table in profile */
+#define AVF_SECTION_TABLE(profile, sec_tbl)				\
+	do {								\
+		struct avf_profile_segment *p = (profile);		\
+		u32 count;						\
+		u32 *nvm;						\
+		count = p->device_table_count;				\
+		nvm = (u32 *)&p->device_table[count];			\
+		sec_tbl = (struct avf_section_table *)&nvm[nvm[0] + 1]; \
+	} while (0)
+
+/* Get section header in profile */
+#define AVF_SECTION_HEADER(profile, offset)				\
+	(struct avf_profile_section_header *)((u8 *)(profile) + (offset))
+
+/**
+ * avf_find_section_in_profile
+ * @section_type: the section type to search for (e.g., SECTION_TYPE_NOTE)
+ * @profile: pointer to the avf segment header to be searched
+ *
+ * This function searches the avf segment for a particular section type. On
+ * success it returns a pointer to the section header, otherwise it will
+ * return NULL.
+ **/
+struct avf_profile_section_header *
+avf_find_section_in_profile(u32 section_type,
+			     struct avf_profile_segment *profile)
+{
+	struct avf_profile_section_header *sec;
+	struct avf_section_table *sec_tbl;
+	u32 sec_off;
+	u32 i;
+
+	if (profile->header.type != SEGMENT_TYPE_AVF)
+		return NULL;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		if (sec->section.type == section_type)
+			return sec;
+	}
+
+	return NULL;
+}
+
+/**
+ * avf_ddp_exec_aq_section - Execute generic AQ for DDP
+ * @hw: pointer to the hw struct
+ * @aq: command buffer containing all data to execute AQ
+ **/
+STATIC enum avf_status_code
+avf_ddp_exec_aq_section(struct avf_hw *hw,
+			 struct avf_profile_aq_section *aq)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	u8 *msg = NULL;
+	u16 msglen;
+
+	avf_fill_default_direct_cmd_desc(&desc, aq->opcode);
+	desc.flags |= CPU_TO_LE16(aq->flags);
+	avf_memcpy(desc.params.raw, aq->param, sizeof(desc.params.raw),
+		    AVF_NONDMA_TO_NONDMA);
+
+	msglen = aq->datalen;
+	if (msglen) {
+		desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF |
+						AVF_AQ_FLAG_RD));
+		if (msglen > AVF_AQ_LARGE_BUF)
+			desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+		desc.datalen = CPU_TO_LE16(msglen);
+		msg = &aq->data[0];
+	}
+
+	status = avf_asq_send_command(hw, &desc, msg, msglen, NULL);
+
+	if (status != AVF_SUCCESS) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE,
+			   "unable to exec DDP AQ opcode %u, error %d\n",
+			   aq->opcode, status);
+		return status;
+	}
+
+	/* copy returned desc to aq_buf */
+	avf_memcpy(aq->param, desc.params.raw, sizeof(desc.params.raw),
+		    AVF_NONDMA_TO_NONDMA);
+
+	return AVF_SUCCESS;
+}
+
+/**
+ * avf_validate_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be validated
+ * @track_id: package tracking id
+ * @rollback: true if the profile is being validated for rollback
+ *
+ * Validates supported devices and profile's sections.
+ */
+STATIC enum avf_status_code
+avf_validate_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		      u32 track_id, bool rollback)
+{
+	struct avf_profile_section_header *sec = NULL;
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	u32 vendor_dev_id;
+	u32 dev_cnt;
+	u32 sec_off;
+	u32 i;
+
+	if (track_id == AVF_DDP_TRACKID_INVALID) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE, "Invalid track_id\n");
+		return AVF_NOT_SUPPORTED;
+	}
+
+	dev_cnt = profile->device_table_count;
+	for (i = 0; i < dev_cnt; i++) {
+		vendor_dev_id = profile->device_table[i].vendor_dev_id;
+		if ((vendor_dev_id >> 16) == AVF_INTEL_VENDOR_ID &&
+		    hw->device_id == (vendor_dev_id & 0xFFFF))
+			break;
+	}
+	if (dev_cnt && (i == dev_cnt)) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE,
+			   "Device doesn't support DDP\n");
+		return AVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	/* Validate section types */
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		if (rollback) {
+			if (sec->section.type == SECTION_TYPE_MMIO ||
+			    sec->section.type == SECTION_TYPE_AQ ||
+			    sec->section.type == SECTION_TYPE_RB_AQ) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Not a roll-back package\n");
+				return AVF_NOT_SUPPORTED;
+			}
+		} else {
+			if (sec->section.type == SECTION_TYPE_RB_AQ ||
+			    sec->section.type == SECTION_TYPE_RB_MMIO) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Not an original package\n");
+				return AVF_NOT_SUPPORTED;
+			}
+		}
+	}
+
+	return status;
+}
+
+/**
+ * avf_write_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be downloaded
+ * @track_id: package tracking id
+ *
+ * Handles the download of a complete package.
+ */
+enum avf_status_code
+avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		   u32 track_id)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	struct avf_profile_section_header *sec = NULL;
+	struct avf_profile_aq_section *ddp_aq;
+	u32 section_size = 0;
+	u32 offset = 0, info = 0;
+	u32 sec_off;
+	u32 i;
+
+	status = avf_validate_profile(hw, profile, track_id, false);
+	if (status)
+		return status;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		/* Process generic admin command */
+		if (sec->section.type == SECTION_TYPE_AQ) {
+			ddp_aq = (struct avf_profile_aq_section *)&sec[1];
+			status = avf_ddp_exec_aq_section(hw, ddp_aq);
+			if (status) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Failed to execute aq: section %d, opcode %u\n",
+					   i, ddp_aq->opcode);
+				break;
+			}
+			sec->section.type = SECTION_TYPE_RB_AQ;
+		}
+
+		/* Skip any non-mmio sections */
+		if (sec->section.type != SECTION_TYPE_MMIO)
+			continue;
+
+		section_size = sec->section.size +
+			sizeof(struct avf_profile_section_header);
+
+		/* Write MMIO section */
+		status = avf_aq_write_ddp(hw, (void *)sec, (u16)section_size,
+					   track_id, &offset, &info, NULL);
+		if (status) {
+			avf_debug(hw, AVF_DEBUG_PACKAGE,
+				   "Failed to write profile: section %d, offset %d, info %d\n",
+				   i, offset, info);
+			break;
+		}
+	}
+	return status;
+}
+
+/**
+ * avf_rollback_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be removed
+ * @track_id: package tracking id
+ *
+ * Rolls back previously loaded package.
+ */
+enum avf_status_code
+avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		      u32 track_id)
+{
+	struct avf_profile_section_header *sec = NULL;
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	u32 offset = 0, info = 0;
+	u32 section_size = 0;
+	u32 sec_off;
+	int i;
+
+	status = avf_validate_profile(hw, profile, track_id, true);
+	if (status)
+		return status;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	/* For rollback write sections in reverse */
+	for (i = sec_tbl->section_count - 1; i >= 0; i--) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+
+		/* Skip any non-rollback sections */
+		if (sec->section.type != SECTION_TYPE_RB_MMIO)
+			continue;
+
+		section_size = sec->section.size +
+			sizeof(struct avf_profile_section_header);
+
+		/* Write roll-back MMIO section */
+		status = avf_aq_write_ddp(hw, (void *)sec, (u16)section_size,
+					   track_id, &offset, &info, NULL);
+		if (status) {
+			avf_debug(hw, AVF_DEBUG_PACKAGE,
+				   "Failed to write profile: section %d, offset %d, info %d\n",
+				   i, offset, info);
+			break;
+		}
+	}
+	return status;
+}
+
+/**
+ * avf_add_pinfo_to_list
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package
+ * @profile_info_sec: buffer for information section
+ * @track_id: package tracking id
+ *
+ * Register a profile to the list of loaded profiles.
+ */
+enum avf_status_code
+avf_add_pinfo_to_list(struct avf_hw *hw,
+		       struct avf_profile_segment *profile,
+		       u8 *profile_info_sec, u32 track_id)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_profile_section_header *sec = NULL;
+	struct avf_profile_info *pinfo;
+	u32 offset = 0, info = 0;
+
+	sec = (struct avf_profile_section_header *)profile_info_sec;
+	sec->tbl_size = 1;
+	sec->data_end = sizeof(struct avf_profile_section_header) +
+			sizeof(struct avf_profile_info);
+	sec->section.type = SECTION_TYPE_INFO;
+	sec->section.offset = sizeof(struct avf_profile_section_header);
+	sec->section.size = sizeof(struct avf_profile_info);
+	pinfo = (struct avf_profile_info *)(profile_info_sec +
+					     sec->section.offset);
+	pinfo->track_id = track_id;
+	pinfo->version = profile->version;
+	pinfo->op = AVF_DDP_ADD_TRACKID;
+	avf_memcpy(pinfo->name, profile->name, AVF_DDP_NAME_SIZE,
+		    AVF_NONDMA_TO_NONDMA);
+
+	status = avf_aq_write_ddp(hw, (void *)sec, sec->data_end,
+				   track_id, &offset, &info, NULL);
+	return status;
+}
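+
+/* Illustrative flow sketch (not part of the shared code): the DDP helpers
+ * above combine into a package download roughly like this (error handling
+ * omitted; pkg_hdr, track_id and profile_info_sec are supplied by the caller):
+ *
+ *	struct avf_profile_segment *profile =
+ *		(struct avf_profile_segment *)
+ *		avf_find_segment_in_package(SEGMENT_TYPE_AVF, pkg_hdr);
+ *	if (profile != NULL) {
+ *		avf_write_profile(hw, profile, track_id);
+ *		avf_add_pinfo_to_list(hw, profile, profile_info_sec, track_id);
+ *	}
+ *
+ * profile_info_sec must be large enough for one avf_profile_section_header
+ * plus one avf_profile_info, as filled in by avf_add_pinfo_to_list().
+ */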
diff --git a/drivers/net/avf/base/avf_devids.h b/drivers/net/avf/base/avf_devids.h
new file mode 100644
index 0000000..7d9fed2
--- /dev/null
+++ b/drivers/net/avf/base/avf_devids.h
@@ -0,0 +1,43 @@
+/*******************************************************************************
+
+Copyright (c) 2017, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_DEVIDS_H_
+#define _AVF_DEVIDS_H_
+
+/* Vendor ID */
+#define AVF_INTEL_VENDOR_ID		0x8086
+
+/* Device IDs */
+#define AVF_DEV_ID_ADAPTIVE_VF		0x1889
+
+#endif /* _AVF_DEVIDS_H_ */
diff --git a/drivers/net/avf/base/avf_hmc.h b/drivers/net/avf/base/avf_hmc.h
new file mode 100644
index 0000000..b9b7b5b
--- /dev/null
+++ b/drivers/net/avf/base/avf_hmc.h
@@ -0,0 +1,245 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_HMC_H_
+#define _AVF_HMC_H_
+
+#define AVF_HMC_MAX_BP_COUNT 512
+
+/* forward-declare the HW struct for the compiler */
+struct avf_hw;
+
+#define AVF_HMC_INFO_SIGNATURE		0x484D5347 /* HMSG */
+#define AVF_HMC_PD_CNT_IN_SD		512
+#define AVF_HMC_DIRECT_BP_SIZE		0x200000 /* 2M */
+#define AVF_HMC_PAGED_BP_SIZE		4096
+#define AVF_HMC_PD_BP_BUF_ALIGNMENT	4096
+#define AVF_FIRST_VF_FPM_ID		16
+
+struct avf_hmc_obj_info {
+	u64 base;	/* base addr in FPM */
+	u32 max_cnt;	/* max count available for this hmc func */
+	u32 cnt;	/* count of objects driver actually wants to create */
+	u64 size;	/* size in bytes of one object */
+};
+
+enum avf_sd_entry_type {
+	AVF_SD_TYPE_INVALID = 0,
+	AVF_SD_TYPE_PAGED   = 1,
+	AVF_SD_TYPE_DIRECT  = 2
+};
+
+struct avf_hmc_bp {
+	enum avf_sd_entry_type entry_type;
+	struct avf_dma_mem addr; /* populate to be used by hw */
+	u32 sd_pd_index;
+	u32 ref_cnt;
+};
+
+struct avf_hmc_pd_entry {
+	struct avf_hmc_bp bp;
+	u32 sd_index;
+	bool rsrc_pg;
+	bool valid;
+};
+
+struct avf_hmc_pd_table {
+	struct avf_dma_mem pd_page_addr; /* populate to be used by hw */
+	struct avf_hmc_pd_entry  *pd_entry; /* [512] for sw book keeping */
+	struct avf_virt_mem pd_entry_virt_mem; /* virt mem for pd_entry */
+
+	u32 ref_cnt;
+	u32 sd_index;
+};
+
+struct avf_hmc_sd_entry {
+	enum avf_sd_entry_type entry_type;
+	bool valid;
+
+	union {
+		struct avf_hmc_pd_table pd_table;
+		struct avf_hmc_bp bp;
+	} u;
+};
+
+struct avf_hmc_sd_table {
+	struct avf_virt_mem addr; /* used to track sd_entry allocations */
+	u32 sd_cnt;
+	u32 ref_cnt;
+	struct avf_hmc_sd_entry *sd_entry; /* (sd_cnt*512) entries max */
+};
+
+struct avf_hmc_info {
+	u32 signature;
+	/* equals to pci func num for PF and dynamically allocated for VFs */
+	u8 hmc_fn_id;
+	u16 first_sd_index; /* index of the first available SD */
+
+	/* hmc objects */
+	struct avf_hmc_obj_info *hmc_obj;
+	struct avf_virt_mem hmc_obj_virt_mem;
+	struct avf_hmc_sd_table sd_table;
+};
+
+#define AVF_INC_SD_REFCNT(sd_table)	((sd_table)->ref_cnt++)
+#define AVF_INC_PD_REFCNT(pd_table)	((pd_table)->ref_cnt++)
+#define AVF_INC_BP_REFCNT(bp)		((bp)->ref_cnt++)
+
+#define AVF_DEC_SD_REFCNT(sd_table)	((sd_table)->ref_cnt--)
+#define AVF_DEC_PD_REFCNT(pd_table)	((pd_table)->ref_cnt--)
+#define AVF_DEC_BP_REFCNT(bp)		((bp)->ref_cnt--)
+
+/**
+ * AVF_SET_PF_SD_ENTRY - marks the sd entry as valid in the hardware
+ * @hw: pointer to our hw struct
+ * @pa: pointer to physical address
+ * @sd_index: segment descriptor index
+ * @type: if sd entry is direct or paged
+ **/
+#define AVF_SET_PF_SD_ENTRY(hw, pa, sd_index, type)			\
+{									\
+	u32 val1, val2, val3;						\
+	val1 = (u32)(AVF_HI_DWORD(pa));				\
+	val2 = (u32)(pa) | (AVF_HMC_MAX_BP_COUNT <<			\
+		 AVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) |		\
+		((((type) == AVF_SD_TYPE_PAGED) ? 0 : 1) <<		\
+		AVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT) |			\
+		BIT(AVF_PFHMC_SDDATALOW_PMSDVALID_SHIFT);		\
+	val3 = (sd_index) | BIT_ULL(AVF_PFHMC_SDCMD_PMSDWR_SHIFT);	\
+	wr32((hw), AVF_PFHMC_SDDATAHIGH, val1);			\
+	wr32((hw), AVF_PFHMC_SDDATALOW, val2);				\
+	wr32((hw), AVF_PFHMC_SDCMD, val3);				\
+}
+
+/**
+ * AVF_CLEAR_PF_SD_ENTRY - marks the sd entry as invalid in the hardware
+ * @hw: pointer to our hw struct
+ * @sd_index: segment descriptor index
+ * @type: if sd entry is direct or paged
+ **/
+#define AVF_CLEAR_PF_SD_ENTRY(hw, sd_index, type)			\
+{									\
+	u32 val2, val3;							\
+	val2 = (AVF_HMC_MAX_BP_COUNT <<				\
+		AVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) |		\
+		((((type) == AVF_SD_TYPE_PAGED) ? 0 : 1) <<		\
+		AVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT);			\
+	val3 = (sd_index) | BIT_ULL(AVF_PFHMC_SDCMD_PMSDWR_SHIFT);	\
+	wr32((hw), AVF_PFHMC_SDDATAHIGH, 0);				\
+	wr32((hw), AVF_PFHMC_SDDATALOW, val2);				\
+	wr32((hw), AVF_PFHMC_SDCMD, val3);				\
+}
+
+/**
+ * AVF_INVALIDATE_PF_HMC_PD - Invalidates the pd cache in the hardware
+ * @hw: pointer to our hw struct
+ * @sd_idx: segment descriptor index
+ * @pd_idx: page descriptor index
+ **/
+#define AVF_INVALIDATE_PF_HMC_PD(hw, sd_idx, pd_idx)			\
+	wr32((hw), AVF_PFHMC_PDINV,					\
+	    (((sd_idx) << AVF_PFHMC_PDINV_PMSDIDX_SHIFT) |		\
+	     ((pd_idx) << AVF_PFHMC_PDINV_PMPDIDX_SHIFT)))
+
+/**
+ * AVF_FIND_SD_INDEX_LIMIT - finds segment descriptor index limit
+ * @hmc_info: pointer to the HMC configuration information structure
+ * @type: type of HMC resources we're searching
+ * @index: starting index for the object
+ * @cnt: number of objects we're trying to create
+ * @sd_idx: pointer to return index of the segment descriptor in question
+ * @sd_limit: pointer to return the maximum number of segment descriptors
+ *
+ * This function calculates the segment descriptor index and index limit
+ * for the resource defined by avf_hmc_rsrc_type.
+ **/
+#define AVF_FIND_SD_INDEX_LIMIT(hmc_info, type, index, cnt, sd_idx, sd_limit)\
+{									\
+	u64 fpm_addr, fpm_limit;					\
+	fpm_addr = (hmc_info)->hmc_obj[(type)].base +			\
+		   (hmc_info)->hmc_obj[(type)].size * (index);		\
+	fpm_limit = fpm_addr + (hmc_info)->hmc_obj[(type)].size * (cnt);\
+	*(sd_idx) = (u32)(fpm_addr / AVF_HMC_DIRECT_BP_SIZE);		\
+	*(sd_limit) = (u32)((fpm_limit - 1) / AVF_HMC_DIRECT_BP_SIZE);	\
+	/* add one more to the limit to correct our range */		\
+	*(sd_limit) += 1;						\
+}
+
+/**
+ * AVF_FIND_PD_INDEX_LIMIT - finds page descriptor index limit
+ * @hmc_info: pointer to the HMC configuration information struct
+ * @type: HMC resource type we're examining
+ * @idx: starting index for the object
+ * @cnt: number of objects we're trying to create
+ * @pd_index: pointer to return page descriptor index
+ * @pd_limit: pointer to return page descriptor index limit
+ *
+ * Calculates the page descriptor index and index limit for the resource
+ * defined by avf_hmc_rsrc_type.
+ **/
+#define AVF_FIND_PD_INDEX_LIMIT(hmc_info, type, idx, cnt, pd_index, pd_limit)\
+{									\
+	u64 fpm_adr, fpm_limit;						\
+	fpm_adr = (hmc_info)->hmc_obj[(type)].base +			\
+		  (hmc_info)->hmc_obj[(type)].size * (idx);		\
+	fpm_limit = fpm_adr + (hmc_info)->hmc_obj[(type)].size * (cnt);	\
+	*(pd_index) = (u32)(fpm_adr / AVF_HMC_PAGED_BP_SIZE);		\
+	*(pd_limit) = (u32)((fpm_limit - 1) / AVF_HMC_PAGED_BP_SIZE);	\
+	/* add one more to the limit to correct our range */		\
+	*(pd_limit) += 1;						\
+}
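+
+/* Worked example (illustrative only): with AVF_HMC_DIRECT_BP_SIZE of 2MB,
+ * an object size of 128 bytes and an FPM base of 0, creating 32768 objects
+ * from index 0 covers the range [0, 4MB). AVF_FIND_SD_INDEX_LIMIT then
+ * yields sd_idx = 0 and sd_limit = 2 (the "+ 1" correction is already
+ * included), i.e. two direct segment descriptors are needed.
+ */
+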
+enum avf_status_code avf_add_sd_table_entry(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 sd_index,
+					      enum avf_sd_entry_type type,
+					      u64 direct_mode_sz);
+
+enum avf_status_code avf_add_pd_table_entry(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 pd_index,
+					      struct avf_dma_mem *rsrc_pg);
+enum avf_status_code avf_remove_pd_bp(struct avf_hw *hw,
+					struct avf_hmc_info *hmc_info,
+					u32 idx);
+enum avf_status_code avf_prep_remove_sd_bp(struct avf_hmc_info *hmc_info,
+					     u32 idx);
+enum avf_status_code avf_remove_sd_bp_new(struct avf_hw *hw,
+					    struct avf_hmc_info *hmc_info,
+					    u32 idx, bool is_pf);
+enum avf_status_code avf_prep_remove_pd_page(struct avf_hmc_info *hmc_info,
+					       u32 idx);
+enum avf_status_code avf_remove_pd_page_new(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 idx, bool is_pf);
+
+#endif /* _AVF_HMC_H_ */
diff --git a/drivers/net/avf/base/avf_lan_hmc.h b/drivers/net/avf/base/avf_lan_hmc.h
new file mode 100644
index 0000000..48805d8
--- /dev/null
+++ b/drivers/net/avf/base/avf_lan_hmc.h
@@ -0,0 +1,200 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_LAN_HMC_H_
+#define _AVF_LAN_HMC_H_
+
+/* forward-declare the HW struct for the compiler */
+struct avf_hw;
+
+/* HMC element context information */
+
+/* Rx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct avf_hmc_obj_rxq {
+	u16 head;
+	u16 cpuid; /* bigger than needed, see above for reason */
+	u64 base;
+	u16 qlen;
+#define AVF_RXQ_CTX_DBUFF_SHIFT 7
+	u16 dbuff; /* bigger than needed, see above for reason */
+#define AVF_RXQ_CTX_HBUFF_SHIFT 6
+	u16 hbuff; /* bigger than needed, see above for reason */
+	u8  dtype;
+	u8  dsize;
+	u8  crcstrip;
+	u8  fc_ena;
+	u8  l2tsel;
+	u8  hsplit_0;
+	u8  hsplit_1;
+	u8  showiv;
+	u32 rxmax; /* bigger than needed, see above for reason */
+	u8  tphrdesc_ena;
+	u8  tphwdesc_ena;
+	u8  tphdata_ena;
+	u8  tphhead_ena;
+	u16 lrxqthresh; /* bigger than needed, see above for reason */
+	u8  prefena;	/* NOTE: normally must be set to 1 at init */
+};
+
+/* Tx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when
+ * the variable is at the top of a byte and crosses over into the next byte.
+ */
+struct avf_hmc_obj_txq {
+	u16 head;
+	u8  new_context;
+	u64 base;
+	u8  fc_ena;
+	u8  timesync_ena;
+	u8  fd_ena;
+	u8  alt_vlan_ena;
+	u16 thead_wb;
+	u8  cpuid;
+	u8  head_wb_ena;
+	u16 qlen;
+	u8  tphrdesc_ena;
+	u8  tphrpacket_ena;
+	u8  tphwdesc_ena;
+	u64 head_wb_addr;
+	u32 crc;
+	u16 rdylist;
+	u8  rdylist_act;
+};
+
+/* for hsplit_0 field of Rx HMC context */
+enum avf_hmc_obj_rx_hsplit_0 {
+	AVF_HMC_OBJ_RX_HSPLIT_0_NO_SPLIT      = 0,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_L2      = 1,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_IP      = 2,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_TCP_UDP = 4,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_SCTP    = 8,
+};
+
+/* fcoe_cntx and fcoe_filt are for debugging purpose only */
+struct avf_hmc_obj_fcoe_cntx {
+	u32 rsv[32];
+};
+
+struct avf_hmc_obj_fcoe_filt {
+	u32 rsv[8];
+};
+
+/* Context sizes for LAN objects */
+enum avf_hmc_lan_object_size {
+	AVF_HMC_LAN_OBJ_SZ_8   = 0x3,
+	AVF_HMC_LAN_OBJ_SZ_16  = 0x4,
+	AVF_HMC_LAN_OBJ_SZ_32  = 0x5,
+	AVF_HMC_LAN_OBJ_SZ_64  = 0x6,
+	AVF_HMC_LAN_OBJ_SZ_128 = 0x7,
+	AVF_HMC_LAN_OBJ_SZ_256 = 0x8,
+	AVF_HMC_LAN_OBJ_SZ_512 = 0x9,
+};
+
+#define AVF_HMC_L2OBJ_BASE_ALIGNMENT 512
+#define AVF_HMC_OBJ_SIZE_TXQ         128
+#define AVF_HMC_OBJ_SIZE_RXQ         32
+#define AVF_HMC_OBJ_SIZE_FCOE_CNTX   64
+#define AVF_HMC_OBJ_SIZE_FCOE_FILT   64
+
+enum avf_hmc_lan_rsrc_type {
+	AVF_HMC_LAN_FULL  = 0,
+	AVF_HMC_LAN_TX    = 1,
+	AVF_HMC_LAN_RX    = 2,
+	AVF_HMC_FCOE_CTX  = 3,
+	AVF_HMC_FCOE_FILT = 4,
+	AVF_HMC_LAN_MAX   = 5
+};
+
+enum avf_hmc_model {
+	AVF_HMC_MODEL_DIRECT_PREFERRED = 0,
+	AVF_HMC_MODEL_DIRECT_ONLY      = 1,
+	AVF_HMC_MODEL_PAGED_ONLY       = 2,
+	AVF_HMC_MODEL_UNKNOWN,
+};
+
+struct avf_hmc_lan_create_obj_info {
+	struct avf_hmc_info *hmc_info;
+	u32 rsrc_type;
+	u32 start_idx;
+	u32 count;
+	enum avf_sd_entry_type entry_type;
+	u64 direct_mode_sz;
+};
+
+struct avf_hmc_lan_delete_obj_info {
+	struct avf_hmc_info *hmc_info;
+	u32 rsrc_type;
+	u32 start_idx;
+	u32 count;
+};
+
+enum avf_status_code avf_init_lan_hmc(struct avf_hw *hw, u32 txq_num,
+					u32 rxq_num, u32 fcoe_cntx_num,
+					u32 fcoe_filt_num);
+enum avf_status_code avf_configure_lan_hmc(struct avf_hw *hw,
+					     enum avf_hmc_model model);
+enum avf_status_code avf_shutdown_lan_hmc(struct avf_hw *hw);
+
+u64 avf_calculate_l2fpm_size(u32 txq_num, u32 rxq_num,
+			      u32 fcoe_cntx_num, u32 fcoe_filt_num);
+enum avf_status_code avf_get_lan_tx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_txq *s);
+enum avf_status_code avf_clear_lan_tx_queue_context(struct avf_hw *hw,
+						      u16 queue);
+enum avf_status_code avf_set_lan_tx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_txq *s);
+enum avf_status_code avf_get_lan_rx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_rxq *s);
+enum avf_status_code avf_clear_lan_rx_queue_context(struct avf_hw *hw,
+						      u16 queue);
+enum avf_status_code avf_set_lan_rx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_rxq *s);
+enum avf_status_code avf_create_lan_hmc_object(struct avf_hw *hw,
+				struct avf_hmc_lan_create_obj_info *info);
+enum avf_status_code avf_delete_lan_hmc_object(struct avf_hw *hw,
+				struct avf_hmc_lan_delete_obj_info *info);
+
+#endif /* _AVF_LAN_HMC_H_ */
diff --git a/drivers/net/avf/base/avf_osdep.h b/drivers/net/avf/base/avf_osdep.h
new file mode 100644
index 0000000..2f46bb2
--- /dev/null
+++ b/drivers/net/avf/base/avf_osdep.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_OSDEP_H_
+#define _AVF_OSDEP_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdarg.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_io.h>
+
+#include "../avf_log.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t         u8;
+typedef int8_t          s8;
+typedef uint16_t        u16;
+typedef uint32_t        u32;
+typedef int32_t         s32;
+typedef uint64_t        u64;
+
+#define __iomem
+#define hw_dbg(hw, S, A...) do {} while (0)
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN                  6
+#endif
+
+#ifndef __le16
+#define __le16          uint16_t
+#endif
+#ifndef __le32
+#define __le32          uint32_t
+#endif
+#ifndef __le64
+#define __le64          uint64_t
+#endif
+#ifndef __be16
+#define __be16          uint16_t
+#endif
+#ifndef __be32
+#define __be32          uint32_t
+#endif
+#ifndef __be64
+#define __be64          uint64_t
+#endif
+
+#define FALSE           0
+#define TRUE            1
+#define false           0
+#define true            1
+
+#define min(a,b) RTE_MIN(a,b)
+#define max(a,b) RTE_MAX(a,b)
+
+#define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))
+#define ASSERT(x) do { if (!(x)) rte_panic("AVF: %s", #x); } while (0)
+
+#define DEBUGOUT(S)             PMD_DRV_LOG_RAW(DEBUG, S)
+#define DEBUGOUT2(S, A...)      PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F)            DEBUGOUT(F "\n")
+
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define cpu_to_le16(o) rte_cpu_to_le_16(o)
+#define cpu_to_le32(s) rte_cpu_to_le_32(s)
+#define cpu_to_le64(h) rte_cpu_to_le_64(h)
+#define le16_to_cpu(a) rte_le_to_cpu_16(a)
+#define le32_to_cpu(c) rte_le_to_cpu_32(c)
+#define le64_to_cpu(k) rte_le_to_cpu_64(k)
+
+#define avf_memset(a, b, c, d) memset((a), (b), (c))
+#define avf_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
+
+#define avf_usec_delay(x) rte_delay_us(x)
+#define avf_msec_delay(x) rte_delay_us(1000*(x))
+
+#define AVF_PCI_REG(reg)		rte_read32(reg)
+#define AVF_PCI_REG_ADDR(a, reg) \
+	((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+
+#define AVF_PCI_REG_WRITE(reg, value)		\
+	rte_write32((rte_cpu_to_le_32(value)), reg)
+#define AVF_PCI_REG_WRITE_RELAXED(reg, value)	\
+	rte_write32_relaxed((rte_cpu_to_le_32(value)), reg)
+static inline
+uint32_t avf_read_addr(volatile void *addr)
+{
+	return rte_le_to_cpu_32(AVF_PCI_REG(addr));
+}
+
+#define AVF_READ_REG(hw, reg) \
+	avf_read_addr(AVF_PCI_REG_ADDR((hw), (reg)))
+#define AVF_WRITE_REG(hw, reg, value) \
+	AVF_PCI_REG_WRITE(AVF_PCI_REG_ADDR((hw), (reg)), (value))
+#define AVF_WRITE_FLUSH(a) \
+	AVF_READ_REG(a, AVFGEN_RSTAT)
+
+#define rd32(a, reg) avf_read_addr(AVF_PCI_REG_ADDR((a), (reg)))
+#define wr32(a, reg, value) \
+	AVF_PCI_REG_WRITE(AVF_PCI_REG_ADDR((a), (reg)), (value))
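+
+/* Illustrative usage sketch (not part of this header): all register access in
+ * the shared code funnels through these little-endian aware wrappers, e.g.
+ *
+ *	u32 rstat = rd32(hw, AVFGEN_RSTAT);
+ *	wr32(hw, reg_offset, value);
+ *	AVF_WRITE_FLUSH(hw);
+ *
+ * reg_offset/value are placeholders; register offsets such as AVFGEN_RSTAT
+ * are defined in avf_register.h.
+ */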
+
+#define ARRAY_SIZE(arr) (sizeof(arr)/sizeof(arr[0]))
+
+#define avf_debug(h, m, s, ...)                                \
+do {                                                            \
+	if (((m) & (h)->debug_mask))                            \
+		PMD_DRV_LOG_RAW(DEBUG, "avf %02x.%x " s,       \
+			(h)->bus.device, (h)->bus.func,         \
+					##__VA_ARGS__);         \
+} while (0)
+
+/* memory allocation tracking */
+struct avf_dma_mem {
+	void *va;
+	u64 pa;
+	u32 size;
+	const void *zone;
+} __attribute__((packed));
+
+struct avf_virt_mem {
+	void *va;
+	u32 size;
+} __attribute__((packed));
+
+/* SW spinlock */
+struct avf_spinlock {
+	rte_spinlock_t spinlock;
+};
+
+#define avf_allocate_dma_mem(h, m, unused, s, a) \
+			avf_allocate_dma_mem_d(h, m, s, a)
+#define avf_free_dma_mem(h, m) avf_free_dma_mem_d(h, m)
+
+#define avf_allocate_virt_mem(h, m, s) avf_allocate_virt_mem_d(h, m, s)
+#define avf_free_virt_mem(h, m) avf_free_virt_mem_d(h, m)
+
+#define avf_init_spinlock(_sp) avf_init_spinlock_d(_sp)
+#define avf_acquire_spinlock(_sp) avf_acquire_spinlock_d(_sp)
+#define avf_release_spinlock(_sp) avf_release_spinlock_d(_sp)
+#define avf_destroy_spinlock(_sp) avf_destroy_spinlock_d(_sp)
+
+#endif /* _AVF_OSDEP_H_ */
diff --git a/drivers/net/avf/base/avf_prototype.h b/drivers/net/avf/base/avf_prototype.h
new file mode 100644
index 0000000..de031dc
--- /dev/null
+++ b/drivers/net/avf/base/avf_prototype.h
@@ -0,0 +1,206 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_PROTOTYPE_H_
+#define _AVF_PROTOTYPE_H_
+
+#include "avf_type.h"
+#include "avf_alloc.h"
+#include "virtchnl.h"
+
+/* Prototypes for shared code functions that are not in
+ * the standard function pointer structures.  These are
+ * mostly because they are needed even before the init
+ * has happened and will assist in the early SW and FW
+ * setup.
+ */
+
+/* adminq functions */
+enum avf_status_code avf_init_adminq(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_adminq(struct avf_hw *hw);
+enum avf_status_code avf_init_asq(struct avf_hw *hw);
+enum avf_status_code avf_init_arq(struct avf_hw *hw);
+enum avf_status_code avf_alloc_adminq_asq_ring(struct avf_hw *hw);
+enum avf_status_code avf_alloc_adminq_arq_ring(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_asq(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_arq(struct avf_hw *hw);
+u16 avf_clean_asq(struct avf_hw *hw);
+void avf_free_adminq_asq(struct avf_hw *hw);
+void avf_free_adminq_arq(struct avf_hw *hw);
+enum avf_status_code avf_validate_mac_addr(u8 *mac_addr);
+void avf_adminq_init_ring_data(struct avf_hw *hw);
+enum avf_status_code avf_clean_arq_element(struct avf_hw *hw,
+					     struct avf_arq_event_info *e,
+					     u16 *events_pending);
+enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
+				struct avf_aq_desc *desc,
+				void *buff, /* can be NULL */
+				u16  buff_size,
+				struct avf_asq_cmd_details *cmd_details);
+bool avf_asq_done(struct avf_hw *hw);
+
+/* debug function for adminq */
+void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask,
+		   void *desc, void *buffer, u16 buf_len);
+
+void avf_idle_aq(struct avf_hw *hw);
+bool avf_check_asq_alive(struct avf_hw *hw);
+enum avf_status_code avf_aq_queue_shutdown(struct avf_hw *hw, bool unloading);
+
+enum avf_status_code avf_aq_get_rss_lut(struct avf_hw *hw, u16 seid,
+					  bool pf_lut, u8 *lut, u16 lut_size);
+enum avf_status_code avf_aq_set_rss_lut(struct avf_hw *hw, u16 seid,
+					  bool pf_lut, u8 *lut, u16 lut_size);
+enum avf_status_code avf_aq_get_rss_key(struct avf_hw *hw,
+				     u16 seid,
+				     struct avf_aqc_get_set_rss_key_data *key);
+enum avf_status_code avf_aq_set_rss_key(struct avf_hw *hw,
+				     u16 seid,
+				     struct avf_aqc_get_set_rss_key_data *key);
+const char *avf_aq_str(struct avf_hw *hw, enum avf_admin_queue_err aq_err);
+const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err);
+
+
+enum avf_status_code avf_set_mac_type(struct avf_hw *hw);
+
+extern struct avf_rx_ptype_decoded avf_ptype_lookup[];
+
+STATIC INLINE struct avf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
+{
+	return avf_ptype_lookup[ptype];
+}
+
+/* prototype for functions used for SW spinlocks */
+void avf_init_spinlock(struct avf_spinlock *sp);
+void avf_acquire_spinlock(struct avf_spinlock *sp);
+void avf_release_spinlock(struct avf_spinlock *sp);
+void avf_destroy_spinlock(struct avf_spinlock *sp);
+
+/* avf_common for VF drivers */
+void avf_parse_hw_config(struct avf_hw *hw,
+			     struct virtchnl_vf_resource *msg);
+enum avf_status_code avf_reset(struct avf_hw *hw);
+enum avf_status_code avf_aq_send_msg_to_pf(struct avf_hw *hw,
+				enum virtchnl_ops v_opcode,
+				enum avf_status_code v_retval,
+				u8 *msg, u16 msglen,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_set_filter_control(struct avf_hw *hw,
+				struct avf_filter_control_settings *settings);
+enum avf_status_code avf_aq_add_rem_control_packet_filter(struct avf_hw *hw,
+				u8 *mac_addr, u16 ethtype, u16 flags,
+				u16 vsi_seid, u16 queue, bool is_add,
+				struct avf_control_filter_stats *stats,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_debug_dump(struct avf_hw *hw, u8 cluster_id,
+				u8 table_id, u32 start_index, u16 buff_size,
+				void *buff, u16 *ret_buff_size,
+				u8 *ret_next_table, u32 *ret_next_index,
+				struct avf_asq_cmd_details *cmd_details);
+void avf_add_filter_to_drop_tx_flow_control_frames(struct avf_hw *hw,
+						    u16 vsi_seid);
+enum avf_status_code avf_aq_rx_ctl_read_register(struct avf_hw *hw,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr);
+enum avf_status_code avf_aq_rx_ctl_write_register(struct avf_hw *hw,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val);
+enum avf_status_code avf_aq_set_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+
+enum avf_status_code avf_aq_set_arp_proxy_config(struct avf_hw *hw,
+			struct avf_aqc_arp_proxy_data *proxy_config,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_set_ns_proxy_table_entry(struct avf_hw *hw,
+			struct avf_aqc_ns_proxy_data *ns_proxy_table_entry,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_set_clear_wol_filter(struct avf_hw *hw,
+			u8 filter_index,
+			struct avf_aqc_set_wol_filter_data *filter,
+			bool set_filter, bool no_wol_tco,
+			bool filter_valid, bool no_wol_tco_valid,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_get_wake_event_reason(struct avf_hw *hw,
+			u16 *wake_reason,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_clear_all_wol_filters(struct avf_hw *hw,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_read_phy_register_clause22(struct avf_hw *hw,
+					u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register_clause22(struct avf_hw *hw,
+					u16 reg, u8 phy_addr, u16 value);
+enum avf_status_code avf_read_phy_register_clause45(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register_clause45(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 value);
+enum avf_status_code avf_read_phy_register(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 value);
+u8 avf_get_phy_address(struct avf_hw *hw, u8 dev_num);
+enum avf_status_code avf_blink_phy_link_led(struct avf_hw *hw,
+					      u32 time, u32 interval);
+enum avf_status_code avf_aq_write_ddp(struct avf_hw *hw, void *buff,
+					u16 buff_size, u32 track_id,
+					u32 *error_offset, u32 *error_info,
+					struct avf_asq_cmd_details *
+					cmd_details);
+enum avf_status_code avf_aq_get_ddp_list(struct avf_hw *hw, void *buff,
+					   u16 buff_size, u8 flags,
+					   struct avf_asq_cmd_details *
+					   cmd_details);
+struct avf_generic_seg_header *
+avf_find_segment_in_package(u32 segment_type,
+			     struct avf_package_header *pkg_header);
+struct avf_profile_section_header *
+avf_find_section_in_profile(u32 section_type,
+			     struct avf_profile_segment *profile);
+enum avf_status_code
+avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *avf_seg,
+		   u32 track_id);
+enum avf_status_code
+avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *avf_seg,
+		      u32 track_id);
+enum avf_status_code
+avf_add_pinfo_to_list(struct avf_hw *hw,
+		       struct avf_profile_segment *profile,
+		       u8 *profile_info_sec, u32 track_id);
+#endif /* _AVF_PROTOTYPE_H_ */
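
Taken together, these prototypes describe the VF control path: bring the admin queue up with avf_init_adminq(), push virtchnl requests to the PF with avf_aq_send_msg_to_pf(), and drain replies with avf_clean_arq_element(). A rough sketch of that flow is below; the function name is hypothetical, the polling loop is left unbounded for brevity, and it assumes avf_clean_arq_element() keeps returning AVF_ERR_ADMIN_QUEUE_NO_WORK while the receive ring is empty.

/* Illustration only, not part of the patch. */
static enum avf_status_code
avf_example_pf_request(struct avf_hw *hw, enum virtchnl_ops op,
		       u8 *msg, u16 msglen)
{
	struct avf_arq_event_info event = { 0 };
	enum avf_status_code status;
	u16 pending = 0;

	/* queue the request on the admin send queue */
	status = avf_aq_send_msg_to_pf(hw, op, AVF_SUCCESS, msg, msglen, NULL);
	if (status != AVF_SUCCESS)
		return status;

	/* poll the admin receive queue for the PF's reply; a real caller
	 * would bound this loop and attach a receive buffer to the event
	 */
	do {
		status = avf_clean_arq_element(hw, &event, &pending);
	} while (status == AVF_ERR_ADMIN_QUEUE_NO_WORK);

	return status;
}
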
diff --git a/drivers/net/avf/base/avf_register.h b/drivers/net/avf/base/avf_register.h
new file mode 100644
index 0000000..ba5a9f3
--- /dev/null
+++ b/drivers/net/avf/base/avf_register.h
@@ -0,0 +1,346 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_REGISTER_H_
+#define _AVF_REGISTER_H_
+
+
+#define AVFMSIX_PBA1(_i)          (0x00002000 + ((_i) * 4)) /* _i=0...19 */ /* Reset: VFLR */
+#define AVFMSIX_PBA1_MAX_INDEX    19
+#define AVFMSIX_PBA1_PENBIT_SHIFT 0
+#define AVFMSIX_PBA1_PENBIT_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_PBA1_PENBIT_SHIFT)
+#define AVFMSIX_TADD1(_i)              (0x00002100 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TADD1_MAX_INDEX        639
+#define AVFMSIX_TADD1_MSIXTADD10_SHIFT 0
+#define AVFMSIX_TADD1_MSIXTADD10_MASK  AVF_MASK(0x3, AVFMSIX_TADD1_MSIXTADD10_SHIFT)
+#define AVFMSIX_TADD1_MSIXTADD_SHIFT   2
+#define AVFMSIX_TADD1_MSIXTADD_MASK    AVF_MASK(0x3FFFFFFF, AVFMSIX_TADD1_MSIXTADD_SHIFT)
+#define AVFMSIX_TMSG1(_i)            (0x00002108 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TMSG1_MAX_INDEX      639
+#define AVFMSIX_TMSG1_MSIXTMSG_SHIFT 0
+#define AVFMSIX_TMSG1_MSIXTMSG_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TMSG1_MSIXTMSG_SHIFT)
+#define AVFMSIX_TUADD1(_i)             (0x00002104 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TUADD1_MAX_INDEX       639
+#define AVFMSIX_TUADD1_MSIXTUADD_SHIFT 0
+#define AVFMSIX_TUADD1_MSIXTUADD_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TUADD1_MSIXTUADD_SHIFT)
+#define AVFMSIX_TVCTRL1(_i)        (0x0000210C + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TVCTRL1_MAX_INDEX  639
+#define AVFMSIX_TVCTRL1_MASK_SHIFT 0
+#define AVFMSIX_TVCTRL1_MASK_MASK  AVF_MASK(0x1, AVFMSIX_TVCTRL1_MASK_SHIFT)
+#define AVF_ARQBAH1              0x00006000 /* Reset: EMPR */
+#define AVF_ARQBAH1_ARQBAH_SHIFT 0
+#define AVF_ARQBAH1_ARQBAH_MASK  AVF_MASK(0xFFFFFFFF, AVF_ARQBAH1_ARQBAH_SHIFT)
+#define AVF_ARQBAL1              0x00006C00 /* Reset: EMPR */
+#define AVF_ARQBAL1_ARQBAL_SHIFT 0
+#define AVF_ARQBAL1_ARQBAL_MASK  AVF_MASK(0xFFFFFFFF, AVF_ARQBAL1_ARQBAL_SHIFT)
+#define AVF_ARQH1            0x00007400 /* Reset: EMPR */
+#define AVF_ARQH1_ARQH_SHIFT 0
+#define AVF_ARQH1_ARQH_MASK  AVF_MASK(0x3FF, AVF_ARQH1_ARQH_SHIFT)
+#define AVF_ARQLEN1                 0x00008000 /* Reset: EMPR */
+#define AVF_ARQLEN1_ARQLEN_SHIFT    0
+#define AVF_ARQLEN1_ARQLEN_MASK     AVF_MASK(0x3FF, AVF_ARQLEN1_ARQLEN_SHIFT)
+#define AVF_ARQLEN1_ARQVFE_SHIFT    28
+#define AVF_ARQLEN1_ARQVFE_MASK     AVF_MASK(0x1, AVF_ARQLEN1_ARQVFE_SHIFT)
+#define AVF_ARQLEN1_ARQOVFL_SHIFT   29
+#define AVF_ARQLEN1_ARQOVFL_MASK    AVF_MASK(0x1, AVF_ARQLEN1_ARQOVFL_SHIFT)
+#define AVF_ARQLEN1_ARQCRIT_SHIFT   30
+#define AVF_ARQLEN1_ARQCRIT_MASK    AVF_MASK(0x1, AVF_ARQLEN1_ARQCRIT_SHIFT)
+#define AVF_ARQLEN1_ARQENABLE_SHIFT 31
+#define AVF_ARQLEN1_ARQENABLE_MASK  AVF_MASK(0x1, AVF_ARQLEN1_ARQENABLE_SHIFT)
+#define AVF_ARQT1            0x00007000 /* Reset: EMPR */
+#define AVF_ARQT1_ARQT_SHIFT 0
+#define AVF_ARQT1_ARQT_MASK  AVF_MASK(0x3FF, AVF_ARQT1_ARQT_SHIFT)
+#define AVF_ATQBAH1              0x00007800 /* Reset: EMPR */
+#define AVF_ATQBAH1_ATQBAH_SHIFT 0
+#define AVF_ATQBAH1_ATQBAH_MASK  AVF_MASK(0xFFFFFFFF, AVF_ATQBAH1_ATQBAH_SHIFT)
+#define AVF_ATQBAL1              0x00007C00 /* Reset: EMPR */
+#define AVF_ATQBAL1_ATQBAL_SHIFT 0
+#define AVF_ATQBAL1_ATQBAL_MASK  AVF_MASK(0xFFFFFFFF, AVF_ATQBAL1_ATQBAL_SHIFT)
+#define AVF_ATQH1            0x00006400 /* Reset: EMPR */
+#define AVF_ATQH1_ATQH_SHIFT 0
+#define AVF_ATQH1_ATQH_MASK  AVF_MASK(0x3FF, AVF_ATQH1_ATQH_SHIFT)
+#define AVF_ATQLEN1                 0x00006800 /* Reset: EMPR */
+#define AVF_ATQLEN1_ATQLEN_SHIFT    0
+#define AVF_ATQLEN1_ATQLEN_MASK     AVF_MASK(0x3FF, AVF_ATQLEN1_ATQLEN_SHIFT)
+#define AVF_ATQLEN1_ATQVFE_SHIFT    28
+#define AVF_ATQLEN1_ATQVFE_MASK     AVF_MASK(0x1, AVF_ATQLEN1_ATQVFE_SHIFT)
+#define AVF_ATQLEN1_ATQOVFL_SHIFT   29
+#define AVF_ATQLEN1_ATQOVFL_MASK    AVF_MASK(0x1, AVF_ATQLEN1_ATQOVFL_SHIFT)
+#define AVF_ATQLEN1_ATQCRIT_SHIFT   30
+#define AVF_ATQLEN1_ATQCRIT_MASK    AVF_MASK(0x1, AVF_ATQLEN1_ATQCRIT_SHIFT)
+#define AVF_ATQLEN1_ATQENABLE_SHIFT 31
+#define AVF_ATQLEN1_ATQENABLE_MASK  AVF_MASK(0x1, AVF_ATQLEN1_ATQENABLE_SHIFT)
+#define AVF_ATQT1            0x00008400 /* Reset: EMPR */
+#define AVF_ATQT1_ATQT_SHIFT 0
+#define AVF_ATQT1_ATQT_MASK  AVF_MASK(0x3FF, AVF_ATQT1_ATQT_SHIFT)
+#define AVFGEN_RSTAT                 0x00008800 /* Reset: VFR */
+#define AVFGEN_RSTAT_VFR_STATE_SHIFT 0
+#define AVFGEN_RSTAT_VFR_STATE_MASK  AVF_MASK(0x3, AVFGEN_RSTAT_VFR_STATE_SHIFT)
+#define AVFINT_DYN_CTL01                       0x00005C00 /* Reset: VFR */
+#define AVFINT_DYN_CTL01_INTENA_SHIFT          0
+#define AVFINT_DYN_CTL01_INTENA_MASK           AVF_MASK(0x1, AVFINT_DYN_CTL01_INTENA_SHIFT)
+#define AVFINT_DYN_CTL01_CLEARPBA_SHIFT        1
+#define AVFINT_DYN_CTL01_CLEARPBA_MASK         AVF_MASK(0x1, AVFINT_DYN_CTL01_CLEARPBA_SHIFT)
+#define AVFINT_DYN_CTL01_SWINT_TRIG_SHIFT      2
+#define AVFINT_DYN_CTL01_SWINT_TRIG_MASK       AVF_MASK(0x1, AVFINT_DYN_CTL01_SWINT_TRIG_SHIFT)
+#define AVFINT_DYN_CTL01_ITR_INDX_SHIFT        3
+#define AVFINT_DYN_CTL01_ITR_INDX_MASK         AVF_MASK(0x3, AVFINT_DYN_CTL01_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTL01_INTERVAL_SHIFT        5
+#define AVFINT_DYN_CTL01_INTERVAL_MASK         AVF_MASK(0xFFF, AVFINT_DYN_CTL01_INTERVAL_SHIFT)
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT 24
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_MASK  AVF_MASK(0x1, AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT)
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT     25
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_MASK      AVF_MASK(0x3, AVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTL01_INTENA_MSK_SHIFT      31
+#define AVFINT_DYN_CTL01_INTENA_MSK_MASK       AVF_MASK(0x1, AVFINT_DYN_CTL01_INTENA_MSK_SHIFT)
+#define AVFINT_DYN_CTLN1(_INTVF)               (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */
+#define AVFINT_DYN_CTLN1_MAX_INDEX             15
+#define AVFINT_DYN_CTLN1_INTENA_SHIFT          0
+#define AVFINT_DYN_CTLN1_INTENA_MASK           AVF_MASK(0x1, AVFINT_DYN_CTLN1_INTENA_SHIFT)
+#define AVFINT_DYN_CTLN1_CLEARPBA_SHIFT        1
+#define AVFINT_DYN_CTLN1_CLEARPBA_MASK         AVF_MASK(0x1, AVFINT_DYN_CTLN1_CLEARPBA_SHIFT)
+#define AVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT      2
+#define AVFINT_DYN_CTLN1_SWINT_TRIG_MASK       AVF_MASK(0x1, AVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT)
+#define AVFINT_DYN_CTLN1_ITR_INDX_SHIFT        3
+#define AVFINT_DYN_CTLN1_ITR_INDX_MASK         AVF_MASK(0x3, AVFINT_DYN_CTLN1_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTLN1_INTERVAL_SHIFT        5
+#define AVFINT_DYN_CTLN1_INTERVAL_MASK         AVF_MASK(0xFFF, AVFINT_DYN_CTLN1_INTERVAL_SHIFT)
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT 24
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_MASK  AVF_MASK(0x1, AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT)
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT     25
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_MASK      AVF_MASK(0x3, AVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTLN1_INTENA_MSK_SHIFT      31
+#define AVFINT_DYN_CTLN1_INTENA_MSK_MASK       AVF_MASK(0x1, AVFINT_DYN_CTLN1_INTENA_MSK_SHIFT)
+#define AVFINT_ICR0_ENA1                        0x00005000 /* Reset: CORER */
+#define AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT 25
+#define AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_MASK  AVF_MASK(0x1, AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT)
+#define AVFINT_ICR0_ENA1_ADMINQ_SHIFT           30
+#define AVFINT_ICR0_ENA1_ADMINQ_MASK            AVF_MASK(0x1, AVFINT_ICR0_ENA1_ADMINQ_SHIFT)
+#define AVFINT_ICR0_ENA1_RSVD_SHIFT             31
+#define AVFINT_ICR0_ENA1_RSVD_MASK              AVF_MASK(0x1, AVFINT_ICR0_ENA1_RSVD_SHIFT)
+#define AVFINT_ICR01                        0x00004800 /* Reset: CORER */
+#define AVFINT_ICR01_INTEVENT_SHIFT         0
+#define AVFINT_ICR01_INTEVENT_MASK          AVF_MASK(0x1, AVFINT_ICR01_INTEVENT_SHIFT)
+#define AVFINT_ICR01_QUEUE_0_SHIFT          1
+#define AVFINT_ICR01_QUEUE_0_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_0_SHIFT)
+#define AVFINT_ICR01_QUEUE_1_SHIFT          2
+#define AVFINT_ICR01_QUEUE_1_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_1_SHIFT)
+#define AVFINT_ICR01_QUEUE_2_SHIFT          3
+#define AVFINT_ICR01_QUEUE_2_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_2_SHIFT)
+#define AVFINT_ICR01_QUEUE_3_SHIFT          4
+#define AVFINT_ICR01_QUEUE_3_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_3_SHIFT)
+#define AVFINT_ICR01_LINK_STAT_CHANGE_SHIFT 25
+#define AVFINT_ICR01_LINK_STAT_CHANGE_MASK  AVF_MASK(0x1, AVFINT_ICR01_LINK_STAT_CHANGE_SHIFT)
+#define AVFINT_ICR01_ADMINQ_SHIFT           30
+#define AVFINT_ICR01_ADMINQ_MASK            AVF_MASK(0x1, AVFINT_ICR01_ADMINQ_SHIFT)
+#define AVFINT_ICR01_SWINT_SHIFT            31
+#define AVFINT_ICR01_SWINT_MASK             AVF_MASK(0x1, AVFINT_ICR01_SWINT_SHIFT)
+#define AVFINT_ITR01(_i)            (0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset: VFR */
+#define AVFINT_ITR01_MAX_INDEX      2
+#define AVFINT_ITR01_INTERVAL_SHIFT 0
+#define AVFINT_ITR01_INTERVAL_MASK  AVF_MASK(0xFFF, AVFINT_ITR01_INTERVAL_SHIFT)
+#define AVFINT_ITRN1(_i, _INTVF)     (0x00002800 + ((_i) * 64 + (_INTVF) * 4)) /* _i=0...2, _INTVF=0...15 */ /* Reset: VFR */
+#define AVFINT_ITRN1_MAX_INDEX      2
+#define AVFINT_ITRN1_INTERVAL_SHIFT 0
+#define AVFINT_ITRN1_INTERVAL_MASK  AVF_MASK(0xFFF, AVFINT_ITRN1_INTERVAL_SHIFT)
+#define AVFINT_STAT_CTL01                      0x00005400 /* Reset: CORER */
+#define AVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT 2
+#define AVFINT_STAT_CTL01_OTHER_ITR_INDX_MASK  AVF_MASK(0x3, AVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT)
+#define AVF_QRX_TAIL1(_Q)        (0x00002000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define AVF_QRX_TAIL1_MAX_INDEX  15
+#define AVF_QRX_TAIL1_TAIL_SHIFT 0
+#define AVF_QRX_TAIL1_TAIL_MASK  AVF_MASK(0x1FFF, AVF_QRX_TAIL1_TAIL_SHIFT)
+#define AVF_QTX_TAIL1(_Q)        (0x00000000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: PFR */
+#define AVF_QTX_TAIL1_MAX_INDEX  15
+#define AVF_QTX_TAIL1_TAIL_SHIFT 0
+#define AVF_QTX_TAIL1_TAIL_MASK  AVF_MASK(0x1FFF, AVF_QTX_TAIL1_TAIL_SHIFT)
+#define AVFMSIX_PBA              0x00002000 /* Reset: VFLR */
+#define AVFMSIX_PBA_PENBIT_SHIFT 0
+#define AVFMSIX_PBA_PENBIT_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_PBA_PENBIT_SHIFT)
+#define AVFMSIX_TADD(_i)              (0x00000000 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TADD_MAX_INDEX        16
+#define AVFMSIX_TADD_MSIXTADD10_SHIFT 0
+#define AVFMSIX_TADD_MSIXTADD10_MASK  AVF_MASK(0x3, AVFMSIX_TADD_MSIXTADD10_SHIFT)
+#define AVFMSIX_TADD_MSIXTADD_SHIFT   2
+#define AVFMSIX_TADD_MSIXTADD_MASK    AVF_MASK(0x3FFFFFFF, AVFMSIX_TADD_MSIXTADD_SHIFT)
+#define AVFMSIX_TMSG(_i)            (0x00000008 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TMSG_MAX_INDEX      16
+#define AVFMSIX_TMSG_MSIXTMSG_SHIFT 0
+#define AVFMSIX_TMSG_MSIXTMSG_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TMSG_MSIXTMSG_SHIFT)
+#define AVFMSIX_TUADD(_i)             (0x00000004 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TUADD_MAX_INDEX       16
+#define AVFMSIX_TUADD_MSIXTUADD_SHIFT 0
+#define AVFMSIX_TUADD_MSIXTUADD_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TUADD_MSIXTUADD_SHIFT)
+#define AVFMSIX_TVCTRL(_i)        (0x0000000C + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TVCTRL_MAX_INDEX  16
+#define AVFMSIX_TVCTRL_MASK_SHIFT 0
+#define AVFMSIX_TVCTRL_MASK_MASK  AVF_MASK(0x1, AVFMSIX_TVCTRL_MASK_SHIFT)
+#define AVFCM_PE_ERRDATA                  0x0000DC00 /* Reset: VFR */
+#define AVFCM_PE_ERRDATA_ERROR_CODE_SHIFT 0
+#define AVFCM_PE_ERRDATA_ERROR_CODE_MASK  AVF_MASK(0xF, AVFCM_PE_ERRDATA_ERROR_CODE_SHIFT)
+#define AVFCM_PE_ERRDATA_Q_TYPE_SHIFT     4
+#define AVFCM_PE_ERRDATA_Q_TYPE_MASK      AVF_MASK(0x7, AVFCM_PE_ERRDATA_Q_TYPE_SHIFT)
+#define AVFCM_PE_ERRDATA_Q_NUM_SHIFT      8
+#define AVFCM_PE_ERRDATA_Q_NUM_MASK       AVF_MASK(0x3FFFF, AVFCM_PE_ERRDATA_Q_NUM_SHIFT)
+#define AVFCM_PE_ERRINFO                     0x0000D800 /* Reset: VFR */
+#define AVFCM_PE_ERRINFO_ERROR_VALID_SHIFT   0
+#define AVFCM_PE_ERRINFO_ERROR_VALID_MASK    AVF_MASK(0x1, AVFCM_PE_ERRINFO_ERROR_VALID_SHIFT)
+#define AVFCM_PE_ERRINFO_ERROR_INST_SHIFT    4
+#define AVFCM_PE_ERRINFO_ERROR_INST_MASK     AVF_MASK(0x7, AVFCM_PE_ERRINFO_ERROR_INST_SHIFT)
+#define AVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT 8
+#define AVFCM_PE_ERRINFO_DBL_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT)
+#define AVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT 16
+#define AVFCM_PE_ERRINFO_RLU_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT)
+#define AVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT 24
+#define AVFCM_PE_ERRINFO_RLS_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT)
+#define AVFQF_HENA(_i)             (0x0000C400 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */
+#define AVFQF_HENA_MAX_INDEX       1
+#define AVFQF_HENA_PTYPE_ENA_SHIFT 0
+#define AVFQF_HENA_PTYPE_ENA_MASK  AVF_MASK(0xFFFFFFFF, AVFQF_HENA_PTYPE_ENA_SHIFT)
+#define AVFQF_HKEY(_i)         (0x0000CC00 + ((_i) * 4)) /* _i=0...12 */ /* Reset: CORER */
+#define AVFQF_HKEY_MAX_INDEX   12
+#define AVFQF_HKEY_KEY_0_SHIFT 0
+#define AVFQF_HKEY_KEY_0_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_0_SHIFT)
+#define AVFQF_HKEY_KEY_1_SHIFT 8
+#define AVFQF_HKEY_KEY_1_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_1_SHIFT)
+#define AVFQF_HKEY_KEY_2_SHIFT 16
+#define AVFQF_HKEY_KEY_2_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_2_SHIFT)
+#define AVFQF_HKEY_KEY_3_SHIFT 24
+#define AVFQF_HKEY_KEY_3_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_3_SHIFT)
+#define AVFQF_HLUT(_i)        (0x0000D000 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define AVFQF_HLUT_MAX_INDEX  15
+#define AVFQF_HLUT_LUT0_SHIFT 0
+#define AVFQF_HLUT_LUT0_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT0_SHIFT)
+#define AVFQF_HLUT_LUT1_SHIFT 8
+#define AVFQF_HLUT_LUT1_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT1_SHIFT)
+#define AVFQF_HLUT_LUT2_SHIFT 16
+#define AVFQF_HLUT_LUT2_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT2_SHIFT)
+#define AVFQF_HLUT_LUT3_SHIFT 24
+#define AVFQF_HLUT_LUT3_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT3_SHIFT)
+#define AVFQF_HREGION(_i)                  (0x0000D400 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */
+#define AVFQF_HREGION_MAX_INDEX            7
+#define AVFQF_HREGION_OVERRIDE_ENA_0_SHIFT 0
+#define AVFQF_HREGION_OVERRIDE_ENA_0_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_0_SHIFT)
+#define AVFQF_HREGION_REGION_0_SHIFT       1
+#define AVFQF_HREGION_REGION_0_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_0_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_1_SHIFT 4
+#define AVFQF_HREGION_OVERRIDE_ENA_1_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_1_SHIFT)
+#define AVFQF_HREGION_REGION_1_SHIFT       5
+#define AVFQF_HREGION_REGION_1_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_1_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_2_SHIFT 8
+#define AVFQF_HREGION_OVERRIDE_ENA_2_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_2_SHIFT)
+#define AVFQF_HREGION_REGION_2_SHIFT       9
+#define AVFQF_HREGION_REGION_2_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_2_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_3_SHIFT 12
+#define AVFQF_HREGION_OVERRIDE_ENA_3_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_3_SHIFT)
+#define AVFQF_HREGION_REGION_3_SHIFT       13
+#define AVFQF_HREGION_REGION_3_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_3_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_4_SHIFT 16
+#define AVFQF_HREGION_OVERRIDE_ENA_4_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_4_SHIFT)
+#define AVFQF_HREGION_REGION_4_SHIFT       17
+#define AVFQF_HREGION_REGION_4_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_4_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_5_SHIFT 20
+#define AVFQF_HREGION_OVERRIDE_ENA_5_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_5_SHIFT)
+#define AVFQF_HREGION_REGION_5_SHIFT       21
+#define AVFQF_HREGION_REGION_5_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_5_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_6_SHIFT 24
+#define AVFQF_HREGION_OVERRIDE_ENA_6_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_6_SHIFT)
+#define AVFQF_HREGION_REGION_6_SHIFT       25
+#define AVFQF_HREGION_REGION_6_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_6_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_7_SHIFT 28
+#define AVFQF_HREGION_OVERRIDE_ENA_7_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_7_SHIFT)
+#define AVFQF_HREGION_REGION_7_SHIFT       29
+#define AVFQF_HREGION_REGION_7_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_7_SHIFT)
+
+#define AVFINT_DYN_CTL01_WB_ON_ITR_SHIFT       30
+#define AVFINT_DYN_CTL01_WB_ON_ITR_MASK        AVF_MASK(0x1, AVFINT_DYN_CTL01_WB_ON_ITR_SHIFT)
+#define AVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT       30
+#define AVFINT_DYN_CTLN1_WB_ON_ITR_MASK        AVF_MASK(0x1, AVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT)
+#define AVFPE_AEQALLOC1               0x0000A400 /* Reset: VFR */
+#define AVFPE_AEQALLOC1_AECOUNT_SHIFT 0
+#define AVFPE_AEQALLOC1_AECOUNT_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_AEQALLOC1_AECOUNT_SHIFT)
+#define AVFPE_CCQPHIGH1                  0x00009800 /* Reset: VFR */
+#define AVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT 0
+#define AVFPE_CCQPHIGH1_PECCQPHIGH_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT)
+#define AVFPE_CCQPLOW1                 0x0000AC00 /* Reset: VFR */
+#define AVFPE_CCQPLOW1_PECCQPLOW_SHIFT 0
+#define AVFPE_CCQPLOW1_PECCQPLOW_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_CCQPLOW1_PECCQPLOW_SHIFT)
+#define AVFPE_CCQPSTATUS1                   0x0000B800 /* Reset: VFR */
+#define AVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT   0
+#define AVFPE_CCQPSTATUS1_CCQP_DONE_MASK    AVF_MASK(0x1, AVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT)
+#define AVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT 4
+#define AVFPE_CCQPSTATUS1_HMC_PROFILE_MASK  AVF_MASK(0x7, AVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT)
+#define AVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT 16
+#define AVFPE_CCQPSTATUS1_RDMA_EN_VFS_MASK  AVF_MASK(0x3F, AVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT)
+#define AVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT    31
+#define AVFPE_CCQPSTATUS1_CCQP_ERR_MASK     AVF_MASK(0x1, AVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT)
+#define AVFPE_CQACK1              0x0000B000 /* Reset: VFR */
+#define AVFPE_CQACK1_PECQID_SHIFT 0
+#define AVFPE_CQACK1_PECQID_MASK  AVF_MASK(0x1FFFF, AVFPE_CQACK1_PECQID_SHIFT)
+#define AVFPE_CQARM1              0x0000B400 /* Reset: VFR */
+#define AVFPE_CQARM1_PECQID_SHIFT 0
+#define AVFPE_CQARM1_PECQID_MASK  AVF_MASK(0x1FFFF, AVFPE_CQARM1_PECQID_SHIFT)
+#define AVFPE_CQPDB1              0x0000BC00 /* Reset: VFR */
+#define AVFPE_CQPDB1_WQHEAD_SHIFT 0
+#define AVFPE_CQPDB1_WQHEAD_MASK  AVF_MASK(0x7FF, AVFPE_CQPDB1_WQHEAD_SHIFT)
+#define AVFPE_CQPERRCODES1                      0x00009C00 /* Reset: VFR */
+#define AVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT 0
+#define AVFPE_CQPERRCODES1_CQP_MINOR_CODE_MASK  AVF_MASK(0xFFFF, AVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT)
+#define AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT 16
+#define AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_MASK  AVF_MASK(0xFFFF, AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT)
+#define AVFPE_CQPTAIL1                  0x0000A000 /* Reset: VFR */
+#define AVFPE_CQPTAIL1_WQTAIL_SHIFT     0
+#define AVFPE_CQPTAIL1_WQTAIL_MASK      AVF_MASK(0x7FF, AVFPE_CQPTAIL1_WQTAIL_SHIFT)
+#define AVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT 31
+#define AVFPE_CQPTAIL1_CQP_OP_ERR_MASK  AVF_MASK(0x1, AVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT)
+#define AVFPE_IPCONFIG01                        0x00008C00 /* Reset: VFR */
+#define AVFPE_IPCONFIG01_PEIPID_SHIFT           0
+#define AVFPE_IPCONFIG01_PEIPID_MASK            AVF_MASK(0xFFFF, AVFPE_IPCONFIG01_PEIPID_SHIFT)
+#define AVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT 16
+#define AVFPE_IPCONFIG01_USEENTIREIDRANGE_MASK  AVF_MASK(0x1, AVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT)
+#define AVFPE_MRTEIDXMASK1                       0x00009000 /* Reset: VFR */
+#define AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT 0
+#define AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_MASK  AVF_MASK(0x1F, AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT)
+#define AVFPE_RCVUNEXPECTEDERROR1                        0x00009400 /* Reset: VFR */
+#define AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT 0
+#define AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_MASK  AVF_MASK(0xFFFFFF, AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT)
+#define AVFPE_TCPNOWTIMER1               0x0000A800 /* Reset: VFR */
+#define AVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT 0
+#define AVFPE_TCPNOWTIMER1_TCP_NOW_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT)
+#define AVFPE_WQEALLOC1                      0x0000C000 /* Reset: VFR */
+#define AVFPE_WQEALLOC1_PEQPID_SHIFT         0
+#define AVFPE_WQEALLOC1_PEQPID_MASK          AVF_MASK(0x3FFFF, AVFPE_WQEALLOC1_PEQPID_SHIFT)
+#define AVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT 20
+#define AVFPE_WQEALLOC1_WQE_DESC_INDEX_MASK  AVF_MASK(0xFFF, AVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT)
+
+#endif /* _AVF_REGISTER_H_ */
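
Every field in this header comes as a _SHIFT/_MASK pair built with AVF_MASK(), so reading a field is a mask-and-shift on the raw register value. A small sketch (the helper name is hypothetical) that extracts the VF reset state from AVFGEN_RSTAT with the rd32() wrapper from avf_osdep.h:

/* Illustration only, not part of the patch. */
static u32 avf_example_vfr_state(struct avf_hw *hw)
{
	u32 rstat = rd32(hw, AVFGEN_RSTAT);

	return (rstat & AVFGEN_RSTAT_VFR_STATE_MASK) >>
	       AVFGEN_RSTAT_VFR_STATE_SHIFT;
}
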
diff --git a/drivers/net/avf/base/avf_status.h b/drivers/net/avf/base/avf_status.h
new file mode 100644
index 0000000..e8a673b
--- /dev/null
+++ b/drivers/net/avf/base/avf_status.h
@@ -0,0 +1,108 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_STATUS_H_
+#define _AVF_STATUS_H_
+
+/* Error Codes */
+enum avf_status_code {
+	AVF_SUCCESS				= 0,
+	AVF_ERR_NVM				= -1,
+	AVF_ERR_NVM_CHECKSUM			= -2,
+	AVF_ERR_PHY				= -3,
+	AVF_ERR_CONFIG				= -4,
+	AVF_ERR_PARAM				= -5,
+	AVF_ERR_MAC_TYPE			= -6,
+	AVF_ERR_UNKNOWN_PHY			= -7,
+	AVF_ERR_LINK_SETUP			= -8,
+	AVF_ERR_ADAPTER_STOPPED		= -9,
+	AVF_ERR_INVALID_MAC_ADDR		= -10,
+	AVF_ERR_DEVICE_NOT_SUPPORTED		= -11,
+	AVF_ERR_MASTER_REQUESTS_PENDING	= -12,
+	AVF_ERR_INVALID_LINK_SETTINGS		= -13,
+	AVF_ERR_AUTONEG_NOT_COMPLETE		= -14,
+	AVF_ERR_RESET_FAILED			= -15,
+	AVF_ERR_SWFW_SYNC			= -16,
+	AVF_ERR_NO_AVAILABLE_VSI		= -17,
+	AVF_ERR_NO_MEMORY			= -18,
+	AVF_ERR_BAD_PTR			= -19,
+	AVF_ERR_RING_FULL			= -20,
+	AVF_ERR_INVALID_PD_ID			= -21,
+	AVF_ERR_INVALID_QP_ID			= -22,
+	AVF_ERR_INVALID_CQ_ID			= -23,
+	AVF_ERR_INVALID_CEQ_ID			= -24,
+	AVF_ERR_INVALID_AEQ_ID			= -25,
+	AVF_ERR_INVALID_SIZE			= -26,
+	AVF_ERR_INVALID_ARP_INDEX		= -27,
+	AVF_ERR_INVALID_FPM_FUNC_ID		= -28,
+	AVF_ERR_QP_INVALID_MSG_SIZE		= -29,
+	AVF_ERR_QP_TOOMANY_WRS_POSTED		= -30,
+	AVF_ERR_INVALID_FRAG_COUNT		= -31,
+	AVF_ERR_QUEUE_EMPTY			= -32,
+	AVF_ERR_INVALID_ALIGNMENT		= -33,
+	AVF_ERR_FLUSHED_QUEUE			= -34,
+	AVF_ERR_INVALID_PUSH_PAGE_INDEX	= -35,
+	AVF_ERR_INVALID_IMM_DATA_SIZE		= -36,
+	AVF_ERR_TIMEOUT			= -37,
+	AVF_ERR_OPCODE_MISMATCH		= -38,
+	AVF_ERR_CQP_COMPL_ERROR		= -39,
+	AVF_ERR_INVALID_VF_ID			= -40,
+	AVF_ERR_INVALID_HMCFN_ID		= -41,
+	AVF_ERR_BACKING_PAGE_ERROR		= -42,
+	AVF_ERR_NO_PBLCHUNKS_AVAILABLE		= -43,
+	AVF_ERR_INVALID_PBLE_INDEX		= -44,
+	AVF_ERR_INVALID_SD_INDEX		= -45,
+	AVF_ERR_INVALID_PAGE_DESC_INDEX	= -46,
+	AVF_ERR_INVALID_SD_TYPE		= -47,
+	AVF_ERR_MEMCPY_FAILED			= -48,
+	AVF_ERR_INVALID_HMC_OBJ_INDEX		= -49,
+	AVF_ERR_INVALID_HMC_OBJ_COUNT		= -50,
+	AVF_ERR_INVALID_SRQ_ARM_LIMIT		= -51,
+	AVF_ERR_SRQ_ENABLED			= -52,
+	AVF_ERR_ADMIN_QUEUE_ERROR		= -53,
+	AVF_ERR_ADMIN_QUEUE_TIMEOUT		= -54,
+	AVF_ERR_BUF_TOO_SHORT			= -55,
+	AVF_ERR_ADMIN_QUEUE_FULL		= -56,
+	AVF_ERR_ADMIN_QUEUE_NO_WORK		= -57,
+	AVF_ERR_BAD_IWARP_CQE			= -58,
+	AVF_ERR_NVM_BLANK_MODE			= -59,
+	AVF_ERR_NOT_IMPLEMENTED		= -60,
+	AVF_ERR_PE_DOORBELL_NOT_ENABLED	= -61,
+	AVF_ERR_DIAG_TEST_FAILED		= -62,
+	AVF_ERR_NOT_READY			= -63,
+	AVF_NOT_SUPPORTED			= -64,
+	AVF_ERR_FIRMWARE_API_VERSION		= -65,
+	AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR	= -66,
+};
+
+#endif /* _AVF_STATUS_H_ */
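
All shared-code helpers report errors through enum avf_status_code, with AVF_SUCCESS as zero and every failure negative; avf_stat_str() from avf_prototype.h maps a code back to readable text. A hypothetical logging helper showing the usual check (avf_debug() comes from avf_osdep.h and AVF_DEBUG_AQ from avf_type.h below):

/* Illustration only, not part of the patch. */
static void avf_example_log_status(struct avf_hw *hw,
				   enum avf_status_code status)
{
	if (status != AVF_SUCCESS)
		/* emitted only when AVF_DEBUG_AQ is set in hw->debug_mask */
		avf_debug(hw, AVF_DEBUG_AQ, "request failed: %s\n",
			  avf_stat_str(hw, status));
}
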
diff --git a/drivers/net/avf/base/avf_type.h b/drivers/net/avf/base/avf_type.h
new file mode 100644
index 0000000..546c6d2
--- /dev/null
+++ b/drivers/net/avf/base/avf_type.h
@@ -0,0 +1,2024 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_TYPE_H_
+#define _AVF_TYPE_H_
+
+#include "avf_status.h"
+#include "avf_osdep.h"
+#include "avf_register.h"
+#include "avf_adminq.h"
+#include "avf_hmc.h"
+#include "avf_lan_hmc.h"
+#include "avf_devids.h"
+
+#define UNREFERENCED_XPARAMETER
+#define UNREFERENCED_1PARAMETER(_p) (_p);
+#define UNREFERENCED_2PARAMETER(_p, _q) (_p); (_q);
+#define UNREFERENCED_3PARAMETER(_p, _q, _r) (_p); (_q); (_r);
+#define UNREFERENCED_4PARAMETER(_p, _q, _r, _s) (_p); (_q); (_r); (_s);
+#define UNREFERENCED_5PARAMETER(_p, _q, _r, _s, _t) (_p); (_q); (_r); (_s); (_t);
+
+#ifndef LINUX_MACROS
+#ifndef BIT
+#define BIT(a) (1UL << (a))
+#endif /* BIT */
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif /* BIT_ULL */
+#endif /* LINUX_MACROS */
+
+#ifndef AVF_MASK
+/* AVF_MASK is a macro used on 32 bit registers */
+#define AVF_MASK(mask, shift) (mask << shift)
+#endif
+
+#define AVF_MAX_PF			16
+#define AVF_MAX_PF_VSI			64
+#define AVF_MAX_PF_QP			128
+#define AVF_MAX_VSI_QP			16
+#define AVF_MAX_VF_VSI			3
+#define AVF_MAX_CHAINED_RX_BUFFERS	5
+#define AVF_MAX_PF_UDP_OFFLOAD_PORTS	16
+
+/* something less than 1 minute */
+#define AVF_HEARTBEAT_TIMEOUT		(HZ * 50)
+
+/* Max default timeout in ms */
+#define AVF_MAX_NVM_TIMEOUT		18000
+
+/* Max timeout in ms for the phy to respond */
+#define AVF_MAX_PHY_TIMEOUT		500
+
+/* Check whether address is multicast. */
+#define AVF_IS_MULTICAST(address) (bool)(((u8 *)(address))[0] & ((u8)0x01))
+
+/* Check whether an address is broadcast. */
+#define AVF_IS_BROADCAST(address)	\
+	((((u8 *)(address))[0] == ((u8)0xff)) && \
+	(((u8 *)(address))[1] == ((u8)0xff)))
+
+/* Switch from ms to the 1usec global time (this is the GTIME resolution) */
+#define AVF_MS_TO_GTIME(time)		((time) * 1000)
+
+/* forward declaration */
+struct avf_hw;
+typedef void (*AVF_ADMINQ_CALLBACK)(struct avf_hw *, struct avf_aq_desc *);
+
+#ifndef ETH_ALEN
+#define ETH_ALEN	6
+#endif
+/* Data type manipulation macros. */
+#define AVF_HI_DWORD(x)	((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define AVF_LO_DWORD(x)	((u32)((x) & 0xFFFFFFFF))
+
+#define AVF_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
+#define AVF_LO_WORD(x)		((u16)((x) & 0xFFFF))
+
+#define AVF_HI_BYTE(x)		((u8)(((x) >> 8) & 0xFF))
+#define AVF_LO_BYTE(x)		((u8)((x) & 0xFF))
+
+/* Number of Transmit Descriptors must be a multiple of 8. */
+#define AVF_REQ_TX_DESCRIPTOR_MULTIPLE	8
+/* Number of Receive Descriptors must be a multiple of 32 if
+ * the number of descriptors is greater than 32.
+ */
+#define AVF_REQ_RX_DESCRIPTOR_MULTIPLE	32
+
+#define AVF_DESC_UNUSED(R)	\
+	((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
+	(R)->next_to_clean - (R)->next_to_use - 1)
+
+/* bitfields for Tx queue mapping in QTX_CTL */
+#define AVF_QTX_CTL_VF_QUEUE	0x0
+#define AVF_QTX_CTL_VM_QUEUE	0x1
+#define AVF_QTX_CTL_PF_QUEUE	0x2
+
+/* debug masks - set these bits in hw->debug_mask to control output */
+enum avf_debug_mask {
+	AVF_DEBUG_INIT			= 0x00000001,
+	AVF_DEBUG_RELEASE		= 0x00000002,
+
+	AVF_DEBUG_LINK			= 0x00000010,
+	AVF_DEBUG_PHY			= 0x00000020,
+	AVF_DEBUG_HMC			= 0x00000040,
+	AVF_DEBUG_NVM			= 0x00000080,
+	AVF_DEBUG_LAN			= 0x00000100,
+	AVF_DEBUG_FLOW			= 0x00000200,
+	AVF_DEBUG_DCB			= 0x00000400,
+	AVF_DEBUG_DIAG			= 0x00000800,
+	AVF_DEBUG_FD			= 0x00001000,
+	AVF_DEBUG_PACKAGE		= 0x00002000,
+
+	AVF_DEBUG_AQ_MESSAGE		= 0x01000000,
+	AVF_DEBUG_AQ_DESCRIPTOR	= 0x02000000,
+	AVF_DEBUG_AQ_DESC_BUFFER	= 0x04000000,
+	AVF_DEBUG_AQ_COMMAND		= 0x06000000,
+	AVF_DEBUG_AQ			= 0x0F000000,
+
+	AVF_DEBUG_USER			= 0xF0000000,
+
+	AVF_DEBUG_ALL			= 0xFFFFFFFF
+};
+
+/* PCI Bus Info */
+#define AVF_PCI_LINK_STATUS		0xB2
+#define AVF_PCI_LINK_WIDTH		0x3F0
+#define AVF_PCI_LINK_WIDTH_1		0x10
+#define AVF_PCI_LINK_WIDTH_2		0x20
+#define AVF_PCI_LINK_WIDTH_4		0x40
+#define AVF_PCI_LINK_WIDTH_8		0x80
+#define AVF_PCI_LINK_SPEED		0xF
+#define AVF_PCI_LINK_SPEED_2500	0x1
+#define AVF_PCI_LINK_SPEED_5000	0x2
+#define AVF_PCI_LINK_SPEED_8000	0x3
+
+#define AVF_MDIO_CLAUSE22_STCODE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_STCODE_SHIFT)
+#define AVF_MDIO_CLAUSE22_OPCODE_WRITE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE22_OPCODE_READ_MASK	AVF_MASK(2, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+
+#define AVF_MDIO_CLAUSE45_STCODE_MASK	AVF_MASK(0, \
+						  AVF_GLGEN_MSCA_STCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_ADDRESS_MASK	AVF_MASK(0, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_WRITE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_READ_INC_ADDR_MASK	AVF_MASK(2, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_READ_MASK	AVF_MASK(3, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+
+#define AVF_PHY_COM_REG_PAGE			0x1E
+#define AVF_PHY_LED_LINK_MODE_MASK		0xF0
+#define AVF_PHY_LED_MANUAL_ON			0x100
+#define AVF_PHY_LED_PROV_REG_1			0xC430
+#define AVF_PHY_LED_MODE_MASK			0xFFFF
+#define AVF_PHY_LED_MODE_ORIG			0x80000000
+
+/* Memory types */
+enum avf_memset_type {
+	AVF_NONDMA_MEM = 0,
+	AVF_DMA_MEM
+};
+
+/* Memcpy types */
+enum avf_memcpy_type {
+	AVF_NONDMA_TO_NONDMA = 0,
+	AVF_NONDMA_TO_DMA,
+	AVF_DMA_TO_DMA,
+	AVF_DMA_TO_NONDMA
+};
+
+/* These are structs for managing the hardware information and the operations.
+ * The structures of function pointers are filled out at init time when we
+ * know for sure exactly which hardware we're working with.  This gives us the
+ * flexibility of using the same main driver code but adapting to slightly
+ * different hardware needs as new parts are developed.  For this architecture,
+ * the Firmware and AdminQ are intended to insulate the driver from most of the
+ * future changes, but these structures will also do part of the job.
+ */
+enum avf_mac_type {
+	AVF_MAC_UNKNOWN = 0,
+	AVF_MAC_XL710,
+	AVF_MAC_VF,
+	AVF_MAC_X722,
+	AVF_MAC_X722_VF,
+	AVF_MAC_GENERIC,
+};
+
+enum avf_media_type {
+	AVF_MEDIA_TYPE_UNKNOWN = 0,
+	AVF_MEDIA_TYPE_FIBER,
+	AVF_MEDIA_TYPE_BASET,
+	AVF_MEDIA_TYPE_BACKPLANE,
+	AVF_MEDIA_TYPE_CX4,
+	AVF_MEDIA_TYPE_DA,
+	AVF_MEDIA_TYPE_VIRTUAL
+};
+
+enum avf_fc_mode {
+	AVF_FC_NONE = 0,
+	AVF_FC_RX_PAUSE,
+	AVF_FC_TX_PAUSE,
+	AVF_FC_FULL,
+	AVF_FC_PFC,
+	AVF_FC_DEFAULT
+};
+
+enum avf_set_fc_aq_failures {
+	AVF_SET_FC_AQ_FAIL_NONE = 0,
+	AVF_SET_FC_AQ_FAIL_GET = 1,
+	AVF_SET_FC_AQ_FAIL_SET = 2,
+	AVF_SET_FC_AQ_FAIL_UPDATE = 4,
+	AVF_SET_FC_AQ_FAIL_SET_UPDATE = 6
+};
+
+enum avf_vsi_type {
+	AVF_VSI_MAIN	= 0,
+	AVF_VSI_VMDQ1	= 1,
+	AVF_VSI_VMDQ2	= 2,
+	AVF_VSI_CTRL	= 3,
+	AVF_VSI_FCOE	= 4,
+	AVF_VSI_MIRROR	= 5,
+	AVF_VSI_SRIOV	= 6,
+	AVF_VSI_FDIR	= 7,
+	AVF_VSI_TYPE_UNKNOWN
+};
+
+enum avf_queue_type {
+	AVF_QUEUE_TYPE_RX = 0,
+	AVF_QUEUE_TYPE_TX,
+	AVF_QUEUE_TYPE_PE_CEQ,
+	AVF_QUEUE_TYPE_UNKNOWN
+};
+
+struct avf_link_status {
+	enum avf_aq_phy_type phy_type;
+	enum avf_aq_link_speed link_speed;
+	u8 link_info;
+	u8 an_info;
+	u8 req_fec_info;
+	u8 fec_info;
+	u8 ext_info;
+	u8 loopback;
+	/* is Link Status Event notification to SW enabled */
+	bool lse_enable;
+	u16 max_frame_size;
+	bool crc_enable;
+	u8 pacing;
+	u8 requested_speeds;
+	u8 module_type[3];
+	/* 1st byte: module identifier */
+#define AVF_MODULE_TYPE_SFP		0x03
+#define AVF_MODULE_TYPE_QSFP		0x0D
+	/* 2nd byte: ethernet compliance codes for 10/40G */
+#define AVF_MODULE_TYPE_40G_ACTIVE	0x01
+#define AVF_MODULE_TYPE_40G_LR4	0x02
+#define AVF_MODULE_TYPE_40G_SR4	0x04
+#define AVF_MODULE_TYPE_40G_CR4	0x08
+#define AVF_MODULE_TYPE_10G_BASE_SR	0x10
+#define AVF_MODULE_TYPE_10G_BASE_LR	0x20
+#define AVF_MODULE_TYPE_10G_BASE_LRM	0x40
+#define AVF_MODULE_TYPE_10G_BASE_ER	0x80
+	/* 3rd byte: ethernet compliance codes for 1G */
+#define AVF_MODULE_TYPE_1000BASE_SX	0x01
+#define AVF_MODULE_TYPE_1000BASE_LX	0x02
+#define AVF_MODULE_TYPE_1000BASE_CX	0x04
+#define AVF_MODULE_TYPE_1000BASE_T	0x08
+};
+
+struct avf_phy_info {
+	struct avf_link_status link_info;
+	struct avf_link_status link_info_old;
+	bool get_link_info;
+	enum avf_media_type media_type;
+	/* all the phy types the NVM is capable of */
+	u64 phy_types;
+};
+
+#define AVF_CAP_PHY_TYPE_SGMII BIT_ULL(AVF_PHY_TYPE_SGMII)
+#define AVF_CAP_PHY_TYPE_1000BASE_KX BIT_ULL(AVF_PHY_TYPE_1000BASE_KX)
+#define AVF_CAP_PHY_TYPE_10GBASE_KX4 BIT_ULL(AVF_PHY_TYPE_10GBASE_KX4)
+#define AVF_CAP_PHY_TYPE_10GBASE_KR BIT_ULL(AVF_PHY_TYPE_10GBASE_KR)
+#define AVF_CAP_PHY_TYPE_40GBASE_KR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_KR4)
+#define AVF_CAP_PHY_TYPE_XAUI BIT_ULL(AVF_PHY_TYPE_XAUI)
+#define AVF_CAP_PHY_TYPE_XFI BIT_ULL(AVF_PHY_TYPE_XFI)
+#define AVF_CAP_PHY_TYPE_SFI BIT_ULL(AVF_PHY_TYPE_SFI)
+#define AVF_CAP_PHY_TYPE_XLAUI BIT_ULL(AVF_PHY_TYPE_XLAUI)
+#define AVF_CAP_PHY_TYPE_XLPPI BIT_ULL(AVF_PHY_TYPE_XLPPI)
+#define AVF_CAP_PHY_TYPE_40GBASE_CR4_CU BIT_ULL(AVF_PHY_TYPE_40GBASE_CR4_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_CR1_CU BIT_ULL(AVF_PHY_TYPE_10GBASE_CR1_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_AOC BIT_ULL(AVF_PHY_TYPE_10GBASE_AOC)
+#define AVF_CAP_PHY_TYPE_40GBASE_AOC BIT_ULL(AVF_PHY_TYPE_40GBASE_AOC)
+#define AVF_CAP_PHY_TYPE_100BASE_TX BIT_ULL(AVF_PHY_TYPE_100BASE_TX)
+#define AVF_CAP_PHY_TYPE_1000BASE_T BIT_ULL(AVF_PHY_TYPE_1000BASE_T)
+#define AVF_CAP_PHY_TYPE_10GBASE_T BIT_ULL(AVF_PHY_TYPE_10GBASE_T)
+#define AVF_CAP_PHY_TYPE_10GBASE_SR BIT_ULL(AVF_PHY_TYPE_10GBASE_SR)
+#define AVF_CAP_PHY_TYPE_10GBASE_LR BIT_ULL(AVF_PHY_TYPE_10GBASE_LR)
+#define AVF_CAP_PHY_TYPE_10GBASE_SFPP_CU BIT_ULL(AVF_PHY_TYPE_10GBASE_SFPP_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_CR1 BIT_ULL(AVF_PHY_TYPE_10GBASE_CR1)
+#define AVF_CAP_PHY_TYPE_40GBASE_CR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_CR4)
+#define AVF_CAP_PHY_TYPE_40GBASE_SR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_SR4)
+#define AVF_CAP_PHY_TYPE_40GBASE_LR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_LR4)
+#define AVF_CAP_PHY_TYPE_1000BASE_SX BIT_ULL(AVF_PHY_TYPE_1000BASE_SX)
+#define AVF_CAP_PHY_TYPE_1000BASE_LX BIT_ULL(AVF_PHY_TYPE_1000BASE_LX)
+#define AVF_CAP_PHY_TYPE_1000BASE_T_OPTICAL \
+				BIT_ULL(AVF_PHY_TYPE_1000BASE_T_OPTICAL)
+#define AVF_CAP_PHY_TYPE_20GBASE_KR2 BIT_ULL(AVF_PHY_TYPE_20GBASE_KR2)
+/*
+ * Defining the macro AVF_PHY_TYPE_OFFSET to implement a bit shift for some
+ * PHY types. There is an unused bit (31) in the AVF_CAP_PHY_TYPE_* bit
+ * fields but no corresponding gap in the avf_aq_phy_type enumeration. So,
+ * a shift is needed to adjust for this with values larger than 31. The
+ * only affected values are AVF_PHY_TYPE_25GBASE_*.
+ */
+#define AVF_PHY_TYPE_OFFSET 1
+#define AVF_CAP_PHY_TYPE_25GBASE_KR BIT_ULL(AVF_PHY_TYPE_25GBASE_KR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_CR BIT_ULL(AVF_PHY_TYPE_25GBASE_CR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_SR BIT_ULL(AVF_PHY_TYPE_25GBASE_SR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_LR BIT_ULL(AVF_PHY_TYPE_25GBASE_LR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_AOC BIT_ULL(AVF_PHY_TYPE_25GBASE_AOC + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_ACC BIT_ULL(AVF_PHY_TYPE_25GBASE_ACC + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_HW_CAP_MAX_GPIO			30
+#define AVF_HW_CAP_MDIO_PORT_MODE_MDIO		0
+#define AVF_HW_CAP_MDIO_PORT_MODE_I2C		1
+
+enum avf_acpi_programming_method {
+	AVF_ACPI_PROGRAMMING_METHOD_HW_FVL = 0,
+	AVF_ACPI_PROGRAMMING_METHOD_AQC_FPK = 1
+};
+
+#define AVF_WOL_SUPPORT_MASK			0x1
+#define AVF_ACPI_PROGRAMMING_METHOD_MASK	0x2
+#define AVF_PROXY_SUPPORT_MASK			0x4
+
+/* Capabilities of a PF or a VF or the whole device */
+struct avf_hw_capabilities {
+	u32  switch_mode;
+#define AVF_NVM_IMAGE_TYPE_EVB		0x0
+#define AVF_NVM_IMAGE_TYPE_CLOUD	0x2
+#define AVF_NVM_IMAGE_TYPE_UDP_CLOUD	0x3
+
+	u32  management_mode;
+	u32  mng_protocols_over_mctp;
+#define AVF_MNG_PROTOCOL_PLDM		0x2
+#define AVF_MNG_PROTOCOL_OEM_COMMANDS	0x4
+#define AVF_MNG_PROTOCOL_NCSI		0x8
+	u32  npar_enable;
+	u32  os2bmc;
+	u32  valid_functions;
+	bool sr_iov_1_1;
+	bool vmdq;
+	bool evb_802_1_qbg; /* Edge Virtual Bridging */
+	bool evb_802_1_qbh; /* Bridge Port Extension */
+	bool dcb;
+	bool fcoe;
+	bool iscsi; /* Indicates iSCSI enabled */
+	bool flex10_enable;
+	bool flex10_capable;
+	u32  flex10_mode;
+#define AVF_FLEX10_MODE_UNKNOWN	0x0
+#define AVF_FLEX10_MODE_DCC		0x1
+#define AVF_FLEX10_MODE_DCI		0x2
+
+	u32 flex10_status;
+#define AVF_FLEX10_STATUS_DCC_ERROR	0x1
+#define AVF_FLEX10_STATUS_VC_MODE	0x2
+
+	bool sec_rev_disabled;
+	bool update_disabled;
+#define AVF_NVM_MGMT_SEC_REV_DISABLED	0x1
+#define AVF_NVM_MGMT_UPDATE_DISABLED	0x2
+
+	bool mgmt_cem;
+	bool ieee_1588;
+	bool iwarp;
+	bool fd;
+	u32 fd_filters_guaranteed;
+	u32 fd_filters_best_effort;
+	bool rss;
+	u32 rss_table_size;
+	u32 rss_table_entry_width;
+	bool led[AVF_HW_CAP_MAX_GPIO];
+	bool sdp[AVF_HW_CAP_MAX_GPIO];
+	u32 nvm_image_type;
+	u32 num_flow_director_filters;
+	u32 num_vfs;
+	u32 vf_base_id;
+	u32 num_vsis;
+	u32 num_rx_qp;
+	u32 num_tx_qp;
+	u32 base_queue;
+	u32 num_msix_vectors;
+	u32 num_msix_vectors_vf;
+	u32 led_pin_num;
+	u32 sdp_pin_num;
+	u32 mdio_port_num;
+	u32 mdio_port_mode;
+	u8 rx_buf_chain_len;
+	u32 enabled_tcmap;
+	u32 maxtc;
+	u64 wr_csr_prot;
+	bool apm_wol_support;
+	enum avf_acpi_programming_method acpi_prog_method;
+	bool proxy_support;
+};
+
+struct avf_mac_info {
+	enum avf_mac_type type;
+	u8 addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+	u8 san_addr[ETH_ALEN];
+	u8 port_addr[ETH_ALEN];
+	u16 max_fcoeq;
+};
+
+enum avf_aq_resources_ids {
+	AVF_NVM_RESOURCE_ID = 1
+};
+
+enum avf_aq_resource_access_type {
+	AVF_RESOURCE_READ = 1,
+	AVF_RESOURCE_WRITE
+};
+
+struct avf_nvm_info {
+	u64 hw_semaphore_timeout; /* usec global time (GTIME resolution) */
+	u32 timeout;              /* [ms] */
+	u16 sr_size;              /* Shadow RAM size in words */
+	bool blank_nvm_mode;      /* is NVM empty (no FW present) */
+	u16 version;              /* NVM package version */
+	u32 eetrack;              /* NVM data version */
+	u32 oem_ver;              /* OEM version info */
+};
+
+/* definitions used in NVM update support */
+
+enum avf_nvmupd_cmd {
+	AVF_NVMUPD_INVALID,
+	AVF_NVMUPD_READ_CON,
+	AVF_NVMUPD_READ_SNT,
+	AVF_NVMUPD_READ_LCB,
+	AVF_NVMUPD_READ_SA,
+	AVF_NVMUPD_WRITE_ERA,
+	AVF_NVMUPD_WRITE_CON,
+	AVF_NVMUPD_WRITE_SNT,
+	AVF_NVMUPD_WRITE_LCB,
+	AVF_NVMUPD_WRITE_SA,
+	AVF_NVMUPD_CSUM_CON,
+	AVF_NVMUPD_CSUM_SA,
+	AVF_NVMUPD_CSUM_LCB,
+	AVF_NVMUPD_STATUS,
+	AVF_NVMUPD_EXEC_AQ,
+	AVF_NVMUPD_GET_AQ_RESULT,
+	AVF_NVMUPD_GET_AQ_EVENT,
+};
+
+enum avf_nvmupd_state {
+	AVF_NVMUPD_STATE_INIT,
+	AVF_NVMUPD_STATE_READING,
+	AVF_NVMUPD_STATE_WRITING,
+	AVF_NVMUPD_STATE_INIT_WAIT,
+	AVF_NVMUPD_STATE_WRITE_WAIT,
+	AVF_NVMUPD_STATE_ERROR
+};
+
+/* nvm_access definition and its masks/shifts need to be accessible to
+ * application, core driver, and shared code.  Where is the right file?
+ */
+#define AVF_NVM_READ	0xB
+#define AVF_NVM_WRITE	0xC
+
+#define AVF_NVM_MOD_PNT_MASK 0xFF
+
+#define AVF_NVM_TRANS_SHIFT			8
+#define AVF_NVM_TRANS_MASK			(0xf << AVF_NVM_TRANS_SHIFT)
+#define AVF_NVM_PRESERVATION_FLAGS_SHIFT	12
+#define AVF_NVM_PRESERVATION_FLAGS_MASK \
+				(0x3 << AVF_NVM_PRESERVATION_FLAGS_SHIFT)
+#define AVF_NVM_PRESERVATION_FLAGS_SELECTED	0x01
+#define AVF_NVM_PRESERVATION_FLAGS_ALL		0x02
+#define AVF_NVM_CON				0x0
+#define AVF_NVM_SNT				0x1
+#define AVF_NVM_LCB				0x2
+#define AVF_NVM_SA				(AVF_NVM_SNT | AVF_NVM_LCB)
+#define AVF_NVM_ERA				0x4
+#define AVF_NVM_CSUM				0x8
+#define AVF_NVM_AQE				0xe
+#define AVF_NVM_EXEC				0xf
+
+#define AVF_NVM_ADAPT_SHIFT	16
+#define AVF_NVM_ADAPT_MASK	(0xffffULL << AVF_NVM_ADAPT_SHIFT)
+
+#define AVF_NVMUPD_MAX_DATA	4096
+#define AVF_NVMUPD_IFACE_TIMEOUT 2 /* seconds */
+
+struct avf_nvm_access {
+	u32 command;
+	u32 config;
+	u32 offset;	/* in bytes */
+	u32 data_size;	/* in bytes */
+	u8 data[1];
+};
+
+/* (Q)SFP module access definitions */
+#define AVF_I2C_EEPROM_DEV_ADDR	0xA0
+#define AVF_I2C_EEPROM_DEV_ADDR2	0xA2
+#define AVF_MODULE_TYPE_ADDR		0x00
+#define AVF_MODULE_REVISION_ADDR	0x01
+#define AVF_MODULE_SFF_8472_COMP	0x5E
+#define AVF_MODULE_SFF_8472_SWAP	0x5C
+#define AVF_MODULE_SFF_ADDR_MODE	0x04
+#define AVF_MODULE_SFF_DIAG_CAPAB	0x40
+#define AVF_MODULE_TYPE_QSFP_PLUS	0x0D
+#define AVF_MODULE_TYPE_QSFP28		0x11
+#define AVF_MODULE_QSFP_MAX_LEN	640
+
+/* PCI bus types */
+enum avf_bus_type {
+	avf_bus_type_unknown = 0,
+	avf_bus_type_pci,
+	avf_bus_type_pcix,
+	avf_bus_type_pci_express,
+	avf_bus_type_reserved
+};
+
+/* PCI bus speeds */
+enum avf_bus_speed {
+	avf_bus_speed_unknown	= 0,
+	avf_bus_speed_33	= 33,
+	avf_bus_speed_66	= 66,
+	avf_bus_speed_100	= 100,
+	avf_bus_speed_120	= 120,
+	avf_bus_speed_133	= 133,
+	avf_bus_speed_2500	= 2500,
+	avf_bus_speed_5000	= 5000,
+	avf_bus_speed_8000	= 8000,
+	avf_bus_speed_reserved
+};
+
+/* PCI bus widths */
+enum avf_bus_width {
+	avf_bus_width_unknown	= 0,
+	avf_bus_width_pcie_x1	= 1,
+	avf_bus_width_pcie_x2	= 2,
+	avf_bus_width_pcie_x4	= 4,
+	avf_bus_width_pcie_x8	= 8,
+	avf_bus_width_32	= 32,
+	avf_bus_width_64	= 64,
+	avf_bus_width_reserved
+};
+
+/* Bus parameters */
+struct avf_bus_info {
+	enum avf_bus_speed speed;
+	enum avf_bus_width width;
+	enum avf_bus_type type;
+
+	u16 func;
+	u16 device;
+	u16 lan_id;
+	u16 bus_id;
+};
+
+/* Flow control (FC) parameters */
+struct avf_fc_info {
+	enum avf_fc_mode current_mode; /* FC mode in effect */
+	enum avf_fc_mode requested_mode; /* FC mode requested by caller */
+};
+
+#define AVF_MAX_TRAFFIC_CLASS		8
+#define AVF_MAX_USER_PRIORITY		8
+#define AVF_DCBX_MAX_APPS		32
+#define AVF_LLDPDU_SIZE		1500
+#define AVF_TLV_STATUS_OPER		0x1
+#define AVF_TLV_STATUS_SYNC		0x2
+#define AVF_TLV_STATUS_ERR		0x4
+#define AVF_CEE_OPER_MAX_APPS		3
+#define AVF_APP_PROTOID_FCOE		0x8906
+#define AVF_APP_PROTOID_ISCSI		0x0cbc
+#define AVF_APP_PROTOID_FIP		0x8914
+#define AVF_APP_SEL_ETHTYPE		0x1
+#define AVF_APP_SEL_TCPIP		0x2
+#define AVF_CEE_APP_SEL_ETHTYPE	0x0
+#define AVF_CEE_APP_SEL_TCPIP		0x1
+
+/* CEE or IEEE 802.1Qaz ETS Configuration data */
+struct avf_dcb_ets_config {
+	u8 willing;
+	u8 cbs;
+	u8 maxtcs;
+	u8 prioritytable[AVF_MAX_TRAFFIC_CLASS];
+	u8 tcbwtable[AVF_MAX_TRAFFIC_CLASS];
+	u8 tsatable[AVF_MAX_TRAFFIC_CLASS];
+};
+
+/* CEE or IEEE 802.1Qaz PFC Configuration data */
+struct avf_dcb_pfc_config {
+	u8 willing;
+	u8 mbc;
+	u8 pfccap;
+	u8 pfcenable;
+};
+
+/* CEE or IEEE 802.1Qaz Application Priority data */
+struct avf_dcb_app_priority_table {
+	u8  priority;
+	u8  selector;
+	u16 protocolid;
+};
+
+struct avf_dcbx_config {
+	u8  dcbx_mode;
+#define AVF_DCBX_MODE_CEE	0x1
+#define AVF_DCBX_MODE_IEEE	0x2
+	u8  app_mode;
+#define AVF_DCBX_APPS_NON_WILLING	0x1
+	u32 numapps;
+	u32 tlv_status; /* CEE mode TLV status */
+	struct avf_dcb_ets_config etscfg;
+	struct avf_dcb_ets_config etsrec;
+	struct avf_dcb_pfc_config pfc;
+	struct avf_dcb_app_priority_table app[AVF_DCBX_MAX_APPS];
+};
+
+/* Port hardware description */
+struct avf_hw {
+	u8 *hw_addr;
+	void *back;
+
+	/* subsystem structs */
+	struct avf_phy_info phy;
+	struct avf_mac_info mac;
+	struct avf_bus_info bus;
+	struct avf_nvm_info nvm;
+	struct avf_fc_info fc;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+	u8 port;
+	bool adapter_stopped;
+
+	/* capabilities for entire device and PCI func */
+	struct avf_hw_capabilities dev_caps;
+	struct avf_hw_capabilities func_caps;
+
+	/* Flow Director shared filter space */
+	u16 fdir_shared_filter_count;
+
+	/* device profile info */
+	u8  pf_id;
+	u16 main_vsi_seid;
+
+	/* for multi-function MACs */
+	u16 partition_id;
+	u16 num_partitions;
+	u16 num_ports;
+
+	/* Closest numa node to the device */
+	u16 numa_node;
+
+	/* Admin Queue info */
+	struct avf_adminq_info aq;
+
+	/* state of nvm update process */
+	enum avf_nvmupd_state nvmupd_state;
+	struct avf_aq_desc nvm_wb_desc;
+	struct avf_aq_desc nvm_aq_event_desc;
+	struct avf_virt_mem nvm_buff;
+	bool nvm_release_on_done;
+	u16 nvm_wait_opcode;
+
+	/* HMC info */
+	struct avf_hmc_info hmc; /* HMC info struct */
+
+	/* LLDP/DCBX Status */
+	u16 dcbx_status;
+
+	/* DCBX info */
+	struct avf_dcbx_config local_dcbx_config; /* Oper/Local Cfg */
+	struct avf_dcbx_config remote_dcbx_config; /* Peer Cfg */
+	struct avf_dcbx_config desired_dcbx_config; /* CEE Desired Cfg */
+
+	/* WoL and proxy support */
+	u16 num_wol_proxy_filters;
+	u16 wol_proxy_vsi_seid;
+
+#define AVF_HW_FLAG_AQ_SRCTL_ACCESS_ENABLE BIT_ULL(0)
+#define AVF_HW_FLAG_802_1AD_CAPABLE        BIT_ULL(1)
+#define AVF_HW_FLAG_AQ_PHY_ACCESS_CAPABLE  BIT_ULL(2)
+#define AVF_HW_FLAG_NVM_READ_REQUIRES_LOCK BIT_ULL(3)
+	u64 flags;
+
+	/* Used in set switch config AQ command */
+	u16 switch_tag;
+	u16 first_tag;
+	u16 second_tag;
+
+	/* debug mask */
+	u32 debug_mask;
+	char err_str[16];
+};
+
+STATIC INLINE bool avf_is_vf(struct avf_hw *hw)
+{
+	return (hw->mac.type == AVF_MAC_VF ||
+		hw->mac.type == AVF_MAC_X722_VF);
+}
+
+struct avf_driver_version {
+	u8 major_version;
+	u8 minor_version;
+	u8 build_version;
+	u8 subbuild_version;
+	u8 driver_string[32];
+};
+
+/* RX Descriptors */
+union avf_16byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+	} read;
+	struct {
+		struct {
+			struct {
+				union {
+					__le16 mirroring_status;
+					__le16 fcoe_ctx_id;
+				} mirr_fcoe;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow director filter id */
+				__le32 fcoe_param; /* FCoE DDP Context id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* ext status/error/pktype/length */
+			__le64 status_error_len;
+		} qword1;
+	} wb;  /* writeback */
+};
+
+union avf_32byte_rx_desc {
+	struct {
+		__le64  pkt_addr; /* Packet buffer address */
+		__le64  hdr_addr; /* Header buffer address */
+			/* bit 0 of hdr_buffer_addr is DD bit */
+		__le64  rsvd1;
+		__le64  rsvd2;
+	} read;
+	struct {
+		struct {
+			struct {
+				union {
+					__le16 mirroring_status;
+					__le16 fcoe_ctx_id;
+				} mirr_fcoe;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fcoe_param; /* FCoE DDP Context id */
+				/* Flow director filter id in case of
+				 * Programming status desc WB
+				 */
+				__le32 fd_id;
+			} hi_dword;
+		} qword0;
+		struct {
+			/* status/error/pktype/length */
+			__le64 status_error_len;
+		} qword1;
+		struct {
+			__le16 ext_status; /* extended status */
+			__le16 rsvd;
+			__le16 l2tag2_1;
+			__le16 l2tag2_2;
+		} qword2;
+		struct {
+			union {
+				__le32 flex_bytes_lo;
+				__le32 pe_status;
+			} lo_dword;
+			union {
+				__le32 flex_bytes_hi;
+				__le32 fd_id;
+			} hi_dword;
+		} qword3;
+	} wb;  /* writeback */
+};
+
+#define AVF_RXD_QW0_MIRROR_STATUS_SHIFT	8
+#define AVF_RXD_QW0_MIRROR_STATUS_MASK	(0x3FUL << \
+					 AVF_RXD_QW0_MIRROR_STATUS_SHIFT)
+#define AVF_RXD_QW0_FCOEINDX_SHIFT	0
+#define AVF_RXD_QW0_FCOEINDX_MASK	(0xFFFUL << \
+					 AVF_RXD_QW0_FCOEINDX_SHIFT)
+
+enum avf_rx_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_STATUS_DD_SHIFT		= 0,
+	AVF_RX_DESC_STATUS_EOF_SHIFT		= 1,
+	AVF_RX_DESC_STATUS_L2TAG1P_SHIFT	= 2,
+	AVF_RX_DESC_STATUS_L3L4P_SHIFT		= 3,
+	AVF_RX_DESC_STATUS_CRCP_SHIFT		= 4,
+	AVF_RX_DESC_STATUS_TSYNINDX_SHIFT	= 5, /* 2 BITS */
+	AVF_RX_DESC_STATUS_TSYNVALID_SHIFT	= 7,
+	AVF_RX_DESC_STATUS_EXT_UDP_0_SHIFT	= 8,
+
+	AVF_RX_DESC_STATUS_UMBCAST_SHIFT	= 9, /* 2 BITS */
+	AVF_RX_DESC_STATUS_FLM_SHIFT		= 11,
+	AVF_RX_DESC_STATUS_FLTSTAT_SHIFT	= 12, /* 2 BITS */
+	AVF_RX_DESC_STATUS_LPBK_SHIFT		= 14,
+	AVF_RX_DESC_STATUS_IPV6EXADD_SHIFT	= 15,
+	AVF_RX_DESC_STATUS_RESERVED2_SHIFT	= 16, /* 2 BITS */
+	AVF_RX_DESC_STATUS_INT_UDP_0_SHIFT	= 18,
+	AVF_RX_DESC_STATUS_LAST /* this entry must be last!!! */
+};
+
+#define AVF_RXD_QW1_STATUS_SHIFT	0
+#define AVF_RXD_QW1_STATUS_MASK	((BIT(AVF_RX_DESC_STATUS_LAST) - 1) << \
+					 AVF_RXD_QW1_STATUS_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_TSYNINDX_SHIFT   AVF_RX_DESC_STATUS_TSYNINDX_SHIFT
+#define AVF_RXD_QW1_STATUS_TSYNINDX_MASK	(0x3UL << \
+					     AVF_RXD_QW1_STATUS_TSYNINDX_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_TSYNVALID_SHIFT  AVF_RX_DESC_STATUS_TSYNVALID_SHIFT
+#define AVF_RXD_QW1_STATUS_TSYNVALID_MASK   BIT_ULL(AVF_RXD_QW1_STATUS_TSYNVALID_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_UMBCAST_SHIFT	AVF_RX_DESC_STATUS_UMBCAST_SHIFT
+#define AVF_RXD_QW1_STATUS_UMBCAST_MASK	(0x3UL << \
+					 AVF_RXD_QW1_STATUS_UMBCAST_SHIFT)
+
+enum avf_rx_desc_fltstat_values {
+	AVF_RX_DESC_FLTSTAT_NO_DATA	= 0,
+	AVF_RX_DESC_FLTSTAT_RSV_FD_ID	= 1, /* 16byte desc? FD_ID : RSV */
+	AVF_RX_DESC_FLTSTAT_RSV	= 2,
+	AVF_RX_DESC_FLTSTAT_RSS_HASH	= 3,
+};
+
+#define AVF_RXD_PACKET_TYPE_UNICAST	0
+#define AVF_RXD_PACKET_TYPE_MULTICAST	1
+#define AVF_RXD_PACKET_TYPE_BROADCAST	2
+#define AVF_RXD_PACKET_TYPE_MIRRORED	3
+
+#define AVF_RXD_QW1_ERROR_SHIFT	19
+#define AVF_RXD_QW1_ERROR_MASK		(0xFFUL << AVF_RXD_QW1_ERROR_SHIFT)
+
+enum avf_rx_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_ERROR_RXE_SHIFT		= 0,
+	AVF_RX_DESC_ERROR_RECIPE_SHIFT		= 1,
+	AVF_RX_DESC_ERROR_HBO_SHIFT		= 2,
+	AVF_RX_DESC_ERROR_L3L4E_SHIFT		= 3, /* 3 BITS */
+	AVF_RX_DESC_ERROR_IPE_SHIFT		= 3,
+	AVF_RX_DESC_ERROR_L4E_SHIFT		= 4,
+	AVF_RX_DESC_ERROR_EIPE_SHIFT		= 5,
+	AVF_RX_DESC_ERROR_OVERSIZE_SHIFT	= 6,
+	AVF_RX_DESC_ERROR_PPRS_SHIFT		= 7
+};
+
+enum avf_rx_desc_error_l3l4e_fcoe_masks {
+	AVF_RX_DESC_ERROR_L3L4E_NONE		= 0,
+	AVF_RX_DESC_ERROR_L3L4E_PROT		= 1,
+	AVF_RX_DESC_ERROR_L3L4E_FC		= 2,
+	AVF_RX_DESC_ERROR_L3L4E_DMAC_ERR	= 3,
+	AVF_RX_DESC_ERROR_L3L4E_DMAC_WARN	= 4
+};
+
+#define AVF_RXD_QW1_PTYPE_SHIFT	30
+#define AVF_RXD_QW1_PTYPE_MASK		(0xFFULL << AVF_RXD_QW1_PTYPE_SHIFT)
+
+/* Packet type non-ip values */
+enum avf_rx_l2_ptype {
+	AVF_RX_PTYPE_L2_RESERVED			= 0,
+	AVF_RX_PTYPE_L2_MAC_PAY2			= 1,
+	AVF_RX_PTYPE_L2_TIMESYNC_PAY2			= 2,
+	AVF_RX_PTYPE_L2_FIP_PAY2			= 3,
+	AVF_RX_PTYPE_L2_OUI_PAY2			= 4,
+	AVF_RX_PTYPE_L2_MACCNTRL_PAY2			= 5,
+	AVF_RX_PTYPE_L2_LLDP_PAY2			= 6,
+	AVF_RX_PTYPE_L2_ECP_PAY2			= 7,
+	AVF_RX_PTYPE_L2_EVB_PAY2			= 8,
+	AVF_RX_PTYPE_L2_QCN_PAY2			= 9,
+	AVF_RX_PTYPE_L2_EAPOL_PAY2			= 10,
+	AVF_RX_PTYPE_L2_ARP				= 11,
+	AVF_RX_PTYPE_L2_FCOE_PAY3			= 12,
+	AVF_RX_PTYPE_L2_FCOE_FCDATA_PAY3		= 13,
+	AVF_RX_PTYPE_L2_FCOE_FCRDY_PAY3		= 14,
+	AVF_RX_PTYPE_L2_FCOE_FCRSP_PAY3		= 15,
+	AVF_RX_PTYPE_L2_FCOE_FCOTHER_PA		= 16,
+	AVF_RX_PTYPE_L2_FCOE_VFT_PAY3			= 17,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCDATA		= 18,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCRDY			= 19,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCRSP			= 20,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCOTHER		= 21,
+	AVF_RX_PTYPE_GRENAT4_MAC_PAY3			= 58,
+	AVF_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4	= 87,
+	AVF_RX_PTYPE_GRENAT6_MAC_PAY3			= 124,
+	AVF_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4	= 153
+};
+
+struct avf_rx_ptype_decoded {
+	u32 ptype:8;
+	u32 known:1;
+	u32 outer_ip:1;
+	u32 outer_ip_ver:1;
+	u32 outer_frag:1;
+	u32 tunnel_type:3;
+	u32 tunnel_end_prot:2;
+	u32 tunnel_end_frag:1;
+	u32 inner_prot:4;
+	u32 payload_layer:3;
+};
+
+enum avf_rx_ptype_outer_ip {
+	AVF_RX_PTYPE_OUTER_L2	= 0,
+	AVF_RX_PTYPE_OUTER_IP	= 1
+};
+
+enum avf_rx_ptype_outer_ip_ver {
+	AVF_RX_PTYPE_OUTER_NONE	= 0,
+	AVF_RX_PTYPE_OUTER_IPV4	= 0,
+	AVF_RX_PTYPE_OUTER_IPV6	= 1
+};
+
+enum avf_rx_ptype_outer_fragmented {
+	AVF_RX_PTYPE_NOT_FRAG	= 0,
+	AVF_RX_PTYPE_FRAG	= 1
+};
+
+enum avf_rx_ptype_tunnel_type {
+	AVF_RX_PTYPE_TUNNEL_NONE		= 0,
+	AVF_RX_PTYPE_TUNNEL_IP_IP		= 1,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT		= 2,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC	= 3,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN	= 4,
+};
+
+enum avf_rx_ptype_tunnel_end_prot {
+	AVF_RX_PTYPE_TUNNEL_END_NONE	= 0,
+	AVF_RX_PTYPE_TUNNEL_END_IPV4	= 1,
+	AVF_RX_PTYPE_TUNNEL_END_IPV6	= 2,
+};
+
+enum avf_rx_ptype_inner_prot {
+	AVF_RX_PTYPE_INNER_PROT_NONE		= 0,
+	AVF_RX_PTYPE_INNER_PROT_UDP		= 1,
+	AVF_RX_PTYPE_INNER_PROT_TCP		= 2,
+	AVF_RX_PTYPE_INNER_PROT_SCTP		= 3,
+	AVF_RX_PTYPE_INNER_PROT_ICMP		= 4,
+	AVF_RX_PTYPE_INNER_PROT_TIMESYNC	= 5
+};
+
+enum avf_rx_ptype_payload_layer {
+	AVF_RX_PTYPE_PAYLOAD_LAYER_NONE	= 0,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY2	= 1,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY3	= 2,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY4	= 3,
+};
+
+#define AVF_RX_PTYPE_BIT_MASK		0x0FFFFFFF
+#define AVF_RX_PTYPE_SHIFT		56
+
+#define AVF_RXD_QW1_LENGTH_PBUF_SHIFT	38
+#define AVF_RXD_QW1_LENGTH_PBUF_MASK	(0x3FFFULL << \
+					 AVF_RXD_QW1_LENGTH_PBUF_SHIFT)
+
+#define AVF_RXD_QW1_LENGTH_HBUF_SHIFT	52
+#define AVF_RXD_QW1_LENGTH_HBUF_MASK	(0x7FFULL << \
+					 AVF_RXD_QW1_LENGTH_HBUF_SHIFT)
+
+#define AVF_RXD_QW1_LENGTH_SPH_SHIFT	63
+#define AVF_RXD_QW1_LENGTH_SPH_MASK	BIT_ULL(AVF_RXD_QW1_LENGTH_SPH_SHIFT)
+
+#define AVF_RXD_QW1_NEXTP_SHIFT	38
+#define AVF_RXD_QW1_NEXTP_MASK		(0x1FFFULL << AVF_RXD_QW1_NEXTP_SHIFT)
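+
+/* Illustrative sketch only (not part of the base code): one way a driver
+ * could decode the writeback qword1 of an Rx descriptor with the shift and
+ * mask macros above. The helper name is hypothetical and qword1 is assumed
+ * to already be in CPU byte order.
+ */
+STATIC INLINE void avf_example_parse_rx_qword1(u64 qword1, u32 *status,
+					       u8 *error, u8 *ptype,
+					       u16 *pkt_len)
+{
+	*status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+		  AVF_RXD_QW1_STATUS_SHIFT;
+	*error = (qword1 & AVF_RXD_QW1_ERROR_MASK) >> AVF_RXD_QW1_ERROR_SHIFT;
+	*ptype = (qword1 & AVF_RXD_QW1_PTYPE_MASK) >> AVF_RXD_QW1_PTYPE_SHIFT;
+	*pkt_len = (qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+		   AVF_RXD_QW1_LENGTH_PBUF_SHIFT;
+}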
+
+#define AVF_RXD_QW2_EXT_STATUS_SHIFT	0
+#define AVF_RXD_QW2_EXT_STATUS_MASK	(0xFFFFFUL << \
+					 AVF_RXD_QW2_EXT_STATUS_SHIFT)
+
+enum avf_rx_desc_ext_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT	= 0,
+	AVF_RX_DESC_EXT_STATUS_L2TAG3P_SHIFT	= 1,
+	AVF_RX_DESC_EXT_STATUS_FLEXBL_SHIFT	= 2, /* 2 BITS */
+	AVF_RX_DESC_EXT_STATUS_FLEXBH_SHIFT	= 4, /* 2 BITS */
+	AVF_RX_DESC_EXT_STATUS_FDLONGB_SHIFT	= 9,
+	AVF_RX_DESC_EXT_STATUS_FCOELONGB_SHIFT	= 10,
+	AVF_RX_DESC_EXT_STATUS_PELONGB_SHIFT	= 11,
+};
+
+#define AVF_RXD_QW2_L2TAG2_SHIFT	0
+#define AVF_RXD_QW2_L2TAG2_MASK	(0xFFFFUL << AVF_RXD_QW2_L2TAG2_SHIFT)
+
+#define AVF_RXD_QW2_L2TAG3_SHIFT	16
+#define AVF_RXD_QW2_L2TAG3_MASK	(0xFFFFUL << AVF_RXD_QW2_L2TAG3_SHIFT)
+
+enum avf_rx_desc_pe_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_PE_STATUS_QPID_SHIFT	= 0, /* 18 BITS */
+	AVF_RX_DESC_PE_STATUS_L4PORT_SHIFT	= 0, /* 16 BITS */
+	AVF_RX_DESC_PE_STATUS_IPINDEX_SHIFT	= 16, /* 8 BITS */
+	AVF_RX_DESC_PE_STATUS_QPIDHIT_SHIFT	= 24,
+	AVF_RX_DESC_PE_STATUS_APBVTHIT_SHIFT	= 25,
+	AVF_RX_DESC_PE_STATUS_PORTV_SHIFT	= 26,
+	AVF_RX_DESC_PE_STATUS_URG_SHIFT	= 27,
+	AVF_RX_DESC_PE_STATUS_IPFRAG_SHIFT	= 28,
+	AVF_RX_DESC_PE_STATUS_IPOPT_SHIFT	= 29
+};
+
+#define AVF_RX_PROG_STATUS_DESC_LENGTH_SHIFT		38
+#define AVF_RX_PROG_STATUS_DESC_LENGTH			0x2000000
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT	2
+#define AVF_RX_PROG_STATUS_DESC_QW1_PROGID_MASK	(0x7UL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT)
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT	0
+#define AVF_RX_PROG_STATUS_DESC_QW1_STATUS_MASK	(0x7FFFUL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT)
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT	19
+#define AVF_RX_PROG_STATUS_DESC_QW1_ERROR_MASK		(0x3FUL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT)
+
+enum avf_rx_prog_status_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_PROG_STATUS_DESC_DD_SHIFT	= 0,
+	AVF_RX_PROG_STATUS_DESC_PROG_ID_SHIFT	= 2 /* 3 BITS */
+};
+
+enum avf_rx_prog_status_desc_prog_id_masks {
+	AVF_RX_PROG_STATUS_DESC_FD_FILTER_STATUS	= 1,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CTXT_PROG_STATUS	= 2,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CTXT_INVL_STATUS	= 4,
+};
+
+enum avf_rx_prog_status_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_PROG_STATUS_DESC_FD_TBL_FULL_SHIFT	= 0,
+	AVF_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT	= 1,
+	AVF_RX_PROG_STATUS_DESC_FCOE_TBL_FULL_SHIFT	= 2,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CONFLICT_SHIFT	= 3
+};
+
+#define AVF_TWO_BIT_MASK	0x3
+#define AVF_THREE_BIT_MASK	0x7
+#define AVF_FOUR_BIT_MASK	0xF
+#define AVF_EIGHTEEN_BIT_MASK	0x3FFFF
+
+/* TX Descriptor */
+struct avf_tx_desc {
+	__le64 buffer_addr; /* Address of descriptor's data buf */
+	__le64 cmd_type_offset_bsz;
+};
+
+#define AVF_TXD_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_QW1_DTYPE_MASK		(0xFUL << AVF_TXD_QW1_DTYPE_SHIFT)
+
+enum avf_tx_desc_dtype_value {
+	AVF_TX_DESC_DTYPE_DATA		= 0x0,
+	AVF_TX_DESC_DTYPE_NOP		= 0x1, /* same as Context desc */
+	AVF_TX_DESC_DTYPE_CONTEXT	= 0x1,
+	AVF_TX_DESC_DTYPE_FCOE_CTX	= 0x2,
+	AVF_TX_DESC_DTYPE_FILTER_PROG	= 0x8,
+	AVF_TX_DESC_DTYPE_DDP_CTX	= 0x9,
+	AVF_TX_DESC_DTYPE_FLEX_DATA	= 0xB,
+	AVF_TX_DESC_DTYPE_FLEX_CTX_1	= 0xC,
+	AVF_TX_DESC_DTYPE_FLEX_CTX_2	= 0xD,
+	AVF_TX_DESC_DTYPE_DESC_DONE	= 0xF
+};
+
+#define AVF_TXD_QW1_CMD_SHIFT	4
+#define AVF_TXD_QW1_CMD_MASK	(0x3FFUL << AVF_TXD_QW1_CMD_SHIFT)
+
+enum avf_tx_desc_cmd_bits {
+	AVF_TX_DESC_CMD_EOP			= 0x0001,
+	AVF_TX_DESC_CMD_RS			= 0x0002,
+	AVF_TX_DESC_CMD_ICRC			= 0x0004,
+	AVF_TX_DESC_CMD_IL2TAG1		= 0x0008,
+	AVF_TX_DESC_CMD_DUMMY			= 0x0010,
+	AVF_TX_DESC_CMD_IIPT_NONIP		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV6		= 0x0020, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV4		= 0x0040, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV4_CSUM		= 0x0060, /* 2 BITS */
+	AVF_TX_DESC_CMD_FCOET			= 0x0080,
+	AVF_TX_DESC_CMD_L4T_EOFT_UNK		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_TCP		= 0x0100, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_SCTP		= 0x0200, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_UDP		= 0x0300, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_N		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_T		= 0x0100, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_NI	= 0x0200, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_A		= 0x0300, /* 2 BITS */
+};
+
+#define AVF_TXD_QW1_OFFSET_SHIFT	16
+#define AVF_TXD_QW1_OFFSET_MASK	(0x3FFFFULL << \
+					 AVF_TXD_QW1_OFFSET_SHIFT)
+
+enum avf_tx_desc_length_fields {
+	/* Note: These are predefined bit offsets */
+	AVF_TX_DESC_LENGTH_MACLEN_SHIFT	= 0, /* 7 BITS */
+	AVF_TX_DESC_LENGTH_IPLEN_SHIFT		= 7, /* 7 BITS */
+	AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT	= 14 /* 4 BITS */
+};
+
+#define AVF_TXD_QW1_MACLEN_MASK (0x7FUL << AVF_TX_DESC_LENGTH_MACLEN_SHIFT)
+#define AVF_TXD_QW1_IPLEN_MASK  (0x7FUL << AVF_TX_DESC_LENGTH_IPLEN_SHIFT)
+#define AVF_TXD_QW1_L4LEN_MASK  (0xFUL << AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+#define AVF_TXD_QW1_FCLEN_MASK  (0xFUL << AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+
+#define AVF_TXD_QW1_TX_BUF_SZ_SHIFT	34
+#define AVF_TXD_QW1_TX_BUF_SZ_MASK	(0x3FFFULL << \
+					 AVF_TXD_QW1_TX_BUF_SZ_SHIFT)
+
+#define AVF_TXD_QW1_L2TAG1_SHIFT	48
+#define AVF_TXD_QW1_L2TAG1_MASK	(0xFFFFULL << AVF_TXD_QW1_L2TAG1_SHIFT)
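+
+/* Illustrative sketch only (not part of the base code): composing the
+ * cmd_type_offset_bsz quadword of a data descriptor from its fields with
+ * the shift definitions above. The helper name is hypothetical; byte-order
+ * conversion is left to the caller.
+ */
+STATIC INLINE u64 avf_example_build_tx_qword1(u32 td_cmd, u32 td_offset,
+					      u16 size, u16 l2tag1)
+{
+	return AVF_TX_DESC_DTYPE_DATA |
+	       ((u64)td_cmd << AVF_TXD_QW1_CMD_SHIFT) |
+	       ((u64)td_offset << AVF_TXD_QW1_OFFSET_SHIFT) |
+	       ((u64)size << AVF_TXD_QW1_TX_BUF_SZ_SHIFT) |
+	       ((u64)l2tag1 << AVF_TXD_QW1_L2TAG1_SHIFT);
+}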
+
+/* Context descriptors */
+struct avf_tx_context_desc {
+	__le32 tunneling_params;
+	__le16 l2tag2;
+	__le16 rsvd;
+	__le64 type_cmd_tso_mss;
+};
+
+#define AVF_TXD_CTX_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_CTX_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_CTX_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_CTX_QW1_CMD_SHIFT	4
+#define AVF_TXD_CTX_QW1_CMD_MASK	(0xFFFFUL << AVF_TXD_CTX_QW1_CMD_SHIFT)
+
+enum avf_tx_ctx_desc_cmd_bits {
+	AVF_TX_CTX_DESC_TSO		= 0x01,
+	AVF_TX_CTX_DESC_TSYN		= 0x02,
+	AVF_TX_CTX_DESC_IL2TAG2	= 0x04,
+	AVF_TX_CTX_DESC_IL2TAG2_IL2H	= 0x08,
+	AVF_TX_CTX_DESC_SWTCH_NOTAG	= 0x00,
+	AVF_TX_CTX_DESC_SWTCH_UPLINK	= 0x10,
+	AVF_TX_CTX_DESC_SWTCH_LOCAL	= 0x20,
+	AVF_TX_CTX_DESC_SWTCH_VSI	= 0x30,
+	AVF_TX_CTX_DESC_SWPE		= 0x40
+};
+
+#define AVF_TXD_CTX_QW1_TSO_LEN_SHIFT	30
+#define AVF_TXD_CTX_QW1_TSO_LEN_MASK	(0x3FFFFULL << \
+					 AVF_TXD_CTX_QW1_TSO_LEN_SHIFT)
+
+#define AVF_TXD_CTX_QW1_MSS_SHIFT	50
+#define AVF_TXD_CTX_QW1_MSS_MASK	(0x3FFFULL << \
+					 AVF_TXD_CTX_QW1_MSS_SHIFT)
+
+#define AVF_TXD_CTX_QW1_VSI_SHIFT	50
+#define AVF_TXD_CTX_QW1_VSI_MASK	(0x1FFULL << AVF_TXD_CTX_QW1_VSI_SHIFT)
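+
+/* Illustrative sketch only (not part of the base code): filling the
+ * type_cmd_tso_mss quadword of a context descriptor for a TSO request.
+ * The helper name is hypothetical.
+ */
+STATIC INLINE u64 avf_example_build_tso_ctx_qword1(u32 ctx_cmd, u32 tso_len,
+						   u16 mss)
+{
+	return AVF_TX_DESC_DTYPE_CONTEXT |
+	       ((u64)ctx_cmd << AVF_TXD_CTX_QW1_CMD_SHIFT) |
+	       ((u64)tso_len << AVF_TXD_CTX_QW1_TSO_LEN_SHIFT) |
+	       ((u64)mss << AVF_TXD_CTX_QW1_MSS_SHIFT);
+}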
+
+#define AVF_TXD_CTX_QW0_EXT_IP_SHIFT	0
+#define AVF_TXD_CTX_QW0_EXT_IP_MASK	(0x3ULL << \
+					 AVF_TXD_CTX_QW0_EXT_IP_SHIFT)
+
+enum avf_tx_ctx_desc_eipt_offload {
+	AVF_TX_CTX_EXT_IP_NONE		= 0x0,
+	AVF_TX_CTX_EXT_IP_IPV6		= 0x1,
+	AVF_TX_CTX_EXT_IP_IPV4_NO_CSUM	= 0x2,
+	AVF_TX_CTX_EXT_IP_IPV4		= 0x3
+};
+
+#define AVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT	2
+#define AVF_TXD_CTX_QW0_EXT_IPLEN_MASK	(0x3FULL << \
+					 AVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT)
+
+#define AVF_TXD_CTX_QW0_NATT_SHIFT	9
+#define AVF_TXD_CTX_QW0_NATT_MASK	(0x3ULL << AVF_TXD_CTX_QW0_NATT_SHIFT)
+
+#define AVF_TXD_CTX_UDP_TUNNELING	BIT_ULL(AVF_TXD_CTX_QW0_NATT_SHIFT)
+#define AVF_TXD_CTX_GRE_TUNNELING	(0x2ULL << AVF_TXD_CTX_QW0_NATT_SHIFT)
+
+#define AVF_TXD_CTX_QW0_EIP_NOINC_SHIFT	11
+#define AVF_TXD_CTX_QW0_EIP_NOINC_MASK	BIT_ULL(AVF_TXD_CTX_QW0_EIP_NOINC_SHIFT)
+
+#define AVF_TXD_CTX_EIP_NOINC_IPID_CONST	AVF_TXD_CTX_QW0_EIP_NOINC_MASK
+
+#define AVF_TXD_CTX_QW0_NATLEN_SHIFT	12
+#define AVF_TXD_CTX_QW0_NATLEN_MASK	(0x7FULL << \
+					 AVF_TXD_CTX_QW0_NATLEN_SHIFT)
+
+#define AVF_TXD_CTX_QW0_DECTTL_SHIFT	19
+#define AVF_TXD_CTX_QW0_DECTTL_MASK	(0xFULL << \
+					 AVF_TXD_CTX_QW0_DECTTL_SHIFT)
+
+#define AVF_TXD_CTX_QW0_L4T_CS_SHIFT	23
+#define AVF_TXD_CTX_QW0_L4T_CS_MASK	BIT_ULL(AVF_TXD_CTX_QW0_L4T_CS_SHIFT)
+struct avf_nop_desc {
+	__le64 rsvd;
+	__le64 dtype_cmd;
+};
+
+#define AVF_TXD_NOP_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_NOP_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_NOP_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_NOP_QW1_CMD_SHIFT	4
+#define AVF_TXD_NOP_QW1_CMD_MASK	(0x7FUL << AVF_TXD_NOP_QW1_CMD_SHIFT)
+
+enum avf_tx_nop_desc_cmd_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_TX_NOP_DESC_EOP_SHIFT	= 0,
+	AVF_TX_NOP_DESC_RS_SHIFT	= 1,
+	AVF_TX_NOP_DESC_RSV_SHIFT	= 2 /* 5 bits */
+};
+
+struct avf_filter_program_desc {
+	__le32 qindex_flex_ptype_vsi;
+	__le32 rsvd;
+	__le32 dtype_cmd_cntindex;
+	__le32 fd_id;
+};
+#define AVF_TXD_FLTR_QW0_QINDEX_SHIFT	0
+#define AVF_TXD_FLTR_QW0_QINDEX_MASK	(0x7FFUL << \
+					 AVF_TXD_FLTR_QW0_QINDEX_SHIFT)
+#define AVF_TXD_FLTR_QW0_FLEXOFF_SHIFT	11
+#define AVF_TXD_FLTR_QW0_FLEXOFF_MASK	(0x7UL << \
+					 AVF_TXD_FLTR_QW0_FLEXOFF_SHIFT)
+#define AVF_TXD_FLTR_QW0_PCTYPE_SHIFT	17
+#define AVF_TXD_FLTR_QW0_PCTYPE_MASK	(0x3FUL << \
+					 AVF_TXD_FLTR_QW0_PCTYPE_SHIFT)
+
+/* Packet Classifier Types for filters */
+enum avf_filter_pctype {
+	/* Note: Values 0-28 are reserved for future use.
+	 * Values 29, 30, and 32 are not supported on XL710 and X710.
+	 */
+	AVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP	= 29,
+	AVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP	= 30,
+	AVF_FILTER_PCTYPE_NONF_IPV4_UDP		= 31,
+	AVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK	= 32,
+	AVF_FILTER_PCTYPE_NONF_IPV4_TCP		= 33,
+	AVF_FILTER_PCTYPE_NONF_IPV4_SCTP		= 34,
+	AVF_FILTER_PCTYPE_NONF_IPV4_OTHER		= 35,
+	AVF_FILTER_PCTYPE_FRAG_IPV4			= 36,
+	/* Note: Values 37-38 are reserved for future use.
+	 * Values 39, 40, and 42 are not supported on XL710 and X710.
+	 */
+	AVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP	= 39,
+	AVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP	= 40,
+	AVF_FILTER_PCTYPE_NONF_IPV6_UDP		= 41,
+	AVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK	= 42,
+	AVF_FILTER_PCTYPE_NONF_IPV6_TCP		= 43,
+	AVF_FILTER_PCTYPE_NONF_IPV6_SCTP		= 44,
+	AVF_FILTER_PCTYPE_NONF_IPV6_OTHER		= 45,
+	AVF_FILTER_PCTYPE_FRAG_IPV6			= 46,
+	/* Note: Value 47 is reserved for future use */
+	AVF_FILTER_PCTYPE_FCOE_OX			= 48,
+	AVF_FILTER_PCTYPE_FCOE_RX			= 49,
+	AVF_FILTER_PCTYPE_FCOE_OTHER			= 50,
+	/* Note: Values 51-62 are reserved for future use */
+	AVF_FILTER_PCTYPE_L2_PAYLOAD			= 63,
+};
+
+enum avf_filter_program_desc_dest {
+	AVF_FILTER_PROGRAM_DESC_DEST_DROP_PACKET		= 0x0,
+	AVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_QINDEX	= 0x1,
+	AVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_OTHER	= 0x2,
+};
+
+enum avf_filter_program_desc_fd_status {
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_NONE			= 0x0,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID		= 0x1,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID_4FLEX_BYTES	= 0x2,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_8FLEX_BYTES		= 0x3,
+};
+
+#define AVF_TXD_FLTR_QW0_DEST_VSI_SHIFT	23
+#define AVF_TXD_FLTR_QW0_DEST_VSI_MASK	(0x1FFUL << \
+					 AVF_TXD_FLTR_QW0_DEST_VSI_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_FLTR_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_FLTR_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CMD_SHIFT	4
+#define AVF_TXD_FLTR_QW1_CMD_MASK	(0xFFFFULL << \
+					 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_PCMD_SHIFT	(0x0ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_PCMD_MASK	(0x7ULL << AVF_TXD_FLTR_QW1_PCMD_SHIFT)
+
+enum avf_filter_program_desc_pcmd {
+	AVF_FILTER_PROGRAM_DESC_PCMD_ADD_UPDATE	= 0x1,
+	AVF_FILTER_PROGRAM_DESC_PCMD_REMOVE		= 0x2,
+};
+
+#define AVF_TXD_FLTR_QW1_DEST_SHIFT	(0x3ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_DEST_MASK	(0x3ULL << AVF_TXD_FLTR_QW1_DEST_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CNT_ENA_SHIFT	(0x7ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_CNT_ENA_MASK	BIT_ULL(AVF_TXD_FLTR_QW1_CNT_ENA_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_FD_STATUS_SHIFT	(0x9ULL + \
+						 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_FD_STATUS_MASK (0x3ULL << \
+					  AVF_TXD_FLTR_QW1_FD_STATUS_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_ATR_SHIFT	(0xEULL + \
+					 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_ATR_MASK	BIT_ULL(AVF_TXD_FLTR_QW1_ATR_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CNTINDEX_SHIFT 20
+#define AVF_TXD_FLTR_QW1_CNTINDEX_MASK	(0x1FFUL << \
+					 AVF_TXD_FLTR_QW1_CNTINDEX_SHIFT)
+
+enum avf_filter_type {
+	AVF_FLOW_DIRECTOR_FLTR = 0,
+	AVF_PE_QUAD_HASH_FLTR = 1,
+	AVF_ETHERTYPE_FLTR,
+	AVF_FCOE_CTX_FLTR,
+	AVF_MAC_VLAN_FLTR,
+	AVF_HASH_FLTR
+};
+
+struct avf_vsi_context {
+	u16 seid;
+	u16 uplink_seid;
+	u16 vsi_number;
+	u16 vsis_allocated;
+	u16 vsis_unallocated;
+	u16 flags;
+	u8 pf_num;
+	u8 vf_num;
+	u8 connection_type;
+	struct avf_aqc_vsi_properties_data info;
+};
+
+struct avf_veb_context {
+	u16 seid;
+	u16 uplink_seid;
+	u16 veb_number;
+	u16 vebs_allocated;
+	u16 vebs_unallocated;
+	u16 flags;
+	struct avf_aqc_get_veb_parameters_completion info;
+};
+
+/* Statistics collected by each port, VSI, VEB, and S-channel */
+struct avf_eth_stats {
+	u64 rx_bytes;			/* gorc */
+	u64 rx_unicast;			/* uprc */
+	u64 rx_multicast;		/* mprc */
+	u64 rx_broadcast;		/* bprc */
+	u64 rx_discards;		/* rdpc */
+	u64 rx_unknown_protocol;	/* rupp */
+	u64 tx_bytes;			/* gotc */
+	u64 tx_unicast;			/* uptc */
+	u64 tx_multicast;		/* mptc */
+	u64 tx_broadcast;		/* bptc */
+	u64 tx_discards;		/* tdpc */
+	u64 tx_errors;			/* tepc */
+};
+
+/* Statistics collected per VEB per TC */
+struct avf_veb_tc_stats {
+	u64 tc_rx_packets[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_rx_bytes[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_tx_packets[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_tx_bytes[AVF_MAX_TRAFFIC_CLASS];
+};
+
+/* Statistics collected per function for FCoE */
+struct avf_fcoe_stats {
+	u64 rx_fcoe_packets;		/* fcoeprc */
+	u64 rx_fcoe_dwords;		/* fcoedwrc */
+	u64 rx_fcoe_dropped;		/* fcoerpdc */
+	u64 tx_fcoe_packets;		/* fcoeptc */
+	u64 tx_fcoe_dwords;		/* fcoedwtc */
+	u64 fcoe_bad_fccrc;		/* fcoecrc */
+	u64 fcoe_last_error;		/* fcoelast */
+	u64 fcoe_ddp_count;		/* fcoeddpc */
+};
+
+/* offset to per function FCoE statistics block */
+#define AVF_FCOE_VF_STAT_OFFSET	0
+#define AVF_FCOE_PF_STAT_OFFSET	128
+#define AVF_FCOE_STAT_MAX		(AVF_FCOE_PF_STAT_OFFSET + AVF_MAX_PF)
+
+/* Statistics collected by the MAC */
+struct avf_hw_port_stats {
+	/* eth stats collected by the port */
+	struct avf_eth_stats eth;
+
+	/* additional port specific stats */
+	u64 tx_dropped_link_down;	/* tdold */
+	u64 crc_errors;			/* crcerrs */
+	u64 illegal_bytes;		/* illerrc */
+	u64 error_bytes;		/* errbc */
+	u64 mac_local_faults;		/* mlfc */
+	u64 mac_remote_faults;		/* mrfc */
+	u64 rx_length_errors;		/* rlec */
+	u64 link_xon_rx;		/* lxonrxc */
+	u64 link_xoff_rx;		/* lxoffrxc */
+	u64 priority_xon_rx[8];		/* pxonrxc[8] */
+	u64 priority_xoff_rx[8];	/* pxoffrxc[8] */
+	u64 link_xon_tx;		/* lxontxc */
+	u64 link_xoff_tx;		/* lxofftxc */
+	u64 priority_xon_tx[8];		/* pxontxc[8] */
+	u64 priority_xoff_tx[8];	/* pxofftxc[8] */
+	u64 priority_xon_2_xoff[8];	/* pxon2offc[8] */
+	u64 rx_size_64;			/* prc64 */
+	u64 rx_size_127;		/* prc127 */
+	u64 rx_size_255;		/* prc255 */
+	u64 rx_size_511;		/* prc511 */
+	u64 rx_size_1023;		/* prc1023 */
+	u64 rx_size_1522;		/* prc1522 */
+	u64 rx_size_big;		/* prc9522 */
+	u64 rx_undersize;		/* ruc */
+	u64 rx_fragments;		/* rfc */
+	u64 rx_oversize;		/* roc */
+	u64 rx_jabber;			/* rjc */
+	u64 tx_size_64;			/* ptc64 */
+	u64 tx_size_127;		/* ptc127 */
+	u64 tx_size_255;		/* ptc255 */
+	u64 tx_size_511;		/* ptc511 */
+	u64 tx_size_1023;		/* ptc1023 */
+	u64 tx_size_1522;		/* ptc1522 */
+	u64 tx_size_big;		/* ptc9522 */
+	u64 mac_short_packet_dropped;	/* mspdc */
+	u64 checksum_error;		/* xec */
+	/* flow director stats */
+	u64 fd_atr_match;
+	u64 fd_sb_match;
+	u64 fd_atr_tunnel_match;
+	u32 fd_atr_status;
+	u32 fd_sb_status;
+	/* EEE LPI */
+	u32 tx_lpi_status;
+	u32 rx_lpi_status;
+	u64 tx_lpi_count;		/* etlpic */
+	u64 rx_lpi_count;		/* erlpic */
+};
+
+/* Checksum and Shadow RAM pointers */
+#define AVF_SR_NVM_CONTROL_WORD		0x00
+#define AVF_SR_PCIE_ANALOG_CONFIG_PTR		0x03
+#define AVF_SR_PHY_ANALOG_CONFIG_PTR		0x04
+#define AVF_SR_OPTION_ROM_PTR			0x05
+#define AVF_SR_RO_PCIR_REGS_AUTO_LOAD_PTR	0x06
+#define AVF_SR_AUTO_GENERATED_POINTERS_PTR	0x07
+#define AVF_SR_PCIR_REGS_AUTO_LOAD_PTR		0x08
+#define AVF_SR_EMP_GLOBAL_MODULE_PTR		0x09
+#define AVF_SR_RO_PCIE_LCB_PTR			0x0A
+#define AVF_SR_EMP_IMAGE_PTR			0x0B
+#define AVF_SR_PE_IMAGE_PTR			0x0C
+#define AVF_SR_CSR_PROTECTED_LIST_PTR		0x0D
+#define AVF_SR_MNG_CONFIG_PTR			0x0E
+#define AVF_EMP_MODULE_PTR			0x0F
+#define AVF_SR_EMP_MODULE_PTR			0x48
+#define AVF_SR_PBA_FLAGS			0x15
+#define AVF_SR_PBA_BLOCK_PTR			0x16
+#define AVF_SR_BOOT_CONFIG_PTR			0x17
+#define AVF_NVM_OEM_VER_OFF			0x83
+#define AVF_SR_NVM_DEV_STARTER_VERSION		0x18
+#define AVF_SR_NVM_WAKE_ON_LAN			0x19
+#define AVF_SR_ALTERNATE_SAN_MAC_ADDRESS_PTR	0x27
+#define AVF_SR_PERMANENT_SAN_MAC_ADDRESS_PTR	0x28
+#define AVF_SR_NVM_MAP_VERSION			0x29
+#define AVF_SR_NVM_IMAGE_VERSION		0x2A
+#define AVF_SR_NVM_STRUCTURE_VERSION		0x2B
+#define AVF_SR_NVM_EETRACK_LO			0x2D
+#define AVF_SR_NVM_EETRACK_HI			0x2E
+#define AVF_SR_VPD_PTR				0x2F
+#define AVF_SR_PXE_SETUP_PTR			0x30
+#define AVF_SR_PXE_CONFIG_CUST_OPTIONS_PTR	0x31
+#define AVF_SR_NVM_ORIGINAL_EETRACK_LO		0x34
+#define AVF_SR_NVM_ORIGINAL_EETRACK_HI		0x35
+#define AVF_SR_SW_ETHERNET_MAC_ADDRESS_PTR	0x37
+#define AVF_SR_POR_REGS_AUTO_LOAD_PTR		0x38
+#define AVF_SR_EMPR_REGS_AUTO_LOAD_PTR		0x3A
+#define AVF_SR_GLOBR_REGS_AUTO_LOAD_PTR	0x3B
+#define AVF_SR_CORER_REGS_AUTO_LOAD_PTR	0x3C
+#define AVF_SR_PHY_ACTIVITY_LIST_PTR		0x3D
+#define AVF_SR_PCIE_ALT_AUTO_LOAD_PTR		0x3E
+#define AVF_SR_SW_CHECKSUM_WORD		0x3F
+#define AVF_SR_1ST_FREE_PROVISION_AREA_PTR	0x40
+#define AVF_SR_4TH_FREE_PROVISION_AREA_PTR	0x42
+#define AVF_SR_3RD_FREE_PROVISION_AREA_PTR	0x44
+#define AVF_SR_2ND_FREE_PROVISION_AREA_PTR	0x46
+#define AVF_SR_EMP_SR_SETTINGS_PTR		0x48
+#define AVF_SR_FEATURE_CONFIGURATION_PTR	0x49
+#define AVF_SR_CONFIGURATION_METADATA_PTR	0x4D
+#define AVF_SR_IMMEDIATE_VALUES_PTR		0x4E
+
+/* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
+#define AVF_SR_VPD_MODULE_MAX_SIZE		1024
+#define AVF_SR_PCIE_ALT_MODULE_MAX_SIZE	1024
+#define AVF_SR_CONTROL_WORD_1_SHIFT		0x06
+#define AVF_SR_CONTROL_WORD_1_MASK	(0x03 << AVF_SR_CONTROL_WORD_1_SHIFT)
+#define AVF_SR_CONTROL_WORD_1_NVM_BANK_VALID	BIT(5)
+#define AVF_SR_NVM_MAP_STRUCTURE_TYPE		BIT(12)
+#define AVF_PTR_TYPE                           BIT(15)
+
+/* Shadow RAM related */
+#define AVF_SR_SECTOR_SIZE_IN_WORDS	0x800
+#define AVF_SR_BUF_ALIGNMENT		4096
+#define AVF_SR_WORDS_IN_1KB		512
+/* Checksum should be calculated such that after adding all the words,
+ * including the checksum word itself, the sum should be 0xBABA.
+ */
+#define AVF_SR_SW_CHECKSUM_BASE	0xBABA
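+
+/* Illustrative sketch only (not part of the base code): with the rule
+ * above, the checksum word is whatever value makes the 16-bit sum of all
+ * Shadow RAM words equal AVF_SR_SW_CHECKSUM_BASE. The helper name and the
+ * caller-provided sum of the remaining words are hypothetical.
+ */
+STATIC INLINE u16 avf_example_sr_checksum_word(u16 sum_of_all_other_words)
+{
+	return (u16)(AVF_SR_SW_CHECKSUM_BASE - sum_of_all_other_words);
+}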
+
+#define AVF_SRRD_SRCTL_ATTEMPTS	100000
+
+/* FCoE Tx context descriptor - Use the avf_tx_context_desc struct */
+
+enum avf_fcoe_tx_ctx_desc_cmd_bits {
+	AVF_FCOE_TX_CTX_DESC_OPCODE_SINGLE_SEND	= 0x00, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS2	= 0x01, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS3	= 0x05, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS2	= 0x02, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS3	= 0x06, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS2	= 0x03, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS3	= 0x07, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DDP_CTX_INVL	= 0x08, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_CTX_INVL	= 0x09, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_RELOFF			= 0x10,
+	AVF_FCOE_TX_CTX_DESC_CLRSEQ			= 0x20,
+	AVF_FCOE_TX_CTX_DESC_DIFENA			= 0x40,
+	AVF_FCOE_TX_CTX_DESC_IL2TAG2			= 0x80
+};
+
+/* FCoE DIF/DIX Context descriptor */
+struct avf_fcoe_difdix_context_desc {
+	__le64 flags_buff0_buff1_ref;
+	__le64 difapp_msk_bias;
+};
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT	0
+#define AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_MASK	(0xFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT)
+
+enum avf_fcoe_difdix_ctx_desc_flags_bits {
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_RSVD				= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGCHK		= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGNOTCHK		= 0x0004,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_OPAQUE			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY		= 0x0008,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPTAG	= 0x0010,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPREFTAG	= 0x0018,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_CNST			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_INC1BLK		= 0x0020,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_APPTAG		= 0x0040,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_RSVD			= 0x0060,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_XSUM			= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_CRC			= 0x0080,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_UNTAG			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_BUF			= 0x0100,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_RSVD			= 0x0200,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_EMBDTAGS		= 0x0300,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_UNTAG			= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_TAG			= 0x0400,
+	/* 1 BIT */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_512B			= 0x0000,
+	/* 1 BIT */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_4K			= 0x0800
+};
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT	12
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_MASK	(0x3FFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT	22
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_MASK	(0x3FFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT	32
+#define AVF_FCOE_DIFDIX_CTX_QW0_REF_MASK	(0xFFFFFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT	0
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MASK	(0xFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT	16
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_MASK	(0xFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT	32
+#define AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_MASK	(0xFFFFFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT)
+
+/* FCoE DIF/DIX Buffers descriptor */
+struct avf_fcoe_difdix_buffers_desc {
+	__le64 buff_addr0;
+	__le64 buff_addr1;
+};
+
+/* FCoE DDP Context descriptor */
+struct avf_fcoe_ddp_context_desc {
+	__le64 rsvd;
+	__le64 type_cmd_foff_lsize;
+};
+
+#define AVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT	0
+#define AVF_FCOE_DDP_CTX_QW1_DTYPE_MASK	(0xFULL << \
+					AVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT)
+
+#define AVF_FCOE_DDP_CTX_QW1_CMD_SHIFT	4
+#define AVF_FCOE_DDP_CTX_QW1_CMD_MASK	(0xFULL << \
+					 AVF_FCOE_DDP_CTX_QW1_CMD_SHIFT)
+
+enum avf_fcoe_ddp_ctx_desc_cmd_bits {
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_512B	= 0x00, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_4K		= 0x01, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_8K		= 0x02, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_16K	= 0x03, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_DIFENA		= 0x04, /* 1 BIT  */
+	AVF_FCOE_DDP_CTX_DESC_LASTSEQH		= 0x08, /* 1 BIT  */
+};
+
+#define AVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT	16
+#define AVF_FCOE_DDP_CTX_QW1_FOFF_MASK	(0x3FFFULL << \
+					 AVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT)
+
+#define AVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT	32
+#define AVF_FCOE_DDP_CTX_QW1_LSIZE_MASK	(0x3FFFULL << \
+					AVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT)
+
+/* FCoE DDP/DWO Queue Context descriptor */
+struct avf_fcoe_queue_context_desc {
+	__le64 dmaindx_fbase;           /* 0:11 DMAINDX, 12:63 FBASE */
+	__le64 flen_tph;                /* 0:12 FLEN, 13:15 TPH */
+};
+
+#define AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT	0
+#define AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_MASK	(0xFFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT	12
+#define AVF_FCOE_QUEUE_CTX_QW0_FBASE_MASK	(0xFFFFFFFFFFFFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT	0
+#define AVF_FCOE_QUEUE_CTX_QW1_FLEN_MASK	(0x1FFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW1_TPH_SHIFT	13
+#define AVF_FCOE_QUEUE_CTX_QW1_TPH_MASK	(0x7ULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_TPH_SHIFT)
+
+enum avf_fcoe_queue_ctx_desc_tph_bits {
+	AVF_FCOE_QUEUE_CTX_DESC_TPHRDESC	= 0x1,
+	AVF_FCOE_QUEUE_CTX_DESC_TPHDATA	= 0x2
+};
+
+#define AVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT	30
+#define AVF_FCOE_QUEUE_CTX_QW1_RECIPE_MASK	(0x3ULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT)
+
+/* FCoE DDP/DWO Filter Context descriptor */
+struct avf_fcoe_filter_context_desc {
+	__le32 param;
+	__le16 seqn;
+
+	/* 48:51(0:3) RSVD, 52:63(4:15) DMAINDX */
+	__le16 rsvd_dmaindx;
+
+	/* 0:7 FLAGS, 8:52 RSVD, 53:63 LANQ */
+	__le64 flags_rsvd_lanq;
+};
+
+#define AVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT	4
+#define AVF_FCOE_FILTER_CTX_QW0_DMAINDX_MASK	(0xFFF << \
+					AVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT)
+
+enum avf_fcoe_filter_ctx_desc_flags_bits {
+	AVF_FCOE_FILTER_CTX_DESC_CTYP_DDP	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_CTYP_DWO	= 0x01,
+	AVF_FCOE_FILTER_CTX_DESC_ENODE_INIT	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_ENODE_RSP	= 0x02,
+	AVF_FCOE_FILTER_CTX_DESC_FC_CLASS2	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_FC_CLASS3	= 0x04
+};
+
+#define AVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT	0
+#define AVF_FCOE_FILTER_CTX_QW1_FLAGS_MASK	(0xFFULL << \
+					AVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT)
+
+#define AVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT     8
+#define AVF_FCOE_FILTER_CTX_QW1_PCTYPE_MASK      (0x3FULL << \
+			AVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT)
+
+#define AVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT     53
+#define AVF_FCOE_FILTER_CTX_QW1_LANQINDX_MASK      (0x7FFULL << \
+			AVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT)
+
+enum avf_switch_element_types {
+	AVF_SWITCH_ELEMENT_TYPE_MAC	= 1,
+	AVF_SWITCH_ELEMENT_TYPE_PF	= 2,
+	AVF_SWITCH_ELEMENT_TYPE_VF	= 3,
+	AVF_SWITCH_ELEMENT_TYPE_EMP	= 4,
+	AVF_SWITCH_ELEMENT_TYPE_BMC	= 6,
+	AVF_SWITCH_ELEMENT_TYPE_PE	= 16,
+	AVF_SWITCH_ELEMENT_TYPE_VEB	= 17,
+	AVF_SWITCH_ELEMENT_TYPE_PA	= 18,
+	AVF_SWITCH_ELEMENT_TYPE_VSI	= 19,
+};
+
+/* Supported EtherType filters */
+enum avf_ether_type_index {
+	AVF_ETHER_TYPE_1588		= 0,
+	AVF_ETHER_TYPE_FIP		= 1,
+	AVF_ETHER_TYPE_OUI_EXTENDED	= 2,
+	AVF_ETHER_TYPE_MAC_CONTROL	= 3,
+	AVF_ETHER_TYPE_LLDP		= 4,
+	AVF_ETHER_TYPE_EVB_PROTOCOL1	= 5,
+	AVF_ETHER_TYPE_EVB_PROTOCOL2	= 6,
+	AVF_ETHER_TYPE_QCN_CNM		= 7,
+	AVF_ETHER_TYPE_8021X		= 8,
+	AVF_ETHER_TYPE_ARP		= 9,
+	AVF_ETHER_TYPE_RSV1		= 10,
+	AVF_ETHER_TYPE_RSV2		= 11,
+};
+
+/* Filter context base size is 1K */
+#define AVF_HASH_FILTER_BASE_SIZE	1024
+/* Supported Hash filter values */
+enum avf_hash_filter_size {
+	AVF_HASH_FILTER_SIZE_1K	= 0,
+	AVF_HASH_FILTER_SIZE_2K	= 1,
+	AVF_HASH_FILTER_SIZE_4K	= 2,
+	AVF_HASH_FILTER_SIZE_8K	= 3,
+	AVF_HASH_FILTER_SIZE_16K	= 4,
+	AVF_HASH_FILTER_SIZE_32K	= 5,
+	AVF_HASH_FILTER_SIZE_64K	= 6,
+	AVF_HASH_FILTER_SIZE_128K	= 7,
+	AVF_HASH_FILTER_SIZE_256K	= 8,
+	AVF_HASH_FILTER_SIZE_512K	= 9,
+	AVF_HASH_FILTER_SIZE_1M	= 10,
+};
+
+/* DMA context base size is 0.5K */
+#define AVF_DMA_CNTX_BASE_SIZE		512
+/* Supported DMA context values */
+enum avf_dma_cntx_size {
+	AVF_DMA_CNTX_SIZE_512		= 0,
+	AVF_DMA_CNTX_SIZE_1K		= 1,
+	AVF_DMA_CNTX_SIZE_2K		= 2,
+	AVF_DMA_CNTX_SIZE_4K		= 3,
+	AVF_DMA_CNTX_SIZE_8K		= 4,
+	AVF_DMA_CNTX_SIZE_16K		= 5,
+	AVF_DMA_CNTX_SIZE_32K		= 6,
+	AVF_DMA_CNTX_SIZE_64K		= 7,
+	AVF_DMA_CNTX_SIZE_128K		= 8,
+	AVF_DMA_CNTX_SIZE_256K		= 9,
+};
+
+/* Supported Hash look up table (LUT) sizes */
+enum avf_hash_lut_size {
+	AVF_HASH_LUT_SIZE_128		= 0,
+	AVF_HASH_LUT_SIZE_512		= 1,
+};
+
+/* Structure to hold a per PF filter control settings */
+struct avf_filter_control_settings {
+	/* number of PE Quad Hash filter buckets */
+	enum avf_hash_filter_size pe_filt_num;
+	/* number of PE Quad Hash contexts */
+	enum avf_dma_cntx_size pe_cntx_num;
+	/* number of FCoE filter buckets */
+	enum avf_hash_filter_size fcoe_filt_num;
+	/* number of FCoE DDP contexts */
+	enum avf_dma_cntx_size fcoe_cntx_num;
+	/* size of the Hash LUT */
+	enum avf_hash_lut_size	hash_lut_size;
+	/* enable FDIR filters for PF and its VFs */
+	bool enable_fdir;
+	/* enable Ethertype filters for PF and its VFs */
+	bool enable_ethtype;
+	/* enable MAC/VLAN filters for PF and its VFs */
+	bool enable_macvlan;
+};
+
+/* Structure to hold device level control filter counts */
+struct avf_control_filter_stats {
+	u16 mac_etype_used;   /* Used perfect match MAC/EtherType filters */
+	u16 etype_used;       /* Used perfect EtherType filters */
+	u16 mac_etype_free;   /* Un-used perfect match MAC/EtherType filters */
+	u16 etype_free;       /* Un-used perfect EtherType filters */
+};
+
+enum avf_reset_type {
+	AVF_RESET_POR		= 0,
+	AVF_RESET_CORER	= 1,
+	AVF_RESET_GLOBR	= 2,
+	AVF_RESET_EMPR		= 3,
+};
+
+/* IEEE 802.1AB LLDP Agent Variables from NVM */
+#define AVF_NVM_LLDP_CFG_PTR   0x06
+#define AVF_SR_LLDP_CFG_PTR    0x31
+struct avf_lldp_variables {
+	u16 length;
+	u16 adminstatus;
+	u16 msgfasttx;
+	u16 msgtxinterval;
+	u16 txparams;
+	u16 timers;
+	u16 crc8;
+};
+
+/* Offsets into Alternate RAM */
+#define AVF_ALT_STRUCT_FIRST_PF_OFFSET		0   /* in dwords */
+#define AVF_ALT_STRUCT_DWORDS_PER_PF		64   /* in dwords */
+#define AVF_ALT_STRUCT_OUTER_VLAN_TAG_OFFSET	0xD  /* in dwords */
+#define AVF_ALT_STRUCT_USER_PRIORITY_OFFSET	0xC  /* in dwords */
+#define AVF_ALT_STRUCT_MIN_BW_OFFSET		0xE  /* in dwords */
+#define AVF_ALT_STRUCT_MAX_BW_OFFSET		0xF  /* in dwords */
+
+/* Alternate RAM Bandwidth Masks */
+#define AVF_ALT_BW_VALUE_MASK		0xFF
+#define AVF_ALT_BW_RELATIVE_MASK	0x40000000
+#define AVF_ALT_BW_VALID_MASK		0x80000000
+
+/* RSS Hash Table Size */
+#define AVF_PFQF_CTL_0_HASHLUTSIZE_512	0x00010000
+
+/* INPUT SET MASK for RSS, flow director, and flexible payload */
+#define AVF_L3_SRC_SHIFT		47
+#define AVF_L3_SRC_MASK		(0x3ULL << AVF_L3_SRC_SHIFT)
+#define AVF_L3_V6_SRC_SHIFT		43
+#define AVF_L3_V6_SRC_MASK		(0xFFULL << AVF_L3_V6_SRC_SHIFT)
+#define AVF_L3_DST_SHIFT		35
+#define AVF_L3_DST_MASK		(0x3ULL << AVF_L3_DST_SHIFT)
+#define AVF_L3_V6_DST_SHIFT		35
+#define AVF_L3_V6_DST_MASK		(0xFFULL << AVF_L3_V6_DST_SHIFT)
+#define AVF_L4_SRC_SHIFT		34
+#define AVF_L4_SRC_MASK		(0x1ULL << AVF_L4_SRC_SHIFT)
+#define AVF_L4_DST_SHIFT		33
+#define AVF_L4_DST_MASK		(0x1ULL << AVF_L4_DST_SHIFT)
+#define AVF_VERIFY_TAG_SHIFT		31
+#define AVF_VERIFY_TAG_MASK		(0x3ULL << AVF_VERIFY_TAG_SHIFT)
+
+#define AVF_FLEX_50_SHIFT		13
+#define AVF_FLEX_50_MASK		(0x1ULL << AVF_FLEX_50_SHIFT)
+#define AVF_FLEX_51_SHIFT		12
+#define AVF_FLEX_51_MASK		(0x1ULL << AVF_FLEX_51_SHIFT)
+#define AVF_FLEX_52_SHIFT		11
+#define AVF_FLEX_52_MASK		(0x1ULL << AVF_FLEX_52_SHIFT)
+#define AVF_FLEX_53_SHIFT		10
+#define AVF_FLEX_53_MASK		(0x1ULL << AVF_FLEX_53_SHIFT)
+#define AVF_FLEX_54_SHIFT		9
+#define AVF_FLEX_54_MASK		(0x1ULL << AVF_FLEX_54_SHIFT)
+#define AVF_FLEX_55_SHIFT		8
+#define AVF_FLEX_55_MASK		(0x1ULL << AVF_FLEX_55_SHIFT)
+#define AVF_FLEX_56_SHIFT		7
+#define AVF_FLEX_56_MASK		(0x1ULL << AVF_FLEX_56_SHIFT)
+#define AVF_FLEX_57_SHIFT		6
+#define AVF_FLEX_57_MASK		(0x1ULL << AVF_FLEX_57_SHIFT)
+
+/* Version format for Dynamic Device Personalization (DDP) */
+struct avf_ddp_version {
+	u8 major;
+	u8 minor;
+	u8 update;
+	u8 draft;
+};
+
+#define AVF_DDP_NAME_SIZE	32
+
+/* Package header */
+struct avf_package_header {
+	struct avf_ddp_version version;
+	u32 segment_count;
+	u32 segment_offset[1];
+};
+
+/* Generic segment header */
+struct avf_generic_seg_header {
+#define SEGMENT_TYPE_METADATA	0x00000001
+#define SEGMENT_TYPE_NOTES	0x00000002
+#define SEGMENT_TYPE_AVF	0x00000011
+#define SEGMENT_TYPE_X722	0x00000012
+	u32 type;
+	struct avf_ddp_version version;
+	u32 size;
+	char name[AVF_DDP_NAME_SIZE];
+};
+
+struct avf_metadata_segment {
+	struct avf_generic_seg_header header;
+	struct avf_ddp_version version;
+#define AVF_DDP_TRACKID_RDONLY		0
+#define AVF_DDP_TRACKID_INVALID	0xFFFFFFFF
+	u32 track_id;
+	char name[AVF_DDP_NAME_SIZE];
+};
+
+struct avf_device_id_entry {
+	u32 vendor_dev_id;
+	u32 sub_vendor_dev_id;
+};
+
+struct avf_profile_segment {
+	struct avf_generic_seg_header header;
+	struct avf_ddp_version version;
+	char name[AVF_DDP_NAME_SIZE];
+	u32 device_table_count;
+	struct avf_device_id_entry device_table[1];
+};
+
+struct avf_section_table {
+	u32 section_count;
+	u32 section_offset[1];
+};
+
+struct avf_profile_section_header {
+	u16 tbl_size;
+	u16 data_end;
+	struct {
+#define SECTION_TYPE_INFO	0x00000010
+#define SECTION_TYPE_MMIO	0x00000800
+#define SECTION_TYPE_RB_MMIO	0x00001800
+#define SECTION_TYPE_AQ		0x00000801
+#define SECTION_TYPE_RB_AQ	0x00001801
+#define SECTION_TYPE_NOTE	0x80000000
+#define SECTION_TYPE_NAME	0x80000001
+#define SECTION_TYPE_PROTO	0x80000002
+#define SECTION_TYPE_PCTYPE	0x80000003
+#define SECTION_TYPE_PTYPE	0x80000004
+		u32 type;
+		u32 offset;
+		u32 size;
+	} section;
+};
+
+struct avf_profile_tlv_section_record {
+	u8 rtype;
+	u8 type;
+	u16 len;
+	u8 data[12];
+};
+
+/* Generic AQ section in profile */
+struct avf_profile_aq_section {
+	u16 opcode;
+	u16 flags;
+	u8  param[16];
+	u16 datalen;
+	u8  data[1];
+};
+
+struct avf_profile_info {
+	u32 track_id;
+	struct avf_ddp_version version;
+	u8 op;
+#define AVF_DDP_ADD_TRACKID		0x01
+#define AVF_DDP_REMOVE_TRACKID	0x02
+	u8 reserved[7];
+	u8 name[AVF_DDP_NAME_SIZE];
+};
+#endif /* _AVF_TYPE_H_ */
diff --git a/drivers/net/avf/base/virtchnl.h b/drivers/net/avf/base/virtchnl.h
new file mode 100644
index 0000000..167518f
--- /dev/null
+++ b/drivers/net/avf/base/virtchnl.h
@@ -0,0 +1,787 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _VIRTCHNL_H_
+#define _VIRTCHNL_H_
+
+/* Description:
+ * This header file describes the VF-PF communication protocol used
+ * by the drivers for all devices starting from our 40G product line
+ *
+ * Admin queue buffer usage:
+ * desc->opcode is always aqc_opc_send_msg_to_pf
+ * flags, retval, datalen, and data addr are all used normally.
+ * The Firmware copies the cookie fields when sending messages between the
+ * PF and VF, but uses all other fields internally. Due to this limitation,
+ * we must send all messages as "indirect", i.e. using an external buffer.
+ *
+ * All the VSI indexes are relative to the VF. Each VF can have maximum of
+ * three VSIs. All the queue indexes are relative to the VSI.  Each VF can
+ * have a maximum of sixteen queues for all of its VSIs.
+ *
+ * The PF is required to return a status code in v_retval for all messages
+ * except RESET_VF, which does not require any response. The return value
+ * is of status_code type, defined in the shared type.h.
+ *
+ * In general, VF driver initialization should roughly follow the order of
+ * these opcodes. The VF driver must first validate the API version of the
+ * PF driver, then request a reset, then get resources, then configure
+ * queues and interrupts. After these operations are complete, the VF
+ * driver may start its queues, optionally add MAC and VLAN filters, and
+ * process traffic.
+ */
+
+/* START GENERIC DEFINES
+ * Need to ensure the following enums and defines hold the same meaning and
+ * value in current and future projects
+ */
+
+/* Error Codes */
+enum virtchnl_status_code {
+	VIRTCHNL_STATUS_SUCCESS				= 0,
+	VIRTCHNL_ERR_PARAM				= -5,
+	VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH		= -38,
+	VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR		= -39,
+	VIRTCHNL_STATUS_ERR_INVALID_VF_ID		= -40,
+	VIRTCHNL_STATUS_NOT_SUPPORTED			= -64,
+};
+
+#define VIRTCHNL_LINK_SPEED_100MB_SHIFT		0x1
+#define VIRTCHNL_LINK_SPEED_1000MB_SHIFT	0x2
+#define VIRTCHNL_LINK_SPEED_10GB_SHIFT		0x3
+#define VIRTCHNL_LINK_SPEED_40GB_SHIFT		0x4
+#define VIRTCHNL_LINK_SPEED_20GB_SHIFT		0x5
+#define VIRTCHNL_LINK_SPEED_25GB_SHIFT		0x6
+
+enum virtchnl_link_speed {
+	VIRTCHNL_LINK_SPEED_UNKNOWN	= 0,
+	VIRTCHNL_LINK_SPEED_100MB	= BIT(VIRTCHNL_LINK_SPEED_100MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_1GB		= BIT(VIRTCHNL_LINK_SPEED_1000MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_10GB	= BIT(VIRTCHNL_LINK_SPEED_10GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_40GB	= BIT(VIRTCHNL_LINK_SPEED_40GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_20GB	= BIT(VIRTCHNL_LINK_SPEED_20GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_25GB	= BIT(VIRTCHNL_LINK_SPEED_25GB_SHIFT),
+};
+
+/* for hsplit_0 field of Rx HMC context */
+/* deprecated with AVF 1.0 */
+enum virtchnl_rx_hsplit {
+	VIRTCHNL_RX_HSPLIT_NO_SPLIT      = 0,
+	VIRTCHNL_RX_HSPLIT_SPLIT_L2      = 1,
+	VIRTCHNL_RX_HSPLIT_SPLIT_IP      = 2,
+	VIRTCHNL_RX_HSPLIT_SPLIT_TCP_UDP = 4,
+	VIRTCHNL_RX_HSPLIT_SPLIT_SCTP    = 8,
+};
+
+#define VIRTCHNL_ETH_LENGTH_OF_ADDRESS	6
+/* END GENERIC DEFINES */
+
+/* Opcodes for VF-PF communication. These are placed in the v_opcode field
+ * of the virtchnl_msg structure.
+ */
+enum virtchnl_ops {
+/* The PF sends status change events to VFs using
+ * the VIRTCHNL_OP_EVENT opcode.
+ * VFs send requests to the PF using the other ops.
+ * Use of "advanced opcode" features must be negotiated as part of capabilities
+ * exchange and is not considered part of the base mode feature set.
+ */
+	VIRTCHNL_OP_UNKNOWN = 0,
+	VIRTCHNL_OP_VERSION = 1, /* must ALWAYS be 1 */
+	VIRTCHNL_OP_RESET_VF = 2,
+	VIRTCHNL_OP_GET_VF_RESOURCES = 3,
+	VIRTCHNL_OP_CONFIG_TX_QUEUE = 4,
+	VIRTCHNL_OP_CONFIG_RX_QUEUE = 5,
+	VIRTCHNL_OP_CONFIG_VSI_QUEUES = 6,
+	VIRTCHNL_OP_CONFIG_IRQ_MAP = 7,
+	VIRTCHNL_OP_ENABLE_QUEUES = 8,
+	VIRTCHNL_OP_DISABLE_QUEUES = 9,
+	VIRTCHNL_OP_ADD_ETH_ADDR = 10,
+	VIRTCHNL_OP_DEL_ETH_ADDR = 11,
+	VIRTCHNL_OP_ADD_VLAN = 12,
+	VIRTCHNL_OP_DEL_VLAN = 13,
+	VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE = 14,
+	VIRTCHNL_OP_GET_STATS = 15,
+	VIRTCHNL_OP_RSVD = 16,
+	VIRTCHNL_OP_EVENT = 17, /* must ALWAYS be 17 */
+#ifdef VIRTCHNL_SOL_VF_SUPPORT
+	VIRTCHNL_OP_GET_ADDNL_SOL_CONFIG = 19,
+#endif
+#ifdef VIRTCHNL_IWARP
+	VIRTCHNL_OP_IWARP = 20, /* advanced opcode */
+	VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP = 21, /* advanced opcode */
+	VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP = 22, /* advanced opcode */
+#endif
+	VIRTCHNL_OP_CONFIG_RSS_KEY = 23,
+	VIRTCHNL_OP_CONFIG_RSS_LUT = 24,
+	VIRTCHNL_OP_GET_RSS_HENA_CAPS = 25,
+	VIRTCHNL_OP_SET_RSS_HENA = 26,
+	VIRTCHNL_OP_ENABLE_VLAN_STRIPPING = 27,
+	VIRTCHNL_OP_DISABLE_VLAN_STRIPPING = 28,
+	VIRTCHNL_OP_REQUEST_QUEUES = 29,
+
+};
+
+/* This macro is used to generate a compilation error if a structure
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure is not of the correct size, otherwise it creates an enum that is
+ * never used.
+ */
+#define VIRTCHNL_CHECK_STRUCT_LEN(n, X) enum virtchnl_static_assert_enum_##X \
+	{virtchnl_static_assert_##X = (n) / ((sizeof(struct X) == (n)) ? 1 : 0)}
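+
+/* For example, VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg) below expands
+ * roughly to:
+ *
+ *   enum virtchnl_static_assert_enum_virtchnl_msg {
+ *       virtchnl_static_assert_virtchnl_msg =
+ *           (20) / ((sizeof(struct virtchnl_msg) == (20)) ? 1 : 0)
+ *   };
+ *
+ * so any size mismatch becomes a divide-by-zero error at compile time.
+ */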
+
+/* Virtual channel message descriptor. This overlays the admin queue
+ * descriptor. All other data is passed in external buffers.
+ */
+
+struct virtchnl_msg {
+	u8 pad[8];			 /* AQ flags/opcode/len/retval fields */
+	enum virtchnl_ops v_opcode; /* avoid confusion with desc->opcode */
+	enum virtchnl_status_code v_retval;  /* ditto for desc->retval */
+	u32 vfid;			 /* used by PF when sending to VF */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg);
+
+/* Message descriptions and data structures.*/
+
+/* VIRTCHNL_OP_VERSION
+ * VF posts its version number to the PF. PF responds with its version number
+ * in the same format, along with a return code.
+ * Reply from PF has its major/minor versions also in param0 and param1.
+ * If there is a major version mismatch, then the VF cannot operate.
+ * If there is a minor version mismatch, then the VF can operate but should
+ * add a warning to the system log.
+ *
+ * This enum element MUST always be specified as == 1, regardless of other
+ * changes in the API. The PF must always respond to this message without
+ * error regardless of version mismatch.
+ */
+#define VIRTCHNL_VERSION_MAJOR		1
+#define VIRTCHNL_VERSION_MINOR		1
+#define VIRTCHNL_VERSION_MINOR_NO_VF_CAPS	0
+
+struct virtchnl_version_info {
+	u32 major;
+	u32 minor;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_version_info);
+
+#define VF_IS_V10(_v) (((_v)->major == 1) && ((_v)->minor == 0))
+#define VF_IS_V11(_ver) (((_ver)->major == 1) && ((_ver)->minor == 1))
+
+/* VIRTCHNL_OP_RESET_VF
+ * VF sends this request to PF with no parameters
+ * PF does NOT respond! VF driver must delay, then poll VFGEN_RSTAT register
+ * until reset completion is indicated. The admin queue must be reinitialized
+ * after this operation.
+ *
+ * When reset is complete, PF must ensure that all queues in all VSIs associated
+ * with the VF are stopped, all queue configurations in the HMC are set to 0,
+ * and all MAC and VLAN filters (except the default MAC address) on all VSIs
+ * are cleared.
+ */
+
+/* VSI types that use VIRTCHNL interface for VF-PF communication. VSI_SRIOV
+ * vsi_type should always be 6 for backward compatibility. Add other fields
+ * as needed.
+ */
+enum virtchnl_vsi_type {
+	VIRTCHNL_VSI_TYPE_INVALID = 0,
+	VIRTCHNL_VSI_SRIOV = 6,
+};
+
+/* VIRTCHNL_OP_GET_VF_RESOURCES
+ * Version 1.0 VF sends this request to PF with no parameters
+ * Version 1.1 VF sends this request to PF with u32 bitmap of its capabilities
+ * PF responds with an indirect message containing
+ * virtchnl_vf_resource and one or more
+ * virtchnl_vsi_resource structures.
+ */
+
+struct virtchnl_vsi_resource {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	enum virtchnl_vsi_type vsi_type;
+	u16 qset_handle;
+	u8 default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
+
+/* VF capability flags
+ * VIRTCHNL_VF_OFFLOAD_L2 flag is inclusive of base mode L2 offloads including
+ * TX/RX Checksum offloading and TSO for non-tunnelled packets.
+ */
+#define VIRTCHNL_VF_OFFLOAD_L2			0x00000001
+#define VIRTCHNL_VF_OFFLOAD_IWARP		0x00000002
+#define VIRTCHNL_VF_OFFLOAD_RSVD		0x00000004
+#define VIRTCHNL_VF_OFFLOAD_RSS_AQ		0x00000008
+#define VIRTCHNL_VF_OFFLOAD_RSS_REG		0x00000010
+#define VIRTCHNL_VF_OFFLOAD_WB_ON_ITR		0x00000020
+#define VIRTCHNL_VF_OFFLOAD_REQ_QUEUES		0x00000040
+#define VIRTCHNL_VF_OFFLOAD_VLAN		0x00010000
+#define VIRTCHNL_VF_OFFLOAD_RX_POLLING		0x00020000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2	0x00040000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PF		0x00080000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP		0x00100000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM		0x00200000
+#define VIRTCHNL_VF_OFFLOAD_RX_ENCAP_CSUM	0x00400000
+
+#define VF_BASE_MODE_OFFLOADS (VIRTCHNL_VF_OFFLOAD_L2 | \
+			       VIRTCHNL_VF_OFFLOAD_VLAN | \
+			       VIRTCHNL_VF_OFFLOAD_RSS_PF)
+
+struct virtchnl_vf_resource {
+	u16 num_vsis;
+	u16 num_queue_pairs;
+	u16 max_vectors;
+	u16 max_mtu;
+
+	u32 vf_cap_flags;
+	u32 rss_key_size;
+	u32 rss_lut_size;
+
+	struct virtchnl_vsi_resource vsi_res[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(36, virtchnl_vf_resource);
+
+/* VIRTCHNL_OP_CONFIG_TX_QUEUE
+ * VF sends this message to set up parameters for one TX queue.
+ * External data buffer contains one instance of virtchnl_txq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Tx queue config info */
+struct virtchnl_txq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u16 ring_len;		/* number of descriptors, multiple of 8 */
+	u16 headwb_enabled; /* deprecated with AVF 1.0 */
+	u64 dma_ring_addr;
+	u64 dma_headwb_addr; /* deprecated with AVF 1.0 */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_txq_info);
+
+/* VIRTCHNL_OP_CONFIG_RX_QUEUE
+ * VF sends this message to set up parameters for one RX queue.
+ * External data buffer contains one instance of virtchnl_rxq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Rx queue config info */
+struct virtchnl_rxq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u32 ring_len;		/* number of descriptors, multiple of 32 */
+	u16 hdr_size;
+	u16 splithdr_enabled; /* deprecated with AVF 1.0 */
+	u32 databuffer_size;
+	u32 max_pkt_size;
+	u32 pad1;
+	u64 dma_ring_addr;
+	enum virtchnl_rx_hsplit rx_split_pos; /* deprecated with AVF 1.0 */
+	u32 pad2;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_rxq_info);
+
+/* VIRTCHNL_OP_CONFIG_VSI_QUEUES
+ * VF sends this message to set parameters for all active TX and RX queues
+ * associated with the specified VSI.
+ * PF configures queues and returns status.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the VSI, an error is returned and no queues are configured.
+ */
+struct virtchnl_queue_pair_info {
+	/* NOTE: vsi_id and queue_id should be identical for both queues. */
+	struct virtchnl_txq_info txq;
+	struct virtchnl_rxq_info rxq;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(64, virtchnl_queue_pair_info);
+
+struct virtchnl_vsi_queue_config_info {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	u32 pad;
+	struct virtchnl_queue_pair_info qpair[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_vsi_queue_config_info);
+
+/* VIRTCHNL_OP_REQUEST_QUEUES
+ * VF sends this message to request the PF to allocate additional queues to
+ * this VF.  Each VF gets a guaranteed number of queues on init but asking for
+ * additional queues must be negotiated.  This is a best effort request as it
+ * is possible the PF does not have enough queues left to support the request.
+ * If the PF cannot support the number requested it will respond with the
+ * maximum number it is able to support.  If the request is successful, PF will
+ * then reset the VF to institute required changes.
+ */
+
+/* VF resource request */
+struct virtchnl_vf_res_request {
+	u16 num_queue_pairs;
+};
+
+/* VIRTCHNL_OP_CONFIG_IRQ_MAP
+ * VF uses this message to map vectors to queues.
+ * The rxq_map and txq_map fields are bitmaps used to indicate which queues
+ * are to be associated with the specified vector.
+ * The "other" causes are always mapped to vector 0.
+ * PF configures interrupt mapping and returns status.
+ */
+struct virtchnl_vector_map {
+	u16 vsi_id;
+	u16 vector_id;
+	u16 rxq_map;
+	u16 txq_map;
+	u16 rxitr_idx;
+	u16 txitr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_vector_map);
+
+struct virtchnl_irq_map_info {
+	u16 num_vectors;
+	struct virtchnl_vector_map vecmap[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(14, virtchnl_irq_map_info);
+
+/* VIRTCHNL_OP_ENABLE_QUEUES
+ * VIRTCHNL_OP_DISABLE_QUEUES
+ * VF sends these messages to enable or disable TX/RX queue pairs.
+ * The queues fields are bitmaps indicating which queues to act upon.
+ * (Currently, we only support 16 queues per VF, but we make the field
+ * u32 to allow for expansion.)
+ * PF performs requested action and returns status.
+ */
+struct virtchnl_queue_select {
+	u16 vsi_id;
+	u16 pad;
+	u32 rx_queues;
+	u32 tx_queues;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_select);
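+
+/* Illustrative sketch only (not part of the base code): selecting the
+ * first num_queue_pairs Rx/Tx queues of a VSI in an enable/disable
+ * request. The helper name is hypothetical.
+ */
+static inline void
+virtchnl_example_select_queues(struct virtchnl_queue_select *sel,
+			       u16 vsi_id, u16 num_queue_pairs)
+{
+	sel->vsi_id = vsi_id;
+	sel->pad = 0;
+	/* bitmap with bits 0..num_queue_pairs-1 set */
+	sel->rx_queues = BIT(num_queue_pairs) - 1;
+	sel->tx_queues = sel->rx_queues;
+}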
+
+/* VIRTCHNL_OP_ADD_ETH_ADDR
+ * VF sends this message in order to add one or more unicast or multicast
+ * address filters for the specified VSI.
+ * PF adds the filters and returns status.
+ */
+
+/* VIRTCHNL_OP_DEL_ETH_ADDR
+ * VF sends this message in order to remove one or more unicast or multicast
+ * filters for the specified VSI.
+ * PF removes the filters and returns status.
+ */
+
+struct virtchnl_ether_addr {
+	u8 addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+	u8 pad[2];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_ether_addr);
+
+struct virtchnl_ether_addr_list {
+	u16 vsi_id;
+	u16 num_elements;
+	struct virtchnl_ether_addr list[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_ether_addr_list);
+
+#ifdef VIRTCHNL_SOL_VF_SUPPORT
+/* VIRTCHNL_OP_GET_ADDNL_SOL_CONFIG
+ * VF sends this message to get the default MTU and list of additional ethernet
+ * addresses it is allowed to use.
+ * PF responds with an indirect message containing
+ * virtchnl_addnl_solaris_config with zero or more
+ * virtchnl_ether_addr structures.
+ *
+ * It is expected that this operation will only ever be needed for Solaris VFs
+ * running under a Solaris PF.
+ */
+struct virtchnl_addnl_solaris_config {
+	u16 default_mtu;
+	struct virtchnl_ether_addr_list al;
+};
+
+#endif
+/* VIRTCHNL_OP_ADD_VLAN
+ * VF sends this message to add one or more VLAN tag filters for receives.
+ * PF adds the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+/* VIRTCHNL_OP_DEL_VLAN
+ * VF sends this message to remove one or more VLAN tag filters for receives.
+ * PF removes the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+struct virtchnl_vlan_filter_list {
+	u16 vsi_id;
+	u16 num_elements;
+	u16 vlan_id[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vlan_filter_list);
+
+/* VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE
+ * VF sends VSI id and flags.
+ * PF returns status code in retval.
+ * Note: we assume that broadcast accept mode is always enabled.
+ */
+struct virtchnl_promisc_info {
+	u16 vsi_id;
+	u16 flags;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_promisc_info);
+
+#define FLAG_VF_UNICAST_PROMISC	0x00000001
+#define FLAG_VF_MULTICAST_PROMISC	0x00000002
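+/* e.g. setting flags = FLAG_VF_UNICAST_PROMISC | FLAG_VF_MULTICAST_PROMISC
+ * requests both unicast and multicast promiscuous mode.
+ */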
+
+/* VIRTCHNL_OP_GET_STATS
+ * VF sends this message to request stats for the selected VSI. VF uses
+ * the virtchnl_queue_select struct to specify the VSI. The queue_id
+ * field is ignored by the PF.
+ *
+ * PF replies with struct virtchnl_eth_stats in an external buffer.
+ */
+
+struct virtchnl_eth_stats {
+	u64 rx_bytes;			/* received bytes */
+	u64 rx_unicast;			/* received unicast pkts */
+	u64 rx_multicast;		/* received multicast pkts */
+	u64 rx_broadcast;		/* received broadcast pkts */
+	u64 rx_discards;
+	u64 rx_unknown_protocol;
+	u64 tx_bytes;			/* transmitted bytes */
+	u64 tx_unicast;			/* transmitted unicast pkts */
+	u64 tx_multicast;		/* transmitted multicast pkts */
+	u64 tx_broadcast;		/* transmitted broadcast pkts */
+	u64 tx_discards;
+	u64 tx_errors;
+};
+
+/* VIRTCHNL_OP_CONFIG_RSS_KEY
+ * VIRTCHNL_OP_CONFIG_RSS_LUT
+ * VF sends these messages to configure RSS. Only supported if both PF
+ * and VF drivers set the VIRTCHNL_VF_OFFLOAD_RSS_PF bit during
+ * configuration negotiation. If this is the case, then the RSS fields in
+ * the VF resource struct are valid.
+ * Both the key and LUT are initialized to 0 by the PF, meaning that
+ * RSS is effectively disabled until set up by the VF.
+ */
+struct virtchnl_rss_key {
+	u16 vsi_id;
+	u16 key_len;
+	u8 key[1];         /* RSS hash key, packed bytes */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key);
+
+struct virtchnl_rss_lut {
+	u16 vsi_id;
+	u16 lut_entries;
+	u8 lut[1];        /* RSS lookup table */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
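+
+/* The key and lut arrays above are variable length. The total message size
+ * is the struct size plus (key_len - 1) or (lut_entries - 1) extra bytes,
+ * matching the length checks in virtchnl_vc_validate_vf_msg() below.
+ */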
+
+/* VIRTCHNL_OP_GET_RSS_HENA_CAPS
+ * VIRTCHNL_OP_SET_RSS_HENA
+ * VF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the PF sets these to all possible traffic types that the
+ * hardware supports. The VF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ */
+struct virtchnl_rss_hena {
+	u64 hena;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_rss_hena);
+
+/* VIRTCHNL_OP_EVENT
+ * PF sends this message to inform the VF driver of events that may affect it.
+ * No direct response is expected from the VF, though it may generate other
+ * messages in response to this one.
+ */
+enum virtchnl_event_codes {
+	VIRTCHNL_EVENT_UNKNOWN = 0,
+	VIRTCHNL_EVENT_LINK_CHANGE,
+	VIRTCHNL_EVENT_RESET_IMPENDING,
+	VIRTCHNL_EVENT_PF_DRIVER_CLOSE,
+};
+
+#define PF_EVENT_SEVERITY_INFO		0
+#define PF_EVENT_SEVERITY_ATTENTION	1
+#define PF_EVENT_SEVERITY_ACTION_REQUIRED	2
+#define PF_EVENT_SEVERITY_CERTAIN_DOOM	255
+
+struct virtchnl_pf_event {
+	enum virtchnl_event_codes event;
+	union {
+		struct {
+			enum virtchnl_link_speed link_speed;
+			bool link_status;
+		} link_event;
+	} event_data;
+
+	int severity;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_pf_event);
+
+#ifdef VIRTCHNL_IWARP
+
+/* VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP
+ * VF uses this message to request PF to map IWARP vectors to IWARP queues.
+ * The request for this originates from the VF IWARP driver through
+ * a client interface between VF LAN and VF IWARP driver.
+ * A vector could have an AEQ and CEQ attached to it although
+ * there is a single AEQ per VF IWARP instance in which case
+ * most vectors will have an INVALID_IDX for aeq and valid idx for ceq.
+ * There will never be a case where there will be multiple CEQs attached
+ * to a single vector.
+ * PF configures interrupt mapping and returns status.
+ */
+
+/* HW does not define a type value for AEQ; only for RX/TX and CEQ.
+ * In order for us to keep the interface simple, SW will define a
+ * unique type value for AEQ.
+ */
+#define QUEUE_TYPE_PE_AEQ  0x80
+#define QUEUE_INVALID_IDX  0xFFFF
+
+struct virtchnl_iwarp_qv_info {
+	u32 v_idx; /* msix_vector */
+	u16 ceq_idx;
+	u16 aeq_idx;
+	u8 itr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_iwarp_qv_info);
+
+struct virtchnl_iwarp_qvlist_info {
+	u32 num_vectors;
+	struct virtchnl_iwarp_qv_info qv_info[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_iwarp_qvlist_info);
+
+#endif
+
+/* VF reset states - these are written into the RSTAT register:
+ * VFGEN_RSTAT on the VF
+ * When the PF initiates a reset, it writes 0
+ * When the reset is complete, it writes 1
+ * When the PF detects that the VF has recovered, it writes 2
+ * VF checks this register periodically to determine if a reset has occurred,
+ * then polls it to know when the reset is complete.
+ * If either the PF or VF reads the register while the hardware
+ * is in a reset state, it will return DEADBEEF, which, when masked
+ * will result in 3.
+ */
+enum virtchnl_vfr_states {
+	VIRTCHNL_VFR_INPROGRESS = 0,
+	VIRTCHNL_VFR_COMPLETED,
+	VIRTCHNL_VFR_VFACTIVE,
+};
+
+/**
+ * virtchnl_vc_validate_vf_msg
+ * @ver: Virtchnl version info
+ * @v_opcode: Opcode for the message
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * validate msg format against struct for each opcode
+ */
+static inline int
+virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
+			    u8 *msg, u16 msglen)
+{
+	bool err_msg_format = false;
+	int valid_len = 0;
+
+	/* Validate message length. */
+	switch (v_opcode) {
+	case VIRTCHNL_OP_VERSION:
+		valid_len = sizeof(struct virtchnl_version_info);
+		break;
+	case VIRTCHNL_OP_RESET_VF:
+		break;
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		if (VF_IS_V11(ver))
+			valid_len = sizeof(u32);
+		break;
+	case VIRTCHNL_OP_CONFIG_TX_QUEUE:
+		valid_len = sizeof(struct virtchnl_txq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_RX_QUEUE:
+		valid_len = sizeof(struct virtchnl_rxq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+		valid_len = sizeof(struct virtchnl_vsi_queue_config_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_vsi_queue_config_info *vqc =
+			    (struct virtchnl_vsi_queue_config_info *)msg;
+			valid_len += (vqc->num_queue_pairs *
+				      sizeof(struct
+					     virtchnl_queue_pair_info));
+			if (vqc->num_queue_pairs == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+		valid_len = sizeof(struct virtchnl_irq_map_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_irq_map_info *vimi =
+			    (struct virtchnl_irq_map_info *)msg;
+			valid_len += (vimi->num_vectors *
+				      sizeof(struct virtchnl_vector_map));
+			if (vimi->num_vectors == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ENABLE_QUEUES:
+	case VIRTCHNL_OP_DISABLE_QUEUES:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+	case VIRTCHNL_OP_ADD_ETH_ADDR:
+	case VIRTCHNL_OP_DEL_ETH_ADDR:
+		valid_len = sizeof(struct virtchnl_ether_addr_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_ether_addr_list *veal =
+			    (struct virtchnl_ether_addr_list *)msg;
+			valid_len += veal->num_elements *
+			    sizeof(struct virtchnl_ether_addr);
+			if (veal->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ADD_VLAN:
+	case VIRTCHNL_OP_DEL_VLAN:
+		valid_len = sizeof(struct virtchnl_vlan_filter_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_vlan_filter_list *vfl =
+			    (struct virtchnl_vlan_filter_list *)msg;
+			valid_len += vfl->num_elements * sizeof(u16);
+			if (vfl->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+		valid_len = sizeof(struct virtchnl_promisc_info);
+		break;
+	case VIRTCHNL_OP_GET_STATS:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+#ifdef VIRTCHNL_IWARP
+	case VIRTCHNL_OP_IWARP:
+		/* These messages are opaque to us and will be validated in
+		 * the RDMA client code. We just need to check for nonzero
+		 * length. The firmware will enforce max length restrictions.
+		 */
+		if (msglen)
+			valid_len = msglen;
+		else
+			err_msg_format = true;
+		break;
+	case VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP:
+		break;
+	case VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP:
+		valid_len = sizeof(struct virtchnl_iwarp_qvlist_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_iwarp_qvlist_info *qv =
+				(struct virtchnl_iwarp_qvlist_info *)msg;
+			if (qv->num_vectors == 0) {
+				err_msg_format = true;
+				break;
+			}
+			valid_len += ((qv->num_vectors - 1) *
+				sizeof(struct virtchnl_iwarp_qv_info));
+		}
+		break;
+#endif
+	case VIRTCHNL_OP_CONFIG_RSS_KEY:
+		valid_len = sizeof(struct virtchnl_rss_key);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_key *vrk =
+				(struct virtchnl_rss_key *)msg;
+			valid_len += vrk->key_len - 1;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_LUT:
+		valid_len = sizeof(struct virtchnl_rss_lut);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_lut *vrl =
+				(struct virtchnl_rss_lut *)msg;
+			valid_len += vrl->lut_entries - 1;
+		}
+		break;
+	case VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+		break;
+	case VIRTCHNL_OP_SET_RSS_HENA:
+		valid_len = sizeof(struct virtchnl_rss_hena);
+		break;
+	case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+	case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+		break;
+	case VIRTCHNL_OP_REQUEST_QUEUES:
+		valid_len = sizeof(struct virtchnl_vf_res_request);
+		break;
+	/* These are always errors coming from the VF. */
+	case VIRTCHNL_OP_EVENT:
+	case VIRTCHNL_OP_UNKNOWN:
+	default:
+		return VIRTCHNL_ERR_PARAM;
+	}
+	/* few more checks */
+	if (err_msg_format || valid_len != msglen)
+		return VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH;
+
+	return 0;
+}
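+
+/* Typical PF-side usage (illustrative sketch, names are placeholders):
+ *
+ *	ret = virtchnl_vc_validate_vf_msg(&vf_ver, v_opcode, msg, msglen);
+ *	if (ret)
+ *		reply to the VF with the returned error and drop the message;
+ *	else
+ *		dispatch v_opcode to the matching handler.
+ */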
+#endif /* _VIRTCHNL_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v6 02/14] net/avf: initialization of avf PMD
  2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] add new AVF PMD Wenzhuo Lu
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 01/14] net/avf/base: add base code for avf PMD Wenzhuo Lu
@ 2018-01-10  6:15         ` Wenzhuo Lu
  2018-01-10 17:15           ` Stephen Hemminger
  2018-01-10 17:17           ` Stephen Hemminger
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 03/14] net/avf: enable queue and device Wenzhuo Lu
                           ` (12 subsequent siblings)
  14 siblings, 2 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10  6:15 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 config/common_base                      |   5 +
 drivers/net/Makefile                    |   1 +
 drivers/net/avf/Makefile                |  47 ++++
 drivers/net/avf/avf.h                   | 187 ++++++++++++++
 drivers/net/avf/avf_ethdev.c            | 435 ++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c             | 304 ++++++++++++++++++++++
 drivers/net/avf/rte_pmd_avf_version.map |   4 +
 mk/rte.app.mk                           |   1 +
 8 files changed, 984 insertions(+)
 create mode 100644 drivers/net/avf/Makefile
 create mode 100644 drivers/net/avf/avf.h
 create mode 100644 drivers/net/avf/avf_ethdev.c
 create mode 100644 drivers/net/avf/avf_vchnl.c
 create mode 100644 drivers/net/avf/rte_pmd_avf_version.map

diff --git a/config/common_base b/config/common_base
index e74febe..f333209 100644
--- a/config/common_base
+++ b/config/common_base
@@ -226,6 +226,11 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
 CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 
 #
+# Compile burst-oriented AVF PMD driver
+#
+CONFIG_RTE_LIBRTE_AVF_PMD=y
+
+#
 # Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD
 #
 CONFIG_RTE_LIBRTE_MLX4_PMD=n
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 84b137f..c2fd7f5 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -10,6 +10,7 @@ endif
 
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += af_packet
 DIRS-$(CONFIG_RTE_LIBRTE_ARK_PMD) += ark
+DIRS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf
 DIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp
 DIRS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD) += bnx2x
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
new file mode 100644
index 0000000..2376cfd
--- /dev/null
+++ b/drivers/net/avf/Makefile
@@ -0,0 +1,47 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2017 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_avf.a
+
+CFLAGS += -O3
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
+LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_hash
+LDLIBS += -lrte_bus_pci
+
+EXPORT_MAP := rte_pmd_avf_version.map
+
+LIBABIVER := 1
+
+#
+# Add extra flags for base driver files (also known as shared code)
+# to disable warnings
+#
+ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
+CFLAGS_BASE_DRIVER =
+else ifeq ($(CONFIG_RTE_TOOLCHAIN_CLANG),y)
+CFLAGS_BASE_DRIVER = -Wno-pointer-to-int-cast
+else
+CFLAGS_BASE_DRIVER = -Wno-pointer-to-int-cast
+
+endif
+OBJS_BASE_DRIVER=$(sort $(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c))))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_adminq.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
new file mode 100644
index 0000000..4694cc5
--- /dev/null
+++ b/drivers/net/avf/avf.h
@@ -0,0 +1,187 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_ETHDEV_H_
+#define _AVF_ETHDEV_H_
+
+#include <rte_kvargs.h>
+
+#define AVF_AQ_LEN               32
+#define AVF_AQ_BUF_SZ            4096
+#define AVF_RESET_WAIT_CNT       50
+#define AVF_BUF_SIZE_MIN         1024
+#define AVF_FRAME_SIZE_MAX       9728
+#define AVF_QUEUE_BASE_ADDR_UNIT 128
+
+#define AVF_MAX_NUM_QUEUES       16
+/* Vlan table size */
+#define AVF_VLAN_TB_SIZE               (4096 / (CHAR_BIT * sizeof(uint32_t)))
+
+#define AVF_NUM_MACADDR_MAX      64
+
+#define AVF_DEFAULT_RX_PTHRESH      8
+#define AVF_DEFAULT_RX_HTHRESH      8
+#define AVF_DEFAULT_RX_WTHRESH      0
+
+#define AVF_DEFAULT_RX_FREE_THRESH  32
+
+#define AVF_DEFAULT_TX_PTHRESH      32
+#define AVF_DEFAULT_TX_HTHRESH      0
+#define AVF_DEFAULT_TX_WTHRESH      0
+
+#define AVF_DEFAULT_TX_FREE_THRESH  32
+#define AVF_DEFAULT_TX_RS_THRESH 32
+
+#define AVF_BASIC_OFFLOAD_CAPS  ( \
+	VF_BASE_MODE_OFFLOADS | \
+	VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | \
+	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
+
+#define AVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
+#define AVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
+
+/* Default queue interrupt throttling time in microseconds */
+#define AVF_ITR_INDEX_DEFAULT          0
+#define AVF_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */
+#define AVF_QUEUE_ITR_INTERVAL_MAX     8160 /* 8160 us */
+
+/* The overhead from MTU to max frame size.
+ * Considering QinQ packet, the VLAN tag needs to be counted twice.
+ */
+#define AVF_VLAN_TAG_SIZE               4
+#define AVF_ETH_OVERHEAD \
+	(ETHER_HDR_LEN + ETHER_CRC_LEN + AVF_VLAN_TAG_SIZE * 2)
+
+struct avf_adapter;
+struct avf_rx_queue;
+struct avf_tx_queue;
+
+/* Structure that defines a VSI, associated with an adapter. */
+struct avf_vsi {
+	struct avf_adapter *adapter; /* Backreference to associated adapter */
+	uint16_t vsi_id;
+	uint16_t nb_qps;         /* Number of queue pairs VSI can occupy */
+	uint16_t nb_used_qps;    /* Number of queue pairs VSI uses */
+	uint16_t max_macaddrs;   /* Maximum number of MAC addresses */
+	uint16_t base_vector;
+	uint16_t msix_intr;      /* The MSIX interrupt binds to VSI */
+};
+
+/* TODO: is it correct to assume the max number to be 16? */
+#define AVF_MAX_MSIX_VECTORS   16
+
+/* Structure to store private data specific for VF instance. */
+struct avf_info {
+	uint16_t num_queue_pairs;
+	uint16_t max_pkt_len; /* Maximum packet length */
+	uint16_t mac_num;     /* Number of MAC addresses */
+	uint32_t vlan[AVF_VLAN_TB_SIZE]; /* VLAN bit map */
+	bool promisc_unicast_enabled;
+	bool promisc_multicast_enabled;
+
+	struct virtchnl_version_info virtchnl_version;
+	struct virtchnl_vf_resource *vf_res; /* VF resource */
+	struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
+
+	volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
+	uint32_t cmd_retval; /* return value of the cmd response from PF */
+	uint8_t *aq_resp; /* buffer to store the adminq response from PF */
+
+	/* Event from pf */
+	bool dev_closed;
+	bool link_up;
+	enum virtchnl_link_speed link_speed;
+
+	struct avf_vsi vsi;
+	bool vf_reset;
+	uint64_t flags;
+
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	uint16_t nb_msix;   /* number of MSI-X interrupts on Rx */
+	uint16_t msix_base; /* MSI-X vector base */
+	/* queue bitmask for each vector */
+	uint16_t rxq_map[AVF_MAX_MSIX_VECTORS];
+};
+
+#define AVF_MAX_PKT_TYPE 256
+
+/* Structure to store private data for each VF instance. */
+struct avf_adapter {
+	struct avf_hw hw;
+	struct rte_eth_dev *eth_dev;
+	struct avf_info vf;
+};
+
+/* AVF_DEV_PRIVATE_TO */
+#define AVF_DEV_PRIVATE_TO_ADAPTER(adapter) \
+	((struct avf_adapter *)adapter)
+#define AVF_DEV_PRIVATE_TO_VF(adapter) \
+	(&((struct avf_adapter *)adapter)->vf)
+#define AVF_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct avf_adapter *)adapter)->hw)
+
+/* AVF_VSI_TO */
+#define AVF_VSI_TO_HW(vsi) \
+	(&(((struct avf_vsi *)vsi)->adapter->hw))
+#define AVF_VSI_TO_VF(vsi) \
+	(&(((struct avf_vsi *)vsi)->adapter->vf))
+#define AVF_VSI_TO_ETH_DEV(vsi) \
+	(((struct avf_vsi *)vsi)->adapter->eth_dev)
+
+static inline void
+avf_init_adminq_parameter(struct avf_hw *hw)
+{
+	hw->aq.num_arq_entries = AVF_AQ_LEN;
+	hw->aq.num_asq_entries = AVF_AQ_LEN;
+	hw->aq.arq_buf_size = AVF_AQ_BUF_SZ;
+	hw->aq.asq_buf_size = AVF_AQ_BUF_SZ;
+}
+
+static inline uint16_t
+avf_calc_itr_interval(int16_t interval)
+{
+	if (interval < 0 || interval > AVF_QUEUE_ITR_INTERVAL_MAX)
+		interval = AVF_QUEUE_ITR_INTERVAL_DEFAULT;
+
+	/* Convert to hardware count; each unit written represents 2 us */
+	return interval / 2;
+}
+
+/* structure used for sending and checking response of virtchnl ops */
+struct avf_cmd_info {
+	enum virtchnl_ops ops;
+	uint8_t *in_args;       /* buffer for sending */
+	uint32_t in_args_size;  /* buffer size for sending */
+	uint8_t *out_buffer;    /* buffer for response */
+	uint32_t out_size;      /* buffer size for response */
+};
+
+/* Clear the current command. Only call this after
+ * _atomic_set_cmd() has completed successfully.
+ */
+static inline void
+_clear_cmd(struct avf_info *vf)
+{
+	rte_wmb();
+	vf->pend_cmd = VIRTCHNL_OP_UNKNOWN;
+	vf->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
+}
+
+/* Check whether a command is still pending. If none, set the new command. */
+static inline int
+_atomic_set_cmd(struct avf_info *vf, enum virtchnl_ops ops)
+{
+	int ret = rte_atomic32_cmpset(&vf->pend_cmd, VIRTCHNL_OP_UNKNOWN, ops);
+
+	if (!ret)
+		PMD_DRV_LOG(ERR, "There is incomplete cmd %d", vf->pend_cmd);
+
+	return !ret;
+}
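+
+/* Expected usage pattern (illustrative): the caller first claims the slot
+ * with _atomic_set_cmd(), sends the request over the admin queue, waits for
+ * the response (see avf_execute_vf_cmd() in avf_vchnl.c), and finally the
+ * command is released with _clear_cmd() once the response is processed or
+ * the wait times out.
+ */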
+
+int avf_check_api_version(struct avf_adapter *adapter);
+int avf_get_vf_resource(struct avf_adapter *adapter);
+void avf_handle_virtchnl_msg(struct rte_eth_dev *dev);
+#endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
new file mode 100644
index 0000000..0ed6e1c
--- /dev/null
+++ b/drivers/net/avf/avf_ethdev.c
@@ -0,0 +1,435 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <sys/queue.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+
+#include <rte_interrupts.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_ethdev_pci.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_adminq_cmd.h"
+#include "base/avf_type.h"
+
+#include "avf.h"
+
+int avf_logtype_init;
+int avf_logtype_driver;
+static const struct rte_pci_id pci_id_avf_map[] = {
+	{ RTE_PCI_DEVICE(AVF_INTEL_VENDOR_ID, AVF_DEV_ID_ADAPTIVE_VF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static const struct eth_dev_ops avf_eth_dev_ops = {
+};
+
+static int
+avf_check_vf_reset_done(struct avf_hw *hw)
+{
+	int i, reset;
+
+	for (i = 0; i < AVF_RESET_WAIT_CNT; i++) {
+		reset = AVF_READ_REG(hw, AVFGEN_RSTAT) &
+			AVFGEN_RSTAT_VFR_STATE_MASK;
+		reset = reset >> AVFGEN_RSTAT_VFR_STATE_SHIFT;
+		if (reset == VIRTCHNL_VFR_VFACTIVE ||
+		    reset == VIRTCHNL_VFR_COMPLETED)
+			break;
+		rte_delay_ms(20);
+	}
+
+	if (i >= AVF_RESET_WAIT_CNT)
+		return -1;
+
+	return 0;
+}
+
+static int
+avf_init_vf(struct rte_eth_dev *dev)
+{
+	int i, err, bufsz;
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	err = avf_set_mac_type(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
+		goto err;
+	}
+
+	err = avf_check_vf_reset_done(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "VF is still resetting");
+		goto err;
+	}
+
+	avf_init_adminq_parameter(hw);
+	err = avf_init_adminq(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "init_adminq failed: %d", err);
+		goto err;
+	}
+
+	vf->aq_resp = rte_zmalloc("vf_aq_resp", AVF_AQ_BUF_SZ, 0);
+	if (!vf->aq_resp) {
+		PMD_INIT_LOG(ERR, "unable to allocate vf_aq_resp memory");
+		goto err_aq;
+	}
+	if (avf_check_api_version(adapter) != 0) {
+		PMD_INIT_LOG(ERR, "check_api version failed");
+		goto err_api;
+	}
+
+	bufsz = sizeof(struct virtchnl_vf_resource) +
+		(AVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource));
+	vf->vf_res = rte_zmalloc("vf_res", bufsz, 0);
+	if (!vf->vf_res) {
+		PMD_INIT_LOG(ERR, "unable to allocate vf_res memory");
+		goto err_api;
+	}
+	if (avf_get_vf_resource(adapter) != 0) {
+		PMD_INIT_LOG(ERR, "avf_get_vf_config failed");
+		goto err_alloc;
+	}
+	/* Allocate memory for RSS info */
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		vf->rss_key = rte_zmalloc("rss_key",
+					  vf->vf_res->rss_key_size, 0);
+		if (!vf->rss_key) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_key memory");
+			goto err_rss;
+		}
+		vf->rss_lut = rte_zmalloc("rss_lut",
+					  vf->vf_res->rss_lut_size, 0);
+		if (!vf->rss_lut) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_lut memory");
+			goto err_rss;
+		}
+	}
+	return 0;
+err_rss:
+	rte_free(vf->rss_key);
+	rte_free(vf->rss_lut);
+err_alloc:
+	rte_free(vf->vf_res);
+	vf->vsi_res = NULL;
+err_api:
+	rte_free(vf->aq_resp);
+err_aq:
+	avf_shutdown_adminq(hw);
+err:
+	return -1;
+}
+
+/* Enable default admin queue interrupt setting */
+static inline void
+avf_enable_irq0(struct avf_hw *hw)
+{
+	/* Enable admin queue interrupt trigger */
+	AVF_WRITE_REG(hw, AVFINT_ICR0_ENA1, AVFINT_ICR0_ENA1_ADMINQ_MASK);
+
+	AVF_WRITE_REG(hw, AVFINT_DYN_CTL01, AVFINT_DYN_CTL01_INTENA_MASK |
+					    AVFINT_DYN_CTL01_ITR_INDX_MASK);
+
+	AVF_WRITE_FLUSH(hw);
+}
+
+static inline void
+avf_disable_irq0(struct avf_hw *hw)
+{
+	/* Disable all interrupt types */
+	AVF_WRITE_REG(hw, AVFINT_ICR0_ENA1, 0);
+	AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+		      AVFINT_DYN_CTL01_ITR_INDX_MASK);
+	AVF_WRITE_FLUSH(hw);
+}
+
+static void
+avf_dev_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	avf_disable_irq0(hw);
+
+	avf_handle_virtchnl_msg(dev);
+
+done:
+	avf_enable_irq0(hw);
+}
+
+static int
+avf_dev_init(struct rte_eth_dev *eth_dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* assign ops func pointer */
+	eth_dev->dev_ops = &avf_eth_dev_ops;
+
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	hw->bus.bus_id = pci_dev->addr.bus;
+	hw->bus.device = pci_dev->addr.devid;
+	hw->bus.func = pci_dev->addr.function;
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->back = AVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+	adapter->eth_dev = eth_dev;
+
+	if (avf_init_vf(eth_dev) != 0) {
+		PMD_INIT_LOG(ERR, "Init vf failed");
+		return -1;
+	}
+
+	/* copy mac addr */
+	eth_dev->data->mac_addrs = rte_zmalloc(
+					"avf_mac",
+					ETHER_ADDR_LEN * AVF_NUM_MACADDR_MAX,
+					0);
+	if (!eth_dev->data->mac_addrs) {
+		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to"
+			     " store MAC addresses",
+			     ETHER_ADDR_LEN * AVF_NUM_MACADDR_MAX);
+		return -ENOMEM;
+	}
+	/* If the MAC address is not configured by host,
+	 * generate a random one.
+	 */
+	if (!is_valid_assigned_ether_addr((struct ether_addr *)hw->mac.addr))
+		eth_random_addr(hw->mac.addr);
+	ether_addr_copy((struct ether_addr *)hw->mac.addr,
+			&eth_dev->data->mac_addrs[0]);
+
+	/* register callback func to eal lib */
+	rte_intr_callback_register(&pci_dev->intr_handle,
+				   avf_dev_interrupt_handler,
+				   (void *)eth_dev);
+
+	/* enable uio intr after callback register */
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	/* configure and enable device interrupt */
+	avf_enable_irq0(hw);
+
+	return 0;
+}
+
+static void
+avf_dev_close(struct rte_eth_dev *dev)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+
+	avf_shutdown_adminq(hw);
+	/* disable uio intr before callback unregister */
+	rte_intr_disable(intr_handle);
+
+	/* unregister callback func from eal lib */
+	rte_intr_callback_unregister(intr_handle,
+				     avf_dev_interrupt_handler, dev);
+	avf_disable_irq0(hw);
+}
+
+static int
+avf_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+	if (hw->adapter_stopped == 0)
+		avf_dev_close(dev);
+
+	rte_free(vf->vf_res);
+	vf->vsi_res = NULL;
+	vf->vf_res = NULL;
+
+	rte_free(vf->aq_resp);
+	vf->aq_resp = NULL;
+
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
+	if (vf->rss_lut) {
+		rte_free(vf->rss_lut);
+		vf->rss_lut = NULL;
+	}
+	if (vf->rss_key) {
+		rte_free(vf->rss_key);
+		vf->rss_key = NULL;
+	}
+
+	return 0;
+}
+
+static int eth_avf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+			     struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+		sizeof(struct avf_adapter), avf_dev_init);
+}
+
+static int eth_avf_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, avf_dev_uninit);
+}
+
+/* Adaptive virtual function driver struct */
+static struct rte_pci_driver rte_avf_pmd = {
+	.id_table = pci_id_avf_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA,
+	.probe = eth_avf_pci_probe,
+	.remove = eth_avf_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_avf, rte_avf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_avf, pci_id_avf_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_avf, "* igb_uio | vfio-pci");
+RTE_INIT(avf_init_log);
+static void
+avf_init_log(void)
+{
+	avf_logtype_init = rte_log_register("pmd.avf.init");
+	if (avf_logtype_init >= 0)
+		rte_log_set_level(avf_logtype_init, RTE_LOG_NOTICE);
+	avf_logtype_driver = rte_log_register("pmd.avf.driver");
+	if (avf_logtype_driver >= 0)
+		rte_log_set_level(avf_logtype_driver, RTE_LOG_NOTICE);
+}
+
+/* memory func for base code */
+enum avf_status_code
+avf_allocate_dma_mem_d(__rte_unused struct avf_hw *hw,
+		       struct avf_dma_mem *mem,
+		       u64 size,
+		       u32 alignment)
+{
+	const struct rte_memzone *mz = NULL;
+	char z_name[RTE_MEMZONE_NAMESIZE];
+
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	snprintf(z_name, sizeof(z_name), "avf_dma_%"PRIu64, rte_rand());
+	mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY, 0,
+					 alignment, RTE_PGSIZE_2M);
+	if (!mz)
+		return AVF_ERR_NO_MEMORY;
+
+	mem->size = size;
+	mem->va = mz->addr;
+	mem->pa = mz->phys_addr;
+	mem->zone = (const void *)mz;
+	PMD_DRV_LOG(DEBUG,
+		    "memzone %s allocated with physical address: %"PRIu64,
+		    mz->name, mem->pa);
+
+	return AVF_SUCCESS;
+}
+
+enum avf_status_code
+avf_free_dma_mem_d(__rte_unused struct avf_hw *hw,
+		   struct avf_dma_mem *mem)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	PMD_DRV_LOG(DEBUG,
+		    "memzone %s to be freed with physical address: %"PRIu64,
+		    ((const struct rte_memzone *)mem->zone)->name, mem->pa);
+	rte_memzone_free((const struct rte_memzone *)mem->zone);
+	mem->zone = NULL;
+	mem->va = NULL;
+	mem->pa = (u64)0;
+
+	return AVF_SUCCESS;
+}
+
+enum avf_status_code
+avf_allocate_virt_mem_d(__rte_unused struct avf_hw *hw,
+			struct avf_virt_mem *mem,
+			u32 size)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	mem->size = size;
+	mem->va = rte_zmalloc("avf", size, 0);
+
+	if (mem->va)
+		return AVF_SUCCESS;
+	else
+		return AVF_ERR_NO_MEMORY;
+}
+
+enum avf_status_code
+avf_free_virt_mem_d(__rte_unused struct avf_hw *hw,
+		    struct avf_virt_mem *mem)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	rte_free(mem->va);
+	mem->va = NULL;
+
+	return AVF_SUCCESS;
+}
+
+/* spinlock func for base code */
+void
+avf_init_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_init(&sp->spinlock);
+}
+
+void
+avf_acquire_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_lock(&sp->spinlock);
+}
+
+void
+avf_release_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_unlock(&sp->spinlock);
+}
+
+void
+avf_destroy_spinlock_d(__rte_unused struct avf_spinlock *sp)
+{
+}
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
new file mode 100644
index 0000000..ebbee31
--- /dev/null
+++ b/drivers/net/avf/avf_vchnl.c
@@ -0,0 +1,304 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+
+#include <rte_debug.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_dev.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_adminq_cmd.h"
+#include "base/avf_type.h"
+
+#include "avf.h"
+
+#define MAX_TRY_TIMES 200
+#define ASQ_DELAY_MS  10
+
+/* Read data in admin queue to get msg from pf driver */
+static enum avf_status_code
+avf_read_msg_from_pf(struct avf_adapter *adapter, uint16_t buf_len,
+		     uint8_t *buf)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_arq_event_info event;
+	enum virtchnl_ops opcode;
+	int ret;
+
+	event.buf_len = buf_len;
+	event.msg_buf = buf;
+	ret = avf_clean_arq_element(hw, &event, NULL);
+	/* Can't read any msg from adminQ */
+	if (ret) {
+		PMD_DRV_LOG(DEBUG, "Can't read msg from AQ");
+		return ret;
+	}
+
+	opcode = (enum virtchnl_ops)rte_le_to_cpu_32(event.desc.cookie_high);
+	vf->cmd_retval = (enum virtchnl_status_code)rte_le_to_cpu_32(
+			event.desc.cookie_low);
+
+	PMD_DRV_LOG(DEBUG, "AQ from pf carries opcode %u, retval %d",
+		    opcode, vf->cmd_retval);
+
+	if (opcode != vf->pend_cmd)
+		PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
+			    vf->pend_cmd, opcode);
+
+	return AVF_SUCCESS;
+}
+
+static int
+avf_execute_vf_cmd(struct avf_adapter *adapter, struct avf_cmd_info *args)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_arq_event_info event_info;
+	enum avf_status_code ret;
+	int err = 0;
+	int i = 0;
+
+	if (_atomic_set_cmd(vf, args->ops))
+		return -1;
+
+	ret = avf_aq_send_msg_to_pf(hw, args->ops, AVF_SUCCESS,
+				    args->in_args, args->in_args_size, NULL);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops);
+		_clear_cmd(vf);
+		return err;
+	}
+
+	switch (args->ops) {
+	case VIRTCHNL_OP_RESET_VF:
+		/* no need to wait for response */
+		_clear_cmd(vf);
+		break;
+	case VIRTCHNL_OP_VERSION:
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		/* for init virtchnl ops, need to poll the response */
+		do {
+			ret = avf_read_msg_from_pf(adapter, args->out_size,
+						   args->out_buffer);
+			if (ret == AVF_SUCCESS)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+		} while (i++ < MAX_TRY_TIMES);
+		if (i >= MAX_TRY_TIMES ||
+		    vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -1;
+			PMD_DRV_LOG(ERR, "No response or return failure (%d)"
+				    " for cmd %d", vf->cmd_retval, args->ops);
+		}
+		_clear_cmd(vf);
+		break;
+
+	default:
+		/* For other virtchnl ops at runtime,
+		 * wait for the cmd done flag.
+		 */
+		do {
+			if (vf->pend_cmd == VIRTCHNL_OP_UNKNOWN)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+			/* If no msg is read, or a sys event is read, keep polling */
+		} while (i++ < MAX_TRY_TIMES);
+		/* If no response is received, clear the command */
+		if (i >= MAX_TRY_TIMES  ||
+		    vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -1;
+			PMD_DRV_LOG(ERR, "No response or return failure (%d)"
+				    " for cmd %d", vf->cmd_retval, args->ops);
+			_clear_cmd(vf);
+		}
+		break;
+	}
+
+	return err;
+}
+
+void
+avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_arq_event_info info;
+	uint16_t pending, aq_opc;
+	enum virtchnl_ops msg_opc;
+	enum avf_status_code msg_ret;
+	int ret;
+
+	info.buf_len = AVF_AQ_BUF_SZ;
+	if (!vf->aq_resp) {
+		PMD_DRV_LOG(ERR, "Buffer for adminq resp should not be NULL");
+		return;
+	}
+	info.msg_buf = vf->aq_resp;
+
+	pending = 1;
+	while (pending) {
+		ret = avf_clean_arq_element(hw, &info, &pending);
+
+		if (ret != AVF_SUCCESS) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from AdminQ,"
+				    "ret: %d", ret);
+			break;
+		}
+		aq_opc = rte_le_to_cpu_16(info.desc.opcode);
+		/* For messages sent from PF to VF, the opcode is stored in
+		 * cookie_high of struct avf_aq_desc, and the return error
+		 * code is stored in cookie_low; both are filled in by the
+		 * PF driver.
+		 */
+		msg_opc = (enum virtchnl_ops)rte_le_to_cpu_32(
+						  info.desc.cookie_high);
+		msg_ret = (enum avf_status_code)rte_le_to_cpu_32(
+						  info.desc.cookie_low);
+		switch (aq_opc) {
+		case avf_aqc_opc_send_msg_to_vf:
+			if (msg_opc == VIRTCHNL_OP_EVENT) {
+				/* TODO */
+			} else {
+				/* read message and it is the expected one */
+				if (msg_opc == vf->pend_cmd) {
+					vf->cmd_retval = msg_ret;
+					/* prevent compiler reordering */
+					rte_compiler_barrier();
+					_clear_cmd(vf);
+				} else
+					PMD_DRV_LOG(ERR, "command mismatch,"
+						    "expect %u, get %u",
+						    vf->pend_cmd, msg_opc);
+				PMD_DRV_LOG(DEBUG,
+					    "adminq response is received,"
+					    " opcode = %d", msg_opc);
+			}
+			break;
+		default:
+			PMD_DRV_LOG(ERR, "Request %u is not supported yet",
+				    aq_opc);
+			break;
+		}
+	}
+}
+
+#define VIRTCHNL_VERSION_MAJOR_START 1
+#define VIRTCHNL_VERSION_MINOR_START 1
+
+/* Check API version, waiting until the reply is read from the admin queue */
+int
+avf_check_api_version(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_version_info version, *pver;
+	struct avf_cmd_info args;
+	int err;
+
+	version.major = VIRTCHNL_VERSION_MAJOR;
+	version.minor = VIRTCHNL_VERSION_MINOR;
+
+	args.ops = VIRTCHNL_OP_VERSION;
+	args.in_args = (uint8_t *)&version;
+	args.in_args_size = sizeof(version);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Fail to execute command of OP_VERSION");
+		return err;
+	}
+
+	pver = (struct virtchnl_version_info *)args.out_buffer;
+	vf->virtchnl_version = *pver;
+
+	if (vf->virtchnl_version.major < VIRTCHNL_VERSION_MAJOR_START ||
+	    (vf->virtchnl_version.major == VIRTCHNL_VERSION_MAJOR_START &&
+	     vf->virtchnl_version.minor < VIRTCHNL_VERSION_MINOR_START)) {
+		PMD_INIT_LOG(ERR, "VIRTCHNL API version should not be lower"
+			     " than (%u.%u) to support Adapative VF",
+			     VIRTCHNL_VERSION_MAJOR_START,
+			     VIRTCHNL_VERSION_MAJOR_START);
+		return -1;
+	} else if (vf->virtchnl_version.major > VIRTCHNL_VERSION_MAJOR ||
+		   (vf->virtchnl_version.major == VIRTCHNL_VERSION_MAJOR &&
+		    vf->virtchnl_version.minor > VIRTCHNL_VERSION_MINOR)) {
+		PMD_INIT_LOG(ERR, "PF/VF API version mismatch:(%u.%u)-(%u.%u)",
+			     vf->virtchnl_version.major,
+			     vf->virtchnl_version.minor,
+			     VIRTCHNL_VERSION_MAJOR,
+			     VIRTCHNL_VERSION_MINOR);
+		return -1;
+	}
+
+	PMD_DRV_LOG(DEBUG, "Peer is supported PF host");
+	return 0;
+}
+
+int
+avf_get_vf_resource(struct avf_adapter *adapter)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	uint32_t caps, len;
+	int err, i;
+
+	args.ops = VIRTCHNL_OP_GET_VF_RESOURCES;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	/* TODO: basic offload capabilities, need to
+	 * add advanced/optional offload capabilities
+	 */
+
+	caps = AVF_BASIC_OFFLOAD_CAPS;
+
+	args.in_args = (uint8_t *)&caps;
+	args.in_args_size = sizeof(caps);
+
+	err = avf_execute_vf_cmd(adapter, &args);
+
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+				 "OP_GET_VF_RESOURCE");
+		return -1;
+	}
+
+	len =  sizeof(struct virtchnl_vf_resource) +
+		      AVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource);
+
+	rte_memcpy(vf->vf_res, args.out_buffer,
+		   RTE_MIN(args.out_size, len));
+	/* parse VF config message back from PF */
+	avf_parse_hw_config(hw, vf->vf_res);
+	for (i = 0; i < vf->vf_res->num_vsis; i++) {
+		if (vf->vf_res->vsi_res[i].vsi_type == VIRTCHNL_VSI_SRIOV)
+			vf->vsi_res = &vf->vf_res->vsi_res[i];
+	}
+
+	if (!vf->vsi_res) {
+		PMD_INIT_LOG(ERR, "no LAN VSI found");
+		return -1;
+	}
+
+	vf->vsi.vsi_id = vf->vsi_res->vsi_id;
+	vf->vsi.nb_qps = vf->vsi_res->num_queue_pairs;
+	vf->vsi.adapter = adapter;
+
+	return 0;
+}
diff --git a/drivers/net/avf/rte_pmd_avf_version.map b/drivers/net/avf/rte_pmd_avf_version.map
new file mode 100644
index 0000000..179140f
--- /dev/null
+++ b/drivers/net/avf/rte_pmd_avf_version.map
@@ -0,0 +1,4 @@
+DPDK_18.02 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 6a6a745..78f23c5 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -119,6 +119,7 @@ _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK)  += -lrte_mempool_stack
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ARK_PMD)        += -lrte_pmd_ark
+_LDLIBS-$(CONFIG_RTE_LIBRTE_AVF_PMD)        += -lrte_pmd_avf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_AVP_PMD)        += -lrte_pmd_avp
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v6 03/14] net/avf: enable queue and device
  2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] add new AVF PMD Wenzhuo Lu
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 01/14] net/avf/base: add base code for avf PMD Wenzhuo Lu
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 02/14] net/avf: initialization of " Wenzhuo Lu
@ 2018-01-10  6:15         ` Wenzhuo Lu
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 04/14] net/avf: enable basic Rx Tx func Wenzhuo Lu
                           ` (11 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10  6:15 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

enable device and queue setup ops (see the usage sketch after the list):

 - dev_configure
 - dev_start
 - dev_stop
 - dev_close
 - dev_infos_get
 - rx_queue_start
 - rx_queue_stop
 - tx_queue_start
 - tx_queue_stop
 - rx_queue_setup
 - rx_queue_release
 - tx_queue_setup
 - tx_queue_release

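As context for reviewers, a minimal application-side sketch of how these ops
are exercised through the generic ethdev API (illustrative only; port_id,
socket_id and mbuf_pool are placeholders assumed to come from the
application's EAL and mempool setup):

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    setup_avf_port(uint16_t port_id, unsigned int socket_id,
                   struct rte_mempool *mbuf_pool)
    {
            struct rte_eth_conf conf;
            int ret;

            memset(&conf, 0, sizeof(conf));

            /* exercises dev_configure */
            ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
            if (ret < 0)
                    return ret;

            /* exercises rx_queue_setup and tx_queue_setup */
            ret = rte_eth_rx_queue_setup(port_id, 0, 512, socket_id,
                                         NULL, mbuf_pool);
            if (ret < 0)
                    return ret;
            ret = rte_eth_tx_queue_setup(port_id, 0, 512, socket_id, NULL);
            if (ret < 0)
                    return ret;

            /* exercises dev_start, which in turn starts the Rx/Tx queues */
            return rte_eth_dev_start(port_id);
    }
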
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/avf/Makefile     |   1 +
 drivers/net/avf/avf.h        |  18 ++
 drivers/net/avf/avf_ethdev.c | 366 +++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.c   | 616 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h   | 160 +++++++++++
 drivers/net/avf/avf_vchnl.c  | 359 ++++++++++++++++++++++++-
 6 files changed, 1518 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/avf/avf_rxtx.c
 create mode 100644 drivers/net/avf/avf_rxtx.h

diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index 2376cfd..e172bf5 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -43,5 +43,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_rxtx.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 4694cc5..22886d4 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -38,6 +38,13 @@
 	VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | \
 	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
 
+#define AVF_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 |         \
+	ETH_RSS_NONFRAG_IPV4_TCP |  \
+	ETH_RSS_NONFRAG_IPV4_UDP |  \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_NONFRAG_IPV4_OTHER)
+
 #define AVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
 #define AVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
 
@@ -184,4 +191,15 @@ struct avf_cmd_info {
 int avf_check_api_version(struct avf_adapter *adapter);
 int avf_get_vf_resource(struct avf_adapter *adapter);
 void avf_handle_virtchnl_msg(struct rte_eth_dev *dev);
+int avf_enable_vlan_strip(struct avf_adapter *adapter);
+int avf_disable_vlan_strip(struct avf_adapter *adapter);
+int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
+		     bool rx, bool on);
+int avf_enable_queues(struct avf_adapter *adapter);
+int avf_disable_queues(struct avf_adapter *adapter);
+int avf_configure_rss_lut(struct avf_adapter *adapter);
+int avf_configure_rss_key(struct avf_adapter *adapter);
+int avf_configure_queues(struct avf_adapter *adapter);
+int avf_config_irq_map(struct avf_adapter *adapter);
+void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 0ed6e1c..c53f00e 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -31,6 +31,14 @@
 #include "base/avf_type.h"
 
 #include "avf.h"
+#include "avf_rxtx.h"
+
+static int avf_dev_configure(struct rte_eth_dev *dev);
+static int avf_dev_start(struct rte_eth_dev *dev);
+static void avf_dev_stop(struct rte_eth_dev *dev);
+static void avf_dev_close(struct rte_eth_dev *dev);
+static void avf_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -40,9 +48,366 @@
 };
 
 static const struct eth_dev_ops avf_eth_dev_ops = {
+	.dev_configure              = avf_dev_configure,
+	.dev_start                  = avf_dev_start,
+	.dev_stop                   = avf_dev_stop,
+	.dev_close                  = avf_dev_close,
+	.dev_infos_get              = avf_dev_info_get,
+	.rx_queue_start             = avf_dev_rx_queue_start,
+	.rx_queue_stop              = avf_dev_rx_queue_stop,
+	.tx_queue_start             = avf_dev_tx_queue_start,
+	.tx_queue_stop              = avf_dev_tx_queue_stop,
+	.rx_queue_setup             = avf_dev_rx_queue_setup,
+	.rx_queue_release           = avf_dev_rx_queue_release,
+	.tx_queue_setup             = avf_dev_tx_queue_setup,
+	.tx_queue_release           = avf_dev_tx_queue_release,
 };
 
 static int
+avf_dev_configure(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(ad);
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+
+	/* Vlan stripping setting */
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) {
+		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			avf_enable_vlan_strip(ad);
+		else
+			avf_disable_vlan_strip(ad);
+	}
+	return 0;
+}
+
+static int
+avf_init_rss(struct avf_adapter *adapter)
+{
+	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_eth_rss_conf *rss_conf;
+	uint8_t i, j, nb_q;
+	int ret;
+
+	rss_conf = &adapter->eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = RTE_MIN(adapter->eth_dev->data->nb_rx_queues,
+		       AVF_MAX_NUM_QUEUES);
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+	if (adapter->eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
+		/* set all lut items to default queue */
+		for (i = 0; i < vf->vf_res->rss_lut_size; i++)
+			vf->rss_lut[i] = 0;
+		ret = avf_configure_rss_lut(adapter);
+		return ret;
+	}
+
+	/* In AVF, RSS enablement is set by PF driver. It is not supported
+	 * to set based on rss_conf->rss_hf.
+	 */
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key) {
+		/* Calculate the default hash key */
+		for (i = 0; i < vf->vf_res->rss_key_size; i++)
+			vf->rss_key[i] = (uint8_t)rte_rand();
+	} else
+		rte_memcpy(vf->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   vf->vf_res->rss_key_size));
+
+	/* init RSS LUT table */
+	for (i = 0, j = 0; i < vf->vf_res->rss_lut_size; i++, j++) {
+		if (j >= nb_q)
+			j = 0;
+		vf->rss_lut[i] = j;
+	}
+	/* send virtchnl ops to configure RSS */
+	ret = avf_configure_rss_lut(adapter);
+	if (ret)
+		return ret;
+	ret = avf_configure_rss_key(adapter);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int
+avf_init_rxq(struct rte_eth_dev *dev, struct avf_rx_queue *rxq)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_dev_data *dev_data = dev->data;
+	uint16_t buf_size, max_pkt_len, len;
+
+	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+
+	/* Calculate the maximum packet length allowed */
+	len = rxq->rx_buf_len * AVF_MAX_CHAINED_RX_BUFFERS;
+	max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	/* Check if the jumbo frame and maximum packet length are set
+	 * correctly.
+	 */
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (max_pkt_len <= ETHER_MAX_LEN ||
+		    max_pkt_len > AVF_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is enabled",
+				    (uint32_t)ETHER_MAX_LEN,
+				    (uint32_t)AVF_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (max_pkt_len < ETHER_MIN_LEN ||
+		    max_pkt_len > ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)ETHER_MIN_LEN,
+				    (uint32_t)ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	    (rxq->max_pkt_len + 2 * AVF_VLAN_TAG_SIZE) > buf_size) {
+		dev_data->scattered_rx = 1;
+	}
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	AVF_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
+static int
+avf_init_queues(struct rte_eth_dev *dev)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_rx_queue **rxq =
+		(struct avf_rx_queue **)dev->data->rx_queues;
+	struct avf_tx_queue **txq =
+		(struct avf_tx_queue **)dev->data->tx_queues;
+	int i, ret = AVF_SUCCESS;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!rxq[i] || !rxq[i]->q_set)
+			continue;
+		ret = avf_init_rxq(dev, rxq[i]);
+		if (ret != AVF_SUCCESS)
+			break;
+	}
+	/* TODO: set rx/tx function to vector/scatter/single-segment
+	 * according to parameters
+	 */
+	return ret;
+}
+
+static int
+avf_start_queues(struct rte_eth_dev *dev)
+{
+	struct avf_rx_queue *rxq;
+	struct avf_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq->tx_deferred_start)
+			continue;
+		if (avf_dev_tx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
+			return -1;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq->rx_deferred_start)
+			continue;
+		if (avf_dev_rx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+avf_dev_start(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
+	uint16_t interval;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	hw->adapter_stopped = 0;
+
+	vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+	vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
+				      dev->data->nb_tx_queues);
+
+	/* TODO: Rx interrupt */
+
+	if (avf_init_queues(dev) != 0) {
+		PMD_DRV_LOG(ERR, "failed to do Queue init");
+		return -1;
+	}
+
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		if (avf_init_rss(adapter) != 0) {
+			PMD_DRV_LOG(ERR, "configure rss failed");
+			goto err_rss;
+		}
+	}
+
+	if (avf_configure_queues(adapter) != 0) {
+		PMD_DRV_LOG(ERR, "configure queues failed");
+		goto err_queue;
+	}
+
+	/* Map interrupt for writeback */
+	vf->nb_msix = 1;
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+		/* If WB_ON_ITR is supported, enable it */
+		vf->msix_base = AVF_RX_VEC_START;
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
+			      AVFINT_DYN_CTLN1_ITR_INDX_MASK |
+			      AVFINT_DYN_CTLN1_WB_ON_ITR_MASK);
+	} else {
+		/* Without the WB_ON_ITR offload flag, an interrupt is needed
+		 * to trigger descriptor write back.
+		 */
+		vf->msix_base = AVF_MISC_VEC_ID;
+
+		/* set ITR to max */
+		interval = avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+			      AVFINT_DYN_CTL01_INTENA_MASK |
+			      (AVF_ITR_INDEX_DEFAULT <<
+			       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+			      (interval << AVFINT_DYN_CTL01_INTERVAL_SHIFT));
+	}
+	AVF_WRITE_FLUSH(hw);
+	/* map all queues to the same interrupt */
+	for (i = 0; i < dev->data->nb_rx_queues; i++)
+		vf->rxq_map[0] |= 1 << i;
+	if (avf_config_irq_map(adapter)) {
+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+		goto err_queue;
+	}
+
+	/* Set all mac addrs */
+	avf_add_del_all_mac_addr(adapter, TRUE);
+
+	if (avf_start_queues(dev) != 0) {
+		PMD_DRV_LOG(ERR, "enable queues failed");
+		goto err_mac;
+	}
+
+	/* TODO: enable interrupt for RX interrupt */
+	return 0;
+
+err_mac:
+	avf_add_del_all_mac_addr(adapter, FALSE);
+err_queue:
+err_rss:
+	return -1;
+}
+
+static void
+avf_dev_stop(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (hw->adapter_stopped == 1)
+		return;
+
+	avf_stop_queues(dev);
+
+	/* TODO: Disable the interrupt for Rx */
+
+	/* TODO: Rx interrupt vector mapping free */
+
+	/* remove all mac addrs */
+	avf_add_del_all_mac_addr(adapter, FALSE);
+	hw->adapter_stopped = 1;
+}
+
+static void
+avf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	memset(dev_info, 0, sizeof(*dev_info));
+	dev_info->pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	dev_info->max_rx_queues = vf->vsi_res->num_queue_pairs;
+	dev_info->max_tx_queues = vf->vsi_res->num_queue_pairs;
+	dev_info->min_rx_bufsize = AVF_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = AVF_FRAME_SIZE_MAX;
+	dev_info->hash_key_size = vf->vf_res->rss_key_size;
+	dev_info->reta_size = vf->vf_res->rss_lut_size;
+	dev_info->flow_type_rss_offloads = AVF_RSS_OFFLOAD_ALL;
+	dev_info->max_mac_addrs = AVF_NUM_MACADDR_MAX;
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = AVF_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = AVF_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = AVF_DEFAULT_TX_RS_THRESH,
+		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
+				ETH_TXQ_FLAGS_NOOFFLOADS,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = AVF_MAX_RING_DESC,
+		.nb_min = AVF_MIN_RING_DESC,
+		.nb_align = AVF_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = AVF_MAX_RING_DESC,
+		.nb_min = AVF_MIN_RING_DESC,
+		.nb_align = AVF_ALIGN_RING_DESC,
+	};
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
@@ -250,6 +615,7 @@
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
+	avf_dev_stop(dev);
 	avf_shutdown_adminq(hw);
 	/* disable uio intr before callback unregister */
 	rte_intr_disable(intr_handle);
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
new file mode 100644
index 0000000..2d4fb4c
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx.c
@@ -0,0 +1,616 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+
+#include <rte_string_fns.h>
+#include <rte_memzone.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_tcp.h>
+#include <rte_sctp.h>
+#include <rte_udp.h>
+#include <rte_ip.h>
+#include <rte_net.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_type.h"
+#include "avf.h"
+#include "avf_rxtx.h"
+
+static inline int
+check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+{
+	/* The following constraints must be satisfied:
+	 *   thresh >= AVF_RX_MAX_BURST
+	 *   thresh < rxq->nb_rx_desc
+	 *   (rxq->nb_rx_desc % thresh) == 0
+	 */
+	if (thresh < AVF_RX_MAX_BURST ||
+	    thresh >= nb_desc ||
+	    (nb_desc % thresh != 0)) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u, "
+			     "greater than or equal to %u, "
+			     "and a divisor of %u",
+			     thresh, nb_desc, AVF_RX_MAX_BURST, nb_desc);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static inline int
+check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+		uint16_t tx_free_thresh)
+{
+	/* TX descriptors will have their RS bit set after tx_rs_thresh
+	 * descriptors have been used. The TX descriptor ring will be cleaned
+	 * after tx_free_thresh descriptors are used or if the number of
+	 * descriptors required to transmit a packet is greater than the
+	 * number of free TX descriptors.
+	 *
+	 * The following constraints must be satisfied:
+	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
+	 *  - tx_free_thresh must be less than the size of the ring minus 3.
+	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+	 *  - tx_rs_thresh must be a divisor of the ring size.
+	 *
+	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+	 * race condition, hence the maximum threshold constraints. When set
+	 * to zero use default values.
+	 */
+	if (tx_rs_thresh >= (nb_desc - 2)) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
+			     "number of TX descriptors (%u) minus 2",
+			     tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be less than the "
+			     "number of TX descriptors (%u) minus 3.",
+			     tx_free_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > tx_free_thresh) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
+			     "equal to tx_free_thresh (%u).",
+			     tx_rs_thresh, tx_free_thresh);
+		return -EINVAL;
+	}
+	if ((nb_desc % tx_rs_thresh) != 0) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
+			     "number of TX descriptors (%u).",
+			     tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline void
+reset_rx_queue(struct avf_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (!rxq)
+		return;
+
+	len = rxq->nb_rx_desc + AVF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(union avf_rx_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < AVF_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+static inline void
+reset_tx_queue(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *txe;
+	uint32_t i, size;
+	uint16_t prev;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct avf_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i].cmd_type_offset_bsz =
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf =  NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_used = 0;
+
+	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+	txq->nb_free = txq->nb_tx_desc - 1;
+
+	txq->next_dd = txq->rs_thresh - 1;
+	txq->next_rs = txq->rs_thresh - 1;
+}
+
+static int
+alloc_rxq_mbufs(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &rxq->rx_ring[i];
+		rxd->read.pkt_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
+#ifndef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+		rxd->read.rsvd1 = 0;
+		rxd->read.rsvd2 = 0;
+#endif
+
+		rxq->sw_ring[i] = mbuf;
+	}
+
+	return 0;
+}
+
+static inline void
+release_rxq_mbufs(struct avf_rx_queue *rxq)
+{
+	struct rte_mbuf *mbuf;
+	uint16_t i;
+
+	if (!rxq->sw_ring)
+		return;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i]) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+			rxq->sw_ring[i] = NULL;
+		}
+	}
+}
+
+static inline void
+release_txq_mbufs(struct avf_tx_queue *txq)
+{
+	uint16_t i;
+
+	if (!txq || !txq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		if (txq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+	}
+}
+
+int
+avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		       uint16_t nb_desc, unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *mp)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+	uint16_t len;
+	uint16_t rx_free_thresh;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (nb_desc % AVF_ALIGN_RING_DESC != 0 ||
+	    nb_desc > AVF_MAX_RING_DESC ||
+	    nb_desc < AVF_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/* Check free threshold */
+	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+			 AVF_DEFAULT_RX_FREE_THRESH :
+			 rx_conf->rx_free_thresh;
+	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx]) {
+		avf_dev_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket("avf rxq",
+				 sizeof(struct avf_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!rxq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "rx queue data structure");
+		return -ENOMEM;
+	}
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->queue_id = queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->crc_len = 0; /* crc stripping by default */
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->rx_hdr_len = 0;
+
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = RTE_ALIGN(len, (1 << AVF_RXQ_CTX_DBUFF_SHIFT));
+
+	/* Allocate the software ring. */
+	len = nb_desc + AVF_RX_MAX_BURST;
+	rxq->sw_ring =
+		rte_zmalloc_socket("avf rx sw ring",
+				   sizeof(struct rte_mbuf *) * len,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!rxq->sw_ring) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+
+	/* Allocate the maximum number of RX ring hardware descriptors with
+	 * a little extra to support bulk allocation.
+	 */
+	len = AVF_MAX_RING_DESC + AVF_RX_MAX_BURST;
+	ring_size = RTE_ALIGN(len * sizeof(union avf_rx_desc),
+			      AVF_DMA_MEM_ALIGN);
+	mz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
+				      ring_size, AVF_RING_BASE_ALIGN,
+				      socket_id);
+	if (!mz) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
+		rte_free(rxq->sw_ring);
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
+	rxq->rx_ring_phys_addr = mz->iova;
+	rxq->rx_ring = (union avf_rx_desc *)mz->addr;
+
+	rxq->mz = mz;
+	reset_rx_queue(rxq);
+	rxq->q_set = TRUE;
+	dev->data->rx_queues[queue_idx] = rxq;
+	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
+
+	return 0;
+}
+
+int
+avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (nb_desc % AVF_ALIGN_RING_DESC != 0 ||
+	    nb_desc > AVF_MAX_RING_DESC ||
+	    nb_desc < AVF_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is "
+			    "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh) ?
+		tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH);
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+		tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
+	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx]) {
+		avf_dev_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("avf txq",
+				 sizeof(struct avf_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!txq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "tx queue structure");
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->rs_thresh = tx_rs_thresh;
+	txq->free_thresh = tx_free_thresh;
+	txq->queue_id = queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->txq_flags = tx_conf->txq_flags;
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	/* Allocate software ring */
+	txq->sw_ring =
+		rte_zmalloc_socket("avf tx sw ring",
+				   sizeof(struct avf_tx_entry) * nb_desc,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!txq->sw_ring) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		rte_free(txq);
+		return -ENOMEM;
+	}
+
+	/* Allocate TX hardware ring descriptors. */
+	ring_size = sizeof(struct avf_tx_desc) * AVF_MAX_RING_DESC;
+	ring_size = RTE_ALIGN(ring_size, AVF_DMA_MEM_ALIGN);
+	mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+				      ring_size, AVF_RING_BASE_ALIGN,
+				      socket_id);
+	if (!mz) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+		rte_free(txq->sw_ring);
+		rte_free(txq);
+		return -ENOMEM;
+	}
+	txq->tx_ring_phys_addr = mz->iova;
+	txq->tx_ring = (struct avf_tx_desc *)mz->addr;
+
+	txq->mz = mz;
+	reset_tx_queue(txq);
+	txq->q_set = TRUE;
+	dev->data->tx_queues[queue_idx] = txq;
+	txq->qtx_tail = hw->hw_addr + AVF_QTX_TAIL1(queue_idx);
+
+	return 0;
+}
+
+int
+avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int err = 0;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	err = alloc_rxq_mbufs(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+		return err;
+	}
+
+	rte_wmb();
+
+	/* Init the RX tail register. */
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	AVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = avf_switch_queue(adapter, rx_queue_id, TRUE, TRUE);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	else
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+int
+avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int err = 0;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
+	AVF_PCI_REG_WRITE(txq->qtx_tail, 0);
+	AVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = avf_switch_queue(adapter, tx_queue_id, FALSE, TRUE);
+
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	else
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+int
+avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int err;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = avf_switch_queue(adapter, rx_queue_id, TRUE, FALSE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	release_rxq_mbufs(rxq);
+	reset_rx_queue(rxq);
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int err;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = avf_switch_queue(adapter, tx_queue_id, FALSE, FALSE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	release_txq_mbufs(txq);
+	reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+void
+avf_dev_rx_queue_release(void *rxq)
+{
+	struct avf_rx_queue *q = (struct avf_rx_queue *)rxq;
+
+	if (!q)
+		return;
+
+	release_rxq_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+avf_dev_tx_queue_release(void *txq)
+{
+	struct avf_tx_queue *q = (struct avf_tx_queue *)txq;
+
+	if (!q)
+		return;
+
+	release_txq_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+avf_stop_queues(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	struct avf_tx_queue *txq;
+	int ret, i;
+
+	/* Stop All queues */
+	ret = avf_disable_queues(adapter);
+	if (ret)
+		PMD_DRV_LOG(WARNING, "Fail to stop queues");
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		release_txq_mbufs(txq);
+		reset_tx_queue(txq);
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		release_rxq_mbufs(rxq);
+		reset_rx_queue(rxq);
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+}
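
The per-queue start/stop ops above back the generic ethdev per-queue API, so an
application can stop and restart an individual queue pair at runtime (for
example when queues are created with deferred start). A minimal sketch, assuming
the port is already configured and started; bounce_queue_pair() is illustrative
only and not part of this patch:

#include <rte_ethdev.h>

/* Hypothetical helper: bounce one Rx/Tx queue pair of a running port. */
static int
bounce_queue_pair(uint16_t port_id, uint16_t qid)
{
	int ret;

	ret = rte_eth_dev_rx_queue_stop(port_id, qid);
	if (ret != 0)
		return ret;
	ret = rte_eth_dev_tx_queue_stop(port_id, qid);
	if (ret != 0)
		return ret;

	/* ... adjust application state here if needed ... */

	ret = rte_eth_dev_tx_queue_start(port_id, qid);
	if (ret != 0)
		return ret;
	return rte_eth_dev_rx_queue_start(port_id, qid);
}

These calls dispatch to the rx_queue_start/rx_queue_stop and
tx_queue_start/tx_queue_stop ops registered by the PMD above.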
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
new file mode 100644
index 0000000..e227cd1
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx.h
@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_RXTX_H_
+#define _AVF_RXTX_H_
+
+/* Ring length (QLEN) must be a whole multiple of 32 descriptors. */
+#define AVF_ALIGN_RING_DESC      32
+#define AVF_MIN_RING_DESC        64
+#define AVF_MAX_RING_DESC        4096
+#define AVF_DMA_MEM_ALIGN        4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define AVF_RING_BASE_ALIGN      128
+
+/* used for Rx Bulk Allocate */
+#define AVF_RX_MAX_BURST         32
+
+#define DEFAULT_TX_RS_THRESH     32
+#define DEFAULT_TX_FREE_THRESH   32
+
+/* HW desc structure, both 16-byte and 32-byte types are supported */
+#ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+#define avf_rx_desc avf_16byte_rx_desc
+#else
+#define avf_rx_desc avf_32byte_rx_desc
+#endif
+
+/* Structure associated with each Rx queue. */
+struct avf_rx_queue {
+	struct rte_mempool *mp;       /* mbuf pool to populate Rx ring */
+	const struct rte_memzone *mz; /* memzone for Rx ring */
+	volatile union avf_rx_desc *rx_ring; /* Rx ring virtual address */
+	uint64_t rx_ring_phys_addr;   /* Rx ring DMA address */
+	struct rte_mbuf **sw_ring;     /* address of SW ring */
+	uint16_t nb_rx_desc;          /* ring length */
+	uint16_t rx_tail;             /* current value of tail */
+	volatile uint8_t *qrx_tail;   /* register address of tail */
+	uint16_t rx_free_thresh;      /* max free RX desc to hold */
+	uint16_t nb_rx_hold;          /* number of held free RX desc */
+	struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
+	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
+	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
+
+	uint16_t port_id;       /* device port ID */
+	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
+	uint16_t queue_id;      /* Rx queue index */
+	uint16_t rx_buf_len;    /* The packet buffer size */
+	uint16_t rx_hdr_len;    /* The header buffer size */
+	uint16_t max_pkt_len;   /* Maximum packet length */
+
+	bool q_set;             /* if rx queue has been configured */
+	bool rx_deferred_start; /* don't start this queue in dev start */
+};
+
+struct avf_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint16_t next_id;
+	uint16_t last_id;
+};
+
+/* Structure associated with each TX queue. */
+struct avf_tx_queue {
+	const struct rte_memzone *mz;  /* memzone for Tx ring */
+	volatile struct avf_tx_desc *tx_ring; /* Tx ring virtual address */
+	uint64_t tx_ring_phys_addr;    /* Tx ring DMA address */
+	struct avf_tx_entry *sw_ring;  /* address array of SW ring */
+	uint16_t nb_tx_desc;           /* ring length */
+	uint16_t tx_tail;              /* current value of tail */
+	volatile uint8_t *qtx_tail;    /* register address of tail */
+	/* number of used desc since RS bit set */
+	uint16_t nb_used;
+	uint16_t nb_free;
+	uint16_t last_desc_cleaned;    /* last desc that has been cleaned */
+	uint16_t free_thresh;
+	uint16_t rs_thresh;
+
+	uint16_t port_id;
+	uint16_t queue_id;
+	uint32_t txq_flags;
+	uint16_t next_dd;              /* next desc to check DD, for VPMD */
+	uint16_t next_rs;              /* next desc to set RS, for VPMD */
+
+	bool q_set;                    /* if tx queue has been configured */
+	bool tx_deferred_start;        /* don't start this queue in dev start */
+};
+
+int avf_dev_rx_queue_setup(struct rte_eth_dev *dev,
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_rxconf *rx_conf,
+			   struct rte_mempool *mp);
+
+int avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void avf_dev_rx_queue_release(void *rxq);
+
+int avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_txconf *tx_conf);
+int avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void avf_dev_tx_queue_release(void *txq);
+void avf_stop_queues(struct rte_eth_dev *dev);
+
+static inline
+void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
+			    const void *desc,
+			    uint16_t rx_id)
+{
+#ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+	const union avf_16byte_rx_desc *rx_desc = desc;
+
+	printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64"\n",
+	       rxq->queue_id, rx_id, rx_desc->read.pkt_addr,
+	       rx_desc->read.hdr_addr);
+#else
+	const union avf_32byte_rx_desc *rx_desc = desc;
+
+	printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64
+	       " QW2: 0x%016"PRIx64" QW3: 0x%016"PRIx64"\n", rxq->queue_id,
+	       rx_id, rx_desc->read.pkt_addr, rx_desc->read.hdr_addr,
+	       rx_desc->read.rsvd1, rx_desc->read.rsvd2);
+#endif
+}
+
+/* All TX descriptor types are 16 bytes, so any one of them can be used
+ * to print the qwords.
+ */
+static inline
+void avf_dump_tx_descriptor(const struct avf_tx_queue *txq,
+			    const void *desc, uint16_t tx_id)
+{
+	const char *name;
+	const struct avf_tx_desc *tx_desc = desc;
+	enum avf_tx_desc_dtype_value type;
+
+	type = (enum avf_tx_desc_dtype_value)rte_le_to_cpu_64(
+		tx_desc->cmd_type_offset_bsz &
+		rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK));
+	switch (type) {
+	case AVF_TX_DESC_DTYPE_DATA:
+		name = "Tx_data_desc";
+		break;
+	case AVF_TX_DESC_DTYPE_CONTEXT:
+		name = "Tx_context_desc";
+		break;
+	default:
+		name = "unknown_desc";
+		break;
+	}
+
+	printf("Queue %d %s %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64"\n",
+	       txq->queue_id, name, tx_id, tx_desc->buffer_addr,
+	       tx_desc->cmd_type_offset_bsz);
+}
+#endif /* _AVF_RXTX_H_ */
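
The receive and transmit paths added later in this series invoke
AVF_DUMP_RX_DESC()/AVF_DUMP_TX_DESC() around the inline helpers above; the macro
definitions themselves are not part of this hunk. A sketch of the assumed
wiring, gated on the DEBUG_DUMP_DESC define mentioned in the Makefile of a later
revision, could look like:

/* Assumed wiring (not shown in this hunk): route the dump macros used by
 * avf_rxtx.c to the inline helpers above, compiled out by default.
 */
#ifdef DEBUG_DUMP_DESC
#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) \
	avf_dump_rx_descriptor(rxq, desc, rx_id)
#define AVF_DUMP_TX_DESC(txq, desc, tx_id) \
	avf_dump_tx_descriptor(txq, desc, tx_id)
#else
#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
#define AVF_DUMP_TX_DESC(txq, desc, tx_id) do { } while (0)
#endif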
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index ebbee31..55a425a 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -25,6 +25,7 @@
 #include "base/avf_type.h"
 
 #include "avf.h"
+#include "avf_rxtx.h"
 
 #define MAX_TRY_TIMES 200
 #define ASQ_DELAY_MS  10
@@ -196,6 +197,48 @@
 	}
 }
 
+int
+avf_enable_vlan_strip(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	int ret;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_ENABLE_VLAN_STRIPPING;
+	args.in_args = NULL;
+	args.in_args_size = 0;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	ret = avf_execute_vf_cmd(adapter, &args);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " OP_ENABLE_VLAN_STRIPPING");
+
+	return ret;
+}
+
+int
+avf_disable_vlan_strip(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	int ret;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_DISABLE_VLAN_STRIPPING;
+	args.in_args = NULL;
+	args.in_args_size = 0;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	ret = avf_execute_vf_cmd(adapter, &args);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " OP_DISABLE_VLAN_STRIPPING");
+
+	return ret;
+}
+
 #define VIRTCHNL_VERSION_MAJOR_START 1
 #define VIRTCHNL_VERSION_MINOR_START 1
 
@@ -274,8 +317,8 @@
 	err = avf_execute_vf_cmd(adapter, &args);
 
 	if (err) {
-		PMD_DRV_LOG(ERR, "Failed to execute command of "
-				 "OP_GET_VF_RESOURCE");
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_GET_VF_RESOURCE");
 		return -1;
 	}
 
@@ -302,3 +345,315 @@
 
 	return 0;
 }
+
+int
+avf_enable_queues(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+
+	queue_select.rx_queues = BIT(adapter->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(adapter->eth_dev->data->nb_tx_queues) - 1;
+
+	args.ops = VIRTCHNL_OP_ENABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_ENABLE_QUEUES");
+		return err;
+	}
+	return 0;
+}
+
+int
+avf_disable_queues(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+
+	queue_select.rx_queues = BIT(adapter->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(adapter->eth_dev->data->nb_tx_queues) - 1;
+
+	args.ops = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_DISABLE_QUEUES");
+		return err;
+	}
+	return 0;
+}
+
+int
+avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
+		 bool rx, bool on)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+	if (rx)
+		queue_select.rx_queues |= 1 << qid;
+	else
+		queue_select.tx_queues |= 1 << qid;
+
+	if (on)
+		args.ops = VIRTCHNL_OP_ENABLE_QUEUES;
+	else
+		args.ops = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of %s",
+			    on ? "OP_ENABLE_QUEUES" : "OP_DISABLE_QUEUES");
+	return err;
+}
+
+int
+avf_configure_rss_lut(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_rss_lut *rss_lut;
+	struct avf_cmd_info args;
+	int len, err = 0;
+
+	len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (!rss_lut)
+		return -ENOMEM;
+
+	rss_lut->vsi_id = vf->vsi_res->vsi_id;
+	rss_lut->lut_entries = vf->vf_res->rss_lut_size;
+	rte_memcpy(rss_lut->lut, vf->rss_lut, vf->vf_res->rss_lut_size);
+
+	args.ops = VIRTCHNL_OP_CONFIG_RSS_LUT;
+	args.in_args = (u8 *)rss_lut;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_CONFIG_RSS_LUT");
+
+	rte_free(rss_lut);
+	return err;
+}
+
+int
+avf_configure_rss_key(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_rss_key *rss_key;
+	struct avf_cmd_info args;
+	int len, err = 0;
+
+	len = sizeof(*rss_key) + vf->vf_res->rss_key_size - 1;
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (!rss_key)
+		return -ENOMEM;
+
+	rss_key->vsi_id = vf->vsi_res->vsi_id;
+	rss_key->key_len = vf->vf_res->rss_key_size;
+	rte_memcpy(rss_key->key, vf->rss_key, vf->vf_res->rss_key_size);
+
+	args.ops = VIRTCHNL_OP_CONFIG_RSS_KEY;
+	args.in_args = (u8 *)rss_key;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_CONFIG_RSS_KEY");
+
+	rte_free(rss_key);
+	return err;
+}
+
+int
+avf_configure_queues(struct avf_adapter *adapter)
+{
+	struct avf_rx_queue **rxq =
+		(struct avf_rx_queue **)adapter->eth_dev->data->rx_queues;
+	struct avf_tx_queue **txq =
+		(struct avf_tx_queue **)adapter->eth_dev->data->tx_queues;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_vsi_queue_config_info *vc_config;
+	struct virtchnl_queue_pair_info *vc_qp;
+	struct avf_cmd_info args;
+	uint16_t i, size;
+	int err;
+
+	size = sizeof(*vc_config) +
+	       sizeof(vc_config->qpair[0]) * vf->num_queue_pairs;
+	vc_config = rte_zmalloc("cfg_queue", size, 0);
+	if (!vc_config)
+		return -ENOMEM;
+
+	vc_config->vsi_id = vf->vsi_res->vsi_id;
+	vc_config->num_queue_pairs = vf->num_queue_pairs;
+
+	for (i = 0, vc_qp = vc_config->qpair;
+	     i < vf->num_queue_pairs;
+	     i++, vc_qp++) {
+		vc_qp->txq.vsi_id = vf->vsi_res->vsi_id;
+		vc_qp->txq.queue_id = i;
+		/* Virtchnl configures queues in pairs */
+		if (i < adapter->eth_dev->data->nb_tx_queues) {
+			vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
+			vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_phys_addr;
+		}
+		vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
+		vc_qp->rxq.queue_id = i;
+		vc_qp->rxq.max_pkt_size = vf->max_pkt_len;
+		/* Virtchnl configures queues in pairs */
+		if (i < adapter->eth_dev->data->nb_rx_queues) {
+			vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
+			vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_phys_addr;
+			vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
+		}
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
+	args.in_args = (uint8_t *)vc_config;
+	args.in_args_size = size;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " VIRTCHNL_OP_CONFIG_VSI_QUEUES");
+
+	rte_free(vc_config);
+	return err;
+}
+
+int
+avf_config_irq_map(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_irq_map_info *map_info;
+	struct virtchnl_vector_map *vecmap;
+	struct avf_cmd_info args;
+	uint32_t vector_id;
+	int len, i, err;
+
+	len = sizeof(struct virtchnl_irq_map_info) +
+	      sizeof(struct virtchnl_vector_map) * vf->nb_msix;
+
+	map_info = rte_zmalloc("map_info", len, 0);
+	if (!map_info)
+		return -ENOMEM;
+
+	map_info->num_vectors = vf->nb_msix;
+	for (i = 0; i < vf->nb_msix; i++) {
+		vecmap = &map_info->vecmap[i];
+		vecmap->vsi_id = vf->vsi_res->vsi_id;
+		vecmap->rxitr_idx = AVF_ITR_INDEX_DEFAULT;
+		vecmap->vector_id = vf->msix_base + i;
+		vecmap->txq_map = 0;
+		vecmap->rxq_map = vf->rxq_map[vf->msix_base + i];
+	}
+
+	args.ops = VIRTCHNL_OP_CONFIG_IRQ_MAP;
+	args.in_args = (u8 *)map_info;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command OP_CONFIG_IRQ_MAP");
+
+	rte_free(map_info);
+	return err;
+}
+
+void
+avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct ether_addr *addr;
+	struct avf_cmd_info args;
+	int len, err, i, j;
+	int next_begin = 0;
+	int begin = 0;
+
+	do {
+		j = 0;
+		len = sizeof(struct virtchnl_ether_addr_list);
+		for (i = begin; i < AVF_NUM_MACADDR_MAX; i++, next_begin++) {
+			addr = &adapter->eth_dev->data->mac_addrs[i];
+			if (is_zero_ether_addr(addr))
+				continue;
+			len += sizeof(struct virtchnl_ether_addr);
+			if (len >= AVF_AQ_BUF_SZ) {
+				next_begin = i + 1;
+				break;
+			}
+		}
+
+		list = rte_zmalloc("avf_del_mac_buffer", len, 0);
+		if (!list) {
+			PMD_DRV_LOG(ERR, "Failed to allocate memory");
+			return;
+		}
+
+		for (i = begin; i < next_begin; i++) {
+			addr = &adapter->eth_dev->data->mac_addrs[i];
+			if (is_zero_ether_addr(addr))
+				continue;
+			rte_memcpy(list->list[j].addr, addr->addr_bytes,
+				   sizeof(addr->addr_bytes));
+			PMD_DRV_LOG(DEBUG, "add/rm mac:%x:%x:%x:%x:%x:%x",
+				    addr->addr_bytes[0], addr->addr_bytes[1],
+				    addr->addr_bytes[2], addr->addr_bytes[3],
+				    addr->addr_bytes[4], addr->addr_bytes[5]);
+			j++;
+		}
+		list->vsi_id = vf->vsi_res->vsi_id;
+		list->num_elements = j;
+		args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+			   VIRTCHNL_OP_DEL_ETH_ADDR;
+		args.in_args = (uint8_t *)list;
+		args.in_args_size = len;
+		args.out_buffer = vf->aq_resp;
+		args.out_size = AVF_AQ_BUF_SZ;
+		err = avf_execute_vf_cmd(adapter, &args);
+		if (err)
+			PMD_DRV_LOG(ERR, "Failed to execute command %s",
+				    add ? "OP_ADD_ETHER_ADDRESS" :
+				    "OP_DEL_ETHER_ADDRESS");
+		rte_free(list);
+		begin = next_begin;
+	} while (begin < AVF_NUM_MACADDR_MAX);
+}
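
Every helper in this file follows the same admin-queue pattern: fill a
struct avf_cmd_info with the virtchnl opcode, the request payload and the
response buffer, hand it to avf_execute_vf_cmd(), and log on failure. A minimal
sketch of that skeleton; VIRTCHNL_OP_EXAMPLE and the msg/len parameters are
placeholders for illustration, not real symbols:

/* Skeleton of the virtchnl command pattern used throughout avf_vchnl.c. */
static int
avf_send_example_cmd(struct avf_adapter *adapter, void *msg, uint16_t len)
{
	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
	struct avf_cmd_info args;
	int err;

	memset(&args, 0, sizeof(args));
	args.ops = VIRTCHNL_OP_EXAMPLE;   /* placeholder opcode */
	args.in_args = (uint8_t *)msg;    /* request sent to the PF */
	args.in_args_size = len;
	args.out_buffer = vf->aq_resp;    /* PF response lands here */
	args.out_size = AVF_AQ_BUF_SZ;

	err = avf_execute_vf_cmd(adapter, &args);
	if (err)
		PMD_DRV_LOG(ERR, "Failed to execute example command");
	return err;
}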
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v6 04/14] net/avf: enable basic Rx Tx func
  2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] add new AVF PMD Wenzhuo Lu
                           ` (2 preceding siblings ...)
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 03/14] net/avf: enable queue and device Wenzhuo Lu
@ 2018-01-10  6:15         ` Wenzhuo Lu
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 05/14] net/avf: enable link status update Wenzhuo Lu
                           ` (10 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10  6:15 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 MAINTAINERS                      |   1 +
 config/common_base               |   4 +
 doc/guides/nics/features/avf.ini |  22 ++
 drivers/net/avf/Makefile         |   3 +
 drivers/net/avf/avf_ethdev.c     |  36 +-
 drivers/net/avf/avf_log.h        |  21 ++
 drivers/net/avf/avf_rxtx.c       | 789 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/avf/avf_rxtx.h       |  53 +++
 8 files changed, 919 insertions(+), 10 deletions(-)
 create mode 100644 doc/guides/nics/features/avf.ini

diff --git a/MAINTAINERS b/MAINTAINERS
index 17f15b6..17067df 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -430,6 +430,7 @@ Intel avf
 M: Jingjing Wu <jingjing.wu@intel.com>
 M: Wenzhuo Lu <wenzhuo.lu@intel.com>
 F: drivers/net/avf/
+F: doc/guides/nics/features/avf*.ini
 
 Mellanox mlx4
 M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
diff --git a/config/common_base b/config/common_base
index f333209..b1f1c1c 100644
--- a/config/common_base
+++ b/config/common_base
@@ -229,6 +229,10 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 # Compile burst-oriented AVF PMD driver
 #
 CONFIG_RTE_LIBRTE_AVF_PMD=y
+CONFIG_RTE_LIBRTE_AVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_AVF_DEBUG_TX_FREE=n
+CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_AVF_16BYTE_RX_DESC=n
 
 #
 # Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD
diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
new file mode 100644
index 0000000..8a294e9
--- /dev/null
+++ b/doc/guides/nics/features/avf.ini
@@ -0,0 +1,22 @@
+;
+; Supported features of the 'avf' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Queue start/stop     = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+TSO                  = Y
+RSS hash             = Y
+CRC offload          = Y
+VLAN offload         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Packet type parsing  = Y
+Multiprocess aware   = Y
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index e172bf5..8d54fc9 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -13,6 +13,9 @@ LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
 LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_hash
 LDLIBS += -lrte_bus_pci
 
+# used to dump HW descriptor for debugging
+# CFLAGS += -DDEBUG_DUMP_DESC
+
 EXPORT_MAP := rte_pmd_avf_version.map
 
 LIBABIVER := 1
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index c53f00e..4480989 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -39,6 +39,7 @@
 static void avf_dev_close(struct rte_eth_dev *dev);
 static void avf_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
+static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -53,6 +54,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	.dev_stop                   = avf_dev_stop,
 	.dev_close                  = avf_dev_close,
 	.dev_infos_get              = avf_dev_info_get,
+	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -204,9 +206,12 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 		if (ret != AVF_SUCCESS)
 			break;
 	}
-	/* TODO: set rx/tx function to vector/scatter/single-segment
+	/* set rx/tx function to vector/scatter/single-segment
 	 * according to parameters
 	 */
+	avf_set_rx_function(dev);
+	avf_set_tx_function(dev);
+
 	return ret;
 }
 
@@ -407,6 +412,23 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	};
 }
 
+static const uint32_t *
+avf_dev_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_L4_NONFRAG,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_UNKNOWN
+	};
+	return ptypes;
+}
+
 static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
@@ -556,7 +578,19 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 
 	/* assign ops func pointer */
 	eth_dev->dev_ops = &avf_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &avf_recv_pkts;
+	eth_dev->tx_pkt_burst = &avf_xmit_pkts;
+	eth_dev->tx_pkt_prepare = &avf_prep_pkts;
 
+	/* For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check if we need a different RX
+	 * and TX function.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		avf_set_rx_function(eth_dev);
+		avf_set_tx_function(eth_dev);
+		return 0;
+	}
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
 	hw->vendor_id = pci_dev->id.vendor_id;
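
avf_set_rx_function() and avf_set_tx_function() are referenced here but defined
elsewhere in this patch; presumably the Rx selector picks the scattered receive
path when the port is configured for multi-segment reception. A sketch under
that assumption (the real selection logic may differ):

/* Assumed shape of the Rx burst-function selector referenced above. */
static void
avf_set_rx_function_sketch(struct rte_eth_dev *dev)
{
	if (dev->data->scattered_rx)
		dev->rx_pkt_burst = avf_recv_scattered_pkts;
	else
		dev->rx_pkt_burst = avf_recv_pkts;
}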
diff --git a/drivers/net/avf/avf_log.h b/drivers/net/avf/avf_log.h
index e3f106b..8d574d3 100644
--- a/drivers/net/avf/avf_log.h
+++ b/drivers/net/avf/avf_log.h
@@ -20,4 +20,25 @@
 	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
 #define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
 
+#ifdef RTE_LIBRTE_AVF_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVF_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVF_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
 #endif /* _AVF_LOG_H_ */
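
These datapath log macros expand to RTE_LOG_DP() only when the matching
CONFIG_RTE_LIBRTE_AVF_DEBUG_* option is set to 'y' in config/common_base;
otherwise they compile to empty statements, so they cost nothing in the hot
path. A trivial usage sketch, assuming avf_log.h is included:

/* Compiles to a no-op unless CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=y is set. */
static inline void
trace_rx_burst(uint16_t port_id, uint16_t queue_id, uint16_t nb_rx)
{
	PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u received %u packets",
		   port_id, queue_id, nb_rx);
}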
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index 2d4fb4c..baccec4 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -34,17 +34,11 @@
 check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
 {
 	/* The following constraints must be satisfied:
-	 *   thresh >= AVF_RX_MAX_BURST
 	 *   thresh < rxq->nb_rx_desc
-	 *   (rxq->nb_rx_desc % thresh) == 0
 	 */
-	if (thresh < AVF_RX_MAX_BURST ||
-	    thresh >= nb_desc ||
-	    (nb_desc % thresh != 0)) {
-		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u, "
-			     "greater than or equal to %u, "
-			     "and a divisor of %u",
-			     thresh, nb_desc, AVF_RX_MAX_BURST, nb_desc);
+	if (thresh >= nb_desc) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
+			     thresh, nb_desc);
 		return -EINVAL;
 	}
 	return 0;
@@ -614,3 +608,780 @@
 		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 }
+
+static inline void
+avf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union avf_rx_desc *rxdp)
+{
+	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		(1 << AVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
+		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->vlan_tci =
+			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+	} else {
+		mb->vlan_tci = 0;
+	}
+}
+
+/* Translate the rx descriptor status and error fields to pkt flags */
+static inline uint64_t
+avf_rxd_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags;
+	uint64_t error_bits = (qword >> AVF_RXD_QW1_ERROR_SHIFT);
+
+#define AVF_RX_ERR_BITS 0x3f
+
+	/* Check if RSS_HASH */
+	flags = (((qword >> AVF_RX_DESC_STATUS_FLTSTAT_SHIFT) &
+					AVF_RX_DESC_FLTSTAT_RSS_HASH) ==
+			AVF_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+
+	if (likely((error_bits & AVF_RX_ERR_BITS) == 0)) {
+		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely(error_bits & (1 << AVF_RX_DESC_ERROR_IPE_SHIFT)))
+		flags |= PKT_RX_IP_CKSUM_BAD;
+	else
+		flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << AVF_RX_DESC_ERROR_L4E_SHIFT)))
+		flags |= PKT_RX_L4_CKSUM_BAD;
+	else
+		flags |= PKT_RX_L4_CKSUM_GOOD;
+
+	/* TODO: Oversize error bit is not processed here */
+
+	return flags;
+}
+
+/* implement recv_pkts */
+uint16_t
+avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	volatile union avf_rx_desc *rx_ring;
+	volatile union avf_rx_desc *rxdp;
+	struct avf_rx_queue *rxq;
+	union avf_rx_desc rxd;
+	struct rte_mbuf *rxe;
+	struct rte_eth_dev *dev;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	uint16_t nb_rx;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint16_t rx_packet_len;
+	uint16_t rx_id, nb_hold;
+	uint64_t dma_addr;
+	uint64_t pkt_flags;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	nb_rx = 0;
+	nb_hold = 0;
+	rxq = rx_queue;
+	rx_id = rxq->rx_tail;
+	rx_ring = rxq->rx_ring;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			    AVF_RXD_QW1_STATUS_SHIFT;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+			break;
+		AVF_DUMP_RX_DESC(rxq, rxdp, rx_id);
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = &rte_eth_devices[rxq->port_id];
+			dev->data->rx_mbuf_alloc_failed++;
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+				   "queue_id=%u", rxq->port_id, rxq->queue_id);
+			break;
+		}
+
+		rxd = *rxdp;
+		nb_hold++;
+		rxe = rxq->sw_ring[rx_id];
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		rxm = rxe;
+		rxe = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+
+		rx_packet_len = ((qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				AVF_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
+
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
+		avf_rxd_to_vlan_tci(rxm, &rxd);
+		pkt_flags = avf_rxd_to_pkt_flags(qword1);
+		rxm->packet_type =
+			ptype_tbl[(uint8_t)((qword1 &
+			AVF_RXD_QW1_PTYPE_MASK) >> AVF_RXD_QW1_PTYPE_SHIFT)];
+
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			rxm->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		rxm->ol_flags |= pkt_flags;
+
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+
+	/* If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the receive tail register of queue.
+	 * Update that register with the value of the last processed RX
+	 * descriptor minus 1.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+			   "nb_hold=%u nb_rx=%u",
+			   rxq->port_id, rxq->queue_id,
+			   rx_id, nb_hold, nb_rx);
+		rx_id = (uint16_t)((rx_id == 0) ?
+			(rxq->nb_rx_desc - 1) : (rx_id - 1));
+		AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	return nb_rx;
+}
+
+/* implement recv_scattered_pkts  */
+uint16_t
+avf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	union avf_rx_desc rxd;
+	struct rte_mbuf *rxe;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *nmb, *rxm;
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0, nb_hold = 0, rx_packet_len;
+	struct rte_eth_dev *dev;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags;
+
+	volatile union avf_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union avf_rx_desc *rxdp;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			    AVF_RXD_QW1_STATUS_SHIFT;
+
+		/* Check the DD bit */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+			break;
+		AVF_DUMP_RX_DESC(rxq, rxdp, rx_id);
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+				   "queue_id=%u", rxq->port_id, rxq->queue_id);
+			dev = &rte_eth_devices[rxq->port_id];
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+
+		rxd = *rxdp;
+		nb_hold++;
+		rxe = rxq->sw_ring[rx_id];
+		rx_id++;
+		if (rx_id == rxq->nb_rx_desc)
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+
+		rxm = rxe;
+		rxe = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/* Set data buffer address and data length of the mbuf */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				 AVF_RXD_QW1_LENGTH_PBUF_SHIFT;
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/* If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+						rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		/* If this is not the last buffer of the received packet,
+		 * update the pointer to the last mbuf of the current scattered
+		 * packet and continue to parse the RX ring.
+		 */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_EOF_SHIFT))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		/* This is the last buffer of the received packet. If the CRC
+		 * is not stripped by the hardware:
+		 *  - Subtract the CRC length from the total packet length.
+		 *  - If the last buffer only contains the whole CRC or a part
+		 *  of it, free the mbuf associated to the last buffer. If part
+		 *  of the CRC is also contained in the previous mbuf, subtract
+		 *  the length of that CRC part from the data length of the
+		 *  previous mbuf.
+		 */
+		rxm->next = NULL;
+		if (unlikely(rxq->crc_len > 0)) {
+			first_seg->pkt_len -= ETHER_CRC_LEN;
+			if (rx_packet_len <= ETHER_CRC_LEN) {
+				rte_pktmbuf_free_seg(rxm);
+				first_seg->nb_segs--;
+				last_seg->data_len =
+					(uint16_t)(last_seg->data_len -
+					(ETHER_CRC_LEN - rx_packet_len));
+				last_seg->next = NULL;
+			} else
+				rxm->data_len = (uint16_t)(rx_packet_len -
+								ETHER_CRC_LEN);
+		}
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+		avf_rxd_to_vlan_tci(first_seg, &rxd);
+		pkt_flags = avf_rxd_to_pkt_flags(qword1);
+		first_seg->packet_type =
+			ptype_tbl[(uint8_t)((qword1 &
+			AVF_RXD_QW1_PTYPE_MASK) >> AVF_RXD_QW1_PTYPE_SHIFT)];
+
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			first_seg->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		first_seg->ol_flags |= pkt_flags;
+
+		/* Prefetch data of first segment, if configured to do so. */
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+
+	/* Record index of the next RX descriptor to probe. */
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	/* If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register. Update the RDT with the value of the last processed RX
+	 * descriptor minus 1, to guarantee that the RDT register is never
+	 * equal to the RDH register, which creates a "full" ring situation
+	 * from the hardware point of view.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+			   "nb_hold=%u nb_rx=%u",
+			   rxq->port_id, rxq->queue_id,
+			   rx_id, nb_hold, nb_rx);
+		rx_id = (uint16_t)(rx_id == 0 ?
+			(rxq->nb_rx_desc - 1) : (rx_id - 1));
+		AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	return nb_rx;
+}
+
+static inline int
+avf_xmit_cleanup(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *sw_ring = txq->sw_ring;
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+
+	volatile struct avf_tx_desc *txd = txq->tx_ring;
+
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	if ((txd[desc_to_clean_to].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK)) !=
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE)) {
+		PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
+				"(port=%d queue=%d)", desc_to_clean_to,
+				txq->port_id, txq->queue_id);
+		return -1;
+	}
+
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+							desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+					last_desc_cleaned);
+
+	txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
+
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+
+	return 0;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+avf_calc_context_desc(uint64_t flags)
+{
+	static uint64_t mask = PKT_TX_TCP_SEG;
+
+	return (flags & mask) ? 1 : 0;
+}
+
+static inline void
+avf_txd_enable_checksum(uint64_t ol_flags,
+			uint32_t *td_cmd,
+			uint32_t *td_offset,
+			union avf_tx_offload tx_offload)
+{
+	/* Set MACLEN */
+	*td_offset |= (tx_offload.l2_len >> 1) <<
+		      AVF_TX_DESC_LENGTH_MACLEN_SHIFT;
+
+	/* Enable L3 checksum offloads */
+	if (ol_flags & PKT_TX_IP_CKSUM) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV4_CSUM;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	} else if (ol_flags & PKT_TX_IPV4) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV4;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	} else if (ol_flags & PKT_TX_IPV6) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV6;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	}
+
+	if (ol_flags & PKT_TX_TCP_SEG) {
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (tx_offload.l4_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		return;
+	}
+
+	/* Enable L4 checksum offloads */
+	switch (ol_flags & PKT_TX_L4_MASK) {
+	case PKT_TX_TCP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (sizeof(struct tcp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	case PKT_TX_SCTP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_SCTP;
+		*td_offset |= (sizeof(struct sctp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	case PKT_TX_UDP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_UDP;
+		*td_offset |= (sizeof(struct udp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	default:
+		break;
+	}
+}
+
+/* set TSO context descriptor
+ * support IP -> L4 and IP -> IP -> L4
+ */
+static inline uint64_t
+avf_set_tso_ctx(struct rte_mbuf *mbuf, union avf_tx_offload tx_offload)
+{
+	uint64_t ctx_desc = 0;
+	uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+	if (!tx_offload.l4_len) {
+		PMD_TX_LOG(DEBUG, "L4 length set to 0");
+		return ctx_desc;
+	}
+
+	/* In the case of a non-tunneling packet, outer_l2_len and
+	 * outer_l3_len must be 0.
+	 */
+	hdr_len = tx_offload.l2_len +
+		  tx_offload.l3_len +
+		  tx_offload.l4_len;
+
+	cd_cmd = AVF_TX_CTX_DESC_TSO;
+	cd_tso_len = mbuf->pkt_len - hdr_len;
+	ctx_desc |= ((uint64_t)cd_cmd << AVF_TXD_CTX_QW1_CMD_SHIFT) |
+		     ((uint64_t)cd_tso_len << AVF_TXD_CTX_QW1_TSO_LEN_SHIFT) |
+		     ((uint64_t)mbuf->tso_segsz << AVF_TXD_CTX_QW1_MSS_SHIFT);
+
+	return ctx_desc;
+}
+
+/* Construct the tx flags */
+static inline uint64_t
+avf_build_ctob(uint32_t td_cmd, uint32_t td_offset, unsigned int size,
+	       uint32_t td_tag)
+{
+	return rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd  << AVF_TXD_QW1_CMD_SHIFT) |
+				((uint64_t)td_offset <<
+				 AVF_TXD_QW1_OFFSET_SHIFT) |
+				((uint64_t)size  <<
+				 AVF_TXD_QW1_TX_BUF_SZ_SHIFT) |
+				((uint64_t)td_tag  <<
+				 AVF_TXD_QW1_L2TAG1_SHIFT));
+}
+
+/* TX function */
+uint16_t
+avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	volatile struct avf_tx_desc *txd;
+	volatile struct avf_tx_desc *txr;
+	struct avf_tx_queue *txq;
+	struct avf_tx_entry *sw_ring;
+	struct avf_tx_entry *txe, *txn;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	uint16_t tx_id;
+	uint16_t nb_tx;
+	uint32_t td_cmd;
+	uint32_t td_offset;
+	uint32_t td_tag;
+	uint64_t ol_flags;
+	uint16_t nb_used;
+	uint16_t nb_ctx;
+	uint16_t tx_last;
+	uint16_t slen;
+	uint64_t buf_dma_addr;
+	union avf_tx_offload tx_offload = {0};
+
+	txq = tx_queue;
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+	tx_id = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Check if the descriptor ring needs to be cleaned. */
+	if (txq->nb_free < txq->free_thresh)
+		avf_xmit_cleanup(txq);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		td_cmd = 0;
+		td_tag = 0;
+		td_offset = 0;
+
+		tx_pkt = *tx_pkts++;
+		RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
+
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = avf_calc_context_desc(ol_flags);
+
+		/* The number of descriptors that must be allocated for
+		 * a packet equals the number of segments of that packet,
+		 * plus 1 context descriptor if needed.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+		PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u"
+			   " tx_first=%u tx_last=%u",
+			   txq->port_id, txq->queue_id, tx_id, tx_last);
+
+		if (nb_used > txq->nb_free) {
+			if (avf_xmit_cleanup(txq)) {
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+			if (unlikely(nb_used > txq->rs_thresh)) {
+				while (nb_used > txq->nb_free) {
+					if (avf_xmit_cleanup(txq)) {
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		/* Descriptor based VLAN insertion */
+		if (ol_flags & PKT_TX_VLAN_PKT) {
+			td_cmd |= AVF_TX_DESC_CMD_IL2TAG1;
+			td_tag = tx_pkt->vlan_tci;
+		}
+
+		/* According to datasheet, the bit2 is reserved and must be
+		 * set to 1.
+		 */
+		td_cmd |= 0x04;
+
+		/* Enable checksum offloading */
+		if (ol_flags & AVF_TX_CKSUM_OFFLOAD_MASK)
+			avf_txd_enable_checksum(ol_flags, &td_cmd,
+						&td_offset, tx_offload);
+
+		if (nb_ctx) {
+			/* Setup TX context descriptor if required */
+			volatile struct avf_tx_context_desc *ctx_txd =
+				(volatile struct avf_tx_context_desc *)
+					&txr[tx_id];
+			uint16_t cd_l2tag2 = 0;
+			uint64_t cd_type_cmd_tso_mss =
+				AVF_TX_DESC_DTYPE_CONTEXT;
+
+			txn = &sw_ring[txe->next_id];
+			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+			if (txe->mbuf) {
+				rte_pktmbuf_free_seg(txe->mbuf);
+				txe->mbuf = NULL;
+			}
+
+			/* TSO enabled */
+			if (ol_flags & PKT_TX_TCP_SEG)
+				cd_type_cmd_tso_mss |=
+					avf_set_tso_ctx(tx_pkt, tx_offload);
+
+			AVF_DUMP_TX_DESC(txq, ctx_txd, tx_id);
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+		}
+
+		m_seg = tx_pkt;
+		do {
+			txd = &txr[tx_id];
+			txn = &sw_ring[txe->next_id];
+
+			if (txe->mbuf)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/* Setup TX Descriptor */
+			slen = m_seg->data_len;
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
+			txd->cmd_type_offset_bsz = avf_build_ctob(td_cmd,
+								  td_offset,
+								  slen,
+								  td_tag);
+
+			AVF_DUMP_TX_DESC(txq, txd, tx_id);
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg);
+
+		/* The last packet data descriptor needs End Of Packet (EOP) */
+		td_cmd |= AVF_TX_DESC_CMD_EOP;
+		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+
+		if (txq->nb_used >= txq->rs_thresh) {
+			PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
+				   "%4u (port=%d queue=%d)",
+				   tx_last, txq->port_id, txq->queue_id);
+
+			td_cmd |= AVF_TX_DESC_CMD_RS;
+
+			/* Update txq RS bit counters */
+			txq->nb_used = 0;
+		}
+
+		txd->cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)td_cmd) <<
+					 AVF_TXD_QW1_CMD_SHIFT);
+		AVF_DUMP_TX_DESC(txq, txd, tx_id);
+	}
+
+end_of_tx:
+	rte_wmb();
+
+	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+		   txq->port_id, txq->queue_id, tx_id, nb_tx);
+
+	AVF_PCI_REG_WRITE_RELAXED(txq->qtx_tail, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
+/* TX prep functions */
+uint16_t
+avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	      uint16_t nb_pkts)
+{
+	int i, ret;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		/* Check condition for nb_segs > AVF_TX_MAX_MTU_SEG. */
+		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+			if (m->nb_segs > AVF_TX_MAX_MTU_SEG) {
+				rte_errno = -EINVAL;
+				return i;
+			}
+		} else if ((m->tso_segsz < AVF_MIN_TSO_MSS) ||
+			   (m->tso_segsz > AVF_MAX_TSO_MSS)) {
+			/* An MSS outside this range is considered malicious */
+			rte_errno = -EINVAL;
+			return i;
+		}
+
+		if (ol_flags & AVF_TX_OFFLOAD_NOTSUP_MASK) {
+			rte_errno = -ENOTSUP;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			return i;
+		}
+#endif
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = ret;
+			return i;
+		}
+	}
+
+	return i;
+}
+
+/* Choose Rx function */
+void
+avf_set_rx_function(struct rte_eth_dev *dev)
+{
+	if (dev->data->scattered_rx)
+		dev->rx_pkt_burst = avf_recv_scattered_pkts;
+	else
+		dev->rx_pkt_burst = avf_recv_pkts;
+}
+
+/* Choose Tx function */
+void
+avf_set_tx_function(struct rte_eth_dev *dev)
+{
+	dev->tx_pkt_burst = avf_xmit_pkts;
+	dev->tx_pkt_prepare = avf_prep_pkts;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index e227cd1..cad240d 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -19,6 +19,25 @@
 #define DEFAULT_TX_RS_THRESH     32
 #define DEFAULT_TX_FREE_THRESH   32
 
+#define AVF_MIN_TSO_MSS          256
+#define AVF_MAX_TSO_MSS          9668
+#define AVF_TSO_MAX_SEG          UINT8_MAX
+#define AVF_TX_MAX_MTU_SEG       8
+
+#define AVF_TX_CKSUM_OFFLOAD_MASK (		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG)
+
+#define AVF_TX_OFFLOAD_MASK (  \
+		PKT_TX_VLAN_PKT |		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG)
+
+#define AVF_TX_OFFLOAD_NOTSUP_MASK \
+		(PKT_TX_OFFLOAD_MASK ^ AVF_TX_OFFLOAD_MASK)
+
 /* HW desc structure, both 16-byte and 32-byte types are supported */
 #ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
 #define avf_rx_desc avf_16byte_rx_desc
@@ -85,6 +104,18 @@ struct avf_tx_queue {
 	bool tx_deferred_start;        /* don't start this queue in dev start */
 };
 
+/* Offload features */
+union avf_tx_offload {
+	uint64_t data;
+	struct {
+		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /* L3 (IP) Header Length. */
+		uint64_t l4_len:8; /* L4 Header Length. */
+		uint64_t tso_segsz:16; /* TCP TSO segment size */
+		/* uint64_t unused : 24; */
+	};
+};
+
 int avf_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			   uint16_t queue_idx,
 			   uint16_t nb_desc,
@@ -105,6 +136,17 @@ int avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
 int avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void avf_dev_tx_queue_release(void *txq);
 void avf_stop_queues(struct rte_eth_dev *dev);
+uint16_t avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts);
+uint16_t avf_recv_scattered_pkts(void *rx_queue,
+				 struct rte_mbuf **rx_pkts,
+				 uint16_t nb_pkts);
+uint16_t avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+uint16_t avf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void avf_set_rx_function(struct rte_eth_dev *dev);
+void avf_set_tx_function(struct rte_eth_dev *dev);
 
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
@@ -157,4 +199,15 @@ void avf_dump_tx_descriptor(const struct avf_tx_queue *txq,
 	       txq->queue_id, name, tx_id, tx_desc->buffer_addr,
 	       tx_desc->cmd_type_offset_bsz);
 }
+
+#ifdef DEBUG_DUMP_DESC
+#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) \
+	avf_dump_rx_descriptor(rxq, desc, rx_id)
+#define AVF_DUMP_TX_DESC(txq, desc, tx_id) \
+	avf_dump_tx_descriptor(txq, desc, tx_id)
+#else
+#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
+#define AVF_DUMP_TX_DESC(txq, desc, tx_id) do { } while (0)
+#endif
+
 #endif /* _AVF_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
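
The TSO context descriptor above only carries the payload length (packet length minus the MAC/IP/L4 headers) and the MSS; the hardware derives the per-segment sizes from those two values. A minimal standalone sketch of that arithmetic (illustration only, not part of the patch; all sizes are assumed example values):

/* Illustration only: how avf_set_tso_ctx() derives the TSO fields for a
 * plain Ethernet/IPv4/TCP packet.  All sizes below are example values.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t l2_len = 14, l3_len = 20, l4_len = 20; /* Ether/IPv4/TCP */
	uint32_t pkt_len = 9014;     /* total bytes held by the mbuf chain */
	uint32_t tso_segsz = 1460;   /* MSS requested via mbuf->tso_segsz */

	uint32_t hdr_len = l2_len + l3_len + l4_len;
	uint32_t cd_tso_len = pkt_len - hdr_len;  /* payload to be segmented */
	uint32_t nb_segs = (cd_tso_len + tso_segsz - 1) / tso_segsz;

	printf("hdr_len=%u cd_tso_len=%u -> %u wire segments\n",
	       hdr_len, cd_tso_len, nb_segs);
	return 0;
}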

* [dpdk-dev] [PATCH v6 05/14] net/avf: enable link status update
  2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] dd new AVF PMD Wenzhuo Lu
                           ` (3 preceding siblings ...)
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 04/14] net/avf: enable basic Rx Tx func Wenzhuo Lu
@ 2018-01-10  6:15         ` Wenzhuo Lu
  2018-01-10  9:44           ` Xing, Beilei
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 06/14] net/avf: support stats Wenzhuo Lu
                           ` (9 subsequent siblings)
  14 siblings, 1 reply; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10  6:15 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
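A minimal application-side sketch (illustration only, not part of this patch) of reading the link state this op exposes; the PMD reports only the values cached from the last LINK_CHANGE event:

/* Illustration only: read the cached link state of an already started
 * AVF port.  Port 0 is assumed here.
 */
#include <stdio.h>
#include <rte_ethdev.h>

static void print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	/* Returns immediately with the state cached by the PMD, which the
	 * AVF driver refreshes on VIRTCHNL_EVENT_LINK_CHANGE.
	 */
	rte_eth_link_get_nowait(port_id, &link);
	printf("port %u: %s, %u Mbps\n", port_id,
	       link.link_status ? "up" : "down", link.link_speed);
}
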
 doc/guides/nics/features/avf.ini |  3 +++
 drivers/net/avf/avf.h            |  2 ++
 drivers/net/avf/avf_ethdev.c     | 51 +++++++++++++++++++++++++++++++++++++++-
 drivers/net/avf/avf_vchnl.c      | 38 +++++++++++++++++++++++++++++-
 4 files changed, 92 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 8a294e9..77e4f53 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -4,6 +4,9 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
 Queue start/stop     = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 22886d4..c97b2ee 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -202,4 +202,6 @@ int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
 int avf_configure_queues(struct avf_adapter *adapter);
 int avf_config_irq_map(struct avf_adapter *adapter);
 void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
+int avf_dev_link_update(struct rte_eth_dev *dev,
+			__rte_unused int wait_to_complete);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 4480989..7f7ddf9 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -55,6 +55,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	.dev_close                  = avf_dev_close,
 	.dev_infos_get              = avf_dev_info_get,
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
+	.link_update                = avf_dev_link_update,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -429,6 +430,53 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	return ptypes;
 }
 
+int
+avf_dev_link_update(struct rte_eth_dev *dev,
+		    __rte_unused int wait_to_complete)
+{
+	struct rte_eth_link new_link;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	/* Only read the link status info stored in the VF; it is updated
+	 * when a LINK_CHANGE event is received from the PF over virtchnl.
+	 */
+	switch (vf->link_speed) {
+	case VIRTCHNL_LINK_SPEED_100MB:
+		new_link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+	case VIRTCHNL_LINK_SPEED_1GB:
+		new_link.link_speed = ETH_SPEED_NUM_1G;
+		break;
+	case VIRTCHNL_LINK_SPEED_10GB:
+		new_link.link_speed = ETH_SPEED_NUM_10G;
+		break;
+	case VIRTCHNL_LINK_SPEED_20GB:
+		new_link.link_speed = ETH_SPEED_NUM_20G;
+		break;
+	case VIRTCHNL_LINK_SPEED_25GB:
+		new_link.link_speed = ETH_SPEED_NUM_25G;
+		break;
+	case VIRTCHNL_LINK_SPEED_40GB:
+		new_link.link_speed = ETH_SPEED_NUM_40G;
+		break;
+	default:
+		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		break;
+	}
+
+	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vf->link_up ? ETH_LINK_UP :
+					     ETH_LINK_DOWN;
+	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+				  ETH_LINK_SPEED_FIXED);
+
+	rte_atomic64_cmpset((uint64_t *)&dev->data->dev_link,
+			    *(uint64_t *)&dev->data->dev_link,
+			    *(uint64_t *)&new_link);
+
+	return 0;
+}
+
 static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
@@ -712,7 +760,8 @@ static int eth_avf_pci_remove(struct rte_pci_device *pci_dev)
 /* Adaptive virtual function driver struct */
 static struct rte_pci_driver rte_avf_pmd = {
 	.id_table = pci_id_avf_map,
-	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC |
+		     RTE_PCI_DRV_IOVA_AS_VA,
 	.probe = eth_avf_pci_probe,
 	.remove = eth_avf_pci_remove,
 };
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index 55a425a..f5da601 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -133,6 +133,41 @@
 	return err;
 }
 
+static void
+avf_handle_pf_event_msg(struct rte_eth_dev *dev, uint8_t *msg,
+			uint16_t msglen)
+{
+	struct virtchnl_pf_event *pf_msg =
+			(struct virtchnl_pf_event *)msg;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	if (msglen < sizeof(struct virtchnl_pf_event)) {
+		PMD_DRV_LOG(DEBUG, "Error event");
+		return;
+	}
+	switch (pf_msg->event) {
+	case VIRTCHNL_EVENT_RESET_IMPENDING:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_RESET_IMPENDING event");
+		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET,
+					      NULL, NULL);
+		break;
+	case VIRTCHNL_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_LINK_CHANGE event");
+		vf->link_up = pf_msg->event_data.link_event.link_status;
+		vf->link_speed = pf_msg->event_data.link_event.link_speed;
+		avf_dev_link_update(dev, 0);
+		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC,
+					      NULL, NULL);
+		break;
+	case VIRTCHNL_EVENT_PF_DRIVER_CLOSE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_PF_DRIVER_CLOSE event");
+		break;
+	default:
+		PMD_DRV_LOG(ERR, " unknown event received %u", pf_msg->event);
+		break;
+	}
+}
+
 void
 avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
 {
@@ -172,7 +207,8 @@
 		switch (aq_opc) {
 		case avf_aqc_opc_send_msg_to_vf:
 			if (msg_opc == VIRTCHNL_OP_EVENT) {
-				/* TODO */
+				avf_handle_pf_event_msg(dev, info.msg_buf,
+							info.msg_len);
 			} else {
 				/* read message and it's expected one */
 				if (msg_opc == vf->pend_cmd) {
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v6 06/14] net/avf: support stats
  2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] dd new AVF PMD Wenzhuo Lu
                           ` (4 preceding siblings ...)
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 05/14] net/avf: enable link status update Wenzhuo Lu
@ 2018-01-10  6:15         ` Wenzhuo Lu
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 07/14] net/avf: enable MAC VLAN and promisc ops Wenzhuo Lu
                           ` (8 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10  6:15 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
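A minimal application-side sketch (illustration only, not part of this patch) of retrieving the counters that avf_dev_stats_get() fills from the VIRTCHNL_OP_GET_STATS reply:

/* Illustration only: dump the basic stats of an already started AVF port. */
#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void dump_stats(uint16_t port_id)
{
	struct rte_eth_stats st;

	if (rte_eth_stats_get(port_id, &st) != 0)
		return;
	printf("rx pkts=%" PRIu64 " bytes=%" PRIu64 " missed=%" PRIu64 "\n",
	       st.ipackets, st.ibytes, st.imissed);
	printf("tx pkts=%" PRIu64 " bytes=%" PRIu64 " errors=%" PRIu64 "\n",
	       st.opackets, st.obytes, st.oerrors);
}
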
 doc/guides/nics/features/avf.ini |  1 +
 drivers/net/avf/avf.h            |  2 ++
 drivers/net/avf/avf_ethdev.c     | 27 +++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c      | 27 +++++++++++++++++++++++++++
 4 files changed, 57 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 77e4f53..af84599 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -17,6 +17,7 @@ VLAN offload         = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
+Basic stats          = Y
 Multiprocess aware   = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index c97b2ee..680b117 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -204,4 +204,6 @@ int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
 void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
 int avf_dev_link_update(struct rte_eth_dev *dev,
 			__rte_unused int wait_to_complete);
+int avf_query_stats(struct avf_adapter *adapter,
+		    struct virtchnl_eth_stats **pstats);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 7f7ddf9..bf6251b 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -40,6 +40,8 @@
 static void avf_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+static int avf_dev_stats_get(struct rte_eth_dev *dev,
+			     struct rte_eth_stats *stats);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -56,6 +58,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	.dev_infos_get              = avf_dev_info_get,
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.link_update                = avf_dev_link_update,
+	.stats_get                  = avf_dev_stats_get,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -478,6 +481,30 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 }
 
 static int
+avf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct virtchnl_eth_stats *pstats = NULL;
+	int ret;
+
+	ret = avf_query_stats(adapter, &pstats);
+	if (ret == 0) {
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+						pstats->rx_broadcast;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->obytes = pstats->tx_bytes;
+		return 0;
+	}
+	PMD_DRV_LOG(ERR, "Get statistics failed");
+	return -EIO;
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index f5da601..e26527f 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -693,3 +693,30 @@
 		begin = next_begin;
 	} while (begin < AVF_NUM_MACADDR_MAX);
 }
+
+int
+avf_query_stats(struct avf_adapter *adapter,
+		struct virtchnl_eth_stats **pstats)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select q_stats;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&q_stats, 0, sizeof(q_stats));
+	q_stats.vsi_id = vf->vsi_res->vsi_id;
+	args.ops = VIRTCHNL_OP_GET_STATS;
+	args.in_args = (uint8_t *)&q_stats;
+	args.in_args_size = sizeof(q_stats);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to execute command OP_GET_STATS");
+		*pstats = NULL;
+		return err;
+	}
+	*pstats = (struct virtchnl_eth_stats *)args.out_buffer;
+	return 0;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v6 07/14] net/avf: enable MAC VLAN and promisc ops
  2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] dd new AVF PMD Wenzhuo Lu
                           ` (5 preceding siblings ...)
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 06/14] net/avf: support stats Wenzhuo Lu
@ 2018-01-10  6:15         ` Wenzhuo Lu
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 08/14] net/avf: enable ops for RSS setting Wenzhuo Lu
                           ` (7 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10  6:15 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

 - promiscuous_enable
 - promiscuous_disable
 - allmulticast_enable
 - allmulticast_disable
 - mac_addr_add
 - mac_addr_remove
 - mac_addr_set
 - vlan_filter_set
 - vlan_offload_set

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
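A minimal application-side sketch (illustration only, not part of this patch) showing how the ops listed above are reached through the generic ethdev API; the MAC address and VLAN ID are arbitrary example values:

/* Illustration only: drive the new ops from an application (port 0 assumed). */
#include <rte_ethdev.h>
#include <rte_ether.h>

static void setup_filtering(uint16_t port_id)
{
	struct ether_addr extra = {
		.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 } };

	rte_eth_promiscuous_enable(port_id);	/* -> avf_config_promisc() */
	rte_eth_allmulticast_enable(port_id);	/* -> avf_config_promisc() */
	rte_eth_dev_mac_addr_add(port_id, &extra, 0);	/* -> OP_ADD_ETH_ADDR */
	rte_eth_dev_vlan_filter(port_id, 100, 1);	/* -> OP_ADD_VLAN */
}
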
 doc/guides/nics/features/avf.ini |   5 +
 drivers/net/avf/avf.h            |   5 +
 drivers/net/avf/avf_ethdev.c     | 219 +++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c      |  90 ++++++++++++++++
 4 files changed, 319 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index af84599..1dd6114 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -11,7 +11,12 @@ Queue start/stop     = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
 TSO                  = Y
+Promiscuous mode     = Y
+Allmulticast mode    = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
 RSS hash             = Y
+VLAN filter          = Y
 CRC offload          = Y
 VLAN offload         = Y
 L3 checksum offload  = Y
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 680b117..ea48310 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -206,4 +206,9 @@ int avf_dev_link_update(struct rte_eth_dev *dev,
 			__rte_unused int wait_to_complete);
 int avf_query_stats(struct avf_adapter *adapter,
 		    struct virtchnl_eth_stats **pstats);
+int avf_config_promisc(struct avf_adapter *adapter, bool enable_unicast,
+		       bool enable_multicast);
+int avf_add_del_eth_addr(struct avf_adapter *adapter,
+			 struct ether_addr *addr, bool add);
+int avf_add_del_vlan(struct avf_adapter *adapter, uint16_t vlanid, bool add);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index bf6251b..1ea6ec6 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -42,6 +42,20 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 static int avf_dev_stats_get(struct rte_eth_dev *dev,
 			     struct rte_eth_stats *stats);
+static void avf_dev_promiscuous_enable(struct rte_eth_dev *dev);
+static void avf_dev_promiscuous_disable(struct rte_eth_dev *dev);
+static void avf_dev_allmulticast_enable(struct rte_eth_dev *dev);
+static void avf_dev_allmulticast_disable(struct rte_eth_dev *dev);
+static int avf_dev_add_mac_addr(struct rte_eth_dev *dev,
+				struct ether_addr *addr,
+				uint32_t index,
+				uint32_t pool);
+static void avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index);
+static int avf_dev_vlan_filter_set(struct rte_eth_dev *dev,
+				   uint16_t vlan_id, int on);
+static int avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+					 struct ether_addr *mac_addr);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -59,6 +73,14 @@ static int avf_dev_stats_get(struct rte_eth_dev *dev,
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.link_update                = avf_dev_link_update,
 	.stats_get                  = avf_dev_stats_get,
+	.promiscuous_enable         = avf_dev_promiscuous_enable,
+	.promiscuous_disable        = avf_dev_promiscuous_disable,
+	.allmulticast_enable        = avf_dev_allmulticast_enable,
+	.allmulticast_disable       = avf_dev_allmulticast_disable,
+	.mac_addr_add               = avf_dev_add_mac_addr,
+	.mac_addr_remove            = avf_dev_del_mac_addr,
+	.vlan_filter_set            = avf_dev_vlan_filter_set,
+	.vlan_offload_set           = avf_dev_vlan_offload_set,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -67,6 +89,7 @@ static int avf_dev_stats_get(struct rte_eth_dev *dev,
 	.rx_queue_release           = avf_dev_rx_queue_release,
 	.tx_queue_setup             = avf_dev_tx_queue_setup,
 	.tx_queue_release           = avf_dev_tx_queue_release,
+	.mac_addr_set               = avf_dev_set_default_mac_addr,
 };
 
 static int
@@ -480,6 +503,202 @@ static int avf_dev_stats_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void
+avf_dev_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (vf->promisc_unicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, TRUE, vf->promisc_multicast_enabled);
+	if (!ret)
+		vf->promisc_unicast_enabled = TRUE;
+}
+
+static void
+avf_dev_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (!vf->promisc_unicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, FALSE, vf->promisc_multicast_enabled);
+	if (!ret)
+		vf->promisc_unicast_enabled = FALSE;
+}
+
+static void
+avf_dev_allmulticast_enable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (vf->promisc_multicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, vf->promisc_unicast_enabled, TRUE);
+	if (!ret)
+		vf->promisc_multicast_enabled = TRUE;
+}
+
+static void
+avf_dev_allmulticast_disable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (!vf->promisc_multicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, vf->promisc_unicast_enabled, FALSE);
+	if (!ret)
+		vf->promisc_multicast_enabled = FALSE;
+}
+
+static int
+avf_dev_add_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr,
+		     __rte_unused uint32_t index,
+		     __rte_unused uint32_t pool)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int err;
+
+	if (is_zero_ether_addr(addr)) {
+		PMD_DRV_LOG(ERR, "Invalid Ethernet Address");
+		return -EINVAL;
+	}
+
+	err = avf_add_del_eth_addr(adapter, addr, TRUE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to add MAC address");
+		return -EIO;
+	}
+
+	vf->mac_num++;
+
+	return 0;
+}
+
+static void
+avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct ether_addr *addr;
+	int err;
+
+	addr = &dev->data->mac_addrs[index];
+
+	err = avf_add_del_eth_addr(adapter, addr, FALSE);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to delete MAC address");
+
+	vf->mac_num--;
+}
+
+static int
+avf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int err;
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+		return -ENOTSUP;
+
+	err = avf_add_del_vlan(adapter, vlan_id, on);
+	if (err)
+		return -EIO;
+	return 0;
+}
+
+static int
+avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+	int err = 0;
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+		return -ENOTSUP;
+
+	/* Vlan stripping setting */
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		/* Enable or disable VLAN stripping */
+		if (dev_conf->rxmode.hw_vlan_strip)
+			err = avf_enable_vlan_strip(adapter);
+		else
+			err = avf_disable_vlan_strip(adapter);
+	}
+
+	if (err)
+		return -EIO;
+	return 0;
+}
+
+static void
+avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+			     struct ether_addr *mac_addr)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct ether_addr *perm_addr, *old_addr;
+	int ret;
+
+	old_addr = (struct ether_addr *)hw->mac.addr;
+	perm_addr = (struct ether_addr *)hw->mac.perm_addr;
+
+	if (is_same_ether_addr(mac_addr, old_addr))
+		return;
+
+	/* If the MAC address is configured by host, skip the setting */
+	if (is_valid_assigned_ether_addr(perm_addr))
+		return;
+
+	ret = avf_add_del_eth_addr(adapter, old_addr, FALSE);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to delete old MAC:"
+			    " %02X:%02X:%02X:%02X:%02X:%02X",
+			    old_addr->addr_bytes[0],
+			    old_addr->addr_bytes[1],
+			    old_addr->addr_bytes[2],
+			    old_addr->addr_bytes[3],
+			    old_addr->addr_bytes[4],
+			    old_addr->addr_bytes[5]);
+
+	ret = avf_add_del_eth_addr(adapter, mac_addr, TRUE);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to add new MAC:"
+			    " %02X:%02X:%02X:%02X:%02X:%02X",
+			    mac_addr->addr_bytes[0],
+			    mac_addr->addr_bytes[1],
+			    mac_addr->addr_bytes[2],
+			    mac_addr->addr_bytes[3],
+			    mac_addr->addr_bytes[4],
+			    mac_addr->addr_bytes[5]);
+
+	ether_addr_copy(mac_addr, (struct ether_addr *)hw->mac.addr);
+}
+
 static int
 avf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index e26527f..3b652bf 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -720,3 +720,93 @@
 	*pstats = (struct virtchnl_eth_stats *)args.out_buffer;
 	return 0;
 }
+
+int
+avf_config_promisc(struct avf_adapter *adapter,
+		   bool enable_unicast,
+		   bool enable_multicast)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_promisc_info promisc;
+	struct avf_cmd_info args;
+	int err;
+
+	promisc.flags = 0;
+	promisc.vsi_id = vf->vsi_res->vsi_id;
+
+	if (enable_unicast)
+		promisc.flags |= FLAG_VF_UNICAST_PROMISC;
+
+	if (enable_multicast)
+		promisc.flags |= FLAG_VF_MULTICAST_PROMISC;
+
+	args.ops = VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE;
+	args.in_args = (uint8_t *)&promisc;
+	args.in_args_size = sizeof(promisc);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "fail to execute command CONFIG_PROMISCUOUS_MODE");
+	return err;
+}
+
+int
+avf_add_del_eth_addr(struct avf_adapter *adapter, struct ether_addr *addr,
+		     bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t cmd_buffer[sizeof(struct virtchnl_ether_addr_list) +
+			   sizeof(struct virtchnl_ether_addr)];
+	struct avf_cmd_info args;
+	int err;
+
+	list = (struct virtchnl_ether_addr_list *)cmd_buffer;
+	list->vsi_id = vf->vsi_res->vsi_id;
+	list->num_elements = 1;
+	rte_memcpy(list->list[0].addr, addr->addr_bytes,
+		   sizeof(addr->addr_bytes));
+
+	args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR : VIRTCHNL_OP_DEL_ETH_ADDR;
+	args.in_args = cmd_buffer;
+	args.in_args_size = sizeof(cmd_buffer);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_ETH_ADDR" :  "OP_DEL_ETH_ADDR");
+	return err;
+}
+
+int
+avf_add_del_vlan(struct avf_adapter *adapter, uint16_t vlanid, bool add)
+{
+	struct virtchnl_vlan_filter_list *vlan_list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t cmd_buffer[sizeof(struct virtchnl_vlan_filter_list) +
+							sizeof(uint16_t)];
+	struct avf_cmd_info args;
+	int err;
+
+	vlan_list = (struct virtchnl_vlan_filter_list *)cmd_buffer;
+	vlan_list->vsi_id = vf->vsi_res->vsi_id;
+	vlan_list->num_elements = 1;
+	vlan_list->vlan_id[0] = vlanid;
+
+	args.ops = add ? VIRTCHNL_OP_ADD_VLAN : VIRTCHNL_OP_DEL_VLAN;
+	args.in_args = cmd_buffer;
+	args.in_args_size = sizeof(cmd_buffer);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_VLAN" :  "OP_DEL_VLAN");
+
+	return err;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v6 08/14] net/avf: enable ops for RSS setting
  2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] dd new AVF PMD Wenzhuo Lu
                           ` (6 preceding siblings ...)
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 07/14] net/avf: enable MAC VLAN and promisc ops Wenzhuo Lu
@ 2018-01-10  6:15         ` Wenzhuo Lu
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 09/14] net/avf: enable ops for MTU setting Wenzhuo Lu
                           ` (6 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10  6:15 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
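A minimal standalone sketch (illustration only, not part of this patch) of the index/shift arithmetic that maps a flat RSS lookup table onto rte_eth_rss_reta_entry64 groups, matching the loop in avf_dev_rss_reta_update():

/* Illustration only: map flat LUT entries to entry64 groups. */
#include <stdint.h>
#include <stdio.h>

#define RETA_GROUP_SIZE 64	/* mirrors RTE_RETA_GROUP_SIZE */

int main(void)
{
	uint16_t reta_size = 64;	/* example; must equal vf_res->rss_lut_size */
	uint16_t i;

	for (i = 0; i < reta_size; i++) {
		uint16_t idx = i / RETA_GROUP_SIZE;	/* which entry64 */
		uint16_t shift = i % RETA_GROUP_SIZE;	/* bit in its mask */

		if (i < 4)
			printf("lut[%u] -> reta_conf[%u].reta[%u]\n",
			       i, idx, shift);
	}
	return 0;
}
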
 doc/guides/nics/features/avf.ini |   2 +
 drivers/net/avf/avf_ethdev.c     | 142 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 144 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 1dd6114..61527d7 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -16,6 +16,8 @@ Allmulticast mode    = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
 RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
 VLAN filter          = Y
 CRC offload          = Y
 VLAN offload         = Y
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 1ea6ec6..5a800ff 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -54,6 +54,16 @@ static int avf_dev_add_mac_addr(struct rte_eth_dev *dev,
 static int avf_dev_vlan_filter_set(struct rte_eth_dev *dev,
 				   uint16_t vlan_id, int on);
 static int avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int avf_dev_rss_reta_update(struct rte_eth_dev *dev,
+				   struct rte_eth_rss_reta_entry64 *reta_conf,
+				   uint16_t reta_size);
+static int avf_dev_rss_reta_query(struct rte_eth_dev *dev,
+				  struct rte_eth_rss_reta_entry64 *reta_conf,
+				  uint16_t reta_size);
+static int avf_dev_rss_hash_update(struct rte_eth_dev *dev,
+				   struct rte_eth_rss_conf *rss_conf);
+static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+				     struct rte_eth_rss_conf *rss_conf);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
 
@@ -90,6 +100,10 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.tx_queue_setup             = avf_dev_tx_queue_setup,
 	.tx_queue_release           = avf_dev_tx_queue_release,
 	.mac_addr_set               = avf_dev_set_default_mac_addr,
+	.reta_update                = avf_dev_rss_reta_update,
+	.reta_query                 = avf_dev_rss_reta_query,
+	.rss_hash_update            = avf_dev_rss_hash_update,
+	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
 };
 
 static int
@@ -654,6 +668,134 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+avf_dev_rss_reta_update(struct rte_eth_dev *dev,
+			struct rte_eth_rss_reta_entry64 *reta_conf,
+			uint16_t reta_size)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t *lut;
+	uint16_t i, idx, shift;
+	int ret;
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	if (reta_size != vf->vf_res->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number the hardware can "
+			"support (%d)", reta_size, vf->vf_res->rss_lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc("rss_lut", reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	/* store the old lut table temporarily */
+	rte_memcpy(lut, vf->rss_lut, reta_size);
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	rte_memcpy(vf->rss_lut, lut, reta_size);
+	/* Send virtchnl ops to configure RSS */
+	ret = avf_configure_rss_lut(adapter);
+	if (ret) /* revert back */
+		rte_memcpy(vf->rss_lut, lut, reta_size);
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+avf_dev_rss_reta_query(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_reta_entry64 *reta_conf,
+		       uint16_t reta_size)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint16_t i, idx, shift;
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	if (reta_size != vf->vf_res->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number the hardware can "
+			"support (%d)", reta_size, vf->vf_res->rss_lut_size);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vf->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+avf_dev_rss_hash_update(struct rte_eth_dev *dev,
+			struct rte_eth_rss_conf *rss_conf)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	/* HENA setting, it is enabled by default, no change */
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		return 0;
+	} else if (rss_conf->rss_key_len != vf->vf_res->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+			"(%d) doesn't match the size the hardware can "
+			"support (%d)", rss_conf->rss_key_len,
+			vf->vf_res->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vf->rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+
+	return avf_configure_rss_key(adapter);
+}
+
+static int
+avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+			  struct rte_eth_rss_conf *rss_conf)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	 /* Just set it to default value now. */
+	rss_conf->rss_hf = AVF_RSS_OFFLOAD_ALL;
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	rss_conf->rss_key_len = vf->vf_res->rss_key_size;
+	rte_memcpy(rss_conf->rss_key, vf->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static void
 avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *mac_addr)
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v6 09/14] net/avf: enable ops for MTU setting
  2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] dd new AVF PMD Wenzhuo Lu
                           ` (7 preceding siblings ...)
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 08/14] net/avf: enable ops for RSS setting Wenzhuo Lu
@ 2018-01-10  6:15         ` Wenzhuo Lu
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 10/14] net/avf: enable ops to check queue info and status Wenzhuo Lu
                           ` (5 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10  6:15 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
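A minimal standalone sketch (illustration only, not part of this patch) of the MTU to maximum frame length arithmetic performed by avf_dev_mtu_set(); the L2 overhead value is only a stand-in for AVF_ETH_OVERHEAD, assumed here to cover the Ethernet header, CRC and two VLAN tags:

/* Illustration only: MTU -> max_rx_pkt_len and the jumbo threshold. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t mtu = 9000;
	uint32_t overhead = 14 + 4 + 2 * 4;	/* assumed L2 overhead */
	uint32_t frame_size = mtu + overhead;

	printf("mtu %u -> max_rx_pkt_len %u (%s frame)\n", mtu, frame_size,
	       frame_size > 1518 ? "jumbo" : "standard");
	return 0;
}
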
 doc/guides/nics/features/avf.ini |  1 +
 drivers/net/avf/avf_ethdev.c     | 30 ++++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 61527d7..cf1b246 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -8,6 +8,7 @@ Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
 Queue start/stop     = Y
+MTU update           = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
 TSO                  = Y
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 5a800ff..e4a6f35 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -64,6 +64,7 @@ static int avf_dev_rss_hash_update(struct rte_eth_dev *dev,
 				   struct rte_eth_rss_conf *rss_conf);
 static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 				     struct rte_eth_rss_conf *rss_conf);
+static int avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
 
@@ -104,6 +105,7 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.reta_query                 = avf_dev_rss_reta_query,
 	.rss_hash_update            = avf_dev_rss_hash_update,
 	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
+	.mtu_set                    = avf_dev_mtu_set,
 };
 
 static int
@@ -796,6 +798,34 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	uint32_t frame_size = mtu + AVF_ETH_OVERHEAD;
+	int ret = 0;
+
+	if (mtu < ETHER_MIN_MTU || frame_size > AVF_FRAME_SIZE_MAX)
+		return -EINVAL;
+
+	/* MTU setting is forbidden if the port is started */
+	if (dev->data->dev_started) {
+		PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+		return -EBUSY;
+	}
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
+		dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+	return ret;
+}
+
 static void
 avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *mac_addr)
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v6 10/14] net/avf: enable ops to check queue info and status
  2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] dd new AVF PMD Wenzhuo Lu
                           ` (8 preceding siblings ...)
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 09/14] net/avf: enable ops for MTU setting Wenzhuo Lu
@ 2018-01-10  6:15         ` Wenzhuo Lu
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 11/14] net/i40e: support AVF basic interface Wenzhuo Lu
                           ` (4 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10  6:15 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

 - rxq_info_get
 - txq_info_get
 - rx_queue_count
 - rx_descriptor_status
 - tx_descriptor_status

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
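A minimal application-side sketch (illustration only, not part of this patch) of using the new descriptor status ops to probe Rx ring occupancy on a started port:

/* Illustration only: query Rx ring usage on port 0, queue 0. */
#include <stdio.h>
#include <rte_ethdev.h>

static void probe_rx_ring(uint16_t port_id, uint16_t queue_id)
{
	int used = rte_eth_rx_queue_count(port_id, queue_id);

	/* Ask whether the descriptor 32 slots ahead of the software tail
	 * has already been written back by the hardware.
	 */
	int st = rte_eth_rx_descriptor_status(port_id, queue_id, 32);

	printf("used=%d, desc[+32]=%s\n", used,
	       st == RTE_ETH_RX_DESC_DONE ? "done" :
	       st == RTE_ETH_RX_DESC_AVAIL ? "avail" : "unavail/err");
}
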
 doc/guides/nics/features/avf.ini |   2 +
 drivers/net/avf/avf_ethdev.c     |   5 ++
 drivers/net/avf/avf_rxtx.c       | 120 +++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h       |   7 +++
 4 files changed, 134 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index cf1b246..da4d81b 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -25,6 +25,8 @@ VLAN offload         = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
 Basic stats          = Y
 Multiprocess aware   = Y
 BSD nic_uio          = Y
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index e4a6f35..e00bb5d 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -105,6 +105,11 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.reta_query                 = avf_dev_rss_reta_query,
 	.rss_hash_update            = avf_dev_rss_hash_update,
 	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
+	.rxq_info_get               = avf_dev_rxq_info_get,
+	.txq_info_get               = avf_dev_txq_info_get,
+	.rx_queue_count             = avf_dev_rxq_count,
+	.rx_descriptor_status       = avf_dev_rx_desc_status,
+	.tx_descriptor_status       = avf_dev_tx_desc_status,
 	.mtu_set                    = avf_dev_mtu_set,
 };
 
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index baccec4..0fea8f9 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -1385,3 +1385,123 @@
 	dev->tx_pkt_burst = avf_xmit_pkts;
 	dev->tx_pkt_prepare = avf_prep_pkts;
 }
+
+void
+avf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		     struct rte_eth_rxq_info *qinfo)
+{
+	struct avf_rx_queue *rxq;
+
+	rxq = dev->data->rx_queues[queue_id];
+
+	qinfo->mp = rxq->mp;
+	qinfo->scattered_rx = dev->data->scattered_rx;
+	qinfo->nb_desc = rxq->nb_rx_desc;
+
+	qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+	qinfo->conf.rx_drop_en = TRUE;
+	qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		     struct rte_eth_txq_info *qinfo)
+{
+	struct avf_tx_queue *txq;
+
+	txq = dev->data->tx_queues[queue_id];
+
+	qinfo->nb_desc = txq->nb_tx_desc;
+
+	qinfo->conf.tx_free_thresh = txq->free_thresh;
+	qinfo->conf.tx_rs_thresh = txq->rs_thresh;
+	qinfo->conf.txq_flags = txq->txq_flags;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
+/* Get the number of used descriptors of a rx queue */
+uint32_t
+avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+#define AVF_RXQ_SCAN_INTERVAL 4
+	volatile union avf_rx_desc *rxdp;
+	struct avf_rx_queue *rxq;
+	uint16_t desc = 0;
+
+	rxq = dev->data->rx_queues[queue_id];
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	while ((desc < rxq->nb_rx_desc) &&
+	       ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		 AVF_RXD_QW1_STATUS_MASK) >> AVF_RXD_QW1_STATUS_SHIFT) &
+	       (1 << AVF_RX_DESC_STATUS_DD_SHIFT)) {
+		/* Check the DD bit of one Rx descriptor out of every 4 in a
+		 * group, to avoid checking too frequently and degrading
+		 * performance too much.
+		 */
+		desc += AVF_RXQ_SCAN_INTERVAL;
+		rxdp += AVF_RXQ_SCAN_INTERVAL;
+		if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+			rxdp = &(rxq->rx_ring[rxq->rx_tail +
+					desc - rxq->nb_rx_desc]);
+	}
+
+	return desc;
+}
+
+int
+avf_dev_rx_desc_status(void *rx_queue, uint16_t offset)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	volatile uint64_t *status;
+	uint64_t mask;
+	uint32_t desc;
+
+	if (unlikely(offset >= rxq->nb_rx_desc))
+		return -EINVAL;
+
+	if (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)
+		return RTE_ETH_RX_DESC_UNAVAIL;
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	status = &rxq->rx_ring[desc].wb.qword1.status_error_len;
+	mask = rte_le_to_cpu_64((1ULL << AVF_RX_DESC_STATUS_DD_SHIFT)
+		<< AVF_RXD_QW1_STATUS_SHIFT);
+	if (*status & mask)
+		return RTE_ETH_RX_DESC_DONE;
+
+	return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+avf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
+{
+	struct avf_tx_queue *txq = tx_queue;
+	volatile uint64_t *status;
+	uint64_t mask, expect;
+	uint32_t desc;
+
+	if (unlikely(offset >= txq->nb_tx_desc))
+		return -EINVAL;
+
+	desc = txq->tx_tail + offset;
+	/* go to next desc that has the RS bit */
+	desc = ((desc + txq->rs_thresh - 1) / txq->rs_thresh) *
+		txq->rs_thresh;
+	if (desc >= txq->nb_tx_desc) {
+		desc -= txq->nb_tx_desc;
+		if (desc >= txq->nb_tx_desc)
+			desc -= txq->nb_tx_desc;
+	}
+
+	status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+	mask = rte_le_to_cpu_64(AVF_TXD_QW1_DTYPE_MASK);
+	expect = rte_cpu_to_le_64(
+		 AVF_TX_DESC_DTYPE_DESC_DONE << AVF_TXD_QW1_DTYPE_SHIFT);
+	if ((*status & mask) == expect)
+		return RTE_ETH_TX_DESC_DONE;
+
+	return RTE_ETH_TX_DESC_FULL;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index cad240d..e248f55 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -147,6 +147,13 @@ uint16_t avf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		       uint16_t nb_pkts);
 void avf_set_rx_function(struct rte_eth_dev *dev);
 void avf_set_tx_function(struct rte_eth_dev *dev);
+void avf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			  struct rte_eth_rxq_info *qinfo);
+void avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			  struct rte_eth_txq_info *qinfo);
+uint32_t avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+int avf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
+int avf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v6 11/14] net/i40e: support AVF basic interface
  2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] dd new AVF PMD Wenzhuo Lu
                           ` (9 preceding siblings ...)
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 10/14] net/avf: enable ops to check queue info and status Wenzhuo Lu
@ 2018-01-10  6:15         ` Wenzhuo Lu
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 12/14] net/avf: enable sse vector Rx Tx func Wenzhuo Lu
                           ` (3 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10  6:15 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Enable Virtchnl offload Caps negotiation and RSS_PF offload
to support AVF basic interface.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
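A minimal standalone sketch (illustration only, not part of this patch) of the capability intersection the PF performs when answering VIRTCHNL_OP_GET_VF_RESOURCES; the flag values below are placeholders for the VIRTCHNL_* macros:

/* Illustration only: the PF grants only the offloads both sides support. */
#include <stdint.h>
#include <stdio.h>

#define OFFLOAD_L2	0x00000001u
#define OFFLOAD_RSS_PF	0x00000008u
#define OFFLOAD_VLAN	0x00010000u
#define PF_SUPPORTED	(OFFLOAD_L2 | OFFLOAD_VLAN | OFFLOAD_RSS_PF)

int main(void)
{
	uint32_t vf_request = OFFLOAD_L2 | OFFLOAD_RSS_PF | 0x80000000u;
	uint32_t granted = vf_request & PF_SUPPORTED;

	printf("requested 0x%08x -> granted 0x%08x\n", vf_request, granted);
	return 0;
}
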
 drivers/net/i40e/i40e_ethdev.c |  69 ++++++++++++++++----
 drivers/net/i40e/i40e_ethdev.h |   5 ++
 drivers/net/i40e/i40e_pf.c     | 140 +++++++++++++++++++++++++++++++++++++----
 drivers/net/i40e/i40e_pf.h     |   6 ++
 4 files changed, 195 insertions(+), 25 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 285d92b..10bb4eb 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3649,6 +3649,7 @@ static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint32_t reg;
 	int ret;
 
 	if (!lut)
@@ -3665,14 +3666,22 @@ static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 		uint32_t *lut_dw = (uint32_t *)lut;
 		uint16_t i, lut_size_dw = lut_size / 4;
 
-		for (i = 0; i < lut_size_dw; i++)
-			lut_dw[i] = I40E_READ_REG(hw, I40E_PFQF_HLUT(i));
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i < lut_size_dw; i++) {
+				reg = I40E_VFQF_HLUT1(i, vsi->user_param);
+				lut_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+		} else {
+			for (i = 0; i < lut_size_dw; i++)
+				lut_dw[i] = I40E_READ_REG(hw,
+							  I40E_PFQF_HLUT(i));
+		}
 	}
 
 	return 0;
 }
 
-static int
+int
 i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 {
 	struct i40e_pf *pf;
@@ -3696,8 +3705,17 @@ static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 		uint32_t *lut_dw = (uint32_t *)lut;
 		uint16_t i, lut_size_dw = lut_size / 4;
 
-		for (i = 0; i < lut_size_dw; i++)
-			I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i), lut_dw[i]);
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i < lut_size_dw; i++)
+				I40E_WRITE_REG(
+					hw,
+					I40E_VFQF_HLUT1(i, vsi->user_param),
+					lut_dw[i]);
+		} else {
+			for (i = 0; i < lut_size_dw; i++)
+				I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i),
+					       lut_dw[i]);
+		}
 		I40E_WRITE_FLUSH(hw);
 	}
 
@@ -6669,17 +6687,20 @@ struct i40e_vsi *
 	I40E_WRITE_FLUSH(hw);
 }
 
-static int
+int
 i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len)
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint16_t key_idx = (vsi->type == I40E_VSI_SRIOV) ?
+			   I40E_VFQF_HKEY_MAX_INDEX :
+			   I40E_PFQF_HKEY_MAX_INDEX;
 	int ret = 0;
 
 	if (!key || key_len == 0) {
 		PMD_DRV_LOG(DEBUG, "No key to be configured");
 		return 0;
-	} else if (key_len != (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+	} else if (key_len != (key_idx + 1) *
 		sizeof(uint32_t)) {
 		PMD_DRV_LOG(ERR, "Invalid key length %u", key_len);
 		return -EINVAL;
@@ -6696,8 +6717,18 @@ struct i40e_vsi *
 		uint32_t *hash_key = (uint32_t *)key;
 		uint16_t i;
 
-		for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
-			i40e_write_rx_ctl(hw, I40E_PFQF_HKEY(i), hash_key[i]);
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++)
+				I40E_WRITE_REG(
+					hw,
+					I40E_VFQF_HKEY1(i, vsi->user_param),
+					hash_key[i]);
+
+		} else {
+			for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
+				I40E_WRITE_REG(hw, I40E_PFQF_HKEY(i),
+					       hash_key[i]);
+		}
 		I40E_WRITE_FLUSH(hw);
 	}
 
@@ -6709,6 +6740,7 @@ struct i40e_vsi *
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint32_t reg;
 	int ret;
 
 	if (!key || !key_len)
@@ -6725,11 +6757,22 @@ struct i40e_vsi *
 		uint32_t *key_dw = (uint32_t *)key;
 		uint16_t i;
 
-		for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
-			key_dw[i] = i40e_read_rx_ctl(hw, I40E_PFQF_HKEY(i));
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++) {
+				reg = I40E_VFQF_HKEY1(i, vsi->user_param);
+				key_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+			*key_len = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
+				   sizeof(uint32_t);
+		} else {
+			for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++) {
+				reg = I40E_PFQF_HKEY(i);
+				key_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+			*key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+				   sizeof(uint32_t);
+		}
 	}
-	*key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
-
 	return 0;
 }
 
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index f2b4b70..de2797e 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -397,6 +397,9 @@ struct i40e_pf_vf {
 	uint16_t lan_nb_qps; /* Actual queues allocated */
 	uint16_t reset_cnt; /* Total vf reset times */
 	struct ether_addr mac_addr;  /* Default MAC address */
+	/* version of the virtchnl from VF */
+	struct virtchnl_version_info version;
+	uint32_t request_caps; /* offload caps requested from VF */
 };
 
 /*
@@ -1169,6 +1172,8 @@ void i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
 int i40e_flush_queue_region_all_conf(struct rte_eth_dev *dev,
 		struct i40e_hw *hw, struct i40e_pf *pf, uint16_t on);
 void i40e_init_queue_region_conf(struct rte_eth_dev *dev);
+int i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len);
+int i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size);
 
 #define I40E_DEV_TO_PCI(eth_dev) \
 	RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index 1bca250..7508444 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -244,19 +244,23 @@
 }
 
 static void
-i40e_pf_host_process_cmd_version(struct i40e_pf_vf *vf, bool b_op)
+i40e_pf_host_process_cmd_version(struct i40e_pf_vf *vf, uint8_t *msg,
+				 bool b_op)
 {
 	struct virtchnl_version_info info;
 
-	/* Respond like a Linux PF host in order to support both DPDK VF and
-	 * Linux VF driver. The expense is original DPDK host specific feature
+	/* Both the VF and PF drivers need to follow the virtchnl definition,
+	 * no matter whether they are DPDK or kernel drivers.
+	 * The original DPDK host-specific features
 	 * like CFG_VLAN_PVID and CONFIG_VSI_QUEUES_EXT will not available.
-	 *
-	 * DPDK VF also can't identify host driver by version number returned.
-	 * It always assume talking with Linux PF.
 	 */
+
 	info.major = VIRTCHNL_VERSION_MAJOR;
-	info.minor = VIRTCHNL_VERSION_MINOR_NO_VF_CAPS;
+	vf->version = *(struct virtchnl_version_info *)msg;
+	if (VF_IS_V10(&vf->version))
+		info.minor = VIRTCHNL_VERSION_MINOR_NO_VF_CAPS;
+	else
+		info.minor = VIRTCHNL_VERSION_MINOR;
 
 	if (b_op)
 		i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_VERSION,
@@ -280,11 +284,13 @@
 }
 
 static int
-i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, bool b_op)
+i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
+					 bool b_op)
 {
 	struct virtchnl_vf_resource *vf_res = NULL;
 	struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
 	uint32_t len = 0;
+	uint64_t default_hena = I40E_RSS_HENA_ALL;
 	int ret = I40E_SUCCESS;
 
 	if (!b_op) {
@@ -308,11 +314,35 @@
 		goto send_msg;
 	}
 
-	vf_res->vf_offload_flags = VIRTCHNL_VF_OFFLOAD_L2 |
-				VIRTCHNL_VF_OFFLOAD_VLAN;
+	if (VF_IS_V10(&vf->version)) /* doesn't support offload negotiate */
+		vf->request_caps = VIRTCHNL_VF_OFFLOAD_L2 |
+				   VIRTCHNL_VF_OFFLOAD_VLAN;
+	else
+		vf->request_caps = *(uint32_t *)msg;
+
+	/* Enable all RSS offload types by default;
+	 * HENA configuration over virtchnl is not supported yet.
+	 */
+	if (vf->request_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		I40E_WRITE_REG(hw, I40E_VFQF_HENA1(0, vf->vf_idx),
+			       (uint32_t)default_hena);
+		I40E_WRITE_REG(hw, I40E_VFQF_HENA1(1, vf->vf_idx),
+			       (uint32_t)(default_hena >> 32));
+		I40E_WRITE_FLUSH(hw);
+	}
+
+	vf_res->vf_offload_flags = vf->request_caps &
+				   I40E_VIRTCHNL_OFFLOAD_CAPS;
+	/* For X722, it supports write back on ITR
+	 * without binding queue to interrupt vector.
+	 */
+	if (hw->mac.type == I40E_MAC_X722)
+		vf_res->vf_offload_flags |= VIRTCHNL_VF_OFFLOAD_WB_ON_ITR;
 	vf_res->max_vectors = hw->func_caps.num_msix_vectors_vf;
 	vf_res->num_queue_pairs = vf->vsi->nb_qps;
 	vf_res->num_vsis = I40E_DEFAULT_VF_VSI_NUM;
+	vf_res->rss_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) * 4;
+	vf_res->rss_lut_size = (I40E_VFQF_HLUT1_MAX_INDEX + 1) * 4;
 
 	/* Change below setting if PF host can support more VSIs for VF */
 	vf_res->vsi_res[0].vsi_type = VIRTCHNL_VSI_SRIOV;
@@ -1061,6 +1091,84 @@
 	return ret;
 }
 
+static int
+i40e_pf_host_process_cmd_set_rss_lut(struct i40e_pf_vf *vf,
+				     uint8_t *msg,
+				     uint16_t msglen,
+				     bool b_op)
+{
+	struct virtchnl_rss_lut *rss_lut = (struct virtchnl_rss_lut *)msg;
+	uint16_t valid_len;
+	int ret = I40E_SUCCESS;
+
+	if (!b_op) {
+		i40e_pf_host_send_msg_to_vf(
+			vf,
+			VIRTCHNL_OP_CONFIG_RSS_LUT,
+			I40E_NOT_SUPPORTED, NULL, 0);
+		return ret;
+	}
+
+	if (!msg || msglen <= sizeof(struct virtchnl_rss_lut)) {
+		PMD_DRV_LOG(ERR, "set_rss_lut argument too short");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+	valid_len = sizeof(struct virtchnl_rss_lut) + rss_lut->lut_entries - 1;
+	if (msglen < valid_len) {
+		PMD_DRV_LOG(ERR, "set_rss_lut length mismatch");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+	ret = i40e_set_rss_lut(vf->vsi, rss_lut->lut, rss_lut->lut_entries);
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_LUT,
+				    ret, NULL, 0);
+
+	return ret;
+}
+
+static int
+i40e_pf_host_process_cmd_set_rss_key(struct i40e_pf_vf *vf,
+				     uint8_t *msg,
+				     uint16_t msglen,
+				     bool b_op)
+{
+	struct virtchnl_rss_key *rss_key = (struct virtchnl_rss_key *)msg;
+	uint16_t valid_len;
+	int ret = I40E_SUCCESS;
+
+	if (!b_op) {
+		i40e_pf_host_send_msg_to_vf(
+			vf,
+			VIRTCHNL_OP_CONFIG_RSS_KEY,
+			I40E_NOT_SUPPORTED, NULL, 0);
+		return ret;
+	}
+
+	if (!msg || msglen <= sizeof(struct virtchnl_rss_key)) {
+		PMD_DRV_LOG(ERR, "set_rss_key argument too short");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+	valid_len = sizeof(struct virtchnl_rss_key) + rss_key->key_len - 1;
+	if (msglen < valid_len) {
+		PMD_DRV_LOG(ERR, "set_rss_key length mismatch");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+	ret = i40e_set_rss_key(vf->vsi, rss_key->key, rss_key->key_len);
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_KEY,
+				    ret, NULL, 0);
+
+	return ret;
+}
+
 void
 i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
 {
@@ -1167,7 +1275,7 @@
 	switch (opcode) {
 	case VIRTCHNL_OP_VERSION:
 		PMD_DRV_LOG(INFO, "OP_VERSION received");
-		i40e_pf_host_process_cmd_version(vf, b_op);
+		i40e_pf_host_process_cmd_version(vf, msg, b_op);
 		break;
 	case VIRTCHNL_OP_RESET_VF:
 		PMD_DRV_LOG(INFO, "OP_RESET_VF received");
@@ -1175,7 +1283,7 @@
 		break;
 	case VIRTCHNL_OP_GET_VF_RESOURCES:
 		PMD_DRV_LOG(INFO, "OP_GET_VF_RESOURCES received");
-		i40e_pf_host_process_cmd_get_vf_resource(vf, b_op);
+		i40e_pf_host_process_cmd_get_vf_resource(vf, msg, b_op);
 		break;
 	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
 		PMD_DRV_LOG(INFO, "OP_CONFIG_VSI_QUEUES received");
@@ -1236,6 +1344,14 @@
 		PMD_DRV_LOG(INFO, "OP_DISABLE_VLAN_STRIPPING received");
 		i40e_pf_host_process_cmd_disable_vlan_strip(vf, b_op);
 		break;
+	case VIRTCHNL_OP_CONFIG_RSS_LUT:
+		PMD_DRV_LOG(INFO, "OP_CONFIG_RSS_LUT received");
+		i40e_pf_host_process_cmd_set_rss_lut(vf, msg, msglen, b_op);
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_KEY:
+		PMD_DRV_LOG(INFO, "OP_CONFIG_RSS_KEY received");
+		i40e_pf_host_process_cmd_set_rss_key(vf, msg, msglen, b_op);
+		break;
 	/* Don't add command supported below, which will
 	 * return an error code.
 	 */
diff --git a/drivers/net/i40e/i40e_pf.h b/drivers/net/i40e/i40e_pf.h
index 429f347..1809ba4 100644
--- a/drivers/net/i40e/i40e_pf.h
+++ b/drivers/net/i40e/i40e_pf.h
@@ -8,6 +8,12 @@
 /* Default setting on number of VSIs that VF can contain */
 #define I40E_DEFAULT_VF_VSI_NUM 1
 
+#define I40E_VIRTCHNL_OFFLOAD_CAPS ( \
+	VIRTCHNL_VF_OFFLOAD_L2 | \
+	VIRTCHNL_VF_OFFLOAD_VLAN | \
+	VIRTCHNL_VF_OFFLOAD_RSS_PF | \
+	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
+
 struct virtchnl_vlan_offload_info {
 	uint16_t vsi_id;
 	uint8_t enable_vlan_strip;
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
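
The length check in i40e_pf_host_process_cmd_set_rss_key() above also shows
how a VF is expected to size the VIRTCHNL_OP_CONFIG_RSS_KEY message. A
minimal VF-side sketch, assuming the standard virtchnl_rss_key layout (key[1]
flexible tail), a vsi_id and hash_key[] supplied by the caller, and a
hypothetical send_msg_to_pf() transport helper:

/* Sketch only: the message length must be
 * sizeof(struct virtchnl_rss_key) + key_len - 1,
 * which is exactly what the PF handler validates.
 */
uint16_t key_len = 52;	/* (I40E_PFQF_HKEY_MAX_INDEX + 1) * 4, see rss_key_size */
uint8_t buf[sizeof(struct virtchnl_rss_key) + 52 - 1] __rte_aligned(4) = {0};
struct virtchnl_rss_key *rss_key = (struct virtchnl_rss_key *)buf;

rss_key->vsi_id = vsi_id;		/* VSI id reported in the VF resources */
rss_key->key_len = key_len;
memcpy(rss_key->key, hash_key, key_len);
send_msg_to_pf(VIRTCHNL_OP_CONFIG_RSS_KEY, buf, sizeof(buf));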

* [dpdk-dev] [PATCH v6 12/14] net/avf: enable sse vector Rx Tx func
  2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] dd new AVF PMD Wenzhuo Lu
                           ` (10 preceding siblings ...)
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 11/14] net/i40e: support AVF basic interface Wenzhuo Lu
@ 2018-01-10  6:15         ` Wenzhuo Lu
  2018-01-10  6:16         ` [dpdk-dev] [PATCH v6 13/14] net/avf: enable bulk allocate Rx func Wenzhuo Lu
                           ` (2 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10  6:15 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 config/common_base                    |   1 +
 doc/guides/nics/features/avf_vec.ini  |  36 ++
 drivers/net/avf/Makefile              |   1 +
 drivers/net/avf/avf.h                 |   4 +
 drivers/net/avf/avf_ethdev.c          |  11 +
 drivers/net/avf/avf_rxtx.c            | 172 ++++++++-
 drivers/net/avf/avf_rxtx.h            |  36 +-
 drivers/net/avf/avf_rxtx_vec_common.h | 210 +++++++++++
 drivers/net/avf/avf_rxtx_vec_sse.c    | 656 ++++++++++++++++++++++++++++++++++
 9 files changed, 1116 insertions(+), 11 deletions(-)
 create mode 100644 doc/guides/nics/features/avf_vec.ini
 create mode 100644 drivers/net/avf/avf_rxtx_vec_common.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_sse.c
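
As an illustration only (not part of this patch), a queue configuration that
satisfies the vector-path preconditions checked below by
check_rx_vec_allow()/check_tx_vec_allow() could look like this; the port id,
descriptor count and mbuf_pool are arbitrary assumptions:

struct rte_eth_rxconf rxconf = {
	.rx_free_thresh = 32,	/* >= AVF_VPMD_RX_MAX_BURST */
};
struct rte_eth_txconf txconf = {
	.tx_rs_thresh = 32,	/* within [AVF_VPMD_TX_MAX_BURST, AVF_VPMD_TX_MAX_FREE_BUF] */
	.tx_free_thresh = 32,
	.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS | ETH_TXQ_FLAGS_NOOFFLOADS,
};

/* 512 descriptors: a multiple of rx_free_thresh, as required */
rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(), &rxconf, mbuf_pool);
rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), &txconf);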

diff --git a/config/common_base b/config/common_base
index b1f1c1c..f9363ff 100644
--- a/config/common_base
+++ b/config/common_base
@@ -229,6 +229,7 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 # Compile burst-oriented AVF PMD driver
 #
 CONFIG_RTE_LIBRTE_AVF_PMD=y
+CONFIG_RTE_LIBRTE_AVF_INC_VECTOR=y
 CONFIG_RTE_LIBRTE_AVF_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_AVF_DEBUG_TX_FREE=n
 CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=n
diff --git a/doc/guides/nics/features/avf_vec.ini b/doc/guides/nics/features/avf_vec.ini
new file mode 100644
index 0000000..45dd5e5
--- /dev/null
+++ b/doc/guides/nics/features/avf_vec.ini
@@ -0,0 +1,36 @@
+;
+; Supported features of the 'avf_vec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
+Queue start/stop     = Y
+MTU update           = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+TSO                  = Y
+Promiscuous mode     = Y
+Allmulticast mode    = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
+RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
+VLAN filter          = Y
+CRC offload          = Y
+VLAN offload         = P
+L3 checksum offload  = P
+L4 checksum offload  = P
+Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
+Basic stats          = Y
+Multiprocess aware   = Y
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index 8d54fc9..b3c3a88 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -47,5 +47,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_INC_VECTOR) += avf_rxtx_vec_sse.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index ea48310..b79bc5a 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -119,6 +119,10 @@ struct avf_adapter {
 	struct avf_hw hw;
 	struct rte_eth_dev *eth_dev;
 	struct avf_info vf;
+
+	/* For vector PMD */
+	bool rx_vec_allowed;
+	bool tx_vec_allowed;
 };
 
 /* AVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index e00bb5d..127fdb5 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -121,6 +121,17 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(ad);
 	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	/* Initialize to TRUE. If any Rx/Tx queue fails to meet the
+	 * vector Rx/Tx preconditions, the flag will be reset.
+	 */
+	ad->rx_vec_allowed = true;
+	ad->tx_vec_allowed = true;
+#else
+	ad->rx_vec_allowed = false;
+	ad->tx_vec_allowed = false;
+#endif
+
 	/* Vlan stripping setting */
 	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) {
 		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index 0fea8f9..b542532 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -92,6 +92,34 @@
 	return 0;
 }
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+static inline bool
+check_rx_vec_allow(struct avf_rx_queue *rxq)
+{
+	if (rxq->rx_free_thresh >= AVF_VPMD_RX_MAX_BURST &&
+	    rxq->nb_rx_desc % rxq->rx_free_thresh == 0) {
+		PMD_INIT_LOG(DEBUG, "Vector Rx can be enabled on this rxq.");
+		return TRUE;
+	}
+
+	PMD_INIT_LOG(DEBUG, "Vector Rx cannot be enabled on this rxq.");
+	return FALSE;
+}
+
+static inline bool
+check_tx_vec_allow(struct avf_tx_queue *txq)
+{
+	if ((txq->txq_flags & AVF_SIMPLE_FLAGS) == AVF_SIMPLE_FLAGS &&
+	    txq->rs_thresh >= AVF_VPMD_TX_MAX_BURST &&
+	    txq->rs_thresh <= AVF_VPMD_TX_MAX_FREE_BUF) {
+		PMD_INIT_LOG(DEBUG, "Vector Tx can be enabled on this txq.");
+		return TRUE;
+	}
+	PMD_INIT_LOG(DEBUG, "Vector Tx cannot be enabled on this txq.");
+	return FALSE;
+}
+#endif
+
 static inline void
 reset_rx_queue(struct avf_rx_queue *rxq)
 {
@@ -225,6 +253,14 @@
 	}
 }
 
+static const struct avf_rxq_ops def_rxq_ops = {
+	.release_mbufs = release_rxq_mbufs,
+};
+
+static const struct avf_txq_ops def_txq_ops = {
+	.release_mbufs = release_txq_mbufs,
+};
+
 int
 avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		       uint16_t nb_desc, unsigned int socket_id,
@@ -325,7 +361,12 @@
 	rxq->q_set = TRUE;
 	dev->data->rx_queues[queue_idx] = rxq;
 	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
+	rxq->ops = &def_rxq_ops;
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	if (check_rx_vec_allow(rxq) == FALSE)
+		ad->rx_vec_allowed = false;
+#endif
 	return 0;
 }
 
@@ -337,6 +378,8 @@
 		       const struct rte_eth_txconf *tx_conf)
 {
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct avf_tx_queue *txq;
 	const struct rte_memzone *mz;
 	uint32_t ring_size;
@@ -416,6 +459,12 @@
 	txq->q_set = TRUE;
 	dev->data->tx_queues[queue_idx] = txq;
 	txq->qtx_tail = hw->hw_addr + AVF_QTX_TAIL1(queue_idx);
+	txq->ops = &def_txq_ops;
+
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	if (check_tx_vec_allow(txq) == FALSE)
+		ad->tx_vec_allowed = false;
+#endif
 
 	return 0;
 }
@@ -514,7 +563,7 @@
 	}
 
 	rxq = dev->data->rx_queues[rx_queue_id];
-	release_rxq_mbufs(rxq);
+	rxq->ops->release_mbufs(rxq);
 	reset_rx_queue(rxq);
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -542,7 +591,7 @@
 	}
 
 	txq = dev->data->tx_queues[tx_queue_id];
-	release_txq_mbufs(txq);
+	txq->ops->release_mbufs(txq);
 	reset_tx_queue(txq);
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -557,7 +606,7 @@
 	if (!q)
 		return;
 
-	release_rxq_mbufs(q);
+	q->ops->release_mbufs(q);
 	rte_free(q->sw_ring);
 	rte_memzone_free(q->mz);
 	rte_free(q);
@@ -571,7 +620,7 @@
 	if (!q)
 		return;
 
-	release_txq_mbufs(q);
+	q->ops->release_mbufs(q);
 	rte_free(q->sw_ring);
 	rte_memzone_free(q->mz);
 	rte_free(q);
@@ -595,7 +644,7 @@
 		txq = dev->data->tx_queues[i];
 		if (!txq)
 			continue;
-		release_txq_mbufs(txq);
+		txq->ops->release_mbufs(txq);
 		reset_tx_queue(txq);
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
@@ -603,7 +652,7 @@
 		rxq = dev->data->rx_queues[i];
 		if (!rxq)
 			continue;
-		release_rxq_mbufs(rxq);
+		rxq->ops->release_mbufs(rxq);
 		reset_rx_queue(rxq);
 		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
@@ -1320,6 +1369,27 @@
 	return nb_tx;
 }
 
+static uint16_t
+avf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+		  uint16_t nb_pkts)
+{
+	uint16_t nb_tx = 0;
+	struct avf_tx_queue *txq = (struct avf_tx_queue *)tx_queue;
+
+	while (nb_pkts) {
+		uint16_t ret, num;
+
+		num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+		ret = avf_xmit_fixed_burst_vec(tx_queue, &tx_pkts[nb_tx], num);
+		nb_tx += ret;
+		nb_pkts -= ret;
+		if (ret < num)
+			break;
+	}
+
+	return nb_tx;
+}
+
 /* TX prep functions */
 uint16_t
 avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
@@ -1372,18 +1442,64 @@
 void
 avf_set_rx_function(struct rte_eth_dev *dev)
 {
-	if (dev->data->scattered_rx)
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int i;
+
+	if (adapter->rx_vec_allowed) {
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(DEBUG, "Using Vector Scattered Rx callback"
+				    " (port=%d).", dev->data->port_id);
+			dev->rx_pkt_burst = avf_recv_scattered_pkts_vec;
+		} else {
+			PMD_DRV_LOG(DEBUG, "Using Vector Rx callback"
+				    " (port=%d).", dev->data->port_id);
+			dev->rx_pkt_burst = avf_recv_pkts_vec;
+		}
+		for (i = 0; i < dev->data->nb_rx_queues; i++) {
+			rxq = dev->data->rx_queues[i];
+			if (!rxq)
+				continue;
+			avf_rxq_vec_setup(rxq);
+		}
+	} else if (dev->data->scattered_rx) {
+		PMD_DRV_LOG(DEBUG, "Using a Scattered Rx callback (port=%d).",
+			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_scattered_pkts;
-	else
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using Basic Rx callback (port=%d).",
+			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_pkts;
+	}
 }
 
 /* choose Tx function */
 void
 avf_set_tx_function(struct rte_eth_dev *dev)
 {
-	dev->tx_pkt_burst = avf_xmit_pkts;
-	dev->tx_pkt_prepare = avf_prep_pkts;
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int i;
+
+	if (adapter->tx_vec_allowed) {
+		PMD_DRV_LOG(DEBUG, "Using Vector Tx callback (port=%d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = avf_xmit_pkts_vec;
+		dev->tx_pkt_prepare = NULL;
+		for (i = 0; i < dev->data->nb_tx_queues; i++) {
+			txq = dev->data->tx_queues[i];
+			if (!txq)
+				continue;
+			avf_txq_vec_setup(txq);
+		}
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using Basic Tx callback (port=%d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = avf_xmit_pkts;
+		dev->tx_pkt_prepare = avf_prep_pkts;
+	}
 }
 
 void
@@ -1505,3 +1621,39 @@
 
 	return RTE_ETH_TX_DESC_FULL;
 }
+
+uint16_t __attribute__((weak))
+avf_recv_pkts_vec(__rte_unused void *rx_queue,
+		  __rte_unused struct rte_mbuf **rx_pkts,
+		  __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+uint16_t __attribute__((weak))
+avf_recv_scattered_pkts_vec(__rte_unused void *rx_queue,
+			    __rte_unused struct rte_mbuf **rx_pkts,
+			    __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+uint16_t __attribute__((weak))
+avf_xmit_fixed_burst_vec(__rte_unused void *tx_queue,
+			 __rte_unused struct rte_mbuf **tx_pkts,
+			 __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+int __attribute__((weak))
+avf_rxq_vec_setup(__rte_unused struct avf_rx_queue *rxq)
+{
+	return -1;
+}
+
+int __attribute__((weak))
+avf_txq_vec_setup(__rte_unused struct avf_tx_queue *txq)
+{
+	return -1;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index e248f55..82fd801 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -16,6 +16,15 @@
 /* used for Rx Bulk Allocate */
 #define AVF_RX_MAX_BURST         32
 
+/* used for Vector PMD */
+#define AVF_VPMD_RX_MAX_BURST    32
+#define AVF_VPMD_TX_MAX_BURST    32
+#define AVF_VPMD_DESCS_PER_LOOP  4
+#define AVF_VPMD_TX_MAX_FREE_BUF 64
+
+#define AVF_SIMPLE_FLAGS ((uint32_t)ETH_TXQ_FLAGS_NOMULTSEGS | \
+			  ETH_TXQ_FLAGS_NOOFFLOADS)
+
 #define DEFAULT_TX_RS_THRESH     32
 #define DEFAULT_TX_FREE_THRESH   32
 
@@ -45,6 +54,14 @@
 #define avf_rx_desc avf_32byte_rx_desc
 #endif
 
+struct avf_rxq_ops {
+	void (*release_mbufs)(struct avf_rx_queue *rxq);
+};
+
+struct avf_txq_ops {
+	void (*release_mbufs)(struct avf_tx_queue *txq);
+};
+
 /* Structure associated with each Rx queue. */
 struct avf_rx_queue {
 	struct rte_mempool *mp;       /* mbuf pool to populate Rx ring */
@@ -61,7 +78,12 @@ struct avf_rx_queue {
 	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
 	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
 
-	uint16_t port_id;       /* device port ID */
+	/* used for VPMD */
+	uint16_t rxrearm_nb;       /* number of remaining to be re-armed */
+	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
+	uint64_t mbuf_initializer; /* value to init mbufs */
+
+	uint16_t port_id;        /* device port ID */
 	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
 	uint16_t queue_id;      /* Rx queue index */
 	uint16_t rx_buf_len;    /* The packet buffer size */
@@ -70,6 +92,7 @@ struct avf_rx_queue {
 
 	bool q_set;             /* if rx queue has been configured */
 	bool rx_deferred_start; /* don't start this queue in dev start */
+	const struct avf_rxq_ops *ops;
 };
 
 struct avf_tx_entry {
@@ -102,6 +125,7 @@ struct avf_tx_queue {
 
 	bool q_set;                    /* if rx queue has been configured */
 	bool tx_deferred_start;        /* don't start this queue in dev start */
+	const struct avf_txq_ops *ops;
 };
 
 /* Offload features */
@@ -155,6 +179,16 @@ void avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 int avf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
 int avf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
+uint16_t avf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			   uint16_t nb_pkts);
+uint16_t avf_recv_scattered_pkts_vec(void *rx_queue,
+				     struct rte_mbuf **rx_pkts,
+				     uint16_t nb_pkts);
+uint16_t avf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+				  uint16_t nb_pkts);
+int avf_rxq_vec_setup(struct avf_rx_queue *rxq);
+int avf_txq_vec_setup(struct avf_tx_queue *txq);
+
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
 			    const void *desc,
diff --git a/drivers/net/avf/avf_rxtx_vec_common.h b/drivers/net/avf/avf_rxtx_vec_common.h
new file mode 100644
index 0000000..56a23a7
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx_vec_common.h
@@ -0,0 +1,210 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_RXTX_VEC_COMMON_H_
+#define _AVF_RXTX_VEC_COMMON_H_
+#include <stdint.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+
+#include "avf.h"
+#include "avf_rxtx.h"
+
+static inline uint16_t
+reassemble_packets(struct avf_rx_queue *rxq, struct rte_mbuf **rx_bufs,
+		   uint16_t nb_bufs, uint8_t *split_flags)
+{
+	struct rte_mbuf *pkts[AVF_VPMD_RX_MAX_BURST];
+	struct rte_mbuf *start = rxq->pkt_first_seg;
+	struct rte_mbuf *end =  rxq->pkt_last_seg;
+	unsigned int pkt_idx, buf_idx;
+
+	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
+		if (end) {
+			/* processing a split packet */
+			end->next = rx_bufs[buf_idx];
+			rx_bufs[buf_idx]->data_len += rxq->crc_len;
+
+			start->nb_segs++;
+			start->pkt_len += rx_bufs[buf_idx]->data_len;
+			end = end->next;
+
+			if (!split_flags[buf_idx]) {
+				/* it's the last packet of the set */
+				start->hash = end->hash;
+				start->ol_flags = end->ol_flags;
+				/* we need to strip crc for the whole packet */
+				start->pkt_len -= rxq->crc_len;
+				if (end->data_len > rxq->crc_len) {
+					end->data_len -= rxq->crc_len;
+				} else {
+					/* free up last mbuf */
+					struct rte_mbuf *secondlast = start;
+
+					start->nb_segs--;
+					while (secondlast->next != end)
+						secondlast = secondlast->next;
+					secondlast->data_len -= (rxq->crc_len -
+							end->data_len);
+					secondlast->next = NULL;
+					rte_pktmbuf_free_seg(end);
+				}
+				pkts[pkt_idx++] = start;
+				start = NULL;
+				end = NULL;
+			}
+		} else {
+			/* not processing a split packet */
+			if (!split_flags[buf_idx]) {
+				/* not a split packet, save and skip */
+				pkts[pkt_idx++] = rx_bufs[buf_idx];
+				continue;
+			}
+			end = start = rx_bufs[buf_idx];
+			rx_bufs[buf_idx]->data_len += rxq->crc_len;
+			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
+		}
+	}
+
+	/* save the partial packet for next time */
+	rxq->pkt_first_seg = start;
+	rxq->pkt_last_seg = end;
+	memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
+	return pkt_idx;
+}
+
+static __rte_always_inline int
+avf_tx_free_bufs(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *txep;
+	uint32_t n;
+	uint32_t i;
+	int nb_free = 0;
+	struct rte_mbuf *m, *free[AVF_VPMD_TX_MAX_FREE_BUF];
+
+	/* check DD bits on threshold descriptor */
+	if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK)) !=
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	n = txq->rs_thresh;
+
+	 /* first buffer to free from S/W ring is at index
+	  * tx_next_dd - (tx_rs_thresh-1)
+	  */
+	txep = &txq->sw_ring[txq->next_dd - (n - 1)];
+	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
+	if (likely(m != NULL)) {
+		free[0] = m;
+		nb_free = 1;
+		for (i = 1; i < n; i++) {
+			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+			if (likely(m != NULL)) {
+				if (likely(m->pool == free[0]->pool)) {
+					free[nb_free++] = m;
+				} else {
+					rte_mempool_put_bulk(free[0]->pool,
+							     (void *)free,
+							     nb_free);
+					free[0] = m;
+					nb_free = 1;
+				}
+			}
+		}
+		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+	} else {
+		for (i = 1; i < n; i++) {
+			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+			if (m)
+				rte_mempool_put(m->pool, m);
+		}
+	}
+
+	/* buffers were freed, update counters */
+	txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh);
+	txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh);
+	if (txq->next_dd >= txq->nb_tx_desc)
+		txq->next_dd = (uint16_t)(txq->rs_thresh - 1);
+
+	return txq->rs_thresh;
+}
+
+static __rte_always_inline void
+tx_backlog_entry(struct avf_tx_entry *txep,
+		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	int i;
+
+	for (i = 0; i < (int)nb_pkts; ++i)
+		txep[i].mbuf = tx_pkts[i];
+}
+
+static inline void
+_avf_rx_queue_release_mbufs_vec(struct avf_rx_queue *rxq)
+{
+	const unsigned int mask = rxq->nb_rx_desc - 1;
+	unsigned int i;
+
+	if (!rxq->sw_ring || rxq->rxrearm_nb >= rxq->nb_rx_desc)
+		return;
+
+	/* free all mbufs that are valid in the ring */
+	if (rxq->rxrearm_nb == 0) {
+		for (i = 0; i < rxq->nb_rx_desc; i++) {
+			if (rxq->sw_ring[i])
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	} else {
+		for (i = rxq->rx_tail;
+		     i != rxq->rxrearm_start;
+		     i = (i + 1) & mask) {
+			if (rxq->sw_ring[i])
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	}
+
+	rxq->rxrearm_nb = rxq->nb_rx_desc;
+
+	/* set all entries to NULL */
+	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
+}
+
+static inline void
+_avf_tx_queue_release_mbufs_vec(struct avf_tx_queue *txq)
+{
+	unsigned i;
+	const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
+
+	if (!txq->sw_ring || txq->nb_free == max_desc)
+		return;
+
+	i = txq->next_dd - txq->rs_thresh + 1;
+	if (txq->tx_tail < i) {
+		for (; i < txq->nb_tx_desc; i++) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+		i = 0;
+	}
+}
+
+static inline int
+avf_rxq_vec_setup_default(struct avf_rx_queue *rxq)
+{
+	uintptr_t p;
+	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
+
+	mb_def.nb_segs = 1;
+	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
+	mb_def.port = rxq->port_id;
+	rte_mbuf_refcnt_set(&mb_def, 1);
+
+	/* prevent compiler reordering: rearm_data covers previous fields */
+	rte_compiler_barrier();
+	p = (uintptr_t)&mb_def.rearm_data;
+	rxq->mbuf_initializer = *(uint64_t *)p;
+	return 0;
+}
+#endif
diff --git a/drivers/net/avf/avf_rxtx_vec_sse.c b/drivers/net/avf/avf_rxtx_vec_sse.c
new file mode 100644
index 0000000..8f389f3
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx_vec_sse.c
@@ -0,0 +1,656 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdint.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+
+#include "base/avf_prototype.h"
+#include "base/avf_type.h"
+#include "avf.h"
+#include "avf_rxtx.h"
+#include "avf_rxtx_vec_common.h"
+
+#include <tmmintrin.h>
+
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#endif
+
+static inline void
+avf_rxq_rearm(struct avf_rx_queue *rxq)
+{
+	int i;
+	uint16_t rx_id;
+
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxp = &rxq->sw_ring[rxq->rxrearm_start];
+	struct rte_mbuf *mb0, *mb1;
+	__m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM,
+			RTE_PKTMBUF_HEADROOM);
+	__m128i dma_addr0, dma_addr1;
+
+	rxdp = rxq->rx_ring + rxq->rxrearm_start;
+
+	/* Pull 'n' more MBUFs into the software ring */
+	if (rte_mempool_get_bulk(rxq->mp, (void *)rxp,
+				 rxq->rx_free_thresh) < 0) {
+		if (rxq->rxrearm_nb + rxq->rx_free_thresh >= rxq->nb_rx_desc) {
+			dma_addr0 = _mm_setzero_si128();
+			for (i = 0; i < AVF_VPMD_DESCS_PER_LOOP; i++) {
+				rxp[i] = &rxq->fake_mbuf;
+				_mm_store_si128((__m128i *)&rxdp[i].read,
+						dma_addr0);
+			}
+		}
+		rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed +=
+			rxq->rx_free_thresh;
+		return;
+	}
+
+	/* Initialize the mbufs in vector, process 2 mbufs in one loop */
+	for (i = 0; i < rxq->rx_free_thresh; i += 2, rxp += 2) {
+		__m128i vaddr0, vaddr1;
+
+		mb0 = rxp[0];
+		mb1 = rxp[1];
+
+		/* load buf_addr(lo 64bit) and buf_iova(hi 64bit) */
+		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) !=
+				offsetof(struct rte_mbuf, buf_addr) + 8);
+		vaddr0 = _mm_loadu_si128((__m128i *)&mb0->buf_addr);
+		vaddr1 = _mm_loadu_si128((__m128i *)&mb1->buf_addr);
+
+		/* convert pa to dma_addr hdr/data */
+		dma_addr0 = _mm_unpackhi_epi64(vaddr0, vaddr0);
+		dma_addr1 = _mm_unpackhi_epi64(vaddr1, vaddr1);
+
+		/* add headroom to pa values */
+		dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room);
+		dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room);
+
+		/* flush desc with pa dma_addr */
+		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr0);
+		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr1);
+	}
+
+	rxq->rxrearm_start += rxq->rx_free_thresh;
+	if (rxq->rxrearm_start >= rxq->nb_rx_desc)
+		rxq->rxrearm_start = 0;
+
+	rxq->rxrearm_nb -= rxq->rx_free_thresh;
+
+	rx_id = (uint16_t)((rxq->rxrearm_start == 0) ?
+			   (rxq->nb_rx_desc - 1) : (rxq->rxrearm_start - 1));
+
+	PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+		   "rearm_start=%u rearm_nb=%u",
+		   rxq->port_id, rxq->queue_id,
+		   rx_id, rxq->rxrearm_start, rxq->rxrearm_nb);
+
+	/* Update the tail pointer on the NIC */
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+}
+
+static inline void
+desc_to_olflags_v(struct avf_rx_queue *rxq, __m128i descs[4],
+		  struct rte_mbuf **rx_pkts)
+{
+	const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_initializer);
+	__m128i rearm0, rearm1, rearm2, rearm3;
+
+	__m128i vlan0, vlan1, rss, l3_l4e;
+
+	/* mask everything except RSS, flow director and VLAN flags
+	 * bit2 is for VLAN tag, bit11 for flow director indication
+	 * bit13:12 for RSS indication.
+	 */
+	const __m128i rss_vlan_msk = _mm_set_epi32(
+			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804);
+
+	const __m128i cksum_mask = _mm_set_epi32(
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD);
+
+	/* map rss and vlan type to rss hash and vlan flag */
+	const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0,
+			0, 0, 0, 0,
+			0, 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			0, 0, 0, 0);
+
+	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
+			0, 0, 0, 0,
+			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
+			0, 0, PKT_RX_FDIR, 0);
+
+	const __m128i l3_l4e_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
+			/* shift right 1 bit to make sure it does not exceed 255 */
+			(PKT_RX_EIP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
+			 PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_EIP_CKSUM_BAD |
+			 PKT_RX_L4_CKSUM_BAD) >> 1,
+			(PKT_RX_EIP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_EIP_CKSUM_BAD) >> 1,
+			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
+			PKT_RX_IP_CKSUM_BAD >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+
+	vlan0 = _mm_unpackhi_epi32(descs[0], descs[1]);
+	vlan1 = _mm_unpackhi_epi32(descs[2], descs[3]);
+	vlan0 = _mm_unpacklo_epi64(vlan0, vlan1);
+
+	vlan1 = _mm_and_si128(vlan0, rss_vlan_msk);
+	vlan0 = _mm_shuffle_epi8(vlan_flags, vlan1);
+
+	rss = _mm_srli_epi32(vlan1, 11);
+	rss = _mm_shuffle_epi8(rss_flags, rss);
+
+	l3_l4e = _mm_srli_epi32(vlan1, 22);
+	l3_l4e = _mm_shuffle_epi8(l3_l4e_flags, l3_l4e);
+	/* then we shift left 1 bit */
+	l3_l4e = _mm_slli_epi32(l3_l4e, 1);
+	/* we need to mask out the redundant bits */
+	l3_l4e = _mm_and_si128(l3_l4e, cksum_mask);
+
+	vlan0 = _mm_or_si128(vlan0, rss);
+	vlan0 = _mm_or_si128(vlan0, l3_l4e);
+
+	/* At this point, we have the 4 sets of flags in the low 16-bits
+	 * of each 32-bit value in vlan0.
+	 * We want to extract these, and merge them with the mbuf init data
+	 * so we can do a single 16-byte write to the mbuf to set the flags
+	 * and all the other initialization fields. Extracting the
+	 * appropriate flags means that we have to do a shift and blend for
+	 * each mbuf before we do the write.
+	 */
+	rearm0 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vlan0, 8), 0x10);
+	rearm1 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vlan0, 4), 0x10);
+	rearm2 = _mm_blend_epi16(mbuf_init, vlan0, 0x10);
+	rearm3 = _mm_blend_epi16(mbuf_init, _mm_srli_si128(vlan0, 4), 0x10);
+
+	/* write the rearm data and the olflags in one write */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
+			offsetof(struct rte_mbuf, rearm_data) + 8);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) !=
+			RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16));
+	_mm_store_si128((__m128i *)&rx_pkts[0]->rearm_data, rearm0);
+	_mm_store_si128((__m128i *)&rx_pkts[1]->rearm_data, rearm1);
+	_mm_store_si128((__m128i *)&rx_pkts[2]->rearm_data, rearm2);
+	_mm_store_si128((__m128i *)&rx_pkts[3]->rearm_data, rearm3);
+}
+
+#define PKTLEN_SHIFT     10
+
+static inline void
+desc_to_ptype_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
+{
+	__m128i ptype0 = _mm_unpackhi_epi64(descs[0], descs[1]);
+	__m128i ptype1 = _mm_unpackhi_epi64(descs[2], descs[3]);
+	static const uint32_t type_table[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	ptype0 = _mm_srli_epi64(ptype0, 30);
+	ptype1 = _mm_srli_epi64(ptype1, 30);
+
+	rx_pkts[0]->packet_type = type_table[_mm_extract_epi8(ptype0, 0)];
+	rx_pkts[1]->packet_type = type_table[_mm_extract_epi8(ptype0, 8)];
+	rx_pkts[2]->packet_type = type_table[_mm_extract_epi8(ptype1, 0)];
+	rx_pkts[3]->packet_type = type_table[_mm_extract_epi8(ptype1, 8)];
+}
+
+/* Notice:
+ * - if nb_pkts < AVF_VPMD_DESCS_PER_LOOP, no packets are returned
+ * - if nb_pkts > AVF_VPMD_RX_MAX_BURST, only AVF_VPMD_RX_MAX_BURST
+ *   DD bits are scanned
+ */
+static inline uint16_t
+_recv_raw_pkts_vec(struct avf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
+		   uint16_t nb_pkts, uint8_t *split_packet)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **sw_ring;
+	uint16_t nb_pkts_recd;
+	int pos;
+	uint64_t var;
+	__m128i shuf_msk;
+
+	__m128i crc_adjust = _mm_set_epi16(
+				0, 0, 0,    /* ignore non-length fields */
+				-rxq->crc_len, /* sub crc on data_len */
+				0,          /* ignore high-16bits of pkt_len */
+				-rxq->crc_len, /* sub crc on pkt_len */
+				0, 0            /* ignore pkt_type field */
+			);
+	/* compile-time check the above crc_adjust layout is correct.
+	 * NOTE: the first field (lowest address) is given last in set_epi16
+	 * call above.
+	 */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
+	__m128i dd_check, eop_check;
+
+	/* nb_pkts shall be less than or equal to AVF_VPMD_RX_MAX_BURST */
+	nb_pkts = RTE_MIN(nb_pkts, AVF_VPMD_RX_MAX_BURST);
+
+	/* nb_pkts has to be floor-aligned to AVF_VPMD_DESCS_PER_LOOP */
+	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, AVF_VPMD_DESCS_PER_LOOP);
+
+	/* Just the act of getting into the function from the application is
+	 * going to cost about 7 cycles
+	 */
+	rxdp = rxq->rx_ring + rxq->rx_tail;
+
+	rte_prefetch0(rxdp);
+
+	/* See if we need to rearm the RX queue - gives the prefetch a bit
+	 * of time to act
+	 */
+	if (rxq->rxrearm_nb > rxq->rx_free_thresh)
+		avf_rxq_rearm(rxq);
+
+	/* Before we start moving massive data around, check to see if
+	 * there is actually a packet available
+	 */
+	if (!(rxdp->wb.qword1.status_error_len &
+	      rte_cpu_to_le_32(1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+		return 0;
+
+	/* 4 packets DD mask */
+	dd_check = _mm_set_epi64x(0x0000000100000001LL, 0x0000000100000001LL);
+
+	/* 4 packets EOP mask */
+	eop_check = _mm_set_epi64x(0x0000000200000002LL, 0x0000000200000002LL);
+
+	/* mask to shuffle from desc. to mbuf */
+	shuf_msk = _mm_set_epi8(
+		7, 6, 5, 4,  /* octet 4~7, 32bits rss */
+		3, 2,        /* octet 2~3, low 16 bits vlan_macip */
+		15, 14,      /* octet 15~14, 16 bits data_len */
+		0xFF, 0xFF,  /* skip high 16 bits pkt_len, zero out */
+		15, 14,      /* octet 15~14, low 16 bits pkt_len */
+		0xFF, 0xFF, 0xFF, 0xFF /* pkt_type set as unknown */
+		);
+	/* Compile-time verify the shuffle mask
+	 * NOTE: some field positions already verified above, but duplicated
+	 * here for completeness in case of future modifications.
+	 */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 10);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12);
+
+	/* Cache is empty -> need to scan the buffer rings, but first move
+	 * the next 'n' mbufs into the cache
+	 */
+	sw_ring = &rxq->sw_ring[rxq->rx_tail];
+
+	/* A. load 4 packet in one loop
+	 * [A*. mask out 4 unused dirty field in desc]
+	 * B. copy 4 mbuf point from swring to rx_pkts
+	 * C. calc the number of DD bits among the 4 packets
+	 * [C*. extract the end-of-packet bit, if requested]
+	 * D. fill info. from desc to mbuf
+	 */
+
+	for (pos = 0, nb_pkts_recd = 0; pos < nb_pkts;
+	     pos += AVF_VPMD_DESCS_PER_LOOP,
+	     rxdp += AVF_VPMD_DESCS_PER_LOOP) {
+		__m128i descs[AVF_VPMD_DESCS_PER_LOOP];
+		__m128i pkt_mb1, pkt_mb2, pkt_mb3, pkt_mb4;
+		__m128i zero, staterr, sterr_tmp1, sterr_tmp2;
+		/* 2 64 bit or 4 32 bit mbuf pointers in one XMM reg. */
+		__m128i mbp1;
+#if defined(RTE_ARCH_X86_64)
+		__m128i mbp2;
+#endif
+
+		/* B.1 load 2 (64 bit) or 4 (32 bit) mbuf points */
+		mbp1 = _mm_loadu_si128((__m128i *)&sw_ring[pos]);
+		/* Read desc statuses backwards to avoid race condition */
+		/* A.1 load 4 pkts desc */
+		descs[3] = _mm_loadu_si128((__m128i *)(rxdp + 3));
+		rte_compiler_barrier();
+
+		/* B.2 copy 2 64 bit or 4 32 bit mbuf point into rx_pkts */
+		_mm_storeu_si128((__m128i *)&rx_pkts[pos], mbp1);
+
+#if defined(RTE_ARCH_X86_64)
+		/* B.1 load 2 64 bit mbuf points */
+		mbp2 = _mm_loadu_si128((__m128i *)&sw_ring[pos + 2]);
+#endif
+
+		descs[2] = _mm_loadu_si128((__m128i *)(rxdp + 2));
+		rte_compiler_barrier();
+		/* B.1 load 2 mbuf point */
+		descs[1] = _mm_loadu_si128((__m128i *)(rxdp + 1));
+		rte_compiler_barrier();
+		descs[0] = _mm_loadu_si128((__m128i *)(rxdp));
+
+#if defined(RTE_ARCH_X86_64)
+		/* B.2 copy 2 mbuf point into rx_pkts  */
+		_mm_storeu_si128((__m128i *)&rx_pkts[pos + 2], mbp2);
+#endif
+
+		if (split_packet) {
+			rte_mbuf_prefetch_part2(rx_pkts[pos]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 1]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 2]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 3]);
+		}
+
+		/* avoid compiler reorder optimization */
+		rte_compiler_barrier();
+
+		/* pkt 3,4 shift the pktlen field to be 16-bit aligned*/
+		const __m128i len3 = _mm_slli_epi32(descs[3], PKTLEN_SHIFT);
+		const __m128i len2 = _mm_slli_epi32(descs[2], PKTLEN_SHIFT);
+
+		/* merge the now-aligned packet length fields back in */
+		descs[3] = _mm_blend_epi16(descs[3], len3, 0x80);
+		descs[2] = _mm_blend_epi16(descs[2], len2, 0x80);
+
+		/* D.1 pkt 3,4 convert format from desc to pktmbuf */
+		pkt_mb4 = _mm_shuffle_epi8(descs[3], shuf_msk);
+		pkt_mb3 = _mm_shuffle_epi8(descs[2], shuf_msk);
+
+		/* C.1 4=>2 status err info only */
+		sterr_tmp2 = _mm_unpackhi_epi32(descs[3], descs[2]);
+		sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
+
+		desc_to_olflags_v(rxq, descs, &rx_pkts[pos]);
+
+		/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
+		pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
+		pkt_mb3 = _mm_add_epi16(pkt_mb3, crc_adjust);
+
+		/* pkt 1,2 shift the pktlen field to be 16-bit aligned*/
+		const __m128i len1 = _mm_slli_epi32(descs[1], PKTLEN_SHIFT);
+		const __m128i len0 = _mm_slli_epi32(descs[0], PKTLEN_SHIFT);
+
+		/* merge the now-aligned packet length fields back in */
+		descs[1] = _mm_blend_epi16(descs[1], len1, 0x80);
+		descs[0] = _mm_blend_epi16(descs[0], len0, 0x80);
+
+		/* D.1 pkt 1,2 convert format from desc to pktmbuf */
+		pkt_mb2 = _mm_shuffle_epi8(descs[1], shuf_msk);
+		pkt_mb1 = _mm_shuffle_epi8(descs[0], shuf_msk);
+
+		/* C.2 get 4 pkts status err value  */
+		zero = _mm_xor_si128(dd_check, dd_check);
+		staterr = _mm_unpacklo_epi32(sterr_tmp1, sterr_tmp2);
+
+		/* D.3 copy final 3,4 data to rx_pkts */
+		_mm_storeu_si128(
+			(void *)&rx_pkts[pos + 3]->rx_descriptor_fields1,
+			pkt_mb4);
+		_mm_storeu_si128(
+			(void *)&rx_pkts[pos + 2]->rx_descriptor_fields1,
+			pkt_mb3);
+
+		/* D.2 pkt 1,2 remove crc */
+		pkt_mb2 = _mm_add_epi16(pkt_mb2, crc_adjust);
+		pkt_mb1 = _mm_add_epi16(pkt_mb1, crc_adjust);
+
+		/* C* extract and record EOP bit */
+		if (split_packet) {
+			__m128i eop_shuf_mask = _mm_set_epi8(
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0x04, 0x0C, 0x00, 0x08
+					);
+
+			/* and with mask to extract bits, flipping 1-0 */
+			__m128i eop_bits = _mm_andnot_si128(staterr, eop_check);
+			/* the staterr values are not in order, as the count
+			 * of dd bits doesn't care. However, for end of
+			 * packet tracking, we do care, so shuffle. This also
+			 * compresses the 32-bit values to 8-bit
+			 */
+			eop_bits = _mm_shuffle_epi8(eop_bits, eop_shuf_mask);
+			/* store the resulting 32-bit value */
+			*(int *)split_packet = _mm_cvtsi128_si32(eop_bits);
+			split_packet += AVF_VPMD_DESCS_PER_LOOP;
+		}
+
+		/* C.3 calc available number of desc */
+		staterr = _mm_and_si128(staterr, dd_check);
+		staterr = _mm_packs_epi32(staterr, zero);
+
+		/* D.3 copy final 1,2 data to rx_pkts */
+		_mm_storeu_si128(
+			(void *)&rx_pkts[pos + 1]->rx_descriptor_fields1,
+			pkt_mb2);
+		_mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1,
+				 pkt_mb1);
+		desc_to_ptype_v(descs, &rx_pkts[pos]);
+		/* C.4 calc available number of desc */
+		var = __builtin_popcountll(_mm_cvtsi128_si64(staterr));
+		nb_pkts_recd += var;
+		if (likely(var != AVF_VPMD_DESCS_PER_LOOP))
+			break;
+	}
+
+	/* Update our internal tail pointer */
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_pkts_recd);
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail & (rxq->nb_rx_desc - 1));
+	rxq->rxrearm_nb = (uint16_t)(rxq->rxrearm_nb + nb_pkts_recd);
+
+	return nb_pkts_recd;
+}
+
+/* Notice:
+ * - if nb_pkts < AVF_VPMD_DESCS_PER_LOOP, no packets are returned
+ * - if nb_pkts > AVF_VPMD_RX_MAX_BURST, only AVF_VPMD_RX_MAX_BURST
+ *   DD bits are scanned
+ */
+uint16_t
+avf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+		  uint16_t nb_pkts)
+{
+	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
+}
+
+/* vPMD receive routine that reassembles scattered packets
+ * Notice:
+ * - if nb_pkts < AVF_VPMD_DESCS_PER_LOOP, no packets are returned
+ * - if nb_pkts > AVF_VPMD_RX_MAX_BURST, only AVF_VPMD_RX_MAX_BURST
+ *   DD bits are scanned
+ */
+uint16_t
+avf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			    uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	uint8_t split_flags[AVF_VPMD_RX_MAX_BURST] = {0};
+	unsigned int i = 0;
+
+	/* get some new buffers */
+	uint16_t nb_bufs = _recv_raw_pkts_vec(rxq, rx_pkts, nb_pkts,
+					      split_flags);
+	if (nb_bufs == 0)
+		return 0;
+
+	/* happy day case, full burst + no packets to be joined */
+	const uint64_t *split_fl64 = (uint64_t *)split_flags;
+
+	if (!rxq->pkt_first_seg &&
+	    split_fl64[0] == 0 && split_fl64[1] == 0 &&
+	    split_fl64[2] == 0 && split_fl64[3] == 0)
+		return nb_bufs;
+
+	/* reassemble any packets that need reassembly */
+	if (!rxq->pkt_first_seg) {
+		/* find the first split flag, and only reassemble from there */
+		while (i < nb_bufs && !split_flags[i])
+			i++;
+		if (i == nb_bufs)
+			return nb_bufs;
+	}
+	return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
+		&split_flags[i]);
+}
+
+static inline void
+vtx1(volatile struct avf_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)
+{
+	uint64_t high_qw =
+			(AVF_TX_DESC_DTYPE_DATA |
+			 ((uint64_t)flags  << AVF_TXD_QW1_CMD_SHIFT) |
+			 ((uint64_t)pkt->data_len <<
+			  AVF_TXD_QW1_TX_BUF_SZ_SHIFT));
+
+	__m128i descriptor = _mm_set_epi64x(high_qw,
+					    pkt->buf_iova + pkt->data_off);
+	_mm_store_si128((__m128i *)txdp, descriptor);
+}
+
+static inline void
+avf_vtx(volatile struct avf_tx_desc *txdp, struct rte_mbuf **pkt,
+	uint16_t nb_pkts,  uint64_t flags)
+{
+	int i;
+
+	for (i = 0; i < nb_pkts; ++i, ++txdp, ++pkt)
+		vtx1(txdp, *pkt, flags);
+}
+
+uint16_t
+avf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+			 uint16_t nb_pkts)
+{
+	struct avf_tx_queue *txq = (struct avf_tx_queue *)tx_queue;
+	volatile struct avf_tx_desc *txdp;
+	struct avf_tx_entry *txep;
+	uint16_t n, nb_commit, tx_id;
+	uint64_t flags = AVF_TX_DESC_CMD_EOP | 0x04;  /* bit 2 must be set */
+	uint64_t rs = AVF_TX_DESC_CMD_RS | flags;
+	int i;
+
+	/* crossing the rs_thresh boundary is not allowed */
+	nb_pkts = RTE_MIN(nb_pkts, txq->rs_thresh);
+
+	if (txq->nb_free < txq->free_thresh)
+		avf_tx_free_bufs(txq);
+
+	nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+	if (unlikely(nb_pkts == 0))
+		return 0;
+	nb_commit = nb_pkts;
+
+	tx_id = txq->tx_tail;
+	txdp = &txq->tx_ring[tx_id];
+	txep = &txq->sw_ring[tx_id];
+
+	txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+
+	n = (uint16_t)(txq->nb_tx_desc - tx_id);
+	if (nb_commit >= n) {
+		tx_backlog_entry(txep, tx_pkts, n);
+
+		for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
+			vtx1(txdp, *tx_pkts, flags);
+
+		vtx1(txdp, *tx_pkts++, rs);
+
+		nb_commit = (uint16_t)(nb_commit - n);
+
+		tx_id = 0;
+		txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+
+		/* avoid reaching the end of the ring */
+		txdp = &txq->tx_ring[tx_id];
+		txep = &txq->sw_ring[tx_id];
+	}
+
+	tx_backlog_entry(txep, tx_pkts, nb_commit);
+
+	avf_vtx(txdp, tx_pkts, nb_commit, flags);
+
+	tx_id = (uint16_t)(tx_id + nb_commit);
+	if (tx_id > txq->next_rs) {
+		txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)AVF_TX_DESC_CMD_RS) <<
+					 AVF_TXD_QW1_CMD_SHIFT);
+		txq->next_rs =
+			(uint16_t)(txq->next_rs + txq->rs_thresh);
+	}
+
+	txq->tx_tail = tx_id;
+
+	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_pkts=%u",
+		   txq->port_id, txq->queue_id, tx_id, nb_pkts);
+
+	AVF_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
+
+	return nb_pkts;
+}
+
+void __attribute__((cold))
+avf_rx_queue_release_mbufs_sse(struct avf_rx_queue *rxq)
+{
+	_avf_rx_queue_release_mbufs_vec(rxq);
+}
+
+static void __attribute__((cold))
+avf_tx_queue_release_mbufs_sse(struct avf_tx_queue *txq)
+{
+	_avf_tx_queue_release_mbufs_vec(txq);
+}
+
+static const struct avf_rxq_ops sse_vec_rxq_ops = {
+	.release_mbufs = avf_rx_queue_release_mbufs_sse,
+};
+
+static const struct avf_txq_ops sse_vec_txq_ops = {
+	.release_mbufs = avf_tx_queue_release_mbufs_sse,
+};
+
+int __attribute__((cold))
+avf_txq_vec_setup(struct avf_tx_queue *txq)
+{
+	txq->ops = &sse_vec_txq_ops;
+	return 0;
+}
+
+int __attribute__((cold))
+avf_rxq_vec_setup(struct avf_rx_queue *rxq)
+{
+	rxq->ops = &sse_vec_rxq_ops;
+	return avf_rxq_vec_setup_default(rxq);
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v6 13/14] net/avf: enable bulk allocate Rx func
  2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] dd new AVF PMD Wenzhuo Lu
                           ` (11 preceding siblings ...)
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 12/14] net/avf: enable sse vector Rx Tx func Wenzhuo Lu
@ 2018-01-10  6:16         ` Wenzhuo Lu
  2018-01-10  6:16         ` [dpdk-dev] [PATCH v6 14/14] net/avf: enable Rx interrupt support Wenzhuo Lu
  2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Wenzhuo Lu
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10  6:16 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/avf/avf.h        |   1 +
 drivers/net/avf/avf_ethdev.c |   1 +
 drivers/net/avf/avf_rxtx.c   | 300 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h   |   6 +
 4 files changed, 308 insertions(+)
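
From the application's point of view (a sketch, not part of this patch), any
burst size works once the bulk-allocate callback is selected, because
avf_recv_pkts_bulk_alloc() below splits requests into AVF_RX_MAX_BURST chunks
internally; the queue only has to be set up with rx_free_thresh >=
AVF_RX_MAX_BURST and a descriptor count that is a multiple of it. Port/queue
ids and the burst size are arbitrary:

struct rte_mbuf *pkts[64];
uint16_t nb, i;

for (;;) {
	/* may return up to 64 packets, gathered in 32-packet chunks */
	nb = rte_eth_rx_burst(port_id, 0, pkts, 64);
	for (i = 0; i < nb; i++)
		rte_pktmbuf_free(pkts[i]);	/* application processing goes here */
}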

diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index b79bc5a..ea0f7d8 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -120,6 +120,7 @@ struct avf_adapter {
 	struct rte_eth_dev *eth_dev;
 	struct avf_info vf;
 
+	bool rx_bulk_alloc_allowed;
 	/* For vector PMD */
 	bool rx_vec_allowed;
 	bool tx_vec_allowed;
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 127fdb5..d9f7cea 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -121,6 +121,7 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(ad);
 	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
 
+	ad->rx_bulk_alloc_allowed = true;
 #ifdef RTE_LIBRTE_AVF_INC_VECTOR
 	/* Initialize to TRUE. If any of Rx queues doesn't meet the
 	 * vector Rx/Tx preconditions, it will be reset.
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index b542532..e0c4583 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -120,6 +120,27 @@
 }
 #endif
 
+static inline bool
+check_rx_bulk_allow(struct avf_rx_queue *rxq)
+{
+	int ret = TRUE;
+
+	if (!(rxq->rx_free_thresh >= AVF_RX_MAX_BURST)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "AVF_RX_MAX_BURST=%d",
+			     rxq->rx_free_thresh, AVF_RX_MAX_BURST);
+		ret = FALSE;
+	} else if (rxq->nb_rx_desc % rxq->rx_free_thresh != 0) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->nb_rx_desc=%d, "
+			     "rxq->rx_free_thresh=%d",
+			     rxq->nb_rx_desc, rxq->rx_free_thresh);
+		ret = FALSE;
+	}
+	return ret;
+}
+
 static inline void
 reset_rx_queue(struct avf_rx_queue *rxq)
 {
@@ -138,6 +159,11 @@
 	for (i = 0; i < AVF_RX_MAX_BURST; i++)
 		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
 
+	/* for rx bulk */
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
 	rxq->rx_tail = 0;
 	rxq->nb_rx_hold = 0;
 	rxq->pkt_first_seg = NULL;
@@ -233,6 +259,17 @@
 			rxq->sw_ring[i] = NULL;
 		}
 	}
+
+	/* for rx bulk */
+	if (rxq->rx_nb_avail == 0)
+		return;
+	for (i = 0; i < rxq->rx_nb_avail; i++) {
+		struct rte_mbuf *mbuf;
+
+		mbuf = rxq->rx_stage[rxq->rx_next_avail + i];
+		rte_pktmbuf_free_seg(mbuf);
+	}
+	rxq->rx_nb_avail = 0;
 }
 
 static inline void
@@ -363,6 +400,19 @@
 	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
 	rxq->ops = &def_rxq_ops;
 
+	if (check_rx_bulk_allow(rxq) == TRUE) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function will be "
+			     "used on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+	} else {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "not satisfied, Scattered Rx is requested "
+			     "on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+		ad->rx_bulk_alloc_allowed = false;
+	}
+
 #ifdef RTE_LIBRTE_AVF_INC_VECTOR
 	if (check_rx_vec_allow(rxq) == FALSE)
 		ad->rx_vec_allowed = false;
@@ -1036,6 +1086,252 @@
 	return nb_rx;
 }
 
+#define AVF_LOOK_AHEAD 8
+static inline int
+avf_rx_scan_hw_ring(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxep;
+	struct rte_mbuf *mb;
+	uint16_t pkt_len;
+	uint64_t qword1;
+	uint32_t rx_status;
+	int32_t s[AVF_LOOK_AHEAD], nb_dd;
+	int32_t i, j, nb_rx = 0;
+	uint64_t pkt_flags;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	rxep = &rxq->sw_ring[rxq->rx_tail];
+
+	qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+	rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+		    AVF_RXD_QW1_STATUS_SHIFT;
+
+	/* Make sure there is at least 1 packet to receive */
+	if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+		return 0;
+
+	/* Scan LOOK_AHEAD descriptors at a time to determine which
+	 * descriptors reference packets that are ready to be received.
+	 */
+	for (i = 0; i < AVF_RX_MAX_BURST; i += AVF_LOOK_AHEAD,
+	     rxdp += AVF_LOOK_AHEAD, rxep += AVF_LOOK_AHEAD) {
+		/* Read desc statuses backwards to avoid race condition */
+		for (j = AVF_LOOK_AHEAD - 1; j >= 0; j--) {
+			qword1 = rte_le_to_cpu_64(
+				rxdp[j].wb.qword1.status_error_len);
+			s[j] = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			       AVF_RXD_QW1_STATUS_SHIFT;
+		}
+
+		rte_smp_rmb();
+
+		/* Compute how many status bits were set */
+		for (j = 0, nb_dd = 0; j < AVF_LOOK_AHEAD; j++)
+			nb_dd += s[j] & (1 << AVF_RX_DESC_STATUS_DD_SHIFT);
+
+		nb_rx += nb_dd;
+
+		/* Translate descriptor info to mbuf parameters */
+		for (j = 0; j < nb_dd; j++) {
+			AVF_DUMP_RX_DESC(rxq, &rxdp[j],
+					 rxq->rx_tail + i * AVF_LOOK_AHEAD + j);
+
+			mb = rxep[j];
+			qword1 = rte_le_to_cpu_64
+					(rxdp[j].wb.qword1.status_error_len);
+			pkt_len = ((qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				  AVF_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
+			mb->data_len = pkt_len;
+			mb->pkt_len = pkt_len;
+			mb->ol_flags = 0;
+			avf_rxd_to_vlan_tci(mb, &rxdp[j]);
+			pkt_flags = avf_rxd_to_pkt_flags(qword1);
+			mb->packet_type =
+				ptype_tbl[(uint8_t)((qword1 &
+				AVF_RXD_QW1_PTYPE_MASK) >>
+				AVF_RXD_QW1_PTYPE_SHIFT)];
+
+			if (pkt_flags & PKT_RX_RSS_HASH)
+				mb->hash.rss = rte_le_to_cpu_32(
+					rxdp[j].wb.qword0.hi_dword.rss);
+
+			mb->ol_flags |= pkt_flags;
+		}
+
+		for (j = 0; j < AVF_LOOK_AHEAD; j++)
+			rxq->rx_stage[i + j] = rxep[j];
+
+		if (nb_dd != AVF_LOOK_AHEAD)
+			break;
+	}
+
+	/* Clear software ring entries */
+	for (i = 0; i < nb_rx; i++)
+		rxq->sw_ring[rxq->rx_tail + i] = NULL;
+
+	return nb_rx;
+}
+
+static inline uint16_t
+avf_rx_fill_from_stage(struct avf_rx_queue *rxq,
+		       struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts)
+{
+	uint16_t i;
+	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
+
+	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
+
+	for (i = 0; i < nb_pkts; i++)
+		rx_pkts[i] = stage[i];
+
+	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
+	rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+
+	return nb_pkts;
+}
+
+static inline int
+avf_rx_alloc_bufs(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxep;
+	struct rte_mbuf *mb;
+	uint16_t alloc_idx, i;
+	uint64_t dma_addr;
+	int diag;
+
+	/* Allocate buffers in bulk */
+	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
+				(rxq->rx_free_thresh - 1));
+	rxep = &rxq->sw_ring[alloc_idx];
+	diag = rte_mempool_get_bulk(rxq->mp, (void *)rxep,
+				    rxq->rx_free_thresh);
+	if (unlikely(diag != 0)) {
+		PMD_RX_LOG(ERR, "Failed to get mbufs in bulk");
+		return -ENOMEM;
+	}
+
+	rxdp = &rxq->rx_ring[alloc_idx];
+	for (i = 0; i < rxq->rx_free_thresh; i++) {
+		if (likely(i < (rxq->rx_free_thresh - 1)))
+			/* Prefetch next mbuf */
+			rte_prefetch0(rxep[i + 1]);
+
+		mb = rxep[i];
+		rte_mbuf_refcnt_set(mb, 1);
+		mb->next = NULL;
+		mb->data_off = RTE_PKTMBUF_HEADROOM;
+		mb->nb_segs = 1;
+		mb->port = rxq->port_id;
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
+		rxdp[i].read.hdr_addr = 0;
+		rxdp[i].read.pkt_addr = dma_addr;
+	}
+
+	/* Update rx tail register */
+	rte_wmb();
+	AVF_PCI_REG_WRITE_RELAXED(rxq->qrx_tail, rxq->rx_free_trigger);
+
+	rxq->rx_free_trigger =
+		(uint16_t)(rxq->rx_free_trigger + rxq->rx_free_thresh);
+	if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
+		rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	return 0;
+}
+
+static inline uint16_t
+rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = (struct avf_rx_queue *)rx_queue;
+	struct rte_eth_dev *dev;
+	uint16_t nb_rx = 0;
+
+	if (!nb_pkts)
+		return 0;
+
+	if (rxq->rx_nb_avail)
+		return avf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	nb_rx = (uint16_t)avf_rx_scan_hw_ring(rxq);
+	rxq->rx_next_avail = 0;
+	rxq->rx_nb_avail = nb_rx;
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+
+	if (rxq->rx_tail > rxq->rx_free_trigger) {
+		if (avf_rx_alloc_bufs(rxq) != 0) {
+			uint16_t i, j;
+
+			/* TODO: count rx_mbuf_alloc_failed here */
+
+			rxq->rx_nb_avail = 0;
+			rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
+			for (i = 0, j = rxq->rx_tail; i < nb_rx; i++, j++)
+				rxq->sw_ring[j] = rxq->rx_stage[i];
+
+			return 0;
+		}
+	}
+
+	if (rxq->rx_tail >= rxq->nb_rx_desc)
+		rxq->rx_tail = 0;
+
+	PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u, nb_rx=%u",
+		   rxq->port_id, rxq->queue_id,
+		   rxq->rx_tail, nb_rx);
+
+	if (rxq->rx_nb_avail)
+		return avf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	return 0;
+}
+
+static uint16_t
+avf_recv_pkts_bulk_alloc(void *rx_queue,
+			 struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	uint16_t nb_rx = 0, n, count;
+
+	if (unlikely(nb_pkts == 0))
+		return 0;
+
+	if (likely(nb_pkts <= AVF_RX_MAX_BURST))
+		return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		n = RTE_MIN(nb_pkts, AVF_RX_MAX_BURST);
+		count = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+		nb_rx = (uint16_t)(nb_rx + count);
+		nb_pkts = (uint16_t)(nb_pkts - count);
+		if (count < n)
+			break;
+	}
+
+	return nb_rx;
+}
+
 static inline int
 avf_xmit_cleanup(struct avf_tx_queue *txq)
 {
@@ -1467,6 +1763,10 @@
 		PMD_DRV_LOG(DEBUG, "Using a Scattered Rx callback (port=%d).",
 			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_scattered_pkts;
+	} else if (adapter->rx_bulk_alloc_allowed) {
+		PMD_DRV_LOG(DEBUG, "Using bulk Rx callback (port=%d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = avf_recv_pkts_bulk_alloc;
 	} else {
 		PMD_DRV_LOG(DEBUG, "Using Basic Rx callback (port=%d).",
 			    dev->data->port_id);
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index 82fd801..d1701cd 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -83,6 +83,12 @@ struct avf_rx_queue {
 	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
 	uint64_t mbuf_initializer; /* value to init mbufs */
 
+	/* for rx bulk */
+	uint16_t rx_nb_avail;      /* number of staged packets ready */
+	uint16_t rx_next_avail;    /* index of next staged packet */
+	uint16_t rx_free_trigger;  /* triggers rx buffer allocation */
+	struct rte_mbuf *rx_stage[AVF_RX_MAX_BURST * 2]; /* store mbuf */
+
 	uint16_t port_id;        /* device port ID */
 	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
 	uint16_t queue_id;      /* Rx queue index */
-- 
1.9.3
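
The bulk-allocate receive path added above is selected transparently through
dev->rx_pkt_burst, so applications keep using the generic ethdev burst API.
A minimal sketch of such a receive loop is shown below; the burst size and the
per-packet processing are illustrative assumptions, not part of this patch set.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32	/* bursts larger than AVF_RX_MAX_BURST are split internally */

static void
rx_poll_loop(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[BURST_SIZE];
	uint16_t i, nb_rx;

	for (;;) {
		/* Dispatches to avf_recv_pkts_bulk_alloc() (or the basic/
		 * scattered callbacks) behind this single API call.
		 */
		nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SIZE);
		for (i = 0; i < nb_rx; i++) {
			/* process pkts[i] here ... */
			rte_pktmbuf_free(pkts[i]);
		}
	}
}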

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v6 14/14] net/avf: enable Rx interrupt support
  2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] add new AVF PMD Wenzhuo Lu
                           ` (12 preceding siblings ...)
  2018-01-10  6:16         ` [dpdk-dev] [PATCH v6 13/14] net/avf: enable bulk allocate Rx func Wenzhuo Lu
@ 2018-01-10  6:16         ` Wenzhuo Lu
  2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] add new AVF PMD Wenzhuo Lu
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10  6:16 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Also update the documentation for the AVF features.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini       |   1 +
 doc/guides/nics/features/avf_vec.ini   |   1 +
 doc/guides/nics/intel_vf.rst           |  20 +++-
 doc/guides/rel_notes/release_18_02.rst |  16 +++
 drivers/net/avf/avf_ethdev.c           | 204 +++++++++++++++++++++++++++------
 5 files changed, 204 insertions(+), 38 deletions(-)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index da4d81b..ccb9edd 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
diff --git a/doc/guides/nics/features/avf_vec.ini b/doc/guides/nics/features/avf_vec.ini
index 45dd5e5..8924994 100644
--- a/doc/guides/nics/features/avf_vec.ini
+++ b/doc/guides/nics/features/avf_vec.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 1e83bf6..66f90b1 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -28,8 +28,8 @@
     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-I40E/IXGBE/IGB Virtual Function Driver
-======================================
+Intel Virtual Function Driver
+=============================
 
 Supported Intel® Ethernet Controllers (see the *DPDK Release Notes* for details)
 support the following modes of operation in a virtualized environment:
@@ -93,6 +93,22 @@ and the Physical Function operates on the global resources on behalf of the Virt
 For this out-of-band communication, an SR-IOV enabled NIC provides a memory buffer for each Virtual Function,
 which is called a "Mailbox".
 
+Intel® Ethernet Adaptive Virtual Function
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Adaptive Virtual Function (AVF) is an SR-IOV Virtual Function with the same device ID (8086:1889) across different Intel Ethernet Controllers.
+The AVF driver is a VF driver which supports all future Intel devices without requiring a VM update. Since it is an adaptive VF driver,
+every new release of the VF driver can add more advanced features that can be turned on in the VM if the underlying HW device supports those
+advanced features, in a device agnostic way and without ever compromising the base functionality. AVF provides a generic hardware interface, and
+the interface between the AVF driver and a compliant PF driver is specified.
+
+Intel products starting from the Ethernet Controller 710 Series support Adaptive Virtual Function.
+
+Virtual Functions are generated in the usual way, and the resources assigned to a VF depend on the NIC infrastructure.
+
+For more detail on AVF, please refer to the following document:
+
+*   `Intel® AVF HAS <https://www.intel.com/content/dam/www/public/us/en/documents/product-specifications/ethernet-adaptive-virtual-function-hardware-spec.pdf>`_
+
 The PCIE host-interface of Intel Ethernet Switch FM10000 Series VF infrastructure
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/doc/guides/rel_notes/release_18_02.rst b/doc/guides/rel_notes/release_18_02.rst
index 24b67bb..0672b0e 100644
--- a/doc/guides/rel_notes/release_18_02.rst
+++ b/doc/guides/rel_notes/release_18_02.rst
@@ -41,6 +41,22 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+   * **Add AVF (Adaptive Virtual Function) net PMD.**
+
+     A new net PMD has been added, which supports Intel® Ethernet Adaptive
+     Virtual Function (AVF) with the features listed below:
+
+     * Basic Rx/Tx burst
+     * SSE vectorized Rx/Tx burst
+     * Promiscuous mode
+     * MAC/VLAN offload
+     * Checksum offload
+     * TSO offload
+     * Jumbo frame and MTU setting
+     * RSS configuration
+     * Statistics
+     * Rx/Tx descriptor status
+     * Link status update/event
 
 API Changes
 -----------
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index d9f7cea..13f6329 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -67,9 +67,14 @@ static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 static int avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
+static int avf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev,
+					uint16_t queue_id);
+static int avf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev,
+					 uint16_t queue_id);
 
 int avf_logtype_init;
 int avf_logtype_driver;
+
 static const struct rte_pci_id pci_id_avf_map[] = {
 	{ RTE_PCI_DEVICE(AVF_INTEL_VENDOR_ID, AVF_DEV_ID_ADAPTIVE_VF) },
 	{ .vendor_id = 0, /* sentinel */ },
@@ -111,6 +116,8 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.rx_descriptor_status       = avf_dev_rx_desc_status,
 	.tx_descriptor_status       = avf_dev_tx_desc_status,
 	.mtu_set                    = avf_dev_mtu_set,
+	.rx_queue_intr_enable       = avf_dev_rx_queue_intr_enable,
+	.rx_queue_intr_disable      = avf_dev_rx_queue_intr_disable,
 };
 
 static int
@@ -275,6 +282,99 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static int avf_config_rx_queues_irqs(struct rte_eth_dev *dev,
+				     struct rte_intr_handle *intr_handle)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	uint16_t interval, i;
+	int vec;
+
+	if (dev->data->dev_conf.intr_conf.rxq != 0) {
+		if (rte_intr_efd_enable(intr_handle, dev->data->nb_rx_queues))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+			rte_zmalloc("intr_vec",
+				    dev->data->nb_rx_queues * sizeof(int), 0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
+				    dev->data->nb_rx_queues);
+			return -1;
+		}
+	}
+
+	if (!dev->data->dev_conf.intr_conf.rxq) {
+		/* Rx interrupt disabled, Map interrupt only for writeback */
+		vf->nb_msix = 1;
+		if (vf->vf_res->vf_cap_flags &
+		    VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+			/* If WB_ON_ITR is supported, enable it */
+			vf->msix_base = AVF_RX_VEC_START;
+			AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
+				      AVFINT_DYN_CTLN1_ITR_INDX_MASK |
+				      AVFINT_DYN_CTLN1_WB_ON_ITR_MASK);
+		} else {
+			/* If no WB_ON_ITR offload flags, need to set
+			 * interrupt for descriptor write back.
+			 */
+			vf->msix_base = AVF_MISC_VEC_ID;
+
+			/* set ITR to max */
+			interval = avf_calc_itr_interval(
+					AVF_QUEUE_ITR_INTERVAL_MAX);
+			AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+				      AVFINT_DYN_CTL01_INTENA_MASK |
+				      (AVF_ITR_INDEX_DEFAULT <<
+				       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+				      (interval <<
+				       AVFINT_DYN_CTL01_INTERVAL_SHIFT));
+		}
+		AVF_WRITE_FLUSH(hw);
+		/* map all queues to the same interrupt */
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			vf->rxq_map[0] |= 1 << i;
+	} else {
+		if (!rte_intr_allow_others(intr_handle)) {
+			vf->nb_msix = 1;
+			vf->msix_base = AVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				vf->rxq_map[0] |= 1 << i;
+				intr_handle->intr_vec[i] = AVF_MISC_VEC_ID;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "vector 0 is mapped to all Rx queues");
+		} else {
+			/* If Rx interrupt is required, and multiple
+			 * interrupts can be used, the vectors start from 1.
+			 */
+			vf->nb_msix = RTE_MIN(vf->vf_res->max_vectors,
+					      intr_handle->nb_efd);
+			vf->msix_base = AVF_RX_VEC_START;
+			vec = AVF_RX_VEC_START;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				vf->rxq_map[vec] |= 1 << i;
+				intr_handle->intr_vec[i] = vec++;
+				if (vec >= vf->nb_msix)
+					vec = AVF_RX_VEC_START;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "%u vectors are mapped to %u Rx queues",
+				    vf->nb_msix, dev->data->nb_rx_queues);
+		}
+	}
+
+	if (avf_config_irq_map(adapter)) {
+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+		return -1;
+	}
+	return 0;
+}
+
 static int
 avf_start_queues(struct rte_eth_dev *dev)
 {
@@ -314,8 +414,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = dev->intr_handle;
-	uint16_t interval;
-	int i;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -325,8 +423,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
 				      dev->data->nb_tx_queues);
 
-	/* TODO: Rx interrupt */
-
 	if (avf_init_queues(dev) != 0) {
 		PMD_DRV_LOG(ERR, "failed to do Queue init");
 		return -1;
@@ -344,36 +440,15 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 		goto err_queue;
 	}
 
-	/* Map interrupt for writeback */
-	vf->nb_msix = 1;
-	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
-		/* If WB_ON_ITR supports, enable it */
-		vf->msix_base = AVF_RX_VEC_START;
-		AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
-			      AVFINT_DYN_CTLN1_ITR_INDX_MASK |
-			      AVFINT_DYN_CTLN1_WB_ON_ITR_MASK);
-	} else {
-		/* If no WB_ON_ITR offload flags, need to set interrupt for
-		 * descriptor write back.
-		 */
-		vf->msix_base = AVF_MISC_VEC_ID;
-
-		/* set ITR to max */
-		interval = avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
-		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
-			      AVFINT_DYN_CTL01_INTENA_MASK |
-			      (AVF_ITR_INDEX_DEFAULT <<
-			       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
-			      (interval << AVFINT_DYN_CTL01_INTERVAL_SHIFT));
-	}
-	AVF_WRITE_FLUSH(hw);
-	/* map all queues to the same interrupt */
-	for (i = 0; i < dev->data->nb_rx_queues; i++)
-		vf->rxq_map[0] |= 1 << i;
-	if (avf_config_irq_map(adapter)) {
-		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+	if (avf_config_rx_queues_irqs(dev, intr_handle) != 0) {
+		PMD_DRV_LOG(ERR, "configure irq failed");
 		goto err_queue;
 	}
+	/* Re-enable the interrupt because the efd assignment may have changed */
+	if (dev->data->dev_conf.intr_conf.rxq != 0) {
+		rte_intr_disable(intr_handle);
+		rte_intr_enable(intr_handle);
+	}
 
 	/* Set all mac addrs */
 	avf_add_del_all_mac_addr(adapter, TRUE);
@@ -383,7 +458,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 		goto err_mac;
 	}
 
-	/* TODO: enable interrupt for RX interrupt */
 	return 0;
 
 err_mac:
@@ -399,6 +473,8 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_adapter *adapter =
 		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
 	int ret, i;
 
 	PMD_INIT_FUNC_TRACE();
@@ -408,9 +484,13 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 
 	avf_stop_queues(dev);
 
-	/*TODO: Disable the interrupt for Rx*/
-
-	/* TODO: Rx interrupt vector mapping free */
+	/* Disable the interrupt for Rx */
+	rte_intr_efd_disable(intr_handle);
+	/* Rx interrupt vector mapping free */
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
 
 	/* remove all mac addrs */
 	avf_add_del_all_mac_addr(adapter, FALSE);
@@ -913,6 +993,58 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 }
 
 static int
+avf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	uint16_t msix_intr;
+
+	msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+	if (msix_intr == AVF_MISC_VEC_ID) {
+		PMD_DRV_LOG(INFO, "MISC is also enabled for control");
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+			      AVFINT_DYN_CTL01_INTENA_MASK |
+			      AVFINT_DYN_CTL01_ITR_INDX_MASK);
+	} else {
+		AVF_WRITE_REG(hw,
+			      AVFINT_DYN_CTLN1(msix_intr - AVF_RX_VEC_START),
+			      AVFINT_DYN_CTLN1_INTENA_MASK |
+			      AVFINT_DYN_CTLN1_ITR_INDX_MASK);
+	}
+
+	AVF_WRITE_FLUSH(hw);
+
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	return 0;
+}
+
+static int
+avf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t msix_intr;
+
+	msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+	if (msix_intr == AVF_MISC_VEC_ID) {
+		PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it");
+		return -EIO;
+	}
+
+	AVF_WRITE_REG(hw,
+		      AVFINT_DYN_CTLN1(msix_intr - AVF_RX_VEC_START),
+		      0);
+
+	AVF_WRITE_FLUSH(hw);
+	return 0;
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
-- 
1.9.3
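
The rx_queue_intr_enable/disable callbacks added here plug into the generic
ethdev Rx interrupt API. A minimal application-side sketch, assuming the
standard rte_eth_dev_rx_intr_* and rte_epoll_* calls (the helper name and the
omitted error handling are illustrative, not part of this patch):

#include <rte_ethdev.h>
#include <rte_interrupts.h>

/* Sleep until traffic arrives on the given Rx queue, then resume polling. */
static void
wait_for_rx(uint16_t port_id, uint16_t queue_id)
{
	struct rte_epoll_event event;

	/* Register the queue's interrupt vector with the per-thread epoll fd */
	rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
				  RTE_INTR_EVENT_ADD, NULL);

	/* Arm the interrupt; ends up in avf_dev_rx_queue_intr_enable() */
	rte_eth_dev_rx_intr_enable(port_id, queue_id);

	/* Block until the mapped AVFINT_DYN_CTL interrupt fires */
	rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, -1);

	/* Disarm before returning to the rte_eth_rx_burst() polling loop */
	rte_eth_dev_rx_intr_disable(port_id, queue_id);
}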

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v6 05/14] net/avf: enable link status update
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 05/14] net/avf: enable link status update Wenzhuo Lu
@ 2018-01-10  9:44           ` Xing, Beilei
  0 siblings, 0 replies; 151+ messages in thread
From: Xing, Beilei @ 2018-01-10  9:44 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Wu, Jingjing



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wenzhuo Lu
> Sent: Wednesday, January 10, 2018 2:16 PM
> To: dev@dpdk.org
> Cc: Wu, Jingjing <jingjing.wu@intel.com>
> Subject: [dpdk-dev] [PATCH v6 05/14] net/avf: enable link status update
> 
> From: Jingjing Wu <jingjing.wu@intel.com>
> 
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>

Acked-by: Beilei Xing <beilei.xing@intel.com>

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v5 12/14] net/avf: enable sse vector Rx Tx func
  2018-01-10  1:38           ` Lu, Wenzhuo
@ 2018-01-10  9:57             ` Ferruh Yigit
  0 siblings, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2018-01-10  9:57 UTC (permalink / raw)
  To: Lu, Wenzhuo, dev; +Cc: Wu, Jingjing, Jerin Jacob

On 1/10/2018 1:38 AM, Lu, Wenzhuo wrote:
> Hi Ferruh,
> 
>> -----Original Message-----
>> From: Yigit, Ferruh
>> Sent: Wednesday, January 10, 2018 1:58 AM
>> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
>> Cc: Wu, Jingjing <jingjing.wu@intel.com>
>> Subject: Re: [dpdk-dev] [PATCH v5 12/14] net/avf: enable sse vector Rx Tx
>> func
>>
>> On 1/8/2018 5:13 AM, Wenzhuo Lu wrote:
>>> From: Jingjing Wu <jingjing.wu@intel.com>
>>>
>>> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
>>
>> <...>
>>
>>> @@ -31,5 +31,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) +=
>> avf_common.c
>>>  SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
>>>  SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
>>>  SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_rxtx.c
>>> +SRCS-$(CONFIG_RTE_LIBRTE_AVF_INC_VECTOR) += avf_rxtx_vec_sse.c
>>
>> You may need to wrap this with an arch check to not break other
>> architecture builds.
> I don't think we need to wrap it because, as I recall, SSE is supported by default.
> For example, before ARM64 support was added by this patch,
> 'b20971b6cca0d01c41ff06e161581754810bfeb7 net/ixgbe: implement vector driver for ARM'
> the ixgbe Makefile looked just like this.

By default CONFIG_RTE_LIBRTE_AVF_PMD=y and CONFIG_RTE_LIBRTE_AVF_INC_VECTOR=y,
so when you compile DPDK on an ARM box, won't it cause a build error?

I think either this PMD or its vector mode should be disabled for other
architecture configs, or the file should be wrapped with an architecture check.

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v7 00/14] add new AVF PMD
  2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] add new AVF PMD Wenzhuo Lu
                           ` (13 preceding siblings ...)
  2018-01-10  6:16         ` [dpdk-dev] [PATCH v6 14/14] net/avf: enable Rx interrupt support Wenzhuo Lu
@ 2018-01-10 13:01         ` Wenzhuo Lu
  2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 01/14] net/avf/base: add base code for avf PMD Wenzhuo Lu
                             ` (14 more replies)
  14 siblings, 15 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10 13:01 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Adaptive Virtual Function (AVF) Driver is a VF driver which supports all future Intel devices without requiring a VM update.
It provides basic high speed connectivity. Since it is an adaptive VF driver, every new release of the VF driver can add more
advanced features that can be turned on in the VM if the underlying HW device supports those advanced features, most importantly
in a device agnostic way without ever compromising the base functionality. All AVF interfaces need to follow the AVF spec, and an
AVF compliant interface is supported starting from the Intel® Ethernet Controller 710 Series.

This patch set adds the AVF PMD, supporting the following (a minimal usage sketch follows the list):
 - Device initialization
 - Queue setup and Device start
 - Basic Rx and Tx.
 - MAC address offload feature
 - Vlan offload feature
 - RSS offload feature
 - Vectored Rx and Tx func
 - Bulk allocate Rx func
 - Rx interrupt support
 - Statistics query
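
As an illustration of the device initialization and queue setup steps listed
above, a minimal application-side sketch using only the generic ethdev API
could look as follows (the single-queue configuration and descriptor counts
are illustrative assumptions, not taken from this patch set):

#include <string.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

static int
port_init(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf port_conf;
	int ret;

	memset(&port_conf, 0, sizeof(port_conf));

	/* Device initialization: one Rx queue, one Tx queue, default config */
	ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	if (ret < 0)
		return ret;

	/* Queue setup */
	ret = rte_eth_rx_queue_setup(port_id, 0, 512,
				     rte_eth_dev_socket_id(port_id),
				     NULL, mb_pool);
	if (ret < 0)
		return ret;
	ret = rte_eth_tx_queue_setup(port_id, 0, 512,
				     rte_eth_dev_socket_id(port_id), NULL);
	if (ret < 0)
		return ret;

	/* Device start */
	return rte_eth_dev_start(port_id);
}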

v7:
 - fix compile error on ARM machine.

v6:
 - handle ICC compile warning on 32bit machine.

v5:
 - some slight change for the comments.
 - merge the doc update patch.

v4:
 - update the base code to the newest.

v3:
 - change the license announcement.
 - update the related document.
 - resolve the checkpatch error, warning and some check.
 - handle the comments from the community.

v2:
 - rebase to 17.11
 - add vectored Rx and Tx func
 - add bulk allocate Rx func
 - add Rx interrupt support
 - add statistics query
 - fix coding style issue
 - remove extra compile flags in Makefile
 - add doc to list avf PMD features
 - fix lut setting when rss is disabled
 - fix log init missing
 - remove rx_descriptor_done

Jingjing Wu (12):
  net/avf/base: add base code for avf PMD
  net/avf: initialization of avf PMD
  net/avf: enable queue and device
  net/avf: enable link status update
  net/avf: support stats
  net/avf: enable MAC VLAN and promisc ops
  net/avf: enable ops for RSS setting
  net/avf: enable ops for MTU setting
  net/avf: enable ops to check queue info and status
  net/i40e: support AVF basic interface
  net/avf: enable sse vector Rx Tx func
  net/avf: enable Rx interrupt support

Wenzhuo Lu (2):
  net/avf: enable basic Rx Tx func
  net/avf: enable bulk allocate Rx func

 MAINTAINERS                             |    6 +
 config/common_base                      |   10 +
 doc/guides/nics/features/avf.ini        |   37 +
 doc/guides/nics/features/avf_vec.ini    |   37 +
 doc/guides/nics/intel_vf.rst            |   20 +-
 doc/guides/rel_notes/release_18_02.rst  |   16 +
 drivers/net/Makefile                    |    1 +
 drivers/net/avf/Makefile                |   54 +
 drivers/net/avf/avf.h                   |  219 +++
 drivers/net/avf/avf_ethdev.c            | 1451 ++++++++++++++++
 drivers/net/avf/avf_log.h               |   44 +
 drivers/net/avf/avf_rxtx.c              | 1959 +++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h              |  260 +++
 drivers/net/avf/avf_rxtx_vec_common.h   |  210 +++
 drivers/net/avf/avf_rxtx_vec_sse.c      |  656 +++++++
 drivers/net/avf/avf_vchnl.c             |  812 +++++++++
 drivers/net/avf/base/README             |   19 +
 drivers/net/avf/base/avf_adminq.c       | 1010 +++++++++++
 drivers/net/avf/base/avf_adminq.h       |  166 ++
 drivers/net/avf/base/avf_adminq_cmd.h   | 2842 +++++++++++++++++++++++++++++++
 drivers/net/avf/base/avf_alloc.h        |   65 +
 drivers/net/avf/base/avf_common.c       | 1845 ++++++++++++++++++++
 drivers/net/avf/base/avf_devids.h       |   43 +
 drivers/net/avf/base/avf_hmc.h          |  245 +++
 drivers/net/avf/base/avf_lan_hmc.h      |  200 +++
 drivers/net/avf/base/avf_osdep.h        |  164 ++
 drivers/net/avf/base/avf_prototype.h    |  206 +++
 drivers/net/avf/base/avf_register.h     |  346 ++++
 drivers/net/avf/base/avf_status.h       |  108 ++
 drivers/net/avf/base/avf_type.h         | 2024 ++++++++++++++++++++++
 drivers/net/avf/base/virtchnl.h         |  787 +++++++++
 drivers/net/avf/rte_pmd_avf_version.map |    4 +
 drivers/net/i40e/i40e_ethdev.c          |   69 +-
 drivers/net/i40e/i40e_ethdev.h          |    5 +
 drivers/net/i40e/i40e_pf.c              |  140 +-
 drivers/net/i40e/i40e_pf.h              |    6 +
 mk/rte.app.mk                           |    1 +
 37 files changed, 16060 insertions(+), 27 deletions(-)
 create mode 100644 doc/guides/nics/features/avf.ini
 create mode 100644 doc/guides/nics/features/avf_vec.ini
 create mode 100644 drivers/net/avf/Makefile
 create mode 100644 drivers/net/avf/avf.h
 create mode 100644 drivers/net/avf/avf_ethdev.c
 create mode 100644 drivers/net/avf/avf_log.h
 create mode 100644 drivers/net/avf/avf_rxtx.c
 create mode 100644 drivers/net/avf/avf_rxtx.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_common.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_sse.c
 create mode 100644 drivers/net/avf/avf_vchnl.c
 create mode 100644 drivers/net/avf/base/README
 create mode 100644 drivers/net/avf/base/avf_adminq.c
 create mode 100644 drivers/net/avf/base/avf_adminq.h
 create mode 100644 drivers/net/avf/base/avf_adminq_cmd.h
 create mode 100644 drivers/net/avf/base/avf_alloc.h
 create mode 100644 drivers/net/avf/base/avf_common.c
 create mode 100644 drivers/net/avf/base/avf_devids.h
 create mode 100644 drivers/net/avf/base/avf_hmc.h
 create mode 100644 drivers/net/avf/base/avf_lan_hmc.h
 create mode 100644 drivers/net/avf/base/avf_osdep.h
 create mode 100644 drivers/net/avf/base/avf_prototype.h
 create mode 100644 drivers/net/avf/base/avf_register.h
 create mode 100644 drivers/net/avf/base/avf_status.h
 create mode 100644 drivers/net/avf/base/avf_type.h
 create mode 100644 drivers/net/avf/base/virtchnl.h
 create mode 100644 drivers/net/avf/rte_pmd_avf_version.map

-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v7 01/14] net/avf/base: add base code for avf PMD
  2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] add new AVF PMD Wenzhuo Lu
@ 2018-01-10 13:01           ` Wenzhuo Lu
  2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 02/14] net/avf: initialization of " Wenzhuo Lu
                             ` (13 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10 13:01 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu, Wenzhuo Lu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 MAINTAINERS                           |    5 +
 drivers/net/avf/avf_log.h             |   23 +
 drivers/net/avf/base/README           |   19 +
 drivers/net/avf/base/avf_adminq.c     | 1010 ++++++++++++
 drivers/net/avf/base/avf_adminq.h     |  166 ++
 drivers/net/avf/base/avf_adminq_cmd.h | 2842 +++++++++++++++++++++++++++++++++
 drivers/net/avf/base/avf_alloc.h      |   65 +
 drivers/net/avf/base/avf_common.c     | 1845 +++++++++++++++++++++
 drivers/net/avf/base/avf_devids.h     |   43 +
 drivers/net/avf/base/avf_hmc.h        |  245 +++
 drivers/net/avf/base/avf_lan_hmc.h    |  200 +++
 drivers/net/avf/base/avf_osdep.h      |  164 ++
 drivers/net/avf/base/avf_prototype.h  |  206 +++
 drivers/net/avf/base/avf_register.h   |  346 ++++
 drivers/net/avf/base/avf_status.h     |  108 ++
 drivers/net/avf/base/avf_type.h       | 2024 +++++++++++++++++++++++
 drivers/net/avf/base/virtchnl.h       |  787 +++++++++
 17 files changed, 10098 insertions(+)
 create mode 100644 drivers/net/avf/avf_log.h
 create mode 100644 drivers/net/avf/base/README
 create mode 100644 drivers/net/avf/base/avf_adminq.c
 create mode 100644 drivers/net/avf/base/avf_adminq.h
 create mode 100644 drivers/net/avf/base/avf_adminq_cmd.h
 create mode 100644 drivers/net/avf/base/avf_alloc.h
 create mode 100644 drivers/net/avf/base/avf_common.c
 create mode 100644 drivers/net/avf/base/avf_devids.h
 create mode 100644 drivers/net/avf/base/avf_hmc.h
 create mode 100644 drivers/net/avf/base/avf_lan_hmc.h
 create mode 100644 drivers/net/avf/base/avf_osdep.h
 create mode 100644 drivers/net/avf/base/avf_prototype.h
 create mode 100644 drivers/net/avf/base/avf_register.h
 create mode 100644 drivers/net/avf/base/avf_status.h
 create mode 100644 drivers/net/avf/base/avf_type.h
 create mode 100644 drivers/net/avf/base/virtchnl.h

diff --git a/MAINTAINERS b/MAINTAINERS
index e0199b1..17f15b6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -426,6 +426,11 @@ M: Xiao Wang <xiao.w.wang@intel.com>
 F: drivers/net/fm10k/
 F: doc/guides/nics/features/fm10k*.ini
 
+Intel avf
+M: Jingjing Wu <jingjing.wu@intel.com>
+M: Wenzhuo Lu <wenzhuo.lu@intel.com>
+F: drivers/net/avf/
+
 Mellanox mlx4
 M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
 F: drivers/net/mlx4/
diff --git a/drivers/net/avf/avf_log.h b/drivers/net/avf/avf_log.h
new file mode 100644
index 0000000..e3f106b
--- /dev/null
+++ b/drivers/net/avf/avf_log.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_LOG_H_
+#define _AVF_LOG_H_
+
+extern int avf_logtype_init;
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, avf_logtype_init, "%s(): " fmt "\n", \
+		__func__, ## args)
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+extern int avf_logtype_driver;
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, avf_logtype_driver, "%s(): " fmt, \
+		__func__, ## args)
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
+
+#endif /* _AVF_LOG_H_ */
diff --git a/drivers/net/avf/base/README b/drivers/net/avf/base/README
new file mode 100644
index 0000000..4710ae2
--- /dev/null
+++ b/drivers/net/avf/base/README
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+Intel® AVF driver
+=================
+
+This directory contains the source code of the FreeBSD AVF driver, version
+cid-avf.2018.01.02.tar.gz, released by the team which develops
+basic drivers for any AVF NIC. The base/ directory contains the
+original source package.
+
+Updating the driver
+===================
+
+NOTE: The source code in this directory should not be modified apart from
+the following file(s):
+
+    avf_osdep.h
diff --git a/drivers/net/avf/base/avf_adminq.c b/drivers/net/avf/base/avf_adminq.c
new file mode 100644
index 0000000..616e2a9
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq.c
@@ -0,0 +1,1010 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "avf_status.h"
+#include "avf_type.h"
+#include "avf_register.h"
+#include "avf_adminq.h"
+#include "avf_prototype.h"
+
+/**
+ *  avf_adminq_init_regs - Initialize AdminQ registers
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the alloc_asq and alloc_arq functions have already been called
+ **/
+STATIC void avf_adminq_init_regs(struct avf_hw *hw)
+{
+	/* set head and tail registers in our local struct */
+	if (avf_is_vf(hw)) {
+		hw->aq.asq.tail = AVF_ATQT1;
+		hw->aq.asq.head = AVF_ATQH1;
+		hw->aq.asq.len  = AVF_ATQLEN1;
+		hw->aq.asq.bal  = AVF_ATQBAL1;
+		hw->aq.asq.bah  = AVF_ATQBAH1;
+		hw->aq.arq.tail = AVF_ARQT1;
+		hw->aq.arq.head = AVF_ARQH1;
+		hw->aq.arq.len  = AVF_ARQLEN1;
+		hw->aq.arq.bal  = AVF_ARQBAL1;
+		hw->aq.arq.bah  = AVF_ARQBAH1;
+	}
+}
+
+/**
+ *  avf_alloc_adminq_asq_ring - Allocate Admin Queue send rings
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_alloc_adminq_asq_ring(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	ret_code = avf_allocate_dma_mem(hw, &hw->aq.asq.desc_buf,
+					 avf_mem_atq_ring,
+					 (hw->aq.num_asq_entries *
+					 sizeof(struct avf_aq_desc)),
+					 AVF_ADMINQ_DESC_ALIGNMENT);
+	if (ret_code)
+		return ret_code;
+
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.asq.cmd_buf,
+					  (hw->aq.num_asq_entries *
+					  sizeof(struct avf_asq_cmd_details)));
+	if (ret_code) {
+		avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+		return ret_code;
+	}
+
+	return ret_code;
+}
+
+/**
+ *  avf_alloc_adminq_arq_ring - Allocate Admin Queue receive rings
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_alloc_adminq_arq_ring(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	ret_code = avf_allocate_dma_mem(hw, &hw->aq.arq.desc_buf,
+					 avf_mem_arq_ring,
+					 (hw->aq.num_arq_entries *
+					 sizeof(struct avf_aq_desc)),
+					 AVF_ADMINQ_DESC_ALIGNMENT);
+
+	return ret_code;
+}
+
+/**
+ *  avf_free_adminq_asq - Free Admin Queue send rings
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the posted send buffers have already been cleaned
+ *  and de-allocated
+ **/
+void avf_free_adminq_asq(struct avf_hw *hw)
+{
+	avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+}
+
+/**
+ *  avf_free_adminq_arq - Free Admin Queue receive rings
+ *  @hw: pointer to the hardware structure
+ *
+ *  This assumes the posted receive buffers have already been cleaned
+ *  and de-allocated
+ **/
+void avf_free_adminq_arq(struct avf_hw *hw)
+{
+	avf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+}
+
+/**
+ *  avf_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC enum avf_status_code avf_alloc_arq_bufs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+	struct avf_aq_desc *desc;
+	struct avf_dma_mem *bi;
+	int i;
+
+	/* We'll be allocating the buffer info memory first, then we can
+	 * allocate the mapped buffers for the event processing
+	 */
+
+	/* buffer_info structures do not need alignment */
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.arq.dma_head,
+		(hw->aq.num_arq_entries * sizeof(struct avf_dma_mem)));
+	if (ret_code)
+		goto alloc_arq_bufs;
+	hw->aq.arq.r.arq_bi = (struct avf_dma_mem *)hw->aq.arq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_arq_entries; i++) {
+		bi = &hw->aq.arq.r.arq_bi[i];
+		ret_code = avf_allocate_dma_mem(hw, bi,
+						 avf_mem_arq_buf,
+						 hw->aq.arq_buf_size,
+						 AVF_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_arq_bufs;
+
+		/* now configure the descriptors for use */
+		desc = AVF_ADMINQ_DESC(hw->aq.arq, i);
+
+		desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_BUF);
+		if (hw->aq.arq_buf_size > AVF_AQ_LARGE_BUF)
+			desc->flags |= CPU_TO_LE16(AVF_AQ_FLAG_LB);
+		desc->opcode = 0;
+		/* This is in accordance with Admin queue design, there is no
+		 * register for buffer size configuration
+		 */
+		desc->datalen = CPU_TO_LE16((u16)bi->size);
+		desc->retval = 0;
+		desc->cookie_high = 0;
+		desc->cookie_low = 0;
+		desc->params.external.addr_high =
+			CPU_TO_LE32(AVF_HI_DWORD(bi->pa));
+		desc->params.external.addr_low =
+			CPU_TO_LE32(AVF_LO_DWORD(bi->pa));
+		desc->params.external.param0 = 0;
+		desc->params.external.param1 = 0;
+	}
+
+alloc_arq_bufs:
+	return ret_code;
+
+unwind_alloc_arq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		avf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+	avf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ *  avf_alloc_asq_bufs - Allocate empty buffer structs for the send queue
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC enum avf_status_code avf_alloc_asq_bufs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+	struct avf_dma_mem *bi;
+	int i;
+
+	/* No mapped memory needed yet, just the buffer info structures */
+	ret_code = avf_allocate_virt_mem(hw, &hw->aq.asq.dma_head,
+		(hw->aq.num_asq_entries * sizeof(struct avf_dma_mem)));
+	if (ret_code)
+		goto alloc_asq_bufs;
+	hw->aq.asq.r.asq_bi = (struct avf_dma_mem *)hw->aq.asq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_asq_entries; i++) {
+		bi = &hw->aq.asq.r.asq_bi[i];
+		ret_code = avf_allocate_dma_mem(hw, bi,
+						 avf_mem_asq_buf,
+						 hw->aq.asq_buf_size,
+						 AVF_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_asq_bufs;
+	}
+alloc_asq_bufs:
+	return ret_code;
+
+unwind_alloc_asq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		avf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+	avf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ *  avf_free_arq_bufs - Free receive queue buffer info elements
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC void avf_free_arq_bufs(struct avf_hw *hw)
+{
+	int i;
+
+	/* free descriptors */
+	for (i = 0; i < hw->aq.num_arq_entries; i++)
+		avf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+
+	/* free the descriptor memory */
+	avf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+
+	/* free the dma header */
+	avf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+}
+
+/**
+ *  avf_free_asq_bufs - Free send queue buffer info elements
+ *  @hw: pointer to the hardware structure
+ **/
+STATIC void avf_free_asq_bufs(struct avf_hw *hw)
+{
+	int i;
+
+	/* only unmap if the address is non-NULL */
+	for (i = 0; i < hw->aq.num_asq_entries; i++)
+		if (hw->aq.asq.r.asq_bi[i].pa)
+			avf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+
+	/* free the buffer info list */
+	avf_free_virt_mem(hw, &hw->aq.asq.cmd_buf);
+
+	/* free the descriptor memory */
+	avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+
+	/* free the dma header */
+	avf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+}
+
+/**
+ *  avf_config_asq_regs - configure ASQ registers
+ *  @hw: pointer to the hardware structure
+ *
+ *  Configure base address and length registers for the transmit queue
+ **/
+STATIC enum avf_status_code avf_config_asq_regs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+
+	/* set starting point */
+#ifdef INTEGRATED_VF
+	if (avf_is_vf(hw))
+		wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
+					  AVF_ATQLEN1_ATQENABLE_MASK));
+#else
+	wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
+				  AVF_ATQLEN1_ATQENABLE_MASK));
+#endif /* INTEGRATED_VF */
+	wr32(hw, hw->aq.asq.bal, AVF_LO_DWORD(hw->aq.asq.desc_buf.pa));
+	wr32(hw, hw->aq.asq.bah, AVF_HI_DWORD(hw->aq.asq.desc_buf.pa));
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.asq.bal);
+	if (reg != AVF_LO_DWORD(hw->aq.asq.desc_buf.pa))
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ *  avf_config_arq_regs - ARQ register configuration
+ *  @hw: pointer to the hardware structure
+ *
+ * Configure base address and length registers for the receive (event queue)
+ **/
+STATIC enum avf_status_code avf_config_arq_regs(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+
+	/* set starting point */
+#ifdef INTEGRATED_VF
+	if (avf_is_vf(hw))
+		wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
+					  AVF_ARQLEN1_ARQENABLE_MASK));
+#else
+	wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
+				  AVF_ARQLEN1_ARQENABLE_MASK));
+#endif /* INTEGRATED_VF */
+	wr32(hw, hw->aq.arq.bal, AVF_LO_DWORD(hw->aq.arq.desc_buf.pa));
+	wr32(hw, hw->aq.arq.bah, AVF_HI_DWORD(hw->aq.arq.desc_buf.pa));
+
+	/* Update tail in the HW to post pre-allocated buffers */
+	wr32(hw, hw->aq.arq.tail, hw->aq.num_arq_entries - 1);
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.arq.bal);
+	if (reg != AVF_LO_DWORD(hw->aq.arq.desc_buf.pa))
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ *  avf_init_asq - main initialization routine for ASQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  This is the main initialization routine for the Admin Send Queue
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.arq_buf_size
+ *
+ *  Do *NOT* hold the lock when calling this as the memory allocation routines
+ *  called are not going to be atomic context safe
+ **/
+enum avf_status_code avf_init_asq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (hw->aq.asq.count > 0) {
+		/* queue already initialized */
+		ret_code = AVF_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.asq.next_to_use = 0;
+	hw->aq.asq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = avf_alloc_adminq_asq_ring(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = avf_alloc_asq_bufs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = avf_config_asq_regs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* success! */
+	hw->aq.asq.count = hw->aq.num_asq_entries;
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	avf_free_adminq_asq(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_init_arq - initialize ARQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main initialization routine for the Admin Receive (Event) Queue.
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.arq_buf_size
+ *
+ *  Do *NOT* hold the lock when calling this as the memory allocation routines
+ *  called are not going to be atomic context safe
+ **/
+enum avf_status_code avf_init_arq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (hw->aq.arq.count > 0) {
+		/* queue already initialized */
+		ret_code = AVF_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.arq.next_to_use = 0;
+	hw->aq.arq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = avf_alloc_adminq_arq_ring(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = avf_alloc_arq_bufs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = avf_config_arq_regs(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* success! */
+	hw->aq.arq.count = hw->aq.num_arq_entries;
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	avf_free_adminq_arq(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_asq - shutdown the ASQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main shutdown routine for the Admin Send Queue
+ **/
+enum avf_status_code avf_shutdown_asq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	avf_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	if (hw->aq.asq.count == 0) {
+		ret_code = AVF_ERR_NOT_READY;
+		goto shutdown_asq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+	wr32(hw, hw->aq.asq.len, 0);
+	wr32(hw, hw->aq.asq.bal, 0);
+	wr32(hw, hw->aq.asq.bah, 0);
+
+	hw->aq.asq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	avf_free_asq_bufs(hw);
+
+shutdown_asq_out:
+	avf_release_spinlock(&hw->aq.asq_spinlock);
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_arq - shutdown ARQ
+ *  @hw: pointer to the hardware structure
+ *
+ *  The main shutdown routine for the Admin Receive Queue
+ **/
+enum avf_status_code avf_shutdown_arq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	avf_acquire_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->aq.arq.count == 0) {
+		ret_code = AVF_ERR_NOT_READY;
+		goto shutdown_arq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+	wr32(hw, hw->aq.arq.len, 0);
+	wr32(hw, hw->aq.arq.bal, 0);
+	wr32(hw, hw->aq.arq.bah, 0);
+
+	hw->aq.arq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	avf_free_arq_bufs(hw);
+
+shutdown_arq_out:
+	avf_release_spinlock(&hw->aq.arq_spinlock);
+	return ret_code;
+}
+
+/**
+ *  avf_init_adminq - main initialization routine for Admin Queue
+ *  @hw: pointer to the hardware structure
+ *
+ *  Prior to calling this function, drivers *MUST* set the following fields
+ *  in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.num_arq_entries
+ *     - hw->aq.arq_buf_size
+ *     - hw->aq.asq_buf_size
+ **/
+enum avf_status_code avf_init_adminq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code;
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = AVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+	avf_init_spinlock(&hw->aq.asq_spinlock);
+	avf_init_spinlock(&hw->aq.arq_spinlock);
+
+	/* Set up register offsets */
+	avf_adminq_init_regs(hw);
+
+	/* setup ASQ command write back timeout */
+	hw->aq.asq_cmd_timeout = AVF_ASQ_CMD_TIMEOUT;
+
+	/* allocate the ASQ */
+	ret_code = avf_init_asq(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_destroy_spinlocks;
+
+	/* allocate the ARQ */
+	ret_code = avf_init_arq(hw);
+	if (ret_code != AVF_SUCCESS)
+		goto init_adminq_free_asq;
+
+	ret_code = AVF_SUCCESS;
+
+	/* success! */
+	goto init_adminq_exit;
+
+init_adminq_free_asq:
+	avf_shutdown_asq(hw);
+init_adminq_destroy_spinlocks:
+	avf_destroy_spinlock(&hw->aq.asq_spinlock);
+	avf_destroy_spinlock(&hw->aq.arq_spinlock);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ *  avf_shutdown_adminq - shutdown routine for the Admin Queue
+ *  @hw: pointer to the hardware structure
+ **/
+enum avf_status_code avf_shutdown_adminq(struct avf_hw *hw)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+
+	if (avf_check_asq_alive(hw))
+		avf_aq_queue_shutdown(hw, true);
+
+	avf_shutdown_asq(hw);
+	avf_shutdown_arq(hw);
+	avf_destroy_spinlock(&hw->aq.asq_spinlock);
+	avf_destroy_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->nvm_buff.va)
+		avf_free_virt_mem(hw, &hw->nvm_buff);
+
+	return ret_code;
+}
+
+/**
+ *  avf_clean_asq - cleans Admin send queue
+ *  @hw: pointer to the hardware structure
+ *
+ *  returns the number of free desc
+ **/
+u16 avf_clean_asq(struct avf_hw *hw)
+{
+	struct avf_adminq_ring *asq = &(hw->aq.asq);
+	struct avf_asq_cmd_details *details;
+	u16 ntc = asq->next_to_clean;
+	struct avf_aq_desc desc_cb;
+	struct avf_aq_desc *desc;
+
+	desc = AVF_ADMINQ_DESC(*asq, ntc);
+	details = AVF_ADMINQ_DETAILS(*asq, ntc);
+	while (rd32(hw, hw->aq.asq.head) != ntc) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "ntc %d head %d.\n", ntc, rd32(hw, hw->aq.asq.head));
+
+		if (details->callback) {
+			AVF_ADMINQ_CALLBACK cb_func =
+					(AVF_ADMINQ_CALLBACK)details->callback;
+			avf_memcpy(&desc_cb, desc, sizeof(struct avf_aq_desc),
+				    AVF_DMA_TO_DMA);
+			cb_func(hw, &desc_cb);
+		}
+		avf_memset(desc, 0, sizeof(*desc), AVF_DMA_MEM);
+		avf_memset(details, 0, sizeof(*details), AVF_NONDMA_MEM);
+		ntc++;
+		if (ntc == asq->count)
+			ntc = 0;
+		desc = AVF_ADMINQ_DESC(*asq, ntc);
+		details = AVF_ADMINQ_DETAILS(*asq, ntc);
+	}
+
+	asq->next_to_clean = ntc;
+
+	return AVF_DESC_UNUSED(asq);
+}
+
+/**
+ *  avf_asq_done - check if FW has processed the Admin Send Queue
+ *  @hw: pointer to the hw struct
+ *
+ *  Returns true if the firmware has processed all descriptors on the
+ *  admin send queue. Returns false if there are still requests pending.
+ **/
+bool avf_asq_done(struct avf_hw *hw)
+{
+	/* AQ designers suggest use of head for better
+	 * timing reliability than DD bit
+	 */
+	return rd32(hw, hw->aq.asq.head) == hw->aq.asq.next_to_use;
+
+}
+
+/**
+ *  avf_asq_send_command - send command to Admin Queue
+ *  @hw: pointer to the hw struct
+ *  @desc: prefilled descriptor describing the command (non DMA mem)
+ *  @buff: buffer to use for indirect commands
+ *  @buff_size: size of buffer for indirect commands
+ *  @cmd_details: pointer to command details structure
+ *
+ *  This is the main send command driver routine for the Admin Queue send
+ *  queue.  It runs the queue, cleans the queue, etc
+ **/
+enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
+				struct avf_aq_desc *desc,
+				void *buff, /* can be NULL */
+				u16  buff_size,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_dma_mem *dma_buff = NULL;
+	struct avf_asq_cmd_details *details;
+	struct avf_aq_desc *desc_on_ring;
+	bool cmd_completed = false;
+	u16  retval = 0;
+	u32  val = 0;
+
+	avf_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	hw->aq.asq_last_status = AVF_AQ_RC_OK;
+
+	if (hw->aq.asq.count == 0) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Admin queue not initialized.\n");
+		status = AVF_ERR_QUEUE_EMPTY;
+		goto asq_send_command_error;
+	}
+
+	val = rd32(hw, hw->aq.asq.head);
+	if (val >= hw->aq.num_asq_entries) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: head overrun at %d\n", val);
+		status = AVF_ERR_QUEUE_EMPTY;
+		goto asq_send_command_error;
+	}
+
+	details = AVF_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use);
+	if (cmd_details) {
+		avf_memcpy(details,
+			    cmd_details,
+			    sizeof(struct avf_asq_cmd_details),
+			    AVF_NONDMA_TO_NONDMA);
+
+		/* If the cmd_details are defined copy the cookie.  The
+		 * CPU_TO_LE32 is not needed here because the data is ignored
+		 * by the FW, only used by the driver
+		 */
+		if (details->cookie) {
+			desc->cookie_high =
+				CPU_TO_LE32(AVF_HI_DWORD(details->cookie));
+			desc->cookie_low =
+				CPU_TO_LE32(AVF_LO_DWORD(details->cookie));
+		}
+	} else {
+		avf_memset(details, 0,
+			    sizeof(struct avf_asq_cmd_details),
+			    AVF_NONDMA_MEM);
+	}
+
+	/* clear requested flags and then set additional flags if defined */
+	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
+	desc->flags |= CPU_TO_LE16(details->flags_ena);
+
+	if (buff_size > hw->aq.asq_buf_size) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Invalid buffer size: %d.\n",
+			   buff_size);
+		status = AVF_ERR_INVALID_SIZE;
+		goto asq_send_command_error;
+	}
+
+	if (details->postpone && !details->async) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Async flag not set along with postpone flag");
+		status = AVF_ERR_PARAM;
+		goto asq_send_command_error;
+	}
+
+	/* call clean and check queue available function to reclaim the
+	 * descriptors that were processed by FW, the function returns the
+	 * number of desc available
+	 */
+	/* the clean function called here could be called in a separate thread
+	 * in case of asynchronous completions
+	 */
+	if (avf_clean_asq(hw) == 0) {
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Error queue is full.\n");
+		status = AVF_ERR_ADMIN_QUEUE_FULL;
+		goto asq_send_command_error;
+	}
+
+	/* initialize the temp desc pointer with the right desc */
+	desc_on_ring = AVF_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use);
+
+	/* if the desc is available copy the temp desc to the right place */
+	avf_memcpy(desc_on_ring, desc, sizeof(struct avf_aq_desc),
+		    AVF_NONDMA_TO_DMA);
+
+	/* if buff is not NULL assume indirect command */
+	if (buff != NULL) {
+		dma_buff = &(hw->aq.asq.r.asq_bi[hw->aq.asq.next_to_use]);
+		/* copy the user buff into the respective DMA buff */
+		avf_memcpy(dma_buff->va, buff, buff_size,
+			    AVF_NONDMA_TO_DMA);
+		desc_on_ring->datalen = CPU_TO_LE16(buff_size);
+
+		/* Update the address values in the desc with the pa value
+		 * for respective buffer
+		 */
+		desc_on_ring->params.external.addr_high =
+				CPU_TO_LE32(AVF_HI_DWORD(dma_buff->pa));
+		desc_on_ring->params.external.addr_low =
+				CPU_TO_LE32(AVF_LO_DWORD(dma_buff->pa));
+	}
+
+	/* bump the tail */
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc_on_ring,
+		      buff, buff_size);
+	(hw->aq.asq.next_to_use)++;
+	if (hw->aq.asq.next_to_use == hw->aq.asq.count)
+		hw->aq.asq.next_to_use = 0;
+	if (!details->postpone)
+		wr32(hw, hw->aq.asq.tail, hw->aq.asq.next_to_use);
+
+	/* if cmd_details are not defined or async flag is not set,
+	 * we need to wait for desc write back
+	 */
+	if (!details->async && !details->postpone) {
+		u32 total_delay = 0;
+
+		do {
+			/* AQ designers suggest use of head for better
+			 * timing reliability than DD bit
+			 */
+			if (avf_asq_done(hw))
+				break;
+			avf_usec_delay(50);
+			total_delay += 50;
+		} while (total_delay < hw->aq.asq_cmd_timeout);
+	}
+
+	/* if ready, copy the desc back to temp */
+	if (avf_asq_done(hw)) {
+		avf_memcpy(desc, desc_on_ring, sizeof(struct avf_aq_desc),
+			    AVF_DMA_TO_NONDMA);
+		if (buff != NULL)
+			avf_memcpy(buff, dma_buff->va, buff_size,
+				    AVF_DMA_TO_NONDMA);
+		retval = LE16_TO_CPU(desc->retval);
+		if (retval != 0) {
+			avf_debug(hw,
+				   AVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: Command completed with error 0x%X.\n",
+				   retval);
+
+			/* strip off FW internal code */
+			retval &= 0xff;
+		}
+		cmd_completed = true;
+		if ((enum avf_admin_queue_err)retval == AVF_AQ_RC_OK)
+			status = AVF_SUCCESS;
+		else
+			status = AVF_ERR_ADMIN_QUEUE_ERROR;
+		hw->aq.asq_last_status = (enum avf_admin_queue_err)retval;
+	}
+
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+		   "AQTX: desc and buffer writeback:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc, buff, buff_size);
+
+	/* save writeback aq if requested */
+	if (details->wb_desc)
+		avf_memcpy(details->wb_desc, desc_on_ring,
+			    sizeof(struct avf_aq_desc), AVF_DMA_TO_NONDMA);
+
+	/* update the error if time out occurred */
+	if ((!cmd_completed) &&
+	    (!details->async && !details->postpone)) {
+		if (rd32(hw, hw->aq.asq.len) & AVF_ATQLEN1_ATQCRIT_MASK) {
+			avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: AQ Critical error.\n");
+			status = AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR;
+		} else {
+			avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: Writeback timeout.\n");
+			status = AVF_ERR_ADMIN_QUEUE_TIMEOUT;
+		}
+	}
+
+asq_send_command_error:
+	avf_release_spinlock(&hw->aq.asq_spinlock);
+	return status;
+}
+
+/**
+ *  avf_fill_default_direct_cmd_desc - AQ descriptor helper function
+ *  @desc:     pointer to the temp descriptor (non DMA mem)
+ *  @opcode:   the opcode can be used to decide which flags to turn off or on
+ *
+ *  Fill the desc with default values
+ **/
+void avf_fill_default_direct_cmd_desc(struct avf_aq_desc *desc,
+				       u16 opcode)
+{
+	/* zero out the desc */
+	avf_memset((void *)desc, 0, sizeof(struct avf_aq_desc),
+		    AVF_NONDMA_MEM);
+	desc->opcode = CPU_TO_LE16(opcode);
+	desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_SI);
+}
+
+/**
+ *  avf_clean_arq_element
+ *  @hw: pointer to the hw struct
+ *  @e: event info from the receive descriptor, includes any buffers
+ *  @pending: number of events that could be left to process
+ *
+ *  This function cleans one Admin Receive Queue element and returns
+ *  the contents through e.  It can also return how many events are
+ *  left to process through 'pending'
+ **/
+enum avf_status_code avf_clean_arq_element(struct avf_hw *hw,
+					     struct avf_arq_event_info *e,
+					     u16 *pending)
+{
+	enum avf_status_code ret_code = AVF_SUCCESS;
+	u16 ntc = hw->aq.arq.next_to_clean;
+	struct avf_aq_desc *desc;
+	struct avf_dma_mem *bi;
+	u16 desc_idx;
+	u16 datalen;
+	u16 flags;
+	u16 ntu;
+
+	/* pre-clean the event info */
+	avf_memset(&e->desc, 0, sizeof(e->desc), AVF_NONDMA_MEM);
+
+	/* take the lock before we start messing with the ring */
+	avf_acquire_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->aq.arq.count == 0) {
+		avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+			   "AQRX: Admin queue not initialized.\n");
+		ret_code = AVF_ERR_QUEUE_EMPTY;
+		goto clean_arq_element_err;
+	}
+
+	/* set next_to_use to head */
+#ifdef INTEGRATED_VF
+	if (!avf_is_vf(hw))
+		ntu = rd32(hw, hw->aq.arq.head) & AVF_PF_ARQH_ARQH_MASK;
+	else
+		ntu = rd32(hw, hw->aq.arq.head) & AVF_ARQH1_ARQH_MASK;
+#else
+	ntu = rd32(hw, hw->aq.arq.head) & AVF_ARQH1_ARQH_MASK;
+#endif /* INTEGRATED_VF */
+	if (ntu == ntc) {
+		/* nothing to do - shouldn't need to update ring's values */
+		ret_code = AVF_ERR_ADMIN_QUEUE_NO_WORK;
+		goto clean_arq_element_out;
+	}
+
+	/* now clean the next descriptor */
+	desc = AVF_ADMINQ_DESC(hw->aq.arq, ntc);
+	desc_idx = ntc;
+
+	hw->aq.arq_last_status =
+		(enum avf_admin_queue_err)LE16_TO_CPU(desc->retval);
+	flags = LE16_TO_CPU(desc->flags);
+	if (flags & AVF_AQ_FLAG_ERR) {
+		ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+		avf_debug(hw,
+			   AVF_DEBUG_AQ_MESSAGE,
+			   "AQRX: Event received with error 0x%X.\n",
+			   hw->aq.arq_last_status);
+	}
+
+	avf_memcpy(&e->desc, desc, sizeof(struct avf_aq_desc),
+		    AVF_DMA_TO_NONDMA);
+	datalen = LE16_TO_CPU(desc->datalen);
+	e->msg_len = min(datalen, e->buf_len);
+	if (e->msg_buf != NULL && (e->msg_len != 0))
+		avf_memcpy(e->msg_buf,
+			    hw->aq.arq.r.arq_bi[desc_idx].va,
+			    e->msg_len, AVF_DMA_TO_NONDMA);
+
+	avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, "AQRX: desc and buffer:\n");
+	avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf,
+		      hw->aq.arq_buf_size);
+
+	/* Restore the original datalen and buffer address in the desc,
+	 * FW updates datalen to indicate the event message
+	 * size
+	 */
+	bi = &hw->aq.arq.r.arq_bi[ntc];
+	avf_memset((void *)desc, 0, sizeof(struct avf_aq_desc), AVF_DMA_MEM);
+
+	desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_BUF);
+	if (hw->aq.arq_buf_size > AVF_AQ_LARGE_BUF)
+		desc->flags |= CPU_TO_LE16(AVF_AQ_FLAG_LB);
+	desc->datalen = CPU_TO_LE16((u16)bi->size);
+	desc->params.external.addr_high = CPU_TO_LE32(AVF_HI_DWORD(bi->pa));
+	desc->params.external.addr_low = CPU_TO_LE32(AVF_LO_DWORD(bi->pa));
+
+	/* set tail = the last cleaned desc index. */
+	wr32(hw, hw->aq.arq.tail, ntc);
+	/* ntc is updated to tail + 1 */
+	ntc++;
+	if (ntc == hw->aq.num_arq_entries)
+		ntc = 0;
+	hw->aq.arq.next_to_clean = ntc;
+	hw->aq.arq.next_to_use = ntu;
+
+clean_arq_element_out:
+	/* Set pending if needed, unlock and return */
+	if (pending != NULL)
+		*pending = (ntc > ntu ? hw->aq.arq.count : 0) + (ntu - ntc);
+clean_arq_element_err:
+	avf_release_spinlock(&hw->aq.arq_spinlock);
+
+	return ret_code;
+}
+
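
A sketch of how a caller is expected to drain the receive queue with this function, assuming PF-to-VF mailbox events arrive with the avf_aqc_opc_send_msg_to_vf opcode and the virtchnl opcode in cookie_high; the helper and buffer names are illustrative:

static void example_drain_arq(struct avf_hw *hw, u8 *buf, u16 buf_len)
{
	struct avf_arq_event_info event;
	u16 pending = 1;

	event.buf_len = buf_len;
	event.msg_buf = buf;

	while (pending) {
		if (avf_clean_arq_element(hw, &event, &pending) !=
		    AVF_SUCCESS)
			break;	/* e.g. AVF_ERR_ADMIN_QUEUE_NO_WORK */

		if (LE16_TO_CPU(event.desc.opcode) ==
		    avf_aqc_opc_send_msg_to_vf) {
			/* event.msg_buf now holds up to event.msg_len
			 * bytes of the PF message; desc.cookie_high
			 * typically carries the virtchnl opcode
			 */
		}
	}
}
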
diff --git a/drivers/net/avf/base/avf_adminq.h b/drivers/net/avf/base/avf_adminq.h
new file mode 100644
index 0000000..d7d242a
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq.h
@@ -0,0 +1,166 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ADMINQ_H_
+#define _AVF_ADMINQ_H_
+
+#include "avf_osdep.h"
+#include "avf_status.h"
+#include "avf_adminq_cmd.h"
+
+#define AVF_ADMINQ_DESC(R, i)   \
+	(&(((struct avf_aq_desc *)((R).desc_buf.va))[i]))
+
+#define AVF_ADMINQ_DESC_ALIGNMENT 4096
+
+struct avf_adminq_ring {
+	struct avf_virt_mem dma_head;	/* space for dma structures */
+	struct avf_dma_mem desc_buf;	/* descriptor ring memory */
+	struct avf_virt_mem cmd_buf;	/* command buffer memory */
+
+	union {
+		struct avf_dma_mem *asq_bi;
+		struct avf_dma_mem *arq_bi;
+	} r;
+
+	u16 count;		/* Number of descriptors */
+	u16 rx_buf_len;		/* Admin Receive Queue buffer length */
+
+	/* used for interrupt processing */
+	u16 next_to_use;
+	u16 next_to_clean;
+
+	/* used for queue tracking */
+	u32 head;
+	u32 tail;
+	u32 len;
+	u32 bah;
+	u32 bal;
+};
+
+/* ASQ transaction details */
+struct avf_asq_cmd_details {
+	void *callback; /* cast from type AVF_ADMINQ_CALLBACK */
+	u64 cookie;
+	u16 flags_ena;
+	u16 flags_dis;
+	bool async;
+	bool postpone;
+	struct avf_aq_desc *wb_desc;
+};
+
+#define AVF_ADMINQ_DETAILS(R, i)   \
+	(&(((struct avf_asq_cmd_details *)((R).cmd_buf.va))[i]))
+
+/* ARQ event information */
+struct avf_arq_event_info {
+	struct avf_aq_desc desc;
+	u16 msg_len;
+	u16 buf_len;
+	u8 *msg_buf;
+};
+
+/* Admin Queue information */
+struct avf_adminq_info {
+	struct avf_adminq_ring arq;    /* receive queue */
+	struct avf_adminq_ring asq;    /* send queue */
+	u32 asq_cmd_timeout;            /* send queue cmd write back timeout*/
+	u16 num_arq_entries;            /* receive queue depth */
+	u16 num_asq_entries;            /* send queue depth */
+	u16 arq_buf_size;               /* receive queue buffer size */
+	u16 asq_buf_size;               /* send queue buffer size */
+	u16 fw_maj_ver;                 /* firmware major version */
+	u16 fw_min_ver;                 /* firmware minor version */
+	u32 fw_build;                   /* firmware build number */
+	u16 api_maj_ver;                /* api major version */
+	u16 api_min_ver;                /* api minor version */
+
+	struct avf_spinlock asq_spinlock; /* Send queue spinlock */
+	struct avf_spinlock arq_spinlock; /* Receive queue spinlock */
+
+	/* last status values on send and receive queues */
+	enum avf_admin_queue_err asq_last_status;
+	enum avf_admin_queue_err arq_last_status;
+};
+
+/**
+ * avf_aq_rc_to_posix - convert errors to user-land codes
+ * aq_ret: AdminQ handler error code can override aq_rc
+ * aq_rc: AdminQ firmware error code to convert
+ **/
+STATIC INLINE int avf_aq_rc_to_posix(int aq_ret, int aq_rc)
+{
+	int aq_to_posix[] = {
+		0,           /* AVF_AQ_RC_OK */
+		-EPERM,      /* AVF_AQ_RC_EPERM */
+		-ENOENT,     /* AVF_AQ_RC_ENOENT */
+		-ESRCH,      /* AVF_AQ_RC_ESRCH */
+		-EINTR,      /* AVF_AQ_RC_EINTR */
+		-EIO,        /* AVF_AQ_RC_EIO */
+		-ENXIO,      /* AVF_AQ_RC_ENXIO */
+		-E2BIG,      /* AVF_AQ_RC_E2BIG */
+		-EAGAIN,     /* AVF_AQ_RC_EAGAIN */
+		-ENOMEM,     /* AVF_AQ_RC_ENOMEM */
+		-EACCES,     /* AVF_AQ_RC_EACCES */
+		-EFAULT,     /* AVF_AQ_RC_EFAULT */
+		-EBUSY,      /* AVF_AQ_RC_EBUSY */
+		-EEXIST,     /* AVF_AQ_RC_EEXIST */
+		-EINVAL,     /* AVF_AQ_RC_EINVAL */
+		-ENOTTY,     /* AVF_AQ_RC_ENOTTY */
+		-ENOSPC,     /* AVF_AQ_RC_ENOSPC */
+		-ENOSYS,     /* AVF_AQ_RC_ENOSYS */
+		-ERANGE,     /* AVF_AQ_RC_ERANGE */
+		-EPIPE,      /* AVF_AQ_RC_EFLUSHED */
+		-ESPIPE,     /* AVF_AQ_RC_BAD_ADDR */
+		-EROFS,      /* AVF_AQ_RC_EMODE */
+		-EFBIG,      /* AVF_AQ_RC_EFBIG */
+	};
+
+	/* aq_rc is invalid if AQ timed out */
+	if (aq_ret == AVF_ERR_ADMIN_QUEUE_TIMEOUT)
+		return -EAGAIN;
+
+	if (!((u32)aq_rc < (sizeof(aq_to_posix) / sizeof((aq_to_posix)[0]))))
+		return -ERANGE;
+
+	return aq_to_posix[aq_rc];
+}
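
A short sketch of the intended use: after a failed send, the admin-queue status pair is mapped to a negative errno for callers that report POSIX-style codes (helper name illustrative):

static int example_aq_status_to_errno(struct avf_hw *hw,
				      enum avf_status_code status)
{
	if (status == AVF_SUCCESS)
		return 0;
	/* on timeout aq_ret wins; otherwise the firmware return code
	 * recorded in asq_last_status is translated
	 */
	return avf_aq_rc_to_posix(status, hw->aq.asq_last_status);
}
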
+
+/* general information */
+#define AVF_AQ_LARGE_BUF	512
+#define AVF_ASQ_CMD_TIMEOUT	250000  /* usecs */
+
+void avf_fill_default_direct_cmd_desc(struct avf_aq_desc *desc,
+				       u16 opcode);
+
+#endif /* _AVF_ADMINQ_H_ */
diff --git a/drivers/net/avf/base/avf_adminq_cmd.h b/drivers/net/avf/base/avf_adminq_cmd.h
new file mode 100644
index 0000000..1709f31
--- /dev/null
+++ b/drivers/net/avf/base/avf_adminq_cmd.h
@@ -0,0 +1,2842 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ADMINQ_CMD_H_
+#define _AVF_ADMINQ_CMD_H_
+
+/* This header file defines the avf Admin Queue commands and is shared between
+ * avf Firmware and Software.
+ *
+ * This file needs to comply with the Linux Kernel coding style.
+ */
+
+
+#define AVF_FW_API_VERSION_MAJOR	0x0001
+#define AVF_FW_API_VERSION_MINOR_X722	0x0005
+#define AVF_FW_API_VERSION_MINOR_X710	0x0007
+
+#define AVF_FW_MINOR_VERSION(_h) ((_h)->mac.type == AVF_MAC_XL710 ? \
+					AVF_FW_API_VERSION_MINOR_X710 : \
+					AVF_FW_API_VERSION_MINOR_X722)
+
+/* API version 1.7 implements additional link and PHY-specific APIs  */
+#define AVF_MINOR_VER_GET_LINK_INFO_XL710 0x0007
+
+struct avf_aq_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 retval;
+	__le32 cookie_high;
+	__le32 cookie_low;
+	union {
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 param2;
+			__le32 param3;
+		} internal;
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 addr_high;
+			__le32 addr_low;
+		} external;
+		u8 raw[16];
+	} params;
+};
+
+/* Flags sub-structure
+ * |0  |1  |2  |3  |4  |5  |6  |7  |8  |9  |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR|VFE| * *  RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
+ */
+
+/* command flags and offsets*/
+#define AVF_AQ_FLAG_DD_SHIFT	0
+#define AVF_AQ_FLAG_CMP_SHIFT	1
+#define AVF_AQ_FLAG_ERR_SHIFT	2
+#define AVF_AQ_FLAG_VFE_SHIFT	3
+#define AVF_AQ_FLAG_LB_SHIFT	9
+#define AVF_AQ_FLAG_RD_SHIFT	10
+#define AVF_AQ_FLAG_VFC_SHIFT	11
+#define AVF_AQ_FLAG_BUF_SHIFT	12
+#define AVF_AQ_FLAG_SI_SHIFT	13
+#define AVF_AQ_FLAG_EI_SHIFT	14
+#define AVF_AQ_FLAG_FE_SHIFT	15
+
+#define AVF_AQ_FLAG_DD		(1 << AVF_AQ_FLAG_DD_SHIFT)  /* 0x1    */
+#define AVF_AQ_FLAG_CMP	(1 << AVF_AQ_FLAG_CMP_SHIFT) /* 0x2    */
+#define AVF_AQ_FLAG_ERR	(1 << AVF_AQ_FLAG_ERR_SHIFT) /* 0x4    */
+#define AVF_AQ_FLAG_VFE	(1 << AVF_AQ_FLAG_VFE_SHIFT) /* 0x8    */
+#define AVF_AQ_FLAG_LB		(1 << AVF_AQ_FLAG_LB_SHIFT)  /* 0x200  */
+#define AVF_AQ_FLAG_RD		(1 << AVF_AQ_FLAG_RD_SHIFT)  /* 0x400  */
+#define AVF_AQ_FLAG_VFC	(1 << AVF_AQ_FLAG_VFC_SHIFT) /* 0x800  */
+#define AVF_AQ_FLAG_BUF	(1 << AVF_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
+#define AVF_AQ_FLAG_SI		(1 << AVF_AQ_FLAG_SI_SHIFT)  /* 0x2000 */
+#define AVF_AQ_FLAG_EI		(1 << AVF_AQ_FLAG_EI_SHIFT)  /* 0x4000 */
+#define AVF_AQ_FLAG_FE		(1 << AVF_AQ_FLAG_FE_SHIFT)  /* 0x8000 */
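
As a sketch of how these bits combine in practice (an assumption based on the send and receive paths earlier in this patch): BUF marks an attached indirect buffer, RD is added when the firmware reads that buffer as command data, and LB when the buffer exceeds AVF_AQ_LARGE_BUF; datalen and the buffer address are filled in by the send routine itself. The helper name is illustrative:

static void example_mark_indirect(struct avf_aq_desc *desc,
				  u16 buff_size, bool fw_reads_buffer)
{
	u16 flags = LE16_TO_CPU(desc->flags);

	flags |= AVF_AQ_FLAG_BUF;		/* external buffer attached */
	if (fw_reads_buffer)
		flags |= AVF_AQ_FLAG_RD;	/* buffer carries command data */
	if (buff_size > AVF_AQ_LARGE_BUF)	/* defined in avf_adminq.h */
		flags |= AVF_AQ_FLAG_LB;

	desc->flags = CPU_TO_LE16(flags);
}
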
+
+/* error codes */
+enum avf_admin_queue_err {
+	AVF_AQ_RC_OK		= 0,  /* success */
+	AVF_AQ_RC_EPERM	= 1,  /* Operation not permitted */
+	AVF_AQ_RC_ENOENT	= 2,  /* No such element */
+	AVF_AQ_RC_ESRCH	= 3,  /* Bad opcode */
+	AVF_AQ_RC_EINTR	= 4,  /* operation interrupted */
+	AVF_AQ_RC_EIO		= 5,  /* I/O error */
+	AVF_AQ_RC_ENXIO	= 6,  /* No such resource */
+	AVF_AQ_RC_E2BIG	= 7,  /* Arg too long */
+	AVF_AQ_RC_EAGAIN	= 8,  /* Try again */
+	AVF_AQ_RC_ENOMEM	= 9,  /* Out of memory */
+	AVF_AQ_RC_EACCES	= 10, /* Permission denied */
+	AVF_AQ_RC_EFAULT	= 11, /* Bad address */
+	AVF_AQ_RC_EBUSY	= 12, /* Device or resource busy */
+	AVF_AQ_RC_EEXIST	= 13, /* object already exists */
+	AVF_AQ_RC_EINVAL	= 14, /* Invalid argument */
+	AVF_AQ_RC_ENOTTY	= 15, /* Not a typewriter */
+	AVF_AQ_RC_ENOSPC	= 16, /* No space left or alloc failure */
+	AVF_AQ_RC_ENOSYS	= 17, /* Function not implemented */
+	AVF_AQ_RC_ERANGE	= 18, /* Parameter out of range */
+	AVF_AQ_RC_EFLUSHED	= 19, /* Cmd flushed due to prev cmd error */
+	AVF_AQ_RC_BAD_ADDR	= 20, /* Descriptor contains a bad pointer */
+	AVF_AQ_RC_EMODE	= 21, /* Op not allowed in current dev mode */
+	AVF_AQ_RC_EFBIG	= 22, /* File too large */
+};
+
+/* Admin Queue command opcodes */
+enum avf_admin_queue_opc {
+	/* aq commands */
+	avf_aqc_opc_get_version	= 0x0001,
+	avf_aqc_opc_driver_version	= 0x0002,
+	avf_aqc_opc_queue_shutdown	= 0x0003,
+	avf_aqc_opc_set_pf_context	= 0x0004,
+
+	/* resource ownership */
+	avf_aqc_opc_request_resource	= 0x0008,
+	avf_aqc_opc_release_resource	= 0x0009,
+
+	avf_aqc_opc_list_func_capabilities	= 0x000A,
+	avf_aqc_opc_list_dev_capabilities	= 0x000B,
+
+	/* Proxy commands */
+	avf_aqc_opc_set_proxy_config		= 0x0104,
+	avf_aqc_opc_set_ns_proxy_table_entry	= 0x0105,
+
+	/* LAA */
+	avf_aqc_opc_mac_address_read	= 0x0107,
+	avf_aqc_opc_mac_address_write	= 0x0108,
+
+	/* PXE */
+	avf_aqc_opc_clear_pxe_mode	= 0x0110,
+
+	/* WoL commands */
+	avf_aqc_opc_set_wol_filter	= 0x0120,
+	avf_aqc_opc_get_wake_reason	= 0x0121,
+	avf_aqc_opc_clear_all_wol_filters = 0x025E,
+
+	/* internal switch commands */
+	avf_aqc_opc_get_switch_config		= 0x0200,
+	avf_aqc_opc_add_statistics		= 0x0201,
+	avf_aqc_opc_remove_statistics		= 0x0202,
+	avf_aqc_opc_set_port_parameters	= 0x0203,
+	avf_aqc_opc_get_switch_resource_alloc	= 0x0204,
+	avf_aqc_opc_set_switch_config		= 0x0205,
+	avf_aqc_opc_rx_ctl_reg_read		= 0x0206,
+	avf_aqc_opc_rx_ctl_reg_write		= 0x0207,
+
+	avf_aqc_opc_add_vsi			= 0x0210,
+	avf_aqc_opc_update_vsi_parameters	= 0x0211,
+	avf_aqc_opc_get_vsi_parameters		= 0x0212,
+
+	avf_aqc_opc_add_pv			= 0x0220,
+	avf_aqc_opc_update_pv_parameters	= 0x0221,
+	avf_aqc_opc_get_pv_parameters		= 0x0222,
+
+	avf_aqc_opc_add_veb			= 0x0230,
+	avf_aqc_opc_update_veb_parameters	= 0x0231,
+	avf_aqc_opc_get_veb_parameters		= 0x0232,
+
+	avf_aqc_opc_delete_element		= 0x0243,
+
+	avf_aqc_opc_add_macvlan		= 0x0250,
+	avf_aqc_opc_remove_macvlan		= 0x0251,
+	avf_aqc_opc_add_vlan			= 0x0252,
+	avf_aqc_opc_remove_vlan		= 0x0253,
+	avf_aqc_opc_set_vsi_promiscuous_modes	= 0x0254,
+	avf_aqc_opc_add_tag			= 0x0255,
+	avf_aqc_opc_remove_tag			= 0x0256,
+	avf_aqc_opc_add_multicast_etag		= 0x0257,
+	avf_aqc_opc_remove_multicast_etag	= 0x0258,
+	avf_aqc_opc_update_tag			= 0x0259,
+	avf_aqc_opc_add_control_packet_filter	= 0x025A,
+	avf_aqc_opc_remove_control_packet_filter	= 0x025B,
+	avf_aqc_opc_add_cloud_filters		= 0x025C,
+	avf_aqc_opc_remove_cloud_filters	= 0x025D,
+	avf_aqc_opc_clear_wol_switch_filters	= 0x025E,
+	avf_aqc_opc_replace_cloud_filters	= 0x025F,
+
+	avf_aqc_opc_add_mirror_rule	= 0x0260,
+	avf_aqc_opc_delete_mirror_rule	= 0x0261,
+
+	/* Dynamic Device Personalization */
+	avf_aqc_opc_write_personalization_profile	= 0x0270,
+	avf_aqc_opc_get_personalization_profile_list	= 0x0271,
+
+	/* DCB commands */
+	avf_aqc_opc_dcb_ignore_pfc	= 0x0301,
+	avf_aqc_opc_dcb_updated	= 0x0302,
+	avf_aqc_opc_set_dcb_parameters = 0x0303,
+
+	/* TX scheduler */
+	avf_aqc_opc_configure_vsi_bw_limit		= 0x0400,
+	avf_aqc_opc_configure_vsi_ets_sla_bw_limit	= 0x0406,
+	avf_aqc_opc_configure_vsi_tc_bw		= 0x0407,
+	avf_aqc_opc_query_vsi_bw_config		= 0x0408,
+	avf_aqc_opc_query_vsi_ets_sla_config		= 0x040A,
+	avf_aqc_opc_configure_switching_comp_bw_limit	= 0x0410,
+
+	avf_aqc_opc_enable_switching_comp_ets			= 0x0413,
+	avf_aqc_opc_modify_switching_comp_ets			= 0x0414,
+	avf_aqc_opc_disable_switching_comp_ets			= 0x0415,
+	avf_aqc_opc_configure_switching_comp_ets_bw_limit	= 0x0416,
+	avf_aqc_opc_configure_switching_comp_bw_config		= 0x0417,
+	avf_aqc_opc_query_switching_comp_ets_config		= 0x0418,
+	avf_aqc_opc_query_port_ets_config			= 0x0419,
+	avf_aqc_opc_query_switching_comp_bw_config		= 0x041A,
+	avf_aqc_opc_suspend_port_tx				= 0x041B,
+	avf_aqc_opc_resume_port_tx				= 0x041C,
+	avf_aqc_opc_configure_partition_bw			= 0x041D,
+	/* hmc */
+	avf_aqc_opc_query_hmc_resource_profile	= 0x0500,
+	avf_aqc_opc_set_hmc_resource_profile	= 0x0501,
+
+	/* phy commands */
+	avf_aqc_opc_get_phy_abilities		= 0x0600,
+	avf_aqc_opc_set_phy_config		= 0x0601,
+	avf_aqc_opc_set_mac_config		= 0x0603,
+	avf_aqc_opc_set_link_restart_an	= 0x0605,
+	avf_aqc_opc_get_link_status		= 0x0607,
+	avf_aqc_opc_set_phy_int_mask		= 0x0613,
+	avf_aqc_opc_get_local_advt_reg		= 0x0614,
+	avf_aqc_opc_set_local_advt_reg		= 0x0615,
+	avf_aqc_opc_get_partner_advt		= 0x0616,
+	avf_aqc_opc_set_lb_modes		= 0x0618,
+	avf_aqc_opc_get_phy_wol_caps		= 0x0621,
+	avf_aqc_opc_set_phy_debug		= 0x0622,
+	avf_aqc_opc_upload_ext_phy_fm		= 0x0625,
+	avf_aqc_opc_run_phy_activity		= 0x0626,
+	avf_aqc_opc_set_phy_register		= 0x0628,
+	avf_aqc_opc_get_phy_register		= 0x0629,
+
+	/* NVM commands */
+	avf_aqc_opc_nvm_read			= 0x0701,
+	avf_aqc_opc_nvm_erase			= 0x0702,
+	avf_aqc_opc_nvm_update			= 0x0703,
+	avf_aqc_opc_nvm_config_read		= 0x0704,
+	avf_aqc_opc_nvm_config_write		= 0x0705,
+	avf_aqc_opc_nvm_progress		= 0x0706,
+	avf_aqc_opc_oem_post_update		= 0x0720,
+	avf_aqc_opc_thermal_sensor		= 0x0721,
+
+	/* virtualization commands */
+	avf_aqc_opc_send_msg_to_pf		= 0x0801,
+	avf_aqc_opc_send_msg_to_vf		= 0x0802,
+	avf_aqc_opc_send_msg_to_peer		= 0x0803,
+
+	/* alternate structure */
+	avf_aqc_opc_alternate_write		= 0x0900,
+	avf_aqc_opc_alternate_write_indirect	= 0x0901,
+	avf_aqc_opc_alternate_read		= 0x0902,
+	avf_aqc_opc_alternate_read_indirect	= 0x0903,
+	avf_aqc_opc_alternate_write_done	= 0x0904,
+	avf_aqc_opc_alternate_set_mode		= 0x0905,
+	avf_aqc_opc_alternate_clear_port	= 0x0906,
+
+	/* LLDP commands */
+	avf_aqc_opc_lldp_get_mib	= 0x0A00,
+	avf_aqc_opc_lldp_update_mib	= 0x0A01,
+	avf_aqc_opc_lldp_add_tlv	= 0x0A02,
+	avf_aqc_opc_lldp_update_tlv	= 0x0A03,
+	avf_aqc_opc_lldp_delete_tlv	= 0x0A04,
+	avf_aqc_opc_lldp_stop		= 0x0A05,
+	avf_aqc_opc_lldp_start		= 0x0A06,
+	avf_aqc_opc_get_cee_dcb_cfg	= 0x0A07,
+	avf_aqc_opc_lldp_set_local_mib	= 0x0A08,
+	avf_aqc_opc_lldp_stop_start_spec_agent	= 0x0A09,
+
+	/* Tunnel commands */
+	avf_aqc_opc_add_udp_tunnel	= 0x0B00,
+	avf_aqc_opc_del_udp_tunnel	= 0x0B01,
+	avf_aqc_opc_set_rss_key	= 0x0B02,
+	avf_aqc_opc_set_rss_lut	= 0x0B03,
+	avf_aqc_opc_get_rss_key	= 0x0B04,
+	avf_aqc_opc_get_rss_lut	= 0x0B05,
+
+	/* Async Events */
+	avf_aqc_opc_event_lan_overflow		= 0x1001,
+
+	/* OEM commands */
+	avf_aqc_opc_oem_parameter_change	= 0xFE00,
+	avf_aqc_opc_oem_device_status_change	= 0xFE01,
+	avf_aqc_opc_oem_ocsd_initialize	= 0xFE02,
+	avf_aqc_opc_oem_ocbb_initialize	= 0xFE03,
+
+	/* debug commands */
+	avf_aqc_opc_debug_read_reg		= 0xFF03,
+	avf_aqc_opc_debug_write_reg		= 0xFF04,
+	avf_aqc_opc_debug_modify_reg		= 0xFF07,
+	avf_aqc_opc_debug_dump_internals	= 0xFF08,
+};
+
+/* command structures and indirect data structures */
+
+/* Structure naming conventions:
+ * - no suffix for direct command descriptor structures
+ * - _data for indirect sent data
+ * - _resp for indirect return data (data which is both will use _data)
+ * - _completion for direct return data
+ * - _element_ for repeated elements (may also be _data or _resp)
+ *
+ * Command structures are expected to overlay the params.raw member of the basic
+ * descriptor, and as such cannot exceed 16 bytes in length.
+ */
+
+/* This macro is used to generate a compilation error if a structure
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure is not of the correct size, otherwise it creates an enum that is
+ * never used.
+ */
+#define AVF_CHECK_STRUCT_LEN(n, X) enum avf_static_assert_enum_##X \
+	{ avf_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
+
+/* This macro is used extensively to ensure that command structures are 16
+ * bytes in length as they have to map to the raw array of that size.
+ */
+#define AVF_CHECK_CMD_LENGTH(X)	AVF_CHECK_STRUCT_LEN(16, X)
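
For a hypothetical 16-byte command the check expands as shown below; a size mismatch turns the divisor into 0 and the constant expression fails to compile:

struct example_direct_cmd {		/* hypothetical, 16 bytes */
	__le32	param[3];
	u8	reserved[4];
};

AVF_CHECK_CMD_LENGTH(example_direct_cmd);
/* expands to:
 * enum avf_static_assert_enum_example_direct_cmd {
 *	avf_static_assert_example_direct_cmd =
 *		(16) / ((sizeof(struct example_direct_cmd) == (16)) ? 1 : 0)
 * };
 */
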
+
+/* internal (0x00XX) commands */
+
+/* Get version (direct 0x0001) */
+struct avf_aqc_get_version {
+	__le32 rom_ver;
+	__le32 fw_build;
+	__le16 fw_major;
+	__le16 fw_minor;
+	__le16 api_major;
+	__le16 api_minor;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_version);
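
A sketch of the overlay convention in practice: the direct completion data of Get Version is read back through params.raw of the written-back descriptor (helper name illustrative; the command is assumed to have completed already):

static void example_read_fw_version(struct avf_hw *hw,
				    struct avf_aq_desc *desc)
{
	struct avf_aqc_get_version *resp =
		(struct avf_aqc_get_version *)&desc->params.raw;

	hw->aq.fw_maj_ver = LE16_TO_CPU(resp->fw_major);
	hw->aq.fw_min_ver = LE16_TO_CPU(resp->fw_minor);
	hw->aq.api_maj_ver = LE16_TO_CPU(resp->api_major);
	hw->aq.api_min_ver = LE16_TO_CPU(resp->api_minor);
}
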
+
+/* Send driver version (indirect 0x0002) */
+struct avf_aqc_driver_version {
+	u8	driver_major_ver;
+	u8	driver_minor_ver;
+	u8	driver_build_ver;
+	u8	driver_subbuild_ver;
+	u8	reserved[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_driver_version);
+
+/* Queue Shutdown (direct 0x0003) */
+struct avf_aqc_queue_shutdown {
+	__le32	driver_unloading;
+#define AVF_AQ_DRIVER_UNLOADING	0x1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_queue_shutdown);
+
+/* Set PF context (0x0004, direct) */
+struct avf_aqc_set_pf_context {
+	u8	pf_id;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_pf_context);
+
+/* Request resource ownership (direct 0x0008)
+ * Release resource ownership (direct 0x0009)
+ */
+#define AVF_AQ_RESOURCE_NVM			1
+#define AVF_AQ_RESOURCE_SDP			2
+#define AVF_AQ_RESOURCE_ACCESS_READ		1
+#define AVF_AQ_RESOURCE_ACCESS_WRITE		2
+#define AVF_AQ_RESOURCE_NVM_READ_TIMEOUT	3000
+#define AVF_AQ_RESOURCE_NVM_WRITE_TIMEOUT	180000
+
+struct avf_aqc_request_resource {
+	__le16	resource_id;
+	__le16	access_type;
+	__le32	timeout;
+	__le32	resource_number;
+	u8	reserved[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_request_resource);
+
+/* Get function capabilities (indirect 0x000A)
+ * Get device capabilities (indirect 0x000B)
+ */
+struct avf_aqc_list_capabilites {
+	u8 command_flags;
+#define AVF_AQ_LIST_CAP_PF_INDEX_EN	1
+	u8 pf_index;
+	u8 reserved[2];
+	__le32 count;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_list_capabilites);
+
+struct avf_aqc_list_capabilities_element_resp {
+	__le16	id;
+	u8	major_rev;
+	u8	minor_rev;
+	__le32	number;
+	__le32	logical_id;
+	__le32	phys_id;
+	u8	reserved[16];
+};
+
+/* list of caps */
+
+#define AVF_AQ_CAP_ID_SWITCH_MODE	0x0001
+#define AVF_AQ_CAP_ID_MNG_MODE		0x0002
+#define AVF_AQ_CAP_ID_NPAR_ACTIVE	0x0003
+#define AVF_AQ_CAP_ID_OS2BMC_CAP	0x0004
+#define AVF_AQ_CAP_ID_FUNCTIONS_VALID	0x0005
+#define AVF_AQ_CAP_ID_ALTERNATE_RAM	0x0006
+#define AVF_AQ_CAP_ID_WOL_AND_PROXY	0x0008
+#define AVF_AQ_CAP_ID_SRIOV		0x0012
+#define AVF_AQ_CAP_ID_VF		0x0013
+#define AVF_AQ_CAP_ID_VMDQ		0x0014
+#define AVF_AQ_CAP_ID_8021QBG		0x0015
+#define AVF_AQ_CAP_ID_8021QBR		0x0016
+#define AVF_AQ_CAP_ID_VSI		0x0017
+#define AVF_AQ_CAP_ID_DCB		0x0018
+#define AVF_AQ_CAP_ID_FCOE		0x0021
+#define AVF_AQ_CAP_ID_ISCSI		0x0022
+#define AVF_AQ_CAP_ID_RSS		0x0040
+#define AVF_AQ_CAP_ID_RXQ		0x0041
+#define AVF_AQ_CAP_ID_TXQ		0x0042
+#define AVF_AQ_CAP_ID_MSIX		0x0043
+#define AVF_AQ_CAP_ID_VF_MSIX		0x0044
+#define AVF_AQ_CAP_ID_FLOW_DIRECTOR	0x0045
+#define AVF_AQ_CAP_ID_1588		0x0046
+#define AVF_AQ_CAP_ID_IWARP		0x0051
+#define AVF_AQ_CAP_ID_LED		0x0061
+#define AVF_AQ_CAP_ID_SDP		0x0062
+#define AVF_AQ_CAP_ID_MDIO		0x0063
+#define AVF_AQ_CAP_ID_WSR_PROT		0x0064
+#define AVF_AQ_CAP_ID_NVM_MGMT		0x0080
+#define AVF_AQ_CAP_ID_FLEX10		0x00F1
+#define AVF_AQ_CAP_ID_CEM		0x00F2
+
+/* Set CPPM Configuration (direct 0x0103) */
+struct avf_aqc_cppm_configuration {
+	__le16	command_flags;
+#define AVF_AQ_CPPM_EN_LTRC	0x0800
+#define AVF_AQ_CPPM_EN_DMCTH	0x1000
+#define AVF_AQ_CPPM_EN_DMCTLX	0x2000
+#define AVF_AQ_CPPM_EN_HPTC	0x4000
+#define AVF_AQ_CPPM_EN_DMARC	0x8000
+	__le16	ttlx;
+	__le32	dmacr;
+	__le16	dmcth;
+	u8	hptc;
+	u8	reserved;
+	__le32	pfltrc;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_cppm_configuration);
+
+/* Set ARP Proxy command / response (indirect 0x0104) */
+struct avf_aqc_arp_proxy_data {
+	__le16	command_flags;
+#define AVF_AQ_ARP_INIT_IPV4	0x0800
+#define AVF_AQ_ARP_UNSUP_CTL	0x1000
+#define AVF_AQ_ARP_ENA		0x2000
+#define AVF_AQ_ARP_ADD_IPV4	0x4000
+#define AVF_AQ_ARP_DEL_IPV4	0x8000
+	__le16	table_id;
+	__le32	enabled_offloads;
+#define AVF_AQ_ARP_DIRECTED_OFFLOAD_ENABLE	0x00000020
+#define AVF_AQ_ARP_OFFLOAD_ENABLE		0x00000800
+	__le32	ip_addr;
+	u8	mac_addr[6];
+	u8	reserved[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x14, avf_aqc_arp_proxy_data);
+
+/* Set NS Proxy Table Entry Command (indirect 0x0105) */
+struct avf_aqc_ns_proxy_data {
+	__le16	table_idx_mac_addr_0;
+	__le16	table_idx_mac_addr_1;
+	__le16	table_idx_ipv6_0;
+	__le16	table_idx_ipv6_1;
+	__le16	control;
+#define AVF_AQ_NS_PROXY_ADD_0		0x0001
+#define AVF_AQ_NS_PROXY_DEL_0		0x0002
+#define AVF_AQ_NS_PROXY_ADD_1		0x0004
+#define AVF_AQ_NS_PROXY_DEL_1		0x0008
+#define AVF_AQ_NS_PROXY_ADD_IPV6_0	0x0010
+#define AVF_AQ_NS_PROXY_DEL_IPV6_0	0x0020
+#define AVF_AQ_NS_PROXY_ADD_IPV6_1	0x0040
+#define AVF_AQ_NS_PROXY_DEL_IPV6_1	0x0080
+#define AVF_AQ_NS_PROXY_COMMAND_SEQ	0x0100
+#define AVF_AQ_NS_PROXY_INIT_IPV6_TBL	0x0200
+#define AVF_AQ_NS_PROXY_INIT_MAC_TBL	0x0400
+#define AVF_AQ_NS_PROXY_OFFLOAD_ENABLE	0x0800
+#define AVF_AQ_NS_PROXY_DIRECTED_OFFLOAD_ENABLE	0x1000
+	u8	mac_addr_0[6];
+	u8	mac_addr_1[6];
+	u8	local_mac_addr[6];
+	u8	ipv6_addr_0[16]; /* Warning! spec specifies BE byte order */
+	u8	ipv6_addr_1[16];
+};
+
+AVF_CHECK_STRUCT_LEN(0x3c, avf_aqc_ns_proxy_data);
+
+/* Manage LAA Command (0x0106) - obsolete */
+struct avf_aqc_mng_laa {
+	__le16	command_flags;
+#define AVF_AQ_LAA_FLAG_WR	0x8000
+	u8	reserved[2];
+	__le32	sal;
+	__le16	sah;
+	u8	reserved2[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mng_laa);
+
+/* Manage MAC Address Read Command (indirect 0x0107) */
+struct avf_aqc_mac_address_read {
+	__le16	command_flags;
+#define AVF_AQC_LAN_ADDR_VALID		0x10
+#define AVF_AQC_SAN_ADDR_VALID		0x20
+#define AVF_AQC_PORT_ADDR_VALID	0x40
+#define AVF_AQC_WOL_ADDR_VALID		0x80
+#define AVF_AQC_MC_MAG_EN_VALID	0x100
+#define AVF_AQC_WOL_PRESERVE_STATUS	0x200
+#define AVF_AQC_ADDR_VALID_MASK	0x3F0
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mac_address_read);
+
+struct avf_aqc_mac_address_read_data {
+	u8 pf_lan_mac[6];
+	u8 pf_san_mac[6];
+	u8 port_mac[6];
+	u8 pf_wol_mac[6];
+};
+
+AVF_CHECK_STRUCT_LEN(24, avf_aqc_mac_address_read_data);
+
+/* Manage MAC Address Write Command (0x0108) */
+struct avf_aqc_mac_address_write {
+	__le16	command_flags;
+#define AVF_AQC_MC_MAG_EN		0x0100
+#define AVF_AQC_WOL_PRESERVE_ON_PFR	0x0200
+#define AVF_AQC_WRITE_TYPE_LAA_ONLY	0x0000
+#define AVF_AQC_WRITE_TYPE_LAA_WOL	0x4000
+#define AVF_AQC_WRITE_TYPE_PORT	0x8000
+#define AVF_AQC_WRITE_TYPE_UPDATE_MC_MAG	0xC000
+#define AVF_AQC_WRITE_TYPE_MASK	0xC000
+
+	__le16	mac_sah;
+	__le32	mac_sal;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_mac_address_write);
+
+/* PXE commands (0x011x) */
+
+/* Clear PXE Command and response  (direct 0x0110) */
+struct avf_aqc_clear_pxe {
+	u8	rx_cnt;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_clear_pxe);
+
+/* Set WoL Filter (0x0120) */
+
+struct avf_aqc_set_wol_filter {
+	__le16 filter_index;
+#define AVF_AQC_MAX_NUM_WOL_FILTERS	8
+#define AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT	15
+#define AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_MASK	(0x1 << \
+		AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT)
+
+#define AVF_AQC_SET_WOL_FILTER_INDEX_SHIFT		0
+#define AVF_AQC_SET_WOL_FILTER_INDEX_MASK	(0x7 << \
+		AVF_AQC_SET_WOL_FILTER_INDEX_SHIFT)
+	__le16 cmd_flags;
+#define AVF_AQC_SET_WOL_FILTER				0x8000
+#define AVF_AQC_SET_WOL_FILTER_NO_TCO_WOL		0x4000
+#define AVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR	0x2000
+#define AVF_AQC_SET_WOL_FILTER_ACTION_CLEAR		0
+#define AVF_AQC_SET_WOL_FILTER_ACTION_SET		1
+	__le16 valid_flags;
+#define AVF_AQC_SET_WOL_FILTER_ACTION_VALID		0x8000
+#define AVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID	0x4000
+	u8 reserved[2];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_wol_filter);
+
+struct avf_aqc_set_wol_filter_data {
+	u8 filter[128];
+	u8 mask[16];
+};
+
+AVF_CHECK_STRUCT_LEN(0x90, avf_aqc_set_wol_filter_data);
+
+/* Get Wake Reason (0x0121) */
+
+struct avf_aqc_get_wake_reason_completion {
+	u8 reserved_1[2];
+	__le16 wake_reason;
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT	0
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_MASK (0xFF << \
+		AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT)
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT	8
+#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_MASK	(0xFF << \
+		AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT)
+	u8 reserved_2[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_wake_reason_completion);
+
+/* Switch configuration commands (0x02xx) */
+
+/* Used by many indirect commands that only pass an seid and a buffer in the
+ * command
+ */
+struct avf_aqc_switch_seid {
+	__le16	seid;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_switch_seid);
+
+/* Get Switch Configuration command (indirect 0x0200)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+struct avf_aqc_get_switch_config_header_resp {
+	__le16	num_reported;
+	__le16	num_total;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_switch_config_header_resp);
+
+struct avf_aqc_switch_config_element_resp {
+	u8	element_type;
+#define AVF_AQ_SW_ELEM_TYPE_MAC	1
+#define AVF_AQ_SW_ELEM_TYPE_PF		2
+#define AVF_AQ_SW_ELEM_TYPE_VF		3
+#define AVF_AQ_SW_ELEM_TYPE_EMP	4
+#define AVF_AQ_SW_ELEM_TYPE_BMC	5
+#define AVF_AQ_SW_ELEM_TYPE_PV		16
+#define AVF_AQ_SW_ELEM_TYPE_VEB	17
+#define AVF_AQ_SW_ELEM_TYPE_PA		18
+#define AVF_AQ_SW_ELEM_TYPE_VSI	19
+	u8	revision;
+#define AVF_AQ_SW_ELEM_REV_1		1
+	__le16	seid;
+	__le16	uplink_seid;
+	__le16	downlink_seid;
+	u8	reserved[3];
+	u8	connection_type;
+#define AVF_AQ_CONN_TYPE_REGULAR	0x1
+#define AVF_AQ_CONN_TYPE_DEFAULT	0x2
+#define AVF_AQ_CONN_TYPE_CASCADED	0x3
+	__le16	scheduler_id;
+	__le16	element_info;
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_switch_config_element_resp);
+
+/* Get Switch Configuration (indirect 0x0200)
+ *    an array of elements is returned in the response buffer;
+ *    the first in the array is the header, the remainder are elements
+ */
+struct avf_aqc_get_switch_config_resp {
+	struct avf_aqc_get_switch_config_header_resp	header;
+	struct avf_aqc_switch_config_element_resp	element[1];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_get_switch_config_resp);
+
+/* Add Statistics (direct 0x0201)
+ * Remove Statistics (direct 0x0202)
+ */
+struct avf_aqc_add_remove_statistics {
+	__le16	seid;
+	__le16	vlan;
+	__le16	stat_index;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_statistics);
+
+/* Set Port Parameters command (direct 0x0203) */
+struct avf_aqc_set_port_parameters {
+	__le16	command_flags;
+#define AVF_AQ_SET_P_PARAMS_SAVE_BAD_PACKETS	1
+#define AVF_AQ_SET_P_PARAMS_PAD_SHORT_PACKETS	2 /* must set! */
+#define AVF_AQ_SET_P_PARAMS_DOUBLE_VLAN_ENA	4
+	__le16	bad_frame_vsi;
+#define AVF_AQ_SET_P_PARAMS_BFRAME_SEID_SHIFT	0x0
+#define AVF_AQ_SET_P_PARAMS_BFRAME_SEID_MASK	0x3FF
+	__le16	default_seid;        /* reserved for command */
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_port_parameters);
+
+/* Get Switch Resource Allocation (indirect 0x0204) */
+struct avf_aqc_get_switch_resource_alloc {
+	u8	num_entries;         /* reserved for command */
+	u8	reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_switch_resource_alloc);
+
+/* expect an array of these structs in the response buffer */
+struct avf_aqc_switch_resource_alloc_element_resp {
+	u8	resource_type;
+#define AVF_AQ_RESOURCE_TYPE_VEB		0x0
+#define AVF_AQ_RESOURCE_TYPE_VSI		0x1
+#define AVF_AQ_RESOURCE_TYPE_MACADDR		0x2
+#define AVF_AQ_RESOURCE_TYPE_STAG		0x3
+#define AVF_AQ_RESOURCE_TYPE_ETAG		0x4
+#define AVF_AQ_RESOURCE_TYPE_MULTICAST_HASH	0x5
+#define AVF_AQ_RESOURCE_TYPE_UNICAST_HASH	0x6
+#define AVF_AQ_RESOURCE_TYPE_VLAN		0x7
+#define AVF_AQ_RESOURCE_TYPE_VSI_LIST_ENTRY	0x8
+#define AVF_AQ_RESOURCE_TYPE_ETAG_LIST_ENTRY	0x9
+#define AVF_AQ_RESOURCE_TYPE_VLAN_STAT_POOL	0xA
+#define AVF_AQ_RESOURCE_TYPE_MIRROR_RULE	0xB
+#define AVF_AQ_RESOURCE_TYPE_QUEUE_SETS	0xC
+#define AVF_AQ_RESOURCE_TYPE_VLAN_FILTERS	0xD
+#define AVF_AQ_RESOURCE_TYPE_INNER_MAC_FILTERS	0xF
+#define AVF_AQ_RESOURCE_TYPE_IP_FILTERS	0x10
+#define AVF_AQ_RESOURCE_TYPE_GRE_VN_KEYS	0x11
+#define AVF_AQ_RESOURCE_TYPE_VN2_KEYS		0x12
+#define AVF_AQ_RESOURCE_TYPE_TUNNEL_PORTS	0x13
+	u8	reserved1;
+	__le16	guaranteed;
+	__le16	total;
+	__le16	used;
+	__le16	total_unalloced;
+	u8	reserved2[6];
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_switch_resource_alloc_element_resp);
+
+/* Set Switch Configuration (direct 0x0205) */
+struct avf_aqc_set_switch_config {
+	__le16	flags;
+/* flags used for both fields below */
+#define AVF_AQ_SET_SWITCH_CFG_PROMISC		0x0001
+#define AVF_AQ_SET_SWITCH_CFG_L2_FILTER	0x0002
+#define AVF_AQ_SET_SWITCH_CFG_HW_ATR_EVICT	0x0004
+	__le16	valid_flags;
+	/* The ethertype in switch_tag is dropped on ingress and used
+	 * internally by the switch. Set this to zero for the default
+	 * of 0x88a8 (802.1ad). Should be zero for firmware API
+	 * versions lower than 1.7.
+	 */
+	__le16	switch_tag;
+	/* The ethertypes in first_tag and second_tag are used to
+	 * match the outer and inner VLAN tags (respectively) when HW
+	 * double VLAN tagging is enabled via the set port parameters
+	 * AQ command. Otherwise these are both ignored. Set them to
+	 * zero for their defaults of 0x8100 (802.1Q). Should be zero
+	 * for firmware API versions lower than 1.7.
+	 */
+	__le16	first_tag;
+	__le16	second_tag;
+	u8	reserved[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_switch_config);
+
+/* Read Receive control registers  (direct 0x0206)
+ * Write Receive control registers (direct 0x0207)
+ *     used for accessing Rx control registers that can be
+ *     slow and need special handling when under high Rx load
+ */
+struct avf_aqc_rx_ctl_reg_read_write {
+	__le32 reserved1;
+	__le32 address;
+	__le32 reserved2;
+	__le32 value;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_rx_ctl_reg_read_write);
+
+/* Add VSI (indirect 0x0210)
+ *    this indirect command uses struct avf_aqc_vsi_properties_data
+ *    as the indirect buffer (128 bytes)
+ *
+ * Update VSI (indirect 0x211)
+ *     uses the same data structure as Add VSI
+ *
+ * Get VSI (indirect 0x0212)
+ *     uses the same completion and data structure as Add VSI
+ */
+struct avf_aqc_add_get_update_vsi {
+	__le16	uplink_seid;
+	u8	connection_type;
+#define AVF_AQ_VSI_CONN_TYPE_NORMAL	0x1
+#define AVF_AQ_VSI_CONN_TYPE_DEFAULT	0x2
+#define AVF_AQ_VSI_CONN_TYPE_CASCADED	0x3
+	u8	reserved1;
+	u8	vf_id;
+	u8	reserved2;
+	__le16	vsi_flags;
+#define AVF_AQ_VSI_TYPE_SHIFT		0x0
+#define AVF_AQ_VSI_TYPE_MASK		(0x3 << AVF_AQ_VSI_TYPE_SHIFT)
+#define AVF_AQ_VSI_TYPE_VF		0x0
+#define AVF_AQ_VSI_TYPE_VMDQ2		0x1
+#define AVF_AQ_VSI_TYPE_PF		0x2
+#define AVF_AQ_VSI_TYPE_EMP_MNG	0x3
+#define AVF_AQ_VSI_FLAG_CASCADED_PV	0x4
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_get_update_vsi);
+
+struct avf_aqc_add_get_update_vsi_completion {
+	__le16 seid;
+	__le16 vsi_number;
+	__le16 vsi_used;
+	__le16 vsi_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_get_update_vsi_completion);
+
+struct avf_aqc_vsi_properties_data {
+	/* first 96 byte are written by SW */
+	__le16	valid_sections;
+#define AVF_AQ_VSI_PROP_SWITCH_VALID		0x0001
+#define AVF_AQ_VSI_PROP_SECURITY_VALID		0x0002
+#define AVF_AQ_VSI_PROP_VLAN_VALID		0x0004
+#define AVF_AQ_VSI_PROP_CAS_PV_VALID		0x0008
+#define AVF_AQ_VSI_PROP_INGRESS_UP_VALID	0x0010
+#define AVF_AQ_VSI_PROP_EGRESS_UP_VALID	0x0020
+#define AVF_AQ_VSI_PROP_QUEUE_MAP_VALID	0x0040
+#define AVF_AQ_VSI_PROP_QUEUE_OPT_VALID	0x0080
+#define AVF_AQ_VSI_PROP_OUTER_UP_VALID		0x0100
+#define AVF_AQ_VSI_PROP_SCHED_VALID		0x0200
+	/* switch section */
+	__le16	switch_id; /* 12bit id combined with flags below */
+#define AVF_AQ_VSI_SW_ID_SHIFT		0x0000
+#define AVF_AQ_VSI_SW_ID_MASK		(0xFFF << AVF_AQ_VSI_SW_ID_SHIFT)
+#define AVF_AQ_VSI_SW_ID_FLAG_NOT_STAG	0x1000
+#define AVF_AQ_VSI_SW_ID_FLAG_ALLOW_LB	0x2000
+#define AVF_AQ_VSI_SW_ID_FLAG_LOCAL_LB	0x4000
+	u8	sw_reserved[2];
+	/* security section */
+	u8	sec_flags;
+#define AVF_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD	0x01
+#define AVF_AQ_VSI_SEC_FLAG_ENABLE_VLAN_CHK	0x02
+#define AVF_AQ_VSI_SEC_FLAG_ENABLE_MAC_CHK	0x04
+	u8	sec_reserved;
+	/* VLAN section */
+	__le16	pvid; /* VLANS include priority bits */
+	__le16	fcoe_pvid;
+	u8	port_vlan_flags;
+#define AVF_AQ_VSI_PVLAN_MODE_SHIFT	0x00
+#define AVF_AQ_VSI_PVLAN_MODE_MASK	(0x03 << \
+					 AVF_AQ_VSI_PVLAN_MODE_SHIFT)
+#define AVF_AQ_VSI_PVLAN_MODE_TAGGED	0x01
+#define AVF_AQ_VSI_PVLAN_MODE_UNTAGGED	0x02
+#define AVF_AQ_VSI_PVLAN_MODE_ALL	0x03
+#define AVF_AQ_VSI_PVLAN_INSERT_PVID	0x04
+#define AVF_AQ_VSI_PVLAN_EMOD_SHIFT	0x03
+#define AVF_AQ_VSI_PVLAN_EMOD_MASK	(0x3 << \
+					 AVF_AQ_VSI_PVLAN_EMOD_SHIFT)
+#define AVF_AQ_VSI_PVLAN_EMOD_STR_BOTH	0x0
+#define AVF_AQ_VSI_PVLAN_EMOD_STR_UP	0x08
+#define AVF_AQ_VSI_PVLAN_EMOD_STR	0x10
+#define AVF_AQ_VSI_PVLAN_EMOD_NOTHING	0x18
+	u8	pvlan_reserved[3];
+	/* ingress egress up sections */
+	__le32	ingress_table; /* bitmap, 3 bits per up */
+#define AVF_AQ_VSI_UP_TABLE_UP0_SHIFT	0
+#define AVF_AQ_VSI_UP_TABLE_UP0_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP0_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP1_SHIFT	3
+#define AVF_AQ_VSI_UP_TABLE_UP1_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP1_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP2_SHIFT	6
+#define AVF_AQ_VSI_UP_TABLE_UP2_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP2_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP3_SHIFT	9
+#define AVF_AQ_VSI_UP_TABLE_UP3_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP3_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP4_SHIFT	12
+#define AVF_AQ_VSI_UP_TABLE_UP4_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP4_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP5_SHIFT	15
+#define AVF_AQ_VSI_UP_TABLE_UP5_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP5_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP6_SHIFT	18
+#define AVF_AQ_VSI_UP_TABLE_UP6_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP6_SHIFT)
+#define AVF_AQ_VSI_UP_TABLE_UP7_SHIFT	21
+#define AVF_AQ_VSI_UP_TABLE_UP7_MASK	(0x7 << \
+					 AVF_AQ_VSI_UP_TABLE_UP7_SHIFT)
+	__le32	egress_table;   /* same defines as for ingress table */
+	/* cascaded PV section */
+	__le16	cas_pv_tag;
+	u8	cas_pv_flags;
+#define AVF_AQ_VSI_CAS_PV_TAGX_SHIFT		0x00
+#define AVF_AQ_VSI_CAS_PV_TAGX_MASK		(0x03 << \
+						 AVF_AQ_VSI_CAS_PV_TAGX_SHIFT)
+#define AVF_AQ_VSI_CAS_PV_TAGX_LEAVE		0x00
+#define AVF_AQ_VSI_CAS_PV_TAGX_REMOVE		0x01
+#define AVF_AQ_VSI_CAS_PV_TAGX_COPY		0x02
+#define AVF_AQ_VSI_CAS_PV_INSERT_TAG		0x10
+#define AVF_AQ_VSI_CAS_PV_ETAG_PRUNE		0x20
+#define AVF_AQ_VSI_CAS_PV_ACCEPT_HOST_TAG	0x40
+	u8	cas_pv_reserved;
+	/* queue mapping section */
+	__le16	mapping_flags;
+#define AVF_AQ_VSI_QUE_MAP_CONTIG	0x0
+#define AVF_AQ_VSI_QUE_MAP_NONCONTIG	0x1
+	__le16	queue_mapping[16];
+#define AVF_AQ_VSI_QUEUE_SHIFT		0x0
+#define AVF_AQ_VSI_QUEUE_MASK		(0x7FF << AVF_AQ_VSI_QUEUE_SHIFT)
+	__le16	tc_mapping[8];
+#define AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT	0
+#define AVF_AQ_VSI_TC_QUE_OFFSET_MASK	(0x1FF << \
+					 AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT)
+#define AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT	9
+#define AVF_AQ_VSI_TC_QUE_NUMBER_MASK	(0x7 << \
+					 AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT)
+	/* queueing option section */
+	u8	queueing_opt_flags;
+#define AVF_AQ_VSI_QUE_OPT_MULTICAST_UDP_ENA	0x04
+#define AVF_AQ_VSI_QUE_OPT_UNICAST_UDP_ENA	0x08
+#define AVF_AQ_VSI_QUE_OPT_TCP_ENA	0x10
+#define AVF_AQ_VSI_QUE_OPT_FCOE_ENA	0x20
+#define AVF_AQ_VSI_QUE_OPT_RSS_LUT_PF	0x00
+#define AVF_AQ_VSI_QUE_OPT_RSS_LUT_VSI	0x40
+	u8	queueing_opt_reserved[3];
+	/* scheduler section */
+	u8	up_enable_bits;
+	u8	sched_reserved;
+	/* outer up section */
+	__le32	outer_up_table; /* same structure and defines as ingress tbl */
+	u8	cmd_reserved[8];
+	/* last 32 bytes are written by FW */
+	__le16	qs_handle[8];
+#define AVF_AQ_VSI_QS_HANDLE_INVALID	0xFFFF
+	__le16	stat_counter_idx;
+	__le16	sched_id;
+	u8	resp_reserved[12];
+};
+
+AVF_CHECK_STRUCT_LEN(128, avf_aqc_vsi_properties_data);
+
+/* Add Port Virtualizer (direct 0x0220)
+ * also used for update PV (direct 0x0221) but only flags are used
+ * (IS_CTRL_PORT only works on add PV)
+ */
+struct avf_aqc_add_update_pv {
+	__le16	command_flags;
+#define AVF_AQC_PV_FLAG_PV_TYPE		0x1
+#define AVF_AQC_PV_FLAG_FWD_UNKNOWN_STAG_EN	0x2
+#define AVF_AQC_PV_FLAG_FWD_UNKNOWN_ETAG_EN	0x4
+#define AVF_AQC_PV_FLAG_IS_CTRL_PORT		0x8
+	__le16	uplink_seid;
+	__le16	connected_seid;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_update_pv);
+
+struct avf_aqc_add_update_pv_completion {
+	/* reserved for update; for add also encodes error if rc == ENOSPC */
+	__le16	pv_seid;
+#define AVF_AQC_PV_ERR_FLAG_NO_PV	0x1
+#define AVF_AQC_PV_ERR_FLAG_NO_SCHED	0x2
+#define AVF_AQC_PV_ERR_FLAG_NO_COUNTER	0x4
+#define AVF_AQC_PV_ERR_FLAG_NO_ENTRY	0x8
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_update_pv_completion);
+
+/* Get PV Params (direct 0x0222)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+
+struct avf_aqc_get_pv_params_completion {
+	__le16	seid;
+	__le16	default_stag;
+	__le16	pv_flags; /* same flags as add_pv */
+#define AVF_AQC_GET_PV_PV_TYPE			0x1
+#define AVF_AQC_GET_PV_FRWD_UNKNOWN_STAG	0x2
+#define AVF_AQC_GET_PV_FRWD_UNKNOWN_ETAG	0x4
+	u8	reserved[8];
+	__le16	default_port_seid;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_pv_params_completion);
+
+/* Add VEB (direct 0x0230) */
+struct avf_aqc_add_veb {
+	__le16	uplink_seid;
+	__le16	downlink_seid;
+	__le16	veb_flags;
+#define AVF_AQC_ADD_VEB_FLOATING		0x1
+#define AVF_AQC_ADD_VEB_PORT_TYPE_SHIFT	1
+#define AVF_AQC_ADD_VEB_PORT_TYPE_MASK		(0x3 << \
+					AVF_AQC_ADD_VEB_PORT_TYPE_SHIFT)
+#define AVF_AQC_ADD_VEB_PORT_TYPE_DEFAULT	0x2
+#define AVF_AQC_ADD_VEB_PORT_TYPE_DATA		0x4
+#define AVF_AQC_ADD_VEB_ENABLE_L2_FILTER	0x8     /* deprecated */
+#define AVF_AQC_ADD_VEB_ENABLE_DISABLE_STATS	0x10
+	u8	enable_tcs;
+	u8	reserved[9];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_veb);
+
+struct avf_aqc_add_veb_completion {
+	u8	reserved[6];
+	__le16	switch_seid;
+	/* also encodes error if rc == ENOSPC; codes are the same as add_pv */
+	__le16	veb_seid;
+#define AVF_AQC_VEB_ERR_FLAG_NO_VEB		0x1
+#define AVF_AQC_VEB_ERR_FLAG_NO_SCHED		0x2
+#define AVF_AQC_VEB_ERR_FLAG_NO_COUNTER	0x4
+#define AVF_AQC_VEB_ERR_FLAG_NO_ENTRY		0x8
+	__le16	statistic_index;
+	__le16	vebs_used;
+	__le16	vebs_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_veb_completion);
+
+/* Get VEB Parameters (direct 0x0232)
+ * uses avf_aqc_switch_seid for the descriptor
+ */
+struct avf_aqc_get_veb_parameters_completion {
+	__le16	seid;
+	__le16	switch_id;
+	__le16	veb_flags; /* only the first/last flags from 0x0230 are valid */
+	__le16	statistic_index;
+	__le16	vebs_used;
+	__le16	vebs_free;
+	u8	reserved[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_veb_parameters_completion);
+
+/* Delete Element (direct 0x0243)
+ * uses the generic avf_aqc_switch_seid
+ */
+
+/* Add MAC-VLAN (indirect 0x0250) */
+
+/* used for the command for most vlan commands */
+struct avf_aqc_macvlan {
+	__le16	num_addresses;
+	__le16	seid[3];
+#define AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_MACVLAN_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT)
+#define AVF_AQC_MACVLAN_CMD_SEID_VALID		0x8000
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_macvlan);
+
+/* indirect data for command and response */
+struct avf_aqc_add_macvlan_element_data {
+	u8	mac_addr[6];
+	__le16	vlan_tag;
+	__le16	flags;
+#define AVF_AQC_MACVLAN_ADD_PERFECT_MATCH	0x0001
+#define AVF_AQC_MACVLAN_ADD_HASH_MATCH		0x0002
+#define AVF_AQC_MACVLAN_ADD_IGNORE_VLAN	0x0004
+#define AVF_AQC_MACVLAN_ADD_TO_QUEUE		0x0008
+#define AVF_AQC_MACVLAN_ADD_USE_SHARED_MAC	0x0010
+	__le16	queue_number;
+#define AVF_AQC_MACVLAN_CMD_QUEUE_SHIFT	0
+#define AVF_AQC_MACVLAN_CMD_QUEUE_MASK		(0x7FF << \
+					AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT)
+	/* response section */
+	u8	match_method;
+#define AVF_AQC_MM_PERFECT_MATCH	0x01
+#define AVF_AQC_MM_HASH_MATCH		0x02
+#define AVF_AQC_MM_ERR_NO_RES		0xFF
+	u8	reserved1[3];
+};
+
+struct avf_aqc_add_remove_macvlan_completion {
+	__le16 perfect_mac_used;
+	__le16 perfect_mac_free;
+	__le16 unicast_hash_free;
+	__le16 multicast_hash_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_macvlan_completion);
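
A sketch of an Add MAC-VLAN call built from these structures: one perfect-match element is attached as the indirect buffer with BUF|RD set, num_addresses counts the elements, and the SEID is tagged valid. The avf_asq_send_command() signature is assumed as before; the helper name is illustrative:

static enum avf_status_code
example_add_mac(struct avf_hw *hw, u16 vsi_seid, const u8 *mac)
{
	struct avf_aq_desc desc;
	struct avf_aqc_macvlan *cmd =
		(struct avf_aqc_macvlan *)&desc.params.raw;
	struct avf_aqc_add_macvlan_element_data elem;
	int i;

	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_add_macvlan);
	cmd->num_addresses = CPU_TO_LE16(1);
	cmd->seid[0] = CPU_TO_LE16(AVF_AQC_MACVLAN_CMD_SEID_VALID | vsi_seid);

	avf_memset(&elem, 0, sizeof(elem), AVF_NONDMA_MEM);
	for (i = 0; i < 6; i++)
		elem.mac_addr[i] = mac[i];
	elem.flags = CPU_TO_LE16(AVF_AQC_MACVLAN_ADD_PERFECT_MATCH |
				 AVF_AQC_MACVLAN_ADD_IGNORE_VLAN);

	desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF | AVF_AQ_FLAG_RD));
	return avf_asq_send_command(hw, &desc, &elem, sizeof(elem), NULL);
}
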
+
+/* Remove MAC-VLAN (indirect 0x0251)
+ * uses avf_aqc_macvlan for the descriptor
+ * data points to an array of num_addresses of elements
+ */
+
+struct avf_aqc_remove_macvlan_element_data {
+	u8	mac_addr[6];
+	__le16	vlan_tag;
+	u8	flags;
+#define AVF_AQC_MACVLAN_DEL_PERFECT_MATCH	0x01
+#define AVF_AQC_MACVLAN_DEL_HASH_MATCH		0x02
+#define AVF_AQC_MACVLAN_DEL_IGNORE_VLAN	0x08
+#define AVF_AQC_MACVLAN_DEL_ALL_VSIS		0x10
+	u8	reserved[3];
+	/* reply section */
+	u8	error_code;
+#define AVF_AQC_REMOVE_MACVLAN_SUCCESS		0x0
+#define AVF_AQC_REMOVE_MACVLAN_FAIL		0xFF
+	u8	reply_reserved[3];
+};
+
+/* Add VLAN (indirect 0x0252)
+ * Remove VLAN (indirect 0x0253)
+ * use the generic avf_aqc_macvlan for the command
+ */
+struct avf_aqc_add_remove_vlan_element_data {
+	__le16	vlan_tag;
+	u8	vlan_flags;
+/* flags for add VLAN */
+#define AVF_AQC_ADD_VLAN_LOCAL			0x1
+#define AVF_AQC_ADD_PVLAN_TYPE_SHIFT		1
+#define AVF_AQC_ADD_PVLAN_TYPE_MASK	(0x3 << AVF_AQC_ADD_PVLAN_TYPE_SHIFT)
+#define AVF_AQC_ADD_PVLAN_TYPE_REGULAR		0x0
+#define AVF_AQC_ADD_PVLAN_TYPE_PRIMARY		0x2
+#define AVF_AQC_ADD_PVLAN_TYPE_SECONDARY	0x4
+#define AVF_AQC_VLAN_PTYPE_SHIFT		3
+#define AVF_AQC_VLAN_PTYPE_MASK	(0x3 << AVF_AQC_VLAN_PTYPE_SHIFT)
+#define AVF_AQC_VLAN_PTYPE_REGULAR_VSI		0x0
+#define AVF_AQC_VLAN_PTYPE_PROMISC_VSI		0x8
+#define AVF_AQC_VLAN_PTYPE_COMMUNITY_VSI	0x10
+#define AVF_AQC_VLAN_PTYPE_ISOLATED_VSI	0x18
+/* flags for remove VLAN */
+#define AVF_AQC_REMOVE_VLAN_ALL	0x1
+	u8	reserved;
+	u8	result;
+/* flags for add VLAN */
+#define AVF_AQC_ADD_VLAN_SUCCESS	0x0
+#define AVF_AQC_ADD_VLAN_FAIL_REQUEST	0xFE
+#define AVF_AQC_ADD_VLAN_FAIL_RESOURCE	0xFF
+/* flags for remove VLAN */
+#define AVF_AQC_REMOVE_VLAN_SUCCESS	0x0
+#define AVF_AQC_REMOVE_VLAN_FAIL	0xFF
+	u8	reserved1[3];
+};
+
+struct avf_aqc_add_remove_vlan_completion {
+	u8	reserved[4];
+	__le16	vlans_used;
+	__le16	vlans_free;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+/* Set VSI Promiscuous Modes (direct 0x0254) */
+struct avf_aqc_set_vsi_promiscuous_modes {
+	__le16	promiscuous_flags;
+	__le16	valid_flags;
+/* flags used for both fields above */
+#define AVF_AQC_SET_VSI_PROMISC_UNICAST	0x01
+#define AVF_AQC_SET_VSI_PROMISC_MULTICAST	0x02
+#define AVF_AQC_SET_VSI_PROMISC_BROADCAST	0x04
+#define AVF_AQC_SET_VSI_DEFAULT		0x08
+#define AVF_AQC_SET_VSI_PROMISC_VLAN		0x10
+#define AVF_AQC_SET_VSI_PROMISC_TX		0x8000
+	__le16	seid;
+#define AVF_AQC_VSI_PROM_CMD_SEID_MASK		0x3FF
+	__le16	vlan_tag;
+#define AVF_AQC_SET_VSI_VLAN_MASK		0x0FFF
+#define AVF_AQC_SET_VSI_VLAN_VALID		0x8000
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_vsi_promiscuous_modes);
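
The promiscuous_flags/valid_flags pairing means only the modes named in valid_flags are changed; a sketch toggling unicast promiscuous alone (direct command, no buffer; helper name illustrative, send-routine signature assumed as before):

static enum avf_status_code
example_set_uc_promisc(struct avf_hw *hw, u16 seid, bool enable)
{
	struct avf_aq_desc desc;
	struct avf_aqc_set_vsi_promiscuous_modes *cmd =
		(struct avf_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;

	avf_fill_default_direct_cmd_desc(&desc,
					 avf_aqc_opc_set_vsi_promiscuous_modes);
	cmd->promiscuous_flags =
		CPU_TO_LE16(enable ? AVF_AQC_SET_VSI_PROMISC_UNICAST : 0);
	cmd->valid_flags = CPU_TO_LE16(AVF_AQC_SET_VSI_PROMISC_UNICAST);
	cmd->seid = CPU_TO_LE16(seid & AVF_AQC_VSI_PROM_CMD_SEID_MASK);

	return avf_asq_send_command(hw, &desc, NULL, 0, NULL);
}
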
+
+/* Add S/E-tag command (direct 0x0255)
+ * Uses generic avf_aqc_add_remove_tag_completion for completion
+ */
+struct avf_aqc_add_tag {
+	__le16	flags;
+#define AVF_AQC_ADD_TAG_FLAG_TO_QUEUE		0x0001
+	__le16	seid;
+#define AVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	tag;
+	__le16	queue_number;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_tag);
+
+struct avf_aqc_add_remove_tag_completion {
+	u8	reserved[12];
+	__le16	tags_used;
+	__le16	tags_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_tag_completion);
+
+/* Remove S/E-tag command (direct 0x0256)
+ * Uses generic avf_aqc_add_remove_tag_completion for completion
+ */
+struct avf_aqc_remove_tag {
+	__le16	seid;
+#define AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	tag;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_tag);
+
+/* Add multicast E-Tag (direct 0x0257)
+ * del multicast E-Tag (direct 0x0258) only uses pv_seid and etag fields
+ * and no external data
+ */
+struct avf_aqc_add_remove_mcast_etag {
+	__le16	pv_seid;
+	__le16	etag;
+	u8	num_unicast_etags;
+	u8	reserved[3];
+	__le32	addr_high;          /* address of array of 2-byte s-tags */
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_mcast_etag);
+
+struct avf_aqc_add_remove_mcast_etag_completion {
+	u8	reserved[4];
+	__le16	mcast_etags_used;
+	__le16	mcast_etags_free;
+	__le32	addr_high;
+	__le32	addr_low;
+
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_mcast_etag_completion);
+
+/* Update S/E-Tag (direct 0x0259) */
+struct avf_aqc_update_tag {
+	__le16	seid;
+#define AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT)
+	__le16	old_tag;
+	__le16	new_tag;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_update_tag);
+
+struct avf_aqc_update_tag_completion {
+	u8	reserved[12];
+	__le16	tags_used;
+	__le16	tags_free;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_update_tag_completion);
+
+/* Add Control Packet filter (direct 0x025A)
+ * Remove Control Packet filter (direct 0x025B)
+ * uses the avf_aqc_add_oveb_cloud,
+ * and the generic direct completion structure
+ */
+struct avf_aqc_add_remove_control_packet_filter {
+	u8	mac[6];
+	__le16	etype;
+	__le16	flags;
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC	0x0001
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_DROP		0x0002
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_TO_QUEUE	0x0004
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_TX		0x0008
+#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_RX		0x0000
+	__le16	seid;
+#define AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_MASK	(0x3FF << \
+				AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT)
+	__le16	queue;
+	u8	reserved[2];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_control_packet_filter);
+
+struct avf_aqc_add_remove_control_packet_filter_completion {
+	__le16	mac_etype_used;
+	__le16	etype_used;
+	__le16	mac_etype_free;
+	__le16	etype_free;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_control_packet_filter_completion);
+
+/* Add Cloud filters (indirect 0x025C)
+ * Remove Cloud filters (indirect 0x025D)
+ * uses the avf_aqc_add_remove_cloud_filters,
+ * and the generic indirect completion structure
+ */
+struct avf_aqc_add_remove_cloud_filters {
+	u8	num_filters;
+	u8	reserved;
+	__le16	seid;
+#define AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT	0
+#define AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_MASK	(0x3FF << \
+					AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT)
+	u8	big_buffer_flag;
+#define AVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER	1
+	u8	reserved2[3];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_cloud_filters);
+
+struct avf_aqc_add_remove_cloud_filters_element_data {
+	u8	outer_mac[6];
+	u8	inner_mac[6];
+	__le16	inner_vlan;
+	union {
+		struct {
+			u8 reserved[12];
+			u8 data[4];
+		} v4;
+		struct {
+			u8 data[16];
+		} v6;
+	} ipaddr;
+	__le16	flags;
+#define AVF_AQC_ADD_CLOUD_FILTER_SHIFT			0
+#define AVF_AQC_ADD_CLOUD_FILTER_MASK	(0x3F << \
+					AVF_AQC_ADD_CLOUD_FILTER_SHIFT)
+/* 0x0000 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_OIP			0x0001
+/* 0x0002 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN		0x0003
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID	0x0004
+/* 0x0005 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID		0x0006
+/* 0x0007 reserved */
+/* 0x0008 reserved */
+#define AVF_AQC_ADD_CLOUD_FILTER_OMAC			0x0009
+#define AVF_AQC_ADD_CLOUD_FILTER_IMAC			0x000A
+#define AVF_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC	0x000B
+#define AVF_AQC_ADD_CLOUD_FILTER_IIP			0x000C
+/* 0x0010 to 0x0017 is for custom filters */
+
+#define AVF_AQC_ADD_CLOUD_FLAGS_TO_QUEUE		0x0080
+#define AVF_AQC_ADD_CLOUD_VNK_SHIFT			6
+#define AVF_AQC_ADD_CLOUD_VNK_MASK			0x00C0
+#define AVF_AQC_ADD_CLOUD_FLAGS_IPV4			0
+#define AVF_AQC_ADD_CLOUD_FLAGS_IPV6			0x0100
+
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_SHIFT		9
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_MASK		0x1E00
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN		0
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_NVGRE_OMAC		1
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_GENEVE		2
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_IP			3
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_RESERVED		4
+#define AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN_GPE		5
+
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_MAC	0x2000
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_INNER_MAC	0x4000
+#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_IP	0x8000
+
+	__le32	tenant_id;
+	u8	reserved[4];
+	__le16	queue_number;
+#define AVF_AQC_ADD_CLOUD_QUEUE_SHIFT		0
+#define AVF_AQC_ADD_CLOUD_QUEUE_MASK		(0x7FF << \
+						 AVF_AQC_ADD_CLOUD_QUEUE_SHIFT)
+	u8	reserved2[14];
+	/* response section */
+	u8	allocation_result;
+#define AVF_AQC_ADD_CLOUD_FILTER_SUCCESS	0x0
+#define AVF_AQC_ADD_CLOUD_FILTER_FAIL		0xFF
+	u8	response_reserved[7];
+};
+
+/* avf_aqc_add_rm_cloud_filt_elem_ext is used when
+ * AVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER flag is set. refer to
+ * DCR288
+ */
+struct avf_aqc_add_rm_cloud_filt_elem_ext {
+	struct avf_aqc_add_remove_cloud_filters_element_data element;
+	u16     general_fields[32];
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0	0
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1	1
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2	2
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0	3
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1	4
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2	5
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0	6
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1	7
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2	8
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0	9
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1	10
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2	11
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD0	12
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD1	13
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD2	14
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0	15
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD1	16
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD2	17
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD3	18
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD4	19
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD5	20
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD6	21
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD7	22
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD0	23
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD1	24
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD2	25
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD3	26
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD4	27
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD5	28
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD6	29
+#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD7	30
+};
+
+struct avf_aqc_remove_cloud_filters_completion {
+	__le16 perfect_ovlan_used;
+	__le16 perfect_ovlan_free;
+	__le16 vlan_used;
+	__le16 vlan_free;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_cloud_filters_completion);
+
+/* Replace filter Command 0x025F
+ * uses the avf_aqc_replace_cloud_filters,
+ * and the generic indirect completion structure
+ */
+struct avf_filter_data {
+	u8 filter_type;
+	u8 input[3];
+};
+
+struct avf_aqc_replace_cloud_filters_cmd {
+	u8	valid_flags;
+#define AVF_AQC_REPLACE_L1_FILTER		0x0
+#define AVF_AQC_REPLACE_CLOUD_FILTER		0x1
+#define AVF_AQC_GET_CLOUD_FILTERS		0x2
+#define AVF_AQC_MIRROR_CLOUD_FILTER		0x4
+#define AVF_AQC_HIGH_PRIORITY_CLOUD_FILTER	0x8
+	u8	old_filter_type;
+	u8	new_filter_type;
+	u8	tr_bit;
+	u8	reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct avf_aqc_replace_cloud_filters_cmd_buf {
+	u8	data[32];
+/* Filter type INPUT codes */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_ENTRIES_MAX	3
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_VALIDATED	(1 << 7UL)
+
+/* Field Vector offsets */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_MAC_DA		0
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_ETH		6
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG		7
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_VLAN		8
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_OVLAN		9
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_IVLAN		10
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_TUNNLE_KEY		11
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IMAC		12
+/* big FLU */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IP_DA		14
+/* big FLU */
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_OIP_DA		15
+
+#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_INNER_VLAN		37
+	struct avf_filter_data	filters[8];
+};
+
+/* Add Mirror Rule (indirect or direct 0x0260)
+ * Delete Mirror Rule (indirect or direct 0x0261)
+ * note: some rule types (4,5) do not use an external buffer.
+ *       take care to set the flags correctly.
+ */
+struct avf_aqc_add_delete_mirror_rule {
+	__le16 seid;
+	__le16 rule_type;
+#define AVF_AQC_MIRROR_RULE_TYPE_SHIFT		0
+#define AVF_AQC_MIRROR_RULE_TYPE_MASK		(0x7 << \
+						AVF_AQC_MIRROR_RULE_TYPE_SHIFT)
+#define AVF_AQC_MIRROR_RULE_TYPE_VPORT_INGRESS	1
+#define AVF_AQC_MIRROR_RULE_TYPE_VPORT_EGRESS	2
+#define AVF_AQC_MIRROR_RULE_TYPE_VLAN		3
+#define AVF_AQC_MIRROR_RULE_TYPE_ALL_INGRESS	4
+#define AVF_AQC_MIRROR_RULE_TYPE_ALL_EGRESS	5
+	__le16 num_entries;
+	__le16 destination;  /* VSI for add, rule id for delete */
+	__le32 addr_high;    /* address of array of 2-byte VSI or VLAN ids */
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_delete_mirror_rule);
+
+struct avf_aqc_add_delete_mirror_rule_completion {
+	u8	reserved[2];
+	__le16	rule_id;  /* only used on add */
+	__le16	mirror_rules_used;
+	__le16	mirror_rules_free;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_delete_mirror_rule_completion);
+
+/* Dynamic Device Personalization */
+struct avf_aqc_write_personalization_profile {
+	u8      flags;
+	u8      reserved[3];
+	__le32  profile_track_id;
+	__le32  addr_high;
+	__le32  addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_write_personalization_profile);
+
+struct avf_aqc_write_ddp_resp {
+	__le32 error_offset;
+	__le32 error_info;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct avf_aqc_get_applied_profiles {
+	u8      flags;
+#define AVF_AQC_GET_DDP_GET_CONF	0x1
+#define AVF_AQC_GET_DDP_GET_RDPU_CONF	0x2
+	u8      rsv[3];
+	__le32  reserved;
+	__le32  addr_high;
+	__le32  addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_applied_profiles);
+
+/* DCB 0x03xx */
+
+/* PFC Ignore (direct 0x0301)
+ *    the command and response use the same descriptor structure
+ */
+struct avf_aqc_pfc_ignore {
+	u8	tc_bitmap;
+	u8	command_flags; /* unused on response */
+#define AVF_AQC_PFC_IGNORE_SET		0x80
+#define AVF_AQC_PFC_IGNORE_CLEAR	0x0
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_pfc_ignore);
+
+/* DCB Update (direct 0x0302) uses the avf_aq_desc structure
+ * with no parameters
+ */
+
+/* TX scheduler 0x04xx */
+
+/* Almost all the indirect commands use
+ * this generic struct to pass the SEID in param0
+ */
+struct avf_aqc_tx_sched_ind {
+	__le16	vsi_seid;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_tx_sched_ind);
+
+/* Several commands respond with a set of queue set handles */
+struct avf_aqc_qs_handles_resp {
+	__le16 qs_handles[8];
+};
+
+/* Configure VSI BW limits (direct 0x0400) */
+struct avf_aqc_configure_vsi_bw_limit {
+	__le16	vsi_seid;
+	u8	reserved[2];
+	__le16	credit;
+	u8	reserved1[2];
+	u8	max_credit; /* 0-3, limit = 2^max */
+	u8	reserved2[7];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_configure_vsi_bw_limit);
+
+/* Configure VSI Bandwidth Limit per Traffic Type (indirect 0x0406)
+ *    responds with avf_aqc_qs_handles_resp
+ */
+struct avf_aqc_configure_vsi_ets_sla_bw_data {
+	u8	tc_valid_bits;
+	u8	reserved[15];
+	__le16	tc_bw_credits[8]; /* FW writes back QS handles here */
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved1[28];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_configure_vsi_ets_sla_bw_data);
+
+/* Configure VSI Bandwidth Allocation per Traffic Type (indirect 0x0407)
+ *    responds with avf_aqc_qs_handles_resp
+ */
+struct avf_aqc_configure_vsi_tc_bw_data {
+	u8	tc_valid_bits;
+	u8	reserved[3];
+	u8	tc_bw_credits[8];
+	u8	reserved1[4];
+	__le16	qs_handles[8];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_configure_vsi_tc_bw_data);
+
+/* Query vsi bw configuration (indirect 0x0408) */
+struct avf_aqc_query_vsi_bw_config_resp {
+	u8	tc_valid_bits;
+	u8	tc_suspended_bits;
+	u8	reserved[14];
+	__le16	qs_handles[8];
+	u8	reserved1[4];
+	__le16	port_bw_limit;
+	u8	reserved2[2];
+	u8	max_bw; /* 0-3, limit = 2^max */
+	u8	reserved3[23];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_query_vsi_bw_config_resp);
+
+/* Query VSI Bandwidth Allocation per Traffic Type (indirect 0x040A) */
+struct avf_aqc_query_vsi_ets_sla_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[3];
+	u8	share_credits[8];
+	__le16	credits[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_query_vsi_ets_sla_config_resp);
+
+/* Configure Switching Component Bandwidth Limit (direct 0x0410) */
+struct avf_aqc_configure_switching_comp_bw_limit {
+	__le16	seid;
+	u8	reserved[2];
+	__le16	credit;
+	u8	reserved1[2];
+	u8	max_bw; /* 0-3, limit = 2^max */
+	u8	reserved2[7];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_configure_switching_comp_bw_limit);
+
+/* Enable  Physical Port ETS (indirect 0x0413)
+ * Modify  Physical Port ETS (indirect 0x0414)
+ * Disable Physical Port ETS (indirect 0x0415)
+ */
+struct avf_aqc_configure_switching_comp_ets_data {
+	u8	reserved[4];
+	u8	tc_valid_bits;
+	u8	seepage;
+#define AVF_AQ_ETS_SEEPAGE_EN_MASK	0x1
+	u8	tc_strict_priority_flags;
+	u8	reserved1[17];
+	u8	tc_bw_share_credits[8];
+	u8	reserved2[96];
+};
+
+AVF_CHECK_STRUCT_LEN(0x80, avf_aqc_configure_switching_comp_ets_data);
+
+/* Configure Switching Component Bandwidth Limits per Tc (indirect 0x0416) */
+struct avf_aqc_configure_switching_comp_ets_bw_limit_data {
+	u8	tc_valid_bits;
+	u8	reserved[15];
+	__le16	tc_bw_credit[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved1[28];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40,
+		      avf_aqc_configure_switching_comp_ets_bw_limit_data);
+
+/* Configure Switching Component Bandwidth Allocation per Tc
+ * (indirect 0x0417)
+ */
+struct avf_aqc_configure_switching_comp_bw_config_data {
+	u8	tc_valid_bits;
+	u8	reserved[2];
+	u8	absolute_credits; /* bool */
+	u8	tc_bw_share_credits[8];
+	u8	reserved1[20];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_configure_switching_comp_bw_config_data);
+
+/* Query Switching Component Configuration (indirect 0x0418) */
+struct avf_aqc_query_switching_comp_ets_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[35];
+	__le16	port_bw_limit;
+	u8	reserved1[2];
+	u8	tc_bw_max; /* 0-3, limit = 2^max */
+	u8	reserved2[23];
+};
+
+AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_query_switching_comp_ets_config_resp);
+
+/* Query PhysicalPort ETS Configuration (indirect 0x0419) */
+struct avf_aqc_query_port_ets_config_resp {
+	u8	reserved[4];
+	u8	tc_valid_bits;
+	u8	reserved1;
+	u8	tc_strict_priority_bits;
+	u8	reserved2;
+	u8	tc_bw_share_credits[8];
+	__le16	tc_bw_limits[8];
+
+	/* 4 bits per tc 0-7, 4th bit reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+	u8	reserved3[32];
+};
+
+AVF_CHECK_STRUCT_LEN(0x44, avf_aqc_query_port_ets_config_resp);
+
+/* Query Switching Component Bandwidth Allocation per Traffic Type
+ * (indirect 0x041A)
+ */
+struct avf_aqc_query_switching_comp_bw_config_resp {
+	u8	tc_valid_bits;
+	u8	reserved[2];
+	u8	absolute_credits_enable; /* bool */
+	u8	tc_bw_share_credits[8];
+	__le16	tc_bw_limits[8];
+
+	/* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+	__le16	tc_bw_max[2];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_query_switching_comp_bw_config_resp);
+
+/* Suspend/resume port TX traffic
+ * (direct 0x041B and 0x041C) uses the generic SEID struct
+ */
+
+/* Configure partition BW
+ * (indirect 0x041D)
+ */
+struct avf_aqc_configure_partition_bw_data {
+	__le16	pf_valid_bits;
+	u8	min_bw[16];      /* guaranteed bandwidth */
+	u8	max_bw[16];      /* bandwidth limit */
+};
+
+AVF_CHECK_STRUCT_LEN(0x22, avf_aqc_configure_partition_bw_data);
+
+/* Get and set the active HMC resource profile and status.
+ * (direct 0x0500) and (direct 0x0501)
+ */
+struct avf_aq_get_set_hmc_resource_profile {
+	u8	pm_profile;
+	u8	pe_vf_enabled;
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_get_set_hmc_resource_profile);
+
+enum avf_aq_hmc_profile {
+	/* AVF_HMC_PROFILE_NO_CHANGE	= 0, reserved */
+	AVF_HMC_PROFILE_DEFAULT	= 1,
+	AVF_HMC_PROFILE_FAVOR_VF	= 2,
+	AVF_HMC_PROFILE_EQUAL		= 3,
+};
+
+/* Get PHY Abilities (indirect 0x0600) uses the generic indirect struct */
+
+/* set in param0 for get phy abilities to report qualified modules */
+#define AVF_AQ_PHY_REPORT_QUALIFIED_MODULES	0x0001
+#define AVF_AQ_PHY_REPORT_INITIAL_VALUES	0x0002
+
+enum avf_aq_phy_type {
+	AVF_PHY_TYPE_SGMII			= 0x0,
+	AVF_PHY_TYPE_1000BASE_KX		= 0x1,
+	AVF_PHY_TYPE_10GBASE_KX4		= 0x2,
+	AVF_PHY_TYPE_10GBASE_KR		= 0x3,
+	AVF_PHY_TYPE_40GBASE_KR4		= 0x4,
+	AVF_PHY_TYPE_XAUI			= 0x5,
+	AVF_PHY_TYPE_XFI			= 0x6,
+	AVF_PHY_TYPE_SFI			= 0x7,
+	AVF_PHY_TYPE_XLAUI			= 0x8,
+	AVF_PHY_TYPE_XLPPI			= 0x9,
+	AVF_PHY_TYPE_40GBASE_CR4_CU		= 0xA,
+	AVF_PHY_TYPE_10GBASE_CR1_CU		= 0xB,
+	AVF_PHY_TYPE_10GBASE_AOC		= 0xC,
+	AVF_PHY_TYPE_40GBASE_AOC		= 0xD,
+	AVF_PHY_TYPE_UNRECOGNIZED		= 0xE,
+	AVF_PHY_TYPE_UNSUPPORTED		= 0xF,
+	AVF_PHY_TYPE_100BASE_TX		= 0x11,
+	AVF_PHY_TYPE_1000BASE_T		= 0x12,
+	AVF_PHY_TYPE_10GBASE_T			= 0x13,
+	AVF_PHY_TYPE_10GBASE_SR		= 0x14,
+	AVF_PHY_TYPE_10GBASE_LR		= 0x15,
+	AVF_PHY_TYPE_10GBASE_SFPP_CU		= 0x16,
+	AVF_PHY_TYPE_10GBASE_CR1		= 0x17,
+	AVF_PHY_TYPE_40GBASE_CR4		= 0x18,
+	AVF_PHY_TYPE_40GBASE_SR4		= 0x19,
+	AVF_PHY_TYPE_40GBASE_LR4		= 0x1A,
+	AVF_PHY_TYPE_1000BASE_SX		= 0x1B,
+	AVF_PHY_TYPE_1000BASE_LX		= 0x1C,
+	AVF_PHY_TYPE_1000BASE_T_OPTICAL	= 0x1D,
+	AVF_PHY_TYPE_20GBASE_KR2		= 0x1E,
+	AVF_PHY_TYPE_25GBASE_KR		= 0x1F,
+	AVF_PHY_TYPE_25GBASE_CR		= 0x20,
+	AVF_PHY_TYPE_25GBASE_SR		= 0x21,
+	AVF_PHY_TYPE_25GBASE_LR		= 0x22,
+	AVF_PHY_TYPE_25GBASE_AOC		= 0x23,
+	AVF_PHY_TYPE_25GBASE_ACC		= 0x24,
+	AVF_PHY_TYPE_MAX,
+	AVF_PHY_TYPE_NOT_SUPPORTED_HIGH_TEMP	= 0xFD,
+	AVF_PHY_TYPE_EMPTY			= 0xFE,
+	AVF_PHY_TYPE_DEFAULT			= 0xFF,
+};
+
+#define AVF_LINK_SPEED_100MB_SHIFT	0x1
+#define AVF_LINK_SPEED_1000MB_SHIFT	0x2
+#define AVF_LINK_SPEED_10GB_SHIFT	0x3
+#define AVF_LINK_SPEED_40GB_SHIFT	0x4
+#define AVF_LINK_SPEED_20GB_SHIFT	0x5
+#define AVF_LINK_SPEED_25GB_SHIFT	0x6
+
+enum avf_aq_link_speed {
+	AVF_LINK_SPEED_UNKNOWN	= 0,
+	AVF_LINK_SPEED_100MB	= (1 << AVF_LINK_SPEED_100MB_SHIFT),
+	AVF_LINK_SPEED_1GB	= (1 << AVF_LINK_SPEED_1000MB_SHIFT),
+	AVF_LINK_SPEED_10GB	= (1 << AVF_LINK_SPEED_10GB_SHIFT),
+	AVF_LINK_SPEED_40GB	= (1 << AVF_LINK_SPEED_40GB_SHIFT),
+	AVF_LINK_SPEED_20GB	= (1 << AVF_LINK_SPEED_20GB_SHIFT),
+	AVF_LINK_SPEED_25GB	= (1 << AVF_LINK_SPEED_25GB_SHIFT),
+};
+
+struct avf_aqc_module_desc {
+	u8 oui[3];
+	u8 reserved1;
+	u8 part_number[16];
+	u8 revision[4];
+	u8 reserved2[8];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_module_desc);
+
+struct avf_aq_get_phy_abilities_resp {
+	__le32	phy_type;       /* bitmap using the above enum for offsets */
+	u8	link_speed;     /* bitmap using the above enum bit patterns */
+	u8	abilities;
+#define AVF_AQ_PHY_FLAG_PAUSE_TX	0x01
+#define AVF_AQ_PHY_FLAG_PAUSE_RX	0x02
+#define AVF_AQ_PHY_FLAG_LOW_POWER	0x04
+#define AVF_AQ_PHY_LINK_ENABLED	0x08
+#define AVF_AQ_PHY_AN_ENABLED		0x10
+#define AVF_AQ_PHY_FLAG_MODULE_QUAL	0x20
+#define AVF_AQ_PHY_FEC_ABILITY_KR	0x40
+#define AVF_AQ_PHY_FEC_ABILITY_RS	0x80
+	__le16	eee_capability;
+#define AVF_AQ_EEE_100BASE_TX		0x0002
+#define AVF_AQ_EEE_1000BASE_T		0x0004
+#define AVF_AQ_EEE_10GBASE_T		0x0008
+#define AVF_AQ_EEE_1000BASE_KX		0x0010
+#define AVF_AQ_EEE_10GBASE_KX4		0x0020
+#define AVF_AQ_EEE_10GBASE_KR		0x0040
+	__le32	eeer_val;
+	u8	d3_lpan;
+#define AVF_AQ_SET_PHY_D3_LPAN_ENA	0x01
+	u8	phy_type_ext;
+#define AVF_AQ_PHY_TYPE_EXT_25G_KR	0x01
+#define AVF_AQ_PHY_TYPE_EXT_25G_CR	0x02
+#define AVF_AQ_PHY_TYPE_EXT_25G_SR	0x04
+#define AVF_AQ_PHY_TYPE_EXT_25G_LR	0x08
+#define AVF_AQ_PHY_TYPE_EXT_25G_AOC	0x10
+#define AVF_AQ_PHY_TYPE_EXT_25G_ACC	0x20
+	u8	fec_cfg_curr_mod_ext_info;
+#define AVF_AQ_ENABLE_FEC_KR		0x01
+#define AVF_AQ_ENABLE_FEC_RS		0x02
+#define AVF_AQ_REQUEST_FEC_KR		0x04
+#define AVF_AQ_REQUEST_FEC_RS		0x08
+#define AVF_AQ_ENABLE_FEC_AUTO		0x10
+#define AVF_AQ_FEC
+#define AVF_AQ_MODULE_TYPE_EXT_MASK	0xE0
+#define AVF_AQ_MODULE_TYPE_EXT_SHIFT	5
+
+	u8	ext_comp_code;
+	u8	phy_id[4];
+	u8	module_type[3];
+	u8	qualified_module_count;
+#define AVF_AQ_PHY_MAX_QMS		16
+	struct avf_aqc_module_desc	qualified_module[AVF_AQ_PHY_MAX_QMS];
+};
+
+AVF_CHECK_STRUCT_LEN(0x218, avf_aq_get_phy_abilities_resp);
+
+/* Set PHY Config (direct 0x0601) */
+struct avf_aq_set_phy_config { /* same bits as above in all */
+	__le32	phy_type;
+	u8	link_speed;
+	u8	abilities;
+/* bits 0-2 use the values from get_phy_abilities_resp */
+#define AVF_AQ_PHY_ENABLE_LINK		0x08
+#define AVF_AQ_PHY_ENABLE_AN		0x10
+#define AVF_AQ_PHY_ENABLE_ATOMIC_LINK	0x20
+	__le16	eee_capability;
+	__le32	eeer;
+	u8	low_power_ctrl;
+	u8	phy_type_ext;
+	u8	fec_config;
+#define AVF_AQ_SET_FEC_ABILITY_KR	BIT(0)
+#define AVF_AQ_SET_FEC_ABILITY_RS	BIT(1)
+#define AVF_AQ_SET_FEC_REQUEST_KR	BIT(2)
+#define AVF_AQ_SET_FEC_REQUEST_RS	BIT(3)
+#define AVF_AQ_SET_FEC_AUTO		BIT(4)
+#define AVF_AQ_PHY_FEC_CONFIG_SHIFT	0x0
+#define AVF_AQ_PHY_FEC_CONFIG_MASK	(0x1F << AVF_AQ_PHY_FEC_CONFIG_SHIFT)
+	u8	reserved;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_set_phy_config);
+
+/* Set MAC Config command data structure (direct 0x0603) */
+struct avf_aq_set_mac_config {
+	__le16	max_frame_size;
+	u8	params;
+#define AVF_AQ_SET_MAC_CONFIG_CRC_EN		0x04
+#define AVF_AQ_SET_MAC_CONFIG_PACING_MASK	0x78
+#define AVF_AQ_SET_MAC_CONFIG_PACING_SHIFT	3
+#define AVF_AQ_SET_MAC_CONFIG_PACING_NONE	0x0
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1B_13TX	0xF
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_9TX	0x9
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_4TX	0x8
+#define AVF_AQ_SET_MAC_CONFIG_PACING_3DW_7TX	0x7
+#define AVF_AQ_SET_MAC_CONFIG_PACING_2DW_3TX	0x6
+#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_1TX	0x5
+#define AVF_AQ_SET_MAC_CONFIG_PACING_3DW_2TX	0x4
+#define AVF_AQ_SET_MAC_CONFIG_PACING_7DW_3TX	0x3
+#define AVF_AQ_SET_MAC_CONFIG_PACING_4DW_1TX	0x2
+#define AVF_AQ_SET_MAC_CONFIG_PACING_9DW_1TX	0x1
+	u8	tx_timer_priority; /* bitmap */
+	__le16	tx_timer_value;
+	__le16	fc_refresh_threshold;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aq_set_mac_config);
+
+/* Restart Auto-Negotiation (direct 0x605) */
+struct avf_aqc_set_link_restart_an {
+	u8	command;
+#define AVF_AQ_PHY_RESTART_AN	0x02
+#define AVF_AQ_PHY_LINK_ENABLE	0x04
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_link_restart_an);
+
+/* Get Link Status cmd & response data structure (direct 0x0607) */
+struct avf_aqc_get_link_status {
+	__le16	command_flags; /* only field set on command */
+#define AVF_AQ_LSE_MASK		0x3
+#define AVF_AQ_LSE_NOP			0x0
+#define AVF_AQ_LSE_DISABLE		0x2
+#define AVF_AQ_LSE_ENABLE		0x3
+/* only response uses this flag */
+#define AVF_AQ_LSE_IS_ENABLED		0x1
+	u8	phy_type;    /* avf_aq_phy_type   */
+	u8	link_speed;  /* avf_aq_link_speed */
+	u8	link_info;
+#define AVF_AQ_LINK_UP			0x01    /* obsolete */
+#define AVF_AQ_LINK_UP_FUNCTION	0x01
+#define AVF_AQ_LINK_FAULT		0x02
+#define AVF_AQ_LINK_FAULT_TX		0x04
+#define AVF_AQ_LINK_FAULT_RX		0x08
+#define AVF_AQ_LINK_FAULT_REMOTE	0x10
+#define AVF_AQ_LINK_UP_PORT		0x20
+#define AVF_AQ_MEDIA_AVAILABLE		0x40
+#define AVF_AQ_SIGNAL_DETECT		0x80
+	u8	an_info;
+#define AVF_AQ_AN_COMPLETED		0x01
+#define AVF_AQ_LP_AN_ABILITY		0x02
+#define AVF_AQ_PD_FAULT		0x04
+#define AVF_AQ_FEC_EN			0x08
+#define AVF_AQ_PHY_LOW_POWER		0x10
+#define AVF_AQ_LINK_PAUSE_TX		0x20
+#define AVF_AQ_LINK_PAUSE_RX		0x40
+#define AVF_AQ_QUALIFIED_MODULE	0x80
+	u8	ext_info;
+#define AVF_AQ_LINK_PHY_TEMP_ALARM	0x01
+#define AVF_AQ_LINK_XCESSIVE_ERRORS	0x02
+#define AVF_AQ_LINK_TX_SHIFT		0x02
+#define AVF_AQ_LINK_TX_MASK		(0x03 << AVF_AQ_LINK_TX_SHIFT)
+#define AVF_AQ_LINK_TX_ACTIVE		0x00
+#define AVF_AQ_LINK_TX_DRAINED		0x01
+#define AVF_AQ_LINK_TX_FLUSHED		0x03
+#define AVF_AQ_LINK_FORCED_40G		0x10
+/* 25G Error Codes */
+#define AVF_AQ_25G_NO_ERR		0X00
+#define AVF_AQ_25G_NOT_PRESENT		0X01
+#define AVF_AQ_25G_NVM_CRC_ERR		0X02
+#define AVF_AQ_25G_SBUS_UCODE_ERR	0X03
+#define AVF_AQ_25G_SERDES_UCODE_ERR	0X04
+#define AVF_AQ_25G_NIMB_UCODE_ERR	0X05
+	u8	loopback; /* use defines from avf_aqc_set_lb_mode */
+/* Since firmware API 1.7 loopback field keeps power class info as well */
+#define AVF_AQ_LOOPBACK_MASK		0x07
+#define AVF_AQ_PWR_CLASS_SHIFT_LB	6
+#define AVF_AQ_PWR_CLASS_MASK_LB	(0x03 << AVF_AQ_PWR_CLASS_SHIFT_LB)
+	__le16	max_frame_size;
+	u8	config;
+#define AVF_AQ_CONFIG_FEC_KR_ENA	0x01
+#define AVF_AQ_CONFIG_FEC_RS_ENA	0x02
+#define AVF_AQ_CONFIG_CRC_ENA		0x04
+#define AVF_AQ_CONFIG_PACING_MASK	0x78
+	union {
+		struct {
+			u8	power_desc;
+#define AVF_AQ_LINK_POWER_CLASS_1	0x00
+#define AVF_AQ_LINK_POWER_CLASS_2	0x01
+#define AVF_AQ_LINK_POWER_CLASS_3	0x02
+#define AVF_AQ_LINK_POWER_CLASS_4	0x03
+#define AVF_AQ_PWR_CLASS_MASK		0x03
+			u8	reserved[4];
+		};
+		struct {
+			u8	link_type[4];
+			u8	link_type_ext;
+		};
+	};
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_link_status);
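[Editor's aside, not part of the patch: given the packed loopback byte noted above for
firmware API 1.7 and later, a consumer of this response might split it roughly as below,
where resp points to a struct avf_aqc_get_link_status; the variable names are illustrative.]

	u8 lb_mode   = resp->loopback & AVF_AQ_LOOPBACK_MASK;
	u8 pwr_class = (resp->loopback & AVF_AQ_PWR_CLASS_MASK_LB) >>
		       AVF_AQ_PWR_CLASS_SHIFT_LB;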
+
+/* Set event mask command (direct 0x613) */
+struct avf_aqc_set_phy_int_mask {
+	u8	reserved[8];
+	__le16	event_mask;
+#define AVF_AQ_EVENT_LINK_UPDOWN	0x0002
+#define AVF_AQ_EVENT_MEDIA_NA		0x0004
+#define AVF_AQ_EVENT_LINK_FAULT	0x0008
+#define AVF_AQ_EVENT_PHY_TEMP_ALARM	0x0010
+#define AVF_AQ_EVENT_EXCESSIVE_ERRORS	0x0020
+#define AVF_AQ_EVENT_SIGNAL_DETECT	0x0040
+#define AVF_AQ_EVENT_AN_COMPLETED	0x0080
+#define AVF_AQ_EVENT_MODULE_QUAL_FAIL	0x0100
+#define AVF_AQ_EVENT_PORT_TX_SUSPENDED	0x0200
+	u8	reserved1[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_phy_int_mask);
+
+/* Get Local AN advt register (direct 0x0614)
+ * Set Local AN advt register (direct 0x0615)
+ * Get Link Partner AN advt register (direct 0x0616)
+ */
+struct avf_aqc_an_advt_reg {
+	__le32	local_an_reg0;
+	__le16	local_an_reg1;
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_an_advt_reg);
+
+/* Set Loopback mode (0x0618) */
+struct avf_aqc_set_lb_mode {
+	u8	lb_level;
+#define AVF_AQ_LB_NONE	0
+#define AVF_AQ_LB_MAC	1
+#define AVF_AQ_LB_SERDES	2
+#define AVF_AQ_LB_PHY_INT	3
+#define AVF_AQ_LB_PHY_EXT	4
+#define AVF_AQ_LB_CPVL_PCS	5
+#define AVF_AQ_LB_CPVL_EXT	6
+#define AVF_AQ_LB_PHY_LOCAL	0x01
+#define AVF_AQ_LB_PHY_REMOTE	0x02
+#define AVF_AQ_LB_MAC_LOCAL	0x04
+	u8	lb_type;
+#define AVF_AQ_LB_LOCAL	0
+#define AVF_AQ_LB_FAR	0x01
+	u8	speed;
+#define AVF_AQ_LB_SPEED_NONE	0
+#define AVF_AQ_LB_SPEED_1G	1
+#define AVF_AQ_LB_SPEED_10G	2
+#define AVF_AQ_LB_SPEED_40G	3
+#define AVF_AQ_LB_SPEED_20G	4
+	u8	force_speed;
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_lb_mode);
+
+/* Set PHY Debug command (0x0622) */
+struct avf_aqc_set_phy_debug {
+	u8	command_flags;
+#define AVF_AQ_PHY_DEBUG_RESET_INTERNAL	0x02
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT	2
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_MASK	(0x03 << \
+					AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT)
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_NONE	0x00
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_HARD	0x01
+#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SOFT	0x02
+/* Disable link manageability on a single port */
+#define AVF_AQ_PHY_DEBUG_DISABLE_LINK_FW	0x10
+/* Disabling link manageability on all ports needs both bits 4 and 5 */
+#define AVF_AQ_PHY_DEBUG_DISABLE_ALL_LINK_FW	0x20
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_phy_debug);
+
+enum avf_aq_phy_reg_type {
+	AVF_AQC_PHY_REG_INTERNAL	= 0x1,
+	AVF_AQC_PHY_REG_EXERNAL_BASET	= 0x2,
+	AVF_AQC_PHY_REG_EXERNAL_MODULE	= 0x3
+};
+
+/* Run PHY Activity (0x0626) */
+struct avf_aqc_run_phy_activity {
+	__le16  activity_id;
+	u8      flags;
+	u8      reserved1;
+	__le32  control;
+	__le32  data;
+	u8      reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_run_phy_activity);
+
+/* Set PHY Register command (0x0628) */
+/* Get PHY Register command (0x0629) */
+struct avf_aqc_phy_register_access {
+	u8	phy_interface;
+#define AVF_AQ_PHY_REG_ACCESS_INTERNAL	0
+#define AVF_AQ_PHY_REG_ACCESS_EXTERNAL	1
+#define AVF_AQ_PHY_REG_ACCESS_EXTERNAL_MODULE	2
+	u8	dev_addres;
+	u8	reserved1[2];
+	__le32	reg_address;
+	__le32	reg_value;
+	u8	reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_phy_register_access);
+
+/* NVM Read command (indirect 0x0701)
+ * NVM Erase commands (direct 0x0702)
+ * NVM Update commands (indirect 0x0703)
+ */
+struct avf_aqc_nvm_update {
+	u8	command_flags;
+#define AVF_AQ_NVM_LAST_CMD			0x01
+#define AVF_AQ_NVM_FLASH_ONLY			0x80
+#define AVF_AQ_NVM_PRESERVATION_FLAGS_SHIFT	1
+#define AVF_AQ_NVM_PRESERVATION_FLAGS_MASK	0x03
+#define AVF_AQ_NVM_PRESERVATION_FLAGS_SELECTED	0x03
+#define AVF_AQ_NVM_PRESERVATION_FLAGS_ALL	0x01
+	u8	module_pointer;
+	__le16	length;
+	__le32	offset;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_update);
+
+/* NVM Config Read (indirect 0x0704) */
+struct avf_aqc_nvm_config_read {
+	__le16	cmd_flags;
+#define AVF_AQ_ANVM_SINGLE_OR_MULTIPLE_FEATURES_MASK	1
+#define AVF_AQ_ANVM_READ_SINGLE_FEATURE		0
+#define AVF_AQ_ANVM_READ_MULTIPLE_FEATURES		1
+	__le16	element_count;
+	__le16	element_id;	/* Feature/field ID */
+	__le16	element_id_msw;	/* MSWord of field ID */
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_config_read);
+
+/* NVM Config Write (indirect 0x0705) */
+struct avf_aqc_nvm_config_write {
+	__le16	cmd_flags;
+	__le16	element_count;
+	u8	reserved[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_config_write);
+
+/* Used for 0x0704 as well as for 0x0705 commands */
+#define AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT		1
+#define AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_MASK \
+				(1 << AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+#define AVF_AQ_ANVM_FEATURE		0
+#define AVF_AQ_ANVM_IMMEDIATE_FIELD	(1 << AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+struct avf_aqc_nvm_config_data_feature {
+	__le16 feature_id;
+#define AVF_AQ_ANVM_FEATURE_OPTION_OEM_ONLY		0x01
+#define AVF_AQ_ANVM_FEATURE_OPTION_DWORD_MAP		0x08
+#define AVF_AQ_ANVM_FEATURE_OPTION_POR_CSR		0x10
+	__le16 feature_options;
+	__le16 feature_selection;
+};
+
+AVF_CHECK_STRUCT_LEN(0x6, avf_aqc_nvm_config_data_feature);
+
+struct avf_aqc_nvm_config_data_immediate_field {
+	__le32 field_id;
+	__le32 field_value;
+	__le16 field_options;
+	__le16 reserved;
+};
+
+AVF_CHECK_STRUCT_LEN(0xc, avf_aqc_nvm_config_data_immediate_field);
+
+/* OEM Post Update (indirect 0x0720)
+ * no command data struct used
+ */
+struct avf_aqc_nvm_oem_post_update {
+#define AVF_AQ_NVM_OEM_POST_UPDATE_EXTERNAL_DATA	0x01
+	u8 sel_data;
+	u8 reserved[7];
+};
+
+AVF_CHECK_STRUCT_LEN(0x8, avf_aqc_nvm_oem_post_update);
+
+struct avf_aqc_nvm_oem_post_update_buffer {
+	u8 str_len;
+	u8 dev_addr;
+	__le16 eeprom_addr;
+	u8 data[36];
+};
+
+AVF_CHECK_STRUCT_LEN(0x28, avf_aqc_nvm_oem_post_update_buffer);
+
+/* Thermal Sensor (indirect 0x0721)
+ *     read or set thermal sensor configs and values
+ *     takes a sensor and command specific data buffer, not detailed here
+ */
+struct avf_aqc_thermal_sensor {
+	u8 sensor_action;
+#define AVF_AQ_THERMAL_SENSOR_READ_CONFIG	0
+#define AVF_AQ_THERMAL_SENSOR_SET_CONFIG	1
+#define AVF_AQ_THERMAL_SENSOR_READ_TEMP	2
+	u8 reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_thermal_sensor);
+
+/* Send to PF command (indirect 0x0801) id is only used by PF
+ * Send to VF command (indirect 0x0802) id is only used by PF
+ * Send to Peer PF command (indirect 0x0803)
+ */
+struct avf_aqc_pf_vf_message {
+	__le32	id;
+	u8	reserved[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_pf_vf_message);
+
+/* Alternate structure */
+
+/* Direct write (direct 0x0900)
+ * Direct read (direct 0x0902)
+ */
+struct avf_aqc_alternate_write {
+	__le32 address0;
+	__le32 data0;
+	__le32 address1;
+	__le32 data1;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_write);
+
+/* Indirect write (indirect 0x0901)
+ * Indirect read (indirect 0x0903)
+ */
+
+struct avf_aqc_alternate_ind_write {
+	__le32 address;
+	__le32 length;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_ind_write);
+
+/* Done alternate write (direct 0x0904)
+ * uses avf_aq_desc
+ */
+struct avf_aqc_alternate_write_done {
+	__le16	cmd_flags;
+#define AVF_AQ_ALTERNATE_MODE_BIOS_MASK	1
+#define AVF_AQ_ALTERNATE_MODE_BIOS_LEGACY	0
+#define AVF_AQ_ALTERNATE_MODE_BIOS_UEFI	1
+#define AVF_AQ_ALTERNATE_RESET_NEEDED		2
+	u8	reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_write_done);
+
+/* Set OEM mode (direct 0x0905) */
+struct avf_aqc_alternate_set_mode {
+	__le32	mode;
+#define AVF_AQ_ALTERNATE_MODE_NONE	0
+#define AVF_AQ_ALTERNATE_MODE_OEM	1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_set_mode);
+
+/* Clear port Alternate RAM (direct 0x0906) uses avf_aq_desc */
+
+/* async events 0x10xx */
+
+/* Lan Queue Overflow Event (direct, 0x1001) */
+struct avf_aqc_lan_overflow {
+	__le32	prtdcb_rupto;
+	__le32	otx_ctl;
+	u8	reserved[8];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lan_overflow);
+
+/* Get LLDP MIB (indirect 0x0A00) */
+struct avf_aqc_lldp_get_mib {
+	u8	type;
+	u8	reserved1;
+#define AVF_AQ_LLDP_MIB_TYPE_MASK		0x3
+#define AVF_AQ_LLDP_MIB_LOCAL			0x0
+#define AVF_AQ_LLDP_MIB_REMOTE			0x1
+#define AVF_AQ_LLDP_MIB_LOCAL_AND_REMOTE	0x2
+#define AVF_AQ_LLDP_BRIDGE_TYPE_MASK		0xC
+#define AVF_AQ_LLDP_BRIDGE_TYPE_SHIFT		0x2
+#define AVF_AQ_LLDP_BRIDGE_TYPE_NEAREST_BRIDGE	0x0
+#define AVF_AQ_LLDP_BRIDGE_TYPE_NON_TPMR	0x1
+#define AVF_AQ_LLDP_TX_SHIFT			0x4
+#define AVF_AQ_LLDP_TX_MASK			(0x03 << AVF_AQ_LLDP_TX_SHIFT)
+/* TX pause flags use AVF_AQ_LINK_TX_* above */
+	__le16	local_len;
+	__le16	remote_len;
+	u8	reserved2[2];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_get_mib);
+
+/* Configure LLDP MIB Change Event (direct 0x0A01)
+ * also used for the event (with type in the command field)
+ */
+struct avf_aqc_lldp_update_mib {
+	u8	command;
+#define AVF_AQ_LLDP_MIB_UPDATE_ENABLE	0x0
+#define AVF_AQ_LLDP_MIB_UPDATE_DISABLE	0x1
+	u8	reserved[7];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_update_mib);
+
+/* Add LLDP TLV (indirect 0x0A02)
+ * Delete LLDP TLV (indirect 0x0A04)
+ */
+struct avf_aqc_lldp_add_tlv {
+	u8	type; /* only nearest bridge and non-TPMR from 0x0A00 */
+	u8	reserved1[1];
+	__le16	len;
+	u8	reserved2[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_add_tlv);
+
+/* Update LLDP TLV (indirect 0x0A03) */
+struct avf_aqc_lldp_update_tlv {
+	u8	type; /* only nearest bridge and non-TPMR from 0x0A00 */
+	u8	reserved;
+	__le16	old_len;
+	__le16	new_offset;
+	__le16	new_len;
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_update_tlv);
+
+/* Stop LLDP (direct 0x0A05) */
+struct avf_aqc_lldp_stop {
+	u8	command;
+#define AVF_AQ_LLDP_AGENT_STOP		0x0
+#define AVF_AQ_LLDP_AGENT_SHUTDOWN	0x1
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_stop);
+
+/* Start LLDP (direct 0x0A06) */
+
+struct avf_aqc_lldp_start {
+	u8	command;
+#define AVF_AQ_LLDP_AGENT_START	0x1
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_start);
+
+/* Set DCB (direct 0x0303) */
+struct avf_aqc_set_dcb_parameters {
+	u8 command;
+#define AVF_AQ_DCB_SET_AGENT	0x1
+#define AVF_DCB_VALID		0x1
+	u8 valid_flags;
+	u8 reserved[14];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_set_dcb_parameters);
+
+/* Get CEE DCBX Oper Config (0x0A07)
+ * uses the generic descriptor struct
+ * returns below as indirect response
+ */
+
+#define AVF_AQC_CEE_APP_FCOE_SHIFT	0x0
+#define AVF_AQC_CEE_APP_FCOE_MASK	(0x7 << AVF_AQC_CEE_APP_FCOE_SHIFT)
+#define AVF_AQC_CEE_APP_ISCSI_SHIFT	0x3
+#define AVF_AQC_CEE_APP_ISCSI_MASK	(0x7 << AVF_AQC_CEE_APP_ISCSI_SHIFT)
+#define AVF_AQC_CEE_APP_FIP_SHIFT	0x8
+#define AVF_AQC_CEE_APP_FIP_MASK	(0x7 << AVF_AQC_CEE_APP_FIP_SHIFT)
+
+#define AVF_AQC_CEE_PG_STATUS_SHIFT	0x0
+#define AVF_AQC_CEE_PG_STATUS_MASK	(0x7 << AVF_AQC_CEE_PG_STATUS_SHIFT)
+#define AVF_AQC_CEE_PFC_STATUS_SHIFT	0x3
+#define AVF_AQC_CEE_PFC_STATUS_MASK	(0x7 << AVF_AQC_CEE_PFC_STATUS_SHIFT)
+#define AVF_AQC_CEE_APP_STATUS_SHIFT	0x8
+#define AVF_AQC_CEE_APP_STATUS_MASK	(0x7 << AVF_AQC_CEE_APP_STATUS_SHIFT)
+#define AVF_AQC_CEE_FCOE_STATUS_SHIFT	0x8
+#define AVF_AQC_CEE_FCOE_STATUS_MASK	(0x7 << AVF_AQC_CEE_FCOE_STATUS_SHIFT)
+#define AVF_AQC_CEE_ISCSI_STATUS_SHIFT	0xB
+#define AVF_AQC_CEE_ISCSI_STATUS_MASK	(0x7 << AVF_AQC_CEE_ISCSI_STATUS_SHIFT)
+#define AVF_AQC_CEE_FIP_STATUS_SHIFT	0x10
+#define AVF_AQC_CEE_FIP_STATUS_MASK	(0x7 << AVF_AQC_CEE_FIP_STATUS_SHIFT)
+
+/* struct avf_aqc_get_cee_dcb_cfg_v1_resp was originally defined with
+ * word boundary layout issues, which the Linux compilers silently deal
+ * with by adding padding, making the actual struct larger than designed.
+ * However, the FW compiler for the NIC is less lenient and complains
+ * about the struct.  Hence, the struct defined here has an extra byte in
+ * fields reserved3 and reserved4 to directly acknowledge that padding,
+ * and the new length is used in the length check macro.
+ */
+struct avf_aqc_get_cee_dcb_cfg_v1_resp {
+	u8	reserved1;
+	u8	oper_num_tc;
+	u8	oper_prio_tc[4];
+	u8	reserved2;
+	u8	oper_tc_bw[8];
+	u8	oper_pfc_en;
+	u8	reserved3[2];
+	__le16	oper_app_prio;
+	u8	reserved4[2];
+	__le16	tlv_status;
+};
+
+AVF_CHECK_STRUCT_LEN(0x18, avf_aqc_get_cee_dcb_cfg_v1_resp);
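[Editor's aside, not part of the patch: a quick cross-check of the padding note above,
assuming the usual 2-byte alignment for __le16 members. With the extra bytes the struct is
1 + 1 + 4 + 1 + 8 + 1 + 2 + 2 + 2 + 2 = 24 bytes = 0x18, which is exactly what the length
check asserts; with single-byte reserved3/reserved4 the declared size would be 22, but the
compiler would pad each __le16 to a 2-byte boundary and still produce 24.]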
+
+struct avf_aqc_get_cee_dcb_cfg_resp {
+	u8	oper_num_tc;
+	u8	oper_prio_tc[4];
+	u8	oper_tc_bw[8];
+	u8	oper_pfc_en;
+	__le16	oper_app_prio;
+	__le32	tlv_status;
+	u8	reserved[12];
+};
+
+AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_get_cee_dcb_cfg_resp);
+
+/*	Set Local LLDP MIB (indirect 0x0A08)
+ *	Used to replace the local MIB of a given LLDP agent, e.g. DCBX
+ */
+struct avf_aqc_lldp_set_local_mib {
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT	0
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_MASK	(1 << \
+					SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_LOCAL_MIB	0x0
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT	(1)
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_MASK	(1 << \
+				SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS		0x1
+	u8	type;
+	u8	reserved0;
+	__le16	length;
+	u8	reserved1[4];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_set_local_mib);
+
+struct avf_aqc_lldp_set_local_mib_resp {
+#define SET_LOCAL_MIB_RESP_EVENT_TRIGGERED_MASK      0x01
+	u8  status;
+	u8  reserved[15];
+};
+
+AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_lldp_set_local_mib_resp);
+
+/*	Stop/Start LLDP Agent (direct 0x0A09)
+ *	Used for stopping/starting a specific LLDP agent, e.g. DCBX
+ */
+struct avf_aqc_lldp_stop_start_specific_agent {
+#define AVF_AQC_START_SPECIFIC_AGENT_SHIFT	0
+#define AVF_AQC_START_SPECIFIC_AGENT_MASK \
+				(1 << AVF_AQC_START_SPECIFIC_AGENT_SHIFT)
+	u8	command;
+	u8	reserved[15];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_stop_start_specific_agent);
+
+/* Add Udp Tunnel command and completion (direct 0x0B00) */
+struct avf_aqc_add_udp_tunnel {
+	__le16	udp_port;
+	u8	reserved0[3];
+	u8	protocol_type;
+#define AVF_AQC_TUNNEL_TYPE_VXLAN	0x00
+#define AVF_AQC_TUNNEL_TYPE_NGE	0x01
+#define AVF_AQC_TUNNEL_TYPE_TEREDO	0x10
+#define AVF_AQC_TUNNEL_TYPE_VXLAN_GPE	0x11
+	u8	reserved1[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_udp_tunnel);
+
+struct avf_aqc_add_udp_tunnel_completion {
+	__le16	udp_port;
+	u8	filter_entry_index;
+	u8	multiple_pfs;
+#define AVF_AQC_SINGLE_PF		0x0
+#define AVF_AQC_MULTIPLE_PFS		0x1
+	u8	total_filters;
+	u8	reserved[11];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_add_udp_tunnel_completion);
+
+/* remove UDP Tunnel command (0x0B01) */
+struct avf_aqc_remove_udp_tunnel {
+	u8	reserved[2];
+	u8	index; /* 0 to 15 */
+	u8	reserved2[13];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_remove_udp_tunnel);
+
+struct avf_aqc_del_udp_tunnel_completion {
+	__le16	udp_port;
+	u8	index; /* 0 to 15 */
+	u8	multiple_pfs;
+	u8	total_filters_used;
+	u8	reserved1[11];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_del_udp_tunnel_completion);
+
+struct avf_aqc_get_set_rss_key {
+#define AVF_AQC_SET_RSS_KEY_VSI_VALID		(0x1 << 15)
+#define AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT	0
+#define AVF_AQC_SET_RSS_KEY_VSI_ID_MASK	(0x3FF << \
+					AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT)
+	__le16	vsi_id;
+	u8	reserved[6];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_set_rss_key);
+
+struct avf_aqc_get_set_rss_key_data {
+	u8 standard_rss_key[0x28];
+	u8 extended_hash_key[0xc];
+};
+
+AVF_CHECK_STRUCT_LEN(0x34, avf_aqc_get_set_rss_key_data);
+
+struct  avf_aqc_get_set_rss_lut {
+#define AVF_AQC_SET_RSS_LUT_VSI_VALID		(0x1 << 15)
+#define AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT	0
+#define AVF_AQC_SET_RSS_LUT_VSI_ID_MASK	(0x3FF << \
+					AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT)
+	__le16	vsi_id;
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT	0
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK	(0x1 << \
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
+
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI	0
+#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF	1
+	__le16	flags;
+	u8	reserved[4];
+	__le32	addr_high;
+	__le32	addr_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_get_set_rss_lut);
+
+/* tunnel key structure 0x0B10 */
+
+struct avf_aqc_tunnel_key_structure {
+	u8	key1_off;
+	u8	key2_off;
+	u8	key1_len;  /* 0 to 15 */
+	u8	key2_len;  /* 0 to 15 */
+	u8	flags;
+#define AVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDE	0x01
+/* response flags */
+#define AVF_AQC_TUNNEL_KEY_STRUCT_SUCCESS	0x01
+#define AVF_AQC_TUNNEL_KEY_STRUCT_MODIFIED	0x02
+#define AVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDDEN	0x03
+	u8	network_key_index;
+#define AVF_AQC_NETWORK_KEY_INDEX_VXLAN		0x0
+#define AVF_AQC_NETWORK_KEY_INDEX_NGE			0x1
+#define AVF_AQC_NETWORK_KEY_INDEX_FLEX_MAC_IN_UDP	0x2
+#define AVF_AQC_NETWORK_KEY_INDEX_GRE			0x3
+	u8	reserved[10];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_tunnel_key_structure);
+
+/* OEM mode commands (direct 0xFE0x) */
+struct avf_aqc_oem_param_change {
+	__le32	param_type;
+#define AVF_AQ_OEM_PARAM_TYPE_PF_CTL	0
+#define AVF_AQ_OEM_PARAM_TYPE_BW_CTL	1
+#define AVF_AQ_OEM_PARAM_MAC		2
+	__le32	param_value1;
+	__le16	param_value2;
+	u8	reserved[6];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_oem_param_change);
+
+struct avf_aqc_oem_state_change {
+	__le32	state;
+#define AVF_AQ_OEM_STATE_LINK_DOWN	0x0
+#define AVF_AQ_OEM_STATE_LINK_UP	0x1
+	u8	reserved[12];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_oem_state_change);
+
+/* Initialize OCSD (0xFE02, direct) */
+struct avf_aqc_opc_oem_ocsd_initialize {
+	u8 type_status;
+	u8 reserved1[3];
+	__le32 ocsd_memory_block_addr_high;
+	__le32 ocsd_memory_block_addr_low;
+	__le32 requested_update_interval;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_opc_oem_ocsd_initialize);
+
+/* Initialize OCBB  (0xFE03, direct) */
+struct avf_aqc_opc_oem_ocbb_initialize {
+	u8 type_status;
+	u8 reserved1[3];
+	__le32 ocbb_memory_block_addr_high;
+	__le32 ocbb_memory_block_addr_low;
+	u8 reserved2[4];
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_opc_oem_ocbb_initialize);
+
+/* debug commands */
+
+/* get device id (0xFF00) uses the generic structure */
+
+/* set test mode (0xFF01, internal) */
+
+struct avf_acq_set_test_mode {
+	u8	mode;
+#define AVF_AQ_TEST_PARTIAL	0
+#define AVF_AQ_TEST_FULL	1
+#define AVF_AQ_TEST_NVM	2
+	u8	reserved[3];
+	u8	command;
+#define AVF_AQ_TEST_OPEN	0
+#define AVF_AQ_TEST_CLOSE	1
+#define AVF_AQ_TEST_INC	2
+	u8	reserved2[3];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_acq_set_test_mode);
+
+/* Debug Read Register command (0xFF03)
+ * Debug Write Register command (0xFF04)
+ */
+struct avf_aqc_debug_reg_read_write {
+	__le32 reserved;
+	__le32 address;
+	__le32 value_high;
+	__le32 value_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_reg_read_write);
+
+/* Scatter/gather Reg Read  (indirect 0xFF05)
+ * Scatter/gather Reg Write (indirect 0xFF06)
+ */
+
+/* avf_aq_desc is used for the command */
+struct avf_aqc_debug_reg_sg_element_data {
+	__le32 address;
+	__le32 value;
+};
+
+/* Debug Modify register (direct 0xFF07) */
+struct avf_aqc_debug_modify_reg {
+	__le32 address;
+	__le32 value;
+	__le32 clear_mask;
+	__le32 set_mask;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_modify_reg);
+
+/* dump internal data (0xFF08, indirect) */
+
+#define AVF_AQ_CLUSTER_ID_AUX		0
+#define AVF_AQ_CLUSTER_ID_SWITCH_FLU	1
+#define AVF_AQ_CLUSTER_ID_TXSCHED	2
+#define AVF_AQ_CLUSTER_ID_HMC		3
+#define AVF_AQ_CLUSTER_ID_MAC0		4
+#define AVF_AQ_CLUSTER_ID_MAC1		5
+#define AVF_AQ_CLUSTER_ID_MAC2		6
+#define AVF_AQ_CLUSTER_ID_MAC3		7
+#define AVF_AQ_CLUSTER_ID_DCB		8
+#define AVF_AQ_CLUSTER_ID_EMP_MEM	9
+#define AVF_AQ_CLUSTER_ID_PKT_BUF	10
+#define AVF_AQ_CLUSTER_ID_ALTRAM	11
+
+struct avf_aqc_debug_dump_internals {
+	u8	cluster_id;
+	u8	table_id;
+	__le16	data_size;
+	__le32	idx;
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_dump_internals);
+
+struct avf_aqc_debug_modify_internals {
+	u8	cluster_id;
+	u8	cluster_specific_params[7];
+	__le32	address_high;
+	__le32	address_low;
+};
+
+AVF_CHECK_CMD_LENGTH(avf_aqc_debug_modify_internals);
+
+#endif /* _AVF_ADMINQ_CMD_H_ */
diff --git a/drivers/net/avf/base/avf_alloc.h b/drivers/net/avf/base/avf_alloc.h
new file mode 100644
index 0000000..21e29bd
--- /dev/null
+++ b/drivers/net/avf/base/avf_alloc.h
@@ -0,0 +1,65 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_ALLOC_H_
+#define _AVF_ALLOC_H_
+
+struct avf_hw;
+
+/* Memory allocation types */
+enum avf_memory_type {
+	avf_mem_arq_buf = 0,		/* ARQ indirect command buffer */
+	avf_mem_asq_buf = 1,
+	avf_mem_atq_buf = 2,		/* ATQ indirect command buffer */
+	avf_mem_arq_ring = 3,		/* ARQ descriptor ring */
+	avf_mem_atq_ring = 4,		/* ATQ descriptor ring */
+	avf_mem_pd = 5,		/* Page Descriptor */
+	avf_mem_bp = 6,		/* Backing Page - 4KB */
+	avf_mem_bp_jumbo = 7,		/* Backing Page - > 4KB */
+	avf_mem_reserved
+};
+
+/* prototype for functions used for dynamic memory allocation */
+enum avf_status_code avf_allocate_dma_mem(struct avf_hw *hw,
+					    struct avf_dma_mem *mem,
+					    enum avf_memory_type type,
+					    u64 size, u32 alignment);
+enum avf_status_code avf_free_dma_mem(struct avf_hw *hw,
+					struct avf_dma_mem *mem);
+enum avf_status_code avf_allocate_virt_mem(struct avf_hw *hw,
+					     struct avf_virt_mem *mem,
+					     u32 size);
+enum avf_status_code avf_free_virt_mem(struct avf_hw *hw,
+					 struct avf_virt_mem *mem);
+
+#endif /* _AVF_ALLOC_H_ */
diff --git a/drivers/net/avf/base/avf_common.c b/drivers/net/avf/base/avf_common.c
new file mode 100644
index 0000000..bbaadad
--- /dev/null
+++ b/drivers/net/avf/base/avf_common.c
@@ -0,0 +1,1845 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "avf_type.h"
+#include "avf_adminq.h"
+#include "avf_prototype.h"
+#include "virtchnl.h"
+
+
+/**
+ * avf_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the mac type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ **/
+enum avf_status_code avf_set_mac_type(struct avf_hw *hw)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+
+	DEBUGFUNC("avf_set_mac_type\n");
+
+	if (hw->vendor_id == AVF_INTEL_VENDOR_ID) {
+		switch (hw->device_id) {
+	/* TODO: remove the undefined device IDs; need to work out how
+	 * to remove them from the shared code as well.
+	 */
+		case AVF_DEV_ID_ADAPTIVE_VF:
+			hw->mac.type = AVF_MAC_VF;
+			break;
+		default:
+			hw->mac.type = AVF_MAC_GENERIC;
+			break;
+		}
+	} else {
+		status = AVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	DEBUGOUT2("avf_set_mac_type found mac: %d, returns: %d\n",
+		  hw->mac.type, status);
+	return status;
+}
+
+/**
+ * avf_aq_str - convert AQ err code to a string
+ * @hw: pointer to the HW structure
+ * @aq_err: the AQ error code to convert
+ **/
+const char *avf_aq_str(struct avf_hw *hw, enum avf_admin_queue_err aq_err)
+{
+	switch (aq_err) {
+	case AVF_AQ_RC_OK:
+		return "OK";
+	case AVF_AQ_RC_EPERM:
+		return "AVF_AQ_RC_EPERM";
+	case AVF_AQ_RC_ENOENT:
+		return "AVF_AQ_RC_ENOENT";
+	case AVF_AQ_RC_ESRCH:
+		return "AVF_AQ_RC_ESRCH";
+	case AVF_AQ_RC_EINTR:
+		return "AVF_AQ_RC_EINTR";
+	case AVF_AQ_RC_EIO:
+		return "AVF_AQ_RC_EIO";
+	case AVF_AQ_RC_ENXIO:
+		return "AVF_AQ_RC_ENXIO";
+	case AVF_AQ_RC_E2BIG:
+		return "AVF_AQ_RC_E2BIG";
+	case AVF_AQ_RC_EAGAIN:
+		return "AVF_AQ_RC_EAGAIN";
+	case AVF_AQ_RC_ENOMEM:
+		return "AVF_AQ_RC_ENOMEM";
+	case AVF_AQ_RC_EACCES:
+		return "AVF_AQ_RC_EACCES";
+	case AVF_AQ_RC_EFAULT:
+		return "AVF_AQ_RC_EFAULT";
+	case AVF_AQ_RC_EBUSY:
+		return "AVF_AQ_RC_EBUSY";
+	case AVF_AQ_RC_EEXIST:
+		return "AVF_AQ_RC_EEXIST";
+	case AVF_AQ_RC_EINVAL:
+		return "AVF_AQ_RC_EINVAL";
+	case AVF_AQ_RC_ENOTTY:
+		return "AVF_AQ_RC_ENOTTY";
+	case AVF_AQ_RC_ENOSPC:
+		return "AVF_AQ_RC_ENOSPC";
+	case AVF_AQ_RC_ENOSYS:
+		return "AVF_AQ_RC_ENOSYS";
+	case AVF_AQ_RC_ERANGE:
+		return "AVF_AQ_RC_ERANGE";
+	case AVF_AQ_RC_EFLUSHED:
+		return "AVF_AQ_RC_EFLUSHED";
+	case AVF_AQ_RC_BAD_ADDR:
+		return "AVF_AQ_RC_BAD_ADDR";
+	case AVF_AQ_RC_EMODE:
+		return "AVF_AQ_RC_EMODE";
+	case AVF_AQ_RC_EFBIG:
+		return "AVF_AQ_RC_EFBIG";
+	}
+
+	snprintf(hw->err_str, sizeof(hw->err_str), "%d", aq_err);
+	return hw->err_str;
+}
+
+/**
+ * avf_stat_str - convert status err code to a string
+ * @hw: pointer to the HW structure
+ * @stat_err: the status error code to convert
+ **/
+const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err)
+{
+	switch (stat_err) {
+	case AVF_SUCCESS:
+		return "OK";
+	case AVF_ERR_NVM:
+		return "AVF_ERR_NVM";
+	case AVF_ERR_NVM_CHECKSUM:
+		return "AVF_ERR_NVM_CHECKSUM";
+	case AVF_ERR_PHY:
+		return "AVF_ERR_PHY";
+	case AVF_ERR_CONFIG:
+		return "AVF_ERR_CONFIG";
+	case AVF_ERR_PARAM:
+		return "AVF_ERR_PARAM";
+	case AVF_ERR_MAC_TYPE:
+		return "AVF_ERR_MAC_TYPE";
+	case AVF_ERR_UNKNOWN_PHY:
+		return "AVF_ERR_UNKNOWN_PHY";
+	case AVF_ERR_LINK_SETUP:
+		return "AVF_ERR_LINK_SETUP";
+	case AVF_ERR_ADAPTER_STOPPED:
+		return "AVF_ERR_ADAPTER_STOPPED";
+	case AVF_ERR_INVALID_MAC_ADDR:
+		return "AVF_ERR_INVALID_MAC_ADDR";
+	case AVF_ERR_DEVICE_NOT_SUPPORTED:
+		return "AVF_ERR_DEVICE_NOT_SUPPORTED";
+	case AVF_ERR_MASTER_REQUESTS_PENDING:
+		return "AVF_ERR_MASTER_REQUESTS_PENDING";
+	case AVF_ERR_INVALID_LINK_SETTINGS:
+		return "AVF_ERR_INVALID_LINK_SETTINGS";
+	case AVF_ERR_AUTONEG_NOT_COMPLETE:
+		return "AVF_ERR_AUTONEG_NOT_COMPLETE";
+	case AVF_ERR_RESET_FAILED:
+		return "AVF_ERR_RESET_FAILED";
+	case AVF_ERR_SWFW_SYNC:
+		return "AVF_ERR_SWFW_SYNC";
+	case AVF_ERR_NO_AVAILABLE_VSI:
+		return "AVF_ERR_NO_AVAILABLE_VSI";
+	case AVF_ERR_NO_MEMORY:
+		return "AVF_ERR_NO_MEMORY";
+	case AVF_ERR_BAD_PTR:
+		return "AVF_ERR_BAD_PTR";
+	case AVF_ERR_RING_FULL:
+		return "AVF_ERR_RING_FULL";
+	case AVF_ERR_INVALID_PD_ID:
+		return "AVF_ERR_INVALID_PD_ID";
+	case AVF_ERR_INVALID_QP_ID:
+		return "AVF_ERR_INVALID_QP_ID";
+	case AVF_ERR_INVALID_CQ_ID:
+		return "AVF_ERR_INVALID_CQ_ID";
+	case AVF_ERR_INVALID_CEQ_ID:
+		return "AVF_ERR_INVALID_CEQ_ID";
+	case AVF_ERR_INVALID_AEQ_ID:
+		return "AVF_ERR_INVALID_AEQ_ID";
+	case AVF_ERR_INVALID_SIZE:
+		return "AVF_ERR_INVALID_SIZE";
+	case AVF_ERR_INVALID_ARP_INDEX:
+		return "AVF_ERR_INVALID_ARP_INDEX";
+	case AVF_ERR_INVALID_FPM_FUNC_ID:
+		return "AVF_ERR_INVALID_FPM_FUNC_ID";
+	case AVF_ERR_QP_INVALID_MSG_SIZE:
+		return "AVF_ERR_QP_INVALID_MSG_SIZE";
+	case AVF_ERR_QP_TOOMANY_WRS_POSTED:
+		return "AVF_ERR_QP_TOOMANY_WRS_POSTED";
+	case AVF_ERR_INVALID_FRAG_COUNT:
+		return "AVF_ERR_INVALID_FRAG_COUNT";
+	case AVF_ERR_QUEUE_EMPTY:
+		return "AVF_ERR_QUEUE_EMPTY";
+	case AVF_ERR_INVALID_ALIGNMENT:
+		return "AVF_ERR_INVALID_ALIGNMENT";
+	case AVF_ERR_FLUSHED_QUEUE:
+		return "AVF_ERR_FLUSHED_QUEUE";
+	case AVF_ERR_INVALID_PUSH_PAGE_INDEX:
+		return "AVF_ERR_INVALID_PUSH_PAGE_INDEX";
+	case AVF_ERR_INVALID_IMM_DATA_SIZE:
+		return "AVF_ERR_INVALID_IMM_DATA_SIZE";
+	case AVF_ERR_TIMEOUT:
+		return "AVF_ERR_TIMEOUT";
+	case AVF_ERR_OPCODE_MISMATCH:
+		return "AVF_ERR_OPCODE_MISMATCH";
+	case AVF_ERR_CQP_COMPL_ERROR:
+		return "AVF_ERR_CQP_COMPL_ERROR";
+	case AVF_ERR_INVALID_VF_ID:
+		return "AVF_ERR_INVALID_VF_ID";
+	case AVF_ERR_INVALID_HMCFN_ID:
+		return "AVF_ERR_INVALID_HMCFN_ID";
+	case AVF_ERR_BACKING_PAGE_ERROR:
+		return "AVF_ERR_BACKING_PAGE_ERROR";
+	case AVF_ERR_NO_PBLCHUNKS_AVAILABLE:
+		return "AVF_ERR_NO_PBLCHUNKS_AVAILABLE";
+	case AVF_ERR_INVALID_PBLE_INDEX:
+		return "AVF_ERR_INVALID_PBLE_INDEX";
+	case AVF_ERR_INVALID_SD_INDEX:
+		return "AVF_ERR_INVALID_SD_INDEX";
+	case AVF_ERR_INVALID_PAGE_DESC_INDEX:
+		return "AVF_ERR_INVALID_PAGE_DESC_INDEX";
+	case AVF_ERR_INVALID_SD_TYPE:
+		return "AVF_ERR_INVALID_SD_TYPE";
+	case AVF_ERR_MEMCPY_FAILED:
+		return "AVF_ERR_MEMCPY_FAILED";
+	case AVF_ERR_INVALID_HMC_OBJ_INDEX:
+		return "AVF_ERR_INVALID_HMC_OBJ_INDEX";
+	case AVF_ERR_INVALID_HMC_OBJ_COUNT:
+		return "AVF_ERR_INVALID_HMC_OBJ_COUNT";
+	case AVF_ERR_INVALID_SRQ_ARM_LIMIT:
+		return "AVF_ERR_INVALID_SRQ_ARM_LIMIT";
+	case AVF_ERR_SRQ_ENABLED:
+		return "AVF_ERR_SRQ_ENABLED";
+	case AVF_ERR_ADMIN_QUEUE_ERROR:
+		return "AVF_ERR_ADMIN_QUEUE_ERROR";
+	case AVF_ERR_ADMIN_QUEUE_TIMEOUT:
+		return "AVF_ERR_ADMIN_QUEUE_TIMEOUT";
+	case AVF_ERR_BUF_TOO_SHORT:
+		return "AVF_ERR_BUF_TOO_SHORT";
+	case AVF_ERR_ADMIN_QUEUE_FULL:
+		return "AVF_ERR_ADMIN_QUEUE_FULL";
+	case AVF_ERR_ADMIN_QUEUE_NO_WORK:
+		return "AVF_ERR_ADMIN_QUEUE_NO_WORK";
+	case AVF_ERR_BAD_IWARP_CQE:
+		return "AVF_ERR_BAD_IWARP_CQE";
+	case AVF_ERR_NVM_BLANK_MODE:
+		return "AVF_ERR_NVM_BLANK_MODE";
+	case AVF_ERR_NOT_IMPLEMENTED:
+		return "AVF_ERR_NOT_IMPLEMENTED";
+	case AVF_ERR_PE_DOORBELL_NOT_ENABLED:
+		return "AVF_ERR_PE_DOORBELL_NOT_ENABLED";
+	case AVF_ERR_DIAG_TEST_FAILED:
+		return "AVF_ERR_DIAG_TEST_FAILED";
+	case AVF_ERR_NOT_READY:
+		return "AVF_ERR_NOT_READY";
+	case AVF_NOT_SUPPORTED:
+		return "AVF_NOT_SUPPORTED";
+	case AVF_ERR_FIRMWARE_API_VERSION:
+		return "AVF_ERR_FIRMWARE_API_VERSION";
+	case AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR:
+		return "AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR";
+	}
+
+	snprintf(hw->err_str, sizeof(hw->err_str), "%d", stat_err);
+	return hw->err_str;
+}
+
+/**
+ * avf_debug_aq
+ * @hw: pointer to the hw struct
+ * @mask: debug mask
+ * @desc: pointer to admin queue descriptor
+ * @buffer: pointer to command buffer
+ * @buf_len: max length of buffer
+ *
+ * Dumps debug log about adminq command with descriptor contents.
+ **/
+void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask, void *desc,
+		   void *buffer, u16 buf_len)
+{
+	struct avf_aq_desc *aq_desc = (struct avf_aq_desc *)desc;
+	u8 *buf = (u8 *)buffer;
+	u16 len;
+	u16 i = 0;
+
+	if ((!(mask & hw->debug_mask)) || (desc == NULL))
+		return;
+
+	len = LE16_TO_CPU(aq_desc->datalen);
+
+	avf_debug(hw, mask,
+		   "AQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+		   LE16_TO_CPU(aq_desc->opcode),
+		   LE16_TO_CPU(aq_desc->flags),
+		   LE16_TO_CPU(aq_desc->datalen),
+		   LE16_TO_CPU(aq_desc->retval));
+	avf_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->cookie_high),
+		   LE32_TO_CPU(aq_desc->cookie_low));
+	avf_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->params.internal.param0),
+		   LE32_TO_CPU(aq_desc->params.internal.param1));
+	avf_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
+		   LE32_TO_CPU(aq_desc->params.external.addr_high),
+		   LE32_TO_CPU(aq_desc->params.external.addr_low));
+
+	if ((buffer != NULL) && (aq_desc->datalen != 0)) {
+		avf_debug(hw, mask, "AQ CMD Buffer:\n");
+		if (buf_len < len)
+			len = buf_len;
+		/* write the full 16-byte chunks */
+		for (i = 0; i < (len - 16); i += 16)
+			avf_debug(hw, mask,
+				   "\t0x%04X  %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
+				   i, buf[i], buf[i+1], buf[i+2], buf[i+3],
+				   buf[i+4], buf[i+5], buf[i+6], buf[i+7],
+				   buf[i+8], buf[i+9], buf[i+10], buf[i+11],
+				   buf[i+12], buf[i+13], buf[i+14], buf[i+15]);
+		/* the most we could have left is 16 bytes, pad with zeros */
+		if (i < len) {
+			char d_buf[16];
+			int j, i_sav;
+
+			i_sav = i;
+			memset(d_buf, 0, sizeof(d_buf));
+			for (j = 0; i < len; j++, i++)
+				d_buf[j] = buf[i];
+			avf_debug(hw, mask,
+				   "\t0x%04X  %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
+				   i_sav, d_buf[0], d_buf[1], d_buf[2], d_buf[3],
+				   d_buf[4], d_buf[5], d_buf[6], d_buf[7],
+				   d_buf[8], d_buf[9], d_buf[10], d_buf[11],
+				   d_buf[12], d_buf[13], d_buf[14], d_buf[15]);
+		}
+	}
+}
+
+/**
+ * avf_check_asq_alive
+ * @hw: pointer to the hw struct
+ *
+ * Returns true if Queue is enabled else false.
+ **/
+bool avf_check_asq_alive(struct avf_hw *hw)
+{
+	if (hw->aq.asq.len)
+#ifdef INTEGRATED_VF
+		if (avf_is_vf(hw))
+			return !!(rd32(hw, hw->aq.asq.len) &
+				AVF_ATQLEN1_ATQENABLE_MASK);
+#else
+		return !!(rd32(hw, hw->aq.asq.len) &
+			AVF_ATQLEN1_ATQENABLE_MASK);
+#endif /* INTEGRATED_VF */
+	return false;
+}
+
+/**
+ * avf_aq_queue_shutdown
+ * @hw: pointer to the hw struct
+ * @unloading: is the driver unloading itself
+ *
+ * Tell the Firmware that we're shutting down the AdminQ and whether
+ * or not the driver is unloading as well.
+ **/
+enum avf_status_code avf_aq_queue_shutdown(struct avf_hw *hw,
+					     bool unloading)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_queue_shutdown *cmd =
+		(struct avf_aqc_queue_shutdown *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_queue_shutdown);
+
+	if (unloading)
+		cmd->driver_unloading = CPU_TO_LE32(AVF_AQ_DRIVER_UNLOADING);
+	status = avf_asq_send_command(hw, &desc, NULL, 0, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get or set RSS look up table
+ **/
+STATIC enum avf_status_code avf_aq_get_set_rss_lut(struct avf_hw *hw,
+						     u16 vsi_id, bool pf_lut,
+						     u8 *lut, u16 lut_size,
+						     bool set)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_set_rss_lut *cmd_resp =
+		   (struct avf_aqc_get_set_rss_lut *)&desc.params.raw;
+
+	if (set)
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_set_rss_lut);
+	else
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_get_rss_lut);
+
+	/* Indirect command */
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd_resp->vsi_id =
+			CPU_TO_LE16((u16)((vsi_id <<
+					  AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT) &
+					  AVF_AQC_SET_RSS_LUT_VSI_ID_MASK));
+	cmd_resp->vsi_id |= CPU_TO_LE16((u16)AVF_AQC_SET_RSS_LUT_VSI_VALID);
+
+	if (pf_lut)
+		cmd_resp->flags |= CPU_TO_LE16((u16)
+					((AVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF <<
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
+	else
+		cmd_resp->flags |= CPU_TO_LE16((u16)
+					((AVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI <<
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
+					AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
+
+	status = avf_asq_send_command(hw, &desc, lut, lut_size, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * get the RSS lookup table, PF or VSI type
+ **/
+enum avf_status_code avf_aq_get_rss_lut(struct avf_hw *hw, u16 vsi_id,
+					  bool pf_lut, u8 *lut, u16 lut_size)
+{
+	return avf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size,
+				       false);
+}
+
+/**
+ * avf_aq_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * set the RSS lookup table, PF or VSI type
+ **/
+enum avf_status_code avf_aq_set_rss_lut(struct avf_hw *hw, u16 vsi_id,
+					  bool pf_lut, u8 *lut, u16 lut_size)
+{
+	return avf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
+}
+
+/**
+ * avf_aq_get_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
+ *
+ * Internal function to get or set the RSS key for a given VSI
+ **/
+STATIC enum avf_status_code avf_aq_get_set_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key,
+				      bool set)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_set_rss_key *cmd_resp =
+			(struct avf_aqc_get_set_rss_key *)&desc.params.raw;
+	u16 key_size = sizeof(struct avf_aqc_get_set_rss_key_data);
+
+	if (set)
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_set_rss_key);
+	else
+		avf_fill_default_direct_cmd_desc(&desc,
+						  avf_aqc_opc_get_rss_key);
+
+	/* Indirect command */
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd_resp->vsi_id =
+			CPU_TO_LE16((u16)((vsi_id <<
+					  AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT) &
+					  AVF_AQC_SET_RSS_KEY_VSI_ID_MASK));
+	cmd_resp->vsi_id |= CPU_TO_LE16((u16)AVF_AQC_SET_RSS_KEY_VSI_VALID);
+
+	status = avf_asq_send_command(hw, &desc, key, key_size, NULL);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * get the RSS key per VSI
+ **/
+enum avf_status_code avf_aq_get_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key)
+{
+	return avf_aq_get_set_rss_key(hw, vsi_id, key, false);
+}
+
+/**
+ * avf_aq_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * set the RSS key per VSI
+ **/
+enum avf_status_code avf_aq_set_rss_key(struct avf_hw *hw,
+				      u16 vsi_id,
+				      struct avf_aqc_get_set_rss_key_data *key)
+{
+	return avf_aq_get_set_rss_key(hw, vsi_id, key, true);
+}
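+
+/* Illustrative usage only, not part of this patch: programming both the RSS
+ * key and the lookup table of one VSI through the wrappers above.  The
+ * vsi_id value, the LUT size and the buffer contents are placeholders.
+ *
+ *	struct avf_aqc_get_set_rss_key_data key_data;
+ *	u8 lut[64];
+ *	enum avf_status_code ret;
+ *
+ *	// fill key_data and lut with the desired key and queue indices
+ *	ret = avf_aq_set_rss_key(hw, vsi_id, &key_data);
+ *	if (ret == AVF_SUCCESS)
+ *		ret = avf_aq_set_rss_lut(hw, vsi_id, false, lut, sizeof(lut));
+ */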
+
+/* The avf_ptype_lookup table is used to convert from the 8-bit ptype in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ *
+ * Typical work flow:
+ *
+ * IF NOT avf_ptype_lookup[ptype].known
+ * THEN
+ *      Packet is unknown
+ * ELSE IF avf_ptype_lookup[ptype].outer_ip == AVF_RX_PTYPE_OUTER_IP
+ *      Use the rest of the fields to look at the tunnels, inner protocols, etc
+ * ELSE
+ *      Use the enum avf_rx_l2_ptype to decode the packet type
+ * ENDIF
+ */
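+
+/* Illustrative sketch only, not part of this patch: one way a caller could
+ * follow the work flow above in C.  The helper name example_decode_ptype is
+ * hypothetical; the .known/.outer_ip fields and AVF_RX_PTYPE_OUTER_IP are
+ * the ones referenced in the work flow, and avf_ptype_lookup is defined
+ * below.
+ *
+ *	static inline struct avf_rx_ptype_decoded
+ *	example_decode_ptype(u8 ptype)
+ *	{
+ *		struct avf_rx_ptype_decoded d = avf_ptype_lookup[ptype];
+ *
+ *		if (!d.known)
+ *			return d;	// packet type is unknown
+ *		if (d.outer_ip == AVF_RX_PTYPE_OUTER_IP)
+ *			;		// look at tunnel/inner proto fields
+ *		else
+ *			;		// decode via the L2 ptype fields
+ *		return d;
+ *	}
+ */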
+
+/* macro to make the table lines short */
+#define AVF_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
+	{	PTYPE, \
+		1, \
+		AVF_RX_PTYPE_OUTER_##OUTER_IP, \
+		AVF_RX_PTYPE_OUTER_##OUTER_IP_VER, \
+		AVF_RX_PTYPE_##OUTER_FRAG, \
+		AVF_RX_PTYPE_TUNNEL_##T, \
+		AVF_RX_PTYPE_TUNNEL_END_##TE, \
+		AVF_RX_PTYPE_##TEF, \
+		AVF_RX_PTYPE_INNER_PROT_##I, \
+		AVF_RX_PTYPE_PAYLOAD_LAYER_##PL }
+
+#define AVF_PTT_UNUSED_ENTRY(PTYPE) \
+		{ PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
+
+/* shorter macros make the table fit but are terse */
+#define AVF_RX_PTYPE_NOF		AVF_RX_PTYPE_NOT_FRAG
+#define AVF_RX_PTYPE_FRG		AVF_RX_PTYPE_FRAG
+#define AVF_RX_PTYPE_INNER_PROT_TS	AVF_RX_PTYPE_INNER_PROT_TIMESYNC
+
+/* Lookup table mapping the HW PTYPE to the bit field for decoding */
+struct avf_rx_ptype_decoded avf_ptype_lookup[] = {
+	/* L2 Packet types */
+	AVF_PTT_UNUSED_ENTRY(0),
+	AVF_PTT(1,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(2,  L2, NONE, NOF, NONE, NONE, NOF, TS,   PAY2),
+	AVF_PTT(3,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT_UNUSED_ENTRY(4),
+	AVF_PTT_UNUSED_ENTRY(5),
+	AVF_PTT(6,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(7,  L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT_UNUSED_ENTRY(8),
+	AVF_PTT_UNUSED_ENTRY(9),
+	AVF_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+	AVF_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
+	AVF_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
+
+	/* Non Tunneled IPv4 */
+	AVF_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(25),
+	AVF_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	AVF_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	AVF_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv4 */
+	AVF_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(32),
+	AVF_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> IPv6 */
+	AVF_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(39),
+	AVF_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT */
+	AVF_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> IPv4 */
+	AVF_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(47),
+	AVF_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> IPv6 */
+	AVF_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(54),
+	AVF_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC */
+	AVF_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 --> GRE/NAT --> MAC --> IPv4 */
+	AVF_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(62),
+	AVF_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT -> MAC --> IPv6 */
+	AVF_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(69),
+	AVF_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv4 --> GRE/NAT --> MAC/VLAN */
+	AVF_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */
+	AVF_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(77),
+	AVF_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */
+	AVF_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(84),
+	AVF_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* Non Tunneled IPv6 */
+	AVF_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+	AVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(91),
+	AVF_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
+	AVF_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	AVF_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv4 */
+	AVF_PTT(95,  IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(96,  IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(97,  IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(98),
+	AVF_PTT(99,  IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> IPv6 */
+	AVF_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(105),
+	AVF_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT */
+	AVF_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> IPv4 */
+	AVF_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(113),
+	AVF_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> IPv6 */
+	AVF_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(120),
+	AVF_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC */
+	AVF_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv4 */
+	AVF_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(128),
+	AVF_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC -> IPv6 */
+	AVF_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(135),
+	AVF_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN */
+	AVF_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */
+	AVF_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	AVF_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	AVF_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(143),
+	AVF_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP,  PAY4),
+	AVF_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	AVF_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+
+	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */
+	AVF_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	AVF_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	AVF_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP,  PAY4),
+	AVF_PTT_UNUSED_ENTRY(150),
+	AVF_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP,  PAY4),
+	AVF_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	AVF_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+
+	/* unused entries */
+	AVF_PTT_UNUSED_ENTRY(154),
+	AVF_PTT_UNUSED_ENTRY(155),
+	AVF_PTT_UNUSED_ENTRY(156),
+	AVF_PTT_UNUSED_ENTRY(157),
+	AVF_PTT_UNUSED_ENTRY(158),
+	AVF_PTT_UNUSED_ENTRY(159),
+
+	AVF_PTT_UNUSED_ENTRY(160),
+	AVF_PTT_UNUSED_ENTRY(161),
+	AVF_PTT_UNUSED_ENTRY(162),
+	AVF_PTT_UNUSED_ENTRY(163),
+	AVF_PTT_UNUSED_ENTRY(164),
+	AVF_PTT_UNUSED_ENTRY(165),
+	AVF_PTT_UNUSED_ENTRY(166),
+	AVF_PTT_UNUSED_ENTRY(167),
+	AVF_PTT_UNUSED_ENTRY(168),
+	AVF_PTT_UNUSED_ENTRY(169),
+
+	AVF_PTT_UNUSED_ENTRY(170),
+	AVF_PTT_UNUSED_ENTRY(171),
+	AVF_PTT_UNUSED_ENTRY(172),
+	AVF_PTT_UNUSED_ENTRY(173),
+	AVF_PTT_UNUSED_ENTRY(174),
+	AVF_PTT_UNUSED_ENTRY(175),
+	AVF_PTT_UNUSED_ENTRY(176),
+	AVF_PTT_UNUSED_ENTRY(177),
+	AVF_PTT_UNUSED_ENTRY(178),
+	AVF_PTT_UNUSED_ENTRY(179),
+
+	AVF_PTT_UNUSED_ENTRY(180),
+	AVF_PTT_UNUSED_ENTRY(181),
+	AVF_PTT_UNUSED_ENTRY(182),
+	AVF_PTT_UNUSED_ENTRY(183),
+	AVF_PTT_UNUSED_ENTRY(184),
+	AVF_PTT_UNUSED_ENTRY(185),
+	AVF_PTT_UNUSED_ENTRY(186),
+	AVF_PTT_UNUSED_ENTRY(187),
+	AVF_PTT_UNUSED_ENTRY(188),
+	AVF_PTT_UNUSED_ENTRY(189),
+
+	AVF_PTT_UNUSED_ENTRY(190),
+	AVF_PTT_UNUSED_ENTRY(191),
+	AVF_PTT_UNUSED_ENTRY(192),
+	AVF_PTT_UNUSED_ENTRY(193),
+	AVF_PTT_UNUSED_ENTRY(194),
+	AVF_PTT_UNUSED_ENTRY(195),
+	AVF_PTT_UNUSED_ENTRY(196),
+	AVF_PTT_UNUSED_ENTRY(197),
+	AVF_PTT_UNUSED_ENTRY(198),
+	AVF_PTT_UNUSED_ENTRY(199),
+
+	AVF_PTT_UNUSED_ENTRY(200),
+	AVF_PTT_UNUSED_ENTRY(201),
+	AVF_PTT_UNUSED_ENTRY(202),
+	AVF_PTT_UNUSED_ENTRY(203),
+	AVF_PTT_UNUSED_ENTRY(204),
+	AVF_PTT_UNUSED_ENTRY(205),
+	AVF_PTT_UNUSED_ENTRY(206),
+	AVF_PTT_UNUSED_ENTRY(207),
+	AVF_PTT_UNUSED_ENTRY(208),
+	AVF_PTT_UNUSED_ENTRY(209),
+
+	AVF_PTT_UNUSED_ENTRY(210),
+	AVF_PTT_UNUSED_ENTRY(211),
+	AVF_PTT_UNUSED_ENTRY(212),
+	AVF_PTT_UNUSED_ENTRY(213),
+	AVF_PTT_UNUSED_ENTRY(214),
+	AVF_PTT_UNUSED_ENTRY(215),
+	AVF_PTT_UNUSED_ENTRY(216),
+	AVF_PTT_UNUSED_ENTRY(217),
+	AVF_PTT_UNUSED_ENTRY(218),
+	AVF_PTT_UNUSED_ENTRY(219),
+
+	AVF_PTT_UNUSED_ENTRY(220),
+	AVF_PTT_UNUSED_ENTRY(221),
+	AVF_PTT_UNUSED_ENTRY(222),
+	AVF_PTT_UNUSED_ENTRY(223),
+	AVF_PTT_UNUSED_ENTRY(224),
+	AVF_PTT_UNUSED_ENTRY(225),
+	AVF_PTT_UNUSED_ENTRY(226),
+	AVF_PTT_UNUSED_ENTRY(227),
+	AVF_PTT_UNUSED_ENTRY(228),
+	AVF_PTT_UNUSED_ENTRY(229),
+
+	AVF_PTT_UNUSED_ENTRY(230),
+	AVF_PTT_UNUSED_ENTRY(231),
+	AVF_PTT_UNUSED_ENTRY(232),
+	AVF_PTT_UNUSED_ENTRY(233),
+	AVF_PTT_UNUSED_ENTRY(234),
+	AVF_PTT_UNUSED_ENTRY(235),
+	AVF_PTT_UNUSED_ENTRY(236),
+	AVF_PTT_UNUSED_ENTRY(237),
+	AVF_PTT_UNUSED_ENTRY(238),
+	AVF_PTT_UNUSED_ENTRY(239),
+
+	AVF_PTT_UNUSED_ENTRY(240),
+	AVF_PTT_UNUSED_ENTRY(241),
+	AVF_PTT_UNUSED_ENTRY(242),
+	AVF_PTT_UNUSED_ENTRY(243),
+	AVF_PTT_UNUSED_ENTRY(244),
+	AVF_PTT_UNUSED_ENTRY(245),
+	AVF_PTT_UNUSED_ENTRY(246),
+	AVF_PTT_UNUSED_ENTRY(247),
+	AVF_PTT_UNUSED_ENTRY(248),
+	AVF_PTT_UNUSED_ENTRY(249),
+
+	AVF_PTT_UNUSED_ENTRY(250),
+	AVF_PTT_UNUSED_ENTRY(251),
+	AVF_PTT_UNUSED_ENTRY(252),
+	AVF_PTT_UNUSED_ENTRY(253),
+	AVF_PTT_UNUSED_ENTRY(254),
+	AVF_PTT_UNUSED_ENTRY(255)
+};
+
+
+/**
+ * avf_validate_mac_addr - Validate unicast MAC address
+ * @mac_addr: pointer to MAC address
+ *
+ * Tests a MAC address to ensure it is a valid Individual Address
+ **/
+enum avf_status_code avf_validate_mac_addr(u8 *mac_addr)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+
+	DEBUGFUNC("avf_validate_mac_addr");
+
+	/* Broadcast addresses are multicast addresses as well, so rejecting
+	 * multicast addresses also covers broadcast.
+	 * Also reject the all-zero address.
+	 */
+	if (AVF_IS_MULTICAST(mac_addr) ||
+	    (mac_addr[0] == 0 && mac_addr[1] == 0 && mac_addr[2] == 0 &&
+	      mac_addr[3] == 0 && mac_addr[4] == 0 && mac_addr[5] == 0))
+		status = AVF_ERR_INVALID_MAC_ADDR;
+
+	return status;
+}
+
+/**
+ * avf_aq_rx_ctl_read_register - use FW to read from an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: ptr to register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Use the firmware to read the Rx control register,
+ * especially useful if the Rx unit is under heavy pressure
+ **/
+enum avf_status_code avf_aq_rx_ctl_read_register(struct avf_hw *hw,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_rx_ctl_reg_read_write *cmd_resp =
+		(struct avf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
+	enum avf_status_code status;
+
+	if (reg_val == NULL)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_rx_ctl_reg_read);
+
+	cmd_resp->address = CPU_TO_LE32(reg_addr);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	if (status == AVF_SUCCESS)
+		*reg_val = LE32_TO_CPU(cmd_resp->value);
+
+	return status;
+}
+
+/**
+ * avf_read_rx_ctl - read from an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ **/
+u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	bool use_register;
+	int retry = 5;
+	u32 val = 0;
+
+	use_register = (((hw->aq.api_maj_ver == 1) &&
+			(hw->aq.api_min_ver < 5)) ||
+			(hw->mac.type == AVF_MAC_X722));
+	if (!use_register) {
+do_retry:
+		status = avf_aq_rx_ctl_read_register(hw, reg_addr, &val, NULL);
+		if (hw->aq.asq_last_status == AVF_AQ_RC_EAGAIN && retry) {
+			avf_msec_delay(1);
+			retry--;
+			goto do_retry;
+		}
+	}
+
+	/* if the AQ access failed, try the old-fashioned way */
+	if (status || use_register)
+		val = rd32(hw, reg_addr);
+
+	return val;
+}
+
+/**
+ * avf_aq_rx_ctl_write_register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Use the firmware to write to an Rx control register,
+ * especially useful if the Rx unit is under heavy pressure
+ **/
+enum avf_status_code avf_aq_rx_ctl_write_register(struct avf_hw *hw,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_rx_ctl_reg_read_write *cmd =
+		(struct avf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_rx_ctl_reg_write);
+
+	cmd->address = CPU_TO_LE32(reg_addr);
+	cmd->value = CPU_TO_LE32(reg_val);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_write_rx_ctl - write to an Rx control register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: register value
+ **/
+void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	bool use_register;
+	int retry = 5;
+
+	use_register = (((hw->aq.api_maj_ver == 1) &&
+			(hw->aq.api_min_ver < 5)) ||
+			(hw->mac.type == AVF_MAC_X722));
+	if (!use_register) {
+do_retry:
+		status = avf_aq_rx_ctl_write_register(hw, reg_addr,
+						       reg_val, NULL);
+		if (hw->aq.asq_last_status == AVF_AQ_RC_EAGAIN && retry) {
+			avf_msec_delay(1);
+			retry--;
+			goto do_retry;
+		}
+	}
+
+	/* if the AQ access failed, try the old-fashioned way */
+	if (status || use_register)
+		wr32(hw, reg_addr, reg_val);
+}
+
+/**
+ * avf_aq_set_phy_register
+ * @hw: pointer to the hw struct
+ * @phy_select: select which phy should be accessed
+ * @dev_addr: PHY device address
+ * @reg_addr: PHY register address
+ * @reg_val: new register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Write the external PHY register.
+ **/
+enum avf_status_code avf_aq_set_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_phy_register_access *cmd =
+		(struct avf_aqc_phy_register_access *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_set_phy_register);
+
+	cmd->phy_interface = phy_select;
+	cmd->dev_addres = dev_addr;
+	cmd->reg_address = CPU_TO_LE32(reg_addr);
+	cmd->reg_value = CPU_TO_LE32(reg_val);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_phy_register
+ * @hw: pointer to the hw struct
+ * @phy_select: select which phy should be accessed
+ * @dev_addr: PHY device address
+ * @reg_addr: PHY register address
+ * @reg_val: read register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Read the external PHY register.
+ **/
+enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_phy_register_access *cmd =
+		(struct avf_aqc_phy_register_access *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_get_phy_register);
+
+	cmd->phy_interface = phy_select;
+	cmd->dev_addres = dev_addr;
+	cmd->reg_address = CPU_TO_LE32(reg_addr);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+	if (!status)
+		*reg_val = LE32_TO_CPU(cmd->reg_value);
+
+	return status;
+}
+
+
+/**
+ * avf_aq_send_msg_to_pf
+ * @hw: pointer to the hardware structure
+ * @v_opcode: opcodes for VF-PF communication
+ * @v_retval: return error code
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ * @cmd_details: pointer to command details
+ *
+ * Send message to PF driver using admin queue. By default, this message
+ * is sent asynchronously, i.e. avf_asq_send_command() does not wait for
+ * completion before returning.
+ **/
+enum avf_status_code avf_aq_send_msg_to_pf(struct avf_hw *hw,
+				enum virtchnl_ops v_opcode,
+				enum avf_status_code v_retval,
+				u8 *msg, u16 msglen,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_asq_cmd_details details;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_send_msg_to_pf);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_SI);
+	desc.cookie_high = CPU_TO_LE32(v_opcode);
+	desc.cookie_low = CPU_TO_LE32(v_retval);
+	if (msglen) {
+		desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF
+						| AVF_AQ_FLAG_RD));
+		if (msglen > AVF_AQ_LARGE_BUF)
+			desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+		desc.datalen = CPU_TO_LE16(msglen);
+	}
+	if (!cmd_details) {
+		avf_memset(&details, 0, sizeof(details), AVF_NONDMA_MEM);
+		details.async = true;
+		cmd_details = &details;
+	}
+	status = avf_asq_send_command(hw, (struct avf_aq_desc *)&desc, msg,
+				       msglen, cmd_details);
+	return status;
+}
+
+/**
+ * avf_parse_hw_config
+ * @hw: pointer to the hardware structure
+ * @msg: pointer to the virtual channel VF resource structure
+ *
+ * Given a VF resource message from the PF, populate the hw struct
+ * with appropriate information.
+ **/
+void avf_parse_hw_config(struct avf_hw *hw,
+			     struct virtchnl_vf_resource *msg)
+{
+	struct virtchnl_vsi_resource *vsi_res;
+	int i;
+
+	vsi_res = &msg->vsi_res[0];
+
+	hw->dev_caps.num_vsis = msg->num_vsis;
+	hw->dev_caps.num_rx_qp = msg->num_queue_pairs;
+	hw->dev_caps.num_tx_qp = msg->num_queue_pairs;
+	hw->dev_caps.num_msix_vectors_vf = msg->max_vectors;
+	hw->dev_caps.dcb = msg->vf_cap_flags &
+			   VIRTCHNL_VF_OFFLOAD_L2;
+	hw->dev_caps.iwarp = (msg->vf_cap_flags &
+			      VIRTCHNL_VF_OFFLOAD_IWARP) ? 1 : 0;
+	for (i = 0; i < msg->num_vsis; i++) {
+		if (vsi_res->vsi_type == VIRTCHNL_VSI_SRIOV) {
+			avf_memcpy(hw->mac.perm_addr,
+				    vsi_res->default_mac_addr,
+				    ETH_ALEN,
+				    AVF_NONDMA_TO_NONDMA);
+			avf_memcpy(hw->mac.addr, vsi_res->default_mac_addr,
+				    ETH_ALEN,
+				    AVF_NONDMA_TO_NONDMA);
+		}
+		vsi_res++;
+	}
+}
+
+/**
+ * avf_reset
+ * @hw: pointer to the hardware structure
+ *
+ * Send a VF_RESET message to the PF. Does not wait for response from PF
+ * as none will be forthcoming. Immediately after calling this function,
+ * the admin queue should be shut down and (optionally) reinitialized.
+ **/
+enum avf_status_code avf_reset(struct avf_hw *hw)
+{
+	return avf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_RESET_VF,
+				      AVF_SUCCESS, NULL, 0, NULL);
+}
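+
+/* Illustrative sequence only, not part of this patch: per the comment above,
+ * a VF driver would typically shut down and re-create the admin queue right
+ * after requesting the reset.  avf_shutdown_adminq()/avf_init_adminq() are
+ * assumed names for the adminq helpers and are not defined in this file.
+ *
+ *	if (avf_reset(hw) == AVF_SUCCESS) {
+ *		avf_shutdown_adminq(hw);
+ *		// ... wait for the PF to complete the VF reset ...
+ *		avf_init_adminq(hw);
+ *	}
+ */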
+
+/**
+ * avf_aq_set_arp_proxy_config
+ * @hw: pointer to the HW structure
+ * @proxy_config: pointer to proxy config command table struct
+ * @cmd_details: pointer to command details
+ *
+ * Set ARP offload parameters from pre-populated
+ * avf_aqc_arp_proxy_data struct
+ **/
+enum avf_status_code avf_aq_set_arp_proxy_config(struct avf_hw *hw,
+				struct avf_aqc_arp_proxy_data *proxy_config,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	if (!proxy_config)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_set_proxy_config);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+	desc.params.external.addr_high =
+				  CPU_TO_LE32(AVF_HI_DWORD((u64)proxy_config));
+	desc.params.external.addr_low =
+				  CPU_TO_LE32(AVF_LO_DWORD((u64)proxy_config));
+	desc.datalen = CPU_TO_LE16(sizeof(struct avf_aqc_arp_proxy_data));
+
+	status = avf_asq_send_command(hw, &desc, proxy_config,
+				       sizeof(struct avf_aqc_arp_proxy_data),
+				       cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_set_ns_proxy_table_entry
+ * @hw: pointer to the HW structure
+ * @ns_proxy_table_entry: pointer to NS table entry command struct
+ * @cmd_details: pointer to command details
+ *
+ * Set IPv6 Neighbor Solicitation (NS) protocol offload parameters
+ * from pre-populated avf_aqc_ns_proxy_data struct
+ **/
+enum avf_status_code avf_aq_set_ns_proxy_table_entry(struct avf_hw *hw,
+			struct avf_aqc_ns_proxy_data *ns_proxy_table_entry,
+			struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	if (!ns_proxy_table_entry)
+		return AVF_ERR_PARAM;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+				avf_aqc_opc_set_ns_proxy_table_entry);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+	desc.params.external.addr_high =
+		CPU_TO_LE32(AVF_HI_DWORD((u64)ns_proxy_table_entry));
+	desc.params.external.addr_low =
+		CPU_TO_LE32(AVF_LO_DWORD((u64)ns_proxy_table_entry));
+	desc.datalen = CPU_TO_LE16(sizeof(struct avf_aqc_ns_proxy_data));
+
+	status = avf_asq_send_command(hw, &desc, ns_proxy_table_entry,
+				       sizeof(struct avf_aqc_ns_proxy_data),
+				       cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_set_clear_wol_filter
+ * @hw: pointer to the hw struct
+ * @filter_index: index of filter to modify (0-7)
+ * @filter: buffer containing filter to be set
+ * @set_filter: true to set filter, false to clear filter
+ * @no_wol_tco: if true, pass through packets cannot cause wake-up
+ *		if false, pass through packets may cause wake-up
+ * @filter_valid: true if filter action is valid
+ * @no_wol_tco_valid: true if no WoL in TCO traffic action valid
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Set or clear WoL filter for port attached to the PF
+ **/
+enum avf_status_code avf_aq_set_clear_wol_filter(struct avf_hw *hw,
+				u8 filter_index,
+				struct avf_aqc_set_wol_filter_data *filter,
+				bool set_filter, bool no_wol_tco,
+				bool filter_valid, bool no_wol_tco_valid,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_set_wol_filter *cmd =
+		(struct avf_aqc_set_wol_filter *)&desc.params.raw;
+	enum avf_status_code status;
+	u16 cmd_flags = 0;
+	u16 valid_flags = 0;
+	u16 buff_len = 0;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_set_wol_filter);
+
+	if (filter_index >= AVF_AQC_MAX_NUM_WOL_FILTERS)
+		return  AVF_ERR_PARAM;
+	cmd->filter_index = CPU_TO_LE16(filter_index);
+
+	if (set_filter) {
+		if (!filter)
+			return  AVF_ERR_PARAM;
+
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER;
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR;
+	}
+
+	if (no_wol_tco)
+		cmd_flags |= AVF_AQC_SET_WOL_FILTER_NO_TCO_WOL;
+	cmd->cmd_flags = CPU_TO_LE16(cmd_flags);
+
+	if (filter_valid)
+		valid_flags |= AVF_AQC_SET_WOL_FILTER_ACTION_VALID;
+	if (no_wol_tco_valid)
+		valid_flags |= AVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID;
+	cmd->valid_flags = CPU_TO_LE16(valid_flags);
+
+	buff_len = sizeof(*filter);
+	desc.datalen = CPU_TO_LE16(buff_len);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+
+	cmd->address_high = CPU_TO_LE32(AVF_HI_DWORD((u64)filter));
+	cmd->address_low = CPU_TO_LE32(AVF_LO_DWORD((u64)filter));
+
+	status = avf_asq_send_command(hw, &desc, filter,
+				       buff_len, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_get_wake_event_reason
+ * @hw: pointer to the hw struct
+ * @wake_reason: return value, index of matching filter
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Get information for the reason of a Wake Up event
+ **/
+enum avf_status_code avf_aq_get_wake_event_reason(struct avf_hw *hw,
+				u16 *wake_reason,
+				struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_wake_reason_completion *resp =
+		(struct avf_aqc_get_wake_reason_completion *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_get_wake_reason);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	if (status == AVF_SUCCESS)
+		*wake_reason = LE16_TO_CPU(resp->wake_reason);
+
+	return status;
+}
+
+/**
+ * avf_aq_clear_all_wol_filters
+ * @hw: pointer to the hw struct
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Clear all WoL filters for the port attached to the PF
+ **/
+enum avf_status_code avf_aq_clear_all_wol_filters(struct avf_hw *hw,
+	struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+					  avf_aqc_opc_clear_all_wol_filters);
+
+	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_aq_write_ddp - Write dynamic device personalization (ddp)
+ * @hw: pointer to the hw struct
+ * @buff: command buffer (size in bytes = buff_size)
+ * @buff_size: buffer size in bytes
+ * @track_id: package tracking id
+ * @error_offset: returns error offset
+ * @error_info: returns error information
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum
+avf_status_code avf_aq_write_ddp(struct avf_hw *hw, void *buff,
+				   u16 buff_size, u32 track_id,
+				   u32 *error_offset, u32 *error_info,
+				   struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_write_personalization_profile *cmd =
+		(struct avf_aqc_write_personalization_profile *)
+		&desc.params.raw;
+	struct avf_aqc_write_ddp_resp *resp;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+				  avf_aqc_opc_write_personalization_profile);
+
+	desc.flags |= CPU_TO_LE16(AVF_AQ_FLAG_BUF | AVF_AQ_FLAG_RD);
+	if (buff_size > AVF_AQ_LARGE_BUF)
+		desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+
+	desc.datalen = CPU_TO_LE16(buff_size);
+
+	cmd->profile_track_id = CPU_TO_LE32(track_id);
+
+	status = avf_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
+	if (!status) {
+		resp = (struct avf_aqc_write_ddp_resp *)&desc.params.raw;
+		if (error_offset)
+			*error_offset = LE32_TO_CPU(resp->error_offset);
+		if (error_info)
+			*error_info = LE32_TO_CPU(resp->error_info);
+	}
+
+	return status;
+}
+
+/**
+ * avf_aq_get_ddp_list - Read dynamic device personalization (ddp)
+ * @hw: pointer to the hw struct
+ * @buff: command buffer (size in bytes = buff_size)
+ * @buff_size: buffer size in bytes
+ * @flags: AdminQ command flags
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum
+avf_status_code avf_aq_get_ddp_list(struct avf_hw *hw, void *buff,
+				      u16 buff_size, u8 flags,
+				      struct avf_asq_cmd_details *cmd_details)
+{
+	struct avf_aq_desc desc;
+	struct avf_aqc_get_applied_profiles *cmd =
+		(struct avf_aqc_get_applied_profiles *)&desc.params.raw;
+	enum avf_status_code status;
+
+	avf_fill_default_direct_cmd_desc(&desc,
+			  avf_aqc_opc_get_personalization_profile_list);
+
+	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
+	if (buff_size > AVF_AQ_LARGE_BUF)
+		desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+	desc.datalen = CPU_TO_LE16(buff_size);
+
+	cmd->flags = flags;
+
+	status = avf_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
+
+	return status;
+}
+
+/**
+ * avf_find_segment_in_package
+ * @segment_type: the segment type to search for (e.g., SEGMENT_TYPE_AVF)
+ * @pkg_hdr: pointer to the package header to be searched
+ *
+ * This function searches a package file for a particular segment type. On
+ * success it returns a pointer to the segment header, otherwise it will
+ * return NULL.
+ **/
+struct avf_generic_seg_header *
+avf_find_segment_in_package(u32 segment_type,
+			     struct avf_package_header *pkg_hdr)
+{
+	struct avf_generic_seg_header *segment;
+	u32 i;
+
+	/* Search all package segments for the requested segment type */
+	for (i = 0; i < pkg_hdr->segment_count; i++) {
+		segment =
+			(struct avf_generic_seg_header *)((u8 *)pkg_hdr +
+			 pkg_hdr->segment_offset[i]);
+
+		if (segment->type == segment_type)
+			return segment;
+	}
+
+	return NULL;
+}
+
+/* Get section table in profile */
+#define AVF_SECTION_TABLE(profile, sec_tbl)				\
+	do {								\
+		struct avf_profile_segment *p = (profile);		\
+		u32 count;						\
+		u32 *nvm;						\
+		count = p->device_table_count;				\
+		nvm = (u32 *)&p->device_table[count];			\
+		sec_tbl = (struct avf_section_table *)&nvm[nvm[0] + 1]; \
+	} while (0)
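+
+/* Layout assumed from the walk above: a profile segment is followed by its
+ * device table (device_table_count entries), then by an NVM table whose
+ * first u32 is that table's own entry count, and only after that comes the
+ * section table.  The macro therefore skips the device table, then skips
+ * nvm[0] + 1 u32 words to land on struct avf_section_table.
+ */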
+
+/* Get section header in profile */
+#define AVF_SECTION_HEADER(profile, offset)				\
+	(struct avf_profile_section_header *)((u8 *)(profile) + (offset))
+
+/**
+ * avf_find_section_in_profile
+ * @section_type: the section type to search for (e.g., SECTION_TYPE_NOTE)
+ * @profile: pointer to the avf segment header to be searched
+ *
+ * This function searches an avf profile segment for a particular section
+ * type. On success it returns a pointer to the section header, otherwise
+ * it will return NULL.
+ **/
+struct avf_profile_section_header *
+avf_find_section_in_profile(u32 section_type,
+			     struct avf_profile_segment *profile)
+{
+	struct avf_profile_section_header *sec;
+	struct avf_section_table *sec_tbl;
+	u32 sec_off;
+	u32 i;
+
+	if (profile->header.type != SEGMENT_TYPE_AVF)
+		return NULL;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		if (sec->section.type == section_type)
+			return sec;
+	}
+
+	return NULL;
+}
+
+/**
+ * avf_ddp_exec_aq_section - Execute generic AQ for DDP
+ * @hw: pointer to the hw struct
+ * @aq: command buffer containing all data to execute AQ
+ **/
+STATIC enum
+avf_status_code avf_ddp_exec_aq_section(struct avf_hw *hw,
+					  struct avf_profile_aq_section *aq)
+{
+	enum avf_status_code status;
+	struct avf_aq_desc desc;
+	u8 *msg = NULL;
+	u16 msglen;
+
+	avf_fill_default_direct_cmd_desc(&desc, aq->opcode);
+	desc.flags |= CPU_TO_LE16(aq->flags);
+	avf_memcpy(desc.params.raw, aq->param, sizeof(desc.params.raw),
+		    AVF_NONDMA_TO_NONDMA);
+
+	msglen = aq->datalen;
+	if (msglen) {
+		desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF |
+						AVF_AQ_FLAG_RD));
+		if (msglen > AVF_AQ_LARGE_BUF)
+			desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+		desc.datalen = CPU_TO_LE16(msglen);
+		msg = &aq->data[0];
+	}
+
+	status = avf_asq_send_command(hw, &desc, msg, msglen, NULL);
+
+	if (status != AVF_SUCCESS) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE,
+			   "unable to exec DDP AQ opcode %u, error %d\n",
+			   aq->opcode, status);
+		return status;
+	}
+
+	/* copy returned desc params back into the caller's aq section */
+	avf_memcpy(aq->param, desc.params.raw, sizeof(desc.params.raw),
+		    AVF_NONDMA_TO_NONDMA);
+
+	return AVF_SUCCESS;
+}
+
+/**
+ * avf_validate_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be validated
+ * @track_id: package tracking id
+ * @rollback: true if the profile is for rollback, false otherwise
+ *
+ * Validates supported devices and profile's sections.
+ */
+STATIC enum avf_status_code
+avf_validate_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		      u32 track_id, bool rollback)
+{
+	struct avf_profile_section_header *sec = NULL;
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	u32 vendor_dev_id;
+	u32 dev_cnt;
+	u32 sec_off;
+	u32 i;
+
+	if (track_id == AVF_DDP_TRACKID_INVALID) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE, "Invalid track_id\n");
+		return AVF_NOT_SUPPORTED;
+	}
+
+	dev_cnt = profile->device_table_count;
+	for (i = 0; i < dev_cnt; i++) {
+		vendor_dev_id = profile->device_table[i].vendor_dev_id;
+		if ((vendor_dev_id >> 16) == AVF_INTEL_VENDOR_ID &&
+		    hw->device_id == (vendor_dev_id & 0xFFFF))
+			break;
+	}
+	if (dev_cnt && (i == dev_cnt)) {
+		avf_debug(hw, AVF_DEBUG_PACKAGE,
+			   "Device doesn't support DDP\n");
+		return AVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	/* Validate section types */
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		if (rollback) {
+			if (sec->section.type == SECTION_TYPE_MMIO ||
+			    sec->section.type == SECTION_TYPE_AQ ||
+			    sec->section.type == SECTION_TYPE_RB_AQ) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Not a roll-back package\n");
+				return AVF_NOT_SUPPORTED;
+			}
+		} else {
+			if (sec->section.type == SECTION_TYPE_RB_AQ ||
+			    sec->section.type == SECTION_TYPE_RB_MMIO) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Not an original package\n");
+				return AVF_NOT_SUPPORTED;
+			}
+		}
+	}
+
+	return status;
+}
+
+/**
+ * avf_write_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be downloaded
+ * @track_id: package tracking id
+ *
+ * Handles the download of a complete package.
+ */
+enum avf_status_code
+avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		   u32 track_id)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	struct avf_profile_section_header *sec = NULL;
+	struct avf_profile_aq_section *ddp_aq;
+	u32 section_size = 0;
+	u32 offset = 0, info = 0;
+	u32 sec_off;
+	u32 i;
+
+	status = avf_validate_profile(hw, profile, track_id, false);
+	if (status)
+		return status;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	for (i = 0; i < sec_tbl->section_count; i++) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+		/* Process generic admin command */
+		if (sec->section.type == SECTION_TYPE_AQ) {
+			ddp_aq = (struct avf_profile_aq_section *)&sec[1];
+			status = avf_ddp_exec_aq_section(hw, ddp_aq);
+			if (status) {
+				avf_debug(hw, AVF_DEBUG_PACKAGE,
+					   "Failed to execute aq: section %d, opcode %u\n",
+					   i, ddp_aq->opcode);
+				break;
+			}
+			sec->section.type = SECTION_TYPE_RB_AQ;
+		}
+
+		/* Skip any non-mmio sections */
+		if (sec->section.type != SECTION_TYPE_MMIO)
+			continue;
+
+		section_size = sec->section.size +
+			sizeof(struct avf_profile_section_header);
+
+		/* Write MMIO section */
+		status = avf_aq_write_ddp(hw, (void *)sec, (u16)section_size,
+					   track_id, &offset, &info, NULL);
+		if (status) {
+			avf_debug(hw, AVF_DEBUG_PACKAGE,
+				   "Failed to write profile: section %d, offset %d, info %d\n",
+				   i, offset, info);
+			break;
+		}
+	}
+	return status;
+}
+
+/**
+ * avf_rollback_profile
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package to be removed
+ * @track_id: package tracking id
+ *
+ * Rolls back previously loaded package.
+ */
+enum avf_status_code
+avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *profile,
+		      u32 track_id)
+{
+	struct avf_profile_section_header *sec = NULL;
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_section_table *sec_tbl;
+	u32 offset = 0, info = 0;
+	u32 section_size = 0;
+	u32 sec_off;
+	int i;
+
+	status = avf_validate_profile(hw, profile, track_id, true);
+	if (status)
+		return status;
+
+	AVF_SECTION_TABLE(profile, sec_tbl);
+
+	/* For rollback write sections in reverse */
+	for (i = sec_tbl->section_count - 1; i >= 0; i--) {
+		sec_off = sec_tbl->section_offset[i];
+		sec = AVF_SECTION_HEADER(profile, sec_off);
+
+		/* Skip any non-rollback sections */
+		if (sec->section.type != SECTION_TYPE_RB_MMIO)
+			continue;
+
+		section_size = sec->section.size +
+			sizeof(struct avf_profile_section_header);
+
+		/* Write roll-back MMIO section */
+		status = avf_aq_write_ddp(hw, (void *)sec, (u16)section_size,
+					   track_id, &offset, &info, NULL);
+		if (status) {
+			avf_debug(hw, AVF_DEBUG_PACKAGE,
+				   "Failed to write profile: section %d, offset %d, info %d\n",
+				   i, offset, info);
+			break;
+		}
+	}
+	return status;
+}
+
+/**
+ * avf_add_pinfo_to_list
+ * @hw: pointer to the hardware structure
+ * @profile: pointer to the profile segment of the package
+ * @profile_info_sec: buffer for information section
+ * @track_id: package tracking id
+ *
+ * Register a profile to the list of loaded profiles.
+ */
+enum avf_status_code
+avf_add_pinfo_to_list(struct avf_hw *hw,
+		       struct avf_profile_segment *profile,
+		       u8 *profile_info_sec, u32 track_id)
+{
+	enum avf_status_code status = AVF_SUCCESS;
+	struct avf_profile_section_header *sec = NULL;
+	struct avf_profile_info *pinfo;
+	u32 offset = 0, info = 0;
+
+	sec = (struct avf_profile_section_header *)profile_info_sec;
+	sec->tbl_size = 1;
+	sec->data_end = sizeof(struct avf_profile_section_header) +
+			sizeof(struct avf_profile_info);
+	sec->section.type = SECTION_TYPE_INFO;
+	sec->section.offset = sizeof(struct avf_profile_section_header);
+	sec->section.size = sizeof(struct avf_profile_info);
+	pinfo = (struct avf_profile_info *)(profile_info_sec +
+					     sec->section.offset);
+	pinfo->track_id = track_id;
+	pinfo->version = profile->version;
+	pinfo->op = AVF_DDP_ADD_TRACKID;
+	avf_memcpy(pinfo->name, profile->name, AVF_DDP_NAME_SIZE,
+		    AVF_NONDMA_TO_NONDMA);
+
+	status = avf_aq_write_ddp(hw, (void *)sec, sec->data_end,
+				   track_id, &offset, &info, NULL);
+	return status;
+}
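+
+/* Illustrative flow only, not part of this patch: how the DDP helpers above
+ * fit together when loading a package.  pkg_hdr is assumed to point to a
+ * package image already read into memory; track_id and the buffer sizes are
+ * placeholders.
+ *
+ *	struct avf_generic_seg_header *seg;
+ *	struct avf_profile_segment *profile;
+ *	u8 info_sec[sizeof(struct avf_profile_section_header) +
+ *		    sizeof(struct avf_profile_info)];
+ *	enum avf_status_code ret;
+ *
+ *	seg = avf_find_segment_in_package(SEGMENT_TYPE_AVF, pkg_hdr);
+ *	if (!seg)
+ *		return AVF_ERR_PARAM;
+ *	profile = (struct avf_profile_segment *)seg;
+ *	ret = avf_write_profile(hw, profile, track_id);
+ *	if (!ret)
+ *		ret = avf_add_pinfo_to_list(hw, profile, info_sec, track_id);
+ */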
diff --git a/drivers/net/avf/base/avf_devids.h b/drivers/net/avf/base/avf_devids.h
new file mode 100644
index 0000000..7d9fed2
--- /dev/null
+++ b/drivers/net/avf/base/avf_devids.h
@@ -0,0 +1,43 @@
+/*******************************************************************************
+
+Copyright (c) 2017, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_DEVIDS_H_
+#define _AVF_DEVIDS_H_
+
+/* Vendor ID */
+#define AVF_INTEL_VENDOR_ID		0x8086
+
+/* Device IDs */
+#define AVF_DEV_ID_ADAPTIVE_VF		0x1889
+
+#endif /* _AVF_DEVIDS_H_ */
diff --git a/drivers/net/avf/base/avf_hmc.h b/drivers/net/avf/base/avf_hmc.h
new file mode 100644
index 0000000..b9b7b5b
--- /dev/null
+++ b/drivers/net/avf/base/avf_hmc.h
@@ -0,0 +1,245 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_HMC_H_
+#define _AVF_HMC_H_
+
+#define AVF_HMC_MAX_BP_COUNT 512
+
+/* forward-declare the HW struct for the compiler */
+struct avf_hw;
+
+#define AVF_HMC_INFO_SIGNATURE		0x484D5347 /* HMSG */
+#define AVF_HMC_PD_CNT_IN_SD		512
+#define AVF_HMC_DIRECT_BP_SIZE		0x200000 /* 2M */
+#define AVF_HMC_PAGED_BP_SIZE		4096
+#define AVF_HMC_PD_BP_BUF_ALIGNMENT	4096
+#define AVF_FIRST_VF_FPM_ID		16
+
+struct avf_hmc_obj_info {
+	u64 base;	/* base addr in FPM */
+	u32 max_cnt;	/* max count available for this hmc func */
+	u32 cnt;	/* count of objects driver actually wants to create */
+	u64 size;	/* size in bytes of one object */
+};
+
+enum avf_sd_entry_type {
+	AVF_SD_TYPE_INVALID = 0,
+	AVF_SD_TYPE_PAGED   = 1,
+	AVF_SD_TYPE_DIRECT  = 2
+};
+
+struct avf_hmc_bp {
+	enum avf_sd_entry_type entry_type;
+	struct avf_dma_mem addr; /* populate to be used by hw */
+	u32 sd_pd_index;
+	u32 ref_cnt;
+};
+
+struct avf_hmc_pd_entry {
+	struct avf_hmc_bp bp;
+	u32 sd_index;
+	bool rsrc_pg;
+	bool valid;
+};
+
+struct avf_hmc_pd_table {
+	struct avf_dma_mem pd_page_addr; /* populate to be used by hw */
+	struct avf_hmc_pd_entry  *pd_entry; /* [512] for sw bookkeeping */
+	struct avf_virt_mem pd_entry_virt_mem; /* virt mem for pd_entry */
+
+	u32 ref_cnt;
+	u32 sd_index;
+};
+
+struct avf_hmc_sd_entry {
+	enum avf_sd_entry_type entry_type;
+	bool valid;
+
+	union {
+		struct avf_hmc_pd_table pd_table;
+		struct avf_hmc_bp bp;
+	} u;
+};
+
+struct avf_hmc_sd_table {
+	struct avf_virt_mem addr; /* used to track sd_entry allocations */
+	u32 sd_cnt;
+	u32 ref_cnt;
+	struct avf_hmc_sd_entry *sd_entry; /* (sd_cnt*512) entries max */
+};
+
+struct avf_hmc_info {
+	u32 signature;
+	/* equals the PCI func number for PF; dynamically allocated for VFs */
+	u8 hmc_fn_id;
+	u16 first_sd_index; /* index of the first available SD */
+
+	/* hmc objects */
+	struct avf_hmc_obj_info *hmc_obj;
+	struct avf_virt_mem hmc_obj_virt_mem;
+	struct avf_hmc_sd_table sd_table;
+};
+
+#define AVF_INC_SD_REFCNT(sd_table)	((sd_table)->ref_cnt++)
+#define AVF_INC_PD_REFCNT(pd_table)	((pd_table)->ref_cnt++)
+#define AVF_INC_BP_REFCNT(bp)		((bp)->ref_cnt++)
+
+#define AVF_DEC_SD_REFCNT(sd_table)	((sd_table)->ref_cnt--)
+#define AVF_DEC_PD_REFCNT(pd_table)	((pd_table)->ref_cnt--)
+#define AVF_DEC_BP_REFCNT(bp)		((bp)->ref_cnt--)
+
+/**
+ * AVF_SET_PF_SD_ENTRY - marks the sd entry as valid in the hardware
+ * @hw: pointer to our hw struct
+ * @pa: physical address to program into the sd entry
+ * @sd_index: segment descriptor index
+ * @type: if sd entry is direct or paged
+ **/
+#define AVF_SET_PF_SD_ENTRY(hw, pa, sd_index, type)			\
+{									\
+	u32 val1, val2, val3;						\
+	val1 = (u32)(AVF_HI_DWORD(pa));				\
+	val2 = (u32)(pa) | (AVF_HMC_MAX_BP_COUNT <<			\
+		 AVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) |		\
+		((((type) == AVF_SD_TYPE_PAGED) ? 0 : 1) <<		\
+		AVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT) |			\
+		BIT(AVF_PFHMC_SDDATALOW_PMSDVALID_SHIFT);		\
+	val3 = (sd_index) | BIT_ULL(AVF_PFHMC_SDCMD_PMSDWR_SHIFT);	\
+	wr32((hw), AVF_PFHMC_SDDATAHIGH, val1);			\
+	wr32((hw), AVF_PFHMC_SDDATALOW, val2);				\
+	wr32((hw), AVF_PFHMC_SDCMD, val3);				\
+}
+
+/**
+ * AVF_CLEAR_PF_SD_ENTRY - marks the sd entry as invalid in the hardware
+ * @hw: pointer to our hw struct
+ * @sd_index: segment descriptor index
+ * @type: if sd entry is direct or paged
+ **/
+#define AVF_CLEAR_PF_SD_ENTRY(hw, sd_index, type)			\
+{									\
+	u32 val2, val3;							\
+	val2 = (AVF_HMC_MAX_BP_COUNT <<				\
+		AVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) |		\
+		((((type) == AVF_SD_TYPE_PAGED) ? 0 : 1) <<		\
+		AVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT);			\
+	val3 = (sd_index) | BIT_ULL(AVF_PFHMC_SDCMD_PMSDWR_SHIFT);	\
+	wr32((hw), AVF_PFHMC_SDDATAHIGH, 0);				\
+	wr32((hw), AVF_PFHMC_SDDATALOW, val2);				\
+	wr32((hw), AVF_PFHMC_SDCMD, val3);				\
+}
+
+/**
+ * AVF_INVALIDATE_PF_HMC_PD - Invalidates the pd cache in the hardware
+ * @hw: pointer to our hw struct
+ * @sd_idx: segment descriptor index
+ * @pd_idx: page descriptor index
+ **/
+#define AVF_INVALIDATE_PF_HMC_PD(hw, sd_idx, pd_idx)			\
+	wr32((hw), AVF_PFHMC_PDINV,					\
+	    (((sd_idx) << AVF_PFHMC_PDINV_PMSDIDX_SHIFT) |		\
+	     ((pd_idx) << AVF_PFHMC_PDINV_PMPDIDX_SHIFT)))
+
+/**
+ * AVF_FIND_SD_INDEX_LIMIT - finds segment descriptor index limit
+ * @hmc_info: pointer to the HMC configuration information structure
+ * @type: type of HMC resources we're searching
+ * @index: starting index for the object
+ * @cnt: number of objects we're trying to create
+ * @sd_idx: pointer to return index of the segment descriptor in question
+ * @sd_limit: pointer to return the maximum number of segment descriptors
+ *
+ * This function calculates the segment descriptor index and index limit
+ * for the resource defined by avf_hmc_rsrc_type.
+ **/
+#define AVF_FIND_SD_INDEX_LIMIT(hmc_info, type, index, cnt, sd_idx, sd_limit)\
+{									\
+	u64 fpm_addr, fpm_limit;					\
+	fpm_addr = (hmc_info)->hmc_obj[(type)].base +			\
+		   (hmc_info)->hmc_obj[(type)].size * (index);		\
+	fpm_limit = fpm_addr + (hmc_info)->hmc_obj[(type)].size * (cnt);\
+	*(sd_idx) = (u32)(fpm_addr / AVF_HMC_DIRECT_BP_SIZE);		\
+	*(sd_limit) = (u32)((fpm_limit - 1) / AVF_HMC_DIRECT_BP_SIZE);	\
+	/* add one more to the limit to correct our range */		\
+	*(sd_limit) += 1;						\
+}
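+
+/* Worked example with hypothetical numbers: for base = 0, an object size of
+ * 128 bytes, index = 0 and cnt = 16384, fpm_addr is 0 and fpm_limit is
+ * 128 * 16384 = 2097152 bytes (exactly AVF_HMC_DIRECT_BP_SIZE), so
+ * *(sd_idx) = 0 and *(sd_limit) = (2097152 - 1) / 2097152 + 1 = 1, i.e. a
+ * single 2MB segment descriptor covers the whole range.
+ */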
+
+/**
+ * AVF_FIND_PD_INDEX_LIMIT - finds page descriptor index limit
+ * @hmc_info: pointer to the HMC configuration information struct
+ * @type: HMC resource type we're examining
+ * @idx: starting index for the object
+ * @cnt: number of objects we're trying to create
+ * @pd_index: pointer to return page descriptor index
+ * @pd_limit: pointer to return page descriptor index limit
+ *
+ * Calculates the page descriptor index and index limit for the resource
+ * defined by avf_hmc_rsrc_type.
+ **/
+#define AVF_FIND_PD_INDEX_LIMIT(hmc_info, type, idx, cnt, pd_index, pd_limit)\
+{									\
+	u64 fpm_adr, fpm_limit;						\
+	fpm_adr = (hmc_info)->hmc_obj[(type)].base +			\
+		  (hmc_info)->hmc_obj[(type)].size * (idx);		\
+	fpm_limit = fpm_adr + (hmc_info)->hmc_obj[(type)].size * (cnt);	\
+	*(pd_index) = (u32)(fpm_adr / AVF_HMC_PAGED_BP_SIZE);		\
+	*(pd_limit) = (u32)((fpm_limit - 1) / AVF_HMC_PAGED_BP_SIZE);	\
+	/* add one more to the limit to correct our range */		\
+	*(pd_limit) += 1;						\
+}
+enum avf_status_code avf_add_sd_table_entry(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 sd_index,
+					      enum avf_sd_entry_type type,
+					      u64 direct_mode_sz);
+
+enum avf_status_code avf_add_pd_table_entry(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 pd_index,
+					      struct avf_dma_mem *rsrc_pg);
+enum avf_status_code avf_remove_pd_bp(struct avf_hw *hw,
+					struct avf_hmc_info *hmc_info,
+					u32 idx);
+enum avf_status_code avf_prep_remove_sd_bp(struct avf_hmc_info *hmc_info,
+					     u32 idx);
+enum avf_status_code avf_remove_sd_bp_new(struct avf_hw *hw,
+					    struct avf_hmc_info *hmc_info,
+					    u32 idx, bool is_pf);
+enum avf_status_code avf_prep_remove_pd_page(struct avf_hmc_info *hmc_info,
+					       u32 idx);
+enum avf_status_code avf_remove_pd_page_new(struct avf_hw *hw,
+					      struct avf_hmc_info *hmc_info,
+					      u32 idx, bool is_pf);
+
+#endif /* _AVF_HMC_H_ */
diff --git a/drivers/net/avf/base/avf_lan_hmc.h b/drivers/net/avf/base/avf_lan_hmc.h
new file mode 100644
index 0000000..48805d8
--- /dev/null
+++ b/drivers/net/avf/base/avf_lan_hmc.h
@@ -0,0 +1,200 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_LAN_HMC_H_
+#define _AVF_LAN_HMC_H_
+
+/* forward-declare the HW struct for the compiler */
+struct avf_hw;
+
+/* HMC element context information */
+
+/* Rx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct avf_hmc_obj_rxq {
+	u16 head;
+	u16 cpuid; /* bigger than needed, see above for reason */
+	u64 base;
+	u16 qlen;
+#define AVF_RXQ_CTX_DBUFF_SHIFT 7
+	u16 dbuff; /* bigger than needed, see above for reason */
+#define AVF_RXQ_CTX_HBUFF_SHIFT 6
+	u16 hbuff; /* bigger than needed, see above for reason */
+	u8  dtype;
+	u8  dsize;
+	u8  crcstrip;
+	u8  fc_ena;
+	u8  l2tsel;
+	u8  hsplit_0;
+	u8  hsplit_1;
+	u8  showiv;
+	u32 rxmax; /* bigger than needed, see above for reason */
+	u8  tphrdesc_ena;
+	u8  tphwdesc_ena;
+	u8  tphdata_ena;
+	u8  tphhead_ena;
+	u16 lrxqthresh; /* bigger than needed, see above for reason */
+	u8  prefena;	/* NOTE: normally must be set to 1 at init */
+};
+
+/* Tx queue context data
+ *
+ * The sizes of the variables may be larger than needed due to crossing byte
+ * boundaries. If we do not have the width of the variable set to the correct
+ * size then we could end up shifting bits off the top of the variable when the
+ * variable is at the top of a byte and crosses over into the next byte.
+ */
+struct avf_hmc_obj_txq {
+	u16 head;
+	u8  new_context;
+	u64 base;
+	u8  fc_ena;
+	u8  timesync_ena;
+	u8  fd_ena;
+	u8  alt_vlan_ena;
+	u16 thead_wb;
+	u8  cpuid;
+	u8  head_wb_ena;
+	u16 qlen;
+	u8  tphrdesc_ena;
+	u8  tphrpacket_ena;
+	u8  tphwdesc_ena;
+	u64 head_wb_addr;
+	u32 crc;
+	u16 rdylist;
+	u8  rdylist_act;
+};
+
+/* for hsplit_0 field of Rx HMC context */
+enum avf_hmc_obj_rx_hsplit_0 {
+	AVF_HMC_OBJ_RX_HSPLIT_0_NO_SPLIT      = 0,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_L2      = 1,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_IP      = 2,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_TCP_UDP = 4,
+	AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_SCTP    = 8,
+};
+
+/* fcoe_cntx and fcoe_filt are for debugging purposes only */
+struct avf_hmc_obj_fcoe_cntx {
+	u32 rsv[32];
+};
+
+struct avf_hmc_obj_fcoe_filt {
+	u32 rsv[8];
+};
+
+/* Context sizes for LAN objects */
+enum avf_hmc_lan_object_size {
+	AVF_HMC_LAN_OBJ_SZ_8   = 0x3,
+	AVF_HMC_LAN_OBJ_SZ_16  = 0x4,
+	AVF_HMC_LAN_OBJ_SZ_32  = 0x5,
+	AVF_HMC_LAN_OBJ_SZ_64  = 0x6,
+	AVF_HMC_LAN_OBJ_SZ_128 = 0x7,
+	AVF_HMC_LAN_OBJ_SZ_256 = 0x8,
+	AVF_HMC_LAN_OBJ_SZ_512 = 0x9,
+};
+
+#define AVF_HMC_L2OBJ_BASE_ALIGNMENT 512
+#define AVF_HMC_OBJ_SIZE_TXQ         128
+#define AVF_HMC_OBJ_SIZE_RXQ         32
+#define AVF_HMC_OBJ_SIZE_FCOE_CNTX   64
+#define AVF_HMC_OBJ_SIZE_FCOE_FILT   64
+
+enum avf_hmc_lan_rsrc_type {
+	AVF_HMC_LAN_FULL  = 0,
+	AVF_HMC_LAN_TX    = 1,
+	AVF_HMC_LAN_RX    = 2,
+	AVF_HMC_FCOE_CTX  = 3,
+	AVF_HMC_FCOE_FILT = 4,
+	AVF_HMC_LAN_MAX   = 5
+};
+
+enum avf_hmc_model {
+	AVF_HMC_MODEL_DIRECT_PREFERRED = 0,
+	AVF_HMC_MODEL_DIRECT_ONLY      = 1,
+	AVF_HMC_MODEL_PAGED_ONLY       = 2,
+	AVF_HMC_MODEL_UNKNOWN,
+};
+
+struct avf_hmc_lan_create_obj_info {
+	struct avf_hmc_info *hmc_info;
+	u32 rsrc_type;
+	u32 start_idx;
+	u32 count;
+	enum avf_sd_entry_type entry_type;
+	u64 direct_mode_sz;
+};
+
+struct avf_hmc_lan_delete_obj_info {
+	struct avf_hmc_info *hmc_info;
+	u32 rsrc_type;
+	u32 start_idx;
+	u32 count;
+};
+
+enum avf_status_code avf_init_lan_hmc(struct avf_hw *hw, u32 txq_num,
+					u32 rxq_num, u32 fcoe_cntx_num,
+					u32 fcoe_filt_num);
+enum avf_status_code avf_configure_lan_hmc(struct avf_hw *hw,
+					     enum avf_hmc_model model);
+enum avf_status_code avf_shutdown_lan_hmc(struct avf_hw *hw);
+
+u64 avf_calculate_l2fpm_size(u32 txq_num, u32 rxq_num,
+			      u32 fcoe_cntx_num, u32 fcoe_filt_num);
+enum avf_status_code avf_get_lan_tx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_txq *s);
+enum avf_status_code avf_clear_lan_tx_queue_context(struct avf_hw *hw,
+						      u16 queue);
+enum avf_status_code avf_set_lan_tx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_txq *s);
+enum avf_status_code avf_get_lan_rx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_rxq *s);
+enum avf_status_code avf_clear_lan_rx_queue_context(struct avf_hw *hw,
+						      u16 queue);
+enum avf_status_code avf_set_lan_rx_queue_context(struct avf_hw *hw,
+						    u16 queue,
+						    struct avf_hmc_obj_rxq *s);
+enum avf_status_code avf_create_lan_hmc_object(struct avf_hw *hw,
+				struct avf_hmc_lan_create_obj_info *info);
+enum avf_status_code avf_delete_lan_hmc_object(struct avf_hw *hw,
+				struct avf_hmc_lan_delete_obj_info *info);
+
+#endif /* _AVF_LAN_HMC_H_ */
diff --git a/drivers/net/avf/base/avf_osdep.h b/drivers/net/avf/base/avf_osdep.h
new file mode 100644
index 0000000..2f46bb2
--- /dev/null
+++ b/drivers/net/avf/base/avf_osdep.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_OSDEP_H_
+#define _AVF_OSDEP_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdarg.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_io.h>
+
+#include "../avf_log.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t         u8;
+typedef int8_t          s8;
+typedef uint16_t        u16;
+typedef uint32_t        u32;
+typedef int32_t         s32;
+typedef uint64_t        u64;
+
+#define __iomem
+#define hw_dbg(hw, S, A...) do {} while (0)
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN                  6
+#endif
+
+#ifndef __le16
+#define __le16          uint16_t
+#endif
+#ifndef __le32
+#define __le32          uint32_t
+#endif
+#ifndef __le64
+#define __le64          uint64_t
+#endif
+#ifndef __be16
+#define __be16          uint16_t
+#endif
+#ifndef __be32
+#define __be32          uint32_t
+#endif
+#ifndef __be64
+#define __be64          uint64_t
+#endif
+
+#define FALSE           0
+#define TRUE            1
+#define false           0
+#define true            1
+
+#define min(a,b) RTE_MIN(a,b)
+#define max(a,b) RTE_MAX(a,b)
+
+#define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))
+#define ASSERT(x) do { if (!(x)) rte_panic("AVF: %s\n", #x); } while (0)
+
+#define DEBUGOUT(S)             PMD_DRV_LOG_RAW(DEBUG, S)
+#define DEBUGOUT2(S, A...)      PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F)            DEBUGOUT(F "\n")
+
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define cpu_to_le16(o) rte_cpu_to_le_16(o)
+#define cpu_to_le32(s) rte_cpu_to_le_32(s)
+#define cpu_to_le64(h) rte_cpu_to_le_64(h)
+#define le16_to_cpu(a) rte_le_to_cpu_16(a)
+#define le32_to_cpu(c) rte_le_to_cpu_32(c)
+#define le64_to_cpu(k) rte_le_to_cpu_64(k)
+
+#define avf_memset(a, b, c, d) memset((a), (b), (c))
+#define avf_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
+
+#define avf_usec_delay(x) rte_delay_us(x)
+#define avf_msec_delay(x) rte_delay_us(1000*(x))
+
+#define AVF_PCI_REG(reg)		rte_read32(reg)
+#define AVF_PCI_REG_ADDR(a, reg) \
+	((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+
+#define AVF_PCI_REG_WRITE(reg, value)		\
+	rte_write32((rte_cpu_to_le_32(value)), reg)
+#define AVF_PCI_REG_WRITE_RELAXED(reg, value)	\
+	rte_write32_relaxed((rte_cpu_to_le_32(value)), reg)
+static inline
+uint32_t avf_read_addr(volatile void *addr)
+{
+	return rte_le_to_cpu_32(AVF_PCI_REG(addr));
+}
+
+#define AVF_READ_REG(hw, reg) \
+	avf_read_addr(AVF_PCI_REG_ADDR((hw), (reg)))
+#define AVF_WRITE_REG(hw, reg, value) \
+	AVF_PCI_REG_WRITE(AVF_PCI_REG_ADDR((hw), (reg)), (value))
+#define AVF_WRITE_FLUSH(a) \
+	AVF_READ_REG(a, AVFGEN_RSTAT)
+
+#define rd32(a, reg) avf_read_addr(AVF_PCI_REG_ADDR((a), (reg)))
+#define wr32(a, reg, value) \
+	AVF_PCI_REG_WRITE(AVF_PCI_REG_ADDR((a), (reg)), (value))
+
+#define ARRAY_SIZE(arr) (sizeof(arr)/sizeof(arr[0]))
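The register accessors defined just above are what the rest of the PMD uses
to touch VF registers.  As a rough usage sketch (illustrative only; the
function and the 'nb_desc' parameter are made up, while the register names
come from avf_register.h later in this patch):

	static void
	avf_reg_access_sketch(struct avf_hw *hw, u16 nb_desc)
	{
		u32 rstat;

		/* post the last armed Rx descriptor of queue 0 to hardware */
		wr32(hw, AVF_QRX_TAIL1(0), nb_desc - 1);

		/* read back the VF reset state; the read also flushes any
		 * posted writes, which is exactly what AVF_WRITE_FLUSH()
		 * relies on.
		 */
		rstat = rd32(hw, AVFGEN_RSTAT) & AVFGEN_RSTAT_VFR_STATE_MASK;
		(void)rstat;
	}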
+
+#define avf_debug(h, m, s, ...)                                \
+do {                                                            \
+	if (((m) & (h)->debug_mask))                            \
+		PMD_DRV_LOG_RAW(DEBUG, "avf %02x.%x " s,       \
+			(h)->bus.device, (h)->bus.func,         \
+					##__VA_ARGS__);         \
+} while (0)
+
+/* memory allocation tracking */
+struct avf_dma_mem {
+	void *va;
+	u64 pa;
+	u32 size;
+	const void *zone;
+} __attribute__((packed));
+
+struct avf_virt_mem {
+	void *va;
+	u32 size;
+} __attribute__((packed));
+
+/* SW spinlock */
+struct avf_spinlock {
+	rte_spinlock_t spinlock;
+};
+
+#define avf_allocate_dma_mem(h, m, unused, s, a) \
+			avf_allocate_dma_mem_d(h, m, s, a)
+#define avf_free_dma_mem(h, m) avf_free_dma_mem_d(h, m)
+
+#define avf_allocate_virt_mem(h, m, s) avf_allocate_virt_mem_d(h, m, s)
+#define avf_free_virt_mem(h, m) avf_free_virt_mem_d(h, m)
+
+#define avf_init_spinlock(_sp) avf_init_spinlock_d(_sp)
+#define avf_acquire_spinlock(_sp) avf_acquire_spinlock_d(_sp)
+#define avf_release_spinlock(_sp) avf_release_spinlock_d(_sp)
+#define avf_destroy_spinlock(_sp) avf_destroy_spinlock_d(_sp)
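The *_d() helpers the macros above forward to are expected to be supplied by
the PMD code elsewhere in this series.  A minimal sketch of the lock trio on
top of rte_spinlock, assuming that is how the PMD implements them:

	static inline void
	avf_init_spinlock_d(struct avf_spinlock *sp)
	{
		rte_spinlock_init(&sp->spinlock);
	}

	static inline void
	avf_acquire_spinlock_d(struct avf_spinlock *sp)
	{
		rte_spinlock_lock(&sp->spinlock);
	}

	static inline void
	avf_release_spinlock_d(struct avf_spinlock *sp)
	{
		rte_spinlock_unlock(&sp->spinlock);
	}

The DMA and virtual memory helpers would similarly wrap rte_memzone and
rte_zmalloc allocations behind struct avf_dma_mem and struct avf_virt_mem.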
+
+#endif /* _AVF_OSDEP_H_ */
diff --git a/drivers/net/avf/base/avf_prototype.h b/drivers/net/avf/base/avf_prototype.h
new file mode 100644
index 0000000..de031dc
--- /dev/null
+++ b/drivers/net/avf/base/avf_prototype.h
@@ -0,0 +1,206 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_PROTOTYPE_H_
+#define _AVF_PROTOTYPE_H_
+
+#include "avf_type.h"
+#include "avf_alloc.h"
+#include "virtchnl.h"
+
+/* Prototypes for shared code functions that are not in
+ * the standard function pointer structures.  These are
+ * mostly because they are needed even before the init
+ * has happened and will assist in the early SW and FW
+ * setup.
+ */
+
+/* adminq functions */
+enum avf_status_code avf_init_adminq(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_adminq(struct avf_hw *hw);
+enum avf_status_code avf_init_asq(struct avf_hw *hw);
+enum avf_status_code avf_init_arq(struct avf_hw *hw);
+enum avf_status_code avf_alloc_adminq_asq_ring(struct avf_hw *hw);
+enum avf_status_code avf_alloc_adminq_arq_ring(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_asq(struct avf_hw *hw);
+enum avf_status_code avf_shutdown_arq(struct avf_hw *hw);
+u16 avf_clean_asq(struct avf_hw *hw);
+void avf_free_adminq_asq(struct avf_hw *hw);
+void avf_free_adminq_arq(struct avf_hw *hw);
+enum avf_status_code avf_validate_mac_addr(u8 *mac_addr);
+void avf_adminq_init_ring_data(struct avf_hw *hw);
+enum avf_status_code avf_clean_arq_element(struct avf_hw *hw,
+					     struct avf_arq_event_info *e,
+					     u16 *events_pending);
+enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
+				struct avf_aq_desc *desc,
+				void *buff, /* can be NULL */
+				u16  buff_size,
+				struct avf_asq_cmd_details *cmd_details);
+bool avf_asq_done(struct avf_hw *hw);
+
+/* debug function for adminq */
+void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask,
+		   void *desc, void *buffer, u16 buf_len);
+
+void avf_idle_aq(struct avf_hw *hw);
+bool avf_check_asq_alive(struct avf_hw *hw);
+enum avf_status_code avf_aq_queue_shutdown(struct avf_hw *hw, bool unloading);
+
+enum avf_status_code avf_aq_get_rss_lut(struct avf_hw *hw, u16 seid,
+					  bool pf_lut, u8 *lut, u16 lut_size);
+enum avf_status_code avf_aq_set_rss_lut(struct avf_hw *hw, u16 seid,
+					  bool pf_lut, u8 *lut, u16 lut_size);
+enum avf_status_code avf_aq_get_rss_key(struct avf_hw *hw,
+				     u16 seid,
+				     struct avf_aqc_get_set_rss_key_data *key);
+enum avf_status_code avf_aq_set_rss_key(struct avf_hw *hw,
+				     u16 seid,
+				     struct avf_aqc_get_set_rss_key_data *key);
+const char *avf_aq_str(struct avf_hw *hw, enum avf_admin_queue_err aq_err);
+const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err);
+
+
+enum avf_status_code avf_set_mac_type(struct avf_hw *hw);
+
+extern struct avf_rx_ptype_decoded avf_ptype_lookup[];
+
+STATIC INLINE struct avf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
+{
+	return avf_ptype_lookup[ptype];
+}
+
+/* prototype for functions used for SW spinlocks */
+void avf_init_spinlock(struct avf_spinlock *sp);
+void avf_acquire_spinlock(struct avf_spinlock *sp);
+void avf_release_spinlock(struct avf_spinlock *sp);
+void avf_destroy_spinlock(struct avf_spinlock *sp);
+
+/* avf_common for VF drivers */
+void avf_parse_hw_config(struct avf_hw *hw,
+			     struct virtchnl_vf_resource *msg);
+enum avf_status_code avf_reset(struct avf_hw *hw);
+enum avf_status_code avf_aq_send_msg_to_pf(struct avf_hw *hw,
+				enum virtchnl_ops v_opcode,
+				enum avf_status_code v_retval,
+				u8 *msg, u16 msglen,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_set_filter_control(struct avf_hw *hw,
+				struct avf_filter_control_settings *settings);
+enum avf_status_code avf_aq_add_rem_control_packet_filter(struct avf_hw *hw,
+				u8 *mac_addr, u16 ethtype, u16 flags,
+				u16 vsi_seid, u16 queue, bool is_add,
+				struct avf_control_filter_stats *stats,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_debug_dump(struct avf_hw *hw, u8 cluster_id,
+				u8 table_id, u32 start_index, u16 buff_size,
+				void *buff, u16 *ret_buff_size,
+				u8 *ret_next_table, u32 *ret_next_index,
+				struct avf_asq_cmd_details *cmd_details);
+void avf_add_filter_to_drop_tx_flow_control_frames(struct avf_hw *hw,
+						    u16 vsi_seid);
+enum avf_status_code avf_aq_rx_ctl_read_register(struct avf_hw *hw,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr);
+enum avf_status_code avf_aq_rx_ctl_write_register(struct avf_hw *hw,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val);
+enum avf_status_code avf_aq_set_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw,
+				u8 phy_select, u8 dev_addr,
+				u32 reg_addr, u32 *reg_val,
+				struct avf_asq_cmd_details *cmd_details);
+
+enum avf_status_code avf_aq_set_arp_proxy_config(struct avf_hw *hw,
+			struct avf_aqc_arp_proxy_data *proxy_config,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_set_ns_proxy_table_entry(struct avf_hw *hw,
+			struct avf_aqc_ns_proxy_data *ns_proxy_table_entry,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_set_clear_wol_filter(struct avf_hw *hw,
+			u8 filter_index,
+			struct avf_aqc_set_wol_filter_data *filter,
+			bool set_filter, bool no_wol_tco,
+			bool filter_valid, bool no_wol_tco_valid,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_get_wake_event_reason(struct avf_hw *hw,
+			u16 *wake_reason,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_aq_clear_all_wol_filters(struct avf_hw *hw,
+			struct avf_asq_cmd_details *cmd_details);
+enum avf_status_code avf_read_phy_register_clause22(struct avf_hw *hw,
+					u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register_clause22(struct avf_hw *hw,
+					u16 reg, u8 phy_addr, u16 value);
+enum avf_status_code avf_read_phy_register_clause45(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register_clause45(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 value);
+enum avf_status_code avf_read_phy_register(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 *value);
+enum avf_status_code avf_write_phy_register(struct avf_hw *hw,
+				u8 page, u16 reg, u8 phy_addr, u16 value);
+u8 avf_get_phy_address(struct avf_hw *hw, u8 dev_num);
+enum avf_status_code avf_blink_phy_link_led(struct avf_hw *hw,
+					      u32 time, u32 interval);
+enum avf_status_code avf_aq_write_ddp(struct avf_hw *hw, void *buff,
+					u16 buff_size, u32 track_id,
+					u32 *error_offset, u32 *error_info,
+					struct avf_asq_cmd_details *
+					cmd_details);
+enum avf_status_code avf_aq_get_ddp_list(struct avf_hw *hw, void *buff,
+					   u16 buff_size, u8 flags,
+					   struct avf_asq_cmd_details *
+					   cmd_details);
+struct avf_generic_seg_header *
+avf_find_segment_in_package(u32 segment_type,
+			     struct avf_package_header *pkg_header);
+struct avf_profile_section_header *
+avf_find_section_in_profile(u32 section_type,
+			     struct avf_profile_segment *profile);
+enum avf_status_code
+avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *avf_seg,
+		   u32 track_id);
+enum avf_status_code
+avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *avf_seg,
+		      u32 track_id);
+enum avf_status_code
+avf_add_pinfo_to_list(struct avf_hw *hw,
+		       struct avf_profile_segment *profile,
+		       u8 *profile_info_sec, u32 track_id);
+#endif /* _AVF_PROTOTYPE_H_ */
diff --git a/drivers/net/avf/base/avf_register.h b/drivers/net/avf/base/avf_register.h
new file mode 100644
index 0000000..ba5a9f3
--- /dev/null
+++ b/drivers/net/avf/base/avf_register.h
@@ -0,0 +1,346 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_REGISTER_H_
+#define _AVF_REGISTER_H_
+
+
+#define AVFMSIX_PBA1(_i)          (0x00002000 + ((_i) * 4)) /* _i=0...19 */ /* Reset: VFLR */
+#define AVFMSIX_PBA1_MAX_INDEX    19
+#define AVFMSIX_PBA1_PENBIT_SHIFT 0
+#define AVFMSIX_PBA1_PENBIT_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_PBA1_PENBIT_SHIFT)
+#define AVFMSIX_TADD1(_i)              (0x00002100 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TADD1_MAX_INDEX        639
+#define AVFMSIX_TADD1_MSIXTADD10_SHIFT 0
+#define AVFMSIX_TADD1_MSIXTADD10_MASK  AVF_MASK(0x3, AVFMSIX_TADD1_MSIXTADD10_SHIFT)
+#define AVFMSIX_TADD1_MSIXTADD_SHIFT   2
+#define AVFMSIX_TADD1_MSIXTADD_MASK    AVF_MASK(0x3FFFFFFF, AVFMSIX_TADD1_MSIXTADD_SHIFT)
+#define AVFMSIX_TMSG1(_i)            (0x00002108 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TMSG1_MAX_INDEX      639
+#define AVFMSIX_TMSG1_MSIXTMSG_SHIFT 0
+#define AVFMSIX_TMSG1_MSIXTMSG_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TMSG1_MSIXTMSG_SHIFT)
+#define AVFMSIX_TUADD1(_i)             (0x00002104 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TUADD1_MAX_INDEX       639
+#define AVFMSIX_TUADD1_MSIXTUADD_SHIFT 0
+#define AVFMSIX_TUADD1_MSIXTUADD_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TUADD1_MSIXTUADD_SHIFT)
+#define AVFMSIX_TVCTRL1(_i)        (0x0000210C + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define AVFMSIX_TVCTRL1_MAX_INDEX  639
+#define AVFMSIX_TVCTRL1_MASK_SHIFT 0
+#define AVFMSIX_TVCTRL1_MASK_MASK  AVF_MASK(0x1, AVFMSIX_TVCTRL1_MASK_SHIFT)
+#define AVF_ARQBAH1              0x00006000 /* Reset: EMPR */
+#define AVF_ARQBAH1_ARQBAH_SHIFT 0
+#define AVF_ARQBAH1_ARQBAH_MASK  AVF_MASK(0xFFFFFFFF, AVF_ARQBAH1_ARQBAH_SHIFT)
+#define AVF_ARQBAL1              0x00006C00 /* Reset: EMPR */
+#define AVF_ARQBAL1_ARQBAL_SHIFT 0
+#define AVF_ARQBAL1_ARQBAL_MASK  AVF_MASK(0xFFFFFFFF, AVF_ARQBAL1_ARQBAL_SHIFT)
+#define AVF_ARQH1            0x00007400 /* Reset: EMPR */
+#define AVF_ARQH1_ARQH_SHIFT 0
+#define AVF_ARQH1_ARQH_MASK  AVF_MASK(0x3FF, AVF_ARQH1_ARQH_SHIFT)
+#define AVF_ARQLEN1                 0x00008000 /* Reset: EMPR */
+#define AVF_ARQLEN1_ARQLEN_SHIFT    0
+#define AVF_ARQLEN1_ARQLEN_MASK     AVF_MASK(0x3FF, AVF_ARQLEN1_ARQLEN_SHIFT)
+#define AVF_ARQLEN1_ARQVFE_SHIFT    28
+#define AVF_ARQLEN1_ARQVFE_MASK     AVF_MASK(0x1, AVF_ARQLEN1_ARQVFE_SHIFT)
+#define AVF_ARQLEN1_ARQOVFL_SHIFT   29
+#define AVF_ARQLEN1_ARQOVFL_MASK    AVF_MASK(0x1, AVF_ARQLEN1_ARQOVFL_SHIFT)
+#define AVF_ARQLEN1_ARQCRIT_SHIFT   30
+#define AVF_ARQLEN1_ARQCRIT_MASK    AVF_MASK(0x1, AVF_ARQLEN1_ARQCRIT_SHIFT)
+#define AVF_ARQLEN1_ARQENABLE_SHIFT 31
+#define AVF_ARQLEN1_ARQENABLE_MASK  AVF_MASK(0x1, AVF_ARQLEN1_ARQENABLE_SHIFT)
+#define AVF_ARQT1            0x00007000 /* Reset: EMPR */
+#define AVF_ARQT1_ARQT_SHIFT 0
+#define AVF_ARQT1_ARQT_MASK  AVF_MASK(0x3FF, AVF_ARQT1_ARQT_SHIFT)
+#define AVF_ATQBAH1              0x00007800 /* Reset: EMPR */
+#define AVF_ATQBAH1_ATQBAH_SHIFT 0
+#define AVF_ATQBAH1_ATQBAH_MASK  AVF_MASK(0xFFFFFFFF, AVF_ATQBAH1_ATQBAH_SHIFT)
+#define AVF_ATQBAL1              0x00007C00 /* Reset: EMPR */
+#define AVF_ATQBAL1_ATQBAL_SHIFT 0
+#define AVF_ATQBAL1_ATQBAL_MASK  AVF_MASK(0xFFFFFFFF, AVF_ATQBAL1_ATQBAL_SHIFT)
+#define AVF_ATQH1            0x00006400 /* Reset: EMPR */
+#define AVF_ATQH1_ATQH_SHIFT 0
+#define AVF_ATQH1_ATQH_MASK  AVF_MASK(0x3FF, AVF_ATQH1_ATQH_SHIFT)
+#define AVF_ATQLEN1                 0x00006800 /* Reset: EMPR */
+#define AVF_ATQLEN1_ATQLEN_SHIFT    0
+#define AVF_ATQLEN1_ATQLEN_MASK     AVF_MASK(0x3FF, AVF_ATQLEN1_ATQLEN_SHIFT)
+#define AVF_ATQLEN1_ATQVFE_SHIFT    28
+#define AVF_ATQLEN1_ATQVFE_MASK     AVF_MASK(0x1, AVF_ATQLEN1_ATQVFE_SHIFT)
+#define AVF_ATQLEN1_ATQOVFL_SHIFT   29
+#define AVF_ATQLEN1_ATQOVFL_MASK    AVF_MASK(0x1, AVF_ATQLEN1_ATQOVFL_SHIFT)
+#define AVF_ATQLEN1_ATQCRIT_SHIFT   30
+#define AVF_ATQLEN1_ATQCRIT_MASK    AVF_MASK(0x1, AVF_ATQLEN1_ATQCRIT_SHIFT)
+#define AVF_ATQLEN1_ATQENABLE_SHIFT 31
+#define AVF_ATQLEN1_ATQENABLE_MASK  AVF_MASK(0x1, AVF_ATQLEN1_ATQENABLE_SHIFT)
+#define AVF_ATQT1            0x00008400 /* Reset: EMPR */
+#define AVF_ATQT1_ATQT_SHIFT 0
+#define AVF_ATQT1_ATQT_MASK  AVF_MASK(0x3FF, AVF_ATQT1_ATQT_SHIFT)
+#define AVFGEN_RSTAT                 0x00008800 /* Reset: VFR */
+#define AVFGEN_RSTAT_VFR_STATE_SHIFT 0
+#define AVFGEN_RSTAT_VFR_STATE_MASK  AVF_MASK(0x3, AVFGEN_RSTAT_VFR_STATE_SHIFT)
+#define AVFINT_DYN_CTL01                       0x00005C00 /* Reset: VFR */
+#define AVFINT_DYN_CTL01_INTENA_SHIFT          0
+#define AVFINT_DYN_CTL01_INTENA_MASK           AVF_MASK(0x1, AVFINT_DYN_CTL01_INTENA_SHIFT)
+#define AVFINT_DYN_CTL01_CLEARPBA_SHIFT        1
+#define AVFINT_DYN_CTL01_CLEARPBA_MASK         AVF_MASK(0x1, AVFINT_DYN_CTL01_CLEARPBA_SHIFT)
+#define AVFINT_DYN_CTL01_SWINT_TRIG_SHIFT      2
+#define AVFINT_DYN_CTL01_SWINT_TRIG_MASK       AVF_MASK(0x1, AVFINT_DYN_CTL01_SWINT_TRIG_SHIFT)
+#define AVFINT_DYN_CTL01_ITR_INDX_SHIFT        3
+#define AVFINT_DYN_CTL01_ITR_INDX_MASK         AVF_MASK(0x3, AVFINT_DYN_CTL01_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTL01_INTERVAL_SHIFT        5
+#define AVFINT_DYN_CTL01_INTERVAL_MASK         AVF_MASK(0xFFF, AVFINT_DYN_CTL01_INTERVAL_SHIFT)
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT 24
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_MASK  AVF_MASK(0x1, AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT)
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT     25
+#define AVFINT_DYN_CTL01_SW_ITR_INDX_MASK      AVF_MASK(0x3, AVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTL01_INTENA_MSK_SHIFT      31
+#define AVFINT_DYN_CTL01_INTENA_MSK_MASK       AVF_MASK(0x1, AVFINT_DYN_CTL01_INTENA_MSK_SHIFT)
+#define AVFINT_DYN_CTLN1(_INTVF)               (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */
+#define AVFINT_DYN_CTLN1_MAX_INDEX             15
+#define AVFINT_DYN_CTLN1_INTENA_SHIFT          0
+#define AVFINT_DYN_CTLN1_INTENA_MASK           AVF_MASK(0x1, AVFINT_DYN_CTLN1_INTENA_SHIFT)
+#define AVFINT_DYN_CTLN1_CLEARPBA_SHIFT        1
+#define AVFINT_DYN_CTLN1_CLEARPBA_MASK         AVF_MASK(0x1, AVFINT_DYN_CTLN1_CLEARPBA_SHIFT)
+#define AVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT      2
+#define AVFINT_DYN_CTLN1_SWINT_TRIG_MASK       AVF_MASK(0x1, AVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT)
+#define AVFINT_DYN_CTLN1_ITR_INDX_SHIFT        3
+#define AVFINT_DYN_CTLN1_ITR_INDX_MASK         AVF_MASK(0x3, AVFINT_DYN_CTLN1_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTLN1_INTERVAL_SHIFT        5
+#define AVFINT_DYN_CTLN1_INTERVAL_MASK         AVF_MASK(0xFFF, AVFINT_DYN_CTLN1_INTERVAL_SHIFT)
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT 24
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_MASK  AVF_MASK(0x1, AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT)
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT     25
+#define AVFINT_DYN_CTLN1_SW_ITR_INDX_MASK      AVF_MASK(0x3, AVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT)
+#define AVFINT_DYN_CTLN1_INTENA_MSK_SHIFT      31
+#define AVFINT_DYN_CTLN1_INTENA_MSK_MASK       AVF_MASK(0x1, AVFINT_DYN_CTLN1_INTENA_MSK_SHIFT)
+#define AVFINT_ICR0_ENA1                        0x00005000 /* Reset: CORER */
+#define AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT 25
+#define AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_MASK  AVF_MASK(0x1, AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT)
+#define AVFINT_ICR0_ENA1_ADMINQ_SHIFT           30
+#define AVFINT_ICR0_ENA1_ADMINQ_MASK            AVF_MASK(0x1, AVFINT_ICR0_ENA1_ADMINQ_SHIFT)
+#define AVFINT_ICR0_ENA1_RSVD_SHIFT             31
+#define AVFINT_ICR0_ENA1_RSVD_MASK              AVF_MASK(0x1, AVFINT_ICR0_ENA1_RSVD_SHIFT)
+#define AVFINT_ICR01                        0x00004800 /* Reset: CORER */
+#define AVFINT_ICR01_INTEVENT_SHIFT         0
+#define AVFINT_ICR01_INTEVENT_MASK          AVF_MASK(0x1, AVFINT_ICR01_INTEVENT_SHIFT)
+#define AVFINT_ICR01_QUEUE_0_SHIFT          1
+#define AVFINT_ICR01_QUEUE_0_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_0_SHIFT)
+#define AVFINT_ICR01_QUEUE_1_SHIFT          2
+#define AVFINT_ICR01_QUEUE_1_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_1_SHIFT)
+#define AVFINT_ICR01_QUEUE_2_SHIFT          3
+#define AVFINT_ICR01_QUEUE_2_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_2_SHIFT)
+#define AVFINT_ICR01_QUEUE_3_SHIFT          4
+#define AVFINT_ICR01_QUEUE_3_MASK           AVF_MASK(0x1, AVFINT_ICR01_QUEUE_3_SHIFT)
+#define AVFINT_ICR01_LINK_STAT_CHANGE_SHIFT 25
+#define AVFINT_ICR01_LINK_STAT_CHANGE_MASK  AVF_MASK(0x1, AVFINT_ICR01_LINK_STAT_CHANGE_SHIFT)
+#define AVFINT_ICR01_ADMINQ_SHIFT           30
+#define AVFINT_ICR01_ADMINQ_MASK            AVF_MASK(0x1, AVFINT_ICR01_ADMINQ_SHIFT)
+#define AVFINT_ICR01_SWINT_SHIFT            31
+#define AVFINT_ICR01_SWINT_MASK             AVF_MASK(0x1, AVFINT_ICR01_SWINT_SHIFT)
+#define AVFINT_ITR01(_i)            (0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset: VFR */
+#define AVFINT_ITR01_MAX_INDEX      2
+#define AVFINT_ITR01_INTERVAL_SHIFT 0
+#define AVFINT_ITR01_INTERVAL_MASK  AVF_MASK(0xFFF, AVFINT_ITR01_INTERVAL_SHIFT)
+#define AVFINT_ITRN1(_i, _INTVF)     (0x00002800 + ((_i) * 64 + (_INTVF) * 4)) /* _i=0...2, _INTVF=0...15 */ /* Reset: VFR */
+#define AVFINT_ITRN1_MAX_INDEX      2
+#define AVFINT_ITRN1_INTERVAL_SHIFT 0
+#define AVFINT_ITRN1_INTERVAL_MASK  AVF_MASK(0xFFF, AVFINT_ITRN1_INTERVAL_SHIFT)
+#define AVFINT_STAT_CTL01                      0x00005400 /* Reset: CORER */
+#define AVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT 2
+#define AVFINT_STAT_CTL01_OTHER_ITR_INDX_MASK  AVF_MASK(0x3, AVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT)
+#define AVF_QRX_TAIL1(_Q)        (0x00002000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define AVF_QRX_TAIL1_MAX_INDEX  15
+#define AVF_QRX_TAIL1_TAIL_SHIFT 0
+#define AVF_QRX_TAIL1_TAIL_MASK  AVF_MASK(0x1FFF, AVF_QRX_TAIL1_TAIL_SHIFT)
+#define AVF_QTX_TAIL1(_Q)        (0x00000000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: PFR */
+#define AVF_QTX_TAIL1_MAX_INDEX  15
+#define AVF_QTX_TAIL1_TAIL_SHIFT 0
+#define AVF_QTX_TAIL1_TAIL_MASK  AVF_MASK(0x1FFF, AVF_QTX_TAIL1_TAIL_SHIFT)
+#define AVFMSIX_PBA              0x00002000 /* Reset: VFLR */
+#define AVFMSIX_PBA_PENBIT_SHIFT 0
+#define AVFMSIX_PBA_PENBIT_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_PBA_PENBIT_SHIFT)
+#define AVFMSIX_TADD(_i)              (0x00000000 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TADD_MAX_INDEX        16
+#define AVFMSIX_TADD_MSIXTADD10_SHIFT 0
+#define AVFMSIX_TADD_MSIXTADD10_MASK  AVF_MASK(0x3, AVFMSIX_TADD_MSIXTADD10_SHIFT)
+#define AVFMSIX_TADD_MSIXTADD_SHIFT   2
+#define AVFMSIX_TADD_MSIXTADD_MASK    AVF_MASK(0x3FFFFFFF, AVFMSIX_TADD_MSIXTADD_SHIFT)
+#define AVFMSIX_TMSG(_i)            (0x00000008 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TMSG_MAX_INDEX      16
+#define AVFMSIX_TMSG_MSIXTMSG_SHIFT 0
+#define AVFMSIX_TMSG_MSIXTMSG_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TMSG_MSIXTMSG_SHIFT)
+#define AVFMSIX_TUADD(_i)             (0x00000004 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TUADD_MAX_INDEX       16
+#define AVFMSIX_TUADD_MSIXTUADD_SHIFT 0
+#define AVFMSIX_TUADD_MSIXTUADD_MASK  AVF_MASK(0xFFFFFFFF, AVFMSIX_TUADD_MSIXTUADD_SHIFT)
+#define AVFMSIX_TVCTRL(_i)        (0x0000000C + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define AVFMSIX_TVCTRL_MAX_INDEX  16
+#define AVFMSIX_TVCTRL_MASK_SHIFT 0
+#define AVFMSIX_TVCTRL_MASK_MASK  AVF_MASK(0x1, AVFMSIX_TVCTRL_MASK_SHIFT)
+#define AVFCM_PE_ERRDATA                  0x0000DC00 /* Reset: VFR */
+#define AVFCM_PE_ERRDATA_ERROR_CODE_SHIFT 0
+#define AVFCM_PE_ERRDATA_ERROR_CODE_MASK  AVF_MASK(0xF, AVFCM_PE_ERRDATA_ERROR_CODE_SHIFT)
+#define AVFCM_PE_ERRDATA_Q_TYPE_SHIFT     4
+#define AVFCM_PE_ERRDATA_Q_TYPE_MASK      AVF_MASK(0x7, AVFCM_PE_ERRDATA_Q_TYPE_SHIFT)
+#define AVFCM_PE_ERRDATA_Q_NUM_SHIFT      8
+#define AVFCM_PE_ERRDATA_Q_NUM_MASK       AVF_MASK(0x3FFFF, AVFCM_PE_ERRDATA_Q_NUM_SHIFT)
+#define AVFCM_PE_ERRINFO                     0x0000D800 /* Reset: VFR */
+#define AVFCM_PE_ERRINFO_ERROR_VALID_SHIFT   0
+#define AVFCM_PE_ERRINFO_ERROR_VALID_MASK    AVF_MASK(0x1, AVFCM_PE_ERRINFO_ERROR_VALID_SHIFT)
+#define AVFCM_PE_ERRINFO_ERROR_INST_SHIFT    4
+#define AVFCM_PE_ERRINFO_ERROR_INST_MASK     AVF_MASK(0x7, AVFCM_PE_ERRINFO_ERROR_INST_SHIFT)
+#define AVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT 8
+#define AVFCM_PE_ERRINFO_DBL_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT)
+#define AVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT 16
+#define AVFCM_PE_ERRINFO_RLU_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT)
+#define AVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT 24
+#define AVFCM_PE_ERRINFO_RLS_ERROR_CNT_MASK  AVF_MASK(0xFF, AVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT)
+#define AVFQF_HENA(_i)             (0x0000C400 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */
+#define AVFQF_HENA_MAX_INDEX       1
+#define AVFQF_HENA_PTYPE_ENA_SHIFT 0
+#define AVFQF_HENA_PTYPE_ENA_MASK  AVF_MASK(0xFFFFFFFF, AVFQF_HENA_PTYPE_ENA_SHIFT)
+#define AVFQF_HKEY(_i)         (0x0000CC00 + ((_i) * 4)) /* _i=0...12 */ /* Reset: CORER */
+#define AVFQF_HKEY_MAX_INDEX   12
+#define AVFQF_HKEY_KEY_0_SHIFT 0
+#define AVFQF_HKEY_KEY_0_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_0_SHIFT)
+#define AVFQF_HKEY_KEY_1_SHIFT 8
+#define AVFQF_HKEY_KEY_1_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_1_SHIFT)
+#define AVFQF_HKEY_KEY_2_SHIFT 16
+#define AVFQF_HKEY_KEY_2_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_2_SHIFT)
+#define AVFQF_HKEY_KEY_3_SHIFT 24
+#define AVFQF_HKEY_KEY_3_MASK  AVF_MASK(0xFF, AVFQF_HKEY_KEY_3_SHIFT)
+#define AVFQF_HLUT(_i)        (0x0000D000 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define AVFQF_HLUT_MAX_INDEX  15
+#define AVFQF_HLUT_LUT0_SHIFT 0
+#define AVFQF_HLUT_LUT0_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT0_SHIFT)
+#define AVFQF_HLUT_LUT1_SHIFT 8
+#define AVFQF_HLUT_LUT1_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT1_SHIFT)
+#define AVFQF_HLUT_LUT2_SHIFT 16
+#define AVFQF_HLUT_LUT2_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT2_SHIFT)
+#define AVFQF_HLUT_LUT3_SHIFT 24
+#define AVFQF_HLUT_LUT3_MASK  AVF_MASK(0xF, AVFQF_HLUT_LUT3_SHIFT)
+#define AVFQF_HREGION(_i)                  (0x0000D400 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */
+#define AVFQF_HREGION_MAX_INDEX            7
+#define AVFQF_HREGION_OVERRIDE_ENA_0_SHIFT 0
+#define AVFQF_HREGION_OVERRIDE_ENA_0_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_0_SHIFT)
+#define AVFQF_HREGION_REGION_0_SHIFT       1
+#define AVFQF_HREGION_REGION_0_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_0_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_1_SHIFT 4
+#define AVFQF_HREGION_OVERRIDE_ENA_1_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_1_SHIFT)
+#define AVFQF_HREGION_REGION_1_SHIFT       5
+#define AVFQF_HREGION_REGION_1_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_1_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_2_SHIFT 8
+#define AVFQF_HREGION_OVERRIDE_ENA_2_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_2_SHIFT)
+#define AVFQF_HREGION_REGION_2_SHIFT       9
+#define AVFQF_HREGION_REGION_2_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_2_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_3_SHIFT 12
+#define AVFQF_HREGION_OVERRIDE_ENA_3_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_3_SHIFT)
+#define AVFQF_HREGION_REGION_3_SHIFT       13
+#define AVFQF_HREGION_REGION_3_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_3_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_4_SHIFT 16
+#define AVFQF_HREGION_OVERRIDE_ENA_4_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_4_SHIFT)
+#define AVFQF_HREGION_REGION_4_SHIFT       17
+#define AVFQF_HREGION_REGION_4_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_4_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_5_SHIFT 20
+#define AVFQF_HREGION_OVERRIDE_ENA_5_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_5_SHIFT)
+#define AVFQF_HREGION_REGION_5_SHIFT       21
+#define AVFQF_HREGION_REGION_5_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_5_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_6_SHIFT 24
+#define AVFQF_HREGION_OVERRIDE_ENA_6_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_6_SHIFT)
+#define AVFQF_HREGION_REGION_6_SHIFT       25
+#define AVFQF_HREGION_REGION_6_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_6_SHIFT)
+#define AVFQF_HREGION_OVERRIDE_ENA_7_SHIFT 28
+#define AVFQF_HREGION_OVERRIDE_ENA_7_MASK  AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_7_SHIFT)
+#define AVFQF_HREGION_REGION_7_SHIFT       29
+#define AVFQF_HREGION_REGION_7_MASK        AVF_MASK(0x7, AVFQF_HREGION_REGION_7_SHIFT)
+
+#define AVFINT_DYN_CTL01_WB_ON_ITR_SHIFT       30
+#define AVFINT_DYN_CTL01_WB_ON_ITR_MASK        AVF_MASK(0x1, AVFINT_DYN_CTL01_WB_ON_ITR_SHIFT)
+#define AVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT       30
+#define AVFINT_DYN_CTLN1_WB_ON_ITR_MASK        AVF_MASK(0x1, AVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT)
+#define AVFPE_AEQALLOC1               0x0000A400 /* Reset: VFR */
+#define AVFPE_AEQALLOC1_AECOUNT_SHIFT 0
+#define AVFPE_AEQALLOC1_AECOUNT_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_AEQALLOC1_AECOUNT_SHIFT)
+#define AVFPE_CCQPHIGH1                  0x00009800 /* Reset: VFR */
+#define AVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT 0
+#define AVFPE_CCQPHIGH1_PECCQPHIGH_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT)
+#define AVFPE_CCQPLOW1                 0x0000AC00 /* Reset: VFR */
+#define AVFPE_CCQPLOW1_PECCQPLOW_SHIFT 0
+#define AVFPE_CCQPLOW1_PECCQPLOW_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_CCQPLOW1_PECCQPLOW_SHIFT)
+#define AVFPE_CCQPSTATUS1                   0x0000B800 /* Reset: VFR */
+#define AVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT   0
+#define AVFPE_CCQPSTATUS1_CCQP_DONE_MASK    AVF_MASK(0x1, AVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT)
+#define AVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT 4
+#define AVFPE_CCQPSTATUS1_HMC_PROFILE_MASK  AVF_MASK(0x7, AVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT)
+#define AVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT 16
+#define AVFPE_CCQPSTATUS1_RDMA_EN_VFS_MASK  AVF_MASK(0x3F, AVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT)
+#define AVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT    31
+#define AVFPE_CCQPSTATUS1_CCQP_ERR_MASK     AVF_MASK(0x1, AVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT)
+#define AVFPE_CQACK1              0x0000B000 /* Reset: VFR */
+#define AVFPE_CQACK1_PECQID_SHIFT 0
+#define AVFPE_CQACK1_PECQID_MASK  AVF_MASK(0x1FFFF, AVFPE_CQACK1_PECQID_SHIFT)
+#define AVFPE_CQARM1              0x0000B400 /* Reset: VFR */
+#define AVFPE_CQARM1_PECQID_SHIFT 0
+#define AVFPE_CQARM1_PECQID_MASK  AVF_MASK(0x1FFFF, AVFPE_CQARM1_PECQID_SHIFT)
+#define AVFPE_CQPDB1              0x0000BC00 /* Reset: VFR */
+#define AVFPE_CQPDB1_WQHEAD_SHIFT 0
+#define AVFPE_CQPDB1_WQHEAD_MASK  AVF_MASK(0x7FF, AVFPE_CQPDB1_WQHEAD_SHIFT)
+#define AVFPE_CQPERRCODES1                      0x00009C00 /* Reset: VFR */
+#define AVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT 0
+#define AVFPE_CQPERRCODES1_CQP_MINOR_CODE_MASK  AVF_MASK(0xFFFF, AVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT)
+#define AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT 16
+#define AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_MASK  AVF_MASK(0xFFFF, AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT)
+#define AVFPE_CQPTAIL1                  0x0000A000 /* Reset: VFR */
+#define AVFPE_CQPTAIL1_WQTAIL_SHIFT     0
+#define AVFPE_CQPTAIL1_WQTAIL_MASK      AVF_MASK(0x7FF, AVFPE_CQPTAIL1_WQTAIL_SHIFT)
+#define AVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT 31
+#define AVFPE_CQPTAIL1_CQP_OP_ERR_MASK  AVF_MASK(0x1, AVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT)
+#define AVFPE_IPCONFIG01                        0x00008C00 /* Reset: VFR */
+#define AVFPE_IPCONFIG01_PEIPID_SHIFT           0
+#define AVFPE_IPCONFIG01_PEIPID_MASK            AVF_MASK(0xFFFF, AVFPE_IPCONFIG01_PEIPID_SHIFT)
+#define AVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT 16
+#define AVFPE_IPCONFIG01_USEENTIREIDRANGE_MASK  AVF_MASK(0x1, AVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT)
+#define AVFPE_MRTEIDXMASK1                       0x00009000 /* Reset: VFR */
+#define AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT 0
+#define AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_MASK  AVF_MASK(0x1F, AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT)
+#define AVFPE_RCVUNEXPECTEDERROR1                        0x00009400 /* Reset: VFR */
+#define AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT 0
+#define AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_MASK  AVF_MASK(0xFFFFFF, AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT)
+#define AVFPE_TCPNOWTIMER1               0x0000A800 /* Reset: VFR */
+#define AVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT 0
+#define AVFPE_TCPNOWTIMER1_TCP_NOW_MASK  AVF_MASK(0xFFFFFFFF, AVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT)
+#define AVFPE_WQEALLOC1                      0x0000C000 /* Reset: VFR */
+#define AVFPE_WQEALLOC1_PEQPID_SHIFT         0
+#define AVFPE_WQEALLOC1_PEQPID_MASK          AVF_MASK(0x3FFFF, AVFPE_WQEALLOC1_PEQPID_SHIFT)
+#define AVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT 20
+#define AVFPE_WQEALLOC1_WQE_DESC_INDEX_MASK  AVF_MASK(0xFFF, AVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT)
+
+#endif /* _AVF_REGISTER_H_ */
diff --git a/drivers/net/avf/base/avf_status.h b/drivers/net/avf/base/avf_status.h
new file mode 100644
index 0000000..e8a673b
--- /dev/null
+++ b/drivers/net/avf/base/avf_status.h
@@ -0,0 +1,108 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_STATUS_H_
+#define _AVF_STATUS_H_
+
+/* Error Codes */
+enum avf_status_code {
+	AVF_SUCCESS				= 0,
+	AVF_ERR_NVM				= -1,
+	AVF_ERR_NVM_CHECKSUM			= -2,
+	AVF_ERR_PHY				= -3,
+	AVF_ERR_CONFIG				= -4,
+	AVF_ERR_PARAM				= -5,
+	AVF_ERR_MAC_TYPE			= -6,
+	AVF_ERR_UNKNOWN_PHY			= -7,
+	AVF_ERR_LINK_SETUP			= -8,
+	AVF_ERR_ADAPTER_STOPPED		= -9,
+	AVF_ERR_INVALID_MAC_ADDR		= -10,
+	AVF_ERR_DEVICE_NOT_SUPPORTED		= -11,
+	AVF_ERR_MASTER_REQUESTS_PENDING	= -12,
+	AVF_ERR_INVALID_LINK_SETTINGS		= -13,
+	AVF_ERR_AUTONEG_NOT_COMPLETE		= -14,
+	AVF_ERR_RESET_FAILED			= -15,
+	AVF_ERR_SWFW_SYNC			= -16,
+	AVF_ERR_NO_AVAILABLE_VSI		= -17,
+	AVF_ERR_NO_MEMORY			= -18,
+	AVF_ERR_BAD_PTR			= -19,
+	AVF_ERR_RING_FULL			= -20,
+	AVF_ERR_INVALID_PD_ID			= -21,
+	AVF_ERR_INVALID_QP_ID			= -22,
+	AVF_ERR_INVALID_CQ_ID			= -23,
+	AVF_ERR_INVALID_CEQ_ID			= -24,
+	AVF_ERR_INVALID_AEQ_ID			= -25,
+	AVF_ERR_INVALID_SIZE			= -26,
+	AVF_ERR_INVALID_ARP_INDEX		= -27,
+	AVF_ERR_INVALID_FPM_FUNC_ID		= -28,
+	AVF_ERR_QP_INVALID_MSG_SIZE		= -29,
+	AVF_ERR_QP_TOOMANY_WRS_POSTED		= -30,
+	AVF_ERR_INVALID_FRAG_COUNT		= -31,
+	AVF_ERR_QUEUE_EMPTY			= -32,
+	AVF_ERR_INVALID_ALIGNMENT		= -33,
+	AVF_ERR_FLUSHED_QUEUE			= -34,
+	AVF_ERR_INVALID_PUSH_PAGE_INDEX	= -35,
+	AVF_ERR_INVALID_IMM_DATA_SIZE		= -36,
+	AVF_ERR_TIMEOUT			= -37,
+	AVF_ERR_OPCODE_MISMATCH		= -38,
+	AVF_ERR_CQP_COMPL_ERROR		= -39,
+	AVF_ERR_INVALID_VF_ID			= -40,
+	AVF_ERR_INVALID_HMCFN_ID		= -41,
+	AVF_ERR_BACKING_PAGE_ERROR		= -42,
+	AVF_ERR_NO_PBLCHUNKS_AVAILABLE		= -43,
+	AVF_ERR_INVALID_PBLE_INDEX		= -44,
+	AVF_ERR_INVALID_SD_INDEX		= -45,
+	AVF_ERR_INVALID_PAGE_DESC_INDEX	= -46,
+	AVF_ERR_INVALID_SD_TYPE		= -47,
+	AVF_ERR_MEMCPY_FAILED			= -48,
+	AVF_ERR_INVALID_HMC_OBJ_INDEX		= -49,
+	AVF_ERR_INVALID_HMC_OBJ_COUNT		= -50,
+	AVF_ERR_INVALID_SRQ_ARM_LIMIT		= -51,
+	AVF_ERR_SRQ_ENABLED			= -52,
+	AVF_ERR_ADMIN_QUEUE_ERROR		= -53,
+	AVF_ERR_ADMIN_QUEUE_TIMEOUT		= -54,
+	AVF_ERR_BUF_TOO_SHORT			= -55,
+	AVF_ERR_ADMIN_QUEUE_FULL		= -56,
+	AVF_ERR_ADMIN_QUEUE_NO_WORK		= -57,
+	AVF_ERR_BAD_IWARP_CQE			= -58,
+	AVF_ERR_NVM_BLANK_MODE			= -59,
+	AVF_ERR_NOT_IMPLEMENTED		= -60,
+	AVF_ERR_PE_DOORBELL_NOT_ENABLED	= -61,
+	AVF_ERR_DIAG_TEST_FAILED		= -62,
+	AVF_ERR_NOT_READY			= -63,
+	AVF_NOT_SUPPORTED			= -64,
+	AVF_ERR_FIRMWARE_API_VERSION		= -65,
+	AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR	= -66,
+};
+
+#endif /* _AVF_STATUS_H_ */
diff --git a/drivers/net/avf/base/avf_type.h b/drivers/net/avf/base/avf_type.h
new file mode 100644
index 0000000..546c6d2
--- /dev/null
+++ b/drivers/net/avf/base/avf_type.h
@@ -0,0 +1,2024 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _AVF_TYPE_H_
+#define _AVF_TYPE_H_
+
+#include "avf_status.h"
+#include "avf_osdep.h"
+#include "avf_register.h"
+#include "avf_adminq.h"
+#include "avf_hmc.h"
+#include "avf_lan_hmc.h"
+#include "avf_devids.h"
+
+#define UNREFERENCED_XPARAMETER
+#define UNREFERENCED_1PARAMETER(_p) (_p);
+#define UNREFERENCED_2PARAMETER(_p, _q) (_p); (_q);
+#define UNREFERENCED_3PARAMETER(_p, _q, _r) (_p); (_q); (_r);
+#define UNREFERENCED_4PARAMETER(_p, _q, _r, _s) (_p); (_q); (_r); (_s);
+#define UNREFERENCED_5PARAMETER(_p, _q, _r, _s, _t) (_p); (_q); (_r); (_s); (_t);
+
+#ifndef LINUX_MACROS
+#ifndef BIT
+#define BIT(a) (1UL << (a))
+#endif /* BIT */
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif /* BIT_ULL */
+#endif /* LINUX_MACROS */
+
+#ifndef AVF_MASK
+/* AVF_MASK is a macro used on 32-bit registers */
+#define AVF_MASK(mask, shift) ((mask) << (shift))
+#endif
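Every register in avf_register.h (included above) comes with a matching
_SHIFT/_MASK pair built from AVF_MASK.  A rough sketch of the usual pattern
(not code from this patch; 'hw' and 'len' are stand-ins) for programming the
admin receive queue length register:

	static void
	avf_arq_enable_sketch(struct avf_hw *hw, u16 len)
	{
		u32 reg = 0;

		reg |= ((u32)len << AVF_ARQLEN1_ARQLEN_SHIFT) &
			AVF_ARQLEN1_ARQLEN_MASK;
		reg |= AVF_ARQLEN1_ARQENABLE_MASK;
		wr32(hw, AVF_ARQLEN1, reg);
	}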
+
+#define AVF_MAX_PF			16
+#define AVF_MAX_PF_VSI			64
+#define AVF_MAX_PF_QP			128
+#define AVF_MAX_VSI_QP			16
+#define AVF_MAX_VF_VSI			3
+#define AVF_MAX_CHAINED_RX_BUFFERS	5
+#define AVF_MAX_PF_UDP_OFFLOAD_PORTS	16
+
+/* something less than 1 minute */
+#define AVF_HEARTBEAT_TIMEOUT		(HZ * 50)
+
+/* Max default timeout in ms */
+#define AVF_MAX_NVM_TIMEOUT		18000
+
+/* Max timeout in ms for the phy to respond */
+#define AVF_MAX_PHY_TIMEOUT		500
+
+/* Check whether address is multicast. */
+#define AVF_IS_MULTICAST(address) (bool)(((u8 *)(address))[0] & ((u8)0x01))
+
+/* Check whether an address is broadcast. */
+#define AVF_IS_BROADCAST(address)	\
+	((((u8 *)(address))[0] == ((u8)0xff)) && \
+	(((u8 *)(address))[1] == ((u8)0xff)))
+
+/* Switch from ms to the 1usec global time (this is the GTIME resolution) */
+#define AVF_MS_TO_GTIME(time)		((time) * 1000)
+
+/* forward declaration */
+struct avf_hw;
+typedef void (*AVF_ADMINQ_CALLBACK)(struct avf_hw *, struct avf_aq_desc *);
+
+#ifndef ETH_ALEN
+#define ETH_ALEN	6
+#endif
+/* Data type manipulation macros. */
+#define AVF_HI_DWORD(x)	((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define AVF_LO_DWORD(x)	((u32)((x) & 0xFFFFFFFF))
+
+#define AVF_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
+#define AVF_LO_WORD(x)		((u16)((x) & 0xFFFF))
+
+#define AVF_HI_BYTE(x)		((u8)(((x) >> 8) & 0xFF))
+#define AVF_LO_BYTE(x)		((u8)((x) & 0xFF))
+
+/* Number of Transmit Descriptors must be a multiple of 8. */
+#define AVF_REQ_TX_DESCRIPTOR_MULTIPLE	8
+/* Number of Receive Descriptors must be a multiple of 32 if
+ * the number of descriptors is greater than 32.
+ */
+#define AVF_REQ_RX_DESCRIPTOR_MULTIPLE	32
+
+#define AVF_DESC_UNUSED(R)	\
+	((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
+	(R)->next_to_clean - (R)->next_to_use - 1)
+
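A quick worked example of the ring arithmetic above: with count = 512,
next_to_use = 500 and next_to_clean = 10, next_to_clean is not greater than
next_to_use, so the ring size is added back in and AVF_DESC_UNUSED() yields
512 + 10 - 500 - 1 = 21 free descriptors.  The trailing "- 1" keeps one
descriptor permanently in reserve, the usual convention that lets an empty
ring be distinguished from a full one.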
+/* bitfields for Tx queue mapping in QTX_CTL */
+#define AVF_QTX_CTL_VF_QUEUE	0x0
+#define AVF_QTX_CTL_VM_QUEUE	0x1
+#define AVF_QTX_CTL_PF_QUEUE	0x2
+
+/* debug masks - set these bits in hw->debug_mask to control output */
+enum avf_debug_mask {
+	AVF_DEBUG_INIT			= 0x00000001,
+	AVF_DEBUG_RELEASE		= 0x00000002,
+
+	AVF_DEBUG_LINK			= 0x00000010,
+	AVF_DEBUG_PHY			= 0x00000020,
+	AVF_DEBUG_HMC			= 0x00000040,
+	AVF_DEBUG_NVM			= 0x00000080,
+	AVF_DEBUG_LAN			= 0x00000100,
+	AVF_DEBUG_FLOW			= 0x00000200,
+	AVF_DEBUG_DCB			= 0x00000400,
+	AVF_DEBUG_DIAG			= 0x00000800,
+	AVF_DEBUG_FD			= 0x00001000,
+	AVF_DEBUG_PACKAGE		= 0x00002000,
+
+	AVF_DEBUG_AQ_MESSAGE		= 0x01000000,
+	AVF_DEBUG_AQ_DESCRIPTOR	= 0x02000000,
+	AVF_DEBUG_AQ_DESC_BUFFER	= 0x04000000,
+	AVF_DEBUG_AQ_COMMAND		= 0x06000000,
+	AVF_DEBUG_AQ			= 0x0F000000,
+
+	AVF_DEBUG_USER			= 0xF0000000,
+
+	AVF_DEBUG_ALL			= 0xFFFFFFFF
+};
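These bits gate the avf_debug() wrapper defined in avf_osdep.h.  A rough
usage sketch (illustrative only; 'hw' is whatever struct avf_hw instance the
caller holds):

	hw->debug_mask = AVF_DEBUG_INIT | AVF_DEBUG_AQ_MESSAGE;
	avf_debug(hw, AVF_DEBUG_INIT, "VF init started\n");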
+
+/* PCI Bus Info */
+#define AVF_PCI_LINK_STATUS		0xB2
+#define AVF_PCI_LINK_WIDTH		0x3F0
+#define AVF_PCI_LINK_WIDTH_1		0x10
+#define AVF_PCI_LINK_WIDTH_2		0x20
+#define AVF_PCI_LINK_WIDTH_4		0x40
+#define AVF_PCI_LINK_WIDTH_8		0x80
+#define AVF_PCI_LINK_SPEED		0xF
+#define AVF_PCI_LINK_SPEED_2500	0x1
+#define AVF_PCI_LINK_SPEED_5000	0x2
+#define AVF_PCI_LINK_SPEED_8000	0x3
+
+#define AVF_MDIO_CLAUSE22_STCODE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_STCODE_SHIFT)
+#define AVF_MDIO_CLAUSE22_OPCODE_WRITE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE22_OPCODE_READ_MASK	AVF_MASK(2, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+
+#define AVF_MDIO_CLAUSE45_STCODE_MASK	AVF_MASK(0, \
+						  AVF_GLGEN_MSCA_STCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_ADDRESS_MASK	AVF_MASK(0, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_WRITE_MASK	AVF_MASK(1, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_READ_INC_ADDR_MASK	AVF_MASK(2, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+#define AVF_MDIO_CLAUSE45_OPCODE_READ_MASK	AVF_MASK(3, \
+						  AVF_GLGEN_MSCA_OPCODE_SHIFT)
+
+#define AVF_PHY_COM_REG_PAGE			0x1E
+#define AVF_PHY_LED_LINK_MODE_MASK		0xF0
+#define AVF_PHY_LED_MANUAL_ON			0x100
+#define AVF_PHY_LED_PROV_REG_1			0xC430
+#define AVF_PHY_LED_MODE_MASK			0xFFFF
+#define AVF_PHY_LED_MODE_ORIG			0x80000000
+
+/* Memory types */
+enum avf_memset_type {
+	AVF_NONDMA_MEM = 0,
+	AVF_DMA_MEM
+};
+
+/* Memcpy types */
+enum avf_memcpy_type {
+	AVF_NONDMA_TO_NONDMA = 0,
+	AVF_NONDMA_TO_DMA,
+	AVF_DMA_TO_DMA,
+	AVF_DMA_TO_NONDMA
+};
+
+/* These are structs for managing the hardware information and the operations.
+ * The structures of function pointers are filled out at init time when we
+ * know for sure exactly which hardware we're working with.  This gives us the
+ * flexibility of using the same main driver code but adapting to slightly
+ * different hardware needs as new parts are developed.  For this architecture,
+ * the Firmware and AdminQ are intended to insulate the driver from most of the
+ * future changes, but these structures will also do part of the job.
+ */
+enum avf_mac_type {
+	AVF_MAC_UNKNOWN = 0,
+	AVF_MAC_XL710,
+	AVF_MAC_VF,
+	AVF_MAC_X722,
+	AVF_MAC_X722_VF,
+	AVF_MAC_GENERIC,
+};
+
+enum avf_media_type {
+	AVF_MEDIA_TYPE_UNKNOWN = 0,
+	AVF_MEDIA_TYPE_FIBER,
+	AVF_MEDIA_TYPE_BASET,
+	AVF_MEDIA_TYPE_BACKPLANE,
+	AVF_MEDIA_TYPE_CX4,
+	AVF_MEDIA_TYPE_DA,
+	AVF_MEDIA_TYPE_VIRTUAL
+};
+
+enum avf_fc_mode {
+	AVF_FC_NONE = 0,
+	AVF_FC_RX_PAUSE,
+	AVF_FC_TX_PAUSE,
+	AVF_FC_FULL,
+	AVF_FC_PFC,
+	AVF_FC_DEFAULT
+};
+
+enum avf_set_fc_aq_failures {
+	AVF_SET_FC_AQ_FAIL_NONE = 0,
+	AVF_SET_FC_AQ_FAIL_GET = 1,
+	AVF_SET_FC_AQ_FAIL_SET = 2,
+	AVF_SET_FC_AQ_FAIL_UPDATE = 4,
+	AVF_SET_FC_AQ_FAIL_SET_UPDATE = 6
+};
+
+enum avf_vsi_type {
+	AVF_VSI_MAIN	= 0,
+	AVF_VSI_VMDQ1	= 1,
+	AVF_VSI_VMDQ2	= 2,
+	AVF_VSI_CTRL	= 3,
+	AVF_VSI_FCOE	= 4,
+	AVF_VSI_MIRROR	= 5,
+	AVF_VSI_SRIOV	= 6,
+	AVF_VSI_FDIR	= 7,
+	AVF_VSI_TYPE_UNKNOWN
+};
+
+enum avf_queue_type {
+	AVF_QUEUE_TYPE_RX = 0,
+	AVF_QUEUE_TYPE_TX,
+	AVF_QUEUE_TYPE_PE_CEQ,
+	AVF_QUEUE_TYPE_UNKNOWN
+};
+
+struct avf_link_status {
+	enum avf_aq_phy_type phy_type;
+	enum avf_aq_link_speed link_speed;
+	u8 link_info;
+	u8 an_info;
+	u8 req_fec_info;
+	u8 fec_info;
+	u8 ext_info;
+	u8 loopback;
+	/* is Link Status Event notification to SW enabled */
+	bool lse_enable;
+	u16 max_frame_size;
+	bool crc_enable;
+	u8 pacing;
+	u8 requested_speeds;
+	u8 module_type[3];
+	/* 1st byte: module identifier */
+#define AVF_MODULE_TYPE_SFP		0x03
+#define AVF_MODULE_TYPE_QSFP		0x0D
+	/* 2nd byte: ethernet compliance codes for 10/40G */
+#define AVF_MODULE_TYPE_40G_ACTIVE	0x01
+#define AVF_MODULE_TYPE_40G_LR4	0x02
+#define AVF_MODULE_TYPE_40G_SR4	0x04
+#define AVF_MODULE_TYPE_40G_CR4	0x08
+#define AVF_MODULE_TYPE_10G_BASE_SR	0x10
+#define AVF_MODULE_TYPE_10G_BASE_LR	0x20
+#define AVF_MODULE_TYPE_10G_BASE_LRM	0x40
+#define AVF_MODULE_TYPE_10G_BASE_ER	0x80
+	/* 3rd byte: ethernet compliance codes for 1G */
+#define AVF_MODULE_TYPE_1000BASE_SX	0x01
+#define AVF_MODULE_TYPE_1000BASE_LX	0x02
+#define AVF_MODULE_TYPE_1000BASE_CX	0x04
+#define AVF_MODULE_TYPE_1000BASE_T	0x08
+};
+
+struct avf_phy_info {
+	struct avf_link_status link_info;
+	struct avf_link_status link_info_old;
+	bool get_link_info;
+	enum avf_media_type media_type;
+	/* all the phy types the NVM is capable of */
+	u64 phy_types;
+};
+
+#define AVF_CAP_PHY_TYPE_SGMII BIT_ULL(AVF_PHY_TYPE_SGMII)
+#define AVF_CAP_PHY_TYPE_1000BASE_KX BIT_ULL(AVF_PHY_TYPE_1000BASE_KX)
+#define AVF_CAP_PHY_TYPE_10GBASE_KX4 BIT_ULL(AVF_PHY_TYPE_10GBASE_KX4)
+#define AVF_CAP_PHY_TYPE_10GBASE_KR BIT_ULL(AVF_PHY_TYPE_10GBASE_KR)
+#define AVF_CAP_PHY_TYPE_40GBASE_KR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_KR4)
+#define AVF_CAP_PHY_TYPE_XAUI BIT_ULL(AVF_PHY_TYPE_XAUI)
+#define AVF_CAP_PHY_TYPE_XFI BIT_ULL(AVF_PHY_TYPE_XFI)
+#define AVF_CAP_PHY_TYPE_SFI BIT_ULL(AVF_PHY_TYPE_SFI)
+#define AVF_CAP_PHY_TYPE_XLAUI BIT_ULL(AVF_PHY_TYPE_XLAUI)
+#define AVF_CAP_PHY_TYPE_XLPPI BIT_ULL(AVF_PHY_TYPE_XLPPI)
+#define AVF_CAP_PHY_TYPE_40GBASE_CR4_CU BIT_ULL(AVF_PHY_TYPE_40GBASE_CR4_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_CR1_CU BIT_ULL(AVF_PHY_TYPE_10GBASE_CR1_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_AOC BIT_ULL(AVF_PHY_TYPE_10GBASE_AOC)
+#define AVF_CAP_PHY_TYPE_40GBASE_AOC BIT_ULL(AVF_PHY_TYPE_40GBASE_AOC)
+#define AVF_CAP_PHY_TYPE_100BASE_TX BIT_ULL(AVF_PHY_TYPE_100BASE_TX)
+#define AVF_CAP_PHY_TYPE_1000BASE_T BIT_ULL(AVF_PHY_TYPE_1000BASE_T)
+#define AVF_CAP_PHY_TYPE_10GBASE_T BIT_ULL(AVF_PHY_TYPE_10GBASE_T)
+#define AVF_CAP_PHY_TYPE_10GBASE_SR BIT_ULL(AVF_PHY_TYPE_10GBASE_SR)
+#define AVF_CAP_PHY_TYPE_10GBASE_LR BIT_ULL(AVF_PHY_TYPE_10GBASE_LR)
+#define AVF_CAP_PHY_TYPE_10GBASE_SFPP_CU BIT_ULL(AVF_PHY_TYPE_10GBASE_SFPP_CU)
+#define AVF_CAP_PHY_TYPE_10GBASE_CR1 BIT_ULL(AVF_PHY_TYPE_10GBASE_CR1)
+#define AVF_CAP_PHY_TYPE_40GBASE_CR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_CR4)
+#define AVF_CAP_PHY_TYPE_40GBASE_SR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_SR4)
+#define AVF_CAP_PHY_TYPE_40GBASE_LR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_LR4)
+#define AVF_CAP_PHY_TYPE_1000BASE_SX BIT_ULL(AVF_PHY_TYPE_1000BASE_SX)
+#define AVF_CAP_PHY_TYPE_1000BASE_LX BIT_ULL(AVF_PHY_TYPE_1000BASE_LX)
+#define AVF_CAP_PHY_TYPE_1000BASE_T_OPTICAL \
+				BIT_ULL(AVF_PHY_TYPE_1000BASE_T_OPTICAL)
+#define AVF_CAP_PHY_TYPE_20GBASE_KR2 BIT_ULL(AVF_PHY_TYPE_20GBASE_KR2)
+/*
+ * Defining the macro AVF_TYPE_OFFSET to implement a bit shift for some
+ * PHY types. There is an unused bit (31) in the AVF_CAP_PHY_TYPE_* bit
+ * fields but no corresponding gap in the avf_aq_phy_type enumeration. So,
+ * a shift is needed to adjust for this with values larger than 31. The
+ * only affected values are AVF_PHY_TYPE_25GBASE_*.
+ */
+#define AVF_PHY_TYPE_OFFSET 1
+#define AVF_CAP_PHY_TYPE_25GBASE_KR BIT_ULL(AVF_PHY_TYPE_25GBASE_KR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_CR BIT_ULL(AVF_PHY_TYPE_25GBASE_CR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_SR BIT_ULL(AVF_PHY_TYPE_25GBASE_SR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_LR BIT_ULL(AVF_PHY_TYPE_25GBASE_LR + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_AOC BIT_ULL(AVF_PHY_TYPE_25GBASE_AOC + \
+					     AVF_PHY_TYPE_OFFSET)
+#define AVF_CAP_PHY_TYPE_25GBASE_ACC BIT_ULL(AVF_PHY_TYPE_25GBASE_ACC + \
+					     AVF_PHY_TYPE_OFFSET)
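+/* Illustrative usage sketch (assumes 'phy_type' holds an enum
+ * avf_aq_phy_type value and that AVF_PHY_TYPE_25GBASE_KR is the first of
+ * the affected 25G values): converting a firmware-reported PHY type into
+ * one of the capability bits above must apply the offset for 25G only,
+ * roughly as
+ *
+ *	u64 cap = (phy_type >= AVF_PHY_TYPE_25GBASE_KR) ?
+ *		  BIT_ULL(phy_type + AVF_PHY_TYPE_OFFSET) :
+ *		  BIT_ULL(phy_type);
+ */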
+#define AVF_HW_CAP_MAX_GPIO			30
+#define AVF_HW_CAP_MDIO_PORT_MODE_MDIO		0
+#define AVF_HW_CAP_MDIO_PORT_MODE_I2C		1
+
+enum avf_acpi_programming_method {
+	AVF_ACPI_PROGRAMMING_METHOD_HW_FVL = 0,
+	AVF_ACPI_PROGRAMMING_METHOD_AQC_FPK = 1
+};
+
+#define AVF_WOL_SUPPORT_MASK			0x1
+#define AVF_ACPI_PROGRAMMING_METHOD_MASK	0x2
+#define AVF_PROXY_SUPPORT_MASK			0x4
+
+/* Capabilities of a PF or a VF or the whole device */
+struct avf_hw_capabilities {
+	u32  switch_mode;
+#define AVF_NVM_IMAGE_TYPE_EVB		0x0
+#define AVF_NVM_IMAGE_TYPE_CLOUD	0x2
+#define AVF_NVM_IMAGE_TYPE_UDP_CLOUD	0x3
+
+	u32  management_mode;
+	u32  mng_protocols_over_mctp;
+#define AVF_MNG_PROTOCOL_PLDM		0x2
+#define AVF_MNG_PROTOCOL_OEM_COMMANDS	0x4
+#define AVF_MNG_PROTOCOL_NCSI		0x8
+	u32  npar_enable;
+	u32  os2bmc;
+	u32  valid_functions;
+	bool sr_iov_1_1;
+	bool vmdq;
+	bool evb_802_1_qbg; /* Edge Virtual Bridging */
+	bool evb_802_1_qbh; /* Bridge Port Extension */
+	bool dcb;
+	bool fcoe;
+	bool iscsi; /* Indicates iSCSI enabled */
+	bool flex10_enable;
+	bool flex10_capable;
+	u32  flex10_mode;
+#define AVF_FLEX10_MODE_UNKNOWN	0x0
+#define AVF_FLEX10_MODE_DCC		0x1
+#define AVF_FLEX10_MODE_DCI		0x2
+
+	u32 flex10_status;
+#define AVF_FLEX10_STATUS_DCC_ERROR	0x1
+#define AVF_FLEX10_STATUS_VC_MODE	0x2
+
+	bool sec_rev_disabled;
+	bool update_disabled;
+#define AVF_NVM_MGMT_SEC_REV_DISABLED	0x1
+#define AVF_NVM_MGMT_UPDATE_DISABLED	0x2
+
+	bool mgmt_cem;
+	bool ieee_1588;
+	bool iwarp;
+	bool fd;
+	u32 fd_filters_guaranteed;
+	u32 fd_filters_best_effort;
+	bool rss;
+	u32 rss_table_size;
+	u32 rss_table_entry_width;
+	bool led[AVF_HW_CAP_MAX_GPIO];
+	bool sdp[AVF_HW_CAP_MAX_GPIO];
+	u32 nvm_image_type;
+	u32 num_flow_director_filters;
+	u32 num_vfs;
+	u32 vf_base_id;
+	u32 num_vsis;
+	u32 num_rx_qp;
+	u32 num_tx_qp;
+	u32 base_queue;
+	u32 num_msix_vectors;
+	u32 num_msix_vectors_vf;
+	u32 led_pin_num;
+	u32 sdp_pin_num;
+	u32 mdio_port_num;
+	u32 mdio_port_mode;
+	u8 rx_buf_chain_len;
+	u32 enabled_tcmap;
+	u32 maxtc;
+	u64 wr_csr_prot;
+	bool apm_wol_support;
+	enum avf_acpi_programming_method acpi_prog_method;
+	bool proxy_support;
+};
+
+struct avf_mac_info {
+	enum avf_mac_type type;
+	u8 addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+	u8 san_addr[ETH_ALEN];
+	u8 port_addr[ETH_ALEN];
+	u16 max_fcoeq;
+};
+
+enum avf_aq_resources_ids {
+	AVF_NVM_RESOURCE_ID = 1
+};
+
+enum avf_aq_resource_access_type {
+	AVF_RESOURCE_READ = 1,
+	AVF_RESOURCE_WRITE
+};
+
+struct avf_nvm_info {
+	u64 hw_semaphore_timeout; /* usec global time (GTIME resolution) */
+	u32 timeout;              /* [ms] */
+	u16 sr_size;              /* Shadow RAM size in words */
+	bool blank_nvm_mode;      /* is NVM empty (no FW present)*/
+	u16 version;              /* NVM package version */
+	u32 eetrack;              /* NVM data version */
+	u32 oem_ver;              /* OEM version info */
+};
+
+/* definitions used in NVM update support */
+
+enum avf_nvmupd_cmd {
+	AVF_NVMUPD_INVALID,
+	AVF_NVMUPD_READ_CON,
+	AVF_NVMUPD_READ_SNT,
+	AVF_NVMUPD_READ_LCB,
+	AVF_NVMUPD_READ_SA,
+	AVF_NVMUPD_WRITE_ERA,
+	AVF_NVMUPD_WRITE_CON,
+	AVF_NVMUPD_WRITE_SNT,
+	AVF_NVMUPD_WRITE_LCB,
+	AVF_NVMUPD_WRITE_SA,
+	AVF_NVMUPD_CSUM_CON,
+	AVF_NVMUPD_CSUM_SA,
+	AVF_NVMUPD_CSUM_LCB,
+	AVF_NVMUPD_STATUS,
+	AVF_NVMUPD_EXEC_AQ,
+	AVF_NVMUPD_GET_AQ_RESULT,
+	AVF_NVMUPD_GET_AQ_EVENT,
+};
+
+enum avf_nvmupd_state {
+	AVF_NVMUPD_STATE_INIT,
+	AVF_NVMUPD_STATE_READING,
+	AVF_NVMUPD_STATE_WRITING,
+	AVF_NVMUPD_STATE_INIT_WAIT,
+	AVF_NVMUPD_STATE_WRITE_WAIT,
+	AVF_NVMUPD_STATE_ERROR
+};
+
+/* nvm_access definition and its masks/shifts need to be accessible to
+ * application, core driver, and shared code.  Where is the right file?
+ */
+#define AVF_NVM_READ	0xB
+#define AVF_NVM_WRITE	0xC
+
+#define AVF_NVM_MOD_PNT_MASK 0xFF
+
+#define AVF_NVM_TRANS_SHIFT			8
+#define AVF_NVM_TRANS_MASK			(0xf << AVF_NVM_TRANS_SHIFT)
+#define AVF_NVM_PRESERVATION_FLAGS_SHIFT	12
+#define AVF_NVM_PRESERVATION_FLAGS_MASK \
+				(0x3 << AVF_NVM_PRESERVATION_FLAGS_SHIFT)
+#define AVF_NVM_PRESERVATION_FLAGS_SELECTED	0x01
+#define AVF_NVM_PRESERVATION_FLAGS_ALL		0x02
+#define AVF_NVM_CON				0x0
+#define AVF_NVM_SNT				0x1
+#define AVF_NVM_LCB				0x2
+#define AVF_NVM_SA				(AVF_NVM_SNT | AVF_NVM_LCB)
+#define AVF_NVM_ERA				0x4
+#define AVF_NVM_CSUM				0x8
+#define AVF_NVM_AQE				0xe
+#define AVF_NVM_EXEC				0xf
+
+#define AVF_NVM_ADAPT_SHIFT	16
+#define AVF_NVM_ADAPT_MASK	(0xffffULL << AVF_NVM_ADAPT_SHIFT)
+
+#define AVF_NVMUPD_MAX_DATA	4096
+#define AVF_NVMUPD_IFACE_TIMEOUT 2 /* seconds */
+
+struct avf_nvm_access {
+	u32 command;
+	u32 config;
+	u32 offset;	/* in bytes */
+	u32 data_size;	/* in bytes */
+	u8 data[1];
+};
+
+/* (Q)SFP module access definitions */
+#define AVF_I2C_EEPROM_DEV_ADDR	0xA0
+#define AVF_I2C_EEPROM_DEV_ADDR2	0xA2
+#define AVF_MODULE_TYPE_ADDR		0x00
+#define AVF_MODULE_REVISION_ADDR	0x01
+#define AVF_MODULE_SFF_8472_COMP	0x5E
+#define AVF_MODULE_SFF_8472_SWAP	0x5C
+#define AVF_MODULE_SFF_ADDR_MODE	0x04
+#define AVF_MODULE_SFF_DIAG_CAPAB	0x40
+#define AVF_MODULE_TYPE_QSFP_PLUS	0x0D
+#define AVF_MODULE_TYPE_QSFP28		0x11
+#define AVF_MODULE_QSFP_MAX_LEN	640
+
+/* PCI bus types */
+enum avf_bus_type {
+	avf_bus_type_unknown = 0,
+	avf_bus_type_pci,
+	avf_bus_type_pcix,
+	avf_bus_type_pci_express,
+	avf_bus_type_reserved
+};
+
+/* PCI bus speeds */
+enum avf_bus_speed {
+	avf_bus_speed_unknown	= 0,
+	avf_bus_speed_33	= 33,
+	avf_bus_speed_66	= 66,
+	avf_bus_speed_100	= 100,
+	avf_bus_speed_120	= 120,
+	avf_bus_speed_133	= 133,
+	avf_bus_speed_2500	= 2500,
+	avf_bus_speed_5000	= 5000,
+	avf_bus_speed_8000	= 8000,
+	avf_bus_speed_reserved
+};
+
+/* PCI bus widths */
+enum avf_bus_width {
+	avf_bus_width_unknown	= 0,
+	avf_bus_width_pcie_x1	= 1,
+	avf_bus_width_pcie_x2	= 2,
+	avf_bus_width_pcie_x4	= 4,
+	avf_bus_width_pcie_x8	= 8,
+	avf_bus_width_32	= 32,
+	avf_bus_width_64	= 64,
+	avf_bus_width_reserved
+};
+
+/* Bus parameters */
+struct avf_bus_info {
+	enum avf_bus_speed speed;
+	enum avf_bus_width width;
+	enum avf_bus_type type;
+
+	u16 func;
+	u16 device;
+	u16 lan_id;
+	u16 bus_id;
+};
+
+/* Flow control (FC) parameters */
+struct avf_fc_info {
+	enum avf_fc_mode current_mode; /* FC mode in effect */
+	enum avf_fc_mode requested_mode; /* FC mode requested by caller */
+};
+
+#define AVF_MAX_TRAFFIC_CLASS		8
+#define AVF_MAX_USER_PRIORITY		8
+#define AVF_DCBX_MAX_APPS		32
+#define AVF_LLDPDU_SIZE		1500
+#define AVF_TLV_STATUS_OPER		0x1
+#define AVF_TLV_STATUS_SYNC		0x2
+#define AVF_TLV_STATUS_ERR		0x4
+#define AVF_CEE_OPER_MAX_APPS		3
+#define AVF_APP_PROTOID_FCOE		0x8906
+#define AVF_APP_PROTOID_ISCSI		0x0cbc
+#define AVF_APP_PROTOID_FIP		0x8914
+#define AVF_APP_SEL_ETHTYPE		0x1
+#define AVF_APP_SEL_TCPIP		0x2
+#define AVF_CEE_APP_SEL_ETHTYPE	0x0
+#define AVF_CEE_APP_SEL_TCPIP		0x1
+
+/* CEE or IEEE 802.1Qaz ETS Configuration data */
+struct avf_dcb_ets_config {
+	u8 willing;
+	u8 cbs;
+	u8 maxtcs;
+	u8 prioritytable[AVF_MAX_TRAFFIC_CLASS];
+	u8 tcbwtable[AVF_MAX_TRAFFIC_CLASS];
+	u8 tsatable[AVF_MAX_TRAFFIC_CLASS];
+};
+
+/* CEE or IEEE 802.1Qaz PFC Configuration data */
+struct avf_dcb_pfc_config {
+	u8 willing;
+	u8 mbc;
+	u8 pfccap;
+	u8 pfcenable;
+};
+
+/* CEE or IEEE 802.1Qaz Application Priority data */
+struct avf_dcb_app_priority_table {
+	u8  priority;
+	u8  selector;
+	u16 protocolid;
+};
+
+struct avf_dcbx_config {
+	u8  dcbx_mode;
+#define AVF_DCBX_MODE_CEE	0x1
+#define AVF_DCBX_MODE_IEEE	0x2
+	u8  app_mode;
+#define AVF_DCBX_APPS_NON_WILLING	0x1
+	u32 numapps;
+	u32 tlv_status; /* CEE mode TLV status */
+	struct avf_dcb_ets_config etscfg;
+	struct avf_dcb_ets_config etsrec;
+	struct avf_dcb_pfc_config pfc;
+	struct avf_dcb_app_priority_table app[AVF_DCBX_MAX_APPS];
+};
+
+/* Port hardware description */
+struct avf_hw {
+	u8 *hw_addr;
+	void *back;
+
+	/* subsystem structs */
+	struct avf_phy_info phy;
+	struct avf_mac_info mac;
+	struct avf_bus_info bus;
+	struct avf_nvm_info nvm;
+	struct avf_fc_info fc;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+	u8 port;
+	bool adapter_stopped;
+
+	/* capabilities for entire device and PCI func */
+	struct avf_hw_capabilities dev_caps;
+	struct avf_hw_capabilities func_caps;
+
+	/* Flow Director shared filter space */
+	u16 fdir_shared_filter_count;
+
+	/* device profile info */
+	u8  pf_id;
+	u16 main_vsi_seid;
+
+	/* for multi-function MACs */
+	u16 partition_id;
+	u16 num_partitions;
+	u16 num_ports;
+
+	/* Closest numa node to the device */
+	u16 numa_node;
+
+	/* Admin Queue info */
+	struct avf_adminq_info aq;
+
+	/* state of nvm update process */
+	enum avf_nvmupd_state nvmupd_state;
+	struct avf_aq_desc nvm_wb_desc;
+	struct avf_aq_desc nvm_aq_event_desc;
+	struct avf_virt_mem nvm_buff;
+	bool nvm_release_on_done;
+	u16 nvm_wait_opcode;
+
+	/* HMC info */
+	struct avf_hmc_info hmc; /* HMC info struct */
+
+	/* LLDP/DCBX Status */
+	u16 dcbx_status;
+
+	/* DCBX info */
+	struct avf_dcbx_config local_dcbx_config; /* Oper/Local Cfg */
+	struct avf_dcbx_config remote_dcbx_config; /* Peer Cfg */
+	struct avf_dcbx_config desired_dcbx_config; /* CEE Desired Cfg */
+
+	/* WoL and proxy support */
+	u16 num_wol_proxy_filters;
+	u16 wol_proxy_vsi_seid;
+
+#define AVF_HW_FLAG_AQ_SRCTL_ACCESS_ENABLE BIT_ULL(0)
+#define AVF_HW_FLAG_802_1AD_CAPABLE        BIT_ULL(1)
+#define AVF_HW_FLAG_AQ_PHY_ACCESS_CAPABLE  BIT_ULL(2)
+#define AVF_HW_FLAG_NVM_READ_REQUIRES_LOCK BIT_ULL(3)
+	u64 flags;
+
+	/* Used in set switch config AQ command */
+	u16 switch_tag;
+	u16 first_tag;
+	u16 second_tag;
+
+	/* debug mask */
+	u32 debug_mask;
+	char err_str[16];
+};
+
+STATIC INLINE bool avf_is_vf(struct avf_hw *hw)
+{
+	return (hw->mac.type == AVF_MAC_VF ||
+		hw->mac.type == AVF_MAC_X722_VF);
+}
+
+struct avf_driver_version {
+	u8 major_version;
+	u8 minor_version;
+	u8 build_version;
+	u8 subbuild_version;
+	u8 driver_string[32];
+};
+
+/* RX Descriptors */
+union avf_16byte_rx_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+	} read;
+	struct {
+		struct {
+			struct {
+				union {
+					__le16 mirroring_status;
+					__le16 fcoe_ctx_id;
+				} mirr_fcoe;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fd_id; /* Flow director filter id */
+				__le32 fcoe_param; /* FCoE DDP Context id */
+			} hi_dword;
+		} qword0;
+		struct {
+			/* ext status/error/pktype/length */
+			__le64 status_error_len;
+		} qword1;
+	} wb;  /* writeback */
+};
+
+union avf_32byte_rx_desc {
+	struct {
+		__le64  pkt_addr; /* Packet buffer address */
+		__le64  hdr_addr; /* Header buffer address */
+			/* bit 0 of hdr_buffer_addr is DD bit */
+		__le64  rsvd1;
+		__le64  rsvd2;
+	} read;
+	struct {
+		struct {
+			struct {
+				union {
+					__le16 mirroring_status;
+					__le16 fcoe_ctx_id;
+				} mirr_fcoe;
+				__le16 l2tag1;
+			} lo_dword;
+			union {
+				__le32 rss; /* RSS Hash */
+				__le32 fcoe_param; /* FCoE DDP Context id */
+				/* Flow director filter id in case of
+				 * Programming status desc WB
+				 */
+				__le32 fd_id;
+			} hi_dword;
+		} qword0;
+		struct {
+			/* status/error/pktype/length */
+			__le64 status_error_len;
+		} qword1;
+		struct {
+			__le16 ext_status; /* extended status */
+			__le16 rsvd;
+			__le16 l2tag2_1;
+			__le16 l2tag2_2;
+		} qword2;
+		struct {
+			union {
+				__le32 flex_bytes_lo;
+				__le32 pe_status;
+			} lo_dword;
+			union {
+				__le32 flex_bytes_hi;
+				__le32 fd_id;
+			} hi_dword;
+		} qword3;
+	} wb;  /* writeback */
+};
+
+#define AVF_RXD_QW0_MIRROR_STATUS_SHIFT	8
+#define AVF_RXD_QW0_MIRROR_STATUS_MASK	(0x3FUL << \
+					 AVF_RXD_QW0_MIRROR_STATUS_SHIFT)
+#define AVF_RXD_QW0_FCOEINDX_SHIFT	0
+#define AVF_RXD_QW0_FCOEINDX_MASK	(0xFFFUL << \
+					 AVF_RXD_QW0_FCOEINDX_SHIFT)
+
+enum avf_rx_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_STATUS_DD_SHIFT		= 0,
+	AVF_RX_DESC_STATUS_EOF_SHIFT		= 1,
+	AVF_RX_DESC_STATUS_L2TAG1P_SHIFT	= 2,
+	AVF_RX_DESC_STATUS_L3L4P_SHIFT		= 3,
+	AVF_RX_DESC_STATUS_CRCP_SHIFT		= 4,
+	AVF_RX_DESC_STATUS_TSYNINDX_SHIFT	= 5, /* 2 BITS */
+	AVF_RX_DESC_STATUS_TSYNVALID_SHIFT	= 7,
+	AVF_RX_DESC_STATUS_EXT_UDP_0_SHIFT	= 8,
+
+	AVF_RX_DESC_STATUS_UMBCAST_SHIFT	= 9, /* 2 BITS */
+	AVF_RX_DESC_STATUS_FLM_SHIFT		= 11,
+	AVF_RX_DESC_STATUS_FLTSTAT_SHIFT	= 12, /* 2 BITS */
+	AVF_RX_DESC_STATUS_LPBK_SHIFT		= 14,
+	AVF_RX_DESC_STATUS_IPV6EXADD_SHIFT	= 15,
+	AVF_RX_DESC_STATUS_RESERVED2_SHIFT	= 16, /* 2 BITS */
+	AVF_RX_DESC_STATUS_INT_UDP_0_SHIFT	= 18,
+	AVF_RX_DESC_STATUS_LAST /* this entry must be last!!! */
+};
+
+#define AVF_RXD_QW1_STATUS_SHIFT	0
+#define AVF_RXD_QW1_STATUS_MASK	((BIT(AVF_RX_DESC_STATUS_LAST) - 1) << \
+					 AVF_RXD_QW1_STATUS_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_TSYNINDX_SHIFT   AVF_RX_DESC_STATUS_TSYNINDX_SHIFT
+#define AVF_RXD_QW1_STATUS_TSYNINDX_MASK	(0x3UL << \
+					     AVF_RXD_QW1_STATUS_TSYNINDX_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_TSYNVALID_SHIFT  AVF_RX_DESC_STATUS_TSYNVALID_SHIFT
+#define AVF_RXD_QW1_STATUS_TSYNVALID_MASK   BIT_ULL(AVF_RXD_QW1_STATUS_TSYNVALID_SHIFT)
+
+#define AVF_RXD_QW1_STATUS_UMBCAST_SHIFT	AVF_RX_DESC_STATUS_UMBCAST_SHIFT
+#define AVF_RXD_QW1_STATUS_UMBCAST_MASK	(0x3UL << \
+					 AVF_RXD_QW1_STATUS_UMBCAST_SHIFT)
+
+enum avf_rx_desc_fltstat_values {
+	AVF_RX_DESC_FLTSTAT_NO_DATA	= 0,
+	AVF_RX_DESC_FLTSTAT_RSV_FD_ID	= 1, /* 16byte desc? FD_ID : RSV */
+	AVF_RX_DESC_FLTSTAT_RSV	= 2,
+	AVF_RX_DESC_FLTSTAT_RSS_HASH	= 3,
+};
+
+#define AVF_RXD_PACKET_TYPE_UNICAST	0
+#define AVF_RXD_PACKET_TYPE_MULTICAST	1
+#define AVF_RXD_PACKET_TYPE_BROADCAST	2
+#define AVF_RXD_PACKET_TYPE_MIRRORED	3
+
+#define AVF_RXD_QW1_ERROR_SHIFT	19
+#define AVF_RXD_QW1_ERROR_MASK		(0xFFUL << AVF_RXD_QW1_ERROR_SHIFT)
+
+enum avf_rx_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_ERROR_RXE_SHIFT		= 0,
+	AVF_RX_DESC_ERROR_RECIPE_SHIFT		= 1,
+	AVF_RX_DESC_ERROR_HBO_SHIFT		= 2,
+	AVF_RX_DESC_ERROR_L3L4E_SHIFT		= 3, /* 3 BITS */
+	AVF_RX_DESC_ERROR_IPE_SHIFT		= 3,
+	AVF_RX_DESC_ERROR_L4E_SHIFT		= 4,
+	AVF_RX_DESC_ERROR_EIPE_SHIFT		= 5,
+	AVF_RX_DESC_ERROR_OVERSIZE_SHIFT	= 6,
+	AVF_RX_DESC_ERROR_PPRS_SHIFT		= 7
+};
+
+enum avf_rx_desc_error_l3l4e_fcoe_masks {
+	AVF_RX_DESC_ERROR_L3L4E_NONE		= 0,
+	AVF_RX_DESC_ERROR_L3L4E_PROT		= 1,
+	AVF_RX_DESC_ERROR_L3L4E_FC		= 2,
+	AVF_RX_DESC_ERROR_L3L4E_DMAC_ERR	= 3,
+	AVF_RX_DESC_ERROR_L3L4E_DMAC_WARN	= 4
+};
+
+#define AVF_RXD_QW1_PTYPE_SHIFT	30
+#define AVF_RXD_QW1_PTYPE_MASK		(0xFFULL << AVF_RXD_QW1_PTYPE_SHIFT)
+
+/* Packet type non-ip values */
+enum avf_rx_l2_ptype {
+	AVF_RX_PTYPE_L2_RESERVED			= 0,
+	AVF_RX_PTYPE_L2_MAC_PAY2			= 1,
+	AVF_RX_PTYPE_L2_TIMESYNC_PAY2			= 2,
+	AVF_RX_PTYPE_L2_FIP_PAY2			= 3,
+	AVF_RX_PTYPE_L2_OUI_PAY2			= 4,
+	AVF_RX_PTYPE_L2_MACCNTRL_PAY2			= 5,
+	AVF_RX_PTYPE_L2_LLDP_PAY2			= 6,
+	AVF_RX_PTYPE_L2_ECP_PAY2			= 7,
+	AVF_RX_PTYPE_L2_EVB_PAY2			= 8,
+	AVF_RX_PTYPE_L2_QCN_PAY2			= 9,
+	AVF_RX_PTYPE_L2_EAPOL_PAY2			= 10,
+	AVF_RX_PTYPE_L2_ARP				= 11,
+	AVF_RX_PTYPE_L2_FCOE_PAY3			= 12,
+	AVF_RX_PTYPE_L2_FCOE_FCDATA_PAY3		= 13,
+	AVF_RX_PTYPE_L2_FCOE_FCRDY_PAY3		= 14,
+	AVF_RX_PTYPE_L2_FCOE_FCRSP_PAY3		= 15,
+	AVF_RX_PTYPE_L2_FCOE_FCOTHER_PA		= 16,
+	AVF_RX_PTYPE_L2_FCOE_VFT_PAY3			= 17,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCDATA		= 18,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCRDY			= 19,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCRSP			= 20,
+	AVF_RX_PTYPE_L2_FCOE_VFT_FCOTHER		= 21,
+	AVF_RX_PTYPE_GRENAT4_MAC_PAY3			= 58,
+	AVF_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4	= 87,
+	AVF_RX_PTYPE_GRENAT6_MAC_PAY3			= 124,
+	AVF_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4	= 153
+};
+
+struct avf_rx_ptype_decoded {
+	u32 ptype:8;
+	u32 known:1;
+	u32 outer_ip:1;
+	u32 outer_ip_ver:1;
+	u32 outer_frag:1;
+	u32 tunnel_type:3;
+	u32 tunnel_end_prot:2;
+	u32 tunnel_end_frag:1;
+	u32 inner_prot:4;
+	u32 payload_layer:3;
+};
+
+enum avf_rx_ptype_outer_ip {
+	AVF_RX_PTYPE_OUTER_L2	= 0,
+	AVF_RX_PTYPE_OUTER_IP	= 1
+};
+
+enum avf_rx_ptype_outer_ip_ver {
+	AVF_RX_PTYPE_OUTER_NONE	= 0,
+	AVF_RX_PTYPE_OUTER_IPV4	= 0,
+	AVF_RX_PTYPE_OUTER_IPV6	= 1
+};
+
+enum avf_rx_ptype_outer_fragmented {
+	AVF_RX_PTYPE_NOT_FRAG	= 0,
+	AVF_RX_PTYPE_FRAG	= 1
+};
+
+enum avf_rx_ptype_tunnel_type {
+	AVF_RX_PTYPE_TUNNEL_NONE		= 0,
+	AVF_RX_PTYPE_TUNNEL_IP_IP		= 1,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT		= 2,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC	= 3,
+	AVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN	= 4,
+};
+
+enum avf_rx_ptype_tunnel_end_prot {
+	AVF_RX_PTYPE_TUNNEL_END_NONE	= 0,
+	AVF_RX_PTYPE_TUNNEL_END_IPV4	= 1,
+	AVF_RX_PTYPE_TUNNEL_END_IPV6	= 2,
+};
+
+enum avf_rx_ptype_inner_prot {
+	AVF_RX_PTYPE_INNER_PROT_NONE		= 0,
+	AVF_RX_PTYPE_INNER_PROT_UDP		= 1,
+	AVF_RX_PTYPE_INNER_PROT_TCP		= 2,
+	AVF_RX_PTYPE_INNER_PROT_SCTP		= 3,
+	AVF_RX_PTYPE_INNER_PROT_ICMP		= 4,
+	AVF_RX_PTYPE_INNER_PROT_TIMESYNC	= 5
+};
+
+enum avf_rx_ptype_payload_layer {
+	AVF_RX_PTYPE_PAYLOAD_LAYER_NONE	= 0,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY2	= 1,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY3	= 2,
+	AVF_RX_PTYPE_PAYLOAD_LAYER_PAY4	= 3,
+};
+
+#define AVF_RX_PTYPE_BIT_MASK		0x0FFFFFFF
+#define AVF_RX_PTYPE_SHIFT		56
+
+#define AVF_RXD_QW1_LENGTH_PBUF_SHIFT	38
+#define AVF_RXD_QW1_LENGTH_PBUF_MASK	(0x3FFFULL << \
+					 AVF_RXD_QW1_LENGTH_PBUF_SHIFT)
+
+#define AVF_RXD_QW1_LENGTH_HBUF_SHIFT	52
+#define AVF_RXD_QW1_LENGTH_HBUF_MASK	(0x7FFULL << \
+					 AVF_RXD_QW1_LENGTH_HBUF_SHIFT)
+
+#define AVF_RXD_QW1_LENGTH_SPH_SHIFT	63
+#define AVF_RXD_QW1_LENGTH_SPH_MASK	BIT_ULL(AVF_RXD_QW1_LENGTH_SPH_SHIFT)
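+/* Illustrative decode sketch (assumes a writeback descriptor 'rxd' and
+ * DPDK's rte_le_to_cpu_64() for the endian conversion):
+ *
+ *	u64 qw1 = rte_le_to_cpu_64(rxd->wb.qword1.status_error_len);
+ *	u32 status = (qw1 & AVF_RXD_QW1_STATUS_MASK) >> AVF_RXD_QW1_STATUS_SHIFT;
+ *	u32 error = (qw1 & AVF_RXD_QW1_ERROR_MASK) >> AVF_RXD_QW1_ERROR_SHIFT;
+ *	u32 ptype = (qw1 & AVF_RXD_QW1_PTYPE_MASK) >> AVF_RXD_QW1_PTYPE_SHIFT;
+ *	u16 pkt_len = (qw1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+ *		      AVF_RXD_QW1_LENGTH_PBUF_SHIFT;
+ */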
+
+#define AVF_RXD_QW1_NEXTP_SHIFT	38
+#define AVF_RXD_QW1_NEXTP_MASK		(0x1FFFULL << AVF_RXD_QW1_NEXTP_SHIFT)
+
+#define AVF_RXD_QW2_EXT_STATUS_SHIFT	0
+#define AVF_RXD_QW2_EXT_STATUS_MASK	(0xFFFFFUL << \
+					 AVF_RXD_QW2_EXT_STATUS_SHIFT)
+
+enum avf_rx_desc_ext_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT	= 0,
+	AVF_RX_DESC_EXT_STATUS_L2TAG3P_SHIFT	= 1,
+	AVF_RX_DESC_EXT_STATUS_FLEXBL_SHIFT	= 2, /* 2 BITS */
+	AVF_RX_DESC_EXT_STATUS_FLEXBH_SHIFT	= 4, /* 2 BITS */
+	AVF_RX_DESC_EXT_STATUS_FDLONGB_SHIFT	= 9,
+	AVF_RX_DESC_EXT_STATUS_FCOELONGB_SHIFT	= 10,
+	AVF_RX_DESC_EXT_STATUS_PELONGB_SHIFT	= 11,
+};
+
+#define AVF_RXD_QW2_L2TAG2_SHIFT	0
+#define AVF_RXD_QW2_L2TAG2_MASK	(0xFFFFUL << AVF_RXD_QW2_L2TAG2_SHIFT)
+
+#define AVF_RXD_QW2_L2TAG3_SHIFT	16
+#define AVF_RXD_QW2_L2TAG3_MASK	(0xFFFFUL << AVF_RXD_QW2_L2TAG3_SHIFT)
+
+enum avf_rx_desc_pe_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_DESC_PE_STATUS_QPID_SHIFT	= 0, /* 18 BITS */
+	AVF_RX_DESC_PE_STATUS_L4PORT_SHIFT	= 0, /* 16 BITS */
+	AVF_RX_DESC_PE_STATUS_IPINDEX_SHIFT	= 16, /* 8 BITS */
+	AVF_RX_DESC_PE_STATUS_QPIDHIT_SHIFT	= 24,
+	AVF_RX_DESC_PE_STATUS_APBVTHIT_SHIFT	= 25,
+	AVF_RX_DESC_PE_STATUS_PORTV_SHIFT	= 26,
+	AVF_RX_DESC_PE_STATUS_URG_SHIFT	= 27,
+	AVF_RX_DESC_PE_STATUS_IPFRAG_SHIFT	= 28,
+	AVF_RX_DESC_PE_STATUS_IPOPT_SHIFT	= 29
+};
+
+#define AVF_RX_PROG_STATUS_DESC_LENGTH_SHIFT		38
+#define AVF_RX_PROG_STATUS_DESC_LENGTH			0x2000000
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT	2
+#define AVF_RX_PROG_STATUS_DESC_QW1_PROGID_MASK	(0x7UL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT)
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT	0
+#define AVF_RX_PROG_STATUS_DESC_QW1_STATUS_MASK	(0x7FFFUL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT)
+
+#define AVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT	19
+#define AVF_RX_PROG_STATUS_DESC_QW1_ERROR_MASK		(0x3FUL << \
+				AVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT)
+
+enum avf_rx_prog_status_desc_status_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_PROG_STATUS_DESC_DD_SHIFT	= 0,
+	AVF_RX_PROG_STATUS_DESC_PROG_ID_SHIFT	= 2 /* 3 BITS */
+};
+
+enum avf_rx_prog_status_desc_prog_id_masks {
+	AVF_RX_PROG_STATUS_DESC_FD_FILTER_STATUS	= 1,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CTXT_PROG_STATUS	= 2,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CTXT_INVL_STATUS	= 4,
+};
+
+enum avf_rx_prog_status_desc_error_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_RX_PROG_STATUS_DESC_FD_TBL_FULL_SHIFT	= 0,
+	AVF_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT	= 1,
+	AVF_RX_PROG_STATUS_DESC_FCOE_TBL_FULL_SHIFT	= 2,
+	AVF_RX_PROG_STATUS_DESC_FCOE_CONFLICT_SHIFT	= 3
+};
+
+#define AVF_TWO_BIT_MASK	0x3
+#define AVF_THREE_BIT_MASK	0x7
+#define AVF_FOUR_BIT_MASK	0xF
+#define AVF_EIGHTEEN_BIT_MASK	0x3FFFF
+
+/* TX Descriptor */
+struct avf_tx_desc {
+	__le64 buffer_addr; /* Address of descriptor's data buf */
+	__le64 cmd_type_offset_bsz;
+};
+
+#define AVF_TXD_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_QW1_DTYPE_MASK		(0xFUL << AVF_TXD_QW1_DTYPE_SHIFT)
+
+enum avf_tx_desc_dtype_value {
+	AVF_TX_DESC_DTYPE_DATA		= 0x0,
+	AVF_TX_DESC_DTYPE_NOP		= 0x1, /* same as Context desc */
+	AVF_TX_DESC_DTYPE_CONTEXT	= 0x1,
+	AVF_TX_DESC_DTYPE_FCOE_CTX	= 0x2,
+	AVF_TX_DESC_DTYPE_FILTER_PROG	= 0x8,
+	AVF_TX_DESC_DTYPE_DDP_CTX	= 0x9,
+	AVF_TX_DESC_DTYPE_FLEX_DATA	= 0xB,
+	AVF_TX_DESC_DTYPE_FLEX_CTX_1	= 0xC,
+	AVF_TX_DESC_DTYPE_FLEX_CTX_2	= 0xD,
+	AVF_TX_DESC_DTYPE_DESC_DONE	= 0xF
+};
+
+#define AVF_TXD_QW1_CMD_SHIFT	4
+#define AVF_TXD_QW1_CMD_MASK	(0x3FFUL << AVF_TXD_QW1_CMD_SHIFT)
+
+enum avf_tx_desc_cmd_bits {
+	AVF_TX_DESC_CMD_EOP			= 0x0001,
+	AVF_TX_DESC_CMD_RS			= 0x0002,
+	AVF_TX_DESC_CMD_ICRC			= 0x0004,
+	AVF_TX_DESC_CMD_IL2TAG1		= 0x0008,
+	AVF_TX_DESC_CMD_DUMMY			= 0x0010,
+	AVF_TX_DESC_CMD_IIPT_NONIP		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV6		= 0x0020, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV4		= 0x0040, /* 2 BITS */
+	AVF_TX_DESC_CMD_IIPT_IPV4_CSUM		= 0x0060, /* 2 BITS */
+	AVF_TX_DESC_CMD_FCOET			= 0x0080,
+	AVF_TX_DESC_CMD_L4T_EOFT_UNK		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_TCP		= 0x0100, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_SCTP		= 0x0200, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_UDP		= 0x0300, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_N		= 0x0000, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_T		= 0x0100, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_NI	= 0x0200, /* 2 BITS */
+	AVF_TX_DESC_CMD_L4T_EOFT_EOF_A		= 0x0300, /* 2 BITS */
+};
+
+#define AVF_TXD_QW1_OFFSET_SHIFT	16
+#define AVF_TXD_QW1_OFFSET_MASK	(0x3FFFFULL << \
+					 AVF_TXD_QW1_OFFSET_SHIFT)
+
+enum avf_tx_desc_length_fields {
+	/* Note: These are predefined bit offsets */
+	AVF_TX_DESC_LENGTH_MACLEN_SHIFT	= 0, /* 7 BITS */
+	AVF_TX_DESC_LENGTH_IPLEN_SHIFT		= 7, /* 7 BITS */
+	AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT	= 14 /* 4 BITS */
+};
+
+#define AVF_TXD_QW1_MACLEN_MASK (0x7FUL << AVF_TX_DESC_LENGTH_MACLEN_SHIFT)
+#define AVF_TXD_QW1_IPLEN_MASK  (0x7FUL << AVF_TX_DESC_LENGTH_IPLEN_SHIFT)
+#define AVF_TXD_QW1_L4LEN_MASK  (0xFUL << AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+#define AVF_TXD_QW1_FCLEN_MASK  (0xFUL << AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+
+#define AVF_TXD_QW1_TX_BUF_SZ_SHIFT	34
+#define AVF_TXD_QW1_TX_BUF_SZ_MASK	(0x3FFFULL << \
+					 AVF_TXD_QW1_TX_BUF_SZ_SHIFT)
+
+#define AVF_TXD_QW1_L2TAG1_SHIFT	48
+#define AVF_TXD_QW1_L2TAG1_MASK	(0xFFFFULL << AVF_TXD_QW1_L2TAG1_SHIFT)
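+/* Illustrative sketch (assumes hypothetical locals 'txd', 'buf_dma' and
+ * 'pkt_len'): a single-buffer data descriptor with CRC insertion could be
+ * built from the fields above roughly as
+ *
+ *	u64 cmd = AVF_TX_DESC_CMD_EOP | AVF_TX_DESC_CMD_RS | AVF_TX_DESC_CMD_ICRC;
+ *	txd->buffer_addr = rte_cpu_to_le_64(buf_dma);
+ *	txd->cmd_type_offset_bsz =
+ *		rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DATA |
+ *				 (cmd << AVF_TXD_QW1_CMD_SHIFT) |
+ *				 ((u64)pkt_len << AVF_TXD_QW1_TX_BUF_SZ_SHIFT));
+ */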
+
+/* Context descriptors */
+struct avf_tx_context_desc {
+	__le32 tunneling_params;
+	__le16 l2tag2;
+	__le16 rsvd;
+	__le64 type_cmd_tso_mss;
+};
+
+#define AVF_TXD_CTX_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_CTX_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_CTX_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_CTX_QW1_CMD_SHIFT	4
+#define AVF_TXD_CTX_QW1_CMD_MASK	(0xFFFFUL << AVF_TXD_CTX_QW1_CMD_SHIFT)
+
+enum avf_tx_ctx_desc_cmd_bits {
+	AVF_TX_CTX_DESC_TSO		= 0x01,
+	AVF_TX_CTX_DESC_TSYN		= 0x02,
+	AVF_TX_CTX_DESC_IL2TAG2	= 0x04,
+	AVF_TX_CTX_DESC_IL2TAG2_IL2H	= 0x08,
+	AVF_TX_CTX_DESC_SWTCH_NOTAG	= 0x00,
+	AVF_TX_CTX_DESC_SWTCH_UPLINK	= 0x10,
+	AVF_TX_CTX_DESC_SWTCH_LOCAL	= 0x20,
+	AVF_TX_CTX_DESC_SWTCH_VSI	= 0x30,
+	AVF_TX_CTX_DESC_SWPE		= 0x40
+};
+
+#define AVF_TXD_CTX_QW1_TSO_LEN_SHIFT	30
+#define AVF_TXD_CTX_QW1_TSO_LEN_MASK	(0x3FFFFULL << \
+					 AVF_TXD_CTX_QW1_TSO_LEN_SHIFT)
+
+#define AVF_TXD_CTX_QW1_MSS_SHIFT	50
+#define AVF_TXD_CTX_QW1_MSS_MASK	(0x3FFFULL << \
+					 AVF_TXD_CTX_QW1_MSS_SHIFT)
+
+#define AVF_TXD_CTX_QW1_VSI_SHIFT	50
+#define AVF_TXD_CTX_QW1_VSI_MASK	(0x1FFULL << AVF_TXD_CTX_QW1_VSI_SHIFT)
+
+#define AVF_TXD_CTX_QW0_EXT_IP_SHIFT	0
+#define AVF_TXD_CTX_QW0_EXT_IP_MASK	(0x3ULL << \
+					 AVF_TXD_CTX_QW0_EXT_IP_SHIFT)
+
+enum avf_tx_ctx_desc_eipt_offload {
+	AVF_TX_CTX_EXT_IP_NONE		= 0x0,
+	AVF_TX_CTX_EXT_IP_IPV6		= 0x1,
+	AVF_TX_CTX_EXT_IP_IPV4_NO_CSUM	= 0x2,
+	AVF_TX_CTX_EXT_IP_IPV4		= 0x3
+};
+
+#define AVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT	2
+#define AVF_TXD_CTX_QW0_EXT_IPLEN_MASK	(0x3FULL << \
+					 AVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT)
+
+#define AVF_TXD_CTX_QW0_NATT_SHIFT	9
+#define AVF_TXD_CTX_QW0_NATT_MASK	(0x3ULL << AVF_TXD_CTX_QW0_NATT_SHIFT)
+
+#define AVF_TXD_CTX_UDP_TUNNELING	BIT_ULL(AVF_TXD_CTX_QW0_NATT_SHIFT)
+#define AVF_TXD_CTX_GRE_TUNNELING	(0x2ULL << AVF_TXD_CTX_QW0_NATT_SHIFT)
+
+#define AVF_TXD_CTX_QW0_EIP_NOINC_SHIFT	11
+#define AVF_TXD_CTX_QW0_EIP_NOINC_MASK	BIT_ULL(AVF_TXD_CTX_QW0_EIP_NOINC_SHIFT)
+
+#define AVF_TXD_CTX_EIP_NOINC_IPID_CONST	AVF_TXD_CTX_QW0_EIP_NOINC_MASK
+
+#define AVF_TXD_CTX_QW0_NATLEN_SHIFT	12
+#define AVF_TXD_CTX_QW0_NATLEN_MASK	(0x7FULL << \
+					 AVF_TXD_CTX_QW0_NATLEN_SHIFT)
+
+#define AVF_TXD_CTX_QW0_DECTTL_SHIFT	19
+#define AVF_TXD_CTX_QW0_DECTTL_MASK	(0xFULL << \
+					 AVF_TXD_CTX_QW0_DECTTL_SHIFT)
+
+#define AVF_TXD_CTX_QW0_L4T_CS_SHIFT	23
+#define AVF_TXD_CTX_QW0_L4T_CS_MASK	BIT_ULL(AVF_TXD_CTX_QW0_L4T_CS_SHIFT)
+struct avf_nop_desc {
+	__le64 rsvd;
+	__le64 dtype_cmd;
+};
+
+#define AVF_TXD_NOP_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_NOP_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_NOP_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_NOP_QW1_CMD_SHIFT	4
+#define AVF_TXD_NOP_QW1_CMD_MASK	(0x7FUL << AVF_TXD_NOP_QW1_CMD_SHIFT)
+
+enum avf_tx_nop_desc_cmd_bits {
+	/* Note: These are predefined bit offsets */
+	AVF_TX_NOP_DESC_EOP_SHIFT	= 0,
+	AVF_TX_NOP_DESC_RS_SHIFT	= 1,
+	AVF_TX_NOP_DESC_RSV_SHIFT	= 2 /* 5 bits */
+};
+
+struct avf_filter_program_desc {
+	__le32 qindex_flex_ptype_vsi;
+	__le32 rsvd;
+	__le32 dtype_cmd_cntindex;
+	__le32 fd_id;
+};
+#define AVF_TXD_FLTR_QW0_QINDEX_SHIFT	0
+#define AVF_TXD_FLTR_QW0_QINDEX_MASK	(0x7FFUL << \
+					 AVF_TXD_FLTR_QW0_QINDEX_SHIFT)
+#define AVF_TXD_FLTR_QW0_FLEXOFF_SHIFT	11
+#define AVF_TXD_FLTR_QW0_FLEXOFF_MASK	(0x7UL << \
+					 AVF_TXD_FLTR_QW0_FLEXOFF_SHIFT)
+#define AVF_TXD_FLTR_QW0_PCTYPE_SHIFT	17
+#define AVF_TXD_FLTR_QW0_PCTYPE_MASK	(0x3FUL << \
+					 AVF_TXD_FLTR_QW0_PCTYPE_SHIFT)
+
+/* Packet Classifier Types for filters */
+enum avf_filter_pctype {
+	/* Note: Values 0-28 are reserved for future use.
+	 * Values 29, 30 and 32 are not supported on XL710 and X710.
+	 */
+	AVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP	= 29,
+	AVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP	= 30,
+	AVF_FILTER_PCTYPE_NONF_IPV4_UDP		= 31,
+	AVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK	= 32,
+	AVF_FILTER_PCTYPE_NONF_IPV4_TCP		= 33,
+	AVF_FILTER_PCTYPE_NONF_IPV4_SCTP		= 34,
+	AVF_FILTER_PCTYPE_NONF_IPV4_OTHER		= 35,
+	AVF_FILTER_PCTYPE_FRAG_IPV4			= 36,
+	/* Note: Values 37-38 are reserved for future use.
+	 * Values 39, 40 and 42 are not supported on XL710 and X710.
+	 */
+	AVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP	= 39,
+	AVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP	= 40,
+	AVF_FILTER_PCTYPE_NONF_IPV6_UDP		= 41,
+	AVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK	= 42,
+	AVF_FILTER_PCTYPE_NONF_IPV6_TCP		= 43,
+	AVF_FILTER_PCTYPE_NONF_IPV6_SCTP		= 44,
+	AVF_FILTER_PCTYPE_NONF_IPV6_OTHER		= 45,
+	AVF_FILTER_PCTYPE_FRAG_IPV6			= 46,
+	/* Note: Value 47 is reserved for future use */
+	AVF_FILTER_PCTYPE_FCOE_OX			= 48,
+	AVF_FILTER_PCTYPE_FCOE_RX			= 49,
+	AVF_FILTER_PCTYPE_FCOE_OTHER			= 50,
+	/* Note: Values 51-62 are reserved for future use */
+	AVF_FILTER_PCTYPE_L2_PAYLOAD			= 63,
+};
+
+enum avf_filter_program_desc_dest {
+	AVF_FILTER_PROGRAM_DESC_DEST_DROP_PACKET		= 0x0,
+	AVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_QINDEX	= 0x1,
+	AVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_OTHER	= 0x2,
+};
+
+enum avf_filter_program_desc_fd_status {
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_NONE			= 0x0,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID		= 0x1,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID_4FLEX_BYTES	= 0x2,
+	AVF_FILTER_PROGRAM_DESC_FD_STATUS_8FLEX_BYTES		= 0x3,
+};
+
+#define AVF_TXD_FLTR_QW0_DEST_VSI_SHIFT	23
+#define AVF_TXD_FLTR_QW0_DEST_VSI_MASK	(0x1FFUL << \
+					 AVF_TXD_FLTR_QW0_DEST_VSI_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_DTYPE_SHIFT	0
+#define AVF_TXD_FLTR_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_FLTR_QW1_DTYPE_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CMD_SHIFT	4
+#define AVF_TXD_FLTR_QW1_CMD_MASK	(0xFFFFULL << \
+					 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_PCMD_SHIFT	(0x0ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_PCMD_MASK	(0x7ULL << AVF_TXD_FLTR_QW1_PCMD_SHIFT)
+
+enum avf_filter_program_desc_pcmd {
+	AVF_FILTER_PROGRAM_DESC_PCMD_ADD_UPDATE	= 0x1,
+	AVF_FILTER_PROGRAM_DESC_PCMD_REMOVE		= 0x2,
+};
+
+#define AVF_TXD_FLTR_QW1_DEST_SHIFT	(0x3ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_DEST_MASK	(0x3ULL << AVF_TXD_FLTR_QW1_DEST_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CNT_ENA_SHIFT	(0x7ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_CNT_ENA_MASK	BIT_ULL(AVF_TXD_FLTR_QW1_CNT_ENA_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_FD_STATUS_SHIFT	(0x9ULL + \
+						 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_FD_STATUS_MASK (0x3ULL << \
+					  AVF_TXD_FLTR_QW1_FD_STATUS_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_ATR_SHIFT	(0xEULL + \
+					 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define AVF_TXD_FLTR_QW1_ATR_MASK	BIT_ULL(AVF_TXD_FLTR_QW1_ATR_SHIFT)
+
+#define AVF_TXD_FLTR_QW1_CNTINDEX_SHIFT 20
+#define AVF_TXD_FLTR_QW1_CNTINDEX_MASK	(0x1FFUL << \
+					 AVF_TXD_FLTR_QW1_CNTINDEX_SHIFT)
+
+enum avf_filter_type {
+	AVF_FLOW_DIRECTOR_FLTR = 0,
+	AVF_PE_QUAD_HASH_FLTR = 1,
+	AVF_ETHERTYPE_FLTR,
+	AVF_FCOE_CTX_FLTR,
+	AVF_MAC_VLAN_FLTR,
+	AVF_HASH_FLTR
+};
+
+struct avf_vsi_context {
+	u16 seid;
+	u16 uplink_seid;
+	u16 vsi_number;
+	u16 vsis_allocated;
+	u16 vsis_unallocated;
+	u16 flags;
+	u8 pf_num;
+	u8 vf_num;
+	u8 connection_type;
+	struct avf_aqc_vsi_properties_data info;
+};
+
+struct avf_veb_context {
+	u16 seid;
+	u16 uplink_seid;
+	u16 veb_number;
+	u16 vebs_allocated;
+	u16 vebs_unallocated;
+	u16 flags;
+	struct avf_aqc_get_veb_parameters_completion info;
+};
+
+/* Statistics collected by each port, VSI, VEB, and S-channel */
+struct avf_eth_stats {
+	u64 rx_bytes;			/* gorc */
+	u64 rx_unicast;			/* uprc */
+	u64 rx_multicast;		/* mprc */
+	u64 rx_broadcast;		/* bprc */
+	u64 rx_discards;		/* rdpc */
+	u64 rx_unknown_protocol;	/* rupp */
+	u64 tx_bytes;			/* gotc */
+	u64 tx_unicast;			/* uptc */
+	u64 tx_multicast;		/* mptc */
+	u64 tx_broadcast;		/* bptc */
+	u64 tx_discards;		/* tdpc */
+	u64 tx_errors;			/* tepc */
+};
+
+/* Statistics collected per VEB per TC */
+struct avf_veb_tc_stats {
+	u64 tc_rx_packets[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_rx_bytes[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_tx_packets[AVF_MAX_TRAFFIC_CLASS];
+	u64 tc_tx_bytes[AVF_MAX_TRAFFIC_CLASS];
+};
+
+/* Statistics collected per function for FCoE */
+struct avf_fcoe_stats {
+	u64 rx_fcoe_packets;		/* fcoeprc */
+	u64 rx_fcoe_dwords;		/* fcoedwrc */
+	u64 rx_fcoe_dropped;		/* fcoerpdc */
+	u64 tx_fcoe_packets;		/* fcoeptc */
+	u64 tx_fcoe_dwords;		/* fcoedwtc */
+	u64 fcoe_bad_fccrc;		/* fcoecrc */
+	u64 fcoe_last_error;		/* fcoelast */
+	u64 fcoe_ddp_count;		/* fcoeddpc */
+};
+
+/* offset to per function FCoE statistics block */
+#define AVF_FCOE_VF_STAT_OFFSET	0
+#define AVF_FCOE_PF_STAT_OFFSET	128
+#define AVF_FCOE_STAT_MAX		(AVF_FCOE_PF_STAT_OFFSET + AVF_MAX_PF)
+
+/* Statistics collected by the MAC */
+struct avf_hw_port_stats {
+	/* eth stats collected by the port */
+	struct avf_eth_stats eth;
+
+	/* additional port specific stats */
+	u64 tx_dropped_link_down;	/* tdold */
+	u64 crc_errors;			/* crcerrs */
+	u64 illegal_bytes;		/* illerrc */
+	u64 error_bytes;		/* errbc */
+	u64 mac_local_faults;		/* mlfc */
+	u64 mac_remote_faults;		/* mrfc */
+	u64 rx_length_errors;		/* rlec */
+	u64 link_xon_rx;		/* lxonrxc */
+	u64 link_xoff_rx;		/* lxoffrxc */
+	u64 priority_xon_rx[8];		/* pxonrxc[8] */
+	u64 priority_xoff_rx[8];	/* pxoffrxc[8] */
+	u64 link_xon_tx;		/* lxontxc */
+	u64 link_xoff_tx;		/* lxofftxc */
+	u64 priority_xon_tx[8];		/* pxontxc[8] */
+	u64 priority_xoff_tx[8];	/* pxofftxc[8] */
+	u64 priority_xon_2_xoff[8];	/* pxon2offc[8] */
+	u64 rx_size_64;			/* prc64 */
+	u64 rx_size_127;		/* prc127 */
+	u64 rx_size_255;		/* prc255 */
+	u64 rx_size_511;		/* prc511 */
+	u64 rx_size_1023;		/* prc1023 */
+	u64 rx_size_1522;		/* prc1522 */
+	u64 rx_size_big;		/* prc9522 */
+	u64 rx_undersize;		/* ruc */
+	u64 rx_fragments;		/* rfc */
+	u64 rx_oversize;		/* roc */
+	u64 rx_jabber;			/* rjc */
+	u64 tx_size_64;			/* ptc64 */
+	u64 tx_size_127;		/* ptc127 */
+	u64 tx_size_255;		/* ptc255 */
+	u64 tx_size_511;		/* ptc511 */
+	u64 tx_size_1023;		/* ptc1023 */
+	u64 tx_size_1522;		/* ptc1522 */
+	u64 tx_size_big;		/* ptc9522 */
+	u64 mac_short_packet_dropped;	/* mspdc */
+	u64 checksum_error;		/* xec */
+	/* flow director stats */
+	u64 fd_atr_match;
+	u64 fd_sb_match;
+	u64 fd_atr_tunnel_match;
+	u32 fd_atr_status;
+	u32 fd_sb_status;
+	/* EEE LPI */
+	u32 tx_lpi_status;
+	u32 rx_lpi_status;
+	u64 tx_lpi_count;		/* etlpic */
+	u64 rx_lpi_count;		/* erlpic */
+};
+
+/* Checksum and Shadow RAM pointers */
+#define AVF_SR_NVM_CONTROL_WORD		0x00
+#define AVF_SR_PCIE_ANALOG_CONFIG_PTR		0x03
+#define AVF_SR_PHY_ANALOG_CONFIG_PTR		0x04
+#define AVF_SR_OPTION_ROM_PTR			0x05
+#define AVF_SR_RO_PCIR_REGS_AUTO_LOAD_PTR	0x06
+#define AVF_SR_AUTO_GENERATED_POINTERS_PTR	0x07
+#define AVF_SR_PCIR_REGS_AUTO_LOAD_PTR		0x08
+#define AVF_SR_EMP_GLOBAL_MODULE_PTR		0x09
+#define AVF_SR_RO_PCIE_LCB_PTR			0x0A
+#define AVF_SR_EMP_IMAGE_PTR			0x0B
+#define AVF_SR_PE_IMAGE_PTR			0x0C
+#define AVF_SR_CSR_PROTECTED_LIST_PTR		0x0D
+#define AVF_SR_MNG_CONFIG_PTR			0x0E
+#define AVF_EMP_MODULE_PTR			0x0F
+#define AVF_SR_EMP_MODULE_PTR			0x48
+#define AVF_SR_PBA_FLAGS			0x15
+#define AVF_SR_PBA_BLOCK_PTR			0x16
+#define AVF_SR_BOOT_CONFIG_PTR			0x17
+#define AVF_NVM_OEM_VER_OFF			0x83
+#define AVF_SR_NVM_DEV_STARTER_VERSION		0x18
+#define AVF_SR_NVM_WAKE_ON_LAN			0x19
+#define AVF_SR_ALTERNATE_SAN_MAC_ADDRESS_PTR	0x27
+#define AVF_SR_PERMANENT_SAN_MAC_ADDRESS_PTR	0x28
+#define AVF_SR_NVM_MAP_VERSION			0x29
+#define AVF_SR_NVM_IMAGE_VERSION		0x2A
+#define AVF_SR_NVM_STRUCTURE_VERSION		0x2B
+#define AVF_SR_NVM_EETRACK_LO			0x2D
+#define AVF_SR_NVM_EETRACK_HI			0x2E
+#define AVF_SR_VPD_PTR				0x2F
+#define AVF_SR_PXE_SETUP_PTR			0x30
+#define AVF_SR_PXE_CONFIG_CUST_OPTIONS_PTR	0x31
+#define AVF_SR_NVM_ORIGINAL_EETRACK_LO		0x34
+#define AVF_SR_NVM_ORIGINAL_EETRACK_HI		0x35
+#define AVF_SR_SW_ETHERNET_MAC_ADDRESS_PTR	0x37
+#define AVF_SR_POR_REGS_AUTO_LOAD_PTR		0x38
+#define AVF_SR_EMPR_REGS_AUTO_LOAD_PTR		0x3A
+#define AVF_SR_GLOBR_REGS_AUTO_LOAD_PTR	0x3B
+#define AVF_SR_CORER_REGS_AUTO_LOAD_PTR	0x3C
+#define AVF_SR_PHY_ACTIVITY_LIST_PTR		0x3D
+#define AVF_SR_PCIE_ALT_AUTO_LOAD_PTR		0x3E
+#define AVF_SR_SW_CHECKSUM_WORD		0x3F
+#define AVF_SR_1ST_FREE_PROVISION_AREA_PTR	0x40
+#define AVF_SR_4TH_FREE_PROVISION_AREA_PTR	0x42
+#define AVF_SR_3RD_FREE_PROVISION_AREA_PTR	0x44
+#define AVF_SR_2ND_FREE_PROVISION_AREA_PTR	0x46
+#define AVF_SR_EMP_SR_SETTINGS_PTR		0x48
+#define AVF_SR_FEATURE_CONFIGURATION_PTR	0x49
+#define AVF_SR_CONFIGURATION_METADATA_PTR	0x4D
+#define AVF_SR_IMMEDIATE_VALUES_PTR		0x4E
+
+/* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
+#define AVF_SR_VPD_MODULE_MAX_SIZE		1024
+#define AVF_SR_PCIE_ALT_MODULE_MAX_SIZE	1024
+#define AVF_SR_CONTROL_WORD_1_SHIFT		0x06
+#define AVF_SR_CONTROL_WORD_1_MASK	(0x03 << AVF_SR_CONTROL_WORD_1_SHIFT)
+#define AVF_SR_CONTROL_WORD_1_NVM_BANK_VALID	BIT(5)
+#define AVF_SR_NVM_MAP_STRUCTURE_TYPE		BIT(12)
+#define AVF_PTR_TYPE                           BIT(15)
+
+/* Shadow RAM related */
+#define AVF_SR_SECTOR_SIZE_IN_WORDS	0x800
+#define AVF_SR_BUF_ALIGNMENT		4096
+#define AVF_SR_WORDS_IN_1KB		512
+/* Checksum should be calculated such that after adding all the words,
+ * including the checksum word itself, the sum should be 0xBABA.
+ */
+#define AVF_SR_SW_CHECKSUM_BASE	0xBABA
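+/* Illustrative sketch (assumes 'sr[]' holds the Shadow RAM words,
+ * 'sr_size' their count, and 'csum_word' the index of
+ * AVF_SR_SW_CHECKSUM_WORD): the stored checksum word is chosen so that the
+ * 16-bit sum of all words, including the checksum itself, equals the base:
+ *
+ *	u16 sum = 0;
+ *	for (i = 0; i < sr_size; i++)
+ *		if (i != csum_word)
+ *			sum += sr[i];
+ *	sr[csum_word] = (u16)(AVF_SR_SW_CHECKSUM_BASE - sum);
+ */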
+
+#define AVF_SRRD_SRCTL_ATTEMPTS	100000
+
+/* FCoE Tx context descriptor - Use the avf_tx_context_desc struct */
+
+enum avf_fcoe_tx_ctx_desc_cmd_bits {
+	AVF_FCOE_TX_CTX_DESC_OPCODE_SINGLE_SEND	= 0x00, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS2	= 0x01, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS3	= 0x05, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS2	= 0x02, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS3	= 0x06, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS2	= 0x03, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS3	= 0x07, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DDP_CTX_INVL	= 0x08, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_CTX_INVL	= 0x09, /* 4 BITS */
+	AVF_FCOE_TX_CTX_DESC_RELOFF			= 0x10,
+	AVF_FCOE_TX_CTX_DESC_CLRSEQ			= 0x20,
+	AVF_FCOE_TX_CTX_DESC_DIFENA			= 0x40,
+	AVF_FCOE_TX_CTX_DESC_IL2TAG2			= 0x80
+};
+
+/* FCoE DIF/DIX Context descriptor */
+struct avf_fcoe_difdix_context_desc {
+	__le64 flags_buff0_buff1_ref;
+	__le64 difapp_msk_bias;
+};
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT	0
+#define AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_MASK	(0xFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT)
+
+enum avf_fcoe_difdix_ctx_desc_flags_bits {
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_RSVD				= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGCHK		= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGNOTCHK		= 0x0004,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_OPAQUE			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY		= 0x0008,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPTAG	= 0x0010,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPREFTAG	= 0x0018,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_CNST			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_INC1BLK		= 0x0020,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_APPTAG		= 0x0040,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_RSVD			= 0x0060,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_XSUM			= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_CRC			= 0x0080,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_UNTAG			= 0x0000,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_BUF			= 0x0100,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_RSVD			= 0x0200,
+	/* 2 BITS */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_EMBDTAGS		= 0x0300,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_UNTAG			= 0x0000,
+	/* 1 BIT  */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_TAG			= 0x0400,
+	/* 1 BIT */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_512B			= 0x0000,
+	/* 1 BIT */
+	AVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_4K			= 0x0800
+};
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT	12
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_MASK	(0x3FFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT	22
+#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_MASK	(0x3FFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT	32
+#define AVF_FCOE_DIFDIX_CTX_QW0_REF_MASK	(0xFFFFFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT	0
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MASK	(0xFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT	16
+#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_MASK	(0xFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT)
+
+#define AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT	32
+#define AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_MASK	(0xFFFFFFFFULL << \
+					AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT)
+
+/* FCoE DIF/DIX Buffers descriptor */
+struct avf_fcoe_difdix_buffers_desc {
+	__le64 buff_addr0;
+	__le64 buff_addr1;
+};
+
+/* FCoE DDP Context descriptor */
+struct avf_fcoe_ddp_context_desc {
+	__le64 rsvd;
+	__le64 type_cmd_foff_lsize;
+};
+
+#define AVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT	0
+#define AVF_FCOE_DDP_CTX_QW1_DTYPE_MASK	(0xFULL << \
+					AVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT)
+
+#define AVF_FCOE_DDP_CTX_QW1_CMD_SHIFT	4
+#define AVF_FCOE_DDP_CTX_QW1_CMD_MASK	(0xFULL << \
+					 AVF_FCOE_DDP_CTX_QW1_CMD_SHIFT)
+
+enum avf_fcoe_ddp_ctx_desc_cmd_bits {
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_512B	= 0x00, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_4K		= 0x01, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_8K		= 0x02, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_BSIZE_16K	= 0x03, /* 2 BITS */
+	AVF_FCOE_DDP_CTX_DESC_DIFENA		= 0x04, /* 1 BIT  */
+	AVF_FCOE_DDP_CTX_DESC_LASTSEQH		= 0x08, /* 1 BIT  */
+};
+
+#define AVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT	16
+#define AVF_FCOE_DDP_CTX_QW1_FOFF_MASK	(0x3FFFULL << \
+					 AVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT)
+
+#define AVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT	32
+#define AVF_FCOE_DDP_CTX_QW1_LSIZE_MASK	(0x3FFFULL << \
+					AVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT)
+
+/* FCoE DDP/DWO Queue Context descriptor */
+struct avf_fcoe_queue_context_desc {
+	__le64 dmaindx_fbase;           /* 0:11 DMAINDX, 12:63 FBASE */
+	__le64 flen_tph;                /* 0:12 FLEN, 13:15 TPH */
+};
+
+#define AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT	0
+#define AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_MASK	(0xFFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT	12
+#define AVF_FCOE_QUEUE_CTX_QW0_FBASE_MASK	(0xFFFFFFFFFFFFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT	0
+#define AVF_FCOE_QUEUE_CTX_QW1_FLEN_MASK	(0x1FFFULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT)
+
+#define AVF_FCOE_QUEUE_CTX_QW1_TPH_SHIFT	13
+#define AVF_FCOE_QUEUE_CTX_QW1_TPH_MASK	(0x7ULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_TPH_SHIFT)
+
+enum avf_fcoe_queue_ctx_desc_tph_bits {
+	AVF_FCOE_QUEUE_CTX_DESC_TPHRDESC	= 0x1,
+	AVF_FCOE_QUEUE_CTX_DESC_TPHDATA	= 0x2
+};
+
+#define AVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT	30
+#define AVF_FCOE_QUEUE_CTX_QW1_RECIPE_MASK	(0x3ULL << \
+					AVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT)
+
+/* FCoE DDP/DWO Filter Context descriptor */
+struct avf_fcoe_filter_context_desc {
+	__le32 param;
+	__le16 seqn;
+
+	/* 48:51(0:3) RSVD, 52:63(4:15) DMAINDX */
+	__le16 rsvd_dmaindx;
+
+	/* 0:7 FLAGS, 8:52 RSVD, 53:63 LANQ */
+	__le64 flags_rsvd_lanq;
+};
+
+#define AVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT	4
+#define AVF_FCOE_FILTER_CTX_QW0_DMAINDX_MASK	(0xFFF << \
+					AVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT)
+
+enum avf_fcoe_filter_ctx_desc_flags_bits {
+	AVF_FCOE_FILTER_CTX_DESC_CTYP_DDP	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_CTYP_DWO	= 0x01,
+	AVF_FCOE_FILTER_CTX_DESC_ENODE_INIT	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_ENODE_RSP	= 0x02,
+	AVF_FCOE_FILTER_CTX_DESC_FC_CLASS2	= 0x00,
+	AVF_FCOE_FILTER_CTX_DESC_FC_CLASS3	= 0x04
+};
+
+#define AVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT	0
+#define AVF_FCOE_FILTER_CTX_QW1_FLAGS_MASK	(0xFFULL << \
+					AVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT)
+
+#define AVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT     8
+#define AVF_FCOE_FILTER_CTX_QW1_PCTYPE_MASK      (0x3FULL << \
+			AVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT)
+
+#define AVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT     53
+#define AVF_FCOE_FILTER_CTX_QW1_LANQINDX_MASK      (0x7FFULL << \
+			AVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT)
+
+enum avf_switch_element_types {
+	AVF_SWITCH_ELEMENT_TYPE_MAC	= 1,
+	AVF_SWITCH_ELEMENT_TYPE_PF	= 2,
+	AVF_SWITCH_ELEMENT_TYPE_VF	= 3,
+	AVF_SWITCH_ELEMENT_TYPE_EMP	= 4,
+	AVF_SWITCH_ELEMENT_TYPE_BMC	= 6,
+	AVF_SWITCH_ELEMENT_TYPE_PE	= 16,
+	AVF_SWITCH_ELEMENT_TYPE_VEB	= 17,
+	AVF_SWITCH_ELEMENT_TYPE_PA	= 18,
+	AVF_SWITCH_ELEMENT_TYPE_VSI	= 19,
+};
+
+/* Supported EtherType filters */
+enum avf_ether_type_index {
+	AVF_ETHER_TYPE_1588		= 0,
+	AVF_ETHER_TYPE_FIP		= 1,
+	AVF_ETHER_TYPE_OUI_EXTENDED	= 2,
+	AVF_ETHER_TYPE_MAC_CONTROL	= 3,
+	AVF_ETHER_TYPE_LLDP		= 4,
+	AVF_ETHER_TYPE_EVB_PROTOCOL1	= 5,
+	AVF_ETHER_TYPE_EVB_PROTOCOL2	= 6,
+	AVF_ETHER_TYPE_QCN_CNM		= 7,
+	AVF_ETHER_TYPE_8021X		= 8,
+	AVF_ETHER_TYPE_ARP		= 9,
+	AVF_ETHER_TYPE_RSV1		= 10,
+	AVF_ETHER_TYPE_RSV2		= 11,
+};
+
+/* Filter context base size is 1K */
+#define AVF_HASH_FILTER_BASE_SIZE	1024
+/* Supported Hash filter values */
+enum avf_hash_filter_size {
+	AVF_HASH_FILTER_SIZE_1K	= 0,
+	AVF_HASH_FILTER_SIZE_2K	= 1,
+	AVF_HASH_FILTER_SIZE_4K	= 2,
+	AVF_HASH_FILTER_SIZE_8K	= 3,
+	AVF_HASH_FILTER_SIZE_16K	= 4,
+	AVF_HASH_FILTER_SIZE_32K	= 5,
+	AVF_HASH_FILTER_SIZE_64K	= 6,
+	AVF_HASH_FILTER_SIZE_128K	= 7,
+	AVF_HASH_FILTER_SIZE_256K	= 8,
+	AVF_HASH_FILTER_SIZE_512K	= 9,
+	AVF_HASH_FILTER_SIZE_1M	= 10,
+};
+
+/* DMA context base size is 0.5K */
+#define AVF_DMA_CNTX_BASE_SIZE		512
+/* Supported DMA context values */
+enum avf_dma_cntx_size {
+	AVF_DMA_CNTX_SIZE_512		= 0,
+	AVF_DMA_CNTX_SIZE_1K		= 1,
+	AVF_DMA_CNTX_SIZE_2K		= 2,
+	AVF_DMA_CNTX_SIZE_4K		= 3,
+	AVF_DMA_CNTX_SIZE_8K		= 4,
+	AVF_DMA_CNTX_SIZE_16K		= 5,
+	AVF_DMA_CNTX_SIZE_32K		= 6,
+	AVF_DMA_CNTX_SIZE_64K		= 7,
+	AVF_DMA_CNTX_SIZE_128K		= 8,
+	AVF_DMA_CNTX_SIZE_256K		= 9,
+};
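+/* Illustrative note (inferred from the values above): both encodings scale
+ * as a power of two of their base size, roughly
+ *
+ *	hash filter bytes = AVF_HASH_FILTER_BASE_SIZE << (avf_hash_filter_size value)
+ *	DMA context bytes = AVF_DMA_CNTX_BASE_SIZE << (avf_dma_cntx_size value)
+ */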
+
+/* Supported Hash look up table (LUT) sizes */
+enum avf_hash_lut_size {
+	AVF_HASH_LUT_SIZE_128		= 0,
+	AVF_HASH_LUT_SIZE_512		= 1,
+};
+
+/* Structure to hold per-PF filter control settings */
+struct avf_filter_control_settings {
+	/* number of PE Quad Hash filter buckets */
+	enum avf_hash_filter_size pe_filt_num;
+	/* number of PE Quad Hash contexts */
+	enum avf_dma_cntx_size pe_cntx_num;
+	/* number of FCoE filter buckets */
+	enum avf_hash_filter_size fcoe_filt_num;
+	/* number of FCoE DDP contexts */
+	enum avf_dma_cntx_size fcoe_cntx_num;
+	/* size of the Hash LUT */
+	enum avf_hash_lut_size	hash_lut_size;
+	/* enable FDIR filters for PF and its VFs */
+	bool enable_fdir;
+	/* enable Ethertype filters for PF and its VFs */
+	bool enable_ethtype;
+	/* enable MAC/VLAN filters for PF and its VFs */
+	bool enable_macvlan;
+};
+
+/* Structure to hold device level control filter counts */
+struct avf_control_filter_stats {
+	u16 mac_etype_used;   /* Used perfect match MAC/EtherType filters */
+	u16 etype_used;       /* Used perfect EtherType filters */
+	u16 mac_etype_free;   /* Un-used perfect match MAC/EtherType filters */
+	u16 etype_free;       /* Un-used perfect EtherType filters */
+};
+
+enum avf_reset_type {
+	AVF_RESET_POR		= 0,
+	AVF_RESET_CORER	= 1,
+	AVF_RESET_GLOBR	= 2,
+	AVF_RESET_EMPR		= 3,
+};
+
+/* IEEE 802.1AB LLDP Agent Variables from NVM */
+#define AVF_NVM_LLDP_CFG_PTR   0x06
+#define AVF_SR_LLDP_CFG_PTR    0x31
+struct avf_lldp_variables {
+	u16 length;
+	u16 adminstatus;
+	u16 msgfasttx;
+	u16 msgtxinterval;
+	u16 txparams;
+	u16 timers;
+	u16 crc8;
+};
+
+/* Offsets into Alternate Ram */
+#define AVF_ALT_STRUCT_FIRST_PF_OFFSET		0   /* in dwords */
+#define AVF_ALT_STRUCT_DWORDS_PER_PF		64   /* in dwords */
+#define AVF_ALT_STRUCT_OUTER_VLAN_TAG_OFFSET	0xD  /* in dwords */
+#define AVF_ALT_STRUCT_USER_PRIORITY_OFFSET	0xC  /* in dwords */
+#define AVF_ALT_STRUCT_MIN_BW_OFFSET		0xE  /* in dwords */
+#define AVF_ALT_STRUCT_MAX_BW_OFFSET		0xF  /* in dwords */
+
+/* Alternate Ram Bandwidth Masks */
+#define AVF_ALT_BW_VALUE_MASK		0xFF
+#define AVF_ALT_BW_RELATIVE_MASK	0x40000000
+#define AVF_ALT_BW_VALID_MASK		0x80000000
+
+/* RSS Hash Table Size */
+#define AVF_PFQF_CTL_0_HASHLUTSIZE_512	0x00010000
+
+/* INPUT SET MASK for RSS, flow director, and flexible payload */
+#define AVF_L3_SRC_SHIFT		47
+#define AVF_L3_SRC_MASK		(0x3ULL << AVF_L3_SRC_SHIFT)
+#define AVF_L3_V6_SRC_SHIFT		43
+#define AVF_L3_V6_SRC_MASK		(0xFFULL << AVF_L3_V6_SRC_SHIFT)
+#define AVF_L3_DST_SHIFT		35
+#define AVF_L3_DST_MASK		(0x3ULL << AVF_L3_DST_SHIFT)
+#define AVF_L3_V6_DST_SHIFT		35
+#define AVF_L3_V6_DST_MASK		(0xFFULL << AVF_L3_V6_DST_SHIFT)
+#define AVF_L4_SRC_SHIFT		34
+#define AVF_L4_SRC_MASK		(0x1ULL << AVF_L4_SRC_SHIFT)
+#define AVF_L4_DST_SHIFT		33
+#define AVF_L4_DST_MASK		(0x1ULL << AVF_L4_DST_SHIFT)
+#define AVF_VERIFY_TAG_SHIFT		31
+#define AVF_VERIFY_TAG_MASK		(0x3ULL << AVF_VERIFY_TAG_SHIFT)
+
+#define AVF_FLEX_50_SHIFT		13
+#define AVF_FLEX_50_MASK		(0x1ULL << AVF_FLEX_50_SHIFT)
+#define AVF_FLEX_51_SHIFT		12
+#define AVF_FLEX_51_MASK		(0x1ULL << AVF_FLEX_51_SHIFT)
+#define AVF_FLEX_52_SHIFT		11
+#define AVF_FLEX_52_MASK		(0x1ULL << AVF_FLEX_52_SHIFT)
+#define AVF_FLEX_53_SHIFT		10
+#define AVF_FLEX_53_MASK		(0x1ULL << AVF_FLEX_53_SHIFT)
+#define AVF_FLEX_54_SHIFT		9
+#define AVF_FLEX_54_MASK		(0x1ULL << AVF_FLEX_54_SHIFT)
+#define AVF_FLEX_55_SHIFT		8
+#define AVF_FLEX_55_MASK		(0x1ULL << AVF_FLEX_55_SHIFT)
+#define AVF_FLEX_56_SHIFT		7
+#define AVF_FLEX_56_MASK		(0x1ULL << AVF_FLEX_56_SHIFT)
+#define AVF_FLEX_57_SHIFT		6
+#define AVF_FLEX_57_MASK		(0x1ULL << AVF_FLEX_57_SHIFT)
+
+/* Version format for Dynamic Device Personalization (DDP) */
+struct avf_ddp_version {
+	u8 major;
+	u8 minor;
+	u8 update;
+	u8 draft;
+};
+
+#define AVF_DDP_NAME_SIZE	32
+
+/* Package header */
+struct avf_package_header {
+	struct avf_ddp_version version;
+	u32 segment_count;
+	u32 segment_offset[1];
+};
+
+/* Generic segment header */
+struct avf_generic_seg_header {
+#define SEGMENT_TYPE_METADATA	0x00000001
+#define SEGMENT_TYPE_NOTES	0x00000002
+#define SEGMENT_TYPE_AVF	0x00000011
+#define SEGMENT_TYPE_X722	0x00000012
+	u32 type;
+	struct avf_ddp_version version;
+	u32 size;
+	char name[AVF_DDP_NAME_SIZE];
+};
+
+struct avf_metadata_segment {
+	struct avf_generic_seg_header header;
+	struct avf_ddp_version version;
+#define AVF_DDP_TRACKID_RDONLY		0
+#define AVF_DDP_TRACKID_INVALID	0xFFFFFFFF
+	u32 track_id;
+	char name[AVF_DDP_NAME_SIZE];
+};
+
+struct avf_device_id_entry {
+	u32 vendor_dev_id;
+	u32 sub_vendor_dev_id;
+};
+
+struct avf_profile_segment {
+	struct avf_generic_seg_header header;
+	struct avf_ddp_version version;
+	char name[AVF_DDP_NAME_SIZE];
+	u32 device_table_count;
+	struct avf_device_id_entry device_table[1];
+};
+
+struct avf_section_table {
+	u32 section_count;
+	u32 section_offset[1];
+};
+
+struct avf_profile_section_header {
+	u16 tbl_size;
+	u16 data_end;
+	struct {
+#define SECTION_TYPE_INFO	0x00000010
+#define SECTION_TYPE_MMIO	0x00000800
+#define SECTION_TYPE_RB_MMIO	0x00001800
+#define SECTION_TYPE_AQ		0x00000801
+#define SECTION_TYPE_RB_AQ	0x00001801
+#define SECTION_TYPE_NOTE	0x80000000
+#define SECTION_TYPE_NAME	0x80000001
+#define SECTION_TYPE_PROTO	0x80000002
+#define SECTION_TYPE_PCTYPE	0x80000003
+#define SECTION_TYPE_PTYPE	0x80000004
+		u32 type;
+		u32 offset;
+		u32 size;
+	} section;
+};
+
+struct avf_profile_tlv_section_record {
+	u8 rtype;
+	u8 type;
+	u16 len;
+	u8 data[12];
+};
+
+/* Generic AQ section in profile */
+struct avf_profile_aq_section {
+	u16 opcode;
+	u16 flags;
+	u8  param[16];
+	u16 datalen;
+	u8  data[1];
+};
+
+struct avf_profile_info {
+	u32 track_id;
+	struct avf_ddp_version version;
+	u8 op;
+#define AVF_DDP_ADD_TRACKID		0x01
+#define AVF_DDP_REMOVE_TRACKID	0x02
+	u8 reserved[7];
+	u8 name[AVF_DDP_NAME_SIZE];
+};
+#endif /* _AVF_TYPE_H_ */
diff --git a/drivers/net/avf/base/virtchnl.h b/drivers/net/avf/base/virtchnl.h
new file mode 100644
index 0000000..167518f
--- /dev/null
+++ b/drivers/net/avf/base/virtchnl.h
@@ -0,0 +1,787 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2015, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _VIRTCHNL_H_
+#define _VIRTCHNL_H_
+
+/* Description:
+ * This header file describes the VF-PF communication protocol used
+ * by the drivers for all devices starting from our 40G product line
+ *
+ * Admin queue buffer usage:
+ * desc->opcode is always aqc_opc_send_msg_to_pf
+ * flags, retval, datalen, and data addr are all used normally.
+ * The Firmware copies the cookie fields when sending messages between the
+ * PF and VF, but uses all other fields internally. Due to this limitation,
+ * we must send all messages as "indirect", i.e. using an external buffer.
+ *
+ * All the VSI indexes are relative to the VF. Each VF can have a maximum of
+ * three VSIs. All the queue indexes are relative to the VSI.  Each VF can
+ * have a maximum of sixteen queues for all of its VSIs.
+ *
+ * The PF is required to return a status code in v_retval for all messages
+ * except RESET_VF, which does not require any response. The return value
+ * is of status_code type, defined in the shared type.h.
+ *
+ * In general, VF driver initialization should roughly follow the order of
+ * these opcodes. The VF driver must first validate the API version of the
+ * PF driver, then request a reset, then get resources, then configure
+ * queues and interrupts. After these operations are complete, the VF
+ * driver may start its queues, optionally add MAC and VLAN filters, and
+ * process traffic.
+ */
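+/* Illustrative bring-up sketch (a typical opcode order, not a normative
+ * requirement beyond what is stated above):
+ *
+ *	VIRTCHNL_OP_VERSION           negotiate the API version
+ *	VIRTCHNL_OP_RESET_VF          request a reset (no response expected)
+ *	VIRTCHNL_OP_GET_VF_RESOURCES  learn VSIs, queues and capabilities
+ *	VIRTCHNL_OP_CONFIG_VSI_QUEUES / VIRTCHNL_OP_CONFIG_IRQ_MAP
+ *	VIRTCHNL_OP_ENABLE_QUEUES     start traffic
+ *	VIRTCHNL_OP_ADD_ETH_ADDR / VIRTCHNL_OP_ADD_VLAN  (optional filters)
+ */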
+
+/* START GENERIC DEFINES
+ * Need to ensure the following enums and defines hold the same meaning and
+ * value in current and future projects
+ */
+
+/* Error Codes */
+enum virtchnl_status_code {
+	VIRTCHNL_STATUS_SUCCESS				= 0,
+	VIRTCHNL_ERR_PARAM				= -5,
+	VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH		= -38,
+	VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR		= -39,
+	VIRTCHNL_STATUS_ERR_INVALID_VF_ID		= -40,
+	VIRTCHNL_STATUS_NOT_SUPPORTED			= -64,
+};
+
+#define VIRTCHNL_LINK_SPEED_100MB_SHIFT		0x1
+#define VIRTCHNL_LINK_SPEED_1000MB_SHIFT	0x2
+#define VIRTCHNL_LINK_SPEED_10GB_SHIFT		0x3
+#define VIRTCHNL_LINK_SPEED_40GB_SHIFT		0x4
+#define VIRTCHNL_LINK_SPEED_20GB_SHIFT		0x5
+#define VIRTCHNL_LINK_SPEED_25GB_SHIFT		0x6
+
+enum virtchnl_link_speed {
+	VIRTCHNL_LINK_SPEED_UNKNOWN	= 0,
+	VIRTCHNL_LINK_SPEED_100MB	= BIT(VIRTCHNL_LINK_SPEED_100MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_1GB		= BIT(VIRTCHNL_LINK_SPEED_1000MB_SHIFT),
+	VIRTCHNL_LINK_SPEED_10GB	= BIT(VIRTCHNL_LINK_SPEED_10GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_40GB	= BIT(VIRTCHNL_LINK_SPEED_40GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_20GB	= BIT(VIRTCHNL_LINK_SPEED_20GB_SHIFT),
+	VIRTCHNL_LINK_SPEED_25GB	= BIT(VIRTCHNL_LINK_SPEED_25GB_SHIFT),
+};
+
+/* for hsplit_0 field of Rx HMC context */
+/* deprecated with AVF 1.0 */
+enum virtchnl_rx_hsplit {
+	VIRTCHNL_RX_HSPLIT_NO_SPLIT      = 0,
+	VIRTCHNL_RX_HSPLIT_SPLIT_L2      = 1,
+	VIRTCHNL_RX_HSPLIT_SPLIT_IP      = 2,
+	VIRTCHNL_RX_HSPLIT_SPLIT_TCP_UDP = 4,
+	VIRTCHNL_RX_HSPLIT_SPLIT_SCTP    = 8,
+};
+
+#define VIRTCHNL_ETH_LENGTH_OF_ADDRESS	6
+/* END GENERIC DEFINES */
+
+/* Opcodes for VF-PF communication. These are placed in the v_opcode field
+ * of the virtchnl_msg structure.
+ */
+enum virtchnl_ops {
+/* The PF sends status change events to VFs using
+ * the VIRTCHNL_OP_EVENT opcode.
+ * VFs send requests to the PF using the other ops.
+ * Use of "advanced opcode" features must be negotiated as part of capabilities
+ * exchange and are not considered part of base mode feature set.
+ */
+	VIRTCHNL_OP_UNKNOWN = 0,
+	VIRTCHNL_OP_VERSION = 1, /* must ALWAYS be 1 */
+	VIRTCHNL_OP_RESET_VF = 2,
+	VIRTCHNL_OP_GET_VF_RESOURCES = 3,
+	VIRTCHNL_OP_CONFIG_TX_QUEUE = 4,
+	VIRTCHNL_OP_CONFIG_RX_QUEUE = 5,
+	VIRTCHNL_OP_CONFIG_VSI_QUEUES = 6,
+	VIRTCHNL_OP_CONFIG_IRQ_MAP = 7,
+	VIRTCHNL_OP_ENABLE_QUEUES = 8,
+	VIRTCHNL_OP_DISABLE_QUEUES = 9,
+	VIRTCHNL_OP_ADD_ETH_ADDR = 10,
+	VIRTCHNL_OP_DEL_ETH_ADDR = 11,
+	VIRTCHNL_OP_ADD_VLAN = 12,
+	VIRTCHNL_OP_DEL_VLAN = 13,
+	VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE = 14,
+	VIRTCHNL_OP_GET_STATS = 15,
+	VIRTCHNL_OP_RSVD = 16,
+	VIRTCHNL_OP_EVENT = 17, /* must ALWAYS be 17 */
+#ifdef VIRTCHNL_SOL_VF_SUPPORT
+	VIRTCHNL_OP_GET_ADDNL_SOL_CONFIG = 19,
+#endif
+#ifdef VIRTCHNL_IWARP
+	VIRTCHNL_OP_IWARP = 20, /* advanced opcode */
+	VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP = 21, /* advanced opcode */
+	VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP = 22, /* advanced opcode */
+#endif
+	VIRTCHNL_OP_CONFIG_RSS_KEY = 23,
+	VIRTCHNL_OP_CONFIG_RSS_LUT = 24,
+	VIRTCHNL_OP_GET_RSS_HENA_CAPS = 25,
+	VIRTCHNL_OP_SET_RSS_HENA = 26,
+	VIRTCHNL_OP_ENABLE_VLAN_STRIPPING = 27,
+	VIRTCHNL_OP_DISABLE_VLAN_STRIPPING = 28,
+	VIRTCHNL_OP_REQUEST_QUEUES = 29,
+
+};
+
+/* This macro is used to generate a compilation error if a structure
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure is not of the correct size, otherwise it creates an enum that is
+ * never used.
+ */
+#define VIRTCHNL_CHECK_STRUCT_LEN(n, X) enum virtchnl_static_assert_enum_##X \
+	{virtchnl_static_assert_##X = (n) / ((sizeof(struct X) == (n)) ? 1 : 0)}
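+/* For example, VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg) below expands to
+ * an enum constant equal to 20 / 1 while sizeof(struct virtchnl_msg) == 20,
+ * and to a constant division by zero as soon as that size changes, which
+ * stops the build at the offending line.
+ */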
+
+/* Virtual channel message descriptor. This overlays the admin queue
+ * descriptor. All other data is passed in external buffers.
+ */
+
+struct virtchnl_msg {
+	u8 pad[8];			 /* AQ flags/opcode/len/retval fields */
+	enum virtchnl_ops v_opcode; /* avoid confusion with desc->opcode */
+	enum virtchnl_status_code v_retval;  /* ditto for desc->retval */
+	u32 vfid;			 /* used by PF when sending to VF */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg);
+
+/* Message descriptions and data structures.*/
+
+/* VIRTCHNL_OP_VERSION
+ * VF posts its version number to the PF. PF responds with its version number
+ * in the same format, along with a return code.
+ * Reply from PF has its major/minor versions also in param0 and param1.
+ * If there is a major version mismatch, then the VF cannot operate.
+ * If there is a minor version mismatch, then the VF can operate but should
+ * add a warning to the system log.
+ *
+ * This enum element MUST always be specified as == 1, regardless of other
+ * changes in the API. The PF must always respond to this message without
+ * error regardless of version mismatch.
+ */
+#define VIRTCHNL_VERSION_MAJOR		1
+#define VIRTCHNL_VERSION_MINOR		1
+#define VIRTCHNL_VERSION_MINOR_NO_VF_CAPS	0
+
+struct virtchnl_version_info {
+	u32 major;
+	u32 minor;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_version_info);
+
+#define VF_IS_V10(_v) (((_v)->major == 1) && ((_v)->minor == 0))
+#define VF_IS_V11(_ver) (((_ver)->major == 1) && ((_ver)->minor == 1))
+
+/* VIRTCHNL_OP_RESET_VF
+ * VF sends this request to PF with no parameters
+ * PF does NOT respond! VF driver must delay then poll VFGEN_RSTAT register
+ * until reset completion is indicated. The admin queue must be reinitialized
+ * after this operation.
+ *
+ * When reset is complete, PF must ensure that all queues in all VSIs associated
+ * with the VF are stopped, all queue configurations in the HMC are set to 0,
+ * and all MAC and VLAN filters (except the default MAC address) on all VSIs
+ * are cleared.
+ */
+
+/* VSI types that use VIRTCHNL interface for VF-PF communication. VSI_SRIOV
+ * vsi_type should always be 6 for backward compatibility. Add other fields
+ * as needed.
+ */
+enum virtchnl_vsi_type {
+	VIRTCHNL_VSI_TYPE_INVALID = 0,
+	VIRTCHNL_VSI_SRIOV = 6,
+};
+
+/* VIRTCHNL_OP_GET_VF_RESOURCES
+ * Version 1.0 VF sends this request to PF with no parameters
+ * Version 1.1 VF sends this request to PF with u32 bitmap of its capabilities
+ * PF responds with an indirect message containing
+ * virtchnl_vf_resource and one or more
+ * virtchnl_vsi_resource structures.
+ */
+
+struct virtchnl_vsi_resource {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	enum virtchnl_vsi_type vsi_type;
+	u16 qset_handle;
+	u8 default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
+
+/* VF capability flags
+ * VIRTCHNL_VF_OFFLOAD_L2 flag is inclusive of base mode L2 offloads including
+ * TX/RX Checksum offloading and TSO for non-tunnelled packets.
+ */
+#define VIRTCHNL_VF_OFFLOAD_L2			0x00000001
+#define VIRTCHNL_VF_OFFLOAD_IWARP		0x00000002
+#define VIRTCHNL_VF_OFFLOAD_RSVD		0x00000004
+#define VIRTCHNL_VF_OFFLOAD_RSS_AQ		0x00000008
+#define VIRTCHNL_VF_OFFLOAD_RSS_REG		0x00000010
+#define VIRTCHNL_VF_OFFLOAD_WB_ON_ITR		0x00000020
+#define VIRTCHNL_VF_OFFLOAD_REQ_QUEUES		0x00000040
+#define VIRTCHNL_VF_OFFLOAD_VLAN		0x00010000
+#define VIRTCHNL_VF_OFFLOAD_RX_POLLING		0x00020000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2	0x00040000
+#define VIRTCHNL_VF_OFFLOAD_RSS_PF		0X00080000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP		0X00100000
+#define VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM		0X00200000
+#define VIRTCHNL_VF_OFFLOAD_RX_ENCAP_CSUM	0X00400000
+
+#define VF_BASE_MODE_OFFLOADS (VIRTCHNL_VF_OFFLOAD_L2 | \
+			       VIRTCHNL_VF_OFFLOAD_VLAN | \
+			       VIRTCHNL_VF_OFFLOAD_RSS_PF)
+
+struct virtchnl_vf_resource {
+	u16 num_vsis;
+	u16 num_queue_pairs;
+	u16 max_vectors;
+	u16 max_mtu;
+
+	u32 vf_cap_flags;
+	u32 rss_key_size;
+	u32 rss_lut_size;
+
+	struct virtchnl_vsi_resource vsi_res[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(36, virtchnl_vf_resource);
+
+/* VIRTCHNL_OP_CONFIG_TX_QUEUE
+ * VF sends this message to set up parameters for one TX queue.
+ * External data buffer contains one instance of virtchnl_txq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Tx queue config info */
+struct virtchnl_txq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u16 ring_len;		/* number of descriptors, multiple of 8 */
+	u16 headwb_enabled; /* deprecated with AVF 1.0 */
+	u64 dma_ring_addr;
+	u64 dma_headwb_addr; /* deprecated with AVF 1.0 */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_txq_info);
+
+/* VIRTCHNL_OP_CONFIG_RX_QUEUE
+ * VF sends this message to set up parameters for one RX queue.
+ * External data buffer contains one instance of virtchnl_rxq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Rx queue config info */
+struct virtchnl_rxq_info {
+	u16 vsi_id;
+	u16 queue_id;
+	u32 ring_len;		/* number of descriptors, multiple of 32 */
+	u16 hdr_size;
+	u16 splithdr_enabled; /* deprecated with AVF 1.0 */
+	u32 databuffer_size;
+	u32 max_pkt_size;
+	u32 pad1;
+	u64 dma_ring_addr;
+	enum virtchnl_rx_hsplit rx_split_pos; /* deprecated with AVF 1.0 */
+	u32 pad2;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_rxq_info);
+
+/* VIRTCHNL_OP_CONFIG_VSI_QUEUES
+ * VF sends this message to set parameters for all active TX and RX queues
+ * associated with the specified VSI.
+ * PF configures queues and returns status.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the VSI, an error is returned and no queues are configured.
+ */
+struct virtchnl_queue_pair_info {
+	/* NOTE: vsi_id and queue_id should be identical for both queues. */
+	struct virtchnl_txq_info txq;
+	struct virtchnl_rxq_info rxq;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(64, virtchnl_queue_pair_info);
+
+struct virtchnl_vsi_queue_config_info {
+	u16 vsi_id;
+	u16 num_queue_pairs;
+	u32 pad;
+	struct virtchnl_queue_pair_info qpair[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_vsi_queue_config_info);
+
+/* VIRTCHNL_OP_REQUEST_QUEUES
+ * VF sends this message to request the PF to allocate additional queues to
+ * this VF.  Each VF gets a guaranteed number of queues on init but asking for
+ * additional queues must be negotiated.  This is a best effort request as it
+ * is possible the PF does not have enough queues left to support the request.
+ * If the PF cannot support the number requested it will respond with the
+ * maximum number it is able to support.  If the request is successful, PF will
+ * then reset the VF to institute required changes.
+ */
+
+/* VF resource request */
+struct virtchnl_vf_res_request {
+	u16 num_queue_pairs;
+};
+
+/* VIRTCHNL_OP_CONFIG_IRQ_MAP
+ * VF uses this message to map vectors to queues.
+ * The rxq_map and txq_map fields are bitmaps used to indicate which queues
+ * are to be associated with the specified vector.
+ * The "other" causes are always mapped to vector 0.
+ * PF configures interrupt mapping and returns status.
+ */
+struct virtchnl_vector_map {
+	u16 vsi_id;
+	u16 vector_id;
+	u16 rxq_map;
+	u16 txq_map;
+	u16 rxitr_idx;
+	u16 txitr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_vector_map);
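+
+/* For example, a vecmap entry with vector_id = 1, rxq_map = 0x5 and
+ * txq_map = 0x1 asks the PF to map RX queues 0 and 2 and TX queue 0 of the
+ * given VSI to MSI-X vector 1.
+ */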
+
+struct virtchnl_irq_map_info {
+	u16 num_vectors;
+	struct virtchnl_vector_map vecmap[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(14, virtchnl_irq_map_info);
+
+/* VIRTCHNL_OP_ENABLE_QUEUES
+ * VIRTCHNL_OP_DISABLE_QUEUES
+ * VF sends these messages to enable or disable TX/RX queue pairs.
+ * The queues fields are bitmaps indicating which queues to act upon.
+ * (Currently, we only support 16 queues per VF, but we make the field
+ * u32 to allow for expansion.)
+ * PF performs requested action and returns status.
+ */
+struct virtchnl_queue_select {
+	u16 vsi_id;
+	u16 pad;
+	u32 rx_queues;
+	u32 tx_queues;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_select);
+
+/* VIRTCHNL_OP_ADD_ETH_ADDR
+ * VF sends this message in order to add one or more unicast or multicast
+ * address filters for the specified VSI.
+ * PF adds the filters and returns status.
+ */
+
+/* VIRTCHNL_OP_DEL_ETH_ADDR
+ * VF sends this message in order to remove one or more unicast or multicast
+ * filters for the specified VSI.
+ * PF removes the filters and returns status.
+ */
+
+struct virtchnl_ether_addr {
+	u8 addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+	u8 pad[2];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_ether_addr);
+
+struct virtchnl_ether_addr_list {
+	u16 vsi_id;
+	u16 num_elements;
+	struct virtchnl_ether_addr list[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_ether_addr_list);
+
+#ifdef VIRTCHNL_SOL_VF_SUPPORT
+/* VIRTCHNL_OP_GET_ADDNL_SOL_CONFIG
+ * VF sends this message to get the default MTU and list of additional ethernet
+ * addresses it is allowed to use.
+ * PF responds with an indirect message containing
+ * virtchnl_addnl_solaris_config with zero or more
+ * virtchnl_ether_addr structures.
+ *
+ * It is expected that this operation will only ever be needed for Solaris VFs
+ * running under a Solaris PF.
+ */
+struct virtchnl_addnl_solaris_config {
+	u16 default_mtu;
+	struct virtchnl_ether_addr_list al;
+};
+
+#endif
+/* VIRTCHNL_OP_ADD_VLAN
+ * VF sends this message to add one or more VLAN tag filters for receives.
+ * PF adds the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+/* VIRTCHNL_OP_DEL_VLAN
+ * VF sends this message to remove one or more VLAN tag filters for receives.
+ * PF removes the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+struct virtchnl_vlan_filter_list {
+	u16 vsi_id;
+	u16 num_elements;
+	u16 vlan_id[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vlan_filter_list);
+
+/* VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE
+ * VF sends VSI id and flags.
+ * PF returns status code in retval.
+ * Note: we assume that broadcast accept mode is always enabled.
+ */
+struct virtchnl_promisc_info {
+	u16 vsi_id;
+	u16 flags;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_promisc_info);
+
+#define FLAG_VF_UNICAST_PROMISC	0x00000001
+#define FLAG_VF_MULTICAST_PROMISC	0x00000002
+
+/* VIRTCHNL_OP_GET_STATS
+ * VF sends this message to request stats for the selected VSI. VF uses
+ * the virtchnl_queue_select struct to specify the VSI. The queue_id
+ * field is ignored by the PF.
+ *
+ * PF replies with struct virtchnl_eth_stats in an external buffer.
+ */
+
+struct virtchnl_eth_stats {
+	u64 rx_bytes;			/* received bytes */
+	u64 rx_unicast;			/* received unicast pkts */
+	u64 rx_multicast;		/* received multicast pkts */
+	u64 rx_broadcast;		/* received broadcast pkts */
+	u64 rx_discards;
+	u64 rx_unknown_protocol;
+	u64 tx_bytes;			/* transmitted bytes*/
+	u64 tx_unicast;			/* transmitted unicast pkts */
+	u64 tx_multicast;		/* transmitted multicast pkts */
+	u64 tx_broadcast;		/* transmitted broadcast pkts */
+	u64 tx_discards;
+	u64 tx_errors;
+};
+
+/* VIRTCHNL_OP_CONFIG_RSS_KEY
+ * VIRTCHNL_OP_CONFIG_RSS_LUT
+ * VF sends these messages to configure RSS. Only supported if both PF
+ * and VF drivers set the VIRTCHNL_VF_OFFLOAD_RSS_PF bit during
+ * configuration negotiation. If this is the case, then the RSS fields in
+ * the VF resource struct are valid.
+ * Both the key and LUT are initialized to 0 by the PF, meaning that
+ * RSS is effectively disabled until set up by the VF.
+ */
+struct virtchnl_rss_key {
+	u16 vsi_id;
+	u16 key_len;
+	u8 key[1];         /* RSS hash key, packed bytes */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key);
+
+struct virtchnl_rss_lut {
+	u16 vsi_id;
+	u16 lut_entries;
+	u8 lut[1];        /* RSS lookup table */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
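+
+/* Note: the trailing one-element arrays make virtchnl_rss_key and
+ * virtchnl_rss_lut variable-length messages; e.g. a key of key_len bytes is
+ * carried in sizeof(struct virtchnl_rss_key) + key_len - 1 bytes, which is
+ * exactly how virtchnl_vc_validate_vf_msg() below computes valid_len for
+ * these opcodes.
+ */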
+
+/* VIRTCHNL_OP_GET_RSS_HENA_CAPS
+ * VIRTCHNL_OP_SET_RSS_HENA
+ * VF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the PF sets these to all possible traffic types that the
+ * hardware supports. The VF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ */
+struct virtchnl_rss_hena {
+	u64 hena;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_rss_hena);
+
+/* VIRTCHNL_OP_EVENT
+ * PF sends this message to inform the VF driver of events that may affect it.
+ * No direct response is expected from the VF, though it may generate other
+ * messages in response to this one.
+ */
+enum virtchnl_event_codes {
+	VIRTCHNL_EVENT_UNKNOWN = 0,
+	VIRTCHNL_EVENT_LINK_CHANGE,
+	VIRTCHNL_EVENT_RESET_IMPENDING,
+	VIRTCHNL_EVENT_PF_DRIVER_CLOSE,
+};
+
+#define PF_EVENT_SEVERITY_INFO		0
+#define PF_EVENT_SEVERITY_ATTENTION	1
+#define PF_EVENT_SEVERITY_ACTION_REQUIRED	2
+#define PF_EVENT_SEVERITY_CERTAIN_DOOM	255
+
+struct virtchnl_pf_event {
+	enum virtchnl_event_codes event;
+	union {
+		struct {
+			enum virtchnl_link_speed link_speed;
+			bool link_status;
+		} link_event;
+	} event_data;
+
+	int severity;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_pf_event);
+
+#ifdef VIRTCHNL_IWARP
+
+/* VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP
+ * VF uses this message to request PF to map IWARP vectors to IWARP queues.
+ * The request for this originates from the VF IWARP driver through
+ * a client interface between VF LAN and VF IWARP driver.
+ * A vector could have an AEQ and CEQ attached to it although
+ * there is a single AEQ per VF IWARP instance in which case
+ * most vectors will have an INVALID_IDX for aeq and valid idx for ceq.
+ * There will never be a case where there will be multiple CEQs attached
+ * to a single vector.
+ * PF configures interrupt mapping and returns status.
+ */
+
+/* HW does not define a type value for AEQ; only for RX/TX and CEQ.
+ * In order for us to keep the interface simple, SW will define a
+ * unique type value for AEQ.
+ */
+#define QUEUE_TYPE_PE_AEQ  0x80
+#define QUEUE_INVALID_IDX  0xFFFF
+
+struct virtchnl_iwarp_qv_info {
+	u32 v_idx; /* msix_vector */
+	u16 ceq_idx;
+	u16 aeq_idx;
+	u8 itr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_iwarp_qv_info);
+
+struct virtchnl_iwarp_qvlist_info {
+	u32 num_vectors;
+	struct virtchnl_iwarp_qv_info qv_info[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_iwarp_qvlist_info);
+
+#endif
+
+/* VF reset states - these are written into the RSTAT register:
+ * VFGEN_RSTAT on the VF
+ * When the PF initiates a reset, it writes 0
+ * When the reset is complete, it writes 1
+ * When the PF detects that the VF has recovered, it writes 2
+ * VF checks this register periodically to determine if a reset has occurred,
+ * then polls it to know when the reset is complete.
+ * If either the PF or VF reads the register while the hardware
+ * is in a reset state, it will return DEADBEEF, which, when masked
+ * will result in 3.
+ */
+enum virtchnl_vfr_states {
+	VIRTCHNL_VFR_INPROGRESS = 0,
+	VIRTCHNL_VFR_COMPLETED,
+	VIRTCHNL_VFR_VFACTIVE,
+};
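+
+/* A minimal sketch of that poll, using the AVF base-code register helpers
+ * and AVFGEN_RSTAT definitions used elsewhere in this series:
+ *
+ *	val = AVF_READ_REG(hw, AVFGEN_RSTAT) & AVFGEN_RSTAT_VFR_STATE_MASK;
+ *	val >>= AVFGEN_RSTAT_VFR_STATE_SHIFT;
+ *	if (val == VIRTCHNL_VFR_COMPLETED || val == VIRTCHNL_VFR_VFACTIVE)
+ *		;	// reset has finished, admin queue can be reinitialized
+ */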
+
+/**
+ * virtchnl_vc_validate_vf_msg
+ * @ver: Virtchnl version info
+ * @v_opcode: Opcode for the message
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * validate msg format against struct for each opcode
+ */
+static inline int
+virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
+			    u8 *msg, u16 msglen)
+{
+	bool err_msg_format = false;
+	int valid_len = 0;
+
+	/* Validate message length. */
+	switch (v_opcode) {
+	case VIRTCHNL_OP_VERSION:
+		valid_len = sizeof(struct virtchnl_version_info);
+		break;
+	case VIRTCHNL_OP_RESET_VF:
+		break;
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		if (VF_IS_V11(ver))
+			valid_len = sizeof(u32);
+		break;
+	case VIRTCHNL_OP_CONFIG_TX_QUEUE:
+		valid_len = sizeof(struct virtchnl_txq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_RX_QUEUE:
+		valid_len = sizeof(struct virtchnl_rxq_info);
+		break;
+	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+		valid_len = sizeof(struct virtchnl_vsi_queue_config_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_vsi_queue_config_info *vqc =
+			    (struct virtchnl_vsi_queue_config_info *)msg;
+			valid_len += (vqc->num_queue_pairs *
+				      sizeof(struct
+					     virtchnl_queue_pair_info));
+			if (vqc->num_queue_pairs == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+		valid_len = sizeof(struct virtchnl_irq_map_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_irq_map_info *vimi =
+			    (struct virtchnl_irq_map_info *)msg;
+			valid_len += (vimi->num_vectors *
+				      sizeof(struct virtchnl_vector_map));
+			if (vimi->num_vectors == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ENABLE_QUEUES:
+	case VIRTCHNL_OP_DISABLE_QUEUES:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+	case VIRTCHNL_OP_ADD_ETH_ADDR:
+	case VIRTCHNL_OP_DEL_ETH_ADDR:
+		valid_len = sizeof(struct virtchnl_ether_addr_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_ether_addr_list *veal =
+			    (struct virtchnl_ether_addr_list *)msg;
+			valid_len += veal->num_elements *
+			    sizeof(struct virtchnl_ether_addr);
+			if (veal->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_ADD_VLAN:
+	case VIRTCHNL_OP_DEL_VLAN:
+		valid_len = sizeof(struct virtchnl_vlan_filter_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_vlan_filter_list *vfl =
+			    (struct virtchnl_vlan_filter_list *)msg;
+			valid_len += vfl->num_elements * sizeof(u16);
+			if (vfl->num_elements == 0)
+				err_msg_format = true;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+		valid_len = sizeof(struct virtchnl_promisc_info);
+		break;
+	case VIRTCHNL_OP_GET_STATS:
+		valid_len = sizeof(struct virtchnl_queue_select);
+		break;
+#ifdef VIRTCHNL_IWARP
+	case VIRTCHNL_OP_IWARP:
+		/* These messages are opaque to us and will be validated in
+		 * the RDMA client code. We just need to check for nonzero
+		 * length. The firmware will enforce max length restrictions.
+		 */
+		if (msglen)
+			valid_len = msglen;
+		else
+			err_msg_format = true;
+		break;
+	case VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP:
+		break;
+	case VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP:
+		valid_len = sizeof(struct virtchnl_iwarp_qvlist_info);
+		if (msglen >= valid_len) {
+			struct virtchnl_iwarp_qvlist_info *qv =
+				(struct virtchnl_iwarp_qvlist_info *)msg;
+			if (qv->num_vectors == 0) {
+				err_msg_format = true;
+				break;
+			}
+			valid_len += ((qv->num_vectors - 1) *
+				sizeof(struct virtchnl_iwarp_qv_info));
+		}
+		break;
+#endif
+	case VIRTCHNL_OP_CONFIG_RSS_KEY:
+		valid_len = sizeof(struct virtchnl_rss_key);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_key *vrk =
+				(struct virtchnl_rss_key *)msg;
+			valid_len += vrk->key_len - 1;
+		}
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_LUT:
+		valid_len = sizeof(struct virtchnl_rss_lut);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_lut *vrl =
+				(struct virtchnl_rss_lut *)msg;
+			valid_len += vrl->lut_entries - 1;
+		}
+		break;
+	case VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+		break;
+	case VIRTCHNL_OP_SET_RSS_HENA:
+		valid_len = sizeof(struct virtchnl_rss_hena);
+		break;
+	case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+	case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+		break;
+	case VIRTCHNL_OP_REQUEST_QUEUES:
+		valid_len = sizeof(struct virtchnl_vf_res_request);
+		break;
+	/* These are always errors coming from the VF. */
+	case VIRTCHNL_OP_EVENT:
+	case VIRTCHNL_OP_UNKNOWN:
+	default:
+		return VIRTCHNL_ERR_PARAM;
+	}
+	/* few more checks */
+	if (err_msg_format || valid_len != msglen)
+		return VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH;
+
+	return 0;
+}
+#endif /* _VIRTCHNL_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v7 02/14] net/avf: initialization of avf PMD
  2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Wenzhuo Lu
  2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 01/14] net/avf/base: add base code for avf PMD Wenzhuo Lu
@ 2018-01-10 13:01           ` Wenzhuo Lu
  2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 03/14] net/avf: enable queue and device Wenzhuo Lu
                             ` (12 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10 13:01 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 config/common_base                      |   5 +
 drivers/net/Makefile                    |   1 +
 drivers/net/avf/Makefile                |  47 ++++
 drivers/net/avf/avf.h                   | 187 ++++++++++++++
 drivers/net/avf/avf_ethdev.c            | 435 ++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c             | 304 ++++++++++++++++++++++
 drivers/net/avf/rte_pmd_avf_version.map |   4 +
 mk/rte.app.mk                           |   1 +
 8 files changed, 984 insertions(+)
 create mode 100644 drivers/net/avf/Makefile
 create mode 100644 drivers/net/avf/avf.h
 create mode 100644 drivers/net/avf/avf_ethdev.c
 create mode 100644 drivers/net/avf/avf_vchnl.c
 create mode 100644 drivers/net/avf/rte_pmd_avf_version.map

diff --git a/config/common_base b/config/common_base
index e74febe..f333209 100644
--- a/config/common_base
+++ b/config/common_base
@@ -226,6 +226,11 @@ CONFIG_RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y
 CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 
 #
+# Compile burst-oriented AVF PMD driver
+#
+CONFIG_RTE_LIBRTE_AVF_PMD=y
+
+#
 # Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD
 #
 CONFIG_RTE_LIBRTE_MLX4_PMD=n
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 84b137f..c2fd7f5 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -10,6 +10,7 @@ endif
 
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += af_packet
 DIRS-$(CONFIG_RTE_LIBRTE_ARK_PMD) += ark
+DIRS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf
 DIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp
 DIRS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD) += bnx2x
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
new file mode 100644
index 0000000..2376cfd
--- /dev/null
+++ b/drivers/net/avf/Makefile
@@ -0,0 +1,47 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2017 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_avf.a
+
+CFLAGS += -O3
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
+LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_hash
+LDLIBS += -lrte_bus_pci
+
+EXPORT_MAP := rte_pmd_avf_version.map
+
+LIBABIVER := 1
+
+#
+# Add extra flags for base driver files (also known as shared code)
+# to disable warnings
+#
+ifeq ($(CONFIG_RTE_TOOLCHAIN_ICC),y)
+CFLAGS_BASE_DRIVER =
+else ifeq ($(CONFIG_RTE_TOOLCHAIN_CLANG),y)
+CFLAGS_BASE_DRIVER = -Wno-pointer-to-int-cast
+else
+CFLAGS_BASE_DRIVER = -Wno-pointer-to-int-cast
+
+endif
+OBJS_BASE_DRIVER=$(sort $(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c))))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_adminq.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
new file mode 100644
index 0000000..4694cc5
--- /dev/null
+++ b/drivers/net/avf/avf.h
@@ -0,0 +1,187 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_ETHDEV_H_
+#define _AVF_ETHDEV_H_
+
+#include <rte_kvargs.h>
+
+#define AVF_AQ_LEN               32
+#define AVF_AQ_BUF_SZ            4096
+#define AVF_RESET_WAIT_CNT       50
+#define AVF_BUF_SIZE_MIN         1024
+#define AVF_FRAME_SIZE_MAX       9728
+#define AVF_QUEUE_BASE_ADDR_UNIT 128
+
+#define AVF_MAX_NUM_QUEUES       16
+/* Vlan table size */
+#define AVF_VLAN_TB_SIZE               (4096 / (CHAR_BIT * sizeof(uint32_t)))
+
+#define AVF_NUM_MACADDR_MAX      64
+
+#define AVF_DEFAULT_RX_PTHRESH      8
+#define AVF_DEFAULT_RX_HTHRESH      8
+#define AVF_DEFAULT_RX_WTHRESH      0
+
+#define AVF_DEFAULT_RX_FREE_THRESH  32
+
+#define AVF_DEFAULT_TX_PTHRESH      32
+#define AVF_DEFAULT_TX_HTHRESH      0
+#define AVF_DEFAULT_TX_WTHRESH      0
+
+#define AVF_DEFAULT_TX_FREE_THRESH  32
+#define AVF_DEFAULT_TX_RS_THRESH 32
+
+#define AVF_BASIC_OFFLOAD_CAPS  ( \
+	VF_BASE_MODE_OFFLOADS | \
+	VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | \
+	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
+
+#define AVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
+#define AVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
+
+/* Default queue interrupt throttling time in microseconds */
+#define AVF_ITR_INDEX_DEFAULT          0
+#define AVF_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */
+#define AVF_QUEUE_ITR_INTERVAL_MAX     8160 /* 8160 us */
+
+/* The overhead from MTU to max frame size.
+ * Considering QinQ packet, the VLAN tag needs to be counted twice.
+ */
+#define AVF_VLAN_TAG_SIZE               4
+#define AVF_ETH_OVERHEAD \
+	(ETHER_HDR_LEN + ETHER_CRC_LEN + AVF_VLAN_TAG_SIZE * 2)
+
+struct avf_adapter;
+struct avf_rx_queue;
+struct avf_tx_queue;
+
+/* Structure that defines a VSI, associated with an adapter. */
+struct avf_vsi {
+	struct avf_adapter *adapter; /* Backreference to associated adapter */
+	uint16_t vsi_id;
+	uint16_t nb_qps;         /* Number of queue pairs VSI can occupy */
+	uint16_t nb_used_qps;    /* Number of queue pairs VSI uses */
+	uint16_t max_macaddrs;   /* Maximum number of MAC addresses */
+	uint16_t base_vector;
+	uint16_t msix_intr;      /* The MSIX interrupt binds to VSI */
+};
+
+/* TODO: is it correct to assume the max number is 16? */
+#define AVF_MAX_MSIX_VECTORS   16
+
+/* Structure to store private data specific for VF instance. */
+struct avf_info {
+	uint16_t num_queue_pairs;
+	uint16_t max_pkt_len; /* Maximum packet length */
+	uint16_t mac_num;     /* Number of MAC addresses */
+	uint32_t vlan[AVF_VLAN_TB_SIZE]; /* VLAN bit map */
+	bool promisc_unicast_enabled;
+	bool promisc_multicast_enabled;
+
+	struct virtchnl_version_info virtchnl_version;
+	struct virtchnl_vf_resource *vf_res; /* VF resource */
+	struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
+
+	volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
+	uint32_t cmd_retval; /* return value of the cmd response from PF */
+	uint8_t *aq_resp; /* buffer to store the adminq response from PF */
+
+	/* Event from pf */
+	bool dev_closed;
+	bool link_up;
+	enum virtchnl_link_speed link_speed;
+
+	struct avf_vsi vsi;
+	bool vf_reset;
+	uint64_t flags;
+
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	uint16_t nb_msix;   /* number of MSI-X interrupts on Rx */
+	uint16_t msix_base; /* msix vector base */
+	/* queue bitmask for each vector */
+	uint16_t rxq_map[AVF_MAX_MSIX_VECTORS];
+};
+
+#define AVF_MAX_PKT_TYPE 256
+
+/* Structure to store private data for each VF instance. */
+struct avf_adapter {
+	struct avf_hw hw;
+	struct rte_eth_dev *eth_dev;
+	struct avf_info vf;
+};
+
+/* AVF_DEV_PRIVATE_TO */
+#define AVF_DEV_PRIVATE_TO_ADAPTER(adapter) \
+	((struct avf_adapter *)adapter)
+#define AVF_DEV_PRIVATE_TO_VF(adapter) \
+	(&((struct avf_adapter *)adapter)->vf)
+#define AVF_DEV_PRIVATE_TO_HW(adapter) \
+	(&((struct avf_adapter *)adapter)->hw)
+
+/* AVF_VSI_TO */
+#define AVF_VSI_TO_HW(vsi) \
+	(&(((struct avf_vsi *)vsi)->adapter->hw))
+#define AVF_VSI_TO_VF(vsi) \
+	(&(((struct avf_vsi *)vsi)->adapter->vf))
+#define AVF_VSI_TO_ETH_DEV(vsi) \
+	(((struct avf_vsi *)vsi)->adapter->eth_dev)
+
+static inline void
+avf_init_adminq_parameter(struct avf_hw *hw)
+{
+	hw->aq.num_arq_entries = AVF_AQ_LEN;
+	hw->aq.num_asq_entries = AVF_AQ_LEN;
+	hw->aq.arq_buf_size = AVF_AQ_BUF_SZ;
+	hw->aq.asq_buf_size = AVF_AQ_BUF_SZ;
+}
+
+static inline uint16_t
+avf_calc_itr_interval(int16_t interval)
+{
+	if (interval < 0 || interval > AVF_QUEUE_ITR_INTERVAL_MAX)
+		interval = AVF_QUEUE_ITR_INTERVAL_DEFAULT;
+
+	/* Convert to hardware count, as writing each 1 represents 2 us */
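+	/* e.g. the 32 us default interval is written as a register count of 16 */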
+	return interval / 2;
+}
+
+/* structure used for sending and checking response of virtchnl ops */
+struct avf_cmd_info {
+	enum virtchnl_ops ops;
+	uint8_t *in_args;       /* buffer for sending */
+	uint32_t in_args_size;  /* buffer size for sending */
+	uint8_t *out_buffer;    /* buffer for response */
+	uint32_t out_size;      /* buffer size for response */
+};
+
+/* Clear the current command. Only call this after
+ * _atomic_set_cmd has succeeded.
+ */
+static inline void
+_clear_cmd(struct avf_info *vf)
+{
+	rte_wmb();
+	vf->pend_cmd = VIRTCHNL_OP_UNKNOWN;
+	vf->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
+}
+
+/* Check whether a command is still pending. If none, set the new command. */
+static inline int
+_atomic_set_cmd(struct avf_info *vf, enum virtchnl_ops ops)
+{
+	int ret = rte_atomic32_cmpset(&vf->pend_cmd, VIRTCHNL_OP_UNKNOWN, ops);
+
+	if (!ret)
+		PMD_DRV_LOG(ERR, "There is incomplete cmd %d", vf->pend_cmd);
+
+	return !ret;
+}
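+
+/* Typical usage, as done by avf_execute_vf_cmd() in avf_vchnl.c (sketch):
+ *
+ *	if (_atomic_set_cmd(vf, args->ops))
+ *		return -1;	// a previous command is still outstanding
+ *	avf_aq_send_msg_to_pf(hw, args->ops, ...);
+ *	// wait for the PF reply; the response path (or the caller on error)
+ *	// then calls _clear_cmd(vf) so the next command can be issued.
+ */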
+
+int avf_check_api_version(struct avf_adapter *adapter);
+int avf_get_vf_resource(struct avf_adapter *adapter);
+void avf_handle_virtchnl_msg(struct rte_eth_dev *dev);
+#endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
new file mode 100644
index 0000000..0ed6e1c
--- /dev/null
+++ b/drivers/net/avf/avf_ethdev.c
@@ -0,0 +1,435 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <sys/queue.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+
+#include <rte_interrupts.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_ethdev_pci.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_adminq_cmd.h"
+#include "base/avf_type.h"
+
+#include "avf.h"
+
+int avf_logtype_init;
+int avf_logtype_driver;
+static const struct rte_pci_id pci_id_avf_map[] = {
+	{ RTE_PCI_DEVICE(AVF_INTEL_VENDOR_ID, AVF_DEV_ID_ADAPTIVE_VF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static const struct eth_dev_ops avf_eth_dev_ops = {
+};
+
+static int
+avf_check_vf_reset_done(struct avf_hw *hw)
+{
+	int i, reset;
+
+	for (i = 0; i < AVF_RESET_WAIT_CNT; i++) {
+		reset = AVF_READ_REG(hw, AVFGEN_RSTAT) &
+			AVFGEN_RSTAT_VFR_STATE_MASK;
+		reset = reset >> AVFGEN_RSTAT_VFR_STATE_SHIFT;
+		if (reset == VIRTCHNL_VFR_VFACTIVE ||
+		    reset == VIRTCHNL_VFR_COMPLETED)
+			break;
+		rte_delay_ms(20);
+	}
+
+	if (i >= AVF_RESET_WAIT_CNT)
+		return -1;
+
+	return 0;
+}
+
+static int
+avf_init_vf(struct rte_eth_dev *dev)
+{
+	int i, err, bufsz;
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	err = avf_set_mac_type(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
+		goto err;
+	}
+
+	err = avf_check_vf_reset_done(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "VF is still resetting");
+		goto err;
+	}
+
+	avf_init_adminq_parameter(hw);
+	err = avf_init_adminq(hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "init_adminq failed: %d", err);
+		goto err;
+	}
+
+	vf->aq_resp = rte_zmalloc("vf_aq_resp", AVF_AQ_BUF_SZ, 0);
+	if (!vf->aq_resp) {
+		PMD_INIT_LOG(ERR, "unable to allocate vf_aq_resp memory");
+		goto err_aq;
+	}
+	if (avf_check_api_version(adapter) != 0) {
+		PMD_INIT_LOG(ERR, "check_api version failed");
+		goto err_api;
+	}
+
+	bufsz = sizeof(struct virtchnl_vf_resource) +
+		(AVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource));
+	vf->vf_res = rte_zmalloc("vf_res", bufsz, 0);
+	if (!vf->vf_res) {
+		PMD_INIT_LOG(ERR, "unable to allocate vf_res memory");
+		goto err_api;
+	}
+	if (avf_get_vf_resource(adapter) != 0) {
+		PMD_INIT_LOG(ERR, "avf_get_vf_config failed");
+		goto err_alloc;
+	}
+	/* Allocate memory for RSS info */
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		vf->rss_key = rte_zmalloc("rss_key",
+					  vf->vf_res->rss_key_size, 0);
+		if (!vf->rss_key) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_key memory");
+			goto err_rss;
+		}
+		vf->rss_lut = rte_zmalloc("rss_lut",
+					  vf->vf_res->rss_lut_size, 0);
+		if (!vf->rss_lut) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_lut memory");
+			goto err_rss;
+		}
+	}
+	return 0;
+err_rss:
+	rte_free(vf->rss_key);
+	rte_free(vf->rss_lut);
+err_alloc:
+	rte_free(vf->vf_res);
+	vf->vsi_res = NULL;
+err_api:
+	rte_free(vf->aq_resp);
+err_aq:
+	avf_shutdown_adminq(hw);
+err:
+	return -1;
+}
+
+/* Enable default admin queue interrupt setting */
+static inline void
+avf_enable_irq0(struct avf_hw *hw)
+{
+	/* Enable admin queue interrupt trigger */
+	AVF_WRITE_REG(hw, AVFINT_ICR0_ENA1, AVFINT_ICR0_ENA1_ADMINQ_MASK);
+
+	AVF_WRITE_REG(hw, AVFINT_DYN_CTL01, AVFINT_DYN_CTL01_INTENA_MASK |
+					    AVFINT_DYN_CTL01_ITR_INDX_MASK);
+
+	AVF_WRITE_FLUSH(hw);
+}
+
+static inline void
+avf_disable_irq0(struct avf_hw *hw)
+{
+	/* Disable all interrupt types */
+	AVF_WRITE_REG(hw, AVFINT_ICR0_ENA1, 0);
+	AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+		      AVFINT_DYN_CTL01_ITR_INDX_MASK);
+	AVF_WRITE_FLUSH(hw);
+}
+
+static void
+avf_dev_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	avf_disable_irq0(hw);
+
+	avf_handle_virtchnl_msg(dev);
+
+done:
+	avf_enable_irq0(hw);
+}
+
+static int
+avf_dev_init(struct rte_eth_dev *eth_dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* assign ops func pointer */
+	eth_dev->dev_ops = &avf_eth_dev_ops;
+
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+	hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	hw->bus.bus_id = pci_dev->addr.bus;
+	hw->bus.device = pci_dev->addr.devid;
+	hw->bus.func = pci_dev->addr.function;
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->back = AVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+	adapter->eth_dev = eth_dev;
+
+	if (avf_init_vf(eth_dev) != 0) {
+		PMD_INIT_LOG(ERR, "Init vf failed");
+		return -1;
+	}
+
+	/* copy mac addr */
+	eth_dev->data->mac_addrs = rte_zmalloc(
+					"avf_mac",
+					ETHER_ADDR_LEN * AVF_NUM_MACADDR_MAX,
+					0);
+	if (!eth_dev->data->mac_addrs) {
+		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to"
+			     " store MAC addresses",
+			     ETHER_ADDR_LEN * AVF_NUM_MACADDR_MAX);
+		return -ENOMEM;
+	}
+	/* If the MAC address is not configured by host,
+	 * generate a random one.
+	 */
+	if (!is_valid_assigned_ether_addr((struct ether_addr *)hw->mac.addr))
+		eth_random_addr(hw->mac.addr);
+	ether_addr_copy((struct ether_addr *)hw->mac.addr,
+			&eth_dev->data->mac_addrs[0]);
+
+	/* register callback func to eal lib */
+	rte_intr_callback_register(&pci_dev->intr_handle,
+				   avf_dev_interrupt_handler,
+				   (void *)eth_dev);
+
+	/* enable uio intr after callback register */
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	/* configure and enable device interrupt */
+	avf_enable_irq0(hw);
+
+	return 0;
+}
+
+static void
+avf_dev_close(struct rte_eth_dev *dev)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+
+	avf_shutdown_adminq(hw);
+	/* disable uio intr before callback unregister */
+	rte_intr_disable(intr_handle);
+
+	/* unregister callback func from eal lib */
+	rte_intr_callback_unregister(intr_handle,
+				     avf_dev_interrupt_handler, dev);
+	avf_disable_irq0(hw);
+}
+
+static int
+avf_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -EPERM;
+
+	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+	if (hw->adapter_stopped == 0)
+		avf_dev_close(dev);
+
+	rte_free(vf->vf_res);
+	vf->vsi_res = NULL;
+	vf->vf_res = NULL;
+
+	rte_free(vf->aq_resp);
+	vf->aq_resp = NULL;
+
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
+	if (vf->rss_lut) {
+		rte_free(vf->rss_lut);
+		vf->rss_lut = NULL;
+	}
+	if (vf->rss_key) {
+		rte_free(vf->rss_key);
+		vf->rss_key = NULL;
+	}
+
+	return 0;
+}
+
+static int eth_avf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+			     struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+		sizeof(struct avf_adapter), avf_dev_init);
+}
+
+static int eth_avf_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, avf_dev_uninit);
+}
+
+/* Adaptive virtual function driver struct */
+static struct rte_pci_driver rte_avf_pmd = {
+	.id_table = pci_id_avf_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA,
+	.probe = eth_avf_pci_probe,
+	.remove = eth_avf_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_avf, rte_avf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_avf, pci_id_avf_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_avf, "* igb_uio | vfio-pci");
+RTE_INIT(avf_init_log);
+static void
+avf_init_log(void)
+{
+	avf_logtype_init = rte_log_register("pmd.avf.init");
+	if (avf_logtype_init >= 0)
+		rte_log_set_level(avf_logtype_init, RTE_LOG_NOTICE);
+	avf_logtype_driver = rte_log_register("pmd.avf.driver");
+	if (avf_logtype_driver >= 0)
+		rte_log_set_level(avf_logtype_driver, RTE_LOG_NOTICE);
+}
+
+/* memory func for base code */
+enum avf_status_code
+avf_allocate_dma_mem_d(__rte_unused struct avf_hw *hw,
+		       struct avf_dma_mem *mem,
+		       u64 size,
+		       u32 alignment)
+{
+	const struct rte_memzone *mz = NULL;
+	char z_name[RTE_MEMZONE_NAMESIZE];
+
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	snprintf(z_name, sizeof(z_name), "avf_dma_%"PRIu64, rte_rand());
+	mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY, 0,
+					 alignment, RTE_PGSIZE_2M);
+	if (!mz)
+		return AVF_ERR_NO_MEMORY;
+
+	mem->size = size;
+	mem->va = mz->addr;
+	mem->pa = mz->phys_addr;
+	mem->zone = (const void *)mz;
+	PMD_DRV_LOG(DEBUG,
+		    "memzone %s allocated with physical address: %"PRIu64,
+		    mz->name, mem->pa);
+
+	return AVF_SUCCESS;
+}
+
+enum avf_status_code
+avf_free_dma_mem_d(__rte_unused struct avf_hw *hw,
+		   struct avf_dma_mem *mem)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	PMD_DRV_LOG(DEBUG,
+		    "memzone %s to be freed with physical address: %"PRIu64,
+		    ((const struct rte_memzone *)mem->zone)->name, mem->pa);
+	rte_memzone_free((const struct rte_memzone *)mem->zone);
+	mem->zone = NULL;
+	mem->va = NULL;
+	mem->pa = (u64)0;
+
+	return AVF_SUCCESS;
+}
+
+enum avf_status_code
+avf_allocate_virt_mem_d(__rte_unused struct avf_hw *hw,
+			struct avf_virt_mem *mem,
+			u32 size)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	mem->size = size;
+	mem->va = rte_zmalloc("avf", size, 0);
+
+	if (mem->va)
+		return AVF_SUCCESS;
+	else
+		return AVF_ERR_NO_MEMORY;
+}
+
+enum avf_status_code
+avf_free_virt_mem_d(__rte_unused struct avf_hw *hw,
+		    struct avf_virt_mem *mem)
+{
+	if (!mem)
+		return AVF_ERR_PARAM;
+
+	rte_free(mem->va);
+	mem->va = NULL;
+
+	return AVF_SUCCESS;
+}
+
+/* spinlock func for base code */
+void
+avf_init_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_init(&sp->spinlock);
+}
+
+void
+avf_acquire_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_lock(&sp->spinlock);
+}
+
+void
+avf_release_spinlock_d(struct avf_spinlock *sp)
+{
+	rte_spinlock_unlock(&sp->spinlock);
+}
+
+void
+avf_destroy_spinlock_d(__rte_unused struct avf_spinlock *sp)
+{
+}
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
new file mode 100644
index 0000000..ebbee31
--- /dev/null
+++ b/drivers/net/avf/avf_vchnl.c
@@ -0,0 +1,304 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+
+#include <rte_debug.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_dev.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_adminq_cmd.h"
+#include "base/avf_type.h"
+
+#include "avf.h"
+
+#define MAX_TRY_TIMES 200
+#define ASQ_DELAY_MS  10
+
+/* Read data in admin queue to get msg from pf driver */
+static enum avf_status_code
+avf_read_msg_from_pf(struct avf_adapter *adapter, uint16_t buf_len,
+		     uint8_t *buf)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_arq_event_info event;
+	enum virtchnl_ops opcode;
+	int ret;
+
+	event.buf_len = buf_len;
+	event.msg_buf = buf;
+	ret = avf_clean_arq_element(hw, &event, NULL);
+	/* Can't read any msg from adminQ */
+	if (ret) {
+		PMD_DRV_LOG(DEBUG, "Can't read msg from AQ");
+		return ret;
+	}
+
+	opcode = (enum virtchnl_ops)rte_le_to_cpu_32(event.desc.cookie_high);
+	vf->cmd_retval = (enum virtchnl_status_code)rte_le_to_cpu_32(
+			event.desc.cookie_low);
+
+	PMD_DRV_LOG(DEBUG, "AQ from pf carries opcode %u, retval %d",
+		    opcode, vf->cmd_retval);
+
+	if (opcode != vf->pend_cmd)
+		PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
+			    vf->pend_cmd, opcode);
+
+	return AVF_SUCCESS;
+}
+
+static int
+avf_execute_vf_cmd(struct avf_adapter *adapter, struct avf_cmd_info *args)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_arq_event_info event_info;
+	enum avf_status_code ret;
+	int err = 0;
+	int i = 0;
+
+	if (_atomic_set_cmd(vf, args->ops))
+		return -1;
+
+	ret = avf_aq_send_msg_to_pf(hw, args->ops, AVF_SUCCESS,
+				    args->in_args, args->in_args_size, NULL);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops);
+		_clear_cmd(vf);
+		return err;
+	}
+
+	switch (args->ops) {
+	case VIRTCHNL_OP_RESET_VF:
+		/*no need to wait for response */
+		_clear_cmd(vf);
+		break;
+	case VIRTCHNL_OP_VERSION:
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		/* for init virtchnl ops, need to poll the response */
+		do {
+			ret = avf_read_msg_from_pf(adapter, args->out_size,
+						   args->out_buffer);
+			if (ret == AVF_SUCCESS)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+		} while (i++ < MAX_TRY_TIMES);
+		if (i >= MAX_TRY_TIMES ||
+		    vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -1;
+			PMD_DRV_LOG(ERR, "No response or return failure (%d)"
+				    " for cmd %d", vf->cmd_retval, args->ops);
+		}
+		_clear_cmd(vf);
+		break;
+
+	default:
+		/* For other virtchnl ops at runtime,
+		 * wait for the command-done flag.
+		 */
+		do {
+			if (vf->pend_cmd == VIRTCHNL_OP_UNKNOWN)
+				break;
+			rte_delay_ms(ASQ_DELAY_MS);
+			/* If no msg or only a system event is read, keep polling */
+		} while (i++ < MAX_TRY_TIMES);
+		/* If no response is received, clear the command */
+		if (i >= MAX_TRY_TIMES  ||
+		    vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+			err = -1;
+			PMD_DRV_LOG(ERR, "No response or return failure (%d)"
+				    " for cmd %d", vf->cmd_retval, args->ops);
+			_clear_cmd(vf);
+		}
+		break;
+	}
+
+	return err;
+}
+
+void
+avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_arq_event_info info;
+	uint16_t pending, aq_opc;
+	enum virtchnl_ops msg_opc;
+	enum avf_status_code msg_ret;
+	int ret;
+
+	info.buf_len = AVF_AQ_BUF_SZ;
+	if (!vf->aq_resp) {
+		PMD_DRV_LOG(ERR, "Buffer for adminq resp should not be NULL");
+		return;
+	}
+	info.msg_buf = vf->aq_resp;
+
+	pending = 1;
+	while (pending) {
+		ret = avf_clean_arq_element(hw, &info, &pending);
+
+		if (ret != AVF_SUCCESS) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from AdminQ, "
+				    "ret: %d", ret);
+			break;
+		}
+		aq_opc = rte_le_to_cpu_16(info.desc.opcode);
+		/* For messages sent from PF to VF, the opcode is stored in
+		 * cookie_high of struct avf_aq_desc, while the return error
+		 * code is stored in cookie_low; this is done by the PF driver.
+		 */
+		msg_opc = (enum virtchnl_ops)rte_le_to_cpu_32(
+						  info.desc.cookie_high);
+		msg_ret = (enum avf_status_code)rte_le_to_cpu_32(
+						  info.desc.cookie_low);
+		switch (aq_opc) {
+		case avf_aqc_opc_send_msg_to_vf:
+			if (msg_opc == VIRTCHNL_OP_EVENT) {
+				/* TODO */
+			} else {
+				/* the message read is the expected one */
+				if (msg_opc == vf->pend_cmd) {
+					vf->cmd_retval = msg_ret;
+					/* prevent compiler reordering */
+					rte_compiler_barrier();
+					_clear_cmd(vf);
+				} else
+					PMD_DRV_LOG(ERR, "command mismatch, "
+						    "expect %u, get %u",
+						    vf->pend_cmd, msg_opc);
+				PMD_DRV_LOG(DEBUG,
+					    "adminq response is received,"
+					    " opcode = %d", msg_opc);
+			}
+			break;
+		default:
+			PMD_DRV_LOG(ERR, "Request %u is not supported yet",
+				    aq_opc);
+			break;
+		}
+	}
+}
+
+#define VIRTCHNL_VERSION_MAJOR_START 1
+#define VIRTCHNL_VERSION_MINOR_START 1
+
+/* Check API version, waiting until the version is read from the admin queue */
+int
+avf_check_api_version(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_version_info version, *pver;
+	struct avf_cmd_info args;
+	int err;
+
+	version.major = VIRTCHNL_VERSION_MAJOR;
+	version.minor = VIRTCHNL_VERSION_MINOR;
+
+	args.ops = VIRTCHNL_OP_VERSION;
+	args.in_args = (uint8_t *)&version;
+	args.in_args_size = sizeof(version);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Fail to execute command of OP_VERSION");
+		return err;
+	}
+
+	pver = (struct virtchnl_version_info *)args.out_buffer;
+	vf->virtchnl_version = *pver;
+
+	if (vf->virtchnl_version.major < VIRTCHNL_VERSION_MAJOR_START ||
+	    (vf->virtchnl_version.major == VIRTCHNL_VERSION_MAJOR_START &&
+	     vf->virtchnl_version.minor < VIRTCHNL_VERSION_MINOR_START)) {
+		PMD_INIT_LOG(ERR, "VIRTCHNL API version should not be lower"
+			     " than (%u.%u) to support Adaptive VF",
+			     VIRTCHNL_VERSION_MAJOR_START,
+			     VIRTCHNL_VERSION_MINOR_START);
+		return -1;
+	} else if (vf->virtchnl_version.major > VIRTCHNL_VERSION_MAJOR ||
+		   (vf->virtchnl_version.major == VIRTCHNL_VERSION_MAJOR &&
+		    vf->virtchnl_version.minor > VIRTCHNL_VERSION_MINOR)) {
+		PMD_INIT_LOG(ERR, "PF/VF API version mismatch:(%u.%u)-(%u.%u)",
+			     vf->virtchnl_version.major,
+			     vf->virtchnl_version.minor,
+			     VIRTCHNL_VERSION_MAJOR,
+			     VIRTCHNL_VERSION_MINOR);
+		return -1;
+	}
+
+	PMD_DRV_LOG(DEBUG, "Peer is a supported PF host");
+	return 0;
+}
+
+int
+avf_get_vf_resource(struct avf_adapter *adapter)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	uint32_t caps, len;
+	int err, i;
+
+	args.ops = VIRTCHNL_OP_GET_VF_RESOURCES;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	/* TODO: basic offload capabilities, need to
+	 * add advanced/optional offload capabilities
+	 */
+
+	caps = AVF_BASIC_OFFLOAD_CAPS;
+
+	args.in_args = (uint8_t *)&caps;
+	args.in_args_size = sizeof(caps);
+
+	err = avf_execute_vf_cmd(adapter, &args);
+
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to execute command of "
+				 "OP_GET_VF_RESOURCE");
+		return -1;
+	}
+
+	len =  sizeof(struct virtchnl_vf_resource) +
+		      AVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource);
+
+	rte_memcpy(vf->vf_res, args.out_buffer,
+		   RTE_MIN(args.out_size, len));
+	/* parse VF config message back from PF */
+	avf_parse_hw_config(hw, vf->vf_res);
+	for (i = 0; i < vf->vf_res->num_vsis; i++) {
+		if (vf->vf_res->vsi_res[i].vsi_type == VIRTCHNL_VSI_SRIOV)
+			vf->vsi_res = &vf->vf_res->vsi_res[i];
+	}
+
+	if (!vf->vsi_res) {
+		PMD_INIT_LOG(ERR, "no LAN VSI found");
+		return -1;
+	}
+
+	vf->vsi.vsi_id = vf->vsi_res->vsi_id;
+	vf->vsi.nb_qps = vf->vsi_res->num_queue_pairs;
+	vf->vsi.adapter = adapter;
+
+	return 0;
+}
diff --git a/drivers/net/avf/rte_pmd_avf_version.map b/drivers/net/avf/rte_pmd_avf_version.map
new file mode 100644
index 0000000..179140f
--- /dev/null
+++ b/drivers/net/avf/rte_pmd_avf_version.map
@@ -0,0 +1,4 @@
+DPDK_18.02 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 6a6a745..78f23c5 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -119,6 +119,7 @@ _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK)  += -lrte_mempool_stack
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ARK_PMD)        += -lrte_pmd_ark
+_LDLIBS-$(CONFIG_RTE_LIBRTE_AVF_PMD)        += -lrte_pmd_avf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_AVP_PMD)        += -lrte_pmd_avp
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD)      += -lrte_pmd_bnx2x -lz
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD)       += -lrte_pmd_bnxt
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v7 03/14] net/avf: enable queue and device
  2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Wenzhuo Lu
  2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 01/14] net/avf/base: add base code for avf PMD Wenzhuo Lu
  2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 02/14] net/avf: initialization of " Wenzhuo Lu
@ 2018-01-10 13:01           ` Wenzhuo Lu
  2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 04/14] net/avf: enable basic Rx Tx func Wenzhuo Lu
                             ` (11 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10 13:01 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Enable device and queue setup ops like the following (see the usage sketch
after this list):

 - dev_configure
 - dev_start
 - dev_stop
 - dev_close
 - dev_infos_get
 - rx_queue_start
 - rx_queue_stop
 - tx_queue_start
 - tx_queue_stop
 - rx_queue_setup
 - rx_queue_release
 - tx_queue_setup
 - tx_queue_release
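
A rough sketch (not part of this patch) of how an application exercises these
ops through the ethdev API, assuming a port bound to this PMD and an existing
mbuf mempool "mb_pool":

	struct rte_eth_conf conf = { .rxmode = { .mq_mode = ETH_MQ_RX_NONE } };
	int socket = rte_eth_dev_socket_id(port_id);

	rte_eth_dev_configure(port_id, 1, 1, &conf);           /* dev_configure */
	rte_eth_rx_queue_setup(port_id, 0, 512, socket,        /* rx_queue_setup */
			       NULL, mb_pool);
	rte_eth_tx_queue_setup(port_id, 0, 512, socket, NULL); /* tx_queue_setup */
	rte_eth_dev_start(port_id);                            /* dev_start */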

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/avf/Makefile     |   1 +
 drivers/net/avf/avf.h        |  18 ++
 drivers/net/avf/avf_ethdev.c | 366 +++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.c   | 616 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h   | 160 +++++++++++
 drivers/net/avf/avf_vchnl.c  | 359 ++++++++++++++++++++++++-
 6 files changed, 1518 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/avf/avf_rxtx.c
 create mode 100644 drivers/net/avf/avf_rxtx.h

diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index 2376cfd..e172bf5 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -43,5 +43,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_rxtx.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 4694cc5..22886d4 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -38,6 +38,13 @@
 	VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | \
 	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
 
+#define AVF_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_FRAG_IPV4 |         \
+	ETH_RSS_NONFRAG_IPV4_TCP |  \
+	ETH_RSS_NONFRAG_IPV4_UDP |  \
+	ETH_RSS_NONFRAG_IPV4_SCTP | \
+	ETH_RSS_NONFRAG_IPV4_OTHER)
+
 #define AVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
 #define AVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
 
@@ -184,4 +191,15 @@ struct avf_cmd_info {
 int avf_check_api_version(struct avf_adapter *adapter);
 int avf_get_vf_resource(struct avf_adapter *adapter);
 void avf_handle_virtchnl_msg(struct rte_eth_dev *dev);
+int avf_enable_vlan_strip(struct avf_adapter *adapter);
+int avf_disable_vlan_strip(struct avf_adapter *adapter);
+int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
+		     bool rx, bool on);
+int avf_enable_queues(struct avf_adapter *adapter);
+int avf_disable_queues(struct avf_adapter *adapter);
+int avf_configure_rss_lut(struct avf_adapter *adapter);
+int avf_configure_rss_key(struct avf_adapter *adapter);
+int avf_configure_queues(struct avf_adapter *adapter);
+int avf_config_irq_map(struct avf_adapter *adapter);
+void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 0ed6e1c..c53f00e 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -31,6 +31,14 @@
 #include "base/avf_type.h"
 
 #include "avf.h"
+#include "avf_rxtx.h"
+
+static int avf_dev_configure(struct rte_eth_dev *dev);
+static int avf_dev_start(struct rte_eth_dev *dev);
+static void avf_dev_stop(struct rte_eth_dev *dev);
+static void avf_dev_close(struct rte_eth_dev *dev);
+static void avf_dev_info_get(struct rte_eth_dev *dev,
+			     struct rte_eth_dev_info *dev_info);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -40,9 +48,366 @@
 };
 
 static const struct eth_dev_ops avf_eth_dev_ops = {
+	.dev_configure              = avf_dev_configure,
+	.dev_start                  = avf_dev_start,
+	.dev_stop                   = avf_dev_stop,
+	.dev_close                  = avf_dev_close,
+	.dev_infos_get              = avf_dev_info_get,
+	.rx_queue_start             = avf_dev_rx_queue_start,
+	.rx_queue_stop              = avf_dev_rx_queue_stop,
+	.tx_queue_start             = avf_dev_tx_queue_start,
+	.tx_queue_stop              = avf_dev_tx_queue_stop,
+	.rx_queue_setup             = avf_dev_rx_queue_setup,
+	.rx_queue_release           = avf_dev_rx_queue_release,
+	.tx_queue_setup             = avf_dev_tx_queue_setup,
+	.tx_queue_release           = avf_dev_tx_queue_release,
 };
 
 static int
+avf_dev_configure(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(ad);
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+
+	/* Vlan stripping setting */
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) {
+		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			avf_enable_vlan_strip(ad);
+		else
+			avf_disable_vlan_strip(ad);
+	}
+	return 0;
+}
+
+static int
+avf_init_rss(struct avf_adapter *adapter)
+{
+	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_eth_rss_conf *rss_conf;
+	uint8_t i, j, nb_q;
+	int ret;
+
+	rss_conf = &adapter->eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = RTE_MIN(adapter->eth_dev->data->nb_rx_queues,
+		       AVF_MAX_NUM_QUEUES);
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+	if (adapter->eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
+		/* set all lut items to default queue */
+		for (i = 0; i < vf->vf_res->rss_lut_size; i++)
+			vf->rss_lut[i] = 0;
+		ret = avf_configure_rss_lut(adapter);
+		return ret;
+	}
+
+	/* In AVF, RSS enablement is set by PF driver. It is not supported
+	 * to set based on rss_conf->rss_hf.
+	 */
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key) {
+		/* Calculate the default hash key */
+		for (i = 0; i < vf->vf_res->rss_key_size; i++)
+			vf->rss_key[i] = (uint8_t)rte_rand();
+	} else
+		rte_memcpy(vf->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   vf->vf_res->rss_key_size));
+
+	/* init RSS LUT table */
+	for (i = 0, j = 0; i < vf->vf_res->rss_lut_size; i++, j++) {
+		if (j >= nb_q)
+			j = 0;
+		vf->rss_lut[i] = j;
+	}
+	/* send virtchnl ops to configure RSS */
+	ret = avf_configure_rss_lut(adapter);
+	if (ret)
+		return ret;
+	ret = avf_configure_rss_key(adapter);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int
+avf_init_rxq(struct rte_eth_dev *dev, struct avf_rx_queue *rxq)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_dev_data *dev_data = dev->data;
+	uint16_t buf_size, max_pkt_len, len;
+
+	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+
+	/* Calculate the maximum packet length allowed */
+	len = rxq->rx_buf_len * AVF_MAX_CHAINED_RX_BUFFERS;
+	max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	/* Check if the jumbo frame and maximum packet length are set
+	 * correctly.
+	 */
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (max_pkt_len <= ETHER_MAX_LEN ||
+		    max_pkt_len > AVF_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is enabled",
+				    (uint32_t)ETHER_MAX_LEN,
+				    (uint32_t)AVF_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (max_pkt_len < ETHER_MIN_LEN ||
+		    max_pkt_len > ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)ETHER_MIN_LEN,
+				    (uint32_t)ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	    (rxq->max_pkt_len + 2 * AVF_VLAN_TAG_SIZE) > buf_size) {
+		dev_data->scattered_rx = 1;
+	}
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	AVF_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
+static int
+avf_init_queues(struct rte_eth_dev *dev)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_rx_queue **rxq =
+		(struct avf_rx_queue **)dev->data->rx_queues;
+	struct avf_tx_queue **txq =
+		(struct avf_tx_queue **)dev->data->tx_queues;
+	int i, ret = AVF_SUCCESS;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!rxq[i] || !rxq[i]->q_set)
+			continue;
+		ret = avf_init_rxq(dev, rxq[i]);
+		if (ret != AVF_SUCCESS)
+			break;
+	}
+	/* TODO: set rx/tx function to vector/scatter/single-segment
+	 * according to parameters
+	 */
+	return ret;
+}
+
+static int
+avf_start_queues(struct rte_eth_dev *dev)
+{
+	struct avf_rx_queue *rxq;
+	struct avf_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq->tx_deferred_start)
+			continue;
+		if (avf_dev_tx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
+			return -1;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq->rx_deferred_start)
+			continue;
+		if (avf_dev_rx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+static int
+avf_dev_start(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
+	uint16_t interval;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	hw->adapter_stopped = 0;
+
+	vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+	vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
+				      dev->data->nb_tx_queues);
+
+	/* TODO: Rx interrupt */
+
+	if (avf_init_queues(dev) != 0) {
+		PMD_DRV_LOG(ERR, "failed to do Queue init");
+		return -1;
+	}
+
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		if (avf_init_rss(adapter) != 0) {
+			PMD_DRV_LOG(ERR, "configure rss failed");
+			goto err_rss;
+		}
+	}
+
+	if (avf_configure_queues(adapter) != 0) {
+		PMD_DRV_LOG(ERR, "configure queues failed");
+		goto err_queue;
+	}
+
+	/* Map interrupt for writeback */
+	vf->nb_msix = 1;
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+		/* If WB_ON_ITR supports, enable it */
+		vf->msix_base = AVF_RX_VEC_START;
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
+			      AVFINT_DYN_CTLN1_ITR_INDX_MASK |
+			      AVFINT_DYN_CTLN1_WB_ON_ITR_MASK);
+	} else {
+		/* If no WB_ON_ITR offload flags, need to set interrupt for
+		 * descriptor write back.
+		 */
+		vf->msix_base = AVF_MISC_VEC_ID;
+
+		/* set ITR to max */
+		interval = avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+			      AVFINT_DYN_CTL01_INTENA_MASK |
+			      (AVF_ITR_INDEX_DEFAULT <<
+			       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+			      (interval << AVFINT_DYN_CTL01_INTERVAL_SHIFT));
+	}
+	AVF_WRITE_FLUSH(hw);
+	/* map all queues to the same interrupt */
+	for (i = 0; i < dev->data->nb_rx_queues; i++)
+		vf->rxq_map[0] |= 1 << i;
+	if (avf_config_irq_map(adapter)) {
+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+		goto err_queue;
+	}
+
+	/* Set all mac addrs */
+	avf_add_del_all_mac_addr(adapter, TRUE);
+
+	if (avf_start_queues(dev) != 0) {
+		PMD_DRV_LOG(ERR, "enable queues failed");
+		goto err_mac;
+	}
+
+	/* TODO: enable interrupt for RX interrupt */
+	return 0;
+
+err_mac:
+	avf_add_del_all_mac_addr(adapter, FALSE);
+err_queue:
+err_rss:
+	return -1;
+}
+
+static void
+avf_dev_stop(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret, i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (hw->adapter_stopped == 1)
+		return;
+
+	avf_stop_queues(dev);
+
+	/*TODO: Disable the interrupt for Rx*/
+
+	/* TODO: Rx interrupt vector mapping free */
+
+	/* remove all mac addrs */
+	avf_add_del_all_mac_addr(adapter, FALSE);
+	hw->adapter_stopped = 1;
+}
+
+static void
+avf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	memset(dev_info, 0, sizeof(*dev_info));
+	dev_info->pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	dev_info->max_rx_queues = vf->vsi_res->num_queue_pairs;
+	dev_info->max_tx_queues = vf->vsi_res->num_queue_pairs;
+	dev_info->min_rx_bufsize = AVF_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = AVF_FRAME_SIZE_MAX;
+	dev_info->hash_key_size = vf->vf_res->rss_key_size;
+	dev_info->reta_size = vf->vf_res->rss_lut_size;
+	dev_info->flow_type_rss_offloads = AVF_RSS_OFFLOAD_ALL;
+	dev_info->max_mac_addrs = AVF_NUM_MACADDR_MAX;
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = AVF_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = AVF_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = AVF_DEFAULT_TX_RS_THRESH,
+		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
+				ETH_TXQ_FLAGS_NOOFFLOADS,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = AVF_MAX_RING_DESC,
+		.nb_min = AVF_MIN_RING_DESC,
+		.nb_align = AVF_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = AVF_MAX_RING_DESC,
+		.nb_min = AVF_MIN_RING_DESC,
+		.nb_align = AVF_ALIGN_RING_DESC,
+	};
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
@@ -250,6 +615,7 @@
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
+	avf_dev_stop(dev);
 	avf_shutdown_adminq(hw);
 	/* disable uio intr before callback unregister */
 	rte_intr_disable(intr_handle);
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
new file mode 100644
index 0000000..2d4fb4c
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx.c
@@ -0,0 +1,616 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+
+#include <rte_string_fns.h>
+#include <rte_memzone.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_tcp.h>
+#include <rte_sctp.h>
+#include <rte_udp.h>
+#include <rte_ip.h>
+#include <rte_net.h>
+
+#include "avf_log.h"
+#include "base/avf_prototype.h"
+#include "base/avf_type.h"
+#include "avf.h"
+#include "avf_rxtx.h"
+
+static inline int
+check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+{
+	/* The following constraints must be satisfied:
+	 *   thresh >= AVF_RX_MAX_BURST
+	 *   thresh < rxq->nb_rx_desc
+	 *   (rxq->nb_rx_desc % thresh) == 0
+	 */
+	if (thresh < AVF_RX_MAX_BURST ||
+	    thresh >= nb_desc ||
+	    (nb_desc % thresh != 0)) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u, "
+			     "greater than or equal to %u, "
+			     "and a divisor of %u",
+			     thresh, nb_desc, AVF_RX_MAX_BURST, nb_desc);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static inline int
+check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+		uint16_t tx_free_thresh)
+{
+	/* TX descriptors will have their RS bit set after tx_rs_thresh
+	 * descriptors have been used. The TX descriptor ring will be cleaned
+	 * after tx_free_thresh descriptors are used or if the number of
+	 * descriptors required to transmit a packet is greater than the
+	 * number of free TX descriptors.
+	 *
+	 * The following constraints must be satisfied:
+	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
+	 *  - tx_free_thresh must be less than the size of the ring minus 3.
+	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
+	 *  - tx_rs_thresh must be a divisor of the ring size.
+	 *
+	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+	 * race condition, hence the maximum threshold constraints. When set
+	 * to zero use default values.
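+	 *
+	 * For example, nb_desc = 512 with the defaults tx_rs_thresh = 32 and
+	 * tx_free_thresh = 32 satisfies all of the above:
+	 * 32 < 510, 32 < 509, 32 <= 32 and 512 % 32 == 0.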
+	 */
+	if (tx_rs_thresh >= (nb_desc - 2)) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
+			     "number of TX descriptors (%u) minus 2",
+			     tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be less than the "
+			     "number of TX descriptors (%u) minus 3.",
+			     tx_free_thresh, nb_desc);
+		return -EINVAL;
+	}
+	if (tx_rs_thresh > tx_free_thresh) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
+			     "equal to tx_free_thresh (%u).",
+			     tx_rs_thresh, tx_free_thresh);
+		return -EINVAL;
+	}
+	if ((nb_desc % tx_rs_thresh) != 0) {
+		PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
+			     "number of TX descriptors (%u).",
+			     tx_rs_thresh, nb_desc);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline void
+reset_rx_queue(struct avf_rx_queue *rxq)
+{
+	uint16_t len, i;
+
+	if (!rxq)
+		return;
+
+	len = rxq->nb_rx_desc + AVF_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(union avf_rx_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < AVF_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+static inline void
+reset_tx_queue(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *txe;
+	uint16_t i, prev, size;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct avf_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i].cmd_type_offset_bsz =
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf =  NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_used = 0;
+
+	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+	txq->nb_free = txq->nb_tx_desc - 1;
+
+	txq->next_dd = txq->rs_thresh - 1;
+	txq->next_rs = txq->rs_thresh - 1;
+}
+
+static int
+alloc_rxq_mbufs(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &rxq->rx_ring[i];
+		rxd->read.pkt_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
+#ifndef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+		rxd->read.rsvd1 = 0;
+		rxd->read.rsvd2 = 0;
+#endif
+
+		rxq->sw_ring[i] = mbuf;
+	}
+
+	return 0;
+}
+
+static inline void
+release_rxq_mbufs(struct avf_rx_queue *rxq)
+{
+	struct rte_mbuf *mbuf;
+	uint16_t i;
+
+	if (!rxq->sw_ring)
+		return;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i]) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+			rxq->sw_ring[i] = NULL;
+		}
+	}
+}
+
+static inline void
+release_txq_mbufs(struct avf_tx_queue *txq)
+{
+	uint16_t i;
+
+	if (!txq || !txq->sw_ring) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
+		return;
+	}
+
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		if (txq->sw_ring[i].mbuf) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+	}
+}
+
+int
+avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		       uint16_t nb_desc, unsigned int socket_id,
+		       const struct rte_eth_rxconf *rx_conf,
+		       struct rte_mempool *mp)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+	uint16_t len, i;
+	uint16_t rx_free_thresh;
+	uint16_t base, bsf, tc_mapping;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (nb_desc % AVF_ALIGN_RING_DESC != 0 ||
+	    nb_desc > AVF_MAX_RING_DESC ||
+	    nb_desc < AVF_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is "
+			     "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	/* Check free threshold */
+	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+			 AVF_DEFAULT_RX_FREE_THRESH :
+			 rx_conf->rx_free_thresh;
+	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx]) {
+		avf_dev_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket("avf rxq",
+				 sizeof(struct avf_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!rxq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "rx queue data structure");
+		return -ENOMEM;
+	}
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->queue_id = queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->crc_len = 0; /* crc stripping by default */
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->rx_hdr_len = 0;
+
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = RTE_ALIGN(len, (1 << AVF_RXQ_CTX_DBUFF_SHIFT));
+
+	/* Allocate the software ring. */
+	len = nb_desc + AVF_RX_MAX_BURST;
+	rxq->sw_ring =
+		rte_zmalloc_socket("avf rx sw ring",
+				   sizeof(struct rte_mbuf *) * len,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!rxq->sw_ring) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+
+	/* Allocate the maximum number of RX ring hardware descriptors with
+	 * a little more to support bulk allocation.
+	 */
+	len = AVF_MAX_RING_DESC + AVF_RX_MAX_BURST;
+	ring_size = RTE_ALIGN(len * sizeof(union avf_rx_desc),
+			      AVF_DMA_MEM_ALIGN);
+	mz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
+				      ring_size, AVF_RING_BASE_ALIGN,
+				      socket_id);
+	if (!mz) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
+		rte_free(rxq->sw_ring);
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
+	rxq->rx_ring_phys_addr = mz->iova;
+	rxq->rx_ring = (union avf_rx_desc *)mz->addr;
+
+	rxq->mz = mz;
+	reset_rx_queue(rxq);
+	rxq->q_set = TRUE;
+	dev->data->rx_queues[queue_idx] = rxq;
+	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
+
+	return 0;
+}
+
+int
+avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
+		       uint16_t queue_idx,
+		       uint16_t nb_desc,
+		       unsigned int socket_id,
+		       const struct rte_eth_txconf *tx_conf)
+{
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	uint16_t i, base, bsf, tc_mapping;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (nb_desc % AVF_ALIGN_RING_DESC != 0 ||
+	    nb_desc > AVF_MAX_RING_DESC ||
+	    nb_desc < AVF_MIN_RING_DESC) {
+		PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is "
+			    "invalid", nb_desc);
+		return -EINVAL;
+	}
+
+	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh) ?
+		tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH);
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+		tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
+	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx]) {
+		avf_dev_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("avf txq",
+				 sizeof(struct avf_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!txq) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for "
+			     "tx queue structure");
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->rs_thresh = tx_rs_thresh;
+	txq->free_thresh = tx_free_thresh;
+	txq->queue_id = queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->txq_flags = tx_conf->txq_flags;
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	/* Allocate software ring */
+	txq->sw_ring =
+		rte_zmalloc_socket("avf tx sw ring",
+				   sizeof(struct avf_tx_entry) * nb_desc,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (!txq->sw_ring) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		rte_free(txq);
+		return -ENOMEM;
+	}
+
+	/* Allocate TX hardware ring descriptors. */
+	ring_size = sizeof(struct avf_tx_desc) * AVF_MAX_RING_DESC;
+	ring_size = RTE_ALIGN(ring_size, AVF_DMA_MEM_ALIGN);
+	mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+				      ring_size, AVF_RING_BASE_ALIGN,
+				      socket_id);
+	if (!mz) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+		rte_free(txq->sw_ring);
+		rte_free(txq);
+		return -ENOMEM;
+	}
+	txq->tx_ring_phys_addr = mz->iova;
+	txq->tx_ring = (struct avf_tx_desc *)mz->addr;
+
+	txq->mz = mz;
+	reset_tx_queue(txq);
+	txq->q_set = TRUE;
+	dev->data->tx_queues[queue_idx] = txq;
+	txq->qtx_tail = hw->hw_addr + AVF_QTX_TAIL1(queue_idx);
+
+	return 0;
+}
+
+int
+avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int err = 0;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	err = alloc_rxq_mbufs(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+		return err;
+	}
+
+	rte_wmb();
+
+	/* Init the RX tail register. */
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	AVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = avf_switch_queue(adapter, rx_queue_id, TRUE, TRUE);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	else
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+int
+avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int err = 0;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
+	AVF_PCI_REG_WRITE(txq->qtx_tail, 0);
+	AVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = avf_switch_queue(adapter, tx_queue_id, FALSE, TRUE);
+
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	else
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+int
+avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int err;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = avf_switch_queue(adapter, rx_queue_id, TRUE, FALSE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	release_rxq_mbufs(rxq);
+	reset_rx_queue(rxq);
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int err;
+
+	PMD_DRV_FUNC_TRACE();
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = avf_switch_queue(adapter, tx_queue_id, FALSE, FALSE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	release_txq_mbufs(txq);
+	reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+void
+avf_dev_rx_queue_release(void *rxq)
+{
+	struct avf_rx_queue *q = (struct avf_rx_queue *)rxq;
+
+	if (!q)
+		return;
+
+	release_rxq_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+avf_dev_tx_queue_release(void *txq)
+{
+	struct avf_tx_queue *q = (struct avf_tx_queue *)txq;
+
+	if (!q)
+		return;
+
+	release_txq_mbufs(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_free(q);
+}
+
+void
+avf_stop_queues(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	struct avf_tx_queue *txq;
+	int ret, i;
+
+	/* Stop All queues */
+	ret = avf_disable_queues(adapter);
+	if (ret)
+		PMD_DRV_LOG(WARNING, "Fail to stop queues");
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		release_txq_mbufs(txq);
+		reset_tx_queue(txq);
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		release_rxq_mbufs(rxq);
+		reset_rx_queue(rxq);
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
new file mode 100644
index 0000000..e227cd1
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx.h
@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_RXTX_H_
+#define _AVF_RXTX_H_
+
+/* Ring length must be a whole multiple of 32 descriptors. */
+#define AVF_ALIGN_RING_DESC      32
+#define AVF_MIN_RING_DESC        64
+#define AVF_MAX_RING_DESC        4096
+#define AVF_DMA_MEM_ALIGN        4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define AVF_RING_BASE_ALIGN      128
+
+/* used for Rx Bulk Allocate */
+#define AVF_RX_MAX_BURST         32
+
+#define DEFAULT_TX_RS_THRESH     32
+#define DEFAULT_TX_FREE_THRESH   32
+
+/* HW desc structure, both 16-byte and 32-byte types are supported */
+#ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+#define avf_rx_desc avf_16byte_rx_desc
+#else
+#define avf_rx_desc avf_32byte_rx_desc
+#endif
+
+/* Structure associated with each Rx queue. */
+struct avf_rx_queue {
+	struct rte_mempool *mp;       /* mbuf pool to populate Rx ring */
+	const struct rte_memzone *mz; /* memzone for Rx ring */
+	volatile union avf_rx_desc *rx_ring; /* Rx ring virtual address */
+	uint64_t rx_ring_phys_addr;   /* Rx ring DMA address */
+	struct rte_mbuf **sw_ring;     /* address of SW ring */
+	uint16_t nb_rx_desc;          /* ring length */
+	uint16_t rx_tail;             /* current value of tail */
+	volatile uint8_t *qrx_tail;   /* register address of tail */
+	uint16_t rx_free_thresh;      /* max free RX desc to hold */
+	uint16_t nb_rx_hold;          /* number of held free RX desc */
+	struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
+	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
+	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
+
+	uint16_t port_id;       /* device port ID */
+	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
+	uint16_t queue_id;      /* Rx queue index */
+	uint16_t rx_buf_len;    /* The packet buffer size */
+	uint16_t rx_hdr_len;    /* The header buffer size */
+	uint16_t max_pkt_len;   /* Maximum packet length */
+
+	bool q_set;             /* if rx queue has been configured */
+	bool rx_deferred_start; /* don't start this queue in dev start */
+};
+
+struct avf_tx_entry {
+	struct rte_mbuf *mbuf;
+	uint16_t next_id;
+	uint16_t last_id;
+};
+
+/* Structure associated with each TX queue. */
+struct avf_tx_queue {
+	const struct rte_memzone *mz;  /* memzone for Tx ring */
+	volatile struct avf_tx_desc *tx_ring; /* Tx ring virtual address */
+	uint64_t tx_ring_phys_addr;    /* Tx ring DMA address */
+	struct avf_tx_entry *sw_ring;  /* address array of SW ring */
+	uint16_t nb_tx_desc;           /* ring length */
+	uint16_t tx_tail;              /* current value of tail */
+	volatile uint8_t *qtx_tail;    /* register address of tail */
+	/* number of used desc since RS bit set */
+	uint16_t nb_used;
+	uint16_t nb_free;
+	uint16_t last_desc_cleaned;    /* last desc that has been cleaned */
+	uint16_t free_thresh;
+	uint16_t rs_thresh;
+
+	uint16_t port_id;
+	uint16_t queue_id;
+	uint32_t txq_flags;
+	uint16_t next_dd;              /* next desc to check DD bit, for VPMD */
+	uint16_t next_rs;              /* next desc to set RS bit, for VPMD */
+
+	bool q_set;                    /* if tx queue has been configured */
+	bool tx_deferred_start;        /* don't start this queue in dev start */
+};
+
+int avf_dev_rx_queue_setup(struct rte_eth_dev *dev,
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_rxconf *rx_conf,
+			   struct rte_mempool *mp);
+
+int avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void avf_dev_rx_queue_release(void *rxq);
+
+int avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_txconf *tx_conf);
+int avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void avf_dev_tx_queue_release(void *txq);
+void avf_stop_queues(struct rte_eth_dev *dev);
+
+static inline
+void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
+			    const void *desc,
+			    uint16_t rx_id)
+{
+#ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
+	const union avf_16byte_rx_desc *rx_desc = desc;
+
+	printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64"\n",
+	       rxq->queue_id, rx_id, rx_desc->read.pkt_addr,
+	       rx_desc->read.hdr_addr);
+#else
+	const union avf_32byte_rx_desc *rx_desc = desc;
+
+	printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64
+	       " QW2: 0x%016"PRIx64" QW3: 0x%016"PRIx64"\n", rxq->queue_id,
+	       rx_id, rx_desc->read.pkt_addr, rx_desc->read.hdr_addr,
+	       rx_desc->read.rsvd1, rx_desc->read.rsvd2);
+#endif
+}
+
+/* All the descriptors are 16 bytes, so just use one of them
+ * to print the qwords
+ */
+static inline
+void avf_dump_tx_descriptor(const struct avf_tx_queue *txq,
+			    const void *desc, uint16_t tx_id)
+{
+	const char *name;
+	const struct avf_tx_desc *tx_desc = desc;
+	enum avf_tx_desc_dtype_value type;
+
+	type = (enum avf_tx_desc_dtype_value)rte_le_to_cpu_64(
+		tx_desc->cmd_type_offset_bsz &
+		rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK));
+	switch (type) {
+	case AVF_TX_DESC_DTYPE_DATA:
+		name = "Tx_data_desc";
+		break;
+	case AVF_TX_DESC_DTYPE_CONTEXT:
+		name = "Tx_context_desc";
+		break;
+	default:
+		name = "unknown_desc";
+		break;
+	}
+
+	printf("Queue %d %s %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64"\n",
+	       txq->queue_id, name, tx_id, tx_desc->buffer_addr,
+	       tx_desc->cmd_type_offset_bsz);
+}
+#endif /* _AVF_RXTX_H_ */
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index ebbee31..55a425a 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -25,6 +25,7 @@
 #include "base/avf_type.h"
 
 #include "avf.h"
+#include "avf_rxtx.h"
 
 #define MAX_TRY_TIMES 200
 #define ASQ_DELAY_MS  10
@@ -196,6 +197,48 @@
 	}
 }
 
+int
+avf_enable_vlan_strip(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	int ret;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_ENABLE_VLAN_STRIPPING;
+	args.in_args = NULL;
+	args.in_args_size = 0;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	ret = avf_execute_vf_cmd(adapter, &args);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " OP_ENABLE_VLAN_STRIPPING");
+
+	return ret;
+}
+
+int
+avf_disable_vlan_strip(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_cmd_info args;
+	int ret;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_DISABLE_VLAN_STRIPPING;
+	args.in_args = NULL;
+	args.in_args_size = 0;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	ret = avf_execute_vf_cmd(adapter, &args);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " OP_DISABLE_VLAN_STRIPPING");
+
+	return ret;
+}
+
 #define VIRTCHNL_VERSION_MAJOR_START 1
 #define VIRTCHNL_VERSION_MINOR_START 1
 
@@ -274,8 +317,8 @@
 	err = avf_execute_vf_cmd(adapter, &args);
 
 	if (err) {
-		PMD_DRV_LOG(ERR, "Failed to execute command of "
-				 "OP_GET_VF_RESOURCE");
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_GET_VF_RESOURCE");
 		return -1;
 	}
 
@@ -302,3 +345,315 @@
 
 	return 0;
 }
+
+int
+avf_enable_queues(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+
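+	/* Select queues 0..n-1 of each direction as a bitmask */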
+	queue_select.rx_queues = BIT(adapter->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(adapter->eth_dev->data->nb_tx_queues) - 1;
+
+	args.ops = VIRTCHNL_OP_ENABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_ENABLE_QUEUES");
+		return err;
+	}
+	return 0;
+}
+
+int
+avf_disable_queues(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+
+	queue_select.rx_queues = BIT(adapter->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(adapter->eth_dev->data->nb_tx_queues) - 1;
+
+	args.ops = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_DISABLE_QUEUES");
+		return err;
+	}
+	return 0;
+}
+
+int
+avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
+		 bool rx, bool on)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select queue_select;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = vf->vsi_res->vsi_id;
+	if (rx)
+		queue_select.rx_queues |= 1 << qid;
+	else
+		queue_select.tx_queues |= 1 << qid;
+
+	if (on)
+		args.ops = VIRTCHNL_OP_ENABLE_QUEUES;
+	else
+		args.ops = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.in_args = (u8 *)&queue_select;
+	args.in_args_size = sizeof(queue_select);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of %s",
+			    on ? "OP_ENABLE_QUEUES" : "OP_DISABLE_QUEUES");
+	return err;
+}
+
+int
+avf_configure_rss_lut(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_rss_lut *rss_lut;
+	struct avf_cmd_info args;
+	int len, err = 0;
+
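+	/* virtchnl_rss_lut already holds one lut byte, hence the "- 1" */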
+	len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (!rss_lut)
+		return -ENOMEM;
+
+	rss_lut->vsi_id = vf->vsi_res->vsi_id;
+	rss_lut->lut_entries = vf->vf_res->rss_lut_size;
+	rte_memcpy(rss_lut->lut, vf->rss_lut, vf->vf_res->rss_lut_size);
+
+	args.ops = VIRTCHNL_OP_CONFIG_RSS_LUT;
+	args.in_args = (u8 *)rss_lut;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_CONFIG_RSS_LUT");
+
+	rte_free(rss_lut);
+	return err;
+}
+
+int
+avf_configure_rss_key(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_rss_key *rss_key;
+	struct avf_cmd_info args;
+	int len, err = 0;
+
+	len = sizeof(*rss_key) + vf->vf_res->rss_key_size - 1;
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (!rss_key)
+		return -ENOMEM;
+
+	rss_key->vsi_id = vf->vsi_res->vsi_id;
+	rss_key->key_len = vf->vf_res->rss_key_size;
+	rte_memcpy(rss_key->key, vf->rss_key, vf->vf_res->rss_key_size);
+
+	args.ops = VIRTCHNL_OP_CONFIG_RSS_KEY;
+	args.in_args = (u8 *)rss_key;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_CONFIG_RSS_KEY");
+
+	rte_free(rss_key);
+	return err;
+}
+
+int
+avf_configure_queues(struct avf_adapter *adapter)
+{
+	struct avf_rx_queue **rxq =
+		(struct avf_rx_queue **)adapter->eth_dev->data->rx_queues;
+	struct avf_tx_queue **txq =
+		(struct avf_tx_queue **)adapter->eth_dev->data->tx_queues;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_vsi_queue_config_info *vc_config;
+	struct virtchnl_queue_pair_info *vc_qp;
+	struct avf_cmd_info args;
+	uint16_t i, size;
+	int err;
+
+	size = sizeof(*vc_config) +
+	       sizeof(vc_config->qpair[0]) * vf->num_queue_pairs;
+	vc_config = rte_zmalloc("cfg_queue", size, 0);
+	if (!vc_config)
+		return -ENOMEM;
+
+	vc_config->vsi_id = vf->vsi_res->vsi_id;
+	vc_config->num_queue_pairs = vf->num_queue_pairs;
+
+	for (i = 0, vc_qp = vc_config->qpair;
+	     i < vf->num_queue_pairs;
+	     i++, vc_qp++) {
+		vc_qp->txq.vsi_id = vf->vsi_res->vsi_id;
+		vc_qp->txq.queue_id = i;
+		/* Virtchnl configures queues in pairs */
+		if (i < adapter->eth_dev->data->nb_tx_queues) {
+			vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
+			vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_phys_addr;
+		}
+		vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
+		vc_qp->rxq.queue_id = i;
+		vc_qp->rxq.max_pkt_size = vf->max_pkt_len;
+		/* Virtchnl configures queues in pairs */
+		if (i < adapter->eth_dev->data->nb_rx_queues) {
+			vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
+			vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_phys_addr;
+			vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
+		}
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
+	args.in_args = (uint8_t *)vc_config;
+	args.in_args_size = size;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " VIRTCHNL_OP_CONFIG_VSI_QUEUES");
+
+	rte_free(vc_config);
+	return err;
+}
+
+int
+avf_config_irq_map(struct avf_adapter *adapter)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_irq_map_info *map_info;
+	struct virtchnl_vector_map *vecmap;
+	struct avf_cmd_info args;
+	uint32_t vector_id;
+	int len, i, err;
+
+	len = sizeof(struct virtchnl_irq_map_info) +
+	      sizeof(struct virtchnl_vector_map) * vf->nb_msix;
+
+	map_info = rte_zmalloc("map_info", len, 0);
+	if (!map_info)
+		return -ENOMEM;
+
+	map_info->num_vectors = vf->nb_msix;
+	for (i = 0; i < vf->nb_msix; i++) {
+		vecmap = &map_info->vecmap[i];
+		vecmap->vsi_id = vf->vsi_res->vsi_id;
+		vecmap->rxitr_idx = AVF_ITR_INDEX_DEFAULT;
+		vecmap->vector_id = vf->msix_base + i;
+		vecmap->txq_map = 0;
+		vecmap->rxq_map = vf->rxq_map[vf->msix_base + i];
+	}
+
+	args.ops = VIRTCHNL_OP_CONFIG_IRQ_MAP;
+	args.in_args = (u8 *)map_info;
+	args.in_args_size = len;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
+
+	rte_free(map_info);
+	return err;
+}
+
+void
+avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct ether_addr *addr;
+	struct avf_cmd_info args;
+	int len, err, i, j;
+	int next_begin = 0;
+	int begin = 0;
+
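+
+	/* The full MAC list may not fit in one admin queue buffer, so it is
+	 * sent in chunks of at most AVF_AQ_BUF_SZ bytes.
+	 */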
+	do {
+		j = 0;
+		len = sizeof(struct virtchnl_ether_addr_list);
+		for (i = begin; i < AVF_NUM_MACADDR_MAX; i++, next_begin++) {
+			addr = &adapter->eth_dev->data->mac_addrs[i];
+			if (is_zero_ether_addr(addr))
+				continue;
+			len += sizeof(struct virtchnl_ether_addr);
+			if (len >= AVF_AQ_BUF_SZ) {
+				next_begin = i + 1;
+				break;
+			}
+		}
+
+		list = rte_zmalloc("avf_del_mac_buffer", len, 0);
+		if (!list) {
+			PMD_DRV_LOG(ERR, "fail to allocate memory");
+			return;
+		}
+
+		for (i = begin; i < next_begin; i++) {
+			addr = &adapter->eth_dev->data->mac_addrs[i];
+			if (is_zero_ether_addr(addr))
+				continue;
+			rte_memcpy(list->list[j].addr, addr->addr_bytes,
+				   sizeof(addr->addr_bytes));
+			PMD_DRV_LOG(DEBUG, "add/rm mac:%x:%x:%x:%x:%x:%x",
+				    addr->addr_bytes[0], addr->addr_bytes[1],
+				    addr->addr_bytes[2], addr->addr_bytes[3],
+				    addr->addr_bytes[4], addr->addr_bytes[5]);
+			j++;
+		}
+		list->vsi_id = vf->vsi_res->vsi_id;
+		list->num_elements = j;
+		args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+			   VIRTCHNL_OP_DEL_ETH_ADDR;
+		args.in_args = (uint8_t *)list;
+		args.in_args_size = len;
+		args.out_buffer = vf->aq_resp;
+		args.out_size = AVF_AQ_BUF_SZ;
+		err = avf_execute_vf_cmd(adapter, &args);
+		if (err)
+			PMD_DRV_LOG(ERR, "fail to execute command %s",
+				    add ? "OP_ADD_ETHER_ADDRESS" :
+				    "OP_DEL_ETHER_ADDRESS");
+		rte_free(list);
+		begin = next_begin;
+	} while (begin < AVF_NUM_MACADDR_MAX);
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v7 04/14] net/avf: enable basic Rx Tx func
  2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Wenzhuo Lu
                             ` (2 preceding siblings ...)
  2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 03/14] net/avf: enable queue and device Wenzhuo Lu
@ 2018-01-10 13:01           ` Wenzhuo Lu
  2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 05/14] net/avf: enable link status update Wenzhuo Lu
                             ` (10 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10 13:01 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 MAINTAINERS                      |   1 +
 config/common_base               |   4 +
 doc/guides/nics/features/avf.ini |  22 ++
 drivers/net/avf/Makefile         |   3 +
 drivers/net/avf/avf_ethdev.c     |  36 +-
 drivers/net/avf/avf_log.h        |  21 ++
 drivers/net/avf/avf_rxtx.c       | 789 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/avf/avf_rxtx.h       |  53 +++
 8 files changed, 919 insertions(+), 10 deletions(-)
 create mode 100644 doc/guides/nics/features/avf.ini

diff --git a/MAINTAINERS b/MAINTAINERS
index 17f15b6..17067df 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -430,6 +430,7 @@ Intel avf
 M: Jingjing Wu <jingjing.wu@intel.com>
 M: Wenzhuo Lu <wenzhuo.lu@intel.com>
 F: drivers/net/avf/
+F: doc/guides/nics/features/avf*.ini
 
 Mellanox mlx4
 M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
diff --git a/config/common_base b/config/common_base
index f333209..b1f1c1c 100644
--- a/config/common_base
+++ b/config/common_base
@@ -229,6 +229,10 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 # Compile burst-oriented AVF PMD driver
 #
 CONFIG_RTE_LIBRTE_AVF_PMD=y
+CONFIG_RTE_LIBRTE_AVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_AVF_DEBUG_TX_FREE=n
+CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_AVF_16BYTE_RX_DESC=n
 
 #
 # Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD
diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
new file mode 100644
index 0000000..8a294e9
--- /dev/null
+++ b/doc/guides/nics/features/avf.ini
@@ -0,0 +1,22 @@
+;
+; Supported features of the 'avf' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Queue start/stop     = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+TSO                  = Y
+RSS hash             = Y
+CRC offload          = Y
+VLAN offload         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Packet type parsing  = Y
+Multiprocess aware   = Y
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index e172bf5..8d54fc9 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -13,6 +13,9 @@ LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
 LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_hash
 LDLIBS += -lrte_bus_pci
 
+# used to dump HW descriptor for debugging
+# CFLAGS += -DDEBUG_DUMP_DESC
+
 EXPORT_MAP := rte_pmd_avf_version.map
 
 LIBABIVER := 1
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index c53f00e..4480989 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -39,6 +39,7 @@
 static void avf_dev_close(struct rte_eth_dev *dev);
 static void avf_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
+static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -53,6 +54,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	.dev_stop                   = avf_dev_stop,
 	.dev_close                  = avf_dev_close,
 	.dev_infos_get              = avf_dev_info_get,
+	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -204,9 +206,12 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 		if (ret != AVF_SUCCESS)
 			break;
 	}
-	/* TODO: set rx/tx function to vector/scatter/single-segment
+	/* set rx/tx function to vector/scatter/single-segment
 	 * according to parameters
 	 */
+	avf_set_rx_function(dev);
+	avf_set_tx_function(dev);
+
 	return ret;
 }
 
@@ -407,6 +412,23 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	};
 }
 
+static const uint32_t *
+avf_dev_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_L4_NONFRAG,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_UNKNOWN
+	};
+	return ptypes;
+}
+
 static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
@@ -556,7 +578,19 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 
 	/* assign ops func pointer */
 	eth_dev->dev_ops = &avf_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &avf_recv_pkts;
+	eth_dev->tx_pkt_burst = &avf_xmit_pkts;
+	eth_dev->tx_pkt_prepare = &avf_prep_pkts;
 
+	/* For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check if we need a different RX
+	 * and TX function.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		avf_set_rx_function(eth_dev);
+		avf_set_tx_function(eth_dev);
+		return 0;
+	}
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
 	hw->vendor_id = pci_dev->id.vendor_id;
diff --git a/drivers/net/avf/avf_log.h b/drivers/net/avf/avf_log.h
index e3f106b..8d574d3 100644
--- a/drivers/net/avf/avf_log.h
+++ b/drivers/net/avf/avf_log.h
@@ -20,4 +20,25 @@
 	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
 #define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
 
+#ifdef RTE_LIBRTE_AVF_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVF_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_AVF_DEBUG_TX_FREE
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
 #endif /* _AVF_LOG_H_ */
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index 2d4fb4c..baccec4 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -34,17 +34,11 @@
 check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
 {
 	/* The following constraints must be satisfied:
-	 *   thresh >= AVF_RX_MAX_BURST
 	 *   thresh < rxq->nb_rx_desc
-	 *   (rxq->nb_rx_desc % thresh) == 0
 	 */
-	if (thresh < AVF_RX_MAX_BURST ||
-	    thresh >= nb_desc ||
-	    (nb_desc % thresh != 0)) {
-		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u, "
-			     "greater than or equal to %u, "
-			     "and a divisor of %u",
-			     thresh, nb_desc, AVF_RX_MAX_BURST, nb_desc);
+	if (thresh >= nb_desc) {
+		PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
+			     thresh, nb_desc);
 		return -EINVAL;
 	}
 	return 0;
@@ -614,3 +608,780 @@
 		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 }
+
+static inline void
+avf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union avf_rx_desc *rxdp)
+{
+	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		(1 << AVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
+		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->vlan_tci =
+			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+	} else {
+		mb->vlan_tci = 0;
+	}
+}
+
+/* Translate the rx descriptor status and error fields to pkt flags */
+static inline uint64_t
+avf_rxd_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags;
+	uint64_t error_bits = (qword >> AVF_RXD_QW1_ERROR_SHIFT);
+
+#define AVF_RX_ERR_BITS 0x3f
+
+	/* Check if RSS_HASH */
+	flags = (((qword >> AVF_RX_DESC_STATUS_FLTSTAT_SHIFT) &
+					AVF_RX_DESC_FLTSTAT_RSS_HASH) ==
+			AVF_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+
+	if (likely((error_bits & AVF_RX_ERR_BITS) == 0)) {
+		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		return flags;
+	}
+
+	if (unlikely(error_bits & (1 << AVF_RX_DESC_ERROR_IPE_SHIFT)))
+		flags |= PKT_RX_IP_CKSUM_BAD;
+	else
+		flags |= PKT_RX_IP_CKSUM_GOOD;
+
+	if (unlikely(error_bits & (1 << AVF_RX_DESC_ERROR_L4E_SHIFT)))
+		flags |= PKT_RX_L4_CKSUM_BAD;
+	else
+		flags |= PKT_RX_L4_CKSUM_GOOD;
+
+	/* TODO: Oversize error bit is not processed here */
+
+	return flags;
+}
+
+/* implement recv_pkts */
+uint16_t
+avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	volatile union avf_rx_desc *rx_ring;
+	volatile union avf_rx_desc *rxdp;
+	struct avf_rx_queue *rxq;
+	union avf_rx_desc rxd;
+	struct rte_mbuf *rxe;
+	struct rte_eth_dev *dev;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	uint16_t nb_rx;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint16_t rx_packet_len;
+	uint16_t rx_id, nb_hold;
+	uint64_t dma_addr;
+	uint64_t pkt_flags;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	nb_rx = 0;
+	nb_hold = 0;
+	rxq = rx_queue;
+	rx_id = rxq->rx_tail;
+	rx_ring = rxq->rx_ring;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			    AVF_RXD_QW1_STATUS_SHIFT;
+
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+			break;
+		AVF_DUMP_RX_DESC(rxq, rxdp, rx_id);
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			dev = &rte_eth_devices[rxq->port_id];
+			dev->data->rx_mbuf_alloc_failed++;
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+				   "queue_id=%u", rxq->port_id, rxq->queue_id);
+			break;
+		}
+
+		rxd = *rxdp;
+		nb_hold++;
+		rxe = rxq->sw_ring[rx_id];
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		rxm = rxe;
+		rxe = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+
+		rx_packet_len = ((qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				AVF_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
+
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
+		avf_rxd_to_vlan_tci(rxm, &rxd);
+		pkt_flags = avf_rxd_to_pkt_flags(qword1);
+		rxm->packet_type =
+			ptype_tbl[(uint8_t)((qword1 &
+			AVF_RXD_QW1_PTYPE_MASK) >> AVF_RXD_QW1_PTYPE_SHIFT)];
+
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			rxm->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		rxm->ol_flags |= pkt_flags;
+
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+
+	/* If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the receive tail register of queue.
+	 * Update that register with the value of the last processed RX
+	 * descriptor minus 1.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+			   "nb_hold=%u nb_rx=%u",
+			   rxq->port_id, rxq->queue_id,
+			   rx_id, nb_hold, nb_rx);
+		rx_id = (uint16_t)((rx_id == 0) ?
+			(rxq->nb_rx_desc - 1) : (rx_id - 1));
+		AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	return nb_rx;
+}
+
+/* Receive scattered (multi-segment) packets */
+uint16_t
+avf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	union avf_rx_desc rxd;
+	struct rte_mbuf *rxe;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *nmb, *rxm;
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0, nb_hold = 0, rx_packet_len;
+	struct rte_eth_dev *dev;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint64_t dma_addr;
+	uint64_t pkt_flags;
+
+	volatile union avf_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union avf_rx_desc *rxdp;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			    AVF_RXD_QW1_STATUS_SHIFT;
+
+		/* Check the DD bit */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+			break;
+		AVF_DUMP_RX_DESC(rxq, rxdp, rx_id);
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+				   "queue_id=%u", rxq->port_id, rxq->queue_id);
+			dev = &rte_eth_devices[rxq->port_id];
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+
+		rxd = *rxdp;
+		nb_hold++;
+		rxe = rxq->sw_ring[rx_id];
+		rx_id++;
+		if (rx_id == rxq->nb_rx_desc)
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+
+		rxm = rxe;
+		rxe = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+
+		/* Set data buffer address and data length of the mbuf */
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				 AVF_RXD_QW1_LENGTH_PBUF_SHIFT;
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/* If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+						rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		/* If this is not the last buffer of the received packet,
+		 * update the pointer to the last mbuf of the current scattered
+		 * packet and continue to parse the RX ring.
+		 */
+		if (!(rx_status & (1 << AVF_RX_DESC_STATUS_EOF_SHIFT))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		/* This is the last buffer of the received packet. If the CRC
+		 * is not stripped by the hardware:
+		 *  - Subtract the CRC length from the total packet length.
+		 *  - If the last buffer only contains the whole CRC or a part
+		 *  of it, free the mbuf associated to the last buffer. If part
+		 *  of the CRC is also contained in the previous mbuf, subtract
+		 *  the length of that CRC part from the data length of the
+		 *  previous mbuf.
+		 */
+		rxm->next = NULL;
+		if (unlikely(rxq->crc_len > 0)) {
+			first_seg->pkt_len -= ETHER_CRC_LEN;
+			if (rx_packet_len <= ETHER_CRC_LEN) {
+				rte_pktmbuf_free_seg(rxm);
+				first_seg->nb_segs--;
+				last_seg->data_len =
+					(uint16_t)(last_seg->data_len -
+					(ETHER_CRC_LEN - rx_packet_len));
+				last_seg->next = NULL;
+			} else
+				rxm->data_len = (uint16_t)(rx_packet_len -
+								ETHER_CRC_LEN);
+		}
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+		avf_rxd_to_vlan_tci(first_seg, &rxd);
+		pkt_flags = avf_rxd_to_pkt_flags(qword1);
+		first_seg->packet_type =
+			ptype_tbl[(uint8_t)((qword1 &
+			AVF_RXD_QW1_PTYPE_MASK) >> AVF_RXD_QW1_PTYPE_SHIFT)];
+
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			first_seg->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+
+		first_seg->ol_flags |= pkt_flags;
+
+		/* Prefetch data of first segment, if configured to do so. */
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+
+	/* Record index of the next RX descriptor to probe. */
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	/* If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register. Update the RDT with the value of the last processed RX
+	 * descriptor minus 1, to guarantee that the RDT register is never
+	 * equal to the RDH register, which creates a "full" ring situation
+	 * from the hardware point of view.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+			   "nb_hold=%u nb_rx=%u",
+			   rxq->port_id, rxq->queue_id,
+			   rx_id, nb_hold, nb_rx);
+		rx_id = (uint16_t)(rx_id == 0 ?
+			(rxq->nb_rx_desc - 1) : (rx_id - 1));
+		AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+
+	return nb_rx;
+}
+
+static inline int
+avf_xmit_cleanup(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *sw_ring = txq->sw_ring;
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+
+	volatile struct avf_tx_desc *txd = txq->tx_ring;
+
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	if ((txd[desc_to_clean_to].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK)) !=
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE)) {
+		PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done "
+				"(port=%d queue=%d)", desc_to_clean_to,
+				txq->port_id, txq->queue_id);
+		return -1;
+	}
+
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+							desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+					last_desc_cleaned);
+
+	txd[desc_to_clean_to].cmd_type_offset_bsz = 0;
+
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+
+	return 0;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+avf_calc_context_desc(uint64_t flags)
+{
+	static uint64_t mask = PKT_TX_TCP_SEG;
+
+	return (flags & mask) ? 1 : 0;
+}
+
+static inline void
+avf_txd_enable_checksum(uint64_t ol_flags,
+			uint32_t *td_cmd,
+			uint32_t *td_offset,
+			union avf_tx_offload tx_offload)
+{
+	/* Set MACLEN */
+	*td_offset |= (tx_offload.l2_len >> 1) <<
+		      AVF_TX_DESC_LENGTH_MACLEN_SHIFT;
+
+	/* Enable L3 checksum offloads */
+	if (ol_flags & PKT_TX_IP_CKSUM) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV4_CSUM;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	} else if (ol_flags & PKT_TX_IPV4) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV4;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	} else if (ol_flags & PKT_TX_IPV6) {
+		*td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV6;
+		*td_offset |= (tx_offload.l3_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_IPLEN_SHIFT;
+	}
+
+	if (ol_flags & PKT_TX_TCP_SEG) {
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (tx_offload.l4_len >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		return;
+	}
+
+	/* Enable L4 checksum offloads */
+	switch (ol_flags & PKT_TX_L4_MASK) {
+	case PKT_TX_TCP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_TCP;
+		*td_offset |= (sizeof(struct tcp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	case PKT_TX_SCTP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_SCTP;
+		*td_offset |= (sizeof(struct sctp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	case PKT_TX_UDP_CKSUM:
+		*td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_UDP;
+		*td_offset |= (sizeof(struct udp_hdr) >> 2) <<
+			      AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
+		break;
+	default:
+		break;
+	}
+}
+
+/* Set the TSO context descriptor;
+ * it supports IP -> L4 and IP -> IP -> L4.
+ */
+static inline uint64_t
+avf_set_tso_ctx(struct rte_mbuf *mbuf, union avf_tx_offload tx_offload)
+{
+	uint64_t ctx_desc = 0;
+	uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+	if (!tx_offload.l4_len) {
+		PMD_TX_LOG(DEBUG, "L4 length set to 0");
+		return ctx_desc;
+	}
+
+	/* In the case of a non-tunneling packet, outer_l2_len and
+	 * outer_l3_len must be 0.
+	 */
+	hdr_len = tx_offload.l2_len +
+		  tx_offload.l3_len +
+		  tx_offload.l4_len;
+
+	cd_cmd = AVF_TX_CTX_DESC_TSO;
+	cd_tso_len = mbuf->pkt_len - hdr_len;
+	ctx_desc |= ((uint64_t)cd_cmd << AVF_TXD_CTX_QW1_CMD_SHIFT) |
+		     ((uint64_t)cd_tso_len << AVF_TXD_CTX_QW1_TSO_LEN_SHIFT) |
+		     ((uint64_t)mbuf->tso_segsz << AVF_TXD_CTX_QW1_MSS_SHIFT);
+
+	return ctx_desc;
+}
+
+/* Build the Tx data descriptor cmd/type/offset/size/tag qword */
+static inline uint64_t
+avf_build_ctob(uint32_t td_cmd, uint32_t td_offset, unsigned int size,
+	       uint32_t td_tag)
+{
+	return rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DATA |
+				((uint64_t)td_cmd  << AVF_TXD_QW1_CMD_SHIFT) |
+				((uint64_t)td_offset <<
+				 AVF_TXD_QW1_OFFSET_SHIFT) |
+				((uint64_t)size  <<
+				 AVF_TXD_QW1_TX_BUF_SZ_SHIFT) |
+				((uint64_t)td_tag  <<
+				 AVF_TXD_QW1_L2TAG1_SHIFT));
+}
+
+/* TX function */
+uint16_t
+avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	volatile struct avf_tx_desc *txd;
+	volatile struct avf_tx_desc *txr;
+	struct avf_tx_queue *txq;
+	struct avf_tx_entry *sw_ring;
+	struct avf_tx_entry *txe, *txn;
+	struct rte_mbuf *tx_pkt;
+	struct rte_mbuf *m_seg;
+	uint16_t tx_id;
+	uint16_t nb_tx;
+	uint32_t td_cmd;
+	uint32_t td_offset;
+	uint32_t td_tag;
+	uint64_t ol_flags;
+	uint16_t nb_used;
+	uint16_t nb_ctx;
+	uint16_t tx_last;
+	uint16_t slen;
+	uint64_t buf_dma_addr;
+	union avf_tx_offload tx_offload = {0};
+
+	txq = tx_queue;
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+	tx_id = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Check if the descriptor ring needs to be cleaned. */
+	if (txq->nb_free < txq->free_thresh)
+		avf_xmit_cleanup(txq);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		td_cmd = 0;
+		td_tag = 0;
+		td_offset = 0;
+
+		tx_pkt = *tx_pkts++;
+		RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
+
+		ol_flags = tx_pkt->ol_flags;
+		tx_offload.l2_len = tx_pkt->l2_len;
+		tx_offload.l3_len = tx_pkt->l3_len;
+		tx_offload.l4_len = tx_pkt->l4_len;
+		tx_offload.tso_segsz = tx_pkt->tso_segsz;
+
+		/* Calculate the number of context descriptors needed. */
+		nb_ctx = avf_calc_context_desc(ol_flags);
+
+		/* The number of descriptors that must be allocated for
+		 * a packet equals to the number of the segments of that
+		 * packet plus 1 context descriptor if needed.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+		PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u"
+			   " tx_first=%u tx_last=%u",
+			   txq->port_id, txq->queue_id, tx_id, tx_last);
+
+		if (nb_used > txq->nb_free) {
+			if (avf_xmit_cleanup(txq)) {
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+			if (unlikely(nb_used > txq->rs_thresh)) {
+				while (nb_used > txq->nb_free) {
+					if (avf_xmit_cleanup(txq)) {
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		/* Descriptor based VLAN insertion */
+		if (ol_flags & PKT_TX_VLAN_PKT) {
+			td_cmd |= AVF_TX_DESC_CMD_IL2TAG1;
+			td_tag = tx_pkt->vlan_tci;
+		}
+
+		/* According to the datasheet, bit 2 is reserved and must be
+		 * set to 1.
+		 */
+		td_cmd |= 0x04;
+
+		/* Enable checksum offloading */
+		if (ol_flags & AVF_TX_CKSUM_OFFLOAD_MASK)
+			avf_txd_enable_checksum(ol_flags, &td_cmd,
+						&td_offset, tx_offload);
+
+		if (nb_ctx) {
+			/* Setup TX context descriptor if required */
+			volatile struct avf_tx_context_desc *ctx_txd =
+				(volatile struct avf_tx_context_desc *)
+					&txr[tx_id];
+			uint16_t cd_l2tag2 = 0;
+			uint64_t cd_type_cmd_tso_mss =
+				AVF_TX_DESC_DTYPE_CONTEXT;
+
+			txn = &sw_ring[txe->next_id];
+			RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+			if (txe->mbuf) {
+				rte_pktmbuf_free_seg(txe->mbuf);
+				txe->mbuf = NULL;
+			}
+
+			/* TSO enabled */
+			if (ol_flags & PKT_TX_TCP_SEG)
+				cd_type_cmd_tso_mss |=
+					avf_set_tso_ctx(tx_pkt, tx_offload);
+
+			AVF_DUMP_TX_DESC(txq, ctx_txd, tx_id);
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+		}
+
+		m_seg = tx_pkt;
+		do {
+			txd = &txr[tx_id];
+			txn = &sw_ring[txe->next_id];
+
+			if (txe->mbuf)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/* Setup TX Descriptor */
+			slen = m_seg->data_len;
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
+			txd->cmd_type_offset_bsz = avf_build_ctob(td_cmd,
+								  td_offset,
+								  slen,
+								  td_tag);
+
+			AVF_DUMP_TX_DESC(txq, txd, tx_id);
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg);
+
+		/* The last packet data descriptor needs End Of Packet (EOP) */
+		td_cmd |= AVF_TX_DESC_CMD_EOP;
+		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+
+		if (txq->nb_used >= txq->rs_thresh) {
+			PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
+				   "%4u (port=%d queue=%d)",
+				   tx_last, txq->port_id, txq->queue_id);
+
+			td_cmd |= AVF_TX_DESC_CMD_RS;
+
+			/* Update txq RS bit counters */
+			txq->nb_used = 0;
+		}
+
+		txd->cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)td_cmd) <<
+					 AVF_TXD_QW1_CMD_SHIFT);
+		AVF_DUMP_TX_DESC(txq, txd, tx_id);
+	}
+
+end_of_tx:
+	rte_wmb();
+
+	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+		   txq->port_id, txq->queue_id, tx_id, nb_tx);
+
+	AVF_PCI_REG_WRITE_RELAXED(txq->qtx_tail, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
+/* TX prep function */
+uint16_t
+avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	      uint16_t nb_pkts)
+{
+	int i, ret;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		/* Check condition for nb_segs > AVF_TX_MAX_MTU_SEG. */
+		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+			if (m->nb_segs > AVF_TX_MAX_MTU_SEG) {
+				rte_errno = EINVAL;
+				return i;
+			}
+		} else if ((m->tso_segsz < AVF_MIN_TSO_MSS) ||
+			   (m->tso_segsz > AVF_MAX_TSO_MSS)) {
+			/* An MSS outside this range is considered malicious */
+			rte_errno = EINVAL;
+			return i;
+		}
+
+		if (ol_flags & AVF_TX_OFFLOAD_NOTSUP_MASK) {
+			rte_errno = ENOTSUP;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+#endif
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+	}
+
+	return i;
+}
+
+/* Choose Rx function */
+void
+avf_set_rx_function(struct rte_eth_dev *dev)
+{
+	if (dev->data->scattered_rx)
+		dev->rx_pkt_burst = avf_recv_scattered_pkts;
+	else
+		dev->rx_pkt_burst = avf_recv_pkts;
+}
+
+/* Choose Tx function */
+void
+avf_set_tx_function(struct rte_eth_dev *dev)
+{
+	dev->tx_pkt_burst = avf_xmit_pkts;
+	dev->tx_pkt_prepare = avf_prep_pkts;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index e227cd1..cad240d 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -19,6 +19,25 @@
 #define DEFAULT_TX_RS_THRESH     32
 #define DEFAULT_TX_FREE_THRESH   32
 
+#define AVF_MIN_TSO_MSS          256
+#define AVF_MAX_TSO_MSS          9668
+#define AVF_TSO_MAX_SEG          UINT8_MAX
+#define AVF_TX_MAX_MTU_SEG       8
+
+#define AVF_TX_CKSUM_OFFLOAD_MASK (		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG)
+
+#define AVF_TX_OFFLOAD_MASK (  \
+		PKT_TX_VLAN_PKT |		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_TCP_SEG)
+
+#define AVF_TX_OFFLOAD_NOTSUP_MASK \
+		(PKT_TX_OFFLOAD_MASK ^ AVF_TX_OFFLOAD_MASK)
+
 /* HW desc structure, both 16-byte and 32-byte types are supported */
 #ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC
 #define avf_rx_desc avf_16byte_rx_desc
@@ -85,6 +104,18 @@ struct avf_tx_queue {
 	bool tx_deferred_start;        /* don't start this queue in dev start */
 };
 
+/* Offload features */
+union avf_tx_offload {
+	uint64_t data;
+	struct {
+		uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /* L3 (IP) Header Length. */
+		uint64_t l4_len:8; /* L4 Header Length. */
+		uint64_t tso_segsz:16; /* TCP TSO segment size */
+		/* uint64_t unused : 24; */
+	};
+};
+
 int avf_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			   uint16_t queue_idx,
 			   uint16_t nb_desc,
@@ -105,6 +136,17 @@ int avf_dev_tx_queue_setup(struct rte_eth_dev *dev,
 int avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void avf_dev_tx_queue_release(void *txq);
 void avf_stop_queues(struct rte_eth_dev *dev);
+uint16_t avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts);
+uint16_t avf_recv_scattered_pkts(void *rx_queue,
+				 struct rte_mbuf **rx_pkts,
+				 uint16_t nb_pkts);
+uint16_t avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+uint16_t avf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts);
+void avf_set_rx_function(struct rte_eth_dev *dev);
+void avf_set_tx_function(struct rte_eth_dev *dev);
 
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
@@ -157,4 +199,15 @@ void avf_dump_tx_descriptor(const struct avf_tx_queue *txq,
 	       txq->queue_id, name, tx_id, tx_desc->buffer_addr,
 	       tx_desc->cmd_type_offset_bsz);
 }
+
+#ifdef DEBUG_DUMP_DESC
+#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) \
+	avf_dump_rx_descriptor(rxq, desc, rx_id)
+#define AVF_DUMP_TX_DESC(txq, desc, tx_id) \
+	avf_dump_tx_descriptor(txq, desc, tx_id)
+#else
+#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
+#define AVF_DUMP_TX_DESC(txq, desc, tx_id) do { } while (0)
+#endif
+
 #endif /* _AVF_RXTX_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
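
For reference, a minimal sketch of how an application reaches this Rx/Tx path
through the generic ethdev API once the port is started (port 0, queue 0, the
burst size and the forward_one_burst() helper are illustrative assumptions,
not part of the patch):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SZ 32

/* Hypothetical helper: receive one burst and transmit it back unchanged. */
static void
forward_one_burst(void)
{
	struct rte_mbuf *pkts[BURST_SZ];
	uint16_t nb_rx, nb_tx, i;

	/* Served by avf_recv_pkts() or avf_recv_scattered_pkts() */
	nb_rx = rte_eth_rx_burst(0, 0, pkts, BURST_SZ);
	if (nb_rx == 0)
		return;

	/* Served by avf_xmit_pkts(); avf_prep_pkts() can be run first via
	 * rte_eth_tx_prepare() when checksum/TSO offloads are requested.
	 */
	nb_tx = rte_eth_tx_burst(0, 0, pkts, nb_rx);

	/* Free whatever the Tx ring could not accept */
	for (i = nb_tx; i < nb_rx; i++)
		rte_pktmbuf_free(pkts[i]);
}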

* [dpdk-dev] [PATCH v7 05/14] net/avf: enable link status update
  2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Wenzhuo Lu
                             ` (3 preceding siblings ...)
  2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 04/14] net/avf: enable basic Rx Tx func Wenzhuo Lu
@ 2018-01-10 13:01           ` Wenzhuo Lu
  2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 06/14] net/avf: support stats Wenzhuo Lu
                             ` (9 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10 13:01 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |  3 +++
 drivers/net/avf/avf.h            |  2 ++
 drivers/net/avf/avf_ethdev.c     | 51 +++++++++++++++++++++++++++++++++++++++-
 drivers/net/avf/avf_vchnl.c      | 38 +++++++++++++++++++++++++++++-
 4 files changed, 92 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 8a294e9..77e4f53 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -4,6 +4,9 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
 Queue start/stop     = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 22886d4..c97b2ee 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -202,4 +202,6 @@ int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
 int avf_configure_queues(struct avf_adapter *adapter);
 int avf_config_irq_map(struct avf_adapter *adapter);
 void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
+int avf_dev_link_update(struct rte_eth_dev *dev,
+			__rte_unused int wait_to_complete);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 4480989..7f7ddf9 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -55,6 +55,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	.dev_close                  = avf_dev_close,
 	.dev_infos_get              = avf_dev_info_get,
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
+	.link_update                = avf_dev_link_update,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -429,6 +430,53 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	return ptypes;
 }
 
+int
+avf_dev_link_update(struct rte_eth_dev *dev,
+		    __rte_unused int wait_to_complete)
+{
+	struct rte_eth_link new_link;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	/* Only read the link status stored in the VF; it is updated when a
+	 * LINK_CHANGE event is received from the PF over virtchnl.
+	 */
+	switch (vf->link_speed) {
+	case VIRTCHNL_LINK_SPEED_100MB:
+		new_link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+	case VIRTCHNL_LINK_SPEED_1GB:
+		new_link.link_speed = ETH_SPEED_NUM_1G;
+		break;
+	case VIRTCHNL_LINK_SPEED_10GB:
+		new_link.link_speed = ETH_SPEED_NUM_10G;
+		break;
+	case VIRTCHNL_LINK_SPEED_20GB:
+		new_link.link_speed = ETH_SPEED_NUM_20G;
+		break;
+	case VIRTCHNL_LINK_SPEED_25GB:
+		new_link.link_speed = ETH_SPEED_NUM_25G;
+		break;
+	case VIRTCHNL_LINK_SPEED_40GB:
+		new_link.link_speed = ETH_SPEED_NUM_40G;
+		break;
+	default:
+		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		break;
+	}
+
+	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vf->link_up ? ETH_LINK_UP :
+					     ETH_LINK_DOWN;
+	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+				  ETH_LINK_SPEED_FIXED);
+
+	rte_atomic64_cmpset((uint64_t *)&dev->data->dev_link,
+			    *(uint64_t *)&dev->data->dev_link,
+			    *(uint64_t *)&new_link);
+
+	return 0;
+}
+
 static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
@@ -712,7 +760,8 @@ static int eth_avf_pci_remove(struct rte_pci_device *pci_dev)
 /* Adaptive virtual function driver struct */
 static struct rte_pci_driver rte_avf_pmd = {
 	.id_table = pci_id_avf_map,
-	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC |
+		     RTE_PCI_DRV_IOVA_AS_VA,
 	.probe = eth_avf_pci_probe,
 	.remove = eth_avf_pci_remove,
 };
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index 55a425a..f5da601 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -133,6 +133,41 @@
 	return err;
 }
 
+static void
+avf_handle_pf_event_msg(struct rte_eth_dev *dev, uint8_t *msg,
+			uint16_t msglen)
+{
+	struct virtchnl_pf_event *pf_msg =
+			(struct virtchnl_pf_event *)msg;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	if (msglen < sizeof(struct virtchnl_pf_event)) {
+		PMD_DRV_LOG(DEBUG, "Error event");
+		return;
+	}
+	switch (pf_msg->event) {
+	case VIRTCHNL_EVENT_RESET_IMPENDING:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_RESET_IMPENDING event");
+		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET,
+					      NULL, NULL);
+		break;
+	case VIRTCHNL_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_LINK_CHANGE event");
+		vf->link_up = pf_msg->event_data.link_event.link_status;
+		vf->link_speed = pf_msg->event_data.link_event.link_speed;
+		avf_dev_link_update(dev, 0);
+		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC,
+					      NULL, NULL);
+		break;
+	case VIRTCHNL_EVENT_PF_DRIVER_CLOSE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_PF_DRIVER_CLOSE event");
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Unknown event %u received", pf_msg->event);
+		break;
+	}
+}
+
 void
 avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
 {
@@ -172,7 +207,8 @@
 		switch (aq_opc) {
 		case avf_aqc_opc_send_msg_to_vf:
 			if (msg_opc == VIRTCHNL_OP_EVENT) {
-				/* TODO */
+				avf_handle_pf_event_msg(dev, info.msg_buf,
+							info.msg_len);
 			} else {
 				/* read message and it's expected one */
 				if (msg_opc == vf->pend_cmd) {
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
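
For reference, a minimal sketch of how an application consumes this link state
through the ethdev API (print_link() is an illustrative helper, not part of
the patch):

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper: read the VF-cached link state that
 * avf_dev_link_update() fills from the VIRTCHNL_EVENT_LINK_CHANGE handler.
 */
static void
print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	rte_eth_link_get_nowait(port_id, &link);
	printf("port %u: link %s, %u Mbps\n", port_id,
	       link.link_status ? "up" : "down", link.link_speed);
}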

* [dpdk-dev] [PATCH v7 06/14] net/avf: support stats
  2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Wenzhuo Lu
                             ` (4 preceding siblings ...)
  2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 05/14] net/avf: enable link status update Wenzhuo Lu
@ 2018-01-10 13:01           ` Wenzhuo Lu
  2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 07/14] net/avf: enable MAC VLAN and promisc ops Wenzhuo Lu
                             ` (8 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10 13:01 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |  1 +
 drivers/net/avf/avf.h            |  2 ++
 drivers/net/avf/avf_ethdev.c     | 27 +++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c      | 27 +++++++++++++++++++++++++++
 4 files changed, 57 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 77e4f53..af84599 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -17,6 +17,7 @@ VLAN offload         = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
+Basic stats          = Y
 Multiprocess aware   = Y
 BSD nic_uio          = Y
 Linux UIO            = Y
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index c97b2ee..680b117 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -204,4 +204,6 @@ int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
 void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add);
 int avf_dev_link_update(struct rte_eth_dev *dev,
 			__rte_unused int wait_to_complete);
+int avf_query_stats(struct avf_adapter *adapter,
+		    struct virtchnl_eth_stats **pstats);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 7f7ddf9..bf6251b 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -40,6 +40,8 @@
 static void avf_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+static int avf_dev_stats_get(struct rte_eth_dev *dev,
+			     struct rte_eth_stats *stats);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -56,6 +58,7 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 	.dev_infos_get              = avf_dev_info_get,
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.link_update                = avf_dev_link_update,
+	.stats_get                  = avf_dev_stats_get,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -478,6 +481,30 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 }
 
 static int
+avf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct virtchnl_eth_stats *pstats = NULL;
+	int ret;
+
+	ret = avf_query_stats(adapter, &pstats);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return -EIO;
+	}
+	stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+			  pstats->rx_broadcast;
+	stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+			  pstats->tx_unicast;
+	stats->imissed = pstats->rx_discards;
+	stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+	stats->ibytes = pstats->rx_bytes;
+	stats->obytes = pstats->tx_bytes;
+	return 0;
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index f5da601..e26527f 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -693,3 +693,30 @@
 		begin = next_begin;
 	} while (begin < AVF_NUM_MACADDR_MAX);
 }
+
+int
+avf_query_stats(struct avf_adapter *adapter,
+		struct virtchnl_eth_stats **pstats)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_queue_select q_stats;
+	struct avf_cmd_info args;
+	int err;
+
+	memset(&q_stats, 0, sizeof(q_stats));
+	q_stats.vsi_id = vf->vsi_res->vsi_id;
+	args.ops = VIRTCHNL_OP_GET_STATS;
+	args.in_args = (uint8_t *)&q_stats;
+	args.in_args_size = sizeof(q_stats);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to execute command OP_GET_STATS");
+		*pstats = NULL;
+		return err;
+	}
+	*pstats = (struct virtchnl_eth_stats *)args.out_buffer;
+	return 0;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
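
For reference, a minimal sketch of the application-side usage (show_stats() is
an illustrative helper, not part of the patch):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper: print the counters that avf_dev_stats_get() derives
 * from the VIRTCHNL_OP_GET_STATS reply.
 */
static void
show_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) != 0)
		return;

	printf("ipackets=%" PRIu64 " opackets=%" PRIu64 " imissed=%" PRIu64 "\n",
	       stats.ipackets, stats.opackets, stats.imissed);
}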

* [dpdk-dev] [PATCH v7 07/14] net/avf: enable MAC VLAN and promisc ops
  2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Wenzhuo Lu
                             ` (5 preceding siblings ...)
  2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 06/14] net/avf: support stats Wenzhuo Lu
@ 2018-01-10 13:01           ` Wenzhuo Lu
  2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 08/14] net/avf: enable ops for RSS setting Wenzhuo Lu
                             ` (7 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10 13:01 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

 - promiscuous_enable
 - promiscuous_disable
 - allmulticast_enable
 - allmulticast_disable
 - mac_addr_add
 - mac_addr_remove
 - mac_addr_set
 - vlan_filter_set
 - vlan_offload_set

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |   5 +
 drivers/net/avf/avf.h            |   5 +
 drivers/net/avf/avf_ethdev.c     | 219 +++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_vchnl.c      |  90 ++++++++++++++++
 4 files changed, 319 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index af84599..1dd6114 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -11,7 +11,12 @@ Queue start/stop     = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
 TSO                  = Y
+Promiscuous mode     = Y
+Allmulticast mode    = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
 RSS hash             = Y
+VLAN filter          = Y
 CRC offload          = Y
 VLAN offload         = Y
 L3 checksum offload  = Y
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index 680b117..ea48310 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -206,4 +206,9 @@ int avf_dev_link_update(struct rte_eth_dev *dev,
 			__rte_unused int wait_to_complete);
 int avf_query_stats(struct avf_adapter *adapter,
 		    struct virtchnl_eth_stats **pstats);
+int avf_config_promisc(struct avf_adapter *adapter, bool enable_unicast,
+		       bool enable_multicast);
+int avf_add_del_eth_addr(struct avf_adapter *adapter,
+			 struct ether_addr *addr, bool add);
+int avf_add_del_vlan(struct avf_adapter *adapter, uint16_t vlanid, bool add);
 #endif /* _AVF_ETHDEV_H_ */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index bf6251b..1ea6ec6 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -42,6 +42,20 @@ static void avf_dev_info_get(struct rte_eth_dev *dev,
 static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 static int avf_dev_stats_get(struct rte_eth_dev *dev,
 			     struct rte_eth_stats *stats);
+static void avf_dev_promiscuous_enable(struct rte_eth_dev *dev);
+static void avf_dev_promiscuous_disable(struct rte_eth_dev *dev);
+static void avf_dev_allmulticast_enable(struct rte_eth_dev *dev);
+static void avf_dev_allmulticast_disable(struct rte_eth_dev *dev);
+static int avf_dev_add_mac_addr(struct rte_eth_dev *dev,
+				struct ether_addr *addr,
+				uint32_t index,
+				uint32_t pool);
+static void avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index);
+static int avf_dev_vlan_filter_set(struct rte_eth_dev *dev,
+				   uint16_t vlan_id, int on);
+static int avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+					 struct ether_addr *mac_addr);
 
 int avf_logtype_init;
 int avf_logtype_driver;
@@ -59,6 +73,14 @@ static int avf_dev_stats_get(struct rte_eth_dev *dev,
 	.dev_supported_ptypes_get   = avf_dev_supported_ptypes_get,
 	.link_update                = avf_dev_link_update,
 	.stats_get                  = avf_dev_stats_get,
+	.promiscuous_enable         = avf_dev_promiscuous_enable,
+	.promiscuous_disable        = avf_dev_promiscuous_disable,
+	.allmulticast_enable        = avf_dev_allmulticast_enable,
+	.allmulticast_disable       = avf_dev_allmulticast_disable,
+	.mac_addr_add               = avf_dev_add_mac_addr,
+	.mac_addr_remove            = avf_dev_del_mac_addr,
+	.vlan_filter_set            = avf_dev_vlan_filter_set,
+	.vlan_offload_set           = avf_dev_vlan_offload_set,
 	.rx_queue_start             = avf_dev_rx_queue_start,
 	.rx_queue_stop              = avf_dev_rx_queue_stop,
 	.tx_queue_start             = avf_dev_tx_queue_start,
@@ -67,6 +89,7 @@ static int avf_dev_stats_get(struct rte_eth_dev *dev,
 	.rx_queue_release           = avf_dev_rx_queue_release,
 	.tx_queue_setup             = avf_dev_tx_queue_setup,
 	.tx_queue_release           = avf_dev_tx_queue_release,
+	.mac_addr_set               = avf_dev_set_default_mac_addr,
 };
 
 static int
@@ -480,6 +503,202 @@ static int avf_dev_stats_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void
+avf_dev_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (vf->promisc_unicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, TRUE, vf->promisc_multicast_enabled);
+	if (!ret)
+		vf->promisc_unicast_enabled = TRUE;
+}
+
+static void
+avf_dev_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (!vf->promisc_unicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, FALSE, vf->promisc_multicast_enabled);
+	if (!ret)
+		vf->promisc_unicast_enabled = FALSE;
+}
+
+static void
+avf_dev_allmulticast_enable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (vf->promisc_multicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, vf->promisc_unicast_enabled, TRUE);
+	if (!ret)
+		vf->promisc_multicast_enabled = TRUE;
+}
+
+static void
+avf_dev_allmulticast_disable(struct rte_eth_dev *dev)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int ret;
+
+	if (!vf->promisc_multicast_enabled)
+		return;
+
+	ret = avf_config_promisc(adapter, vf->promisc_unicast_enabled, FALSE);
+	if (!ret)
+		vf->promisc_multicast_enabled = FALSE;
+}
+
+static int
+avf_dev_add_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr,
+		     __rte_unused uint32_t index,
+		     __rte_unused uint32_t pool)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int err;
+
+	if (is_zero_ether_addr(addr)) {
+		PMD_DRV_LOG(ERR, "Invalid Ethernet Address");
+		return -EINVAL;
+	}
+
+	err = avf_add_del_eth_addr(adapter, addr, TRUE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to add MAC address");
+		return -EIO;
+	}
+
+	vf->mac_num++;
+
+	return 0;
+}
+
+static void
+avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct ether_addr *addr;
+	int err;
+
+	addr = &dev->data->mac_addrs[index];
+
+	err = avf_add_del_eth_addr(adapter, addr, FALSE);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to delete MAC address");
+
+	vf->mac_num--;
+}
+
+static int
+avf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	int err;
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+		return -ENOTSUP;
+
+	err = avf_add_del_vlan(adapter, vlan_id, on);
+	if (err)
+		return -EIO;
+	return 0;
+}
+
+static int
+avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+	int err = 0;
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+		return -ENOTSUP;
+
+	/* Vlan stripping setting */
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		/* Enable or disable VLAN stripping */
+		if (dev_conf->rxmode.hw_vlan_strip)
+			err = avf_enable_vlan_strip(adapter);
+		else
+			err = avf_disable_vlan_strip(adapter);
+	}
+
+	if (err)
+		return -EIO;
+	return 0;
+}
+
+static void
+avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+			     struct ether_addr *mac_addr)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct ether_addr *perm_addr, *old_addr;
+	int ret;
+
+	old_addr = (struct ether_addr *)hw->mac.addr;
+	perm_addr = (struct ether_addr *)hw->mac.perm_addr;
+
+	if (is_same_ether_addr(mac_addr, old_addr))
+		return;
+
+	/* If the MAC address is configured by host, skip the setting */
+	if (is_valid_assigned_ether_addr(perm_addr))
+		return;
+
+	ret = avf_add_del_eth_addr(adapter, old_addr, FALSE);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to delete old MAC:"
+			    " %02X:%02X:%02X:%02X:%02X:%02X",
+			    old_addr->addr_bytes[0],
+			    old_addr->addr_bytes[1],
+			    old_addr->addr_bytes[2],
+			    old_addr->addr_bytes[3],
+			    old_addr->addr_bytes[4],
+			    old_addr->addr_bytes[5]);
+
+	ret = avf_add_del_eth_addr(adapter, mac_addr, TRUE);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Fail to add new MAC:"
+			    " %02X:%02X:%02X:%02X:%02X:%02X",
+			    mac_addr->addr_bytes[0],
+			    mac_addr->addr_bytes[1],
+			    mac_addr->addr_bytes[2],
+			    mac_addr->addr_bytes[3],
+			    mac_addr->addr_bytes[4],
+			    mac_addr->addr_bytes[5]);
+
+	ether_addr_copy(mac_addr, (struct ether_addr *)hw->mac.addr);
+}
+
 static int
 avf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
diff --git a/drivers/net/avf/avf_vchnl.c b/drivers/net/avf/avf_vchnl.c
index e26527f..3b652bf 100644
--- a/drivers/net/avf/avf_vchnl.c
+++ b/drivers/net/avf/avf_vchnl.c
@@ -720,3 +720,93 @@
 	*pstats = (struct virtchnl_eth_stats *)args.out_buffer;
 	return 0;
 }
+
+int
+avf_config_promisc(struct avf_adapter *adapter,
+		   bool enable_unicast,
+		   bool enable_multicast)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_promisc_info promisc;
+	struct avf_cmd_info args;
+	int err;
+
+	promisc.flags = 0;
+	promisc.vsi_id = vf->vsi_res->vsi_id;
+
+	if (enable_unicast)
+		promisc.flags |= FLAG_VF_UNICAST_PROMISC;
+
+	if (enable_multicast)
+		promisc.flags |= FLAG_VF_MULTICAST_PROMISC;
+
+	args.ops = VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE;
+	args.in_args = (uint8_t *)&promisc;
+	args.in_args_size = sizeof(promisc);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+
+	err = avf_execute_vf_cmd(adapter, &args);
+
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "fail to execute command CONFIG_PROMISCUOUS_MODE");
+	return err;
+}
+
+int
+avf_add_del_eth_addr(struct avf_adapter *adapter, struct ether_addr *addr,
+		     bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t cmd_buffer[sizeof(struct virtchnl_ether_addr_list) +
+			   sizeof(struct virtchnl_ether_addr)];
+	struct avf_cmd_info args;
+	int err;
+
+	list = (struct virtchnl_ether_addr_list *)cmd_buffer;
+	list->vsi_id = vf->vsi_res->vsi_id;
+	list->num_elements = 1;
+	rte_memcpy(list->list[0].addr, addr->addr_bytes,
+		   sizeof(addr->addr_bytes));
+
+	args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR : VIRTCHNL_OP_DEL_ETH_ADDR;
+	args.in_args = cmd_buffer;
+	args.in_args_size = sizeof(cmd_buffer);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_ETH_ADDR" :  "OP_DEL_ETH_ADDR");
+	return err;
+}
+
+int
+avf_add_del_vlan(struct avf_adapter *adapter, uint16_t vlanid, bool add)
+{
+	struct virtchnl_vlan_filter_list *vlan_list;
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t cmd_buffer[sizeof(struct virtchnl_vlan_filter_list) +
+							sizeof(uint16_t)];
+	struct avf_cmd_info args;
+	int err;
+
+	vlan_list = (struct virtchnl_vlan_filter_list *)cmd_buffer;
+	vlan_list->vsi_id = vf->vsi_res->vsi_id;
+	vlan_list->num_elements = 1;
+	vlan_list->vlan_id[0] = vlanid;
+
+	args.ops = add ? VIRTCHNL_OP_ADD_VLAN : VIRTCHNL_OP_DEL_VLAN;
+	args.in_args = cmd_buffer;
+	args.in_args_size = sizeof(cmd_buffer);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = AVF_AQ_BUF_SZ;
+	err = avf_execute_vf_cmd(adapter, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_VLAN" :  "OP_DEL_VLAN");
+
+	return err;
+}
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
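
For reference, a minimal sketch of how these ops are reached from an
application (the port id, the VLAN id 100 and the setup_vf_filters() helper
are illustrative assumptions, not part of the patch):

#include <rte_ethdev.h>

/* Hypothetical helper: both calls end up as virtchnl commands to the PF. */
static void
setup_vf_filters(uint16_t port_id)
{
	/* avf_dev_promiscuous_enable() -> VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE */
	rte_eth_promiscuous_enable(port_id);

	/* avf_dev_vlan_filter_set() -> VIRTCHNL_OP_ADD_VLAN */
	rte_eth_dev_vlan_filter(port_id, 100, 1);
}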

* [dpdk-dev] [PATCH v7 08/14] net/avf: enable ops for RSS setting
  2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Wenzhuo Lu
                             ` (6 preceding siblings ...)
  2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 07/14] net/avf: enable MAC VLAN and promisc ops Wenzhuo Lu
@ 2018-01-10 13:02           ` Wenzhuo Lu
  2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 09/14] net/avf: enable ops for MTU setting Wenzhuo Lu
                             ` (6 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10 13:02 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |   2 +
 drivers/net/avf/avf_ethdev.c     | 142 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 144 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 1dd6114..61527d7 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -16,6 +16,8 @@ Allmulticast mode    = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
 RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
 VLAN filter          = Y
 CRC offload          = Y
 VLAN offload         = Y
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 1ea6ec6..5a800ff 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -54,6 +54,16 @@ static int avf_dev_add_mac_addr(struct rte_eth_dev *dev,
 static int avf_dev_vlan_filter_set(struct rte_eth_dev *dev,
 				   uint16_t vlan_id, int on);
 static int avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int avf_dev_rss_reta_update(struct rte_eth_dev *dev,
+				   struct rte_eth_rss_reta_entry64 *reta_conf,
+				   uint16_t reta_size);
+static int avf_dev_rss_reta_query(struct rte_eth_dev *dev,
+				  struct rte_eth_rss_reta_entry64 *reta_conf,
+				  uint16_t reta_size);
+static int avf_dev_rss_hash_update(struct rte_eth_dev *dev,
+				   struct rte_eth_rss_conf *rss_conf);
+static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+				     struct rte_eth_rss_conf *rss_conf);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
 
@@ -90,6 +100,10 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.tx_queue_setup             = avf_dev_tx_queue_setup,
 	.tx_queue_release           = avf_dev_tx_queue_release,
 	.mac_addr_set               = avf_dev_set_default_mac_addr,
+	.reta_update                = avf_dev_rss_reta_update,
+	.reta_query                 = avf_dev_rss_reta_query,
+	.rss_hash_update            = avf_dev_rss_hash_update,
+	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
 };
 
 static int
@@ -654,6 +668,134 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+avf_dev_rss_reta_update(struct rte_eth_dev *dev,
+			struct rte_eth_rss_reta_entry64 *reta_conf,
+			uint16_t reta_size)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint8_t *lut;
+	uint16_t i, idx, shift;
+	int ret;
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	if (reta_size != vf->vf_res->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash lookup table "
+			"(%d) doesn't match the size the hardware can "
+			"support (%d)", reta_size, vf->vf_res->rss_lut_size);
+		return -EINVAL;
+	}
+
+	lut = rte_zmalloc("rss_lut", reta_size, 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	/* Keep a copy of the current LUT so it can be restored on failure */
+	rte_memcpy(lut, vf->rss_lut, reta_size);
+
+	/* Apply the requested entries directly to the working LUT */
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			vf->rss_lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	/* send the virtchnl op to configure RSS */
+	ret = avf_configure_rss_lut(adapter);
+	if (ret) /* restore the previous LUT on failure */
+		rte_memcpy(vf->rss_lut, lut, reta_size);
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+avf_dev_rss_reta_query(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_reta_entry64 *reta_conf,
+		       uint16_t reta_size)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	uint16_t i, idx, shift;
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	if (reta_size != vf->vf_res->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash lookup table "
+			"(%d) doesn't match the size the hardware can "
+			"support (%d)", reta_size, vf->vf_res->rss_lut_size);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vf->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+avf_dev_rss_hash_update(struct rte_eth_dev *dev,
+			struct rte_eth_rss_conf *rss_conf)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	/* HENA setting, it is enabled by default, no change */
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		return 0;
+	} else if (rss_conf->rss_key_len != vf->vf_res->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash key "
+			"(%d) doesn't match the size the hardware can "
+			"support (%d)", rss_conf->rss_key_len,
+			vf->vf_res->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vf->rss_key, rss_conf->rss_key, rss_conf->rss_key_len);
+
+	return avf_configure_rss_key(adapter);
+}
+
+static int
+avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+			  struct rte_eth_rss_conf *rss_conf)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+
+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF))
+		return -ENOTSUP;
+
+	/* Just report the default value for now. */
+	rss_conf->rss_hf = AVF_RSS_OFFLOAD_ALL;
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	rss_conf->rss_key_len = vf->vf_res->rss_key_size;
+	rte_memcpy(rss_conf->rss_key, vf->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static void
 avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *mac_addr)
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
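
For reference, a minimal sketch of an RSS key update from the application side
(set_rss_key() is an illustrative helper, and key_len must equal the
rss_key_size reported by the PF; not part of the patch):

#include <string.h>
#include <rte_ethdev.h>

/* Hypothetical helper: lands in avf_dev_rss_hash_update() ->
 * avf_configure_rss_key().
 */
static int
set_rss_key(uint16_t port_id, uint8_t *key, uint8_t key_len)
{
	struct rte_eth_rss_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rss_key = key;
	conf.rss_key_len = key_len;

	return rte_eth_dev_rss_hash_update(port_id, &conf);
}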

* [dpdk-dev] [PATCH v7 09/14] net/avf: enable ops for MTU setting
  2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Wenzhuo Lu
                             ` (7 preceding siblings ...)
  2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 08/14] net/avf: enable ops for RSS setting Wenzhuo Lu
@ 2018-01-10 13:02           ` Wenzhuo Lu
  2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 10/14] net/avf: enable ops to check queue info and status Wenzhuo Lu
                             ` (5 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10 13:02 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |  1 +
 drivers/net/avf/avf_ethdev.c     | 30 ++++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index 61527d7..cf1b246 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -8,6 +8,7 @@ Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
 Queue start/stop     = Y
+MTU update           = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
 TSO                  = Y
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 5a800ff..e4a6f35 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -64,6 +64,7 @@ static int avf_dev_rss_hash_update(struct rte_eth_dev *dev,
 				   struct rte_eth_rss_conf *rss_conf);
 static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 				     struct rte_eth_rss_conf *rss_conf);
+static int avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
 
@@ -104,6 +105,7 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.reta_query                 = avf_dev_rss_reta_query,
 	.rss_hash_update            = avf_dev_rss_hash_update,
 	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
+	.mtu_set                    = avf_dev_mtu_set,
 };
 
 static int
@@ -796,6 +798,34 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	uint32_t frame_size = mtu + AVF_ETH_OVERHEAD;
+	int ret = 0;
+
+	if (mtu < ETHER_MIN_MTU || frame_size > AVF_FRAME_SIZE_MAX)
+		return -EINVAL;
+
+	/* MTU setting is forbidden if the port is started */
+	if (dev->data->dev_started) {
+		PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+		return -EBUSY;
+	}
+
+	if (frame_size > ETHER_MAX_LEN)
+		dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
+		dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_JUMBO_FRAME;
+
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+	return ret;
+}
+
 static void
 avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *mac_addr)
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread
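
For reference, a minimal sketch of the expected call order (since
avf_dev_mtu_set() rejects a started port, the stop/set/start sequence and the
change_mtu() helper below are assumptions about typical usage, not part of the
patch):

#include <rte_ethdev.h>

/* Hypothetical helper: change the MTU on a stopped port and restart it. */
static int
change_mtu(uint16_t port_id, uint16_t new_mtu)
{
	int ret;

	rte_eth_dev_stop(port_id);
	ret = rte_eth_dev_set_mtu(port_id, new_mtu);	/* avf_dev_mtu_set() */
	if (ret != 0)
		return ret;

	return rte_eth_dev_start(port_id);
}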

* [dpdk-dev] [PATCH v7 10/14] net/avf: enable ops to check queue info and status
  2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Wenzhuo Lu
                             ` (8 preceding siblings ...)
  2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 09/14] net/avf: enable ops for MTU setting Wenzhuo Lu
@ 2018-01-10 13:02           ` Wenzhuo Lu
  2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 11/14] net/i40e: support AVF basic interface Wenzhuo Lu
                             ` (4 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10 13:02 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

 - rxq_info_get
 - txq_info_get
 - rx_queue_count
 - rx_descriptor_status
 - tx_descriptor_status

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini |   2 +
 drivers/net/avf/avf_ethdev.c     |   5 ++
 drivers/net/avf/avf_rxtx.c       | 120 +++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h       |   7 +++
 4 files changed, 134 insertions(+)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index cf1b246..da4d81b 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -25,6 +25,8 @@ VLAN offload         = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
 Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
 Basic stats          = Y
 Multiprocess aware   = Y
 BSD nic_uio          = Y
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index e4a6f35..e00bb5d 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -105,6 +105,11 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.reta_query                 = avf_dev_rss_reta_query,
 	.rss_hash_update            = avf_dev_rss_hash_update,
 	.rss_hash_conf_get          = avf_dev_rss_hash_conf_get,
+	.rxq_info_get               = avf_dev_rxq_info_get,
+	.txq_info_get               = avf_dev_txq_info_get,
+	.rx_queue_count             = avf_dev_rxq_count,
+	.rx_descriptor_status       = avf_dev_rx_desc_status,
+	.tx_descriptor_status       = avf_dev_tx_desc_status,
 	.mtu_set                    = avf_dev_mtu_set,
 };
 
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index baccec4..0fea8f9 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -1385,3 +1385,123 @@
 	dev->tx_pkt_burst = avf_xmit_pkts;
 	dev->tx_pkt_prepare = avf_prep_pkts;
 }
+
+void
+avf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		     struct rte_eth_rxq_info *qinfo)
+{
+	struct avf_rx_queue *rxq;
+
+	rxq = dev->data->rx_queues[queue_id];
+
+	qinfo->mp = rxq->mp;
+	qinfo->scattered_rx = dev->data->scattered_rx;
+	qinfo->nb_desc = rxq->nb_rx_desc;
+
+	qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+	qinfo->conf.rx_drop_en = TRUE;
+	qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+}
+
+void
+avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		     struct rte_eth_txq_info *qinfo)
+{
+	struct avf_tx_queue *txq;
+
+	txq = dev->data->tx_queues[queue_id];
+
+	qinfo->nb_desc = txq->nb_tx_desc;
+
+	qinfo->conf.tx_free_thresh = txq->free_thresh;
+	qinfo->conf.tx_rs_thresh = txq->rs_thresh;
+	qinfo->conf.txq_flags = txq->txq_flags;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
+/* Get the number of used descriptors of a rx queue */
+uint32_t
+avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+#define AVF_RXQ_SCAN_INTERVAL 4
+	volatile union avf_rx_desc *rxdp;
+	struct avf_rx_queue *rxq;
+	uint16_t desc = 0;
+
+	rxq = dev->data->rx_queues[queue_id];
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	while ((desc < rxq->nb_rx_desc) &&
+	       ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+		 AVF_RXD_QW1_STATUS_MASK) >> AVF_RXD_QW1_STATUS_SHIFT) &
+	       (1 << AVF_RX_DESC_STATUS_DD_SHIFT)) {
+		/* Check the DD bit of one Rx descriptor in every group of 4,
+		 * to avoid checking too frequently and degrading performance
+		 * too much.
+		 */
+		desc += AVF_RXQ_SCAN_INTERVAL;
+		rxdp += AVF_RXQ_SCAN_INTERVAL;
+		if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+			rxdp = &(rxq->rx_ring[rxq->rx_tail +
+					desc - rxq->nb_rx_desc]);
+	}
+
+	return desc;
+}
+
+int
+avf_dev_rx_desc_status(void *rx_queue, uint16_t offset)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	volatile uint64_t *status;
+	uint64_t mask;
+	uint32_t desc;
+
+	if (unlikely(offset >= rxq->nb_rx_desc))
+		return -EINVAL;
+
+	if (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)
+		return RTE_ETH_RX_DESC_UNAVAIL;
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	status = &rxq->rx_ring[desc].wb.qword1.status_error_len;
+	mask = rte_le_to_cpu_64((1ULL << AVF_RX_DESC_STATUS_DD_SHIFT)
+		<< AVF_RXD_QW1_STATUS_SHIFT);
+	if (*status & mask)
+		return RTE_ETH_RX_DESC_DONE;
+
+	return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+avf_dev_tx_desc_status(void *tx_queue, uint16_t offset)
+{
+	struct avf_tx_queue *txq = tx_queue;
+	volatile uint64_t *status;
+	uint64_t mask, expect;
+	uint32_t desc;
+
+	if (unlikely(offset >= txq->nb_tx_desc))
+		return -EINVAL;
+
+	desc = txq->tx_tail + offset;
+	/* go to next desc that has the RS bit */
+	desc = ((desc + txq->rs_thresh - 1) / txq->rs_thresh) *
+		txq->rs_thresh;
+	if (desc >= txq->nb_tx_desc) {
+		desc -= txq->nb_tx_desc;
+		if (desc >= txq->nb_tx_desc)
+			desc -= txq->nb_tx_desc;
+	}
+
+	status = &txq->tx_ring[desc].cmd_type_offset_bsz;
+	mask = rte_le_to_cpu_64(AVF_TXD_QW1_DTYPE_MASK);
+	expect = rte_cpu_to_le_64(
+		 AVF_TX_DESC_DTYPE_DESC_DONE << AVF_TXD_QW1_DTYPE_SHIFT);
+	if ((*status & mask) == expect)
+		return RTE_ETH_TX_DESC_DONE;
+
+	return RTE_ETH_TX_DESC_FULL;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index cad240d..e248f55 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -147,6 +147,13 @@ uint16_t avf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		       uint16_t nb_pkts);
 void avf_set_rx_function(struct rte_eth_dev *dev);
 void avf_set_tx_function(struct rte_eth_dev *dev);
+void avf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			  struct rte_eth_rxq_info *qinfo);
+void avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			  struct rte_eth_txq_info *qinfo);
+uint32_t avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+int avf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
+int avf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v7 11/14] net/i40e: support AVF basic interface
  2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Wenzhuo Lu
                             ` (9 preceding siblings ...)
  2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 10/14] net/avf: enable ops to check queue info and status Wenzhuo Lu
@ 2018-01-10 13:02           ` Wenzhuo Lu
  2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 12/14] net/avf: enable sse vector Rx Tx func Wenzhuo Lu
                             ` (3 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10 13:02 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Enable virtchnl offload capability negotiation and RSS_PF offload
to support the AVF basic interface.
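
The negotiation granted by the PF host can be summarised by the sketch
below. It is only an illustration of the logic added in i40e_pf.c; the
helper name is hypothetical, and the VIRTCHNL_VF_OFFLOAD_* flags come
from the base virtchnl.h.

#include <stdint.h>
#include <stdbool.h>

/* Mirrors I40E_VIRTCHNL_OFFLOAD_CAPS in the patch below */
#define PF_SUPPORTED_CAPS (VIRTCHNL_VF_OFFLOAD_L2 | \
			   VIRTCHNL_VF_OFFLOAD_VLAN | \
			   VIRTCHNL_VF_OFFLOAD_RSS_PF | \
			   VIRTCHNL_VF_OFFLOAD_RX_POLLING)

static uint32_t
negotiate_caps(uint32_t requested_caps, bool vf_is_v1_0)
{
	/* A virtchnl 1.0 VF cannot request capabilities; it always
	 * gets the legacy L2 + VLAN set.
	 */
	if (vf_is_v1_0)
		return VIRTCHNL_VF_OFFLOAD_L2 | VIRTCHNL_VF_OFFLOAD_VLAN;

	/* Otherwise the PF grants the intersection of what the VF
	 * requested and what the host supports.
	 */
	return requested_caps & PF_SUPPORTED_CAPS;
}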

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c |  69 ++++++++++++++++----
 drivers/net/i40e/i40e_ethdev.h |   5 ++
 drivers/net/i40e/i40e_pf.c     | 140 +++++++++++++++++++++++++++++++++++++----
 drivers/net/i40e/i40e_pf.h     |   6 ++
 4 files changed, 195 insertions(+), 25 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 285d92b..10bb4eb 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3649,6 +3649,7 @@ static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint32_t reg;
 	int ret;
 
 	if (!lut)
@@ -3665,14 +3666,22 @@ static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 		uint32_t *lut_dw = (uint32_t *)lut;
 		uint16_t i, lut_size_dw = lut_size / 4;
 
-		for (i = 0; i < lut_size_dw; i++)
-			lut_dw[i] = I40E_READ_REG(hw, I40E_PFQF_HLUT(i));
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i < lut_size_dw; i++) {
+				reg = I40E_VFQF_HLUT1(i, vsi->user_param);
+				lut_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+		} else {
+			for (i = 0; i < lut_size_dw; i++)
+				lut_dw[i] = I40E_READ_REG(hw,
+							  I40E_PFQF_HLUT(i));
+		}
 	}
 
 	return 0;
 }
 
-static int
+int
 i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 {
 	struct i40e_pf *pf;
@@ -3696,8 +3705,17 @@ static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 		uint32_t *lut_dw = (uint32_t *)lut;
 		uint16_t i, lut_size_dw = lut_size / 4;
 
-		for (i = 0; i < lut_size_dw; i++)
-			I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i), lut_dw[i]);
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i < lut_size_dw; i++)
+				I40E_WRITE_REG(
+					hw,
+					I40E_VFQF_HLUT1(i, vsi->user_param),
+					lut_dw[i]);
+		} else {
+			for (i = 0; i < lut_size_dw; i++)
+				I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i),
+					       lut_dw[i]);
+		}
 		I40E_WRITE_FLUSH(hw);
 	}
 
@@ -6669,17 +6687,20 @@ struct i40e_vsi *
 	I40E_WRITE_FLUSH(hw);
 }
 
-static int
+int
 i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len)
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint16_t key_idx = (vsi->type == I40E_VSI_SRIOV) ?
+			   I40E_VFQF_HKEY_MAX_INDEX :
+			   I40E_PFQF_HKEY_MAX_INDEX;
 	int ret = 0;
 
 	if (!key || key_len == 0) {
 		PMD_DRV_LOG(DEBUG, "No key to be configured");
 		return 0;
-	} else if (key_len != (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+	} else if (key_len != (key_idx + 1) *
 		sizeof(uint32_t)) {
 		PMD_DRV_LOG(ERR, "Invalid key length %u", key_len);
 		return -EINVAL;
@@ -6696,8 +6717,18 @@ struct i40e_vsi *
 		uint32_t *hash_key = (uint32_t *)key;
 		uint16_t i;
 
-		for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
-			i40e_write_rx_ctl(hw, I40E_PFQF_HKEY(i), hash_key[i]);
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++)
+				I40E_WRITE_REG(
+					hw,
+					I40E_VFQF_HKEY1(i, vsi->user_param),
+					hash_key[i]);
+
+		} else {
+			for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
+				I40E_WRITE_REG(hw, I40E_PFQF_HKEY(i),
+					       hash_key[i]);
+		}
 		I40E_WRITE_FLUSH(hw);
 	}
 
@@ -6709,6 +6740,7 @@ struct i40e_vsi *
 {
 	struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
 	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
+	uint32_t reg;
 	int ret;
 
 	if (!key || !key_len)
@@ -6725,11 +6757,22 @@ struct i40e_vsi *
 		uint32_t *key_dw = (uint32_t *)key;
 		uint16_t i;
 
-		for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
-			key_dw[i] = i40e_read_rx_ctl(hw, I40E_PFQF_HKEY(i));
+		if (vsi->type == I40E_VSI_SRIOV) {
+			for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++) {
+				reg = I40E_VFQF_HKEY1(i, vsi->user_param);
+				key_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+			*key_len = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
+				   sizeof(uint32_t);
+		} else {
+			for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++) {
+				reg = I40E_PFQF_HKEY(i);
+				key_dw[i] = i40e_read_rx_ctl(hw, reg);
+			}
+			*key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
+				   sizeof(uint32_t);
+		}
 	}
-	*key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
-
 	return 0;
 }
 
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index f2b4b70..de2797e 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -397,6 +397,9 @@ struct i40e_pf_vf {
 	uint16_t lan_nb_qps; /* Actual queues allocated */
 	uint16_t reset_cnt; /* Total vf reset times */
 	struct ether_addr mac_addr;  /* Default MAC address */
+	/* version of the virtchnl from VF */
+	struct virtchnl_version_info version;
+	uint32_t request_caps; /* offload caps requested from VF */
 };
 
 /*
@@ -1169,6 +1172,8 @@ void i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
 int i40e_flush_queue_region_all_conf(struct rte_eth_dev *dev,
 		struct i40e_hw *hw, struct i40e_pf *pf, uint16_t on);
 void i40e_init_queue_region_conf(struct rte_eth_dev *dev);
+int i40e_set_rss_key(struct i40e_vsi *vsi, uint8_t *key, uint8_t key_len);
+int i40e_set_rss_lut(struct i40e_vsi *vsi, uint8_t *lut, uint16_t lut_size);
 
 #define I40E_DEV_TO_PCI(eth_dev) \
 	RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index 1bca250..7508444 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -244,19 +244,23 @@
 }
 
 static void
-i40e_pf_host_process_cmd_version(struct i40e_pf_vf *vf, bool b_op)
+i40e_pf_host_process_cmd_version(struct i40e_pf_vf *vf, uint8_t *msg,
+				 bool b_op)
 {
 	struct virtchnl_version_info info;
 
-	/* Respond like a Linux PF host in order to support both DPDK VF and
-	 * Linux VF driver. The expense is original DPDK host specific feature
+	/* VF and PF drivers need to follow the virtchnl definition, no matter
+	 * whether it is the DPDK driver or a kernel driver.
+	 * The original DPDK host-specific features
 	 * like CFG_VLAN_PVID and CONFIG_VSI_QUEUES_EXT will not be available.
-	 *
-	 * DPDK VF also can't identify host driver by version number returned.
-	 * It always assume talking with Linux PF.
 	 */
+
 	info.major = VIRTCHNL_VERSION_MAJOR;
-	info.minor = VIRTCHNL_VERSION_MINOR_NO_VF_CAPS;
+	vf->version = *(struct virtchnl_version_info *)msg;
+	if (VF_IS_V10(&vf->version))
+		info.minor = VIRTCHNL_VERSION_MINOR_NO_VF_CAPS;
+	else
+		info.minor = VIRTCHNL_VERSION_MINOR;
 
 	if (b_op)
 		i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_VERSION,
@@ -280,11 +284,13 @@
 }
 
 static int
-i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, bool b_op)
+i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
+					 bool b_op)
 {
 	struct virtchnl_vf_resource *vf_res = NULL;
 	struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
 	uint32_t len = 0;
+	uint64_t default_hena = I40E_RSS_HENA_ALL;
 	int ret = I40E_SUCCESS;
 
 	if (!b_op) {
@@ -308,11 +314,35 @@
 		goto send_msg;
 	}
 
-	vf_res->vf_offload_flags = VIRTCHNL_VF_OFFLOAD_L2 |
-				VIRTCHNL_VF_OFFLOAD_VLAN;
+	if (VF_IS_V10(&vf->version)) /* doesn't support offload negotiation */
+		vf->request_caps = VIRTCHNL_VF_OFFLOAD_L2 |
+				   VIRTCHNL_VF_OFFLOAD_VLAN;
+	else
+		vf->request_caps = *(uint32_t *)msg;
+
+	/* Enable all RSS offload types by default,
+	 * hena setting via virtchnl is not supported yet.
+	 */
+	if (vf->request_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		I40E_WRITE_REG(hw, I40E_VFQF_HENA1(0, vf->vf_idx),
+			       (uint32_t)default_hena);
+		I40E_WRITE_REG(hw, I40E_VFQF_HENA1(1, vf->vf_idx),
+			       (uint32_t)(default_hena >> 32));
+		I40E_WRITE_FLUSH(hw);
+	}
+
+	vf_res->vf_offload_flags = vf->request_caps &
+				   I40E_VIRTCHNL_OFFLOAD_CAPS;
+	/* For X722, it supports write back on ITR
+	 * without binding queue to interrupt vector.
+	 */
+	if (hw->mac.type == I40E_MAC_X722)
+		vf_res->vf_offload_flags |= VIRTCHNL_VF_OFFLOAD_WB_ON_ITR;
 	vf_res->max_vectors = hw->func_caps.num_msix_vectors_vf;
 	vf_res->num_queue_pairs = vf->vsi->nb_qps;
 	vf_res->num_vsis = I40E_DEFAULT_VF_VSI_NUM;
+	vf_res->rss_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) * 4;
+	vf_res->rss_lut_size = (I40E_VFQF_HLUT1_MAX_INDEX + 1) * 4;
 
 	/* Change below setting if PF host can support more VSIs for VF */
 	vf_res->vsi_res[0].vsi_type = VIRTCHNL_VSI_SRIOV;
@@ -1061,6 +1091,84 @@
 	return ret;
 }
 
+static int
+i40e_pf_host_process_cmd_set_rss_lut(struct i40e_pf_vf *vf,
+				     uint8_t *msg,
+				     uint16_t msglen,
+				     bool b_op)
+{
+	struct virtchnl_rss_lut *rss_lut = (struct virtchnl_rss_lut *)msg;
+	uint16_t valid_len;
+	int ret = I40E_SUCCESS;
+
+	if (!b_op) {
+		i40e_pf_host_send_msg_to_vf(
+			vf,
+			VIRTCHNL_OP_CONFIG_RSS_LUT,
+			I40E_NOT_SUPPORTED, NULL, 0);
+		return ret;
+	}
+
+	if (!msg || msglen <= sizeof(struct virtchnl_rss_lut)) {
+		PMD_DRV_LOG(ERR, "set_rss_lut argument too short");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+	valid_len = sizeof(struct virtchnl_rss_lut) + rss_lut->lut_entries - 1;
+	if (msglen < valid_len) {
+		PMD_DRV_LOG(ERR, "set_rss_lut length mismatch");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+	ret = i40e_set_rss_lut(vf->vsi, rss_lut->lut, rss_lut->lut_entries);
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_LUT,
+				    ret, NULL, 0);
+
+	return ret;
+}
+
+static int
+i40e_pf_host_process_cmd_set_rss_key(struct i40e_pf_vf *vf,
+				     uint8_t *msg,
+				     uint16_t msglen,
+				     bool b_op)
+{
+	struct virtchnl_rss_key *rss_key = (struct virtchnl_rss_key *)msg;
+	uint16_t valid_len;
+	int ret = I40E_SUCCESS;
+
+	if (!b_op) {
+		i40e_pf_host_send_msg_to_vf(
+			vf,
+			VIRTCHNL_OP_CONFIG_RSS_KEY,
+			I40E_NOT_SUPPORTED, NULL, 0);
+		return ret;
+	}
+
+	if (!msg || msglen <= sizeof(struct virtchnl_rss_key)) {
+		PMD_DRV_LOG(ERR, "set_rss_key argument too short");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+	valid_len = sizeof(struct virtchnl_rss_key) + rss_key->key_len - 1;
+	if (msglen < valid_len) {
+		PMD_DRV_LOG(ERR, "set_rss_key length mismatch");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+	ret = i40e_set_rss_key(vf->vsi, rss_key->key, rss_key->key_len);
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_KEY,
+				    ret, NULL, 0);
+
+	return ret;
+}
+
 void
 i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
 {
@@ -1167,7 +1275,7 @@
 	switch (opcode) {
 	case VIRTCHNL_OP_VERSION:
 		PMD_DRV_LOG(INFO, "OP_VERSION received");
-		i40e_pf_host_process_cmd_version(vf, b_op);
+		i40e_pf_host_process_cmd_version(vf, msg, b_op);
 		break;
 	case VIRTCHNL_OP_RESET_VF:
 		PMD_DRV_LOG(INFO, "OP_RESET_VF received");
@@ -1175,7 +1283,7 @@
 		break;
 	case VIRTCHNL_OP_GET_VF_RESOURCES:
 		PMD_DRV_LOG(INFO, "OP_GET_VF_RESOURCES received");
-		i40e_pf_host_process_cmd_get_vf_resource(vf, b_op);
+		i40e_pf_host_process_cmd_get_vf_resource(vf, msg, b_op);
 		break;
 	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
 		PMD_DRV_LOG(INFO, "OP_CONFIG_VSI_QUEUES received");
@@ -1236,6 +1344,14 @@
 		PMD_DRV_LOG(INFO, "OP_DISABLE_VLAN_STRIPPING received");
 		i40e_pf_host_process_cmd_disable_vlan_strip(vf, b_op);
 		break;
+	case VIRTCHNL_OP_CONFIG_RSS_LUT:
+		PMD_DRV_LOG(INFO, "OP_CONFIG_RSS_LUT received");
+		i40e_pf_host_process_cmd_set_rss_lut(vf, msg, msglen, b_op);
+		break;
+	case VIRTCHNL_OP_CONFIG_RSS_KEY:
+		PMD_DRV_LOG(INFO, "OP_CONFIG_RSS_KEY received");
+		i40e_pf_host_process_cmd_set_rss_key(vf, msg, msglen, b_op);
+		break;
 	/* Don't add command supported below, which will
 	 * return an error code.
 	 */
diff --git a/drivers/net/i40e/i40e_pf.h b/drivers/net/i40e/i40e_pf.h
index 429f347..1809ba4 100644
--- a/drivers/net/i40e/i40e_pf.h
+++ b/drivers/net/i40e/i40e_pf.h
@@ -8,6 +8,12 @@
 /* Default setting on number of VSIs that VF can contain */
 #define I40E_DEFAULT_VF_VSI_NUM 1
 
+#define I40E_VIRTCHNL_OFFLOAD_CAPS ( \
+	VIRTCHNL_VF_OFFLOAD_L2 | \
+	VIRTCHNL_VF_OFFLOAD_VLAN | \
+	VIRTCHNL_VF_OFFLOAD_RSS_PF | \
+	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
+
 struct virtchnl_vlan_offload_info {
 	uint16_t vsi_id;
 	uint8_t enable_vlan_strip;
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v7 12/14] net/avf: enable sse vector Rx Tx func
  2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Wenzhuo Lu
                             ` (10 preceding siblings ...)
  2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 11/14] net/i40e: support AVF basic interface Wenzhuo Lu
@ 2018-01-10 13:02           ` Wenzhuo Lu
  2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 13/14] net/avf: enable bulk allocate Rx func Wenzhuo Lu
                             ` (2 subsequent siblings)
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10 13:02 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 config/common_base                    |   1 +
 doc/guides/nics/features/avf_vec.ini  |  36 ++
 drivers/net/avf/Makefile              |   3 +
 drivers/net/avf/avf.h                 |   4 +
 drivers/net/avf/avf_ethdev.c          |  11 +
 drivers/net/avf/avf_rxtx.c            | 172 ++++++++-
 drivers/net/avf/avf_rxtx.h            |  36 +-
 drivers/net/avf/avf_rxtx_vec_common.h | 210 +++++++++++
 drivers/net/avf/avf_rxtx_vec_sse.c    | 656 ++++++++++++++++++++++++++++++++++
 9 files changed, 1118 insertions(+), 11 deletions(-)
 create mode 100644 doc/guides/nics/features/avf_vec.ini
 create mode 100644 drivers/net/avf/avf_rxtx_vec_common.h
 create mode 100644 drivers/net/avf/avf_rxtx_vec_sse.c

diff --git a/config/common_base b/config/common_base
index b1f1c1c..f9363ff 100644
--- a/config/common_base
+++ b/config/common_base
@@ -229,6 +229,7 @@ CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y
 # Compile burst-oriented AVF PMD driver
 #
 CONFIG_RTE_LIBRTE_AVF_PMD=y
+CONFIG_RTE_LIBRTE_AVF_INC_VECTOR=y
 CONFIG_RTE_LIBRTE_AVF_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_AVF_DEBUG_TX_FREE=n
 CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=n
diff --git a/doc/guides/nics/features/avf_vec.ini b/doc/guides/nics/features/avf_vec.ini
new file mode 100644
index 0000000..45dd5e5
--- /dev/null
+++ b/doc/guides/nics/features/avf_vec.ini
@@ -0,0 +1,36 @@
+;
+; Supported features of the 'avf_vec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
+Queue start/stop     = Y
+MTU update           = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+TSO                  = Y
+Promiscuous mode     = Y
+Allmulticast mode    = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
+RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
+VLAN filter          = Y
+CRC offload          = Y
+VLAN offload         = P
+L3 checksum offload  = P
+L4 checksum offload  = P
+Packet type parsing  = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
+Basic stats          = Y
+Multiprocess aware   = Y
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/drivers/net/avf/Makefile b/drivers/net/avf/Makefile
index 8d54fc9..5964230 100644
--- a/drivers/net/avf/Makefile
+++ b/drivers/net/avf/Makefile
@@ -47,5 +47,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_common.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_vchnl.c
 SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += avf_rxtx.c
+ifeq ($(CONFIG_RTE_ARCH_x86), y)
+SRCS-$(CONFIG_RTE_LIBRTE_AVF_INC_VECTOR) += avf_rxtx_vec_sse.c
+endif
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index ea48310..b79bc5a 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -119,6 +119,10 @@ struct avf_adapter {
 	struct avf_hw hw;
 	struct rte_eth_dev *eth_dev;
 	struct avf_info vf;
+
+	/* For vector PMD */
+	bool rx_vec_allowed;
+	bool tx_vec_allowed;
 };
 
 /* AVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index e00bb5d..127fdb5 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -121,6 +121,17 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(ad);
 	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	/* Initialize to TRUE. If any Rx or Tx queue fails to meet the
+	 * vector Rx/Tx preconditions, the corresponding flag is reset.
+	 */
+	ad->rx_vec_allowed = true;
+	ad->tx_vec_allowed = true;
+#else
+	ad->rx_vec_allowed = false;
+	ad->tx_vec_allowed = false;
+#endif
+
 	/* Vlan stripping setting */
 	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) {
 		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index 0fea8f9..b542532 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -92,6 +92,34 @@
 	return 0;
 }
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+static inline bool
+check_rx_vec_allow(struct avf_rx_queue *rxq)
+{
+	if (rxq->rx_free_thresh >= AVF_VPMD_RX_MAX_BURST &&
+	    rxq->nb_rx_desc % rxq->rx_free_thresh == 0) {
+		PMD_INIT_LOG(DEBUG, "Vector Rx can be enabled on this rxq.");
+		return TRUE;
+	}
+
+	PMD_INIT_LOG(DEBUG, "Vector Rx cannot be enabled on this rxq.");
+	return FALSE;
+}
+
+static inline bool
+check_tx_vec_allow(struct avf_tx_queue *txq)
+{
+	if ((txq->txq_flags & AVF_SIMPLE_FLAGS) == AVF_SIMPLE_FLAGS &&
+	    txq->rs_thresh >= AVF_VPMD_TX_MAX_BURST &&
+	    txq->rs_thresh <= AVF_VPMD_TX_MAX_FREE_BUF) {
+		PMD_INIT_LOG(DEBUG, "Vector Tx can be enabled on this txq.");
+		return TRUE;
+	}
+	PMD_INIT_LOG(DEBUG, "Vector Tx cannot be enabled on this txq.");
+	return FALSE;
+}
+#endif
+
 static inline void
 reset_rx_queue(struct avf_rx_queue *rxq)
 {
@@ -225,6 +253,14 @@
 	}
 }
 
+static const struct avf_rxq_ops def_rxq_ops = {
+	.release_mbufs = release_rxq_mbufs,
+};
+
+static const struct avf_txq_ops def_txq_ops = {
+	.release_mbufs = release_txq_mbufs,
+};
+
 int
 avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		       uint16_t nb_desc, unsigned int socket_id,
@@ -325,7 +361,12 @@
 	rxq->q_set = TRUE;
 	dev->data->rx_queues[queue_idx] = rxq;
 	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
+	rxq->ops = &def_rxq_ops;
 
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	if (check_rx_vec_allow(rxq) == FALSE)
+		ad->rx_vec_allowed = false;
+#endif
 	return 0;
 }
 
@@ -337,6 +378,8 @@
 		       const struct rte_eth_txconf *tx_conf)
 {
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct avf_adapter *ad =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct avf_tx_queue *txq;
 	const struct rte_memzone *mz;
 	uint32_t ring_size;
@@ -416,6 +459,12 @@
 	txq->q_set = TRUE;
 	dev->data->tx_queues[queue_idx] = txq;
 	txq->qtx_tail = hw->hw_addr + AVF_QTX_TAIL1(queue_idx);
+	txq->ops = &def_txq_ops;
+
+#ifdef RTE_LIBRTE_AVF_INC_VECTOR
+	if (check_tx_vec_allow(txq) == FALSE)
+		ad->tx_vec_allowed = false;
+#endif
 
 	return 0;
 }
@@ -514,7 +563,7 @@
 	}
 
 	rxq = dev->data->rx_queues[rx_queue_id];
-	release_rxq_mbufs(rxq);
+	rxq->ops->release_mbufs(rxq);
 	reset_rx_queue(rxq);
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -542,7 +591,7 @@
 	}
 
 	txq = dev->data->tx_queues[tx_queue_id];
-	release_txq_mbufs(txq);
+	txq->ops->release_mbufs(txq);
 	reset_tx_queue(txq);
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -557,7 +606,7 @@
 	if (!q)
 		return;
 
-	release_rxq_mbufs(q);
+	q->ops->release_mbufs(q);
 	rte_free(q->sw_ring);
 	rte_memzone_free(q->mz);
 	rte_free(q);
@@ -571,7 +620,7 @@
 	if (!q)
 		return;
 
-	release_txq_mbufs(q);
+	q->ops->release_mbufs(q);
 	rte_free(q->sw_ring);
 	rte_memzone_free(q->mz);
 	rte_free(q);
@@ -595,7 +644,7 @@
 		txq = dev->data->tx_queues[i];
 		if (!txq)
 			continue;
-		release_txq_mbufs(txq);
+		txq->ops->release_mbufs(txq);
 		reset_tx_queue(txq);
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
@@ -603,7 +652,7 @@
 		rxq = dev->data->rx_queues[i];
 		if (!rxq)
 			continue;
-		release_rxq_mbufs(rxq);
+		rxq->ops->release_mbufs(rxq);
 		reset_rx_queue(rxq);
 		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
@@ -1320,6 +1369,27 @@
 	return nb_tx;
 }
 
+static uint16_t
+avf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+		  uint16_t nb_pkts)
+{
+	uint16_t nb_tx = 0;
+	struct avf_tx_queue *txq = (struct avf_tx_queue *)tx_queue;
+
+	while (nb_pkts) {
+		uint16_t ret, num;
+
+		num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+		ret = avf_xmit_fixed_burst_vec(tx_queue, &tx_pkts[nb_tx], num);
+		nb_tx += ret;
+		nb_pkts -= ret;
+		if (ret < num)
+			break;
+	}
+
+	return nb_tx;
+}
+
 /* TX prep functions */
 uint16_t
 avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
@@ -1372,18 +1442,64 @@
 void
 avf_set_rx_function(struct rte_eth_dev *dev)
 {
-	if (dev->data->scattered_rx)
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_rx_queue *rxq;
+	int i;
+
+	if (adapter->rx_vec_allowed) {
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(DEBUG, "Using Vector Scattered Rx callback"
+				    " (port=%d).", dev->data->port_id);
+			dev->rx_pkt_burst = avf_recv_scattered_pkts_vec;
+		} else {
+			PMD_DRV_LOG(DEBUG, "Using Vector Rx callback"
+				    " (port=%d).", dev->data->port_id);
+			dev->rx_pkt_burst = avf_recv_pkts_vec;
+		}
+		for (i = 0; i < dev->data->nb_rx_queues; i++) {
+			rxq = dev->data->rx_queues[i];
+			if (!rxq)
+				continue;
+			avf_rxq_vec_setup(rxq);
+		}
+	} else if (dev->data->scattered_rx) {
+		PMD_DRV_LOG(DEBUG, "Using a Scattered Rx callback (port=%d).",
+			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_scattered_pkts;
-	else
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using Basic Rx callback (port=%d).",
+			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_pkts;
+	}
 }
 
 /* choose tx function*/
 void
 avf_set_tx_function(struct rte_eth_dev *dev)
 {
-	dev->tx_pkt_burst = avf_xmit_pkts;
-	dev->tx_pkt_prepare = avf_prep_pkts;
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_tx_queue *txq;
+	int i;
+
+	if (adapter->tx_vec_allowed) {
+		PMD_DRV_LOG(DEBUG, "Using Vector Tx callback (port=%d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = avf_xmit_pkts_vec;
+		dev->tx_pkt_prepare = NULL;
+		for (i = 0; i < dev->data->nb_tx_queues; i++) {
+			txq = dev->data->tx_queues[i];
+			if (!txq)
+				continue;
+			avf_txq_vec_setup(txq);
+		}
+	} else {
+		PMD_DRV_LOG(DEBUG, "Using Basic Tx callback (port=%d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = avf_xmit_pkts;
+		dev->tx_pkt_prepare = avf_prep_pkts;
+	}
 }
 
 void
@@ -1505,3 +1621,39 @@
 
 	return RTE_ETH_TX_DESC_FULL;
 }
+
+uint16_t __attribute__((weak))
+avf_recv_pkts_vec(__rte_unused void *rx_queue,
+		  __rte_unused struct rte_mbuf **rx_pkts,
+		  __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+uint16_t __attribute__((weak))
+avf_recv_scattered_pkts_vec(__rte_unused void *rx_queue,
+			    __rte_unused struct rte_mbuf **rx_pkts,
+			    __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+uint16_t __attribute__((weak))
+avf_xmit_fixed_burst_vec(__rte_unused void *tx_queue,
+			 __rte_unused struct rte_mbuf **tx_pkts,
+			 __rte_unused uint16_t nb_pkts)
+{
+	return 0;
+}
+
+int __attribute__((weak))
+avf_rxq_vec_setup(__rte_unused struct avf_rx_queue *rxq)
+{
+	return -1;
+}
+
+int __attribute__((weak))
+avf_txq_vec_setup(__rte_unused struct avf_tx_queue *txq)
+{
+	return -1;
+}
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index e248f55..82fd801 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -16,6 +16,15 @@
 /* used for Rx Bulk Allocate */
 #define AVF_RX_MAX_BURST         32
 
+/* used for Vector PMD */
+#define AVF_VPMD_RX_MAX_BURST    32
+#define AVF_VPMD_TX_MAX_BURST    32
+#define AVF_VPMD_DESCS_PER_LOOP  4
+#define AVF_VPMD_TX_MAX_FREE_BUF 64
+
+#define AVF_SIMPLE_FLAGS ((uint32_t)ETH_TXQ_FLAGS_NOMULTSEGS | \
+			  ETH_TXQ_FLAGS_NOOFFLOADS)
+
 #define DEFAULT_TX_RS_THRESH     32
 #define DEFAULT_TX_FREE_THRESH   32
 
@@ -45,6 +54,14 @@
 #define avf_rx_desc avf_32byte_rx_desc
 #endif
 
+struct avf_rxq_ops {
+	void (*release_mbufs)(struct avf_rx_queue *rxq);
+};
+
+struct avf_txq_ops {
+	void (*release_mbufs)(struct avf_tx_queue *txq);
+};
+
 /* Structure associated with each Rx queue. */
 struct avf_rx_queue {
 	struct rte_mempool *mp;       /* mbuf pool to populate Rx ring */
@@ -61,7 +78,12 @@ struct avf_rx_queue {
 	struct rte_mbuf *pkt_last_seg;  /* last segment of current packet */
 	struct rte_mbuf fake_mbuf;      /* dummy mbuf */
 
-	uint16_t port_id;       /* device port ID */
+	/* used for VPMD */
+	uint16_t rxrearm_nb;       /* number of remaining to be re-armed */
+	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
+	uint64_t mbuf_initializer; /* value to init mbufs */
+
+	uint16_t port_id;        /* device port ID */
 	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
 	uint16_t queue_id;      /* Rx queue index */
 	uint16_t rx_buf_len;    /* The packet buffer size */
@@ -70,6 +92,7 @@ struct avf_rx_queue {
 
 	bool q_set;             /* if rx queue has been configured */
 	bool rx_deferred_start; /* don't start this queue in dev start */
+	const struct avf_rxq_ops *ops;
 };
 
 struct avf_tx_entry {
@@ -102,6 +125,7 @@ struct avf_tx_queue {
 
 	bool q_set;                    /* if rx queue has been configured */
 	bool tx_deferred_start;        /* don't start this queue in dev start */
+	const struct avf_txq_ops *ops;
 };
 
 /* Offload features */
@@ -155,6 +179,16 @@ void avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 int avf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
 int avf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
+uint16_t avf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			   uint16_t nb_pkts);
+uint16_t avf_recv_scattered_pkts_vec(void *rx_queue,
+				     struct rte_mbuf **rx_pkts,
+				     uint16_t nb_pkts);
+uint16_t avf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+				  uint16_t nb_pkts);
+int avf_rxq_vec_setup(struct avf_rx_queue *rxq);
+int avf_txq_vec_setup(struct avf_tx_queue *txq);
+
 static inline
 void avf_dump_rx_descriptor(struct avf_rx_queue *rxq,
 			    const void *desc,
diff --git a/drivers/net/avf/avf_rxtx_vec_common.h b/drivers/net/avf/avf_rxtx_vec_common.h
new file mode 100644
index 0000000..56a23a7
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx_vec_common.h
@@ -0,0 +1,210 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#ifndef _AVF_RXTX_VEC_COMMON_H_
+#define _AVF_RXTX_VEC_COMMON_H_
+#include <stdint.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+
+#include "avf.h"
+#include "avf_rxtx.h"
+
+static inline uint16_t
+reassemble_packets(struct avf_rx_queue *rxq, struct rte_mbuf **rx_bufs,
+		   uint16_t nb_bufs, uint8_t *split_flags)
+{
+	struct rte_mbuf *pkts[AVF_VPMD_RX_MAX_BURST];
+	struct rte_mbuf *start = rxq->pkt_first_seg;
+	struct rte_mbuf *end =  rxq->pkt_last_seg;
+	unsigned int pkt_idx, buf_idx;
+
+	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
+		if (end) {
+			/* processing a split packet */
+			end->next = rx_bufs[buf_idx];
+			rx_bufs[buf_idx]->data_len += rxq->crc_len;
+
+			start->nb_segs++;
+			start->pkt_len += rx_bufs[buf_idx]->data_len;
+			end = end->next;
+
+			if (!split_flags[buf_idx]) {
+				/* it's the last packet of the set */
+				start->hash = end->hash;
+				start->ol_flags = end->ol_flags;
+				/* we need to strip crc for the whole packet */
+				start->pkt_len -= rxq->crc_len;
+				if (end->data_len > rxq->crc_len) {
+					end->data_len -= rxq->crc_len;
+				} else {
+					/* free up last mbuf */
+					struct rte_mbuf *secondlast = start;
+
+					start->nb_segs--;
+					while (secondlast->next != end)
+						secondlast = secondlast->next;
+					secondlast->data_len -= (rxq->crc_len -
+							end->data_len);
+					secondlast->next = NULL;
+					rte_pktmbuf_free_seg(end);
+				}
+				pkts[pkt_idx++] = start;
+				start = NULL;
+				end = NULL;
+			}
+		} else {
+			/* not processing a split packet */
+			if (!split_flags[buf_idx]) {
+				/* not a split packet, save and skip */
+				pkts[pkt_idx++] = rx_bufs[buf_idx];
+				continue;
+			}
+			end = start = rx_bufs[buf_idx];
+			rx_bufs[buf_idx]->data_len += rxq->crc_len;
+			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
+		}
+	}
+
+	/* save the partial packet for next time */
+	rxq->pkt_first_seg = start;
+	rxq->pkt_last_seg = end;
+	memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
+	return pkt_idx;
+}
+
+static __rte_always_inline int
+avf_tx_free_bufs(struct avf_tx_queue *txq)
+{
+	struct avf_tx_entry *txep;
+	uint32_t n;
+	uint32_t i;
+	int nb_free = 0;
+	struct rte_mbuf *m, *free[AVF_VPMD_TX_MAX_FREE_BUF];
+
+	/* check DD bits on threshold descriptor */
+	if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz &
+			rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK)) !=
+			rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	n = txq->rs_thresh;
+
+	 /* first buffer to free from S/W ring is at index
+	  * tx_next_dd - (tx_rs_thresh-1)
+	  */
+	txep = &txq->sw_ring[txq->next_dd - (n - 1)];
+	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
+	if (likely(m != NULL)) {
+		free[0] = m;
+		nb_free = 1;
+		for (i = 1; i < n; i++) {
+			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+			if (likely(m != NULL)) {
+				if (likely(m->pool == free[0]->pool)) {
+					free[nb_free++] = m;
+				} else {
+					rte_mempool_put_bulk(free[0]->pool,
+							     (void *)free,
+							     nb_free);
+					free[0] = m;
+					nb_free = 1;
+				}
+			}
+		}
+		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+	} else {
+		for (i = 1; i < n; i++) {
+			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+			if (m)
+				rte_mempool_put(m->pool, m);
+		}
+	}
+
+	/* buffers were freed, update counters */
+	txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh);
+	txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh);
+	if (txq->next_dd >= txq->nb_tx_desc)
+		txq->next_dd = (uint16_t)(txq->rs_thresh - 1);
+
+	return txq->rs_thresh;
+}
+
+static __rte_always_inline void
+tx_backlog_entry(struct avf_tx_entry *txep,
+		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	int i;
+
+	for (i = 0; i < (int)nb_pkts; ++i)
+		txep[i].mbuf = tx_pkts[i];
+}
+
+static inline void
+_avf_rx_queue_release_mbufs_vec(struct avf_rx_queue *rxq)
+{
+	const unsigned int mask = rxq->nb_rx_desc - 1;
+	unsigned int i;
+
+	if (!rxq->sw_ring || rxq->rxrearm_nb >= rxq->nb_rx_desc)
+		return;
+
+	/* free all mbufs that are valid in the ring */
+	if (rxq->rxrearm_nb == 0) {
+		for (i = 0; i < rxq->nb_rx_desc; i++) {
+			if (rxq->sw_ring[i])
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	} else {
+		for (i = rxq->rx_tail;
+		     i != rxq->rxrearm_start;
+		     i = (i + 1) & mask) {
+			if (rxq->sw_ring[i])
+				rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+		}
+	}
+
+	rxq->rxrearm_nb = rxq->nb_rx_desc;
+
+	/* set all entries to NULL */
+	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
+}
+
+static inline void
+_avf_tx_queue_release_mbufs_vec(struct avf_tx_queue *txq)
+{
+	unsigned int i;
+	const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
+
+	if (!txq->sw_ring || txq->nb_free == max_desc)
+		return;
+
+	i = txq->next_dd - txq->rs_thresh + 1;
+	if (txq->tx_tail < i) {
+		for (; i < txq->nb_tx_desc; i++) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+			txq->sw_ring[i].mbuf = NULL;
+		}
+		i = 0;
+	}
+}
+
+static inline int
+avf_rxq_vec_setup_default(struct avf_rx_queue *rxq)
+{
+	uintptr_t p;
+	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
+
+	mb_def.nb_segs = 1;
+	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
+	mb_def.port = rxq->port_id;
+	rte_mbuf_refcnt_set(&mb_def, 1);
+
+	/* prevent compiler reordering: rearm_data covers previous fields */
+	rte_compiler_barrier();
+	p = (uintptr_t)&mb_def.rearm_data;
+	rxq->mbuf_initializer = *(uint64_t *)p;
+	return 0;
+}
+#endif
diff --git a/drivers/net/avf/avf_rxtx_vec_sse.c b/drivers/net/avf/avf_rxtx_vec_sse.c
new file mode 100644
index 0000000..8f389f3
--- /dev/null
+++ b/drivers/net/avf/avf_rxtx_vec_sse.c
@@ -0,0 +1,656 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Intel Corporation
+ */
+
+#include <stdint.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+
+#include "base/avf_prototype.h"
+#include "base/avf_type.h"
+#include "avf.h"
+#include "avf_rxtx.h"
+#include "avf_rxtx_vec_common.h"
+
+#include <tmmintrin.h>
+
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#endif
+
+static inline void
+avf_rxq_rearm(struct avf_rx_queue *rxq)
+{
+	int i;
+	uint16_t rx_id;
+
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxp = &rxq->sw_ring[rxq->rxrearm_start];
+	struct rte_mbuf *mb0, *mb1;
+	__m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM,
+			RTE_PKTMBUF_HEADROOM);
+	__m128i dma_addr0, dma_addr1;
+
+	rxdp = rxq->rx_ring + rxq->rxrearm_start;
+
+	/* Pull 'n' more MBUFs into the software ring */
+	if (rte_mempool_get_bulk(rxq->mp, (void *)rxp,
+				 rxq->rx_free_thresh) < 0) {
+		if (rxq->rxrearm_nb + rxq->rx_free_thresh >= rxq->nb_rx_desc) {
+			dma_addr0 = _mm_setzero_si128();
+			for (i = 0; i < AVF_VPMD_DESCS_PER_LOOP; i++) {
+				rxp[i] = &rxq->fake_mbuf;
+				_mm_store_si128((__m128i *)&rxdp[i].read,
+						dma_addr0);
+			}
+		}
+		rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed +=
+			rxq->rx_free_thresh;
+		return;
+	}
+
+	/* Initialize the mbufs in vector, process 2 mbufs in one loop */
+	for (i = 0; i < rxq->rx_free_thresh; i += 2, rxp += 2) {
+		__m128i vaddr0, vaddr1;
+
+		mb0 = rxp[0];
+		mb1 = rxp[1];
+
+		/* load buf_addr(lo 64bit) and buf_iova(hi 64bit) */
+		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) !=
+				offsetof(struct rte_mbuf, buf_addr) + 8);
+		vaddr0 = _mm_loadu_si128((__m128i *)&mb0->buf_addr);
+		vaddr1 = _mm_loadu_si128((__m128i *)&mb1->buf_addr);
+
+		/* convert pa to dma_addr hdr/data */
+		dma_addr0 = _mm_unpackhi_epi64(vaddr0, vaddr0);
+		dma_addr1 = _mm_unpackhi_epi64(vaddr1, vaddr1);
+
+		/* add headroom to pa values */
+		dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room);
+		dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room);
+
+		/* flush desc with pa dma_addr */
+		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr0);
+		_mm_store_si128((__m128i *)&rxdp++->read, dma_addr1);
+	}
+
+	rxq->rxrearm_start += rxq->rx_free_thresh;
+	if (rxq->rxrearm_start >= rxq->nb_rx_desc)
+		rxq->rxrearm_start = 0;
+
+	rxq->rxrearm_nb -= rxq->rx_free_thresh;
+
+	rx_id = (uint16_t)((rxq->rxrearm_start == 0) ?
+			   (rxq->nb_rx_desc - 1) : (rxq->rxrearm_start - 1));
+
+	PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+		   "rearm_start=%u rearm_nb=%u",
+		   rxq->port_id, rxq->queue_id,
+		   rx_id, rxq->rxrearm_start, rxq->rxrearm_nb);
+
+	/* Update the tail pointer on the NIC */
+	AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+}
+
+static inline void
+desc_to_olflags_v(struct avf_rx_queue *rxq, __m128i descs[4],
+		  struct rte_mbuf **rx_pkts)
+{
+	const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_initializer);
+	__m128i rearm0, rearm1, rearm2, rearm3;
+
+	__m128i vlan0, vlan1, rss, l3_l4e;
+
+	/* mask everything except RSS, flow director and VLAN flags
+	 * bit2 is for VLAN tag, bit11 for flow director indication
+	 * bit13:12 for RSS indication.
+	 */
+	const __m128i rss_vlan_msk = _mm_set_epi32(
+			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804);
+
+	const __m128i cksum_mask = _mm_set_epi32(
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD,
+			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
+			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
+			PKT_RX_EIP_CKSUM_BAD);
+
+	/* map rss and vlan type to rss hash and vlan flag */
+	const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0,
+			0, 0, 0, 0,
+			0, 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			0, 0, 0, 0);
+
+	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
+			0, 0, 0, 0,
+			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
+			0, 0, PKT_RX_FDIR, 0);
+
+	const __m128i l3_l4e_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
+			/* shift right 1 bit to make sure it does not exceed 255 */
+			(PKT_RX_EIP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
+			 PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_EIP_CKSUM_BAD |
+			 PKT_RX_L4_CKSUM_BAD) >> 1,
+			(PKT_RX_EIP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_EIP_CKSUM_BAD) >> 1,
+			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
+			PKT_RX_IP_CKSUM_BAD >> 1,
+			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+
+	vlan0 = _mm_unpackhi_epi32(descs[0], descs[1]);
+	vlan1 = _mm_unpackhi_epi32(descs[2], descs[3]);
+	vlan0 = _mm_unpacklo_epi64(vlan0, vlan1);
+
+	vlan1 = _mm_and_si128(vlan0, rss_vlan_msk);
+	vlan0 = _mm_shuffle_epi8(vlan_flags, vlan1);
+
+	rss = _mm_srli_epi32(vlan1, 11);
+	rss = _mm_shuffle_epi8(rss_flags, rss);
+
+	l3_l4e = _mm_srli_epi32(vlan1, 22);
+	l3_l4e = _mm_shuffle_epi8(l3_l4e_flags, l3_l4e);
+	/* then we shift left 1 bit */
+	l3_l4e = _mm_slli_epi32(l3_l4e, 1);
+	/* we need to mask out the redundant bits */
+	l3_l4e = _mm_and_si128(l3_l4e, cksum_mask);
+
+	vlan0 = _mm_or_si128(vlan0, rss);
+	vlan0 = _mm_or_si128(vlan0, l3_l4e);
+
+	/* At this point, we have the 4 sets of flags in the low 16-bits
+	 * of each 32-bit value in vlan0.
+	 * We want to extract these, and merge them with the mbuf init data
+	 * so we can do a single 16-byte write to the mbuf to set the flags
+	 * and all the other initialization fields. Extracting the
+	 * appropriate flags means that we have to do a shift and blend for
+	 * each mbuf before we do the write.
+	 */
+	rearm0 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vlan0, 8), 0x10);
+	rearm1 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vlan0, 4), 0x10);
+	rearm2 = _mm_blend_epi16(mbuf_init, vlan0, 0x10);
+	rearm3 = _mm_blend_epi16(mbuf_init, _mm_srli_si128(vlan0, 4), 0x10);
+
+	/* write the rearm data and the olflags in one write */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
+			offsetof(struct rte_mbuf, rearm_data) + 8);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) !=
+			RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16));
+	_mm_store_si128((__m128i *)&rx_pkts[0]->rearm_data, rearm0);
+	_mm_store_si128((__m128i *)&rx_pkts[1]->rearm_data, rearm1);
+	_mm_store_si128((__m128i *)&rx_pkts[2]->rearm_data, rearm2);
+	_mm_store_si128((__m128i *)&rx_pkts[3]->rearm_data, rearm3);
+}
+
+#define PKTLEN_SHIFT     10
+
+static inline void
+desc_to_ptype_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
+{
+	__m128i ptype0 = _mm_unpackhi_epi64(descs[0], descs[1]);
+	__m128i ptype1 = _mm_unpackhi_epi64(descs[2], descs[3]);
+	static const uint32_t type_table[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	ptype0 = _mm_srli_epi64(ptype0, 30);
+	ptype1 = _mm_srli_epi64(ptype1, 30);
+
+	rx_pkts[0]->packet_type = type_table[_mm_extract_epi8(ptype0, 0)];
+	rx_pkts[1]->packet_type = type_table[_mm_extract_epi8(ptype0, 8)];
+	rx_pkts[2]->packet_type = type_table[_mm_extract_epi8(ptype1, 0)];
+	rx_pkts[3]->packet_type = type_table[_mm_extract_epi8(ptype1, 8)];
+}
+
+/* Notice:
+ * - if nb_pkts < AVF_VPMD_DESCS_PER_LOOP, no packet is returned
+ * - if nb_pkts > AVF_VPMD_RX_MAX_BURST, only AVF_VPMD_RX_MAX_BURST
+ *   DD bits are scanned
+ */
+static inline uint16_t
+_recv_raw_pkts_vec(struct avf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
+		   uint16_t nb_pkts, uint8_t *split_packet)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **sw_ring;
+	uint16_t nb_pkts_recd;
+	int pos;
+	uint64_t var;
+	__m128i shuf_msk;
+
+	__m128i crc_adjust = _mm_set_epi16(
+				0, 0, 0,    /* ignore non-length fields */
+				-rxq->crc_len, /* sub crc on data_len */
+				0,          /* ignore high-16bits of pkt_len */
+				-rxq->crc_len, /* sub crc on pkt_len */
+				0, 0            /* ignore pkt_type field */
+			);
+	/* compile-time check the above crc_adjust layout is correct.
+	 * NOTE: the first field (lowest address) is given last in set_epi16
+	 * call above.
+	 */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
+	__m128i dd_check, eop_check;
+
+	/* nb_pkts shall be less than or equal to AVF_VPMD_RX_MAX_BURST */
+	nb_pkts = RTE_MIN(nb_pkts, AVF_VPMD_RX_MAX_BURST);
+
+	/* nb_pkts has to be floor-aligned to AVF_VPMD_DESCS_PER_LOOP */
+	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, AVF_VPMD_DESCS_PER_LOOP);
+
+	/* Just the act of getting into the function from the application is
+	 * going to cost about 7 cycles
+	 */
+	rxdp = rxq->rx_ring + rxq->rx_tail;
+
+	rte_prefetch0(rxdp);
+
+	/* See if we need to rearm the RX queue - gives the prefetch a bit
+	 * of time to act
+	 */
+	if (rxq->rxrearm_nb > rxq->rx_free_thresh)
+		avf_rxq_rearm(rxq);
+
+	/* Before we start moving massive data around, check to see if
+	 * there is actually a packet available
+	 */
+	if (!(rxdp->wb.qword1.status_error_len &
+	      rte_cpu_to_le_32(1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+		return 0;
+
+	/* 4 packets DD mask */
+	dd_check = _mm_set_epi64x(0x0000000100000001LL, 0x0000000100000001LL);
+
+	/* 4 packets EOP mask */
+	eop_check = _mm_set_epi64x(0x0000000200000002LL, 0x0000000200000002LL);
+
+	/* mask to shuffle from desc. to mbuf */
+	shuf_msk = _mm_set_epi8(
+		7, 6, 5, 4,  /* octet 4~7, 32bits rss */
+		3, 2,        /* octet 2~3, low 16 bits vlan_macip */
+		15, 14,      /* octet 15~14, 16 bits data_len */
+		0xFF, 0xFF,  /* skip high 16 bits pkt_len, zero out */
+		15, 14,      /* octet 15~14, low 16 bits pkt_len */
+		0xFF, 0xFF, 0xFF, 0xFF /* pkt_type set as unknown */
+		);
+	/* Compile-time verify the shuffle mask
+	 * NOTE: some field positions already verified above, but duplicated
+	 * here for completeness in case of future modifications.
+	 */
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 10);
+	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) !=
+			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12);
+
+	/* Cache is empty -> need to scan the buffer rings, but first move
+	 * the next 'n' mbufs into the cache
+	 */
+	sw_ring = &rxq->sw_ring[rxq->rx_tail];
+
+	/* A. load 4 packet in one loop
+	 * [A*. mask out 4 unused dirty field in desc]
+	 * B. copy 4 mbuf point from swring to rx_pkts
+	 * C. calc the number of DD bits among the 4 packets
+	 * [C*. extract the end-of-packet bit, if requested]
+	 * D. fill info. from desc to mbuf
+	 */
+
+	for (pos = 0, nb_pkts_recd = 0; pos < nb_pkts;
+	     pos += AVF_VPMD_DESCS_PER_LOOP,
+	     rxdp += AVF_VPMD_DESCS_PER_LOOP) {
+		__m128i descs[AVF_VPMD_DESCS_PER_LOOP];
+		__m128i pkt_mb1, pkt_mb2, pkt_mb3, pkt_mb4;
+		__m128i zero, staterr, sterr_tmp1, sterr_tmp2;
+		/* 2 64 bit or 4 32 bit mbuf pointers in one XMM reg. */
+		__m128i mbp1;
+#if defined(RTE_ARCH_X86_64)
+		__m128i mbp2;
+#endif
+
+		/* B.1 load 2 (64 bit) or 4 (32 bit) mbuf points */
+		mbp1 = _mm_loadu_si128((__m128i *)&sw_ring[pos]);
+		/* Read desc statuses backwards to avoid race condition */
+		/* A.1 load 4 pkts desc */
+		descs[3] = _mm_loadu_si128((__m128i *)(rxdp + 3));
+		rte_compiler_barrier();
+
+		/* B.2 copy 2 64 bit or 4 32 bit mbuf point into rx_pkts */
+		_mm_storeu_si128((__m128i *)&rx_pkts[pos], mbp1);
+
+#if defined(RTE_ARCH_X86_64)
+		/* B.1 load 2 64 bit mbuf points */
+		mbp2 = _mm_loadu_si128((__m128i *)&sw_ring[pos + 2]);
+#endif
+
+		descs[2] = _mm_loadu_si128((__m128i *)(rxdp + 2));
+		rte_compiler_barrier();
+		/* B.1 load 2 mbuf point */
+		descs[1] = _mm_loadu_si128((__m128i *)(rxdp + 1));
+		rte_compiler_barrier();
+		descs[0] = _mm_loadu_si128((__m128i *)(rxdp));
+
+#if defined(RTE_ARCH_X86_64)
+		/* B.2 copy 2 mbuf point into rx_pkts  */
+		_mm_storeu_si128((__m128i *)&rx_pkts[pos + 2], mbp2);
+#endif
+
+		if (split_packet) {
+			rte_mbuf_prefetch_part2(rx_pkts[pos]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 1]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 2]);
+			rte_mbuf_prefetch_part2(rx_pkts[pos + 3]);
+		}
+
+		/* avoid compiler reorder optimization */
+		rte_compiler_barrier();
+
+		/* pkt 3,4 shift the pktlen field to be 16-bit aligned*/
+		const __m128i len3 = _mm_slli_epi32(descs[3], PKTLEN_SHIFT);
+		const __m128i len2 = _mm_slli_epi32(descs[2], PKTLEN_SHIFT);
+
+		/* merge the now-aligned packet length fields back in */
+		descs[3] = _mm_blend_epi16(descs[3], len3, 0x80);
+		descs[2] = _mm_blend_epi16(descs[2], len2, 0x80);
+
+		/* D.1 pkt 3,4 convert format from desc to pktmbuf */
+		pkt_mb4 = _mm_shuffle_epi8(descs[3], shuf_msk);
+		pkt_mb3 = _mm_shuffle_epi8(descs[2], shuf_msk);
+
+		/* C.1 4=>2 status err info only */
+		sterr_tmp2 = _mm_unpackhi_epi32(descs[3], descs[2]);
+		sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
+
+		desc_to_olflags_v(rxq, descs, &rx_pkts[pos]);
+
+		/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
+		pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
+		pkt_mb3 = _mm_add_epi16(pkt_mb3, crc_adjust);
+
+		/* pkt 1,2 shift the pktlen field to be 16-bit aligned*/
+		const __m128i len1 = _mm_slli_epi32(descs[1], PKTLEN_SHIFT);
+		const __m128i len0 = _mm_slli_epi32(descs[0], PKTLEN_SHIFT);
+
+		/* merge the now-aligned packet length fields back in */
+		descs[1] = _mm_blend_epi16(descs[1], len1, 0x80);
+		descs[0] = _mm_blend_epi16(descs[0], len0, 0x80);
+
+		/* D.1 pkt 1,2 convert format from desc to pktmbuf */
+		pkt_mb2 = _mm_shuffle_epi8(descs[1], shuf_msk);
+		pkt_mb1 = _mm_shuffle_epi8(descs[0], shuf_msk);
+
+		/* C.2 get 4 pkts status err value  */
+		zero = _mm_xor_si128(dd_check, dd_check);
+		staterr = _mm_unpacklo_epi32(sterr_tmp1, sterr_tmp2);
+
+		/* D.3 copy final 3,4 data to rx_pkts */
+		_mm_storeu_si128(
+			(void *)&rx_pkts[pos + 3]->rx_descriptor_fields1,
+			pkt_mb4);
+		_mm_storeu_si128(
+			(void *)&rx_pkts[pos + 2]->rx_descriptor_fields1,
+			pkt_mb3);
+
+		/* D.2 pkt 1,2 remove crc */
+		pkt_mb2 = _mm_add_epi16(pkt_mb2, crc_adjust);
+		pkt_mb1 = _mm_add_epi16(pkt_mb1, crc_adjust);
+
+		/* C* extract and record EOP bit */
+		if (split_packet) {
+			__m128i eop_shuf_mask = _mm_set_epi8(
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0xFF, 0xFF, 0xFF, 0xFF,
+					0x04, 0x0C, 0x00, 0x08
+					);
+
+			/* and with mask to extract bits, flipping 1-0 */
+			__m128i eop_bits = _mm_andnot_si128(staterr, eop_check);
+			/* the staterr values are not in order, as the count
+			 * of DD bits doesn't care. However, for end of
+			 * packet tracking, we do care, so shuffle. This also
+			 * compresses the 32-bit values to 8-bit
+			 */
+			eop_bits = _mm_shuffle_epi8(eop_bits, eop_shuf_mask);
+			/* store the resulting 32-bit value */
+			*(int *)split_packet = _mm_cvtsi128_si32(eop_bits);
+			split_packet += AVF_VPMD_DESCS_PER_LOOP;
+		}
+
+		/* C.3 calc available number of desc */
+		staterr = _mm_and_si128(staterr, dd_check);
+		staterr = _mm_packs_epi32(staterr, zero);
+
+		/* D.3 copy final 1,2 data to rx_pkts */
+		_mm_storeu_si128(
+			(void *)&rx_pkts[pos + 1]->rx_descriptor_fields1,
+			pkt_mb2);
+		_mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1,
+				 pkt_mb1);
+		desc_to_ptype_v(descs, &rx_pkts[pos]);
+		/* C.4 calc available number of desc */
+		var = __builtin_popcountll(_mm_cvtsi128_si64(staterr));
+		nb_pkts_recd += var;
+		if (likely(var != AVF_VPMD_DESCS_PER_LOOP))
+			break;
+	}
+
+	/* Update our internal tail pointer */
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_pkts_recd);
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail & (rxq->nb_rx_desc - 1));
+	rxq->rxrearm_nb = (uint16_t)(rxq->rxrearm_nb + nb_pkts_recd);
+
+	return nb_pkts_recd;
+}
+
+/* Notice:
+ * - if nb_pkts < AVF_VPMD_DESCS_PER_LOOP, no packet is returned
+ * - if nb_pkts > AVF_VPMD_RX_MAX_BURST, only AVF_VPMD_RX_MAX_BURST
+ *   DD bits are scanned
+ */
+uint16_t
+avf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+		  uint16_t nb_pkts)
+{
+	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
+}
+
+/* vPMD receive routine that reassembles scattered packets
+ * Notice:
+ * - if nb_pkts < AVF_VPMD_DESCS_PER_LOOP, no packet is returned
+ * - if nb_pkts > AVF_VPMD_RX_MAX_BURST, only AVF_VPMD_RX_MAX_BURST
+ *   DD bits are scanned
+ */
+uint16_t
+avf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			    uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = rx_queue;
+	uint8_t split_flags[AVF_VPMD_RX_MAX_BURST] = {0};
+	unsigned int i = 0;
+
+	/* get some new buffers */
+	uint16_t nb_bufs = _recv_raw_pkts_vec(rxq, rx_pkts, nb_pkts,
+					      split_flags);
+	if (nb_bufs == 0)
+		return 0;
+
+	/* happy day case, full burst + no packets to be joined */
+	const uint64_t *split_fl64 = (uint64_t *)split_flags;
+
+	if (!rxq->pkt_first_seg &&
+	    split_fl64[0] == 0 && split_fl64[1] == 0 &&
+	    split_fl64[2] == 0 && split_fl64[3] == 0)
+		return nb_bufs;
+
+	/* reassemble any packets that need reassembly*/
+	if (!rxq->pkt_first_seg) {
+		/* find the first split flag, and only reassemble then*/
+		while (i < nb_bufs && !split_flags[i])
+			i++;
+		if (i == nb_bufs)
+			return nb_bufs;
+	}
+	return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
+		&split_flags[i]);
+}
+
+static inline void
+vtx1(volatile struct avf_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)
+{
+	uint64_t high_qw =
+			(AVF_TX_DESC_DTYPE_DATA |
+			 ((uint64_t)flags  << AVF_TXD_QW1_CMD_SHIFT) |
+			 ((uint64_t)pkt->data_len <<
+			  AVF_TXD_QW1_TX_BUF_SZ_SHIFT));
+
+	__m128i descriptor = _mm_set_epi64x(high_qw,
+					    pkt->buf_iova + pkt->data_off);
+	_mm_store_si128((__m128i *)txdp, descriptor);
+}
+
+static inline void
+avf_vtx(volatile struct avf_tx_desc *txdp, struct rte_mbuf **pkt,
+	uint16_t nb_pkts,  uint64_t flags)
+{
+	int i;
+
+	for (i = 0; i < nb_pkts; ++i, ++txdp, ++pkt)
+		vtx1(txdp, *pkt, flags);
+}
+
+uint16_t
+avf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+			 uint16_t nb_pkts)
+{
+	struct avf_tx_queue *txq = (struct avf_tx_queue *)tx_queue;
+	volatile struct avf_tx_desc *txdp;
+	struct avf_tx_entry *txep;
+	uint16_t n, nb_commit, tx_id;
+	uint64_t flags = AVF_TX_DESC_CMD_EOP | 0x04;  /* bit 2 must be set */
+	uint64_t rs = AVF_TX_DESC_CMD_RS | flags;
+	int i;
+
+	/* crossing the rs_thresh boundary is not allowed */
+	nb_pkts = RTE_MIN(nb_pkts, txq->rs_thresh);
+
+	if (txq->nb_free < txq->free_thresh)
+		avf_tx_free_bufs(txq);
+
+	nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+	if (unlikely(nb_pkts == 0))
+		return 0;
+	nb_commit = nb_pkts;
+
+	tx_id = txq->tx_tail;
+	txdp = &txq->tx_ring[tx_id];
+	txep = &txq->sw_ring[tx_id];
+
+	txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+
+	n = (uint16_t)(txq->nb_tx_desc - tx_id);
+	if (nb_commit >= n) {
+		tx_backlog_entry(txep, tx_pkts, n);
+
+		for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
+			vtx1(txdp, *tx_pkts, flags);
+
+		vtx1(txdp, *tx_pkts++, rs);
+
+		nb_commit = (uint16_t)(nb_commit - n);
+
+		tx_id = 0;
+		txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+
+		/* avoid reaching the end of the ring */
+		txdp = &txq->tx_ring[tx_id];
+		txep = &txq->sw_ring[tx_id];
+	}
+
+	tx_backlog_entry(txep, tx_pkts, nb_commit);
+
+	avf_vtx(txdp, tx_pkts, nb_commit, flags);
+
+	tx_id = (uint16_t)(tx_id + nb_commit);
+	if (tx_id > txq->next_rs) {
+		txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
+			rte_cpu_to_le_64(((uint64_t)AVF_TX_DESC_CMD_RS) <<
+					 AVF_TXD_QW1_CMD_SHIFT);
+		txq->next_rs =
+			(uint16_t)(txq->next_rs + txq->rs_thresh);
+	}
+
+	txq->tx_tail = tx_id;
+
+	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_pkts=%u",
+		   txq->port_id, txq->queue_id, tx_id, nb_pkts);
+
+	AVF_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
+
+	return nb_pkts;
+}
+
+void __attribute__((cold))
+avf_rx_queue_release_mbufs_sse(struct avf_rx_queue *rxq)
+{
+	_avf_rx_queue_release_mbufs_vec(rxq);
+}
+
+static void __attribute__((cold))
+avf_tx_queue_release_mbufs_sse(struct avf_tx_queue *txq)
+{
+	_avf_tx_queue_release_mbufs_vec(txq);
+}
+
+static const struct avf_rxq_ops sse_vec_rxq_ops = {
+	.release_mbufs = avf_rx_queue_release_mbufs_sse,
+};
+
+static const struct avf_txq_ops sse_vec_txq_ops = {
+	.release_mbufs = avf_tx_queue_release_mbufs_sse,
+};
+
+int __attribute__((cold))
+avf_txq_vec_setup(struct avf_tx_queue *txq)
+{
+	txq->ops = &sse_vec_txq_ops;
+	return 0;
+}
+
+int __attribute__((cold))
+avf_rxq_vec_setup(struct avf_rx_queue *rxq)
+{
+	rxq->ops = &sse_vec_rxq_ops;
+	return avf_rxq_vec_setup_default(rxq);
+}
-- 
1.9.3
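
For context, the burst-size notices above describe constraints that an application sees through the normal rte_eth_rx_burst() path once the vector callback has been selected. A minimal polling sketch, assuming a port already started with this Rx path and assuming AVF_VPMD_RX_MAX_BURST is 32, might look like the following; APP_BURST and poll_rx() are illustrative names, not part of the patch:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Keep the application burst within the assumed AVF_VPMD_RX_MAX_BURST (32),
 * and never below AVF_VPMD_DESCS_PER_LOOP, so the vector path can fill it.
 */
#define APP_BURST 32

static void
poll_rx(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[APP_BURST];
	uint16_t nb, i;

	nb = rte_eth_rx_burst(port_id, queue_id, pkts, APP_BURST);
	for (i = 0; i < nb; i++)
		rte_pktmbuf_free(pkts[i]); /* a real application would process the mbuf */
}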

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v7 13/14] net/avf: enable bulk allocate Rx func
  2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Wenzhuo Lu
                             ` (11 preceding siblings ...)
  2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 12/14] net/avf: enable sse vector Rx Tx func Wenzhuo Lu
@ 2018-01-10 13:02           ` Wenzhuo Lu
  2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 14/14] net/avf: enable Rx interrupt support Wenzhuo Lu
  2018-01-10 19:14           ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Ferruh Yigit
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10 13:02 UTC (permalink / raw)
  To: dev; +Cc: Wenzhuo Lu

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/avf/avf.h        |   1 +
 drivers/net/avf/avf_ethdev.c |   1 +
 drivers/net/avf/avf_rxtx.c   | 300 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/avf/avf_rxtx.h   |   6 +
 4 files changed, 308 insertions(+)

diff --git a/drivers/net/avf/avf.h b/drivers/net/avf/avf.h
index b79bc5a..ea0f7d8 100644
--- a/drivers/net/avf/avf.h
+++ b/drivers/net/avf/avf.h
@@ -120,6 +120,7 @@ struct avf_adapter {
 	struct rte_eth_dev *eth_dev;
 	struct avf_info vf;
 
+	bool rx_bulk_alloc_allowed;
 	/* For vector PMD */
 	bool rx_vec_allowed;
 	bool tx_vec_allowed;
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index 127fdb5..d9f7cea 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -121,6 +121,7 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_info *vf =  AVF_DEV_PRIVATE_TO_VF(ad);
 	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
 
+	ad->rx_bulk_alloc_allowed = true;
 #ifdef RTE_LIBRTE_AVF_INC_VECTOR
 	/* Initialize to TRUE. If any of Rx queues doesn't meet the
 	 * vector Rx/Tx preconditions, it will be reset.
diff --git a/drivers/net/avf/avf_rxtx.c b/drivers/net/avf/avf_rxtx.c
index b542532..e0c4583 100644
--- a/drivers/net/avf/avf_rxtx.c
+++ b/drivers/net/avf/avf_rxtx.c
@@ -120,6 +120,27 @@
 }
 #endif
 
+static inline bool
+check_rx_bulk_allow(struct avf_rx_queue *rxq)
+{
+	int ret = TRUE;
+
+	if (!(rxq->rx_free_thresh >= AVF_RX_MAX_BURST)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "AVF_RX_MAX_BURST=%d",
+			     rxq->rx_free_thresh, AVF_RX_MAX_BURST);
+		ret = FALSE;
+	} else if (rxq->nb_rx_desc % rxq->rx_free_thresh != 0) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->nb_rx_desc=%d, "
+			     "rxq->rx_free_thresh=%d",
+			     rxq->nb_rx_desc, rxq->rx_free_thresh);
+		ret = FALSE;
+	}
+	return ret;
+}
+
 static inline void
 reset_rx_queue(struct avf_rx_queue *rxq)
 {
@@ -138,6 +159,11 @@
 	for (i = 0; i < AVF_RX_MAX_BURST; i++)
 		rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
 
+	/* for rx bulk */
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
 	rxq->rx_tail = 0;
 	rxq->nb_rx_hold = 0;
 	rxq->pkt_first_seg = NULL;
@@ -233,6 +259,17 @@
 			rxq->sw_ring[i] = NULL;
 		}
 	}
+
+	/* for rx bulk */
+	if (rxq->rx_nb_avail == 0)
+		return;
+	for (i = 0; i < rxq->rx_nb_avail; i++) {
+		struct rte_mbuf *mbuf;
+
+		mbuf = rxq->rx_stage[rxq->rx_next_avail + i];
+		rte_pktmbuf_free_seg(mbuf);
+	}
+	rxq->rx_nb_avail = 0;
 }
 
 static inline void
@@ -363,6 +400,19 @@
 	rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id);
 	rxq->ops = &def_rxq_ops;
 
+	if (check_rx_bulk_allow(rxq) == TRUE) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "satisfied. Rx Burst Bulk Alloc function will be "
+			     "used on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+	} else {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+			     "not satisfied, Scattered Rx is requested "
+			     "on port=%d, queue=%d.",
+			     rxq->port_id, rxq->queue_id);
+		ad->rx_bulk_alloc_allowed = false;
+	}
+
 #ifdef RTE_LIBRTE_AVF_INC_VECTOR
 	if (check_rx_vec_allow(rxq) == FALSE)
 		ad->rx_vec_allowed = false;
@@ -1036,6 +1086,252 @@
 	return nb_rx;
 }
 
+#define AVF_LOOK_AHEAD 8
+static inline int
+avf_rx_scan_hw_ring(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxep;
+	struct rte_mbuf *mb;
+	uint16_t pkt_len;
+	uint64_t qword1;
+	uint32_t rx_status;
+	int32_t s[AVF_LOOK_AHEAD], nb_dd;
+	int32_t i, j, nb_rx = 0;
+	uint64_t pkt_flags;
+	static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = {
+		/* [0] reserved */
+		[1] = RTE_PTYPE_L2_ETHER,
+		/* [2] - [21] reserved */
+		[22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_FRAG,
+		[23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_NONFRAG,
+		[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_UDP,
+		/* [25] reserved */
+		[26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_TCP,
+		[27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_SCTP,
+		[28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+			RTE_PTYPE_L4_ICMP,
+		/* All others reserved */
+	};
+
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	rxep = &rxq->sw_ring[rxq->rx_tail];
+
+	qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+	rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+		    AVF_RXD_QW1_STATUS_SHIFT;
+
+	/* Make sure there is at least 1 packet to receive */
+	if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT)))
+		return 0;
+
+	/* Scan LOOK_AHEAD descriptors at a time to determine which
+	 * descriptors reference packets that are ready to be received.
+	 */
+	for (i = 0; i < AVF_RX_MAX_BURST; i += AVF_LOOK_AHEAD,
+	     rxdp += AVF_LOOK_AHEAD, rxep += AVF_LOOK_AHEAD) {
+		/* Read desc statuses backwards to avoid race condition */
+		for (j = AVF_LOOK_AHEAD - 1; j >= 0; j--) {
+			qword1 = rte_le_to_cpu_64(
+				rxdp[j].wb.qword1.status_error_len);
+			s[j] = (qword1 & AVF_RXD_QW1_STATUS_MASK) >>
+			       AVF_RXD_QW1_STATUS_SHIFT;
+		}
+
+		rte_smp_rmb();
+
+		/* Compute how many status bits were set */
+		for (j = 0, nb_dd = 0; j < AVF_LOOK_AHEAD; j++)
+			nb_dd += s[j] & (1 << AVF_RX_DESC_STATUS_DD_SHIFT);
+
+		nb_rx += nb_dd;
+
+		/* Translate descriptor info to mbuf parameters */
+		for (j = 0; j < nb_dd; j++) {
+			AVF_DUMP_RX_DESC(rxq, &rxdp[j],
+					 rxq->rx_tail + i * AVF_LOOK_AHEAD + j);
+
+			mb = rxep[j];
+			qword1 = rte_le_to_cpu_64
+					(rxdp[j].wb.qword1.status_error_len);
+			pkt_len = ((qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >>
+				  AVF_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
+			mb->data_len = pkt_len;
+			mb->pkt_len = pkt_len;
+			mb->ol_flags = 0;
+			avf_rxd_to_vlan_tci(mb, &rxdp[j]);
+			pkt_flags = avf_rxd_to_pkt_flags(qword1);
+			mb->packet_type =
+				ptype_tbl[(uint8_t)((qword1 &
+				AVF_RXD_QW1_PTYPE_MASK) >>
+				AVF_RXD_QW1_PTYPE_SHIFT)];
+
+			if (pkt_flags & PKT_RX_RSS_HASH)
+				mb->hash.rss = rte_le_to_cpu_32(
+					rxdp[j].wb.qword0.hi_dword.rss);
+
+			mb->ol_flags |= pkt_flags;
+		}
+
+		for (j = 0; j < AVF_LOOK_AHEAD; j++)
+			rxq->rx_stage[i + j] = rxep[j];
+
+		if (nb_dd != AVF_LOOK_AHEAD)
+			break;
+	}
+
+	/* Clear software ring entries */
+	for (i = 0; i < nb_rx; i++)
+		rxq->sw_ring[rxq->rx_tail + i] = NULL;
+
+	return nb_rx;
+}
+
+static inline uint16_t
+avf_rx_fill_from_stage(struct avf_rx_queue *rxq,
+		       struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts)
+{
+	uint16_t i;
+	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
+
+	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
+
+	for (i = 0; i < nb_pkts; i++)
+		rx_pkts[i] = stage[i];
+
+	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
+	rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+
+	return nb_pkts;
+}
+
+static inline int
+avf_rx_alloc_bufs(struct avf_rx_queue *rxq)
+{
+	volatile union avf_rx_desc *rxdp;
+	struct rte_mbuf **rxep;
+	struct rte_mbuf *mb;
+	uint16_t alloc_idx, i;
+	uint64_t dma_addr;
+	int diag;
+
+	/* Allocate buffers in bulk */
+	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
+				(rxq->rx_free_thresh - 1));
+	rxep = &rxq->sw_ring[alloc_idx];
+	diag = rte_mempool_get_bulk(rxq->mp, (void *)rxep,
+				    rxq->rx_free_thresh);
+	if (unlikely(diag != 0)) {
+		PMD_RX_LOG(ERR, "Failed to get mbufs in bulk");
+		return -ENOMEM;
+	}
+
+	rxdp = &rxq->rx_ring[alloc_idx];
+	for (i = 0; i < rxq->rx_free_thresh; i++) {
+		if (likely(i < (rxq->rx_free_thresh - 1)))
+			/* Prefetch next mbuf */
+			rte_prefetch0(rxep[i + 1]);
+
+		mb = rxep[i];
+		rte_mbuf_refcnt_set(mb, 1);
+		mb->next = NULL;
+		mb->data_off = RTE_PKTMBUF_HEADROOM;
+		mb->nb_segs = 1;
+		mb->port = rxq->port_id;
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
+		rxdp[i].read.hdr_addr = 0;
+		rxdp[i].read.pkt_addr = dma_addr;
+	}
+
+	/* Update rx tail register */
+	rte_wmb();
+	AVF_PCI_REG_WRITE_RELAXED(rxq->qrx_tail, rxq->rx_free_trigger);
+
+	rxq->rx_free_trigger =
+		(uint16_t)(rxq->rx_free_trigger + rxq->rx_free_thresh);
+	if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
+		rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	return 0;
+}
+
+static inline uint16_t
+rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct avf_rx_queue *rxq = (struct avf_rx_queue *)rx_queue;
+	struct rte_eth_dev *dev;
+	uint16_t nb_rx = 0;
+
+	if (!nb_pkts)
+		return 0;
+
+	if (rxq->rx_nb_avail)
+		return avf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	nb_rx = (uint16_t)avf_rx_scan_hw_ring(rxq);
+	rxq->rx_next_avail = 0;
+	rxq->rx_nb_avail = nb_rx;
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+
+	if (rxq->rx_tail > rxq->rx_free_trigger) {
+		if (avf_rx_alloc_bufs(rxq) != 0) {
+			uint16_t i, j;
+
+			/* TODO: count rx_mbuf_alloc_failed here */
+
+			rxq->rx_nb_avail = 0;
+			rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
+			for (i = 0, j = rxq->rx_tail; i < nb_rx; i++, j++)
+				rxq->sw_ring[j] = rxq->rx_stage[i];
+
+			return 0;
+		}
+	}
+
+	if (rxq->rx_tail >= rxq->nb_rx_desc)
+		rxq->rx_tail = 0;
+
+	PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u, nb_rx=%u",
+		   rxq->port_id, rxq->queue_id,
+		   rxq->rx_tail, nb_rx);
+
+	if (rxq->rx_nb_avail)
+		return avf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	return 0;
+}
+
+static uint16_t
+avf_recv_pkts_bulk_alloc(void *rx_queue,
+			 struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	uint16_t nb_rx = 0, n, count;
+
+	if (unlikely(nb_pkts == 0))
+		return 0;
+
+	if (likely(nb_pkts <= AVF_RX_MAX_BURST))
+		return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		n = RTE_MIN(nb_pkts, AVF_RX_MAX_BURST);
+		count = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+		nb_rx = (uint16_t)(nb_rx + count);
+		nb_pkts = (uint16_t)(nb_pkts - count);
+		if (count < n)
+			break;
+	}
+
+	return nb_rx;
+}
+
 static inline int
 avf_xmit_cleanup(struct avf_tx_queue *txq)
 {
@@ -1467,6 +1763,10 @@
 		PMD_DRV_LOG(DEBUG, "Using a Scattered Rx callback (port=%d).",
 			    dev->data->port_id);
 		dev->rx_pkt_burst = avf_recv_scattered_pkts;
+	} else if (adapter->rx_bulk_alloc_allowed) {
+		PMD_DRV_LOG(DEBUG, "Using bulk Rx callback (port=%d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = avf_recv_pkts_bulk_alloc;
 	} else {
 		PMD_DRV_LOG(DEBUG, "Using Basic Rx callback (port=%d).",
 			    dev->data->port_id);
diff --git a/drivers/net/avf/avf_rxtx.h b/drivers/net/avf/avf_rxtx.h
index 82fd801..d1701cd 100644
--- a/drivers/net/avf/avf_rxtx.h
+++ b/drivers/net/avf/avf_rxtx.h
@@ -83,6 +83,12 @@ struct avf_rx_queue {
 	uint16_t rxrearm_start;    /* the idx we start the re-arming from */
 	uint64_t mbuf_initializer; /* value to init mbufs */
 
+	/* for rx bulk */
+	uint16_t rx_nb_avail;      /* number of staged packets ready */
+	uint16_t rx_next_avail;    /* index of next staged packets */
+	uint16_t rx_free_trigger;  /* triggers rx buffer allocation */
+	struct rte_mbuf *rx_stage[AVF_RX_MAX_BURST * 2]; /* store mbuf */
+
 	uint16_t port_id;        /* device port ID */
 	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
 	uint16_t queue_id;      /* Rx queue index */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* [dpdk-dev] [PATCH v7 14/14] net/avf: enable Rx interrupt support
  2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Wenzhuo Lu
                             ` (12 preceding siblings ...)
  2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 13/14] net/avf: enable bulk allocate Rx func Wenzhuo Lu
@ 2018-01-10 13:02           ` Wenzhuo Lu
  2018-01-10 19:14           ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Ferruh Yigit
  14 siblings, 0 replies; 151+ messages in thread
From: Wenzhuo Lu @ 2018-01-10 13:02 UTC (permalink / raw)
  To: dev; +Cc: Jingjing Wu

From: Jingjing Wu <jingjing.wu@intel.com>

Update the doc for the AVF features as well.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/features/avf.ini       |   1 +
 doc/guides/nics/features/avf_vec.ini   |   1 +
 doc/guides/nics/intel_vf.rst           |  20 +++-
 doc/guides/rel_notes/release_18_02.rst |  16 +++
 drivers/net/avf/avf_ethdev.c           | 204 +++++++++++++++++++++++++++------
 5 files changed, 204 insertions(+), 38 deletions(-)

diff --git a/doc/guides/nics/features/avf.ini b/doc/guides/nics/features/avf.ini
index da4d81b..ccb9edd 100644
--- a/doc/guides/nics/features/avf.ini
+++ b/doc/guides/nics/features/avf.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
diff --git a/doc/guides/nics/features/avf_vec.ini b/doc/guides/nics/features/avf_vec.ini
index 45dd5e5..8924994 100644
--- a/doc/guides/nics/features/avf_vec.ini
+++ b/doc/guides/nics/features/avf_vec.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Rx interrupt         = Y
 Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 1e83bf6..66f90b1 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -28,8 +28,8 @@
     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-I40E/IXGBE/IGB Virtual Function Driver
-======================================
+Intel Virtual Function Driver
+=============================
 
 Supported Intel® Ethernet Controllers (see the *DPDK Release Notes* for details)
 support the following modes of operation in a virtualized environment:
@@ -93,6 +93,22 @@ and the Physical Function operates on the global resources on behalf of the Virt
 For this out-of-band communication, an SR-IOV enabled NIC provides a memory buffer for each Virtual Function,
 which is called a "Mailbox".
 
+Intel® Ethernet Adaptive Virtual Function
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Adaptive Virtual Function (AVF) is an SR-IOV Virtual Function that exposes the same device id (8086:1889) on different Intel Ethernet Controllers.
+The AVF driver is a VF driver that supports all future Intel devices without requiring a VM update. Since it is an adaptive VF driver,
+every new drop of the VF driver can turn on more and more advanced features in the VM, provided the underlying HW device supports those
+advanced features, in a device-agnostic way and without ever compromising the base functionality. AVF provides a generic hardware interface, and
+the interface between an AVF driver and a compliant PF driver is specified.
+
+Intel products starting from the Ethernet Controller 710 Series support the Adaptive Virtual Function.
+
+Virtual Functions are created in the usual way, and the resources assigned to a VF depend on the NIC infrastructure.
+
+For more detail on SR-IOV, please refer to the following documents:
+
+*   `Intel® AVF HAS <https://www.intel.com/content/dam/www/public/us/en/documents/product-specifications/ethernet-adaptive-virtual-function-hardware-spec.pdf>`_
+
 The PCIE host-interface of Intel Ethernet Switch FM10000 Series VF infrastructure
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/doc/guides/rel_notes/release_18_02.rst b/doc/guides/rel_notes/release_18_02.rst
index 24b67bb..0672b0e 100644
--- a/doc/guides/rel_notes/release_18_02.rst
+++ b/doc/guides/rel_notes/release_18_02.rst
@@ -41,6 +41,22 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+   * **Add AVF (Adaptive Virtual Function) net PMD.**
+
+     A new net PMD has been added, which supports Intel® Ethernet Adaptive
+     Virtual Function (AVF) with features list below:
+
+     * Basic Rx/Tx burst
+     * SSE vectorized Rx/Tx burst
+     * Promiscuous mode
+     * MAC/VLAN offload
+     * Checksum offload
+     * TSO offload
+     * Jumbo frame and MTU setting
+     * RSS configuration
+     * Statistics
+     * Rx/Tx descriptor status
+     * Link status update/event
 
 API Changes
 -----------
diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
index d9f7cea..13f6329 100644
--- a/drivers/net/avf/avf_ethdev.c
+++ b/drivers/net/avf/avf_ethdev.c
@@ -67,9 +67,14 @@ static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 static int avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 					 struct ether_addr *mac_addr);
+static int avf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev,
+					uint16_t queue_id);
+static int avf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev,
+					 uint16_t queue_id);
 
 int avf_logtype_init;
 int avf_logtype_driver;
+
 static const struct rte_pci_id pci_id_avf_map[] = {
 	{ RTE_PCI_DEVICE(AVF_INTEL_VENDOR_ID, AVF_DEV_ID_ADAPTIVE_VF) },
 	{ .vendor_id = 0, /* sentinel */ },
@@ -111,6 +116,8 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	.rx_descriptor_status       = avf_dev_rx_desc_status,
 	.tx_descriptor_status       = avf_dev_tx_desc_status,
 	.mtu_set                    = avf_dev_mtu_set,
+	.rx_queue_intr_enable       = avf_dev_rx_queue_intr_enable,
+	.rx_queue_intr_disable      = avf_dev_rx_queue_intr_disable,
 };
 
 static int
@@ -275,6 +282,99 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static int avf_config_rx_queues_irqs(struct rte_eth_dev *dev,
+				     struct rte_intr_handle *intr_handle)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	uint16_t interval, i;
+	int vec;
+
+	if (dev->data->dev_conf.intr_conf.rxq != 0) {
+		if (rte_intr_efd_enable(intr_handle, dev->data->nb_rx_queues))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+			rte_zmalloc("intr_vec",
+				    dev->data->nb_rx_queues * sizeof(int), 0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
+				    dev->data->nb_rx_queues);
+			return -1;
+		}
+	}
+
+	if (!dev->data->dev_conf.intr_conf.rxq) {
+		/* Rx interrupt disabled, Map interrupt only for writeback */
+		vf->nb_msix = 1;
+		if (vf->vf_res->vf_cap_flags &
+		    VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+			/* If WB_ON_ITR is supported, enable it */
+			vf->msix_base = AVF_RX_VEC_START;
+			AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
+				      AVFINT_DYN_CTLN1_ITR_INDX_MASK |
+				      AVFINT_DYN_CTLN1_WB_ON_ITR_MASK);
+		} else {
+			/* If no WB_ON_ITR offload flags, need to set
+			 * interrupt for descriptor write back.
+			 */
+			vf->msix_base = AVF_MISC_VEC_ID;
+
+			/* set ITR to max */
+			interval = avf_calc_itr_interval(
+					AVF_QUEUE_ITR_INTERVAL_MAX);
+			AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+				      AVFINT_DYN_CTL01_INTENA_MASK |
+				      (AVF_ITR_INDEX_DEFAULT <<
+				       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+				      (interval <<
+				       AVFINT_DYN_CTL01_INTERVAL_SHIFT));
+		}
+		AVF_WRITE_FLUSH(hw);
+		/* map all queues to the same interrupt */
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			vf->rxq_map[0] |= 1 << i;
+	} else {
+		if (!rte_intr_allow_others(intr_handle)) {
+			vf->nb_msix = 1;
+			vf->msix_base = AVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				vf->rxq_map[0] |= 1 << i;
+				intr_handle->intr_vec[i] = AVF_MISC_VEC_ID;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "vector 0 is mapped to all Rx queues");
+		} else {
+			/* If Rx interrupt is required, and we can use
+			 * multiple interrupts, then the vectors start from 1
+			 */
+			vf->nb_msix = RTE_MIN(vf->vf_res->max_vectors,
+					      intr_handle->nb_efd);
+			vf->msix_base = AVF_RX_VEC_START;
+			vec = AVF_RX_VEC_START;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				vf->rxq_map[vec] |= 1 << i;
+				intr_handle->intr_vec[i] = vec++;
+				if (vec >= vf->nb_msix)
+					vec = AVF_RX_VEC_START;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "%u vectors are mapped to %u Rx queues",
+				    vf->nb_msix, dev->data->nb_rx_queues);
+		}
+	}
+
+	if (avf_config_irq_map(adapter)) {
+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+		return -1;
+	}
+	return 0;
+}
+
 static int
 avf_start_queues(struct rte_eth_dev *dev)
 {
@@ -314,8 +414,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = dev->intr_handle;
-	uint16_t interval;
-	int i;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -325,8 +423,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
 				      dev->data->nb_tx_queues);
 
-	/* TODO: Rx interrupt */
-
 	if (avf_init_queues(dev) != 0) {
 		PMD_DRV_LOG(ERR, "failed to do Queue init");
 		return -1;
@@ -344,36 +440,15 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 		goto err_queue;
 	}
 
-	/* Map interrupt for writeback */
-	vf->nb_msix = 1;
-	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
-		/* If WB_ON_ITR supports, enable it */
-		vf->msix_base = AVF_RX_VEC_START;
-		AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1),
-			      AVFINT_DYN_CTLN1_ITR_INDX_MASK |
-			      AVFINT_DYN_CTLN1_WB_ON_ITR_MASK);
-	} else {
-		/* If no WB_ON_ITR offload flags, need to set interrupt for
-		 * descriptor write back.
-		 */
-		vf->msix_base = AVF_MISC_VEC_ID;
-
-		/* set ITR to max */
-		interval = avf_calc_itr_interval(AVF_QUEUE_ITR_INTERVAL_MAX);
-		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
-			      AVFINT_DYN_CTL01_INTENA_MASK |
-			      (AVF_ITR_INDEX_DEFAULT <<
-			       AVFINT_DYN_CTL01_ITR_INDX_SHIFT) |
-			      (interval << AVFINT_DYN_CTL01_INTERVAL_SHIFT));
-	}
-	AVF_WRITE_FLUSH(hw);
-	/* map all queues to the same interrupt */
-	for (i = 0; i < dev->data->nb_rx_queues; i++)
-		vf->rxq_map[0] |= 1 << i;
-	if (avf_config_irq_map(adapter)) {
-		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+	if (avf_config_rx_queues_irqs(dev, intr_handle) != 0) {
+		PMD_DRV_LOG(ERR, "configure irq failed");
 		goto err_queue;
 	}
+	/* Re-enable the interrupt, because the efd assignment may have changed */
+	if (dev->data->dev_conf.intr_conf.rxq != 0) {
+		rte_intr_disable(intr_handle);
+		rte_intr_enable(intr_handle);
+	}
 
 	/* Set all mac addrs */
 	avf_add_del_all_mac_addr(adapter, TRUE);
@@ -383,7 +458,6 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 		goto err_mac;
 	}
 
-	/* TODO: enable interrupt for RX interrupt */
 	return 0;
 
 err_mac:
@@ -399,6 +473,8 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	struct avf_adapter *adapter =
 		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
 	int ret, i;
 
 	PMD_INIT_FUNC_TRACE();
@@ -408,9 +484,13 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 
 	avf_stop_queues(dev);
 
-	/*TODO: Disable the interrupt for Rx*/
-
-	/* TODO: Rx interrupt vector mapping free */
+	/* Disable the interrupt for Rx */
+	rte_intr_efd_disable(intr_handle);
+	/* Rx interrupt vector mapping free */
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
 
 	/* remove all mac addrs */
 	avf_add_del_all_mac_addr(adapter, FALSE);
@@ -913,6 +993,58 @@ static void avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 }
 
 static int
+avf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	uint16_t msix_intr;
+
+	msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+	if (msix_intr == AVF_MISC_VEC_ID) {
+		PMD_DRV_LOG(INFO, "MISC is also enabled for control");
+		AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
+			      AVFINT_DYN_CTL01_INTENA_MASK |
+			      AVFINT_DYN_CTL01_ITR_INDX_MASK);
+	} else {
+		AVF_WRITE_REG(hw,
+			      AVFINT_DYN_CTLN1(msix_intr - AVF_RX_VEC_START),
+			      AVFINT_DYN_CTLN1_INTENA_MASK |
+			      AVFINT_DYN_CTLN1_ITR_INDX_MASK);
+	}
+
+	AVF_WRITE_FLUSH(hw);
+
+	rte_intr_enable(&pci_dev->intr_handle);
+
+	return 0;
+}
+
+static int
+avf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct avf_adapter *adapter =
+		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t msix_intr;
+
+	msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+	if (msix_intr == AVF_MISC_VEC_ID) {
+		PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it");
+		return -EIO;
+	}
+
+	AVF_WRITE_REG(hw,
+		      AVFINT_DYN_CTLN1(msix_intr - AVF_RX_VEC_START),
+		      0);
+
+	AVF_WRITE_FLUSH(hw);
+	return 0;
+}
+
+static int
 avf_check_vf_reset_done(struct avf_hw *hw)
 {
 	int i, reset;
-- 
1.9.3

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v6 02/14] net/avf: initialization of avf PMD
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 02/14] net/avf: initialization of " Wenzhuo Lu
@ 2018-01-10 17:15           ` Stephen Hemminger
  2018-01-11  2:07             ` Lu, Wenzhuo
  2018-01-10 17:17           ` Stephen Hemminger
  1 sibling, 1 reply; 151+ messages in thread
From: Stephen Hemminger @ 2018-01-10 17:15 UTC (permalink / raw)
  To: Wenzhuo Lu; +Cc: dev, Jingjing Wu

On Wed, 10 Jan 2018 14:15:49 +0800
Wenzhuo Lu <wenzhuo.lu@intel.com> wrote:

> +
> +#define AVF_MAX_NUM_QUEUES       16
> +/* Vlan table size */
> +#define AVF_VLAN_TB_SIZE               (4096 / (CHAR_BIT * sizeof(uint32_t)))

You could use ETHER_MAX_VLAN_ID (which is 4095).
Also it is most efficient if bit tables use unsigned long to access.
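
As an illustration of that suggestion, a VLAN membership table sized from the maximum VLAN ID and accessed through unsigned long words might look like the sketch below; the VLAN_TB_* and vlan_tb_* names are hypothetical and not taken from the patch:

#include <limits.h>
#include <stdint.h>

#define VLAN_ID_MAX        4095	/* matches the value of ETHER_MAX_VLAN_ID */
#define BITS_PER_LONG      (CHAR_BIT * sizeof(unsigned long))
#define VLAN_TB_SIZE       ((VLAN_ID_MAX + BITS_PER_LONG) / BITS_PER_LONG)

/* Bit table held in unsigned long words for efficient access;
 * in a driver it would typically live in the per-VF structure.
 */
static unsigned long vlan_tb[VLAN_TB_SIZE];

static inline void
vlan_tb_set(unsigned long *tb, uint16_t vlan_id)
{
	tb[vlan_id / BITS_PER_LONG] |= 1UL << (vlan_id % BITS_PER_LONG);
}

static inline int
vlan_tb_test(const unsigned long *tb, uint16_t vlan_id)
{
	return !!(tb[vlan_id / BITS_PER_LONG] & (1UL << (vlan_id % BITS_PER_LONG)));
}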

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v6 02/14] net/avf: initialization of avf PMD
  2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 02/14] net/avf: initialization of " Wenzhuo Lu
  2018-01-10 17:15           ` Stephen Hemminger
@ 2018-01-10 17:17           ` Stephen Hemminger
  2018-01-11  4:52             ` Lu, Wenzhuo
  1 sibling, 1 reply; 151+ messages in thread
From: Stephen Hemminger @ 2018-01-10 17:17 UTC (permalink / raw)
  To: Wenzhuo Lu; +Cc: dev, Jingjing Wu

On Wed, 10 Jan 2018 14:15:49 +0800
Wenzhuo Lu <wenzhuo.lu@intel.com> wrote:

> +/* spinlock func for base code */
> +void
> +avf_init_spinlock_d(struct avf_spinlock *sp)
> +{
> +	rte_spinlock_init(&sp->spinlock);
> +}
> +
> +void
> +avf_acquire_spinlock_d(struct avf_spinlock *sp)
> +{
> +	rte_spinlock_lock(&sp->spinlock);
> +}
> +
> +void
> +avf_release_spinlock_d(struct avf_spinlock *sp)
> +{
> +	rte_spinlock_unlock(&sp->spinlock);
> +}
> +
> +void
> +avf_destroy_spinlock_d(__rte_unused struct avf_spinlock *sp)
> +{
> +}

You might want to inline these (in a header file) if in critical path.
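
A minimal sketch of that suggestion, assuming struct avf_spinlock and the wrappers move into a shared header (the exact header and naming are an assumption for illustration):

#include <rte_common.h>
#include <rte_spinlock.h>

struct avf_spinlock {
	rte_spinlock_t spinlock;
};

static inline void
avf_init_spinlock_d(struct avf_spinlock *sp)
{
	rte_spinlock_init(&sp->spinlock);
}

static inline void
avf_acquire_spinlock_d(struct avf_spinlock *sp)
{
	rte_spinlock_lock(&sp->spinlock);
}

static inline void
avf_release_spinlock_d(struct avf_spinlock *sp)
{
	rte_spinlock_unlock(&sp->spinlock);
}

static inline void
avf_destroy_spinlock_d(__rte_unused struct avf_spinlock *sp)
{
	/* Nothing to release for an rte_spinlock_t. */
}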

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD
  2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Wenzhuo Lu
                             ` (13 preceding siblings ...)
  2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 14/14] net/avf: enable Rx interrupt support Wenzhuo Lu
@ 2018-01-10 19:14           ` Ferruh Yigit
  14 siblings, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2018-01-10 19:14 UTC (permalink / raw)
  To: Wenzhuo Lu, dev

On 1/10/2018 1:01 PM, Wenzhuo Lu wrote:
> Adaptive Virtual Function (AVF) Driver is VF driver which supports for all future Intel devices without requiring a VM update.
> It promises the basic high speed connectivity. And since this happens to be an adaptive VF driver, every new drop of the VF driver would add more and more advanced features that can be turned on in the VM if the underlying HW device supports those advanced features. Most importantly in a device agnostic way without ever compromising on the base functionality. All the AVF's interface need to follow AVF spec, and AVF compliant interface is supported start from the Intel® Ethernet Controller 710 Series.
> 
> This patch set adds AVF PMD supporting.
>  - Device initialization
>  - Queue setup and Device start
>  - Basic Rx and Tx.
>  - MAC address offload feature
>  - Vlan offload feature
>  - RSS offload feature
>  - Vectored Rx and Tx func
>  - Bulk allocate Rx func
>  - Rx interrupt support
>  - Statistics query
> 
> v7:
>  - fix compile error on ARM machine.
> 
> v6:
>  - handle ICC compile warning on 32bit machine.
> 
> v5:
>  - some slight change for the comments.
>  - merge the doc update patch.
> 
> v4:
>  - update the base code to the newest.
> 
> v3:
>  - change the license announcement.
>  - update the related document.
>  - resolve the checkpatch error, warning and some check.
>  - handle the comments from the community.
> 
> v2:
>  - rebase to 17.11
>  - add vectored Rx and Tx func
>  - add bulk allocate Rx func
>  - add Rx interrupt support
>  - add statistics query
>  - fix coding style issue
>  - remove extra compile flags in Makefile
>  - add doc to list avf PMD features
>  - fix lut setting when rss is disabled
>  - fix log init missing
>  - remove rx_descriptor_done
> 
> Jingjing Wu (12):
>   net/avf/base: add base code for avf PMD
>   net/avf: initialization of avf PMD
>   net/avf: enable queue and device
>   net/avf: enable link status update
>   net/avf: support stats
>   net/avf: enable MAC VLAN and promisc ops
>   net/avf: enable ops for RSS setting
>   net/avf: enable ops for MTU setting
>   net/avf: enable ops to check queue info and status
>   net/i40e: support AVF basic interface
>   net/avf: enable sse vector Rx Tx func
>   net/avf: enable Rx interrupt support
> 
> Wenzhuo Lu (2):
>   net/avf: enable basic Rx Tx func
>   net/avf: enable bulk allocate Rx func

Series applied to dpdk-next-net/master, thanks.

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v6 02/14] net/avf: initialization of avf PMD
  2018-01-10 17:15           ` Stephen Hemminger
@ 2018-01-11  2:07             ` Lu, Wenzhuo
  2018-01-11  8:53               ` Ferruh Yigit
  0 siblings, 1 reply; 151+ messages in thread
From: Lu, Wenzhuo @ 2018-01-11  2:07 UTC (permalink / raw)
  To: Stephen Hemminger, Yigit, Ferruh; +Cc: dev, Wu, Jingjing

Hi Stephen,

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Thursday, January 11, 2018 1:15 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Cc: dev@dpdk.org; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v6 02/14] net/avf: initialization of avf PMD
> 
> On Wed, 10 Jan 2018 14:15:49 +0800
> Wenzhuo Lu <wenzhuo.lu@intel.com> wrote:
> 
> > +
> > +#define AVF_MAX_NUM_QUEUES       16
> > +/* Vlan table size */
> > +#define AVF_VLAN_TB_SIZE               (4096 / (CHAR_BIT * sizeof(uint32_t)))
> 
> You could use ETHER_MAX_VLAN_ID (which is 4095).
> Also it is most efficient if bit tables use unsigned long to access.
Thanks for the suggestion.
I found this macro is useless. I'd like to just remove it.

Hi Ferruh,
As this patch set has been accepted to next-net, I can send a fixes patch for this change. Is that OK? Would you help merge the fixes into the original patch? Thanks.

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v6 02/14] net/avf: initialization of avf PMD
  2018-01-10 17:17           ` Stephen Hemminger
@ 2018-01-11  4:52             ` Lu, Wenzhuo
  0 siblings, 0 replies; 151+ messages in thread
From: Lu, Wenzhuo @ 2018-01-11  4:52 UTC (permalink / raw)
  To: Stephen Hemminger, Yigit, Ferruh; +Cc: dev, Wu, Jingjing

Hi Stephen,


> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Thursday, January 11, 2018 1:18 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Cc: dev@dpdk.org; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v6 02/14] net/avf: initialization of avf PMD
> 
> On Wed, 10 Jan 2018 14:15:49 +0800
> Wenzhuo Lu <wenzhuo.lu@intel.com> wrote:
> 
> > +/* spinlock func for base code */
> > +void
> > +avf_init_spinlock_d(struct avf_spinlock *sp) {
> > +	rte_spinlock_init(&sp->spinlock);
> > +}
> > +
> > +void
> > +avf_acquire_spinlock_d(struct avf_spinlock *sp) {
> > +	rte_spinlock_lock(&sp->spinlock);
> > +}
> > +
> > +void
> > +avf_release_spinlock_d(struct avf_spinlock *sp) {
> > +	rte_spinlock_unlock(&sp->spinlock);
> > +}
> > +
> > +void
> > +avf_destroy_spinlock_d(__rte_unused struct avf_spinlock *sp) { }
> 
> You might want to inline these (in a header file) if in critical path.
Thanks for the comments. Will send a patch for it.

Hi Ferruh,
The same question. I'll send a patch based on next-net to change it. If it's not OK and I need to rework the whole patch set, please let me know. Thanks.

^ permalink raw reply	[flat|nested] 151+ messages in thread

* Re: [dpdk-dev] [PATCH v6 02/14] net/avf: initialization of avf PMD
  2018-01-11  2:07             ` Lu, Wenzhuo
@ 2018-01-11  8:53               ` Ferruh Yigit
  0 siblings, 0 replies; 151+ messages in thread
From: Ferruh Yigit @ 2018-01-11  8:53 UTC (permalink / raw)
  To: Lu, Wenzhuo, Stephen Hemminger; +Cc: dev, Wu, Jingjing

On 1/11/2018 2:07 AM, Lu, Wenzhuo wrote:
> Hi Stephen,
> 
>> -----Original Message-----
>> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
>> Sent: Thursday, January 11, 2018 1:15 AM
>> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>
>> Cc: dev@dpdk.org; Wu, Jingjing <jingjing.wu@intel.com>
>> Subject: Re: [dpdk-dev] [PATCH v6 02/14] net/avf: initialization of avf PMD
>>
>> On Wed, 10 Jan 2018 14:15:49 +0800
>> Wenzhuo Lu <wenzhuo.lu@intel.com> wrote:
>>
>>> +
>>> +#define AVF_MAX_NUM_QUEUES       16
>>> +/* Vlan table size */
>>> +#define AVF_VLAN_TB_SIZE               (4096 / (CHAR_BIT * sizeof(uint32_t)))
>>
>> You could use ETHER_MAX_VLAN_ID (which is 4095).
>> Also it is most efficient if bit tables use unsigned long to access.
> Thanks for the suggestion.
> I found this macro is useless. I'd like to just remove it.
> 
> Hi Ferruh,
> As this patch set has been accepted to next-net, I can send a fixes patch for this change. Is that OK? Would you help merge the fixes into the original patch? Thanks.

Hi Wenzhuo,

That is OK, I can squash the fixes on top of the original set in next-net.

^ permalink raw reply	[flat|nested] 151+ messages in thread

end of thread, other threads:[~2018-01-11  8:53 UTC | newest]

Thread overview: 151+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-10-20  8:26 [dpdk-dev] [RFC 0/9] add new avf PMD Jingjing Wu
2017-10-20  8:26 ` [dpdk-dev] [RFC 1/9] net/avf/base: add base code for " Jingjing Wu
2017-10-20  8:26 ` [dpdk-dev] [RFC 2/9] net/avf: initilization of " Jingjing Wu
2017-11-22  0:02   ` Ferruh Yigit
2017-10-20  8:26 ` [dpdk-dev] [RFC 3/9] net/avf: enable queue and device Jingjing Wu
2017-11-22  0:04   ` Ferruh Yigit
2017-10-20  8:26 ` [dpdk-dev] [RFC 4/9] net/avf: enable basic Rx Tx func Jingjing Wu
2017-11-22  0:06   ` Ferruh Yigit
2017-11-22  0:57     ` Stephen Hemminger
2017-11-22 23:15       ` Ferruh Yigit
2017-11-22  7:55     ` Wu, Jingjing
2017-11-22 22:38       ` Ferruh Yigit
2017-11-23  1:17         ` Wu, Jingjing
2017-10-20  8:26 ` [dpdk-dev] [RFC 5/9] net/avf: enable link status update Jingjing Wu
2017-10-20  8:26 ` [dpdk-dev] [RFC 6/9] net/avf: enable ops for MAC VLAN offload Jingjing Wu
2017-11-22  0:07   ` Ferruh Yigit
2017-10-20  8:26 ` [dpdk-dev] [RFC 7/9] net/avf: enable ops for rss setting Jingjing Wu
2017-11-22  0:07   ` Ferruh Yigit
2017-10-20  8:26 ` [dpdk-dev] [RFC 8/9] net/avf: enable ops to check queue info and status Jingjing Wu
2017-11-22  0:09   ` Ferruh Yigit
2017-11-22  8:23     ` Wu, Jingjing
2017-10-20  8:26 ` [dpdk-dev] [RFC 9/9] net/i40e: support AVF basic interface Jingjing Wu
2017-11-21 23:58 ` [dpdk-dev] [RFC 0/9] add new avf PMD Ferruh Yigit
2017-11-24  6:33 ` [dpdk-dev] [PATCH v2 00/14] " Jingjing Wu
2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 01/14] net/avf/base: add base code for " Jingjing Wu
2017-12-04 19:50     ` Ferruh Yigit
2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 02/14] net/avf: initilization of " Jingjing Wu
2017-12-04 19:52     ` Ferruh Yigit
2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 03/14] net/avf: enable queue and device Jingjing Wu
2017-12-04  8:45     ` Xing, Beilei
2017-12-04 19:56     ` Ferruh Yigit
2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 04/14] net/avf: enable basic Rx Tx func Jingjing Wu
2017-12-04 19:57     ` Ferruh Yigit
2017-12-27  3:07       ` Wu, Jingjing
2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 05/14] net/avf: enable link status update Jingjing Wu
2017-12-04 19:58     ` Ferruh Yigit
2017-12-27  3:07       ` Wu, Jingjing
2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 06/14] net/avf: enable ops to get stats Jingjing Wu
2017-12-04 19:58     ` Ferruh Yigit
2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 07/14] net/avf: enable ops for MAC VLAN offload Jingjing Wu
2017-12-04 19:59     ` Ferruh Yigit
2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 08/14] net/avf: enable ops for RSS setting Jingjing Wu
2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 09/14] net/avf: enable ops for MTU setting Jingjing Wu
2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 10/14] net/avf: enable ops to check queue info and status Jingjing Wu
2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 11/14] net/i40e: support AVF basic interface Jingjing Wu
2017-12-04 20:04     ` Ferruh Yigit
2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 12/14] net/avf: enable sse vector Rx Tx func Jingjing Wu
2017-12-04 20:01     ` Ferruh Yigit
2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 13/14] net/avf: enable bulk allocate Rx func Jingjing Wu
2017-11-24  6:33   ` [dpdk-dev] [PATCH v2 14/14] net/avf: enable Rx interrupt support Jingjing Wu
2017-12-04 20:02     ` Ferruh Yigit
2017-12-04 19:48   ` [dpdk-dev] [PATCH v2 00/14] add new avf PMD Ferruh Yigit
2018-01-04  5:27   ` [dpdk-dev] [PATCH v3 00/15] " Wenzhuo Lu
2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 01/15] net/avf/base: add base code for " Wenzhuo Lu
2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 02/15] net/avf: initialization of " Wenzhuo Lu
2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 03/15] net/avf: enable queue and device Wenzhuo Lu
2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 04/15] net/avf: enable basic Rx Tx func Wenzhuo Lu
2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 05/15] net/avf: enable link status update Wenzhuo Lu
2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 06/15] net/avf: support stats Wenzhuo Lu
2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 07/15] net/avf: enable ops for MAC VLAN offload Wenzhuo Lu
2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 08/15] net/avf: enable ops for RSS setting Wenzhuo Lu
2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 09/15] net/avf: enable ops for MTU setting Wenzhuo Lu
2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 10/15] net/avf: enable ops to check queue info and status Wenzhuo Lu
2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 11/15] net/i40e: support AVF basic interface Wenzhuo Lu
2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 12/15] net/avf: enable sse vector Rx Tx func Wenzhuo Lu
2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 13/15] net/avf: enable bulk allocate Rx func Wenzhuo Lu
2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 14/15] net/avf: enable Rx interrupt support Wenzhuo Lu
2018-01-04  5:27     ` [dpdk-dev] [PATCH v3 15/15] doc: update doc for avf driver Wenzhuo Lu
2018-01-05  8:21   ` [dpdk-dev] [PATCH v4 00/15] add new AVF PMD Wenzhuo Lu
2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 01/15] net/avf/base: add base code for avf PMD Wenzhuo Lu
2018-01-05 20:25       ` Stephen Hemminger
2018-01-08  1:06         ` Lu, Wenzhuo
2018-01-08 15:27           ` Stephen Hemminger
2018-01-09  1:35             ` Lu, Wenzhuo
2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 02/15] net/avf: initialization of " Wenzhuo Lu
2018-01-05 20:29       ` Stephen Hemminger
2018-01-08  1:56         ` Lu, Wenzhuo
2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 03/15] net/avf: enable queue and device Wenzhuo Lu
2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 04/15] net/avf: enable basic Rx Tx func Wenzhuo Lu
2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 05/15] net/avf: enable link status update Wenzhuo Lu
2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 06/15] net/avf: support stats Wenzhuo Lu
2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 07/15] net/avf: enable ops for MAC VLAN offload Wenzhuo Lu
2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 08/15] net/avf: enable ops for RSS setting Wenzhuo Lu
2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 09/15] net/avf: enable ops for MTU setting Wenzhuo Lu
2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 10/15] net/avf: enable ops to check queue info and status Wenzhuo Lu
2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 11/15] net/i40e: support AVF basic interface Wenzhuo Lu
2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 12/15] net/avf: enable sse vector Rx Tx func Wenzhuo Lu
2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 13/15] net/avf: enable bulk allocate Rx func Wenzhuo Lu
2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 14/15] net/avf: enable Rx interrupt support Wenzhuo Lu
2018-01-05  8:21     ` [dpdk-dev] [PATCH v4 15/15] doc: update doc for avf driver Wenzhuo Lu
2018-01-07 15:09       ` Zhang, Helin
2018-01-08  2:02         ` Lu, Wenzhuo
2018-01-08  5:13     ` [dpdk-dev] [PATCH v5 00/14] add new AVF PMD Wenzhuo Lu
2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 01/14] net/avf/base: add base code for avf PMD Wenzhuo Lu
2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 02/14] net/avf: initialization of " Wenzhuo Lu
2018-01-09 17:58         ` Ferruh Yigit
2018-01-10  2:59           ` Lu, Wenzhuo
2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 03/14] net/avf: enable queue and device Wenzhuo Lu
2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 04/14] net/avf: enable basic Rx Tx func Wenzhuo Lu
2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 05/14] net/avf: enable link status update Wenzhuo Lu
2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 06/14] net/avf: support stats Wenzhuo Lu
2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 07/14] net/avf: enable ops for MAC VLAN offload Wenzhuo Lu
2018-01-09 17:58         ` Ferruh Yigit
2018-01-10  1:39           ` Lu, Wenzhuo
2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 08/14] net/avf: enable ops for RSS setting Wenzhuo Lu
2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 09/14] net/avf: enable ops for MTU setting Wenzhuo Lu
2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 10/14] net/avf: enable ops to check queue info and status Wenzhuo Lu
2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 11/14] net/i40e: support AVF basic interface Wenzhuo Lu
2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 12/14] net/avf: enable sse vector Rx Tx func Wenzhuo Lu
2018-01-09 17:58         ` Ferruh Yigit
2018-01-10  1:38           ` Lu, Wenzhuo
2018-01-10  9:57             ` Ferruh Yigit
2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 13/14] net/avf: enable bulk allocate Rx func Wenzhuo Lu
2018-01-08  5:13       ` [dpdk-dev] [PATCH v5 14/14] net/avf: enable Rx interrupt support Wenzhuo Lu
2018-01-10  6:15       ` [dpdk-dev] [PATCH v6 00/14] dd new AVF PMD Wenzhuo Lu
2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 01/14] net/avf/base: add base code for avf PMD Wenzhuo Lu
2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 02/14] net/avf: initialization of " Wenzhuo Lu
2018-01-10 17:15           ` Stephen Hemminger
2018-01-11  2:07             ` Lu, Wenzhuo
2018-01-11  8:53               ` Ferruh Yigit
2018-01-10 17:17           ` Stephen Hemminger
2018-01-11  4:52             ` Lu, Wenzhuo
2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 03/14] net/avf: enable queue and device Wenzhuo Lu
2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 04/14] net/avf: enable basic Rx Tx func Wenzhuo Lu
2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 05/14] net/avf: enable link status update Wenzhuo Lu
2018-01-10  9:44           ` Xing, Beilei
2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 06/14] net/avf: support stats Wenzhuo Lu
2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 07/14] net/avf: enable MAC VLAN and promisc ops Wenzhuo Lu
2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 08/14] net/avf: enable ops for RSS setting Wenzhuo Lu
2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 09/14] net/avf: enable ops for MTU setting Wenzhuo Lu
2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 10/14] net/avf: enable ops to check queue info and status Wenzhuo Lu
2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 11/14] net/i40e: support AVF basic interface Wenzhuo Lu
2018-01-10  6:15         ` [dpdk-dev] [PATCH v6 12/14] net/avf: enable sse vector Rx Tx func Wenzhuo Lu
2018-01-10  6:16         ` [dpdk-dev] [PATCH v6 13/14] net/avf: enable bulk allocate Rx func Wenzhuo Lu
2018-01-10  6:16         ` [dpdk-dev] [PATCH v6 14/14] net/avf: enable Rx interrupt support Wenzhuo Lu
2018-01-10 13:01         ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Wenzhuo Lu
2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 01/14] net/avf/base: add base code for avf PMD Wenzhuo Lu
2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 02/14] net/avf: initialization of " Wenzhuo Lu
2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 03/14] net/avf: enable queue and device Wenzhuo Lu
2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 04/14] net/avf: enable basic Rx Tx func Wenzhuo Lu
2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 05/14] net/avf: enable link status update Wenzhuo Lu
2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 06/14] net/avf: support stats Wenzhuo Lu
2018-01-10 13:01           ` [dpdk-dev] [PATCH v7 07/14] net/avf: enable MAC VLAN and promisc ops Wenzhuo Lu
2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 08/14] net/avf: enable ops for RSS setting Wenzhuo Lu
2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 09/14] net/avf: enable ops for MTU setting Wenzhuo Lu
2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 10/14] net/avf: enable ops to check queue info and status Wenzhuo Lu
2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 11/14] net/i40e: support AVF basic interface Wenzhuo Lu
2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 12/14] net/avf: enable sse vector Rx Tx func Wenzhuo Lu
2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 13/14] net/avf: enable bulk allocate Rx func Wenzhuo Lu
2018-01-10 13:02           ` [dpdk-dev] [PATCH v7 14/14] net/avf: enable Rx interrupt support Wenzhuo Lu
2018-01-10 19:14           ` [dpdk-dev] [PATCH v7 00/14] dd new AVF PMD Ferruh Yigit
