DPDK patches and discussions
* [dpdk-dev] [PATCH v1 0/5] introduce iavf backend driver
@ 2020-12-19  7:54 Jingjing Wu
  2020-12-19  7:54 ` [dpdk-dev] [PATCH v1 1/5] net/iavf_be: " Jingjing Wu
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Jingjing Wu @ 2020-12-19  7:54 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, chenbo.xia, xiuchun.lu

This series introduces a net device driver called iavfbe, which works
as the datapath driver for an emulated iavf-type device. It provides
basic functionality following the Intel® Ethernet Adaptive Virtual
Function specification, including packet receive/transmit and virtchnl
control message handling.
The driver enabling work is based on the framework mentioned in:
  [RFC 0/2] Add device emulation support in DPDK
  http://patchwork.dpdk.org/cover/75549/

                    +------------------------------------------------------+
                    |   +---------------+      +---------------+           |
                    |   | iavf_emudev   |      | iavfbe_ethdev |           |
                    |   |    driver     |      |     driver    |           |
                    |   +---------------+      +---------------+           |
                    |           |                       |                  |
                    | ------------------------------------------- VDEV BUS |
                    |           |                       |                  |
                    |   +---------------+       +--------------+           |
+--------------+    |   | vdev:         |       | vdev:        |           |
| +----------+ |    |   | /path/to/vfio |       |iavf_emudev_# |           |
| | Generic  | |    |   +---------------+       +--------------+           |
| | vfio-dev | |    |           |                                          |
| +----------+ |    |           |                                          |
| +----------+ |    |      +----------+                                    |
| | vfio-user| |    |      | vfio-user|                                    |
| | client   | |<---|----->| server   |                                    |
| +----------+ |    |      +----------+                                    |
| QEMU/DPDK    |    | DPDK                                                 |
+--------------+    +------------------------------------------------------+
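
Once the dependent series below are applied, a backend port can be
created via --vdev on the EAL command line or at runtime through the
vdev API. A minimal usage sketch (the emudev vdev name "iavf_emudev_0"
is a placeholder; "emu" and "mac" are the devargs registered by this
series, with "mac" being optional):

    #include <rte_bus_vdev.h>

    /* Bind a new iavfbe ethdev to an already-created emulated iavf
     * device through the "emu" devarg; returns 0 on success.
     */
    int ret = rte_vdev_init("net_iavfbe",
                            "emu=iavf_emudev_0,mac=00:11:22:33:44:55");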


This series depends on the following patch series:
  [0/9] Introduce vfio-user library:
  http://patchwork.dpdk.org/cover/85389/
  [0/8] Introduce emudev library and iavf emudev driver
  http://patchwork.dpdk.org/cover/85488/

Jingjing Wu (5):
  net/iavf_be: introduce iavf backend driver
  net/iavf_be: control queue enabling
  net/iavf_be: virtchnl messages process
  net/iavf_be: add Rx Tx burst support
  doc: new net PMD iavf_be

 MAINTAINERS                            |    6 +
 doc/guides/nics/features/iavf_be.ini   |   11 +
 doc/guides/nics/iavf_be.rst            |   53 ++
 doc/guides/nics/index.rst              |    1 +
 doc/guides/rel_notes/release_21_02.rst |    6 +
 drivers/net/iavf_be/iavf_be.h          |  123 +++
 drivers/net/iavf_be/iavf_be_ethdev.c   |  961 +++++++++++++++++++++
 drivers/net/iavf_be/iavf_be_rxtx.c     |  491 +++++++++++
 drivers/net/iavf_be/iavf_be_rxtx.h     |  163 ++++
 drivers/net/iavf_be/iavf_be_vchnl.c    | 1084 ++++++++++++++++++++++++
 drivers/net/iavf_be/meson.build        |   14 +
 drivers/net/iavf_be/version.map        |    3 +
 drivers/net/meson.build                |    1 +
 13 files changed, 2917 insertions(+)
 create mode 100644 doc/guides/nics/features/iavf_be.ini
 create mode 100644 doc/guides/nics/iavf_be.rst
 create mode 100644 drivers/net/iavf_be/iavf_be.h
 create mode 100644 drivers/net/iavf_be/iavf_be_ethdev.c
 create mode 100644 drivers/net/iavf_be/iavf_be_rxtx.c
 create mode 100644 drivers/net/iavf_be/iavf_be_rxtx.h
 create mode 100644 drivers/net/iavf_be/iavf_be_vchnl.c
 create mode 100644 drivers/net/iavf_be/meson.build
 create mode 100644 drivers/net/iavf_be/version.map

-- 
2.21.1



* [dpdk-dev] [PATCH v1 1/5] net/iavf_be: introduce iavf backend driver
  2020-12-19  7:54 [dpdk-dev] [PATCH v1 0/5] introduce iavf backend driver Jingjing Wu
@ 2020-12-19  7:54 ` Jingjing Wu
  2020-12-19  7:54 ` [dpdk-dev] [PATCH v1 2/5] net/iavf_be: control queue enabling Jingjing Wu
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Jingjing Wu @ 2020-12-19  7:54 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, chenbo.xia, xiuchun.lu, Kun Qiu

Introduce a driver for the iavf backend vdev, which is based on the
vfio-user protocol and emudev libraries.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Kun Qiu <kun.qiu@intel.com>
---
 drivers/net/iavf_be/iavf_be.h        |  40 ++++
 drivers/net/iavf_be/iavf_be_ethdev.c | 330 +++++++++++++++++++++++++++
 drivers/net/iavf_be/meson.build      |  12 +
 drivers/net/iavf_be/version.map      |   3 +
 drivers/net/meson.build              |   1 +
 5 files changed, 386 insertions(+)
 create mode 100644 drivers/net/iavf_be/iavf_be.h
 create mode 100644 drivers/net/iavf_be/iavf_be_ethdev.c
 create mode 100644 drivers/net/iavf_be/meson.build
 create mode 100644 drivers/net/iavf_be/version.map

diff --git a/drivers/net/iavf_be/iavf_be.h b/drivers/net/iavf_be/iavf_be.h
new file mode 100644
index 0000000000..55f218afcd
--- /dev/null
+++ b/drivers/net/iavf_be/iavf_be.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _IAVF_BE_H_
+#define _IAVF_BE_H_
+
+/* Structure to store private data for a backend instance */
+struct iavfbe_adapter {
+	struct rte_eth_dev *eth_dev;
+	struct rte_emudev *emu_dev;
+	uint16_t edev_id;  /* Emulated Device ID */
+	struct rte_emudev_info dev_info;
+
+	uint16_t nb_qps;
+	bool link_up;
+	int cq_irqfd;
+	rte_atomic32_t irq_enable;
+
+	uint8_t unicast_promisc:1,
+		multicast_promisc:1,
+		vlan_filter:1,
+		vlan_strip:1;
+
+	int adapter_stopped;
+	uint8_t *reset; /* Reset status */
+	volatile int started;
+};
+
+/* IAVFBE_DEV_PRIVATE_TO */
+#define IAVFBE_DEV_PRIVATE_TO_ADAPTER(adapter) \
+	((struct iavfbe_adapter *)adapter)
+
+int iavfbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+
+extern int iavfbe_logtype;
+#define IAVF_BE_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, iavfbe_logtype, "%s(): " fmt "\n", \
+		__func__, ## args)
+#endif /* _IAVF_BE_H_ */
diff --git a/drivers/net/iavf_be/iavf_be_ethdev.c b/drivers/net/iavf_be/iavf_be_ethdev.c
new file mode 100644
index 0000000000..3d5ca34ec0
--- /dev/null
+++ b/drivers/net/iavf_be/iavf_be_ethdev.c
@@ -0,0 +1,330 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <sys/queue.h>
+#include <unistd.h>
+#include <inttypes.h>
+
+#include <rte_kvargs.h>
+#include <rte_ethdev_driver.h>
+#include <rte_bus_vdev.h>
+#include <rte_ethdev_vdev.h>
+#include <rte_emudev.h>
+#include <rte_iavf_emu.h>
+
+#include <iavf_type.h>
+#include "iavf_be.h"
+
+#define AVFBE_EDEV_ID_ARG "emu"
+#define AVFBE_MAC_ARG "mac"
+
+int iavfbe_logtype;
+
+static const char *iavfbe_valid_arg[] = {
+	AVFBE_EDEV_ID_ARG,
+	AVFBE_MAC_ARG,
+	NULL
+};
+
+static struct rte_eth_link iavfbe_link = {
+	.link_speed = ETH_SPEED_NUM_NONE,
+	.link_duplex = ETH_LINK_FULL_DUPLEX,
+	.link_status = ETH_LINK_DOWN
+};
+
+static int iavfbe_dev_configure(struct rte_eth_dev *dev);
+static int iavfbe_dev_close(struct rte_eth_dev *dev);
+static int iavfbe_dev_start(struct rte_eth_dev *dev);
+static int iavfbe_dev_stop(struct rte_eth_dev *dev);
+static int iavfbe_dev_info_get(struct rte_eth_dev *dev,
+				struct rte_eth_dev_info *dev_info);
+static void iavfbe_destroy_adapter(struct rte_eth_dev *dev);
+
+static const struct eth_dev_ops iavfbe_eth_dev_ops = {
+	.dev_configure              = iavfbe_dev_configure,
+	.dev_close                  = iavfbe_dev_close,
+	.dev_start                  = iavfbe_dev_start,
+	.dev_stop                   = iavfbe_dev_stop,
+	.dev_infos_get              = iavfbe_dev_info_get,
+	.link_update                = iavfbe_dev_link_update,
+};
+
+static int
+iavfbe_dev_info_get(struct rte_eth_dev *dev  __rte_unused,  struct rte_eth_dev_info *dev_info)
+{
+	dev_info->max_rx_queues = 0;
+	dev_info->max_tx_queues = 0;
+	dev_info->min_rx_bufsize = 0;
+	dev_info->max_rx_pktlen = 0;
+
+	return 0;
+}
+
+
+static int
+iavfbe_dev_configure(struct rte_eth_dev *dev __rte_unused)
+{
+	/* Any configuration? */
+	return 0;
+}
+
+static int
+iavfbe_dev_start(struct rte_eth_dev *dev)
+{
+	struct iavfbe_adapter *adapter =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	adapter->adapter_stopped = 0;
+
+	return 0;
+}
+
+static int
+iavfbe_dev_stop(struct rte_eth_dev *dev)
+{
+	struct iavfbe_adapter *adapter =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	if (adapter->adapter_stopped == 1)
+		return 0;
+
+	adapter->adapter_stopped = 1;
+
+	return 0;
+}
+
+int
+iavfbe_dev_link_update(struct rte_eth_dev *dev,
+		       __rte_unused int wait_to_complete)
+{
+	struct iavfbe_adapter *ad =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct rte_eth_link new_link = dev->data->dev_link;
+
+	/* Only link status is updated */
+	new_link.link_status = ad->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+
+	if (rte_atomic64_cmpset((volatile uint64_t *)&dev->data->dev_link,
+				*(uint64_t *)&dev->data->dev_link,
+				*(uint64_t *)&new_link) == 0)
+		return -EAGAIN;
+
+	return 0;
+}
+
+static int
+iavfbe_dev_close(struct rte_eth_dev *dev)
+{
+	iavfbe_destroy_adapter(dev);
+	rte_eth_dev_release_port(dev);
+
+	return 0;
+}
+
+static inline int
+save_str(const char *key __rte_unused, const char *value,
+	void *extra_args)
+{
+	const char **str = extra_args;
+
+	if (value == NULL)
+		return -1;
+
+	*str = value;
+
+	return 0;
+}
+
+static inline int
+set_mac(const char *key __rte_unused, const char *value, void *extra_args)
+{
+	struct rte_ether_addr *ether_addr = (struct rte_ether_addr *)extra_args;
+
+	if (rte_ether_unformat_addr(value, ether_addr) < 0)
+		IAVF_BE_LOG(ERR, "Failed to parse mac '%s'.", value);
+	return 0;
+}
+
+static int
+iavfbe_init_adapter(struct rte_eth_dev *eth_dev,
+		    struct rte_emudev *emu_dev,
+		    struct rte_ether_addr *ether_addr __rte_unused)
+{
+	struct iavfbe_adapter *adapter;
+	struct rte_iavf_emu_config *conf;
+	int ret;
+
+	adapter = IAVFBE_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+
+	adapter->eth_dev = eth_dev;
+	adapter->emu_dev = emu_dev;
+	adapter->edev_id = emu_dev->dev_id;
+	emu_dev->backend_priv = (void *)adapter;
+	rte_wmb();
+
+	conf = rte_zmalloc_socket("iavfbe", sizeof(*conf), 0,
+				  eth_dev->device->numa_node);
+	if (!conf) {
+		IAVF_BE_LOG(ERR, "Fail to allocate emulated "
+			"iavf configuration");
+		return -ENOMEM;
+	}
+	adapter->dev_info.dev_priv = (rte_emudev_obj_t)conf;
+
+	ret = rte_emudev_get_dev_info(emu_dev->dev_id, &adapter->dev_info);
+	if (ret)
+		goto err_info;
+
+	adapter->nb_qps = conf->qp_num;
+	return 0;
+
+err_info:
+	rte_free(conf);
+	return ret;
+}
+
+static void
+iavfbe_destroy_adapter(struct rte_eth_dev *dev)
+{
+	struct iavfbe_adapter *adapter =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	if (adapter->emu_dev) {
+		adapter->emu_dev->backend_priv = NULL;
+		rte_wmb();
+	}
+
+	rte_free(adapter->dev_info.dev_priv);
+}
+
+static int
+eth_dev_iavfbe_create(struct rte_vdev_device *dev,
+		      struct rte_emudev *emu_dev,
+		      struct rte_ether_addr *addr)
+{
+	struct rte_eth_dev *eth_dev = NULL;
+	struct iavfbe_adapter *adapter;
+	int ret = 0;
+
+	if (dev->device.numa_node == SOCKET_ID_ANY)
+		dev->device.numa_node = rte_socket_id();
+
+	IAVF_BE_LOG(INFO, "Creating iavfbe ethdev on numa socket %u\n",
+			dev->device.numa_node);
+
+	eth_dev = rte_eth_vdev_allocate(dev, sizeof(*adapter));
+	if (!eth_dev) {
+		IAVF_BE_LOG(ERR, "fail to allocate eth_dev\n");
+		return -ENOMEM;
+	}
+
+	ret = iavfbe_init_adapter(eth_dev, emu_dev, addr);
+	if (ret)
+		return ret;
+
+	/* Initializing default address with devarg */
+	eth_dev->data->mac_addrs =
+		rte_zmalloc_socket(rte_vdev_device_name(dev),
+				   sizeof(struct rte_ether_addr), 0,
+				   dev->device.numa_node);
+	if (eth_dev->data->mac_addrs == NULL) {
+		IAVF_BE_LOG(ERR, "fail to allocate eth_addr\n");
+		return -ENOMEM;
+	}
+	rte_ether_addr_copy(addr, &eth_dev->data->mac_addrs[0]);
+
+	eth_dev->dev_ops = &iavfbe_eth_dev_ops;
+
+	eth_dev->data->dev_link = iavfbe_link;
+	eth_dev->data->numa_node = dev->device.numa_node;
+
+	rte_eth_dev_probing_finish(eth_dev);
+
+	return ret;
+}
+
+static int
+rte_pmd_iavfbe_probe(struct rte_vdev_device *dev)
+{
+	struct rte_kvargs *kvlist = NULL;
+	struct rte_emudev *emu_dev;
+	const char *emudev_name;
+	struct rte_ether_addr ether_addr;
+	int ret = 0;
+
+	if (!dev)
+		return -EINVAL;
+
+	IAVF_BE_LOG(INFO, "Initializing pmd_iavfbe for %s\n",
+		    dev->device.name);
+
+	kvlist = rte_kvargs_parse(rte_vdev_device_args(dev), iavfbe_valid_arg);
+	if (kvlist == NULL)
+		return -1;
+
+	if (rte_kvargs_count(kvlist, AVFBE_EDEV_ID_ARG) == 1) {
+		ret = rte_kvargs_process(kvlist, AVFBE_EDEV_ID_ARG,
+					 &save_str, &emudev_name);
+		if (ret < 0)
+			goto free_kvlist;
+	} else {
+		ret = -EINVAL;
+		goto free_kvlist;
+	}
+
+	if (rte_kvargs_count(kvlist, AVFBE_MAC_ARG) == 1) {
+		ret = rte_kvargs_process(kvlist, AVFBE_MAC_ARG,
+					 &set_mac, &ether_addr);
+		if (ret < 0)
+			goto free_kvlist;
+	} else
+		rte_eth_random_addr(&ether_addr.addr_bytes[0]);
+
+	emu_dev = rte_emudev_allocated(emudev_name);
+	if (!emu_dev || strcmp(emu_dev->dev_info.dev_type, RTE_IAVF_EMUDEV_TYPE)) {
+		IAVF_BE_LOG(ERR, "emulated device isn't an iavf device\n");
+		ret = -EINVAL;
+		goto free_kvlist;
+	}
+
+	ret = eth_dev_iavfbe_create(dev, emu_dev, &ether_addr);
+
+free_kvlist:
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+static int
+rte_pmd_iavfbe_remove(struct rte_vdev_device *dev)
+{
+	const char *name;
+	struct rte_eth_dev *eth_dev = NULL;
+
+	name = rte_vdev_device_name(dev);
+
+	eth_dev = rte_eth_dev_allocated(name);
+	if (!eth_dev)
+		return 0;
+
+	iavfbe_dev_close(eth_dev);
+
+	return 0;
+}
+
+static struct rte_vdev_driver pmd_iavfbe_drv = {
+	.probe = rte_pmd_iavfbe_probe,
+	.remove = rte_pmd_iavfbe_remove,
+};
+
+RTE_PMD_REGISTER_VDEV(net_iavfbe, pmd_iavfbe_drv);
+RTE_PMD_REGISTER_ALIAS(net_iavfbe, eth_iavfbe);
+RTE_PMD_REGISTER_PARAM_STRING(net_iavfbe,
+			      AVFBE_EDEV_ID_ARG "=<str> "
+			      AVFBE_MAC_ARG "=xx:xx:xx:xx:xx:xx");
+
+RTE_INIT(iavfbe_init_log)
+{
+	iavfbe_logtype = rte_log_register("pmd.net.iavfbe");
+	if (iavfbe_logtype >= 0)
+		rte_log_set_level(iavfbe_logtype, RTE_LOG_INFO);
+}
diff --git a/drivers/net/iavf_be/meson.build b/drivers/net/iavf_be/meson.build
new file mode 100644
index 0000000000..24c625fa18
--- /dev/null
+++ b/drivers/net/iavf_be/meson.build
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2020 Intel Corporation
+
+cflags += ['-Wno-strict-aliasing']
+
+includes += include_directories('../../common/iavf')
+
+deps += ['bus_vdev', 'common_iavf', 'vfio_user', 'emu_iavf']
+
+sources = files(
+	'iavf_be_ethdev.c',
+)
diff --git a/drivers/net/iavf_be/version.map b/drivers/net/iavf_be/version.map
new file mode 100644
index 0000000000..4a76d1d52d
--- /dev/null
+++ b/drivers/net/iavf_be/version.map
@@ -0,0 +1,3 @@
+DPDK_21 {
+	local: *;
+};
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 29f4777500..4676ef4b3e 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -24,6 +24,7 @@ drivers = ['af_packet',
 	'hinic',
 	'hns3',
 	'iavf',
+	'iavf_be',
 	'ice',
 	'igc',
 	'ipn3ke',
-- 
2.21.1



* [dpdk-dev] [PATCH v1 2/5] net/iavf_be: control queue enabling
  2020-12-19  7:54 [dpdk-dev] [PATCH v1 0/5] introduce iavf backend driver Jingjing Wu
  2020-12-19  7:54 ` [dpdk-dev] [PATCH v1 1/5] net/iavf_be: " Jingjing Wu
@ 2020-12-19  7:54 ` Jingjing Wu
  2020-12-19  7:54 ` [dpdk-dev] [PATCH v1 3/5] net/iavf_be: virtchnl messages process Jingjing Wu
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Jingjing Wu @ 2020-12-19  7:54 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, chenbo.xia, xiuchun.lu

1. Set up control rx/tx queues.
2. Implement emudev callback functions.
3. Enable receiving/sending messages through the control queue.
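
Both directions of the admin queue rely on translating guest-physical
buffer addresses carried in the descriptors into host virtual
addresses via the shared memory table. Condensed from the code below
(a sketch only; "desc" and "datalen" stand for the current descriptor
and its data length):

    /* The buffer GPA is carried as two LE32 halves in the descriptor;
     * translate it before copying the message out of guest memory.
     */
    uint64_t gpa = IAVF_BE_32_TO_64(
            LE32_TO_CPU(desc->params.external.addr_high),
            LE32_TO_CPU(desc->params.external.addr_low));
    uint64_t sz = datalen;
    void *va = (void *)(uintptr_t)rte_iavf_emu_get_dma_vaddr(
            adapter->mem_table, gpa, &sz);

    if (sz == datalen) /* translation covered the whole buffer */
            rte_memcpy(event->msg_buf, va, datalen);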

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
Signed-off-by: Xiuchun Lu <xiuchun.lu@intel.com>
---
 drivers/net/iavf_be/iavf_be.h        |  39 ++++
 drivers/net/iavf_be/iavf_be_ethdev.c | 321 ++++++++++++++++++++++++++-
 drivers/net/iavf_be/iavf_be_vchnl.c  | 273 +++++++++++++++++++++++
 drivers/net/iavf_be/meson.build      |   1 +
 4 files changed, 632 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/iavf_be/iavf_be_vchnl.c

diff --git a/drivers/net/iavf_be/iavf_be.h b/drivers/net/iavf_be/iavf_be.h
index 55f218afcd..aff7bb9c56 100644
--- a/drivers/net/iavf_be/iavf_be.h
+++ b/drivers/net/iavf_be/iavf_be.h
@@ -5,13 +5,49 @@
 #ifndef _IAVF_BE_H_
 #define _IAVF_BE_H_
 
+#define IAVF_BE_AQ_LEN               32
+#define IAVF_BE_AQ_BUF_SZ            4096
+#define IAVF_BE_32_TO_64(hi, lo) ((((uint64_t)(hi)) << 32) + (lo))
+
+#define IAVFBE_READ_32(addr)        \
+	rte_le_to_cpu_32(*(volatile uint32_t *)(addr))
+#define IAVFBE_WRITE_32(addr, val)  \
+	*(volatile uint32_t *)(addr) = rte_cpu_to_le_32(val);
+
+struct iavfbe_control_q {
+	rte_spinlock_t access_lock;
+	struct rte_emudev_q_info q_info;
+	struct iavf_aq_desc *ring;
+	uint64_t p_ring_addr;	/* Guest physical address of the ring */
+	uint16_t len;
+	volatile uint8_t *tail;
+	volatile uint8_t *head;
+
+	uint16_t next_to_use;
+	uint16_t next_to_clean;
+
+	uint32_t cmd_retval; /* return value of the cmd response from PF */
+	uint8_t *aq_req;     /* buffer to store the adminq request from VF, NULL if arq */
+};
+
+/* Control queue structure of iavf */
+struct iavfbe_controlq_info {
+	struct iavfbe_control_q asq;
+	struct iavfbe_control_q arq;
+};
+
 /* Structure to store private data for a backend instance */
 struct iavfbe_adapter {
 	struct rte_eth_dev *eth_dev;
 	struct rte_emudev *emu_dev;
 	uint16_t edev_id;  /* Emulated Device ID */
 	struct rte_emudev_info dev_info;
+	struct rte_iavf_emu_mem *mem_table;
 
+	struct iavfbe_controlq_info cq_info; /* Control/Admin Queue info */
+	/* Adminq handle thread info */
+	volatile int thread_status;
+	pthread_t thread_id;
 	uint16_t nb_qps;
 	bool link_up;
 	int cq_irqfd;
@@ -32,6 +68,9 @@ struct iavfbe_adapter {
 	((struct iavfbe_adapter *)adapter)
 
 int iavfbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+void iavfbe_handle_virtchnl_msg(void *arg);
+void iavfbe_reset_asq(struct iavfbe_adapter *adapter, bool lock);
+void iavfbe_reset_arq(struct iavfbe_adapter *adapter, bool lock);
 
 extern int iavfbe_logtype;
 #define IAVF_BE_LOG(level, fmt, args...) \
diff --git a/drivers/net/iavf_be/iavf_be_ethdev.c b/drivers/net/iavf_be/iavf_be_ethdev.c
index 3d5ca34ec0..2ab66f889d 100644
--- a/drivers/net/iavf_be/iavf_be_ethdev.c
+++ b/drivers/net/iavf_be/iavf_be_ethdev.c
@@ -14,6 +14,7 @@
 #include <rte_iavf_emu.h>
 
 #include <iavf_type.h>
+#include <virtchnl.h>
 #include "iavf_be.h"
 
 #define AVFBE_EDEV_ID_ARG "emu"
@@ -33,6 +34,12 @@ static struct rte_eth_link iavfbe_link = {
 	.link_status = ETH_LINK_DOWN
 };
 
+static int iavfbe_new_device(struct rte_emudev *dev);
+static void iavfbe_destroy_device(struct rte_emudev *dev);
+static int iavfbe_update_device(struct rte_emudev *dev);
+static int iavfbe_lock_dp(struct rte_emudev *dev, int lock);
+static int iavfbe_reset_device(struct rte_emudev *dev);
+
 static int iavfbe_dev_configure(struct rte_eth_dev *dev);
 static int iavfbe_dev_close(struct rte_eth_dev *dev);
 static int iavfbe_dev_start(struct rte_eth_dev *dev);
@@ -41,6 +48,16 @@ static int iavfbe_dev_info_get(struct rte_eth_dev *dev,
 				struct rte_eth_dev_info *dev_info);
 static void iavfbe_destroy_adapter(struct rte_eth_dev *dev);
 
+struct rte_iavf_emu_notify_ops iavfbe_notify_ops = {
+	.device_ready = iavfbe_new_device,
+	.device_destroy = iavfbe_destroy_device,
+	.update_status = iavfbe_update_device,
+	.device_start = NULL,
+	.device_stop = NULL,
+	.lock_dp = iavfbe_lock_dp,
+	.reset_device = iavfbe_reset_device,
+};
+
 static const struct eth_dev_ops iavfbe_eth_dev_ops = {
 	.dev_configure              = iavfbe_dev_configure,
 	.dev_close                  = iavfbe_dev_close,
@@ -51,7 +68,8 @@ static const struct eth_dev_ops iavfbe_eth_dev_ops = {
 };
 
 static int
-iavfbe_dev_info_get(struct rte_eth_dev *dev  __rte_unused,  struct rte_eth_dev_info *dev_info)
+iavfbe_dev_info_get(struct rte_eth_dev *dev  __rte_unused,
+		    struct rte_eth_dev_info *dev_info)
 {
 	dev_info->max_rx_queues = 0;
 	dev_info->max_tx_queues = 0;
@@ -61,7 +79,6 @@ iavfbe_dev_info_get(struct rte_eth_dev *dev  __rte_unused,  struct rte_eth_dev_i
 	return 0;
 }
 
-
 static int
 iavfbe_dev_configure(struct rte_eth_dev *dev __rte_unused)
 {
@@ -122,6 +139,241 @@ iavfbe_dev_close(struct rte_eth_dev *dev)
 	return 0;
 }
 
+/* Called when emulation device is ready */
+static int
+iavfbe_new_device(struct rte_emudev *dev)
+{
+	struct iavfbe_adapter *adapter =
+		(struct iavfbe_adapter *)dev->backend_priv;
+	struct rte_iavf_emu_mem **mem = &(adapter->mem_table);
+	struct rte_emudev_irq_info irq_info;
+	struct rte_emudev_q_info q_info;
+	struct rte_emudev_db_info db_info;
+	uint64_t addr;
+	uint16_t i;
+
+	if (rte_emudev_get_mem_table(dev->dev_id, (void **)mem)) {
+		IAVF_BE_LOG(ERR, "Can not get mem table\n");
+		return -1;
+	}
+
+	for (i = 0; i < RTE_IAVF_EMU_ADMINQ_NUM; i++) {
+		if (rte_emudev_get_queue_info(dev->dev_id, i, &q_info)) {
+			IAVF_BE_LOG(ERR,
+				"Can not get queue info of qid %d\n", i);
+			return -1;
+		}
+		/*
+		 * Only the doorbell of the LAN queue is available when the
+		 * device is ready; other LAN queue info comes via virtchnl.
+		 *
+		 * The AdminQ's irq and doorbell are both ready at this stage.
+		 */
+		if (rte_emudev_get_db_info(dev->dev_id, q_info.doorbell_id,
+					   &db_info)) {
+			IAVF_BE_LOG(ERR,
+				"Can not get doorbell info of qid %d\n", i);
+			return -1;
+		}
+
+		/* Only support memory based doorbell for now */
+		if (db_info.flag & RTE_EMUDEV_DB_FD ||
+			db_info.data.mem.size != 4)
+			return -1;
+
+		if (i == RTE_IAVF_EMU_ADMINQ_TXQ) {
+			adapter->cq_info.asq.tail =
+				(uint8_t *)db_info.data.mem.base;
+		} else {
+			adapter->cq_info.arq.tail =
+				(uint8_t *)db_info.data.mem.base;
+
+			if (rte_emudev_get_irq_info(dev->dev_id,
+				q_info.irq_vector, &irq_info)) {
+				IAVF_BE_LOG(ERR,
+					"Can not get irq info of qid %d\n", i);
+				return -1;
+			}
+
+			adapter->cq_irqfd = irq_info.eventfd;
+		}
+	}
+
+	/* LAN queue info will be set at queue setup */
+
+	if (rte_emudev_get_attr(dev->dev_id, RTE_IAVF_EMU_ATTR_ASQ_HEAD,
+		(rte_emudev_attr_t)&addr)) {
+		IAVF_BE_LOG(ERR, "Can not get asq head\n");
+		return -1;
+	}
+	adapter->cq_info.asq.head = (uint8_t *)(uintptr_t)addr;
+
+	if (rte_emudev_get_attr(dev->dev_id, RTE_IAVF_EMU_ATTR_ARQ_HEAD,
+		(rte_emudev_attr_t)&addr)) {
+		IAVF_BE_LOG(ERR, "Can not get arq head\n");
+		return -1;
+	}
+	adapter->cq_info.arq.head = (uint8_t *)(uintptr_t)addr;
+
+	iavfbe_reset_asq(adapter, false);
+	iavfbe_reset_arq(adapter, false);
+
+	if (rte_emudev_get_attr(dev->dev_id, RTE_IAVF_EMU_ATTR_RESET,
+		(rte_emudev_attr_t)&addr)) {
+		IAVF_BE_LOG(ERR, "Can not get reset status\n");
+		return -1;
+	}
+	adapter->reset = (uint8_t *)(uintptr_t)addr;
+	IAVFBE_WRITE_32(adapter->reset, RTE_IAVF_EMU_RESET_COMPLETED);
+	adapter->started = 1;
+	IAVF_BE_LOG(INFO, "New device ready, mem table %p\n", adapter->mem_table);
+
+	return 0;
+}
+
+static void
+iavfbe_destroy_device(struct rte_emudev *dev)
+{
+	struct iavfbe_adapter *adapter =
+		(struct iavfbe_adapter *)dev->backend_priv;
+
+	/* TODO: Disable all lan queues */
+
+	/* update link status */
+	adapter->link_up = false;
+	iavfbe_dev_link_update(adapter->eth_dev, 0);
+}
+
+static int
+iavfbe_update_device(struct rte_emudev *dev)
+{
+	struct iavfbe_adapter *adapter =
+		(struct iavfbe_adapter *)dev->backend_priv;
+	struct rte_iavf_emu_mem **mem = &(adapter->mem_table);
+	struct rte_emudev_q_info q_info;
+	struct rte_emudev_irq_info irq_info;
+
+	if (rte_emudev_get_mem_table(dev->dev_id, (void **)mem)) {
+		IAVF_BE_LOG(ERR, "Can not get mem table\n");
+		return -1;
+	}
+
+	if (rte_emudev_get_queue_info(dev->dev_id,
+		RTE_IAVF_EMU_ADMINQ_RXQ, &q_info)) {
+		IAVF_BE_LOG(ERR, "Can not get queue info of qid %d\n",
+			RTE_IAVF_EMU_ADMINQ_RXQ);
+		return -1;
+	}
+
+	if (rte_emudev_get_irq_info(dev->dev_id, q_info.irq_vector, &irq_info)) {
+		IAVF_BE_LOG(ERR, "Can not get irq info of qid %d\n",
+			RTE_IAVF_EMU_ADMINQ_RXQ);
+		return -1;
+	}
+
+	/* TODO: Lan queue info update */
+	adapter->cq_irqfd = irq_info.eventfd;
+	rte_atomic32_set(&adapter->irq_enable, irq_info.enable);
+
+	return 0;
+}
+
+static int
+iavfbe_lock_dp(struct rte_emudev *dev, int lock)
+{
+	struct iavfbe_adapter *adapter =
+		(struct iavfbe_adapter *)dev->backend_priv;
+
+	/* Acquire/Release lock of control queue and lan queue */
+
+	if (lock) {
+		/* TODO: Lan queue lock */
+		rte_spinlock_lock(&adapter->cq_info.asq.access_lock);
+		rte_spinlock_lock(&adapter->cq_info.arq.access_lock);
+	} else {
+		/* TODO: Lan queue unlock */
+		rte_spinlock_unlock(&adapter->cq_info.asq.access_lock);
+		rte_spinlock_unlock(&adapter->cq_info.arq.access_lock);
+	}
+
+	return 0;
+}
+
+void
+iavfbe_reset_asq(struct iavfbe_adapter *adapter, bool lock)
+{
+	struct iavfbe_control_q *q;
+
+	q = &adapter->cq_info.asq;
+
+	if (lock)
+		rte_spinlock_lock(&q->access_lock);
+
+	if (q->aq_req)
+		memset(q->aq_req, 0, IAVF_BE_AQ_BUF_SZ);
+	memset(&q->q_info, 0, sizeof(q->q_info));
+	q->ring = NULL;
+	q->p_ring_addr = 0;
+	q->len = 0;
+	q->next_to_clean = 0;
+	q->cmd_retval = 0;
+	if (q->head)
+		IAVFBE_WRITE_32(q->head, 0);
+
+	/* Do not reset tail as it is initialized by the FE */
+
+	if (lock)
+		rte_spinlock_unlock(&q->access_lock);
+
+}
+
+void
+iavfbe_reset_arq(struct iavfbe_adapter *adapter, bool lock)
+{
+	struct iavfbe_control_q *q;
+
+	q = &adapter->cq_info.arq;
+
+	if (lock)
+		rte_spinlock_lock(&q->access_lock);
+
+	memset(&q->q_info, 0, sizeof(q->q_info));
+	q->ring = NULL;
+	q->p_ring_addr = 0;
+	q->len = 0;
+	q->next_to_use = 0;
+	if (q->head)
+		IAVFBE_WRITE_32(q->head, 0);
+
+	/* Do not reset tail as it is initialized by the FE */
+
+	if (lock)
+		rte_spinlock_unlock(&q->access_lock);
+
+}
+
+static int
+iavfbe_reset_device(struct rte_emudev *dev)
+{
+	struct iavfbe_adapter *adapter =
+		(struct iavfbe_adapter *)dev->backend_priv;
+
+	/* Lock has been acquired by lock_dp */
+	/* TODO: reset all queues */
+	iavfbe_reset_asq(adapter, false);
+	iavfbe_reset_arq(adapter, false);
+
+	adapter->link_up = 0;
+	adapter->unicast_promisc = true;
+	adapter->multicast_promisc = true;
+	adapter->vlan_filter = false;
+	adapter->vlan_strip = false;
+	adapter->cq_irqfd = -1;
+	adapter->adapter_stopped = 1;
+
+	return 0;
+}
+
 static inline int
 save_str(const char *key __rte_unused, const char *value,
 	void *extra_args)
@@ -146,6 +398,34 @@ set_mac(const char *key __rte_unused, const char *value, void *extra_args)
 	return 0;
 }
 
+static int
+iavfbe_driver_admq_session_start(struct rte_eth_dev *eth_dev)
+{
+	struct iavfbe_adapter *adapter =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+	int ret;
+
+	adapter->thread_status = 1;
+	ret = pthread_create(&adapter->thread_id, NULL,
+			     (void *)iavfbe_handle_virtchnl_msg,
+			     eth_dev);
+	if (ret) {
+		IAVF_BE_LOG(ERR, "Can't create a thread\n");
+		adapter->thread_status = 0;
+	}
+	return ret;
+}
+
+static void
+iavfbe_driver_admq_session_stop(struct rte_eth_dev *eth_dev)
+{
+	struct iavfbe_adapter *adapter =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+
+	adapter->thread_status = 0;
+	pthread_join(adapter->thread_id, NULL);
+}
+
 static int
 iavfbe_init_adapter(struct rte_eth_dev *eth_dev,
 		    struct rte_emudev *emu_dev,
@@ -177,8 +457,44 @@ iavfbe_init_adapter(struct rte_eth_dev *eth_dev,
 		goto err_info;
 
 	adapter->nb_qps = conf->qp_num;
+
+	adapter->cq_info.asq.aq_req =
+		rte_zmalloc_socket("iavfbe", IAVF_BE_AQ_BUF_SZ, 0,
+				   eth_dev->device->numa_node);
+	if (!adapter->cq_info.asq.aq_req) {
+		IAVF_BE_LOG(ERR, "Fail to allocate buffer for"
+				 " control queue request");
+		ret = -ENOMEM;
+		goto err_aq;
+	}
+
+	/* Init lock */
+	rte_spinlock_init(&adapter->cq_info.asq.access_lock);
+	rte_spinlock_init(&adapter->cq_info.arq.access_lock);
+
+	adapter->unicast_promisc = true;
+	adapter->multicast_promisc = true;
+	adapter->vlan_filter = false;
+	adapter->vlan_strip = false;
+
+	/* No need to map region or init admin queue here now. They will
+	 * be done when the emu device is ready. */
+
+	/* Currently RSS is not necessary for device emulator */
+
+	/* Subscribe event from emulated avf device */
+	rte_emudev_subscribe_event(emu_dev->dev_id, &iavfbe_notify_ops);
+
+	/* Create a thread for virtchnl command processing */
+	ret = iavfbe_driver_admq_session_start(eth_dev);
+	if (ret) {
+		IAVF_BE_LOG(ERR, "iavfbe driver adminq session start failed");
+		goto err_thread;
+	}
+
 	return 0;
 
+err_thread:
+err_aq:
 err_info:
 	rte_free(conf);
 	return ret;
@@ -190,6 +506,7 @@ iavfbe_destroy_adapter(struct rte_eth_dev *dev)
 	struct iavfbe_adapter *adapter =
 		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 
+	iavfbe_driver_admq_session_stop(dev);
 	if (adapter->emu_dev) {
 		adapter->emu_dev->backend_priv = NULL;
 		rte_wmb();
diff --git a/drivers/net/iavf_be/iavf_be_vchnl.c b/drivers/net/iavf_be/iavf_be_vchnl.c
new file mode 100644
index 0000000000..646c967252
--- /dev/null
+++ b/drivers/net/iavf_be/iavf_be_vchnl.c
@@ -0,0 +1,273 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <unistd.h>
+#include <inttypes.h>
+#include <sys/mman.h>
+#include <sys/eventfd.h>
+
+#include <rte_kvargs.h>
+#include <rte_debug.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev_driver.h>
+#include <rte_dev.h>
+#include <rte_emudev.h>
+#include <rte_iavf_emu.h>
+
+#include <iavf_type.h>
+#include <virtchnl.h>
+
+#include "iavf_be.h"
+
+__rte_unused static int
+iavfbe_send_msg_to_vf(struct iavfbe_adapter *adapter,
+			uint32_t opcode,
+			uint32_t retval,
+			uint8_t *msg,
+			uint16_t msglen)
+{
+	struct iavfbe_control_q *arq = &adapter->cq_info.arq;
+	struct iavf_aq_desc *desc;
+	enum iavf_status status = IAVF_SUCCESS;
+	uint32_t dma_buff_low, dma_buff_high;
+	uint16_t ntu;
+
+	if (msglen > IAVF_BE_AQ_BUF_SZ) {
+		IAVF_BE_LOG(ERR, "ARQ: msg is too long: %u\n", msglen);
+		/* The queue lock is not held yet, so return directly */
+		return IAVF_ERR_INVALID_SIZE;
+	}
+
+	rte_spinlock_lock(&arq->access_lock);
+
+	ntu = arq->next_to_use;
+	if (ntu == IAVFBE_READ_32(arq->tail)) {
+		IAVF_BE_LOG(ERR, "ARQ: No free desc\n");
+		status = IAVF_ERR_QUEUE_EMPTY;
+		goto arq_send_error;
+	}
+	desc = &arq->ring[ntu];
+	dma_buff_low = LE32_TO_CPU(desc->params.external.addr_low);
+	dma_buff_high = LE32_TO_CPU(desc->params.external.addr_high);
+
+	/* Prepare descriptor */
+	memset((void *)desc, 0, sizeof(struct iavf_aq_desc));
+	desc->opcode = CPU_TO_LE16(iavf_aqc_opc_send_msg_to_vf);
+
+	desc->flags = CPU_TO_LE16(IAVF_AQ_FLAG_SI);
+	desc->cookie_high = CPU_TO_LE32(opcode);
+	desc->cookie_low = CPU_TO_LE32(retval);
+
+	if (msg && msglen) {
+		void *buf_va;
+		uint64_t buf_sz = msglen;
+
+		desc->flags |= CPU_TO_LE16((uint16_t)(IAVF_AQ_FLAG_BUF
+						| IAVF_AQ_FLAG_RD));
+		if (msglen > IAVF_AQ_LARGE_BUF)
+			desc->flags |= CPU_TO_LE16((uint16_t)IAVF_AQ_FLAG_LB);
+		desc->datalen = CPU_TO_LE16(msglen);
+
+		buf_va = (void *)(uintptr_t)rte_iavf_emu_get_dma_vaddr(
+			adapter->mem_table,
+			IAVF_BE_32_TO_64(dma_buff_high, dma_buff_low),
+			&buf_sz);
+		if (buf_sz != msglen) {
+			status = IAVF_ERR_INVALID_SIZE;
+			goto arq_send_error;
+		}
+
+		rte_memcpy(buf_va, msg, msglen);
+	}
+	rte_wmb();
+
+	ntu++;
+	if (ntu == arq->len)
+		ntu = 0;
+	arq->next_to_use = ntu;
+	IAVFBE_WRITE_32(arq->head, arq->next_to_use);
+
+arq_send_error:
+	rte_spinlock_unlock(&arq->access_lock);
+	return status;
+}
+
+/* Read data in admin queue to get msg from vf driver */
+static enum iavf_status
+iavfbe_read_msg_from_vf(struct iavfbe_adapter *adapter,
+			struct iavf_arq_event_info *event)
+{
+	struct iavfbe_control_q *asq = &adapter->cq_info.asq;
+	struct iavf_aq_desc *desc;
+	enum virtchnl_ops opcode;
+	uint16_t ntc;
+	uint16_t datalen;
+	uint16_t flags;
+	int ret = IAVF_SUCCESS;
+
+	rte_spinlock_lock(&asq->access_lock);
+
+	ntc = asq->next_to_clean;
+
+	/* pre-clean the event info */
+	memset(&event->desc, 0, sizeof(event->desc));
+	event->msg_len = 0;
+
+	if (ntc == IAVFBE_READ_32(asq->tail)) {
+		/* nothing to do */
+		ret = IAVF_ERR_ADMIN_QUEUE_NO_WORK;
+		goto end;
+	}
+	/* now get the next descriptor */
+	desc = &asq->ring[ntc];
+	rte_memcpy(&event->desc, desc, sizeof(struct iavf_aq_desc));
+	flags = LE16_TO_CPU(desc->flags);
+	datalen = LE16_TO_CPU(desc->datalen);
+	if (flags & IAVF_AQ_FLAG_RD) {
+		if (datalen > event->buf_len) {
+			ret = IAVF_ERR_BUF_TOO_SHORT;
+			goto end;
+		} else {
+			uint32_t reg1 = 0;
+			uint32_t reg2 = 0;
+			void *buf_va;
+			uint64_t buf_sz = datalen;
+
+			event->msg_len = datalen;
+			reg1 = LE32_TO_CPU(desc->params.external.addr_low);
+			reg2 = LE32_TO_CPU(desc->params.external.addr_high);
+			buf_va = (void *)(uintptr_t)rte_iavf_emu_get_dma_vaddr(
+					adapter->mem_table,
+					IAVF_BE_32_TO_64(reg2, reg1), &buf_sz);
+			rte_memcpy(event->msg_buf, buf_va, event->msg_len);
+		}
+	}
+
+	ntc++;
+	if (ntc == asq->len)
+		ntc = 0;
+	asq->next_to_clean = ntc;
+
+	/* Write back to head and Desc with Flags.DD and Flags.CMP */
+	desc->flags |= CPU_TO_LE16(IAVF_AQ_FLAG_DD | IAVF_AQ_FLAG_CMP);
+	rte_wmb();
+
+	IAVFBE_WRITE_32(asq->head, asq->next_to_clean);
+	opcode = (enum virtchnl_ops)rte_le_to_cpu_32(event->desc.cookie_high);
+	asq->cmd_retval = (enum virtchnl_status_code)
+				rte_le_to_cpu_32(event->desc.cookie_low);
+
+	IAVF_BE_LOG(DEBUG, "ASQ from VF carries aq opcode %u, virtchnl op %u, retval %d",
+		    event->desc.opcode, opcode, asq->cmd_retval);
+end:
+	rte_spinlock_unlock(&asq->access_lock);
+
+	return ret;
+}
+
+static inline int
+iavfbe_control_queue_remap(struct iavfbe_adapter *adapter,
+			  struct iavfbe_control_q *asq,
+			  struct iavfbe_control_q *arq)
+{
+	struct rte_emudev_q_info *asq_info;
+	struct rte_emudev_q_info *arq_info;
+	uint64_t len;
+	int ret;
+
+	asq_info = &adapter->cq_info.asq.q_info;
+	arq_info = &adapter->cq_info.arq.q_info;
+
+	ret = rte_emudev_get_queue_info(adapter->edev_id,
+				     RTE_IAVF_EMU_ADMINQ_TXQ,
+				     asq_info);
+	if (ret)
+		return IAVF_ERR_NOT_READY;
+
+	ret = rte_emudev_get_queue_info(adapter->edev_id,
+					RTE_IAVF_EMU_ADMINQ_RXQ,
+					arq_info);
+	if (ret)
+		return IAVF_ERR_NOT_READY;
+
+	rte_spinlock_lock(&asq->access_lock);
+
+	asq->p_ring_addr = asq_info->base;
+	asq->len = asq_info->size;
+	len = asq->len * sizeof(struct iavf_aq_desc);
+	asq->ring = (void *)(uintptr_t)rte_iavf_emu_get_dma_vaddr(
+					adapter->mem_table,
+					asq->p_ring_addr, &len);
+	rte_spinlock_unlock(&asq->access_lock);
+
+	rte_spinlock_lock(&arq->access_lock);
+	arq->p_ring_addr = arq_info->base;
+	arq->len = arq_info->size;
+	len = arq->len * sizeof(struct iavf_aq_desc);
+	arq->ring = (void *)(uintptr_t)rte_iavf_emu_get_dma_vaddr(
+					adapter->mem_table,
+					arq->p_ring_addr, &len);
+	rte_spinlock_unlock(&arq->access_lock);
+
+	return 0;
+}
+
+void
+iavfbe_handle_virtchnl_msg(void *arg)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)arg;
+	struct iavfbe_adapter *adapter =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct iavfbe_control_q *arq = &adapter->cq_info.arq;
+	struct iavfbe_control_q *asq = &adapter->cq_info.asq;
+	struct iavf_arq_event_info info;
+	uint16_t aq_opc;
+	int ret;
+
+	info.buf_len = IAVF_BE_AQ_BUF_SZ;
+	info.msg_buf = adapter->cq_info.asq.aq_req;
+
+	while (adapter->thread_status) {
+		rte_delay_us_sleep(3000); /* sleep for 3 ms */
+		/* Check if the control queue is initialized */
+		if (adapter->started == 0)
+			continue;
+
+		/* remap every time */
+		ret = iavfbe_control_queue_remap(adapter, asq, arq);
+		if (ret ||
+		    !(asq->p_ring_addr && asq->len && asq->ring) ||
+		    !(arq->p_ring_addr && arq->len && arq->ring))
+			continue;
+
+		if (asq->next_to_clean == IAVFBE_READ_32(asq->tail))
+			/* nothing to do */
+			continue;
+
+		ret = iavfbe_read_msg_from_vf(adapter, &info);
+		if (ret != IAVF_SUCCESS) {
+			IAVF_BE_LOG(DEBUG, "Failed to read msg "
+				    "from AdminQ");
+			continue;
+		}
+		aq_opc = rte_le_to_cpu_16(info.desc.opcode);
+
+		switch (aq_opc) {
+		case iavf_aqc_opc_send_msg_to_pf:
+			/* Process msg from VF driver */
+			break;
+		case iavf_aqc_opc_queue_shutdown:
+			iavfbe_reset_arq(adapter, true);
+			break;
+		case 0:
+			IAVF_BE_LOG(DEBUG, "NULL Request ignored");
+			break;
+		default:
+			IAVF_BE_LOG(ERR, "Unexpected Request 0x%04x ignored",
+				    aq_opc);
+			break;
+		}
+	}
+	pthread_exit(0);
+}
diff --git a/drivers/net/iavf_be/meson.build b/drivers/net/iavf_be/meson.build
index 24c625fa18..be13a2e492 100644
--- a/drivers/net/iavf_be/meson.build
+++ b/drivers/net/iavf_be/meson.build
@@ -9,4 +9,5 @@ deps += ['bus_vdev', 'common_iavf', 'vfio_user', 'emu_iavf']
 
 sources = files(
 	'iavf_be_ethdev.c',
+	'iavf_be_vchnl.c',
 )
-- 
2.21.1



* [dpdk-dev] [PATCH v1 3/5] net/iavf_be: virtchnl messages process
  2020-12-19  7:54 [dpdk-dev] [PATCH v1 0/5] introduce iavf backend driver Jingjing Wu
  2020-12-19  7:54 ` [dpdk-dev] [PATCH v1 1/5] net/iavf_be: " Jingjing Wu
  2020-12-19  7:54 ` [dpdk-dev] [PATCH v1 2/5] net/iavf_be: control queue enabling Jingjing Wu
@ 2020-12-19  7:54 ` Jingjing Wu
  2020-12-19  7:54 ` [dpdk-dev] [PATCH v1 4/5] net/iavf_be: add Rx Tx burst support Jingjing Wu
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Jingjing Wu @ 2020-12-19  7:54 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, chenbo.xia, xiuchun.lu

1. Process virtchnl messages from the Front End.
2. Implement ethdev ops for queue setup.
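
Each virtchnl opcode handler ends by posting a reply to the VF through
the ARQ helper added in the previous patch. For instance, answering
VIRTCHNL_OP_GET_VF_RESOURCES conceptually looks like the sketch below
(illustrative only, not the exact handler; "len" is a hypothetical
local mirroring the vf_res allocation size):

    /* Reply with the resource map prepared at adapter init time */
    uint16_t len = sizeof(struct virtchnl_vf_resource) +
                   IAVF_BE_DEFAULT_VSI_NUM *
                   sizeof(struct virtchnl_vsi_resource);

    iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_GET_VF_RESOURCES,
                          VIRTCHNL_STATUS_SUCCESS,
                          (uint8_t *)adapter->vf_res, len);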

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Xiuchun Lu <xiuchun.lu@intel.com>
---
 drivers/net/iavf_be/iavf_be.h        |  44 ++
 drivers/net/iavf_be/iavf_be_ethdev.c | 335 ++++++++++-
 drivers/net/iavf_be/iavf_be_rxtx.c   | 162 ++++++
 drivers/net/iavf_be/iavf_be_rxtx.h   | 103 ++++
 drivers/net/iavf_be/iavf_be_vchnl.c  | 815 ++++++++++++++++++++++++++-
 drivers/net/iavf_be/meson.build      |   1 +
 6 files changed, 1446 insertions(+), 14 deletions(-)
 create mode 100644 drivers/net/iavf_be/iavf_be_rxtx.c
 create mode 100644 drivers/net/iavf_be/iavf_be_rxtx.h

diff --git a/drivers/net/iavf_be/iavf_be.h b/drivers/net/iavf_be/iavf_be.h
index aff7bb9c56..6e8774ebb0 100644
--- a/drivers/net/iavf_be/iavf_be.h
+++ b/drivers/net/iavf_be/iavf_be.h
@@ -9,6 +9,30 @@
 #define IAVF_BE_AQ_BUF_SZ            4096
 #define IAVF_BE_32_TO_64(hi, lo) ((((uint64_t)(hi)) << 32) + (lo))
 
+/* Default number of VSIs that a VF can contain */
+#define IAVF_BE_DEFAULT_VSI_NUM     1
+#define AVF_DEFAULT_MAX_MTU         1500
+/* Set the max vectors and queues to 16, as base-mode virtchnl
+ * supports at most 16 queue pair mappings.
+ */
+#define IAVF_BE_MAX_NUM_QUEUES      16
+#define IAVF_BE_MAX_VECTORS         16
+#define IAVF_BE_BUF_SIZE_MIN        1024
+#define IAVF_BE_FRAME_SIZE_MAX      9728
+#define IAVF_BE_NUM_MACADDR_MAX     64
+
+/* The overhead from MTU to max frame size.
+ * For QinQ packets, the VLAN tag needs to be counted twice.
+ */
+#define AVF_VLAN_TAG_SIZE           4
+#define AVF_ETH_OVERHEAD \
+	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + AVF_VLAN_TAG_SIZE * 2)
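+/* e.g. 14 (Ethernet header) + 4 (CRC) + 2 * 4 (VLAN tags) = 26 bytes */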
+
+/* Structure that defines a VSI, associated with an adapter. */
+
+/* Assume the max number is 16 for now */
+#define AVF_MAX_MSIX_VECTORS        16
+
 #define IAVFBE_READ_32(addr)        \
 	rte_le_to_cpu_32(*(volatile uint32_t *)(addr))
 #define IAVFBE_WRITE_32(addr, val)  \
@@ -48,8 +72,15 @@ struct iavfbe_adapter {
 	/* Adminq handle thread info */
 	volatile int thread_status;
 	pthread_t thread_id;
+
+	struct virtchnl_version_info virtchnl_version;
+	struct virtchnl_vf_resource *vf_res; /* Resource to VF */
+	/* Pointer to array of queue pairs info. */
+	struct virtchnl_queue_pair_info *qps;
 	uint16_t nb_qps;
+	uint16_t nb_used_qps;
 	bool link_up;
+	struct virtchnl_eth_stats eth_stats; /* Stats to VF */
 	int cq_irqfd;
 	rte_atomic32_t irq_enable;
 
@@ -67,11 +98,24 @@ struct iavfbe_adapter {
 #define IAVFBE_DEV_PRIVATE_TO_ADAPTER(adapter) \
 	((struct iavfbe_adapter *)adapter)
 
+void iavfbe_reset_all_queues(struct iavfbe_adapter *adapter);
 int iavfbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+int iavfbe_lock_lanq(struct iavfbe_adapter *adapter);
+int iavfbe_unlock_lanq(struct iavfbe_adapter *adapter);
+void iavfbe_notify_vf_reset(struct iavfbe_adapter *adapter);
+void iavfbe_notify(struct iavfbe_adapter *adapter);
 void iavfbe_handle_virtchnl_msg(void *arg);
 void iavfbe_reset_asq(struct iavfbe_adapter *adapter, bool lock);
 void iavfbe_reset_arq(struct iavfbe_adapter *adapter, bool lock);
 
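+/* Return (stat - offset), tolerating a single wrap of the free-running
+ * 64-bit counter; stats_reset() works by snapshotting the offsets.
+ */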
+static inline uint64_t stats_update(uint64_t offset, uint64_t stat)
+{
+	if (stat >= offset)
+		return (stat - offset);
+	else
+		return (uint64_t)(((uint64_t)-1) - offset + stat + 1);
+}
+
 extern int iavfbe_logtype;
 #define IAVF_BE_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, iavfbe_logtype, "%s(): " fmt "\n", \
diff --git a/drivers/net/iavf_be/iavf_be_ethdev.c b/drivers/net/iavf_be/iavf_be_ethdev.c
index 2ab66f889d..e809f52312 100644
--- a/drivers/net/iavf_be/iavf_be_ethdev.c
+++ b/drivers/net/iavf_be/iavf_be_ethdev.c
@@ -16,6 +16,7 @@
 #include <iavf_type.h>
 #include <virtchnl.h>
 #include "iavf_be.h"
+#include "iavf_be_rxtx.h"
 
 #define AVFBE_EDEV_ID_ARG "emu"
 #define AVFBE_MAC_ARG "mac"
@@ -46,6 +47,8 @@ static int iavfbe_dev_start(struct rte_eth_dev *dev);
 static int iavfbe_dev_stop(struct rte_eth_dev *dev);
 static int iavfbe_dev_info_get(struct rte_eth_dev *dev,
 				struct rte_eth_dev_info *dev_info);
+static int iavfbe_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);
+static int iavfbe_stats_reset(struct rte_eth_dev *dev);
 static void iavfbe_destroy_adapter(struct rte_eth_dev *dev);
 
 struct rte_iavf_emu_notify_ops iavfbe_notify_ops = {
@@ -64,17 +67,80 @@ static const struct eth_dev_ops iavfbe_eth_dev_ops = {
 	.dev_start                  = iavfbe_dev_start,
 	.dev_stop                   = iavfbe_dev_stop,
 	.dev_infos_get              = iavfbe_dev_info_get,
+	.rx_queue_setup             = iavfbe_dev_rx_queue_setup,
+	.tx_queue_setup             = iavfbe_dev_tx_queue_setup,
+	.rx_queue_release           = iavfbe_dev_rx_queue_release,
+	.tx_queue_release           = iavfbe_dev_tx_queue_release,
+	.rxq_info_get               = iavfbe_dev_rxq_info_get,
+	.txq_info_get               = iavfbe_dev_txq_info_get,
 	.link_update                = iavfbe_dev_link_update,
+	.stats_get                  = iavfbe_stats_get,
+	.stats_reset                = iavfbe_stats_reset,
 };
 
 static int
-iavfbe_dev_info_get(struct rte_eth_dev *dev  __rte_unused,
-		    struct rte_eth_dev_info *dev_info)
+iavfbe_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
-	dev_info->max_rx_queues = 0;
-	dev_info->max_tx_queues = 0;
-	dev_info->min_rx_bufsize = 0;
-	dev_info->max_rx_pktlen = 0;
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	uint64_t tx_pkts = 0;
+	uint64_t tx_bytes = 0;
+	uint64_t tx_missed = 0;
+	uint64_t rx_pkts = 0;
+	uint64_t rx_bytes = 0;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+		rx_pkts += stats_update(rxq->stats_off.recv_pkt_num,
+					rxq->stats.recv_pkt_num);
+		rx_bytes += stats_update(rxq->stats_off.recv_bytes,
+					 rxq->stats.recv_bytes);
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+		tx_pkts += stats_update(txq->stats_off.sent_pkt_num,
+					txq->stats.sent_pkt_num);
+		tx_bytes += stats_update(txq->stats_off.sent_bytes,
+					 txq->stats.sent_bytes);
+		tx_missed += stats_update(txq->stats_off.sent_miss_num,
+					  txq->stats.sent_miss_num);
+	}
+
+	stats->ipackets = rx_pkts;
+	stats->opackets = tx_pkts;
+	stats->oerrors = tx_missed;
+	stats->ibytes = rx_bytes;
+	stats->obytes = tx_bytes;
+
+	return 0;
+}
+
+static int
+iavfbe_stats_reset(struct rte_eth_dev *dev)
+{
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	unsigned int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+		rxq->stats_off = rxq->stats;
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+		txq->stats_off = txq->stats;
+	}
 
 	return 0;
 }
@@ -86,6 +152,84 @@ iavfbe_dev_configure(struct rte_eth_dev *dev __rte_unused)
 	return 0;
 }
 
+static int
+iavfbe_start_queues(struct rte_eth_dev *dev)
+{
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	uint32_t i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq || rte_atomic32_read(&txq->enable) != 0)
+			continue;
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq || rte_atomic32_read(&rxq->enable) != 0)
+			continue;
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return 0;
+}
+
+static void
+iavfbe_stop_queues(struct rte_eth_dev *dev)
+{
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+}
+
+static int
+iavfbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct iavfbe_adapter *adapter =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	dev_info->max_rx_queues = adapter->nb_qps;
+	dev_info->max_tx_queues = adapter->nb_qps;
+	dev_info->min_rx_bufsize = IAVF_BE_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = IAVF_BE_FRAME_SIZE_MAX;
+	dev_info->max_mac_addrs = IAVF_BE_NUM_MACADDR_MAX;
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = IAVF_BE_MAX_RING_DESC,
+		.nb_min = IAVF_BE_MIN_RING_DESC,
+		.nb_align = IAVF_BE_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = IAVF_BE_MAX_RING_DESC,
+		.nb_min = IAVF_BE_MIN_RING_DESC,
+		.nb_align = IAVF_BE_ALIGN_RING_DESC,
+	};
+
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_SCATTER |
+		DEV_RX_OFFLOAD_JUMBO_FRAME;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_MULTI_SEGS;
+
+	return 0;
+}
+
 static int
 iavfbe_dev_start(struct rte_eth_dev *dev)
 {
@@ -94,6 +238,8 @@ iavfbe_dev_start(struct rte_eth_dev *dev)
 
 	adapter->adapter_stopped = 0;
 
+	iavfbe_start_queues(dev);
+
 	return 0;
 }
 
@@ -106,6 +252,8 @@ iavfbe_dev_stop(struct rte_eth_dev *dev)
 	if (adapter->adapter_stopped == 1)
 		return 0;
 
+	iavfbe_stop_queues(dev);
+
 	adapter->adapter_stopped = 1;
 
 	return 0;
@@ -133,6 +281,13 @@ iavfbe_dev_link_update(struct rte_eth_dev *dev,
 static int
 iavfbe_dev_close(struct rte_eth_dev *dev)
 {
+	struct iavfbe_adapter *adapter =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	/* Only send event when the emudev is alive */
+	if (adapter->started && adapter->cq_info.arq.len)
+		iavfbe_notify_vf_reset(adapter);
+
 	iavfbe_destroy_adapter(dev);
 	rte_eth_dev_release_port(dev);
 
@@ -236,8 +391,28 @@ iavfbe_destroy_device(struct rte_emudev *dev)
 {
 	struct iavfbe_adapter *adapter =
 		(struct iavfbe_adapter *)dev->backend_priv;
+	struct rte_eth_dev_data *data = adapter->eth_dev->data;
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	uint16_t i;
 
-	/* TODO: Disable all lan queues */
+	/* Disable all queues */
+	for (i = 0; i < data->nb_rx_queues; i++) {
+		rxq = data->rx_queues[i];
+		if (!rxq)
+			continue;
+		rte_atomic32_set(&rxq->enable, false);
+		rxq->q_set = false;
+	}
+
+	for (i = 0; i < data->nb_tx_queues; i++) {
+		txq = data->tx_queues[i];
+		if (!txq)
+			continue;
+		rte_atomic32_set(&txq->enable, false);
+		txq->q_set = false;
+	}
+	adapter->started = 0;
 
 	/* update link status */
 	adapter->link_up = false;
@@ -249,9 +424,13 @@ iavfbe_update_device(struct rte_emudev *dev)
 {
 	struct iavfbe_adapter *adapter =
 		(struct iavfbe_adapter *)dev->backend_priv;
+	struct rte_eth_dev_data *data = adapter->eth_dev->data;
 	struct rte_iavf_emu_mem **mem = &(adapter->mem_table);
 	struct rte_emudev_q_info q_info;
 	struct rte_emudev_irq_info irq_info;
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	uint16_t i;
 
 	if (rte_emudev_get_mem_table(dev->dev_id, (void **)mem)) {
 		IAVF_BE_LOG(ERR, "Can not get mem table\n");
@@ -271,10 +450,87 @@ iavfbe_update_device(struct rte_emudev *dev)
 		return -1;
 	}
 
-	/* TODO: Lan queue info update */
 	adapter->cq_irqfd = irq_info.eventfd;
 	rte_atomic32_set(&adapter->irq_enable, irq_info.enable);
 
+	for (i = 0; i < data->nb_rx_queues; i++) {
+		rxq = data->rx_queues[i];
+		if (!rxq || rxq->vector == -1)
+			continue;
+
+		if (rte_emudev_get_irq_info(dev->dev_id,
+			rxq->vector, &irq_info)) {
+			IAVF_BE_LOG(ERR,
+				"Can not get irq info of rxq %d\n", i);
+			return -1;
+		}
+		rte_atomic32_set(&rxq->irq_enable, irq_info.enable);
+	}
+
+	for (i = 0; i < data->nb_tx_queues; i++) {
+		txq = data->tx_queues[i];
+		if (!txq || txq->vector == -1)
+			continue;
+
+		if (rte_emudev_get_irq_info(dev->dev_id,
+			txq->vector, &irq_info)) {
+			IAVF_BE_LOG(ERR,
+				"Can not get irq info of txq %d\n", i);
+			return -1;
+		}
+		rte_atomic32_set(&txq->irq_enable, irq_info.enable);
+	}
+
+	return 0;
+}
+
+int
+iavfbe_lock_lanq(struct iavfbe_adapter *adapter)
+{
+	struct rte_eth_dev *eth_dev = adapter->eth_dev;
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	uint16_t i;
+
+	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+		rxq = eth_dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		rte_spinlock_lock(&rxq->access_lock);
+	}
+
+	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+		txq = eth_dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		rte_spinlock_lock(&txq->access_lock);
+	}
+
+	return 0;
+}
+
+int
+iavfbe_unlock_lanq(struct iavfbe_adapter *adapter)
+{
+	struct rte_eth_dev *eth_dev = adapter->eth_dev;
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	uint16_t i;
+
+	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+		rxq = eth_dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		rte_spinlock_unlock(&rxq->access_lock);
+	}
+
+	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+		txq = eth_dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		rte_spinlock_unlock(&txq->access_lock);
+	}
+
 	return 0;
 }
 
@@ -287,11 +543,11 @@ iavfbe_lock_dp(struct rte_emudev *dev, int lock)
 	/* Acquire/Release lock of control queue and lan queue */
 
 	if (lock) {
-		/* TODO: Lan queue lock */
+		iavfbe_lock_lanq(adapter);
 		rte_spinlock_lock(&adapter->cq_info.asq.access_lock);
 		rte_spinlock_lock(&adapter->cq_info.arq.access_lock);
 	} else {
-		/* TODO: Lan queue unlock */
+		iavfbe_unlock_lanq(adapter);
 		rte_spinlock_unlock(&adapter->cq_info.asq.access_lock);
 		rte_spinlock_unlock(&adapter->cq_info.arq.access_lock);
 	}
@@ -358,11 +614,16 @@ iavfbe_reset_device(struct rte_emudev *dev)
 	struct iavfbe_adapter *adapter =
 		(struct iavfbe_adapter *)dev->backend_priv;
 
+	iavfbe_notify(adapter);
+
 	/* Lock has been acquired by lock_dp */
-	/* TODO: reset all queues */
+	iavfbe_reset_all_queues(adapter);
 	iavfbe_reset_asq(adapter, false);
 	iavfbe_reset_arq(adapter, false);
 
+	memset(adapter->qps, 0,
+	       adapter->nb_qps * sizeof(struct virtchnl_queue_pair_info));
+	memset(&adapter->eth_stats, 0, sizeof(struct virtchnl_eth_stats));
+	adapter->nb_used_qps = 0;
 	adapter->link_up = 0;
 	adapter->unicast_promisc = true;
 	adapter->multicast_promisc = true;
@@ -433,7 +694,7 @@ iavfbe_init_adapter(struct rte_eth_dev *eth_dev,
 {
 	struct iavfbe_adapter *adapter;
 	struct rte_iavf_emu_config *conf;
-	int ret;
+	int bufsz, ret;
 
 	adapter = IAVFBE_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
 
@@ -472,6 +733,48 @@ iavfbe_init_adapter(struct rte_eth_dev *eth_dev,
 	rte_spinlock_init(&adapter->cq_info.asq.access_lock);
 	rte_spinlock_init(&adapter->cq_info.arq.access_lock);
 
+	/* Set VF Backend defaults during initialization */
+	adapter->virtchnl_version.major = VIRTCHNL_VERSION_MAJOR;
+	adapter->virtchnl_version.minor = VIRTCHNL_VERSION_MINOR;
+
+	bufsz = sizeof(struct virtchnl_vf_resource) +
+		(IAVF_BE_DEFAULT_VSI_NUM *
+		 sizeof(struct virtchnl_vsi_resource));
+	adapter->vf_res = rte_zmalloc_socket("iavfbe", bufsz, 0,
+					     eth_dev->device->numa_node);
+	if (!adapter->vf_res) {
+		IAVF_BE_LOG(ERR, "Fail to allocate vf_res memory");
+		ret = -ENOMEM;
+		goto err_res;
+	}
+
+	adapter->vf_res->num_vsis = IAVF_BE_DEFAULT_VSI_NUM;
+	adapter->vf_res->vf_cap_flags = VIRTCHNL_VF_OFFLOAD_L2 |
+					VIRTCHNL_VF_OFFLOAD_VLAN |
+					VIRTCHNL_VF_OFFLOAD_WB_ON_ITR |
+					VIRTCHNL_VF_OFFLOAD_RX_POLLING;
+	adapter->vf_res->max_vectors = IAVF_BE_MAX_VECTORS;
+	adapter->vf_res->num_queue_pairs = adapter->nb_qps;
+	adapter->vf_res->max_mtu = AVF_DEFAULT_MAX_MTU;
+	/* Make vsi_id change with different emu devices */
+	adapter->vf_res->vsi_res[0].vsi_id = emu_dev->dev_id;
+	adapter->vf_res->vsi_res[0].vsi_type = VIRTCHNL_VSI_SRIOV;
+	adapter->vf_res->vsi_res[0].num_queue_pairs = adapter->nb_qps;
+	rte_ether_addr_copy(ether_addr,
+		(struct rte_ether_addr *)
+		adapter->vf_res->vsi_res[0].default_mac_addr);
+
+	adapter->qps =
+		rte_zmalloc_socket("iavfbe",
+				   adapter->nb_qps * sizeof(adapter->qps[0]),
+				   0,
+				   eth_dev->device->numa_node);
+	if (!adapter->qps) {
+		IAVF_BE_LOG(ERR, "fail to allocate memory for queue info");
+		ret = -ENOMEM;
+		goto err_qps;
+	}
+
 	adapter->unicast_promisc = true;
 	adapter->multicast_promisc = true;
 	adapter->vlan_filter = false;
@@ -494,6 +797,11 @@ iavfbe_init_adapter(struct rte_eth_dev *eth_dev,
 	return 0;
 
 err_thread:
+	rte_free(adapter->qps);
+err_qps:
+	rte_free(adapter->vf_res);
+err_res:
+	rte_free(adapter->cq_info.asq.aq_req);
 err_aq:
 err_info:
 	rte_free(conf);
@@ -513,6 +821,9 @@ iavfbe_destroy_adapter(struct rte_eth_dev *dev)
 	}
 
 	rte_free(adapter->dev_info.dev_priv);
+	rte_free(adapter->cq_info.asq.aq_req);
+	rte_free(adapter->vf_res);
+	rte_free(adapter->qps);
 }
 
 static int
diff --git a/drivers/net/iavf_be/iavf_be_rxtx.c b/drivers/net/iavf_be/iavf_be_rxtx.c
new file mode 100644
index 0000000000..72cbead45a
--- /dev/null
+++ b/drivers/net/iavf_be/iavf_be_rxtx.c
@@ -0,0 +1,162 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <unistd.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+
+#include <rte_string_fns.h>
+#include <rte_mbuf.h>
+#include <rte_ether.h>
+#include <rte_ethdev_driver.h>
+#include <rte_net.h>
+#include <rte_iavf_emu.h>
+
+#include <iavf_type.h>
+#include <virtchnl.h>
+#include "iavf_be.h"
+#include "iavf_be_rxtx.h"
+
+int
+iavfbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			  uint16_t nb_desc __rte_unused,
+			  unsigned int socket_id,
+			  const struct rte_eth_rxconf *rx_conf __rte_unused,
+			  struct rte_mempool *mp)
+{
+	struct iavfbe_adapter *ad =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct iavfbe_rx_queue *rxq;
+	uint16_t len;
+
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx]) {
+		iavfbe_dev_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket("iavfbe rxq",
+				 sizeof(struct iavfbe_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!rxq) {
+		IAVF_BE_LOG(ERR, "Failed to allocate memory for "
+				 "rx queue data structure");
+		return -ENOMEM;
+	}
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = 0; /* Update when queue from fe is ready */
+	rxq->queue_id = queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->rx_hdr_len = 0;
+	rxq->vector = -1;
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = RTE_ALIGN(len, (1 << AVF_RXQ_CTX_DBUFF_SHIFT));
+
+	/* More ring info will be obtained via virtchnl messages */
+
+	rxq->adapter = (void *)ad;
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	return 0;
+}
+
+int
+iavfbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			  uint16_t nb_desc __rte_unused,
+			  unsigned int socket_id,
+			  const struct rte_eth_txconf *tx_conf __rte_unused)
+{
+	struct iavfbe_adapter *ad =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct iavfbe_tx_queue *txq;
+
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx]) {
+		iavfbe_dev_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("iavfbe txq",
+				 sizeof(struct iavfbe_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!txq) {
+		IAVF_BE_LOG(ERR, "Failed to allocate memory for "
+				 "tx queue structure");
+		return -ENOMEM;
+	}
+
+	txq->queue_id = queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->vector = -1;
+
+	/* More ring info will be obtained via virtchnl messages */
+
+	txq->adapter = (void *)ad;
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+}
+
+void
+iavfbe_dev_rx_queue_release(void *rxq)
+{
+	struct iavfbe_rx_queue *q = (struct iavfbe_rx_queue *)rxq;
+
+	if (!q)
+		return;
+	rte_free(q);
+}
+
+void
+iavfbe_dev_tx_queue_release(void *txq)
+{
+	struct iavfbe_tx_queue *q = (struct iavfbe_tx_queue *)txq;
+
+	if (!q)
+		return;
+	rte_free(q);
+}
+
+void
+iavfbe_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		     struct rte_eth_rxq_info *qinfo)
+{
+	struct iavfbe_rx_queue *rxq;
+
+	rxq = dev->data->rx_queues[queue_id];
+	if (!rxq)
+		return;
+
+	qinfo->mp = rxq->mp;
+	qinfo->scattered_rx = true;
+	qinfo->nb_desc = rxq->nb_rx_desc;
+
+	qinfo->conf.rx_free_thresh = 0;
+	qinfo->conf.rx_drop_en = false;
+	qinfo->conf.rx_deferred_start = false;
+}
+
+void
+iavfbe_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		     struct rte_eth_txq_info *qinfo)
+{
+	struct iavfbe_tx_queue *txq;
+
+	txq = dev->data->tx_queues[queue_id];
+
+	if (!txq)
+		return;
+
+	qinfo->nb_desc = txq->nb_tx_desc;
+
+	qinfo->conf.tx_free_thresh = 0;
+	qinfo->conf.tx_rs_thresh = 0;
+	qinfo->conf.offloads = DEV_TX_OFFLOAD_MULTI_SEGS;
+	qinfo->conf.tx_deferred_start = false;
+}
diff --git a/drivers/net/iavf_be/iavf_be_rxtx.h b/drivers/net/iavf_be/iavf_be_rxtx.h
new file mode 100644
index 0000000000..e8be3f532d
--- /dev/null
+++ b/drivers/net/iavf_be/iavf_be_rxtx.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _AVF_BE_RXTX_H_
+#define _AVF_BE_RXTX_H_
+
+/* Ring length must be a whole multiple of 32 descriptors. */
+#define IAVF_BE_ALIGN_RING_DESC      32
+#define IAVF_BE_MIN_RING_DESC        64
+#define IAVF_BE_MAX_RING_DESC        4096
+
+#define AVF_RXQ_CTX_DBUFF_SHIFT 7
+#define AVF_RXQ_CTX_HBUFF_SHIFT 6
+
+#define AVF_RX_MAX_SEG           5
+
+#define iavf_rx_desc iavf_32byte_rx_desc
+
+/* Structure associated with each Rx queue in AVF_BE. */
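+/* Note: the backend receives packets from the front end's Tx ring, so
+ * this structure tracks the AVF Tx descriptor ring that the front end
+ * writes.
+ */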
+struct iavfbe_rx_queue {
+	rte_spinlock_t access_lock;
+	struct rte_mempool *mp;       /* mbuf pool to populate Rx ring */
+	volatile struct iavf_tx_desc *tx_ring; /* AVF Tx ring virtual address */
+	uint64_t tx_ring_phys_addr;   /* AVF Tx ring DMA address */
+	uint16_t nb_rx_desc;          /* ring length */
+	volatile uint8_t *qtx_tail;   /* register address of tail */
+
+	uint16_t tx_head;
+	int vector;
+	int kickfd;
+	rte_atomic32_t irq_enable;
+
+	uint16_t port_id;       /* device port ID */
+	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
+	uint16_t queue_id;      /* Rx queue index */
+	uint16_t rx_buf_len;    /* The packet buffer size */
+	uint16_t rx_hdr_len;    /* The header buffer size */
+	uint16_t max_pkt_len;   /* Maximum packet length */
+	bool q_set;             /* If queue has been set by virtchnl */
+	rte_atomic32_t enable;  /* If queue has been enabled by virtchnl */
+
+	struct iavfbe_adapter *adapter; /* Adapter this Rx queue belongs to */
+	struct {
+		uint64_t recv_pkt_num;
+		uint64_t recv_bytes;
+		uint64_t recv_miss_num;
+		uint64_t recv_multi_num;
+		uint64_t recv_broad_num;
+	} stats, stats_off;   /* Stats information */
+};
+
+/* Structure associated with each TX queue. */
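+/* Note: the backend transmits packets into the front end's Rx ring, so
+ * this structure tracks the AVF Rx descriptor ring that the front end
+ * polls.
+ */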
+struct iavfbe_tx_queue {
+	rte_spinlock_t access_lock;
+	volatile union iavf_rx_desc *rx_ring; /* AVF Rx ring virtual address */
+	uint64_t rx_ring_phys_addr;    /* Rx ring DMA address */
+	uint16_t nb_tx_desc;           /* ring length */
+	volatile uint8_t *qrx_tail;    /* tail address of fe's rx ring */
+	uint32_t buffer_size;          /* max buffer size of fe's rx ring */
+	uint32_t max_pkt_size;         /* max packet size allowed on fe's rx ring */
+
+	uint16_t rx_head;
+	int vector;
+	int callfd;
+	rte_atomic32_t irq_enable;
+
+	uint16_t port_id;
+	uint16_t queue_id;
+
+	bool q_set;             /* If queue has been set by virtchnl */
+	rte_atomic32_t enable;  /* If queue has been enabled by virtchnl */
+
+	struct iavfbe_adapter *adapter; /* Adapter this Tx queue belongs to */
+	struct {
+		uint64_t sent_pkt_num;
+		uint64_t sent_bytes;
+		uint64_t sent_miss_num;
+		uint64_t sent_multi_num;
+		uint64_t sent_broad_num;
+	} stats, stats_off;   /* Stats information */
+};
+
+
+int iavfbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
+			      uint16_t queue_idx,
+			      uint16_t nb_desc,
+			      unsigned int socket_id,
+			      const struct rte_eth_rxconf *rx_conf,
+			      struct rte_mempool *mp);
+void iavfbe_dev_rx_queue_release(void *rxq);
+int iavfbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
+			      uint16_t queue_idx,
+			      uint16_t nb_desc,
+			      unsigned int socket_id,
+			      const struct rte_eth_txconf *tx_conf);
+void iavfbe_dev_tx_queue_release(void *txq);
+void iavfbe_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			     struct rte_eth_rxq_info *qinfo);
+void iavfbe_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			     struct rte_eth_txq_info *qinfo);
+
+#endif /* _AVF_BE_RXTX_H_ */
diff --git a/drivers/net/iavf_be/iavf_be_vchnl.c b/drivers/net/iavf_be/iavf_be_vchnl.c
index 646c967252..a1165cd5b5 100644
--- a/drivers/net/iavf_be/iavf_be_vchnl.c
+++ b/drivers/net/iavf_be/iavf_be_vchnl.c
@@ -21,8 +21,94 @@
 #include <virtchnl.h>
 
 #include "iavf_be.h"
+#include "iavf_be_rxtx.h"
 
-__rte_unused  static int
+static inline void
+reset_rxq_stats(struct iavfbe_rx_queue *rxq)
+{
+	rxq->stats.recv_pkt_num = 0;
+	rxq->stats.recv_bytes = 0;
+	rxq->stats.recv_miss_num = 0;
+	rxq->stats.recv_multi_num = 0;
+	rxq->stats.recv_broad_num = 0;
+
+	rxq->stats_off.recv_pkt_num = 0;
+	rxq->stats_off.recv_bytes = 0;
+	rxq->stats_off.recv_miss_num = 0;
+	rxq->stats_off.recv_multi_num = 0;
+	rxq->stats_off.recv_broad_num = 0;
+}
+
+static inline void
+reset_txq_stats(struct iavfbe_tx_queue *txq)
+{
+	txq->stats.sent_pkt_num = 0;
+	txq->stats.sent_bytes = 0;
+	txq->stats.sent_miss_num = 0;
+	txq->stats.sent_multi_num = 0;
+	txq->stats.sent_broad_num = 0;
+}
+
+void
+iavfbe_reset_all_queues(struct iavfbe_adapter *adapter)
+{
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	uint16_t i;
+
+	/* Disable queues and mark them unset */
+	for (i = 0; i < adapter->eth_dev->data->nb_rx_queues; i++) {
+		rxq = adapter->eth_dev->data->rx_queues[i];
+		if (rxq) {
+			rte_atomic32_set(&rxq->enable, false);
+			rxq->q_set = false;
+			rxq->tx_head = 0;
+			reset_rxq_stats(rxq);
+		}
+	}
+
+	for (i = 0; i < adapter->eth_dev->data->nb_tx_queues; i++) {
+		txq = adapter->eth_dev->data->tx_queues[i];
+		if (txq) {
+			rte_atomic32_set(&txq->enable, false);
+			txq->q_set = false;
+			txq->rx_head = 0;
+			reset_txq_stats(txq);
+		}
+	}
+}
+
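+/* Look up the eventfd bound to a front-end interrupt vector so that
+ * the datapath can signal the vector directly later on.
+ */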
+static enum iavf_status
+apply_tx_irq(struct iavfbe_tx_queue *txq, uint16_t vector)
+{
+	struct rte_emudev_irq_info info;
+
+	txq->vector = vector;
+	if (rte_emudev_get_irq_info(txq->adapter->edev_id, vector, &info)) {
+		IAVF_BE_LOG(ERR, "Cannot get IRQ info");
+		return IAVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+	txq->callfd = info.eventfd;
+
+	return 0;
+}
+
+static enum iavf_status
+apply_rx_irq(struct iavfbe_rx_queue *rxq, uint16_t vector)
+{
+	struct rte_emudev_irq_info info;
+
+	rxq->vector = vector;
+	if (rte_emudev_get_irq_info(rxq->adapter->edev_id, vector, &info)) {
+		IAVF_BE_LOG(ERR, "Cannot get IRQ info");
+		return IAVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+	rxq->kickfd = info.eventfd;
+
+	return 0;
+}
+
+static int
 iavfbe_send_msg_to_vf(struct iavfbe_adapter *adapter,
 			uint32_t opcode,
 			uint32_t retval,
@@ -93,6 +179,431 @@ iavfbe_send_msg_to_vf(struct iavfbe_adapter *adapter,
 	return status;
 }
 
+static void
+iavfbe_process_cmd_version(struct iavfbe_adapter *adapter,
+				uint8_t *msg)
+{
+	struct virtchnl_version_info *info =
+		(struct virtchnl_version_info *)msg;
+
+	/* Only support V1.1 */
+	if (adapter->virtchnl_version.major == info->major &&
+	    adapter->virtchnl_version.minor == info->minor)
+		iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_VERSION,
+				      VIRTCHNL_STATUS_SUCCESS,
+				      (uint8_t *)&adapter->virtchnl_version,
+				      sizeof(adapter->virtchnl_version));
+	else
+		iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_VERSION,
+				      VIRTCHNL_STATUS_NOT_SUPPORTED,
+				      NULL, 0);
+}
+
+static int
+iavfbe_renew_device_info(struct iavfbe_adapter *adapter)
+{
+	struct rte_iavf_emu_mem **mem = &(adapter->mem_table);
+	uint64_t addr;
+
+	if (rte_emudev_get_mem_table(adapter->edev_id, (void **)mem)) {
+		IAVF_BE_LOG(ERR, "Can not get mem table\n");
+		return -1;
+	}
+
+	if (rte_emudev_get_attr(adapter->edev_id, RTE_IAVF_EMU_ATTR_RESET,
+		(rte_emudev_attr_t)&addr)) {
+		IAVF_BE_LOG(ERR, "Cannot get reset attribute");
+		return -1;
+	}
+	adapter->reset = (uint8_t *)(uintptr_t)addr;
+
+	IAVF_BE_LOG(DEBUG, "DEVICE memtable re-acquired, %p\n",
+		    adapter->mem_table);
+
+	return 0;
+}
+
+static int
+iavfbe_process_cmd_reset_vf(struct iavfbe_adapter *adapter)
+{
+	adapter->started = 0;
+	IAVFBE_WRITE_32(adapter->reset, RTE_IAVF_EMU_RESET_IN_PROGRESS);
+
+	iavfbe_lock_lanq(adapter);
+	iavfbe_reset_all_queues(adapter);
+	iavfbe_unlock_lanq(adapter);
+
+	memset(adapter->qps, 0, adapter->nb_qps * sizeof(adapter->qps[0]));
+	memset(&adapter->eth_stats, 0, sizeof(struct virtchnl_eth_stats));
+	adapter->nb_used_qps = 0;
+	adapter->link_up = 0;
+	adapter->unicast_promisc = true;
+	adapter->multicast_promisc = true;
+	adapter->vlan_filter = false;
+	adapter->vlan_strip = false;
+	adapter->adapter_stopped = 1;
+
+	iavfbe_renew_device_info(adapter);
+	IAVFBE_WRITE_32(adapter->reset, RTE_IAVF_EMU_RESET_COMPLETED);
+	adapter->started = 1;
+
+	return IAVF_SUCCESS;
+}
+
+static int
+iavfbe_process_cmd_get_vf_resource(struct iavfbe_adapter *adapter,
+				uint8_t *msg)
+{
+	struct virtchnl_vf_resource vf_res;
+	uint32_t request_caps;
+	uint32_t len = 0;
+
+	len = sizeof(struct virtchnl_vf_resource) +
+		(adapter->vf_res->num_vsis - 1) *
+		sizeof(struct virtchnl_vsi_resource);
+
+	request_caps = *(uint32_t *)msg;
+
+	rte_memcpy(&vf_res, adapter->vf_res, len);
+	vf_res.vf_cap_flags = request_caps &
+				adapter->vf_res->vf_cap_flags;
+
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_GET_VF_RESOURCES,
+			      VIRTCHNL_STATUS_SUCCESS, (uint8_t *)&vf_res, len);
+
+	return IAVF_SUCCESS;
+}
+
+static int
+iavfbe_process_cmd_config_vsi_queues(struct iavfbe_adapter *adapter,
+				     uint8_t *msg,
+				     uint16_t msglen __rte_unused)
+{
+	struct virtchnl_vsi_queue_config_info *vc_vqci =
+		(struct virtchnl_vsi_queue_config_info *)msg;
+	struct virtchnl_queue_pair_info *vc_qpi;
+	struct rte_eth_dev *dev = adapter->eth_dev;
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	uint16_t nb_qps, queue_id;
+	int i, ret = VIRTCHNL_STATUS_SUCCESS;
+
+	/* Check valid */
+	if (!msg || vc_vqci->num_queue_pairs > adapter->nb_qps) {
+		IAVF_BE_LOG(ERR, "number of queue pairs (%u) exceeds max (%u)",
+			    vc_vqci->num_queue_pairs, adapter->nb_qps);
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	nb_qps = vc_vqci->num_queue_pairs;
+	vc_qpi = vc_vqci->qpair;
+
+	for (i = 0; i < nb_qps; i++) {
+		if (vc_qpi[i].txq.vsi_id != vc_vqci->vsi_id ||
+		    vc_qpi[i].rxq.vsi_id != vc_vqci->vsi_id ||
+		    vc_qpi[i].rxq.queue_id != vc_qpi[i].txq.queue_id ||
+		    vc_qpi[i].rxq.queue_id > adapter->nb_qps - 1 ||
+		    vc_qpi[i].rxq.ring_len > IAVF_BE_MAX_RING_DESC ||
+		    vc_qpi[i].txq.ring_len > IAVF_BE_MAX_RING_DESC ||
+		    vc_vqci->vsi_id != adapter->vf_res->vsi_res[0].vsi_id) {
+			ret = VIRTCHNL_STATUS_ERR_PARAM;
+			goto send_msg;
+		}
+	}
+
+	/* Store queues info internally */
+	adapter->nb_used_qps = nb_qps;
+	rte_memcpy(adapter->qps, &vc_vqci->qpair,
+		   nb_qps * sizeof(adapter->qps[0]));
+
+	for (i = 0; i < nb_qps; i++) {
+		struct rte_emudev_db_info db_info;
+
+		queue_id = adapter->qps[i].rxq.queue_id;
+		rxq = dev->data->rx_queues[queue_id];
+		txq = dev->data->tx_queues[queue_id];
+		if (!rxq || !txq) {
+			IAVF_BE_LOG(ERR, "Queue Pair %u hasn't been set up",
+				    queue_id);
+			ret = VIRTCHNL_STATUS_NOT_SUPPORTED;
+			goto send_msg;
+		}
+
+		/* Configure Rx Queue */
+		rxq->nb_rx_desc = vc_qpi[i].txq.ring_len;
+		rxq->tx_ring_phys_addr = vc_qpi[i].txq.dma_ring_addr;
+		rxq->max_pkt_len = vc_qpi[i].rxq.max_pkt_size;
+		memset(&db_info, 0, sizeof(db_info));
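+		/* Doorbells follow the admin queues: queue pair i uses
+		 * index i * 2 + RTE_IAVF_EMU_ADMINQ_NUM for the front
+		 * end's Tx tail and the next index for its Rx tail.
+		 */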
+		ret = rte_emudev_get_db_info(adapter->edev_id,
+					  i * 2 + RTE_IAVF_EMU_ADMINQ_NUM,
+					  &db_info);
+		if (ret || (db_info.flag & RTE_EMUDEV_DB_MEM) != RTE_EMUDEV_DB_MEM) {
+			IAVF_BE_LOG(ERR, "Fail to get Door Bell of RXQ %u",
+				    rxq->queue_id);
+			ret = VIRTCHNL_STATUS_NOT_SUPPORTED;
+			goto send_msg;
+		}
+		rxq->qtx_tail = (uint8_t *)db_info.data.mem.base;
+		/* Reset stats */
+		reset_rxq_stats(rxq);
+		rxq->q_set = true;
+
+		/* Configure Tx Queue */
+		txq->nb_tx_desc = vc_qpi[i].rxq.ring_len;
+		txq->rx_ring_phys_addr = vc_qpi[i].rxq.dma_ring_addr;
+		txq->buffer_size = vc_qpi[i].rxq.databuffer_size;
+		txq->max_pkt_size = vc_qpi[i].rxq.max_pkt_size;
+		memset(&db_info, 0, sizeof(db_info));
+		ret = rte_emudev_get_db_info(adapter->edev_id,
+					  i * 2 + RTE_IAVF_EMU_ADMINQ_NUM + 1,
+					  &db_info);
+		if (ret || (db_info.flag & RTE_EMUDEV_DB_MEM) != RTE_EMUDEV_DB_MEM) {
+			IAVF_BE_LOG(ERR, "Fail to get Door Bell of TXQ %u",
+				    txq->queue_id);
+			ret = VIRTCHNL_STATUS_NOT_SUPPORTED;
+			goto send_msg;
+		}
+		txq->qrx_tail = (uint8_t *)db_info.data.mem.base;
+		/* Reset stats */
+		reset_txq_stats(txq);
+		txq->q_set = true;
+	}
+
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_CONFIG_VSI_QUEUES,
+			      ret, NULL, 0);
+	return ret;
+}
+
+static int
+iavfbe_process_cmd_enable_queues(struct iavfbe_adapter *adapter,
+				 uint8_t *msg,
+				 uint16_t msglen __rte_unused)
+{
+	struct virtchnl_queue_select *q_sel =
+		(struct virtchnl_queue_select *)msg;
+	struct rte_eth_dev *dev = adapter->eth_dev;
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	int i, ret = VIRTCHNL_STATUS_SUCCESS;
+
+	if (!msg) {
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	for (i = 0; i < adapter->nb_used_qps; i++) {
+		uint64_t len;
+
+		rxq = dev->data->rx_queues[i];
+		txq = dev->data->tx_queues[i];
+		if (!rxq || !txq) {
+			IAVF_BE_LOG(ERR, "Queue Pair %d hasn't been set up", i);
+			ret = VIRTCHNL_STATUS_ERR_NOT_SUPPORTED;
+			goto send_msg;
+		}
+		if (q_sel->tx_queues & (1 << i)) {
+			if (!rxq->q_set) {
+				IAVF_BE_LOG(ERR, "RXQ %u hasn't been set up", i);
+				ret = VIRTCHNL_STATUS_ERR_NOT_SUPPORTED;
+				goto send_msg;
+			}
+			len = rxq->nb_rx_desc * sizeof(struct iavf_tx_desc);
+			rxq->tx_ring = (void *)(uintptr_t)rte_iavf_emu_get_dma_vaddr(
+						adapter->mem_table,
+						rxq->tx_ring_phys_addr,
+						&len);
+			rte_atomic32_set(&rxq->enable, true);
+		}
+		if (q_sel->rx_queues & (1 << i)) {
+			if (!txq->q_set) {
+				IAVF_BE_LOG(ERR, "TXQ %u hasn't been set up", i);
+				ret = VIRTCHNL_STATUS_ERR_NOT_SUPPORTED;
+				goto send_msg;
+			}
+			len = txq->nb_tx_desc * sizeof(union iavf_32byte_rx_desc);
+			txq->rx_ring = (void *)(uintptr_t)
+				rte_iavf_emu_get_dma_vaddr(adapter->mem_table,
+						       txq->rx_ring_phys_addr,
+						       &len);
+			rte_atomic32_set(&txq->enable, true);
+		}
+	}
+
+	/* Set link UP after queues are enabled */
+	adapter->link_up = true;
+	iavfbe_dev_link_update(adapter->eth_dev, 0);
+
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_ENABLE_QUEUES, ret, NULL, 0);
+
+	return ret;
+}
+
+static int
+iavfbe_process_cmd_disable_queues(struct iavfbe_adapter *adapter,
+				  uint8_t *msg,
+				  uint16_t msglen __rte_unused)
+{
+	struct virtchnl_queue_select *q_sel =
+		(struct virtchnl_queue_select *)msg;
+	struct rte_eth_dev *dev = adapter->eth_dev;
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	int ret = VIRTCHNL_STATUS_SUCCESS;
+	uint16_t i;
+
+	if (!msg) {
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	for (i = 0; i < adapter->nb_used_qps; i++) {
+		rxq = dev->data->rx_queues[i];
+		txq = dev->data->tx_queues[i];
+
+		if (q_sel->tx_queues & (1 << i)) {
+			if (!rxq)
+				continue;
+			rte_atomic32_set(&rxq->enable, false);
+			reset_rxq_stats(rxq);
+		}
+		if (q_sel->rx_queues & (1 << i)) {
+			if (!txq)
+				continue;
+			rte_atomic32_set(&txq->enable, false);
+			reset_txq_stats(txq);
+		}
+	}
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_DISABLE_QUEUES,
+			      ret, NULL, 0);
+
+	return ret;
+}
+
+static int
+iavfbe_process_cmd_config_irq_map(struct iavfbe_adapter *adapter,
+				  uint8_t *msg,
+				  uint16_t msglen __rte_unused)
+{
+	struct rte_eth_dev *dev = adapter->eth_dev;
+	struct iavfbe_tx_queue *txq;
+	struct iavfbe_rx_queue *rxq;
+	uint16_t i, j, vector_id;
+	int ret = VIRTCHNL_STATUS_SUCCESS;
+
+	struct virtchnl_irq_map_info *irqmap =
+		(struct virtchnl_irq_map_info *)msg;
+	struct virtchnl_vector_map *map;
+
+	if (msg == NULL) {
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	IAVF_BE_LOG(DEBUG, "irqmap->num_vectors = %d\n", irqmap->num_vectors);
+
+	for (i = 0; i < irqmap->num_vectors; i++) {
+		map = &irqmap->vecmap[i];
+		vector_id = map->vector_id;
+
+		for (j = 0; j < adapter->nb_used_qps; j++) {
+			rxq = dev->data->rx_queues[j];
+			txq = dev->data->tx_queues[j];
+
+			if ((1 << j) & map->rxq_map) {
+				ret = apply_tx_irq(txq, vector_id);
+				if (ret)
+					goto send_msg;
+			}
+			if ((1 << j) & map->txq_map) {
+				ret = apply_rx_irq(rxq, vector_id);
+				if (ret)
+					goto send_msg;
+			}
+		}
+	}
+
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_CONFIG_IRQ_MAP,
+			      ret, NULL, 0);
+
+	return ret;
+}
+
+
+static int
+iavfbe_process_cmd_get_stats(struct iavfbe_adapter *adapter,
+				uint8_t *msg __rte_unused,
+				uint16_t msglen __rte_unused)
+{
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	int i;
+
+	memset(&adapter->eth_stats, 0, sizeof(adapter->eth_stats));
+
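+	/* Stats are reported from the VF's perspective: the backend's Rx
+	 * counters map to the VF's Tx stats and vice versa.
+	 */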
+	for (i = 0; i < adapter->eth_dev->data->nb_rx_queues; i++) {
+		rxq = adapter->eth_dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+		adapter->eth_stats.tx_broadcast += rxq->stats.recv_broad_num;
+		adapter->eth_stats.tx_bytes += rxq->stats.recv_bytes;
+		adapter->eth_stats.tx_discards += rxq->stats.recv_miss_num;
+		adapter->eth_stats.tx_multicast += rxq->stats.recv_multi_num;
+		adapter->eth_stats.tx_unicast += rxq->stats.recv_pkt_num -
+						rxq->stats.recv_broad_num -
+						rxq->stats.recv_multi_num;
+	}
+
+	for (i = 0; i < adapter->eth_dev->data->nb_tx_queues; i++) {
+		txq = adapter->eth_dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+		adapter->eth_stats.rx_broadcast += txq->stats.sent_broad_num;
+		adapter->eth_stats.rx_bytes += txq->stats.sent_bytes;
+		/* Don't add discards as recv count doesn't include this part */
+		adapter->eth_stats.rx_multicast += txq->stats.sent_multi_num;
+		adapter->eth_stats.rx_unicast += txq->stats.sent_pkt_num -
+						txq->stats.sent_broad_num -
+						txq->stats.sent_multi_num;
+	}
+
+	IAVF_BE_LOG(DEBUG, "rx_bytes:            %"PRIu64"",
+					adapter->eth_stats.tx_bytes);
+	IAVF_BE_LOG(DEBUG, "rx_unicast:          %"PRIu64"",
+					adapter->eth_stats.tx_unicast);
+	IAVF_BE_LOG(DEBUG, "rx_multicast:        %"PRIu64"",
+					adapter->eth_stats.tx_multicast);
+	IAVF_BE_LOG(DEBUG, "rx_broadcast:        %"PRIu64"",
+					adapter->eth_stats.tx_broadcast);
+	IAVF_BE_LOG(DEBUG, "rx_discards:         %"PRIu64"",
+					adapter->eth_stats.tx_discards);
+
+	IAVF_BE_LOG(DEBUG, "tx_bytes:            %"PRIu64"",
+					adapter->eth_stats.rx_bytes);
+	IAVF_BE_LOG(DEBUG, "tx_unicast:          %"PRIu64"",
+					adapter->eth_stats.rx_unicast);
+	IAVF_BE_LOG(DEBUG, "tx_multicast:        %"PRIu64"",
+					adapter->eth_stats.rx_multicast);
+	IAVF_BE_LOG(DEBUG, "tx_broadcast:        %"PRIu64"",
+					adapter->eth_stats.rx_broadcast);
+	IAVF_BE_LOG(DEBUG, "tx_discards:         %"PRIu64"",
+					adapter->eth_stats.rx_discards);
+
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_GET_STATS,
+			      VIRTCHNL_STATUS_SUCCESS,
+			      (uint8_t *)&adapter->eth_stats,
+			      sizeof(struct virtchnl_eth_stats));
+
+	return IAVF_SUCCESS;
+}
+
 /* Read data in admin queue to get msg from vf driver */
 static enum iavf_status
 iavfbe_read_msg_from_vf(struct iavfbe_adapter *adapter,
@@ -166,6 +677,306 @@ iavfbe_read_msg_from_vf(struct iavfbe_adapter *adapter,
 	return ret;
 }
 
+static void
+iavfbe_notify_vf_link_status(struct iavfbe_adapter *adapter)
+{
+	struct virtchnl_pf_event event;
+
+	event.severity = PF_EVENT_SEVERITY_INFO;
+	event.event = VIRTCHNL_EVENT_LINK_CHANGE;
+	event.event_data.link_event.link_status = adapter->link_up ? 1 : 0;
+	event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_UNKNOWN;
+
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_EVENT,
+				IAVF_SUCCESS, (uint8_t *)&event, sizeof(event));
+}
+
+void
+iavfbe_notify_vf_reset(struct iavfbe_adapter *adapter)
+{
+	struct virtchnl_pf_event event;
+
+	event.severity = PF_EVENT_SEVERITY_CERTAIN_DOOM;
+	event.event = VIRTCHNL_EVENT_RESET_IMPENDING;
+
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_EVENT,
+				IAVF_SUCCESS, (uint8_t *)&event, sizeof(event));
+}
+
+void
+iavfbe_notify(struct iavfbe_adapter *adapter)
+{
+	if (adapter->cq_irqfd == -1 ||
+		!rte_atomic32_read(&adapter->irq_enable))
+		return;
+
+	if (eventfd_write(adapter->cq_irqfd, (eventfd_t)1) < 0)
+		IAVF_BE_LOG(ERR, "failed to notify front-end: %s",
+					strerror(errno));
+}
+
+
+static int
+iavfbe_process_cmd_enable_vlan_strip(struct iavfbe_adapter *adapter)
+{
+	adapter->vlan_strip = true;
+
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_ENABLE_VLAN_STRIPPING,
+			      VIRTCHNL_STATUS_SUCCESS, NULL, 0);
+
+	return 0;
+}
+
+static int
+iavfbe_process_cmd_disable_vlan_strip(struct iavfbe_adapter *adapter)
+{
+	adapter->vlan_strip = false;
+
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING,
+			      VIRTCHNL_STATUS_SUCCESS, NULL, 0);
+
+	return 0;
+}
+
+static int
+iavfbe_process_cmd_config_promisc_mode(struct iavfbe_adapter *adapter,
+				uint8_t *msg,
+				uint16_t msglen __rte_unused)
+{
+	int ret = VIRTCHNL_STATUS_SUCCESS;
+	struct virtchnl_promisc_info *promisc =
+		(struct virtchnl_promisc_info *)msg;
+
+	if (msg == NULL) {
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	adapter->unicast_promisc =
+		(promisc->flags & FLAG_VF_UNICAST_PROMISC) ? true : false;
+	adapter->multicast_promisc =
+		(promisc->flags & FLAG_VF_MULTICAST_PROMISC) ? true : false;
+
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE,
+			      ret, NULL, 0);
+	return ret;
+}
+
+static int
+iavfbe_process_cmd_add_ether_address(struct iavfbe_adapter *adapter,
+				     uint8_t *msg,
+				     uint16_t msglen __rte_unused)
+{
+	struct virtchnl_ether_addr_list *addr_list =
+		(struct virtchnl_ether_addr_list *)msg;
+	int ret = VIRTCHNL_STATUS_SUCCESS;
+	int i;
+
+	if (msg == NULL) {
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	for (i = 0; i < addr_list->num_elements; i++) {
+		/* TODO: MAC filter hasn't been enabled yet */
+	}
+
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_ADD_ETH_ADDR,
+			      ret, NULL, 0);
+	return ret;
+}
+
+static int
+iavfbe_process_cmd_del_ether_address(struct iavfbe_adapter *adapter,
+				     uint8_t *msg,
+				     uint16_t msglen __rte_unused)
+{
+	int ret = VIRTCHNL_STATUS_SUCCESS;
+	struct virtchnl_ether_addr_list *addr_list =
+		(struct virtchnl_ether_addr_list *)msg;
+	int i;
+
+	if (msg == NULL) {
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	for (i = 0; i < addr_list->num_elements; i++) {
+		/* TODO: MAC filter hasn't been enabled yet */
+	}
+
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_DEL_ETH_ADDR,
+			      ret, NULL, 0);
+	return ret;
+}
+
+static int
+iavfbe_process_cmd_add_vlan(struct iavfbe_adapter *adapter,
+			    uint8_t *msg, uint16_t msglen __rte_unused)
+{
+	int ret = VIRTCHNL_STATUS_SUCCESS;
+	struct virtchnl_vlan_filter_list *vlan_list =
+		(struct virtchnl_vlan_filter_list *)msg;
+	int i;
+
+	if (msg == NULL) {
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	for (i = 0; i < vlan_list->num_elements; i++) {
+		/* TODO: VLAN filter hasn't been enabled yet */
+	}
+
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_ADD_VLAN,
+			      ret, NULL, 0);
+	return ret;
+}
+
+static int
+iavfbe_process_cmd_del_vlan(struct iavfbe_adapter *adapter,
+			    uint8_t *msg,
+			    uint16_t msglen __rte_unused)
+{
+	int ret = VIRTCHNL_STATUS_SUCCESS;
+	struct virtchnl_vlan_filter_list *vlan_list =
+		(struct virtchnl_vlan_filter_list *)msg;
+	int i;
+
+	if (msg == NULL) {
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	for (i = 0; i < vlan_list->num_elements; i++) {
+		/* TODO: VLAN filter hasn't been enabled yet */
+	}
+
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_DEL_VLAN,
+			      ret, NULL, 0);
+	return ret;
+}
+
+static void
+iavfbe_execute_vf_cmd(struct iavfbe_adapter *adapter,
+			struct iavf_arq_event_info *event)
+{
+	enum virtchnl_ops msg_opc;
+	int ret;
+
+	msg_opc = (enum virtchnl_ops)rte_le_to_cpu_32(
+		event->desc.cookie_high);
+	/* perform basic checks on the msg */
+	ret = virtchnl_vc_validate_vf_msg(&adapter->virtchnl_version, msg_opc,
+					  event->msg_buf, event->msg_len);
+	if (ret) {
+		IAVF_BE_LOG(ERR, "Invalid message opcode %u, len %u",
+			    msg_opc, event->msg_len);
+		iavfbe_send_msg_to_vf(adapter, msg_opc,
+				      VIRTCHNL_STATUS_ERR_NOT_SUPPORTED,
+				      NULL, 0);
+		return;
+	}
+
+	switch (msg_opc) {
+	case VIRTCHNL_OP_VERSION:
+		IAVF_BE_LOG(INFO, "OP_VERSION received");
+		iavfbe_process_cmd_version(adapter, event->msg_buf);
+		break;
+	case VIRTCHNL_OP_RESET_VF:
+		IAVF_BE_LOG(INFO, "OP_RESET_VF received");
+		iavfbe_process_cmd_reset_vf(adapter);
+		break;
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		IAVF_BE_LOG(INFO, "OP_GET_VF_RESOURCES received");
+		iavfbe_process_cmd_get_vf_resource(adapter, event->msg_buf);
+		break;
+	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+		IAVF_BE_LOG(INFO, "OP_CONFIG_VSI_QUEUES received");
+		iavfbe_process_cmd_config_vsi_queues(adapter, event->msg_buf,
+						     event->msg_len);
+		break;
+	case VIRTCHNL_OP_ENABLE_QUEUES:
+		IAVF_BE_LOG(INFO, "OP_ENABLE_QUEUES received");
+		iavfbe_process_cmd_enable_queues(adapter, event->msg_buf,
+						 event->msg_len);
+		iavfbe_notify_vf_link_status(adapter);
+		break;
+	case VIRTCHNL_OP_DISABLE_QUEUES:
+		IAVF_BE_LOG(INFO, "OP_DISABLE_QUEUES received");
+		iavfbe_process_cmd_disable_queues(adapter, event->msg_buf,
+						  event->msg_len);
+		break;
+	case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+		IAVF_BE_LOG(INFO, "OP_CONFIG_PROMISCUOUS_MODE received");
+		iavfbe_process_cmd_config_promisc_mode(adapter, event->msg_buf,
+						       event->msg_len);
+		break;
+	case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+		IAVF_BE_LOG(INFO, "VIRTCHNL_OP_CONFIG_IRQ_MAP received");
+		iavfbe_process_cmd_config_irq_map(adapter, event->msg_buf,
+						  event->msg_len);
+		break;
+	case VIRTCHNL_OP_ADD_ETH_ADDR:
+		IAVF_BE_LOG(INFO, "VIRTCHNL_OP_ADD_ETH_ADDR received");
+		iavfbe_process_cmd_add_ether_address(adapter, event->msg_buf,
+						     event->msg_len);
+		break;
+	case VIRTCHNL_OP_DEL_ETH_ADDR:
+		IAVF_BE_LOG(INFO, "VIRTCHNL_OP_DEL_ETH_ADDR received");
+		iavfbe_process_cmd_del_ether_address(adapter, event->msg_buf,
+						     event->msg_len);
+		break;
+	case VIRTCHNL_OP_GET_STATS:
+		IAVF_BE_LOG(INFO, "VIRTCHNL_OP_GET_STATS received");
+		iavfbe_process_cmd_get_stats(adapter, event->msg_buf,
+					     event->msg_len);
+		break;
+	case VIRTCHNL_OP_ADD_VLAN:
+		IAVF_BE_LOG(INFO, "VIRTCHNL_OP_ADD_VLAN received");
+		iavfbe_process_cmd_add_vlan(adapter, event->msg_buf,
+					    event->msg_len);
+		iavfbe_notify(adapter);
+		break;
+	case VIRTCHNL_OP_DEL_VLAN:
+		IAVF_BE_LOG(INFO, "VIRTCHNL_OP_DEL_VLAN received");
+		iavfbe_process_cmd_del_vlan(adapter, event->msg_buf,
+					    event->msg_len);
+		iavfbe_notify(adapter);
+		break;
+	case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+		IAVF_BE_LOG(INFO, "VIRTCHNL_OP_ENABLE_VLAN_STRIPPING received");
+		iavfbe_process_cmd_enable_vlan_strip(adapter);
+		iavfbe_notify(adapter);
+		break;
+	case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+		IAVF_BE_LOG(INFO, "VIRTCHNL_OP_DISABLE_VLAN_STRIPPING received");
+		iavfbe_process_cmd_disable_vlan_strip(adapter);
+		iavfbe_notify(adapter);
+		break;
+	default:
+		IAVF_BE_LOG(ERR, "%u received, not supported", msg_opc);
+		iavfbe_send_msg_to_vf(adapter, msg_opc,
+				      VIRTCHNL_STATUS_ERR_NOT_SUPPORTED,
+				      NULL, 0);
+		break;
+	}
+}
+
 static inline int
 iavfbe_control_queue_remap(struct iavfbe_adapter *adapter,
 			  struct iavfbe_control_q *asq,
@@ -255,7 +1066,7 @@ iavfbe_handle_virtchnl_msg(void *arg)
 
 		switch (aq_opc) {
 		case iavf_aqc_opc_send_msg_to_pf:
-			/* Process msg from VF BE*/
+			iavfbe_execute_vf_cmd(adapter, &info);
 			break;
 		case iavf_aqc_opc_queue_shutdown:
 			iavfbe_reset_arq(adapter, true);
diff --git a/drivers/net/iavf_be/meson.build b/drivers/net/iavf_be/meson.build
index be13a2e492..e6b1c522a7 100644
--- a/drivers/net/iavf_be/meson.build
+++ b/drivers/net/iavf_be/meson.build
@@ -10,4 +10,5 @@ deps += ['bus_vdev', 'common_iavf', 'vfio_user', 'emu_iavf']
 sources = files(
 	'iavf_be_ethdev.c',
 	'iavf_be_vchnl.c',
+	'iavf_be_rxtx.c',
 )
-- 
2.21.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH v1 4/5] net/iavf_be: add Rx Tx burst support
  2020-12-19  7:54 [dpdk-dev] [PATCH v1 0/5] introduce iavf backend driver Jingjing Wu
                   ` (2 preceding siblings ...)
  2020-12-19  7:54 ` [dpdk-dev] [PATCH v1 3/5] net/iavf_be: virtchnl messages process Jingjing Wu
@ 2020-12-19  7:54 ` Jingjing Wu
  2020-12-19  7:54 ` [dpdk-dev] [PATCH v1 5/5] doc: new net PMD iavf_be Jingjing Wu
  2021-01-07  7:14 ` [dpdk-dev] [PATCH v2 0/6] introduce iavf backend driver Jingjing Wu
  5 siblings, 0 replies; 13+ messages in thread
From: Jingjing Wu @ 2020-12-19  7:54 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, chenbo.xia, xiuchun.lu, Miao Li

Enable packet receive and transmit functions.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Xiuchun Lu <xiuchun.lu@intel.com>
Signed-off-by: Miao Li <miao.li@intel.com>
---
 drivers/net/iavf_be/iavf_be_ethdev.c |   3 +
 drivers/net/iavf_be/iavf_be_rxtx.c   | 329 +++++++++++++++++++++++++++
 drivers/net/iavf_be/iavf_be_rxtx.h   |  60 +++++
 3 files changed, 392 insertions(+)

diff --git a/drivers/net/iavf_be/iavf_be_ethdev.c b/drivers/net/iavf_be/iavf_be_ethdev.c
index e809f52312..c259c7807e 100644
--- a/drivers/net/iavf_be/iavf_be_ethdev.c
+++ b/drivers/net/iavf_be/iavf_be_ethdev.c
@@ -862,6 +862,9 @@ eth_dev_iavfbe_create(struct rte_vdev_device *dev,
 	rte_ether_addr_copy(addr, &eth_dev->data->mac_addrs[0]);
 
 	eth_dev->dev_ops = &iavfbe_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &iavfbe_recv_pkts;
+	eth_dev->tx_pkt_burst = &iavfbe_xmit_pkts;
+	eth_dev->tx_pkt_prepare = &iavfbe_prep_pkts;
 
 	eth_dev->data->dev_link = iavfbe_link;
 	eth_dev->data->numa_node = dev->device.numa_node;
diff --git a/drivers/net/iavf_be/iavf_be_rxtx.c b/drivers/net/iavf_be/iavf_be_rxtx.c
index 72cbead45a..d78f0f23eb 100644
--- a/drivers/net/iavf_be/iavf_be_rxtx.c
+++ b/drivers/net/iavf_be/iavf_be_rxtx.c
@@ -160,3 +160,332 @@ iavfbe_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 	qinfo->conf.offloads = DEV_TX_OFFLOAD_MULTI_SEGS;
 	qinfo->conf.tx_deferred_start = false;
 }
+
+static inline void
+iavfbe_recv_offload(struct rte_mbuf *m,
+	uint16_t cmd, uint32_t offset)
+{
+	m->l2_len = ((offset & IAVF_TXD_QW1_MACLEN_MASK) >>
+		IAVF_TX_DESC_LENGTH_MACLEN_SHIFT) << 1;
+	m->l3_len = ((offset & IAVF_TXD_QW1_IPLEN_MASK) >>
+		IAVF_TX_DESC_LENGTH_IPLEN_SHIFT) << 2;
+	m->l4_len = ((offset & IAVF_TXD_QW1_L4LEN_MASK) >>
+		IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) << 2;
+
+	switch (cmd & IAVF_TX_DESC_CMD_IIPT_IPV4_CSUM) {
+	case IAVF_TX_DESC_CMD_IIPT_IPV4_CSUM:
+		m->ol_flags = PKT_TX_IP_CKSUM;
+		break;
+	case IAVF_TX_DESC_CMD_IIPT_IPV4:
+		m->ol_flags = PKT_TX_IPV4;
+		break;
+	case IAVF_TX_DESC_CMD_IIPT_IPV6:
+		m->ol_flags = PKT_TX_IPV6;
+		break;
+	default:
+		break;
+	}
+
+	switch (cmd & IAVF_TX_DESC_CMD_L4T_EOFT_UDP) {
+	case IAVF_TX_DESC_CMD_L4T_EOFT_UDP:
+		m->ol_flags |= PKT_TX_UDP_CKSUM;
+		break;
+	case IAVF_TX_DESC_CMD_L4T_EOFT_SCTP:
+		m->ol_flags |= PKT_TX_SCTP_CKSUM;
+		break;
+	case IAVF_TX_DESC_CMD_L4T_EOFT_TCP:
+		m->ol_flags |= PKT_TX_TCP_CKSUM;
+		break;
+	default:
+		break;
+	}
+}
+
+/* RX function */
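+/* The backend's receive path drains the front end's Tx ring: each guest
+ * buffer is copied into a freshly allocated mbuf and segments are
+ * chained until a descriptor with the EOP bit completes the packet.
+ */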
+uint16_t
+iavfbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct iavfbe_rx_queue *rxq = (struct iavfbe_rx_queue *)rx_queue;
+	struct iavfbe_adapter *adapter = (struct iavfbe_adapter *)rxq->adapter;
+	uint32_t nb_rx = 0;
+	uint16_t head, tail;
+	uint16_t cmd;
+	uint32_t offset;
+	volatile struct iavf_tx_desc *ring_dma;
+	struct rte_ether_addr *ea = NULL;
+	uint64_t ol_flags, tso_segsz = 0;
+
+	if (unlikely(rte_atomic32_read(&rxq->enable) == 0)) {
+		/* RX queue is not enabled currently */
+		return 0;
+	}
+
+	ring_dma = rxq->tx_ring;
+	head = rxq->tx_head;
+	tail = (uint16_t)IAVFBE_READ_32(rxq->qtx_tail);
+
+	while (head != tail && nb_rx < nb_pkts) {
+		volatile struct iavf_tx_desc *d;
+		void *desc_addr;
+		uint64_t data_len, tmp;
+		struct rte_mbuf *cur, *rxm, *first = NULL;
+
+		ol_flags = 0;
+		while (1) {
+			d = &ring_dma[head];
+			head++;
+
+			if (unlikely(head == rxq->nb_rx_desc))
+				head = 0;
+
+			if ((head & 0x3) == 0)
+				rte_prefetch0(&ring_dma[head]);
+
+			if ((d->cmd_type_offset_bsz &
+			     IAVF_TXD_QW1_DTYPE_MASK) ==
+			    IAVF_TX_DESC_DTYPE_CONTEXT) {
+				ol_flags = PKT_TX_TCP_SEG;
+				tso_segsz = (d->cmd_type_offset_bsz &
+					     IAVF_TXD_CTX_QW1_MSS_MASK) >>
+					    IAVF_TXD_CTX_QW1_MSS_SHIFT;
+				d = &ring_dma[head];
+				head++;
+				if (unlikely(head == rxq->nb_rx_desc))
+					head = 0;
+			}
+
+			cmd = (d->cmd_type_offset_bsz & IAVF_TXD_QW1_CMD_MASK) >>
+				IAVF_TXD_QW1_CMD_SHIFT;
+			offset = (d->cmd_type_offset_bsz & IAVF_TXD_QW1_OFFSET_MASK) >>
+				IAVF_TXD_QW1_OFFSET_SHIFT;
+
+			rxm = rte_pktmbuf_alloc(rxq->mp);
+			if (unlikely(rxm == NULL)) {
+				IAVF_BE_LOG(ERR, "failed to allocate mbuf");
+				break;
+			}
+
+			data_len = (rte_le_to_cpu_64(d->cmd_type_offset_bsz)
+						& IAVF_TXD_QW1_TX_BUF_SZ_MASK)
+				>> IAVF_TXD_QW1_TX_BUF_SZ_SHIFT;
+			if (data_len > rte_pktmbuf_tailroom(rxm)) {
+				rte_pktmbuf_free(rxm);
+				rte_pktmbuf_free(first);
+				return nb_rx;
+			}
+			tmp = data_len;
+			desc_addr = (void *)(uintptr_t)rte_iavf_emu_get_dma_vaddr(
+				adapter->mem_table, d->buffer_addr, &tmp);
+
+			rte_prefetch0(desc_addr);
+			rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+
+			rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+			rte_memcpy(rte_pktmbuf_mtod(rxm, void *), desc_addr, data_len);
+
+			rxm->nb_segs = 1;
+			rxm->next = NULL;
+			rxm->pkt_len = data_len;
+			rxm->data_len = data_len;
+
+			if (cmd & IAVF_TX_DESC_CMD_IL2TAG1)
+				rxm->vlan_tci = (d->cmd_type_offset_bsz &
+						 IAVF_TXD_QW1_L2TAG1_MASK) >>
+						IAVF_TXD_QW1_L2TAG1_SHIFT;
+
+			if (cmd & IAVF_TX_DESC_CMD_RS)
+				d->cmd_type_offset_bsz =
+					rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
+
+			if (!first) {
+				first = rxm;
+				cur = rxm;
+				iavfbe_recv_offload(rxm, cmd, offset);
+				/* TSO enabled */
+				if (ol_flags & PKT_TX_TCP_SEG) {
+					rxm->tso_segsz = tso_segsz;
+					rxm->ol_flags |= ol_flags;
+				}
+			} else {
+				first->pkt_len += (uint32_t)data_len;
+				first->nb_segs++;
+				cur->next = rxm;
+				cur = rxm;
+			}
+
+			if (cmd & IAVF_TX_DESC_CMD_EOP)
+				break;
+		}
+
+		if (unlikely(first == NULL))
+			break;
+
+		if ((!(ol_flags & PKT_TX_TCP_SEG)) &&
+		    (first->pkt_len > rxq->max_pkt_len)) {
+			rte_pktmbuf_free(first);
+			return nb_rx;
+		}
+
+		rx_pkts[nb_rx] = first;
+		nb_rx++;
+
+		/* Count multicast and broadcast */
+		ea = rte_pktmbuf_mtod(first, struct rte_ether_addr *);
+		if (rte_is_multicast_ether_addr(ea)) {
+			if (rte_is_broadcast_ether_addr(ea))
+				rxq->stats.recv_broad_num++;
+			else
+				rxq->stats.recv_multi_num++;
+		}
+
+		rxq->stats.recv_pkt_num++;
+		rxq->stats.recv_bytes += first->pkt_len;
+	}
+
+	rxq->tx_head = head;
+	return nb_rx;
+}
+
+/* TX function */
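+/* The backend's transmit path fills the front end's Rx ring: each mbuf
+ * segment is copied into a posted guest buffer and the descriptor is
+ * written back with the DD (and, on the last segment, EOP) status bits.
+ */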
+uint16_t
+iavfbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct iavfbe_tx_queue *txq = (struct iavfbe_tx_queue *)tx_queue;
+	struct iavfbe_adapter *adapter = (struct iavfbe_adapter *)txq->adapter;
+	volatile union iavf_rx_desc *ring_dma;
+	volatile union iavf_rx_desc *d;
+	struct rte_ether_addr *ea = NULL;
+	struct rte_mbuf *pkt, *m;
+	uint16_t head, tail;
+	uint16_t nb_tx, nb_avail; /* number of avail desc */
+	void *desc_addr;
+	uint64_t  len, data_len;
+	uint32_t pkt_len;
+	uint64_t qword1;
+
+	if (unlikely(rte_atomic32_read(&txq->enable) == 0)) {
+		/* TX queue is not enabled currently */
+		return 0;
+	}
+
+	nb_tx = 0;
+	len = 1;
+	head = txq->rx_head;
+	ring_dma = txq->rx_ring;
+	tail = (uint16_t)IAVFBE_READ_32(txq->qrx_tail);
+	nb_avail = (tail >= head) ?
+		(tail - head) : (txq->nb_tx_desc - head + tail);
+
+	while (nb_avail > 0 && nb_tx < nb_pkts) {
+		pkt = tx_pkts[nb_tx];
+		pkt_len = rte_pktmbuf_pkt_len(pkt);
+
+		if (pkt->nb_segs > nb_avail) /* no desc to use */
+			goto end_of_xmit;
+
+		m = pkt;
+
+		do {
+			qword1 = 0;
+			d = &ring_dma[head];
+			data_len = rte_pktmbuf_data_len(m);
+			desc_addr = (void *)(uintptr_t)rte_iavf_emu_get_dma_vaddr(
+				adapter->mem_table,
+				rte_le_to_cpu_64(d->read.pkt_addr),
+				&len);
+
+			rte_memcpy(desc_addr, rte_pktmbuf_mtod(m, void *),
+				   data_len);
+
+			/* If pkt carries vlan info, post it to descriptor */
+			if (m->ol_flags & (PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN)) {
+				qword1 |= 1 << IAVF_RX_DESC_STATUS_L2TAG1P_SHIFT;
+				d->wb.qword0.lo_dword.l2tag1 =
+					rte_cpu_to_le_16(pkt->vlan_tci);
+			}
+			m = m->next;
+			/* Mark the last desc with EOP flag */
+			if (!m)
+				qword1 |=
+					((1 << IAVF_RX_DESC_STATUS_EOF_SHIFT)
+					 << IAVF_RXD_QW1_STATUS_SHIFT);
+
+			qword1 = qword1 |
+				((1 << IAVF_RX_DESC_STATUS_DD_SHIFT)
+				<< IAVF_RXD_QW1_STATUS_SHIFT) |
+				((data_len << IAVF_RXD_QW1_LENGTH_PBUF_SHIFT)
+				& IAVF_RXD_QW1_LENGTH_PBUF_MASK);
+
+			rte_wmb();
+
+			d->wb.qword1.status_error_len = rte_cpu_to_le_64(qword1);
+
+			IAVF_BE_DUMP_RX_DESC(txq, d, head);
+
+			head++;
+			if (head >= txq->nb_tx_desc)
+				head = 0;
+
+			/* Prefetch next 4 RX descriptors */
+			if ((head & 0x3) == 0)
+				rte_prefetch0(&ring_dma[head]);
+		} while (m);
+
+		nb_avail -= pkt->nb_segs;
+
+		nb_tx++;
+
+		/* update stats */
+		ea = rte_pktmbuf_mtod(pkt, struct rte_ether_addr *);
+		if (rte_is_multicast_ether_addr(ea)) {
+			if (rte_is_broadcast_ether_addr(ea))
+				txq->stats.sent_broad_num++;
+			else
+				txq->stats.sent_multi_num++;
+		}
+		txq->stats.sent_pkt_num++;
+		txq->stats.sent_bytes += pkt_len;
+		/* Free entire packet */
+		rte_pktmbuf_free(pkt);
+	}
+
+end_of_xmit:
+	txq->rx_head = head;
+	txq->stats.sent_miss_num += nb_pkts - nb_tx;
+	return nb_tx;
+}
+
+/* TX prep functions */
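+/* Validate packet and segment sizes against the limits the front end
+ * negotiated via virtchnl before packets reach iavfbe_xmit_pkts().
+ */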
+uint16_t
+iavfbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		 uint16_t nb_pkts)
+{
+	struct iavfbe_tx_queue *txq = (struct iavfbe_tx_queue *)tx_queue;
+	struct rte_mbuf *m;
+	uint16_t data_len;
+	uint32_t pkt_len;
+	int i;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		pkt_len = rte_pktmbuf_pkt_len(m);
+
+		/* Check buffer len and packet len */
+		if (pkt_len > txq->max_pkt_size) {
+			rte_errno = EINVAL;
+			return i;
+		}
+		/* Cannot support a pkt using more than 5 descriptors */
+		if (m->nb_segs > AVF_RX_MAX_SEG) {
+			rte_errno = EINVAL;
+			return i;
+		}
+		do {
+			data_len = rte_pktmbuf_data_len(m);
+			if (data_len > txq->buffer_size) {
+				rte_errno = EINVAL;
+				return i;
+			}
+			m = m->next;
+		} while (m);
+	}
+
+	return i;
+}
diff --git a/drivers/net/iavf_be/iavf_be_rxtx.h b/drivers/net/iavf_be/iavf_be_rxtx.h
index e8be3f532d..65fe7ed409 100644
--- a/drivers/net/iavf_be/iavf_be_rxtx.h
+++ b/drivers/net/iavf_be/iavf_be_rxtx.h
@@ -99,5 +99,65 @@ void iavfbe_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			     struct rte_eth_rxq_info *qinfo);
 void iavfbe_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			     struct rte_eth_txq_info *qinfo);
+uint16_t iavfbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			  uint16_t nb_pkts);
+uint16_t iavfbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			  uint16_t nb_pkts);
+uint16_t iavfbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			  uint16_t nb_pkts);
+
+static inline void
+iavfbe_dump_rx_descriptor(struct iavfbe_tx_queue *txq,
+			  const void *desc,
+			  uint16_t rx_id)
+{
+	const union iavf_32byte_rx_desc *rx_desc = desc;
+
+	printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64
+	       " QW2: 0x%016"PRIx64" QW3: 0x%016"PRIx64"\n", txq->queue_id,
+	       rx_id, rx_desc->read.pkt_addr, rx_desc->read.hdr_addr,
+	       rx_desc->read.rsvd1, rx_desc->read.rsvd2);
+}
+
+/* All the descriptors are 16 bytes, so just use one of them
+ * to print the qwords
+ */
+static inline void
+iavfbe_dump_tx_descriptor(const struct iavfbe_rx_queue *rxq,
+			  const void *desc, uint16_t tx_id)
+{
+	const char *name;
+	const struct iavf_tx_desc *tx_desc = desc;
+	enum iavf_tx_desc_dtype_value type;
+
+	type = (enum iavf_tx_desc_dtype_value)rte_le_to_cpu_64(
+		tx_desc->cmd_type_offset_bsz &
+		rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK));
+	switch (type) {
+	case IAVF_TX_DESC_DTYPE_DATA:
+		name = "Tx_data_desc";
+		break;
+	case IAVF_TX_DESC_DTYPE_CONTEXT:
+		name = "Tx_context_desc";
+		break;
+	default:
+		name = "unknown_desc";
+		break;
+	}
+
+	printf("Queue %d %s %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64"\n",
+	       rxq->queue_id, name, tx_id, tx_desc->buffer_addr,
+	       tx_desc->cmd_type_offset_bsz);
+}
+
+#ifdef DEBUG_DUMP_DESC
+#define IAVF_BE_DUMP_RX_DESC(rxq, desc, rx_id) \
+	iavfbe_dump_rx_descriptor(rxq, desc, rx_id)
+#define IAVF_BE_DUMP_TX_DESC(txq, desc, tx_id) \
+	iavfbe_dump_tx_descriptor(txq, desc, tx_id)
+#else
+#define IAVF_BE_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
+#define IAVF_BE_DUMP_TX_DESC(txq, desc, tx_id) do { } while (0)
+#endif
 
 #endif /* _AVF_BE_RXTX_H_ */
-- 
2.21.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH v1 5/5] doc: new net PMD iavf_be
  2020-12-19  7:54 [dpdk-dev] [PATCH v1 0/5] introduce iavf backend driver Jingjing Wu
                   ` (3 preceding siblings ...)
  2020-12-19  7:54 ` [dpdk-dev] [PATCH v1 4/5] net/iavf_be: add Rx Tx burst support Jingjing Wu
@ 2020-12-19  7:54 ` Jingjing Wu
  2021-01-07  7:14 ` [dpdk-dev] [PATCH v2 0/6] introduce iavf backend driver Jingjing Wu
  5 siblings, 0 replies; 13+ messages in thread
From: Jingjing Wu @ 2020-12-19  7:54 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, chenbo.xia, xiuchun.lu

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 MAINTAINERS                            |  6 +++
 doc/guides/nics/features/iavf_be.ini   | 11 ++++++
 doc/guides/nics/iavf_be.rst            | 53 ++++++++++++++++++++++++++
 doc/guides/nics/index.rst              |  1 +
 doc/guides/rel_notes/release_21_02.rst |  6 +++
 5 files changed, 77 insertions(+)
 create mode 100644 doc/guides/nics/features/iavf_be.ini
 create mode 100644 doc/guides/nics/iavf_be.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index bca206ba8f..5faf093571 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -707,6 +707,12 @@ F: drivers/net/iavf/
 F: drivers/common/iavf/
 F: doc/guides/nics/features/iavf*.ini
 
+Intel iavf_be
+M: Jingjing Wu <jingjing.wu@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/iavf_be/
+F: doc/guides/nics/features/iavf_be*.ini
+
 Intel ice
 M: Qiming Yang <qiming.yang@intel.com>
 M: Qi Zhang <qi.z.zhang@intel.com>
diff --git a/doc/guides/nics/features/iavf_be.ini b/doc/guides/nics/features/iavf_be.ini
new file mode 100644
index 0000000000..8528695d00
--- /dev/null
+++ b/doc/guides/nics/features/iavf_be.ini
@@ -0,0 +1,11 @@
+;
+; Supported features of the 'iavf_be' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Promiscuous mode     = Y
+Allmulticast mode    = Y
+Basic stats          = Y
+Scattered Rx         = Y
+x86-64               = Y
diff --git a/doc/guides/nics/iavf_be.rst b/doc/guides/nics/iavf_be.rst
new file mode 100644
index 0000000000..5195baec25
--- /dev/null
+++ b/doc/guides/nics/iavf_be.rst
@@ -0,0 +1,53 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2020 Intel Corporation.
+
+Poll Mode Driver for Emulated Backend of Intel® AVF
+====================================================
+
+Intel® AVF is an Ethernet SR-IOV Virtual Function with the same
+device id (8086:1889) across different Intel Ethernet Controllers.
+
+Emulated Backend of Intel® AVF is a software-emulated device that provides
+an IAVF-compatible layout and acceleration to consumers of IAVF.
+Communication uses the vfio-user protocol as the transport mechanism.
+The backend PMD is based on the *librte_vfio_user* and *librte_emudev* libraries.
+
+PMD arguments
+-------------
+
+The following devargs are provided to set up an iavf_be device in DPDK:
+
+#.  ``emu``:
+
+    Name of the emulated device (emudev) that the port depends on.
+    (required)
+
+#.  ``mac``:
+
+    MAC address assigned to the device; the front-end device takes it as
+    its default MAC. If not set, the driver generates a random one.
+    (optional)
+
+Set up an iavf_be interface
+---------------------------
+
+The following example will set up an iavf_be interface in DPDK:
+
+.. code-block:: console
+
+    --vdev emu_iavf0,sock=/tmp/to/socket/emu_iavf0,queues=4 --vdev net_iavfbe0,emu=emu_iavf0,mac=00:11:22:33:44:55
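+
+A complete application command line might look as follows (a sketch; the
+core list, socket path and MAC address are illustrative only):
+
+.. code-block:: console
+
+    ./dpdk-testpmd -l 0-3 -n 4 \
+        --vdev emu_iavf0,sock=/tmp/emu_iavf0,queues=4 \
+        --vdev net_iavfbe0,emu=emu_iavf0,mac=00:11:22:33:44:55 \
+        -- -i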
+
+Features and Limitations of iavf_be PMD
+---------------------------------------
+Currently, the iavf_be PMD provides the basic functionality of packet reception, transmission and event handling.
+
+*   It supports multiple queues.
+
+*   It supports Base mode virtchnl messages processing.
+
+*   There is no need to stop Rx/Tx manually; stop the guest or the iavf driver in the guest instead.
+
+*   It runs in polling mode; Rx interrupt is not supported.
+
+*   No MAC/VLAN filtering support.
+
+*   No classification offload support.
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 3443617755..bd764ccbb3 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -30,6 +30,7 @@ Network Interface Controller Drivers
     hinic
     hns3
     i40e
+    iavf_be
     ice
     igb
     igc
diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
index b310b67b7d..bd14d55fc6 100644
--- a/doc/guides/rel_notes/release_21_02.rst
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -83,6 +83,12 @@ New Features
 
   See :doc:`../prog_guide/emudev` for more information.
 
+* **Added iavf_be net driver.**
+
+  Added the iavf_be poll mode driver, a software backend for the Intel® AVF Ethernet device.
+
+  See :doc:`../nics/iavf_be` for more information.
+
 Removed Items
 -------------
 
-- 
2.21.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH v2 0/6] introduce iavf backend driver
  2020-12-19  7:54 [dpdk-dev] [PATCH v1 0/5] introduce iavf backend driver Jingjing Wu
                   ` (4 preceding siblings ...)
  2020-12-19  7:54 ` [dpdk-dev] [PATCH v1 5/5] doc: new net PMD iavf_be Jingjing Wu
@ 2021-01-07  7:14 ` Jingjing Wu
  2021-01-07  7:14   ` [dpdk-dev] [PATCH v2 1/6] net/iavf_be: " Jingjing Wu
                     ` (5 more replies)
  5 siblings, 6 replies; 13+ messages in thread
From: Jingjing Wu @ 2021-01-07  7:14 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, chenbo.xia, xiuchun.lu

This series introduces a net device driver called iavfbe which works as
the datapath driver for an emulated iavf type device. It provides basic
functions following the Intel® Ethernet Adaptive Virtual Function
specification, including receive/transmit packets and virtchnl control
message handling.
The driver enabling work is based on the framework mentioned in:
  [RFC 0/2] Add device emulation support in DPDK
  http://patchwork.dpdk.org/cover/75549/

                    +------------------------------------------------------+
                    |   +---------------+      +---------------+           |
                    |   | iavf_emudev   |      | iavfbe_ethdev |           |
                    |   |    driver     |      |     driver    |           |
                    |   +---------------+      +---------------+           |
                    |           |                       |                  |
                    | ------------------------------------------- VDEV BUS |
                    |           |                       |                  |
                    |   +---------------+       +--------------+           |
+--------------+    |   | vdev:         |       | vdev:        |           |
| +----------+ |    |   | /path/to/vfio |       |iavf_emudev_# |           |
| | Generic  | |    |   +---------------+       +--------------+           |
| | vfio-dev | |    |           |                                          |
| +----------+ |    |           |                                          |
| +----------+ |    |      +----------+                                    |
| | vfio-user| |    |      | vfio-user|                                    |
| | client   | |<---|----->| server   |                                    |
| +----------+ |    |      +----------+                                    |
| QEMU/DPDK    |    | DPDK                                                 |
+--------------+    +------------------------------------------------------+


This series depends on patch serieses:
  [0/9] Introduce vfio-user library:
  http://patchwork.dpdk.org/cover/85389/
  [0/8]Introduce emudev library and iavf emudev driver
  http://patchwork.dpdk.org/cover/85488/

v2:
 - extend to support iavf rx interrupt
 - extend to support control queue interrupt
 - rename some macros in the header file
 - fix lock and init in virtchnl about queues
 - fix some typos

Jingjing Wu (6):
  net/iavf_be: introduce iavf backend driver
  net/iavf_be: control queue enabling
  net/iavf_be: virtchnl messages process
  net/iavf_be: add Rx Tx burst support
  net/iavf_be: extend backend to support iavf rxq_irq
  doc: new net PMD iavf_be

 MAINTAINERS                            |    6 +
 doc/guides/nics/features/iavf_be.ini   |   11 +
 doc/guides/nics/iavf_be.rst            |   53 ++
 doc/guides/nics/index.rst              |    1 +
 doc/guides/rel_notes/release_21_02.rst |    6 +
 drivers/net/iavf_be/iavf_be.h          |  109 +++
 drivers/net/iavf_be/iavf_be_ethdev.c   |  964 ++++++++++++++++++++
 drivers/net/iavf_be/iavf_be_rxtx.c     |  511 +++++++++++
 drivers/net/iavf_be/iavf_be_rxtx.h     |  165 ++++
 drivers/net/iavf_be/iavf_be_vchnl.c    | 1113 ++++++++++++++++++++++++
 drivers/net/iavf_be/meson.build        |   14 +
 drivers/net/iavf_be/version.map        |    3 +
 drivers/net/meson.build                |    1 +
 13 files changed, 2957 insertions(+)
 create mode 100644 doc/guides/nics/features/iavf_be.ini
 create mode 100644 doc/guides/nics/iavf_be.rst
 create mode 100644 drivers/net/iavf_be/iavf_be.h
 create mode 100644 drivers/net/iavf_be/iavf_be_ethdev.c
 create mode 100644 drivers/net/iavf_be/iavf_be_rxtx.c
 create mode 100644 drivers/net/iavf_be/iavf_be_rxtx.h
 create mode 100644 drivers/net/iavf_be/iavf_be_vchnl.c
 create mode 100644 drivers/net/iavf_be/meson.build
 create mode 100644 drivers/net/iavf_be/version.map

-- 
2.21.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH v2 1/6] net/iavf_be: introduce iavf backend driver
  2021-01-07  7:14 ` [dpdk-dev] [PATCH v2 0/6] introduce iavf backend driver Jingjing Wu
@ 2021-01-07  7:14   ` Jingjing Wu
  2021-01-07  7:14   ` [dpdk-dev] [PATCH v2 2/6] net/iavf_be: control queue enabling Jingjing Wu
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Jingjing Wu @ 2021-01-07  7:14 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, chenbo.xia, xiuchun.lu, Kun Qiu

Introduce driver for iavf backend vdev which is based on
vfio-user protocol and emudev libs.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Kun Qiu <kun.qiu@intel.com>
---
 drivers/net/iavf_be/iavf_be.h        |  39 ++++
 drivers/net/iavf_be/iavf_be_ethdev.c | 330 +++++++++++++++++++++++++++
 drivers/net/iavf_be/meson.build      |  12 +
 drivers/net/iavf_be/version.map      |   3 +
 drivers/net/meson.build              |   1 +
 5 files changed, 385 insertions(+)
 create mode 100644 drivers/net/iavf_be/iavf_be.h
 create mode 100644 drivers/net/iavf_be/iavf_be_ethdev.c
 create mode 100644 drivers/net/iavf_be/meson.build
 create mode 100644 drivers/net/iavf_be/version.map

diff --git a/drivers/net/iavf_be/iavf_be.h b/drivers/net/iavf_be/iavf_be.h
new file mode 100644
index 0000000000..956955786a
--- /dev/null
+++ b/drivers/net/iavf_be/iavf_be.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _IAVF_BE_H_
+#define _IAVF_BE_H_
+
+/* Structure to store private data for backend instance */
+struct iavfbe_adapter {
+	struct rte_eth_dev *eth_dev;
+	struct rte_emudev *emu_dev;
+	uint16_t edev_id;  /* Emulated Device ID */
+	struct rte_emudev_info dev_info;
+
+	uint16_t nb_qps;
+	bool link_up;
+	int cq_irqfd;
+	rte_atomic32_t irq_enable;
+
+	uint8_t unicast_promisc:1,
+		multicast_promisc:1,
+		vlan_filter:1,
+		vlan_strip:1;
+
+	int adapter_stopped;
+	uint8_t *reset; /* Reset status */
+	volatile int started;
+};
+
+#define IAVFBE_DEV_PRIVATE_TO_ADAPTER(adapter) \
+	((struct iavfbe_adapter *)adapter)
+
+int iavfbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+
+extern int iavfbe_logtype;
+#define IAVF_BE_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, iavfbe_logtype, "%s(): " fmt "\n", \
+		__func__, ## args)
+#endif /* _IAVF_BE_H_ */
diff --git a/drivers/net/iavf_be/iavf_be_ethdev.c b/drivers/net/iavf_be/iavf_be_ethdev.c
new file mode 100644
index 0000000000..3d5ca34ec0
--- /dev/null
+++ b/drivers/net/iavf_be/iavf_be_ethdev.c
@@ -0,0 +1,330 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <sys/queue.h>
+#include <unistd.h>
+#include <inttypes.h>
+
+#include <rte_kvargs.h>
+#include <rte_ethdev_driver.h>
+#include <rte_bus_vdev.h>
+#include <rte_ethdev_vdev.h>
+#include <rte_emudev.h>
+#include <rte_iavf_emu.h>
+
+#include <iavf_type.h>
+#include "iavf_be.h"
+
+#define AVFBE_EDEV_ID_ARG "emu"
+#define AVFBE_MAC_ARG "mac"
+
+int iavfbe_logtype;
+
+static const char *iavfbe_valid_arg[] = {
+	AVFBE_EDEV_ID_ARG,
+	AVFBE_MAC_ARG,
+	NULL
+};
+
+static struct rte_eth_link iavfbe_link = {
+	.link_speed = ETH_SPEED_NUM_NONE,
+	.link_duplex = ETH_LINK_FULL_DUPLEX,
+	.link_status = ETH_LINK_DOWN
+};
+
+static int iavfbe_dev_configure(struct rte_eth_dev *dev);
+static int iavfbe_dev_close(struct rte_eth_dev *dev);
+static int iavfbe_dev_start(struct rte_eth_dev *dev);
+static int iavfbe_dev_stop(struct rte_eth_dev *dev);
+static int iavfbe_dev_info_get(struct rte_eth_dev *dev,
+				struct rte_eth_dev_info *dev_info);
+static void iavfbe_destroy_adapter(struct rte_eth_dev *dev);
+
+static const struct eth_dev_ops iavfbe_eth_dev_ops = {
+	.dev_configure              = iavfbe_dev_configure,
+	.dev_close                  = iavfbe_dev_close,
+	.dev_start                  = iavfbe_dev_start,
+	.dev_stop                   = iavfbe_dev_stop,
+	.dev_infos_get              = iavfbe_dev_info_get,
+	.link_update                = iavfbe_dev_link_update,
+};
+
+static int
+iavfbe_dev_info_get(struct rte_eth_dev *dev  __rte_unused,  struct rte_eth_dev_info *dev_info)
+{
+	dev_info->max_rx_queues = 0;
+	dev_info->max_tx_queues = 0;
+	dev_info->min_rx_bufsize = 0;
+	dev_info->max_rx_pktlen = 0;
+
+	return 0;
+}
+
+
+static int
+iavfbe_dev_configure(struct rte_eth_dev *dev __rte_unused)
+{
+	/* No device-specific configuration is needed for now */
+	return 0;
+}
+
+static int
+iavfbe_dev_start(struct rte_eth_dev *dev)
+{
+	struct iavfbe_adapter *adapter =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	adapter->adapter_stopped = 0;
+
+	return 0;
+}
+
+static int
+iavfbe_dev_stop(struct rte_eth_dev *dev)
+{
+	struct iavfbe_adapter *adapter =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	if (adapter->adapter_stopped == 1)
+		return 0;
+
+	adapter->adapter_stopped = 1;
+
+	return 0;
+}
+
+int
+iavfbe_dev_link_update(struct rte_eth_dev *dev,
+		       __rte_unused int wait_to_complete)
+{
+	struct iavfbe_adapter *ad =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct rte_eth_link new_link = dev->data->dev_link;
+
+	/* Only link status is updated */
+	new_link.link_status = ad->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+
+	if (rte_atomic64_cmpset((volatile uint64_t *)&dev->data->dev_link,
+				*(uint64_t *)&dev->data->dev_link,
+				*(uint64_t *)&new_link) == 0)
+		return -EAGAIN;
+
+	return 0;
+}
+
+static int
+iavfbe_dev_close(struct rte_eth_dev *dev)
+{
+	iavfbe_destroy_adapter(dev);
+	rte_eth_dev_release_port(dev);
+
+	return 0;
+}
+
+static inline int
+save_str(const char *key __rte_unused, const char *value,
+	void *extra_args)
+{
+	const char **str = extra_args;
+
+	if (value == NULL)
+		return -1;
+
+	*str = value;
+
+	return 0;
+}
+
+static inline int
+set_mac(const char *key __rte_unused, const char *value, void *extra_args)
+{
+	struct rte_ether_addr *ether_addr = (struct rte_ether_addr *)extra_args;
+
+	if (rte_ether_unformat_addr(value, ether_addr) < 0)
+		IAVF_BE_LOG(ERR, "Failed to parse mac '%s'.", value);
+	return 0;
+}
+
+static int
+iavfbe_init_adapter(struct rte_eth_dev *eth_dev,
+		    struct rte_emudev *emu_dev,
+		    struct rte_ether_addr *ether_addr __rte_unused)
+{
+	struct iavfbe_adapter *adapter;
+	struct rte_iavf_emu_config *conf;
+	int ret;
+
+	adapter = IAVFBE_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+
+	adapter->eth_dev = eth_dev;
+	adapter->emu_dev = emu_dev;
+	adapter->edev_id = emu_dev->dev_id;
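+	/* Publish the adapter pointer before any emudev callback can see it */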
+	emu_dev->backend_priv = (void *)adapter;
+	rte_wmb();
+
+	conf = rte_zmalloc_socket("iavfbe", sizeof(*conf), 0,
+				  eth_dev->device->numa_node);
+	if (!conf) {
+		IAVF_BE_LOG(ERR, "Fail to allocate emulated "
+			"iavf configuration");
+		return -ENOMEM;
+	}
+	adapter->dev_info.dev_priv = (rte_emudev_obj_t)conf;
+
+	ret = rte_emudev_get_dev_info(emu_dev->dev_id, &adapter->dev_info);
+	if (ret)
+		goto err_info;
+
+	adapter->nb_qps = conf->qp_num;
+	return 0;
+
+err_info:
+	rte_free(conf);
+	return ret;
+}
+
+static void
+iavfbe_destroy_adapter(struct rte_eth_dev *dev)
+{
+	struct iavfbe_adapter *adapter =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	if (adapter->emu_dev) {
+		adapter->emu_dev->backend_priv = NULL;
+		rte_wmb();
+	}
+
+	rte_free(adapter->dev_info.dev_priv);
+}
+
+static int
+eth_dev_iavfbe_create(struct rte_vdev_device *dev,
+		      struct rte_emudev *emu_dev,
+		      struct rte_ether_addr *addr)
+{
+	struct rte_eth_dev *eth_dev = NULL;
+	struct iavfbe_adapter *adapter;
+	int ret = 0;
+
+	if (dev->device.numa_node == SOCKET_ID_ANY)
+		dev->device.numa_node = rte_socket_id();
+
+	IAVF_BE_LOG(INFO, "Creating iavfbe ethdev on numa socket %u\n",
+			dev->device.numa_node);
+
+	eth_dev = rte_eth_vdev_allocate(dev, sizeof(*adapter));
+	if (!eth_dev) {
+		IAVF_BE_LOG(ERR, "fail to allocate eth_dev\n");
+		return -ENOMEM;
+	}
+
+	ret = iavfbe_init_adapter(eth_dev, emu_dev, addr);
+	if (ret) {
+		rte_eth_dev_release_port(eth_dev);
+		return ret;
+	}
+	adapter = IAVFBE_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+
+	/* Initializing default address with devarg */
+	eth_dev->data->mac_addrs =
+		rte_zmalloc_socket(rte_vdev_device_name(dev),
+				   sizeof(struct rte_ether_addr), 0,
+				   dev->device.numa_node);
+	if (eth_dev->data->mac_addrs == NULL) {
+		IAVF_BE_LOG(ERR, "fail to allocate eth_addr\n");
+		iavfbe_destroy_adapter(eth_dev);
+		rte_eth_dev_release_port(eth_dev);
+		return -ENOMEM;
+	}
+	rte_ether_addr_copy(addr, &eth_dev->data->mac_addrs[0]);
+
+	eth_dev->dev_ops = &iavfbe_eth_dev_ops;
+
+	eth_dev->data->dev_link = iavfbe_link;
+	eth_dev->data->numa_node = dev->device.numa_node;
+
+	rte_eth_dev_probing_finish(eth_dev);
+
+	return ret;
+}
+
+static int
+rte_pmd_iavfbe_probe(struct rte_vdev_device *dev)
+{
+	struct rte_kvargs *kvlist = NULL;
+	struct rte_emudev *emu_dev;
+	const char *emudev_name;
+	struct rte_ether_addr ether_addr;
+	int ret = 0;
+
+	if (!dev)
+		return -EINVAL;
+
+	IAVF_BE_LOG(INFO, "Initializing pmd_iavfbe for %s\n",
+		    dev->device.name);
+
+	kvlist = rte_kvargs_parse(rte_vdev_device_args(dev), iavfbe_valid_arg);
+	if (kvlist == NULL)
+		return -1;
+
+	if (rte_kvargs_count(kvlist, AVFBE_EDEV_ID_ARG) == 1) {
+		ret = rte_kvargs_process(kvlist, AVFBE_EDEV_ID_ARG,
+					 &save_str, &emudev_name);
+		if (ret < 0)
+			goto free_kvlist;
+	} else {
+		ret = -EINVAL;
+		goto free_kvlist;
+	}
+
+	if (rte_kvargs_count(kvlist, AVFBE_MAC_ARG) == 1) {
+		ret = rte_kvargs_process(kvlist, AVFBE_MAC_ARG,
+					 &set_mac, &ether_addr);
+		if (ret < 0)
+			goto free_kvlist;
+	} else
+		rte_eth_random_addr(&ether_addr.addr_bytes[0]);
+
+	emu_dev = rte_emudev_allocated(emudev_name);
+	if (!emu_dev || strcmp(emu_dev->dev_info.dev_type, RTE_IAVF_EMUDEV_TYPE)) {
+		IAVF_BE_LOG(ERR, "emulated device is not an iavf device\n");
+		ret = -EINVAL;
+		goto free_kvlist;
+	}
+
+	ret = eth_dev_iavfbe_create(dev, emu_dev, &ether_addr);
+
+free_kvlist:
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+static int
+rte_pmd_iavfbe_remove(struct rte_vdev_device *dev)
+{
+	const char *name;
+	struct rte_eth_dev *eth_dev = NULL;
+
+	name = rte_vdev_device_name(dev);
+
+	eth_dev = rte_eth_dev_allocated(name);
+	if (!eth_dev)
+		return 0;
+
+	iavfbe_dev_close(eth_dev);
+
+	return 0;
+}
+
+static struct rte_vdev_driver pmd_iavfbe_drv = {
+	.probe = rte_pmd_iavfbe_probe,
+	.remove = rte_pmd_iavfbe_remove,
+};
+
+RTE_PMD_REGISTER_VDEV(net_iavfbe, pmd_iavfbe_drv);
+RTE_PMD_REGISTER_ALIAS(net_iavfbe, eth_iavfbe);
+RTE_PMD_REGISTER_PARAM_STRING(net_iavfbe,
+			      AVFBE_EDEV_ID_ARG "=<str>"
+			      AVFBE_MAC_ARG "=xx:xx:xx:xx:xx:xx");
+
+RTE_INIT(iavfbe_init_log)
+{
+	iavfbe_logtype = rte_log_register("pmd.net.iavfbe");
+	if (iavfbe_logtype >= 0)
+		rte_log_set_level(iavfbe_logtype, RTE_LOG_INFO);
+}
diff --git a/drivers/net/iavf_be/meson.build b/drivers/net/iavf_be/meson.build
new file mode 100644
index 0000000000..24c625fa18
--- /dev/null
+++ b/drivers/net/iavf_be/meson.build
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2020 Intel Corporation
+
+cflags += ['-Wno-strict-aliasing']
+
+includes += include_directories('../../common/iavf')
+
+deps += ['bus_vdev', 'common_iavf', 'vfio_user', 'emu_iavf']
+
+sources = files(
+	'iavf_be_ethdev.c',
+)
diff --git a/drivers/net/iavf_be/version.map b/drivers/net/iavf_be/version.map
new file mode 100644
index 0000000000..4a76d1d52d
--- /dev/null
+++ b/drivers/net/iavf_be/version.map
@@ -0,0 +1,3 @@
+DPDK_21 {
+	local: *;
+};
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 29f4777500..4676ef4b3e 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -24,6 +24,7 @@ drivers = ['af_packet',
 	'hinic',
 	'hns3',
 	'iavf',
+	'iavf_be',
 	'ice',
 	'igc',
 	'ipn3ke',
-- 
2.21.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH v2 2/6] net/iavf_be: control queue enabling
  2021-01-07  7:14 ` [dpdk-dev] [PATCH v2 0/6] introduce iavf backend driver Jingjing Wu
  2021-01-07  7:14   ` [dpdk-dev] [PATCH v2 1/6] net/iavf_be: " Jingjing Wu
@ 2021-01-07  7:14   ` Jingjing Wu
  2021-01-07  7:15   ` [dpdk-dev] [PATCH v2 3/6] net/iavf_be: virtchnl messages process Jingjing Wu
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Jingjing Wu @ 2021-01-07  7:14 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, chenbo.xia, xiuchun.lu

1. Set up control rx/tx queues.
2. Implement emu device callback functions.
3. Enable receiving/sending messages through the control queue.
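
As a reading aid, the admin queue index bookkeeping added in this patch
reduces to the sketch below. The helper names are illustrative; the
real code keeps next_to_use/next_to_clean in struct iavfbe_control_q
and reads the front end's tail through IAVFBE_READ_32():

    #include <stdbool.h>
    #include <stdint.h>

    /* Advance a ring index one slot, wrapping at the ring length, as
     * iavfbe_send_msg_to_vf() and iavfbe_read_msg_from_vf() do.
     */
    static inline uint16_t
    cq_next(uint16_t idx, uint16_t ring_len)
    {
            return (uint16_t)(idx + 1 == ring_len ? 0 : idx + 1);
    }

    /* The backend has ASQ work only while its clean index has not
     * caught up with the tail written by the front end.
     */
    static inline bool
    asq_has_work(uint16_t next_to_clean, uint16_t tail)
    {
            return next_to_clean != tail;
    }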

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
Signed-off-by: Xiuchun Lu <xiuchun.lu@intel.com>
---
 drivers/net/iavf_be/iavf_be.h        |  38 ++++
 drivers/net/iavf_be/iavf_be_ethdev.c | 321 ++++++++++++++++++++++++++-
 drivers/net/iavf_be/iavf_be_vchnl.c  | 287 ++++++++++++++++++++++++
 drivers/net/iavf_be/meson.build      |   1 +
 4 files changed, 645 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/iavf_be/iavf_be_vchnl.c

diff --git a/drivers/net/iavf_be/iavf_be.h b/drivers/net/iavf_be/iavf_be.h
index 956955786a..c182d9558a 100644
--- a/drivers/net/iavf_be/iavf_be.h
+++ b/drivers/net/iavf_be/iavf_be.h
@@ -5,13 +5,48 @@
 #ifndef _IAVF_BE_H_
 #define _IAVF_BE_H_
 
+#define IAVF_BE_AQ_BUF_SZ            4096
+#define IAVF_BE_32_TO_64(hi, lo) ((((uint64_t)(hi)) << 32) + (lo))
+
+#define IAVFBE_READ_32(addr)        \
+	rte_le_to_cpu_32(*(volatile uint32_t *)(addr))
+#define IAVFBE_WRITE_32(addr, val)  \
+	(*(volatile uint32_t *)(addr) = rte_cpu_to_le_32(val))
+
+struct iavfbe_control_q {
+	rte_spinlock_t access_lock;
+	struct rte_emudev_q_info q_info;
+	struct iavf_aq_desc *ring;
+	uint64_t p_ring_addr;	/* Guest physical address of the ring */
+	uint16_t len;
+	volatile uint8_t *tail;
+	volatile uint8_t *head;
+
+	uint16_t next_to_use;
+	uint16_t next_to_clean;
+
+	uint32_t cmd_retval; /* return value of the cmd response from PF */
+	uint8_t *aq_req;     /* buffer to store the adminq request from VF, NULL if arq */
+};
+
+/* Control queue structure of iavf */
+struct iavfbe_controlq_info {
+	struct iavfbe_control_q asq;
+	struct iavfbe_control_q arq;
+};
+
 /* Structure to store private data for a backend instance */
 struct iavfbe_adapter {
 	struct rte_eth_dev *eth_dev;
 	struct rte_emudev *emu_dev;
 	uint16_t edev_id;  /* Emulated Device ID */
 	struct rte_emudev_info dev_info;
+	struct rte_iavf_emu_mem *mem_table;
 
+	struct iavfbe_controlq_info cq_info; /* Control/Admin Queue info*/
+	/* Adminq handle thread info */
+	volatile int thread_status;
+	pthread_t thread_id;
 	uint16_t nb_qps;
 	bool link_up;
 	int cq_irqfd;
@@ -31,6 +66,9 @@ struct iavfbe_adapter {
 	((struct iavfbe_adapter *)adapter)
 
 int iavfbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+void iavfbe_handle_virtchnl_msg(void *arg);
+void iavfbe_reset_asq(struct iavfbe_adapter *adapter, bool lock);
+void iavfbe_reset_arq(struct iavfbe_adapter *adapter, bool lock);
 
 extern int iavfbe_logtype;
 #define IAVF_BE_LOG(level, fmt, args...) \
diff --git a/drivers/net/iavf_be/iavf_be_ethdev.c b/drivers/net/iavf_be/iavf_be_ethdev.c
index 3d5ca34ec0..2ab66f889d 100644
--- a/drivers/net/iavf_be/iavf_be_ethdev.c
+++ b/drivers/net/iavf_be/iavf_be_ethdev.c
@@ -14,6 +14,7 @@
 #include <rte_iavf_emu.h>
 
 #include <iavf_type.h>
+#include <virtchnl.h>
 #include "iavf_be.h"
 
 #define AVFBE_EDEV_ID_ARG "emu"
@@ -33,6 +34,12 @@ static struct rte_eth_link iavfbe_link = {
 	.link_status = ETH_LINK_DOWN
 };
 
+static int iavfbe_new_device(struct rte_emudev *dev);
+static void iavfbe_destroy_device(struct rte_emudev *dev);
+static int iavfbe_update_device(struct rte_emudev *dev);
+static int iavfbe_lock_dp(struct rte_emudev *dev, int lock);
+static int iavfbe_reset_device(struct rte_emudev *dev);
+
 static int iavfbe_dev_configure(struct rte_eth_dev *dev);
 static int iavfbe_dev_close(struct rte_eth_dev *dev);
 static int iavfbe_dev_start(struct rte_eth_dev *dev);
@@ -41,6 +48,16 @@ static int iavfbe_dev_info_get(struct rte_eth_dev *dev,
 				struct rte_eth_dev_info *dev_info);
 static void iavfbe_destroy_adapter(struct rte_eth_dev *dev);
 
+struct rte_iavf_emu_notify_ops iavfbe_notify_ops = {
+	.device_ready = iavfbe_new_device,
+	.device_destroy = iavfbe_destroy_device,
+	.update_status = iavfbe_update_device,
+	.device_start = NULL,
+	.device_stop = NULL,
+	.lock_dp = iavfbe_lock_dp,
+	.reset_device = iavfbe_reset_device,
+};
+
 static const struct eth_dev_ops iavfbe_eth_dev_ops = {
 	.dev_configure              = iavfbe_dev_configure,
 	.dev_close                  = iavfbe_dev_close,
@@ -51,7 +68,8 @@ static const struct eth_dev_ops iavfbe_eth_dev_ops = {
 };
 
 static int
-iavfbe_dev_info_get(struct rte_eth_dev *dev  __rte_unused,  struct rte_eth_dev_info *dev_info)
+iavfbe_dev_info_get(struct rte_eth_dev *dev  __rte_unused,
+		    struct rte_eth_dev_info *dev_info)
 {
 	dev_info->max_rx_queues = 0;
 	dev_info->max_tx_queues = 0;
@@ -61,7 +79,6 @@ iavfbe_dev_info_get(struct rte_eth_dev *dev  __rte_unused,  struct rte_eth_dev_i
 	return 0;
 }
 
-
 static int
 iavfbe_dev_configure(struct rte_eth_dev *dev __rte_unused)
 {
@@ -122,6 +139,241 @@ iavfbe_dev_close(struct rte_eth_dev *dev)
 	return 0;
 }
 
+/* Called when emulation device is ready */
+static int
+iavfbe_new_device(struct rte_emudev *dev)
+{
+	struct iavfbe_adapter *adapter =
+		(struct iavfbe_adapter *)dev->backend_priv;
+	struct rte_iavf_emu_mem **mem = &(adapter->mem_table);
+	struct rte_emudev_irq_info irq_info;
+	struct rte_emudev_q_info q_info;
+	struct rte_emudev_db_info db_info;
+	uint64_t addr;
+	uint16_t i;
+
+	if (rte_emudev_get_mem_table(dev->dev_id, (void **)mem)) {
+		IAVF_BE_LOG(ERR, "Can not get mem table\n");
+		return -1;
+	}
+
+	for (i = 0; i < RTE_IAVF_EMU_ADMINQ_NUM; i++) {
+		if (rte_emudev_get_queue_info(dev->dev_id, i, &q_info)) {
+			IAVF_BE_LOG(ERR,
+				"Can not get queue info of qid %d\n", i);
+			return -1;
+		}
+		/*
+		 * Only doorbell of LANQ is viable when device ready.
+		 * Other info of LANQ is acquired through virtchnl.
+		 *
+		 * AdminQ's irq and doorbell will both be ready in this stage.
+		 */
+		if (rte_emudev_get_db_info(dev->dev_id, q_info.doorbell_id,
+					   &db_info)) {
+			IAVF_BE_LOG(ERR,
+				"Can not get doorbell info of qid %d\n", i);
+			return -1;
+		}
+
+		/* Only support memory based doorbell for now */
+		if (db_info.flag & RTE_EMUDEV_DB_FD ||
+			db_info.data.mem.size != 4)
+			return -1;
+
+		if (i == RTE_IAVF_EMU_ADMINQ_TXQ) {
+			adapter->cq_info.asq.tail =
+				(uint8_t *)db_info.data.mem.base;
+		} else {
+			adapter->cq_info.arq.tail =
+				(uint8_t *)db_info.data.mem.base;
+
+			if (rte_emudev_get_irq_info(dev->dev_id,
+				q_info.irq_vector, &irq_info)) {
+				IAVF_BE_LOG(ERR,
+					"Can not get irq info of qid %d\n", i);
+				return -1;
+			}
+
+			adapter->cq_irqfd = irq_info.eventfd;
+		}
+	}
+
+	/* Lan queue info would be set when queue setup */
+
+	if (rte_emudev_get_attr(dev->dev_id, RTE_IAVF_EMU_ATTR_ASQ_HEAD,
+		(rte_emudev_attr_t)&addr)) {
+		IAVF_BE_LOG(ERR, "Can not get asq head\n");
+		return -1;
+	}
+	adapter->cq_info.asq.head = (uint8_t *)(uintptr_t)addr;
+
+	if (rte_emudev_get_attr(dev->dev_id, RTE_IAVF_EMU_ATTR_ARQ_HEAD,
+		(rte_emudev_attr_t)&addr)) {
+		IAVF_BE_LOG(ERR, "Can not get arq head\n");
+		return -1;
+	}
+	adapter->cq_info.arq.head = (uint8_t *)(uintptr_t)addr;
+
+	iavfbe_reset_asq(adapter, false);
+	iavfbe_reset_arq(adapter, false);
+
+	if (rte_emudev_get_attr(dev->dev_id, RTE_IAVF_EMU_ATTR_RESET,
+		(rte_emudev_attr_t)&addr)) {
+		IAVF_BE_LOG(ERR, "Can not get reset status\n");
+		return -1;
+	}
+	adapter->reset = (uint8_t *)(uintptr_t)addr;
+	IAVFBE_WRITE_32(adapter->reset, RTE_IAVF_EMU_RESET_COMPLETED);
+	adapter->started = 1;
+	IAVF_BE_LOG(DEBUG, "NEW DEVICE: memtable %p\n", adapter->mem_table);
+
+	return 0;
+}
+
+static void
+iavfbe_destroy_device(struct rte_emudev *dev)
+{
+	struct iavfbe_adapter *adapter =
+		(struct iavfbe_adapter *)dev->backend_priv;
+
+	/* TODO: Disable all lan queues */
+
+	/* update link status */
+	adapter->link_up = false;
+	iavfbe_dev_link_update(adapter->eth_dev, 0);
+}
+
+static int
+iavfbe_update_device(struct rte_emudev *dev)
+{
+	struct iavfbe_adapter *adapter =
+		(struct iavfbe_adapter *)dev->backend_priv;
+	struct rte_iavf_emu_mem **mem = &(adapter->mem_table);
+	struct rte_emudev_q_info q_info;
+	struct rte_emudev_irq_info irq_info;
+
+	if (rte_emudev_get_mem_table(dev->dev_id, (void **)mem)) {
+		IAVF_BE_LOG(ERR, "Can not get mem table\n");
+		return -1;
+	}
+
+	if (rte_emudev_get_queue_info(dev->dev_id,
+		RTE_IAVF_EMU_ADMINQ_RXQ, &q_info)) {
+		IAVF_BE_LOG(ERR, "Can not get queue info of qid %d\n",
+			RTE_IAVF_EMU_ADMINQ_RXQ);
+		return -1;
+	}
+
+	if (rte_emudev_get_irq_info(dev->dev_id, q_info.irq_vector, &irq_info)) {
+		IAVF_BE_LOG(ERR, "Can not get irq info of qid %d\n",
+			RTE_IAVF_EMU_ADMINQ_RXQ);
+		return -1;
+	}
+
+	/* TODO: Lan queue info update */
+	adapter->cq_irqfd = irq_info.eventfd;
+	rte_atomic32_set(&adapter->irq_enable, irq_info.enable);
+
+	return 0;
+}
+
+static int
+iavfbe_lock_dp(struct rte_emudev *dev, int lock)
+{
+	struct iavfbe_adapter *adapter =
+		(struct iavfbe_adapter *)dev->backend_priv;
+
+	/* Acquire/Release lock of control queue and lan queue */
+
+	if (lock) {
+		/* TODO: Lan queue lock */
+		rte_spinlock_lock(&adapter->cq_info.asq.access_lock);
+		rte_spinlock_lock(&adapter->cq_info.arq.access_lock);
+	} else {
+		/* TODO: Lan queue unlock */
+		rte_spinlock_unlock(&adapter->cq_info.asq.access_lock);
+		rte_spinlock_unlock(&adapter->cq_info.arq.access_lock);
+	}
+
+	return 0;
+}
+
+void
+iavfbe_reset_asq(struct iavfbe_adapter *adapter, bool lock)
+{
+	struct iavfbe_control_q *q;
+
+	q = &adapter->cq_info.asq;
+
+	if (lock)
+		rte_spinlock_lock(&q->access_lock);
+
+	if (q->aq_req)
+		memset(q->aq_req, 0, IAVF_BE_AQ_BUF_SZ);
+	memset(&q->q_info, 0, sizeof(q->q_info));
+	q->ring = NULL;
+	q->p_ring_addr = 0;
+	q->len = 0;
+	q->next_to_clean = 0;
+	q->cmd_retval = 0;
+	if (q->head)
+		IAVFBE_WRITE_32(q->head, 0);
+
+	/* Do not reset tail as it is initialized by the FE */
+
+	if (lock)
+		rte_spinlock_unlock(&q->access_lock);
+
+}
+
+void
+iavfbe_reset_arq(struct iavfbe_adapter *adapter, bool lock)
+{
+	struct iavfbe_control_q *q;
+
+	q = &adapter->cq_info.arq;
+
+	if (lock)
+		rte_spinlock_lock(&q->access_lock);
+
+	memset(&q->q_info, 0, sizeof(q->q_info));
+	q->ring = NULL;
+	q->p_ring_addr = 0;
+	q->len = 0;
+	q->next_to_use = 0;
+	if (q->head)
+		IAVFBE_WRITE_32(q->head, 0);
+
+	/* Do not reset tail as it is initialized by the FE */
+
+	if (lock)
+		rte_spinlock_unlock(&q->access_lock);
+
+}
+
+static int
+iavfbe_reset_device(struct rte_emudev *dev)
+{
+	struct iavfbe_adapter *adapter =
+		(struct iavfbe_adapter *)dev->backend_priv;
+
+	/* Lock has been acquired by lock_dp */
+	/* TODO: reset all queues */
+	iavfbe_reset_asq(adapter, false);
+	iavfbe_reset_arq(adapter, false);
+
+	adapter->link_up = 0;
+	adapter->unicast_promisc = true;
+	adapter->multicast_promisc = true;
+	adapter->vlan_filter = false;
+	adapter->vlan_strip = false;
+	adapter->cq_irqfd = -1;
+	adapter->adapter_stopped = 1;
+
+	return 0;
+}
+
 static inline int
 save_str(const char *key __rte_unused, const char *value,
 	void *extra_args)
@@ -146,6 +398,34 @@ set_mac(const char *key __rte_unused, const char *value, void *extra_args)
 	return 0;
 }
 
+static int
+iavfbe_driver_admq_session_start(struct rte_eth_dev *eth_dev)
+{
+	struct iavfbe_adapter *adapter =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+	int ret;
+
+	adapter->thread_status = 1;
+	ret = pthread_create(&adapter->thread_id, NULL,
+			     (void *)iavfbe_handle_virtchnl_msg,
+			     eth_dev);
+	if (ret) {
+		IAVF_BE_LOG(ERR, "Can't create a thread\n");
+		adapter->thread_status = 0;
+	}
+	return ret;
+}
+
+static void
+iavfbe_driver_admq_session_stop(struct rte_eth_dev *eth_dev)
+{
+	struct iavfbe_adapter *adapter =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
+
+	adapter->thread_status = 0;
+	pthread_join(adapter->thread_id, NULL);
+}
+
 static int
 iavfbe_init_adapter(struct rte_eth_dev *eth_dev,
 		    struct rte_emudev *emu_dev,
@@ -177,8 +457,44 @@ iavfbe_init_adapter(struct rte_eth_dev *eth_dev,
 		goto err_info;
 
 	adapter->nb_qps = conf->qp_num;
+
+	adapter->cq_info.asq.aq_req =
+		rte_zmalloc_socket("iavfbe", IAVF_BE_AQ_BUF_SZ, 0,
+				   eth_dev->device->numa_node);
+	if (!adapter->cq_info.asq.aq_req) {
+		IAVF_BE_LOG(ERR, "Fail to allocate buffer for"
+				 " control queue request");
+		ret = -ENOMEM;
+		goto err_aq;
+	}
+
+	/* Init lock */
+	rte_spinlock_init(&adapter->cq_info.asq.access_lock);
+	rte_spinlock_init(&adapter->cq_info.arq.access_lock);
+
+	adapter->unicast_promisc = true;
+	adapter->multicast_promisc = true;
+	adapter->vlan_filter = false;
+	adapter->vlan_strip = false;
+
+	/* No need to map regions or init the admin queue here; that is
+	 * done when the emu device is ready.
+	 */
+
+	/* Currently RSS is not necessary for device emulator */
+
+	/* Subscribe event from emulated avf device */
+	rte_emudev_subscribe_event(emu_dev->dev_id, &iavfbe_notify_ops);
+
+	/* Create a thread for virtchnl command processing */
+	ret = iavfbe_driver_admq_session_start(eth_dev);
+	if (ret) {
+		IAVF_BE_LOG(ERR, "iavfbe driver adminq session start failed");
+		goto err_thread;
+	}
+
 	return 0;
 
+err_thread:
+err_aq:
 err_info:
 	rte_free(conf);
 	return ret;
@@ -190,6 +506,7 @@ iavfbe_destroy_adapter(struct rte_eth_dev *dev)
 	struct iavfbe_adapter *adapter =
 		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 
+	iavfbe_driver_admq_session_stop(dev);
 	if (adapter->emu_dev) {
 		adapter->emu_dev->backend_priv = NULL;
 		rte_wmb();
diff --git a/drivers/net/iavf_be/iavf_be_vchnl.c b/drivers/net/iavf_be/iavf_be_vchnl.c
new file mode 100644
index 0000000000..56b8a485a5
--- /dev/null
+++ b/drivers/net/iavf_be/iavf_be_vchnl.c
@@ -0,0 +1,287 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <unistd.h>
+#include <inttypes.h>
+#include <sys/mman.h>
+#include <sys/eventfd.h>
+
+#include <rte_kvargs.h>
+#include <rte_debug.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev_driver.h>
+#include <rte_dev.h>
+#include <rte_emudev.h>
+#include <rte_iavf_emu.h>
+
+#include <iavf_type.h>
+#include <virtchnl.h>
+
+#include "iavf_be.h"
+
+static inline void
+iavfbe_notify(struct iavfbe_adapter *adapter)
+{
+	if (adapter->cq_irqfd == -1 ||
+	    !rte_atomic32_read(&adapter->irq_enable))
+		return;
+
+	if (eventfd_write(adapter->cq_irqfd, (eventfd_t)1) < 0)
+		IAVF_BE_LOG(ERR, "failed to notify front-end: %s",
+					strerror(errno));
+}
+
+__rte_unused static int
+iavfbe_send_msg_to_vf(struct iavfbe_adapter *adapter,
+			uint32_t opcode,
+			uint32_t retval,
+			uint8_t *msg,
+			uint16_t msglen)
+{
+	struct iavfbe_control_q *arq = &adapter->cq_info.arq;
+	struct iavf_aq_desc *desc;
+	enum iavf_status status = IAVF_SUCCESS;
+	uint32_t dma_buff_low, dma_buff_high;
+	uint16_t ntu;
+
+	if (msglen > IAVF_BE_AQ_BUF_SZ) {
+		IAVF_BE_LOG(ERR, "ARQ: msg is too long: %u\n", msglen);
+		status = IAVF_ERR_INVALID_SIZE;
+		goto arq_send_error;
+	}
+
+	rte_spinlock_lock(&arq->access_lock);
+
+	ntu = arq->next_to_use;
+	if (ntu == IAVFBE_READ_32(arq->tail)) {
+		IAVF_BE_LOG(ERR, "ARQ: No free desc\n");
+		status = IAVF_ERR_QUEUE_EMPTY;
+		goto arq_send_error;
+	}
+	desc = &arq->ring[ntu];
+	dma_buff_low = LE32_TO_CPU(desc->params.external.addr_low);
+	dma_buff_high = LE32_TO_CPU(desc->params.external.addr_high);
+
+	/* Prepare descriptor */
+	memset((void *)desc, 0, sizeof(struct iavf_aq_desc));
+	desc->opcode = CPU_TO_LE16(iavf_aqc_opc_send_msg_to_vf);
+
+	desc->flags = CPU_TO_LE16(IAVF_AQ_FLAG_SI);
+	desc->cookie_high = CPU_TO_LE32(opcode);
+	desc->cookie_low = CPU_TO_LE32(retval);
+
+	if (msg && msglen) {
+		void *buf_va;
+		uint64_t buf_sz = msglen;
+
+		desc->flags |= CPU_TO_LE16((uint16_t)(IAVF_AQ_FLAG_BUF
+						| IAVF_AQ_FLAG_RD));
+		if (msglen > IAVF_AQ_LARGE_BUF)
+			desc->flags |= CPU_TO_LE16((uint16_t)IAVF_AQ_FLAG_LB);
+		desc->datalen = CPU_TO_LE16(msglen);
+
+		buf_va = (void *)(uintptr_t)rte_iavf_emu_get_dma_vaddr(
+			adapter->mem_table,
+			IAVF_BE_32_TO_64(dma_buff_high, dma_buff_low),
+			&buf_sz);
+		if (buf_sz != msglen)
+			goto arq_send_error;
+
+		rte_memcpy(buf_va, msg, msglen);
+	}
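+	/* Order the descriptor/buffer writes before the head update below */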
+	rte_wmb();
+
+	ntu++;
+	if (ntu == arq->len)
+		ntu = 0;
+	arq->next_to_use = ntu;
+	IAVFBE_WRITE_32(arq->head, arq->next_to_use);
+
+	iavfbe_notify(adapter);
+
+arq_send_error:
+	rte_spinlock_unlock(&arq->access_lock);
+	return status;
+}
+
+/* Read data in admin queue to get msg from vf driver */
+static enum iavf_status
+iavfbe_read_msg_from_vf(struct iavfbe_adapter *adapter,
+			struct iavf_arq_event_info *event)
+{
+	struct iavfbe_control_q *asq = &adapter->cq_info.asq;
+	struct iavf_aq_desc *desc;
+	enum virtchnl_ops opcode;
+	uint16_t ntc;
+	uint16_t datalen;
+	uint16_t flags;
+	int ret = IAVF_SUCCESS;
+
+	rte_spinlock_lock(&asq->access_lock);
+
+	ntc = asq->next_to_clean;
+
+	/* pre-clean the event info */
+	memset(&event->desc, 0, sizeof(event->desc));
+	event->msg_len = 0;
+
+	if (ntc == IAVFBE_READ_32(asq->tail)) {
+		/* nothing to do  */
+		ret = IAVF_ERR_ADMIN_QUEUE_NO_WORK;
+		goto end;
+	}
+	/* now get the next descriptor */
+	desc = &asq->ring[ntc];
+	rte_memcpy(&event->desc, desc, sizeof(struct iavf_aq_desc));
+	flags = LE16_TO_CPU(desc->flags);
+	datalen = LE16_TO_CPU(desc->datalen);
+	if (flags & IAVF_AQ_FLAG_RD) {
+		if (datalen > event->buf_len) {
+			ret = IAVF_ERR_BUF_TOO_SHORT;
+			goto end;
+		} else {
+			uint32_t reg1 = 0;
+			uint32_t reg2 = 0;
+			void *buf_va;
+			uint64_t buf_sz = datalen;
+
+			event->msg_len = datalen;
+			reg1 = LE32_TO_CPU(desc->params.external.addr_low);
+			reg2 = LE32_TO_CPU(desc->params.external.addr_high);
+			buf_va = (void *)(uintptr_t)rte_iavf_emu_get_dma_vaddr(
+					adapter->mem_table,
+					IAVF_BE_32_TO_64(reg2, reg1), &buf_sz);
+			rte_memcpy(event->msg_buf, buf_va, event->msg_len);
+		}
+	}
+
+	ntc++;
+	if (ntc == asq->len)
+		ntc = 0;
+	asq->next_to_clean = ntc;
+
+	/* Write back to head and Desc with Flags.DD and Flags.CMP */
+	desc->flags |= CPU_TO_LE16(IAVF_AQ_FLAG_DD | IAVF_AQ_FLAG_CMP);
+	rte_wmb();
+
+	IAVFBE_WRITE_32(asq->head, asq->next_to_clean);
+	opcode = (enum virtchnl_ops)rte_le_to_cpu_32(event->desc.cookie_high);
+	asq->cmd_retval = (enum virtchnl_status_code)
+				rte_le_to_cpu_32(event->desc.cookie_low);
+
+	IAVF_BE_LOG(DEBUG, "AQ from pf carries opcode %u,virtchnl_op %u retval %d",
+		    event->desc.opcode, opcode, asq->cmd_retval);
+end:
+	rte_spinlock_unlock(&asq->access_lock);
+
+	return ret;
+}
+
+static inline int
+iavfbe_control_queue_remap(struct iavfbe_adapter *adapter,
+			  struct iavfbe_control_q *asq,
+			  struct iavfbe_control_q *arq)
+{
+	struct rte_emudev_q_info *asq_info;
+	struct rte_emudev_q_info *arq_info;
+	uint64_t len;
+	int ret;
+
+	asq_info = &adapter->cq_info.asq.q_info;
+	arq_info = &adapter->cq_info.arq.q_info;
+
+	ret = rte_emudev_get_queue_info(adapter->edev_id,
+				     RTE_IAVF_EMU_ADMINQ_TXQ,
+				     asq_info);
+	if (ret)
+		return IAVF_ERR_NOT_READY;
+
+	ret = rte_emudev_get_queue_info(adapter->edev_id,
+					RTE_IAVF_EMU_ADMINQ_RXQ,
+					arq_info);
+	if (ret)
+		return IAVF_ERR_NOT_READY;
+
+	rte_spinlock_lock(&asq->access_lock);
+
+	asq->p_ring_addr = asq_info->base;
+	asq->len = asq_info->size;
+	len = asq->len * sizeof(struct iavf_aq_desc);
+	asq->ring = (void *)(uintptr_t)rte_iavf_emu_get_dma_vaddr(
+					adapter->mem_table,
+					asq->p_ring_addr, &len);
+	rte_spinlock_unlock(&asq->access_lock);
+
+	rte_spinlock_lock(&arq->access_lock);
+	arq->p_ring_addr = arq_info->base;
+	arq->len = arq_info->size;
+	len = arq->len * sizeof(struct iavf_aq_desc);
+	arq->ring = (void *)(uintptr_t)rte_iavf_emu_get_dma_vaddr(
+					adapter->mem_table,
+					arq->p_ring_addr, &len);
+	rte_spinlock_unlock(&arq->access_lock);
+
+	return 0;
+}
+
+void
+iavfbe_handle_virtchnl_msg(void *arg)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)arg;
+	struct iavfbe_adapter *adapter =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct iavfbe_control_q *arq = &adapter->cq_info.arq;
+	struct iavfbe_control_q *asq = &adapter->cq_info.asq;
+	struct iavf_arq_event_info info;
+	uint16_t aq_opc;
+	int ret;
+
+	info.buf_len = IAVF_BE_AQ_BUF_SZ;
+	info.msg_buf = adapter->cq_info.asq.aq_req;
+
+	while (adapter->thread_status) {
+		rte_delay_us_sleep(3000); /* sleep for 3 ms */
+		/* Check if the control queue is initialized */
+		if (adapter->started == 0)
+			continue;
+
+		/* remap every time */
+		ret = iavfbe_control_queue_remap(adapter, asq, arq);
+		if (ret ||
+		    !(asq->p_ring_addr && asq->len && asq->ring) ||
+		    !(arq->p_ring_addr && arq->len && arq->ring))
+			continue;
+
+		if (asq->next_to_clean == IAVFBE_READ_32(asq->tail))
+			/* nothing to do  */
+			continue;
+
+		ret = iavfbe_read_msg_from_vf(adapter, &info);
+		if (ret != IAVF_SUCCESS) {
+			IAVF_BE_LOG(DEBUG, "Failed to read msg "
+				    "from AdminQ");
+			break;
+		}
+		aq_opc = rte_le_to_cpu_16(info.desc.opcode);
+
+		switch (aq_opc) {
+		case iavf_aqc_opc_send_msg_to_pf:
+			/* Process msg from VF BE*/
+			break;
+		case iavf_aqc_opc_queue_shutdown:
+			iavfbe_reset_arq(adapter, true);
+			break;
+		case 0:
+			IAVF_BE_LOG(DEBUG, "NULL Request ignored");
+			break;
+		default:
+			IAVF_BE_LOG(ERR, "Unexpected Request 0x%04x ignored",
+				    aq_opc);
+			break;
+		}
+	}
+	pthread_exit(0);
+}
diff --git a/drivers/net/iavf_be/meson.build b/drivers/net/iavf_be/meson.build
index 24c625fa18..be13a2e492 100644
--- a/drivers/net/iavf_be/meson.build
+++ b/drivers/net/iavf_be/meson.build
@@ -9,4 +9,5 @@ deps += ['bus_vdev', 'common_iavf', 'vfio_user', 'emu_iavf']
 
 sources = files(
 	'iavf_be_ethdev.c',
+	'iavf_be_vchnl.c',
 )
-- 
2.21.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH v2 3/6] net/iavf_be: virtchnl messages process
  2021-01-07  7:14 ` [dpdk-dev] [PATCH v2 0/6] introduce iavf backend driver Jingjing Wu
  2021-01-07  7:14   ` [dpdk-dev] [PATCH v2 1/6] net/iavf_be: " Jingjing Wu
  2021-01-07  7:14   ` [dpdk-dev] [PATCH v2 2/6] net/iavf_be: control queue enabling Jingjing Wu
@ 2021-01-07  7:15   ` Jingjing Wu
  2021-01-07  7:15   ` [dpdk-dev] [PATCH v2 4/6] net/iavf_be: add Rx Tx burst support Jingjing Wu
                     ` (2 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Jingjing Wu @ 2021-01-07  7:15 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, chenbo.xia, xiuchun.lu

1. Process virtchnl messages from the Front End.
2. Implement ethdev ops for queue setup.
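
The handlers added here share one guest-buffer access pattern, shown
condensed below. iavfbe_guest_buf() is a hypothetical helper, and the
sketch assumes rte_iavf_emu_get_dma_vaddr() from the dependent emudev
series behaves as this patch uses it (returns a host virtual address
and clips *len at a memory region boundary):

    static void *
    iavfbe_guest_buf(struct iavfbe_adapter *adapter,
                     uint32_t addr_high, uint32_t addr_low, uint64_t want)
    {
            uint64_t len = want;
            void *va;

            /* Rebuild the 64-bit guest DMA address from the two 32-bit
             * descriptor words, then translate via the mem table.
             */
            va = (void *)(uintptr_t)rte_iavf_emu_get_dma_vaddr(
                            adapter->mem_table,
                            IAVF_BE_32_TO_64(addr_high, addr_low), &len);

            /* A short mapping means the buffer crosses a region
             * boundary; refuse it rather than copy partially.
             */
            return (len == want) ? va : NULL;
    }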

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Xiuchun Lu <xiuchun.lu@intel.com>
---
 drivers/net/iavf_be/iavf_be.h        |  32 ++
 drivers/net/iavf_be/iavf_be_ethdev.c | 339 ++++++++++-
 drivers/net/iavf_be/iavf_be_rxtx.c   | 164 ++++++
 drivers/net/iavf_be/iavf_be_rxtx.h   | 105 ++++
 drivers/net/iavf_be/iavf_be_vchnl.c  | 826 ++++++++++++++++++++++++++-
 drivers/net/iavf_be/meson.build      |   1 +
 6 files changed, 1452 insertions(+), 15 deletions(-)
 create mode 100644 drivers/net/iavf_be/iavf_be_rxtx.c
 create mode 100644 drivers/net/iavf_be/iavf_be_rxtx.h

diff --git a/drivers/net/iavf_be/iavf_be.h b/drivers/net/iavf_be/iavf_be.h
index c182d9558a..1ca316e3e9 100644
--- a/drivers/net/iavf_be/iavf_be.h
+++ b/drivers/net/iavf_be/iavf_be.h
@@ -8,6 +8,19 @@
 #define IAVF_BE_AQ_BUF_SZ            4096
 #define IAVF_BE_32_TO_64(hi, lo) ((((uint64_t)(hi)) << 32) + (lo))
 
+/* Default setting on number of VSIs that VF can contain */
+#define IAVF_BE_DEFAULT_VSI_NUM     1
+#define AVF_DEFAULT_MAX_MTU         1500
+
+/* Set the MAX queues to 16 and MAX vectors to 17
+ * as base mode virtchnl support 16 queue pairs mapping in max.
+ */
+#define IAVF_BE_MAX_NUM_QUEUES      16
+#define IAVF_BE_MAX_VECTORS         17
+#define IAVF_BE_BUF_SIZE_MIN        1024
+#define IAVF_BE_FRAME_SIZE_MAX      9728
+#define IAVF_BE_NUM_MACADDR_MAX     64
+
 #define IAVFBE_READ_32(addr)        \
 	rte_le_to_cpu_32(*(volatile uint32_t *)(addr))
 #define IAVFBE_WRITE_32(addr, val)  \
@@ -47,8 +60,15 @@ struct iavfbe_adapter {
 	/* Adminq handle thread info */
 	volatile int thread_status;
 	pthread_t thread_id;
+
+	struct virtchnl_version_info virtchnl_version;
+	struct virtchnl_vf_resource *vf_res; /* Resource to VF */
+	/* Pointer to array of queue pairs info. */
+	struct virtchnl_queue_pair_info *qps;
 	uint16_t nb_qps;
+	uint16_t nb_used_qps;
 	bool link_up;
+	struct virtchnl_eth_stats eth_stats; /* Stats to VF */
 	int cq_irqfd;
 	rte_atomic32_t irq_enable;
 
@@ -65,11 +85,23 @@ struct iavfbe_adapter {
 #define IAVFBE_DEV_PRIVATE_TO_ADAPTER(adapter) \
 	((struct iavfbe_adapter *)adapter)
 
+void iavfbe_reset_all_queues(struct iavfbe_adapter *adapter);
 int iavfbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+int iavfbe_lock_lanq(struct iavfbe_adapter *adapter);
+int iavfbe_unlock_lanq(struct iavfbe_adapter *adapter);
+void iavfbe_notify_vf_reset(struct iavfbe_adapter *adapter);
 void iavfbe_handle_virtchnl_msg(void *arg);
 void iavfbe_reset_asq(struct iavfbe_adapter *adapter, bool lock);
 void iavfbe_reset_arq(struct iavfbe_adapter *adapter, bool lock);
 
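+/* Wrap-safe delta between a running uint64 counter and the snapshot
+ * taken at stats reset; the counter may have wrapped past the offset.
+ */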
+static inline uint64_t
+stats_update(uint64_t offset, uint64_t stat)
+{
+	if (stat >= offset)
+		return (stat - offset);
+	else
+		return (uint64_t)(((uint64_t)-1) - offset + stat + 1);
+}
+
 extern int iavfbe_logtype;
 #define IAVF_BE_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, iavfbe_logtype, "%s(): " fmt "\n", \
diff --git a/drivers/net/iavf_be/iavf_be_ethdev.c b/drivers/net/iavf_be/iavf_be_ethdev.c
index 2ab66f889d..940ed66ce4 100644
--- a/drivers/net/iavf_be/iavf_be_ethdev.c
+++ b/drivers/net/iavf_be/iavf_be_ethdev.c
@@ -16,6 +16,7 @@
 #include <iavf_type.h>
 #include <virtchnl.h>
 #include "iavf_be.h"
+#include "iavf_be_rxtx.h"
 
 #define AVFBE_EDEV_ID_ARG "emu"
 #define AVFBE_MAC_ARG "mac"
@@ -46,6 +47,8 @@ static int iavfbe_dev_start(struct rte_eth_dev *dev);
 static int iavfbe_dev_stop(struct rte_eth_dev *dev);
 static int iavfbe_dev_info_get(struct rte_eth_dev *dev,
 				struct rte_eth_dev_info *dev_info);
+static int iavfbe_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);
+static int iavfbe_stats_reset(struct rte_eth_dev *dev);
 static void iavfbe_destroy_adapter(struct rte_eth_dev *dev);
 
 struct rte_iavf_emu_notify_ops iavfbe_notify_ops = {
@@ -64,17 +67,80 @@ static const struct eth_dev_ops iavfbe_eth_dev_ops = {
 	.dev_start                  = iavfbe_dev_start,
 	.dev_stop                   = iavfbe_dev_stop,
 	.dev_infos_get              = iavfbe_dev_info_get,
+	.rx_queue_setup             = iavfbe_dev_rx_queue_setup,
+	.tx_queue_setup             = iavfbe_dev_tx_queue_setup,
+	.rx_queue_release           = iavfbe_dev_rx_queue_release,
+	.tx_queue_release           = iavfbe_dev_tx_queue_release,
+	.rxq_info_get               = iavfbe_dev_rxq_info_get,
+	.txq_info_get               = iavfbe_dev_txq_info_get,
 	.link_update                = iavfbe_dev_link_update,
+	.stats_get                  = iavfbe_stats_get,
+	.stats_reset                = iavfbe_stats_reset,
 };
 
 static int
-iavfbe_dev_info_get(struct rte_eth_dev *dev  __rte_unused,
-		    struct rte_eth_dev_info *dev_info)
+iavfbe_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
-	dev_info->max_rx_queues = 0;
-	dev_info->max_tx_queues = 0;
-	dev_info->min_rx_bufsize = 0;
-	dev_info->max_rx_pktlen = 0;
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	uint64_t tx_pkts = 0;
+	uint64_t tx_bytes = 0;
+	uint64_t tx_missed = 0;
+	uint64_t rx_pkts = 0;
+	uint64_t rx_bytes = 0;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+		rx_pkts += stats_update(rxq->stats_off.recv_pkt_num,
+					rxq->stats.recv_pkt_num);
+		rx_bytes += stats_update(rxq->stats_off.recv_bytes,
+					 rxq->stats.recv_bytes);
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+		tx_pkts += stats_update(txq->stats_off.sent_pkt_num,
+					txq->stats.sent_pkt_num);
+		tx_bytes += stats_update(txq->stats_off.sent_bytes,
+					 txq->stats.sent_bytes);
+		tx_missed += stats_update(txq->stats_off.sent_miss_num,
+					  txq->stats.sent_miss_num);
+	}
+
+	stats->ipackets = rx_pkts;
+	stats->opackets = tx_pkts;
+	stats->oerrors = tx_missed;
+	stats->ibytes = rx_bytes;
+	stats->obytes = tx_bytes;
+
+	return 0;
+}
+
+static int
+iavfbe_stats_reset(struct rte_eth_dev *dev)
+{
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	unsigned int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+		rxq->stats_off = rxq->stats;
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+		txq->stats_off = txq->stats;
+	}
 
 	return 0;
 }
@@ -86,6 +152,84 @@ iavfbe_dev_configure(struct rte_eth_dev *dev __rte_unused)
 	return 0;
 }
 
+static int
+iavfbe_start_queues(struct rte_eth_dev *dev)
+{
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	uint32_t i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq || rte_atomic32_read(&txq->enable) == 0)
+			continue;
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq || rte_atomic32_read(&rxq->enable) == 0)
+			continue;
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return 0;
+}
+
+static void
+iavfbe_stop_queues(struct rte_eth_dev *dev)
+{
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+}
+
+static int
+iavfbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct iavfbe_adapter *adapter =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	dev_info->max_rx_queues = adapter->nb_qps;
+	dev_info->max_tx_queues = adapter->nb_qps;
+	dev_info->min_rx_bufsize = IAVF_BE_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = IAVF_BE_FRAME_SIZE_MAX;
+	dev_info->max_mac_addrs = IAVF_BE_NUM_MACADDR_MAX;
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = IAVF_BE_MAX_RING_DESC,
+		.nb_min = IAVF_BE_MIN_RING_DESC,
+		.nb_align = IAVF_BE_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = IAVF_BE_MAX_RING_DESC,
+		.nb_min = IAVF_BE_MIN_RING_DESC,
+		.nb_align = IAVF_BE_ALIGN_RING_DESC,
+	};
+
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_SCATTER |
+		DEV_RX_OFFLOAD_JUMBO_FRAME;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_MULTI_SEGS;
+
+	return 0;
+}
+
 static int
 iavfbe_dev_start(struct rte_eth_dev *dev)
 {
@@ -94,6 +238,8 @@ iavfbe_dev_start(struct rte_eth_dev *dev)
 
 	adapter->adapter_stopped = 0;
 
+	iavfbe_start_queues(dev);
+
 	return 0;
 }
 
@@ -106,6 +252,8 @@ iavfbe_dev_stop(struct rte_eth_dev *dev)
 	if (adapter->adapter_stopped == 1)
 		return 0;
 
+	iavfbe_stop_queues(dev);
+
 	adapter->adapter_stopped = 1;
 
 	return 0;
@@ -133,6 +281,13 @@ iavfbe_dev_link_update(struct rte_eth_dev *dev,
 static int
 iavfbe_dev_close(struct rte_eth_dev *dev)
 {
+	struct iavfbe_adapter *adapter =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	/* Only send event when the emudev is alive */
+	if (adapter->started && adapter->cq_info.arq.len)
+		iavfbe_notify_vf_reset(adapter);
+
 	iavfbe_destroy_adapter(dev);
 	rte_eth_dev_release_port(dev);
 
@@ -199,7 +354,8 @@ iavfbe_new_device(struct rte_emudev *dev)
 		}
 	}
 
-	/* Lan queue info would be set when queue setup */
+	/* Only reset LAN queues already set up; other info is set at queue setup */
+	iavfbe_reset_all_queues(adapter);
 
 	if (rte_emudev_get_attr(dev->dev_id, RTE_IAVF_EMU_ATTR_ASQ_HEAD,
 		(rte_emudev_attr_t)&addr)) {
@@ -236,8 +392,28 @@ iavfbe_destroy_device(struct rte_emudev *dev)
 {
 	struct iavfbe_adapter *adapter =
 		(struct iavfbe_adapter *)dev->backend_priv;
+	struct rte_eth_dev_data *data = adapter->eth_dev->data;
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	uint16_t i;
 
-	/* TODO: Disable all lan queues */
+	/* Disable all queues */
+	for (i = 0; i < data->nb_rx_queues; i++) {
+		rxq = data->rx_queues[i];
+		if (!rxq)
+			continue;
+		rte_atomic32_set(&rxq->enable, false);
+		rxq->q_set = false;
+	}
+
+	for (i = 0; i < data->nb_tx_queues; i++) {
+		txq = data->tx_queues[i];
+		if (!txq)
+			continue;
+		rte_atomic32_set(&txq->enable, false);
+		txq->q_set = false;
+	}
+	adapter->started = 0;
 
 	/* update link status */
 	adapter->link_up = false;
@@ -249,9 +425,13 @@ iavfbe_update_device(struct rte_emudev *dev)
 {
 	struct iavfbe_adapter *adapter =
 		(struct iavfbe_adapter *)dev->backend_priv;
+	struct rte_eth_dev_data *data = adapter->eth_dev->data;
 	struct rte_iavf_emu_mem **mem = &(adapter->mem_table);
 	struct rte_emudev_q_info q_info;
 	struct rte_emudev_irq_info irq_info;
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	uint16_t i;
 
 	if (rte_emudev_get_mem_table(dev->dev_id, (void **)mem)) {
 		IAVF_BE_LOG(ERR, "Can not get mem table\n");
@@ -271,10 +451,87 @@ iavfbe_update_device(struct rte_emudev *dev)
 		return -1;
 	}
 
-	/* TODO: Lan queue info update */
 	adapter->cq_irqfd = irq_info.eventfd;
 	rte_atomic32_set(&adapter->irq_enable, irq_info.enable);
 
+	for (i = 0; i < data->nb_rx_queues; i++) {
+		rxq = data->rx_queues[i];
+		if (!rxq || rxq->vector == -1)
+			continue;
+
+		if (rte_emudev_get_irq_info(dev->dev_id,
+			rxq->vector, &irq_info)) {
+			IAVF_BE_LOG(ERR,
+				"Can not get irq info of rxq %d\n", i);
+			return -1;
+		}
+		rte_atomic32_set(&rxq->irq_enable, irq_info.enable);
+	}
+
+	for (i = 0; i < data->nb_tx_queues; i++) {
+		txq = data->tx_queues[i];
+		if (!txq || txq->vector == -1)
+			continue;
+
+		if (rte_emudev_get_irq_info(dev->dev_id,
+			txq->vector, &irq_info)) {
+			IAVF_BE_LOG(ERR,
+				"Can not get irq info of txq %d\n", i);
+			return -1;
+		}
+		rte_atomic32_set(&txq->irq_enable, irq_info.enable);
+	}
+
+	return 0;
+}
+
+int
+iavfbe_lock_lanq(struct iavfbe_adapter *adapter)
+{
+	struct rte_eth_dev *eth_dev = adapter->eth_dev;
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	uint16_t i;
+
+	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+		rxq = eth_dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		rte_spinlock_lock(&rxq->access_lock);
+	}
+
+	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+		txq = eth_dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		rte_spinlock_lock(&txq->access_lock);
+	}
+
+	return 0;
+}
+
+int
+iavfbe_unlock_lanq(struct iavfbe_adapter *adapter)
+{
+	struct rte_eth_dev *eth_dev = adapter->eth_dev;
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	uint16_t i;
+
+	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+		rxq = eth_dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		rte_spinlock_unlock(&rxq->access_lock);
+	}
+
+	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+		txq = eth_dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		rte_spinlock_unlock(&txq->access_lock);
+	}
+
 	return 0;
 }
 
@@ -287,11 +544,11 @@ iavfbe_lock_dp(struct rte_emudev *dev, int lock)
 	/* Acquire/Release lock of control queue and lan queue */
 
 	if (lock) {
-		/* TODO: Lan queue lock */
+		iavfbe_lock_lanq(adapter);
 		rte_spinlock_lock(&adapter->cq_info.asq.access_lock);
 		rte_spinlock_lock(&adapter->cq_info.arq.access_lock);
 	} else {
-		/* TODO: Lan queue unlock */
+		iavfbe_unlock_lanq(adapter);
 		rte_spinlock_unlock(&adapter->cq_info.asq.access_lock);
 		rte_spinlock_unlock(&adapter->cq_info.arq.access_lock);
 	}
@@ -358,11 +615,16 @@ iavfbe_reset_device(struct rte_emudev *dev)
 	struct iavfbe_adapter *adapter =
 		(struct iavfbe_adapter *)dev->backend_priv;
 
+	iavfbe_notify_vf_reset(adapter);
+
 	/* Lock has been acquired by lock_dp */
-	/* TODO: reset all queues */
+	iavfbe_reset_all_queues(adapter);
 	iavfbe_reset_asq(adapter, false);
 	iavfbe_reset_arq(adapter, false);
 
+	memset(adapter->qps, 0,
+	       adapter->nb_qps * sizeof(struct virtchnl_queue_pair_info));
+	memset(&adapter->eth_stats, 0, sizeof(struct virtchnl_eth_stats));
+	adapter->nb_used_qps = 0;
 	adapter->link_up = 0;
 	adapter->unicast_promisc = true;
 	adapter->multicast_promisc = true;
@@ -433,13 +695,14 @@ iavfbe_init_adapter(struct rte_eth_dev *eth_dev,
 {
 	struct iavfbe_adapter *adapter;
 	struct rte_iavf_emu_config *conf;
-	int ret;
+	int bufsz, ret;
 
 	adapter = IAVFBE_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
 
 	adapter->eth_dev = eth_dev;
 	adapter->emu_dev = emu_dev;
 	adapter->edev_id = emu_dev->dev_id;
+	adapter->cq_irqfd = IAVF_BE_INVALID_FD;
 	emu_dev->backend_priv = (void *)adapter;
 	rte_wmb();
 
@@ -472,6 +735,48 @@ iavfbe_init_adapter(struct rte_eth_dev *eth_dev,
 	rte_spinlock_init(&adapter->cq_info.asq.access_lock);
 	rte_spinlock_init(&adapter->cq_info.arq.access_lock);
 
+	/* Set VF Backend defaults during initialization */
+	adapter->virtchnl_version.major = VIRTCHNL_VERSION_MAJOR;
+	adapter->virtchnl_version.minor = VIRTCHNL_VERSION_MINOR;
+
+	bufsz = sizeof(struct virtchnl_vf_resource) +
+		(IAVF_BE_DEFAULT_VSI_NUM *
+		 sizeof(struct virtchnl_vsi_resource));
+	adapter->vf_res = rte_zmalloc_socket("iavfbe", bufsz, 0,
+					     eth_dev->device->numa_node);
+	if (!adapter->vf_res) {
+		IAVF_BE_LOG(ERR, "Fail to allocate vf_res memory");
+		ret = -ENOMEM;
+		goto err_res;
+	}
+
+	adapter->vf_res->num_vsis = IAVF_BE_DEFAULT_VSI_NUM;
+	adapter->vf_res->vf_cap_flags = VIRTCHNL_VF_OFFLOAD_L2 |
+					VIRTCHNL_VF_OFFLOAD_VLAN |
+					VIRTCHNL_VF_OFFLOAD_WB_ON_ITR |
+					VIRTCHNL_VF_OFFLOAD_RX_POLLING;
+	adapter->vf_res->max_vectors = IAVF_BE_MAX_VECTORS;
+	adapter->vf_res->num_queue_pairs = adapter->nb_qps;
+	adapter->vf_res->max_mtu = AVF_DEFAULT_MAX_MTU;
+	/* Make vsi_id vary with the emu device instance */
+	adapter->vf_res->vsi_res[0].vsi_id = emu_dev->dev_id;
+	adapter->vf_res->vsi_res[0].vsi_type = VIRTCHNL_VSI_SRIOV;
+	adapter->vf_res->vsi_res[0].num_queue_pairs = adapter->nb_qps;
+	rte_ether_addr_copy(ether_addr,
+		(struct rte_ether_addr *)
+		adapter->vf_res->vsi_res[0].default_mac_addr);
+
+	adapter->qps =
+		rte_zmalloc_socket("iavfbe",
+				   adapter->nb_qps * sizeof(adapter->qps[0]),
+				   0,
+				   eth_dev->device->numa_node);
+	if (!adapter->qps) {
+		IAVF_BE_LOG(ERR, "fail to allocate memory for queue info");
+		ret = -ENOMEM;
+		goto err_qps;
+	}
+
 	adapter->unicast_promisc = true;
 	adapter->multicast_promisc = true;
 	adapter->vlan_filter = false;
@@ -494,6 +799,11 @@ iavfbe_init_adapter(struct rte_eth_dev *eth_dev,
 	return 0;
 
 err_thread:
+	rte_free(adapter->qps);
+err_qps:
+	rte_free(adapter->vf_res);
+err_res:
+	rte_free(adapter->cq_info.asq.aq_req);
 err_aq:
 err_info:
 	rte_free(conf);
@@ -513,6 +823,9 @@ iavfbe_destroy_adapter(struct rte_eth_dev *dev)
 	}
 
 	rte_free(adapter->dev_info.dev_priv);
+	rte_free(adapter->cq_info.asq.aq_req);
+	rte_free(adapter->vf_res);
+	rte_free(adapter->qps);
 }
 
 static int
diff --git a/drivers/net/iavf_be/iavf_be_rxtx.c b/drivers/net/iavf_be/iavf_be_rxtx.c
new file mode 100644
index 0000000000..dd275b80c6
--- /dev/null
+++ b/drivers/net/iavf_be/iavf_be_rxtx.c
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <unistd.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+
+#include <rte_string_fns.h>
+#include <rte_mbuf.h>
+#include <rte_ether.h>
+#include <rte_ethdev_driver.h>
+#include <rte_net.h>
+#include <rte_iavf_emu.h>
+
+#include <iavf_type.h>
+#include <virtchnl.h>
+#include "iavf_be.h"
+#include "iavf_be_rxtx.h"
+
+int
+iavfbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			  uint16_t nb_desc __rte_unused,
+			  unsigned int socket_id,
+			  const struct rte_eth_rxconf *rx_conf __rte_unused,
+			  struct rte_mempool *mp)
+{
+	struct iavfbe_adapter *ad =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct iavfbe_rx_queue *rxq;
+	uint16_t len;
+
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx]) {
+		iavfbe_dev_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket("iavfbe rxq",
+				 sizeof(struct iavfbe_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!rxq) {
+		IAVF_BE_LOG(ERR, "Failed to allocate memory for "
+				 "rx queue data structure");
+		return -ENOMEM;
+	}
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = 0; /* Update when queue from fe is ready */
+	rxq->queue_id = queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->rx_hdr_len = 0;
+	rxq->vector = IAVF_BE_INVALID_VECTOR;
+	rxq->kickfd = IAVF_BE_INVALID_FD;
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = RTE_ALIGN(len, (1 << AVF_RXQ_CTX_DBUFF_SHIFT));
+
+	/* More ring info will be obtained via virtchnl messages */
+
+	rxq->adapter = (void *)ad;
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	return 0;
+}
+
+int
+iavfbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			  uint16_t nb_desc __rte_unused,
+			  unsigned int socket_id,
+			  const struct rte_eth_txconf *tx_conf __rte_unused)
+{
+	struct iavfbe_adapter *ad =
+		IAVFBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct iavfbe_tx_queue *txq;
+
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx]) {
+		iavfbe_dev_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("iavfbe txq",
+				 sizeof(struct iavfbe_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (!txq) {
+		IAVF_BE_LOG(ERR, "Failed to allocate memory for "
+				 "tx queue structure");
+		return -ENOMEM;
+	}
+
+	txq->queue_id = queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->vector = IAVF_BE_INVALID_VECTOR;
+	txq->callfd = IAVF_BE_INVALID_FD;
+
+	/* More ring info will be obtained via virtchnl messages */
+
+	txq->adapter = (void *)ad;
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+}
+
+void
+iavfbe_dev_rx_queue_release(void *rxq)
+{
+	struct iavfbe_rx_queue *q = (struct iavfbe_rx_queue *)rxq;
+
+	if (!q)
+		return;
+	rte_free(q);
+}
+
+void
+iavfbe_dev_tx_queue_release(void *txq)
+{
+	struct iavfbe_tx_queue *q = (struct iavfbe_tx_queue *)txq;
+
+	if (!q)
+		return;
+	rte_free(q);
+}
+
+void
+iavfbe_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		     struct rte_eth_rxq_info *qinfo)
+{
+	struct iavfbe_rx_queue *rxq;
+
+	rxq = dev->data->rx_queues[queue_id];
+	if (!rxq)
+		return;
+
+	qinfo->mp = rxq->mp;
+	qinfo->scattered_rx = true;
+	qinfo->nb_desc = rxq->nb_rx_desc;
+
+	qinfo->conf.rx_free_thresh = 0;
+	qinfo->conf.rx_drop_en = false;
+	qinfo->conf.rx_deferred_start = false;
+}
+
+void
+iavfbe_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		     struct rte_eth_txq_info *qinfo)
+{
+	struct iavfbe_tx_queue *txq;
+
+	txq = dev->data->tx_queues[queue_id];
+
+	if (!txq)
+		return;
+
+	qinfo->nb_desc = txq->nb_tx_desc;
+
+	qinfo->conf.tx_free_thresh = 0;
+	qinfo->conf.tx_rs_thresh = 0;
+	qinfo->conf.offloads = DEV_TX_OFFLOAD_MULTI_SEGS;
+	qinfo->conf.tx_deferred_start = false;
+}
diff --git a/drivers/net/iavf_be/iavf_be_rxtx.h b/drivers/net/iavf_be/iavf_be_rxtx.h
new file mode 100644
index 0000000000..cc72769337
--- /dev/null
+++ b/drivers/net/iavf_be/iavf_be_rxtx.h
@@ -0,0 +1,105 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#ifndef _IAVF_BE_RXTX_H_
+#define _IAVF_BE_RXTX_H_
+
+/* Ring length (QLEN) must be a whole multiple of 32 descriptors. */
+#define IAVF_BE_ALIGN_RING_DESC      32
+#define IAVF_BE_MIN_RING_DESC        64
+#define IAVF_BE_MAX_RING_DESC        4096
+
+#define AVF_RXQ_CTX_DBUFF_SHIFT 7
+#define AVF_RXQ_CTX_HBUFF_SHIFT 6
+
+#define AVF_RX_MAX_SEG           5
+#define IAVF_BE_INVALID_FD      -1
+#define IAVF_BE_INVALID_VECTOR  -1
+
+#define iavf_rx_desc iavf_32byte_rx_desc
+
+/* Structure associated with each Rx queue in AVF_BE. */
+struct iavfbe_rx_queue {
+	rte_spinlock_t access_lock;
+	struct rte_mempool *mp;       /* mbuf pool to populate Rx ring */
+	volatile struct iavf_tx_desc *tx_ring; /* AVF Tx ring virtual address */
+	uint64_t tx_ring_phys_addr;   /* AVF Tx ring DMA address */
+	uint16_t nb_rx_desc;          /* ring length */
+	volatile uint8_t *qtx_tail;   /* register address of tail */
+
+	uint16_t tx_head;
+	int vector;
+	int kickfd;
+	rte_atomic32_t irq_enable;
+
+	uint16_t port_id;       /* device port ID */
+	uint8_t crc_len;        /* 0 if CRC stripped, 4 otherwise */
+	uint16_t queue_id;      /* Rx queue index */
+	uint16_t rx_buf_len;    /* The packet buffer size */
+	uint16_t rx_hdr_len;    /* The header buffer size */
+	uint16_t max_pkt_len;   /* Maximum packet length */
+	bool q_set;             /* Whether the queue has been configured by virtchnl */
+	rte_atomic32_t enable;  /* Whether the queue has been enabled by virtchnl */
+
+	struct iavfbe_adapter *adapter; /* Adapter this Rx queue belongs to */
+	struct {
+		uint64_t recv_pkt_num;
+		uint64_t recv_bytes;
+		uint64_t recv_miss_num;
+		uint64_t recv_multi_num;
+		uint64_t recv_broad_num;
+	} stats, stats_off;   /* Stats information */
+};
+
+/* Structure associated with each TX queue. */
+struct iavfbe_tx_queue {
+	rte_spinlock_t access_lock;
+	volatile union iavf_rx_desc *rx_ring; /* AVF Rx ring virtual address */
+	uint64_t rx_ring_phys_addr;    /* Rx ring DMA address */
+	uint16_t nb_tx_desc;           /* ring length */
+	volatile uint8_t *qrx_tail;    /* tail address of fe's rx ring */
+	uint32_t buffer_size;          /* max buffer size of fe's rx ring */
+	uint32_t max_pkt_size;         /* max packet size allowed on fe's rx ring */
+
+	uint16_t rx_head;
+	int vector;
+	int callfd;
+	rte_atomic32_t irq_enable;
+
+	uint16_t port_id;
+	uint16_t queue_id;
+
+	bool q_set;             /* Whether the queue has been configured by virtchnl */
+	rte_atomic32_t enable;  /* Whether the queue has been enabled by virtchnl */
+
+	struct iavfbe_adapter *adapter; /* Adapter this Tx queue belongs to */
+	struct {
+		uint64_t sent_pkt_num;
+		uint64_t sent_bytes;
+		uint64_t sent_miss_num;
+		uint64_t sent_multi_num;
+		uint64_t sent_broad_num;
+	} stats, stats_off;   /* Stats information */
+};
+
+int iavfbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
+			      uint16_t queue_idx,
+			      uint16_t nb_desc,
+			      unsigned int socket_id,
+			      const struct rte_eth_rxconf *rx_conf,
+			      struct rte_mempool *mp);
+void iavfbe_dev_rx_queue_release(void *rxq);
+int iavfbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
+			      uint16_t queue_idx,
+			      uint16_t nb_desc,
+			      unsigned int socket_id,
+			      const struct rte_eth_txconf *tx_conf);
+void iavfbe_dev_tx_queue_release(void *txq);
+void iavfbe_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			     struct rte_eth_rxq_info *qinfo);
+void iavfbe_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			     struct rte_eth_txq_info *qinfo);
+
+#endif /* _IAVF_BE_RXTX_H_ */
diff --git a/drivers/net/iavf_be/iavf_be_vchnl.c b/drivers/net/iavf_be/iavf_be_vchnl.c
index 56b8a485a5..2195047280 100644
--- a/drivers/net/iavf_be/iavf_be_vchnl.c
+++ b/drivers/net/iavf_be/iavf_be_vchnl.c
@@ -21,6 +21,7 @@
 #include <virtchnl.h>
 
 #include "iavf_be.h"
+#include "iavf_be_rxtx.h"
 
 static inline void
 iavfbe_notify(struct iavfbe_adapter *adapter)
@@ -34,7 +35,92 @@ iavfbe_notify(struct iavfbe_adapter *adapter)
 					strerror(errno));
 }
 
-__rte_unused static int
+static inline void
+reset_rxq_stats(struct iavfbe_rx_queue *rxq)
+{
+	rxq->stats.recv_pkt_num = 0;
+	rxq->stats.recv_bytes = 0;
+	rxq->stats.recv_miss_num = 0;
+	rxq->stats.recv_multi_num = 0;
+	rxq->stats.recv_broad_num = 0;
+
+	rxq->stats_off.recv_pkt_num = 0;
+	rxq->stats_off.recv_bytes = 0;
+	rxq->stats_off.recv_miss_num = 0;
+	rxq->stats_off.recv_multi_num = 0;
+	rxq->stats_off.recv_broad_num = 0;
+}
+
+static inline void
+reset_txq_stats(struct iavfbe_tx_queue *txq)
+{
+	txq->stats.sent_pkt_num = 0;
+	txq->stats.sent_bytes = 0;
+	txq->stats.sent_miss_num = 0;
+	txq->stats.sent_multi_num = 0;
+	txq->stats.sent_broad_num = 0;
+}
+
+void
+iavfbe_reset_all_queues(struct iavfbe_adapter *adapter)
+{
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	uint16_t i;
+
+	/* Disable queues and mark them unset */
+	for (i = 0; i < adapter->eth_dev->data->nb_rx_queues; i++) {
+		rxq = adapter->eth_dev->data->rx_queues[i];
+		if (rxq) {
+			rte_atomic32_set(&rxq->enable, false);
+			rxq->q_set = false;
+			rxq->tx_head = 0;
+			reset_rxq_stats(rxq);
+		}
+	}
+
+	for (i = 0; i < adapter->eth_dev->data->nb_tx_queues; i++) {
+		txq = adapter->eth_dev->data->tx_queues[i];
+		if (txq) {
+			rte_atomic32_set(&txq->enable, false);
+			txq->q_set = false;
+			txq->rx_head = 0;
+			reset_txq_stats(txq);
+		}
+	}
+}
+
+static enum iavf_status
+apply_tx_irq(struct iavfbe_tx_queue *txq, uint16_t vector)
+{
+	struct rte_emudev_irq_info info;
+
+	txq->vector = vector;
+	if (rte_emudev_get_irq_info(txq->adapter->edev_id, vector, &info)) {
+		IAVF_BE_LOG(ERR, "Can not get irq info\n");
+		return IAVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+	txq->callfd = info.eventfd;
+
+	return 0;
+}
+
+static enum iavf_status
+apply_rx_irq(struct iavfbe_rx_queue *rxq, uint16_t vector)
+{
+	struct rte_emudev_irq_info info;
+
+	rxq->vector = vector;
+	if (rte_emudev_get_irq_info(rxq->adapter->edev_id, vector, &info)) {
+		IAVF_BE_LOG(ERR, "Can not get irq info\n");
+		return IAVF_ERR_DEVICE_NOT_SUPPORTED;
+	}
+	rxq->kickfd = info.eventfd;
+
+	return 0;
+}
+
+static int
 iavfbe_send_msg_to_vf(struct iavfbe_adapter *adapter,
 			uint32_t opcode,
 			uint32_t retval,
@@ -107,6 +193,459 @@ iavfbe_send_msg_to_vf(struct iavfbe_adapter *adapter,
 	return status;
 }
 
+static void
+iavfbe_process_cmd_version(struct iavfbe_adapter *adapter,
+				uint8_t *msg)
+{
+	struct virtchnl_version_info *info =
+		(struct virtchnl_version_info *)msg;
+
+	/* Only support V1.1 */
+	if (adapter->virtchnl_version.major == info->major &&
+	    adapter->virtchnl_version.minor == info->minor)
+		iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_VERSION,
+				      VIRTCHNL_STATUS_SUCCESS,
+				      (uint8_t *)&adapter->virtchnl_version,
+				      sizeof(adapter->virtchnl_version));
+	else
+		iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_VERSION,
+				      VIRTCHNL_STATUS_NOT_SUPPORTED,
+				      NULL, 0);
+}
+
+static int
+iavfbe_renew_device_info(struct iavfbe_adapter *adapter)
+{
+	struct rte_iavf_emu_mem **mem = &(adapter->mem_table);
+	uint64_t addr;
+
+	if (rte_emudev_get_mem_table(adapter->edev_id, (void **)mem)) {
+		IAVF_BE_LOG(ERR, "Can not get mem table\n");
+		return -1;
+	}
+
+	if (rte_emudev_get_attr(adapter->edev_id, RTE_IAVF_EMU_ATTR_RESET,
+		(rte_emudev_attr_t)&addr)) {
+		IAVF_BE_LOG(ERR, "Can not get reset register\n");
+		return -1;
+	}
+	adapter->reset = (uint8_t *)(uintptr_t)addr;
+
+	IAVF_BE_LOG(DEBUG, "DEVICE memtable re-acquired, %p\n",
+		    adapter->mem_table);
+
+	return 0;
+}
+
+static int
+iavfbe_process_cmd_reset_vf(struct iavfbe_adapter *adapter)
+{
+	adapter->started = 0;
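+	/*
+	 * Flag the reset as in progress before tearing queues down;
+	 * the FE polls this register until RESET_COMPLETED is written.
+	 */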
+	IAVFBE_WRITE_32(adapter->reset, RTE_IAVF_EMU_RESET_IN_PROGRESS);
+
+	iavfbe_lock_lanq(adapter);
+	iavfbe_reset_all_queues(adapter);
+	iavfbe_unlock_lanq(adapter);
+
+	memset(adapter->qps, 0,
+	       adapter->nb_qps * sizeof(struct virtchnl_queue_pair_info));
+	memset(&adapter->eth_stats, 0, sizeof(struct virtchnl_eth_stats));
+	adapter->nb_used_qps = 0;
+	adapter->link_up = 0;
+	adapter->unicast_promisc = true;
+	adapter->multicast_promisc = true;
+	adapter->vlan_filter = false;
+	adapter->vlan_strip = false;
+	adapter->adapter_stopped = 1;
+
+	iavfbe_renew_device_info(adapter);
+	IAVFBE_WRITE_32(adapter->reset, RTE_IAVF_EMU_RESET_COMPLETED);
+	adapter->started = 1;
+
+	return IAVF_SUCCESS;
+}
+
+static int
+iavfbe_process_cmd_get_vf_resource(struct iavfbe_adapter *adapter,
+				uint8_t *msg)
+{
+	struct virtchnl_vf_resource vf_res;
+	uint32_t request_caps;
+	uint32_t len = 0;
+
+	len = sizeof(struct virtchnl_vf_resource) +
+		(adapter->vf_res->num_vsis - 1) *
+		sizeof(struct virtchnl_vsi_resource);
+
+	request_caps = *(uint32_t *)msg;
+
+	rte_memcpy(&vf_res, adapter->vf_res, len);
+	vf_res.vf_cap_flags = request_caps &
+				adapter->vf_res->vf_cap_flags;
+
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_GET_VF_RESOURCES,
+			      VIRTCHNL_STATUS_SUCCESS, (uint8_t *)&vf_res, len);
+
+	return IAVF_SUCCESS;
+}
+
+static int
+iavfbe_process_cmd_config_vsi_queues(struct iavfbe_adapter *adapter,
+				     uint8_t *msg,
+				     uint16_t msglen __rte_unused)
+{
+	struct virtchnl_vsi_queue_config_info *vc_vqci =
+		(struct virtchnl_vsi_queue_config_info *)msg;
+	struct virtchnl_queue_pair_info *vc_qpi;
+	struct rte_eth_dev *dev = adapter->eth_dev;
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	uint16_t nb_qps, queue_id;
+	int i, ret = VIRTCHNL_STATUS_SUCCESS;
+
+	if (!msg) {
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	nb_qps = vc_vqci->num_queue_pairs;
+	vc_qpi = vc_vqci->qpair;
+
+	/* Check valid */
+	if (nb_qps > adapter->nb_qps ||
+	    nb_qps > dev->data->nb_rx_queues ||
+	    nb_qps > dev->data->nb_tx_queues) {
+		IAVF_BE_LOG(ERR, "number of queue pairs (%u) exceeds limits"
+			    " (max: %u, rxq: %u, txq: %u)", nb_qps,
+			    adapter->nb_qps, dev->data->nb_rx_queues,
+			    dev->data->nb_tx_queues);
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	for (i = 0; i < nb_qps; i++) {
+		if (vc_qpi[i].txq.vsi_id != vc_vqci->vsi_id ||
+		    vc_qpi[i].rxq.vsi_id != vc_vqci->vsi_id ||
+		    vc_qpi[i].rxq.queue_id != vc_qpi[i].txq.queue_id ||
+		    vc_qpi[i].rxq.queue_id > adapter->nb_qps - 1 ||
+		    vc_qpi[i].rxq.ring_len > IAVF_BE_MAX_RING_DESC ||
+		    vc_qpi[i].txq.ring_len > IAVF_BE_MAX_RING_DESC ||
+		    vc_vqci->vsi_id != adapter->vf_res->vsi_res[0].vsi_id) {
+			ret = VIRTCHNL_STATUS_ERR_PARAM;
+			goto send_msg;
+		}
+	}
+
+	/* Store queues info internally */
+	adapter->nb_used_qps = nb_qps;
+	rte_memcpy(adapter->qps, &vc_vqci->qpair,
+		   nb_qps * sizeof(adapter->qps[0]));
+
+	for (i = 0; i < nb_qps; i++) {
+		struct rte_emudev_db_info db_info;
+
+		queue_id = adapter->qps[i].rxq.queue_id;
+		rxq = dev->data->rx_queues[queue_id];
+		txq = dev->data->tx_queues[queue_id];
+		if (!rxq || !txq) {
+			IAVF_BE_LOG(ERR, "Queue Pair %u hasn't been set up",
+				    queue_id);
+			ret = VIRTCHNL_STATUS_NOT_SUPPORTED;
+			goto send_msg;
+		}
+
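+		/*
+		 * Data-queue doorbells are laid out after the admin queues,
+		 * two per queue pair.
+		 */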
+		memset(&db_info, 0, sizeof(db_info));
+		ret = rte_emudev_get_db_info(adapter->edev_id,
+					  i * 2 + RTE_IAVF_EMU_ADMINQ_NUM,
+					  &db_info);
+		if (ret || (db_info.flag & RTE_EMUDEV_DB_MEM) != RTE_EMUDEV_DB_MEM) {
+			IAVF_BE_LOG(ERR, "Fail to get Door Bell of RXQ %u",
+				    rxq->queue_id);
+			ret = VIRTCHNL_STATUS_NOT_SUPPORTED;
+			goto send_msg;
+		}
+
+		rte_spinlock_lock(&rxq->access_lock);
+		/* Configure Rx Queue */
+		rxq->nb_rx_desc = vc_qpi[i].txq.ring_len;
+		rxq->tx_ring_phys_addr = vc_qpi[i].txq.dma_ring_addr;
+		rxq->max_pkt_len = vc_qpi[i].rxq.max_pkt_size;
+		rxq->qtx_tail = (uint8_t *)db_info.data.mem.base;
+		/* Reset stats */
+		reset_rxq_stats(rxq);
+		rxq->tx_head = 0;
+		rxq->q_set = true;
+		rte_spinlock_unlock(&rxq->access_lock);
+
+		memset(&db_info, 0, sizeof(db_info));
+		ret = rte_emudev_get_db_info(adapter->edev_id,
+					  i * 2 + RTE_IAVF_EMU_ADMINQ_NUM + 1,
+					  &db_info);
+		if (ret || (db_info.flag & RTE_EMUDEV_DB_MEM) != RTE_EMUDEV_DB_MEM) {
+			IAVF_BE_LOG(ERR, "Fail to get Door Bell of TXQ %u",
+				    txq->queue_id);
+			ret = VIRTCHNL_STATUS_NOT_SUPPORTED;
+			goto send_msg;
+		}
+		rte_spinlock_lock(&txq->access_lock);
+		/* Configure Tx Queue */
+		txq->nb_tx_desc = vc_qpi[i].rxq.ring_len;
+		txq->rx_ring_phys_addr = vc_qpi[i].rxq.dma_ring_addr;
+		txq->buffer_size = vc_qpi[i].rxq.databuffer_size;
+		txq->max_pkt_size = vc_qpi[i].rxq.max_pkt_size;
+		txq->qrx_tail = (uint8_t *)db_info.data.mem.base;
+		/* Reset stats */
+		reset_txq_stats(txq);
+		txq->rx_head = 0;
+		txq->q_set = true;
+		rte_spinlock_unlock(&txq->access_lock);
+	}
+
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_CONFIG_VSI_QUEUES,
+			      ret, NULL, 0);
+	return ret;
+}
+
+static int
+iavfbe_process_cmd_enable_queues(struct iavfbe_adapter *adapter,
+				 uint8_t *msg,
+				 uint16_t msglen __rte_unused)
+{
+	struct virtchnl_queue_select *q_sel =
+		(struct virtchnl_queue_select *)msg;
+	struct rte_eth_dev *dev = adapter->eth_dev;
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	int i, ret = VIRTCHNL_STATUS_SUCCESS;
+
+	if (!msg) {
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	for (i = 0; i < adapter->nb_used_qps; i++) {
+		uint64_t len;
+
+		rxq = dev->data->rx_queues[i];
+		txq = dev->data->tx_queues[i];
+		if (!rxq || !txq) {
+			IAVF_BE_LOG(ERR, "Queue Pair %u hasn't been set up", i);
+			ret = VIRTCHNL_STATUS_ERR_NOT_SUPPORTED;
+			goto send_msg;
+		}
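+		/*
+		 * Direction swap: the FE's Tx queue selection enables the
+		 * BE's Rx queue, and vice versa.
+		 */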
+		if (q_sel->tx_queues & (1 << i)) {
+			if (!rxq->q_set) {
+				IAVF_BE_LOG(ERR, "RXQ %u hasn't been set up", i);
+				ret = VIRTCHNL_STATUS_ERR_NOT_SUPPORTED;
+				goto send_msg;
+			}
+			rte_spinlock_lock(&rxq->access_lock);
+			len = rxq->nb_rx_desc * sizeof(struct iavf_tx_desc);
+			rxq->tx_ring = (void *)(uintptr_t)rte_iavf_emu_get_dma_vaddr(
+						adapter->mem_table,
+						rxq->tx_ring_phys_addr,
+						&len);
+			rte_atomic32_set(&rxq->enable, true);
+			rte_spinlock_unlock(&rxq->access_lock);
+
+		}
+		if (q_sel->rx_queues & (1 << i)) {
+			if (!txq->q_set) {
+				IAVF_BE_LOG(ERR, "TXQ %u hasn't been set up", i);
+				ret = VIRTCHNL_STATUS_ERR_NOT_SUPPORTED;
+				goto send_msg;
+			}
+			rte_spinlock_lock(&txq->access_lock);
+			len = txq->nb_tx_desc * sizeof(union iavf_32byte_rx_desc);
+			txq->rx_ring = (void *)(uintptr_t)
+				rte_iavf_emu_get_dma_vaddr(adapter->mem_table,
+						       txq->rx_ring_phys_addr,
+						       &len);
+			rte_atomic32_set(&txq->enable, true);
+			rte_spinlock_unlock(&txq->access_lock);
+
+		}
+	}
+
+	/* Set link UP after queues are enabled */
+	adapter->link_up = true;
+	iavfbe_dev_link_update(adapter->eth_dev, 0);
+
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_ENABLE_QUEUES, ret, NULL, 0);
+
+	return ret;
+}
+
+static int
+iavfbe_process_cmd_disable_queues(struct iavfbe_adapter *adapter,
+				  uint8_t *msg,
+				  uint16_t msglen __rte_unused)
+{
+	struct virtchnl_queue_select *q_sel =
+		(struct virtchnl_queue_select *)msg;
+	struct rte_eth_dev *dev = adapter->eth_dev;
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	int ret = VIRTCHNL_STATUS_SUCCESS;
+	uint16_t i;
+
+	if (!msg) {
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	for (i = 0; i < adapter->nb_used_qps; i++) {
+		rxq = dev->data->rx_queues[i];
+		txq = dev->data->tx_queues[i];
+
+		if (q_sel->tx_queues & (1 << i)) {
+			if (!rxq)
+				continue;
+			rte_spinlock_lock(&rxq->access_lock);
+			rte_atomic32_set(&rxq->enable, false);
+			rxq->tx_head = 0;
+			reset_rxq_stats(rxq);
+			rte_spinlock_unlock(&rxq->access_lock);
+		}
+		if (q_sel->rx_queues & (1 << i)) {
+			if (!txq)
+				continue;
+			rte_spinlock_lock(&txq->access_lock);
+			rte_atomic32_set(&txq->enable, false);
+			txq->rx_head = 0;
+			reset_txq_stats(txq);
+			rte_spinlock_unlock(&txq->access_lock);
+		}
+	}
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_DISABLE_QUEUES,
+			      ret, NULL, 0);
+
+	return ret;
+}
+
+static int
+iavfbe_process_cmd_config_irq_map(struct iavfbe_adapter *adapter,
+				  uint8_t *msg,
+				  uint16_t msglen __rte_unused)
+{
+	struct rte_eth_dev *dev = adapter->eth_dev;
+	struct iavfbe_tx_queue *txq;
+	struct iavfbe_rx_queue *rxq;
+	uint16_t i, j, vector_id;
+	int ret = VIRTCHNL_STATUS_SUCCESS;
+
+	struct virtchnl_irq_map_info *irqmap =
+		(struct virtchnl_irq_map_info *)msg;
+	struct virtchnl_vector_map *map;
+
+	if (msg == NULL) {
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	IAVF_BE_LOG(DEBUG, "irqmap->num_vectors = %d\n", irqmap->num_vectors);
+
+	for (i = 0; i < irqmap->num_vectors; i++) {
+		map = &irqmap->vecmap[i];
+		vector_id = map->vector_id;
+
+		for (j = 0; j < adapter->nb_used_qps; j++) {
+			rxq = dev->data->rx_queues[j];
+			txq = dev->data->tx_queues[j];
+
+			if ((1 << j) & map->rxq_map) {
+				txq->vector = vector_id;
+				ret = apply_tx_irq(txq, vector_id);
+				if (ret)
+					goto send_msg;
+			}
+			if ((1 << j) & map->txq_map) {
+				rxq->vector = vector_id;
+				ret = apply_rx_irq(rxq, vector_id);
+				if (ret)
+					goto send_msg;
+			}
+		}
+	}
+
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_CONFIG_IRQ_MAP,
+			      ret, NULL, 0);
+
+	return ret;
+}
+
+static int
+iavfbe_process_cmd_get_stats(struct iavfbe_adapter *adapter,
+				uint8_t *msg __rte_unused,
+				uint16_t msglen __rte_unused)
+{
+	struct iavfbe_rx_queue *rxq;
+	struct iavfbe_tx_queue *txq;
+	int i;
+
+	memset(&adapter->eth_stats, 0, sizeof(adapter->eth_stats));
+
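+	/*
+	 * Direction swap: BE Rx mirrors FE Tx, so rxq counters feed the
+	 * tx_* fields below, and txq counters feed the rx_* fields.
+	 */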
+	for (i = 0; i < adapter->eth_dev->data->nb_rx_queues; i++) {
+		rxq = adapter->eth_dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+		adapter->eth_stats.tx_broadcast += rxq->stats.recv_broad_num;
+		adapter->eth_stats.tx_bytes += rxq->stats.recv_bytes;
+		adapter->eth_stats.tx_discards += rxq->stats.recv_miss_num;
+		adapter->eth_stats.tx_multicast += rxq->stats.recv_multi_num;
+		adapter->eth_stats.tx_unicast += rxq->stats.recv_pkt_num -
+						rxq->stats.recv_broad_num -
+						rxq->stats.recv_multi_num;
+	}
+
+	for (i = 0; i < adapter->eth_dev->data->nb_tx_queues; i++) {
+		txq = adapter->eth_dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+		adapter->eth_stats.rx_broadcast += txq->stats.sent_broad_num;
+		adapter->eth_stats.rx_bytes += txq->stats.sent_bytes;
+		/* Don't add discards, as the recv count doesn't include them */
+		adapter->eth_stats.rx_multicast += txq->stats.sent_multi_num;
+		adapter->eth_stats.rx_unicast += txq->stats.sent_pkt_num -
+						txq->stats.sent_broad_num -
+						txq->stats.sent_multi_num;
+	}
+
+	IAVF_BE_LOG(DEBUG, "rx_bytes:            %"PRIu64"",
+					adapter->eth_stats.tx_bytes);
+	IAVF_BE_LOG(DEBUG, "rx_unicast:          %"PRIu64"",
+					adapter->eth_stats.tx_unicast);
+	IAVF_BE_LOG(DEBUG, "rx_multicast:        %"PRIu64"",
+					adapter->eth_stats.tx_multicast);
+	IAVF_BE_LOG(DEBUG, "rx_broadcast:        %"PRIu64"",
+					adapter->eth_stats.tx_broadcast);
+	IAVF_BE_LOG(DEBUG, "rx_discards:         %"PRIu64"",
+					adapter->eth_stats.tx_discards);
+
+	IAVF_BE_LOG(DEBUG, "tx_bytes:            %"PRIu64"",
+					adapter->eth_stats.rx_bytes);
+	IAVF_BE_LOG(DEBUG, "tx_unicast:          %"PRIu64"",
+					adapter->eth_stats.rx_unicast);
+	IAVF_BE_LOG(DEBUG, "tx_multicast:        %"PRIu64"",
+					adapter->eth_stats.rx_multicast);
+	IAVF_BE_LOG(DEBUG, "tx_broadcast:        %"PRIu64"",
+					adapter->eth_stats.rx_broadcast);
+	IAVF_BE_LOG(DEBUG, "tx_discards:         %"PRIu64"",
+					adapter->eth_stats.rx_discards);
+
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_GET_STATS,
+			      VIRTCHNL_STATUS_SUCCESS,
+			      (uint8_t *)&adapter->eth_stats,
+			      sizeof(struct virtchnl_eth_stats));
+
+	return IAVF_SUCCESS;
+}
+
 /* Read data in admin queue to get msg from vf driver */
 static enum iavf_status
 iavfbe_read_msg_from_vf(struct iavfbe_adapter *adapter,
@@ -180,6 +719,289 @@ iavfbe_read_msg_from_vf(struct iavfbe_adapter *adapter,
 	return ret;
 }
 
+static void
+iavfbe_notify_vf_link_status(struct iavfbe_adapter *adapter)
+{
+	struct virtchnl_pf_event event;
+
+	event.severity = PF_EVENT_SEVERITY_INFO;
+	event.event = VIRTCHNL_EVENT_LINK_CHANGE;
+	event.event_data.link_event.link_status = adapter->link_up ? 1 : 0;
+	event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_UNKNOWN;
+
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_EVENT,
+				IAVF_SUCCESS, (uint8_t *)&event, sizeof(event));
+}
+
+void
+iavfbe_notify_vf_reset(struct iavfbe_adapter *adapter)
+{
+	struct virtchnl_pf_event event;
+
+	event.severity = PF_EVENT_SEVERITY_CERTAIN_DOOM;
+	event.event = VIRTCHNL_EVENT_RESET_IMPENDING;
+
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_EVENT,
+				IAVF_SUCCESS, (uint8_t *)&event, sizeof(event));
+}
+
+static int
+iavfbe_process_cmd_enable_vlan_strip(struct iavfbe_adapter *adapter)
+{
+	adapter->vlan_strip = true;
+
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_ENABLE_VLAN_STRIPPING,
+			      VIRTCHNL_STATUS_SUCCESS, NULL, 0);
+
+	return 0;
+}
+
+static int
+iavfbe_process_cmd_disable_vlan_strip(struct iavfbe_adapter *adapter)
+{
+	adapter->vlan_strip = false;
+
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING,
+			      VIRTCHNL_STATUS_SUCCESS, NULL, 0);
+
+	return 0;
+}
+
+static int
+iavfbe_process_cmd_config_promisc_mode(struct iavfbe_adapter *adapter,
+				uint8_t *msg,
+				uint16_t msglen __rte_unused)
+{
+	int ret = VIRTCHNL_STATUS_SUCCESS;
+	struct virtchnl_promisc_info *promisc =
+		(struct virtchnl_promisc_info *)msg;
+
+	if (msg == NULL) {
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	adapter->unicast_promisc =
+		(promisc->flags & FLAG_VF_UNICAST_PROMISC) ? true : false;
+	adapter->multicast_promisc =
+		(promisc->flags & FLAG_VF_MULTICAST_PROMISC) ? true : false;
+
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE,
+			      ret, NULL, 0);
+	return ret;
+}
+
+static int
+iavfbe_process_cmd_add_ether_address(struct iavfbe_adapter *adapter,
+				     uint8_t *msg,
+				     uint16_t msglen __rte_unused)
+{
+	struct virtchnl_ether_addr_list *addr_list =
+		(struct virtchnl_ether_addr_list *)msg;
+	int ret = VIRTCHNL_STATUS_SUCCESS;
+	int i;
+
+	if (msg == NULL) {
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	for (i = 0; i < addr_list->num_elements; i++) {
+
+		/* TODO: MAC filter hasn't been enabled yet */
+
+	}
+
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_ADD_ETH_ADDR,
+			      ret, NULL, 0);
+	return ret;
+}
+
+static int
+iavfbe_process_cmd_del_ether_address(struct iavfbe_adapter *adapter,
+				     uint8_t *msg,
+				     uint16_t msglen __rte_unused)
+{
+	int ret = VIRTCHNL_STATUS_SUCCESS;
+	struct virtchnl_ether_addr_list *addr_list =
+		(struct virtchnl_ether_addr_list *)msg;
+	int i;
+
+	if (msg == NULL) {
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	for (i = 0; i < addr_list->num_elements; i++) {
+
+		/* TODO: MAC filter hasn't been enabled yet */
+
+	}
+
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_DEL_ETH_ADDR,
+			      ret, NULL, 0);
+	return ret;
+}
+
+static int
+iavfbe_process_cmd_add_vlan(struct iavfbe_adapter *adapter,
+			    uint8_t *msg, uint16_t msglen __rte_unused)
+{
+	int ret = VIRTCHNL_STATUS_SUCCESS;
+	struct virtchnl_vlan_filter_list *vlan_list =
+		(struct virtchnl_vlan_filter_list *)msg;
+	int i;
+
+	if (msg == NULL) {
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	for (i = 0; i < vlan_list->num_elements; i++) {
+
+		/* TODO: VLAN filter hasn't been enabled yet */
+
+	}
+
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_ADD_VLAN,
+			      ret, NULL, 0);
+	return ret;
+}
+
+static int
+iavfbe_process_cmd_del_vlan(struct iavfbe_adapter *adapter,
+			    uint8_t *msg,
+			    uint16_t msglen __rte_unused)
+{
+	int ret = VIRTCHNL_STATUS_SUCCESS;
+	struct virtchnl_vlan_filter_list *vlan_list =
+		(struct virtchnl_vlan_filter_list *)msg;
+	int i;
+
+	if (msg == NULL) {
+		ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto send_msg;
+	}
+
+	for (i = 0; i < vlan_list->num_elements; i++) {
+
+		/* TODO: VLAN filter hasn't been enabled yet */
+
+	}
+
+send_msg:
+	iavfbe_send_msg_to_vf(adapter, VIRTCHNL_OP_DEL_VLAN,
+			      ret, NULL, 0);
+	return ret;
+}
+
+static void
+iavfbe_execute_vf_cmd(struct iavfbe_adapter *adapter,
+			struct iavf_arq_event_info *event)
+{
+	enum virtchnl_ops msg_opc;
+	int ret;
+
+	msg_opc = (enum virtchnl_ops)rte_le_to_cpu_32(
+		event->desc.cookie_high);
+	/* perform basic checks on the msg */
+	ret = virtchnl_vc_validate_vf_msg(&adapter->virtchnl_version, msg_opc,
+					  event->msg_buf, event->msg_len);
+	if (ret) {
+		IAVF_BE_LOG(ERR, "Invalid message opcode %u, len %u",
+			    msg_opc, event->msg_len);
+		iavfbe_send_msg_to_vf(adapter, msg_opc,
+				      VIRTCHNL_STATUS_ERR_NOT_SUPPORTED,
+				      NULL, 0);
+		return;
+	}
+
+	switch (msg_opc) {
+	case VIRTCHNL_OP_VERSION:
+		IAVF_BE_LOG(INFO, "OP_VERSION received");
+		iavfbe_process_cmd_version(adapter, event->msg_buf);
+		break;
+	case VIRTCHNL_OP_RESET_VF:
+		IAVF_BE_LOG(INFO, "OP_RESET_VF received");
+		iavfbe_process_cmd_reset_vf(adapter);
+		break;
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+		IAVF_BE_LOG(INFO, "OP_GET_VF_RESOURCES received");
+		iavfbe_process_cmd_get_vf_resource(adapter, event->msg_buf);
+		break;
+	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+		IAVF_BE_LOG(INFO, "OP_CONFIG_VSI_QUEUES received");
+		iavfbe_process_cmd_config_vsi_queues(adapter, event->msg_buf,
+						     event->msg_len);
+		break;
+	case VIRTCHNL_OP_ENABLE_QUEUES:
+		IAVF_BE_LOG(INFO, "OP_ENABLE_QUEUES received");
+		iavfbe_process_cmd_enable_queues(adapter, event->msg_buf,
+						 event->msg_len);
+		iavfbe_notify_vf_link_status(adapter);
+		break;
+	case VIRTCHNL_OP_DISABLE_QUEUES:
+		IAVF_BE_LOG(INFO, "OP_DISABLE_QUEUES received");
+		iavfbe_process_cmd_disable_queues(adapter, event->msg_buf,
+						  event->msg_len);
+		break;
+	case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+		IAVF_BE_LOG(INFO, "OP_CONFIG_PROMISCUOUS_MODE received");
+		iavfbe_process_cmd_config_promisc_mode(adapter, event->msg_buf,
+						       event->msg_len);
+		break;
+	case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+		IAVF_BE_LOG(INFO, "VIRTCHNL_OP_CONFIG_IRQ_MAP received");
+		iavfbe_process_cmd_config_irq_map(adapter, event->msg_buf,
+						  event->msg_len);
+		break;
+	case VIRTCHNL_OP_ADD_ETH_ADDR:
+		IAVF_BE_LOG(INFO, "VIRTCHNL_OP_ADD_ETH_ADDR received");
+		iavfbe_process_cmd_add_ether_address(adapter, event->msg_buf,
+						     event->msg_len);
+		break;
+	case VIRTCHNL_OP_DEL_ETH_ADDR:
+		IAVF_BE_LOG(INFO, "VIRTCHNL_OP_DEL_ETH_ADDR received");
+		iavfbe_process_cmd_del_ether_address(adapter, event->msg_buf,
+						     event->msg_len);
+		break;
+	case VIRTCHNL_OP_GET_STATS:
+		IAVF_BE_LOG(INFO, "VIRTCHNL_OP_GET_STATS received");
+		iavfbe_process_cmd_get_stats(adapter, event->msg_buf,
+					     event->msg_len);
+		break;
+	case VIRTCHNL_OP_ADD_VLAN:
+		IAVF_BE_LOG(INFO, "VIRTCHNL_OP_ADD_VLAN received");
+		iavfbe_process_cmd_add_vlan(adapter, event->msg_buf,
+					    event->msg_len);
+		break;
+	case VIRTCHNL_OP_DEL_VLAN:
+		IAVF_BE_LOG(INFO, "VIRTCHNL_OP_DEL_VLAN received");
+		iavfbe_process_cmd_del_vlan(adapter, event->msg_buf,
+					    event->msg_len);
+		break;
+	case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+		IAVF_BE_LOG(INFO, "VIRTCHNL_OP_ENABLE_VLAN_STRIPPING received");
+		iavfbe_process_cmd_enable_vlan_strip(adapter);
+		break;
+	case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+		IAVF_BE_LOG(INFO, "VIRTCHNL_OP_DISABLE_VLAN_STRIPPING received");
+		iavfbe_process_cmd_disable_vlan_strip(adapter);
+		break;
+	default:
+		IAVF_BE_LOG(ERR, "%u received, not supported", msg_opc);
+		iavfbe_send_msg_to_vf(adapter, msg_opc,
+				      VIRTCHNL_STATUS_ERR_NOT_SUPPORTED,
+				      NULL, 0);
+		break;
+	}
+
+}
+
 static inline int
 iavfbe_control_queue_remap(struct iavfbe_adapter *adapter,
 			  struct iavfbe_control_q *asq,
@@ -269,7 +1091,7 @@ iavfbe_handle_virtchnl_msg(void *arg)
 
 		switch (aq_opc) {
 		case iavf_aqc_opc_send_msg_to_pf:
-			/* Process msg from VF BE*/
+			iavfbe_execute_vf_cmd(adapter, &info);
 			break;
 		case iavf_aqc_opc_queue_shutdown:
 			iavfbe_reset_arq(adapter, true);
diff --git a/drivers/net/iavf_be/meson.build b/drivers/net/iavf_be/meson.build
index be13a2e492..e6b1c522a7 100644
--- a/drivers/net/iavf_be/meson.build
+++ b/drivers/net/iavf_be/meson.build
@@ -10,4 +10,5 @@ deps += ['bus_vdev', 'common_iavf', 'vfio_user', 'emu_iavf']
 sources = files(
 	'iavf_be_ethdev.c',
 	'iavf_be_vchnl.c',
+	'iavf_be_rxtx.c',
 )
-- 
2.21.1



* [dpdk-dev] [PATCH v2 4/6] net/iavf_be: add Rx Tx burst support
  2021-01-07  7:14 ` [dpdk-dev] [PATCH v2 0/6] introduce iavf backend driver Jingjing Wu
                     ` (2 preceding siblings ...)
  2021-01-07  7:15   ` [dpdk-dev] [PATCH v2 3/6] net/iavf_be: virtchnl messages process Jingjing Wu
@ 2021-01-07  7:15   ` Jingjing Wu
  2021-01-07  7:15   ` [dpdk-dev] [PATCH v2 5/6] net/iavf_be: extend backend to support iavf rxq_irq Jingjing Wu
  2021-01-07  7:15   ` [dpdk-dev] [PATCH v2 6/6] doc: new net PMD iavf_be Jingjing Wu
  5 siblings, 0 replies; 13+ messages in thread
From: Jingjing Wu @ 2021-01-07  7:15 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, chenbo.xia, xiuchun.lu, Miao Li

Enable packet receive and transmit functions.
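
A minimal usage sketch (illustrative only, not part of this patch) of
driving the new burst handlers through the standard ethdev API; port_id
and queue 0 are assumed placeholders:

	struct rte_mbuf *pkts[32];
	uint16_t nb, sent;

	nb = rte_eth_rx_burst(port_id, 0, pkts, 32);   /* -> iavfbe_recv_pkts */
	sent = rte_eth_tx_burst(port_id, 0, pkts, nb); /* -> iavfbe_xmit_pkts */
	while (sent < nb)	/* free mbufs the Tx path could not place */
		rte_pktmbuf_free(pkts[sent++]);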

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Xiuchun Lu <xiuchun.lu@intel.com>
Signed-off-by: Miao Li <miao.li@intel.com>
---
 drivers/net/iavf_be/iavf_be_ethdev.c |   3 +
 drivers/net/iavf_be/iavf_be_rxtx.c   | 342 +++++++++++++++++++++++++++
 drivers/net/iavf_be/iavf_be_rxtx.h   |  60 +++++
 3 files changed, 405 insertions(+)

diff --git a/drivers/net/iavf_be/iavf_be_ethdev.c b/drivers/net/iavf_be/iavf_be_ethdev.c
index 940ed66ce4..4bf936f21b 100644
--- a/drivers/net/iavf_be/iavf_be_ethdev.c
+++ b/drivers/net/iavf_be/iavf_be_ethdev.c
@@ -864,6 +864,9 @@ eth_dev_iavfbe_create(struct rte_vdev_device *dev,
 	rte_ether_addr_copy(addr, &eth_dev->data->mac_addrs[0]);
 
 	eth_dev->dev_ops = &iavfbe_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &iavfbe_recv_pkts;
+	eth_dev->tx_pkt_burst = &iavfbe_xmit_pkts;
+	eth_dev->tx_pkt_prepare = &iavfbe_prep_pkts;
 
 	eth_dev->data->dev_link = iavfbe_link;
 	eth_dev->data->numa_node = dev->device.numa_node;
diff --git a/drivers/net/iavf_be/iavf_be_rxtx.c b/drivers/net/iavf_be/iavf_be_rxtx.c
index dd275b80c6..66f30cc0a8 100644
--- a/drivers/net/iavf_be/iavf_be_rxtx.c
+++ b/drivers/net/iavf_be/iavf_be_rxtx.c
@@ -162,3 +162,345 @@ iavfbe_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 	qinfo->conf.offloads = DEV_TX_OFFLOAD_MULTI_SEGS;
 	qinfo->conf.tx_deferred_start = false;
 }
+
+static inline void
+iavfbe_recv_offload(struct rte_mbuf *m,
+	uint16_t cmd, uint32_t offset)
+{
+	m->l2_len = ((offset & IAVF_TXD_QW1_MACLEN_MASK) >>
+		IAVF_TX_DESC_LENGTH_MACLEN_SHIFT) << 1;
+	m->l3_len = ((offset & IAVF_TXD_QW1_IPLEN_MASK) >>
+		IAVF_TX_DESC_LENGTH_IPLEN_SHIFT) << 2;
+	m->l4_len = ((offset & IAVF_TXD_QW1_L4LEN_MASK) >>
+		IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) << 2;
+
+	switch (cmd & IAVF_TX_DESC_CMD_IIPT_IPV4_CSUM) {
+	case IAVF_TX_DESC_CMD_IIPT_IPV4_CSUM:
+		m->ol_flags = PKT_TX_IP_CKSUM;
+		break;
+	case IAVF_TX_DESC_CMD_IIPT_IPV4:
+		m->ol_flags = PKT_TX_IPV4;
+		break;
+	case IAVF_TX_DESC_CMD_IIPT_IPV6:
+		m->ol_flags = PKT_TX_IPV6;
+		break;
+	default:
+		break;
+	}
+
+	switch (cmd & IAVF_TX_DESC_CMD_L4T_EOFT_UDP) {
+	case IAVF_TX_DESC_CMD_L4T_EOFT_UDP:
+		m->ol_flags |= PKT_TX_UDP_CKSUM;
+		break;
+	case IAVF_TX_DESC_CMD_L4T_EOFT_SCTP:
+		m->ol_flags |= PKT_TX_SCTP_CKSUM;
+		break;
+	case IAVF_TX_DESC_CMD_L4T_EOFT_TCP:
+		m->ol_flags |= PKT_TX_TCP_CKSUM;
+		break;
+	default:
+		break;
+	}
+}
+
+/* RX function */
+uint16_t
+iavfbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct iavfbe_rx_queue *rxq = (struct iavfbe_rx_queue *)rx_queue;
+	struct iavfbe_adapter *adapter = (struct iavfbe_adapter *)rxq->adapter;
+	uint32_t nb_rx = 0;
+	uint16_t head, tail;
+	uint16_t cmd;
+	uint32_t offset;
+	volatile struct iavf_tx_desc *ring_dma;
+	struct rte_ether_addr *ea = NULL;
+	uint64_t ol_flags, tso_segsz = 0;
+
+	rte_spinlock_lock(&rxq->access_lock);
+
+	if (unlikely(rte_atomic32_read(&rxq->enable) == 0)) {
+		/* RX queue is not enabled currently */
+		goto end_unlock;
+	}
+
+	ring_dma = rxq->tx_ring;
+	head = rxq->tx_head;
+	tail = (uint16_t)IAVFBE_READ_32(rxq->qtx_tail);
+
+	while (head != tail && nb_rx < nb_pkts) {
+		volatile struct iavf_tx_desc *d;
+		void *desc_addr;
+		uint64_t data_len, tmp;
+		struct rte_mbuf *cur = NULL, *rxm, *first = NULL;
+
+		ol_flags = 0;
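+		/* Walk the FE's Tx descriptors until EOP to gather one packet. */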
+		while (1) {
+			d = &ring_dma[head];
+			head++;
+
+			if (unlikely(head == rxq->nb_rx_desc))
+				head = 0;
+
+			if ((head & 0x3) == 0) {
+				rte_prefetch0(&ring_dma[head]);
+			}
+
+			IAVF_BE_DUMP_TX_DESC(rxq, d, head);
+
+			if ((d->cmd_type_offset_bsz &
+			     IAVF_TXD_QW1_DTYPE_MASK) ==
+			    IAVF_TX_DESC_DTYPE_CONTEXT) {
+				ol_flags = PKT_TX_TCP_SEG;
+				tso_segsz = (d->cmd_type_offset_bsz &
+					     IAVF_TXD_CTX_QW1_MSS_MASK) >>
+					    IAVF_TXD_CTX_QW1_MSS_SHIFT;
+				d = &ring_dma[head];
+				head++;
+				if (unlikely(head == rxq->nb_rx_desc))
+					head = 0;
+			}
+
+			cmd = (d->cmd_type_offset_bsz & IAVF_TXD_QW1_CMD_MASK) >>
+				IAVF_TXD_QW1_CMD_SHIFT;
+			offset = (d->cmd_type_offset_bsz & IAVF_TXD_QW1_OFFSET_MASK) >>
+				IAVF_TXD_QW1_OFFSET_SHIFT;
+
+			rxm = rte_pktmbuf_alloc(rxq->mp);
+			if (unlikely(rxm == NULL)) {
+				IAVF_BE_LOG(ERR, "Failed to allocate mbuf\n");
+				break;
+			}
+
+			data_len = (rte_le_to_cpu_64(d->cmd_type_offset_bsz)
+						& IAVF_TXD_QW1_TX_BUF_SZ_MASK)
+				>> IAVF_TXD_QW1_TX_BUF_SZ_SHIFT;
+			if (data_len > rte_pktmbuf_tailroom(rxm)) {
+				rte_pktmbuf_free(rxm);
+				rte_pktmbuf_free(first);
+				goto end_of_recv;
+			}
+			tmp = data_len;
+			desc_addr = (void *)(uintptr_t)rte_iavf_emu_get_dma_vaddr(
+				adapter->mem_table, d->buffer_addr, &tmp);
+
+			rte_prefetch0(desc_addr);
+			rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+
+			rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+			rte_memcpy(rte_pktmbuf_mtod(rxm, void *), desc_addr, data_len);
+
+			rxm->nb_segs = 1;
+			rxm->next = NULL;
+			rxm->pkt_len = data_len;
+			rxm->data_len = data_len;
+
+			if (cmd & IAVF_TX_DESC_CMD_IL2TAG1)
+				rxm->vlan_tci = (d->cmd_type_offset_bsz &
+						 IAVF_TXD_QW1_L2TAG1_MASK) >>
+						IAVF_TXD_QW1_L2TAG1_SHIFT;
+
+			if (cmd & IAVF_TX_DESC_CMD_RS)
+				d->cmd_type_offset_bsz =
+					rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
+
+			if (!first) {
+				first = rxm;
+				cur = rxm;
+				iavfbe_recv_offload(rxm, cmd, offset);
+				/* TSO enabled */
+				if (ol_flags & PKT_TX_TCP_SEG) {
+					rxm->tso_segsz = tso_segsz;
+					rxm->ol_flags |= ol_flags;
+				}
+			} else {
+				first->pkt_len += (uint32_t)data_len;
+				first->nb_segs++;
+				cur->next = rxm;
+				cur = rxm;
+			}
+
+			if (cmd & IAVF_TX_DESC_CMD_EOP)
+				break;
+		}
+
+		if (unlikely(first == NULL)) /* mbuf allocation failed above */
+			goto end_of_recv;
+
+		if ((!(ol_flags & PKT_TX_TCP_SEG)) &&
+		    (first->pkt_len > rxq->max_pkt_len)) {
+			rte_pktmbuf_free(first);
+			goto end_of_recv;
+		}
+
+		rx_pkts[nb_rx] = first;
+		nb_rx++;
+
+		/* Count multicast and broadcast */
+		ea = rte_pktmbuf_mtod(first, struct rte_ether_addr *);
+		if (rte_is_multicast_ether_addr(ea)) {
+			if (rte_is_broadcast_ether_addr(ea))
+				rxq->stats.recv_broad_num++;
+			else
+				rxq->stats.recv_multi_num++;
+		}
+
+		rxq->stats.recv_pkt_num++;
+		rxq->stats.recv_bytes += first->pkt_len;
+	}
+
+end_of_recv:
+	rxq->tx_head = head;
+end_unlock:
+	rte_spinlock_unlock(&rxq->access_lock);
+
+	return nb_rx;
+}
+
+/* TX function */
+uint16_t
+iavfbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct iavfbe_tx_queue *txq = (struct iavfbe_tx_queue *)tx_queue;
+	struct iavfbe_adapter *adapter = (struct iavfbe_adapter *)txq->adapter;
+	volatile union iavf_rx_desc *ring_dma;
+	volatile union iavf_rx_desc *d;
+	struct rte_ether_addr *ea = NULL;
+	struct rte_mbuf *pkt, *m;
+	uint16_t head, tail;
+	uint16_t nb_tx = 0;
+	uint16_t nb_avail; /* number of avail desc */
+	void *desc_addr;
+	uint64_t  len, data_len;
+	uint32_t pkt_len;
+	uint64_t qword1;
+
+	rte_spinlock_lock(&txq->access_lock);
+
+	if (unlikely(rte_atomic32_read(&txq->enable) == 0)) {
+		/* TX queue is not enabled currently */
+		goto end_unlock;
+	}
+
+	len = 1;
+	head = txq->rx_head;
+	ring_dma = txq->rx_ring;
+	tail = (uint16_t)IAVFBE_READ_32(txq->qrx_tail);
+	/* Number of free descriptors from head to tail, accounting for wrap */
+	nb_avail = (tail >= head) ?
+		(tail - head) : (txq->nb_tx_desc - head + tail);
+
+	while (nb_avail > 0 && nb_tx < nb_pkts) {
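+		/* Copy each mbuf segment of the packet into one FE Rx descriptor. */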
+		pkt = tx_pkts[nb_tx];
+		pkt_len = rte_pktmbuf_pkt_len(pkt);
+
+		if (pkt->nb_segs > nb_avail) /* no desc to use */
+			goto end_of_xmit;
+
+		m = pkt;
+
+		do {
+			qword1 = 0;
+			d = &ring_dma[head];
+			data_len = rte_pktmbuf_data_len(m);
+			desc_addr = (void *)(uintptr_t)rte_iavf_emu_get_dma_vaddr(
+				adapter->mem_table,
+				rte_le_to_cpu_64(d->read.pkt_addr),
+				&len);
+
+			rte_memcpy(desc_addr, rte_pktmbuf_mtod(m, void *),
+				   data_len);
+
+			/* If pkt carries vlan info, post it to descriptor */
+			if (m->ol_flags & (PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN)) {
+				qword1 |= 1 << IAVF_RX_DESC_STATUS_L2TAG1P_SHIFT;
+				d->wb.qword0.lo_dword.l2tag1 =
+					rte_cpu_to_le_16(pkt->vlan_tci);
+			}
+			m = m->next;
+			/* Mark the last desc with EOP flag */
+			if (!m)
+				qword1 |=
+					((1 << IAVF_RX_DESC_STATUS_EOF_SHIFT)
+					 << IAVF_RXD_QW1_STATUS_SHIFT);
+
+			qword1 = qword1 |
+				((1 << IAVF_RX_DESC_STATUS_DD_SHIFT)
+				<< IAVF_RXD_QW1_STATUS_SHIFT) |
+				((data_len << IAVF_RXD_QW1_LENGTH_PBUF_SHIFT)
+				& IAVF_RXD_QW1_LENGTH_PBUF_MASK);
+
+			rte_wmb();
+
+			d->wb.qword1.status_error_len = rte_cpu_to_le_64(qword1);
+
+			IAVF_BE_DUMP_RX_DESC(txq, d, head);
+
+			head++;
+			if (head >= txq->nb_tx_desc)
+				head = 0;
+
+			/* Prefetch next 4 RX descriptors */
+			if ((head & 0x3) == 0)
+				rte_prefetch0(&ring_dma[head]);
+		} while (m);
+
+		nb_avail -= pkt->nb_segs;
+
+		nb_tx++;
+
+		/* update stats */
+		ea = rte_pktmbuf_mtod(pkt, struct rte_ether_addr *);
+		if (rte_is_multicast_ether_addr(ea)) {
+			if (rte_is_broadcast_ether_addr(ea))
+				txq->stats.sent_broad_num++;
+			else
+				txq->stats.sent_multi_num++;
+		}
+		txq->stats.sent_pkt_num++;
+		txq->stats.sent_bytes += pkt_len;
+		/* Free entire packet */
+		rte_pktmbuf_free(pkt);
+	}
+
+end_of_xmit:
+	txq->rx_head = head;
+	txq->stats.sent_miss_num += nb_pkts - nb_tx;
+end_unlock:
+	rte_spinlock_unlock(&txq->access_lock);
+
+	return nb_tx;
+}
+
+/* TX prep functions */
+uint16_t
+iavfbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		 uint16_t nb_pkts)
+{
+	struct iavfbe_tx_queue *txq = (struct iavfbe_tx_queue *)tx_queue;
+	struct rte_mbuf *m;
+	uint16_t data_len;
+	uint32_t pkt_len;
+	int i;
+
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		pkt_len = rte_pktmbuf_pkt_len(m);
+
+		/* Check buffer len and packet len */
+		if (pkt_len > txq->max_pkt_size) {
+			rte_errno = EINVAL;
+			return i;
+		}
+		/* Cannot support a pkt using more than 5 descriptors */
+		if (m->nb_segs > AVF_RX_MAX_SEG) {
+			rte_errno = EINVAL;
+			return i;
+		}
+		do {
+			data_len = rte_pktmbuf_data_len(m);
+			if (data_len > txq->buffer_size) {
+				rte_errno = EINVAL;
+				return i;
+			}
+			m = m->next;
+		} while (m);
+	}
+
+	return i;
+}
diff --git a/drivers/net/iavf_be/iavf_be_rxtx.h b/drivers/net/iavf_be/iavf_be_rxtx.h
index cc72769337..71495a21bd 100644
--- a/drivers/net/iavf_be/iavf_be_rxtx.h
+++ b/drivers/net/iavf_be/iavf_be_rxtx.h
@@ -101,5 +101,65 @@ void iavfbe_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			     struct rte_eth_rxq_info *qinfo);
 void iavfbe_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			     struct rte_eth_txq_info *qinfo);
+uint16_t iavfbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			  uint16_t nb_pkts);
+uint16_t iavfbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			  uint16_t nb_pkts);
+uint16_t iavfbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			  uint16_t nb_pkts);
+
+static inline
+void iavfbe_dump_rx_descriptor(struct iavfbe_tx_queue *txq,
+			    const void *desc,
+			    uint16_t rx_id)
+{
+	const union iavf_32byte_rx_desc *rx_desc = desc;
+
+	printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64
+	       " QW2: 0x%016"PRIx64" QW3: 0x%016"PRIx64"\n", txq->queue_id,
+	       rx_id, rx_desc->read.pkt_addr, rx_desc->read.hdr_addr,
+	       rx_desc->read.rsvd1, rx_desc->read.rsvd2);
+}
+
+/* All the descriptors are 16 bytes, so just use one of them
+ * to print the qwords
+ */
+static inline
+void iavfbe_dump_tx_descriptor(const struct iavfbe_rx_queue *rxq,
+			    const void *desc, uint16_t tx_id)
+{
+	const char *name;
+	const struct iavf_tx_desc *tx_desc = desc;
+	enum iavf_tx_desc_dtype_value type;
+
+	type = (enum iavf_tx_desc_dtype_value)rte_le_to_cpu_64(
+		tx_desc->cmd_type_offset_bsz &
+		rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK));
+	switch (type) {
+	case IAVF_TX_DESC_DTYPE_DATA:
+		name = "Tx_data_desc";
+		break;
+	case IAVF_TX_DESC_DTYPE_CONTEXT:
+		name = "Tx_context_desc";
+		break;
+	default:
+		name = "unknown_desc";
+		break;
+	}
+
+	printf("Queue %d %s %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64"\n",
+	       rxq->queue_id, name, tx_id, tx_desc->buffer_addr,
+	       tx_desc->cmd_type_offset_bsz);
+}
+
+#ifdef DEBUG_DUMP_DESC
+#define IAVF_BE_DUMP_RX_DESC(rxq, desc, rx_id) \
+	iavfbe_dump_rx_descriptor(rxq, desc, rx_id)
+#define IAVF_BE_DUMP_TX_DESC(txq, desc, tx_id) \
+	iavfbe_dump_tx_descriptor(txq, desc, tx_id)
+#else
+#define IAVF_BE_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0)
+#define IAVF_BE_DUMP_TX_DESC(txq, desc, tx_id) do { } while (0)
+#endif
 
 #endif /* _AVF_BE_RXTX_H_ */
-- 
2.21.1



* [dpdk-dev] [PATCH v2 5/6] net/iavf_be: extend backend to support iavf rxq_irq
  2021-01-07  7:14 ` [dpdk-dev] [PATCH v2 0/6] introduce iavf backend driver Jingjing Wu
                     ` (3 preceding siblings ...)
  2021-01-07  7:15   ` [dpdk-dev] [PATCH v2 4/6] net/iavf_be: add Rx Tx burst support Jingjing Wu
@ 2021-01-07  7:15   ` Jingjing Wu
  2021-01-07  7:15   ` [dpdk-dev] [PATCH v2 6/6] doc: new net PMD iavf_be Jingjing Wu
  5 siblings, 0 replies; 13+ messages in thread
From: Jingjing Wu @ 2021-01-07  7:15 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, chenbo.xia, xiuchun.lu

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/iavf_be/iavf_be_ethdev.c | 3 ++-
 drivers/net/iavf_be/iavf_be_rxtx.c   | 5 +++++
 drivers/net/iavf_be/iavf_be_vchnl.c  | 8 ++++++--
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/net/iavf_be/iavf_be_ethdev.c b/drivers/net/iavf_be/iavf_be_ethdev.c
index 4bf936f21b..76d6103c2f 100644
--- a/drivers/net/iavf_be/iavf_be_ethdev.c
+++ b/drivers/net/iavf_be/iavf_be_ethdev.c
@@ -382,7 +382,6 @@ iavfbe_new_device(struct rte_emudev *dev)
 	adapter->reset = (uint8_t *)(uintptr_t)addr;
 	IAVFBE_WRITE_32(adapter->reset, RTE_IAVF_EMU_RESET_COMPLETED);
 	adapter->started = 1;
-	printf("NEW DEVICE: memtable, %p\n", adapter->mem_table);
 
 	return 0;
 }
@@ -465,6 +464,7 @@ iavfbe_update_device(struct rte_emudev *dev)
 				"Can not get irq info of rxq %d\n", i);
 			return -1;
 		}
+		rxq->kickfd = irq_info.eventfd;
 		rte_atomic32_set(&rxq->irq_enable, irq_info.enable);
 	}
 
@@ -479,6 +479,7 @@ iavfbe_update_device(struct rte_emudev *dev)
 				"Can not get irq info of txq %d\n", i);
 			return -1;
 		}
+		txq->callfd = irq_info.eventfd;
 		rte_atomic32_set(&txq->irq_enable, irq_info.enable);
 	}
 
diff --git a/drivers/net/iavf_be/iavf_be_rxtx.c b/drivers/net/iavf_be/iavf_be_rxtx.c
index 66f30cc0a8..9da70976e1 100644
--- a/drivers/net/iavf_be/iavf_be_rxtx.c
+++ b/drivers/net/iavf_be/iavf_be_rxtx.c
@@ -5,6 +5,7 @@
 #include <unistd.h>
 #include <inttypes.h>
 #include <sys/queue.h>
+#include <sys/eventfd.h>
 
 #include <rte_string_fns.h>
 #include <rte_mbuf.h>
@@ -461,6 +462,10 @@ iavfbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 end_of_xmit:
 	txq->rx_head = head;
 	txq->stats.sent_miss_num += nb_pkts - nb_tx;
+
+	if (rte_atomic32_read(&txq->irq_enable) == true)
+		eventfd_write(txq->callfd, (eventfd_t)1);
+
 end_unlock:
 	rte_spinlock_unlock(&txq->access_lock);
 
diff --git a/drivers/net/iavf_be/iavf_be_vchnl.c b/drivers/net/iavf_be/iavf_be_vchnl.c
index 2195047280..243ad638f8 100644
--- a/drivers/net/iavf_be/iavf_be_vchnl.c
+++ b/drivers/net/iavf_be/iavf_be_vchnl.c
@@ -95,12 +95,15 @@ apply_tx_irq(struct iavfbe_tx_queue *txq, uint16_t vector)
 {
 	struct rte_emudev_irq_info info;
 
+	rte_spinlock_lock(&txq->access_lock);
 	txq->vector = vector;
 	if (rte_emudev_get_irq_info(txq->adapter->edev_id, vector, &info)) {
 		IAVF_BE_LOG(ERR, "Can not get irq info\n");
+		rte_spinlock_unlock(&txq->access_lock);
 		return IAVF_ERR_DEVICE_NOT_SUPPORTED;
 	}
 	txq->callfd = info.eventfd;
+	rte_atomic32_set(&txq->irq_enable, info.enable);
+	rte_spinlock_unlock(&txq->access_lock);
 
 	return 0;
 }
@@ -110,12 +113,15 @@ apply_rx_irq(struct iavfbe_rx_queue *rxq, uint16_t vector)
 {
 	struct rte_emudev_irq_info info;
 
+	rte_spinlock_lock(&rxq->access_lock);
 	rxq->vector = vector;
 	if (rte_emudev_get_irq_info(rxq->adapter->edev_id, vector, &info)) {
 		IAVF_BE_LOG(ERR, "Can not get irq info\n");
+		rte_spinlock_unlock(&rxq->access_lock);
 		return IAVF_ERR_DEVICE_NOT_SUPPORTED;
 	}
 	rxq->kickfd = info.eventfd;
+	rte_atomic32_set(&rxq->irq_enable, info.enable);
+	rte_spinlock_unlock(&rxq->access_lock);
 
 	return 0;
 }
@@ -557,13 +563,11 @@ iavfbe_process_cmd_config_irq_map(struct iavfbe_adapter *adapter,
 			txq = dev->data->tx_queues[j];
 
 			if ((1 << j) & map->rxq_map) {
-				txq->vector = vector_id;
 				ret = apply_tx_irq(txq, vector_id);
 				if (ret)
 					goto send_msg;
 			}
 			if ((1 << j) & map->txq_map) {
-				rxq->vector = vector_id;
 				ret = apply_rx_irq(rxq, vector_id);
 				if (ret)
 					goto send_msg;
-- 
2.21.1



* [dpdk-dev] [PATCH v2 6/6] doc: new net PMD iavf_be
  2021-01-07  7:14 ` [dpdk-dev] [PATCH v2 0/6] introduce iavf backend driver Jingjing Wu
                     ` (4 preceding siblings ...)
  2021-01-07  7:15   ` [dpdk-dev] [PATCH v2 5/6] net/iavf_be: extend backend to support iavf rxq_irq Jingjing Wu
@ 2021-01-07  7:15   ` Jingjing Wu
  5 siblings, 0 replies; 13+ messages in thread
From: Jingjing Wu @ 2021-01-07  7:15 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, chenbo.xia, xiuchun.lu

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 MAINTAINERS                            |  6 +++
 doc/guides/nics/features/iavf_be.ini   | 11 ++++++
 doc/guides/nics/iavf_be.rst            | 53 ++++++++++++++++++++++++++
 doc/guides/nics/index.rst              |  1 +
 doc/guides/rel_notes/release_21_02.rst |  6 +++
 5 files changed, 77 insertions(+)
 create mode 100644 doc/guides/nics/features/iavf_be.ini
 create mode 100644 doc/guides/nics/iavf_be.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index bca206ba8f..5faf093571 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -707,6 +707,12 @@ F: drivers/net/iavf/
 F: drivers/common/iavf/
 F: doc/guides/nics/features/iavf*.ini
 
+Intel iavf_be
+M: Jingjing Wu <jingjing.wu@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/iavf_be/
+F: doc/guides/nics/features/iavf_be*.ini
+
 Intel ice
 M: Qiming Yang <qiming.yang@intel.com>
 M: Qi Zhang <qi.z.zhang@intel.com>
diff --git a/doc/guides/nics/features/iavf_be.ini b/doc/guides/nics/features/iavf_be.ini
new file mode 100644
index 0000000000..8528695d00
--- /dev/null
+++ b/doc/guides/nics/features/iavf_be.ini
@@ -0,0 +1,11 @@
+;
+; Supported features of the 'iavf_be' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Promiscuous mode     = Y
+Allmulticast mode    = Y
+Basic stats          = Y
+Scattered Rx         = Y
+x86-64               = Y
diff --git a/doc/guides/nics/iavf_be.rst b/doc/guides/nics/iavf_be.rst
new file mode 100644
index 0000000000..14e26853e9
--- /dev/null
+++ b/doc/guides/nics/iavf_be.rst
@@ -0,0 +1,53 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2020 Intel Corporation.
+
+Poll Mode Driver for Emulated Backend of Intel® AVF
+===================================================
+
+Intel® AVF is an Ethernet SR-IOV Virtual Function with the same
+device id (8086:1889) across different Intel Ethernet Controllers.
+
+The emulated backend of Intel® AVF is a software-emulated device that
+provides an IAVF-compatible layout and acceleration to consumers of IAVF.
+The communication uses the vfio-user protocol as the transport mechanism,
+and the backend PMD is built on the *librte_vfio_user* and
+*librte_emudev* libraries.
+
+PMD arguments
+-------------
+
+The following devargs can be used to set up an iavf_be device in DPDK:
+
+#.  ``emu``:
+
+    The name of the emudev device this port attaches to.
+    (required)
+
+#.  ``mac``:
+
+    The MAC address assigned to the device; the frontend considers it the
+    default MAC. If not set, the driver picks a random one.
+    (optional)
+
+Set up an iavf_be interface
+---------------------------
+
+The following example will set up an iavf_be interface in DPDK:
+
+.. code-block:: console
+
+    --vdev emu_iavf0,sock=/tmp/to/socket/emu_iavf0,queues=4 --vdev net_iavfbe0,emu=emu_iavf0,mac=00:11:22:33:44:55
+
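+Here ``emu_iavf0`` is the emulated device exposed over the vfio-user
+socket, and ``net_iavfbe0`` attaches to it through the ``emu`` devarg.
+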
+Features and Limitations of iavf_be PMD
+---------------------------------------
+Currently, the iavf_be PMD provides the basic functionality of packet reception, transmission and event handling.
+
+*   Multiple queues are supported.
+
+*   Base-mode virtchnl message processing is supported.
+
+*   Rx/Tx need not be stopped manually; stop the guest, or the iavf
+    driver in the guest, instead.
+
+*   The PMD runs in polling mode; Rx interrupts are not supported.
+
+*   No MAC/VLAN filtering support.
+
+*   No classification offload support.
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 3443617755..bd764ccbb3 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -30,6 +30,7 @@ Network Interface Controller Drivers
     hinic
     hns3
     i40e
+    iavf_be
     ice
     igb
     igc
diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
index b310b67b7d..bd14d55fc6 100644
--- a/doc/guides/rel_notes/release_21_02.rst
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -83,6 +83,12 @@ New Features
 
   See :doc:`../prog_guide/emudev` for more information.
 
+* **Added iavf_be net driver.**
+
+  Added the iavf_be poll mode driver, a software backend for the Intel® AVF Ethernet device.
+
+  See :doc:`../nics/iavf_be` for more information.
+
 Removed Items
 -------------
 
-- 
2.21.1



end of thread, other threads:[~2021-01-07  7:29 UTC | newest]

Thread overview: 13+ messages
2020-12-19  7:54 [dpdk-dev] [PATCH v1 0/5] introduce iavf backend driver Jingjing Wu
2020-12-19  7:54 ` [dpdk-dev] [PATCH v1 1/5] net/iavf_be: " Jingjing Wu
2020-12-19  7:54 ` [dpdk-dev] [PATCH v1 2/5] net/iavf_be: control queue enabling Jingjing Wu
2020-12-19  7:54 ` [dpdk-dev] [PATCH v1 3/5] net/iavf_be: virtchnl messages process Jingjing Wu
2020-12-19  7:54 ` [dpdk-dev] [PATCH v1 4/5] net/iavf_be: add Rx Tx burst support Jingjing Wu
2020-12-19  7:54 ` [dpdk-dev] [PATCH v1 5/5] doc: new net PMD iavf_be Jingjing Wu
2021-01-07  7:14 ` [dpdk-dev] [PATCH v2 0/6] introduce iavf backend driver Jingjing Wu
2021-01-07  7:14   ` [dpdk-dev] [PATCH v2 1/6] net/iavf_be: " Jingjing Wu
2021-01-07  7:14   ` [dpdk-dev] [PATCH v2 2/6] net/iavf_be: control queue enabling Jingjing Wu
2021-01-07  7:15   ` [dpdk-dev] [PATCH v2 3/6] net/iavf_be: virtchnl messages process Jingjing Wu
2021-01-07  7:15   ` [dpdk-dev] [PATCH v2 4/6] net/iavf_be: add Rx Tx burst support Jingjing Wu
2021-01-07  7:15   ` [dpdk-dev] [PATCH v2 5/6] net/iavf_be: extend backend to support iavf rxq_irq Jingjing Wu
2021-01-07  7:15   ` [dpdk-dev] [PATCH v2 6/6] doc: new net PMD iavf_be Jingjing Wu
