DPDK patches and discussions
* [dpdk-dev] [PATCH v2 0/3] Support zero copy RX/TX in user space vhost
@ 2014-05-20  5:25 Ouyang Changchun
  2014-05-20  5:25 ` [dpdk-dev] [PATCH v2 1/3] ethdev: Add API to support queue start and stop functionality for RX/TX Ouyang Changchun
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Ouyang Changchun @ 2014-05-20  5:25 UTC (permalink / raw)
  To: dev

This patch series supports user space vhost zero copy. It removes packet copying between host and guest
in RX/TX, and it introduces an extra ring to store the detached mbufs. At initialization stage all mbufs
are put into this ring; when a guest starts, vhost gets the available buffer addresses allocated by the
guest for RX, translates them into host space addresses, attaches them to mbufs and puts the attached
mbufs into the mempool.

Queue start and DMA refill then get mbufs from the mempool and use them to set the DMA addresses.
 
For TX, it gets the buffer addresses of the packets to be transmitted from the guest, translates them
to host space addresses, attaches them to mbufs and puts the mbufs into the TX queues.
After TX finishes, it pulls the mbufs out of the mempool, detaches them and puts them back into the extra ring.
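
As a rough, informal sketch of the attach step described above (not part of the patches themselves;
it mirrors attach_rxmbuf_zcp() in patch 3, assumes the 1.x mbuf layout used in this series, and
MBUF_HEADROOM_UINT32() is the helper macro defined there):

    /* Sketch: attach a guest-allocated RX buffer to a detached mbuf.
     * buff_addr/phys_addr are the host virtual/physical addresses already
     * translated from the guest physical address in the vring descriptor. */
    static void
    attach_guest_buf(struct rte_mbuf *mbuf, uint64_t buff_addr,
                     uint64_t phys_addr, uint32_t len, uint16_t desc_idx)
    {
        mbuf->buf_addr = (void *)(uintptr_t)(buff_addr - RTE_PKTMBUF_HEADROOM);
        mbuf->buf_physaddr = phys_addr - RTE_PKTMBUF_HEADROOM;
        mbuf->pkt.data = (void *)(uintptr_t)buff_addr;
        mbuf->pkt.data_len = len;
        /* Remember the vring descriptor index so it can be returned to the
         * used ring once the NIC is done with the buffer. */
        MBUF_HEADROOM_UINT32(mbuf) = (uint32_t)desc_idx;
    }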

This patch series also implements queue start and stop functionality in the IXGBE PMD, and enables hardware
loopback for VMDQ mode in the IXGBE PMD.

Ouyang Changchun (3):
  Add API to support queue start and stop functionality for RX/TX.
  Implement queue start and stop functionality in IXGBE PMD; Enable
    hardware loopback for VMDQ mode in IXGBE PMD.
  Support user space vhost zero copy, it removes packets copying between
    host and guest in RX/TX.

 examples/vhost/main.c                    | 1410 ++++++++++++++++++++++++++----
 examples/vhost/virtio-net.c              |  120 ++-
 examples/vhost/virtio-net.h              |   15 +-
 lib/librte_eal/linuxapp/eal/eal_memory.c |    2 +-
 lib/librte_ether/rte_ethdev.c            |  104 +++
 lib/librte_ether/rte_ethdev.h            |   80 ++
 lib/librte_pmd_ixgbe/ixgbe_ethdev.c      |    4 +
 lib/librte_pmd_ixgbe/ixgbe_ethdev.h      |    8 +
 lib/librte_pmd_ixgbe/ixgbe_rxtx.c        |  233 ++++-
 lib/librte_pmd_ixgbe/ixgbe_rxtx.h        |    6 +
 10 files changed, 1787 insertions(+), 195 deletions(-)

-- 
1.9.0


* [dpdk-dev] [PATCH v2 1/3] ethdev: Add API to support queue start and stop functionality for RX/TX.
  2014-05-20  5:25 [dpdk-dev] [PATCH v2 0/3] Support zero copy RX/TX in user space vhost Ouyang Changchun
@ 2014-05-20  5:25 ` Ouyang Changchun
  2014-05-20  5:25 ` [dpdk-dev] [PATCH v2 2/3] ixgbe: Implement queue start and stop functionality in IXGBE PMD Ouyang Changchun
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Ouyang Changchun @ 2014-05-20  5:25 UTC (permalink / raw)
  To: dev

This patch adds an API to support queue start and stop functionality for RX/TX.
It allows an RX or TX queue to be started or stopped individually, instead of starting
and stopping all of them at the same time.
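
Not part of the patch itself, but as a minimal usage sketch of the new calls
(start_queue_pair() is a hypothetical helper; return codes as documented in the header below):

    #include <rte_ethdev.h>

    /* Start one RX/TX queue pair after rte_eth_dev_start(); stop the RX
     * queue again if the TX side cannot be started. */
    static int
    start_queue_pair(uint8_t port_id, uint16_t queue_id)
    {
        int ret = rte_eth_dev_rx_queue_start(port_id, queue_id);

        if (ret != 0)   /* -EINVAL: bad port/queue, -ENOTSUP: no PMD support */
            return ret;

        ret = rte_eth_dev_tx_queue_start(port_id, queue_id);
        if (ret != 0)
            rte_eth_dev_rx_queue_stop(port_id, queue_id);
        return ret;
    }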

Signed-off-by: Ouyang Changchun <changchun.ouyang@intel.com>
---
 lib/librte_eal/linuxapp/eal/eal_memory.c |   2 +-
 lib/librte_ether/rte_ethdev.c            | 104 +++++++++++++++++++++++++++++++
 lib/librte_ether/rte_ethdev.h            |  80 ++++++++++++++++++++++++
 3 files changed, 185 insertions(+), 1 deletion(-)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index 69ad63e..dd10e15 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -134,6 +134,7 @@ rte_mem_virt2phy(const void *virtaddr)
 	uint64_t page, physaddr;
 	unsigned long virt_pfn;
 	int page_size;
+	off_t offset;
 
 	/* standard page size */
 	page_size = getpagesize();
@@ -145,7 +146,6 @@ rte_mem_virt2phy(const void *virtaddr)
 		return RTE_BAD_PHYS_ADDR;
 	}
 
-	off_t offset;
 	virt_pfn = (unsigned long)virtaddr / page_size;
 	offset = sizeof(uint64_t) * virt_pfn;
 	if (lseek(fd, offset, SEEK_SET) == (off_t) -1) {
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index ec411db..0008755 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -293,6 +293,110 @@ rte_eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 	return (0);
 }
 
+int
+rte_eth_dev_rx_queue_start(uint8_t port_id, uint16_t rx_queue_id)
+{
+	struct rte_eth_dev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup*/
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (port_id >= nb_ports) {
+		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_eth_devices[port_id];
+	if (rx_queue_id >= dev->data->nb_rx_queues) {
+		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
+		return (-EINVAL);
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_start, -ENOTSUP);
+
+	return dev->dev_ops->rx_queue_start(dev, rx_queue_id);
+
+}
+
+int
+rte_eth_dev_rx_queue_stop(uint8_t port_id, uint16_t rx_queue_id)
+{
+	struct rte_eth_dev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup*/
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (port_id >= nb_ports) {
+		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_eth_devices[port_id];
+	if (rx_queue_id >= dev->data->nb_rx_queues) {
+		PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
+		return (-EINVAL);
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_stop, -ENOTSUP);
+
+	return dev->dev_ops->rx_queue_stop(dev, rx_queue_id);
+
+}
+
+int
+rte_eth_dev_tx_queue_start(uint8_t port_id, uint16_t tx_queue_id)
+{
+	struct rte_eth_dev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup*/
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (port_id >= nb_ports) {
+		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_eth_devices[port_id];
+	if (tx_queue_id >= dev->data->nb_tx_queues) {
+		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
+		return (-EINVAL);
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_start, -ENOTSUP);
+
+	return dev->dev_ops->tx_queue_start(dev, tx_queue_id);
+
+}
+
+int
+rte_eth_dev_tx_queue_stop(uint8_t port_id, uint16_t tx_queue_id)
+{
+	struct rte_eth_dev *dev;
+
+	/* This function is only safe when called from the primary process
+	 * in a multi-process setup*/
+	PROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);
+
+	if (port_id >= nb_ports) {
+		PMD_DEBUG_TRACE("Invalid port_id=%d\n", port_id);
+		return (-EINVAL);
+	}
+
+	dev = &rte_eth_devices[port_id];
+	if (tx_queue_id >= dev->data->nb_tx_queues) {
+		PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", tx_queue_id);
+		return (-EINVAL);
+	}
+
+	FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_stop, -ENOTSUP);
+
+	return dev->dev_ops->tx_queue_stop(dev, tx_queue_id);
+
+}
+
 static int
 rte_eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 {
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index cd4bec6..f2a8dc5 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -480,6 +480,7 @@ struct rte_eth_vmdq_rx_conf {
 	enum rte_eth_nb_pools nb_queue_pools; /**< VMDq only mode, 8 or 64 pools */
 	uint8_t enable_default_pool; /**< If non-zero, use a default pool */
 	uint8_t default_pool; /**< The default pool, if applicable */
+	uint8_t enable_loop_back; /**< Enable VT loop back */
 	uint8_t nb_pool_maps; /**< We can have up to 64 filters/mappings */
 	struct {
 		uint16_t vlan_id; /**< The vlan id of the received frame */
@@ -501,6 +502,7 @@ struct rte_eth_rxconf {
 	struct rte_eth_thresh rx_thresh; /**< RX ring threshold registers. */
 	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
 	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
+	uint8_t start_rx_per_q; /**< start rx per queue. */
 };
 
 #define ETH_TXQ_FLAGS_NOMULTSEGS 0x0001 /**< nb_segs=1 for all mbufs */
@@ -521,6 +523,7 @@ struct rte_eth_txconf {
 	uint16_t tx_rs_thresh; /**< Drives the setting of RS bit on TXDs. */
 	uint16_t tx_free_thresh; /**< Drives the freeing of TX buffers. */
 	uint32_t txq_flags; /**< Set flags for the Tx queue */
+	uint8_t start_tx_per_q; /**< start tx per queue. */
 };
 
 /**
@@ -934,6 +937,14 @@ typedef void (*eth_dev_infos_get_t)(struct rte_eth_dev *dev,
 				    struct rte_eth_dev_info *dev_info);
 /**< @internal Get specific informations of an Ethernet device. */
 
+typedef int (* eth_queue_start_t)(struct rte_eth_dev *dev,
+				    uint16_t queue_id);
+/**< @internal Start rx and tx of a queue of an Ethernet device. */
+
+typedef int (* eth_queue_stop_t)(struct rte_eth_dev *dev,
+				    uint16_t queue_id);
+/**< @internal Stop rx and tx of a queue of an Ethernet device. */
+
 typedef int (*eth_rx_queue_setup_t)(struct rte_eth_dev *dev,
 				    uint16_t rx_queue_id,
 				    uint16_t nb_rx_desc,
@@ -1237,6 +1248,10 @@ struct eth_dev_ops {
 	vlan_tpid_set_t            vlan_tpid_set;      /**< Outer VLAN TPID Setup. */
 	vlan_strip_queue_set_t     vlan_strip_queue_set; /**< VLAN Stripping on queue. */
 	vlan_offload_set_t         vlan_offload_set; /**< Set VLAN Offload. */
+	eth_queue_start_t          rx_queue_start;/**< Start RX for a queue.*/
+	eth_queue_stop_t           rx_queue_stop;/**< Stop RX for a queue.*/
+	eth_queue_start_t          tx_queue_start;/**< Start TX for a queue.*/
+	eth_queue_stop_t           tx_queue_stop;/**< Stop TX for a queue.*/
 	eth_rx_queue_setup_t       rx_queue_setup;/**< Set up device RX queue.*/
 	eth_queue_release_t        rx_queue_release;/**< Release RX queue.*/
 	eth_rx_queue_count_t       rx_queue_count; /**< Get Rx queue count. */
@@ -1733,6 +1748,71 @@ extern int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
  */
 extern int rte_eth_dev_socket_id(uint8_t port_id);
 
+/**
+ * Start the specified RX queue of a port.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param rx_queue_id
+ *   The index of the RX queue to start.
+ *   The value must be in the range [0, nb_rx_queue - 1] previously supplied
+ *   to rte_eth_dev_configure().
+ * @return
+ *   - 0: Success, the receive queue is correctly started.
+ *   - -EINVAL: The port_id or the queue_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int rte_eth_dev_rx_queue_start(uint8_t port_id, uint16_t rx_queue_id);
+
+/**
+ * Stop the specified RX queue of a port.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param rx_queue_id
+ *   The index of the RX queue to stop.
+ *   The value must be in the range [0, nb_rx_queue - 1] previously supplied
+ *   to rte_eth_dev_configure().
+ * @return
+ *   - 0: Success, the receive queue is correctly stopped.
+ *   - -EINVAL: The port_id or the queue_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int rte_eth_dev_rx_queue_stop(uint8_t port_id, uint16_t rx_queue_id);
+
+/**
+ * Start the specified TX queue of a port.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param tx_queue_id
+ *   The index of the TX queue to start.
+ *   The value must be in the range [0, nb_tx_queue - 1] previously supplied
+ *   to rte_eth_dev_configure().
+ * @return
+ *   - 0: Success, the transmit queue is correctly started.
+ *   - -EINVAL: The port_id or the queue_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int rte_eth_dev_tx_queue_start(uint8_t port_id, uint16_t tx_queue_id);
+
+/**
+ * Stop the specified TX queue of a port.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param tx_queue_id
+ *   The index of the TX queue to stop.
+ *   The value must be in the range [0, nb_tx_queue - 1] previously supplied
+ *   to rte_eth_dev_configure().
+ * @return
+ *   - 0: Success, the transmit queue is correctly stopped.
+ *   - -EINVAL: The port_id or the queue_id is out of range.
+ *   - -ENOTSUP: The function is not supported by the PMD.
+ */
+extern int rte_eth_dev_tx_queue_stop(uint8_t port_id, uint16_t tx_queue_id);
+
+
 
 /**
  * Start an Ethernet device.
-- 
1.9.0


* [dpdk-dev] [PATCH v2 2/3] ixgbe: Implement queue start and stop functionality in IXGBE PMD
  2014-05-20  5:25 [dpdk-dev] [PATCH v2 0/3] Support zero copy RX/TX in user space vhost Ouyang Changchun
  2014-05-20  5:25 ` [dpdk-dev] [PATCH v2 1/3] ethdev: Add API to support queue start and stop functionality for RX/TX Ouyang Changchun
@ 2014-05-20  5:25 ` Ouyang Changchun
  2014-05-20  5:25 ` [dpdk-dev] [PATCH v2 3/3] examples/vhost: Support user space vhost zero copy Ouyang Changchun
  2014-05-27 23:01 ` [dpdk-dev] [PATCH v2 0/3] Support zero copy RX/TX in user space vhost Thomas Monjalon
  3 siblings, 0 replies; 6+ messages in thread
From: Ouyang Changchun @ 2014-05-20  5:25 UTC (permalink / raw)
  To: dev

This patch implements queue start and stop functionality in IXGBE PMD;
it also enables hardware loopback for VMDQ mode in IXGBE PMD.
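
For reference, a hedged sketch of how an application might opt into both features via the
configuration fields added in patch 1 (not part of this patch; setup_deferred_queue() and the
single-queue configuration are illustrative only, and port_conf is assumed to be already filled
for VMDq RX mode as in the vhost sample):

    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    static int
    setup_deferred_queue(uint8_t port_id, uint16_t nb_rxd, uint16_t nb_txd,
                         struct rte_eth_conf *port_conf,
                         struct rte_mempool *mbuf_pool)
    {
        /* Mark the queues for per-queue start so dev_start() skips them. */
        struct rte_eth_rxconf rx_conf = { .start_rx_per_q = 1 };
        struct rte_eth_txconf tx_conf = { .start_tx_per_q = 1 };
        int socket_id = rte_eth_dev_socket_id(port_id);
        int ret;

        /* Hardware loopback between VMDq pools (wired to PFDTXGSWC/VMTXSW). */
        port_conf->rx_adv_conf.vmdq_rx_conf.enable_loop_back = 1;

        ret = rte_eth_dev_configure(port_id, 1, 1, port_conf);
        if (ret < 0)
            return ret;
        ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id,
                                     &rx_conf, mbuf_pool);
        if (ret < 0)
            return ret;
        ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id, &tx_conf);
        if (ret < 0)
            return ret;
        ret = rte_eth_dev_start(port_id);
        if (ret < 0)
            return ret;

        /* Bring the deferred queues up explicitly. */
        ret = rte_eth_dev_rx_queue_start(port_id, 0);
        if (ret < 0)
            return ret;
        return rte_eth_dev_tx_queue_start(port_id, 0);
    }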

Signed-off-by: Ouyang Changchun <changchun.ouyang@intel.com>
---
 lib/librte_pmd_ixgbe/ixgbe_ethdev.c |   4 +
 lib/librte_pmd_ixgbe/ixgbe_ethdev.h |   8 ++
 lib/librte_pmd_ixgbe/ixgbe_rxtx.c   | 233 ++++++++++++++++++++++++++++++------
 lib/librte_pmd_ixgbe/ixgbe_rxtx.h   |   6 +
 4 files changed, 215 insertions(+), 36 deletions(-)

diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
index 49ff0d1..62a6d77 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
@@ -275,6 +275,10 @@ static struct eth_dev_ops ixgbe_eth_dev_ops = {
 	.vlan_tpid_set        = ixgbe_vlan_tpid_set,
 	.vlan_offload_set     = ixgbe_vlan_offload_set,
 	.vlan_strip_queue_set = ixgbe_vlan_strip_queue_set,
+	.rx_queue_start	      = ixgbe_dev_rx_queue_start,
+	.rx_queue_stop        = ixgbe_dev_rx_queue_stop,
+	.tx_queue_start	      = ixgbe_dev_tx_queue_start,
+	.tx_queue_stop        = ixgbe_dev_tx_queue_stop,
 	.rx_queue_setup       = ixgbe_dev_rx_queue_setup,
 	.rx_queue_release     = ixgbe_dev_rx_queue_release,
 	.rx_queue_count       = ixgbe_dev_rx_queue_count,
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
index 7c6139b..ae52c8e 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.h
@@ -245,6 +245,14 @@ void ixgbe_dev_tx_init(struct rte_eth_dev *dev);
 
 void ixgbe_dev_rxtx_start(struct rte_eth_dev *dev);
 
+int ixgbe_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+
+int ixgbe_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+
+int ixgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+
+int ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+
 int ixgbevf_dev_rx_init(struct rte_eth_dev *dev);
 
 void ixgbevf_dev_tx_init(struct rte_eth_dev *dev);
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 55414b9..2a98051 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -1588,7 +1588,7 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
  * descriptors should meet the following condition:
  *      (num_ring_desc * sizeof(rx/tx descriptor)) % 128 == 0
  */
-#define IXGBE_MIN_RING_DESC 64
+#define IXGBE_MIN_RING_DESC 32
 #define IXGBE_MAX_RING_DESC 4096
 
 /*
@@ -1836,6 +1836,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->port_id = dev->data->port_id;
 	txq->txq_flags = tx_conf->txq_flags;
 	txq->ops = &def_txq_ops;
+	txq->start_tx_per_q = tx_conf->start_tx_per_q;
 
 	/*
 	 * Modification to set VFTDT for virtual function if vf is detected
@@ -2078,6 +2079,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->crc_len = (uint8_t) ((dev->data->dev_conf.rxmode.hw_strip_crc) ?
 							0 : ETHER_CRC_LEN);
 	rxq->drop_en = rx_conf->rx_drop_en;
+	rxq->start_rx_per_q = rx_conf->start_rx_per_q;
 
 	/*
 	 * Allocate RX ring hardware descriptors. A memzone large enough to
@@ -3025,6 +3027,14 @@ ixgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 
 	}
 
+	/* PF DMA Tx General Switch Control: enable VMDq loopback */
+	if (cfg->enable_loop_back) {
+		IXGBE_WRITE_REG(hw, IXGBE_PFDTXGSWC, IXGBE_PFDTXGSWC_VT_LBEN);
+		for (i = 0; i < RTE_IXGBE_VMTXSW_REGISTER_COUNT; i++) {
+			IXGBE_WRITE_REG(hw, IXGBE_VMTXSW(i), UINT32_MAX);
+		}
+	}
+
 	IXGBE_WRITE_FLUSH(hw);
 }
 
@@ -3234,7 +3244,6 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	uint32_t rxcsum;
 	uint16_t buf_size;
 	uint16_t i;
-	int ret;
 	
 	PMD_INIT_FUNC_TRACE();
 	hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -3289,11 +3298,6 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
 
-		/* Allocate buffers for descriptor rings */
-		ret = ixgbe_alloc_rx_queue_mbufs(rxq);
-		if (ret)
-			return ret;
-
 		/*
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure.
@@ -3500,10 +3504,8 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 	struct igb_rx_queue *rxq;
 	uint32_t txdctl;
 	uint32_t dmatxctl;
-	uint32_t rxdctl;
 	uint32_t rxctrl;
 	uint16_t i;
-	int poll_ms;
 
 	PMD_INIT_FUNC_TRACE();
 	hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -3526,55 +3528,214 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		txq = dev->data->tx_queues[i];
-		txdctl = IXGBE_READ_REG(hw, IXGBE_TXDCTL(txq->reg_idx));
-		txdctl |= IXGBE_TXDCTL_ENABLE;
-		IXGBE_WRITE_REG(hw, IXGBE_TXDCTL(txq->reg_idx), txdctl);
-
-		/* Wait until TX Enable ready */
-		if (hw->mac.type == ixgbe_mac_82599EB) {
-			poll_ms = 10;
-			do {
-				rte_delay_ms(1);
-				txdctl = IXGBE_READ_REG(hw, IXGBE_TXDCTL(txq->reg_idx));
-			} while (--poll_ms && !(txdctl & IXGBE_TXDCTL_ENABLE));
-			if (!poll_ms)
-				PMD_INIT_LOG(ERR, "Could not enable "
-					     "Tx Queue %d\n", i);
+		if (!txq->start_tx_per_q) {
+			ixgbe_dev_tx_queue_start(dev, i);
 		}
 	}
+
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
+		if (!rxq->start_rx_per_q) {
+			ixgbe_dev_rx_queue_start(dev, i);
+		}
+	}
+
+	/* Enable Receive engine */
+	rxctrl = IXGBE_READ_REG(hw, IXGBE_RXCTRL);
+	if (hw->mac.type == ixgbe_mac_82598EB)
+		rxctrl |= IXGBE_RXCTRL_DMBYPS;
+	rxctrl |= IXGBE_RXCTRL_RXEN;
+	hw->mac.ops.enable_rx_dma(hw, rxctrl);
+
+	/* If loopback mode is enabled for 82599, set up the link accordingly */
+	if (hw->mac.type == ixgbe_mac_82599EB &&
+			dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX)
+		ixgbe_setup_loopback_link_82599(hw);
+
+}
+
+/*
+ * Start Receive Units for specified queue.
+ */
+int
+ixgbe_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ixgbe_hw     *hw;
+	struct igb_rx_queue *rxq;
+	uint32_t rxdctl;
+	int poll_ms;
+
+	PMD_INIT_FUNC_TRACE();
+	hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (rx_queue_id < dev->data->nb_rx_queues) {
+		rxq = dev->data->rx_queues[rx_queue_id];
+
+		/* Allocate buffers for descriptor rings */
+		if (ixgbe_alloc_rx_queue_mbufs(rxq) != 0) {
+			PMD_INIT_LOG(ERR, "Could not alloc mbuf for queue:%d\n", rx_queue_id);
+			return -1;
+		}
 		rxdctl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(rxq->reg_idx));
 		rxdctl |= IXGBE_RXDCTL_ENABLE;
 		IXGBE_WRITE_REG(hw, IXGBE_RXDCTL(rxq->reg_idx), rxdctl);
 
 		/* Wait until RX Enable ready */
-		poll_ms = 10;
+		poll_ms = RTE_IXGBE_REGISTER_POLL_WAIT_10_MS;
 		do {
 			rte_delay_ms(1);
 			rxdctl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(rxq->reg_idx));
 		} while (--poll_ms && !(rxdctl & IXGBE_RXDCTL_ENABLE));
 		if (!poll_ms)
 			PMD_INIT_LOG(ERR, "Could not enable "
-				     "Rx Queue %d\n", i);
+				     "Rx Queue %d\n", rx_queue_id);
 		rte_wmb();
+		IXGBE_WRITE_REG(hw, IXGBE_RDH(rxq->reg_idx), 0);
 		IXGBE_WRITE_REG(hw, IXGBE_RDT(rxq->reg_idx), rxq->nb_rx_desc - 1);
-	}
+	} else
+		return -1;
 
-	/* Enable Receive engine */
-	rxctrl = IXGBE_READ_REG(hw, IXGBE_RXCTRL);
-	if (hw->mac.type == ixgbe_mac_82598EB)
-		rxctrl |= IXGBE_RXCTRL_DMBYPS;
-	rxctrl |= IXGBE_RXCTRL_RXEN;
-	hw->mac.ops.enable_rx_dma(hw, rxctrl);
+	return 0;
+}
 
-	/* If loopback mode is enabled for 82599, set up the link accordingly */
-	if (hw->mac.type == ixgbe_mac_82599EB &&
-			dev->data->dev_conf.lpbk_mode == IXGBE_LPBK_82599_TX_RX)
-		ixgbe_setup_loopback_link_82599(hw);
+/*
+ * Stop Receive Units for specified queue.
+ */
+int
+ixgbe_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ixgbe_hw     *hw;
+	struct igb_rx_queue *rxq;
+	uint32_t rxdctl;
+	int poll_ms;
+
+	PMD_INIT_FUNC_TRACE();
+	hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (rx_queue_id < dev->data->nb_rx_queues) {
+		rxq = dev->data->rx_queues[rx_queue_id];
 
+		rxdctl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(rxq->reg_idx));
+		rxdctl &= ~IXGBE_RXDCTL_ENABLE;
+		IXGBE_WRITE_REG(hw, IXGBE_RXDCTL(rxq->reg_idx), rxdctl);
+
+		/* Wait until the RX Enable bit clears */
+		poll_ms = RTE_IXGBE_REGISTER_POLL_WAIT_10_MS;
+		do {
+			rte_delay_ms(1);
+			rxdctl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(rxq->reg_idx));
+		} while (--poll_ms && (rxdctl & IXGBE_RXDCTL_ENABLE));
+		if (!poll_ms)
+			PMD_INIT_LOG(ERR, "Could not disable "
+				     "Rx Queue %d\n", rx_queue_id);
+
+		rte_delay_us(RTE_IXGBE_WAIT_100_US);
+
+		ixgbe_rx_queue_release_mbufs(rxq);
+		ixgbe_reset_rx_queue(rxq);
+	} else
+		return -1;
+
+	return 0;
+}
+
+
+/*
+ * Start Transmit Units for specified queue.
+ */
+int
+ixgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ixgbe_hw     *hw;
+	struct igb_tx_queue *txq;
+	uint32_t txdctl;
+	int poll_ms;
+
+	PMD_INIT_FUNC_TRACE();
+	hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (tx_queue_id < dev->data->nb_tx_queues) {
+		txq = dev->data->tx_queues[tx_queue_id];
+		txdctl = IXGBE_READ_REG(hw, IXGBE_TXDCTL(txq->reg_idx));
+		txdctl |= IXGBE_TXDCTL_ENABLE;
+		IXGBE_WRITE_REG(hw, IXGBE_TXDCTL(txq->reg_idx), txdctl);
+
+		/* Wait until TX Enable ready */
+		if (hw->mac.type == ixgbe_mac_82599EB) {
+			poll_ms = RTE_IXGBE_REGISTER_POLL_WAIT_10_MS;
+			do {
+				rte_delay_ms(1);
+				txdctl = IXGBE_READ_REG(hw, IXGBE_TXDCTL(txq->reg_idx));
+			} while (--poll_ms && !(txdctl & IXGBE_TXDCTL_ENABLE));
+			if (!poll_ms)
+				PMD_INIT_LOG(ERR, "Could not enable "
+					     "Tx Queue %d\n", tx_queue_id);
+		}
+		rte_wmb();
+		IXGBE_WRITE_REG(hw, IXGBE_TDH(txq->reg_idx), 0);
+		IXGBE_WRITE_REG(hw, IXGBE_TDT(txq->reg_idx), 0);
+	} else
+		return -1;
+
+	return 0;
 }
 
+/*
+ * Stop Transmit Units for specified queue.
+ */
+int
+ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ixgbe_hw     *hw;
+	struct igb_tx_queue *txq;
+	uint32_t txdctl;
+	uint32_t txtdh, txtdt;
+	int poll_ms;
+
+	PMD_INIT_FUNC_TRACE();
+	hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	if (tx_queue_id < dev->data->nb_tx_queues) {
+		txq = dev->data->tx_queues[tx_queue_id];
+
+		/* Wait until TX queue is empty */
+		if (hw->mac.type == ixgbe_mac_82599EB) {
+			poll_ms = RTE_IXGBE_REGISTER_POLL_WAIT_10_MS;
+			do {
+				rte_delay_us(RTE_IXGBE_WAIT_100_US);
+				txtdh = IXGBE_READ_REG(hw, IXGBE_TDH(txq->reg_idx));
+				txtdt = IXGBE_READ_REG(hw, IXGBE_TDT(txq->reg_idx));
+			} while (--poll_ms && (txtdh != txtdt));
+			if (!poll_ms)
+				PMD_INIT_LOG(ERR, "Tx Queue %d is not empty when stopping.\n",
+					tx_queue_id);
+		}
+
+		txdctl = IXGBE_READ_REG(hw, IXGBE_TXDCTL(txq->reg_idx));
+		txdctl &= ~IXGBE_TXDCTL_ENABLE;
+		IXGBE_WRITE_REG(hw, IXGBE_TXDCTL(txq->reg_idx), txdctl);
+
+		/* Wait until the TX Enable bit clears */
+		if (hw->mac.type == ixgbe_mac_82599EB) {
+			poll_ms = RTE_IXGBE_REGISTER_POLL_WAIT_10_MS;
+			do {
+				rte_delay_ms(1);
+				txdctl = IXGBE_READ_REG(hw, IXGBE_TXDCTL(txq->reg_idx));
+			} while (--poll_ms && (txdctl & IXGBE_TXDCTL_ENABLE));
+			if (!poll_ms)
+				PMD_INIT_LOG(ERR, "Could not disable "
+					     "Tx Queue %d\n", tx_queue_id);
+		}
+
+		if (txq->ops != NULL) {
+			txq->ops->release_mbufs(txq);
+			txq->ops->reset(txq);
+		}
+	} else
+		return -1;
+
+	return 0;
+}
 
 /*
  * [VF] Initializes Receive Unit.
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
index 446eeb7..f9d0177 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
@@ -67,6 +67,10 @@
 #define rte_packet_prefetch(p)  do {} while(0)
 #endif
 
+#define RTE_IXGBE_REGISTER_POLL_WAIT_10_MS  10
+#define RTE_IXGBE_WAIT_100_US               100
+#define RTE_IXGBE_VMTXSW_REGISTER_COUNT     2
+
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
  */
@@ -129,6 +133,7 @@ struct igb_rx_queue {
 	uint8_t             port_id;  /**< Device port identifier. */
 	uint8_t             crc_len;  /**< 0 if CRC stripped, 4 otherwise. */
 	uint8_t             drop_en;  /**< If not 0, set SRRCTL.Drop_En. */
+	uint8_t             start_rx_per_q;
 #ifdef RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC
 	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
 	struct rte_mbuf fake_mbuf;
@@ -193,6 +198,7 @@ struct igb_tx_queue {
 	/** Hardware context0 history. */
 	struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
 	struct ixgbe_txq_ops *ops;          /**< txq ops */
+	uint8_t             start_tx_per_q;
 };
 
 struct ixgbe_txq_ops {
-- 
1.9.0


* [dpdk-dev] [PATCH v2 3/3] examples/vhost: Support user space vhost zero copy
  2014-05-20  5:25 [dpdk-dev] [PATCH v2 0/3] Support zero copy RX/TX in user space vhost Ouyang Changchun
  2014-05-20  5:25 ` [dpdk-dev] [PATCH v2 1/3] ethdev: Add API to support queue start and stop functionality for RX/TX Ouyang Changchun
  2014-05-20  5:25 ` [dpdk-dev] [PATCH v2 2/3] ixgbe: Implement queue start and stop functionality in IXGBE PMD Ouyang Changchun
@ 2014-05-20  5:25 ` Ouyang Changchun
  2014-05-27 23:01 ` [dpdk-dev] [PATCH v2 0/3] Support zero copy RX/TX in user space vhost Thomas Monjalon
  3 siblings, 0 replies; 6+ messages in thread
From: Ouyang Changchun @ 2014-05-20  5:25 UTC (permalink / raw)
  To: dev

This patch supports user space vhost zero copy. It removes packet copying between host and guest in RX/TX.
It introduces an extra ring to store the detached mbufs. At initialization stage all mbufs are put into
this ring; when a guest starts, vhost gets the available buffer addresses allocated by the guest for RX,
translates them into host space addresses, attaches them to mbufs and puts the attached mbufs into the
mempool.
Queue start and DMA refill then get mbufs from the mempool and use them to set the DMA addresses.

For TX, it gets the buffer addresses of the packets to be transmitted from the guest, translates them
to host space addresses, attaches them to mbufs and puts the mbufs into the TX queues.
After TX finishes, it pulls the mbufs out of the mempool, detaches them and puts them back into the extra ring.
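
The mempool and the extra ring are paired per queue in the struct vpool introduced below. A hedged
sketch of how one such pair could be created with standard rte_mempool/rte_ring calls (not part of
this patch; vpool_create() and the names are illustrative, MBUF_SIZE_ZCP, MBUF_CACHE_SIZE_ZCP and
VIRTIO_DESCRIPTOR_LEN_ZCP are the macros defined in this patch, and in the sample every mbuf is then
moved from the pool into the ring at init time):

    #include <stdio.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_ring.h>

    /* Sketch: create the mempool plus the shadow ring that holds the
     * detached mbufs for one zero-copy queue. nb_mbuf should be a power
     * of two, since rte_ring_create() requires it. */
    static int
    vpool_create(struct vpool *vp, unsigned int queue_id,
                 unsigned int nb_mbuf, int socket_id)
    {
        char name[RTE_RING_NAMESIZE];

        snprintf(name, sizeof(name), "zcp_pool_%u", queue_id);
        vp->pool = rte_mempool_create(name, nb_mbuf, MBUF_SIZE_ZCP,
                                      MBUF_CACHE_SIZE_ZCP,
                                      sizeof(struct rte_pktmbuf_pool_private),
                                      rte_pktmbuf_pool_init, NULL,
                                      rte_pktmbuf_init, NULL,
                                      socket_id, 0);
        if (vp->pool == NULL)
            return -1;

        snprintf(name, sizeof(name), "zcp_ring_%u", queue_id);
        vp->ring = rte_ring_create(name, nb_mbuf, socket_id,
                                   RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (vp->ring == NULL)
            return -1;

        vp->buf_size = VIRTIO_DESCRIPTOR_LEN_ZCP;
        return 0;
    }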

Signed-off-by: Ouyang Changchun <changchun.ouyang@intel.com>
---
 examples/vhost/main.c       | 1410 ++++++++++++++++++++++++++++++++++++++-----
 examples/vhost/virtio-net.c |  120 +++-
 examples/vhost/virtio-net.h |   15 +-
 3 files changed, 1387 insertions(+), 158 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 816a71a..674608c 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -48,6 +48,7 @@
 #include <rte_ethdev.h>
 #include <rte_log.h>
 #include <rte_string_fns.h>
+#include <rte_malloc.h>
 
 #include "main.h"
 #include "virtio-net.h"
@@ -70,6 +71,14 @@
 #define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
 
 /*
+ * No frame data buffers allocated by the host are required for the zero copy implementation;
+ * the guest allocates the frame data buffers and vhost uses them directly.
+ */
+#define VIRTIO_DESCRIPTOR_LEN_ZCP 1518
+#define MBUF_SIZE_ZCP (VIRTIO_DESCRIPTOR_LEN_ZCP + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+#define MBUF_CACHE_SIZE_ZCP 0
+
+/*
  * RX and TX Prefetch, Host, and Write-back threshold values should be
  * carefully set for optimal performance. Consult the network
  * controller's datasheet and supporting DPDK documentation for guidance
@@ -108,6 +117,24 @@
 #define RTE_TEST_RX_DESC_DEFAULT 1024 
 #define RTE_TEST_TX_DESC_DEFAULT 512
 
+/*
+ * These two macros may need refinement for legacy and DPDK-based front ends:
+ * max vring avail descriptors/entries from the guest minus MAX_PKT_BURST,
+ * rounded to a power of 2.
+ */
+/*
+ * For a legacy front end with 128 descriptors, half are used for the
+ * virtio headers and the other half for the mbufs.
+ */
+#define RTE_TEST_RX_DESC_DEFAULT_ZCP 32   /* legacy: 32, DPDK virt FE: 128. */
+#define RTE_TEST_TX_DESC_DEFAULT_ZCP 64   /* legacy: 64, DPDK virt FE: 64.  */
+
+/* Get first 4 bytes in mbuf headroom. */
+#define MBUF_HEADROOM_UINT32(mbuf) (*(uint32_t*)((uint8_t*)(mbuf) + sizeof(struct rte_mbuf)))
+
+/* true if x is a power of 2 */
+#define POWEROF2(x) ((((x)-1) & (x)) == 0)
+
 #define INVALID_PORT_ID 0xFF
 
 /* Max number of devices. Limited by vmdq. */
@@ -138,8 +165,39 @@ static uint32_t num_switching_cores = 0;
 static uint32_t num_queues = 0;
 uint32_t num_devices = 0;
 
+/* Enable zero copy: guest packet buffers are used directly for NIC DMA; disabled by default. */
+static uint32_t zero_copy = 0;
+
+/* Number of RX/TX descriptors to use when zero copy is enabled. */
+static uint32_t num_rx_descriptor = RTE_TEST_RX_DESC_DEFAULT_ZCP;
+static uint32_t num_tx_descriptor = RTE_TEST_TX_DESC_DEFAULT_ZCP;
+
+/* max ring descriptor, ixgbe, i40e, e1000 all are 4096. */
+#define MAX_RING_DESC 4096
+
+struct vpool {
+	struct rte_mempool * pool;
+	struct rte_ring * ring;
+	uint32_t buf_size;
+} vpool_array[MAX_QUEUES+MAX_QUEUES];
+
 /* Enable VM2VM communications. If this is disabled then the MAC address compare is skipped. */
-static uint32_t enable_vm2vm = 1;
+typedef enum {
+	VM2VM_DISABLED = 0,
+	VM2VM_SOFTWARE = 1,
+	VM2VM_HARDWARE = 2,
+	VM2VM_LAST
+} vm2vm_type;
+static vm2vm_type vm2vm_mode = VM2VM_SOFTWARE;
+
+/* The type of host physical address translated from guest physical address. */
+typedef enum {
+	PHYS_ADDR_CONTINUOUS = 0,
+	PHYS_ADDR_CROSS_SUBREG = 1,
+	PHYS_ADDR_INVALID = 2,
+	PHYS_ADDR_LAST
+} hpa_type;
+
 /* Enable stats. */
 static uint32_t enable_stats = 0;
 /* Enable retries on RX. */
@@ -159,7 +217,7 @@ static uint32_t dev_index = 0;
 extern uint64_t VHOST_FEATURES;
 
 /* Default configuration for rx and tx thresholds etc. */
-static const struct rte_eth_rxconf rx_conf_default = {
+static struct rte_eth_rxconf rx_conf_default = {
 	.rx_thresh = {
 		.pthresh = RX_PTHRESH,
 		.hthresh = RX_HTHRESH,
@@ -173,7 +231,7 @@ static const struct rte_eth_rxconf rx_conf_default = {
  * Controller and the DPDK ixgbe/igb PMD. Consider using other values for other
  * network controllers and/or network drivers.
  */
-static const struct rte_eth_txconf tx_conf_default = {
+static struct rte_eth_txconf tx_conf_default = {
 	.tx_thresh = {
 		.pthresh = TX_PTHRESH,
 		.hthresh = TX_HTHRESH,
@@ -184,7 +242,7 @@ static const struct rte_eth_txconf tx_conf_default = {
 };
 
 /* empty vmdq configuration structure. Filled in programatically */
-static const struct rte_eth_conf vmdq_conf_default = {
+static struct rte_eth_conf vmdq_conf_default = {
 	.rxmode = {
 		.mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
 		.split_hdr_size = 0,
@@ -223,6 +281,7 @@ static unsigned lcore_ids[RTE_MAX_LCORE];
 static uint8_t ports[RTE_MAX_ETHPORTS];
 static unsigned num_ports = 0; /**< The number of ports specified in command line */
 
+static const uint16_t external_pkt_default_vlan_tag = 2000;
 const uint16_t vlan_tags[] = {
 	1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007,
 	1008, 1009, 1010, 1011,	1012, 1013, 1014, 1015,
@@ -254,6 +313,9 @@ struct mbuf_table {
 /* TX queue for each data core. */
 struct mbuf_table lcore_tx_queue[RTE_MAX_LCORE];
 
+/* TX queue for each virtio device for zero copy. */
+struct mbuf_table tx_queue_zcp[MAX_QUEUES];
+
 /* Vlan header struct used to insert vlan tags on TX. */
 struct vlan_ethhdr {
 	unsigned char   h_dest[ETH_ALEN];
@@ -263,6 +325,20 @@ struct vlan_ethhdr {
 	__be16          h_vlan_encapsulated_proto;
 };
 
+/* IPv4 Header */
+struct ipv4_hdr {
+	uint8_t  version_ihl;		/**< version and header length */
+	uint8_t  type_of_service;	/**< type of service */
+	uint16_t total_length;		/**< length of packet */
+	uint16_t packet_id;		/**< packet ID */
+	uint16_t fragment_offset;	/**< fragmentation offset */
+	uint8_t  time_to_live;		/**< time to live */
+	uint8_t  next_proto_id;		/**< protocol ID */
+	uint16_t hdr_checksum;		/**< header checksum */
+	uint32_t src_addr;		/**< source address */
+	uint32_t dst_addr;		/**< destination address */
+} __attribute__((__packed__));
+
 /* Header lengths. */
 #define VLAN_HLEN       4
 #define VLAN_ETH_HLEN   18
@@ -270,9 +346,11 @@ struct vlan_ethhdr {
 /* Per-device statistics struct */
 struct device_statistics {
 	uint64_t tx_total;
-	rte_atomic64_t rx_total;
+	rte_atomic64_t rx_total_atomic;
+	uint64_t rx_total;
 	uint64_t tx;
-	rte_atomic64_t rx;
+	rte_atomic64_t rx_atomic;
+	uint64_t rx;
 } __rte_cache_aligned;
 struct device_statistics dev_statistics[MAX_DEVICES];
 
@@ -289,6 +367,7 @@ get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_devices)
 	memset(&conf, 0, sizeof(conf));
 	conf.nb_queue_pools = (enum rte_eth_nb_pools)num_devices;
 	conf.nb_pool_maps = num_devices;
+	conf.enable_loop_back = vmdq_conf_default.rx_adv_conf.vmdq_rx_conf.enable_loop_back;
 
 	for (i = 0; i < conf.nb_pool_maps; i++) {
 		conf.pool_map[i].vlan_id = vlan_tags[ i ];
@@ -321,12 +400,12 @@ validate_num_devices(uint32_t max_nb_devices)
  * coming from the mbuf_pool passed as parameter
  */
 static inline int
-port_init(uint8_t port, struct rte_mempool *mbuf_pool)
+port_init(uint8_t port)
 {
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_conf port_conf;
-	uint16_t rx_rings, tx_rings = (uint16_t)rte_lcore_count();
-	const uint16_t rx_ring_size = RTE_TEST_RX_DESC_DEFAULT, tx_ring_size = RTE_TEST_TX_DESC_DEFAULT;
+	uint16_t rx_rings, tx_rings;
+	uint16_t rx_ring_size, tx_ring_size;
 	int retval;
 	uint16_t q;
 
@@ -337,6 +416,16 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	num_devices = dev_info.max_vmdq_pools;
 	num_queues = dev_info.max_rx_queues;
 
+	if (zero_copy) {
+		rx_ring_size = num_rx_descriptor;
+		tx_ring_size = num_tx_descriptor;
+		tx_rings = dev_info.max_tx_queues;
+	} else {
+		rx_ring_size = RTE_TEST_RX_DESC_DEFAULT;
+		tx_ring_size = RTE_TEST_TX_DESC_DEFAULT;
+		tx_rings = (uint16_t)rte_lcore_count();
+	}
+
 	retval = validate_num_devices(MAX_DEVICES);
 	if (retval < 0)
 		return retval;
@@ -358,7 +447,7 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	for (q = 0; q < rx_rings; q ++) {
 		retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
 						rte_eth_dev_socket_id(port), &rx_conf_default,
-						mbuf_pool);
+						vpool_array[q].pool);
 		if (retval < 0)
 			return retval;
 	}
@@ -371,8 +460,10 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 
 	/* Start the device. */
 	retval  = rte_eth_dev_start(port);
-	if (retval < 0)
+	if (retval < 0) {
+		RTE_LOG(ERR, DATA, "Failed to start the device.\n");
 		return retval;
+	}
 
 	rte_eth_macaddr_get(port, &vmdq_ports_eth_addr[port]);
 	RTE_LOG(INFO, PORT, "Max virtio devices supported: %u\n", num_devices);
@@ -457,16 +548,19 @@ parse_num_opt(const char *q_arg, uint32_t max_valid_value)
 static void
 us_vhost_usage(const char *prgname)
 {
-	RTE_LOG(INFO, CONFIG, "%s [EAL options] -- -p PORTMASK --vm2vm [0|1] --rx_retry [0|1] --mergeable [0|1] --stats [0-N] --dev-basename <name> --dev-index [0-N] --nb-devices ND\n"
+	RTE_LOG(INFO, CONFIG, "%s [EAL options] -- -p PORTMASK --vm2vm [0|1|2] --rx_retry [0|1] --mergeable [0|1] --stats [0-N] --dev-basename <name> --dev-index [0-N] --nb-devices ND\n"
 	"		-p PORTMASK: Set mask for ports to be used by application\n"
-	"		--vm2vm [0|1]: disable/enable(default) vm2vm comms\n"
+	"		--vm2vm [0|1|2]: disable/software(default)/hardware vm2vm comms\n"
 	"		--rx-retry [0|1]: disable/enable(default) retries on rx. Enable retry if destintation queue is full\n"
 	"		--rx-retry-delay [0-N]: timeout(in usecond) between retries on RX. This makes effect only if retries on rx enabled\n"
 	"		--rx-retry-num [0-N]: the number of retries on rx. This makes effect only if retries on rx enabled\n"
 	"		--mergeable [0|1]: disable(default)/enable RX mergeable buffers\n"
 	"		--stats [0-N]: 0: Disable stats, N: Time in seconds to print stats\n"
 	"		--dev-basename: The basename to be used for the character device.\n"
-	"		--dev-index [0-N]: Defaults to zero if not used. Index is appended to basename.\n",
+	"		--dev-index [0-N]: Defaults to zero if not used. Index is appended to basename.\n"
+	"		--zero-copy [0|1]: disable(default)/enable rx/tx zero copy\n"
+	"		--rx-desc-num [0-N]: the number of descriptors on rx, used only when zero copy is enabled.\n"
+	"		--tx-desc-num [0-N]: the number of descriptors on tx, used only when zero copy is enabled.\n",
 	       prgname);
 }
 
@@ -489,7 +583,10 @@ us_vhost_parse_args(int argc, char **argv)
 		{"stats", required_argument, NULL, 0},
 		{"dev-basename", required_argument, NULL, 0},
 		{"dev-index", required_argument, NULL, 0},
-		{NULL, 0, 0, 0}
+		{"zero-copy", required_argument, NULL, 0},
+		{"rx-desc-num", required_argument, NULL, 0},
+		{"tx-desc-num", required_argument, NULL, 0},
+		{NULL, 0, 0, 0},
 	};
 
 	/* Parse command line */
@@ -508,13 +605,13 @@ us_vhost_parse_args(int argc, char **argv)
 		case 0:
 			/* Enable/disable vm2vm comms. */
 			if (!strncmp(long_option[option_index].name, "vm2vm", MAX_LONG_OPT_SZ)) {
-				ret = parse_num_opt(optarg, 1);
+				ret = parse_num_opt(optarg, (VM2VM_LAST - 1));
 				if (ret == -1) {
-					RTE_LOG(INFO, CONFIG, "Invalid argument for vm2vm [0|1]\n");
+					RTE_LOG(INFO, CONFIG, "Invalid argument for vm2vm [0|1|2]\n");
 					us_vhost_usage(prgname);
 					return -1;
 				} else {
-					enable_vm2vm = ret;
+					vm2vm_mode = (vm2vm_type)ret;
 				}
 			}
 
@@ -600,6 +697,51 @@ us_vhost_parse_args(int argc, char **argv)
 				}
 			}
 
+			/* Enable/disable rx/tx zero copy. */
+			if (!strncmp(long_option[option_index].name, "zero-copy", MAX_LONG_OPT_SZ)) {
+				ret = parse_num_opt(optarg, 1);
+				if (ret == -1) {
+					RTE_LOG(INFO, CONFIG, "Invalid argument for zero-copy [0|1]\n");
+					us_vhost_usage(prgname);
+					return -1;
+				} else {
+					zero_copy = ret;
+				}
+
+				if (zero_copy) {
+#ifdef RTE_MBUF_SCATTER_GATHER
+					RTE_LOG(ERR, CONFIG, "Before running zero copy vhost APP, please disable RTE_MBUF_SCATTER_GATHER\n"
+							"in config file and then rebuild DPDK core lib! \n"
+							"Otherwise please disable zero copy flag in command line!\n");
+					return -1;
+#endif
+				}
+			}
+
+			/* Specify the descriptor number on RX. */
+			if (!strncmp(long_option[option_index].name, "rx-desc-num", MAX_LONG_OPT_SZ)) {
+				ret = parse_num_opt(optarg, MAX_RING_DESC);
+				if ((ret == -1) || (!POWEROF2(ret))) {
+					RTE_LOG(INFO, CONFIG, "Invalid argument for rx-desc-num [0-N], power of 2 required.\n");
+					us_vhost_usage(prgname);
+					return -1;
+				} else {
+					num_rx_descriptor = ret;
+				}
+			}
+
+			/* Specify the descriptor number on TX. */
+			if (!strncmp(long_option[option_index].name, "tx-desc-num", MAX_LONG_OPT_SZ)) {
+				ret = parse_num_opt(optarg, MAX_RING_DESC);
+				if ((ret == -1) || (!POWEROF2(ret))) {
+					RTE_LOG(INFO, CONFIG, "Invalid argument for tx-desc-num [0-N], power of 2 required.\n");
+					us_vhost_usage(prgname);
+					return -1;
+				} else {
+					num_tx_descriptor = ret;
+				}
+			}
+
 			break;
 
 			/* Invalid option - print options. */
@@ -620,6 +762,12 @@ us_vhost_parse_args(int argc, char **argv)
 		return -1;
 	}
 
+	if ((zero_copy == 1) && (vm2vm_mode == VM2VM_SOFTWARE)) {
+		RTE_LOG(INFO, PORT, "Vhost zero copy doesn't support software vm2vm, "
+				"please specify 'vm2vm 2' to use hardware vm2vm.\n");
+		return -1;
+	}
+
 	return 0;
 }
 
@@ -701,6 +849,39 @@ gpa_to_vva(struct virtio_net *dev, uint64_t guest_pa)
 }
 
 /*
+ * Function to convert guest physical addresses to host physical addresses. This
+ * is used to convert virtio buffer addresses for zero copy.
+ */
+static inline uint64_t __attribute__((always_inline))
+gpa_to_hpa(struct virtio_net *dev, uint64_t guest_pa, uint32_t buf_len, hpa_type *addr_type)
+{
+	struct virtio_memory_regions_hpa *region;
+	uint32_t regionidx;
+	uint64_t vhost_pa = 0;
+
+	*addr_type = PHYS_ADDR_INVALID;
+
+	for (regionidx = 0; regionidx < dev->mem->nregions_hpa; regionidx++) {
+		region = &dev->mem->regions_hpa[regionidx];
+		if ((guest_pa >= region->guest_phys_address) &&
+			(guest_pa <= region->guest_phys_address_end)) {
+			vhost_pa = region->host_phys_addr_offset + guest_pa;
+			if (likely((guest_pa + buf_len - 1) <= region->guest_phys_address_end)) {
+				*addr_type = PHYS_ADDR_CONTINUOUS;
+			} else {
+				*addr_type = PHYS_ADDR_CROSS_SUBREG;
+			}
+			break;
+		}
+	}
+
+	LOG_DEBUG(DATA, "(%"PRIu64") GPA %p| HPA %p\n",
+		dev->device_fh, (void*)(uintptr_t)guest_pa, (void*)(uintptr_t)vhost_pa);
+
+	return vhost_pa;
+}
+
+/*
  * This function adds buffers to the virtio devices RX virtqueue. Buffers can
  * be received from the physical port or from another virtio device. A packet
  * count is returned to indicate the number of packets that were succesfully
@@ -894,7 +1075,6 @@ link_vmdq(struct virtio_net *dev, struct rte_mbuf *m)
 
 	/* vlan_tag currently uses the device_id. */
 	dev->vlan_tag = vlan_tags[dev->device_fh];
-	dev->vmdq_rx_q = dev->device_fh * (num_queues/num_devices);
 
 	/* Print out VMDQ registration info. */
 	RTE_LOG(INFO, DATA, "(%"PRIu64") MAC_ADDRESS %02x:%02x:%02x:%02x:%02x:%02x and VLAN_TAG %d registered\n",
@@ -991,8 +1171,8 @@ virtio_tx_local(struct virtio_net *dev, struct rte_mbuf *m)
 				/*send the packet to the local virtio device*/
 				ret = virtio_dev_rx(dev_ll->dev, &m, 1);
 				if (enable_stats) {
-					rte_atomic64_add(&dev_statistics[dev_ll->dev->device_fh].rx_total, 1);
-					rte_atomic64_add(&dev_statistics[dev_ll->dev->device_fh].rx, ret);
+					rte_atomic64_add(&dev_statistics[dev_ll->dev->device_fh].rx_total_atomic, 1);
+					rte_atomic64_add(&dev_statistics[dev_ll->dev->device_fh].rx_atomic, ret);
 					dev_statistics[dev->device_fh].tx_total++;
 					dev_statistics[dev->device_fh].tx += ret;
 				}
@@ -1017,14 +1197,36 @@ virtio_tx_route(struct virtio_net* dev, struct rte_mbuf *m, struct rte_mempool *
 	struct vlan_ethhdr *vlan_hdr;
 	struct rte_mbuf **m_table;
 	struct rte_mbuf *mbuf;
-	unsigned len, ret;
+	unsigned len, ret, offset = 0;
 	const uint16_t lcore_id = rte_lcore_id();
+	struct virtio_net_data_ll *dev_ll = ll_root_used;
+	struct ether_hdr *pkt_hdr = (struct ether_hdr *)m->pkt.data;
 
 	/*check if destination is local VM*/
-	if (enable_vm2vm && (virtio_tx_local(dev, m) == 0)) {
+	if ((vm2vm_mode == VM2VM_SOFTWARE) && (virtio_tx_local(dev, m) == 0)) {
 		return;
 	}
 
+	if (vm2vm_mode == VM2VM_HARDWARE) {
+		while (dev_ll != NULL) {
+			if ((dev_ll->dev->ready == DEVICE_RX) && ether_addr_cmp(&(pkt_hdr->d_addr),
+				 &dev_ll->dev->mac_address)) {
+				/* Drop the packet if the TX packet is destined for the TX device. */
+				if (dev_ll->dev->device_fh == dev->device_fh) {
+					LOG_DEBUG(DATA, "(%"PRIu64") TX: Source and destination MAC addresses are the same. Dropping packet.\n",
+								dev_ll->dev->device_fh);
+					return ;
+				}
+				offset = 4;
+				vlan_tag = (uint16_t)vlan_tags[(uint16_t)dev_ll->dev->device_fh];
+				LOG_DEBUG(DATA, "(%"PRIu64") TX: pkt to local VM device id:(%"PRIu64") vlan tag: %d.\n", 
+					dev->device_fh, dev_ll->dev->device_fh, vlan_tag);
+				break;
+			}
+			dev_ll = dev_ll->next;
+		}
+	}
+
 	LOG_DEBUG(DATA, "(%"PRIu64") TX: MAC address is external\n", dev->device_fh);
 
 	/*Add packet to the port tx queue*/
@@ -1038,7 +1240,7 @@ virtio_tx_route(struct virtio_net* dev, struct rte_mbuf *m, struct rte_mempool *
 		return;
 	}
 
-	mbuf->pkt.data_len = m->pkt.data_len + VLAN_HLEN;
+	mbuf->pkt.data_len = m->pkt.data_len + VLAN_HLEN + offset;
 	mbuf->pkt.pkt_len = mbuf->pkt.data_len;
 
 	/* Copy ethernet header to mbuf. */
@@ -1262,8 +1464,8 @@ switch_worker(__attribute__((unused)) void *arg)
 				if (rx_count) {
 					ret_count = virtio_dev_rx(dev, pkts_burst, rx_count);
 					if (enable_stats) {
-						rte_atomic64_add(&dev_statistics[dev_ll->dev->device_fh].rx_total, rx_count);
-						rte_atomic64_add(&dev_statistics[dev_ll->dev->device_fh].rx, ret_count);
+						rte_atomic64_add(&dev_statistics[dev_ll->dev->device_fh].rx_total_atomic, rx_count);
+						rte_atomic64_add(&dev_statistics[dev_ll->dev->device_fh].rx_atomic, ret_count);
 					}
 					while (likely(rx_count)) {
 						rx_count--;
@@ -1286,172 +1488,888 @@ switch_worker(__attribute__((unused)) void *arg)
 }
 
 /*
- * Add an entry to a used linked list. A free entry must first be found in the free linked list
- * using get_data_ll_free_entry();
+ * This function gets the number of available ring entries for zero copy RX. Only one thread will
+ * call this function for a particular virtio device, so it is designed as a non-thread-safe function.
  */
-static void
-add_data_ll_entry(struct virtio_net_data_ll **ll_root_addr, struct virtio_net_data_ll *ll_dev)
+static inline uint32_t __attribute__((always_inline))
+get_available_ring_num_zcp(struct virtio_net *dev)
 {
-	struct virtio_net_data_ll *ll = *ll_root_addr;
+	struct vhost_virtqueue *vq = dev->virtqueue[VIRTIO_RXQ];
+	uint16_t avail_idx;
 
-	/* Set next as NULL and use a compiler barrier to avoid reordering. */
-	ll_dev->next = NULL;
-	rte_compiler_barrier();
+	avail_idx = *((volatile uint16_t *)&vq->avail->idx);
+	return (uint32_t)(avail_idx - vq->last_used_idx_res);
+}
 
-	/* If ll == NULL then this is the first device. */
-	if (ll) {
-		/* Increment to the tail of the linked list. */
-		while ((ll->next != NULL) )
-			ll = ll->next;
+/*
+ * This function gets an available ring index for zero copy RX; it retries up to 'burst_rx_retry_num'
+ * times until it gets enough ring indexes. Only one thread will call this function for a particular
+ * virtio device, so it is designed as a non-thread-safe function.
+ */
+static inline uint32_t __attribute__((always_inline))
+get_available_ring_index_zcp(struct virtio_net *dev, uint16_t* res_base_idx, uint32_t count)
+{
+	struct vhost_virtqueue *vq = dev->virtqueue[VIRTIO_RXQ];
+	uint16_t avail_idx;
+	uint32_t retry = 0;
+	uint16_t free_entries;
 
-		ll->next = ll_dev;
-	} else {
-		*ll_root_addr = ll_dev;
+	*res_base_idx = vq->last_used_idx_res;
+	avail_idx = *((volatile uint16_t *)&vq->avail->idx);
+	free_entries = (avail_idx - *res_base_idx);
+
+	LOG_DEBUG(DATA, "(%"PRIu64") in get_available_ring_index_zcp: avail idx: %d, "
+			"res base idx:%d, free entries:%d\n",
+			dev->device_fh, avail_idx, *res_base_idx, free_entries);
+
+	/* If retry is enabled and the queue is full then we wait and retry to avoid packet loss. */
+	if (enable_retry && unlikely(count > free_entries)) {
+		for (retry = 0; retry < burst_rx_retry_num; retry++) {
+			rte_delay_us(burst_rx_delay_time);
+			avail_idx = *((volatile uint16_t *)&vq->avail->idx);
+			free_entries = (avail_idx - *res_base_idx);
+			if (count <= free_entries)
+				break;
+		}
+	}
+
+	/*check that we have enough buffers*/
+	if (unlikely(count > free_entries))
+		count = free_entries;
+
+	if (unlikely(count == 0)) {
+		LOG_DEBUG(DATA, "(%"PRIu64") Fail in get_available_ring_index_zcp: "
+				"avail idx: %d, res base idx:%d, free entries:%d\n",
+				dev->device_fh, avail_idx, *res_base_idx, free_entries);
+		return 0;
 	}
+
+	vq->last_used_idx_res = *res_base_idx + count;
+
+	return count;
 }
 
 /*
- * Remove an entry from a used linked list. The entry must then be added to the free linked list
- * using put_data_ll_free_entry().
+ * This function puts a descriptor back onto the used list.
  */
-static void
-rm_data_ll_entry(struct virtio_net_data_ll **ll_root_addr, struct virtio_net_data_ll *ll_dev, struct virtio_net_data_ll *ll_dev_last)
+static inline void __attribute__((always_inline))
+put_desc_to_used_list_zcp(struct vhost_virtqueue *vq, uint16_t desc_idx)
 {
-	struct virtio_net_data_ll *ll = *ll_root_addr;
-	
-	if (unlikely((ll == NULL) || (ll_dev == NULL)))
-		return;
+	uint16_t res_cur_idx = vq->last_used_idx;
+	vq->used->ring[res_cur_idx & (vq->size - 1)].id = (uint32_t)desc_idx;
+	vq->used->ring[res_cur_idx & (vq->size - 1)].len = 0;
+	rte_compiler_barrier();
+	*(volatile uint16_t *)&vq->used->idx += 1;
+	vq->last_used_idx += 1;
 
-	if (ll_dev == ll)
-		*ll_root_addr = ll_dev->next;
-	else
-		if (likely(ll_dev_last != NULL))
-			ll_dev_last->next = ll_dev->next;
-		else
-			RTE_LOG(ERR, CONFIG, "Remove entry form ll failed.\n");
+	/* Kick the guest if necessary. */
+	if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
+		eventfd_write((int)vq->kickfd, 1);
 }
 
 /*
- * Find and return an entry from the free linked list.
+ * This function gets an available descriptor from the virtio vring and an unattached mbuf from
+ * vpool->ring, and then attaches them together. It needs to adjust the offsets of buff_addr and
+ * phys_addr according to the PMD implementation, otherwise the frame data may be put at the
+ * wrong location in the mbuf.
  */
-static struct virtio_net_data_ll *
-get_data_ll_free_entry(struct virtio_net_data_ll **ll_root_addr)
+static inline void __attribute__((always_inline))
+attach_rxmbuf_zcp(struct virtio_net *dev)
 {
-	struct virtio_net_data_ll *ll_free = *ll_root_addr;
-	struct virtio_net_data_ll *ll_dev;
+	uint16_t res_base_idx, desc_idx;
+	uint64_t buff_addr, phys_addr;
+	struct vhost_virtqueue *vq;
+	struct vring_desc *desc;
+	struct rte_mbuf * mbuf = NULL;
+	struct vpool * vpool;
+	hpa_type addr_type;
 
-	if (ll_free == NULL)
-		return NULL;
+	vpool = &vpool_array[dev->vmdq_rx_q];
+	vq = dev->virtqueue[VIRTIO_RXQ];
 
-	ll_dev = ll_free;
-	*ll_root_addr = ll_free->next;
+	do {
+		if (unlikely(get_available_ring_index_zcp(dev, &res_base_idx, 1) != 1))
+			return ;
+		desc_idx = vq->avail->ring[(res_base_idx) & (vq->size - 1)];
+
+		desc = &vq->desc[desc_idx];
+		if (desc->flags & VRING_DESC_F_NEXT) {
+			desc = &vq->desc[desc->next];
+			buff_addr = gpa_to_vva(dev, desc->addr);
+			phys_addr = gpa_to_hpa(dev, desc->addr, desc->len, &addr_type);
+		} else {
+			buff_addr = gpa_to_vva(dev, desc->addr + vq->vhost_hlen);
+			phys_addr = gpa_to_hpa(dev, desc->addr + vq->vhost_hlen, desc->len, &addr_type);
+		}
 
-	return ll_dev;
-}
+		if (unlikely(addr_type == PHYS_ADDR_INVALID)) {
+			RTE_LOG(ERR, DATA, "(%"PRIu64") Invalid frame buffer address found"
+					" when attaching RX frame buffer address!\n",
+					 dev->device_fh);
+			put_desc_to_used_list_zcp(vq, desc_idx);
+			continue;
+		}
 
-/*
- * Place an entry back on to the free linked list.
- */
-static void
-put_data_ll_free_entry(struct virtio_net_data_ll **ll_root_addr, struct virtio_net_data_ll *ll_dev)
-{
-	struct virtio_net_data_ll *ll_free = *ll_root_addr;
+		/* Check if the frame buffer address from guest crosses sub-region or not. */
+		if (unlikely(addr_type == PHYS_ADDR_CROSS_SUBREG)) {
+			RTE_LOG(ERR, DATA, "(%"PRIu64") Frame buffer address crossing sub-region found"
+					" when attaching RX frame buffer address!\n",
+					 dev->device_fh);
+			put_desc_to_used_list_zcp(vq, desc_idx);
+			continue;
+		}
+	} while (unlikely(phys_addr == 0));
 
-	if (ll_dev == NULL)
+	rte_ring_sc_dequeue(vpool->ring, (void**)&mbuf);
+	if (unlikely(mbuf == NULL)) {
+		LOG_DEBUG(DATA, "(%"PRIu64") in attach_rxmbuf_zcp: ring_sc_dequeue fail.\n",
+			dev->device_fh);
+		put_desc_to_used_list_zcp(vq, desc_idx);
 		return;
+	}
 
-	ll_dev->next = ll_free;
-	*ll_root_addr = ll_dev;
+	if (unlikely(vpool->buf_size > desc->len)) {
+		LOG_DEBUG(DATA, "(%"PRIu64") in attach_rxmbuf_zcp: frame buffer length(%d) of "
+				"descriptor idx: %d less than room size required: %d\n",
+				dev->device_fh, desc->len, desc_idx, vpool->buf_size);
+		put_desc_to_used_list_zcp(vq, desc_idx);
+		rte_ring_sp_enqueue(vpool->ring, (void*)mbuf);
+		return;
+	}
+
+	mbuf->buf_addr = (void*)(uintptr_t)(buff_addr - RTE_PKTMBUF_HEADROOM);
+	mbuf->pkt.data = (void*)(uintptr_t)(buff_addr);
+	mbuf->buf_physaddr = phys_addr - RTE_PKTMBUF_HEADROOM;
+	mbuf->pkt.data_len = desc->len;
+	MBUF_HEADROOM_UINT32(mbuf) = (uint32_t)desc_idx;
+
+	LOG_DEBUG(DATA, "(%"PRIu64") in attach_rxmbuf_zcp: res base idx:%d, descriptor idx:%d\n",
+		dev->device_fh, res_base_idx, desc_idx);
+
+	__rte_mbuf_raw_free(mbuf);
+
+	return;
 }
 
 /*
- * Creates a linked list of a given size.
+ * Detach an attached packet mbuf -
+ *  - restore original mbuf address and length values.
+ *  - reset pktmbuf data and data_len to their default values.
+ *  All other fields of the given packet mbuf will be left intact.
+ *
+ * @param m
+ *   The attached packet mbuf.
  */
-static struct virtio_net_data_ll *
-alloc_data_ll(uint32_t size)
+static inline void pktmbuf_detach_zcp(struct rte_mbuf *m)
 {
-	struct virtio_net_data_ll *ll_new;
-	uint32_t i;
+	const struct rte_mempool *mp = m->pool;
+	void *buf = RTE_MBUF_TO_BADDR(m);
+	uint32_t buf_ofs;
+	uint32_t buf_len = mp->elt_size - sizeof(*m);
+	m->buf_physaddr = rte_mempool_virt2phy(mp, m) + sizeof (*m);
 
-	/* Malloc and then chain the linked list. */
-	ll_new = malloc(size * sizeof(struct virtio_net_data_ll));
-	if (ll_new == NULL) {
-		RTE_LOG(ERR, CONFIG, "Failed to allocate memory for ll_new.\n");
-		return NULL;
-	}
+	m->buf_addr = buf;
+	m->buf_len = (uint16_t)buf_len;
 
-	for (i = 0; i < size - 1; i++) {
-		ll_new[i].dev = NULL;
-		ll_new[i].next = &ll_new[i+1];
-	}
-	ll_new[i].next = NULL;
+	buf_ofs = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
+			RTE_PKTMBUF_HEADROOM : m->buf_len;
+	m->pkt.data = (char*) m->buf_addr + buf_ofs;
 
-	return (ll_new);
+	m->pkt.data_len = 0;
 }
 
 /*
- * Create the main linked list along with each individual cores linked list. A used and a free list
- * are created to manage entries.
+ * This function is called after packets have been transmitted. It fetches mbufs from vpool->pool,
+ * detaches them and puts them into vpool->ring. It also updates the used index and kicks the guest if necessary.
  */
-static int
-init_data_ll (void)
+static inline uint32_t __attribute__((always_inline))
+txmbuf_clean_zcp(struct virtio_net* dev, struct vpool* vpool)
 {
-	int lcore;
+	struct rte_mbuf * mbuf;
+	struct vhost_virtqueue *vq = dev->virtqueue[VIRTIO_TXQ];
+	uint32_t used_idx = vq->last_used_idx & (vq->size - 1);
+	uint32_t index = 0;
+	uint32_t mbuf_count = rte_mempool_count(vpool->pool);
+
+	LOG_DEBUG(DATA, "(%"PRIu64") in txmbuf_clean_zcp: mbuf count in mempool before clean is: %d \n",
+		dev->device_fh, mbuf_count);
+	LOG_DEBUG(DATA, "(%"PRIu64") in txmbuf_clean_zcp: mbuf count in  ring before clean  is : %d \n",
+		dev->device_fh, rte_ring_count(vpool->ring));
+
+	for (index = 0; index < mbuf_count; index ++) {
+		mbuf = __rte_mbuf_raw_alloc(vpool->pool);
+		if (likely(RTE_MBUF_INDIRECT(mbuf)))
+			pktmbuf_detach_zcp(mbuf);
+		rte_ring_sp_enqueue(vpool->ring, mbuf);
 
-	RTE_LCORE_FOREACH_SLAVE(lcore) {
-		lcore_info[lcore].lcore_ll = malloc(sizeof(struct lcore_ll_info));
-		if (lcore_info[lcore].lcore_ll == NULL) {
-			RTE_LOG(ERR, CONFIG, "Failed to allocate memory for lcore_ll.\n");
-			return -1;
-		}
+		/* Update used index buffer information. */
+		vq->used->ring[used_idx].id = MBUF_HEADROOM_UINT32(mbuf);
+		vq->used->ring[used_idx].len = 0;
 
-		lcore_info[lcore].lcore_ll->device_num = 0;
-		lcore_info[lcore].lcore_ll->dev_removal_flag = ACK_DEV_REMOVAL;
-		lcore_info[lcore].lcore_ll->ll_root_used = NULL;
-		if (num_devices % num_switching_cores)
-			lcore_info[lcore].lcore_ll->ll_root_free = alloc_data_ll((num_devices / num_switching_cores) + 1);
-		else
-			lcore_info[lcore].lcore_ll->ll_root_free = alloc_data_ll(num_devices / num_switching_cores);
+		used_idx = (used_idx + 1) & (vq->size - 1);
 	}
 
-	/* Allocate devices up to a maximum of MAX_DEVICES. */
-	ll_root_free = alloc_data_ll(MIN((num_devices), MAX_DEVICES));
+	LOG_DEBUG(DATA, "(%"PRIu64") in txmbuf_clean_zcp: mbuf count in mempool after clean is: %d \n",
+		dev->device_fh, rte_mempool_count(vpool->pool));
+	LOG_DEBUG(DATA, "(%"PRIu64") in txmbuf_clean_zcp: mbuf count in  ring after clean  is : %d \n",
+		dev->device_fh, rte_ring_count(vpool->ring));
+	LOG_DEBUG(DATA, "(%"PRIu64") in txmbuf_clean_zcp: before updated vq->last_used_idx:%d\n",
+		dev->device_fh, vq->last_used_idx);
+
+	vq->last_used_idx += mbuf_count;
+
+	LOG_DEBUG(DATA, "(%"PRIu64") in txmbuf_clean_zcp: after  updated vq->last_used_idx:%d\n",
+		dev->device_fh, vq->last_used_idx);
+
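+	/* Make sure the used ring entries are written before the used index is updated below. */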
+	rte_compiler_barrier();
+
+	*(volatile uint16_t *)&vq->used->idx += mbuf_count;
+
+	/* Kick guest if required. */
+	if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
+		eventfd_write((int)vq->kickfd, 1);
 
 	return 0;
 }
 
 /*
- * Set virtqueue flags so that we do not receive interrupts.
+ * This function is called when a virtio device is destroyed. It fetches mbufs from vpool->pool,
+ * detaches them and puts them back into vpool->ring.
  */
-static void
-set_irq_status (struct virtio_net *dev)
+static void mbuf_destroy_zcp(struct vpool* vpool)
 {
-	dev->virtqueue[VIRTIO_RXQ]->used->flags = VRING_USED_F_NO_NOTIFY;
-	dev->virtqueue[VIRTIO_TXQ]->used->flags = VRING_USED_F_NO_NOTIFY;
+	struct rte_mbuf * mbuf = NULL;
+	uint32_t index, mbuf_count = rte_mempool_count(vpool->pool);
+
+	LOG_DEBUG(CONFIG, "in mbuf_destroy_zcp: mbuf count in mempool before mbuf_destroy_zcp is: %d \n",
+		mbuf_count);
+	LOG_DEBUG(CONFIG, "in mbuf_destroy_zcp: mbuf count in  ring before mbuf_destroy_zcp  is : %d \n",
+		rte_ring_count(vpool->ring));
+
+	for (index = 0; index < mbuf_count; index ++) {
+		mbuf = __rte_mbuf_raw_alloc(vpool->pool);
+		if (likely(mbuf != NULL)) {
+			if (likely(RTE_MBUF_INDIRECT(mbuf)))
+				pktmbuf_detach_zcp(mbuf);
+			rte_ring_sp_enqueue(vpool->ring, (void*)mbuf);
+		}
+	}
+
+	LOG_DEBUG(CONFIG, "in mbuf_destroy_zcp: mbuf count in mempool after mbuf_destroy_zcp is: %d \n",
+		rte_mempool_count(vpool->pool));
+	LOG_DEBUG(CONFIG, "in mbuf_destroy_zcp: mbuf count in  ring after mbuf_destroy_zcp  is : %d \n",
+		rte_ring_count(vpool->ring));
 }
 
 /*
- * Remove a device from the specific data core linked list and from the main linked list. Synchonization 
- * occurs through the use of the lcore dev_removal_flag. Device is made volatile here to avoid re-ordering 
- * of dev->remove=1 which can cause an infinite loop in the rte_pause loop.
+ * This function places received packets into the guest RX virtqueue: it writes the virtio header,
+ * fills the used ring entries for the pre-attached buffers and updates the used index.
  */
-static void
-destroy_device (volatile struct virtio_net *dev)
+static inline uint32_t __attribute__((always_inline))
+virtio_dev_rx_zcp(struct virtio_net *dev, struct rte_mbuf **pkts, uint32_t count)
 {
-	struct virtio_net_data_ll *ll_lcore_dev_cur;
-	struct virtio_net_data_ll *ll_main_dev_cur;
-	struct virtio_net_data_ll *ll_lcore_dev_last = NULL;
-	struct virtio_net_data_ll *ll_main_dev_last = NULL;
-	int lcore;
-
-	dev->flags &= ~VIRTIO_DEV_RUNNING;
+	struct vhost_virtqueue *vq;
+	struct vring_desc *desc;
+	struct rte_mbuf *buff;
+	/* The virtio_hdr is initialised to 0. */
+	struct virtio_net_hdr_mrg_rxbuf virtio_hdr = {{0,0,0,0,0,0},0};
+	uint64_t buff_hdr_addr = 0;
+	uint32_t head[MAX_PKT_BURST], packet_len = 0;
+	uint32_t head_idx, packet_success = 0;
+	uint16_t res_cur_idx;
 
-	/*set the remove flag. */
-	dev->remove = 1;
+	LOG_DEBUG(DATA, "(%"PRIu64") virtio_dev_rx_zcp()\n", dev->device_fh);
 
-	while(dev->ready != DEVICE_SAFE_REMOVE) {
-		rte_pause();
-	}
+	if (count == 0)
+		return 0;
+
+	vq = dev->virtqueue[VIRTIO_RXQ];
+	count = (count > MAX_PKT_BURST) ? MAX_PKT_BURST : count;
+
+	res_cur_idx = vq->last_used_idx;
+	LOG_DEBUG(DATA, "(%"PRIu64") Current Index %d| End Index %d\n",
+		dev->device_fh, res_cur_idx, res_cur_idx + count);
+
+	/* Retrieve all of the head indexes first to avoid caching issues. */
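+	/* Each pre-attached mbuf carries its guest descriptor index in the headroom (MBUF_HEADROOM_UINT32). */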
+	for (head_idx = 0; head_idx < count; head_idx++)
+		head[head_idx] = MBUF_HEADROOM_UINT32(pkts[head_idx]);
+
+	/* Prefetch descriptor index. */
+	rte_prefetch0(&vq->desc[head[packet_success]]);
+
+	while (packet_success != count) {
+		/* Get descriptor from available ring */
+		desc = &vq->desc[head[packet_success]];
+
+		buff = pkts[packet_success];
+		LOG_DEBUG(DATA, "(%"PRIu64") in dev_rx_zcp: update the used idx for pkt[%d] descriptor idx: %d\n", 
+			dev->device_fh, packet_success, MBUF_HEADROOM_UINT32(buff));
+
+		PRINT_PACKET(dev, (uintptr_t)(((uint64_t)(uintptr_t)buff->buf_addr) + RTE_PKTMBUF_HEADROOM), 
+				rte_pktmbuf_data_len(buff), 0);
+
+		/* Buffer address translation for virtio header. */
+		buff_hdr_addr = gpa_to_vva(dev, desc->addr);
+		packet_len = rte_pktmbuf_data_len(buff) + vq->vhost_hlen;
+
+		/*
+		 * If the descriptors are chained the header and data are placed in
+		 * separate buffers.
+		 */
+		if (desc->flags & VRING_DESC_F_NEXT) {
+			desc->len = vq->vhost_hlen;
+			desc = &vq->desc[desc->next];
+			desc->len = rte_pktmbuf_data_len(buff);
+		} else {
+			desc->len = packet_len;
+		}
+
+		/* Update used ring with desc information */
+		vq->used->ring[res_cur_idx & (vq->size - 1)].id = head[packet_success];
+		vq->used->ring[res_cur_idx & (vq->size - 1)].len = packet_len;
+		res_cur_idx++;
+		packet_success++;
+
+		/* A header is required per buffer. */
+		rte_memcpy((void *)(uintptr_t)buff_hdr_addr, (const void*)&virtio_hdr, vq->vhost_hlen);
+
+		PRINT_PACKET(dev, (uintptr_t)buff_hdr_addr, vq->vhost_hlen, 1);
+
+		if (likely(packet_success < count)) {
+			/* Prefetch descriptor index. */
+			rte_prefetch0(&vq->desc[head[packet_success]]);
+		}
+	}
+
+	rte_compiler_barrier();
+
+	LOG_DEBUG(DATA, "(%"PRIu64") in dev_rx_zcp: before update used idx: vq.last_used_idx: %d, vq->used->idx: %d\n",
+		dev->device_fh, vq->last_used_idx, vq->used->idx);
+
+	*(volatile uint16_t *)&vq->used->idx += count;
+	vq->last_used_idx += count;
+
+	LOG_DEBUG(DATA, "(%"PRIu64") in dev_rx_zcp: after  update used idx: vq.last_used_idx: %d, vq->used->idx: %d\n",
+		dev->device_fh, vq->last_used_idx, vq->used->idx);
+
+	/* Kick the guest if necessary. */
+	if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
+		eventfd_write((int)vq->kickfd, 1);
+
+	return count;
+}
+
+/*
+ * This function routes the TX packet to the correct interface. This may be a local device
+ * or the physical port.
+ */
+static inline void __attribute__((always_inline))
+virtio_tx_route_zcp(struct virtio_net* dev, struct rte_mbuf *m, uint32_t desc_idx, uint8_t need_copy)
+{
+	struct mbuf_table *tx_q;
+	struct rte_mbuf **m_table;
+	struct rte_mbuf *mbuf = NULL;
+	unsigned len, ret, offset = 0;
+	struct vpool * vpool;
+	struct virtio_net_data_ll *dev_ll = ll_root_used;
+	struct ether_hdr *pkt_hdr = (struct ether_hdr *)m->pkt.data;
+	uint16_t vlan_tag = (uint16_t)vlan_tags[(uint16_t)dev->device_fh];
+
+	/*Add packet to the port tx queue*/
+	tx_q = &tx_queue_zcp[(uint16_t)dev->vmdq_rx_q];
+	len = tx_q->len;
+
+	/* Allocate an mbuf and populate the structure. */
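+	/* TX vpools are located after the MAX_QUEUES RX vpools in vpool_array. */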
+	vpool = &vpool_array[MAX_QUEUES + (uint16_t)dev->vmdq_rx_q];
+	rte_ring_sc_dequeue(vpool->ring, (void**)&mbuf);
+	if (unlikely(mbuf == NULL)) {
+		struct vhost_virtqueue *vq = dev->virtqueue[VIRTIO_TXQ];
+		RTE_LOG(ERR, DATA, "(%"PRIu64") Failed to allocate memory for mbuf.\n", dev->device_fh);
+		put_desc_to_used_list_zcp(vq, desc_idx);
+		return;
+	}
+
+	if (vm2vm_mode == VM2VM_HARDWARE) {
+		/* Avoid using a VLAN tag from any VM for an external packet, such as
+		 * vlan_tags[dev->device_fh]; otherwise it conflicts during pool selection:
+		 * the MAC address identifies it as an external packet that should go to the
+		 * network, while the VLAN tag identifies it as a VM-to-VM packet that should
+		 * be forwarded to another VM. The hardware cannot resolve this ambiguity,
+		 * so the packet would be lost.
+		 */
+		vlan_tag = external_pkt_default_vlan_tag;
+		while (dev_ll != NULL) {
+			if (likely(dev_ll->dev->ready == DEVICE_RX) &&
+				ether_addr_cmp(&(pkt_hdr->d_addr), &dev_ll->dev->mac_address)) {
+
+				/* Drop the packet if the TX packet is destined for the TX device. */
+				if (unlikely(dev_ll->dev->device_fh == dev->device_fh)) {
+					LOG_DEBUG(DATA, "(%"PRIu64") TX: Source and destination MAC addresses are the same. Dropping packet.\n",
+							dev_ll->dev->device_fh);
+					MBUF_HEADROOM_UINT32(mbuf) = (uint32_t)desc_idx;
+					__rte_mbuf_raw_free(mbuf);
+					return ;
+				}
+
+				/* Packet length offset 4 bytes for HW vlan strip when L2 switch back. */
+				offset = 4;
+				vlan_tag = (uint16_t)vlan_tags[(uint16_t)dev_ll->dev->device_fh];
+
+				LOG_DEBUG(DATA, "(%"PRIu64") TX: pkt to local VM device id:(%"PRIu64") vlan tag: %d.\n", 
+					dev->device_fh, dev_ll->dev->device_fh, vlan_tag);
+
+				break;
+			}
+			dev_ll = dev_ll->next;
+		}
+	}
+
+	mbuf->pkt.nb_segs = m->pkt.nb_segs;
+	mbuf->pkt.next = m->pkt.next;
+	mbuf->pkt.data_len = m->pkt.data_len + offset;
+	mbuf->pkt.pkt_len = mbuf->pkt.data_len;
+	if (unlikely(need_copy)) {
+		/* Copy the packet contents to the mbuf. */
+		rte_memcpy((void*) ((uint8_t*)mbuf->pkt.data),
+			(const void*) ((uint8_t*)m->pkt.data), m->pkt.data_len);
+	} else {
+		mbuf->pkt.data = m->pkt.data;
+		mbuf->buf_physaddr = m->buf_physaddr;
+		mbuf->buf_addr = m->buf_addr;
+	}
+	mbuf->ol_flags = PKT_TX_VLAN_PKT;
+	mbuf->pkt.vlan_macip.f.vlan_tci = vlan_tag;
+	mbuf->pkt.vlan_macip.f.l2_len = sizeof(struct ether_hdr);
+	mbuf->pkt.vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
+	MBUF_HEADROOM_UINT32(mbuf) = (uint32_t)desc_idx;
+
+	tx_q->m_table[len] = mbuf;
+	len++;
+
+	LOG_DEBUG(DATA, "(%"PRIu64") in tx_route_zcp: pkt: nb_seg: %d, next:%s\n",
+		dev->device_fh, mbuf->pkt.nb_segs,(mbuf->pkt.next == NULL)?"null":"non-null");
+
+	if (enable_stats) {
+		dev_statistics[dev->device_fh].tx_total++;
+		dev_statistics[dev->device_fh].tx++;
+	}
+
+	if (unlikely(len == MAX_PKT_BURST)) {
+		m_table = (struct rte_mbuf **)tx_q->m_table;
+		ret = rte_eth_tx_burst(ports[0], (uint16_t)tx_q->txq_id, m_table, (uint16_t) len);
+
+		/* Free any buffers not handled by TX and update the port stats. */
+		if (unlikely(ret < len)) {
+			do {
+				rte_pktmbuf_free(m_table[ret]);
+			} while (++ret < len);
+		}
+
+		len = 0;
+		txmbuf_clean_zcp(dev, vpool);
+	}
+
+	tx_q->len = len;
+
+	return;
+}
+
+/*
+ * This function transmits all available packets in the virtio TX queue of one virtio-net device.
+ * On the first packet it learns the MAC address and sets up VMDQ.
+ */
+static inline void __attribute__((always_inline))
+virtio_dev_tx_zcp(struct virtio_net* dev)
+{
+	struct rte_mbuf m;
+	struct vhost_virtqueue *vq;
+	struct vring_desc *desc;
+	uint64_t buff_addr = 0, phys_addr;
+	uint32_t head[MAX_PKT_BURST];
+	uint32_t i;
+	uint16_t free_entries, packet_success = 0;
+	uint16_t avail_idx;
+	uint8_t need_copy = 0;
+	hpa_type addr_type;
+
+	vq = dev->virtqueue[VIRTIO_TXQ];
+	avail_idx =  *((volatile uint16_t *)&vq->avail->idx);
+
+	/* If there are no available buffers then return. */
+	if (vq->last_used_idx_res == avail_idx)
+		return;
+
+	LOG_DEBUG(DATA, "(%"PRIu64") virtio_dev_tx_zcp()\n", dev->device_fh);
+
+	/* Prefetch available ring to retrieve head indexes. */
+	rte_prefetch0(&vq->avail->ring[vq->last_used_idx_res & (vq->size - 1)]);
+
+	/* Get the number of free entries in the ring */
+	free_entries = (avail_idx - vq->last_used_idx_res);
+
+	/* Limit to MAX_PKT_BURST. */
+	free_entries = (free_entries > MAX_PKT_BURST)? MAX_PKT_BURST : free_entries;
+
+	LOG_DEBUG(DATA, "(%"PRIu64") Buffers available %d\n", dev->device_fh, free_entries);
+
+	/* Retrieve all of the head indexes first to avoid caching issues. */
+	for (i = 0; i < free_entries; i++)
+		head[i] = vq->avail->ring[(vq->last_used_idx_res + i) & (vq->size - 1)];
+
+	vq->last_used_idx_res += free_entries;
+
+	/* Prefetch descriptor index. */
+	rte_prefetch0(&vq->desc[head[packet_success]]);
+	rte_prefetch0(&vq->used->ring[vq->last_used_idx & (vq->size - 1)]);
+
+	while (packet_success < free_entries) {
+		desc = &vq->desc[head[packet_success]];
+
+		/* Discard first buffer as it is the virtio header */
+		desc = &vq->desc[desc->next];
+
+		/* Buffer address translation. */
+		buff_addr = gpa_to_vva(dev, desc->addr);
+		phys_addr = gpa_to_hpa(dev, desc->addr, desc->len, &addr_type);
+
+		if (likely(packet_success < (free_entries - 1))) {
+			/* Prefetch descriptor index. */
+			rte_prefetch0(&vq->desc[head[packet_success + 1]]);
+		}
+
+		if (unlikely(addr_type == PHYS_ADDR_INVALID)) {
+			RTE_LOG(ERR, DATA, "(%"PRIu64") Invalid frame buffer address found when TX packets!\n",
+					 dev->device_fh);
+			packet_success ++;
+			continue;
+		}
+
+		/* Prefetch buffer address. */
+		rte_prefetch0((void*)(uintptr_t)buff_addr);
+
+		/* Setup dummy mbuf. This is copied to a real mbuf if transmitted out the physical port. */
+		m.pkt.data_len = desc->len;
+		m.pkt.nb_segs = 1;
+		m.pkt.next = NULL;
+		m.pkt.data = (void*)(uintptr_t)buff_addr;
+		m.buf_addr = m.pkt.data;
+		m.buf_physaddr = phys_addr;
+
+		/* Check whether the frame buffer address from the guest crosses a sub-region boundary. */
+		if (unlikely(addr_type == PHYS_ADDR_CROSS_SUBREG)) {
+			RTE_LOG(ERR, DATA, "(%"PRIu64") Frame buffer address crossing sub-region found"
+				" when attaching TX frame buffer address!\n",
+				 dev->device_fh);
+			need_copy = 1;
+		} else
+			need_copy = 0;
+
+		PRINT_PACKET(dev, (uintptr_t)buff_addr, desc->len, 0);
+
+		/* If this is the first received packet we need to learn the MAC and setup VMDQ */
+		if (unlikely(dev->ready == DEVICE_MAC_LEARNING)) {
+			if (dev->remove || (link_vmdq(dev, &m) == -1)) {
+				/* Discard the frame if the device is scheduled for removal or a duplicate MAC address is found. */
+				packet_success += free_entries;
+				vq->last_used_idx += packet_success;
+				break;
+			}
+		}
+
+		virtio_tx_route_zcp(dev, &m, head[packet_success], need_copy);
+		packet_success++;
+	}
+}
+
+/*
+ * This function is called by each data core. It handles all RX/TX registered with the
+ * core. For TX the specific lcore linked list is used. For RX, MAC addresses are compared
+ * with all devices in the main linked list.
+ */
+static int
+switch_worker_zcp(__attribute__((unused)) void *arg)
+{
+	struct virtio_net *dev = NULL;
+	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+	struct virtio_net_data_ll *dev_ll;
+	struct mbuf_table *tx_q;
+	volatile struct lcore_ll_info *lcore_ll;
+	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S * BURST_TX_DRAIN_US;
+	uint64_t prev_tsc, diff_tsc, cur_tsc, ret_count = 0;
+	unsigned ret;
+	const uint16_t lcore_id = rte_lcore_id();
+	uint16_t count_in_ring, rx_count = 0;
+
+	RTE_LOG(INFO, DATA, "Processing on Core %u started\n", lcore_id);
+
+	lcore_ll = lcore_info[lcore_id].lcore_ll;
+	prev_tsc = 0;
+
+	while(1) {
+		cur_tsc = rte_rdtsc();
+
+		/* TX burst queue drain */
+		diff_tsc = cur_tsc - prev_tsc;
+		if (unlikely(diff_tsc > drain_tsc)) {
+			/* Get mbufs from vpool.pool, detach them and put them back into vpool.ring. */
+			dev_ll = lcore_ll->ll_root_used;
+			while ((dev_ll != NULL) && (dev_ll->dev != NULL)) {
+				/* Get virtio device ID */
+				dev = dev_ll->dev;
+
+				if (likely(!dev->remove)) {
+					tx_q = &tx_queue_zcp[(uint16_t)dev->vmdq_rx_q];
+					if (tx_q->len) {
+						LOG_DEBUG(DATA, "TX queue drained after timeout with burst size %u \n",
+								 tx_q->len);
+
+						/* Tx any packets in the queue */
+						ret = rte_eth_tx_burst(ports[0], (uint16_t)tx_q->txq_id,
+									   (struct rte_mbuf **)tx_q->m_table,
+									   (uint16_t)tx_q->len);
+						if (unlikely(ret < tx_q->len)) {
+							do {
+								rte_pktmbuf_free(tx_q->m_table[ret]);
+							} while (++ret < tx_q->len);
+						}
+						tx_q->len = 0;
+
+						txmbuf_clean_zcp(dev, &vpool_array[MAX_QUEUES+dev->vmdq_rx_q]);
+					}
+				}
+				dev_ll = dev_ll->next;
+			}
+			prev_tsc = cur_tsc;
+		}
+
+		rte_prefetch0(lcore_ll->ll_root_used);
+
+		/*
+		 * Inform the configuration core that we have exited the linked list and that no devices are
+		 * in use if requested.
+		 */
+		if (lcore_ll->dev_removal_flag == REQUEST_DEV_REMOVAL)
+			lcore_ll->dev_removal_flag = ACK_DEV_REMOVAL;
+
+		/* Process devices */
+		dev_ll = lcore_ll->ll_root_used;
+
+		while ((dev_ll != NULL) && (dev_ll->dev != NULL)) {
+			dev = dev_ll->dev;
+			if (unlikely(dev->remove)) {
+				dev_ll = dev_ll->next;
+				unlink_vmdq(dev);
+				dev->ready = DEVICE_SAFE_REMOVE;
+				continue;
+			}
+
+			if (likely(dev->ready == DEVICE_RX)) {
+				uint32_t index = dev->vmdq_rx_q;
+				uint16_t i;
+				count_in_ring = rte_ring_count(vpool_array[index].ring);
+				uint16_t free_entries = (uint16_t)get_available_ring_num_zcp(dev) ;
+
+				/* Attach mbufs from vpool.ring to guest RX buffers and put them back into vpool.pool. */
+				for (i = 0; i < RTE_MIN(free_entries, RTE_MIN(count_in_ring, MAX_PKT_BURST)); i++) {
+					attach_rxmbuf_zcp(dev);
+				}
+
+				/* Handle guest RX */
+				rx_count = rte_eth_rx_burst(ports[0],
+					(uint16_t)dev->vmdq_rx_q, pkts_burst, MAX_PKT_BURST);
+
+				if (rx_count) {
+					ret_count = virtio_dev_rx_zcp(dev, pkts_burst, rx_count);
+					if (enable_stats) {
+						dev_statistics[dev->device_fh].rx_total += rx_count;
+						dev_statistics[dev->device_fh].rx += ret_count;
+					}
+					while (likely(rx_count)) {
+						rx_count--;
+						pktmbuf_detach_zcp(pkts_burst[rx_count]);
+						rte_ring_sp_enqueue(vpool_array[index].ring, (void*)pkts_burst[rx_count]);
+					}
+				}
+			}
+
+			if (likely(!dev->remove))
+				/* Handle guest TX */
+				virtio_dev_tx_zcp(dev);
+
+			/* Move to the next device in the list */
+			dev_ll = dev_ll->next;
+		}
+	}
+
+	return 0;
+}
+
+
+/*
+ * Add an entry to a used linked list. A free entry must first be found in the free linked list
+ * using get_data_ll_free_entry();
+ */
+static void
+add_data_ll_entry(struct virtio_net_data_ll **ll_root_addr, struct virtio_net_data_ll *ll_dev)
+{
+	struct virtio_net_data_ll *ll = *ll_root_addr;
+
+	/* Set next as NULL and use a compiler barrier to avoid reordering. */
+	ll_dev->next = NULL;
+	rte_compiler_barrier();
+
+	/* If ll == NULL then this is the first device. */
+	if (ll) {
+		/* Increment to the tail of the linked list. */
+		while ((ll->next != NULL) )
+			ll = ll->next;
+
+		ll->next = ll_dev;
+	} else {
+		*ll_root_addr = ll_dev;
+	}
+}
+
+/*
+ * Remove an entry from a used linked list. The entry must then be added to the free linked list
+ * using put_data_ll_free_entry().
+ */
+static void
+rm_data_ll_entry(struct virtio_net_data_ll **ll_root_addr, struct virtio_net_data_ll *ll_dev, struct virtio_net_data_ll *ll_dev_last)
+{
+	struct virtio_net_data_ll *ll = *ll_root_addr;
+
+	if (unlikely((ll == NULL) || (ll_dev == NULL)))
+		return;
+
+	if (ll_dev == ll)
+		*ll_root_addr = ll_dev->next;
+	else
+		if (likely(ll_dev_last != NULL))
+			ll_dev_last->next = ll_dev->next;
+		else
+			RTE_LOG(ERR, CONFIG, "Remove entry from ll failed.\n");
+}
+
+/*
+ * Find and return an entry from the free linked list.
+ */
+static struct virtio_net_data_ll *
+get_data_ll_free_entry(struct virtio_net_data_ll **ll_root_addr)
+{
+	struct virtio_net_data_ll *ll_free = *ll_root_addr;
+	struct virtio_net_data_ll *ll_dev;
+
+	if (ll_free == NULL)
+		return NULL;
+
+	ll_dev = ll_free;
+	*ll_root_addr = ll_free->next;
+
+	return ll_dev;
+}
+
+/*
+ * Place an entry back on to the free linked list.
+ */
+static void
+put_data_ll_free_entry(struct virtio_net_data_ll **ll_root_addr, struct virtio_net_data_ll *ll_dev)
+{
+	struct virtio_net_data_ll *ll_free = *ll_root_addr;
+
+	if (ll_dev == NULL)
+		return;
+
+	ll_dev->next = ll_free;
+	*ll_root_addr = ll_dev;
+}
+
+/*
+ * Creates a linked list of a given size.
+ */
+static struct virtio_net_data_ll *
+alloc_data_ll(uint32_t size)
+{
+	struct virtio_net_data_ll *ll_new;
+	uint32_t i;
+
+	/* Malloc and then chain the linked list. */
+	ll_new = malloc(size * sizeof(struct virtio_net_data_ll));
+	if (ll_new == NULL) {
+		RTE_LOG(ERR, CONFIG, "Failed to allocate memory for ll_new.\n");
+		return NULL;
+	}
+
+	for (i = 0; i < size - 1; i++) {
+		ll_new[i].dev = NULL;
+		ll_new[i].next = &ll_new[i+1];
+	}
+	ll_new[i].next = NULL;
+
+	return (ll_new);
+}
+
+/*
+ * Create the main linked list along with each individual cores linked list. A used and a free list
+ * are created to manage entries.
+ */
+static int
+init_data_ll (void)
+{
+	int lcore;
+
+	RTE_LCORE_FOREACH_SLAVE(lcore) {
+		lcore_info[lcore].lcore_ll = malloc(sizeof(struct lcore_ll_info));
+		if (lcore_info[lcore].lcore_ll == NULL) {
+			RTE_LOG(ERR, CONFIG, "Failed to allocate memory for lcore_ll.\n");
+			return -1;
+		}
+
+		lcore_info[lcore].lcore_ll->device_num = 0;
+		lcore_info[lcore].lcore_ll->dev_removal_flag = ACK_DEV_REMOVAL;
+		lcore_info[lcore].lcore_ll->ll_root_used = NULL;
+		if (num_devices % num_switching_cores)
+			lcore_info[lcore].lcore_ll->ll_root_free = alloc_data_ll((num_devices / num_switching_cores) + 1);
+		else
+			lcore_info[lcore].lcore_ll->ll_root_free = alloc_data_ll(num_devices / num_switching_cores);
+	}
+
+	/* Allocate devices up to a maximum of MAX_DEVICES. */
+	ll_root_free = alloc_data_ll(MIN((num_devices), MAX_DEVICES));
+
+	return 0;
+}
+
+/*
+ * Set virtqueue flags so that we do not receive interrupts.
+ */
+static void
+set_irq_status (struct virtio_net *dev)
+{
+	dev->virtqueue[VIRTIO_RXQ]->used->flags = VRING_USED_F_NO_NOTIFY;
+	dev->virtqueue[VIRTIO_TXQ]->used->flags = VRING_USED_F_NO_NOTIFY;
+}
+
+/*
+ * Remove a device from the specific data core linked list and from the main linked list. Synchronization
+ * occurs through the use of the lcore dev_removal_flag. Device is made volatile here to avoid re-ordering
+ * of dev->remove=1 which can cause an infinite loop in the rte_pause loop.
+ */
+static void
+destroy_device (volatile struct virtio_net *dev)
+{
+	struct virtio_net_data_ll *ll_lcore_dev_cur;
+	struct virtio_net_data_ll *ll_main_dev_cur;
+	struct virtio_net_data_ll *ll_lcore_dev_last = NULL;
+	struct virtio_net_data_ll *ll_main_dev_last = NULL;
+	int lcore;
+
+	dev->flags &= ~VIRTIO_DEV_RUNNING;
+
+	/*set the remove flag. */
+	dev->remove = 1;
+
+	while(dev->ready != DEVICE_SAFE_REMOVE) {
+		rte_pause();
+	}
 
 	/* Search for entry to be removed from lcore ll */
 	ll_lcore_dev_cur = lcore_info[dev->coreid].lcore_ll->ll_root_used;
@@ -1465,7 +2383,8 @@ destroy_device (volatile struct virtio_net *dev)
 	}
 
 	if (ll_lcore_dev_cur == NULL) {
-		RTE_LOG(ERR, CONFIG, "Failed to find the dev to be destroy.\n");
+		RTE_LOG(ERR, CONFIG, "(%"PRIu64") Failed to find the dev to be destroyed.\n",
+			dev->device_fh);
 		return;
 	}
 
@@ -1509,6 +2428,36 @@ destroy_device (volatile struct virtio_net *dev)
 	lcore_info[ll_lcore_dev_cur->dev->coreid].lcore_ll->device_num--;
 	
 	RTE_LOG(INFO, DATA, "(%"PRIu64") Device has been removed from data core\n", dev->device_fh);
+
+	if (zero_copy) {
+		struct vpool * vpool = &vpool_array[dev->vmdq_rx_q];
+
+		/* Stop the RX queue. */
+		if (rte_eth_dev_rx_queue_stop(ports[0], dev->vmdq_rx_q) != 0) {
+			LOG_DEBUG(CONFIG, "(%"PRIu64") In destroy_device: Failed to stop rx queue:%d\n", 
+				dev->device_fh, dev->vmdq_rx_q);
+		}
+
+		LOG_DEBUG(CONFIG, "(%"PRIu64") in destroy_device: Start putting mbufs from mempool back into the ring for RX queue: %d\n",
+			dev->device_fh, dev->vmdq_rx_q);
+
+		mbuf_destroy_zcp(vpool);
+
+		/* Stop the TX queue. */
+		if (rte_eth_dev_tx_queue_stop(ports[0], dev->vmdq_rx_q) != 0) {
+			LOG_DEBUG(CONFIG, "(%"PRIu64") In destroy_device: Failed to stop tx queue:%d\n", 
+				dev->device_fh, dev->vmdq_rx_q);
+		}
+
+		vpool = &vpool_array[dev->vmdq_rx_q + MAX_QUEUES];
+
+		LOG_DEBUG(CONFIG, "(%"PRIu64") destroy_device: Start putting mbufs from mempool back into the ring"
+					" for TX queue: %d, dev:(%"PRIu64")\n",
+					dev->device_fh, (dev->vmdq_rx_q + MAX_QUEUES), dev->device_fh);
+
+		mbuf_destroy_zcp(vpool);
+	}
+
 }
 
 /*
@@ -1532,6 +2481,60 @@ new_device (struct virtio_net *dev)
 	}
 	ll_dev->dev = dev;
 	add_data_ll_entry(&ll_root_used, ll_dev);
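+	/* Assign this device a dedicated VMDQ RX queue derived from its device handle. */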
+	ll_dev->dev->vmdq_rx_q = ll_dev->dev->device_fh * (num_queues/num_devices);
+
+	if (zero_copy) {
+		uint32_t index = ll_dev->dev->vmdq_rx_q;
+		uint32_t count_in_ring, i;
+		struct mbuf_table *tx_q;
+
+		count_in_ring = rte_ring_count(vpool_array[index].ring);
+
+		LOG_DEBUG(CONFIG, "(%"PRIu64") in new_device: mbuf count in mempool before attach is: %d \n", 
+			dev->device_fh, rte_mempool_count(vpool_array[index].pool));
+		LOG_DEBUG(CONFIG, "(%"PRIu64") in new_device: mbuf count in  ring before attach  is : %d \n", 
+			dev->device_fh, count_in_ring);
+
+		/* Attach mbufs from vpool.ring to guest RX buffers and put them back into vpool.pool. */
+		for (i = 0; i < count_in_ring; i++) {
+			attach_rxmbuf_zcp(dev);
+		}
+
+		LOG_DEBUG(CONFIG, "(%"PRIu64") in new_device: mbuf count in mempool after attach is: %d \n", 
+			dev->device_fh, rte_mempool_count(vpool_array[index].pool));
+		LOG_DEBUG(CONFIG, "(%"PRIu64") in new_device: mbuf count in  ring after attach  is : %d \n", 
+			dev->device_fh, rte_ring_count(vpool_array[index].ring));
+
+		tx_q = &tx_queue_zcp[(uint16_t)dev->vmdq_rx_q];
+		tx_q->txq_id = dev->vmdq_rx_q;
+
+		if (rte_eth_dev_tx_queue_start(ports[0], dev->vmdq_rx_q) != 0) {
+			struct vpool * vpool = &vpool_array[dev->vmdq_rx_q];
+
+			LOG_DEBUG(CONFIG, "(%"PRIu64") In new_device: Failed to start tx queue:%d\n",
+			dev->device_fh, dev->vmdq_rx_q);
+
+			mbuf_destroy_zcp(vpool);
+			return -1;
+		}
+
+		if (rte_eth_dev_rx_queue_start(ports[0], dev->vmdq_rx_q) != 0) {
+			struct vpool * vpool = &vpool_array[dev->vmdq_rx_q];
+
+			LOG_DEBUG(CONFIG, "(%"PRIu64") In new_device: Failed to start rx queue:%d\n",
+			dev->device_fh, dev->vmdq_rx_q);
+
+			/* Stop the TX queue. */
+			if (rte_eth_dev_tx_queue_stop(ports[0], dev->vmdq_rx_q) != 0) {
+				LOG_DEBUG(CONFIG, "(%"PRIu64") In new_device: Failed to stop tx queue:%d\n", 
+					dev->device_fh, dev->vmdq_rx_q);
+			}
+
+			mbuf_destroy_zcp(vpool);
+			return -1;
+		}
+
+	}
 
 	/*reset ready flag*/
 	dev->ready = DEVICE_MAC_LEARNING;
@@ -1549,6 +2552,7 @@ new_device (struct virtio_net *dev)
 	ll_dev = get_data_ll_free_entry(&lcore_info[ll_dev->dev->coreid].lcore_ll->ll_root_free);
 	if (ll_dev == NULL) {
 		RTE_LOG(INFO, DATA, "(%"PRIu64") Failed to add device to data core\n", dev->device_fh);
+		dev->ready = DEVICE_SAFE_REMOVE;
 		destroy_device(dev);
 		return -1;
 	}
@@ -1606,8 +2610,13 @@ print_stats(void)
 			tx_total = dev_statistics[device_fh].tx_total;
 			tx = dev_statistics[device_fh].tx;
 			tx_dropped = tx_total - tx;
-			rx_total = rte_atomic64_read(&dev_statistics[device_fh].rx_total);
-			rx = rte_atomic64_read(&dev_statistics[device_fh].rx);
+			if (zero_copy == 0) {
+				rx_total = rte_atomic64_read(&dev_statistics[device_fh].rx_total_atomic);
+				rx = rte_atomic64_read(&dev_statistics[device_fh].rx_atomic);
+			} else {
+				rx_total = dev_statistics[device_fh].rx_total;
+				rx = dev_statistics[device_fh].rx;
+			}
 			rx_dropped = rx_total - rx;
 
 			printf("\nStatistics for device %"PRIu32" ------------------------------"
@@ -1631,6 +2640,33 @@ print_stats(void)
 	}
 }
 
+static void
+setup_mempool_tbl(int socket, uint32_t index, char* pool_name, char* ring_name, uint32_t nb_mbuf)
+{
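+	/* The mempool data room must hold one guest descriptor plus the mbuf headroom. */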
+	uint16_t roomsize = VIRTIO_DESCRIPTOR_LEN_ZCP + RTE_PKTMBUF_HEADROOM;
+	if ((vpool_array[index].pool = rte_mempool_create(pool_name, nb_mbuf, MBUF_SIZE_ZCP,
+		MBUF_CACHE_SIZE_ZCP, sizeof(struct rte_pktmbuf_pool_private),
+		rte_pktmbuf_pool_init, (void*)(uintptr_t)roomsize, rte_pktmbuf_init, NULL,
+		socket, 0)) != NULL) {
+		vpool_array[index].ring = rte_ring_create(ring_name, rte_align32pow2(nb_mbuf + 1),
+							socket, RING_F_SP_ENQ | RING_F_SC_DEQ);
+		if (likely(vpool_array[index].ring != NULL)) {
+			LOG_DEBUG(CONFIG, "in setup_mempool_tbl: mbuf count in mempool is: %d \n",
+				rte_mempool_count(vpool_array[index].pool));
+			LOG_DEBUG(CONFIG, "in setup_mempool_tbl: mbuf count in  ring   is: %d \n",
+				rte_ring_count(vpool_array[index].ring));
+		} else {
+			rte_exit(EXIT_FAILURE, "ring_create(%s) failed", ring_name);
+		}
+
+		/* The usable buffer size excludes the mbuf headroom. */
+		vpool_array[index].buf_size = roomsize - RTE_PKTMBUF_HEADROOM;
+	} else {
+		rte_exit(EXIT_FAILURE, "mempool_create(%s) failed", pool_name);
+	}
+}
+
+
 /*
  * Main function, does initialisation and calls the per-lcore functions. The CUSE
  * device is also registered here to handle the IOCTLs.
@@ -1638,11 +2674,11 @@ print_stats(void)
 int
 MAIN(int argc, char *argv[])
 {
-	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *mbuf_pool = NULL;
 	unsigned lcore_id, core_id = 0;
 	unsigned nb_ports, valid_num_ports;
 	int ret;
-	uint8_t portid;
+	uint8_t portid, queue_id = 0;
 	static pthread_t tid;
 
 	/* init EAL */
@@ -1687,16 +2723,59 @@ MAIN(int argc, char *argv[])
 		return -1;
 	}
 
-	/* Create the mbuf pool. */
-	mbuf_pool = rte_mempool_create("MBUF_POOL", NUM_MBUFS_PER_PORT * valid_num_ports,
-				       MBUF_SIZE, MBUF_CACHE_SIZE,
-				       sizeof(struct rte_pktmbuf_pool_private),
-				       rte_pktmbuf_pool_init, NULL,
-				       rte_pktmbuf_init, NULL,
-				       rte_socket_id(), 0);
-	if (mbuf_pool == NULL)
-		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
+	if (zero_copy == 0) {
+		/* Create the mbuf pool. */
+		mbuf_pool = rte_mempool_create("MBUF_POOL", NUM_MBUFS_PER_PORT * valid_num_ports,
+						   MBUF_SIZE, MBUF_CACHE_SIZE,
+						   sizeof(struct rte_pktmbuf_pool_private),
+						   rte_pktmbuf_pool_init, NULL,
+						   rte_pktmbuf_init, NULL,
+						   rte_socket_id(), 0);
+		if (mbuf_pool == NULL)
+			rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
+
+		for (queue_id = 0; queue_id < MAX_QUEUES + 1;queue_id ++) {
+			vpool_array[queue_id].pool = mbuf_pool;
+		}
+
+		if (vm2vm_mode == VM2VM_HARDWARE) {
+			/* Enable VT loopback so the NIC's L2 switch handles VM-to-VM forwarding. */
+			vmdq_conf_default.rx_adv_conf.vmdq_rx_conf.enable_loop_back = 1;
+			LOG_DEBUG(CONFIG, "Enable loop back for L2 switch in vmdq.\n");
+		}
+	} else {
+		uint32_t nb_mbuf;
+		char pool_name[RTE_MEMPOOL_NAMESIZE];
+		char ring_name[RTE_MEMPOOL_NAMESIZE];
+
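+		/* Per-queue start: queues are started/stopped individually when a guest device is added/removed (see new_device/destroy_device). */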
+		rx_conf_default.start_rx_per_q = (uint8_t)zero_copy;
+		rx_conf_default.rx_drop_en = 0;
+		tx_conf_default.start_tx_per_q = (uint8_t)zero_copy;
+		nb_mbuf = num_rx_descriptor + num_switching_cores * MBUF_CACHE_SIZE_ZCP
+				+ num_switching_cores * MAX_PKT_BURST;
+
+		for (queue_id = 0; queue_id < MAX_QUEUES; queue_id ++) {
+			rte_snprintf(pool_name, sizeof(pool_name), "rxmbuf_pool_%u", queue_id);
+			rte_snprintf(ring_name, sizeof(ring_name), "rxmbuf_ring_%u", queue_id);
+			setup_mempool_tbl(rte_socket_id(), queue_id, pool_name, ring_name, nb_mbuf);
+		}
+
+		nb_mbuf = num_tx_descriptor + num_switching_cores * MBUF_CACHE_SIZE_ZCP
+				+ num_switching_cores * MAX_PKT_BURST;
 
+		for (queue_id = 0; queue_id < MAX_QUEUES; queue_id ++) {
+			rte_snprintf(pool_name, sizeof(pool_name), "txmbuf_pool_%u", queue_id);
+			rte_snprintf(ring_name, sizeof(ring_name), "txmbuf_ring_%u", queue_id);
+			setup_mempool_tbl(rte_socket_id(), (queue_id + MAX_QUEUES),
+				pool_name, ring_name, nb_mbuf);
+		}
+
+		if (vm2vm_mode == VM2VM_HARDWARE) {
+			/* Enable VT loopback so the NIC's L2 switch handles VM-to-VM forwarding. */
+			vmdq_conf_default.rx_adv_conf.vmdq_rx_conf.enable_loop_back = 1;
+			LOG_DEBUG(CONFIG, "Enable loop back for L2 switch in vmdq.\n");
+		}
+	}
 	/* Set log level. */
 	rte_set_log_level(LOG_LEVEL);
 
@@ -1707,7 +2786,7 @@ MAIN(int argc, char *argv[])
 			RTE_LOG(INFO, PORT, "Skipping disabled port %d\n", portid);
 			continue;
 		}
-		if (port_init(portid, mbuf_pool) != 0)
+		if (port_init(portid) != 0)
 			rte_exit(EXIT_FAILURE, "Cannot initialize network ports\n");
 	}
 
@@ -1723,8 +2802,31 @@ MAIN(int argc, char *argv[])
 		pthread_create(&tid, NULL, (void*)print_stats, NULL );
 
 	/* Launch all data cores. */
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
-		rte_eal_remote_launch(switch_worker, mbuf_pool, lcore_id);
+	if (zero_copy == 0) {
+		RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+			rte_eal_remote_launch(switch_worker, mbuf_pool, lcore_id);
+		}
+	} else {
+		uint32_t count_in_mempool, index, i;
+		for (index = 0; index < 2*MAX_QUEUES; index ++) {
+			/* For all RX and TX queues. */
+			count_in_mempool = rte_mempool_count(vpool_array[index].pool);
+
+			/* Transfer all unattached mbufs from vpool.pool to vpool.ring. */
+			for (i = 0; i < count_in_mempool; i++) {
+				struct rte_mbuf* mbuf = __rte_mbuf_raw_alloc(vpool_array[index].pool);
+				rte_ring_sp_enqueue(vpool_array[index].ring, (void*)mbuf);
+			}
+
+			LOG_DEBUG(CONFIG, "in MAIN: mbuf count in mempool at initial is: %d \n",
+				count_in_mempool);
+			LOG_DEBUG(CONFIG, "in MAIN: mbuf count in  ring at initial  is : %d \n",
+				rte_ring_count(vpool_array[index].ring));
+		}
+
+		RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+			rte_eal_remote_launch(switch_worker_zcp, NULL, lcore_id);
+		}
 	}
 
 	/* Register CUSE device to handle IOCTLs. */
diff --git a/examples/vhost/virtio-net.c b/examples/vhost/virtio-net.c
index b7b39be..bccf5f3 100644
--- a/examples/vhost/virtio-net.c
+++ b/examples/vhost/virtio-net.c
@@ -46,6 +46,7 @@
 #include <rte_ethdev.h>
 #include <rte_log.h>
 #include <rte_string_fns.h>
+#include <rte_memory.h>
 
 #include "main.h"
 #include "virtio-net.h"
@@ -326,7 +327,7 @@ add_config_ll_entry(struct virtio_net_config_ll *new_ll_dev)
 			while ((ll_dev->next != NULL) && (ll_dev->dev.device_fh == (ll_dev->next->dev.device_fh - 1)))
 				ll_dev = ll_dev->next;
 
-			new_ll_dev->dev.device_fh++;
+			new_ll_dev->dev.device_fh = ll_dev->dev.device_fh + 1;
 			new_ll_dev->next = ll_dev->next;
 			ll_dev->next = new_ll_dev;
 		}
@@ -346,6 +347,8 @@ cleanup_device(struct virtio_net *dev)
 	/* Unmap QEMU memory file if mapped. */
 	if (dev->mem) {
 		munmap((void*)(uintptr_t)dev->mem->mapped_address, (size_t)dev->mem->mapped_size);
+		if (dev->mem->regions_hpa)
+			free(dev->mem->regions_hpa);
 		free(dev->mem);
 	}
 
@@ -590,6 +593,97 @@ set_features(struct vhost_device_ctx ctx, uint64_t *pu)
 }
 
 /*
+ * Calculate how many additional physically contiguous sub-regions are needed for one region
+ * whose vhost virtual address range is contiguous. The region starts at vva_start and is
+ * 'size' bytes long.
+ */
+static uint32_t check_hpa_regions(uint64_t vva_start, uint64_t size)
+{
+	uint32_t i, nregions = 0, page_size = PAGE_SIZE;
+	uint64_t cur_phys_addr = 0, next_phys_addr = 0;
+	if (vva_start % page_size) {
+		LOG_DEBUG(CONFIG, "in check_hpa_regions: vva start(%p) mod page_size(%d) has remainder\n",
+				(void*)(uintptr_t)vva_start, page_size);
+		return 0;
+	}
+	if (size % page_size) {
+		LOG_DEBUG(CONFIG, "in check_hpa_regions: size((%"PRIu64")) mod page_size(%d) has remainder\n", size, page_size);
+		return 0;
+	}
+	for (i = 0; i < size - page_size; i = i + page_size) {
+		if (((cur_phys_addr = rte_mem_virt2phy((void*)(uintptr_t)(vva_start + i))) + page_size) != 
+			(next_phys_addr = rte_mem_virt2phy((void*)(uintptr_t)(vva_start + i + page_size)))) {
+			++nregions;
+			LOG_DEBUG(CONFIG, "in check_hpa_regions: hva addr:(%p) is not contiguous with hva addr:(%p), diff: %d\n",
+				(void*)(uintptr_t)(vva_start + (uint64_t)i),
+				(void*)(uintptr_t)(vva_start + (uint64_t)i + page_size), page_size);
+			LOG_DEBUG(CONFIG, "in check_hpa_regions: hpa addr:(%p) is not contiguous with hpa addr:(%p), diff:(%"PRIu64")\n",
+				(void*)(uintptr_t)cur_phys_addr,
+				(void*)(uintptr_t)next_phys_addr, (next_phys_addr-cur_phys_addr));
+		}
+	}
+	return nregions;
+}
+
+/*
+ * Divide each region whose vhost virtual address range is contiguous into sub-regions
+ * in which the physical addresses are also contiguous, and fill the offset (to GPA),
+ * size and other information of each sub-region into regions_hpa.
+ */
+static uint32_t fill_hpa_memory_regions(void* memory)
+{
+	uint32_t regionidx, regionidx_hpa = 0, i, k, page_size = PAGE_SIZE;
+	uint64_t cur_phys_addr = 0, next_phys_addr = 0, vva_start;
+	struct virtio_memory *virtio_memory = (struct virtio_memory*)memory;
+	struct virtio_memory_regions_hpa * mem_region_hpa = virtio_memory->regions_hpa;
+
+	if (mem_region_hpa == NULL)
+		return 0;
+
+	for (regionidx = 0; regionidx < virtio_memory->nregions; regionidx++) {
+		vva_start = virtio_memory->regions[regionidx].guest_phys_address + virtio_memory->regions[regionidx].address_offset;
+		mem_region_hpa[regionidx_hpa].guest_phys_address = virtio_memory->regions[regionidx].guest_phys_address;
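+		/* host_phys_addr_offset = HPA - GPA, so the HPA can later be recovered as GPA + offset. */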
+		mem_region_hpa[regionidx_hpa].host_phys_addr_offset =
+			rte_mem_virt2phy((void*)(uintptr_t)(vva_start)) - mem_region_hpa[regionidx_hpa].guest_phys_address;
+		LOG_DEBUG(CONFIG, "in fill_hpa_regions: guest phys addr start[%d]:(%p)\n", regionidx_hpa,
+				(void*)(uintptr_t)(mem_region_hpa[regionidx_hpa].guest_phys_address));
+		LOG_DEBUG(CONFIG, "in fill_hpa_regions: host  phys addr start[%d]:(%p)\n", regionidx_hpa, 
+				(void*)(uintptr_t)(mem_region_hpa[regionidx_hpa].host_phys_addr_offset));
+		for (i = 0, k = 0; i < virtio_memory->regions[regionidx].memory_size - page_size; i += page_size) {
+			if (((cur_phys_addr = rte_mem_virt2phy((void*)(uintptr_t)(vva_start + i))) + page_size) != 
+				(next_phys_addr = rte_mem_virt2phy((void*)(uintptr_t)(vva_start + i + page_size)))) {
+				mem_region_hpa[regionidx_hpa].guest_phys_address_end =
+					mem_region_hpa[regionidx_hpa].guest_phys_address + k + page_size;
+				mem_region_hpa[regionidx_hpa].memory_size = k + page_size;
+				LOG_DEBUG(CONFIG, "in fill_hpa_regions: guest phys addr end  [%d]:(%p)\n", 
+					regionidx_hpa, (void*)(uintptr_t)(mem_region_hpa[regionidx_hpa].guest_phys_address_end));
+				LOG_DEBUG(CONFIG, "in fill_hpa_regions: guest phys addr size [%d]:(%p)\n", 
+					regionidx_hpa, (void*)(uintptr_t)(mem_region_hpa[regionidx_hpa].memory_size));
+				mem_region_hpa[regionidx_hpa + 1].guest_phys_address = mem_region_hpa[regionidx_hpa].guest_phys_address_end;
+				++regionidx_hpa;
+				mem_region_hpa[regionidx_hpa].host_phys_addr_offset =
+					next_phys_addr - mem_region_hpa[regionidx_hpa].guest_phys_address;
+				LOG_DEBUG(CONFIG, "in fill_hpa_regions: guest phys addr start[%d]:(%p)\n", 
+					regionidx_hpa, (void*)(uintptr_t)(mem_region_hpa[regionidx_hpa].guest_phys_address));
+				LOG_DEBUG(CONFIG, "in fill_hpa_regions: host  phys addr start[%d]:(%p)\n", 
+					regionidx_hpa, (void*)(uintptr_t)(mem_region_hpa[regionidx_hpa].host_phys_addr_offset));
+				k = 0;
+			} else {
+				k += page_size;
+			}
+		}
+		mem_region_hpa[regionidx_hpa].guest_phys_address_end = mem_region_hpa[regionidx_hpa].guest_phys_address + k + page_size;
+		mem_region_hpa[regionidx_hpa].memory_size = k + page_size;
+		LOG_DEBUG(CONFIG, "in fill_hpa_regions: guest phys addr end  [%d]:(%p)\n", regionidx_hpa, 
+				(void*)(uintptr_t)(mem_region_hpa[regionidx_hpa].guest_phys_address_end));
+		LOG_DEBUG(CONFIG, "in fill_hpa_regions: guest phys addr size [%d]:(%p)\n", regionidx_hpa, 
+				(void*)(uintptr_t)(mem_region_hpa[regionidx_hpa].memory_size));
+		++regionidx_hpa;
+	}
+	return regionidx_hpa;
+}
+
+/*
  * Called from CUSE IOCTL: VHOST_SET_MEM_TABLE
  * This function creates and populates the memory structure for the device. This includes
  * storing offsets used to translate buffer addresses.
@@ -681,16 +775,36 @@ set_mem_table(struct vhost_device_ctx ctx, const void *mem_regions_addr, uint32_
 		}
 	}
 	mem->nregions = valid_regions;
+	mem->nregions_hpa = mem->nregions;
 	dev->mem = mem;
 
 	/*
 	 * Calculate the address offset for each region. This offset is used to identify the vhost virtual address
 	 * corresponding to a QEMU guest physical address.
 	 */
-	for (regionidx = 0; regionidx < dev->mem->nregions; regionidx++)
+	for (regionidx = 0; regionidx < dev->mem->nregions; regionidx++) {
 		dev->mem->regions[regionidx].address_offset = dev->mem->regions[regionidx].userspace_address - dev->mem->base_address
 			+ dev->mem->mapped_address - dev->mem->regions[regionidx].guest_phys_address;
 
+		dev->mem->nregions_hpa += check_hpa_regions(dev->mem->regions[regionidx].guest_phys_address 
+			+ dev->mem->regions[regionidx].address_offset, dev->mem->regions[regionidx].memory_size);
+	}
+	if (dev->mem->regions_hpa != NULL) {
+		free(dev->mem->regions_hpa);
+		dev->mem->regions_hpa = NULL;
+	}
+
+	dev->mem->regions_hpa = (struct virtio_memory_regions_hpa*) calloc(1,
+		(sizeof(struct virtio_memory_regions_hpa) * dev->mem->nregions_hpa));
+	if (dev->mem->regions_hpa == NULL) {
+		RTE_LOG(ERR, CONFIG, "(%"PRIu64") Failed to allocate memory for dev->mem->regions_hpa.\n", dev->device_fh);
+		return -1;
+	}
+	if (fill_hpa_memory_regions((void*)dev->mem) != dev->mem->nregions_hpa) {
+		RTE_LOG(ERR, CONFIG, "in set_mem_table: hpa memory regions number mismatch: [%d]\n", dev->mem->nregions_hpa);
+		return -1;
+	}
+
 	return 0;
 }
 
@@ -918,7 +1032,7 @@ set_backend(struct vhost_device_ctx ctx, struct vhost_vring_file *file)
 	if (!(dev->flags & VIRTIO_DEV_RUNNING)) {
 		if (((int)dev->virtqueue[VIRTIO_TXQ]->backend != VIRTIO_DEV_STOPPED) &&
 			((int)dev->virtqueue[VIRTIO_RXQ]->backend != VIRTIO_DEV_STOPPED))
-			notify_ops->new_device(dev);
+			return notify_ops->new_device(dev);
 	/* Otherwise we remove it. */
 	} else
 		if (file->fd == VIRTIO_DEV_STOPPED) {
diff --git a/examples/vhost/virtio-net.h b/examples/vhost/virtio-net.h
index 3e677e7..bddf9f1 100644
--- a/examples/vhost/virtio-net.h
+++ b/examples/vhost/virtio-net.h
@@ -40,6 +40,7 @@
 /* Backend value set by guest. */
 #define VIRTIO_DEV_STOPPED -1
 
+#define PAGE_SIZE   4096
 
 /* Enum for virtqueue management. */
 enum {VIRTIO_RXQ, VIRTIO_TXQ, VIRTIO_QNUM};
@@ -100,6 +101,16 @@ struct virtio_memory_regions {
 };
 
 /*
+ * Information relating to memory regions including offsets to addresses in host physical space.
+ */
+struct virtio_memory_regions_hpa {
+	uint64_t	guest_phys_address;		/* Base guest physical address of region. */
+	uint64_t	guest_phys_address_end;		/* End guest physical address of region. */
+	uint64_t	memory_size;			/* Size of region. */
+	uint64_t	host_phys_addr_offset;		/* Offset of region for gpa to hpa translation. */
+};
+
+/*
  * Memory structure includes region and mapping information.
  */
 struct virtio_memory {
@@ -107,7 +118,9 @@ struct virtio_memory {
 	uint64_t			mapped_address;			/* Mapped address of memory file base in our applications memory space. */
 	uint64_t			mapped_size;			/* Total size of memory file. */
 	uint32_t			nregions;				/* Number of memory regions. */
-	struct virtio_memory_regions 	regions[0];	/* Memory region information. */
+	uint32_t			nregions_hpa;			/* Number of memory regions for gpa to hpa translation. */
+	struct virtio_memory_regions_hpa 	*regions_hpa;	/* Memory region information for gpa to hpa translation. */
+	struct virtio_memory_regions 		regions[0];		/* Memory region information. */
 };
 
 /*
-- 
1.9.0

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [dpdk-dev] [PATCH v2 0/3] Support zero copy RX/TX in user space vhost
  2014-05-20  5:25 [dpdk-dev] [PATCH v2 0/3] Support zero copy RX/TX in user space vhost Ouyang Changchun
                   ` (2 preceding siblings ...)
  2014-05-20  5:25 ` [dpdk-dev] [PATCH v2 3/3] examples/vhost: Support user space vhost zero copy Ouyang Changchun
@ 2014-05-27 23:01 ` Thomas Monjalon
  2014-05-28  0:53   ` Ouyang, Changchun
  3 siblings, 1 reply; 6+ messages in thread
From: Thomas Monjalon @ 2014-05-27 23:01 UTC (permalink / raw)
  To: Ouyang Changchun; +Cc: dev

Hi,

checkpatch.pl is reporting some errors and I think some of them should be avoided.
Please check it.

Thanks
-- 
Thomas

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [dpdk-dev] [PATCH v2 0/3] Support zero copy RX/TX in user space vhost
  2014-05-27 23:01 ` [dpdk-dev] [PATCH v2 0/3] Support zero copy RX/TX in user space vhost Thomas Monjalon
@ 2014-05-28  0:53   ` Ouyang, Changchun
  0 siblings, 0 replies; 6+ messages in thread
From: Ouyang, Changchun @ 2014-05-28  0:53 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

Yes I will send out a patch v3 to replace the patch v2.
Thanks
Changchun

-----Original Message-----
From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com] 
Sent: Wednesday, May 28, 2014 7:02 AM
To: Ouyang, Changchun
Cc: dev@dpdk.org
Subject: Re: [PATCH v2 0/3] Support zero copy RX/TX in user space vhost

Hi,

checkpatch.pl is reporting some errors and I think some of them should be avoided.
Please check it.

Thanks
-- 
Thomas

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2014-05-28  0:54 UTC | newest]

Thread overview: 6+ messages
2014-05-20  5:25 [dpdk-dev] [PATCH v2 0/3] Support zero copy RX/TX in user space vhost Ouyang Changchun
2014-05-20  5:25 ` [dpdk-dev] [PATCH v2 1/3] ethdev: Add API to support queue start and stop functionality for RX/TX Ouyang Changchun
2014-05-20  5:25 ` [dpdk-dev] [PATCH v2 2/3] ixgbe: Implement queue start and stop functionality in IXGBE PMD Ouyang Changchun
2014-05-20  5:25 ` [dpdk-dev] [PATCH v2 3/3] examples/vhost: Support user space vhost zero copy Ouyang Changchun
2014-05-27 23:01 ` [dpdk-dev] [PATCH v2 0/3] Support zero copy RX/TX in user space vhost Thomas Monjalon
2014-05-28  0:53   ` Ouyang, Changchun
