DPDK patches and discussions
* [dpdk-dev] [PATCH v2 0/6] vmxnet3 pmd fixes/improvement
@ 2014-11-05  1:49 Yong Wang
  2014-11-05  1:49 ` [dpdk-dev] [PATCH v2 1/6] vmxnet3: Fix VLAN Rx stripping Yong Wang
                   ` (7 more replies)
  0 siblings, 8 replies; 10+ messages in thread
From: Yong Wang @ 2014-11-05  1:49 UTC (permalink / raw)
  To: dev

This patch series includes various fixes and improvements to the
vmxnet3 pmd driver.

V2:
- Add more commit descriptions
- Add a new patch that improves tx performance for small packets

Yong Wang (6):
  vmxnet3: Fix VLAN Rx stripping
  vmxnet3: Add VLAN Tx offload
  vmxnet3: Fix dev stop/restart bug
  vmxnet3: Add rx pkt check offloads
  vmxnet3: Perf improvement on the rx path
  vmxnet3: Leverage data_ring on tx path

 lib/librte_pmd_vmxnet3/vmxnet3_ethdev.c |   7 +-
 lib/librte_pmd_vmxnet3/vmxnet3_ring.h   |  13 +-
 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c   | 353 +++++++++++++++++++++-----------
 3 files changed, 242 insertions(+), 131 deletions(-)

-- 
1.9.1


* [dpdk-dev] [PATCH v2 1/6] vmxnet3: Fix VLAN Rx stripping
  2014-11-05  1:49 [dpdk-dev] [PATCH v2 0/6] vmxnet3 pmd fixes/improvement Yong Wang
@ 2014-11-05  1:49 ` Yong Wang
  2014-11-05  1:49 ` [dpdk-dev] [PATCH v2 2/6] vmxnet3: Add VLAN Tx offload Yong Wang
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Yong Wang @ 2014-11-05  1:49 UTC (permalink / raw)
  To: dev

Do not reset vlan_tci to 0 when a valid VLAN tag has been stripped;
doing so discarded the tag that had just been copied into the mbuf.
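
For context (not part of the patch): a minimal sketch of how an rx
application consumes the stripped tag, assuming the rte_ethdev/rte_mbuf
API used elsewhere in this series and a hypothetical port with queue 0:

	#include <stdio.h>
	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	static void dump_rx_vlan(uint8_t port_id)
	{
		struct rte_mbuf *pkts[32];
		uint16_t i, nb = rte_eth_rx_burst(port_id, 0, pkts, 32);

		for (i = 0; i < nb; i++) {
			if (pkts[i]->ol_flags & PKT_RX_VLAN_PKT)
				/* meaningful only because the PMD no longer
				 * zeroes vlan_tci after setting it */
				printf("stripped vlan tci: %u\n",
				       (unsigned)pkts[i]->vlan_tci);
			rte_pktmbuf_free(pkts[i]);
		}
	}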

Signed-off-by: Yong Wang <yongwang@vmware.com>
---
 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 263f9ce..986e5e5 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -540,21 +540,19 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 			/* Check for hardware stripped VLAN tag */
 			if (rcd->ts) {
-
 				PMD_RX_LOG(ERR, "Received packet with vlan ID: %d.",
 					   rcd->tci);
 				rxm->ol_flags = PKT_RX_VLAN_PKT;
-
 #ifdef RTE_LIBRTE_VMXNET3_DEBUG_DRIVER
 				VMXNET3_ASSERT(rxm &&
 					       rte_pktmbuf_mtod(rxm, void *));
 #endif
 				/* Copy vlan tag in packet buffer */
-				rxm->vlan_tci = rte_le_to_cpu_16(
-						(uint16_t)rcd->tci);
-
-			} else
+				rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd->tci);
+			} else {
 				rxm->ol_flags = 0;
+				rxm->vlan_tci = 0;
+			}
 
 			/* Initialize newly received packet buffer */
 			rxm->port = rxq->port_id;
@@ -563,11 +561,9 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			rxm->pkt_len = (uint16_t)rcd->len;
 			rxm->data_len = (uint16_t)rcd->len;
 			rxm->port = rxq->port_id;
-			rxm->vlan_tci = 0;
 			rxm->data_off = RTE_PKTMBUF_HEADROOM;
 
 			rx_pkts[nb_rx++] = rxm;
-
 rcd_done:
 			rxq->cmd_ring[ring_idx].next2comp = idx;
 			VMXNET3_INC_RING_IDX_ONLY(rxq->cmd_ring[ring_idx].next2comp, rxq->cmd_ring[ring_idx].size);
-- 
1.9.1


* [dpdk-dev] [PATCH v2 2/6] vmxnet3: Add VLAN Tx offload
  2014-11-05  1:49 [dpdk-dev] [PATCH v2 0/6] vmxnet3 pmd fixes/improvement Yong Wang
  2014-11-05  1:49 ` [dpdk-dev] [PATCH v2 1/6] vmxnet3: Fix VLAN Rx stripping Yong Wang
@ 2014-11-05  1:49 ` Yong Wang
  2014-11-05  1:49 ` [dpdk-dev] [PATCH v2 3/6] vmxnet3: Fix dev stop/restart bug Yong Wang
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Yong Wang @ 2014-11-05  1:49 UTC (permalink / raw)
  To: dev

Signed-off-by: Yong Wang <yongwang@vmware.com>
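
The patch carries no commit description; as a hedged illustration (not
from the patch), an application would request tag insertion roughly as
below, assuming the PKT_TX_VLAN_PKT flag and the mbuf vlan_tci field
used in the diff that follows:

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	/* Sketch: ask the PMD to insert VLAN ID 100 on transmit. */
	static void send_tagged(uint8_t port_id, struct rte_mbuf *m)
	{
		m->ol_flags |= PKT_TX_VLAN_PKT;	/* request hw tag insertion */
		m->vlan_tci = 100;		/* tag to insert (host order) */
		/* single-packet burst; return value ignored in this sketch */
		rte_eth_tx_burst(port_id, 0, &m, 1);
	}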
---
 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 986e5e5..0b6363f 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -319,6 +319,12 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			txd->cq = 1;
 			txd->eop = 1;
 
+			/* Add VLAN tag if requested */
+			if (txm->ol_flags & PKT_TX_VLAN_PKT) {
+				txd->ti = 1;
+				txd->tci = rte_cpu_to_le_16(txm->vlan_tci);
+			}
+
 			/* Record current mbuf for freeing it later in tx complete */
 #ifdef RTE_LIBRTE_VMXNET3_DEBUG_DRIVER
 			VMXNET3_ASSERT(txm);
-- 
1.9.1


* [dpdk-dev] [PATCH v2 3/6] vmxnet3: Fix dev stop/restart bug
  2014-11-05  1:49 [dpdk-dev] [PATCH v2 0/6] vmxnet3 pmd fixes/improvement Yong Wang
  2014-11-05  1:49 ` [dpdk-dev] [PATCH v2 1/6] vmxnet3: Fix VLAN Rx stripping Yong Wang
  2014-11-05  1:49 ` [dpdk-dev] [PATCH v2 2/6] vmxnet3: Add VLAN Tx offload Yong Wang
@ 2014-11-05  1:49 ` Yong Wang
  2014-11-05  1:49 ` [dpdk-dev] [PATCH v2 4/6] vmxnet3: Add rx pkt check offloads Yong Wang
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Yong Wang @ 2014-11-05  1:49 UTC (permalink / raw)
  To: dev

This change makes vmxnet3 consistent with other pmds in
terms of dev_stop behavior: rather than releasing the tx/rx
rings, it only resets the ring structures and releases the
pending mbufs.

Verified with various tests (test-pmd and pktgen) on
vmxnet3 that dev stop/restart works correctly.
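
For reference, a minimal sketch (not from the patch) of the
stop/restart sequence this change targets, assuming a port whose
queues have already been configured and set up:

	#include <rte_ethdev.h>

	/* Sketch: stop and restart a port without tearing down its queues.
	 * Before this fix, stopping a vmxnet3 port released its rings, so a
	 * subsequent start on the same queues misbehaved. */
	static int restart_port(uint8_t port_id)
	{
		rte_eth_dev_stop(port_id);	/* resets rings, frees pending mbufs */
		return rte_eth_dev_start(port_id);
	}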

Signed-off-by: Yong Wang <yongwang@vmware.com>
---
 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 78 ++++++++++++++++++++++++++++++++---
 1 file changed, 73 insertions(+), 5 deletions(-)

diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 0b6363f..2017d4b 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -157,7 +157,7 @@ vmxnet3_txq_dump(struct vmxnet3_tx_queue *txq)
 #endif
 
 static inline void
-vmxnet3_cmd_ring_release(vmxnet3_cmd_ring_t *ring)
+vmxnet3_cmd_ring_release_mbufs(vmxnet3_cmd_ring_t *ring)
 {
 	while (ring->next2comp != ring->next2fill) {
 		/* No need to worry about tx desc ownership, device is quiesced by now. */
@@ -171,16 +171,23 @@ vmxnet3_cmd_ring_release(vmxnet3_cmd_ring_t *ring)
 		}
 		vmxnet3_cmd_ring_adv_next2comp(ring);
 	}
+}
+
+static void
+vmxnet3_cmd_ring_release(vmxnet3_cmd_ring_t *ring)
+{
+	vmxnet3_cmd_ring_release_mbufs(ring);
 	rte_free(ring->buf_info);
 	ring->buf_info = NULL;
 }
 
+
 void
 vmxnet3_dev_tx_queue_release(void *txq)
 {
 	vmxnet3_tx_queue_t *tq = txq;
 
-	if (txq != NULL) {
+	if (tq != NULL) {
 		/* Release the cmd_ring */
 		vmxnet3_cmd_ring_release(&tq->cmd_ring);
 	}
@@ -192,13 +199,74 @@ vmxnet3_dev_rx_queue_release(void *rxq)
 	int i;
 	vmxnet3_rx_queue_t *rq = rxq;
 
-	if (rxq != NULL) {
+	if (rq != NULL) {
 		/* Release both the cmd_rings */
 		for (i = 0; i < VMXNET3_RX_CMDRING_SIZE; i++)
 			vmxnet3_cmd_ring_release(&rq->cmd_ring[i]);
 	}
 }
 
+static void
+vmxnet3_dev_tx_queue_reset(void *txq)
+{
+	vmxnet3_tx_queue_t *tq = txq;
+	struct vmxnet3_cmd_ring *ring = &tq->cmd_ring;
+	struct vmxnet3_comp_ring *comp_ring = &tq->comp_ring;
+	int size;
+
+	if (tq != NULL) {
+		/* Release the cmd_ring mbufs */
+		vmxnet3_cmd_ring_release_mbufs(&tq->cmd_ring);
+	}
+
+	/* Tx vmxnet rings structure initialization*/
+	ring->next2fill = 0;
+	ring->next2comp = 0;
+	ring->gen = VMXNET3_INIT_GEN;
+	comp_ring->next2proc = 0;
+	comp_ring->gen = VMXNET3_INIT_GEN;
+
+	size = sizeof(struct Vmxnet3_TxDesc) * ring->size;
+	size += sizeof(struct Vmxnet3_TxCompDesc) * comp_ring->size;
+
+	memset(ring->base, 0, size);
+}
+
+static void
+vmxnet3_dev_rx_queue_reset(void *rxq)
+{
+	int i;
+	vmxnet3_rx_queue_t *rq = rxq;
+	struct vmxnet3_cmd_ring *ring0, *ring1;
+	struct vmxnet3_comp_ring *comp_ring;
+	int size;
+
+	if (rq != NULL) {
+		/* Release both the cmd_rings mbufs */
+		for (i = 0; i < VMXNET3_RX_CMDRING_SIZE; i++)
+			vmxnet3_cmd_ring_release_mbufs(&rq->cmd_ring[i]);
+	}
+
+	ring0 = &rq->cmd_ring[0];
+	ring1 = &rq->cmd_ring[1];
+	comp_ring = &rq->comp_ring;
+
+	/* Rx vmxnet rings structure initialization */
+	ring0->next2fill = 0;
+	ring1->next2fill = 0;
+	ring0->next2comp = 0;
+	ring1->next2comp = 0;
+	ring0->gen = VMXNET3_INIT_GEN;
+	ring1->gen = VMXNET3_INIT_GEN;
+	comp_ring->next2proc = 0;
+	comp_ring->gen = VMXNET3_INIT_GEN;
+
+	size = sizeof(struct Vmxnet3_RxDesc) * (ring0->size + ring1->size);
+	size += sizeof(struct Vmxnet3_RxCompDesc) * comp_ring->size;
+
+	memset(ring0->base, 0, size);
+}
+
 void
 vmxnet3_dev_clear_queues(struct rte_eth_dev *dev)
 {
@@ -211,7 +279,7 @@ vmxnet3_dev_clear_queues(struct rte_eth_dev *dev)
 
 		if (txq != NULL) {
 			txq->stopped = TRUE;
-			vmxnet3_dev_tx_queue_release(txq);
+			vmxnet3_dev_tx_queue_reset(txq);
 		}
 	}
 
@@ -220,7 +288,7 @@ vmxnet3_dev_clear_queues(struct rte_eth_dev *dev)
 
 		if (rxq != NULL) {
 			rxq->stopped = TRUE;
-			vmxnet3_dev_rx_queue_release(rxq);
+			vmxnet3_dev_rx_queue_reset(rxq);
 		}
 	}
 }
-- 
1.9.1


* [dpdk-dev] [PATCH v2 4/6] vmxnet3: Add rx pkt check offloads
  2014-11-05  1:49 [dpdk-dev] [PATCH v2 0/6] vmxnet3 pmd fixes/improvement Yong Wang
                   ` (2 preceding siblings ...)
  2014-11-05  1:49 ` [dpdk-dev] [PATCH v2 3/6] vmxnet3: Fix dev stop/restart bug Yong Wang
@ 2014-11-05  1:49 ` Yong Wang
  2014-11-05  1:49 ` [dpdk-dev] [PATCH v2 5/6] vmxnet3: Perf improvement on the rx path Yong Wang
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Yong Wang @ 2014-11-05  1:49 UTC (permalink / raw)
  To: dev

Only supports IPv4 so far.
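
For context (not part of the patch), an application could consume the
new flags roughly as follows; the flag names match those used in the
diff below:

	#include <rte_mbuf.h>

	/* Sketch: decide whether a received IPv4 packet passed the device's
	 * checksum checks, based on the offload flags this patch sets. */
	static int rx_cksum_ok(const struct rte_mbuf *m)
	{
		if (m->ol_flags & PKT_RX_IP_CKSUM_BAD)
			return 0;
		if (m->ol_flags & PKT_RX_L4_CKSUM_BAD)
			return 0;
		return 1;
	}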

Signed-off-by: Yong Wang <yongwang@vmware.com>
---
 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 2017d4b..e2fb8a8 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -65,6 +65,7 @@
 #include <rte_ether.h>
 #include <rte_ethdev.h>
 #include <rte_prefetch.h>
+#include <rte_ip.h>
 #include <rte_udp.h>
 #include <rte_tcp.h>
 #include <rte_sctp.h>
@@ -614,7 +615,7 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 			/* Check for hardware stripped VLAN tag */
 			if (rcd->ts) {
-				PMD_RX_LOG(ERR, "Received packet with vlan ID: %d.",
+				PMD_RX_LOG(DEBUG, "Received packet with vlan ID: %d.",
 					   rcd->tci);
 				rxm->ol_flags = PKT_RX_VLAN_PKT;
 #ifdef RTE_LIBRTE_VMXNET3_DEBUG_DRIVER
@@ -637,6 +638,25 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			rxm->port = rxq->port_id;
 			rxm->data_off = RTE_PKTMBUF_HEADROOM;
 
+			/* Check packet types, rx checksum errors, etc. Only support IPv4 so far. */
+			if (rcd->v4) {
+				struct ether_hdr *eth = rte_pktmbuf_mtod(rxm, struct ether_hdr *);
+				struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
+
+				if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
+					rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+				else
+					rxm->ol_flags |= PKT_RX_IPV4_HDR;
+
+				if (!rcd->cnc) {
+					if (!rcd->ipc)
+						rxm->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+
+					if ((rcd->tcp || rcd->udp) && !rcd->tuc)
+						rxm->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				}
+			}
+
 			rx_pkts[nb_rx++] = rxm;
 rcd_done:
 			rxq->cmd_ring[ring_idx].next2comp = idx;
-- 
1.9.1


* [dpdk-dev] [PATCH v2 5/6] vmxnet3: Perf improvement on the rx path
  2014-11-05  1:49 [dpdk-dev] [PATCH v2 0/6] vmxnet3 pmd fixes/improvement Yong Wang
                   ` (3 preceding siblings ...)
  2014-11-05  1:49 ` [dpdk-dev] [PATCH v2 4/6] vmxnet3: Add rx pkt check offloads Yong Wang
@ 2014-11-05  1:49 ` Yong Wang
  2014-11-05  1:49 ` [dpdk-dev] [PATCH v2 6/6] vmxnet3: Leverage data_ring on tx path Yong Wang
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Yong Wang @ 2014-11-05  1:49 UTC (permalink / raw)
  To: dev

This patch includes two small performance optimizations
on the rx path:

(1) It adds unlikely() hints on various infrequent error
paths to help the compiler make branch prediction more
efficient.

(2) It also moves a constant assignment out of the packet
polling loop.  This saves one branch per packet.  Both
changes are sketched below.
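
A schematic of both changes (illustrative only; apart from unlikely()
from rte_branch_prediction.h, the identifiers below are placeholders,
not the driver's own):

	#include <stdlib.h>
	#include <rte_branch_prediction.h>	/* unlikely() */

	static int fill_descs(int ring_id, int n, int *btype)
	{
		/* (2) loop-invariant hoisted: the buffer type depends only on
		 * the ring, so it is decided once per call rather than once
		 * per descriptor. */
		int val = (ring_id == 0) ? 1 /* HEAD */ : 2 /* BODY */;
		int i;

		for (i = 0; i < n; i++) {
			void *buf = malloc(64);	/* stand-in for mbuf allocation */

			/* (1) the rare failure path is annotated so the
			 * compiler keeps the common path as the fall-through. */
			if (unlikely(buf == NULL))
				return -1;
			btype[i] = val;
			free(buf);
		}
		return 0;
	}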

Performance evaluation setup:
- On the DPDK side, an l3 forwarding app runs inside a VM
on ESXi with one core assigned to polling.
- On the client side, pktgen/dpdk generates 64B TCP packets
at line rate (14.8M PPS).

Performance results on a Nehalem box (4 cores @ 2.8GHz x 2)
are shown below.  CPU usage is measured with the idle-loop
cost factored out.
- Before the patch: ~900K PPS with 65% of a core used
for DPDK.
- After the patch: only 45% of a core used, while
maintaining the same packet rate.

Signed-off-by: Yong Wang <yongwang@vmware.com>
---
 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 242 ++++++++++++++++------------------
 1 file changed, 116 insertions(+), 126 deletions(-)

diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index e2fb8a8..4799f4d 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -451,6 +451,19 @@ vmxnet3_post_rx_bufs(vmxnet3_rx_queue_t *rxq, uint8_t ring_id)
 	uint32_t i = 0, val = 0;
 	struct vmxnet3_cmd_ring *ring = &rxq->cmd_ring[ring_id];
 
+	if (ring_id == 0) {
+		/* Usually: One HEAD type buf per packet
+		 * val = (ring->next2fill % rxq->hw->bufs_per_pkt) ?
+		 * VMXNET3_RXD_BTYPE_BODY : VMXNET3_RXD_BTYPE_HEAD;
+		 */
+
+		/* We use single packet buffer so all heads here */
+		val = VMXNET3_RXD_BTYPE_HEAD;
+	} else {
+		/* All BODY type buffers for 2nd ring */
+		val = VMXNET3_RXD_BTYPE_BODY;
+	}
+
 	while (vmxnet3_cmd_ring_desc_avail(ring) > 0) {
 		struct Vmxnet3_RxDesc *rxd;
 		struct rte_mbuf *mbuf;
@@ -458,22 +471,9 @@ vmxnet3_post_rx_bufs(vmxnet3_rx_queue_t *rxq, uint8_t ring_id)
 
 		rxd = (struct Vmxnet3_RxDesc *)(ring->base + ring->next2fill);
 
-		if (ring->rid == 0) {
-			/* Usually: One HEAD type buf per packet
-			 * val = (ring->next2fill % rxq->hw->bufs_per_pkt) ?
-			 * VMXNET3_RXD_BTYPE_BODY : VMXNET3_RXD_BTYPE_HEAD;
-			 */
-
-			/* We use single packet buffer so all heads here */
-			val = VMXNET3_RXD_BTYPE_HEAD;
-		} else {
-			/* All BODY type buffers for 2nd ring; which won't be used at all by ESXi */
-			val = VMXNET3_RXD_BTYPE_BODY;
-		}
-
 		/* Allocate blank mbuf for the current Rx Descriptor */
 		mbuf = rte_rxmbuf_alloc(rxq->mp);
-		if (mbuf == NULL) {
+		if (unlikely(mbuf == NULL)) {
 			PMD_RX_LOG(ERR, "Error allocating mbuf in %s", __func__);
 			rxq->stats.rx_buf_alloc_failure++;
 			err = ENOMEM;
@@ -536,151 +536,141 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 	rcd = &rxq->comp_ring.base[rxq->comp_ring.next2proc].rcd;
 
-	if (rxq->stopped) {
+	if (unlikely(rxq->stopped)) {
 		PMD_RX_LOG(DEBUG, "Rx queue is stopped.");
 		return 0;
 	}
 
 	while (rcd->gen == rxq->comp_ring.gen) {
-
 		if (nb_rx >= nb_pkts)
 			break;
+
 		idx = rcd->rxdIdx;
 		ring_idx = (uint8_t)((rcd->rqID == rxq->qid1) ? 0 : 1);
 		rxd = (Vmxnet3_RxDesc *)rxq->cmd_ring[ring_idx].base + idx;
 		rbi = rxq->cmd_ring[ring_idx].buf_info + idx;
 
-		if (rcd->sop != 1 || rcd->eop != 1) {
+		if (unlikely(rcd->sop != 1 || rcd->eop != 1)) {
 			rte_pktmbuf_free_seg(rbi->m);
-
 			PMD_RX_LOG(DEBUG, "Packet spread across multiple buffers\n)");
 			goto rcd_done;
+		}
 
-		} else {
-
-			PMD_RX_LOG(DEBUG, "rxd idx: %d ring idx: %d.", idx, ring_idx);
+		PMD_RX_LOG(DEBUG, "rxd idx: %d ring idx: %d.", idx, ring_idx);
 
 #ifdef RTE_LIBRTE_VMXNET3_DEBUG_DRIVER
-			VMXNET3_ASSERT(rcd->len <= rxd->len);
-			VMXNET3_ASSERT(rbi->m);
+		VMXNET3_ASSERT(rcd->len <= rxd->len);
+		VMXNET3_ASSERT(rbi->m);
 #endif
-			if (rcd->len == 0) {
-				PMD_RX_LOG(DEBUG, "Rx buf was skipped. rxring[%d][%d]\n)",
-					   ring_idx, idx);
+		if (unlikely(rcd->len == 0)) {
+			PMD_RX_LOG(DEBUG, "Rx buf was skipped. rxring[%d][%d]\n)",
+				   ring_idx, idx);
 #ifdef RTE_LIBRTE_VMXNET3_DEBUG_DRIVER
-				VMXNET3_ASSERT(rcd->sop && rcd->eop);
+			VMXNET3_ASSERT(rcd->sop && rcd->eop);
 #endif
-				rte_pktmbuf_free_seg(rbi->m);
-
-				goto rcd_done;
-			}
+			rte_pktmbuf_free_seg(rbi->m);
+			goto rcd_done;
+		}
 
-			/* Assuming a packet is coming in a single packet buffer */
-			if (rxd->btype != VMXNET3_RXD_BTYPE_HEAD) {
-				PMD_RX_LOG(DEBUG,
-					   "Alert : Misbehaving device, incorrect "
-					   " buffer type used. iPacket dropped.");
-				rte_pktmbuf_free_seg(rbi->m);
-				goto rcd_done;
-			}
+		/* Assuming a packet is coming in a single packet buffer */
+		if (unlikely(rxd->btype != VMXNET3_RXD_BTYPE_HEAD)) {
+			PMD_RX_LOG(DEBUG,
+				   "Alert : Misbehaving device, incorrect "
+				   " buffer type used. iPacket dropped.");
+			rte_pktmbuf_free_seg(rbi->m);
+			goto rcd_done;
+		}
 #ifdef RTE_LIBRTE_VMXNET3_DEBUG_DRIVER
-			VMXNET3_ASSERT(rxd->btype == VMXNET3_RXD_BTYPE_HEAD);
+		VMXNET3_ASSERT(rxd->btype == VMXNET3_RXD_BTYPE_HEAD);
 #endif
-			/* Get the packet buffer pointer from buf_info */
-			rxm = rbi->m;
-
-			/* Clear descriptor associated buf_info to be reused */
-			rbi->m = NULL;
-			rbi->bufPA = 0;
-
-			/* Update the index that we received a packet */
-			rxq->cmd_ring[ring_idx].next2comp = idx;
-
-			/* For RCD with EOP set, check if there is frame error */
-			if (rcd->err) {
-				rxq->stats.drop_total++;
-				rxq->stats.drop_err++;
-
-				if (!rcd->fcs) {
-					rxq->stats.drop_fcs++;
-					PMD_RX_LOG(ERR, "Recv packet dropped due to frame err.");
-				}
-				PMD_RX_LOG(ERR, "Error in received packet rcd#:%d rxd:%d",
-					   (int)(rcd - (struct Vmxnet3_RxCompDesc *)
-						 rxq->comp_ring.base), rcd->rxdIdx);
-				rte_pktmbuf_free_seg(rxm);
-
-				goto rcd_done;
-			}
+		/* Get the packet buffer pointer from buf_info */
+		rxm = rbi->m;
 
-			/* Check for hardware stripped VLAN tag */
-			if (rcd->ts) {
-				PMD_RX_LOG(DEBUG, "Received packet with vlan ID: %d.",
-					   rcd->tci);
-				rxm->ol_flags = PKT_RX_VLAN_PKT;
-#ifdef RTE_LIBRTE_VMXNET3_DEBUG_DRIVER
-				VMXNET3_ASSERT(rxm &&
-					       rte_pktmbuf_mtod(rxm, void *));
-#endif
-				/* Copy vlan tag in packet buffer */
-				rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd->tci);
-			} else {
-				rxm->ol_flags = 0;
-				rxm->vlan_tci = 0;
-			}
+		/* Clear descriptor associated buf_info to be reused */
+		rbi->m = NULL;
+		rbi->bufPA = 0;
 
-			/* Initialize newly received packet buffer */
-			rxm->port = rxq->port_id;
-			rxm->nb_segs = 1;
-			rxm->next = NULL;
-			rxm->pkt_len = (uint16_t)rcd->len;
-			rxm->data_len = (uint16_t)rcd->len;
-			rxm->port = rxq->port_id;
-			rxm->data_off = RTE_PKTMBUF_HEADROOM;
-
-			/* Check packet types, rx checksum errors, etc. Only support IPv4 so far. */
-			if (rcd->v4) {
-				struct ether_hdr *eth = rte_pktmbuf_mtod(rxm, struct ether_hdr *);
-				struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
-
-				if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
-					rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
-				else
-					rxm->ol_flags |= PKT_RX_IPV4_HDR;
-
-				if (!rcd->cnc) {
-					if (!rcd->ipc)
-						rxm->ol_flags |= PKT_RX_IP_CKSUM_BAD;
-
-					if ((rcd->tcp || rcd->udp) && !rcd->tuc)
-						rxm->ol_flags |= PKT_RX_L4_CKSUM_BAD;
-				}
-			}
+		/* Update the index that we received a packet */
+		rxq->cmd_ring[ring_idx].next2comp = idx;
 
-			rx_pkts[nb_rx++] = rxm;
-rcd_done:
-			rxq->cmd_ring[ring_idx].next2comp = idx;
-			VMXNET3_INC_RING_IDX_ONLY(rxq->cmd_ring[ring_idx].next2comp, rxq->cmd_ring[ring_idx].size);
+		/* For RCD with EOP set, check if there is frame error */
+		if (unlikely(rcd->err)) {
+			rxq->stats.drop_total++;
+			rxq->stats.drop_err++;
 
-			/* It's time to allocate some new buf and renew descriptors */
-			vmxnet3_post_rx_bufs(rxq, ring_idx);
-			if (unlikely(rxq->shared->ctrl.updateRxProd)) {
-				VMXNET3_WRITE_BAR0_REG(hw, rxprod_reg[ring_idx] + (rxq->queue_id * VMXNET3_REG_ALIGN),
-						       rxq->cmd_ring[ring_idx].next2fill);
+			if (!rcd->fcs) {
+				rxq->stats.drop_fcs++;
+				PMD_RX_LOG(ERR, "Recv packet dropped due to frame err.");
 			}
+			PMD_RX_LOG(ERR, "Error in received packet rcd#:%d rxd:%d",
+				   (int)(rcd - (struct Vmxnet3_RxCompDesc *)
+					 rxq->comp_ring.base), rcd->rxdIdx);
+			rte_pktmbuf_free_seg(rxm);
+			goto rcd_done;
+		}
 
-			/* Advance to the next descriptor in comp_ring */
-			vmxnet3_comp_ring_adv_next2proc(&rxq->comp_ring);
+		/* Check for hardware stripped VLAN tag */
+		if (rcd->ts) {
+			PMD_RX_LOG(DEBUG, "Received packet with vlan ID: %d.",
+				   rcd->tci);
+			rxm->ol_flags = PKT_RX_VLAN_PKT;
+			/* Copy vlan tag in packet buffer */
+			rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd->tci);
+		} else {
+			rxm->ol_flags = 0;
+			rxm->vlan_tci = 0;
+		}
 
-			rcd = &rxq->comp_ring.base[rxq->comp_ring.next2proc].rcd;
-			nb_rxd++;
-			if (nb_rxd > rxq->cmd_ring[0].size) {
-				PMD_RX_LOG(ERR,
-					   "Used up quota of receiving packets,"
-					   " relinquish control.");
-				break;
+		/* Initialize newly received packet buffer */
+		rxm->port = rxq->port_id;
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = (uint16_t)rcd->len;
+		rxm->data_len = (uint16_t)rcd->len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/* Check packet type, checksum errors, etc. Only support IPv4 for now. */
+		if (rcd->v4) {
+			struct ether_hdr *eth = rte_pktmbuf_mtod(rxm, struct ether_hdr *);
+			struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
+
+			if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
+				rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+			else
+				rxm->ol_flags |= PKT_RX_IPV4_HDR;
+
+			if (!rcd->cnc) {
+				if (!rcd->ipc)
+					rxm->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+
+				if ((rcd->tcp || rcd->udp) && !rcd->tuc)
+					rxm->ol_flags |= PKT_RX_L4_CKSUM_BAD;
 			}
 		}
+
+		rx_pkts[nb_rx++] = rxm;
+rcd_done:
+		rxq->cmd_ring[ring_idx].next2comp = idx;
+		VMXNET3_INC_RING_IDX_ONLY(rxq->cmd_ring[ring_idx].next2comp, rxq->cmd_ring[ring_idx].size);
+
+		/* It's time to allocate some new buf and renew descriptors */
+		vmxnet3_post_rx_bufs(rxq, ring_idx);
+		if (unlikely(rxq->shared->ctrl.updateRxProd)) {
+			VMXNET3_WRITE_BAR0_REG(hw, rxprod_reg[ring_idx] + (rxq->queue_id * VMXNET3_REG_ALIGN),
+					       rxq->cmd_ring[ring_idx].next2fill);
+		}
+
+		/* Advance to the next descriptor in comp_ring */
+		vmxnet3_comp_ring_adv_next2proc(&rxq->comp_ring);
+
+		rcd = &rxq->comp_ring.base[rxq->comp_ring.next2proc].rcd;
+		nb_rxd++;
+		if (nb_rxd > rxq->cmd_ring[0].size) {
+			PMD_RX_LOG(ERR,
+				   "Used up quota of receiving packets,"
+				   " relinquish control.");
+			break;
+		}
 	}
 
 	return nb_rx;
-- 
1.9.1


* [dpdk-dev] [PATCH v2 6/6] vmxnet3: Leverage data_ring on tx path
  2014-11-05  1:49 [dpdk-dev] [PATCH v2 0/6] vmxnet3 pmd fixes/improvement Yong Wang
                   ` (4 preceding siblings ...)
  2014-11-05  1:49 ` [dpdk-dev] [PATCH v2 5/6] vmxnet3: Perf improvement on the rx path Yong Wang
@ 2014-11-05  1:49 ` Yong Wang
  2014-11-13 22:07 ` [dpdk-dev] [PATCH v2 0/6] vmxnet3 pmd fixes/improvement Thomas Monjalon
  2014-11-14 16:39 ` Thomas Monjalon
  7 siblings, 0 replies; 10+ messages in thread
From: Yong Wang @ 2014-11-05  1:49 UTC (permalink / raw)
  To: dev

Data_ring is a pre-mapped guest ring buffer that the vmxnet3
backend can access directly, without any buffer address
mapping and unmapping during packet transmission.  It is
useful in reducing the device emulation cost on the tx path.
There is some additional cost on the guest driver for the
packet copy, but overall it is a win.

This patch leverages the data_ring for packets with a
length less than or equal to the data_ring entry size
(128B).  Larger packets do not use the data_ring, as that
would require one extra tx descriptor and it is not clear
whether doing so would be beneficial.
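
In schematic form (a simplified restatement of the diff below, not the
driver code itself; idx stands for the current next2fill index), the
tx-side decision is roughly:

	if (rte_pktmbuf_pkt_len(txm) <= VMXNET3_HDR_COPY_SIZE) {
		/* small packet: copy it into the pre-mapped data_ring slot
		 * that shares the descriptor index, and point the tx
		 * descriptor at that slot */
		rte_memcpy(data_ring->base[idx].data,
			   rte_pktmbuf_mtod(txm, char *),
			   rte_pktmbuf_pkt_len(txm));
		txd->addr = data_ring->basePA +
			    idx * sizeof(struct Vmxnet3_TxDataDesc);
	} else {
		/* large packet: DMA directly from the mbuf as before */
		txd->addr = RTE_MBUF_DATA_DMA_ADDR(txm);
	}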

Performance results show that this patch significantly
boosts vmxnet3 64B tx performance (packet rate) for the
l2fwd application on an Ivy Bridge server by >20%, at
which point we start to hit a bottleneck on the rx side.

Signed-off-by: Yong Wang <yongwang@vmware.com>
---
 lib/librte_pmd_vmxnet3/vmxnet3_ethdev.c |  7 +++---
 lib/librte_pmd_vmxnet3/vmxnet3_ring.h   | 13 +++++++---
 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c   | 43 +++++++++++++++++++++++++--------
 3 files changed, 47 insertions(+), 16 deletions(-)

diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_ethdev.c b/lib/librte_pmd_vmxnet3/vmxnet3_ethdev.c
index c6e69f2..64789ac 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_ethdev.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_ethdev.c
@@ -401,15 +401,17 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
 
 	for (i = 0; i < hw->num_tx_queues; i++) {
 		Vmxnet3_TxQueueDesc *tqd = &hw->tqd_start[i];
-		vmxnet3_tx_queue_t *txq   = dev->data->tx_queues[i];
+		vmxnet3_tx_queue_t *txq  = dev->data->tx_queues[i];
 
 		tqd->ctrl.txNumDeferred  = 0;
 		tqd->ctrl.txThreshold    = 1;
 		tqd->conf.txRingBasePA   = txq->cmd_ring.basePA;
 		tqd->conf.compRingBasePA = txq->comp_ring.basePA;
+		tqd->conf.dataRingBasePA = txq->data_ring.basePA;
 
 		tqd->conf.txRingSize   = txq->cmd_ring.size;
 		tqd->conf.compRingSize = txq->comp_ring.size;
+		tqd->conf.dataRingSize = txq->data_ring.size;
 		tqd->conf.intrIdx      = txq->comp_ring.intr_idx;
 		tqd->status.stopped    = TRUE;
 		tqd->status.error      = 0;
@@ -418,7 +420,7 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
 
 	for (i = 0; i < hw->num_rx_queues; i++) {
 		Vmxnet3_RxQueueDesc *rqd  = &hw->rqd_start[i];
-		vmxnet3_rx_queue_t *rxq    = dev->data->rx_queues[i];
+		vmxnet3_rx_queue_t *rxq   = dev->data->rx_queues[i];
 
 		rqd->conf.rxRingBasePA[0] = rxq->cmd_ring[0].basePA;
 		rqd->conf.rxRingBasePA[1] = rxq->cmd_ring[1].basePA;
@@ -583,7 +585,6 @@ vmxnet3_dev_close(struct rte_eth_dev *dev)
 
 	vmxnet3_dev_stop(dev);
 	hw->adapter_stopped = TRUE;
-
 }
 
 static void
diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_ring.h b/lib/librte_pmd_vmxnet3/vmxnet3_ring.h
index 7a5dd5f..c5abdb6 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_ring.h
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_ring.h
@@ -51,9 +51,9 @@
 
 typedef struct vmxnet3_buf_info {
 	uint16_t               len;
-	struct rte_mbuf       *m;
-	uint64_t             bufPA;
-}vmxnet3_buf_info_t;
+	struct rte_mbuf        *m;
+	uint64_t               bufPA;
+} vmxnet3_buf_info_t;
 
 typedef struct vmxnet3_cmd_ring {
 	vmxnet3_buf_info_t     *buf_info;
@@ -104,6 +104,12 @@ typedef struct vmxnet3_comp_ring {
 	uint64_t	       basePA;
 } vmxnet3_comp_ring_t;
 
+struct vmxnet3_data_ring {
+	struct Vmxnet3_TxDataDesc *base;
+	uint32_t                  size;
+	uint64_t                  basePA;
+};
+
 static inline void
 vmxnet3_comp_ring_adv_next2proc(struct vmxnet3_comp_ring *ring)
 {
@@ -143,6 +149,7 @@ typedef struct vmxnet3_tx_queue {
 	struct vmxnet3_hw            *hw;
 	struct vmxnet3_cmd_ring      cmd_ring;
 	struct vmxnet3_comp_ring     comp_ring;
+	struct vmxnet3_data_ring     data_ring;
 	uint32_t                     qid;
 	struct Vmxnet3_TxQueueDesc   *shared;
 	struct vmxnet3_txq_stats     stats;
diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 4799f4d..e138f9c 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -78,7 +78,6 @@
 #include "vmxnet3_logs.h"
 #include "vmxnet3_ethdev.h"
 
-
 #define RTE_MBUF_DATA_DMA_ADDR(mb) \
 	(uint64_t) ((mb)->buf_physaddr + (mb)->data_off)
 
@@ -144,11 +143,12 @@ vmxnet3_txq_dump(struct vmxnet3_tx_queue *txq)
 	if (txq == NULL)
 		return;
 
-	PMD_TX_LOG(DEBUG, "TXQ: cmd base : 0x%p comp ring base : 0x%p.",
-		   txq->cmd_ring.base, txq->comp_ring.base);
-	PMD_TX_LOG(DEBUG, "TXQ: cmd basePA : 0x%lx comp ring basePA : 0x%lx.",
+	PMD_TX_LOG(DEBUG, "TXQ: cmd base : 0x%p comp ring base : 0x%p data ring base : 0x%p.",
+		   txq->cmd_ring.base, txq->comp_ring.base, txq->data_ring.base);
+	PMD_TX_LOG(DEBUG, "TXQ: cmd basePA : 0x%lx comp ring basePA : 0x%lx data ring basePA : 0x%lx.",
 		   (unsigned long)txq->cmd_ring.basePA,
-		   (unsigned long)txq->comp_ring.basePA);
+		   (unsigned long)txq->comp_ring.basePA,
+		   (unsigned long)txq->data_ring.basePA);
 
 	avail = vmxnet3_cmd_ring_desc_avail(&txq->cmd_ring);
 	PMD_TX_LOG(DEBUG, "TXQ: size=%u; free=%u; next2proc=%u; queued=%u",
@@ -213,6 +213,7 @@ vmxnet3_dev_tx_queue_reset(void *txq)
 	vmxnet3_tx_queue_t *tq = txq;
 	struct vmxnet3_cmd_ring *ring = &tq->cmd_ring;
 	struct vmxnet3_comp_ring *comp_ring = &tq->comp_ring;
+	struct vmxnet3_data_ring *data_ring = &tq->data_ring;
 	int size;
 
 	if (tq != NULL) {
@@ -229,6 +230,7 @@ vmxnet3_dev_tx_queue_reset(void *txq)
 
 	size = sizeof(struct Vmxnet3_TxDesc) * ring->size;
 	size += sizeof(struct Vmxnet3_TxCompDesc) * comp_ring->size;
+	size += sizeof(struct Vmxnet3_TxDataDesc) * data_ring->size;
 
 	memset(ring->base, 0, size);
 }
@@ -342,7 +344,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	hw = txq->hw;
 
-	if (txq->stopped) {
+	if (unlikely(txq->stopped)) {
 		PMD_TX_LOG(DEBUG, "Tx queue is stopped.");
 		return 0;
 	}
@@ -354,6 +356,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	while (nb_tx < nb_pkts) {
 
 		if (vmxnet3_cmd_ring_desc_avail(&txq->cmd_ring)) {
+			int copy_size = 0;
 
 			txm = tx_pkts[nb_tx];
 			/* Don't support scatter packets yet, free them if met */
@@ -377,11 +380,23 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			}
 
 			txd = (Vmxnet3_TxDesc *)(txq->cmd_ring.base + txq->cmd_ring.next2fill);
+			if (rte_pktmbuf_pkt_len(txm) <= VMXNET3_HDR_COPY_SIZE) {
+				struct Vmxnet3_TxDataDesc *tdd;
+			
+				tdd = txq->data_ring.base + txq->cmd_ring.next2fill;
+				copy_size = rte_pktmbuf_pkt_len(txm);
+				rte_memcpy(tdd->data, rte_pktmbuf_mtod(txm, char *), copy_size);
+			}
 
 			/* Fill the tx descriptor */
 			tbi = txq->cmd_ring.buf_info + txq->cmd_ring.next2fill;
 			tbi->bufPA = RTE_MBUF_DATA_DMA_ADDR(txm);
-			txd->addr = tbi->bufPA;
+			if (copy_size)
+				txd->addr = rte_cpu_to_le_64(txq->data_ring.basePA +
+							txq->cmd_ring.next2fill *
+							sizeof(struct Vmxnet3_TxDataDesc));
+			else
+				txd->addr = tbi->bufPA;
 			txd->len = txm->data_len;
 
 			/* Mark the last descriptor as End of Packet. */
@@ -707,11 +722,12 @@ vmxnet3_dev_tx_queue_setup(struct rte_eth_dev *dev,
 			   unsigned int socket_id,
 			   __attribute__((unused)) const struct rte_eth_txconf *tx_conf)
 {
-	struct vmxnet3_hw     *hw = dev->data->dev_private;
+	struct vmxnet3_hw *hw = dev->data->dev_private;
 	const struct rte_memzone *mz;
 	struct vmxnet3_tx_queue *txq;
 	struct vmxnet3_cmd_ring *ring;
 	struct vmxnet3_comp_ring *comp_ring;
+	struct vmxnet3_data_ring *data_ring;
 	int size;
 
 	PMD_INIT_FUNC_TRACE();
@@ -743,6 +759,7 @@ vmxnet3_dev_tx_queue_setup(struct rte_eth_dev *dev,
 
 	ring = &txq->cmd_ring;
 	comp_ring = &txq->comp_ring;
+	data_ring = &txq->data_ring;
 
 	/* Tx vmxnet ring length should be between 512-4096 */
 	if (nb_desc < VMXNET3_DEF_TX_RING_SIZE) {
@@ -757,7 +774,7 @@ vmxnet3_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		ring->size = nb_desc;
 		ring->size &= ~VMXNET3_RING_SIZE_MASK;
 	}
-	comp_ring->size = ring->size;
+	comp_ring->size = data_ring->size = ring->size;
 
 	/* Tx vmxnet rings structure initialization*/
 	ring->next2fill = 0;
@@ -768,6 +785,7 @@ vmxnet3_dev_tx_queue_setup(struct rte_eth_dev *dev,
 
 	size = sizeof(struct Vmxnet3_TxDesc) * ring->size;
 	size += sizeof(struct Vmxnet3_TxCompDesc) * comp_ring->size;
+	size += sizeof(struct Vmxnet3_TxDataDesc) * data_ring->size;
 
 	mz = ring_dma_zone_reserve(dev, "txdesc", queue_idx, size, socket_id);
 	if (mz == NULL) {
@@ -785,6 +803,11 @@ vmxnet3_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	comp_ring->basePA = ring->basePA +
 		(sizeof(struct Vmxnet3_TxDesc) * ring->size);
 
+	/* data_ring initialization */
+	data_ring->base = (Vmxnet3_TxDataDesc *)(comp_ring->base + comp_ring->size);
+	data_ring->basePA = comp_ring->basePA +
+			(sizeof(struct Vmxnet3_TxCompDesc) * comp_ring->size);
+
 	/* cmd_ring0 buf_info allocation */
 	ring->buf_info = rte_zmalloc("tx_ring_buf_info",
 				     ring->size * sizeof(vmxnet3_buf_info_t), CACHE_LINE_SIZE);
@@ -895,7 +918,7 @@ vmxnet3_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	ring1->basePA = ring0->basePA + sizeof(struct Vmxnet3_RxDesc) * ring0->size;
 
 	/* comp_ring initialization */
-	comp_ring->base = ring1->base +  ring1->size;
+	comp_ring->base = ring1->base + ring1->size;
 	comp_ring->basePA = ring1->basePA + sizeof(struct Vmxnet3_RxDesc) *
 		ring1->size;
 
-- 
1.9.1


* Re: [dpdk-dev] [PATCH v2 0/6] vmxnet3 pmd fixes/improvement
  2014-11-05  1:49 [dpdk-dev] [PATCH v2 0/6] vmxnet3 pmd fixes/improvement Yong Wang
                   ` (5 preceding siblings ...)
  2014-11-05  1:49 ` [dpdk-dev] [PATCH v2 6/6] vmxnet3: Leverage data_ring on tx path Yong Wang
@ 2014-11-13 22:07 ` Thomas Monjalon
  2014-11-14  1:38   ` Cao, Waterman
  2014-11-14 16:39 ` Thomas Monjalon
  7 siblings, 1 reply; 10+ messages in thread
From: Thomas Monjalon @ 2014-11-13 22:07 UTC (permalink / raw)
  To: Waterman Cao; +Cc: dev

Hi Waterman,

You wanted to update your regression tests:
	http://dpdk.org/ml/archives/dev/2014-November/007598.html
Should I wait a test report before integrating these patches?

Is there someone else reviewing these patches?

-- 
Thomas


2014-11-04 17:49, Yong Wang:
> This patch series includes various fixes and improvements to the
> vmxnet3 pmd driver.
> 
> V2:
> - Add more commit descriptions
> - Add a new patch that improves tx performance for small packets
> 
> Yong Wang (6):
>   vmxnet3: Fix VLAN Rx stripping
>   vmxnet3: Add VLAN Tx offload
>   vmxnet3: Fix dev stop/restart bug
>   vmxnet3: Add rx pkt check offloads
>   vmxnet3: Perf improvement on the rx path
>   vmxnet3: Leverage data_ring on tx path


* Re: [dpdk-dev] [PATCH v2 0/6] vmxnet3 pmd fixes/improvement
  2014-11-13 22:07 ` [dpdk-dev] [PATCH v2 0/6] vmxnet3 pmd fixes/improvement Thomas Monjalon
@ 2014-11-14  1:38   ` Cao, Waterman
  0 siblings, 0 replies; 10+ messages in thread
From: Cao, Waterman @ 2014-11-14  1:38 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

Hi Thomas,

	I think you can integrate this patch series first.
	We will update our regression tests to cover the new features.
	Currently, Xiaonan is checking with Yong to understand how to verify the new features.

	Thanks
Waterman 

-----Original Message-----
>From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com] 
>Sent: Friday, November 14, 2014 6:07 AM
>To: Cao, Waterman
>Cc: dev@dpdk.org; Yong Wang; Zhang, XiaonanX
>Subject: Re: [dpdk-dev] [PATCH v2 0/6] vmxnet3 pmd fixes/improvement
>
>Hi Waterman,
>
>You wanted to update your regression tests:
>	http://dpdk.org/ml/archives/dev/2014-November/007598.html
>Should I wait a test report before integrating these patches?
>
>Is there someone else reviewing these patches?
>
>-- 
>Thomas
>
>
>2014-11-04 17:49, Yong Wang:
>> This patch series includes various fixes and improvements to the
>> vmxnet3 pmd driver.
>> 
>> V2:
>> - Add more commit descriptions
>> - Add a new patch that improves tx performance for small packets
>> 
>> Yong Wang (6):
>>   vmxnet3: Fix VLAN Rx stripping
>>   vmxnet3: Add VLAN Tx offload
>>   vmxnet3: Fix dev stop/restart bug
>>   vmxnet3: Add rx pkt check offloads
>>   vmxnet3: Perf improvement on the rx path
>>   vmxnet3: Leverage data_ring on tx path


* Re: [dpdk-dev] [PATCH v2 0/6] vmxnet3 pmd fixes/improvement
  2014-11-05  1:49 [dpdk-dev] [PATCH v2 0/6] vmxnet3 pmd fixes/improvement Yong Wang
                   ` (6 preceding siblings ...)
  2014-11-13 22:07 ` [dpdk-dev] [PATCH v2 0/6] vmxnet3 pmd fixes/improvement Thomas Monjalon
@ 2014-11-14 16:39 ` Thomas Monjalon
  7 siblings, 0 replies; 10+ messages in thread
From: Thomas Monjalon @ 2014-11-14 16:39 UTC (permalink / raw)
  To: Yong Wang; +Cc: dev

2014-11-04 17:49, Yong Wang:
> This patch series includes various fixes and improvements to the
> vmxnet3 pmd driver.
> 
> V2:
> - Add more commit descriptions
> - Add a new patch that improves tx performance for small packets
> 
> Yong Wang (6):
>   vmxnet3: Fix VLAN Rx stripping
>   vmxnet3: Add VLAN Tx offload
>   vmxnet3: Fix dev stop/restart bug
>   vmxnet3: Add rx pkt check offloads
>   vmxnet3: Perf improvement on the rx path
>   vmxnet3: Leverage data_ring on tx path

Applied

Thanks for these nice improvements
-- 
Thomas

