DPDK patches and discussions
* [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
@ 2014-10-13  6:23 Yong Wang
  2014-10-13  6:23 ` [dpdk-dev] [PATCH 1/5] vmxnet3: Fix VLAN Rx stripping Yong Wang
                   ` (6 more replies)
  0 siblings, 7 replies; 26+ messages in thread
From: Yong Wang @ 2014-10-13  6:23 UTC (permalink / raw)
  To: dev

This patch series includes various fixes and improvements to the
vmxnet3 pmd driver.

Yong Wang (5):
  vmxnet3: Fix VLAN Rx stripping
  vmxnet3: Add VLAN Tx offload
  vmxnet3: Fix dev stop/restart bug
  vmxnet3: Add rx pkt check offloads
  vmxnet3: Some perf improvement on the rx path

 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 310 +++++++++++++++++++++-------------
 1 file changed, 195 insertions(+), 115 deletions(-)

-- 
1.9.1

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [dpdk-dev] [PATCH 1/5] vmxnet3: Fix VLAN Rx stripping
  2014-10-13  6:23 [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement Yong Wang
@ 2014-10-13  6:23 ` Yong Wang
  2014-10-13  9:31   ` Stephen Hemminger
  2014-10-13  6:23 ` [dpdk-dev] [PATCH 2/5] vmxnet3: Add VLAN Tx offload Yong Wang
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 26+ messages in thread
From: Yong Wang @ 2014-10-13  6:23 UTC (permalink / raw)
  To: dev

The driver should not reset vlan_tci to 0 when a valid VLAN tag has been stripped.

Signed-off-by: Yong Wang <yongwang@vmware.com>
---
 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 263f9ce..986e5e5 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -540,21 +540,19 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 			/* Check for hardware stripped VLAN tag */
 			if (rcd->ts) {
-
 				PMD_RX_LOG(ERR, "Received packet with vlan ID: %d.",
 					   rcd->tci);
 				rxm->ol_flags = PKT_RX_VLAN_PKT;
-
 #ifdef RTE_LIBRTE_VMXNET3_DEBUG_DRIVER
 				VMXNET3_ASSERT(rxm &&
 					       rte_pktmbuf_mtod(rxm, void *));
 #endif
 				/* Copy vlan tag in packet buffer */
-				rxm->vlan_tci = rte_le_to_cpu_16(
-						(uint16_t)rcd->tci);
-
-			} else
+				rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd->tci);
+			} else {
 				rxm->ol_flags = 0;
+				rxm->vlan_tci = 0;
+			}
 
 			/* Initialize newly received packet buffer */
 			rxm->port = rxq->port_id;
@@ -563,11 +561,9 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			rxm->pkt_len = (uint16_t)rcd->len;
 			rxm->data_len = (uint16_t)rcd->len;
 			rxm->port = rxq->port_id;
-			rxm->vlan_tci = 0;
 			rxm->data_off = RTE_PKTMBUF_HEADROOM;
 
 			rx_pkts[nb_rx++] = rxm;
-
 rcd_done:
 			rxq->cmd_ring[ring_idx].next2comp = idx;
 			VMXNET3_INC_RING_IDX_ONLY(rxq->cmd_ring[ring_idx].next2comp, rxq->cmd_ring[ring_idx].size);
-- 
1.9.1

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [dpdk-dev] [PATCH 2/5] vmxnet3: Add VLAN Tx offload
  2014-10-13  6:23 [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement Yong Wang
  2014-10-13  6:23 ` [dpdk-dev] [PATCH 1/5] vmxnet3: Fix VLAN Rx stripping Yong Wang
@ 2014-10-13  6:23 ` Yong Wang
  2014-10-13  6:23 ` [dpdk-dev] [PATCH 3/5] vmxnet3: Fix dev stop/restart bug Yong Wang
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 26+ messages in thread
From: Yong Wang @ 2014-10-13  6:23 UTC (permalink / raw)
  To: dev

Signed-off-by: Yong Wang <yongwang@vmware.com>
---
 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 986e5e5..0b6363f 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -319,6 +319,12 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			txd->cq = 1;
 			txd->eop = 1;
 
+			/* Add VLAN tag if requested */
+			if (txm->ol_flags & PKT_TX_VLAN_PKT) {
+				txd->ti = 1;
+				txd->tci = rte_cpu_to_le_16(txm->vlan_tci);
+			}
+
 			/* Record current mbuf for freeing it later in tx complete */
 #ifdef RTE_LIBRTE_VMXNET3_DEBUG_DRIVER
 			VMXNET3_ASSERT(txm);
-- 
1.9.1

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [dpdk-dev] [PATCH 3/5] vmxnet3: Fix dev stop/restart bug
  2014-10-13  6:23 [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement Yong Wang
  2014-10-13  6:23 ` [dpdk-dev] [PATCH 1/5] vmxnet3: Fix VLAN Rx stripping Yong Wang
  2014-10-13  6:23 ` [dpdk-dev] [PATCH 2/5] vmxnet3: Add VLAN Tx offload Yong Wang
@ 2014-10-13  6:23 ` Yong Wang
  2014-10-13  6:23 ` [dpdk-dev] [PATCH 4/5] vmxnet3: Add rx pkt check offloads Yong Wang
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 26+ messages in thread
From: Yong Wang @ 2014-10-13  6:23 UTC (permalink / raw)
  To: dev

This change makes vmxnet3 consistent with other pmds in
terms of dev_stop behavior: rather than releasing the tx/rx
rings, it only resets the ring structures and releases the
pending mbufs.

Verified with various tests (test-pmd and pktgen) over
vmxnet3 that dev stop/restart works fine.
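
For example, the stop/restart path can be exercised interactively with
testpmd using something like the following (command names assumed from
the testpmd of this era, not re-verified here):

  testpmd> stop
  testpmd> port stop all
  testpmd> port start all
  testpmd> start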

Signed-off-by: Yong Wang <yongwang@vmware.com>
---
 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 78 ++++++++++++++++++++++++++++++++---
 1 file changed, 73 insertions(+), 5 deletions(-)

diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 0b6363f..2017d4b 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -157,7 +157,7 @@ vmxnet3_txq_dump(struct vmxnet3_tx_queue *txq)
 #endif
 
 static inline void
-vmxnet3_cmd_ring_release(vmxnet3_cmd_ring_t *ring)
+vmxnet3_cmd_ring_release_mbufs(vmxnet3_cmd_ring_t *ring)
 {
 	while (ring->next2comp != ring->next2fill) {
 		/* No need to worry about tx desc ownership, device is quiesced by now. */
@@ -171,16 +171,23 @@ vmxnet3_cmd_ring_release(vmxnet3_cmd_ring_t *ring)
 		}
 		vmxnet3_cmd_ring_adv_next2comp(ring);
 	}
+}
+
+static void
+vmxnet3_cmd_ring_release(vmxnet3_cmd_ring_t *ring)
+{
+	vmxnet3_cmd_ring_release_mbufs(ring);
 	rte_free(ring->buf_info);
 	ring->buf_info = NULL;
 }
 
+
 void
 vmxnet3_dev_tx_queue_release(void *txq)
 {
 	vmxnet3_tx_queue_t *tq = txq;
 
-	if (txq != NULL) {
+	if (tq != NULL) {
 		/* Release the cmd_ring */
 		vmxnet3_cmd_ring_release(&tq->cmd_ring);
 	}
@@ -192,13 +199,74 @@ vmxnet3_dev_rx_queue_release(void *rxq)
 	int i;
 	vmxnet3_rx_queue_t *rq = rxq;
 
-	if (rxq != NULL) {
+	if (rq != NULL) {
 		/* Release both the cmd_rings */
 		for (i = 0; i < VMXNET3_RX_CMDRING_SIZE; i++)
 			vmxnet3_cmd_ring_release(&rq->cmd_ring[i]);
 	}
 }
 
+static void
+vmxnet3_dev_tx_queue_reset(void *txq)
+{
+	vmxnet3_tx_queue_t *tq = txq;
+	struct vmxnet3_cmd_ring *ring = &tq->cmd_ring;
+	struct vmxnet3_comp_ring *comp_ring = &tq->comp_ring;
+	int size;
+
+	if (tq != NULL) {
+		/* Release the cmd_ring mbufs */
+		vmxnet3_cmd_ring_release_mbufs(&tq->cmd_ring);
+	}
+
+	/* Tx vmxnet rings structure initialization*/
+	ring->next2fill = 0;
+	ring->next2comp = 0;
+	ring->gen = VMXNET3_INIT_GEN;
+	comp_ring->next2proc = 0;
+	comp_ring->gen = VMXNET3_INIT_GEN;
+
+	size = sizeof(struct Vmxnet3_TxDesc) * ring->size;
+	size += sizeof(struct Vmxnet3_TxCompDesc) * comp_ring->size;
+
+	memset(ring->base, 0, size);
+}
+
+static void
+vmxnet3_dev_rx_queue_reset(void *rxq)
+{
+	int i;
+	vmxnet3_rx_queue_t *rq = rxq;
+	struct vmxnet3_cmd_ring *ring0, *ring1;
+	struct vmxnet3_comp_ring *comp_ring;
+	int size;
+
+	if (rq != NULL) {
+		/* Release both the cmd_rings mbufs */
+		for (i = 0; i < VMXNET3_RX_CMDRING_SIZE; i++)
+			vmxnet3_cmd_ring_release_mbufs(&rq->cmd_ring[i]);
+	}
+
+	ring0 = &rq->cmd_ring[0];
+	ring1 = &rq->cmd_ring[1];
+	comp_ring = &rq->comp_ring;
+
+	/* Rx vmxnet rings structure initialization */
+	ring0->next2fill = 0;
+	ring1->next2fill = 0;
+	ring0->next2comp = 0;
+	ring1->next2comp = 0;
+	ring0->gen = VMXNET3_INIT_GEN;
+	ring1->gen = VMXNET3_INIT_GEN;
+	comp_ring->next2proc = 0;
+	comp_ring->gen = VMXNET3_INIT_GEN;
+
+	size = sizeof(struct Vmxnet3_RxDesc) * (ring0->size + ring1->size);
+	size += sizeof(struct Vmxnet3_RxCompDesc) * comp_ring->size;
+
+	memset(ring0->base, 0, size);
+}
+
 void
 vmxnet3_dev_clear_queues(struct rte_eth_dev *dev)
 {
@@ -211,7 +279,7 @@ vmxnet3_dev_clear_queues(struct rte_eth_dev *dev)
 
 		if (txq != NULL) {
 			txq->stopped = TRUE;
-			vmxnet3_dev_tx_queue_release(txq);
+			vmxnet3_dev_tx_queue_reset(txq);
 		}
 	}
 
@@ -220,7 +288,7 @@ vmxnet3_dev_clear_queues(struct rte_eth_dev *dev)
 
 		if (rxq != NULL) {
 			rxq->stopped = TRUE;
-			vmxnet3_dev_rx_queue_release(rxq);
+			vmxnet3_dev_rx_queue_reset(rxq);
 		}
 	}
 }
-- 
1.9.1

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [dpdk-dev] [PATCH 4/5] vmxnet3: Add rx pkt check offloads
  2014-10-13  6:23 [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement Yong Wang
                   ` (2 preceding siblings ...)
  2014-10-13  6:23 ` [dpdk-dev] [PATCH 3/5] vmxnet3: Fix dev stop/restart bug Yong Wang
@ 2014-10-13  6:23 ` Yong Wang
  2014-10-13  6:23 ` [dpdk-dev] [PATCH 5/5] vmxnet3: Some perf improvement on the rx path Yong Wang
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 26+ messages in thread
From: Yong Wang @ 2014-10-13  6:23 UTC (permalink / raw)
  To: dev

Only supports IPv4 so far.

Signed-off-by: Yong Wang <yongwang@vmware.com>
---
 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 2017d4b..e2fb8a8 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -65,6 +65,7 @@
 #include <rte_ether.h>
 #include <rte_ethdev.h>
 #include <rte_prefetch.h>
+#include <rte_ip.h>
 #include <rte_udp.h>
 #include <rte_tcp.h>
 #include <rte_sctp.h>
@@ -614,7 +615,7 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 			/* Check for hardware stripped VLAN tag */
 			if (rcd->ts) {
-				PMD_RX_LOG(ERR, "Received packet with vlan ID: %d.",
+				PMD_RX_LOG(DEBUG, "Received packet with vlan ID: %d.",
 					   rcd->tci);
 				rxm->ol_flags = PKT_RX_VLAN_PKT;
 #ifdef RTE_LIBRTE_VMXNET3_DEBUG_DRIVER
@@ -637,6 +638,25 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			rxm->port = rxq->port_id;
 			rxm->data_off = RTE_PKTMBUF_HEADROOM;
 
+			/* Check packet types, rx checksum errors, etc. Only support IPv4 so far. */
+			if (rcd->v4) {
+				struct ether_hdr *eth = rte_pktmbuf_mtod(rxm, struct ether_hdr *);
+				struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
+
+				if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
+					rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+				else
+					rxm->ol_flags |= PKT_RX_IPV4_HDR;
+
+				if (!rcd->cnc) {
+					if (!rcd->ipc)
+						rxm->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+
+					if ((rcd->tcp || rcd->udp) && !rcd->tuc)
+						rxm->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				}
+			}
+
 			rx_pkts[nb_rx++] = rxm;
 rcd_done:
 			rxq->cmd_ring[ring_idx].next2comp = idx;
-- 
1.9.1

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [dpdk-dev] [PATCH 5/5] vmxnet3: Some perf improvement on the rx path
  2014-10-13  6:23 [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement Yong Wang
                   ` (3 preceding siblings ...)
  2014-10-13  6:23 ` [dpdk-dev] [PATCH 4/5] vmxnet3: Add rx pkt check offloads Yong Wang
@ 2014-10-13  6:23 ` Yong Wang
  2014-11-05  0:13   ` Thomas Monjalon
  2014-10-13 20:29 ` [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement Thomas Monjalon
  2014-11-04  5:57 ` Zhang, XiaonanX
  6 siblings, 1 reply; 26+ messages in thread
From: Yong Wang @ 2014-10-13  6:23 UTC (permalink / raw)
  To: dev

Signed-off-by: Yong Wang <yongwang@vmware.com>
---
 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 242 ++++++++++++++++------------------
 1 file changed, 116 insertions(+), 126 deletions(-)

diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index e2fb8a8..4799f4d 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -451,6 +451,19 @@ vmxnet3_post_rx_bufs(vmxnet3_rx_queue_t *rxq, uint8_t ring_id)
 	uint32_t i = 0, val = 0;
 	struct vmxnet3_cmd_ring *ring = &rxq->cmd_ring[ring_id];
 
+	if (ring_id == 0) {
+		/* Usually: One HEAD type buf per packet
+		 * val = (ring->next2fill % rxq->hw->bufs_per_pkt) ?
+		 * VMXNET3_RXD_BTYPE_BODY : VMXNET3_RXD_BTYPE_HEAD;
+		 */
+
+		/* We use single packet buffer so all heads here */
+		val = VMXNET3_RXD_BTYPE_HEAD;
+	} else {
+		/* All BODY type buffers for 2nd ring */
+		val = VMXNET3_RXD_BTYPE_BODY;
+	}
+
 	while (vmxnet3_cmd_ring_desc_avail(ring) > 0) {
 		struct Vmxnet3_RxDesc *rxd;
 		struct rte_mbuf *mbuf;
@@ -458,22 +471,9 @@ vmxnet3_post_rx_bufs(vmxnet3_rx_queue_t *rxq, uint8_t ring_id)
 
 		rxd = (struct Vmxnet3_RxDesc *)(ring->base + ring->next2fill);
 
-		if (ring->rid == 0) {
-			/* Usually: One HEAD type buf per packet
-			 * val = (ring->next2fill % rxq->hw->bufs_per_pkt) ?
-			 * VMXNET3_RXD_BTYPE_BODY : VMXNET3_RXD_BTYPE_HEAD;
-			 */
-
-			/* We use single packet buffer so all heads here */
-			val = VMXNET3_RXD_BTYPE_HEAD;
-		} else {
-			/* All BODY type buffers for 2nd ring; which won't be used at all by ESXi */
-			val = VMXNET3_RXD_BTYPE_BODY;
-		}
-
 		/* Allocate blank mbuf for the current Rx Descriptor */
 		mbuf = rte_rxmbuf_alloc(rxq->mp);
-		if (mbuf == NULL) {
+		if (unlikely(mbuf == NULL)) {
 			PMD_RX_LOG(ERR, "Error allocating mbuf in %s", __func__);
 			rxq->stats.rx_buf_alloc_failure++;
 			err = ENOMEM;
@@ -536,151 +536,141 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 	rcd = &rxq->comp_ring.base[rxq->comp_ring.next2proc].rcd;
 
-	if (rxq->stopped) {
+	if (unlikely(rxq->stopped)) {
 		PMD_RX_LOG(DEBUG, "Rx queue is stopped.");
 		return 0;
 	}
 
 	while (rcd->gen == rxq->comp_ring.gen) {
-
 		if (nb_rx >= nb_pkts)
 			break;
+
 		idx = rcd->rxdIdx;
 		ring_idx = (uint8_t)((rcd->rqID == rxq->qid1) ? 0 : 1);
 		rxd = (Vmxnet3_RxDesc *)rxq->cmd_ring[ring_idx].base + idx;
 		rbi = rxq->cmd_ring[ring_idx].buf_info + idx;
 
-		if (rcd->sop != 1 || rcd->eop != 1) {
+		if (unlikely(rcd->sop != 1 || rcd->eop != 1)) {
 			rte_pktmbuf_free_seg(rbi->m);
-
 			PMD_RX_LOG(DEBUG, "Packet spread across multiple buffers\n)");
 			goto rcd_done;
+		}
 
-		} else {
-
-			PMD_RX_LOG(DEBUG, "rxd idx: %d ring idx: %d.", idx, ring_idx);
+		PMD_RX_LOG(DEBUG, "rxd idx: %d ring idx: %d.", idx, ring_idx);
 
 #ifdef RTE_LIBRTE_VMXNET3_DEBUG_DRIVER
-			VMXNET3_ASSERT(rcd->len <= rxd->len);
-			VMXNET3_ASSERT(rbi->m);
+		VMXNET3_ASSERT(rcd->len <= rxd->len);
+		VMXNET3_ASSERT(rbi->m);
 #endif
-			if (rcd->len == 0) {
-				PMD_RX_LOG(DEBUG, "Rx buf was skipped. rxring[%d][%d]\n)",
-					   ring_idx, idx);
+		if (unlikely(rcd->len == 0)) {
+			PMD_RX_LOG(DEBUG, "Rx buf was skipped. rxring[%d][%d]\n)",
+				   ring_idx, idx);
 #ifdef RTE_LIBRTE_VMXNET3_DEBUG_DRIVER
-				VMXNET3_ASSERT(rcd->sop && rcd->eop);
+			VMXNET3_ASSERT(rcd->sop && rcd->eop);
 #endif
-				rte_pktmbuf_free_seg(rbi->m);
-
-				goto rcd_done;
-			}
+			rte_pktmbuf_free_seg(rbi->m);
+			goto rcd_done;
+		}
 
-			/* Assuming a packet is coming in a single packet buffer */
-			if (rxd->btype != VMXNET3_RXD_BTYPE_HEAD) {
-				PMD_RX_LOG(DEBUG,
-					   "Alert : Misbehaving device, incorrect "
-					   " buffer type used. iPacket dropped.");
-				rte_pktmbuf_free_seg(rbi->m);
-				goto rcd_done;
-			}
+		/* Assuming a packet is coming in a single packet buffer */
+		if (unlikely(rxd->btype != VMXNET3_RXD_BTYPE_HEAD)) {
+			PMD_RX_LOG(DEBUG,
+				   "Alert : Misbehaving device, incorrect "
+				   " buffer type used. iPacket dropped.");
+			rte_pktmbuf_free_seg(rbi->m);
+			goto rcd_done;
+		}
 #ifdef RTE_LIBRTE_VMXNET3_DEBUG_DRIVER
-			VMXNET3_ASSERT(rxd->btype == VMXNET3_RXD_BTYPE_HEAD);
+		VMXNET3_ASSERT(rxd->btype == VMXNET3_RXD_BTYPE_HEAD);
 #endif
-			/* Get the packet buffer pointer from buf_info */
-			rxm = rbi->m;
-
-			/* Clear descriptor associated buf_info to be reused */
-			rbi->m = NULL;
-			rbi->bufPA = 0;
-
-			/* Update the index that we received a packet */
-			rxq->cmd_ring[ring_idx].next2comp = idx;
-
-			/* For RCD with EOP set, check if there is frame error */
-			if (rcd->err) {
-				rxq->stats.drop_total++;
-				rxq->stats.drop_err++;
-
-				if (!rcd->fcs) {
-					rxq->stats.drop_fcs++;
-					PMD_RX_LOG(ERR, "Recv packet dropped due to frame err.");
-				}
-				PMD_RX_LOG(ERR, "Error in received packet rcd#:%d rxd:%d",
-					   (int)(rcd - (struct Vmxnet3_RxCompDesc *)
-						 rxq->comp_ring.base), rcd->rxdIdx);
-				rte_pktmbuf_free_seg(rxm);
-
-				goto rcd_done;
-			}
+		/* Get the packet buffer pointer from buf_info */
+		rxm = rbi->m;
 
-			/* Check for hardware stripped VLAN tag */
-			if (rcd->ts) {
-				PMD_RX_LOG(DEBUG, "Received packet with vlan ID: %d.",
-					   rcd->tci);
-				rxm->ol_flags = PKT_RX_VLAN_PKT;
-#ifdef RTE_LIBRTE_VMXNET3_DEBUG_DRIVER
-				VMXNET3_ASSERT(rxm &&
-					       rte_pktmbuf_mtod(rxm, void *));
-#endif
-				/* Copy vlan tag in packet buffer */
-				rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd->tci);
-			} else {
-				rxm->ol_flags = 0;
-				rxm->vlan_tci = 0;
-			}
+		/* Clear descriptor associated buf_info to be reused */
+		rbi->m = NULL;
+		rbi->bufPA = 0;
 
-			/* Initialize newly received packet buffer */
-			rxm->port = rxq->port_id;
-			rxm->nb_segs = 1;
-			rxm->next = NULL;
-			rxm->pkt_len = (uint16_t)rcd->len;
-			rxm->data_len = (uint16_t)rcd->len;
-			rxm->port = rxq->port_id;
-			rxm->data_off = RTE_PKTMBUF_HEADROOM;
-
-			/* Check packet types, rx checksum errors, etc. Only support IPv4 so far. */
-			if (rcd->v4) {
-				struct ether_hdr *eth = rte_pktmbuf_mtod(rxm, struct ether_hdr *);
-				struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
-
-				if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
-					rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
-				else
-					rxm->ol_flags |= PKT_RX_IPV4_HDR;
-
-				if (!rcd->cnc) {
-					if (!rcd->ipc)
-						rxm->ol_flags |= PKT_RX_IP_CKSUM_BAD;
-
-					if ((rcd->tcp || rcd->udp) && !rcd->tuc)
-						rxm->ol_flags |= PKT_RX_L4_CKSUM_BAD;
-				}
-			}
+		/* Update the index that we received a packet */
+		rxq->cmd_ring[ring_idx].next2comp = idx;
 
-			rx_pkts[nb_rx++] = rxm;
-rcd_done:
-			rxq->cmd_ring[ring_idx].next2comp = idx;
-			VMXNET3_INC_RING_IDX_ONLY(rxq->cmd_ring[ring_idx].next2comp, rxq->cmd_ring[ring_idx].size);
+		/* For RCD with EOP set, check if there is frame error */
+		if (unlikely(rcd->err)) {
+			rxq->stats.drop_total++;
+			rxq->stats.drop_err++;
 
-			/* It's time to allocate some new buf and renew descriptors */
-			vmxnet3_post_rx_bufs(rxq, ring_idx);
-			if (unlikely(rxq->shared->ctrl.updateRxProd)) {
-				VMXNET3_WRITE_BAR0_REG(hw, rxprod_reg[ring_idx] + (rxq->queue_id * VMXNET3_REG_ALIGN),
-						       rxq->cmd_ring[ring_idx].next2fill);
+			if (!rcd->fcs) {
+				rxq->stats.drop_fcs++;
+				PMD_RX_LOG(ERR, "Recv packet dropped due to frame err.");
 			}
+			PMD_RX_LOG(ERR, "Error in received packet rcd#:%d rxd:%d",
+				   (int)(rcd - (struct Vmxnet3_RxCompDesc *)
+					 rxq->comp_ring.base), rcd->rxdIdx);
+			rte_pktmbuf_free_seg(rxm);
+			goto rcd_done;
+		}
 
-			/* Advance to the next descriptor in comp_ring */
-			vmxnet3_comp_ring_adv_next2proc(&rxq->comp_ring);
+		/* Check for hardware stripped VLAN tag */
+		if (rcd->ts) {
+			PMD_RX_LOG(DEBUG, "Received packet with vlan ID: %d.",
+				   rcd->tci);
+			rxm->ol_flags = PKT_RX_VLAN_PKT;
+			/* Copy vlan tag in packet buffer */
+			rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd->tci);
+		} else {
+			rxm->ol_flags = 0;
+			rxm->vlan_tci = 0;
+		}
 
-			rcd = &rxq->comp_ring.base[rxq->comp_ring.next2proc].rcd;
-			nb_rxd++;
-			if (nb_rxd > rxq->cmd_ring[0].size) {
-				PMD_RX_LOG(ERR,
-					   "Used up quota of receiving packets,"
-					   " relinquish control.");
-				break;
+		/* Initialize newly received packet buffer */
+		rxm->port = rxq->port_id;
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = (uint16_t)rcd->len;
+		rxm->data_len = (uint16_t)rcd->len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/* Check packet type, checksum errors, etc. Only support IPv4 for now. */
+		if (rcd->v4) {
+			struct ether_hdr *eth = rte_pktmbuf_mtod(rxm, struct ether_hdr *);
+			struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
+
+			if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
+				rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+			else
+				rxm->ol_flags |= PKT_RX_IPV4_HDR;
+
+			if (!rcd->cnc) {
+				if (!rcd->ipc)
+					rxm->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+
+				if ((rcd->tcp || rcd->udp) && !rcd->tuc)
+					rxm->ol_flags |= PKT_RX_L4_CKSUM_BAD;
 			}
 		}
+
+		rx_pkts[nb_rx++] = rxm;
+rcd_done:
+		rxq->cmd_ring[ring_idx].next2comp = idx;
+		VMXNET3_INC_RING_IDX_ONLY(rxq->cmd_ring[ring_idx].next2comp, rxq->cmd_ring[ring_idx].size);
+
+		/* It's time to allocate some new buf and renew descriptors */
+		vmxnet3_post_rx_bufs(rxq, ring_idx);
+		if (unlikely(rxq->shared->ctrl.updateRxProd)) {
+			VMXNET3_WRITE_BAR0_REG(hw, rxprod_reg[ring_idx] + (rxq->queue_id * VMXNET3_REG_ALIGN),
+					       rxq->cmd_ring[ring_idx].next2fill);
+		}
+
+		/* Advance to the next descriptor in comp_ring */
+		vmxnet3_comp_ring_adv_next2proc(&rxq->comp_ring);
+
+		rcd = &rxq->comp_ring.base[rxq->comp_ring.next2proc].rcd;
+		nb_rxd++;
+		if (nb_rxd > rxq->cmd_ring[0].size) {
+			PMD_RX_LOG(ERR,
+				   "Used up quota of receiving packets,"
+				   " relinquish control.");
+			break;
+		}
 	}
 
 	return nb_rx;
-- 
1.9.1

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 1/5] vmxnet3: Fix VLAN Rx stripping
  2014-10-13  6:23 ` [dpdk-dev] [PATCH 1/5] vmxnet3: Fix VLAN Rx stripping Yong Wang
@ 2014-10-13  9:31   ` Stephen Hemminger
  2014-10-13 18:42     ` Yong Wang
  0 siblings, 1 reply; 26+ messages in thread
From: Stephen Hemminger @ 2014-10-13  9:31 UTC (permalink / raw)
  To: Yong Wang; +Cc: dev

On Sun, 12 Oct 2014 23:23:05 -0700
Yong Wang <yongwang@vmware.com> wrote:

> Shouldn't reset vlan_tci to 0 if a valid VLAN tag is stripped.
> 
> Signed-off-by: Yong Wang <yongwang@vmware.com>

Since vlan_tci is initialized to zero by rte_pktmbuf layer,
the driver shouldn't be messing with it.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 1/5] vmxnet3: Fix VLAN Rx stripping
  2014-10-13  9:31   ` Stephen Hemminger
@ 2014-10-13 18:42     ` Yong Wang
  2014-10-22 13:39       ` Stephen Hemminger
  0 siblings, 1 reply; 26+ messages in thread
From: Yong Wang @ 2014-10-13 18:42 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev

Are you referring to the patch as a whole, or is your comment about the reset of vlan_tci on the "else" (no vlan tag stripped) path?  I am not sure I get your comment here.  This patch simply fixes a bug on the rx vlan stripping path, where a valid stripped vlan_tci was unconditionally overwritten later on the rx path in the original vmxnet3 pmd driver.  All the other pmd drivers do the same thing in terms of translating descriptor status into rte_mbuf flags for vlan stripping.
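
Concretely, the before/after on the rx path (condensed from the 1/5 diff) is:

	/* Before: the stripped tag was clobbered a few lines later */
	if (rcd->ts) {
		rxm->ol_flags = PKT_RX_VLAN_PKT;
		rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd->tci);
	} else
		rxm->ol_flags = 0;
	...
	rxm->vlan_tci = 0;	/* unconditional reset, loses the tag */

	/* After: vlan_tci is cleared only when no tag was stripped */
	if (rcd->ts) {
		rxm->ol_flags = PKT_RX_VLAN_PKT;
		rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd->tci);
	} else {
		rxm->ol_flags = 0;
		rxm->vlan_tci = 0;
	}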
________________________________________
From: Stephen Hemminger <stephen@networkplumber.org>
Sent: Monday, October 13, 2014 2:31 AM
To: Yong Wang
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH 1/5] vmxnet3: Fix VLAN Rx stripping

On Sun, 12 Oct 2014 23:23:05 -0700
Yong Wang <yongwang@vmware.com> wrote:

> Shouldn't reset vlan_tci to 0 if a valid VLAN tag is stripped.
>
> Signed-off-by: Yong Wang <yongwang@vmware.com>

Since vlan_tci is initialized to zero by rte_pktmbuf layer,
the driver shouldn't be messing with it.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
  2014-10-13  6:23 [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement Yong Wang
                   ` (4 preceding siblings ...)
  2014-10-13  6:23 ` [dpdk-dev] [PATCH 5/5] vmxnet3: Some perf improvement on the rx path Yong Wang
@ 2014-10-13 20:29 ` Thomas Monjalon
  2014-10-13 21:00   ` Yong Wang
  2014-11-04  5:57 ` Zhang, XiaonanX
  6 siblings, 1 reply; 26+ messages in thread
From: Thomas Monjalon @ 2014-10-13 20:29 UTC (permalink / raw)
  To: Yong Wang; +Cc: dev

Hi,

2014-10-12 23:23, Yong Wang:
> This patch series include various fixes and improvement to the
> vmxnet3 pmd driver.
> 
> Yong Wang (5):
>   vmxnet3: Fix VLAN Rx stripping
>   vmxnet3: Add VLAN Tx offload
>   vmxnet3: Fix dev stop/restart bug
>   vmxnet3: Add rx pkt check offloads
>   vmxnet3: Some perf improvement on the rx path

Please, could you describe the performance gain from these patches?
Benchmark numbers would be appreciated.

Thanks
-- 
Thomas

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
  2014-10-13 20:29 ` [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement Thomas Monjalon
@ 2014-10-13 21:00   ` Yong Wang
  2014-10-21 22:10     ` Yong Wang
  2014-11-05  1:32     ` Cao, Waterman
  0 siblings, 2 replies; 26+ messages in thread
From: Yong Wang @ 2014-10-13 21:00 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

Only the last one is performance related and it merely tries to give hints to the compiler to hopefully make branch prediction more efficient.  It also moves a constant assignment out of the pkt polling loop.

We did a performance evaluation on a Nehalem box with 2 sockets x 4 cores @ 2.8GHz:
On the DPDK side, it runs an l3 forwarding app in a VM on ESXi with one core assigned for polling.  The client side is pktgen/dpdk, pumping 64B TCP packets at line rate.  Before the patch, we were seeing ~900K PPS with 65% of a core used for DPDK.  After the patch, we see the same packet rate with only 45% of a core used.  CPU usage is collected factoring out the idle-loop cost.  The packet rate is a result of the mode we used for vmxnet3 (pure emulation mode running the default number of hypervisor contexts).  I can add this info in the review request.
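
Concretely, the 5/5 patch wraps the cold paths (alloc failure, descriptor
errors, stopped queue) in unlikely() and hoists the ring-id based HEAD/BODY
buffer-type selection out of the descriptor fill loop; condensed from the diff:

	/* selection is per-ring, not per-descriptor, so compute it once */
	val = (ring_id == 0) ? VMXNET3_RXD_BTYPE_HEAD : VMXNET3_RXD_BTYPE_BODY;

	while (vmxnet3_cmd_ring_desc_avail(ring) > 0) {
		mbuf = rte_rxmbuf_alloc(rxq->mp);
		if (unlikely(mbuf == NULL)) {	/* hint: allocation rarely fails */
			rxq->stats.rx_buf_alloc_failure++;
			err = ENOMEM;
			break;
		}
		/* ... fill the descriptor using val ... */
	}

Very roughly, 65% vs 45% of a 2.8GHz core at ~900K PPS works out to about
2000 vs 1400 cycles per packet.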

Yong
________________________________________
From: Thomas Monjalon <thomas.monjalon@6wind.com>
Sent: Monday, October 13, 2014 1:29 PM
To: Yong Wang
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement

Hi,

2014-10-12 23:23, Yong Wang:
> This patch series include various fixes and improvement to the
> vmxnet3 pmd driver.
>
> Yong Wang (5):
>   vmxnet3: Fix VLAN Rx stripping
>   vmxnet3: Add VLAN Tx offload
>   vmxnet3: Fix dev stop/restart bug
>   vmxnet3: Add rx pkt check offloads
>   vmxnet3: Some perf improvement on the rx path

Please, could describe what is the performance gain for these patches?
Benchmark numbers would be appreciated.

Thanks
--
Thomas

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
  2014-10-13 21:00   ` Yong Wang
@ 2014-10-21 22:10     ` Yong Wang
  2014-10-22  7:07       ` Cao, Waterman
  2014-11-05  1:32     ` Cao, Waterman
  1 sibling, 1 reply; 26+ messages in thread
From: Yong Wang @ 2014-10-21 22:10 UTC (permalink / raw)
  To: Patel, Rashmin N, Stephen Hemminger; +Cc: dev

Rashmin/Stephen,

Since you have worked on vmxnet3 pmd drivers, I wonder if you can help review this set of patches.  Any other reviews/test verifications are welcome of course.  We have reviewed/tested all patches internally.

Yong
________________________________________
From: dev <dev-bounces@dpdk.org> on behalf of Yong Wang <yongwang@vmware.com>
Sent: Monday, October 13, 2014 2:00 PM
To: Thomas Monjalon
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement

Only the last one is performance related and it merely tries to give hints to the compiler to hopefully make branch prediction more efficient.  It also moves a constant assignment out of the pkt polling loop.

We did performance evaluation on a Nehalem box with 4cores@2.8GHz x 2 socket:
On the DPDK-side, it's running some l3 forwarding apps in a VM on ESXi with one core assigned for polling.  The client side is pktgen/dpdk, pumping 64B tcp packets at line rate.  Before the patch, we are seeing ~900K PPS with 65% cpu of a core used for DPDK.  After the patch, we are seeing the same pkt rate with only 45% of a core used.  CPU usage is collected factoring our the idle loop cost.  The packet rate is a result of the mode we used for vmxnet3 (pure emulation mode running default number of hypervisor contexts).  I can add these info in the review request.

Yong
________________________________________
From: Thomas Monjalon <thomas.monjalon@6wind.com>
Sent: Monday, October 13, 2014 1:29 PM
To: Yong Wang
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement

Hi,

2014-10-12 23:23, Yong Wang:
> This patch series include various fixes and improvement to the
> vmxnet3 pmd driver.
>
> Yong Wang (5):
>   vmxnet3: Fix VLAN Rx stripping
>   vmxnet3: Add VLAN Tx offload
>   vmxnet3: Fix dev stop/restart bug
>   vmxnet3: Add rx pkt check offloads
>   vmxnet3: Some perf improvement on the rx path

Please, could describe what is the performance gain for these patches?
Benchmark numbers would be appreciated.

Thanks
--
Thomas

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
  2014-10-21 22:10     ` Yong Wang
@ 2014-10-22  7:07       ` Cao, Waterman
  2014-10-28 14:40         ` Thomas Monjalon
  0 siblings, 1 reply; 26+ messages in thread
From: Cao, Waterman @ 2014-10-22  7:07 UTC (permalink / raw)
  To: Yong Wang, Patel, Rashmin N, Stephen Hemminger; +Cc: dev

Hi Yong,

	We verified your patch with VMware ESXi 5.5 and found that the VMware L2fwd and L3fwd commands can't run.
    But when we use the DPDK1.7_rc1 package to validate the VMware regression, it works fine.
1. [Test Environment]:
 - VMware ESXi 5.5;
 - 2 VM
 - FC20 on Host / FC20-64 on VM
 - Crown Pass server (E2680 v2 ivy bridge )
 - Niantic 82599

2. [Test Topology]:
	Create 2 VMs (Fedora 18, 64-bit).
    We pass through one physical port (Niantic 82599) to each VM, and also create one virtual device (vmxnet3) in each VM.
 	To connect the two VMs, we use one vswitch connecting the two vmxnet3 interfaces.
    Then, PF1 and vmxnet3A are in VM1; PF2 and vmxnet3B are in VM2.
	The traffic flow for l2fwd/l3fwd is as below:
	Ixia -> PF1 -> vmxnet3A -> vswitch -> vmxnet3B -> PF2 -> Ixia. (traffic generator)

3. [Test Steps]:

Untar dpdk1.8.rc1, compile and run;

L2fwd:  ./build/l2fwd -c f -n 4 -- -p 0x3
L3fwd:  ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 -config "(0,0,1),(1,0,2)"

4. [Error log]:

---VMware L2fwd:---

EAL:   0000:0b:00.0 not managed by UIO driver, skipping
EAL: PCI device 0000:13:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7f678ae6e000
EAL:   PCI memory mapped at 0x7f678af34000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:1b:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   PCI memory mapped at 0x7f678af33000
EAL:   PCI memory mapped at 0x7f678af32000
EAL:   PCI memory mapped at 0x7f678af30000
Lcore 0: RX port 0
Lcore 1: RX port 1
Initializing port 0... PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f670b0f5580 hw_ring=0x7f6789fe5280 dma_addr=0x373e5280
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f670b0f3480 hw_ring=0x7f671b820080 dma_addr=0x100020080
PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.
done: 
Port 0, MAC address: 90:E2:BA:4A:33:78

Initializing port 1... EAL: Error - exiting with code: 1
  Cause: rte_eth_tx_queue_setup:err=-22, port=1

---VMware L3fwd:---

EAL: TSC frequency is ~2793265 KHz
EAL: Master core 1 is ready (tid=9f49a880)
EAL: Core 2 is ready (tid=1d7f2700)
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   0000:0b:00.0 not managed by UIO driver, skipping
EAL: PCI device 0000:13:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7f079f3e4000
EAL:   PCI memory mapped at 0x7f079f4aa000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:1b:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   PCI memory mapped at 0x7f079f4a9000
EAL:   PCI memory mapped at 0x7f079f4a8000
EAL:   PCI memory mapped at 0x7f079f4a6000
Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=1...  Address:90:E2:BA:4A:33:78, Allocated mbuf pool on socket 0
LPM: Adding route 0x01010100 / 24 (0)
LPM: Adding route 0x02010100 / 24 (1)
LPM: Adding route 0x03010100 / 24 (2)
LPM: Adding route 0x04010100 / 24 (3)
LPM: Adding route 0x05010100 / 24 (4)
LPM: Adding route 0x06010100 / 24 (5)
LPM: Adding route 0x07010100 / 24 (6)
LPM: Adding route 0x08010100 / 24 (7)
txq=0,0,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f071f6f3c80 hw_ring=0x7f079e5e5280 dma_addr=0x373e5280
PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.

Initializing port 1 ... Creating queues: nb_rxq=1 nb_txq=1...  Address:00:0C:29:F0:90:41, txq=1,0,0 EAL: Error - exiting with code: 1
  Cause: rte_eth_tx_queue_setup: err=-22, port=1


Can you help to recheck this patch with the latest DPDK code?

Regards
Waterman 

-----Original Message-----
>From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Yong Wang
>Sent: Wednesday, October 22, 2014 6:10 AM
>To: Patel, Rashmin N; Stephen Hemminger
>Cc: dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Rashmin/Stephen,
>
>Since you have worked on vmxnet3 pmd drivers, I wonder if you can help review this set of patches.  Any other reviews/test verifications are welcome of course.  We have reviewed/tested all patches internally.
>
>Yong
>________________________________________
>From: dev <dev-bounces@dpdk.org> on behalf of Yong Wang <yongwang@vmware.com>
>Sent: Monday, October 13, 2014 2:00 PM
>To: Thomas Monjalon
>Cc: dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Only the last one is performance related and it merely tries to give hints to the compiler to hopefully make branch prediction more efficient.  It also moves a constant assignment out of the pkt polling loop.
>
>We did performance evaluation on a Nehalem box with 4cores@2.8GHz x 2 socket:
>On the DPDK-side, it's running some l3 forwarding apps in a VM on ESXi with one core assigned for polling.  The client side is pktgen/dpdk, pumping 64B tcp packets at line rate.  Before the patch, we are seeing ~900K PPS with 65% cpu of a core used for DPDK.  After the patch, we are seeing the same pkt rate with only 45% of a core used.  CPU usage is collected factoring our the idle loop cost.  The packet rate is a result of the mode we used for vmxnet3 (pure emulation mode running default number of hypervisor contexts).  I can add these info in the review request.
>
>Yong
>________________________________________
>From: Thomas Monjalon <thomas.monjalon@6wind.com>
>Sent: Monday, October 13, 2014 1:29 PM
>To: Yong Wang
>Cc: dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Hi,
>
>2014-10-12 23:23, Yong Wang:
>> This patch series include various fixes and improvement to the
>> vmxnet3 pmd driver.
>>
>> Yong Wang (5):
>>   vmxnet3: Fix VLAN Rx stripping
>>   vmxnet3: Add VLAN Tx offload
>>   vmxnet3: Fix dev stop/restart bug
>>   vmxnet3: Add rx pkt check offloads
>>   vmxnet3: Some perf improvement on the rx path
>
>Please, could describe what is the performance gain for these patches?
>Benchmark numbers would be appreciated.
>
>Thanks
>--
>Thomas
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 1/5] vmxnet3: Fix VLAN Rx stripping
  2014-10-13 18:42     ` Yong Wang
@ 2014-10-22 13:39       ` Stephen Hemminger
  2014-10-28 21:57         ` Yong Wang
  0 siblings, 1 reply; 26+ messages in thread
From: Stephen Hemminger @ 2014-10-22 13:39 UTC (permalink / raw)
  To: Yong Wang; +Cc: dev

On Mon, 13 Oct 2014 18:42:18 +0000
Yong Wang <yongwang@vmware.com> wrote:

> Are you referring to the patch as a whole or your comment is about the reset of vlan_tci on the "else" (no vlan tags stripped) path?  I am not sure I get your comments here.  This patch simply fixes a bug on the rx vlan stripping path (where valid vlan_tci stripped is overwritten unconditionally later on the rx path in the original vmxnet3 pmd driver). All the other pmd drivers are doing the same thing in terms of translating descriptor status to rte_mbuf flags for vlan stripping.

I was thinking that there are many fields in a pktmbuf, and rather than setting them
individually (like tci), the code should call the common rte_pktmbuf_reset() before
setting the fields.  That way, when someone adds a field to the mbuf, they don't have
to go chasing through every driver that does its own initialization.
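
Something along these lines (sketch only, untested; assuming rte_pktmbuf_reset()
keeps clearing next, nb_segs, ol_flags, vlan_tci, data_off, etc.):

	/* reset the whole mbuf once, then set only what the rx path knows */
	rxm = rbi->m;
	rte_pktmbuf_reset(rxm);
	rxm->port = rxq->port_id;
	rxm->pkt_len = (uint16_t)rcd->len;
	rxm->data_len = (uint16_t)rcd->len;

	if (rcd->ts) {
		rxm->ol_flags |= PKT_RX_VLAN_PKT;
		rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd->tci);
	}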

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
  2014-10-22  7:07       ` Cao, Waterman
@ 2014-10-28 14:40         ` Thomas Monjalon
  2014-10-28 19:59           ` Yong Wang
  0 siblings, 1 reply; 26+ messages in thread
From: Thomas Monjalon @ 2014-10-28 14:40 UTC (permalink / raw)
  To: Yong Wang; +Cc: dev

Hi Yong,

Is there any progress with this patchset?

Thanks
-- 
Thomas

2014-10-22 07:07, Cao, Waterman:
> Hi Yong,
> 
> 	We verified your patch with VMWare ESXi 5.5 and found VMware L2fwd and L3fwd cmd can't run.
>     But We use DPDK1.7_rc1 package to validate VMware regression, It works fine.
> .
> 1.[Test Environment]:
>  - VMware ESXi 5.5;
>  - 2 VM
>  - FC20 on Host / FC20-64 on VM
>  - Crown Pass server (E2680 v2 ivy bridge )
>  - Niantic 82599
> 
> 2. [Test Topology]:
> 	Create 2VMs (Fedora 18, 64bit) .
>     We pass through one physical port(Niantic 82599) to each VM, and also create one virtual device: vmxnet3 in each VM. 
>  	To connect with two VMs, we use one vswitch to connect two vmxnet3 interface.
>     Then, PF1 and vmxnet3A are in VM1; PF2 and vmxnet3B are in VM2.
> 	The traffic flow for l2fwd/l3fwd is as below::
> 	Ixia -> PF1 -> vmxnet3A -> vswitch -> vmxnet3B -> PF2 -> Ixia. (traffic generator)
> 
> 3.[ Test Step]:
> 
> tar dpdk1.8.rc1 ,compile and run;
> 
> L2fwd:  ./build/l2fwd -c f -n 4 -- -p 0x3
> L3fwd:  ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 -config "(0,0,1),(1,0,2)"
> 
> 4.[Error log]:
> 
> ---VMware L2fwd:---
> 
> EAL:   0000:0b:00.0 not managed by UIO driver, skipping
> EAL: PCI device 0000:13:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> EAL:   PCI memory mapped at 0x7f678ae6e000
> EAL:   PCI memory mapped at 0x7f678af34000
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> EAL: PCI device 0000:1b:00.0 on NUMA socket -1
> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
> EAL:   PCI memory mapped at 0x7f678af33000
> EAL:   PCI memory mapped at 0x7f678af32000
> EAL:   PCI memory mapped at 0x7f678af30000
> Lcore 0: RX port 0
> Lcore 1: RX port 1
> Initializing port 0... PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f670b0f5580 hw_ring=0x7f6789fe5280 dma_addr=0x373e5280
> PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
> PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f670b0f3480 hw_ring=0x7f671b820080 dma_addr=0x100020080
> PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
> PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.
> done: 
> Port 0, MAC address: 90:E2:BA:4A:33:78
> 
> Initializing port 1... EAL: Error - exiting with code: 1
>   Cause: rte_eth_tx_queue_setup:err=-22, port=1
> 
> ---VMware L3fwd:---
> 
> EAL: TSC frequency is ~2793265 KHz
> EAL: Master core 1 is ready (tid=9f49a880)
> EAL: Core 2 is ready (tid=1d7f2700)
> EAL: PCI device 0000:0b:00.0 on NUMA socket -1
> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
> EAL:   0000:0b:00.0 not managed by UIO driver, skipping
> EAL: PCI device 0000:13:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> EAL:   PCI memory mapped at 0x7f079f3e4000
> EAL:   PCI memory mapped at 0x7f079f4aa000
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> EAL: PCI device 0000:1b:00.0 on NUMA socket -1
> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
> EAL:   PCI memory mapped at 0x7f079f4a9000
> EAL:   PCI memory mapped at 0x7f079f4a8000
> EAL:   PCI memory mapped at 0x7f079f4a6000
> Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=1...  Address:90:E2:BA:4A:33:78, Allocated mbuf pool on socket 0
> LPM: Adding route 0x01010100 / 24 (0)
> LPM: Adding route 0x02010100 / 24 (1)
> LPM: Adding route 0x03010100 / 24 (2)
> LPM: Adding route 0x04010100 / 24 (3)
> LPM: Adding route 0x05010100 / 24 (4)
> LPM: Adding route 0x06010100 / 24 (5)
> LPM: Adding route 0x07010100 / 24 (6)
> LPM: Adding route 0x08010100 / 24 (7)
> txq=0,0,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f071f6f3c80 hw_ring=0x7f079e5e5280 dma_addr=0x373e5280
> PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
> PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.
> 
> Initializing port 1 ... Creating queues: nb_rxq=1 nb_txq=1...  Address:00:0C:29:F0:90:41, txq=1,0,0 EAL: Error - exiting with code: 1
>   Cause: rte_eth_tx_queue_setup: err=-22, port=1
> 
> 
> Can you help to recheck this patch with latest DPDK code?
> 
> Regards
> Waterman 
> 
> -----Original Message-----
> >From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Yong Wang
> >Sent: Wednesday, October 22, 2014 6:10 AM
> >To: Patel, Rashmin N; Stephen Hemminger
> >Cc: dev@dpdk.org
> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
> >
> >Rashmin/Stephen,
> >
> >Since you have worked on vmxnet3 pmd drivers, I wonder if you can help review this set of patches.  Any other reviews/test verifications are welcome of course.  We have reviewed/tested all patches internally.
> >
> >Yong
> >________________________________________
> >From: dev <dev-bounces@dpdk.org> on behalf of Yong Wang <yongwang@vmware.com>
> >Sent: Monday, October 13, 2014 2:00 PM
> >To: Thomas Monjalon
> >Cc: dev@dpdk.org
> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
> >
> >Only the last one is performance related and it merely tries to give hints to the compiler to hopefully make branch prediction more efficient.  It also moves a constant assignment out of the pkt polling loop.
> >
> >We did performance evaluation on a Nehalem box with 4cores@2.8GHz x 2 socket:
> >On the DPDK-side, it's running some l3 forwarding apps in a VM on ESXi with one core assigned for polling.  The client side is pktgen/dpdk, pumping 64B tcp packets at line rate.  Before the patch, we are seeing ~900K PPS with 65% cpu of a core used for DPDK.  After the patch, we are seeing the same pkt rate with only 45% of a core used.  CPU usage is collected factoring our the idle loop cost.  The packet rate is a result of the mode we used for vmxnet3 (pure emulation mode running default number of hypervisor contexts).  I can add these info in the review request.
> >
> >Yong
> >________________________________________
> >From: Thomas Monjalon <thomas.monjalon@6wind.com>
> >Sent: Monday, October 13, 2014 1:29 PM
> >To: Yong Wang
> >Cc: dev@dpdk.org
> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
> >
> >Hi,
> >
> >2014-10-12 23:23, Yong Wang:
> >> This patch series include various fixes and improvement to the
> >> vmxnet3 pmd driver.
> >>
> >> Yong Wang (5):
> >>   vmxnet3: Fix VLAN Rx stripping
> >>   vmxnet3: Add VLAN Tx offload
> >>   vmxnet3: Fix dev stop/restart bug
> >>   vmxnet3: Add rx pkt check offloads
> >>   vmxnet3: Some perf improvement on the rx path
> >
> >Please, could describe what is the performance gain for these patches?
> >Benchmark numbers would be appreciated.
> >
> >Thanks
> >--
> >Thomas

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
  2014-10-28 14:40         ` Thomas Monjalon
@ 2014-10-28 19:59           ` Yong Wang
  2014-10-29  0:33             ` Cao, Waterman
  0 siblings, 1 reply; 26+ messages in thread
From: Yong Wang @ 2014-10-28 19:59 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

Thomas/Waterman,

I couldn't reproduce the reported issue on v1.8.0-rc1, and both l2fwd and l3fwd work fine using the same commands posted.

# dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================
0000:0b:00.0 'VMXNET3 Ethernet Controller' drv=igb_uio unused=
0000:13:00.0 'VMXNET3 Ethernet Controller' drv=igb_uio unused=

Network devices using kernel driver
===================================
0000:02:00.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth2 drv=e1000 unused=igb_uio *Active*

Other network devices
=====================
<none>

#  ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 -config "(0,0,1),(1,0,2)"
...
EAL: TSC frequency is ~2800101 KHz
EAL: Master core 1 is ready (tid=ee3c6840)
EAL: Core 2 is ready (tid=de1ff700)
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL:   probe driver: 8086:100f rte_em_pmd
EAL:   0000:02:00.0 not managed by UIO driver, skipping
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   PCI memory mapped at 0x7f8bee3dd000
EAL:   PCI memory mapped at 0x7f8bee3dc000
EAL:   PCI memory mapped at 0x7f8bee3da000
EAL: PCI device 0000:13:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   PCI memory mapped at 0x7f8bee3d9000
EAL:   PCI memory mapped at 0x7f8bee3d8000
EAL:   PCI memory mapped at 0x7f8bee3d6000
Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=1...  Address:00:0C:29:72:C6:7E, Allocated mbuf pool on socket 0
LPM: Adding route 0x01010100 / 24 (0)
LPM: Adding route 0x02010100 / 24 (1)
LPM: Adding route 0x03010100 / 24 (2)
LPM: Adding route 0x04010100 / 24 (3)
LPM: Adding route 0x05010100 / 24 (4)
LPM: Adding route 0x06010100 / 24 (5)
LPM: Adding route 0x07010100 / 24 (6)
LPM: Adding route 0x08010100 / 24 (7)
txq=0,0,0 
Initializing port 1 ... Creating queues: nb_rxq=1 nb_txq=1...  Address:00:0C:29:72:C6:88, txq=1,0,0 

Initializing rx queues on lcore 1 ... rxq=0,0,0 
Initializing rx queues on lcore 2 ... rxq=1,0,0 
done: Port 0
done: Port 1
L3FWD: entering main loop on lcore 2
L3FWD:  -- lcoreid=2 portid=1 rxqueueid=0
L3FWD: entering main loop on lcore 1
L3FWD:  -- lcoreid=1 portid=0 rxqueueid=0

I don't have the exact setup, but I suspect this is related, as the error looks like a tx queue parameter being used is not supported by the vmxnet3 backend.  The patchset does not touch the txq config path, so it's not clear how it could break rte_eth_tx_queue_setup().  So my questions to Waterman:
(1) Is this a regression on the same branch, i.e. does running the unpatched build work while it fails with the patch applied?
(2) By any chance did you change the following struct in main.c for those sample programs to a different value, in particular txq_flags?

static const struct rte_eth_txconf tx_conf = {
        .tx_thresh = {
                .pthresh = TX_PTHRESH,
                .hthresh = TX_HTHRESH,
                .wthresh = TX_WTHRESH,
        },
        .tx_free_thresh = 0, /* Use PMD default values */
        .tx_rs_thresh = 0, /* Use PMD default values */
        .txq_flags = (ETH_TXQ_FLAGS_NOMULTSEGS |   <== any changes here?
                      ETH_TXQ_FLAGS_NOVLANOFFL |
                      ETH_TXQ_FLAGS_NOXSUMSCTP |
                      ETH_TXQ_FLAGS_NOXSUMUDP |
                      ETH_TXQ_FLAGS_NOXSUMTCP)
};
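
For reference, -22 is -EINVAL.  A tx_queue_setup callback of this era typically
rejects txq_flags combinations it cannot honor with a check of roughly this shape
(hypothetical sketch, not the exact vmxnet3 code):

	if ((tx_conf->txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS) == 0) {
		PMD_INIT_LOG(ERR, "TX multi-segment not supported");
		return -EINVAL;		/* shows up in the app as err=-22 */
	}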

Thanks,
Yong
________________________________________
From: Thomas Monjalon <thomas.monjalon@6wind.com>
Sent: Tuesday, October 28, 2014 7:40 AM
To: Yong Wang
Cc: dev@dpdk.org; Cao, Waterman
Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement

Hi Yong,

Is there any progress with this patchset?

Thanks
--
Thomas

2014-10-22 07:07, Cao, Waterman:
> Hi Yong,
>
>       We verified your patch with VMWare ESXi 5.5 and found VMware L2fwd and L3fwd cmd can't run.
>     But We use DPDK1.7_rc1 package to validate VMware regression, It works fine.
> .
> 1.[Test Environment]:
>  - VMware ESXi 5.5;
>  - 2 VM
>  - FC20 on Host / FC20-64 on VM
>  - Crown Pass server (E2680 v2 ivy bridge )
>  - Niantic 82599
>
> 2. [Test Topology]:
>       Create 2VMs (Fedora 18, 64bit) .
>     We pass through one physical port(Niantic 82599) to each VM, and also create one virtual device: vmxnet3 in each VM.
>       To connect with two VMs, we use one vswitch to connect two vmxnet3 interface.
>     Then, PF1 and vmxnet3A are in VM1; PF2 and vmxnet3B are in VM2.
>       The traffic flow for l2fwd/l3fwd is as below::
>       Ixia -> PF1 -> vmxnet3A -> vswitch -> vmxnet3B -> PF2 -> Ixia. (traffic generator)
>
> 3.[ Test Step]:
>
> tar dpdk1.8.rc1 ,compile and run;
>
> L2fwd:  ./build/l2fwd -c f -n 4 -- -p 0x3
> L3fwd:  ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 -config "(0,0,1),(1,0,2)"
>
> 4.[Error log]:
>
> ---VMware L2fwd:---
>
> EAL:   0000:0b:00.0 not managed by UIO driver, skipping
> EAL: PCI device 0000:13:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> EAL:   PCI memory mapped at 0x7f678ae6e000
> EAL:   PCI memory mapped at 0x7f678af34000
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> EAL: PCI device 0000:1b:00.0 on NUMA socket -1
> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
> EAL:   PCI memory mapped at 0x7f678af33000
> EAL:   PCI memory mapped at 0x7f678af32000
> EAL:   PCI memory mapped at 0x7f678af30000
> Lcore 0: RX port 0
> Lcore 1: RX port 1
> Initializing port 0... PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f670b0f5580 hw_ring=0x7f6789fe5280 dma_addr=0x373e5280
> PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
> PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f670b0f3480 hw_ring=0x7f671b820080 dma_addr=0x100020080
> PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
> PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.
> done:
> Port 0, MAC address: 90:E2:BA:4A:33:78
>
> Initializing port 1... EAL: Error - exiting with code: 1
>   Cause: rte_eth_tx_queue_setup:err=-22, port=1
>
> ---VMware L3fwd:---
>
> EAL: TSC frequency is ~2793265 KHz
> EAL: Master core 1 is ready (tid=9f49a880)
> EAL: Core 2 is ready (tid=1d7f2700)
> EAL: PCI device 0000:0b:00.0 on NUMA socket -1
> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
> EAL:   0000:0b:00.0 not managed by UIO driver, skipping
> EAL: PCI device 0000:13:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> EAL:   PCI memory mapped at 0x7f079f3e4000
> EAL:   PCI memory mapped at 0x7f079f4aa000
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> EAL: PCI device 0000:1b:00.0 on NUMA socket -1
> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
> EAL:   PCI memory mapped at 0x7f079f4a9000
> EAL:   PCI memory mapped at 0x7f079f4a8000
> EAL:   PCI memory mapped at 0x7f079f4a6000
> Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=1...  Address:90:E2:BA:4A:33:78, Allocated mbuf pool on socket 0
> LPM: Adding route 0x01010100 / 24 (0)
> LPM: Adding route 0x02010100 / 24 (1)
> LPM: Adding route 0x03010100 / 24 (2)
> LPM: Adding route 0x04010100 / 24 (3)
> LPM: Adding route 0x05010100 / 24 (4)
> LPM: Adding route 0x06010100 / 24 (5)
> LPM: Adding route 0x07010100 / 24 (6)
> LPM: Adding route 0x08010100 / 24 (7)
> txq=0,0,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f071f6f3c80 hw_ring=0x7f079e5e5280 dma_addr=0x373e5280
> PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
> PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.
>
> Initializing port 1 ... Creating queues: nb_rxq=1 nb_txq=1...  Address:00:0C:29:F0:90:41, txq=1,0,0 EAL: Error - exiting with code: 1
>   Cause: rte_eth_tx_queue_setup: err=-22, port=1
>
>
> Can you help to recheck this patch with latest DPDK code?
>
> Regards
> Waterman
>
> -----Original Message-----
> >From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Yong Wang
> >Sent: Wednesday, October 22, 2014 6:10 AM
> >To: Patel, Rashmin N; Stephen Hemminger
> >Cc: dev@dpdk.org
> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
> >
> >Rashmin/Stephen,
> >
> >Since you have worked on vmxnet3 pmd drivers, I wonder if you can help review this set of patches.  Any other reviews/test verifications are welcome of course.  We have reviewed/tested all patches internally.
> >
> >Yong
> >________________________________________
> >From: dev <dev-bounces@dpdk.org> on behalf of Yong Wang <yongwang@vmware.com>
> >Sent: Monday, October 13, 2014 2:00 PM
> >To: Thomas Monjalon
> >Cc: dev@dpdk.org
> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
> >
> >Only the last one is performance related and it merely tries to give hints to the compiler to hopefully make branch prediction more efficient.  It also moves a constant assignment out of the pkt polling loop.
> >
> >We did performance evaluation on a Nehalem box with 4cores@2.8GHz x 2 socket:
> >On the DPDK-side, it's running some l3 forwarding apps in a VM on ESXi with one core assigned for polling.  The client side is pktgen/dpdk, pumping 64B tcp packets at line rate.  Before the patch, we are seeing ~900K PPS with 65% cpu of a core used for DPDK.  After the patch, we are seeing the same pkt rate with only 45% of a core used.  CPU usage is collected factoring our the idle loop cost.  The packet rate is a result of the mode we used for vmxnet3 (pure emulation mode running default number of hypervisor contexts).  I can add these info in the review request.
> >
> >Yong
> >________________________________________
> >From: Thomas Monjalon <thomas.monjalon@6wind.com>
> >Sent: Monday, October 13, 2014 1:29 PM
> >To: Yong Wang
> >Cc: dev@dpdk.org
> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
> >
> >Hi,
> >
> >2014-10-12 23:23, Yong Wang:
> >> This patch series include various fixes and improvement to the
> >> vmxnet3 pmd driver.
> >>
> >> Yong Wang (5):
> >>   vmxnet3: Fix VLAN Rx stripping
> >>   vmxnet3: Add VLAN Tx offload
> >>   vmxnet3: Fix dev stop/restart bug
> >>   vmxnet3: Add rx pkt check offloads
> >>   vmxnet3: Some perf improvement on the rx path
> >
> >Please, could describe what is the performance gain for these patches?
> >Benchmark numbers would be appreciated.
> >
> >Thanks
> >--
> >Thomas


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 1/5] vmxnet3: Fix VLAN Rx stripping
  2014-10-22 13:39       ` Stephen Hemminger
@ 2014-10-28 21:57         ` Yong Wang
  2014-10-29  9:04           ` Bruce Richardson
  0 siblings, 1 reply; 26+ messages in thread
From: Yong Wang @ 2014-10-28 21:57 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev

On 10/22/14, 6:39 AM, "Stephen Hemminger" <stephen@networkplumber.org>
wrote:


>On Mon, 13 Oct 2014 18:42:18 +0000
>Yong Wang <yongwang@vmware.com> wrote:
>
>> Are you referring to the patch as a whole or your comment is about the
>>reset of vlan_tci on the "else" (no vlan tags stripped) path?  I am not
>>sure I get your comments here.  This patch simply fixes a bug on the rx
>>vlan stripping path (where valid vlan_tci stripped is overwritten
>>unconditionally later on the rx path in the original vmxnet3 pmd
>>driver). All the other pmd drivers are doing the same thing in terms of
>>translating descriptor status to rte_mbuf flags for vlan stripping.
>
>I was thinking that there are many fields in a pktmbuf and rather than
>individually
>setting them (like tci). The code should call the common
>rte_pktmbuf_reset before setting
>the fields.  That way when someone adds a field to mbuf they don't have
>to chasing
>through every driver that does it's own initialization.

Currently rte_pktmbuf_reset() is used in rte_pktmbuf_alloc(), but it looks
like most pmd drivers use rte_rxmbuf_alloc() to replenish rx buffers, which
directly calls __rte_mbuf_raw_alloc() without calling rte_pktmbuf_reset().
How about we change that in a separate patch to all pmd drivers so that we
can keep their behavior consistent?
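
As a rough illustration of the two allocation paths being compared here (a sketch against the mbuf API of this DPDK generation, not a quote of any driver):

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Fully initialized allocation: rte_pktmbuf_alloc() runs rte_pktmbuf_reset()
 * on the mbuf before handing it out. */
static inline struct rte_mbuf *
alloc_with_reset(struct rte_mempool *mp)
{
	return rte_pktmbuf_alloc(mp);
}

/* The per-PMD rx-replenish helper discussed above (the exact name varies per
 * driver): it takes the raw mbuf straight from the pool and skips
 * rte_pktmbuf_reset(), leaving the rx burst routine to fill in the fields it
 * needs from the receive descriptor. */
static inline struct rte_mbuf *
alloc_raw_for_rx(struct rte_mempool *mp)
{
	return __rte_mbuf_raw_alloc(mp);
}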

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
  2014-10-28 19:59           ` Yong Wang
@ 2014-10-29  0:33             ` Cao, Waterman
  0 siblings, 0 replies; 26+ messages in thread
From: Cao, Waterman @ 2014-10-29  0:33 UTC (permalink / raw)
  To: Yong Wang, Thomas Monjalon; +Cc: dev

Hi Yong,

Let us recheck it with your instructions.
We will respond to your questions once we get the results.

Thanks
Waterman 


>-----Original Message-----
>From: Yong Wang [mailto:yongwang@vmware.com] 
>Sent: Wednesday, October 29, 2014 3:59 AM
>To: Thomas Monjalon
>Cc: dev@dpdk.org; Cao, Waterman
>Subject: RE: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Thomas/Waterman,
>
>I couldn't reproduce the reported issue on v1.8.0-rc1 and both l2fwd and l3fwd works fine using the same command posted.
>
># dpdk_nic_bind.py --status
>
>Network devices using DPDK-compatible driver ============================================
>0000:0b:00.0 'VMXNET3 Ethernet Controller' drv=igb_uio unused=
>0000:13:00.0 'VMXNET3 Ethernet Controller' drv=igb_uio unused=
>
>Network devices using kernel driver
>===================================
>0000:02:00.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth2 drv=e1000 unused=igb_uio *Active*
>
>Other network devices
>=====================
><none>
>
>#  ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 -config "(0,0,1),(1,0,2)"
>...
>EAL: TSC frequency is ~2800101 KHz
>EAL: Master core 1 is ready (tid=ee3c6840)
>EAL: Core 2 is ready (tid=de1ff700)
>EAL: PCI device 0000:02:00.0 on NUMA socket -1
>EAL:   probe driver: 8086:100f rte_em_pmd
>EAL:   0000:02:00.0 not managed by UIO driver, skipping
>EAL: PCI device 0000:0b:00.0 on NUMA socket -1
>EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
>EAL:   PCI memory mapped at 0x7f8bee3dd000
>EAL:   PCI memory mapped at 0x7f8bee3dc000
>EAL:   PCI memory mapped at 0x7f8bee3da000
>EAL: PCI device 0000:13:00.0 on NUMA socket -1
>EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
>EAL:   PCI memory mapped at 0x7f8bee3d9000
>EAL:   PCI memory mapped at 0x7f8bee3d8000
>EAL:   PCI memory mapped at 0x7f8bee3d6000
>Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=1...  Address:00:0C:29:72:C6:7E, Allocated mbuf pool on socket 0
>LPM: Adding route 0x01010100 / 24 (0)
>LPM: Adding route 0x02010100 / 24 (1)
>LPM: Adding route 0x03010100 / 24 (2)
>LPM: Adding route 0x04010100 / 24 (3)
>LPM: Adding route 0x05010100 / 24 (4)
>LPM: Adding route 0x06010100 / 24 (5)
>LPM: Adding route 0x07010100 / 24 (6)
>LPM: Adding route 0x08010100 / 24 (7)
>txq=0,0,0
>Initializing port 1 ... Creating queues: nb_rxq=1 nb_txq=1...  Address:00:0C:29:72:C6:88, txq=1,0,0 
>
>Initializing rx queues on lcore 1 ... rxq=0,0,0 Initializing rx queues on lcore 2 ... rxq=1,0,0
>done: Port 0
>done: Port 1
>L3FWD: entering main loop on lcore 2
>L3FWD:  -- lcoreid=2 portid=1 rxqueueid=0
>L3FWD: entering main loop on lcore 1
>L3FWD:  -- lcoreid=1 portid=0 rxqueueid=0
>
>I don't have the exact setup, but I suspect this is related, as the error looks like a tx queue parameter being used that is not supported by the vmxnet3 backend.  The patchset does not touch the txq config path, so it's not clear how it breaks rte_eth_tx_queue_setup().  So my questions to Waterman:
>(1) Is this a regression on the same branch, i.e. the unpatched build works but it fails with the patch applied?
>(2) By any chance did you change the following struct in main.c for those sample programs to a different value, in particular txq_flags?
>
>static const struct rte_eth_txconf tx_conf = {
>        .tx_thresh = {
>                .pthresh = TX_PTHRESH,
>                .hthresh = TX_HTHRESH,
>                .wthresh = TX_WTHRESH,
>        },
>        .tx_free_thresh = 0, /* Use PMD default values */
>        .tx_rs_thresh = 0, /* Use PMD default values */
>        .txq_flags = (ETH_TXQ_FLAGS_NOMULTSEGS |   <== any changes here?
>                      ETH_TXQ_FLAGS_NOVLANOFFL |
>                      ETH_TXQ_FLAGS_NOXSUMSCTP |
>                      ETH_TXQ_FLAGS_NOXSUMUDP |
>                      ETH_TXQ_FLAGS_NOXSUMTCP) };
>
>Thanks,
>Yong
>________________________________________
>From: Thomas Monjalon <thomas.monjalon@6wind.com>
>Sent: Tuesday, October 28, 2014 7:40 AM
>To: Yong Wang
>Cc: dev@dpdk.org; Cao, Waterman
>Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Hi Yong,
>
>Is there any progress with this patchset?
>
>Thanks
>--
>Thomas
>
>2014-10-22 07:07, Cao, Waterman:
>> Hi Yong,
>>
>>       We verified your patch with VMware ESXi 5.5 and found that the VMware L2fwd and L3fwd commands can't run.
>>     But when we use the DPDK 1.7_rc1 package to validate the VMware regression, it works fine.
>> 1.[Test Environment]:
>>  - VMware ESXi 5.5;
>>  - 2 VM
>>  - FC20 on Host / FC20-64 on VM
>>  - Crown Pass server (E2680 v2 ivy bridge )
>>  - Niantic 82599
>>
>> 2. [Test Topology]:
>>       Create 2 VMs (Fedora 18, 64bit).
>>     We pass through one physical port (Niantic 82599) to each VM, and also create one virtual device (vmxnet3) in each VM.
>>       To connect the two VMs, we use one vswitch to connect the two vmxnet3 interfaces.
>>     Then, PF1 and vmxnet3A are in VM1; PF2 and vmxnet3B are in VM2.
>>       The traffic flow for l2fwd/l3fwd is as below:
>>       Ixia -> PF1 -> vmxnet3A -> vswitch -> vmxnet3B -> PF2 -> Ixia. 
>> (traffic generator)
>>
>> 3.[ Test Step]:
>>
>> Untar dpdk1.8.rc1, compile and run;
>>
>> L2fwd:  ./build/l2fwd -c f -n 4 -- -p 0x3
>> L3fwd:  ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 -config "(0,0,1),(1,0,2)"
>>
>> 4.[Error log]:
>>
>> ---VMware L2fwd:---
>>
>> EAL:   0000:0b:00.0 not managed by UIO driver, skipping
>> EAL: PCI device 0000:13:00.0 on NUMA socket -1
>> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
>> EAL:   PCI memory mapped at 0x7f678ae6e000
>> EAL:   PCI memory mapped at 0x7f678af34000
>> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
>> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
>> EAL: PCI device 0000:1b:00.0 on NUMA socket -1
>> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
>> EAL:   PCI memory mapped at 0x7f678af33000
>> EAL:   PCI memory mapped at 0x7f678af32000
>> EAL:   PCI memory mapped at 0x7f678af30000
>> Lcore 0: RX port 0
>> Lcore 1: RX port 1
>> Initializing port 0... PMD: ixgbe_dev_rx_queue_setup(): 
>> sw_ring=0x7f670b0f5580 hw_ring=0x7f6789fe5280 dma_addr=0x373e5280
>> PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
>> PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
>> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f670b0f3480 
>> hw_ring=0x7f671b820080 dma_addr=0x100020080
>> PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
>> PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.
>> done:
>> Port 0, MAC address: 90:E2:BA:4A:33:78
>>
>> Initializing port 1... EAL: Error - exiting with code: 1
>>   Cause: rte_eth_tx_queue_setup:err=-22, port=1
>>
>> ---VMware L3fwd:---
>>
>> EAL: TSC frequency is ~2793265 KHz
>> EAL: Master core 1 is ready (tid=9f49a880)
>> EAL: Core 2 is ready (tid=1d7f2700)
>> EAL: PCI device 0000:0b:00.0 on NUMA socket -1
>> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
>> EAL:   0000:0b:00.0 not managed by UIO driver, skipping
>> EAL: PCI device 0000:13:00.0 on NUMA socket -1
>> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
>> EAL:   PCI memory mapped at 0x7f079f3e4000
>> EAL:   PCI memory mapped at 0x7f079f4aa000
>> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
>> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
>> EAL: PCI device 0000:1b:00.0 on NUMA socket -1
>> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
>> EAL:   PCI memory mapped at 0x7f079f4a9000
>> EAL:   PCI memory mapped at 0x7f079f4a8000
>> EAL:   PCI memory mapped at 0x7f079f4a6000
>> Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=1...  
>> Address:90:E2:BA:4A:33:78, Allocated mbuf pool on socket 0
>> LPM: Adding route 0x01010100 / 24 (0)
>> LPM: Adding route 0x02010100 / 24 (1)
>> LPM: Adding route 0x03010100 / 24 (2)
>> LPM: Adding route 0x04010100 / 24 (3)
>> LPM: Adding route 0x05010100 / 24 (4)
>> LPM: Adding route 0x06010100 / 24 (5)
>> LPM: Adding route 0x07010100 / 24 (6)
>> LPM: Adding route 0x08010100 / 24 (7)
>> txq=0,0,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f071f6f3c80 
>> hw_ring=0x7f079e5e5280 dma_addr=0x373e5280
>> PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
>> PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.
>>
>> Initializing port 1 ... Creating queues: nb_rxq=1 nb_txq=1...  Address:00:0C:29:F0:90:41, txq=1,0,0 EAL: Error - exiting with code: 1
>>   Cause: rte_eth_tx_queue_setup: err=-22, port=1
>>
>>
>> Can you help recheck this patch with the latest DPDK code?
>>
>> Regards
>> Waterman
>>
>> -----Original Message-----
>> >From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Yong Wang
>> >Sent: Wednesday, October 22, 2014 6:10 AM
>> >To: Patel, Rashmin N; Stephen Hemminger
>> >Cc: dev@dpdk.org
>> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>> >
>> >Rashmin/Stephen,
>> >
>> >Since you have worked on vmxnet3 pmd drivers, I wonder if you can help review this set of patches.  Any other reviews/test verifications are welcome of course.  We have reviewed/tested all patches internally.
>> >
>> >Yong
>> >________________________________________
>> >From: dev <dev-bounces@dpdk.org> on behalf of Yong Wang 
>> ><yongwang@vmware.com>
>> >Sent: Monday, October 13, 2014 2:00 PM
>> >To: Thomas Monjalon
>> >Cc: dev@dpdk.org
>> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>> >
>> >Only the last one is performance related and it merely tries to give hints to the compiler to hopefully make branch prediction more efficient.  It also moves a constant assignment out of the pkt polling loop.
>> >
>> >We did performance evaluation on a Nehalem box with 4cores@2.8GHz x 2 socket:
> >On the DPDK-side, it's running some l3 forwarding apps in a VM on ESXi with one core assigned for polling.  The client side is pktgen/dpdk, pumping 64B tcp packets at line rate.  Before the patch, we are seeing ~900K PPS with 65% cpu of a core used for DPDK.  After the patch, we are seeing the same pkt rate with only 45% of a core used.  CPU usage is collected factoring out the idle loop cost.  The packet rate is a result of the mode we used for vmxnet3 (pure emulation mode running default number of hypervisor contexts).  I can add this info in the review request.
>> >
>> >Yong
>> >________________________________________
>> >From: Thomas Monjalon <thomas.monjalon@6wind.com>
>> >Sent: Monday, October 13, 2014 1:29 PM
>> >To: Yong Wang
>> >Cc: dev@dpdk.org
>> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>> >
>> >Hi,
>> >
>> >2014-10-12 23:23, Yong Wang:
>> >> This patch series include various fixes and improvement to the
>> >> vmxnet3 pmd driver.
>> >>
>> >> Yong Wang (5):
>> >>   vmxnet3: Fix VLAN Rx stripping
>> >>   vmxnet3: Add VLAN Tx offload
>> >>   vmxnet3: Fix dev stop/restart bug
>> >>   vmxnet3: Add rx pkt check offloads
>> >>   vmxnet3: Some perf improvement on the rx path
>> >
>> >Please, could describe what is the performance gain for these patches?
>> >Benchmark numbers would be appreciated.
>> >
>> >Thanks
>> >--
>> >Thomas

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 1/5] vmxnet3: Fix VLAN Rx stripping
  2014-10-28 21:57         ` Yong Wang
@ 2014-10-29  9:04           ` Bruce Richardson
  2014-10-29  9:41             ` Thomas Monjalon
  0 siblings, 1 reply; 26+ messages in thread
From: Bruce Richardson @ 2014-10-29  9:04 UTC (permalink / raw)
  To: Yong Wang; +Cc: dev

On Tue, Oct 28, 2014 at 09:57:14PM +0000, Yong Wang wrote:
> On 10/22/14, 6:39 AM, "Stephen Hemminger" <stephen@networkplumber.org>
> wrote:
> 
> 
> >On Mon, 13 Oct 2014 18:42:18 +0000
> >Yong Wang <yongwang@vmware.com> wrote:
> >
> >> Are you referring to the patch as a whole or your comment is about the
> >>reset of vlan_tci on the "else" (no vlan tags stripped) path?  I am not
> >>sure I get your comments here.  This patch simply fixes a bug on the rx
> >>vlan stripping path (where valid vlan_tci stripped is overwritten
> >>unconditionally later on the rx path in the original vmxnet3 pmd
> >>driver). All the other pmd drivers are doing the same thing in terms of
> >>translating descriptor status to rte_mbuf flags for vlan stripping.
> >
> >I was thinking that there are many fields in a pktmbuf and rather than
> >individually
> >setting them (like tci). The code should call the common
> >rte_pktmbuf_reset before setting
> >the fields.  That way when someone adds a field to mbuf they don't have
> >to chasing
> >through every driver that does it's own initialization.
> 
> Currently rte_pktmbuf_reset() is used in rte_pktmbuf_alloc() but looks
> like most pmd drivers use rte_rxmbuf_alloc() to replenish rx buffers,
> which directly calls __rte_mbuf_raw_alloc
> () without calling rte_pktmbuf_reset(). How about we change that in a
> separate patch to all pmd drivers so that we can keep their behavior
> consistent?
> 

We can look to do that, but we need to beware of performance regressions if 
we do so. Certainly the vector implementation of the ixgbe would be severely 
impacted performance-wise if such a change were made. However, code paths 
which are not as highly tuned, or which do not need to be as highly tuned 
could perhaps use the standard function.

The main reason for this regression is that reset will clear all fields of 
the mbuf, which would be wasted cycles for a number of the PMDs as they will 
later set some of the fields based on values in the receive descriptor.  
Basically, on descriptor rearm in a PMD, the only fields that need to be 
reset would be those not set by the copy of data from the descriptor.

/Bruce
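
A minimal sketch of the split Bruce describes, using an illustrative subset of rte_mbuf fields (the exact division differs per driver and per descriptor format):

#include <stdint.h>
#include <rte_mbuf.h>

/* On rearm, reset only what the later descriptor copy will not provide. */
static inline void
rearm_init(struct rte_mbuf *m)
{
	m->next = NULL;
	m->nb_segs = 1;
	m->data_off = RTE_PKTMBUF_HEADROOM;
}

/* On receive completion, these fields come straight from the descriptor;
 * zeroing them earlier (as a full rte_pktmbuf_reset() would) is wasted work. */
static inline void
fill_from_desc(struct rte_mbuf *m, uint16_t len, uint16_t port,
	       uint64_t ol_flags, uint16_t vlan_tci)
{
	m->pkt_len = len;
	m->data_len = len;
	m->port = port;
	m->ol_flags = ol_flags;
	m->vlan_tci = vlan_tci;
}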

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 1/5] vmxnet3: Fix VLAN Rx stripping
  2014-10-29  9:04           ` Bruce Richardson
@ 2014-10-29  9:41             ` Thomas Monjalon
  2014-10-29 17:57               ` Yong Wang
  0 siblings, 1 reply; 26+ messages in thread
From: Thomas Monjalon @ 2014-10-29  9:41 UTC (permalink / raw)
  To: Bruce Richardson, Yong Wang; +Cc: dev

2014-10-29 09:04, Bruce Richardson:
> On Tue, Oct 28, 2014 at 09:57:14PM +0000, Yong Wang wrote:
> > On 10/22/14, 6:39 AM, "Stephen Hemminger" <stephen@networkplumber.org>
> > wrote:
> > 
> > 
> > >On Mon, 13 Oct 2014 18:42:18 +0000
> > >Yong Wang <yongwang@vmware.com> wrote:
> > >
> > >> Are you referring to the patch as a whole or your comment is about the
> > >>reset of vlan_tci on the "else" (no vlan tags stripped) path?  I am not
> > >>sure I get your comments here.  This patch simply fixes a bug on the rx
> > >>vlan stripping path (where valid vlan_tci stripped is overwritten
> > >>unconditionally later on the rx path in the original vmxnet3 pmd
> > >>driver). All the other pmd drivers are doing the same thing in terms of
> > >>translating descriptor status to rte_mbuf flags for vlan stripping.
> > >
> > >I was thinking that there are many fields in a pktmbuf and rather than
> > >individually
> > >setting them (like tci). The code should call the common
> > >rte_pktmbuf_reset before setting
> > >the fields.  That way when someone adds a field to mbuf they don't have
> > >to chasing
> > >through every driver that does it's own initialization.
> > 
> > Currently rte_pktmbuf_reset() is used in rte_pktmbuf_alloc() but looks
> > like most pmd drivers use rte_rxmbuf_alloc() to replenish rx buffers,
> > which directly calls __rte_mbuf_raw_alloc
> > () without calling rte_pktmbuf_reset(). How about we change that in a
> > separate patch to all pmd drivers so that we can keep their behavior
> > consistent?
> > 
> 
> We can look to do that, but we need to beware of performance regressions if 
> we do so. Certainly the vector implementation of the ixgbe would be severely 
> impacted performance-wise if such a change were made. However, code paths 
> which are not as highly tuned, or which do not need to be as highly tuned 
> could perhaps use the standard function.
> 
> The main reason for this regression is that reset will clear all fields of 
> the mbuf, which would be wasted cycles for a number of the PMDs as they will 
> later set some of the fields based on values in the receive descriptor.  
> Basically, on descriptor rearm in a PMD, the only fields that need to be 
> reset would be those not set by the copy of data from the descriptor.

This is typically a trade-off situation.
I think that we should prefer the performance.

-- 
Thomas

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 1/5] vmxnet3: Fix VLAN Rx stripping
  2014-10-29  9:41             ` Thomas Monjalon
@ 2014-10-29 17:57               ` Yong Wang
  2014-10-29 18:51                 ` Thomas Monjalon
  0 siblings, 1 reply; 26+ messages in thread
From: Yong Wang @ 2014-10-29 17:57 UTC (permalink / raw)
  To: Thomas Monjalon, Bruce Richardson; +Cc: dev

Sounds good to me but it does look like the rte_rxmbuf_alloc() could use
some comments to make it explicit that rte_pktmbuf_reset() is avoided by
design for the reasons that Bruce described.  Furthermore,
rte_rxmbuf_alloc() is duplicated in almost all the pmd drivers.  Will it
make sense to promote it to a public API?  Just a thought.

Yong

On 10/29/14, 2:41 AM, "Thomas Monjalon" <thomas.monjalon@6wind.com> wrote:

>2014-10-29 09:04, Bruce Richardson:
>> On Tue, Oct 28, 2014 at 09:57:14PM +0000, Yong Wang wrote:
>> > On 10/22/14, 6:39 AM, "Stephen Hemminger" <stephen@networkplumber.org>
>> > wrote:
>> > 
>> > 
>> > >On Mon, 13 Oct 2014 18:42:18 +0000
>> > >Yong Wang <yongwang@vmware.com> wrote:
>> > >
>> > >> Are you referring to the patch as a whole or your comment is about
>>the
>> > >>reset of vlan_tci on the "else" (no vlan tags stripped) path?  I am
>>not
>> > >>sure I get your comments here.  This patch simply fixes a bug on
>>the rx
>> > >>vlan stripping path (where valid vlan_tci stripped is overwritten
>> > >>unconditionally later on the rx path in the original vmxnet3 pmd
>> > >>driver). All the other pmd drivers are doing the same thing in
>>terms of
>> > >>translating descriptor status to rte_mbuf flags for vlan stripping.
>> > >
>> > >I was thinking that there are many fields in a pktmbuf and rather
>>than
>> > >individually
>> > >setting them (like tci). The code should call the common
>> > >rte_pktmbuf_reset before setting
>> > >the fields.  That way when someone adds a field to mbuf they don't
>>have
>> > >to chasing
>> > >through every driver that does it's own initialization.
>> > 
>> > Currently rte_pktmbuf_reset() is used in rte_pktmbuf_alloc() but looks
>> > like most pmd drivers use rte_rxmbuf_alloc() to replenish rx buffers,
>> > which directly calls __rte_mbuf_raw_alloc
>> > () without calling rte_pktmbuf_reset(). How about we change that in a
>> > separate patch to all pmd drivers so that we can keep their behavior
>> > consistent?
>> > 
>> 
>> We can look to do that, but we need to beware of performance
>>regressions if 
>> we do so. Certainly the vector implementation of the ixgbe would be
>>severely 
>> impacted performance-wise if such a change were made. However, code
>>paths 
>> which are not as highly tuned, or which do not need to be as highly
>>tuned 
>> could perhaps use the standard function.
>> 
>> The main reason for this regression is that reset will clear all fields
>>of 
>> the mbuf, which would be wasted cycles for a number of the PMDs as they
>>will 
>> later set some of the fields based on values in the receive descriptor.
>> 
>> Basically, on descriptor rearm in a PMD, the only fields that need to
>>be 
>> reset would be those not set by the copy of data from the descriptor.
>
>This is typically a trade-off situation.
>I think that we should prefer the performance.
>
>-- 
>Thomas

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 1/5] vmxnet3: Fix VLAN Rx stripping
  2014-10-29 17:57               ` Yong Wang
@ 2014-10-29 18:51                 ` Thomas Monjalon
  0 siblings, 0 replies; 26+ messages in thread
From: Thomas Monjalon @ 2014-10-29 18:51 UTC (permalink / raw)
  To: Yong Wang; +Cc: dev

2014-10-29 17:57, Yong Wang:
> Sounds good to me but it does look like the rte_rxmbuf_alloc() could use
> some comments to make it explicit that rte_pktmbuf_reset() is avoided by
> design for the reasons that Bruce described.  Furthermore,
> rte_rxmbuf_alloc() is duplicated in almost all the pmd drivers.  Will it
> make sense to promote it to a public API?  Just a thought.

Yes, it makes sense.
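
A hypothetical shape for such a shared helper (the name and comment wording here are made up; the point is that the "no reset by design" behavior gets documented once instead of being re-implemented silently in each PMD):

#include <rte_mbuf.h>
#include <rte_mempool.h>

/*
 * Allocate an mbuf for rx-ring replenishment.
 *
 * NOTE: rte_pktmbuf_reset() is intentionally not called on this hot path;
 * the rx burst routine is expected to set the packet fields from the
 * receive descriptor.
 */
static inline struct rte_mbuf *
rte_rxmbuf_raw_alloc(struct rte_mempool *mp)	/* illustrative name only */
{
	return __rte_mbuf_raw_alloc(mp);
}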


> On 10/29/14, 2:41 AM, "Thomas Monjalon" <thomas.monjalon@6wind.com> wrote:
> 
> >2014-10-29 09:04, Bruce Richardson:
> >> On Tue, Oct 28, 2014 at 09:57:14PM +0000, Yong Wang wrote:
> >> > On 10/22/14, 6:39 AM, "Stephen Hemminger" <stephen@networkplumber.org>
> >> > wrote:
> >> > 
> >> > 
> >> > >On Mon, 13 Oct 2014 18:42:18 +0000
> >> > >Yong Wang <yongwang@vmware.com> wrote:
> >> > >
> >> > >> Are you referring to the patch as a whole or your comment is about
> >>the
> >> > >>reset of vlan_tci on the "else" (no vlan tags stripped) path?  I am
> >>not
> >> > >>sure I get your comments here.  This patch simply fixes a bug on
> >>the rx
> >> > >>vlan stripping path (where valid vlan_tci stripped is overwritten
> >> > >>unconditionally later on the rx path in the original vmxnet3 pmd
> >> > >>driver). All the other pmd drivers are doing the same thing in
> >>terms of
> >> > >>translating descriptor status to rte_mbuf flags for vlan stripping.
> >> > >
> >> > >I was thinking that there are many fields in a pktmbuf and rather
> >>than
> >> > >individually
> >> > >setting them (like tci). The code should call the common
> >> > >rte_pktmbuf_reset before setting
> >> > >the fields.  That way when someone adds a field to mbuf they don't
> >>have
> >> > >to chasing
> >> > >through every driver that does it's own initialization.
> >> > 
> >> > Currently rte_pktmbuf_reset() is used in rte_pktmbuf_alloc() but looks
> >> > like most pmd drivers use rte_rxmbuf_alloc() to replenish rx buffers,
> >> > which directly calls __rte_mbuf_raw_alloc
> >> > () without calling rte_pktmbuf_reset(). How about we change that in a
> >> > separate patch to all pmd drivers so that we can keep their behavior
> >> > consistent?
> >> > 
> >> 
> >> We can look to do that, but we need to beware of performance
> >>regressions if 
> >> we do so. Certainly the vector implementation of the ixgbe would be
> >>severely 
> >> impacted performance-wise if such a change were made. However, code
> >>paths 
> >> which are not as highly tuned, or which do not need to be as highly
> >>tuned 
> >> could perhaps use the standard function.
> >> 
> >> The main reason for this regression is that reset will clear all fields
> >>of 
> >> the mbuf, which would be wasted cycles for a number of the PMDs as they
> >>will 
> >> later set some of the fields based on values in the receive descriptor.
> >> 
> >> Basically, on descriptor rearm in a PMD, the only fields that need to
> >>be 
> >> reset would be those not set by the copy of data from the descriptor.
> >
> >This is typically a trade-off situation.
> >I think that we should prefer the performance.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
  2014-10-13  6:23 [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement Yong Wang
                   ` (5 preceding siblings ...)
  2014-10-13 20:29 ` [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement Thomas Monjalon
@ 2014-11-04  5:57 ` Zhang, XiaonanX
  2014-11-04 22:50   ` Thomas Monjalon
  6 siblings, 1 reply; 26+ messages in thread
From: Zhang, XiaonanX @ 2014-11-04  5:57 UTC (permalink / raw)
  To: Yong Wang, dev

Tested-by: Xiaonan Zhang <xiaonanx.zhang@intel.com>

- Tested Commit: Yong Wang
- OS: Fedora20 3.15.8-200.fc20.x86_64
- GCC: gcc version 4.8.3 20140624
- CPU: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
- NIC: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb]
- Default x86_64-native-linuxapp-gcc configuration
- Total 6 cases, 6 passed, 0 failed
- Test Environment setup

- Topology #1: Create 2 VMs (Fedora 20, 64bit); for each VM, pass through one physical port (Niantic 82599) to the VM, and also create one virtual device (vmxnet3) in the VM. Between the two VMs, use one vswitch to connect the 2 vmxnet3 devices. In summary, PF1
               and vmxnet3A are in VM1; PF2 and vmxnet3B are in VM2. The traffic flow for l2fwd/l3fwd is as below:
               Ixia -> PF1 -> vmxnet3A -> vswitch -> vmxnet3B -> PF2 -> Ixia.
- Topology #2: Create 1 VM (Fedora 20, 64bit); on this VM, create 2 vmxnet3 devices, called vmxnet3A and vmxnet3B; create 2 vswitches, vswitchA connecting PF1 and vmxnet3A, and vswitchB connecting PF2 and vmxnet3B. The traffic flow is as below:
               Ixia -> PF1 -> vswitchA -> vmxnet3A -> vmxnet3B -> vswitchB -> PF2 -> Ixia.

- Test Case1: L2fwd with Topology#1 
  Description: Set up topology#1(in prerequisite session), and bind PF1, PF2, Vmxnet3A, vmxnet3B to DPDK poll-mode driver (igb_uio).
               Increase the flow at line rate (uni-directional traffic), send the flow at different packet size (64bytes, 128bytes, 256bytes, 512bytes, 1024bytes, 1280bytes and 1518bytes) and check the received packets/rate to see  
               if any unexpected behavior, such as no receives after N packets. 
  Command / instruction:
                To run the l2fwd example in 2VMs:
                                ./build/l2fwd -c f -n 4 -- -p 0x3
- Test IXIA Flow prerequisite: Ixia port1 sends 5 packets to PF1, and the flow should have PF1's MAC as destination MAC. Check if ixia port2 have received the 5 packets.
  Expected test result:
                Passed

- Test Case2: L3fwd-VF with Topology#1
  Description: Set up topology#1(in prerequisite session), and bind PF1, PF2, Vmxnet3A, vmxnet3B to DPDK poll-mode driver (igb_uio)
               Increase the flow at line rate (uni-directional traffic), send the flow at different packet size (64bytes, 128bytes, 256bytes, 512bytes, 1024bytes, 1280bytes and 1518bytes) and check the received packets/rate to see  
               if any unexpected behavior, such as no receives after N packets.
  Command / instruction:
                To run the l3fwd-vf example in 2VMs:
                                ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 --config "(0,0,1),(1,0,2)"
- Test IXIA Flow prerequisite: Ixia port1 sends 5 packets to PF1, and the flow should have PF1's MAC as destination MAC and have 2.1.1.x as destination IP. Check if ixia port2 have received the 5 packets.
  Expected test result:
                Passed

- Test Case3: L2fwd with Topology#2
  Description: Set up topology#2(in prerequisite session), and bind vmxnet3A and vmxnet3B to DPDK poll-mode driver (igb_uio).
               Increase the flow at line rate (uni-directional traffic), send the flow at different packet size (64bytes, 128bytes, 256bytes, 512bytes, 1024bytes, 1280bytes and 1518bytes) and check the received packets/rate to see  
               if any unexpected behavior, such as no receives after N packets.
  Command / instruction:
                To run the l2fwd example in VM1:
                                ./build/l2fwd -c f -n 4 -- -p 0x3
- Test IXIA Flow prerequisite: Ixia port1 sends 5 packets to port0 (vmxnet3A), and the flow should have port0's MAC as destination MAC. Check if ixia port2 have received the 5 packets. Similar things need to be done at ixia port2.
  Expected test result:
                Passed

- Test Case4: L3fwd-VF with Topology#2
  Description: Set up topology#2(in prerequisite session), and bind vmxnet3A and vmxnet3B to DPDK poll-mode driver (igb_uio).  
               Increase the flow at line rate (uni-directional traffic), send the flow at different packet size (64bytes, 128bytes, 256bytes, 512bytes, 1024bytes, 1280bytes and 1518bytes) and check the received packets/rate to see  
               if any unexpected behavior, such as no receives after N packets.
  Command / instruction:
                To run the l3fwd-vf example in VM1:
                                ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 --config "(0,0,1),(1,0,2)"
- Test IXIA Flow prerequisite: Ixia port1 sends 5 packets to port0(vmxnet3A), and the flow should have port0's MAC as destination MAC and have 2.1.1.x as destination IP. Check if ixia port2 have received the 5 packets.

  Expected test result:
                Passed

- Test Case5: Timer test with Topology#2
  Description: Set up topology#2(in prerequisite session), and bind vmxnet3A and vmxnet3B to DPDK poll-mode driver (igb_uio).
  Command / instruction:
                Build timer sample and run the sample:
                                ./build/timer -c f -n 4
- Test IXIA Flow prerequisite: N.A.
		
  Expected test result:
                Passed

- Test Case6: Testpmd basic with Topology#2
  Description: Set up topology#2(in prerequisite session), and bind vmxnet3A and vmxnet3B to DPDK poll-mode driver (igb_uio).
               Increase the flow at line rate (uni-directional traffic), send the flow at different packet size (64bytes, 128bytes, 256bytes, 512bytes, 1024bytes, 1280bytes and 1518bytes) and check the received packets/rate to see  
               if any unexpected behavior, such as no receives after N packets.
  Command / instruction:
                Run testpmd (e.g. /x86_64-native-linuxapp-gcc/app/testpmd) with the below command line:
                                ./testpmd -c f -n 4 -- --txqflags=0xf01 -i
		    Clean the environment and start the forwarding. Check the port information and clear the port statistics by using the below commands:
                Testpmd>show port info all
                Testpmd>clear port stats all
                Testpmd>show port stats all
                Testpmd>set fwd mac
                Testpmd>start
- Test IXIA Flow prerequisite: N.A.
  Expected test result:
                Passed
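
A side note on the --txqflags=0xf01 value in Test Case6: assuming the ETH_TXQ_FLAGS_* values in this generation's rte_ethdev.h, 0xf01 appears to be the same combination Yong quotes earlier in the thread (no multi-segment packets plus the four "no offload" flags). A quick hypothetical check:

#include <assert.h>
#include <stdint.h>
#include <rte_ethdev.h>

int
main(void)
{
	uint32_t txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
			     ETH_TXQ_FLAGS_NOVLANOFFL |
			     ETH_TXQ_FLAGS_NOXSUMSCTP |
			     ETH_TXQ_FLAGS_NOXSUMUDP |
			     ETH_TXQ_FLAGS_NOXSUMTCP;

	assert(txq_flags == 0xf01);
	return 0;
}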

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Yong Wang
Sent: Monday, October 13, 2014 2:23 PM
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement

This patch series include various fixes and improvement to the
vmxnet3 pmd driver.

Yong Wang (5):
  vmxnet3: Fix VLAN Rx stripping
  vmxnet3: Add VLAN Tx offload
  vmxnet3: Fix dev stop/restart bug
  vmxnet3: Add rx pkt check offloads
  vmxnet3: Some perf improvement on the rx path

 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 310 +++++++++++++++++++++-------------
 1 file changed, 195 insertions(+), 115 deletions(-)

-- 
1.9.1

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
  2014-11-04  5:57 ` Zhang, XiaonanX
@ 2014-11-04 22:50   ` Thomas Monjalon
  2014-11-05  5:26     ` Cao, Waterman
  0 siblings, 1 reply; 26+ messages in thread
From: Thomas Monjalon @ 2014-11-04 22:50 UTC (permalink / raw)
  To: Zhang, XiaonanX; +Cc: dev

Hi,

These tests don't seem related to the patchset.
It would be more interesting to test vlan, stop/restart, Rx checks
and Rx performance improvement.

-- 
Thomas


2014-11-04 05:57, Zhang, XiaonanX:
> Tested-by: Xiaonan Zhang <xiaonanx.zhang@intel.com>
> 
> - Tested Commit: Yong Wang
> - OS: Fedora20 3.15.8-200.fc20.x86_64
> - GCC: gcc version 4.8.3 20140624
> - CPU: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
> - NIC: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb]
> - Default x86_64-native-linuxapp-gcc configuration
> - Total 6 cases, 6 passed, 0 failed
> - Test Environment setup
> 
> - Topology #1: Create 2VMs (Fedora 20, 64bit);for each VM, pass through one physical port(Niantic 82599) to VM, and also create one virtual device: vmxnet3 in VM. Between two VMs, use one vswitch to connect 2 vmxnet3. In summary, PF1 
>                and vmxnet3A are in VM1; PF2 and vmxnet3B are in VM2.The traffic flow for l2fwd/l3fwd is as below:                            
>                Ixia -> PF1 -> vmxnet3A -> vswitch -> vmxnet3B -> PF2 -> Ixia. 
> - Topology #2: Create 1VM (Fedora 20, 64bit), on this VM, created 2 vmxnet3, called vmxnet3A, vmxnet3B; create 2 vswitch, vswitchA connecting PF1 and vmxnet3A, while vswitchB connecting PF2 and vmxnet3B. The traffic flow is as below:
>                Ixia -> PF1 -> vswitchA -> vmxnet3A -> vmxnet3B -> vswitchB -> PF2 -> Ixia.
> 
> - Test Case1: L2fwd with Topology#1 
>   Description: Set up topology#1(in prerequisite session), and bind PF1, PF2, Vmxnet3A, vmxnet3B to DPDK poll-mode driver (igb_uio).
>                Increase the flow at line rate (uni-directional traffic), send the flow at different packet size (64bytes, 128bytes, 256bytes, 512bytes, 1024bytes, 1280bytes and 1518bytes) and check the received packets/rate to see  
>                if any unexpected behavior, such as no receives after N packets. 
>   Command / instruction:
>                 To run the l2fwd example in 2VMs:
>                                 ./build/l2fwd -c f -n 4 -- -p 0x3
> - Test IXIA Flow prerequisite: Ixia port1 sends 5 packets to PF1, and the flow should have PF1's MAC as destination MAC. Check if ixia port2 have received the 5 packets.
>   Expected test result:
>                 Passed
> 
> - Test Case2: L3fwd-VF with Topology#1
>   Description: Set up topology#1(in prerequisite session), and bind PF1, PF2, Vmxnet3A, vmxnet3B to DPDK poll-mode driver (igb_uio)
>                Increase the flow at line rate (uni-directional traffic), send the flow at different packet size (64bytes, 128bytes, 256bytes, 512bytes, 1024bytes, 1280bytes and 1518bytes) and check the received packets/rate to see  
>                if any unexpected behavior, such as no receives after N packets.
>   Command / instruction:
>                 To run the l3fwd-vf example in 2VMs:
>                                 ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 --config "(0,0,1),(1,0,2)"
> - Test IXIA Flow prerequisite: Ixia port1 sends 5 packets to PF1, and the flow should have PF1's MAC as destination MAC and have 2.1.1.x as destination IP. Check if ixia port2 have received the 5 packets.
>   Expected test result:
>                 Passed
> 
> - Test Case3: L2fwd with Topology#2
>   Description: Set up topology#2(in prerequisite session), and bind vmxnet3A and vmxnet3B to DPDK poll-mode driver (igb_uio).
>                Increase the flow at line rate (uni-directional traffic), send the flow at different packet size (64bytes, 128bytes, 256bytes, 512bytes, 1024bytes, 1280bytes and 1518bytes) and check the received packets/rate to see  
>                if any unexpected behavior, such as no receives after N packets.
>   Command / instruction:
>                 To run the l2fwd example in VM1:
>                                 ./build/l2fwd -c f -n 4 -- -p 0x3
> - Test IXIA Flow prerequisite: Ixia port1 sends 5 packets to port0 (vmxnet3A), and the flow should have port0's MAC as destination MAC. Check if ixia port2 have received the 5 packets. Similar things need to be done at ixia port2.
>   Expected test result:
>                 Passed
> 
> - Test Case4: L3fwd-VF with Topology#2
>   Description: Set up topology#2(in prerequisite session), and bind vmxnet3A and vmxnet3B to DPDK poll-mode driver (igb_uio).  
>                Increase the flow at line rate (uni-directional traffic), send the flow at different packet size (64bytes, 128bytes, 256bytes, 512bytes, 1024bytes, 1280bytes and 1518bytes) and check the received packets/rate to see  
>                if any unexpected behavior, such as no receives after N packets.
>   Command / instruction:
>                 To run the l3fwd-vf example in VM1:
>                                 ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 --config "(0,0,1),(1,0,2)"
> - Test IXIA Flow prerequisite: Ixia port1 sends 5 packets to port0(vmxnet3A), and the flow should have port0's MAC as destination MAC and have 2.1.1.x as destination IP. Check if ixia port2 have received the 5 packets.
> 
>   Expected test result:
>                 Passed
> 
> - Test Case5: Timer test with Topology#2
>   Description: Set up topology#2(in prerequisite session), and bind vmxnet3A and vmxnet3B to DPDK poll-mode driver (igb_uio).
>   Command / instruction:
>                 Build timer sample and run the sample:
>                                 ./build/timer -c f -n 4
> - Test IXIA Flow prerequisite: N.A.
> 		
>   Expected test result:
>                 Passed
> 
> - Test Case6: Testpmd basic with Topology#2
>   Description: Set up topology#2(in prerequisite session), and bind vmxnet3A and vmxnet3B to DPDK poll-mode driver (igbuio).
>                Increase the flow at line rate (uni-directional traffic), send the flow at different packet size (64bytes, 128bytes, 256bytes, 512bytes, 1024bytes, 1280bytes and 1518bytes) and check the received packets/rate to see  
>                if any unexpected behavior, such as no receives after N packets.
>   Command / instruction:
>                 Run testpmd(e.g:/x86_64-native-linuxapp-gcc/app/testpmd) with below command lines:
>                                 ./testpmd -c f -n 4 -- --txqflags=0xf01 -i
> 		    Clean environment and start the forwarding. Need check the port information and clear port statics by using below commands:
>                 Testpmd>show port info all
>                 Testpmd>clear port stats all
>                 Testpmd>show port stats all
>                 Testpmd>set fwd mac
>                 Testpmd>start
> - Test IXIA Flow prerequisite: N.A.
>   Expected test result:
>                 Passed
> 
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Yong Wang
> Sent: Monday, October 13, 2014 2:23 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
> 
> This patch series include various fixes and improvement to the
> vmxnet3 pmd driver.
> 
> Yong Wang (5):
>   vmxnet3: Fix VLAN Rx stripping
>   vmxnet3: Add VLAN Tx offload
>   vmxnet3: Fix dev stop/restart bug
>   vmxnet3: Add rx pkt check offloads
>   vmxnet3: Some perf improvement on the rx path
> 
>  lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 310 +++++++++++++++++++++-------------
>  1 file changed, 195 insertions(+), 115 deletions(-)

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 5/5] vmxnet3: Some perf improvement on the rx path
  2014-10-13  6:23 ` [dpdk-dev] [PATCH 5/5] vmxnet3: Some perf improvement on the rx path Yong Wang
@ 2014-11-05  0:13   ` Thomas Monjalon
  0 siblings, 0 replies; 26+ messages in thread
From: Thomas Monjalon @ 2014-11-05  0:13 UTC (permalink / raw)
  To: Yong Wang; +Cc: dev

2014-10-12 23:23, Yong Wang:
> Signed-off-by: Yong Wang <yongwang@vmware.com>

Please, could you give some explanations to put in the commit log?

Thanks
-- 
Thomas

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
  2014-10-13 21:00   ` Yong Wang
  2014-10-21 22:10     ` Yong Wang
@ 2014-11-05  1:32     ` Cao, Waterman
  1 sibling, 0 replies; 26+ messages in thread
From: Cao, Waterman @ 2014-11-05  1:32 UTC (permalink / raw)
  To: 'Yong Wang', Thomas Monjalon; +Cc: dev

Hi Yong,

	We tested your patch with VMware ESX 5.5.
	It works fine with the 1.8 RC1 release.
	You can find more details in Xiaonan's report.

Regards

Waterman 
>-----Original Message-----
>From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Yong Wang
>Sent: Tuesday, October 14, 2014 5:00 AM
>To: Thomas Monjalon
>Cc: dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Only the last one is performance related and it merely tries to give hints to the compiler to hopefully make branch prediction more efficient.  It also moves a constant assignment out of the pkt polling loop.
>
>We did performance evaluation on a Nehalem box with 4cores@2.8GHz x 2 socket:
>On the DPDK-side, it's running some l3 forwarding apps in a VM on ESXi with one core assigned for polling.  The client side is pktgen/dpdk, pumping 64B tcp packets at line rate.  Before the patch, we are seeing ~900K PPS with 65% cpu of a core used for DPDK.  After the patch, we are seeing the same pkt rate with only 45% of a core used.  CPU usage is collected factoring out the idle loop cost.  The packet rate is a result of the mode we used for vmxnet3 (pure emulation mode running default number of hypervisor contexts).  I can add this info in the review request.
>
>Yong
>________________________________________
>From: Thomas Monjalon <thomas.monjalon@6wind.com>
>Sent: Monday, October 13, 2014 1:29 PM
>To: Yong Wang
>Cc: dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Hi,
>
>2014-10-12 23:23, Yong Wang:
>> This patch series include various fixes and improvement to the
>> vmxnet3 pmd driver.
>>
>> Yong Wang (5):
>>   vmxnet3: Fix VLAN Rx stripping
>>   vmxnet3: Add VLAN Tx offload
>>   vmxnet3: Fix dev stop/restart bug
>>   vmxnet3: Add rx pkt check offloads
>>   vmxnet3: Some perf improvement on the rx path
>
>Please, could describe what is the performance gain for these patches?
>Benchmark numbers would be appreciated.
>
>Thanks
>--
>Thomas


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
  2014-11-04 22:50   ` Thomas Monjalon
@ 2014-11-05  5:26     ` Cao, Waterman
  0 siblings, 0 replies; 26+ messages in thread
From: Cao, Waterman @ 2014-11-05  5:26 UTC (permalink / raw)
  To: Thomas Monjalon, Zhang, XiaonanX; +Cc: dev

Hi Thomas,

	Yes. Xiaonan just wanted to confirm that Yong's patch doesn't impact the original functionality and regression test cases under VMware.
	Xiaonan will check with Yong and see if we can add some tests for the new changes to the regression suite.

	Waterman 

-----Original Message-----
>From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas Monjalon
>Sent: Wednesday, November 5, 2014 6:50 AM
>To: Zhang, XiaonanX
>Cc: dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Hi,
>
>These tests don't seem related to the patchset.
>It would be more interesting to test vlan, stop/restart, Rx checks and Rx performance improvement.
>
>--
>Thomas
>
>
>2014-11-04 05:57, Zhang, XiaonanX:
>> Tested-by: Xiaonan Zhang <xiaonanx.zhang@intel.com>
>> 
>> - Tested Commit: Yong Wang
>> - OS: Fedora20 3.15.8-200.fc20.x86_64
>> - GCC: gcc version 4.8.3 20140624
>> - CPU: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
>> - NIC: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb]
>> - Default x86_64-native-linuxapp-gcc configuration
>> - Total 6 cases, 6 passed, 0 failed
>> - Test Environment setup
>> 
>> - Topology #1: Create 2VMs (Fedora 20, 64bit);for each VM, pass through one physical port(Niantic 82599) to VM, and also create one virtual device: vmxnet3 in VM. Between two VMs, use one vswitch to connect 2 vmxnet3. In summary, PF1 
>>                and vmxnet3A are in VM1; PF2 and vmxnet3B are in VM2.The traffic flow for l2fwd/l3fwd is as below:                            
>>                Ixia -> PF1 -> vmxnet3A -> vswitch -> vmxnet3B -> PF2 -> Ixia. 
>> - Topology #2: Create 1VM (Fedora 20, 64bit), on this VM, created 2 vmxnet3, called vmxnet3A, vmxnet3B; create 2 vswitch, vswitchA connecting PF1 and vmxnet3A, while vswitchB connecting PF2 and vmxnet3B. The traffic flow is as below:
>>                Ixia -> PF1 -> vswitchA -> vmxnet3A -> vmxnet3B -> vswitchB -> PF2 -> Ixia.
>> 
>> - Test Case1: L2fwd with Topology#1 
>>   Description: Set up topology#1(in prerequisite session), and bind PF1, PF2, Vmxnet3A, vmxnet3B to DPDK poll-mode driver (igb_uio).
>>                Increase the flow at line rate (uni-directional traffic), send the flow at different packet size (64bytes, 128bytes, 256bytes, 512bytes, 1024bytes, 1280bytes and 1518bytes) and check the received packets/rate to see  
>>                if any unexpected behavior, such as no receives after N packets. 
>>   Command / instruction:
>>                 To run the l2fwd example in 2VMs:
>>                                 ./build/l2fwd -c f -n 4 -- -p 0x3
>> - Test IXIA Flow prerequisite: Ixia port1 sends 5 packets to PF1, and the flow should have PF1's MAC as destination MAC. Check if ixia port2 have received the 5 packets.
>>   Expected test result:
>>                 Passed
>> 
>> - Test Case2: L3fwd-VF with Topology#1
>>   Description: Set up topology#1(in prerequisite session), and bind PF1, PF2, Vmxnet3A, vmxnet3B to DPDK poll-mode driver (igb_uio)
>>                Increase the flow at line rate (uni-directional traffic), send the flow at different packet size (64bytes, 128bytes, 256bytes, 512bytes, 1024bytes, 1280bytes and 1518bytes) and check the received packets/rate to see  
>>                if any unexpected behavior, such as no receives after N packets.
>>   Command / instruction:
>>                 To run the l3fwd-vf example in 2VMs:
>>                                 ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 --config "(0,0,1),(1,0,2)"
>> - Test IXIA Flow prerequisite: Ixia port1 sends 5 packets to PF1, and the flow should have PF1's MAC as destination MAC and have 2.1.1.x as destination IP. Check if ixia port2 have received the 5 packets.
>>   Expected test result:
>>                 Passed
>> 
>> - Test Case3: L2fwd with Topology#2
>>   Description: Set up topology#2(in prerequisite session), and bind vmxnet3A and vmxnet3B to DPDK poll-mode driver (igb_uio).
>>                Increase the flow at line rate (uni-directional traffic), send the flow at different packet size (64bytes, 128bytes, 256bytes, 512bytes, 1024bytes, 1280bytes and 1518bytes) and check the received packets/rate to see  
>>                if any unexpected behavior, such as no receives after N packets.
>>   Command / instruction:
>>                 To run the l2fwd example in VM1:
>>                                 ./build/l2fwd -c f -n 4 -- -p 0x3
>> - Test IXIA Flow prerequisite: Ixia port1 sends 5 packets to port0 (vmxnet3A), and the flow should have port0's MAC as destination MAC. Check if ixia port2 have received the 5 packets. Similar things need to be done at ixia port2.
>>   Expected test result:
>>                 Passed
>> 
>> - Test Case4: L3fwd-VF with Topology#2
>>   Description: Set up topology#2(in prerequisite session), and bind vmxnet3A and vmxnet3B to DPDK poll-mode driver (igb_uio).  
>>                Increase the flow at line rate (uni-directional traffic), send the flow at different packet size (64bytes, 128bytes, 256bytes, 512bytes, 1024bytes, 1280bytes and 1518bytes) and check the received packets/rate to see  
>>                if any unexpected behavior, such as no receives after N packets.
>>   Command / instruction:
>>                 To run the l3fwd-vf example in VM1:
>>                                 ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 --config "(0,0,1),(1,0,2)"
>> - Test IXIA Flow prerequisite: Ixia port1 sends 5 packets to port0(vmxnet3A), and the flow should have port0's MAC as destination MAC and have 2.1.1.x as destination IP. Check if ixia port2 have received the 5 packets.
>> 
>>   Expected test result:
>>                 Passed
>> 
>> - Test Case5: Timer test with Topology#2
>>   Description: Set up topology#2(in prerequisite session), and bind vmxnet3A and vmxnet3B to DPDK poll-mode driver (igb_uio).
>>   Command / instruction:
>>                 Build timer sample and run the sample:
>>                                 ./build/timer -c f -n 4
>> - Test IXIA Flow prerequisite: N.A.
>> 		
>>   Expected test result:
>>                 Passed
>> 
>> - Test Case6: Testpmd basic with Topology#2
>>   Description: Set up topology#2(in prerequisite session), and bind vmxnet3A and vmxnet3B to DPDK poll-mode driver (igbuio).
>>                Increase the flow at line rate (uni-directional traffic), send the flow at different packet size (64bytes, 128bytes, 256bytes, 512bytes, 1024bytes, 1280bytes and 1518bytes) and check the received packets/rate to see  
>>                if any unexpected behavior, such as no receives after N packets.
>>   Command / instruction:
>>                 Run testpmd(e.g:/x86_64-native-linuxapp-gcc/app/testpmd) with below command lines:
>>                                 ./testpmd -c f -n 4 -- --txqflags=0xf01 -i
>> 		    Clean environment and start the forwarding. Need check the port information and clear port statics by using below commands:
>>                 Testpmd>show port info all
>>                 Testpmd>clear port stats all
>>                 Testpmd>show port stats all
>>                 Testpmd>set fwd mac
>>                 Testpmd>start
>> - Test IXIA Flow prerequisite: N.A.
>>   Expected test result:
>>                 Passed
>> 
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Yong Wang
>> Sent: Monday, October 13, 2014 2:23 PM
>> To: dev@dpdk.org
>> Subject: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>> 
>> This patch series include various fixes and improvement to the
>> vmxnet3 pmd driver.
>> 
>> Yong Wang (5):
>>   vmxnet3: Fix VLAN Rx stripping
>>   vmxnet3: Add VLAN Tx offload
>>   vmxnet3: Fix dev stop/restart bug
>>   vmxnet3: Add rx pkt check offloads
>>   vmxnet3: Some perf improvement on the rx path
>> 
>>  lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 310 +++++++++++++++++++++-------------
>>  1 file changed, 195 insertions(+), 115 deletions(-)

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2014-11-05  5:19 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-10-13  6:23 [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement Yong Wang
2014-10-13  6:23 ` [dpdk-dev] [PATCH 1/5] vmxnet3: Fix VLAN Rx stripping Yong Wang
2014-10-13  9:31   ` Stephen Hemminger
2014-10-13 18:42     ` Yong Wang
2014-10-22 13:39       ` Stephen Hemminger
2014-10-28 21:57         ` Yong Wang
2014-10-29  9:04           ` Bruce Richardson
2014-10-29  9:41             ` Thomas Monjalon
2014-10-29 17:57               ` Yong Wang
2014-10-29 18:51                 ` Thomas Monjalon
2014-10-13  6:23 ` [dpdk-dev] [PATCH 2/5] vmxnet3: Add VLAN Tx offload Yong Wang
2014-10-13  6:23 ` [dpdk-dev] [PATCH 3/5] vmxnet3: Fix dev stop/restart bug Yong Wang
2014-10-13  6:23 ` [dpdk-dev] [PATCH 4/5] vmxnet3: Add rx pkt check offloads Yong Wang
2014-10-13  6:23 ` [dpdk-dev] [PATCH 5/5] vmxnet3: Some perf improvement on the rx path Yong Wang
2014-11-05  0:13   ` Thomas Monjalon
2014-10-13 20:29 ` [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement Thomas Monjalon
2014-10-13 21:00   ` Yong Wang
2014-10-21 22:10     ` Yong Wang
2014-10-22  7:07       ` Cao, Waterman
2014-10-28 14:40         ` Thomas Monjalon
2014-10-28 19:59           ` Yong Wang
2014-10-29  0:33             ` Cao, Waterman
2014-11-05  1:32     ` Cao, Waterman
2014-11-04  5:57 ` Zhang, XiaonanX
2014-11-04 22:50   ` Thomas Monjalon
2014-11-05  5:26     ` Cao, Waterman
