* [dpdk-dev] [PATCH v4 1/6] vmxnet3: fix typos and remove unused struct
2016-01-13 2:08 [dpdk-dev] [PATCH v4 0/6] vmxnet3 TSO, tx cksum offload and cleanups Yong Wang
@ 2016-01-13 2:08 ` Yong Wang
2016-01-13 2:08 ` [dpdk-dev] [PATCH v4 2/6] vmxnet3: restore tx data ring support Yong Wang
` (5 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Yong Wang @ 2016-01-13 2:08 UTC (permalink / raw)
To: dev
Signed-off-by: Yong Wang <yongwang@vmware.com>
---
drivers/net/vmxnet3/base/includeCheck.h | 39 ---------------------------------
drivers/net/vmxnet3/base/vmxnet3_defs.h | 9 +-------
drivers/net/vmxnet3/vmxnet3_ethdev.c | 2 +-
drivers/net/vmxnet3/vmxnet3_ring.h | 13 -----------
drivers/net/vmxnet3/vmxnet3_rxtx.c | 2 +-
5 files changed, 3 insertions(+), 62 deletions(-)
delete mode 100644 drivers/net/vmxnet3/base/includeCheck.h
diff --git a/drivers/net/vmxnet3/base/includeCheck.h b/drivers/net/vmxnet3/base/includeCheck.h
deleted file mode 100644
index 310cebe..0000000
--- a/drivers/net/vmxnet3/base/includeCheck.h
+++ /dev/null
@@ -1,39 +0,0 @@
-/*-
- * BSD LICENSE
- *
- * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef _INCLUDECHECK_H
-#define _INCLUDECHECK_H
-
-#include "vmxnet3_osdep.h"
-
-#endif /* _INCLUDECHECK_H */
diff --git a/drivers/net/vmxnet3/base/vmxnet3_defs.h b/drivers/net/vmxnet3/base/vmxnet3_defs.h
index 2b56574..68ae8b6 100644
--- a/drivers/net/vmxnet3/base/vmxnet3_defs.h
+++ b/drivers/net/vmxnet3/base/vmxnet3_defs.h
@@ -35,14 +35,7 @@
#ifndef _VMXNET3_DEFS_H_
#define _VMXNET3_DEFS_H_
-#define INCLUDE_ALLOW_USERLEVEL
-#define INCLUDE_ALLOW_VMKERNEL
-#define INCLUDE_ALLOW_DISTRIBUTE
-#define INCLUDE_ALLOW_VMKDRIVERS
-#define INCLUDE_ALLOW_VMCORE
-#define INCLUDE_ALLOW_MODULE
-#include "includeCheck.h"
-
+#include "vmxnet3_osdep.h"
#include "upt1_defs.h"
/* all registers are 32 bit wide */
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index c363bf6..d90e62f 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -819,7 +819,7 @@ vmxnet3_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vid, int on)
else
VMXNET3_CLEAR_VFTABLE_ENTRY(hw->shadow_vfta, vid);
- /* don't change active filter if in promiscious mode */
+ /* don't change active filter if in promiscuous mode */
if (rxConf->rxMode & VMXNET3_RXM_PROMISC)
return 0;
diff --git a/drivers/net/vmxnet3/vmxnet3_ring.h b/drivers/net/vmxnet3/vmxnet3_ring.h
index 612487e..15b19e1 100644
--- a/drivers/net/vmxnet3/vmxnet3_ring.h
+++ b/drivers/net/vmxnet3/vmxnet3_ring.h
@@ -130,18 +130,6 @@ struct vmxnet3_txq_stats {
uint64_t tx_ring_full;
};
-typedef struct vmxnet3_tx_ctx {
- int ip_type;
- bool is_vlan;
- bool is_cso;
-
- uint16_t evl_tag; /* only valid when is_vlan == TRUE */
- uint32_t eth_hdr_size; /* only valid for pkts requesting tso or csum
- * offloading */
- uint32_t ip_hdr_size;
- uint32_t l4_hdr_size;
-} vmxnet3_tx_ctx_t;
-
typedef struct vmxnet3_tx_queue {
struct vmxnet3_hw *hw;
struct vmxnet3_cmd_ring cmd_ring;
@@ -155,7 +143,6 @@ typedef struct vmxnet3_tx_queue {
uint8_t port_id; /**< Device port identifier. */
} vmxnet3_tx_queue_t;
-
struct vmxnet3_rxq_stats {
uint64_t drop_total;
uint64_t drop_err;
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 4de5d89..a3154bc 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -598,7 +598,7 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (unlikely(rxd->btype != VMXNET3_RXD_BTYPE_HEAD)) {
PMD_RX_LOG(DEBUG,
"Alert : Misbehaving device, incorrect "
- " buffer type used. iPacket dropped.");
+ " buffer type used. Packet dropped.");
rte_pktmbuf_free_seg(rbi->m);
goto rcd_done;
}
--
1.9.1
* [dpdk-dev] [PATCH v4 2/6] vmxnet3: restore tx data ring support
2016-01-13 2:08 [dpdk-dev] [PATCH v4 0/6] vmxnet3 TSO, tx cksum offload and cleanups Yong Wang
2016-01-13 2:08 ` [dpdk-dev] [PATCH v4 1/6] vmxnet3: fix typos and remove unused struct Yong Wang
@ 2016-01-13 2:08 ` Yong Wang
2016-01-13 2:08 ` [dpdk-dev] [PATCH v4 3/6] vmxnet3: cleanup txNumDeferred usage Yong Wang
` (4 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Yong Wang @ 2016-01-13 2:08 UTC (permalink / raw)
To: dev
Tx data ring support was removed in a previous change that added
multi-segment transmit. This change adds it back.

According to the original commit (2e849373), the 64B packet rate with
l2fwd improved by ~20% on an Ivy Bridge server, at which point the rx
side becomes the bottleneck.

I re-ran the same test on a different setup (Haswell processor,
~2.3GHz clock rate) on top of master and still observed a ~17%
performance gain.
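In essence, the restored path copies any single-segment packet no
larger than VMXNET3_HDR_COPY_SIZE into the tx data ring and points the
descriptor at the data ring slot instead of the mbuf. A condensed
sketch of the logic (illustrative; the literal hunks are below):

    if (txm->nb_segs == 1 &&
        rte_pktmbuf_pkt_len(txm) <= VMXNET3_HDR_COPY_SIZE) {
            /* copy the whole packet into the per-queue data ring */
            tdd = txq->data_ring.base + txq->cmd_ring.next2fill;
            copy_size = rte_pktmbuf_pkt_len(txm);
            rte_memcpy(tdd->data, rte_pktmbuf_mtod(txm, char *), copy_size);
    }
    ...
    /* DMA address comes from the data ring for copied packets */
    gdesc->txd.addr = copy_size ?
            rte_cpu_to_le_64(txq->data_ring.basePA +
                             txq->cmd_ring.next2fill *
                             sizeof(struct Vmxnet3_TxDataDesc)) :
            RTE_MBUF_DATA_DMA_ADDR(m_seg);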
Fixes: 7ba5de417e3c ("vmxnet3: support multi-segment transmit")
Signed-off-by: Yong Wang <yongwang@vmware.com>
---
doc/guides/rel_notes/release_2_3.rst | 5 +++++
drivers/net/vmxnet3/vmxnet3_rxtx.c | 17 ++++++++++++++++-
2 files changed, 21 insertions(+), 1 deletion(-)
diff --git a/doc/guides/rel_notes/release_2_3.rst b/doc/guides/rel_notes/release_2_3.rst
index 99de186..a23c8ac 100644
--- a/doc/guides/rel_notes/release_2_3.rst
+++ b/doc/guides/rel_notes/release_2_3.rst
@@ -15,6 +15,11 @@ EAL
Drivers
~~~~~~~
+* **vmxnet3: restore tx data ring.**
+
+ Tx data ring has been shown to improve small pkt forwarding performance
+ on vSphere environment.
+
Libraries
~~~~~~~~~
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index a3154bc..4ccab0e 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -348,6 +348,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t first2fill, avail, dw2;
struct rte_mbuf *txm = tx_pkts[nb_tx];
struct rte_mbuf *m_seg = txm;
+ int copy_size = 0;
/* Is this packet execessively fragmented, then drop */
if (unlikely(txm->nb_segs > VMXNET3_MAX_TXD_PER_PKT)) {
@@ -365,6 +366,14 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
break;
}
+ if (txm->nb_segs == 1 && rte_pktmbuf_pkt_len(txm) <= VMXNET3_HDR_COPY_SIZE) {
+ struct Vmxnet3_TxDataDesc *tdd;
+
+ tdd = txq->data_ring.base + txq->cmd_ring.next2fill;
+ copy_size = rte_pktmbuf_pkt_len(txm);
+ rte_memcpy(tdd->data, rte_pktmbuf_mtod(txm, char *), copy_size);
+ }
+
/* use the previous gen bit for the SOP desc */
dw2 = (txq->cmd_ring.gen ^ 0x1) << VMXNET3_TXD_GEN_SHIFT;
first2fill = txq->cmd_ring.next2fill;
@@ -377,7 +386,13 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
transmit buffer size (16K) is greater than
maximum sizeof mbuf segment size. */
gdesc = txq->cmd_ring.base + txq->cmd_ring.next2fill;
- gdesc->txd.addr = RTE_MBUF_DATA_DMA_ADDR(m_seg);
+ if (copy_size)
+ gdesc->txd.addr = rte_cpu_to_le_64(txq->data_ring.basePA +
+ txq->cmd_ring.next2fill *
+ sizeof(struct Vmxnet3_TxDataDesc));
+ else
+ gdesc->txd.addr = RTE_MBUF_DATA_DMA_ADDR(m_seg);
+
gdesc->dword[2] = dw2 | m_seg->data_len;
gdesc->dword[3] = 0;
--
1.9.1
* [dpdk-dev] [PATCH v4 3/6] vmxnet3: cleanup txNumDeferred usage
2016-01-13 2:08 [dpdk-dev] [PATCH v4 0/6] vmxnet3 TSO, tx cksum offload and cleanups Yong Wang
2016-01-13 2:08 ` [dpdk-dev] [PATCH v4 1/6] vmxnet3: fix typos and remove unused struct Yong Wang
2016-01-13 2:08 ` [dpdk-dev] [PATCH v4 2/6] vmxnet3: restore tx data ring support Yong Wang
@ 2016-01-13 2:08 ` Yong Wang
2016-01-13 2:08 ` [dpdk-dev] [PATCH v4 4/6] vmxnet3: add tx l4 cksum offload Yong Wang
` (3 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Yong Wang @ 2016-01-13 2:08 UTC (permalink / raw)
To: dev
Signed-off-by: Yong Wang <yongwang@vmware.com>
---
drivers/net/vmxnet3/vmxnet3_rxtx.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 4ccab0e..f3af2f2 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -332,6 +332,8 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_tx;
vmxnet3_tx_queue_t *txq = tx_queue;
struct vmxnet3_hw *hw = txq->hw;
+ Vmxnet3_TxQueueCtrl *txq_ctrl = &txq->shared->ctrl;
+ uint32_t deferred = rte_le_to_cpu_32(txq_ctrl->txNumDeferred);
if (unlikely(txq->stopped)) {
PMD_TX_LOG(DEBUG, "Tx queue is stopped.");
@@ -419,15 +421,14 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
rte_compiler_barrier();
gdesc->dword[2] ^= VMXNET3_TXD_GEN;
- txq->shared->ctrl.txNumDeferred++;
+ txq_ctrl->txNumDeferred = rte_cpu_to_le_32(++deferred);
nb_tx++;
}
- PMD_TX_LOG(DEBUG, "vmxnet3 txThreshold: %u", txq->shared->ctrl.txThreshold);
+ PMD_TX_LOG(DEBUG, "vmxnet3 txThreshold: %u", rte_le_to_cpu_32(txq_ctrl->txThreshold));
- if (txq->shared->ctrl.txNumDeferred >= txq->shared->ctrl.txThreshold) {
-
- txq->shared->ctrl.txNumDeferred = 0;
+ if (deferred >= rte_le_to_cpu_32(txq_ctrl->txThreshold)) {
+ txq_ctrl->txNumDeferred = 0;
/* Notify vSwitch that packets are available. */
VMXNET3_WRITE_BAR0_REG(hw, (VMXNET3_REG_TXPROD + txq->queue_id * VMXNET3_REG_ALIGN),
txq->cmd_ring.next2fill);
--
1.9.1
* [dpdk-dev] [PATCH v4 4/6] vmxnet3: add tx l4 cksum offload
2016-01-13 2:08 [dpdk-dev] [PATCH v4 0/6] vmxnet3 TSO, tx cksum offload and cleanups Yong Wang
` (2 preceding siblings ...)
2016-01-13 2:08 ` [dpdk-dev] [PATCH v4 3/6] vmxnet3: cleanup txNumDeferred usage Yong Wang
@ 2016-01-13 2:08 ` Yong Wang
2016-01-13 2:08 ` [dpdk-dev] [PATCH v4 5/6] vmxnet3: add TSO support Yong Wang
` (2 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Yong Wang @ 2016-01-13 2:08 UTC (permalink / raw)
To: dev
Support TCP/UDP checksum offload.
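As with other PMDs, the caller follows the standard DPDK convention:
set PKT_TX_TCP_CKSUM or PKT_TX_UDP_CKSUM together with l2_len/l3_len
and seed the L4 checksum field with the pseudo-header checksum. A
minimal sketch for TCP over IPv4 without options (illustrative only,
not part of this patch; the helper name is made up):

    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_tcp.h>
    #include <rte_mbuf.h>

    static void
    prepare_tcp_cksum_offload(struct rte_mbuf *m)
    {
            struct ether_hdr *eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
            struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
            struct tcp_hdr *tcp = (struct tcp_hdr *)((char *)ip + sizeof(*ip));

            m->l2_len = sizeof(struct ether_hdr);
            m->l3_len = sizeof(struct ipv4_hdr);
            m->ol_flags |= PKT_TX_IPV4 | PKT_TX_TCP_CKSUM;

            /* hardware fills the TCP checksum; software seeds the
             * pseudo-header checksum first */
            tcp->cksum = rte_ipv4_phdr_cksum(ip, m->ol_flags);
    }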
Signed-off-by: Yong Wang <yongwang@vmware.com>
---
doc/guides/rel_notes/release_2_3.rst | 3 +++
drivers/net/vmxnet3/vmxnet3_rxtx.c | 26 +++++++++++++++++++++++---
2 files changed, 26 insertions(+), 3 deletions(-)
diff --git a/doc/guides/rel_notes/release_2_3.rst b/doc/guides/rel_notes/release_2_3.rst
index a23c8ac..58205fe 100644
--- a/doc/guides/rel_notes/release_2_3.rst
+++ b/doc/guides/rel_notes/release_2_3.rst
@@ -20,6 +20,9 @@ Drivers
Tx data ring has been shown to improve small pkt forwarding performance
on vSphere environment.
+* **vmxnet3: add tx l4 cksum offload.**
+
+ Support TCP/UDP checksum offload.
Libraries
~~~~~~~~~
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index f3af2f2..2c1bc3c 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -415,7 +415,27 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
gdesc->txd.tci = txm->vlan_tci;
}
- /* TODO: Add transmit checksum offload here */
+ if (txm->ol_flags & PKT_TX_L4_MASK) {
+ gdesc->txd.om = VMXNET3_OM_CSUM;
+ gdesc->txd.hlen = txm->l2_len + txm->l3_len;
+
+ switch (txm->ol_flags & PKT_TX_L4_MASK) {
+ case PKT_TX_TCP_CKSUM:
+ gdesc->txd.msscof = gdesc->txd.hlen + offsetof(struct tcp_hdr, cksum);
+ break;
+ case PKT_TX_UDP_CKSUM:
+ gdesc->txd.msscof = gdesc->txd.hlen + offsetof(struct udp_hdr, dgram_cksum);
+ break;
+ default:
+ PMD_TX_LOG(WARNING, "requested cksum offload not supported %#llx",
+ txm->ol_flags & PKT_TX_L4_MASK);
+ abort();
+ }
+ } else {
+ gdesc->txd.hlen = 0;
+ gdesc->txd.om = VMXNET3_OM_NONE;
+ gdesc->txd.msscof = 0;
+ }
/* flip the GEN bit on the SOP */
rte_compiler_barrier();
@@ -729,8 +749,8 @@ vmxnet3_dev_tx_queue_setup(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
if ((tx_conf->txq_flags & ETH_TXQ_FLAGS_NOXSUMS) !=
- ETH_TXQ_FLAGS_NOXSUMS) {
- PMD_INIT_LOG(ERR, "TX no support for checksum offload yet");
+ ETH_TXQ_FLAGS_NOXSUMSCTP) {
+ PMD_INIT_LOG(ERR, "SCTP checksum offload not supported");
return -EINVAL;
}
--
1.9.1
* [dpdk-dev] [PATCH v4 5/6] vmxnet3: add TSO support
2016-01-13 2:08 [dpdk-dev] [PATCH v4 0/6] vmxnet3 TSO, tx cksum offload and cleanups Yong Wang
` (3 preceding siblings ...)
2016-01-13 2:08 ` [dpdk-dev] [PATCH v4 4/6] vmxnet3: add tx l4 cksum offload Yong Wang
@ 2016-01-13 2:08 ` Yong Wang
2016-03-15 20:39 ` Thomas Monjalon
2016-01-13 2:08 ` [dpdk-dev] [PATCH v4 6/6] vmxnet3: announce device offload capability Yong Wang
2016-01-13 4:56 ` [dpdk-dev] [PATCH v4 0/6] vmxnet3 TSO, tx cksum offload and cleanups Stephen Hemminger
6 siblings, 1 reply; 10+ messages in thread
From: Yong Wang @ 2016-01-13 2:08 UTC (permalink / raw)
To: dev
This commit adds vmxnet3 TSO support.

Verified with testpmd (set fwd csum) that both TSO and non-TSO packets
are transmitted successfully and that all segments of a TSO packet
arrive intact on the receiver side.
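For anyone reproducing the check, a testpmd session along these lines
exercises the csum engine with TSO (800 is the TSO segment size, 0 the
port id; exact command syntax may differ slightly between releases):

    testpmd> port stop all
    testpmd> csum set tcp hw 0
    testpmd> tso set 800 0
    testpmd> port start all
    testpmd> set fwd csum
    testpmd> start

A capture on the receive side should then show the payload split into
MSS-sized segments with valid IP/TCP checksums.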
Signed-off-by: Yong Wang <yongwang@vmware.com>
---
doc/guides/rel_notes/release_2_3.rst | 3 +
drivers/net/vmxnet3/vmxnet3_rxtx.c | 108 ++++++++++++++++++++++++++---------
2 files changed, 84 insertions(+), 27 deletions(-)
diff --git a/doc/guides/rel_notes/release_2_3.rst b/doc/guides/rel_notes/release_2_3.rst
index 58205fe..ae487bb 100644
--- a/doc/guides/rel_notes/release_2_3.rst
+++ b/doc/guides/rel_notes/release_2_3.rst
@@ -24,6 +24,9 @@ Drivers
Support TCP/UDP checksum offload.
+* **vmxnet3: add TSO support.**
+
+
Libraries
~~~~~~~~~
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 2c1bc3c..103294a 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -295,27 +295,45 @@ vmxnet3_dev_clear_queues(struct rte_eth_dev *dev)
}
}
+static int
+vmxnet3_unmap_pkt(uint16_t eop_idx, vmxnet3_tx_queue_t *txq)
+{
+ int completed = 0;
+ struct rte_mbuf *mbuf;
+
+ /* Release cmd_ring descriptor and free mbuf */
+ VMXNET3_ASSERT(txq->cmd_ring.base[eop_idx].txd.eop == 1);
+
+ mbuf = txq->cmd_ring.buf_info[eop_idx].m;
+ if (mbuf == NULL)
+ rte_panic("EOP desc does not point to a valid mbuf");
+ rte_pktmbuf_free(mbuf);
+
+ txq->cmd_ring.buf_info[eop_idx].m = NULL;
+
+ while (txq->cmd_ring.next2comp != eop_idx) {
+ /* no out-of-order completion */
+ VMXNET3_ASSERT(txq->cmd_ring.base[txq->cmd_ring.next2comp].txd.cq == 0);
+ vmxnet3_cmd_ring_adv_next2comp(&txq->cmd_ring);
+ completed++;
+ }
+
+ /* Mark the txd for which tcd was generated as completed */
+ vmxnet3_cmd_ring_adv_next2comp(&txq->cmd_ring);
+
+ return completed + 1;
+}
+
static void
vmxnet3_tq_tx_complete(vmxnet3_tx_queue_t *txq)
{
int completed = 0;
- struct rte_mbuf *mbuf;
vmxnet3_comp_ring_t *comp_ring = &txq->comp_ring;
struct Vmxnet3_TxCompDesc *tcd = (struct Vmxnet3_TxCompDesc *)
(comp_ring->base + comp_ring->next2proc);
while (tcd->gen == comp_ring->gen) {
- /* Release cmd_ring descriptor and free mbuf */
- VMXNET3_ASSERT(txq->cmd_ring.base[tcd->txdIdx].txd.eop == 1);
- while (txq->cmd_ring.next2comp != tcd->txdIdx) {
- mbuf = txq->cmd_ring.buf_info[txq->cmd_ring.next2comp].m;
- txq->cmd_ring.buf_info[txq->cmd_ring.next2comp].m = NULL;
- rte_pktmbuf_free_seg(mbuf);
-
- /* Mark the txd for which tcd was generated as completed */
- vmxnet3_cmd_ring_adv_next2comp(&txq->cmd_ring);
- completed++;
- }
+ completed += vmxnet3_unmap_pkt(tcd->txdIdx, txq);
vmxnet3_comp_ring_adv_next2proc(comp_ring);
tcd = (struct Vmxnet3_TxCompDesc *)(comp_ring->base +
@@ -351,21 +369,43 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
struct rte_mbuf *txm = tx_pkts[nb_tx];
struct rte_mbuf *m_seg = txm;
int copy_size = 0;
+ bool tso = (txm->ol_flags & PKT_TX_TCP_SEG) != 0;
+ /* # of descriptors needed for a packet. */
+ unsigned count = txm->nb_segs;
- /* Is this packet execessively fragmented, then drop */
- if (unlikely(txm->nb_segs > VMXNET3_MAX_TXD_PER_PKT)) {
- ++txq->stats.drop_too_many_segs;
- ++txq->stats.drop_total;
+ avail = vmxnet3_cmd_ring_desc_avail(&txq->cmd_ring);
+ if (count > avail) {
+ /* Is command ring full? */
+ if (unlikely(avail == 0)) {
+ PMD_TX_LOG(DEBUG, "No free ring descriptors");
+ txq->stats.tx_ring_full++;
+ txq->stats.drop_total += (nb_pkts - nb_tx);
+ break;
+ }
+
+ /* Command ring is not full but cannot handle the
+ * multi-segmented packet. Let's try the next packet
+ * in this case.
+ */
+ PMD_TX_LOG(DEBUG, "Running out of ring descriptors "
+ "(avail %d needed %d)", avail, count);
+ txq->stats.drop_total++;
+ if (tso)
+ txq->stats.drop_tso++;
rte_pktmbuf_free(txm);
- ++nb_tx;
+ nb_tx++;
continue;
}
- /* Is command ring full? */
- avail = vmxnet3_cmd_ring_desc_avail(&txq->cmd_ring);
- if (txm->nb_segs > avail) {
- ++txq->stats.tx_ring_full;
- break;
+ /* Drop non-TSO packet that is excessively fragmented */
+ if (unlikely(!tso && count > VMXNET3_MAX_TXD_PER_PKT)) {
+ PMD_TX_LOG(ERROR, "Non-TSO packet cannot occupy more than %d tx "
+ "descriptors. Packet dropped.", VMXNET3_MAX_TXD_PER_PKT);
+ txq->stats.drop_too_many_segs++;
+ txq->stats.drop_total++;
+ rte_pktmbuf_free(txm);
+ nb_tx++;
+ continue;
}
if (txm->nb_segs == 1 && rte_pktmbuf_pkt_len(txm) <= VMXNET3_HDR_COPY_SIZE) {
@@ -382,11 +422,11 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
do {
/* Remember the transmit buffer for cleanup */
tbi = txq->cmd_ring.buf_info + txq->cmd_ring.next2fill;
- tbi->m = m_seg;
/* NB: the following assumes that VMXNET3 maximum
- transmit buffer size (16K) is greater than
- maximum sizeof mbuf segment size. */
+ * transmit buffer size (16K) is greater than
+ * maximum size of mbuf segment size.
+ */
gdesc = txq->cmd_ring.base + txq->cmd_ring.next2fill;
if (copy_size)
gdesc->txd.addr = rte_cpu_to_le_64(txq->data_ring.basePA +
@@ -405,6 +445,8 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
dw2 = txq->cmd_ring.gen << VMXNET3_TXD_GEN_SHIFT;
} while ((m_seg = m_seg->next) != NULL);
+ /* set the last buf_info for the pkt */
+ tbi->m = txm;
/* Update the EOP descriptor */
gdesc->dword[3] |= VMXNET3_TXD_EOP | VMXNET3_TXD_CQ;
@@ -415,7 +457,17 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
gdesc->txd.tci = txm->vlan_tci;
}
- if (txm->ol_flags & PKT_TX_L4_MASK) {
+ if (tso) {
+ uint16_t mss = txm->tso_segsz;
+
+ VMXNET3_ASSERT(mss > 0);
+
+ gdesc->txd.hlen = txm->l2_len + txm->l3_len + txm->l4_len;
+ gdesc->txd.om = VMXNET3_OM_TSO;
+ gdesc->txd.msscof = mss;
+
+ deferred += (rte_pktmbuf_pkt_len(txm) - gdesc->txd.hlen + mss - 1) / mss;
+ } else if (txm->ol_flags & PKT_TX_L4_MASK) {
gdesc->txd.om = VMXNET3_OM_CSUM;
gdesc->txd.hlen = txm->l2_len + txm->l3_len;
@@ -431,17 +483,19 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
txm->ol_flags & PKT_TX_L4_MASK);
abort();
}
+ deferred++;
} else {
gdesc->txd.hlen = 0;
gdesc->txd.om = VMXNET3_OM_NONE;
gdesc->txd.msscof = 0;
+ deferred++;
}
/* flip the GEN bit on the SOP */
rte_compiler_barrier();
gdesc->dword[2] ^= VMXNET3_TXD_GEN;
- txq_ctrl->txNumDeferred = rte_cpu_to_le_32(++deferred);
+ txq_ctrl->txNumDeferred = rte_cpu_to_le_32(deferred);
nb_tx++;
}
--
1.9.1
* [dpdk-dev] [PATCH v4 6/6] vmxnet3: announce device offload capability
2016-01-13 2:08 [dpdk-dev] [PATCH v4 0/6] vmxnet3 TSO, tx cksum offload and cleanups Yong Wang
` (4 preceding siblings ...)
2016-01-13 2:08 ` [dpdk-dev] [PATCH v4 5/6] vmxnet3: add TSO support Yong Wang
@ 2016-01-13 2:08 ` Yong Wang
2016-01-13 4:56 ` [dpdk-dev] [PATCH v4 0/6] vmxnet3 TSO, tx cksum offload and cleanups Stephen Hemminger
6 siblings, 0 replies; 10+ messages in thread
From: Yong Wang @ 2016-01-13 2:08 UTC (permalink / raw)
To: dev
Signed-off-by: Yong Wang <yongwang@vmware.com>
---
drivers/net/vmxnet3/vmxnet3_ethdev.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
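Note (not part of the patch): applications can gate their use of these
offloads on the advertised bits via rte_eth_dev_info_get(), e.g. (a
fragment; port_id is assumed to be defined by the caller):

    #include <stdio.h>
    #include <rte_ethdev.h>

    struct rte_eth_dev_info dev_info;

    rte_eth_dev_info_get(port_id, &dev_info);
    if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO))
            printf("port %u: no TSO offload, fall back to software\n",
                   port_id);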
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index d90e62f..8a40127 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -693,7 +693,8 @@ vmxnet3_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
}
static void
-vmxnet3_dev_info_get(__attribute__((unused))struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+vmxnet3_dev_info_get(__attribute__((unused))struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
{
dev_info->max_rx_queues = VMXNET3_MAX_RX_QUEUES;
dev_info->max_tx_queues = VMXNET3_MAX_TX_QUEUES;
@@ -716,6 +717,17 @@ vmxnet3_dev_info_get(__attribute__((unused))struct rte_eth_dev *dev, struct rte_
.nb_min = VMXNET3_DEF_TX_RING_SIZE,
.nb_align = 1,
};
+
+ dev_info->rx_offload_capa =
+ DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_UDP_CKSUM |
+ DEV_RX_OFFLOAD_TCP_CKSUM;
+
+ dev_info->tx_offload_capa =
+ DEV_TX_OFFLOAD_VLAN_INSERT |
+ DEV_TX_OFFLOAD_TCP_CKSUM |
+ DEV_TX_OFFLOAD_UDP_CKSUM |
+ DEV_TX_OFFLOAD_TCP_TSO;
}
/* return 0 means link status changed, -1 means not changed */
--
1.9.1
* Re: [dpdk-dev] [PATCH v4 0/6] vmxnet3 TSO, tx cksum offload and cleanups
2016-01-13 2:08 [dpdk-dev] [PATCH v4 0/6] vmxnet3 TSO, tx cksum offload and cleanups Yong Wang
` (5 preceding siblings ...)
2016-01-13 2:08 ` [dpdk-dev] [PATCH v4 6/6] vmxnet3: announce device offload capability Yong Wang
@ 2016-01-13 4:56 ` Stephen Hemminger
2016-02-10 12:30 ` Bruce Richardson
6 siblings, 1 reply; 10+ messages in thread
From: Stephen Hemminger @ 2016-01-13 4:56 UTC (permalink / raw)
To: Yong Wang; +Cc: dev
On Tue, 12 Jan 2016 18:08:31 -0800
Yong Wang <yongwang@vmware.com> wrote:
> v4:
> * moved cleanups to separate patches
> * correctly handled multi-seg pkts with data ring used
>
> v3:
> * fixed comments from Stephen
> * added performance number for tx data ring
>
> v2:
> * fixed some logging issues when debug option turned on
> * updated the txq_flags check in vmxnet3_dev_tx_queue_setup()
>
> This patchset adds TCP/UDP checksum offload and TSO to vmxnet3 PMD.
> One of the use cases is to support STT. It also restores the tx
> data ring feature that was removed from a previous patch.
>
> Yong Wang (6):
> vmxnet3: fix typos and remove unused struct
> vmxnet3: restore tx data ring support
> vmxnet3: cleanup txNumDeferred usage
> vmxnet3: add tx l4 cksum offload
> vmxnet3: add TSO support
> vmxnet3: announce device offload capability
>
> doc/guides/rel_notes/release_2_3.rst | 11 +++
> drivers/net/vmxnet3/base/includeCheck.h | 39 --------
> drivers/net/vmxnet3/base/vmxnet3_defs.h | 9 +-
> drivers/net/vmxnet3/vmxnet3_ethdev.c | 16 +++-
> drivers/net/vmxnet3/vmxnet3_ring.h | 13 ---
> drivers/net/vmxnet3/vmxnet3_rxtx.c | 160 +++++++++++++++++++++++++-------
> 6 files changed, 151 insertions(+), 97 deletions(-)
> delete mode 100644 drivers/net/vmxnet3/base/includeCheck.h
>
Looks good. The only thing maybe worth adding would be some more checks
in vmxnet3_dev_configure for unsupported offload bits, etc.
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
* Re: [dpdk-dev] [PATCH v4 0/6] vmxnet3 TSO, tx cksum offload and cleanups
2016-01-13 4:56 ` [dpdk-dev] [PATCH v4 0/6] vmxnet3 TSO, tx cksum offload and cleanups Stephen Hemminger
@ 2016-02-10 12:30 ` Bruce Richardson
0 siblings, 0 replies; 10+ messages in thread
From: Bruce Richardson @ 2016-02-10 12:30 UTC (permalink / raw)
To: Stephen Hemminger, Yong Wang; +Cc: dev
On Tue, Jan 12, 2016 at 08:56:34PM -0800, Stephen Hemminger wrote:
> On Tue, 12 Jan 2016 18:08:31 -0800
> Yong Wang <yongwang@vmware.com> wrote:
>
> > v4:
> > * moved cleanups to separate patches
> > * correctly handled multi-seg pkts with data ring used
> >
> > v3:
> > * fixed comments from Stephen
> > * added performance number for tx data ring
> >
> > v2:
> > * fixed some logging issues when debug option turned on
> > * updated the txq_flags check in vmxnet3_dev_tx_queue_setup()
> >
> > This patchset adds TCP/UDP checksum offload and TSO to vmxnet3 PMD.
> > One of the use cases is to support STT. It also restores the tx
> > data ring feature that was removed from a previous patch.
> >
> > Yong Wang (6):
> > vmxnet3: fix typos and remove unused struct
> > vmxnet3: restore tx data ring support
> > vmxnet3: cleanup txNumDeferred usage
> > vmxnet3: add tx l4 cksum offload
> > vmxnet3: add TSO support
> > vmxnet3: announce device offload capability
> >
> > doc/guides/rel_notes/release_2_3.rst | 11 +++
> > drivers/net/vmxnet3/base/includeCheck.h | 39 --------
> > drivers/net/vmxnet3/base/vmxnet3_defs.h | 9 +-
> > drivers/net/vmxnet3/vmxnet3_ethdev.c | 16 +++-
> > drivers/net/vmxnet3/vmxnet3_ring.h | 13 ---
> > drivers/net/vmxnet3/vmxnet3_rxtx.c | 160 +++++++++++++++++++++++++-------
> > 6 files changed, 151 insertions(+), 97 deletions(-)
> > delete mode 100644 drivers/net/vmxnet3/base/includeCheck.h
> >
>
> Looks good. The only thing maybe worth adding would be some more checks
> in vmxnet3_dev_configure for unsupported offload bits, etc.
>
> Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Applied to dpdk-next-net/rel_16_04
Thanks,
Bruce