* [dpdk-dev] [PATCH v2 0/4] vmxnet3 TSO and tx cksum offload
@ 2016-01-05 2:28 Yong Wang
2016-01-05 2:28 ` [dpdk-dev] [PATCH v2 1/4] vmxnet3: restore tx data ring support Yong Wang
` (4 more replies)
0 siblings, 5 replies; 12+ messages in thread
From: Yong Wang @ 2016-01-05 2:28 UTC (permalink / raw)
To: dev
v2:
* fixed some logging issues when the debug option is turned on
* updated the txq_flags check in vmxnet3_dev_tx_queue_setup()
This patchset adds TCP/UDP checksum offload and TSO to the vmxnet3 PMD.
One of the use cases for these features is to support STT. It also
restores the tx data ring feature that was removed by a previous
patch.
Yong Wang (4):
vmxnet3: restore tx data ring support
vmxnet3: add tx l4 cksum offload
vmxnet3: add TSO support
vmxnet3: announce device offload capability
doc/guides/rel_notes/release_2_3.rst | 11 +++
drivers/net/vmxnet3/vmxnet3_ethdev.c | 16 +++-
drivers/net/vmxnet3/vmxnet3_ring.h | 13 ---
drivers/net/vmxnet3/vmxnet3_rxtx.c | 169 +++++++++++++++++++++++++++--------
4 files changed, 158 insertions(+), 51 deletions(-)
--
1.9.1
* [dpdk-dev] [PATCH v2 1/4] vmxnet3: restore tx data ring support
2016-01-05 2:28 [dpdk-dev] [PATCH v2 0/4] vmxnet3 TSO and tx cksum offload Yong Wang
@ 2016-01-05 2:28 ` Yong Wang
2016-01-05 5:16 ` Stephen Hemminger
2016-01-05 2:28 ` [dpdk-dev] [PATCH v2 2/4] vmxnet3: add tx l4 cksum offload Yong Wang
` (3 subsequent siblings)
4 siblings, 1 reply; 12+ messages in thread
From: Yong Wang @ 2016-01-05 2:28 UTC (permalink / raw)
To: dev
Tx data ring support was removed in a previous change
to add multi-seg transmit. This change adds it back.
Fixes: 7ba5de417e3c ("vmxnet3: support multi-segment transmit")
Signed-off-by: Yong Wang <yongwang@vmware.com>
---
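[Note: the tx data ring is a per-queue array with one fixed-size slot per
command-ring descriptor, so the DMA address of a copied packet follows from
the fill index alone. A minimal sketch of that address computation,
illustrative only; field names as in the diff below, not the authoritative
driver code:

    /* One data-ring slot backs each command-ring slot, so the payload
     * address of a small packet copied at index next2fill is a pure
     * function of that index. The copy is bounded by
     * VMXNET3_HDR_COPY_SIZE because each slot holds
     * sizeof(struct Vmxnet3_TxDataDesc) bytes. */
    uint64_t payload_pa = txq->data_ring.basePA +
        (uint64_t)txq->cmd_ring.next2fill *
        sizeof(struct Vmxnet3_TxDataDesc);
]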
doc/guides/rel_notes/release_2_3.rst | 5 +++++
drivers/net/vmxnet3/vmxnet3_rxtx.c | 17 ++++++++++++++++-
2 files changed, 21 insertions(+), 1 deletion(-)
diff --git a/doc/guides/rel_notes/release_2_3.rst b/doc/guides/rel_notes/release_2_3.rst
index 99de186..a23c8ac 100644
--- a/doc/guides/rel_notes/release_2_3.rst
+++ b/doc/guides/rel_notes/release_2_3.rst
@@ -15,6 +15,11 @@ EAL
Drivers
~~~~~~~
+* **vmxnet3: restore tx data ring.**
+
+ Tx data ring has been shown to improve small pkt forwarding performance
+ in vSphere environments.
+
Libraries
~~~~~~~~~
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 4de5d89..2202d31 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -348,6 +348,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t first2fill, avail, dw2;
struct rte_mbuf *txm = tx_pkts[nb_tx];
struct rte_mbuf *m_seg = txm;
+ int copy_size = 0;
/* Is this packet execessively fragmented, then drop */
if (unlikely(txm->nb_segs > VMXNET3_MAX_TXD_PER_PKT)) {
@@ -365,6 +366,14 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
break;
}
+ if (rte_pktmbuf_pkt_len(txm) <= VMXNET3_HDR_COPY_SIZE) {
+ struct Vmxnet3_TxDataDesc *tdd;
+
+ tdd = txq->data_ring.base + txq->cmd_ring.next2fill;
+ copy_size = rte_pktmbuf_pkt_len(txm);
+ rte_memcpy(tdd->data, rte_pktmbuf_mtod(txm, char *), copy_size);
+ }
+
/* use the previous gen bit for the SOP desc */
dw2 = (txq->cmd_ring.gen ^ 0x1) << VMXNET3_TXD_GEN_SHIFT;
first2fill = txq->cmd_ring.next2fill;
@@ -377,7 +386,13 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
transmit buffer size (16K) is greater than
maximum sizeof mbuf segment size. */
gdesc = txq->cmd_ring.base + txq->cmd_ring.next2fill;
- gdesc->txd.addr = RTE_MBUF_DATA_DMA_ADDR(m_seg);
+ if (copy_size)
+ gdesc->txd.addr = rte_cpu_to_le_64(txq->data_ring.basePA +
+ txq->cmd_ring.next2fill *
+ sizeof(struct Vmxnet3_TxDataDesc));
+ else
+ gdesc->txd.addr = RTE_MBUF_DATA_DMA_ADDR(m_seg);
+
gdesc->dword[2] = dw2 | m_seg->data_len;
gdesc->dword[3] = 0;
--
1.9.1
* [dpdk-dev] [PATCH v2 2/4] vmxnet3: add tx l4 cksum offload
2016-01-05 2:28 [dpdk-dev] [PATCH v2 0/4] vmxnet3 TSO and tx cksum offload Yong Wang
2016-01-05 2:28 ` [dpdk-dev] [PATCH v2 1/4] vmxnet3: restore tx data ring support Yong Wang
@ 2016-01-05 2:28 ` Yong Wang
2016-01-05 2:28 ` [dpdk-dev] [PATCH v2 3/4] vmxnet3: add TSO support Yong Wang
` (2 subsequent siblings)
4 siblings, 0 replies; 12+ messages in thread
From: Yong Wang @ 2016-01-05 2:28 UTC (permalink / raw)
To: dev
Support TCP/UDP checksum offload.
Signed-off-by: Yong Wang <yongwang@vmware.com>
---
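[Note: a minimal sketch of how an application might request the offload this
patch enables, assuming the DPDK 2.x mbuf API; the helper name
request_tcp_cksum() and the pseudo-header seeding step are illustrative, not
part of the patch:

    #include <rte_mbuf.h>
    #include <rte_ip.h>
    #include <rte_tcp.h>

    /* Mark an IPv4/TCP mbuf so the PMD takes the VMXNET3_OM_CSUM path. */
    static void
    request_tcp_cksum(struct rte_mbuf *m, uint16_t l2_len, uint16_t l3_len)
    {
        struct ipv4_hdr *ip;
        struct tcp_hdr *tcp;

        m->l2_len = l2_len;     /* e.g. 14 for untagged Ethernet */
        m->l3_len = l3_len;     /* e.g. 20 for IPv4 without options */
        m->ol_flags |= PKT_TX_IPV4 | PKT_TX_TCP_CKSUM;

        /* Seed the checksum field with the pseudo-header checksum;
         * the device computes the rest. */
        ip = rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *, l2_len);
        tcp = (struct tcp_hdr *)((char *)ip + l3_len);
        tcp->cksum = rte_ipv4_phdr_cksum(ip, m->ol_flags);
    }
]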
doc/guides/rel_notes/release_2_3.rst | 3 +++
drivers/net/vmxnet3/vmxnet3_rxtx.c | 39 +++++++++++++++++++++++++++---------
2 files changed, 33 insertions(+), 9 deletions(-)
diff --git a/doc/guides/rel_notes/release_2_3.rst b/doc/guides/rel_notes/release_2_3.rst
index a23c8ac..58205fe 100644
--- a/doc/guides/rel_notes/release_2_3.rst
+++ b/doc/guides/rel_notes/release_2_3.rst
@@ -20,6 +20,9 @@ Drivers
Tx data ring has been shown to improve small pkt forwarding performance
in vSphere environments.
+* **vmxnet3: add tx l4 cksum offload.**
+
+ Support TCP/UDP checksum offload.
Libraries
~~~~~~~~~
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 2202d31..08e6115 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -332,6 +332,8 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_tx;
vmxnet3_tx_queue_t *txq = tx_queue;
struct vmxnet3_hw *hw = txq->hw;
+ Vmxnet3_TxQueueCtrl *txq_ctrl = &txq->shared->ctrl;
+ uint32_t deferred = rte_le_to_cpu_32(txq_ctrl->txNumDeferred);
if (unlikely(txq->stopped)) {
PMD_TX_LOG(DEBUG, "Tx queue is stopped.");
@@ -413,21 +415,40 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
gdesc->txd.tci = txm->vlan_tci;
}
- /* TODO: Add transmit checksum offload here */
+ if (txm->ol_flags & PKT_TX_L4_MASK) {
+ gdesc->txd.om = VMXNET3_OM_CSUM;
+ gdesc->txd.hlen = txm->l2_len + txm->l3_len;
+
+ switch (txm->ol_flags & PKT_TX_L4_MASK) {
+ case PKT_TX_TCP_CKSUM:
+ gdesc->txd.msscof = gdesc->txd.hlen + offsetof(struct tcp_hdr, cksum);
+ break;
+ case PKT_TX_UDP_CKSUM:
+ gdesc->txd.msscof = gdesc->txd.hlen + offsetof(struct udp_hdr, dgram_cksum);
+ break;
+ default:
+ PMD_TX_LOG(WARNING, "requested cksum offload not supported %#llx",
+ txm->ol_flags & PKT_TX_L4_MASK);
+ abort();
+ }
+ } else {
+ gdesc->txd.hlen = 0;
+ gdesc->txd.om = VMXNET3_OM_NONE;
+ gdesc->txd.msscof = 0;
+ }
+
+ txq_ctrl->txNumDeferred = rte_cpu_to_le_32(++deferred);
/* flip the GEN bit on the SOP */
rte_compiler_barrier();
gdesc->dword[2] ^= VMXNET3_TXD_GEN;
-
- txq->shared->ctrl.txNumDeferred++;
nb_tx++;
}
- PMD_TX_LOG(DEBUG, "vmxnet3 txThreshold: %u", txq->shared->ctrl.txThreshold);
-
- if (txq->shared->ctrl.txNumDeferred >= txq->shared->ctrl.txThreshold) {
- txq->shared->ctrl.txNumDeferred = 0;
+ PMD_TX_LOG(DEBUG, "vmxnet3 txThreshold: %u", rte_le_to_cpu_32(txq_ctrl->txThreshold));
+
+ if (deferred >= rte_le_to_cpu_32(txq_ctrl->txThreshold)) {
+ txq_ctrl->txNumDeferred = 0;
/* Notify vSwitch that packets are available. */
VMXNET3_WRITE_BAR0_REG(hw, (VMXNET3_REG_TXPROD + txq->queue_id * VMXNET3_REG_ALIGN),
txq->cmd_ring.next2fill);
@@ -728,8 +749,8 @@ vmxnet3_dev_tx_queue_setup(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
if ((tx_conf->txq_flags & ETH_TXQ_FLAGS_NOXSUMS) !=
- ETH_TXQ_FLAGS_NOXSUMS) {
- PMD_INIT_LOG(ERR, "TX no support for checksum offload yet");
+ ETH_TXQ_FLAGS_NOXSUMSCTP) {
+ PMD_INIT_LOG(ERR, "SCTP checksum offload not supported");
return -EINVAL;
}
--
1.9.1
* [dpdk-dev] [PATCH v2 3/4] vmxnet3: add TSO support
2016-01-05 2:28 [dpdk-dev] [PATCH v2 0/4] vmxnet3 TSO and tx cksum offload Yong Wang
2016-01-05 2:28 ` [dpdk-dev] [PATCH v2 1/4] vmxnet3: restore tx data ring support Yong Wang
2016-01-05 2:28 ` [dpdk-dev] [PATCH v2 2/4] vmxnet3: add tx l4 cksum offload Yong Wang
@ 2016-01-05 2:28 ` Yong Wang
2016-01-05 5:14 ` Stephen Hemminger
2016-01-05 5:15 ` Stephen Hemminger
2016-01-05 2:28 ` [dpdk-dev] [PATCH v2 4/4] vmxnet3: announce device offload capability Yong Wang
2016-01-05 5:22 ` [dpdk-dev] [PATCH v2 0/4] vmxnet3 TSO and tx cksum offload Stephen Hemminger
4 siblings, 2 replies; 12+ messages in thread
From: Yong Wang @ 2016-01-05 2:28 UTC (permalink / raw)
To: dev
This commit adds vmxnet3 TSO support.
Verified with test-pmd (set fwd csum) that both TSO and non-TSO
pkts can be successfully transmitted and all segments for a TSO
pkt are correct on the receiver side.
Signed-off-by: Yong Wang <yongwang@vmware.com>
---
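[Note: a minimal sketch of the mbuf setup an application might use to
exercise this TSO path, assuming the DPDK 2.x mbuf API; the helper name
request_tso() and the example MSS are illustrative only:

    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_tcp.h>
    #include <rte_mbuf.h>

    /* Prepare an IPv4/TCP mbuf for segmentation by the device. */
    static void
    request_tso(struct rte_mbuf *m, uint16_t mss)
    {
        struct ipv4_hdr *ip;
        struct tcp_hdr *tcp;

        m->l2_len = sizeof(struct ether_hdr); /* assumes untagged Ethernet */
        m->l3_len = sizeof(struct ipv4_hdr);  /* assumes no IP options */
        m->l4_len = sizeof(struct tcp_hdr);   /* assumes no TCP options */
        m->tso_segsz = mss;                   /* e.g. 1460 for a 1500 MTU */
        m->ol_flags |= PKT_TX_IPV4 | PKT_TX_TCP_SEG;

        /* Seed the TCP checksum with the pseudo-header checksum; with
         * PKT_TX_TCP_SEG set, rte_ipv4_phdr_cksum() leaves the length
         * out of the pseudo header. */
        ip = rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *, m->l2_len);
        tcp = (struct tcp_hdr *)((char *)ip + m->l3_len);
        tcp->cksum = rte_ipv4_phdr_cksum(ip, m->ol_flags);
    }

The driver below then charges txNumDeferred per resulting segment using
(pkt_len - hlen + mss - 1) / mss; e.g. a 9014-byte packet with hlen 54 and
mss 1460 counts as (8960 + 1459) / 1460 = 7 segments.]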
doc/guides/rel_notes/release_2_3.rst | 3 +
drivers/net/vmxnet3/vmxnet3_ring.h | 13 ----
drivers/net/vmxnet3/vmxnet3_rxtx.c | 117 ++++++++++++++++++++++++++---------
3 files changed, 92 insertions(+), 41 deletions(-)
diff --git a/doc/guides/rel_notes/release_2_3.rst b/doc/guides/rel_notes/release_2_3.rst
index 58205fe..ae487bb 100644
--- a/doc/guides/rel_notes/release_2_3.rst
+++ b/doc/guides/rel_notes/release_2_3.rst
@@ -24,6 +24,9 @@ Drivers
Support TCP/UDP checksum offload.
+* **vmxnet3: add TSO support.**
+
+
Libraries
~~~~~~~~~
diff --git a/drivers/net/vmxnet3/vmxnet3_ring.h b/drivers/net/vmxnet3/vmxnet3_ring.h
index 612487e..15b19e1 100644
--- a/drivers/net/vmxnet3/vmxnet3_ring.h
+++ b/drivers/net/vmxnet3/vmxnet3_ring.h
@@ -130,18 +130,6 @@ struct vmxnet3_txq_stats {
uint64_t tx_ring_full;
};
-typedef struct vmxnet3_tx_ctx {
- int ip_type;
- bool is_vlan;
- bool is_cso;
-
- uint16_t evl_tag; /* only valid when is_vlan == TRUE */
- uint32_t eth_hdr_size; /* only valid for pkts requesting tso or csum
- * offloading */
- uint32_t ip_hdr_size;
- uint32_t l4_hdr_size;
-} vmxnet3_tx_ctx_t;
-
typedef struct vmxnet3_tx_queue {
struct vmxnet3_hw *hw;
struct vmxnet3_cmd_ring cmd_ring;
@@ -155,7 +143,6 @@ typedef struct vmxnet3_tx_queue {
uint8_t port_id; /**< Device port identifier. */
} vmxnet3_tx_queue_t;
-
struct vmxnet3_rxq_stats {
uint64_t drop_total;
uint64_t drop_err;
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 08e6115..1dd793e 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -295,27 +295,46 @@ vmxnet3_dev_clear_queues(struct rte_eth_dev *dev)
}
}
+static int
+vmxnet3_unmap_pkt(uint16_t eop_idx, vmxnet3_tx_queue_t *txq)
+{
+ int completed = 0;
+ struct rte_mbuf *mbuf;
+
+ /* Release cmd_ring descriptor and free mbuf */
+ VMXNET3_ASSERT(txq->cmd_ring.base[eop_idx].txd.eop == 1);
+
+ mbuf = txq->cmd_ring.buf_info[eop_idx].m;
+ if (unlikely(mbuf == NULL))
+ rte_panic("EOP desc does not point to a valid mbuf");
+ else
+ rte_pktmbuf_free(mbuf);
+
+ txq->cmd_ring.buf_info[eop_idx].m = NULL;
+
+ while (txq->cmd_ring.next2comp != eop_idx) {
+ /* no out-of-order completion */
+ VMXNET3_ASSERT(txq->cmd_ring.base[txq->cmd_ring.next2comp].txd.cq == 0);
+ vmxnet3_cmd_ring_adv_next2comp(&txq->cmd_ring);
+ completed++;
+ }
+
+ /* Mark the txd for which tcd was generated as completed */
+ vmxnet3_cmd_ring_adv_next2comp(&txq->cmd_ring);
+
+ return completed + 1;
+}
+
static void
vmxnet3_tq_tx_complete(vmxnet3_tx_queue_t *txq)
{
int completed = 0;
- struct rte_mbuf *mbuf;
vmxnet3_comp_ring_t *comp_ring = &txq->comp_ring;
struct Vmxnet3_TxCompDesc *tcd = (struct Vmxnet3_TxCompDesc *)
(comp_ring->base + comp_ring->next2proc);
while (tcd->gen == comp_ring->gen) {
- /* Release cmd_ring descriptor and free mbuf */
- VMXNET3_ASSERT(txq->cmd_ring.base[tcd->txdIdx].txd.eop == 1);
- while (txq->cmd_ring.next2comp != tcd->txdIdx) {
- mbuf = txq->cmd_ring.buf_info[txq->cmd_ring.next2comp].m;
- txq->cmd_ring.buf_info[txq->cmd_ring.next2comp].m = NULL;
- rte_pktmbuf_free_seg(mbuf);
-
- /* Mark the txd for which tcd was generated as completed */
- vmxnet3_cmd_ring_adv_next2comp(&txq->cmd_ring);
- completed++;
- }
+ completed += vmxnet3_unmap_pkt(tcd->txdIdx, txq);
vmxnet3_comp_ring_adv_next2proc(comp_ring);
tcd = (struct Vmxnet3_TxCompDesc *)(comp_ring->base +
@@ -325,6 +344,13 @@ vmxnet3_tq_tx_complete(vmxnet3_tx_queue_t *txq)
PMD_TX_LOG(DEBUG, "Processed %d tx comps & command descs.", completed);
}
+/* The number of descriptors that are needed for a packet. */
+static unsigned
+txd_estimate(const struct rte_mbuf *m)
+{
+ return m->nb_segs;
+}
+
uint16_t
vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
@@ -351,21 +377,42 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
struct rte_mbuf *txm = tx_pkts[nb_tx];
struct rte_mbuf *m_seg = txm;
int copy_size = 0;
+ bool tso = (txm->ol_flags & PKT_TX_TCP_SEG) != 0;
+ unsigned count = txd_estimate(txm);
+
+ avail = vmxnet3_cmd_ring_desc_avail(&txq->cmd_ring);
+ if (count > avail) {
+ /* Is command ring full? */
+ if (unlikely(avail == 0)) {
+ PMD_TX_LOG(DEBUG, "No free ring descriptors");
+ txq->stats.tx_ring_full++;
+ txq->stats.drop_total += (nb_pkts - nb_tx);
+ break;
+ }
- /* Is this packet execessively fragmented, then drop */
- if (unlikely(txm->nb_segs > VMXNET3_MAX_TXD_PER_PKT)) {
- ++txq->stats.drop_too_many_segs;
- ++txq->stats.drop_total;
+ /* Command ring is not full but cannot handle the
+ * multi-segmented packet. Let's try the next packet
+ * in this case.
+ */
+ PMD_TX_LOG(DEBUG, "Running out of ring descriptors "
+ "(avail %d needed %d)\n", avail, count);
+ txq->stats.drop_total++;
+ if (tso)
+ txq->stats.drop_tso++;
rte_pktmbuf_free(txm);
- ++nb_tx;
+ nb_tx++;
continue;
}
- /* Is command ring full? */
- avail = vmxnet3_cmd_ring_desc_avail(&txq->cmd_ring);
- if (txm->nb_segs > avail) {
- ++txq->stats.tx_ring_full;
- break;
+ /* Drop non-TSO packet that is excessively fragmented */
+ if (unlikely(!tso && count > VMXNET3_MAX_TXD_PER_PKT)) {
+ PMD_TX_LOG(ERROR, "Non-TSO packet cannot occupy more than %d tx "
+ "descriptors. Packet dropped.\n", VMXNET3_MAX_TXD_PER_PKT);
+ txq->stats.drop_too_many_segs++;
+ txq->stats.drop_total++;
+ rte_pktmbuf_free(txm);
+ nb_tx++;
+ continue;
}
if (rte_pktmbuf_pkt_len(txm) <= VMXNET3_HDR_COPY_SIZE) {
@@ -382,11 +429,11 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
do {
/* Remember the transmit buffer for cleanup */
tbi = txq->cmd_ring.buf_info + txq->cmd_ring.next2fill;
- tbi->m = m_seg;
/* NB: the following assumes that VMXNET3 maximum
- transmit buffer size (16K) is greater than
- maximum sizeof mbuf segment size. */
+ * transmit buffer size (16K) is greater than
+ * maximum size of mbuf segment size.
+ */
gdesc = txq->cmd_ring.base + txq->cmd_ring.next2fill;
if (copy_size)
gdesc->txd.addr = rte_cpu_to_le_64(txq->data_ring.basePA +
@@ -405,6 +452,8 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
dw2 = txq->cmd_ring.gen << VMXNET3_TXD_GEN_SHIFT;
} while ((m_seg = m_seg->next) != NULL);
+ /* set the last buf_info for the pkt */
+ tbi->m = txm;
/* Update the EOP descriptor */
gdesc->dword[3] |= VMXNET3_TXD_EOP | VMXNET3_TXD_CQ;
@@ -415,7 +464,17 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
gdesc->txd.tci = txm->vlan_tci;
}
- if (txm->ol_flags & PKT_TX_L4_MASK) {
+ if (tso) {
+ uint16_t mss = txm->tso_segsz;
+
+ VMXNET3_ASSERT(mss > 0);
+
+ gdesc->txd.hlen = txm->l2_len + txm->l3_len + txm->l4_len;
+ gdesc->txd.om = VMXNET3_OM_TSO;
+ gdesc->txd.msscof = mss;
+
+ deferred += (rte_pktmbuf_pkt_len(txm) - gdesc->txd.hlen + mss - 1) / mss;
+ } else if (txm->ol_flags & PKT_TX_L4_MASK) {
gdesc->txd.om = VMXNET3_OM_CSUM;
gdesc->txd.hlen = txm->l2_len + txm->l3_len;
@@ -431,13 +490,15 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
txm->ol_flags & PKT_TX_L4_MASK);
abort();
}
+ deferred++;
} else {
gdesc->txd.hlen = 0;
gdesc->txd.om = VMXNET3_OM_NONE;
gdesc->txd.msscof = 0;
+ deferred++;
}
- txq_ctrl->txNumDeferred = rte_cpu_to_le_32(++deferred);
+ txq_ctrl->txNumDeferred = rte_cpu_to_le_32(deferred);
/* flip the GEN bit on the SOP */
rte_compiler_barrier();
@@ -634,7 +695,7 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (unlikely(rxd->btype != VMXNET3_RXD_BTYPE_HEAD)) {
PMD_RX_LOG(DEBUG,
"Alert : Misbehaving device, incorrect "
- " buffer type used. iPacket dropped.");
+ " buffer type used. Packet dropped.");
rte_pktmbuf_free_seg(rbi->m);
goto rcd_done;
}
--
1.9.1
* [dpdk-dev] [PATCH v2 4/4] vmxnet3: announce device offload capability
2016-01-05 2:28 [dpdk-dev] [PATCH v2 0/4] vmxnet3 TSO and tx cksum offload Yong Wang
` (2 preceding siblings ...)
2016-01-05 2:28 ` [dpdk-dev] [PATCH v2 3/4] vmxnet3: add TSO support Yong Wang
@ 2016-01-05 2:28 ` Yong Wang
2016-01-05 5:22 ` [dpdk-dev] [PATCH v2 0/4] vmxnet3 TSO and tx cksum offload Stephen Hemminger
4 siblings, 0 replies; 12+ messages in thread
From: Yong Wang @ 2016-01-05 2:28 UTC (permalink / raw)
To: dev
Signed-off-by: Yong Wang <yongwang@vmware.com>
---
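[Note: a minimal sketch of how an application might consume the capabilities
announced here, assuming the DPDK 2.x ethdev API; the helper name is
illustrative:

    #include <rte_ethdev.h>

    /* Check whether a port advertises TCP segmentation offload. */
    static int
    port_supports_tso(uint8_t port_id)
    {
        struct rte_eth_dev_info info;

        rte_eth_dev_info_get(port_id, &info);
        return (info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) != 0;
    }
]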
drivers/net/vmxnet3/vmxnet3_ethdev.c | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index c363bf6..8a40127 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -693,7 +693,8 @@ vmxnet3_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
}
static void
-vmxnet3_dev_info_get(__attribute__((unused))struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+vmxnet3_dev_info_get(__attribute__((unused))struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
{
dev_info->max_rx_queues = VMXNET3_MAX_RX_QUEUES;
dev_info->max_tx_queues = VMXNET3_MAX_TX_QUEUES;
@@ -716,6 +717,17 @@ vmxnet3_dev_info_get(__attribute__((unused))struct rte_eth_dev *dev, struct rte_
.nb_min = VMXNET3_DEF_TX_RING_SIZE,
.nb_align = 1,
};
+
+ dev_info->rx_offload_capa =
+ DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_UDP_CKSUM |
+ DEV_RX_OFFLOAD_TCP_CKSUM;
+
+ dev_info->tx_offload_capa =
+ DEV_TX_OFFLOAD_VLAN_INSERT |
+ DEV_TX_OFFLOAD_TCP_CKSUM |
+ DEV_TX_OFFLOAD_UDP_CKSUM |
+ DEV_TX_OFFLOAD_TCP_TSO;
}
/* return 0 means link status changed, -1 means not changed */
@@ -819,7 +831,7 @@ vmxnet3_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vid, int on)
else
VMXNET3_CLEAR_VFTABLE_ENTRY(hw->shadow_vfta, vid);
- /* don't change active filter if in promiscious mode */
+ /* don't change active filter if in promiscuous mode */
if (rxConf->rxMode & VMXNET3_RXM_PROMISC)
return 0;
--
1.9.1
* Re: [dpdk-dev] [PATCH v2 3/4] vmxnet3: add TSO support
2016-01-05 2:28 ` [dpdk-dev] [PATCH v2 3/4] vmxnet3: add TSO support Yong Wang
@ 2016-01-05 5:14 ` Stephen Hemminger
2016-01-05 23:45 ` Yong Wang
2016-01-05 5:15 ` Stephen Hemminger
1 sibling, 1 reply; 12+ messages in thread
From: Stephen Hemminger @ 2016-01-05 5:14 UTC (permalink / raw)
To: Yong Wang; +Cc: dev
On Mon, 4 Jan 2016 18:28:18 -0800
Yong Wang <yongwang@vmware.com> wrote:
> + mbuf = txq->cmd_ring.buf_info[eop_idx].m;
> + if (unlikely(mbuf == NULL))
> + rte_panic("EOP desc does not point to a valid mbuf");
> + else
The unlikely() is really not needed with rte_panic() since it is declared
with the cold attribute, which has the same effect.
The else is unnecessary because rte_panic() never returns.
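Concretely, the suggested simplification would read something like this
(a sketch of the reviewer's point, not the final code):

    mbuf = txq->cmd_ring.buf_info[eop_idx].m;
    if (mbuf == NULL)
        rte_panic("EOP desc does not point to a valid mbuf");
    rte_pktmbuf_free(mbuf); /* only reached when mbuf != NULL */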
* Re: [dpdk-dev] [PATCH v2 3/4] vmxnet3: add TSO support
2016-01-05 2:28 ` [dpdk-dev] [PATCH v2 3/4] vmxnet3: add TSO support Yong Wang
2016-01-05 5:14 ` Stephen Hemminger
@ 2016-01-05 5:15 ` Stephen Hemminger
2016-01-05 23:45 ` Yong Wang
1 sibling, 1 reply; 12+ messages in thread
From: Stephen Hemminger @ 2016-01-05 5:15 UTC (permalink / raw)
To: Yong Wang; +Cc: dev
On Mon, 4 Jan 2016 18:28:18 -0800
Yong Wang <yongwang@vmware.com> wrote:
> +/* The number of descriptors that are needed for a packet. */
> +static unsigned
> +txd_estimate(const struct rte_mbuf *m)
> +{
> + return m->nb_segs;
> +}
> +
A wrapper function only really clarifies if it is hiding some information.
Why not just code this in place?
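In other words, the call site would simply become (sketch):

    /* One tx descriptor is needed per mbuf segment. */
    unsigned count = txm->nb_segs;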
* Re: [dpdk-dev] [PATCH v2 1/4] vmxnet3: restore tx data ring support
2016-01-05 2:28 ` [dpdk-dev] [PATCH v2 1/4] vmxnet3: restore tx data ring support Yong Wang
@ 2016-01-05 5:16 ` Stephen Hemminger
2016-01-05 23:27 ` Yong Wang
0 siblings, 1 reply; 12+ messages in thread
From: Stephen Hemminger @ 2016-01-05 5:16 UTC (permalink / raw)
To: Yong Wang; +Cc: dev
On Mon, 4 Jan 2016 18:28:16 -0800
Yong Wang <yongwang@vmware.com> wrote:
> Tx data ring support was removed in a previous change
> to add multi-seg transmit. This change adds it back.
>
> Fixes: 7ba5de417e3c ("vmxnet3: support multi-segment transmit")
>
> Signed-off-by: Yong Wang <yongwang@vmware.com>
Do you have any numbers to confirm this?
* Re: [dpdk-dev] [PATCH v2 0/4] vmxnet3 TSO and tx cksum offload
2016-01-05 2:28 [dpdk-dev] [PATCH v2 0/4] vmxnet3 TSO and tx cksum offload Yong Wang
` (3 preceding siblings ...)
2016-01-05 2:28 ` [dpdk-dev] [PATCH v2 4/4] vmxnet3: announce device offload capability Yong Wang
@ 2016-01-05 5:22 ` Stephen Hemminger
4 siblings, 0 replies; 12+ messages in thread
From: Stephen Hemminger @ 2016-01-05 5:22 UTC (permalink / raw)
To: Yong Wang; +Cc: dev
On Mon, 4 Jan 2016 18:28:15 -0800
Yong Wang <yongwang@vmware.com> wrote:
> v2:
> * fixed some logging issues when debug option turned on
> * updated the txq_flags check in vmxnet3_dev_tx_queue_setup()
>
> This patchset adds TCP/UDP checksum offload and TSO to the vmxnet3 PMD.
> One of the use cases for these features is to support STT. It also
> restores the tx data ring feature that was removed by a previous
> patch.
>
> Yong Wang (4):
> vmxnet3: restore tx data ring support
> vmxnet3: add tx l4 cksum offload
> vmxnet3: add TSO support
> vmxnet3: announce device offload capability
>
> doc/guides/rel_notes/release_2_3.rst | 11 +++
> drivers/net/vmxnet3/vmxnet3_ethdev.c | 16 +++-
> drivers/net/vmxnet3/vmxnet3_ring.h | 13 ---
> drivers/net/vmxnet3/vmxnet3_rxtx.c | 169 +++++++++++++++++++++++++++--------
> 4 files changed, 158 insertions(+), 51 deletions(-)
>
Overall, this looks good.
I wish STT would die (but unfortunately it won't).
* Re: [dpdk-dev] [PATCH v2 1/4] vmxnet3: restore tx data ring support
2016-01-05 5:16 ` Stephen Hemminger
@ 2016-01-05 23:27 ` Yong Wang
0 siblings, 0 replies; 12+ messages in thread
From: Yong Wang @ 2016-01-05 23:27 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
On 1/4/16, 9:16 PM, "Stephen Hemminger" <stephen@networkplumber.org> wrote:
>On Mon, 4 Jan 2016 18:28:16 -0800
>Yong Wang <yongwang@vmware.com> wrote:
>
>> Tx data ring support was removed in a previous change
>> to add multi-seg transmit. This change adds it back.
>>
>> Fixes: 7ba5de417e3c ("vmxnet3: support multi-segment transmit")
>>
>> Signed-off-by: Yong Wang <yongwang@vmware.com>
>
>Do you have any numbers to confirm this?
From the original commit (2e849373):
Performance results show that this patch significantly
boosts vmxnet3 64B tx performance (pkt rate) for the l2fwd
application on an Ivy Bridge server by >20%, at which
point we start to hit some bottleneck on the rx side.
I also re-did the same test on a different setup (Haswell
processor, ~2.3GHz clock rate) on top of master with
this set of patches and still observed ~17% performance
gains.
* Re: [dpdk-dev] [PATCH v2 3/4] vmxnet3: add TSO support
2016-01-05 5:14 ` Stephen Hemminger
@ 2016-01-05 23:45 ` Yong Wang
0 siblings, 0 replies; 12+ messages in thread
From: Yong Wang @ 2016-01-05 23:45 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
On 1/4/16, 9:14 PM, "Stephen Hemminger" <stephen@networkplumber.org> wrote:
>On Mon, 4 Jan 2016 18:28:18 -0800
>Yong Wang <yongwang@vmware.com> wrote:
>
>> + mbuf = txq->cmd_ring.buf_info[eop_idx].m;
>> + if (unlikely(mbuf == NULL))
>> + rte_panic("EOP desc does not point to a valid mbuf");
>> + else
>
>The unlikely is really not needed with rte_panic since it is declared
>with cold attribute which has same effect.
>
>Else is unnecessary because rte_panic never returns.
Done.
* Re: [dpdk-dev] [PATCH v2 3/4] vmxnet3: add TSO support
2016-01-05 5:15 ` Stephen Hemminger
@ 2016-01-05 23:45 ` Yong Wang
0 siblings, 0 replies; 12+ messages in thread
From: Yong Wang @ 2016-01-05 23:45 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
On 1/4/16, 9:15 PM, "Stephen Hemminger" <stephen@networkplumber.org> wrote:
>On Mon, 4 Jan 2016 18:28:18 -0800
>Yong Wang <yongwang@vmware.com> wrote:
>
>> +/* The number of descriptors that are needed for a packet. */
>> +static unsigned
>> +txd_estimate(const struct rte_mbuf *m)
>> +{
>> + return m->nb_segs;
>> +}
>> +
>
>A wrapper function only really clarifies if it is hiding some information.
>Why not just code this in place?
Sure and removed.